A pedagogical model by any other name

Educational technology for personalized or adaptive learning is based on various types of pedagogical models.

Google provides the following info-box and link concerning pedagogical models:

Many companies have different names for their models, including:

  1. Knewton: Knowledge Graph
    (partially described in How do Knewton recommendations work in Pearson MyLab)
  2. Khan Academy: Knowledge Map
  3. Declara: Cognitive Graph

The term “knowledge graph” is particularly common, including Google’s (link).

There are some serious problems with these names and some varying limitations in their pedagogical efficacy.  There is nothing cognitive about Declara’s graph, for example.  Like the others, it may organize “concepts” (i.e., nodes) by certain relationships (i.e., links) that pertain to learning, but none of these graphs purports to represent knowledge other than superficially, for limited purposes.

  • Each of Google’s, Knewton’s, and Declara’s graphs is far from sufficient to represent the knowledge and cognitive skills expected of a masterful student.
  • Each of them is also far from sufficient to represent the knowledge of learning objectives and instructional techniques expected of a proficient instructor.

Nonetheless, such graphs are critical to active e-learning technology, even if they fall short of our ambitious hope to dramatically improve learning and learning outcomes.

The most critical ingredients of these so-called “knowledge” or “cognitive” graphs include the following:

  1. learning objectives
  2. educational resources, including instructional and formative or summative assessment items
  3. relationships between educational resources and learning objectives (i.e., what they instruct and/or assess)
  4. relationships between learning objectives (e.g., dependencies such as prerequisites)
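A minimal sketch of these four ingredients as a data model (all names here are illustrative, not any vendor’s actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class LearningObjective:
    label: str
    prerequisites: list = field(default_factory=list)  # ingredient 4: objective-to-objective dependencies

@dataclass
class Resource:
    title: str
    kind: str                                          # e.g., "instruction", "formative", "summative"
    objectives: list = field(default_factory=list)     # ingredient 3: alignment to objectives

# Ingredient 1: learning objectives
dividing = LearningObjective("dividing whole numbers")
fractions = LearningObjective("basics of fractions")
fractions.prerequisites.append(dividing)               # ingredient 4 in action

# Ingredient 2: educational resources aligned with objectives
video = Resource("Intro to fractions", "instruction", [fractions])
quiz = Resource("Fractions quiz", "formative", [fractions])
```

Everything else in a pedagogical model hangs off these four relationships, which is why the curation interfaces below focus on them.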

The following user interface supports curation of the alignment of educational resources and learning objectives, for example:

And the following supports curation of the dependencies between learning objectives (as in a prerequisite graph):

Here is a presentation of similar dependencies from Khan Academy:

And here is a depiction of such dependencies in Pearson’s use of Knewton within MyLab (cited above):

Of course there is much more that belongs in a pedagogical model, but let’s look at the fundamentals and their limitations before diving too deeply.

Prerequisite Graphs

The link on Knewton recommendations cited above includes a graphic showing some of the learning objectives and their dependencies concerning arithmetic.  The labels of these learning objectives include:

  • reading & writing whole numbers
  • adding & subtracting whole numbers
  • multiplying whole numbers
  • comparing & rounding whole numbers
  • dividing whole numbers

And more:

  • basics of fractions
  • exponents & roots
  • basics of mixed numbers
  • factors of whole numbers
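For illustration, here is a plausible fragment of such a prerequisite graph (the edges below are assumed, not copied from Knewton’s actual graphic), together with a topological sort that recovers a valid learning sequence:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Assumed prerequisite edges: each key maps to the objectives it depends on.
prereqs = {
    "adding & subtracting whole numbers": {"reading & writing whole numbers"},
    "multiplying whole numbers":          {"adding & subtracting whole numbers"},
    "dividing whole numbers":             {"multiplying whole numbers"},
    "basics of fractions":                {"dividing whole numbers"},
    "exponents & roots":                  {"multiplying whole numbers"},
}

# Any valid order places each objective after all of its prerequisites.
order = list(TopologicalSorter(prereqs).static_order())
```

This is the extent of what such a graph can do on its own: sequencing.  What it cannot do, as discussed next, is represent what any of these labels mean.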

But:

  • There is nothing in the “knowledge graph” that represents the semantics (i.e., meaning) of “number”, “fraction”, “exponent”, or “root”.
  • There is nothing in the “knowledge graph” that represents what distinguishes whole from mixed numbers (or even that fractions are numbers).
  • There is nothing in the “knowledge graph” that represents what it means to “read”, “write”, “multiply”, “compare”, “round”, or “divide”.

Graphs, Ontology, Logic, and Knowledge

Because systems with knowledge or cognitive graphs lack such representation, they suffer from several problems, including the following, which are of immediate concern:

  1. dependencies between learning objectives must be explicitly established by people, thereby increasing the time, effort, and cost of developing active learning solutions, or
  2. dependencies that are not explicitly established must be induced from evidence, which requires exponentially more data as the number of learning objectives increases, yielding low initial and asymptotic efficacy compared with more intelligent, knowledge-based approaches

For example, more advanced semantic technology standards (e.g., OWL and/or SBVR or defeasible modal logic) can represent that digits are integers, that integers are numbers, and that one integer divided by another is a fraction.  Consequently, a knowledge-based system can infer that learning objectives involving fractions depend on some learning objectives involving integers.  Such deductions can inform machine learning such that better dependencies are induced (initially and asymptotically) and can increase the return on investment of human intelligence in a cognitive computing approach to pedagogical modeling.
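A toy sketch (plain Python, not OWL or SBVR) of how a subclass taxonomy plus definitional relationships let such dependencies be inferred rather than hand-asserted; all names are illustrative:

```python
# Toy taxonomy: digit is-a integer is-a number.
subclass_of = {"digit": "integer", "integer": "number"}

# A fraction is defined in terms of integers (numerator and denominator).
defined_in_terms_of = {"fraction": {"integer"}}

def ancestors(concept):
    """All superclasses of a concept, walking up the taxonomy."""
    out = set()
    while concept in subclass_of:
        concept = subclass_of[concept]
        out.add(concept)
    return out

def inferred_prereq(lo_concept, other_concept):
    """Infer that objectives about lo_concept depend on objectives about
    other_concept when lo_concept is defined in terms of it (or of one of
    its subclasses)."""
    terms = defined_in_terms_of.get(lo_concept, set())
    return other_concept in terms or any(
        other_concept in ancestors(t) for t in terms)
```

With even this much knowledge, the system infers the fractions-depend-on-integers edge that would otherwise have to be curated by hand or induced from large amounts of learner data.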

As another example, consider that adaptive educational technology either knows or does not know that multiplying one integer by another is equivalent to adding one of them to itself the other number of times.  Similarly, such systems either know or do not know how multiplication and division are related.  How effectively can they improve learning if they do not know?  How much more work is required to get such systems to achieve acceptable efficacy without such knowledge?  Would you want your child instructed by someone who was incapable of understanding and applying such knowledge?
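To make the point concrete, a minimal sketch of the repeated-addition equivalence described above:

```python
def multiply_by_repeated_addition(a, n):
    """a times n, computed as a added to itself n times (n a non-negative integer)."""
    total = 0
    for _ in range(n):
        total += a
    return total

# The equivalence a system either knows or does not know:
assert multiply_by_repeated_addition(7, 4) == 7 * 4

# And the inverse relationship between multiplication and division (exact case):
assert (7 * 4) // 4 == 7
```

The code is trivial; the pedagogical question is whether the system’s model represents these equivalences at all.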

Semantics of Concepts, Cognitive Skills, and Learning Objectives

Consider again the labels of the nodes in Knewton/Pearson’s prerequisite graph listed above.  Notice that:

  • the first group of labels are all sentences while the second group are all noun phrases
  • the first group (of sentences) are cognitive skills more than they are learning objectives
    • i.e., they don’t specify a degree of proficiency, although one may be implicit with regard to the educational resources aligned with those sentences
  • the second group (of noun phrases) refer to concepts (or, implicitly, sentences that begin with “understanding”)
  • the labels in the second group that begin with “basics” are ambiguous: they could name learning objectives or merely refer to concepts

For adaptive educational technology that does not “know” what these labels mean nor anything about the meanings of the words that occur in them, the issues noted above may not seem important but they clearly limit the utility and efficacy of such graphs.

Taking a cognitive computing approach, human intelligence helps artificial intelligence understand these sentences and phrases deeply and precisely.  A cognitive computing approach also results in artificial intelligence that deeply and precisely understands many additional sentences of knowledge that don’t fit into such graphs.

For example, the system comes to know that reading and writing whole numbers is a conjunction of finer-grained learning objectives and that, in general, reading is a prerequisite to writing.  It comes to know that whole numbers are non-negative integers, which are typically positive.  It comes to know that subtraction is the inverse of addition (which implies some dependency relationship between addition and subtraction).  In order to understand exponents, the system is told and learns about raising numbers to powers and about what it means to square a number.  The system is told and learns about roots and how they relate to exponents and powers, including how square roots relate to squaring numbers.  The system is told that a mixed number is an integer plus a proper fraction, together corresponding to an improper fraction.
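As a small illustration of the mixed-number fact above (the function name is ours, invented for illustration):

```python
def to_mixed_number(numerator, denominator):
    """Express an improper fraction as an integer part plus a proper
    fraction with the same denominator."""
    integer_part = numerator // denominator
    remainder = numerator % denominator
    return integer_part, (remainder, denominator)

# 7/3 corresponds to the mixed number 2 and 1/3:
mixed = to_mixed_number(7, 3)
```

A system that represents this correspondence can infer, for instance, that objectives about mixed numbers depend on objectives about both integers and fractions.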

Adaptive educational technology either understands such things or it does not.  If it does not, human beings will have to work much harder to achieve a system with a given level of efficacy and subsequent machine learning will take a longer time to reach a lower asymptote of efficacy.

Thiel, Creativity, and Jobs

A colleague at knowmatters.com sent me an interview with Peter Thiel as I was reading about a battery-less transceiver, where I found a partial quote from Steve Jobs.  The original quote, which can be found at the Wall Street Journal, goes on to further agree with Mr. Thiel:

Creativity is just connecting things. When you ask creative people how they did something, they feel a little guilty because they didn’t really do it, they just saw something. It seemed obvious to them after a while. That’s because they were able to connect experiences they’ve had and synthesize new things. And the reason they were able to do that was that they’ve had more experiences or they have thought more about their experiences than other people. Unfortunately, that’s too rare a commodity. A lot of people in our industry haven’t had very diverse experiences. So they don’t have enough dots to connect, and they end up with very linear solutions without a broad perspective on the problem. The broader one’s understanding of the human experience, the better design we will have.

As it turns out, my colleagues at Knowmatters worked extensively with Jobs.  I only pursued the quote further because of the stories they have told.

The interview/article states:

Mr. Thiel spends much of his time agitating to change how we educate people and create economic and technological growth. In his book “Zero to One,” written with Blake Masters, Mr. Thiel argues that society has become too rule-oriented, and people need to devise ways to think differently, and find like-minded individuals to realize goals.

And quotes him:

We’ve built a country in which people are tracked, from kindergarten to graduate school, and everyone who is “successful” acts the same way. That is overrated. It distorts things and hurts growth.

This is the “one size fits all” approach to standardized education.  Personalized e-learning promises to disrupt this.  The resulting creativity and growth is aspirational for Knowmatters.

Making Cognitive Tutors Easier

There are (essentially) two types of e-learning systems:

  1. Cognitive tutors – which emphasize deeper problem solving skills
  2. Curriculum sequencing – which doesn’t diagnose cognitive skills but adapts engagement in various ways

For a good overview, see:

  • Desmarais, Michel C., and Ryan S. J. d. Baker. “A review of recent advances in learner and skill modeling in intelligent learning environments.” User Modeling and User-Adapted Interaction 22.1–2 (2012): 9–38.

The following is discussed in that article with regard to two approaches to deeper cognitive tutoring.

One of the reasons that cognitive tutors are not more pervasive is that the knowledge engineering required for each problem is significant.  And that knowledge engineering involves a lot of technical skill.

We’re trying to bend the cost curve for deeper (i.e., more cognitive) tutoring by simplifying the knowledge engineering process and skill requirements.  For example, here’s a sentence translated into under-specified logic:

The underspecified quantifier here is a little sophisticated, but the machine can handle it (i.e., the variable ?x7 refers to the pair of angles, not the individual angles).
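Since the translated sentence itself isn’t reproduced here, the following stand-in (invented purely for illustration) shows the idea of a variable like ?x7 binding a pair of angles rather than each angle individually, as in “vertical angles are congruent”:

```python
# Hypothetical angle measures, in degrees; the figure is invented.
angles = {"a": 50, "b": 130, "c": 50, "d": 130}

# Pairs of vertical angles; the quantified variable ranges over these
# pairs, not over the four individual angles.
vertical_pairs = {frozenset({"a", "c"}), frozenset({"b", "d"})}

def congruent(p, q):
    """Two angles are congruent when their measures are equal."""
    return angles[p] == angles[q]

# "For every pair ?x7 of vertical angles, the angles in ?x7 are congruent."
all_ok = all(congruent(*tuple(pair)) for pair in vertical_pairs)
```

The point is only the shape of the quantification; a logic-based tutor would reason over such statements rather than evaluate them against hard-coded data.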

We’re hoping that a few hundred (or even thousands) of these sentences with the reasoning infrastructure (akin to Watson’s) will allow deeper tutors to be developed more easily by communities of educators.

Personalized e-Learning Styles

Although a bit of a stretch, this post was inspired by the following blog post, which talks about the Facebook API in terms of learning styles. If you’re interested in such things, you are probably also aware of learning record stores and things like the Tin Can API.  You need these things if you’re supporting e-learning across devices, for example…

Really, though, I was looking for information like the following nice rendering from Andrew Chua:

There’s a lot of hype in e-learning about what it means to “personalize” learning.  You hear people say that their engagements (or recommendations for same) take into account individuals’ learning styles.  Well…

Continue reading “Personalized e-Learning Styles”

Wiley PLUS and Quantum

I was recently pleased to come across this video showing that Quantum has done a nice job of knowledge engineering in the domain of accounting with Wiley.

Most of the success of cognitive tutors has been confined to mathematics, but Quantum has an interesting history of applying knowledge-based cognitive tutoring techniques to chemistry.  In this case, they’ve stepped a little away from abstract math into accounting.

It was a little surprising, but then not, to learn that the folks at Quantum are AI folks dating back to Carnegie Group.  They’re based here in Pittsburgh!

Nice work.  I find myself wondering whether they’ve changed the cost curve for cognitive tutors…

Authoring questions for education and assessment

Thesis: the overwhelming investment in educational technology will have its highest economic and social impact in technology that directly increases the rate of learning and the amount learned, not in technology that merely provides electronic access to abundant educational resources or on-line courses.  More emphasis is needed on the cognitive skills and knowledge required to achieve learning objectives and how to assess them.

Continue reading “Authoring questions for education and assessment”

Cognitive modeling of assessment items

Look for a forthcoming post on the broader subject of personalized e-learning.  In the meantime, here’s a tip on writing good multiple choice questions:

  • target wrong answers to diagnose misconceptions.

Better approaches to adaptive education model the learning objectives assessed by questions such as the following, from the American Association for the Advancement of Science, which goes the extra mile to also model the misconceptions underlying incorrect answers:
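To make the tip concrete, here is a sketch of an item whose distractors are tagged with the misconceptions they diagnose (the item and tags are invented for illustration, not taken from the AAAS bank):

```python
# Hypothetical multiple-choice item; each wrong answer targets a misconception.
item = {
    "stem": "What is 1/2 + 1/3?",
    "choices": {
        "A": ("5/6", None),                                # correct
        "B": ("2/5", "adds numerators and denominators"),
        "C": ("1/5", "adds denominators only"),
        "D": ("1/6", "multiplies instead of adding"),
    },
}

def diagnose(choice):
    """Return the misconception suggested by a learner's answer, or None
    if the answer is correct."""
    _, misconception = item["choices"][choice]
    return misconception
```

With items modeled this way, a wrong answer is not merely “incorrect”; it is evidence about which misconception to remediate.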

Electronically enhanced learning

We are working on educational technology.  That is, technology to assist in education.  More specifically, we are developing software that helps people learn.  There are many types of such software.  We are most immediately focused on two such types.

  1. adaptive educational technology for personalized learning
  2. cognitive tutors

The term “adaptive” with regard to educational technology has various interpretations.  Educational technology that adapts to individuals in any of various ways is the most common interpretation of adaptive educational technology.  This interpretation is a form of personalized learning.  Personalized learning is often considered a more general term which includes human tutors who adapt how they engage with and educate learners.  In the context of educational technology, these senses of adaptive and personalized learning are synonymous. Continue reading “Electronically enhanced learning”

IBM Watson in medical education

IBM recently posted this video which suggests the relevance of Watson’s capabilities to medical education. The demo uses cases such as those that occur on the USMLE exam and Watson’s ability to perform evidentiary reasoning given large bodies of text. The “reasoning paths” followed by Watson in presenting explanations or decision-support material use a nice, increasingly popular graphical metaphor.

One intriguing statement in the video concerns Watson “asking itself questions” during the reasoning process. It would be nice to know more about where Watson gets its knowledge about the domain, other than from statistics alone. As I’ve written previously, IBM openly admits that it avoided explicit knowledge in its approach to Jeopardy!

The demo does a nice job with questions in which it is given answers (e.g., multiple choice questions), in particular. I am most impressed, however, with its response on the case beginning at 3 minutes into the video.

Suggested questions: Inquire vs. Knewton

Knewton is an interesting company providing a recommendation service for adaptive learning applications.  In a recent post, Jonathon Goldman describes an algorithmic approach to generating questions.  The approach focuses on improving the manual authoring of test questions (known in the educational realm as “assessment items”).  It references work at Microsoft Research on the problem of synthesizing questions for an algebra learning game.

We agree that more automated generation of questions can enrich learning significantly, as has been demonstrated in the Inquire prototype.  For information on a better, more broadly applicable approach, see the slides beginning around page 16 in Peter Clark’s invited talk.

What we think is most promising, however, is understanding the reasoning and cognitive skill required to answer questions (i.e., Deep QA).  The most automated way to support this is with machine understanding of the content sufficient to answer the questions by proving answers (i.e., multiple choices) right or wrong, as we discuss in this post and this presentation.