There are (essentially) two types of e-learning systems:
Cognitive tutors – which emphasize deeper problem solving skills
Curriculum sequencing – which don’t diagnose cognitive skills but adapt engagement in various ways
For a good overview, see:
Desmarais, Michel C., and Ryan S. J. d. Baker. “A review of recent advances in learner and skill modeling in intelligent learning environments.” User Modeling and User-Adapted Interaction 22.1-2 (2012): 9-38.
The following is discussed in that article with regard to two approaches to deeper cognitive tutoring.
One of the reasons that cognitive tutors are not more pervasive is that the knowledge engineering required for each problem is significant, and that knowledge engineering demands a lot of technical skill.
We’re trying to bend the cost curve for deeper (i.e., more cognitive) tutoring by simplifying the knowledge engineering process and skill requirements. For example, here’s a sentence translated into under-specified logic:
The under-specified quantifier here is a little sophisticated, but the machine can handle it (i.e., the variable ?x7 refers to the pair of angles, not the individual angles).
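The original figure is not reproduced here, but a toy sketch of the idea (with an invented sentence and invented predicate names, not the actual logical form) shows why it matters that a quantified variable can denote the pair of angles rather than each angle individually:

```python
# Toy sketch (sentence and predicate names invented for illustration).
# Sentence: "Supplementary angles are two angles whose measures sum to
# 180 degrees."
# Under-specified form: forall ?x7 . supplementary(?x7) -> sum(measure(?x7)) = 180
# The variable ?x7 denotes the *pair* of angles (a plural entity), so
# supplementary() and sum() apply to the pair as a whole, not to each
# angle separately.

def holds(pairs):
    """Evaluate the universally quantified form over pairs of angle
    measures that a model claims are supplementary."""
    return all(sum(pair) == 180 for pair in pairs)

claimed_supplementary = [(30, 150), (90, 90), (60, 110)]  # last pair is wrong
print(holds(claimed_supplementary))  # False: (60, 110) sums to 170, not 180
```

If the quantifier instead ranged over individual angles, predicates like "sum to 180" could not even be stated, which is the point of quantifying over the pair.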
We’re hoping that a few hundred (or even thousands) of these sentences with the reasoning infrastructure (akin to Watson’s) will allow deeper tutors to be developed more easily by communities of educators.
Really, though, I was looking for information like the following nice rendering from Andrew Chua:
There’s a lot of hype in e-learning about what it means to “personalize” learning. You hear people say that their engagements (or recommendations for same) take into account individuals’ learning styles. Well…
I was recently pleased to come across this video showing that Quantum has done a nice job of knowledge engineering in the domain of accounting with Wiley.
Most of the success of cognitive tutors has been confined to mathematics, but Quantum has an interesting history of applying knowledge-based cognitive tutoring techniques to chemistry. In this case, they’ve stepped a little away from abstract math into accounting.
It was a little surprising, but then again not, to learn that the folks at Quantum are AI folks dating back to Carnegie Group. They’re based here in Pittsburgh!
Nice work. The question I find myself wondering is whether they’ve changed the cost curve for cognitive tutors….
Thesis: the overwhelming investment in educational technology will have its highest economic and social impact in technology that directly increases the rate of learning and the amount learned, not in technology that merely provides electronic access to abundant educational resources or on-line courses. More emphasis is needed on the cognitive skills and knowledge required to achieve learning objectives and how to assess them.
Look for a forthcoming post on the broader subject of personalized e-learning. In the meantime, here’s a tip on writing good multiple choice questions:
target wrong answers to diagnose misconceptions.
Better approaches to adaptive education model the learning objectives that are assessed by questions such as the following from the American Association for the Advancement of Science, which goes the extra mile to also model the misconceptions underlying incorrect answers:
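As a minimal sketch of the tip above (the question content and field names are my own, not taken from the AAAS item), a tutor can tag each distractor with the misconception it diagnoses, so a wrong answer tells the system *why* the learner erred, not merely that an error occurred:

```python
# Hypothetical multiple-choice item whose wrong answers each encode a
# specific misconception (all content invented for illustration).
question = {
    "stem": "What is 1/2 + 1/3?",
    "choices": {
        "A": {"text": "5/6", "correct": True},
        "B": {"text": "2/5", "misconception": "adds numerators and denominators"},
        "C": {"text": "1/5", "misconception": "adds denominators only"},
        "D": {"text": "1/6", "misconception": "multiplies instead of adding"},
    },
}

def diagnose(question, answer):
    """Return the misconception a wrong answer suggests, or None if correct."""
    choice = question["choices"][answer]
    if choice.get("correct"):
        return None
    return choice["misconception"]

print(diagnose(question, "B"))  # adds numerators and denominators
```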
We are working on educational technology; that is, software that helps people learn. There are many types of such software. We are most immediately focused on two such types.
adaptive educational technology for personalized learning
The term “adaptive” has various interpretations with regard to educational technology. The most common is educational technology that adapts to individuals in any of various ways, which is a form of personalized learning. Personalized learning is often considered the more general term, since it also includes human tutors who adapt how they engage with and educate learners. In the context of educational technology, however, these senses of adaptive and personalized learning are synonymous.
For those of us who enjoy the intersection of machine learning and natural language, including “deep learning”, which is all the rage, here is an interesting paper on generalizing vector space models of words to the broader semantics of English by Jayant Krishnamurthy, a PhD student of Tom Mitchell at Carnegie Mellon University:
Essentially, the paper demonstrates how the features of high-precision lexicalized grammars allow machines to learn the compositional semantics of English. More specifically, the paper demonstrates learning of compositional semantics beyond the capabilities of recurrent neural networks (RNNs). In summary, the paper suggests that deep parsing is better than deep learning for understanding the meaning of natural language.
For more information and a different perspective, I recommend the following paper, too:
Note that the authors use Combinatory Categorial Grammar (CCG) while our work uses Head-driven Phrase Structure Grammar (HPSG), but this is a minor distinction. For example, compare the logical forms in the Groningen Meaning Bank with the logic produced by the Linguist. The former uses CCG to produce lambda calculus while the latter uses HPSG to produce predicate calculus (ignoring vagaries of under-specified representation which are useful for hypothetical reasoning and textual entailment).
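To make the compositional idea concrete, here is a toy sketch (all lexical entries invented, far simpler than either CCG or HPSG semantic construction) in which word meanings are lambda terms and the meaning of a sentence is built by function application following the parse:

```python
# Toy compositional semantics (illustrative only): determiners are
# functions from a noun meaning to a function from a verb meaning to a
# logical-form string, mirroring how lexicalized grammars pair syntactic
# categories with lambda terms.
every = lambda noun: lambda verb: f"forall x. {noun}(x) -> {verb}(x)"
some = lambda noun: lambda verb: f"exists x. {noun}(x) & {verb}(x)"

# "Every dog barks": apply the determiner to the noun, then to the verb,
# in the order dictated by the parse tree.
print(every("dog")("barks"))  # forall x. dog(x) -> barks(x)
print(some("dog")("barks"))   # exists x. dog(x) & barks(x)
```

The point of the papers above is that machines can *learn* such lexical lambda terms and their composition, rather than having them hand-written as here.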
IBM recently posted this video, which suggests the relevance of Watson’s capabilities to medical education. The demo uses cases such as those that occur on the USMLE exam and Watson’s ability to perform evidentiary reasoning given large bodies of text. The “reasoning paths” followed by Watson in presenting explanations or decision-support material use a nice, increasingly popular graphical metaphor.
One intriguing statement in the video concerns Watson “asking itself questions” during the reasoning process. It would be nice to know more about where Watson gets its knowledge about the domain, other than from statistics alone. As I’ve written previously, IBM openly admits that it avoided explicit knowledge in its approach to Jeopardy!
The demo does a particularly nice job with questions for which it is given candidate answers (e.g., multiple-choice questions). I am most impressed, however, with its response on the case beginning at 3 minutes into the video.