Thesis: the overwhelming investment in educational technology will have its highest economic and social impact in technology that directly increases the rate of learning and the amount learned, not in technology that merely provides electronic access to abundant educational resources or on-line courses. More emphasis is needed on the cognitive skills and knowledge required to achieve learning objectives and how to assess them.
Look for a forthcoming post on the broader subject of personalized e-learning. In the meantime, here’s a tip on writing good multiple choice questions:
- target wrong answers to diagnose misconceptions.
Better approaches to adaptive education model the learning objectives that are assessed by questions such as the following from the American Association for the Advancement of Science, which goes the extra mile by also modeling the misconceptions underlying incorrect answers:
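The AAAS item itself is not reproduced here, but the underlying idea — attaching a diagnosed misconception to each distractor — can be sketched with a hypothetical item (the question text, options, and misconception labels below are invented for illustration):

```python
# A hypothetical multiple-choice item in which each wrong answer (distractor)
# is tagged with the misconception it is designed to diagnose.
item = {
    "objective": "Explain why the seasons change",
    "stem": "Why is it warmer in summer than in winter?",
    "choices": {
        "A": {"text": "Earth's axis tilts the hemisphere toward the sun",
              "correct": True},
        "B": {"text": "Earth is closer to the sun in summer",
              "correct": False,
              "misconception": "distance-causes-seasons"},
        "C": {"text": "The sun burns hotter in summer",
              "correct": False,
              "misconception": "sun-output-varies-seasonally"},
    },
}

def diagnose(item, selected):
    """Return the misconception suggested by a learner's wrong answer, if any."""
    choice = item["choices"][selected]
    return None if choice["correct"] else choice.get("misconception")
```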
We are working on educational technology; that is, technology to assist in education. More specifically, we are developing software that helps people learn. There are many types of such software, and we are most immediately focused on two of them:
- adaptive educational technology for personalized learning
- cognitive tutors
The term “adaptive” has various interpretations with regard to educational technology. The most common is technology that adapts to individual learners in any of various ways, which is a form of personalized learning. Personalized learning is often the more general term, since it also covers human tutors who adapt how they engage with and educate learners; in the context of educational technology, however, adaptive and personalized learning are synonymous. Continue reading →
For those of us who enjoy the intersection of machine learning and natural language, including “deep learning”, which is all the rage, here is an interesting paper on generalizing vector space models of words to the broader semantics of English, by Jayant Krishnamurthy, a PhD student of Tom Mitchell at Carnegie Mellon University:
- Krishnamurthy, Jayant, and Tom M. Mitchell. “Vector Space Semantic Parsing: A Framework for Compositional Vector Space Models.” Proceedings of the ACL 2013 Workshop on Continuous Vector Space Models and their Compositionality, 2013.
Essentially, the paper demonstrates how the features of high-precision lexicalized grammars allow machines to learn the compositional semantics of English. More specifically, it demonstrates learning of compositional semantics beyond the capabilities of recurrent neural networks (RNNs). In summary, the paper suggests that deep parsing is better than deep learning for understanding the meaning of natural language.
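For intuition only — this is not the model in the paper — here is a toy sketch of syntax-driven vector composition, where the parse determines which (hypothetical) composition function combines each pair of constituents:

```python
# Toy illustration of syntax-driven vector composition: the parse decides how
# constituents combine, e.g. an adjective acts as a matrix applied to the noun
# vector it modifies, and subject and predicate combine with another operation.
import numpy as np

d = 4                                    # tiny embedding size, for illustration
rng = np.random.default_rng(0)

word_vec = {"dog": rng.normal(size=d), "barks": rng.normal(size=d)}
adj_matrix = {"big": rng.normal(size=(d, d))}    # adjectives as matrices

def compose(tree):
    """Compose a vector for a (category, children) parse tree."""
    label, children = tree
    if label == "word":
        return word_vec[children]
    if label == "adj+noun":                       # e.g. "big dog"
        adj, noun = children
        return adj_matrix[adj] @ compose(noun)
    if label == "np+vp":                          # e.g. "[big dog] barks"
        subject, predicate = children
        return np.tanh(compose(subject) + compose(predicate))
    raise ValueError(label)

sentence = ("np+vp", (("adj+noun", ("big", ("word", "dog"))), ("word", "barks")))
print(compose(sentence))
```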
For more information and a different perspective, I recommend the following paper, too:
Note that the authors use Combinatory Categorial Grammar (CCG) while our work uses Head-Driven Phrase Structure Grammar (HPSG), but this is a minor distinction. For example, compare the logical forms in the Groningen Meaning Bank with the logic produced by the Linguist: the former uses CCG to produce lambda calculus, while the latter uses HPSG to produce predicate calculus (ignoring the vagaries of underspecified representations, which are useful for hypothetical reasoning and textual entailment).
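To make the comparison concrete, here is a simplified, hypothetical illustration — not drawn from either the Groningen Meaning Bank or the Linguist — of how the two styles of logical form relate (quantifier-scope ambiguity ignored):

```
Sentence (hypothetical): "Every physician treats a patient."

Lambda calculus, built compositionally (CCG-style):
  (λP. λQ. ∀x (P(x) → Q(x)))  applied to  physician  and  λz. ∃y (patient(y) ∧ treats(z, y))
  ⇒ ∀x (physician(x) → ∃y (patient(y) ∧ treats(x, y)))

Predicate calculus (HPSG-style output):
  ∀x (physician(x) → ∃y (patient(y) ∧ treats(x, y)))
```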
IBM recently posted this video, which suggests the relevance of Watson’s capabilities to medical education. The demo uses cases such as those that occur on the USMLE exam and Watson’s ability to perform evidentiary reasoning given large bodies of text. The “reasoning paths” followed by Watson in presenting explanations or decision support material use a nice, increasingly popular graphical metaphor.
One intriguing statement in the video concerns Watson “asking itself questions” during the reasoning process. It would be nice to know more about where Watson gets its knowledge about the domain, other than from statistics alone. As I’ve written previously, IBM openly admits that it avoided explicit knowledge in its approach to Jeopardy!
The demo does a particularly nice job with questions in which it is given candidate answers (e.g., multiple-choice questions). I am most impressed, however, with its response to the case beginning at 3 minutes into the video.
Knewton is an interesting company providing a recommendation service for adaptive learning applications. In a recent post, Jonathon Goldman describes an algorithmic approach to generating questions. The approach focuses on improving the manual authoring of test questions (known in the educational realm as “assessment items”). It references work at Microsoft Research on the problem of synthesizing questions for an algebra learning game.
We agree that more automated generation of questions can enrich learning significantly, as has been demonstrated in the Inquire prototype. For information on a better, more broadly applicable approach, see the slides beginning around page 16 in Peter Clark’s invited talk.
What we think is most promising, however, is understanding the reasoning and cognitive skills required to answer questions (i.e., Deep QA). The most automated way to support this is with machine understanding of the content sufficient to answer the questions by proving answers (i.e., multiple choices) right or wrong, as we discuss in this post and this presentation.
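A minimal sketch of that idea, under the (hypothetical) assumption that the content has already been understood into facts a reasoner can consult — the `entails` function and the facts below are stand-ins invented for illustration:

```python
# Sketch: judge each multiple-choice answer by trying to prove it right or
# wrong against facts extracted from the understood content.
facts = {
    ("mitosis", "produces", "two identical daughter cells"),
    ("not", "mitosis", "produces", "four haploid cells"),
}

def entails(facts, claim):
    return claim in facts            # placeholder for real logical inference

def judge(choices):
    """Label each choice as proven right, proven wrong, or undetermined."""
    results = {}
    for label, claim in choices.items():
        if entails(facts, claim):
            results[label] = "proven right"
        elif entails(facts, ("not",) + claim):
            results[label] = "proven wrong"
        else:
            results[label] = "undetermined"
    return results

print(judge({
    "A": ("mitosis", "produces", "two identical daughter cells"),
    "B": ("mitosis", "produces", "four haploid cells"),
}))
# -> {'A': 'proven right', 'B': 'proven wrong'}
```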
Here is a graphic on how various reasoning technologies fit the practical requirements for reasoning discussed below:
This proved surprisingly controversial during correspondence with colleagues from the Vulcan work on SILK and its evolution at http://www.coherentknowledge.com.
The requirements that motivated this were the following: Continue reading →
Oren Etzioni is a marvelous choice to lead the Allen Institute for AI (aka AI2). The NL/ML path is the right path for scaling up the deep knowledge that Paul Allen’s vision of a Digital Aristotle requires. You can read more about it below, and here’s more background on the change in direction and on some evidence that the path holds great promise.
Benjamin Grosof, co-founder of Coherent Knowledge Systems, is also involved with developing a standard ontology for the financial services industry (i.e., FIBO). In the course of working on FIBO, he is developing a demonstration of defeasible logic concerning Regulation W of the Federal Reserve Act. Regulation W specifies which transactions involving banks and their affiliates are prohibited under Section 23A of the Act. Various documents involved in this work are being captured within the Linguist™ platform. This is a brief note on how those documents can be imported into the platform for curation into formal semantics and logic (as Benjamin and Coherent are doing). Continue reading →
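To give a flavor of what defeasible logic does here, the following is a toy sketch — the rules and transaction are invented and bear no relation to the actual text of Regulation W or to the encoding Benjamin is developing:

```python
# Toy defeasible reasoning: a default conclusion can be defeated by a
# higher-priority rule, which can itself be defeated by an exemption.
# The rules below are invented for illustration only.
rules = [
    # (priority, condition, conclusion)
    (1, lambda t: True, "permitted"),                                   # default
    (2, lambda t: t["affiliate"] and t["type"] == "asset purchase",
        "prohibited"),                                                  # 23A-style prohibition
    (3, lambda t: t["exempt"], "permitted"),                            # exemption overrides
]

def conclude(transaction):
    """The highest-priority applicable rule wins."""
    applicable = [(priority, conclusion)
                  for priority, condition, conclusion in rules
                  if condition(transaction)]
    return max(applicable)[1]

txn = {"affiliate": True, "type": "asset purchase", "exempt": False}
print(conclude(txn))   # -> "prohibited"
```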
As I mentioned in this post, we’re having fun layering questions and answers with explanations on top of electronic textbook content.
The basic idea is to couple a graph structure of questions, answers, and explanations into the text using semantics. The trick is to do that well and automatically enough that we can deliver effective adaptive learning support. This is analogous to the knowledge graph that users of Knewton’s API create for their content. The difference is that we derive the graph from the content, including the “assessment items” (that’s what educators call questions, among other things). Essentially, we parse the content, including the assessment items (i.e., the questions and each of their answers and explanations). The result of this parsing is, as we’ve described elsewhere, a precise lexical, syntactic, semantic, and logical understanding of each sentence in the content. But we don’t have to go nearly that far to exceed the state of the art here. Continue reading →
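As a rough sketch of the kind of graph this yields — the node kinds and field names here are invented for illustration, not our actual schema — content and assessment items can be linked through the concepts their parses mention:

```python
# Sketch: sentences from the text and assessment items become nodes in a graph,
# related through the concepts their parses mention.
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                      # "sentence", "question", "answer", "explanation"
    text: str
    concepts: set = field(default_factory=set)   # from parsing, e.g. {"mitosis"}

@dataclass
class Graph:
    nodes: list = field(default_factory=list)

    def related(self, node):
        """Content related to a node via shared concepts."""
        return [n for n in self.nodes
                if n is not node and n.concepts & node.concepts]

g = Graph()
g.nodes += [
    Node("sentence", "Mitosis produces two identical daughter cells.", {"mitosis"}),
    Node("question", "How many daughter cells result from mitosis?", {"mitosis"}),
    Node("answer", "Two.", {"mitosis"}),
]
print([n.kind for n in g.related(g.nodes[1])])   # -> ['sentence', 'answer']
```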