Going on 5 years ago, I wrote part 1. Now, finally, it’s time for the rest of the story.
We are working on educational technology, that is, technology to assist in education. More specifically, we are developing software that helps people learn. Of the many types of such software, we are most immediately focused on two:
- adaptive educational technology for personalized learning
- cognitive tutors
The term “adaptive” has various interpretations with regard to educational technology. Most commonly it refers to technology that adapts to individual learners in any of various ways, which makes it a form of personalized learning. Personalized learning is often considered the more general term, since it also covers human tutors who adapt how they engage with and educate learners. In the context of educational technology, however, these senses of adaptive and personalized learning are synonymous. Continue reading “Electronically enhanced learning”
Knewton is an interesting company providing a recommendation service for adaptive learning applications. In a recent post, Jonathon Goldman describes an algorithmic approach to generating questions. The approach focuses on improving the manual authoring of test questions (known in the educational realm as “assessment items”). It references work at Microsoft Research on the problem of synthesizing questions for an algebra learning game.
We agree that more automated generation of questions can enrich learning significantly, as has been demonstrated in the Inquire prototype. For information on a better, more broadly applicable approach, see the slides beginning around page 16 in Peter Clark’s invited talk.
What we think is most promising, however, is understanding the reasoning and cognitive skill required to answer questions (i.e., Deep QA). The most automated way to support this is with machine understanding of the content sufficient to answer the questions by proving answers (i.e., multiple choices) right or wrong, as we discuss in this post and this presentation.
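To make the idea concrete, here is a minimal sketch of answering a multiple-choice item by trying to prove each choice right or wrong against facts the machine has understood. The `prove` interface and the facts are illustrative assumptions, not the actual Deep QA machinery discussed above.

```python
# Toy sketch: answer a multiple-choice assessment item by attempting to
# prove each choice against machine-understood content. The prove()
# interface and the facts below are illustrative assumptions.

def prove(statement, known_facts):
    """Trivial stand-in for a prover: a statement is proven iff it is a known fact."""
    return statement in known_facts

def answer(choices, known_facts):
    """Return the choices that can be proven right."""
    return [choice for choice in choices if prove(choice, known_facts)]

# One understood fact and two candidate answers.
known_facts = {"water boils at 100 C at sea level"}
choices = [
    "water boils at 90 C at sea level",
    "water boils at 100 C at sea level",
]
print(answer(choices, known_facts))  # ['water boils at 100 C at sea level']
```

A real prover would of course reason over axioms rather than look up literal strings, but the shape of the loop is the same: each choice is a candidate conclusion to be proven right or wrong.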
As I mentioned in this post, we’re having fun layering questions and answers with explanations on top of electronic textbook content.
The basic idea is to couple a graph structure of questions, answers, and explanations to the text using semantics. The trick is to do that well enough, and automatically enough, that we can deliver effective adaptive learning support. This is analogous to the knowledge graph that users of Knewton‘s API create for their content. The difference is that we derive the graph from the content itself, including the assessment items (i.e., the questions and each of their answers and explanations). Essentially, we parse the content, and the result of that parsing is, as we’ve described elsewhere, a precise lexical, syntactic, semantic, and logical understanding of each sentence. But we don’t have to go nearly that far to exceed the state of the art here. Continue reading “Automatic Knowledge Graphs for Assessment Items and Learning Objects”
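The coupling described above can be sketched very simply: nodes for concepts and assessment items, with edges derived from terms that parsing extracts from each. All names here are illustrative, not from any actual Knewton or Automata API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a knowledge graph linking assessment items to
# textbook content. Node kinds, edge labels, and the term-overlap
# heuristic are illustrative assumptions only.

@dataclass
class Node:
    kind: str   # "concept", "question", "answer", or "explanation"
    text: str
    terms: set = field(default_factory=set)  # terms extracted by parsing

class KnowledgeGraph:
    def __init__(self):
        self.nodes = []
        self.edges = []  # (source_index, relation, target_index)

    def add(self, node):
        self.nodes.append(node)
        return len(self.nodes) - 1

    def link(self, src, relation, dst):
        self.edges.append((src, relation, dst))

    def relate_by_terms(self):
        """Link each assessment item to every concept sharing a parsed term."""
        for i, item in enumerate(self.nodes):
            if item.kind == "concept":
                continue
            for j, concept in enumerate(self.nodes):
                if concept.kind == "concept" and item.terms & concept.terms:
                    self.link(i, "assesses", j)

# Usage: one concept parsed from the text, one question parsed into terms.
g = KnowledgeGraph()
c = g.add(Node("concept", "Photosynthesis converts light energy",
               {"photosynthesis", "light", "energy"}))
q = g.add(Node("question", "What process converts light energy in plants?",
               {"light", "energy", "process"}))
g.relate_by_terms()
print(g.edges)  # -> [(1, 'assesses', 0)]
```

A production system would use the full semantic parse rather than bag-of-terms overlap, but even this crude linkage shows how a graph can be obtained from the content rather than authored by hand.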
Over the last two years, machines have demonstrated their ability to read, listen, and understand English well enough to beat the best at Jeopardy!, answer questions via iPhone, and earn college credit on advanced placement exams. Today, Google, Microsoft, and others are rushing to respond to IBM and Apple with ever more competent artificially intelligent systems that answer questions and support decisions.
What do such developments suggest for the future of education? Continue reading “Artificially Intelligent Educational Technology”
This US News & World Report opinion is on the right track about the macro trend towards increasingly technology-enabled education:
But it also sounds like what I heard during the dot-com boom of the 1990s when a lot of companies—including Blackboard—began using technology to “disrupt” the education status quo. Since then we’ve made some important progress, but in many ways the classroom still looks the same as it did 100 years ago. So what’s different this time? Is all the talk just hype? Or are we really starting to see the beginnings of major change? I believe we are.
The comments about active learning are particularly on-target. Delivering a textbook electronically or a course on-line is hardly the point. For example, textbooks and courses that understand their subject matter well enough to ask appropriate questions and explain the answers, that assess learners’ comprehension, guide them through the subject matter, and accommodate their learning styles dynamically are where the action will be soon enough. This is not at all far-fetched or years off. Look at Watson and some of these links to see how imminent such educational technology could be!
- Award-winning video of Inquire: An Intelligent Textbook
- Presentation of Vulcan’s Digital Aristotle (PDF slides, streaming recording)
- Article on Vulcan’s Digital Aristotle, Aura, Inquire, and Campbell’s Biology (PDF)
We’ve been working for several years on applications of artificial intelligence in education, as in Project Sherlock and this presentation. Please get in touch if you’re interested in advancing education along such lines.
In Vulcan’s Project Halo, we developed means of extracting the structure of logical proofs that answer advanced placement (AP) questions in biology. For example, the following shows a proof that separation of chromatids occurs during prophase.
This explanation was generated using capabilities of SILK built on those described in A SILK Graphical UI for Defeasible Reasoning, with a Biology Causal Process Example. That paper gives more details on how the proof structures of questions answered in Project Sherlock are available for enhancing the suggested questions of Inquire (described in this post, which includes further references). SILK justifications are produced using a number of higher-order axioms expressed in Flora‘s higher-order logic syntax, HiLog. These meta rules determine which logical axioms can or do result in a literal. (A literal is a positive or negative atomic formula, such as a fact, which can be true, false, or unknown. Something is unknown if it is proven neither true nor false. For more details, see the well-founded semantics, which is supported by XSB, the system in which Flora is implemented.)
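The three truth values just mentioned can be illustrated with a few lines of code. This is only the truth-value bookkeeping of the well-founded semantics, not the actual XSB/Flora machinery; the literal names are arbitrary.

```python
# Toy illustration of three-valued truth as in the well-founded
# semantics: a literal is true, false, or unknown. Not the actual
# XSB/Flora implementation, just the truth-value bookkeeping.

TRUE, FALSE, UNKNOWN = "true", "false", "unknown"

def truth(literal, proven_true, proven_false):
    """A literal is unknown unless it has been proven true or proven false."""
    if literal in proven_true:
        return TRUE
    if literal in proven_false:
        return FALSE
    return UNKNOWN

def negate(value):
    """Negation in three-valued logic: unknown stays unknown."""
    return {TRUE: FALSE, FALSE: TRUE, UNKNOWN: UNKNOWN}[value]

# Arbitrary literals: p is proven true, q is proven false, r is neither.
proven_true = {"p"}
proven_false = {"q"}

print(truth("p", proven_true, proven_false))          # true
print(truth("r", proven_true, proven_false))          # unknown
print(negate(truth("r", proven_true, proven_false)))  # unknown
```

That unknown is preserved under negation is exactly what distinguishes this from classical two-valued negation-as-failure.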
Now how does all this relate to pedagogy in future derivatives of electronic learning software or textbooks, such as Inquire?
Well, here’s a use case: Continue reading “Pedagogical applications of proofs of answers to questions”
In the spring of 2012, Vulcan engaged Automata for a knowledge acquisition (KA) experiment. This article provides background on the context of that experiment and what the results portend for artificial intelligence applications, especially in education. Vulcan presented some of the award-winning work referenced here at an AI conference, including a demonstration of the electronic textbook discussed below. There is a video of that presentation here. The introductory remarks are interesting but not pertinent to this article.
Background on Vulcan’s Project Halo
From 2002 to 2004, Vulcan developed a Halo Pilot that could correctly answer between 30% and 50% of the questions on advanced placement (AP) tests in chemistry. The approaches relied on sophisticated formal knowledge representation and expert knowledge engineering. Of three teams, Cycorp fared the worst and SRI the best in this competition. SRI’s system performed at the level of a 3 on the AP exam, which corresponds to earning course credit at many universities. The consensus view at the time was that achieving a score of 4 was feasible with limited additional effort. However, the cost per page for this level of performance was roughly $10,000, which needed to be reduced significantly before Vulcan’s objective of a Digital Aristotle could be considered viable.