Although a bit of a stretch, this post was inspired by the following blog post, which talks about the Facebook API in terms of learning styles. If you’re interested in such things, you are probably also aware of learning record stores and things like the Tin Can API. You need these things if you’re supporting e-learning across devices, for example…
Really, though, I was looking for information like the following nice rendering from Andrew Chua:
There’s a lot of hype in e-learning about what it means to “personalize” learning. You hear people say that their engagements (or recommendations for same) take into account individuals’ learning styles. Well…
Continue reading “Personalized e-Learning Styles”
Thesis: the overwhelming investment in educational technology will have its highest economic and social impact in technology that directly increases the rate of learning and the amount learned, not in technology that merely provides electronic access to abundant educational resources or on-line courses. More emphasis is needed on the cognitive skills and knowledge required to achieve learning objectives and how to assess them.
Look for a forthcoming post on the broader subject of personalized e-learning. In the meantime, here’s a tip on writing good multiple choice questions:
- Target wrong answers to diagnose misconceptions.
Better approaches to adaptive education model the learning objectives that are assessed by questions such as the following from the American Association for the Advancement of Science, which goes the extra mile by also modeling the misconceptions underlying incorrect answers:
As I mentioned in this post, we’re having fun layering questions and answers with explanations on top of electronic textbook content.
The basic idea is to couple a graph structure of questions, answers, and explanations to the text using semantics. The trick is to do that well enough, and automatically enough, that we can deliver effective adaptive learning support. This is analogous to the knowledge graph that users of Knewton’s API create for their content. The difference is that we derive the graph from the content itself, including the “assessment items” (that’s what educators call questions, among other things). Essentially, we parse the content, including the assessment items (i.e., the questions and each of their answers and explanations). The result of this parsing is, as we’ve described elsewhere, a precise lexical, syntactic, semantic, and logical understanding of each sentence in the content. But we don’t have to go nearly that far to exceed the state of the art here. Continue reading “Automatic Knowledge Graphs for Assessment Items and Learning Objects”
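To make the idea concrete, here is a minimal sketch of such a graph of questions, answers, and explanations linked to concepts in the content. All of the class names, relation labels, and example texts below are hypothetical illustrations, not the actual schema or pipeline:

```python
# Toy sketch of a knowledge graph coupling assessment items to content.
# Node kinds and relation labels are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str  # e.g., "concept", "question", "answer", or "explanation"
    text: str

@dataclass
class Graph:
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)  # (source, relation, target) triples

    def add(self, kind: str, text: str) -> Node:
        node = Node(kind, text)
        self.nodes.append(node)
        return node

    def link(self, src: Node, relation: str, dst: Node) -> None:
        self.edges.append((src, relation, dst))

g = Graph()
concept = g.add("concept", "time value of money")
question = g.add("question", "Which option is worth more today?")
answer = g.add("answer", "$100 now")
explanation = g.add("explanation", "A dollar today can earn interest.")

g.link(question, "assesses", concept)
g.link(answer, "answers", question)
g.link(explanation, "explains", answer)
```

The point of the sketch is only the shape of the data: once answers and explanations are typed nodes with edges back into the content, adaptive software can traverse from a wrong answer to the concept it assesses and on to remediating material.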
We’re collaborating on some educational work and came across this sentence in a textbook on finance and accounting:
- All of these are potentially good economic decisions.
We use statistical NLP but assist it in resolving ambiguities. In doing so, we relate questions, answers, and explanations to the text.
We also extract the terminology and produce a rich lexicalized ontology of the subject matter for pedagogical uses, assessment, and adaptive learning.
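As a rough picture of what a lexicalized ontology buys you, consider mapping the surface forms found in a sentence to the concepts they lexicalize. Everything here is a toy assumption — the lexicon entries, concept names, and simple string matching stand in for the real statistical and logical pipeline:

```python
# Toy sketch: grouping extracted surface forms under ontology concepts.
# The lexicon and concept names are hypothetical; real extraction is
# statistical and syntactic, not naive substring matching.
lexicon = {
    "net present value": "NetPresentValue",
    "NPV": "NetPresentValue",
    "discount rate": "DiscountRate",
}

def terms_in(sentence: str) -> set:
    """Return the ontology concepts whose surface forms occur in the sentence."""
    found = set()
    for surface, concept in lexicon.items():
        if surface.lower() in sentence.lower():
            found.add(concept)
    return found

sentence = "The NPV depends on the discount rate chosen."
print(sorted(terms_in(sentence)))  # ['DiscountRate', 'NetPresentValue']
```

Note that two surface forms (“NPV” and “net present value”) lexicalize one concept; that many-to-one mapping is what makes the ontology useful for assessment and adaptive learning across differently worded questions.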
Here’s one that just struck me as interesting. This is a case where the choice looks like it won’t matter much either way, but …
Continue reading “Higher Education on a Flatter Earth”
In Vulcan’s Project Halo, we developed means of extracting the structure of logical proofs that answer advanced placement (AP) questions in biology. For example, the following shows a proof that separation of chromatids occurs during anaphase.
This explanation was generated using capabilities of SILK built on those described in A SILK Graphical UI for Defeasible Reasoning, with a Biology Causal Process Example. That paper gives more details on how the proof structures of questions answered in Project Sherlock are available for enhancing the suggested questions of Inquire (which is described in this post, which includes further references). SILK justifications are produced using a number of higher-order axioms expressed in Flora‘s higher-order logic syntax, HiLog. These meta-rules determine which logical axioms can or do result in a literal. (A literal is a positive or negative atomic formula, such as a fact, which can be true, false, or unknown. Something is unknown if it is proven neither true nor false. For more details, you can read about the well-founded semantics, which is supported by XSB, the system in which Flora is implemented.)
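The three truth values can be pictured with a toy sketch. To be clear, this is neither SILK nor Flora code, and the literals and sets below are hypothetical; it only illustrates that a literal falls into one of three buckets:

```python
# Toy illustration of three-valued truth as under the well-founded semantics:
# a literal is true if proven, false if disproven, and unknown otherwise.
from enum import Enum

class Truth(Enum):
    TRUE = 1
    FALSE = 2
    UNKNOWN = 3  # neither provably true nor provably false

# Hypothetical proof results, for illustration only.
proven = {"occurs(separation_of_chromatids, anaphase)"}
disproven = {"occurs(separation_of_chromatids, interphase)"}

def truth(literal: str) -> Truth:
    if literal in proven:
        return Truth.TRUE
    if literal in disproven:
        return Truth.FALSE
    return Truth.UNKNOWN

print(truth("occurs(separation_of_chromatids, anaphase)"))  # Truth.TRUE
print(truth("occurs(chromosome_condensation, metaphase)"))  # Truth.UNKNOWN
```

The third value is the interesting one pedagogically: a literal that is merely unknown (not disproven) signals a gap in the knowledge base or in the student’s model, rather than an outright contradiction.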
Now how does all this relate to pedagogy in future derivatives of electronic learning software or textbooks, such as Inquire?
Well, here’s a use case: Continue reading “Pedagogical applications of proofs of answers to questions”