CPC and the Grandmother Neuron

A lot of recent work has advanced the learning of increasingly context-sensitive distributed representations (i.e., so-called 'embeddings'). DeepMind's paper on “Contrastive Predictive Coding” (CPC) is particularly interesting and advances on a number of fronts. For example, in wav2vec, Facebook AI Research (FAIR) uses CPC to obtain apparently superior acoustic modeling results to DeepSpeech's connectionist temporal classification (CTC) approach. In the CPC paper, the following image is particularly striking, harkening back to the early notion of a Grandmother Cell.

grandmother cells resulting from CPC
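For readers who want the gist of how CPC learns such units, here is a minimal numpy sketch of its InfoNCE objective: a context vector is scored against one true future latent and a batch of negatives via a log-bilinear map, and the loss is the cross-entropy of picking out the true one. The shapes and names (c_t, z_candidates, W_k) are my own illustration, not code from the paper.

    import numpy as np

    def info_nce(c_t, z_candidates, W_k):
        # Log-bilinear scores of each candidate future latent against the
        # context; row 0 of z_candidates is the true z_{t+k}, the rest are
        # negatives drawn from other times or sequences.
        scores = z_candidates @ (W_k @ c_t)
        scores = scores - scores.max()  # numerical stability
        # Cross-entropy of identifying the positive among the candidates.
        return -(scores[0] - np.log(np.exp(scores).sum()))

    rng = np.random.default_rng(0)
    c_t = rng.normal(size=16)             # context from the autoregressive model
    W_k = 0.1 * rng.normal(size=(8, 16))  # prediction map for step k
    z = rng.normal(size=(10, 8))          # 1 positive + 9 negatives
    print(info_nce(c_t, z, W_k))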

Impressive result from Google

This is pretty impressive work by Google!

They are seeing the objective behind the query.  It’s pretty simple, in theory, to see the verb “read” operating on the object “string” with the source (i.e., “from”) being consistent with an input stream (also handling the concatenated compound).

More impressive is that they have learned, from such queries and the content that people view following them, something perhaps even deeper: that character streams, scanners, and stream APIs are relevant.

And they have also narrowed my results based on how frequently I look at Java versus other implementation languages.

Nice AI Video

I found the following video in a recent post by Steve DeAngelis of Enterra Solutions:

It’s a bit too far towards the singularity/general AI end of the spectrum for me, but it’s nicely done and perhaps fun for many not in the field:

Enterra is an interesting company, too, FYI.  They are in cognitive computing with a bunch of people formerly of Inference Corporation, where I was Chief Scientist.  Doug Lenat of Cycorp was one of our scientific advisors.  Interestingly enough, Enterra uses Cyc!

A good example of the need for better assessment

The Andes Physics Tutor [1,2] is a nice piece of work.  Looking at the data from the Spring 2010 physics course at the US Naval Academy [3], one sees obvious challenges facing the current generation of adaptive education vendors.

Most of the modeling going on today is limited to questions where a learner gets the answer right or wrong with nothing in between.  Most assessment systems use such simple “dichotomous” models based on item response theory (IRT).  IRT models range in sophistication based on the number of “logistic parameters” that the models use to describe assessment items.  IRT models also come in many variations that address graded responses (e.g., Likert scales [4]) or partial credit, extending to multi-dimensional, multi-level/hierarchical, and longitudinal models.  I am not aware of any model that addresses the richness of the pedagogical data available from the physics tutor, however.

Having spent too long working through too much of the research literature in the area, I suspect a summary path through all this may be helpful…  There are 1, 2, and 3 parameter logistic IRT models that characterize assessment items in terms of their difficulty, discrimination, and guess-ability.  These are easiest to understand initially in the context of an assessment item that assesses a single cognitive skill by scoring a learner’s performance as passing or failing on the item.  Wikipedia does a good job of illustrating how these three numbers describe the cumulative distribution function of the probability that a learner will pass as his or her skill increases [5].  A 1 parameter logistic (1PL) model describes an assessment item only in terms of a threshold of skill at which half the population falls on either side.  The 2PL IRT model also considers the steepness of the curve: the steeper it is, the more precisely the assessment item discriminates between above and below average levels of skill.  And the 3PL model takes into consideration the odds that a passing result is a matter of luck, such as guessing the right answer on a multiple-choice question.
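As a concrete anchor for those three parameters, here is a small sketch of the 3PL item response function; the parameter names are standard, but the numbers below are made up for illustration.

    import numpy as np

    def p_correct_3pl(theta, a, b, c):
        # P(correct | theta) = c + (1 - c) / (1 + exp(-a * (theta - b)))
        # b: difficulty (location), a: discrimination (slope at b),
        # c: pseudo-guessing floor. 1PL fixes a and c; 2PL frees a; 3PL frees c.
        return c + (1.0 - c) / (1.0 + np.exp(-a * (np.asarray(theta) - b)))

    theta = np.linspace(-3, 3, 7)  # a range of skill levels
    # A four-option multiple-choice item: guessing puts the floor near 0.25.
    print(p_correct_3pl(theta, a=1.7, b=0.0, c=0.25))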

In the worlds of standardized testing and educational technology, especially with regard to personalized learning (as in adaptive assessment, curriculum sequencing, and intelligent tutoring), multiple-choice and fill-in-the-blank questions dominate because grading can be automated.  It is somewhat obvious, then, that 3PL IRT is the appropriate model for standardized testing (which is all multiple choice) and a large fraction of personalized learning technology.  Fill-in-the-blank questions are sometimes less vulnerable to guessing, in which case 2PL may suffice.  You may be surprised to learn, however, that even though 3PL is strongly indicated, its use is not pervasive because estimating the parameters of a 3PL model is mathematically and algorithmically sophisticated as well as computationally demanding.  Nonetheless, algorithms for Bayesian inference have matured over the last decade such that 3PL should become much more pervasive in the next few years.  (To their credit, for example, Knewton appears to use such techniques [6].)
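To make “mathematically and algorithmically sophisticated” concrete, here is a toy random-walk Metropolis sampler for one item’s 3PL parameters. To keep the sketch short it treats learner abilities as known; real estimation treats them as latent too (e.g., marginal maximum likelihood or a full hierarchical sampler), and the priors and step sizes here are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def p3pl(theta, a, b, c):
        return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

    # Simulated dichotomous responses of 500 learners to one item.
    theta = rng.normal(size=500)
    y = rng.random(500) < p3pl(theta, a=1.5, b=0.3, c=0.2)

    def log_post(a, b, c):
        if a <= 0.0 or not (0.0 < c < 0.5):
            return -np.inf
        p = p3pl(theta, a, b, c)
        loglik = np.sum(np.where(y, np.log(p), np.log1p(-p)))
        # Weakly informative priors (illustrative, up to constants):
        # a ~ LogNormal(0, 0.5), b ~ Normal(0, 1), c ~ Beta(2, 8).
        logprior = (-0.5 * (np.log(a) / 0.5) ** 2 - np.log(a)
                    - 0.5 * b ** 2
                    + np.log(c) + 7.0 * np.log(1.0 - c))
        return loglik + logprior

    cur = np.array([1.0, 0.0, 0.25])
    cur_lp = log_post(*cur)
    samples = []
    for i in range(20000):
        prop = cur + rng.normal(scale=[0.08, 0.08, 0.03])
        lp = log_post(*prop)
        if np.log(rng.random()) < lp - cur_lp:  # Metropolis accept/reject
            cur, cur_lp = prop, lp
        if i >= 5000 and i % 10 == 0:           # discard burn-in, then thin
            samples.append(cur.copy())

    print(np.mean(samples, axis=0))  # posterior means; true values: 1.5, 0.3, 0.2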

It gets worse, though.  Even if an ed-tech vendor is sophisticated enough to employ 3PL IRT, they are far less likely to model assessment items that involve multiple or hierarchical skills and assessments other than right or wrong.  And there’s more beyond these complications, but let’s pause to consider two of them for a moment.  In solving a word problem, such as in physics or math, a learner needs linguistic skills to read and comprehend the problem before going on to formulate and solve it.  These are different skills.  They could be modeled coarsely, such as a skill for reading comprehension, a skill for formulating equations, and a skill for solving equations, but conflating these skills into what IRT folks sometimes call a single latent trait is behind the state of the art.
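A compensatory multidimensional 2PL makes the contrast concrete: instead of one latent trait, the item loads on a vector of skills, with a 0/1 Q-matrix row recording which skills the item requires. The skill names and numbers below are illustrative, and note that noncompensatory (conjunctive) variants multiply per-skill probabilities instead of summing logits.

    import numpy as np

    def p_correct_mirt(theta, a, d, q):
        # Compensatory multidimensional 2PL:
        # P(correct | theta) = sigmoid(sum_k q_k * a_k * theta_k + d)
        # theta: learner skill vector; a: per-skill discriminations;
        # q: 0/1 Q-matrix row (skills the item requires); d: intercept.
        z = np.dot(q * a, theta) + d
        return 1.0 / (1.0 + np.exp(-z))

    # A physics word problem needing reading, formulating, and solving skills:
    theta = np.array([0.5, -0.2, 1.0])  # [reading, formulating, solving]
    a = np.array([1.2, 0.9, 1.4])
    q_item = np.array([1, 1, 1])        # this item requires all three skills
    print(p_correct_mirt(theta, a, d=-0.5, q=q_item))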

Today, multi-dimensional IRT is hot given recent advances in Bayesian methods.  Having said that, it’s worth noting that multiple skills have been on experts’ minds for a long time (e.g., [7]) and have been prevalent in higher-end educational technology, such as intelligent tutoring systems (aka cognitive tutors), for years.  These issues are prevalent in almost all the educational data available at PSLC’s DataShop [3].  Unfortunately, the need to associate multiple skills with assessment items exacerbates a primary obstacle to broader deployment of better personalized learning solutions: pedagogical modeling.  One key aspect of a pedagogical model is the relationship between assessment items and the cognitive skills they require (and assess).  Given such information, multi-dimensional IRT can be employed, but even articulating a single learning objective per assessment item and normalizing those learning objectives over thousands of assessment items is a major component of the cost of developing curriculum sequencing solutions.  (We’ll be announcing progress on this problem “soon”.)

In addition to multi-dimensional IRT, which promises more cognitive tutors, there are other aspects of modeling assessment, such as hierarchical or multi-level IRT.  Although the terms hierarchical and multi-level are sometimes used interchangeably with respect to models of assessment, we are more comfortable with the former being with respect to skills and the latter with regard to items.  A hierarchical model is similar to a multi-dimensional model in that an item involves multiple skills, but most typically where those skills have some taxonomic relationship.  A multi-level model allows for shared structure or focus between assessment items, including multiple questions concerning a common passage or scenario, as well as drill-down items.  All of the issues discussed in this paragraph are prevalent in the Andes Physics Tutor data.  Many other data models available at PSLC’s DataShop also involve hierarchical organization of multiple skills (aka “knowledge components”).
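One common multi-level formulation is the testlet model (after Bradlow, Wainer, and Wang), in which items sharing a passage get a person-by-passage effect that induces the within-group dependence a plain 2PL would ignore. A minimal sketch, with made-up numbers and one common sign convention:

    import numpy as np

    def p_correct_testlet(theta, a, b, gamma):
        # Testlet-style multi-level 2PL: gamma is a person-specific effect
        # shared by all items on the same passage (here, positive gamma
        # means this learner finds this particular passage hard).
        return 1.0 / (1.0 + np.exp(-a * (theta - b - gamma)))

    theta, gamma_passage = 0.4, 0.3  # one learner, one hard-for-them passage
    for a_j, b_j in [(1.1, -0.2), (0.9, 0.5)]:  # two items on that passage
        print(p_correct_testlet(theta, a_j, b_j, gamma_passage))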

And we have yet to address other critical aspects of a robust model of assessment!  For example, we have not considered how the time taken to perform an assessment reflects on a learner’s skill, nor graded responses or grades other than pass/fail (i.e., polytomous vs. dichotomous models).  The former is available in the data (along with multiple attempts, hints, and explanations that we have not touched on).  The latter remains largely unaddressed despite being technically straightforward (albeit somewhat sophisticated).  All of these are important, so put them on your assessment or pedagogical modeling and personalized learning checklist and stay posted!
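For the polytomous case, Samejima’s graded response model is a standard starting point: each graded score gets an ordered threshold, cumulative curves give P(score >= k), and category probabilities fall out as differences of adjacent curves. A minimal sketch with illustrative numbers:

    import numpy as np

    def grm_category_probs(theta, a, b_thresholds):
        # Samejima's graded response model for an item scored 0..K:
        # P(score >= k) = sigmoid(a * (theta - b_k)) for ordered thresholds
        # b_1 < ... < b_K; category probabilities are differences of
        # adjacent cumulative curves.
        b = np.asarray(b_thresholds)
        cum = 1.0 / (1.0 + np.exp(-a * (theta - b)))  # P(score >= k), k = 1..K
        cum = np.concatenate(([1.0], cum, [0.0]))     # P(>= 0) = 1, P(>= K+1) = 0
        return cum[:-1] - cum[1:]                     # P(score == k), k = 0..K

    print(grm_category_probs(0.3, a=1.5, b_thresholds=[-1.0, 0.0, 1.2]))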

[1] http://www.andestutor.org/

[2] Vanlehn, Kurt, et al. “The Andes physics tutoring system: Lessons learned.” International Journal of Artificial Intelligence in Education 15.3 (2005): 147-204.

[3] Available on-line at the Pittsburgh Science of Learning Center (PSLC) DataShop.

[4] Likert scales & items at Wikipedia

[5] the item response function of IRT at Wikipedia

[6] see the references to Markov chain Monte Carlo (MCMC) estimation of IRT models in this post from Knewton’s tech blog

[7] Kelderman, Henk, and Carl PM Rijkes. “Loglinear multidimensional IRT models for polytomously scored items.” Psychometrika 59.2 (1994): 149-176.

IBM Watson in medical education

IBM recently posted this video, which suggests the relevance of Watson’s capabilities to medical education. The demo uses cases such as occur on the USMLE exam and Watson’s ability to perform evidentiary reasoning given large bodies of text. The “reasoning paths” followed by Watson in presenting explanations or decision-support material use a nice, increasingly popular graphical metaphor.

One intriguing statement in the video concerns Watson “asking itself questions” during the reasoning process. It would be nice to know more about where Watson gets its knowledge about the domain, other than from statistics alone. As I’ve written previously, IBM openly admits that it avoided explicit knowledge in its approach to Jeopardy!

The demo does a particularly nice job with questions in which it is given candidate answers (e.g., multiple-choice questions). I am most impressed, however, with its response on the case beginning 3 minutes into the video.