A pedagogical model by any other name

Educational technology for personalized or adaptive learning is based on various types of pedagogical models.

Google provides the following info-box and link concerning pedagogical models:

Many companies have different names for their models, including:

  1. Knewton: Knowledge Graph
    (partially described in How do Knewton recommendations work in Pearson MyLab)
  2. Khan Academy: Knowledge Map
  3. Declara: Cognitive Graph

The term “knowledge graph” is particularly common; Google uses it for its own graph, for example.

There are some serious problems with these names, and the models vary in their pedagogical efficacy.  There is nothing cognitive about Declara’s graph, for example.  Like the others, it may organize “concepts” (i.e., nodes) by certain relationships (i.e., links) that pertain to learning, but none of these graphs purports to represent knowledge other than superficially and for limited purposes.

  • Each of Google’s, Knewton’s, and Declara’s graphs is far from sufficient to represent the knowledge and cognitive skills expected of a masterful student.
  • Each of them is also far from sufficient to represent the knowledge of learning objectives and instructional techniques expected of a proficient instructor.

Nonetheless, such graphs are critical to active e-learning technology, even if they fall short of our ambitious hope to dramatically improve learning and learning outcomes.

The most critical ingredients of these so-called “knowledge” or “cognitive” graphs include the following (sketched in code after the list):

  1. learning objectives
  2. educational resources, including instructional and formative or summative assessment items
  3. relationships between educational resources and learning objectives (i.e., what they instruct and/or assess)
  4. relationships between learning objectives (e.g., dependencies such as prerequisites)
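
To make these ingredients concrete, here is a minimal sketch in Python of the structure such a graph typically captures.  The class and field names are hypothetical illustrations of the four ingredients above, not any vendor’s actual schema:

  # Minimal sketch of the core structure of a pedagogical "knowledge graph".
  # All names are illustrative, not any vendor's actual schema.
  from dataclasses import dataclass, field

  @dataclass
  class LearningObjective:
      label: str                                           # e.g., "adding & subtracting whole numbers"
      prerequisites: list = field(default_factory=list)    # 4. dependencies between objectives

  @dataclass
  class EducationalResource:
      title: str
      kind: str                                            # "instruction", "formative assessment", or "summative assessment"
      aligned_objectives: list = field(default_factory=list)  # 3. what it instructs and/or assesses

  # 1. learning objectives and a dependency between them
  reading = LearningObjective("reading & writing whole numbers")
  adding = LearningObjective("adding & subtracting whole numbers", prerequisites=[reading])

  # 2. an educational resource aligned to an objective
  worksheet = EducationalResource("Whole-number addition worksheet", "formative assessment",
                                  aligned_objectives=[adding])

  # Note: each label is an opaque string; nothing here represents what "whole number" means.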

The following user interface supports curation of the alignment of educational resources and learning objectives, for example:

And the following supports curation of the dependencies between learning objectives (as in a prerequisite graph):

Here is a presentation of similar dependencies from Khan Academy:

And here is a depiction of such dependencies in Pearson’s use of Knewton within MyLab (cited above):

Of course there is much more that belongs in a pedagogical model, but let’s look at the fundamentals and their limitations before diving too deeply.

Prerequisite Graphs

The link on Knewton recommendations cited above includes a graphic showing some of the learning objectives and their dependencies concerning arithmetic.  The labels of these learning objectives include:

  • reading & writing whole numbers
  • adding & subtracting whole numbers
  • multiplying whole numbers
  • comparing & rounding whole numbers
  • dividing whole numbers

And more:

  • basics of fractions
  • exponents & roots
  • basics of mixed numbers
  • factors of whole numbers

But, as the sketch after this list makes concrete:

  • There is nothing in the “knowledge graph” that represents the semantics (i.e., meaning) of “number”, “fraction”, “exponent”, or “root”.
  • There is nothing in the “knowledge graph” that represents what distinguishes whole from mixed numbers (or even that fractions are numbers).
  • There is nothing in the “knowledge graph” that represents what it means to “read”, “write”, “multiply”, “compare”, “round”, or “divide”.
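
Here is a hypothetical reconstruction of such a prerequisite graph (the edges are illustrative guesses, not Knewton’s published graph), showing how little it actually represents:

  # Essentially everything the prerequisite graph encodes: opaque labels and edges.
  # The edges below are illustrative guesses, not Knewton's actual graph.
  prerequisites = {
      "adding & subtracting whole numbers": ["reading & writing whole numbers"],
      "multiplying whole numbers":          ["adding & subtracting whole numbers"],
      "dividing whole numbers":             ["multiplying whole numbers"],
      "basics of fractions":                ["dividing whole numbers"],
      "basics of mixed numbers":            ["basics of fractions"],
      "exponents & roots":                  ["multiplying whole numbers"],
  }
  # Nothing above represents what a "number", "fraction", "exponent", or "root" is,
  # nor what it means to "read", "multiply", "compare", "round", or "divide".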

Graphs, Ontology, Logic, and Knowledge

Because systems with knowledge or cognitive graphs lack such representation, they suffer from several problems, including the following, which are of immediate concern:

  1. dependencies between learning objectives must be explicitly established by people, thereby increasing the time, effort, and cost of developing active learning solutions, or
  2. dependencies that are not explicitly established must be induced from evidence, which requires exponentially more data as the number of learning objectives increases, thereby yielding lower initial and asymptotic efficacy than more intelligent, knowledge-based approaches

For example, more advanced semantic technology standards (e.g., OWL and/or SBVR, or defeasible modal logic) can represent that digits are integers, that integers are numbers, and that one integer divided by another is a fraction.  Consequently, a knowledge-based system can infer that learning objectives involving fractions depend on some learning objectives involving integers.  Such deductions can inform machine learning such that better dependencies are induced (initially and asymptotically), and they can increase the return on investment of human intelligence in a cognitive computing approach to pedagogical modeling.
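
A minimal sketch of that inference, in Python rather than OWL or SBVR (the class chain, relations, and rule below are drastic simplifications of what a real semantic technology stack would represent):

  # Simplified sketch: a small class hierarchy plus one rule lets a system propose
  # prerequisite candidates rather than requiring people to assert every edge.
  subclass_of = {"whole number": "integer", "integer": "number", "fraction": "number"}

  defined_in_terms_of = {"fraction": {"integer", "division"}}   # a fraction is one integer divided by another

  objective_concepts = {                      # which concepts each learning objective involves
      "basics of fractions": {"fraction"},
      "dividing whole numbers": {"whole number", "division"},
  }

  def is_a(concept, ancestor):
      while concept is not None:
          if concept == ancestor:
              return True
          concept = subclass_of.get(concept)
      return False

  def prerequisite_candidates(objective):
      """Objectives involving concepts that define (directly or via superclasses) this objective's concepts."""
      needed = set()
      for concept in objective_concepts[objective]:
          needed |= defined_in_terms_of.get(concept, set())
      return [other for other, concepts in objective_concepts.items()
              if other != objective and any(is_a(c, n) for c in concepts for n in needed)]

  print(prerequisite_candidates("basics of fractions"))
  # -> ['dividing whole numbers']: a candidate edge to confirm by curation or machine learning

Even this toy version illustrates the point above: “whole numbers are integers” plus “a fraction is one integer divided by another” yields a plausible dependency that nobody had to assert by hand.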

As another example, consider that adaptive educational technology either knows or does not know that multiplying one integer by another is equivalent to summing one of them the other number of times.  Similarly, such systems either know or do not know how multiplication and division are related.  How effectively can they improve learning if they do not know?  How much more work is required to get such systems to acceptable efficacy without such knowledge?  Would you want your child instructed by someone incapable of understanding and applying such knowledge?
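
The equivalence itself is trivial to state once represented; a toy illustration (not any vendor’s representation):

  # Toy illustration of the knowledge in question: multiplying a by b is equivalent
  # to summing a, b times, and (exact) division inverts multiplication.
  def multiply_by_repeated_addition(a, b):
      total = 0
      for _ in range(b):
          total += a
      return total

  assert multiply_by_repeated_addition(7, 4) == 7 * 4       # the stated equivalence
  assert (7 * 4) // 4 == 7                                   # how division relates back to multiplication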

Semantics of Concepts, Cognitive Skills, and Learning Objectives

Consider again the labels of the nodes in Knewton/Pearson’s prerequisite graph listed above.  Notice that:

  • the first group of labels are all sentences while the second group are all noun phrases
  • the first group (of sentences) are cognitive skills more than they are learning objectives
    • i.e., they don’t specify a degree of proficiency, although one may be implicit with regard to the educational resources aligned with those sentences
  • the second group (of noun phrases) refer to concepts (or, implicitly, sentences that begin with “understanding”)
  • those noun phrases that begin with “basics” are unclear as learning objectives or as references to concepts

For adaptive educational technology that does not “know” what these labels mean, nor anything about the meanings of the words that occur in them, the issues noted above may not seem important, but they clearly limit the utility and efficacy of such graphs.

Taking a cognitive computing approach, human intelligence helps artificial intelligence understand these sentences and phrases deeply and precisely.  A cognitive computing approach also results in artificial intelligence that deeply and precisely understands many additional sentences of knowledge that don’t fit into such graphs.

For example, the system comes to know that reading and writing whole numbers is a conjunction of finer-grained learning objectives and that, in general, reading is a prerequisite to writing.  It comes to know that whole numbers are non-negative integers, which are typically positive.  It comes to know that subtraction is the inverse of addition (which implies some dependency relationship between addition and subtraction).  In order to understand exponents, the system is told and learns about raising numbers to powers and about what it means to square a number.  The system is told and learns about roots and how they relate to exponents and powers, including how square roots relate to squaring numbers.  The system is told that a mixed number combines an integer and a proper fraction and corresponds to an improper fraction.
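
Such knowledge need not remain prose.  Here is a sketch of a few of these facts as explicit, machine-usable assertions (the relation names are illustrative, not the Linguist platform’s actual vocabulary):

  # A few of the facts above as explicit assertions; relation names are illustrative only.
  facts = [
      ("reading & writing whole numbers", "is_conjunction_of",
       ["reading whole numbers", "writing whole numbers"]),
      ("reading whole numbers", "is_prerequisite_of", "writing whole numbers"),
      ("whole number", "is_subclass_of", "non-negative integer"),
      ("subtraction", "is_inverse_of", "addition"),
      ("square root", "is_inverse_of", "squaring"),
      ("mixed number", "is_composed_of", ["integer", "proper fraction"]),
      ("mixed number", "corresponds_to", "improper fraction"),
  ]

  # One consequence a reasoner can draw: if X is the inverse of Y, then learning
  # objectives involving X depend on at least some learning objectives involving Y.
  derived = [(x, "depends_on", y) for (x, rel, y) in facts if rel == "is_inverse_of"]
  print(derived)
  # -> [('subtraction', 'depends_on', 'addition'), ('square root', 'depends_on', 'squaring')]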

Adaptive educational technology either understands such things or it does not.  If it does not, human beings will have to work much harder to achieve a system with a given level of efficacy and subsequent machine learning will take a longer time to reach a lower asymptote of efficacy.

Affiliate Transactions covered by The Federal Reserve Act (Regulation W)

Benjamin Grosof, co-founder of Coherent Knowledge Systems, is also involved with developing a standard ontology for the financial services industry (i.e., FIBO).  In the course of working on FIBO, he is developing a demonstration of defeasible logic concerning Regulation W of the Federal Reserve Act.  Regulation W specifies which transactions involving banks and their affiliates are prohibited under Section 23A of the Act.  In the course of this work, various documents are being captured within the Linguist™ platform.  This is a brief note on how those documents can be imported into the platform for curation into formal semantics and logic (as Benjamin and Coherent are doing).

Higher Education on a Flatter Earth

We’re collaborating on some educational work and came across this sentence in a textbook on finance and accounting:

  • All of these are potentially good economic decisions.

We use statistical NLP but assist it in resolving ambiguities.  In doing so, we relate questions, answers, and explanations to the text.

We also extract the terminology and produce a rich lexicalized ontology of the subject matter for pedagogical uses, assessment, and adaptive learning.

Here’s one that just struck me as interesting.  This is a case where the choice looks like it won’t matter much either way, but …


SBVR in OWL

In preparation for generating RIF and SBVR from the Linguist, we have produced an OWL ontology for the pertinent aspects of the SBVR specification.  We hope that this is helpful to others and would sincerely appreciate any corrections or comments on how to improve it.

Paul

Project Sherlock

Working as part of Vulcan’s Project Halo[1], Automata is applying a natural language understanding system that translates carefully formulated sentences into formal logic so as to answer questions that typically require deeper knowledge and inference than demonstrated by Watson.

The objective over the next three quarters is to acquire enough knowledge from the 9th edition of Campbell’s Biology textbook to demonstrate three things.

  • First, that the resulting system answers, for example, biology advanced placement (AP) exam questions more competently than existing systems (e.g., Aura[2] or Inquire[3]).
  • Second, that knowledge from certain parts of the textbook is effectively translated from English into formal knowledge with sufficient breadth and depth of coverage and semantics.
  • Third, that the knowledge acquisition process proves efficacious and accessible to less than highly skilled knowledge engineers so as to accelerate knowledge acquisition beyond 2012.

Included in the second of these is a substantial ontology of background knowledge expected of students in order to comprehend the selected parts of the textbook using a combination of OWL, logic, and English sentences from sources other than the textbook.

Automata is hiring logicians, linguists, and biologists to work as consultants, contractors, or employees for:

  • Interactive tree-banking and word-sense disambiguation of several thousand sentences.[4]
  • Extending its lexical ontology and a broad-coverage grammar of English with additional vocabulary and deeper semantics, especially concerning cellular biology and related scientific knowledge including chemistry, physics, and math.
  • Maturing its upper and middle ontology of domain independent knowledge using OWL in combination with various other technologies, including description logic, first-order logic, high-order logic, modal logic, and defeasible logic.[5]
  • Enhancing its platform for text-driven knowledge engineering towards a collaborative wiki-like architecture for self-aware content in scientific education and biomedical applications.

Terms of engagement are flexible, ranging from small units of work to full-time employment.  We are based in Pittsburgh, Pennsylvania and Vulcan is headquartered in Seattle, Washington, but the team is distributed across the country and overseas.

Please contact Paul Haley by e-mail to his first name at this domain.


[1] Vulcan: http://www.vulcan.com/TemplateCompany.aspx?contentId=54; Project Halo: http://www.projecthalo.com/; video introduction/overview: http://videolectures.net/aaai2011_gunning_halobook/

[2] Aura: http://www.ai.sri.com/project/aura

[3] Inquire: http://www.franz.com/success/customer_apps/artificial_intelligence/aura.lhtml

[4] tree-banking and WSD: http://www.omg.org/spec/SBVR and http://en.wikipedia.org/wiki/Word-sense_disambiguation

[5] e.g., SILK (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.174.1796 ) and SBVR (http://www.omg.org/spec/SBVR)

Event-centric BPM and goal-driven processing

The slides for my Business Rules Forum presentation, which covers event semantics and how focusing on events simplifies process definition and facilitates more robust governance and compliance, are at Event-centric BPM.

After the talk I spoke with Jan Verbeek and Gartjan Grijzen of Be Informed and reviewed their software, which is excellent.  They have been quite successful with various government agencies in applying the event-centric methodology to produce goal-driven processing.  Their approach is elegant and effective.  It clearly demonstrates the merits of an event-centric approach and the power that emerges from understanding event dependencies.  Also, it is very semantic, ontological, and logic-programming oriented in its approach (e.g., they use OWL and a backward-chaining inference engine).

They do not have the top-down knowledge management approach that I advocate, nor do they provide the logical verification of governing policies and compliance (i.e., using theorem provers) that I mention in the talk (see Guido Governatori’s 2010 publications and Travis Breaux’s research at CMU, for example), but theirs is the best commercially deployed work in separating business process description from procedural implementation that comes to mind.  (Note that Ed Barkmeyer of NIST reports some use of SBVR descriptions of manufacturing processes with theorem provers.  Some in the automotive and aerospace industries have been interested in this approach for quality purposes, too.)

Be Informed is now expanding into the United States with the assistance of Mills Davis and others.  Their software is definitely worth consideration and, in my opinion, is more elegant and effective than the generic BPMN approach.

Simple problems with the semantic web

These days, ontologies are typically defined in OWL using Protege.  Unfortunately, OWL lacks any notion of exceptions in inheritance or any other notion of defeasibility.

So, although you may want to say that birds fly, your ontology will be broken (or become much more complicated) when you realize there are birds that can’t fly, such as penguins or ostriches, or even sick or injured birds.

Practically speaking, you need something like courteous logic or the defeasibility in SILK to handle this (or any 1980s expert system shell or even earlier frame system).  OWL is very hard on mortal man (e.g., mainstream IT) in this regard.
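
For illustration, here is the kind of default-with-exceptions inheritance that a frame system or defeasible logic expresses directly, sketched in Python (which is only a stand-in; courteous logic or SILK would state the defaults and exceptions declaratively):

  # Default inheritance with exceptions: birds fly; penguins and injured birds do not.
  class Bird:
      def can_fly(self):
          return True                # default rule: birds fly

  class Penguin(Bird):
      def can_fly(self):
          return False               # exception overrides the default

  class InjuredBird(Bird):
      def can_fly(self):
          return False               # another exception

  print(Bird().can_fly(), Penguin().can_fly(), InjuredBird().can_fly())   # True False False

  # In monotonic OWL, asserting both "birds fly" and "penguins do not fly" with Penguin
  # as a subclass of Bird makes Penguin unsatisfiable unless the ontology is reworked.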

How can I tell OWL that a pronoun is a noun but that pronouns are a closed class of words, unlike nouns, verbs, adjectives, and adverbs (in general)?  Well, I’ll have to tell it about open-class nouns versus closed-class nouns.  What a pain!

This is why we use Protege primarily as a drafting tool and, for example, SILK to do reasoning.  Non-defeasible description logic and first-order reasoners are difficult to get along with in practice (and they make sustainable knowledge repositories too difficult, which obviously inhibits adoption).

What is has always been going to be

I’ve been working for a while now on an ontology for representing events (which includes processes, of course).  One of the requirements of a system that is to monitor, govern, implement, or reason about processes is that it consider “situations”, which are things that happen or occur, including events and states.  (See, for example, the perdurants of the DOLCE ontology, BFO’s occurrents, or OpenCyc’s situations.)  This requires representing time-variant information at various points or during various intervals of time (more than just the Allen relations or OWL Time).  If you’re interested in such things, I’d recommend Parsons’ “Events in the Semantics of English” or Pustejovsky’s “Syntax of Event Structure”, both of which look at the subject from a linguistic rather than an inferential perspective.  When you pursue this to the point that you implement the axioms that an artificial intelligence needs in order to assist in defining or governing a business process (or answering questions about molecular biological processes), you end up in some pretty abstract territory, including the Stanford Encyclopedia of Philosophy.  I found the title of this post entertaining within its page on temporal logic.
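
For the curious, here is a minimal sketch of what representing time-variant information can look like operationally: statements (“fluents”) that hold during intervals and are queried relative to a point in time.  The names and structure are mine, for illustration only, not DOLCE’s, BFO’s, or OpenCyc’s:

  # Minimal sketch of time-indexed ("fluent") assertions: a statement holds during
  # an interval and queries are relative to a time point.  Names are illustrative only.
  from dataclasses import dataclass

  @dataclass
  class Interval:
      start: float
      end: float                     # use float("inf") for "still holding"

      def contains(self, t):
          return self.start <= t <= self.end

  @dataclass
  class Fluent:
      statement: tuple               # e.g., ("order-123", "status", "shipped")
      during: Interval

  history = [
      Fluent(("order-123", "status", "placed"), Interval(0.0, 5.0)),
      Fluent(("order-123", "status", "shipped"), Interval(5.0, float("inf"))),
  ]

  def holds_at(statement, t):
      return any(f.statement == statement and f.during.contains(t) for f in history)

  print(holds_at(("order-123", "status", "placed"), 3.0))   # True
  print(holds_at(("order-123", "status", "placed"), 9.0))   # False: it no longer holds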