Automatic Knowledge Graphs for Assessment Items and Learning Objects

As I mentioned in this post, we’re having fun layering questions and answers with explanations on top of electronic textbook content.

The basic idea is to couple a graph structure of questions, answers, and explanations to the text using semantics.  The trick is to do that well enough, and automatically enough, that we can deliver effective adaptive learning support.  This is analogous to the knowledge graph that users of Knewton’s API create for their content.  The difference is that we get the graph from the content, including the “assessment items” (that’s what educators call questions, among other things).  Essentially, we parse the content, including the assessment items (i.e., the questions and each of their answers and explanations).  The result of this parsing is, as we’ve described elsewhere, precise lexical, syntactic, semantic, and logical understanding of each sentence in the content.  But we don’t have to go nearly that far to exceed the state of the art here.
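
To make this concrete, here is a minimal sketch, with entirely hypothetical names and content, of the kind of graph we mean: assessment items linked to the concepts they assess and to the learning objects that teach those concepts, so that a wrong answer can route a student back to the right material.

    # Minimal sketch (not our production code) of a knowledge graph linking
    # assessment items, concepts, and learning objects.  All identifiers and
    # content here are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        id: str
        kind: str            # "question", "answer", "explanation", "concept", "section"
        text: str = ""
        edges: list = field(default_factory=list)   # (relation, target_id) pairs

    graph = {}

    def add(node):
        graph[node.id] = node

    def link(source_id, relation, target_id):
        graph[source_id].edges.append((relation, target_id))

    add(Node("sec-3.2", "section", "Ionic bonding"))
    add(Node("con-ionic-bond", "concept", "ionic bond"))
    add(Node("q-17", "question", "Which compound is held together by ionic bonds?"))
    add(Node("a-17-b", "answer", "NaCl"))
    add(Node("x-17-b", "explanation", "Sodium donates an electron to chlorine."))

    link("sec-3.2", "teaches", "con-ionic-bond")
    link("q-17", "assesses", "con-ionic-bond")   # derived by parsing the item
    link("q-17", "hasAnswer", "a-17-b")
    link("a-17-b", "explainedBy", "x-17-b")

    # Adaptive support: a wrong answer to q-17 routes the student back to
    # whatever teaches the assessed concept.
    weak = [t for r, t in graph["q-17"].edges if r == "assesses"]
    remediation = [n.id for n in graph.values()
                   if any(r == "teaches" and t in weak for r, t in n.edges)]
    print(remediation)   # ['sec-3.2']

Continue reading “Automatic Knowledge Graphs for Assessment Items and Learning Objects”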

Knowledge acquisition using lexical and semantic ontology

In developing a compliance application based on the institutional review board policies of Johns Hopkins’ Dept. of Medicine, we had to clarify the following sentence:

  • Projects involving drugs or medical devices other than the use of an approved drug or medical device in the course of medical practice and projects whose data will be submitted to or held for inspection by the FDA will not be exempt from JHM IRB review UNLESS that use falls within the Emergency Use provisions of 21 CFR 56.102 (d).

As you can see, the sentence contains a number of compound words and acronyms, as well as references to the Code of Federal Regulations, all of which must be defined or recognized before the sentence can be understood.
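
As a toy illustration (not our actual pipeline), even a small lexicon of acronyms and compound terms, plus a pattern for citations of the Code of Federal Regulations, goes a fair way toward recognizing the terminology in that sentence; the lexicon entries below are examples only.

    # Toy illustration of recognizing acronyms, compound terms, and CFR
    # citations before deeper parsing.  The lexicon entries are examples only.
    import re

    LEXICON = {
        "JHM IRB": "Johns Hopkins Medicine institutional review board",
        "FDA": "Food and Drug Administration",
        "medical device": "a compound term denoting a regulated instrument",
    }

    # Matches citations such as "21 CFR 56.102 (d)".
    CFR = re.compile(r"\b\d+\s+CFR\s+\d+\.\d+\s*(?:\([a-z]\))?")

    sentence = ("Projects involving drugs or medical devices will not be exempt "
                "from JHM IRB review UNLESS that use falls within the Emergency "
                "Use provisions of 21 CFR 56.102 (d).")

    # Longest terms first, so "JHM IRB" is recognized as one concept, not two.
    for term in sorted(LEXICON, key=len, reverse=True):
        if term in sentence:
            print(f"recognized {term!r}: {LEXICON[term]}")

    for citation in CFR.findall(sentence):
        print(f"regulatory reference: {citation}")

Continue reading “Knowledge acquisition using lexical and semantic ontology”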

Confessions of a production rule vendor (part 1)

If you are using one of the more popular rules engines, chances are you can blame me.  I popularized the technology of forward-chaining production rules based on the Rete Algorithm.  Others have certainly contributed; my path is the one that led to open-source implementations and many commercial products, including those of IBM, Oracle, SAP, TIBCO, Red Hat, and too many others to mention (e.g., see this).
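
For readers who have never looked inside such an engine: a production system repeatedly matches rule conditions against working memory, fires a matching rule's actions (which may add facts), and repeats until nothing new fires.  Here is a deliberately naive sketch, without the incremental matching that the Rete Algorithm contributes, and with toy fire-once behavior in place of a real engine's conflict resolution; the rules are invented.

    # A deliberately naive forward-chaining production system -- no Rete.
    # Real engines avoid re-matching every rule against every fact on each
    # cycle; that incremental matching is what the Rete Algorithm contributes.
    def run(rules, facts):
        facts = set(facts)
        fired = set()
        while True:
            for name, condition, action in rules:
                if name not in fired and condition(facts):
                    facts |= action(facts)   # forward chaining: add conclusions
                    fired.add(name)
                    break                    # restart the match-resolve-act cycle
            else:
                return facts                 # quiescence: nothing left to fire

    rules = [
        ("gold-status", lambda f: ("spend", "high") in f,
         lambda f: {("status", "gold")}),
        ("free-shipping", lambda f: ("status", "gold") in f,
         lambda f: {("shipping", "free")}),
    ]

    print(run(rules, {("spend", "high")}))
    # {('spend', 'high'), ('status', 'gold'), ('shipping', 'free')}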

Today, I want to make clear that the future prospects for production rule technology are diminishing.  My objective here is to explain why most rule-based technologies are no good and why some are much better.  Although production rule technology is much better than most rule-based technologies, I hope to also make clear that in the age of IBM’s Watson, Google’s Brain, and the semantic web, production rule technology is inadequate.

Rules are not created equal.

Rules have become so pervasive in the software business that vendors of all types of software say they have them.  Consider, for example, that even Microsoft Outlook has rules!

Continue reading “Confessions of a production rule vendor (part 1)”

Financial industry to define standards using defeasible logic and semantic web technologies

Last week, I attended the FIBO (Financial Industry Business Ontology) Technology Summit along with 60 others.

The effort is building an ontology of fundamental concepts in the financial services industry. As part of the effort, there is surprisingly clear understanding that, for the resulting representation to be useful, there is a need for logical and rule-based functionality that does not fit within OWL (the web ontology language standard) or SWRL (a simple semantic web rule language). In discussing how to meet the reasoning and information processing needs of consumers of FIBO, there was surprisingly rapid agreement that the functionality of Flora-2 was most promising for use in defining and exemplifying the use of the emerging standard. Endorsers included Benjamin Grosof and myself, along with a team from SRI International. Others had a number of excellent questions, such as those concerning open- vs. closed-world semantics, which are addressed by support for the well-founded semantics in Flora-2 and XSB.
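
For those who have not run into the distinction, here is a toy contrast of the two assumptions: under the closed-world assumption (as in the default negation of Flora-2 and XSB), what cannot be proven is taken to be false, while under the open-world assumption (as in OWL), it is simply unknown.  The facts and query are invented for illustration.

    # Toy contrast of closed- vs. open-world assumptions; the facts and
    # query are invented for illustration.
    facts = {("issuer", "bond-123", "acme")}

    def closed_world(query):
        # Negation as failure: unprovable means false.
        return query in facts

    def open_world(query):
        # Absence of proof proves nothing.
        return True if query in facts else "unknown"

    query = ("rated", "bond-123", "AAA")
    print(closed_world(query))   # False: no rating recorded, so assumed unrated
    print(open_world(query))     # 'unknown': the rating may simply be unstated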

Thanks go to Vulcan for making available to all the improvements to Flora and XSB that were developed in Project Halo!

Background for our Semantic Technology 2013 presentation

In the spring of 2012, Vulcan engaged Automata for a knowledge acquisition (KA) experiment.  This article provides background on the context of that experiment and what the results portend for artificial intelligence applications, especially in education.  Vulcan presented some of the award-winning work referenced here at an AI conference, including a demonstration of the electronic textbook discussed below.  There is a video of that presentation here.  The introductory remarks are interesting but not pertinent to this article.

Background on Vulcan’s Project Halo

From 2002 to 2004, Vulcan developed a Halo Pilot that could correctly answer between 30% and 50% of the questions on advanced placement (AP) tests in chemistry.  The approaches relied on sophisticated formal knowledge representation and expert knowledge engineering.  Of the three teams, Cycorp fared the worst and SRI fared the best in this competition.  SRI’s system performed at the level of a 3 on the AP exam, which corresponds to earning course credit at many universities.  The consensus view at that time was that achieving a score of 4 on the AP was feasible with limited additional effort.  However, the cost per page for this level of performance was roughly $10,000, which needed to be reduced significantly before Vulcan’s objective of a Digital Aristotle could be considered viable.

Continue reading “Background for our Semantic Technology 2013 presentation”

SBVR in OWL

In preparation for generating RIF and SBVR from the Linguist, we have produced an OWL ontology for the pertinent aspects of the SBVR specification.  We hope that this is helpful to others and would sincerely appreciate any corrections or comments on how to improve it.
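
For a flavor of what is involved, here is a hypothetical fragment using rdflib; SBVR distinguishes noun concepts from verb concepts (fact types), and the class names and namespace below are illustrative rather than the IRIs actually used in the ontology.

    # Hypothetical fragment of the kind of OWL modeling involved; the
    # namespace and class names are illustrative, not the published IRIs.
    from rdflib import Graph, Namespace, RDF, RDFS, OWL, Literal

    SBVR = Namespace("http://www.example.org/sbvr#")   # placeholder namespace
    g = Graph()
    g.bind("sbvr", SBVR)

    for cls in ("Concept", "NounConcept", "VerbConcept"):
        g.add((SBVR[cls], RDF.type, OWL.Class))
    g.add((SBVR.NounConcept, RDFS.subClassOf, SBVR.Concept))
    g.add((SBVR.VerbConcept, RDFS.subClassOf, SBVR.Concept))
    g.add((SBVR.VerbConcept, RDFS.comment,
           Literal("An SBVR fact type, e.g., 'customer places order'.")))

    print(g.serialize(format="turtle"))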

Paul

NLP: depictive in an HPSG lexicon?

We’re working with the English Resource Grammar (ERG), OWL, and Vulcan’s SILK to educate the machine by translating textbooks into defeasible logic.  Part of this involves an ontology that models semantics more deeply than the ERG.  The ERG is based on head-driven phrase structure grammar (HPSG), which provides deeper parsing and, together with the DELPH-IN infrastructure, also provides a simple under-specified semantic representation called minimal recursion semantics (MRS).

We’re having a great time using OWL to clarify and enrich the semantics of the rich model underlying the ERG.  Here’s an example, FYI.  If you’d like to know more (or help), please drop us a line!  Overall the project will demonstrate our capabilities for transforming everyday sentences into RIF and business rule languages using SBVR extended with defeasibility and other capabilities, all modeled in the same OWL ontology.

What triggered this blog entry was a bit of a surprise in seeing that whether or not an adjective can be used depictively is sometimes encoded in the lexicon.  This is one of the problems of TDL versus a description-logic based model with more expressiveness.  It results in more lexical entries than necessary, which has been discussed by others in contrasting this approach with the Attribute Logic Engine (ALE), for example.

In trying to model the semantics of words like ‘same’ and ‘different’, we are scratching our heads about these lines from the ERG’s lexicon:

  1. same_a1 := aj_pp_i-cmp-sme_le & [ ORTH < "same" >, SYNSEM [ LKEYS.KEYREL.PRED "_same_a_as_rel", …
  2. the_same_a1 := aj_-_i-prd-ndpt_le & [ ORTH < "the", "same" >, SYNSEM [ LKEYS.KEYREL.PRED "_the+same_a_1_rel", …
  3. the_same_adv1 := av_-_i-vp-po_le & [ ORTH < "the", "same" >, SYNSEM [ LKEYS.KEYREL.PRED "_the+same_a_1_rel", …
  4. exact_a2 := aj_pp_i-cmp-sme_le & [ ORTH < "exact" >, SYNSEM [ LKEYS.KEYREL.PRED "_exact_a_same-as_rel"…

One of the interesting things about lexicalized grammars is that lexical entries (i.e., ‘words’) are described with almost arbitrary combinations of their lexical, syntactic, and semantic characteristics.

The preceding code is expressed in a type description language (TDL) used by the Lisp-based LKB (and its C++ counterpart, PET), which are unification-based parsers that produce a chart of plausible parses with some efficiency.  What is given above is already deeper than what you can expect from a statistical parser (but richer descriptions of lexical entries promise to make statistical parsing much better, too).
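
For orientation, here is a much-simplified, hand-written rendering (not actual ERG/LKB output) of the elementary predications an MRS might contain for a phrase like “the same answer”, using the predicate named in lexical entry #2 above; the variable and label names are arbitrary.

    # Hand-written, much-simplified sketch of MRS elementary predications
    # for "the same answer"; not actual parser output.
    from collections import namedtuple

    EP = namedtuple("EP", "pred label args")

    mrs = [
        EP("_the+same_a_1_rel", "h1", {"ARG0": "e2", "ARG1": "x3"}),
        EP("_answer_n_1_rel",   "h1", {"ARG0": "x3"}),
    ]

    # Intersective modification: both predications share the variable x3
    # (and here a label), so they constrain the same individual.
    for ep in mrs:
        print(ep.pred, ep.args)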

Unfortunately, there is no available documentation on why the ERG was designed as it is, so the meaning of the above is difficult to interpret.  For example, the types of lexical entries (the symbols ending in ‘_le’) referenced above are defined as follows:

  1. aj_pp_i-cmp-sme_le := basic_adj_comp_lexent & [SYNSEM[LOCAL[CAT[HEAD superl_adj &[PRD -,MOD <[LOCAL.CAT.VAL.SPR <[--MIN def_or_demon_q_rel]>]>],VAL.SPR.FIRST.--MIN much_deg_rel],CONT.RELS <!relation,relation!>],MODIFD.LPERIPH bool,LKEYS[ALTKEYREL.PRED comp_equal_rel,--COMPKEY _as_p_comp_rel]]].
  2. aj_-_i-prd-ndpt_le := nonc-hm-nab & [SYNSEM basic_adj_abstr_lex_synsem & [LOCAL[CAT[HEAD adj & [PRD +,MINORS[MIN norm_adj_rel,NORM norm_rel],TAM #tam,MOD < anti_synsem_min >],VAL[SPR.FIRST anti_synsem_min,COMPS < >],POSTHD +],CONT[HOOK[LTOP #ltop,INDEX #arg0 &[E #tam],XARG #xarg],RELS <! #keyrel & adj_relation !>,HCONS <! !>]],NONLOC non-local_none,MODIFD notmod &[LPERIPH bool],LKEYS.KEYREL #keyrel &[LBL #ltop,ARG0 #arg0,ARG1 #xarg & non_expl-ind]]].

Needless to say, that’s a mouthful!  Chasing this down, the following ‘informs’ us that “the same”, which uses type #2 above, is defined in terms of these lexical types:

  1. nonc-hm-nab := nonc-h-nab & mcna.
  2. nonc-h-nab := nonconj & hc-to-phr & non_affix_bearing.
  3. mcna := word & [ SYNSEM.LOCAL.CAT.MC na ].

Which is to say that it is non-conjunctive, complements a head to form a phrase, can’t be affixed, cannot constitute a main clause, and is a word.

The fact that the lexical entry for “the same” is adjectival is given by the definitions of the following types used in the SYNSEM feature:

  1. basic_adj_comp_lexent := compar_superl_adj_word & [SYNSEM adj_unsp_ind_twoarg_synsem & [LOCAL[CAT.VAL[COMPS <canonical_or_unexpressed & [--MIN #cmin,LOCAL [CAT basic_pp_cat,CONJ cnil,CONT.HOOK [LTOP #ltop,INDEX #ind]]]>],CONT.HOOK [ LTOP #ltop, XARG #xarg]],LKEYS [ KEYREL.ARG1 #xarg,ALTKEYREL.ARG2 #ind,--COMPKEY #cmin]]].
  2. compar_superl_adj_word := nonc-hm-nab & [SYNSEM adj_unsp_ind_synsem & [LOCAL[CAT[HEAD[MOD <[--SIND #ind & non_expl]>,TAM #tam,MINORS.MIN abstr_adj_rel],VAL.SPR.FIRST.LOCAL.CONT.HOOK.XARG #altarg0],CONT[HOOK[XARG #ind,INDEX #arg0 & [E #tam]],RELS.LIST <[LBL #hand,ARG1 #ind],#altkeyrel & [LBL #hand,ARG0 event & #altarg0,ARG1 #arg0],…>]],LKEYS.ALTKEYREL #altkeyrel]].

Which is to say that it is a comparative or superlative adjectival word (even though it consists of two lexemes in its ‘orthography’) that involves two semantic arguments, including one complement which may be an unexpressed prepositional phrase.  A comparative or superlative adjective, in turn, is non-conjunctive, complements a head to form a phrase, is non-affix bearing (?), and is non-clausal, as defined by the type ‘nonc-hm-nab’ above.
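
To illustrate those two semantic arguments by hand (again, not parser output): in “this answer is the same as that one” the complement is expressed, while in “this answer is the same” it is not, in which case the second argument ends up as an otherwise unreferenced index, as discussed further below.

    # Hand-written illustration of the two semantic arguments; the variable
    # names are arbitrary, and this is not parser output.
    expressed = {"pred": "_same_a_as_rel",
                 "ARG1": "x4",    # this answer
                 "ARG2": "x9"}    # that one ("as that one" is the complement)

    unexpressed = {"pred": "_same_a_as_rel",
                   "ARG1": "x4",
                   "ARG2": "i10"}  # complement unexpressed: an otherwise
                                   # unreferenced index stands in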

The types used in the syntax and semantics (i.e., SYNSEM) feature of the two lexical types are defined as follows (none of which is documented):

  1. adj_unsp_ind_twoarg_synsem := adj_unsp_ind_synsem & two_arg.
  2. adj_unsp_ind_synsem := basic_adj_lex_synsem & lex_synsem & adj_synsem_lex_or_phrase & isect_synsem & [LOCAL.CONT.HOOK.INDEX #ind,LKEYS.KEYREL.ARG0 #ind].

In a moment, we’ll discuss the types used in the second of these, but first, some basics on the semantics that are mixed with the syntax above.

In effect, the above indicates that a new ‘elementary predication’ will be needed in the MRS to represent the adjectival relationship in the logic derived in the course of parsing (i.e., that’s what ‘unsp_ind’ means, although it’s not documented, which I will try not to bemoan much further).

The following indicates that the newly formed elementary predicate is not (initially) within any scope and that it has two arguments whose semantics (i.e., their RELations) are concatenated for propagation into the list of elementary predications that will constitute the MRS for any parses found.

  1. two_arg := basic_two_arg & [LOCAL.CONT.HCONS <! !>].
  2. basic_two_arg := unspec_two_arg & lex_synsem.
  3. unspec_two_arg := basic_lex_synsem & [LOCAL.ARG-S <[LOCAL.CONT.HOOK.--SLTOP #sltop,NONLOC [SLASH[LIST #smiddle,LAST #slast],REL [LIST #rmiddle,LAST #rlast],QUE[LIST #qmiddle,LAST #qlast]]],[LOCAL.CONT.HOOK.--SLTOP #sltop, NONLOC[SLASH[LIST #sfirst,LAST #smiddle],REL[LIST #rfirst,LAST #rmiddle],QUE[LIST #qfirst,LAST #qmiddle]]]>,LOCAL.CONT.HOOK.--SLTOP #sltop,NONLOC[SLASH[LIST #sfirst,LAST #slast],REL[LIST #rfirst,LAST #rlast],QUE[LIST #qfirst,LAST #qlast]]].
  4. lex_synsem := basic_lex_synsem & [LEX +].

The last of these expresses that the construction is lexical rather than phrasal (which includes clausal in the ERG).
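
As an aside, the LIST/LAST threading in ‘unspec_two_arg’ above is the standard difference-list idiom: the open tail of one argument’s SLASH (or REL or QUE) list is unified with the head of the next, so the lists are appended without copying.  Here is a loose Python analogy; the real thing is pure unification, which this sketch fakes with shared cells.

    # Loose analogy for the LIST/LAST (difference-list) threading in
    # 'unspec_two_arg' above; real TDL does this by unification, not mutation.
    class Cell:
        """A cons cell; value/rest are None while the cell is the open tail."""
        def __init__(self, value=None, rest=None):
            self.value, self.rest = value, rest

    class DiffList:
        """'list' points at the first cell, 'last' at the open tail cell."""
        def __init__(self):
            self.last = Cell()
            self.list = self.last

        def add(self, value):
            new_tail = Cell()
            self.last.value, self.last.rest = value, new_tail
            self.last = new_tail
            return self

        def splice(self, other):
            # 'Unify' our open tail cell with the other's first cell:
            # constant time, no copying -- the role of #smiddle above.
            self.last.value, self.last.rest = other.list.value, other.list.rest
            self.last = other.last
            return self

        def items(self):
            cell = self.list
            while cell is not self.last:
                yield cell.value
                cell = cell.rest

    a = DiffList().add("slash-1")
    b = DiffList().add("slash-2").add("slash-3")
    print(list(a.splice(b).items()))   # ['slash-1', 'slash-2', 'slash-3']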

Continuing with the definition of “the same” as an adjective, the following finally clarifies what it means to be a basic adjective:

  1. basic_adj_lex_synsem := basic_adj_abstr_lex_synsem & [LOCAL[ARG-S <#spr . #comps>,CAT[HEAD adj_or_intadj,VAL[SPR<#spr & synsem_min &[--MIN degree_rel,LOCAL[CAT[VAL[SPR *olist*,SPEC <[LOCAL.CAT.HS-LEX #hslex]>],MC na],CONT.HOOK.LTOP #ltop],NONLOC.SLASH 0-dlist,OPT +],anti_synsem_min &[--MIN degree_rel]>,COMPS #comps],HS-LEX #hslex],CONT.RELS.LIST <#keyrel,…>],LKEYS.KEYREL #keyrel & [LBL #ltop]].

Well, ‘clarifies’ might not have been the right word!  Essentially, it indicates that the adjective may have an optional degree specifier (which semantically modifies the predicate of the adjective) and that the predicate specified in the lexical entry becomes the predicate used in the MRS.  The rest is defined below:

  1. basic_adj_abstr_lex_synsem := basic_adj_synsem_lex_or_phrase & abstr_lex_synsem & [LOCAL.CONT.RELS.LIST.FIRST basic_adj_relation].
  2. basic_adj_synsem_lex_or_phrase := canonical_synsem & [LOCAL[AGR #agr,CAT[HEAD[MINORS.MIN basic_adj_rel],VAL[SUBJ <>,SPCMPS <>]],CONT.HOOK[INDEX non_conj_sement,XARG #agr]]].
  3. canonical_synsem := expressed_synsem & canonical_or_unexpressed.
  4. expressed_synsem := synsem.
  5. canonical_or_unexpressed := synsem_min0.
  6. synsem_min0 := synsem_min & [LOCAL mod_local,NONLOC non-local_min].

This ends with a bunch of basic setup types, except that the first two lines constrain the relation for an adjective to be basically adjectival.  Also on these first two lines, it specifies that its subject and its specifier, if any, must be completed (i.e., empty) and agree with its non-conjunctive argument (which is not to say that the argument cannot be conjunctive, but that the adjective modifies the conjunction as a whole, if so).  Whether or not it is expressed will determine whether there are any further predicates about its arguments or whether its unexpressed argument is identified by an otherwise unreferenced variable in any resulting MRS.

The lexical grounding of this type specification is given below, indicating that it may (or may not) have phonology (e.g., pronunciation, such as whether its onset is voiced) and if, how, and with what punctuation it may appear, if any.  In general, a semantic argument may be lexical or phrasal and optional, but if it appears it corresponds to some semantic index (think variable) in some sort of predicate in any resulting MRS.  (The *_min types do not constrain the values of their features any further.)

  1. basic_lex_synsem := abstr_lex_synsem & lex_or_nonlex_synsem.
  2. abstr_lex_synsem := canonical_lex_or_phrase_synsem & [LKEYS lexkeys].
  3. canonical_lex_or_phrase_synsem := canonical_synsem & lex_or_phrase.
  4. lex_or_phrase := synsem_min2.
  5. synsem_min2 := synsem_min1 & [LEX luk,MODIFD xmod_min,PHON phon_min,PUNCT punctuation_min].
  6. synsem_min1 := synsem_min0 & [OPT bool,--MIN predsort,--SIND *top*].
  7. adj_synsem_lex_or_phrase := basic_adj_synsem_lex_or_phrase &[LOCAL[CAT.HEAD.MOD <synsem_min &[LOCAL[CAT[HEAD basic_nom_or_ttl & [POSS -],VAL[SUBJ <>,SPR.FIRST synsem &[--MIN quant_or_deg_rel],COMPS <>],MC na],CONJ cnil],--SIND #ind]>,CONT.HOOK.XARG #ind]].

Note that an adjective is not possess-able, that it modifies something nominal (or a title), and that, if it has a specifier, the specifier is a quantifier or a degree word (e.g., ‘very’).  Again, an adjective cannot function as a main clause or be conjunctive (in and of itself).

Finally, if you look far above you will see that the basic semantics of an adjective with an additional semantic argument is ‘intersective’ (e.g., a ‘red ball’ denotes something that is both red and a ball), as in:

  1. isect_synsem := abstr_lex_synsem & [LOCAL[CAT.HEAD.MOD <[LOCAL intersective_mod,NONLOC.REL 0-dlist]>,CONT.HOOK.LTOP #hand],LKEYS.KEYREL.LBL #hand].

Here, the length 0 difference list and the following definitions indicate that intersective semantics do not accept anything but local modification:

  1. intersective_mod := mod_local.
  2. mod_local := *avm*.

AVM stands for ‘attribute value matrix’, which is the structure by which types and their features are defined (with nesting and unification constraints using # to indicate equality).
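
To make that concrete, here is a tiny recursive unifier over nested feature structures; a real TDL unifier also unifies the type at every node against the type hierarchy and maintains structure sharing for the # coreferences, both of which this sketch omits.

    # A tiny recursive unifier for attribute value matrices.  Real TDL
    # unifiers also unify the *type* at every node against the hierarchy and
    # maintain structure sharing for '#' coreferences; this sketch does not.
    def unify(a, b):
        if a == b:
            return a
        if isinstance(a, dict) and isinstance(b, dict):
            out = dict(a)
            for feature, value in b.items():
                out[feature] = unify(out[feature], value) if feature in out else value
            return out
        raise ValueError(f"unification failure: {a!r} vs {b!r}")

    adjective = {"CAT": {"HEAD": "adj", "VAL": {"COMPS": "empty"}}}
    predicative = {"CAT": {"HEAD": "adj"}, "PRD": "+"}
    print(unify(adjective, predicative))
    # {'CAT': {'HEAD': 'adj', 'VAL': {'COMPS': 'empty'}}, 'PRD': '+'}

    try:
        unify({"PRD": "+"}, {"PRD": "-"})
    except ValueError as err:
        print(err)   # unification failure: '+' vs '-'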

By now you’re probably getting the idea that there is a fairly significant model of the English language here, including its lexical and syntactic aspects, but if you look closely there is a lot about semantics, too.

Simple problems with the semantic web

The standard for defining ontologies these days is OWL, typically edited in Protege.  Unfortunately, OWL lacks any notion of exceptions in inheritance or any other notion of defeasibility.

So, although you may want to say that birds fly, your ontology will be broken (or become much more complicated) when you realize there are birds that can’t fly, such as penguins or ostriches, or even sick or injured birds.

Practically speaking, you need something like courteous logic or the defeasibility in SILK to handle this (as could any 1980s expert system shell, or even earlier frame systems).  OWL is very hard on mortal man (e.g., mainstream IT) in this regard.
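
Here is a minimal sketch of the defeasible behavior in question, in which the most specific class with an opinion wins; this is roughly what those older frame systems did and what OWL alone cannot express.

    # Minimal sketch of defeasible inheritance: the most specific class
    # with an opinion wins.
    TAXONOMY = {"penguin": "bird", "robin": "bird", "bird": "animal"}
    DEFAULTS = {("bird", "flies"): True, ("penguin", "flies"): False}

    def lookup(cls, prop):
        while cls is not None:
            if (cls, prop) in DEFAULTS:
                return DEFAULTS[(cls, prop)]   # nearest class wins
            cls = TAXONOMY.get(cls)            # climb to the superclass
        return None                            # unknown

    print(lookup("robin", "flies"))    # True: inherited from bird
    print(lookup("penguin", "flies"))  # False: the exception overrides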

How can I tell OWL that a pronoun is a noun but that pronouns are a closed class of words, unlike nouns, verbs, adjectives, and adverbs (in general)?  Well, I’ll have to tell it about open-class nouns versus closed-class nouns.  What a pain!

This is why we use Protege primarily as a drafting tool and, for example, SILK to do the reasoning.  Non-defeasible description logic and first-order reasoners are difficult to get along with in practice (and make sustainable knowledge repositories too difficult, which obviously inhibits adoption).

Google vs. Facebook and Bing (again)

Almost a year ago, I wrote about semantics and social networking as threats to Google.  In that post, I referenced a prior article on investments in natural language processing, such as Microsoft’s acquisition of Powerset, which is now part of Bing.

Today, there are two articles I recommend.  The first addresses the extent to which Google’s Super Bowl ad is a response to the threat from Bing.  The second addresses Facebook overtaking Google.

Accenture, Public Policy and Governance at Oracle

Some time ago I spoke with public sector leadership at Oracle and Accenture about applications in Health and Human Services.  Oracle was already my client, with what was then Haley Authority (now Oracle Policy Automation) integrated within Siebel CRM.  Lagan was also one of my clients; it competed with Oracle and others, such as Curam Software, for public sector case management applications.  It was obvious then that the then market-leading approach of Curam Software, which largely relied on IBM Global Services to codify the policies that determine eligibility and levels of benefit for various programs, would not be viable for much longer.  Oracle and Lagan were going to change the playing field with a more accessible and knowledge-centric approach based on Haley’s natural language business rules management system.

At the time, a battle was going on in one state (Kansas, as I recall) among these three companies; it went Oracle’s way thanks to Accenture and support from Haley.  We were also working with them on a larger opportunity in Ontario.  Continue reading “Accenture, Public Policy and Governance at Oracle”