SBVR in OWL

In preparation for generating RIF and SBVR from the Linguist, we have produced an OWL ontology for the pertinent aspects of the SBVR specification.  We hope that this is helpful to others and would sincerely appreciate any corrections or comments on how to improve it.
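For a feel of what such an ontology covers, here is a minimal sketch in Python using rdflib. The namespace is hypothetical and the class and property names are my own illustrative picks from SBVR’s vocabulary (“meaning”, “concept”, “noun concept”, “fact type”, “representation”), so consult the actual ontology rather than this sketch:

    from rdflib import Graph, Namespace, OWL, RDF, RDFS

    SBVR = Namespace("http://example.org/sbvr#")  # hypothetical namespace

    g = Graph()
    g.bind("sbvr", SBVR)

    # SBVR's meaning-versus-representation split, modeled as OWL classes.
    for name in ("Meaning", "Concept", "NounConcept", "FactType", "Representation"):
        g.add((SBVR[name], RDF.type, OWL.Class))

    # Taxonomy: noun concepts and fact types are concepts; concepts are meanings.
    g.add((SBVR.Concept, RDFS.subClassOf, SBVR.Meaning))
    g.add((SBVR.NounConcept, RDFS.subClassOf, SBVR.Concept))
    g.add((SBVR.FactType, RDFS.subClassOf, SBVR.Concept))

    # A representation (e.g., a designation such as a term) expresses a meaning.
    g.add((SBVR.expresses, RDF.type, OWL.ObjectProperty))
    g.add((SBVR.expresses, RDFS.domain, SBVR.Representation))
    g.add((SBVR.expresses, RDFS.range, SBVR.Meaning))

    print(g.serialize(format="turtle"))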

Paul

Semantic Technology & Business Conference (SemTechBiz)

Benjamin Grosof and I will be presenting the following review of recent work at Vulcan towards Digital Aristotle as part of Project Halo at SemTechBiz in San Francisco the first week of June.

Acquiring deep knowledge from text

We show how users can rapidly specify large bodies of deep logical knowledge starting from practically unconstrained natural language text.

English sentences are semi-automatically interpreted into predicate calculus formulas and into logic programs in SILK, an expressive knowledge representation (KR) and reasoning system. SILK tolerates the practically inevitable logical inconsistencies that arise in large knowledge bases acquired from, and maintained by, distributed users of varying linguistic and semantic skill, who collaboratively disambiguate grammar, logical quantification and scope, co-references, and word senses.
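As an illustrative (not actual) example of such an interpretation, a sentence like “Every channel protein provides a passageway” might come out as:

    ∀x (channelProtein(x) → ∃y (passageway(y) ∧ provides(x, y)))

with interactive disambiguation settling, e.g., whether “a passageway” takes narrow scope (a possibly different passageway per protein, as above) or wide scope (one shared passageway).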

The resulting logic is generated as Rulelog, a draft standard under the W3C Rule Interchange Format’s Framework for Logic Dialects, and relies on SILK’s support for FOL-like formulas, polynomial-time inference, and exceptions to answer questions such as those found in advanced placement exams.

We present a case study in understanding cell biology based on a first-year college level textbook.

Deep QA

Our efforts at acquiring deep knowledge from a college biology text have enabled us to answer a number of questions that are beyond what has been previously demonstrated.

For example, we’re answering questions like:

  1. Are the passage ways provided by channel proteins hydrophilic or hydrophobic?
  2. Will a blood cell in a hypertonic environment burst?
  3. If a Paramecium swims from a hypotonic environment to an isotonic environment, will its contractile vacuole become more active?

A couple of these are at higher levels on the Bloom scale of cognitive skills than Watson can reach (and Watson already reaches significantly higher than search engines).
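For instance, answering the second question requires defeasible background knowledge along these lines (my illustrative sketch, not the system’s actual axioms):

    animalCell(x) ∧ inHypotonicEnv(x) ⇒ bursts(x)      (defeasibly: water flows in)
    animalCell(x) ∧ inHypertonicEnv(x) ⇒ ¬bursts(x)    (water flows out; the cell shrivels)

so a blood cell in a hypertonic environment does not burst, absent an applicable exception.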

As some other posts have shown in images, we can translate completely natural sentences into formal logic.  We actually do the reasoning using Vulcan’s SILK, which has great capabilities, including defeasibility.  We can also output to RIF or SBVR, but temporal semantics, modality, and the need for defeasibility favor SILK or Cyc for the best reasoning and QA performance.

One thing in particular is worth noting: this approach handles causality and temporal logic better than most controlled natural language systems even attempt, whether they translate to a business rules engine or to a logical formalism such as first-order or description logic.  The approach promises better application development and knowledge management capabilities for more of the business process management and complex event processing markets.
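As a hedged sketch of what handling causality and temporal logic means here, consider an event-calculus-style rendering of the Paramecium question (the predicates are illustrative, not the system’s):

    holdsAt(in(p, e), t) ∧ hypotonic(e) → holdsAt(vacuoleActivity(p, high), t)
    holdsAt(in(p, e), t) ∧ isotonic(e) → holdsAt(vacuoleActivity(p, low), t)

so a Paramecium swimming from a hypotonic to an isotonic environment ends up with a less, not more, active contractile vacuole after the move.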

Understanding English promotes better policies and requirements

Capturing some policies from a publication by the Department of Health and Human Services recently turned up the following….
It’s probably the case that there are more specific lists than just “some list” or “any list”, as suggested below.
This is one of the benefits of applying deep natural language understanding to policy statements: it helps you say precisely what you mean, even if you are not using a rule or logic engine but are simply trying to articulate your policies or requirements clearly and precisely.
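To illustrate with my own notation (not HHS’s wording), a policy phrase like “research appearing on some list”, for a given research activity r, is ambiguous between:

    ∃l (list(l) ∧ appearsOn(r, l))       — any list whatsoever suffices
    appearsOn(r, theDesignatedList)      — one specific, named list is intended

Deep parsing forces the author to choose a reading, which is exactly the discipline that policy statements need.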

HHS on expedited review

Logic from the English of Science, Government, and Business

Our software is translating even long and complicated sentences, from regulations to textbooks, into formal logic (i.e., not necessarily first-order logic, but more general predicate calculus).  As you can see below, we can render this understanding in various logical formalisms: the defeasible first-order logic we are applying in Vulcan’s Project Halo, classical first-order logic and related standards such as RIF or SBVR, and axioms that build or extend an ontology or description logic (e.g., OWL-DL).
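As a hedged illustration of targeting different formalisms, a sentence like “Every channel protein provides a hydrophilic passageway” (the vocabulary is mine, for illustration) can be rendered as predicate calculus or as a description-logic axiom:

    ∀x (channelProtein(x) → ∃y (passageway(y) ∧ hydrophilic(y) ∧ provides(x, y)))

    ChannelProtein ⊑ ∃provides.(Passageway ⊓ Hydrophilic)

The first suits first-order and defeasible reasoners; the second suits OWL-DL classification.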

We’re excited about these capabilities in various applications, such as in advancing science and education at Vulcan and formally understanding, analyzing and automating policy and regulations in enterprises.

English translated into predicate calculus


a sentence understood by Automata

an unambiguously, formally understood sentence


English translated into SILK and Prolog


Project Sherlock

Working as part of Vulcan’s Project Halo[1], Automata is applying a natural language understanding system that translates carefully formulated sentences into formal logic so as to answer questions that typically require deeper knowledge and inference than demonstrated by Watson.

The objective over the next three quarters is to acquire enough knowledge from the 9th edition of Campbell’s Biology textbook to demonstrate three things.

  • First, that the resulting system answers, for example, biology advanced placement (AP) exam questions more competently than existing systems (e.g., Aura[2] or Inquire[3]).
  • Second, that knowledge from certain parts of the textbook is effectively translated from English into formal knowledge with sufficient breadth and depth of coverage and semantics.
  • Third, that the knowledge acquisition process proves efficient and accessible enough for less highly skilled knowledge engineers to accelerate knowledge acquisition beyond 2012.

The second of these includes a substantial ontology of the background knowledge students are expected to have in order to comprehend the selected parts of the textbook, expressed using a combination of OWL, logic, and English sentences from sources other than the textbook.

Automata is hiring logicians, linguists, and biologists to work as consultants, contractors, or employees for:

  • Interactive tree-banking and word-sense disambiguation of several thousand sentences.[4]
  • Extending its lexical ontology and a broad-coverage grammar of English with additional vocabulary and deeper semantics, especially concerning cellular biology and related scientific knowledge including chemistry, physics, and math.
  • Maturing its upper and middle ontology of domain-independent knowledge using OWL in combination with various other technologies, including description logic, first-order logic, higher-order logic, modal logic, and defeasible logic.[5]
  • Enhancing its platform for text-driven knowledge engineering towards a collaborative wiki-like architecture for self-aware content in scientific education and biomedical applications.

Terms of engagement are flexible, ranging from small units of work to full-time employment.  We are based in Pittsburgh, Pennsylvania, and Vulcan is headquartered in Seattle, Washington, but the team is distributed across the country and overseas.

Please contact Paul Haley by e-mail to his first name at this domain.


[1] Vulcan: http://www.vulcan.com/TemplateCompany.aspx?contentId=54; Project Halo: http://www.projecthalo.com/
Video introduction/overview: http://videolectures.net/aaai2011_gunning_halobook/

[2] Aura: http://www.ai.sri.com/project/aura

[3] Inquire: http://www.franz.com/success/customer_apps/artificial_intelligence/aura.lhtml

[4] tree-banking and WSD: http://www.omg.org/spec/SBVR and http://en.wikipedia.org/wiki/Word-sense_disambiguation

[5] e.g., SILK (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.174.1796) and SBVR (http://www.omg.org/spec/SBVR)

NLP: depictive in an HPSG lexicon?

We’re working with the English Resource Grammar (ERG), OWL, and Vulcan’s SILK to educate the machine by translating textbooks into defeasible logic.  Part of this involves an ontology that models semantics more deeply than the ERG, which is based on head-driven phrase structure grammar (HPSG).  HPSG provides deeper parsing and, with the ERG and the DELPH-IN infrastructure, a simple underspecified semantic representation called minimal recursion semantics (MRS).

We’re having a great time using OWL to clarify and enrich the semantics of the rich model underlying the ERG.  Here’s an example, FYI.  If you’d like to know more (or help), please drop us a line!  Overall the project will demonstrate our capabilities for transforming everyday sentences into RIF and business rule languages using SBVR extended with defeasibility and other capabilities, all modeled in the same OWL ontology.

What triggered this blog entry was a bit of a surprise in seeing that whether or not an adjective can be used depictively is sometimes encoded in the lexicon.  This is one of the problems of TDL versus a description-logic-based model with more expressiveness: it results in more lexical entries than necessary, as others have discussed in contrasting TDL with the Attribute Logic Engine (ALE), for example.
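To make the contrast concrete, here is a minimal sketch in Python using rdflib of how a description-logic model can carry depictive usability as a property of a single lexical entry rather than splitting the type hierarchy.  All names here are hypothetical illustrations, not the ERG’s or our ontology’s:

    from rdflib import Graph, Literal, Namespace, OWL, RDF, RDFS, XSD

    LEX = Namespace("http://example.org/lexicon#")  # hypothetical namespace

    g = Graph()
    g.bind("lex", LEX)

    # One class of adjectival lexical entries...
    g.add((LEX.AdjectiveEntry, RDF.type, OWL.Class))

    # ...with depictive usability as a boolean property, rather than a
    # distinct TDL lexical type for each combination of features.
    g.add((LEX.usableDepictively, RDF.type, OWL.DatatypeProperty))
    g.add((LEX.usableDepictively, RDFS.domain, LEX.AdjectiveEntry))
    g.add((LEX.usableDepictively, RDFS.range, XSD.boolean))

    # Entries differing only in that flag still share a single class
    # ("raw" is a classic depictive adjective: "He ate the fish raw").
    g.add((LEX.same_a1, RDF.type, LEX.AdjectiveEntry))
    g.add((LEX.same_a1, LEX.usableDepictively, Literal(False)))
    g.add((LEX.raw_a1, RDF.type, LEX.AdjectiveEntry))
    g.add((LEX.raw_a1, LEX.usableDepictively, Literal(True)))

    print(g.serialize(format="turtle"))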

In trying to model the semantics of words like ‘same’ and ‘different’, we are scratching our heads about these lines from the ERG’s lexicon:

  1. same_a1 := aj_pp_i-cmp-sme_le & [ ORTH < "same" >, SYNSEM [ LKEYS.KEYREL.PRED "_same_a_as_rel", …
  2. the_same_a1 := aj_-_i-prd-ndpt_le & [ ORTH < "the", "same" >, SYNSEM [ LKEYS.KEYREL.PRED "_the+same_a_1_rel", …
  3. the_same_adv1 := av_-_i-vp-po_le & [ ORTH < "the", "same" >, SYNSEM [ LKEYS.KEYREL.PRED "_the+same_a_1_rel", …
  4. exact_a2 := aj_pp_i-cmp-sme_le & [ ORTH < "exact" >, SYNSEM [ LKEYS.KEYREL.PRED "_exact_a_same-as_rel"…

One of the interesting things about lexicalized grammars is that lexical entries (i.e., ‘words’) are described with almost arbitrary combinations of their lexical, syntactic, and semantic characteristics.

The preceding code is expressed in the type description language (TDL) used by the Lisp-based LKB (and its C++ counterpart, PET), which are unification-based parsers that produce a chart of plausible parses with some efficiency.  What is given above is already deeper than what you can expect from a statistical parser (though richer descriptions of lexical entries promise to make statistical parsing much better, too).

Unfortunately, there is no available documentation on why the ERG was designed as it is, so the meaning of the above is difficult to interpret.  For example, the types of lexical entries (the symbols ending in ‘_le’) referenced above are defined as follows:

  1. aj_pp_i-cmp-sme_le := basic_adj_comp_lexent & [SYNSEM[LOCAL[CAT[HEAD superl_adj &[PRD -,MOD <[LOCAL.CAT.VAL.SPR <[–MIN def_or_demon_q_rel]>]>],VAL.SPR.FIRST.–MIN much_deg_rel],CONT.RELS <!relation,relation!>],MODIFD.LPERIPH bool,LKEYS[ALTKEYREL.PRED comp_equal_rel,–COMPKEY _as_p_comp_rel]]].
  2. aj_-_i-prd-ndpt_le := nonc-hm-nab & [SYNSEM basic_adj_abstr_lex_synsem & [LOCAL[CAT[HEAD adj & [PRD +,MINORS[MIN norm_adj_rel,NORM norm_rel],TAM #tam,MOD < anti_synsem_min >],VAL[SPR.FIRST anti_synsem_min,COMPS < >],POSTHD +],CONT[HOOK[LTOP #ltop,INDEX #arg0 &[E #tam],XARG #xarg],RELS <! #keyrel & adj_relation !>,HCONS <! !>]],NONLOC non-local_none,MODIFD notmod &[LPERIPH bool],LKEYS.KEYREL #keyrel &[LBL #ltop,ARG0 #arg0,ARG1 #xarg & non_expl-ind]]].

Needless to say, that’s a mouthful!  Chasing this down, we find that “the same”, which uses type #2 above, is defined in terms of the following lexical types:

  1. nonc-hm-nab := nonc-h-nab & mcna.
  2. nonc-h-nab := nonconj & hc-to-phr & non_affix_bearing.
  3. mcna := word & [ SYNSEM.LOCAL.CAT.MC na ].

Which is to say that it is non-conjunctive, complements a head to form a phrase, can’t be affixed, cannot constitute a main clause, and is a word.

The fact that the lexical entry for “the same” is adjectival is given by the definitions of the following type(s) used in the SYNSEM feature:

  1. basic_adj_comp_lexent := compar_superl_adj_word & [SYNSEM adj_unsp_ind_twoarg_synsem & [LOCAL[CAT.VAL[COMPS <canonical_or_unexpressed & [–MIN #cmin,LOCAL [CAT basic_pp_cat,CONJ cnil,CONT.HOOK [LTOP #ltop,INDEX #ind]]]>],CONT.HOOK [ LTOP #ltop, XARG #xarg]],LKEYS [ KEYREL.ARG1 #xarg,ALTKEYREL.ARG2 #ind,–COMPKEY #cmin]]].
  2. compar_superl_adj_word := nonc-hm-nab & [SYNSEM adj_unsp_ind_synsem & [LOCAL[CAT[HEAD[MOD <[–SIND #ind & non_expl]>,TAM #tam,MINORS.MIN abstr_adj_rel],VAL.SPR.FIRST.LOCAL.CONT.HOOK.XARG #altarg0],CONT[HOOK[XARG #ind,INDEX #arg0 & [E #tam]],RELS.LIST <[LBL #hand,ARG1 #ind],#altkeyrel & [LBL #hand,ARG0 event & #altarg0,ARG1 #arg0],…>]],LKEYS.ALTKEYREL #altkeyrel]].

Which is to say that it is a comparative or superlative adjectival word (even though it consists of two lexemes in its ‘orthography’) that involves two semantic arguments, including one complement that may be an unexpressed prepositional phrase.  A comparative or superlative adjective, in turn, is non-conjunctive, complements a head to form a phrase, is non-affix-bearing (?), and is non-clausal, as defined by the type ‘nonc-hm-nab’ above.

The types used in the syntax-and-semantics (i.e., SYNSEM) feature of the two lexical types are defined as follows (none of which is documented):

  1. adj_unsp_ind_twoarg_synsem := adj_unsp_ind_synsem & two_arg.
  2. adj_unsp_ind_synsem := basic_adj_lex_synsem & lex_synsem & adj_synsem_lex_or_phrase & isect_synsem & [LOCAL.CONT.HOOK.INDEX #ind,LKEYS.KEYREL.ARG0 #ind].

In a moment, we’ll discuss the types used in the second of these, but first, some basics on the semantics that are mixed with the syntax above.

In effect, the above indicates that a new ‘elementary predication’ will be needed in the MRS to represent the adjectival relationship in the logic derived in the course of parsing (i.e., that’s what ‘unsp_ind’ means, although it’s not documented, which I will try not to bemoan much further.)
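For orientation, the MRS for a phrase with an ordinary intersective adjective, say “a hydrophilic passageway”, would include elementary predications roughly like this (simplified and hand-written, not actual ERG output):

    h1:_a_q(x, h2, h3),  h4:_hydrophilic_a_1(e, x),  h4:_passageway_n_1(x),  h2 =q h4

The adjective and noun share the label h4 (intersective modification), and the quantifier’s restriction h2 is linked to h4 only by a ‘qeq’ scope constraint, which is what keeps the representation underspecified.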

The following indicates that the newly formed elementary predicate is not (initially) within any scope and that it has two arguments whose semantics (i.e., their RELations) are concatenated for propagation into the list of elementary predications that will constitute the MRS for any parses found.

  1. two_arg := basic_two_arg & [LOCAL.CONT.HCONS <! !>].
  2. basic_two_arg := unspec_two_arg & lex_synsem.
  3. unspec_two_arg := basic_lex_synsem & [LOCAL.ARG-S <[LOCAL.CONT.HOOK.–SLTOP #sltop,NONLOC [SLASH[LIST #smiddle,LAST #slast],REL [LIST #rmiddle,LAST #rlast],QUE[LIST #qmiddle,LAST #qlast]]],[LOCAL.CONT.HOOK.–SLTOP #sltop, NONLOC[SLASH[LIST #sfirst,LAST #smiddle],REL[LIST #rfirst,LAST #rmiddle],QUE[LIST #qfirst,LAST #qmiddle]]]>,LOCAL.CONT.HOOK.–SLTOP #sltop,NONLOC[SLASH[LIST #sfirst,LAST #slast],REL[LIST #rfirst,LAST #rlast],QUE[LIST #qfirst,LAST #qlast]]].
  4. lex_synsem := basic_lex_synsem & [LEX +].

The last of these expresses that the construction is lexical rather than phrasal (which includes clausal in the ERG).

Continuing with the definition of “the same” as an adjective, the following finally clarifies what it means to be a basic adjective:

  1. basic_adj_lex_synsem := basic_adj_abstr_lex_synsem & [LOCAL[ARG-S <#spr . #comps>,CAT[HEAD adj_or_intadj,VAL[SPR<#spr & synsem_min &[–MIN degree_rel,LOCAL[CAT[VAL[SPR *olist*,SPEC <[LOCAL.CAT.HS-LEX #hslex]>],MC na],CONT.HOOK.LTOP #ltop],NONLOC.SLASH 0-dlist,OPT +],anti_synsem_min &[–MIN degree_rel]>,COMPS #comps],HS-LEX #hslex],CONT.RELS.LIST <#keyrel,…>],LKEYS.KEYREL #keyrel & [LBL #ltop]].

Well, ‘clarifies’ might not have been the right word!  Essentially, it indicates that the adjective may have an optional degree specifier (which semantically modifies the predicate of the adjective) and that the predicate specified in the lexical entry becomes the predicate used in the MRS.  The rest is defined below:

  1. basic_adj_abstr_lex_synsem := basic_adj_synsem_lex_or_phrase & abstr_lex_synsem & [LOCAL.CONT.RELS.LIST.FIRST basic_adj_relation].
  2. basic_adj_synsem_lex_or_phrase := canonical_synsem & [LOCAL[AGR #agr,CAT[HEAD[MINORS.MIN basic_adj_rel],VAL[SUBJ <>,SPCMPS <>]],CONT.HOOK[INDEX non_conj_sement,XARG #agr]]].
  3. canonical_synsem := expressed_synsem & canonical_or_unexpressed.
  4. expressed_synsem := synsem.
  5. canonical_or_unexpressed := synsem_min0.
  6. synsem_min0 := synsem_min & [LOCAL mod_local,NONLOC non-local_min].

Which ends with a bunch of basic setup types, except that the first two lines constrain the relation for an adjective to be ‘basically adjectival’.  Also on those first two lines, it specifies that the adjective’s subject and specifier, if any, must be completed (i.e., empty) and that it agrees with its non-conjunctive argument (which is not to say that the argument cannot be conjunctive, but that the adjective modifies the conjunction as a whole, if so).  Whether or not the argument is expressed determines whether there are any further predicates about it or whether an unexpressed argument is identified by an otherwise unreferenced variable in any resulting MRS.

The lexical grounding of this type specification is given below, indicating that it may (or may not) have phonology (e.g., pronunciation, such as whether its onset is voiced) and if, how, and with what punctuation it may appear.  In general, a semantic argument may be lexical or phrasal and optional, but if it appears it corresponds to some semantic index (think: variable) in some sort of predicate in any resulting MRS.  (The *_min types do not constrain the values of their features any further.)

  1. basic_lex_synsem := abstr_lex_synsem & lex_or_nonlex_synsem.
  2. abstr_lex_synsem := canonical_lex_or_phrase_synsem & [LKEYS lexkeys].
  3. canonical_lex_or_phrase_synsem := canonical_synsem & lex_or_phrase.
  4. lex_or_phrase := synsem_min2.
  5. synsem_min2 := synsem_min1 & [LEX luk,MODIFD xmod_min,PHON phon_min,PUNCT punctuation_min].
  6. synsem_min1 := synsem_min0 & [OPT bool,–MIN predsort,–SIND *top*].
  7. adj_synsem_lex_or_phrase := basic_adj_synsem_lex_or_phrase &[LOCAL[CAT.HEAD.MOD <synsem_min &[LOCAL[CAT[HEAD basic_nom_or_ttl & [POSS -],VAL[SUBJ <>,SPR.FIRST synsem &[–MIN quant_or_deg_rel],COMPS <>],MC na],CONJ cnil],–SIND #ind]>,CONT.HOOK.XARG #ind]].

Note that an adjective is not possess-able, that it modifies something nominal (or a title), and that if it has a specifier, the specifier is a quantifier or degree word (e.g., ‘very’).  Again, an adjective cannot function as a main clause or be conjunctive (in and of itself).

Finally, if you look far above you will see that the basic semantics of an adjective with an additional semantic argument is ‘intersective’, as in:

  1. isect_synsem := abstr_lex_synsem & [LOCAL[CAT.HEAD.MOD <[LOCAL intersective_mod,NONLOC.REL 0-dlist]>,CONT.HOOK.LTOP #hand],LKEYS.KEYREL.LBL #hand].

Here, the length-0 difference list and the following definitions indicate that intersective semantics do not accept anything but local modification:

  1. intersective_mod := mod_local.
  2. mod_local := *avm*.

AVM stands for ‘attribute value matrix’, which is the structure by which types and their features are defined (with nesting and unification constraints using # to indicate equality).
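An aside on the 0-dlist and the LIST/LAST pairs appearing throughout: these are difference lists, the standard unification-grammar trick for splicing semantic contributions together in constant time.  A minimal Python sketch of the idea (illustrative only, not how the LKB implements it):

    class DList:
        """A difference list: a linked list plus a pointer to its open tail."""

        def __init__(self):
            cell = [None, None]  # the open tail
            self.head = cell     # LIST
            self.last = cell     # LAST; LIST == LAST means empty (a "0-dlist")

        def append(self, value):
            new_tail = [None, None]
            self.last[0] = value
            self.last[1] = new_tail
            self.last = new_tail  # O(1): no traversal, just advance LAST

        def to_list(self):
            out, cell = [], self.head
            while cell is not self.last:
                out.append(cell[0])
                cell = cell[1]
            return out

    rels = DList()                # e.g., the RELS of a phrase being built
    rels.append("_a_q_rel")       # each daughter's contributions are
    rels.append("_same_a_1_rel")  # spliced on without copying the list
    print(rels.to_list())         # ['_a_q_rel', '_same_a_1_rel']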

By now you’re probably getting the idea that there is a fairly significant model of the English language here, including its lexical and syntactic aspects, but if you look, there is a lot about semantics, too.

Recruiting: IBM Ilog vs. JBoss Drools

I received notice today of a Victorian government position offering $106k, as follows:

BRMS Developer (WebSphere ILOG JRules)

You will have proven experience as a BRMS Developer within a Java/JEE environment using IBM’s WebSphere ILOG JRules platform. You will have implementation experience using integration technologies (e.g. Web Services, JMS) and have the ability to liaise with and engage key stakeholders.  Ideally you will also have knowledge and/or exposure to IBM’s WebSphere integration suite (including the MQ Series).

This got a reaction out of me since we’re looking for people (although emphasizing logic, semantics, and English rather than any particular engine).  At first, I thought it must be a Java job, but the mention of stakeholder engagement indicates this is a full-fledged knowledge engineering position.

$100k for anyone with strong, specific experience seems low.  For someone who can understand objectives and translate requirements into operational business logic, it seems lower.

I’m surprised there isn’t more of an Ilog premium, too.  JBoss Drools consultants can make more than this.

Blaze down in Fair Isaac’s Q1 2012

FICO reported 9% growth in revenues year over year.

  • the bulk of revenues and all the growth was in pre-configured Decision Management applications
  • FICO score revenues were half as much, with B2B growing as B2C (myFICO) waned
  • tools revenues were less than half again as much and flat
    • optimization (XPress) was up
    • Blaze Advisor was down

This is in sharp contrast to the success that Ilog has enjoyed under the IBM umbrella.

Blaze Advisor no longer seems to make sense as a stand-alone tool.  Applications are great, and so are combinations of BI/optimization/rules, but if the BRMS tool is to survive independently it needs to find more traction, perhaps outside of Fair Isaac.

Pursuing a decision tree down a rat hole

Fair Isaac’s recent press release touts the “key differentiator” of Blaze Advisor 7.0 as:

the innovative Decision Graph visual metaphor, a decision tree management solution that makes even the most complex rule sets easier to manage and explain

Of course, a decision tree is really more like a root system (i.e., the tree is upside down).  So what this capability is particularly good for is explaining how some pretty structured logic gets wherever it ends up, which FICO touts:

The new capability is especially valuable to businesses that need to be able to explain their decision logic to external auditors and regulators, or to internal parties such as senior management.

Unfortunately, this capability is only good for business logic that has been heavily analyzed and transformed from the more natural forms of knowledge that stakeholders and regulators understand and communicate about.

FICO’s suggestion is that after operational guidelines and regulations have been laboriously translated (literally and, hopefully, appropriately) into if-then-else logic (i.e., decision trees), viewing the path through the “code” will be informative to stakeholders and suitable for regulators.  That seems like quite a stretch given the immediately following sentence, which indicates the complexity of using the metaphor in the first place:

Decision Graph gives business analysts a more intuitive way to view and navigate decision trees, which can reach 10,000 nodes or more.

Lots of people are attracted to such “visual programming” metaphors, but they are extremely limited in their logical expressiveness and, therefore, in their utility.  Still, if you are using induction technology (such as FICO’s) to produce very large decision trees (independent of governing policies or regulations), such a tool can be useful, if only to understand what the machine “discovered” or to explain “how” it reached a decision, albeit post facto, in ways that may or may not be compliant with governing doctrine.
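For contrast, here is the genre at toy scale in Python with scikit-learn (purely illustrative; FICO’s induction tooling is proprietary and unrelated to this library).  The printed rules are faithful to the induced model but read as threshold tests, not policy:

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Induce a small tree from data -- no governing policy involved.
    iris = load_iris()
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    tree.fit(iris.data, iris.target)

    # The textual 'explanation' is a trace of threshold comparisons:
    # accurate, but post facto, and far from regulatory language.
    print(export_text(tree, feature_names=iris.feature_names))

Now imagine the same at 10,000 nodes.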

The press release also talks about using this structure to experiment or optimize certain decisions by simulation, which is good stuff.  FICO has long led in this area, especially in the markets it focuses on (i.e., B2C financial).  This would have been a better central point, imo.

Overall, I agree with the following quote embedded within the release:

“Business rules management provides a shared platform for CIOs and business managers to help their enterprises stay competitive, and making business logic clearer to all parties is an essential part of that collaboration,” said Jim Sinur, a vice president at Gartner Research specializing in business rules management systems. “Better visualization of business logic can provide a huge uplift for companies that are looking for ways to improve business decisions.”

Showing thousands of nodes in a tree or cells in a table does not accomplish the appropriate goal (i.e., effective collaboration) of the first clause, however.  And Jim did not say that decision trees provide effective visualization.

My take: the best approach is to guarantee that the statements of business policy and regulation are unambiguously understood by machine intelligence that automatically translates them into completely reliable systems.

That is, the best visualization for general purposes may be plain English.