SBVR in OWL

In preparation for generating RIF and SBVR from the Linguist, we have produced an OWL ontology for the pertinent aspects of the SBVR specification.  We hope that this is helpful to others and would sincerely appreciate any corrections or comments on how to improve it.
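
To give a flavor of what the ontology covers, here is a minimal sketch in Python using rdflib. The namespace is hypothetical and the class names are our simplifications of SBVR's noun concepts, fact types, and meanings, not the normative vocabulary:

    # A minimal sketch of declaring SBVR-like notions in OWL using rdflib.
    # The namespace and names are illustrative, not the normative SBVR vocabulary.
    from rdflib import Graph, Namespace, RDF, RDFS
    from rdflib.namespace import OWL

    SBVR = Namespace("http://example.org/sbvr#")  # hypothetical namespace

    g = Graph()
    g.bind("sbvr", SBVR)

    # SBVR distinguishes noun concepts from verb concepts (fact types),
    # both of which are meanings.
    g.add((SBVR.Meaning, RDF.type, OWL.Class))
    g.add((SBVR.NounConcept, RDF.type, OWL.Class))
    g.add((SBVR.FactType, RDF.type, OWL.Class))
    g.add((SBVR.NounConcept, RDFS.subClassOf, SBVR.Meaning))
    g.add((SBVR.FactType, RDFS.subClassOf, SBVR.Meaning))

    # A fact type has roles played by noun concepts.
    g.add((SBVR.hasRole, RDF.type, OWL.ObjectProperty))
    g.add((SBVR.hasRole, RDFS.domain, SBVR.FactType))

    print(g.serialize(format="turtle"))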

Paul

Semantic Technology & Business Conference (SemTechBiz)

Benjamin Grosof and I will be presenting the following review of recent work at Vulcan towards Digital Aristotle, as part of Project Halo, at SemTechBiz in San Francisco during the first week of June.

Acquiring deep knowledge from text

We show how users can rapidly specify large bodies of deep logical knowledge starting from practically unconstrained natural language text.

English sentences are semi-automatically interpreted into predicate calculus formulas and into logic programs in SILK, an expressive knowledge representation (KR) and reasoning system. SILK tolerates the practically inevitable logical inconsistencies that arise in large knowledge bases acquired from, and maintained by, distributed users with varying linguistic and semantic skills, who collaboratively disambiguate grammar, logical quantification and scope, co-references, and word senses.
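
As a toy illustration (ours, not actual system output), a sentence such as “Every eukaryotic cell has a nucleus” might be interpreted as the predicate calculus formula:

    \forall x\,\bigl(\mathit{eukaryoticCell}(x) \rightarrow \exists y\,(\mathit{nucleus}(y) \wedge \mathit{has}(x, y))\bigr)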

The resulting logic is generated as Rulelog, a draft standard dialect under the W3C Rule Interchange Format’s Framework for Logic Dialects (RIF-FLD), and relies on SILK’s support for FOL-like formulas, polynomial-time inference, and exceptions to answer questions such as those found in advanced placement exams.

We present a case study in understanding cell biology based on a first-year college level textbook.

Deep QA

Our efforts at acquiring deep knowledge from a college biology text have enabled us to answer a number of questions that are beyond what has been previously demonstrated.

For example, we’re answering questions like:

  1. Are the passage ways provided by channel proteins hydrophilic or hydrophobic?
  2. Will a blood cell in a hypertonic environment burst?
  3. If a Paramecium swims from a hypotonic environment to an isotonic environment, will its contractile vacuole become more active?

A couple of these are at higher levels on Bloom’s scale of cognitive skills than Watson can reach (which is itself significantly higher than search engines can reach).

As some other posts have shown in images, we can translate completely natural sentences into formal logic.  We actually do the reasoning using Vulcan’s SILK, which has great capabilities, including defeasibility.  We can also output to RIF or SBVR, but temporal aspects, modality, and the need for defeasibility favor SILK or Cyc for the best reasoning and QA performance.
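
To make “defeasibility” concrete, here is the classic schematic pattern (our illustration in ordinary logical notation, not actual SILK syntax): a default rule alongside a more specific rule that overrides it.

    \mathit{bird}(x) \Rightarrow \mathit{flies}(x) \quad \text{(default)}
    \mathit{penguin}(x) \Rightarrow \neg\mathit{flies}(x) \quad \text{(overriding exception)}

A defeasible reasoner concludes that a penguin does not fly rather than deriving a contradiction, which is what makes such logics tolerant of the inconsistencies that creep into large, collaboratively built knowledge bases.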

One thing in particular is worth noting:  this approach handles causality and temporal logic better than most controlled natural language systems typically do, whether they translate to a business rules engine or to a logic formalism such as first-order or description logic.  The approach promises better application development and knowledge management capabilities for more of the business process management and complex event processing markets.

Understanding English promotes better policies and requirements

Capturing some policies from a publication by the Department of Health and Human Services recently turned up the following….
It’s probably the case that there are more specific lists than just “some list” or “any list”, as suggested below.
This is one of the good things about applying deep natural language understanding to policy statements:  it helps you say precisely what you mean, even if you are not using a rule or logic engine but are just trying to articulate your policies or requirements clearly and precisely.

HHS on expedited review

Logic from the English of Science, Government, and Business

Our software translates even long and complicated sentences, from regulations to textbooks, into formal logic (i.e., not necessarily first-order logic, but more general predicate calculus).   As you can see below, we can render this understanding in various logical formalisms, including the defeasible first-order logic we are applying in Vulcan’s Project Halo, classical first-order logic, and related standards such as RIF or SBVR, as well as building or extending an ontology in a description logic (e.g., OWL-DL).
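
As a toy example of the range of targets (ours, not actual system output), a policy sentence such as “Every expedited review must be conducted by a reviewer” could be rendered in first-order logic or as a description logic axiom:

    \forall x\,\bigl(\mathit{ExpeditedReview}(x) \rightarrow \exists y\,(\mathit{Reviewer}(y) \wedge \mathit{conductedBy}(x, y))\bigr)
    \mathit{ExpeditedReview} \sqsubseteq \exists\,\mathit{conductedBy}.\mathit{Reviewer}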

We’re excited about these capabilities in various applications, such as in advancing science and education at Vulcan and formally understanding, analyzing and automating policy and regulations in enterprises.

English translated into predicate calculus

a sentence understood by Automata

an unambiguously, formally understood sentence


English translated into SILK and Prolog

Project Sherlock

Working as part of Vulcan’s Project Halo[1], Automata is applying a natural language understanding system that translates carefully formulated sentences into formal logic so as to answer questions that typically require deeper knowledge and inference than demonstrated by Watson.

The objective over the next three quarters is to acquire enough knowledge from the 9th edition of Campbell’s Biology textbook to demonstrate three things.

  • First, that the resulting system answers biology advanced placement (AP) exam questions, for example, more competently than existing systems (e.g., Aura[2] or Inquire[3]).
  • Second, that knowledge from certain parts of the textbook is effectively translated from English into formal knowledge with sufficient breadth and depth of coverage and semantics.
  • Third, that the knowledge acquisition process proves effective and accessible to less highly skilled knowledge engineers, so as to accelerate knowledge acquisition beyond 2012.

Included in the second of these is a substantial ontology of the background knowledge students are expected to have in order to comprehend the selected parts of the textbook, expressed using a combination of OWL, logic, and English sentences from sources other than the textbook.

Automata is hiring logicians, linguists, and biologists to work as consultants, contractors, or employees for:

  • Interactive tree-banking and word-sense disambiguation of several thousand sentences.[4]
  • Extending its lexical ontology and a broad-coverage grammar of English with additional vocabulary and deeper semantics, especially concerning cellular biology and related scientific knowledge including chemistry, physics, and math.
  • Maturing its upper and middle ontology of domain-independent knowledge using OWL in combination with various other technologies, including description logic, first-order logic, higher-order logic, modal logic, and defeasible logic.[5]
  • Enhancing its platform for text-driven knowledge engineering towards a collaborative wiki-like architecture for self-aware content in scientific education and biomedical applications.

Terms of engagement are flexible, ranging from small units of work to full-time employment.  We are based in Pittsburgh, Pennsylvania, and Vulcan is headquartered in Seattle, Washington, but the team is distributed across the country and overseas.

Please contact Paul Haley by e-mail to his first name at this domain.


[1] Vulcan: http://www.vulcan.com/TemplateCompany.aspx?contentId=54; Project Halo: http://www.projecthalo.com/; video introduction/overview: http://videolectures.net/aaai2011_gunning_halobook/

[2] Aura: http://www.ai.sri.com/project/aura

[3] Inquire: http://www.franz.com/success/customer_apps/artificial_intelligence/aura.lhtml

[4] tree-banking and WSD: http://en.wikipedia.org/wiki/Treebank and http://en.wikipedia.org/wiki/Word-sense_disambiguation

[5] e.g., SILK (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.174.1796 ) and SBVR (http://www.omg.org/spec/SBVR)

NLP: depictive in an HPSG lexicon?

We’re working with the English Resource Grammar (ERG), OWL, and Vulcan’s SILK to educate the machine by translating textbooks into defeasible logic.  Part of this involves an ontology that models semantics more deeply than the ERG.  The ERG is based on head-driven phrase structure grammar (HPSG), which provides deeper parsing, and, together with the DELPH-IN infrastructure, it also produces a simple under-specified semantic representation called minimal recursion semantics (MRS).

We’re having a great time using OWL to clarify and enrich the semantics of the rich model underlying the ERG.  Here’s an example, FYI.  If you’d like to know more (or help), please drop us a line!  Overall the project will demonstrate our capabilities for transforming everyday sentences into RIF and business rule languages using SBVR extended with defeasibility and other capabilities, all modeled in the same OWL ontology.

What triggered this blog entry was a bit of a surprise in seeing that whether or not an adjective can be used depictively is sometimes encoded in the lexicon.  This is one of the problems of TDL versus a more expressive description-logic-based model: it results in more lexical entries than necessary, which others have discussed in contrasting the LKB with the Attribute Logic Engine (ALE), for example.

In trying to model the semantics of words like ‘same’ and ‘different’, we are scratching our heads about these lines from the ERG’s lexicon:

  1. same_a1 := aj_pp_i-cmp-sme_le & [ ORTH < “same” >, SYNSEM [ LKEYS.KEYREL.PRED “_same_a_as_rel”, …
  2. the_same_a1 := aj_-_i-prd-ndpt_le & [ ORTH < “the”, “same” >, SYNSEM [ LKEYS.KEYREL.PRED “_the+same_a_1_rel”, …
  3. the_same_adv1 := av_-_i-vp-po_le & [ ORTH < “the”, “same” >, SYNSEM [ LKEYS.KEYREL.PRED “_the+same_a_1_rel”, …
  4. exact_a2 := aj_pp_i-cmp-sme_le & [ ORTH < “exact” >, SYNSEM [ LKEYS.KEYREL.PRED “_exact_a_same-as_rel”…

One of the interesting things about lexicalized grammars is that lexical entries (i.e., ‘words’) are described with almost arbitrary combinations of their lexical, syntactic, and semantic characteristics.

The preceding code is expressed in the type description language (TDL) used by the Lisp-based LKB (and its C++ counterpart, PET), which are unification-based parsers that produce a chart of plausible parses with reasonable efficiency.  What is given above is already deeper than what you can expect from a statistical parser (but richer descriptions of lexical entries promise to make statistical parsing much better, too).
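
If you want to experiment with the same machinery, the DELPH-IN tools are scriptable.  Here is a minimal sketch (assuming ACE and the pyDelphin package are installed and that you have a compiled ERG grammar image such as erg.dat on hand):

    # Minimal sketch: parse a sentence with the ERG via ACE and pyDelphin.
    # Assumes ACE is on the PATH and 'erg.dat' is a compiled ERG image.
    from delphin import ace

    response = ace.parse('erg.dat', 'The same dog barked.')
    for result in response.results():
        print(result['mrs'])   # the underspecified MRS for each parse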

Unfortunately, there is no available documentation on why the ERG was designed as it is, so the meaning of the above is difficult to interpret.  For example, the types of lexical entries (the symbols ending in ‘_le’) referenced above are defined as follows:

  1. aj_pp_i-cmp-sme_le := basic_adj_comp_lexent & [SYNSEM[LOCAL[CAT[HEAD superl_adj &[PRD -,MOD <[LOCAL.CAT.VAL.SPR <[–MIN def_or_demon_q_rel]>]>],VAL.SPR.FIRST.–MIN much_deg_rel],CONT.RELS <!relation,relation!>],MODIFD.LPERIPH bool,LKEYS[ALTKEYREL.PRED comp_equal_rel,–COMPKEY _as_p_comp_rel]]].
  2. aj_-_i-prd-ndpt_le := nonc-hm-nab & [SYNSEM basic_adj_abstr_lex_synsem & [LOCAL[CAT[HEAD adj & [PRD +,MINORS[MIN norm_adj_rel,NORM norm_rel],TAM #tam,MOD < anti_synsem_min >],VAL[SPR.FIRST anti_synsem_min,COMPS < >],POSTHD +],CONT[HOOK[LTOP #ltop,INDEX #arg0 &[E #tam],XARG #xarg],RELS <! #keyrel & adj_relation !>,HCONS <! !>]],NONLOC non-local_none,MODIFD notmod &[LPERIPH bool],LKEYS.KEYREL #keyrel &[LBL #ltop,ARG0 #arg0,ARG1 #xarg & non_expl-ind]]].

Needless to say, that’s a mouthful!  Chasing this down, we find that “the same”, which uses type #2 above, is defined using the following lexical types:

  1. nonc-hm-nab := nonc-h-nab & mcna.
  2. nonc-h-nab := nonconj & hc-to-phr & non_affix_bearing.
  3. mcna := word & [ SYNSEM.LOCAL.CAT.MC na ].

Which is to say that it is non-conjunctive, complements a head to form a phrase, can’t be affixed, cannot constitute a main clause, and is a word.
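
The ‘&’ in these definitions conjoins constraints via multiple inheritance in the type hierarchy.  As a loose analogy (ours, not anything from DELPH-IN), one might picture the same definitions in Python as:

    # A loose Python analogy for TDL's conjunctive type definitions:
    # 'x := a & b' is roughly multiple inheritance of constraints.
    class nonconj: pass                # not a conjunction
    class hc_to_phr: pass              # head plus complement forms a phrase
    class non_affix_bearing: pass      # cannot bear affixes

    class nonc_h_nab(nonconj, hc_to_phr, non_affix_bearing): pass

    class word: pass
    class mcna(word):                  # a word that cannot be a main clause
        mc = "na"

    class nonc_hm_nab(nonc_h_nab, mcna): pass

Of course, TDL types also carry feature constraints that must unify, which ordinary class inheritance does not capture.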

The fact that the lexical entry for “the same” is adjectival is given by the definition of the following type(s) used in the SYNSEM feature:

  1. basic_adj_comp_lexent := compar_superl_adj_word & [SYNSEM adj_unsp_ind_twoarg_synsem & [LOCAL[CAT.VAL[COMPS <canonical_or_unexpressed & [–MIN #cmin,LOCAL [CAT basic_pp_cat,CONJ cnil,CONT.HOOK [LTOP #ltop,INDEX #ind]]]>],CONT.HOOK [ LTOP #ltop, XARG #xarg]],LKEYS [ KEYREL.ARG1 #xarg,ALTKEYREL.ARG2 #ind,–COMPKEY #cmin]]].
  2. compar_superl_adj_word := nonc-hm-nab & [SYNSEM adj_unsp_ind_synsem & [LOCAL[CAT[HEAD[MOD <[–SIND #ind & non_expl]>,TAM #tam,MINORS.MIN abstr_adj_rel],VAL.SPR.FIRST.LOCAL.CONT.HOOK.XARG #altarg0],CONT[HOOK[XARG #ind,INDEX #arg0 & [E #tam]],RELS.LIST <[LBL #hand,ARG1 #ind],#altkeyrel & [LBL #hand,ARG0 event & #altarg0,ARG1 #arg0],…>]],LKEYS.ALTKEYREL #altkeyrel]].

Which is to say that it is a comparative or superlative adjectival word (even though it consists of two lexemes in its ‘orthography’) that involves two semantic arguments, including one complement which may be an unexpressed prepositional phrase.  A comparative or superlative adjective, in turn, is non-conjunctive, complements a head to form a phrase, is non-affix-bearing, and is non-clausal, as defined by the type ‘nonc-hm-nab’ above.

The types used in the syntax and semantics (i.e., SYNSEM) feature of the two lexical types are defined as follows (none of which is documented):

  1. adj_unsp_ind_twoarg_synsem := adj_unsp_ind_synsem & two_arg.
  2. adj_unsp_ind_synsem := basic_adj_lex_synsem & lex_synsem & adj_synsem_lex_or_phrase & isect_synsem & [LOCAL.CONT.HOOK.INDEX #ind,LKEYS.KEYREL.ARG0 #ind].

In a moment, we’ll discuss the types used in the second of these, but first, some basics on the semantics that are mixed with the syntax above.

In effect, the above indicates that a new ‘elementary predication’ will be needed in the MRS to represent the adjectival relationship in the logic derived in the course of parsing (i.e., that’s what ‘unsp_ind’ means, although it’s not documented, which I will try not to bemoan much further).

The following indicates that the newly formed elementary predication is not (initially) within any scope and that it has two arguments whose semantics (i.e., their RELations) are concatenated for propagation into the list of elementary predications that will constitute the MRS for any parses found.

  1. two_arg := basic_two_arg & [LOCAL.CONT.HCONS <! !>].
  2. basic_two_arg := unspec_two_arg & lex_synsem.
  3. unspec_two_arg := basic_lex_synsem & [LOCAL.ARG-S <[LOCAL.CONT.HOOK.–SLTOP #sltop,NONLOC [SLASH[LIST #smiddle,LAST #slast],REL [LIST #rmiddle,LAST #rlast],QUE[LIST #qmiddle,LAST #qlast]]],[LOCAL.CONT.HOOK.–SLTOP #sltop, NONLOC[SLASH[LIST #sfirst,LAST #smiddle],REL[LIST #rfirst,LAST #rmiddle],QUE[LIST #qfirst,LAST #qmiddle]]]>,LOCAL.CONT.HOOK.–SLTOP #sltop,NONLOC[SLASH[LIST #sfirst,LAST #slast],REL[LIST #rfirst,LAST #rlast],QUE[LIST #qfirst,LAST #qlast]]].
  4. lex_synsem := basic_lex_synsem & [LEX +].

The last of these expresses that the construction is lexical rather than phrasal (which includes clausal in the ERG).
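
The LIST/LAST pairs in ‘unspec_two_arg’ above are difference lists, a technique that lets the grammar concatenate the SLASH, REL, and QUE values of the two arguments by unification alone.  A rough Python rendering of the idea (ours; real difference lists unify an open tail variable rather than assigning a pointer):

    # Rough sketch of the difference-list idea: keeping a handle on the
    # tail (LAST) makes concatenation a constant-time link rather than
    # a copy of the whole list (LIST).
    class Node:
        def __init__(self, value):
            self.value = value
            self.next = None

    class DiffList:
        def __init__(self, *values):
            self.first = None   # LIST: the head of the list
            self.last = None    # LAST: the current tail
            for value in values:
                node = Node(value)
                if self.last is None:
                    self.first = node
                else:
                    self.last.next = node
                self.last = node

        def concat(self, other):
            """Concatenate by linking our LAST to the other's LIST."""
            if self.last is None:
                self.first, self.last = other.first, other.last
            elif other.first is not None:
                self.last.next = other.first
                self.last = other.last
            return self

    rels = DiffList('_the_q_rel').concat(DiffList('_same_a_as_rel'))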

Continuing with the definition of “the same” as an adjective, the following finally clarifies what it means to be a basic adjective:

  1. basic_adj_lex_synsem := basic_adj_abstr_lex_synsem & [LOCAL[ARG-S <#spr . #comps>,CAT[HEAD adj_or_intadj,VAL[SPR<#spr & synsem_min &[–MIN degree_rel,LOCAL[CAT[VAL[SPR *olist*,SPEC <[LOCAL.CAT.HS-LEX #hslex]>],MC na],CONT.HOOK.LTOP #ltop],NONLOC.SLASH 0-dlist,OPT +],anti_synsem_min &[–MIN degree_rel]>,COMPS #comps],HS-LEX #hslex],CONT.RELS.LIST <#keyrel,…>],LKEYS.KEYREL #keyrel & [LBL #ltop]].

Well, ‘clarifies’ might not have been the right word!  Essentially, it indicates that the adjective may have an optional degree specifier (which semantically modifies the predicate of the adjective) and that the predicate specified in the lexical entry becomes the predicate used in the MRS.  The rest is defined below:

  1. basic_adj_abstr_lex_synsem := basic_adj_synsem_lex_or_phrase & abstr_lex_synsem & [LOCAL.CONT.RELS.LIST.FIRST basic_adj_relation].
  2. basic_adj_synsem_lex_or_phrase := canonical_synsem & [LOCAL[AGR #agr,CAT[HEAD[MINORS.MIN basic_adj_rel],VAL[SUBJ <>,SPCMPS <>]],CONT.HOOK[INDEX non_conj_sement,XARG #agr]]].
  3. canonical_synsem := expressed_synsem & canonical_or_unexpressed.
  4. expressed_synsem := synsem.
  5. canonical_or_unexpressed := synsem_min0.
  6. synsem_min0 := synsem_min & [LOCAL mod_local,NONLOC non-local_min].

Which ends with a bunch of basic setup types, except for constraining the relation for an adjective to be ‘basically adjectival’ on the first two lines.  Also on these first two lines, it specifies that its subject and its specifier, if any, must be completed (i.e., empty) and agree with its non-conjunctive argument (which is not to say that the argument cannot be conjunctive, but that the adjective modifies the conjunction as a whole, if so).  Whether or not it is expressed will determine whether there are any further predications about its arguments or whether its unexpressed argument is identified by an otherwise unreferenced variable in any resulting MRS.

The lexical grounding of this type specification is given below, indicating that it may (or may not) have phonology (e.g., pronunciation, such as whether its onset is voiced) and whether, how, and with what punctuation it may appear, if any.  In general, a semantic argument may be lexical or phrasal and optional, but if it appears it corresponds to some semantic index (think variable) in some sort of predication in any resulting MRS.  (The *_min types do not constrain the values of their features any further.)

  1. basic_lex_synsem := abstr_lex_synsem & lex_or_nonlex_synsem.
  2. abstr_lex_synsem := canonical_lex_or_phrase_synsem & [LKEYS lexkeys].
  3. canonical_lex_or_phrase_synsem := canonical_synsem & lex_or_phrase.
  4. lex_or_phrase := synsem_min2.
  5. synsem_min2 := synsem_min1 & [LEX luk,MODIFD xmod_min,PHON phon_min,PUNCT punctuation_min].
  6. synsem_min1 := synsem_min0 & [OPT bool,–MIN predsort,–SIND *top*].
  7. adj_synsem_lex_or_phrase := basic_adj_synsem_lex_or_phrase &[LOCAL[CAT.HEAD.MOD <synsem_min &[LOCAL[CAT[HEAD basic_nom_or_ttl & [POSS -],VAL[SUBJ <>,SPR.FIRST synsem &[–MIN quant_or_deg_rel],COMPS <>],MC na],CONJ cnil],–SIND #ind]>,CONT.HOOK.XARG #ind]].

Note that an adjective is not possess-able, that it modifies something nominal (or a title), and that if it has a specifier, the specifier is a quantifier or a degree word (e.g., ‘very’).  Again, an adjective cannot function as a main clause or be conjunctive (in and of itself).

Finally, if you look far above you will see that the basic semantics of an adjective with an additional semantic argument is ‘intersective’, as in:

  1. isect_synsem := abstr_lex_synsem & [LOCAL[CAT.HEAD.MOD <[LOCAL intersective_mod,NONLOC.REL 0-dlist]>,CONT.HOOK.LTOP #hand],LKEYS.KEYREL.LBL #hand].

Here, the length-0 difference list and the following definitions indicate that intersective semantics admits only local modification:

  1. intersective_mod := mod_local.
  2. mod_local := *avm*.

AVM stands for ‘attribute value matrix’, which is the structure by which types and their features are defined (with nesting and unification constraints using # to indicate equality).
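
For the curious, here is a toy sketch (ours) of unifying AVMs represented as nested Python dictionaries; real AVMs also carry the type hierarchy and the reentrancy expressed by ‘#’ tags, which this ignores:

    # A toy sketch of unifying two attribute value matrices represented
    # as nested dicts; real AVMs also involve typed feature structures
    # and reentrancy (the '#' tags), which this ignores.
    def unify(a, b):
        if isinstance(a, dict) and isinstance(b, dict):
            out = dict(a)
            for key, value in b.items():
                out[key] = unify(out[key], value) if key in out else value
            return out
        if a == b:
            return a
        raise ValueError(f"unification failure: {a!r} vs {b!r}")

    avm = unify({'HEAD': {'PRD': '+'}}, {'HEAD': {'MOD': 'none'}, 'OPT': '+'})
    print(avm)   # {'HEAD': {'PRD': '+', 'MOD': 'none'}, 'OPT': '+'}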

By now you’re probably getting the idea that there is a fairly significant model of the English language here, including its lexical and syntactic aspects, but if you look closely, there is a lot about semantics, too.

What is has always been going to be

I’ve been working for a while now on an ontology for representing events (which include processes, of course).  One of the requirements of a system that is to monitor, govern, implement, or reason about processes is that it consider “situations”, which are things that happen or occur, including events and states.  (See, for example, the perdurants of the DOLCE ontology, BFO‘s occurrents, or OpenCyc’s situations.)  This requires the representation of time-variant information at various points or during various intervals of time (more than just the Allen relations or OWL Time).   If you’re interested in such things, I’d recommend Parsons‘ “Events in the Semantics of English” or Pustejovsky‘s “The Syntax of Event Structure“, both of which look at the subject from a linguistic rather than an inferential perspective.  When you pursue this to the point of implementing the axioms that an artificial intelligence needs in order to assist in defining or governing a business process (or answering questions about molecular biological processes), you end up in some pretty abstract territory, including the Stanford Encyclopedia of Philosophy.  I found the title of this post, which I came across in its page on temporal logic, entertaining.
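
For a flavor of what those axioms must support, here is a minimal sketch (ours, with hypothetical names) of time-indexed assertions, where a fluent holds during intervals and holding at a point reduces to interval containment:

    # Minimal sketch of time-indexed assertions: a fluent may hold during
    # an interval, and holding at a point reduces to interval containment.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Interval:
        start: float
        end: float

        def contains(self, t: float) -> bool:
            return self.start <= t <= self.end

    holds_during = {
        ('active', 'contractile_vacuole'): [Interval(0.0, 5.0)],
    }

    def holds_at(fluent, entity, t):
        return any(i.contains(t) for i in holds_during.get((fluent, entity), []))

    print(holds_at('active', 'contractile_vacuole', 3.0))   # True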

IBM Ilog JRules for business modeling and rule authoring

If you are considering the use of any of the following business rules management systems (BRMS):

  • IBM Ilog JRules
  • Red Hat JBoss Rules
  • Fair Isaac Blaze Advisor
  • Oracle Policy Automation (i.e., Haley in Siebel, PeopleSoft, etc.)
  • Oracle Business Rules (i.e., a derivative of JESS in Fusion)

you can learn a lot by carefully examining this video on decisions using scoring in Ilog.  (The video is also worth considering with respect to Corticon, since it authors and renders conditions, actions, and if-then rules within a table format.)

This article is a detailed walk-through that stands completely independent of the video (I recommend skipping the first 50 seconds and watching for 3 minutes or so).  You will find detailed commentary and insights here, sometimes fairly critical but in places complimentary.  JRules is a mature and successful product.  (This is not to say to a CIO that it is an appropriate or low-risk alternative, however; I would hold off on that assessment pending an understanding of strategy.)

The video starts by creating a decision table using this dialog:

Note that the decision reached by the resulting table is labeled but not defined, nor is the information needed to consult the table specified.  As it turns out, this table will take an action rather than make a decision.  As we will see, it will “set the score of result to a number”.  As we will also see, it references an application.  Given an application, it follows references to related concepts, such as borrowers (which it errantly considers synonymous with applicants), concerning which it further pursues employment information.
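
To make the scoring pattern concrete, here is a rough reconstruction in Python (ours; the names and numbers are hypothetical, not Ilog's representation) of a table that “sets the score of result to a number” based on the employment information of an application’s borrowers:

    # Rough reconstruction of a scoring decision table: each row pairs a
    # condition on a borrower with a score contribution, and the table
    # *acts* (sets a score) rather than deciding anything by itself.
    def years_employed_score(years):
        if years < 1:
            return -10
        if years < 5:
            return 5
        return 15

    def score_application(application):
        score = 0
        for borrower in application['borrowers']:
            score += years_employed_score(borrower['years_employed'])
        return score

    application = {'borrowers': [{'years_employed': 3}, {'years_employed': 7}]}
    print(score_application(application))   # 5 + 15 = 20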


Google vs. Facebook and Bing (again)

Almost a year ago, I wrote about semantics and social networking as threats to Google.  In that post, I referenced a prior article on investments in natural language processing, such as Microsoft’s acquisition of Powerset, which is now part of Bing.

Today, there are two articles I recommend.  The first addresses the extent to which Google’s Super Bowl ad is a response to the threat from Bing.  The second addresses Facebook overtaking Google.