SBVR in OWL

In preparation for generating RIF and SBVR from the Linguist, we have produced an OWL ontology for the pertinent aspects of the SBVR specification.  We hope that this is helpful to others and would sincerely appreciate any corrections or comments on how to improve it.
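For readers unfamiliar with what such an ontology involves, here is a purely illustrative sketch (not the actual ontology) of how a couple of core SBVR notions, such as noun concepts and verb concepts, might be declared as OWL classes using Python's rdflib; the namespace and names below are hypothetical.

    from rdflib import Graph, Namespace, RDF, RDFS, OWL

    SBVR = Namespace("http://example.org/sbvr#")  # hypothetical namespace, not the published ontology
    g = Graph()
    g.bind("sbvr", SBVR)

    # Illustrative classes for two core SBVR notions.
    for cls in (SBVR.NounConcept, SBVR.VerbConcept):
        g.add((cls, RDF.type, OWL.Class))

    # In SBVR, a verb concept (fact type) has noun concepts filling its roles;
    # here that is sketched as a simple object property.
    g.add((SBVR.hasRole, RDF.type, OWL.ObjectProperty))
    g.add((SBVR.hasRole, RDFS.domain, SBVR.VerbConcept))
    g.add((SBVR.hasRole, RDFS.range, SBVR.NounConcept))

    print(g.serialize(format="turtle"))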

Paul

NLP: depictive in an HPSG lexicon?

We’re working with the English Resource Grammar (ERG), OWL, and Vulcan’s SILK to educate the machine by translating textbooks into defeasible logic.  Part of this involves an ontology that models semantics more deeply than the ERG.  The ERG is based on head-driven phrase structure grammar (HPSG), which provides deep parsing and, together with the DELPH-IN infrastructure, a simple under-specified semantic representation called minimal recursion semantics (MRS).

We’re having a great time using OWL to clarify and enrich the semantics of the rich model underlying the ERG.  Here’s an example, FYI.  If you’d like to know more (or help), please drop us a line!  Overall the project will demonstrate our capabilities for transforming everyday sentences into RIF and business rule languages using SBVR extended with defeasibility and other capabilities, all modeled in the same OWL ontology.

What triggered this blog entry was a bit of a surprise: whether or not an adjective can be used depictively is sometimes encoded directly in the lexicon.  This is one of the problems of TDL versus a description-logic-based model with more expressiveness: it results in more lexical entries than necessary, a point others have discussed when contrasting it with the Attribute Logic Engine (ALE), for example.

In trying to model the semantics of words like ‘same’ and ‘different’, we are scratching our heads about these lines from the ERG’s lexicon:

  1. same_a1 := aj_pp_i-cmp-sme_le & [ ORTH < "same" >, SYNSEM [ LKEYS.KEYREL.PRED "_same_a_as_rel", …
  2. the_same_a1 := aj_-_i-prd-ndpt_le & [ ORTH < "the", "same" >, SYNSEM [ LKEYS.KEYREL.PRED "_the+same_a_1_rel", …
  3. the_same_adv1 := av_-_i-vp-po_le & [ ORTH < "the", "same" >, SYNSEM [ LKEYS.KEYREL.PRED "_the+same_a_1_rel", …
  4. exact_a2 := aj_pp_i-cmp-sme_le & [ ORTH < "exact" >, SYNSEM [ LKEYS.KEYREL.PRED "_exact_a_same-as_rel"…

One of the interesting things about lexicalized grammars is that lexical entries (i.e., ‘words’) are described with almost arbitrary combinations of their lexical, syntactic, and semantic characteristics.

The preceding code is expressed in a type description language (TDL) used by the Lisp-based LKB (and its C++ counterpart, PET), which are unification-based parsers that produce a chart of plausible parses with some efficiency.  What is given above is already deeper than what you can expect from a statistical parser (but richer descriptions of lexical entries promise to make statistical parsing much better, too).

Unfortunately, there is no available documentation on why the ERG was designed as it is, so the meaning of the above is difficult to interpret.  For example, the types of lexical entries (the symbols ending in ‘_le’) referenced above are defined as follows:

  1. aj_pp_i-cmp-sme_le := basic_adj_comp_lexent & [SYNSEM[LOCAL[CAT[HEAD superl_adj &[PRD -,MOD <[LOCAL.CAT.VAL.SPR <[--MIN def_or_demon_q_rel]>]>],VAL.SPR.FIRST.--MIN much_deg_rel],CONT.RELS <!relation,relation!>],MODIFD.LPERIPH bool,LKEYS[ALTKEYREL.PRED comp_equal_rel,--COMPKEY _as_p_comp_rel]]].
  2. aj_-_i-prd-ndpt_le := nonc-hm-nab & [SYNSEM basic_adj_abstr_lex_synsem & [LOCAL[CAT[HEAD adj & [PRD +,MINORS[MIN norm_adj_rel,NORM norm_rel],TAM #tam,MOD < anti_synsem_min >],VAL[SPR.FIRST anti_synsem_min,COMPS < >],POSTHD +],CONT[HOOK[LTOP #ltop,INDEX #arg0 &[E #tam],XARG #xarg],RELS <! #keyrel & adj_relation !>,HCONS <! !>]],NONLOC non-local_none,MODIFD notmod &[LPERIPH bool],LKEYS.KEYREL #keyrel &[LBL #ltop,ARG0 #arg0,ARG1 #xarg & non_expl-ind]]].

Needless to say, that’s a mouthful!  Chasing this down, we find that “the same”, which uses type #2 above, is defined using the following lexical types:

  1. nonc-hm-nab := nonc-h-nab & mcna.
  2. nonc-h-nab := nonconj & hc-to-phr & non_affix_bearing.
  3. mcna := word & [ SYNSEM.LOCAL.CAT.MC na ].

Which is to say that it is non-conjunctive, complements a head to form a phrase, can’t be affixed, cannot constitute a main clause, and is a word.
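For those not used to reading TDL, the ‘&’ in these definitions is type conjunction, which behaves much like multiple inheritance with added feature constraints.  Here is a minimal Python sketch (an analogy, not a faithful TDL implementation) of how ‘nonc-hm-nab’ pulls its meaning together from its supertypes:

    # Each TDL type becomes a class; 'A := B & C' becomes multiple inheritance.
    class word: pass
    class nonconj: pass            # non-conjunctive
    class hc_to_phr: pass          # complements a head to form a phrase
    class non_affix_bearing: pass  # cannot bear affixes

    class nonc_h_nab(nonconj, hc_to_phr, non_affix_bearing): pass

    class mcna(word):
        # the TDL adds a feature constraint: SYNSEM.LOCAL.CAT.MC na
        main_clause = "na"

    class nonc_hm_nab(nonc_h_nab, mcna): pass

    # "the same" (via its lexical entry type) inherits all of these at once.
    print([c.__name__ for c in nonc_hm_nab.__mro__])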

The fact that the lexical entry for “the same” is adjectival is given by the definition of the following type(s) used in the SYNSEM feature:

  1. basic_adj_comp_lexent := compar_superl_adj_word & [SYNSEM adj_unsp_ind_twoarg_synsem & [LOCAL[CAT.VAL[COMPS <canonical_or_unexpressed & [--MIN #cmin,LOCAL [CAT basic_pp_cat,CONJ cnil,CONT.HOOK [LTOP #ltop,INDEX #ind]]]>],CONT.HOOK [ LTOP #ltop, XARG #xarg]],LKEYS [ KEYREL.ARG1 #xarg,ALTKEYREL.ARG2 #ind,--COMPKEY #cmin]]].
  2. compar_superl_adj_word := nonc-hm-nab & [SYNSEM adj_unsp_ind_synsem & [LOCAL[CAT[HEAD[MOD <[--SIND #ind & non_expl]>,TAM #tam,MINORS.MIN abstr_adj_rel],VAL.SPR.FIRST.LOCAL.CONT.HOOK.XARG #altarg0],CONT[HOOK[XARG #ind,INDEX #arg0 & [E #tam]],RELS.LIST <[LBL #hand,ARG1 #ind],#altkeyrel & [LBL #hand,ARG0 event & #altarg0,ARG1 #arg0],…>]],LKEYS.ALTKEYREL #altkeyrel]].

Which is to say that it is a comparative or superlative adjectival word (even though it consists of two lexemes in its ‘orthography’) that involves two semantic arguments, including one complement which may be an unexpressed prepositional phrase.  A comparative or superlative adjective, in turn, is non-conjunctive, complements a head to form a phrase, is non-affix-bearing (?), and is non-clausal, as defined by the type ‘nonc-hm-nab’ above.

The types used in the syntax-and-semantics (i.e., SYNSEM) feature of the two lexical types are defined as follows (none of which is documented):

  1. adj_unsp_ind_twoarg_synsem := adj_unsp_ind_synsem & two_arg.
  2. adj_unsp_ind_synsem := basic_adj_lex_synsem & lex_synsem & adj_synsem_lex_or_phrase & isect_synsem & [LOCAL.CONT.HOOK.INDEX #ind,LKEYS.KEYREL.ARG0 #ind].

In a moment, we’ll discuss the types used in the second of these, but first, some basics on the semantics that are mixed with the syntax above.

In effect, the above indicates that a new ‘elementary predication’ will be needed in the MRS to represent the adjectival relationship in the logic derived in the course of parsing (i.e., that’s what ‘unsp_ind’ means, although it’s not documented, which I will try not to bemoan much further).
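To make ‘elementary predication’ concrete, here is a minimal Python sketch of what such an MRS predication might look like for “the same” once parsing succeeds; the field names follow MRS conventions (LBL, ARG0, ARG1), but the handle and variable names are made up for illustration.

    from dataclasses import dataclass, field

    @dataclass
    class ElementaryPredication:
        pred: str                                  # the predicate symbol from the lexical entry
        lbl: str                                   # the label (scope handle) of the predication
        args: dict = field(default_factory=dict)   # ARG0, ARG1, ... semantic variables

    # Hypothetical EP contributed by the lexical entry for "the same":
    the_same = ElementaryPredication(
        pred="_the+same_a_1_rel",
        lbl="h7",                            # invented handle
        args={"ARG0": "e8", "ARG1": "x5"},   # invented event and instance variables
    )
    print(the_same)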

The following indicates that the newly formed elementary predicate is not (initially) within any scope and that it has two arguments whose semantics (i.e., their RELations) are concatenated for propagation into the list of elementary predications that will constitute the MRS for any parses found.

  1. two_arg := basic_two_arg & [LOCAL.CONT.HCONS <! !>].
  2. basic_two_arg := unspec_two_arg & lex_synsem.
  3. unspec_two_arg := basic_lex_synsem & [LOCAL.ARG-S <[LOCAL.CONT.HOOK.--SLTOP #sltop,NONLOC [SLASH[LIST #smiddle,LAST #slast],REL [LIST #rmiddle,LAST #rlast],QUE[LIST #qmiddle,LAST #qlast]]],[LOCAL.CONT.HOOK.--SLTOP #sltop, NONLOC[SLASH[LIST #sfirst,LAST #smiddle],REL[LIST #rfirst,LAST #rmiddle],QUE[LIST #qfirst,LAST #qmiddle]]]>,LOCAL.CONT.HOOK.--SLTOP #sltop,NONLOC[SLASH[LIST #sfirst,LAST #slast],REL[LIST #rfirst,LAST #rlast],QUE[LIST #qfirst,LAST #qlast]]].
  4. lex_synsem := basic_lex_synsem & [LEX +].

The last of these expresses that the construction is lexical rather than phrasal (which includes clausal in the ERG).

Continuing with the definition of “the same” as an adjective, the following finally clarifies what it means to be a basic adjective:

  1. basic_adj_lex_synsem := basic_adj_abstr_lex_synsem & [LOCAL[ARG-S <#spr . #comps>,CAT[HEAD adj_or_intadj,VAL[SPR<#spr & synsem_min &[--MIN degree_rel,LOCAL[CAT[VAL[SPR *olist*,SPEC <[LOCAL.CAT.HS-LEX #hslex]>],MC na],CONT.HOOK.LTOP #ltop],NONLOC.SLASH 0-dlist,OPT +],anti_synsem_min &[--MIN degree_rel]>,COMPS #comps],HS-LEX #hslex],CONT.RELS.LIST <#keyrel,…>],LKEYS.KEYREL #keyrel & [LBL #ltop]].

Well, ‘clarifies’ might not have been the right word!  Essentially, it indicates that the adjective may have an optional degree specifier (which semantically modifies the predicate of the adjective) and that the predicate specified in the lexical entry becomes the predicate used in the MRS.  The rest is defined below:

  1. basic_adj_abstr_lex_synsem := basic_adj_synsem_lex_or_phrase & abstr_lex_synsem & [LOCAL.CONT.RELS.LIST.FIRST basic_adj_relation].
  2. basic_adj_synsem_lex_or_phrase := canonical_synsem & [LOCAL[AGR #agr,CAT[HEAD[MINORS.MIN basic_adj_rel],VAL[SUBJ <>,SPCMPS <>]],CONT.HOOK[INDEX non_conj_sement,XARG #agr]]].
  3. canonical_synsem := expressed_synsem & canonical_or_unexpressed.
  4. expressed_synsem := synsem.
  5. canonical_or_unexpressed := synsem_min0.
  6. synsem_min0 := synsem_min & [LOCAL mod_local,NONLOC non-local_min].

Which ends with a bunch of basic setup types, except that the first two lines constrain the relation for an adjective to be ‘basically adjectival’.  Also on these first two lines, it specifies that its subject and its specifier, if any, must be completed (i.e., empty) and that it agrees with its non-conjunctive argument (which is not to say that the argument cannot be conjunctive, but that the adjective modifies the conjunction as a whole, if so).  Whether or not the argument is expressed determines whether there are any further predicates about it or whether, being unexpressed, it is identified by an otherwise unreferenced variable in any resulting MRS.

The lexical grounding of this type specification is given below, indicating that it may (or may not) have phonology (e.g., pronunciation, such as whether its onset is voiced) and if, how, and with what punctuation it may appear.  In general, a semantic argument may be lexical or phrasal and optional, but if it appears it corresponds to some semantic index (think variable) in some sort of predicate in any resulting MRS.  (The *_min types do not constrain the values of their features any further.)

  1. basic_lex_synsem := abstr_lex_synsem & lex_or_nonlex_synsem.
  2. abstr_lex_synsem := canonical_lex_or_phrase_synsem & [LKEYS lexkeys].
  3. canonical_lex_or_phrase_synsem := canonical_synsem & lex_or_phrase.
  4. lex_or_phrase := synsem_min2.
  5. synsem_min2 := synsem_min1 & [LEX luk,MODIFD xmod_min,PHON phon_min,PUNCT punctuation_min].
  6. synsem_min1 := synsem_min0 & [OPT bool,--MIN predsort,--SIND *top*].
  7. adj_synsem_lex_or_phrase := basic_adj_synsem_lex_or_phrase &[LOCAL[CAT.HEAD.MOD <synsem_min &[LOCAL[CAT[HEAD basic_nom_or_ttl & [POSS -],VAL[SUBJ <>,SPR.FIRST synsem &[--MIN quant_or_deg_rel],COMPS <>],MC na],CONJ cnil],--SIND #ind]>,CONT.HOOK.XARG #ind]].

Note that an adjective is not possess-able, that it modifies something nominal (or a title), and that if it has a specifier, the specifier is a quantifier or a degree word (e.g., ‘very’).  Again, an adjective cannot function as a main clause or be conjunctive (in and of itself).

Finally, if you look far above you will see that the basic semantics of an adjective with an additional semantic argument is ‘intersective’, as in:

  1. isect_synsem := abstr_lex_synsem & [LOCAL[CAT.HEAD.MOD <[LOCAL intersective_mod,NONLOC.REL 0-dlist]>,CONT.HOOK.LTOP #hand],LKEYS.KEYREL.LBL #hand].

Here, the length 0 difference list and the following definitions indicate that intersective semantics do not accept anything but local modification:

  1. intersective_mod := mod_local.
  2. mod_local := *avm*.

AVM stands for ‘attribute value matrix’, which is the structure by which types and their features are defined (with nesting and unification constraints using # to indicate equality).
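Since unification is doing all the work in these AVMs, here is a rough Python sketch of the idea: feature structures as nested dictionaries, with unification failing on conflicting atomic values (this glosses over typed unification and the # reentrancies, which identify two paths with one shared value).

    def unify(a, b):
        """Unify two feature structures represented as nested dicts; atoms must match exactly."""
        if not isinstance(a, dict) or not isinstance(b, dict):
            if a == b:
                return a
            raise ValueError(f"unification failure: {a!r} vs {b!r}")
        result = dict(a)
        for feature, value in b.items():
            result[feature] = unify(a[feature], value) if feature in a else value
        return result

    # Toy example: an adjective's MOD constraint unifying with a nominal head.
    adj_mod = {"CAT": {"HEAD": "noun"}, "CONJ": "cnil"}
    head    = {"CAT": {"HEAD": "noun", "VAL": {"COMPS": []}}}
    print(unify(adj_mod, head))   # succeeds, merging the two structures
    # unify({"CAT": {"HEAD": "verb"}}, head) would raise a unification failure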

By now you’re probably getting the idea that there is a fairly significant model of the English language here, including its lexical and syntactic aspects, but if you look closely there is a lot about semantics here, too.

Event-centric BPM and goal-driven processing

The slides for my Business Rules Forum presentation, on event semantics and on focusing on events in order to simplify process definition and facilitate more robust governance and compliance, are at Event-centric BPM.

After the talk I spoke with Jan Verbeek and Gartjan Grijzen of Be Informed and reviewed their software, which is excellent.  They have been quite successful with various government agencies in applying the event-centric methodology to produce goal-driven processing.  Their approach is elegant and effective.  It clearly demonstrates the merits of an event-centric approach and the power that emerges from understanding event-dependencies.  Also, it is very semantic, ontological, and logic-programming oriented in its approach (e.g., they use OWL and a backward-chaining inference engine).
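To illustrate what goal-driven processing with a backward-chaining engine means in a process context, here is a minimal Python sketch; the rules, events, and goal names are hypothetical and are not Be Informed’s actual model (which is expressed in OWL, not Python).

    # Goal-driven (backward-chaining) processing: work backward from the goal to the
    # events/conditions that would achieve it, rather than scripting steps forward.
    RULES = {
        # goal: list of alternative sets of sub-goals (events/conditions) that achieve it
        "permit_issued": [["application_received", "fee_paid", "application_approved"]],
        "application_approved": [["background_check_passed", "documents_verified"]],
    }

    FACTS = {"application_received", "fee_paid", "background_check_passed"}

    def achievable(goal, facts):
        """A goal holds if it is a known event/fact, or if every sub-goal of some rule for it holds."""
        if goal in facts:
            return True
        return any(all(achievable(g, facts) for g in body) for body in RULES.get(goal, []))

    print(achievable("permit_issued", FACTS))   # False: documents not yet verified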

They do not have the top-down knowledge management approach that I advocate, nor do they provide the logical verification of governing policies and compliance (i.e., using theorem provers) that I mention in the talk (see Guido Governatori's 2010 publications and Travis Breaux's research at CMU, for example), but theirs is the best commercially deployed work in separating business process description from procedural implementation that comes to mind.  (Note that Ed Barkmeyer of NIST reports some use of SBVR descriptions of manufacturing processes with theorem provers.  Some in the automotive and aerospace industries have been interested in this approach for quality purposes, too.)

Be Informed is now expanding into the United States with the assistance of Mills Davis and others.  Their software is definitely worth consideration and, in my opinion, is more elegant and effective than the generic BPMN approach.

Simple problems with the semantic web

The standard language for defining ontologies these days is OWL, and the standard tool is Protege.  Unfortunately, OWL lacks any notion of exceptions in inheritance or any other notion of defeasibility.

So, although you may want to say that birds fly, your ontology will be broken (or become much more complicated) when you realize there are birds that can’t fly, such as penguins or ostriches, or even sick or injured birds.

Practically speaking, you need something like courteous logic or the defeasibility in SILK to handle this (or any 1980s expert system shell, or even an earlier frame system).  OWL is very hard on mortal man (e.g., mainstream IT) in this regard.
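For illustration, here is a minimal Python sketch of the kind of defeasible inheritance a 1980s frame system gave you for free: the most specific frame wins, so exceptions such as penguins simply override the default, with no contortions to the ontology.

    # A toy frame system: slots are inherited up the parent chain; the nearest value wins.
    FRAMES = {
        "bird":    {"parent": None,   "flies": True},
        "penguin": {"parent": "bird", "flies": False},   # the exception overrides the default
        "robin":   {"parent": "bird"},                    # inherits flies = True
    }

    def get(frame, slot):
        """Look up a slot, walking up the parent chain; the most specific value wins."""
        while frame is not None:
            if slot in FRAMES[frame]:
                return FRAMES[frame][slot]
            frame = FRAMES[frame]["parent"]
        return None

    print(get("robin", "flies"))    # True  (the default, from bird)
    print(get("penguin", "flies"))  # False (the exception)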

How can I tell OWL that a pronoun is a noun, but that pronouns are a closed class of words, unlike nouns, verbs, adjectives, and adverbs (in general)?  Well, I’ll have to tell it about open-class nouns versus closed-class nouns.  What a pain!
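Here is a rough sketch, using Python's rdflib, of the kind of extra scaffolding OWL forces on you for this example; the namespace and class names are hypothetical.

    from rdflib import Graph, Namespace, RDF, RDFS, OWL

    LEX = Namespace("http://example.org/lex#")  # hypothetical namespace
    g = Graph()
    g.bind("lex", LEX)

    for cls in (LEX.Noun, LEX.Pronoun, LEX.OpenClassWord, LEX.ClosedClassWord):
        g.add((cls, RDF.type, OWL.Class))

    # The natural statement: a pronoun is a noun.
    g.add((LEX.Pronoun, RDFS.subClassOf, LEX.Noun))

    # The contortion: with no defeasible "nouns are open-class (by default)", the
    # open/closed distinction must be modeled as separate, disjoint classes and
    # asserted explicitly for each part of speech.
    g.add((LEX.OpenClassWord, OWL.disjointWith, LEX.ClosedClassWord))
    g.add((LEX.Pronoun, RDFS.subClassOf, LEX.ClosedClassWord))
    g.add((LEX.CommonNoun, RDFS.subClassOf, LEX.Noun))
    g.add((LEX.CommonNoun, RDFS.subClassOf, LEX.OpenClassWord))

    print(g.serialize(format="turtle"))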

This is why we use Protege primarily as a drafting tool and, for example, SILK to do the reasoning.  Non-defeasible description logic and first-order reasoners are difficult to get along with in practice (and they make sustainable knowledge repositories too difficult, which obviously inhibits adoption).

Extended Enterprise Ontology

In a recent post I mentioned comments by Sir Tim Berners-Lee concerning the overlap between enterprise information models and the semantic web ontologies supporting the concept of linked data.  Berners-Lee argued that the overlap is already sufficient to have a transformative effect on mainstream IT.  I think he is right, but also that we are not there yet.  There are many obstacles to adoption, not the least of which is the inertia of enterprise IT.  Disruptive approaches to software development typically require ten years or so to cross the chasm from visionaries and early adopters to the mainstream.  We are only a few years into this, and the technology is not ready.

First, let’s establish that there is plenty of semantics available for reuse now.  There are existing models, some of which are well-designed, mature, and widely used.  Unfortunately, most of what exists has little apparent relevance to enterprises.  There is little on this diagram that would draw the attention of an enterprise architect, for example.


Over $100m in 12 months backs natural language for the semantic web

Radar Networks is accelerating down the path towards the world’s largest body of knowledge about what people care about, using Twine to organize their bookmarks.  Unlike social bookmarking sites, Twine uses natural language processing technology to read and categorize people’s bookmarks within a substantial ontology.  Using this ontology, Twine not only organizes their bookmarks intelligently but also facilitates social networking and collaborative filtering that result in more relevant suggestions of others’ bookmarks than other social bookmarking sites can provide.

Twine should rapidly eclipse social bookmarking sites like Digg and Reddit.  This is no small feat!

The underlying capabilities of Twine present Radar Networks with many other opportunities, too.  Twine could spider out from bookmarks and become a general competitor to Google, as Powerset hopes to become.  Twine could become the semantic web’s Wikipedia, to which Metaweb’s Freebase aspires.

When Rules Meet Requirements

I am working on some tutorial material for business analysts tasked with eliciting and harvesting rules using some commercial business rules management systems (BRMS). The knowledgeable consumers of this material intuitively agree that capturing business rules should be performed by business analysts who also capture requirements. They understand that the clarity of rules is just as critical to successful application of BRMS as the clarity of requirements is to “whirlpool” development.[1]  But they are frustrated by the distinct training for requirements versus rules. They believe, and I agree, that unification of requirements and rules management is needed.

Consider these words from Forrester:

One might argue that Word documents, email, phone calls, and stakeholder meetings alone are adequate for managing rules. In fact, that is the methodology currently used for most projects in a large number of IT shops. However, this informal, ad hoc approach doesn’t ensure rigorous rules definition that is communicated and understood by all parties. More importantly, it doesn’t lend itself to managing the inevitable rules changes that will occur throughout the life of the project. The goal must be to embrace and manage change, not to prevent it. [2]

But note that Forrester used the word “requirements” everywhere I used “rules” above!


Business Rules Market Maturity

Some recent correspondence with clients and prospective adopters of business rules technology indicates that the interested mainstream has become increasingly concerned and confused by consolidation in the business rules market.

On the analyst front, they read advice such as the following from Gartner:[1]

As Gartner has stated, the BRE market is a volatile technology sector, and market trends point to increased consolidation. In recent research, we stated that some consolidation will come from rules-to-rules acquisitions. Recent examples of this include Trilogy/Versata buying Gensym and now, RuleBurst purchasing Haley Systems.

Another form this consolidation will take is application vendors or business process management suite vendors buying much-needed rule technology, as seen in SAP’s recently announced intention to purchase Yasu Technologies. In either case, rule technology will persist, but the vendors selling the technology will often be different.

I agree with Gartner: enterprise app and BPM vendors desperately need rules technology.  I also agree with the following analysis from Forrester:[2]

SAP’s decision to purchase Hyderabad, India-based Yasu Technologies greatly improves its business rules management capabilities. Other large vendors would be wise to follow SAP’s lead in the business rules market. If you look at the big vendors, they’re all going to need this technology. SAP’s competitors are going to have to step up to these requirements also.

It’s encouraging that SAP bought Business Objects and is now buying Yasu.  We’re seeing requirements to link business rules and business intelligence or analytics.  SAP has told us they have seen these requests, and we’re encouraged that SAP is now acting.

Unfortunately, Gartner’s concluding advice could have been more constructive:

Prospective BRE customers: Buyer beware – the rule engine market is a volatile sector. Choose your vendors carefully and be prepared to see more BRE acquisitions.
