Simply Logical English

This is not all that simple an article, but it walks you through, from start to finish, how we get from English to logic. In particular, it shows how English sentences can be directly translated into formal logic for use in automated reasoning with theorem provers, in logic programming systems as simple as Prolog, and even in production rule systems.

There is a section in the middle that is a bit technical about the relationship between full logic and more limited systems (e.g., Prolog or production rule systems). You don’t have to appreciate the details, but we include them to avoid the impression of hand-waving.

The examples here are trivial. You can find many more complex examples throughout Automata’s web site.

Consider the sentence, “A cell has a nucleus.”
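As a taste of where the article ends up, here is one plausible first-order reading of that sentence and a Prolog rendering of it. This is a minimal sketch only; the predicate and Skolem function names are illustrative, not the ones the article derives.

    % "A cell has a nucleus."
    %   forall X (cell(X) -> exists Y (nucleus(Y) & has(X, Y)))
    % Skolemizing the existential gives two Prolog rules, where nucleus_of/1
    % is a hypothetical Skolem function standing for "the nucleus of X":
    nucleus(nucleus_of(X)) :- cell(X).
    has(X, nucleus_of(X)) :- cell(X).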

Continue reading “Simply Logical English”

Deep question answering: Watson vs. Aristotle

At the SemTech conference last week, a few companies asked me how to respond to IBM’s Watson given my involvement with rapid knowledge acquisition for deep question answering at Vulcan.  My answer varies with whether there is any subject matter focus, but essentially involves extending their approach with deeper knowledge and more emphasis on logical, in addition to textual, entailment.

Today, in a discussion on the LinkedIn NLP group, there was some interest in finding more technical details about Watson.  A year ago, IBM published the most technical details to date about Watson in the IBM Journal of Research and Development.  Most of those journal articles are available for free on the web.  For convenience, here are my bookmarks to them.

Pedagogical applications of proofs of answers to questions

In Vulcan’s Project Halo, we developed means of extracting the structure of logical proofs that answer advanced placement (AP) questions in biology.  For example, the following shows a proof that separation of chromatids occurs during prophase.

textual explanation of entailment using the Linguist and SILK

This explanation was generated using capabilities of SILK built on those described in A SILK Graphical UI for Defeasible Reasoning, with a Biology Causal Process Example.  That paper gives more details on how the proof structures of questions answered in Project Sherlock are available for enhancing the suggested questions of Inquire (which is described in this post, which includes further references).  SILK justifications are produced using a number of higher-order axioms expressed using Flora‘s higher-order logic syntax, HiLog.  These meta rules determine which logical axioms can or do result in a literal.  (A literal is a positive or negative atomic formula, such as a fact, which can be true, false, or unknown.  Something is unknown if it is not proven true or false.  For more details, you can read about the well-founded semantics, which is supported by XSB, the system in which Flora is implemented.)
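If the three truth values sound exotic, here is a minimal sketch in XSB-flavored Prolog (the predicate name is made up) of a literal that the well-founded semantics leaves unknown:

    % A literal that depends negatively on itself is neither true nor false
    % under the well-founded semantics; XSB reports it as undefined.
    :- table p/0.
    p :- tnot(p).
    % ?- p.   % answer: undefined (unknown), rather than yes or no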

Now how does all this relate to pedagogy in future derivatives of electronic learning software or textbooks, such as Inquire?

Well, here’s a use case:

Continue reading “Pedagogical applications of proofs of answers to questions”

Background for our Semantic Technology 2013 presentation

In the spring of 2012, Vulcan engaged Automata for a knowledge acquisition (KA) experiment.  This article provides background on the context of that experiment and what the results portend for artificial intelligence applications, especially in the area of education.  Vulcan presented some of the award-winning work referenced here at an AI conference, including a demonstration of the electronic textbook discussed below.  There is a video of that presentation here.  The introductory remarks are interesting but not pertinent to this article.

Background on Vulcan’s Project Halo

From 2002 to 2004, Vulcan developed a Halo Pilot that could correctly answer between 30% and 50% of the questions on advanced placement (AP) tests in chemistry.  The systems relied on sophisticated formal knowledge representation and expert knowledge engineering.  Of the three teams, Cycorp fared the worst and SRI fared the best in this competition.  SRI’s system performed at the level of a 3 on the AP exam, which corresponds to earning course credit at many universities.  The consensus view at that time was that achieving a score of 4 was feasible with limited additional effort.  However, the cost per page for this level of performance was roughly $10,000, which needed to be reduced significantly before Vulcan’s objective of a Digital Aristotle could be considered viable.

Continue reading “Background for our Semantic Technology 2013 presentation”

Semantic Technology & Business Conference (SemTechBiz)

Benjamin Grosof and I will be presenting the following review of recent work at Vulcan towards Digital Aristotle as part of Project Halo at SemTechBiz in San Francisco the first week of June.

Acquiring deep knowledge from text

We show how users can rapidly specify large bodies of deep logical knowledge starting from practically unconstrained natural language text.

English sentences are semi-automatically interpreted into predicate calculus formulas and logic programs in SILK, an expressive knowledge representation (KR) and reasoning system that tolerates the practically inevitable logical inconsistencies arising in large knowledge bases acquired from, and maintained by, distributed users with varying linguistic and semantic skill sets, who collaboratively disambiguate grammar, logical quantification and scope, co-references, and word senses.

The resulting logic is generated as Rulelog, a draft standard under W3C Rule Interchange Format’s Framework for Logical Dialects, and relies on SILK’s support for FOL-like formulas, polynomial-time inference, and exceptions to answer questions such as those found in advanced placement exams.

We present a case study in understanding cell biology based on a first-year college level textbook.
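To give a flavor of what “FOL-like formulas” with exceptions can buy in the cell biology domain, here is a rough sketch using ordinary Prolog negation as failure. This is illustrative only; it is not actual SILK or Rulelog syntax, and the predicate names are made up.

    % Default with an exception: cells have nuclei, except prokaryotic cells.
    has_nucleus(X) :- cell(X), \+ prokaryotic(X).
    cell(neuron_1).
    cell(e_coli).
    prokaryotic(e_coli).
    % ?- has_nucleus(neuron_1).   % succeeds: the default applies
    % ?- has_nucleus(e_coli).     % fails: the exception defeats the default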

Is Freebase worth much?

There has been some speculation that Freebase is a vehicle for Metaweb to prosper from its semantic web infrastructure when used for commercial purposes.  As I recall, Metaweb raised over $40 million in its Series B round, led by Goldman Sachs, around the time they started building Freebase.  Metaweb’s seasoned investors were unlikely to invest so much in a business that could not project a return on that investment.  Almost certainly, Metaweb has firm plans for realizing over $100 million in revenues.  Most likely, for these investors and this amount of capital, target revenues by 2014, five years after the second round, would be in the vicinity of $1 billion.  Obviously, there is a lot of work to get there from around zero today.

Some of the bubble in raising those funds has since burst.  The economy would crimp the valuation and investment if it were made today.  And the semantic web has yet to produce a winner, so, with less enthusiasm for the sector, the investment would again be less favorable today.  All this is modulo the business plan.  If the business plan withstands scrutiny and the rate of return from credibly achievable projections justifies investment, they could get the money again, even now.  But no one that I have heard or read over the past few years can explain the business plan adequately – that is, concretely.  I would appreciate any insights or opinions on the topic.  I believe these are smart people, in the company and among its investors, so I am sure a plan is there.  I just don’t believe in the “we’ll figure out how to make money eventually” business plan in this case.

Some Freebase terms of service that are worth knowing, though commercially reasonable for any site that provides a free service, include:

  1. The terms of service are subject to change (upon posting).
  2. The service may be changed or discontinued at any time and without notice.
  3. Limits concerning access to or use of the services may be established.
  4. Any disputes shall be heard in San Francisco and governed by California law.

Continue reading “Is Freebase worth much?”

Google follows Microsoft’s lead towards intelligence

Being a fan of increased intelligence on the web, including Bing’s use of Powerset and True Knowledge, I enjoyed CNET’s report, “Google search gets answer highlights and events.”

Google now shows the following in response to the classic semantic web test question: “The Empire State Building rises to 1250 ft (381 m) at the 102nd floor.”

Also, Google leverages more of the content of text or structure of linked data in its Rich Snippet answers:

Rich Snippet shows Google "understands" events

As search engines increase their understanding of concepts and how to extract them from content or linked data, and present them as Google does in the snippet here or in the sentence above, the web will begin to feel a lot smarter.

As these simple enhancements indicate, the intelligent web is taking off, and that feeling of intelligence will come sooner than expected.  Of course, there is a long way to go.  For more on that, I hear there is an upcoming issue of AI Magazine that will survey the state of the art in question answering, including coverage of Vulcan’s Project Halo and IBM’s Jeopardy effort, among others.  Also, if you are interested in what bright minds are looking forward to in this regard, see Nova Spivack’s recent blogging and his post on “will the web become conscious?”

Time for the next generation of knowledge automation

In preparing for my workshop at the Business Rules Forum in Las Vegas on November 5th, I have focused on the following needs in reasoning about processes, about events, and about or over time:

  1. Reasoning at a point within a [business] process
  2. Reasoning about events that occur over time
  3. Reasoning about a [business] process (as in deciding what comes next)
  4. Reasoning about and across different states (as in planning)

Enterprise decision management (EDM) addresses the first.  Complex event processing (CEP) is concerned with the second.  In theory, EDM could address the third but it does not in practice.  This third item includes  the issue of governing and defining workflow or event-driven business processes rather than point decisions within such business processes. 

Business applications of rules have not advanced to include the fourth item.  That is to say, business has yet to significantly leverage reasoning or problem-solving techniques that are common in artificial intelligence.  For example, artificially intelligent question and answer systems, which are being developed for the semantic web, can do more than retrieve data – they perform inference.  Commercial database and business intelligence queries are typically much less intelligent, which presents a number of opportunities that I don’t want to go into here but would be happy to discuss with interested parties.  The point here is that business does not use reasoning much at all, let alone to search across the potential ramifications of alternative decisions or courses of action before making or taking one.  Think of playing chess or a soccer-playing robot planning how to advance the ball on goal.  Why shouldn’t business strategies or tactical business decisions benefit from a little simulated look-ahead along with a lot of inference and evaluation?
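To make that last point concrete, here is a toy sketch of such look-ahead as depth-bounded search in Prolog. The move/3 and goal/1 predicates are placeholders describing the available actions and the desired outcome, not any product’s API.

    % Find a sequence of actions, at most Depth long, that reaches a goal state.
    plan(State, _Depth, []) :-
        goal(State).
    plan(State, Depth, [Action|Actions]) :-
        Depth > 0,
        move(State, Action, NextState),
        Depth1 is Depth - 1,
        plan(NextState, Depth1, Actions).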

Even though I have recently become more interested in the fourth of these areas, I expect the audience at the Business Rules Forum to be most interested in the first two points above.  There will also be some who have enough experience with the complex business processes common in larger enterprises to be interested in the third item.  Only the most advanced applications, such as biochemical process planning, call for the fourth.  I don’t expect many of those practitioners to attend!

The notion of enterprise decision management (EDM) is focused on point decision making within a business process.  For enterprises that are concerned with governing business processes, a model of the process itself must be available to the business rules that govern its operation.  I’ve written elsewhere about the need for an ontology of events and processes in order to effectively integrate business process management (BPM) with business rules.  Here, and in the workshop, I intend to get a little more specific about the requirements, what is lacking in current standards and offerings, and what we’re trying to do about it.

Continue reading “Time for the next generation of knowledge automation”