Simple, Fast, Effective, Active Learning

Recently, we “read” ten thousand recipes or so from a cooking web site.  The purpose of doing so was to produce a formal representation of those recipes for use in temporal reasoning by a robot.

Our task was to produce ontology by reading the recipes, subject to conflicting goals.  On the one hand, the ontology was to be accurate so that the robot could reason, plan, and answer questions robustly.  On the other hand, the ontology was to be produced automatically (with minimal human effort).[1]
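
To make the goal concrete, here is a minimal sketch, under invented assumptions, of the kind of formal recipe representation a temporal reasoner might consume.  The step names, durations, and constraint vocabulary below are purely illustrative and are not drawn from the ontology we actually produced.

    from dataclasses import dataclass

    @dataclass
    class Step:
        name: str
        minutes: int

    # Invented recipe steps with durations a robot could plan against.
    steps = [Step("chop onion", 5), Step("simmer sauce", 20), Step("add cream", 1)]

    # Qualitative ordering constraints in the style of Allen's interval algebra:
    # ("before", a, b) means step a must finish before step b starts.
    constraints = [
        ("before", "chop onion", "simmer sauce"),
        ("during", "add cream", "simmer sauce"),
    ]

    # One trivial inference a planner might draw: steps related by "before"
    # cannot overlap, so their durations add up to a lower bound on total time.
    duration = {s.name: s.minutes for s in steps}
    before = next(c for c in constraints if c[0] == "before")
    lower_bound = duration[before[1]] + duration[before[2]]
    print(f"these two steps alone take at least {lower_bound} minutes")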

In order to minimize human effort while still obtaining deep parses from which we produce ontology, we used more techniques from statistical natural language processing than we typically do in knowledge acquisition for deep QA, compliance, or policy automation.  (Consider that NLP typically achieves less than 90% syntactic accuracy, while such work demands nearly 100% semantic accuracy.)[2]

In the effort, we refined some prior work on representing words as vectors and on semi-supervised learning.  In particular, we adapted semi-supervised active learning similar to that of Stratos & Collins (2015), using enhancements to the canonical correlation analysis (CCA) of Dhillon et al. (2015), to obtain accurate part-of-speech tagging, as conveyed in the following graphic from Stratos & Collins:
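
For readers who want a feel for the mechanics, the sketch below shows the two ingredients in miniature: CCA-style spectral word embeddings computed from word/context co-occurrence counts, roughly in the spirit of Dhillon et al., followed by pool-based active learning by uncertainty sampling for tagging.  The toy corpus, tag set, window size, and dimensionality are all invented for illustration; this is not the pipeline we actually ran.

    import numpy as np
    from scipy.sparse.linalg import svds
    from sklearn.linear_model import LogisticRegression

    # Toy stand-in for recipe sentences (assumption, not our corpus).
    corpus = [
        ["stir", "the", "sauce", "gently"],
        ["chop", "the", "onion", "finely"],
        ["simmer", "the", "sauce", "slowly"],
        ["dice", "the", "onion", "gently"],
    ]
    vocab = sorted({w for sent in corpus for w in sent})
    idx = {w: i for i, w in enumerate(vocab)}

    # Word/context co-occurrence counts within a +/-1 token window (two "views").
    C = np.zeros((len(vocab), len(vocab)))
    for sent in corpus:
        for i, w in enumerate(sent):
            for j in (i - 1, i + 1):
                if 0 <= j < len(sent):
                    C[idx[w], idx[sent[j]]] += 1.0

    # CCA-style scaling: normalize by each view's marginal counts, then a
    # truncated SVD of the rescaled matrix yields low-dimensional embeddings.
    row = C.sum(axis=1, keepdims=True)
    col = C.sum(axis=0, keepdims=True)
    U, s, _ = svds(C / np.sqrt(row) / np.sqrt(col), k=3)
    embed = U * s  # one 3-dimensional vector per word type

    # Pool-based active learning by uncertainty sampling: start from a few
    # labeled word types and repeatedly query the one the tagger is least
    # sure about.  Tags and seed labels are invented for the example.
    labeled = {"stir": "VERB", "onion": "NOUN", "the": "DET"}
    pool = [w for w in vocab if w not in labeled]
    for _ in range(2):
        X = np.array([embed[idx[w]] for w in labeled])
        clf = LogisticRegression(max_iter=1000).fit(X, list(labeled.values()))
        probs = clf.predict_proba(np.array([embed[idx[w]] for w in pool]))
        entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
        query = pool.pop(int(entropy.argmax()))
        labeled[query] = "NOUN"  # in practice a human annotator supplies this tag

Querying only the items the tagger is least sure about is what keeps the human labeling effort low, which was the point of using active learning here.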


Deep Parsing vs. Deep Learning

For those of us who enjoy the intersection of machine learning and natural language, including “deep learning”, which is all the rage, here is an interesting paper by Jayant Krishnamurthy, a PhD student of Tom Mitchell at Carnegie Mellon University, on generalizing vector space models of words to the broader semantics of English:

Essentially, the paper demonstrates how the features of high-precision lexicalized grammars allow machines to learn the compositional semantics of English.  More specifically, it demonstrates learning of compositional semantics beyond the capabilities of recurrent neural networks (RNNs).  In summary, the paper suggests that deep parsing is better than deep learning for understanding the meaning of natural language.

For more information and a different perspective, I recommend the following paper, too:

Note that the authors use Combinatory Categorial Grammar (CCG) while our work uses Head-Driven Phrase Structure Grammar (HPSG), but this is a minor distinction.  For example, compare the logical forms in the Groningen Meaning Bank with the logic produced by the Linguist.  The former uses CCG to produce lambda calculus while the latter uses HPSG to produce predicate calculus (ignoring vagaries of under-specified representations, which are useful for hypothetical reasoning and textual entailment).
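
As a toy illustration of the contrast (not output from the Groningen Meaning Bank or the Linguist), the following sketch composes CCG-style lexical entries, modeled as Python lambdas, for an invented sentence; the composed result flattens into the kind of predicate-calculus formula discussed above.

    # Lexical entries as lambda terms for "the chef chops an onion" (invented).
    chops = lambda obj: lambda subj: f"chop({subj},{obj})"  # transitive verb
    the = lambda noun: noun  # determiners reduced to their nouns for brevity
    an = lambda noun: noun
    chef, onion = "chef1", "onion1"

    # CCG-style composition: the verb consumes its object, then its subject.
    vp = chops(an(onion))        # a lambda still awaiting its subject
    print(vp(the(chef)))         # prints: chop(chef1,onion1)

A fuller treatment would keep the determiners and quantifiers, which is where under-specified representations earn their keep.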

Natural Language Leadership at the Allen Institute for Artificial Intelligence (AI2)

Oren Etzioni is a marvelous choice to lead the Allen Institute for AI (aka AI2).  The NL/ML path is the right path for scaling up the deep knowledge that Paul Allen’s vision of a Digital Aristotle requires.  You can read more about it below, along with more background on the change in direction and some evidence that the path holds great promise.

Going beyond Siri and Watson: Microsoft co-founder Paul Allen taps Oren Etzioni to lead new Artificial Intelligence Institute

Sir Tim Berners-Lee on Ontology

A panel on whether or not ontology is needed to achieve a collective vision for the semantic web was held on Tuesday at the International Semantic Web Conference (ISWC 2009) near Washington, DC.  For most of the panelists the question was rhetorical.  But a few interesting points were made, including that machine learning of ontology is one extreme of a spectrum that extends to human authoring of ontology (however authoritative or coordinated).  Nobody on the panel or in the audience felt that the extreme of human-authored ontology was viable for the long-term vision of a comprehensively semantic and intelligent web.  The panelists clearly believed that machine learning will substantially enrich and automate ontology construction, and although no timeframe was discussed, the prevailing opinion was that substantial ontology will be acquired automatically within the next decade or so.  There was much discussion about the knowledge being in the data, and the conversation had a bit of the statistics-versus-logic debate to it.  Generally, the attitude was “get over it”, and even Pat Hayes, who gave a well-received talk on Blogic and whom one would expect to take the strict logic side of the argument, pointed out seminal work on combining machine learning and logic in natural language understanding of text.

David Karger of MIT’s AI lab challenged the panel from the audience by asserting that the data people posted on the web is much more important than any ontology that might define what that data means.  This set off a bit of a firestorm.  There was consensus that data itself is critically important, if not central.  For the most part, panelists were aghast at the notion that spreadsheets of data would be useless to computers unless the meaning of their headings, for example, were related to concepts defined by reference to ontology those computers understood.
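
To make the spreadsheet point concrete, here is a small hypothetical example: once a column heading is tied to a term in a shared ontology, software can do more with the cell values than match strings.  The vocabulary URI below is invented for illustration.

    from rdflib import Graph, Literal, Namespace, RDF, URIRef

    EX = Namespace("http://example.org/recipe#")  # invented vocabulary
    g = Graph()
    row = URIRef("http://example.org/data/row1")

    # Without the ontology, "prep_time" is just text in a header cell; with it,
    # the column is declared to mean EX.preparationMinutes, a property a
    # machine can look up, compare, and reason over.
    g.add((row, RDF.type, EX.Recipe))
    g.add((row, EX.preparationMinutes, Literal(15)))
    print(g.serialize(format="turtle"))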

With respectful deference, the panel and audience yielded.  Sir Tim Berners-Lee took the floor.