The work he describes overlaps our approach to robust inference from the deep, variable-precision semantics that result from linguistic analysis and disambiguation using the English Resource Grammar (ERG) and the Linguist™.
For those of us who enjoy the intersection of machine learning and natural language, including “deep learning”, which is all the rage, here is an interesting paper on generalizing vector space models of words to the broader semantics of English by Jayant Krishnamurthy, a PhD student of Tom Mitchell's at Carnegie Mellon University:
- Krishnamurthy, Jayant, and Tom M. Mitchell. “Vector Space Semantic Parsing: A Framework for Compositional Vector Space Models.” ACL 2013.
Essentially, the paper demonstrates how the features of high-precision lexicalized grammars allow machines to learn the compositional semantics of English. More specifically, it demonstrates learning of compositional semantics beyond the capabilities of recurrent neural networks (RNNs). In summary, the paper suggests that deep parsing is better than deep learning for understanding the meaning of natural language.
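To make the idea of compositional vector space semantics concrete, here is a toy sketch (not the paper's actual model, and the vectors and matrices are invented for illustration): some words denote vectors, while others denote functions over vectors, so that the meaning of a phrase is computed from the meanings of its parts.

```python
# Toy compositional vector space model (illustrative only):
# nouns are vectors; an adjective is a matrix that maps a noun
# vector to the vector for the modified phrase. In practice the
# matrices are learned from data, guided by the syntactic parse.

def matvec(m, v):
    """Apply a word-as-matrix to a word-as-vector."""
    return [sum(mi * vi for mi, vi in zip(row, v)) for row in m]

dog = [1.0, 0.0]                 # toy noun vector
big = [[2.0, 0.0], [0.0, 1.0]]   # toy adjective matrix

big_dog = matvec(big, dog)       # composed meaning of "big dog"
print(big_dog)                   # [2.0, 0.0]
```

The point of the framework is that the choice of which words act as functions, and how their arguments combine, can be read off a high-precision grammatical analysis rather than learned from scratch.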
For more information and a different perspective, I recommend the following paper, too:
Note that the authors use Combinatory Categorial Grammar (CCG) while our work uses Head-Driven Phrase Structure Grammar (HPSG), but this is a minor distinction. For example, compare the logical forms in the Groningen Meaning Bank with the logic produced by the Linguist: the former uses CCG to produce lambda calculus, while the latter uses HPSG to produce predicate calculus (setting aside underspecified representations, which are useful for hypothetical reasoning and textual entailment).
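The contrast between the two styles of logical form can be sketched as follows (a simplified illustration with invented predicate names, not the actual output of either system): CCG-style semantics builds a lambda-calculus term by function application as the parse is assembled, whereas an HPSG-style analysis can deliver the same meaning as a flat set of predicate-calculus relations.

```python
# CCG-style composition: each word denotes a lambda term, and
# combining constituents is function application. Here we build the
# logical form of "every dog barks" (predicate names are illustrative).
dog   = lambda x: f"dog({x})"
barks = lambda x: f"bark({x})"
every = lambda restr, scope: f"all(x, {restr('x')} -> {scope('x')})"

ccg_form = every(dog, barks)
print(ccg_form)  # all(x, dog(x) -> bark(x))

# HPSG/MRS-style output: the same meaning as a flat list of
# elementary predications sharing a variable (scope underspecification
# and argument labels are elided for brevity).
mrs_form = [
    ("every_q", "x"),
    ("dog_n", "x"),
    ("bark_v", "x"),
]
```

Either form supports inference; the flat representation is what makes underspecification convenient, since scope decisions can be deferred.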