Commercial Intelligence

Posts under ‘Natural Language’

Robust Inference and Slacker Semantics

In preparing for some natural language generation[1], I came across some work on natural logic[2][3] and recognizing textual entailment[4] (RTE) by Richard Bergmair in his PhD at Cambridge: Monte Carlo Semantics: Robust Inference and Logical Pattern Processing with Natural Language Text. The work he describes overlaps with our approach to robust inference from the deep, [...]

It’s hard to reckon nice English

The title is in tribute to Raj Reddy’s classic talk about how it’s hard to wreck a nice beach. I came across interesting work on higher-order and semantic dependency parsing today: Turning on the Turbo: Fast Third-Order Non-Projective Turbo Parsers and Priberam: A Turbo Semantic Parser with Second Order Features. So I gave the software [...]

Properly disambiguating a sentence using the Linguist™

Consider the following disambiguation result from a user of Automata’s Linguist™.

Smart Machines And What They Can Still Learn From People – Gary Marcus

This is a must-watch video from the Allen Institute for AI for anyone seriously interested in artificial intelligence. It’s 70 minutes long, but worth it. Some of the highlights from my perspective are: 27:27, where the key reasons that deep learning approaches fail at understanding language are discussed; 31:30, where the inability of inductive approaches [...]

Deep Learning, Big Data and Common Sense

Thanks to John Sowa’s comment on LinkedIn for this link, which, although slightly dated, contains the following: In August, I had the chance to speak with Peter Norvig, Director of Google Research, and asked him if he thought that techniques like deep learning could ever solve complicated tasks that are more characteristic of human intelligence, [...]

Deep Parsing vs. Deep Learning

For those of us that enjoy the intersection of machine learning and natural language, including “deep learning”, which is all the rage, here is an interesting paper on generalizing vector space models of words to broader semantics of English by Jayant Krishnamurthy, a PhD student of Tom Mitchell at Carnegie Mellon University: Krishnamurthy, Jayant, and [...]

IBM Watson in medical education

IBM recently posted this video, which suggests the relevance of Watson’s capabilities to medical education. The demo uses cases like those that occur on the USMLE exam and Watson’s ability to perform evidentiary reasoning given large bodies of text. The “reasoning paths” followed by Watson in presenting explanations or decision support material use a nice, increasingly [...]

Suggested questions: Inquire vs. Knewton

Knewton is an interesting company providing a recommendation service for adaptive learning applications. In a recent post, Jonathan Goldman describes an algorithmic approach to generating questions. The approach focuses on improving the manual authoring of test questions (known in the educational realm as “assessment items”). It references work at Microsoft Research on the problem of [...]

Natural Language Leadership at the Allen Institute for Artificial Intelligence (AI2)

Oren Etzioni is a marvelous choice to lead the Allen Institute for AI (aka AI2). The NL/ML path is the right path for scaling up the deep knowledge that Paul Allen’s vision of a Digital Aristotle requires. You can read more about it below, and here’s more background on the change in direction and [...]

Affiliate Transactions covered by The Federal Reserve Act (Regulation W)

Benjamin Grosof, co-founder of Coherent Knowledge Systems, is also involved with developing a standard ontology for the financial services industry (i.e., FIBO). In the course of working on FIBO, he is developing a demonstration of defeasible logic concerning Regulation W of the Federal Reserve Act. Regulation W specifies which transactions involving banks and their [...]