Entailment-driven Extracting and Editing for Conversational Machine Reading

When I wrote “Are vitamins subject to sales tax in California?”, I was addressing the process of translating knowledge expressed in formal documents, such as laws, regulations, and contracts, into logic suitable for inference using the Linguist.

Recently, Luke Zettlemoyer, one of my favorite researchers working in natural language processing and reasoning, was among the authors of “Entailment-driven Extracting and Editing for Conversational Machine Reading”.  This is a very nice turn toward knowledge extraction and inference that improves on the superficial reasoning of recognizing textual entailment (RTE).
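If you have not experimented with RTE yourself, here is a minimal sketch using an off-the-shelf natural language inference model.  The library (HuggingFace Transformers), the roberta-large-mnli checkpoint, and the premise/hypothesis pair are my choices for illustration, not anything from the paper.

```python
# A minimal sketch of recognizing textual entailment (RTE) with an
# off-the-shelf NLI model, assuming the HuggingFace Transformers library
# and the public roberta-large-mnli checkpoint.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

# Hypothetical premise/hypothesis pair, chosen for illustration only.
premise = "Sales of food products are exempt from California sales tax."
hypothesis = "California taxes vitamins."

# The text-classification pipeline accepts premise/hypothesis as a pair.
result = nli({"text": premise, "text_pair": hypothesis})
print(result)  # e.g., {'label': 'NEUTRAL', 'score': ...}
```

Surface-level entailment cannot tell you whether vitamins count as “food products”; closing that kind of gap is exactly what makes deeper extraction and inference attractive.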

I recommend this paper.  It relates to BERT, which is among my current favorites in deep learning for NL/QA.  Here is an image from the paper, FYI:

[Figure from “Entailment-driven Extracting and Editing for Conversational Machine Reading”]

Problems with Probabilistic Parsing

We are using statistical techniques to increase the automation of logical and semantic disambiguation, but nothing is easy with natural language.

Here is the Stanford Parser (the probabilistic context-free grammar version) applied to a couple of sentences.  There is nothing wrong with the Stanford Parser!  It’s state of the art and worthy of respect for what it does well.
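If you want to try this yourself, here is a minimal sketch using NLTK's CoreNLP wrapper.  It assumes a Stanford CoreNLP server is already running locally; the example sentence is mine, standing in for the ones discussed here.

```python
# A minimal sketch of parsing with the Stanford parser via NLTK's CoreNLP
# wrapper. Assumes a CoreNLP server is already running locally, e.g.:
#   java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000
from nltk.parse.corenlp import CoreNLPParser

parser = CoreNLPParser(url="http://localhost:9000")

# Example sentence (a stand-in for the sentences discussed in this post).
sentence = "Are vitamins subject to sales tax in California?"

# raw_parse() accepts a raw string and yields constituency parse trees.
tree = next(parser.raw_parse(sentence))
tree.pretty_print()
```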

Continue reading “Problems with Probabilistic Parsing”

Natural Intelligence

Deep natural language understanding (NLU) is different from deep learning, as is deep reasoning.  Deep learning facilitates deep NLP and will facilitate deeper reasoning, but it’s deep NLP for knowledge acquisition and question answering that seems most critical for general AI.  If that’s the case, we might call such general AI, “natural intelligence”.

Deep learning on its own delivers only the most shallow reasoning and embarrasses itself due to its lack of “common sense” (or any knowledge at all, for that matter!).  DARPA, the Allen Institute, and deep learning experts have come to their senses about the limits of deep learning with regard to general AI.

General artificial intelligence requires all of it: deep natural language understanding[1], deep learning, and deep reasoning.  The deep aspects are critical but no more so than knowledge (including “common sense”).[2]

Continue reading “Natural Intelligence”

Are vitamins subject to sales tax in California?

What is the part of speech of “subject” in the sentence:

  • Are vitamins subject to sales tax in California?

Related questions might include:

  • Does California subject vitamins to sales tax?
  • Does California sales tax apply to vitamins?
  • Does California tax vitamins?

“Vitamins” is the direct object of the verb in each of these sentences, so perhaps you would think “subject” is a verb in the original sentence, too…
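Out of curiosity, you can check what an off-the-shelf tagger commits to.  Here is a minimal sketch with spaCy; the library and model choices are mine, for illustration.

```python
# A minimal sketch checking what part of speech a tagger assigns to
# "subject". Assumes spaCy and its small English model are installed:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Are vitamins subject to sales tax in California?")

for token in doc:
    print(f"{token.text:12} {token.pos_:6} {token.tag_}")
# Whether "subject" comes back as ADJ, NOUN, or VERB depends on the model;
# the point is that a tagger must commit to one reading.
```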

Continue reading “Are vitamins subject to sales tax in California?”

Common sense about deep learning

I regularly build deep learning models for natural language processing, and today I gave one a try that has been the leader on the Stanford Question Answering Dataset (SQuAD).  It is an impressive NLP platform built using PyTorch.  But it’s still missing the big picture (i.e., it doesn’t “know” much).

Generally, NLP systems that emphasize Big Data (e.g., deep learning approaches) but eschew more explicit knowledge representation and reasoning are interesting but unintelligent.  Think Siri and Alexa, for example.  They might get a simple factoid question right if a Google search can find closely related text, but not much more.
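To make the factoid point concrete, here is a minimal sketch of extractive, SQuAD-style question answering with HuggingFace Transformers.  The default model and the context text are assumptions for illustration.

```python
# A minimal sketch of extractive QA: the model can only select a span from
# the context it is handed; it has no knowledge beyond that text.
from transformers import pipeline

qa = pipeline("question-answering")  # downloads a default SQuAD-tuned model

# Hypothetical context, chosen for illustration only.
context = ("In California, sales tax generally applies to vitamins and "
           "dietary supplements because they are not considered food "
           "products.")

answer = qa(question="Are vitamins subject to sales tax in California?",
            context=context)
print(answer)  # a span copied from the context, with a confidence score
```

Note that this is a yes/no question, yet an extractive model can only copy out a span; and without closely related text in the context, the same question yields a low-confidence, often irrelevant span.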

Here is a simple demonstration of problems that the state of the art in deep machine learning is far from solving…

Here is a paragraph from today’s Wall Street Journal article about the Fed, in which the deep learning system has “found” what the pronouns “this” and “they” reference:

The essential point here is that the deep learning system is missing common sense.  It is “the need”, not “a raise”, that is referenced by “this”.  And “they” refers to “officials”, not “the minutes”.
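If you want to reproduce this kind of experiment, here is a minimal sketch using a neural coreference model.  It assumes the AllenNLP library and its public SpanBERT coreference checkpoint; the text is a made-up stand-in for the WSJ paragraph.

```python
# A minimal sketch of neural coreference resolution, assuming AllenNLP and
# its public SpanBERT coreference model.
from allennlp.predictors.predictor import Predictor

predictor = Predictor.from_path(
    "https://storage.googleapis.com/allennlp-public-models/"
    "coref-spanbert-large-2021.03.10.tar.gz"
)

# Hypothetical Fed-minutes-style text standing in for the WSJ paragraph.
text = ("The minutes said officials saw the need for a raise, and this "
        "was something they had discussed before.")

result = predictor.predict(document=text)

# Each cluster is a list of [start, end] token spans the model believes
# corefer; inspect them to see which antecedent it picked for each pronoun.
for cluster in result["clusters"]:
    print([" ".join(result["document"][s:e + 1]) for s, e in cluster])
```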

Bottom line: if you need your natural language understanding system to be smarter than this, you are not going to get there using deep learning alone.

Parsing Winograd Challenges

The Winograd Schema Challenge is an alternative to the Turing Test for assessing artificial intelligence.  The essence of the test involves resolving pronouns.  To date, systems have not fared well on the test, for several reasons.  Three come to mind:

  1. The natural language processing involved in the word problems is beyond the state of the art.
  2. Resolving many of the pronouns requires more common sense knowledge than state of the art systems possess.
  3. Resolving many of the problems requires pragmatic reasoning beyond the state of the art.

As an example, one of the simpler exemplary problems is:

  • There is a pillar between me and the stage, and I can’t see around it.

A heuristic system (or a deep learning one) could infer that “it” does not refer to “me” or “I” and toss a coin between “pillar” and “stage”.  A system worthy of passing the Winograd Schema Challenge should “know” it’s the pillar.

Even this simple sentence presents some NLP challenges that are easy to overlook.  For example, does the prepositional phrase “between me and the stage” modify “pillar” or the verb “is”?
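A dependency parse makes the attachment decision explicit.  Here is a minimal sketch with spaCy (again assuming the small English model), printing where the parser attaches “between” and “it”:

```python
# A minimal sketch inspecting attachment decisions in a dependency parse.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("There is a pillar between me and the stage, "
          "and I can't see around it.")

for token in doc:
    if token.text in ("between", "it"):
        # token.head is the word this token attaches to in the dependency
        # tree; token.dep_ names the grammatical relation.
        print(f"{token.text!r} --{token.dep_}--> {token.head.text!r}")
# Whether "between" attaches to "pillar" or to "is" depends on the model,
# and no dependency parse, by itself, resolves what "it" refers to.
```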

This is not much of a challenge, however, so let’s touch on some deeper issues and a more challenging problem…

Continue reading “Parsing Winograd Challenges”

TA/NLP: It’s a jungle out there!

Text analytics and natural language processing have made tremendous advances in the last few years.  Unfortunately, there is a lot more to understanding natural language than TA/NLP.

I was reading a paper today about NLP pipelines for question answering that used machine learning to find which tools are good at which tasks and to configure a pipeline by selecting the best tool for a given task from each of the types of components in the pipeline.  The paper has a long list of such components, so I checked a few out.  Most of those of interest were available on the web, so they could be composed into pipelines without a lot of software setup.  Looking at these, my interest quickly gave way to disappointment.  Here are some of the reasons.

I am not surprised by these results; NLU is hard.  But they are not particularly strong results, either.  I’m surprised that people find such results useful (if they do).
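Setting the weak components aside, the configuration idea itself is simple to sketch: given benchmark scores for each tool on each task, pick the top scorer for each stage of the pipeline.  The tool names and scores below are entirely hypothetical, and the paper’s actual method is surely more sophisticated.

```python
# A caricature of pipeline configuration: select the best-scoring tool per
# task. All tool names and scores are made up for illustration.
from typing import Dict

scores: Dict[str, Dict[str, float]] = {
    "entity_recognition":  {"ToolA": 0.81, "ToolB": 0.77, "ToolC": 0.84},
    "relation_extraction": {"ToolA": 0.52, "ToolD": 0.61},
    "entity_linking":      {"ToolB": 0.58, "ToolE": 0.66},
}

# For each task, choose the tool with the highest benchmark score.
pipeline = {task: max(tools, key=tools.get) for task, tools in scores.items()}
print(pipeline)
# {'entity_recognition': 'ToolC', 'relation_extraction': 'ToolD',
#  'entity_linking': 'ToolE'}
```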

Continue reading “TA/NLP: It’s a jungle out there!”