This is not an entirely simple article, but it walks you through, from start to finish, how we get from English to logic. In particular, it shows how English sentences can be directly translated into formal logic for use in automated reasoning with theorem provers, in logic programs as simple as Prolog, and even in production rule systems.
There is a section in the middle that gets a bit technical about the relationship between full logic and more limited systems (e.g., Prolog or production rule systems). You don’t have to appreciate the details, but we include them to avoid the impression of hand-waving.
The examples here are trivial; you can find many more, and more complex, examples throughout Automata’s web site.
Consider the sentence, “A cell has a nucleus.”:
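A plausible first-order reading (not necessarily the exact output of the software discussed in the article) treats the first indefinite universally and the second existentially:

```latex
% A plausible first-order reading of "A cell has a nucleus."
\forall x \,\bigl(\mathit{cell}(x) \rightarrow \exists y \,(\mathit{nucleus}(y) \wedge \mathit{has}(x, y))\bigr)
```

Skolemizing the existential turns a formula like this into Horn clauses that a logic program as simple as Prolog can use, which is roughly the relationship the more technical middle section of the article examines.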
Continue reading “Simply Logical English”
Deep natural language understanding (NLU) is different from deep learning, as is deep reasoning. Deep learning facilitates deep NLP and will facilitate deeper reasoning, but it’s deep NLP for knowledge acquisition and question answering that seems most critical for general AI. If that’s the case, we might call such general AI “natural intelligence”.
Deep learning on its own delivers only the most shallow reasoning and embarrasses itself due to its lack of “common sense” (or any knowledge at all, for that matter!). DARPA, the Allen Institute, and deep learning experts have come to their senses about the limits of deep learning with regard to general AI.
General artificial intelligence requires all of it: deep natural language understanding, deep learning, and deep reasoning. The deep aspects are critical but no more so than knowledge (including “common sense”). Continue reading “Natural Intelligence”
In a prior post we showed how extraordinarily ambiguous, long sentences can be precisely interpreted. Here, by request, we take a simpler look.
Let’s take a sentence that has more than 10 parses and configure the software to disambiguate among no more than 10.
Once again, this is a trivial sentence to disambiguate in seconds without iterative parsing!
The immediate results might look like this:
Suppose the intent is not that the telescope is with my friend, so veto “telescope with my friend” with a right-click.
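To make the mechanics concrete, here is a toy sketch of that veto-and-reparse loop. It is not the Linguist’s API; the sentence, the candidate attachments, and the data structures are made up for illustration.

```python
# A toy sketch of the veto-and-reparse loop (not the Linguist's API). The sentence and
# its candidate attachments are made up for illustration.

# Hypothetical candidate parses for something like "I saw the bird with my telescope
# with my friend", each represented by the attachments it commits to.
candidate_parses = [
    {"saw with my telescope", "saw with my friend"},
    {"saw with my telescope", "telescope with my friend"},
    {"bird with my telescope", "telescope with my friend"},
    {"bird with my telescope", "bird with my friend"},
]

vetoed = set()

def surviving_parses():
    """Parses that do not rely on any vetoed constituent."""
    return [p for p in candidate_parses if not (p & vetoed)]

# Right-clicking to veto "telescope with my friend" eliminates every parse that
# attaches "with my friend" to the telescope before the next iteration.
vetoed.add("telescope with my friend")
for parse in surviving_parses():
    print(sorted(parse))
```

Each veto shrinks the space of remaining parses, so a few clicks can resolve a sentence with many readings.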
Continue reading “Iterative Disambiguation”
A decade or so ago, we were debating how to educate Paul Allen’s artificial intelligence in a meeting at Vulcan headquarters in Seattle with researchers from IBM, Cycorp, SRI, and other places.
We were talking about how to “engineer knowledge” from textbooks into formal systems like Cyc or Vulcan’s SILK inference engine (which we were developing at the time). Although some progress had been made in prior years, the onus of acquiring knowledge using SRI’s Aura remained too high and the reasoning capabilities that resulted from Aura, which targeted University of Texas’ Knowledge Machine, were too limited to achieve Paul’s objective of a Digital Aristotle. Unfortunately, this failure ultimately led to the end of Project Halo and the beginning of the Aristo project under Oren Etzioni’s leadership at the Allen Institute for Artificial Intelligence.
At that meeting, I brought up the idea of simply translating English into logic, as my former product called “Authorete” did. (We renamed it before Haley Systems was acquired by Oracle, prior to the meeting.)
Continue reading ““Only full page color ads can run on the back cover of the New York Times Magazine.””
What is the part of speech of “subject” in the sentence:
- Are vitamins subject to sales tax in California?
Related questions might include:
- Does California subject vitamins to sales tax?
- Does California sales tax apply to vitamins?
- Does California tax vitamins?
“Vitamins” is the direct object of the verb in each of these sentences, so perhaps you would think “subject” is a verb in the subject sentence…
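For comparison, you can ask a purely statistical tagger what it thinks. A minimal sketch using spaCy follows (assuming the en_core_web_sm model is installed; no claim is made here about what it will answer):

```python
# Inspect how an off-the-shelf statistical tagger labels "subject" in the sentence.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Are vitamins subject to sales tax in California?")
for token in doc:
    # coarse part of speech, fine-grained tag, and dependency relation for each word
    print(token.text, token.pos_, token.tag_, token.dep_)
```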
Continue reading “Are vitamins subject to sales tax in California?”
I regularly build deep learning models for natural language processing, and today I tried one that has been the leader on the Stanford Question Answering Dataset (SQuAD). It is an impressive NLP platform built using PyTorch, but it’s still missing the big picture (i.e., it doesn’t “know” much).
Generally, NLP systems that emphasize Big Data (e.g., deep learning approaches) but eschew more explicit knowledge representation and reasoning are interesting but unintelligent. Think Siri and Alexa, for example. They might get a simple factoid question right if a Google search can find closely related text, but not much more.
Here is a simple demonstration of problems that the state of the art in deep machine learning is far from solving…
Here is a paragraph from a Wall Street Journal article about the Fed today, in which the deep learning system has “found” what the pronouns “this” and “they” reference:
The essential point here is that the deep learning system is missing common sense. It is “the need”, not “a raise”, that is referenced by “this”. And “they” references “officials”, not “the minutes”.
Bottom line: if you need your natural language understanding system to be smarter than this, you are not going to get there using deep learning alone.
The following is motivated by Section 6359 of the California Sales and Use Tax. It demonstrates how knowledge can be acquired from dictionary definitions:
Here, we’ve taken a definition from WordNet, prefixed it with the word followed by a colon, and parsed it using the Linguist.
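For readers who want to reproduce the “word: definition” inputs (though not the parsing itself), here is a minimal sketch using NLTK’s WordNet interface; the choice of “vitamin” as the word is just an illustrative assumption.

```python
# Format WordNet glosses in the "word: definition" form described above.
# Requires: pip install nltk (the WordNet corpus is fetched on first use).
import nltk
nltk.download("wordnet", quiet=True)
from nltk.corpus import wordnet as wn

word = "vitamin"  # illustrative choice; substitute any defined term of interest
for synset in wn.synsets(word, pos=wn.NOUN):
    print(f"{word}: {synset.definition()}")
```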
Continue reading “Dictionary Knowledge Acquisition”
A Linguist user recently had a question about part of a sentence that boiled down to something like the following:
The question was whether “many” was an adjective, cardinality, or noun in this sentence. It’s a reasonable question!
Continue reading “‘believed by many’”
The Winograd Challenge is an alternative to the Turing Test for assessing artificial intelligence. The essence of the test involves resolving pronouns. To date, systems have not fared well on the test for several reasons. There are 3 that come to mind:
- The natural language processing involved in the word problems is beyond the state of the art.
- Resolving many of the pronouns requires more common sense knowledge than state of the art systems possess.
- Resolving many of the problems requires pragmatic reasoning beyond the state of the art.
As an example, one of the simpler example problems is:
- There is a pillar between me and the stage, and I can’t see around it.
A heuristic system (or a deep learning one) could infer that “it” does not refer to “me” or “I” and toss a coin between “pillar” and “stage”. A system worthy of passing the Winograd Challenge should “know” it’s the pillar.
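To make that “knowing” concrete, here is a toy illustration (not a general solution) of the common-sense step: encode the fact that the pillar stands between the speaker and the stage, add the rule that an object between a viewer and something else blocks the view, and pick the candidate antecedent that satisfies it.

```python
# A toy illustration of the common-sense inference a Winograd solver needs for
# "There is a pillar between me and the stage, and I can't see around it."
candidates = ["pillar", "stage"]

# Minimal world knowledge from the sentence: the pillar is between the speaker and the stage.
between = {"pillar": ("me", "stage")}

def blocks_view(obj, viewer="me"):
    """An object blocks the viewer's line of sight if it stands between the viewer and something."""
    return obj in between and viewer in between[obj]

# "it" in "I can't see around it" should be whatever blocks the view.
antecedent = next(c for c in candidates if blocks_view(c))
print(antecedent)  # -> pillar
```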
Even this simple sentence presents some NLP challenges that are easy to overlook. For example, does “between” modify the pillar or the verb “is”?
This is not much of a challenge, however, so let’s touch on some deeper issues and a more challenging problem…
Continue reading “Parsing Winograd Challenges”