Natural Intelligence

Deep natural language understanding (NLU) is different from deep learning, as is deep reasoning.  Deep learning facilitates deep NLP and will facilitate deeper reasoning, but it is deep NLP for knowledge acquisition and question answering that seems most critical for general AI.  If that is the case, we might call such general AI "natural intelligence".

Deep learning on its own delivers only the shallowest reasoning and embarrasses itself due to its lack of "common sense" (or any knowledge at all, for that matter!).  DARPA, the Allen Institute, and deep learning experts have come to their senses about the limits of deep learning with regard to general AI.

General artificial intelligence requires all of it: deep natural language understanding[1], deep learning, and deep reasoning.  The deep aspects are critical but no more so than knowledge (including “common sense”).[2]

  • An agent that does not think cannot be intelligent.

To think, an agent must do some reasoning with some knowledge.

There are several problems in realizing intelligence within an agent or system, the most immediate of which is how it acquires knowledge.  Here, we suggest that knowledge is most easily acquired by understanding language (e.g., English).  There seems to be little controversy on this point, since the most intelligent agents obtain most of their knowledge by "reading" (e.g., Wikipedia).

Thanks to improving natural language understanding capabilities, knowledge acquisition is less of a practical problem today than it has been for decades.  Today, we can educate machines at scale.

  • Facts can be scraped out of the web corpus with high certainty and in great volume.
  • The knowledge in a middle- or high-school textbook can be encoded in a form suitable for automated reasoning for less than it cost to write the book in the first place.
  • Given such knowledge, machines can earn college credit on Advanced Placement tests.[3]
  • The text of laws and regulations can be precisely understood by machine much more quickly than it can be authored and interpreted by legislators, regulators, businesses, and citizens.

Millions of sentences will be translated from English into axiomatic knowledge, and machines will reason over that knowledge with great power, their thinking guided by deep learning.  Within 10 years, more knowledge will be encoded for machine reasoning than a human being can read – let alone comprehend – in a lifetime.
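
To make "translated from English into axiomatic knowledge" concrete, here is a minimal Python sketch.  The predicates, the single rule, and the forward-chaining loop are invented for illustration; real pipelines emit far richer logical forms.

    # Toy example: English sentences rendered as axioms, then used in
    # forward-chaining inference.
    # "Socrates is a man."    ->  man(socrates)
    # "Every man is mortal."  ->  man(X) => mortal(X)
    facts = {("man", "socrates")}

    def every_man_is_mortal(known):
        """man(X) => mortal(X), applied to every matching fact."""
        return {("mortal", x) for (pred, x) in known if pred == "man"}

    def forward_chain(facts, rules):
        """Apply the rules until no new facts are derived (a fixpoint)."""
        derived = set(facts)
        while True:
            new = set()
            for rule in rules:
                new |= rule(derived)
            if new <= derived:
                return derived
            derived |= new

    kb = forward_chain(facts, [every_man_is_mortal])
    print(("mortal", "socrates") in kb)   # True: the machine now "knows" this

The point is only that once sentences become axioms, answering a question reduces to mechanical inference over the knowledge base.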

Of course, this raises questions, such as:

  • what reasoning systems will use this knowledge,
  • how will they think,
  • and how intelligent will they seem?

Staying in the present, where is the reasoning technology that shows the way or suggests what to expect?

Personally, I am not aware of too much, which is disappointing.  There are important islands, but the field of knowledge-based artificial intelligence has been steamrolled since the expert and frame-based systems of the eighties.  Most recently, real breakthroughs in neural networks and deep learning have taken the air out of the room for knowledge-based AI.  The pendulum is now swinging back, however, such as with regard to "explainable" AI and "common sense", the lack of which limits the usefulness and intelligence that learning without reasoning can deliver.

As an entrepreneur, I was impressed with Siri and Watson, but I have never found them impressive from an AI perspective.  From an engineering perspective, Watson was tremendous, however.  And the deep NLP component behind Watson was on the right track.

The “intelligent agent” tools from Microsoft (LUIS), Amazon, IBM, and others, especially all the chatbot froth, seem nothing more than scripting tools with simple notions of state and limited types of queries rather than AI platforms.  I recall Viv, from the creators of Siri, being a visual programming tool prior to its acquisition by Samsung.  Such platforms are not going to deliver more general or deeper AI.

Advances in speech recognition and synthesis in Siri and Alexa (mostly due to deep learning) have been impressive, but they are more like search engines than intelligent agents.  Searching the web for Siri or Alexa and “inference engine” finds nothing of interest.  (I expect this to change in the near future.)

All the interesting stuff is hidden in the academic literature, research projects, and very few companies.

Years ago, Carnegie Mellon and Stanford were home to the most significant work in cognitive and knowledge-based AI.  Today, probably due to diversion of research funding, there are fewer and smaller pockets of work.

Corporate research projects are, of course, hard to spot.  I know of several at Google, Intel, IBM, Apple, and elsewhere, but even they are isolated within their companies.  The emphasis is on big data, deep learning, and more mundane commercial applications, such as personalization of advertising.

One area at the intersection of academic and corporate research worth touching on is theorem proving, including applications of deep learning such as Google/DeepMind’s “neural Turing machines”.[4]  This is an area we are working in.  We believe so-called embeddings and approaches such as those demonstrated by AlphaZero’s success at Go will have a significant impact.  We believe this needs to be coupled with concepts like “bounded rationality” and defeasibility.  You can find more about the latter on this site, and you can gain insight into the intersection of deep reasoning with deep learning from Google’s Deep Math[5] research.  In general, though, the field of theorem proving has been stable this decade.[6]
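
For a flavor of how deep learning can guide theorem proving, here is a minimal sketch of neural premise selection in the spirit of Deep Math.  The embed function below is a stand-in for a trained encoder, and the names, statements, and scoring are assumptions made only for illustration.

    import numpy as np

    def embed(statement):
        """Stand-in for a learned encoder (e.g., a network trained as in
        Deep Math) that maps a formal statement to a fixed-size vector."""
        rng = np.random.default_rng(sum(map(ord, statement)))
        return rng.standard_normal(64)

    def select_premises(conjecture, premises, k=3):
        """Rank candidate premises by cosine similarity to the conjecture
        and return the top k; this is the selection step that guides
        proof search."""
        c = embed(conjecture)
        def score(p):
            v = embed(p)
            return float(np.dot(c, v) / (np.linalg.norm(c) * np.linalg.norm(v)))
        return sorted(premises, key=score, reverse=True)[:k]

    candidates = ["man(X) => mortal(X)", "man(socrates)", "planet(earth)"]
    print(select_premises("mortal(socrates)", candidates, k=2))

The top-ranked premises would then be handed to a conventional prover, shrinking its search space; that narrowing is where this line of work aims to help.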

The few companies specializing in knowledge-based AI are an interesting lot.  I have to include Wolfram Alpha[7], of course.  Cycorp[8] is more notable, in my opinion, while Alpha is more usable “off the shelf”.  Also keep an eye on the little-known Soar Technology.[9]

Aside from these, we get into small vendors of logic programming technology, largely based on Prolog.  We go into this family of logic programming technologies elsewhere.  To sum up our thinking, expect a bounded-rationality, defeasible-logic theorem-proving system that incorporates the well-founded semantics, distributed representations, and deep-learning search heuristics.
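
As a rough illustration of what defeasibility buys, here is a toy Python sketch in which a more specific rule defeats a more general one.  The rule base and the priority scheme are invented for illustration; a serious system would implement defeasible logic under the well-founded semantics rather than this ad hoc ranking.

    # Toy defeasible reasoning: a more specific rule defeats a general one.
    rules = [
        # (name, antecedent, (attribute, value), priority)
        ("birds_fly",     "bird",    ("flies", True),  1),
        ("penguins_dont", "penguin", ("flies", False), 2),  # more specific
    ]

    facts = {"tweety": {"bird"}, "pingu": {"bird", "penguin"}}

    def conclude(individual):
        """Fire every applicable rule; for each attribute keep the
        conclusion of the highest-priority (least defeated) rule."""
        applicable = [r for r in rules if r[1] in facts[individual]]
        best = {}
        for name, _, (attr, value), priority in applicable:
            if attr not in best or priority > best[attr][1]:
                best[attr] = (value, priority)
        return {attr: value for attr, (value, _) in best.items()}

    print(conclude("tweety"))  # {'flies': True}
    print(conclude("pingu"))   # {'flies': False}: the specific rule wins

Resolving conflicts by specificity (or some other priority) is what lets such a system retract tentative conclusions gracefully instead of becoming inconsistent.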


[1] Deep NLP is a topic throughout Automata, Inc.’s web site.

[2] Doug Lenat, of Cyc and our advisor at Inference, has been shining a light on this since the early eighties.

[3] Expect them to pass medical school admissions and post-graduate exams within a few years.

[4] https://en.wikipedia.org/wiki/Neural_Turing_machine

[5] https://github.com/tensorflow/deepmath

[6] See “Thousands of Problems for Theorem Provers” (TPTP).

[7] https://www.wolframalpha.com

[8] http://www.cyc.com/

[9] https://soartech.com/
