Natural Intelligence

Deep natural language understanding (NLU) is different from deep learning, as is deep reasoning.  Deep learning facilitates deep NLU and will facilitate deeper reasoning, but it’s deep NLU for knowledge acquisition and question answering that seems most critical for general AI.  If that’s the case, we might call such general AI, “natural intelligence”.

Deep learning on its own delivers only the shallowest reasoning and embarrasses itself due to its lack of “common sense” (or any knowledge at all, for that matter!).  DARPA, the Allen Institute, and deep learning experts have come to their senses about the limits of deep learning with regard to general AI.

General artificial intelligence requires all of these: deep natural language understanding[1], deep learning, and deep reasoning.  The deep aspects are critical, but no more so than knowledge (including “common sense”).[2] Continue reading “Natural Intelligence”

Affiliate Transactions covered by The Federal Reserve Act (Regulation W)

Benjamin Grosof, co-founder of Coherent Knowledge Systems, is also involved in developing a standard ontology for the financial services industry, the Financial Industry Business Ontology (FIBO).  In the course of working on FIBO, he is developing a demonstration of defeasible logic concerning Regulation W of the Federal Reserve Act.  Regulation W specifies which transactions involving banks and their affiliates are prohibited under Section 23A of the Act.  Various documents related to this work are being captured within the Linguist™ platform.  This is a brief note on how those documents can be imported into the platform for curation into formal semantics and logic (as Benjamin and Coherent are doing). Continue reading “Affiliate Transactions covered by The Federal Reserve Act (Regulation W)”
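
For a flavor of the logic involved, here is a minimal sketch of a defeasible rule in Python: a covered transaction between a bank and its affiliate is prohibited by default under Section 23A, unless an exemption defeats that default.  The predicates, affiliate data, and exemption names below are hypothetical; this is not Coherent’s actual Rulelog encoding of Regulation W.

    # Hedged sketch of a default rule with an exception; NOT Coherent's
    # actual Rulelog/ErgoAI encoding of Regulation W.
    from dataclasses import dataclass

    @dataclass
    class Transaction:
        bank: str
        counterparty: str
        exemptions: tuple = ()  # e.g., ("23A(d) exemption",), hypothetical

    # Hypothetical affiliate relationships.
    AFFILIATES = {"BigBank": {"BigBank Securities"}}

    def is_affiliate(bank: str, counterparty: str) -> bool:
        return counterparty in AFFILIATES.get(bank, set())

    def prohibited(t: Transaction) -> bool:
        """Default: transactions with affiliates are prohibited under
        Section 23A, unless a listed exemption defeats the default."""
        if not is_affiliate(t.bank, t.counterparty):
            return False
        return not t.exemptions  # any exemption defeats the prohibition

    print(prohibited(Transaction("BigBank", "BigBank Securities")))  # True
    print(prohibited(Transaction("BigBank", "BigBank Securities",
                                 ("23A(d) exemption",))))            # False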

Artificially Intelligent Educational Technology

Over the last two years, machines have demonstrated their ability to read, listen, and understand English well enough to beat the best at Jeopardy!, answer questions via iPhone, and earn college credit on advanced placement exams.  Today, Google, Microsoft, and others are rushing to respond to IBM and Apple with ever more competent artificially intelligent systems that answer questions and support decisions.

What do such developments suggest for the future of education? Continue reading “Artificially Intelligent Educational Technology”

Knowledge acquisition using lexical and semantic ontology

In developing a compliance application based on the institutional review board policies of Johns Hopkins’ Dept. of Medicine, we have to clarify the following sentence:

  • Projects involving drugs or medical devices other than the use of an approved drug or medical device in the course of medical practice and projects whose data will be submitted to or held for inspection by the FDA will not be exempt from JHM IRB review UNLESS that use falls within the Emergency Use provisions of 21 CFR 56.102 (d).

As you can see, there are a number of compound terms and acronyms, as well as references to the Code of Federal Regulations, that need to be defined or recognized in order to understand this sentence.  Continue reading “Knowledge acquisition using lexical and semantic ontology”
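
As a hedged illustration of what such definitions might look like (the term list and concept names here are invented, not the Linguist platform’s actual lexicon), a lexical ontology can register acronyms and compound terms so that a parser treats each as a single meaningful unit:

    # Toy lexicon mapping acronyms and compound terms to ontology
    # concepts; all entries are illustrative, not Linguist's lexicon.
    LEXICON = {
        "JHM IRB": "JohnsHopkinsMedicine_InstitutionalReviewBoard",
        "FDA": "FoodAndDrugAdministration",
        "21 CFR 56.102 (d)": "CFR_Title21_Part56_Section102d",
        "medical device": "MedicalDevice",
        "Emergency Use": "EmergencyUseProvision",
    }

    def annotate(sentence: str) -> list:
        """Return (surface form, concept) pairs found in the sentence,
        longest terms first so compounds win over their parts."""
        found = []
        for term in sorted(LEXICON, key=len, reverse=True):
            if term in sentence:
                found.append((term, LEXICON[term]))
        return found

    sentence = ("... will not be exempt from JHM IRB review UNLESS that "
                "use falls within the Emergency Use provisions of "
                "21 CFR 56.102 (d).")
    print(annotate(sentence))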

Neat vs. Scruffy and Watson

Recently, John Sowa has commented on LinkedIn and in correspondence with some of us at Coherent Knowledge Systems on the old adage due to Schank concerning the Neats vs. the Scruffies.  The Neats want nice formal logics as the basis of artificial intelligence.  This includes anyone who prefers classical logic (e.g., Common Logic, RIF-BLD, or SBVR) or standard ontologies (e.g., OWL-DL) for representing knowledge and reasoning with it.  The Scruffies may use well-defined technology, but are not constrained by it.  They’ll do whatever they think works now, whether or not it is a good long-term solution, and despite its shortcomings, as long as it achieves immediate objectives.

Watson is scruffy.  It doesn’t try to understand or formally represent knowledge.  It combines many technologies into an evidentiary framework that allows it to “guess” effectively.
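
A rough sketch of that evidentiary style follows; the candidate answers, scorers, and weights are invented for illustration, and IBM’s DeepQA pipeline is vastly more elaborate:

    # Rank candidate answers by a weighted combination of independent
    # evidence scores, in the spirit of an evidentiary framework.
    def combine(candidates, scorers, weights):
        """candidates: answer strings; scorers: functions mapping an
        answer to a score in [0, 1]; weights: one weight per scorer."""
        ranked = [(sum(w * s(answer) for s, w in zip(scorers, weights)),
                   answer) for answer in candidates]
        return sorted(ranked, reverse=True)

    # Toy scorers standing in for passage search, answer typing, etc.
    scorers = [
        lambda a: 1.0 if "Lincoln" in a else 0.2,  # passage evidence
        lambda a: 1.0 if a.istitle() else 0.5,     # answer-type check
    ]
    print(combine(["Abraham Lincoln", "a log cabin"], scorers, [0.7, 0.3]))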

Today, in response to continued discussion in the Natural Language Processing group on LinkedIn under the topic “This is Watson”, I’m posting the following presentation on Project Sherlock and the Linguist vs. Google and IBM.

Essentially, the neat approach is more viable today than ever.  So, chalk one up for the Neats, including Dr. Sowa and Menno Mafait, whose comments appear in that discussion.

During a presentation at CMU after winning the game show, IBM admitted that in order to get the last leg of improvement needed to win Jeopardy!, they needed to do some “neat” ontological knowledge acquisition, too!

Deep question answering: Watson vs. Aristotle

At the SemTech conference last week, a few companies asked me how to respond to IBM’s Watson, given my involvement with rapid knowledge acquisition for deep question answering at Vulcan.  My answer varies with whether there is any subject matter focus, but essentially involves extending their approach with deeper knowledge and more emphasis on logical in addition to textual entailment.
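
To make that distinction concrete, here is a toy contrast in Python; both functions are purely illustrative, not the machinery used by Watson or at Vulcan.  Textual entailment judges support by surface overlap, while logical entailment derives the answer from formal facts and rules:

    # Toy contrast between textual and logical entailment.
    def textual_entailment(passage: str, hypothesis: str) -> bool:
        """Surface heuristic: does the passage cover most hypothesis words?"""
        p, h = set(passage.lower().split()), set(hypothesis.lower().split())
        return len(h & p) / len(h) > 0.7

    def logical_entailment(facts: set, rules: list, query) -> bool:
        """Forward chaining over (premises, conclusion) rules."""
        derived, changed = set(facts), True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if set(premises) <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return query in derived

    print(textual_entailment("aspirin is an NSAID that inhibits COX",
                             "aspirin inhibits COX"))                # True
    facts = {("isa", "aspirin", "NSAID")}
    rules = [([("isa", "aspirin", "NSAID")],
              ("inhibits", "aspirin", "COX"))]
    print(logical_entailment(facts, rules,
                             ("inhibits", "aspirin", "COX")))        # True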

Today, in a discussion on the LinkedIn NLP group, there was some interest in finding more technical details about Watson.  A year ago, IBM published its most detailed technical account of Watson to date in the IBM Journal of Research and Development.  Most of those journal articles are available for free on the web.  For convenience, here are my bookmarks to them.

Cyc is more than encyclopedic

I had the pleasure of visiting with some fine folks at Cycorp in Austin, Texas recently.  Cycorp is interesting for many reasons, but chiefly because they have expended more effort developing a deeper model of common world knowledge than any other group on the planet.  They are different from current semantic web startups.  Unlike Metaweb’s Freebase, for example, Cycorp is defining the common sense logic of the world, not just populating databases (which is an unjust simplification of what Freebase is doing, but is proportionally fair when comparing their ontological schemata to Cyc’s knowledge).  Not only does Cyc have the largest and most practical ontology on earth, they have an almost incomprehensible number of formulas[1] describing the world. Continue reading “Cyc is more than encyclopedic”
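
For a flavor of what such formulas look like, here is an invented example in first-order notation (not an actual CycL axiom), capturing the common-sense fact that every person has a mother:

    \forall x \, \bigl( \mathit{isa}(x, \mathit{Person}) \rightarrow \exists y \, \mathit{motherOf}(y, x) \bigr)

Cyc’s actual axioms are written in CycL and are typically richer, organized into contexts (microtheories) rather than asserted globally.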

Over $100m in 12 months backs natural language for the semantic web

Radar Networks is accelerating down the path toward the world’s largest body of knowledge about what people care about, using Twine to organize their bookmarks.  Unlike social bookmarking sites, Twine uses natural language processing technology to read and categorize people’s bookmarks within a substantial ontology.  Using this ontology, Twine not only organizes their bookmarks intelligently but also facilitates social networking and collaborative filtering that result in more relevant suggestions of others’ bookmarks than other social bookmarking sites can provide.
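
To illustrate the collaborative-filtering step (with invented users, topics, and data; this is not Radar Networks’ actual algorithm), one might rank another user’s bookmarks by the querying user’s interest in each bookmark’s ontology topic:

    # Toy topic-profile collaborative filtering over categorized bookmarks.
    from collections import Counter

    # Hypothetical user -> topic counts from their categorized bookmarks.
    profiles = {
        "ana": Counter({"SemanticWeb": 5, "NLP": 3}),
        "bo":  Counter({"SemanticWeb": 4, "Finance": 2}),
    }
    bookmarks = {"bo": [("rdf-primer", "SemanticWeb"),
                        ("bond-math", "Finance")]}

    def suggest(user: str, other: str) -> list:
        """Rank the other user's bookmarks by this user's topic interest."""
        mine = profiles[user]
        ranked = sorted(bookmarks[other], key=lambda b: mine[b[1]],
                        reverse=True)
        return [url for url, topic in ranked if mine[topic] > 0]

    print(suggest("ana", "bo"))  # ['rdf-primer']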

Twine should rapidly eclipse social bookmarking sites such as Digg and Reddit.  This is no small feat!

The underlying capabilities of Twine present Radar Networks with many other opportunities, too.  Twine could spider out from bookmarks and become a general competitor to Google, as Powerset hopes to become.  Twine could become the semantic web’s Wikipedia, to which Metaweb’s Freebase aspires. Continue reading “Over $100m in 12 months backs natural language for the semantic web”