Over $100m in 12 months backs natural language for the semantic web

Radar Networks is accelerating toward the world’s largest body of knowledge about what people care about, built as people use Twine to organize their bookmarks.  Unlike social bookmarking sites, Twine uses natural language processing to read people’s bookmarks and categorize them within a substantial ontology.  Using this ontology, Twine not only organizes bookmarks intelligently but also supports social networking and collaborative filtering that yield more relevant suggestions of others’ bookmarks than other social bookmarking sites can provide.

Twine should rapidly eclipse social bookmarking sites like Digg and Reddit.  This is no small feat!

The underlying capabilities of Twine present Radar Networks with many other opportunities, too.  Twine could spider out from bookmarks and become a general competitor to Google, much as Powerset hopes to do.  Twine could become the semantic web’s Wikipedia, a role to which Metaweb’s Freebase aspires.

Goals and backward chaining using the Rete Algorithm

I was prompted to post this by requests from Mark Proctor and Peter Lin, and in response to recent comments on CEP and backward chaining on Paul Vincent’s blog (with an interesting perspective here).

I hope those interested in artificial intelligence enjoy the following paper.  I wrote it while Chief Scientist of Inference Corporation.  It was published in the proceedings of the International Joint Conference on Artificial Intelligence over twenty years ago.

The bottom line remains:

  1. intelligence requires logical inference and, more specifically, deduction
  2. deduction is not practical without a means of subgoaling and backward chaining
  3. subgoaling by writing additional rules that explicitly assert goals, or by other explicit approaches, is impractical
  4. backward chaining using a data-driven rules engine requires automatic generation of declarative goals (a minimal sketch of the idea follows this list)
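
To make points 3 and 4 concrete, here is a minimal Python sketch of automatic goal generation.  It is purely illustrative and entirely hypothetical (the names Rule, Engine, and prove are invented here), and it is far simpler than the Rete-based mechanism the paper describes: the engine itself generates a goal for each missing premise of any rule that could satisfy an open goal, so no hand-written goal-assertion rules are required.

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Rule:
        name: str
        premises: tuple      # facts that must be in working memory for the rule to fire
        conclusion: str      # fact asserted when the rule fires

    @dataclass
    class Engine:
        rules: list
        facts: set = field(default_factory=set)
        goals: set = field(default_factory=set)

        def prove(self, query: str) -> bool:
            """Seed a goal for the query, then interleave automatic goal
            generation (backward chaining) with data-driven rule firing."""
            self.goals.add(query)
            changed = True
            while changed:
                changed = False
                for rule in self.rules:
                    # Automatic subgoaling: if this rule could satisfy an open
                    # goal, the engine generates goals for its missing premises;
                    # no hand-written goal-assertion rules are needed.
                    if rule.conclusion in self.goals:
                        for p in rule.premises:
                            if p not in self.facts and p not in self.goals:
                                self.goals.add(p)
                                changed = True
                        # Data-driven firing, restricted here to rules that
                        # are relevant to an open goal.
                        if rule.conclusion not in self.facts and all(
                                p in self.facts for p in rule.premises):
                            self.facts.add(rule.conclusion)
                            changed = True
            return query in self.facts

    # Two chained rules and one base fact: proving can_escape(tweety) first
    # generates the goal can_fly(tweety), which rule r1 then satisfies.
    engine = Engine(
        rules=[Rule("r1", ("bird(tweety)",), "can_fly(tweety)"),
               Rule("r2", ("can_fly(tweety)",), "can_escape(tweety)")],
        facts={"bird(tweety)"})
    print(engine.prove("can_escape(tweety)"))  # prints True

The sketch only shows why automatic goal generation eliminates explicit goal-assertion rules; the paper addresses how declarative goals can be generated, matched, and retracted efficiently within the Rete Algorithm itself.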

We implemented this in Inference Corporation’s Automated Reasoning Tool (ART) in 1984.  And we implemented it again at Haley a long time ago, in a rules language we called “Eclipse”, years before Java.

Regrettably, to the best of my knowledge, ART is no longer available from Inference spin-off Brightware or its further spin-off, Mindbox.  To the best of my knowledge, no other business rules engine or Rete Algorithm implementation automatically subgoals, including CLIPS, JESS, TIBCO BusinessEvents (see above), Fair Isaac’s Blaze Advisor, and ILOG Rules/JRules.  After reading the paper, you may understand that the resulting lack of robust logical reasoning capabilities is one of the reasons that business rules technology has not matured into a robust knowledge management capability, as discussed elsewhere in this blog.