At the SemTech conference last week, a few companies asked me how to respond to IBM’s Watson given my involvement with rapid knowledge acquisition for deep question answering at Vulcan. My answer varies with whether there is any subject matter focus, but essentially involves extending their approach with deeper knowledge and more emphasis on logical in addition to textual entailment.
Today, in a discussion on the LinkedIn NLP group, there was some interest in finding more technical details about Watson. A year ago, IBM published the most technical details to date about Watson in the IBM Journal of Research and Development. Most of those journal articles are available for free on the web. For convenience, here are my bookmarks to them.
- Question analysis: How Watson reads a clue
- Deep parsing in Watson
Good technical details on the two parsing approaches taken. Deep parsing with disambiguation, such as we use (e.g., the ERG in Project Sherlock), is a viable approach to the background knowledge that IBM dismisses too quickly, however (see below). Note that training NLP systems in new domains requires going through the same process for thousands of sentences, so disambiguation technology as in the Linguist is appropriate even if proofs are based more on textual entailment than logical deduction.
- Textual resource acquisition and engineering
- Automatic knowledge extraction from documents
- Finding needles in the haystack: Search and candidate generation
- Typing candidate answers using type coercion
- Textual evidence gathering and analysis
- Relation extraction and scoring in DeepQA
- Structured data and inference in DeepQA
IBM takes a hard line against deep knowledge here. They have the upper hand in the argument due to their impressive results, but more precise knowledge would only improve their performance. You can find more on this debate in the Deep QA FAQ and this presentation, which includes the following slide:
- Special Questions and techniques
- Identifying implicit relationships
Watson uses semantic networks and algorithms such as spreading activation in various ways: lexically, syntactically, and, to some degree, semantically (using Wikipedia, for example). (The examples here demonstrate this combined with the capabilities described in the next paper.)
- Fact-based question decomposition in DeepQA
Explains how Watson handles questions with multiple constraints, which exposes much of Watson’s sophistication.
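The spreading activation mentioned above is a simple idea worth seeing concretely. Here is a minimal sketch, assuming a toy weighted graph; the node names, weights, and parameter values are illustrative only and are not taken from Watson:

```python
# Minimal spreading-activation sketch over a toy semantic network.
# Activation flows from seed nodes along weighted edges, decaying each hop,
# until it falls below a threshold or the step budget is exhausted.
from collections import defaultdict

def spread_activation(graph, seeds, decay=0.5, threshold=0.05, max_steps=3):
    """graph: node -> list of (neighbor, edge_weight); seeds: node -> activation."""
    activation = defaultdict(float, seeds)
    frontier = dict(seeds)
    for _ in range(max_steps):
        next_frontier = {}
        for node, act in frontier.items():
            for neighbor, weight in graph.get(node, []):
                spread = act * weight * decay
                if spread > threshold:
                    activation[neighbor] += spread
                    next_frontier[neighbor] = max(next_frontier.get(neighbor, 0.0), spread)
        if not next_frontier:
            break
        frontier = next_frontier
    return dict(activation)

# Toy example: activation from "Jeopardy!" reaches "Watson" via "quiz show".
graph = {
    "Jeopardy!": [("quiz show", 0.9), ("television", 0.6)],
    "quiz show": [("Watson", 0.8)],
}
scores = spread_activation(graph, {"Jeopardy!": 1.0})
```

Nodes closer to the seed end up with higher activation, which is why the technique is useful for finding lexically or semantically related concepts.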
- A framework for merging and ranking of answers in DeepQA
The essence of the evidential framework for assessing confidence in candidate answers, which allows Watson to combine multiple candidate generation and answer scoring components via machine learning. Although the approach is described comprehensively, many critical technical details, such as the nature of the features considered by the answer scoring and learning components, are omitted.
- Making Watson fast
Lots of technical details about parallelism and low-level optimizations (e.g., RAM-residence using 32-bit instead of 64-bit Java, int[], and UTF-8). Also provides a variety of insights into the use of RDF and information retrieval (e.g., Sesame and Indri).
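The merging-and-ranking framework described a couple of entries above can be sketched at a very high level: merge candidates whose surface forms normalize to the same key, then combine each merged candidate’s feature scores with a learned model and rank by confidence. This sketch uses a logistic model with hand-picked weights; the feature names and weights are assumptions for illustration, not Watson’s actual features:

```python
# Hedged sketch of answer merging and ranking: candidates from multiple
# scorers are merged by normalized surface form, scored with a logistic
# model over their features, and sorted by resulting confidence.
import math

def merge_candidates(candidates):
    """Merge (text, features) pairs whose normalized text matches,
    keeping the max score per feature across the merged variants."""
    merged = {}
    for text, features in candidates:
        bucket = merged.setdefault(text.strip().lower(), {})
        for name, score in features.items():
            bucket[name] = max(bucket.get(name, 0.0), score)
    return merged

def rank(merged, weights, bias=-1.0):
    """Assign each merged candidate a logistic confidence and sort descending."""
    ranked = []
    for key, features in merged.items():
        z = bias + sum(weights.get(name, 0.0) * s for name, s in features.items())
        ranked.append((key, 1.0 / (1.0 + math.exp(-z))))
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

# Toy example: two "chicago" variants merge; evidence accumulates in its favor.
candidates = [
    ("Toronto", {"search_rank": 0.7}),
    ("Chicago", {"search_rank": 0.9, "type_match": 1.0}),
    ("chicago", {"passage_support": 0.8}),
]
merged = merge_candidates(candidates)
weights = {"search_rank": 2.0, "type_match": 1.5, "passage_support": 1.0}
best, conf = rank(merged, weights)[0]
```

In the real system the weights come from machine learning over many answer-scoring features rather than being set by hand, but the merge-then-rank shape is the same.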
- Simulation, learning, and optimization techniques in Watson’s game strategies
- In the game: The interface between Watson and Jeopardy!