Is Freebase worth much?

There has been some speculation that Freebase is the vehicle by which Metaweb will prosper from its semantic web infrastructure when it is used for commercial purposes.  As I recall, Metaweb raised over $40 million in a Series B around the time they started building Freebase, led by Goldman Sachs.  Metaweb's seasoned investors would be unlikely to invest so much in a business that could not project a return on that investment.  Almost certainly, Metaweb has firm plans for realizing over $100 million in revenues.  Most likely, for these investors and this amount of capital, target revenues by 2014, five years after the second round, would be in the vicinity of $1 billion.  Obviously, there is a lot of work to get there from around zero today.

Some of the bubble that buoyed those funds has since burst.  Today's economy would crimp both the valuation and the investment.  And the semantic web has yet to produce a winner, so with less enthusiasm behind it, the investment would be on less favorable terms today.  All this is modulo the business plan.  If the business plan withstands scrutiny and the rate of return from credibly achievable projections justifies investment, they could raise the money again, even now.  But no one I have heard or read over the past few years can explain the business plan adequately – that is, concretely.  I would appreciate any insights or opinions on the topic.  These are smart people, in the company and among its investors, so I am sure the plan is there.  I just don't believe in the "we'll figure out how to make money eventually" business plan in this case.

Some Freebase terms of service that are worth knowing, though commercially reasonable for any site that provides a free service, include:

  1. The terms of service are subject to change (upon posting).
  2. The service may be changed or discontinued at any time and without notice.
  3. Limits concerning access to or use of the services may be established.
  4. Any disputes shall be heard in San Francisco and governed by California law.

Continue reading “Is Freebase worth much?”

Google follows Microsoft’s lead towards intelligence

Being a fan of increased intelligence on the web, including Bing's use of Powerset and True Knowledge, I enjoyed CNET's report, "Google search gets answer highlights and events."

In response to the classic semantic web test question, Google now shows the following: "The Empire State Building rises to 1250 ft (381 m) at the 102nd floor."

Also, Google leverages more of the content of text or structure of linked data in its Rich Snippet answers:

[Image: Rich Snippet shows Google "understands" events]

As search engines increase their understanding of concepts, how to extract them from content or linked data, and how to present them as Google does here or, as above, in a sentence, the web will begin to feel a lot smarter.
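To make the linked-data point concrete, here is a minimal sketch of why typed event data is easy for a search engine to "understand": once a page exposes structured fields, rendering a rich snippet is a lookup rather than text mining.  The event record, its field names, and the snippet format below are hypothetical, only loosely echoing the microformat and RDFa event vocabularies that Rich Snippets read.

```python
# A tiny, hypothetical event record with typed fields. With structure
# like this, a search engine renders a snippet by lookup rather than
# by mining free text.

event = {
    "type":      "Event",
    "summary":   "Jazz at the Blue Note",   # illustrative values only
    "startDate": "2009-11-05",
    "location":  "New York, NY",
}

def rich_snippet(record):
    """Render a one-line snippet from typed fields."""
    if record.get("type") == "Event":
        return f'{record["summary"]} | {record["startDate"]} | {record["location"]}'
    return None

print(rich_snippet(event))
# Jazz at the Blue Note | 2009-11-05 | New York, NY
```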

As these simple enhancements indicate, the intelligent web is taking off, and that feeling of intelligence will come sooner than expected.  Of course, there is a long way to go.  For more on that, I hear there is an upcoming issue of AI Magazine that will survey the state of the art in question answering, including coverage of Vulcan's Project Halo and IBM's Jeopardy effort, among others.  Also, if you are interested in what bright minds are looking forward to in this regard, see Nova Spivack's recent blogging and his post on "will the web become conscious?"

Extended Enterprise Ontology

In a recent post I mentioned comments by Sir Tim Berners-Lee concerning the overlap between enterprise information models and the semantic web ontology supporting the concept of linked data.  Sir Tim argued that the overlap is already sufficient to have a transformative effect on mainstream IT.  I think he is right, but also that we are not there yet.  There are many obstacles to adoption, not the least of which is the inertia of enterprise IT.  Disruptive approaches to software development typically require ten years or so to cross the chasm from visionaries and early adopters to the mainstream.  We are only a few years into this, and the technology is not ready.

First, let’s establish that there is plenty of semantics available for reuse now.  There are existing models, some of which are well-designed, mature, and widely used.  Unfortunately, most of what exists has little apparent relevance to enterprises.  There is little on this diagram that would draw the attention of an enterprise architect, for example.

Continue reading “Extended Enterprise Ontology”

Time for the next generation of knowledge automation

In preparing for my workshop at the Business Rules Forum in Las Vegas on November 5th, I have focused on the following needs in reasoning about processes, about events, and about or over time:

  1. Reasoning at a point within a [business] process
  2. Reasoning about events that occur over time
  3. Reasoning about a [business] process (as in deciding what comes next)
  4. Reasoning about and across different states (as in planning)

Enterprise decision management (EDM) addresses the first.  Complex event processing (CEP) is concerned with the second.  In theory, EDM could address the third, but it does not in practice.  This third item includes the issue of governing and defining workflow or event-driven business processes, rather than point decisions within such business processes.

Business applications of rules have not advanced to include the fourth item.  That is to say, business has yet to significantly leverage the reasoning and problem-solving techniques that are common in artificial intelligence.  For example, artificially intelligent question answering systems, which are being developed for the semantic web, can do more than retrieve data – they perform inference.  Commercial database and business intelligence queries are typically much less intelligent, which presents a number of opportunities that I don't want to go into here but would be happy to discuss with interested parties.  The point here is that business does not use reasoning much at all, let alone to search across the potential ramifications of alternative decisions or courses of action before making or taking one.  Think of playing chess, or of a soccer-playing robot planning how to advance the ball on goal.  Why shouldn't business strategies or tactical business decisions benefit from a little simulated look-ahead along with a lot of inference and evaluation?
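To make the look-ahead idea concrete, here is a minimal sketch of depth-limited search over alternative decisions, in the spirit of game-tree search.  The toy pricing model, its numbers, and all the function names below are hypothetical illustrations, not a real decision engine.

```python
# A minimal sketch of "simulated look-ahead": depth-limited search over
# alternative decisions, evaluating states a few steps ahead.

def lookahead(state, actions, apply_action, score, depth):
    """Return (best_value, best_first_action) searching `depth` steps ahead."""
    if depth == 0:
        return score(state), None
    best_value, best_action = float("-inf"), None
    for action in actions(state):
        value, _ = lookahead(apply_action(state, action),
                             actions, apply_action, score, depth - 1)
        if value > best_value:
            best_value, best_action = value, action
    return best_value, best_action

# Toy decision model: each period we may raise, hold, or cut price;
# demand erodes as price rises. All numbers are illustrative only.
def actions(state):
    return [+1, 0, -1]                      # raise, hold, or cut price

def apply_action(state, delta):
    price, demand = state
    return (price + delta, max(0, demand - 2 * delta))

def score(state):
    price, demand = state
    return price * demand                   # revenue as the objective

value, first_move = lookahead((10, 100), actions, apply_action, score, depth=3)
print(value, first_move)
```

The same skeleton extends naturally to evaluating business tactics: swap in a richer state, a real simulation for `apply_action`, and an inference-backed `score`.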

Even though I have recently become more interested in the fourth of these areas, I expect the audience at the Business Rules Forum to be most interested in the first two points above.  There will also be some who have enough experience with the complex business processes that are common in larger enterprises to be interested in the third item.  Only those working on the most advanced applications, such as biochemical process planning, will be interested in the fourth.  I don't expect many of them to attend!

The notion of enterprise decision management (EDM) is focused on point decision making within a business process.  For enterprises that are concerned with governing business processes, a model of the process itself must be available to the business rules that govern its operation.  I've written elsewhere about the need for an ontology of events and processes in order to effectively integrate business process management (BPM) with business rules.  Here, and in the workshop, I intend to get a little more specific about the requirements, what is lacking in current standards and offerings, and what we're trying to do about it.

Continue reading "Time for the next generation of knowledge automation"
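As a rough illustration of what "a model of the process available to the rules" could mean, here is a minimal sketch in which a rule consults the process model to decide whether to escalate.  The classes, the escalation rule, and the order-fulfillment example are hypothetical, not any vendor's or standard's API.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    completed: bool = False

@dataclass
class Process:
    name: str
    tasks: list = field(default_factory=list)   # ordered workflow steps

    def next_task(self):
        """The process model itself can answer 'what comes next'."""
        return next((t for t in self.tasks if not t.completed), None)

def escalate(process, order_amount):
    """A rule that reasons about the process, not just within it."""
    nxt = process.next_task()
    return nxt is not None and nxt.name == "credit check" and order_amount > 100_000

order = Process("order fulfillment",
                [Task("capture order", completed=True), Task("credit check")])
print(escalate(order, 250_000))   # True: the rule consulted the process model
```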

Sir Tim Berners-Lee on Ontology

A panel on whether or not ontology is needed to achieve a collective vision for the semantic web was held on Tuesday at the International Semantic Web Conference (ISWC 2009) near Washington, DC.  For most of the panelists the question was rhetorical.  But there were a few interesting points made, including that machine learning of ontology is one extreme of a spectrum that extends to human authoring of ontology (however authoritative or coordinated).  Nobody on the panel or in the audience felt that the extreme of human-authored ontology was viable for the long-term vision of a comprehensively semantic and intelligent web.  It was clear that the panelists believed machine learning will substantially enrich and automate ontology construction, although the timeframe was not discussed.  Nonetheless, the subjective opinion that substantial ontology will be acquired automatically within the next decade or so was clear.  There was much discussion about the knowledge being in the data, and so on.  The discussion had a bit of the statistics-versus-logic debate to it.  Generally, the attitude was "get over it," and even Pat Hayes, who gave a well-received talk on Blogic and whom one would expect to take the strict logic side of the argument, pointed out seminal work on combining machine learning and logic in natural language understanding of text.

David Karger of MIT's AI lab challenged the panel from the audience by asserting that the data people post on the web is much more important than any ontology that might define what that data means.  This set off a bit of a firestorm.  There was consensus that data itself is critically important, if not central.  For the most part, panelists were aghast at the notion that a spreadsheet of data could be useful to computers without the meaning of its headings, for example, being related to concepts defined by reference to an ontology those computers understood.
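Whichever side one takes, the technical crux is easy to illustrate: a spreadsheet's columns mean little to a program until its headings are tied to shared concepts.  A minimal sketch, with hypothetical headings and concept URIs:

```python
# One spreadsheet row with opaque headings...
rows = [{"DOB": "1955-06-08", "Ht": "178"}]

# ...and a hypothetical mapping from headings to concept URIs.
heading_to_concept = {
    "DOB": "http://example.org/concept/birthDate",
    "Ht":  "http://example.org/concept/heightInCm",
}

# Re-keying the data by concept lets other programs know, for example,
# that "Ht" is a height measured in centimeters.
linked = [{heading_to_concept[k]: v for k, v in row.items()} for row in rows]
print(linked)
```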

With deference, the panel and audience yielded.  Sir Tim Berners-Lee took the floor.

Continue reading "Sir Tim Berners-Lee on Ontology"

Zigtag for social semantic tagging

I started to use Radar Networks' Twine at the invitation of CEO Nova Spivack after writing this earlier this year (also see this). I enjoyed it for a while, especially because a lot of technology folks, notably the semantic web community, were hooking up with each other on Twine. But I found it tedious to work through beta issues and to be bothered with recommendations or news about who was saying or bookmarking what. (I should have turned off the emails sooner!)

I was disappointed that Twine was taking an apparently folksonomic approach to tagging. It was as if Radar Networks were riding semantic web buzz without really embracing it openly or sharing in the momentum that the invite-only community was investing.  That may not sound fair – I believe that there are semantics in the back room – but that's how it felt, and it's still the way it looks.  Probably the worst part, though, is the process that you have to go through to add a bookmark – which is the whole point, of course!  (I ultimately sacrificed popup blockers, but the process still seems laborious compared to the alternatives.)

I stumbled across Zigtag while working for a VC firm with a portfolio of semantic startups. What I like most about Zigtag is that they make it obvious that they are building an ontology of tags, and they encourage users to select semantic tags (i.e., concepts) rather than folksonomic "words".  They also provide tools for managing tags that allow you to move smoothly and incrementally from a folksonomic to a more semantic approach.
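The difference is easy to sketch: a folksonomic tag is a bare string, while a semantic tag refers to a concept.  The concept URIs and the disambiguation step below are hypothetical illustrations of the approach, not Zigtag's actual data model.

```python
# Folksonomic tags are bare, ambiguous strings.
folksonomic_tags = ["jaguar", "apple"]

# Semantic tags pair a label with a concept URI (hypothetical URIs).
semantic_tags = [
    {"label": "Jaguar", "concept": "http://example.org/concept/JaguarCars"},
    {"label": "Jaguar", "concept": "http://example.org/concept/JaguarAnimal"},
    {"label": "Apple",  "concept": "http://example.org/concept/AppleInc"},
]

def disambiguate(word, candidates):
    """Offer the concept choices behind a bare tag; the user picks one."""
    return [c["concept"] for c in candidates if c["label"].lower() == word.lower()]

# Migrating incrementally: each bare string can be resolved to a concept.
for tag in folksonomic_tags:
    print(tag, "->", disambiguate(tag, semantic_tags))
```

With concepts behind the labels, tools can merge synonyms, split ambiguous words, and let users move from strings to concepts over time.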

Continue reading “Zigtag for social semantic tagging”

A Common Upper Ontology for Advanced Placement tests

I have previously written about the lack of a common upper ontology in the semantic web and commercial software markets (e.g., business rules).  For example, the lack of understanding of time limits the intelligence and ease of use of software in business process management (BPM) and complex event processing (CEP).  The lack of understanding of money limits the intelligence and utility of business rules management systems (BRMS) in financial services and the capital markets.   And, more fundamentally, understanding time and money (among other things, such as location, which includes distance) requires a core understanding of amounts.

The core principle here is that software needs a common core of understanding that makes sense to most people and across almost every application.  These are the concepts of Pareto's 80/20 principle: the minority of concepts that carry most of the weight.  A concept like building could easily be out, but concepts like money and time (and whatever it takes to really understand money and time) are in.  Location, including distance, is in.  Luminosity could be out, but probably not if color is in.  Charge and current could be out, but not if electricity or magnetism is in.  The cutoff is less scientific than practical, but what is in has to be deeply consistent and completely rational (i.e., logically rigorous).[2]

Continue reading "A Common Upper Ontology for Advanced Placement tests"
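As a rough illustration of why amounts sit at the core, here is a minimal sketch in which time and distance are just amounts with units and dimensions.  The Amount class and the unit table are hypothetical, not any standard upper ontology.

```python
from dataclasses import dataclass

# Each unit names its dimension and a factor to that dimension's base unit.
UNITS = {
    "m":   ("length", 1.0),
    "ft":  ("length", 0.3048),
    "s":   ("time",   1.0),
    "min": ("time",   60.0),
}

@dataclass(frozen=True)
class Amount:
    value: float
    unit: str

    def to(self, unit):
        """Convert within a dimension; mixing dimensions is an error."""
        dim_from, f_from = UNITS[self.unit]
        dim_to, f_to = UNITS[unit]
        if dim_from != dim_to:
            raise ValueError(f"cannot convert {dim_from} to {dim_to}")
        return Amount(self.value * f_from / f_to, unit)

print(Amount(1250, "ft").to("m"))   # Amount(value=381.0, unit='m')
# Money would need a time-indexed factor (exchange rates), which hints
# at why really understanding money also requires understanding time.
```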

The Semantic Arms Race: Facebook vs. Google

As I discussed in Over $100m in 12 months backs natural language for the semantic web, Radar Networks' Twine is one of the more interesting semantic web startups.  Its founder, Nova Spivack, is funded by Vulcan and others to provide "interest-driven [social] networking".  I've been participating in the beta program, at modest bandwidth, for a while.  Generally, Nova's statements about where they are and where they are going are fully supported by what I have experienced.  There are obvious weaknesses that they are improving.  Overall, the strategy of gradually bootstrapping functionality and content by controlling the ramp-up in users, from a clearly alpha-stage implementation to what is still not quite beta (in my view), seems perfect.

Recently, Nova recorded a short video in which he makes three short-term predictions:

Continue reading "The Semantic Arms Race: Facebook vs. Google"

Over $100m in 12 months backs natural language for the semantic web

Radar Networks is accelerating down the path towards the world's largest body of knowledge about what people care about, as people use Twine to organize their bookmarks.  Unlike social bookmarking sites, Twine uses natural language processing technology to read and categorize people's bookmarks within a substantial ontology.  Using this ontology, Twine not only organizes bookmarks intelligently but also facilitates social networking and collaborative filtering, resulting in more relevant suggestions of others' bookmarks than other social bookmarking sites can provide.
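As a rough sketch of why concept-level categorization helps with suggestions: once bookmarks are filed under shared concepts rather than free-form tags, recommending others' bookmarks can reduce to overlap on concepts.  The users, concepts, and bookmarks below are hypothetical, not Twine's actual model.

```python
# Hypothetical users with concept-level interests...
users = {
    "ann": {"SemanticWeb", "NLP"},
    "bob": {"SemanticWeb", "Robotics"},
}

# ...and bookmarks already categorized into concepts (owner, url, concepts).
bookmarks = [
    ("bob", "http://example.org/paper-on-rdf", {"SemanticWeb"}),
    ("bob", "http://example.org/robot-soccer", {"Robotics"}),
]

def suggest(for_user):
    """Suggest others' bookmarks whose concepts overlap the user's interests."""
    interests = users[for_user]
    return [url for owner, url, concepts in bookmarks
            if owner != for_user and concepts & interests]

print(suggest("ann"))   # ['http://example.org/paper-on-rdf']
```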

Twine should rapidly eclipse social bookmarking sites like Digg and Reddit.  This is no small feat!

The underlying capabilities of Twine present Radar Networks with many other opportunities, too.  Twine could spider out from bookmarks and become a general competitor to Google, as Powerset hopes to do.  Twine could become the semantic web's Wikipedia, to which Metaweb's Freebase aspires.

Continue reading "Over $100m in 12 months backs natural language for the semantic web"