There has been some speculation that Freebase is a vehicle for Metaweb to profit from its semantic web infrastructure when it is used for commercial purposes. As I recall, Metaweb raised over $40 million in Series B funding, led by Goldman Sachs, around the time they started building Freebase. Metaweb’s seasoned investors were unlikely to invest so much in a business that could not project a return on that investment. Almost certainly, Metaweb has firm plans for realizing over $100 million in revenues. Most likely, for these investors and this amount of capital, target revenues by 2014, five years after the second round, would be in the vicinity of $1 billion. Obviously, there is a lot of work to get there from around zero today.
Some of the bubble that supported raising those funds has since burst. The weaker economy would crimp both the valuation and the investment if they were made today. And the semantic web has yet to produce a winner, so with less enthusiasm in the market, the terms would again be less favorable now. All of this is modulo the business plan. If the business plan withstands scrutiny and the rate of return from credibly achievable projections justifies the investment, they could raise the money again, even now. But no one I have heard or read over the past few years can explain the business plan adequately – that is, concretely. I would appreciate any insights or opinions on the topic. I believe these are smart people, in the company and among its investors, so I am sure the plan exists. I just don’t believe in the “we’ll figure out how to make money eventually” business plan in this case.
Some Freebase terms of service that are worth knowing, though commercially reasonable for any site that provides a free service, include:
- The terms of service are subject to change (upon posting).
- The service may be changed or discontinued at any time and without notice.
- Limits concerning access to or use of the services may be established.
- Any disputes shall be heard in San Francisco and governed by California law.
Continue reading “Is Freebase worth much?”
In a recent post I mentioned comments by Sir Tim Berners-Lee concerning the overlap between enterprise information models and the semantic web ontology supporting the concept of linked data. Berners-Lee argued that the overlap is already sufficient to have a transformative effect on mainstream IT. I think he is right, but also that we are not there yet. There are many obstacles to adoption, not the least of which is the inertia of enterprise IT. Disruptive approaches to software development typically require ten years or so to cross the chasm from visionaries and early adopters to the mainstream. We are only a few years into this transition, and the technology is not ready.
First, let’s establish that there is plenty of semantics available for reuse now. There are existing models, some of which are well-designed, mature, and widely used. Unfortunately, most of what exists has little apparent relevance to enterprises. There is little on this diagram that would draw the attention of an enterprise architect, for example.
Continue reading “Extended Enterprise Ontology”
A panel on whether or not ontology is needed to achieve a collective vision for the semantic web was held on Tuesday at the International Semantic Web Conference (ISWC 2009) near Washington, DC. For most of the panelists the question was rhetorical. But a few interesting points were made, including that machine learning of ontology is one extreme of a spectrum that extends to human authoring of ontology (however authoritative or coordinated). Nobody on the panel or in the audience felt that the extreme of purely human-authored ontology was viable for the long-term vision of a comprehensively semantic and intelligent web. The panelists clearly believed that machine learning will substantially enrich and automate ontology construction, and although no timeframe was discussed, the subjective opinion that substantial ontology will be acquired automatically within the next decade or so came through clearly. There was much discussion about the knowledge being in the data, and the conversation had a bit of the statistics-versus-logic debate to it. Generally, the attitude was “get over it”; even Pat Hayes, who gave a well-received talk on Blogic and whom one would expect to take the strict logic side of the argument, pointed out seminal work on combining machine learning and logic in natural language understanding of text.
David Karger of MIT’s AI lab challenged the panel from the audience by asserting that the data people post on the web is much more important than any ontology that might define what that data means. This set off a bit of a firestorm. There was consensus that the data itself is critically important, if not central. For the most part, though, the panelists were aghast at the notion that spreadsheets of data would be useless to computers unless the meaning of their headings, for example, were related to concepts defined in an ontology those computers understand.
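To make the panelists’ point concrete, here is a minimal sketch of what it means to relate spreadsheet headings to ontology concepts. Everything here is invented for illustration – the column names, the example.org URIs, and the mapping are all hypothetical, not drawn from any real vocabulary:

```python
# Toy illustration: a spreadsheet is opaque to software until its
# headings are mapped to shared ontology concepts. All URIs below
# are hypothetical placeholders, not a real vocabulary.
rows = [
    {"DOB": "1968-07-14", "Ht": "172"},
    {"DOB": "1990-02-03", "Ht": "165"},
]

# Without such a mapping, "Ht" could mean height, heat, or anything else.
heading_to_concept = {
    "DOB": "http://example.org/ontology#dateOfBirth",
    "Ht":  "http://example.org/ontology#heightInCentimeters",
}

def to_triples(rows, mapping, subject_prefix="http://example.org/person/"):
    """Rewrite spreadsheet rows as (subject, predicate, value) triples."""
    triples = []
    for i, row in enumerate(rows):
        subject = f"{subject_prefix}{i}"
        for heading, value in row.items():
            triples.append((subject, mapping[heading], value))
    return triples

for triple in to_triples(rows, heading_to_concept):
    print(triple)
```

Once the headings are grounded this way, any program that understands the ontology can consume the data without guessing what the columns mean – which is exactly the capability the data-alone position gives up.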
With respectful deference, the panel and audience yielded. Sir Tim Berners-Lee took the floor. Continue reading “Sir Tim Berners-Lee on Ontology”
As I discussed in Over $100m in 12 months backs natural language for the semantic web, Radar Networks’ Twine is one of the more interesting semantic web startups. Its founder, Nova Spivack, is funded by Vulcan and others to provide “interest-driven [social] networking”. I’ve been participating in the beta program at modest bandwidth for a while. Generally, Nova’s statements about where they are and where they are going are fully supported by what I have experienced. There are obvious weaknesses, which they are improving. Overall, the strategy of gradually bootstrapping functionality and content by controlling the ramp-up in users, from a clearly alpha-stage implementation to what is still not quite beta (in my view), seems perfect.
Recently, Nova recorded a short video in which he makes three short-term predictions: Continue reading “The Semantic Arms Race: Facebook vs. Google”
Radar Networks is accelerating down the path toward building the world’s largest body of knowledge about what people care about, as people use Twine to organize their bookmarks. Unlike social bookmarking sites, Twine uses natural language processing technology to read people’s bookmarks and categorize them against a substantial ontology. Using this ontology, Twine not only organizes bookmarks intelligently but also facilitates the social networking and collaborative filtering that yield more relevant suggestions of others’ bookmarks than other social bookmarking sites can provide.
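Twine’s actual NLP pipeline is proprietary, but the basic idea of categorizing a bookmark against an ontology can be sketched in a drastically simplified form. The category names and keyword sets below are invented for illustration only:

```python
# A drastically simplified sketch of ontology-backed bookmark
# categorization. The ontology fragment and keywords are invented
# for illustration; real systems use far richer NLP than this.
TOY_ONTOLOGY = {
    "Technology/SemanticWeb": {"ontology", "rdf", "linked data", "sparql"},
    "Technology/Search":      {"search", "query", "index", "ranking"},
    "Finance/Investing":      {"funding", "series b", "investors", "venture"},
}

def categorize(bookmark_text):
    """Score each category by how many of its keywords appear in the text."""
    text = bookmark_text.lower()
    scores = {
        category: sum(1 for kw in keywords if kw in text)
        for category, keywords in TOY_ONTOLOGY.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

print(categorize("Intro to RDF and linked data for the semantic web"))
# → Technology/SemanticWeb
```

The payoff of placing bookmarks in a shared category structure, rather than in free-form tags, is that two users’ bookmarks become comparable, which is what enables the collaborative filtering described above.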
Twine should rapidly eclipse social bookmarking sites like Digg and Reddit. This is no small feat!
The underlying capabilities of Twine present Radar Networks with many other opportunities, too. Twine could spider out from bookmarks and become a general competitor to Google, as Powerset hopes to become. Twine could become the semantic web’s Wikipedia, to which Metaweb’s Freebase aspires. Continue reading “Over $100m in 12 months backs natural language for the semantic web”