Having the good fortune to work as CSO for Talis, an innovative UK software company, at one of the most exciting times for software and the internet, I thought I would share some of the ideas and insights I am finding most interesting at the moment.
The puzzle of semantic web adoption
I am a believer in the rise of the web of data. In fact, I am CTO of Talis, which is investing heavily in semantic web technologies. So don't take this the wrong way, but I can't help feeling that the semantic web community is ignoring a vital part of the semantic web jigsaw, and that this is creating a major credibility problem between it and large parts of the technology community. I am concerned because I think the semantic web currently lacks two critical things that drove mass adoption of the web.
To be fair, the W3C has created a semantic web outreach group, and Talis has two representatives on it, so we are doing our bit to help spread the word :-) . But this is only going to work if the semantic web community really understands what the major missing pieces for mass adoption are. Today, looking at the conversations in the semantic web community, I don't think the real barrier is being seen clearly.
So here is my personal view on what is going on here.
Many in the semantic web community have been concerned mainly with the rightness of the technology rather than its utility. That is fine for the invention process but badly wrong for the adoption process. Just ask the inventors of Betamax :-) . It doesn't matter how right you are!
Adoption is a strong function of day 0 utility. That means: "What can I do better today by using semantic web technology rather than existing technology?" You can't use the argument that once everyone has adopted RDF it will be really, really useful, because the people who need to adopt the technology in order to reach that critical mass of RDF won't do it out of belief in the semantic web vision alone. These adopters are pragmatic and need technology to give them an advantage today, not in five years. Network effect based features always have this kind of initiation problem.
To overcome the network effect initiation problem there needs to be day 0 value to drive adoption until the network effect kicks in and takes over as the main reason for adoption. In short there needs to be a killer application of the technology.
What is the semantic web killer application?
So my question to the semantic web community is this: what exactly can I do far better today with the relatively unproven semantic web technology than I can with more established approaches, such as agreeing simple XML standards?
A clear answer to this question is vital.
I actually think that for most specific instances of usage you could achieve faster adoption and lower risk through de facto standards agreement around a simple XML approach. Take RSS. Was it a success because it was RDF, or because it was the de facto emergence of a simple standard based on its raw day 0 utility rather than some far-off network effect based value? It is not a semantic web killer application.
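To make the point about raw day 0 utility concrete, here is a minimal sketch (Python standard library only, with a made-up feed URL) of consuming an RSS 2.0 feed. Nothing in it knows or cares about RDF; the value comes purely from an agreed, simple element vocabulary.

    # Minimal sketch: consuming a simple XML format (RSS 2.0) with nothing
    # more than the Python standard library. The feed URL is made up.
    from urllib.request import urlopen
    import xml.etree.ElementTree as ET

    with urlopen("http://example.com/news/rss.xml") as response:
        feed = ET.parse(response)

    # RSS 2.0 is just an agreed set of element names; no special tooling needed.
    for item in feed.findall("./channel/item"):
        print(item.findtext("title"), "->", item.findtext("link"))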
So it seems to me that semantic web adoption is a very different problem from that of the web of documents in its early days.
The web had day 0 utility. Many people will remember that feeling of seeing the web for the first time and knowing that you could easily publish anything you liked and the whole world could read it instantly. Mind blowing.
You didn't need any special tool to write an HTML document; doing it by hand was easy enough.
The web was its own killer application. The semantic web is not.
But the web had a piece missing. You could start at any resource and navigate the links, but you couldn't search the space itself to find a good starting resource in the first place. This meant that as the web grew, more and more of the content could not, in practice, add any extra value to a user's experience.
Of course the missing piece was the search engine. This allowed a user of the web to query the whole space, and now every document, no matter how obscure, could potentially enrich a user's web experience.
I don't think it is right to characterise this as something missing from the architecture of the web, because search engines could be layered on top, and that is better than building complexity into the core standards. But from a user's point of view, the real potential of the web of documents could not be realised until it was possible to query the whole web space.
We talk about the semantic web in terms of the web as a database. But where is the database engine? Google is the free-text engine for the web of documents. Where is the equivalent for the semantic web?
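To make the question concrete, here is a sketch of what such a query looks like in miniature: a SPARQL query over a single local graph using the rdflib Python library. The FOAF data and URIs are purely illustrative.

    # Sketch: a SPARQL query over one local RDF graph using rdflib.
    # The data below is made up for the example.
    from rdflib import Graph

    g = Graph()
    g.parse(data="""
    @prefix foaf: <http://xmlns.com/foaf/0.1/> .
    <http://example.org/alice> foaf:name "Alice" ;
        foaf:knows <http://example.org/bob> .
    <http://example.org/bob> foaf:name "Bob" .
    """, format="turtle")

    results = g.query("""
        PREFIX foaf: <http://xmlns.com/foaf/0.1/>
        SELECT ?name ?friendName WHERE {
            ?person foaf:name ?name ;
                    foaf:knows ?friend .
            ?friend foaf:name ?friendName .
        }
    """)
    for name, friend_name in results:
        print(name, "knows", friend_name)

That works today for a graph I already hold; the open question is who provides the engine that can answer the same kind of query across the whole semantic web space.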
So the semantic web appears to have little day 0 utility over simpler, specific approaches, it is not its own killer application, and it lacks the ability to query the semantic web information space itself.
This may appear a provocative conclusion; I don't know. But is it correct? If it is, then who is doing what about it?
If true, does this mean the semantic web will forever be a dream?
No, I don't believe that. But I do believe it changes the way we should think about semantic web adoption.
For example, it would be crazy to believe that all data must be in RDF; that would create a huge barrier. Instead, the question should be how RDF and other data approaches can work together to create a powerful web of data. The superior value of the RDF approach should, over time, increase the amount of RDF relative to other approaches.
But I think the single biggest blocker on web of data adoption, and by extension the semantic web, is the lack of ability to query the whole space. Where is the database engine for the semantic web?


Ecosystem 1 - Physical technology meets social technology
I'm pretty sure that the concepts of co-operation, platforms, webs of data and webs of functions will be central to understanding how the internet and web will continue to change our world, and the way technology companies can create defensible long term value. In the next few posts I will look at web 1.0, web 2.0 and the semantic web from the point of view of ecosystems, drawing on the very useful new view of economics known variously as evolutionary or complexity economics. Over on Nodalities you can follow how Talis is putting these ideas to work in the real world of a high tech, innovation-led business. It is this special combination of theory and practice that makes Talis such an intense and wonderful company to work for.
Ecosystem
It is the constant dance between physical and social technology,



The platform is dead. Long live the platform
It seems to me that as we pass into the era of web 2.0, the software platform as we know it today will cease to have significant commercial value.
The principal reason is that the internet and web 2.0 are allowing a move from code sharing to instance sharing for software platforms, causing the existing network effect mechanism for platforms to fail.
The good news is that we can expect new platform models to emerge, based on the properties of sharing a single, persistent online instance rather than sharing code across multiple isolated instances (e.g. Windows).
Some companies have already hooked into some aspects of this new model. eBay and Amazon as platforms have it; Google as a platform does not. I am of course talking about the architecture of participation becoming the principal network effect mechanism for web 2.0 platforms. That is, if the actions of the users contribute to the shared state of the platform (through whichever platform application they may use) in such a way as to enhance the experience of other users, then there is a strong network effect based upon participation.
It is important to note that the forces enabling this new model are also undoing the previous model.
Here's why (IMHO).
Platforms
Over the past 10-15 years, Microsoft demonstrated both the enormous intrinsic and commercial value of software platforms. We have seen the battle for control of the platform played out across many segments of the software industry and layers of the software stack (Oracle vs Sybase at the database layer, Windows vs OS/2, IBM WebSphere vs BEA WebLogic for application servers, Symbian, etc.).
The return on capital invested simply dwarfed other software models, and so platform leadership became, for many software companies, the one true strategy for growth. That amazing value creation was principally driven by two forces: reuse and the network effect.
Reuse: every application built on a platform is saved from having to make the investment to build features that the platform provides. This massively lowers the cost of production (therefore capital invested) for application developers.
Software libraries and software components do this too, yet they are not considered platforms. The difference between a software platform and a software library is the network effect, or ecosystem.
Network Effect: each application built for a particular platform increases the value of having that platform and, by extension, of every other application that already uses the platform. So the more applications for a platform, the greater the value of the platform. For the owner of the platform, given a model that can extract commercial value from the platform's massive intrinsic value, the return on capital is a function of the investment that OTHER people have made. Or put another way, they achieve a return on capital NOT invested by them. Pretty sweet.
But the real question should be: "What causes the network effect in platforms?" What is the mechanism by which the investment of application developer A has increased the value of the platform and of application B?
Does that same mechanism hold in the world of web 2.0? My belief is NO, it doesn't. And that will have a profound effect on the strategy of software companies over the next 10 years. In fact we are already seeing it.
Traditional cause of the platform network effect
It was the dependency on users having purchased and installed the platform in order to use the applications.
The choice of purchase defined which applications you could use; naturally, the platform with the better range and quality of software is more valuable (just as in the games console industry).
Web 2.0 removes the need for user purchase of the platform
As functionality moves off the user's machine into a standards based cloud, the user's choice of application platform effectively disappears. By definition, a web 2.0 platform's API is web based and implementation neutral.
Consider the Google search APIs. Whether there is one application or 100 applications built on them, the value of the platform is not much enhanced; those applications do not add anything to each other, so there is no network effect. From an ecosystem point of view, web 2.0 APIs are much more like software libraries than platforms.
Web 2.0 platform network effect
But web 2.0 platforms have a new trick that traditional platforms don't have. They can easily present one shared instance, i.e. one shared state, to all the different users of all the different applications. This allows the actions of one user using application A of the platform to enhance the experience of another user using application B of the platform. This is the architecture of participation. It is easy to see how both eBay and Amazon increase the power of their content based network effect through open access APIs. It is also easy to see how this doesn't work for Google: the end user of a search app typically can't affect the shared state of the platform.
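As a purely illustrative sketch (all names are hypothetical), this is the shape of that mechanism in code: two different applications share one platform instance, so a contribution made through one application immediately enriches what the other can show its users.

    # Purely illustrative sketch of the architecture of participation:
    # one shared platform instance, many applications, and each user's
    # contribution enriching what every other application can show.
    class SharedPlatform:
        """Hypothetical web 2.0 platform holding a single shared state."""

        def __init__(self):
            self.reviews = {}  # item_id -> list of review texts

        def add_review(self, item_id, text):
            self.reviews.setdefault(item_id, []).append(text)

        def get_reviews(self, item_id):
            return self.reviews.get(item_id, [])

    platform = SharedPlatform()

    # Application A: a mobile client whose user leaves a review.
    platform.add_review("book-42", "Great introduction, very readable.")

    # Application B: a completely different storefront built on the same
    # shared instance; its users now see the review left via application A.
    print(platform.get_reviews("book-42"))

Contrast that with a stateless search API, where nothing one application's users do changes what another application's users get back.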
Open Source Network Effect
Developers still need traditional software platforms though.
So web 2.0 platforms allow sharing of the state of the platform, whereas traditional platforms allowed read only sharing of the code.
There is a way that traditional platforms can drive a network effect by allowing participation in the shared code of the platform: they can let users contribute to the code. This can immediately drive a whole new network effect which hugely increases the intrinsic value of the platform. Unfortunately for the existing platform vendors, nobody wants to submit code that somebody else will make money off. So this can only be done through open source. Linux is hugely valuable, but nobody can make $billions from its commercial sale, at least not directly.
Interestingly, as more open source code is created, it becomes easier to remix that code and create yet more open source software. The more general the software, the more valuable it is for it to have an open source incarnation, i.e. platforms are the natural place for open source to target, as we have seen with Linux, MySQL, JBoss etc.
So for all the reasons above, I am pretty sure that as web 2.0 progresses we will see the rise of a different type of platform, and the existing platform players will have a very hard time holding onto any serious returns.
Long live the platform of participation.


Web Services and the Innovator's Dilemma
Web 2.0 is a vision of the web where content and functions can be remixed and reused to create new content or new applications. Web services and the semantic web are two of the key enablers for this vision, but there appear to be two rival approaches emerging for each. Why is that? Which is best?
Web Services
SOAP & WSDL - opens up a new vista of possibilities by solving some of the really hard problems (WS-this, that and the other), but requires expertise and new infrastructure, e.g. toolkits and app servers, to manage the complexity. Unsurprisingly, the app server vendors are driving these new standards in enterprise software.
REST - opens up a new vista of possibilities by making it very easy to use web application APIs, so new audiences can get involved, and it doesn't require much in the way of changes to the existing software stack. This is largely being driven by a very different community from the enterprise web services lot.
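To illustrate the difference in ceremony, here is a minimal sketch of a REST-style call using only the Python standard library against a made-up endpoint; a SOAP equivalent would typically involve a WSDL, generated stubs and a toolkit before the first request could be made.

    # Minimal sketch of a REST-style API call: an HTTP GET with query
    # parameters, parsed with the standard library. The endpoint and
    # parameters are made up, not a real service.
    from urllib.parse import urlencode
    from urllib.request import urlopen
    import xml.etree.ElementTree as ET

    params = urlencode({"operation": "ItemSearch", "keywords": "semantic web"})
    with urlopen("http://api.example.com/search?" + params) as response:
        doc = ET.parse(response)

    for title in doc.findall(".//item/title"):
        print(title.text)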
Semantic Web
RDF & OWL - opens up new possibilities by solving some really hard problems. Requires expertise and therefore tooling and new infrastructure: a new query language, data storage, parsers etc. Driven by standards bodies like the W3C.
XHTML & Microformats - opens up new possibilities by lowering the barrier to participation for producers and consumers, uses existing technology, and can be hand crafted, i.e. it disintermediates the expert.
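As a small illustration of that lower barrier, here is a hand-written hCard fragment and a few lines of Python that pull the data back out; the markup and values are made up, and a real page would want a more forgiving HTML parser, but producing and consuming this needs no expert tooling.

    # Illustrative sketch: an hCard microformat is ordinary XHTML with
    # agreed class names, so it can be hand crafted and scraped with
    # very simple tooling. The markup and values are made up.
    import xml.etree.ElementTree as ET

    xhtml = """
    <div class="vcard">
      <span class="fn">Jane Example</span>
      <span class="org">Example Ltd</span>
      <a class="url" href="http://example.org/jane">homepage</a>
    </div>
    """

    root = ET.fromstring(xhtml)
    for element in root.iter():
        cls = element.get("class")
        if cls in ("fn", "org"):
            print(cls, ":", element.text)
        elif cls == "url":
            print(cls, ":", element.get("href"))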
It seems to me that the difference in complexity and cost between the approaches is actually a symptom of something deeper.
SOAP web services are trying to go beyond what expert developers could already do with RMI, DCOM etc.
By its nature it must compete with what is already possible: mission critical software systems that are trusted, secure, reliable and accountable, and that typically have a high cost of failure. Most of these developers could not buy into a new way of working if it meant going backwards in any of those critical areas.
Similarly, RDF & OWL are trying to go beyond what expert developers can do with semantics in XML today.
If you are familiar with the book "The Innovator's Dilemma" by Clayton M. Christensen, you may recognise this as the classic description of a sustaining innovation. It must be better than what went before because it competes along the same dimensions with the same audience.
Clayton also describes what he terms "disruptive innovation", of which one type is the low-end disruption. This is where a technically inferior innovation radically reduces the barrier to entry (be that skill, cost or location), thereby allowing an audience that was previously excluded to participate. It competes on new dimensions with a new audience.
This massive new audience is currently excluded from the traditional solution, so for this audience the disruptive innovation only has to be better than nothing.
So disruptive innovation allows a new, less skilled community to participate and do new kinds of things. Almost by definition this community is larger than the community of experts, i.e. it is the long tail.
If we consider both REST and Microformats, we see that neither is technically as good as web services and RDF. But both are significantly easier, with lower skill and cost barriers for both producer and consumer. And sure enough, Amazon are finding that the vast majority of the users of their platform are using the REST APIs.
Software standards have always had a massive network effect. What good is a standard if nobody else uses it? This makes the size of the community around any standard or approach hugely important. The pace of innovation is also deeply linked to the size of the community that can innovate. Consider the number of web authors (including bloggers) who can probably get their heads around REST and Microformats: it is vastly larger than the community of hard-core software developers on the planet.
Clayton describes, with many examples, how low-end disruptions rapidly become better and better until the complex high end solutions are pushed off the map.
It wouldn't be the first time that an innovation became de facto outside the corporate firewall but eventually became good enough to be adopted by the enterprise.
Am I saying that web services and RDF are doomed? The truth is I have no idea, but I doubt it. The reason experts create these new solutions is that they are needed to solve those difficult problems. But, on the other hand, vastly more innovation is likely when ordinary people gain the ability to do what is a "solved problem" for the expert. I would put money on web 2.0 emerging first from the ordinary web user rather than from the software experts.
Microformats:
http://www.microformats.com/
http://www.tantek.com/presentations/2004etech/realworldsemanticspres.html

