Web 2.0 Explorer

Guest Bloggers

By taking a fundamentally Web-based approach to the development of applications, we shift from bolting Web capabilities onto the silo toward a mode in which data and functionality are native to the Web. How do we change the mindset of today's application developers, so that they stop building 'old' applications in the new world?

Reduced to their simplest, the SPARQL Recommendations offer a standard means of querying any store of RDF, regardless of the software used to run the store. The software has to support SPARQL, of course, and the Talis Platform is amongst those that do.
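The idea can be sketched in a few lines. The toy Python snippet below is illustrative only -- it is not a real SPARQL engine, and the prefixes and data are invented -- but it shows the basic operation behind a SPARQL SELECT: matching a triple pattern, containing variables, against a store of RDF-style triples.

```python
# A toy sketch (not a real SPARQL engine): the point of SPARQL is that one
# standard query language works over any store of RDF triples, whatever
# software runs the store. Names and data here are illustrative only.

triples = {
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:bob", "foaf:name", "Bob"),
    ("ex:alice", "foaf:name", "Alice"),
}

def match(pattern, store):
    """Return bindings for variables (strings starting '?') in a single
    triple pattern -- the basic operation behind a SPARQL SELECT."""
    results = []
    for triple in store:
        binding = {}
        for part, term in zip(pattern, triple):
            if part.startswith("?"):
                binding[part] = term    # variable: bind it to this term
            elif part != term:
                break                   # constant mismatch: try next triple
        else:
            results.append(binding)
    return results

# Roughly: SELECT ?who WHERE { ex:alice foaf:knows ?who }
print(match(("ex:alice", "foaf:knows", "?who"), triples))  # [{'?who': 'ex:bob'}]
```

A real engine adds joins over multiple patterns, filters, and so on, but the contract is the same: the query speaks about the data model, never about the software holding it.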

Amazon Web Services Evangelist Jeff Barr has been at it again, using Twitter to announce the release of his employer's latest offering. Amazon has come a long way since its days as a big book shop, and is increasingly making a name for itself as an exemplar of commodity computation.

First we had the Simple Storage Service, S3. Little more than a big disk in the Cloud, it offered an affordable means by which anyone could make large amounts of data available for download by large numbers of people. Second Life client downloads come from S3, as do Talis podcasts. Several of my colleagues use S3 for backing up their laptops (I use Mozy myself, but that's another story).

Then we got the Elastic Compute Cloud, EC2. This commoditised the availability of virtual computers, making it relatively straightforward for those experiencing rapid growth - or needing short-term access to additional computing power for some other reason - to call upon additional computers as required, configure them as needed, use them for as long as necessary, and then throw them back into the pool when done.

Unsurprisingly, given Amazon's e-Commerce heritage, a payment service came next. This essentially opened Amazon's own e-Commerce capabilities to third-party developers, and allowed them to build those capabilities into their own applications. Although we knew that this would come, I should admit here that the pundits at Talis (myself included) were sure that Amazon's third web service would be the one they actually only announced today. Given our interest in data and their interest in e-Commerce, it's perhaps not surprising that we prioritised them differently.

Next along the path came a Service Level Agreement: essential, if Amazon are to move beyond the early adopters and actually see mass-market numbers of mainstream enterprises rely upon their web services.

Which brings us to today, and the unveiling of Amazon SimpleDB.
It had to come, and now it has, offering: “a web service for running queries on structured data in real time. This service works in close conjunction with Amazon Simple Storage Service (Amazon S3) and Amazon Elastic Compute Cloud (Amazon EC2), collectively providing the ability to store, process and query data sets in the cloud.”

It's great to see, and in some ways the conceptual use of Cloud-based 'content' and 'metadata' is similar to our own ideas around the Talis Platform... although with a very different emphasis and realisation. And yes, I know I missed SQS, Mechanical Turk, and various other Amazon web services from my story...

Story originally posted on the Nodalities blog.
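To make the announcement a little more concrete: SimpleDB stores schema-less items as attribute/value pairs within a named domain, and lets you query on those attributes. The toy in-memory sketch below mimics that data model only -- it is not the real AWS API, and the function names are invented for illustration.

```python
# Illustrative sketch only: SimpleDB's model is schema-less items holding
# attribute/value pairs inside a named domain. This toy in-memory version
# mimics the idea; it is NOT the real AWS API.

domain = {}  # item name -> {attribute: value}

def put_attributes(item, attrs):
    """Add or update attributes on an item (items need no shared schema)."""
    domain.setdefault(item, {}).update(attrs)

def query(attribute, value):
    """Return names of items whose attribute equals value (real SimpleDB
    expresses this with a query-expression string)."""
    return [name for name, attrs in domain.items()
            if attrs.get(attribute) == value]

put_attributes("item1", {"type": "podcast", "source": "Talis"})
put_attributes("item2", {"type": "backup"})
print(query("type", "podcast"))  # ['item1']
```

The interesting design choice is the absence of schema: two items in the same domain can carry entirely different attributes, which suits loosely structured 'metadata about things in S3' far better than a relational table would.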

I know this is behind the game, and that the bleeding edge of blog reviews has moved well beyond online streaming service Hulu (even though it's not yet out to the public). But I received my beta invite last week and have had all this time to play around with it.

My initial thoughts: none. No, not one initial thought. Hulu doesn't work in the UK. They don't tell you: "Hey, if you live in the UK, you will be able to access and begin your Hulu experience, but when you choose a show to stream, you'll be disappointed. Have a nice day." You have to jump through all the Beta hoops to get there first.

Now, I know I should have known better, being a generally web-savvy chap. But after a few pre-reviews of the Hulu service, I decided not to read any more blogs about it until after I'd tried it out myself. I knew not to expect too much, after reading the last review over at Between the Lines, but I wanted my own experience. Since then, I've found dozens of blogs about how bad it is that Hulu doesn't work in Europe.

Aside from whingeing about the lack of support, I can't really think of anything more to write about Hulu (apart from its ridiculous, trying-too-hard-for-the-Web-2.0-market name). But doesn't this kind of go against the point of the web? The idea that we can make connections, share content, stream and connect? The principle of the internet is broken by this experiment, and I don't think a platform intended to be a YouTube killer should ever have been trialled in a geographically-limited network. Sure, I understand private Betas, but why limit this to the States? I don't think News Corp really gets the Web 2.0 thing. In fact, I wonder if they really get the internet?

It reminds me of LaunchCast (now Yahoo Music). When I first launched the player, all the content was free, and there was absolutely loads of it. I was thrilled! Over months, however, content became harder to find due to advertisement interruptions and restrictions on skipping tracks. Suddenly, Launch re-directed to Yahoo, and I could no longer skip any content without upgrading to a premium service which hadn't existed before. Then, when I moved to Britain, all the content was unavailable apart from a limited selection which I can only presume was intended for a British audience. (Don't think my mates here would have agreed in a focus group!)

I haven't used a Yahoo service since. No, seriously, I haven't used Yahoo. As soon as Konfabulator was purchased by Yahoo, I uninstalled it. I was all set to set up a Flickr account when I found out it was Yahoo. (I might go back on that one, once I get a decent digital camera.) This wasn't really a boycott so much as a pre-emptive decision. I know that as soon as Yahoo gets hold of a service, its user-friendliness will dissolve into advertisements and 'premium services' (a contradiction in terms!).

This is what Hulu reminds me of: an attempt at grabbing a market, instead of a well-thought-out startup trying to sell a genuinely good service and make a profit on its quality. What is Web 2.0? Hulu doesn't know, and it makes me think that News Corp hasn't really got its head round it at all. I shudder to think what's going to happen with LinkedIn.

-Zach (http://www.zachbeauvais.com)

Dan Farber was one of the first to cover the Giant Global Graph, here on ZDNet. A few days on, though, there's value in taking a look at how these ideas are being discussed across the blogosphere.

The GGG, or Giant Global Graph. It sounds like something with which you might terrify a child at bedtime, but this is no Gruffalo, no Jabberwock, no Smaug. Rather, it's father-of-the-web Tim Berners-Lee's label for his latest attempt to express the power of the Semantic Web's core technologies in ways that will resonate beyond the established SemWeb literati. In the post he writes;

“So, if only we could express these relationships, such as my social graph, in a way that is above the level of documents, then we would get re-use. That's just what the graph does for us. We have the technology -- it is Semantic Web technology, starting with RDF, OWL and SPARQL. Not magic bullets, but the tools which allow us to break free of the document layer. If a social network site uses a common format for expressing that I know Dan Brickley, then any other site or program (when access is allowed) can use that information to give me a better service. Un-manacled to specific documents.”

As we might expect when someone like Berners-Lee posts, his thoughts sparked the usual flurry of interest, picked up by The Guardian, Read/Write Web, ZDNet, Nova Spivack, GigaOM, Nick Carr, and a host of other bloggers. The compulsory Wikipedia stub is already in place, anticipating (at the time of writing) that “it may become a common expression.”

So what is this Giant Global Graph, how's it related to the Semantic Web, and what does it all mean?

In his post, Tim clarifies the distinction between the Net(work of computers) and the (World Wide) Web offered up over that network;

“So the Net and the Web may both be shaped as something mathematicians call a Graph, but they are at different levels. The Net links computers, the Web links documents. Now, people are making another mental move. There is realization now, 'It's not the documents, it is the things they are about which are important'. Obvious, really.”

He then goes to the next level, connecting the statements in that web of documents to form a graph;

“We are all interested in friends, family, colleagues, and acquaintances. There is a lot of blogging about the strain, and total frustration that, while you have a set of friends, the Web is providing you with separate documents about your friends. One in facebook, one on linkedin, one in livejournal, one on advogato, and so on. The frustration that, when you join a photo site or a movie site or a travel site, you name it, you have to tell it who your friends are all over again. The separate Web sites, separate documents, are in fact about the same thing -- but the system doesn't know it. There are cries from the heart (e.g. The Open Social Web Bill of Rights) for my friendship, that relationship to another person, to transcend documents and sites. There is a ”Social Network Portability“ community. Its not the Social Network Sites that are interesting -- it is the Social Network itself. The Social Graph. The way I am connected, not the way my Web pages are connected. We can use the word Graph, now, to distinguish from Web. I called this graph the Semantic Web, but maybe it should have been Giant Global Graph!”

Tim concludes;

“In the long term vision, thinking in terms of the graph rather than the web is critical to us making best use of the mobile web, the zoo of wildly differing devices which will give us access to the system. Then, when I book a flight it is the flight that interests me. Not the flight page on the travel site, or the flight page on the airline site, but the URI (issued by the airlines) of the flight itself. That's what I will bookmark. And whichever device I use to look up the bookmark, phone or office wall, it will access a situation-appropriate view of an integration of everything I know about that flight from different sources. The task of booking and taking the flight will involve many interactions. And all throughout them, that task and the flight will be primary things in my awareness, the websites involved will be secondary things, and the network and the devices tertiary. I'll be thinking in the graph. My flights. My friends. Things in my life. My breakfast. What was that? Oh, yogourt, granola, nuts, and fresh fruit, since you ask.”

So not, then, anything radically new. This is the long-held promise of the Semantic Web, but it is valuable to see that promise rearticulated in something akin to the language of the social network. Those involved in the Semantic Web probably 'knew' all of this at some level, but had perhaps become too caught up in the mechanics and the model, too distant from the point. This is why the Semantic Web matters: the graphing of relationships between resources on the open Web. Not ontology wars. Not RDF-is-better-than-microformats. Not demonstrations of concept in the laboratory and behind the firewall. Not the creation of a shadow web. This. So thank you, Tim, for reminding us.

That said, might Nova's 'semantic graph' not be a better label for this important restating of the point than the rather obtuse GGG? 'Giant' and 'Global' set too many alarm bells ringing for me, and hint way too much at all-encompassing-ness and top-down-ness... even if that's (probably) not what Berners-Lee intends. We got waylaid by misconceptions of ontologies as all-encompassing and all-pervasive. Rubbing everyone's noses in 'Giant' and 'Global' just sets us up for yet another round of that particular debate, and I for one have better things to do...

Let's turn to look at some of the commentary that Berners-Lee's post received.
Journalist and author Nick Carr, for example, remarks;

“Sir Tim suggests that the Semantic Web (recently dubbed 'Web 3.0') was really the Social Graph all along, and that the graph represents the third great conceptual leap for the network - from net to web to graph”

and concludes;

“But while it's true that technologists and theoreticians desire to abstract the graph from the sites - and see only the benefits of doing so - it's not yet clear that that's what ordinary users want or even care about. That'll be the real test of whether the graph makes the leap from mathematician to mainstream - and it will also tell us whether a social network like Facebook has a chance to become a true platform or is fated to remain a mere site.”

Nick's concluding point is certainly well made, but it probably belongs in the early mobile phone camp (who knew they wanted one?) rather than presenting any insurmountable unwillingness to adopt and adapt. The onus is clearly on us to move beyond the talk, and to demonstrate compelling and desirable benefits to being in (on?) the Graph. Tim O'Reilly's damning criticism of OpenSocial offers a lesson that we would do well to learn;

“If all OpenSocial does is allow developers to port their applications more easily from one social network to another, that's a big win for the developer, as they get to shop their application to users of every participating social network. But it provides little incremental value to the user, the real target. We don't want to have the same application on multiple social networks. We want applications that can use data from multiple social networks.”

“Set the data free! Allow social data mashups. That's what will be the trump card in building the winning social networking platform.”

Surely we can all agree with those sentiments?

Scepticism is in evidence elsewhere, perhaps most noticeably when Pete Cashmore writes;

“Much like 'Web 2.0', 'ajax', 'crowdsourcing', the 'wisdom of crowds', 'UGC' (user generated content) and other catchy terms before them, the social graph looks set to become a bullet point on every web startup’s VC pitch in 2008. The blessings this week from Tim Berners-Lee make that inevitable. Let’s leave aside the fact that the 'graph' isn’t a graph in the sense that most people think of it (most would describe it as a 'network') or that the phrase 'social network' could already serve this purpose: there’s a sense that we need a new word for the concept now that these networks are becoming portable, and the term can ride a wave of Facebook hype to become the de facto nomenclature for this latest piece of the portable identity puzzle. Beyond that, the Webfather’s latest blog post gives us a meandering introduction to the social graph’s role in the development of the web. For the record, I’m not bothered by the phrase: it’s nice to have new labels for specific parts of the solution. I am, however, adopting a new lexicon for my day-to-day life in keeping with the trend: making a landline phone call will now be 'unSkyping', Post-It notes will henceforth be called 'retro-Twitters', going outside will now be 'outdoorsing', a paperback book will be known as a 'Kindle Alpha' and Wednesdays will be Day 3.0. No need to remember any of these, of course: I’ll rename them all next month.”

Recent podcast subject Yihong Ding offers a thoughtful consideration of Tim's post, opening with;

“Sir Tim Berners-Lee blogged again. This time he invented another new term---Giant Global Graph. Sir Tim uses GGG to describe [the] Internet in a new abstraction layer that is different from either the Net layer abstraction or the Web layer abstraction. Quite a few technique blogs immediately reported this news in this Thanksgiving weekend. I am afraid, however, that few of them really told readers the deeper meaning of this new GGG. To me, this is a signal from the father of World Wide Web: the Web (or the information on [the] Internet) has started to be reorganized from the traditional publisher-oriented structure to the new viewer-oriented structure”

and continuing,

“Both Brad Fitzpatrick and Alex Iskold presented the same observation: every individual web user expects to have an organized social graph of web information in which they are interested. Independently, I had another presentation but about the same meaning. The term I had used was web space. Due to current status of web evolution, web users are going to look for integrating their explored web information of interest into a personal cyberspace---web space. Inside each web space, information is organized as a social graph based on the perspective of the owner of the web space. This is thus the connection between the web spaces under my interpretation and the social graphs under the interpretation of Brad and Alex. Note that this web-space interpretation reveals another implicit but important aspect: the major role of a web-space owner is a web viewer instead of a web publisher”

before concluding that;

“The emergence of this new Graph abstraction of Internet tells that the Web (or information on Internet) is now evolving from a publisher-oriented structure to a viewer-oriented structure. At the Web layer, every web page shows an information organization based on the view of its publishers. Web viewers generally have no control on how web information should be organized. So the Web layer is upon a publisher-oriented structure. At the new proposed Graph layer, every social graph shows an information organization based on the view of graph owners, who are primarily the web viewers. In general, web publishers have little impact on how these social graphs should be composed. 'It's not the documents, it is the things they are about which are important.' Who are going to answer what are 'the things they are about'? It is the viewers instead of the publishers who will answer. This is why information organization at the Graph layer becomes viewer-oriented. The composition of all viewer-oriented social graphs becomes a giant graph at the global scale that is equivalent to the World Wide Web (but based on a varied view); this giant composition is thus the Giant Global Graph (GGG).”

Writing for GigaOM, Anne Zelenka worries that the GGG is not best suited to the modelling of inter-personal relationships;

“But the Giant Global Graph itself is like Dustin Hoffman’s autistic savant character Raymond Babbitt in the 1988 movie Rain Man. Raymond knew all about plane trips but couldn’t make sense of human relationships.”

“...though Berners-Lee borrows social graph talk, he’s not really concerned with human relationships, but more about things that computers can understand, things like plane trips”

“The semantic web has always been about computers taking on more processing for us, not about computers allowing us to be more human, which is where the social graph might more naturally aim. Semantic web fans would like to suggest otherwise. Nova Spivack, founder of semantic web startup Radar Networks, as well wants to make everything into a semantic graph story. 'The social graph is a subset of the semantic graph,' he told me.”

Whilst Tim's examples might support Anne's point, I'm unconvinced. The semantic technologies behind the GGG are all about expressing relationships between things, and those relationships might as easily be human or social as a manifestation of the airline timetable. Those social relationships, though, are about far more than the zombification of your 'friends' on Facebook.
Rather, we can reach through to the implicit and explicit patterns of relationships between professional peers, students in a class, or citations of an author. We can map the shape of those relationships, and we can leverage existing capabilities to expose them back to participants in the relationship, allowing them to see it, understand it, and use it in new and beneficial ways.

Richard MacManus also covers the story for Read/Write Web, concluding;

“I'm very pleased Tim Berners-Lee has appropriated the concept of the Social Graph and married it to his own vision of the Semantic Web. What Berners-Lee wrote today goes way beyond Facebook, OpenSocial, or social networking in general. It is about how we interact with data on the Web (whether it be mobile or PC or a device like the Amazon Kindle) and the connections that we can take advantage of using the network. This is also why Semantic Apps are so interesting right now, as they take data connection to the next level on the Web. Overall, unlike Nick Carr, I'm not concerned whether mainstream people accept the term 'Graph' or 'Social Graph'. It really doesn't matter, so long as the web apps that people use enable them to participate in this 'next level' of the Web. That's what Google, Facebook, and a lot of other companies are trying to achieve.”

I'm not sure that Nick's concern was with acceptance of the term, so much as acceptance of the concept that their data become (potentially) more portable than they understand or wish. And Google, Facebook and the rest have a very long way to go in achieving (or even, in some cases, recognising) an open and actionable graph.

“Incidentally, it's great to see Tim Berners-Lee 're-using' concepts like the Social Graph, or simply taking inspiration from them. He never really took to the Web 2.0 concept, perhaps because it became too hyped and commercialized, but the fact is that the Consumer Web has given us many innovations over the past few years. Everything from Google to YouTube to MySpace to Facebook. So even though Sir Tim has always been about graphs (as he noted in his post, the Graph is essentially the same as the Semantic Web), it's fantastic he is reaching out to the 'web 2.0' community and citing people like Brad Fitzpatrick and Alex Iskold.”

On the Web 3.0 blog, we learn that;

“We sometimes forget the real use of data - that of providing value to humanity in various forms, and providing true functionality as the humans need it. Connections are good, but functionality is paramount. The fact that a company can store ticket information on the web is not sufficient, but the user being able to buy it is significant. A company storing data is not sufficient; it being able to sieve out information from it, transforming it into knowledge, and converting to action is paramount. Somewhere along this, functionality becomes the significant aspect. URLs are becoming more potent with XML wrappers (RDF/OWL/SPARQL) around them. The new generation of applications will be playing on these enhancers to achieve the seamlessness that we have sorely been lacking in the last 25 years. The WebTop is becoming more significant than the desktop. Browsers that were a mere window to the world may become a real wide entrance to the world itself. In a very short time, local resources on a computer may have no significance in how users achieve functionality.”

Nova Spivack also offers a long and considered response, picking up on some of Anne's concerns;

“But if the GGG emerges it may or may not be semantic. For example social networks are NOT semantic today, even though they contain various kinds of links between people and other things. So what makes a graph 'semantic?' How is the semantic graph different from social networks like Facebook for example?”

He continues,

“A semantic graph is far more reusable than a non-semantic graph -- it's a graph that carries its own meaning. The semantic graph is not merely a graph with links to more kinds of things than the social graph. It's a graph of interconnected things that is machine-understandable -- its meaning or 'semantics' is explicitly represented on the Web, just like its data. This is the real way to make social networks open. Merely opening up their APIs is just the first step”

and concludes with;

“The Giant Global Graph may or may not be a semantic graph. That depends on whether it is implemented with, or at least connected to, W3C standards for the Semantic Web. I believe that because the Semantic Web makes data-integration easier, it will ultimately be widely adopted. Simply put, applications that wish to access or integrate data in the Age of the Web can more easily do so using RDF and OWL. That alone is reason enough to use these standards. Of course there are many other benefits as well, such as the ability to do more sophisticated reasoning across the data, but that is less important. Simply making data more accessible, connectable, and reusable across applications would be a huge benefit.”

So where does all of that leave us?

Well, I don't think we saw something new created last week. What we saw was a restating of some principles at the heart of the Semantic Web, and a recognition that the social graph so frequently mentioned in relation to the big Social Networking sites shares many of those principles. Finally, we saw the beginning of an informed discussion that might - finally - see the fruits of many years of Semantic Web research and development surfaced in language that can be used in conversation with the pragmatists building the mainstream Web of today, aligned to technologies and techniques fitting for that Web, rather than simply making the gloomy shadows a bit more pronounced.

Which brings us, with all due respect to Julia Donaldson, right back to the Gruffalo! :-)

“'A gruffalo? What's a gruffalo?' 'A gruffalo! Why, didn't you know? He has terrible triples, and terrible graphs, and terrible OWL in his terrible ontologies.'”

Hmm. Maybe not. Read the original anyway, it's good...

Content adapted from a post to Nodalities.
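A tiny worked example may help ground all of the above. In the sketch below (plain Python, with site names and identifiers invented for illustration), two 'documents' from different sites each describe the same person; because both use the same identifier for him, merging them is trivial -- which is precisely the re-use above the level of documents that Berners-Lee is arguing for.

```python
# A sketch of Berners-Lee's point: separate sites hold separate documents
# about the same things, and a graph of statements lets them merge.
# The site names, identifiers, and properties here are invented.

facebook_doc = {("person:zach", "knows", "person:dan"),
                ("person:zach", "nick", "zachbeauvais")}

linkedin_doc = {("person:zach", "knows", "person:tim"),
                ("person:zach", "employer", "Talis")}

# Because both documents identify Zach the same way, merging is just a
# set union -- the graph is about the person, not about either document.
graph = facebook_doc | linkedin_doc

friends = sorted(o for s, p, o in graph
                 if s == "person:zach" and p == "knows")
print(friends)  # ['person:dan', 'person:tim']
```

Without the shared identifier, the two documents would remain exactly the disconnected silos Tim describes: same person, two profiles, and "the system doesn't know it."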

After my trial implementation of AdaptiveBlue's SmartLink technology on my blog, I was contacted by Director of Business Development, Fraser Kelton, who agreed to a question-and-answer session about AdaptiveBlue's new technology.

Generally, the concept of the implicit web is intended to alert us to the fact that, besides all the explicit data, services, and links, the Web engages with much more implicit information: which data users have browsed, which services users have invoked, and which links users have clicked. This type of information is often too boring and tedious for humans to read. So, inevitably, this type of information is only implicitly stored (if stored at all) on the Web. The implicit web intends to describe a network of this implicit information.
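As a minimal illustration of the kind of machinery involved (the user names and URLs below are invented), implicit information is typically just aggregated traces of behaviour -- records nobody would ever read, but which software can mine for signals of interest:

```python
from collections import Counter

# Illustrative sketch only: the 'implicit web' is woven from traces like
# these -- clicks too tedious for humans to read, but easy to aggregate.
clicks = Counter()

def record_click(user, link):
    """Store one implicit observation: this user followed this link."""
    clicks[(user, link)] += 1

record_click("alice", "http://example.org/semweb")
record_click("alice", "http://example.org/semweb")
record_click("alice", "http://example.org/video")

# The implicit signal: what does alice keep coming back to?
top = clicks.most_common(1)[0]
print(top)  # (('alice', 'http://example.org/semweb'), 2)
```

No one ever explicitly stated "alice is interested in the Semantic Web"; the statement emerges from data that was only ever implicitly recorded.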

Opinion, to put it mildly, is somewhat divided on the whole Semantic Web thing. Is it the same as 'Web 3.0'? Or is it simply close enough for the distinction to pale into insignificance amongst those who don't see counting angels on the heads of pins as a worthwhile pastime?

The diagram in the original post shows a simple abstraction of web evolution. The traditional World Wide Web, also known as Web 1.0, is a Read-or-Write Web. In particular, authors of web pages write down what they want to share and then publish it online. Web readers can view these web pages and subjectively comprehend their meanings. Unless writers willingly release their contact information in the pages they author, the link between writers and readers is generally disconnected on Web 1.0. By leaving public contact information, however, writers have to disclose their private identities (such as emails, phone numbers, or mailing addresses). In short, Web 1.0 connects people to a public, shared environment --- the World Wide Web. But Web 1.0 essentially does not facilitate direct communication between web readers and writers.

The second stage of web evolution is Web 2.0. Though its definition is still vague, Web 2.0 is a Read/Write Web. On Web 2.0, not only writers but also readers can both read and write to the same web space. This advance allows friendly social communication among web users without obligatory disclosure of private identities, and hence it significantly increases users' interest in participating. Normal web readers (not necessarily standard web authors at the same time) then have a handy way of expressing their viewpoints without the need to disclose who they are. The link between web readers and writers becomes generally connected, though many of the specific connections are still anonymous. Whether there is direct communication between web readers and writers by default is a fundamental distinction between Web 1.0 and Web 2.0. In short, Web 2.0 not only connects individual users to the Web, but also connects these individual users together. It fixes the previous disconnection between web readers and writers.

We don't know precisely what the very next stage of web evolution is at this moment. However, many of us believe that the semantic web must be one of the future stages. Following the last two paradigms, an ideal semantic web is a Read/Write/Request Web. The fundamental change is still at the web space. A web space will no longer be a simple web page, as on Web 1.0. Neither will a web space still be a Web-2.0-style blog/wiki that facilitates only human communication. Every ideal semantic web space will become a little thinking space. It contains owner-approved, machine-processable semantics. Based on these semantics, an ideal semantic web space can actively and proactively execute owner-specified requests by itself and communicate with other semantic web spaces. By this augmentation, a semantic web space is simultaneously a living machine agent. We have named this type of semantic web space an Active Semantic Space (ASpace). (An introductory scientific article about ASpaces can be found here for advanced readers.) In short, the Semantic Web, when it is realized, will connect virtual representatives of the real people who use the World Wide Web. It thus will significantly facilitate the exploration of web resources.

A practical semantic web requires every web user to have a web space of his own. Though it looks unusual at first glance, this requirement is indeed fundamental. It is impossible to imagine that humans would still need to perform every request by themselves on a semantic web. If there are no machine agents to help humans process the machine-processable data on a semantic web, why should we build this type of semantic web in the first place? Every semantic web space is a little agent, so every semantic web user must have a web space. The emergence of the Semantic Web will eventually eliminate the distinction between readers and writers on the Web. Every human web user must simultaneously be a reader, a writer, and a requester; or maybe we should rename them all web participants.

In summary, Web 1.0 connects real people to the World Wide Web. Web 2.0 connects real people who use the World Wide Web. The future semantic web, however, will connect virtual representatives of the real people who use the World Wide Web. This is a simple story of web evolution.

This article was originally posted at Thinking Space.
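The Read/Write/Request idea above can be caricatured in a few lines of code. This sketch is purely illustrative -- ASpaces are a research proposal, and every name below is invented -- but it captures the shape: a web space holds machine-processable facts and executes owner-specified standing requests by itself.

```python
# A toy 'active semantic space': machine-processable facts plus
# owner-specified standing requests. Purely illustrative; all names
# are invented, and real ASpaces are a research proposal, not this code.

class WebSpace:
    def __init__(self, owner):
        self.owner = owner
        self.facts = set()       # machine-processable statements (Write)
        self.requests = []       # owner-specified standing requests (Request)

    def write(self, fact):
        self.facts.add(fact)

    def request(self, predicate, action):
        """Ask the space to act, by itself, on any matching fact."""
        self.requests.append((predicate, action))

    def run(self):
        """Let the space 'think': apply each request to each matching fact."""
        return [action(fact)
                for predicate, action in self.requests
                for fact in self.facts
                if predicate(fact)]

space = WebSpace("zach")
space.request(lambda f: f[0] == "flight" and f[2] == "delayed",
              lambda f: f"alert {space.owner}: {f[1]} is delayed")
space.write(("flight", "BA123", "delayed"))
space.write(("breakfast", "granola", "eaten"))
print(space.run())  # ['alert zach: BA123 is delayed']
```

The point is not the code but the inversion it illustrates: the owner states an intent once, and the space acts on it across whatever information arrives, instead of the human performing every request by hand.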

As Danny highlights in the latest instalment of This Week's Semantic Web, Marc Andreessen has once more demonstrated that he's not content with co-authoring Mosaic, sneaking around in the 24 Hour Laundry and driving social networking Ning-style. Far from it, as he continues his recent practice of blogging thoughtfully on issues facing the industry of which we - and he - are part. Yesterday's post, The three kinds of platforms you meet on the Internet, touched on a number of issues that we've addressed here on Nodalities before, and it is well worth both reading and thinking about.

As Marc suggests in his introduction;

“One of the hottest of hot topics these days is the topic of Internet platforms, or platforms on the Internet. Web services APIs (application programming interfaces), web services protocols like REST and SOAP, the new Facebook platform, Amazon's web services efforts including EC2 and S3, lots of new startups talking platform (including my own company, Ning)... well, 'platform' is turning into a central theme of our industry and one that a lot of people want to think about and talk about. However, the concept of 'platform' is also the focus of a swirling vortex of confusion -- lots of platform-related concepts, many of them highly technical, bleeding together; lots of people harboring various incompatible mental images of what's about to happen in our industry as a consequence of various platforms. I think this confusion is due in part to the term 'platform' being overloaded and being used to mean many different things, and in part because there truly are a lot of moving parts at play that intersect in fascinating but complex ways.”

How true. The Platform space is a great one to be in, and it's brimming over with opportunity and potential; so much so that we're one company staking an awful lot upon the detail of our Platform vision.
Sloppy use of language, however, has led to a situation in which unnecessary confusion is now associated with a superficially straightforward term. Some of this confusion is introduced by innocent drift in the evolving usage of a word, but far more is down to the unfortunate fashion for everyone jumping on the bandwagon and unleashing a 'platform' of their own. At least we've been using the Platform label for our own endeavours in this area for a number of years.

In his attempt to introduce some clarity, Marc's post reiterates his basic definition of an internet platform; “A 'platform' is a system that can be programmed and therefore customized by outside developers -- users -- and in that way, adapted to countless needs and niches that the platform's original developers could not have possibly contemplated, much less had time to accommodate. We have a long and proud history of this concept and this definition in the computer industry stretching all the way back to the 1950's and the original mainframe operating systems, continuing through the personal computer age and now into the Internet era. In the computer industry, this concept of platform is completely settled and widely embraced, and still holds going forward. The key term in the definition of platform is 'programmed'. If you can program it, then it's a platform. If you can't, then it's not.”

Check.

He then offers three 'kinds' or 'levels' of Internet platform, being careful to stress that one is not necessarily better than those it supersedes; “I call these Internet platform models 'levels', because as you go from Level 1 to Level 2 to Level 3, as I will explain, each kind of platform is harder to build, but much better for the developer. Further, as I will also explain, each level typically supersets the levels below. As I describe these three levels of Internet platform, I will walk through the pros and cons of each level as I see them. But let me say up front -- they're all good.
In no way do I intend to cast aspersions on what anyone I discuss is doing. Having a platform is always better than not having a platform, period. Platforms are good, period.”

Marc's three levels are:

Access API - “Architecturally, the key thing to understand about this kind of platform is that the developer's application code lives outside the platform -- the code executes somewhere else, on a server elsewhere on the Internet that is provided by the developer. The application calls the web services API over the Internet to access data and services provided by the platform -- by the core system -- and then the application does its thing, on its own.”

Plug-in API - Superficially very similar to the 'Access API', but the host application (such as Facebook) into which a developer's application connects does the vast majority of the work around marketing; “Facebook provides a whole series of mechanisms by which Facebook users are exposed to third-party apps automatically, just by using Facebook.”

Runtime Environment - “In a Level 3 platform [such as Salesforce], the huge difference is that the third-party application code actually runs inside the platform -- developer code is uploaded and runs online, inside the core system. For this reason, in casual conversation I refer to Level 3 platforms as 'online platforms'.” “Put in plain English? A Level 3 platform's developers upload their code into the platform itself, which is where that code runs. As a developer on a Level 3 platform, you don't need your own servers, your own storage, your own database, your own bandwidth, nothing... in fact, often, all you will really need is a browser.
The platform itself handles everything required to run your application on your behalf.”

And there's more; it's interesting stuff that Marc has clearly thought about long and hard. Reading - and rereading - Marc's post, though, I kept coming back to ideas touched upon in two posts of mine about the relative openness of different Platform solutions; “Facebook and Talis might very well be offering 'Platforms', but they're quite different in intention. Facebook's platform seems to be all about making the Facebook site as rich, compelling and sticky as possible; everything is sucked to one point. The Talis Platform, on the other hand, is about providing developers - wherever they are - with the tools and capabilities to easily link and manipulate data across and through the web. The former sits heavily 'on' the web, and feeds upon it to suck ever more into its maw. The latter is truly 'of' the web, giving a distributed community of developers and users powerful new capabilities to enmesh their applications, and to deliver capabilities at the point of need.”

Regardless of its position in Marc's levels, I truly hope and believe that the Internet platforms with long-term viability will be those that embrace the Network rather than feeding rapaciously upon it; those that are of the web, as we are trying so hard to be.

A Platform should give the developer a helping hand. It should lift them up and provide them with a set of tools that make it easier to concentrate upon and deliver their core value, whilst the Platform worries about the day-to-day mundanity that is mere context [to paraphrase Geoffrey Moore]. A Platform should enable developers to realise the benefit of those tools and capabilities in places and manners of their own choosing, rather than expecting or requiring them merely to expose their assets within the bounds of whatever site(s) the Platform chooses to offer. Platform providers who realise and embrace that will be the ones to succeed.

With Ian Davis and me packing to join the UK contingent hopping across the Atlantic to this week's Web 2.0 Summit in San Francisco, it was interesting to see Anthony Lilley's piece on Web 2.0 and Web 3.0 in today's Guardian. He's clearly not a fan of the labels; “So, finally web 2.0 is dead. Its jargon half-life has expired and the buzzword du jour is being interred and superseded. And by what? Well, you'll never guess. Long live web 3.0. Honestly, give me strength. We'll look back in 20 years and wonder when we decided to hand over the English language to people who can haggle for hours about the difference between versions 2.1 and 2.5 of some software.”

In amongst the criticism of marketing hype, and the grounding in nappy/diaper changing that I am so happy to have left behind for the giddy heights of the tooth fairy, Anthony follows John Markoff's line in postulating that Web 3.0 may be the Semantic Web; “I'm coming to the conclusion that if web 3.0 is anything at all, then it's a step on the way to something I first heard about several years ago - the development of the semantic web. And, let's be honest, a version number is a better selling point than the word semantic is ever going to be.”

On the way, Anthony steps sideways into a discussion of money; “But I share some of the cynicism of a Canadian colleague who says that web 2.0 will actually come to an end when the venture capital money runs out. Well, given that lots of Silicon Valley investors are suddenly starting to talk about web 3.0, maybe that day is near and web 3.0 is just a branding relaunch, kind of like Kylie's new look?”

Despite recent figures in the Financial Times, I'm actually not so sure that the money is leaving Web 2.0. Rather, I think that we're seeing the sort of technological bedding in that Brad Feld and Talis Platform Advisory Group member Mills Davis talked about in their podcasts with me.
VCs aren't drawing back from funding Web 2.0 at all; instead, we're moving through the hype that Anthony rightly criticises, and emerging into an environment in which smarter entrepreneurs and smarter investors are once again becoming interested in meeting real business opportunities. Web 2.0 technologies are there, through and through, but there's far less interest in funding a company just because its website has curvy corners and a smidge of AJAX. That's a good thing. It doesn't mean Web 2.0 is dead. Maybe it does mean Web 2.0 has grown up a little.

Like so many others, Anthony also refers to Jason Calacanis' recent PR stunt. I commented on that at the time, but he draws value from Jason's assertion that; “Web 3.0 is the creation of high-quality content and services produced by gifted individuals using web 2.0 technology as an enabling platform. Web 3.0 throttles the 'wisdom of the crowds' from turning into the 'madness of the mobs' we've seen all too often, by balancing it with a respect of experts.”

Well, maybe. “The reliability of content and an understanding of the wider context in which content sits are rising in importance on the web and taking their place alongside the wondrous power of group communication, especially as more and more people join the party.”

Absolutely. Here, Anthony hits the nail right on the head. Long before the all-encompassing ontological wonder of the Semantic Web is realised (if it ever is), there is much that some of its building blocks can do to help us deliver real solutions to real problems right now. I touched on this mid-point between Web 2.0 and the Semantic Web in my presentation in Cambridge last week, and will be expanding upon those ideas in various places over the next wee while.

Behind the curvy corners and the blurring of boundaries between the Cloud and its access point, Web 2.0 is the manifestation of numerous trends, and Tim O'Reilly has consistently done a good job of expressing these:
open source, falling costs of storage, increases in compute power, increasing ubiquity of access, commoditisation, software as a service, and more.

However, for all their advances, all too many Web 2.0 applications remain fundamentally 'on' rather than 'of' the Web, offering rich functionality and interaction within their own little microcosm of the wider Web. Through pragmatic application of robust elements of the Semantic Web stack, we can move far beyond 'simply' crowdsourcing an encyclopaedia, 'merely' tracking recommendations and behaviour within a single e-commerce site, or 'just' allowing 46 million people to turn one another into zombies. It is this recognition - that the power of the connections between resources is woefully under-utilised - that lies behind the Talis Platform. We are moving beyond the 'see also' links of the traditional web, and beyond the best-efforts silos of Web 2.0's darlings, to offer means by which assertions - and their provenance - may be made and tracked across the open web. Many of Web 2.0's ideas figure highly, as does a strong grounding in the technologies of the Semantic Web. Data is, of course, key... but we need to move beyond current presumptions in favour of use toward a model in which everyone is clear as to what data can - and should - be used for. Hence our long-standing interest in the Open Data movement.

Is any of this 'Web 3.0'? I'm not sure. Talis Platform Advisory Group member Nova Spivack has, in the past, attempted to defuse the whole Web 2.0/Web 3.0 polarisation by painting Web 3.0 as merely a label for the third decade of the Web. Semantic technologies are part of that decade, but so are other things. Nova is one of those speaking in a Semantic Web session at the Web 2.0 Summit this week. It'll be interesting to see how his ideas are received in that temple to 2.0, and you can be sure that I'll be sat there taking notes...

Image of Kylie Minogue by Keven Law, shared on Flickr with a Creative Commons license.
To understand why, you'll have to read Anthony's article...
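The post above talks about making assertions - and their provenance - trackable across the open web. As a purely illustrative sketch of that idea (plain Python tuples stand in for a real RDF triple store, and every identifier and site name here is invented), provenance-aware assertions can be modelled as quads rather than bare triples:

```python
# Each assertion is stored as a quad: (subject, predicate, object, source).
# Keeping the source alongside the triple means provenance survives
# aggregation: consumers can judge who said what, not just what was said.
assertions = {
    ("book:1984", "dc:creator", "person:orwell", "http://libraryA.example/"),
    ("book:1984", "dc:title", "Nineteen Eighty-Four", "http://libraryA.example/"),
    ("book:1984", "review:rating", "5", "http://reviews.example/"),
}

def statements_about(subject):
    """Every assertion about a subject, pooled from all contributing sites."""
    return {(p, o, src) for (s, p, o, src) in assertions if s == subject}

def sources_for(subject, predicate):
    """Which sites asserted a given property of a subject?"""
    return {src for (s, p, o, src) in assertions
            if s == subject and p == predicate}
```

Here `sources_for("book:1984", "review:rating")` would return only the reviews site, while `statements_about("book:1984")` still pools all three statements - the aggregated view keeps track of who asserted what.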

I've really enjoyed the recent flow of posts between Talis Platform Advisory Group member Nova Spivack and (not yet a member!) Tim O'Reilly. Through them it's possible to see some of the complex interrelationships between aspects of 'Web 2.0' and the more pragmatic areas of 'Semantic Web' development. 'Web 3.0' occasionally makes an appearance, confuses things, and gets pushed down the pile in order that a more sensible dialogue can take place. Except, perhaps, in Nova's use of it to describe the third decade of the Web, 'Web 3.0' currently seems to cause little more than confusion; surely the exact opposite of what such a loose label should be for.

Despite that, it - or a term like it - will be needed as the media and others struggle to describe the transitional phase that we're entering, as the exuberant outpourings of the early Web 2.0 days bed down into sustainable and longer-term activity. We can either craft these labels ourselves and use them to tell our stories, or we can have them created for us with language that will (doubtless) pit the new thingummy against the 'old' Web 2.0 in ways that are unhelpful.

For want of a better term, many of us do seem to fall back upon 'Web 3.0' to describe something else, but I'm not sure that any of us actually like the term. 'Web of Data'? Maybe. 'Web of Intentions'? Possibly... and I'll begin to dig into why in an upcoming series of posts. 'Semantic Web'? No, probably not. It's far too bound up in the totality of Tim Berners-Lee's vision; something that we see small parts of in various labs around the world, but something that is an extremely long way from the mainstream web of today or tomorrow.
Parts of the Semantic Web ideal figure extremely highly, but it would be unwise to hobble them by bogging discussion down in all the ontological big-system baggage that seems to accompany any mention of the big SW. Robust, pragmatic, and Web-scale deployment of the technologies and ideas of the Semantic Web is not a replacement for Web 2.0. It is an evolution; a change of emphasis and approach. It is the realisation of many of Web 2.0's under-delivered promises, and a powerful step forwards for incumbents and new entrants alike. The opening up (legally, technically, and practically) of the data that drives the current social web is the big story. The particular W3C recommendations that make it possible are a means to an end.

As Nova comments; “The Semantic Web is not about AI or anything fancy like that, it is really just about data. Another and perhaps better name for it would be 'The Data Web.'”

Nova also remarks; “I agree with Tim that the Web 2.0 era was a renaissance -- and that there were certain trends and patterns that I think Tim recognized first, and that he has explained better, than just about anyone else. Tim helped the world to see what Web 2.0 was really about -- collective intelligence.”

Absolutely. And it is here that the opportunity lies to take a huge step forward. We're seeing plenty of interesting examples in which silos of reasonably collective, reasonably intelligent data are growing and being mined. The opportunities are so much greater with an open pool of data, to which context, role and reason can be applied, and it is here that semantic technologies such as RDF have so much to offer.

Nova goes on to say; “The fact is, while I have great respect for Tim as a thinker, I don't think he truly 'gets' the Semantic Web yet.
In fact, he consistently misses the real point of where these technologies add value, and instead gets stuck on edge-cases (like artificial intelligence) that all of us who are really working on these technologies actually don't think about at all. We don't care about reasoning or artificial intelligence, we care about OPEN DATA. From what I can see, Tim thinks the Semantic Web is some kind of artificial intelligence system. If that is the case, he's completely missing the point. Yes, of course it enables better, smarter applications. But it's fundamentally NOT about AI and it never was. It's about OPEN DATA. The Semantic Web should be renamed to simply The Data Web.”

I know for a fact that Tim 'gets' - and passionately believes in - Open Data. I've seen him talk compellingly on the subject, and read his thoughts online more than once. It does seem, though, that he's not yet making the connection between the power and importance of Open Data and the open web of data that would result from a move from the siloed databases of today's best Web applications to a distributed network of flexible and actionable RDF data. Getting the data out there (with appropriate licenses to encourage use and reuse, of course) is only part of the job. The networks of association, inference, context and more make the sum of the parts worth far more than the individual records or databases... and this doesn't require (despite fears to the contrary) any wholesale adoption of inflexible ontologies or the widespread crafting of RDF.

Now I really must finish the set of posts in which I hope to show more clearly how web-scale and sustainable deployment of Semantic Technologies promises to enrich (not replace) the vibrant ecosystem that Tim has so eloquently captured in his descriptions of Web 2.0.
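One way to make concrete why an open pool of data is worth more than its siloed parts: a question that spans two datasets is unanswerable in either alone, but falls out trivially once their statements are merged. This is a hypothetical, minimal sketch - Python sets stand in for RDF graphs, and all the identifiers are invented:

```python
# Silo A (a catalogue) knows books and their authors.
catalogue = {("book:dubliners", "dc:creator", "person:joyce")}

# Silo B (a biography site) knows authors and their birthplaces.
biographies = {("person:joyce", "bio:birthplace", "place:dublin")}

# Neither silo can answer "where was this book's author born?" on its own.
# Merge them into one open pool and the join becomes straightforward.
pool = catalogue | biographies

def birthplace_of_author(book):
    """Follow book -> creator -> birthplace across the merged pool."""
    authors = {o for (s, p, o) in pool if s == book and p == "dc:creator"}
    return {o for (s, p, o) in pool
            if s in authors and p == "bio:birthplace"}

print(birthplace_of_author("book:dubliners"))  # prints {'place:dublin'}
```

The answer lives in neither database; it emerges from the connection between them - which is the point being made above about networks of association making the sum worth more than the individual records.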
