Addressing The Limitations Of Open Standards

Open standards are great – they can provide machine- and application-independence, thus avoiding vendor lock-in, and they can help to ensure services are interoperable and widely accessible. Unfortunately open standards don’t always work – they can be too ambitious, fail to gain market acceptance, be too costly to implement or be superseded by alternatives. So how do development programmes ensure that they make use of open standards which will be successful, and avoid making costly mistakes when selecting standards? This is the theme of a paper on “Addressing The Limitations Of Open Standards” by myself, my colleague Marieke Guy and Alastair Dunning of the AHDS, which will be given at the Museums and the Web 2007 conference on 12 April.

The paper and accompanying slides are available. Your comments are welcome.

3 Comments

2. Really it comes down to two things: flexibility and simplicity. Open standards work best when they’ve been designed to be easy to implement, and generalised rather than inclusive of the kitchen sink. RSS 0.91 is still the most popular version, and for good reason: 1.0 was needlessly based on RDF, and 2.0, while arguably better, has a higher programming overhead.
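The overhead difference is easy to see in code. The sketch below (using hypothetical two-item feed snippets, not real feeds) parses a minimal RSS 2.0 document with Python’s standard XML library, then does the same for an RDF-based RSS 1.0 document, where every lookup must be namespace-qualified:

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 document (hypothetical example, not a real feed).
RSS2 = """<rss version="2.0">
  <channel>
    <title>Example Feed</title>
    <item><title>First post</title><link>http://example.org/1</link></item>
  </channel>
</rss>"""

# RSS 2.0 parses with a few lines of plain, namespace-free XML handling.
root = ET.fromstring(RSS2)
titles = [item.findtext("title") for item in root.iter("item")]
print(titles)  # ['First post']

# RSS 1.0, being RDF-based, wraps everything in namespaces, so every
# element lookup has to carry a namespace mapping along with it.
RSS1 = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                   xmlns="http://purl.org/rss/1.0/">
  <item rdf:about="http://example.org/1"><title>First post</title></item>
</rdf:RDF>"""
ns = {"rss": "http://purl.org/rss/1.0/"}
root1 = ET.fromstring(RSS1)
titles1 = [item.findtext("rss:title", namespaces=ns)
           for item in root1.findall("rss:item", ns)]
print(titles1)  # ['First post']
```

Both snippets extract the same data, but the RSS 1.0 path forces the developer to understand RDF/XML namespacing before the first item title comes out – exactly the sort of overhead that keeps the simpler format popular.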

When a technical standard is designed by committee, or to satisfy a concept rather than a requirement, it’s almost always doomed to failure. It’s no surprise that the standards that underpin both the web as a whole and web 2.0 – HTML, HTTP, RSS, OpenID – were initially the work of one person, in order to satisfy a real, rather than conceptual, need.

In particular, note OpenID’s current widespread growth: there are other identity solutions out there, many of which are much more fully-featured. But OpenID is much more attractive to programmers, because they can write a client very simply. (It helps that Janrain provide libraries, but check out Scott’s experiences.)
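As a rough illustration of that simplicity: the first step of an OpenID 1.x consumer is just fetching the user’s claimed identifier page and reading the `openid.server` (and optional `openid.delegate`) `<link>` tags out of its HTML head. The sketch below shows that discovery step only, against a hypothetical profile page – a full client would still need to perform association and signature verification:

```python
from html.parser import HTMLParser

# Hypothetical profile page a user might claim as their OpenID identifier.
PROFILE_HTML = """<html><head>
  <link rel="openid.server" href="http://openid.example.com/server">
  <link rel="openid.delegate" href="http://user.example.com/">
</head><body>My homepage</body></html>"""

class OpenIDLinkFinder(HTMLParser):
    """Collect the openid.* <link> endpoints from an identity page."""
    def __init__(self):
        super().__init__()
        self.endpoints = {}

    def handle_starttag(self, tag, attrs):
        # Keep any <link rel="openid.*" href="..."> declarations.
        if tag == "link":
            a = dict(attrs)
            if a.get("rel", "").startswith("openid."):
                self.endpoints[a["rel"]] = a.get("href")

finder = OpenIDLinkFinder()
finder.feed(PROFILE_HTML)
print(finder.endpoints["openid.server"])  # http://openid.example.com/server
```

A dozen lines of standard-library parsing gets a consumer from “user typed a URL” to “here is the server to talk to” – which is a large part of why the barrier to entry feels so low compared with heavier identity stacks.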

Easy-to-implement standards also represent a much reduced risk upon adoption: if it doesn’t work out, you’ve lost much less programming time. This in itself is an argument as to why they’re adopted more quickly.

Hi Ben – Thanks for the comments. You’re right, Adobe has recently announced its intention to make PDF an open standard. My point that PDF has been a widely deployed proprietary solution for the community for over ten years is still valid, though.
Your second point is very interesting: “When a technical standard is designed by committee, or to satisfy a concept rather than a requirement, it’s almost always doomed to failure.” Well, the W3C standards are designed by committees (W3C Working Groups), whereas RSS 2.0 was designed primarily by an individual (RSS 1.0 was designed by a small group closely associated with the W3C, hence the use of RDF). But both RSS 1.0 and 2.0 fail, I would suggest, the EU’s definition of an open standard, by not having stable governance and a clear roadmap for future development (as I said when I delivered a paper at WWW 2006). So where does this leave us? Perhaps programmers aren’t as enamoured with open standards as policy makers are, and have a more pragmatic view of what an open standard is? (RSS may not be an open standard according to the EU definition, but nor is it a proprietary format owned by a large corporation.)
I know that the JISC OSS Watch service defines open source software as software which has an OSI-approved licence. But there has been some discussion that ‘open-sourcedness’ could be regarded as having more to do with a community development process, and that a formal requirement for an OSI-approved licence could be regarded as too bureaucratic. Should a similar stance be taken towards open standards? Or would, as I’ve heard staff at OSS Watch suggest, liberalising the definition lead to fear, uncertainty and doubt?
And if we feel that JISC’s statement on “interoperability through open standards” can lead to heavyweight standards which are difficult to implement, what principle should replace it?