I’m now in the midst of trying to make more sense of the themes in this talk whilst in the writing up stage for my PhD… and much of the feedback I had from the talk has been incredibly valuable in that – so comments are always welcome.

The OGP was always envisaged as a ‘multi-stakeholder forum’ – not only for civil society and governments, but also to include the private sector. But, as Martin Tisne noted in opening the session, private sector involvement has so far been limited – although an OGP Private Sector Council is currently developing.

In his remarks (building on notes from 2013), Martin outlined six different roles for the private sector in open government, including:

Firms as mediators of open government data – making governance related public data more accessible;

Firms as beneficiaries and users of open data – building businesses on data releases, and fostering demand for, and sustainable supply of, open data;

Firms as anti-corruption advocates – particularly rating agencies, whose judgements on the investment risks posed by poor governance environments can strongly incentivise governments to institute reforms;

Firms practising corporate accountability – including by being transparent about their own activities;

Technology firms providing platforms for citizen-state interaction – from large platforms like Facebook, which have played a role in democracy movements, to specifically civic private-sector provided platforms like change.org or SeeClickFix;

Companies providing technical assistance and advice to governments on their OGP action plans.

Reflecting on public and private interests

Notwithstanding the positive contributions and points made by all the panelists in the session, I do find myself approaching the general concept of private sector engagement with OGP with a constructive scepticism, one that I hope supports wider reflection on the role and accountability of all stakeholders in the process. Many of these reflections are driven by concern about the relative power of different stakeholders, and by the fact that, in a world where the state is often in retreat, civil society is spread increasingly thin, and wealth is accumulated in vastly uneven ways, ensuring a fair process of multi-stakeholder dialogue requires careful institutional design. In light of the uneven flow of resources in our world, these reflections also draw on an important distinction between public and private interest.

Whilst there are institutional mechanisms in place (albeit flawed in many cases) that mean both government and non-profits should operate in the public interest, the essential logic of the private sector is to act in private interest. Of course, the extent of this logic varies by type of firm, but large multi-nationals have legal obligations to their shareholders which can, at least when shareholders are focussed on short-term returns, create direct tensions with responsible corporate behaviour. This is relevant for OGP in at least two ways:

Firstly, when private firms are active contributors to open government activities, whether mediating public data, providing humanitarian interventions, offering platforms for citizen interaction, or providing technical assistance, mechanisms are needed in a public interest forum such as the OGP to ensure that such private sector interventions provide a net gain to the public good.

Take for example a private firm that offers hardware or software to a government for free to support it in implementing an open government project. If the project has a reasonable chance of success, this can be a positive contribution to the public good. However, if the motivation for the project comes from private rather than public interest, and leads to a government being locked into future use of a proprietary software platform, or into an ongoing relationship with the company, which has gained special access as a result of its ‘CSR’ support for the open government project – then it is possible for the net result to be against the public interest.

It should be possible to establish governance mechanisms that address these concerns, allowing the genuine, win-win public interest contributions of the private sector to open government and development to be facilitated, whilst establishing checks against abuse of the power imbalance – whether due to relative wealth, scale or technical know-how – that can exist between firms and states.

Secondly, corporate contributions to aspects of the OGP agenda should not distract from a focus on key issues of large-scale corporate behaviour that undermine the capacity and effectiveness of governments, such as the use of complex tax avoidance schemes, or the exploitation of workforces and suppression of wages such that citizens have little time or energy left after achieving the essentials of daily living to give to civic engagement.

A proposal

In Tuesday’s session these reflections led me towards thinking about whether the Open Government Partnership should have some form of eligibility criteria for corporate participants, as a partial parallel to those that exist for states. To keep this practical and relevant, the criteria could relate to the existence of key disclosures by the firm in all the settings it operates in: such as disclosure of the amount of tax paid, the beneficial owners of the firm, and the amount of funding the firm is putting towards engagement in the OGP process.

Such requirements need not necessarily operate in an entirely gatekeeping fashion (i.e. it should not be that participants cannot engage at all without such disclosures), but could be instituted initially as a recommended transparency practice, creating space for social pressures to encourage compliance, and giving extra information to those considering the legitimacy of, and weight to give to, the contributions of corporate participants within the OGP process.

As noted earlier, these critical reflections might also be extended to civil society participants: there can also be legitimate concerns about the interests being represented through the work of CSOs. The Who Funds You campaign is a useful point of reference here: CSO participants could be encouraged to disclose information on who is funding their work, and, again, how much resource they are dedicating to OGP work.

Conclusions

This post provides some initial reflections as a discussion starter. The purpose is not to argue against private sector involvement in OGP, but, in engaging proactively with a multi-stakeholder model, to raise the need for critical thinking in the open government debate not only about the transparency and accountability of governments, but also about the transparency and accountability of the other parties who are engaged.

When I first jotted down a few notes on how to go forward from the rapid prototype I worked on with Sarah Bird in 2012, I didn’t realise we would actually end up with the opportunity to put some of those ideas into practice. However, we did – and so in this post I wanted to reflect on some aspects of the standard we’ve arrived at, some of the learning from the process, and a few of the ideas that have guided at least my inputs into the development process.

As, hopefully, others pick up and draw upon the initial work we’ve done (in addition to the great inputs we’ve had already), I’m certain there will be much more learning to capture.

(1) Foundations for ‘open by default’

Early open data advocacy called for ‘raw data now‘, asking governments essentially to export and dump existing datasets online, with issues of structure and regular publishing processes to be sorted out later. Yet, as open data matures, the discussion is shifting to the idea of ‘open by default’. Taken seriously, this means more than openly licensing whatever data dumps happen to be created: it should mean that data is released from government systems as a matter of course, as part of their day-to-day operation.

The full OCDS model is designed to support this kind of ‘open by default’, allowing publishers to provide small releases of data every time some event occurs in the lifetime of a contracting process. A new tender is a release. An amendment to that tender is a release. The contract being awarded, or then signed, are each releases. These data releases are tied together by a common identifier, and can be combined into a summary record, providing a snapshot view of the state of a contracting process, and a history of how it has developed over time.
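The releases-and-records model described above can be sketched in a few lines of Python. This is an illustrative simplification, not the official OCDS merge routine (which handles nested arrays and removals more carefully), and the ocid and field values below are invented examples:

```python
# Sketch: releases sharing an ocid are combined into a record that keeps
# the full event history plus a compiled snapshot of the current state.

def compile_record(releases):
    releases = sorted(releases, key=lambda r: r["date"])
    compiled = {}
    for release in releases:
        for key, value in release.items():
            if key not in ("id", "date", "tag"):
                compiled[key] = value  # later releases overwrite earlier values
    return {
        "ocid": compiled.get("ocid"),
        "releases": releases,          # history: every event, as it happened
        "compiledRelease": compiled,   # snapshot: current state of the process
    }

releases = [
    {"id": "1", "date": "2014-01-01", "tag": ["tender"],
     "ocid": "ocds-abc123-0001", "tender": {"value": 1000}},
    {"id": "2", "date": "2014-02-01", "tag": ["tenderAmendment"],
     "ocid": "ocds-abc123-0001", "tender": {"value": 1500}},
]
record = compile_record(releases)
```

The snapshot here reflects the amended tender value, while the releases list preserves how the process developed over time.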

This releases and records model seeks to combine different user needs: from the firm seeking information about tender opportunities, to the civil society organisation wishing to analyse across a wide range of contracting processes. And by allowing core stages in the business process of contracting to be published as they happen, and then joined up later, it is oriented towards the development of contracting systems that default to timely openness.

As I’ll be exploring in my talk at the Berkman Centre next week, the challenge ahead for open data is not just to find standards to make existing datasets line-up when they get dumped online, but is to envisage and co-design new infrastructures for everyday transparent, effective and accountable processes of government and governance.

(2) Not your minimum viable product

Many open data standard projects adopt either a ‘Minimum Viable Product‘ approach, looking to capture only the few most common fields between publishers, or are developed by focussing on the concerns of a single publisher or user. Whilst MVP models may make sense for small building blocks designed to fit into other standardisation efforts, when it came to OCDS there was a clear user demand to link up data along the contracting process, and this required an overarching framework into which simple components could be placed, or from which they could be extracted, rather than the creation of ad-hoc components with the attempt to join them up made later on.

Whilst we didn’t quite achieve the full abstract model + idiomatic serialisations proposed in the initial technical architecture sketch, we have ended up with a core schema, and then suggested ways to represent this data in both structured and flat formats. This is already proving useful for example in exploring how data published as part of the UK Local Government Transparency Code might be mapped to OCDS from existing CSV schemas.

(3) The interop balancing act & keeping flex in the framework

OCDS is, ultimately, not a small standard. It seeks to describe the whole of a contracting process, from planning, through tender, to contract award, signed contract, and project implementation. And at each stage it provides space for capturing detailed information, linking to documents, tracking milestones and tracking values and line-items.

However, OCDS by no means covers all the things that publishers might want to state about contracting, nor all the things users may want to know. Instead, it focusses on achieving interoperability of data in a number of key areas, and then provides a framework into which extensions can be linked as the needs of different sub-communities of open data users arise.

We’re only in the early stages of thinking about how extensions to the standard will work, but I suspect they will turn out to be an important aspect: allowing different groups to come together to agree (or contest) the extra elements that are important to share in a particular country, sector or context. Over time, some may move into the core of the standard, and potentially elements that appear core right now might move into the realm of extensions, each able to have their own governance processes if appropriate.

As Urs Gasser and John Palfrey note in their work on Interop, the key in building towards interoperability is not to make everything standardised and interoperable, but to work out the ways in which things should be made compatible, and the ways in which they should not. Forcing everything into a common mould removes the diversity of the real world, yet leaving everything underspecified means no possibility of connecting data up. This is a question both of the standards themselves, and of the pressures that shape how they are adopted.

(4) Avoiding identity crisis

Data describes things. To be described, those things need to be identified. When describing data on the web, it helps if those things can be unambiguously identified and distinguished from other things which might share the same names or identification numbers. This generally requires the use of globally unique identifiers (GUIDs): some value which, in the universe of all available contracting data, for example, picks out a unique contracting process; or, in the universe of all organizations, uniquely identifies a specific organization. However, providing these identifiers can turn out to be both a politically and technically challenging process.

The Open Data Institute have recently published a report underlining how important identifiers are to processes of opening data. Yet consistent identifiers often have the key properties of public goods: everyone benefits from having them, but providing and maintaining them has costs attached which no individual identifier user has an incentive to cover. In some cases, such as goods and service identifiers, projects have emerged which take a proprietary approach to funding the maintenance of those identifiers, selling access to the lookup lists which match the codes describing goods and services to their descriptions. This clearly raises challenges for an open standard: when proprietary identifiers are incorporated into data, users may face extra costs to interpret and make sense of that data.

In some cases, we’ve split the ‘scheme’ out into a separate field: for example, an organization identifier consists of a scheme field with a value like ‘GB-COH’ to stand for UK Companies House, and then the identifier given in that scheme, like ‘5381958’. This approach allows people to store those identifiers in their existing systems without change (existing databases might hold national company numbers, with the field assumed to come from a particular register), whilst making explicit the scheme they come from in the OCDS. In other cases, however, we look to create new composite string identifiers, combining a prefix and some identifier drawn from an organization’s internal system. This is particularly the case for the Open Contracting ID (ocid). By doing this, the identifier can travel between systems more easily as a GUID – and could even be incorporated in unstructured data as a key for locating documents and resources related to a given contracting process.
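The two identifier patterns just described might be sketched as follows. The scheme/id split and prefix approach mirror the discussion above, but the example values (the prefix and internal tender number) are invented:

```python
# Pattern 1: keep the identifier scheme explicit alongside the register's
# own identifier, so existing systems can keep storing the bare number.
def org_identifier(scheme, identifier):
    return {"scheme": scheme, "id": identifier}

# Pattern 2: build a composite globally unique identifier (the ocid) by
# combining a publisher prefix with an internal system identifier.
def make_ocid(prefix, internal_id):
    return f"{prefix}-{internal_id}"

company = org_identifier("GB-COH", "5381958")       # UK Companies House number
ocid = make_ocid("ocds-a1b2c3", "TENDER-2014-0042")  # invented prefix + internal id
```

The composite string can then travel between systems, or appear in unstructured documents, without any lookup table needed to interpret it.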

However, recent learning from the project is showing that many organisations are hesitant about the introduction of new IDs, and that adoption of an identifier schema may require as much advocacy as adoption of a standard. At a policy level, bringing some external convention for identifying things into a dataset appears to be seen as affecting the, for want of a better word, sovereignty of a specific dataset: even if in practice the prefix approach of the ocid means it only needs to be hard-coded in the systems that expose data to the world, not necessarily stored inside organizations’ databases. However, this is an area I suspect we will need to explore more, and keep tracking, as OCDS adoption moves forward.

(5) Bridging communities of practice

If you look closely you might in fact notice that the specification just launched in Costa Rica is actually labelled as a ‘release candidate‘. This points to another key element of learning in the project, concerning the different processes and timelines of policy and technical standardisation. In the world of funded projects and policy processes, deadlines are often fixed, and the project plan has to work backwards from there. In a technical standardisation process, there is no ‘standard’ until a specification is in use: and has been robustly tested. The processes for adopting a policy standard, and setting a technical one, differ – and whilst perhaps we should have spoken from the start of an overall standard, embedding within it a technical specification, we were too far down the path towards the policy launch by the time this became clear. As a result, the Release Candidate designation is intended to suggest that the specification is ready to draw upon, but that there is still a process to go through (and future governance arrangements to be defined) before it can be adopted as a standard per se.

(6) The schema is just the start of it

This leads to the most important point: that launching the schemas and specification is just one part of delivering the standard.

In a recent e-mail conversation with Greg Bloom about elements of standardisation, linked to the development of the Open Referral standard, Greg put forward a list of components that may be involved in delivering a sustainable standards project, including:

The specification – with its various components and subcomponents;

Tools that assess compliance with the spec (e.g. validation tools, and more advanced assessment tools);

Some means of visualizing a given set of data’s level of compliance;

Incentives of some kind (whether positive or negative) for attaining various levels of compliance;

Processes for governing all of the above;

and of course the community through which all of this emerges and is sustained.

To this we might also add elements like documentation and tutorials, support for publishers, catalysing work with tool builders, guidance for users, and so on.

Open government standards are not something to be published once, and then left, but require labour to develop and sustain, and involve many social processes as much as technical ones.

In many ways, although we’ve spent a year of small development iterations working towards this OCDS release, the work now is only just getting started, and there are many technical, community and capacity-building challenges ahead for the Open Contracting Partnership and others in the open contracting movement.

[Summary: thinking aloud – brief notes on learning about the Wikidata project, and how it might help address the organisational identifiers problem]

I’ve spent a fascinating day today at the Wikimania Conference at the Barbican in London, mostly following the programme’s ‘data’ track in order to understand the Wikidata project in more depth. This post shares some thinking aloud, capturing learning, reflections and exploration from the day.

As the Wikidata project manager, Lydia Pintscher, framed it, right now access to knowledge on wikipedia is highly skewed by language. The topics of articles you have access to, the depth of meta-data about them (such as the locations they describe), the detail of those articles, and their likelihood of being up to date, are all greatly affected by the language you speak. Italian or Greek wikipedia may have great coverage of places in Italy or Greece, but go wider and their coverage drops off. In terms of seeking more equal access to knowledge, this is a problem. However, whilst the encyclopedic narrative of a French, Spanish or Catalan page about the Barbican Centre in London will need to be written by someone in command of that language, many of the basic facts that go into an article are language-neutral, or translatable as small units of content rather than sentences and paragraphs. The date the building was built, the name of the architect, the current capacity of the building – all the kinds of things which might appear in infoboxes – are things that could be made available to bootstrap new articles, or that, when changed, could have their changes cascaded across all the different language pages that draw upon them.

That is one of the motivating cases for Wikidata: separating out ‘items’ and their ‘properties’ that might belong in Wikipedia from the pages, making this data re-usable, and using it to build a better encyclopedia.

However, wikidata is also generating much wider interest – not least because it is taking on a number of problems that many people want to see addressed. These include:

Somewhere ‘institutional’ and well governed on the web to put data – and where each data item also gains the advantage of a discussion page.

Providing a data model that can cope with change over time, and with data from heterogeneous sources – all of the properties in wikidata can have qualifiers, such as the dates from which, or until which, a statement is true, source information, and other provenance data.

Wikidata could help address these issues on two levels:

By allowing anyone to add items and properties to the central wikidata instance, and making these available for re-use;

By providing an open source software platform for anyone to use in managing their own corpus of wikified, versioned data*;

A particular use case I’m interested in is whether it might help in addressing the perennial Organisational Identifiers problem faced by data standards such as IATI and Open Contracting, where it turns out that having shared identifiers for government agencies, and for lots of existing, but non-registered, entities like charities and associations that give and receive funds, is really difficult. Others at Wikimania spoke of potential use cases around maintaining national statistics, and archiving the datasets underlying scientific publications.

However, in thinking about the use cases wikidata might have, it’s important to keep in mind its current scope:

It is a store of ‘items’ and then ‘statements’ about them (essentially a graph store). This is different from being a place to store datasets (as you might want to do with the archival of the dataset used in a scientific paper), and it means that, once created, items are the first-class entities of wikidata, able to exist in multiple collections.

It currently inherits Wikipedia’s notability criteria for items. That is, the basic building blocks of wikidata – the items that can be identified and described, such as the Barbican, Cheese or Government of Grenada – can only be included in the main wikidata instance if they have a corresponding wikipedia page in some language wikipedia (or similar: this requirement is a little more complex).

It can be edited by anyone, at any time. That is, systems that rely on the data need to consider what levels of consistency they need. Of course, as wikipedia has shown, editability is often a great strength – and as Rufus Pollock noted in the ‘data roundtable’ session, updating and versioning of open data are currently big missing parts of our data infrastructures.

Can it help the organisational identifiers problem?

I’ve started to carry out some quick tests to see how far wikidata might be a resource to help with the aforementioned organisational identifiers problem.

Using Kasper Brandt‘s fantastically useful linked data rendering of IATI, I queried for the names of a selection of government and non-government organisations occurring in the International Aid Transparency Initiative data. I then used Open Refine to look up a selection of these on the DBPedia endpoint (which it seems now incorporates wikidata info as well). This was very rough-and-ready (just searching for full name matches), but by cross-checking negative results (where there were no matches) by searching wikipedia manually, it’s possible to get a sense of how many organisations might be identifiable within Wikipedia.

So far I’ve only tested the method, and haven’t run a large-scale test – but I found around half the organisations I checked had a Wikipedia entry of some form, and thus would currently be eligible to be Wikidata items right away. For others, Wikipedia pages would need to be created, and whether all the small voluntary organisations that might occur in an IATI or Open Contracting dataset would be notable enough for inclusion is something that would need to be explored more.
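The rough-and-ready matching method might look something like the sketch below in plain Python. The actual test used Open Refine against the DBpedia endpoint; here a local set of labels stands in for that lookup, and the organisation names are invented examples:

```python
# Sketch of full-name matching between organisation names from IATI data
# and a set of known encyclopedia labels (standing in for a DBpedia lookup).

def normalise(name):
    """Lowercase and collapse whitespace, so trivial differences don't block a match."""
    return " ".join(name.lower().split())

def match_rate(org_names, known_labels):
    """Return the fraction of organisation names with an exact full-name match."""
    labels = {normalise(label) for label in known_labels}
    matches = [name for name in org_names if normalise(name) in labels]
    return len(matches) / len(org_names)

orgs = ["Ministry of Finance", "Small Local Charity"]   # invented examples
labels = ["Ministry of Finance"]                        # stand-in for DBpedia labels
rate = match_rate(orgs, labels)
```

As in the real test, negative results would still need cross-checking by hand, since exact full-name matching misses alternative spellings and redirects.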

Exploring the Wikidata pages for some of the organisations I did find threw up some interesting additional possibilities to help with organisation identifiers. A number of pages were linked to identifiers from Library Authority Files, including VIAF identifiers such as this set of examples returned for a search on Malawi Ministry of Finance. Library Authority Files would tend to only include entries when a government agency has a publication of some form in that library, but at a quick glance coverage seems pretty good.

Now, as Chris Taggart would be quick to point out, neither wikipedia pages, nor library authority file identifiers, act as a registry of legal entities. They pick out everyday concepts of an organisation, rather than the legally accountable body which enters into contracts. Yet, as they become increasingly backed by data, these identifiers do provide access to look up lots of contextual information that might help in understanding issues like organisational change over time. For example, the Wikipedia page for the UK’s Department for Education includes details on the departments that preceded it. In wikidata form, a statement like this could even be qualified to say whether that relationship of being a preceding department is one that passes legal obligations from one to the other.
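As a thinking-aloud sketch, a qualified statement of the kind suggested above might be modelled like this. The property and qualifier names are invented for illustration, not real Wikidata properties:

```python
# Sketch of a qualified statement in the style Wikidata supports:
# a property with a value, plus qualifiers and a reference.
statement = {
    "item": "Department for Education",
    "property": "replaces",  # invented property name, not a real Wikidata P-number
    "value": "Department for Children, Schools and Families",
    "qualifiers": {
        "start_date": "2010-05-12",
        "transfers_legal_obligations": True,  # the extra qualification suggested above
    },
    "references": ["https://en.wikipedia.org/wiki/Department_for_Education"],
}
```

The point of the qualifiers is that the bare claim ("replaces") can be enriched without a separate schema change: anyone querying the data can see when the relationship held, and whether legal obligations passed with it.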

I’ve still got to think about this a lot more, but it seems that:

There are many things it might be useful to know about organisations, but which are not going to be captured in official registries anytime soon. Some of these things will need to be the subject of discussion, and open to agreement through dialogue. Wikidata, as a trusted shared space with good community governance practices, might be a good place to keep these things, albeit recognising that in its current phase it has no goal of being a comprehensive repository of records about all organisations in the world (and other spaces such as Open Corporates are already solving the comprehensive coverage problem for particular classes of organisation).

There are some organisations for which, in many countries, no official registry exists (particularly Government Departments and Agencies). Many of these things are notable (Government Departments for example), and so even if no Wikipedia entry yet exists, one could and should. A project to manage and maintain government agency records and identifiers in Wikidata may be worth exploring.

Whether a shift from seeking to solve some aspects of the organisational identifiers problem through finding some authority to provide master lists, to developing a distributed best-efforts community approach is one that would make sense to the open government community is something yet to be explored.

Notes

*I here acknowledge SJ Klein‘s counsel that this (encouraging multiple domain-specific instances of a wikidata platform) is potentially a very bad idea, as the ‘forking’ of wiki-projects has rarely been a successful journey: particularly with respect to the sustainability of forked content. As SJ outlined, even though there may be technical and social challenges to a mega graph store, these could be compared to the apparent challenges of making the first encyclopedias (the idea of a 50,000-page book must have seemed crazy at first), or the social challenges envisioned for Wikipedia at its genesis (‘how could non-experts possibly edit an encyclopedia?’). On this view, it is only by setting the ambition of a comprehensive shared store of the world’s propositional data (with the qualifiers that Wikidata supports to make this possible without a closed-world assumption) that such limits might be overcome. Perhaps with data there is a greater possibility of supporting the forking, and re-merging, of wikidata instances, permitting short-term pragmatic creation of datasets outside the core wikidata project, which can later be brought back in if they are considered, as a set, notable (although this still carries risks that forked projects diverge in their values, governance and structure so far that re-connecting later is made prohibitively difficult).

At the instigation of the UK Cabinet Office, an open policy making process is currently underway to propose new arrangements for data sharing in government. Data sharing arrangements are distinct from open data, as they may involve the limited exchange of personal and private data between government departments, or outside of government, with specific purposes of data use in mind.

The idea that new measures are needed is based on a perception that many opportunities to make better use of data for research, addressing debt and fraud, or tailoring the design of public services are missed because of legal or practical barriers to data being exchanged or joined up between government departments. Some departments, such as HMRC, require explicit legal permissions to share data, whereas in other departments and public bodies a range of existing ‘legal gateways’ and powers support the exchange of data.

I’ve been following the process from afar, but on Monday last week I had the chance to attend one of the open full-day workshops that Involve are facilitating as part of the open policy making process. This brought together representatives of a range of public bodies, including central government departments and local authorities, with members of the Cabinet Office team leading on data sharing reforms, and a small number of civil society organisations and individuals. Monday’s discussions centred on the introduction of new ‘permissive powers’ for data sharing to support tailored public services. For example, powers that would make it easier for local government to request and obtain HMRC data on 16–19 year olds in order to identify which young people in their area were already in employment or training, and so to target their resources on contacting those young people outside employment or training whom they have a statutory obligation to support.

The exact wording of such a power, and the safeguards that need to be in place to ensure it is neither too broad, nor open to abuse, are being developed through the open policy making process. One safeguard I believe is important comes from introducing greater transparency into government data sharing arrangements.

A few months back, working with Reuben Binns, I put together a short note on a possible model for an ‘Open Register of Data Sharing‘. In Monday’s open policy making meeting, the topic of transparency as an important aspect of tailored public service data sharing came up, and provided an opportunity to discuss many of the ideas that the draft proposal had contained. Through the discussions, however, it became clear that there were a number of extra considerations needed to develop the proposal further, in particular:

Noting that public disclosure of planned data sharing was not only beneficial for transparency and scrutiny, but also for efficiency, coordination and consistency of data sharing: by allowing public bodies to pool data sharing arrangements, and to easily replicate approved shares, rather than starting from scratch with every plan and business case.

Recognising the concerns of local authorities and other public bodies about a centralised register, and the need to accommodate shares that might take place between public bodies at a local level only, without involvement of central government.

Recognising the need for both human and machine-readable information on data sharing arrangements, so that groups with a specific interest in particular data (e.g. associations looking out for the rights of homeless people) could track proposed or enacted arrangements without needing substantial technical know-how.

Recognising the importance of documents like Privacy Impact Assessments and Business Cases, but also noting that mandatory publication of these during their drafting could distort the drafting process (with the risk they become more PR documents making the case for a share, than genuine critical assessments), suggesting a mix of proactive and reactive transparency may be needed in practice.

As a result of the discussions with local authorities, government departments and others, I took away a number of ideas about how the proposal could be refined. So this Friday, at the University of Southampton Web and Internet Science group's annual gathering and weekend of projects (known locally as WAISFest), I worked in a stream on personal data and spent a morning updating the proposals. The result is a reframed draft that, rather than focusing on the Register, focuses on a Data Sharing Disclosure Standard: emphasising the key information that needs to be disclosed about each data share, and discussing when disclosure should take place, whilst leaving open a range of options for how this might be technically implemented.

The Gazette provides semantically enriched public notices: readable by humans and machines.

A couple of things of particular note in the draft:

It is useful to identify (a) data controllers; (b) datasets; and (c) the legislation authorising data shares. Right now the Register of Data Controllers seems to provide a good resource for (a), and thanks to recent efforts at building out the digital information infrastructure of the UK, it turns out there are often good URLs that can be used as identifiers for datasets (data.gov.uk lists unpublished datasets from many central government departments) and legislation (through the data-all-the-way-down approach of legislation.gov.uk).

It considers how the Gazette might be used as a publication route for Data Sharing Disclosures. The Gazette is an official paper of record, published since 1665 but recently re-envisioned with a semantic publishing platform. Using such a route to publish notices of data sharing has the advantage of combining the long-term archival of information in a robust source with making enriched, openly licensed data available for re-use. This potentially offers a more robust route to disclosures, in which the data version is a progressive enhancement on top of an information disclosure.

Based on feedback from Javier Ruiz, it highlights the importance of flagging when shared data is going to be processed using algorithms that will determine individuals' eligibility for services, or trigger interventions affecting citizens, and raises the question of whether the algorithms themselves should be disclosed as a matter of course.
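To make the machine-readable side of this concrete, a disclosure under such a standard might be represented as a simple record keyed by the three identifier types discussed above, with each party, dataset and legal basis given a stable URL. This is only a minimal sketch of the idea, not the draft standard's actual schema: all the field names and example URLs below are hypothetical placeholders.

```python
from urllib.parse import urlparse

# Hypothetical disclosure record: identifies (a) data controllers,
# (b) datasets and (c) authorising legislation by URL, and flags
# algorithmic processing affecting individuals. Field names and URLs
# are illustrative placeholders, not part of any real schema.
disclosure = {
    "controllers": [
        "https://example.org/register-of-data-controllers/Z0000000",
    ],
    "datasets": [
        "https://data.gov.uk/dataset/example-dataset",
    ],
    "legislation": [
        "https://www.legislation.gov.uk/ukpga/1998/29/section/35",
    ],
    "purpose": "Illustrative purpose statement for the data share",
    # Flag shares where algorithms will determine eligibility or
    # trigger interventions affecting citizens.
    "algorithmic_processing": True,
}

REQUIRED_URL_FIELDS = ("controllers", "datasets", "legislation")


def validate(record):
    """Check that every identifier in the required fields is an http(s) URL."""
    for field in REQUIRED_URL_FIELDS:
        for identifier in record.get(field, []):
            if urlparse(identifier).scheme not in ("http", "https"):
                return False
    return True
```

Using URLs as identifiers means a record like this could be checked, aggregated and tracked by interested groups without central coordination, whichever technical implementation (central register, local publication, or Gazette notices) is eventually chosen.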

I’ll be sharing a copy of the draft with the Data Sharing open policy process mailing list, and with the Cabinet Office team working on the data sharing brief. They are working to draft an updated paper on policy options by early September, with a view to a possible White Paper – so comments over the next few weeks are particularly valued.

Alongside the new logo, and details of how the new brand was developed, posted on the OK Wiki, appears a set of statements about the motivations, core purpose and tag-line of the organisation. In this post I want to offer an initial critical reading of this process and, more importantly, of the text itself.

Preliminary notes

Before going further, I want to offer a number of background points that frame the spirit in which the critique is offered.

I have nothing but respect for the work of the leaders, staff team, volunteers and wider community of the Open Knowledge Foundation – and have been greatly inspired by the dedication I’ve seen to changing defaults and practices around how we handle data, information and knowledge. There are so many great projects, and so much political progress on openness, which OKFN as a whole can rightly take credit for.

I recognise that there are massive challenges involved in founding, running and scaling up organisations. These challenges are magnified many times in community based and open organisations.

Organisations with a commitment to openness or democracy – whether the co-operative movement, open source communities like Mozilla, communities such as Creative Commons or, indeed, the Open Knowledge Foundation – are generally held to much higher standards, and face much more complex pressures from engaging their communities in what they do, than closed and conventional organisations. And, as the other examples show, the path is not always an easy one. There are inevitably growing pains and challenges.

It is generally better to raise concerns and critiques and talk about them, than leave things unsaid. A critique is about getting into the details. Details matter.

See (1).

(Disclosure: I have previously worked as a voluntary coordinator for the open-development working group of OKF (with support from AidInfo), and have participated in many community activities. I have never carried out paid work for OKF, and have no current formal affiliation.)

The text

Here are the three statements in the OK Branding notes that caught my attention and sparked some reflections:

About our brand and what motivates us:
A revolution in technology is happening and it’s changing everything we do. Never before has so much data been collected and analysed. Never before have so many people had the ability to freely, easily and quickly share information across the globe. Governments and corporations are using this data to create knowledge about our world, and make decisions about our future. But who should control this data and the ability to find insights and make decisions? The many, or the few? This is a choice that we get to make. The future is up for grabs. Do we want to live in a world where access to knowledge is “closed”, and the power and understanding it brings is controlled by the few? Or, do we choose a world where knowledge is “open” and we are all empowered to make informed choices about our future? We believe that knowledge should be open, and that everyone – from citizens to scientists, from enterprises to entrepreneurs, – should have access to the information they need to understand and shape the world around them.

Our core purpose:

A world where knowledge creates power for the many, not the few.

A world where data frees us – to make informed choices about how we live, what we buy and who gets our vote.

A world where information and insights are accessible – and apparent – to everyone.

This is the world we choose.

Our tagline:
See how data can change the world

The critique

My concerns are not about the new logo or name. I understand (all too well) the way that having 'Foundation' in a non-profit's name can mean different things in different contexts (not least people expecting you to have an endowment and funds to distribute), and so the move to Open Knowledge as a name has a good rationale. Rather, I want to raise four concerns:

(1) Process and representativeness

Tag Cloud from Open Knowledge Foundation Survey. See blog post for details.

The message introducing the new brand to OKF-Discuss notes that "The network has been involved in the brand development process especially in the early stages as we explored what open knowledge meant to us all", referring primarily to the Community Survey run at the end of 2013 and written up here and here. However, the later stages of developing the brand appear to have been outsourced to a commercial brand consultancy, consulting with a limited set of staff and stakeholders, and what is now presented appears to be offered as a given, rather than for consultation. The result has been a narrow focus on the 'data' aspects of OKF.

Looking back over the feedback from the 2013 survey, that data-centricity fails to represent the breadth of interests in the OKF community (particularly when looking beyond the quantitative survey questions which had an in-built bias towards data in the original survey design). Qualitative responses to the Survey talk of addressing specific global challenges, holding governments accountable, seeking diversity, and going beyond open data to develop broader critiques around intellectual property regimes. Yet none of this surfaces in the motivation statement, or visibly in the core purpose.

OKF has not yet grappled in full with ideas of internal democracy and governance – yet as a network made up of many working groups, local chapters and more, for a 'core purpose' statement to emerge without wider consultation seems problematic. There is a big missed opportunity here for deeper discussion about ideas and ideals, and for the conceptualisation of a much richer vision of open knowledge. The result is, I think, a core purpose statement that fails to represent the diversity of the community OKF has been able to bring together, and that may threaten its ability to bring those communities together in shared space in future.

Process points aside however (see growing pains point above), there are three more substantive issues to be raised.

(2) Data-centricity

I work on issues of open data every day. I think it's an important area. But it's not the only element of open knowledge that should matter in the broad movement.

Whilst the Open Knowledge Foundation has rarely articulated the kinds of broad political critique of intellectual property regimes that might be found in prior Access to Knowledge movements, developing a concrete motivation and purpose statement gave the OKF a chance to deepen its vision rather than narrow it. The risk Jo Bates has written about, of the 'open' movement being co-opted into dominant narratives of neoliberalism, appears to be a very real one. In the motivation statement above, government and big corporates are cast as the problem, and technology and data in the hands of 'citizens', 'scientists', 'entrepreneurs' and (perhaps contradictorily) 'enterprises' as the solution. Alternative approaches to improving processes of government and governance through opening more spaces for participation are off the table here, as are any specific normative goals for opening knowledge. Data-centricity displaces all of these.

Now – it might be argued that although the motivation statement takes data as a starting point, it is really at its core about the balance of power: asking who should control data, information and knowledge. Yet the analysis appears to entirely conflate the terms 'data', 'information' and 'knowledge' – which clouds this substantially.

(3) Data, Information and Knowledge

The DIKW pyramid offers a useful way of thinking about the relationship between Data, Information, Knowledge (and Wisdom). This has sometimes been described as a hierarchy running from 'know nothing' (data is symbols and signs encoding things about the world, but useless without interpretation), through 'know what' and 'know how', to 'know why'.

Data is not the same as information, nor the same as knowledge. Converting data into information requires the addition of context. Converting information into knowledge requires skill and experience, obtained through practice and dialogue.

Data and information can be treated as artefacts/things. I can e-mail you some data or some information. But knowledge involves a process – sharing it involves more than just sending a file.

OKF has historically worked very much on the transition from data to information, and from information to knowledge, through providing training, tools and capacity building, yet this is not captured at all in the core purpose. Knowledge, not data, has the potential to free, bringing greater autonomy. And it is arguably proprietary control of data and information that is at the basis of the power of the few, not any superior access to knowledge that they possess. And if we recognise that turning data into information, and into knowledge, involves contextualisation and subjectivity, then 'information and insights' cannot be simultaneously 'apparent' to everyone – at least if this is taken to represent some consensus on 'truths', rather than recognising that insights are generated, and contested, through processes of dialogue.

It feels like there is a strong implicit positivism within the current core purpose: which stands to raise particular problems for broadening the diversity of Open Knowledge beyond a few countries and communities.

(4) Power, individualism and collective action

I’ve already touched upon issues of power. Addressing “global challenges like justice, climate changes, cultural matters” (from survey responses) will not come from empowering individuals alone – it will have to involve new forms of co-ordination and collective action. Yet power in the ‘core purpose’ statement appears to be conceptualised primarily in terms of individual “informed choices about how we live, what we buy and who gets our vote”, suggesting change is purely the result of aggregating ‘choice’, and failing to explore how knowledge also needs to be used to challenge the frameworks within which choices are presented to us.

The ideas that ‘everyone’ can be empowered, and that when “knowledge is ‘open’ […] we are all empowered to make informed choices about our future”, fail to take account of the wider constraints to action and choice that many around the world face, and of the fact that some of the global struggles that motivate many to pursue greater openness are not always win-win situations. Those other constraints and wider contexts might not be directly within the power of an open knowledge movement to address, or the core preserve of open knowledge, but they need to be recognised and taken into account in the theories of change developed.

In summary

I’ve tried to deal with the Motivation, Core Purpose and Tag-line statements as carefully as limited free time allows – but inevitably there is much more to dig into, and there will be other ways of reading these statements. More optimistic readings are possible – and I certainly hope they might turn out to be more realistic – but in the interests of dialogue I hope that a critical reading is a more useful contribution to the debate, and I would re-iterate my preliminary notes 1–5 above.

To recap the critique:

Developing a brand and statement of core purpose is an opportunity for dialogue and discussion, yet right now this opportunity appears to have been mostly missed;

The motivation, core purpose and tagline are more tech-centric and data-centric than the OKF community, risking sidelining other aspects of the open knowledge community;

There needs to be a recognition of the distinction between data, information and knowledge, in order to develop a coherent theory of change and purpose;

There appears to be an implicit libertarian individualism in current theories of change, and it is not clear that this is compatible with working to address the shared global challenges that have brought many people into the open knowledge community.

Corruption involves the abuse of entrusted power for personal gain (Transparency International, 2009). Grönlund has identified a wide range of actions that can be taken with ICTs to try to combat corruption, from service automation and the creation of online and mobile-phone-based corruption-reporting channels, to the online publication of government transparency information (Grönlund, 2010). In the diagram below we offer eight broad categories of ICT interventions with a potential role in fighting corruption.

These different ICT interventions can be divided between transactional reforms and transparency reforms. Transactional reforms seek to reduce the space for corrupt activity by controlling and automating processes inside government, or seek to increase the detection of corruption by increasing the flow of information into existing government oversight and accountability mechanisms. Often these developments are framed as part of e-government. Transparency reforms, by contrast, focus on increasing external rather than internal control over government actors by making the actions of the state and its agents more visible to citizens, civil society and the private sector. In the diagram, categories of ICT intervention and related examples are positioned along a horizontal axis to indicate, in general, whether these initiatives have emerged as ‘citizen led’ or ‘government led’ projects, and along the vertical axis to indicate whether the focus of these activities is primarily on transactional reforms, or transparency. In practice, where any actual ICT intervention falls is as much a matter of the details of implementation as of the technology itself, although we find these archetypes useful for highlighting the different emphases and origins of different ICT-based approaches.

Many ICT innovations for transparency and accountability[1] have emerged from within civil society and the private sector, and only later been adopted by governments. In this paper our focus is specifically upon government adoption of innovations: cases where the government takes the lead role in implementing some technology with anti-corruption potential, albeit a technology that may have originally been developed elsewhere, and where similar instances of such technologies may still be deployed by groups outside government. For example, civil society groups in a number of jurisdictions have deployed the Alaveteli open source software[2], which brokers the filing of Right to Information requests online, logging and making public both requests to, and replies from, government. Some government agencies have responded by building their own direct portals for filing requests, which co-exist with the civil-society-run Alaveteli implementations. The question of concern for this paper is why governments have chosen to adopt the innovation and provide their own RTI portals.

Although there are different theories of change underlying ICT enabled transactional and transparency reforms, the actual technologies involved can be highly inter-related. For example, digitising information about a public service as part of an e-government management process means that there is data about its performance that can be released through a data portal and subjected to public pressure and scrutiny. Without the back-office systems, no digital records are available to open (Thurston, 2012).

The connection between transactional e-government and anti-corruption has only relatively recently been explored. As Bhatnagar notes, most e-government reforms did not begin as anti-corruption measures. Instead, they were adopted for their promise to modernise government and make it more efficient (Bhatnagar, 2003). Bhatnagar explains that “…reduction of corruption opportunities has often been an incidental benefit, rather than an explicit objective of e-government”. A focus on the connection between e-government and transparency is more recent still. Kim et al. (2009) note that “E-government’s potential to increase transparency and combat corruption in government administration is gaining popularity in communities of e-government practitioners and researchers…”, arguably as a result of increased Internet diffusion meaning that, for the first time, data and information from within government can, in theory, be made directly accessible to citizens through computers and mobile phones, without passing through intermediaries.

In any use of ICTs for anti-corruption, the technology itself is only one part of the picture. Legal frameworks, organisational processes, leadership and campaign strategies may all be necessary complements of digital tools in order to secure effective change. ICTs for accountability and anti-corruption have developed in a range of different sectors and in response to many different global trends. In the following paragraphs we survey in more depth the emergence and evolution of three kinds of ICTs with anti-corruption potential, looking at both the technologies and the contexts they are embedded within.

2.1 TRANSPARENCY PORTALS

A transparency portal is a website where government agencies routinely publish defined sets of information. They are often concerned with financial information and might include details of laws and regulations alongside more dynamic information such as government debt, departmental budget allocations and government spending (Solana, 2004). They tend to have a specific focus, and are often backed by a legal mandate, or regulatory requirement, that information is published to them on an ongoing basis. National transparency portals have existed across Latin America since the early 2000s, developed by finance ministries following over 15 years’ investment in financial management capacity building in the region. Procurement portals have also become common, linked to efforts to make public procurement more efficient and to comply with regulations and good practice on public tenders.

More recently, a number of governments have mandated the creation of local government transparency portals, or the creation of dedicated transparency pages on local government websites. For example, in the United Kingdom, the Prime Minister requested that local authorities publish all public spending over £500 on their websites, whilst in the Philippines the Department of Interior and Local Government (DILG) has pushed the implementation of a Full Disclosure Policy requiring Local Government Units to post a summary of revenues collected, funds received, appropriations and disbursement of funds, and procurement-related documents on their websites. The Government of the Philippines has also created an online portal to support local government units in publishing the documents demanded by the policy[3].

In focus: Peru Financial Transparency Portal

Country: Peru

Responsible: Government of Peru – Ministry of Economic and Financial Affairs

Brief description: The Peruvian Government implemented a comprehensive transparency strategy in early 2000. That strategy comprised several initiatives (law on access to financial information, promotion of citizen involvement in transparency processes, among others). The Financial Transparency Portal was launched as one of the elements of that strategy. In that regard, Solanas (2003) suggests that the success of the portal is related to the existence of a comprehensive transparency strategy, in which the portal serves as a central element. The Portal (http://www.mef.gob.pe/) started to operate in 2001 and, at that time, it was praised as the most advanced in the region. Several substantial upgrades to the portal have taken place since the launch.

Current situation:

The portal has changed considerably since its early days. In the beginning, it provided access to documents on economic and financial information. After more than a decade, it now publishes datasets on several economic and financial topics, which are provided by each of the agencies in charge of producing or collecting the information. Those datasets are divided into four main modules: budget performance monitoring, implementation of investment projects, inquiries on transfers to national, local and regional governments, and domestic and external debt. The portal also includes links to request information under the Peruvian FOI law, as well as to track the status of a request.

In general, financial transparency portals have focussed on making government records available: often hosting image-file versions of printed, signed and scanned documents, which means that anyone wanting to analyse the information from across multiple reports must re-type it into spreadsheets or other software. Although a number of aid and budget transparency portals are linked directly to financial management systems, it is only recently that a small number of portals have started to add features giving direct access to datasets on budget and spending.

Some of the most data-centric transparency portals can be found in the international aid field, where Aid Transparency Portals have been built on top of the Aid Management Platforms used by aid-recipient governments to track their donor-funded projects and budgets. Built with funding and support from international donors, aid transparency portals such as those in Timor-Leste and Nepal offer search features across a database of projects. In Nepal, donors have funded the geocoding of project information, allowing a visual map to be displayed of where funding flows are going.

Central to the hypothesis underlying the role of transparency portals in anti-corruption is the idea that citizens and civil society will demand and access information from the portals, and will use it to hold authorities to account (Solana, 2004). In many contexts, whilst transparency portals have become well established, direct demand from citizens and civil society for the information they contain remains, as Alves and Heller put it in relation to Brazil’s fiscal transparency, “frustratingly low” (in Khagram, Fung, & Renzio, 2013). However, transparency portals may also be used by the media and other intermediaries, providing an alternative, more indirect theory of change in which coverage of episodes of corruption creates electoral pressures against corruption (in functioning democracies at least). Power and Taylor’s work on democracy and corruption in Brazil, though, suggests that whilst such mechanisms can have impacts, they are often confounded in practice by non-corruption-related factors that influence voters’ preferences, and by a wide range of contingencies, from electoral cycles to political party structures and electoral math (Power & Taylor, 2011).

2.2 OPEN DATA PORTALS

Where transparency portals focus on the publication of specific kinds of information (financial; aid; government projects etc.), open data portals act as a hub for bringing together diverse datasets published by different government departments.

Open data involves the publication of structured, machine-readable data files online, with explicit permission granted for anyone to re-use the data in any way. This can be contrasted with cases where transparency portals publish scanned documents that cannot be loaded into data analysis software, or publish under copyright restrictions that deny citizens or businesses the right to re-use the data. Open data has risen to prominence over the last five years, spurred on by US President Obama’s 2009 Memorandum on Transparency and Open Government (Obama, 2010), which led to the creation of the data.gov portal, bringing together US government datasets. This built on principles of Open Government Data elaborated in 2007 by a group of activists meeting in Sebastopol, California, calling for government to provide data online that was complete, primary (i.e. not edited or interpreted by government before publication), timely, machine-readable, standardised and openly licensed (Malmud & O’Reilly, 2007).

In focus: Kenya Open Data Initiative (KODI), opendata.go.ke

Country: Kenya

Responsible: Government of Kenya

Brief description:

Around 2008, projects from Ushahidi to M-PESA put Kenya on the map of ICT innovation. The Kenyan government – in particular, then-Permanent Secretary Ndemo of the Ministry of Information and Communications – eager to promote and encourage that market, began to explore the idea of publishing government datasets for this community of ICT experts to use. In that quest, he received support from actors outside of government, such as the World Bank, Google and Ushahidi. Adding to that context, in 2010 a new constitution recognising citizens’ right of access to information was enacted in Kenya (although a FOI law remains a pending task for the Kenyan government). On July 8 2011, President Mwai Kibaki launched the Kenya Open Data Initiative, making government datasets available to the public through a web portal: opendata.go.ke

Current situation:

Several activists and analysts have started to write about the lack of updates to the Kenya Open Data Initiative. The portal has not been updated in several months, and its traffic has slowed significantly.

Open data portals have caught on as a policy intervention, with hundreds now online across the world, including an increasing number in developing countries. Brazil, India and Kenya all have national open government data portals, and Edo State in Nigeria recently launched one of the first sub-national open data portals on the continent, expressing a hope that it would “become a platform for improving transparency, catalyzing innovation, and enabling social and economic development”[4]. However, a number of open data portals have already turned out to be short-lived: the Thai government’s open data portal, launched in 2011[5], was already defunct and offline at the time of writing.

The data hosted on open data portals varies widely: ranging from information on the locations of public services, and government service performance statistics, to public transport timetables, government budgets, and environmental monitoring data gathered by government research institutions. Not all of this data is useful for anti-corruption work, although the availability of information as structured data makes it far easier for third parties to analyse a wide range of government datasets not traditionally associated with anti-corruption work, looking for patterns and issues that might point to causes for concern. In general, theories of change around open data for anti-corruption assume that skilled intermediaries will access, interpret and work with the datasets published, as portals are generally designed with a technical audience in mind.

Data portals can act as both a catalyst of data publication, providing a focal point that encourages departments to publish data that was not otherwise available, and as an entry-point helping actors outside government to locate datasets that are available. At their best they provide a space for engagement between government and citizens, although few currently incorporate strong community features (De Cindio, 2012).

Recently, transparency and open data efforts have also started to focus on the importance of cross-cutting data standards that can be used to link up data published in different data portals, and to solicit the publication of sectoral data. Again the aid sector has provided a lead here, with the development of the International Aid Transparency Initiative (IATI) data standard, and a data portal collating all the information on aid projects published by donors to this standard[6]. New efforts are seeking to build on experiences from IATI with data standards for contracts information in the Open Contracting initiative, which targets not only information from governments, but also, potentially, disclosure of contract information in the private sector[7].

2.3 CITIZEN REPORTING CHANNELS

Transparency and open data portals primarily focus on the flow of information from government to citizen. Many efforts to challenge corruption require a flow of information the other way: citizens reporting instances of corruption, or providing the information that agents of government need to identify and address corrupt behaviour. When reports are filed on paper, or to local officials, it can be hard for central governments to ensure reports are adequately addressed. By contrast, with platforms like the E-Grievance Portal in the Indian state of Orissa[8], reports can be tracked once submitted, meaning that where there is the will to challenge corruption, citizen reports can be better handled.

Many online channels for citizen reporting have in fact grown up outside of government. Platforms like FixMyStreet in the UK, and the many similar platforms across the world, have been launched by civil society groups frustrated at having to deal with government through seemingly antiquated paper processes. FixMyStreet allows citizens to point out on a map where civil infrastructure requires fixing and forward the citizen reports to the relevant level of government. Government agents are invited to report back to the site when the issue is fixed, giving a trackable and transparent record of government responsiveness. In some areas, governments have responded to these platforms by building their own alternative citizen reporting channels, though often without the transparency of the civil society platforms (reports simply go to the public authority; no open tracking is provided), or, in other cases, by working to integrate the civil society provided solution with their own systems.

In focus: I Paid a Bribe

One such platform is "I Paid a Bribe", an Indian website that collates stories and prices of bribes from citizens across the country and uses them to present a snapshot of trends in bribery.

The initiative was first launched on August 15, 2010 (India's Independence Day), and the website became fully functional a month later. I Paid a Bribe aims to understand the role of bribery in public service delivery by transforming the data collected from reports into knowledge that informs government about gaps in public transactions and strengthens citizen engagement to improve the quality of service delivery. For example, in Bangalore, Bhaskar Rao, the Transport Commissioner for the state of Karnataka, used the data collected on I Paid a Bribe to push through reforms in the motor vehicle department. As a result, and in order to avoid bribes, licenses are now applied for online (Strom, 2012).

Current situation: Trying to reach a greater audience, ipaidabribe.com launched, in mid-2013, "Maine Rishwat Di", the Hindi language version of the website: http://hindi.ipaidabribe.com/ At the same time, they launched mobile apps and SMS services in order to make bribe reporting easier and more accessible to citizens all across India. "I Paid a Bribe" has also been replicated with partners in a number of other countries, including Pakistan, Kenya, Morocco and Greece.

[1] It is important to clarify that transparency does not necessarily lead to accountability. Transparency, understood as the disclosure of information that sheds light on institutional behaviour, can also be understood as answerability. However, accountability (or "hard accountability" according to Fox, 2007) implies not only answerability but also the possibility of sanctions (Fox, 2007).

Back in January, in response to a blog post by Doug Hadden, I wrote down a few reflections on the incentives for technology for transparency in developing countries. That led to a conversation with Silvana Fumega and the U4 Anti-Corruption Resource Centre about a possible briefing note on the topic, which quickly turned into a full paper – designed to scope out issues for donors and governments to consider in looking at supporting ICT-based anti-corruption efforts, particularly in developing countries. Together with Silvana, I’ve been working on a draft over the last few months – and we’ve just placed a copy online for comments.

Information and Communication Technology (ICT) driven initiatives are playing an increasingly central role in discourses of transparency, accountability and anti-corruption. The Internet and mobile phones are widely hailed as powerful tools in the fight against corruption. From mobile phone based corruption crowd-sourcing platforms, to open government data portals providing citizens with access to state datasets, technology-centric interventions are increasingly attracting both political attention and donor funding flows. The Open Government Partnership (OGP) declaration, launched in 2011, commits the 60 OGP member states to "…seizing this moment to strengthen our commitments to promote transparency, fight corruption, empower citizens, and harness the power of new technologies to make government more effective and accountable" (Open Government Partnership, 2011). In an analysis of the first action plans published by OGP members (Global Integrity, 2012), e-government and open data related commitments were by far the most common, illustrating the prominence given to ICTs in creating more open and accountable government.

However, the ‘sales pitch’ for governments to adopt ICTs is far broader than their anti-corruption applications, and the fact that a government adopts some particular technology innovation does not necessarily mean that its potential corruption-reducing role will be realised. Criticisms have already been levelled at open data portals that give an initial appearance of government transparency, whilst either omitting any politically sensitive content, or remaining, in practice, inaccessible to the vast majority of the population; and there are numerous examples to be found of crowd-sourcing platforms designed to source citizen feedback on public services, or corruption reports, languishing with just a handful of reports, or no submissions made for months on end (Bailard et al., 2012; Brown, 2013). Yet, as Strand argues, "while ICT is not a magic bullet when it comes to ensuring greater transparency and less corruption…it has a significant role to play as a tool in a number of important areas" (Strand, 2010). The challenge is neither to suppose that ICTs will inevitably drive positive change, nor to ignore them as merely high-tech distractions. Rather, there is a need to look in detail at the motivations for ICT adoption, and the context in which ICTs are being deployed, seeking to understand the ways in which strategic and sustainable investments can be made that promote the integrity of public services, and the capacity of officials, citizens and other stakeholders to secure effective and accountable governments.

In this issue paper we consider the reasons that may lead governments to adopt anti-corruption related ICT innovations, and we look at the evidence on how the uptake and use of these ICTs may affect their impacts. In doing so, we draw upon literature from a range of fields, including open government, transparency and anti-corruption, e-government and technology for transparency, and we also draw on our own observations of the open government field over the last five years. To ground our argument, we offer a range of illustrative case studies that show some of the different kinds of ICT interventions that governments are engaging with.

I’m back in the US after a week in London, primarily for Rachel’s graduation as a Music Therapist, but which rather fortunately coincided with the Open Government Partnership Summit, and a chance to catch up with many colleagues and friends. I’m yet to digest all the sessions and notes I made well enough to complete a more analytical blog post on the OGP Summit, but as it is many OGP-related projects that have kept me from blogging here over the last month I thought I should at least link to a few of the outputs launched last week that have contributed to my blogger’s block:

The Barometer has already picked up some good press coverage, and I hope will contribute usefully to the debate over different approaches to open government data around the world, and how to measure progress on open data in relevant and progressive ways.

Development Initiatives launched the Joined Up Data report, a great scoping study by Neil Ashton of how different transparency initiatives might work together on common building blocks of data standards. This is something I worked on a bit previously when working with Development Initiatives, and it also has a lot of relevance to the Joined Up Philanthropy project.

Over the last few weeks I’ve definitely discovered the meaning of the term ‘action forcing moment’ – as many projects have worked up to the OGP summit as a deadline. Of course, now attention switches to the follow up – but hopefully at a pace that allows a little more time for sharing work-in-progress and reflective blogging.

[Summary: Notes from the UK Open Government Partnership (OGP) Open Policymaking process]

In today’s UK OGP working lunch the focus was on “Participation, policy making and service delivery”. Staff working on Open Policy Making and Community Organising at Cabinet Office joined the OGP team and participants from civil society to explore possible areas of focus for the revised UK National Action Plan. This blog post contains my personal reflections on core areas that could form that focus.

When civil society met to discuss a shared vision for Open Government back in October, we said that “Open government is a two-way dialogue. It builds on transparency and responsiveness. With increased access to government information and open data, civil society organisations, media, informal networks and individual citizens all have new and expanded roles to play in holding government to account and being part of policy dialogues. This requires resources and capacity building, both in the UK and internationally”. Today’s meeting broke that down into four areas with potential for shared action:

(1) Participation and data: supporting use and feedback loops
Open data only supports transparency, accountability, innovation and growth when it is used. A national action plan needs to include commitments that take account of this. For example, building on:

Open data capacity building with civil society. Groups like LVSC are already exploring ways to build skills and capacity in the voluntary and community sector both to use government data, and to generate and share their own data. Government and civil society should work together to learn about building the skills for data-use, and to share good practices and effective models for capacity building.

The five stars of open data engagement. There are many small steps government can take when publishing data to increase the chance that it will be used, and to help close the feedback loops – making it more likely that data will enable effective participation. The five stars of open data engagement were developed through a civil society and government collaboration earlier this year, and provide a template for taking those steps.

(2) Participation beyond open data
Many of the things we talk about when we discuss open government and participation have absolutely nothing to do with open data. We must not lose sight of these aspects of open government, and should neglect neither learning from past experience in the UK, nor the interesting experiments and innovative projects currently going on. A revised action plan could make more of:

Open Policy Making: sharing the experiments and learning currently going on with opening up the policy making process, and using OGP as a forum for civil society and government to act as critical friends to one another in drawing out good practice for future open policy making.

Digital engagement: building on social media guidance to civil servants, and work going on in government in digital engagement to make concrete commitments to ways citizens will be able to engage with government in future. Small wins, like making every government consultation easy to respond to online, and bigger challenges, like improving the flow of information from local areas up to central government through digital tools, could all be on the agenda.

Culture and skills: participation is not just about process – it also involves government officials gaining new skills, and involves culture change in government. We should explore Action Plan commitments to build civil service participation skills.

Taking it local: recognising that many issues are dealt with at the local level, and participation needs local government to be open too. Discussions today highlighted the need not to forget councillors and community organisers when thinking about open government and participation.

(3) Civil society and citizen participation in the UK’s OGP process
The open policy making process that is taking place for the UK’s National Action Plan is a really positive step in meeting the OGP participation requirement that state parties “commit to developing their country action plans through a multi-stakeholder process, with the active engagement of citizens and civil society”. However, there are opportunities for government and civil society to commit to going further in outreach to community groups, citizens and other key stakeholders. This also presents great opportunities to experiment with new approaches to engagement and outreach, and to feed learning back into wider government commitments on digital engagement and open policy making.

(4) Celebrating participation practice at the 2013 plenary
When it comes to international knowledge sharing, it appears the central government focus is firmly on sharing the UK’s experience pioneering open data. However, at the 2013 summit that is due to take place in London participation should also be firmly on the agenda, to allow the UK and other countries equal space to discuss, share learning and explore both participation + open data, and participation beyond open data.