December 19 2013

Art of the State – On the art of putting state works to use

In Germany, works produced by the state are protected by copyright, even when they were financed with taxpayer money. It does not have to be that way, as the example of the United States shows.

Copyright does not only protect the labor of private authors; works produced by the state are also covered: from the Chancellor's speech at a banquet to an expert report by the Bundesnetzagentur. For all of these copyrighted works, any use, distribution or adaptation requires permission. That permission can either follow from copyright law itself, for example in the form of the right to quote, or it results from the grant of usage rights by the author or rights holder.

Ask about the purpose of copyright in general and the answers usually revolve around protecting the author, securing their financial existence and creating incentives for new works. None of these reasons explains why state works are protected by copyright. Nobody believes that a law would be drafted better, more justly or more carefully if it were protected by copyright for long enough. And indeed, Section 5 of the German Copyright Act on "official works" contains a corresponding provision that exempts laws, ordinances, decrees and court decisions from copyright protection. The provision is interpreted very narrowly, however, so that all other state works fall under copyright.

A look across the Atlantic shows that things can be done differently. All works created by US federal employees are free of copyright and can be reused at will. That also applies to photographs by NASA astronauts, and it made it possible for "Earthrise" to become an image that is not only among the most reproduced works in human history but has also decisively shaped how we see our planet. Without that release, the picture would probably never have spread so widely.

The state uses copyright protection for purposes contrary to its rationale

How does the state use copyright protection? It sometimes does so for purposes that run counter to the system and in no way correspond to the intention of copyright law. A particularly prominent case in 2013 was the cease-and-desist letter and, later, the lawsuit filed against the WAZ media group (now the Funke media group) over the publication of a series of "briefings of parliament". The Bundeswehr produces these reports on its foreign deployments but classifies them as restricted.

The Ministry of Defence based its demand that the documents be removed from the internet solely on the copyright protection of these reports, as the blog Netzpolitik.org reported. Pressing copyright into service for purely political ends in this way is highly questionable in a free society and has therefore aroused the suspicion of journalists' associations. The Deutsche Journalisten-Verband criticizes the lawsuit as an attempt to intimidate journalists who take their duty to inform the public seriously.

Open government initiatives remain mere lip service if, in this context, there is no agreement on the level of protection for works that the state has commissioned or that are held by public institutions. So far, skeptics have always pointed to the revenue the state would forgo by taking a more liberal approach to releasing works. Over the past 18 months, Wikimedia Deutschland has asked parliamentarians of various parties to use minor interpellations to request figures on this from the respective state governments and the federal government, in particular on revenue from granting usage rights to third parties.

For the federal government and the states, the result was sobering. Measured against the overall budget, the effort involved in producing the works in question and the cost of negotiating individual licensing agreements, the revenue is vanishingly small. Even if one generously adds in all bookings and agreements with federally owned companies, this revenue amounts to a mere fraction of the 2013 federal budget: just 5.7 million euros.

If the next federal government is serious about open data, it will have to discover its love for the public domain status of official works. A small intervention, extending Section 5 of the Copyright Act to more state works than it covers today, would be cost-neutral, would relieve the administration and would, with a stroke of the pen, transfer a large number of works into general reusability.

Photo: Nina Gerlach, CC BY-SA

Mathias Schindler has been a Wikipedia author since 2003. In 2004, together with other Wikipedians in Berlin, he founded the association Wikimedia Deutschland and has since served several times on its volunteer board as an assessor. He is now a member of the Wikimedia Foundation's Communications Committee, where he helps with press work. In his free time he blogs, among other places, at Netzpolitik.org. Since 2009 he has worked as a project manager for Wikimedia Deutschland.

This text also appeared in the magazine "Das Netz – Jahresrückblick Netzpolitik 2013-2014". You can order the issue for 14.90 EUR from iRights.Media. "Das Netz – Jahresrückblick Netzpolitik 2013-2014" is also available as an e-book, for example from Amazon*, the Apple iBook Store* or Beam.
* Affiliate link

December 12 2013

Creative Commons set to reach international organizations

The World Intellectual Property Organization (WIPO), together with other organizations, has presented a version of the Creative Commons licenses aimed at intergovernmental bodies. The licenses themselves remain the same, but in the event of a dispute, out-of-court mediation is provided for.

While the licenses offered by Creative Commons have just gone through a version upgrade, the "Intergovernmental Organisation" versions now provide a special adaptation of the licenses meant to accommodate the particular requirements of intergovernmental bodies. Creative Commons developed them together with the OECD, WIPO, the United Nations and other international organizations.

In their structure and license modules, such as "Attribution", "No Derivatives" and so on, the adapted versions correspond to the familiar CC licenses and are compatible with them. The difference lies in just one detail: the so-called porting of the licenses. Porting adapts the license texts to national particularities so that they dock onto the respective legal system more smoothly.

With the "Intergovernmental" licenses, such a port now also exists for intergovernmental bodies. Because international organizations would have to dock onto several national legal systems at once, which is not always possible for them, the "Intergovernmental" license versions provide a special mechanism for resolving conflicts.

Mediation and arbitration in case of disputes

With the "Intergovernmental" versions, organizations that publish their content under Creative Commons licenses can stipulate a mandatory mediation procedure that comes into play when someone violates the license. The aim is to avoid court proceedings. If mediation fails, arbitration follows. Models for such proceedings already exist in the intergovernmental sphere and can be built upon.

So far, however, the "Intergovernmental" versions exist only for CC version 3.0, which has just been superseded by the new version 4.0. The reason is presumably that drafting them took two years, as the WIPO announcement indicates. But the "Intergovernmental" versions have already adopted one new feature from the 4.0 licenses: the so-called cure provision, under which a license violation does not lead to a permanent loss of rights; instead, the license is reinstated if the problem is remedied within 30 days.

Intergovernmental organizations that use Creative Commons licenses as the standard for their publications include, for example, the World Bank. The United Nations University and the European nuclear research organization CERN also use the licenses in certain areas.

December 10 2013

The public front of the free software campaign: part I

At a recent meeting of the MIT Open Source Planning Tools Group, I had the pleasure of hosting Zak Rogoff — campaigns manager at the Free Software Foundation — for an open-ended discussion on the potential for free and open tools for urban planners, community development organizations, and citizen activists. The conversation ranged over broad terrain in an “exploratory mode,” perhaps uncovering more questions than answers, but we did succeed in identifying some of the more common software (and other) tools needed by planners, designers, developers, and advocates, and shared some thoughts on the current state of FOSS options and their relative levels of adoption.

Included were the usual suspects — LibreOffice for documents, spreadsheets, and presentations; QGIS and OpenStreetMap for mapping; and (my favorite) R for statistical analysis — but we began to explore other areas as well, trying to get a sense of what more advanced tools (and data) planners use for, say, regional economic forecasts, climate change modeling, or real-time transportation management. (Since the event took place in the Department of Urban Studies & Planning at MIT, we mostly centered on planning-related tasks, but we also touched on some tangential non-planning needs of public agencies, and the potential for FOSS solutions there: assessor’s databases, 911 systems, library catalogs, educational software, health care exchanges, and so on.)
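
To make the mapping side of that toolkit concrete, here is a minimal sketch of the kind of free-data workflow those tools rely on: querying OpenStreetMap through the public Overpass API for schools in a bounding box a planner might then style in QGIS. The endpoint, query, and coordinates (roughly Cambridge, MA) are illustrative assumptions, not something discussed at the meeting.

```python
# A minimal sketch: fetch schools from OpenStreetMap via the Overpass API.
# The query and bounding box are assumptions for illustration; adjust them
# for the area actually being planned.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

query = """
[out:json][timeout:25];
node["amenity"="school"](42.35,-71.13,42.40,-71.06);
out body;
"""
url = "https://overpass-api.de/api/interpreter?" + urlencode({"data": query})

with urlopen(url) as response:
    elements = json.load(response)["elements"]

print(f"Found {len(elements)} schools in the bounding box")
for school in elements[:5]:
    print(school.get("tags", {}).get("name", "(unnamed)"), school["lat"], school["lon"])
```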

Importantly, we agreed from the start that to deliver on the promise of free software, planners must also secure free and open data — and free, fair, and open standards: without access to data — the raw material of the act of planning — our tools become useless, full of empty promise.

Emerging from the discussion, moreover, was a realization of what seemed to be a natural fit between the philosophy of the free and open source software movement and the overall goals of government and nonprofit planning groups, most notably along the following lines:

  • The ideal (and requirement) of thrift: Despite what you might hear on the street, most government agencies do not exist to waste taxpayer money; in fact, even well-funded agencies generally do not have enough funds to meet all the demands we place on them, and budgets are typically stretched pretty thin. On the “community” side, we see similar budgetary constraints for planners and advocates working in NGOs and community-based organizations, where every dollar that goes into purchasing (or upgrading) proprietary software, subscribing to private datasets, and renewing licenses means one less dollar to spend on program activities on the ground. Added to this, ever since the Progressive Era, governments have been required by law to seek the lowest-cost option when spending the public’s money, and we have created an entire bureaucracy of regulations, procurement procedures, and oversight authorities to enforce these requirements. (Yes, yes, I know: the same people who complain about government waste often want to eliminate “red tape” like this…)  When FOSS options meet the specifications of government contracts, it’s hard to see why they wouldn’t be in fact required under these procurement standards; of course, they often fail to meet the one part of the procurement specification that names a particular program; in essence, such practices “rig” bids in favor of proprietary software.  (One future avenue worth exploring might be to argue for performance-based bid specifications in government procurement.)
  • The concomitant goal of empowerment: Beyond simply saving money, planning and development organizations often want to actually do something; they exist to protect what we have (breathable air and clean drinking water, historic and cultural resources, property values), fix what is broken (vacant lots and buildings, outmoded and failing infrastructure, unsafe neighborhoods), and develop what we need (affordable housing, healthy food networks, good jobs, effective public services). Importantly, as part of the process, planners generally seek to empower the communities they are working in (at least since the 1970s); to extend Marshall McLuhan by paraphrase, “the process is the purpose,” and there is little point in working “in the public interest” while simultaneously robbing that same public of its voice, its community power, and its rights of democratic participation. So, where’s the tie-in to FOSS? The key here is to avoid the problem Marx diagnosed as “alienation of the workers from the means of production.” (Recent world events notwithstanding, Marx was still sometimes correct, and he really put his finger on it with this one.) When software code is provided in a free and open format, users and coders can become partners in the development cycle; better still, “open-source” can also become “open-ended,” as different groups are empowered to modify and enhance the programs they use. Without permanent, reliable, affordable — and, some would argue, customizable — access to tools and data, planners and citizens (the “workers,” in this case) become alienated from the means of producing plans for their future.
  • The value of transparency and openness: A third area of philosophical alignment between free software and public planners relates to the importance both groups place on transparency. To some extent — at least in the context of government planners — this aspect seems to combine elements of the previous two: just as government agencies are required under procurement laws to be cost-conscious, they are required under public records and open meeting laws to be transparent. Similarly, in the same way that community empowerment requires access to the tools of planning, it also requires access to the information of planning: in order for democratic participation to be meaningful, the public must have access to information about what decisions are being made, when, by whom, and why (based on what rationale?). Transparency — not just the privilege of “being informed,” but rather the right to examine and audit all the files — is the only way to ensure this access. In short, even if it is not free, we expect our government to be open source.
  • The virtuous efficiency of cooperation and sharing: With a few misguided exceptions (for example, when engaging in “tragedy of the commons” battles over shared resources, or manipulated into “race-to-the-bottom” regional bidding wars to attract sports teams or industrial development), governments and community-based organizations generally do not exist in the same competitive environment as private companies. If one agency or neighborhood develops a new tool or has a smart idea to solve a persistent problem, there is no harm — and much benefit — to sharing it with other places. In this way, the natural inclination of public and non-profit agencies bears a striking resemblance to the share-and-share-alike ethos of open source software developers. (The crucial difference being that, often, government and community-based agencies are too busy actually working “in the trenches” to develop networks for shared learning and knowledge transfer, but the interest is certainly there.)

Added to all this, recent government software challenges hint at the potential benefit of a FOSS development model. For example, given the botched rollout of the online health care insurance exchanges (which some have blamed on proprietary software models, and/or the difficulty of building the new public system on top of existing locked private code), groups like FSF have been presented with a “teachable moment” about the virtues of free and open solutions. Of course, given the current track record of adoption (spotty at best), the recognition of these lines of natural alignment raises the question, “Given all this potential and all these shared values, why haven’t more public and non-profit groups embraced free and open software to advance their work?” Our conversation began to address this question in a frank and honest way, enumerating deficiencies in the existing tools and gaps in the adoption pipeline, but quickly pivoted to a more positive framing, suggesting new — and, potentially, quite productive — fronts for the campaign for free and open source software, which I will present in part two. Stay tuned.

November 22 2013

Standards institute DIN sues internet activist Carl Malamud

US activist Carl Malamud sees himself as a citizen archivist: on a large scale, he obtains official works and other public documents in order to put them online for everyone. In Germany, the Deutsches Institut für Normung (German Institute for Standardization) is now suing him, claiming copyright infringement of standards documents.

On December 30, 2012, Carl Malamud put 10,062 documents online: regulations, norms and technical standards from 24 countries that concern public safety. The founder of the platform Public.Resource.Org has a mission: to make information from governments and public bodies accessible. For the collection "Public Safety Codes of the World" he requested documents from authorities and standards organizations and combed through websites such as that of the World Trade Organization (WTO).

Because access to such documents often carries fees, he spent 180,410 dollars and 73 cents on documents such as the "Eurocodes", Europe-wide standards that are binding for construction projects. Now they, too, are online. "It is your law, have fun with it," he declared.

The Deutsches Institut für Normung (DIN) is now suing over the publication of four documents from the trove Malamud collected. By publishing the documents on Public.Resource.org, it argues, he infringed the institute's copyrights. (Disclosure: in the dispute with DIN, Carl Malamud/Public.Resource.org is represented by the iRights.info partner firm iRights Law. We cover this dispute the same way we cover other news: impartially, but with a point of view.)

"Official works" narrowly defined in Germany

The dispute between Malamud and DIN leads onto the winding paths of copyright in "official works" ("amtliche Werke"), as they are called in Germany. The basic idea, which also applies in other countries: what governments and other state bodies produce is free of copyright, because laws and rules should be open for all citizens to inspect. A grey area opens up where the matter is not a clear-cut case like the text of a law but other institutions: precisely those bodies entrusted with tasks such as standardization and safety rules. Depending on the country and the perspective, they are purely private organizations, or ones to which public tasks have been delegated.

In the US, the dispute over the "Public Safety Standards" broke out some time ago: the standards organization ASTM, the fire protection association and the professional body of heating and cooling engineers see their copyrights as infringed and are taking legal action against Public.Resource.org (PDF). Malamud is also at loggerheads with several US states. In Germany, the dispute is now headed, for the time being, to the Hamburg regional court.

The Deutsches Institut für Normung can cite good arguments in its favor: the list of what counts as an "official work" is limited in Germany, and Section 5 of the Copyright Act enumerates it exhaustively. When the section was last amended as part of the "first basket" of copyright reform in 2003, privately produced standards were expressly excluded from the public domain. That was what the legislature wanted, above all to preserve the financing model of DIN, whose work is funded largely from licensing revenue. Drawing the line nevertheless remains detail work: if technical standards become part of laws, directives or regulations, they are in the public domain after all. And where an authority makes them "its own" by reference, publishers are obliged to grant licenses, though not for the internet.

Malamud: "The law belongs to the people"

Carl Malamud's position can be summed up in one sentence: "The law belongs to the people; it cannot become the property of governments or other organizations, no matter how well-deserved the monopoly rents may seem where some are allowed to farm their own parcel of the law." That is what he writes in an article accompanying the publication of the Public Safety Standards.

Official and other works on their way into the Internet Archive. Photo: Carl Malamud, CC BY

That he is prepared to fight is something the "rogue archivist" (Cory Doctorow) has already proven several times. In the US, case law works in his favor when it comes to pushing reforms in access to the law forward step by step. In Germany, the conditions for that are poor.

That, however, will probably not stop Malamud from carrying on. In the spring, for instance, he scanned the 32,062 pages of the District of Columbia's code of laws on his own initiative. Alongside the release he declared all ownership claims to the laws "hereby (…) null and void". A week later, the District government adopted his legal view and put the laws freely online itself.

October 17 2013

European Court of Justice sheds more light on the EU Council of Ministers

More transparency for the Council of the European Union: in the legislative process it must also disclose which member states proposed amendments. The Council has now lost a dispute over this before the European Court of Justice as well.

What does the Council of Ministers say? In Europe's political system this question is central: legislation that passes through the Commission and Parliament must also pass through the Council, which represents the interests of the member states. Yet the Council is considered closed off: well into the 2000s it met behind closed doors, and its doors have opened only slowly since then. What is drafted there in working groups, and which proposals it takes into negotiations, the public usually learns only at the end, once it has already decided.

Anyone who wants insight into the Council's work can try their luck with freedom of information. It is anchored in the treaties, the Charter of Fundamental Rights and a dedicated regulation for EU institutions. Last year the Council released 4,858 documents at the first request. In a good fifth of the cases, however, not in full, as the Council's statistics (PDF) record.

Access Info vs. the Council of Ministers

The NGO Access Info Europe has long been bothered by this. In 2008 it requested a Council document that itself concerns freedom of information: the regulation on access to documents was to be revised, and a seven-page document with the reference number 16338/08 (PDF) recorded what the member states still wanted to change in it. Only one thing was missing from the document the Council released: the names of the member states had been removed.

The subject of the dispute: the Council document in its original and published versions

The Council justifies its practice of removing the names like this: if the member states were named specifically, the delegations' room for negotiation would be restricted, decision-making would be made harder and, ultimately, the "efficiency" of its work would be undermined. Because the document dealt only with internal processes, it argued, an exception in the regulation applied.

Before the General Court of the EU, however, the Council already failed across the board with this argument. The court ruled in 2011 that the Council must also hand over the names of the member states, since the public interest in access prevailed.

ECJ confirms access

Because the Council sought to have that judgment overturned before the European Court of Justice (ECJ), the court had to rule on the matter again. Several governments and the European Parliament joined the case as interveners: Spain, France and the Czech Republic on the side of the Council, Parliament on the side of Access Info Europe.

Today the ECJ decided to dismiss the appeal. The General Court, it held, had correctly decided that the Council must show concretely what the danger would be if the names of the member states were disclosed in the document. Merely asserting such a risk to the negotiation process "purely hypothetically" is not enough. On the other points, too, the ECJ ruled against the Council's views.

Efficiency vs. democracy

The bone of contention, document number 16338/08, has incidentally been public for five years; shortly after the Council meeting it was leaked on statewatch.org (PDF). What is remarkable about the dispute is the question at its core: does the "efficiency" of the procedures weigh more heavily than their democratic character, which is what gives them legitimacy in the first place? Advocate General Cruz Villalón had already noted this in his opinion:

However disadvantageous transparency in the legislative process may be, it must be noted that it has never been claimed that democracy makes legislation "easier", if "easy" is understood to mean "removed from public scrutiny", since the scrutiny exercised by the public places serious constraints on the protagonists of the legislative process.

His conclusion: whoever withholds from the public the authors of proposed amendments also robs it of the means to exercise its democratic rights. In the structure of the EU institutions this is only one building block among many. Still: on this point, the ECJ has ruled in favor of more freedom of information.

April 18 2013

Sprinting toward the future of Jamaica

Creating the conditions for startups to form is now a policy imperative for governments around the world, as Julian Jay Robinson, minister of state in Jamaica’s Ministry of Science, Technology, Energy and Mining, reminded the attendees at the “Developing the Caribbean” conference last week in Kingston, Jamaica.

Robinson said Jamaica is working on deploying wireless broadband access, securing networks and stimulating tech entrepreneurship around the island, a set of priorities that would have sounded of the moment in Washington, Paris, Hong Kong or Bangalore. He also described open access and open data as fundamental parts of democratic governance, explicitly aligning the release of public data with economic development and anti-corruption efforts. Robinson also pledged to help ensure that Jamaica’s open data efforts would be successful, offering a key ally within government to members of civil society.

The interest in adding technical ability and capacity around the Caribbean was sparked by other efforts around the world, particularly Kenya’s open government data efforts. That’s what led the organizers to invite Paul Kukubo to speak about Kenya’s experience, which Robinson noted might be more relevant to Jamaica than that of the global north.

Kukubo, the head of Kenya’s Information, Communication and Technology Board, was a key player in getting the country’s open data initiative off the ground and evangelizing it to developers in Nairobi. At the conference, Kukubo gave Jamaicans two key pieces of advice. First, open data efforts must be aligned with national priorities, from reducing corruption to improving digital services to economic development.

“You can’t do your open data initiative outside of what you’re trying to do for your country,” said Kukubo.

Second, political leadership is essential to success. In Kenya, the president was personally involved in open data, Kukubo said. Now that a new president has been officially elected, however, there are new questions about what happens next, particularly given that pickup in Kenya’s development community hasn’t been as dynamic as officials might have hoped. There’s also a significant issue on the demand-side of open data, with respect to the absence of a Freedom of Information Law in Kenya.

When I asked Kukubo about these issues, he said he expects a Freedom of Information law will be passed this year in Kenya. He also replied that the momentum on open data wasn’t just about the supply side.

“We feel that in the usage side, especially with respect to the developer ecosystem, we haven’t necessarily gotten as much traction from developers using data and interpreting cleverly as we might have wanted to have,” he said. “We’re putting more into that area.”

With respect to leadership, Kukubo pointed out that newly elected Kenyan President Uhuru Kenyatta drove open data release and policy when he was the minister of finance. Kukubo expects him to be very supportive of open data in office.

The development of open data in Jamaica, by way of contrast, has been driven by academia, said professor Maurice McNaughton, director of the Center of Excellence at the Mona School of Business at the University of the West Indies (UWI). The Caribbean Open Institute, for instance, has been working closely with Jamaica’s Rural Agriculture Development Authority (RADA). There are high hopes that releases of more data from RADA and other Jamaican institutions will improve Jamaica’s economy and the effectiveness of its government.

Open data could add $35 million annually to the Jamaican economy, said Damian Cox, director of the Access to Information Unit in the Office of the Prime Minister, citing a United Nations estimate. Cox also explicitly aligned open data with measuring progress toward the Millennium Development Goals, positing that increasing the availability of data will enable civil society, government agencies and the UN to more accurately assess success.

The development of (open) data-driven journalism

Developing the Caribbean focused on the demand side of open data as well, particularly the role of intermediaries in collecting, cleaning, fact checking, and presenting data, matched with necessary narrative and context. That kind of work is precisely what data-driven journalism does, which is why it was one of the major themes of the conference. I was invited to give an overview of data-driven journalism that connected some trends and highlighted the best work in the field.

I’ve written quite a bit about how data-driven journalism is making sense of the world elsewhere, with a report yet to come. What I found in Jamaica is that media there have long since begun experimenting in the field, from the investigative journalism at Panos Caribbean to the relatively recent launch of diGJamaica by the Gleaner Company.

diGJamaica is modeled upon the Jamaican Handbook and includes more than a million pages from The Gleaner newspaper, going back to 1834. The site publishes directories of public entities and public data, including visualizations. It charges for access to the archives.

Legends and legacies

Olympic champion Usain Bolt, photographed in his (fast) car at the UWI/Usain Bolt Track in Mona, Jamaica.

Normally, meeting the fastest man on earth would be the most memorable part of any trip. The moment that left the deepest impression from my journey to the Caribbean, however, came not from encountering Usain Bolt on a run but from within a seminar room on a university campus.

As a member of a panel of judges, I saw dozens of young people present after working for 30 hours at a hackathon at the University of the West Indies. While even the most mature of the working apps was still a prototype, the best of them were squarely focused on issues that affect real Jamaicans: scoring the credit risk of farmers who need bank loans, and collecting and sharing data about produce.

The winning team created a working mobile app that would enable government officials to collect data at farms. While none of the apps are likely to be adopted by the agricultural agency in their current form, or show up in the Google Play store this week, the experience the teams gained will help them in the future.

As I left the island, the perspective that I’d taken away from trips to Brazil, Moldova and Africa last year was further confirmed: technical talent and creativity can be found everywhere in the world, along with considerable passion to apply design thinking, data and mobile technology to improve the societies people live within. This is innovation that matters, not just clones of popular social networking apps — though the judges saw more than a couple of those ideas flow by as well.

In the years ahead, Jamaican developers will play an important role in media, commerce and government on the island. If attracting young people to engineering and teaching them to code is the long-term legacy of efforts like Developing the Caribbean, it will deserve its own thumbs up from Mr. Bolt. The track to that future looks wide open.

Disclosure: the cost of my travel to Jamaica was paid for by the organizers of the Developing the Caribbean conference.

March 28 2013

Four short links: 28 March 2013

  1. What American Startups Can Learn From the Cutthroat Chinese Software Industry — It follows that the idea of “viral” or “organic” growth doesn’t exist in China. “User acquisition is all about media buys. Platform-to-platform in China is war, and it is fought viciously and bitterly. If you have a Gmail account and send an email to, for example, NetEase163.com, which is the local web dominant player, it will most likely go to spam or junk folders regardless of your settings. Just to get an email to go through to your inbox, the company sending the email needs to have a special partnership.” This entire article is a horror show.
  2. White House Hangout Maker Movement (Whitehouse) — During the Hangout, Tom Kalil will discuss the elements of an “all hands on deck” effort to promote Making, with participants including: Dale Dougherty, Founder and Publisher of MAKE; Tara Tiger Brown, Los Angeles Makerspace; Super Awesome Sylvia, Super Awesome Maker Show; Saul Griffith, Co-Founder, Otherlab; Venkatesh Prasad, Ford.
  3. Municipal Codes of DC Freed (BoingBoing) — more good work by Carl Malamud. He’s specifically providing data for apps.
  4. The Modern Malware Review (PDF) — 90% of fully undetected malware was delivered via web-browsing; It took antivirus vendors 4 times as long to detect malware from web-based applications as opposed to email (20 days for web, 5 days for email); FTP was observed to be exceptionally high-risk.

March 19 2013

The City of Chicago wants you to fork its data on GitHub

GitHub has been gaining new prominence as the use of open source software in government grows.

Earlier this month, I included a few thoughts from Chicago’s chief information officer, Brett Goldstein, about the city’s use of GitHub, in a piece exploring GitHub’s role in government.

While Goldstein says that Chicago’s open data portal will remain the primary means through which Chicago releases public sector data, publishing open data on GitHub is an experiment that will be interesting to watch, in terms of whether it affects reuse or collaboration around it.

In a followup email, Goldstein, who also serves as Chicago’s chief data officer, shared more about why the city is on GitHub and what they’re learning. Our discussion follows.

The City of Chicago is on GitHub.

What has your experience on GitHub been like to date?

Brett Goldstein: It has been a positive experience so far. Our local developer community is very excited by the MIT License on these datasets, and we have received positive reactions from outside of Chicago as well.

This is a new experiment for us, so we are learning along with the community. For instance, GitHub was not built to be a data portal, so it was difficult to upload our buildings dataset, which was over 2GB. We are rethinking how to deploy that data more efficiently.

Why use GitHub, as opposed to some other data repository?

Brett Goldstein: GitHub provides the ability to download, fork, make pull requests, and merge changes back to the original data. This is a new experiment, where we can see if it’s possible to crowdsource better data. GitHub provides the necessary functionality. We already had a presence on GitHub, so it was a natural extension to that as a complement to our existing data portal.
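
To make that crowdsourcing loop concrete, here is a minimal sketch of how a developer might pull one of the city's JSON datasets straight from GitHub and run a quick sanity check before opening a pull request with a fix. The repository path, file name, and "address" field are hypothetical placeholders, not the city's actual file layout.

```python
# A minimal sketch: load a JSON dataset published on GitHub and flag incomplete
# records before proposing a fix via a pull request. The repository path, file
# name, and "address" field below are hypothetical placeholders.
import json
from urllib.request import urlopen

URL = "https://raw.githubusercontent.com/Chicago/example-dataset/master/data.json"

with urlopen(URL) as response:
    records = json.load(response)

# Flag records missing a value a contributor might want to fill in.
missing = [record for record in records if not record.get("address")]
print(f"{len(records)} records loaded, {len(missing)} missing an address")
```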

Why does it make sense for the city to use or publish open source code?

Brett Goldstein: Three reasons. First, it solves issues with incorporating data in open source and proprietary projects. The city’s data is available to be used publicly, and this step removes any remaining licensing barriers. These datasets were targeted because they are incredibly useful in the daily life of residents and visitors to Chicago. They are the most likely to be used in outside projects. We hope this data can be incorporated into existing projects. We also hope that developers will feel more comfortable developing applications or services based on an open source license.

Second, it fits within the city’s ethos and vision for data. These datasets are items that are visible in daily life — transportation and buildings. It is not proprietary data and should be open, editable, and usable by the public.

Third, we engage in projects like this because they ultimately benefit the people of Chicago. Not only do our residents get better apps when we do what we can to support a more creative and vibrant developer community, they also will get a smarter and more nimble government using tools that are created by sharing data.

We open source many of our projects because we feel the methodology and data will benefit other municipalities.

Is anyone pulling it or collaborating with you? Have you used that code? Would you, if it happened?

Brett Goldstein: We collaborated with Ian Dees, who is a significant contributor to OpenStreetMap, to launch this idea. We anticipate that buildings data will be integrated in OpenStreetMap now that it’s available with a compatible license.

We have had 21 forks and a handful of pull requests fixing some issues in our README. We have not had a pull request fixing the actual data.

We do intend to merge requests to fix the data and are working on our internal process to review, reject, and merge requests. This is an exciting experiment for us, really at the forefront of what governments are doing, and we are learning along with the community as well.

Is anyone using the open data that wasn’t before, now that it’s JSON?

Brett Goldstein: We seem to be reaching a new audience with posting data on GitHub, working in tandem with our heavily trafficked data portal. A core goal of this administration is to make data open and available. We have one of the most ambitious open data programs in the country. Our portal has over 400 datasets that are machine readable, downloadable and searchable. Since it’s hosted on Socrata, basic analysis of the data is possible as well.
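
Since the portal runs on Socrata, rows can also be pulled programmatically through its SODA API. Below is a minimal sketch; the dataset identifier is a placeholder, and real identifiers are listed on data.cityofchicago.org.

```python
# A minimal sketch: pull a few rows from the city's Socrata portal through the
# SODA API. Replace the placeholder identifier with a real dataset ID from
# data.cityofchicago.org before running.
import json
from urllib.request import urlopen

DATASET_ID = "xxxx-xxxx"  # placeholder Socrata dataset identifier
url = f"https://data.cityofchicago.org/resource/{DATASET_ID}.json?$limit=5"

with urlopen(url) as response:
    rows = json.load(response)

for row in rows:
    print(row)
```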

March 08 2013

GitHub gains new prominence as the use of open source within governments grows

When it comes to government IT in 2013, GitHub may have surpassed Twitter and Facebook as the most interesting social network.

GitHub’s profile has been rising recently, from a Wired article about open source in government, to its high profile use by the White House and within the Consumer Financial Protection Bureau. This March, after the first White House hackathon in February, the administration’s digital team posted its new API standards on GitHub. In addition to the U.S., code from the United Kingdom, Canada, Argentina and Finland is also on the platform.

“We’re reaching a tipping point where we’re seeing more collaboration not only within government agencies, but also between different agencies, and between the government and the public,” said GitHub head of communications Liz Clinkenbeard, when I asked her for comment.

Overall, 2012 was a breakout year for the use of GitHub by government, with more than 350 government code repositories by year’s end.

Total number of government repositories on GitHub.

In January 2012, the British government committed the code for GOV.UK to GitHub.

NASA, after its first commit, added 11 more code repositories over the course of the year.

In September, the new Open Gov Foundation published the code for the MADISON legislative platform. In December, the U.S. Code went on GitHub.

GitHub’s profile was raised further in Washington this week when Ben Balter was announced as the company’s federal liaison. Balter made some open source history last year, when he was part of the federal government’s first agency-to-agency pull request. He also was a big part of giving the White House some much-needed geek cred when he coded the administration’s digital government strategy in HTML5.

Balter will be GitHub’s first government-focused employee. He won’t, however, be saddled with an undecipherable title. In a sly dig at the slow-moving institutions of government, and in keeping with GitHub’s love for octocats, Balter will be the first “Government Bureaucat,” focused on “helping government to do all sorts of governmenty things, well, more awesomely,” wrote GitHub CIO Scott Chacon.

Part of Balter’s job will be to evangelize the use of GitHub’s platform as well as open source in government, in general. The latter will come naturally to him, given how he and the other Presidential Innovation Fellows approached their work.

“Virtually everything the Presidential Innovation Fellows touched was open sourced,” said Balter when I interviewed him earlier this week. “That’s everything from better IT procurement software to internal tools that we used to streamline paperwork. Even more important, much of that development (particularly RFPEZ) happened entirely in the open. We were taking the open source ethos and applying it to how government solutions were developed, regardless whether or not the code was eventually public. That’s a big shift.”

Balter is a proponent of social coding in the open as a means of providing some transparency to interested citizens. “You can go back and see why an agency made a certain decision, especially when tools like these are used to aid formal decision making,” he said. “That can have an empowering effect on the public.”

Forking code in city hall and beyond

There’s notable government activity beyond the Beltway as well.

The City of Chicago is now on GitHub, where chief data officer and city CIO Brett Goldstein is releasing open data as JSON files, along with open source code.

Both Goldstein and Philadelphia chief data officer Mark Headd are also laudably participating in conversations about code and data on Hacker News threads.

“Chicago has released over 400 datasets using our data portal, which is located at data.cityofchicago.org,” Goldstein wrote on Hacker News. While Goldstein says that the city’s portal will remain the primary way they release public sector data, publishing data on GitHub is an experiment that will be interesting to watch, in terms of whether it affects reuse.

“We hope [the datasets on GitHub] will be widely used by open source projects, businesses, or non-profits,” wrote Goldstein. “GitHub also allows an on-going collaboration with editing and improving data, unlike the typical portal technology. Because it’s an open source license, data can be hosted on other services, and we’d also like to see applications that could facilitate easier editing of geographic data by non-technical users.”

Headd is also on GitHub in a professional capacity, where he and his colleagues have been publishing code to a City of Philadelphia repository.

“We use [GitHub] to share some of our official city apps,” commented Headd on the same Hacker News thread. “These are usually simple web apps built with tools like Bootstrap and jQuery. We’ll be open sourcing more of these going forward. Not only are we interested in sharing the code for these apps, we’re actively encouraging people to fork, improve and send pull requests.”

While there’s still a long road ahead for widespread code sharing between the public and government, the economic circumstances of cities and agencies could create the conditions for more code sharing inside government. In a TED Talk last year, Clay Shirky suggested that adopting open source methods for collaboration could even transform government.

A more modest (although still audacious) goal would be to simply change how government IT is done.

“I’ve often said, the hardest part of being a software developer is training yourself to Google the problem first and see if someone else has already solved it,” said Balter during our interview. “I think we’re going to see government begin to learn that lesson, especially as budgets begin to tighten. It’s a relative ‘app store’ of technology solutions just waiting to be used or improved upon. That’s the first step: rather than going out to a contractor and reinventing the wheel each time, it’s training ourselves that we’re part of a larger ecosystem and to look for prior art. On the flip side, it’s about contributing back to that commons once the problem has been solved. It’s about realizing you’re part of a community. We’re quickly approaching a tipping point where it’s going to be easier for government to work together than alone. All this means that a taxpayer’s dollar can go further, do more with less, and ultimately deliver better citizen services.”

Some people may understandably bridle at including open source code and open data under the broader umbrella of “open government,” particularly if such efforts are not balanced by adherence to good government principles around transparency and accountability.

That said, there’s reason to hail collaboration around software and data as bona fide examples of 21st century civic participation, where better platforms for social coding enable improved outcomes. The commits and pulls of staff and residents on GitHub may feel like small steps, but they represent measurable progress toward more government not just of the people, but with the people.

“Open source in government is nothing new,” said Balter. “What’s new is that we’re finally approaching a tipping point at which, for federal employees, it’s going to be easier to work together, than work apart. Whereas before, ‘open source’ often meant compiling, zipping, and uploading, when you fuse the internal development tools with the external publishing tools, and you make those tools incredibly easy to use, participating in the open source community becomes trivial. Often, it can be more painful for an agency to avoid it completely. I think we’re about to see a big uptick in the amount of open source participation, and not just in the traditional sense. Open source can be between business units within an agency. Often the left hand doesn’t know what the right is doing between agencies. The problems agencies face are not unique. Often the taxpayer is paying to solve the same problem multiple times. Ultimately, in a collaborative commons with the public, we’re working together to make our government better.”

February 22 2013

White House moves to increase public access to scientific research online

Today, the White House responded to a We The People e-petition that asked for free online access to taxpayer-funded research.

As part of the response, John Holdren, the director of the White House Office of Science and Technology Policy, released a memorandum today directing agencies with “more than $100 million in research and development expenditures to develop plans to make the results of federally-funded research publicly available free of charge within 12 months after original publication.”

The Obama administration has been considering access to federally funded scientific research for years, including a report to Congress in March 2012. The relevant e-petition, which had gathered more than 65,000 signatures, had gone unanswered since May of last year.

As Hayley Tsukayama notes in the Washington Post, the White House acknowledged the open access policies of the National Institutes of Health as a successful model for sharing research.

“This is a big win for researchers, taxpayers, and everyone who depends on research for new medicines, useful technologies, or effective public policies,” said Peter Suber, Director of the Public Knowledge Open Access Project, in a release. “Assuring public access to non-classified publicly-funded research is a long-standing interest of Public Knowledge, and we thank the Obama Administration for taking this significant step.”

Every federal agency covered by this memorandum will eventually need to “ensure that the public can read, download, and analyze in digital form final peer-reviewed manuscripts or final published documents within a timeframe that is appropriate for each type of research conducted or sponsored by the agency.”

An open government success story?

From the day they were announced, one of the biggest question marks about We The People e-petitions has always been whether the administration would make policy changes or take public stances it had not before on a given issue.

While the memorandum and the potential outcomes from its release come with caveats, from a $100 million threshold to national security or economic competition, this answer from the director of the White House Office of Science and Technology Policy, accompanied by a memorandum directing agencies to make a plan for public access to research, is a substantive outcome.

While there are many reasons to be critical of some open government initiatives, it certainly appears that today, We The People were heard in the halls of government.

An earlier version of this post appears on the Radar Tumblr, including tweets regarding the policy change. Photo Credit: ajc1 on Flickr.

February 13 2013

Personal data ownership drives market transparency and empowers consumers

On Monday morning, the Obama administration launched a new community focused on consumer data at Data.gov. While there was no new data to be found among the 507 datasets listed there, it was the first time that smart disclosure had an official home in the federal government.

Image via Data.gov.

“Smart disclosure means transparent, plain language, comprehensive, synthesis and analysis of data that helps consumers make better-informed decisions,” said Christopher Meyer, the vice president for external affairs and information services at Consumers Union, the nonprofit that publishes “Consumer Reports,” in an interview. “The Obama administration deserves credit for championing agency disclosure of data sets and pulling it together into one web site. The best outcome will be widespread consumer use of the tools — and that remains to be seen.”

You can find the new community at Consumer.Data.gov or data.gov/consumer. Both URLs forward visitors to the same landing page, where they can explore the data, past challenges, external resources on the topic, in addition to a page about smart disclosure, blog posts, forums and feedback.

“Analyzing data and giving plain language understanding of that data to consumers is a critical part of what Consumer Reports does,” said Meyer. “Having hundreds of data sets available on one (hopefully) easy-to-use platform will enable us to provide even more useful information to consumers at a time when family budgets are tight and health care and financial ‘choices’ have never been more plentiful.”

The newest community brings the total number of communities on Data.gov to 16. A survey of the existing communities didn’t turn up much recent activity in the forums or blogs, although the health care community at HealthData.gov has more signs of life than others and there are ongoing challenges at Challenge.gov associated with many different topics.

Another side of open?

Smart disclosure is one of the 17 initiatives that the U.S. committed to as part of the National Action Plan for the Open Government Partnership.

“We’ve developed new tools — called ‘smart disclosures’ — so that the data we make public can help people make health care choices, help small businesses innovate, and help scientists achieve new breakthroughs,” said President Obama, speaking at the launch of the Open Government Partnership in New York City in September 2011. “We’ve been promoting greater disclosure of government information, empowering citizens with new ways to participate in their democracy. We are releasing more data in usable forms on health and safety and the environment, because information is power, and helping people make informed decisions and entrepreneurs turn data into new products, they create new jobs.”

In the months since, the Obama administration has been promoting the use of smart disclosure across federal government through a task force (PDF), working to embed the practice as part of the ways that agencies deliver on consumer policy. The United Kingdom’s “Midata” initiative is an important smart disclosure case study outside of the United States.

In 2012, the U.S. Treasury Department launched a finance data community, joining open data initiatives in health care, energy, education, development and safety.

“I think you have to say that what has been accomplished so far is mostly [that] the release of government data has spawned a new generation of apps,” said Richard Thaler, professor of behavioral science and economics at the University of Chicago, in an interview. “This has been a win-win for business and consumers. New businesses are created to utilize the now available government data, and consumers now know when the next bus will arrive. The next step will be to get the private sector data into the picture — but that is only the bright future at this stage, rather than something that has already been accomplished. It is great that the government has led the way in releasing data, since it will give them more credibility when they ask private companies to do the same.”

Open data as catalyst?

While their business or organizational goals for data usage may diverge, consumer advocates, entrepreneurs and media are all looking for more insight into what’s actually happening in marketplaces for goods and services.

“Data releases are critical,” said Meyer. “First, even raw, less consumer-friendly data can help change government and industry behavior when it is published. Second, sunlight truly is the best disinfectant. We believe government and industry want to do right by consumers. Scrutiny of data makes the next iteration better, whether it’s produced by the government or a hospital.”

What will make these kinds of disclosures “smart?” When they involve timely, regular release of personal data in standardized, machine readable formats. When data is more liquid, it can easily be ingested by entrepreneurs and developers to be used in tools and services to help people to make more informed decisions as they navigate marketplaces for finance, health care, energy, education or other areas.
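
As a rough illustration of what "liquid", machine-readable personal data enables, here is a minimal sketch of a consumer tool summarizing a household's energy usage export so the totals can be weighed against competing rate plans. The file name and column layout are hypothetical assumptions, not any particular provider's format.

```python
# A minimal sketch: summarize a machine-readable export of household energy
# usage so it can be compared against rate plans. The file name and column
# names ("month", "kwh") are hypothetical assumptions.
import csv
from collections import defaultdict

monthly_kwh = defaultdict(float)
with open("energy_usage.csv", newline="") as f:
    for row in csv.DictReader(f):
        monthly_kwh[row["month"]] += float(row["kwh"])

for month, kwh in sorted(monthly_kwh.items()):
    print(f"{month}: {kwh:.1f} kWh")
```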

“We use government datasets a great deal in the health care space,” said Meyer. “We use CMS ‘Hospital Compare’ data to publish ratings on patient experience and re-admissions. To develop ratings of preventive services for heart disease, we rely on the U.S. Preventive Services Task Force.”

The stories of Brightscope and Panjiva are instructive: both startups had to invest significant time, money and engineering talent in acquiring and cleaning up government data before they could put it to work adding transparency to supply chains or financial advisers.

“It’s cliche, but true – knowledge is power,” said Yaron Samid, the CEO of BillGuard, in an interview. “In BillGuard’s case, when we inform consumers about a charge on their credit bill that was disputed by thousands of other consumers or a known grey charge merchant before they shop, it empowers them to make active choices in protecting their money – and spending it, penny for penny, how they choose and explicitly authorize. The release and cross-sector collaboration of billing dispute data will empower consumers and help put an end to deceptive sales and billing practices, the same way crowdsourced “mark as spam” data did for the anti-spam industry.”

What tools exist for smart disclosure today?

If you look through the tools and services at the new alpha.data.gov, quite a few of the examples are tools that use smart disclosure. When they solve knotty problems, such consumer-facing products or services have the potential to scale quickly and massively.

As Meyer pointed out in our interview, however, which ones catch on is still an open question.

“We are still in the nascent stage of identifying many smart disclosure outcomes that have benefited consumers in a practical way,” he said. “Where we can see demonstrable progress is the government’s acknowledgement that freeing the data is the first and most necessary step to giving private sector innovators opportunity to move the marketplace in a pro-consumer direction.”

The difference between open data on a government website and data put to work where consumers are making decisions, however, is significant.

“‘Freeing the data’ is just the first step,” said Meyer. “It has to be organized in a consumer-friendly format. That means a much more intense effort by the government to understand what consumers want and how they can best absorb the data. Consumer Reports and its policy and action arm, Consumers Union, have spent an enormous amount of time trying to get federal and state governments and private health providers to release information about hospital-acquired infections in order to prevent medical harms that kill 100,000 people a year. We’re making progress with government agencies, although we have a long way to go.”

There has already been some movement in sectors where consumers are used to downloading data, like banking. For instance, BillShrink and Hello Wallet use government and private sector data to help people to make better consumer finance decisions. OPower combines energy efficiency data from appliances and government data on energy usage and weather to produce personalized advice on how to save money on energy bills. BillGuard analyzes millions of billing disputes to find “grey charge” patterns on credit cards and debit cards. (Disclosure: Tim O’Reilly is on BillGuard’s Advisory Board and is a shareholder in the startup.)
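A rough sketch of the crowd-signal idea behind that kind of analysis might look like the following. The merchants, dispute counts and threshold are invented, and this is not BillGuard's actual method; it only illustrates flagging merchants that many cardholders have disputed.

    from collections import Counter

    # Illustrative dispute reports: each entry is one cardholder disputing a
    # charge from a merchant. All names and numbers here are made up.
    disputes = [
        "ACME Subscription Co", "ACME Subscription Co", "Corner Bakery",
        "ACME Subscription Co", "Gym Renewal LLC", "ACME Subscription Co",
    ]

    dispute_counts = Counter(disputes)
    FLAG_THRESHOLD = 3  # merchants disputed at least this often get flagged

    flagged = {m for m, n in dispute_counts.items() if n >= FLAG_THRESHOLD}

    # A new charge can then be checked against the crowd signal.
    new_charge = {"merchant": "ACME Subscription Co", "amount": 14.99}
    if new_charge["merchant"] in flagged:
        print("Heads up: other consumers have disputed charges from this merchant.")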

“To get an idea of the potential here, think about what has happened to the travel agent business,” said Thaler. “That industry has essentially been replaced by websites serving as choice engines. While this has been a loss to those who used to be travel agents, I think most consumers feel they are better served by being able to search the various travel and lodging options via the Internet. When it comes to choosing a calling plan or a credit card, it is very difficult to get the necessary data, either on prices or on one’s own utilization, to make a good choice. The same is true for mortgages. If we can make the underlying data available, we can help consumers make much better choices in these and other domains, and at the same time make these industries more competitive and transparent. There are similar opportunities in education, especially in the post-high school, for-profit sector.”

Recent data releases have the potential to create new insights into previously opaque markets.

“There are also citizen complaint registries that have been created either by statute (Consumer Product Safety Improvement Act of 2008) or by federal agencies, like the Consumer Financial Protection Bureau (CFPB). [These registries] will create rich datasets that industry can use to improve their products and consumer advocates can analyze to point out where the marketplace hasn’t worked,” said Meyer.

In 2012, the CFPB, in fact, began publishing a new database online. As was the case with the Consumer Product Safety Commission in 2011, the consumer complaint database did not go online without industry opposition, as Suzy Khimm reported in her feature story on the CFPB. That said, the CFPB has been making consumer complaints available to the public online since last June.

That data is now being consumed by BillGuard, enabling more consumers to derive benefit that might not have been available otherwise.

“The CFPB has made their consumer complaint database open to the public,” said Samid. “Billing disputes are the No. 1 complaint category for credit cards. We also source consumer complaint data from the web and anonymized billing disputes directly from banks. We are working with other government agencies to share our findings about grey charges, but cannot disclose those relationships just yet.”

“Choice engines” for an open data economy

Many of this emerging class of services use multiple datasets to provide consumers with insight into their choices. For instance, reviews and experiences of prior customers can be mashed up with regulatory data from government agencies, including complaints. Data from patient reviews could power health care startups. The integration of food inspection data into Yelp will give consumers more insights into dining decisions. Trulia and Zillow suggest another direction for government data use, as seen in real estate.

If these early examples are any guide, there’s an interesting role for consumer policy makers and regulators to play: open data stewards and suppliers. Given that the release of such data has an effect on the market for products and services, expect more companies in affected industries to resist such initiatives, much in the same way that the CPSC and CFPB databases were opposed by industry. Such resistance may be subtle, where government data collection is portrayed as part of a regulator’s mission but its release into the marketplace is undermined.

Nonetheless, smart disclosure taps into larger trends, in particular “personal data ownership” and consumer empowerment. The growth of an energy usage management sector and participatory health care show how personal data can be used, once acquired. The use of behavioral science in combination with such data is of great interest to business and should attract the attention of policy makers, legislators and regulators.

After all, convening and pursuing smart disclosure initiatives puts government in an interesting role. If government agencies or private companies then choose to apply behavioral economics in programs or policies, with an eye on improving health or financial well-being, how should the policies themselves be disclosed? What principles matter?

“The guideline I suggest is that if a firm is keeping track of your usage and purchases, then you should be able to get access to that data in a machine-readable, standardized format that, with one click, you could upload to a search engine website,” said Thaler. “As for the proper balance, I am proposing only that consumers have access to their raw purchase history, not proprietary inferences the firm may have drawn. To give an example, you should have a right to download the list of all the movies you have rented from Netflix, but not the conclusions they have reached about what sort of movies you might also like. Also, any policy like this should begin with larger firms that already have sophisticated information systems keeping track of consumer data. For those firms, the costs of providing the data to their consumers should be minor.”
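To illustrate the boundary Thaler draws between raw history and proprietary inferences, here is a small hypothetical sketch: it exports only the raw, standardized purchase rows, and nothing derived from them. The records and field names are invented.

    import csv
    import io

    # Hypothetical raw purchase history a firm holds about one customer.
    # Under Thaler's guideline, the customer can download rows like these,
    # but not the firm's proprietary inferences (e.g. predicted tastes).
    purchase_history = [
        {"date": "2012-11-02", "item": "Movie rental: Example Title", "price": 3.99},
        {"date": "2012-11-15", "item": "Movie rental: Another Title", "price": 2.99},
    ]

    def export_raw_history(rows):
        """Write the raw, standardized records to CSV for one-click download."""
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=["date", "item", "price"])
        writer.writeheader()
        writer.writerows(rows)
        return buf.getvalue()

    print(export_raw_history(purchase_history))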

Given the growth of student loans, more transparency and understanding of higher education choices is needed. For that to happen, prospective students will need more access to their own personal data to build a profile they can then use to get personalized recommendations about education, along with data from higher education institutions, including outcomes for different kinds of students, from graduation rates to job placement.

Disclosures of data regarding outcomes can have other effects as well.

“I referenced the hospital-acquired infection battle earlier,” said Meyer. “In 1999, the Institute of Medicine released a study, “To err is human,” that showed tens of thousands of consumers were dying because of preventable medical harms. Consumers Union started a campaign in 2003 to reduce the number of deaths due to hospital-acquired infections. Our plan was to get laws passed in states that required disclosure of infections. We have helped get laws passed in 30 states, which is great, but getting the states to comply with useful data has been difficult. We’re starting to see progress in reducing infections but it’s taken a long time.”


This post is part of our ongoing investigation into the open data economy.

February 08 2013

The federal data portal: Prussia on the Internet

The German federal government is attempting “open data,” yet instead of being praised for it, it finds itself confronted with a protest website. How can that be? It comes down to terminology, rights, communities and technology. An attempt at an explanation.

The federal and state governments are getting a data portal whose name lost the word “open” a few days before the official launch. Various initiatives, organizations and activists have voiced their outrage in an open letter to the Federal Ministry of the Interior. The signatories at least do not want to serve as the fig leaf of a failing approach. That risk loomed because those responsible for the new portal „Govdata – Das Datenportal für Deutschland” have tried, through several so-called “community workshops,” to involve the portal’s potential users in the planning. Nobody criticizes this approach as such; on the contrary, it ought to be a matter of course for government projects of this kind. But the fact that the community that was brought in is now expressly distancing itself shows that the approach has, in this case, at least partly failed.

The protest centers on three points in particular:

  • First, with Govdata the federal and state governments do not clearly commit to the standards and definitions of “open” that have long been internationally accepted among open data activists. Instead they are creating an isolated national solution and thereby, in the view of the letter’s signatories, setting a bad example.
  • Second, it is left to the agencies themselves whether they contribute data to the portal at all – and if so, which uses of the data they permit.
  • Third, critics consider the data available through the portal so far to be little-requested “snooze data.” That, they say, is the consequence of non-binding or misguided priorities about which data should be made available. It is obvious that the locations of dog waste bins are less interesting than the energy consumption of public facilities. Taken together, the open letter amounts to the accusation that German policymakers are contenting themselves with half-hearted steps and merely feigning a willingness to innovate.

So now there is no longer any “open” in the portal’s name. That way the federal government at least escapes the criticism that it is engaging in false labeling and watering down the term “open data.” This already points to a very fundamental disagreement between the community and the federal government. That it was only discovered so late is due above all to something very mundane: time pressure.

Informed sources report that CeBIT 2013 has dictated the schedules. In order to be able to present something in the direction of “open government data” by then, a study was commissioned from the Fraunhofer Institute for Open Communication Systems (FOKUS), which by all accounts was given far too little time. The technical implementation of the study’s recommendations was then tackled just as hastily, and already at that stage it paid too little attention to the meaning of the sensitive term “open data.” All of this happened, and still happens, under the lead of the Federal Ministry of the Interior, but in constant coordination with a diversely staffed federal-state working group. Which points to the next cause of the dispute.

Red tape 2.0 instead of political direction

The sluggishness of such a coordination process is obvious. That it tends toward lukewarm results is a lesson of experience. Both could only have been mitigated by a functional lead at the federal level equipped with the corresponding political mandate. But there is no such lead. In the federal system, the national government has no legal competence to simply “govern through” on matters of public data, nor does the federal government apparently have the political will to at least take on a strong bellwether role beyond the formal division of competences.

In the states, various ministries are responsible – in one case even the agriculture department. The result is that the working level of the Federal Ministry of the Interior may want “open data” in some form, but always remains a supplicant towards the heads of the agencies, who – lacking a clear political signal from their respective state governments – feel bound above all to their own standards. The terms of use for Govdata look accordingly: their two variants of a „Datenlizenz Deutschland” stand at the center of the protesters’ criticism.

No standards, no open – it’s that simple

The signatories of the open letter insist on standards. Besides a number of standard license models, these are above all the definitions and principles curated by various non-governmental actors at opendefinition.org, okfn.de and freedomdefined.org. None of these sources is binding in any way or embedded in supranational structures or treaties. Nobody holds worldwide trademark rights to the term “open data” and can use them to enforce which data really is “open.” But at least the rules have so far always been found by consensus among a large number of participants worldwide.

When it comes to the release of data held by public bodies – “public sector information” (PSI) in the current jargon – the USA is cited again and again as the prime example of a state that is very generous with its data. In case of doubt, it is said, government data there can be inspected and re-used by anyone, and classified material is the exception. The American data portal data.gov is regarded as the great model for making public data available electronically, followed by its British counterpart. Public administration in the Prussian tradition unfortunately works differently.

Administrative tradition, turned electronic

Before the Freedom of Information Act (Informationsfreiheitsgesetz, IFG) was passed at the federal level, along with corresponding state laws, the rule was clear: in case of doubt, administrative proceedings are subject to official secrecy. Anyone who wanted to inspect files had to be able to show a special entitlement. Even today, many agencies try by hook or by crook to obstruct requests under the IFG and comparable laws by interpreting the exemption rules creatively. Initiatives such as „Frag den Staat” use modern means to work against this.

The critics therefore fear that many German agencies – if they release the really interesting data at all – will choose the option under which only non-commercial use of their data is permitted. That license, however, contradicts the widely accepted principle for open data that the data must also be re-usable commercially. In fact, it is even worse than that.

Granted, the data portals of other countries are not beyond reproach either, measured against the pure doctrine of “open data.” But Govdata – following the Fraunhofer study – takes a particularly idiosyncratic path. Instead of governing the use of the data by contract, as all standard license models in the open content world from Wikipedia to Linux do, it prefers to rely on German administrative law, and thereby plants a further cause of the dispute that has flared up. The problem is hard to convey to legal laypeople – here is an attempt.

Facts are free, except when German agencies hold them

Anyone who deals in any way with so-called “intellectual property” can usually rely on one principle: ideas and facts are free. They are simply not “protectable,” and so they are not covered by the automatic protection of copyright. This principle does not apply without exception: those who set out to do so can have technical ideas protected by patents within narrow limits, the European database right covers large parts of the data landscape with a problematic indirect protection of investment, and with its plan for an ancillary copyright for press publishers the federal government is even in the process of making the smallest snippets of text subject to licensing. But an individual fact, a simple piece of information such as the daily high temperature in Berlin on Christmas Eve 2012, can be used freely – even if it happens to be contained in a database that is protected as a whole. Yet anyone who relies on this principle has reckoned without the almost unlimited possibilities of German administrative law.

The Copyright Act is part of civil law, not administrative law. Everyone who has to operate within the framework of civil law (private individuals and companies, for instance) must make do with the tools that civil law offers. The Copyright Act provides no protection right for individual facts, so there is no suitable tool for exercising legally enforceable control over individual facts. So far, so clear. Public bodies, however, can choose a different framework instead of the civil-law one, namely administrative law, within which they can, in a sense, build their own legal tools. It works like this:

As long as an agency is not legally obliged to act in a particular way, it can set up almost arbitrary rules for everything it does. These rules are then laid down in a so-called administrative act (Verwaltungsakt) and enforced by means of official administrative coercion. An administrative act therefore works like a law in miniature, and terms of use can also be defined in this form. The principle mentioned above – that facts are free and re-usable once you have them – can thus be circumvented. Govdata takes exactly this route in order to give the agencies the ability to control the use of the data they provide even more granularly than civil law would allow, right down to the individual fact. Whether this control is even meant to be actively exercised is probably something not even the members of the federal-state working group are clear about. That alone would argue against providing for it in the first place.

Data projects half illegal if need be?

All of this makes it understandable why the current debate is being conducted in such fundamental terms. Not only is Germany creating its own data rules, which additionally have to be observed whenever the data is mixed with data from elsewhere – which increases the effort for data projects and widens the legal gray areas. On top of that, the reach of these data rules is being pushed beyond the familiar limits of civil law. That this will put a noticeable damper on the open data community is possible, but by no means certain. Some projects will simply give up worrying about the legal questions of their work and just get started. If trouble follows, it can even be a welcome marketing boost. Nevertheless, the current dispute could and should have been avoided. It makes the launch of Govdata look like a missed opportunity.

First published in abridged form in the Data Blog at Zeit Online. This text is published under the Creative Commons Attribution 3.0 de license. John Hendrik Weitzmann is among the signatories of not-your-govdata.de.


February 05 2013

Investing in the open data economy

If you had 10 million pounds to spend on open data research, development and startups, what would you do with it? That’s precisely the opportunity that Gavin Starks (@AgentGav) has been given as the first CEO of the Open Data Institute (ODI) in the United Kingdom.

The ODI, which officially opened last September, was founded by Sir Tim Berners-Lee and Professor Nigel Shadbolt. The independent, non-partisan, “limited by guarantee” nonprofit is a hybrid institution focused on unlocking the value in open data by incubating startups, advising governments, and educating students and media.

Previously, Starks was the founder and chairman of AMEE, a social enterprise that scored environmental costs and risks for businesses. (O’Reilly AlphaTech Ventures was one of its funders.) He’s also worked in the arts, science and technology. I spoke to Starks about the work of the ODI and open data earlier this winter as part of our continuing series investigating the open data economy.

What have you accomplished to date?

Gavin Starks: We opened our offices on the first of October last year. Over the first 12 weeks of operation, we’ve had a phenomenal run. The ODI is looking to create value to help everyone address some of the greatest challenges of our time, whether that’s in education, health, in our economy or to benefit our environment.

Since October, we’ve had literally hundreds of people through the door. We’ve secured $750,000 in matched funding from the Omidyar Network, on top of a 10-million-pound investment from the UK Government’s Technology Strategy Board. We’ve helped identify 200 million pounds a year in savings for the health service in the UK.

200 million pounds? What do you base that estimate upon?

Gavin Starks: Part of our remit is to bring together the main experts from different areas. To illustrate the kind of benefit that I think we can bring here, one part of what we’re doing is to try and unlock data supply.

The Health Service in the UK started to release a lot of its prescription information as open data about nine months ago. We worked with some of the main experts in the health service with a big data analytics firm, Mastodon C, which is a startup that we’re incubating at the ODI.

Together, they identified potential areas of investigation. The data science folks drilled into every single prescription. (I think the dataset was something like 47 million rows of data.) What they were looking at was the difference between proprietary drugs and generics, where a generic equivalent may exist. In many cases, there is no clinical difference between the generic equivalent and the proprietary drug — and so the cost difference is huge. It might be 81 pence for a generic versus more than 20 pounds for a drug that’s still under license.

Looking at the entire dataset, the analytics revealed different patterns, and from that, cost differences. If we had carried out this research a year earlier, for example, we could have saved 200 million pounds over the last year. It really is quite significant. That’s one class of drugs, in one area. We think this research could be repeated against different classes of drugs and replicated internationally.

[Map: Percentage of proprietary statin prescribing by CCG, September 2011 – May 2012. Image credit: PrescribingAnalytics.com]
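As a rough illustration of the kind of comparison Starks describes, the sketch below assumes a hypothetical prescriptions table with a proprietary/generic flag and cost columns. The figures and column names are invented and do not reflect the NHS dataset's actual schema.

    import pandas as pd

    # Toy rows standing in for the open prescription data described above;
    # column names and numbers are invented for illustration only.
    rows = pd.DataFrame([
        {"drug_class": "statin", "is_generic": True,  "items": 1000, "cost_gbp": 810.0},
        {"drug_class": "statin", "is_generic": False, "items": 200,  "cost_gbp": 4000.0},
    ])

    # Average cost per prescribed item, split by generic vs. proprietary.
    per_item = (rows.assign(cost_per_item=rows["cost_gbp"] / rows["items"])
                    .groupby("is_generic")["cost_per_item"].mean())

    # Rough estimate of savings if proprietary items had been prescribed as
    # the generic equivalent instead (assumes clinical equivalence).
    proprietary = rows[~rows["is_generic"]]
    savings = (proprietary["cost_gbp"]
               - proprietary["items"] * per_item[True]).sum()

    print(per_item)
    print(f"Potential saving: £{savings:,.0f}")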

Which open data business models are the most exciting to you?

Gavin Starks: I think there are lots of different areas to explore here. There are areas where cost savings can be brought to any organization, whether in the public sector or the private sector. There are also areas of new innovation. (I think that they’re quite different directions.) Some of the work that we’ve done with the prescription data, that’s where you’re looking at efficiencies.

We’ve got other startups that are based in our offices here in Shoreditch and London that are looking at transportation information. They’re looking at location-based services and other forms of analytics within the commercial sectors: financial information, credit ratings, those kinds of areas. When you start to pull together different levels of open data that have been available but haven’t been that accessible in the past, there are new services that can be derived from them.

What creates a paid or value-added service? It’s essential that we create a starting point where free and open access to the data itself is available for certain use cases, for as many people as possible. From there, you stimulate innovation if people can gain access and discern new insight from that data.

Having the data aggregated, structured and accessible in an automated way is worth paying for. There could be a service-level-agreement-based model. There could be a carve-out of use cases. You could borrow from the Creative Commons world and say, “If you’re going to have a share alike license on this, then that’s fine, you can use it for free. But if you’re going to start creating closed assets, as a result, there may be a charge for the use of data at that point.”

I think there’s a whole range of different data models, but really, the goal here is to try and discern what insight can be derived from existing datasets and what happens when you start mashing them up with other datasets.

What are the greatest challenges to achieving some of the economic outcomes that the UK Cabinet Office has described?

Gavin Starks: I think there are many challenges. One of the key ones is just understanding. One challenge we’ve heard consistently from pretty much everybody has been, “We believe there’s a huge amount of potential here, but where do we start?”

Part of the ODI’s mission is to provide training, education and assets that enable people to begin on that journey. We’re in the process right now of designing our first dozen or so training programs. We’re working at one level with the World Bank to train the world’s politicians and national leaders, and we’re working at the other end with schools to create programs that fit with existing graduate courses.

Education is one of the biggest challenges. We want to train more than technologists — we also want to train lawyers and journalists about the business case to enable people to understand and move forward at the same pace. There’s very little point in just saying, “There is huge value here,” without trying to demonstrate that return on investment (ROI) and value case at the same time.

What is the ODI’s approach to incubating civic startups?

Gavin Starks: There are two parts to it. One is unlocking supply. We’re working with different government departments and public sector agencies to help them understand what unlocking supply means. Creating structured, addressable, repeatable data creates the supply piece so that you can actually start to build a business. It’s very high-risk to try and build a business when you don’t have a guarantee of supply.

Two, encouraging and incubating the demand side. We’ve got six startups in our space already. They’re all at different stages. Some of them are very early on, just trying to navigate toward the value here that we can discern from the data. Others are more mature, and maybe have some existing revenue streams, but they’re looking at how to really make this scale.

What we’ve found to be of benefit so far — and again, we’re only three months in — is our ability to network and convene the different stakeholders. We can take a small startup and get them in front of one of the large corporations and help them bridge that sales process. Helping them communicate their ideas in a clear way, where the value is obvious to the end customer, is important.

What are some of the approaches that have worked to unlock value from open government data?

Gavin Starks: We’re not believers in “If you build it, they will come.” You need to create a guaranteed data supply, but you also need to really engage with people to start to unlock ideas.

We’ve been running our own hackathons, but I think there’s a difference in the way that we’ve structured them and organized them. We include domain experts and frame the hack events around a specific problem or a specific set of problems. For example, we had a weekend-long hackathon in the health space, looking at different datasets, convening domain experts and technical experts.

It involved competitions, where the winner gets a seat at the ODI to take their idea forward. It might be that an idea turns into a business, it might turn into a project, or it might just turn into a research program.

I think that you need to really lead people by the hand through the process of innovation, helping them and supporting them to unlock the value, rather than just having the datasets there and expecting them to be used.

Given the cost the UK’s National Audit Office ascribed to opening data, is the investment worth it?

Gavin Starks: This is like the early days of the web. There are lots of estimates about how much everything is going to be worth and what sort of ROI people are going to see. What we’ve yet to see, I think, is the honest answer.

The reason I’m very excited about this area is that I see the same potential as I saw in the mid-1990s, when I got involved with the web. The same patterns exist today. There are new datasets and ecosystems coming into existence that can be data-mined. They can be joined together in novel ways. They can bridge the virtual and physical worlds. They can bring together people who have not been able to collaborate in different ways.

There’s a huge amount of value to be unlocked. There will be some dead ends, as we had in the web’s development, but there will be some incredible wins. We’re trying to refine our own skills around identifying where those potential hot spots might be.

Health services are an area where it’s really obvious there are a lot of benefits. There are clear benefits from opening up transportation and location-based services. You can see the potential behind energy efficiency, creating efficient supply chains and opening up more information around education.

You can see resonant points. We’re really drilling into those and asking, “What happens when you really put together the right domain experts and the supportive backers?”

Those backers can be financial as well as in industry. The Open Data Institute has been pulling together those experts and providing a neutral space for that innovation to happen.

Which of those areas have the most clear economic value, in terms of creating shorter term returns on investment and quick wins?

Gavin Starks: I don’t think there’s a single answer to that question. If you look at location-based services, corporate data, health data or education, there are examples and use cases in different parts of the world where they will have different weightings.

If you were looking at water sanitation in areas of the world where there is an absence of it, then they may provide more immediate return than unlocking huge amounts of transportation information.

In Denmark, look at the release of the equivalent of zip code data and the more detailed addresses. I believe the numbers there went from four-fold return to 17-fold return, in terms of value to the country of their investment in decent address-level data.

This is one area where we’ve provided a consultation response in the UK. I think it may vary from state to state in the U.S. There may be areas where a specific focus on health would be very beneficial, and others where a focus on energy efficiency may be most beneficial.

What conditions lead to beneficial outcomes for open data?

Gavin Starks: A lot of the real issues are not really about the technology. When it comes to the technology, we know what a lot of the solutions are. How can we address or improve the data quality? What standards need to exist? What anonymity, privacy or secrecy needs to exist around the data? How do we really measure the outcomes? What are the circumstances where stakeholders need to get involved?

You definitely need political buy-in, but there also needs to be a sense of what the data landscape is. What’s the inventory? What’s the legal situation? Who has access? What kind of access is required? What does success look like against a particular use case?

You could be looking at health in somewhere like Rwanda, you could be looking at a national statistics office in a particular country where they may not have access to the data themselves, and they don’t have very much access to resources. You could be looking at contracting, government procurement and improving simple accountability, where there may be more information flow than there is around energy data, for example.

I think there’s a range of different use cases that we need to really explore here. We’re looking for great use cases where we can say, “This is something that’s simple to achieve, that’s repeatable, that helps lower costs and stimulate innovation.”

We are really at the beginning of a journey here.

Red Hat made headlines for becoming the first billion-dollar open source company. What do you think the first billion-dollar open data company will be?

Gavin Starks: It would be not unlikely for that to be in the health arena.


This interview has been edited and condensed for clarity. This post is part of our ongoing investigation into the open data economy.


January 28 2013

Open data economy: Eight business models for open data and insight from Deloitte UK

When I asked whether the push to free up government data was resulting in economic activity and startup creation, I started to receive emails from people around the United States and Europe. I’ll be publishing more of what I learned in our ongoing series of open data interviews and profiles over the next month, but two responses are worth sharing now.

Open questions about open growth

The first response concerned Deloitte’s ongoing research into open data in the United Kingdom [PDF], conducted in collaboration with the Open Data Institute.

Harvey Lewis, one of the primary investigators for the research project, recently wrote about some of Deloitte’s preliminary findings at the Open Government Partnership’s blog in a post on “open growth.” To date, Deloitte has not found the quantitative evidence the team needs to definitively demonstrate the economic value of open data. That said, the team found much of interest in the space:

“… new businesses and new business models are beginning to emerge: Suppliers, aggregators, developers, enrichers and enablers. Working with the Open Data Institute, Deloitte has been investigating the demand for open data from businesses. Looking at the actual supply of and demand for open data in the UK provides some indication of the breadth of sectors the data is relevant to and the scale of data they could be considering.

The research suggests that the key link in the value chain for open data is the consumer (or the citizen). On balance, consumer-driven sectors of the economy will benefit most from open government data that has direct relevance to the choices individuals make as part of their day-to-day lives.”

I interviewed Lewis last week about Deloitte’s findings — stay tuned for more insight into that research in February.

8 business models for open data

Michele Osella, a researcher and business analyst in the Business Model & Policy Innovation Unit at the Istituto Superiore Mario Boella in Italy, wrote in to share examples of emerging business models based upon the research I cited in my post in December. His email reminded me that in Europe, open data is often discussed in the context of public sector information (PSI). Ongoing case studies of re-use are available at the European Public Sector Information Platform website.

Osella linked to a presentation on business models in PSI reuse and shared a list of eight business models, including case studies for six of them:

  1. Premium Product / Service. HospitalRegisters.com
  2. Freemium Product / Service. None of the 13 enterprises interviewed by us falls into this case, but a slew of instances may be provided: a classic example in this vein is represented by mobile apps related to public transportation in urban areas. [Link added.]
  3. Open Source. OpenCorporates and OpenPolis
  4. Infrastructural Razor & Blades. Public Data Sets on Amazon Web Services
  5. Demand-Oriented Platform. DataMarket and Infochimps
  6. Supply-Oriented Platform. Socrata and Microsoft Open Government Data Initiative
  7. Free, as Branded Advertising. IBM City Forward, IBM Many Eyes or Google Public Data Explorer
  8. White-Label Development. This business model has not consolidated yet, but some embryonic attempts seem to be particularly promising.

Agree? Disagree? Have other examples of these models or other business models? Please let me know in the comments or write in to alex@oreilly.com.

In the meantime, here are several other posts that have informed my investigation into open data business models:

This post is part of our ongoing investigation into the open data economy.

January 18 2013

We’re releasing the files for O’Reilly’s Open Government book

I’ve read many eloquent eulogies from people who knew Aaron Swartz better than I did, but he was also a Foo and contributor to Open Government. So, we’re doing our part at O’Reilly Media to honor Aaron by posting the Open Government book files for free for anyone to download, read and share.

The files are posted on the O’Reilly Media GitHub account as PDF, Mobi, and EPUB files for now. There is a movement on the Internet (#PDFtribute) to memorialize Aaron by posting research and other material for the world to access, and we’re glad to be able to do this.

You can find the book here: github.com/oreillymedia/open_government

Daniel Lathrop, my co-editor on Open Government, says “I think this is an important way to remember Aaron and everything he has done for the world.” We at O’Reilly echo Daniel’s sentiment.

December 26 2012

Big, open and more networked than ever: 10 trends from 2012

In 2012, technology-accelerated change around the world was driven by the wave of social media, data and mobile devices. In this year in review, we look back at some of the stories that mattered here at Radar and look ahead to what’s in store for 2013.

Below, you’ll find 10 trends that held my interest in 2012. This is by no means a comprehensive account of “everything that mattered in the past year” — try The Economist’s account of the world in 2012 or The Atlantic’s 2012 in review or Popular Science’s “year in ideas” if you’re hungry for that perspective — but I hope you’ll find something new to think about as 2013 draws near.

Social media

Social media wasn’t new in 2012, but it was bigger and more mainstream than ever. There were some firsts, from the first Presidential “Ask Me Anything” on Reddit to the first White House Google Hangout on Google Plus to presidential #debates to the first billion-user social network. The election season had an unprecedented social and digital component, from those hyperwired debates to a presidential campaign built like a startup. Expect even more blogging, tweeting, tumbling, streaming, Liking and pinning in 2013, even if it leaves us searching for context.

Open source in government

Open source software made more inroads in the federal government, from a notable policy at the Consumer Financial Protection Bureau to more acceptance in the military.

The White House made its first commits on GitHub, including code for its mobile apps and e-petition platform, where President Obama responded personally to an e-petition for the first time. The House Oversight Committee’s crowdsourced legislative platform also went on GitHub. At year’s end, the United States (code) was on GitHub.

Responsive design

According to deputy technical lead Jeremy Vanderlan, the new AIDS.gov, launched in June, was the first full-site implementation of responsive web design for a federal government domain. They weren’t the first to automatically adapt how a website is displayed for the device a visitor is using — you can see next-generation web design at open.nasa.gov or in the way that fcc.gov/live optimizes to provide video to different mobile devices — but this was a genuine milestone for the feds online. By year’s end, Congress had also become responsive, at least with respect to its website, with a new beta at Congress.gov.

Free speech online

Is there free speech on the Internet? As Rebecca MacKinnon, Ethan Zuckerman and others have been explaining for years, what we think of as the new “public square online” is complicated by the fact that these platforms for free expression are owned and operated by private companies. MacKinnon explored these issues in “Consent of the Networked,” one of the best technology policy books of the year. In 2012, “Twitter censorship” and the Terms of Service for social networking services caused many more people to suggest a digital Bill of Rights, although “Internet freedom” is an idea that varies with the beholder.

Open mapping

On January 9th, I wondered whether 2012 would be “the year of the open map.” I started reporting on digital maps made with powerful new software and open data last winter. The prediction was partially borne out, from Foursquare’s adoption of OpenStreetMap to StreetEasy moving away from Google Maps to new investments in OpenStreetMap. In response to the shift, Google slashed its price for using the Google Maps API by 88%. In an ideal world, the new competition will result in both better maps and more informed citizens.

Data journalism

Data journalism took on new importance for society. We tracked its growing influence, from the Knight News Challenge to new research initiatives in Africa, and are continuing to investigate data journalism with a series of interviews and a forthcoming report.

Privacy and security

Privacy and security continued to dominate technology policy discussions in the United States, although copyright, spectrum, patents and Internet governance had significant prominence. While the Supreme Court decided that GPS monitoring constitutes a search under the 4th Amendment, expanded rules for data sharing in the U.S. government raised troubling questions.

In another year that will end without updated baseline privacy legislation from Congress, bills did advance in the U.S. Senate to reform electronic privacy and address location-based technology. After calling for such legislation, the Federal Trade Commission opened an investigation into data brokers.

No “cyber security” bill passed the Senate either, leaving hope that future legislation will balance protections with civil liberties and privacy concerns.

Networked politics

Politics were more wired in Election 2012 than they’d ever been in history, from social media and debates to the growing clout of the Internet. The year started off with the unprecedented wave of networked activism that stopped the progress of the Stop Online Piracy Act (SOPA) and PROTECT-IP Act (PIPA) in Congress.

At year’s end, the jury remains out on whether the Internet will act as a platform for collective action to address societal challenges, from addressing gun violence in the U.S. to a changing climate.

Open data

As open data moves from the information age to the action age, there are significant advances around the globe. As more data becomes available, its practical application has only increased in importance.

After success releasing health care data to fuel innovation and startups, US CTO Todd Park sought to scale open data and agile thinking across the federal government.

While it’s important to be aware of the ambiguity of open government and open data, governments are continuing to move forward globally, with the United Kingdom relaunching Data.gov.uk and, at year’s end, India and the European Commission launching open data platforms. Cities around the world also adopted open data, from Buenos Aires to Berlin to Palo Alto.

In the United States, friendly competition to be the nation’s preeminent digital city emerged between San Francisco, Chicago, Philadelphia and New York. Open data releases became a point of pride. Landmark legislation in New York City and Chicago’s executive order on open data made both cities national leaders.

As the year ends, we’re working to make dollars and sense of the open data economy, explicitly making a connection between releases and economic growth. Look for a report on our research in 2013.

Open government

The world’s largest democracy officially launching an open government data platform was historic. That said, it’s worth reiterating a point I’ve made before: Simply opening up data is not a replacement for a Constitution that enforces a rule of law, free and fair elections, an effective judiciary, decent schools, basic regulatory bodies or civil society — particularly if the data does not relate to meaningful aspects of society. Adopting open data and digital government reforms is not quite the same thing as good government. Beware openwashing in government, as well as in other areas.

On that count, at year’s end, The Economist found that global open government efforts are growing in “scope and clout.” The Open Government Partnership grew, with new leadership, added experts and a finalized review mechanism. The year to come will be a test of the international partnership’s political will.

In the United States, an open government reality check at the federal level showed genuine accomplishments but left many promises only partially fulfilled, with a mixed record on meeting goals that many critics found transparently disappointing. While some of the administration’s transparency failures concern national security — notably, the use of drones overseas — science journalists reported restricted access to administration officials at the Environmental Protection Agency, Food and Drug Administration and Department of Health and Human Services.

Efforts to check transparency promises also found compliance with the Freedom of Information Act lacking. While a new FOIA portal is promising, only six federal agencies were on it by year’s end. The administration’s record on prosecuting whistleblowers has also sent a warning to others considering coming forward regarding waste or abuse in national security.

Despite those challenges, 2012 was a year of continuing progress for open government at the federal level in the United States, with reasons for hope throughout states and cities. Here’s hoping 2013 sees more advances than setbacks in this area.

Coming tomorrow: 14 trends to watch in 2013.


December 06 2012

The United States (Code) is on Github

When Congress launched Congress.gov in beta, they didn’t open the data. This fall, a trio of open government developers took it upon themselves to do what custodians of the U.S. Code and laws in the Library of Congress could have done years ago: published data and scrapers for legislation in Congress from THOMAS.gov in the public domain. The data at github.com/unitedstates is published using an “unlicense” and updated nightly. Credit for releasing this data to the public goes to Sunlight Foundation developer Eric Mill, GovTrack.us founder Josh Tauberer and New York Times developer Derek Willis.

“It would be fantastic if the relevant bodies published this data themselves and made these datasets and scrapers unnecessary,” said Mill, in an email interview. “It would increase the information’s accuracy and timeliness, and probably its breadth. It would certainly save us a lot of work! Until that time, I hope that our approach to this data, based on the joint experience of developers who have each worked with it for years, can model to government what developers who aim to serve the public are actually looking for online.”

If the People’s House is going to become a platform for the people, it will need to release its data to the people. If Congressional leaders want THOMAS.gov to be a platform for members of Congress, legislative staff, civic developers and media, the Library of Congress will need to release structured legislative data. THOMAS is also not updated in real-time, which means that there will continue to be a lag between a bill’s introduction and the nation’s ability to read the bill before a vote.

Until that happens, however, this combination of scraping and open source data publishing offers a way forward for Congressional data to be released to the public, wrote Willis on his personal blog:

Two years ago, there was a round of blog posts touched off by Clay Johnson that asked, “Why shouldn’t there be a GitHub for data?” My own view at the time was that availability of the data wasn’t as much an issue as smart usage and documentation of it: ‘We need to import, prune, massage, convert. It’s how we learn.’

Turns out that GitHub actually makes this easier, and I’ve had a conversion of sorts to the idea of putting data in version control systems that make it easier to view, download and report issues with data … I’m excited to see this repository grow to include not only other congressional information from THOMAS and the new Congress.gov site, but also related data from other sources. That this is already happening only shows me that for common government data this is a great way to go.

In the future, legislation data could be used to show iterations of laws and improve the ability of communities at OpenCongress, POPVOX or CrunchGov to discover and discuss proposals. As Congress incorporates more tablets on the floor during debates, such data could also be used to update legislative dashboards.

The choice to use Github as a platform for government data and scraper code is another significant milestone in a breakout year for Github’s use in government. In January, the British government committed GOV.UK code to Github. NASA, after contributing its first code in January, added 11 code repositories this year. In August, the White House committed code to Github. In September, the Open Gov Foundation open sourced the MADISON crowdsourced legislation platform.

The choice to use Github for this scraper and legislative data, however, presents a new and interesting iteration in the site’s open source story.

“Github is a great fit for this because it’s neutral ground and it’s a welcoming environment for other potential contributors,” wrote Sunlight Labs director Tom Lee, in an email. “Sunlight expects to invest substantial resources in maintaining and improving this codebase, but it’s not ours: we think the data made available by this code belongs to every American. Consequently the project needed to embrace a form that ensures that it will continue to exist, and be free of encumbrances, in a way that’s not dependent on any one organization’s fortunes.”

Mill, an open government developer at Sunlight Labs, shared more perspective in the rest of our email interview, below.

Is this based on the GovTrack.us scraper?

Eric Mill: All three of us have contributed at least one code change to our new THOMAS scraper; the majority of the code was written by me. Some of the code has been taken or adapted from Josh’s work.

The scraper that currently actively populates the information on GovTrack is an older Perl-based scraper. None of that code was used directly in this project. Josh had undertaken an incomplete, experimental rewrite of these scrapers in Python about a year ago (code), but my understanding is it never got to the point of replacing GovTrack’s original Perl scripts.

We used the code from this rewrite in our new scraper, and it was extremely helpful in two ways: providing a roadmap of how THOMAS’ URLs and sitemap work, and parsing meaning out of the text of official actions.

Parsing the meaning out of action text is, I would say, about half the value and work of the project. When you look at a page on GovTrack or OpenCongress and see the timeline of a bill’s life — “Passed House,” “Signed by the President,” etc. — that information is only obtainable by analyzing the order and nature of the sentences of the official actions that THOMAS lists. Sentences are finicky, inconsistent things, and extracting meaning from them is tricky work. Just scraping them out of THOMAS.gov’s HTML is only half the battle. Josh has experience at doing this for GovTrack. The code in which this experience was encapsulated drastically reduced how long it took to create this.
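A highly simplified sketch of that “parsing meaning out of action text” step might look like the following. The real scraper's rules are far more extensive; the two patterns here are only illustrative.

    import re

    # Map a handful of official action sentences to machine-readable event
    # types. These patterns are illustrative, not the scraper's actual rules.
    ACTION_PATTERNS = [
        (re.compile(r"^Passed/agreed to in House", re.I), "house_passage"),
        (re.compile(r"^Signed by President", re.I), "signed"),
    ]

    def classify_action(text):
        """Return an event type for one official action sentence."""
        for pattern, event in ACTION_PATTERNS:
            if pattern.match(text):
                return event
        return "other"

    print(classify_action("Passed/agreed to in House: On passage Passed by recorded vote."))
    print(classify_action("Signed by President."))

Real action sentences vary far more than this, which is why the accumulated experience Mill describes was so valuable.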

How long did this take to build?

Eric Mill: Creating the whole scraper, and the accompanying dataset, was about 4 weeks of work on my part. About half of that time was spent actually scraping — reverse engineering THOMAS’ HTML — and the other half was spent creating the necessary framework, documentation, and general level of rigor for this to be a project that the community can invest in and rely on.

There will certainly be more work to come. THOMAS is shutting down in a year, to be replaced by Congress.gov. As Congress.gov grows to have the same level of data as THOMAS, we’ll gradually transition the scraper to use Congress.gov as its data source.

Was this data online before? What’s new?

Eric Mill: All of the data in this project has existed in an open way at GovTrack.us, which has provided bulk data downloads for years. The Sunlight Foundation and OpenCongress have both created applications based on this data, as have many other people and organizations.

This project was undertaken as a collaboration because Josh and I believed that the data was fundamental enough that it should exist in a public, owner-less commons, and that the code to generate it should be in the same place.

There are other benefits, too. Although the source code to GovTrack’s scrapers has been available, it depends on being embedded in GovTrack’s system, and the use of a database server. It was also written in Perl, a language less widely used today, and produced only XML. This new Python scraper has no other dependencies, runs without a database, and generates both JSON and XML. It can be easily extended to output other data formats.

Finally, everyone who worked on the project has had experience in dealing with legislative information. We were able to use that to make various improvements to how the data is structured and presented that make it easier for developers to use the data quickly and connect it to other data sources.

Searches for bills in Scout use data collected directly from this scraper. What else are people doing with the data?

Eric Mill: Right now, I only know for a fact that the Sunlight Foundation is using the data. GovTrack recently sent an email to its developer list announcing that in the near future, its existing dataset would be deprecated in favor of this new one, so the data should be used in GovTrack before long.

Pleasantly, I’ve found nearly nothing new by switching from GovTrack’s original dataset to this one. GovTrack’s data has always had a high level of quality. So far, the new dataset looks to be as good.

Is it common to host open data on Github?

Eric Mill: Not really. Github’s not designed for large-scale data hosting. This is an experiment to see whether this is a useful place to host it. The primary benefit is that no single person or organization (besides Github) is paying for download bandwidth.

The data is published as a convenience, for people to quickly download for analysis or curiosity. I expect that any person or project that intends to integrate the data into their work on an ongoing basis will do so by using the scraper, not downloading the data repeatedly from Github. It’s not our intent that anyone make their project dependent on the Github download links.

Laudably, Josh Tauberer donated his legislator dataset and converted it to YAML. What’s YAML?

Eric Mill: YAML is a lightweight data format intended to be easy for humans to both read and write. This dataset, unlike the one scraped from THOMAS, is maintained mostly through manual effort. Therefore, the data itself needs to be in source control, it needs to not be scary to look at and it needs to be obvious how to fix or improve it.

What’s in this legislator dataset? What can be done with it?

Eric Mill: The legislator dataset contains information about members of Congress from 1789 to the present day. It is a wealth of vital data for anyone doing any sort of application or analysis of members of Congress. This includes a breakdown of their name, a crosswalk of identifiers on other services, and social media accounts. Crucially, it also includes a member of Congress’ change in party, chamber, and name over time.

For example, it’s a pretty necessary companion to the dataset that our scraper gathers from THOMAS. THOMAS tells you the name of the person who sponsored a bill in 2003, and gives you a THOMAS-specific ID number. But it doesn’t tell you what that person’s party was at the time, or whether the person is still a member of the same chamber now as in 2003 (or whether they’re in office at all). So if you want to ask “how many Republicans sponsored bills in 2003,” or if you’d like to draw in information from outside sources, such as campaign finance information, you will need a dataset like the one that’s been publicly donated here.
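A rough sketch of how that companion data gets used, assuming a simplified record layout with a list of terms carrying start, end, and party; this is not the dataset’s exact schema:

```python
from datetime import date

def party_on(legislator, when):
    """Return the party a legislator belonged to on a given date,
    based on their term history (hypothetical record layout)."""
    for term in legislator["terms"]:
        start = date.fromisoformat(term["start"])
        end = date.fromisoformat(term["end"])
        if start <= when <= end:
            return term["party"]
    return None

# Usage: what party did this sponsor belong to when a bill was introduced in 2003?
sponsor = {
    "name": {"first": "Jane", "last": "Example"},
    "terms": [
        {"start": "2003-01-07", "end": "2005-01-03", "party": "Republican"},
        {"start": "2005-01-04", "end": "2007-01-03", "party": "Democrat"},
    ],
}
print(party_on(sponsor, date(2003, 6, 1)))  # Republican
```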

Sunlight’s API on members of Congress is easily the most prominent API, widely used by people and organizations to build systems that involve legislators. That API’s data is a tiny subset of this new one.

You moved a legal citation extractor and a parser for the U.S. Code into this code. What do they do here?

Eric Mill: The legal citation extractor, called “Citation,” plucks references to the US Code (and other things) out of text. Just about any system that deals with legal documents benefits from discovering links between those documents. For example, I use this project to power US Code searches on Scout, so that the site returns results that cite some piece of the law, regardless of how that citation is formatted. There’s no text-based search, simple or advanced, that would bring back results matching a variety of formats or matching subsections — something dedicated to the arcane craft of citation formats is required.

The citation extractor is built to be easy for others to invest in. It’s a stand-alone tool that can be used through the command line, HTTP, or directly through JavaScript. This makes it suitable for the front-end or back-end, and easy to integrate into a project written in any language. It’s very far from complete, but even now it’s already proven extremely useful at creating powerful features for us that weren’t possible before.
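Citation itself is a JavaScript tool, but the core idea can be illustrated with a toy Python regex; this is only a sketch of the concept, not the project’s actual matching logic, which handles far more formats and subsection structures:

```python
import re

# A toy pattern for U.S. Code citations such as "5 U.S.C. § 552" or "42 USC 1983".
USC_PATTERN = re.compile(
    r"(?P<title>\d+)\s+U\.?\s?S\.?\s?C\.?\s*(?:§+\s*)?(?P<section>\d+[a-z]?)",
    re.IGNORECASE,
)

def extract_usc(text):
    # Return (title, section) pairs for every citation-like match in the text.
    return [(m.group("title"), m.group("section")) for m in USC_PATTERN.finditer(text)]

print(extract_usc("See 5 U.S.C. § 552 and 42 USC 1983 for details."))
# [('5', '552'), ('42', '1983')]
```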

The parser for the U.S. Code itself was written by my colleague Thom Neale. The U.S. Code is published by the government in various formats, but none of them are suitable for easy reuse. The Office of the Law Revision Counsel, which publishes the U.S. Code, is planning on producing a dedicated XML version of the US Code, but they only began the procurement process recently. It could be quite some time before it appears.

Thom’s work parses the “locator code” form of the data, which is a binary format designed for telling GPO’s typesetting machines how to print documents. It is very specialized and very complicated. This parser is still in an early stage and not in use in production anywhere yet. When it’s ready, it’ll produce reliable JSON files containing the law of the United States in a sensible, reusable form.

Does Github’s organization structure make a data commons possible?

Eric Mill: Github deliberately aligns its interests with the open source community, so it is possible to host all of our code and data there for free. Github offers unlimited public repositories, collaborators, bandwidth, and disk space to organizations and users at no charge. They do this while being an extremely successful, profitable business.

On Github, there are two types of accounts: users and organizations. Organizations are independent entities, but no one has to log in as an organization or share a password. Instead, at least one user will be marked as the “owner” of an organization. Ownership can easily change hands or be distributed amongst various users. This means that Josh, Derek, and I can all have equal ownership of the “unitedstates” repositories and data. Any of us can extend that ownership to anyone we want in a simple, secure way, without password sharing.

Github as a company has established both a space and a culture that values the commons. All software development work, from hobbyist to non-profit to corporation, from web to mobile to enterprise, benefits from a foundation of open source code. Github is the best living example of this truth, so it’s not surprising to me that it was the best fit for our work.

Why is this important to the public?

Eric Mill: The work and artifacts of our government should be available in bulk, for easy download, in accessible formats, and without license restrictions. This is a principle that may sound important and obvious to every technologist out there, but it’s rarely the case in practice. When it is, it’s usually a mixed bag. Not every member of the public will be able or want to interact directly with our data or scrapers. That’s fine. Developers are the force multipliers of public information. Every citizen can benefit somehow from what a developer can build with government information.


November 26 2012

Investigating data journalism

Great journalism has always been based on adding context, clarity and compelling storytelling to facts. While the tools have improved, the art is the same: explaining the who, what, where, when and why behind the story. The explosion of data, however, provides new opportunities to think about reporting, analysis and publishing stories.

As you may know, there’s already a Data Journalism Handbook to help journalists get started. (I contributed some commentary to it). Over the next month, I’m going to be investigating the best data journalism tools currently in use and the data-driven business models that are working for news startups. We’ll then publish a report that shares those insights and combines them with our profiles of data journalists.

Why dig deeper? Getting to the heart of what’s hype and what’s actually new and noteworthy is worth doing. I’d like to know, for instance, whether tutorials specifically designed for journalists can be useful, as Joe Brockmeier suggested at ReadWrite. On a broader scale, how many data journalists are working today? How many will be needed? What are the primary tools they rely upon now? What will they need in 2013? Who are the leaders or primary drivers in the area? What are the most notable projects? What organizations are embracing data journalism, and why?

This isn’t a new interest for me, but it’s one I’d like to ground in more research. When I was offered an opportunity to give a talk at the second International Open Government Data Conference at the World Bank this July, I chose to talk about open data journalism and invited practitioners on stage to share what they do. If you watch the talk and the ensuing discussion in the video below, you’ll pick up great insight from the work of the Sunlight Foundation, the experience of Homicide Watch and why the World Bank is focused on open data journalism in developing countries.

The sites and themes that I explored in that talk will be familiar to Radar readers, focusing on the changing dynamic between the people formerly known as the audience and the editors, researchers and reporters who are charged with making sense of the data deluge for the public good. If you’ve watched one of my Ignites or my Berkman Center talk, much of this won’t be new to you, but the short talk should be a good overview of where I think this aspect of data journalism is going and why I think it’s worth paying attention to today.

For instance, at the Open Government Data Conference Bill Allison talked about how open data creates government accountability and reveals political corruption. We heard from Chris Amico, a data journalist who created a platform to help a court reporter tell the story of every homicide in a city. And we heard from Craig Hammer how the World Bank is working to build capacity in media organizations around the world to use data to show citizens how and where borrowed development dollars are being spent on their behalf.

The last point, regarding capacity, is a critical one. Just as McKinsey identified a gap between available analytic talent and the demand created by big data, there is a data science skills gap in journalism. Rapidly expanding troves of data are useless without the skills to analyze them, whatever the context. An overemphasis on technical skills could exclude the best candidates for these jobs, but there will need to be training to build those skills.

This reality hasn’t gone unnoticed by foundations or the academy. In May, the Knight Foundation gave Columbia University $2 million for research to help close the data science skills gap. (I expect to be talking to Emily Bell, Jonathan Stray and the other instructors and students.)

Media organizations must be able to put data to work, a need that was amply demonstrated during Hurricane Sandy, when public open government data feeds became critical infrastructure.

What I’d like to hear from you is what you see working around the world, from the Guardian to ProPublica, and what you’re working on, and where. To kick things off, I’d like to know which organizations are doing the most innovative work in data journalism.

Please weigh in through the comments or drop me a line at alex@oreilly.com or at @digiphile on Twitter.
