April 18 2013

Sprinting toward the future of Jamaica

Creating the conditions for startups to form is now a policy imperative for governments around the world, as Julian Jay Robinson, minister of state in Jamaica’s Ministry of Science, Technology, Energy and Mining, reminded the attendees at the “Developing the Caribbean” conference last week in Kingston, Jamaica.

Robinson said Jamaica is working on deploying wireless broadband access, securing networks and stimulating tech entrepreneurship around the island, a set of priorities that would have sounded of the moment in Washington, Paris, Hong Kong or Bangalore. He also described open access and open data as fundamental parts of democratic governance, explicitly aligning the release of public data with economic development and anti-corruption efforts. Robinson also pledged to help ensure that Jamaica’s open data efforts would be successful, offering a key ally within government to members of civil society.

The interest in adding technical ability and capacity around the Caribbean was sparked by other efforts around the world, particularly Kenya’s open government data efforts. That’s what led the organizers to invite Paul Kukubo to speak about Kenya’s experience, which Robinson noted might be more relevant to Jamaica than that of the global north.

Kukubo, the head of Kenya’s Information, Communication and Technology Board, was a key player in getting the country’s open data initiative off the ground and evangelizing it to developers in Nairobi. At the conference, Kukubo gave Jamaicans two key pieces of advice. First, open data efforts must be aligned with national priorities, from reducing corruption to improving digital services to economic development.

“You can’t do your open data initiative outside of what you’re trying to do for your country,” said Kukubo.

Second, political leadership is essential to success. In Kenya, the president was personally involved in open data, Kukubo said. Now that a new president has been officially elected, however, there are new questions about what happens next, particularly given that pickup in Kenya’s development community hasn’t been as dynamic as officials might have hoped. There’s also a significant issue on the demand-side of open data, with respect to the absence of a Freedom of Information Law in Kenya.

When I asked Kukubo about these issues, he said he expects a Freedom of Information law will be passed this year in Kenya. He also replied that the momentum on open data wasn’t just about the supply side.

“We feel that in the usage side, especially with respect to the developer ecosystem, we haven’t necessarily gotten as much traction from developers using data and interpreting cleverly as we might have wanted to have,” he said. “We’re putting more into that area.”

With respect to leadership, Kukubo pointed out that newly elected Kenyan President Uhuru Kenyatta drove open data release and policy when he was the minister of finance. Kukubo expects him to be very supportive of open data in office.

The development of open data in Jamaica, by way of contrast, has been driven by academia, said professor Maurice McNaughton, director of the Center of Excellence at the Mona School of Business at the University of the West Indies (UWI). The Caribbean Open Institute, for instance, has been working closely with Jamaica’s Rural Agriculture Development Authority (RADA). There are high hopes that releases of more data from RADA and other Jamaican institutions will improve Jamaica’s economy and the effectiveness of its government.

Open data could add $35 million annually to the Jamaican economy, said Damian Cox, director of the Access to Information Unit in the Office of the Prime Minister, citing a United Nations estimate. Cox also explicitly aligned open data with measuring progress toward Millennium Development Goals, positing that increasing the availability of data will enable the civil society, government agencies and the UN to more accurately assess success.

The development of (open) data-driven journalism

Developing the Caribbean focused on the demand side of open data as well, particularly the role of intermediaries in collecting, cleaning, fact checking, and presenting data, matched with necessary narrative and context. That kind of work is precisely what data-driven journalism does, which is why it was one of the major themes of the conference. I was invited to give an overview of data-driven journalism that connected some trends and highlighted the best work in the field.

I’ve written quite a bit about how data-driven journalism is making sense of the world elsewhere, with a report yet to come. What I found in Jamaica is that media there have long since begun experimenting in the field, from the investigative journalism at Panos Caribbean to the relatively recent launch of diGJamaica by the Gleaner Company.

diGJamaica is modeled upon the Jamaican Handbook and includes more than a million pages from The Gleaner newspaper, going back to 1834. The site publishes directories of public entities and public data, including visualizations. It charges for access to the archives.

Legends and legacies

Olympic champion Usain Bolt, photographed in his (fast) car at the UWI/Usain Bolt Track in Mona, Jamaica.

Normally, meeting the fastest man on earth would be the most memorable part of any trip. The moment that left the deepest impression from my journey to the Caribbean, however, came not from encountering Usain Bolt on a run but from within a seminar room on a university campus.

As a member of a panel of judges, I saw dozens of young people present after working for 30 hours at a hackathon at the University of the West Indies. While even the most mature of the working apps was still a prototype, the best of them were squarely focused on issues that affect real Jamaicans: scoring the credit risk of farmers who need bank loans, and collecting and sharing data about produce.

The winning team created a working mobile app that would enable government officials to collect data at farms. While none of the apps are likely to be adopted by the agricultural agency in their current form, or to show up in the Google Play store this week, the experience the teams gained will help them in the future.

As I left the island, the perspective that I’d taken away from trips to Brazil, Moldova and Africa last year was further confirmed: technical talent and creativity can be found everywhere in the world, along with considerable passion to apply design thinking, data and mobile technology to improve the societies people live within. This is innovation that matters, not just clones of popular social networking apps — though the judges saw more than a couple of those ideas flow by as well.

In the years ahead, Jamaican developers will play an important role in media, commerce and government on the island. If attracting young people to engineering and teaching them to code is the long-term legacy of efforts like Developing the Caribbean, it will deserve its own thumbs up from Mr. Bolt. The track to that future looks wide open.

Disclosure: the cost of my travel to Jamaica was paid for by the organizers of the Developing the Caribbean conference.

March 19 2013

The City of Chicago wants you to fork its data on GitHub

GitHub has been gaining new prominence as the use of open source software in government grows.

Earlier this month, I included a few thoughts from Chicago’s chief information officer, Brett Goldstein, about the city’s use of GitHub, in a piece exploring GitHub’s role in government.

While Goldstein says that Chicago’s open data portal will remain the primary means through which Chicago releases public sector data, publishing open data on GitHub is an experiment that will be interesting to watch, in terms of whether it affects reuse or collaboration around it.

In a followup email, Goldstein, who also serves as Chicago’s chief data officer, shared more about why the city is on GitHub and what they’re learning. Our discussion follows.

The City of Chicago is on GitHub.

What has your experience on GitHub been like to date?

Brett Goldstein: It has been a positive experience so far. Our local developer community is very excited by the MIT License on these datasets, and we have received positive reactions from outside of Chicago as well.

This is a new experiment for us, so we are learning along with the community. For instance, GitHub was not built to be a data portal, so it was difficult to upload our buildings dataset, which was over 2GB. We are rethinking how to deploy that data more efficiently.

Why use GitHub, as opposed to some other data repository?

Brett Goldstein: GitHub provides the ability to download, fork, make pull requests, and merge changes back to the original data. This is a new experiment, where we can see if it’s possible to crowdsource better data. GitHub provides the necessary functionality. We already had a presence on GitHub, so it was a natural extension to that as a complement to our existing data portal.
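
To make that concrete, here is a minimal sketch of the kind of check a contributor might run on a forked copy of a city GeoJSON dataset before opening a pull request. The repository layout and file name below are hypothetical, not Chicago’s actual structure.

```python
import json

# Hypothetical path inside a fork of a city data repository; Chicago's actual
# repository layout and file names may differ.
DATASET = "chicago-buildings/buildings.geojson"

def validate_geojson(path):
    """Run basic structural checks before proposing a data fix in a pull request."""
    with open(path) as f:
        data = json.load(f)  # the file must at least be well-formed JSON
    assert data.get("type") == "FeatureCollection", "expected a GeoJSON FeatureCollection"
    for feature in data.get("features", []):
        assert feature.get("type") == "Feature"
        assert feature.get("geometry") is not None, "every feature needs a geometry"
    return len(data.get("features", []))

if __name__ == "__main__":
    count = validate_geojson(DATASET)
    print(f"{count} features pass basic checks; ready to commit and open a pull request.")
```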

Why does it make sense for the city to use or publish open source code?

Brett Goldstein: Three reasons. First, it solves issues with incorporating data in open source and proprietary projects. The city’s data is available to be used publicly, and this step removes any remaining licensing barriers. These datasets were targeted because they are incredibly useful in the daily life of residents and visitors to Chicago. They are the most likely to be used in outside projects. We hope this data can be incorporated into existing projects. We also hope that developers will feel more comfortable developing applications or services based on an open source license.

Second, it fits within the city’s ethos and vision for data. These datasets are items that are visible in daily life — transportation and buildings. It is not proprietary data and should be open, editable, and usable by the public.

Third, we engage in projects like this because they ultimately benefit the people of Chicago. Not only do our residents get better apps when we do what we can to support a more creative and vibrant developer community, they also will get a smarter and more nimble government using tools that are created by sharing data.

We open source many of our projects because we feel the methodology and data will benefit other municipalities.

Is anyone pulling it or collaborating with you? Have you used that code? Would you, if it happened?

Brett Goldstein: We collaborated with Ian Dees, who is a significant contributor to OpenStreetMaps, to launch this idea. We anticipate that buildings data will be integrated in OpenStreetMaps now that it’s available with a compatible license.

We have had 21 forks and a handful of pull requests fixing some issues in our README. We have not had a pull request fixing the actual data.

We do intend to merge requests to fix the data and are working on our internal process to review, reject, and merge requests. This is an exciting experiment for us, really at the forefront of what governments are doing, and we are learning along with the community as well.

Is anyone using the open data that wasn’t before, now that it’s JSON?

Brett Goldstein: We seem to be reaching a new audience with posting data on GitHub, working in tandem with our heavily trafficked data portal. A core goal of this administration is to make data open and available. We have one of the most ambitious open data programs in the country. Our portal has over 400 datasets that are machine readable, downloadable and searchable. Since it’s hosted on Socrata, basic analysis of the data is possible as well.
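
Since the portal runs on Socrata, each dataset there is also reachable as JSON through the Socrata Open Data API. Here is a minimal sketch; the dataset identifier is a placeholder, so substitute the ID shown on a real dataset’s page at data.cityofchicago.org.

```python
import json
import urllib.request

# "abcd-1234" is a placeholder Socrata dataset ID, not a real Chicago resource;
# every dataset on the portal lists its own ID. $limit caps the rows returned.
URL = "https://data.cityofchicago.org/resource/abcd-1234.json?$limit=5"

with urllib.request.urlopen(URL) as response:
    rows = json.load(response)  # the API returns a JSON array of records

for row in rows:
    print(row)
```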

January 17 2013

Yelp partners with NYC and SF on restaurant inspection data

One of the key notions in my “Government as a Platform” advocacy has been that there are other ways to partner with the private sector besides hiring contractors and buying technology. One of the best of these is to provide data that can be used by the private sector to build or enrich their own citizen-facing services. Yes, the government runs a weather website but it’s more important that data from government weather satellites shows up on the Weather Channel, your local TV and radio stations, Google and Bing weather feeds, and so on. They already have more eyeballs and ears combined than the government could or should possibly acquire for its own website.

That’s why I’m so excited to see a joint effort by New York City, San Francisco, and Yelp to incorporate government health inspection data into Yelp reviews. I was involved in some early discussions and made some introductions, and have been delighted to see the project take shape.

My biggest contribution was to point to GTFS as a model. Bibiana McHugh at the city of Portland’s TriMet transit agency reached out to Google, Bing, and others with the question: “If we came up with a standard format for transit schedules, could you use it?” Google Transit was the result — a service that has spread to many other U.S. cities. When you rejoice in the convenience of getting transit timetables on your phone, remember to thank Portland officials as well as Google.
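
Part of what made GTFS so adoptable is its simplicity: a zip file of plain CSV tables (stops.txt, routes.txt, trips.txt, stop_times.txt) with agreed-upon column names. Here is a minimal sketch of reading the stops table from an extracted feed; the file path is illustrative.

```python
import csv

# Path to stops.txt from an extracted GTFS feed (TriMet publishes one, for example);
# the path here is illustrative.
STOPS_FILE = "gtfs/stops.txt"

with open(STOPS_FILE, newline="", encoding="utf-8-sig") as f:
    for stop in csv.DictReader(f):
        # stop_id, stop_name, stop_lat and stop_lon are standard GTFS columns.
        print(stop["stop_id"], stop["stop_name"], stop["stop_lat"], stop["stop_lon"])
```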

In a similar way, Yelp, New York, and San Francisco came up with a data format for health inspection data. The specification is at http://yelp.com/healthscores. It will reportedly be announced at the US Conference of Mayors with San Francisco Mayor Ed Lee today.
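
The format follows the GTFS pattern: a few flat CSV files — businesses, inspections, violations — joined on a shared business identifier. The sketch below approximates that structure; the exact file and column names should be checked against the specification at yelp.com/healthscores.

```python
import csv
from collections import defaultdict

# File and column names here approximate the health inspection spec; consult
# http://yelp.com/healthscores for the authoritative definitions.
businesses = {}
with open("businesses.csv", newline="") as f:
    for row in csv.DictReader(f):
        businesses[row["business_id"]] = row["name"]

scores = defaultdict(list)
with open("inspections.csv", newline="") as f:
    for row in csv.DictReader(f):
        scores[row["business_id"]].append(row["score"])

for business_id, name in businesses.items():
    if scores[business_id]:
        # assumes inspection rows are ordered oldest to newest
        print(f"{name}: latest inspection score {scores[business_id][-1]}")
```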

Code for America built a site for other municipalities to pledge support. I’d also love to see support in other local restaurant review services from companies like Foursquare, Google, Microsoft, and Yahoo!  This is, as Chris Anderson of TED likes to say, “an idea worth spreading.”

December 26 2012

Big, open and more networked than ever: 10 trends from 2012

In 2012, technology-accelerated change around the world was driven by the wave of social media, data and mobile devices. In this year in review, we look back at some of the stories that mattered here at Radar and look ahead to what’s in store for 2013.

Below, you’ll find 10 trends that held my interest in 2012. This is by no means a comprehensive account of “everything that mattered in the past year” — try The Economist’s account of the world in 2012 or The Atlantic’s 2012 in review or Popular Science’s “year in ideas” if you’re hungry for that perspective — but I hope you’ll find something new to think about as 2013 draws near.

Social media

Social media wasn’t new in 2012, but it was bigger and more mainstream than ever. There were some firsts, from the first Presidential “Ask Me Anything” on Reddit to the first White House Google Hangout on Google Plus to presidential #debates to the first billion-user social network. The election season had an unprecedented social and digital component, from those hyperwired debates to a presidential campaign built like a startup. Expect even more blogging, tweeting, tumbling, streaming, Liking and pinning in 2013, even if it leaves us searching for context.

Open source in government

Open source software made more inroads in the federal government, from a notable policy at the Consumer Financial Protection Bureau to more acceptance in the military.

The White House made its first commits on GitHub, including code for its mobile apps and e-petition platform, where President Obama responded personally to an e-petition for the first time. The House Oversight Committee’s crowdsourced legislative platform also went on GitHub. At year’s end, the United States (code) was on GitHub.

Responsive design

According to deputy technical lead Jeremy Vanderlan, the new AIDS.gov, launched in June, was the first full-site implementation of responsive web design for a federal government domain. They weren’t the first to automatically adapt how a website is displayed for the device a visitor is using — you can see next-generation web design at open.nasa.gov or in the way that fcc.gov/live optimizes to provide video to different mobile devices — but this was a genuine milestone for the feds online. By year’s end, Congress had also become responsive, at least with respect to its website, with a new beta at Congress.gov.

Free speech online

Is there free speech on the Internet? As Rebecca MacKinnon, Ethan Zuckerman and others have been explaining for years, what we think of as the new “public square online” is complicated by the fact that these platforms for free expression are owned and operated by private companies. MacKinnon explored these issues in “Consent of the Networked,” one of the best technology policy books of the year. In 2012, “Twitter censorship” and the Terms of Service for social networking services caused many more people to suggest a digital Bill of Rights, although “Internet freedom” is an idea that varies with the beholder.

Open mapping

On January 9th, I wondered whether 2012 would be “the year of the open map.” I started reporting on digital maps made with powerful new software and open data last winter. The prediction was partially borne out, from Foursquare’s adoption of OpenStreetMap to StreetEasy’s move away from Google Maps to new investments in OpenStreetMap. In response to the shift, Google slashed its price for using the Google Maps API by 88%. In an ideal world, the new competition will result in both better maps and more informed citizens.

Data journalism

Data journalism took on new importance for society. We tracked its growing influence, from the Knight News Challenge to new research initiatives in Africa, and are continuing to investigate data journalism with a series of interviews and a forthcoming report.

Privacy and security

Privacy and security continued to dominate technology policy discussions in the United States, although copyright, spectrum, patents and Internet governance also featured prominently. While the Supreme Court decided that GPS monitoring constitutes a search under the 4th Amendment, expanded rules for data sharing in the U.S. government raised troubling questions.

In another year that will end without updated baseline privacy legislation from Congress, bills did advance in the U.S. Senate to reform electronic privacy and address location-based technology. After calling for such legislation, the Federal Trade Commission opened an investigation into data brokers.

No “cyber security” bill passed the Senate either, leaving hope that future legislation will balance protections with civil liberties and privacy concerns.

Networked politics

Politics were more wired in Election 2012 than they’d ever been in history, from social media and debates to the growing clout of the Internet. The year started off with the unprecedented wave of networked activism that stopped the progress of the Stop Online Piracy Act (SOPA) and PROTECT-IP Act (PIPA) in Congress.

At year’s end, the jury remains out on whether the Internet will act as a platform for collective action to address societal challenges, from addressing gun violence in the U.S. to a changing climate.

Open data

As open data moves from the information age to the action age, there are significant advances around the globe. As more data becomes available, its practical application has only increased in importance.

After success releasing health care data to fuel innovation and startups, US CTO Todd Park sought to scale open data and agile thinking across the federal government.

While it’s important to be aware of the ambiguity of open government and open data, governments are continuing to move forward globally, with the United Kingdom relaunching Data.gov.uk and, at year’s end, India and the European Commission launching open data platforms. Cities around the world also adopted open data, from Buenos Aires to Berlin to Palo Alto.

In the United States, friendly competition to be the nation’s preeminent digital city emerged between San Francisco, Chicago, Philadelphia and New York. Open data releases became a point of pride. Landmark legislation in New York City and Chicago’s executive order on open data made both cities national leaders.

As the year ends, we’re working to make dollars and sense of the open data economy, explicitly making a connection between releases and economic growth. Look for a report on our research in 2013.

Open government

The world’s largest democracy officially launching an open government data platform was historic. That said, it’s worth reiterating a point I’ve made before: Simply opening up data is not a replacement for a Constitution that enforces a rule of law, free and fair elections, an effective judiciary, decent schools, basic regulatory bodies or civil society — particularly if the data does not relate to meaningful aspects of society. Adopting open data and digital government reforms is not quite the same thing as good government. Beware openwashing in government, as well as in other areas.

On that count, at year’s end, The Economist found that global open government efforts are growing in “scope and clout.” The Open Government Partnership grew, with new leadership, added experts and a finalized review mechanism. The year to come will be a test of the international partnership’s political will.

In the United States, an open government reality check at the federal level showed genuine accomplishments, but it left many promises only partially fulfilled, with a mixed record on meeting goals that many critics found transparently disappointing. While some of the administration’s transparency failures concern national security — notably, the use of drones overseas — science journalists reported restricted access to administration officials at the Environmental Protection Agency, Food and Drug Administration and Department of Health and Human Services.

Efforts to check transparency promises also found compliance with the Freedom of Information Act lacking. While a new FOIA portal is promising, only six federal agencies were on it by year’s end. The administration’s record on prosecuting whistleblowers has also sent a warning to others considering coming forward about waste or abuse in the national security apparatus.

Despite those challenges, 2012 was a year of continuing progress for open government at the federal level in the United States, with reasons for hope throughout states and cities. Here’s hoping 2013 sees more advances than setbacks in this area.

Coming tomorrow: 14 trends to watch in 2013.

December 11 2012

Making dollars and sense of the open data economy

Over the past several years, I’ve been writing about how government data is moving into the marketplace, underpinning ideas, products and services. Open government data and application programming interfaces to distribute it, more commonly known as APIs, increasingly look like fundamental public infrastructure for digital government in the 21st century.

What I’m looking for now is more examples of startups and businesses that have been created using open data or that would not be able to continue operations without it. If big data is a strategic resource, it’s important to understand how and where organizations are using it for public good, civic utility and economic benefit.

Sometimes government data has been proactively released, like the federal government’s work to revolutionize the health care industry by making health data as useful as weather data or New York City’s approach to becoming a data platform.

In other cases, startups like Panjiva or BrightScope have liberated government data through Freedom of Information Act requests and automated means. By doing so, they’ve helped the American people and global customers understand the supply chain, the fees associated with 401(k) plans and the history of financial advisors.

I’ve hypothesized that open data will have an overall effect on the economy akin to that of open source and small business. Gartner’s research has posited that open data creates value in the public and private sector. If government acts as a platform to enable people inside and outside government to innovate on top of it, what are the outcomes?

Over the past four years, the world has heard a rising chorus for raw data from voices like the creator of the World Wide Web, Tim Berners-Lee, and the chief technology officer of the United States, Todd Park. Park, in particular, has been working to scale open data across the federal government as the nation’s “entrepreneur in residence.”

McKinsey estimated the annual economic value of big, open, liquid health data at some $350 billion. While that number is eye-opening, which companies and startups stand to change health care using open health data?

Some examples are clear, from mobile apps like iTriage (now owned by Aetna) to Castlight, but they aren’t sufficient to understand what’s happening out there.

Other promising startups are in the consumer finance space, where so-called “smart disclosure” initiatives are enabling people to put their personal data to use. Startups like Billshrink.com and HelloWallet are now helping people make smarter financial decisions.

I know there are more stories out there, and in sectors beyond health care and consumer finance — including transit, energy, education and media. Over the next several months, I’ll be identifying and profiling more civic startups, such as those from the first class in the Code for America accelerator, like Captricity, to specialized search engines, like Zillow, Panjiva and DataMarket.

In the course of that work, I hope to answer some big questions. What are the sustainable business models that successful civic startups are using, whether they rely on legislative data or other reuse of public sector information? What are the real costs associated with opening up government data to make it usable, both for government and for entrepreneurs? How do those costs balance against the value created? Which datasets, at the federal, state or local levels, are the most valuable? Are they open and usable? If so, who’s using them and to what effect? If not, why not?

At the end of this particular project, in February, we’ll publish a report on what I’ve found. In it, I hope to be able to share some answers to several core questions on the topic. Where I need your help is in identifying new startups that are using or consuming government data, or in highlighting how existing companies use it in their operations, goods or services. Who is doing the most interesting work — and where? If you have research and evidence to share on the questions I posed above, feel free to chime in on that count as well.

Please weigh in through the comments or drop me a line at alex@oreilly.com or at @digiphile on Twitter.

October 19 2012

San Francisco looks to tap into the open data economy

As interest in open data continues to grow around the world, cities have become laboratories for participatory democracy. They’re also ground zero for new experiments in spawning civic startups that deliver city services or enable new relationships between the people and city government. San Francisco was one of the first municipalities in the United States to embrace the city as a platform paradigm in 2009, with the launch of an open data platform.

Years later, the city government is pushing to use its open data to accelerate economic development. On Monday, San Francisco announced revised open data legislation to enable that change and highlighted civic entrepreneurs who are putting the city’s data to work in new mobile apps.

City staff have already published the revised open data legislation on GitHub. (If other cities want to “fork” it, clone away.) David Chiu, the chairman of the San Francisco Board of Supervisors, the city’s legislative body, introduced the new version on Monday and submitted it on Tuesday. A vote is expected before the end of the year.

Speaking at the offices of the Hatchery in San Francisco, Chiu observed that, by and large, the data that San Francisco has put out showed the city in a positive light. In the future, he suggested, that should change. Chiu challenged the city and the smartest citizens of San Francisco to release more data, figure out where the city could take risks, be more entrepreneurial and use data to hold the city accountable. In his remarks, he said that San Francisco is working on open budgeting but is still months away from getting the data that they need.

Rise of the CDO

This new version of the open data legislation will create a chief data officer (CDO) position, assign coordinators for open data in each city department, and make it clear in procurement language that the city owns data and retains access to it.

“Timelines, mandates and especially the part about getting them to inventory what data they collect are all really good,” said Luke Fretwell, founder of Govfresh, which covers open government in San Francisco. “It’s important that’s in place. Otherwise, there’s no way to be accountable. Previous directives didn’t do it.”

The city’s new CDO will “be responsible for sharing city data with the public, facilitating the sharing of information between City departments, and analyzing how data sets can be used to improve city decision making,” according to the revised legislation.

In creating a CDO, San Francisco is running a play from the open data playbooks of Chicago and Philadelphia. (San Francisco’s new CDO will be a member of the mayor’s staff in the budget office.) Moreover, the growth of CDOs around the country confirms the newfound importance of civic data in cities. If open government data is to be a strategic asset that can be developed for the public good, civic utility and economic value, it follows that it needs better stewards.

Assigning a coordinator in each department is also an acknowledgement that open data consumers need a point of contact and accountability. In theory, this could help create better feedback loops between the city and the cohort of civic entrepreneurs that this policy is aimed at stimulating.

Who owns the data?

San Francisco’s experience with NextBus and a conflict over NextMuni real-time data is a notable case study for other cities and states that are considering similar policies.

The revised legislation directs the Committee on Information Technology (COIT) to, within 60 days from the passage of the legislation, enact “rules for including open data requirements in applicable City contracts and standard contract provisions that promote the City’s open data policies, including, where appropriate, provisions to ensure that the City retains ownership of City data and the ability to post the data on data.sfgov.org or make it available through other means.”

That language makes it clear that it’s the city that owns city data, not a private company. That’s in line with a principle that open government data is a public good that should be available to the public, not locked up in a proprietary format or a for-pay database. There’s some nuance to the issue, in terms of thinking through what rights a private company that invests in acquiring and cleaning up government data holds, but the basic principle that the public should have access to public data is sound. The procurement practices in place will mean that any newly purchased system that captures structured data must have a public API.
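
As an illustration of what that procurement requirement implies in practice (this is my sketch, not language from the legislation), a “public API” over structured data can be as small as a read-only JSON endpoint sitting in front of whatever a vendor’s system captures:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative records standing in for whatever structured data a procured system collects.
RECORDS = [
    {"permit_id": "2012-001", "status": "issued"},
    {"permit_id": "2012-002", "status": "pending"},
]

class OpenDataHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/records":
            body = json.dumps(RECORDS).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Serves http://localhost:8000/api/records until interrupted.
    HTTPServer(("", 8000), OpenDataHandler).serve_forever()
```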

Putting open data to work

Speaking at the Hatchery on Monday, Mayor Ed Lee highlighted three projects that each showcase open data put to use. The new Rec & Park app (iOS download), built by San Francisco-based startup Appallicious, enables citizens to find trails, dog parks, playgrounds and other recreational resources on a mobile device. “Outside” (iOS download), from San Francisco-based 100plus, encourages users to complete “healthy missions” in their neighborhoods. The third project, from mapping giant Esri, is a beautiful web-based visualization of San Francisco’s urban growth based upon open data from San Francisco’s planning departments.

The power of prediction

Over the past three years, transparency, accountability, cost savings and mobile apps have constituted much of the rationale for open data in cities. Now, San Francisco is renewing its pitch for the role of open data in job creation, alongside increased efficiency and improved services.

Jon Walton, San Francisco’s chief information officer (CIO), identified two next steps for San Francisco in an interview earlier this year: working with other cities to create a federated model (now online at cities.data.gov) and using its own data internally to identify and solve issues. (San Francisco and cities everywhere will benefit from looking to New York City’s work with predictive data analytics.)

“We’re thinking about using data behind the firewalls,” said Walton. “We want to give people a graduated approach, in terms of whether they want to share data for themselves, to a department, to the city, or worldwide.”

On that count, it’s notable that Mayor Lee is now publicly encouraging more data sharing between private companies that are collecting data in San Francisco. As TechCrunch reported, the San Francisco government quietly passed a new milestone when it added to its open data platform private-sector datasets on pedestrian and traffic movement collected by Motionloft.

“This gives the city a new metric on when and where congestion happens, and how many pedestrians and vehicles indicate a slowdown will occur,” said Motionloft CEO Jon Mills, in an interview.

Mills sees opportunities ahead to apply predictive data analytics to life and death situations by providing geospatial intelligence for first responders in the city.

“We go even further when police and fire data are brought in to show the relation between emergency situations and our data,” he said. “What patterns cause emergencies in different neighborhoods or blocks? We’ll know, and the city will be able to avoid many horrible situations.”

Such data-sharing could have a real impact on department bottom lines: while “Twitter311” created a lot of buzz in the social media world, access to real-time transit data is what is estimated to have saved San Francisco more than $1 million a year by reducing the volume of San Francisco 311 calls by 21.7%.
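
The back-of-the-envelope math behind a figure like that is straightforward, though the inputs below are my own illustrative assumptions rather than official city numbers:

```python
# Illustrative assumptions, not official San Francisco figures.
calls_per_year = 2_500_000   # hypothetical annual 311 call volume
cost_per_call = 2.00         # hypothetical fully loaded cost per call, in dollars
reduction = 0.217            # the reported 21.7% reduction in call volume

avoided_calls = calls_per_year * reduction
savings = avoided_calls * cost_per_call
print(f"~{avoided_calls:,.0f} avoided calls, roughly ${savings:,.0f} saved per year")
```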

Open data visualization can also enable public servants to understand how city residents are interacting and living in an urban area. For instance, a map of San Francisco pedestrian injuries shows high-injury corridors that merit more attention.

Open data and crowdsourcing will not solve all IT ills

While San Francisco was an early adopter of open data, that investment hasn’t changed an underlying reality: the city government remains burdened by a legacy of dysfunctional tech infrastructure, as detailed in a report issued in August 2012 by the City and County of San Francisco.

“San Francisco’s city-wide technology governing structure is ineffective and poorly organized, hampered by a hands-off Mayor, a weak Committee on Information Technology, an unreliable Department of Technology, and a departmentalized culture that only reinforces the City’s technological ineffectiveness,” state the report’s authors.

San Francisco government has embraced technologically progressive laws and rhetoric, but hasn’t always followed through on them, from setting deadlines to reforming human resources, code sharing or procurement.

“Departments with budgets in the tens of millions of dollars — including the very agency tasked with policing government ethics — still have miles to go,” commented Gov 2.0 advocate and former San Francisco government staffer Adriel Hampton in an interview earlier this year.

Hampton, who has turned his advocacy to legal standards for open data in California and to working at Nationbuilder, a campaign software startup, says that San Francisco has used technology “very poorly” over the past decade. While he credited the city’s efforts in mobile government and recent progress on open data, the larger system is plagued with problems that are endemic in government IT.

Hampton said the city’s e-government efforts largely remain in silos. “Lots of departments have e-services, but there has been no significant progress in integrating processes across departments, and some agencies are doing great while others are a mess,” commented Hampton. “Want to do business in SF? Here’s a sea of PDFs.”

The long-standing issues here go beyond policy, in his view. “San Francisco has a very fragmented IT structure, where the CIO doesn’t have real authority, and proven inability to deliver on multi-departmental IT projects,” he said. As an example, Hampton pointed to San Francisco’s Justice Information Tracking System, a $25 million, 10-year project that has made some progress, but still has not been delivered.

“The City is very good at creating feel-good requirements for its vendors that simply result in compliant companies marking up and reselling everything from hardware to IT software and services,” he commented. “This makes for not only higher costs and bureaucratic waste, but huge openings for fraud. Contracting reform was the number one issue identified in the ImproveSF employee ideation exercise in 2010, but it sure didn’t make the press release.”

Hampton sees the need for two major reforms to keep San Francisco on a path to progress: empowering the CIO position with more direct authority over departmental IT projects, and reforming how San Francisco procures technology, an issue he says affects all other parts of the IT landscape. The reason city IT is so bad, he says, is that it’s run by a 13-member council. “[The] poor CIO’s hardly got a shot.”

All that said, Hampton gives David Chiu and San Francisco city government high marks for their recent actions. “Bringing in Socrata to power the open data portal is a solid move and shows commitment to executing on the open data principle,” he said.

While catalyzing more civic entrepreneurship is important, creating enduring structural change in how San Francisco uses technology will require improving how the city government collects, stores, consumes and releases data, along with how it procures, governs and builds upon technology.

On that count, Chicago’s experience may be relevant. Efforts to open government data there have led to both progress and direction, as Chicago CTO John Tolva blogged in January:

“Open data and its analysis are the basis of our permission to interject the following questions into policy debate: How can we quantify the subject-matter underlying a given decision? How can we parse the vital signs of our city to guide our policymaking? … It isn’t just app competitions and civic altruism that prompts developers to create applications from government data. 2011 was the year when it became clear that there’s a new kind of startup ecosystem taking root on the edges of government. Open data is increasingly seen as a foundation for new businesses built using open source technologies, agile development methods, and competitive pricing. High-profile failures of enterprise technology initiatives and the acute budget and resource constraints inside government only make this more appealing.”

Open data and job creation?

While realizing internal efficiencies and cost savings are key requirements for city CIOs, they don’t hold the political cachet of new jobs and startups, particularly in an election year. San Francisco is now explicitly connecting its release of open data to jobs.

“San Francisco’s open data policies are creating jobs, improving our city and making it easier for residents and visitors to communicate with government,” commented Mayor Lee, via email.

Lee is optimistic about the future, too: “I know that, at the heart of this data, there will be a lot more jobs created,” he said on Monday at the Hatchery.

Open data’s potential for job creation is also complemented by its role as a raw material for existing businesses. “This legislation creates more opportunities for the Esri community to create data-driven decision products,” said Bronwyn Agrios, a project manager at Esri, in an interview.

Esri, however, as an established cloud mapping giant, is in a different position than startups enabled by open data. Communications strategist Brian Purchia, the former new media director for former San Francisco Mayor Gavin Newsom, points to Appallicious.

Appallicious “would not have been possible without [San Francisco's] open data efforts,” said Purchia. “They have hired about 10 folks and are looking to expand to other cities.”

The startup’s software drives the city’s new Rec & Park app, including the potential to enable mobile transactions in the next iteration.

“Motionloft will absolutely grow from our involvement in San Francisco open data,” said Motionloft CEO Mills. “By providing some great data and tools to the city of San Francisco, it enables Motionloft to develop solutions for other cities and government agencies. We’ll be hiring developers, sales people, and data experts to keep up with our plans to grow this nationwide, and internationally.”

The next big question for these startups, as with so many others in nearby Silicon Valley, is whether their initial successes can scale. For that to happen for startups that depend upon government data, other cities will not only need to open up more data, they’ll need to standardize it.

Motionloft, at least, has already moved beyond the Bay Area, although other cities haven’t incorporated its data yet. Esri, as a major enterprise provider of proprietary software to local governments, has some skin in this game.

“City governments are typically using Esri software in some capacity,” said Agrios. “It will certainly be interesting to see how geo data standards emerge given the rapid involvement of civic startups eagerly consuming city data. Location-aware technologists on both sides of the fence, private and public, will need to work together to figure this out.”

If the marketplace for civic applications based upon open data develops further, it could help with a key issue that has dogged the results of city app contests: sustainability. It could also help with a huge problem for city governments: the cost of providing e-services to more mobile residents as budgets continue to tighten.

San Francisco CIO Walton sees an even bigger opportunity for the growth of civic apps that go far beyond the Bay Area, if cities can coordinate their efforts.

“There’s lots of potential here,” Walton said. “The challenge is replicating successes like Open311 in other verticals. If you look at the grand scale of time, we’re just getting started. For instance, I use Nextbus, an open source app that uses San Francisco’s open data … If I have Nextbus on my phone, when I get off a plane in Chicago or New York City, I want to be able to use it there, too. I think we can achieve that by working together.”

If a national movement toward open data and civic apps gathers more momentum, perhaps we’ll solve a perplexing problem, mused Walton.

“In a sense, we have transferred the intellectual property for apps to the public,” he said. “On one hand, that’s great, but I’m always concerned about what happens when an app stops working. By creating data standards and making apps portable, we will create enough users so that there’s enough community to support an application.”

September 20 2012

Congress launches Congress.gov in beta, doesn’t open the data

The Library of Congress is now more responsive — at least when it comes to web design. Today, the nation’s repository for its laws launched a new beta website at Congress.gov and announced that it would eventually replace Thomas.gov, the 17-year-old website that represented one of the first significant forays online for Congress. The new website will educate the public about the lawmaking process and serve people looking for legislative information on their mobile devices, but it falls short of the full promise of embracing the power of the Internet. (More on that later.)

Tapping into a growing trend in government new media, the new Congress.gov features responsive design, adapting to desktop, tablet or smartphone screens. It’s also search-centric, offering Boolean search and, in an acknowledgement that most of its visitors show up looking for information, putting a search field front and center in the interface. The site includes member profiles for U.S. Senators and Representatives, with associated legislative work. In a nod to a mainstay of social media and media websites, the new Congress.gov also has a “most viewed bills” list that lets visitors see at a glance what laws or proposals are gathering interest online. (You can download a fact sheet on all the changes as a PDF.)

On the one hand, the new Congress.gov is a dramatic update to a site that desperately needed one, particularly in a historic moment where citizens are increasingly connecting to the Internet (and one another) through their mobile devices.

On the other hand, the new Congress.gov beta has yet to realize the potential of Congress publishing bulk open legislative data. There is no application programming interface (API) for open government developers to build upon. In many ways, the new Congress.gov replicates what was already available to the public at sites like Govtrack.us and OpenCongress.org.
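
For comparison, GovTrack already offered programmatic access to legislative data at the time. The sketch below is based on my understanding of its v2 JSON API; the endpoint, parameters and field names should be verified against GovTrack’s developer documentation.

```python
import json
import urllib.request

# Endpoint and parameters reflect GovTrack's v2 JSON API as I understand it;
# verify against GovTrack's developer documentation before relying on them.
URL = "https://www.govtrack.us/api/v2/bill?congress=112&limit=3"

with urllib.request.urlopen(URL) as response:
    payload = json.load(response)

for bill in payload.get("objects", []):
    # field names are assumptions about the API's bill objects
    print(bill.get("display_number"), "-", bill.get("title"))
```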

In response to my tweets about the site, former law librarian Meg Lulofs Kuhagan (@librarylulu) noted on Twitter that there’s “no data whatsoever, just window dressing” in the new site — but that “it looks good on my phone. More #opengov if you have a smartphone.”

Aaron E. Myers, the director of new media for Senate Majority Leader Harry Reid, commented on Twitter that legislative data is a “tough nut to crack,” with the text of amendments, SCOTUS votes and treaties missing from the new Congress.gov. In reply, Chris Carlson, the creative director for the Library of Congress, tweeted that that information is coming soon and that all the data that is currently in Thomas.gov will be available on Congress.gov.

Emi Kolawole, who reviewed the new Congress.gov for the Washington Post, reported that more information will be coming to the site during its beta, including the categories Myers cited as well as the Congressional Record and Index. Here’s hoping that Congress decides to publish all of its valuable Congressional Research Service reports, too. Currently, the public has to turn to OpenCRS.com to access that research.

Carlson was justifiably proud of the beta of Congress.gov: “The new site has clean URLs, powerful search, member pages, clean design,” he tweeted. “This will provide access to so many more people who only have a phone for internet.”

While the new Congress.gov is well designed and has the potential to lead to more informed citizens, the choice to build a new website versus release the data disappointed some open government advocates.

“Another hilarious/clueless misallocation of resources,” commented David Moore, co-founder of OpenCongress. “First liberate bulk open gov data; then open API; then website.”

“What’s noticeable about this evolving beta website, besides the major improvements in how people can search and understand legislative developments, is what’s still missing: public comment on the design process and computer-friendly bulk access to the underlying data,” wrote Daniel Schuman, legislative counsel for the Sunlight Foundation. “We hope that Congress will now deeply engage with the public on the design and specifications process and make sure that legislative information is available in ways that most encourage analysis and reuse.”

Kolawole asked Congressional officials about bulk data access and an API and heard that the capacity is there but the approval is not. “They said the system could handle it, but they haven’t received congressional auth. to do it yet,” she tweeted.

Vision and bipartisan support for open government on this issue does exist among Congressional leadership. There has been progress on this front in the 112th Congress: the U.S. House started publishing machine-readable legislative data at docs.house.gov this past January.

“Making legislative data easily available in machine-readable formats is a big victory for open government, and another example of the new majority keeping its pledge to make Congress more open and accountable,” said Speaker of the House John Boehner.

Last December, House Minority Whip Steny Hoyer commented on how technology is affecting Congress, his caucus and open government in the executive branch:

For Congress, there is still a lot of work to be done, and we have a duty to make the legislative process as open and accessible as possible. One thing we could do is make THOMAS.gov — where people go to research legislation from current and previous Congresses — easier to use, and accessible by social media. Imagine if a bill in Congress could tweet its own status.

The data available on THOMAS.gov should be expanded and made easily accessible by third-party systems. Once this happens, developers, like many of you here today, could use legislative data in innovative ways. This will usher in new public-private partnerships that will empower new entrepreneurs who will, in turn, yield benefits to the public sector.

For any of that vision of civic engagement and entrepreneurship to happen around the Web, the Library of Congress will need to fully open up the data. Why hasn’t it happened yet, given bipartisan support and a letter from the Speaker of the House?

techPresident managing editor Nick Judd asked the Library of Congress about Congress.gov. The director of communications for the Library of Congress, Gayle Osterberg, suggested in an email response that Congress hasn’t been clear about the manner of data release.

“Congress has said what to do on bulk access,” commented Schuman. “See the joint explanatory statement. There is support for bulk access.”

In June 2012, the House’s leadership issued a bipartisan statement that adopted the goal of “provid[ing] bulk access to legislative information to the American people without further delay,” placed the release of bulk data among its “top priorities in the 112th Congress” and directed a task force “to begin its important work immediately.”

The 112th Congress will come to a close soon. The Republicans swept into the House in 2010 promising a new era of innovation and transparency. If Speaker Boehner, Rep. Hoyer and their colleagues want to end these two divisive years on a high note, fully opening legislative data to the People would be an enduring legacy. Congressional leaders will need to work with the Library of Congress to make that happen.

All that being said, the new Congress.gov is in beta and looks dramatically improved. The digital infrastructure of the federal legislative system got a bit better today, moving towards a more adaptive government. Stay tuned, and give the Library of Congress (@LibraryCongress) some feedback: there’s a new button for it on every page.

This post has been updated with comments from Facebook, a link and reporting from techPresident, and a clarification from Daniel Schuman regarding the position of the House of Representatives.
