June 25 2012

William Gibson got some of it right

"The sky above the port was the color of television tuned to a dead channel."

Thus begins "Neuromancer," one of the most influential works of science fiction ever written. William Gibson's vision of a dystopic future, where corporations have become the new governments and freelance hackers jack into the net with immersive computer systems, set the tone for the cyberpunk movement. Unfortunately, we still don't have our "deck" to jack into the net; we're still using the same (if highly upgraded) flat displays, keyboards and mice that we did in the '80s.

What we do have are the negative aspects of the novel. For a while, it looked like cyberwarfare was going to be mostly theoretical, and that the largest threats to network security were going to come from individual black-hat hackers. But then groups such as the Russian mafia got into the game, and then nation-states started using cyberwarfare as a tool of sabotage and espionage, and now corporations are resorting to reprisal attacks against entities that attack them. The net is now an active war zone, where hardware comes pre-installed with spook-authored malware designed to destroy centrifuges.

The other half of the Gibson dystopia, the rise of corporations as pseudo-governments, has occurred as well. SOPA, ACTA, PIPA, DMCA, and friends are all legislation directly authored or highly influenced by powerful industry lobbies, with the goal of making governments the enforcement arms of businesses. The FBI spends significant amounts of its time enforcing copyright and trademark violations. The recent Supreme Court ruling that corporations are people, too, could have come right out of the pages of "Neuromancer."

The fact that the technological future of "Neuromancer" has failed to come to pass speaks to the evolutionary nature of computer innovation. A direct brain interface is probably still decades (if not generations) away. But the fact that the societal and political future forecast in "Neuromancer" struck so close to home is a sad commentary on human nature. If you assume the worst, you stand a good chance of being right.

What's most interesting is that he totally blew the call on where the battle-lines would be drawn. In Gibson's universe, corporations are fighting each other for trade secrets, with highly skilled software assassins dancing elegant battles against elaborately constructed firewalls. In the real world, the defenders are hopelessly outgunned, fighting a battle standing on fragile software platforms while illiterate script-kiddies fire off salvo after salvo of brute-force attack. And rather than priceless technology blueprints, the booty that companies are trying to protect is the mundane: credit card numbers, music and movies.

Also, in "Neuromancer," the battle is largely invisible, with the average person on the street unaware of the carnage occurring electronically around them. By contrast, the general public is painfully aware of how vulnerable modern computer systems are to abuse, and pretty much anyone who uses the net regularly can tell you about DMCA takedowns and the perils of SOPA. In short, Gibson may have been right about the net becoming an online warzone, but he failed badly to identify the what and why of the war.

The real question is, where does our version of dystopic web-life go from here? There appear to be two diverging paths, neither one very palatable. At one extreme, groups such as Anonymous can make the web so unsafe to use that no one dares to use it for anything. On the other, governments and corporations make it safe for themselves, at the cost of our personal liberties and privacies. Or, we could continue to muddle along somewhere in the middle, which may be the best outcome we can hope for.

June 22 2012

The emerging political force of the network of networks

The shape and substance of our networked world have been emerging over decades. Over the past year, the promise of the Internet as a platform for collective action moved from theory to practice, as networked movements of protesters and consumers have used connection technologies around the world in the service of their causes.

This month, more eyes and minds came alive to the potential of this historic moment during the ninth Personal Democracy Forum (PDF) in New York City, where for two intense days the nexus of technology, politics and campaigns came together on stage (and off) in a compelling, provocative mix of TED-style keynotes and lightning talks, longer panels, and the slipstream serendipity of hallway conversations and the backchannel on Twitter.


If you are interested in the intersection of politics, technology, social change and the Internet, PDF has long since become a must-attend event, as many of the most prominent members of the "Internet public" convene to talk about what's changing and why.

The first day began with a huge helping of technology policy, followed by a hint of triumphalism regarding the newfound power of the Internet in politics, balanced by Jaron Lanier's concern about the impact of the digital economy on the middle class. The conference kicked off with a conversation between two members of the United States Congress who were central to the historic online movement that halted the progression of the Stop Online Piracy Act (SOPA) and the Protect IP Act (PIPA) in the U.S. House of Representatives and Senate: Representative Darrell Issa (R-CA) and Senator Ron Wyden (D-OR). You can watch a video of their conversation with Personal Democracy Media founder Andrew Rasiej below:

During this conversation, Rep. Issa and Sen. Ron Wyden introduced a proposal for a "Digital Bill of Rights." They published a draft set of principles on MADISON, the online legislation platform built last December during the first Congressional hackathon.

Both Congressmen pointed to different policy choices that stand to affect billions of people, ranging from proposed legislation about intellectual property, to the broader issue of online innovation and Internet freedom, to international agreements like the Anti-Counterfeiting Trade Agreement (ACTA) and the Trans-Pacific Partnership (TPP). Such policy choices also include online and network security: Rep. Issa sponsored and voted for CISPA, whereas Sen. Wyden is opposed to a similar legislative approach in the Senate. SOPA, PIPA, ACTA and TPP have all been posted on MADISON for public comment.


On the second day of PDF, conversations and talks turned toward not only what is happening around the networked world but what could be in store for citizens in failed states in the developing world or those inhabiting huge cities in the West, with implications that can be simultaneously exhilarating and discomfiting. There was a strong current of discussion about the power of "adhocracy" and the force of the networked movements that are now forming, dissolving and reforming in new ways, eddying around the foundations of established societal institutions around the globe. Micah Sifry, co-founder of the Personal Democracy Forum, hailed five of these talks as exemplars of the "radical power of the Internet public."

These keynotes, by Chris Soghoian, Dave Parry, Peter Fein, Sascha Meinrath and Deanna Zandt, "could serve as a 50-minute primer on the radical power of the Internet public to change the world, why it's so important to nurture that public, where some of the threats to the Internet are coming from, and how people are routing around them to build a future 'intranet' that might well stand free from governmental and corporate control," wrote Sifry. (Three of them are embedded individually below; the rest you can watch in the complete video catalog at the bottom of this section.)

Given the historic changes in the Middle East and Africa over the past year during the Arab Spring, or the networked protests we've seen during the Occupy movement or over elections in Russia or austerity measures in Greece, it's no surprise that there was great interest in not just talking about what was happening, but why. This year, PDF attendees were also fortunate to hear about the experiences of netizens in China and Russia. The degree of change created by adding wireless Internet connectivity, social networking and online video to increasingly networked societies will vary from country to country. There are clearly powerful lessons that can be gleaned from the experiences of other humans around the globe. Learning where social change is happening (or not) and understanding how our world is changing due to the influence of networks is core to being a digitally literate citizen in the 21st century.

Declaring that we, as a nation or global polity, stand at a historic inflection point for the future of the Open Web or the role of the Internet in presidential politics or the balance of digital security and privacy feels, frankly, like a reiteration of past punditry, going well back to the .com boom in the 1990s.

That said, it doesn't make it less true. We've never been this connected to a network of networks, nor have the public, governments and corporations been so acutely aware of the risks and rewards that those connection technologies pose. It wasn't an accident that Muammar Gaddafi namechecked Facebook before his fall, nor that the current President of the United States (and his opponent in the upcoming election) are talking directly with the public over the Internet. One area that PDF might have dwelt more upon is the dark side of networks, from organized crime and crimesourcing to government-sponsored hacking to the consequences of poorly considered online videos or updates.

We live in a moment of breathtaking technological changes that stand to disrupt nearly every sector of society, for good or ill. Many thanks to the curators and conveners of this year's conference for amplifying the voices of those whose work focuses on documenting and understanding how our digital world is changing — and a special thanks to all of the inspiring people who are not only being the change they wish to see in the world but making it.

Below, I've embedded a selection of the PDF 12 talks that resonated with me. These videos should serve as a starting point, however, not an ending: every person on the program of this year's conference had something important to share, from Baratunde Thurston to Jan Hemme to Susan Crawford to Leslie Harris to Carne Ross to the RIAA's Cary Sherman — and the list goes on and on. You can watch all 45 talks from PDF 2012 (at least, the ones that have been uploaded to YouTube by the Personal Democracy Media team) in the player below:

Yochai Benkler | SOPA/PIPA: A Case Study in Networked Discourse and Activism

In this talk, Harvard law professor Yochai Benkler (@ybenkler) discussed using the Berkman Center's Media Cloud to trace how the Internet became a networked platform for collective action against SOPA and PIPA. Benkler applies a fascinating term — the "attention backbone" — to describe how influential nodes in a network direct traffic and awareness to research or data. If you're interested in the evolution of the blueprint for democratic participation online, you'll find this talk compelling.

Sascha Meinrath | Commotion and the Rise of the Intranet Era

Mesh networks have become an important — and growing — force for carrying connectivity to more citizens around the world. The work of Sascha Meinrath (@SashaMeinrath) at the Open Technology Institute in the New America Foundation is well worth following.

Mark Surman | Making Movements: What Punk Rock, Scouting, and the Royal Society Can Teach

Mark Surman (@msurman), the executive director of the Mozilla Foundation, shared a draft of his PDF talk prior to the conference. He offered his thoughts on "movement making," connecting lessons from punk rock, scouting and the Royal Society.

With the onrush of mobile apps and the swift rise of Facebook, what we think about as the Internet — the open platform that is the World Wide Web — is changing. Surman contrasted the Internet today, enabled by an end-to-end principle, built upon open-source technologies and on open protocols, with the one of permissions, walled gardens and controlled app stores that we're seeing grow around the world. "Tim Berners-Lee built the idea that the web should be LEGO into its very design," said Surman. We'll see if all of these pieces (loosely joined?) fit as well together in the future.

Juan Pardinas | OGP: Global Steroids for National Reformers

There are substantial responsibilities and challenges inherent in moving forward with the historic Open Government Partnership (OGP) that officially launched in New York City last September. Juan Pardinas (@jepardinas) took the position that OGP will have a positive impact on the world and that the seat civil society has at the partnership's table will matter. By the time the next annual OGP conference rolls around in 2013, history may well have rendered its own verdict on whether this effort will endure to lasting effect.

Given diplomatic challenges around South Africa's proposed secrecy law, all of the stakeholders in the Open Government Partnership will need to keep pressure on other stakeholders if significant progress is going to be made. If OGP is to be judged more than a PR opportunity for politicians and diplomats to make bold framing statements, government and civil society leaders will need to do more to hold countries accountable to the commitments required for participation: all participating countries must submit Action Plans after a bona fide public consultation. Moreover, they'll need to define the metrics by which progress should be judged and be clear with citizens about the timelines for change.

Michael Anti | Walking Along the Great Firewall

Michael Anti (@mranti) is a Chinese journalist and political blogger who has earned global attention for activism in the service of freedom of the press in China. When Anti was exiled from Facebook over its real names policy, his account deletion became an important example for other activists around the world. At PDF, he shared a frank perspective on where free speech stands in China, including how the Chinese government is responding to the challenges of their increasingly networked society. For perspective, there are now more Internet users in China (an estimated 350 million) than the total population of the United States. As you'll hear in Anti's talk, the Chinese government is learning and watching what happens elsewhere.

Masha Gessen | The Future of the Russian Protest Movement

Masha Gessen (@mashagessen), a Russian and American journalist, threw a bucket of ice water on any hopes that increasing Internet penetration or social media would, in and of themselves, lead to improvements in governance, reduce corruption, or improve the ability of Russia's people to petition their government for redress of grievances.

An Xiao Mina | Internet Street Art and Social Change in China

This beautiful and challenging talk by Mina (@anxiaostudio) offered a fascinating insight: memes are the street art of the censored web. If you want to learn more about how Chinese artists and citizens are communicating online, watch this creative, compelling presentation. (Note: there are naked people in this video, which will make it NSFW in some workplaces.)

Chris Soghoian | Lessons from the Bin Laden Raid and Cyberwar

Soghoian (@csoghoian), who has a well-earned reputation for finding privacy and security issues in the products and services of the world's biggest tech companies, offered up a talk that made three strong points:

  1. Automatic security updates are generally quite a good thing for users.
  2. It's highly problematic if governments create viruses that masquerade as such updates.
  3. The federal government could use an official who owns consumer IT security, not just "cybersecurity" at the corporate or national level.

Zac Moffatt | The Real Story of 2012: Using Digital for Persuasion

Moffatt (@zacmoffatt) is the digital director for the Mitt Romney presidential campaign. In his talk, Moffatt said 2012 will be the first election cycle where persuasion and mobilization will be core elements of the digital experience. Connecting with millions of voters who have moved to the Internet is clearly a strategic priority for his team — and it appears to be paying off. The Guardian reported recently that the Romney campaign is closing the digital data gap with the Obama campaign.


Nick Judd wrote up further analysis of Moffatt's talk on digital strategy over at TechPresident.

Alex Torpey | The Local Revolution

Alex Torpey (@AlexTorpey) attracted widespread attention when he was elected mayor of South Orange, New Jersey, last year at the age of 23. In the months since his election, Torpey has been trying to interest his peers in politics. His talk at PDF called for more participation in local government and for a rethinking of partisanship: Torpey ran as an independent. As Gov 2.0 goes local, Mayor Torpey looks likely to be one of its leaders.

Gilad Lotan | Networked Power: What We Learn From Data

If you're interested in a data-driven analysis of networked political power and media influence, Gilad Lotan's talk is a must-watch. Lotan, who tweets as @gilgul, crunched massive numbers of tweets to help the people formerly known as the audience better understand networked movements for change.

Cheryl Contee | The End of the Digital Divide

Jack and Jill Politics co-founder Cheryl Contee (@cheryl) took a profoundly personal approach when she talked about the death and rebirth of the digital divide. She posited that what underserved citizens in the United States now face isn't so much the classic concerns of the 1990s, where citizens weren't connected to the Internet, but rather a skills gap for open jobs and a lack of investment to address those issues in poor and minority communities. She also highlighted how important mentorship can be in bridging that divide. When Contee shared how Yale computer lab director Margaret Krebs helped her, she briefly teared up — and she called on technologists, innovators and leaders to give others a hand up.

Tracing the Storify of PDF 12

I published a Storify of Personal Democracy Forum 2012 after the event. Incomplete though it may be, it preserves some thoughtful commentary and context shared in the Twittersphere during the event.

June 14 2012

Stories over spreadsheets

I didn't realize how much I dislike spreadsheets until I was presented with a vision of the future where their dominance isn't guaranteed.

That eye-opening was offered by Narrative Science CTO Kris Hammond (@whisperspace) during a recent interview. Hammond's company turns data into stories: They provide sentences and paragraphs instead of rows and columns. To date, much of the attention Narrative Science has received has focused on the media applications. That's a natural starting point. Heck, I asked him about those very same things when I first met Hammond at Strata in New York last fall. But during our most recent chat, Hammond explored the other applications of narrative-driven data analysis.

"Companies, God bless them, had a great insight: They wanted to make decisions based upon the data that's out there and the evidence in front of them," Hammond said. "So they started gathering that data up. It quickly exploded. And they ended up with huge data repositories they had to manage. A lot of their effort ended up being focused on gathering that data, managing that data, doing analytics across that data, and then the question was: What do we do with it?"

Hammond sees an opportunity to extract and communicate the insights locked within company data. "We'll be the bridge between the data you have, the insights that are in there, or insights we can gather, and communicating that information to your clients, to your management, and to your different product teams. We'll turn it into something that's intelligible instead of a list of numbers, a spreadsheet, or a graph or two. You get a real narrative; a real story in that data."

My takeaway: The journalism applications of this are intriguing, but these other use cases are empowering.

Why? Because most people don't speak fluent "spreadsheet." They see all those neat rows and columns and charts, and they know something important is tucked in there, but what that something is and how to extract it aren't immediately clear. Spreadsheets require effort. That's doubly true if you don't know what you're looking for. And if data analysis is an adjacent part of a person's job, more effort means those spreadsheets will always be pushed to the side. "I'll get to those next week when I've got more time ..."

We all know how that plays out.

But what if the spreadsheet wasn't our default output anymore? What if we could take the things most of us are hard-wired to understand — stories, sentences, clear guidance — and layer them over all that vital data? Hammond touched on that:

"For some people, a spreadsheet is a great device. For most people, not so much so. The story. The paragraph. The report. The prediction. The advisory. Those are much more powerful objects in our world, and they're what we're used to."

He's right. Spreadsheets push us (well, most of us) into a cognitive corner. Open a spreadsheet and you're forced to recalibrate your focus to see the data. Then you have to work even harder to extract meaning. This is the best we can do?

With that in mind, I asked Hammond if the spreadsheet's days are numbered.

"There will always be someone who uses a spreadsheet," Hammond said. "But, I think what we're finding is that the story is really going to be the endpoint. If you think about it, the spreadsheet is for somebody who really embraces the data. And usually what that person does is they reduce that data down to something that they're going to use to communicate with someone else."

A thought on dashboards

I used to view dashboards as the logical step beyond raw data and spreadsheets. I'm not so sure about that anymore, at least in terms of broad adoption. Dashboards are good tools, and I anticipate we'll have them from now until the end of time, but they're still weighed down by a complexity that makes them inaccessible.

It's not that people can't master the buttons and custom reports in dashboards; they simply don't have time. These people — and I include myself among them — need something faster and knob-free. Simplicity is the thing that will ultimately democratize data reporting and data insights. That's why the expansion of data analysis requires a refinement beyond our current dashboards. There's a next step that hasn't been addressed.

Does the answer lie in narrative? Will visualizations lead the way? Will a hybrid format take root? I don't know what the final outputs will look like, but the importance of data reporting means someone will eventually crack the problem.

Full interview

You can see the entire discussion with Hammond in the following video.

June 08 2012

mHealth apps are just the beginning of the disruption in healthcare from open health data

Two years ago, the potential of government making health information as useful as weather data felt like an abstraction. Healthcare data could give citizens the same "blue dot" for navigating health and illness that GPS data fuels on the glowing maps of the geolocated mobile devices in more and more hands.

After all, profound changes in entire industries take years, even generations, to occur. In government, the pace of progress can feel even slower, measured in evolutionary time and epochs.

Sometimes, history works differently, particularly given the effect of rapid technological changes. It's only a little more than a decade since President Clinton announced he would unscramble global positioning system (GPS) data for civilian use. President Obama's second U.S. chief technology officer, Todd Park, has estimated that GPS data unlocked some $90 billion in value in the United States.

In that context, the arc of the Health Data Initiative (HDI) in the United States might leave some jaded observers with whiplash. From a small beginning, the initiative to put health data to work has now expanded around the United States and attracted great interest from abroad, including observers from England's National Health Service eager to understand what strategies have unlocked innovation around public data sets.

While the potential of government health data driving innovation may well have felt like an abstraction to many observers, in June 2012, real health apps and services are here -- and their potential to change how society accesses health information, delivers care, lowers costs, connects patients to one another, creates jobs, empowers caregivers and cuts fraud is profound. The venture capital community seems to have noticed the opportunity: according to HHS Secretary Sebelius, investment in healthcare startups is up 60% since 2009.

Headlines about rockstar Bon Jovi 'rocking Datapalooza' and the smorgasbord of health apps on display, however, while both understandable and largely warranted, don't convey the deeper undercurrent of change.

On March 10, 2010, the initiative started with 36 people brainstorming in a room. On June 2, 2010, approximately 325 in-person attendees saw 7 health apps demoed at a historic forum in the theater of the Institute of Medicine in Washington, D.C., with another 10 apps packed into an expo in the rotunda outside. All of the apps or services used open government data from the United States Department of Health and Human Services (HHS).

In 2012, 242 applications or services based upon or using open data were submitted for consideration to the third annual "Health Datapalooza." About 70 health app exhibitors made it to the expo. The conference itself had some 1,400 registered attendees, not counting press and staff, and sold out in advance of the event at the cavernous Washington Convention Center in D.C. On Wednesday, I asked Dr. Bob Kocher, now of Venrock Capital and the Brookings Institution, about how the Health Data Initiative has grown and evolved. Dr. Kocher was instrumental in its founding when he served in the Obama administration. Our interview is embedded below:

Revolutionizing the healthcare industry -- in HHS Secretary Sebelius' words, reformulating Wired executive editor Thomas Goetz's "latent data" into "lazy data" -- has meant years of work unlocking government data and actively engaging the developer, entrepreneurial and venture capital communities. While the process of making health data open and machine-readable is far from done, there has been incontrovertible progress in standing up new application programming interfaces (APIs) that enable entrepreneurs, academic institutions and government itself to retrieve it on demand. On Monday, in concert with the Health Datapalooza, a new version of HealthData.gov launched, including the release of new data sets that enable not just hospital quality comparisons but insurance fee comparisons as well.
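
As a sketch of what consuming such an API can look like, here is a short Python example. The endpoint and field names below are hypothetical placeholders, not the actual HealthData.gov schema; the real catalog documents the available data sets and formats:

    # Hedged sketch: fetch an open health data set over HTTP and rank
    # hospitals by a quality score. URL and fields are placeholders.
    import requests

    API_URL = "https://example.healthdata.gov/hospital-quality.json"  # hypothetical

    resp = requests.get(API_URL, params={"state": "MD", "limit": 50}, timeout=30)
    resp.raise_for_status()
    hospitals = resp.json()

    # Print the five hospitals with the highest (hypothetical) composite score.
    for h in sorted(hospitals, key=lambda h: h["quality_score"], reverse=True)[:5]:
        print(f"{h['name']}: {h['quality_score']}")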

Two years later, the blossoming of the HDI Forum into a massive conference that attracted the interest of the media, venture capitalists and entrepreneurs from around the nation is a short-term development few would have predicted in 2010, but it is a welcome one for a nation starved for solutions to spiraling healthcare costs and for some sign of action from a federal government that all too frequently looks broken.

"The immense fiscal pressure driving 'innovation' in the health context actually means belated leveraging of data insights other industries take for granted from customer databases," said Chuck Curran, executive director and general counsel or the Network Advertising Initiative, when interviewed at this year's HDI Forum. For example, he suggested, look at "the dashboarding of latent/lazy data on community health, combined with geographic visualizations, to enable “hotspot”-focused interventions, or info about service plan information like the new HHS interface for insurance plan data (including the API).

Curran also highlighted the role that fiscal pressure is having on making both individual payers and employers a natural source of business funding and adoption for entrepreneurs innovating with health data, with apps like My Drugs Costs holding the potential to help citizens and businesses alike cut down on an estimated $95 billion in annual unnecessary spending on pharmaceuticals.

Curran said that health app providers have fully internalized smart disclosure: "It's not enough to have open data available for specialist analysis -- there must be simplified interfaces for actionable insights and patient ownership of the care plan."

For entrepreneurs eying the healthcare industry and established players within it, the 2012 Health Datapalooza offered an excellent opportunity to "take the pulse of mHealth," as Jody Ranck wrote at GigaOm this week:

Roughly 95 percent of the potential entrepreneur pool doesn’t know that these vast stores of data exist, so the HHS is working to increase awareness through the Health Data Initiative. The results have been astounding. Numerous companies, including Google and Microsoft, have held health-data code-a-thons and Health 2.0 developer challenges. These have produced applications in a fraction of the time it has historically taken. Applications for understanding and managing chronic diseases, finding the best healthcare provider, locating clinical trials and helping doctors find the best specialist for a given condition have been built based on the open data available through the initiative.

In addition to the Health Datapalooza, the Health Data Initiative hosts other events which have spawned more health innovators. RockHealth, a Health 2.0 incubator, launched at its SXSW 2011 White House Startup America Roundtable. In the wake of these successful events, StartUp Health, a network of health startup incubators, entrepreneurs and investors, was created. The organization is focused on building a robust ecosystem that can support entrepreneurs in the health and wellness space.

This health data ecosystem has now spread around the United States, from Silicon Valley to New York to Louisiana. During this year's Health Datapalooza, I spoke with Ramesh Kolluru, a technologist who works at the University of Louisiana, about his work on a hackathon in Louisiana, the "Cajun Codefest," and his impressions of the forum in Washington:

One story that stood out from this year's crop of health data apps was Symcat, an mHealth app that enables people to look up their symptoms and find nearby hospitals and clinics. The application was developed by two medical students at Johns Hopkins University who happened to share a passion for tinkering, engineering and healthcare. They put their passion to work - and somehow found the time (remember, they're in medical school) to build a beautiful, usable health app. The pair landed a $100,000 prize from the Robert Wood Johnson Foundation for their efforts. In the video embedded below, I interview Craig Munsen, one of the medical students, about his application. (Notably, the pair intends to use their prize to invest in the business, not pay off medical school debt.)

There are more notable applications and services to profile from this year's expo, and in the weeks ahead, expect to see some of them here on Radar. For now, it's important to recognize the work of all of the men and women who have worked so hard over the past two years to create public good from public data.

Releasing and making open health data useful, however, is about far more than these mHealth apps: It's about saving lives, improving the quality of care, adding more transparency to a system that needs it, and creating jobs. Park spoke with me this spring about how open data relates to much more than consumer-facing mHealth apps:

As the US CTO seeks to scale open data across federal government by applying the lessons learned in the health data initiative, look for more industries to receive digital fuel for innovation, from energy to education to transit and finance. The White House digital government strategy explicitly embraces releasing open data in APIs to enable more accountability, civic utility and economic value creation.

While major challenges lie ahead, from data quality to security or privacy, the opportunity to extend the data revolution in healthcare to other industries looks more tangible now than it has in years past.

Business publications, including the Wall Street Journal, have woken up to the disruptive potential of open government data. As Michael Hickins wrote this week, "The potential applications for data from agencies as disparate as the Department of Transportation and Department of Labor are endless, and will affect businesses in every industry imaginable. Including yours. But if you can think of how that data could let someone disrupt your business, you can stop that from happening by getting there first."

This growing health data movement is not contained within any single city, state, agency or company. It's beautifully chaotic, decentralized and self-propelled, said Park this past week.

"The Health Data Initiative is no longer a government initiative," he said. "It's an American one. "

In defense of frivolities and open-ended experiments

My first child was born just about nine months ago. From the hospital window on that memorable day, I could see that it was surprisingly sunny for a Berkeley autumn afternoon. At the time, I'd slept only about three of the previous 38 hours; my mind was making up for the missing haze that usually fills the Berkeley sky. Despite my cloudy state, I can easily recall those moments following my first afternoon lying with my newborn son. In those minutes, he cleared my mind better than the sun had cleared the Berkeley skies.

While my wife slept and recovered, I talked to my boy, welcoming him into this strange world and his newfound existence. I told him how excited I was for him to learn about it all: the sky, planets, stars, galaxies, animals, happiness, sadness, laughter. As I talked, I came to realize how many concepts I understood that he still lacked. For every new thing I mentioned, I realized there were 10 more he would need to learn just to understand that one.

Of course, he need not know specific facts to appreciate the sun's warmth, but to understand what the sun is, he must first learn the pyramid of knowledge that encapsulates our understanding of it: He must learn to distinguish self from other; he must learn about time, scale, distance and proportion, light and energy, motion, vision, sensation, and so on.

I mentioned time. Ultimately, I regressed to talking about language, mathematics, history, ancient Egypt, and the Pyramids. It was the verbal equivalent of "wiki walking," wherein I go to Wikipedia to look up an innocuous fact, such as the density of gold, and find myself reading about Mesopotamian religious practices an hour later.

It struck me then how incredible human culture, science, and technology truly are. For billions of years, life was restricted to a nearly memoryless existence, at most relying upon brief changes in chemical gradients to move closer to nutrient sources or farther from toxins.

With time, these basic chemo- and photo-sensory apparatuses evolved; creatures with longer memories — perhaps long enough to remember where food sources were richest — possessed an evolutionary advantage. Eventually, the time scales on which memory operates extended longer; short-term memory became long-term memory, and brains evolved the ability to maintain a memory across an entire biological lifetime. (In fact, how the brain coordinates such memories is a core question of my neuroscientific research.)

However, memory did not stop there. Language permitted interpersonal communication, and primates finally overcame the memory limitations of a single lifespan. Writing and culture imbued memory with an increased permanence, impervious to the requirement that knowledge pass verbally, thus improving the fidelity of memory and minimizing the costs of the "telephone game effect."

We are now in the digital age, where we are freed from the confines of needing to remember a phone number or other arbitrary facts. While I'd like to think that we're using this "extra storage" for useful purposes, sadly I can tell you more about minutiae of the Marvel Universe and "Star Wars" canon than will ever be useful (short of an alien invasion in which our survival as a species is predicated on my ability to tell you that Nightcrawler doesn't, strictly speaking, teleport, but rather he travels through another dimension, and when he reappears in our dimension the "BAMF" sound results from some sulfuric gasses entering our dimension upon his return).

But I wiki-walk digress.

So what does all of this extra memory gain us?

Accelerated innovation.

As a scientist, my (hopefully) novel research is built upon the unfathomable number of failures and successes contributed by those who came before me. The common refrain is that we scientists stand on the shoulders of giants. It is for this reason that I've previously argued that research funding is so critical, even for apparently "frivolous" projects. I've got a Google Doc noting impressive breakthroughs that emerged from research that, on the surface, has no "practical" value.

Although you can't legislate innovation or democratize a breakthrough, you can encourage a system that maximizes the probability that a breakthrough can occur. This is what science should be doing and this is, to a certain extent, what Silicon Valley is already doing.

The more data, information, software, tools, and knowledge available, the more we as a society can build upon previous work. (That said, even though I'm a huge proponent of more data, the most transformational theory in biology came about from solid critical thinking, logic, and sparse data collection.)

Of course, I'm biased, but I'm going to talk about two projects in which I'm involved: one business and one scientific. The first is Uber, an on-demand car service that allows users to request a private car via their smartphone or SMS. Uber is built using a variety of open software and tools such as Python, MySQL, node.js, and others. These systems helped make Uber possible.

As a non-engineer, it's staggering to think of the complexity of the systems that make Uber work: GPS, accurate mapping tools, a reliable cellular/SMS system, automated dispatching system, and so on. But we as a culture become so quickly accustomed to certain advances that, should our system ever experience a service disruption, Louis C.K. would almost certainly be prophetic about the response:

The other project in which I'm involved is brainSCANr. My wife and I recently published a paper on this, but the basic idea is that we mined the text of more than three million peer-reviewed neuroscience research articles to find associations between topics and search for potentially missing links (which we called "semi-automated hypothesis generation").

We built the first version of the site in a week, using nothing but open data and tools. The National Library of Medicine, part of the National Institutes of Health, provides an API to search all of these manuscripts in its massive database of more than 20 million papers. We used Python to process the associations, the JavaScript InfoVis Toolkit to plot the data, and Google App Engine to host it all. I'm positive that when the NIH funded the creation of PubMed and its API, they didn't have this kind of project in mind.
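
For the curious, the spirit of that pipeline is easy to reproduce. The sketch below uses NCBI's public E-utilities search API (the PubMed interface mentioned above) to count papers mentioning two terms together. It's a simplified illustration of co-occurrence mining under my own scoring choice, not the published brainSCANr code:

    # Count PubMed co-mentions of two terms via NCBI E-utilities (esearch).
    # A simplified illustration of co-occurrence mining, not brainSCANr itself.
    import requests

    ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def pubmed_count(query):
        """Return the number of PubMed records matching a query."""
        params = {"db": "pubmed", "term": query, "retmode": "json"}
        r = requests.get(ESEARCH, params=params, timeout=30)
        r.raise_for_status()
        return int(r.json()["esearchresult"]["count"])

    a, b = "hippocampus", "memory consolidation"
    both = pubmed_count(f"{a} AND {b}")
    # Jaccard-style association: co-mentions over mentions of either term.
    score = both / (pubmed_count(a) + pubmed_count(b) - both)
    print(f"{a} / {b}: {both} co-mentioning papers, association {score:.3f}")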

That's the great thing about making more tools available; it's arrogant to think that we can anticipate the best ways to make use of our own creations. My hope is that brainSCANr is the weakest incarnation of this kind of scientific text mining, and that bigger and better things will come of it.

Twenty years ago, these projects would have been practically impossible: the amount of labor involved would have made them impractical. Now they can be built by a handful of people (or a guy and his pregnant wife) in a week.

Just as research into black holes can lead to a breakthrough in wireless communication, so too can seemingly benign software technologies open amazing and unpredictable frontiers. Who would have guessed that what began with a simple online bookstore would grow into Amazon Web Services, a tool that is playing an ever-important role in innovation and scientific computing such as genetic sequencing?

So, before you scoff at the "pointlessness" of social networks or the wastefulness of "another web service," remember that we don't always do the research that will lead to the best immediate applications or build the company that is immediately useful or profitable. Nor can we always anticipate how our products will be used. It's easy to mock Twitter because you don't care to hear about who ate what for lunch, but I guarantee that the people whose lives were saved after the Haiti earthquake or who coordinated the spark of the Arab Spring are happy Twitter exists.

While we might have to justify ourselves to granting agencies, or venture capitalists, or our shareholders in order to do the work we want to do, sometimes the "real" reason we spend so much of our time working is the same reason people climb mountains: because it's awesome that we can. That said, it's nice to know that what we're building now will be improved upon by our children in ways we can't even conceive.

I can't wait to have this conversation with my son when — after learning how to talk, of course — he's had a chance to build on the frivolities of my generation.

June 04 2012

Can Future Advisor be the self-driving car for financial advice?

Last year, venture capitalist Marc Andreessen famously wrote that software is eating the world. The impact of algorithms upon media, education, healthcare and government, among many other verticals, is just beginning to be felt, with still-unfolding consequences for the industries disrupted.

Whether it's the prospect of IBM's Watson offering a diagnosis to a patient or Google's self-driving car taking over on the morning commute, there are going to be serious concerns raised about safety, power, control and influence.

Doctors and lawyers note, for good reason, that their public appearances on radio, television and the Internet should not be viewed as medical or legal advice. While financial advice may not pose the same threat to a citizen as an incorrect medical diagnosis or treatment, poor advice could have pretty significant downstream outcomes.

That risk isn't stopping a new crop of startups from looking for a piece of the billions of dollars paid every year to financial advisors. Future Advisor launched in 2010 with the goal of providing better financial advice through the Internet using data and algorithms. They're competing against startups like Wealthfront and Betterment, among others.

Not everyone is convinced of the validity of this algorithmically mediated approach to financial advice. Mike Alfred, the co-founder of BrightScope (which has liberated financial advisor data itself), wrote in Forbes this spring that online investment firms are wrong about financial advisors:

"While singularity proponents may disagree with me here, I believe that some professions have a fundamentally human component that will never be replaced by computers, machines, or algorithms. Josh Brown, an independent advisor at Fusion Analytics Investment Partners in NYC, recently wrote that 'for 12,000 years, anywhere someone has had wealth through the history of civilization, there's been a desire to pay others for advice in managing it.' In some ways, it's no different from the reason why many seek out the help of a psychiatrist. People want the comfort of a human presence when things aren't going well. A computer arguably may know how to allocate funds in a normal market environment, but can it talk you off the cliff when things go to hell? I don't think so. Ric Edelman, Chairman & CEO of Edelman Financial Services, brings up another important point. According to him, 'most consumers are delegators and procrastinators, and need the advisor to get them to do what they know they need to do but won't do if left on their own'."

To get the other side of this story, I recently talked with Bo Lu (@bolu), one of the two co-founders of Future Advisor. Lu explained how the service works, where the data comes from and whether we should fear the dispassionate influence of our new robotic financial advisor overlords.

Where did the idea for Future Advisor come from?

Bo Lu: The story behind Future Advisor is one of personal frustration. We started the company in 2010 when my co-founder and I were working at Microsoft. Our friends who had reached their mid-20s were really making money for the first time in their lives. They were now being asked to make decisions, such as "Where do I open an IRA? What do I do with my 401K?" As is often the case, they went to the friend who had the most experience, which in this case turned out to be me. So I said, "Well, let's just find you guys a good financial advisor and then we'll do this," because somehow in my mind, I thought, "Financial advisors do this."

It turned out that all of the financial advisors we found fell into two distinct classes. One was folks who were really nice but who essentially, in very kind words, said, "Maybe you'd be more comfortable at the lower-stakes table." We didn't meet any of their minimums. You needed a million dollars, or at least half a million, to get their services.

The other kind of financial advisors, those without minimums, immediately started trying to sell my friends term life insurance and annuities. I'm like, "These guys are 25. There's no reason for you to be doing this." Then I realized there was a misalignment of incentives there. We noticed that our friends were making a small set of the same mistakes over and over again, such as not having the right diversification for their age and their portfolio, or paying too much in mutual fund fees. Most people didn't understand that mutual funds charged fees and were not being tax efficient. We said, "Okay, this looks like a data problem that we can help solve for you guys." That's the genesis out of which Future Advisor was born.

What problem are you working on solving?

Bo Lu: Future Advisor is really trying to do one single thing: deliver on the vision that high-quality financial advice should be able to be produced cheaply and, thus, be broadly accessible to everyone.

If you look at the current U.S. market of financial advisors and you multiply the number of financial advisors in the U.S. — which is roughly a quarter-million people — by what is generally accepted to be a full book of clients, you'll realize that even at full capacity, the U.S. advisor market can serve only about 11% of U.S. households.
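
Back-of-the-envelope, and assuming a full book is roughly 50 client households (my assumption, not a figure Lu gave): 250,000 advisors times 50 households is about 12.5 million households served, out of roughly 115 million U.S. households, which works out to about 11%.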

In serving that 11% of U.S. households, the advisory market for retail investing makes about $20 billion. This is a classic market where a service is extremely expensive and, in being so, can serve only a small percentage of the addressable market. As we walked into this, we realized that we're part of something bigger. If you look back 60 years, a big problem was that everyone wanted a color television and they just weren't being manufactured quickly or cheaply enough. Manufacturing scale has since caught up: now, everything you want you generally can have, because manufactured things are cheap. Creating services is still extremely expensive and non-scalable. Healthcare as a service, education as a service and, of course, financial advising come to mind. What we're doing is taking information technology, like computer science, to scale a service in the way the electrical engineering of our forefathers scaled manufacturing.

How big is the team? How are you working together?

Bo Lu: The team has eight people in Seattle. It's almost exactly half finance and half engineering. We unabashedly have a bunch of engineers from MIT, which is where my co-founder went to school, essentially sucking the brains out of the finance team and putting them in software. It's really funny because a lot of the time when we design an algorithm, we actually just sit down and say, "Okay, let's look at a bunch of examples, see what the intuitive decisions of finance people are, and then try to encode them."

We rely heavily on the existing academic literature in both computational finance and economics because a lot of this work has been done. The interesting thing is that the knowledge is not the problem. The knowledge exists, and it's unequivocal in the things that are good for investors. Paying less in fees is good for investors. Being more tax efficient is good for investors. How to do that is relatively easy. What's hard for the industry for a long time has been to scalably apply those principles in a nuanced way to everybody's unique situation. That's something that software is uniquely good at doing.

How do you think about the responsibility of providing financial advice that traditionally has been offered by highly certified professionals who've taken exams, worked at banks, and are expensive to get to because of that professional experience?

Bo Lu: There's a couple of answers to that question, one of which is the folks on our team have the certifications that people look for. We've got certified financial advisors*, CFAs, which is a private designation on the team. We have math PhDs from the University of Washington on the team. The people who create the software are the caliber of people that you would want to be sitting down with you and helping you with your finances in the first place.

The second part of that is that we ourselves are a registered investment advisor. You'll see many websites that on the bottom say, "This is not intended to be financial advice." We don't say that. This is intended to be financial advice. We're registered federally with the SEC as a registered investment advisor and have passed all of the exams necessary.

*In the interview, Lu said that FutureAdvisor has "certified financial advisors." In this context, CFA stood for something else: the Future Advisor team includes Simon Moore, a chartered financial analyst, who advises the startup on investment algorithm design.

Where does the financial data behind the site come from?

Bo Lu: From the consumer side, the site has only four steps. These four steps are very familiar to anyone who's used a financial advisor before. A client signs up for the product, a free web service designed to help everyone. In step one, they answer a couple of questions about their personal situation: age, how much they make, when they want to retire. Then they're asked the kinds of questions that good financial advisors ask, such as questions about your risk tolerance. Here, you start to see that we rely on academic work as much as possible.

There is a great body of work out of the University of Kentucky on risk tolerance questionnaires. Whereas most companies just use some questionnaire they came up with internally, we went and scoured the literature to find exact questions that were specifically worded, and that have been tested under those wordings to yield statistically significant differences in determining risk tolerance. So we use those questions. With that information, the algorithm can then come up with a target portfolio allocation for the customer.

In step two, the customer can synchronize or import data from their existing financial institutions into the software. We use Yodlee, which you've written about before. It's the same technology that Mint used to import detailed data about what you already hold in your 401K, in your IRA, and in all of your other investment accounts.

Step three is the dashboard. The dashboard shows your investments at a level that makes sense, unlike current brokerages, which, when you log in, tell you how much money you have, list the funds you hold, and show how much they've changed in the last 24 hours of trading. We answer four questions on the dashboard.

  1. Am I on track?
  2. Am I well-diversified for this goal?
  3. Am I overpaying in hidden fees in my mutual funds?
  4. Am I as tax efficient as I could be?

We answer those four questions and then, in the final step of the process, we give algorithmically generated, step-by-step instructions about how to improve your portfolio. This includes specific advice like "sell this many shares of Fund X to buy this many shares of Fund Y" in your IRA. When consumers see this, they can go and, with this help, clean up their portfolios. It's kind of like diagnosis and prescription for your portfolio.

There are three separate streams of data underlying the product. One is the Yodlee stream, which is detailed holdings data from hundreds of financial institutions. Two is data about what's in a fund. That comes from Morningstar. Morningstar, of course, gets it from the SEC because mutual funds are required to disclose this. So we can tell, for example, if a fund is an international fund or a domestic fund, what the fees are, and what it holds. The third dataset is one we had to bring in ourselves: 401K data from the Department of Labor.

On top of this triad of datasets sits our algorithm, which has undergone six to eight months of beta testing with customers. (We launched the product in March 2012.) That algorithm asks, "Okay, given these three datasets, what is the current state of your portfolio? What is the minimum number of moves to reduce both transaction costs and any capital gains that you might incur to get you from where you are to roughly where you need to be?" That's how the product works under the covers.
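
As a rough illustration of that last step, here is a minimal rebalancing sketch in Python. It is my own toy under stated assumptions (invented holdings, an arbitrary $500 drift threshold), not FutureAdvisor's algorithm, which also weighs taxes, account types and transaction costs:

    # Toy rebalancer: compute the dollar moves that close the gap between
    # current holdings and a target allocation, biggest gaps first.
    current = {"US stocks": 60000, "Intl stocks": 5000, "Bonds": 10000, "Cash": 25000}
    target = {"US stocks": 0.50, "Intl stocks": 0.25, "Bonds": 0.20, "Cash": 0.05}

    total = sum(current.values())
    gaps = {k: target[k] * total - current.get(k, 0) for k in target}

    # Positive gap = buy, negative gap = sell; trade the largest gaps first.
    for asset, dollars in sorted(gaps.items(), key=lambda kv: -abs(kv[1])):
        if abs(dollars) > 500:  # ignore trivial drift to limit transaction costs
            action = "buy" if dollars > 0 else "sell"
            print(f"{action} ${abs(dollars):,.0f} of {asset}")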

What's the business model?

Bo Lu: You can think of it as similar to Redfin. Redfin allows individual realtors to do more work by using algorithms to help them do all of the repetitive parts. Our product and the web service is free and will always be free. Information wants to be free. That's how we work in software. It doesn't cost us anything for an additional person to come and use the website.

The way that Future Advisor makes money is that we charge for advisor time. A small percentage of customers will have individual questions about their specific situation or want to talk to a human being and have them answer some questions. This is actually good in two ways.

One, it helps the transition from a purely human service to what we think will eventually be an almost purely digital service. People who are somewhere along that continuum of wanting someone to talk to but don't need someone full-time to talk to can still do that.

Two, those conversations are a great way for us to find out, in aggregate, what the things are that the software doesn't yet do or doesn't do well. Overall, if we take a ton of calls that are all the same, then it means there's an opportunity for the software to step in, scale that process, and help people who don't want to call us or who can't afford to call us to get that information.

What's the next step?

Bo Lu: This is a problem that has a dramatic possible impact attached to it. Personal investing, what the industry calls "retail investing," is a closed-loop system. Money goes in, and it's your money, and it stays there for a while. Then it comes out, and it's still your money. There's very little additional value creation by the financial advisory industry.

It may sound like I'm going out on a limb to say this, but it's generally accepted that the value creation of you and I putting our hard-earned money into the market is actually done by companies. Companies deploy that capital, they grow, and they return that capital in the form of higher stock prices or dividends, fueling the engine of our economic growth.

There are companies across the country and across the world adding value to people's lives. There's little to no value to be added by financial advisors trying to pick stocks. It's actually academically proven that there's negative value to be added there because it turns out the only people who make money are financial advisors.

This is a $20 billion market. But really what that means is that it's a $20 billion tax on individual American investors. If we're successful, we're going to reduce that $20 billion tax to a much smaller number by orders of magnitude. The money that's saved is kept by individual investors, and they keep more of what's theirs.

Because of the size of this market and the size of the possible impact, we are venture-backed; we can really change the world for the better if we're successful. There are a bunch of great folks in the Valley who have done a lot of work on money and the democratization of software and money tools.

What's the vision for the future of your startup?

Bo Lu: I was just reading your story about smart disclosure a little while ago. There's a great analogy in there that I think applies aptly to us. It's maps. The first maps were paper. Today if you look at the way a retail investor absorbs information, it's mostly paper. They get a prospectus in the mail. They have a bunch of disclosures they have to sign — and the paper is extremely hard to read. I don't know if you've ever tried to read a prospectus; it's something that very few of us enjoy. (I happen to be one of them, but I understand if not everyone's me.) They're extremely hard to parse.

Then we moved on to the digital age of folks taking the data embedded in those prospectuses and making them available. That was Morningstar, right? Now we're moving into the age of folks taking that data and mating it with other data, such as 401K data and your own personal financial holdings data, to make individual personalized recommendations. That's Future Advisor the way it is today.

But just as maps moved from paper to Google Maps, it didn't stop there. It has moved on to autonomous cars. There will be a day when you and I don't ever have to look at a map because, rather than the map being a tool to help me make the decision to get somewhere, the map will be part of a service I use that just gets the job done. It gets me from point A to point B.

In finance, the job is to invest my money properly. Steward it so that it grows, so that it's there for me when I retire. That's our vision as well. We're going to move from being an information service to actually doing it for you. It's just a default way so that if you do nothing, your financial assets are well taken care of. That's what we think is the ultimate vision of this: Everything works beautifully and you no longer have to think about it.

We're now asked to make ridiculous decisions about spreading money between a checking account, an IRA, a savings account and a 401K, decisions which really make no sense to most of us. The vision is to have one pot of money that invests itself correctly: you put money in when you earn it and take money out when you spend it. You don't have to make any decisions that you were never trained or educated to make about your own personal finances because it just does the right thing. The self-driving car is our vision.

Connecting the future of personal finance with an autonomous car is an interesting perspective. Just as with outsourcing driving, however, there's the potential for negative outcomes. Do you have any concerns about the algorithm going awry?

Bo Lu: We are extremely cognizant of the weighty matters that we are working with here. We have a ton of testing that happens internally. You could even criticize us, as a software development firm, for moving slower than other software development firms. We're not going to move as quickly as Twitter or Foursquare because, to be honest, if they mess up, it's not that big a deal. We're extremely careful about it.

At the same time, I think the Google self-driving car analogy is apt because people immediately say, "Well, what if the car gets into an accident?" Those kinds of fears exist in all fields that matter.


Analysis: Why this matters

"The analogy that comes to mind for me isn't the self-driving car," commented Mike Loukides, via email. "It's personalized medicine."

One of the big problems in health care is that to qualify treatments, we test over a very wide sample and reject a treatment if it doesn't work better than a placebo. But what about drugs that are 100% effective on 10% of the population, but 0% effective on the other 90%? They're almost certainly rejected. It strikes me that what Future Advisor is doing isn't so much helping you to go on autopilot as getting beyond generic prescriptions and generating customized advice, just as a future MD might be able to do a DNA sequence in his office and generate a custom treatment.

The secret sauce for Future Advisor is the combination of personal data, open government data and proprietary algorithms. The key to realizing value, in this context, is combining multiple data streams with a user interface that's easy for a consumer to navigate. That combination has long been known by another name: It's a mashup. But the mashups of 2012 have something that those of 2002 didn't have, at least in volume or quality: data.

Future Advisor, Redfin (real estate) and Castlight (healthcare) are all interesting examples of entrepreneurs creating data products from democratized government data. Future Advisor uses data from consumers and the U.S. Department of Labor, Redfin synthesizes data from economists and government agencies, and Castlight uses health data from the U.S. Department of Health and Human Services. In each case, they provide a valuable service and/or product by making sense of that data deluge.


May 29 2012

US CTO seeks to scale agile thinking and open data across federal government

In the 21st century, the federal government must go mobile, putting government services and information at the fingertips of citizens, said United States Chief Technology Officer Todd Park in a recent wide-ranging interview. "That's the first digital government result, outcome, and objective that's desired."

To achieve that vision, Park and U.S. chief information officer Steven VanRoekel are working together to improve how government shares data, architects new digital services and collaborates across agencies to reduce costs and increase productivity through smarter use of information technology.

Park, who was chosen by President Obama to be the second CTO of the United States in March, has been (relatively) quiet over the course of his first two months on the job.

Last Wednesday, that changed. Park launched a new Presidential Innovation Fellows program, in concert with VanRoekel's new digital government strategy, at TechCrunch's Disrupt conference in New York City. This was followed by another event for a government audience at the Interior Department headquarters in Washington, D.C. Last Friday, he presented his team's agenda to the President's Council of Advisors on Science and Technology.

"The way I think about the strategy is that you're really talking about three elements," said Park, in our interview. "First, it's going mobile, putting government services at the literal fingertips of the people in the same way that basically every other industry and sector has done. Second, it's being smarter about how we procure technology as we move government in this direction. Finally, it's liberating data. In the end, it's the idea of 'government as a platform.'"

"We're looking for a few good men and women"

In the context of the nation's new digital government strategy, Park announced the launch of five projects that this new class of Innovation Fellows will be entrusted with implementing: a broad Open Data Initiative, Blue Button for America, RFP-EZ, The 20% Campaign, and MyGov.

The idea of the Presidential Innovation Fellows Program, said Park, is to bring in people from outside government to work with innovators inside the government. These agile teams will work together within a six-month time frame to deliver results.

The fellowships are basically scaling up the idea of "entrepreneurs in residence," said Park. "It's a portfolio of five projects that, on top of the digital government strategy, will advance its implementation in a variety of ways."

The biggest challenge to bringing the five programs the US CTO has proposed to successful completion is getting 15 talented men and women to join his team and implement them. There's reason for optimism. Park shared via email that:

"... within 24 hours of TechCrunch Disrupt, 600 people had already registered via Whitehouse.gov to apply to be a Presidential Innovation Fellow, and another several hundred people had expressed interest in following and engaging in the five projects in some other capacity."

To put that in context, Code for America received 550 applications for 24 fellowships last year. That makes both of these fellowships more competitive than getting into Harvard, which received 34,285 applications for its 2012 freshman class. There appears to be considerable appetite for a different kind of public service that applies technology and data for the public good.

Park is enthusiastic about putting open government data to work on behalf of the American people, amplifying the vision that his predecessor, Aneesh Chopra, championed around the country for the past three years.

"The fellows are going to have an extraordinary opportunity to make government work better for their fellow citizens," said Park in our interview. "These projects leverage, substantiate and push forward the whole principle of liberating data. Liberate data."

"To me, one of the aspects of the strategy about which I am most excited, that sends my heart into overdrive, is the idea that going forward, the default state of government data shall be open and machine-readable," said Park. "I think that's just fantastic. You'll want to, of course, evolve the legacy data as fast as you can in that same direction. Setting that as 'this is how we are rolling going forward' — and this is where we expect data to ultimately go — is just terrific."

In the videos and interview that follow, Park talks more about his vision for each of the programs.

A federal government-wide Open Data Initiative

In the video below, Park discusses the Presidential Innovation Fellows program and introduces the first program, which focuses on open data:

Park: The Open Data Initiative is a program to seed and expand the work that we're doing to liberate government data as a platform; to encourage, on a voluntary basis, the liberation of data by corporations as part of the national data platform; and to actively stimulate the development of new tools and services (and enhance existing ones) that leverage the data to help improve Americans' lives in very tangible ways and create jobs for the future.

This leverages the Open Government Directive to say, "Look, the default going forward is open data." It also leverages the directive to "API-ize" two high-priority datasets and, in targeted ways, to go beyond that: to really push to get more data out there in, critically, machine-readable form, in APIs, and to educate the entrepreneurs and innovators of the world that it's there through meetups, hackathons, challenges, and "Datapaloozas."

We're doubling down on the Health Data Initiative. We are also launching a much more high-profile Safety Data Initiative, which we kicked off last week; an Energy Data Initiative, which kicked off this week; an Education Data Initiative, which we're kicking off soon; and an Impact Data Initiative, which is about liberating data with respect to inputs and outputs in the non-profit space.

We're also going to be exploring an initiative in the realm of personal finance, enabling Americans to access copies of their financial data from public sector agencies and private sector institutions. So, the format that we're going to be leveraging to execute these initiatives is cloned from the Health Data Initiative.

This will make new data available. It will also take the existing public data that is unusable to developers (i.e., in the form of PDFs, books or static websites) and turn it into liquid, machine-readable, downloadable data, accessible via API. Then, because we're consistently hearing that 95% of the innovators and entrepreneurs who could turn our data into magic don't even know the data exists, let alone that it's available to them, we'll engage the developer community and the entrepreneurial community with the data from the beginning. Let them know it's there, get their feedback, make it better.
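
As a concrete picture of that liberation step, here is a minimal sketch, in TypeScript on Node, of re-serving a static agency CSV file as machine-readable JSON over a simple API. The file name and route are invented for illustration; this is the general pattern Park describes, not any agency's actual service.

    // Sketch: turn a static CSV that already sits on an agency site into
    // a JSON endpoint that developers can consume programmatically.

    import { createServer } from "node:http";
    import { readFileSync } from "node:fs";

    // Parse a plain CSV (header row plus data rows) into records.
    // Naive parsing for illustration; no quoted fields.
    function csvToRecords(csv: string): Record<string, string>[] {
      const [header, ...rows] = csv.trim().split("\n").map((line) => line.split(","));
      return rows.map((row) =>
        Object.fromEntries(header.map((col, i) => [col, row[i] ?? ""] as const))
      );
    }

    const records = csvToRecords(readFileSync("agency_data.csv", "utf8"));

    createServer((req, res) => {
      if (req.url === "/api/records") {
        res.writeHead(200, { "Content-Type": "application/json" });
        res.end(JSON.stringify(records));
      } else {
        res.writeHead(404);
        res.end();
      }
    }).listen(8080);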

Blue Button for America

Park: The idea is to develop an open source patient portal capability that will replace My HealtheVet, which is the Veterans Administration's current patient portal. This will actually allow the Blue Button itself to iterate and evolve more rapidly, so that every time you add more data to it, it won't require heart surgery. It will be a lot easier, and of course it will be open source, so that anyone else who wants to use it can. On top of that, we're going to do a lot of "biz dev" in America to get the word out about Blue Button and encourage more and more holders of data in the private sector to adopt it. We're also going to work to stimulate more tool development by entrepreneurs that can upload Blue Button data and make it useful in all kinds of ways for patients. That's Blue Button for America.

What is RFP-EZ?

Park: The objective is "buying smarter." The project that we're working on with the Small Business Administration is called "RFP-EZ."

Basically, it's the idea of setting up a streamlined process for the government to procure solutions from innovative, high-growth tech companies. As you know, most high-growth companies regard the government as way too difficult to sell to.

That A) deprives startups and high-growth companies of the government as a marketplace and, B) perhaps even more problematically, deprives the government of their solutions.

The hope here is, through the actions of the RFP-EZ team, to create a process and a prototype through which the government can much more easily procure solutions from innovative private firms.

It A) opens up this emerging market called "the government" to high-tech startups and B) infects the government with more of their solutions, which are, pound for pound, radically more effective and cost-efficient than a lot of the stuff that the government is currently procuring through conventional channels. That's RFP-EZ.

The 20% Campaign

Park: The 20% Campaign is a project that's being championed by USAID. It's an effort at USAID, working with other government agencies, NGOs and companies, to catalyze the movement of foreign assistance payments from cash to electronic payments. So, just for example, USAID pays its contractors electronically, obviously, but the contractor who, say, pays highway workers in Afghanistan, or the police payroll in Afghanistan, has principally paid in cash. And that creates all kinds of waste, fraud, and abuse issues.

The idea is to move to electronic payment, including mobile payment. This has the potential to significantly cut waste, fraud and abuse, and to improve financial inclusion by letting people use their phones to access bank accounts set up for them. That leads to all kinds of good things, including safety: it's not ideal to be carrying around large amounts of cash in highly kinetic environments.

The Afghan National Police started paying certain contingents of police officers via mobile phones and mobile payments, as opposed to cash, and the police officers started reporting up to a 30% raise. Of course, their pay hadn't changed, but when it was in cash, a bunch of it got lost along the way. This is obviously a good thing, but it's even more important when you realize that the amount they ultimately, physically received in cash was less than what the Taliban in the province was paying people to join the Taliban, while the mobile payment, at that same salary level, was greater than what the Taliban was paying. That's a critical difference.

It's basically taking foreign assistance payments through the last mile to mobile.

MyGov is the U.S. version of Gov.uk

Park: MyGov is an effort to rapidly prototype a citizen-centric system that gives Americans the information and resources of government that are right for them. Think of it as a personalized channel for Americans to access information and resources across government, and a way for government to get feedback from citizens about that information and those resources.

How do you plan to scale what you learned while you were HHS CTO to all of the federal government?

Park: Specifically, we're doing exactly the same thing we did with the Health Data Initiative: kicking off each initiative with a "data jam," an ideation workshop where we invite, just like with health data, 40 amazing minds (tech and energy innovators, or tech and safety innovators) to a room, at the White House in the case of the Safety Data Initiative, or at Stanford University in the case of the Energy Data Initiative.

We walk into the room for several hours and say, "Here's a big pile of data. What would you do with this data?" And they invent 15 or 20 new classes of products or services of the future that we could build with the data. Then we challenge them, at the end of the session, to build prototypes or actual working products that instantiate their ideas within 90 days, to be highlighted at a White House-hosted Safety Datapalooza, Energy Datapalooza, Education Datapalooza, Impact Datapalooza, etc.

We also take the intellectual capital from the workshops, publish it on the White House website, and publicize the opportunity around the country: Discover the data, come up with your own ideas, build prototypes, and throw your hat in the ring to showcase at a Datapalooza.

What happens at the Datapaloozas — our experience in health guides us — is that, first of all, the prototypes and working products inspire many more innovators to actually build new services, products and features, because the data suddenly becomes really concrete to them, in terms of how it could be used.

Secondly, it helps persuade additional folks in the government to liberate more data, making it available and machine-readable, as opposed to saying, "Look, I don't know what the upside is. I can only imagine downsides." What happened in health is that when those folks went to a Datapalooza, they actually saw that if data is made available, then at no cost to them and no cost to taxpayers, other people who are very smart will build incredible things that actually enhance their mission. And so they should do the same.

As more data gets liberated, that then leads to more products and services getting built, which then inspires more data liberation, which then leads to more products and services getting built — so you have a virtuous spiral, like what's happened in health.

The objective of each of these initiatives is not just to liberate data. Data by itself isn't helpful. You can't eat data. You can't pour data on a wound and heal it. You can't pour data on your house and make it more energy efficient. Data is only useful if it's applied to deliver benefit. The whole point of this exercise, the whole point of these kickoff efforts, is to catalyze the development of an ecosystem of data supply and data use to improve the lives of Americans in very tangible ways — and create jobs.

We have the developers and the suppliers of data actually talk to each other, create value for the American people, and then rinse, wash, repeat.

We're recruiting entrepreneurs and developers from the outside to join the team of Presidential Innovation Fellows and help with this effort to liberate data, make it machine-readable, get it out there to entrepreneurs, and help catalyze the development of this ecosystem.

We went to TechCrunch Disrupt for a reason: it's smack dab in the middle of the people we want to recruit. We invite people to check out the projects on WhiteHouse.gov and, if they're interested in applying to be a fellow, to indicate their interest. Even if they can't come to DC for 6-plus months to be a fellow, if they want to follow one of the projects or contribute in some way, we are inviting them to express interest in that as well. For example, if you're an entrepreneur and you're really interested in the education space and learning about what data is available in education, you can check out the project, look at the data, and perhaps build something really good to show at the Education Datapalooza.

Is open data just about government data? What about smart disclosure?

Park: In the context of the Open Data Initiatives projects, it's not just about liberation of government health data: it's also about government catalyzing the release, on a voluntary basis, of private sector data.

Obviously, scaling Blue Button will extend the open data ecosystem. We're also doubling down on Green Button. I was just in California to host discussions around Green Button. Utilities representing 31 million households and businesses have now committed to make Green Button happen. Close to 10 million households and businesses already have access to Green Button data.

There's also a whole bunch of conversation happening about, at some point later this year, having the first utilities add the option of what we're calling "Green Button Connect." Right now, the Green Button is a download, where you go to a website, hit a green button and bam, you download your data. Green Button Connect is the ability for you to say as a consumer, "I authorize this third party to receive a continuous feed of my electricity usage data."

That creates massive additional opportunity for new products and services. That could go live later this year.
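
Park's description of Green Button Connect, a consumer authorizing a third party to receive an ongoing feed, maps naturally onto an OAuth-style authorization flow. The sketch below shows that shape in TypeScript; the URLs, scope name, and token fields are illustrative assumptions, not the actual Green Button specification.

    // Sketch of a consumer-authorized data feed: the customer approves
    // access at the utility's site, and the third party exchanges the
    // resulting code for a token it can use to pull usage data.

    const AUTHORIZE_URL = "https://utility.example.com/oauth/authorize";
    const TOKEN_URL = "https://utility.example.com/oauth/token";

    // Step 1: send the customer to the utility to approve access.
    function buildConsentUrl(clientId: string, redirectUri: string): string {
      const params = new URLSearchParams({
        response_type: "code",
        client_id: clientId,
        redirect_uri: redirectUri,
        scope: "usage:read", // hypothetical scope for meter readings
      });
      return `${AUTHORIZE_URL}?${params.toString()}`;
    }

    // Step 2: trade the one-time code for a long-lived token that lets
    // the third party poll the customer's usage feed on a schedule.
    async function exchangeCode(code: string, clientId: string, secret: string) {
      const res = await fetch(TOKEN_URL, {
        method: "POST",
        headers: { "Content-Type": "application/x-www-form-urlencoded" },
        body: new URLSearchParams({
          grant_type: "authorization_code",
          code,
          client_id: clientId,
          client_secret: secret,
        }),
      });
      return res.json(); // e.g. { access_token, refresh_token, ... }
    }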

As part of the education data initiative, we are pursuing the launch and scale up of something called "My Data," which will have a red color button. (It will probably, ultimately, be called "Red Button.") This is the ability for students and their families to download an electronic copy of their student loan data, of their transcript data, of their academic assessment data.

That notion of people getting their own data, whether it's your health data, your education data, your finance data, your energy use data, that's an important part of these open data initiatives as well, with government helping to catalyze the release of that data to then feed the ecosystem.

How does open data specifically relate to the things that Americans care about: access to healthcare, reducing energy bills, giving their kids more educational opportunities, and job creation? Is this just about apps?

Park: In healthcare, for example, you'll see a growing array of examples that leverage data to create tangible benefits for Americans in many, many ways. Everything from helping me find the right doctor or hospital for my family, to being notified of a clinical trial that fits my profile and could save my life, to getting the latest and greatest information about how to manage my asthma or diabetes via government knowledge in the National Library of Medicine.

There is a whole shift in healthcare systems away from pay-for-volume of services to basically paying to get people healthy. It goes by lots of different names — accountable care organizations or episodic payment — but the fundamental common theme is that doctors and hospitals increasingly will be paid to keep people healthy, to coordinate their care, and to keep them out of the hospital and out of the ER.

There's a whole fleet of companies and services that utilize data to help doctors and hospitals do that work, like utilizing Medicare claims data to help identify the segments of a patient population that are at real risk of ending up in the ER or the hospital soon. There are tools that help journalists easily identify public health issues, like healthcare outcome disparities by race, gender and ethnicity. There are tools that help county commissioners and mayors understand what's going on in a community from a health standpoint and make better policy decisions, like showing them food deserts. There's a whole fleet of rapidly growing services for consumers, doctors, nurses, journalists, employers and public policy makers that help them make decisions, deliver improved health and healthcare, and create jobs, all at the same time.

That's very exciting. A subset of those products and services self-identify to us to be exhibited at the Health Datapaloozas. Look at the 20 healthcare apps that were at the first Datapalooza, or the 50 that were at the second. This year, 230 companies are being narrowed down to a total of about 100 that will be at the Datapalooza. They collectively serve millions of people today, either through brand new products and services or through new features on existing platforms. They help people in ways that we would never have thought of, let alone built.

The taxpayer dollars expended here were zero. We basically just took our data, made it available in machine-readable format, educated entrepreneurs that it was there, and they did the rest. Think about these other sectors, and think about what's possible in those sectors.

In education, with the data that we've made available, you can imagine much better tools to help you shop for the college that will deliver the biggest bang for your buck and is the best fit for your situation.

We've actually made available a bunch of data about college outcomes, and we are making more data available in machine-readable form so it can feed college search tools much better. We are going to enable students to download machine-readable copies of their own financial aid applications, student loan data and school records. That will really turbocharge "smart scholarship" and school search capabilities for those students. You can mash that up with college outcomes in a really powerful, personalized college and scholarship search engine enabled by your personal data plus machine-readable government data. Tools that help kids and their parents pick the right college for their education and get the right financial aid, that's something government is going to facilitate.

In the energy space, there are apps and services that help you leverage your Green Button data and other data to really assess your electricity usage compared to that of others and get concrete tips on how you can actually save yourself money. We're already seeing very clever, very cool efforts to integrate gamification and social networking into that kind of app, to make it a lot more fun and engaging — and make yourself money.

One dataset that's particularly spectacular, and that we're making a lot more usable, is the EnergyStar database. It's got 40,000 different appliances, everything from washing machines to servers, that consumers and businesses use. We are creating a much, much easier-to-use public, downloadable EnergyStar database. It's got really detailed information on the energy use profiles and performance of each of these 40,000 appliances and devices. Imagine that integrated into much smarter services.

On safety, the kinds of ideas that people are bringing together are awesome. They're everything from using publicly available safety data to plot the optimal route for your kid to walk home or for a first responder to travel through a city and get to a place most expeditiously.

There's this super awesome resource on Data.gov called the "Safer Products API," which is published by the Consumer Product Safety Commission (CPSC). Consumers send in safety reports to CPSC, but until March of last year, you had to file a FOIA [Freedom of Information Act] request with CPSC to get them. What they've now done is publish the entire database of these reports, no FOIA request required, and make it available through an API.

One of the ideas that came up relates to how people buy products on eBay, Craigslist, etc. all the time, yet some huge percentage of Americans never get to know about a recall — a recall of a crib, a recall of a toy. And even when a company recalls new products, old products are still in circulation. What if someone built the ability to integrate the recall data and attach it to all the stuff in the eBays and Craigslists of the world?
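
A sketch of that mash-up idea, in TypeScript: query recall data by product name before a secondhand listing goes live, and flag the listing if anything matches. The endpoint shape follows the publicly documented CPSC recall REST service, but treat the exact URL and field names here as assumptions to verify.

    // Sketch: check a listing's product name against CPSC recall data.

    interface Recall {
      RecallID: number;
      Title: string;
      Description: string;
    }

    async function findRecalls(productName: string): Promise<Recall[]> {
      const url =
        "https://www.saferproducts.gov/RestWebServices/Recall?format=json" +
        `&ProductName=${encodeURIComponent(productName)}`;
      const res = await fetch(url);
      if (!res.ok) throw new Error(`CPSC request failed: ${res.status}`);
      return res.json();
    }

    // A marketplace could run this when a listing is created:
    //   const recalls = await findRecalls("drop-side crib");
    //   if (recalls.length > 0) { /* attach a recall warning */ }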

Former federal CIO Vivek Kundra often touted government recall apps based on government data during his tenure. Is this API the same thing, shared again, or something new?

Park: I think the smartest thing the government can do with data like product recalls data is not build our own shopping sites or our own product information sites: it's to get the information out there in machine-readable form, so that lots and lots of other platforms that already have audiences of millions of people, and that are really good at creating shopping and product-comparison experiences, get the data into their hands and can integrate it seamlessly into what they do. I feel that that's really the core play the government should be engaged in.

I don't know if the Safer Products API was included in the recall app. What I do know is that before 2011, you had to file a FOIA request to get the data. I think that even if the government included it in some app the government built, it's important for it to get used by lots and lots of other apps that have a collective audience massively greater than any app the government could itself build.

Another example of this is the Hospital Compare website. The Hospital Compare website has been around for a long time. Nobody knows about it. A survey found that 94% of Americans didn't know hospital quality data was available, let alone that there was a Hospital Compare website. So we A) made the hospital care data downloadable and B) deployed it, a year and a half ago, in API form at Medicare.gov.

That makes the data much easier to incorporate for lots of other platforms, platforms that are far more likely than HospitalCompare.gov to be able to present the information in actionable forms for citizens. Even if we build our own apps, we have to get this data out to lots of other people who can help people with it. To do that, we have to make it machine-readable, we have to put it into RESTful APIs — or at least make it downloadable — and get the word out to entrepreneurs that it's something they can use.

This is a stunning arbitrage opportunity. Even if you take all this data and you "API-ize" it, it's not automatic that entrepreneurs are going to know it's there.

Let's assume that the hospital quality data is good — which it is — and that you build it and put it into an API. If nobody knows about it, you've delivered no value to the American people. People don't care whether you API-ize a bunch of data. What they care about is that when they need to find a hospital, like I did for my baby, they can get that information.

The private sector, in the places where we have pushed the pedal to the metal on this, has demonstrated an incredible ability to make this data a lot more relevant and to help a lot more people with it than we could have by ourselves.


May 22 2012

A gaming revolution, minus the hype

In the following interview, "Playful Design" author John Ferrara (@PlayfulDesign) explains what he sees as the real gaming revolution — not "gamification," or the application of gaming characteristics to existing applications and processes, but how games themselves can and will be a "force of cultural transformation." Ferrara also reveals five universal principles of good game design.

Our interview follows.

How are mobile and social technologies affecting game design and the evolution of gaming technology?

John Ferrara: One of the really surprising things about modern smartphones and tablets is that they've turned out to be such credible gaming platforms. They open doors to new ways of experiencing games by giving designers access to touchscreens, accelerometers, cameras, microphones, GPS, and Internet connectivity through a single device. They also allow games to be experienced in new contexts, enjoyed on the train to work, in the minutes between meetings, and while you're out with friends. The traditional gaming model, where players sit passively in one place in the home and stare at a fixed screen, seems stodgy and limiting by comparison.

The funny thing about social technology is that before we had video games, gaming was almost always a social activity. You needed to have multiple people to play most board games, card games, and sports — in fact, the game was often just a pretense for people to get together. But then video games made solitary experiences more of the norm. Now social technology is bringing gaming back to its multiplayer roots, but it's also going beyond what was ever possible before by enabling hyper-social experiences where you're playing with dozens of friends and family at once. Even though you may be separated from these people in space and time, you have an intimate sense of shared presence and community when you're playing. That's revolutionary.

How do you see the social media aspects of gaming seeping into day-to-day life?

John Ferrara: Games certainly can transform the workplace, though I want to caution that it's very easy to make the mistake of dressing up everyday work activities as games by just tacking on some points and badges. That's not game design, and people will recognize that it's not. In the process of failing, approaches like this generate cynicism toward the effort. Games need to be designed to be games first and foremost. They must be intrinsically rewarding, enjoyed for their own sake.

That said, I absolutely believe that games can work at work. As you suggest, for example, they have great strengths for training. Games create a safe space for people to test out their mastery of a set of skills in ways that aren't possible or practical in the real world. They can also help people figure out how best to handle different situations. Say, for example, that you created a game to develop management skills. You might allow players to assign values to their in-game avatars like "nurturing," "autocratic," or "optimistic," which lead to different behavior paths. Players could then examine how these traits play out in a situation filled with characters who have different values like "dependability," "autonomy," and "efficiency." A structure like this could not only impart insight about management styles, but also invite introspection about how an individual's own personality traits may lead to success and failure in the real world.

In your book's introduction, you say, "I hope to start moving toward a post-hype discussion of how games can most effectively achieve great things in the real world." Who is leading the way — or at least moving in the right direction — and what are they doing?

John Ferrara: You know, there's so much really inventive work being done right now. Recently, I've been playing a lot of "Zombies, Run!," and I think it's great. This is a game for smartphones that overlays a narrative about survivors in a zombie apocalypse onto your daily run. As you're out getting your exercise, you're listening to the game events as they unfold, and you can hear the zombies closing in. It's a great use of fantasy, and it plays as a true game with meaningful choices and conflict.

There's also a great group at the University of Wisconsin-Madison that's developed a smartphone app called ARIS, which builds game scenarios into physical locations, and they've developed dozens of applications for it. One of them is being developed as a museum tour for the Minnesota History Center, giving people quests to complete by scanning objects in the exhibit and then using them to complete objectives in a story line. The museum is actually changing the way the exhibit is laid out to better accommodate the gameplay, moving away from the traditional snaking path to more of an open layout that allows players to move more freely between the interactive displays to solve the game's challenges.

Some of the thought leaders who I really admire include Eric Klopfer and Scot Osterweil at MIT, Ian Bogost at the Georgia Institute of Technology, and Jane McGonigal. A common thread among these thinkers is their emphasis on games themselves as a force of cultural transformation, rather than the simplistic "gamification" of software applications that leads to little or no meaningful change.

What about engineering games like "Foldit" — with improved UX, could this type of crowdsourced gaming become a viable research tool?

John Ferrara: This is what's been called "human computation," where a group of people work together to solve some complex problem as a by-product of some other action, like playing a game. Luis von Ahn at Carnegie Mellon describes games as algorithms that are executed by people rather than machines, and I think that's a really fascinating idea. Foldit is a great example. This is a puzzle game where players try to figure out how to fold chains of proteins. It's a problem that's very well suited to human computation because it requires a type of intuitive reasoning that's very difficult for actual computers. Foldit made big news last fall when the people playing it decoded the structure of a protein related to a virus that causes AIDS in monkeys, one that had eluded researchers for years.

This is a wonderful demonstration of how this type of game can be really valuable to researchers. At the same time, I'm very critical of Foldit because I think its gameplay experience is kind of awful. It's very difficult to figure out which actions lead to the results you see on-screen — like why you're awarded points the way you are — and there's not a strong sense of objectives or conflict. These design issues place limits on the appeal of Foldit, and that's a big problem because human computation works better the more people you have playing. If the gameplay were really compelling and fun, then the sky would be the limit.

How do you see the collection and use of gaming data evolving?

John Ferrara: Games can produce enormous volumes of data because it's really simple to gather every little interaction the player has in the game and report it all back to a central server. This has immediate applications for game design itself. Zynga, for example, uses data to determine which design choices create greater tendencies for players to stay engaged longer, involve more friends, or pay to enhance the game experience. I expect this kind of data collection and analysis to become the norm because companies will be more successful the better they can do it.

I would suggest that financial services could be one of the biggest secondary beneficiaries of such data because there's so much to learn about how people make financial decisions under different circumstances. Staying with the Zynga theme, suppose players have the option of investing in any of a variety of different farm crops, each of which has different strengths and vulnerabilities to environmental conditions. How do players choose which ones they should purchase? How do they appraise risk and reward? Which presentations of information lead to a better understanding of a crop's attributes? Which lead people to make more appropriate choices for their goals? All of these questions can be examined quantitatively through games and can lead to greater insights into the innate qualities of human psychology that drive investor behavior and decision making.
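
To illustrate the plumbing Ferrara describes, here is a minimal TypeScript sketch of game-side telemetry: each player interaction becomes a small event, batched back to a central server so tracking never interrupts play. The endpoint and field names are invented for the example.

    // Sketch: queue every player interaction and ship it in batches.

    interface GameEvent {
      playerId: string;
      action: string;                  // e.g. "plant_crop", "invite_friend"
      detail: Record<string, unknown>; // action-specific payload
      ts: number;                      // client timestamp, ms since epoch
    }

    const queue: GameEvent[] = [];

    function track(playerId: string, action: string, detail: Record<string, unknown> = {}) {
      queue.push({ playerId, action, detail, ts: Date.now() });
    }

    // Flush on a timer so gameplay never waits on the network.
    async function flush() {
      if (queue.length === 0) return;
      const batch = queue.splice(0, queue.length);
      await fetch("https://telemetry.example.com/events", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(batch),
      });
    }

    setInterval(flush, 10_000); // report every ten seconds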

What are some emerging best practices for game technology?

John Ferrara: Best practices vary widely depending on the game and the type of player motivations to which it appeals. For example, games meant to promote a sense of immersion, like "Red Dead Redemption," remove as many user interface elements from immediate view as possible. Data-intensive games like "Tiny Tower" benefit by compressing as much information and as many functional controls as they can into the smallest possible space.

With that in mind, there are some clear universal principles for the design of all games:

  • Skip the manual and embed as much instruction into the gameplay as you can.
  • Fit the game into the player's lifestyle so that he or she can play when and where it's convenient.
  • Don't cheat — people recognize when a game unfairly stacks the odds against them and they resent it.
  • Make sure players always have a clear sense of cause and effect, and that they understand what actions are available to them.
  • Above all, playtest, playtest, playtest. It's impossible to fully anticipate how people will react to a game short of actually watching them play it.

In the book, you argue that games should be used as instruments of persuasion. Why is this?

John Ferrara: To be clear, it's not that all games should be persuasive but that people who want to persuade should look at games very seriously; I believe they present an ideal way to convince people to adopt a particular point of view or to move them to action in the real world. Ian Bogost describes games as a form of "procedural rhetoric," meaning that they communicate messages through participation in the experience. This creates a lot of advantages for persuasion. For example, it allows a kind of self-directed discovery where people adopt the designer's message as a working hypothesis and then test its truthfulness through the gameplay. That's a really powerful way to get your point across. Furthermore, it builds a sense of personal ownership of the insight the player has uncovered.

Are there ethical concerns related to persuasion in gaming environments?

John Ferrara: As there are for any medium, certainly. Film, television, books, billboards, oratory, and posters have all been appropriated for less-than-above-board purposes. Whether it's propaganda, demagoguery, misleading advertising, or dirty politics, you'd expect that games would be subject to the same kinds of unethical practices. It's especially important to be aware of this in the case of games, considering how compelling a procedural rhetoric can be. Rather than casting a negative light on games, however, I think that speaks to their power to effect meaningful change in the real world. I believe that games can achieve great things, and I expect that over the next decade we'll see them doing a lot of good.

This interview was edited and condensed.


May 10 2012

Commerce Weekly: The competitive push toward mobile payment

Here are a few of this week's stories from the commerce space that caught my eye.

Mobile payments are coming, one way or another

The New York Times (NYT) took a look this week at the push toward mobile payments and the various paths toward that end. The push isn't only coming from a consumer desire for a mobile wallet, but also from the payment companies. The NYT's post reports:

"Merchants are facing heavy pressure to upgrade their payment terminals to accept smart cards. Over the last several months, Visa, Discover and MasterCard have said that merchants that cannot accept these cards will be liable for any losses owing to fraud."

This could be the push needed for mobile payment, at least in the U.S., to get over the technology hump that has thus far been hindering it from catching on. Jennifer Miles, executive vice president at payment terminal provider VeriFone, told the NYT, "Everybody is going to be upgrading ... Before the credit card companies made their announcements, almost no merchants were buying terminals with smart card and NFC capabilities." She says VeriFone no longer installs payment terminals without NFC readers.

NFC technology, however, requires upgrades not only from merchants, but also from consumers. The post reviews mobile payment solutions from PayPal and Square, noting that the direction for these two companies may be more consumer-centric:

"Both PayPal and Square say that asking customers to buy NFC-enabled phones and wait for merchants to install new hardware is folly. Neither company says it has plans to incorporate NFC into its wallet."

This consumer-centric approach might be part of what's behind VeriFone's announcement this week that it would jump into the payment processing fray. Bloomberg reports:

"VeriFone Systems Inc. (PAY), the largest maker of credit-card terminals, will offer an attachment that lets mobile devices accept credit and debit cards, making a deeper push into a market pioneered by Square Inc. and EBay Inc. (EBAY)'s PayPal ... VeriFone's version will allow partners such as banks to customize the service to transmit coupons and loyalty points to consumers, said Greg Cohen, a senior vice president at San Jose, California-based VeriFone."

VeriFone's system will work with Apple and Android mobile devices.


MasterCard releases PayPass

MasterCard announced its new PayPass Wallet Services this week. The company describes the global service in a press release:

PayPass Wallet Services delivers three distinct components — PayPass Acceptance Network (PayPass Online and PayPass Contactless), PayPass Wallet and PayPass API. These services enable a consistent shopping experience no matter where and how consumers shop, as well as a suite of digital wallet services, and developer tools to make it easier to connect other wallets into the PayPass Online acceptance network.

In other words, it's designed to work with any sort of digital wallet used by its partners. According to the release, American Airlines and Barnes & Noble are in the initial group of merchant partners.

One of the big differences between MasterCard's system and those of its competitors is its open nature. PC World reports:

What sets MasterCard's offering apart from digital wallet systems announced by Visa, Google, PayPal and others is how much the company is opening up its platform to third parties, said Gartner wireless analyst Mark Hung. Banks and other partners will be able to adopt PayPass Wallet Services in two different ways: They can use MasterCard's own service under their own brand or just use the company's API (application programming interface) to build their own platform.

Mobile payment readiness, global edition

How ready is the world for mobile payments? MasterCard has that covered this week, too. In a guest post at Forbes, vice president of MasterCard Worldwide Theodore Iacobuzio wrote about the launch of the MasterCard Mobile Payments Readiness Index (MPRI), a data-driven survey of the mobile payments landscape. Iacobuzio says the index "assesses and ranks 34 global economies in terms of how ready (or not) they are for mobile payments of three types":

  • M-commerce, which is e-commerce conducted from a mobile phone or tablet.
  • Point-of-Sale (POS) mobile payments, where a smartphone becomes the authentication device to complete a transaction at checkout.
  • Person-to-Person (P2P) mobile payments that involve the direct transfer of funds from one person to another using a mobile device.

Iacobuzio says that "one of the top-level findings is that unless all constituents — banks, merchants, telcos, device makers, governments — collaborate on developing new solutions and services, the mainstream adoption of mobile payments will be slower, more contentious and more expensive." He discusses the needs for mobile payments around the world, including in developed, developing and emerging countries.

But who's ready? The following image is a screenshot of the index summary. Note that no country has yet hit the "inflection point":

[Screenshot: summary view of the MasterCard Mobile Payments Readiness Index (MPRI).]

Dan Rowinski at ReadWriteWeb has a nice analysis of the index. In part, he says much of the finance world, including MasterCard, may be viewing the mobile payment situation through "rose-colored glasses":

"For instance, why do mobile payments skew heavily toward young males in developed countries? The answer, more or less, is because it is cool. The actual need for mobile payments (NFC or otherwise) is not as clear in the U.S. as it is in other countries, like Kenya and Singapore."

Mobile shopping needs faster carts

Michael Darnaud, CEO of i-Cue Design, proposed a solution this week for one of the major problems with mobile shopping: speed, or lack thereof. In a post at Mobile Commerce Daily, he says the steps to a purchase simply take too long because of the number of data transfers involved:

"Just clicking a button to 'add,' 'delete' or 'change quantity' on the mobile Web requires sending transaction data from the shopper's mobile device to the vendor's server — average three to five seconds — via cell towers, not high-speed cables. These interim steps, long before checking out, are the challenge — it is all about time."

"Time is money" is no joke in mobile commerce. Darnaud notes: "A recent Wall Street Journal article declared that sales at Amazon increase by 1 percent for every 100 milliseconds it shaves off download times." To that end, he suggests an improvement to online cart technology that "reduces the time it takes to 'add,' 'delete' or 'change quantity' by virtually 100 percent because it eliminates the need for a server call for each of those commands." He describes his solution:

"This 'instant-add' cart solution requires nothing but familiar HTML and JavaScript. It is an incremental change that can be inserted into virtually any new or existing cart.

And what that means to a customer arriving at your site on the mobile Web is that he or she can see a product, click 'add to cart' and have no forced page change or reload or waiting time at all as a result."

Darnaud also notes the "elegance" of the solution: "... it forms a perfect bridge between desktop and mobile Web. The reason is simply that it works identically on both, via the browser."
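
Darnaud doesn't publish his implementation, but the pattern he describes, mutating cart state locally and deferring the server round-trip until checkout, can be sketched in a few lines of browser-side TypeScript. The element ID and checkout route here are illustrative, not i-Cue Design's actual code.

    // Sketch of an "instant-add" cart: add/remove update local state and
    // the page immediately; the server is contacted only at checkout.

    const cart = new Map<string, number>(); // sku -> quantity

    function render() {
      const count = [...cart.values()].reduce((a, b) => a + b, 0);
      const badge = document.getElementById("cart-count");
      if (badge) badge.textContent = String(count); // no reload, no server call
    }

    function addToCart(sku: string, qty = 1) {
      cart.set(sku, (cart.get(sku) ?? 0) + qty);
      render(); // instant feedback, zero network round-trips
    }

    function removeFromCart(sku: string) {
      cart.delete(sku);
      render();
    }

    // The one server call happens when the cart actually matters.
    async function checkout() {
      await fetch("/api/checkout", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(Object.fromEntries(cart)),
      });
    }

Because the cart lives entirely in the page until checkout, every "add" or "delete" costs zero network time, which is exactly the three-to-five-second round-trip Darnaud is trying to eliminate.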

Tip us off

News tips and suggestions are always welcome, so please send them along.


May 08 2012

Think of it like a political campaign: Baratunde Thurston's book marketing

Since its release in late January, Baratunde Thurston's book, "How To Be Black," has sold more than 15,000 copies, hitting the New York Times bestseller list out of the gate. Thurston, The Onion's former director of digital, and Craig Cannon, his campaign manager, have employed a slew of creative tactics for selling the book. In a recent interview, Thurston talked with me about what's worked, what hasn't, and the secret sauce for their campaign.

Before you dive in, I'll note that Thurston — in addition to having written a terrific book — has a gift for making people feel like they want to be part of his world. Although I'd read excerpts of the book early on, as he included them in his email newsletter, and although I was given both an electronic and a print copy of the book, I still bought it, just to support him. How can you make that magic happen for your book? Read on.

Any sales numbers we can share for context?

Baratunde Thurston: We went into this with a goal of significant pre-sales to hit the New York Times bestseller list. How many does it take to do that? Anywhere from 1,000 to 100,000 in sales, and it depends on what else has come out that week. The pre-sales all accrue to one week, so you can stack the deck. We had 20,000 pre-sales as a goal. That was insane. We wound up with several hundred pre-sales, which was helpful, but not a juggernaut. We hit the list at No. 21. Mostly, that was useful because The Times had me do a joint interview with Charles Murray that ran in print and online during Black History Month. And that drew some attention.

What we learned from this is that people do not buy books. They like to talk about books. They like to talk about buying them. But they do not buy them.

Also for context, how important are book sales for you?

Baratunde Thurston: Sales are important. I want people to read the book. I want them to spend money. This wasn't a vanity publishing project, but it wasn't a get-rich scheme, either. It was a way for me as a creative person to point to something solid. I speak, I tweet; it's gone. I publish in very forgettable platforms. A book has some staying power. It's a cultural object, a physical object on which you can focus some attention.

What elements of the campaign worked?

Baratunde Thurston: We decided to treat this like a political campaign — more about the issues than the politician. We asked ourselves: Can we create a sense of movement that has other people seeing themselves in a book about me?

There was a process to arrive at the plan, and it equaled me coming up with the marketing. I knew I had to get on it when I was on a trip somewhere, and I got an email from Harper [Collins Publishers]: "Do you think you'll tweet about the book?"

There was a big research phase, talking to people I know or was introduced to, like Gary Vaynerchuk, Deanna Zandt, Eric Ries, Amber Rae at the Domino Project, and Tim Ferriss. There were a lot of conversations had and articles read. There's no excuse to make things up completely or rely on hope.

We went with a content-oriented promotion strategy — check out this video or tweet or interview. So, for example, we wound up with 50 short videos that we could build into the book and the campaign.

The book's website was the heart. We posted a new question every day in February, seeding it with the video I had already shot.

For speaking gigs I'd already booked, we asked if we could add book sales.

We had field ops — the Street Team. They were the ideal beta group: 115 people, half active and half of those really dedicated. We thought each street team member would equate to sales, but it's turned out to be more important as a group that lets us test ideas.

We also identified the high-value donors — people who are going to deliver a bunch of votes or cash. I went through all my contacts manually, about 4,500 people, and scrubbed that down to about 1,800 real people. I tagged them lightly, looking at them in terms of relevance. And then I started reaching out to them one by one.

"Fresh Air" with Terry Gross worked. MSNBC appearances worked.

How did the Street Team work out?

Baratunde Thurston: We tried to build a very loyal, very intense community. People had to apply. We asked them to participate in web video chats. It was like they made it through basic training. And that was kind of the goal: to have a group of advocates you can deploy in different ways. At launch parties across the country, they help out. Craig crashes on their sofas. They provide a support network; they're the volunteer fire department.

They also became an early-warning system for how the public would interpret the book. They weren't biased the way the other people close to me were. For instance, during Street Team video chats, they asked questions the public would ask. So I'd go to launch parties and interviews really prepared with answers.

This notion of showing the book cover in the hands of people as an image of value — they helped create that. Somebody Photoshopped Michael Phelps holding it, and that was one of the first we saw. We seeded that idea with the Street Team, and they ran with it. The Photoshopping became redundant because actual people were holding the book and having their pictures taken. It turned into a photomeme as people began to post them [to Twitter and the "How To Be Black" website].

We had a roadmap of things we had to do, and one thing we didn't miss was the Amazon reviews. We wanted to get them up within hours of the book's availability to set the trend for five-star reviews. We had a video chat with the Street Team right before the Amazon release. Within hours, we had 10 five-star reviews. That signaled to the Amazon buying market that it was a worthwhile book, and the Street Team provided the initial traction. And it's not just the number of five-star reviews; it's also how many of those reviews were marked as helpful. We basically created our own Amazon Vine program.

What didn't work the way you expected?

Baratunde Thurston: The goal of 20,000 pre-sales didn't work. Every weekday in February, I should have been doing something for Black History Month. That didn't quite work, because the lead time for booking events is six months to a year, and we weren't on top of it early enough. As I mentioned, having the Street Team directly account for a certain number of units distributed didn't quite work.

What role did Craig Cannon play?

Baratunde Thurston: I knew Craig loosely at the Onion [where he was graphics editor]. He invited me to lunch to talk about something he was working on, a project with Skillshare. About five or six months before the book launched, we did a class on how to be black. That was a good test for our relationship.

We had a huge Google doc with everything laid out. Craig set up the Tumblr, the Facebook page, a private group for the Street Team, the tour support, the admin support. He's running the merchandise business. The black card — he just went off and built it.

If I had done all of that myself, I would have done it worse. Even the two of us are only hitting 60% of capacity. We should have had merch ready at launch. At some of our book events, we didn't have books.

For people who don't have a Craig, the most important thing is personal one-on-one outreach. Look at the market of people interested in your topic, interested in you. Start with your inner circle. I had an epiphany with Gary Vaynerchuk. I asked: "Did I ever ask you to buy my book?" He said, "Yeah, I bought it yesterday." I had talked about his book, but cash on the table — it didn't happen. The lesson is to identify everyone you know and send each a personal note explaining: "A) buy the book; B) this means a lot to me. You owe me or I will owe you. Here are some things you can do to help: If you have speaking opportunities, let me know. For instance, I would love to speak at schools." Make it easy for people who want to help you. Everything else is bonus. If you haven't already converted the inner circle, you've skipped a critical step.

What specific marketing technique would you recommend to other authors?

Baratunde Thurston: You can make everything easier by figuring out what value to attach your book to. We've been working under the over-arching theme of identity. If you blog every week about why your book is so awesome, nobody cares. If you're producing relevant, interesting content, they get attached to you in context. That leads to sales. It's a good model.

Once you've actually articulated what that value is, make everything else consistent with that. For us, it was comfort with yourself and your identity — everybody has an outsider identity. That provides a roadmap for interviews and events. It establishes the brand and reinforces it. This approach requires time and consideration, but not cash. It's not just reactive. For instance, this book is about DIY culture that makes the world a better place. With that approach, somebody like my friend Nora Abousteit can get involved, even though race, per se, isn't her issue.

Anything else you want to add?

Baratunde Thurston: There was a very important tactical layer, the secret sauce: Knod.es [Note: this is launching to the public soon]. Ron Williams, Knod.es founder, has been an essential shadow. The types of services Knod.es provides — pre-qualified leads — are going to be important for everything. We were sending targeted blasts around and used Knod.es to augment that. The results have been incredible.

For example, we wanted people to submit more content to the How To Be Black Tumblr. After launch, it had faded. We recruited 18 people [some from the Street Team] to push a message through Facebook and email. We had a 50% conversion rate on those messages, and got in nine stories without trying that hard. In the same way you approach your network of friends, you can approach wider social networks, where you don't know people as well but they still want to help. You still have to make it easy for people to help you, but finding the value in your existing relationships — that's incredibly powerful. "The Today Show" isn't available to everyone.

This interview was edited and condensed.

Related:

May 03 2012

Commerce Weekly: Mobile payments and the consumer experience

Here are a few stories from the commerce space that caught my eye this week.

Don't forget the mobile payment UX

Competition in the mobile payment space is heating up, as Square's payment pace closes in on PayPal's, according to a report at Bloomberg. The report highlights a recent move by Square to lure in merchants: "The San Francisco company is making cash from sales before 5 p.m. on any day available in merchants' accounts on the next business day, compared with as many as five days out for other processors."

The real endgame, though, will be adoption by consumers, and Lauren Goode over at All Things Digital addressed the battle to control digital wallets from a UX perspective. Goode reports on her experience shopping around San Francisco and New York, paying either with Pay with Square or PayPal's mobile app. She says both apps are easy to use and that the biggest issue for both was the lack of merchants accepting payments of this type. Another issue she mentions caught my eye, however — the execution inconsistencies:

"Square has been touting the idea that this app actually allows for 'hands-free' payments ... One shop I bought coffee at didn't see my name right away, even though I had turned on the tab in the iPhone version of the app. I tried to buy another item using the app on a Samsung Galaxy Nexus Android phone, and my name didn't appear at all on the list of customers in the store.

But at another downtown coffee shop I was able to walk in, place my order and say, 'Charge it to Lauren Goode' — without taking my phone out of my pocket — and the transaction was completed in seconds."

And regarding a beef jerky purchase using PayPal's app:

"Since data service on my phone happened to be particularly bad in that area, I initially had trouble dropping the digital pin within the app that's supposed to let the merchant know I was there. The merchant also had to reboot his phone once to process the payment on his end. But once I switched over to Wi-Fi, I had four options for paying him ..."

Goode also reports on location-based features and the importance of merchant-provided content — her entire account is well worth the read.

X.commerce harnesses the technologies of eBay, PayPal and Magento to create the first end-to-end multi-channel commerce technology platform. Our vision is to enable merchants of every size, service providers and developers to thrive in a marketplace where in-store, online, mobile and social selling are all mission critical to business success. Learn more at x.com.


E-gifting and mobile commerce get social

Social gifting is gearing up to be one of the next big mobile commerce booms, according to a report at Reuters. The post focuses on the launch of Wrapp, a Sweden-based app startup, and highlights the blurring line between the online and brick-and-mortar commerce worlds. It describes the app:

"It allows Facebook friends to buy each other gift cards from participating retailers either individually or by teaming up, which they can store on their mobile devices and redeem either online or inside physical stores. Retailers like it because there is little marketing cost and because customers often end up buying more once they are inside the store."

Wrapp's CEO Hjalmar Winbladh told Reuters, "Brick-and-mortar retailers are all looking for new, more efficient ways to drive sales into stores without diluting their brands ... we wanted to really see how retailers can leverage the megatrends of smartphones and social networks."

TheFind also launched a social commerce app this week. It's called Glimpse, and it's a Facebook app that, according to the press release, "uses Facebook Like data from across the web to instantly personalize and curate a stream of fashion and design items that are trending, tailored to the tastes and preferences of an individual and their community of Facebook friends."

Ryan Kim at GigaOm calls the shopping discovery app a Pinterest rival and reports: "TheFind's CEO Siva Kumar told me TheFind has been working with Facebook for some time to bridge the two data sets, mapping a user's likes to products, their taxonomy and a user's profile. Now, when a Glimpse user likes a page, the service can determine what product the URL is referring to, can pull up the most recent availability and pricing data and also fit it into different styles and trends."


Move over smartphones, NFC to unlock experiences for Nook users

In an interview at CNN Fortune, Barnes & Noble CEO William Lynch talked about the future of the Nook and the recently announced partnership with Microsoft. In talking about opportunities in offline-online integration, Lynch offered an example of how B&N will improve customers' experiences:

"We're going to start embedding NFC [near-field communications] chips into our Nooks. We can work with the publishers so they would ship a copy of each hardcover with an NFC chip embedded with all the editorial reviews they can get on BN.com. And if you had your Nook, you can walk up to any of our pictures, any our aisles, any of our bestseller lists, and just touch the book, and get information on that physical book on your Nook and have some frictionless purchase experience. That's coming, and we could lead in that area."

Lynch told Fortune the NFC experience could appear as early as this year.

Related:

May 02 2012

The UK's battle for open standards

Many of you are probably not aware, but there is an ongoing battle within the U.K. that will shape the future of the U.K. tech industry. It's all about open standards.

Last year, the Cabinet Office ran a consultation on open standards covering 970 CIOs and academics. The result of this consultation was a policy (PDF) in favour of royalty-free (RF) open standards in the U.K. I'm not going to go through the benefits of open standards in this space, other than to note that they are essential for the U.K.'s future competitive position, for spurring innovation and creating a level playing field in the industry. For those who wish to read more on this subject, Mark Thompson, the only academic I know to have published a paper on open standards in a quality peer-reviewed journal, has provided an excellent overview.

Normally, I put these battles into an historical context, and I certainly have a plethora of examples of past industries attempting to lobby against future change. However, to keep this short I'll simply note that the incumbent industry has reacted to the Cabinet Office policy with attempts to redefine open standards to include non-open FRAND (fair, reasonable and non-discriminatory) licenses and to portray a legitimate debate of RF versus FRAND where none exists.

Whilst this is clearly wrong and underhanded, there's another story I wish to focus on. It relates to the accusations that the meetings have been filled with "spokespeople for big vendors to argue in favour of paid-for software, specifically giving advocates of FRAND the chance to argue that free software on RF terms would be a bad thing" as reported by TechWeek Europe.

The back story is that after the Government policy on open standards was put in place, various standards bodies and other vested interests pressured the Cabinet Office into a U-turn and a second consultation. The arguments used were either fortuitous misunderstandings of the policy or willful misinformation in favour of current business interests. The Cabinet Office appeared to relent to the pressure and undertook a second set of consultations. What happened next shows the sorry behaviour of lobbyists in our industry.

"Software patent heavyweights piled into the first public meeting," filling the room with unrepresentative views backed up by vendors flying in senior individuals from the U.S. It apparently seems that the chair of the roundtable was himself a paid lobbyist working on behalf of those vested interests, a fact that he forgot to mention to the Cabinet Office. Microsoft has now been "accused of trying to secretly influence government consultation."

What's surprising is that the majority of this has been uncovered by two journalists — Mark Ballard at Computer Weekly and Glyn Moody — who work mainly outside the mainstream media. In fact, the mainstream media has remained silent on the issue, with the notable exception of The Guardian.

The end result of the work of these two journalists is that the Cabinet Office has had to extend the consultation and, as noted by The Guardian, "rerun one of its discussion roundtables after it found that an independent facilitator of one of its discussions was simultaneously advising Microsoft on the consultation."

So, we have two plucky journalists who stand alone uncovering large corporations that are bullying Government to protect profits worth hundreds of millions. Our heroes' journey uncovers gerrymandering, skullduggery, rampant conflicts of interest, dubious ethics and a host of other sordid details and ... hold on, this sounds like a Hollywood script, not real life. Why on earth isn't mainstream media all over this, especially given the leaked Bell Pottinger memo on exploiting citizen initiatives?

The silence makes me wonder whether investigative journalism into things that might matter and might make a positive difference simply doesn't sell much advertising. Would it help if the open standards battle had celebrity endorsement? Alas, that's not the case. The consultation may have been extended, but the battle for open standards is still ongoing. This issue is as important to the U.K. as SOPA / PIPA were to the U.S., but rather than fighting against a Government trying to do something that harms the growth of future industry, we are fighting alongside a Government trying to do the right thing and benefit a nation.

If you're too busy to help, that's understandable, but don't ever grumble about why the U.K. Government doesn't do more to support open standards and open source. The U.K. Government is trying to make a difference. It's trying to fight a good fight against a huge and well-funded lobby, but it needs you to turn up.

The battle for open standards needs help, so get involved.

Related:

May 01 2012

Utopia on a budget: A completely practical plan for regaining paradise

When you're selling dreams, the trick is to strike a balance between utopian promises and common sense.

A week ago today, a privately funded startup called Planetary Resources announced that it had embarked on a program to mine trillions of dollars' worth of precious metals and other resources from asteroids in space. The project is undeniably ambitious, yet in their press conference the company's executives took pains to emphasize the pragmatism of their approach. Exponential advances in technology now make it possible, said co-founder and co-chairman Peter Diamandis, for small companies to accomplish what once required the backing of governments or large corporations. Planetary Resources plans to deploy "swarms" of low-cost telescope satellites to find asteroids that are rich in water, platinum, and other assets, but relatively close to Earth. They will then be mined not by people but by robots.

To be sure, there's nothing modest about the profits Planetary Resources hopes to realize. There were also frequent mentions during the press conference of how the project's success would benefit all of humankind, not only by developing new supplies of diminishing resources but also by keeping alive the dream of space exploration itself. Still, the gee whiz factor was kept to a minimum. Diamandis even claimed at one point that he'd dreamed since he was a teenager of being an asteroid miner, which seemed to be taking pragmatism a bit too far. Surely a teenager can imagine more glamorous things to do in space than that.

The press conference's one truly utopian moment came in a comment from Planetary Resources' other co-founder and co-chairman, Eric Anderson. My guess is that he momentarily let his enthusiasm get the best of him when he let slip his vision of where, in the long run, this could be heading. "We see the future of Earth as a garden of Eden," he said, "as a place where we take care of the Earth and protect the environment and we do our heavy industries and our mining and all that sort of stuff in space!"

Ah, the Garden of Eden. In truth that's what we've always been after, although we're less inclined to admit it today than we used to be. In 1833 a German immigrant named John Jacob Etzler published "The Paradise Within Reach of All Men," the first extended work of technological utopianism to appear in the United States. Follow my proposals for harnessing the elements with machines, Etzler declared, and within 10 years "everything desirable for human life may be had by every man in superabundance, without labor, and without pay; where the whole face of nature shall be changed into the most beautiful forms, and man may live in the most magnificent palaces, in all imaginable refinements of luxury ..." He went on.

Skepticism regarding technological miracles was less prevalent then than it is today. Even so, Etzler predicted that some would greet his proposals with ridicule, and he was right. Among them was Henry David Thoreau, who published, anonymously, a review of Etzler's book that was slyly humorous in parts, openly sarcastic in others. "Let us not succumb to nature," he wrote. "We will marshal the clouds and restrain the tempests; we will bottle up pestilent exhalations, we will probe for earthquakes, grub them up; and give vent to the dangerous gases; we will disembowel the volcano, and extract its poison, take its seed out. We will wash water, and warm fire, and cool ice, and underprop the earth. We will teach birds to fly, and fishes to swim, and ruminants to chew the cud. It is time we had looked into these things."

A similar exchange occurred in the mid-1970s when a Princeton physics professor named Gerard O'Neill came forward with his own proposal for mining asteroids. O'Neill envisioned a series of permanently inhabited, self-sustaining human colonies orbiting in deep space. Huge interconnected cylinders, each encompassing a land area as large as 100 square miles, would accommodate, in addition to extensive mining operations, capacious living quarters, gardens, and recreation areas. Settlers would be attracted not only by the promise of employment, O'Neill said, but also by internal climate conditions equivalent to "quite attractive modern communities in the U.S. and in southern France." He added that, because levels of gravity could be varied within the cylinders, a short walk up a hillside could bring a resident to an area where "human-powered flight would be easy" and "sports and ballet could take on a new dimension."

Government funding was still the way to go at that point, and O'Neill appeared before subcommittees of the House of Representatives and the Senate to present his ideas. Here, too, it seems clear the intention was to portray the project as entirely reasonable. Mentions of southern France and flying ballet dancers were exceptions; charts and graphs were the rule. What we're talking about, O'Neill testified, is "civil engineering on a large scale in a well-understood, highly-predictable environment."

Again, naysayers emerged. Stewart Brand solicited comments on the project for the Spring 1976 edition of "CoEvolution Quarterly," a spinoff of the "Whole Earth Catalog." Brand was an enthusiastic supporter, but many of his readers weren't. The writer, farmer and environmentalist Wendell Berry called O'Neill's proposals "an ideal solution to the moral dilemma of all those in this society who cannot face the necessities of meaningful change." E. F. Schumacher, author of "Small Is Beautiful," wrote that he'd be happy to nominate several hundred people to ship into outer space immediately, so that the real work of saving the planet could proceed unimpeded.

Failing to find support in Congress, O'Neill's project faded away. Soon after that the personal computer industry began its remarkable rise in Silicon Valley, reinvigorating the idea that technology can change the world overnight, making a lot of people extremely rich in the process. No accident that many of Planetary Resources' investors acquired their fortunes digitally. When you have billions to spend, your dreams don't have to make sense.

Related:

April 26 2012

Design your website for a graceful fail

Websites go down. It happens. But in many cases it might be possible to deal with and explain a failure while keeping user frustration to a minimum.

Mike Brittain (@mikebrittain), director of engineering at Etsy, addressed the resilient user experience in our recent interview. Among his insights from the full interview (below):

  • Designing an experience that can adapt to individual service failures and partial degradations requires an intermingling between software engineers, operations teams and product and design teams.
  • Previous experience designing for cable-connected devices may skew our connectivity expectations when it comes to more fragile mobile networks.

Brittain will expand on these ideas and more in his keynote address "Building Resilient User Experiences" at Velocity 2012 in June.

Our full interview follows.

What is a "resilient" user experience — and what are a few of the main practices involved in ensuring an acceptable UX during an outage?

Mike Brittain: Resilient user experiences are adaptable to individual failure modes within the system — allowing users to continue to use the service even in a partially degraded scenario.

Large-scale websites are driven by numerous databases, APIs, and other back-end services. Without thoughtful application design, any failure in an individual service might bubble up as a generic "Server Error." This sort of response completely blocks the user from any further experience and has the potential to degrade the user's confidence in your website, software or brand.

Consider an article page on the New York Times' website. There is the primary content of the page: the body of the article itself. And then there are all sorts of ancillary content and modules, such as social sharing tools, personalization details if you're signed in, comments, articles recommended for you, most emailed articles, advertisements, etc. If something were to go wrong while retrieving the primary content for the page — the article body — you might not be able to provide anything meaningful to the reader. But if one or more services failed while generating any of those ancillary modules, it's likely to have a much lower impact on the reader. So, a resilient design would allow for any of those individual modules to fail gracefully without blocking the reader from completing the primary action on the site — reading news.

Here's another example closer to my own heart: The primary action for visitors to Etsy is to find, review, and purchase handcrafted goods. A product page on Etsy.com includes all sorts of ancillary information and tools, including a mechanism for marking a product as a "favorite." If the Favorites system goes down, we wouldn't want to return an error page to the visitor. Instead, we would hide the tool altogether. Meanwhile, visitors can continue to find and purchase products during this degradation. In fact, many of them may be blissfully unaware that the feature even exists while it is unavailable.
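Neither example comes with published code, but the pattern is simple to sketch. What follows is a minimal, hypothetical TypeScript version (not Etsy's or the Times' actual implementation) in which any ancillary module that throws is simply hidden, while a failure in the primary content still surfaces as an error:

    // Hypothetical sketch: one primary module plus ancillary modules
    // that are allowed to fail independently.
    type ModuleRenderer = () => Promise<string>;

    async function renderPage(
      primary: ModuleRenderer,
      ancillary: ModuleRenderer[]
    ): Promise<string> {
      // The primary content is essential: without it there is nothing
      // meaningful to show, so let its errors propagate to an error page.
      const body = await primary();

      // Each ancillary module gets its own try/catch. On failure, log it
      // and render nothing: the feature quietly disappears.
      const extras = await Promise.all(
        ancillary.map(async (render) => {
          try {
            return await render();
          } catch (err) {
            console.error("ancillary module failed; hiding it:", err);
            return "";
          }
        })
      );
      return body + extras.join("");
    }

With this shape, an outage in a Favorites-style service renders the page without that widget rather than returning a generic "Server Error."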

In the DevOps culture, we see increasing intermingling of experience and knowledge between software engineers and operations teams. Engineers who understand well how their software is operated, and the interplay between various services and back-ends, often understand failure modes and can adapt. Their software and hardware architecture may take advantage of patterns like redundant services, failover services, or retry attempts after failures.

Resilient user experiences require further intermingling with product and design teams. Product design is focused almost entirely on user experience when the system is assumed to be working properly. So, we need to have product designers commingling with engineers to better understand individual failure modes and to plan for them.

Do these interface practices vary for desktops/laptops versus mobile or tablets?

Mike Brittain: These principles apply to any user interface. But as we move into using more mobile devices and networks, we need to consider the relative fragility of the network that connects our software (e.g. a smartphone app) to servers on the Internet.

Our design process may be hampered by our prior experiences in which computers and web browsers connected to the Internet by physical cables suffered relatively low network failure rates. As such, our expectations may be that the network is seldom a failure point. We're moving rapidly into a world where mobile software connects to back-end services over cellular data networks — not to mention that the handset may be moving at high speed by car or train. So, we need to design resilience into our UIs anywhere we depend on network availability for data.

Velocity 2012: Web Operations & Performance — The smartest minds in web operations and performance are coming together for the Velocity Conference, being held June 25-27 in Santa Clara, Calif.

Save 20% on registration with the code RADAR20

How do you set up a front-end to fail gracefully?

Mike Brittain: Front-end could mean client-side, or it could refer to the forward-most server-side script in the request flow, which talks to other back-end services to generate the HTML for a web page. Both situations are valid for resilient design.

In designing resilient UIs, you expect failures in each and every back-end service. Examples might include connection failures, connection timeouts, response timeouts, or corrupted/incomplete data in a response. A resilient UI traps these failures at a low level and provides a usable response, rather than throwing a general exception that causes the entire page to fail.

On the client side, this could mean detecting failures in Ajax responses and allowing the user experience to continue unblocked, or by retrying after a given amount of time. This could be during page render, or maybe during a user interaction. Those familiar with Gmail may recognize that during periods of network congestion or back-end failures, the small status message that reads, "sending," when you send an email sometimes changes to "still trying …" or "offline." This is preferred over a general "failed to send email" after a single attempt.
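The Gmail behavior described above can be approximated client-side with a retry loop that reports its state to the UI. This is a hypothetical sketch; the status messages, attempt count, and backoff policy are assumptions for illustration, not Gmail's implementation:

    // Hypothetical sketch: retry a request with exponential backoff,
    // surfacing status updates instead of failing after one attempt.
    async function sendWithRetry(
      url: string,
      body: string,
      onStatus: (status: string) => void,
      maxAttempts = 5
    ): Promise<Response> {
      onStatus("sending");
      for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
          return await fetch(url, { method: "POST", body });
        } catch (err) {
          if (attempt === maxAttempts) {
            onStatus("offline"); // out of retries; show a persistent state
            throw err;
          }
          onStatus("still trying…");
          // Wait 2s, 4s, 8s… before the next attempt.
          await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** attempt));
        }
      }
      throw new Error("unreachable"); // satisfies the return type
    }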

Some general patterns for resilient UI include:

  • Disable or hide features that are failing.
  • Provide fallback (default) content in place of dynamic content or features that cannot be reached or displayed.
  • Avoid behaviors that block UI display or interaction.
  • Detect service failures and allow for retries.
  • Failover to redundant services (see the sketch after this list).
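
As a concrete illustration of two of these patterns (fallback content and failover to redundant services), here is a hypothetical TypeScript sketch; the endpoint URLs and fallback strings are invented for the example:

    // Hypothetical sketch: failover across redundant endpoints with a
    // timeout per attempt, ending in static fallback content.
    async function fetchRecommendations(): Promise<string[]> {
      const endpoints = [
        "https://recs-primary.example.com/v1/recs",
        "https://recs-replica.example.com/v1/recs",
      ];
      for (const url of endpoints) {
        try {
          // A per-attempt timeout keeps a slow back end from blocking the UI.
          const res = await fetch(url, { signal: AbortSignal.timeout(500) });
          if (res.ok) return await res.json();
        } catch {
          // Connection failure or timeout: fall through to the next endpoint.
        }
      }
      // Every service failed: degrade to default content, not an error page.
      return ["Popular this week", "Staff picks"];
    }

The per-attempt timeout matters: a failover chain is only useful if a slow primary can't block the UI for longer than users are willing to wait.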

Systems engineers may recognize these patterns in low-level services or protocols. But these patterns are not as familiar to front-end engineers, product developers, and designers — who plan more around success than around failure. I don't mean for that statement to be divisive, but I do think it's true of the current state of how we build software and how we build the web.

How do you make your community aware of a failure?

Mike Brittain: In the case of small failures, the idea is to obscure the failure in a way that it does not block the primary use case for the site (e.g. we don't shut down product pages because the Favorites service is failing). Your community may not need much communication around this.

When things really go wrong, you want to be upfront and clear about failures. Use specific terms rather than general ones. Provide context of time and estimated time to resolution whenever possible. If you have a service that fails and will be unavailable until you restore data over a period of, say, three hours, it's better to tell your visitors to check back in three hours than to have them hammering the refresh button on their browser for 20 minutes as they build up frustration.

You want to make sure this information is within reach for your users. I actually think at Etsy we have some pretty good patterns for this. We start with a status blog that is hosted outside of our primary network and should be available even if our data center is unreachable. Most service warnings or error messages on Etsy.com will provide a link to this blog. And anytime we have a service outage posted to this blog, a service warning is automatically posted at the top of any pages within our community forums and anywhere else that members would go looking for help on our site.

In your Velocity 2012 keynote summary, you mention "validating failure scenarios with 'game days'." What's a game day and how does it work?

Mike Brittain: The term game day" describes an exercise that tests some failure scenario in production. These drills are used to test hypotheses about how our systems will react to specific failures. They also surface any surprises about how the system reacts while we are actively observing.

We do this in production because development, testing, and staging environments are seldom 100% symmetric with production. You may have different numbers of machines, different volumes of data, or simulated versus live traffic. The downside is that these drills will impact real visitors. The upside is that you build real confidence within your team and exercise your abilities to cope with real failures.

We regularly test configuration flags across our site to ensure that we haven't unwired configuration logic for features we have been patching or improving. We also want to confirm that the user experience degrades gracefully when the flag is turned off. For example, when we disable the Favorites service on our site, we expect reads and writes to the data store to stop and we would expect various parts of the UI to hide the Favorites tools. Our game day would allow us to prove these out.
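A config flag of this kind can be gated at both the UI layer and the data layer. Here's a minimal, hypothetical sketch (the names are invented, not Etsy's actual code):

    // Hypothetical sketch: one runtime flag cleanly disables a feature
    // during a game day drill or a real outage.
    const config = { favoritesEnabled: true }; // flipped to false for the drill

    function renderFavoritesButton(productId: string): string {
      // Flag off: the UI hides the tool instead of rendering a broken one.
      if (!config.favoritesEnabled) return "";
      return `<button data-product-id="${productId}">Favorite</button>`;
    }

    function saveFavorite(userId: string, productId: string): void {
      // Flag off: writes to the favorites store stop as well.
      if (!config.favoritesEnabled) return;
      // ... write to the favorites data store here ...
    }

The game day then verifies exactly what's described above: flip the flag, confirm reads and writes stop, and confirm the rest of the site keeps working.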

We would be surprised to find that disabling Favorites causes entire pages on the site to fail, rather than to degrade gracefully. We would be surprised if some processes continued to read from or write to the service while the config flag was disabled. And we would be further surprised to find unrelated services failing outright when the Favorites service was disabled. These are scenarios that might not be observed by simulated testing outside of production.

This interview was edited and condensed.

Associated photo on home and category pages: 404 error message something went wrong by IvanWalsh.com, on Flickr

Related:

April 18 2012

True in spirit: Why I liked "Captain America," but didn't like "John Carter"

This post originally appeared in Tim O'Reilly's Google+ feed.

In my recent review of "John Carter," I damned the movie for failing to be true to the book, taking liberties with the story and with the character of John Carter himself. Yet when watching "Captain America" on the plane the other day, I found myself completely satisfied despite the fact that it too was unfaithful to the original in many ways.

I asked myself why, and concluded that the answer is central to understanding O'Reilly's brand marketing, and by extension the authenticity that is at the heart of all great brands.

For me as a young reader, the appeal of "Captain America" (as with "Spider-Man" and other Marvel comics) was the notion that a nerd, a kid who wasn't good at sports and was scorned by popular society, could be transformed into a hero. His smarts, his will, his character were what mattered — all that was required was a chance spark that would transform him into who he really was inside.

The movie version of "Captain America" is completely true to this fantasy. The character of Steve Rogers is so right that I was willing to forgive the many changes to the story (e.g. that Bucky was not his young sidekick but his pre-transformation protector and military buddy), improbabilities such as that the notion of riding a zipline from a mountain down onto the roof of a fast-moving train begs the question of just how they strung that zipline. (I've done it, and it's non-trivial, and gets harder the longer the line.) These are the kinds of errors that I found offensive in "John Carter" but didn't mind at all in "Captain America." I found myself moved by scenes in the movie that demonstrated Steve Rogers' courage, his indomitable will, his loyalty to friends — hell, his nobility. Exactly what Andrew Stanton took away from "John Carter"!

This was equally true in the second installment of "Sherlock Holmes," which I likewise saw on a plane last week. It takes even more liberties with Conan Doyle's original stories than "John Carter" took with Burroughs. Yet once again I consumed it with relish! Why? Because the character of Holmes was so true — his incredible ability to observe tiny details, to think ahead, his remarkable strength (which features in only a few of the stories, but is there nonetheless), his flawed character. And even though the character of Watson was nothing like the Watson of Doyle's stories, I forgave the director, because he made Watson better, not worse, than the original.

This notion of understanding the essence of what matters about a book, a story, a character, also applies to business.

I think about the common thread that runs through all the books we created at O'Reilly — however different they might be. Consider the range of treatment shown by books as diverse as "Linux in a Nutshell," "Programming Perl," "Unix Power Tools," "The Perl Cookbook," "Head First Java," "Mac OS X: The Missing Manual," or "Make: Electronics." From the point of view of external details, each of these books was a radical departure from what had gone before, and therefore a potential opportunity to confuse customers and dilute the brand.

Yet these books have a common essence: a practical bent, respect for the intelligence of the reader, a clear path to what you need to know, the authentic voice of experience, a willingness to take risks with new tools and new ideas that have been taken up by people on the cutting edge. When they stray from these core features, our books fail.

O'Reilly conferences display the same brand essence. In their deepest core, an O'Reilly book and an O'Reilly technical conference have more in common than a technical book from O'Reilly and those from some of our competitors. Like many of our pioneering books, our most successful new conferences were launched because we thought they were needed, not because we necessarily knew how successful they'd be. We weren't chasing dollars; we were trying to help the early adopter communities who are our core customers to change the world.

(Of course, it also helped that we created "brand affordances" whenever we introduced a new type of book. I remember in the old days hearing that competitors would cheer every time we put out a new book without an animal on the cover. They thought we were throwing away our brand advantage. Little did they know that we were preserving it. Over time, we created a house of powerful brands with a common core but with clearly visible differences and distinct audiences.)

This brand essence is also true in our advocacy. We stand up for issues that matter in our industry. We tackle big problems that we don't yet know how to solve, and try to grow markets in ways that benefit others besides ourselves.

Hmmm. Maybe that's why I hated "John Carter" but loved "Captain America" and "Sherlock Holmes." Andrew Stanton's John Carter was a self-absorbed adventurer, a reluctant hero and an anti-romantic, not the noble figure I remembered from my childhood.

There's a way in which the O'Reilly brand essence is ultimately a story about the hacker as hero, the kid who is playing with technology because he loves it, but one day falls into a situation where he or she is called on to go forth and change the world. Our editors and conference chairs, our authors and our conference presenters, are drawn from the ranks of our customers, and like all true nerds, we have a secret hunger to be heroes.

See comments and join the conversation about this topic at Google+.

Related:

April 12 2012

Strata Week: Add structured data, lose local flavor?

Here are a few of the data stories that caught my attention this week:

A possible downside to Wikidata

Wikidata data model diagram
Screenshot from the Wikidata Data Model page.

The Wikimedia Foundation — the good folks behind Wikipedia — recently proposed a Wikidata initiative. It's a new project that would build out a free secondary database to collect structured data that could provide support in turn for Wikipedia and other Wikimedia projects. According to the proposal:

"Many Wikipedia articles contain facts and connections to other articles that are not easily understood by a computer, like the population of a country or the place of birth of an actor. In Wikidata, you will be able to enter that information in a way that makes it processable by the computer. This means that the machine can provide it in different languages, use it to create overviews of such data, like lists or charts, or answer questions that can hardly be answered automatically today."

But in The Atlantic this week, Mark Graham, a research fellow at the Oxford Internet Institute, takes a look at the proposal, calling these "changes that have worrying connotations for the diversity of knowledge in the world's sixth most popular website." Graham points to the different language editions of Wikipedia, noting that the encyclopedic knowledge contained therein is always highly diverse. "Not only does each language edition include different sets of topics, but when several editions do cover the same topic, they often put their own, unique spin on the topic. In particular, the ability of each language edition to exist independently has allowed each language community to contextualize knowledge for its audience."

Graham fears that emphasizing a standardized, machine-readable, semantic-oriented Wikipedia will lose this local flavor:

"The reason that Wikidata marks such a significant moment in Wikipedia's history is the fact that it eliminates some of the scope for culturally contingent representations of places, processes, people, and events. However, even more concerning is that fact that this sort of congealed and structured knowledge is unlikely to reflect the opinions and beliefs of traditionally marginalized groups."

His arguments raise questions about the perceived universality of data, when in fact what we might find instead is terribly nuanced and localized, particularly when that data is contributed by humans who are distributed globally.

The intricacies of Netflix personalization

Netflix's recommendation engine is often cited as a premier example of how user data can be mined and analyzed to build a better service. This week, Netflix's Xavier Amatriain and Justin Basilico penned a blog post offering insights into the challenges that the company — and thanks to the Netflix Prize, the data mining and machine learning communities — have faced in improving the accuracy of movie recommendation engines.

The Netflix post raises some interesting questions about how the means of content delivery have changed recommendations. In other words, when Netflix refocused on its streaming product, viewing interests changed (and not just because the selection changed). The same holds true for the multitude of ways in which we can now watch movies via Netflix (there are hundreds of different device options for accessing and viewing content from the service).

Amatriain and Basilico write:

"Now it is clear that the Netflix Prize objective, accurate prediction of a movie's rating, is just one of the many components of an effective recommendation system that optimizes our members' enjoyment. We also need to take into account factors such as context, title popularity, interest, evidence, novelty, diversity, and freshness. Supporting all the different contexts in which we want to make recommendations requires a range of algorithms that are tuned to the needs of those contexts."

Fluent Conference: JavaScript & Beyond — Explore the changing worlds of JavaScript & HTML5 at the O'Reilly Fluent Conference (May 29 - 31 in San Francisco, Calif.).

Save 20% on registration with the code RADAR20

Got data news?

Feel free to email me.

Related:

April 10 2012

Carsharing saves U.S. city governments millions in operating costs

One of the most dynamic sectors of the sharing economy is the trend in large cities toward more collaborative consumption — and the entrepreneurs have followed, from Airbnb to Getable to Freecycle. Whether it's co-working, bike sharing, exchanging books and videos, or cohabiting hackerspaces and community garden spaces, there are green shoots throughout the economy that suggest the way we work, play and learn is changing due to the impact of connection technologies and the Great Recession.

This isn't just about the classic dilemma of "buy vs. rent." It's about whether people or organizations can pool limited resources to more efficiently access tools or services as needed and then pass them back into a commons, if appropriate.

Speaking to TechCrunch last year, Lauren Anderson floated the idea that a collaborative consumption revolution might be as "significant as the Industrial Revolution." We'll see about that. The new sharing economy is clearly a powerful force, as a recent report (PDF) by Latitude Research and Shareable Magazine highlighted, but it's not clear yet if it's going to transform society and production in the same way that industrialized mass production did in the 19th and 20th centuries.

Opportunity Infographic - The New Sharing Economy Study by latddotcom, on Flickr
Infographic from "The New Sharing Economy" study. Read the report (PDF) and see a larger version of this image.

Carsharing is saving

What is clear is that, after years of spreading through the private sector, collaborative consumption is coming to government, and it's making a difference. A specific example: Carsharing via Zipcar in city car fleets is saving money and enabling government to increase its efficacy and decrease its use of natural resources.

After finally making inroads into cities, Zipcar is saving taxpayers real money in the public sector. Technology developed by the car-sharing startup is being used in 10 cities and municipalities in 2012. If data from a pilot with the U.S. General Services Administration fleet pans out, the technology could also be adopted across the sprawling federal agency's vehicles, saving tens of millions of dollars in operating expenses through smarter use of new technology.

"Now the politics are past, the data are there," said Michael Serafino, general manager for Zipcar's university and FastFleet programs, in a phone interview. "Collaborative consumption isn't so difficult from other technology. We're all used to networked laser printers. The car is just a tool to do business. People are starting to come around to the idea that it can be shared."

As with many other city needs, vehicle fleet management in the public sector shares commonalities across all cities. In every case, municipal governments need to find a way to use the vehicles that the city owns more efficiently to save scarce funds.

The FastFleet product has been around for a little more than three years, said Serafino. Zipcar started it in beta and then took a "methodical approach" to rolling it out.

FastFleet uses the same mechanism that's used throughout thousands of cars in the Zipcar fleet: a magnetized smartcard paired with a card reader in the windshield that can communicate with a central web-based reservation system.

There's a one-time setup charge to get a car wired for the system and then a per-month charge for the FastFleet service. The cost of that installation varies, predicated upon the make of vehicles, type of vehicles and tech that goes into them. Zipcar earns its revenue in a model quite similar to cloud computing and software-as-a-service, where operational costs are billed based upon usage.

Currently, Washington, D.C., Chicago, Santa Cruz, Calif., Boston, New York and Wilmington, Del. are all using FastFleet to add carsharing capabilities to their fleets, with more cities on the way. (Zipcar's representative declined to identify which municipalities are next.)

Boston's pilot cut its fleet in half

"Lots of cities have departments where someone occasionally needs a car," said Matthew Mayrl, chief of staff in the Boston Public Works department, during a phone interview.

"They buy one and then use it semi-frequently, maybe one to two times per week. But they do need it, so they can't give up the car. That means it's not being used for highest utilization."

The utilization issue is the key pain point, in terms of both efficiency and cost. Depending on the make and model, it generally costs between $3,000 and $7,000 on average for a municipality to operate a vehicle, said Serafino. "Utilization is about 30% in most municipal fleets," he said.

That's where collaborative consumption became relevant to Boston. Mayrl said Boston's Public Works Department talked to Zipcar representatives with two goals in mind: get out of a manual reservation system and reduce the number of cars the city uses, which would reduce costs in the process. "Our public works was, for a long time, administered by a city motor pool," Mayrl said. "It was pretty old school: stop by, get keys, borrow a car."

While Boston did decide to join up with Zipcar, public sector workers aren't using actual Zipcars. The city has licensed Zipcar's FastFleet technology and is adding it to the existing fleet.

One benefit to using just the tech is that it can be integrated with cars that are already branded with the "City of Boston," pointed out Mayrl. That's crucial when the assessing office is visiting a household, he said: In that context, it's important to be identified.

Boston started a pilot in February that was rolled out to existing users of public works vehicles, along with two pilots in assessing and the Department of Motor Vehicles. The program started by taking the oldest cars off the road and training the relevant potential drivers. Using carsharing, the city of Boston was able to reduce the number of vehicles in the pilot by over 50%.

"Previously, there were 28 cars between DPW [the Public Works department] and those elsewhere in the department," said Mayrl. "That's been cut in half. Now we have 12 to 14 cars without any missed reservations. This holds a lot of promise, only a month in. We don't have to worry about maintenance or whether someone is parked in the wrong place or cleaning snow off a car. We hope that if this is successful, we can roll it out to other departments."

The District's fleet gets leaner

While a 50% reduction in fleet size looks like significant cost savings, Serafino said that a 2:1 ratio is actually a conservative number.

"We strive for 3:1," Serafino said. "The one thing we have is data. We capture and gather data from every single use of every single vehicle by every single driver, at a very granular level, including whenever a driver gets in and out. That allows a city to measure real utilization and efficiency. Using those numbers, officials can drive policy and other things. You can take effective utilization and real utilization and say, 'we're taking away these four cars from this area.' You can use hard data gathered by the system to make financial and efficiency decisions."

Based upon the results to date, Serafino said he expects Washington, DC, to triple its investment in the system. "The original pilot was started by a mandated reduction by [former DC Mayor Adrian] Fenty, who said 'make this goal,' and 'get it done by this date.' Overall, DC went from 365 to 80 vehicles by consolidating and cooperating."

Serafino estimated the reduction represents about 50% of the opportunity for DC to save money. "The leader of the DC Department of Public Works wants to do more," he said. "The final plans are to get to a couple of hundred vehicles under management, resulting in another reduction by at least 200 cars." Serafino estimated potential net cost savings would be north of $1 million per year.

There is a floor, however, for how lean a city's car fleet can become — and a ceiling for optimal utilization as well.

"The more you reduce, the harder it gets," said Serafino. "DC may have gone too far, by going down to 80 [vehicles]. It has hurt mobility." If you cut into fat deep enough, in other words, eventually you hit muscle and bone.

"DC is passing 70% utilization on a per-day basis," said Serafino. "They have three to four people using each of the cars every day. The trip profile, in the government sense, is different from other customers. We don't expect to go over 80%. There is a point where you can get too lean. DC has kind of gotten there now."

In Boston, Mayrl said they did a financial analysis of how to reduce costs from their car fleet. "It was cheaper to better manage the cars we have than to buy new ones. Technology helps us do that. [Carsharing] had already been done in a couple of other cities. Chicago does it. The city of DC does it. We went to a competitive bid for an online vehicle fleet management software system. [Zipcar] was the only respondent."

Given that FastFleet has been around for more than three years and there's a strong business case for employing the technology, the rate of adoption by American cities might seem to be a little slow to outside observers. What would be missing from that analysis are the barriers to entry for startups that want to compete in the market for government services.

"What hit us was the sales cycle," said Zipcar's Serafino. "The average is about 18 months to two years on city deals. That's why they're all popping now, with more announcements to come soon."

The problem, Serafino mused, was not making the case for potential cost savings. "Cities will only act as sensitive as politics will allow," said Serafino.

"Boston, San Francisco, New York and Chicago are trying. The problem is the automotive and vehicle culture," Serafino said. "That, combined with the financial aspects of decentralized budgeting for fleets, is the bane of fleet managers. Most automotive fleet managers in cities don't control their own destinies. Chicago is one of the very few cities where they can control the entire fleet.

Cities do have other options to use technology to manage their car fleets, from telematics providers to GPS devices to web-based reservation systems, each of which may be comparatively less expensive to buy off the shelf.

One place that Zipcar will continue to face competition at the local level is from companies that provide key vending machines, which are essentially automated devices on garage walls.

"You go get a key and go to a car," said Serafino. "If you have 20 cars in one location, it's not as likely to make sense to choose our system. If you have 50 cars in three locations, that's a different context. You can't just pick up a keybox and move it."

Collaborative consumption goes federal?

Zipcar is continuing along the long on-ramp to working with government. The next step for the company may be to help Uncle Sam with the federal government's car fleet.

As noted previously, the U.S. General Services Administration (GSA) has already done a collaborative consumption pilot using part of its immense vehicle fleet. Serafino says the GSA is now using that data to prepare a broader procurement action for a request for proposals.

The scale for potential cost savings is significant: The GSA manages some 210,000 vehicles, including a small but growing number of electric vehicles.

Given congressional pressure to find cost savings in the federal budget, if the GSA can increase the utilization of its fleet in a way that's even vaguely comparable to the savings that cities are finding, collaborative consumption could become quite popular in Congress.

If carsharing at the federal level succeeds similarly at scale, members of Congress and staffers who became familiar with collaborative consumption through the wildly popular Capital Bikeshare program may well see the sharing economy in a new light.

"There's a broader international trend to work to share resources more efficiently, from energy to physical infrastructure," said Mayrl. "Like every good city, we're copying the successful stuff elsewhere."

Related:

April 05 2012

Strata Week: New life for an old census

Here are a few of the data stories that caught my attention this week:

Now available in digital form: The 1940 census

The National Archives released the 1940 U.S. Census records on Monday, after a mandatory 72-year waiting period. The release marks the single largest collection of digital information ever made available online by the agency.

Screenshot from the 1940 Census available through Archives.org
Screenshot from the digital version of the 1940 Census.

The 1940 Census, conducted as a door-to-door survey, included questions about age, race, occupation, employment status, income, and participation in New Deal programs — all important (and intriguing) following the previous decade's Great Depression. One data point: in 1940, there were 5.1 million farmers. According to the 2010 American Community Survey (not the census, mind you), there were just 613,000.

The ability to glean these sorts of insights proved to be far more compelling than the National Archives anticipated, and the website hosting the data, Archives.com, was temporarily brought down by the traffic load. The site is now up, so anyone can investigate the records of approximately 132 million Americans. The records are searchable by map — or rather, "the appropriate enumeration district" — but not by name.

A federal plan for big data

The Obama administration unveiled its "Big Data Research and Development Initiative" late last week, with more than $200 million in financial commitments. Among the White House's goals: to "advance state-of-the-art core technologies needed to collect, store, preserve, manage, analyze, and share huge quantities of data."

The new big data initiative was announced with a number of departments and agencies already on board with specific plans, including grant opportunities from the Department of Defense and the National Science Foundation, new spending on DARPA's XDATA program to build new computational tools, as well as open data initiatives such as the 1000 Genomes Project.

"In the same way that past Federal investments in information-technology R&D led to dramatic advances in supercomputing and the creation of the Internet, the initiative we are launching today promises to transform our ability to use big data for scientific discovery, environmental and biomedical research, education, and national security," said Dr. John P. Holdren, assistant to the President and director of the White House Office of Science and Technology Policy in the official press release (PDF).

Personal data and social context

When the Girls Around Me app was released, using data from Foursquare and Facebook to notify users when there were females nearby, many commentators called it creepy. "Girls Around Me is the perfect complement to any pick-up strategy," the app's website once touted. "And with millions of chicks checking in daily, there's never been a better time to be on the hunt."

"Hunt" is an interesting choice of words here, and the Cult of Mac, among other blogs, asked if the app was encouraging stalking. Outcry about the app prompted Foursquare to yank the app's API access, and the app's developers later pulled the app voluntarily from the App Store.

Many of the responses to the app raised issues about privacy and user data, and questioned whether women in particular should be extra cautious about sharing their information with social networks. But as Amit Runchal writes in TechCrunch, this response blames the victims:

"You may argue, the women signed up to be a part of this when they signed up to be on Facebook. No. What they signed up for was to be on Facebook. Our identities change depending on our context, no matter what permissions we have given to the Big Blue Eye. Denying us the right to this creates victims who then get blamed for it. 'Well,' they say, 'you shouldn't have been on Facebook if you didn't want to ...' No. Please recognize them as a person. Please recognize what that means."

Writing here at Radar, Mike Loukides expands on some of these issues, noting that the questions are always about data and social context:

"It's useful to imagine the same software with a slightly different configuration. Girls Around Me has undeniably crossed a line. But what if, instead of finding women, the app was Hackers Around Me? That might be borderline creepy, but most people could live with it, and it might even lead to some wonderful impromptu hackathons. EMTs Around Me could save lives. I doubt that you'd need to change a single line of code to implement either of these apps, just some search strings. The problem isn't the software itself, nor is it the victims, but what happens when you move data from one context into another. Moving data about EMTs into context where EMTs are needed is socially acceptable; moving data into a context that facilitates stalking isn't acceptable, and shouldn't be."

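Loukides' observation that only the search strings would change is easy to make concrete. Here is a minimal, purely hypothetical sketch in Python: the check-in records and matching logic are invented stand-ins for a Foursquare-style nearby-check-ins query, not anyone's real API.

    # Hypothetical sketch: the "app" never changes, only the search string does.
    # The check-in records are invented; a real app would pull them from a
    # location API such as Foursquare's.

    def people_around_me(checkins, search_string):
        """Return nearby check-ins whose public profile mentions the search string."""
        return [c for c in checkins if search_string in c["profile"].lower()]

    # Invented sample data standing in for a nearby-check-ins query:
    nearby = [{"name": "A", "profile": "EMT, marathon runner"},
              {"name": "B", "profile": "hacker, coffee snob"}]

    print(people_around_me(nearby, "female"))  # Girls Around Me
    print(people_around_me(nearby, "hacker"))  # Hackers Around Me
    print(people_around_me(nearby, "emt"))     # EMTs Around Me

Identical code, three very different social contexts: that is exactly the point about what happens when data moves from one context to another.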

Got data news?

Feel free to email me.


Commerce Weekly: The do's and don'ts of geo marketing

Here's what caught my eye in the commerce space this week.

Placecast's CEO on the secret to successful targeted offers

Last August, I wrote about Placecast, which has been working to deliver coupons and offers on behalf of its retail clients to opted-in customers when they enter geofenced areas. Placecast's platform lets merchants set up a ring around their locations (or other locations, as described below) and trigger an SMS to customers who have opted in to receive them. Placecast works with mobile carriers to deliver large tranches of opted-in customers to its merchant clients. This week at O'Reilly's Where Conference, Placecast CEO Alistair Goodman talked about the right and wrong ways to deliver ads to a geofenced audience, based on the learning curve the company has climbed over the past few years.
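For readers curious what "a ring around their locations" means mechanically, here is a minimal sketch of a radius-based geofence check. It is an assumed, generic technique, not Placecast's actual implementation, and send_sms() is a hypothetical stand-in for a carrier SMS gateway.

    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance between two lat/lon points, in meters."""
        earth_radius_m = 6371000
        dlat = math.radians(lat2 - lat1)
        dlon = math.radians(lon2 - lon1)
        a = (math.sin(dlat / 2) ** 2
             + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
             * math.sin(dlon / 2) ** 2)
        return 2 * earth_radius_m * math.asin(math.sqrt(a))

    def check_geofences(user_lat, user_lon, fences, send_sms):
        """Fire the offer SMS for every geofence ring the user is inside."""
        for fence in fences:
            if haversine_m(user_lat, user_lon, fence["lat"], fence["lon"]) <= fence["radius_m"]:
                send_sms(fence["offer"])

    # A 200-meter ring around a made-up store location; send_sms is stubbed with print.
    fences = [{"lat": 37.7880, "lon": -122.4075, "radius_m": 200,
               "offer": "20% off today only -- show this text at the register"}]
    check_geofences(37.7885, -122.4070, fences, send_sms=print)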

Some of these lessons are obvious, like the need to link data about the customers' preferences with the location — the richer the data, the more relevant the message, and the more likely it is to hit home. Goodman described this as a sort of stack, with positioning data (mostly from GPS, but supplemented with Wi-Fi and other data) at the lowest level. Just above that sits a layer of context: What type of place is the user at (mall? stadium? park?) and what's the weather like? Atop that, demographics and psychographics: who are the users, and what do people in their consumer categories tend to go for? Above that, the users' preferences: What do they want to be notified about, when, and how often? And finally, at the top of the stack, the offer itself: What is the retailer promoting?
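One way to picture that stack is as the combined input to a single send-or-don't decision. The sketch below is illustrative only; the field names are invented and are not Placecast's schema.

    # Illustrative only: Goodman's stack as the combined input to one
    # send-or-don't decision. Field names are invented, not Placecast's schema.
    alert_input = {
        "position":     {"lat": 37.7885, "lon": -122.4070, "source": "gps"},
        "context":      {"place_type": "mall", "weather": "rainy"},
        "demographics": {"segment": "frequent coffee buyer"},
        "preferences":  {"categories": ["coffee", "books"],
                         "max_alerts_per_week": 2, "alerts_this_week": 1},
        "offer":        {"category": "coffee",
                         "text": "Free pastry with any latte today"},
    }

    def should_send(alert):
        """Send only if the offer matches the user's categories and the cap allows it."""
        prefs = alert["preferences"]
        return (alert["offer"]["category"] in prefs["categories"]
                and prefs["alerts_this_week"] < prefs["max_alerts_per_week"])

    print(should_send(alert_input))  # True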

A second key point is the need to find relevant locations — not just the retailer's store, which is obvious, but other places where the customer is likely to be receptive to the offers. For example, you might promote dog food or pet stores at a dog park, a sports drink around a gym, or a concert's sponsor around the arena. Interestingly, Goodman said that while merchants often ask Placecast to geofence a competitor's store, he advises them that it isn't a particularly effective marketing strategy: "If a customer is already headed into a certain store, a message urging them to visit a different location isn't likely to be very effective. A more effective way is to promote the message from a relevant public space." (I noticed the audience received this wisdom in total silence; you could almost hear the wheels of doubt spinning.)

Finally, Goodman said customers respond better to offers they believe come through this channel with some level of exclusivity. "Customers like it when they feel they're getting an offer that others aren't getting." So the coupons or other offers can't simply duplicate what's posted in the store window.

Goodman said the platform can deliver offers through a variety of channels, but most are delivered as SMS text messages, which remain tremendously effective. And the alerts seem to be working: Goodman said the company's research finds that 49% of store visits occurring after a Placecast ShopAlert were unplanned before the alert arrived, while for another 19% the alert served as a reminder to visit the store. In those cases, you might say the texts delivered twice.


Jumping ship at Google Wallet?

The departure of Google Wallet co-founding engineer Rob von Behren for payments startup Square aroused suspicion that Square might be looking to incorporate NFC into its system. Dan Balaban's article in NFC Times puts von Behren's departure in the context of a swath of high-profile talent exits from a project that appears to be struggling to find partners and users. Balaban quotes a mobile commerce analyst who believes von Behren's arrival at Square almost certainly signals a move by Square to support NFC. "Else, it would be like hiring Michael Jordan to get advice on golf," the analyst said.

Square COO Keith Rabois has questioned the value of NFC in the past, calling it "a technology in search of a value proposition" at last September's GigaOM Mobile Conference. But as more mobile phones ship with the short-range wireless technology this year, it seems natural that Square would want to tap into it for its "Pay with Square" (formerly Card Case) system, which allows customers to pay at merchants with their Square accounts.

Meanwhile, Balaban's article raises questions about the viability of the Google Wallet project itself. In addition to von Behren, fellow founding engineer Jonathan Wall and product lead Marc Freed-Finnegan left in March to start their own mobile-commerce startup, Tappmo, and Andrew Zaeske, former director of engineering for Wallet, is also said to have left. Speculation centers on disagreements between Wallet chief Osama Bedier (who joined Google from PayPal in February 2011) and other leaders of the team over the project's direction. It can't help that Verizon refused last autumn to allow Google Wallet onto its phones, and that Verizon, AT&T, and T-Mobile plan to launch their own mobile wallet under the Isis brand, all of which casts doubt on whether Wallet will ever expand beyond the Sprint network.

Will carriers like Facebook's post-IPO status?

Mobile carriers risk losing text-messaging revenue as more of Facebook's users access the service from mobile devices and use it as their primary communication channel. That's the view of Victor Basta, managing director of London-based Magister Advisors, which advises companies on acquisitions and public offerings. Basta told Bloomberg BusinessWeek that "Facebook's IPO is about the worst thing that could happen to network operators," since the pressure to demonstrate strong earnings to investors will make it harder for Facebook to share revenue with the carriers. Facebook's "over-the-top" service rides on the mobile networks without sharing any of the advertising revenue delivered over them, and it increasingly eats into carriers' SMS earnings as users send free Facebook messages instead.

"The fundamental challenge for network operators will be finding a way of becoming part of the Facebook ecosystem rather than simply external enablers," Basta said.

Tip us off

News tips and suggestions are always welcome, so please send them along.




