
May 12 2011

What did Microsoft get for $8.5 billion?

It's the kind of deal that really gets the tech world buzzing — one titan buying another for an astronomical amount — and the question on many people's minds now is: what is Microsoft really getting for its $8.5 billion purchase of Skype?

From a technical perspective, it seems a little suspect. After all, Microsoft has already developed much of the same functionality that Skype provides. Windows Live Messenger offers free instant messaging and voice and video chat, it already works with Lync and Office, and Microsoft had previously announced plans to integrate Messenger into Windows Phone.

From a financial perspective it doesn't look much better. Even though Skype is a massive (and disruptive) presence in Internet communications, it has never been very profitable. Last year Skype had $860 million in revenue, and a net loss of $7 million, according to its initial public offering filing. Forrester Research analyst Andrew Bartels told MSNBC:

It doesn't make sense at all as a financial investment. There's no way Microsoft is going to generate enough revenue and profit from Skype to compensate.



So why would Microsoft pay so much for a company that doesn't have vastly superior technology or great financials? As far as I can tell, there are five reasons:



1. Skype's user base


Skype has a strong and loyal base of more than 170 million active users across multiple platforms. Microsoft will get instant access to Skype's network, which represents a huge influx of customers and a big leap in Microsoft's presence as a consumer Internet company.

2. Skype and the enterprise

Of course, it's not all about consumers. Skype has been aggressively pursuing the enterprise market in recent years and has made some solid progress there. Microsoft likely sees this purchase as a win for both its consumer and business product lines, and we can expect to see Skype integration coming soon in all manner of Microsoft products and platforms. Already promised is Skype support for Xbox (and Xbox Live), Kinect, Office, Lync, Outlook, Hotmail, and Windows Phone.

3. Windows Phone integration

Along those lines, perhaps the most obvious outcome will be Skype's integration into future versions of Windows Phone. This will allow Microsoft to compete squarely with Apple's FaceTime and Google Voice. Expect to see Skype on a Nokia Windows Phone in the near future.

4. International long distance

Interestingly, one of Skype's biggest strengths was barely mentioned in the press conference announcing the deal: the massive number of minutes Skype users represent on the public telephone network. Skype's been in an all-out war with the entrenched telecom carriers since its inception — the carriers despise Skype and rightly see it as a major threat to their business model. According to Alec Saunders, a telecom vet and ex-Microsoft employee, with this purchase Microsoft instantly becomes the world's largest carrier of international long distance minutes. How Microsoft will take advantage of this, and what impact it will have on its existing carrier relationships, is one of the open questions around this deal.

5. Blocking the competition

Let's not forget another angle that surely motivated Microsoft to pursue Skype — keeping it away from the competition. It's been common knowledge that Skype's investors have been looking for a buyer for some time. Skype was on a rocky road to an IPO, and it sounds like the investors who bought Skype from eBay were concerned about the viability of that process and eager to get their returns sooner rather than later. When Google came to the table, you can bet the deal-makers in Redmond started some serious number crunching.

Reports are surfacing that Microsoft may have paid far more than it needed to for Skype. Anonymous sources close to the negotiations have said that Google came in second in the bidding at just $4 billion. That's a huge difference, and it certainly suggests that Microsoft could have gotten Skype for much less. But some believe Google was never a serious suitor in the first place; its interest may have been mostly about keeping Facebook from getting Skype.

With Microsoft feeling pressure from the likes of Google and Apple, especially in the mobile space, the idea of a competitor getting its hands on Skype (and its network) was surely something Microsoft wanted to avoid. Microsoft is already seen as playing catch-up in mobile, and a Google-owned Skype would have put it even further behind.

Android Open, being held October 9-11 in San Francisco, is a big-tent meeting ground for app and game developers, carriers, chip manufacturers, content creators, OEMs, researchers, entrepreneurs, VCs, and business leaders.

Save 20% on registration with the code AN11RAD




May 04 2011

A Manhattan Project for online identity

In 1993, Peter Steiner famously observed in a New Yorker cartoon that "on the Internet, nobody knows you're a dog." In 2011, trillions of dollars in e-commerce transactions and a growing number of other interactions have made knowing that someone is not only human, but a particular individual, increasingly important.

Governments are now faced with complex decisions in how they approach issues of identity, given the stakes for activists in autocracies and the increasing integration of technology into the daily lives of citizens. Governments need ways to empower citizens to identify themselves online to realize both aspirational goals for citizen-to-government interaction and secure basic interactions for commercial purposes.

It is in that context that the United States federal government introduced the final version of its National Strategy for Trusted Identities in Cyberspace (NSTIC) this spring. The strategy addresses key trends that are crucial to the growth of the Internet operating system: online identity, privacy, and security.

Blackberry at White House
Image Credit: Official White House Photo by Pete Souza

The NSTIC proposes the creation of an "identity ecosystem" online, "where individuals and organizations will be able to trust each other because they follow agreed upon standards to obtain and authenticate their digital identities." The strategy puts government in the role of a convener, verifying and certifying identity providers in a trust framework.

First steps toward this model, in the context of citizen-to-government authentication, came in 2010 with the launch of the Open Identity Exchange (OIX) and a pilot of a trust framework at the National Institutes of Health — but there's a very long road ahead for this larger initiative. Online identity, as my colleague Andy Oram explored in a series of essays here at Radar, is tremendously complicated, from issues of anonymity to digital privacy and security to more existential questions about how our true selves are represented in the digital medium.

NSTIC and online identity

The need to improve the current state of online identity has been hailed at the highest levels of government. "By making online transactions more trustworthy and better protecting privacy, we will prevent costly crime, we will give businesses and consumers new confidence, and we will foster growth and untold innovation," President Obama said in a statement on NSTIC.

The final version of NSTIC is a framework that lays out a vision for an identity ecosystem. Video of the launch of the NSTIC at the Commerce Department is embedded below:

"This is a strategy document, not an implementation document," said Ian Glazer, research director on identity management at Gartner, speaking in an interview last week. "It's about a larger vision: this [is] where we want to get to, these are the principles we need to get there."

Jim Dempsey of the Center for Democracy and Technology (CDT) highlighted a critical tension at a forum on NSTIC in January: government needs a better online identity infrastructure to improve IT security, online privacy, and support e-commerce but can't create it itself.

Andy Ozment, White House Director for Cybersecurity Policy, said in a press briefing prior to the release of NSTIC that the strategy was intended to protect online privacy, reduce identity theft, foster economic growth on the Internet and create a platform for the growth of innovative identity services.

"It must be led by the private sector and facilitated by government," said Ozment. "There will be a sort of trust mark — it may not be NSTIC — that certifies that solutions will have gone through an accreditation process."

Instead of creating a single online identity for each citizen, NSTIC envisions an identity ecosystem with many trusted providers. "In the EU [European Union] mentality, identity can only exist if the state provides it," said Glazer. "That's inherently an un-American position. This is frankly an adoption of the core values of the nation. There's a rugged individualism in what we're incorporating into this strategy."

Glazer and others who have been following the issue closely have repeatedly emphasized that NSTIC is not a mandate to create a national online identity for every American citizen. "NSTIC puts forth a vision where individuals can choose to use a smaller number of secure, privacy-preserving online identities, rather than handing over a new set of personal information each time they sign up for a service," said Leslie Harris, president for the Center for Democracy and Technology, in a prepared statement.  "There are two key points about this Strategy:  First, this is NOT a government-mandated, national ID program; in fact, it's not an identity 'program' at all," Harris said.  "Second, this is a call by the Administration to the private sector to step up, take leadership of this effort and provide the innovation to implement a privacy-enhancing, trusted system."

Harris also published a guest post at Commerce.gov that explored how the national identity strategy "envisions a more trustworthy Internet."

The NSTIC was refined through an open government consultation process with multiple stakeholders over the course of the past year, including participants from private industry, academia, privacy advocacy groups and regulators. It is not a top-down mandate but instead a set of suggested principles that its architects hope will lead to a healthy identity ecosystem.

"Until a competitive marketplace and proper standards are adopted across industry, we actually continue to have fewer options in terms of how we secure our accounts than more," said Chris Messina in an interview with WebProNews this year. "And that means that the majority of Americans will continue using the same set of credentials over and over again, increasing their risk and exposure to possible leaks."

For a sense of the constituencies involved, read through what they're saying about NSTIC at NIST.gov. Many of those parties are involved in an ongoing open dialogue on NSTIC at NSTIC.us.

"The commercial sector is making progress every week, every month, with players for whom there's a lot of money involved," said Eric Sachs, product manager for the Google Security team and board member of the OpenID Foundation, in an interview this winter. "These players have a strong expectation of a regulated solution. That's one reason so many companies are involved in the OpenID Foundation. Businesses are finding that if they don't offer choices for authentication, there's significant push back that affects business outcomes."

Functionally, there will now be an NSTIC program office in the Department of Commerce and a series of roundtables held across the United States over the next year. There will be funding for more research. Beyond that, "milestones are really hard to see in the strategy," said Glazer. "We tend to think of NSTIC's goal as a single, recognizable state. Maybe we should be thinking of this as DARPA for identity. It's us as a nation harnessing really smart people on all sides of transactions."

Improving online identity will require government and industry to work together. "One role government can play is by aggregating citizen demand and asking companies to address it," said Sachs. "Government is doing well by coming to companies and saying that this is an issue that affects everyone on the Internet."

NSTIC and online privacy

There are serious risks to getting this wrong, as the Electronic Frontier Foundation highlighted in its analysis of the federal online identity plan last year. The most contentious issue with NSTIC lies in its potential to enable new harms instead of preventing them, including increased identity theft.

"While we're concerned about the unsolved technological hurdles, we are even more concerned about the policy and behavioral vulnerabilities that a widespread identity ecosystem would create," wrote Aaron Titus of Identity Finder, which has released a 39-page analysis of NSTIC's effect on privacy:

We all have social security cards and it took decades to realize that we shouldn't carry them around in our wallets. Now we will have a much more powerful identity credential, and we are told to carry it in our wallets, phones, laptops, tablets and other computing devices. Although NSTIC aspires to improve privacy, it stops short of recommending regulations to protect privacy. The stakes are high, and if implemented improperly, an unregulated Identity Ecosystem could have a devastating impact on individual privacy.

It would be a mistake, however, to "freak out" over this strategy, as Kaliya Hamlin has illuminated in her piece on the NSTIC in Fast Company:

There [are] a wide diversity of use cases and needs to verify identity transactions in cyberspace across the public and private sectors. All those covering this emerging effort would do well to stop just reacting to the words "National," "Identity," and "Cyberspace" being in the title of the strategy document but instead to actually talk to the agencies to understand real challenges they are working to address, along with the people in the private sector and civil society that have been consulted over many years and are advising the government on how to do this right.

So no, the NSTIC is not a government ID card, although information cards may well be one of the trusted sources for online identity in the future, along with smartphones and other physical tokens.

The online privacy issue necessarily extends far beyond whatever NSTIC accomplishes, affecting every one of the billions of people now online. At present, the legal and regulatory framework that governs the online world varies from state to state and from sector to sector. While the healthcare and financial sectors have associated penalties, online privacy hasn't been specifically addressed by legislation.

As online privacy debates heat up again in Washington when Congress returns from its spring break, that may change. Following many of the principles advanced in the FTC privacy report and the Commerce Department's privacy report last year, Senators John McCain and John Kerry introduced an online privacy bill of rights in March.

After last week's story on the retention of iPhone location data, location privacy is also receiving heightened attention in Washington. The Federal Trade Commission, with action on Twitter privacy in 2010 and Google Buzz in 2011, has moved forward without a national consumer privacy law. "I think you'll see some enforcement actions on mobile privacy in the future," Maneesha Mithal, associate director of the Division of Privacy and Identity Protection, told Politico last week.

For companies to be held more accountable for online privacy breaches, however, the U.S. Senate would need to move forward on H.R. 2221, the Data Accountability and Trust Act (DATA), which passed the U.S. House of Representatives in 2009. To date, the 112th Congress has not taken up a national data breach law again, although such a measure could be added to a comprehensive cybersecurity bill.

NSTIC and security

"The old password and user-name combination we often use to verify people is no longer good enough. It leaves too many consumers, government agencies and businesses vulnerable to ID and data theft," said Commerce Secretary Gary Locke during the strategy launch event at the Commerce Department in Washington, D.C.

The problem is that "people are doing billions of transactions with LOA1 [Level of Assurance 1] credentials already," said Glazer. "That's happening today. It's costing business to go verify these things before and after the transaction, and the principle of minimization is not being adhered to."

Many of these challenges, however, come not from the technical side but from the user experience and business balance side, said Sachs. "Most businesses shouldn't be in the business of having passwords for users. The goal is educating website owners that unless you specialize in Internet security, you shouldn't be handling authentication."

Larger companies "don't want to tie their existence to a single company, but the worse they're doing in a given quarter, the more willing they are to take that risk," Sachs said.

"One username and password for everything is actually very bad 'security hygiene,' especially as you replay the same credentials across many different applications and contexts (your mobile phone, your computer, that seemingly harmless iMac at the Apple store, etc)," said Messina in the WebProNews interview. "However, nothing in NSTIC advocates for a particular solution to the identity challenge — least of all supporting or advocating for a single username and password per person."

"If you look at the hopes of the NSTIC, it moves beyond passwords," said Glazer. "My concern is that it's authenticator-fixated. Let's make sure it's not solely smartcards or one-time passwords." There won't be a magic bullet here, just as there isn't for so many other great challenges facing government and society.

Some of the answers to securing online privacy and identity, however, won't be technical or legislative at all. They will lie in improving the digital literacy of all online citizens. That very human reality was highlighted after the Gawker database breach last year, when the number of weak passwords used online became clear.

"We're going to set the behavior for the next generation of computing," said Glazer. "We shouldn't be obsessed with one or two kinds of authenticators. We should have a panoply. NSTIC is aimed at fostering the next generation of behaviors. It will involve designers, psychologists, as well as technologists. We, the identity community, need to step out of the way when it comes to the user experience and behavioral architecture."

NSTIC and the future of the Internet

NSTIC may be the "wave of the future" but, ultimately, the success or failure of this strategy will rest on several factors, many of them lying outside of government's hands. For one, the widespread adoption of Facebook's social graph and Facebook Connect by developers means that the enormous social network is well on its way to becoming an identity utility for the Internet. For another, the loss of anonymity online would have dire consequences for marginalized populations or activists in autocratic governments.

Ultimately, said Glazer, NSTIC may not matter in the ways that we expect it to. "I think what will come of this is a fostering of research at [all] levels, including business standards, identity protocols and user experience. I hope that this will be a Manhattan Project for identity, but done in the public eye."

There may be enormous risks to getting this wrong, but then that was equally true of the Apollo Project and Manhattan Project, both of which involved considerably more resources. If the United States is to enable its citizens to engage in more trusted interactions with government systems online, something better than the status quo will have to emerge. One answer will be services like Singly and the Locker Project that enable citizens to aggregate Internet data about themselves, empowering people with personal data stores. There are new challenges ahead for the National Institute of Standards and Technology (NIST), too. "NIST must not only support the development of standards and technology, but must also develop the policy governing the use of the technology," wrote Titus.

What might be possible? Aaron Brauer-Rieke, a fellow at the Center for Democracy and Technology, described a best-case scenario to Nancy Scola of techPresident:

I can envision a world where a particularly good trust framework says, "Our terms of service [say] that we will take every possible step to resist government subpoenas for your information. Any of the identity providers under our framework or anyone who accepts information from any of our identity providers must have those terms of service, too." If something like that gains traction, that would be great.

That won't be easy, but the potential payoffs are immense. "For those of us interested in the open government space, trusted identity raises the intriguing possibility of creating threaded online transactions with governments that require the exchange of only the minimum in identifying information," writes Scola at techPresident. "For example, Brauer-Rieke sketched out the idea of an urban survey that only required a certification that you lived in the relevant area. The city doesn't need to know who you are or where, exactly, you live. It only needs to know that you fit within the boundaries of the area they're interested in."

Online identity is, literally, all about us. It's no longer possible for governments, businesses or citizens to remain static with the status quo. To get this right, the federal government is taking the risk of looking to the nation's innovators to create better methods for trusted identity and authentication. In other words, it's time to work on stuff that matters, not making people click on more ads.






November 15 2010

Hiring trends among the major platform players

After re-reading Tim's post on the major internet platform players, I looked at recent hiring trends* among the companies he highlighted. First I examined year-over-year changes in number of job postings (from Aug to Oct 2009 vs. Aug to Oct 2010). Consistent with the recent flurry of articles about hiring wars, all the companies (except for Yahoo) increased** their number of job postings. Winning the battle for the Internet's points of control requires amassing talent:

[Chart: year-over-year change in job postings, by company]


Below is the breakdown by most popular occupations over the last three months:

[Chart: most popular occupations by company, last three months]


[For a similar breakdown by location (most popular metro areas), click HERE.]




(*) Using data from a partnership with SimplyHired, we maintain a data warehouse that includes most U.S. online job postings dating back to late 2005. Since there are no standard data formats for job postings across employment sites, algorithms are used to detect duplicate job postings, companies, occupations, and metro areas. The algorithms are far from perfect, so the above results are at best extremely rough estimates.
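As a purely illustrative sketch of the kind of duplicate detection the note describes (the real pipeline's algorithms and fields aren't specified, so the normalization and key choice below are assumptions):

```python
import hashlib
import re

def normalize(text):
    """Lowercase, strip punctuation, and collapse whitespace so near-identical
    postings (differing only in case, spacing, or punctuation) compare equal."""
    text = re.sub(r"[^a-z0-9 ]", " ", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def posting_key(title, company, metro):
    """A crude duplicate key: the same job cross-posted to several boards
    usually shares title, company, and metro area."""
    blob = "|".join(normalize(field) for field in (title, company, metro))
    return hashlib.md5(blob.encode()).hexdigest()

def dedupe(postings):
    """Keep only the first posting seen for each key."""
    seen, unique = set(), []
    for p in postings:
        key = posting_key(p["title"], p["company"], p["metro"])
        if key not in seen:
            seen.add(key)
            unique.append(p)
    return unique
```

A heuristic like this is exactly why the footnote hedges: two genuinely different jobs can share a key, and the same job can slip through with a reworded title, so counts built on it are rough estimates.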



(**) As a benchmark, the total number of job postings in our entire data warehouse of jobs grew 68% from Aug/Oct 2009 to Aug/Oct 2010.


Livestream from Web 2.0 Summit

Leaders from across the Internet Economy are converging in San Francisco this week for Web 2.0 Summit. They'll be discussing "points of control" -- the key companies, technologies and platforms that will shape the future of the Internet.

A free (and registration-free) livestream of the presentations is available below. We'll also be posting interviews and sessions on the Web 2.0 Summit site.

Note: Sessions begin at 2:30 pm PT today (check out the full schedule here). Highlights from past events will be streamed when live sessions aren't being held.


October 26 2010

Four short links: 26 October 2010

  1. 12 Months with MongoDB (Wordnik) -- every type of retrieval got faster than with their old MySQL store, and there are some other benefits too. They note that the admin tools aren't really there for MongoDB, so "there is a blurry hand-off between IT Ops and Engineering." (via Hacker News)
  2. Dawn of a New Day -- Ray Ozzie's farewell note to Microsoft. Clear definition of the challenges to come: At first blush, this world of continuous services and connected devices doesn’t seem very different than today. But those who build, deploy and manage today’s websites understand viscerally that fielding a truly continuous service is incredibly difficult and is only achieved by the most sophisticated high-scale consumer websites. And those who build and deploy application fabrics targeting connected devices understand how challenging it can be to simply & reliably just ‘sync’ or ‘stream’. To achieve these seemingly simple objectives will require dramatic innovation in human interface, hardware, software and services. (via Tim O'Reilly on Twitter)
  3. A Civic Hacktivism Abecedary -- good ideas matched with exquisite quotes and language. My favourite: Kick at the darkness until it bleeds daylight. (via Francis Irving on Twitter)
  4. UI Guidelines for Mobile and Web Programming -- collection of pointers to official UI guidelines from Nokia, Apple, Microsoft, MeeGo, and more.

October 25 2010

The battle for the Internet Economy

Battles are brewing for the Internet's points of control: search, location, identity, commerce and more. How these battles play out and who emerges victorious will shape virtually everything and everyone involved in the Internet Economy.

Tim O'Reilly and John Battelle will examine the people, organizations, and chokeholds relevant to points of control during a one-hour webcast on Wednesday (Oct. 27) at 1 pm PT / 4 pm ET. Registration and attendance are both free.

"Points of control" is also the theme of this year's Web 2.0 Summit, being held Nov. 15-17 in San Francisco. Learn more about Web 2.0 Summit at the conference site and check out the illustrated Points of Control map.

September 13 2010

Why Twitter's t.co is a game changer

Twitter has been open with its data from the start, and widely available APIs have created a huge variety of applications and fast adoption. But by making their platform so open, Twitter has fewer options for monetization.

The one thing they can do that nobody else can -- because they're the message bus -- is to rewrite tweets in transit. That includes hashtags and URLs. Twitter could turn #coffee into #starbucks. They could replace a big URL with a short one. And that gives them tremendous power.

Twitter recently announced a new feature that makes this a reality. The t.co URL shortener -- similar to those from bit.ly, awe.sm, and tinyURL -- might seem like a relatively small addition to the company's offering. But it's a massive power shift in the world of analytics because now Twitter can measure engagement wherever it happens, across any browser or app. And unlike other URL shorteners, Twitter can force everyone to use their service simply because they control the platform. Your URLs can be shortened (and their engagement tracked by Twitter) whether you like it or not.
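Mechanically, "rewriting tweets in transit" is simple to picture. Here is an illustrative sketch (t.co's actual implementation is not public; the class and the short-code format below are invented for illustration):

```python
import itertools
import re

URL_RE = re.compile(r"https?://\S+")

class LinkWrapper:
    """Illustrative only: rewrites every URL in a tweet into a tracked short
    link, the way a platform that sees all messages in transit could."""

    def __init__(self, domain="t.co"):
        self.domain = domain
        self.table = {}                  # short code -> original URL
        self._ids = itertools.count(1)   # sequential codes, for simplicity

    def shorten(self, url):
        """Assign a code and remember the mapping for later resolution."""
        code = format(next(self._ids), "04x")
        self.table[code] = url
        return f"http://{self.domain}/{code}"

    def wrap(self, tweet):
        """Replace each URL in the tweet with its short form."""
        return URL_RE.sub(lambda m: self.shorten(m.group(0)), tweet)
```

The key property: after wrapping, the original destination exists only in the platform's table, so no reader can reach it without going through the platform.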

Web marketers obsess over the "funnel" -- the steps from first contact to purchase. They try to optimize it constantly, tweaking an offer or moving an image. They want to know everything about a buyer or a visitor.

While every click of a visit to these marketers' sites is analyzed with web analytics, it's much harder to know what people are doing elsewhere on the web. Modern marketers crave insight into two aspects of online consumers' behavior.

  1. They want insight into the "long funnel" -- what happened before someone got to their site that turned a stranger into a visitor.
  2. They want to measure engagement -- more than just knowing how many people a message might have reached, they want to know how many acted on it, regardless of where that link took them.

Web analytics is a huge industry, but the tools marketers rely on to understand visitors are breaking.

Cookies, long the basis for tracking users, need web browsers to store them. In a world where we share URLs via email and social networks, those cookies get lost along the way, and with them the ability to track the viral spread of a message. Invasive practices like toolbars and cross-site tracking cookies that try to tie users across websites have triggered huge consumer backlash (that hasn't stopped them from becoming common). Despite adoption, cross-site tracking cookies' days are numbered. This is one of the reasons companies like Tynt are finding other ways of following the spread of messages.

If you're a nosy marketer, it gets worse. We're moving from a browser-centric to an app-centric world. Every time you access the Internet through a particular app -- Facebook, Gowalla, Yelp, Foursquare, and so on -- you're surfing from within a walled garden. If you click on a link, all the marketer sees is a new visit. The referring URL is lost, and with it, the context of your visit.

This is why short URLs are so important. URLs survive the share. Because the interested reader is forced to go to the URL shortener to map the short URL to the real one, whoever owns the shortener sees the engagement between the audience and the content, no matter where it happens. That's why URLs are the new cookies.
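That resolution step is the whole trick: the only path from a short URL to its destination runs through the shortener's lookup table, so the owner can record every click no matter which app, email client, or site it came from. A minimal sketch, with invented names:

```python
import time

class Shortener:
    """Illustrative sketch (not any real service's design): resolving a short
    code back to its destination is itself the engagement-measurement event."""

    def __init__(self):
        self.table = {}    # short code -> destination URL
        self.clicks = []   # one record per resolution, wherever it happened

    def resolve(self, code, referrer=None):
        """What every click triggers before the HTTP redirect is sent."""
        destination = self.table[code]   # map the short URL to the real one
        self.clicks.append({
            "code": code,
            "referrer": referrer,        # a browser, an app, an email client...
            "at": time.time(),
        })
        return destination               # the redirect target
```

In this sense a click on a short link behaves like a cookie the user can't delete: the tracking happens server-side at resolution time, not in the browser.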





Web analytics, marketing and points of control will be discussed at Web 2.0 Expo NY. Radar readers can save 20% on registration with the code "radar."






According to a Twitter email, t.co will "wrap links in Tweets with a new, simplified link." There's good reason to believe this will become the dominant URL shortener. Here's why:

  • Twitter is adding malware detection to the links it shortens.
  • T.co links will include a custom display that shows more of the destination before you click on the link.
  • The company has Twitter clients on most mobile devices, where it can make t.co the default shortener if it wants.
  • The extremely short URL saves precious characters.

Back in late 2008, Twitter was looking for ways to monetize its platform. With t.co, Twitter has found a product marketers will embrace if they want to understand how the world interacts with the messages they put out there.

By now, it's clear that Twitter is not just a site. It's a protocol for asymmetric follow. It's a message bus for human attention. It's able to force every Twitter user to let it know when an interaction happens, simply by changing URLs.

This is the real value of the company -- not just knowing what people are talking about, but knowing which things prompt an action, wherever that happens.


Related:

August 31 2010

Points of Control: The Web 2.0 Summit Map

In my blog post State of the Internet Operating System a few months ago (and the followup Handicapping the Internet Platform Wars), I used the analogy of "the Great Game" played out between England and Russia in the late Victorian era for control of access to India through what is now Afghanistan. In our planning for this year's Web 2.0 Summit, John Battelle and I have expanded on this metaphor, exploring the many ways that Internet companies at all levels of the stack are looking for points of control that will give them competitive advantage in the years to come.

Now, John has developed that idea even further, with a super-cool interactive map that shows the Internet platform wars in a kind of fantasy landscape, highlighting each of the players and some of the moves they might make against each other. Click on the link at the top of the image below to get to the full interactive version. You might also want to read John Battelle's description of the points of control map and how to use it.

Some of the battlegrounds are already clear: Google has entered the phone hardware market to match Apple's early lead, while Apple is ramping up its presence in advertising and location-based services to try to catch up to Google. Meanwhile, Facebook adds features to compete with Twitter and Foursquare; Google (and everyone else) keeps trying to find the magic bullet for social networking while trying to eat Yelp's lunch with Place Pages; and Microsoft gains share in search and tries again to become a player in the phone market. Areas like social gaming, payment, speech and image recognition, location services, advertising, and tablets and other new form factors for connected devices are all rife with new startups, potential acquisition plays, and straight-up competition.

In the map, we've tried to highlight some of the possible competitive vectors. I'm sure you'll have other ideas about companies, possible lines of attack, and possible alliances. We hope to hear your feedback, and we hope to see you at the Web 2.0 Summit, where we'll be talking with many of the key players, and handicapping the next stage of the Great Game.




Related:

June 10 2010

Can privacy, social media and business get along?

Facebook's recent privacy moves -- and the company's response to the ensuing criticism -- plug into much larger issues. Specifically, the changing perspectives on privacy and the give-and-take between user data and online services.

The implications from these bigger shifts extend beyond individual users. Many companies now rely on social media, so there are business-level repercussions at play here as well.

I got in touch with Tamar Weinberg, author of "The New Community Rules," to explore these various threads. In the following Q&A, she discusses Facebook's position in the privacy world and she looks at how broader privacy changes affect consumers and businesses alike.



What's your take on Facebook's relationship with privacy?


Tamar Weinberg: Facebook likes to think it can get away with everything. A lot of the time, it can. I don't think the privacy move was a good one, though. Just recently, I was viewing a friend's profile where she said that one of her favorite activities was "being Abigail's mom." This activity was now clickable and turned into one of Facebook's Community Pages, which are public wikis. I noticed that there was another mother, unrelated to the first, who also had a similar activity. Furthermore, the page itself had a status update from a third mother who was also a mother to an Abigail. This is private information that now becomes public, and quite frankly, it's creepy.

A service like Blippy seems to represent an alternative, completely open mindset. Will the population of people putting it all out there increase?

TW: Yes, absolutely. I'm not ready to share my personal purchases, but Blippy tells me what my friends are doing, and there are active users on the service. Members of Generation Y are aware that all the information they are posting is public, and those already present in the online space are not going to shy away from being as public as possible. In fact, if you think about it from an online reputation management perspective, the more pages you have about you on social networks, the less likely other websites with negative mentions of you will crop up -- and if they do, they'll be buried by the innocuous stuff.

Are consumers getting comfortable giving up certain personal information in exchange for a service?

TW: I don't even know if it's an exchange for a service that they're doing it for. Yes, you pay Facebook with your data. But you can still get the same usage out of Facebook without providing all of that data. The information exchange is not necessarily being given to the "service," if you will. It's being given so that other people on the service can find you. But the service essentially acts as a conduit to facilitate those connections.

Do you think people outside the tech universe care about the Facebook/privacy debate?

TW: They certainly do. An old teacher at my high school alma mater is the administrator of the largest privacy group on Facebook. If you're active in the Facebook community, you might be worried about the visibility of the data. You don't have to be in the tech community -- you just have to be a regular user of Facebook.

What privacy advice would you give to a company that's just moving into the social media space?

TW: The bottom line: whatever you say online is public, whether you are doing it in a public medium or not. I can post on my Facebook wall, and one of my friends can repost that information to his blog. Making content public does not necessarily have to be facilitated by the service itself; someone else can do it for you.

Also, think twice before saying something questionable because Google as a search engine has a super long-term memory.

Can you point to specific services that handle the give-and-take between social media and privacy well?

TW: One service that comes to mind is chi.mp. It's a stream that aggregates other social profiles, and you have three major profiles for the professional, the private, and the public. Each specific stream can be made visible to only certain personas. That said, its controls are a bit confusing since there are so many settings. Facebook does have a good handle on privacy controls (at least in my opinion -- there are a billion settings), but recent changes and heightened confusion are really what brought this privacy story to light.

This interview was condensed and edited.

Related:

June 09 2010

Cloud computing saves L.A. millions in IT costs

Why did the City of Angels move to the cloud? Given that the Los Angeles city budget was constrained by the Great Recession, the driver was simple: cost savings. When the reality of a legacy IT infrastructure that couldn't meet the needs of an increasingly mobile workforce was added to that driver, LA's chief technology officer Randi Levin put out a request for proposal for the cloud.

"We've been cutting services," she said. "We are now completely in what I would call 'survival mode.' Two or three years ago, we had a $116 million expense budget. Next year, it'll be $80 million. It might even be less."

After due diligence, Levin said her team recommended that the City of Los Angeles implement Google. "We predicted -- and are on track -- to save about $5 million over the next three years," said Levin. "Those are hard dollar savings."

Levin is looking ahead to where she can best use her staff. What are the important tasks? "It's not running servers," said Levin. "The city will get much more bang for their buck if we're developing applications and websites, or making processes more efficient. As a general rule, we're going to start moving out of that business and letting somebody else do it who can do it more efficiently."

Tim O'Reilly talked with Levin at the recent Gov 2.0 Expo in Washington, D.C. Their discussion is embedded below:

After the jump, read more about Los Angeles' decision to move into Google's cloud.

Google's contract with the City of Los Angeles for cloud computing services is notable in a number of ways, not least in that it's available to the public for perusal. For excellent, exhaustive analysis of what's in Google's SaaS contract with the City of Los Angeles, make sure to read the Information Law Group's series. Part two of their analysis is now online, in addition to the contract itself, which is embedded below. After the embed, you'll find my interview with Levin.






Interview with Randi Levin, Los Angeles CTO


How has Los Angeles approached the process of IT transformation with yawning deficits and pressure to innovate?

I've been here three years now. When I got here, I made the rounds and interviewed everyone. We really started looking at our whole portfolio of work and saying if we're trending downward, what techniques can we use to survive here? When you go from closer to $120 million down to $80 million in budget, how do we survive?

How did you approach the Request for Proposal (RFP) for cloud computing services?

The unanimous opinion of everyone was that they hated the email system, Novell GroupWise Version 7, which we hadn't upgraded. That was partially the city's fault. The mailbox size was small, and people had to archive or delete on a regular basis, which was a complaint. The calendar function wasn't very good. When the iPhone came out, it didn't work on the iPhone.

We decided that we would release an RFP and ask for either hosted on-site, hosted off-site or SaaS. We didn't want to preclude putting it on-site and having somebody else run it, versus our own staff. We knew that the same old traditional model wasn't going to work: "We buy it; we install it; we run it." We didn't have the money and we didn't have the resources. On top of that, when you have a heavily unionized workforce where you can't pick and choose your employees, you can't go out to the open market and hire people at the levels that you need them. It's very difficult to move forward, in any area of technology.

Our RFP got back 15 responses. We established a committee of folks from the different departments to help look at the responses and narrow it down. We had everybody bid, including Lotus Notes, Yahoo Mail, Hotmail, Google and Microsoft. A couple of them were partnered with different implementation partners. It was narrowed down to one Microsoft hosted option and two Google options, with two different implementation partners.

We started diving into the products and we started diving into the economics. The economics of the Google cloud solution were so much better than the economics of a hosted version that we said, "We can't afford this." We knew that our budget would be less the next year. After doing our due diligence, we decided as a cross-team to recommend that we implement Google.

How were those economic drivers measured?

We compared the cost of the products that were involved in running our current email, including maintenance costs, the cost of the machines, and the staff. They were all hard dollar savings. There was nothing about productivity. We weren't comparing an apple to an apple here. We were getting a lot more functionality with Google than we were with the old system.

We did two analyses: one that was cash-based, and one that was ROI-based. The city wouldn't accept the ROI analysis because when you're so cash constrained, you only want to look at cash. It doesn't mean that we won't achieve some of the ROI that we stated, but in terms of what people have seen publicly, we're going to save $5 million.

The challenge for government, as with heavily regulated industries, is when data moves out of your hands. How do you assess whether it's being stored properly, or securely enough? Who audits that? Who certifies it? How often does it get checked?

I think part of what people need to realize is that anybody can put anything in any email system. People lose sight of that. If you're not supposed to put a certain piece of data in an email, for whatever reason, whether it's public safety or HIPAA or whatever, there's nothing to prevent anybody from doing that.

When people get that argument mixed up with SAS 70, it's not the right argument. The argument is: does your company, does your government, does your organization have a policy? Do people adhere to the policy? What are the ramifications if you don't? Because somebody could put something personal in email and send it to Taiwan.

That goes in my "lessons learned" category from doing the implementation here. There's a feeling that if your data is stored on-site, your data is under your desk. I call it the "hug a server concept." It's like the people that used to put the money under their mattress.

How are you addressing security concerns associated with cloud computing?

When you look at all of the various ways to encrypt and store with a lot of these companies, it's getting much more sophisticated than it used to be. Just storing it on your own server in your own data center? Please don't tell me that that's very secure. There's a myth about that, that you have more control. Some organizations do it a lot better than others. If you're looking at a whole cross-section of industries and governments, in general, people don't do it that well.

When you look at the hard numbers of cost, you enter into a useful discussion around risk avoidance versus risk management. What are the opportunity costs here? What is the potential to be able to do a lot more, versus the potential for a significant data breach?

The hard thing about this subject is that most people can't go on the record to discuss this. The problem is, because of that, you have a lot more myths about the security of people's existing environments. The discussion's always about the security of the SaaS provider, or Google or Microsoft. You can't have a candid conversation about what you had or have, versus what you're going to have. Any entity that does that actually opens the door in that period of time to somebody to come in and hack them more. It's a tricky subject.

What improvements do you expect to realize through this move to the cloud?

I want us to manage somebody else managing servers because we're in such a severe financial crisis that I don't want my staff having to worry about any infrastructure. I don't want to spend any time worrying about virtualizing servers. I want to make sure I have good performance and enough disk storage.

Nobody will end up with a completely SaaS model, at least not in the next couple of years, but what I do see is that most organizations are going to end up in a hybrid world where you have some on-site infrastructure, you have some hosted infrastructure, and you have some SaaS, simply because of the fact that everybody's trying to do more with less.

There may be certain applications that you feel you need to keep onsite and run yourselves. I'm sure there's good reasons for that, but there's a lot that you don't have to. I think that many CIOs are starting to say: "If I had 100 people, great. But I only have 80. I want to get the most value-add from the 80, not have 20 of them running machines."

Based upon this experience, what advice would you give to other IT staff and municipalities?

Do your own due diligence, first of all. You can talk to people like me, but do your own due diligence. Make sure you know your requirements. We thought we knew the requirements and relied on people that we thought knew them, but in some cases they didn't know what they were. Make sure that you have a contract that has service levels. SAS 70 contracts are new, but they are out there. Make sure that you also are comparing your current service levels.

Gov 2.0 Expo: Cloud Computing in Los Angeles

At last month's Gov 2.0 Expo, Tim O'Reilly talked with Levin and Dave Girouard, Google Enterprise president, about Los Angeles' adoption of cloud computing. Video of their conversation is embedded below:

May 17 2010

Mobile operating systems and browsers are headed in opposite directions

During a panel at Web 2.0 Expo, someone asked if the panelists saw any signs that suggest mobile operating system fragmentation might decrease.

One of the panelists had a blunt answer: "No. There will be more fragmentation."

It is striking to see the different trajectories mobile operating systems are on when compared to the mobile web.

In 2006, two smartphone operating systems accounted for 81 percent of the market. There were really only four platforms to worry about: Symbian, Windows Mobile, RIM, and Palm OS. These represented 93 percent of the market.


Smartphone operating system market share (percent)
Sources: Canalys (2006); Gartner (2007, 2008, 2009). "?" indicates figures not yet reported; "--" indicates the OS had not shipped or was no longer tracked.

OS                 2006   2007   2008   2009   2010
Symbian            67     63.5   52.4   46.9   ?
RIM                7      9.6    16.6   19.9   ?
Windows Mobile     14     12.0   11.8   8.7    ?
iPhone             0      2.7    8.2    14.4   ?
Linux              6      9.6    7.6    4.7    ?
Palm OS            5      1.4    1.8    --     --
Android            --     --     0.5    3.9    ?
WebOS              --     --     --     0.7    ?
Windows Phone 7    --     --     --     --     ?
Bada OS            --     --     --     --     ?
MeeGo              --     --     --     --     ?
Other OSs          1      1.1    2.9    0.6    ?


Fast-forward to the present and the picture is different. No single operating system has more than 50 percent market share. There are seven operating systems being tracked, and even within operating systems there are fragmentation concerns.

The future promises more operating system fragmentation, not less: Windows Phone 7, Bada, and MeeGo are all slated to arrive this year.

Even that doesn't include differences within each particular operating system. Much has been made of Android fragmentation due to different user experiences like MotoBlur and HTC's Sense UI, and some argue that even the homogeneous iPhone platform is starting to fragment.

There are more mobile operating systems coming and no signs of the mobile OS market narrowing any time soon.

The mobile web is converging

By contrast, the mobile web is converging on HTML5 and WebKit.

Unlike mobile operating systems, mobile browsers were fragmented a few years ago. The list of early mobile browsers includes a series of proprietary browser engines:

  • jB5 Browser
  • Polaris Browser
  • Blazer
  • Internet Explorer Mobile
  • Openwave
  • NetFront
  • Obigo
  • Blackberry Browser

That's a fraction of the browser options that were available to mobile phone users. And while there is still work to be done to make mobile browsers more consistent, it is nothing compared to the inconsistencies between early mobile browsers.

Today, every mobile browser is moving toward HTML5 support, if it isn't there already:


Modern mobile browsers

Browser                      Engine                 HTML5
Mobile Safari                WebKit                 Yes
Android                      WebKit                 Yes
Blackberry 6 Browser         WebKit                 Yes
Symbian^3                    WebKit                 Yes
MeeGo                        WebKit (Chromium)      Yes
Internet Explorer            Internet Explorer 7    No
WebOS Browser                WebKit                 Yes
Bada OS Browser              WebKit                 Yes?
Opera Mobile                 Opera Presto 2.2       Yes
Opera Mini                   Opera Presto 2.2       Yes
Fennec                       Firefox                Yes
Myriad (former Openwave)     WebKit                 No
BOLT browser                 WebKit                 ?

There are a couple of things to keep in mind about this list:

  1. The only major mobile browser that does not support HTML5 is Internet Explorer. But Internet Explorer 9 for the desktop will support HTML5, and eventually the mobile browser will as well.
  2. Saying a browser supports HTML5 does not mean it supports the full HTML5 spec right now. It simply means that it supports a portion of the spec and is on track to support it fully.

The support isn't perfect, but it is clear that all of the mobile browsers are moving toward full support for HTML5, JavaScript, and CSS in a way that is already shrinking the differences between browsers.

WebKit: The dominant mobile platform

The WebKit browser engine now has a dominant position in mobile browsers. When BlackBerry ships its new browser based on WebKit, 85 percent of smartphones will ship with a WebKit-based browser.
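That 85 percent figure can be sanity-checked against the 2009 market-share numbers cited earlier. A quick back-of-the-envelope calculation (platform groupings are my reading of which operating systems ship WebKit-based browsers):

```python
# 2009 smartphone OS market share, percent (Gartner figures)
share_2009 = {
    "Symbian": 46.9,
    "RIM": 19.9,
    "Windows Mobile": 8.7,
    "iPhone": 14.4,
    "Linux": 4.7,
    "Android": 3.9,
    "WebOS": 0.7,
    "Other": 0.6,
}

# Platforms shipping (or, with BlackBerry 6, about to ship)
# a WebKit-based browser
webkit_platforms = ["Symbian", "RIM", "iPhone", "Android", "WebOS"]

webkit_share = sum(share_2009[p] for p in webkit_platforms)
print(round(webkit_share, 1))  # -> 85.8
```

Roughly 85 percent, matching the claim: once RIM joins Symbian, Apple, Google, and Palm in shipping WebKit, only Windows Mobile and a sliver of others remain outside.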

Just because a device uses WebKit does not mean it has the latest version of WebKit and can use HTML5 fully. PPK has documented the many inconsistencies between WebKit implementations. Alex Russell makes a compelling counterpoint that the inconsistencies aren't that bad if you factor in when the browsers shipped.
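In practice, this meant developers of the era often had to distinguish WebKit builds by version rather than treat "WebKit" as a single target. A rough sketch (the user-agent strings below are illustrative fragments, not canonical):

```python
import re

def webkit_version(user_agent):
    """Return the WebKit build number from a UA string, or None."""
    match = re.search(r"AppleWebKit/(\d+)", user_agent)
    return int(match.group(1)) if match else None

# Illustrative user-agent fragments
iphone_ua = ("Mozilla/5.0 (iPhone; U; CPU iPhone OS 3_0 like Mac OS X) "
             "AppleWebKit/528.18")
android_ua = "Mozilla/5.0 (Linux; U; Android 1.6) AppleWebKit/528.5+"
opera_ua = "Opera/9.80 (S60; SymbOS; Opera Mobi) Presto/2.2"
```

Here both the iPhone and Android fragments resolve to build 528, while the Opera string yields nothing, which is the point Russell makes: devices that shipped around the same time tend to carry comparable WebKit builds.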

WebKit is also used by numerous feature phones. Vision Mobile estimates that at the end of 2009, WebKit had been embedded in more than 250 million devices.

Advancing the mobile browser

In many ways, HTML5 is just the baseline of where mobile browsers are headed. Many companies, from carriers to handset manufacturers, are looking to mobile browser innovation as a key to their mobile strategies.

  • WebOS extends Javascript to provide access to device characteristics like the address book, camera, and accelerometer.
  • Sony Ericsson worked with the PhoneGap community to create its WebSDK.
  • Symbian is wooing developers with access to the dialer, calendar, camera, contacts and other tools using web technology.
  • Forty carriers and handset manufacturers have formed the Wholesale Application Community to build an open platform that will work on all devices. They seek to combine JIL and BONDI. JIL and BONDI provide access to device APIs via web technology.

There are two common threads in each of these stories.

First, companies throughout the ecosystem are extending mobile browsers to provide more functionality and attract developers to their platforms. Second, they are all approaching it in similar ways built on HTML widget technology.

Much like WebKit, there will be inconsistencies between these efforts in the near term, but all of these efforts are headed in the same direction.

Mobile Competitive Landscape

In December, Morgan Stanley released its Mobile Internet Report. Buried among the more than 1,000 pages in that report was a slide showing probability-weighted scenarios for mobile operating systems:

Mobile internet operating system competitive landscape

In the most probable scenario, "products with the best HTML5 browsers gain share." It is no wonder then that so many mobile companies are lining up behind HTML5 and pushing mobile browser technology.

Two to many, many to one

In 2006, two mobile operating systems controlled 81 percent of the market. This year there are 10 different smartphone operating systems.

Over that same period of time, mobile browsers have gone from many different proprietary rendering engines to the point where WebKit alone will power browsers in more than 85 percent of the smartphones sold.

From two operating systems to many. From many browsers to one. We have two core mobile technologies headed in opposite directions.

Related:

May 12 2010

Why check-ins and like buttons will change the local landscape

Notable advancements in the geo sector -- GIS, GPS, slippy maps -- punctuate an otherwise steady equilibrium; progress in the geo world is subtle, and tends to sneak up on us without our at first knowing its significance. I think we've just passed one of these stealth milestones whose importance will be realized only after the amplitude of the ripple effect shakes the local landscape. I'm talking about the humble check-in and the like button.

At first blush the idea of a "check-in" is unworthy of note -- broadcasting publicly that you are at a particular venue is neither technologically sophisticated nor particularly novel. However, the driving impulse here is what's interesting: the check-in is much more about who you are than where you are. The passion driving the uptake of Foursquare, Gowalla, and their ilk is sociological rather than geographical.


People check-in to venues for the same reasons they so proudly display swooshes on shirts, Coach emblems on handbags, and glowing apples (or penguins for the more enlightened) on computing devices: these emblems communicate a facet of one's identity to the world in a facile, shared language. Checking into "Zeitgeist" in San Francisco tells the world you have roguish good looks (surely), plus a penchant for beer, outdoor smoking, and tamales. Checking into "Costco" tells the world that you own a car, and have an apartment or house in which to store large bulk items; checking into "SFO Long Term Parking" tells the world that you check-in too much, and should maybe ease-off a bit.


What logo or symbol can say so much, with such brevity, about an individual? The venue makes the man.

The magic here is the expression of affinity with a business, which is where the idea of the "like" has similar power in the local space (here I use "like" as any systemic convenience to express affinity, not exclusively Facebook's). In many ways liking a local business has significant advantages over a check-in: you do not have to be on-site, you like only once in perpetuity, and your affinity for the business is not measured by the number of times you visit. Check-ins, however, measure on-site presence, a hugely important metric for any business that values feet-through-the-door.

Thus the check-in holds the most potential for point-of-sale, in-the-moment marketing ("I see you are next door; come try a free latte at our place instead"), while Likes may offer greater potential for geo-relevant demographic targeting ("Here are deals for restaurants, similar to the ones you like, in areas where you are commercially active"). Likes and check-ins are really two sides of the same coin, but unfortunately share the same curse: there is no way to realize this user-to-business affinity and the value it represents uniformly across platforms.
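The distinction can be made concrete in a toy data model (all names here are hypothetical, not any platform's actual schema): a like is a unique, timeless user-to-business edge, while check-ins are timestamped events whose count measures on-site presence.

```python
from dataclasses import dataclass, field

@dataclass
class BusinessAffinity:
    """Toy model of likes vs. check-ins for a single business."""
    likes: set = field(default_factory=set)       # user ids; one per user
    checkins: list = field(default_factory=list)  # (user_id, timestamp) events

    def like(self, user_id):
        self.likes.add(user_id)  # liking twice has no additional effect

    def check_in(self, user_id, timestamp):
        self.checkins.append((user_id, timestamp))  # every visit counts

    def feet_through_the_door(self):
        return len(self.checkins)
```

The asymmetry falls out of the types: a set models "like only once in perpetuity," while an append-only event list models the repeat visits that a feet-through-the-door business cares about.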

Where previously this lack demanded only a frustrating duplication of effort, the problem has become more serious: the advent of likes and check-ins drive a semantic necessity to open and aggregate -- the problem is no longer one of mere inconvenience.

Gary Gale calls this problem "Geo Babel"; and the local landscape has all the hallmarks of the eponymous tower. Without a common means, our likes and check-ins will continue to be bound to the platform on which they were created, hamstringing their potential. Elbowing my way into Tim O'Reilly's metaphor, I would propose that this is a critical, missing component of the Internet Operating System. It seems outrageous that we find ourselves in 2010 without the means to refer to a business in a unique and unambiguous manner. Developers are left holding the bag for a number of reasons; here are my big three:

  1. Focus on listings data as end rather than means: Local search as we know it today is the parthenogenous child of the Yellow Pages industry. Many local search sites, and the data vendors they rely on, remain grounded in YP-era thinking, where the value was found in owning the listing data, making them discoverable in alphabetical order, and advertising against these listings. Local search for ages focused on being an electronic version of the Yellow Pages. Few organizations have looked above the horizon and considered carefully what value could be realized if listings were viewed as a means to connect users to businesses, rather than only advertise against their search.
  2. Attempts at distinction with common data: With peripheral exceptions, every local site out there has the same data. Boil these listings down to their hcards and you're looking at dupes across the board. This is of course known by all the big players, which is why they focus on differentiation with ancillary data such as ratings and reviews, or making their listings the most inclusive or the most current. This strategy drives competitors to attempt the "one ring" approach, where each tries to climb above the other in an effort to become the premier source of online local listings. It is a bugbear for developers because efforts to differentiate a commodity are always frustrating and rarely carry the sector forward.
  3. Over-fascination with pins on maps: The advent of GPS-enabled devices and Wi-Fi positioning gave us the ability to show people's location on the map; this has proved hugely distracting while the sector over the last four years has attempted to make buddy-finding interesting. We've since learned, I think, that people do not navigate by Long/Lat -- Foursquare and Gowalla succeed in part because they use business listings or venues as mechanisms of personal expression within a shared social landscape. Social location is best referenced by place not space; the coords tell me nothing.

This lacuna can be resolved one of two ways -- either one will do the trick but I am hoping for both:

First, we can work together toward an open database of places. Erick Schonfeld's post on TechCrunch a few weeks back is the most recent clarion call. This is not an insignificant undertaking, but it is certainly the preferred outcome. However, even with OpenStreetMap at our back, realizing this goal will take an eon in Internet time unless we see an unanticipated outbreak of altruism on the part of a data supplier. Of course, the "new" players in the game -- those whose businesses use listings as a conveyance of value -- understand this need and are best positioned to deliver against it.

The second alternative -- one that is much more realistic in the short term -- is an open and accessible Local Listings Crosswalk: essentially an API that takes your URI and translates it to my URI, so we can ensure we are speaking about the same business. (I am less concerned about geographic place-names because such services already exist.) Placecast's Match API went live as I was writing this post, so I hope that theirs will be the first step in getting us to where we want to be. The outcome, of course, will be achieved when we have a number of these services crosswalking via callbacks and hcard/og-aware crawlers, perpetuating listings data and ensuring it is up to date and uniformly accessible.
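A crosswalk of this kind can be sketched as a simple translation table (the identifiers below are made up for illustration; Placecast's actual Match API works differently): given one provider's ID for a business, translate it into another provider's ID via a shared canonical key.

```python
# Hypothetical crosswalk: (provider, local id) -> canonical business key
TO_CANONICAL = {
    ("foursquare", "4sq:1234"): "biz:zeitgeist-sf",
    ("yelp", "yelp:zeitgeist-san-francisco"): "biz:zeitgeist-sf",
}

# Reverse index: canonical key -> {provider: local id}
FROM_CANONICAL = {}
for (provider, local_id), canonical in TO_CANONICAL.items():
    FROM_CANONICAL.setdefault(canonical, {})[provider] = local_id

def crosswalk(provider, local_id, target_provider):
    """Translate one provider's business ID into another provider's."""
    canonical = TO_CANONICAL.get((provider, local_id))
    if canonical is None:
        return None
    return FROM_CANONICAL[canonical].get(target_provider)
```

The hard part, of course, is not the lookup but populating the table: matching dirty, duplicated listings to a canonical record is exactly the work the author argues the sector keeps duplicating.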

One or both of these services will allow us to put the bother of business listings management behind us, so we can get on with what's really exciting about local: connecting consumers with businesses they love, and providing genuine value to both.

Related:

April 30 2010

State of the Internet Operating System Part Two: Handicapping the Internet Platform Wars

This post is Part Two of my State of the Internet Operating System. If you haven't read Part One, you should do so before reading this piece.


As I wrote last month, it is becoming increasingly clear that the internet is becoming not just a platform, but an operating system, an operating system that manages access by devices such as personal computers, phones, and other personal electronics to cloud subsystems ranging from computation, storage, and communications to location, identity, social graph, search, and payment. The question is whether a single company will put together a single, vertically-integrated platform that is sufficiently compelling to developers to enable the kind of lock-in we saw during the personal computer era, or whether, Internet-style, we will instead see services from multiple providers horizontally integrated via open standards.


There are many competing contenders to the Internet Operating System throne. Amazon, Apple, Facebook, Google, Microsoft, and VMware all have credible platforms with strong developer ecosystems. Then there is a collection of players with strong point solutions but no complete operating system offering. Let's take them in alphabetical order.

Amazon

With the introduction in 2006 of S3, the Simple Storage Service, and EC2, the Elastic Compute Cloud, Amazon electrified the computing world by, for the first time, offering a general-purpose cloud computing platform with a business model that made it attractive to developers large and small. An ecosystem quickly grew up of companies providing developer and system-management tools. Companies like RightScale provide higher level management frameworks; EngineYard and Heroku provide Ruby-on-Rails based stacks that make it easy to use familiar web tools to deploy applications against an Amazon back-end; the Ubuntu Enterprise Cloud and Eucalyptus offer Amazon-compatible solutions.

A number of competitors, including Rackspace, Terremark, Joyent, GoGrid, and AppNexus are going head to head with Amazon in providing cloud infrastructure services. Many analysts are simply handicapping these providers and comparing them to offerings from Microsoft and Google. But to compare only cloud infrastructure providers is to miss the point. It's a bit like leaving out Microsoft while comparing IBM, Compaq, and Dell when handicapping the PC operating system wars. Hardware was no longer king; the competition had moved up the stack.

The key subsystems of the Internet Operating System are not storage and computation. Those are merely table stakes to get into the game. What will distinguish players are data subsystems.

Data is hard to acquire and expensive to maintain. Delivering it algorithmically at the speeds applications required for reasonable real-time performance is the province of very few companies.

In this regard, Amazon has three major subsystems that give it an edge: its access to media (notably books, music, and video); its massive database of user-contributed reviews, ratings, and purchase data; and its One-Click database of hundreds of millions of payment accounts. As yet, only one of these -- payment -- has been turned into a web service, the Amazon Flexible Payments Service.

Despite its early lead in internet payment, Amazon reserved payment for too long as a competitive advantage for its own e-commerce site, and didn't deploy it as an internet-wide service usable by developers until recently. And even then, Amazon lacks significant payment presence on mobile devices. Amazon has its own Kindle device for ebook sales, and its iPhone and Android apps for e-commerce, but powerful as these apps may be for driving sales to Amazon, they give the company no leverage in supporting third party developers. If anything, they will hinder the development of a mobile e-commerce ecosystem based on Amazon because Amazon is the largest competitor for many potential e-commerce developers.

Amazon's use of its media database as a back-end for its own proprietary e-reader device, the Kindle, highlights one of the fronts in what I've elsewhere called the War for the Web, namely the use of a dedicated front-end device giving preferential access to a player's back-end services. Apple and Google are in a much stronger position in this regard, with general-purpose smartphones as the device front-ends for their platforms. But Amazon has moved quickly to deploy its Kindle software on iPhone and Android; their compelling library of content may make the use of a proprietary device less important.

Two other Amazon services worthy of note are the Mechanical Turk service and the Fulfillment Web Service.

The Mechanical Turk service allows developers to farm out simple tasks to human participants. This turns out to be a remarkably powerful capability, with applications as divergent as data cleansing, metadata management, and even crowdsourcing disaster relief. There are many tasks that computers can't do alone, but that humans can help with. I've often made the case that all Web 2.0 applications are in fact systems for harnessing the collective intelligence of human users. But most of these applications do it in a single field of endeavor; Mechanical Turk is the leading general-purpose platform for putting people to work on small tasks that are easy for humans but hard for computers to do on their own.
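To make the "farming out" concrete, here is a minimal sketch of assembling a Mechanical Turk HIT (Human Intelligence Task) request. The task, reward, and URL are hypothetical examples; a real request would be submitted to the MTurk API with AWS credentials rather than merely assembled as a dictionary:

```python
# Sketch: assembling a Mechanical Turk HIT request. The task, reward,
# and URL below are hypothetical; a real request would be sent to the
# MTurk API with valid AWS credentials.

# MTurk describes questions in XML; an ExternalQuestion points workers
# at a form hosted on the requester's own site.
QUESTION_XML = """<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/tasks/label-image</ExternalURL>
  <FrameHeight>400</FrameHeight>
</ExternalQuestion>"""

def build_hit_request(title, description, reward_usd, assignments):
    """Return the keyword arguments for an MTurk CreateHIT call."""
    return {
        "Title": title,
        "Description": description,
        "Reward": f"{reward_usd:.2f}",       # US dollars, passed as a string
        "MaxAssignments": assignments,        # how many distinct workers see it
        "LifetimeInSeconds": 24 * 3600,       # how long the HIT stays listed
        "AssignmentDurationInSeconds": 600,   # time a worker has to finish
        "Question": QUESTION_XML,
    }

request = build_hit_request(
    "Categorize a product image",
    "Pick the best category for one photo.",
    reward_usd=0.05,
    assignments=3,
)
```

The pattern is the point: a small payment, a small task, and a pool of human workers addressable the same way a developer would address any other web service.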

Amazon's Fulfillment Web Service is another sleeper, whose full significance hasn't yet been realized. I foresee a future in which phone-based e-commerce makes the leap from virtual to physical goods. Right now, there's a huge business in selling songs, applications, ebooks, movies, and games on phones. There's an even bigger explosion coming in buying physical goods on the phone. And Amazon is the only platform player who actually can offer programmatically-driven fulfillment services. This is hugely important.

Amazon's weaknesses: search (they have search capabilities with A9 and Alexa, but no business model that would support offering or extending those capabilities to developers at the cost, namely free, that is going to be required); advertising; location services; speech recognition; social graph. They have a very strong hand, very deep in some areas, but almost completely lacking in others.

Amazon is also weaker financially than its big three competitors: Apple, Google, and Microsoft. Jeff Bezos argues that this is actually a strength. He has noted more than once that Amazon's core business is retail, a notably low-margin business, and that retail is the school of hard knocks where Amazon learned to make a profit; cloud computing looks like a better business by comparison. "Commodity businesses don't scare us," he says. "We're experts at them. We've never had 35 or 40 percent margins like most tech companies."

This idea, of course, applies only to the commodity layers of cloud computing. And that's one more reminder that the outsized profits actually reside in the data subsystems where lock-in is achievable.

Apple

A few years ago, everyone thought that the big industry showdown was between Microsoft and Google. Now, Apple is the company to beat. With over 185,000 applications, the iPhone app store is creating a new information and services marketplace to rival the web itself. While Apple doesn't provide Amazon-like cloud hosting services, they don't have to. iPhone apps don't live on the web per se, though most of them, apart from local games, do rely on internet-based services.

Apple's strongest Internet OS subsystems are media (the iTunes store), application hosting (the App Store), and payment. Apple has over a hundred million people who are used to buying content with one click. They've given Apple their payment credentials, and use them to buy a wide variety of digital goods: first music, then applications, including games, then books.

What's next? As physical goods e-commerce takes off on the phone, I expect Apple to try to insert itself into the great money river flowing through its platform. Apple takes a 30% cut from application sales. While this percentage is too high for physical goods, it's not hard to imagine Apple interposing itself as the payment processor for applications ranging from eBay to Chipotle, taking a little bit of a much larger revenue stream.

Apple's weaknesses are legion. They have no cloud computation platform; they are latecomers to location and advertising, with interesting acquisitions but no clear strategy and nothing like a critical mass of data. (They do, however, have piles of cash, and strategic acquisitions could quickly change those dynamics.) They have great social graph assets in the form of user address books, email stores, and instant messaging friend networks, but they show little sign of understanding how to turn those assets into next-generation applications or services. Most strikingly, they don't really seem to understand some key aspects of the game that is afoot.

If they did, MobileMe would be free to every user, not a $99 add-on. Web 2.0 companies know that systems that get better the more people use them are the key to marketplace dominance in the network era. The social graph is one such system, for which Facebook is currently the market leader. Companies that want to dominate the Internet Operating System either need to make a deal with Facebook to integrate their platforms, or have a compelling strategy for building out their own social graph assets. Unless Apple is planning a deal with Facebook, their current MobileMe strategy seems only to indicate that they don't understand the stakes.

Apple's other weaknesses might well be addressed by an alliance with Microsoft, which has strengths everywhere that Apple is weak. Given Apple's feud with Google, this is an increasingly likely scenario. In fact, you can imagine a 3-way alliance between Apple, Facebook, and Microsoft that would make for a very powerful platform. That being said, alliances are relatively weak at coordinated execution, so this opportunity may be stronger in theory than it turns out in fact.

But all of these weaknesses may be outweighed by the amazing job that Apple has done in creating a new computing paradigm to rival the web itself. As Jim Stogdill wrote in a recent post, the iPad isn't a computer, it's a distribution channel:

One interesting twist is how the iPad combines network effects and constrained distribution. The bright shiny object design of the iPad leads to network effects at the app store which in turn drives more consumers back to the device itself. Then to the degree that those two forces hold consumers in thrall of the device, Apple can use the device as the point of sale for content worth more than the device itself. The leverage is linked - the first leads to market presence, and then the market presence makes for stronger monetization opportunities in the device-hosted channel.

The other interesting thing is that so many of those "apps" are really just web pages without a URL. Or books packaged as an app. In short, this is content that is abandoning the web to become a monetizable app.

History is never completely new and we've seen things like this happen before. Prior to the 1980s, essentially all television was broadcast in the clear. An unconstrained distribution channel like broadcast TV could only be monetized through ad sales, but along came cable with its point-to-point wave guides and surprised consumers were suddenly faced with paying for access.

This is Apple's trump card.

There are those who point to the lessons of VHS vs Betamax and the commodity PC vs Apple, and see Android as an inevitable winner. However, as Mark Sigal points out in Five reasons iPhone vs Android isn't Mac vs Windows:

It is a truism that in platform plays, he who wins the hearts and minds of developers wins the war. In the PC era, Apple forgot this, bungling badly by launching and abandoning technology initiatives, co-opting and competing with their developers, and routinely missing promised milestones. By contrast, Microsoft provided clear delineation points for developers, integrated core technologies across all products, and made sure developer tools readily supported these core initiatives. No less, Microsoft excelled at ensuring that the ecosystem made money.

Lesson learned, Apple is moving on to the 4.0 stage of its mobile platform, has consistently hit promised milestones, has done yeoman's work evangelizing key technologies within the platform (and third-party developer creations - "There's an app for that"), and developed multiple ways for developers to monetize their products. No less, they have offered 100 percent distribution to 85 million iPhones, iPod Touches and iPads, and one-click monetization via same. Nested in every one of these devices is a giant vending machine that is bottomless and never closes. By contrast, Google has taught consumers to expect free, and the Android Market is hobbled by poor discovery and clunky, inconsistent monetization workflows. Most damning, despite touted high-volume third-party applications, there are (seemingly) no breakout third-party developer successes, despite Android being around two-thirds as long as the iPhone platform.

And that's only one of the five compelling reasons that Mark puts forward for Apple to win in mobile. (However, see Chris Lynch's rebuttal for the corresponding arguments why Android will win.)

Nonetheless, even if Apple has the dominant mobile platform, they won't have the full recipe for the operating system of the future. A network connection has two ends, and until Apple can offer a complete suite of cloud data services, they can't deliver the kind of lock-in that Microsoft enjoyed in the PC era. This is actually a good thing, and a harbinger of the best outcome for the Internet OS, namely that no one controls enough of it, everyone has to compromise, and interoperability (the internet as "network of networks") continues to play its generative role.

Facebook

Archilochus, the Greek poet, once said, "The fox knows many things, but the hedgehog knows one big thing." He might well have been talking about Facebook. Their one big thing, the social graph, might look like an incomplete offering, but they have made a great deal of progress based on it.

Facebook is more than a website. For many people, it is a replacement for the web, the entire platform, the world in which they receive news, communicate with friends, play games, store and share photographs and videos, and use any one of hundreds of thousands of applications. The Facebook application ecosystem exceeds even the Apple App Store in the number of applications (500,000 to Apple's 185,000); third party developers like Zynga are amassing fortunes using entirely new social selling dynamics.

Facebook Connect is well on its way to becoming the universal single sign-on for the web - one of the first Internet Operating System subsystems (after Google Maps) to get wide adoption across a range of websites not belonging to the platform provider. Even more importantly, Facebook Connect allows you to Facebook-enable mobile apps. Clearly, Facebook understands what it means to be a platform provider.

Their latest announcements, the Graph API and Social Plugins, take Facebook beyond its original "walled garden" approach, turning it into a social utility for the entire web (including mobile devices).

Facebook is testing a payment platform, but perhaps more interestingly, they appear to be partnering with Paypal to increase their capabilities in this area. They are getting better at monetization via advertising, but they haven't yet found the golden path for social advertising that Google found for search.

Facebook's weaknesses: Location, control over mobile devices, general purpose computing and storage platforms. But these are weaknesses only in the context of the desire to have a vertically-integrated platform from a single vendor. It may instead be that the lack of these capabilities is Facebook's greatest strength, as it will force them into a strategy of horizontal integration.

Google

There's no question in my mind that Google's Internet Operating System is the furthest along. Many observers will look first to Google App Engine as Google's echo of the Win32 promise. How can anyone who lived through the transition from DOS to Windows read the following paragraph and not be struck by the similarity of intent?
"For the the first time your applications can take advantage of the same scalable technologies that Google applications are built on, things like BigTable and GFS. Automatic scaling is built in with App Engine, all you have to do is write your application code and we'll do the rest. No matter how many users you have or how much data your application stores, App Engine can scale to meet your needs."
As Mark Twain once said, "History does not repeat itself, but it does rhyme."

But to focus too much on App Engine is to miss the point. If all the Internet Operating System does is provide storage and computation, then Amazon, Microsoft, and VMware are all contenders - and Amazon has the lead.

Remember, though, that in the future, the subsystems that applications depend on to differentiate themselves will largely be data subsystems. And data at scale is Google's sweet spot.

Consider the following applications, and ask yourself how many companies could put together all the data and computation assets needed to deliver on these promises:

  • Provide free turn-by-turn directions on the phone, with destinations set by natural language search rather than by address, with optional speech recognition to set the destination or to search along the route for items of interest, with real-time traffic information used to calculate your arrival time, and actual Streetview images of turns as well as of your final destination. (The ability to deliver these services directly is a critical advantage. Anyone who has to license one or more of the components from another company has a much harder time giving the service away for free.)
  • Point your cell phone camera at many common objects - a book cover, a wine label, a work of art in a museum, a famous building or other landmark, a company logo, a business card, a bar code, and even, in an unreleased version, a human face - and return information about that object (Google Goggles). (Amazon's e-commerce app for the iPhone and Android does show off some similar capabilities. Bar-code scanning and image recognition allow you to see something in the real world, quickly find it at Amazon, and either order it, or simply remember it on your Amazon wishlist. But the range of objects recognized is less complete, and the use case is Amazon e-commerce, versus Google's more general platform.)
  • Automatically dial all of your phone numbers until you answer one of them, and if you aren't found, automatically transcribe (even badly) the message that was left for you.
  • Translate your speech or document (even badly) into any one of fifty languages.
The list goes on. Google has the most impressive set of data assets of any company on the planet, together with the boldness to put them together into a vision of how computing will work in the future. They aren't afraid, any more than Bill Gates was with Windows 1.0, to put out something that is ahead of their current reach, with the persistence to stick with it till it works.

And that's leaving out Google's stronghold in search and advertising, its dominance via YouTube of internet video, its cloud office suite, its strong email offering, the fact that Google Maps is becoming the lingua franca of mapping across the web, and more. With Android, Google also has the front-end component of the full mobile-to-cloud stack, with an industry adoption strategy (open hardware from multiple manufacturers) that has worked before, for both the VCR and the personal computer. They have a robust application ecosystem, both for the phone and for the enterprise. A week after its launch, the Google Apps Marketplace had nearly 1500 apps, a faster uptake than even the iPhone. Today, there are over 50,000 Android apps. (But as Mark Sigal notes in the analysis linked earlier, Apple has demonstrated much more consistent monetization for developers. According to O'Reilly Research, 24% of iPhone apps are free apps, while 59% of Android apps are free.)

That being said, Google does have a payment platform. While Google Checkout was an also-ran in the web payment wars, it has renewed significance and opportunity in the mobile era, as every Android Market customer is, by default, now a Google Checkout customer. In this one story you see how having all of the elements together makes each of them stronger than they would be alone. If you have an Android phone, Google Checkout is suddenly the default payment option. It doesn't have to be the best. It's the incumbent.

Google's weaknesses: they are the one to beat, the new Microsoft that everyone is afraid of. In addition, they lack Apple's sure touch on user experience; even the slickest Android phone lags Apple's fit and finish. They also have yet to come up with a convincing social media subsystem, although they are clearly focused on this opportunity. Their strongest assets are not actually overtly social systems like Google Wave or Google Buzz, but the phone itself, and its connection to the Gmail-based cloud address book.

I continue to believe that the tools we actually use to communicate with each other - our phones, our email, our instant messaging, and our shared documents - are the most powerful measures of our real social network. The company that first cracks the code of reflecting that social network throughout its applications, and giving the user the power to harness that network, will ultimately win. Google has many of the data assets necessary to develop those next-generation applications, but they haven't yet found the way to put them together.

Microsoft

Microsoft, like Google, has a strong suite of capabilities across the board: the Azure hosting and computation platform, the Bing search engine and advertising platform, a full-featured mapping platform, speech recognition (via the 2007 acquisition of Tellme). They have made a promising restart in their mobile platform with Windows Phone 7. They have enormous untapped business-oriented social media assets in Microsoft Exchange, Outlook, and Sharepoint. And of course, they have boatloads of cash, and the willingness to spend it to achieve strategic objectives. They understand the game they are playing, and how important it is to their survival.

That being said, Microsoft's biggest asset right now might just be that they aren't Google, making them the favored white knight of everyone from Apple to Facebook. With rumors flying that Apple is in talks with Microsoft to make Bing the default search engine on iPhone, you can see the shape of a possible future in which an alliance between Apple and Microsoft acts as a counter to Google's outsized ambitions. Add in a partnership with Facebook, in which Microsoft is an investor, and you have a powerful combination.

Microsoft's biggest weakness (apart from the fact that the original Windows Mobile platform was a failure, and that they are now facing a restart) is the "strategy tax" of continuing to support Windows and Microsoft Office, the very same problem that kept Microsoft from seizing the internet opportunity in the late 1990s.

Another point of distinction between Microsoft and Google is Microsoft's "software plus services" vision - namely the idea that rich, device-specific client apps will be the front end to web services, versus Google's web-only vision. Microsoft argues that, faced with the success of native apps on smartphones, even Google is embracing a software-plus-services approach. However, the rich clients that are driving the equation are not PC-based; they are native smartphone apps. And barring a successful restart of Microsoft's phone strategy, Google has the advantage there.

This isn't to say that Microsoft won't continue to be a phenomenally successful company - just as IBM managed to do despite the death of its mainframe monopoly - but the cutting edge of the future is in data-backed mobile services, where they are playing serious catch up.

Microsoft's greatest opportunity, paradoxically, is to embrace open data services in the same way that IBM embraced open source software, to integrate their offerings with those from Facebook, Nuance, Paypal, and other best-of-breed data services. The question is whether that's in their DNA or their business model. My bet is that it's Facebook that emerges as the integration point for selected Microsoft services (location and search in particular) rather than the other way around.

Nokia

While Nokia is left out of many of the overheated discussions about the future, let's not forget that they are still the dominant phone supplier in the world, that they own significant location and mapping assets via their purchase of Navteq, and that they too have a platform vision in the form of Ovi, providing access to music, maps, applications, games, and more on Nokia phones. That being said, it's hard to conceive of Nokia as a first-tier player in the Great Game.

PayPal

There is no doubt in my mind that payment will be one of the most important of the Internet OS subsystems. E-commerce, not advertising, is the killer business model of the mobile world.

And payment is hard. Knowing how much credit to extend is one of the problems that is best left to people with algorithmic expertise and massive amounts of data.

Apple and Google have their own built-in payment solutions (as does Microsoft for some of their platforms, such as Xbox), but what about everyone else? Visa and MasterCard remain sleeping giants; mobile phone carriers also have payment capabilities, but their business culture and systems make it difficult to deploy those capabilities for cutting-edge applications. PayPal is web-native; making the transition to mobile has got to be their highest priority. Startups like Square (and others yet to be announced) are also taking aim at this area. Expect innovation. Expect competition. Expect acquisitions.

Salesforce

Salesforce.com also has a strong platform play, with thousands of business-oriented applications built on the force.com platform. Salesforce was in fact the first to promulgate the idea of "platform as a service," as distinguished from "software as a service" (individual applications) or "infrastructure as a service" (the kind of platform that Amazon pioneered).

Twitter

While Twitter is hardly, as yet, in the same league as Apple, Google, Microsoft, or Facebook, their dominance of the real time web has been game changing. They have a large and growing developer ecosystem, and a minimalist mindset that leads to rapid evolution. Twitter is increasingly used as transport for other kinds of data, and Twitter analytics are pointing the way towards other kinds of real-time intelligence.

VMware

At first glance, VMware might look like a niche player. Yes, they are the leader in application virtualization, and by virtue of that, a leader in corporate cloud computing. VMware's strategy appears to be based on making it easy for applications to migrate between cloud providers, and creating an easy interface between private and public clouds.

But is that enough?

Anyone who knows Paul Maritz knows that this is a man who understands the dynamics of internet data. While at PiCorp, the startup he founded after leaving Microsoft in 2000, he was focused on a vision of shared data in the cloud. "Why would you want to have your data in the cloud?" he told me in a private conversation some years ago. "For the same reason you keep your money in the bank rather than under your mattress. It becomes more valuable when it's kept with other people's data."

And as Scott Yara, founder and chairman of Greenplum, the massively parallel Postgres-based database (disclosure: I am an advisor), pointed out to me when describing Greenplum's Chorus offering, it is not in fact Google that has the world's largest data repository. The New York Stock Exchange, T-Mobile, Skype, Fox Interactive Media (MySpace), and many others host their data in Greenplum. The sum of corporate data (sometimes referred to as "the dark web") is far greater than that in any consumer web company. Hence Chorus, Greenplum's platform to enable data sharing between its corporate customers.

Put in this light, VMware's management of private clouds may turn out to be an unexpected advantage. As companies far from the consumer web become fuller participants in the cloud data operating system, they will rely on facilities that VMware is already building: facilities that allow them to manage the boundaries between private and public data. This data and service segmentation may turn out to be one of the fundamental Internet OS capabilities.

In addition, VMware's acquisition of Zimbra might be seen as the first step towards acquiring internet data assets of the kind described in this article. Zimbra is an Exchange-compatible email platform; in the right hands it might be used to unlock Microsoft's business-oriented social graph.

VMware's acquisition of SpringSource is even more significant. Roman Stanek, CEO of cloud business-intelligence provider GoodData (in which I am an investor and a board member) remarked to me that for corporate cloud developers accustomed to programming in Java, SpringSource is a kind of "Goldilocks" solution. Amazon's cloud APIs are too low-level; Google's and Microsoft's too high, with too much buy-in to Google's or Microsoft's systems; VMware's are "just right."

Maritz' long experience at Microsoft drove home to him the importance of developer tools. He understands how platform advantage is built, brick by brick. You can have the best platform in the world, but developer tools are what makes it stick.

VMware's weaknesses, of course, are many. They lack assets in media, in search, in advertising, in location based services, in speech recognition, and in many other areas that are going to be the currency of developers in future.

But that may not matter. Because there's another competitor in the mix.

Small Pieces Loosely Joined

In talking about the Internet Operating System, I've long used Tolkien's "one ring to rule them all" as a metaphor for platforms that seek, like Windows before them, to take control of the entire developer ecosystem, to be a platform on which all applications exclusively depend, and which gives the platform developer power over them. But there is another alternative. Both Linux and the World Wide Web are examples of what I call "small pieces loosely joined" (after David Weinberger's book of the same name). That is, these platforms have a simple set of rules that allow applications to interoperate, enabling developers to build complex systems that work together without central control.

[Figure: "one ring to rule them all" vs. "small pieces loosely joined"]

Apple, Google, and Microsoft all seem to be plausible contenders for a one-ring strategy. Facebook too may want to play that game, though they lack the crucial mobile platform that each of the others hopes to control.

But it seems to me that one of the alternative futures we can choose is a future of cooperating internet subsystems that aren't owned by any one provider, a system in which an application might use Facebook Connect and Open Graph Protocol for user authentication, user photos, and status updates, but Google or Bing maps for location services, Google or Nuance for speech recognition, Paypal or Amazon for payment services, Amazon or Google or Microsoft or VMware or Rackspace for server hosting and computation, and any one of a thousand other developers for features not yet conceived.
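In software terms, this future looks like loose coupling against interchangeable providers. Here is a sketch of that shape; every provider name and function in it is a hypothetical stand-in, not a real client library, since the point is only that the application commits to interfaces rather than to any one vendor's stack:

```python
# Sketch of "small pieces loosely joined": an app wired to interchangeable
# service providers behind minimal interfaces. All providers here are
# hypothetical stand-ins, not real client libraries.
from dataclasses import dataclass
from typing import Callable

@dataclass
class App:
    # Each subsystem is just a function the app calls; any vendor's
    # implementation can be slotted in without touching the app itself.
    authenticate: Callable[[str], bool]
    geocode: Callable[[str], tuple]
    charge: Callable[[str, float], str]

# Two hypothetical providers competing for the same payment slot:
def paypal_charge(user, amount):
    return f"paypal:{user}:{amount:.2f}"

def amazon_charge(user, amount):
    return f"amazon-fps:{user}:{amount:.2f}"

app = App(
    authenticate=lambda user: True,         # e.g. Facebook Connect
    geocode=lambda addr: (37.78, -122.39),  # e.g. Google or Bing maps
    charge=paypal_charge,                   # swappable payment provider
)

receipt = app.charge("alice", 9.99)
# Swapping payment vendors is a one-line change:
app.charge = amazon_charge
```

The design choice this illustrates is horizontal integration: no provider owns the whole stack, so each subsystem competes on its own merits.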

This is a future of horizontal integration, not vertical integration. This integration is already happening at many levels. Consider music. Virtually every device that reads a music CD relies on Gracenote's CDDB to look up the track names; this is one of the net's oldest and most universally deployed data services. SonicLiving provides sites from Facebook to Pandora and Loopt with the ability for their users to find upcoming live concerts for artists they like - and to add them to their own calendars via "Universal RSVP."
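For a flavor of how simple and durable the CDDB service is: it identifies a disc not by any embedded metadata but by a fingerprint of the disc's track layout. Below is a simplified sketch of the classic disc-ID calculation, working directly in seconds (real CDs measure offsets in 75ths-of-a-second frames, with a two-second lead-in); the track offsets are made up:

```python
def cddb_disc_id(track_offsets_seconds, disc_length_seconds):
    """Classic CDDB-style disc ID: a 32-bit fingerprint of the track layout."""
    def digit_sum(n):
        # Sum of the decimal digits of a track's start offset.
        return sum(int(d) for d in str(n))

    checksum = sum(digit_sum(off) for off in track_offsets_seconds) % 255
    # Total playing time (leadout minus first track start), in seconds.
    total = disc_length_seconds - track_offsets_seconds[0]
    # Pack checksum, total length, and track count into one 32-bit value.
    return (checksum << 24) | (total << 8) | len(track_offsets_seconds)

# Hypothetical 3-track disc: tracks start at 2s, 150s, 300s; disc runs 500s.
disc_id = cddb_disc_id([2, 150, 300], 500)
```

Because the fingerprint derives from the disc itself, any player anywhere computes the same ID and can look up the same track names, which is exactly what makes such a modest service universally deployable.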

Now it's certainly possible that Gracenote and SonicLiving (both private companies) might be acquired by someone looking to consolidate their hold on the infrastructure of online music, but evidence is strong that there will be countless "point" solutions like these that will be consumed by developers.

The Internet Operating System may end up looking more like a Linux distribution than a Microsoft or Apple PC or phone operating system. VMware might perhaps provide a cloud computing "kernel" while Facebook provides a social UI layer, Google or Bing provide alternate search subsystems, Android and iPhone and Nokia and the next generation of Windows Mobile provide mobile phone front-ends, and so on.

The VMForce announcement, which combines elements of VMware and Salesforce's respective offerings into a single developer platform, is a good sign of things to come, as companies who aren't holding on to the vision of a single vertically-integrated platform find it in their interest to work together.

As Benjamin Franklin so memorably said, just before signing the American Declaration of Independence: "We must, indeed, all hang together, or most assuredly we shall all hang separately." Developers will need to choose between adopting a single platform and pushing for open systems that allow interoperability and choice.

In the short term, I believe we'll see heightened competition, shifting alliances, and a wave of innovation, as companies fight for advantage in delivering next generation applications, and then use those applications to drive adoption of their respective platforms.

The key question, to my mind, is which of the "big four" (Apple, Google, Microsoft, and Facebook) will most strongly adopt the horizontal, open strategy.

Apple is the least likely. They have made a compelling case for vertical integration; what's more, they have made it work.

Microsoft seems like an unlikely ally of an open internet strategy given their history and heritage, but necessity is a good teacher.

Facebook has made selective moves towards openness. As their partnership with SonicLiving demonstrates, they consume web services from others as well as produce them. And while critics have argued that Facebook's recent open announcements don't go far enough, it's clear to me that Facebook gains more than it loses by cooperating with everyone from PayPal and VMware to Microsoft to strengthen their hand.

Google has made strong, public commitments to the values of the open web. Critics have pointed out, quite rightly, that For Google, The Meaning Of Open Is When It's Convenient For Them.

But frankly, this is true of any company balancing open and proprietary strategy. Does anyone doubt that IBM's commitment to open source software was gated on corporate advantage? Or that companies from Red Hat to MySQL have added proprietary elements to a core open source strategy?

In the end, companies make decisions about open versus closed and proprietary on competitive grounds. The art of promoting openness is not to make it a moral crusade, but rather to highlight the competitive advantages of openness, and to knit together the strategies of companies who might otherwise find themselves left out of the game.

This, by the way, is the backdrop for the discussion at this year's Web 2.0 Expo and especially Web 2.0 Summit. While the term "Web 2.0" has come to mean many things to many people, for me it's always been the story of what happens when you treat the internet, not any individual computer, as the platform. The Expo focuses on the technical infrastructure of the platform; the Summit focuses on the business models and the business strategy.

As John Battelle notes in his post Points of Control, "Fifteen years and two recessions into the commercial Internet, it’s clear that our industry has moved into a new competitive phase - a “middlegame” in the battle to dominate the Internet economy. To understand this shift, we’ll use the Summit’s program to map strategic inflection points across the Internet landscape, identifying key players who are battling to control the services and infrastructure of a websquared world."

Handicapping the Players

This post provides a conceptual framework for thinking about the strategic and tactical landscape ahead. Once you understand that we're building an Internet Operating System, that some players have most of the pieces assembled, while others are just getting started, that some have a plausible shot at a "go it alone" strategy while others are going to have to partner, you can begin to see the possibilities for future alliances, mergers and acquisitions, and the technologies that each player has to acquire in order to strengthen their hand.

I'll hope in future to provide a more thorough drill-down into the strengths and weaknesses of each player. But for now, here's a summary chart that highlights some of the key components, and where I believe each of the major players is strongest.

[Chart: key components of the Internet Operating System, showing where each of the major players -- and the column marked "other" -- is strongest]

(Note that this chart is influenced by one that Nick Bilton of the New York Times put together last January.)


The most significant takeaway is that the column marked "other" represents the richest set of capabilities. And that gives me hope.

April 22 2010

Preparing for the realtime web

The dominance of static web pages -- and their accompanying user expectations and analytics -- is drawing to a close. Taking over: the links, notes, and updates that make up the realtime web. Ted Roden, author of O'Reilly's upcoming "Building the Realtime User Experience" and a creative technologist at the New York Times, discusses the realtime web's impact in the following Q&A.

Mac Slocum: Have we shifted from a website-centric model to a user-centric model?

Ted Roden: It used to be that a user sat down at a computer and checked Yahoo and CNN.com and whatever else. Now, users get their Yahoo updates via Twitter and pushed into Facebook, wherever they are. So rather than a user going to a specific website, websites are coming to where the users already are.

MS: Has push technology finally found its footing with realtime applications?

TR: I think so. It's not that push technology was a solution looking for a problem; it was only a partial solution. But now broadband penetration is wide, browsers are much more stable and resource-friendly, servers are cheap or free, and the development of realtime applications has gotten drastically easier. Using push technology without all of those other bits in place was a lot more painful for everybody involved, and as a result, designers and programmers stuck with standard web apps. Now we can take a lot of that for granted and think like desktop application designers.

MS: What skill sets do developers need to take advantage of the realtime web?

TR: Python and Java are certainly strong contenders at this point. Tornado is really exciting to me. But one thing is totally clear: the world of a dominant HTTP server is gone. Developers need to get comfortable with the fact that there's going to be a huge shakeup in the server world. Instead of big fat Apache servers, we'll see a lot more apps running with built-in HTTP servers that are specific to the application, or at least the type of application.
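As an aside, the shift Roden describes -- applications shipping with their own HTTP servers instead of sitting behind a big general-purpose Apache -- can be sketched with nothing but the Python standard library. This is a toy illustration, not Tornado itself:

```python
# Minimal sketch of an app with a built-in, application-specific
# HTTP server, using only the Python standard library.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class UpdatesHandler(BaseHTTPRequestHandler):
    # One application-specific endpoint: the latest updates.
    def do_GET(self):
        body = json.dumps({"updates": ["hello from the realtime web"]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet

def serve_once(port=8765):
    """Start the app's own HTTP server in a background thread."""
    server = HTTPServer(("127.0.0.1", port), UpdatesHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    server = serve_once()
    with urlopen("http://127.0.0.1:8765/") as resp:
        print(json.load(resp)["updates"][0])
    server.shutdown()
```

A real realtime app would hold connections open (long-polling or similar) rather than answering one-shot GETs, but the deployment shape -- the server baked into the application -- is the point here.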

MS: How about publishers: what do they need to consider as they're working with realtime apps?

TR: There is a lot to do on the content side. Creators will have to figure out which messages are important enough to push. They'll also need to know whether content has to change as it's distributed in different formats. Big publishers like the New York Times have long understood that a headline that works in print doesn't work on the web. And they've also figured out that regular web headlines don't work on the mobile web. At the Times, we've recently started retooling the headlines as they get pushed out to Twitter and Facebook because the content needs to fit those platforms specifically.
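To make the retooling idea concrete, here's a hypothetical sketch of fitting one headline to several platforms. The limits and platform names are invented for illustration; they are not the Times' actual rules:

```python
# Hypothetical per-platform headline limits (invented for illustration).
PLATFORM_LIMITS = {"web": 90, "mobile": 60, "twitter": 140}

def retool_headline(headline, platform, suffix=""):
    """Fit a headline (plus an optional link suffix) within a platform's limit."""
    limit = PLATFORM_LIMITS[platform] - len(suffix)
    if len(headline) <= limit:
        return headline + suffix
    # Truncate on a word boundary and mark the cut with an ellipsis.
    truncated = headline[:limit - 1].rsplit(" ", 1)[0]
    return truncated + "\u2026" + suffix

print(retool_headline("A Long Feature Headline That Works In Print But Not On A Phone Screen At All", "mobile"))
```

In practice an editor would rewrite rather than truncate, but the pipeline shape -- one canonical headline, transformed per distribution channel -- is the same.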

MS: How can users organize all this incoming information?

TR: I think a lot of users will start relying on things like the "hide" button and "unfollow." I think more and more apps are going to ship with a "mute for a bit" button, too. So manual filtering will be a big part of this. Apps and websites will start to get smarter about what they show you as well. Facebook already tries to do this with varying degrees of success.

I tend to think information overload is a lot less of a problem than most people do. Essentially, ever since shortly after the printing press was invented, more information has been created than we could consume. But we've never really had a problem with it. When people go into a library, they don't have panic attacks because of all the information. They know they can see it all, but it's not all for them. I look at my Twitter stream the same way.

MS: Are we at a point yet where realtime analytics are in place?

TR: We're a long way from having these analytics in place. There are some great services, like Chartbeat, that are getting us there. But we have work to do.

It isn't completely clear what's important in realtime analytics. For starters, we need to know a) that an article is blowing up, and b) why. That is absolutely crucial information to have. But we're going to have to start mixing realtime analytics with A/B testing and all kinds of other things to really understand.

It isn't a problem limited to realtime analytics, either. If you want to track when a user comes from a certain URL and ends up checking out with his shopping cart, we can do that easily. But we have a tough time figuring out who that person is. Beyond that, we're going to have to start tracking the amount of influence the users have. We'll also need to continue tracking the conversation about a piece of content even if the conversation happens in far off corners of the web, far from our websites and Facebook pages.
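The referrer-to-checkout tracking Roden mentions as the easy case can be sketched as a toy funnel counter. The event shape here is invented for illustration, not any real analytics API:

```python
# Toy funnel: count how many sessions that landed from a given URL
# went on to check out. Events arrive as (session_id, kind, value)
# tuples in time order -- an invented schema for illustration.
from collections import defaultdict

def funnel_counts(events):
    referrer = {}               # session -> URL the visitor came from
    checkouts = defaultdict(int)  # referring URL -> completed checkouts
    for session, kind, value in events:
        if kind == "landed":
            referrer[session] = value
        elif kind == "checkout" and session in referrer:
            checkouts[referrer[session]] += 1
    return dict(checkouts)

events = [
    ("s1", "landed", "http://twitter.com/somelink"),
    ("s2", "landed", "http://example.com/review"),
    ("s1", "checkout", "$25.00"),
]
print(funnel_counts(events))  # {'http://twitter.com/somelink': 1}
```

The hard parts Roden points to -- who the person is, how much influence they have, where the conversation goes next -- are exactly what this kind of simple counter can't capture.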

Note: This interview was condensed and edited.

April 20 2010

What will the browser look like in five years?

The web browser was just another application five years ago. A useful app, no doubt, but it played second fiddle to operating systems and productivity software.

That's no longer the case. Browsers have matured into multi-purpose tools that connect to the Internet (of course) and also grant access to a host of powerful online applications and services. Shut off your web connection for a few minutes and you'll quickly understand the browser's impact.

I got in touch with Charles McCathieNevile, Opera chief standards officer and a speaker at the upcoming Web 2.0 Expo, to discuss the current role of web browsers and their near-term future. He shares his predictions in the following Q&A.

MS: Will the web browser become the primary tool on computers?

Charles McCathieNevile: It isn't already? Email, document management, and device control are all done through the browser. Games are increasingly browser-based -- while gaming is not the only area that has taken time to move to the web, it is one of the biggest. There will always be applications that don't run in the browser, just because that is the way people are. But there is little reason for the browser not to be a primary application already.

MS: Will we even see a browser in five years? Or will it simply blend with the operating system?

CM: We will see it, but as its importance increases it will be the part people see of their interface to the computer. So it will be less noticeable. Five years ago people chose their computer for the OS, and the software available for that OS. Ten years ago much more so. Increasingly, the browser will be the thing people choose.

MS: What has been the most significant browser advancement of the last 2-3 years?

CM: There are many I could name, from the huge increase in Javascript speed (and capabilities) in all browsers, to the development of native video, or the increased interoperability of XHTML, CSS and SVG. But the most significant in the long term just might be WAI-ARIA -- a technology that makes it easier to make rich applications accessible to anyone, effectively by tagging code to say what it is meant to do -- because an important aspect of the web is its universality.

The ability to build innovative new things has always been around, and it is part of what engineers do by nature. But the ability to make sure everyone can use and benefit from them is the prerequisite for a societal shift. If you like, it isn't the "bleeding edge" that is most significant (although it is generally the most interesting and captures the most mind-share), but how the "trailing edge" shifts. That's what changes everyone's life.

MS: How important is cloud computing to the future of the browser?

CM: It shows a pathway for things that people want that require more power than the browser could provide at the time. So in that sense, it is very important. The major successes, the Web applications with millions of users, are important in the sense that their user base sets some of the requirements for browser development.

MS: Will a single company achieve cloud-based lock in?

CM: I hope not. And I don't think so. Although very few companies have the computing power to build global-scale applications that people use many times in a day, that power is not necessary for many applications. And there are plenty of things that people are not very keen to put on a cloud at all -- or at least will insist on being able to move their data from one cloud to another. So while there will be very dominant players from time to time, I don't think we will see one company take over completely.

MS: Will browser innovation come from the mobile side in years to come?

CM: Of course -- in a continuation of the contribution of mobile browsing to the overall ecosystem. While mobile is increasingly important, it will not be the only driver. Large-screen devices, which are almost of necessity static, medium-sized devices, and different interface modalities such as voice and game controllers, will also drive innovations that will contribute to the richness of the entire web platform.

MS: What impact will tablet computing have on browsing?

CM: It's another class of device. We have seen it around for years, although it is now taking off with the shiny new toy. So it will highlight the importance of developing for a range of platforms -- and I think in large part the value of developing as much as possible with the "One Web" concept -- making applications and content that are easy to adapt to the increasing diversity of devices people use.

MS: Will web applications catch up to mobile applications?

CM: Oddly enough, many people still ask the question the other way around. So I guess the real issue is whether we will see genuine convergence. And the answer is yes. There are new things being developed on the desktop browser, and they include the capabilities that we now see in mobile applications. Initiatives such as JIL and BONDI were developed to "hothouse" specifications that could form a basis for what is now W3C's Device API group, and the development of technology like Web Workers, database storage and HTML5 video is being brought to mobile as fast as we can.

Note: This interview was condensed and edited.

March 31 2010

Thoughts from the front lines of payment's big shift

There's a lot brewing in the payment space and much of that activity traces back to developers. You can get a sense of the seismic shift by looking over the winners of the recently concluded PayPal X Developer Challenge.

The big winner in that competition was Rentalic, a service that uses PayPal to let regular folks rent out their gear. And by "gear" I mean virtually anything -- sporting goods, electronics, tools, toys, etc. This video and the company's FAQ spell out its process.

Rentalic CEO Punsri Abeywickrema has a unique vantage point in the payment world. He's in that interesting spot where code is turning theory into application. I asked Abeywickrema for his take on the current and future state of payment and the role startups will play in shaping that landscape.

Mac Slocum: What have been the most important recent innovations in payment?

Punsri Abeywickrema: An important change is that companies like PayPal and Amazon are opening up APIs to provide more flexibility for the developers to build their payment solutions. This will result in a big wave of innovation and a whole new line of payment apps in the months and years to come.

Also, as a result of more flexible APIs, companies will figure out new ways to monetize their features and technology. That will result in more innovation and revenue growth while providing convenience and better security to the users.

MS: Do developers bring a unique perspective to payment?

PA: The way I see it, companies that provide goods or services online are the best ones to know the challenges their customers face. And it's not just the developers. It's the product people, architects, business development, marketing, sales and all other parts of the organization that come together to build the next generation of payment solutions. Empowering organizations to come up with their own solutions to address their own problems in their own business segments will continue to drive activity in the payment space.

MS: What's the upside to a competition like the Paypal X Developer Challenge? Is it the prospect of funding? Is it the focus competition can bring?

PA: I'd say both. Initially we were planning to release our new transaction solution at the end of March. Then we learned about the competition and really had to speed up the development effort. Definitely, the competition helped us to focus. The prospect of funding is another huge advantage.

If it weren't for competitions like the PayPal X Developer challenge or events like Maker Faire, a small company like Rentalic, which has a great idea and a business model with huge potential, would have gone unnoticed.

MS: Could you run Rentalic through traditional credit cards, checks or cash?

PA: Probably not. I don't know of a way for credit cards or checks to process chained or split payments and handle deposit refunds. I would have been able to handle the pre-approval part through traditional cards and the Automated Clearing House (ACH), but the rest of the requirements would not have been possible. And even if they were, they would have been too expensive to implement.
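To see why a rental marketplace needs split payments at all, here's a hedged sketch of the arithmetic: the renter's payment is divided between the item's owner and the marketplace, with a refundable deposit held aside. The fee rate and amounts are invented for illustration; real chained payments would go through a payment API such as PayPal's:

```python
# Toy split-payment calculation for a rental marketplace.
# The 10% marketplace fee is an invented example rate.
from decimal import Decimal

def split_rental_payment(rental_fee, deposit, marketplace_rate=Decimal("0.10")):
    fee = (rental_fee * marketplace_rate).quantize(Decimal("0.01"))
    return {
        "owner_receives": rental_fee - fee,
        "marketplace_fee": fee,
        "deposit_held": deposit,  # refunded if the item comes back intact
        "renter_pays_now": rental_fee + deposit,
    }

print(split_rental_payment(Decimal("20.00"), Decimal("50.00")))
```

A plain credit-card charge moves one amount to one merchant; it has no native notion of the fee split or the conditional deposit refund, which is Abeywickrema's point.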

Note: This interview was condensed and edited.

March 29 2010

The State of the Internet Operating System

I've been talking for years about "the internet operating system", but I realized I've never written an extended post to define what I think it is, where it is going, and the choices we face. This is that missing post. Here you will see the underlying beliefs about the future that are guiding my publishing program as well as the rationale behind conferences I organize like the Web 2.0 Summit and Web 2.0 Expo, the Where 2.0 Conference, and even the Gov 2.0 Summit and Gov 2.0 Expo.


Ask yourself for a moment, what is the operating system of a Google or Bing search? What is the operating system of a mobile phone call? What is the operating system of maps and directions on your phone? What is the operating system of a tweet?


On a standalone computer, operating systems like Windows, Mac OS X, and Linux manage the machine's resources, making it possible for applications to focus on the job they do for the user. But many of the activities that are most important to us today take place in a mysterious space between individual machines. Most people take for granted that these things just work, and complain when the daily miracle of instantaneous communications and access to information breaks down for even a moment.


But peel back the covers and remember that there is an enormous, worldwide technical infrastructure that is enabling the always-on future that we rush thoughtlessly towards.


When you type a search query into Google, the resources on your local computer - the keyboard where you type your query, the screen that displays the results, the networking hardware and software that connects your computer to the network, the browser that formats and forwards your request to Google's servers - play only a small role. What's more, they don't really matter much to the operation of the search - you can type your search terms into a browser on a Windows, Mac, or Linux machine, or into a smartphone running Symbian, or PalmOS, the Mac OS, Android, Windows Mobile, or some other phone operating system.


The resources that are critical to this operation are mostly somewhere else: in Google's massive server farms, where proprietary Google software farms out your request (one of millions of simultaneous requests) to some subset of Google's servers, where proprietary Google software processes a massive index to return your results in milliseconds.


Then there's the IP routing software on each system between you and Google's data center (you didn't think you were directly connected to Google did you?), the majority of it running on Cisco equipment; the mostly open source Domain Name System, a network of lookup servers that not only allowed your computer to connect to google.com in the first place (rather than typing an IP address like 74.125.19.106), but also steps in to help your computer access whatever system out there across the net holds the web pages you are ultimately looking for; the protocols of the web itself, which allow browsers on client computers running any local operating system (perhaps we'd better call it a bag of device drivers) to connect to servers running any other operating system.


You might argue that Google search is just an application that happens to run on a massive computing cluster, and that at bottom, Linux is still the operating system of that cluster. And that the internet and web stacks are simply a software layer implemented by both your local computer and remote applications like Google.


But wait. It gets more interesting. Now consider doing that Google search on your phone, using Google's voice search capability. You speak into your phone, and Google's speech recognition service translates the sound of your voice into text, and passes that text on to the search engine - or, on an Android phone, to any other application that chooses to listen. Someone familiar with speech recognition on the PC might think that the translation is happening on the phone, but no, once again, it's happening on Google's servers. But wait. There's more. Google improves the accuracy of its speech recognition by comparing what the speech algorithms think you said with what its search system (think "Google suggest") expects you were most likely to say. Then, because your phone knows where you are, Google filters the results to find those most relevant to your location.


Your phone knows where you are. How does it do that? "It's got a GPS receiver," is the facile answer. But if it has a GPS receiver, that means your phone is getting its position information by reaching out to a network of satellites originally put up by the US military. It may also be getting additional information from your mobile carrier that speeds up the GPS location detection. It may instead be using "cell tower triangulation" to measure your distance from the nearest cellular network towers, or even doing a lookup from a database that maps wifi hotspots to GPS coordinates. (These databases have been created by driving every street and noting the location and strength of every Wi-Fi signal.) The iPhone relies on the Skyhook Wireless service to perform these lookups; Google has its own equivalent, doubtless created at the same time as it created the imagery for Google Streetview.
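The wifi-database technique above boils down to a lookup plus an estimate. Here's a toy sketch; the BSSIDs and coordinates are invented, and real services like Skyhook or Google's database work from survey data at vastly larger scale with signal-strength weighting:

```python
# Toy wifi positioning: average the known coordinates of the access
# points a phone can currently see. Database contents are invented.
HOTSPOT_DB = {
    "00:11:22:33:44:55": (37.7899, -122.4014),
    "66:77:88:99:aa:bb": (37.7901, -122.4022),
    "cc:dd:ee:ff:00:11": (37.7895, -122.4018),
}

def estimate_position(visible_bssids):
    known = [HOTSPOT_DB[b] for b in visible_bssids if b in HOTSPOT_DB]
    if not known:
        return None  # no surveyed hotspot in sight
    lat = sum(p[0] for p in known) / len(known)
    lon = sum(p[1] for p in known) / len(known)
    return (lat, lon)

print(estimate_position(["00:11:22:33:44:55", "66:77:88:99:aa:bb"]))
```

The interesting part isn't the averaging; it's that the database itself is a network-side asset no phone could carry or maintain on its own.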


But whichever technique is being used, the application is relying on network-available facilities, not just features of your phone itself. And increasingly, it's hard to claim that all of these intertwined features are simply an application, even when they are provided by a single company, like Google.


Keep following the plot. What mobile app (other than casual games) exists solely on the phone? Virtually every application is a network application, relying on remote services to perform its function.


Where is the "operating system" in all this? Clearly, it is still evolving. Applications use a hodgepodge of services from multiple different providers to get the information they need.


But how different is this from PC application development in the early 1980s, when every application provider wrote their own device drivers to support the hodgepodge of disks, ports, keyboards, and screens that comprised the still emerging personal computer ecosystem? Along came Microsoft with an offer that was difficult to refuse: We'll manage the drivers; all application developers have to do is write software that uses the Win32 APIs, and all of the complexity will be abstracted away.


It was. Few developers write device drivers any more. That is left to device manufacturers, with all the messiness hidden by "operating system vendors" who manage the updates and often provide generic APIs for entire classes of device. Those vendors who took on the pain of managing complexity ended up with a powerful lock-in. They created the context in which applications have worked ever since.


This is the crux of my argument about the internet operating system. We are once again approaching the point at which the Faustian bargain will be made: simply use our facilities, and the complexity will go away. And much as happened during the 1980s, there is more than one company making that promise. We're entering a modern version of "the Great Game", the rivalry to control the narrow passes to the promised future of computing. (John Battelle calls them "points of control".) This rivalry is seen most acutely in mobile applications that rely on internet services as back-ends. As Nick Bilton of the New York Times described it in a recent article comparing the Google Nexus One and the iPhone:


Chad Dickerson, chief technology officer of Etsy, received a pre-launch Nexus One from Google three weeks ago. He says Google's phone feels connected to certain services on the Web in a way the iPhone doesn't. "Compared to the iPhone, the Google phone feels like it's part of the Internet to me," he said. "If you live in a Google world, you have that world in your pocket in a way that's cleaner and more connected than the iPhone."


The same thing applies to the iPhone. If you're a MobileMe, iPhoto, iTunes or Safari user, the iPhone connects effortlessly to your pictures, contacts, bookmarks and music. But if you use other services, you sometimes need to find software workarounds to get access to your content.


In comparison, with the Nexus One, if you use GMail, Google Calendar or Picasa, Google's online photo storage software, the phone connects effortlessly to these services and automatically syncs with a single log-in on the phone.


The phones work perfectly with their respective software, but both of them don't make an effort to play nice with other services.



Never mind the technical details of whether the Internet really has an operating system or not. It's clear that in mobile, we're being presented with a choice of platforms that goes far beyond the operating system on the handheld device itself.


With that preamble, let's take a look at the state of the Internet Operating System - or rather, competing Internet Operating Systems - as they exist today.

The Internet Operating System is an Information Operating System

Among many other functions, a traditional operating system coordinates access by applications to the underlying resources of the machine - things like the CPU, memory, disk storage, keyboard and screen. The operating system kernel schedules processes, allocates memory, manages interrupts from devices, handles exceptions, and generally makes it possible for multiple applications to share the same hardware.

Web 2.0 Expo San Francisco As a result, it's easy to jump to the conclusion that "cloud computing" platforms like Amazon Web Services, Google App Engine, or Microsoft Azure, which provide developers with access to storage and computation, are the heart of the emerging Internet Operating System.

Cloud infrastructure services are indeed important, but to focus on them is to make the same mistake as Lotus did when it bet on DOS remaining the operating system standard rather than the new GUI-based interfaces. After all, Graphical User Interfaces weren't part of the "real" operating system, but just another application-level construct. But even though for years, Windows was just a thin shell over DOS, Microsoft understood that moving developers to higher levels of abstraction was the key to making applications easier to use.

But what are these higher levels of abstraction? Are they just features that insulate the developer from managing scaling, or that hide the details of the 1990s-era operating system instances running inside cloud virtual machines?

The underlying services accessed by applications today are not just device components and operating system features, but data subsystems: locations, social networks, indexes of web sites, speech recognition, image recognition, automated translation. It's easy to think that it's the sensors in your device - the touch screen, the microphone, the GPS, the magnetometer, the accelerometer - that are enabling their cool new functionality. But really, these sensors are just inputs to massive data subsystems living in the cloud.

When, for example, as an iPhone developer, you use the iPhone's Core Location Framework to establish the phone's location, you aren't just querying the sensor, you're doing a cloud data lookup against the results, transforming GPS coordinates into street addresses, or perhaps transforming WiFi signal strength into GPS coordinates, and then into street addresses. When the Amazon app or Google Goggles scans a barcode, or the cover of a book, it isn't just using the camera with onboard image processing, it's passing the image to much more powerful image processing in the cloud, and then doing a database lookup on the results.

Increasingly, application developers don't do low-level image recognition, speech recognition, location lookup, social network management and friend connect. They place high level function calls to data-rich platforms that provide these services.

With that in mind, let's consider what new subsystems a "modern" Internet Operating System might contain:

Search

Because the volume of data to be managed is so large, because it is constantly changing, and because it is distributed across millions of networked systems, search proved to be the first great challenge of the Internet OS era. Cracking the search problem requires massive, ongoing crawling of the network, the construction of massive indexes, and complex algorithmic retrieval schemes to find the most appropriate results for a user query. Because of the complexity, only a few vendors have succeeded with web search, most notably Google and Microsoft. Yahoo! and Amazon too built substantial web search capabilities, but have largely left the field to the two market leaders.

However, not all search is as complex as web search. For example, an e-commerce site like Amazon doesn't need to constantly crawl other sites to discover their products; it has a more constrained retrieval problem of finding only web pages that it manages itself. Nonetheless, search is fractal, and search infrastructure is replicated again and again at many levels across the internet. This suggests that there are future opportunities in harnessing distributed, specialized search engines to do more complete crawls than can be done by any single centralized player. For example, Amazon harnesses data visible only to them, such as the rate of sales, as well as data they publish, such as the number and value of customer reviews, in ranking the most popular products.

In addition to web search, there are many specialized types of media search. For example, any time you put a music CD into an internet-connected drive, it immediately looks up the track names in CDDB using a kind of fingerprint produced by the length and sequence of each of the tracks on the CD. Other types of music search, like the one used by cell phone applications like Shazam, look up songs by matching their actual acoustic fingerprint. Meanwhile, Pandora's "music genome project" finds similar songs via a complex of hundreds of different factors as analyzed by professional musicians.
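The CDDB idea -- that the lengths and order of the tracks alone fingerprint a disc well enough for a database lookup -- can be sketched in a few lines. This deliberately simplified hash is not the real CDDB disc-ID algorithm:

```python
# Simplified CDDB-style lookup: fingerprint a disc by hashing the
# ordered track lengths, then use the fingerprint as a database key.
import hashlib

def disc_fingerprint(track_lengths_seconds):
    data = ",".join(str(t) for t in track_lengths_seconds).encode()
    return hashlib.sha1(data).hexdigest()[:8]

DISC_DB = {}  # fingerprint -> track titles

def register(track_lengths, titles):
    DISC_DB[disc_fingerprint(track_lengths)] = titles

def lookup(track_lengths):
    return DISC_DB.get(disc_fingerprint(track_lengths))

register([214, 187, 302], ["Track One", "Track Two", "Track Three"])
print(lookup([214, 187, 302]))
```

Acoustic fingerprinting, as used by Shazam, is far harder: it must match the actual audio signal rather than metadata that comes for free from the disc layout.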

Many of the search techniques developed for web pages rely on the rich implied semantics of linking, in which every link is a vote, and votes from authoritative sources are ranked more highly than others. This is a kind of implicit user-contributed metadata that is not present when searching other types of content, such as digitized books. There, search remains in the same brute-force dark ages as web search before Google. We can expect significant breakthroughs in search techniques for books, video, images, and sound to be a feature of the future evolution of the Internet OS.

The techniques of algorithmic search are an essential part of the developer's toolkit today. The O'Reilly book Programming Collective Intelligence reviews many of the algorithms and techniques. But there's no question that this kind of low-level programming is ripe for a higher-level solution, in which developers just place a call to a search service, and return the results. Thus, search moves from application to system call.
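The kind of low-level plumbing Programming Collective Intelligence covers starts with an inverted index. Here's a toy version of the "application" side that a search system call would hide:

```python
# A toy inverted index: map each term to the set of documents that
# contain it, then answer queries by intersecting posting sets.
from collections import defaultdict

docs = {
    1: "the internet operating system",
    2: "operating systems manage machine resources",
    3: "search is the first great challenge",
}

index = defaultdict(set)  # term -> set of document ids
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def search(query):
    """Return ids of documents containing every query term."""
    result = None
    for term in query.split():
        postings = index.get(term, set())
        result = postings if result is None else result & postings
    return sorted(result or [])

print(search("operating"))  # [1, 2]
```

Everything that makes real web search hard -- crawling, ranking, scale, freshness -- sits behind that `search()` call, which is exactly why it tends to migrate from application code to a platform service.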

Media Access

Just as a PC-era operating system has the capability to manage user-level constructs like files and directories as well as lower-level constructs like physical disk volumes and blocks, an Internet-era operating system must provide access to various types of media, such as web pages, music, videos, photos, e-books, office documents, presentations, downloadable applications, and more. Each of these media types requires some common technology infrastructure beyond specialized search:
  • Access Control. Since not all information is freely available, managing access control - providing snippets rather than full sources, providing streaming but not downloads, recognizing authorized users and giving them a different result from unauthorized users - is a crucial feature of the Internet OS. (Like it or not.)

    The recent moves by News Corp to place their newspapers behind a paywall, as well as the paid application and content marketplaces of the iPhone and iPad, suggest that the ability to manage access to content is going to be more important, rather than less, in the years ahead. We're largely past the knee-jerk "keep it off the net" reactions of old-school DRM; companies are going to be exploring more nuanced ways to control access to content, and the platform provider that has the most robust systems (and consumer expectations) for paid content is going to be in a very strong position.

    In the world of the App Store, paid applications and paid content are re-legitimizing access control (and payment). Don't assume that advertising will continue to be the only significant way to monetize internet content in the years ahead.

  • Caching. Large media files benefit from being closer to their destination. A whole class of companies exists to provide Content Delivery Networks; these may survive as independent companies, or these services may ultimately be rolled up into the leading Internet OS companies in much the way that Microsoft acquired or "embraced and extended" various technologies on the way to making Windows the dominant OS of the PC era.

  • Instrumentation and analytics. Because of the amount of money at stake, an entire industry has grown up around web analytics and search engine optimization. We can expect a similar wave of companies instrumenting social media and mobile applications, as well as particular media types. After all, a video, a game, or an ebook can know how long you watch, when you abandon the product, and where you go next.

    Expect these features to be pushed first by independent companies, like TweetStats or Peoplebrowsr Analytics for Twitter, or Flurry for mobile apps. GoodData, a cloud-based business intelligence platform, is being used for analytics on everything from Salesforce applications to online games. (Disclosure: I am an investor and on the board of GoodData.) But eventually, via acquisition or imitation, they will become part of the major platforms.
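To make the access-control point concrete, here is a minimal sketch of the snippet-versus-full-source behavior described above. The field names and entitlement structure are illustrative assumptions, not any real platform's API:

```python
# Sketch: entitled users get the full document; everyone else gets a teaser.

SNIPPET_LENGTH = 200  # arbitrary illustrative cutoff

def render_document(document, user, entitlements):
    """document: {"id", "body"}; entitlements: {user: set of document ids}.
    Returns the full body for entitled users, a snippet otherwise."""
    if document["id"] in entitlements.get(user, set()):
        return {"access": "full", "body": document["body"]}
    return {"access": "snippet",
            "body": document["body"][:SNIPPET_LENGTH] + "..."}
```

The same shape generalizes to streaming-but-not-downloading or region-restricted content: the platform, not the application, decides which representation of the media a given user is allowed to see.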

Communications

The internet is a communications network, and it's easy to forget that communications technologies like email and chat have long been central to the Internet's appeal. Now, with the widespread availability of VoIP, and with the mobile phone joining the "network of networks," voice and video communications are an increasingly important part of the communications subsystem.

Communications providers from the Internet world are now on a collision course with communications providers from the telephony world. For now, there are uneasy alliances right and left. But it isn't going to be pretty once the battle for control comes out into the open.

I expect the communications directory service to be one of the key battlefronts. Who will manage the lookup service that allows individuals and businesses to find and connect to each other? The phone and email address books will eventually merge with the data from social networks to provide a rich set of identity infrastructure services.

Identity and the Social Graph

When you use Facebook Connect to log into another application, and suddenly your friends' faces are listed in the new application, that application is using Facebook as a "subsystem" of the new Internet OS. On Android phones, simply add the Facebook application, and your phone address book shows the photos of your Facebook friends. Facebook is expanding the range of data revealed by Facebook Connect; they clearly understand the potential of Facebook as a platform for more than hosted applications.

But as hinted at above, there are other rich sources of social data - and I'm not just talking about applications like Twitter that include explicit social graphs. Every communications provider owns a treasure trove of social data. Microsoft has piles of social data locked up in Exchange, Outlook, Hotmail, Active Directory, and Sharepoint. Google has social data not just from Orkut (an also-ran in the US) but from Gmail and Google Docs, whose "sharing" is another name for "meaningful source of workgroup-level social graph data." And of course, now, there's the social graph data produced by the address book on every Android phone...

The breakthroughs that we need to look forward to may not come from explicitly social applications. In fact, I see "me too" social networking applications from those who have other sources of identity data as a sign that they don't really understand the platform opportunity. Building a social network to rival Facebook or Twitter is far less important to the future of the Internet platform than creating facilities that will allow third-party developers to leverage the social data that companies like Google, Microsoft, Yahoo!, AOL - and phone companies like ATT, Verizon and T-Mobile - have produced through years or even decades of managing users' social data for communications.
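What would "leveraging communications data" look like in practice? Here is a toy sketch that derives an implicit social graph from message traffic, weighting each tie by how often two people correspond. The structure is my own illustration; a real provider would also weigh recency, reciprocity, cc lists, and much more:

```python
# Sketch: turn raw message logs into a weighted social graph.

from collections import Counter

def implicit_graph(messages):
    """messages: iterable of (sender, recipient) pairs.
    Returns edge weights keyed by the unordered pair of people."""
    edges = Counter()
    for sender, recipient in messages:
        edges[frozenset((sender, recipient))] += 1
    return edges

def strongest_ties(messages, person, top_n=3):
    """The people this person corresponds with most often."""
    edges = implicit_graph(messages)
    ties = [(next(iter(pair - {person})), count)
            for pair, count in edges.items() if person in pair]
    return sorted(ties, key=lambda t: -t[1])[:top_n]
```

This is the sense in which Gmail, Exchange, or a phone company's call records are "a meaningful source of workgroup-level social graph data" without anyone ever clicking a friend button.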

Of course, use of this data will require breakthroughs in privacy mechanisms and policy. As Nat Torkington wrote in email after reviewing an earlier draft of this post:

We still face the problem of "friend": my Docs social graph is different from my email social graph is different from my Facebook social graph is different from my address book. I want to be able to complain about work to my friends without my coworkers seeing it, and the usability-vs-privacy problem remains unsolved.

Whoever cracks this code, providing frameworks that make it possible for applications to be functionally social without being socially promiscuous, will win. Platform providers are in a good position to solve this problem once, so that users don't have to give credentials to a larger and larger pool of application providers, with little assurance that the data they provide won't be misused.
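One minimal shape for "functionally social without being socially promiscuous" is to scope every post to one of the author's distinct graphs, so complaints to friends never reach coworkers. The graph names and data layout below are assumptions for illustration only:

```python
# Sketch: per-audience visibility, with distinct graphs per author.

def visible_posts(posts, graphs, viewer):
    """posts: list of {"author", "audience", "text"} dicts.
    graphs: {author: {graph_name: set of members}}.
    A viewer sees a post only if they belong to its target graph."""
    return [p["text"] for p in posts
            if viewer in graphs.get(p["author"], {}).get(p["audience"], set())]
```

A platform that solved this once, below the application layer, would spare users from handing credentials and friend lists to every new application provider.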

Payment

Payment is another key subsystem of the Internet Operating System. Companies like Apple that have 150 million credit cards on file and a huge population of users accustomed to using their phones to buy songs, videos, applications, and now ebooks, are going to be in a prime position to turn today's phone into tomorrow's wallet. (And as anyone who reaches into a wallet not for payment but for ID knows, payment systems are also powerful, authenticated identity stores - a fact that won't be lost on payment providers looking for their lock on a piece of the Internet future.)

PayPal obviously plays an important role as an internet payment subsystem that's already in wide use by developers. It operates in 190 countries, in 19 different currencies (not counting in-game micro-currencies) and it has over 185 million accounts. What's fascinating is the rich developer ecosystem they've built around payment - their recent developer conference had over 2000 attendees. Their challenge is to make the transition from the web to mobile.

Google Checkout has been a distant also-ran in web payments, but the Android Market has given it new prominence in mobile, and will eventually make it a first class internet payment subsystem.

Amazon too has a credible payment offering, though until recently they haven't deployed it to full effect, reserving the best features for their own e-commerce site and not making them available to developers. (More on that in next week's post, in which I will handicap the leading platform offerings from major internet vendors.)

Advertising

Advertising has been the most successful business model on the web. While there are signs that e-commerce - buying everything from virtual goods to a lunchtime burrito - may be the bigger opportunity in mobile (and perhaps even in social media), there's no question that advertising will play a significant role.

Google's dominance of search advertising has involved better algorithmic placement, as well as the ability to predict, in real time, how often an ad will be clicked on, allowing them to optimize the advertising yield. The Google Ad Auction system is the heart of their economic value proposition, and demonstrates just how much difference a technical edge can make.
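The core economics here can be sketched in a few lines: rank ads not by bid alone but by expected yield per impression, bid times predicted click-through rate. This is a toy illustration of the principle; the actual ad auction adds quality scores, second pricing, and much more:

```python
# Sketch: order ads by expected revenue per impression (bid x pCTR).

def rank_ads(ads):
    """ads: list of {"name", "bid", "predicted_ctr"} dicts.
    Returns ad names ordered by expected yield, best first."""
    return [ad["name"] for ad in
            sorted(ads,
                   key=lambda ad: ad["bid"] * ad["predicted_ctr"],
                   reverse=True)]
```

Note how a cheap ad with a well-predicted high click rate beats an expensive ad nobody clicks; this is why the real-time CTR prediction, not the auction mechanics, is where the technical edge lies.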

And advertising has always been a platform play. Signs that it will be a key battleground of the Internet OS can be seen in the competing acquisitions of AdMob by Google and Quattro Wireless by Apple.

The question is the extent to which platform companies will use their advertising capabilities as a system service. Will they treat these assets as the source of competitive advantage for their own products, or will they find ways to deploy advertising as a business model for developers on their platform?

Location

Location is the sine qua non of mobile apps. When your phone knows where you are, it can find your friends, find services nearby, and even help authenticate a transaction.

Maps and directions on the phone are intrinsically cloud services - unlike with dedicated GPS devices, there's not enough local storage to keep all the relevant maps on hand. But when turned into a cloud application, maps and directions can include other data, such as real-time traffic (indeed, traffic data collected from the very applications that are requesting traffic updates - a classic example of "collective intelligence" at work.)

Location is also the search key for countless database lookup services, from Google's "search along route" to a Yelp search for nearby cafes to the Chipotle app routing your lunch request to the restaurant near you.
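The primitive underneath all of these lookups is a distance computation plus a filter. Here is a sketch using the standard haversine formula; real location services use spatial indexes rather than a linear scan, and the place names below are made up:

```python
# Sketch: "find services nearby" as great-circle distance plus a filter.

from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def nearby(places, lat, lon, radius_km):
    """places: list of (name, lat, lon). Returns names within the radius."""
    return [name for name, plat, plon in places
            if haversine_km(lat, lon, plat, plon) <= radius_km]
```

Exposing this as a system service, rather than making every app reimplement it, is precisely what it means for location to become part of the Internet OS.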

In many ways, Location is the Internet data subsystem that is furthest along in its development as a system service accessible to all applications, with developers showing enormous creativity in using it in areas from augmented reality to advertising. (Understanding that this would be the case, I launched the Where 2.0 Conference in 2005. There are lessons to be learned in the location market for all Internet entrepreneurs, not just "geo" geeks, as techniques developed here will soon be applied in many other areas.)

Activity Streams

Location is also becoming a proxy for something else: attention. Check-in services like Foursquare turn "where are you?" into "what are you paying attention to?" - even "My location is a box of cereal." (Disclosure: O'Reilly AlphaTech Ventures is an investor in Foursquare.)

We thus see convergence between Location and social media concepts like Activity Streams. Platform providers that understand and exploit this intersection will be in a stronger position than those who see location only in traditional terms.

Time

Time is an important dimension of data driven services - at least as important as location, though as yet less fully exploited. Calendars are one obvious application, but activity streams are also organized as timelines; stock charts link up news stories with spikes or drops in price. Time stamps can also be used as a filter for other data types (as Google measures frequency of update in calculating search results, or as an RSS feed or social activity stream organizes posts by recency.)
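Organizing by recency usually means more than a simple sort: a feed can decay an item's relevance as it ages. Here is a sketch of an exponential decay with a chosen half-life; the 24-hour default is an arbitrary illustrative assumption:

```python
# Sketch: recency weighting for a feed - a score halves every half-life.

from math import exp, log

def recency_score(relevance, age_hours, half_life_hours=24.0):
    """Decay a base relevance score so it halves every half_life_hours."""
    return relevance * exp(-log(2) * age_hours / half_life_hours)
```

A fresh item keeps its full score, a day-old item keeps half, and the half-life parameter is exactly the kind of knob a time-aware platform service would expose to applications.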

"Real time" - as in the real-time search provided by Twitter, the "where am I now" pointer on a map, the automated replenishment of inventory at WalMart, or instant political polling - emphasizes just how much the future will belong to those who measure response time in milliseconds, or even microseconds, rather than seconds, hours, or days. This need for speed is going to be a major driver of platform services; individual applications will have difficulty keeping up.

Image and Speech Recognition

As I've written previously, one of the big changes since I first wrote about Web 2.0 is the rise of applications driven by sensors - above all the smartphone's camera and microphone - rather than by keyboards (a theme I explored in Web Squared).

With the advent of smartphone apps like Google Goggles and the Amazon e-commerce app, which deploy advanced image recognition to scan bar codes, book covers, album covers and more - not to mention gaming platforms like Microsoft's still unreleased Project Natal and innovative startups like Affective Interfaces, it's clear that computer vision is going to be an important part of the UI toolkit for future developers. While there are good computer vision packages like OpenCV that can be deployed locally for robotics applications, as well as research projects like those competing in the DARPA Grand Challenge for automated vehicles, for smartphone applications, image recognition, like speech recognition, happens in the cloud. Not only is there a wealth of compute cycles, there are also vast databases of images for matching purposes. Picasa and Flickr are no longer just consumer image sharing sites: they are vast repositories of tagged image data that can be used to train algorithms and filter results.

Government Data

Long before the recent wave of open government initiatives, applications like FixMyStreet and SeeClickFix let citizens submit 311 reports to local governments - potholes that need filling, graffiti that needs repainting, streetlights that are out. These applications have typically overloaded existing communications channels like email and SMS, but there are now attempts to standardize an Open311 web services protocol.
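At bottom, an Open311-style report is just a small structured payload sent to a government endpoint. Here is a sketch of composing one; the field names follow the general shape of the GeoReport proposals, but should be checked against the actual protocol before use:

```python
# Sketch: compose an Open311-style service request payload as JSON.
# Field names are illustrative and should be verified against the spec.

import json

def service_request(service_code, lat, lon, description):
    """Build a minimal citizen-report payload for a 311-style service."""
    payload = {
        "service_code": service_code,
        "lat": lat,
        "long": lon,
        "description": description,
    }
    return json.dumps(payload, sort_keys=True)

report = service_request("POTHOLE", 38.90, -77.03,
                         "Pothole blocking the bike lane")
```

The value of standardization is that one app could file this same report with any participating city, instead of reverse-engineering each city's email or SMS channel.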

Now, a new flood of government data is being released, and the government is starting to see itself as a platform provider, providing facilities for private sector third parties to build applications. This idea of Government as a Platform is a key focus of my advocacy about Government 2.0.

There is a huge opportunity to take the lessons of Web 2.0 and apply them to government data. Take health care as an example. How might we improve our healthcare system if Medicare provided a feedback loop about costs and outcomes analogous to the one that Google built for search keyword advertising?

Anyone building internet data applications would be foolish to underestimate the role that government is going to play in this unfolding story, both as provider and consumer of data web services, and also as regulator in key areas like privacy, access, and interstate commerce.

What About the Browser?

While I think that claims that the browser itself is the new operating system are as misguided as the idea that it can be found solely in cloud infrastructure services, it is important to recognize that control over front end interfaces is at least as important as back-end services. Companies like Apple and Google that have substantial cloud services and a credible mobile platform play are in the catbird seat in the platform wars of the next decade. But the browser, and with it control of the PC user experience, is also critical.

This is why Apple's iPad, Google's ChromeOS, and HTML 5 (plus initiatives like Google's Native Client) are so important. Microsoft isn't far wrong in its cloud computing vision of "Software Plus Services." The full operating system stack includes back end infrastructure, the data subsystems highlighted in this article, and rich front-ends.

Apple and Microsoft largely have visions of vertically integrated systems; Google's vision seems to be for open source driving front end interfaces, while back end services are owned by Google. But in each case, there's a major drive to own a front-end experience that favors each company's back-end systems.

What's Still Missing

Even the most advanced Internet Operating System platforms are still missing many concepts that are familiar to those who work with traditional single-computer operating systems. Where is the executive? Where is the memory management?

I believe that these functions are evolving at each of the cloud platforms. Tools like memcache or mapreduce are the rough cloud equivalents of virtual memory or multiprocessing features in a traditional operating system. But they are only the beginning. Werner Vogels' post Eventually Consistent highlights some of the hard technical issues that will need to be solved for an internet-scale operating system. There are many more.
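The mapreduce pattern mentioned above reduces to two steps: a map phase emitting key/value pairs and a reduce phase folding each key's values. Here is a single-process toy version - the whole point of the real thing is distributing these phases across thousands of machines, which this sketch deliberately omits:

```python
# Sketch: the essence of mapreduce, single-process, for illustration.

from collections import defaultdict

def map_reduce(inputs, mapper, reducer):
    groups = defaultdict(list)
    for item in inputs:
        for key, value in mapper(item):    # map phase: emit (key, value)
            groups[key].append(value)
    return {key: reducer(key, values)      # reduce phase: fold per key
            for key, values in groups.items()}

# Classic word count: each line maps to (word, 1) pairs; reduce sums them.
counts = map_reduce(
    ["the quick fox", "the lazy dog"],
    mapper=lambda line: [(word, 1) for word in line.split()],
    reducer=lambda word, ones: sum(ones),
)
```

Seen this way, mapreduce really is the cloud's analogue of a multiprocessing primitive: the programmer describes per-item and per-key work, and the platform owns scheduling, distribution, and fault tolerance.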

But it's also clear that there are many opportunities to build higher level functionality that will be required for a true Internet Operating System.

Might an operating system of the future manage when and how data is collected about individuals, what applications can access it, and how they might use it? Might it not automatically synchronize data between devices and applications? Might it do automatic translation, and automatic format conversion between different media types? Might such an operating system do predictive analytics to collect or locally cache data that it expects an individual user or device to need? Might such an operating system do "garbage collection" not of memory pointers but of outdated data or spam? Might it not perform credit checks before issuing payments and suspend activity for those who violate terms of service?

There is a great opportunity for developers with vision to build forward-looking platforms that aim squarely at our connected future, that provide applications running on any device with access to rich new sources of intelligence and capability. The possibilities are endless. There will be many failed experiments, many successes that will be widely copied, a lot of mergers and acquisitions, and fierce competition between companies with different strengths and weaknesses.

Next week, I'll handicap the leading players and tell you what I think of their respective strategies.

February 26 2010

A Prism for Jolicloud: Web-Centric Desktop Apps

I recently bought a netbook and installed Jolicloud, a Linux/Ubuntu distro designed as a replacement for, or companion to, Windows. Jolicloud was a revelation, something fresh and new in the seemingly snail-paced world of desktop computing. The bold idea of Jolicloud is that the browser is the operating system. It's all you need, and you don't even need to think about it. The browser is a core service that supports all applications but it can recede into the background and let applications take the foreground.


The Samsung N210 netbook had Windows 7 Starter installed. I'll admit my discomfort with Windows. It's actually not so much the operating system itself as it is the ecosystem that surrounds it. The desktop is cluttered with icons from the manufacturer and other add-ons, which seem to activate on their own. I don't want them, yet I can't disable them easily. I realized that Windows has become like a carnival, with barkers trying to get my attention (and money) at every turn. I tried Windows 7 long enough to realize that it was really no different on a netbook than it was on a desktop. Windows. Same old, same old.

In fact, I bought the netbook to see if it was suitable as a computer for my mother. I want to help her be connected online to her family but she was frequently confused by the Windows desktop and its many applications and pop-up windows. AOL was just as confusing, layered on top of Windows. I tried moving her to Gmail and removing what I could from the desktop but it still became cluttered and she became so confused that she cancelled her Internet service. (I don't live in the same city as my mom so my ability to provide ongoing tech support is limited.)

A friend, Alberto Gaitán from DC, recommended trying Jolicloud on a netbook. Jolicloud was developed by Tariq Krim, who also created Netvibes. He had a vision of devices running an Internet Operating System, influenced by ideas from Tim O'Reilly. Here's an excerpt from the Jolicloud manifesto:

Jolicloud ... combines the two driving forces of the modern computing industry: the open source and the open web.

Jolicloud transforms your netbook into a sophisticated web device that taps into the cloud to expand your computing possibilities. The web already hosts a significant part of our lives: mails, photos, videos, and friends are already somewhere online. Jolicloud was built to make the computer and web part of the same experience.

Jolicloud offered something new on a non-Mac device -- peace of mind. I found an operating system that was adapted to the netbook, just as Apple has modified its core system for different devices. As much as it is an advantage for Apple, it is a disadvantage for every other computer manufacturer to ship their devices with a largely unmodified version of Windows. One gripe I have is that Windows doesn't use the more limited display space efficiently.


The Jolicloud user interface is simple and well-organized. Jolicloud is derived from Ubuntu and indeed some of the features I'm praising may also be present there. Quite frankly, it's been years since I've explored a Linux desktop, believing them to be hopelessly clunky and awkward, a generic imitation of existing windowing systems.

But the big leap forward in my view for Jolicloud is how it adapts web sites to function more like desktop applications, an interface paradigm mashup of the iPhone and desktop. In Jolicloud, I launch Gmail as an application, and dozens of other services I use such as Twitter and Facebook can be organized as desktop interfaces. Like the iPhone, Jolicloud provides an Apps directory where you can choose applications to install on your netbook. In addition, Jolicloud provides cloud-based services for data storage. Jolicloud allows me to use a netbook as an alternate computer without really having to organize my data and services specifically for that computer. (I even find myself moving away from Mac-based software to web-centric services that I can use from any device.)


Prism

I learned that some of the magic behind Jolicloud's web-centric model was made possible by the Prism project from Mozilla Labs. Mark Finkle, a Mozilla developer, created Prism but he's now working on a Firefox mobile browser. Prism development seemed to stall for a while until recently. Prism, which will work on any operating system, allows you to turn a website into a standalone application, even creating an icon for it so that you can place it on your toolbar. If you find yourself fumbling through tabs to get back to your mail or calendar, Prism can help you move your key applications into separate windows so they can stand on their own as desktop applications.

On a netbook, Prism gives you a full-screen view of your application, and drops most of the browser functions. It's as if the browser disappears into the operating system as a core service, one that's shared by dozens of applications.


Interview with Matthew Gertner

I caught up by email with Matthew Gertner, who has done work on Prism, particularly adapting it for Zimbra Desktop. He has been posting information about updates to Prism on his Just Thinking blog, an additional source of information on Prism developments.

Q. What I like about Prism is that it makes one think of the browser as a service provided by the operating system; it allows a website to be viewed and organized as an application. It is a metaphor that is now much more prevalent given the iPhone and its apps. But Prism anticipated that direction.

MG. I am a great believer in web applications. Particular strengths are the use of well-established, simple and standards-based languages for application development, incredible multiplatform support and lack of explicit install/uninstall. At the same time, there are clearly weaknesses as well, tied in particular to the fact that the browser was never designed to run applications.

Prism is one attempt to get the best of both worlds. Environments like iPhone OS and Adobe AIR take a slightly different tack. Rather than using the same web languages for software development as the traditional browser, they have their own languages (CocoaTouch, Flex, etc.) and development tools. So I wouldn't make a direct parallel between Prism and something like the iPhone. The latter runs apps that are as much like traditional software as like websites. It's true that some of the goals are the same, and they often use web protocols (HTTP, XML, etc.).

The big advantage of Prism, at least in the near term, is that no development is required to make an existing web app look more like a traditional application. You just run it in a Prism window instead of a normal browser window. You can then customize the app with more desktop-oriented features (tray icon, popup notifications, drag-and-drop, etc.). With iPhone or Flex you basically have to reimplement the entire client.

Q. I am also interested in understanding the status of Prism. It has been available for a while, and it looked like the original project lost some steam and then regained some life. Is that so, and if so, what or who got it going again?

MG. Prism was invented by Mark Finkle (now working on Mozilla's mobile browser) as a Mozilla Labs project. These projects are basically experiments that let Mozilla try out new ideas without committing to making a new product. They can then observe how users and the development community react. In the case of Prism, it was quickly picked up by Zimbra (then part of Yahoo) for Zimbra Desktop, and they ended up hiring me to improve Prism based on their requirements. Both Yahoo and Zimbra were awesome about donating my work back to Mozilla so that other Prism users can benefit.

I also have a few other clients using Prism, and together their contributions have helped move the product forward tremendously, even if it's been a relatively drawn-out process as you point out. I can't comment on Mozilla's future plans for Prism.

Q. Jolicloud makes use of Prism, but I had not heard of Prism previously; it can also be used on a desktop. Are you seeing it used in other contexts outside of Jolicloud?

MG. See above. The biggest user is probably Zimbra, but there are others I work with and doubtless many I don't even know about. Basically if you want a multiplatform single-site browser, Prism is still the only game in town.

More on Prism

I've downloaded Prism to my Mac and used it to replace my Mail App icon with a Gmail icon on the toolbar. You can "applify" any website.



I can see other uses for Prism, even for creating web-based content that looks less like a website and more like an interactive experience, along the lines of the CD-ROM game "Myst." There's an opportunity to break away from the constraints of the current web design paradigm, and perhaps learn from the lessons of iPhone apps. Interactions can be embedded in the application without any dependence on the browser functions outside the view window.

I'm not so sure my mom can handle the netbook, but I'm going to try it. Jolicloud does allow me to customize the interface to those applications (websites) that she needs to use, which is essentially email and maybe Facebook. That's what's remarkable about Jolicloud and the ideas that inspired it -- you can customize a device and simplify its interface by integrating it more deeply with the open web.
