October 28 2011

Top Stories: October 24-28, 2011

Here's a look at the top stories published across O'Reilly sites this week.

Dennis Ritchie Day: October 30, 2011
Tim O'Reilly: "I don't have the convening power of a governor, but for those of us around the world who care, I hereby declare this Sunday, October 30 to be Dennis Ritchie Day."

You say you want a revolution? It's called post-PC computing
Spurred on by a Googler's rant against his own company and Apple's release of a new phone, a new OS and a new cloud infrastructure, Mark Sigal wonders what the "post-PC" revolution really looks like.

We're in the midst of a restructuring of the publishing universe (don't panic)
Hugh McGuire, co-author of "Book: A Futurist's Manifesto," explains why publishing's digital transformation goes way beyond format shifts. He also reveals nine ways the publishing industry will change over the next five years.

"Revolution in the Valley," revisited
With "Revolution in the Valley" making its paperback debut and the work of Steve Jobs fresh in people's minds, we checked in with Andy Hertzfeld to discuss the legacy of the first Macintosh.

What to watch for in mobile web apps
Sencha's James Pearce discusses the most promising mobile web app technologies and explains why device APIs could make the web a lot more interesting.


Velocity Europe, being held November 8-9 in Berlin, brings together performance and site reliability experts who share the unique experiences that can only be gained by operating at scale. Save 20% on registration with the code RADAR20.

October 24 2011

You say you want a revolution? It's called post-PC computing


"You say you want a revolution,

Well, you know,

We all want to change the world."
— The Beatles

I loved Google engineer Steve Yegge's rant about: A) how Google doesn't grok building and executing platforms; and B) how his ex-employer, Amazon, does.

First off, it bucks conventional wisdom. How could Google, the high priest of the cloud and the parent of Android, analytics and AdWords/AdSense, not be a standard-setter for platform creation?

Second, as Amazon's strategy seems to be to embrace "open" Android and use it to make a platform that's proprietary to Amazon, that's a heck of a story to watch unfold in the months ahead. Even more so, knowing that Amazon has serious platform mojo.

But mostly, I loved the piece because it underscores the granular truth about just how hard it is to execute a coherent platform strategy in the real world.

Put another way, Yegge's rant, and what it suggests about Google's and Amazon's platform readiness, provides the best insider's point of reference for appreciating how Apple has played chess to everyone's checkers in the post-PC platform wars.

Case in point, what company other than Apple could have executed something even remotely as rich and well-integrated as the simultaneous release of iOS 5, iCloud and iPhone 4S, the latter of which sold four million units in its first weekend of availability?

Let me answer that for you: No one.

Post-PC: Putting humans into the center of the computing equation

Each computing wave dwarfs and disrupts its predecessor

There is a truism that each wave of computing not only disrupts, but dwarfs its predecessor.

The mainframe was dwarfed by the PC, which in turn has been subordinated by the web. But now, a new kind of device is taking over. It's mobile, lightweight, simple to use, connected, has a long battery life and is a digital machine for running native apps, web browsing, playing all kinds of media, enabling game playing, taking photos and communicating.

Given its multiplicity of capabilities, it's not hard to imagine a future where post-PC devices dot every nook and cranny of the planet (an estimated 10 billion devices by 2020, according to Morgan Stanley).

But, an analysis of evolving computing models suggests a second, less obvious moral of the story. Namely, when you solve the right core problems central to enabling the emergent wave (as opposed to just bolting on more stuff), all sorts of lifecycle advantages come your way.

In the PC era, for example, the core problems were centered on creating homogeneity to get to scale and to give developers a singular platform to program around, something that the Wintel hardware-software duopoly addressed with bull's-eye accuracy. As a result, Microsoft and Intel captured the lion's share of the industry's profits.

By contrast, the wonderful thing about the way that the web emerged is that HTML initially made it so simple to "write once, run anywhere" that any new idea — brilliant or otherwise — could rapidly go from napkin to launch to global presence. The revolution was completely decentralized, and suddenly, web-based applications were absorbing more and more of the PC's reason for being.

Making all of this new content discoverable via search and monetizable (usually via advertising) thus became the core problem where the lion's share of profits flowed, and Google became the icon of the web.

The downside of this is that because the premise of the web is about abstracting out hardware and OS specificity, browsers are prone to crashing, slowdowns and sub-optimal performance. Very little about the web screams out "great design" or "magical user experience."

Enter Apple. It brought back a fundamental appreciation of the goodness of "native" experiences built around deeply integrated hardware, software and service platforms.

Equally important, Apple's emphasis on outcomes over attributes led it to marry design, technology and liberal arts in ways that brought humans into the center of the computing equation, such that for many, an iPhone, iPod Touch or iPad is the most "personal" computer they have ever owned.

The success of Apple in this regard is best appreciated by how it took a touch-based interfacing model and made it seamless and invisible across different device types and interaction methods. Touch facilitated the emotional bond that users have with their iPhones, iPads and the like. Touch is one of the human senses, after all.

Thus, it's little surprise that the lion's share of profits in the post-PC computing space are flowing to the company that is delivering the best, most human-centric user experience: Apple.

Now, Apple is opening a second formal interface into iOS through Siri, a voice-based helper system that is enmeshed in the land of artificial intelligence and automated agents. This was noted by Daring Fireball's John Gruber in an excellent analysis of the iPhone 4S:

... Siri is indicative of an AI-focused ambition that Apple hasn't shown since before Steve Jobs returned to the company. Prior to Siri, iOS struck me as being designed to make it easy for us to do things. Siri is designed to do things for us.

Once again, Apple is looking to one of the human senses — this time, sound — to provide a window for users into computing. While many look at Siri as a concept that's bound to fail, if Apple gets Siri right, it could become even more transformational than touch — particularly as Siri's dictionary, grammar and contextual understanding grow.

Taken together, a new picture of the evolution of computing starts to emerge. An industry that was once defined by the singular goal of achieving power (the mainframe era), morphed over time into the noble ambition of achieving ubiquity via the "PC on every desktop" era. It then evolved into the ideal of universality, vis-à-vis the universal access model of the web, which in turn was aided by lots of free, ad-supported sites and services. Now, human-centricity is emerging as the raison d'être for computing, and it seems clear that the inmates will never run the asylum again. That may quite possibly be the greatest legacy of Steve Jobs.

Do technology revolutions drive economic revolutions?

Sitting in these difficult economic times, it is perhaps fair to ask if the rise of post-PC computing is destined to be a catalyst for economic revival. After all, we've seen the Internet disrupt industry after industry with a brutal efficiency that has arguably wiped out more jobs than it has created.

Before answering that, though, let me note that while the seminal revolutions always appear in retrospect to occur in one magical moment, in truth, they play out as a series of compounding innovations, punctuated by a handful of catalytic, game-changing events.

For example, it may seem that the Industrial Revolution occurred spontaneously, but the truth is that for the revolution to realize its destiny, multiple concurrent innovations had to occur in manufacturing, energy utilization, information exchange and machine tools. And all of this was aided by significant public infrastructure development. It took continuous, measurable improvements in the products, markets, suppliers and sales channels participating in the embryonic wave before things sufficiently coalesced to transform society, launch new industries, create jobs, and rain serious material wealth on the economy.

It's often a painful, messy process going from infancy to maturation, and it may take still more time for this latest wave to play out in our society. But, I fully believe that we are approaching what VC John Doerr refers to as the "third wave" in technology:

We are at the beginning of a third wave in technology (the prior two were the commercialization of the microprocessor, followed 15 years later by the advent of the web), which is this convergence of mobile and social technologies made possible by the cloud. We will see the creation of multiple multi-billion-dollar businesses, and equally important, tens maybe hundreds of thousands of smaller companies.

For many folks, the revolution can't come soon enough. But it is coming.

Quantifying the post-PC "standard bearers"

A couple years back, I wrote an article called "Built-to-Thrive — The Standard Bearers," where I argued that Apple was the gold standard company (i.e., the measuring stick by which all others are judged), Google was the silver and Amazon was the bronze.

The only re-thinking I have with respect to that medal stand is that Amazon and Google have now flipped places.

Most fundamentally, this exemplifies:

  1. How well Apple has succeeded in actually solving the core problems of its constituency base through an integrated, human-centered platform.
  2. How Amazon has gained religion about the importance of platform practice.
  3. How, as Yegge noted, Google doesn't always "eat its own dog food."

If you doubt this, check out the adjacent charts, which spotlight the relative stock performance of Apple, Amazon and Google after each company's strategic foray into post-PC computing: namely, iPod, Kindle and Android, respectively.

This is one of those cases where the numbers may surprise, but they don't lie.

Amazon, Google, Apple stock charts in the post-PC era


September 15 2011

The evolution of data products

In "What is Data Science?," I started to talk about the nature of data products. Since then, we've seen a lot of exciting new products, most of which involve data analysis to an extent that we couldn't have imagined a few years ago. But that begs some important questions: What happens when data becomes a product, specifically, a consumer product? Where are data products headed? As computer engineers and data scientists, we tend to revel in the cool new ways we can work with data. But to the consumer, as long as the products are about the data, our job isn't finished. Proud as we may be about what we've accomplished, the products aren't about the data; they're about enabling their users to do whatever they want, which most often has little to do with data.

It's an old problem: the geeky engineer wants something cool with lots of knobs, dials, and fancy displays. The consumer wants an iPod, with one tiny screen, one jack for headphones, and one jack for charging. The engineer wants to customize and script it. The consumer wants a cool matte aluminum finish on a device that just works. If the consumer has to script it, something is very wrong. We're currently caught between the two worlds. We're looking for the Steve Jobs of data — someone who can design something that does what we want without getting us involved in the details.


Disappearing data

We've become accustomed to virtual products, but it's only appropriate to start by appreciating the extent to which data products have replaced physical products. Not that long ago, music was shipped as chunks of plastic that weighed roughly a pound. When the music was digitized and stored on CDs, it became a data product that weighed under an ounce, but was still a physical object. We've moved even further since: many of the readers of this article have bought their last CD, and now buy music exclusively in online form, through iTunes or Amazon. Video has followed the same path, as analog VHS videotapes became DVDs and are now streamed through Netflix, a pure data product.

Strata Conference New York 2011, being held Sept. 22-23, covers the latest and best tools and technologies for data science — from gathering, cleaning, analyzing, and storing data to communicating data intelligence effectively.

Save 30% on registration with the code ORM30



But while we're accustomed to the displacement of physical products by virtual products, the question of how we take the next step — where data recedes into the background — is surprisingly tough. Do we want products that deliver data? Or do we want products that deliver results based on data? We're evolving toward the latter, though we're not there yet. The iPod may be the best example of a product that pushes the data into the background to deliver what the user wants, but its partner application, iTunes, may be the worst. The user interface to iTunes is essentially a spreadsheet that exposes all of your music collection's metadata. Similarly, the "People You May Know" feature on social sites such as LinkedIn and Facebook delivers recommendations: a list of people in the database who are close to you in one way or another. While that's much more friendly than iTunes' spreadsheet, it is still a list, a classic data structure. Products like these have a "data smell." I call them "overt" data products because the data is clearly visible as part of the deliverable.



A list may be an appropriate way to deliver potential contacts, and a spreadsheet may be an appropriate way to edit music metadata. But there are many other kinds of deliverables that help us to understand where data products are headed. At a recent event at IBM Research, IBM demonstrated an application that accurately predicts bus arrival times, based on real-time analysis of traffic data. (London is about to roll out something similar.) Another IBM project implemented a congestion management system for Stockholm that brought about significant decreases in traffic and air pollution. A newer initiative (http://www.globalgiants.com/archives/2010/04/city_traffic_ma.html) allows drivers to text their destinations to a service and receive an optimized route, given current traffic and weather conditions. Is a bus arrival time data? Probably so. Is a route another list structure, like a list of potential Facebook friends? Yes, though the real deliverable here is reduced transit time and an improved environment. The data is still in the foreground, but we're starting to look beyond the data to the bigger picture: better quality of life.

These projects suggest the next step in the evolution toward data products that deliver results rather than data. Recently, Ford discussed some experimental work in which they used Google's prediction and mapping capabilities to optimize mileage in hybrid cars based on predictions about where the driver was going. It's clearly a data product: it's doing data analysis on historical driving data and knowledge about road conditions. But the deliverable isn't a route or anything the driver actually sees — it's optimized engine usage and lower fuel consumption. We might call such a product, in which the data is hidden, a "covert" data product.

We can push even further. The user really just wants to get from point A to point B. Google has demonstrated a self-driving car that solves this problem. A self-driving car is clearly not delivering data as the result, but there are massive amounts of data behind the scenes, including maps, Street View images of the roads (which, among other things, help it to compute the locations of curbs, traffic lights, and stop signs), and data from sensors on the car. If we ever find out everything that goes into the data processing for a self-driving car, I believe we'll see a masterpiece of extracting every bit of value from many data sources. A self-driving car clearly takes the next step to solving a user's real problem while making the data hide behind the scenes.

Once you start looking for data products that deliver real-world results rather than data, you start seeing them everywhere. One IBM project involved finding leaks in the public water supply of Dubuque, Iowa. Water is being used all the time, but sudden changes in usage could represent a leak. Leaks have a unique signature: they can appear at any time, particularly at times when you would expect usage to be low. Unlike someone watering his lawn, flushing a toilet, or filling a pool, leaks don't stop. What's the deliverable? Lower water bills and a more robust water system during droughts — not data, but the result of data.
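IBM hasn't published the details of that system here, but the signature itself is simple enough to sketch. Here is a toy version in Python; the readings, window size, and threshold are all invented for illustration:

    # Toy leak detector: legitimate uses (lawns, toilets, pools) start and
    # stop, so the minimum overnight flow normally drops to about zero.
    # A leak keeps that minimum pinned above zero.
    def detect_leak(hourly_flow, window=24, min_flow_threshold=0.5):
        """Return start indices of windows whose minimum flow never
        drops near zero (gallons/hour readings, one per hour)."""
        suspects = []
        for i in range(len(hourly_flow) - window + 1):
            if min(hourly_flow[i:i + window]) > min_flow_threshold:
                suspects.append(i)
        return suspects

    normal = [0, 0, 0, 5, 2, 0, 8, 0] * 6   # usage returns to zero
    leaky = [f + 1 for f in normal]          # same usage plus a 1 gal/h leak

    print(detect_leak(normal))   # [] -- flow stops, no alarm
    print(detect_leak(leaky))    # every window flagged: flow never stops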

In medical care, doctors and nurses frequently have more data at their disposal than they know what to do with. The problem isn't the data, but seeing beyond the data to the medical issue. In a collaboration between IBM and the University of Ontario, researchers knew that most of the data streaming from the systems monitoring premature babies was discarded. While readings of a baby's vital signs might be taken every few milliseconds, they were being digested into a single reading that was checked once or twice an hour. By taking advantage of the entire data stream, it was possible to detect the onset of life-threatening infections as much as 24 hours before the symptoms were apparent to a human. Again, a covert data product; and the fact that it's covert is precisely what makes it valuable. A human can't deal with the raw data, and digesting the data into hourly summaries so that humans can use it makes it less useful, not more. What doctors and nurses need isn't data; they need to know that a sick baby is about to get sicker.
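The models behind that work aren't described here, so the following is only a toy illustration of why the hourly digest fails. Suppose, purely for illustration, that the early warning is a collapse in a vital sign's variability rather than a change in its level; the hourly summary never sees it:

    # Two simulated hours of once-per-second heart-rate readings. The
    # numbers are invented; the point is that both hours have the same
    # mean, and only the full stream reveals the collapsed variability.
    import random

    random.seed(0)
    baseline = [140 + random.gauss(0, 6) for _ in range(3600)]
    episode = [140 + random.gauss(0, 1) for _ in range(3600)]

    for name, stream in (("baseline", baseline), ("episode", episode)):
        mean = sum(stream) / len(stream)
        std = (sum((x - mean) ** 2 for x in stream) / len(stream)) ** 0.5
        print(name, round(mean, 1), round(std, 2))
    # Both means print ~140; the std drops from ~6 to ~1. Identical
    # digests, very different patients.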

Eben Hewitt, author of "Cassandra: The Definitive Guide," works for a large hotel chain. He told me that the hotel chain considers itself a software company that delivers a data product. The company's real expertise lies in the reservation systems, the supply management systems, and the rest of the software that glues the whole enterprise together. It's not a small task. They're tracking huge numbers of customers making reservations for hundreds of thousands of rooms at tens of thousands of properties, along with various awards programs, special offers, rates that fluctuate with holidays and seasons, and so forth. The complexity of the system is certainly on par with LinkedIn, and the amount of data they manage isn't that much smaller. A hotel looks awfully concrete, but in fact, your reservation at Westin or Marriott or Days Inn is data. You don't experience it as data, however — you experience it as a comfortable bed at the end of a long day. The data is hidden, as it should be.

I see another theme developing. Overt products tend to depend on overt data collection: LinkedIn and Facebook don't have any data that wasn't given to them explicitly, though they may be able to combine it in unexpected ways. With covert data products, not only is data invisible in the result, but it tends to be collected invisibly. It has to be collected invisibly: we would not find a self-driving car satisfactory if we had to feed it with our driving history. These products are frequently built from data that's discarded because nobody knows how to use it; sometimes it's the "data exhaust" that we leave behind as our cell phones, cars, and other devices collect information on our activities. Many cities have all the data they need to do real-time traffic analysis; many municipal water supplies have extensive data about water usage, but can't yet use the data to detect leaks; many hospitals connect patients to sensors, but can't digest the data that flows from those sensors. We live in an ocean of ambient data of which we're largely unaware. The evolution of data products will center around discovering uses for these hidden sources of data.

The power of combining data

First-generation data products, such as CDDB, were essentially single databases. More recent products, such as LinkedIn's Skills database, are composites: Skills incorporates databases of users, employers, job listings, skill descriptions, employment histories, and more. Indeed, the most important operation in data science may be a "join" between different databases to answer questions that couldn't be answered by either database alone.
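As a minimal sketch of what such a join buys you, here are two invented tables that, only when merged, can say which skills cluster in which cities (using pandas):

    # Neither table alone can answer "which skills cluster where?";
    # the join creates the answer. All data invented.
    import pandas as pd

    members = pd.DataFrame({
        "member_id": [1, 2, 3, 4],
        "city": ["Berlin", "Berlin", "Austin", "Austin"],
    })
    skills = pd.DataFrame({
        "member_id": [1, 2, 2, 3, 4],
        "skill": ["Hadoop", "Hadoop", "Design", "Hadoop", "Design"],
    })

    joined = members.merge(skills, on="member_id")
    print(joined.groupby(["city", "skill"]).size())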

Facebook's facial recognition provides an excellent example of the power in linked databases. In the most general case, identifying faces (matching a face to a picture, given millions of possible matches) is an extremely difficult problem. But that's not the problem Facebook has solved. In a reply to Tim O'Reilly, Jeff Jonas said that while one-to-many picture identification remains an extremely difficult problem, one-to-few identification is relatively easy. Facebook knows about social networks, and when it sees a picture, Facebook knows who took it and who that person's friends are. It's a reasonable guess that any faces in the picture belong to the taker's Facebook friends. So Facebook doesn't need to solve the difficult problem of matching against millions of pictures; it only needs to match against pictures of friends. The power doesn't come from a database of millions of photos; it comes from joining the photos to the social graph.
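Facebook's implementation is private, so the sketch below only illustrates the one-to-few idea itself; the embeddings, the distance function, and the threshold are hypothetical stand-ins for a real face recognition pipeline:

    # The interesting step is the candidate-set restriction, not the
    # matching: the social graph turns one-to-many into one-to-few.
    import math

    THRESHOLD = 0.6  # hypothetical "same person" distance

    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def tag_suggestions(face_vec, uploader, friends_of, embeddings):
        """Match a detected face only against the uploader's friends,
        a few hundred candidates instead of millions."""
        candidates = friends_of[uploader]
        scored = sorted((distance(face_vec, embeddings[f]), f)
                        for f in candidates)
        return [name for d, name in scored if d < THRESHOLD]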


The goal of discovery

Many current data products are recommendation engines, using collaborative filtering or other techniques to suggest what to buy, who to friend, etc. One of the holy grails of the "new media" is to build customized, personalized news services that automatically find what the user thinks is relevant and interesting. Tools like Apple's Genius look through your apps or your record collection to make recommendations about what else to buy. "People you may know," a feature common to many social sites, is effectively a recommendation engine.
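A bare-bones sketch of the co-occurrence idea behind such engines (the data is invented, and real systems are far more sophisticated):

    # Item-based collaborative filtering in miniature: score candidate
    # items by how often they co-occur with items the user already has.
    from collections import Counter

    baskets = [
        {"Flipboard", "Zite", "Instapaper"},
        {"Flipboard", "Zite"},
        {"Flipboard", "Instapaper"},
        {"GarageBand", "forScore"},
    ]

    def recommend(owned, baskets):
        scores = Counter()
        for basket in baskets:
            overlap = len(owned & basket)
            if overlap:
                for item in basket - owned:
                    scores[item] += overlap
        return scores.most_common()

    print(recommend({"Flipboard"}, baskets))
    # [('Zite', 2), ('Instapaper', 2)]: more of what you already have.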

But mere recommendation is a shallow goal. Recommendation engines aren't, and can't be, the end of the road. I recently spent some time talking to Bradford Cross (@bradfordcross), founder of Woven, and eventually realized that his language was slightly different from the language I was used to. Bradford consistently talked about "discovery," not recommendation. That's a huge difference. Discovery is the key to building great data products, as opposed to products that are merely good.

The problem with recommendation is that it's all about recommending something that the user will like, whether that's a news article, a song, or an app. But simply "liking" something is the wrong criterion. A couple months ago, I turned on Genius on my iPad, and it said things like "You have Flipboard, maybe you should try Zite." D'oh. It looked through all my apps, and recommended more apps that were like the apps I had. That's frustrating because I don't need more apps like the ones I have. I'd probably like the apps it recommended (in fact, I do like Zite), but the apps I have are fine. I need apps that do something different. I need software to tell me about things that are entirely new, ideally something I didn't know I'd like or might have thought I wouldn't like. That's where discovery takes over. What kind of insight are we talking about here? I might be delighted if Genius said, "I see you have ForScore, you must be a musician, why don't you try Smule's Magic Fiddle" (well worth trying, even if you're not a musician). That's where recommendation starts making the transition to discovery.
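Continuing the toy recommender above (this reuses recommend() and baskets from that sketch), one crude nudge toward discovery is to keep items that never co-occur with what you own in play, and discount the ones that are too similar. Real discovery is far harder, but the shape of the change is visible even here:

    # A "discovery" tweak: invented categories, arbitrary penalty knob.
    category = {"Flipboard": "news", "Zite": "news", "Instapaper": "news",
                "GarageBand": "music", "forScore": "music"}

    def discover(owned, baskets, penalty=0.2):
        owned_cats = {category[i] for i in owned}
        # Seed with global popularity so items that never co-occur with
        # what you own (invisible to pure co-occurrence) stay in play.
        scores = Counter(item for b in baskets for item in b - owned)
        for item, s in recommend(owned, baskets):
            scores[item] += s
        ranked = [(sc * (penalty if category[i] in owned_cats else 1.0), i)
                  for i, sc in scores.items()]
        return sorted(ranked, reverse=True)

    print(discover({"Flipboard"}, baskets))
    # The music apps now outrank a third news reader.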

Eli Pariser's "The Filter Bubble" is an excellent meditation on the danger of excessive personalization and a media diet consisting only of stuff selected because you will "like" it. If I only read news that has been preselected to be news I will "like," news that fits my personal convictions and biases, not only am I impoverished, but I can't take part in the kind of intelligent debate that is essential to a healthy democracy. If I only listen to music that has been chosen because I will "like" it, my music experience will be dull and boring. This is the world of E.M. Forster's story "The Machine Stops," where the machine provides a pleasing, innocuous cocoon in which to live. The machine offers music, art, and food — even water, air, and bedding; these provide a context for all "ideas" in an intellectual space where direct observation is devalued, even discouraged (and eventually forbidden). And it's no surprise that when the machine breaks down, the consequences are devastating.

I do not believe it is possible to navigate the enormous digital library that's available to us without filtering, nor does Pariser. Some kind of programmatic selection is an inevitable part of the future. Try doing Google searches in Chrome's Incognito mode, which suppresses any information that could be used to personalize search results. I did that experiment, and it's really tough to get useful search results when Google is not filtering based on its prior knowledge of your interests.

But if we're going to break out of the cocoon in which our experience of the world is filtered according to our likes and dislikes, we need to get beyond naïve recommendations to break through to discovery. I installed the iPad Zite app shortly after it launched, and I find that it occasionally breaks through to discovery. It can find articles for me that I wouldn't have found for myself, that I wouldn't have known to look for. I don't use the "thumbs up" and "thumbs down" buttons because I don't want Zite to turn into a parody of my tastes. Unfortunately, that seems to be happening anyway. I find that Zite is becoming less interesting over time: even without the buttons, I suspect that my Twitter stream is telling Zite altogether too much about what I like and degrading the results. Making the transition from recommendation to true discovery may be the toughest problem we face as we design the next generation of data products.

Interfaces

In the dark ages of data products, we accessed data through computers: laptops and desktops, and even minicomputers and mainframes if you go back far enough. When music and video first made the transition from physical products to data products, we listened and watched on our computers. But that's no longer the case: we listen to music on iPods; read books on Kindles, Nooks, and iPads; and watch online videos on our Internet-enabled televisions (whether the Internet interface is part of the TV itself or in an external box, like the Apple TV). This transition is inevitable. Computers make us aware of data as data: one disk failure will make you painfully aware that your favorite songs, movies, and photos are nothing more than bits on a disk drive.

It's important that Apple was at the core of this shift. Apple is a master of product design and user interface development. And it understood something about data that those of us who preferred listening to music through WinAmp or FreeAmp (now Zinf) missed: data products would never become part of our lives until the computer was designed out of the system. The user experience was designed into the product from the start. DJ Patil (@dpatil), Data Scientist in Residence at Greylock Partners, says that when building a data product, it is critical to integrate designers into the engineering team from the beginning. Data products frequently have special challenges around inputting or displaying data. It's not sufficient for engineers to mock up something first and toss it over to design. Nor is it sufficient for designers to draw pretty wireframes without understanding what the product is or how it works. The earlier design is integrated into the product group and the deeper the understanding designers have of the product, the better the results will be. Patil suggested that Foursquare succeeded because it used GPS to make checking into a location trivially simple. That's a design decision as much as a technical decision. (Success isn't fair: as a Dodgeball review points out, position wasn't integrated into cell phones, so Dodgeball's user interface was fundamentally hobbled.) To listen to music, you don't want a laptop with a disk drive, a filesystem, and a user interface that looks like something from Microsoft Office; you want something as small and convenient as a 1960s transistor radio, but much more capable and flexible.

What else needs to go if we're going to get beyond a geeky obsession with the artifact of data to what the customer wants? Amazon has done an excellent job of packaging ebooks in a way that is unobtrusive: the Kindle reader is excellent, it supports note taking and sharing, and Amazon keeps your location in sync across all your devices. There's very little file management; it all happens in Amazon's cloud. And the quality is excellent. Nothing gives a product a data smell quite as much as typos and other errors. Remember Project Gutenberg?

Back to music: we've done away with ripping CDs and managing the music ourselves. We're also done with the low-quality metadata from CDDB (although I've praised CDDB's algorithm, the quality of its data is atrocious, as anyone with songs by John "Lennnon" knows). Moving music to the cloud in itself is a simplification: you don't need to worry about backups or keeping different devices in sync. It's almost as good as an old phonograph, where you could easily move a record from one room to another, or take it to a friend's house. But can the task of uploading and downloading music be eliminated completely? We're partway there, but not completely. Can the burden of file management be eliminated? I don't really care about the so-called "death of the filesystem," but I do care about shielding users from the underlying storage mechanism, whether local or in the cloud.

New interfaces for data products are all about hiding the data itself, and getting to what the user wants. The iPod revolutionized audio not by adding bells and whistles, but by eliminating knobs and controls. Music had become data. The iPod turned it back into music.

The drive toward human time

It's almost shocking that in the past, Google searches were based on indexes that were built as batch jobs, with possibly a few weeks before a given page made it into the index. But as human needs and requirements have driven the evolution of data products, batch processing has been replaced by "human time," a term coined by Justin Sheehy (@justinsheehy), CTO of Basho Technologies. We probably wouldn't complain about search results that are a few minutes late, or maybe even an hour, but having to wait until tomorrow to search today's Twitter stream would be out of the question. Many of my examples only make sense in human time. Bus arrival times don't make sense after the bus has left, and while making predictions based on the previous day's traffic might have some value, to do the job right you need live data. We'd laugh at a self-driving car that used yesterday's road conditions. Predicting the onset of infection in a premature infant is only helpful if you can make the prediction before the infection becomes apparent to human observers, and for that you need all the data streaming from the monitors.

To meet the demands of human time, we're entering a new era in data tooling. Last September, Google blogged about Caffeine and Percolator, its new framework for doing real-time analysis. Few details about Percolator are available, but we're starting to see new tools in the open source world: Apache Flume adds real-time data collection to Hadoop-based systems. A recently announced project, Storm, claims to be the Hadoop of real-time processing. It's a framework for assembling complex topologies of message processing pipelines and represents a major rethinking of how to build data products in a real-time, stream-processing context.
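Storm's actual API isn't shown here; the sketch below is only meant to convey the shape of a stream topology, a source feeding stages that transform events as they arrive rather than in a nightly batch. Everything in it is invented:

    # Toy pipeline in the spirit of a stream topology: a "spout" emits
    # events, "bolts" transform them, and results appear in human time.
    import time

    def spout():
        # Stand-in event source: (bus_id, estimated minutes to stop).
        for reading in [("bus42", 9), ("bus42", 7), ("bus42", 4)]:
            yield reading
            time.sleep(0.1)  # pretend events arrive live

    def smooth(stream, alpha=0.5):
        # Bolt 1: exponentially smooth noisy arrival estimates.
        state = {}
        for bus, eta in stream:
            state[bus] = alpha * eta + (1 - alpha) * state.get(bus, eta)
            yield bus, state[bus]

    def display(stream):
        # Bolt 2: the user-facing result, a sign rather than a dataset.
        for bus, eta in stream:
            print(f"{bus}: arriving in {eta:.0f} min")

    display(smooth(spout()))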

Conclusions

Data products are increasingly part of our lives. It's easy to look at the time spent in Facebook or Twitter, but the real changes in our lives will be driven by data that doesn't look like data: when it looks like a sign saying the next bus will arrive in 10 minutes, or that the price of a hotel reservation for next week is $97. That's certainly the tack that Apple is taking. If we're moving to a post-PC world, we're moving to a world where we interact with appliances that deliver the results of data, rather than the data itself. Music and video may be represented as a data stream, but we're interested in the music, not the bits, and we are already moving beyond interfaces that force us to deal with its "bitly-ness": laptops, files, backups, and all that. We've witnessed the transformation from vinyl to CD to digital media, but the process is ongoing. We rarely rip CDs anymore, and almost never have to haul out an MP3 encoder. The music just lives in the cloud (whether it's Amazon's, Apple's, Google's, or Spotify's). Music has made the transition from overt to covert. So have books. Will you have to back up your self-driving route-optimized car? I doubt it. Though that car is clearly a data product, the data that drives it will have disappeared from view.

Earlier this year Eric Schmidt said:

Google needs to move beyond the current search format of you entering a query and getting 10 results. The ideal would be us knowing what you want before you search for it...

This controversial and somewhat creepy statement actually captures the next stage in data evolution. We don't want lists or spreadsheets; we don't want data as data; we want results that are in tune with our human goals and that cause the data to recede into the background. We need data products that derive their power by mashing up many sources. We need products that deliver their results in human time, rather than as batch processes run at the convenience of a computing system. And most crucially, we need data products that go beyond mere recommendation to discovery. When we have these products, we will forget that we are dealing with data. We'll just see the results, which will be aligned with our needs.

We are seeing a transformation in data products similar to what we have seen in computer networking. In the '80s and '90s, you couldn't have a network without being intimately aware of the plumbing. You had to manage addresses, hosts files, shared filesystems, even wiring. The high end of technical geekery was wiring a house with Ethernet. But all that network plumbing hasn't just moved into the walls: it's moved into the ether and disappeared entirely. Someone with no technical background can now build a wireless network for a home or office by doing little more than calling the cable company. Data products are striving for the same goal: consumers don't want to, or need to, be aware that they are using data. When we achieve that, when data products have the richness of data without calling attention to themselves as data, we'll be ready for the next revolution.


August 25 2011

Ruminations on the legacy of Steve Jobs

Steve Jobs"It's better to die on your feet than to live on your knees." — Neil Young

"That day has come." Four simple words that signaled that Steve Jobs felt compelled to step down as CEO of Apple, the company he founded, then lost, then saw ridiculed and written off, only to lead its rebirth and rise to new heights.

It's an incredible story of prevailing (read: dominating) over seemingly insurmountable odds. A story that has no peer in technology, or any other industry, for that matter.

That is why even though this moment was long anticipated, and while I know that Steve isn't gone (and hopefully won't be anytime soon), yesterday's announcement nonetheless feels like a "Kennedy" or "Lennon" moment, where you'll remember "where you were when ..."

I say this having seen first-hand the genuine, profound sadness of multitudes of people, both online and on the street, most of whom (obviously) have never met the man.

Why is this? I think that we all recognize greatness, and appreciate the focus, care, creativity, and original vision that it takes to achieve it.

The realization that one man sits at the junction point of cataclysmic disruptions in personal computing (Apple II/Mac), music (iPod + iTunes), mobile computing (iPhone + iOS), movies (Pixar) and post-PC computing (iPad) is breathtaking in its majesty. A legacy with no equal.

The intersection of technology and liberal arts

In an era where entrepreneurialism is too often defined by incrementalism and pursuit of the exit strategy, Jobs' Apple was always defined by true husbandry of a vision, and the long, often thankless, pursuit of excellence and customer delight that goes with it.

Ironically, though, Jobs' greatest innovation may actually be as basic as "bringing humanity back into the center of the ring," to borrow a phrase from Joe Strummer of the seminal rock band, The Clash.

Consider Jobs' own words at the launch of the iPad back in January, 2010:

The reason we've been able to create products like this is because we've tried to be at the intersection of technology and liberal arts. We make things that are easy to use, fun to use — they really fit the users.

If this seems intuitive, and it should be, consider the modus operandi that preceded it. Before Apple, the hard truth was that the "inmates ran the asylum," in that products were typically designed by engineers to satisfy their own needs, as opposed to those of the actual consumers of the products.

Moreover, products were designed and marketed according to their "speeds and feeds," checklists of attributes over well-chiseled, highly-crafted outcomes. And it didn't really matter if at each step along the value chain the consumer was disrespected and disregarded.

Ponder for a moment the predecessor to the Apple Store, CompUSA, and what that experience was like versus the new bar for customer service being set by Apple.

Or, think about the constraints on enjoying music and other media before the iPod, or the pathetic state of mobile phones before the iPhone.

Skeptics and haters alike can credibly say that Apple did not create these categories, but recognize that it took a visionary like Steve Jobs to build a new technology value chain around the consumer and make it actually work. To give birth to an entirely new platform play. To free the user from the hard boundaries of WIMP computing. To bring design and user interaction models into the modern age. And to magically collapse the once-impenetrable boundaries between computing, communications, media, Internet, and gaming.

Even today, the legacy MP3 device category is utterly dominated by Apple's iPod, despite every would-be competitor knowing exactly what Apple's strategy is in this domain.

To do this in segment after segment, launch after launch, takes true conviction and a bit of chutzpah. But then again, Apple, under Jobs, has never been a company that embraced or felt beholden to conventional wisdom (see "Apple's segmentation strategy, and the folly of conventional wisdom").

iPad as the signature moment in a brilliant career

Time and again, investors, competitors and industry pundits have dismissed Apple, most recently when the company launched the iPad. Then, the conventional wisdom was that Apple "blew it" or that it was "just a big iPod Touch," nothing landmark.

Truth be told, such dismissals are probably the barometer by which Steve Jobs knows that he's played the winning hand.

I wrote in 2010, in anticipation of the iPad launch:

The best way to think about the iPad is as the device that inspired Steve Jobs to create the iPhone and the iPod Touch. It's the vaunted 3.0 vision of a 1.0 deliverable that began its public life when the first generation of iPhone launched only two-and-a-half years ago ... it is a product that is deeply personal to Steve Jobs, and I believe the final signature on an amazing career. I expect the product to deliver.

Well, it did deliver, and 30 million iPads later, the ascent of post-PC computing seems irrevocable as a result.

The moral of the story in considering the wonder and beauty of Steven P. Jobs, thus, is two-fold.

One is that most companies wouldn't even have chanced cannibalizing a cash cow product like the iPod Touch (or the iPhone) to create a new product in an unproven category like tablet devices.

Not Apple, where sacred cows are ground up and served for lunch as standard operating procedure.

Two is that the mastery required to create a wholly new category of device that could be dismissed as "just a big iPod Touch" takes a very rare bird. Namely, one that pursues non-linear strategies requiring high leverage, deep integration and even higher orchestration.

Exactly the type of complexity that only Jobs and company could make look ridiculously, deceptively simple.

In his honor, may we all be willing to "Think Different" in the days, weeks and months ahead. That's the best way to pay tribute to a legacy that will stand the test of time.

Apple Store and Steve Jobs photos from Apple Press Info.




June 20 2011

The iPhone, the Angry Bird and the Pink Elephant

I am a firm believer that he who wins the hearts and minds of developers wins the platform game.

Case in point, in today's mobile/Post-PC universe, we see clearly how major companies like Microsoft, HP, Dell, RIM and Nokia are struggling to remain relevant in the face of developer apathy.

Meanwhile, Apple and Google have left the competition in the dust by virtue of their tremendous success in courting application developers.

But, there is a "pink elephant" in the room that no one is really discussing, and it gets to the nut of what investing time and energy in a software platform is all about. More on that in a minute.

First, some table setting. As an apps developer, I care about three things. First and foremost is having a great platform to develop on top of.

After all, great software is a by-product of: A) Enabling your target audience to achieve a well-defined set of outcomes; B) Solving the right problem, technically speaking; and C) Delivering an engaging user experience.

Simply put, if you are working on the wrong canvas, or using an inferior palette, accomplishing these tasks is hard to do. The good news here is that whether you're a devotee of Apple's iOS, Google's Android, third-party frameworks like Ansca's Corona, or open web approaches like HTML5, the getting's actually pretty good in this realm.

The second requirement is having a readily addressable, targetable base of users. All things being equal, this is preferably a large base of users, but ultimately, raw audience size matters less than the lifecycle value (in dollars) that you can reasonably hope to capture from the users you actually monetize. Again, 200 million iOS devices and 100 million Android devices is a very large footprint for targeting purposes, so no complaints there either.
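To make that arithmetic concrete (every number below is invented):

    # Lifecycle value beats raw reach: (users you monetize) x (dollars
    # per user), not audience size, is the figure of merit.
    platform_a = 200_000_000 * 0.01 * 2.00   # huge base, 1% pay, $2 each
    platform_b = 20_000_000 * 0.15 * 12.00   # tenth the base, 15% pay, $12
    print(f"A: ${platform_a:,.0f}")   # A: $4,000,000
    print(f"B: ${platform_b:,.0f}")   # B: $36,000,000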

This brings me to my third requirement. As a developer, while I am of course very passionate about what I build, I am not doing this for the dark joys of being a starving artist.

Rather, I am in it to make money; namely, to build upon my profession, and if all goes well, sing and dance all the way to the bank. Here's where the circumstances are cloudy at best, and deeply troubling at worst.

Thinking about success: What's past is prologue

When I close my eyes and think back to the old PC days, I can recall legions of very large, breakout successes that emanated from the PC model (i.e., $100M+ revenue companies).

The high profile names include companies like Intuit, Lotus, Adobe, Symantec, Borland, CheckPoint, McAfee, Siebel and Sybase. But trust me, the landscape was dotted with successes across a dizzying array of application categories and vertical segments, and serviced by a wide range of solution providers.

Similarly, when I think about the dotcom phase of the web, companies like Amazon, eBay, Yahoo, Google and PayPal come easily to mind.

Even the post-dotcom phase of the web is spotlighted by monster successes like Salesforce.com, Facebook and LinkedIn, with Facebook being doubly noteworthy for having already spawned a true cash-generating machine goliath off of its platform, in Zynga, which is expected to reach $1.8 billion in revenue, and $630 million in profits in 2011.

Now, contrast these companies with their "breakout success" counterparts on iOS and Android, and you are left with the chirping sounds of crickets.

Shockingly, lost in the stunning growth of iPhone, iPad, iPod Touch and Android-derived devices — 300 million devices sold combined and counting, 600,000-plus apps built, and more than 18 billion app downloads — is the disconcerting truth that no one is talking about. Namely, that the closest story of financial success that we have to Facebook, Amazon or Intuit is ... Angry Birds!

What the frak? Angry Birds is ridiculously addictive, it's cute and it's brilliantly executed, but it is perhaps a $15-25 million business.

Is this the best that we can do in painting a picture of software success in an industry that is projected to grow to 10 billion devices worldwide?

Cry me a river: Why should Apple or Google care?

I trust that Apple CEO Steve Jobs felt tremendous pride when he announced at WWDC that Apple had paid app developers more than $2.5 billion in revenue share from sales of their applications.

He should be proud. Apple has created an amazing platform that seemingly overnight, but actually a decade in the making, has achieved the disruption trifecta: first re-jiggering the music business, then mobile, and now, the PC industry.

But, I'd like to submit an uncomfortable truth that should give the chess players at Apple (and to a lesser extent, Google) some cause for pause.

As Amazon first began to prove out back in the mid-'90s, creating a discovery engine, distribution platform and marketplace optimized for long tail-oriented product offerings can create great financial rewards for the platform creator, and no doubt Apple has innovated upon this model incredibly well vis-à-vis iTunes and the App Store.

However, whereas Amazon's model did not completely re-write the economics of selling electronics, toys and books, such that what once sold for $25 is now $0.99, the App Store is fundamentally different. It seems purpose-built to create surplus and thereby commoditize software, and since the incremental cost of each copy of software is effectively zero, a race to the bottom is almost assured in this environment.

This is ironic because Apple's own highly disciplined business strategy is geared toward maximizing profit margins, without leaving pricing overhang for the competition to attack them from the low-end (which is what happened to Apple during the PC era).

Yet strangely, for all of the brilliant creation, orchestration and curation efforts that Apple has made on behalf of developers, little attention seems to have been made to ensuring that app makers can actually build profitable, scalable businesses.

Android Open, being held October 9-11 in San Francisco, is a big-tent meeting ground for app and game developers, carriers, chip manufacturers, content creators, OEMs, researchers, entrepreneurs, VCs, and business leaders.

Save 20% on registration with the code AN11RAD

Thus, it's noteworthy that in Amazon's nascent Android App Store, the company is exerting a measure of pricing control over app developers, presumably to avoid this race to the bottom.

Why is this? Perhaps, unlike Apple and Google, Amazon is in the business of making the lion's share of its money selling other people's stuff. Silly as it sounds, Amazon actually needs its vendors to be fiscally healthy enough so Amazon can sell lots of their products. By contrast, Apple just needs a steady supply of "there's an app for that" chum to keep the platform fresh and exciting.

Lest one wax poetic about Google saving the day, remember that their real customer is the carrier and device OEM, and the lion's share of their dollars are derived from search advertising, so they merely need the "optics" of app diversity to remain relevant.

(Sidebar: If you watch Apple's TV commercials for iPhone/iPad and mobile carriers' ads for Android phones, this qualitative distinction becomes clear.)

Netting it out, the current state of affairs raises the following questions:

  1. How is a large software industry going to grow around this type of model, and what happens if it doesn't?
  2. From an economic viability perspective, what would the ideal platform approach look like for developers?
  3. How might another platform, such as Amazon's, or Facebook's rumored Project Spartan, outflank Apple and Google by building a better mousetrap for developers to make money?

A final thought: Once upon a time, the notion that people would even pay for software was scoffed at. But Microsoft, acting purely out of enlightened self-interest, helped catalyze a packaged software industry that would grow to more than $200 billion in annual revenues.

The moral of the story? What's past is prologue in distinguishing between mere survival and breakout success. How do I know this? A little birdy told me.




June 08 2011

Four core takeaways from Apple's WWDC keynote

The Worldwide Developers Conference (WWDC) keynote by Apple CEO Steve Jobs was pure "shock and awe," a showcase of the overwhelming power that has been assembled and orchestrated by Apple, the industry's emerging Post-PC gorilla.

Most impressively, the event and the specifics presented (iOS 5, iCloud, OS X Lion) during it were clearly staged to deliver an inspiring but chilling message: Whether you're a prospective customer, developer, channel partner, or competitor, "resistance is futile."

What follows are my four core takeaways from the keynote.

No. 1: The halo effect

Three years ago, I wrote that Apple had made, and was brilliantly executing on, a handful of trend bets that left it uniquely positioned within the marketplace.

These bets included:

  1. Making the mobile Internet caveat-free.
  2. Harnessing rich media as the "my stuff" bucket that matters.
  3. Treating everything in their arsenal as an integrated platform (from PC to device to online service).
  4. Leveraging and deriving core technologies from one product family to cross-pollinate another.

At the WWDC keynote, Jobs and company repeatedly asserted that "it just works" (the ultimate caveat-free mantra) when presenting this feature or that. They noted that no one else can assemble all of these pieces to deliver this type of solution.

Similarly, a heavy emphasis was placed on extending the utility, reach, and integration of:

  • Personal media: Via camera enhancements, which use Apple's Core Image camera technology, and a new Photo Stream service, which will run on iPhone, iPod Touch, iPad, Mac and the Apple TV.
  • Personal documents: iWork now runs on everything from the iPhone and iPod Touch to the iPad and the Mac, and it'll soon be cloud-enabled via Documents in the Cloud.


  • Messaging/scheduling/contacts: Via the new iCloud service, which revamps and subsumes the company's disappointing MobileMe service. The new iMessage offering is poised to disrupt the SMS business.


  • Professional media: Via iTunes in the cloud and a new iTunes Match service; a new magazine and newspaper subscription service called Newsstand, which complements its iBookstore; and unique to Apple, liberal rights to use the same media now and into the future on multiple iOS devices.

Web 2.0 Summit, being held October 17-19 in San Francisco, will examine "The Data Frame" — focusing on the impact of data in today's networked economy.

Save $300 on registration with the code RADAR

No. 2: A coherent Post-PC vision

John Gruber has a great analogy for how Apple approaches markets, strategies and tactics that he calls "Measure Twice, Cut Once." The basic premise is that while most companies have a tendency to fire, then aim, Apple is diligent in assessing all of the moving parts of a strategy, and ensuring they have extreme confidence in both the viability of the path and their ability to execute on that path.

Hence, while many mocked Apple's slow path to copy and paste in iOS, their handling of Antennagate, and their seeming lack of urgency in responding to Google's cloud ambitions, the truth is that Apple begins with a 3.0 vision that guides 1.0 execution.

This "begin with the end in mind" sensibility and patience has repeatedly rewarded the company and its constituency. This week's announcements were no different.

In announcing both iOS 5 and iCloud, Apple for the first time gave users clear workflows that don't force false dichotomies between the PC as proxy, and the cloud as the hard drive in the sky. You can cut the cord or not. Software updating and iTunes and App library syncing don't demand a host PC. Nor does photo or video editing. Nor does creation of calendars, mailboxes, documents or the like.

At the same time, they have delineated the cloud as The Truth, relegating rather forcefully the PC (and the Mac) as just another device from a backup, syncing and service perspective.

Categorically, this puts them in a real sweet spot between the lowest common denominator web tilt of Google, the PC legacy catholicism of Microsoft, the device-agnosticism of Facebook, and the digital disruptor that is Amazon.

No. 3: Amazon beware

Two storylines always seemed obvious when Apple began its assault on becoming the digital hub. One was that long-time friends, Apple and Google, were destined to become frienemies. The second was that the only company positioned to fight Apple in terms of both style and substance was (and is) Amazon.

Why? Amazon, like Apple, is singularly focused on how to sell stuff. Both companies are somewhat agnostic to rigid categorical definitions of the types of products that they can sell and the lines of business that they can play within.

Equally important, like Apple, Amazon has a relentless focus on customer satisfaction, not to mention, the all-important billing relationship.

Plus, like Apple, Jeff Bezos and company know how to execute on platform strategy, are adept at pioneering cloud services, have their own device and integrated app store strategy (via Kindle and their Android app store) and have secured the all-important media relationships across music, books and movies.

With Apple moving aggressively into PC software sales, ebooks (the WWDC keynote touted 130 million downloads from iBookstore) and magazine subscriptions, and an Android-derived Amazon Kindle tablet rumored, Amazon seems to represent a potential fly in the ointment of Apple's ambitions.

Whether Apple represents a serious threat to Amazon, however, remains to be seen. This will be my favorite industry storyline to watch unfold in the year ahead.

No. 4: The cannibal

Two tweets that I saw stood out in the waning moments of the keynote, both speaking to Apple's willingness to kill stuff. The first, by the New York Times' John Markoff, underscores the admirable quality of Jobs to see beyond long-held conventions and thus to kill sacred cows (even his own in the case of MobileMe).

Steve's great strength? He kills things ... Floppy, hard disk, etc. Next up? The file system.


More chilling, however, is Apple's ready willingness to cannibalize its partners. While inherent in any platform play is the risk that the platform provider will see your sandbox as strategic and co-opt it for themselves, the news wires were rife with stories about the "body count" from Apple's announcements.

Apple's announcement about their new feature that enables the iPhone's volume control to activate the camera shutter led to this sarcastic tweet by Chirag Mehta:

Step 1: Reject an innovative app. Step 2: Copy that functionality in the core OS. Step 3: Claim Innovation. #WWDC #Camera #TapTapTap


In case you don't know, this is a direct dig at the fact that iOS developer Tap Tap Tap, makers of the popular Camera+ app, innovated this very same feature several months back, but Apple blocked the app's release until the feature was removed. Now, however, Apple has added it to iOS as their own feature.

To be clear, as platform maker Apple is both within their rights and responsibility to decide which features are best left for third-parties to extend, and which are core and thus should be universal within the platform. But I suspect that it speaks to the growing unease that an all-powerful Apple may not be so great for third-party developers, especially given Apple's past track record of co-option during the PC wars.

Such is the paradox of astounding success. One moment, you are being celebrated as a revolutionary, and bringer of a golden age. The next, you're being taken to task. Apple's relationship with its developers and corresponding role in their success (or failure) is a topic certainly worth further exploration. But that is a post for another day.



Related


  • Apple's Halo Effect

  • Apple's Segmentation Strategy (and the Folly of Conventional Wisdom)

  • Understanding Apple's iPad

  • Five reasons iPhone vs Android isn't Mac vs Windows
