
March 21 2013

The demise of Google Reader: Stability as a service

Om Malik’s brief post on the demise of Google Reader raises a good point: If we can’t trust Google to keep successful applications around, why should we bother trying to use their new applications, such as Google Keep?

Given the timing, the name is ironic. I’d definitely like an application similar to Evernote, but with search that actually worked well; I trust Google on search. But why should I use Keep if the chances are that Google is going to drop it a year or two from now?

Google Keep screenshot

In the larger scheme of things, Keep is small potatoes. Google is injuring themselves in ways that are potentially much more serious than the success or failure of one app. Google is working on the most ambitious re-envisioning of computing since the beginning of the PC era: moving absolutely everything to the cloud. Minimal local storage; local disk drives, whether solid state or rust-based, are the problem, not the solution. Projects like Google Fiber show that they’re interested in seeing that people have enough bandwidth to get at their cloud storage fast enough that they don’t notice it isn’t local.

It’s a breathtaking vision, on many levels: I should be able to have access to all of my work, regardless of the device I’m using or where it’s located. A mobile phone shouldn’t be any different from a desktop. I may not want to write software on a mobile phone (I can’t imagine coding on those tiny touch keyboards), but I should be able to if I want to. And I should definitely be able to take a laptop into the hills and work transparently over a 4G network.

Furthermore, why should I worry about local storage? The most common cause for throwing a computer on the bone pile is disk drive failure. Granted, I keep machines around for a long time, so by the time the disk drive fails, it’s more than time for an upgrade. But local disks require backups; backups are a pain; and it’s all too common for something to go wrong when you’re doing a restore. I’d prefer to leave backups to a professional in a data center. For that matter, there are many things I’d rather leave to a data center ops group: malware detection, authentication, software updates, you name it. Most of the things that make computing a pain disappear when you move them to the cloud.

So I’ve written two paragraphs about what’s wonderful about Google’s vision. Here’s what sucks. How can I contemplate moving everything to the cloud, especially Google’s cloud, if services are going to flicker in and out of existence at the whim of Google’s management? That’s a non-starter. Google has scrapped services in the past, and though I’ve been sympathetic with the people who complained about the cancellation, they’ve been services that haven’t reached critical mass. You can’t say that about Google Reader. And if they’re willing to scrap Google Reader, why not Google Docs? I bet more people use Reader than Docs. What if they kill the Prediction API, and you rely on that? There are alternatives to Reader, there may be alternatives to Docs (though most of the ones I knew have died on the vine), but I don’t know of anything remotely like the Prediction API. I could go on with “what ifs” forever (Authentication API? Web Optimizer?), but you get the point.

If Google is serious about providing a platform that lets us move all of our computing to the cloud, they need to provide a stable platform. So far, the tools are great, but Google gets a #fail for stability. Google understands the Internet far better than its competitors, but they’re demonstrating that they don’t understand their users. If you’re a product company, taking out the trash — cancelling the old projects, the non-productive products — is an unpleasant necessity. But Google is trying to be far more than a product company. They’re trying to become a platform company, and they don’t yet understand that it’s a different game, with different rules.

July 30 2012

Four short links: 30 July 2012

  1. pathod — A pathological HTTP daemon for testing and torturing client software. (via Hacker News)
  2. A Walk Through Twitter’s Walled Garden (The Realtime Report) — nice breakdown of Twitter’s business model choice and consequences. Twitter wants you to be able to see the pictures and read the articles shared in your Tweets, without leaving the garden. Costolo told the Los Angeles Times that “Twitter is heading in a direction where its 140-character messages are not so much the main attraction but rather the caption to other forms of content.” (You know all the traffic that Twitter’s been driving to web sites? Don’t count on it being there next year.) (via Jim Stogdill)
  3. My Computing Environment (Jesse Vincent) — already have a set of those gloves on order.
  4. How Speedo Created a Record-Breaking Swimsuit (Scientific American) — A new 3-D printer at Aqualab fabricated prototypes of the cap and goggles for testing within hours, rather than sending drawings to a manufacturer and waiting weeks or months. “In the past we couldn’t do many changes to the original design,” Santry says. “With this process, we completely revolutionized the goggle from scratch.” (via Eric Ries)

February 23 2012

Strata Week: Infochimps makes a platform play

Here are a few of the data stories that caught my attention this week.

Infochimps makes its big data expertise available in a platform

The big data marketplace Infochimps announced this week that it will begin offering the platform that it's built for itself to other companies — as both a platform-as-a-service and an on-premise solution. "The technical needs for Infochimps are pretty substantial," says CEO Joe Kelly, and the company now plans to help others get up to speed with implementing a big data infrastructure.

Infochimps has offered datasets for download or via API for a number of years (see my May 2011 interview with the company here), but the startup is now making the transition to offer its infrastructure to others. Likening its big data marketplace to an "iTunes for data," Infochimps says it's clear that we still need a lot more "iPods" in production before most companies are able to handle the big data deluge.

Infochimps will now offer its in-house expertise to others. That includes a number of tools that one might expect: AWS, Hadoop, and Pig. But it also includes Ironfan, Infochimps' management tool built on top of Chef.

Infochimps isn't abandoning the big data marketplace piece of its business. However, its move to support companies with their big data efforts is an indication that there's still quite a bit of work to do before everyone's quite ready to "do stuff" with the big data we're accumulating.

Strata 2012 — The 2012 Strata Conference, being held Feb. 28-March 1 in Santa Clara, Calif., will offer three full days of hands-on data training and information-rich sessions. Strata brings together the people, tools, and technologies you need to make data work.

Save 20% on registration with the code RADAR20

How do you anonymize online publications?

A fascinating piece of research is set to appear at IEEE S&P on the subject of Internet-scale authorship identification based on "stylometry," which is an analysis of writing style. The paper was co-authored by Arvind Narayanan, Hristo Paskov, Neil Gong, John Bethencourt, Emil Stefanov, Richard Shin and Dawn Song. They've been able to correctly identify writers 20% of the time based on looking at what they've published online before. It's a finding with serious implications for online anonymity and free speech, the team notes.

"The good news for authors who would like to protect themselves against de-anonymization is it appears that manually changing one's style is enough to throw off these attacks," says Narayanan.
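To make "stylometry" a little more concrete, here is a toy sketch of the kind of feature extraction involved: an author's style is reduced to a vector of function-word frequencies, which can then be compared against vectors from known authors. (This is only an illustration; the actual paper uses a far richer feature set and large-scale classifiers.)

```python
from collections import Counter
import re

# Toy stylometry: authors use "function words" habitually and largely
# unconsciously, so their relative frequencies form a crude style fingerprint.
FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "is", "was", "but"]

def style_vector(text):
    """Return the relative frequency of each function word in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = len(words) or 1
    return [counts[w] / total for w in FUNCTION_WORDS]

# Vectors like this one can be compared (e.g. via cosine similarity)
# to vectors computed from an author's known writing.
v = style_vector("The cat sat on the mat, and the dog was in the yard.")
```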

Open data for the public good

O'Reilly Media has just published a report on "Data for the Public Good." In the report, Alex Howard makes the argument for a systemic approach to thinking about open data and the public sector, examining the case for a "public good" around public data as well as around governmental, journalistic, healthcare, and crisis situations (to name but a few scenarios and applications).

Howard notes that the success of recent open data initiatives "won't depend on any single chief information officer, chief executive or brilliant developer. Data for the public good will be driven by a distributed community of media, nonprofits, academics and civic advocates focused on better outcomes, more informed communities and the new news, in whatever form it is delivered." Although many municipalities have made the case for open data initiatives, there's more to the puzzle, Howard argues, including recognizing the importance of personal data and making the case for a "hybridized public-private data."

The "Data for the Public Good" report is available for free as a PDF, ePUB, or MOBI download.

Got data news?

Feel free to email me.


October 03 2011

USA: Occupy Together

The website Occupy Together offers a wealth of information on the social movements catalyzing in many cities in the United States and in other countries around the world against corporate greed and corruption.



This entry is part of the OccupyWallStreet compilation 2011-09/10.

July 28 2011

Books as a service: How and why it works

Justo Hidalgo (@justohidalgo), co-founder of 24Symbols — a kind of Netflix for ebooks — says books as a service not only benefits readers, but publishers as well. Hidalgo outlines his company's business model and explains the benefits it offers in the following interview.

Hidalgo will also expand on these ideas at TOC Frankfurt 2011 in October.

How does 24Symbols' business model work?

Justo Hidalgo: 24symbols is a subscription service that lets users read in the cloud, and it includes social capabilities. This means that the user does not need to download the ebook. The book goes wherever you go — read it on your laptop, iPad, smartphone, and so forth.

We have a freemium business model. Users can subscribe for free in order to read ad-supported books online. Or they can pay a monthly, quarterly or yearly fee to access a bigger catalog with no ads, and with additional capabilities, such as reading offline — on the plane, on the subway, or in any place where Internet connectivity is not available.

What we're offering is quite different compared to what the big players are doing. We're offering an alternative approach — a new channel where publishers can provide additional value to the readers, and where readers can take advantage of what the Internet is offering.

How does your model benefit publishers?

Justo Hidalgo: There are three main benefits for publishers:

  • Piracy — Though not as high yet as in music or movies, piracy in books is clearly increasing. Publishers can either wait until the numbers get so high that nothing can be done, or they can act accordingly. The examples of Netflix and Spotify show that if you give users a compelling way to consume paid content, they will pay for it.
  • Cannibalization — We don't believe books are dead, but rather that they will co-exist with their digital counterparts. 24symbols helps in that coexistence as a way to easily re-direct traffic to retailers — if you love a book on 24symbols, give it as a gift; if you read for a while but still prefer the printed version, buy it.
  • Books as a service — The trend toward consuming content from the cloud is clear and inevitable. Publishers must start positioning themselves in an area that is already profitable in many businesses and clearly will be soon in the book industry. The benefits it brings to publishers — statistics and data gathering, close revenue control, and the ability to experiment — outweigh the current concerns.

Additionally, we share revenue with publishers. The way to do this is by having a common revenue pool where we include all book-related ad revenue and the paid subscriptions. For a specific time range, such as a month, this revenue plus the number of pages that have been accessed throughout that period gives us the "price per page." Then we just count the number of pages per publisher and pay each publisher accordingly.
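The pooled revenue-share arithmetic Hidalgo describes is simple enough to sketch. A minimal illustration follows; all figures and publisher names are hypothetical, not 24symbols' actual numbers or code:

```python
# Sketch of the pooled revenue-share model described above.
# Figures and publisher names are made up for illustration.

def price_per_page(ad_revenue, subscription_revenue, total_pages_read):
    """Pool all book-related revenue for the period, divide by pages read."""
    return (ad_revenue + subscription_revenue) / total_pages_read

def publisher_payouts(pool_price, pages_by_publisher):
    """Pay each publisher in proportion to pages read from its books."""
    return {pub: pages * pool_price for pub, pages in pages_by_publisher.items()}

# One hypothetical month: 10,000 in ad revenue, 40,000 in subscriptions,
# and 5 million pages read across the whole catalog.
price = price_per_page(10_000, 40_000, 5_000_000)
payouts = publisher_payouts(price, {"Publisher A": 3_000_000,
                                    "Publisher B": 2_000_000})
```

Each publisher's share thus rises and falls with how much of the period's reading its catalog attracted, rather than with per-copy sales.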

TOC Frankfurt 2011 — Being held on Tuesday, Oct. 11, 2011, TOC Frankfurt will feature a full day of cutting-edge keynotes and panel discussions by key figures in the worlds of publishing and technology.

Save 100€ off the regular admission price with code TOC2011OR

How are publishers responding?

Justo Hidalgo: We're finding lots of interest from publishers. Most of them understand how our model can help them and since we're quite flexible regarding how to start, it's easy for them to begin by publishing some content in order to experiment. We're adding new books to our catalog every week, and we're finishing some deals with big publishers that will provide a "seal of quality" to our project.

Is piracy a concern for you?

Justo Hidalgo: The project itself was born with piracy in mind. As I mentioned before, piracy is increasing in the ebook market. This doesn't help the industry, as it didn't help other cultural and entertainment industries, but it clearly shows a shift in how content is accessed and consumed. We offer a solution that's based on a proven premise: if you provide readers with a convenient, unified and affordable way to access content, people will use it. Once that's achieved, piracy doesn't matter that much.

This interview was edited and condensed.

For more on how 24Symbols works, check out the video below:


  • Data markets aren't coming. They're already here
  • For publishing, sales info is the tip of the data iceberg
  • Book piracy: Less DRM, more data
  • Ebooks and the threat from "internal constituencies"

    June 30 2011

    How Netflix handles all those devices

    Netflix's shift to streaming delivery has made quite an impression on Internet traffic. According to Sandvine's latest report, Netflix now claims almost 30% of peak downstream traffic in North America.

    That traffic occurs, in no small part, because Netflix can run on so many devices — PCs, tablets, gaming consoles, phones, and so on. In the following interview, Netflix's Matt McCarthy (@dnl2ba) shares a few lessons from building across those varied platforms. McCarthy and co-presenter Kimberly Trott will expand on many of these same topics during their session at next month's OSCON.

    What are some of the user interface (UI) challenges that Netflix faces when working across devices?

    Matt McCarthy: Scaling UI performance to run well on a low-cost Blu-ray player and still take advantage of a PlayStation 3's muscle has required consulting WebKit and hardware experts, rewriting components that looked perfectly good a week before, and patiently tuning cache sizes and animations. There's no silver bullet.

    Since we've standardized on WebKit, we don't have to support multiple disparate rendering engines, DOM API variants, or script engines. However, there are lots of complex rendering scenarios that are difficult to anticipate and test, especially now that we're starting to take advantage of WebKit accelerated compositing. There are WebKit test suites, but none that are both comprehensive and well documented, so we're working on our own test suite that we can use to validate partners' ports of our platform.

    OSCON JavaScript and HTML5 Track — Discover the new power offered by HTML5, and understand JavaScript's imminent colonization of server-side technology.

    Save 20% on registration with the code OS11RAD

    How do the platform lessons Netflix has learned apply to other developers?

    Matt McCarthy: The challenges we face may be familiar to many large-scale AJAX application developers. In addition, mobile developers need to make similar trade-offs between memory usage and performance, other sophisticated user interfaces need to handle UI state, and most large code bases can benefit from good abstraction, encapsulation, and reuse.

    The urgency and difficulty of solving those challenges may differ for different applications, of course. If your application is very simple, it would be silly for you to use the level of abstraction we've implemented to support A/B testing in Netflix device UIs. But if you're innovating heavily on user experience, your performance isn't always what you'd like, and your UI is an endless font of race conditions and application state bugs, then maybe you'd like to learn about our successes and mistakes.

    There were reports last year that some Netflix PS3 users were seeing several different UIs. What are the benefits and challenges with this kind of A/B testing?

    Matt McCarthy: Netflix is a subscriber service, so ultimately what we care about is customer retention. But retention, by definition, takes a long time to measure. We use proxy metrics that correlate well with retention. Some of our most closely watched metrics have to do with how many hours of content customers stream per month. Personally, I find it gratifying to have business interests that are aligned closely with our customers' interests.

    The challenges grow as the A/B test matrix grows, since the number of test cell combinations scales geometrically with the number of tests. Our quality assurance team has been working on automated tests to detect regressions so a fancy new feature doesn't inadvertently break another feature that launched last month. Our engineers adhere to a number of best practices, e.g. defining, documenting, and adhering to interfaces so we don't find nasty surprises when we replace a UI component in a test cell.

    A/B testing user interfaces obviously takes a lot more effort than developing our "best bet" UI and calling it a day, but it's been well worth the cost. We've already been surprised a few times by TV UI test results, and it's changed the direction we've taken in new UI tests for both TV devices and our website. Every surprise validates our approach, and it shows us a new way to delight and retain more customers.

    This interview was edited and condensed.


    June 16 2011

    Apple and a web-free cloud

    The nature of Apple's new iCloud service, announced at WWDC, is perhaps more interesting than it seems. It hints very firmly at the company's longer-term strategy: a strategy that doesn't involve the web.

    Apple will join Google and Amazon as a major player in cloud computing. The 200 million iTunes users Apple brings with it put the company on the same level as those other platforms. Despite that, the three companies obviously see the cloud in very different ways, and as a result have very different strategies.

    Amazon is the odd man out. Their cloud offering is bare metal, contrasting sharply with Google, and now Apple's, document-based model. To be fair, Amazon's target market is very different, with their focus on service providers. If you're a Valley start-up looking for storage and servers, you need look no further than Amazon's Web Services platform.

    Google and Apple's document model contrasts sharply with Amazon's service-stack approach. Both Google and Apple have attempted to abstract away things, like the file system, that stand between the end user and their data. An unsurprising difference, perhaps: Google and Apple are consumer-facing companies, marketing to the final end user rather than to the people and companies who aim to provide services for those users.

    But that's where the similarity between Google and Apple breaks down. Google sees the cloud as a way to deprecate general purpose computers in the hands of their users. In the same way that their new Chromium OS is built for the web, their cloud strategy is an attempt to move Google's users away from native applications so that their applications and data live in Google's cloud of services. Perhaps coincidentally, this also gives Google the chance to display and target their advertising even more cleverly.

    Apple's approach is almost entirely the opposite. They see the cloud as a way to keep the general purpose computer on life support for a few more years until touch-based hardware is really ready to take over. Apple's new cloud platform is built for native applications, in an attempt to pull users into native apps designed for their platforms. This method also gives Apple the chance to sell hardware, applications, and content that will lock users into their platform even more firmly. This is the basis of the often remarked "halo effect."

    At least on the surface, things seem to be simple — the "why" of the thing is not in question. However, it's what hasn't been said, at least openly, that raises the most interesting questions.

    Web 2.0 Summit, being held October 17-19 in San Francisco, will examine "The Data Frame" — focusing on the impact of data in today's networked economy.

    Save $300 on registration with the code RADAR

    Apple is fundamentally platform oriented. It's deep in the company's genetics. The ill-fated official cloning program of the mid-'90s, which was brought to a screeching halt by the return of Steve Jobs, seems to have instilled a deep fear inside the company about letting someone else control anything that might stand between the company and direct access to its customers.

    At least to me, nothing confirms that mindset more than Apple's return to designing its own processors in-house in Cupertino. Apple has a long history of using its own custom silicon, but it had been more than five years since the company had done so. With the move to Intel, the hope was to delegate nearly all of Apple's custom chip development to outside vendors. Unfortunately, that proved to be a stumbling block when Apple built the first-generation iPhone. The Samsung H1 processor in the original model wasn't quite what Apple wanted, even though it was what Apple had asked for, and I think the return to custom silicon probably brought a sigh of relief in some corners of the company.

    The link between custom chips and the cloud may seem tenuous at first glance, but I think Apple's return to designing their own silicon is telling. Almost as telling as spending half a billion dollars on a custom data center to support their new iCloud service. Both moves show the company is now committed more than ever to controlling the verticals. From the chips inside the devices to the data centers their customers' data ultimately resides on, Apple is committed to controlling the user experience, and the web has no place in that.

    You might argue that this is because the web is "too open" and that threatens Apple's platform. However, the continuing argument over openness, or lack thereof, isn't really relevant. Despite Google's protestations to the contrary, neither of these two companies is particularly open. The very document-based model they're both advocating in their cloud architectures precludes a truly open system. It's such an obvious straw man argument that it's not actually that interesting.

    What is interesting is that there was little or no mention of the web, or HTML5, during Apple's WWDC keynote. I think you'll see far less emphasis on HTML5 from Apple in the future, unless someone asks to do something with Apple's platform the company disapproves of, and then the traditional answer of "Well, you can always do that in HTML5" will be rolled out again.

    Apple has finally put their cards on the table. They have not yet bet the company on iCloud, but it's telling how deep the integration into both iOS and OS X appears to be. They have far too much invested in iCloud to let it fail, if only in reputation. Whether the first incarnation lives up to its promises out of the box remains to be seen, but success isn't out of the question. Despite MobileMe, Apple does know how to build large-scale, reliable backend services. You only have to look at the App Store itself for an example.

    So in the future don't be too surprised to see Apple integrate iCloud even more tightly with both iOS and OS X. For the same strategic reasons, don't be shocked to see more custom chips appear — I expect to see the arrival of ARM-based MacBooks and the transition away from Intel for Apple's laptops. That's because for Apple, it's all about the platform.


    May 20 2011

    Kindle 2012: Wish-list features for the next model

    This post originally appeared on Joe Wikert's Publishing 2020 Blog ("What Will the Kindle Platform Look Like in 2012?"). It's republished with permission.

    Amazon is well positioned to advance the Kindle platform much faster and further than they have in any 6-12 month period up to now. Here's where I hope they end up between now and the middle of next year:

    An insanely inexpensive entry-level device. Picture the current Kindle, but for $99 or less. How about $49? Better yet, how about free with a customer commitment to buy a minimum of X books in each of the next two years? Sounds a lot like a cell phone plan, doesn't it?

    Of course, if you're instead looking for something a bit more powerful and extendable, how about...

    An Android tablet device with an LCD screen. This one is the worst-kept secret since the iPhone 4. Amazon didn't launch that Appstore for Android because they want to push more cell phone sales. The only questions here are (1) when?, (2) how much?, and (3) how open? If they're smart the answers will be (1) any day, (2) $300 max, and (3) wide open.

    But if you can't stand the thought of reading long-form content on an LCD screen, then how about...

    That same Android tablet with a hybrid E Ink/LCD screen. That's right. A single device offering both the bright-light comfort of E Ink and the backlit option of LCD. Unfortunately for Amazon, it seems Apple is the one who's taking the lead on this front. Just search for the phrase "hybrid E Ink LCD display" and you get nothing but Apple news. That's a bummer since the first company to offer this solution could own the high end (and my loyalty). A fully open Android tablet with hybrid E Ink/LCD could easily command a $500 price or higher.

    Android Open, being held October 9-11 in San Francisco, is a big-tent meeting ground for app and game developers, carriers, chip manufacturers, content creators, OEMs, researchers, entrepreneurs, VCs, and business leaders.

    Save 20% on registration with the code AN11RAD

    That's all great for the hardware side, but what about the rest of the platform? Will Amazon really stick with the proprietary AZW file format that's based on mobi, even as the rest of the world embraces EPUB? For backwards compatibility reasons they probably have to stick with mobi. What a shame though. EPUB is where the action is and EPUB3 adds a great deal of functionality to enable much richer content than the Kindle supports.

    Expanding into a tablet with LCD display means the Kindle will no longer be hamstrung by the limits of E Ink. What a terrific opportunity Amazon has to offer (and encourage the development of) richer content than just words on the screen. But will they? I've been critical of the glacial pace at which Amazon implements Kindle enhancements, but I hope they take advantage of this opportunity early on.

    Regarding formats and flexibility, I'd love to see Amazon support mobi and EPUB. Better yet, if they have the confidence to provide an open device, how about letting it run any reader app from the competition? Let me put the Nook app on my Kindle device and may the best content provider win. Now that would be a bold move! After all, if I could own an Amazon device that lets me buy content from any store, why would I ever consider buying a device from anyone else?


    March 10 2011

    Flipboard and the end of "sourciness"

    Today's Flipboard update sports increased speed, an improved design layout, a partnership with Instagram, and the ability for users to search across several social platforms, including Flickr, Twitter and Facebook.

    The shiny new features are drawing plenty of attention, but the really cool thing here — and what likely will fuel Flipboard's success — is the platform's ability to seamlessly present the newly integrated social content without overly focusing on the original source or platform.

    In a recent interview, Craig Mod, designer and publisher at Flipboard, stressed the importance of putting the content first. By making content the focus of the presentation, users can experience a seamless stream of information rather than jumping from platform to platform:

    I think the thing that Flipboard is doing particularly well is that the integrations become seamless. One of the main goals at Flipboard that we really try to drive home is that [users] plug in these [integration] sources and we remove the "sourciness" from it.

    When I'm reading stuff in Flipboard, it's not like I'm engaging Twitter or engaging Facebook. I'm just aware of the great content that's being micro-curated by my social groups. There's an obfuscation of that social network layer — what we're building is a comfortable consumption layer, as fed by human curation.

    Web 2.0 Expo San Francisco 2011, being held March 28-31, will examine key pieces of the digital economy and the ways you can use important ideas for your own success.

    Save 20% on registration with the code WEBSF11RAD

    In the interview, Mod also discusses the most important elements of app design and how Flipboard is, at this point, a great big experiment. The full interview is available in the following video:


    March 04 2011

    Four short links: 4 March 2011

    1. JSARToolKit -- JavaScript port of the Flash AR Toolkit. I'm intrigued because the iPad 2 has a rear-facing camera and gyroscopes up the wazoo, and (of course) no Flash. (via Mike Shaver on Twitter)
    2. Android Patterns -- set of design patterns for Android apps. (via Josh Clark on Twitter)
    3. Preview of Up and Running with Node.js (O'Reilly) -- Tom Hughes-Croucher's new book in preview form. Just sorting out commenting now. (via Tom on Twitter)
    4. #Blue Opens for Business -- a web app that gets your text messages. You can reply, and there's an API to give other apps read/write access. Signs the text message is finally becoming a consumer platform.

    October 19 2010

    Pandora's ubiquitous platform play

    An informal survey of my home's device inventory reveals that Pandora is omnipresent. The music service is accessible through my various computers, an iPad, two iPods, an Android phone, and a Blu-ray player. The only reason I can't access Pandora through a DVR, stereo, distributed audio system, or car is because I don't have compatible devices (yet ...).

    I began mulling Pandora's presence in my life after interviewing Pandora CTO Tom Conrad (@tconrad) at last month's Web 2.0 Expo. During our chat, I asked which of Pandora's platforms is most popular. Here's what he said:

    It's about 50/50 between desktop and mobile. In fact we just slipped over into having more hours of listening consumed off of the PC than on. And the vast majority of off-PC listening is some kind of a mobile device. There's a big chunk of iPod Touch usage, and then there's a small but growing percentage that's consumer electronics devices. We've done probably a hundred or more partnerships for television and Blu-ray players, tabletop radios and stereos, and set-top boxes and even automobiles.

    I'm as enthusiastic about platforms as anyone. I believe digital content should be spread far and wide: websites, phones, tablets, ereaders, Facebook, Twitter, RSS -- get it all out there. But even my liberal platform perspective pales in comparison to Pandora's. They're going for all the platforms, not just the web-based ones.

    And this makes me wonder if there's a lesson here for content companies -- both those that create content and those that distribute it.

    Decoupling on a different level

    A lot of folks in the publishing world have grown comfortable decoupling content from containers. That's why CSS is an integral part of online content development and XML is a key tool in many production chains. But Pandora represents an entirely different type of decoupling: They're not just container-agnostic. They're device-agnostic. You want Pandora's content on a computer? Done. On a phone? No problem. On your stereo? On a TV? In a car? You bet.

    Pandora is a music service, so the expectation is that the music it provides will be available through all the channels where music is consumed -- not just the ones chained to a computer. Shouldn't this be the threshold for other types of content?

    Implementation of this type of distributed effort is tricky, but I think the mindset is what really matters here. If we accept that the old model of driving all the attention to specific platforms (e.g. a website, a book, etc.) has been replaced by serving audiences where they want to reside, then shouldn't content companies make their content accessible through all the appropriate channels and devices? Instead of hedging bets on specific devices or platforms, why not spread that bet across as many platforms as you can? Most will be misses, but some of the hits could come from channels you wouldn't expect.

    Other examples

    Pandora isn't the only content-centric company pursuing the ubiquitous path. In putting together this piece, I was reminded of three related efforts:

    • Netflix made a statement in 2009 when it switched its default tab from "Browse DVDs" to "Watch Instantly." The company has followed up by spreading its streaming service far and wide. In addition to standard browser-based access, the Netflix streaming library is now available through game consoles, TVs, mobile devices and other hardware. In many ways, Netflix is the video version of Pandora.
    • Amazon's Kindle platform extends across computers and devices. The Kindle hardware is simply part of a broader effort to sell ebooks through Amazon. How and where you access Amazon's offerings isn't the priority. (Barnes & Noble and Borders are following the Amazon playbook as well.)
    • UK news publisher The Guardian encourages developers to grab its content API -- which pumps out the full text of articles -- and transform/mash-up/repurpose as developers see fit. The only caveat: the Guardian reserves the right to put ads into its API content stream. This represents one possible way to maintain an advertising model while distributing content across platforms and devices.
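The Guardian's content API takes simple query parameters. Here is a minimal sketch, in Python, of building a search URL against it; the endpoint and the `q`, `api-key`, and `show-fields` parameter names reflect the public API as documented, but treat the exact names as assumptions and check the current API reference before relying on them:

```python
# Sketch: constructing a query URL for the Guardian's open content API.
# Endpoint and parameter names are assumptions based on the public docs.
from urllib.parse import urlencode

BASE_URL = "https://content.guardianapis.com/search"

def build_search_url(query, api_key, full_text=True):
    """Return a content API search URL for the given query string."""
    params = {"q": query, "api-key": api_key}
    if full_text:
        # 'show-fields=body' asks the API to include full article text.
        params["show-fields"] = "body"
    return BASE_URL + "?" + urlencode(params)

url = build_search_url("open platforms", api_key="test")
```

Fetching that URL (with a registered key) returns the article data developers are free to transform, mash up, or repurpose, subject to the advertising caveat above.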

    The defining characteristic of these efforts is commitment. These aren't tepid platform plays. The companies behind them are all in, which is necessary during this period of ambiguity and experimentation.

    Really, it comes down to this: The old methods of distribution don't mesh with the way audiences consume digital content, so a technique that relies on those old methods will either fail mightily, or -- perhaps even worse -- chug along aimlessly. A bold embrace of the digital landscape is key to seizing the digital opportunity.

    The full interview with Conrad is embedded below. His Web 2.0 Expo keynote is also worth checking out.


    April 26 2010

    Five reasons iPhone vs Android isn't Mac vs Windows


    Last week I presented at Stanford Graduate School of Business in a session on Mobile Computing called, "Creating Mobile Experiences: It's the Platform, Stupid."

    As the title underscores, I am a big believer that to understand what makes mobile tick, you need to look beyond a device's hardware shell (important though it is) and fully factor in the composite that includes its software and service layers, developer tools, and the ecosystem "surround." Successful platforms, after all, are more than the sum of their parts' propositions. They are not simply a bunch of dis-integrated ingredients.

    Having built hardware and software platforms since 1994, I have been led by this thought process to harp endlessly on why the iPhone platform (and its derivatives) is such a game changer. By contrast, I would argue that the long-term success of Android is anything but a given.

    It's human nature to look to the past in an attempt to understand the future. As such, I was unsurprised when I was asked during my presentation if Apple and iPhone vs Google and Android in mobile computing is "destined" to play out as Apple and the Mac did when confronted by Microsoft and Windows in the PC wars.

    As I have provided "big picture" analysis on this topic before in other posts (here and here), I want to share what I see as the five "little picture" reasons Apple vs Google isn't destined for the same outcome as Apple vs Microsoft:

    1. Retail Distribution: During the PC wars, everything came down to distribution and presence on limited retail shelf space. To be successful, you had to be on the shelves of retailers like ComputerLand, CompUSA, Circuit City, Office Depot and MicroAge. Given the wide variety of hardware OEMs making Wintel-based PCs, both shelf space for Macs and the technical know-how to sell them were severely limited, making a differentiation story like Apple's a hard sell. Today, Apple Stores provide a superior environment for consumers to experience hardware hands-on and get educated about the full breadth of Apple products. As an aside, this is a consumer touch point that Google absolutely lacks.

    2. Pricing overhang: A primary reason for Apple's crushing defeat by Microsoft was Apple's misguided notion that it could charge grossly higher prices for Mac products than for Windows-based PC offerings. Contrast this with the present, where Apple is consistent in its assertion that it cannot and will not leave pricing overhang (i.e., a sizable pricing gap between its products and the competition). This avoids the past dynamic where consumers saw picking Apple products as an either/or decision between price and premier experience. iPod, iPhone, iPod Touch and iPad have all followed this course.

    3. Developer ecosystem: It is a truism that in platform plays, he who wins the hearts and minds of developers wins the war. In the PC era, Apple forgot this, bungling badly by launching and abandoning technology initiatives, co-opting and competing with its developers, and routinely missing promised milestones. By contrast, Microsoft provided clear delineation points for developers, integrated core technologies across all products, and made sure developer tools readily supported these core initiatives. No less, Microsoft excelled at ensuring that the ecosystem made money.

      Lesson learned, Apple is moving on to the 4.0 stage of its mobile platform, has consistently hit promised milestones, has done yeoman's work evangelizing key technologies within the platform (and third-party developer creations: "There's an app for that"), and has developed multiple ways for developers to monetize their products. No less, it has offered 100 percent distribution to 85 million iPhones, iPod Touches and iPads, and one-click monetization via same. Nested in every one of these devices is a giant vending machine that is bottomless and never closes. By contrast, Google has taught consumers to expect free, and the Android Market is hobbled by poor discovery and clunky, inconsistent monetization workflows. Most damning, despite touted high third-party application volumes, there are (seemingly) no breakout third-party developer successes, despite Android having been around roughly two-thirds as long as the iPhone platform.

    4. Consumer technology adoption: During the PC era, large enterprises essentially dictated the industry winners by virtue of standardizing on a given vendor or type of solution. This created a winner-takes-all dynamic, inasmuch as consumers would ultimately buy the same solutions that had been blessed by large enterprises. By virtue of its conservative nature (remember the motto, "No one ever got fired for buying IBM"?), staid Microsoft always felt like a safer choice than crazy Apple. And besides, accounting could solicit bids from multiple hardware vendors, which they liked.

      By contrast, today's breakthrough adoption begins in the consumer realm and filters back to enterprises, not the other way around. This change deeply favors a consumer products and marketing force like Apple. While Google has done a reasonable job in the consumer arena, its approach is decidedly design-lite and techie-focused, not mass-market friendly.

    5. Microsoft-like resilience: I remember too well the Microsoft mantra "Embrace-Extend-Extinguish," which basically meant that any segment worth owning Microsoft would ultimately dominate by the 3.0 version of its competing product. Part of this was a by-product of the incredible "unfair advantages" Microsoft had built for itself by virtue of channeling items 1-4 above. Part of this was its ruthlessness in squeezing the lifeblood out of competitors through any means necessary. But, give Microsoft full props for manifesting an unyielding resilience to keep working its product offering and market assault until victory was at hand.

      Considering Apple's rise from the ashes to re-create a very profitable Mac business -- the dominance it has created with iPod and iTunes, the powerhouse iPhone and iPhone platform, and the ambitious and already well-regarded iPad -- does anyone wonder about Apple's resilience? By contrast, Google remains almost completely dependent upon search and advertising, despite launching many new product offerings and seriously pursuing M&A over the past several years. Arguably, Google's famously loosely coupled structure leads to a lot of seeds being planted, but so too, it seems, to a less than laser-like focus on seeing those seeds through to cultivation and full harvest. It raises the question: "Can a tiger change its stripes?"

    Obviously a lot can change in the next couple of years. It's easy to lose sight of the fact that the mobile industry that exists today looks nothing like the one that did before iPhone came on to the scene. Just ask Nokia.

    Clearly, the best case for Google with Android is that mobile technology and mobile platforms become sufficiently commoditized for its device OEM-centric, horizontal model to tip the balance in its favor.

    Never say never, but paint me a skeptic -- barring as-yet-unseen missteps by Apple.


    Check Mate: Understanding Apple's iPad

    The Google Android Rollout: Windows or Waterloo?

    Google Android, the Dawn of Mobile, and the Missing Leg

    April 13 2010

    The missing link in Twitter's ad program

    The "Twitter Business Model Watch" is officially (and thankfully) over.

    Announced today, Twitter's Promoted Tweets advertising program is supposed to help companies offset the limited window of attention that's common to the real-time web. The new ads, which have the look and behavior of normal tweets, will float like corks at the top of Twitter search results. The feature will roll out to user accounts down the line. (Advertising Age has an example of a Promoted Tweet.)

    I've run a few non-scientific tests on the shelf-life of a tweet, and most burn bright for three or four minutes and then fizzle into obscurity. It's tough to build a brand campaign around that. So the "cork" innovation is quite clever.

    Linking ads to search queries is smart, too; Google has demonstrated how well that can work. And I'm intrigued by the assortment of views, clicks and retweets that will influence Twitter's new "resonance" metric. Clearly, there's been a lot of deep thought in the halls of Twitter.

    But there's something missing here.

    If Twitter really wants to emulate Google, as this New York Times article suggests, it needs to empower the little guy. Not the little advertiser. The little user.

    The true genius of Google's AdSense program lies in its inclusiveness. It gives small web publishers a fast and easy revenue stream. Now, most only make a few bucks per month. I realize that. But that meager money represents a whole lot more than the "absolutely nothing" publishers were earning pre-AdSense. And if you've got SEO chops and a lot of luck, you might beat the average.

    It's early and I'm undoubtedly jumping the gun, but I'd love to see Twitter create something similar to AdSense. Perhaps opening up the Promoted Tweets dashboard to users and third-party developers so those folks can connect with advertisers. A three-month snapshot of any particular user's tweet stream reveals a lot about their favorite topics. Connecting relevant businesses and giving users a revenue share is a logical extension.

    Or, if direct payment feels wrong, let users post Promoted Tweets that float at the top of their own streams. A "custom cork" that gives precedence to a particular blog post or event or cause. If it resonates with the user base, it could then get wider play through the formal advertising/promotion program. That seems far more useful than bombarding the Twittersphere with the same desperate tweets over and over again.

    It's important to note that AdSense emerged years after Google's AdWords program launched. The AdWords-AdSense ecosystem we now know required a lot of time and iteration. And along those lines, Twitter is going out of its way to characterize Promoted Tweets as a work in progress. So perhaps expansion of the sort I'm hoping for is already on the drawing board.

    April 08 2010

    02mydafsoup-01, a Discussion and Lecture Online Platform, Founded in 2007 - is Charging since March 2010 for Access to New Complete Videostreams

    Since March 24, 2010, there is no longer a clear separating line between scientific information, PR, and charging: the newly introduced premium access is neither clearly defined nor transparent in its extent. What is charged, and for how long? Is there a timeline? From what point on are the videos free? The shift from free access to premium access was made mostly silently; intransparency rules, which is bad style toward the audience, composed in great part of an international community of netizens, mostly people who tried to support and build up a freely accessible network of good information sources, which was, by the way, also the site's own PR strategy over the last years. The way the financial side is now handled shows a lack of understanding of how an information society can be organized with digital technologies while still earning a financial return; unfortunately, it also proves a lack of honesty that, through systematically built-up intransparency, silently threatens access to reliable, web-based, quality information.

    [to whom it may concern - @sigalon02 @sigalon @sigaloninspired ]

    oanth - muc - 20100408

    February 11 2010

    The Most Efficient iPhone Developers

    Last week marked the first time the U.S. iTunes store had over 150,000 apps available. Close to 31,000 different developers (or "sellers") were responsible for those apps, with many offering one to five apps, while a few offered over a hundred different apps.

    Which developers consistently produce top-selling apps? I examined the percentage of apps produced by a developer that became best-sellers. To identify best-selling apps, I used the Top 100 Free and Top 100 Paid, and the recently launched Top 100 Grossing apps lists.

    I've noted that Games dominate these Top 100 lists, so it's no surprise that Game developers are among the most efficient producers of best-selling PAID apps. A pair of large Game developers (Gameloft and EA) offered over 40 different PAID apps over the last year, yet managed to have 3 out of 4 of their apps land on the Top 100 PAID apps list. The typical large developer only had 1 out of 10 apps (9%) appear on the Top 100 list. (NOTE: In each of the graphs below, I only show the 25 most efficient developers. The MEDIAN Efficiency is for all developers that had at least the stated number of apps during the period.)
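The efficiency measure used here, the share of a developer's apps that land on a Top 100 list, is straightforward to compute from catalog data. A minimal sketch; the developer names and app IDs below are hypothetical, not drawn from the actual iTunes data:

```python
# Sketch: per-developer "efficiency" = fraction of a developer's apps
# that appear on a best-seller list. All names below are made up.
from collections import Counter

def developer_efficiency(apps, top100, min_apps=1):
    """apps: list of (developer, app_id) pairs; top100: set of app_ids.
    Returns {developer: fraction of their apps on the Top 100 list},
    restricted to developers with at least min_apps apps."""
    totals = Counter(dev for dev, _ in apps)
    hits = Counter(dev for dev, app in apps if app in top100)
    return {dev: hits[dev] / n for dev, n in totals.items() if n >= min_apps}

# Hypothetical catalog: 4 apps from "gameco", 3 of them best-sellers.
apps = [("gameco", f"g{i}") for i in range(4)] + [("solo", "s0")]
top100 = {"g0", "g1", "g2"}
eff = developer_efficiency(apps, top100)
# eff["gameco"] == 0.75, eff["solo"] == 0.0
```

Raising `min_apps` mirrors the filtering above, where only developers with a minimum number of apps in the period are ranked.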


    Game publishers also dominated the list of most efficient FREE app developers, but a pair of (adult-oriented) Entertainment developers were among the most consistent producers of popular FREE apps. Also note that a different set of Game publishers are producing the best-selling FREE games:


    Apple launched the Top 100 Grossing apps list in September 2009, so there isn't as much historical data available. There are quite a few individual developers who've produced top-grossing apps. Given that a few small and successful Game developers were acquired last year, small outfits who consistently produce best-sellers are attractive acquisition targets. The graph below is limited to developers with at least 5 PAID apps since September 2009:


    [By using popularity rankings within a category, one can also identify the most efficient developers for individual iTunes app store categories.]

    As for embracing the iPad, several developers listed above are very enthusiastic about producing iPad apps. The interest is particularly high among iPhone Game developers††; I would be shocked if Games aren't a major component of the iPad app ecosystem.

    (†) For this post, a "large developer" will usually refer to one that offered over 10 Paid or Free apps from February 2009 to February 7, 2010.

    (††) See [1], [2], [3], [4]. The other interesting thing I noticed from the data is how many of the key iPhone Game publishers are located in the SF Bay Area.

    December 15 2009

    Is Facebook a Brand that You Can Trust?

    Isn't it about time that we started holding our online brands to the same standards that we hold our offline ones?

    Case in point, consider Facebook. In Facebook's relatively short life, there has been the Beacon Debacle (a 'social' advertising model that only Big Brother could love), the Scamville Furor (lead gen scams around social gaming) and now, the Privacy Putsch.

    By Privacy Putsch, I am referring to Facebook's new 'Privacy' Settings, which unilaterally imposed on all Facebook users a radically different set of privacy defaults than had been in place during the company's build-up to its current 350-million-strong user base.

    To put a bow around this one, the EFF (Electronic Frontier Foundation), not exactly a bastion of radicalism, concluded after comparing Facebook's new privacy settings with the privacy settings that they replaced:

    "Our conclusion? These new 'privacy' changes are clearly intended to push Facebook users to publicly share even more information than before. Even worse, the changes will actually reduce the amount of control that users have over some of their personal data." EFF adds that, "The privacy 'transition tool' that guides users through the configuration will 'recommend' — preselect by default — the setting to share the content they post to Facebook, such as status messages and wall posts, with everyone on the Internet, even though the default privacy level that those users had accepted previously was limited to 'Your Networks and Friends' on Facebook."

    Ruminate on what that means for a moment. You are a parent, and you regularly upload photos of your kids to Facebook, blithely assuming that they are free from the roaming eyes of some sexual predator. While previously these photos were viewable only by the Friends and Networks that you explicitly connected with, now, without consulting you, Facebook has made your son or daughter's pictures readily accessible to friend or felon alike.

    Or, perhaps you are a typical 'thirty something,' sharing your weekend escapades with what you thought was a bounded social circle. Now, your current or prospective employer is just a click away from concluding that, perhaps trusting the company's marketing department to you is not such a good idea after all.

    So as not to split hairs, let's just agree that some potential existed for either of these scenarios to have occurred under the old privacy model, and it's also worth noting that, if you actually understand what these new settings mean for your world, you can reverse (many of) these settings.

    But, that's beside the point. Why? Because three separate instances now (i.e., Beacon, Scamville and Privacy Settings) have underscored a tendency of Facebook not only to make fairly key strategic decisions without first engaging its user base in a bilateral dialog, but to make decisions that are decidedly at odds with consumer protection and interest.

    On a human level, one can look at the new privacy changes as akin to going to sleep at night with the assumption that the various doors and windows of your house were locked, only to wake up and realize that while you were sleeping, the 'locksmith' decided that you/they were better served if the doors were left unlocked.

    Upon waking up to discover this unilateral decision, would you be pissed? Would you trust the locksmith to keep you safe at night going forward?

    One last example before I move on, here's another excerpt from EFF's analysis on the 'Good, Bad and Ugly' of the new privacy settings:

    The Ugly: Information That You Used to Control Is Now Treated as "Publicly Available," and You Can't Opt Out of The "Sharing" of Your Information with Facebook Apps.

    Specifically, under the new model, Facebook treats information, such as friends lists, your name, profile picture, current city, gender, networks, and the pages that you are a 'fan' of — as 'publicly available information,' a new definition of heretofore personal information that Facebook held off disclosing in any material way -- until the very day it was forcing the new change on users.

    Blogger Jason Calacanis puts this policy in perspective in his excellent post, 'Is Facebook unethical, clueless or unlucky?'

    I'm sorry, what the frack just happened? I turned over my friend list, photos and status updates to everyone in the world? Why on earth would anyone do that with their Facebook page? The entire purpose of Facebook since inception has been to share your information with a small group of people in your private network. Everyone knows that and everyone expects that. In fact, Facebook's success is largely based on the fact that people feel safe putting their private information on Facebook.

    Do with this information what you will (forewarned is forearmed, after all), but me personally, after reviewing each Facebook photo album of mine with personal, family and/or friend oriented photos within it, I couldn't help but feel that Facebook should be given a new name: Faceless Betrayal.

    Some Relativity from the World of Offline Brands: Perrier and Tylenol

    I read an interesting stat in Fast Company about the US bottled water industry: Americans now spend more on bottled water than they do on iPods or movie tickets -- $16 billion.

    Now, think back to 1989. Perrier Water was the imported water market leader in North America, with an eponymous water product marketed as 'naturally sparkling' water sourced from a mineral spring in the south of France.

    But then, Perrier ran into serious trouble when the noxious, cancer-causing agent, Benzene, was found in the water that Perrier sold in the United States.

    Seeking damage control, the company vacillated between silence and evasiveness, initially stating that the problem was an isolated one when, in actuality, it turned out to be a global issue.

    Perrier's ultimate mistake, though, was responding to a serious brand integrity crisis in a less than above-board, consultative fashion with its customer base.

    The net effect is that, despite a massive global boom in bottled water consumption, a once-trusted, dominant brand, in essence, collapsed. In the end, Perrier's sales fell by half; the company was later sold, and the brand never recovered.

    By contrast, when seven people died after taking cyanide-laced Extra-Strength Tylenol capsules, Johnson & Johnson mounted a massive education and outreach effort, culminating in the recall of 31 million bottles of the product, at a then-cost of $100M. Like Perrier, Tylenol's market share initially cratered.

    But, because the company had been proactive, public and always acting in the best interests of its consumers, within a year, its share had rebounded dramatically, and within a few years, had come all the way back.

    The moral of the story is that two companies faced crises that threatened to kneecap their brand, but only one maintained a consistent focus on living up to the trust that its customers had put in the brand. Tellingly, the market rewarded the brand that was truest to its customers (Tylenol).

    Netting it out: In light of the company's past consumer-unfriendly initiatives, Facebook's privacy settings change should serve as a wake up call to its 350M users that they are entrusting a Fox to guard the Hen House; a truth that is destined to erupt into a crisis for the company. Will they handle it like Tylenol or Perrier?

    Related Post:
    Why Facebook's Terms of Service Change is Much Ado About Nothing

    December 14 2009

    Apps Per Seller Across the US iTunes Categories

    Measured in terms of number of unique apps, the Top 5 categories in the U.S. app store have been Games, Books, Entertainment, Travel and Utilities. But comparing categories in terms of number of apps doesn't capture the challenge of developing applications in different categories. As I noted in an earlier post, it's much easier to develop a Book app than an interactive game.

    One crude measure of the relative complexity of developing apps across categories is the number of apps per seller. The Top 5 categories in Nov/2009 were Books (17 apps per seller), Travel (6 apps per seller), Education (4 per seller), and Reference and Sports (3 per seller each). There were also 3 apps per seller in the Games and Entertainment categories in Nov/2009:
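The apps-per-seller ratio is simple to compute from a catalog dump; here is a minimal sketch, with made-up seller names and counts chosen to mirror the Books and Games figures above:

```python
# Sketch: apps per seller by category. Catalog records are hypothetical.
from collections import defaultdict

def apps_per_seller(catalog):
    """catalog: iterable of (category, seller, app_id) records.
    Returns {category: total apps / number of unique sellers}."""
    app_counts = defaultdict(int)
    sellers = defaultdict(set)
    for category, seller, _app_id in catalog:
        app_counts[category] += 1
        sellers[category].add(seller)
    return {c: app_counts[c] / len(sellers[c]) for c in app_counts}

# Hypothetical: one Books seller with 17 apps, two Games sellers with 3 each.
catalog = [("Books", "pub1", f"b{i}") for i in range(17)]
catalog += [("Games", s, f"{s}-{i}") for s in ("dev1", "dev2") for i in range(3)]
ratios = apps_per_seller(catalog)
# ratios["Books"] == 17.0, ratios["Games"] == 3.0
```

As the post notes, the measure is crude: it says nothing about the effort behind each app, only how prolific the average seller in a category is.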

    (†) Data for this post runs through 12/10/2009 and covers the U.S. iTunes App Store.

    October 13 2008

    The Financial Crisis: Where Do We Go From Here?
    From: uchannel. Added: October 13, 2008. Nouriel Roubini, Associate Professor of Economics and International Business, New York University; Brad W. Setser, Fellow for Geoeconomics, Council on Foreign Relations; Benn Steil, Director of International Economics, Council on Foreign Relations. Presider: Mort Zuckerman, Editor in Chief, U.S. News & World Report (Sep 25, 2008 at the Council on Foreign Relations)