September 07 2013

On security and passive surveillance - IETF

On security and passive #surveillance - #IETF
http://www.ietf.org/blog/2013/09/security-and-pervasive-monitoring

Faced with the revelations of the NSA's mass surveillance, the IETF chair, Jari Arkko, calls on IETF participants to build a more secure internet and to improve the internet's protocols. "We need to take a critical look" and improve the security of the internet's protocols, the chair declares, calling on internet engineers to work on this at their next meeting. The internet must move to "security" by default. Tags: (...)

#securité #protocole #standards #infrastructure

September 05 2013

N.S.A. Foils Much Internet Encryption - NYTimes.com

N.S.A. Foils Much Internet Encryption - NYTimes.com
http://www.nytimes.com/2013/09/06/us/nsa-foils-much-internet-encryption.html?pagewanted=all

#NSA documents show that the agency maintains an internal database of #encryption keys for specific commercial products, called a Key Provisioning Service, which can automatically decode many messages. If the necessary key is not in the collection, a request goes to the separate Key Recovery Service, which tries to obtain it.

How keys are acquired is shrouded in secrecy, but independent cryptographers say many are probably collected by hacking into companies’ computer servers, where they are stored. To keep such methods secret, the N.S.A. shares decrypted messages with other agencies only if the keys could have been acquired through legal means. “Approval to release to non-Sigint agencies,” a GCHQ document says, “will depend on there being a proven non-Sigint method of acquiring keys.”

Simultaneously, the N.S.A. has been deliberately weakening the international encryption standards adopted by developers. One goal in the agency’s 2013 budget request was to “influence policies, standards and specifications for commercial public key technologies,” the most common encryption method.

Cryptographers have long suspected that the agency planted vulnerabilities in a standard adopted in 2006 by the National Institute of #Standards and Technology, the United States’ encryption standards body, and later by the International Organization for Standardization, which has 163 countries as members.

Classified N.S.A. memos appear to confirm that the fatal weakness, discovered by two Microsoft cryptographers in 2007, was engineered by the agency. The N.S.A. wrote the standard and aggressively pushed it on the international group, privately calling the effort “a challenge in finesse.”

March 22 2013

Four short links: 22 March 2013

  1. Defend the Open Web: Keep DRM Out of W3C Standards (EFF) — W3C is there to create comprehensible, publicly-implementable standards that will guarantee interoperability, not to facilitate an explosion of new mutually-incompatible software and of sites and services that can only be accessed by particular devices or applications. See also Ian Hickson on the subject. (via BoingBoing)
  2. Inside the South Korean Cyber Attack (Ars Technica) — about thirty minutes after the broadcasters’ networks went down, the network of Korea Gas Corporation also suffered a roughly two-hour outage, as all 10 of its routed networks apparently went offline. Three of Shinhan Bank’s networks dropped offline as well [...] Given the relative simplicity of the code (despite its Roman military references), the malware could have been written by anyone.
  3. BotNet Racking Up Ad Impressions -- observed the Chameleon botnet targeting a cluster of at least 202 websites. 14 billion ad impressions are served across these 202 websites per month. The botnet accounts for at least 9 billion of these ad impressions. At least 7 million distinct ad-exchange cookies are associated with the botnet per month. Advertisers are currently paying $0.69 CPM on average to serve display ad impressions to the botnet.
  4. Legal Manual for Cyberwar (Washington Post) — the main reason I care so much about security is that the US is in the middle of a CyberCommie scare. Politicians and bureaucrats so fear red teams under the bed that they’re clamouring for legal and contra methods to retaliate, and then blindly use those methods on domestic disobedience and even good citizenship. The parallels with the 50s and McCarthy are becoming painfully clear: we’re in for another witch-hunting time when we ruin good people (and bad) because a new type of inter-state hostility has created paranoia and distrust of the unknown. “Are you now, or have you ever been, a member of the nmap team?”

March 13 2013

Four short links: 13 March 2013

  1. What Tim Berners-Lee Doesn’t Know About HTML DRM (Guardian) — Cory Doctorow lays it out straight. HTML DRM is a bad idea, no two ways. The future of the Web is the future of the world, because everything we do today involves the net and everything we’ll do tomorrow will require it. Now it proposes to sell out that trust, on the grounds that Big Content will lock up its “content” in Flash if it doesn’t get a veto over Web-innovation. [...] The W3C has a duty to send the DRM-peddlers packing, just as the US courts did in the case of digital TV.
  2. Visualizing the Topical Structure of the Medical Sciences: A Self-Organizing Map Approach (PLOSone) — a high-resolution visualization of the medical knowledge domain using the self-organizing map (SOM) method, based on a corpus of over two million publications.
  3. What Teens Get About The Internet That Parents Don’t (The Atlantic) — the Internet has been a lifeline for self-directed learning and connection to peers. In our research, we found that parents more often than not have a negative view of the role of the Internet in learning, but young people almost always have a positive one. (via Clive Thompson)
  4. Portable C64 — beautiful piece of C64 hardware hacking to embed a screen and battery in it. (via Hackaday)

February 10 2012

Developer Week in Review: A pause to consider patents

This week, as I do occasionally, I want to focus in on one specific topic.

For regular readers, the topic of technology innovation and patents is nothing new; it's a problem that is frequently covered in this space. But this week, there were two important occurrences in the world of intellectual property that highlight just what a mess we've gotten ourselves into.

The first is an unexpected turn of events down in scenic Tyler, Texas, home of the world's most litigant-friendly patent infringement juries. How friendly? Well, biased enough that Eolas relocated its corporate HQ to Tyler just to be close to the courts. Eolas, as you may recall, is the pesky gadfly that's been buzzing around giants such as Microsoft for years, claiming broad patents over, well, the entire Internet. Rather than continuing a costly court battle it might lose in the end, the House of Redmond settled, and a host of other high-tech cash-cows followed suit.

As Eolas continued to threaten to sue the pants off everyone, a ragtag group of plucky companies like Adobe Systems, Google, Yahoo, Apple, eBay and Amazon.com said enough is enough. And this week in Tyler, following testimony by luminaries such as Sir Tim Berners-Lee, a jury agreed, invalidating the infamous '906 patent (US Patent 5,838,906).

You'd think that this would make Google, one of the main defendants, a big hero and confirm its status as Not Evil. But in the very same week, Google refused to budge on its licensing requirements for patents it acquired from Motorola, patents that are required for any company that wants to play in the 3G cell phone space.

When a standard is adopted by governmental bodies (such as the FCC) or standards-setting bodies like IEEE, it should ideally be free of any intellectual property restraints. After all, that's the purpose of a standard: to provide a common framework that competing companies can use to produce interoperable products. Standards such as GSM and CDMA are why you can use your iPhone in Europe (if you're rich).

The problem is, most modern standards come with a bundle of patents attached to them. In the 3G space, Google (through the Motorola acquisition) and Samsung own a lot of them. As part of the standard-making process, these companies are supposed to agree to offer use of the patents under Fair, Reasonable and Non-Discriminatory (FRAND) license terms. The idea is that all companies using the standard pay the same license fees to the patent holders, so no one gets an advantage. The problem is, who decides what is Fair and Reasonable?

This is especially pernicious when the company licensing the patent is also a competitor in the space. Obviously, Samsung doesn't pay itself a license fee to use its patent, so it doesn't matter how expensive it makes the fee, as long as Samsung doesn't incur the wrath of the standard-setting body. In the case of Motorola/Google, the license fee is set at 2.25% of the total selling price of the phone (which would come to around $13.50 on a $600 iPhone). Apple, et al., are screaming to the moon that that kind of licensing is not in the spirit of FRAND, but it's up to groups such as the European standards body, ETSI, to determine if the patent holders are really playing fair.

Of course, Google has fallen victim to the same issues. Although it doesn't pay the piper directly, phone manufacturers using Android end up reportedly paying $5 per phone to Microsoft to avoid patent issues. It's worth noting, however, that at least Microsoft is using software-related patents that it claims Android infringes, not patents directly related to the underlying standards used by the phone.

There's a simple solution to this problem, of course, which is not to allow patent-encumbered technologies to become standards. The software world has (mostly) been free of this kind of nonsense, and it's a good thing. Can you imagine having to pay a license fee to use SOAP, or AJAX? The worrisome thing is that this could become the model used for software patents, and it would basically kill smaller companies trying to be innovative.

Oh, and before you count Eolas out of the game, remember that this is just a single trial it lost. It can try again with another jury and another set of companies. Unless the USPTO invalidates the underlying patent, Eolas is still out there, waiting to strike.

Strata 2012 — The 2012 Strata Conference, being held Feb. 28-March 1 in Santa Clara, Calif., will offer three full days of hands-on data training and information-rich sessions. Strata brings together the people, tools, and technologies you need to make data work.

Save 20% on registration with the code RADAR20

Got news?

Please send tips and leads here.


September 19 2011

Promoting Open Source Software in Government: The Challenges of Motivation and Follow-Through


The Journal of Information Technology & Politics has just published a special issue on open source software. My article "Promoting Open Source Software in Government: The Challenges of Motivation and Follow-Through" appears in this issue, and the publisher has given me permission to put a prepublication draft online.

The main subject of the article is the battle between the Open Document Format (ODF) and Microsoft's Office Open XML (OOXML) standard, which might sound like a quaint echo of a bygone era but is still a critical issue in open government. But during the time my article developed, I saw new trends in government procurement--such as the Apps for Democracy challenge and the data.gov site--and incorporated some of the potential they represent into the piece.

Working with the publisher Taylor & Francis was enriching. The prepublication draft I gave them ranged far and wide among topics, and although these topics pleased the peer reviewers, my style did not. They demanded a much more rigorous accounting of theses and their justification. In response to their critique, I shortened the article a lot and oriented it around the four main criteria for successful adoption of open source by government agencies:

  1. An external trigger, such as a deadline for upgrading existing software

  2. An emphasis on strategic goals, rather than a naive focus on cost

  3. A principled commitment to open source among managers and IT staff responsible for making the transition, accompanied by the technical sophistication and creativity to implement an open source strategy

  4. High-level support at the policy-making level, such as the legislature or city council

Whenever I tell colleagues about the special issue on open source, they ask whether it's available under a Creative Commons license, or at least online for free download. This was also the first issue I raised with the editor as soon as my article was accepted, and he raised it with the publisher, but they decided to stick to their usual licensing policies. Allowing authors to put up a prepublication draft is adroit marketing, but also represents a pretty open policy as academic journals go.

On the one hand, I see the decision to leave the articles under a conventional license as organizational inertia, and a form of inertia I can sympathize with. It's hard to make an exception to one's business model and legal process for a single issue of a journal. Moreover, working as I do for a publisher, I feel strongly that each publisher should make the licensing and distribution choices that it feels are right for it.

But reflecting on the academic review process I had just undergone, I realized that the licensing choice reflected the significant difference between my attitude toward the topic and the attitude taken by academics who run journals. I have been "embedded" in free software communities for years and see my writing as an emerging distillation of what they have taught me. To people like me who promote open information, making our papers open is a logical expression of the values we're promoting in writing the papers.

But the academic approach is much more stand-offish. An anthropologist doesn't feel that he needs to invoke tribal spirits before writing about the tribe's rituals to invoke spirits, nor does a political scientist feel it necessary to organize a worker's revolution in order to write about Marxism. And having outsiders critique practices is valuable. I value the process that improved my paper.

But something special happens when an academic produces insights from the midst of a community or a movement. It's like illuminating a light emitting diode instead of just "shining light on a subject." I recently finished the book by Henry Jenkins, Fans, Bloggers, and Gamers: Media Consumers in a Digital Age, which hammers on this issue. As with his better-known book Convergence Culture, Jenkins is convinced that research about popular culture is uniquely informed by participating in fan communities. These communities don't waste much attention on licenses and copyrights. They aren't merely fawning enthusiasts, either--they critique the culture roughly and demandingly. I wonder what other disciplines could take from Jenkins.

April 13 2011

What VMware's Cloud Foundry announcement is about

I chatted today about VMware's Cloud Foundry with Roger Bodamer, the EVP of products and technology at 10Gen. 10Gen's MongoDB is one of three back-ends (along with MySQL and Redis) supported from the start by Cloud Foundry.

If I understand Cloud Foundry and VMware's declared "Open PaaS" strategy, it should fill a gap in services. Suppose you are a developer who wants to loosen the bonds between your programs and the hardware they run on, for the sake of flexibility, fast ramp-up, or cost savings. Your choices are:

  • An IaaS (Infrastructure as a Service) product, which hands you an emulation of bare metal where you run an appliance (which you may need to build up yourself) combining an operating system, application, and related services such as DNS, firewall, and a database.

  • You can implement IaaS on your own hardware using a virtualization solution such as VMware's products, Azure, Eucalyptus, or RPM. Alternatively, you can rent space on a service such as Amazon's EC2 or Rackspace.

  • A PaaS (Platform as a Service) product, which operates at a much higher level. A vendor, such as VMware with Cloud Foundry, manages the operating system, language runtime, and attached services for you, so you deploy only your application code.

By now, the popular APIs for IaaS have been satisfactorily emulated so that you can move your application fairly easily from one vendor to another. Some APIs, notably OpenStack, were designed explicitly to eliminate the friction of moving an app and increase the competition in the IaaS space.
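
To make that portability concrete, here is a minimal sketch using the Python boto library, which speaks the EC2-style API that several IaaS clouds emulate. The credentials, hostname, AMI ID, and the Eucalyptus-style port and path are placeholders and assumptions for illustration, not details taken from the article.

    import boto
    from boto.ec2.regioninfo import RegionInfo

    def connect_private(endpoint_host):
        # Point boto's EC2 client at a private, EC2-compatible endpoint.
        # Port 8773 and /services/Eucalyptus follow a common Eucalyptus-style
        # convention; adjust for whatever the private cloud actually exposes.
        region = RegionInfo(name="private", endpoint=endpoint_host)
        return boto.connect_ec2(
            aws_access_key_id="YOUR_ACCESS_KEY",       # placeholder credentials
            aws_secret_access_key="YOUR_SECRET_KEY",
            region=region,
            port=8773,
            path="/services/Eucalyptus",
            is_secure=False)

    # Against Amazon itself, no endpoint override is needed.
    public_conn = boto.connect_ec2(aws_access_key_id="YOUR_ACCESS_KEY",
                                   aws_secret_access_key="YOUR_SECRET_KEY")

    # Against a private cloud exposing the same API (hostname is a placeholder).
    private_conn = connect_private("cloud.example.internal")

    # The application-level call is identical against either endpoint.
    reservation = public_conn.run_instances("ami-12345678",
                                            instance_type="m1.small")
    print(reservation.instances[0].id)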

Until now, the PaaS situation has been much more closed. VMware claims to do for PaaS what Eucalyptus and OpenStack want to do for IaaS. VMware has a conventional cloud service called Cloud Foundry, but will offer the code under an open source license. RightScale has already announced that you can use it to run a Cloud Foundry application on EC2. And a large site could run Cloud Foundry on its own hardware, just as it runs VMware.

Cloud Foundry is aggressively open middleware, offering a flexible way to administer applications with a variety of options on the top and bottom. As mentioned already, you can interact with MongoDB, MySQL, or Redis as your storage. (However, you have to use the particular API offered by each back-end; there is no common Cloud Foundry interface that can be translated to the chosen back end.) You can use Spring, Rails, or Node.js as your programming environment.
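
As a concrete illustration of that "no common interface" point, here is a rough sketch of how a deployed application might dig its MongoDB credentials out of the environment. It assumes the VCAP_SERVICES environment variable and JSON layout that early Cloud Foundry used to describe bound services; the service label and credential field names are assumptions, and Python is used here only to keep the sketches in one language (the launch runtimes were Spring, Rails, and Node.js).

    import json
    import os

    import pymongo  # assumes the standard MongoDB driver is available

    def bound_mongodb_credentials():
        # Cloud Foundry exposed bound services to the app as JSON in the
        # VCAP_SERVICES environment variable, keyed by a service label
        # (something like "mongodb-1.8" in early releases).
        services = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
        for label, instances in services.items():
            if label.startswith("mongodb"):
                return instances[0]["credentials"]
        return None

    creds = bound_mongodb_credentials()
    if creds is None:
        print("no MongoDB service bound; perhaps running outside Cloud Foundry")
    else:
        # The credential field names ("hostname", "port", "db") are assumptions
        # about the MongoDB credential block; MySQL and Redis each describe
        # themselves differently, which is exactly the point made above.
        conn = pymongo.Connection(creds["hostname"], creds["port"])
        db = conn[creds["db"]]
        print("connected to", db.name)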

So open source Cloud Foundry may prove to be a step toward more openness in the cloud arena, as many people call for and I analyzed in a series of articles last year. VMware will, if the gamble pays off, gain more customers by hedging against lock-in and will sell its tools to those who host PaaS on their own servers. The success of the effort will depend on the robustness of the solution, ease of management, and the rate of adoption by programmers and sites.

March 16 2011

4G is a moving target

In the alphabet soup of telecom acronyms, 4G (for "fourth generation") seems like one everyone should be able to agree on. Unfortunately it's turning into a vague and ill-defined marketing term.

The "generations" in cellular standards refer to dramatic shifts in mobile technology infrastructure that are generally not backwards-compatible, use new frequency bands, have significantly increased data rates, and have occurred in roughly 10-year increments. First-generation mobile networks were the original analog cellular technology deployed in the early '80s, second-generation refers to the transition to digital networks in the early '90s, and third-generation was the circa-2001 shift to spread-spectrum transmission that was considered viable for multimedia applications.

So far so good, but the leap to fourth generation is getting confusing. The International Telecommunication Union Radiocommunication sector (ITU-R) is the body that manages and develops standards for radio-frequency spectrum, and as such is the most official arbiter of concepts like "4G." But the ITU can't make up its mind.

Originally the ITU-R defined 4G as technologies that could support downstream data rates of 100 Mbps in high-mobility situations and 1 Gbps in limited mobility scenarios. The next versions of both LTE and WiMax networks (LTE-Advanced and WirelessMAN-Advanced/802.16m) promise these kinds of speeds, but neither is expected to be available commercially until 2014 at the earliest. However "4G" has been widely used by vendors to label other technologies that don't meet the ITU-R definition, including the original versions of LTE and WiMax, and that's where things start getting messy.

Apparently bowing to pressure from carriers on this topic, the ITU-R issued a statement last December backing down from the requirements of its original definition of 4G. Stephen Lawson wrote in Network World that the original ITU-R definition caused chaos by stepping on the vendors who had already been using the term:

After setting off a marketing free-for-all by effectively declaring that only future versions of LTE and WiMax will be 4G, the International Telecommunication Union appears to have opened its doors and let the party come inside.

Where 2.0: 2011, being held April 19-21 in Santa Clara, Calif., will explore the intersection of location technologies and trends in software development, business strategies, and marketing.

Save 25% on registration with the code WHR11RAD


But loosening the definition of what can be called 4G hasn't pleased everyone and has added to the sense of confusion over what the term really means. Besides being applied to what many consider 3G technologies like LTE and WiMax, the 4G label has also been adopted by T-Mobile for its HSPA+ technology, which can support up to 42 Mbps downstream.


In a scathing PCMag article about AT&T's slippery usage of the term, Sascha Segan writes that he wishes there were some "4G police" to enforce the matter. PCMag recently tested some of AT&T's supposedly 4G devices and found them underperforming their 3G counterparts.

AT&T has reached a new low, though, by delivering "4G" devices that are actually slower than the carrier's own 3G devices. Yes, you read it correctly: for AT&T, 4G is a step backwards ... AT&T recently confirmed it is crippling the upload speeds on its two 4G phones, the Motorola Atrix and HTC Inspire.

AT&T has responded that they're still in testing mode on these higher-bandwidth capabilities and will be enabling advanced features at some point in the future. But they don't seem to have any issue calling them "4G" today. Not all of their customers are happy about this. Zack Nebbaki has launched an online petition that has more than 900 signatures, asking AT&T to explain (and presumably stop) the bandwidth-capping they appear to be doing to the Motorola Atrix 4G phones.

Bandwidth-capping seems to be becoming more widespread. Clearwire is currently being sued for throttling the download speeds of its WiMax offering while advertising "no limits on data usage." (And the trend isn't restricted to mobile networks: AT&T recently announced that it will institute data-capping for its DSL and U-Verse services.)

Regardless of the terms the industry uses (or abuses) or whatever bandwidth-limiting practices are put in place, it's the users' perceptions that will determine the success of these future technologies. That's what at least one mobile carrier says it's focusing on. When asked by Network World about the controversy around their use of the term 4G, T-Mobile's senior director of engineering Mark McDiarmid said, "For us, 4G is really about the consumer experience."

March 01 2011

Four short links: 1 March 2011

  1. Implementing Open Standards in Open Source (Larry Rosen) -- Companies try to control specifications because they want to control software that implements those specifications. This is often incompatible with the freedom promised by open source principles that allow anyone to create and distribute copies and derivative works without restriction. This article explores ways that are available to compromise that incompatibility and to make open standards work for open source. (via Sam Ruby)
  2. Easy WebSocket -- simple Javascript client for WebSockets. (via Lucas Gonze on Twitter)
  3. Essential Javascript Design Patterns -- updated book of Javascript design patterns.
  4. Social Mechanics (Raph Koster) -- a taxonomy of social mechanics in games. See also Alice Taylor's notes. (via BoingBoing)

February 18 2011

An era in which to curate skills: report from Tools of Change conference

Three days of intensive
discussion about the current state of publishing
wrapped up last
night in New York City. Let's take a tour from start to end.

If I were to draw one point from the first session I attended, I would
say that metadata is a key competitive point for publishers. First,
facts such as who has reviewed a book, how many editions it has been
through, and other such things we call "metadata" can be valuable to
readers and institutions you want to sell content to. Second, it can
be valuable to you internally as part of curation (which I'll get to
later). Basically, metadata makes content far more useful. But it's
so tedious to add and to collate with other content's metadata that
few people out in the field bother to add it. Publishers are
well-placed to do it because they have the resources to pay people for
that unappreciated task.

If I were to leap to the other end of the conference and draw one
point from the closing keynotes, I would say that the key to survival
is to start with the determination that you're going to win, and to
derive your strategy from that. The closing keynoters offered a couple
strategies along those lines.


Kathy Sierra
claimed she started her Head First
series with no vision loftier than to make money and beat the
competition. The key to sales, as she has often explained in her
talks and articles on "Creating Passionate Users," is not to promote
the author or the topic but to promote what the reader could become.

Ben Huh of I Can Has Cheezburger
said that one must plan a book to be a bestseller, the
way car or appliance manufacturers plan to meet the demands of their
buyers.

Thus passed the start and end of this conference. A lot happened in
between. I'll cover a few topics in this blog.

Skills for a future publishing

There clearly is an undercurrent of worry, if not actual panic, in
publishing. We see what is happening to newspapers, we watch our
markets shrink like those of the music and television industries, and
we check our own balance sheets as Borders Books slides into
bankruptcy. I cannot remember another conference where I heard, as I
did this week, the leader of a major corporation air doubts from the
podium about the future of her company and her career.

Many speakers combatted this sense of helplessness, of course, but
their advice often came across as, "Everything is going haywire and
you can't possibly imagine what the field will look like in a few
years, so just hang on and go with the flow. And by the way,
completely overturn your workflows and revamp your skill sets."

Nevertheless, I repeatedly heard references to four classic skills
that still rule in the field of publishing. These skills were always
important and will remain important, but they have to shift and in
some ways to merge.

Two of these skills are research and sales. Although one was usually
expected to do research on the market and topic before writing and do
sales afterward, the talks by Sierra, Huh, and others suggested that
these are continuous activities, and hard to separate. The big buzz in
all the content industries is about getting closer to one's audience.
There is never a start and end to the process.

The consensus is that casual exploitation of social
networking--sending out postings and updates and trying to chat with
readers online--won't sell your content. Your readers are a market and
must be researched like one: using surveys, statistical analysis, and
so on. This news can be a relief to the thousands of authors who feel
guilty (and perhaps are made to feel guilty by their publishers)
because they don't get pleasure from reporting things on Facebook
ranging from the breakfast cereal they ate to their latest brilliant
insight. But the question of how to bring one's audience into one's
project--a topic I'll refer to as crowdsourcing and cover later--is a
difficult one.

Authoring and curation are even more fundamental skills. Curation has
traditionally meant just making sure assets are safe, uncorrupted, and
ready for use, but it has broadened (particularly in the keynote
by Steve Rosenbaum) to include gathering information, filtering
and tagging it, and generally understanding what's useful to different
audiences. This has always been a publisher's role. In the age of
abundant digital content, the gathering and filtering functions can
dwarf the editorial side of publishing. Thus, although Thomson Reuters
has enormous resources of its own, it also generates value by
tracking the assets of many other organizations.

When working with other people's material, curation, authoring, and
editing all start to merge. Perhaps organizing other people's work
into a meaningful sequence is as valuable as authoring one's own. In
short, curation adds value in ways that are different from authoring
but increasingly valid.

Capitalizing on the old

I am not ready to change my business cards from saying "Editor" to
"Curator" because that would make it look like I'm presiding over a
museum. Indeed, I fear that many publishers are dragged down by their
legacy holdings, which may go back a hundred years or more. I talked
to one publisher who felt like his time was mostly taken up with huge
holdings of classics that had been converted to digital form, and he
was struggling to find time just to learn how his firm could add the
kinds of interactivity, multimedia, links, and other enhancements that
people at the show were saying these digital works deserved.

We hope that no publishers will end up as museums, but some may have
to survive by partnering with newer, more limber companies that grok
the digital age better, rather as the publisher of T.S. Eliot's
classic poem The Waste Land partnered with Touch Press, the
new imprint set up by Wolfram Research and discussed in a keynote
by Theodore Gray. Readers will expect more than plain old rendered
text from their glittery devices, and Gray's immensely successful book
The Elements (which brought to life some very classic content, the
periodic table) shows one model for giving them what they want.

Two polar extremes

Gray defined his formula for success as one of bringing together top
talent in every field, rather as Hollywood film-makers do. Gray claims
to bring together "real authors" (meaning people with extraordinary
perspectives to offer), video producers with top industry
qualifications, and programmers whose skills go beyond the ordinary.

I can't fault Gray's formula--in fact, Head First follows the same
model by devoting huge resources to each book and choosing its topics
carefully to sell well--but if it was the only formula for success,
the book industry would put out a few hundred products each year like
Hollywood does. Gray did not offer this economic analysis himself, but
it's the only way I see it working financially. Not only would this
shift squelch the thousands of quirky niche offerings that make
publishing a joy at present, I don't consider it sustainable. How will
the next generation of top producers mature and experiment? If there
is no business model to support the long tail, they'll never develop
the skills needed in Gray's kind of business.

Business models are also a problem at the other extreme,
crowdsourcing. Everybody would like to draw on the insights of
readers. We've learned from popular books such as
The Wisdom of Crowds and Wikinomics that our
public has valuable things to say, and our own works grow in value if
we mine them adeptly. There are innumerable conversations going on out
there, on the forums and the rating sites, and the social networks,
and publishers want to draw those conversations into the book. The
problem is that our customers are very happy on the communities they
have created themselves, and while they will drop in on our site to
rate a product or correct an error, they won't create the digital
equivalent of Paris's nineteenth-century cafe culture for us.

Because I have been fascinated for years by online information sharing
and have researched it
a fair amount, I made use of the conference in the appropriate way
by organizing a roundtable for anyone who was interested under the
subject, "Can crowdsourcing coexist with monetization?" Some of the
projects the participants discussed included:

  • A book review site that pays experts for reviews, and then opens up
    the site to the public for their comments. This approach uses
    high-quality content to attract more content.

  • O'Reilly's own Answers site, which works similarly by sharing
    authors' ideas as well as excerpts from their books and draws
    postings from readers.

  • A site for baby product recommendations, put up by the publisher of
    books on parenting. The publisher has succeeded in drawing large
    groups of avid participants, and has persuaded volunteers to moderate
    the site for free. But it hasn't taken the next steps, such as
    generating useful content for its own books from readers, or finding
    ways to expand into information for parents of older children so the
    publisher can keep them on the site.

  • Offering a site for teachers to share educational materials and
    improve their curricula. In this case, the publisher is not interested
    in monetizing the content, but the participants use the site to
    improve their careers.

In between the premium offerings of Touch Press and the resale of
crowdsourced material lies a wide spectrum of things publishers can
do. At all points on the spectrum, though, traditional business
models are challenged.

The one strategic move that was emphasized in session after session
was to move our digital content to standards. EPUB, HTML 5, and other
standards are evolving and growing (sometimes beyond the scope, it
seems, of any human being to grasp the whole). If we use these
formats, we can mix and mingle our content with others, and thus take
advantage of partnerships, crowdsourcing, and new devices and
distribution opportunities.

Three gratifying trends

Trends can feel like they're running against us in the publishing
industry. But I heard of three trends that should make us feel good:
reading is on the increase, TV watching is on the decrease (which will
make you happy if you agree with such analysts as Jerry Mander and
Neil Postman), and people want portability--the right to read their
purchases on any device. The significance of the last push is that it
will lead to more openness and more chances for the rich environment
of information exchange that generates new media and ideas. We're in a
fertile era, and the first assets we need to curate are our own
skills.

February 17 2011

ePayments Week: Does Apple deserve a bigger bite?

Here's what caught my attention in the payment space this week.

Apple wants a bigger bite of the publishing pie

Apple announced a subscription plan for publishers through its iTunes store, a plan that seemed to annoy and confuse publishers while creating a nice opportunity for Google to ride in the next day with a competing plan that publishers seemed to like better.

Apple's plan goes something like this: If a publisher sells a subscription through the app and the iTunes store, Apple takes 30% and keeps control of the subscriber's data. Some of the coverage of the plan missed the point that if the publisher brings an existing subscriber over to the App, they keep the revenue and Apple doesn't get a cut — at least not until that subscriber renews through the app. Publishers were at least as miffed about Apple's control over the user data. Apple sees these as iTunes subscribers, whose information they will jealously guard. Publishers see them as subscribers whom they need to send endless renewal reminders and whose names they need to sell to other publishers, Omaha Steaks, and the Franklin Mint.

The in-app subscription screen from The Daily

The 10% that Google will collect from publishers who opt to participate in its One Pass program looks mighty good by comparison. Google will sell subscriptions through a yet-to-appear newsstand, process them with Google Checkout, and hand over subscriber data to the publisher. "We don't prevent you from knowing, if you're a publisher, who your customers are, like some other people," The Wall Street Journal quoted CEO Eric Schmidt saying. Subscribers will get a login that lets them read the publication on their phone, tablet, or on the web.

Publishers seemed to welcome the clarity and generosity of Google's plan, but Matthew Ingram at GigaOm wondered if Google's One Pass was "pretty much just a warmed-over content paywall" since he doubted the open Android platform could deliver the high-quality user experience that iPad users have been frothing over. There may also be a case to the argument that Apple's iTunes subscribers are higher value leads: Silicon Alley Insider published this chart based on data from Mobclix suggesting that iPhone app users are worth more to developers (and, presumably, publishers) than Android app users. That's not much of a stretch. After all, we know they've probably paid more for their devices.


Can Isis unite the gods of mobile commerce?

I've reported recently on individual efforts to enable tap-and-pay capabilities for mobile phones — PayPal and Bump, for example. And I've written several times about Starbucks' mobile payment app, just because it gives me an excuse to go get coffee. But I've also noted that a different app for every checkout would be cumbersome. Mobile telcos think so, too, so they are teaming up around a standard called Isis.

Isis, which was announced last fall, aims to put payment capability inside phones by the spring of 2012. It could work something like this: if the carriers, credit card companies, and banks agree on how the system should work, including where the financial data is stored on the phone (probably the SIM card), how payment happens (NFC wireless communication), how secure it is, how transactions get processed, and who gets a cut, they could work to implement the standard on new phones and configure a backward-compatible bridge to existing devices. AT&T Mobility, Verizon Communications, and T-Mobile USA launched the effort, and they're tapping the payment expertise of Barclaycard US and Discover Financial Services.

While the combined muscle of these big names gives credence to the effort, it can also be difficult to get too many giants to dance to the same tune. Occasionally, industry-wide standards efforts flounder on the demands and compromises of too many players, and a rogue player undercuts the effort by presenting a viable alternative that becomes a de facto standard. Internet Explorer's dominance in the early- to mid-2000s and Apple's music format (AAC with digital rights management) used on iTunes from 2003-2007 are two examples.

Drew Sievers at MFoundry has a perspective on Isis based on experience with an earlier attempt at universal mobile payments called Firethorn. (Qualcomm paid big bucks for the service, but banks balked at letting the carriers control their customers' purchasing experience.)

In the past there may not have been enough of a platform on which to build consumer acceptance for any payment standard. Now, however, with smartphones exponentially altering what consumers think they can do with their devices, the Isis partners stand a better chance. Of course, as Sievers points out in his post, they'll have competition as the credit card firms and mobile operating system companies throw their weight behind various competing or collaborative efforts.

Bling Nation wants you to Like them

We can take it as a sign that Bling Nation is "all-in" on social when they set their domain to roll over to their Facebook page.

"We feel strongly that, with 650 million people using Facebook, that's the place where communication is occurring," Bling Nation's general manager Matt Murphy told me. "We wanted to put our whole experience there."

Bling Nation received a fair amount of attention last year with a contactless payment trial. Murphy said about 30,000 users have tried out the Bling sticker (which contains an NFC chip) on their phones, and they can use it to buy stuff at "thousands" of merchants in the Bay Area. Tapping Bling's point-of-purchase hardware charges the user's PayPal account.

Murphy said Bling is counting on the viral nature of Facebook and its users' relationships to show when they've Blinged a purchase or — and this appears to be the key incentive — earned a reward, like a discount on their next cup of coffee or caesar cut. "We're asking consumers to change their behavior," he said. "We've got to offer them something in return for that."

Got news?

News tips and suggestions are always welcome, so please send them along.




If you're interested in learning more about the payment development space, check out PayPal X DevZone, a collaboration between O'Reilly and PayPal.





May 01 2010

Report from Health Information Technology in Massachusetts

When politicians organize a conference, there's obviously an agenda--beyond the published program--but I suspect that it differed from the impressions left by speakers and break-out session attendees at Health Information Technology: Creating Jobs, Reducing Costs, & Improving Quality.

A quick overview of what I took away from the conference is sobering.
Health care costs will remain high for many years while we
institutionalize measures intended to reduce them. Patients will still
have trouble getting their records in electronic form to a different
doctor (much less accessing them themselves). And quality control will make
slow headway against the reluctance of doctors to share data on
treatment outcomes.

Still, I have to give the optimists their due, and chief among the
optimists is Richard Shoup, director of the Massachusetts eHealth Institute and one
of the conference's key organizers. He points out that the quality
control measures emerging at the federal level (the "meaningful use"
criteria for electronic health records) mesh excellently with both
the principles and the timing legislated in Section
305 in the Massachusetts health care bill. Massachusetts has a
long history of health care IT deployment and of collaboration to
improve quality. "All stakeholders are at the table," he says, and the
Massachusetts eHealth Institute recently floated a statewide plan for
implementing health care IT.

A conference fraught with political meaning

There was no doubt that politicians high up in the federal and Massachusetts governments respected the significance of this conference, which was also called the Governors National Conference (no missing apostrophe here; the conference really did draw representatives from many governors). Attendees included Massachusetts governor Deval Patrick (who came straight from the airport to speak), Senate president Therese Murray, US Surgeon General Regina Benjamin, and health care national coordinator David Blumenthal. I haven't even mentioned the many other scheduled speakers who could not attend for one reason or another.

As Governor Patrick indicated, Massachusetts is an excellent locale
for this conference. Besides the high concentration of medical
institutions that attract patients from around the world, and a decent
number of innovative research facilities, we are leaders in electronic
physician order entry and other aspects of health care IT.

I wondered, though, why no venue for this conference could be found in
the Longwood medical area. It would require handling the crowds
differently, but perhaps the main drawback is that Longwood would
swamp the out-of-towners in attendees from local institutions. But
instead, we were located in the new conference center area of Boston,
a place devoid of signs of life even though it's only a fifteen-minute
walk from the bustling financial district.

The conference met the needs of both the state and federal
administrations. Patrick hit on three major topics on many people's
minds: adding jobs, lowering health care premiums for small
businesses, and reducing the burden of health care in local
governments.

The pressures at the state level are out in full view. A recent flap
frightened the health care industry when insurers proposed annual
insurance policy increases of up to 22% and the administration slapped
them down. Although an annual 22% raise is clearly unsustainable,
imposing arbitrary limits (known as capitation) usually leads to
equally arbitrary denials of care instead of the creative fine-tuning
required to intelligently eliminate waste. I noted today that Paul
Tang, who is responsible for defining meaningful use for the federal
stimulus bill, says that to improve quality, the health care system
has to move from fee-for-service to paying for outcomes, but that we
don't yet know how to make such a major change.

As mentioned earlier, the Massachusetts health care bill as well as
the federal recovery and health care bills include ways to collect
data, analyze it, and disseminate results meant to raise quality while
lowering costs. I have to say that I'll believe it when I see it,
because "doing the right thing" (as David Blumenthal called the
implementation of electronic health records) has to fight barriers put
up consciously or unconsciously by medical institutions, individual
doctors, and electronic health record vendors.

Nationally, both the stimulus package and the health care bill
stipulate very ambitious goals and extremely accelerated
schedules--and still, many people worry that the incentives aren't
strong enough to make them come to pass.

David Blumenthal lays out the stimulus package

To administer the billions of dollars provided in the stimulus package,
and the demands on health care providers that may dwarf that
appropriation, the Department of Health and Human Services set up the
Office of the National Coordinator with the task of making and
administering regulations. David Blumenthal came from Boston back to
Washington to take on the job of National Coordinator, and
practitioners in health care now hang on his every word.

Under such circumstances, one has to look beyond the official aspects
of Blumenthal's keynote and listen for particular inflections or
emphases. Most telling to me was his metaphor of putting health care
providers on an escalator. The point was that no matter what problems
they encounter, they should keep moving. It's OK to start slow (he
spoke of making the first step low enough) as long as the institution
keeps adding functions along the sequence specified in the ONC
documents.

Given the extensive goals in using electronic records, sharing data
with relevant agencies, and improving clinical care, Blumenthal made
some statements one could see as defending the initiatives. He pointed
out that when the goals were circulated for public comment, many
people questioned the ambitiousness or timing, but hardly anybody
challenged the direction they were taking or the value of the goals.

He did admit some of the barriers we are collectively facing:

  • The unmatched diversity this country presents in geography,
    demographics, income and educational levels, political philosophies,
    etc.

  • The risk of holding back innovation. As standards are specified in
    more detail, they increase the chance that conforming implementations
    will interoperate, but also the chance that future advances in a field
    will be hard to reflect in product improvements. (John Halamka, CIO of
    Harvard Medical School and an advisor to the federal government on
    implementing health care policy, issued a similar warning on his panel
    the next day.)

  • Persistent problems with privacy. It's worth mentioning, in this
    regard, a study cited by a lawyer on a later panel, David Szabo.
    Fears of privacy hold back many people from using personal health
    records, and are cited even by a large percentage of people who use
    them. Only 4% of respondents trusted HIPAA to protect them. But many
    say they would start using personal health records if privacy laws
    were improved.

The high-level priorities cited by Blumenthal were to help small and
rural providers who have few resources (the task of Regional Extension
Centers, a new institution in health care created by the stimulus
bill), to get data into the hands of patients, and to "make electronic
systems so easy to use that doctors can't wait to turn them on in the
morning." I'll return to this sunny notion later.

Patient-centered care

As I claimed in an earlier blog post
(http://radar.oreilly.com/2010/03/report-from-himms-health-it-co-1.html),
the revolution that will really crack open the electronic health record
field is the need to share data with and among patients. The same point
was raised today by Paul Tang.

One of the barriers to giving data to patients is that, frankly, it's
not in a form they can use. Current records are fashioned more toward
insurance claims than clinical needs. They can be confusing and
positively frightening to someone who doesn't understand the peculiar
circumstances that drive the entries. Doctors are consequently
reluctant to open current records to patients. Barbra Rabson also said
that this dominance of billing data makes it hard to collect useful
data for quality control, but that it will be several years before
doctors supply the clinical data that will provide a better basis for
analysis.

Themes that came up throughout the conference suggested that
improvements in health require patient education. Some speakers
objected to using the term "patient" because that already implies
ill-health and sets up a situation where the professional health
provider is in control.

John Halamka said that the recently passed federal health care bill
requires health care systems to make it possible for all patients to
get electronic access to their data.

How can we get patients to use this power? They need to understand,
first of all, the benefits of having access to their data. John Moore,
who promotes patient-centered care at Chilmark Research
(http://chilmarkresearch.com), said that
for many people this will begin at the office, because some companies
require employees to take some responsibility for managing their own
insurance. Patient records may become more widely used as patients
find value in them far beyond tracking their treatment: to order
refills of medicine, make follow-up appointments, and so on.

Next, patients have to learn the value of adding to that data, and how
to do so. (Another problem with patient-centered care is that some
patients deliberately or mistakenly enter incorrect information or
fail to record important events.) As US Surgeon General Regina
Benjamin pointed out in a teleconferenced talk, we have to design a
patient-centered system that can be used even by illiterate patients,
who are quite common in our country and who need perhaps even more
assistance than the people who can read this blog.

With all these practices in place, patients can turn to comprehending
the information they get back and using it to improve the quality of
their lives. One doctor even pushed to pay patients for complying with
treatment plans, to put some responsibility for outcomes on the
patient.

Girish Kumar Navani, CEO of the eClinicalWorks health record vendor,
mentioned that involving patients in their care provides a powerful
motivation to expand access to high-bandwidth Internet.

Privacy came up in this talk, as it did in nearly every one. David
Szabo reassured us that there are more legal protections in place than
we tend to admit. Many patient record sites post privacy policies. The
FTC, and many state attorneys general, vigorously enforce these policies. What
Szabo did not address--because, I suppose, it fell outside legal
considerations--was the risk of data breaches, which should concern us
because attacks on health care repositories are on the rise.

Data exchange

One pediatrician recounted a teeth-clenching story of a doctor who
moved his practice to another hospital and instantly lost electronic
access to all his records. Any patient who wants to stay with him will
have to obtain records in printed form and have them re-entered at the
new hospital. This frustrating scenario gets repeated at every level
of the national health system as systems trap data in proprietary
formats.

Several members of the ONC have boasted how their specifications for
electronic records and health information exchanges say nothing about
architecture, being "technology neutral." One can interpret this as
modest caution, but could we also see in it a veiled plea for help, an
acknowledgment that current standards and protocols aren't up to the
task?

While many people criticize the vendors of electronic health systems
for incompatibility, Micky Tripathi, president of the Massachusetts
eHealth Collaborative, said that doctors are more to blame. The
doctors have assigned no importance to sharing data with other doctors
or with responsible agencies, and just demand electronic systems that
allow them to continue with their old workflows and require the least
possible change in behavior. One doctor in a break-out session
reported that doctors use the systems inconsistently or enter data in
unstructured comments instead of fields designated for that data, so
that automatic searching and aggregation of data becomes impossible.
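
To make the aggregation problem concrete, here is a minimal sketch in Python (the field names and visit records are invented, not taken from any vendor's product) of why a value entered in a designated field can be averaged with one line of code, while the same fact buried in a free-text comment is invisible to automatic analysis.

```python
# Hypothetical visit records: one doctor fills in the designated field,
# another buries the same fact in an unstructured comment.
visits = [
    {"patient": "A", "a1c": 7.2, "comment": ""},
    {"patient": "B", "a1c": None, "comment": "sugar control poor, A1c around 9 per outside lab"},
    {"patient": "C", "a1c": 6.1, "comment": ""},
]

# Structured field: aggregation is one line and always finds the value.
structured = [v["a1c"] for v in visits if v["a1c"] is not None]
print("Average A1c from structured fields:", sum(structured) / len(structured))

# Unstructured comment: every query needs fragile text parsing, and this
# record is silently missed by anything less clever than a human reader.
missed = [v["patient"] for v in visits if v["a1c"] is None and v["comment"]]
print("Records invisible to automatic aggregation:", missed)
```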


Tripathi pointed out that standards in themselves don't get people to
communicate. The history in every field is that people start to feel a
burning need to communicate; systems and standards then emerge from
that. The very early days of telephony resembled today's health
information exchanges: you needed a separate phone and a
point-to-point line for each person you wanted to talk to. Even in
1901, the United States had 2,811 independent phone networks.
(Tripathi didn't point out that it took heavy-handed government
mandates to bring that number down to one, and that this AT&T
network eventually became a bottleneck--if not a chokepoint--for
innovation.) His main point remains valid: most systems start out
cumbersome and expensive before best practices and standards help them
converge on elegant solutions.

Along those lines, a commenter in one forum praised the New England
hospital network, NEHEN, and claimed that it started before
applications were available, but generated innovative applications.
J. Marc Overhage, a leader in the use of electronic records for
clinical decision support, added a cute reference to McDonald's, which
waits for a highway to be built before putting a restaurant at the
interchange.

Daniel Nigrin, CIO of Children's Hospital, also praised NEHEN but
reminded us it was designed only for doctors, not patients.

I talked to managers at Coping Systems, a firm that helps hospitals
assess their quality of care by analyzing statistics and presenting
them in visual displays. The biggest barrier Coping Systems faces is
hospitals' reluctance to share data. Patient data must be anonymized,
of course, but sometimes
hospitals won't share data about quality of care unless the name of
the institution is removed. Even by looking at their own data in
isolation, though, a hospital or an individual doctor can discover
insights that change treatment. They can check the expected versus
actual outcomes for individual doctors, for a doctor working with a
particular nurse, for a particular time of the day, etc.
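
As a rough illustration of the expected-versus-actual comparison described above, the following sketch computes an observed-to-expected (O/E) ratio per doctor. The numbers and the "risk model" are invented; a real analysis would need genuine risk adjustment and far more data.

```python
from collections import defaultdict

# Invented data: for each case, the doctor, the risk model's expected
# probability of a complication, and whether one actually occurred.
cases = [
    ("Dr. X", 0.10, False), ("Dr. X", 0.30, True), ("Dr. X", 0.20, False),
    ("Dr. Y", 0.10, True),  ("Dr. Y", 0.15, True), ("Dr. Y", 0.20, False),
]

expected = defaultdict(float)
observed = defaultdict(int)
for doctor, risk, complication in cases:
    expected[doctor] += risk            # sum of predicted probabilities
    observed[doctor] += int(complication)  # count of actual complications

for doctor in expected:
    ratio = observed[doctor] / expected[doctor]
    print(f"{doctor}: observed {observed[doctor]}, "
          f"expected {expected[doctor]:.2f}, O/E ratio {ratio:.2f}")
```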

Tang mentioned a simple example of how public health could be improved
by data collection. During last year's rush to provide H1N1 flu
vaccines to the most critical people, the government divided the
limited supplies up geographically. Some areas with high
concentrations of vulnerable people were severely constrained, and if
we had data about the locations of people who needed the vaccine, we
could have distributed it on a much fairer basis.
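
A toy calculation (with invented numbers) shows the difference between splitting a scarce supply evenly across regions and allocating it in proportion to where the high-risk patients actually are, which is exactly the kind of decision that better location data would support.

```python
# Invented counts of high-risk patients per region and a limited supply.
high_risk = {"Region A": 50_000, "Region B": 20_000, "Region C": 5_000}
doses = 30_000

equal_split = doses // len(high_risk)       # naive geographic split
total_need = sum(high_risk.values())

for region, need in high_risk.items():
    proportional = round(doses * need / total_need)  # need-weighted share
    print(f"{region}: equal split {equal_split:,} doses, "
          f"need-proportional {proportional:,} doses "
          f"for {need:,} high-risk patients")
```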

John Halamka, while acknowledging that many current standards for
electronic records are adequate for the task, called for better
standards to classify patients and treatments. Right now, for
instance, it's hard to define who is diabetic, which makes it hard to
compare statistics about the treatment of diabetics by different
doctors. A recent ONC meeting, covered in another Radar post
(http://radar.oreilly.com/2010/04/hit-standards-committee-addres.html),
discussed standards for health IT.
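
The following sketch shows why the definition matters: two plausible but different rules for "who is diabetic" (the codes and thresholds are illustrative, not an endorsed standard) classify the same hypothetical patients differently, so any statistics built on them are not comparable across doctors.

```python
# Hypothetical patient summaries.
patients = [
    {"id": 1, "dx_codes": {"250.00"}, "a1c": 6.0, "on_metformin": True},
    {"id": 2, "dx_codes": set(),      "a1c": 7.1, "on_metformin": False},
    {"id": 3, "dx_codes": {"401.9"},  "a1c": 5.4, "on_metformin": False},
]

def diabetic_by_code(p):
    # Rule 1: any diabetes diagnosis code (ICD-9 250.x) on the problem list.
    return any(code.startswith("250") for code in p["dx_codes"])

def diabetic_by_lab_or_rx(p):
    # Rule 2: elevated A1c or a diabetes medication, regardless of coding.
    return p["a1c"] >= 6.5 or p["on_metformin"]

cohort1 = {p["id"] for p in patients if diabetic_by_code(p)}
cohort2 = {p["id"] for p in patients if diabetic_by_lab_or_rx(p)}
print("Cohort by diagnosis code:", cohort1)     # {1}
print("Cohort by lab or medication:", cohort2)  # {1, 2}
```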

Halamka said that electronic records, for which he is a strong
advocate, will catch on when doctors realize they facilitate new
activities that the doctors could never do before. In this way Halamka
fleshed out and energized Blumenthal's dream of "electronic systems so
easy to use that doctors can't wait to turn them on in the morning."
Whether this involves improvements to public health or something more
closely aligned to doctors' day-to-day practices, good planning will
help doctors, patients, and researchers all move toward a brighter
health care future.


January 08 2010

What's going on with OAuth?

Over the past week there's been a variety of incorrect information shared about what's going on with the OAuth protocol. Chris Messina (Google), Dick Hardt (Microsoft), Eran Hammer-Lahav (Yahoo!), and I (Facebook) wrote this post to help provide a bit more clarity.

The OAuth protocol enables users to provide third-party access to their web resources without sharing their passwords; kind of like a valet key for the web. To date, OAuth 1.0a is the most successful such protocol deployed on the web. The origins of OAuth date back to late 2006, when a small group of web engineers, tired of reinventing the API authorization wheel, came together to find a common, open solution.

The protocol was derived from several existing API authorization protocols, including those from AOL, Flickr, Google, Microsoft, and Yahoo!. By developing a unified approach to API authorization, the group hoped to reduce the burden of implementing any one of these protocols and to give third-party applications a more convenient and secure way to access user data. It is also well established that security protocols are hard to get right and often suffer from exploits. By focusing on a single, open protocol, the community could reduce the likelihood of an attack and respond faster when one occurs.

In the past two years, the number of services that require users to divulge their passwords to enable third-party access — the so-called password anti-pattern — has decreased dramatically. Today the most well-known and widely used deployment of OAuth 1.0a is Twitter's API. (If you're interested in a more detailed explanation of OAuth, check out The Authoritative Guide to OAuth 1.0.)
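
For readers who have never implemented 1.0a, the heart of the protocol is the request signature: the client builds a normalized base string from the HTTP method, URL, and parameters, then signs it with HMAC-SHA1 using the consumer secret and token secret. The sketch below follows that scheme in outline, with invented credentials; it glosses over details the specification spells out, such as nonce and timestamp generation rules, header encoding, and the finer points of parameter normalization.

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def pct(s: str) -> str:
    # RFC 3986 percent-encoding as required by OAuth 1.0a.
    return quote(s, safe="-._~")

def sign_request(method, url, params, consumer_secret, token_secret):
    # Normalize parameters: sorted key=value pairs joined by '&'.
    # (Simplification: sorting by raw name; the spec sorts by encoded name.)
    normalized = "&".join(f"{pct(k)}={pct(v)}" for k, v in sorted(params.items()))
    # Signature base string: METHOD & encoded URL & encoded parameter string.
    base_string = "&".join([method.upper(), pct(url), pct(normalized)])
    # Signing key: consumer secret and token secret joined by '&'.
    key = f"{pct(consumer_secret)}&{pct(token_secret)}"
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Invented credentials and request, just to exercise the function.
signature = sign_request(
    "GET",
    "https://api.example.com/1/statuses/home_timeline.json",
    {
        "oauth_consumer_key": "my-app-key",
        "oauth_token": "user-token",
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": "1262962385",
        "oauth_nonce": "random-nonce",
        "oauth_version": "1.0",
    },
    consumer_secret="my-app-secret",
    token_secret="user-token-secret",
)
print("oauth_signature =", signature)
```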

Last year OAuth transitioned to the IETF as a new Working Group to produce version 1.1 which would be suitable for publication as an Internet Standard. The working group was tasked with reviewing the security and interoperability properties of the protocol, while maintaining as much backwards-compatibility as possible. As is sometimes the case in such efforts, there was little interest among the community in such a minor cleanup.

Introducing WRAP

At the same time, new use cases emerged as well as a significant amount of hands-on experience about the shortcomings and gaps in the 1.0a version of the protocol. A small group of developers herded by Dick Hardt started work on simplifying the protocol, inspired by the OAuth Session Extension proposed by Yahoo!. Originally dubbed "Simple OAuth", it was later renamed to WRAP (Web Resource Authorization Protocol) to reflect the fact that it is a different protocol. It is now known as OAuth WRAP.

WRAP attempts to simplify the OAuth protocol, primarily by dropping the signatures, and replacing them with a requirement to acquire short lived tokens over SSL. It is not an even trade-off, and the new proposal has a different set of security characteristics, benefits, and shortcomings.

In 2007, when OAuth 1.0 was being created, SSL was used sparingly for APIs. As CPUs have become faster and as more specialized SSL hardware has been deployed, it has become increasingly practical to operate APIs over SSL. Some APIs, like the Google Health Data API or Yahoo!'s Fire Eagle API, already operate fully over SSL because developers are interacting with non-public data. Using SSL obviates the primary purpose of the cryptography in OAuth 1.0a, which was designed for transferring data over insecure channels.
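
In other words, a WRAP-style client leans on the TLS channel for confidentiality and integrity and simply presents a short-lived token with each request, rather than computing a per-request signature. A minimal sketch, with the caveat that the endpoint is invented and the Authorization header syntax is an approximation of the WRAP draft rather than a quotation from it:

```python
import urllib.request

def call_api_with_token(url: str, access_token: str) -> bytes:
    # Over HTTPS the token is protected in transit, so no per-request
    # signature is needed; possession of the token is the credential.
    # Header syntax is illustrative, not copied from the WRAP draft.
    request = urllib.request.Request(
        url,
        headers={"Authorization": f"WRAP access_token={access_token}"},
    )
    with urllib.request.urlopen(request) as response:
        return response.read()

# Usage (invented endpoint and token):
# body = call_api_with_token("https://api.example.com/me", "short-lived-token")
```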

WRAP addresses two areas in which the 1.0a protocol is lacking: it offers new ways to obtain tokens, and it evolves the architecture so that roles other than the server can issue tokens. OAuth 1.0a offers a single browser-based redirection flow used to send the user from the application to the server, obtain approval, and return to the application. WRAP adds a few new flows for obtaining authorization and tokens, mainly designed around providing better experiences on devices such as your Xbox, desktop applications like TweetDeck, or fully JavaScript-based implementations like Facebook Connect. And unlike 1.0a, where the server issues and verifies every token, the tokens in OAuth WRAP are short lived and can represent claims issued by an authorization server, providing scale and security benefits for large operators.
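
The flows themselves boil down to POSTing some proof of authorization to an authorization server over SSL and receiving a short-lived token in return (plus, in some flows, a secret for obtaining a fresh token when it expires). The sketch below is deliberately generic: the endpoint, parameter names, and JSON response shape are invented for illustration and are not the WRAP wire format.

```python
import json
import urllib.parse
import urllib.request

def obtain_short_lived_token(auth_server: str, client_id: str,
                             client_secret: str, verification_code: str) -> dict:
    # One of several possible flows: exchange a user's verification code,
    # plus the client's own credentials, for a short-lived access token.
    # Parameter names and response format are illustrative only.
    body = urllib.parse.urlencode({
        "client_id": client_id,
        "client_secret": client_secret,
        "verification_code": verification_code,
    }).encode()
    with urllib.request.urlopen(auth_server, data=body) as response:
        return json.loads(response.read())

# Usage (invented values): the returned token is then presented to the
# protected resource as in the previous sketch, and re-acquired when it expires.
# tokens = obtain_short_lived_token("https://auth.example.com/token",
#                                   "my-app", "my-secret", "code-from-user")
```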

Judging by the original "Simple OAuth" moniker, the goal behind WRAP was not to confuse developers or compete with OAuth. The intention, rather, was to promote OAuth and increase long-term adoption by offering an SSL variant. Therefore, if you're building a new API today and are trying to decide between deploying OAuth 1.0a and OAuth WRAP, nine times out of ten you should deploy OAuth 1.0a. But start experimenting with WRAP when its features are important to you and you are comfortable making changes as it evolves.

Building OAuth 2.0

WRAP brought the use cases and experiences that inspired it to the attention of the IETF working group. The consensus is that we now have enough implementation experience and new requirements to begin work on OAuth 2.0, instead of a minor revision. OAuth 2.0 will likely contain two parts, one defining an authentication scheme for accessing resources using tokens, and the second defining a rich set of authorization schemes for obtaining such tokens. By separating the two parts, we will be able to provide the right level of abstraction and modularity to support both the SSL-based approach taken by WRAP as well as the existing signature-based approach taken by 1.0a.

In many ways, OAuth 2.0 will be the result of combining the best ideas from both protocols. The authentication part will build on top of 1.0a, while the authorization part will build on top of WRAP. It is important to remember that it is very early in the process, and that all these decisions will be made by the members of the IETF OAuth working group. In other words, by those who show up. The goal is to have a set of stable drafts for OAuth 2.0 by the OAuth Working Group session at the 77th IETF meeting in March.

For those implementing OAuth 1.0a today, a new edition has been published as an RFC draft, which the community accepted as a replacement for the original 1.0a specification. This new specification does not change the protocol, but it is more readable, includes many clarifications, errata, and examples, and is thus easier to implement.

If you're interested in keeping track of what's going on with OAuth, Hueniverse's OAuth page is a great place to watch. To get involved and take part in this important work, dig into the IETF OAuth Working Group and the WRAP discussion list.
