
September 09 2011

Big health advances in small packages: report from the third annual Medical Device Connectivity conference

At some point, all of us are likely to owe our lives--or our quality of life--to a medical device. Yesterday I had the chance to attend the third annual Medical Device Connectivity conference, where manufacturers, doctors, and administrators discussed how to get all these monitors, pumps, and imaging machines to work together for better patient care.

A few days ago I reported on the themes in the conference, and my attendance opened up lots of new insights for me. I've been convinced for a long time that the real improvements in our health care system will come, not by adjusting insurance policies, payment systems, or data collection, but by changes right at the point of care. And nothing is closer to the point of care than the sensor attached to your chest or the infusion pump inserted into your vein.

What sorts of innovation can we hope for in medical devices? They include:

Connectivity

Devices already produce data (in the form of monitors' numerical outputs and waveforms) and consume it (in the form of infusion pumps' formularies and settings). The next natural step is to send that data over networks--preferably wireless--so it can be viewed remotely. Even better, networking can infuse that data into electronic health records.

One biomedical engineer gestured vigorously as he told me how the collection of device data could be used to answer such questions as "How do some hospitals fare in comparison to others on similar cases?" and "How fast do treatments take effect?" He declared, "I'd like to spend the rest of my life working on this."

Cooperation

After coaxing devices to share what they know, the industry can work on getting them to signal and control each other. This would remove a major source of error in the system, because the devices would no longer depend on nurses to notice an alert and adjust a pump. "Smart alarms" would combine the results of several sensors to determine more accurately whether something is happening that really requires intervention.

Of course, putting all this intelligence in software calls for iron-clad requirements definitions and heavy testing. Some observers doubt that we can program devices or controllers well enough to handle the complexity of individual patient differences. But here, I think the old complaint, "If we can get men to the moon..." applies. The Apollo program showed that we can create complex systems that work, if enough care and effort are applied. (However, the Challenger explosion shows that we can also fail, when they are not.)

Smaller and smarter

Faster, low-power chips are leading to completely different ways of solving common medical problems. Some recent devices are two orders of magnitude smaller than corresponding devices of the previous generation. This means, among other things, that patients formerly confined to bed can be more ambulatory, which is better for their health. The new devices also tend to be 25% to 30% the cost of older ones.

Wireless

Telemedicine is one of the primary goals of health care reform. Taking care out of clinics and hospitals and letting people care for themselves in their natural environments gives us whole new opportunities for maintaining high-level functioning. But it isn't good for anybody's health to trip over cables, so wireless connections of various types are a prerequisite to the wider use of devices.

Ed Cantwell of the West Wireless Health Institute suggested three different tiers of wireless reliability, each appropriate for different devices and functions in a medical institution. The lowest level, consumer-grade, would correspond to the cellular networks we all use now. The next level, enterprise-grade, would have higher capacity and better signal-to-noise ratio. The highest level, medical-grade, would be reserved for life-critical devices and would protect against interference or bandwidth problems by using special frequencies.

Standards

The status of standards and standards bodies in the health care space is complex. Some bodies actually create standards, while others try to provide guidelines (even standards) for using these standards. When you consider that it takes many years just to upgrade from one version of a standard to the next, you can see why the industry is permanently in transition. But these standards do lead to interoperability.

A few of the talks took on a bit of a revival meeting atmosphere--step up and do things the way we tell you! Follow standards, do a risk analysis, get that test plan in place. I think one of the most anticipated standards for device safety,

Communication frameworks

The industry is just in the opening chapter of making devices cooperate, but a key principle seems to be generally accepted: instead of having one device directly contact another, it's better to have them go through a central supervisor or node. A monitor, for instance, may say, "This patient's heart rate is falling." The supervisor would receive this information and check a "scenario" to see whether action should be taken. If so, it tells the pump, "Change the infusion rate as follows."

Michael Robkin reported about an architecture for an integrated clinical environment (ICE) in a set of safety requirements called ASTM F2761-09. The devices are managed by a network controller, which in turn feeds data back and forth to a supervisor.

I took the term "scenario" from a project by John Hatcliff of Kansas State University, the Medical Device Coordination Framework that seems well suited to the ICE architecture. In addition to providing an API and open-source library for devices to communicate with a supervisor, the project creates a kind of domain-specific language in which clinicians can tell the devices how to react to events.
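
To make the pattern concrete, here is a minimal sketch in Python of a supervisor consulting a scenario. The device names, message format, and threshold are all invented for illustration; neither ASTM F2761-09 nor the Medical Device Coordination Framework prescribes this code, only the architecture it imitates: devices never command each other directly, they report to a supervisor that decides what to do.

    # A toy "scenario" table: an observed condition mapped to an action on
    # another device. Thresholds and commands are invented.
    SCENARIOS = [
        {
            "name": "falling heart rate during PCA infusion",
            "condition": lambda vitals: vitals.get("heart_rate", 100) < 45,
            "action": {"device": "pca_pump_1", "command": "set_rate", "ml_per_hr": 0},
        },
    ]

    class Supervisor:
        def __init__(self, devices):
            self.devices = devices  # device id -> object with a send() method

        def on_monitor_report(self, vitals):
            """Called by the network controller whenever a monitor reports."""
            for scenario in SCENARIOS:
                if scenario["condition"](vitals):
                    action = scenario["action"]
                    # The supervisor, not the monitor, decides to act on the pump.
                    self.devices[action["device"]].send(action)

    class FakePump:
        def send(self, command):
            print("pump received:", command)

    supervisor = Supervisor({"pca_pump_1": FakePump()})
    supervisor.on_monitor_report({"heart_rate": 38})  # matches the scenario
    supervisor.on_monitor_report({"heart_rate": 72})  # no action taken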

Communicating devices need regulatory changes too. As I described in my earlier posting, the FDA allows communicating devices only if the exact configuration has been tested by the vendor. Each pair of devices must be proven to work reliably together.

In a world of standards, it should be possible to replace this pair-wise clearance with what Robkin called component-wise clearance. A device would be tested for conformance to a standard protocol, and then would be cleared by the FDA to run with any other device that is also cleared to conform. A volunteer group called the Medical Device Interoperability Safety Working Group has submitted a set of recommendations to the FDA proposing this change. We know that conformance to written standards does not perfectly guarantee interoperability (have you run into a JavaScript construct that worked differently on different web browsers?), but I suppose this change will not compromise safety because the hospital is always ultimately responsible for testing the configurations of devices it chooses.

The safety working group is also persuading the FDA to take a light hand in regulating consumer devices. Things such as the Fitbit or Zeo that are used for personal fitness should not have to obey the same rigorous regulations as devices used to make diagnostic decisions or deliver treatments.

Open Source

One of the conference organizers is Shahid Shah, a software developer in the medical device space with a strong leaning toward open-source solutions. As in his OSCon presentation, Shah spoke here of the many high-quality components one can get just for the effort of integration and testing. The device manufacturer takes responsibility for the behavior of any third-party software, proprietary or free, that goes into the device. But for at least a decade, devices using free software have been approved by the FDA. Shah also goaded his audience to follow standards wherever possible (such as XMPP for communication), because that will allow third parties to add useful functionality and enhance the marketability of their devices.

Solving old problems in new ways

Tim Gee, program chair, gave an inspiring talk showing several new devices that do their job in radically better ways. Many have incorporated the touch screen interface popularized by the iPhone, and, like modern consumer devices, use accelerometers to react intelligently to patient movements. For instance, many devices give off false alarms when patients move. The proper interpretation of accelerometer data might help suppress these alarms.
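
Here is a rough sketch of that idea in a few lines of Python. The threshold and the two-line heuristic are invented; a real device would rely on validated signal processing, but the gist is the same: an alarm that coincides with obvious motion is suspect.

    import math

    MOTION_THRESHOLD_G = 0.15  # hypothetical deviation from the 1 g reading at rest

    def is_moving(accel_sample):
        """accel_sample is an (x, y, z) accelerometer reading in g."""
        magnitude = math.sqrt(sum(a * a for a in accel_sample))
        return abs(magnitude - 1.0) > MOTION_THRESHOLD_G

    def should_raise_alarm(sensor_alarm, accel_sample):
        # Defer alarms that coincide with patient movement, since they are
        # often motion artifacts rather than real clinical events.
        return sensor_alarm and not is_moving(accel_sample)

    print(should_raise_alarm(True, (0.1, 0.3, 1.4)))  # moving: hold the alarm
    print(should_raise_alarm(True, (0.0, 0.0, 1.0)))  # at rest: sound the alarm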

In addition to user interface advances and low-power CPUs with more memory and processing power, manufacturers are taking advantage of smaller, better WiFi radios and tools that reduce their time to market. They are also using more third-party software modules (including open source) and following standards more closely. Networking allows the device to offload some processing to a PC or network server, thus allowing the device itself to shrink even more in physical size and complexity. Perhaps most welcome to clinicians, manufacturers are adding workflow automation.

Meaningful use and the HITECH Act play a role in device advances too. Clinicians are expected to report more and more data, and the easiest way to get and store it is to go to the devices where it originates.

Small, pesky problems

One of the biggest barriers to sharing device data is a seemingly trivial question: what patient does the data apply to? The question is actually a major sticking point, because scenarios like the following can happen:

  • A patient moves to a new hospital room. Either her monitor reports a new location, or she gets a new monitor. Manually updating the information to put the new data into the patient's chart is an error-prone operation.

  • A patient moves out and a device is hooked up to a new patient. This time the staff must be sure to stop reporting data to the old patient's record and direct it into the new patient's record.

  • Several speakers indicated that even the smallest risk of putting data into the wrong person's record was unacceptable. Hospitals assign IDs to patients, and most monitors report the ID with each data sample, but Luis Melendez of Partners HealthCare says the EHR vendors don't trust this data (nurses could have entered it incorrectly), so they throw the IDs away and record only the location. Defaults can be changed, but these introduce new problems.

Another one of these frustratingly basic problems is attaching the correct time to data. It seems surprisingly hard to load the correct time on devices and keep them synched to it. I guess medical devices don't run NTP. They often have timestamps that vary from the correct time by minutes or even months, and EHRs store these without trying to verify them.
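
A receiving system could at least flag implausible timestamps before storing them. Here's a small Python sketch of that kind of sanity check, assuming the EHR interface's own clock is trustworthy (say, NTP-synced); the five-minute tolerance is invented.

    from datetime import datetime, timezone, timedelta

    MAX_SKEW = timedelta(minutes=5)  # hypothetical tolerance

    def check_device_timestamp(device_time, received_at=None):
        """Return the timestamp to store, plus a flag if the device clock looks wrong."""
        received_at = received_at or datetime.now(timezone.utc)
        skew = abs(device_time - received_at)
        if skew > MAX_SKEW:
            # Don't silently store a time that may be off by minutes or months.
            return received_at, "device clock off by %s" % skew
        return device_time, None

    # A reading stamped months in the past gets flagged, and the time of
    # receipt is stored instead.
    print(check_device_timestamp(datetime(2011, 6, 1, tzinfo=timezone.utc)))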

Other steps forward

We are only beginning to imagine what we could do with smarter medical devices. For instance, if their output is recorded and run through statistical tests, problems with devices might be revealed earlier in their life cycles. (According to expert Julian Goldman, the FDA and ONC are looking into this.) And I didn't even hear talk of RFIDs, which could greatly reduce the cost of tracking equipment as well as ensure that each patient is using what she needs. No doubt about it: even if medical device improvements aren't actually the protagonists of the health care reform drama, they will play a more than supporting role. But few clinical institutions use the capabilities of smart devices yet, and those that do exploit them do so mostly to insert data into patient records. So we'll probably wait longer than we want for the visions unveiled at this conference to enter our everyday lives.

September 06 2011

Medical device experts and their devices converse at Boston conference

I'm looking forward to the Medical Device Connectivity conference this week at Harvard Medical School. It's described by one of the organizers, Shahid N. Shah, as an "operational" conference, focused not on standards or the potential of the field but on how to really make these things work. On the other hand, as I learned from a call with Program Chair Tim Gee, many of the benefits for which the field is striving have to wait for changes in technology, markets, FDA regulations, and hospital deployment practices.

For instance, Gee pointed out that patients under anesthesia need a ventilator in order to breathe. But this interferes with certain interventions that may be needed during surgery, such as X-Rays, so the ventilator must sometimes be turned off temporarily. Guess how often the staff forget to turn the ventilator back on after taking the X-Ray? Often enough, anyway, to justify taking this matter out of human hands and making the X-Ray machine talk directly to the ventilator. Activating the X-Ray machine should turn off the ventilator, and finishing the X-Ray should turn it back on. But that's not done now.

Another example where Gee pointed out the benefits of smarter and better networked devices is patient-controlled analgesia (PCA) pumps. When overused, they can cause the patient's organs to slow down, eventually causing death. But a busy nurse or untrained family member might keep the pump going even after the patient monitors issue alarms. It would be better for the monitor to shut off the pump when vital signs get dangerously low.

The first requirement for such life-saving feedback loops is smart software that can handle feedback loops and traverse state machines, as well as rules engines for more complex scenarios such as self-regulating intravenous feeds. Standards will make it easier to communicate between devices. But the technologies already exist and will be demonstrated at an interoperability lab on Wednesday evening.
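
As one illustration of the state-machine idea, here is a minimal Python sketch of the X-Ray/ventilator interlock described above. The states, events, and device interfaces are invented; a real implementation would live behind the kind of supervisor architecture and safety requirements discussed elsewhere, not in twenty lines of application code.

    # Two states, two events: the machine, not a distracted human, is
    # responsible for restarting the ventilator after the X-Ray finishes.
    class VentilatorInterlock:
        def __init__(self, ventilator):
            self.ventilator = ventilator
            self.state = "VENTILATING"

        def on_event(self, event):
            if self.state == "VENTILATING" and event == "xray_activated":
                self.ventilator.pause()
                self.state = "PAUSED_FOR_XRAY"
            elif self.state == "PAUSED_FOR_XRAY" and event == "xray_finished":
                self.ventilator.resume()
                self.state = "VENTILATING"

    class FakeVentilator:
        def pause(self):
            print("ventilator paused")

        def resume(self):
            print("ventilator resumed")

    interlock = VentilatorInterlock(FakeVentilator())
    interlock.on_event("xray_activated")   # pauses the ventilator
    interlock.on_event("xray_finished")    # turns it back on automatically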

None of the functions demonstrated at the lab are available in the field. They require more testing by manufacturers. Obviously, devices that communicate also add an entirely new dimension of tasks for hospital staff. FDA certification rules may also need to be updated to be more flexible. Currently, a manufacturer selling a device that interoperates with other devices must test every configuration it wants to sell, specifying every device in that configuration and taking legal responsibility for their safe and reliable operation. It cannot sell into a hospital that uses the device in a different configuration. Certifying a device to conform to a standard, instead of to work with particular devices, would increase the sale and installation of smart, interconnected instruments.

Reading the conference agenda (PDF), I came up with three main themes for the conference:

The device in practice

Hospitals, nursing homes, and other institutions, which are currently the major users of devices (although devices are increasingly being used in homes as well), need to make lots of changes in workflow and IT to reap the benefits of newer device capabilities. Currently, according to Gee, the output from devices is used to make immediate medical decisions but is rarely stored in EHRs. The quantities of data could become overwhelming (perhaps why one session at the conference covers cloud storage), and doctors would need tools for analyzing it. Continuous patient monitoring, which tends to send 12 Kb of data per second and must have a reliable channel that prevents data loss, has a session all its own.

Standards and regulations

The FDA has responded with unusual speed and flexibility to device manufacturers recently. One simple but valuable directive (PDF) clarified that devices could exchange information with other systems (such as EHRs) without requiring the FDA to regulate those EHRs. It has also endorsed a risk assessment standard named IEC 80001, which specifies what manufacturers and hospitals should do when installing devices or upgrading environments. This is an advance for the FDA because they normally do not regulate the users of products (only the manufacturers), but they realize that users need guidance for best practices. Another conference session covers work by the mHealth Regulatory Coalition.

Wireless networks

This single topic is complex and important enough to earn several sessions, including security and the use of the 5 GHz band. No surprise that so much attention is being devoted to wireless networking: it's the preferred medium for connecting devices, but its reliability is hard to guarantee and absolutely critical.

The conference ends with three workshops: one on making nurse response to alarms and calls more efficient, one on distributed antenna systems (good for cell phones and ensuring public safety), and one on the use of open source libraries to put connectivity into devices. The latter workshop is given by Shah and goes more deeply into the topic he presented in a talk at O'Reilly's Open Source convention and an interview with me.

March 05 2010

Report from HIMSS Health IT conference: building or bypassing infrastructure

Today the Healthcare Information and
Management Systems Society (HIMSS)
conference wrapped up. In
previous blogs, I laid out the benefits of risk-taking in health care IT
(http://radar.oreilly.com/2010/03/report-from-himms-health-it-co.html)
followed by my main theme, interoperability and openness
(http://radar.oreilly.com/2010/03/report-from-himms-health-it-co-1.html).
This blog will cover a few topics
about a third important issue, infrastructure.

Why did I decide this topic was worth a blog? When physicians install
electronic systems, they find that they need all kinds of underlying
support. Backups and high availability, which might have been
optional or haphazard before, now have to be professional. Your
patient doesn't want to hear, "You need an antibiotic right away, but
we'll order it tomorrow when our IT guy comes in to reboot the
system." Your accounts manager would be almost as upset if you told
her that billing will be delayed for the same reason.

Network bandwidth

An old sales pitch in the computer field (which I first heard at
Apollo Computer in the 1980s) goes, "The network is the computer." In
the coming age of EHRs, the network is the clinic. My family
practitioner (in an office of five practitioners) had to install a T1
line when they installed an EHR. In eastern Massachusetts, whose soil
probably holds more T1 lines than maple tree roots, that was no big
deal. It's considerably more problematic in an isolated rural area
where the bandwidth is more comparable to what I got in my hotel room
during the conference (particularly after 10:30 at night, when I'm
guessing a kid in a nearby room joined an MMPG). One provider from the
Midwest told me that the incumbent charges $800 per month for a T1.
Luckily, he found a cheaper alternative.

So the FCC is involved in health care now
(http://www.fcc.gov/cgb/rural/rhcp.html). Bandwidth is perhaps their
main focus at the moment, and
they're explicitly tasked with making sure rural providers are able to
get high-speed connections. This is not a totally new concern; the
landmark 1996 Telecom Act included rural health care providers in its
universal service provisions. I heard one economist deride the
provision, asking what was special about rural health care providers
that they should get government funding. Fifteen years later, I think
rising health care costs and deteriorating lifestyles have answered
that question.

Wireless hubs

The last meter is just as important as the rest of your network, and
hospitals with modern, technology-soaked staff are depending
increasingly on mobile devices. I chatted with the staff of a small
wireless company called Aerohive that aims its products at hospitals.
Its key features are:

Totally cable-free hubs

Not only do Aerohive's hubs communicate with your wireless endpoints,
they communicate with other hubs and switches wirelessly. They just
make the hub-to-endpoint traffic and hub-to-hub traffic share the
bandwidth in the available 2.4 and 5 GHz ranges. This allows you to
put them just about anywhere you want and move them easily.

Dynamic airtime scheduling

The normal 802.11 protocols share the bandwidth on a packet-by-packet
basis, so a slow device can cause all the faster devices to go slower
even when there is empty airtime. I was told that an 802.11n device
can go slower than an 802.11b device if it's remote and its signal has
to go around barriers. Aerohive just checks how fast packets are
coming in and allocates bandwidth on that ratio, like time-division
multiplexing. If your device is ten times faster than someone else's
and the bandwidth is available, you can use ten times as much
bandwidth.
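
Here's a rough Python sketch of the arithmetic behind that idea. It is
not Aerohive's actual algorithm, just the proportional allocation the
description implies: each client's share of airtime tracks the data
rate it has recently demonstrated, so one slow device can't drag
everyone else down.

    def airtime_shares(observed_rates_mbps):
        """Map each client to its share of airtime, proportional to its
        recently observed throughput (a dict of client -> Mbps)."""
        total = sum(observed_rates_mbps.values())
        return {client: rate / total
                for client, rate in observed_rates_mbps.items()}

    # An 802.11n laptop moving ten times the data of an old 802.11b
    # device gets roughly ten times the bandwidth when airtime is free.
    print(airtime_shares({"laptop_11n": 100.0, "scanner_11b": 10.0}))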

Dynamic rerouting

Aerohive hubs use mesh networking and an algorithm somewhat like
Spanning Tree Protocol to reconfigure the network when a hub is added
or removed. Furthermore, when you authenticate with one hub, its
neighbors store your access information so they can pick up your
traffic without taking time to re-authenticate. This makes roaming
easy and allows you to continue a conversation without a hitch if a
hub goes down.

Security checking at the endpoint

Each hub has a built-in firewall so that no unauthorized device can
attach to the network. This should be of interest in an open, public
environment like a hospital where you have no idea who's coming in.

High bandwidth

The top-of-the-line hub has two MIMO radios, each with three
directional antennae.

Go virtual, part 1

VMware has customers in health care
(http://www.vmware.com/solutions/industry/healthcare/case-studies.html),
as in other industries. In addition, they've incorporated
virtualization into several products from medical equipment and service
vendors:

Radiology

Hospitals consider these critical devices. Virtualization here
supports high availability.

Services

A transcription service could require ten servers. Virtualization can
consolidate them onto one or two pieces of hardware.

Roaming desktops

Nurses often move from station to station. Desktop virtualization
allows them to pull up the windows just as they were left on the
previous workstation.

Go virtual, squared

If all this talk of bandwidth and servers brings pain to your head as
well as to the bottom line, consider heading into the cloud. At one
talk I attended today on cost analysis, a hospital administrator
reported that about 20% of their costs went to server hosting. They
saved a lot of money by rigorously eliminating unneeded backups, and a
lot on air conditioning by arranging their servers more efficiently.
Although she didn't discuss Software as a Service, those are a couple
examples of costs that could go down if functions were outsourced.

Lots of traditional vendors are providing their services over the Web
so you don't have to install anything, and several companies at the
conference are entirely Software as a Service. I mentioned Practice
Fusion (http://www.practicefusion.com/) in my previous blog. At the
conference, I asked them three key questions
pertinent to Software as a Service.

Security

This is the biggest question clients ask when using all kinds of cloud
services (although I think it's easier to solve than many other
architectural issues). Practice Fusion runs on HIPAA-compliant
Salesforce.com servers.

Data portability

If you don't like your service, can you get your data out? Practice
Fusion hasn't had any customers ask for their data yet, but upon
request they will produce a DVD containing your data in CSV files, or
in other common formats, overnight.

Extendibility

As I explained in my previous blog, clients increasingly expect a
service to be open to enhancements and third-party programs. Practice
Fusion has an API in beta, and plans to offer a sandbox on their site
for people to develop and play with extensions--which I consider
really cool. One of the API's features is to enforce a notice to the
clinician before transferring sensitive data.

The big selling point that first attracts providers to Practice Fusion
is that it's cost-free. They support the service through ads, which
users tell them are unobtrusive and useful. But you can also pay to
turn off ads. The service now has 30,000 users and is adding about 100
each day.

Another SaaS company I mentioned in my previous blog is Covisint
(http://www.covisint.com/). Their service is broader than Practice
Fusion's, covering not only patient records but
billing, prescription ordering, etc. Operating also as an HIE, they
speed up access to data on patients by indexing all the data on each
patient in the extended network. The actual data, for security and
storage reasons, stays with the provider. But once you ask about a
patient, the system can instantly tell you what sorts of data are
available and hook you up with the providers for each data set.

Finally, I talked to the managers of a nimble new company called
CareCloud (http://carecloud.com/), which will start serving customers
in early April. CareCloud, too, offers a range of services in patient
health records, practice management, and revenue cycle management. It
was built entirely on open source software--Ruby on Rails and a
PostgreSQL database--while using Flex to build their snazzy interface,
which can run in any browser (including the iPhone, thanks to Adobe's
upcoming translation to native code). Their strategy is based on
improving
physicians' productivity and the overall patient experience through a
social networking platform. The interface has endearing Web 2.0 style
touches such as a news feed, SMS and email confirmations, and
integration with Google Maps.

And with that reference to Google Maps (which, in my first blog, I
complained about mislocating the address 285 International Blvd NW for
the Georgia World Congress Center--thanks to the Google Local staff
for getting in touch with me right after a tweet) I'll end my coverage
of this year's HIMSS.

February 05 2010

One hundred eighty degrees of freedom: signs of how open platforms are spreading

I was talking recently with Bob Frankston, who has a distinguished
history in computing (http://frankston.com/public/Bob_Frankston_Bio.asp)
that goes back to work on Multics, VisiCalc,
and Lotus Notes. We were discussing some of the dreams of the Internet
visionaries, such as total decentralization (no mobile-system walls,
no DNS) and bandwidth too cheap to meter. While these seem impossibly
far off, I realized that computing and networking have come a long way
already, making things normal that not too far in the past would have
seemed utopian.



Flat-rate long distance calls

I remember waiting past my bedtime to make long-distance calls, and
getting down to business real quick to avoid high charges.
Conventional carriers were forced to flat-rate pricing by competition
from VoIP (which I'll return to later in the blog). International
calls are still overpriced, but with penny-per-minute cards available
in any convenience store, I don't imagine any consumers are paying
those high prices.



Mobile phone app stores

Not that long ago, the few phones that offered Internet access did so
as a novelty. Hardly anybody seriously considered downloading an
application to their phones--what are you asking for, spam and
fraudulent charges? So the iPhone and Android stores teeming with
third-party apps are a 180-degree turn for the mobile field. I
attribute the iPhone app store once again to competition: the
uncovering of the iPhone SDK by a free software community
(http://www.oreillynet.com/onlamp/blog/2008/01/the_unflappable_free_software.html).



Downloadable TV segments

While the studios strike deals with Internet providers, send out
take-down notices by the ream, and calculate how to derive revenue
from television-on-demand, people are already getting the most popular
segments from Oprah Winfrey or Saturday Night Live whenever they want,
wherever they want.



Good-enough generic devices

People no longer look down on cheap, generic tools and devices. Both
in software and in hardware, people are realizing that in the long run
they can do more with simple, flexible, interchangeable parts than
with complex and closed offerings. There will probably always be a
market for exquisitely designed premium products--the success of Apple
proves that--but the leading edge goes to products that are just "good
enough," and the DIY movement especially ensures a growing market for
building blocks of that quality.


I won't even start to summarize Frankston's own writings
(http://frankston.com/public/), which start with premises so far from
what the Internet is like today that you won't be able to make
complete sense of any one article on its own. I'd recommend the
mind-blowing Sidewalks: Paying by the Stroll
(http://frankston.com/public/?n=Sidewalks) if you want to venture into
his world.

But I'll mention one sign of Frankston's optimism: he reminded me that
in the early 1990s, technologists were agonizing over arcane
quality-of-service systems in the hope of permitting VoIP over
ordinary phone connections. Now we take VoIP for granted and are
heading toward ubiquitous video. Why? Two things happened in parallel:
the technologists figured out much more efficient encodings, and
normal demand led to faster transmission technologies even over
copper. We didn't need QoS and all the noxious control and overhead it
entails. More generally, it's impossible to determine where progress
will come from or how fast it can happen.

January 14 2010

Innovation Battles Investment as FCC Road Show Returns to Cambridge

Opponents can shed their rhetoric and reveal new depths to their
thought when you bring them together for rapid-fire exchanges,
sometimes with their faces literally inches away from each other. That
made it worth my while to truck down to the MIT Media Lab for
yesterday's Workshop on Innovation, Investment and the Open Internet
(http://hraunfoss.fcc.gov/edocs_public/attachmatch/DOC-295521A1.pdf),
sponsored by the
Federal Communications Commission. In this article I'll cover:

Context and background

The FCC kicked off its country-wide hearing campaign almost two years
ago with a meeting at Harvard Law School, which quickly went wild. I
covered the experience in one article
(http://radar.oreilly.com/archives/2008/02/network-neutrality-how-the-fcc.html)
and the unstated agendas in another
(http://radar.oreilly.com/archives/2008/02/network-neutrality-code-words.html).
With a star cast and an introduction
by the head of the House's Subcommittee on Telecommunications and the
Internet, Ed Markey, the meeting took on such a cachet that the
public flocked to the lecture hall, only to find it filled because
Comcast recruited people off the street to pack the seats and keep
network neutrality proponents from attending. (They had an overflow
room instead.)

I therefore took pains to arrive at the Media Lab's Bartos Theater
early yesterday, but found it unnecessary. Even though Tim Berners-Lee
spoke, along with well-known experts across the industry, only 175
people turned up, in my estimation (I'm not an expert at counting
crowds). I also noticed that the meeting wasn't worth a
mention today in the Boston Globe.

Perhaps it was the calamitous earthquake yesterday in Haiti, or the
bad economy, or the failure of the Copenhagen summit to solve the
worst crisis ever facing humanity, or concern over three wars the US
is involved in (if you count Yemen), or just fatigue, but it seems
that not as many people are concerned with network neutrality as two
years ago. I recognized several people in the audience yesterday and
surmised that the FCC could have picked out a dozen people at random
from their seats, instead of the parade of national experts on the
panel, and still have led a pretty darned good discussion.

And network neutrality is definitely the greased pig everyone is
sliding around. There are hundreds of things one could discuss
in the context of innovation and investment, but various political
forces ranging from large companies (AT&T versus Google) to highly
visible political campaigners (Huffington Post) have made network
neutrality the agenda. The FCC gave several of the movement's leaders
rein to speak, but perhaps signaled its direction by sending Meredith
Attwell Baker as the commissioner in attendance.

In contrast to FCC chair Julius Genachowski, who publicly calls for
network neutrality (a position also taken by Barack Obama during his
presidential campaign:
http://www.barackobama.com/issues/technology/index_campaign.php#open-internet),
Baker has traditionally
espoused a free-market stance. She opened the talks yesterday by
announcing that she is "unconvinced there is a problem" and posing the
question: "Is it broken?" I'll provide my own opinion later in this
article.

Two kinds of investment

Investment is the handmaiden, if not the inseminator, of innovation.
Despite a few spectacular successes, like the invention of Linux and
Apache, most new ideas require funding. Even Linux and Apache are now
represented by foundations backed by huge companies.

So why did I title this article "Innovation Battles Investment"?
Because investment happens at every level of the Internet, from the
cables and cell towers up to the applications you load on your cell
phone.

Here I'll pause to highlight an incredible paradigm shift that was
visible at this meeting--a shift so conclusive that no one mentioned
it. Are you old enough to remember the tussle between "voice" and
"data" on telephone lines? Remember the predictions that data would
grow in importance at the expense of voice (meaning Plain Old
Telephone Service) and the milestones celebrated in the trade press when
data pulled ahead of voice?

Well, at the hearing yesterday, the term "Internet" was used to cover
the whole communications infrastructure, including wires and cell
phone service. This is a mental breakthrough all its own, and one
I'll call the Triumph of the Singularity.

But different levels of infrastructure benefit from different
incentives. I found that all the participants danced around this.
Innovation and investment at the infrastructure level got short shrift
from the network neutrality advocates, whether in the bedtime story
version delivered by Barbara van Schewick or the deliberately
intimidating, breakneck overview by economist Shane Greenstein, who
defined openness as "transparency and consistency to facilitate
communication between different partners in an independent value
chain."

You can explore his papers on your own
(http://www.kellogg.northwestern.edu/faculty/greenstein/images/articles.html),
but I took this to mean, more or less, that everybody sharing a
platform should broadcast their intentions and apprise everybody else
of their plans, so that others can make the most rational decisions
and invest wisely. Greenstein realized, of course, that firms have
little incentive to share their strategies. He said that
communication was "costly," which I take as a reference not to an expenditure of
money but to a surrender of control and relinquishing of opportunities.

This is just what the cable and phone companies are not going to do.
Dot-com innovator Jeffrey Glueck, founder of Skyfire
(http://www.skyfire.com/), would like the FCC to require ISPs to give
application providers and users at least 60 to 90
days notice before making any changes to how they treat traffic. This
is absurd in an environment where bad actors require responses within
a few seconds and the victory goes to the router administrators with
the most creative coping strategy. Sometimes network users just have
to trust their administrators to do the best thing for them. Network
neutrality becomes a political and ethical issue when administrators
don't. But I'll return to this point later.

The pocket protector crowd versus the bean counters

If the network neutrality advocates could be accused of trying to
emasculate the providers, advocates for network provider prerogative
were guilty of taking the "Trust us" doctrine too far. For me, the
best part of yesterday's panel was how it revealed the deep gap that
still exists between those with an engineering point of view and those
with a traditional business point of view.

The engineers, led by Internet designer David Clark, repeated the
mantra of user control of quality of service, the vehicle for this
being the QoS field added to the IP packet header. Van Schewick
postulated a situation where a user increases the QoS on one session
because they're interviewing for a job over the Internet, then reduces
the QoS to chat with a friend.
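
The per-application hook for this already exists in ordinary socket
APIs. Here's a minimal Python sketch that marks a connection's traffic
by setting the DSCP bits of the IP header's TOS byte (on Unix-like
systems); whether any network along the path honors the marking is, of
course, exactly the policy question the panel was arguing about.

    import socket

    DSCP_EF = 46  # "expedited forwarding", commonly used for voice and video

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # The DSCP value occupies the upper six bits of the old TOS byte.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

    # A job-interview video call could use a socket marked like the one
    # above, while a casual chat leaves its sockets at the best-effort
    # default.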

In the rosy world envisioned by the engineers, we would deal not with
the physical reality of a shared network with our neighbors, all
converging into a backhaul running from our ISP to its peers, but with
the logical mechanism of a limited, dedicated bandwidth pipe (former
senator Ted Stevens can enjoy his revenge) that we would spend our
time tweaking. One moment we're increasing the allocation for file
transfer so we can upload a spreadsheet to our work site; the next
moment we're privileging the port we use for an MMPG.

The practicality of such a network service is open to question. Glueck
pointed out that users are unlikely ever to ask for lower quality of
service (although this is precisely the model that Internet experts
have converged on, as I report in my 2002 article href="http://www.oreillynet.com/pub/a/network/2002/06/11/platform.html">
A Nice Way to Get Network Quality of Service?). He recommends
simple tiers of service--already in effect at many providers--so that
someone who wants to carry out a lot of P2P file transfers or
high-definition video conferencing can just pay for it.

In contrast, network providers want all the control. Much was made
during the panel of a remark by Marcus Weldon of Alcatel-Lucent in
support of letting the providers shape traffic. He pointed out that
video teleconferencing over the fantastically popular Skype delivered
unappealing results over today's best-effort Internet delivery, and
suggested a scenario where the provider gives the user a dialog box
where the user could increase the QoS for Skype in order to enjoy the
video experience.

Others on the panel legitimately flagged this comment as a classic
illustration of the problem with providers' traffic shaping: the
provider would negotiate with a few popular services such as Skype
(which boasts tens of millions of users online whenever you log in)
and leave innovative young services to fend for themselves in a
best-effort environment.

But the providers can't see doing quality of service any other way.
Their business model has always been predicated on designing services
around known costs, risks, and opportunities. Before they roll out a
service, they need to justify its long-term prospects and reserve
control over it for further tweaking. If the pocket protector crowd in
Internet standards could present their vision to the providers in a
way that showed them the benefits they'd accrue from openness
(presumably by creating a bigger pie), we might have progress. But the
providers fear, above all else, being reduced to a commodity. I'll pick up
this theme in the next section.

Is network competition over?

Law professor Christopher S. Yoo is probably the most often heard (not
at this panel, unfortunately, where he was given only a few minutes)
of academics in favor of network provider prerogatives. He suggested
that competition was changing, and therefore requiring a different
approach to providers' funding models, from the Internet we knew in
the 1990s. Emerging markets (where growth comes mostly from signing up
new customers) differ from saturated markets (where growth comes
mainly from wooing away your competitors' customers). With 70% of
households using cable or fiber broadband offerings, he suggested the
U.S. market was getting saturated, or mature.

Well, only if you accept that current providers' policies will stifle
growth. What looks like saturation to an academic in the U.S. telecom
field looks like a state of primitive underinvestment to people who
enjoy lightning-speed service in other developed nations.

But Yoo's assertion makes us pause for a moment to consider the
implications of a mature network. When change becomes predictable and
slow, and an infrastructure is a public good--as I think everyone
would agree the Internet is--it becomes a candidate for government
takeover. Indeed, there have been calls for various forms of
government control of our network infrastructure. In some places this
is actually happening, as cities and towns create their own networks.
A related proposal is to rigidly separate the physical infrastructure
from the services, barring companies that provide the physical
infrastructure from offering services (and therefore presumably
relegating them to a maintenance role--a company in that position
wouldn't have much incentive to take on literally ground-breaking new
projects).

Such government interventions are politically inconceivable in the
United States. Furthermore, experience in other developed nations with
more successful networks shows that it is unnecessary.

No one can doubt that we need a massive investment in new
infrastructure if we want to use the Internet as flexibly and
powerfully as our trading partners. But there was disagreement yesterday about
how much of an effort the investment will take, and where it will come
from.

Yoo argued that a mature market requires investment to come from
operating expenditures (i.e., charging users more money, which
presumably is justified by discriminating against some traffic in
order to offer enhanced services at a premium) instead of capital
expenditures. But Clark believes that current operating expenditures
would permit adequate growth. He anticipated a rise in Internet access
charges of $20 a month, which could fund the added bandwidth we need
to reach the Internet speeds of advanced countries. In exchange for
paying that extra $20 per month, we would enjoy all the content we
want without paying cable TV fees.

The current understanding by providers is that usage is rising
"exponentially" (whatever that means--they don't say what the exponent
is) whereas charges are rising slowly. Following some charts from
Alcatel-Lucent's Weldon that showed profits disappearing entirely in a
couple years--a victim of the squeeze between rising usage and slow
income growth--Van Schewick challenged him, arguing that providers can
enjoy lower bandwidth costs to the tune of 30% per year. But Weldon
pointed out that the only costs going down are equipment, and claimed
that after a large initial drop caused by any disruptive new
technology, costs of equipment decrease only 10% per year.

Everyone agreed that mobile, the most exciting and
innovation-supporting market, is expensive to provide and suffering an
investment crisis. It is also the least open part of the Internet and
the part most dependent on legacy pricing (high voice and SMS
charges), deviating from the Triumph of the Singularity.

So the Internet is like health care in the U.S.: in worse shape than
it appears. We have to do something to address rising
usage--investment in new infrastructure as well as new
applications--just as we have to lower health care costs that have
surpassed 17% of the gross domestic product.

Weldon's vision--a rosy one in its own way, complementing the
user-friendly pipe I presented earlier from the engineers--is that
providers remain free to control the speeds of different Internet
streams and strike deals with anyone they want. He presented provider
prerogatives as simple extensions of what already happens now, where
large companies create private networks where they can impose QoS on
their users, and major web sites contract with content delivery
networks such as Akamai (represented at yesterday's panel by lawyer
Aaron Abola) to host their content for faster response time. Susie Kim
Riley of Camiant testified that European providers are offering
differentiated services already, and making money by doing so.

What Weldon and Riley left out is what I documented in A Nice Way to
Get Network Quality of Service?
(http://www.oreillynet.com/pub/a/network/2002/06/11/platform.html):
managed networks providing QoS are not the Internet. Attempts to
provide QoS over the
Internet--by getting different providers to cooperate in privileging
certain traffic--have floundered. The technical problems may be
surmountable, but no one has figured out how to build trust and to design
adequate payment models that would motivate providers to cooperate.

It's possible, as Weldon asserts, that providers allowed to manage
their networks would invest in infrastructure that would ultimately
improve the experience for all sites--those delivered over the
Internet by best-effort methods as well as those striking deals. But
the change would still represent increased privatization of the public
Internet. It would create what application developers such as Glueck
and Nabeel Hyatt of Conduit Labs fear most: a thousand different
networks with different rules that have to be negotiated with
individually. And new risks and costs would be placed in the way of
the disruptive innovators we've enjoyed on the Internet.

Competition, not network neutrality, is actually the key issue facing
the FCC, and it was central to their Internet discussions in the years
following the 1996 Telecom Act. For the first five years or so, the
FCC took seriously a commitment to support new entrants by such
strategies as requiring incumbent companies to allow interconnection.
Then, especially under Michael Powell, the FCC did an about-face.

The question posed during this period was: what leads to greater
investment and growth--letting a few big incumbents enter each other's
markets, or promoting a horde of new, small entrants? It's pretty
clear that in the short term, the former is more effective because the
incumbents have resources to throw at the problem, but that in the
long term, the latter is required in order to find new solutions and
fix problems by working around them in creative ways.

Yet the FCC took the former route, starting in the early 2000s. They
explicitly made a deal with incumbents: build more infrastructure, and
we'll relax competition rules so you don't have to share it with other
companies.

Starting a telecom firm is hard, so it's not clear that pursuing the
other route would have saved us from the impasse we're in today. But a
lack of competition is integral to our problems--including the one
being fought out in the field of "network neutrality."

All the network neutrality advocates I've talked to wish that we had
more competition at the infrastructure level, because then we could
rely on competition to discipline providers instead of trying to
regulate such discipline. I covered this dilemma in a 2006 article,
Network Neutrality and an Internet with Vision
(http://lxer.com/module/newswire/view/53907/). But somehow, this kind
of competition is now off the FCC agenda. Even in the mobile space,
they offer spectrum through auctions that permit the huge incumbents
to gather up
the best bands. These incumbents then sit on spectrum without doing
anything, a strategy known as "foreclosure" (because it forecloses
competitors from doing something useful with it).

Because everybody goes off in his own direction, the situation pits two groups against each other that should be
cooperating: small ISPs and proponents of an open Internet.

What to regulate

Amy Tykeson, CEO of a small Oregon Internet provider named
BendBroadband, forcefully presented the view of an independent
provider, similar to the more familiar imprecations by Brett Glass of
Lariat (http://www.brettglass.com/). In their
world--characterized by paper-thin margins, precarious deals with
back-end providers, and the constant pressure to provide superb
customer service--flexible traffic management is critical and network
neutrality is viewed as a straitjacket.

I agree that many advocates of network neutrality have oversimplified
the workings of the Internet and downplayed the day-to-day
requirements of administrators. In contrast, as I have shown, large
network providers have overstepped their boundaries. But to end this
article on a positive note (you see, I'm trying) I'll report that the
lively exchange did produce some common ground and a glimmer of hope
for resolving the differing positions.

First, in an exchange between Berners-Lee and van Schewick on the
pro-regulatory side and Riley on the anti-regulatory side, a more
nuanced view of non-discrimination and quality of service emerged.
Everybody on the panel offered vociferous support for the position that it was
unfair discrimination for a network provider to prevent a user from
getting legal content or to promote one web site over a competing web
site. And this is a major achievement, because those are precisely
the practices that providers like AT&T and Verizon claim the
right to do--the practices that spawned the current network neutrality
controversy.

To complement this consensus, the network neutrality folks approved
the concept of quality of service, so long as it was used to improve
the user experience instead of to let network providers pick winners.
In a context where some network neutrality advocates have made QoS a
dirty word, I see progress.

This raises the question of what is regulation. The traffic shaping
policies and business deals proposed by AT&T and Verizon are a
form of regulation. They claim the same privilege that large
corporations--we could look at health care again--have repeatedly
tried to claim when they invoke the "free market": the right of
corporations to impose their own regulations.

Berners-Lee and others would like the government to step in and issue
regulations that suppress the corporate regulations. A wide range of
wording has been proposed for the FCC's consideration. Commissioner
Baker asked whether, given the international reach of the Internet,
the FCC should regulate at all. Van Schewick quite properly responded
that the abuses carried out by providers are at the local level and
therefore can be controlled by the government.

Two traits of a market are key to innovation, and came up over and
over yesterday among dot-com founders and funders (represented by Ajay
Agarwal of Bain Capital) alike: a level playing field, and
light-handed regulation.

Sometimes, as Berners-Lee pointed out, government regulation is
required to level the playing field. The transparency and consistency
cited by Greenstein and others are key features of the level playing
field. And as I pointed out, a vacuum in government regulation is
often filled by even more onerous regulation by large corporations.

One of the most intriguing suggestions of the day came from Clark, who
elliptically suggested that the FCC provide "facilitation, not
regulation." I take this to mean the kind of process that Comcast and
BitTorrent went through, which Sally Shipman Wentworth of ISOC
boasted about in her opening remarks. Working with the IETF (which she
said created two new working groups to deal with the problem), Comcast
and BitTorrent worked out a protocol that should reduce the load of
P2P file sharing on networks and end up being a win-win for everybody.

But there are several ways to interpret this history. To free market
ideologues, the Comcast/BitTorrent collaboration shows that private
actors on the Internet can exploit its infinite extendibility to find
their own solutions without government meddling. Free market
proponents also call on anti-competition laws to hold back abuses. But
those calling for parental controls would claim that Comcast wanted
nothing to do with BitTorrent and started to work on technical
solutions only after getting tired of the feces being thrown its way
by outsiders, including the FCC.

And in any case--as panelists pointed out--the IETF has no enforcement
power. The presence of a superior protocol doesn't guarantee that
developers and users will adopt it, or that network providers will
allow traffic that could be a threat to their business models.

The FCC at Harvard, which I mentioned at the beginning of this
article, promised intervention in the market to preserve Internet
freedom. What we got after that (as I predicted) was a slap on
Comcast's wrist and no clear sense of direction. The continued
involvement of the FCC--including these public forums, which I find
educational--shows, along with the appointment of the more
interventionist Genachowski and the mandate to promote broadband in
the American Recovery and Reinvestment Act, that it can't step away
from the questions of competition and investment.
