
December 09 2013

Who will upgrade the telecom foundation of the Internet?

Although readers of this blog know quite well the role that the Internet can play in our lives, we may forget that its most promising contributions — telemedicine, the smart electrical grid, distance education, etc. — depend on a rock-solid and speedy telecommunications network, and therefore that relatively few people can actually take advantage of the shining future the Internet offers.

Worries over sputtering advances in bandwidth in the US, as well as an actual drop in reliability, spurred the FCC to create the Technology Transitions Policy Task Force, and to drive discussion of what they like to call the “IP transition”.

Last week, I attended a conference on the IP transition in Boston, one of a series being held around the country. While we tussled with the problems of reliability and competition, one urgent question loomed over the conference: who will actually make advances happen?

What’s at stake and why bids are coming in so low

It’s not hard to tally up the promise of fast, reliable Internet connections. Popular futures include:

  • Delivering TV and movie content on demand
  • Checking on your lights, refrigerator, thermostat, etc., and adjusting them remotely
  • Hooking up rural patients with health care experts in major health centers for diagnosis and consultation
  • Urgent information updates during a disaster, to aid both victims and responders

I could go on and on, but already one can see the outline of the problem: how do we get there? Who is going to actually create a telecom structure that enables everyone (not just a few privileged affluent residents of big cities) to do these things?

Costs are high, but the payoff is worthwhile. Ultimately, the applications I listed will lower the costs of the services they replace or improve life enough to justify an investment many times over. Rural areas — where investment is currently hardest to get — could probably benefit the most from the services because the Internet would give them access to resources that more centrally located people can walk or drive to.

The problem is that none of the likely players can seize the initiative. Let’s look at each one:

Telecom and cable companies
The upgrading of facilities is mostly in their hands right now, but they can't see beyond the first item in the previous list. Distributing TV and movies is a familiar business, but they don't know how to extract value from any of the other applications. In fact, most of the benefits of the other services go to people at the endpoints, not to the owners of the network. This has been a sore point with the telecom companies ever since the Internet took off, and it spurs their constant attempts to hold Internet users hostage and shake them down for more cash.

Given the limitations of the telecom and cable business models, it’s no surprise they’ve rolled out fiber in the areas they want and are actually de-investing in many other geographic areas. Hurricane Sandy brought this to public consciousness, but the problem has actually been mounting in rural areas for some time.

Angela Kronenberg of COMPTEL, an industry association of competitive communications companies, pointed out that it’s hard to make a business case for broadband in many parts of the United States. We have a funny demographic: we’re not as densely populated as the Netherlands or South Korea (both famous for blazingly fast Internet service), nor as concentrated as Canada and Australia, where it’s feasible to spend a lot of money getting service to the few remote users outside major population centers. There’s no easy way to reach everybody in the US.

Governments
Although governments subsidize network construction in many ways — half a dozen subsidies were reeled off by keynote speaker Cameron Kerry, former Acting Secretary of the Department of Commerce — such stimuli can only nudge the upgrade process along, not control it completely. Government funding has certainly enabled plenty of big projects (Internet access is often compared to the highway system, for instance), but it tends to go toward familiar technologies that the government finds safe, and therefore misses opportunities for radical disruption. It’s no coincidence that these safe, familiar technologies are provided by established companies with lobbyists all over DC.

As an example of how help can come from unusual sources, Sharon Gillett mentioned on her panel the use of unlicensed spectrum by small, rural ISPs to deliver Internet to areas that otherwise had only dial-up access. The FCC ruling that opened up “white space” spectrum in the TV band to such use has greatly empowered these mavericks.

Individual consumers
Although we are the ultimate beneficiaries of new technology (and will ultimately pay for it somehow, through fees or taxes), hardly anyone can plunk down the cash for it in advance: the vision is too murky and the reward too far down the road. John Burke, Commissioner of the Vermont Public Service Board, flatly said that consumers choose their phone service almost entirely on the basis of price and don't really find out about its reliability and features until later.

Basically, consumers can’t bet that all the pieces of the IP transition will fall in place during their lifetimes, and rolling out services one consumer at a time is incredibly inefficient.

Internet companies
Google Fiber came up once or twice at the conference, but their initiatives are just a proof of concept. Even if Google became the lynchpin it wants to be in our lives, it would not have enough funds to wire the world.

What’s the way forward, then? I find it in community efforts, which I’ll explore at the end of this article.

Practiced dance steps

Few of the insights in this article came up directly in the Boston conference. The panelists were old hands who had crossed each other’s paths repeatedly, gliding between companies, regulatory agencies, and academia for decades. At the conference, they pulled their punches and hid their agendas under platitudes. The few controversies I saw on stage seemed to be launched for entertainment purposes, distracting from the real issues.

From what I could see, the audience of about 75 people came almost entirely from the telecom industry. I saw just one representative of what you might call the new Internet industries (Microsoft strategist Sharon Gillett, who went to that company after an august regulatory career) and two people who represent the public interest outside of regulatory agencies (speaker Harold Feld of Public Knowledge and Fred Goldstein of Interisle Consulting Group).

Can I get through to you?

Everyone knows that Internet technologies, such as voice over IP, are less reliable than plain old telephone service, but few realize how soon reliability of any sort will be a thing of the past. When a telecom company signs you up for a fancy new fiber connection, you are no longer connected to a power source at the telephone company’s central office. Instead, you get a battery that can last eight hours in case of a power failure. A local power failure may let you stay in contact with outsiders if the nearby mobile phone towers stay up, but a larger failure will take out everything.

These issues have a big impact on public safety, a concern raised at the beginning of the conference by Gregory Bialecki in his role as a Massachusetts official, and repeated by many others during the day.

There are ways around the new unreliability through redundant networks, as Feld pointed out during his panel. But the public and regulators must take a stand for reliability, as the post-Sandy victims have done. The issue in that case was whether a community could be served by wireless connections. At this point, they just don’t deliver either the reliability or the bandwidth that modern consumers need.

Mark Reilly of Comcast claimed at the conference that 94% of American consumers now have access to at least one broadband provider. I’m suspicious of this statistic because the telecom and cable companies have a very weak definition of “broadband” and may be including mobile phones in the count. Meanwhile, we face the possibility of a whole new digital divide consisting of people relegated to wireless service, on top of the old digital divide involving dial-up access.

We’ll take that market if you’re not interested

In a healthy market, at least three companies would be racing to roll out new services at affordable prices, but every new product or service must provide a migration path from the old ones it hopes to replace. Nowhere is this more true than in networks because their whole purpose is to let you reach other people. Competition in telecom has been a battle cry since the first work on the law that became the 1996 Telecom Act (and which many speakers at the conference say needs an upgrade).

Most of the 20th century accustomed people to thinking of telecom as a boring, predictable utility business, the kind that “little old ladies” bought stock in. The Telecom Act was supposed to knock the Bell companies out of that model and turn them into fierce innovators with a bunch of other competitors. Some people actually want to reverse the process and essentially nationalize the telecom infrastructure, but that would put innovation at risk.

The Telecom Act, especially as interpreted later by the FCC, fumbled the chance to enforce competition. According to Goldstein, the FCC decided that a duopoly (the Baby Bells and the cable companies) was enough competition.

The nail in the coffin may have been the FCC ruling that any new fiber providing IP service was exempt from the requirements for interconnection. The sleight of hand that the FCC used to make this switch was a redefinition of the Internet: they conflated the use of IP on the carrier layer with the bits traveling around above, which most people think of as “the Internet.” But the industry and the FCC had a bevy of arguments (including the looser regulation of cable companies, now full-fledged competitors of the incumbent telecom companies), so the ruling stands. The issue then got mixed in with a number of other controversies involving competition and control on the Internet, often muddled together under the term “network neutrality.”

Ironically, one of the selling points that keeps a competitive company such as Granite Telecom in business is reselling existing copper. Many small businesses find that the advantages of fiber are outweighed by the costs, which may include expensive quality-of-service upgrades (such as MPLS), new handsets to handle VoIP, and rewiring the whole office. Thus, Senior Vice President Sam Kline announced at the conference that Granite Telecom is adding a thousand new copper POTS lines every day.

This reinforces the point I made earlier about depending on consumers to drive change. The calculus that leads small businesses to stick with copper may be dangerous in the long run. Besides lost opportunities, it means sticking with a technology that is aging and decaying by the year. Most of the staff (known familiarly as Bellheads) who designed, built, and maintain the old POTS network are retiring, and the phone companies don’t want to bear the increasing costs of maintenance, so reliability is likely to decline. Kline said he would like to find a way to make fiber more attractive, but the benefits are still vaporware.

At this point, the major companies and the smaller competing ones are both cherry picking in different ways. The big guys are upgrading very selectively and even giving up on some areas, whereas the small companies look for niches, as Granite Telecom has. If universal service is to become a reality, a whole different actor must step up to the podium.

A beautiful day in the neighborhood

One hope for change is through municipal and regional government bodies, linked to local citizen groups who know where the need for service is. Freenets, which go back to 1984, drew on local volunteers to provide free Internet access to everyone with a dial-up line, and mesh networks have powered similar efforts in Catalonia and elsewhere. In the 1990s, a number of towns in the US started creating their own networks, usually because they had been left off the list of areas that telecom companies wanted to upgrade.

Despite legal initiatives by the telecom companies to squelch municipal networks, they are gradually catching on. The logistics involve quite a bit of compromise (often, a commercial vendor builds and runs the network, contracting with the city to do so), but many town managers swear that advantages in public safety and staff communications make the investment worthwhile.

The limited regulatory power that cities hold over cable companies (a control that is sometimes taken away) is a crude instrument, like a potter trying to manipulate clay with tongs. To craft a beautiful work, you need to get your hands right on the material. Ideally, citizens would design their own future. The creation of networks should involve companies and local governments, but also the direct input of citizens.

National governments and international bodies still have roles to play. Burke pointed out that public safety issues, such as 911 service, can’t be fixed by the market, and developing nations have very little fiber infrastructure. So, we need large-scale projects to achieve universal access.

Several speakers also lauded state regulators as the most effective centers to handle customer complaints, but I think the IP transition will be increasingly a group effort at the local level.

Back to school

Education emerged at the conference as one of the key responsibilities that companies and governments share. The transition to digital TV was accompanied by a massive education budget, but in my home town, there are still people confused by it. And it’s a minuscule issue compared to the task of going to fiber, wireless, and IP services.

I had my own chance to join the educational effort on the evening following the conference. Friends from Western Massachusetts phoned me because they were holding a service for an elderly man who had died. They lacked the traditional 10 Jews (the minyan) required by Jewish law to say the prayer for the deceased, and asked me to Skype in. I told them that remote participation would not satisfy the law, but they seemed to feel better if I did it. So I said, “If Skype will satisfy you, why can’t I just participate by phone? It’s the same network.” See, FCC? I’m doing my part.

May 16 2013

Three organizations pressing for change in society’s approach to computing

Taking advantage of a recent trip to Washington, DC, I had the privilege of visiting three non-profit organizations who are leaders in the application of computers to changing society. First, I attended the annual meeting of the Association for Computing Machinery’s US Public Policy Council (USACM). Several members of the council then visited the Open Technology Institute (OTI), which is a section of New America Foundation (NAF). Finally, I caught the end of the first general-attendance meeting of the Open Source Initiative (OSI).

In different ways, these organizations are all putting in tremendous effort to provide the benefits of computing to more people, from all walks of life, and to preserve the vigor and creativity of computing platforms. I found out through my meetings what sorts of systemic change are required to achieve these goals and saw these organizations grapple with a variety of strategies to get there. This report is not a statement from any of these groups, just my personal observations.

USACM Public Policy Council

The Association for Computing Machinery (ACM) has been around almost as long as electronic computers: it was founded in 1948. I joined in the 1980s but was mostly a passive recipient of information. Although last week’s policy meeting was the first I had ever attended at ACM, many of the attendees were familiar to me from previous work on either technical or policy problems.

As we met, open data was in the news thanks to a White House memo reiterating its call for open government data in machine-readable form. Although the movement for these data sets has been congealing from many directions over the past few years, USACM was out in front back in early 2009 with a policy recommendation for consumable data.

USACM weighs in on such policy issues as copyrights and patents, accessibility, privacy, innovation, and the various other topics on which you’d expect computer scientists to have professional opinions. I felt that the group’s domestic scope is oddly out of sync with the larger ACM, which has been assiduously expanding overseas for many years. A majority of ACM members now live outside the United States.

In fact, many of today’s issues have international reach: cybersecurity, accessibility, and copyright, to name some obvious ones. Although USACM has submitted comments on ACTA and the Trans-Pacific Partnership, they don’t maintain regular working relationships with organizations outside the country. Perhaps they’ll have the cycles to add more international connections in the future. Eugene Spafford, security expert and current chair of the policy committee, pointed out that many state-level projects in the US would be worth commenting on as well.

It’s also time to recognize that policy is made by non-governmental organizations as well as governments. Facebook and Google, for example, are setting policies about privacy. The book The End of Power: From Boardrooms to Battlefields and Churches to States, Why Being In Charge Isn’t What It Used To Be by Moisés Naím claims that power is becoming more widely distributed (not ending, really) and that a bigger set of actors should be taken into account by people hoping to effect change.

USACM represents a technical organization, so it seeks to educate policy decision-makers on issues where there is an intersection of computing technology and public policy. Their principles derive from the ACM Code of Ethics and Professional Conduct, which evolved from input by many ACM members and the organization’s experience. USACM papers usually focus on pointing out the technical implications of legislative or regulatory choices.

When the notorious SOPA and PIPA bills came up, for instance, the USACM didn’t issue the kind of blanket condemnation many other groups put out, supported by appeals to vague concepts such as freedom and innovation. Instead, they put the microscope to the bills’ provisions and issued brief comments about negative effects on the functioning of the Internet, with a focus on DNS. Spafford commented, “We also met privately with Congressional staff and provided tutorials on how DNS and similar mechanisms worked. That helped them understand why their proposals would fail."

Open Technology Institute

NAF is a flexible and innovative think tank proposing new strategies for dozens of national and international issues. Mostly progressive, in my view, it is committed to considering a wide range of possible approaches and finding rational solutions that all sides can accept. On computing and Internet issues, it features the Open Technology Institute, a rare example of a non-profit group that is firmly engaged in both technology production and policy-making. This reflects the multi-disciplinary expertise of OTI director Sascha Meinrath.

Known best for advocating strong policies to promote high-bandwidth Internet access, the OTI also concerns itself with the familiar range of policies in copyright, patents, privacy, and security. Google executive chairman Eric Schmidt is chair of the NAF board, and Google has been generous in its donations to NAF, including storage for the massive amounts of data the OTI has collected on bandwidth worldwide through its Measurement Lab, or M-Lab.

M-Lab measures Internet traffic around the world, using crowdsourcing to produce realistic reports about bandwidth, chokepoints, and other aspects of traffic. People can download the M-Lab tools to check for traffic shaping by providers and other characteristics of their connections, and send the results back to M-Lab for storage. (They now have 700 terabytes of such data.) Other sites offer speed testing for uploads and downloads, but M-Lab is unique in storing and visualizing the results. The FCC, among others, has used M-Lab to determine the uneven progress of bandwidth in different regions. Like all OTI software projects, Measurement Lab is open source software.
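
As a concrete illustration of how that archive can be used, here is a minimal sketch of pulling aggregate throughput numbers out of M-Lab's public BigQuery data. The BigQuery client library is real, but the project, table, and column names below are assumptions for illustration; check M-Lab's current documentation for the real schema.

```python
# Minimal sketch: pulling aggregate throughput out of M-Lab's public data.
# The project, table, and column names are assumptions for illustration --
# consult M-Lab's documentation for the current schema.
from google.cloud import bigquery

client = bigquery.Client()  # needs Google Cloud credentials configured

query = """
    SELECT
      client.Geo.CountryCode AS country,
      APPROX_QUANTILES(a.MeanThroughputMbps, 100)[OFFSET(50)] AS median_mbps
    FROM `measurement-lab.ndt.unified_downloads`  -- assumed table name
    WHERE date BETWEEN '2013-01-01' AND '2013-12-31'
    GROUP BY country
    ORDER BY median_mbps DESC
"""

for row in client.query(query).result():
    print(row.country, row.median_mbps)
```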

Open Source Initiative

For my last meeting of the day, I dropped by for the last few sessions of Open Source Initiative’s Open Source Community Summit and talked to Deborah Bryant, Simon Phipps, and Bruno Souza. OSI’s recent changes represent yet another strategy for pushing change in the computer field.

OSI is best known for approving open source licenses and seems to be universally recognized as an honest and dependable judge in that area, but they want to branch out from this narrow task. About a year ago, they completely revamped their structure and redefined themselves as a membership organization. (I plunked down some cash as one of their first individual members, having heard of the change from Simon at a Community Leadership Summit).

When they announced the summit, they opened up a wiki for discussion about what to cover. The winner hands down was an all-day workshop on licensing — I guess you can tell when you’re in Washington. (The location was the Library of Congress.)

They also held an unconference that attracted a nice mix of open-source and proprietary software companies, software users, and government workers. I heard working group summaries that covered such basic advice as getting ownership of the code that you contract out to companies to create for you, using open source to attract and retain staff, and justifying the investment in open source by thinking more broadly than the agency’s immediate needs and priorities.

Organizers used the conference to roll out Working Groups, a new procedure for starting projects. Two such projects, launched by members, are the development of a FAQ and the creation of a speaker’s bureau. Anybody with an idea that fits the mission of promoting and adopting open source software can propose a project, but the process requires strict deadlines and plans for fiscally sustaining the project.

OSI is trying to change government and society by changing the way they make and consume software. USACM is trying to improve the institutions’ understanding of software as well as the environment in which it is made. NAF is trying to extend computing to everyone, and using software as a research tool in pursuit of that goal. Each organization, starting from a different place, is expanding its options and changing itself in order to change others.

May 10 2012

Understanding Mojito

Yahoo's Mojito is a different kind of framework: all JavaScript, but running on both the client and the server. Code can run on the server, or on the client, depending on how the framework is tuned. It shook my web architecture assumptions by moving well beyond the convenience of a single language, taking advantage of that approach to process code where it seems most efficient. Programming this way will make it much easier to bridge the gap between developing code and running it efficiently.

I talked with Yahoo architect fellow and VP Bruno Fernandez-Ruiz (@olympum) about the possibilities Node opened and Mojito exploits.

Highlights from the full video interview include:

  • "The browser loses the chrome." Web applications no longer always look like they've come from the Web. [Discussed at the 02:11 mark]
  • Basic "Hello World" in Mojito. How do you get started? [Discussed at the 05:05 mark]
  • Exposing web services through YQL. Yahoo Query Language lets you work with web services without sweating the details; a minimal query sketch follows this list. [Discussed at the 07:56 mark]
  • Manhattan, a closed Platform as a Service. If you want a more complete hosting option for your Mojito applications, take a look. [Discussed at the 10:29 mark]
  • Code should flow among devices. All of these devices speak HTML and JavaScript. Can we help them talk with each other? [Discussed at the 11:50 mark]
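
To make the YQL item above concrete, here is a sketch of the kind of query an application could issue: one SQL-like statement against Yahoo's public endpoint instead of hand-stitching several web service calls. The endpoint and the weather.forecast/geo.places tables were the classic public examples of that era; YQL has since been retired, so treat this as historical illustration rather than a working API call.

```python
# A sketch of the YQL idea: one SQL-like query in place of several raw
# web service calls. The endpoint and tables are the classic public
# examples of the time; the service has since been retired.
import json
import urllib.parse
import urllib.request

yql = ('select item.condition from weather.forecast where woeid in '
       '(select woeid from geo.places(1) where text="san francisco, ca")')

url = ('https://query.yahooapis.com/v1/public/yql?'
       + urllib.parse.urlencode({'q': yql, 'format': 'json'}))

with urllib.request.urlopen(url) as resp:  # fails today: the service is gone
    print(json.loads(resp.read())['query']['results'])
```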

You can view the entire conversation in the following video:

Fluent Conference: JavaScript & Beyond — Explore the changing worlds of JavaScript & HTML5 at the O'Reilly Fluent Conference (May 29 - 31 in San Francisco, Calif.).

Save 20% on registration with the code RADAR20


April 10 2012

Four short links: 9 April 2012

  1. E-Reading/E-Books Data (Luke Wroblewski) -- This past January, paperbacks outsold e-books by less than 6 million units; if e-book market growth continues, it will have far outpaced paperbacks to become the number-one category for U.S. publishers. Combine that with the fact that only 21% of American adults have read an ebook, and the signs are there that ebook readers buy many more books.
  2. Web 2.0 Ends with Data Monopolies (Bryce Roberts) -- in the context of Google Glasses: So you’re able to track every website someone sees, every conversation they have, every Ukulele book they purchase and you’re not thinking about business models, eh? Bryce is looking at online businesses as increasingly about exclusive access to data. This is all to feed the advertising behemoth.
  3. Building and Implementing Single Sign On -- nice run-through of the system changes and APIs they built for single-sign on.
  4. How Big are Porn Sites? (ExtremeTech) -- porn sites cope with astronomical amounts of data. The only sites that really come close in terms of raw bandwidth are YouTube and Hulu, but even then YouPorn is something like six times larger than Hulu.

January 30 2012

A discussion with David Farber: bandwidth, cyber security, and the obsolescence of the Internet

David Farber, a veteran of Internet technology and politics, dropped by Cambridge, Mass. today and was gracious enough to grant me some time in between his numerous meetings. On leave from Carnegie Mellon, Dave still intervenes in numerous policy discussions related to the Internet and "plays in Washington," as well as hosting the popular Interesting People mailing list. This list delves into dizzying levels of detail about technological issues, but I wanted to pump him for big ideas about where the Internet is headed, topics that don't make it to the list.

How long can the Internet last?

I'll start with the most far-reaching prediction: that Internet protocols simply aren't adequate for the changes in hardware and network use that will come up in a decade or so. Dave predicts that computers will be equipped with optical connections instead of pins for networking, and the volume of data transmitted will overwhelm routers, which at best have mixed optical/electrical switching. Sensor networks, smart electrical grids, and medical applications with genetic information could all increase network loads to terabits per second.

When routers evolve to handle terabit-per-second rates, packet-switching protocols will become obsolete: at those speeds a router has only about a nanosecond to examine and forward each packet of a thousand bits. The speed of light is constant, so switching and propagation delays can't shrink to match, and we'll have to rethink the fundamentals of digital networking.

I tossed in the common nostrum that packet-switching was the fundamental idea behind the Internet and its key advance over earlier networks, but Dave disagreed. He said lots of activities on the Internet reproduce circuit-like behavior, such as sessions at the TCP or Web application level. So theoretically we could re-architect the underlying protocols to fit what the hardware and the applications have to offer.

But he says his generation of programmers who developed the Internet are too tired ("It's been a tough fifteen or twenty years") and will have to pass the baton to a new group of young software engineers who can think as boldly and originally as the inventors of the Internet. He did not endorse any of the current attempts to design a new network, though.

Slaying the bandwidth bottleneck

Like most Internet activists, Dave bewailed the poor state of networking in the U.S. In advanced nations elsewhere, 100-megabit-per-second networking is available at reasonable cost, whereas here it's hard to go beyond 30 megabits per second (on paper!) even at enormous prices and in major metropolitan areas. Furthermore, the current administration hasn't done much to improve the situation, even though candidate Obama made high-bandwidth networking a part of his platform and FCC Chairman Julius Genachowski talks about it all the time.

Dave has been going to Washington on tech policy consultations for decades, and his impressions of the different administrations have a unique slant all their own. The Clinton administration really listened to staff who understood technology--Gore in particular was quite a technology junkie--and the administration's combination of judicious policy initiatives and benign neglect led to the explosion of the commercial Internet. The following Bush administration was famously indifferent to technology at best. The Obama administration lies somewhere in between in cluefulness, but despite its frequent plaudits for STEM and technological development, Dave senses that neither Obama nor Biden really has the drive to deal with and examine complex technical issues and insist on action where necessary.

I pointed out the U.S.'s particular geographic challenges--with a large, spread-out population making fiber expensive--and Dave countered that fiber to the home is not the best solution. In fact, he claims no company could make fiber pay unless it gained 75% of the local market. Instead, phone companies should string fiber to access points 100 meters or so from homes, and depend on old copper for the rest. This could deliver quite adequate bandwidth at a reasonable cost. Cable companies, he said, could also greatly increase Internet speeds. Wireless companies are pretty crippled by loads that they encouraged (through the sale of app-heavy phones) and then had problems handling, and are busy trying to restrict users' bandwidth. But a combination of 4G, changes in protocols, and other innovations could improve their performance.

Waiting for the big breach

I mentioned that in the previous night's State of the Union address, Obama had made a vague reference to a cybersecurity initiative (http://www.whitehouse.gov/blog/2012/01/26/legislation-address-growing-danger-cyber-threats) with a totally unpersuasive claim that it would protect us from attack. Dave retorted that nobody has a good definition of cybersecurity, but that this detail hasn't held back any agency with a shot at getting funds for it from putting forward a cybersecurity strategy. The Army, the Navy, Homeland Security, and others are all looking for new missions now that old ones are winding down, and cybersecurity fills the bill.

The key problem with cybersecurity is that it can't be imposed top-down, at least not on the Internet, which, in a common observation reiterated by Dave, was not designed with security in mind. If people use weak passwords (and given current password cracking speeds, just about any password is weak) and fall victim to phishing attacks, there's little we can do with diktats from the center. I made this point in an article twelve years ago. Dave also pointed out that viruses stay ahead of pattern-matching virus detection software.
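
To put a rough number on the password point, here is a back-of-the-envelope calculation. The guess rate is an assumption chosen for illustration (offline cracking of a fast hash on commodity GPU hardware), not a figure from the conversation.

```python
# Back-of-the-envelope support for "just about any password is weak."
# The guess rate below is an assumption for illustration only.
GUESSES_PER_SECOND = 1e10  # assumed: ~10 billion guesses per second

def worst_case_days(alphabet_size: int, length: int) -> float:
    """Days needed to exhaust the full keyspace at the assumed guess rate."""
    keyspace = alphabet_size ** length
    return keyspace / GUESSES_PER_SECOND / 86_400

print(f"8 lowercase letters: {worst_case_days(26, 8):.4f} days")   # seconds
print(f"8 printable ASCII:   {worst_case_days(95, 8):.1f} days")   # about a week
print(f"12 printable ASCII:  {worst_case_days(95, 12):.2e} days")  # over a million years
```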

Security will therefore need to be rethought drastically, as part of the new network that will replace the Internet. In the meantime, catastrophe could strike--and whoever is in the Administration at the time will have to face public wrath.

Odds without ends

We briefly discussed FCC regulation, where Farber tends to lean toward asking the government to forbear. He acknowledged the merits of arguments made by many Internet supporters that the FCC tremendously weakened the chances for competition in 2002 when it classified cable Internet as a Title I service. This shielded the cable companies from regulations under a classification designed back in early Internet days to protect the mom-and-pop ISPs. And I pointed out that the cable companies have brazenly sued the FCC to win court rulings saying the companies can control traffic any way they choose. But Farber says there are still ways to bring in the FCC and other agencies, notably the Federal Trade Commission, to enforce anti-trust laws, and that these agencies have been willing to act to shut down noxious behavior.

Dave and I shared other concerns about the general deterioration of modern infrastructure, affecting water, electricity, traffic, public transportation, and more. An amateur pilot, Dave knows some things about the air traffic systems that make one reluctant to fly. But there are few simple fixes. Commercial air flights are safe partly because pilots possess great sense and can land a plane even in the presence of confusing and conflicting information. On the other hand, Dave pointed out that mathematicians lack models to describe the complexity of such systems as our electrical grid. There are lots of areas for progress in data science.

March 05 2010

Report from HIMSS Health IT conference: building or bypassing infrastructure

Today the Healthcare Information and
Management Systems Society (HIMSS)
conference wrapped up. In
previous blogs, I laid out the benefits of risk-taking in health care
IT (http://radar.oreilly.com/2010/03/report-from-himms-health-it-co.html),
followed by my main theme, interoperability and openness
(http://radar.oreilly.com/2010/03/report-from-himms-health-it-co-1.html).
This blog will cover a few topics
about a third important issue, infrastructure.

Why did I decide this topic was worth a blog? When physicians install
electronic systems, they find that they need all kinds of underlying
support. Backups and high availability, which might have been
optional or haphazard before, now have to be professional. Your
patient doesn't want to hear, "You need an antibiotic right away, but
we'll order it tomorrow when our IT guy comes in to reboot the
system." Your accounts manager would be almost as upset if you told
her that billing will be delayed for the same reason.

Network bandwidth

An old sales pitch in the computer field (which I first heard at
Apollo Computer in the 1980s) goes, "The network is the computer." In
the coming age of EHRs, the network is the clinic. My family
practitioner (in an office of five practitioners) had to install a T1
line when they installed an EHR. In eastern Massachusetts, whose soil
probably holds more T1 lines than maple tree roots, that was no big
deal. It's considerably more problematic in an isolated rural area
where the bandwidth is more comparable to what I got in my hotel room
during the conference (particularly after 10:30 at night, when I'm
guessing a kid in a nearby room joined an MMPG). One provider from the
Midwest told me that the incumbent charges $800 per month for a T1.
Luckily, he found a cheaper alternative.

So the FCC is involved in health care now
(http://www.fcc.gov/cgb/rural/rhcp.html). Bandwidth is perhaps their
main focus at the moment, and
they're explicitly tasked with making sure rural providers are able to
get high-speed connections. This is not a totally new concern; the
landmark 1996 Telecom Act included rural health care providers in its
universal service provisions. I heard one economist deride the
provision, asking what was special about rural health care providers
that they should get government funding. Fifteen years later, I think
rising health care costs and deteriorating lifestyles have answered
that question.

Wireless hubs

The last meter is just as important as the rest of your network, and
hospitals with modern, technology-soaked staff are depending
increasingly on mobile devices. I chatted with the staff of a small
wireless company called Aerohive that aims its products at hospitals.
Its key features are:

Totally cable-free hubs

Not only do Aerohive's hubs communicate with your wireless endpoints,
they communicate with other hubs and switches wirelessly. They just
make the hub-to-endpoint traffic and hub-to-hub traffic share the
bandwidth in the available 2.4 and 5 GHz ranges. This allows you to
put them just about anywhere you want and move them easily.

Dynamic airtime scheduling

The normal 802.11 protocols share the bandwidth on a packet-by-packet
basis, so a slow device can cause all the faster devices to go slower
even when there is empty airtime. I was told that an 802.11n device
can go slower than an 802.11b device if it's remote and its signal has
to go around barriers. Aerohive just checks how fast packets are
coming in and allocates bandwidth on that ratio, like time-division
multiplexing. If your device is ten times faster than someone else's
and the bandwidth is available, you can use ten times as much
bandwidth.
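
A toy model may make the contrast clearer. In the sketch below, each active client gets an equal slice of airtime, so its throughput is proportional to its own link rate; with the packet-by-packet sharing described above, every client would instead sink toward the slowest device's speed. This is a conceptual illustration with invented rates, not Aerohive's actual scheduler.

```python
# Toy model of airtime fairness: share airtime, not packet count, so a slow
# 802.11b client no longer drags 802.11n clients down to its speed.
# Conceptual sketch only; rates are invented.

def equal_airtime_throughput(link_rates_mbps: dict[str, float]) -> dict[str, float]:
    """Per-client throughput when airtime is shared equally among clients."""
    share = 1.0 / len(link_rates_mbps)  # equal airtime slice per client
    return {client: rate * share for client, rate in link_rates_mbps.items()}

clients = {"laptop-11n": 130.0, "tablet-11n": 65.0, "scanner-11b": 11.0}
for name, mbps in equal_airtime_throughput(clients).items():
    print(f"{name}: ~{mbps:.1f} Mbps")  # fast clients keep most of their speed
```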

Dynamic rerouting

Aerohive hubs use mesh networking and an algorithm somewhat like
Spanning Tree Protocol to reconfigure the network when a hub is added
or removed. Furthermore, when you authenticate with one hub, its
neighbors store your access information so they can pick up your
traffic without taking time to re-authenticate. This makes roaming
easy and allows you to continue a conversation without a hitch if a
hub goes down.
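
The rerouting idea can be sketched as a graph problem: keep a loop-free spanning tree over the hub-to-hub links and recompute it when a hub disappears. The sketch below uses networkx for brevity and is purely illustrative of the spanning-tree analogy; it is not Aerohive's protocol, and the hub names and link weights are invented.

```python
# Sketch of the spanning-tree analogy: model hubs and wireless links as a
# graph, keep a loop-free tree for forwarding, and rebuild it when a hub
# drops out. Purely illustrative; not Aerohive's protocol.
import networkx as nx

mesh = nx.Graph()
mesh.add_weighted_edges_from([
    ("hub-A", "hub-B", 1), ("hub-B", "hub-C", 1),
    ("hub-A", "hub-C", 2), ("hub-C", "hub-D", 1),
])

print("initial tree:", sorted(nx.minimum_spanning_tree(mesh).edges()))

# hub-C fails: remove it and rebuild a tree over each surviving fragment
mesh.remove_node("hub-C")
for fragment in nx.connected_components(mesh):
    tree = nx.minimum_spanning_tree(mesh.subgraph(fragment))
    print("recomputed:", sorted(tree.edges()) or sorted(fragment))
```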

Security checking at the endpoint

Each hub has a built-in firewall so that no unauthorized device can
attach to the network. This should be of interest in an open, public
environment like a hospital where you have no idea who's coming in.

High bandwidth

The top-of-the-line hub has two MIMO radios, each with three
directional antennae.

Go virtual, part 1

VMware has customers in health care
(http://www.vmware.com/solutions/industry/healthcare/case-studies.html),
as in other industries. In addition, they've incorporated
virtualization into several products from medical equipment and
service vendors:

Radiology

Hospitals consider these critical devices. Virtualization here
supports high availability.

Services

A transcription service could require ten servers. Virtualization can
consolidate them onto one or two pieces of hardware.

Roaming desktops

Nurses often move from station to station. Desktop virtualization
allows them to pull up the windows just as they were left on the
previous workstation.

Go virtual, squared

If all this talk of bandwidth and servers brings pain to your head as
well as to the bottom line, consider heading into the cloud. At one
talk I attended today on cost analysis, a hospital administrator
reported that about 20% of their costs went to server hosting. They
saved a lot of money by rigorously eliminating unneeded backups, and a
lot on air conditioning by arranging their servers more efficiently.
Although she didn't discuss Software as a Service, those are a couple
examples of costs that could go down if functions were outsourced.

Lots of traditional vendors are providing their services over the Web
so you don't have to install anything, and several companies at the
conference are entirely Software as a Service. I mentioned Practice
Fusion (http://www.practicefusion.com/) in my
previous blog. At the conference, I asked them three key questions
pertinent to Software as a Service.

Security

This is the biggest question clients ask when using all kinds of cloud
services (although I think it's easier to solve than many other
architectural issues). Practice Fusion runs on HIPAA-compliant
Salesforce.com servers.

Data portability

If you don't like your service, can you get your data out? Practice
Fusion hasn't had any customers ask for their data yet, but upon
request they will produce a DVD containing your data in CSV files, or
in other common formats, overnight.

Extendibility

As I explained in my previous blog, clients increasingly expect a
service to be open to enhancements and third-party programs. Practice
Fusion has an API in beta, and plans to offer a sandbox on their site
for people to develop and play with extensions--which I consider
really cool. One of the API's features is to enforce a notice to the
clinician before transferring sensitive data.

The big selling point that first attracts providers to Practice Fusion
is that it's cost-free. They support the service through ads, which
users tell them are unobtrusive and useful. But you can also pay to
turn off ads. The service now has 30,000 users and is adding about 100
each day.

Another SaaS company I mentioned in my previous blog is Covisint
(http://www.covisint.com/). Their service is
broader than Practice Fusion, covering not only patient records but
billing, prescription ordering, etc. Operating also as an HIE, they
speed up access to data on patients by indexing all the data on each
patient in the extended network. The actual data, for security and
storage reasons, stays with the provider. But once you ask about a
patient, the system can instantly tell you what sorts of data are
available and hook you up with the providers for each data set.
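
The record-locator pattern described here is easy to sketch: the exchange stores only an index of which providers hold which kinds of data for a patient, never the clinical data itself. The names and fields below are invented for illustration and do not reflect Covisint's actual design.

```python
# Sketch of the record-locator pattern: index who holds what for each
# patient; the clinical data stays with the provider. Invented names only.
from collections import defaultdict

# index[patient_id][data_type] -> providers that hold that data
index: defaultdict = defaultdict(lambda: defaultdict(set))

def register(patient_id: str, data_type: str, provider_id: str) -> None:
    """Called as providers publish metadata about the records they hold."""
    index[patient_id][data_type].add(provider_id)

def locate(patient_id: str) -> dict:
    """Tell a clinician what data exists for a patient and whom to ask."""
    return {dtype: sorted(providers)
            for dtype, providers in index[patient_id].items()}

register("patient-123", "labs", "community-hospital")
register("patient-123", "imaging", "radiology-group")
print(locate("patient-123"))
# {'labs': ['community-hospital'], 'imaging': ['radiology-group']}
```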

Finally, I talked to the managers of a nimble new company called
CareCloud (http://carecloud.com/), which will start serving
customers in early April. CareCloud, too, offers a range of services
in patient health records, practice management, and revenue cycle
management. It was built entirely on open source software--Ruby on
Rails and a PostgreSQL database--while using Flex to build their
snazzy interface, which can run in any browser (including the iPhone,
thanks to Adobe's upcoming translation to native code). Their
strategy is based on improving
physicians' productivity and the overall patient experience through a
social networking platform. The interface has endearing Web 2.0 style
touches such as a news feed, SMS and email confirmations, and
integration with Google Maps.

And with that reference to Google Maps (which, in my first blog, I
complained about mislocating the address 285 International Blvd NW for
the Georgia World Congress Center--thanks to the Google Local staff
for getting in touch with me right after a tweet) I'll end my coverage
of this year's HIMSS.
