
June 14 2011

Report from first health care privacy conference

Strange that a conference on health privacy has never been held before--or so I'm told. Privacy in health care is the first topic raised whenever someone talks about electronic health records--and dominates the discussion from then on--or, on the other hand, is dismissed as an overblown concern not worthy of serious attention. But today a conference was held on the subject, organized by the University of Texas's Lyndon B. Johnson School of Public Affairs and held just a few blocks from the Capitol building at the Georgetown Law Center as a preconference to the august Computers, Freedom & Privacy conference.

The Goldilocks dilemma in health privacy

Policy experts seem to fall into three camps regarding health privacy. The privacy maximalists include the organizers of this conference, notably Patient Privacy Rights, as well as the well-known Electronic Privacy Information Center and a number of world-renowned experts, including Alan Westin, Ross Anderson from Cambridge University, Canadian luminary Stephanie Perrin, and Carnegie Mellon's indefatigable Latanya Sweeney (who couldn't attend today but submitted a presentation via video). These people talk of the risks of re-identifying data that was supposed to be de-identified, and highlight all the points in both current and proposed health systems where intrusions can occur.

On the other side stand a lot of my closest associates in the health care area, who intensely dislike Patient Privacy Rights and accuse it of exaggerations and mistruths. The privacy minimalists assert that current systems provide pretty good protection, that attacks on the average person are unlikely (except from other people in his or her life, which are hard to fight systematically), and that an over-concern for privacy throws sand in the machinery of useful data exchange systems that can fix many of the problems in health care. (See, for instance, my blog on last week's Health Data Initiative Forum.)

In between the maximalists and minimalists lie the many people trying to adapt current systems to the complex needs of modern health care with an eye toward privacy--those who want to get it "just right." The Direct Project (discussed today by Joy Pritts, Chief Privacy Officer of the Office of the National Coordinator) is an example of these pragmatic approaches.

It so happens that the American public can also be divided into these three camps, as Westin explained in his keynote. Some will go to great lengths to conceal their data and want no secondary uses without their express permission. Others have nothing to hide, and most of us lie in between. It is sobering, though, to hear that Americans in surveys declare that they don't trust what insurers, employers, and marketers will do with their health data. What's more disturbing is that Americans don't trust researchers either. Those who take on the mantle of the brave biological explorer acting in the highest public interest must ask why ordinary people question their devotion to the public's needs.

The dilemma of simplicity: technical solutions may not be implementable

As technologist Wes Rishel pointed out, technical solutions can often be created that solve complex social problems in theory, but prove unfeasible to deploy in practice. This dilemma turns up in two of the solutions often proposed for health privacy: patient consent and data segmentation.

It's easy to say that no data should be used for any purpose without express consent. For instance, Jessica Rich from the FTC laid out an iron-clad program that a panel came up with for protecting data: systems must have security protections built in, should not collect or store any more data than necessary, and should ensure accuracy. It is understood that sharing may be necessary during treatment, but the data should be discarded when no longer needed. Staff that don't need to know the data (such as receptionists and billing staff) should not have access. Indeed, Rich challenged the notion of consent, saying it is a good criterion for non-treatment sharing (such as web sites that offer data to patients) but that in treatment settings, certain things should be taken as a given.

But piercing the ground with the stake of consent reveals the quicksand below. We don't even trace all the ways in which data is shared: reports for public health campaigns, billing, research, and so on. Privacy researchers have trouble figuring out where data goes. How can doctors do it, then, and explain it to patients? We are left with the notorious 16-page privacy policies that no one reads.

Most patients don't want to be bothered every time their data needs to be shared, and sometimes (such as where public health is involved), we don't want to give them the right to say no. In one break-out session about analytics, some people said that public health officials are too intrusive and that few people would opt out if they were given a choice about whether to share data. But perhaps the people likely to opt out are precisely the ones with the conditions we need to track.

Helen Nissenbaum of NYU suggested replacing the notion of "consent" with one of "appropriateness." But another speaker said that everyone in the room has a different notion of what is appropriate to share, and when.

The general principle here--found in any security system--is that any technology that's hard to use will not be used. The same applies to the other widely pushed innovation, segmented data.

The notion behind segmentation is that you may choose to release only a particular type of data--such as to show a school your vaccination record--or to suppress a particular type, such as HIV status or mental health records. Segmentation was a major feature of an influential report by the President's Council of Advisors on Science and Technology.

Like consent, segmentation turns out to be complex. Who will go through a checklist of 60 items to decide what to release each time he is referred to a specialist? Furthermore, although it may be unnecessary for a doctor treating you for a broken leg to know you have a sexually transmitted disease, there may be surprising times when seemingly unrelated data is important. So patients can't use segmentation well without a lot of education about risks.

And their attempts at segmentation may be undermined in any case. Even if you suppress a diagnosis, some other information--such as a drug you're taking--may be used to infer that you have the condition.
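The inference problem is easy to demonstrate. Here is a toy sketch (the drug-to-condition mapping is purely illustrative, not a clinical reference) showing how a suppressed diagnosis leaks back out of an unsuppressed medication list:

```python
# Toy mapping for illustration only -- not a clinical reference.
CONDITION_BY_DRUG = {
    "zidovudine": "HIV",
    "lithium": "bipolar disorder",
    "metformin": "type 2 diabetes",
}

def infer_conditions(medications):
    """Conditions implied by a medication list, even when the diagnosis
    field itself has been segmented out of the released record."""
    return {CONDITION_BY_DRUG[m] for m in medications if m in CONDITION_BY_DRUG}
```

A record released with its diabetes diagnosis suppressed but metformin still on the medication list gives the diagnosis away to anyone holding even a crude table like this one.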

A certain fatalism sometimes hung over the conference. One speaker went so far as to suggest a "moratorium" on implementing new health record systems until we have figured out the essential outlines of solutions, but even she offered it only as a desperate speculation, knowing that the country needs new systems. And good models for handling data certainly exist.

Here is the strenuous procedure that the Centers for Medicare & Medicaid Services (CMS) engage in when they release data sets. Each set of data (a Public Use File) represents a particular use of CMS payments: inpatient, outpatient, prescription drugs, etc. The procedure, which I heard described at two conferences last week, is as follows:

  1. They choose a random 5% sample of the people who use particular payments. These samples are disjoint, meaning that no person is used in more than one sample. Because they cover tens of millions of individuals, a small sample can still be a huge data set.

  2. They perform standard clean-up, such as fixing obvious errors.

  3. They generalize the data somewhat. A familiar way to release aggregated results in a way that makes it harder to identify people is to provide only the first three digits of a five-digit ZIP code. Other such fudge factors employed by CMS include offering only age ranges instead of exact ages, and rounding payment amounts.

  4. They check certain combinations of fields to make sure these appear in numerous records. If fewer than 11 people share a certain combination of values, they drop these people.

  5. If they had to drop more than 10% of the people in step 4, they go back to step 3 and try increasing the fudge factors. They iterate through steps 3 and 4 until the data is of a satisfactory size.
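The iterate-until-safe loop in steps 3 through 5 can be sketched in a few lines. This is my own illustration of the idea, not CMS code; the field names, generalization rules, and levels are simplified stand-ins:

```python
from collections import Counter

def generalize(rec, level):
    """Coarsen the quasi-identifiers; a higher level means more fudging."""
    band = 5 * level
    out = dict(rec)
    out['zip'] = rec['zip'][:max(0, 5 - 2 * level)]      # truncate ZIP digits
    out['age'] = (rec['age'] // band) * band             # age ranges
    out['payment'] = round(rec['payment'], -level)       # round payments
    return out

def suppress_rare(records, k):
    """Drop records whose quasi-identifier combination appears fewer than k times."""
    key = lambda r: (r['zip'], r['age'])
    counts = Counter(key(r) for r in records)
    return [r for r in records if counts[key(r)] >= k]

def deidentify(records, k=11, max_drop=0.10, max_level=3):
    """Iterate: generalize more aggressively until fewer than max_drop
    of the records have to be suppressed, then release."""
    for level in range(1, max_level + 1):
        released = suppress_rare([generalize(r, level) for r in records], k)
        if len(released) >= (1 - max_drop) * len(records):
            return released
    return None  # no generalization level met the release criterion
```

The threshold of 11 and the 10% drop limit mirror steps 4 and 5 above; everything else is an assumption for the sake of a runnable sketch.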

Clearly, this procedure works only with data sets on a large scale, not with the limited samples provided by many hospitals, particularly for relatively rare diseases.

Avoidable risks and achievable rewards

As Anderson said, large systems with lots of people have leaks. "Some people will be careless and others will be crooked." As if to illustrate the problem, one of the attendees today told me that Health Information Exchanges could well be on the hook for breaches they can't prevent. They rely on health providers to release the right data to the right health provider. The HIE doesn't contact the patient independently. Any mistake is likely to be the doctor's fault, but the law holds the HIE equally liable. And given a small, rural doctor with few funds, well liked by the public, versus a large corporation, whom do you suppose the patient will sue?

I can't summarize all the questions raised at today's conference--which offered one of the most impressive rosters of experts I've seen at any one-day affair--but I'll list some of the challenges identified by a panel on technology.

  • Use cases to give us concrete material for discussing solutions

  • Mapping the flows of data, also to inform policy discussions

  • Data stewardship--is the data in the hands of the patient or the doctor, and who is most trustworthy for each item?

  • Determining how long data needs to be stored, especially given that ways to crack de-identified data will improve over time

  • Reducing the fatigue mentioned earlier for consent and segmentation

  • Identifying different legal jurisdictions and harmonizing their privacy regulations

  • Identifying secondary levels of information, such as the medication that indirectly reveals the patient's condition


Some of the next steps urged by attendees and speakers at the conference include:

  • Generating educational materials for the public, for doctors, and for politicians

  • Making health privacy a topic for the Presidential campaign and other political debates

  • Offering clinicians guidelines to build privacy into procedures

  • Seeking some immediate, achievable goals, while also defining a long-term agenda under the recognition that change is hard

  • Defining a research agenda

  • Educating state legislatures, which are getting more involved in policy around health care

March 28 2011

Computers, Freedom, and Privacy enters 21st year at a moment of hot debate

The legendary and redoubtable Computers, Freedom, and Privacy conference takes place this year from June 14-16 in Washington, DC, just steps from the Capitol building where many of the conference topics are under discussion. Yesterday I talked to Lillie Coney of the Electronic Privacy Information Center and Jules Polonetsky of the Future of Privacy Forum, cochairs of CFP this year, about what makes the conference unique and how it will illuminate the pressing issues of Twitter revolutions (or whatever role the Internet may play), surveillance and tracking, security of personal health data, and more.

Conference organizers are reaching out to Congress and administration
leaders, confident that high-level representatives will speak (as well
as listen) this year as they have in the past. But one of the
conference's strengths is to raise its eyes past the United States and
look at other places in the world where online privacy is being
handled better, or not as well. Many of the leaders who have made
headlines in recent revolutions have been invited.

Listen to the interview (MOV file) for more information about CFP and how you can get involved.

March 19 2010

Current activities at the Electronic Privacy Information Center

When Marc Rotenberg founded the Electronic
Privacy Information Center
in 1994, I doubt he realized how fast
their scope would swell as more and more of our lives became digitized
and networked. Now it seems like everything that happens in society
has an electronic component and a privacy component. I had the chance
to drop in to their office on Monday and heard about the
front-burner items they're working on.

  • Whole-body imaging in airports, a very hot issue right now. While
    Americans push back against it, the European Union has to vote on it

  • The Smart Grid: a massive upgrade planned for the American system for
    delivering electricity across the nation as well as over the last mile
    to your home. Could the Smart Grid tell marketers your lifestyle?

  • Privacy of text messaging. EPIC is very active on City of Ontario
    v. Quon, where the government asserts that using a city-issued
    device allows the city to read all of the employee's messages.

  • Freedom of Information Act. Why are government agencies (except for a
    few exemplary ones) fulfilling a smaller percentage of
    demands during the Obama administration than they did during the
    Bush administration?

  • Ballot initiatives. EPIC has argued in Doe v. Reed that
    signing a petition to put a question on a ballot should be private,
    like voting.

And if you visit the EPIC home page this week, you'll see that
they're following even more diverse issues: the FCC broadband
proposal, consumer privacy, data retention by ISPs, etc. They were
interested to hear what I've been learning recently about privacy in
electronic health records.

EPIC has been remarkably effective over the years as an organization
with about a dozen staff (mostly young and idealistic rather than
canny and seasoned) and no cash-wielding lobbyists. They haven't
compromised their principles in the dozen years I've been following
them, yet they not only get to the table most of the time but usually
manage to bend the decision their way.

I attribute this success to single-mindedness (they can nail the
privacy chink in any initiative), persistence, and coalition-building with
like-minded organizations (leading the Privacy Coalition,
collaborating with London's Privacy International,
among other organizations around the world, and working closely with such
natural allies as the ACLU)--but mostly to knowing their stuff cold. They
sail into debate with a full understanding of the technical details as
well as the legal issues that impinge on their position.

The Smart Grid is an excellent example of how EPIC investigates an
issue early in its existence and homes in on the dark underside. The
Smart Grid is a buzzword covering changes that should save us huge
amounts of electricity lost in old, inefficient switches, as well as
improve the efficiency of energy delivery in neighborhoods. A key part
of the Smart Grid is monitoring and logging our electricity usage,
building by building and even machine by machine.

In this futuristic vision, the electric utility would know when you've
started your air conditioner or clothes dryer and could send you
messages suggesting new patterns of behavior that will relieve
pressure on the grid and save you money as well. This is nice, but it
also means the electric utility basically knows how you lead your life.

Traffic analysis on your device usage could show who stays home during
the day, when kids come home from school, and who plays video games
(heavy electricity usage from a home computer) late at night.
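A toy sketch shows how little analysis such inference requires. The baseline and threshold figures here are invented purely for illustration:

```python
def likely_home(readings_kwh, baseline=0.2, threshold=0.5):
    """Toy traffic analysis on 15-minute smart-meter readings: flag the
    intervals whose usage sits well above an idle baseline, suggesting
    someone is home and running appliances.  All figures are invented."""
    return [i for i, kwh in enumerate(readings_kwh)
            if kwh > baseline + threshold]
```

Run over a day of readings, the flagged intervals trace out a household's schedule--exactly the pattern a marketer (or investigator) would want.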

Currently no one has discussed who controls this data. Implicitly, it
is left in the hands of the utility, which is free to sell it like any
other information. There is little doubt that advertisers would love
to get their hands on this information. So would the government, I
bet--remember when police were scanning homes for evidence of
marijuana cultivation? EPIC would like the information to be in the
hands of the consumer.

A bill just introduced by Representative Ed Markey, the "Electric
Consumer Right to Know Act" (H.R. 4860), would inform electricity
users of their energy usage in a form they could process on a computer
or other device, typically every 15 minutes. The bill mandates a smart
meter that "provides adequate protections for the security of such
information and the privacy of such electric consumer." It doesn't go
into any more detail about what the utility could do with the information.

The ambiguous ownership of Smart Grid data illustrates why privacy is
such a hard turf to defend, once you have declared your jurisdiction
over it as EPIC has done. Data flows from one place to
another--whether from the electric meter to your cell phone, your
camera to Facebook, or your vendor to your bank--and is therefore
intrinsically shared. Privacy is an umbrella term that encompasses
attempts to set limits or impose rules on all these types of sharing.

In trying to protect our privacy, EPIC is swimming against the tide, of
course, but what's really challenging is how data collection and
dissemination have shifted. When EPIC started, most electronic data was
held by large institutions, which made ready targets for EPIC's legal
challenges. Now each person is his or her own worst enemy, freely
sharing personal information, pictures, and videos online--a
phenomenon termed Little Brother.

Cameras and sensors are also creating millions of new sources for
data, while advances in data mining and analysis allow people to learn
more from the data than ever before.

I think EPIC is handling this shift well. They stay focused on policy
rather than pursuing the idealistic but impractical course of training
people to use privacy safeguards and protect themselves. There are
just too many ways to weasel data out of us, some of which will never
be under our control, and most people just can't learn everything they
need to know to be safe, whether it be about Web proxies, Flash
cookies, or document metadata.

EPIC demands that institutions take responsibility for privacy,
designing it into their systems. A recent, well publicized example of
this doctrine was their complaint to the FTC about Facebook's changes
to privacy settings in December 2009. EPIC doesn't believe it's enough
to boast about flexibility and user control--something that endangers
the 99.9% of users who don't understand how to change a default is a
violation of users' rights.

But EPIC is neither rigid nor abstentionist. They may complain about
Facebook, but they maintain a Facebook page. They're fully engaged with
the new electronic age; they just want it to serve its users rather than
a few centralized institutions, and as privacy advocates they're not shy
about letting us know what they think.

March 04 2010

Report from HIMSS Health IT conference: toward interoperability and openness

Yesterday and today I spent once again at the Healthcare Information and
Management Systems Society (HIMSS) conference in Atlanta, rushing from panel
session to vendor booth to interoperability demo and back (or
forward--I'm not sure which direction I've been going). All these
peregrinations involve a quest to find progress in the areas of
interoperability and openness.

The U.S. has a mobile population, bringing their aches and pains to a
plethora of institutions and small providers. That's why health care
needs interoperability. Furthermore, despite superb medical research,
we desperately need to share more information and crunch it in
creative new ways. That's why health care needs openness.

My blog yesterday covered risk-taking; today I'll explore the reasons it's
so hard to create change.

The health care information exchange architecture

Some of the vendors I talked to boasted of being in the field for 20
years. This gives them time to refine and build on their offerings,
but it tends to reinforce approaches to building and selling software
that were prominent in the 1980s. These guys certainly know what the
rest of the computer field is doing, such as the Web, and they reflect
the concerns for interoperability and openness in their own ways. I
just feel that what I'm seeing is a kind of hybrid--more marsupial
than mammal.

Information exchange in the health care field has evolved the
following architecture:

Electronic medical systems and electronic record systems

These do all the heavy labor that makes health care IT work (or fail).
They can be divided into many categories, ranging from the simple
capturing of clinical observations to incredibly detailed templates
listing patient symptoms and treatments. Billing and routine workflow
(practice management) are other categories of electronic records that
don't strictly speaking fall into the category of health records.
Although each provider traditionally has had to buy computer systems
to support the software and deal with all the issues of hosting it,
Software as a Service has come along in solutions such as Practice Fusion.

Services and value-added applications

As with any complex software problem, nimble development firms partner
with the big vendors or offer add-on tools to do what health care
providers find too difficult to do on their own.

Health information exchanges (HIEs)

Eventually a patient has to see a specialist or transfer records to a
hospital in another city--perhaps urgently. Partly due to a lack of
planning, and partly due to privacy concerns and other particular
issues caught up in health care, transferring records is not as simple as an ordinary database query or web search. So record transfer is a whole industry of its
own. Some institutions can transfer records directly, while others
have to use repositories--paper or electronic--maintained by states or
other organizations in their geographic regions.

HIE software and Regional Health Information Organizations

The demands of record exchange create a new information need that's
filled by still more companies. States and public agencies have also
weighed in with rules and standards through organizations called
Regional Health Information Organizations.

Let's see how various companies and agencies fit into this complicated
landscape. My first item covered a huge range of products that
vendors don't like to have lumped together. Some vendors, such as the
Vocera company I mentioned in yesterday's blog and 3M,
offer products that capture clinicians' notes, which can be a job in
itself, particularly through speech recognition. Emdeon covers billing, and adds validity
checking to increase the provider's chances of getting reimbursed the
first time they submit a bill. There are many activities in a doctor's
office, and some vendors try to cover more than others.

Having captured huge amounts of data--symptoms, diagnoses, tests
ordered, results of those tests, procedures performed, medicines
ordered and administered--these systems face their first data exchange
challenge: retrieving information about conditions and medicines that
may make a critical difference to care. For instance, I saw a cool
demo at the booth of Epic, one of
the leading health record companies. A doctor ordered a diuretic that
has the side-effect of lowering potassium levels. So Epic's screen
automatically brought up the patient's history of potassium levels
along with information about the diuretic.

Since no physician can keep all the side-effects and interactions
between drugs in his head, most subscribe to databases that keep track
of such things; the most popular company that provides this data is First DataBank. Health record
systems simply integrate the information into their user interfaces.
As I've heard repeatedly at this conference, the timing and delivery
of information is just as important as having the information; the
data is not of much value if a clinician or patient has to think about
it and go searching for it. And such support is central to the HITECH
act's meaningful use criteria, mentioned in yesterday's blog.

So I asked the Epic rep how this information got into the system. When
the physicians sign up for the databases, the data is sent in simple
CSV files or other text formats. Although different databases are
formatted in different ways, the health record vendor can easily read
it in and set up a system to handle updates.
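A minimal sketch of such an import, assuming hypothetical column names (First DataBank's actual file formats are licensed and will differ):

```python
import csv
import io

def load_drug_updates(csv_text):
    """Read a drug-knowledge update file into a dict keyed by drug name,
    so the record system can consult it at ordering time.  The column
    names here are hypothetical, not any vendor's actual schema."""
    table = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        table[row["drug"]] = {
            "interacts_with": row["interacts_with"].split(";") if row["interacts_with"] else [],
            "side_effects": row["side_effects"].split(";") if row["side_effects"] else [],
        }
    return table
```

A row recording that a diuretic depletes potassium would drive exactly the kind of prompt the Epic demo showed; the hard part is keeping the table current, not parsing it.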

Variations on this theme turn up with other vendors. For instance, NextGen Healthcare contracts
directly with First DataBank so they can integrate the data intimately
with NextGen's screens and database.

So where does First DataBank get this data? They employ about 40
doctors to study available literature, including drug manufacturers'
information and medical journals. This leads to a constantly updated,
independent, reliable source for doses, side-effects,
contraindications, etc.

This leads to an interesting case of data validity. Like any
researchers--myself writing this blog, for instance--First DataBank
could theoretically make a mistake. Their printed publications include
disclaimers, and they require the companies who license the data to
reprint the disclaimers in their own literature. But of course, the
disclaimer does not pop up on every dialog box the doctor views while
using the product. Caveat emptor...

Still, decision support as a data import problem is fairly well
solved. When health record systems communicate with each other,
however, things are not so simple.

The challenges in health information exchange: identification

When a patient visits another provider who wants to see her records,
the first issue the system must face is identifying the patient at the
other provider. Many countries have universal IDs, and therefore
unique identifiers that can be used to retrieve information on a
person wherever she goes, but the United States public finds such
forms of control anathema (remember the push-back over Real ID?).
There are costs to restraining the information state: in this case,
the hospital you visit during a health crisis may have trouble
figuring out which patient at your other providers is really you.

HIEs solve the problem by matching information such as name, birth
date, age, gender, and even cell phone number. One proponent of the
federal government's Nationwide Health Information Network told me
it can look for up to 19 fields
of personal information to make a match. False positives are
effectively eliminated by strict matching rules, but legitimate
records may be missed.
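The matching logic can be sketched as a simple field-agreement score with a strict cutoff. The field names and the cutoff of 4 are illustrative assumptions, not the NHIN's actual rules:

```python
def match_score(a, b, fields=("name", "birth_date", "gender", "zip", "phone")):
    """Count the demographic fields on which two patient registrations
    agree.  The field list is illustrative; real exchanges compare many more."""
    return sum(1 for f in fields if a.get(f) and a.get(f) == b.get(f))

def is_same_patient(a, b, required=4):
    """Strict deterministic rule: demand near-total agreement, trading
    missed matches (false negatives) for near-zero false positives."""
    return match_score(a, b) >= required
```

The cutoff embodies the trade-off described above: raise it and false positives vanish, but a patient who moved or changed phone numbers silently fails to match.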

Another issue HIEs face is obtaining authorization to share health
data, which is among the most sensitive information in ordinary
people's lives. When requesting data from another provider, the clinician has
to log in securely and then offer information not only about who he is
but why he needs the data. The sender, for many reasons, may say no:

  • Someone identified as a VIP, such as a movie star or high-ranking
    politician, is automatically protected from requests for information.

  • Some types of medical information, such as HIV status, are considered
    especially sensitive and treated with more care.

  • The state of California allows ordinary individuals to restrict the
    distribution of information at the granularity of a single institution
    or even a single clinician, and other states are likely to do the
    same.
Thus, each clinician needs to register with the HIE that transmits the
data, and accompany each request with a personal identifier as well as
the type of information requested and the purpose. One service I
talked to, Covisint, can query
the AMA if necessary to verify the unique number assigned to each
physician in the U.S., the Drug Enforcement Administration (DEA) number.
(This is not the intended use of a DEA number, of course; it was
created to control the spread of pharmaceuticals, not data.)

One of the positive impacts of all this identification is that some
systems can retrieve information about patients from a variety of
hospitals, labs, pharmacies, and clinics even if the requester doesn't
know where it is. It's still up to them to determine whether to send
the data to the requester. Currently, providers exchange a Data Use
and Reciprocal Support Agreement (DURSA) to promise that information
will be stored properly and used only for the agreed-on purpose.
Exchanging these documents is currently cumbersome, and I've been told
the government is looking for a way to standardize the agreement so
the providers don't need to directly communicate.

The challenges in health information exchange: format

Let's suppose we're at the point where the owner of the record has
decided to send it to the requester. Despite the reverence expressed
by vendors for HL7 and other
standards with which the health care field is rife, documents require
a good deal of translation before they can be incorporated into the
receiving system. Each vendor presents a slightly different challenge,
so to connect n different products the industry has to implement on
the order of n² different transformations.
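The arithmetic behind that burden, and the appeal of hub-style exchanges built around one shared format, is simple to state:

```python
def point_to_point(n):
    """Directed translations needed if every product maps straight to every
    other: n choices of sender times n-1 receivers."""
    return n * (n - 1)

def via_shared_format(n):
    """Translations needed if every product instead maps to and from one
    common exchange format: an importer and an exporter per product."""
    return 2 * n
```

With just 10 products, point-to-point connection demands 90 translations against 20 through a shared format, and the gap widens quadratically as vendors enter the market.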

Reasons for this lack of interoperability lie at many levels:

Lack of adherence to standards

Many vendors created their initial offerings before applicable
standards existed, and haven't yet upgraded to the standards or still
offer new features not covered by standards. The meaningful use
criteria discussed in yesterday's blog will accelerate the move to

Fuzzy standards

Like many standards, the ones that are common in the medical field
leave details unspecified.

Problems that lie out of scope

The standards tend to cover the easiest aspect of data exchange, the
document's format. As an indication of the problem, the 7 in HL7
refers to the seventh (application) layer of the ISO OSI model. Brian
Behlendorf of Apache fame, now consulting with the federal government
to implement the NHIN, offers the following analogy. "Suppose that we
created the Internet by standardizing HTML and CSS but saying nothing
about TCP/IP and DNS."

Complex standards

As in other fields, the standards that work best in health records are
simple ones. There is currently a debate, for instance, over whether
to use the CCR or CCD exchange format for patient data. The trade-off
seems to be that the newer CCD is richer and more flexible but a lot
harder to support.


As one example, the University of Pittsburgh Medical Center tried to
harmonize its problem lists and found that a huge number of
patients--including many men--were coded as smoking during pregnancy.
They should have been coded with a general tobacco disorder. As Dr.
William Hogan said, "People have an amazing ability to make a standard
do what it's not meant to do, even when it's highly specified."

So many to choose from

Dell/Perot manager Jack Wankowski told me that even though other
countries have digitized their health records far more than the U.S.
has, they have a lot fewer published standards. It might seem logical
to share standards--given that people are people everywhere--but in
fact, that's hard to do because diagnosis and treatment are a lot
different in different cultures. Wankowski says, "Unlike other
industries such as manufacturing and financial services, where a lot
can be replicated, health care is very individual on a country by
country basis at the moment. Because of this, change is a lot slower."


The UPMC coded its problem lists in ICD-9-CM instead of SNOMED, even
though SNOMED was far superior in specificity and clarity. Along with
historical reasons, they avoided SNOMED because it was a licensed
product until 2003 whereas ICD-9-CM was free. As for ICD-9-CM, its
official standard is distributed as RTF documents, making correct
adoption difficult.

Here are a few examples of how vendors told me they handle interoperability:

InterSystems is a major
player in health care. The basis of their offerings is Caché,
an object database written in the classic programming language for
medical information processing, MUMPS. (MUMPS was also standardized by
an ANSI committee under the name M.) Caché can be found in all
major hospitals. For data exchange, InterSystems provides an HIE
called HealthShare, which they claim can communicate with other
vendors' systems by supporting HL7 and other appropriate standards.
HealthShare is both communications software and an actual hub that can
create the connections for customers.

Medicity is another key
HIE vendor. Providers can set up their own hubs or contract to use a
hub that Medicity runs in their geographic area. Having a hub means
that a small practice can register just once with the hub and then
communicate with all other providers in that region.

Let's turn again to Epic. Two facilities that both run Epic can
exchange a wide range of data, including data not covered by any
standard. A facility that uses another vendor's product can exchange
a narrower, standards-based set of data with an Epic system over Care
Everywhere. The Epic rep said they will move more and more fields
into Care Everywhere as standards evolve.

What all this comes down to is an enormous redundant infrastructure
that adds no value to electronic records, but merely runs a Red
Queen's Race to provide the value that already exists in those
records. We've already seen that defining more standards has a
limited impact on the problem. But a lot of programmers at this point
will claim the solution lies in open source, so let's see what's
happening in that area.

The open source challengers

The previous sections, like acts of a play, laid out the character of
the vendors in the health care space as earnest, hard-working, and
sometimes brilliantly accomplished, but ultimately stumbling through a
plot whose bad turns overwhelm them. In the current act we turn to a
new character, one who is not so well known nor so well tested, one
who has shown promise on other stages but is still finding her footing
on our proscenium.

The best-known open source projects in health care are OpenMRS, the
Veterans Administration's VistA, and the NHIN CONNECT Gateway. I
won't say anything more about OpenMRS because, despite high praise,
it has made few inroads into American health care. I'll devote a few
paragraphs to the strengths and weaknesses of VistA and CONNECT.
Buzz in the medical world is that VistA beats commercial offerings for
usability and a general fit to clinicians' needs. But it's tailored to
the Veterans Administration and--as a rep for vxVistA put it--has to
be "deveteranized" for general use. That is what vxVistA does, but it
is not open source: the company contributes its changes to the core
back, while keeping its own products proprietary. A community project
called WorldVistA also works on a version of VistA for the
non-government sector.

One of the hurdles of adapting VistA is that one has to learn its
underlying language, MUMPS. Most people who dive in license a MUMPS
compiler. The vxVistA rep knows of no significant users of the free
software MUMPS compiler GT.M. VistA also runs on the Caché
database, mentioned earlier in this article. If you don't want to
license Caché from InterSystems, you need to find some other
database solution.

So while VistA is a bona fide open source project with a community,
its ecosystem does not fit neatly with the habits of most free
software developers.

CONNECT is championed by the same Office of the National Coordinator
for Health Information Technology that is implementing the HITECH
recovery plan and meaningful use. A means for authenticating requests
and sending patient data between providers, CONNECT may well be
emerging as the HIE solution for our age. But it has some maturing to
do as well. It uses a SOAP-based protocol that requires knowledge of
typical SOA-based technologies such as SAML.
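To give a flavor of that machinery: a CONNECT-style request rides in a SOAP envelope whose WS-Security header carries a SAML assertion. The sketch below builds such an envelope; the body element names are placeholders, not the actual NHIN schemas.

```python
# Sketch of the SOAP/SAML plumbing underneath CONNECT-style exchanges.
# Element names inside the Body are hypothetical placeholders.
from xml.sax.saxutils import escape

def soap_request(patient_id, saml_assertion_xml):
    return f"""<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Header>
    <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
      {saml_assertion_xml}
    </wsse:Security>
  </soap:Header>
  <soap:Body>
    <PatientDiscoveryRequest><!-- placeholder, not the real NHIN schema -->
      <PatientId>{escape(patient_id)}</PatientId>
    </PatientDiscoveryRequest>
  </soap:Body>
</soap:Envelope>"""

envelope = soap_request("12345", "<saml:Assertion/>")
print("<wsse:Security" in envelope)  # True
```

Even this skeleton hints at why newcomers find the stack daunting: the security semantics live in a header whose format is governed by a separate family of WS-* specifications.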

Two free software companies that have entered the field to make
installing CONNECT easier are Axial Exchange, which creates open
source libraries and tools to work with the system, and the Mirth
Corporation. Jon Teichrow
of Mirth told me how a typical CONNECT setup at a rural hospital took
just a week to complete, and can run for the cost of just a couple
hours of support time per week. The complexities of handling CONNECT
that make so many people tremulous, he said, were actually much easier
for Mirth than the more typical problem of interpreting the hospital's
idiosyncratic data formats.

Just last week, the government announced a simpler interface to the
NHIN called NHIN Direct. Hopefully, this will bring in a new level of
providers who couldn't afford the costs of negotiating with CONNECT.

CONNECT has certainly built up an active community. Agilex employee
Scott E. Borst, who is
responsible for a good deal of the testing of CONNECT, tells me that
participation in development, testing, and online discussion is
intense, and that two people were recently approved as committers
without being associated with any company or government agency
officially affiliated with CONNECT.

The community is willing to stand up for itself, too. Borst says that
when CONNECT was made open source last year, it came with a Sun-based
development environment including such components as NetBeans and
GlassFish. Many community members wanted to work on CONNECT using
other popular free software tools. Accommodating them was tough at
first, but the project leaders listened to them and ended up with a
much more flexible environment where contributors could use
essentially any tools that struck their fancy.

Buried in a major announcement yesterday about certification for
meaningful use was an endorsement by the Office of the National
Coordinator for open source. My colleague and fellow blogger Brian
Ahier points out that rule 4 for certification programs explicitly
mentions open source as well as self-developed solutions. This will not magically
lead to more open source electronic health record systems like
OpenMRS, but it offers an optimistic assessment that they will emerge
and will reach maturity.

As I mentioned earlier, traditional vendors are moving more toward
openness in the form of APIs that offer their products as platforms.
InterSystems does this with a SOAP-based interface called Ensemble,
for instance. Eclipsys,
offering its own SOAP-based interface called Helios, claims that it
wants an app store on top of its product--and that it will not kick
off applications that compete with its own.

Web-based Practice Fusion has an API in beta, and is also planning an
innovation that makes me really excited: a sandbox provided by their
web site where developers can work on extensions without having to
download and install software.

But to a long-time observer such as Dr. Adrian Gropper, founder of the
MedCommons storage service,
true open source is the only way forward for health care records. He
says we need to replace all those SOAP and WS-* standards with RESTful
interfaces, perform authentication over OpenID and OAuth, and use the
simplest possible formats. And only an enlightenment among the major
users--the health care providers--will bring about the revolution.
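Gropper's vision looks, in miniature, like the snippet below: a plain HTTP GET for a record, authorized with an OAuth bearer token, expecting a simple format like JSON. The endpoint and token are hypothetical; the point is how little ceremony remains once the SOAP/SAML layers are gone.

```python
# A RESTful record fetch in the style Gropper advocates. The endpoint,
# record ID, and token are hypothetical. The request is built but not
# sent, to keep the sketch self-contained.
import urllib.request

def build_record_request(base_url, record_id, token):
    req = urllib.request.Request(f"{base_url}/records/{record_id}")
    # An OAuth 2.0 bearer token replaces the WS-Security/SAML machinery.
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header("Accept", "application/json")
    return req

req = build_record_request("https://ehr.example.org", "12345", "abc123")
print(req.full_url)  # https://ehr.example.org/records/12345
```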

But at this point in the play, having explored the characters of
electronic record vendors and the open source community, we need to
round out the drama by introducing yet a third character: the patient.
Gropper's MedCommons is a patient-centered service, and thus part of a
movement that may bring us openness sooner than OpenMRS, VistA, or
CONNECT.

Enter the patient

Most people are familiar with Microsoft's HealthVault and Google
Health. Both allow patients to enter data about their own health, and
provide APIs that individuals and companies alike are using to provide
services. A Journal of Participatory Medicine has just been launched,
reflecting the growth of interest
in patient-centered or participatory medicine. I saw a book on the
subject by HIMSS itself in the conference bookstore.

The promise of personal health records goes far beyond keeping track
of data. Like electronic records in clinicians' hands, the data will
just be fodder for services with incredible potential to improve
health. In a lively session given today by Patricia Brennan of
Project HealthDesign,
she used the metaphors of "intelligent medicines" and "smart
Band-Aids" that reduce errors and help patients follow directions.

Project HealthDesign's research has injected a dose of realism into
our understanding of the doctor-patient relationship. For instance,
they learned that we can't expect patients to share everything with
their doctors. They get embarrassed when they lapse in their behavior,
and don't want to admit they take extra medications or do other things
not recommended by doctors. So patient-centered health should focus on
delivering information so patients can independently evaluate what
they're doing.

As critical patient data becomes distributed among a hundred million
individual records, instead of being concentrated in the hands of
providers, simple formats and frictionless data exchange will emerge
to handle them. Electronic record vendors will adapt or die. And a
whole generation of products--as well as users--will grow up with no
experience of anything but completely open, interoperable systems.
