
May 02 2012

Recombinant Research: Breaking open rewards and incentives

In the previous articles in this series I've looked at problems in current medical research, and at the legal and technical solutions proposed by Sage Bionetworks. Pilot projects have shown encouraging results, but to move from a hothouse environment of experimentation into the mainstream of one of the world's most lucrative and tradition-bound industries, Sage Bionetworks must aim for that industry's nucleus: rewards and incentives.

Previous article in the series: Sage Congress plans for patient engagement.

Think about the publication system, that wretchedly inadequate medium for transferring information about experiments. Getting the data on which a study was based is incredibly hard; getting the actual samples or access to patients is usually impossible. Just as boiling vegetables drains most of their nutrients into the water, publishing results of an experiment throws away what is most valuable.

But the publication system has been built into the foundation of employment and funding over the centuries. A massive industry provides distribution of published results to libraries and research institutions around the world, and maintains iron control over access to that network through peer review and editorial discretion. Even more important, funding grants require publication (and have only very recently begun to require the data behind the study as well). And of course, advancement in one's field requires publication.

Lawrence Lessig, in his keynote, castigated for-profit journals for restricting access to knowledge in order to puff up profits. A chart in his talk showed skyrocketing prices for for-profit journals in comparison to non-profit journals. Lessig is not out on the radical fringe in this regard; Harvard Library is calling the current pricing situation "untenable" in a move toward open access echoed by many in academia.

Lawrence Lessig keynote at Sage Congress.

How do we open up this system that seemed to serve science so well for so long, but is now becoming a drag on it? One approach is to expand the notion of publication. This is what Sage Bionetworks is doing with Science Translational Medicine in publishing validated biological models, as mentioned in an earlier article. An even more extensive reset of the publication model is found in Open Network Biology (ONB), an online journal. The publishers require that an article be accompanied by the biological model, the data and code used to produce the model, a description of the algorithm, and a platform to aid in reproducing results.

But neither of these worthy projects changes the external conditions that prop up the current publication system.

When one tries to design a reward system that gives deserved credit to other things besides the final results of an experiment, as some participants did at Sage Congress, great unknowns loom up. Is normalizing and cleaning data an activity worth praise and recognition? How about combining data sets from many different projects, as a Synapse researcher did for the TCGA? How much credit do you assign researchers at each step of the necessary procedure for a successful experiment?

Let's turn to the case of free software to look at an example of success in open sharing. It's clear that free software has swept the computer world. Most web sites use free software ranging from the server on which they run to the language compilers that deliver their code. Everybody knows that the most popular mobile platform, Android, is based on Linux, although fewer realize that the next most popular mobile platforms, Apple's iPhones and iPads, run an operating system derived in part from the open source BSD. We could go on and on citing ways in which free and open source software have changed the field.

The mechanism by which free and open source software staked out its dominance in so many areas has not been authoritatively established, but I think many programmers agree on a few key points:

  • Computer professionals encountered free software early in their careers, particularly as students or tinkerers, and brought their predilection for it into jobs they took at stodgier institutions such as banks and government agencies. Their managers deferred to them on choices for programming tools, and the rest is history.

  • Of course, computer professionals would not have chosen the free tools had they not been fit for the job (and often best for the job). Why is free software so good? Probably because the people creating it have complete jurisdiction over what to produce and how much time to spend producing it, unlike in commercial ventures with requirements established through marketing surveys and deadlines set unreasonably by management.

  • Different pieces of free software are easy to hook up, because one can alter their interfaces as necessary. Free software developers tend to look for other tools and platforms that could work with their own, and provide hooks into them (Apache, free database engines such as MySQL, and other such platforms are often accommodated.) Customers of proprietary software, in contrast, experience constant frustration when they try to introduce a new component or change components, because the software vendors are hostile to outside code (except when they are eager to fill a niche left by a competitor with market dominance). Formal standards cannot overcome vendor recalcitrance--a painful truth particularly obvious in health care with quasi-standards such as HL7.

  • Free software scales. Programmers work on it tirelessly until it's as efficient as it needs to be, and when one solution just can't scale any more, programmers can create new components such as Cassandra, CouchDB, or Redis that meet new needs.

Are there lessons we can take from this success story? Biological research doesn't fit the circumstances that made open source software a success. For instance, researchers start out low on the totem pole in very proprietary-minded institutions, and don't get to choose new ways of working. But the cleverer ones are beginning to break out and try more collaboration. Software and Internet connections help.

Researchers tend to choose formats and procedures on an ad hoc, project-by-project basis. They haven't paid enough attention to making their procedures and data sets work with those produced by other teams. This has got to change, and Sage Bionetworks is working hard on it.

Research is labor-intensive. It needs desperately to scale, as I have pointed out throughout this article, but to do so it needs entire new paradigms for thinking about biological models, workflow, and teamwork. This too is part of Sage Bionetworks' mission.

Certain problems are particularly resistant in research:

  • Conditions that affect small populations have trouble raising funds for research. The Sage Congress initiatives can lower research costs by pooling data from the affected population and helping researchers work more closely with patients.

  • Computation and statistical methods are very difficult fields, and biological research is competing with every other industry for the rare individuals who know these well. All we can do is bolster educational programs for both computer scientists and biologists to get more of these people.

  • There's a long lag time before one knows the effects of treatments. As Heywood's keynote suggested, this is partly solved by collecting longitudinal data on many patients and letting them talk among themselves.

Another process change has revolutionized the computer field: agile programming. That paradigm stresses close collaboration with the end-users whom the software is supposed to benefit, and a willingness to throw out old models and experiment. BRIDGE and other patient initiatives hold out the hope of a similar shift in medical research.

All these things are needed to rescue the study of genetics. It's a lot to do all at once. Progress on some fronts was more apparent than others at this year's Sage Congress. But as more people get drawn in, and sometimes-fumbling experiments produce maps for changing direction, we may start to see real outcomes from these efforts in upcoming years.

All articles in this series, and others I've written about Sage Congress, are available through a bit.ly bundle.


May 01 2012

Recombinant Research: Sage Congress plans for patient engagement

Clinical trials are the pathway for approving drug use, but they aren't good enough. That has become clear as a number of drugs (Vioxx being the most famous) have been blessed by the FDA, but disqualified after years of widespread use revealed either lack of efficacy or dangerous side effects. And the measures taken by the FDA recently to solve this embarrassing problem continue the heavyweight bureaucratic methods it has always employed: more trials, raising the costs of every drug and slowing down approval. Although I don't agree with the opinion of Avik S. A. Roy (reprinted in Forbes) that Phase III trials tend to be arbitrary, I do believe it is time to look for other ways to test drugs for safety and efficacy.

First article in the series: Recombinant Research: Sage Congress Promotes Data Sharing in Genetics.

But the Vioxx problem is just one instance of the wider malaise afflicting the drug industry. They just aren't producing enough new medications, either to solve pressing public needs or to keep up their own earnings. Vicki Seyfert-Margolis of the FDA built on her noteworthy speech at last year's Sage Congress (reported in one of my articles about the conference) with the statistic that drug companies submitted 20% fewer medications to the FDA between 2001 and 2007. Their blockbuster drugs produce far fewer profits than before as patents expire and fewer new drugs emerge (a predicament called the "patent cliff"). Seyfert-Margolis intimated that this crisis is the cause of layoffs in the industry, although I heard elsewhere that the companies are outsourcing more research, so perhaps the downsizing is just a reallocation of the same money.

Benefits of patient involvement

The field has failed to rise to the challenges posed by new complexity. Speakers at Sage Congress seemed to feel that genetic research has gone off the tracks. As the previous article in this series explained, Sage Bionetworks wants researchers to break the logjam by sharing data and code in GitHub fashion. And surprisingly, pharma is hurting enough to consider going along with an open research system. They're bleeding from a situation where as much as 80% of the effort in each clinical analysis goes to retrieving, formatting, and curating the data. Meanwhile, Kathy Giusti of the Multiple Myeloma Research Foundation says that in their work, open clinical trials are 60% faster.

Attendees at a breakout session I sat in on, including numerous managers from major pharma companies, expressed confidence that they could expand public or "pre-competitive" research in the direction Sage Congress proposed. The sector left to engage is the one that's central to all this work--the public.

If we could collect wide-ranging data from, say, 50,000 individuals (a May 2013 goal cited by John Wilbanks of Sage Bionetworks, a Kauffman Foundation Fellow), we could uncover a lot of trends that clinical trials are too narrow to turn up. Wilbanks ultimately wants millions of such data samples, and another attendee claimed that "technology will be ready by 2020 for a billion people to maintain their own molecular and longitudinal health data." And Jamie Heywood of PatientsLikeMe, in his keynote, claimed to have demonstrated through shared patient notes that some drugs were ineffective long before the FDA or manufacturers made the discoveries. He decried the current system of validating drugs for use and then failing to follow up with more studies, snorting, "Validated means that I have ceased the process of learning."

But patients have good reasons to keep a close hold on their health data, fearing that an insurance company, an identity thief, a drug marketer, or even their own employer will find and misuse it. They already have little enough control over it, because the annoying consent forms we always have shoved in our faces when we come to a clinic give away a lot of rights. Current laws allow all kinds of funny business, as shown in the famous case of the Vermont law against data mining, which gave the Supreme Court a chance to say that marketers can do anything they damn please with your data, under the excuse that it's de-identified.

In a noteworthy poll by Sage Bionetworks, 80% of academics claimed they were comfortable sharing their personal health data with family members, but only 31% of citizen advocates would do so. If that 31% is more representative of patients and the general public, how many would open their data to strangers, even when supposedly de-identified?

The Sage Bionetworks approach to patient consent

When patients hold back their data, it's basic research that loses. So Wilbanks and a team have been working for the past year on a "portable consent" procedure. This is meant to overcome the hurdle by which a patient has to be contacted and give consent anew each time a new researcher wants data related to his or her genetics, conditions, or treatment. The ideal behind portable consent is to treat the entire research community as a trusted user.

The current plan for portable consent provides three tiers:

Tier 1

No restrictions on data, so long as researchers follow the terms of service. Hopefully, millions of people will choose this tier.

Tier 2

A middle ground. Someone with asthma may state that his data can be used only by asthma researchers, for example.

Tier 3

Carefully controlled. Meant for data coming from sensitive populations, along with anything that includes genetic information.

Synapse provides a trusted identification service. If researchers find a person with useful characteristics in the last two tiers, and are not authorized automatically to use that person's data, they can contact Synapse with the random number assigned to the person. Synapse keeps the original email address of the person on file and will contact him or her to request consent.
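To make this concrete, here is a minimal sketch, in Python, of how the three tiers and the consent-request flow could fit together. Every name in it (ConsentTier, record, registry) is a hypothetical illustration of the process described above, not Synapse's actual code:

    from enum import Enum

    class ConsentTier(Enum):
        OPEN = 1        # Tier 1: any researcher bound by the terms of service
        RESTRICTED = 2  # Tier 2: limited to a stated research purpose
        CONTROLLED = 3  # Tier 3: sensitive populations and genetic data

    def may_use(record, researcher_purpose):
        """Decide whether data can be used without a fresh consent request."""
        if record.tier is ConsentTier.OPEN:
            return True
        if record.tier is ConsentTier.RESTRICTED:
            # e.g., an asthma patient allowing only asthma researchers
            return researcher_purpose == record.allowed_purpose
        return False  # Tier 3 always requires an explicit consent request

    def request_consent(registry, participant_id):
        """The researcher sees only a random participant ID; the registry
        holds the email address on file and forwards the request."""
        email = registry.lookup_email(participant_id)
        registry.send_consent_request(email, participant_id)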

Portable consent also involves a lot of patient education. People will sign up through a software wizard that explains the risks. After choosing portable consent, the person decides how much to put in: 23andMe data, prescriptions, or whatever they choose to release.

Sharon Terry of the Genetic Alliance said that patient advocates currently try to control patient data in order to force researchers to share the work they base on that data. Portable consent loosens this control, but the field may be ready for its more flexible conditions for sharing.

Pharma companies and genetics researchers have lots to gain from access to enormous repositories of patient data. But what do the patients get from it? Leaders in health care already recognize that patients are more than experimental subjects and passive recipients of treatment. The recent ONC proposal for Stage 2 of Meaningful Use includes several requirements to share treatment data with the people being treated (which seems kind of a no-brainer when stated this baldly) and the ONC has a Consumer/Patient Engagement Power Team.

Sage Congress is fully engaged in the patient engagement movement too. One result is the BRIDGE initiative, a joint project of Sage Bionetworks and Ashoka with funding from the Robert Wood Johnson Foundation, to solicit questions and suggestions for research from patients. Researchers can go for years researching a condition without even touching on some symptom that patients care about. Listening to patients in the long run produces more cooperation and more funding.

Portable consent requires a leap of faith, because as Wilbanks admits, releasing aggregates of patient data mean that over time, a patient is almost certain to be re-identified. Statistical techniques are just getting too sophisticated and compute power growing too fast for anyone to hide behind current tricks such as using only the first three digits of a five-digit postal code. Portable consent requires the data repository to grant access only to bona fide researchers and to set terms of use, including a ban on re-identifying patients. Still, researchers will have rights to do research, redistribute data, and derive products from it. Audits will be built in.

As Kelly Edwards of the University of Washington pointed out, tools and legal contracts can contribute to trust, but trust is ultimately based on shared values. Portable consent, properly done, engages with frameworks like Synapse to create a culture of respect for data.

In fact, I think the combination of the contractual framework in portable consent and a platform like Synapse, with its terms of use, might make a big difference in protecting patient privacy. Seyfert-Margolis cited predictions that 500 million smartphone users will be using medical apps by 2015. But mobile apps are notoriously greedy for personal data and cavalier toward user rights. Suppose all those smartphone users stored their data in a repository with clear terms of use and employed portable consent to grant access to the apps? We might all be safer.

The final article in this series will evaluate the prospects for open research in genetics, with a look at the grip of journal publishing on the field, and some comparisons to the success of free and open source software.

Next: Breaking Open Rewards and Incentives. All articles in this series, and others I've written about Sage Congress, are available through a bit.ly bundle.


April 30 2012

Recombinant Research: Sage Congress promotes data sharing in genetics

Given the exponential drop in the cost of personal genome sequencing (you can get a basic DNA test from 23andMe for a couple hundred dollars, and a full sequence will probably soon come down to a thousand dollars), a new dawn seems to be breaking for biological research. Yet the assessment of genetics research at the recent Sage Congress was highly cautionary. Various speakers chided their own field for tilling the same ground over and over, ignoring the urgent needs of patients, and just plain researching the wrong things.

Sage Congress also has some plans to fix all that. These projects include tools for sharing data and storing it in cloud facilities, running challenges, injecting new fertility into collaboration projects, and ways to gather more patient data and bring patients into the planning process. Through two days of demos, keynotes, panels, and breakout sessions, Sage Congress brought its vision to a high-level cohort of 230 attendees from universities, pharmaceutical companies, government health agencies, and others who can make change in the field.

In the course of this series of articles, I'll pinpoint some of the pain points that can force researchers, pharmaceutical companies, doctors, and patients to work together better. I'll offer a look at the importance of public input, legal frameworks for cooperation, the role of standards, and a number of other topics. But we'll start by seeing what Sage Bionetworks and its pals have done over the past year.

Synapse: providing the tools for genetics collaboration

Everybody understands that change is driven by people and the culture they form around them, not by tools, but good tools can make it a heck of a lot easier to drive change. To give genetics researchers the best environment available to share their work, Sage Bionetworks created the Synapse platform.

Synapse recognizes that data sets in biological research are getting too large to share through simple data transfers. For instance, in his keynote about cancer research (where he kindly treated us to pictures of cancer victims during lunch), UC Santa Cruz professor David Haussler announced plans to store 25,000 cases at 200 gigabytes per case in the Cancer Genome Atlas, also known as TCGA in what seems to be a clever pun on the four nucleotides in DNA. Storage requirements thus work out to 5 petabytes, which Haussler wants to be expandable to 20 petabytes. In the face of big data like this, the job becomes moving the code to the data, not moving the data to the code.

Synapse points to data sets contributed by cooperating researchers, but also lets you pull up a console in a web browser to run R or Python code on the data. Some effort goes into tagging each data set with associated metadata: tissue type, species tested, last update, number of samples, etc. Thus, you can search across Synapse to find data sets that are pertinent to your research.
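As a sketch of what that workflow could look like from the researcher's side, the following Python fragment logs in, fetches a data set, and loads it for analysis. It assumes the Synapse Python client (synapseclient) and uses a made-up data set ID; finding real IDs happens through the metadata search described above:

    import synapseclient
    import pandas as pd

    syn = synapseclient.Synapse()
    syn.login()  # uses credentials cached in a local config file

    # 'syn123456' is a placeholder; real IDs turn up when you search
    # Synapse metadata (tissue type, species, sample count, last update).
    entity = syn.get('syn123456')        # downloads the file locally
    data = pd.read_csv(entity.path, sep='\t')
    print(data.describe())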

One group working with Synapse has already harmonized and normalized the data sets in TCGA so that a researcher can quickly mix and run stats on them to extract emerging patterns. The effort took about one and a half full-time employees for six months, but the project leader is confident that with the system in place, "we can activate a similar size repository in hours."

This contribution highlights an important principle behind Synapse (appropriately called "viral" by some people in the open source movement): when you have manipulated and improved upon the data you find through Synapse, you should put your work back into Synapse. This work could include cleaning up outlier data, adding metadata, and so on. To make work sharing even easier, Synapse has plans to incorporate the Amazon Simple Workflow Service (SWF). It also hopes to add web interfaces to allow non-programmers to do useful work with data.

The Synapse development effort was an impressive one, coming up with a feature-rich Beta version in a year with just four coders. And Synapse code is entirely open source. So not only is the data distributed, but the creators will be happy for research institutions to set up their own Synapse sites. This may make Synapse more appealing to geneticists who are prevented by inertia from visiting the original Synapse.

Mike Kellen, introducing Synapse, compared its potential impact to that of moving research from a world of journals to a world like GitHub, where people record and share every detail of their work and plans. Along these lines, Synapse records who has used a data set. This has many benefits:

  • Researchers can meet up with others doing related work.

  • It gives public interest advocates a hook with which to call on those who benefit commercially from Synapse--as we hope the pharmaceutical companies will--to contribute money or other resources.

  • Members of the public can monitor accesses for suspicious uses that may be unethical.

There's plenty more work to be done to get data in good shape for sharing. Researchers must agree on some kind of metadata--the dreaded notion of ontologies came up several times--and clean up their data. They must learn about data provenance and versioning.

But sharing is critical for such basics of science as reproducing results. One source estimates that 75% of published results in genetics can't be replicated. A later article in this series will examine a new model in which enough metainformation is shared about a study for it to be reproduced and, even more important, to serve as a foundation for further research.

With this Beta release of Synapse, Sage Bionetworks feels it is ready for a new initiative to promote collaboration in biological research. But how do you get biologists around the world to start using Synapse? For one, try an activity that's gotten popular nowadays: a research challenge.

The Sage DREAM challenge

Sage Bionetworks' DREAM challenge asks genetics researchers to find predictors of the progression of breast cancer. The challenge uses data from 2,000 women diagnosed with breast cancer, combining information on DNA alterations affecting how their genes were expressed in the tumors, clinical information about their tumor status, and their outcomes over ten years. The challenge is to build models integrating the alterations with molecular markers and clinical features to predict which women will have the most aggressive disease over a ten-year period.

Several hidden aspects of the challenge make it a clever vehicle for Sage Bionetworks' values and goals. First, breast cancer is a scourge whose urgency is matched by its stubborn resistance to diagnosis. The famous 2009 recommendations of the U.S. Preventive Services Task Force, after all the controversy was aired, left us with the dismal truth that we don't know a good way to predict breast cancer. Some women get mastectomies in the total absence of symptoms based just on frightening family histories. In short, breast cancer puts the research and health care communities in a quandary.

We need finer-grained predictors to say who is likely to get breast cancer, and standard research efforts up to now have fallen short. The Sage proposal is to marshal experts in a new way that combines their strengths, asking them to publish models that show the complex interactions between gene targets and influences from the environment. Sage Bionetworks will publish data sets at regular intervals that it uses to measure the predictive ability of each model. A totally fresh data set will be used at the end to choose the winning model.
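In outline, the evaluation loop works like cross-validation with a twist: the final test set is withheld from everyone. Here is a minimal sketch under that assumption, using scikit-learn as a stand-in; it is not Sage's actual scoring code, and the feature and label names are invented:

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score

    def evaluate(model, X_released, y_released, X_fresh, y_fresh):
        """Fit on the data Sage has published; score on a fresh
        data set no model builder has ever seen."""
        model.fit(X_released, y_released)
        predicted = model.predict_proba(X_fresh)[:, 1]
        return roc_auc_score(y_fresh, predicted)

    # Features combine DNA alterations, molecular markers, and clinical
    # variables; the label marks aggressive disease within ten years.
    model = RandomForestClassifier(n_estimators=500, random_state=0)
    # score = evaluate(model, X_released, y_released, X_fresh, y_fresh)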

The process behind the challenge--particularly the need to upload code in order to run it on the Synapse site--automatically forces model builders to publish all their code. According to Stephen Friend, founder of Sage Bionetworks, "this brings a level of accountability, transparency, and reproducibility not previously achieved in clinical data model challenges."

Finally, the process has two more effects: it shows off the huge amount of genetic data that can be accessed through Synapse, and it encourages researchers to look at each other's models in order to boost their own efforts. In less than a month, the challenge has already received more than 100 models from 10 sources.

The reward for winning the challenge is publication in a respected journal, the gold medal still sought by academic researchers. (More on shattering this obelisk later in the series.) Science Translational Medicine will accept results of the evaluation as a stand-in for peer review, a real breakthrough for Sage Bionetworks because it validates their software-based, evidence-driven process.

Finally, the DREAM challenge promotes use of the Synapse infrastructure, and in particular the method of bringing the code to the data. Google is donating server space for the challenge, which levels the playing field for researchers, freeing them from paying for their own computing.

A single challenge doesn't solve all the problems of incentives, of course. We still need to persuade researchers to put up their code and data on a kind of genetic GitHub, persuade pharmaceutical companies to support open research, and persuade the general public to share data about their phenomes (life data) and genes--all topics for upcoming articles in the series.

Next: Sage Congress Plans for Patient Engagement. All articles in this series, and others I've written about Sage Congress, are available through a bit.ly bundle.


April 19 2012

Sage Congress: The synthesis of open source with genetics

For several years, O'Reilly Radar has been covering the exciting potential that open source software, open data, and a general attitude of sharing and cooperation bring to health care. Along with many exemplary open source projects in areas directly affecting the public — such as the VA's Blue Button in electronic medical records and the Direct project (http://wiki.directproject.org/) in data exchange — the study of disease is undergoing a paradigm shift.

Sage Bionetworks stands at the center of a wide range of academic researchers, pharmaceutical companies, government agencies, and health providers realizing that the old closed system of tiny teams who race each other to a cure has got to change. Today's complex health problems, such as Alzheimer's, AIDS, and cancer, are too big for a single team. And these institutions are slowly wrenching themselves out of the habit of data hoarding and finding ways to work together.

A couple weeks ago I talked to the founder of Sage Bionetworks, Stephen Friend, about recent advances in open source in this area, and the projects to be highlighted at the upcoming Sage Commons congress (http://sagecongress.org/). Steve is careful to call this a "congress" instead of a "conference" because all attendees are supposed to pitch in and contribute to the meme pool. I covered Sage Congress in a series of articles last year. The following podcast ranges over topics such as:

  • what is Sage Bionetworks [Discussed at the 00:25 mark];
  • the commitment of participants to open source software [Discussed at the 01:01 mark];
  • how open source can support a business model in drug development [Discussed at the 01:40 mark];
  • a look at the upcoming congress [Discussed at the 03:47 mark];
  • citizen-led contributions or network science [Discussed at the 06:12 mark];
  • data sharing philosophy [Discussed at the 09:01 mark];
  • when projects are shared with other institutions [Discussed at the 12:43 mark];
  • how to democratize medicine [Discussed at the 17:10 mark];
  • a portable legal consent approach where the patient controls his or her own data [Discussed at the 20:07 mark];
  • solving the problem of non-sharing in the industry [Discussed at the 22:15 mark]; and
  • key speakers at the congress [Discussed at the 26:35 mark].

Sessions from the congress will be broadcast live via webcast and posted on the Internet.

March 26 2012

Five tough lessons I had to learn about health care

Working in the health care space has forced me to give up many hopes and expectations that I had a few years ago. Forgive me for being cynical (it's an easy feeling to have following the country's largest health IT conference, as I reported a month ago), and indeed some positive trends do step in to shore up hope. I'll go over the redeeming factors after listing the five tough lessons.

1. The health care field will not adopt a Silicon Valley mentality

Wild, willful, ego-driven experimentation--a zeal for throwing money after intriguing ideas with minimal business plans--has seemed to work for the computer field, and much of the world is trying to adopt a "California optimism." A lot of venture capitalists and technology fans deem this attitude the way to redeem health care from its morass of expensive solutions that don't lead to cures. But it won't happen, at least not the way they paint it.

Health care is one of the most regulated fields in public life, and we want it that way. From the moment we walk into a health facility, we expect the staff to be following rigorous policies to avoid infections. (They don't, but we expect them to.) And not just anybody can hang a shingle outside the door and call themselves a doctor. In the nineteenth century it was easier, but we don't consider that a golden age of medicine.

Instead, doctors go through some of the longest and most demanding training that exists in the world today. And even after they're licensed, they have to regularly sign up for continuing education to keep practicing. Other fields in medicine are similar. The whole industry is constrained by endless requirements that make sure the insiders remain in their seats and no "disruptive technologies" raise surprises. Just ask a legal expert about the complex mesh of Federal and state regulations that a health care provider has to navigate to protect patient privacy--and you do want your medical records to be private, don't you?--before you rave about the Silicon Valley mentality. Also read the O'Reilly book by Fred Trotter and David Uhlman about the health care system as it really is.

Nor can patients change treatments with the ease of closing down a Facebook account. Once a patient has established a trust relationship with a doctor and obtained a treatment plan, he or she won't say, "I think I'll go down the road to another center that charges $100 less for this procedure." And indeed, health reform doesn't prosper from breaking down treatments into individual chunks. Progress lies in the opposite direction: the redemptive potential of long-term relationships.

2. Regulations can't force change

I am very impressed with the HITECH act (a product of the American Recovery and Reinvestment Act, more than the Affordable Care Act) that set modern health reform in motion, as well as the efforts of the Department of Health and Human Services to push institutions forward. But change in health care, like education, boils down to the interaction in a room between a professional and a client. Just as lesson plans and tests can't ensure that a teacher inspires a child to learn, regulations can't keep a doctor from ordering an unnecessary test to placate an anxious patient.

We can offer clinical decision support to suggest what has worked for other patients, but we can't keep a patient from asking for an expensive procedure that has a 10% chance of making him better (and a 20% chance of making him worse), nor can we make the moral decision about what treatment to pursue, for the patient or the doctor. Each patient is different, anyway. No one wants to be a statistic.

3. The insurance companies are not the locus of cost and treatment problems

Health insurers are a favorite target of hatred by Americans, exemplified by Michael Moore's 2007 movie Sicko and more surprisingly in the 1997 romantic comedy As Good as It Gets, where I saw an audience applaud as Helen Hunt delivered a rant against health maintenance organizations. A lot of activists, looking at other countries, declare that our problems would be solved (well, would improve a lot) if we got private insurers out of the picture.

Sure, there's a lot of waste in the current insurance system, which deliberately stretches out the task of payment and makes it take up the days of full-time staff in each doctor's office. But that's not the cause of the main problems in either costs or treatment failures. The problems lie with the beloved treatment staff. We can respect their hard work and the lives they save, but we don't have to respect them for releasing patients from hospitals without adequate follow-up, or for ordering unnecessary radiation that creates harm for patients, or for the preventable errors that still (after years of publicity) kill 90,000 to 100,000 patients a year.

4. Doctors don't want to be care managers

The premise of health reform is to integrate patients into a larger plan for managing a population. A doctor is supposed to manage a case load and keep his or her pipeline full while not spending too much. The thrust of various remuneration schemes, old and new, that go beyond fee for service (capitation, global payment systems) is to reward a doctor for handling patients of a particular type (for instance, elderly people with hypertension) at a particular cost. But doctors aren't trained for this. They want to fix the immediate, presenting complaint and send the patient home until they're needed again. Some think longitudinally, and diligently try to treat the whole person rather than a symptom. But managing their treatment options as a finite resource is just not in their skill set.

The United Kingdom--host of one of the world's great national care systems--is about to launch a bold new program where doctors have to do case management. The doctors are rebelling. If this is the future of medicine, we'll have to find new medical personnel to do it.

5. Patients don't want to be care managers

Now that the medical field has responded superbly to acute health problems, we are left with long-term problems that require lifestyle and environmental changes. The patient is even more important than the doctor in these modern ills. But the patients who cost the most and need to make the most far-ranging changes are demonstrating an immunity to good advice. They didn't get emphysema or Type 2 diabetes by acting healthily in the first place, and they aren't about to climb out of their condition voluntarily either.

You know what the problem with chronic disease is? Its worst effects are not likely to show up early in life when lifestyle change could make the most difference. (Serious pain can come quickly from some chronic illnesses, such as asthma and Crohn's disease, but these are also hard to fix through lifestyle changes, if by "lifestyle change" you mean breathing clean air.) The changes a patient would have to make to prevent smoking-related lung disease or obesity-related problems would require a piercing re-evaluation of his course of life, which few can do. And incidentally, they are neither motivated nor trained to store their own personal health records.

Hope for the future

Despite the disappointments I've undergone in learning about health care, I expect the system to change for the better. It has to, because the public just won't tolerate more precipitous price hikes and sub-standard care.

There's a paucity of citations in my five lessons because they tend not to be laid out bluntly in research or opinion pieces; for the most part, they emerged gradually over many hallway conversations I had. Each of the five lessons contains a "not," indicating that they attack common myths. Myths (in the traditional sense) in fact are very useful constructs, because they organize the understanding of the world that societies have trouble articulating in other ways. We can realize that myths are historically inaccurate while finding positive steps forward in them.

The Silicon Valley mentality will have some effect through new devices and mobile phone apps that promote healthy activity. They can help with everything from basic compliance--remembering to take prescribed meds--to promoting fitness crazes and keeping disabled people in their homes. Lectures given once in a year in the doctor's office don't lead to deep personal change, but having a helper nearby (even a digital one) can impel a person to act better, hour by hour and day by day. This has been proven by psychologists over and over: motivation is best delivered in small, regular doses (a theme found in my posting from HIMSS).

Because the most needy patients are often the most recalcitrant ones, personal responsibility has to intersect with professional guidance. A doctor has to work with the patient, and other staff can shore up good habits as well. This requires the doctors' electronic record systems to accept patient data, such as weight and mood. Projects such as Indivo X support these enhancements, which traditional electronic record systems are ill-prepared for.

Although doctors eschew case management, there are plenty of other professionals who can help them with it, and forming Accountable Care Organizations gives the treatment staff access to such help. Tons of potential savings lie in the data that clinicians could collect and aggregate. Still more data is being loaded by the federal government regularly at Health.Data.Gov. ACOs and other large institutions can hire people who love to crunch big data (if such staff can be found, because they're in extremely high demand now in almost every industry) to create systems that slide seamlessly into clinical decision support and provide guidelines for better treatment, as well as handle the clinic's logistics better. So what we need to do is train a lot more experts in big data to understand the health care field and crunch its numbers.

Change will be disruptive, and will not be welcomed with open arms. Those who want a better system need to look at the areas where change is most likely to make a difference.

February 29 2012

Report from HIMSS 12: wrap-up of the largest health IT conference

This is a time of great promise in health care, yet an oppressive atmosphere hung over much of HIMSS (http://www.himssconference.org/). All the speakers--not least the government representatives who announced rules for the adoption of electronic health records--stressed commendable practices such as data exchange, providing the patient with information, and engaging with the patient. Many berated hospitals, doctors, and vendors for neglecting the elements that maintain health. But the thrust of most sessions was on such details as how to convert patient records to the latest classification of diseases (ICD-10).

Intelligent Hospital pavilion shows off tempting technology.

I have nothing against ICD-10 and I'm sure adopting it is a big headache that deserves attention at the conference. The reason I call the atmosphere oppressive is that I felt stuck among health care providers unable to think long-term or to embrace the systems approach that we'll need to cure people and cut costs. While some health care institutions took the ICD-10 change-over seriously and put resources into meeting the deadline, others pressured the Department of Health and Human Services to delay implementation, and apparently won a major reprieve. The health IT community, including HIMSS, criticized the delay. But resistance to progress usually does not break out so overtly, and remains ingrained in day-to-day habits.

But ICD-10 is a sideline to the major issue of Stage 2 meaningful use. Why, as I reported on Wednesday, were so many of the 35,000 HIMSS attendees wrapped up in the next step being forced on them by the federal government? The scandal is that these meaningful use concepts (using data to drive care, giving care-givers information that other care-givers have collected about the patient) have to be forced on them. Indeed, institutions like Kaiser Permanente that integrated their electronic records years ago and concentrated on the whole patient had relatively little work to do to conform to Stage 1, and probably have the building blocks for Stage 2 in place. And of course these things are part of the landscape of health care in other countries. (The proposed regulations were finally posted last Thursday.)

Recipients of Regina Holliday jackets record patient involvement stories.

Haven't our providers heard that an ounce of prevention is worth a pound of cure? Don't well-educated and well-paid executives invest in quality measures with the expectation that they'll pay off in the long run? And aren't we all in the field for the good of the patients? What is that snickering I hear?

Actually, I don't accept the premise that providers are all in it for the money. If so many are newly incentivized to join the government's program for a mere $15,000 per doctor (plus avoiding some cuts in Medicare payments), which is a small fraction of the money they'll have to spend implementing the program, they must know that it's time to do the right thing. Meaningful use can be a good framework to concretize the idealistic goals of health care reform, but I just wish the vendors and doctors would keep their eyes more on the final goal.

Redwood MedNet in Northern California is an example of a health information exchange that adopted standards (CONNECT, before the Direct project was in place) to simplify data exchange between health providers. Will Ross of Redwood MedNet told me that qualifying for Stage 2 would be simple for them, "but you won't hear that from many vendors in this exhibit hall."

Annual surveys by Family Practice Management journal about their readers' satisfaction with EHRs, reviewed in one HIMSS session, showed widespread dissatisfaction that doesn't change from year to year. For instance, 39% were dissatisfied with support and training, although a few vendors rated quite high. Still, considering that doctors tend to veer away from open source solutions and pay big bucks for proprietary ones out of a hope of receiving better support and training, they deserve better. It's worth noting that the longer a practice uses its system, the more they're likely to express satisfaction. But only 38% of respondents would purchase the same systems now if they weren't already locked in.

That's the big, frustrating contradiction at HIMSS. The vendors have standards (HL7 and others), they've been setting up health information exchanges (under various other names) for years, they have a big, popular interoperability lab at each conference--and yet most patients still have to carry paper records and CDs with images from one doctor to another. (A survey of HIMSS members showed that one-quarter allowed access by patients to their data, which is an advance but still just a start.) The industry as a whole has failed to make a dent in the 90,000 to 100,000 needless deaths that occur in treatment facilities each year. And (according to one speaker) 20% of patients hospitalized under Medicare have to return to the hospital shortly after discharge.

Omens of change

Suffice it to say that by my fourth day at HIMSS I was not happy. Advances come, but slowly. Examples of developments I can give a thumbs-up to at HIMSS were data sharing among physicians who use Practice Fusion, a popular example of a growing move to web services for electronic records, and a CardioEngagement Challenge funded by Novartis to encourage at-risk patients to take more interest in their health. The winner was a Sensei mobile app that acts as an automated coach. Sensei CEO Robert Schwarzberg, a cardiologist, told me he had put together phone-in coaching services for heart patients during the years before mobile apps, and was frustrated that these coaches were available less than once a week when what patients needed was round-the-clock motivation. Sensei Wellness is one of the many mobile apps that make both patients and doctors more connected, and HIMSS quite properly devoted a whole section of the exhibit floor to them.

Talking about Sensei Wellness with Dr. Robert Schwarzberg.

I dropped by the IBM booth for the obligatory demo of Watson's medical application, and some background from Dr. Josko Silobrcic. I also filled in some of this report from an earlier conversation with tech staff.

Medical diagnosis involves more structured data than solving Jeopardy riddles, structure that appears mostly in the form of links between data sets. For instance, medicines are linked to diagnoses, to lab results, and to other medicines (some drugs are contraindicated when the patient is taking other drugs). Watson follows these relationships.

But because Watson is a natural language processing application--based on UIMA, which IBM donated to the Apache Foundation--it doesn't try to do much reasoning to pick out the best diagnosis or treatment, both of which are sometimes requested of it. Instead, it dumps huge indexes of medical articles into its data stores on one side, and takes in the text about the patient's complaint and doctor's evaluation on the other. Matching them up is not so different from a Jeopardy question, after all. Any possible match is considered and kept live until the final round of weighing answers, even if the chance of matching is near zero.
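The strategy can be sketched in a few lines of Python. The retrieval and scoring functions here are invented placeholders, not IBM's code, but the shape is the point: broad recall first, many independent evidence scorers, and no candidate eliminated until the final weighted ranking.

    def diagnose(patient_text, article_index, scorers, weights):
        # Broad recall: pull in every remotely plausible match.
        candidates = article_index.retrieve(patient_text)
        ranked = []
        for candidate in candidates:
            # Each scorer judges one kind of evidence (textual similarity,
            # drug interactions, lab-result links); none can veto a candidate.
            evidence = [score(patient_text, candidate) for score in scorers]
            confidence = sum(w * e for w, e in zip(weights, evidence))
            ranked.append((candidate, confidence))  # kept even near zero
        return sorted(ranked, key=lambda pair: pair[1], reverse=True)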

Dr. Josko Silobrcic before Watson demonstration.

Also because of the NLP basis for matching, there is rarely a need to harmonize disparate data taken in from different journals or medical sources.

I assumed that any processing that uses such a large data set and works so fast must run on a huge server farm, but the staff assured me it's not as big as one would think. For production use, of course, they'll need to take into account the need to scale. The medical informatics equivalent of a Christmas rush on sales would be an epidemic where everybody in the region is urgently hitting Watson for critical diagnoses.

Coming to peace

Healing came to me on my last day at HIMSS, at two related conferences off to the side of the main events: a meeting of Open Health Tools members and the eCollaboration forum, run by health activists who want to break down barriers to care. Both groups have partnerships with HIMSS.

Open Health Tools positions itself as an umbrella organization for projects making free software for a lot of different purposes in health care: recording, treatment, research and more. One illustrative project I got to hear about at their meeting was the Medical Imaging Network Transport (MINT), which Johns Hopkins is working on in coordination with other teams.

MINT cuts down on the transfers of huge images by doing some processing in place and transferring only portions of the data. Switching to modern storage formats (XML and JSON) and better methods of data transfer also reduces waste. For instance, current DICOM vendors transmit images over TCP, which introduces more overhead than necessary when handling the packet losses engendered by transmitting files that are several gigabytes in size. MINT allows UDP and other protocols that are leaner than TCP.
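As an illustration of partial transfer (not MINT's actual wire protocol), a client can ask an HTTP server for just the byte range covering one slice of a study instead of pulling the whole multi-gigabyte file. The URL and range below are invented:

    import requests

    url = "https://imaging.example.org/studies/1234/series/5/pixels"
    # Fetch roughly one slice's worth of bytes, not the whole study.
    resp = requests.get(url, headers={"Range": "bytes=0-1048575"})
    assert resp.status_code == 206  # 206 means the server sent partial content
    slice_bytes = resp.content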

Best of all, MINT DICOM images can be displayed through HTML5, which means any browser can view them in good resolution, there is no need to install a specialized viewer at each location where the doctor is checking the image, and dependence on proprietary software is reduced. (The same reliance on standard browsers is also claimed by eMix in a recent interview.)

At the eCollaboration forum, E-patient Dave DeBronkart reported that being an engaged patient is still swimming upstream. It's hard to get one's records, hard to find out what treatments will cost, and hard to get taken seriously as an adult interested in monitoring one's own care. Meg McCabe of Aetna says that insurers need to offer more sophisticated guidance to patients trying to choose a health provider--simple lists of options are confusing and hard to choose from.

One speaker warned providers that if they try to open their data for collaborative care, they may find themselves hampered by contracts that maintain vendor ownership of EHR data. But speakers assured us vendors are not evil. The issue is what the providers ask for when they buy the EHR systems.

Here's the strange thing about the eCollaboration forum: they signed up enough people to fill the room ahead of time and left many potential attendees lamenting that they couldn't get in. Yet on the actual day of the event, there were about eight empty seats for every attendee. Maybe HIMSS attendees felt they had to devote all their time to the Stage 2 regulations mentioned earlier. But I take the disappointing turn-out as a sign of the providers' and vendors' lack of commitment to change. Shown a dazzling roster of interesting talks about data exchange, open record sharing, and patient engagement, they're quick to sign up--but they don't show up when it counts.

As members of the general public, we can move the health care field forward by demanding more from our providers, at the point where we have some influence. Anyone looking for concrete guidance for increasing their influence as a patient can try e-Patients Live Longer: The Complete Guide to Managing Health Care Using Technology, by Nancy B. Finn.

Public attention and anger have been focused on insurers, who have certainly engaged in some unsavory practices to avoid paying for care--but nothing as destructive as the preventable errors and deaths caused by old-fashioned medical practices. And while economists complain about the 30 cents out of every dollar wasted in the American hodge-podge of payment systems, we know that unnecessary medical procedures or, conversely, preventative steps that were omitted, also suck up a lot of money. One speaker at the eCollaboration forum compared the sky-rocketing costs of health care and insurance to a financial bubble that can't last. Let's all take some responsibility for instituting better medical and reporting systems so the costs come down in a healthy manner.

Other articles about HIMSS were posted last Tuesday and Wednesday.

February 23 2012

Report from HIMSS 2012: toward interoperability and openness

I was wondering how it would feel to be in the midst of 35,000 people whose livelihoods are driven by the decisions of a large institution at the moment when that institution releases a major set of rules. I didn't really find out, though. The 35,000 people I speak of are the attendees of the HIMSS conference and the institution is the Department of Health and Human Services. But HHS just sort of half-released the rules (called Stage 2 of meaningful use), telling us that they would appear online tomorrow and meanwhile rushing over a few of the key points in a presentation that drew overflow crowds in two rooms.

The reaction, I sensed, was a mix of relief and frustration. Relief because Farzad Mostashari, National Coordinator for Health Information Technology, promised us the rules would be familiar and would hew closely to what advisors had requested. Frustration, however, at not seeing the details. The few snippets put up on the screen contained enough ambiguities and poorly worded phrases that I'm glad there's a 60-day comment period before the final rules are adopted.

There isn't much one can say about the Stage 2 rules until they are posted and the experts have a chance to parse them closely, and I'm a bit reluctant to throw onto the Internet one of potentially 35,000 reactions to the announcement, but a few points struck me enough to be worth writing about. Mostashari used his pulpit for several pronouncements about the rules:

  • HHS would push ahead on goals for interoperability and health information exchange. "We can't wait five years," said Mostashari. He emphasized the phrase "standard-based" in referring to HIE.

  • Patient engagement was another priority. To attest to Stage 2, institutions will have to allow at least half their patients to download and transfer their records.

  • They would strive for continuous quality improvement and clinical decision support, key goals enabled by the building blocks of meaningful use.

Two key pillars of the Stage 2 announcement are requirements to use the Direct project for data exchange and HL7's consolidated CDA for the format (the only data exchange I heard mentioned was a summary of care, which is all that most institutions exchange when a patient is referred).

The announcement demonstrates the confidence that HHS has in the Direct project, which it launched just a couple of years ago and which exemplifies a successful joint government/private sector project. Direct will allow health care providers of any size and financial endowment to use email or the Web to share summaries of care. (I mentioned it in yesterday's article.) With Direct, we can hope to leave the cumbersome and costly days of health information exchange behind. The older and more complex CONNECT project will be an option as well.

The other half of that announcement, regarding adoption of the CDA (incarnated as a CCD for summaries of care), is a loss for the older CCR format, which was an option in Stage 1. The CCR was the Silicon Valley version of health data, a sleek and consistent XML format used by Google Health and Microsoft HealthVault. But health care experts criticized the CCR as not rich enough to convey the information institutions need, so it lost out to the more complex CCD.

The news on formats is good overall, though. The HL7 consortium, which has historically funded itself by requiring organizations to become members in order to use its standards, is opening some of them for free use. This is critical for the development of open source projects. And at an HL7 panel today, a spokesperson said they would like to head more in the direction of free licensing and have to determine whether they can survive financially while doing so.

So I'm feeling optimistic that U.S. health care is moving "toward interoperability and openness," the phrase I used in the title of this article and also used in a posting from HIMSS two years ago.

HHS allowed late-coming institutions (those who began the Stage 1 process in 2011) to continue at Stage 1 for another year. This is welcome because they have so much work to do, but it means that providers who want to demonstrate Stage 2 information exchange may have trouble because they can't do it with other providers who are ready only for Stage 1.

HHS endorsed some other standards today as well, notably SNOMED for diseases and LRI for lab results. Another nice tidbit from the announcement is the requirement to use electronic medication administration (for instance, bar codes to check for errors in giving medicine) to foster patient safety.

February 22 2012

Report from HIMSS: health care tries to leap the chasm from the average to the superb

I couldn't attend the session today on StealthVest--and no surprise. Who wouldn't want to come see an Arduino-based garment that can hold numerous health-monitoring devices in a way that is supposed to feel like a completely normal piece of clothing? As with many events at the HIMSS conference, which has registered over 35,000 people (at least four thousand more than last year), the StealthVest presentation drew an overflow crowd.

StealthVest sounds incredibly cool (and I may have another chance to report on it Thursday), but when I gave up on getting into the talk I walked downstairs to a session that sounds kind of boring but may actually be more significant: Practical Application of Control Theory to Improve Capacity in a Clinical Setting.

The speakers in this session, from Banner Gateway Medical Center in Gilbert, Arizona, laid out a fairly standard use of analytics to predict when the hospital units are likely to exceed their capacity, and then to reschedule patients and provider schedules to smooth out the curve. The basic idea comes from chemical engineering, and requires them to monitor all the factors that lead patients to come in to the hospital and that determine how long they stay. Queuing theory can show when things are likely to get tight. Hospitals care a lot about these workflow issues, as Fred Trotter and David Uhlman discuss in the O'Reilly book Beyond Meaningful Use, and they have a real effect on patient care too.
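The speakers didn't share their code, but the core of such a model is small. Below is a minimal sketch in Python of the Erlang C formula from queuing theory, which estimates how likely an arriving patient is to wait for a bed given an admission rate, an average length of stay, and a bed count; the numbers are invented for illustration.

    from math import factorial

    def erlang_c(arrival_rate, service_rate, servers):
        """Probability that an arriving patient must wait (M/M/c queue)."""
        a = arrival_rate / service_rate      # offered load in Erlangs
        rho = a / servers                    # utilization per bed
        if rho >= 1:
            return 1.0                       # demand exceeds capacity outright
        summation = sum(a**k / factorial(k) for k in range(servers))
        top = a**servers / (factorial(servers) * (1 - rho))
        return top / (summation + top)

    # Invented example: 4 admissions/hour, 30-hour average stay, 130 beds.
    p_wait = erlang_c(arrival_rate=4.0, service_rate=1/30, servers=130)
    print(f"Chance an arriving patient waits for a bed: {p_wait:.1%}")

Run against historical admission data, a curve like this tells planners how far ahead of the crunch they need to reschedule.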

The reason I find this topic interesting is that capacity planning leads fairly quickly to visible cost savings. So hospitals are likely to do it. Furthermore, once they go down the path of collecting long-term data and crunching it, they may extend the practice to clinical decision support, public health reporting, and other things that can make a big difference to patient care.

A few stats about data in U.S. health care

Do we need a big push to do such things? We sure do, and that's why meaningful use was introduced in the HITECH section of the American Recovery and Reinvestment Act. HHS released mounds of government health data on Health.data.gov hoping to serve a similar purpose. Let's just take a look at how far the United States is from using its health data effectively.

  • Last November, a CompTIA survey (reported by Health Care IT News) found that only 28% of providers have comprehensive EHRs in use, and another 17% have partial implementations. One has to remember that even a "comprehensive" EHR is unlikely to support the sophisticated data mining, information exchange, and process improvement that will eventually lead to lower costs and better care.

  • According to a recent Beacon Partners survey (PDF), half of the responding institutions have not yet set up an infrastructure for pursuing health information exchange, although 70% consider it a priority. The main problem, according to a HIMSS survey, is budget: HIEs are shockingly expensive. There's more to this story, which I reported on from a recent conference in Massachusetts.

Stats like these were worth keeping in mind as HIMSS board chair Charlene S. Underwood extolled the organization's achievements in the morning keynote. HIMSS has promoted good causes, but only recently has it addressed the cost, interoperability, and open source issues that can allow health IT to break out of the elite of institutions large or sophisticated enough to adopt the right practices.

As signs of change, I am particularly happy to hear of HIMSS's new collaboration with Open Health Tools and its acquisition of the mHealth summit. These should guide the health care field toward more patient engagement and adaptable computer systems. HIEs are another area crying out for change.

An HIE optimist

With the flaccid figures for HIE adoption in mind, I met Charles Parisot, who chairs interoperability standards and testing for EHRA, which is HIMSS's Electronic Health Records Association. The biggest EHR vendors and HIEs come together in this association, and Parisot was just stoked with positive stories about their advances.

His take on the cost of HIEs is that most of them just do it in a brute force manner that doesn't work. They actually copy the data from each institution into a central database, which is hard to manage from many standpoints. The HIEs that have done it right (notably in New York state and parts of Tennessee) are sleek and low-cost. The solution involves:

  • Keeping the data at the health care providers, and storing in the HIE only some glue data that associates the patient and the type of data with the provider (see the sketch after this list).

  • Keeping all metadata about formats out of the HIE, so that new formats, new codes, and new types of data can easily be introduced into the system without recoding the HIE.

  • Breaking information exchange down into constituent parts--the data itself, the exchange protocols, identification, standards for encryption and integrity, etc.--and finding standard solutions for each of these.
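Parisot didn't show code, but the "glue data" idea in the first bullet amounts to a record locator: an index that knows where records live, never the records themselves. Here is a minimal sketch in Python, with all names and endpoints invented for illustration.

    from collections import defaultdict

    class RecordLocator:
        """A federated HIE keeps only 'glue data': which providers hold
        which types of records for which patients. Clinical data stays
        at the providers and is pulled directly when needed."""

        def __init__(self):
            self._index = defaultdict(set)

        def register(self, patient_id, doc_type, provider_endpoint):
            # Called when a provider creates a new document for a patient.
            self._index[(patient_id, doc_type)].add(provider_endpoint)

        def locate(self, patient_id, doc_type):
            # A treating clinician learns *where* the records live, then
            # fetches them from each provider over a standard protocol.
            return sorted(self._index.get((patient_id, doc_type), set()))

    hie = RecordLocator()
    hie.register("patient-42", "summary-of-care", "https://hospital-a.example/ihe")
    hie.register("patient-42", "lab-results", "https://lab-b.example/ihe")
    print(hie.locate("patient-42", "summary-of-care"))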

So EHRA has developed profiles (also known by their ONC term, implementation specifications) that indicate which standard is used for each part of the data exchange. Metadata can be stored in the core HL7 document, the Clinical Document Architecture, and differences between implementations of HL7 documents by different vendors can also be documented.

A view of different architectures in their approach can be found in an EHRA white paper, Supporting a Robust Health Information Exchange Strategy with a Pragmatic Transport Framework. As testament to their success, Parisot claimed that the interoperability lab (a huge part of the exhibit hall floor space, and a popular destination for attendees) could set up the software connecting all the vendors' and HIEs' systems in one hour.

I asked him about the simple email solution promised by the government's Direct project, and whether that may be the path forward for small, cash-strapped providers. He accepted that Direct is part of the solution, but warned that it doesn't make things so simple. Unless two providers have a pre-existing relationship, they need to be part of a directory or even a set of federated directories, and assure their identities through digital signatures.

And what if a large hospital receives hundreds of email messages a day from various doctors who don't even know to whom their patients are being referred? Parisot says metadata must accompany any communications--and he's found that it's more effective for institutions to pull the data they want than for referring physicians to push it.

Intelligence for hospitals

Finally, Parisot told me EHRA has developed standards for submitting data to EHRs from 350 types of devices, and has 50 manufacturers working on devices with these standards. I visited the booth of iSirona as an example. They accept basic monitoring data such as pulses from different systems that use different formats, and translate over 50 items of information into a simple text format that they transmit to an EHR. They also add networking to devices that communicate only over cables. Outlying values can be rejected by a person monitoring the data. The vendor pointed out that format translation will be necessary for some time to come, because neither vendors nor hospitals will replace their devices simply to implement a new data transfer protocol.

For more about devices, I dropped by one of the most entertaining parts of the conference, the Intelligent Hospital Pavilion. Here, after a badge scan, you are somberly led through a series of locked doors into simulated hospital rooms where you get to watch actors in nursing outfits work with lifesize dolls and check innumerable monitors. I think the information overload is barely ameliorated and may be worsened by the arrays of constantly updated screens.

But the background presentation is persuasive: by attaching RFIDs and all sorts of other devices to everything from people to equipment, and basically making the hospital more like a factory, providers can radically speed up responses in emergency situations and reduce errors. Some devices use the ISM "junk" band, whereas more critical ones use dedicated spectrum. Redundancy is built in throughout the background servers.

Waiting for the main event

The US health care field held its breath for most of last week, waiting for Stage 2 meaningful use guidelines from HHS. The announcement never came, nor did it come this morning as many people had hoped. Because meaningful use is the major theme of HIMSS, and many sessions were planned on helping providers move to Stage 2, the delay in the announcement put the conference in an awkward position.

HIMSS is also nonplussed over a delay in another initiative, the adoption of a new standard in the classification of disease and procedures. ICD-10 is actually pretty old, having been standardized in the 1980s, and the U.S. lags decades behind other countries in adopting it. Advantages touted for ICD-10 are:

  • It incorporates newer discoveries in medicine than the dominant standard in the U.S., ICD-9, and therefore permits better disease tracking and treatment.

  • Additionally, it's much more detailed than ICD-9 (with an order of magnitude more classifications). This allows the recording of more information but complicates the job of classifying a patient correctly.

ICD-10 is rather controversial. Some people would prefer to base clinical decisions on SNOMED, a standard described in the Beyond Meaningful Use book mentioned earlier. Ultimately, doctors lobbied hard against the HHS timeline for adopting ICD-10 because providers are so busy with meaningful use. (But of course, the goals of adopting meaningful use are closely tied to the goals of adopting ICD-10.) It was the pushback from these institutions that led HHS to accede and announce a delay. HIMSS and many of its members were disappointed by the delay.

In addition, there is an upcoming standard, ICD-11, whose sandal some say ICD-10 is not even worthy to lace. A strong suggestion that the industry just move to ICD-11 was aired in Government Health IT, and the possibility was raised in Health Care IT News as well. In addition to reflecting the newest knowledge about disease, ICD-11 is praised for its interaction with SNOMED and its use of Semantic Web technology.

That last point makes me a bit worried. The Semantic Web has not been widely adopted, and if people in the health IT field think ICD-10 is complex, how are they going to deal with drawing up and following relationships through OWL? I plan to learn more about ICD-11 at the conference.

February 06 2012

Small Massachusetts HIT conference returns to big issues in health care

I've come to look forward to the Massachusetts Health Data Consortium's annual HIT conference because--although speakers tout the very real and impressive progress made by Massachusetts health providers--you can also hear acerbic and ruthlessly candid critiques of policy and the status quo. Two notable take-aways from last year's conference (which I wrote up at the time) were the equivalence of old "managed care" to new "accountable care organizations" and the complaint that electronic health records were "too expensive, too hard to use, and too disruptive to workflow." I'll return to these claims later.

The sticking point: health information exchange

This year, the spears were lobbed by Ashish Jha of Harvard Medical School, who laid out a broad overview of progress since the release of meaningful use criteria and then accused health care providers of undermining one of its main goals, the exchange of data between different providers who care for the same patient. Through quantitative research (publication in progress), Jha's researchers showed a correlation between fear of competition and low adoption of HIEs. Hospitals with a larger, more secure position in their markets, or in more concentrated markets, were more likely to join an HIE.

The research bolsters Jha's claim that the commonly cited barriers to using HIEs (technical challenges, cost, and privacy concerns) are surmountable, and that the real problem is a refusal to join because a provider fears that patients would migrate to other providers. It seems to me that the government and public can demand better from providers, but simply cracking the whip may be ineffective. Nor should it be necessary. An urgent shortage of medical care exists everywhere in the country, except perhaps in a few posh neighborhoods. There's plenty of work for all providers. Once insurance is provided to all the people in need, no institution should need to fear a lack of business, unless its performance record is dismal.

Jha also put up some research showing a strong trend toward adopting electronic health records, although the small offices that give half the treatment in the United States are still left behind. He warned that to see big benefits, we need to bring in health care institutions that are currently given little attention by the government--nursing homes, rehab facilities, and so forth--and give them incentives to digitize. He wrapped up by quoting David Blumenthal, former head of the ONC, on the subject of HIEs. Blumenthal predicted that we'd see EHRs in most providers over the next few years, and that the real battle would be getting them to adopt health information exchange.

Meanwhile, meaningful use could trigger a shake-out in the EHR industry, as vendors who have spent years building siloed products fail to meet the Stage 2 requirements that fulfill the highest aspirations of the HITECH act that defined meaningful use, including health information exchange. At the same time, a small but steadily increasing number of open source projects have achieved meaningful use certification. So we'll see more advances in the adoption of both EHRs and HIEs.

Low-hanging fruit signals a new path for cost savings

The big achievement in Massachusetts, going into the conference today, was a recent agreement between the state's major insurer, Blue Cross Blue Shield, and the 800-pound gorilla of the state's health care market, Partners HealthCare System. The pact significantly slows the skyrocketing costs that we've all become accustomed to in the United States, through the adoption of global payments (that is, fixed reimbursements for treating patients in certain categories). That two institutions of such weight can relinquish the old, imprisoning system of fee-for-service is news indeed.

Note that the Blue Cross/Partners agreement doesn't even involve the formation of an Accountable Care Organization. Presumably, Partners believes it can pick some low-hanging fruit through modest advances in efficiency. Cost savings you can really count will come from ACOs, where total care of the patient is streamlined through better transfers of care and intensive communication. Patient-centered medical homes can do even more. So an ACO is actually much smarter than old managed care. But it depends on collecting good data and using it right.

The current deal is an important affirmation of the path Massachusetts took long before the rest of the country in aiming for universal health coverage. We all knew at the time that the Massachusetts bill was not addressing costs and that these would have to be tackled eventually. And at first, of course, health premiums went up because a huge number of new people were added to the rolls, and many of them were either sick or part of high-risk populations.

The cost problem is now being addressed through administrative pressure (at one point, Governor Deval Patrick flatly denied a large increase requested by insurers), proposed laws, and sincere efforts at the private level such as the Blue Cross/Partners deal. I asked a member of the Patrick administration whether the problem could be solved without a new law, and he expressed the opinion that there's a good chance it could be. Steven Fox of Blue Cross Blue Shield said that 70% of their HMO members go to physicians in their Alternative Quality Network, which features global payments. And he said these members have better outcomes at lower costs.

ACOs may have a paradoxical effect on health information exchange. Jha predicted that ACOs will greatly streamline the exchanges between their member organizations, because these save money, but will resist exchanging data with outside providers, because keeping patients is even more important for ACOs than for traditional hospitals and clinics. Only by keeping a patient can an ACO reap the benefits of the investments it makes in long-term patient health.

As Doris Mitchell received an award for her work with the MHDC, executive director Ray Campbell mentioned the rapid growth and new responsibilities of her agency, the Group Insurance Commission, which negotiates all health insurance coverage for state employees, as cities and towns have been transferring their municipal employees to it. A highly contentious bill last year that allowed the municipalities to transfer their workers to the GIC was widely interpreted as a blow against unionized workers, when it was actually just a ploy to save money through the familiar gambit of combining the insured into a larger pool. I covered this controversy at the time.

A low-key conference

Attendance was down at this year's conference, which drew about half the attendees and vendors of last year's. Lowered interest also seemed reflected in the fact that none of the three CEOs receiving awards turned up to represent their institutions (the two institutions mentioned earlier for their historic cost-cutting deal--Blue Cross Blue Shield and Partners HealthCare--along with Steward Health Care).

The morning started with a thoughtful look at the requirements for ACOs by Frank Ingari of Essence Healthcare, who predicted a big rise in investment by health care institutions in their IT departments. Later speakers echoed this theme, saying that hospitals should invest less in state-of-the-art equipment that leads to immediately billable activities, and more in the underlying IT that will allow them to collect research data and cut down waste. Some of the benefits available through this research were covered in a talk at the Open Source convention a couple years ago.

Another intriguing session covered technologies available today that could be more widely adopted to improve health care. Videos of robots always draw an enthusiastic response, but a more significant innovation ultimately may be a database McKesson is developing that lets doctors evaluate genetic tests and decide when such tests are worth the money and trouble.

The dozen vendors were joined by a non-profit, Sustainable Healthcare for Haiti. Their first project is one of the most basic health interventions one can make: providing wells for drinkable water. They have a local sponsor who can manage their relationship with the government, and an ambitious mission that includes job development, an outpatient clinic, and an acute care children's hospital.

January 16 2012

Medical imaging in the cloud: a conversation about eMix

Over the past few weeks I've been talking to staff at DR Systems about medical imaging in the cloud. DR Systems boasts of offering the first cloud solution for sharing medical images, called eMix. According to my contact Michael Trambert, Lead Radiologist for PACS Reengineering for the Cottage Health System and Sansum Clinic in Santa Barbara, California, eMix started off by offering storage for both images and the reports generated by radiologists, cardiologists, and other imaging specialists. It then expanded to include other medical records in HL7, CDA, CCR, PDF, RTF, and plain text formats. It is vendor neutral, thanks to DICOM (a standard that covers both images and reports) and HL7.

First a bit of background (some of which I offered in an earlier posting). In the U.S., an estimated 30 billion dollars is currently wasted each year on re-imaging that could be avoided. In addition to cost, there are many reasons to cut down on images: many systems expose patients to small amounts of radiation that pose a cumulative risk over time, and in an emergency situation it's better to reuse a recent image than to waste time taking another.

The situation was brought home by a conversation I had with CIO Chuck Christian of Vincennes, Indiana's Good Samaritan Hospital, a customer of eMix. Patients are often tasked with carrying their own images around (originally as print-outs, and more recently as CD-ROMs). These things often get misplaced, or the CDs turn out to be corrupt or incompatible with the receiving IT system. It's a situation crying out for networked transfer, but HIPAA requires careful attention to security and privacy.

eMix is currently used by about 300 sites, most in the US, and a few in Europe. Uses include remote consulting and sending an eMix image and report "package" to an emergency treatment center ahead of the patient. The eMix package has a built-in viewing capability, so the recipient needs nothing beyond a web browser. Data is protected by encryption on the eMix site and through SSL during transmission.

Sharing is so easy that according to eMix General Manager Florent Saint-Clair, the chief privacy risk in eMix is user error. A sender may type in the wrong email address or accede to a request for an image without ensuring that the recipient is properly authorized to receive it.

This will be an issue with the Direct project, too, when that enters common use. The Direct project will allow the exchange of data over email, but because most doctors' email accounts are not currently secure, eMix just uses email to notify a recipient that an image is ready. Everything else takes place over the Web. The company stresses a number of measures they take to ensure security: for instance, data is always deleted after 30 days, physical security is robust, and storage is on redundant servers.

December 10 2011

HealthTap's growth validates hypotheses about doctors and patients

A major round of funding for HealthTap gave me the opportunity to talk again with founder Ron Gutman, whom I interviewed earlier this year. You can get an overview of HealthTap from that posting or from its own web site. Essentially, HealthTap is a portal for doctors to offer information to patients and potential patients. In this digital age, HealthTap asks, why should a patient have to make an appointment and drive to the clinic just to find out whether her symptoms are probably caused by a recent medication? And why should a doctor repeat the same advice for each patient when the patient can go online for it?

Now, with 6,000 participating physicians and 500 participating health care institutions, HealthTap has revealed two interesting and perhaps unexpected traits about doctors:

  • Doctors will take the time to post information online for free. Many commentaries, including my own earlier posting, questioned whether they'd take the time to do this. The benefits of posting information are that doctors can demonstrate their expertise, win new patients, and cut down on time spent answering minor questions.

  • Doctors are willing to rate each other. This can be a surprise in a field known for its reluctance to break ranks and doctors' famous unwillingness to testify in malpractice lawsuits. But doctors do make use of the "Agree" button that HealthTap provides to approve postings by other doctors. When they press this button, they add the approved posting to their own web page (Virtual Practice), thus offering useful information to their own patients and others who can find them through search engines and social networks. The "Agree" ratings also cause postings to turn up higher when patients search for information on HealthTap, and help create a “Trust Score” for the doctor.

HealthTap, Gutman assures me, is not meant to replace doctors' visits, although online chats and other services in the future may allow patients to consult with doctors online. The goals of HealthTap remain the routine provision of information that's easy for doctors to offer online, and making medicine more transparent so that patients know their doctors before treatment and throughout their relationships.

HealthTap has leapt to a new stage with substantial backing from Tim Chang (managing director of Mayfield Fund), Eric Schmidt (through his Innovation Endeavors) and Rowan Chapman (Mohr Davidow Ventures). These VCs provide HealthTap with the funds to bring on board the developers, as well as key product and business development hires, required to scale up its growing operations. These investors also lend the business the expertise of some of the leaders in the health IT industry.

October 16 2011

BioCurious opens its lab in Sunnyvale, CA

When I got to the BioCurious lab yesterday evening, they were just cleaning up some old coffee makers. These, I learned, had been turned into sous vide cookers in that day's class.

New lab at BioCurious
New lab at BioCurious

Sous vide cookers are sort of the gourmet rage at the moment. One normally costs several hundred dollars, but BioCurious offered a class for $117 where seventeen participants learned to build their own cookers and took them home at the end. They actually cooked steak during the class--and I'm told that it came out very well--but of course, sous vide cookers are also useful for biological experiments because they hold temperatures very steady.

The class used Arduinos to provide the temperature control for the coffee pots and other basic hardware, so the lesson was more about electronics than biology. But it's a great illustration of several aspects of what BioCurious is doing: a mission of involving ordinary people off the street in biological experiments, using hands-on learning, and promoting open source hardware and software.

Other classes have taught people to insert dyes into cells (in order to teach basic skills such as pipetting), to run tests on food for genetically modified ingredients, and to run computer analyses on people's personal DNA sequences. The latter class involved interesting philosophical discussions about how much to trust their amateur analyses and how to handle potentially disturbing revelations about their genetic make-up. All the participants in that class got their sequencing done at 23andme first, so they had sequences to work with and could compare their own work with what the professionals turned up.

Experiments at BioCurious are not just about health. Synthetic biologists, for instance, are trying a lot of different ways to create eco-friendly synthetic fuels.


BioCurious is not a substitute for formal training in biochemistry, biology, and genetics. But it is a place for ordinary people to get a feel for what biologists do, and for real biologists without access to expensive equipment to do the research of their dreams.

In a back room (where I was allowed to go after being strenuously warned not to touch anything--BioCurious is an official BSL 1 facility, and they're lucky the city of Sunnyvale allowed them to open), one of the staff showed me a traditional polymerase chain reaction (PCR) machine, which costs several thousand dollars and is critical for amplifying DNA for sequencing and other experiments.

Traditional commercial PCR
Traditional commercial PCR

A couple of BioCurious founders analyzed the functions of a PCR machine and, out of plywood and off-the-shelf parts, built an OpenPCR with open hardware specs. At $599, OpenPCR opens up genetic research to a far greater audience.

BioCurious staffer with OpenPCR
BioCurious staffer with OpenPCR
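For readers who haven't met one: all a PCR machine really automates is a repeated temperature program, with each cycle roughly doubling the copies of a target DNA sequence. Here's a toy simulation in Python using textbook temperatures and hold times (invented for illustration, not OpenPCR's actual firmware values):

    # A typical three-step PCR cycle; temperatures and times are
    # textbook defaults, shown only to illustrate what the machine does.
    PROFILE = [
        ("denature", 94, 30),   # separate the DNA strands
        ("anneal",   55, 30),   # let primers bind to the template
        ("extend",   72, 60),   # polymerase copies the sequence
    ]

    def run_pcr(cycles=30):
        copies = 1
        for cycle in range(1, cycles + 1):
            for step, celsius, seconds in PROFILE:
                print(f"cycle {cycle:2d}: hold {celsius} C for {seconds}s ({step})")
            copies *= 2          # each cycle roughly doubles the copies
        print(f"~{copies:,} copies of the target after {cycles} cycles")

    run_pcr(cycles=3)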

How low-budget is BioCurious? After meeting for a year in somebody's garage, they finally opened this space three weeks ago with funds raised through Kickstarter. All the staff and instructors are volunteers. They keep such a tight rein on spending that a staffer told me they could keep the place open by teaching one class per week. Of the $117 students spent today for their five-hour class, $80 went to hardware.

BioCurious isn't unique (a similar space has been set up in New York City, and some movements such as synthetic biology promote open information), but it's got a rare knack for making people comfortable with processes and ideas that normally put them off. When executive director Eri Gentry introduces the idea to many people, they react with alarm and put up their hands, as if they're afraid of being overwhelmed by technobabble. (I interviewed Gentry (MP3) before a talk she gave at this year's O'Reilly Open Source Convention.)

Founder and executive director Eri Gentry
Founder and executive director Eri Gentry

BioCurious attacks that fear and miscomprehension. Like Hacker Dojo, another Silicon Valley stalwart whose happy hour I attended Friday night, it wants to be an open space for open-minded people. Hacker Dojo and BioCurious will banish forever the stereotype of the scientist or engineer as a socially maladroit loner. The attendees are unfailingly welcoming and interested in talking about what they do in ways that make it understandable.

I thought of my two children, both of whom pursued musical careers. I wondered how they would have felt about music if kids weren't exposed to music until junior high school, whereupon they were sat down and forced to learn the circle of fifths and first species counterpoint. That's sort of how we present biology to the public--and then, even those who do show an interest are denied access to affordable equipment. BioCurious is on the cusp of a new scientific revolution.

Eri Gentry with Andy Oram in lab
Eri Gentry with Andy Oram in lab

October 15 2011

lifeIMAGE and the quest for medical imaging exchange

Medical imaging--first X-Rays, and later CAT scans, ultrasound, and MRIs--was one of the first areas of medicine to computerize, and images are routinely distributed in digital format around the world for diagnosis, training, and storage. But the field still fails to capitalize on many of the advantages that other parts of the computer field take for granted: access anywhere, seamless integration, and (perhaps most important for a health field) clear enforcement of permissions.

I talked last week to Hamid Tabatabaie, CEO of lifeIMAGE, one of the companies providing ways to share images in various cloud configurations, and got some useful background on the field of medical imaging and its challenges.

Like all areas of cloud computing (and health IT, for that matter), this one is growing fast. Right after talking to Tabatabaie, I heard two more announcements of cloud services for medical images, one from DR Systems (whose eMix service was the first cloud service for medical images) and another from Merge HealthCare. Other areas of health IT have a woeful history of misunderstanding and riding roughshod over the needs of the clinicians and patients who use them; Tabatabaie assured me that lifeIMAGE is different. Here are some areas we discussed.

Formats and standards

Most images adhere to one of the DICOM standards. But like the HL7 standards for electronic health records, DICOM provides enough wiggle room for images to come down the pike that can't be read in the software a particular radiologist owns. One radiologist I spoke to groans whenever a patient walks in bearing a CD. He knows he's likely to spend a precious and unreimbursed quarter of an hour getting the image to open. Worse still, he usually can't display images from two different sources in the same program, so he can't do the intensive, probing kind of examination that radiologists need to do--for instance, to compare an image from a fracture or a cancer site from four weeks ago with an image from today. Tabatabaie told me that lack of access to patient imaging history accounts for a substantial number of repeat exams, which amount to 20 billion dollars of the annual 120 billion spent on imaging in the US alone.

In contrast, lifeIMAGE understands enough different formats that it can display any two images side by side more than 98% of the time.
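To see the wiggle room for yourself, the third-party pydicom library makes a good microscope. A hedged sketch (the file name is hypothetical): even tags a viewer depends on may be absent or filled in vendor-specific ways.

    import pydicom  # third-party library: pip install pydicom

    ds = pydicom.dcmread("study_from_patient_cd.dcm")  # hypothetical file

    # Tags every viewer wants, but which senders may omit or encode
    # inconsistently:
    for tag in ("PatientID", "StudyDate", "Modality",
                "PhotometricInterpretation"):
        print(tag, "=", getattr(ds, tag, "<missing>"))

    # Decoding the pixels can fail outright if the viewer lacks a
    # handler for the file's transfer syntax.
    print("image shape:", ds.pixel_array.shape)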

Storage, transmission, and access

But the real business of lifeIMAGE is helping doctors and institutions exchange images. Smaller health care providers can use lifeIMAGE as a public cloud service, letting it be custodian of the images, but larger ones are likely to set up lifeIMAGE software on their own servers. To exchange images, lifeIMAGE uses standards-based protocols, which helps prevent vendor lock-in and addresses customers' demand for interoperability between services.

What about the self-employed radiologist, the small physician, or any institution that hasn't signed up with lifeIMAGE? Here's where sophisticated permissions come into play. An institution using lifeIMAGE--whether on its private network or on the lifeIMAGE cloud--can provide any individual with access to an image. Although community physicians with long-term relationships to a hospital will create accounts on lifeIMAGE, an institution can also set up a temporary username and password for urgent needs, such as a request from a trauma center. Options for sharing include (a minimal sketch of such a permission model follows the list):

  • Giving the individual the right to download the image

  • Withholding the right to download the image, but giving someone the right to view it while it remains on the lifeIMAGE site (this must involve some form of Digital Rights Management)

  • Withdrawing rights after they have been granted

  • Granting rights for a limited time period

  • Granting rights to a group or class of images

  • Transmitting an image after removing identifying information (useful for training)
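That list reads like an access-control model with expiry, revocation, and de-identification flags. As a thought experiment--the class and field names below are invented, not lifeIMAGE's API--it might look like this in Python:

    from dataclasses import dataclass
    from datetime import datetime, timedelta
    from typing import Optional

    @dataclass
    class Grant:
        user: str
        rights: set                         # e.g. {"view"} or {"view", "download"}
        expires: Optional[datetime] = None  # None = no time limit
        deidentified: bool = False          # strip identifying info on export

    class ImagePermissions:
        def __init__(self):
            self._grants = {}

        def grant(self, g: Grant):
            self._grants[g.user] = g

        def revoke(self, user: str):
            self._grants.pop(user, None)    # withdrawing rights after the fact

        def allowed(self, user: str, right: str) -> bool:
            g = self._grants.get(user)
            if g is None:
                return False
            if g.expires and datetime.now() > g.expires:
                return False
            return right in g.rights

    perms = ImagePermissions()
    perms.grant(Grant("trauma-center-tmp", {"view"},
                      expires=datetime.now() + timedelta(hours=24)))
    print(perms.allowed("trauma-center-tmp", "view"))      # True
    print(perms.allowed("trauma-center-tmp", "download"))  # False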

The cloud may or may not be the right way to store sensitive health information, but the medical imaging field realizes it has to handle distributed processing somehow. I'll be interested in hearing how doctors and patients respond to the current wave of companies entering the space, and whether some convergence and interoperability take place.

September 26 2011

Could Medical Devices in the Field Help Prevent Fraud?

Medical devices, as I reported earlier this month, are becoming more and more important to health care as they get more sophisticated, more compact, cheaper, and easier to deploy in the home. But in addition to improving care, keeping patients out of the hospital, and providing useful clinical data to researchers, could medical devices help with one of the biggest contributors to escalating health costs--could they help prevent fraud?

Fraud is definitely sucking money from the health care system. The Coalition Against Insurance Fraud (while cautioning that accurate statistics are hard to gather) estimated that Medicare and Medicaid made 23.7 billion dollars in improper payments in 2007. The Wall Street Journal just reported a conviction in a single scam that cost Medicare 205 million dollars.

Governments aren't the only ones paying bad claims. Private insurers are obsessed with fraudulent charges. And all payers--both government and private--put up huge bureaucratic barriers to payments in order to combat fraud, elevating health costs even more.

Deliberate charges for work that was never performed represent the extreme end of a range of poor medical practices. Medicare defines a category of "abuse" in addition to fraud. Examples of abuse include performing an unnecessary procedure and offering treatments in an ineffective manner.

What can medical devices do? As developer Shahid Shah pointed out in an interview, devices provide authoritative information that is more accurate than impressions reported by either patients or health care providers. This means they can objectively indicate:

  • Traits and statistics indicating the presence of diseases and medical conditions

  • Traits and statistics indicating that treatments were applied

  • Traits and statistics indicating the remission of symptoms

In other words, evidence from devices can verify that a treatment was necessary, that it was administered, and that it was effective. To carry out their role in fraud prevention, devices must be secured. When a manufacturer's testing verifies their accuracy, the manufacturer should digitally sign them, and each transmission from the device should bear the signature. Tampering should, as much as the manufacturer can guarantee, disable the digital signature. Analysis may also be able to detect other ways of gaming the system, such as patients creating artificial conditions to simulate medical problems.
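A sketch of what signed transmissions could look like, using the Ed25519 signatures in the Python cryptography library; the device name and reading are invented, and a real deployment would add certificates and hardware key protection:

    import json
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    # In production the private key would be burned into the device at
    # the factory; generating one here keeps the sketch self-contained.
    device_key = Ed25519PrivateKey.generate()
    public_key = device_key.public_key()   # published for payers to use

    reading = json.dumps(
        {"device": "bp-cuff-0042", "systolic": 142, "diastolic": 91}
    ).encode()
    signature = device_key.sign(reading)

    # The payer checks that the reading came from a certified device
    # before accepting it as evidence that treatment was needed.
    try:
        public_key.verify(signature, reading)
        print("reading authenticated")
    except InvalidSignature:
        print("possible tampering - reject the claim")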

September 21 2011

David Blumenthal lauds incrementalism at forum on electronic health records

Anyone who follows health issues in the U.S. has to be obsessed with the workings of the Office of the National Coordinator (ONC). During the critical early phases of implementing HITECH and meaningful use, the National Coordinator himself was Dr. David Blumenthal, who came to speak yesterday in the Longwood medical area in Boston.

A long-time Bostonian who moved up from being a primary care physician, Blumenthal is now back at Mass General and Harvard Business School. Most of his speech yesterday was a summary of the reasoning behind meaningful use, but some off-the-cuff remarks at the end, as well as vigorous discussion during a following panel, provided some interesting perspectives. Best of all was hearing a lot of facts on the ground. These helped explain the difference between EHRs in theory and in practice.

Which comes first, electronic records or standard formats?

There were a lot of complaints at the forum about the lack of interoperability between electronic health records. Blumenthal declared twice that pushing doctors to adopt EHRs was a good idea because we have to have our information digitized before we can think of interchanging it. Coming from the perspective of having seen systems and standards develop--and having seen the mess that results from products out of sync with standards in areas ranging from CORBA to browsers--I disagree with this claim. Luckily, Blumenthal's actual work didn't match the simplistic "digitize first" approach. The ONC built some modest requirements for interoperability into the first stage of meaningful use and plans to ramp these requirements up quickly. Furthermore, they're engaging in intensive negotiations with industry players over EHR standards (see, for instance, my write-up of a presentation by John Halamka last May) and worked quite early on the ground-breaking CONNECT and Direct projects for information exchange.

I understand that an ideal standard can't be expected to spring from the head of Zeus. What perhaps the standards proponents should have worked on is a separation of formats from products. Most EHRs reflect an old-fashioned design that throws together data format, architecture, and user interface. Wouldn't it be great to start the formats off on their own course, and tell EHR vendors to design wonderful interfaces that are flexible enough to adapt to format changes, while competing on providing clinicians with the best possible interface and workflow support? (Poor workflow was another common complaint at last night's forum.) That's the goal of the Indivo project. I interviewed Daniel Haas from that project in June.

Incrementalism in EHRs: accepting imperfection

Perhaps Blumenthal's enthusiasm for putting electronic records in place and seeking interoperability later reflects a larger pragmatism he brought up several times yesterday. He praised the state of EHRs (pushing back against members of the audience who had stories to tell of alienated patients and doctors quitting the field in frustration), pointing to a recent literature survey in which 92% of studies found improved outcomes in patient care, cost control, or user satisfaction. And he said we would always be dissatisfied with EHRs because we compare them to some abstract ideal.

I don't think his assurances or the literature survey can assuage everyone's complaints. But his point that we should compare EHRs to paper is a good one. Several people pointed out that before EHRs, doctors simply lacked basic information when making decisions, such as what labs and scans the patient had a few months ago, or even what diagnosis a specialist had rendered. How can you complain that EHRs slow down workflow? Before EHRs there often was no workflow! Many critical decisions were stabs in the dark.

Too much content, too much discontent

Even so, it's clear that EHRs have to get better at sifting and presenting information. Perhaps even more important, clinicians have to learn how to use them better, so they can focus on the important information. One member of the audience said that after her institution adopted EHRs, discharge summaries went from 3 pages to 10 pages in average length. This is probably not a problem with EHRs, but with clinicians being lazy and taking advantage of the cut-and-paste function.

The computer was often described as a "third person in the room" during patient visits, and even, by panelist and primary care physician Gerard Coste, as a two-year-old who takes up everybody's attention. One panelist, law professor and patient representative Michael Meltsner, suggested that medical residents need to be trained about how to maintain a warm, personal atmosphere during an interview while looking up and entering data. Some people suggested that better devices for input and output (read: iPads) would help.

Blumenthal admitted that electronic records can increase workloads and slow doctors down. "I've said that the EHR made me a better doctor, but I didn't say it made me a faster one." He used this as a lead-in to his other major point during the evening, which is that EHRs have to be adopted in conjunction with an overhaul of our payment and reward system for doctors. He cited Kaiser Permanente (a favorite of health care reformers, even though doctors and patients in that system have their share of complaints) as a model because they look for ways to keep patients healthy with less treatment.

While increasing workloads, electronic records also raise patient expectations. Doctors are really on the hook for everything in the record, and have to act as if they know everything in it. Similar expectations apply to coordination of care. Head nurse Diane L Gilworth said, "Patients think we talk to each other much more than we do." The promise of EHRs and information interchange hasn't been realized.

New monitoring devices and the movement for a patient centered medical home will add even more data to the mix. I didn't ask a question during the session (because I felt it was for clinicians and they should be the ones to have their say), but if I could have posed a question, it would be this: one speaker reminded the audience that the doctor is liable for all the information in the patient's record. But the patient centered medical home requires the uploading of megabytes of data that is controlled by the patient, not the doctor. Doctors are reluctant to accept such data. How can we get the doctor and patient to collaborate to produce high-quality data, and do we need changes in regulations for that to happen?

A plea for an old-fashioned relationship

One theme bubbled up over and over at yesterday's meeting: clinicians don't want to be dazzled by more technology. They just want more time to interview patients and a chance to understand them better. Their focus is not on meaningful use but on meaningful contact. If EHRs can give them and their patients that experience, EHRs are useful and will be adopted enthusiastically. If EHRs get in the way, they will be rejected or undermined. This was an appropriate theme for a panel organized by the Schwartz Center for Compassionate Healthcare.

That challenge is harder to deal with than interchange formats or better I/O devices. It's at the heart of complaints over workflow and many other things. But perhaps it should be at the top of the EHR vendors' agendas.

September 09 2011

Big health advances in small packages: report from the third annual Medical Device Connectivity conference

At some point, all of us are likely to owe our lives--or our quality of life--to a medical device. Yesterday I had the chance to attend the third annual Medical Device Connectivity conference, where manufacturers, doctors, and administrators discussed how to get all these monitors, pumps, and imaging machines to work together for better patient care.

A few days ago I reported on the themes in the conference, and my attendance opened up lots of new insights for me. I've been convinced for a long time that the real improvements in our health care system will come, not by adjusting insurance policies, payment systems, or data collection, but by changes right at the point of care. And nothing is closer to the point of care than the sensor attached to your chest or the infusion pump inserted into your vein.

What sorts of innovation can we hope for in medical devices? They include:

Connectivity

Devices already produce data (in the form of monitors' numerical outputs and waveforms) and consume it (in the form of infusion pumps' formularies and settings). The next natural step is to send the data over networks--hopefully wireless--so it can be viewed remotely. Even better, networking can infuse the data into electronic health records.

One biomedical engineer gestured vigorously as he told me how the collection of device data could be used to answer such questions as "How do some hospitals fare in comparison to others on similar cases?" and "How fast do treatments take effect?" He declared, "I'd like to spend the rest of my life working on this."

Cooperation

After coaxing devices to share what they know, the industry can work on getting them to signal and control each other. This would remove a major source of error in the system, because the devices would no longer depend on nurses to notice an alert and adjust a pump. "Smart alarms" would combine the results of several sensors to determine more accurately whether something is happening that really requires intervention.
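The logic of a smart alarm can be stated in a few lines. Here's a minimal sketch in Python--the thresholds and the corroboration rule are invented for illustration, not taken from any real monitor:

    def smart_alarm(heart_rate, spo2, resp_rate, patient_moving):
        """Alarm only when several sensors corroborate each other."""
        if patient_moving:
            # Motion corrupts pulse-ox readings, so demand stronger
            # evidence from the other sensors before alarming.
            return heart_rate < 40 and resp_rate < 6
        danger_signs = [heart_rate < 50, spo2 < 88, resp_rate < 8]
        return sum(danger_signs) >= 2   # two independent signs of trouble

    print(smart_alarm(heart_rate=45, spo2=85, resp_rate=14,
                      patient_moving=False))   # True: two corroborating signs
    print(smart_alarm(heart_rate=45, spo2=85, resp_rate=14,
                      patient_moving=True))    # False: likely motion artifact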

Of course, putting all this intelligence in software calls for iron-clad requirements definitions and heavy testing. Some observers doubt that we can program devices or controllers well enough to handle the complexity of individual patient differences. But here, I think the old complaint, "If we can get men to the moon..." applies. The Apollo program showed that we can create complex systems that work, if enough care and effort are applied. (However, the Challenger explosion shows that we can also fail, when they are not.)

Smaller and smarter

Faster, low-power chips are leading to completely different ways of solving common medical problems. Some recent devices are two orders of magnitude smaller than corresponding devices of the previous generation. This means, among other things, that patients formerly confined to bed can be more ambulatory, which is better for their health. The new devices also tend to be 25% to 30% of the cost of older ones.

Wireless

Telemedicine is one of the primary goals of health care reform. Taking care out of clinics and hospitals and letting people care for themselves in their natural environments gives us whole new opportunities for maintaining high-level functioning. But it isn't good for anybody's health to trip over cables, so wireless connections of various types are a prerequisite to the wider use of devices.

Ed Cantwell of the West Wireless Health Institute suggested three different tiers of wireless reliability, each appropriate for different devices and functions in a medical institution. The lowest level, consumer-grade, would correspond to the cellular networks we all use now. The next level, enterprise-grade, would have higher capacity and better signal-to-noise ratio. The highest level, medical-grade, would be reserved for life-critical devices and would protect against interference or bandwidth problems by using special frequencies.

Standards

The status of standards and standards bodies in the health care space is complex. Some bodies actually create standards, while others try to provide guidelines (even standards) for using these standards. When you consider that it takes many years just to upgrade from one version of a standard to the next, you can see why the industry is permanently in transition. But these standards do lead to interoperability.

A few of the talks took on a bit of a revival meeting atmosphere--step up and do things the way we tell you! Follow standards, do a risk analysis, get that test plan in place. I think one of the most anticipated standards for device safety is ASTM F2761-09, which comes up in the next section.

Communication frameworks

The industry is just in the opening chapter of making devices cooperate, but a key principle seems to be generally accepted: instead of having one device directly contact another, it's better to have them go through a central supervisor or node. A monitor, for instance, may say, "This patient's heart rate is falling." The supervisor would receive this information and check a "scenario" to see whether action should be taken. If so, it tells the pump, "Change the infusion rate as follows."

Michael Robkin reported on an architecture for an integrated clinical environment (ICE), defined in a set of safety requirements called ASTM F2761-09. The devices are managed by a network controller, which in turn feeds data back and forth to a supervisor.

I took the term "scenario" from a project by John Hatcliff of Kansas State University, the Medical Device Coordination Framework that seems well suited to the ICE architecture. In addition to providing an API and open-source library for devices to communicate with a supervisor, the project creates a kind of domain-specific language in which clinicians can tell the devices how to react to events.
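Putting the last few paragraphs together, the supervisor pattern is easy to caricature in code. This sketch is mine, not the MDCF API; the scenario, device names, and threshold are invented:

    # Devices report to a central supervisor, which consults "scenarios"
    # before commanding other devices--no device talks directly to another.
    scenarios = [
        {
            "name": "falling heart rate during PCA infusion",
            "trigger": lambda vitals: vitals["heart_rate"] < 50,
            "action": ("pca_pump", "set_rate", 0.0),
        },
    ]

    class Supervisor:
        def __init__(self, devices):
            self.devices = devices          # e.g. {"pca_pump": pump_proxy}

        def on_report(self, vitals):
            for scenario in scenarios:
                if scenario["trigger"](vitals):
                    device, command, arg = scenario["action"]
                    getattr(self.devices[device], command)(arg)
                    print("applied scenario:", scenario["name"])

    class FakePump:
        def set_rate(self, ml_per_hour):
            print("pump rate set to", ml_per_hour, "ml/h")

    supervisor = Supervisor({"pca_pump": FakePump()})
    supervisor.on_report({"heart_rate": 44, "spo2": 97})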

Communicating devices need regulatory changes too. As I described in my earlier posting, the FDA allows communicating devices only if the exact configuration has been tested by the vendor. Each pair of devices must be proven to work reliably together.

In a world of standards, it should be possible to replace this pair-wise clearance with what Robkin called component-wise clearance. A device would be tested for conformance to a standard protocol, and then would be cleared by the FDA to run with any other device that is also cleared to conform. A volunteer group called the Medical Device Interoperability Safety Working Group has submitted a set of recommendations to the FDA that recommends this change. We know that conformance to written standards does not perfectly guarantee interoperability (have you run into a JavaScript construct that worked differently on different web browsers?), but I suppose this change will not compromise safety because the hospital is always ultimately responsible for testing the configurations of devices it chooses.

The safety working group is also persuading the FDA to take a light hand in regulating consumer devices. Things such as the Fitbit or Zeo that are used for personal fitness should not have to obey the same rigorous regulations as devices used to make diagnostic decisions or deliver treatments.

Open Source

One of the conference organizers is Shahid Shah, a software developer in the medical device space with a strong leaning toward open-source solutions. As in his OSCon presentation, Shah spoke here of the many high-quality components one can get just for the effort of integration and testing. The device manufacturer takes responsibility for the behavior of any third-party software, proprietary or free, that goes into the device. But for at least a decade, devices using free software have been approved by the FDA. Shah also goaded his audience to follow standards wherever possible (such as XMPP for communication), because that will allow third parties to add useful functionality and enhance the marketability of their devices.

Solving old problems in new ways

Tim Gee, program chair, gave an inspiring talk showing several new devices that do their job in radically better ways. Many have incorporated the touch screen interface popularized by the iPhone, and use accelerometers like modern consumer devices to react intelligently to patient movements. For instance, many devices give off false alarms when patients move. The proper interpretation of accelerometers might help suppress these alarms.

In addition to user interface advances and low-power CPUs with more memory and processing power, manufacturers are taking advantage of smaller, better WiFi radios and tools that reduce their time to market. They are also using third party software modules (including open source) and following standards more. Networking allows the device to offload some processing to a PC or network server, thus allowing the device itself to shrink even more in physical size and complexity. Perhaps most welcome to clinicians, manufacturers are adding workflow automation.

Meaningful use and the HITECH act play a role in device advances too. Clinicians are expected to report more and more data, and the easiest way to get and store it is to go to the devices where it originates.

Small, pesky problems

One of the biggest barriers to sharing device data is a seemingly trivial question: what patient does the data apply to? The question is actually a major sticking point, because scenarios like the following can happen (a sketch of one defensive approach follows the list):

  • A patient moves to a new hospital room. Either her monitor reports a new location, or she gets a new monitor. Manually updating the information to put the new data into the patient's chart is an error-prone operation.

  • A patient moves out and a device is hooked up to a new patient. This time the staff must be sure to stop reporting data to the old patient's record and direct it into the new patient's record.
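One defensive approach is to refuse to route data unless an explicit, current device-to-patient association is on file. This sketch is invented for illustration:

    class DevicePatientRegistry:
        """Route device data only while an explicit association exists,
        so a re-used monitor can't write to the last patient's chart."""

        def __init__(self):
            self._assignments = {}          # device_id -> patient_id

        def assign(self, device_id, patient_id):
            self._assignments[device_id] = patient_id

        def release(self, device_id):
            # Must happen when the patient moves or is discharged.
            self._assignments.pop(device_id, None)

        def route(self, device_id, sample):
            patient = self._assignments.get(device_id)
            if patient is None:
                return None    # hold for manual review rather than guess
            return (patient, sample)

    registry = DevicePatientRegistry()
    registry.assign("monitor-7", "patient-123")
    print(registry.route("monitor-7", {"heart_rate": 72}))  # goes to patient-123
    registry.release("monitor-7")
    print(registry.route("monitor-7", {"heart_rate": 80}))  # None: unassigned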

Several speakers indicated that even the smallest risk of putting data into the wrong person's record was unacceptable. Hospitals assign IDs to patients, and most monitors report the ID with each data sample, but Luis Melendez of Partners HealthCare says the EHR vendors don't trust this data (nurses could have entered it incorrectly) so they throw the IDs away and record only the location. Defaults can be changed, but these introduce new problems.

Another one of these frustratingly basic problems is attaching the correct time to data. It seems surprisingly hard to load the correct time on devices and keep them synched to it. I guess medical devices don't run NTP. They often have timestamps that vary from the correct time by minutes or even months, and EHRs store these without trying to verify them.

Other steps forward

We are only beginning to imagine what we could do with smarter medical devices. For instance, if their output is recorded and run through statistical tests, problems with devices might be revealed earlier in their life cycles. (According to expert Julian Goldman, the FDA and ONC are looking into this.) And I didn't even hear talk of RFIDs, which could greatly reduce the cost of tracking equipment as well as ensure that each patient is using what she needs. No doubt about it: even if medical device improvements aren't actually the protagonists of the health care reform drama, they will play a more than supporting role. But few clinical institutions use the capabilities of smart devices yet, and those that do exploit them do so mostly to insert data into patient records. So we'll probably wait longer than we want for the visions unveiled at this conference to enter our everyday lives.

September 06 2011

Medical device experts and their devices converse at Boston conference

I'm looking forward to Medical Device Connectivity conference this week at Harvard Medical School. It's described by one of the organizers, Shahid N. Shah, as an "operational" conference, focused not on standards or the potential of the field but on how to really make these things work. On the other hand, as I learned from a call with Program Chair Tim Gee, many of the benefits for which the field is striving have to wait for changes in technology, markets, FDA regulations, and hospital deployment practices.

For instance, Gee pointed out that patients under anesthesia need a ventilator in order to breathe. But the ventilator interferes with certain interventions that may be needed during surgery, such as X-rays, so it must sometimes be turned off temporarily. Guess how often the staff forget to turn the ventilator back on after taking the X-ray? Often enough, anyway, to justify taking the matter out of human hands and having the X-ray machine talk directly to the ventilator. Activating the X-ray machine should turn off the ventilator, and finishing the X-ray should turn it back on. But that's not done now.

Another example Gee cited of the benefits of smarter, better-networked devices is the patient-controlled analgesia (PCA) pump. When overused, PCA pumps can slow the patient's vital functions and eventually cause death. But a busy nurse or untrained family member might keep the pump going even after the patient monitors issue alarms. It would be better for the monitor to shut off the pump when vital signs get dangerously low.

The first requirement for such life-saving interlocks is smart software that can traverse state machines, along with rules engines for more complex scenarios such as self-regulating intravenous feeds. Standards will make it easier for devices to communicate. But the technologies already exist and will be demonstrated at an interoperability lab on Wednesday evening.
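To show how simple the core logic of such an interlock can be, here is a toy two-state machine in Python. The vital-sign thresholds and data structures are invented for illustration; a real interlock would be a regulated medical device, and resuming the pump should require an explicit clinician reset rather than happening automatically.

    # Toy monitor-to-pump interlock as a two-state machine.
    # Thresholds and fields are hypothetical, for illustration only.
    RUNNING, SUSPENDED = "running", "suspended"

    def next_state(state, vitals):
        # Suspend the PCA pump on dangerously low respiration or SpO2.
        if state == RUNNING and (vitals["resp_rate"] < 8 or vitals["spo2"] < 90):
            return SUSPENDED
        return state  # stay suspended until a clinician resets the pump

    state = RUNNING
    for vitals in ({"resp_rate": 14, "spo2": 97}, {"resp_rate": 6, "spo2": 88}):
        state = next_state(state, vitals)
        print(vitals, "->", state)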

None of the functions demonstrated at the lab are available in the field. They require more testing by manufacturers. Obviously, devices that communicate also add an entirely new dimension of tasks to hospital staff. And FDA certification rules may also need to be updated to be more flexible. Currently, a manufacturer selling a device that interoperates with other devices must test every configuration it wants to sell, specifying every device in that configuration and taking legal responsibility for their safe and reliable operation. It cannot sell into a hospital that uses the device in a different configuration. Certifying a device to conform to a standard instead of working with particular devices would increase the sale and installation of smart, interconnected instruments.

Reading the conference agenda (PDF), I came up with three main themes for the conference:

The device in practice

Hospitals, nursing homes, and other institutions that are currently the major users of devices (although devices are increasingly being used in homes as well) need to make lots of changes in workflow and IT to reap the benefits of newer device capabilities. Currently, according to Gee, the output from devices is used to make immediate medical decisions but is rarely stored in EHRs. The quantities of data could become overwhelming (perhaps why one session at the conference covers cloud storage), and doctors would need tools for analyzing it. Continuous patient monitoring, which tends to send 12 Kb of data per second and requires a reliable channel that prevents data loss, has a session all its own.

Standards and regulations

The FDA has responded with unusual speed and flexibility to device manufacturers recently. One simple but valuable directive (PDF) clarified that devices can exchange information with other systems (such as EHRs) without bringing those systems under FDA regulation. The agency has also issued a risk assessment standard, IEC 80001, which specifies what manufacturers and hospitals should do when installing devices or upgrading environments. This is an advance for the FDA, which normally regulates only the manufacturers of products, not their users, but recognizes that users need guidance on best practices. Another conference session covers work by the mHealth Regulatory Coalition.

Wireless networks

This single topic is complex and important enough to earn several sessions, including security and the use of the 5 GHz band. No surprise that so much attention is being devoted to wireless networking: it's the preferred medium for connecting devices, but its reliability is hard to guarantee and absolutely critical.

The conference ends with three workshops: one on making nurse response to alarms and calls more efficient, one on distributed antenna systems (good for cell phones and for ensuring public safety), and one on using open source libraries to build connectivity into devices. The last workshop is given by Shah and goes more deeply into the topic he presented in a talk at O'Reilly's Open Source convention and in an interview with me.

July 30 2011

Report from Open Source convention health track, 2011

Open source software in health care? It's limited to a few pockets of use--at least in the United States--but if you look at it a bit, you start to wonder why any health care institution uses any proprietary software at all.

What the evidence suggests

Take the conference session by University of Chicago researchers commissioned to produce a report for Congress on open source in health care. They found several open source packages that met the needs for electronic records at rural providers with few resources, such as safety-net providers.

They found that providers who adopted open source started to make the changes that adopting electronic health records (or any major new system) is supposed to bring about, but rarely does in proprietary settings.

  • They offer the kinds of extra attention to patients that improve their health, such as asking them questions about long-term health issues.

  • They coordinate care better between departments.

  • They have improved their workflows, saving a lot of money.

And incidentally, deployment of an open source EHR took an estimated 40% of the cost of deploying a proprietary one.

Not many clinics of the type examined--those in rural, low-income areas--have the time and money to install electronic records, and far fewer use open source ones. But the half-dozen examined by the Chicago team were clear success stories. They covered a variety of areas and populations, and three used WorldVistA while three used other EHRs.

Their recommendations are:

  • Greater coordination between open source EHR developers and communities, to explain what open source is and how such EHRs benefit providers.

  • Forming a Community of Practice on health centers using open source EHRs.

  • Greater involvement from the Federal Government, not to sponsor open source, but to make communities aware that it's an option.

Why do so few providers adopt open source EHRs? The team attributed the problem partly to prejudice against open source. But I picked up another, deeper concern from their talk. They said success in implementing open source EHRs depends on a "strong, visionary leadership team." As much as we admire health providers, teams like that are hard to form and consequently hard to find. But of course, any significant improvement in work processes would require such a team. What the study demonstrated is that it happens more in the environment of an open source product.

There are some caveats to keep in mind when considering these findings--some limitations to the study. First, the researchers had very little data about the costs of implementing proprietary health care systems, because the vendors won't allow customers to discuss it, and just two studies have been published. Second, the sample of open source projects was small, although the consistency of positive results was impressive. And the researchers started out sympathetic to open source. Despite the endorsement of open source represented by their findings, they recognized that it's harder to find open source and that all the beneficial customizations take time and money. During a Birds-of-a-Feather session later in the conference, many of us agreed that proprietary solutions are here for quite some time, and can benefit by incorporating open source components.

The study nevertheless remains important and deserves to be released to Congress and the public by the Department of Health and Human Services. There's no point to keeping it under wraps; the researchers are proceeding with phase 2 of the study with independent funding and are sure to release it.

So who uses open source?

It's nice to hear about open source projects (and we had presentations on several at last year's OSCon health care track) but the question on the ground is what it's like to actually put one in place. The implementation story we heard this year was from a team involving Roberts-Hoffman Software and Tolven.

Roberts-Hoffman is an OSCon success story. Last year the company received a contract from a small health care provider to complete a huge EHR project in a crazily short amount of time, including such big-ticket requirements as HIPAA compliance. Roberts-Hoffman knew little about open source, but surmised that the customization it permitted would let them meet their goal. CEO Vickie Hoffman therefore attended OSCon 2010, where she met a number of participants in the health care track (including me) and settled on Tolven as their provider.

The customer put some bumps in the road to the open source approach. For instance, they asked with some anxiety whether an open source product would expose their data. Hoffman had a little educating to do.

Another hurdle was finding a vendor to take medication orders. Luckily, Lexicomp was willing to work with a small provider and showed a desire to have an open source solution for providers. Roberts-Hoffman ended up developing a Tolven module using Lexicomp's API and contributing it back to Tolven. This proprietary/open source merger was generally quite successful, although it was extra work providing tests that someone could run without a Lexicomp license.

In addition to meeting what originally seemed an impossible schedule, Tolven allowed an unusual degree of customization through templating, and ensured the system would work with standard medical vocabularies.

Why can't you deliver my data?

After presentations on health information exchanges at OSCON, I started to ruminate about data delivery. My wife and I had some problems with appliances this past Spring and indulged in some purchases of common household items, a gas grill from one company and a washing machine from another. Each offered free delivery. So if low-margin department stores can deliver 100-pound appliances, why can't my doctor deliver my data to a specialist I'm referred to?

The CONNECT Gateway and the Direct project aim to solve that problem. CONNECT is the older solution, with Direct offering an easier-to-implement system that small health care providers will appreciate. Both have the goal of allowing health care providers to exchange patient data with each other, and with other necessary organizations such as public health agencies, in a secure manner.

David Riley, who directed the conversion of CONNECT to an open-source, community-driven project at the Office of the National Coordinator in the Department of Health and Human Services, kicked off OSCon's health care track by describing the latest developments. He had led off last year's health care track with a perspective on CONNECT delivered from his role in government, and he moved smoothly this time into covering the events of the past year as a private developer.

The open-source and community aspects certainly proved their value when a controversy and lawsuit over government contracts threatened to stop development on CONNECT. Although that's all been resolved now, Riley decided in the Spring to leave government and set up an independent non-profit foundation, Alembic, to guide CONNECT. The original developers moved over to Alembic, notably Brian Behlendorf, and a number of new companies and contributors came along. Most of the vendors who had started out on the ONC project stayed with the ONC, and were advised by Riley to do so until Alembic's course was firm.

Lots of foundations handle open source projects (Apache, etc.) but Riley and Behlendorf decided none of them were proper for a government-centric health care project. CONNECT demanded a unique blend of sensitivity to the health care field and experience dealing with government agencies, who have special contract rules and have trouble dealing with communities. For instance, government agencies are tasked by Congress with developing particular solutions in a particular time frame, and cannot cite as an excuse that some developer had to take time off to get a full-time job elsewhere.

Riley knows how to handle the myriad pressures of these projects, and has brought that expertise to Alembic. CONNECT software has been released and further developed under a BSD license as the Aurion project. Now that the ONC is back on track and is making changes of its own, the two projects are trying to heal the fork and are following each other's changes closely. Because Aurion has to handle sensitive personal data deftly, Riley hopes to generalize some of the software and create other projects for handling personal data.

Two Microsoft staff came to OSCon to describe Direct and the open source .NET libraries implementing it. It turned out that many in the audience were uninformed about Direct (despite an intense outreach effort by the ONC) and showed a good deal of confusion about it. So speakers Vaibhav Bhandari and Ali Emami spent the whole time allotted (and more) explaining Direct, with time for just a couple of slides pointing out what the .NET libraries can do.

Part of the problem is that security is broken down into several different functions in ONC's solution. Direct does not help you decide whether to trust the person you're sending data to (you need to establish a trust relationship through a third party that grants certificates) or find out where to send it (you need to know the correspondent's email address or another connection point). But two providers or other health care entities who make an agreement to share data can use Direct to do so over email or other upcoming interfaces.
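Under the hood, a Direct message is essentially S/MIME-protected email. As a rough sketch of just the signing step--not the .NET libraries the speakers presented--here is what it might look like with Python's cryptography package; the key, certificate, and payload file names are placeholders.

    # Sign a payload as S/MIME, the cryptographic core of a Direct-style
    # message. File names are placeholders; trust in the recipient still
    # comes from certificates issued by a third party, as noted above.
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.serialization import pkcs7
    from cryptography.x509 import load_pem_x509_certificate

    key = serialization.load_pem_private_key(
        open("sender_key.pem", "rb").read(), password=None)
    cert = load_pem_x509_certificate(open("sender_cert.pem", "rb").read())
    payload = open("patient_summary.xml", "rb").read()

    signed_mime = (
        pkcs7.PKCS7SignatureBuilder()
        .set_data(payload)
        .add_signer(cert, key, hashes.SHA256())
        .sign(serialization.Encoding.SMIME,
              [pkcs7.PKCS7Options.DetachedSignature])
    )
    open("signed_message.eml", "wb").write(signed_mime)
    # Encrypting to the recipient's certificate and sending over SMTP
    # would follow in a full Direct exchange.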

There was a lot of cynicism among attendees and speakers about whether government efforts, even with excellent protocols and libraries, can get doctors to offer patients and other doctors the necessary access to data. I think the reason I can get a big-box store to deliver an appliance but I can't get my doctor to deliver data is that the big-box store is part of a market, and therefore wants to please the customer. Despite all our talk of free markets in this country, health care is not a market. Instead, it's a grossly subsidized system where no one has choice. And it's not just the patients who suffer. Control is removed from the providers and payers as well.

The problem will be solved when patients start acting like customers and making appropriate demands. If you could say, "I'm not filling out those patient history forms one more time--you just get the information where I'm going," it might have an effect. More practically speaking, let's provide simple tools that let patients store their history on USB keys or some similar medium, so we can walk into a doctor's office and say "Here, load this up and you'll have everything you need."
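A tool like that doesn't have to be fancy. Here is a minimal sketch in Python that writes a patient history to a JSON file on removable media; the record structure and mount path are invented, and a real tool would export a standard format such as a CCD document.

    # Dump a patient summary to a USB key as JSON (structure invented).
    import json

    history = {
        "patient": "Jane Doe",
        "allergies": ["penicillin"],
        "medications": [{"name": "lisinopril", "dose_mg": 10}],
        "problems": ["hypertension"],
    }

    usb_path = "/media/usb/history.json"  # placeholder mount point
    with open(usb_path, "w") as f:
        json.dump(history, f, indent=2)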

What about you, now?

Patient control goes beyond data. It's really core to solving our crisis in health care and costs. A lot of sessions at OSCon covered things patients could do to take control of their health and their data, but most of them were assigned to the citizen health track (I mentioned them at the end of my preview article a week ago) and I couldn't attend them because they were concurrent with the health care track.

Eri Gentry delivered an inspiring keynote about her work in the biology start-up BioCurious, Karen Sandler (who had spoken in last year's health care track) scared us all with the importance of putting open source software in medical devices, and Fred Trotter gave a brief but riveting summary of the problems in health care. Fred also led a session on the Quantified Self, which was largely a discussion with the audience about ways we could encourage better behavior in ourselves and the public at large.

Guaranteed to cause meaningful change

I've already touched on the importance of changing how most health care institutions treat patients, and how open source can help. David Uhlman (who has written a book for O'Reilly with Fred Trotter) covered the complex topic of meaningful use, a phrase that appeared in the recovery act of 2009 and that drives just about all the change in current U.S. institutions. The term "meaningful use" implies that providers do more than install electronic systems; they use them in ways that benefit the patients, the institutions themselves, and the government agencies that depend on their data and treatments.

But Uhlman pointed out that doctors and health administrators--let alone the vendors of EHRs--focus on the incentive money and seem eager to do the minimum that gets them a payout. This is self-defeating, because the government will raise the requirements for meaningful use over the years, overwhelming quick-and-dirty implementations that fail to solve real problems. Of course, the health providers keep pushing the more stringent requirements back to later years, but they'll have to face the music someday. Perhaps the delay will be good for everyone in the long run, because it will give open source products a chance to demonstrate their value and make inroads where they are desperately needed.

As a crude incentive to install electronic records, meaningful use has been a big success. Before the recovery act was passed, 15%-20% of U.S. providers had EHRs. Now the figure is 60% or 70%, and by the end of 2012 it will probably be 90%. But it remains to be seen whether doctors use these systems to make better clinical decisions, follow up with patients so they comply with treatments, and eliminate waste.

Uhlman said that technology accounts for about 20% of the solution. The rest is workflow. For instance, every provider should talk to patients on every visit about central health concerns, such as hypertension and smoking. Research has suggested that this will add 30% more time per visit. If it reduces illness and hospital admissions, of course, we'll all end up paying less in taxes and insurance. His slogan: meaningful use is a payout for quality data.

It may be surprising--especially to an OSCon audience--that one of the biggest hurdles to achieving meaningful use is basic computer skills. We're talking here about typing information in correctly, knowing that you need to scroll down to look at all the information on the screen, and the like. All the institutions Uhlman visits think they're in fine shape and that everybody has the basic skills, but every examination he's done shows that 20%-30% of the staff are novices in computer use. And of course, facilities are loath to spend extra money to develop these skills.

Open source everywhere

Open source has image and marketing problems in the health care field, but solutions are emerging all over the place. Three open source systems right now are certified for meaningful use: ClearHealth (Uhlman's own product), CareVue from MedSphere, and WorldVistA. OpenEMR is likely to join them soon, having completed the testing phase. vxVistA is certified but may depend on some proprietary pieces (the status was unclear during the discussion).

Two other intriguing projects presented at OSCon this year were popHealth and Indivo X. I interviewed architects from Indivo X and popHealth before they came to speak at OSCon. I'll just say here that popHealth has two valuable functions. It helps providers improve quality by providing a simple web interface that makes it easy for them to view and compare their quality measures (for instance, whether they offered appropriate treatment for overweight patients). Additionally, popHealth saves a huge amount of tedious manual effort by letting them automatically generate reports about these measures for government agencies. Indivo fills the highly valued space of personal health records. It is highly modular, permitting new data sources and apps to be added; in fact, speaker Daniel Haas wants it to be an "app store" for medical applications. Both projects use modern languages, frameworks, and databases, facilitating adoption and use.
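To give a feel for what a quality measure amounts to computationally, here is a simple sketch in Python: the share of adult patients with an elevated BMI who have a documented follow-up plan. The records and field names are invented, and real measures, like those popHealth implements, are specified in far more clinical detail.

    # A toy quality measure: adults with BMI >= 25 who have a follow-up
    # plan documented. Records and thresholds are illustrative only.
    patients = [
        {"age": 52, "bmi": 31.4, "followup_plan": True},
        {"age": 44, "bmi": 27.0, "followup_plan": False},
        {"age": 30, "bmi": 22.5, "followup_plan": False},
    ]

    denominator = [p for p in patients if p["age"] >= 18 and p["bmi"] >= 25]
    numerator = [p for p in denominator if p["followup_plan"]]
    print("measure: %d/%d = %.0f%%" % (
        len(numerator), len(denominator),
        100.0 * len(numerator) / len(denominator)))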

Other health care track sessions

An excellent and stimulating track was rounded out with several other talks.

Shahid Shah delivered a talk on connecting medical devices to electronic record systems. He adroitly showed how the data collected from these devices is the most timely and accurate data we can get (better than direct reports from patients or doctors, and faster than labs), but we currently let it slip away from us. He also went over standard pieces of the open source stacks that facilitate the connection of devices, talked a bit about regulations, and discussed the role of routine engineering practices such as risk assessments and simulations.

Continuing on the quality theme, David Richards mentioned some lessons he learned designing a clinical decision support system. It's a demanding discipline. Accuracy is critical, but results must be available quickly so the doctor can use them to make decisions during the patient visit. Furthermore, the suggestions returned must be clear and precise.

Charlie Quinn talked about the collection of genetic information to achieve earlier diagnoses of serious conditions. I could not attend his talk because I was needed at another last-minute meeting, but I sat down for a while with him later.

The motto at his Benaroya Research Institute is to have diagnosis be more science, less art. With three drops of blood, they can do a range of tests on patients suspected of having particular health conditions. Genomic information in the blood can tell a lot about health, because blood contains viruses and other genomic material besides the patient's own genes.

Tests can compare the patients to each other and to a healthy population, narrowing down comparisons by age, race, and other demographics. As an example, the institute took samples before a vaccine was administered, and then at several frequent intervals in the month afterward. They could tell when the vaccine had the most powerful effect on the body.

The open source connection here is the institute's desire to share data among multiple institutions so that more patients can be compared and more correlations can be made. Quinn said it's hard to get institutions to open up their data.

All in all, I was energized by the health care track this year, and really impressed with the knowledge and commitment of the people I met. Audience questions were well-informed and contributed a lot to the presentations. OSCon shows that open source health care, although it hasn't broken into the mainstream yet, already inspires a passionate and highly competent community.

July 22 2011

Preview of OSCON's health care track

The success of our health care track at the O'Reilly Open Source convention last year (which I covered in a series of blogs) called for a follow-up. This year we offer another impressive line-up. In fact, we had to turn away several interesting presenters, some of whom I am following up with for separate interviews or work projects. This year we're looking more at what you — patients, clinicians, and researchers — can do with the data you collect, while we continue our coverage of critical IT parts of the health care system.

The health care sessions are tucked away in a little-trafficked area of the Oregon Convention Center. To get to the room, you have to come down to the ground level and walk all the way around, away from the registration area, to the part of the building facing the Boulevard. I'm writing this blog to encourage more OSCON attendees to take the steps there.

Open source and health care go together like alkyls and hydroxyls. Open source software offers greater standards compliance, which helps institutions exchange critical data, and gives the extremely diverse field of health providers the flexibility they need to offer the kinds of interfaces and services they want.

And open source software is starting to take its rightful place in the health care field. The people that the federal government put in charge of driving improvements in health care — the Office of the National Coordinator in the Department of Health and Human Services — know the benefits of open source well, and have instigated many such projects. OSCON highlights some of the best-known, as well as some valuable ones that most people have never heard of.

Alison Muckle and Jason Goldwater will report on the state of open source software in health care. Under contract to the ONC, these researchers studied several systems and found they could provide the features needed by health care providers at a low cost.

Health IT at OSCON 2011 — The conjunction of open source and open data with health technology promises to improve creaking infrastructure and give greater control and engagement for patients. These topics will be explored in the health care track at OSCON (July 25-29 in Portland, Ore.)

Save 20% on registration with the code OS11RAD

The CONNECT Gateway was an existing government project that the ONC decided to make open source. It cut through the proprietary mess in health information exchanges that has been holding doctors back from sharing information on patients for years. Instead of individual translations from one proprietary record system to another (leading to N² translation procedures among N systems--ten record systems would need on the order of ninety pairwise translators, versus ten adapters to a common protocol), CONNECT provides a standard protocol that vendors are adopting.

Drawing on the expertise in open source communities demonstrated by Brian Behlendorf (of Apache fame) and David Riley, the ONC built a robust community around CONNECT, including both commercial entities and individuals who care about health care. Behlendorf and Riley spun out the non-profit Alembic Foundation to coordinate further community efforts, and Riley is coming to the health care track to describe where the project is going.

Although CONNECT is going to turn up more and more in health information exchanges, it is SOAP-based and heavyweight for casual exchanges among small medical practices. The rural one-physician practice needs to exchange data as much as the hospitals that use CONNECT do, and needs to meet the same requirements for privacy. The Direct project provides secure email and other simple channels between facilities that trust each other.

Microsoft provided an open source .NET library for creating tools that use Direct, and its many facets will be covered by Vaibhav Bhandari and Ali Emami.

As Bhandari's and Emami's talk shows, implementation is a big part of the job of making standards and open source software work. Three representatives of open source projects will discuss their collaboration on an open source health project.

Among the lesser-known beneficiaries of ONC funding is the popHealth project, run by MITRE. The vision behind this project is improving the quality of health care, which requires collecting data from each health care provider. When providers know they're being measured for quality, they do more of the right things like encouraging patients to come in for follow-up care. And when we know where we did well and not so well, we can apply resources to raise up the laggards.

So the health care reform in the stimulus bill, in calling for "meaningful use" of provider data, gives the providers incentives to send a range of data to the government, such as how many smokers and diabetics they treat. popHealth hooks into electronic health records and makes it easy to generate reports on standard measures of quality. It also provides simple Web interfaces where providers can check their data, and this encourages them to track quality themselves. Programmer Andrew Gregorowicz will speak about popHealth at OSCON. (I interviewed Gregorowicz about his presentation.)

Another intriguing presentation can be expected from David Uhlman, who covers not only the use of open source for meeting meaningful use requirements, but how providers can use open source tools to manage the data.

David Richards also covers ways to use data for planning and quality improvement, a practice generally known as clinical decision support.

Patients can collect and use data as well, as explained by Fred Trotter in his talk on the quantified self and software.

Personal health records let patients keep track of their health information, but as the recent demise of Google Health shows, it's hard to make a system that ordinary people find useful. Institutions can use the open source Indivo system to give clients access to PHRs. Its API and uses will be described in a talk by chief architect Daniel Haas. (I interviewed Haas about his presentation.)

Another key link in the chain of patient data is the increasing number of devices that measure blood pressure, glucose levels, and other vital signs. Shahid Shah explains why high-quality treatment depends on connecting these devices to health records. (I interviewed Shah about his presentation.)

Charlie Quinn addresses himself to researchers trying to tame the flood of data that will be generated by meaningful use and other international sources.

Several sessions in other tracks are also related to health care.

There's so much happening this year at OSCON that I wish I were multi-threaded. (But, actually, threads are on the way out this year — didya hear? Asynchronous callbacks are in.) I hope some of you can make time, by cloning yourselves if necessary, and join us at some of the talks in this blog.





July 21 2011

OSCON Preview: Interview with Eri Gentry on a biologist's coffeehouse

BioCurious is a Silicon Valley gathering place for biologists and other people, such as artists, who are fascinated by biology. It serves as a venue for learning and sharing, and as an incubator for products and ideas. In this interview, community manager Eri Gentry talks about who supports BioCurious, what goes on there, and adventures in synthetic biology and art.
