September 13 2012

Growth of SMART health care apps may be slow, but inevitable

This week has been teeming with health care conferences, particularly in Boston, and was declared by President Obama to be National Health IT Week as well. I chose to spend my time at the second ITdotHealth conference, where I enjoyed many intense conversations with some of the leaders in the health care field, along with news about the SMART Platform at the center of the conference, the excitement of a Clayton Christensen talk, and the general panache of hanging out at the Harvard Medical School.

SMART, funded by the Office of the National Coordinator in Health and Human Services, is an attempt to slice through the Babel of EHR formats that prevent useful applications from being developed for patient data. Imagine if something like the wealth of mash-ups built on Google Maps (crime sites, disaster markers, restaurant locations) existed for your own health data. This is what SMART hopes to do. They can already showcase some working apps, such as overviews of patient data for doctors, and a real-life implementation of the heart disease user interface proposed by David McCandless in WIRED magazine.

The premise and promise of SMART

At this conference, the presentation that gave me the most far-reaching sense of what SMART can do was by Nich Wattanasin, project manager for i2b2 at Partners. His implementation showed SMART not just as an enabler of individual apps, but as an environment where a user could choose the proper app for his immediate needs. For instance, a doctor could use an app to search for patients in the database matching certain characteristics, then select a particular patient and choose an app that exposes certain clinical information on that patient. In this way, SMART can combine the power of many different apps that had been developed in an uncoordinated fashion, and make a comprehensive data analysis platform from them.

Another illustration of the value of SMART came from lead architect Josh Mandel. He pointed out that knowing a child’s blood pressure means little until one runs it through a formula based on the child’s height and age. Current EHRs can show you the blood pressure reading, but none does the calculation that shows you whether it’s normal or dangerous. A SMART app has been developed to do that. (Another speaker claimed that current EHRs in general neglect the special requirements of child patients.)
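
To make the calculation concrete, here is a minimal Python sketch of the kind of lookup such an app performs. The thresholds in it are placeholders rather than real clinical reference values; an actual app would consult the full pediatric blood pressure tables, which are keyed by sex, age, and height percentile.

    # Placeholder thresholds for illustration only -- NOT clinical reference values.
    # A real app would use the full pediatric reference tables (sex, age, height percentile).
    PLACEHOLDER_95TH_SYSTOLIC = {
        (8, "short"): 110,
        (8, "average"): 113,
        (8, "tall"): 116,
    }

    def bp_flag(systolic, age_years, height_band):
        """Return a rough label for a child's systolic blood pressure reading."""
        threshold = PLACEHOLDER_95TH_SYSTOLIC.get((age_years, height_band))
        if threshold is None:
            return "no reference data"
        return "elevated" if systolic >= threshold else "within expected range"

    print(bp_flag(118, 8, "average"))   # "elevated" under these placeholder numbers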

SMART is a close companion to the Indivo patient health record. Both of these, along with the i2b2 data exchange system, were covered in an article from an earlier conference at the medical school. Let’s see where platforms for health apps are headed.

How far we’ve come

As I mentioned, this ITdotHealth conference was the second to be held. The first took place in September 2009, and people following health care closely can be encouraged by reading the notes from that earlier instantiation of the discussion.

In September 2009, the HITECH act (part of the American Recovery and Reinvestment Act) had defined the concept of “meaningful use,” but nobody really knew what was expected of health care providers, because the ONC and the Centers for Medicare & Medicaid Services did not release their final Stage 1 rules until more than a year after this conference. Aneesh Chopra, then the Federal CTO, and Todd Park, then the CTO of Health and Human Services, spoke at the conference, but their discussion of health care reform was a “vision.” A surprisingly strong statement for patient access to health records was made, but speakers expected it to be accomplished through the CONNECT Gateway, because there was no Direct. (The first message I could find on the Direct Project forum dated back to November 25, 2009.) Participants had a sophisticated view of EHRs as platforms for applications, but SMART was just a “conceptual framework.”

So in some ways, ONC, Harvard, and many other contributors to modern health care have accomplished an admirable amount over three short years. But in some ways we are frustratingly stuck. For instance, few EHR vendors offer API access to patient records, and existing APIs are proprietary. The only SMART implementation for a commercial EHR mentioned at this week’s conference was one created on top of the Cerner API by outsiders (although Cerner was cooperative). Jim Hansen of Dossia told me that there is little point in encouraging programmers to create SMART apps while the records are still behind firewalls.

Keynotes

I couldn’t call a report on ITdotHealth complete without an account of the two keynotes by Christensen and Eric Horvitz, although these took off in different directions from the rest of the conference and served as hints of future developments.

Christensen is still adding new twists to the theories laid out in The Innovator’s Dilemma and other books. He has been a backer of the SMART project from the start and spoke at the first ITdotHealth conference. Consistent with his famous theory of disruption, he dismisses hopes that we can reduce costs by reforming the current system of hospitals and clinics. Instead, he projects the way forward through technologies that will enable less trained experts to successively take over tasks that used to be performed in more high-cost settings. Thus, nurse practitioners will be able to do more and more of what doctors do, primary care physicians will do more of what we currently delegate to specialists, and ultimately the patients and their families will treat themselves.

He also has a theory about the progression toward openness. Radically new technologies start out tightly integrated, and because they benefit from this integration they tend to be created by proprietary companies with high profit margins. As the industry comes to understand the products better, they move toward modular, open standards and become commoditized. Although one might conclude that EHRs, which have been around for some forty years, are overripe for open solutions, I’m not sure we’re ready for that yet. That’s because the problems the health care field needs to solve are quite different from the ones current EHRs solve. SMART is an open solution all around, but it could serve a marketplace of proprietary solutions and reward some of the venture capitalists pushing health care apps.

While Christensen laid out the broad environment for change in health care, Horvitz gave us a glimpse of what he hopes the practice of medicine will be in a few years. A distinguished scientist at Microsoft, Horvitz has been using machine learning to extract patterns in sets of patient data. For instance, in a collection of data about equipment uses, ICD codes, vital signs, etc. from 300,000 emergency room visits, they found some variables that predicted a readmission within 14 days. Out of 10,000 variables, they found 500 that were relevant, but because the relational database was strained by retrieving so much data, they reduced the set to 23 variables to roll out as a product.
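
As an illustration of that winnowing process (not Horvitz's actual pipeline), here is a sketch in Python using scikit-learn: an L1-penalized logistic regression on synthetic visit data drives most coefficients to zero, leaving a small subset of variables that carry the predictive signal.

    # Synthetic example of variable winnowing for readmission prediction;
    # the data is random and this is not the Microsoft team's actual method.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.feature_selection import SelectFromModel

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 1000))    # 5,000 visits, 1,000 candidate variables
    y = (X[:, :20].sum(axis=1) + rng.normal(size=5000) > 0).astype(int)  # readmitted?

    # The L1 penalty zeroes out most coefficients, keeping only informative variables.
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.05).fit(X, y)
    kept = SelectFromModel(model, prefit=True).get_support()
    print("variables kept:", kept.sum())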

Another project predicted the likelihood of medical errors from patient states and management actions. This was meant to address a study claiming that most medical errors go unreported.

A study that would make the privacy-conscious squirm was based on the willingness of individuals to provide location data to researchers. The researchers tracked searches on Bing along with visits to hospitals and found out how long it took between searching for information on a health condition and actually going to do something about it. (Horvitz assured us that personally identifiable information was stripped out.)

His goal is to go beyond measuring known variables and to find new ones that could be hidden causes. But he warned that, as is often the case, causality is hard to prove.

As prediction turns up patterns, the data could become a “fabric” on which many different apps are based. Although Horvitz didn’t talk about combining data sets from different researchers, it’s clearly suggested by this progression. But proper de-identification and flexible patient consent become necessities for data combination. Horvitz also hopes to move from predictions to decisions, which he says is needed to truly move to evidence-based health care.

Did the conference promote more application development?

My impression (I have to admit I didn’t check with Dr. Ken Mandl, the organizer of the conference) was that this ITdotHealth aimed to persuade more people to write SMART apps, provide platforms that expose data through SMART, and contribute to the SMART project in general. I saw a few potential app developers at the conference, and a good number of people with their hands on data who were considering the use of SMART. I think they came away favorably impressed–maybe by the presentations, maybe by conversations that the meeting allowed them to have with SMART developers–so we may see SMART in wider use soon. Participants came far for the conference; I talked to one from Geneva, for instance.

The presentations were honest enough, though, to show that SMART development is not for the faint-hearted. On the supply side–that is, for people who have patient data and want to expose it–you have to create a “container” that presents data in the format expected by SMART. Furthermore, you must make sure the data conforms to industry standards, such as SNOMED for diagnoses. This could be a lot of conversion.

On the application side, you may have to deal with SMART’s penchant for Semantic Web technologies such as OWL and SPARQL. This will scare away a number of developers. However, speakers who presented SMART apps at the conference said development was fairly easy. No one matched the developer who said their app was ported in two days (most of which were spent reading the documentation) but development times could usually be measured in months.

Mandl spent some time airing the idea of a consortium to direct SMART. It could offer conformance tests (but probably not certification, which is a heavy-weight endeavor) and interact with the ONC and standards bodies.

After attending two conferences on SMART, I’ve got the impression that one of its most powerful concepts is that of an “app store for health care applications.” But correspondingly, one of the main sticking points is the difficulty of developing such an app store. No one seems to be taking it on. Perhaps SMART adoption is still at too early a stage.

Once again, we are banging our heads against the walls erected by EHRs to keep data from being extracted for useful analysis. And behind this stands the resistance of providers, the users of EHRs, to give their data to their patients or to researchers. This theme dominated a federal government conference on patient access.

I think SMART will be more widely adopted over time because it is the only useful standard for exposing patient data to applications, and innovation in health care demands these apps. Accountable Care Organizations, smarter clinical trials (I met two representatives of pharmaceutical companies at the conference), and other advances in health care require data crunching, so those apps need to be written. And that’s why people came from as far as Geneva to check out SMART–there’s nowhere else to find what they need. The technical requirements to understand SMART seem to be within the developers’ grasps.

But a formidable phalanx of resistance remains, from those who don’t see the value of data to those who want to stick to limited exchange formats such as CCDs. And as Sean Nolan of Microsoft pointed out, one doesn’t get very far unless the app can fit into a doctor’s existing workflow. Privacy issues were also raised at the conference, because patient fears could stymie attempts at sharing. Given all these impediments, the government is doing what it can; perhaps the marketplace will step in to reward those who choose a flexible software platform for innovation.

June 21 2012

Clinician, researcher, and patients working together: progress aired at Indivo conference

While thousands of health care professionals were flocking to the BIO International Convention this week, I spent Monday in a small library at the Harvard Medical School listening to a discussion of the Indivo patient health record and related open source projects with about 80 intensely committed followers. Lead Indivo architect Daniel Haas, whom I interviewed a year ago, succeeded in getting the historic 2.0 release of Indivo out on the day of the conference. This article explains the significance of the release in the health care field and the promise of the work being done at Harvard Medical School and its collaborators.

Although still at the early adoption stages, Indivo and the related SMART and i2b2 projects merit attention and have received impressive backing. The Office of the National Coordinator funded SMART, and NIH funded i2b2. National Coordinator Farzad Mostashari was scheduled to attend Monday's conference (although he ended up having to speak over a video hookup). Indivo inspired both Microsoft HealthVault and Google Health, and a good deal of its code underlies HealthVault. Australia has undertaken a nationwide PHR initiative inspired by Indivo. A Partners HealthCare representative spoke at the conference, as did someone from the MIT Media Lab. Clayton M. Christensen et al. cited Indivo as a good model in The Innovator's Prescription: A Disruptive Solution for Health Care. Let's take a look at what makes the combination so powerful.

Platform and reference implementation

The philosophy underlying this distributed open source initiative is to get clinicians, health researchers, and patients to share data and work together. Today, patient data is locked up in thousands of individual doctors' offices and hospital repositories; whether the records are paper or electronic hardly makes a difference because they can't be combined or queried. The patient usually can't see his own data, as I described in an earlier posting, much less offer it to researchers. Dr. Kenneth Mandl, opening the conference, pointed out that currently, an innovative company in the field of health data will die on the vine because it can't get data without making deals with each individual institution and supporting its proprietary EHR.

The starting point for changing all that, so far as this conference goes, is the SMART platform. It simply provides data models for storing data and APIs to retrieve it. If an electronic health record can translate data into a simple RDF model and support the RESTful API, any other program or EHR that supports SMART can access the data. OAuth supports security and patient control over access.
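
For a sense of what this looks like to a programmer, here is a rough Python sketch of pulling medications out of a SMART container. The base URL and record ID are invented, a real container would expect OAuth-signed requests, and the RDF predicates follow my reading of the SMART vocabulary, so treat the details as illustrative.

    # Illustrative only: hypothetical container URL and record ID, and a real
    # SMART container would require OAuth-signed requests (omitted here).
    import requests
    from rdflib import Graph

    BASE = "https://smart-container.example.org"
    RECORD = "1234"

    resp = requests.get(f"{BASE}/records/{RECORD}/medications/")
    graph = Graph().parse(data=resp.text, format="xml")   # SMART served RDF/XML

    query = """
    PREFIX sp: <http://smartplatforms.org/terms#>
    PREFIX dcterms: <http://purl.org/dc/terms/>
    SELECT ?name WHERE {
        ?med a sp:Medication ;
             sp:drugName ?coded .
        ?coded dcterms:title ?name .
    }
    """
    for row in graph.query(query):
        print(row.name)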

Indivo is a patient health record (or, to use the term preferred by the conference speakers, a personally controlled health record). It used to have its own API, and the big significance of Monday's 2.0 release is that it now supports SMART. The RESTful interface will make Indivo easy to extend beyond its current Java and Python interfaces. So there's a far-reaching platform now for giving patients access to data and working seamlessly with other cooperating institutions.

The big missing piece is apps, and a hackathon on Tuesday (which I couldn't attend) was aimed at jump-starting a few. Already, a number of researchers are using SMART to coordinate data sharing and computation through the i2b2 platform developed by Partners. Ultimately, the SMART and Indivo developers hope to create an app store, inspired by Apple's, where a whole marketplace can develop. Any app written to the SMART standard can run in Indivo or any other system supporting SMART. The concept of an app in SMART and Indivo is different from a consumer market, though. The administrator of the EHR or PHR would choose apps, vetting them for quality and safety, and then a doctor, researcher, or patient could use one of the chosen apps.

Shawn Murphy of Partners described the use of i2b2 to choose appropriate patients for a clinical study. Instead of having to manually check many different data repositories for patients meeting the requirements (genetic, etc.), a researcher could issue automated queries over SMART to the databases. The standard also supports teamwork across institutions. Currently, 60 different children's hospitals' registries talk to each other through i2b2.

It should be noted that i2b2 does not write into a vendor's EHR system (which the ONC and many others call an important requirement for health information exchange) because putting data back into a silo isn't disruptive innovation. It's better to give patients a SMART-compatible PHR such as Indivo.

Regarding Tuesday's hackathon, Haas wrote me, "By the end of the day, we had several interesting projects in the works, including an app to do contextualized search based on a patient's Problems list (integration with google.com and MedlinePlus), and app integration with BodyTrack, which displays Indivo labs data in time-series form alongside data from several other open API inputs, such as Fitbit and Zeo devices."

Standards keep things simple

All the projects mentioned are low-budget efforts, so they all borrow and repurpose whatever open source tools they can. As Mostashari said in his video keynote, they believe in "using what you've got." I have already mentioned SMART's dependence on standards, and Indivo is just as beholden to other projects, particularly Django. For instance, Indivo allows data to be stored in Django's data models (Python structures that represent basic relational tables). Indivo also provides an even simpler JSON-based data model.
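
As a concrete (and purely illustrative) example of what such a data model might look like, here is a Django model for a lab result alongside the kind of simple JSON document an app could submit instead. The field names are mine, not Indivo's actual schema.

    # In a Django app's models.py -- field names are illustrative, not Indivo's schema.
    from django.db import models

    class LabResult(models.Model):
        record_id = models.CharField(max_length=64)       # owning patient record
        loinc_code = models.CharField(max_length=16)      # e.g. "2345-7" for glucose
        value = models.DecimalField(max_digits=10, decimal_places=3)
        unit = models.CharField(max_length=16)
        collected_at = models.DateTimeField()

    # Roughly equivalent data expressed in the simpler JSON style:
    lab_json = {
        "model": "LabResult",
        "loinc_code": "2345-7",
        "value": "5.4",
        "unit": "mmol/L",
        "collected_at": "2012-06-18T09:30:00Z",
    }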

The format of data is just as important as the exchange protocol, if interoperability is to succeed. The SMART team chose to implement several "best-of-breed" standards that would cover 80% of use cases: for instance, SNOMED for medical conditions, RxNORM for medications, and LOINC for labs. Customers using other terminologies will have to translate them into the supported standards, so SMART contains provenance fields indicating the data source.

The software is also rigorously designed to be modular, so both the original developers and other adopters can replace pieces as desired. Indivo already has plenty of fields about patient data and about context (provider names, etc.), but more can be added ad infinitum to support any health app that comes along. Indivo 2.0 includes pluggable data models, which allow a site to customize every step from taking in data to removing it. It also supports new schemas for data of any chosen type.

The simplicity of Indivo, SMART, and i2b2--so much in contrast with most existing health information exchanges--is reminiscent of Blue Button. Mandl suggested that a Blue Button app would be easy to write. But the difference is that Blue Button aimed to be user-friendly whereas the projects at this conference are developer-friendly. That means they can add some simple structure and leave it up to app developers to present the data to users in a friendly manner.

The last hurdle

Because SMART and Indivo ultimately want the patient to control access to data, trust is a prerequisite. OAuth is widely used by Twitter apps and other sites across the Web, but hasn't been extensively tested in a health care environment. We'll need more experience with OAuth to see whether the user experience and users' sense of security are adequate. And after that, trust is up to the institutions adopting Indivo or SMART. A couple of speakers pointed out that huge numbers of people trust mint.com with their passwords to financial accounts, so once patients learn the benefits of access to their records, they should be willing to adopt Indivo as well. An Indivo study found that 84% of people are willing to share data with social networks for research and learning.
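
In practice, the authorization step looks roughly like the following Python sketch, which uses OAuth 1.0a (the flavor in use at the time) through the requests_oauthlib library. The keys, tokens, and URL are placeholders; the point is that the patient- or administrator-granted token is what scopes the app to a single record and can be revoked.

    # Placeholders throughout: keys, tokens, and URL are invented for illustration.
    from requests_oauthlib import OAuth1Session

    session = OAuth1Session(
        client_key="my-app-key",
        client_secret="my-app-secret",
        resource_owner_key="token-granted-for-one-record",
        resource_owner_secret="token-secret",
    )

    # Every request is signed; revoking the token above cuts off the app's access.
    resp = session.get("https://indivo.example.org/records/1234/documents/")
    print(resp.status_code)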

SMART, Indivo, and i2b2 make data sharing easier than ever, but as many have pointed out, none of this will get very far until patients, government, and others demand that institutions open up. Mandl suggested that one of the major reasons Google Health failed was that it could never get enough data to gain traction--the health providers just wouldn't work with the PHR. At least the open source standards take away some of the technical excuses they have used up to now.

September 21 2011

David Blumenthal lauds incrementalism at forum on electronic health records

Anyone who follows health issues in the U.S. has to be obsessed with the workings of the Office of the National Coordinator (ONC). During the critical early phases of implementing HITECH and meaningful use, the National Coordinator himself was Dr. David Blumenthal, who came to speak yesterday in the Longwood medical area in Boston.

A long-time Bostonian who moved up from being a primary care physician, Blumenthal is now back at Mass General and Harvard Business School. Most of his speech yesterday was a summary of the reasoning behind meaningful use, but some off-the-cuff remarks at the end, as well as vigorous discussion during a following panel, provided some interesting perspectives. Best of all was hearing a lot of facts on the ground. These helped explain the difference between EHRs in theory and in practice.

Which comes first, electronic records or standard formats?

There were a lot of complaints at the forum about the lack of interoperability between electronic health records. Blumenthal declared twice that pushing doctors to adopt EHRs was a good idea because we have to have our information digitized before we can think of interchanging it. Coming from the perspective of having seen systems and standards develop--and having seen the mess that results from products out of sync with standards in areas ranging from CORBA to browsers--I disagree with this claim. Luckily, Blumenthal's actual work didn't match the simplistic "digitize first" approach. The ONC built some modest requirements for interoperability into the first stage of meaningful use and plans to ramp these requirements up quickly. Furthermore, they're engaging in intensive negotiations with industry players over EHR standards (see, for instance, my write-up of a presentation by John Halamka last May) and worked quite early on the ground-breaking CONNECT and Direct projects for information exchange.

I understand that an ideal standard can't be expected to spring from the head of Zeus. What perhaps the standards proponents should have worked on is a separation of formats from products. Most EHRs reflect an old-fashioned design that throws together data format, architecture, and user interface. Wouldn't it be great to start the formats off on their own course, and tell EHR vendors to design wonderful interfaces that are flexible enough to adapt to format changes, while competing on providing clinicians with the best possible interface and workflow support? (Poor workflow was another common complaint at last night's forum.) That's the goal of the Indivo project. I interviewed Daniel Haas from that project in June.

Incrementalism in EHRs: accepting imperfection

Perhaps Blumenthal's enthusiasm for putting electronic records in place and seeking interoperability later reflects a larger pragmatism he brought up several times yesterday. He praised the state of EHRs (pushing back against members of the audience with stories to tell of alienated patients and doctors quitting the field in frustration), pointing to a recent literature survey where 92% of studies found improved outcomes in patient care, cost control, or user satisfaction. And he said we would always be dissatisfied with EHRs because we compare them to some abstract ideal.

I don't think his assurances or the literature survey can assuage everyone's complaints. But his point that we should compare EHRs to paper is a good one. Several people pointed out that before EHRs, doctors simply lacked basic information when making decisions, such as what labs and scans the patient had a few months ago, or even what diagnosis a specialist had rendered. How can you complain that EHRs slow down workflow? Before EHRs there often was no workflow! Many critical decisions were stabs in the dark.

Too much content, too much discontent

Even so, it's clear that EHRs have to get better at sifting and presenting information. Perhaps even more important, clinicians have to learn how to use them better, so they can focus on the important information. One member of the audience said that after her institution adopted EHRs, discharge summaries went from 3 pages to 10 pages in average length. This is probably not a problem with EHRs, but with clinicians being lazy and taking advantage of the cut-and-paste function.

The computer was often described as a "third person in the room" during patient visits, and even, by panelist and primary care physician Gerard Coste, as a two-year-old who takes up everybody's attention. One panelist, law professor and patient representative Michael Meltsner, suggested that medical residents need to be trained about how to maintain a warm, personal atmosphere during an interview while looking up and entering data. Some people suggested that better devices for input and output (read: iPads) would help.

Blumenthal admitted that electronic records can increase workloads and slow doctors down. "I've said that the EHR made me a better doctor, but I didn't say it made me a faster one." He used this as a lead-in to his other major point during the evening, which is that EHRs have to be adopted in conjunction with an overhaul of our payment and reward system for doctors. He cited Kaiser Permanente (a favorite of health care reformers, even though doctors and patients in that system have their share of complaints) as a model because they look for ways to keep patients healthy with less treatment.

While increasing workloads, electronic records also raise patient expectations. Doctors are really on the hook for everything in the record, and have to act as if they know everything in it. Similar expectations apply to coordination of care. Head nurse Diane L Gilworth said, "Patients think we talk to each other much more than we do." The promise of EHRs and information interchange hasn't been realized.

New monitoring devices and the movement for a patient centered medical home will add even more data to the mix. I didn't ask a question during the session (because I felt it was for clinicians and they should be the ones to have their say), but if I could have posed a question, it would be this: one speaker reminded the audience that the doctor is liable for all the information in the patient's record. But the patient centered medical home requires the uploading of megabytes of data that is controlled by the patient, not the doctor. Doctors are reluctant to accept such data. How can we get the doctor and patient to collaborate to produce high-quality data, and do we need changes in regulations for that to happen?

A plea for an old-fashioned relationship

One theme bubbled up over and over at yesterday's meeting: the clinicians don't want to be dazzled by more technology. They just want more time to interview patients and a chance to understand them better. Their focus is not on meaningful use but on meaningful contact. If EHRs can give them and their patients that experience, EHRs are useful and will be adopted enthusiastically. If EHRs get in the way, they will be rejected or undermined. This was an appropriate theme for a panel organized by the Schwartz Center for Compassionate Healthcare.

That challenge is harder to deal with than interchange formats or better I/O devices. It's at the heart of complaints over workflow and many other things. But perhaps it should be at the top of the EHR vendors' agendas.

July 30 2011

Report from Open Source convention health track, 2011

Open source software in health care? It's limited to a few pockets of use--at least in the United States--but if you look at it a bit, you start to wonder why any health care institution uses any proprietary software at all.

What the evidence suggests

Take the conference session by University of Chicago researchers commissioned to produce a report for Congress on open source in health care. They found several open source packages that met the needs for electronic records at rural providers with few resources, such as safety-net providers.

They found that providers who adopted open source started to make the changes that the adoption of electronic health records (or any major new system) is supposed to bring about, but rarely does in proprietary health settings.

  • They offer the kinds of extra attention to patients that improve their health, such as asking them questions about long-term health issues.

  • They coordinate care better between departments.

  • They have improved their workflows, saving a lot of money.

And incidentally, deployment of an open source EHR took an estimated 40% of the cost of deploying a proprietary one.

Not many clinics of the type examined--those in rural, low-income areas--have the time and money to install electronic records, and far fewer use open source ones. But the half-dozen examined by the Chicago team were clear success stories. They covered a variety of areas and populations, and three used WorldVistA while three used other EHRs.

Their recommendations are:

  • Greater coordination between open source EHR developers and communities, to explain what open source is and how it benefits providers.

  • Forming a Community of Practice on health centers using open source EHRs.

  • Greater involvement from the Federal Government, not to sponsor open source, but to make communities aware that it's an option.

Why do so few providers adopt open source EHRs? The team attributed the problem partly to prejudice against open source. But I picked up another, deeper concern from their talk. They said success in implementing open source EHRs depends on a "strong, visionary leadership team." As much as we admire health providers, teams like that are hard to form and consequently hard to find. But of course, any significant improvement in work processes would require such a team. What the study demonstrated is that it happens more in the environment of an open source product.

There are some caveats to keep in mind when considering these findings--some limitations to the study. First, the researchers had very little data about the costs of implementing proprietary health care systems, because the vendors won't allow customers to discuss it, and just two studies have been published. Second, the sample of open source projects was small, although the consistency of positive results was impressive. And the researchers started out sympathetic to open source. Despite the endorsement of open source represented by their findings, they recognized that it's harder to find open source and that all the beneficial customizations take time and money. During a Birds-of-a-Feather session later in the conference, many of us agreed that proprietary solutions are here for quite some time, and can benefit by incorporating open source components.

The study nevertheless remains important and deserves to be released to Congress and the public by the Department of Health and Human Services. There's no point to keeping it under wraps; the researchers are proceeding with phase 2 of the study with independent funding and are sure to release it.

So who uses open source?

It's nice to hear about open source projects (and we had presentations on several at last year's OSCon health care track) but the question on the ground is what it's like to actually put one in place. The implementation story we heard this year was from a team involving Roberts-Hoffman Software and Tolven.

Roberts-Hoffman is an OSCon success story. Last year they received a contract from a small health care provider to complete a huge EHR project in a crazily short amount of time, including such big-ticket requirements as meeting HIPAA requirements. Roberts-Hoffman knew little about open source, but surmised that the customization it permitted would let them achieve their goal. Roberts-Hoffman CEO Vickie Hoffman therefore attended OSCon 2010, where she met a number of participants in the health care track (including me) and settled on Tolven as their provider.

The customer put some bumps in the road to the open source approach. For instance, they asked with some anxiety whether an open source product would expose their data. Hoffman had a little educating to do.

Another hurdle was finding a vendor to take medication orders. Luckily, Lexicomp was willing to work with a small provider and showed a desire to have an open source solution for providers. Roberts-Hoffman ended up developing a Tolven module using Lexicomp's API and contributing it back to Tolven. This proprietary/open source merger was generally quite successful, although it was extra work providing tests that someone could run without a Lexicomp license.

In addition to meeting what originally seemed an impossible schedule, Tolven allowed an unusual degree of customization through templating, and ensured the system would work with standard medical vocabularies.

Why can't you deliver my data?

After presentations on health information exchanges at OSCON, I started to ruminate about data delivery. My wife and I had some problems with appliances this past Spring and indulged in some purchases of common household items, a gas grill from one company and a washing machine from another. Each offered free delivery. So if low-margin department stores can deliver 100-pound appliances, why can't my doctor deliver my data to a specialist I'm referred to?

The CONNECT Gateway and the Direct project will hopefully solve that problem. CONNECT is the older solution, with Direct offering an easier-to-implement system that small health care providers will appreciate. Both have the goal of allowing health care providers to exchange patient data with each other, and with other necessary organizations such as public health agencies, in a secure manner.

David Riley, who directed the conversion of CONNECT to an open-source, community-driven project at the Office of the National Coordinator in the Department of Health and Human Services, kicked off OSCon's health care track by describing the latest developments. He had led off last year's health care track with a perspective on CONNECT delivered from his role in government, and he moved smoothly this time into covering the events of the past year as a private developer.

The open-source and community aspects certainly proved their value when a controversy and lawsuit over government contracts threatened to stop development on CONNECT. Although that's all been resolved now, Riley decided in the Spring to leave government and set up an independent non-profit foundation, Alembic, to guide CONNECT. The original developers moved over to Alembic, notably Brian Behlendorf, and a number of new companies and contributors came along. Most of the vendors who had started out on the ONC project stayed with the ONC, and were advised by Riley to do so until Alembic's course was firm.

Lots of foundations handle open source projects (Apache, etc.) but Riley and Behlendorf decided none of them were proper for a government-centric health care project. CONNECT demanded a unique blend of sensitivity to the health care field and experience dealing with government agencies, who have special contract rules and have trouble dealing with communities. For instance, government agencies are tasked by Congress with developing particular solutions in a particular time frame, and cannot cite as an excuse that some developer had to take time off to get a full-time job elsewhere.

Riley knows how to handle the myriad pressures of these projects, and has brought that expertise to Alembic. CONNECT software has been released and further developed under a BSD license as the Aurion project. Now that the ONC is back on track and is making changes of its own, the two projects are trying to heal the fork and are following each other's changes closely. Because Aurion has to handle sensitive personal data deftly, Riley hopes to generalize some of the software and create other projects for handling personal data.

Two Microsoft staff came to OSCon to describe Direct and the open-source .NET libraries implementing it. It turned out that many in the audience were uninformed about Direct (despite an intense outreach effort by the ONC) and showed a good deal of confusion about it. So speakers Vaibhav Bhandari and Ali Emami spent the whole time allotted (and more) explaining Direct, with time for just a couple slides pointing out what the .NET libraries can do.

Part of the problem is that security is broken down into several different functions in ONC's solution. Direct does not help you decide whether to trust the person you're sending data to (you need to establish a trust relationship through a third party that grants certificates) or find out where to send it (you need to know the correspondent's email address or another connection point). But two providers or other health care entities who make an agreement to share data can use Direct to do so over email or other upcoming interfaces.
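
Concretely, that exchange over email relies on S/MIME: the sending system signs (and encrypts) the payload with certificates anchored in the parties' trust relationship, then sends the result as an ordinary message. Here is a minimal sketch of just the signing step using Python's cryptography package; the file names are placeholders, and a full Direct agent would also encrypt to the recipient's certificate and handle the SMTP transport.

    # Signing step only; a real Direct agent also encrypts to the recipient's
    # certificate and sends the message over SMTP. File names are placeholders.
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.serialization import pkcs7

    with open("sender_cert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    with open("sender_key.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    with open("referral_ccd.xml", "rb") as f:
        payload = f.read()

    signed_mime = (
        pkcs7.PKCS7SignatureBuilder()
        .set_data(payload)
        .add_signer(cert, key, hashes.SHA256())
        .sign(serialization.Encoding.SMIME, [pkcs7.PKCS7Options.DetachedSignature])
    )
    print(signed_mime[:80])   # a signed MIME body, ready to wrap in an email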

There was a lot of cynicism among attendees and speakers about whether government efforts, even with excellent protocols and libraries, can get doctors to offer patients and other doctors the necessary access to data. I think the reason I can get a big-box store to deliver an appliance but I can't get my doctor to deliver data is that the big-box store is part of a market, and therefore wants to please the customer. Despite all our talk of free markets in this country, health care is not a market. Instead, it's a grossly subsidized system where no one has choice. And it's not just the patients who suffer. Control is removed from the providers and payers as well.

The problem will be solved when patients start acting like customers and making appropriate demands. If you could say, "I'm not filling out those patient history forms one more time--you just get the information where I'm going," it might have an effect. More practically speaking, let's provide simple tools that let patients store their history on USB keys or some similar medium, so we can walk into a doctor's office and say "Here, load this up and you'll have everything you need."

What about you, now?

Patient control goes beyond data. It's really core to solving our crisis in health care and costs. A lot of sessions at OSCon covered things patients could do to take control of their health and their data, but most of them were assigned to the citizen health track (I mentioned them at the end of my preview article a week ago) and I couldn't attend them because they were concurrent with the health care track.

Eri Gentry delivered an inspiring keynote about her work in the biology start-up BioCurious, Karen Sandler (who had spoken in last year's health care track) scared us all with the importance of putting open source software in medical devices, and Fred Trotter gave a brief but riveting summary of the problems in health care. Fred also led a session on the Quantified Self, which was largely a discussion with the audience about ways we could encourage better behavior in ourselves and the public at large.

Guaranteed to cause meaningful change

I've already touched on the importance of changing how most health care institutions treat patients, and how open source can help. David Uhlman (who has written a book for O'Reilly with Fred Trotter) covered the complex topic of meaningful use, a phrase that appeared in the recovery act of 2009 and that drives just about all the change in current U.S. institutions. The term "meaningful use" implies that providers do more than install electronic systems; they use them in ways that benefit the patients, the institutions themselves, and the government agencies that depend on their data and treatments.

But Uhlman pointed out that doctors and health administrators--let alone the vendors of EHRs--focus on the incentive money and seem eager to do the minimum that gets them a payout. This is self-defeating, because the government will raise the requirements for meaningful use over the years, overwhelming quick-and-dirty implementations that fail to solve real problems. Of course, the health providers keep pushing back the more stringent requirements to later years, but they'll have to face the music someday. Perhaps the delay will be good for everyone in the long run, because it will give open source products a chance to demonstrate their value and make inroads where they are desperately needed.

As a crude incentive to install electronic records, meaningful use has been a big success. Before the recovery act was passed, 15%-20% of U.S. providers had EHRs. Now the figure is 60% or 70%, and by the end of 2012 it will probably be 90%. But it remains to be seen whether doctors use these systems to make better clinical decisions, follow up with patients so they comply with treatments, and eliminate waste.

Uhlman said that technology accounts for about 20% of the solution. The rest is workflow. For instance, every provider should talk to patients on every visit about central health concerns, such as hypertension and smoking. Research has suggested that this will add 30% more time per visit. If it reduces illness and hospital admissions, of course, we'll all end up paying less in taxes and insurance. His slogan: meaningful use is a payout for quality data.

It may be surprising--especially to an OSCon audience--that one of the biggest hurdles to achieving meaningful use is basic computer skills. We're talking here about typing information in correctly, knowing that you need to scroll down to look at all the information on the screen, and the like. All the institutions Uhlman visits think they're in fine shape and everybody has the basic skills, but every examination he's done proves that 20%-30% of the staff are novices in computer use. And of course, facilities are loath to spend extra money to develop these skills.

Open source everywhere

Open source has image and marketing problems in the health care field, but solutions are emerging all over the place. Three open source systems right now are certified for meaningful use: ClearHealth (Uhlman's own product), CareVue from MedSphere, and WorldVistA. OpenEMR is likely to join them soon, having completed the testing phase. vxVistA is certified but may depend on some proprietary pieces (the status was unclear during the discussion).

Two other intriguing projects presented at OSCon this year were popHealth and Indivo X. I interviewed architects from Indivo X and popHealth before they came to speak at OSCon. I'll just say here that popHealth has two valuable functions. It helps providers improve quality by providing a simple web interface that makes it easy for them to view and compare their quality measures (for instance, whether they offered appropriate treatment for overweight patients). Additionally, popHealth saves a huge amount of tedious manual effort by letting them automatically generate reports about these measures for government agencies. Indivo fills the highly valued space of personal health records. It is highly modular, permitting new data sources and apps to be added; in fact, speaker Daniel Haas wants it to be an "app store" for medical applications. Both projects use modern languages, frameworks, and databases, facilitating adoption and use.

Other health care track sessions

An excellent and stimulating track was rounded out with several other talks.

Shahid Shah delivered a talk on connecting medical devices to electronic record systems. He adroitly showed how the data collected from these devices is the most timely and accurate data we can get (better than direct reports from patients or doctors, and faster than labs), but we currently let it slip away from us. He also went over standard pieces of the open source stacks that facilitate the connection of devices, talked a bit about regulations, and discussed the role of routine engineering practices such as risk assessments and simulations.

Continuing on the quality theme, David Richards mentioned some lessons he learned designing a clinical decision support system. It's a demanding discipline. Accuracy is critical, but results must be available quickly so the doctor can use them to make decisions during the patient visit. Furthermore, the suggestions returned must be clear and precise.

Charlie Quinn talked about the collection of genetic information to achieve earlier diagnoses of serious conditions. I could not attend his talk because I was needed at another last-minute meeting, but I sat down for a while with him later.

The motto at his Benaroya Research Institute is to have diagnosis be more science, less art. With three drops of blood, they can do a range of tests on patients suspected of having particular health conditions. Genomic information in the blood can tell a lot about health, because blood contains viruses and other genomic material besides the patient's own genes.

Tests can compare the patients to each other and to a healthy population, narrowing down comparisons by age, race, and other demographics. As an example, the institute took samples before a vaccine was administered, and then at several frequent intervals in the month afterward. They could tell when the vaccine had the most powerful effect on the body.

The open source connection here is the institute's desire to share data among multiple institutions so that more patients can be compared and more correlations can be made. Quinn said it's hard to get institutions to open up their data.

All in all, I was energized by the health care track this year, and really impressed with the knowledge and commitment of the people I met. Audience questions were well-informed and contributed a lot to the presentations. OSCon shows that open source health care, although it hasn't broken into the mainstream yet, already inspires a passionate and highly competent community.

June 27 2011

Open source personal health record: no need to open Google Health

The news went out Friday that Google is shutting down Google Health. This portal was, along with Microsoft HealthVault (which is still going strong), the world's best-known place for people to store health information on themselves. Google Health and Microsoft HealthVault were widely cited as harbingers of a new zeal for taking control of one's body and becoming a partner with one's doctors in being responsible for health.

Great ideas, but hardly anybody uses these two services. Many more people use a PHR provided by their hospital or general practitioner, which is not quite the point of a PHR because you see many practitioners over the course of your life and your data ought to be integrated in one place where you can always get it.

Predictably, free software advocates say, "make Google Health open source!" This also misses the point. The unique attributes of cloud computing were covered in a series of articles I put up a few months ago. As I explain there, the source code for services such as Google Health is not that important. The key characteristic that makes Google Health and Microsoft HealthVault appealing is...that they are run by Google and Microsoft. Those companies were banking on the trust that the public has for large, well-endowed institutions to maintain a service. And Google's decision to shutter its Health service (quite reasonable because of its slow take-off) illustrates the weakness of such cloud services.

The real future of PHRs is already here in the form of open source projects that people can take in any direction they want. One is Indivo, whose lead architect I recently interviewed (video) and which is also covered in a useful blog about the end of Google Health by an author of mine, Fred Trotter.

Two other projects worth following are OpenMRS and Tolven (which includes a PHR). People are talking about extending the Department of Veterans Affairs' Blue Button. Trotter's HealtheVet (the software behind Blue Button) is also an open source PHR.

Whatever features a PHR may offer are overshadowed by the key ability to accept data in industry-standard formats and interact with a wide range of devices. A good piece of free software can be endlessly enhanced with these capabilities.

So in short, there are great projects that are already open source and worth contributing to and implementing. The question is still open of who is best suited to host the services. I'm not picking winners, but as we get more and more sensors, personal health monitors, and other devices emitting data about ourselves, the PHR will find a home.

March 23 2011

SMART challenge and P4: open source projects look toward the broader use of health records

In a country where doctors are still struggling to transfer basic patient information (such as continuity of care records) from one clinic to another, it may seem premature to think about seamless data exchange between a patient and multiple care organizations to support such things as real-time interventions in patient behavior and better clinical decision support. But this is precisely what medicine will need for the next breakthrough in making patients better and reducing costs. And many of the building blocks have recently fallen into place.

Two recent open source developments have noticed these opportunities and hope to create new ones from them. One is the SMART Apps for Health contest at Challenge.gov, based on the SMART Platform that is one of the darlings of Federal CTO Aneesh Chopra and other advocates for health care innovation. The other development is P4, the brainchild of a physician named Adrian Gropper who has recognized the importance of electronic records and made the leap into technology.

SMART challenge: Next steps for a quickly spreading open source API

I'm hoping the SMART Platform augurs the future of health IT: an open source project that proprietary vendors are rushing to adopt. The simple goal of SMART is to pull together health data from any appropriate source--labs, radiology, diagnoses, and even administrative information--and provide it in a common, well-documented, simple format so any programmer can write an app to process it. It's a sign of the mess electronic records have become over the years that this functionality hasn't emerged till now. And it's a sign of the tremendous strides health IT has made recently that SMART (and the building blocks on which it is based) has become so popular.

SMART has been released under the GPL, and is based on two other important open source projects: the INDIVO health record system and the I2B2 informatics system. Like INDIVO, the SMART project was largely developed by Children's Hospital Boston, and was presented at a meeting I attended today by Dr. Kenneth D. Mandl, a director of the Intelligent Health Laboratory at the hospital and at Harvard Medical School. SMART started out with the goal of providing a RESTful API into data. Not surprisingly, as Mandl reported, the team quickly found itself plunged into the task of developing standards for health-related data. Current standards either didn't apply to the data they were exposing or were inappropriate for the new uses to which they wanted to put it.

Health data is currently stored in a Babel of formats. Converting them all to a single pure information stream is hopeless; to make them available to research one must translate them on the fly to some universally recognized format. That's one of the goals of the report on health care released in December 2010 by the President's Council of Advisors on Science and Technology. SMART is developing software to do the translation and serve up data from whatever desired source in "containers." Applications can then query the containers through SMART's API to retrieve data and feed it to research and clinical needs.

Justifying SMART, Mandl presented solid principles of modern data processing that will be familiar to regular Radar readers:

Data as a platform

Storage should be as flexible and free of bias as possible, so that innovators can easily write new applications that do surprising and wonderful things with it. This principle contrasts starkly with most current health records, which make the data conform to a single original purpose and make it hard to extract the data for any other use, much less keep it clean enough for unanticipated uses. (Talk to doctors about how little the diagnoses they enter for billing purposes have to do with the actual treatments patients need.)

An "Appstore for health"

New applications should be welcome from any quarter. Mandl is hoping that apps will eventually cost just a few dollars, like a cell phone app. (Note to Apple: Mandl and the audience tended to use the terms "iPhone" and "Appstore" in a casual manner that slid from metaphors to generic terms for mobile devices and program repositories.) Mandl said that his team's evaluation of apps would be on the loose side, more like Android than iPhone, but that the environment would not be a "Wild West." At each hospital or clinic, IT staff could set up their own repositories of approved apps, and add custom-built ones.

A "learning health system"

Data should be the engine behind continuous improvement of our health care system. As Mandl said, "every patient should be an opportunity to learn."

Open source and open standards

As we've seen, standards are a prerequisite for data as a platform. Open source has done well for SMART and the platforms on which it is based. But the current challenge, notably, allows proprietary as well as open source submissions. This agnosticism about licensing is a common factor across Challenge.gov. Apparently the sponsors believe they will encourage more and better submissions by allowing the developers to keep control over the resulting code. But while most Challenge.gov rules require at least some kind of right to use the app, the SMART challenge is totally silent on rights. The danger, of course, is that the developers will get tired of maintaining an app or will add onerous features after it becomes popular.

An impressive list of electronic record vendors has promised support for SMART or integrated it into products in some way: Cerner, Siemens, Google, Microsoft, General Electric, and more. SMART seems to be on its way to a clean sweep of the electronic health care record industry. And one of its projects is aimed at the next frontier: integrating devices such as blood glucose readers into the system.

P4: Bringing patients into the health record and their own treatment

SMART is a widely championed collaboration among stellar institutions; P4 is the modest suggestion of a single doctor. But I'm including P4 in this blog because I think it's incredibly elegant. As you delve into it, the concept evolves from seeming quite clever to completely natural.

The project aims to create a lightweight communication system based on standards and open source software. Any device or application that the patient runs to record such things as blood pressure or mood could be hooked into the system. Furthermore, the patient would be able to share data with multiple care providers in a fine-grained way--just the cholesterol and blood pressure readings, for example, or just vaccination information. (This was another goal of the PCAST report mentioned in the previous section.)

Communicating medical records is such a central plank of health care reform that a division of Health and Human Services called the Office of the National Coordinator created two major open source projects with the help of electronic health record vendors: CONNECT and Direct. The latter is more lightweight, recently having released libraries that support the secure exchange of data over email.

Vendors will jump in now and produce systems they can sell to doctors for the exchange of continuity of care records. But Gropper wants the patients to have the same capabilities. To do that, he is linking up Direct with another open source project developed by the Markle Foundation for the Veterans Administration and Department of Defense: Blue Button.

Blue Button is a patient portal with a particularly simple interface. Log in to your account, press the button, and get a flat file in an easy-to-read format. Linked Data proponents grumble that the format is not structured enough, but like HTML it is simple to use and can be extended in the future.
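
Because the export is just labeled sections of plain text, even a toy parser can make it useful to an application. The sketch below assumes a hypothetical dashed-header convention for section names; the actual VA layout differs in its details.

    # Toy parser for a Blue Button-style flat file. The dashed section headers
    # are a hypothetical convention; the real VA export differs in its details.
    import re

    def split_sections(text):
        """Split a flat text export into {section name: body} on dashed headers."""
        sections = {"PREAMBLE": []}
        current = "PREAMBLE"
        for line in text.splitlines():
            header = re.match(r"^-+\s*(.+?)\s*-+$", line)   # e.g. "----- MEDICATIONS -----"
            if header:
                current = header.group(1).upper()
                sections[current] = []
            else:
                sections[current].append(line)
        return {name: "\n".join(body).strip() for name, body in sections.items()}

    sample = "----- MEDICATIONS -----\nLisinopril 10 mg daily\n----- ALLERGIES -----\nPenicillin"
    print(split_sections(sample)["MEDICATIONS"])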

Blue Button is currently only a one-way system, however. A veteran can look at his health data but can't upload new information. Nor can multiple providers share the data. P4 will fix all that by using a Direct interface to create two-way channels. If you are recovering from a broken leg and want to upload your range-of-motion progress every day, you will be able to do this (given that a format for the data is designed and universally recognized) with your orthopedic surgeon, your physical therapist, and your primary care provider. P4 will permit fine-grained access, so you can send out only the data you think is relevant to each institution.

Gropper is aiming to put together a team of open source coders to present this project to a VA challenge. Details can be found on the P4 web page at http://healthurl.com/www/P4.html.
