
September 13 2012

Growth of SMART health care apps may be slow, but inevitable

This week has been teeming with health care conferences, particularly in Boston, and was declared by President Obama to be National Health IT Week as well. I chose to spend my time at the second ITdotHealth conference, where I enjoyed many intense conversations with some of the leaders in the health care field, along with news about the SMART Platform at the center of the conference, the excitement of a Clayton Christensen talk, and the general panache of hanging out at the Harvard Medical School.

SMART, funded by the Office of the National Coordinator in Health and Human Services, is an attempt to slice through the Babel of EHR formats that prevent useful applications from being developed for patient data. Imagine if something like the wealth of mash-ups built on Google Maps (crime sites, disaster markers, restaurant locations) existed for your own health data. This is what SMART hopes to do. They can already showcase some working apps, such as overviews of patient data for doctors, and a real-life implementation of the heart disease user interface proposed by David McCandless in WIRED magazine.

The premise and promise of SMART

At this conference, the presentation that gave me the most far-reaching sense of what SMART can do was by Nich Wattanasin, project manager for i2b2 at Partners. His implementation showed SMART not just as an enabler of individual apps, but as an environment where a user could choose the proper app for his immediate needs. For instance, a doctor could use an app to search for patients in the database matching certain characteristics, then select a particular patient and choose an app that exposes certain clinical information on that patient. In this way, SMART can combine the power of many different apps that had been developed in an uncoordinated fashion, and make a comprehensive data analysis platform from them.

Another illustration of the value of SMART came from lead architect Josh Mandel. He pointed out that knowing a child’s blood pressure means little until one runs it through a formula based on the child’s height and age. Current EHRs can show you the blood pressure reading, but none does the calculation that shows you whether it’s normal or dangerous. A SMART app has been developed to do that. (Another speaker claimed that current EHRs in general neglect the special requirements of child patients.)
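
To make that concrete, here is a minimal sketch of what such a calculation might look like. The thresholds and lookup structure below are invented placeholders, not the real reference values, which come from the NHLBI tables and depend on age, sex, and height percentile:

    # Invented thresholds: the real NHLBI reference tables regress on
    # age, sex, and height percentile. Structure only, not medicine.
    PLACEHOLDER_95TH_SYSTOLIC = {
        (8, "50th-height"): 112,
        (10, "50th-height"): 115,
        (12, "50th-height"): 119,
    }

    def classify_child_bp(age_years, height_band, systolic):
        threshold = PLACEHOLDER_95TH_SYSTOLIC.get((age_years, height_band))
        if threshold is None:
            return "no reference data for this age/height"
        if systolic >= threshold:
            return "at or above 95th percentile; flag for the clinician"
        return "below 95th percentile"

    print(classify_child_bp(10, "50th-height", 121))  # flags the reading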

SMART is a close companion to the Indivo patient health record. Both of these, along with the i2b2 data exchange system, were covered in an article from an earlier conference at the medical school. Let’s see where platforms for health apps are headed.

How far we’ve come

As I mentioned, this ITdotHealth conference was the second to be held. The first took place in September 2009, and people following health care closely can be encouraged by reading the notes from that earlier instantiation of the discussion.

In September 2009, the HITECH act (part of the American Recovery and Reinvestment Act) had defined the concept of “meaningful use,” but nobody really knew what was expected of health care providers, because the ONC and the Centers for Medicare & Medicaid Services did not release their final Stage 1 rules until more than a year after this conference. Aneesh Chopra, then the Federal CTO, and Todd Park, then the CTO of Health and Human Services, spoke at the conference, but their discussion of health care reform was a “vision.” A surprisingly strong statement for patient access to health records was made, but speakers expected it to be accomplished through the CONNECT Gateway, because there was no Direct. (The first message I could find on the Direct Project forum dated back to November 25, 2009.) Participants had a sophisticated view of EHRs as platforms for applications, but SMART was just a “conceptual framework.”

So in some ways, ONC, Harvard, and many other contributors to modern health care have accomplished an admirable amount over three short years. But in other ways we are frustratingly stuck. For instance, few EHR vendors offer API access to patient records, and existing APIs are proprietary. The only SMART implementation for a commercial EHR mentioned at this week’s conference was one created on top of the Cerner API by outsiders (although Cerner was cooperative). Jim Hansen of Dossia told me that there is little point in encouraging programmers to create SMART apps while the records are still behind firewalls.

Keynotes

I couldn’t call a report on ITdotHealth complete without an account of the two keynotes by Christensen and Eric Horvitz, although these took off in different directions from the rest of the conference and served as hints of future developments.

Christensen is still adding new twists to the theories laid out in The Innovator’s Dilemma and other books. He has been a backer of the SMART project from the start and spoke at the first ITdotHealth conference. Consistent with his famous theory of disruption, he dismisses hopes that we can reduce costs by reforming the current system of hospitals and clinics. Instead, he projects the way forward through technologies that will enable less trained experts to successively take over tasks that used to be performed in more high-cost settings. Thus, nurse practitioners will be able to do more and more of what doctors do, primary care physicians will do more of what we currently delegate to specialists, and ultimately the patients and their families will treat themselves.

He also has a theory about the progression toward openness. Radically new technologies start out tightly integrated, and because they benefit from this integration they tend to be created by proprietary companies with high profit margins. As the industry comes to understand the products better, they move toward modular, open standards and become commoditized. Although one might conclude that EHRs, which have been around for some forty years, are overripe for open solutions, I’m not sure we’re ready for that yet. That’s because the problems the health care field needs to solve are quite different from the ones current EHRs solve. SMART is an open solution all around, but it could serve a marketplace of proprietary solutions and reward some of the venture capitalists pushing health care apps.

While Christensen laid out the broad environment for change in health care, Horvitz gave us a glimpse of what he hopes the practice of medicine will be in a few years. A distinguished scientist at Microsoft, Horvitz has been using machine learning to extract patterns in sets of patient data. For instance, in a collection of data about equipment use, ICD codes, vital signs, etc. from 300,000 emergency room visits, his team found variables that predicted a readmission within 14 days. Out of 10,000 variables, they found 500 that were relevant, but because the relational database was strained by retrieving so much data, they reduced the set to 23 variables to roll out as a product.
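
Horvitz didn’t walk through his pipeline, but the winnowing step he described has a familiar shape. Here is a purely hypothetical sketch using scikit-learn and synthetic data, not a reconstruction of the Microsoft system:

    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for the ER-visit data: 10,000 candidate
    # variables, only a few dozen of which actually drive the outcome
    # (readmission within 14 days).
    X, y = make_classification(n_samples=500, n_features=10000,
                               n_informative=40, random_state=0)

    # Winnow the variable set in stages, as in the talk: 10,000 -> 500 -> 23.
    for k in (500, 23):
        X = SelectKBest(f_classif, k=k).fit_transform(X, y)

    # A small model over the 23 surviving variables.
    model = LogisticRegression(max_iter=1000)
    print("cross-validated accuracy:",
          round(cross_val_score(model, X, y, cv=5).mean(), 2))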

Another project predicted the likelihood of medical errors from patient states and management actions. This was meant to address a study claiming that most medical errors go unreported.

A study that would make the privacy-conscious squirm was based on the willingness of individuals to provide location data to researchers. The researchers tracked searches on Bing along with visits to hospitals and found out how long it took between searching for information on a health condition and actually going to do something about it. (Horvitz assured us that personally identifiable information was stripped out.)

His goal is to go beyond measuring known variables, and to find new ones that could be hidden causes. But he warned that, as is often the case, causality is hard to prove.

As prediction turns up patterns, the data could become a “fabric” on which many different apps are based. Although Horvitz didn’t talk about combining data sets from different researchers, it’s clearly suggested by this progression. But proper de-identification and flexible patient consent become necessities for data combination. Horvitz also hopes to move from predictions to decisions, which he says is needed to truly move to evidence-based health care.

Did the conference promote more application development?

My impression (I have to admit I didn’t check with Dr. Ken Mandl, the organizer of the conference) was that this ITdotHealth aimed to persuade more people to write SMART apps, provide platforms that expose data through SMART, and contribute to the SMART project in general. I saw a few potential app developers at the conference, and a good number of people with their hands on data who were considering the use of SMART. I think they came away favorably impressed–maybe by the presentations, maybe by conversations that the meeting allowed them to have with SMART developers–so we may see SMART in wider use soon. Participants came far for the conference; I talked to one from Geneva, for instance.

The presentations were honest enough, though, to show that SMART development is not for the faint-hearted. On the supply side–that is, for people who have patient data and want to expose it–you have to create a “container” that presents data in the format expected by SMART. Furthermore, you must make sure the data conforms to industry standards, such as SNOMED for diagnoses. This could require a lot of data conversion.
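
To give a feel for that conversion work, here is a hypothetical sketch of mapping local diagnosis codes to SNOMED CT terms before handing a problem entry to a container. The local codes and record layout are invented (this is not the actual SMART schema), though the two SNOMED CT codes shown are real:

    LOCAL_TO_SNOMED = {
        "DX-HTN": ("38341003", "Hypertensive disorder"),
        "DX-DM2": ("44054006", "Diabetes mellitus type 2"),
    }

    def to_smart_problem(local_code, onset_date):
        # Real mappings come from licensed terminology services.
        if local_code not in LOCAL_TO_SNOMED:
            raise ValueError(f"no SNOMED mapping for {local_code}")
        code, title = LOCAL_TO_SNOMED[local_code]
        return {
            "codingSystem": "http://purl.bioontology.org/ontology/SNOMEDCT/",
            "code": code,
            "title": title,
            "onset": onset_date,
        }

    print(to_smart_problem("DX-HTN", "2012-01-15"))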

On the application side, you may have to deal with SMART’s penchant for Semantic Web technologies such as OWL and SPARQL. This will scare away a number of developers. However, speakers who presented SMART apps at the conference said development was fairly easy. No one matched the developer who said their app was ported in two days (most of which were spent reading the documentation), but development times could usually be measured in months.
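
For a taste of what that Semantic Web dependency means in practice, here is a small sketch using Python’s rdflib library: a toy record graph and the kind of SPARQL query a SMART app would issue against it. The triples are invented for illustration:

    import rdflib

    # A few triples standing in for a SMART record graph. The namespace
    # follows SMART's vocabulary, but the data is invented.
    data = """
    @prefix sp: <http://smartplatforms.org/terms#> .
    @prefix dcterms: <http://purl.org/dc/terms/> .

    <urn:med:1> a sp:Medication ; dcterms:title "lisinopril 10 MG Oral Tablet" .
    <urn:med:2> a sp:Medication ; dcterms:title "metformin 500 MG Oral Tablet" .
    """

    g = rdflib.Graph()
    g.parse(data=data, format="turtle")

    # List the patient's medications by name.
    for row in g.query("""
        PREFIX sp: <http://smartplatforms.org/terms#>
        PREFIX dcterms: <http://purl.org/dc/terms/>
        SELECT ?name WHERE { ?med a sp:Medication ; dcterms:title ?name . }
    """):
        print(row.name)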

Mandl spent some time airing the idea of a consortium to direct SMART. It could offer conformance tests (but probably not certification, which is a heavy-weight endeavor) and interact with the ONC and standards bodies.

After attending two conferences on SMART, I’ve got the impression that one of its most powerful concepts is that of an “app store for health care applications.” But correspondingly, one of the main sticking points is the difficulty of developing such an app store. No one seems to be taking it on. Perhaps SMART adoption is still at too early a stage.

Once again, we are butting our heads against the walls erected by EHRs to keep data from being extracted for useful analysis. And behind this stands the resistance of providers, the users of EHRs, to give their data to their patients or to researchers. This theme dominated a federal government conference on patient access.

I think SMART will be more widely adopted over time because it is the only useful standard for exposing patient data to applications, and innovation in health care demands these apps. Accountable Care Organizations, smarter clinical trials (I met two representatives of pharmaceutical companies at the conference), and other advances in health care require data crunching, so those apps need to be written. And that’s why people came from as far as Geneva to check out SMART–there’s nowhere else to find what they need. The technical requirements to understand SMART seem to be within developers’ grasp.

But a formidable phalanx of resistance remains, from those who don’t see the value of data to those who want to stick to limited exchange formats such as CCDs. And as Sean Nolan of Microsoft pointed out, one doesn’t get very far unless the app can fit into a doctor’s existing workflow. Privacy issues were also raised at the conference, because patient fears could stymie attempts at sharing. Given all these impediments, the government is doing what it can; perhaps the marketplace will step in to reward those who choose a flexible software platform for innovation.

August 23 2012

Balancing health privacy with innovation will rely on improving informed consent

Society is now faced with how to balance the privacy of the individual patient with the immense social good that could come through greater health data sharing. Making health data more open and fluid holds both the potential to be hugely beneficial for patients and enormously harmful. As my colleague Alistair Croll put it this summer, big data may well be a civil rights issue that much of the world doesn’t know about yet.

This will likely be a tension that persists throughout my lifetime as technology spreads around the world. While big data breaches are likely to make headlines, more subtle uses of health data have the potential to enable employers, insurers or governments to discriminate — or worse. Analysis of shopping habits can also allow a company to determine that a teenager was pregnant before her father did. People simply don’t realize how much about their lives can be intuited through analysis of their data exhaust.

To unlock the potential of health data for the public good, informed consent must mean something. Patients must be given the information and context for how and why their health data will be used in clear, transparent ways. To do otherwise is to duck the responsibility that comes with the immense power of big data.

In search of an informed opinion on all of these issues, I called up Deven McGraw (@HealthPrivacy), the director of the Health Privacy Project at the Center for Democracy and Technology (CDT). Our interview, lightly edited for content and clarity, follows.

Should people feel better about, say, getting their genome decoded because the Patient Protection and Affordable Care Act (PPACA) was upheld by the Supreme Court? What about other health-data-based discrimination?

Deven McGraw: The reality that someone could get data and use it in a way that harms people, and the inability to get affordable health care insurance or to get insurance at all, has been a significant driver of the concerns people have about health data for a very long time.

It’s not the only driver of people’s privacy concerns. Just removing the capacity for entities to do harm to individuals using their health data is probably not going to fully resolve the problem.

It’s important to pursue from a policy standpoint, but it’s also the case that people feel stigmatized by their health data. They feel health care is something they want to be able to pursue privately, even if the chances are very low that anybody could get the information and actually harm them with it by denying them insurance or denying them employment, which is an area we actually haven’t fully fixed. Your ability to get life insurance or disability insurance was not fixed by the Affordable Care Act.

Even if you fix all of those issues, privacy protections are about building an ecosystem in health care that people will trust. When they need to seek care that might be deemed to be sensitive to them, they feel like they can go get care and have some degree of confidence that that information isn’t going to be shared outside of those who have a need to know it, like health care providers or their insurance company if they are seeking to be reimbursed for care.

Obviously, public health can play a role. The average individual doesn’t realize that, often, their health data is sent to public health authorities if they have certain conditions or diseases, or even just as a matter of routine reporting for surveillance purposes.

Some of this is about keeping a trustworthy environment for individuals so they can seek the care they need. That’s a key goal for privacy. The other aspect of it is making sure we have the data available for important public purposes, but in a way that respects the fact that this data is sensitive.

We need to not be disrupting the trust people have in the health care system. If you can’t give people some reasonable assurance about how their data is used, there are lots of folks who will decline to seek care or will lie about health conditions when truthfulness is important.

Are health care providers and services being honest about health data use?

Deven McGraw: Transparency and openness about how we use health data in this country is seriously lacking. Part of it is the challenge of being up front with people, disclosing things they need to know but not overwhelming them with so much information in a consent form that they just sign on the bottom and don’t read it and don’t fully understand it.

It’s really hard to get notice and transparency right, and it’s a constant struggle. The FTC report on privacy talks a lot about how hard it is to be transparent with people about data sharing on the Internet or data collection on your mobile phone.

Ideally, for people to be truly informed, you’d give them an exhaustive amount of information, right? But if you give them too much information, the chances that they’ll read it and understand it are really low. So then people end up saying “yes” to things they don’t even realize they’re saying “yes” to.

On the other hand, we haven’t put enough effort into trying different ways of educating people. We, for too long, have assumed that, in a regulatory regime that provides permissive data sharing within the health care context, people will just trust their doctors.

I’ve been to a seminar on researchers getting access to data. The response of one of the researchers to the issue of “How do you regulate data uses for research?” and “What’s the role of consent?” and “What’s the role of institutional review boards?” was, “Well, people should just trust researchers.”

Maybe some people trust researchers, but that’s not really good enough. You have to earn trust. There’s a lot of room for innovative thinking along those lines. It’s something I have been increasingly itchy to try to dive into in more detail with folks who have expertise in other disciplines, like sociology, anthropology and community-building. What does it take to build trusted infrastructures that are transparent, open and that people are comfortable participating in?

There’s no magic endpoint for privacy, like, “Oh, we have privacy now,” versus, “Oh, we don’t have privacy.” To me, the magic endpoint is whether we have a health care data ecosystem that most people trust. It’s not perfect, but it’s good enough. I don’t think we’re quite there yet.

What specifically needs to happen on the openness and transparency side?

Deven McGraw: When I hear about state-based or community-based health information exchanges (HIE) going out and having town meetings with people in advance of building the HIE, working with the physicians in their communities to make sure they’re having conversations with their patients about what’s happening in the community, the electronic records movement and the HIE they’re building, that’s exactly the kind of work you need to do. When I hear about initiatives where people have actually spent the time and resources to educate patients, it warms my heart.

Yes, it’s fairly time- and resource-intensive, but in my view, it pays huge dividends on the backend, in terms of the level of trust and buy-in the community has to what you’re doing. It’s not that big of a leap. If you live in a community where people tend to go to church on Sundays, reach out to the churches. Ask pastors if you can speak to their congregations. Or bring them along and have them speak to their own congregations. Do tailored outreach to people through vehicles they already trust.

I think a lot of folks are pressed for time and resources, and feeling like digitization of the health care system should have happened yesterday. People are dying from errors in care and not getting their care coordinated. All of that is true. But this is a huge change in health care, and we have to do the hard work of outreach and engagement of patients in the community to do it right. In many ways, it’s a community-by-community effort. We’re not one great ad campaign away from solving the issue.

Is there mistrust for good reason? There have been many years of data breaches, coupled with new fears sparked by hacks enabled by electronic health record (EHR) adoption.

Deven McGraw: Part of it is when one organization has a breach, it’s like they all did. There is a collective sense that the health care industry, overall, doesn’t have its act together. It can’t quite figure out how to do electronic records right when we have breach after breach after breach. If breaches were rare, that would be one thing, but they’re still far too frequent. Institutions aren’t taking the basic steps they could take to reduce breaches. You’re never going to eliminate them, but you certainly can reduce them below where we are today.

In the context of certain secondary data uses, like when parents find out after the fact that blood spots collected from their infants at birth are being used for multiple purposes, you don’t want to surprise people about what you’re doing with their health information, the health information of their children, and that of other family members.

I think most people would be quite comfortable with many uses of health data, including those that do not necessarily directly benefit them but benefit human beings generally, or people who have the same disease, or people like them. In general, we’re actually a fairly generous people, but we don’t want to be surprised by unexpected use.

There’s a tremendous amount of work to do. We have a tendency to think issues like secondary use get resolved by asking for people’s consent ahead of time. Consent certainly plays an important role in protecting people’s privacy and giving them some sense of control over their health care information, but because consent in practice actually doesn’t do such a great job, we can’t over-rely on it to create a trust ecosystem. We have to do more on the openness and transparency side so that people are brought along with where we’re going with these health information technology initiatives.

What do doctors’ offices need to do to mitigate risks from EHR adoption?

Deven McGraw: It’s absolutely true that digitizing data in the absence of the adoption of technical security safeguards puts it much more at risk. You cannot hack into a paper file. If you lose a paper file, you’ve lost one paper file. If you lose a laptop, you’ve lost hundreds of thousands of records, if they’re on there and you didn’t encrypt the data.

Having said that, there are so many tools that you can adopt in technology with data in a digital form that are much stronger from a security standpoint than is true in paper. You can set role-based access controls for who can access a file and track who has accessed a file. You can’t do that with paper. You can use encryption technology. You can use stronger identity and authentication levels in order to make sure the person accessing the data is, in fact, authorized to do so and is the person they say they are on the other end of the transaction.
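
(To make the mechanics McGraw describes concrete, here is a toy sketch of role-based access control with an audit trail; the roles, permissions, and storage are all invented for illustration, and a real system would use append-only, tamper-evident audit storage.)

    from datetime import datetime, timezone

    ROLE_PERMISSIONS = {
        "physician": {"read", "write"},
        "billing": {"read"},
        "front_desk": set(),  # no chart access at all
    }

    audit_log = []

    def access_chart(user, role, patient_id, action):
        allowed = action in ROLE_PERMISSIONS.get(role, set())
        audit_log.append({                      # every attempt is recorded
            "when": datetime.now(timezone.utc).isoformat(),
            "user": user, "role": role,
            "patient": patient_id, "action": action, "allowed": allowed,
        })
        if not allowed:
            raise PermissionError(f"role {role!r} may not {action} charts")
        return f"chart {patient_id}: {action} ok"

    print(access_chart("dr_lee", "physician", "P-100", "read"))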

We do need people to adopt those technologies and to use them. You’re talking about a health care industry that has stewardship over some of the most sensitive data we have out there. It’s not the nuclear codes, but for a lot of people, it’s incredibly sensitive data — and yet, we trust the security of that data to rank amateurs. Honestly, there’s no other way around that. The people who create the data are physicians. Most of them don’t have any experience in digital security.

We have to count on the vendors of those systems to build in security safeguards. Then, we have to count on giving physicians and their staffs as much guidance as we can so they can actually deploy those safeguards and don’t create workarounds to them that create bigger holes in the security of the data and potentially create patient safety issues. It’s an enormously complex problem, but it’s not the reason to say, “Well, we can’t do this.”

Due to the efforts of many advocates, as you well know, health data has become a big part of the discussion around open data. What are the risks and benefits?

Deven McGraw: Honestly, people throw the term “open data” around a lot, and I don’t think we have a clear, agreed-upon definition for what that is. It’s a mistake to think that open data means all health data, fully identifiable, available to anybody, for any purpose, for any reason. That would be a totally “open data” environment. No rules, no restrictions, you get what you need. It certainly would be transformative and disruptive. We’d probably learn an awful lot from the data. But at the same time, we’ve potentially completely blown trust in the system because we can give no guarantees to anybody about what’s going to happen with their data.

Open data means creating rules that provide greater access to data but with certain privacy protections in place, such as protections on minimizing the identifiability of the data. That typically has been the way government health data initiatives, for example, have been put forth: the data that’s open, that’s really widely accessible, is data with a very low risk of being identified with a particular patient. The focus is typically on the patient side, but I think, even in the government health data initiatives that I’m aware of, it’s also not identifiable to a particular provider. It’s aggregate data that says, “How often is that very expensive cardiac surgery done and in what populations of patients? What are the general outcomes?” That’s all valuable information but not data at the granular level, where it’s traceable to an individual and, therefore, puts at risk the notion they can confidentially receive care.

We have a legal regime that opens the doors to data use much wider if you mask identifiers in data, remove them from a dataset, or use statistical techniques to render data to have a very low risk of re-identification.
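
(As a rough illustration of what masking identifiers means mechanically, the sketch below is a toy scrubber in the spirit of the HIPAA Safe Harbor method, which removes 18 categories of identifiers; only a handful are handled here, and the field names are invented.)

    DIRECT_IDENTIFIERS = {"name", "ssn", "phone", "email", "street_address"}

    def deidentify(record):
        out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
        if "zip" in out:            # coarsen to a 3-digit ZIP prefix
            out["zip"] = out["zip"][:3] + "00"
        if "birth_date" in out:     # keep the year only
            out["birth_date"] = out["birth_date"][:4]
        return out

    record = {"name": "Jane Doe", "ssn": "000-00-0000", "zip": "02139",
              "birth_date": "1961-07-04", "diagnosis": "44054006"}
    print(deidentify(record))
    # {'zip': '02100', 'birth_date': '1961', 'diagnosis': '44054006'}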

We don’t have a perfect regulatory regime on that front. We don’t have any strict prohibitions against re-identifying that data. We don’t have any mechanisms to hold people accountable if they do re-identify the data, or if they release a dataset that then is subsequently re-identified because they were sloppy in how they de-identified it. We don’t have the regulatory regime that we need to create an open data ecosystem that loosens some of the regulatory constraints on data but in a way that still protects individual privacy to the maximum extent possible.

Again, it’s a balance. What we’re trying to achieve is a very low risk of re-identification; it’s impossible to achieve no risk of re-identification and still have any utility in the data whatsoever, or so I’m told by researchers.

It is absolutely the path we need to proceed down. Our health care system is so messed up and fails so many people so much of the time. If we don’t start using this data, learning from it and deploying testing initiatives more robustly, getting rid of the ones that don’t work and more aggressively pursuing the interventions that do, we’re never going to move the needle. And consumers suffer from that. They suffer as much — or more, quite frankly — as they do from violations of their privacy. The end goal here is we need to create a health care system that works and that people trust. You need to be pursuing both of those goals.

Congress hasn’t had much appetite for passing new health care legislation in this election year, aside from the House trying to repeal PPACA 33 times. That would seem to leave reform up to the U.S. Department of Health and Human Services (HHS), for now. Where do we stand with rulemaking around creating regulatory regimes like those you’ve described?

Deven McGraw: HHS certainly has made progress in some areas and is much more proactive on the issue of health privacy than I think they have been in the past. On the other hand, I’m not sure I can point to significant milestones that have been met.

Some of that isn’t completely their fault. Within an administration, there are multiple decision-makers. For any sort of policy matter where you want to move the ball forward, there’s a fair amount of process and approval up the food chain that has to happen. In an election year, in particular, that whole mechanism gets jammed up in ways that are often disappointing.

We still don’t have finalized HIPAA rules from the HITECH changes, which is really unfortunate. And I’m now thinking we won’t see them until November. Similarly, there was a study on de-identification that Congress called for in the HITECH legislation. It’s two years late, creeping up on three, and we still haven’t seen it.

You can point to those and you sort of throw up your hands and say, “What’s going on? Who’s minding the store?” If we know and appreciate that we need to build this trust environment to move the needle forward on using health IT to address quality and cost issues, then it starts to look very bad in terms of a report card for the agency on those elements.

On the other hand, you have the Office of the National Coordinator for Health IT doing more work through setting funding conditions on states to get them to adopt privacy frameworks for health information exchanges.

You have progress being made by the Office for Civil Rights on HIPAA enforcement. They’re doing audits. They now have more enforcement actions in the last year than they had in the total number of years the regulations were in effect prior to this year. They’re getting serious.

From a research perspective, the other thing I would mention is the efforts to try to make the common rule — the set of rules that governs federally funded research — more consistent with HIPAA and more workable for researchers. But there’s still a lot of work to be done on that initiative as well.

We started the conversation by saying these are really complex issues. They don’t get fixed overnight. In some respects, fast action is less important than getting it right, but we really should be making faster progress than we are.

What does the trend toward epatients and peer-to-peer health care mean for privacy, prevention and informed consent?

Deven McGraw: I think the epatient movement and increase in people’s use of Internet technologies, like social media, to connect with one another and to share data and experiences in order to improve their care is an enormously positive development. It’s a huge game-changer. And, of course, it will have an impact on privacy.

One of the things we’re going to have to keep an eye on is the fact that one out of six people, when they’re surveyed, say they practice what we call “privacy protective behaviors.” They lie to their physicians. They don’t go to seek the care they need, which is often the case with respect to mental illness. Or they seek care out of their area in order to prevent people they might know who work in their local hospital from seeing their data.

But that’s only one out of six people who say that, so there are an awful lot of people who, from the start, even when they’re healthy, are completely comfortable being open with their data. Certainly when you’re sick, your desire is to get better. And when you’re seriously sick, your desire is to save your life. Anything you can do to do that means whatever qualms you may have had about sharing your data, if they existed at all, go right out the window.

On the other hand, we have to build an ecosystem that the one out of six people can use as well. That’s what I’m focusing on, in particular, in the consumer-facing health space, the “Health 2.0 space” and on social media sites. It really should be the choice of the individual about how much data they share. There needs to be a lot of transparency about how that data is used.

When I look at a site like PatientsLikeMe, I know some privacy advocates think it’s horrifying and that those people are crazy for sharing the level of detail in their data on that site. On the other hand, I have read few privacy policies that are as transparent and open about what they do with data as PatientsLikeMe’s policy. They’re very up front about what happens with that data. I’m confident that people who go on the site absolutely know what they’re doing. It’s not my job to tell them they can’t do it.

But we also need to create environments so people can get the benefits of sharing their experiences with other patients who have their disease — because it’s enormously empowering and groundbreaking from a research standpoint — without telling people they have to throw all of their inhibitions out the door.

You clearly care about these issues deeply. How did you end up in your current position?

Deven McGraw: I was working at the National Partnership for Women and Families, which is another nonprofit advocacy organization here in town [Washington, D.C.], as their chief operating officer. I had been working on health information technology policy issues — specifically, the use of technology to improve health care quality and trying to normalize or reduce costs. I was getting increasingly involved in being a consumer representative at meetings on health information technology adoption and applauding health information technology adoption, and thinking about what the benefits for consumers were and how we can make sure that those happen.

The one issue that kept coming up in those conversations was that we know we need to build in privacy protections for this data and we know we have HIPAA — so where are the gaps? What do we need to do to move the ball forward? I never had enough time to really drill down on that issue because I was the chief operating officer of a nonprofit.

At the time, the Health Privacy Project was an independent nonprofit organization that had been founded and led by one dynamic woman, Janlori Goldman. She was living in New York and was ready to transition the work to somebody else. CDT approached me about becoming the director of the Health Privacy Project; they were moving it into CDT to take advantage of all the technology and Internet expertise there, at a time when we were trying to move health care aggressively into the digital space. It was a perfect storm: I had been wishing I had more time to think through the privacy issues, and then this job came along, aligned with the way I like to do policy work, which is to sit down with stakeholders and try to figure out a solution that ideally works for everybody.

From a timing perspective, it couldn’t have been more perfect. It was right during the consideration of bills on health IT. There were hearings on health information technology that we were invited to testify in. We wrote papers to put ourselves on the map, in terms of our theory about how to do privacy well in health IT and what the role of patient consent should be in privacy, because a lot of the debate was really spinning around that one issue. It’s been a terrific experience. It’s an enormous challenge.


August 14 2012

Solving the Wanamaker problem for health care

By Tim O’Reilly, Julie Steele, Mike Loukides and Colin Hill

“The best minds of my generation are thinking about how to make people click ads.” — Jeff Hammerbacher, early Facebook employee

“Work on stuff that matters.” — Tim O’Reilly

[Image: Doctors in an operating room with data]

In the early days of the 20th century, department store magnate John Wanamaker famously said, “I know that half of my advertising doesn’t work. The problem is that I don’t know which half.”

The consumer Internet revolution was fueled by a search for the answer to Wanamaker’s question. Google AdWords and the pay-per-click model transformed a business in which advertisers paid for ad impressions into one in which they pay for results. “Cost per thousand impressions” (CPM) was replaced by “cost per click” (CPC), and a new industry was born. It’s important to understand why CPC replaced CPM, though. Superficially, it’s because Google was able to track when a user clicked on a link, and was therefore able to bill based on success. But billing based on success doesn’t fundamentally change anything unless you can also change the success rate, and that’s what Google was able to do. By using data to understand each user’s behavior, Google was able to place advertisements that an individual was likely to click. They knew “which half” of their advertising was more likely to be effective, and didn’t bother with the rest.
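
A back-of-the-envelope comparison, with invented numbers, shows why the success rate, not the billing unit, is what matters:

    # Invented numbers. 100,000 impressions at a $2.00 CPM cost the same
    # regardless of clicks; what changes the economics is the click rate.
    impressions = 100_000
    cpm_cost = impressions / 1000 * 2.00   # pay per thousand impressions
    cpc_rate = 0.50                        # pay per click

    for label, ctr in [("untargeted", 0.002), ("data-driven", 0.010)]:
        clicks = impressions * ctr
        print(f"{label}: {clicks:.0f} clicks; "
              f"CPC spend ${clicks * cpc_rate:.2f} vs CPM spend ${cpm_cost:.2f}")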

Since then, data and predictive analytics have driven ever deeper insight into user behavior such that companies like Google, Facebook, Twitter, Zynga, and LinkedIn are fundamentally data companies. And data isn’t just transforming the consumer Internet. It is transforming finance, design, and manufacturing — and perhaps most importantly, health care.

How is data science transforming health care? There are many ways in which health care is changing, and needs to change. We’re focusing on one particular issue: the problem Wanamaker described when talking about his advertising. How do you make sure you’re spending money effectively? Is it possible to know what will work in advance?

Too often, when doctors order a treatment, whether it’s surgery or an over-the-counter medication, they are applying a “standard of care” treatment or some variation that is based on their own intuition, effectively hoping for the best. The sad truth of medicine is that we don’t really understand the relationship between treatments and outcomes. We have studies to show that various treatments will work more often than placebos; but, like Wanamaker, we know that much of our medicine doesn’t work for half of our patients; we just don’t know which half. At least, not in advance. One of data science’s many promises is that, if we can collect data about medical treatments and use that data effectively, we’ll be able to predict more accurately which treatments will be effective for which patient, and which treatments won’t.

A better understanding of the relationship between treatments, outcomes, and patients will have a huge impact on the practice of medicine in the United States. Health care is expensive. The U.S. spends over $2.6 trillion on health care every year, an amount that constitutes a serious fiscal burden for government, businesses, and our society as a whole. These costs include over $600 billion of unexplained variations in treatments: treatments that cause no differences in outcomes, or even make the patient’s condition worse. We have reached a point at which our need to understand treatment effectiveness has become vital — to the health care system and to the health and sustainability of the economy overall.

Why do we believe that data science has the potential to revolutionize health care? After all, the medical industry has had data for generations: clinical studies, insurance data, hospital records. But the health care industry is now awash in data in a way that it has never been before: from biological data such as gene expression, next-generation DNA sequence data, proteomics, and metabolomics, to clinical data and health outcomes data contained in ever more prevalent electronic health records (EHRs) and longitudinal drug and medical claims. We have entered a new era in which we can work on massive datasets effectively, combining data from clinical trials and direct observation by practicing physicians (the records generated by our $2.6 trillion of medical expense). When we combine data with the resources needed to work on the data, we can start asking the important questions, the Wanamaker questions, about what treatments work and for whom.

The opportunities are huge: for entrepreneurs and data scientists looking to put their skills to work disrupting a large market, for researchers trying to make sense out of the flood of data they are now generating, and for existing companies (including health insurance companies, biotech, pharmaceutical, and medical device companies, hospitals and other care providers) that are looking to remake their businesses for the coming world of outcome-based payment models.

Making health care more effective

What, specifically, does data allow us to do that we couldn’t do before? For the past 60 or so years of medical history, we’ve treated patients as some sort of an average. A doctor would diagnose a condition and recommend a treatment based on what worked for most people, as reflected in large clinical studies. Over the years, we’ve become more sophisticated about what that average patient means, but that same statistical approach didn’t allow for differences between patients. A treatment was deemed effective or ineffective, safe or unsafe, based on double-blind studies that rarely took into account the differences between patients. The exceptions are relatively recent, and have been dominated by cancer treatments, the first being Herceptin for breast cancer in women who over-express the Her2 receptor. With the data that’s now available, we can go much further, for a broad range of diseases and interventions that are not just drugs but include surgery, disease management programs, medical devices, patient adherence, and care delivery.

For a long time, we thought that Tamoxifen was roughly 80% effective for breast cancer patients. But now we know much more: we know that it’s 100% effective in 70 to 80% of the patients, and ineffective in the rest. That’s not word games, because we can now use genetic markers to tell whether it’s likely to be effective or ineffective for any given patient, and we can tell in advance whether to treat with Tamoxifen or to try something else.

Two factors lie behind this new approach to medicine: a different way of using data, and the availability of new kinds of data. It’s not just stating that the drug is effective on most patients, based on trials (indeed, 80% is an enviable success rate); it’s using artificial intelligence techniques to divide the patients into groups and then determine the difference between those groups. We’re not asking whether the drug is effective; we’re asking a fundamentally different question: “for which patients is this drug effective?” We’re asking about the patients, not just the treatments. A drug that’s only effective on 1% of patients might be very valuable if we can tell who that 1% is, though it would certainly be rejected by any traditional clinical trial.
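
That patient-level question is easy to pose in code. Here is a deliberately simplified sketch on synthetic data, with a single invented binary marker standing in for the genetics:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    n = 5000

    # Invented cohort: one binary genetic marker fully determines
    # response, mimicking the Tamoxifen pattern (~80% respond).
    marker = (rng.random(n) < 0.8).astype(int)
    responded = marker.astype(bool)

    # Ask the patient-level question: for whom is the drug effective?
    model = DecisionTreeClassifier(max_depth=1)
    model.fit(marker.reshape(-1, 1), responded)

    print("overall response rate:", responded.mean())              # ~0.8
    print("predicted for marker+ patient:", model.predict([[1]]))  # [ True]
    print("predicted for marker- patient:", model.predict([[0]]))  # [False]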

More than that, asking questions about patients is only possible because we’re using data that wasn’t available until recently: DNA sequencing was only invented in the mid-1970s, and is only now coming into its own as a medical tool. What we’ve seen with Tamoxifen is as clear a solution to the Wanamaker problem as you could ask for: we now know when that treatment will be effective. If you can do the same thing with millions of cancer patients, you will both improve outcomes and save money.

Dr. Lukas Wartman, a cancer researcher who was himself diagnosed with terminal leukemia, was successfully treated with sunitinib, a drug that was only approved for kidney cancer. Sequencing the genes of both the patient’s healthy cells and cancerous cells led to the discovery of a protein that was out of control and encouraging the spread of the cancer. The gene responsible for manufacturing this protein could potentially be inhibited by the kidney drug, although it had never been tested for this application. This unorthodox treatment was surprisingly effective: Wartman is now in remission.

While this treatment was exotic and expensive, what’s important isn’t the expense but the potential for new kinds of diagnosis. The price of gene sequencing has been plummeting; it will be a common doctor’s office procedure in a few years. And through Amazon and Google, you can now “rent” a cloud-based supercomputing cluster that can solve huge analytic problems for a few hundred dollars per hour. What is now exotic inevitably becomes routine.

But even more important: we’re looking at a completely different approach to treatment. Rather than a treatment that works 80% of the time, or even 100% of the time for 80% of the patients, a treatment might be effective for a small group. It might be entirely specific to the individual; the next cancer patient may have a different protein that’s out of control, an entirely different genetic cause for the disease. Treatments that are specific to one patient don’t exist in medicine as it’s currently practiced; how could you ever do an FDA trial for a medication that’s only going to be used once to treat a certain kind of cancer?

Foundation Medicine is at the forefront of this new era in cancer treatment. They use next-generation DNA sequencing to discover DNA sequence mutations and deletions that are currently used in standard of care treatments, as well as many other actionable mutations that are tied to drugs for other types of cancer. They are creating a patient-outcomes repository that will be the fuel for discovering the relation between mutations and drugs. Foundation has identified DNA mutations in 50% of cancer cases for which drugs exist (information via a private communication), but are not currently used in the standard of care for the patient’s particular cancer.

The ability to do large-scale computing on genetic data gives us the ability to understand the origins of disease. If we can understand why an anti-cancer drug is effective (what specific proteins it affects), and if we can understand what genetic factors are causing the cancer to spread, then we’re able to use the tools at our disposal much more effectively. Rather than using imprecise treatments organized around symptoms, we’ll be able to target the actual causes of disease, and design treatments tuned to the biology of the specific patient. Eventually, we’ll be able to treat 100% of the patients 100% of the time, precisely because we realize that each patient presents a unique problem.

Personalized treatment is just one area in which we can solve the Wanamaker problem with data. Hospital admissions are extremely expensive. Data can make hospital systems more efficient and help them avoid preventable complications such as blood clots and hospital re-admissions. It can also help address the challenge of hot-spotting (a term coined by Atul Gawande): finding people who use an inordinate amount of health care resources. By looking at data from hospital visits, Dr. Jeffrey Brenner of Camden, NJ, was able to determine that “just one per cent of the hundred thousand people who made use of Camden’s medical facilities accounted for thirty per cent of its costs.” Furthermore, many of these people came from two apartment buildings. Designing more effective medical care for these patients was difficult; it doesn’t fit our health insurance system, the patients are often dealing with many serious medical issues (addiction and obesity are frequent complications), and they have trouble trusting doctors and social workers. It’s counter-intuitive, but spending more on some patients now results in spending less on them when they become really sick. While it’s a work in progress, it looks like building appropriate systems to target these high-risk patients and treat them before they’re hospitalized will bring significant savings.

Many poor health outcomes are attributable to patients who don’t take their medications. Eliza, a Boston-based company started by Alexandra Drane, has pioneered approaches to improve compliance through interactive communication with patients. Eliza improves patient drug compliance by tracking which types of reminders work on which types of people; it’s similar to the way companies like Google target advertisements to individual consumers. By using data to analyze each patient’s behavior, Eliza can generate reminders that are more likely to be effective. The results aren’t surprising: if patients take their medicine as prescribed, they are more likely to get better. And if they get better, they are less likely to require further, more expensive treatment. Again, we’re using data to solve Wanamaker’s problem in medicine: we’re spending our resources on what’s effective, on the reminders that are most likely to get patients to take their medications.
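
Eliza’s actual models aren’t public, but the underlying idea (track which reminder type works for which kind of person, then mostly use the best one) can be sketched as a simple epsilon-greedy selector. Every name and number below is invented:

    import random
    from collections import defaultdict

    CHANNELS = ["phone_call", "text_message", "email"]
    stats = defaultdict(lambda: {c: [0, 0] for c in CHANNELS})  # [successes, tries]

    def pick_channel(segment, epsilon=0.1):
        s = stats[segment]
        if random.random() < epsilon or all(t[1] == 0 for t in s.values()):
            return random.choice(CHANNELS)  # explore occasionally
        # otherwise exploit the channel with the best success rate so far
        return max(CHANNELS, key=lambda c: s[c][0] / s[c][1] if s[c][1] else 0.0)

    def record_outcome(segment, channel, refilled_on_time):
        s = stats[segment][channel]
        s[0] += int(refilled_on_time)
        s[1] += 1

    # Simulate a segment for which text messages work best.
    for _ in range(300):
        ch = pick_channel("seniors_cardiac")
        record_outcome("seniors_cardiac", ch, refilled_on_time=(ch == "text_message"))
    print(pick_channel("seniors_cardiac", epsilon=0))  # almost always "text_message"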

More data, more sources

The examples we’ve looked at so far have been limited to traditional sources of medical data: hospitals, research centers, doctor’s offices, insurers. The Internet has enabled the formation of patient networks aimed at sharing data. Health social networks now are some of the largest patient communities. As of November 2011, PatientsLikeMe has over 120,000 patients in 500 different condition groups; ACOR has over 100,000 patients in 127 cancer support groups; 23andMe has over 100,000 members in their genomic database; and diabetes health social network SugarStats has over 10,000 members. These are just the larger communities; thousands of smaller communities have formed around rare diseases, or even uncommon experiences with common diseases. All of these communities are generating data that they voluntarily share with each other and the world.

Increasingly, what they share is not just anecdotal, but includes an array of clinical data. For this reason, these groups are being recruited for large-scale crowdsourced clinical outcomes research.

Thanks to ubiquitous data networking through the mobile network, we can take several steps further. In the past two or three years, there’s been a flood of personal fitness devices (such as the Fitbit) for monitoring your personal activity. There are mobile apps for taking your pulse, and an iPhone attachment for measuring your glucose. There has been talk of mobile applications that would constantly listen to a patient’s speech and detect changes that might be the precursor for a stroke, or would use the accelerometer to report falls. Tanzeem Choudhury has developed an app called Be Well that is intended primarily for victims of depression, though it can be used by anyone. Be Well monitors the user’s sleep cycles, the amount of time they spend talking, and the amount of time they spend walking. The data is scored, and the app makes appropriate recommendations, based both on the individual patient and data collected across all the app’s users.
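
Be Well’s scoring model isn’t public; here is a toy sketch of how three such signals might be combined, with invented weights and healthy ranges:

    # Invented weights and ranges; the shape of the computation is the point.
    def band_score(value, low, high):
        # 1.0 inside the healthy band, falling off linearly outside it
        if value < low:
            return max(value / low, 0.0)
        if value > high:
            return max(1 - (value - high) / high, 0.0)
        return 1.0

    def wellness_score(sleep_hours, talk_minutes, walk_minutes):
        parts = [
            band_score(sleep_hours, 7, 9),      # sleep cycles
            band_score(talk_minutes, 30, 240),  # time spent talking
            band_score(walk_minutes, 30, 180),  # time spent walking
        ]
        return round(100 * sum(parts) / len(parts))

    # Short sleep and little social contact drag the score down.
    print(wellness_score(5.5, 10, 45))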

Continuous monitoring of critical patients in hospitals has been normal for years; but we now have the tools to monitor patients constantly, in their home, at work, wherever they happen to be. And if this sounds like big brother, at this point most of the patients are willing. We don’t want to transform our lives into hospital experiences; far from it! But we can collect and use the data we constantly emit, our “data smog,” to maintain our health, to become conscious of our behavior, and to detect oncoming conditions before they become serious. The most effective medical care is the medical care you avoid because you don’t need it.

Paying for results

Once we’re on the road toward more effective health care, we can look at other ways in which Wanamaker’s problem shows up in the medical industry. It’s clear that we don’t want to pay for treatments that are ineffective. Wanamaker wanted to know which part of his advertising was effective, not just to make better ads, but also so that he wouldn’t have to buy the advertisements that wouldn’t work. He wanted to pay for results, not for ad placements. Now that we’re starting to understand how to make treatment effective, now that we understand that it’s more than rolling the dice and hoping that a treatment that works for a typical patient will be effective for you, we can take the next step: Can we change the underlying incentives in the medical system? Can we make the system better by paying for results, rather than paying for procedures?

It’s shocking just how badly the incentives in our current medical system are aligned with outcomes. If you see an orthopedist, you’re likely to get an MRI, most likely at a facility owned by the orthopedist’s practice. On one hand, it’s good medicine to know what you’re doing before you operate. But how often does that MRI result in a different treatment? How often is the MRI required just because it’s part of the protocol, when it’s perfectly obvious what the doctor needs to do? Many men have had PSA tests for prostate cancer; but in most cases, aggressive treatment of prostate cancer is a bigger risk than the disease itself. Yet the test itself is a significant profit center. Think again about Tamoxifen, and about the pharmaceutical company that makes it. In our current system, what does “100% effective in 80% of the patients” mean, except for a 20% loss in sales? That’s because the drug company is paid for the treatment, not for the result; it has no financial interest in whether any individual patient gets better. (Whether a statistically significant number of patients has side-effects is a different issue.) And at the same time, bringing a new drug to market is very expensive, and might not be worthwhile if it will only be used on the remaining 20% of the patients. And that’s assuming that one drug, not two, or 20, or 200 will be required to treat the unlucky 20% effectively.

It doesn’t have to be this way.

In the U.K., Johnson & Johnson, faced with the possibility of losing reimbursements for their multiple myeloma drug Velcade, agreed to refund the money for patients who did not respond to the drug. Several other pay-for-performance drug deals have followed since, paving the way for the ultimate transition in pharmaceutical company business models in which their product is health outcomes instead of pills. Such a transition would rely more heavily on real-world outcome data (are patients actually getting better?), rather than controlled clinical trials, and would use molecular diagnostics to create personalized “treatment algorithms.” Pharmaceutical companies would also focus more on drug compliance to ensure health outcomes were being achieved. This would ultimately align the interests of drug makers with patients, their providers, and payors.

Similarly, rather than paying for treatments and procedures, can we pay hospitals and doctors for results? That’s what Accountable Care Organizations (ACOs) are about. ACOs are a leap forward in business model design, where the provider shoulders any financial risk. ACOs represent a new framing of the much maligned HMO approaches from the ’90s, which did not work. HMOs tried to use statistics to predict and prevent unneeded care. The ACO model, rather than controlling doctors with what the data says they “should” do, uses data to measure how each doctor performs. Doctors are paid for successes, not for the procedures they administer. The main advantage that the ACO model has over the HMO model is how good the data is, and how that data is leveraged. The ACO model aligns incentives with outcomes: a practice that owns an MRI facility isn’t incentivized to order MRIs when they’re not necessary. It is incentivized to use all the data at its disposal to determine the most effective treatment for the patient, and to follow through on that treatment with a minimum of unnecessary testing.

When we know which procedures are likely to be successful, we’ll be in a position where we can pay only for the health care that works. When we can do that, we’ve solved Wanamaker’s problem for health care.

Enabling data

Data science is not optional in health care reform; it is the linchpin of the whole process. All of the examples we’ve seen, ranging from cancer treatment to detecting hot spots where additional intervention will make hospital admission unnecessary, depend on using data effectively: taking advantage of new data sources and new analytics techniques, in addition to the data the medical profession has had all along.

But it’s too simple just to say “we need data.” We’ve had data all along: handwritten records in manila folders on acres and acres of shelving. Insurance company records. But it’s all been locked up in silos: insurance silos, hospital silos, and many, many doctor’s office silos. Data doesn’t help if it can’t be moved, if data sources can’t be combined.

There are two big issues here. First, a surprising number of medical records are still either hand-written, or in digital formats that are scarcely better than hand-written (for example, scanned images of hand-written records). Getting medical records into a format that’s computable is a prerequisite for almost any kind of progress. Second, we need to break down those silos.

Anyone who has worked with data knows that, in any problem, 90% of the work is getting the data into a form in which it can be used; the analysis itself is often simple. We need electronic health records: patient data in a more-or-less standard form that can be shared efficiently, data that can be moved from one location to another at the speed of the Internet. Not all data formats are created equal, and some are certainly better than others; but at this point, any machine-readable format, even simple text files, is better than nothing. While there are currently hundreds of different formats for electronic health records, the fact that they’re electronic means that they can be converted from one form into another. Standardizing on a single format would make things much easier, but just getting the data into some electronic form, any form at all, is the first step.
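
As a toy illustration of why “any electronic form at all” is workable, here is a minimal sketch, with made-up field names and layout, that converts a flat pipe-delimited record (the sort of file even a legacy system can emit) into JSON:

    # Toy example: convert a flat, pipe-delimited patient record into JSON.
    # The field names and layout are hypothetical, invented for this sketch.
    import json

    record = "12345|1958-03-14|lisinopril 10mg|BP 142/90"
    fields = ["patient_id", "birth_date", "prescription", "observation"]

    parsed = dict(zip(fields, record.split("|")))
    print(json.dumps(parsed, indent=2))

Once a record is structured at all, transformations like this are mechanical; the hard part is organizational, not technical.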

Once we have electronic health records, we can link doctor’s offices, labs, hospitals, and insurers into a data network, so that all patient data is immediately stored in a data center: every prescription, every procedure, and whether that treatment was effective or not. This isn’t some futuristic dream; it’s technology we have now. Building this network would be substantially simpler and cheaper than building the networks and data centers now operated by Google, Facebook, Amazon, Apple, and many other large technology companies. It’s not even close to pushing the limits.

Electronic health records enable us to go far beyond the current mechanism of clinical trials. In the past, once a drug was approved in trials, that was effectively the end of the story: running more tests to determine whether it’s effective in practice would be a huge expense. A physician might get a sense of whether a treatment worked, but that evidence is essentially anecdotal: it’s easy to believe that something is effective because that’s what you want to see. And if it’s shared with other doctors, it’s shared while chatting at a medical convention. But with electronic health records, it’s possible (and not even terribly expensive) to collect documentation from thousands of physicians treating millions of patients. We can find out when and where a drug was prescribed, why, and whether there was a good outcome. We can ask questions that are never part of clinical trials: is the medication used in combination with anything else? What other conditions is the patient being treated for? We can use machine learning techniques to discover unexpected combinations of drugs that work well together, or to predict adverse reactions. We’re no longer limited by clinical trials; every patient can be part of an ongoing evaluation of whether his treatment is effective, and under what conditions. Technically, this isn’t hard. The only difficult part is getting the data to move, getting data in a form where it’s easily transferred from the doctor’s office to analytics centers.
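
To make that concrete, here is a minimal sketch of the kind of analysis pooled records would allow, using scikit-learn on synthetic data; the drugs, the interaction effect, and the feature names are all invented for the example, not drawn from any real study:

    # Toy sketch: flag a drug combination associated with adverse reactions.
    # All data here is synthetic; real work would use de-identified EHR data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # One row per patient: was drug A prescribed? Was drug B prescribed?
    drug_a = rng.integers(0, 2, n)
    drug_b = rng.integers(0, 2, n)
    both = drug_a * drug_b

    # Simulate adverse reactions that spike when the drugs are combined.
    adverse = rng.random(n) < (0.02 + 0.10 * both)

    X = np.column_stack([drug_a, drug_b, both])
    model = LogisticRegression().fit(X, adverse)

    # A large positive coefficient on the interaction term flags the combination.
    print(dict(zip(["drug_a", "drug_b", "a_and_b"], model.coef_[0].round(2))))

No clinical trial would have tested this particular combination; with pooled records, testing it is a few lines of code.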

Solving problems of hot-spotting (individual patients or groups of patients consuming inordinate medical resources) requires a different combination of information. You can’t locate hot spots if you don’t have physical addresses. Physical addresses can be geocoded (converted from addresses to longitude and latitude, which is more useful for mapping problems) easily enough, once you have them, but you need access to patient records from all the hospitals operating in the area under study. And you need access to insurance records to determine how much health care patients are consuming, and to evaluate whether special interventions for these patients are effective. Not only does this require electronic records, it requires cooperation across different organizations (breaking down silos), and assurance that the data won’t be misused (patient privacy). Again, the enabling factor is our ability to combine data from different sources; once you have the data, the solutions come easily.
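
Here is a minimal sketch of what “the solutions come easily” means once the data is combined, assuming the addresses have already been geocoded (real geocoding would go through an external service) and using invented coordinates and cost figures:

    # Toy sketch: find geographic clusters of high-cost patients.
    # Coordinates and claim totals are invented; geocoding is assumed done.
    import numpy as np
    from sklearn.cluster import DBSCAN

    # (latitude, longitude, annual_claims_dollars) per patient.
    patients = np.array([
        (39.948, -75.144, 310_000),
        (39.949, -75.143, 280_000),
        (39.947, -75.145, 350_000),
        (39.990, -75.200, 4_000),
        (40.020, -75.300, 6_500),
    ])

    # Keep only the patients consuming inordinate resources.
    hot = patients[patients[:, 2] > 100_000][:, :2]

    # Cluster them: points within about 0.005 degrees (~500 m) form a hot spot.
    labels = DBSCAN(eps=0.005, min_samples=2).fit_predict(hot)
    for label in set(labels) - {-1}:
        center = hot[labels == label].mean(axis=0)
        print(f"hot spot {label} centered near {center.round(3)}")

The analysis is a dozen lines; assembling hospital, insurance, and address data into that one array is where all the real work lies.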

Breaking down silos has a lot to do with aligning incentives. Currently, hospitals are trying to optimize their income from medical treatments, while insurance companies are trying to optimize their income by minimizing payments, and doctors are just trying to keep their heads above water. There’s little incentive to cooperate. But as financial pressures rise, it will become critically important for everyone in the health care system, from the patient to the insurance executive, to ensure that they are getting the most for their money. While there’s intense cultural resistance to be overcome (through our experience in data science, we’ve learned that it’s often difficult to break down silos within an organization, let alone between organizations), the pressure of delivering more effective health care for less money will eventually break the silos down. The old zero-sum game of winners and losers must end if we’re going to have a medical system that’s effective over the coming decades.

Data becomes infinitely more powerful when you can mix data from different sources: many doctor’s offices, hospital admission records, address databases, and even the rapidly increasing stream of data coming from personal fitness devices. The challenge isn’t employing our statistics more carefully, precisely, or guardedly. It’s about letting go of an old paradigm that starts by assuming only certain variables are key and ends by correlating only those variables. This paradigm worked well when data was scarce, but if you think about it, these assumptions arise precisely because data is scarce. We didn’t study the relationship between leukemia and kidney cancers because that would have required asking a huge set of questions and collecting a lot of data; and a connection between leukemia and kidney cancer is no more likely than a connection between leukemia and flu. But the existence of data is no longer a problem: we’re collecting the data all the time. Electronic health records let us move the data around so that we can assemble a collection of cases that goes far beyond a particular practice, a particular hospital, a particular study. So now, we can use machine learning techniques to identify and test all possible hypotheses, rather than just the small set that intuition might suggest. And finally, with enough data, we can get beyond correlation to causation: rather than saying “A and B are correlated,” we’ll be able to say “A causes B,” and know what to do about it.

Building the health care system we want

The U.S. ranks 37th among developed economies in life expectancy and other measures of health, while far outspending other countries on per-capita health care. We spend 18% of GDP on health care, while other countries on average spend on the order of 10% of GDP. We spend a lot of money on treatments that don’t work, because we have, at best, a poor understanding of what will and won’t work.

Part of the problem is cultural. In a country where even pets can have hip replacement surgery, it’s hard to imagine not spending every penny you have to prolong Grandma’s life — or your own. The U.S. is a wealthy nation, and health care is something we choose to spend our money on. But wealthy or not, nobody wants ineffective treatments. Nobody wants to roll the dice and hope that their biology is similar enough to a hypothetical “average” patient. No one wants a “winner take all” payment system in which the patient is always the loser, paying for procedures whether or not they are helpful or necessary. Like Wanamaker with his advertisements, we want to know what works, and we want to pay for what works. We want a smarter system where treatments are designed to be effective on our individual biologies; where treatments are administered effectively; where our hospitals are used effectively; and where we pay for outcomes, not for procedures.

We’re on the verge of that new system now. We don’t have it yet, but we can see it around the corner. Ultra-cheap DNA sequencing in the doctor’s office, massive inexpensive computing power, the availability of EHRs to study whether treatments are effective even after the FDA trials are over, and improved techniques for analyzing data are the tools that will bring this new system about. The tools are here now; it’s up to us to put them into use.


June 21 2012

The state of Health Information Exchange in Massachusetts

I recently attended the Massachusetts Health Data Consortium's (MHDC) conference on Health Information Exchange (HIE), modestly titled "The Key to Integration and Accountability." Although I'm a health IT geek, I felt I needed help understanding life outside the electronic health record (EHR) world. So, I roped in Char Kasprzak, statistical data analyst at Massachusetts Health Quality Partners, to give me a better picture of the quality implications of HIE (and to help me write this post).

John Halamka, CIO of Caregroup/Beth Israel Deaconess Medical Center, took the stage first and blasted through all the progress being made in establishing the necessary frameworks for HIE in Massachusetts. The takeaway message from John's talk was that there have been many changes since September 2011 in the financial, technical, and legal structures involved in building health information exchange. The lessons learned from the initial pilot should enable Massachusetts to be ready for the first stage of statewide HIE.

HIE development in Massachusetts

Health care providers historically thought of HIE as a large institution run by a state or a major EHR vendor. It carried out the exchange of patient records in the crudest and most heavyweight way, by setting up one-to-one relationships with local hospitals and storing the records. (Some of the more sophisticated HIEs could link together hospitals instead, rather like Napster linked together end-users for file exchange.) These institutions still dominate, but HIE is now being used in a much broader sense, referring to the ability of institutions to share data with each other and even with patients over a variety of channels.

Despite the push for the health IT industry to use "HIE" as a verb rather than a noun, there was quite a lot of discussion at the event surrounding the structures and applications involved. Although HIE should be conceptually identified as a process (verb), having the structures and organizations (nouns) necessary to facilitate exchange is a challenge facing health care entities across the country. This conference did a good job of articulating these organizational challenges, and it presented clear plans on how Massachusetts is addressing them.

In Massachusetts, the model moving forward for phase one of HIE will be based on the Direct Project, with one central Health Information Service Provider (HISP) that will focus on PKI and S/MIME certificate management, maintaining a provider/entity directory, creating a web portal for those not ready for Direct, and maintaining an audit log of transactions. The concept of HISP was created in the Direct Project Implementation and Best Practices workgroups, and was designed to be an organizational and functional framework for the management of directed exchange between health care providers. The statewide HISP will consist of several existing HISP organizations, including Berkshire Health, Partners, Athena Health, and the New England Health Exchange Network. No small task, but not insurmountable.

I remain skeptical about the ability of providers and even hospitals to install EHRs capable of sending Direct-compliant messages conforming to the XDR/XDM IHE Profile for Direct Messaging. Not because it doesn't work or because it's some Herculean task, but because it hasn't been mandated. That may change, though, with the inclusion of Direct Messaging in the transport standards for Meaningful Use Stage 2. In Massachusetts, the creation of a health information highway (phase 1) is set to go live on October 15, 2012. Phase 2 will include analytics and population health, and Phase 3 is set to have search and retrieve, which will include a governance model for an Electronic Master Patient Index (EMPI) and Record Locator Service (RLS). Phases 2 and 3 will set a framework for querying patient data across entities, which is one of the biggest technical barriers to HIE. Currently, one of the best methods for this process is the Patient Identifier Cross-Referencing (PIX) profile, but few organizations are using this tool to its full potential.
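
For readers who haven't seen it, a Direct message is, at the transport level, ordinary secure email. Here is a minimal sketch in Python of handing one to a HISP; the addresses, hostname, and credentials are hypothetical, and the S/MIME signing and encryption that Direct requires are assumed to be applied by the HISP rather than shown here:

    # Bare-bones sketch: hand a Direct message with a CCD attachment to a HISP.
    # Addresses, host, and credentials are hypothetical; the HISP is assumed
    # to apply the S/MIME signing and encryption the Direct spec requires.
    import smtplib
    from email.mime.multipart import MIMEMultipart
    from email.mime.text import MIMEText
    from email.mime.application import MIMEApplication

    msg = MIMEMultipart()
    msg["From"] = "dr.jones@direct.examplepractice.org"
    msg["To"] = "dr.smith@direct.examplehospital.org"
    msg["Subject"] = "Referral: patient summary attached"
    msg.attach(MIMEText("Summary of care attached as a CCD document."))

    with open("patient_summary.xml", "rb") as f:
        ccd = MIMEApplication(f.read(), _subtype="xml")
    ccd.add_header("Content-Disposition", "attachment", filename="patient_summary.xml")
    msg.attach(ccd)

    # Hand the message to the HISP's edge SMTP service over TLS.
    with smtplib.SMTP("hisp.example.org", 587) as smtp:
        smtp.starttls()
        smtp.login("dr.jones", "app-password")  # hypothetical credentials
        smtp.send_message(msg)

The point of the sketch is how low the floor is: a practice that can send email can, in principle, send Direct messages once a HISP handles the certificate plumbing.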

What are the challenges?

When experts talk about exchanging health information, they tend to focus on the technology. Micky Tripathi, CEO and executive director of the Massachusetts eHealth Collaborative, pointed out at the event that the problem isn't the aggregation or analysis of data, but the recording of data during the documentation process. In my experience, this is quite accurate: Having exchange standards and the ability to analyze big data is useless if you don't capture the data in the first place, or capture it in a non-standard way. This was highlighted when the Massachusetts eHealth Collaborative ran the same reports on 44 quality measures, first using popHealth data, then again with Massachusetts eHealth Collaborative data, and received conflicting results for each measure. There are certainly lessons to be learned from this pilot about the importance of specifying numerators, denominators, vocabularies, and transmission templates.

Determining what to capture can be as important as how the data is captured. Natasha Khouri elaborated on the challenges of accurate data capture during her presentation on "Implementing Race and Ethnicity Data Collection in Massachusetts Hospitals — Not as Easy as It Sounds." In 2006, Massachusetts added three new fields and 33 categories to more accurately record race and ethnicity information. The purpose of this is to address health disparities, which is something I'm very excited to see discussed at a health IT conference.

With accurate data in hand, direct interventions in communities can be more targeted and effective. However, the largest barrier to this seems to have been getting providers to ask questions about race and ethnicity. This was due to high training costs, staff resistance, and workflow changes necessary for collecting the demographic data. This problem was particularly interesting to me, having worked with the Fenway Health Institute to craft their Meaningful Use Stage 2 comments regarding the inclusion of gender identity and sexual orientation in the demographics criteria. Recording accurate data on vulnerable populations is vital to improving public health campaigns.

What about patients?

For a conference with no patient speakers, there was a surprising amount of discussion about how patients will be involved in HIE and the impact EHRs have on patients. Dr. Lawrence Garber, who serves as the medical informatics director for Reliant Medical Group, examined issues of patient consent. The research he discussed showed that when given the choice, about 5% of patients will opt out of HIE, while 95% will opt in. When patients opt in at the entity/organizational level, this enables automated exchange between providers, entities, care teams, and patients. Organizations use a Data Use and Reciprocal Support Agreement (DURSA) to establish a trust framework for authenticating entities that exchange data (presumably for the benefit of patients). DURSAs will likely play an important role as organizations move toward Accountable Care Organization models of care.

Information exchange should also leave patients more satisfied with their medical visits: they will be able to spend more time talking to their doctors about current concerns instead of wasting time reviewing medical history from records that may be incomplete or inaccessible.

Dana Safran, VP of performance measurement and improvement at Blue Cross Blue Shield, explained at the conference that patients can expect better quality of care because quality improvement efforts start with being able to measure processes and outcomes. With HIE, it will be possible to get actual clinical data with which to enhance patient-reported outcome measures (PROMs) and really make them more reliable. Another topic that can be better measured with HIE is provider practice pattern variation. For example, identifying which providers are "outliers" in the number of tests they order, and showing them where they stand compared to their peers, can motivate them to more carefully consider whether each test is needed. Fewer unnecessary tests means cost savings for the whole system, including patients.

Toward the end of the conference, Dr. Nakhle A. Tarazi presented his Elliot M. Stone Intern Project, which examined the impact of EHRs on patient experience and satisfaction. The results were quite interesting, including:

  • 59% of patients noticed no change in time spent with their provider.
  • 65% of patients noticed no change in eye contact with their provider.
  • 67% of patients noticed no change in wait time in the office.

The sample size was small, only 50 patients, but the results certainly warrant a larger, more in-depth study.

In Massachusetts, it seems like the state of the HIE is strong. The next year should be quite exciting. By this time in 2013, we should have a statewide HISP and a web portal service that enables exchange between providers. Halamka has promised that on October 15 the walls between Massachusetts health care orgs will begin to come down. If it is successful in Massachusetts, it could be a valuable model for other states. We also have the opportunity to involve patients in the process, and I hope organizations such as The Society for Participatory Medicine and Direct Trust will be involved in making patients active partners in the exchange of health data.

OSCON 2012 Healthcare Track — The conjunction of open source and open data with health technology promises to improve creaking infrastructure and give greater control and engagement to patients. Learn more at OSCON 2012, being held July 16-20 in Portland, Oregon.

Save 20% on registration with the code RADAR


June 15 2012

Top Stories: June 11-15, 2012

Here's a look at the top stories published across O'Reilly sites this week.

A reduced but important future for desktop computing
Josh Marinacci says most people will rely on mobile devices to handle their computing needs, but a select and small group of power users will continue to use desktop machines.

Big ethics for big data
"Ethics of Big Data" authors Kord Davis and Doug Patterson explore ownership, anonymization, privacy, and ways to evaluate and establish ethical data practices within an organization.

Stories over spreadsheets
Imagine a future where clear language supplants spreadsheets. In a recent interview, Narrative Science CTO Kris Hammond explained how we might get there.


Data in use from public health to personal fitness
Releasing public data can't fix the health care system by itself, but it provides tools as well as a model for data sharing.


What is DevOps?
NoOps, DevOps — no matter what you call it, operations won't go away. Ops experts and development teams will jointly evolve to meet the challenges of delivering reliable software to customers.


Velocity 2012: Web Operations & Performance — The smartest minds in web operations and performance are coming together for the Velocity Conference, being held June 25-27 in Santa Clara, Calif. Save 20% on registration with the code RADAR20.

March 15 2012

Left and right and wrong

Sometimes I find a picture or a blog post that leaps off the screen at me and says "your readers must see this as it applies to health IT."

Normal Modes, a solid UX company based in Houston, sends me fairly good UX tips on a regular basis. The last one featured this photo (used with permission):

Parking lot picture from Normal Modes

Normal Modes points out, very clearly, that points of confusion like this are bad for users. They regard it as their job, as UX experts, to eliminate this kind of experience for users. Their analysis of how to do this is right on.

I have seen this kind of error in EHR systems and PHR systems on countless occasions. From an engineering perspective, it is really useful to take a moment and consider how something like this happens. First, you have two different "levels" of operation here. One is concerned with how traffic flows in the parking lot. The other is concerned with directions in the parking lot. For whatever reasons, these two "parking lot features" were implemented separately by people who had access to two different sets of resources. It stands to reason that the people who had access to white paint and stencils to make the sign on the right were the same people using stencils to mark the parking spots. It stands to reason that the people who had access to the professional sign-making system were somewhat removed from the people actually designing the parking lot.

In short, what you are seeing here is the artifact of a political and process disconnect. In health IT, there are constant political disconnects that cause similar issues. The EHR vendor is one political group, the insurance companies another, and the government is so large that it actually has multiple groups with different agendas. (HHS alone has so many subgroups that it's very difficult to completely follow what is happening.)

As enthusiastic as I am about the potential for meaningful use incentives, I think there will be lots of artifacts like this in EHRs that do not make much sense because the EHR vendor was pulled in a new direction by these incentives.

I have said in almost every talk about health IT I have ever given that the problems in health IT are political and not technical. I think it is my most tweeted quote. But sometimes a picture is worth a thousand words.

Meaningful Use and Beyond: A Guide for IT Staff in Health Care — Meaningful Use underlies a major federal incentives program for medical offices and hospitals that pays doctors and clinicians to move to electronic health records (EHR). This book is a Rosetta Stone for the IT implementer who wants to help organizations harness EHR systems.


March 12 2012

Parts of healthcare are moving to the cloud

Healthcare providers are increasingly required to do more with less. Regulations, HIPAA, Meaningful Use, recovery audit contractor (RAC) audits, and decreasing revenues are motivating providers to consider cloud computing as a way to cut costs, maintain quality, meet regulations, and increase productivity.

Some electronic health record (EHR) vendors now offer cloud-based solutions, an approach intended to help providers better manage the IT investments needed to support EHR implementations. And just as we've seen in other industries, there is an ongoing debate within healthcare as to the viability of cloud-based solutions, given the care needed for patient privacy and sensitive personal information.

Providers' trust in the public cloud is still relatively weak, but increasing numbers are considering using private clouds. However, EHR applications hosted in the cloud do seem to be gaining traction.

One example of a cloud-based EHR offering is CareCloud. My fellow Radar blogger Andy Oram wrote about them two years ago at HIMSS, and they have made significant progress since then. CareCloud creates apps that help medical professionals run their businesses. Those apps include a community collaboration and communication platform to securely share patient information, a medical practice management system for billing and scheduling, and a revenue cycle management service. CareCloud also provides electronic health records. It's built with Ruby on Rails, a highly abstracted web framework well suited to rapid development of web applications. CareCloud was a co-winner of the IBM Global Entrepreneur Silicon Valley SmartCamp competition in 2010 (see video below).

I ran into the folks at CareCloud at the HIMSS 2012 conference and was impressed with both their use of open source and their strategy on leveraging the cloud in healthcare. Mike Cuesta, CareCloud's director of marketing and user experience, defined CareCloud's strategy as one of future survival.

"Being able to deliver the product across platforms is crucial," Cuesta said. "In healthcare there is a glaring lack of modern web apps. What we wanted to do was create an elegant and user-friendly application that is accessible anywhere. Companies have to be able to deliver a desktop-class experience that works across platforms."

CareCloud relies on open source. "I had my eyes opened to open source about eight years ago when I was looking for a project management system," said CareCloud CTO Tom Packert. "I discovered I could use something like dotproject, which is a GPL-licensed PHP-MySQL web-based project management application. It only took us a day to put it up on SUSE Linux and we didn't need SQL seat licenses. Open source allows you to scale horizontally. It's not as scary as a lot of people think it is."

Another EHR in the cloud is athenahealth. Athenahealth co-founders Todd Park, the new U.S. chief technology officer (CTO), and Jonathan Bush purchased a birthing practice in 1997. Soon, like most medical practices, they were buried in paper and spent most of their resources trying to get paid. Searching for innovative solutions led them to create their own software. Enlisting the help of Todd's younger brother Ed, a software developer, they created an EHR and financial revenue cycle system with a rules engine driven by dynamic billing rules data. I met Ed Park at HIMSS when I remarked that he looked a lot like Todd Park, and Jonathan introduced him to me as Todd's "younger, smarter, and much better looking brother." Apparently his programming skills are paying off ...

This year, athenahealth was named to the TR50, Technology Review's third annual list of the world's most innovative technology companies. At this year's HIMSS conference, athenahealth showed the company's plans for an iPhone app that will give its EHR users access to certain features of its athenaClinicals cloud-based platform. An iPad version of the web-based athenahealth EHR app is also currently under development and set to launch in 2013.

Being based on cloud technology makes athenahealth much more nimble in launching mobile products and services. In the video below, I discuss with Jonathan Bush how athenahealth is using the cloud in its EHR.

(Thanks to Nate DeNiro and Open Affairs Television for their assistance with this video.)


February 24 2012

The Direct Project in action

The Direct Project is all over HIMSS12, and really all over the country now. But it still carries controversy. When I found out that one of the Houston Health Information Exchange efforts had successfully launched a Direct pilot, I simply had to do an interview. After all, here was software that I had contributed to as an open source project that was being deployed in my own backyard.

Jim Langabeer is the CEO of the newly renamed Greater Houston Healthconnect. I caught up with Jim at Starbucks and peppered him with questions about where Health Information Exchange (HIE) is going and what HIE looks like in Houston.

What's your background?

Jim Langabeer: I have been in healthcare for a long time in the Texas Medical Center.  I started as a hospital administrator at UTMB, where my first project was to work on an IT project team developing a Human Resources Management System. It was a collaborative effort between three hospitals.

I recently led a software company in the business intelligence space, which was later acquired by Oracle. After that, I decided I wanted to come back to Houston and continue to work in healthcare, so I returned to work for MD Anderson leading project and performance management. I eventually worked with the CIO Lynn Vogel to assess the business value of information systems. I most recently taught healthcare administration at the UT School of Public Health.

Throughout my healthcare career, I have been using data to drive healthcare decisions. My PhD is in decision sciences — quantitative modeling of data for decision-making — and my research grants have all involved analyzing large datasets to make healthcare decisions better. I have also worked between organizations in a collaborative manner. Health Information Exchange was an obvious next step for me.

Where are you in the process of creating a health information exchange?

Jim Langabeer: We are in the middle stage of operations. We are finalizing our architectural vision and choosing vendors. Most importantly, we have strong community support: 41% of the doctors in the region have committed to the exchange with letters of support, as have 61 of the 117 local hospitals.

We are meeting with all of the doctors we can. We are calling them and faxing them and visiting them, with one simple message: Health IT is coming and we want you to participate.


You mentioned an "architectural vision." Can you expand on that?

Jim Langabeer: We really cannot have just one architecture, so our architectural vision really means choosing several protocols and architectures to support the various needs of our stakeholders in parallel. We need to accommodate the entire range of transactions that our physicians and hospitals perform. The numbers say that 50% of Houston docs work in small practices with only one or two doctors, and they typically do not have electronic health records (EHR). Hooking these doctors into a central hub model does not make sense, so a different model where they can use browser/view capabilities and direct connections between providers must be part of our architectural vision.

Houston also has several large hospitals using EPIC or other mature EHR systems. That means we need a range of solutions. Some docs just want to be able to share records. Some are more sophisticated and want to do full EHR linking. Some doctors just want to be able to view data on the exchange using a web portal.

We want to accommodate all of these requests. That means we want a portfolio of products and a flexible overall architectural vision. Practically, that means we will be supporting Direct, IHE, and also older HL7 v2.

Some people are saying Direct is all we want. We do not want a solution that is way over what small providers can handle and then it never gets used. We are architecture- and vendor-neutral, which can be difficult because EPIC is so prevalent in Houston.

We have practices that are still on paper on one hand and very sophisticated hospitals on the other, and that is just in the central Houston area. Immediately outside of Houston, lots of rural hospitals that we plan to support have older EHR systems or home-grown systems. That means we have to work with just about every potential health IT situation and still provide value.

Recently, a JAMIA perspectives article criticized Direct as a threat to non-profit HIE efforts like yours. Do you feel that Direct is a threat?

Jim Langabeer: I do not see Direct as a threat. I hear that from lots of sources, that Direct is a distraction for health information exchanges. I disagree.

I see it as another offering. The market is obviously responding to Direct. The price point on the software for Direct is definitely a benefit to smaller docs. We see it as a parallel path.

We do not see Surescripts (which is offering Direct email addresses to doctors with the AAFP) as a threat because we see them as a collaborator. We want them, and similar companies, to be part of our network. We are also having conversations with insurance companies and others who are not typically involved in health information exchanges because we are looking for partners.

The problem in healthcare is that it has always been very fragmented; no single solution gets much penetration. So, as we consider different protocols, we have to go with what people are asking for and what is already being adopted. We have to get to a point where these technologies have a very high penetration rate.

How are you narrowing your health IT vendors?

Jim Langabeer: What we want is a vendor that is going to be with us long term, sharing our risks and making sure we are successful. The sustainability of the vendor is connected to the sustainability of our exchange, so that is really important. Our 20-county region represents 6.4 million people, and that population is larger than most states that are pursuing exchanges. Not many vendors have experience on that scale.

How important is the software licensing? Do open source vendors have an advantage?

Jim Langabeer: I am not sure they have an advantage. Of course, open source is ideal, but often proprietary vendors are ahead in terms of features. A mix in the long-term solution would be really cool.

How will you work with outside EHR vendors?


Jim Langabeer: We are trying to engage at the CIO level. We're trying to understand what solutions, standards, and data they want to share. There is a core set of things everyone needs to do. Beyond that core, some people want to go with SOA; other people really want IHE or Direct. There is not much data sharing between hospitals. That is why industry standards are so important to us: they help us shorten those discussions and make a narrower offering. So, we are focusing on protocols as a means to work with the various EHR vendors.

One CIO told us, "We do not want to exchange data at all; we just want our doctors to be able to open a browser and see your data." We may not like to hear that, but that is the reality for many organizations in Houston.

The other thing that is unique about Houston is that you are not going to see the state of Texas taking a dictatorial role. In other large exchanges, you often have a state-level government dictating HIE. In that environment, it is easier to insist on specific standards. That is not our situation in Houston, so we have to meet our constituents where they are.

I have been frustrated that the Direct Project reference implementations only come in Java and .NET at this point. I would like to see implementations in PHP, Python, Ruby, etc. — languages that are more popular with entrepreneurs. Are you concerned with issues like that?

Jim Langabeer: We're definitely thinking about things like that. We do not want to be merely business-to-business — we want to offer services to consumers. So, we care about the technology becoming accessible to consumers, which means getting to iPhones. We want to be able to offer consumers tools that will bring them value, so we certainly care about issues like implementation language because we see those issues as connected.

If I let you dictate which Houston clinic or hospital I go to, when can I go see a doctor and get my patient data sent to my HealthVault or other PHR Direct account?

Jim Langabeer: I would hope that the technology would be ready by the end of the year. What I envision is a core group of early adopters. We already have several hospitals and some physician groups that are interested in taking that role.

This interview was edited and condensed.


February 23 2012

Direct Project will be required in the next version of Meaningful Use

The Office of the National Coordinator for Health Information Technology (ONC) announced that the Direct Project would be required in stage 2 of Meaningful Use.

As usual, the outside world knew almost instantly because of Twitter. Nearly simultaneous posts came from @ahier (Brian Ahier) and @techydoc (Steven Waldren, MD). More information followed shortly after from @amalec (Arien Malec), a former leader of the Direct Project.


There are some other important announcements ahead of the official release, such as the end of support for CCR, but this requirement element has the deepest implications. This is jaw-dropping news! Meaningful Use is the standard by which all doctors and hospitals receive money for Electronic Health Record (EHR) systems from the federal government. In fact, the term "Electronic Health Record" is really just a synonym for "meaningful use software" (at least in the U.S. market). Meaningful Use is at the heart of what health IT will look like in the United States over the coming decades.

The Direct Project has a simple but ambitious goal: to replace the fax machine as the point-to-point communications tool for healthcare. That goal depends on adoption and nothing spurs adoption like a mandate. Every Health Information Exchange (HIE) in the country is going to be retooling as the result of this news. Some of them will be totally changing directions.



This mandate will make the Direct Project into the first Health Internet platform. Every doctor in the country will eventually use this technology to communicate. Given the way that healthcare is financed in the U.S., it is reasonable to say that doctors will either have a Direct email address to communicate with other doctors and their patients in a few years, or they will probably retire from the practice of medicine.

It was this potential, to be the first reliable communications platform for healthcare information, that has caused me to invest so heavily in this project. This is why I contributed so much time to the Direct Project Security and Trust Working Group when the Direct Protocol was just forming. This is an Open Source project that can still use your help.



The Direct Project is extensively covered in "Meaningful Use and Beyond" (chapter 11 is on interoperability). I wrote about the advantages of the Direct Project architecture. I helped arrange talks about Direct at OSCON in 2010, and in 2011, I gave an OSCON keynote about the Health Internet, which featured Direct. I wrote a commentary for the Journal of Participatory Medicine about how accuracy is more important than privacy for healthcare records and how to use the Direct Project to achieve that accuracy. I pointed out that the last significant impact from Google Health would be to make Direct more important. I am certainly not the only person at O'Reilly who has recognized the significance of the Direct Project, but I am one of the most vocal and consistent advocates of the Direct Project technology approach. So you can see why I think this is a big announcement.

Of course, we will not know for sure exactly what has been mandated by the new revisions of Meaningful Use, but it is apparent that this is a huge victory for those of us who have really invested in this effort. My hat is off to Sean Nolan and Umesh Madan from Microsoft, to Brian Behlendorf and Arien Malec, who were both at ONC during the birth of Direct, to Dr. David Kibbe, Brett Peterson, and John Moehrke. There are countless others who have contributed to the Direct Project, but these few are the ones who had to tolerate contributing with me, which, I can assure you, is above and beyond the call of duty.

Obviously, we will be updating "Meaningful Use and Beyond" to include this new requirement as well as the other changes to the next version of Meaningful Use (which apparently will no longer be called "stage 2"). Most of the book will not change, however, since it focuses on covering what you need to know in order to understand the requirements at all. While the requirements will become more stringent as time goes on, the core health IT concepts needed to understand them will not change that much. However, I recommend that you get a digital copy of the book directly through O'Reilly, because doing so entitles you to future versions of the book for free. You can get today's version and know we will update your digital edition with the arrival of subsequent versions of the Meaningful Use standard.



I wonder what other changes will be in store in the new requirements. ONC keeps promising to release the new rule "tomorrow." Once the new rules emerge, they will be devoured instantly, and you can expect to read more about the new standards here. The new rule will be subject to a 60-day comment period. It will be interesting to see if the most dramatic aspects of the rule survive this commentary. Supporters of CCR will be deeply upset, and there are many entrenched EHR players who would rather not support Direct. Time will tell if this is truly a mandate, or merely a strong suggestion.



January 09 2012

Are EHRs safe?

Are electronic health records (EHR) safe?

No.

EHRs are not safe. They are fundamentally and irreparably dangerous even during normal use.

EHRs will kill people.

Lots of people.

EHRs have been killing people for years. They will kill even more people as they become more popular and available.

Take a deep breath and get comfortable with the notion that healthcare computer systems can and will kill people. If it's any consolation, none of the people that EHRs will kill would have gotten out alive. As it turns out, everyone gets to have a "cause of death" in the end.

As one of the new "health IT" writers at O'Reilly, I feel that I should be totally up front about this: I am promoting, installing, supporting and programming software that will kill people. Frequently.

Happily, the software that I promote, install, support, and program will also save lives. On balance, it will save thousands of people for each life it takes.

The healthcare system is already a dangerous place. The classic evidence for this was presented in the 1999 Institute of Medicine (IOM) report "To Err is Human," which found that the healthcare system was killing about as many people each year as the highway system. EHR systems could do a huge amount of good for the healthcare system as a whole while still being responsible for tens of thousands of deaths each year.

In fact, let's replace "EHRs" with "cars."

Are cars safe?

No.

Cars are not safe. They are fundamentally and irreparably dangerous even during normal use.

Cars will kill people.

Lots of people.

Cars have been killing people for years. They will kill even more people as they become more popular and available.

All of a sudden, the same kind of dramatic talk sounds pretty tame. We also know that cars, on balance, save more lives than they kill (just the ambulances alone ...).

EHRs are a fundamental technology, one that will become a pervasive part of our own healthcare, and therefore, our lives. Saying that they will "kill people" is both true and irrelevant. It's like saying "hospitals kill people" or "dogs kill people" or "doctors kill people." So what? On balance, we need hospitals, dogs, doctors, cars, and you guessed it ... EHRs.

Of course, we need to do everything we can to make EHRs safer. EHR safety will improve with time, just like cars. Safety in the auto industry is a pretty good analogy for the health IT industry. You can look forward to further posts that extend, explain and abuse the analogy.

But I hope that this post will give you a little insight into the recent results from the IOM about the safety of health IT systems (there's an excellent overview here), and why industry defenders like H. Stephen Lieber, the president of the Healthcare Information and Management Systems Society (HIMSS), reacted with sputtering defenses of health IT. (Another excellent summary is here.)

Lieber's defense is laudable, but I think it's a little strained. I really wish the health IT industry would stop trying to put lipstick on this pig. The new IOM report aimed at health IT is just as pointed as "To Err is Human." Let's not imagine that we can dodge this bullet.

We will be killing people accidentally with healthcare software over the next few years. That really sucks, but it's worth it. So, let's all take a deep breath and focus on the problem. Hysterics don't help; pointing fingers and getting hysterical have already slowed down the privacy discussion. What helps is openness, honesty, transparency, and a willingness to admit it when bad mistakes happen.

Are EHRs safe? Not in the least. But they're safer than doing nothing. They are safer than paper. You can tweet me on that.


October 21 2011

Why geeks should care about meaningful use and ACOs

Healthcare reform pairs two basic concepts:

  • Change incentives: lower costs by paying less for "better" care not "more" care
  • Use software to measure whether you are getting "better" care

These issues are deeply connected and mostly worthless independently. This is why all geeks should really care about meaningful use, which is the new regulatory framework from the Office of the National Coordinator for Health Information Technology (or ONC for short) that determines just how doctors will get paid for using electronic health records (EHR).

The clinical people in this country tend to focus on meaningful use incentives as "how do I get paid to install an EHR" rather than seeing them as deeply connected to the whole process of healthcare reform. But any geek can quickly see the bottom line: all of the other healthcare reform efforts are pointless unless we can get the measurement issue right.

Health economists can and do go on and on about whether the "individual mandate" will be effective. Constitutional law experts fret about whether the U.S. federal government should be able to force people to purchase insurance. We are all concerned about issues like the coverage of pre-existing conditions. Hell, I am certainly in the 99%.

Make no mistake, the core problem with healthcare in the United States is that costs are out of control. Under the current system, absent better health information technology, any kind of major system change — like the individual mandate — will simply assure that you get lots more of what you already have. That would be a disaster.

The only way to make healthcare in the U.S. both better and cheaper is to use health information technology. I recently was able to have a whiteboard session with Dr. Farzad Mostashari, and he drew out his view of the whole reform system. It was nice to be able to have such an intimate explanation, but I can think of nothing that he told me that he does not also say in his frequent public appearances (he was awesome at Health 2.0). He talked about this issue as one of "levers." His point was simple: pulling one lever alone does nothing.

One of the levers on his whiteboard was something called Accountable Care Organizations (ACOs), which is a term that any technologist who cares about government or healthcare needs to get familiar with. The ACO is a new twist on capitation. The idea is simple: let's pay doctors for keeping people healthy rather than paying them to treat the sick. But capitation has a bad name in the U.S. because of its abuse by Health Maintenance Organizations (HMOs).

The only differences between an HMO and an ACO are the quality of data systems they will be required to use and the level of detail they will be required to report as a result. You might think of an ACO as the organizational vehicle that healthcare reform will move forward in.

With all of that context, technologists can now intelligently read news regarding the changes in meaningful use requirements for ACOs. For those not wishing to delve further, the news is pretty basic: the rules for ACOs around meaningful use have been made a little easier in the final ACO rule.

The final rule gives more time for ACOs to achieve meaningful use in some cases, and that is generally a good thing. Meaningful use seems simple to technologists, but the real-world rural medical practices and small offices that will need to implement it have very inconsistent computer skills. One of the most important issues for meaningful use is to go at the right speed — and for the most part, that should be as fast as possible ... but no faster. Don Berwick (a legend in patient safety circles) explained that the final ACO rule relaxed the meaningful use requirements in response to a "mountain" of comments.

Generally, this is another example of consistently reasonable policy decisions coming from the meaningful use team at ONC. I grew up Republican/Libertarian/Texan and so it seems pretty strange to admit this, but the meaningful use regulations are good government. It is a core component (the geek component) of healthcare reform, and that healthcare reform will be painful. There is just no way around it.

As geeks, we can all call our local congressional representatives and say "this meaningful use thing seems to be going OK."

I'm pretty sure that's not a call they get a lot.


October 20 2011

OSEHRA's first challenge: VistA version control

As I mentioned previously, the VA is creating a formal open-source project and governing body for its VistA electronic health record (EHR) system. VistA was developed internally by the VA in a collaborative, open-source fashion, but it has essentially been open source only within the VA, with no attention to, or collaboration with, VistA users outside the VA.

There are several challenges that the Open Source Electronic Health Record Agent (OSEHRA) faces, and many of them sit squarely between technical and political issues.

The first question any open-source project must answer is: "Who can commit?" That's almost immediately followed by: "How do we decide who can commit?" But for OSEHRA to manage VistA, it must first ask: "How do we commit?" There is no version control system that currently works with VA VistA.

VistA is famously resistant to being managed via version control systems. The reason? Over the course of development of VistA, which predates most modern version control systems, updating was handled by a tool developed specifically for VistA. That system is called KIDS.

The problem with KIDS, and managing version control generally in VistA, is that MUMPS, the language/database that VistA is programmed in, happily lends itself to "source code in the database." MUMPS is a merged database and programming language, and is an intellectual predecessor to modern NoSQL databases. MUMPS is a dominant force in healthcare, and despite comments to the contrary by its numerous critics, it has features that are ideal for healthcare. (This is another excellent place to shamelessly plug "Meaningful Use and Beyond," which discusses the comprehensive use of MUMPS in healthcare.)

To understand the issue with MUMPS and VistA you must:

  • Imagine developing an application using Node.js and MongoDB.
  • Imagine that Node.js and MongoDB are one project.
  • Imagine doing that for 30 years.

Given that in MUMPS the programming language is the database and the database is the programming language, certain problematic historical decisions were made. First, MUMPS' executable code is sometimes included inside the MUMPS database. That makes it pretty impossible to map out what the source code is doing separately from what the database is doing.

Overlay the fact that the KIDS update system (think YUM for VistA) would update both the source code "on disk" at the same time as updating the system "in memory." KIDS is like source code patches and system updates all rolled together.

OSEHRA must find a way to reconcile the KIDS process with a version control system. Without that, there is no way to reconcile a VistA development environment with a given production environment.

Without a modern distributed version control system (DVCS) powering VistA development, OSEHRA will be unable to run VistA development as a meritocracy. A DVCS is what enables multiple parties to participate as "core VistA developers."

Without a DVCS, the VA had to rely on a process based on "classes" of software. Class I software was released by the national VA office and was the core of the EHR installed at each VA hospital. Class II was software that was used regionally or at several hospitals. Class III software was used locally at one VA hospital.

VistA programmers at each hospital were required to reconcile Class I patches from the national office (which came as KIDS patches) with local instances that included Class III software. Each KIDS patch had the potential for requiring a local hospital programming effort in order to reconcile with local software changes.

With a DVCS, multiple parties could develop "core" changes to VistA, freeing the VA from the top-down VistA development management that prevents outsiders from contributing back. Remember, one of the largest "outsiders" to the VA is the Resource and Patient Management System (RPMS), which is a VistA fork extensively developed and improved by another federal agency, the Indian Health Service (IHS).

The KIDS- and Class-based software deployment model essentially made interagency cooperation between IHS and the VA impossible. There was no way for another "master" copy of the EHR to exist in cooperation.

To solve this problem, OSEHRA is betting heavily on Git and creating a project called SKIDS. From that project's page:

This project will design and implement "Source KIDS" in order to represent VistA software in a source tree suitable for use with modern version control tools. Once deployed, SKIDS will make it possible to exchange source code and globals between VistA instances and Git repositories.

"Globals" in MUMPS means "the database," which is different than most programming languages. With that explained, you can see that this project is intended to address exactly the problem that I have just outlined.

There is no guarantee that OSEHRA will be successful. Not everyone in the VA bureaucracy is keen on the notion of truly open-source VistA. Those of us in the VistA community outside the VA are apprehensive. We want to believe that OSEHRA is for real and will actually become the true seat of VistA development that is friendly to outsiders like us and run as a transparent open-source project. The single most important thing that OSEHRA can do to ensure the success of a truly open-source VistA process is to tackle and handle these challenges well.

The fact that OSEHRA is addressing these problems head-on is a very good sign. The emphasis on Git has been obvious from early days. The new SKIDS project indicates that there is a distinct lack of pointy-haired bosses at OSEHRA. They might be biting off more than they can chew, but there is no confusion about identifying the problems that need solving.

There are other major challenges facing OSEHRA, but almost all of them are manageable if a collaborative development process is created that lets the best ideas about VistA bubble to the top. Most open-source projects simply take for granted the transparency afforded by a version control system, and almost all theory about how to manage a project well is based on the assumption of that transparency. So, this problem really lives "underneath" all of the other potential issues that OSEHRA will face. If it can get this one right, or mostly right — or even "leastly" wrong — it will be a long way toward solving all of the other issues.

Meaningful Use and Beyond: A Guide for IT Staff in Health Care — Meaningful Use underlies a major federal incentives program for medical offices and hospitals that pays doctors and clinicians to move to electronic health records (EHR). This book is a rosetta stone for the IT implementer who wants to help organizations harness EHR systems.

October 19 2011

OSEHRA and the future of VA VistA

Apache Web Server, GNU/Linux Operating System, MySQL Database, Mozilla's Firefox Browser.

All pillars of the open-source community.

Each of these deserves its eminent position as a venerated project. Each has changed the world, and not a little. Moreover, they are the projects that spring to mind when we seek to justify the brilliance of the open-source licensing and development models.

But if this is intended to be a list of the highest-impact and most significant open-source projects, there is one missing.

VA VistA.

VA VistA is arguably the best electronic health record (EHR) in existence. It was developed over the course of several decades by federal employees in a collaborative, open-source fashion. Because it was developed by the U.S. government, it is in the public domain and available under the Freedom of Information Act (FOIA). VistA has served as the basis for several open-source and proprietary products.

Anyone with expertise in health IT knows about VistA, but few in the open-source community are aware of the project. This is tragic, because it really is one of the most important examples of an open-source system anywhere. Why? Because the Department of Veterans Affairs (VA) has been able to use VistA to deliver a system of high-quality healthcare.

That word "system" is important. You will get excellent healthcare at Mayo, Harvard, Cleveland Clinic, etc., but it is difficult to make a "system" that consistently delivers excellent healthcare. The VA has done that, and VistA is basically the health IT "operating system" that has made it possible. (I wrote and maintain the WorldVistA "What is VistA Really?" page if you would like further context on the system.)

Sounds amazing, right? And it is. But there's a problem: VistA has been rotting. Development has largely stagnated over the last two decades.

This stagnation is mostly due to VistA's institutionalization at the VA. VistA was not developed as an "approved" project. It was developed as a kind of rebellion against the backward software that was available at the time and a rebellion against the backward ideas held by VA bureaucracy. This rebellion was called the "underground railroad" among VistA insiders.

Once the VA approved the project and started managing the software development using top-down practices, everything slowed to a crawl. Imagine the bazaar being "blessed" and moved into the cathedral.

Several outsiders in open-source health IT have been advocating for the VA to return VistA to its open-source roots. I wrote a proposal to make VistA truly open source, and Rick Marshall blogs about this almost exclusively.

Recently, a new, enlightened leadership at the VA has decided that taking the open-source route is precisely what VA VistA needs. The result is the Open Source Electronic Health Record Agent, or OSEHRA.

OSEHRA faces tremendous challenges. VistA uses a technical stack that makes typical project management very difficult, and there are several thorny political issues involved. But if this transition is successful, it could truly be a revolution for health IT.

If you care about healthcare software and open source, participating in OSEHRA is worth your while.

Meaningful Use and Beyond: A Guide for IT Staff in Health Care — Meaningful Use underlies a major federal incentive program for medical offices and hospitals that pays doctors and clinicians to move to electronic health records (EHR). This book is a Rosetta Stone for the IT implementer who wants to help organizations harness EHR systems.

July 29 2011

Open source alchemy: Health care and Alembic at OSCON

At last year's OSCON I spoke with David Riley, Brian Behlendorf and Arien Malec about how open source solutions can help improve our health care system. A lot has happened in the past year, both with the Direct Project and the efforts to build a Nationwide Health Information Network.

This year David Riley gave an update on Aurion (developed from the CONNECT codebase), a major project of the Alembic Foundation. Alembic was founded by David, formerly the CONNECT initiative lead for the Federal Health Architecture (FHA), and Brian Behlendorf, the chief technology officer for the World Economic Forum. David's presentation focused on the Aurion project's relationship to CONNECT and gave us a sense of where the project is heading. I spoke with Brian and David during OSCON, and in this first clip they discuss the mission and goals of the Alembic Foundation:

In this next clip they speak about how their efforts are related to the broader work on health information exchange, and specifically how Aurion will support the Direct Project:

In this final clip, prompted by an earlier question from Fred Trotter, they explore possibilities for Alembic to work on open source electronic health records like the Veterans Health Information Systems and Technology Architecture (VistA):

Strata Conference New York 2011, being held Sept. 22-23, covers the latest and best tools and technologies for data science: from gathering, cleaning, analyzing, and storing data to communicating data intelligence effectively.

Save 20% on registration with the code STN11RAD



July 06 2011

OSCon preview: Shahid N. Shah on medical devices and open source

I talked recently with Shahid N. Shah, who is speaking in the health care track at the O'Reilly Open Source Convention later this month about "The Implications of Open Source Technologies in Safety-critical Medical Device Platforms." Shahid and I discussed:

Podcast (MP3)

  • Why the data generated from medical devices is particularly reliable patient-related information, and its value for improving treatment

  • The value of connecting these devices to electronic health records, and the kinds of research this enables

  • The role of open source software in making it easier for device manufacturers to add connectivity, and to get it approved by the FDA

  • How it's time for regulators such as the Department of Health and Human Services to take a look at how devices can contribute to better health care

Another OSCon health-care-related posting is my video interview with Daniel Haas about the Indivo X personal health record.
