
February 21 2013

VA looks to apply innovation to better care and service for veterans

There are few areas as emblematic of a nation’s values as how it treats the veterans of its wars. As improved battlefield care keeps more soldiers alive through injuries that would have been lethal in past wars, more grievously injured veterans survive to come home to the United States.

Upon return, however, the newest veterans face many of the challenges that previous generations have encountered, ranging from re-entering the civilian workforce to rehabilitating broken bodies and treating traumatic brain injuries. As they come home, they are encumbered by more than scars and memories. Their war records are missing. When they apply for benefits, they’re added to a growing backlog of claims at the Department of Veterans Affairs (VA). And even as the raw number of claims grows to nearly 900,000, the average time to process them is also rising. According to Aaron Glantz of the Center for Investigative Reporting, veterans now wait an average of 272 days for their claims to be processed, with some dying in the interim.

While new teams and technologies are being deployed to help with the backlog, a recent report (PDF) from the Office of the Inspector General of the Department of Veterans Affairs found that new software deployed around the country, designed to help reduce the backlog, was actually adding to it. While high error rates, disorganization and mishandled claims may be traced to issues with training and implementation of the new systems, the transition from paper-based records to a digital system is proving to be difficult and deeply painful for veterans and families applying for benefits. As Andrew McAfee bluntly put it more than two years ago, these kinds of bureaucratic issues aren’t just a problem to be fixed: “they’re a moral stain on the country.”

Given that context, the launch of a new VA innovation center today takes on a different meaning. The scale and gravity of the problems that the VA faces demand true innovation: new ideas, technology or methodologies that challenge and improve upon existing processes and systems, improving the lives of people or the function of the society that they live within.

“When we set out in 2010 to knowingly adopt the ‘I word’, we did so with the full knowledge that there had to be something there,” said Jonah J. Czerwinski, senior advisor to VA Secretary Eric Shinseki and director of the VA Innovation Initiative, in a recent interview. “We chose to define value around four measurable attributes that mean something to taxpayers, veterans, Congressional delegations and staff: access, quality, cost control and customer satisfaction. The hard part was making it real. We focused for the first year on creating a foundation for what we knew had to justify its own existence, including identifying problem areas.”

The new VA Center for Innovation (VACI) is the descendant of the VA’s Innovation Initiative (VAi2), which was launched in 2010. Along with the VACI, the VA announced that it would adopt an innovation fellows program, following the successful example set by the White House, Department of Health and Human Services and the Consumer Financial Protection Bureau, and bring in an “entrepreneur-in-residence.” The new VACI will back 13 new projects from an industry competition, including improvements to prosthetics, automated sterilization, the Blue Button and cochlear implants. The VA also released a report on the VACI’s progress to date.

“We’re delving into new ways of providing audiology at great distances,” said Czerwinski, “delivering video into the home cheaply, with on-demand care, and the first wearable automatic kidney. Skeptics can judge any innovation endeavor by different measures. The question is whether at the end of the cycle if it’s still relevant.”

The rest of my interview with Czerwinski follows, slightly edited for clarity and content.

Why launch an “innovation center?”

Jonah J. Czerwinski: When we started VAi2, our intent was delving into the projects the secretary charged us with achieving. The secretary has big goals: eliminate homelessness, eliminate backlog, increase access to care.

It’s not enough for an organization to create a VC fund. It’s the way in which we structure ourselves and find compelling new ways of solving problems. We had more ways to do that. The reason why we have a center for innovation is not because we need to start innovating — we have been innovating for decades, at local levels. We’ve been disaggregated in different ways. We may accomplish objectives but the organization as a whole may not benefit.

We have a cultural mission with the center that’s a little more subtle. It’s not just about funding different areas. It’s about changing from a culture where people are incented to manage problems in perpetuity to one in which people are incented to solve problems. It’s not enough to reduce backlog by a percentage point or the number of re-admissions with an infection. How do you reward someone for eliminating something wholesale?

We want our workforce to be part of that objective, to be part of coming up with those ideas. The innovation competition that started in 2009 led to 75 ideas to solve problems. We have projects in almost every state now.

How will innovation help with the claims backlog?

Jonah J. Czerwinski: It’s complicated. Tech, laws, people factors, process factors, preferences by parts of interest groups all combine to make this hard. We hear different answers, depending upon the state. The variation is frustrating because it seems unfair. There are process improvements that you can’t solve from a central office. It can’t be solved simply by creating a new claims process. We can’t hire people to do this for us. It is inherently a governmental duty.

We’ve started to wrestle with automation, end-to-end. We have a Fast Track Initiative, where we’re asking how would you take a process, starting with a veteran, and end up with a decision. The insurance industry does this. We’ve hired a company to create the first end-to-end claims process as a prototype. It works enough that it created a new definition for what’s in the realm of the possible. It’s created permission to start revisiting the rules. There’s going to be a better way to automate the claims process.

What’s changed for veterans because of the “Blue Button?”

Jonah J. Czerwinski: There’s a use case where veterans receive care from both the VA and private sector hospitals. That happens about half the time. A non-VA hospital doesn’t have VistA, our EHR [electronic health record]. If a patient goes there for care, like for an ER visit during a weekend because of congestive heart failure, doctors don’t have the information that we know about the patient at the VA. We can provide it for them without interoperability issues. That’s one direction. It’s also a way to create transparency in quality of care, if the hospital has visibility into your health care status.

In terms of continuity of care, when that veteran comes back to a VA hospital, the techs don’t have visibility into what happened at the other hospital. A veteran can download clinical information and bring that back. We now have a level of care between the public and private sector you never had before.

February 06 2013

Have an idea for a health care startup?

I sit down now and then with Roy Rosin at the East Coast hub of health care business networking, the Gryphon Cafe in Wayne, PA. (I’m saying that only slightly tongue in cheek.) Roy was the long-time Chief Innovation Officer at Intuit and now holds that role with the University of Pennsylvania Health System. Our conversations tend to be wide ranging, but this morning he let me know that he’s been working on a partnership between Penn Medicine and Independence Blue Cross to fund a health care incubator with DreamIt Ventures in Philadelphia.

If you are working on a health-related startup, this is worth your time because the incubator is being funded by the largest provider and payer in the region. This will give your startup access to both sides of the payer/provider equation in a meaningful way (the aforementioned “unfair advantage”). The application deadline is coming up fast on February 8. Details can be found here.

November 27 2012

Printing ourselves

Tim O’Reilly recently asked me and some other colleagues which technology seems most like magic to us. There was a thoughtful pause as we each considered the amazing innovations we read about and interact with every day.

I didn’t have to think for long. To me, the thing that seems most like magic isn’t Siri or self-driving cars or augmented reality displays. It’s 3D printing.

My reasons are different from what you might think. Yes, it’s amazing that, with very little skill, we can manufacture complex objects in our homes and workshops that are made from things like plastic or wood or chocolate or even titanium. This seems an amazing act of conjuring that, just a short time ago, would have been difficult to imagine outside of the “Star Trek” set.

But the thing that makes 3D printing really special is the magic it allows us to perform: the technology is capable of making us more human.

I recently had the opportunity to lay out this idea in an Ignite talk at Strata Rx, a new conference on data science and health care that I chaired with Colin Hill. Here’s the talk I gave there (don’t worry: like all Ignite talks, it’s only five minutes long).

In addition to the applications mentioned in my talk, there are even more amazing accomplishments just over the horizon. Dr. Anthony Atala, of the Wake Forest University School of Medicine, recently printed a human kidney onstage at TED.

This was not actually a working kidney — one of the challenges to creating working organs is building blood vessels that can provide cells on the inside of the organ structure with nutrients; right now, the cells inside larger structures tend to die rapidly. But researchers at MIT and the University of Pennsylvania are experimenting with printing these vessel networks in sugar. Cells can be grown around the networks, and then the sugar can be dissolved, leaving a void through which blood could flow. As printer resolution improves, these networks can become finer.

And 3D printing becomes even more powerful when combined with other technologies. For example, researchers at the Wake Forest Institute of Regenerative Medicine are using a hybrid 3D printing/electrospinning technique to print replacement cartilage.

As practiced by Bespoke Innovations, the WREX team, and others, 3D printing requires a very advanced and carefully honed skillset; it is not yet within reach of the average DIYer. But what is so amazing — what makes it magic — is that when used in these ways at such a level, the technology disappears. You don’t really see it, not unless you’re looking. What you see is the person it benefits.

Technology that augments us, that makes us more than we are even at our best (such as self-driving cars or sophisticated digital assistants), is a neat party trick, and an homage to our superheroes. But those who are superhuman are not like us; they are Other. Every story, from Superman to the X-Men to the Watchmen, includes an element of struggle with what it means to be more than human. In short, it means outsider status.

We are never more acutely aware of our own humanity, and all the frailty that entails, as when we are sick or injured. When we can use technology such as 3D printing to make us more whole, then it makes us more human, not Other. It restores our insider status.

Ask anyone who has lost something truly precious and then found it again. I’m talking on the level of an arm, a leg, a kidney, a jaw. If that doesn’t seem like magic, then I don’t know what does.

October 17 2012

Data from health care reviews could power “Yelp for health care” startups

Given where my work and health has taken me this year, I’ve been thinking much more about the relationship of the Internet and health data to accountability and patient-driven health care.

When I was looking for a place in Maine to go for care this summer, I went online to look at my options. I consulted hospital data from the government at HospitalCompare.HHS.gov and patient feedback data on Yelp, and then made a decision based upon proximity and those ratings. If I had been closer to where I live in Washington D.C., I would also have consulted friends, peers or neighbors for their recommendations of local medical establishments.

My brush with needing to find health care when I was far from home reminded me of the prism that collective intelligence can now provide for the treatment choices we make, if we have access to the Internet.

Patients today are sharing more of their health data and experiences online voluntarily, which in turn means that the Internet is shaping health care. There’s a growing phenomenon of “e-patients” and caregivers going online to find communities and information about illness and disability.

Aided by search engines and social media, newly empowered patients are discussing health conditions with others suffering from disease and sickness — and they’re taking that peer-to-peer health care knowledge into their doctors’ offices with them, frequently on mobile devices. E-patients share their health data of their own volition because they have a serious health condition, want to get healthy, and are willing to do so.

From the perspective of practicing physicians and hospitals, the trend of patients contributing to and consulting on online forums adds the potential for errors, fraud, or misunderstanding. And yet, I don’t think there’s any going back from a networked future of peer-to-peer health care, any more than we can turn back the dial on networked politics or disaster response.

What’s needed in all three of these areas is better data that informs better data-driven decisions. Some of that data will come from industry, some from government, and some from citizens.

This fall, the Obama administration proposed a system for patients to report medical mistakes. The proposal would create a new “consumer reporting system for patient safety” that would enable patients to tell the federal government about unsafe practices or errors. This kind of review data, if validated by government, could be baked into the next generation of consumer “choice engines,” adding another layer for people, like me, searching for care online.

There are precedents for the collection and publishing of consumer data, including the Consumer Product Safety Commission’s public complaint database at SaferProducts.gov and the Consumer Financial Protection Bureau’s complaint database. Each met with initial resistance from industry, but both have successfully gone online without massive abuse or misuse, at least to date.

It will be interesting to see how medical associations, hospitals and doctors react. Given that such data could amount to the government collecting data relevant to thousands of “Yelps for health care,” there’s both potential and reason for caution. Health care is a bit different from product safety or consumer finance, particularly with respect to how a patient experiences or understands his or her treatment or outcomes for a given injury or illness. For those who support or oppose this approach, there is an opportunity for public comment on proposed data collection at the Federal Register.

The power of performance data

Combining patient review data with government-collected performance data could be quite powerful in helping to drive better decisions and adding more transparency to health care.

In the United Kingdom, officials are keen to find the right balance between open data, transparency and prosperity.

“David Cameron, the Prime Minister, has made open data a top priority because of the evidence that this public asset can transform outcomes and effectiveness, as well as accountability,” said Tim Kelsey, in an interview this year. He used to head up the United Kingdom’s transparency and open data efforts and now works at its National Health Service.

“There is a good evidence base to support this,” said Kelsey. “Probably the most famous example is how, in cardiac surgery, surgeons on both sides of the Atlantic have reduced the number of patient deaths through comparative analysis of their outcomes.”

More data collected by patients, advocates, governments and industry could help to shed light on the performance of more physicians and clinics engaged in other expensive and lifesaving surgeries and associated outcomes.

Should that be extrapolated across the medical industry, it’s a safe bet that some medical practices or physicians will use whatever tools or legislative influence they have to fight or discredit websites, services or data that put them in a poor light. This might parallel the reception that BrightScope’s profiles of financial advisors have received in that industry.

When I talked recently with Dr. Atul Gawande about health data and care givers, he said more transparency in these areas is crucial:

“As long as we are not willing to open up data to let people see what the results are, we will never actually learn. The experience of what happens in fields where the data is open is that it’s the practitioners themselves that use it.”

In that context, health data will be the backbone of the disruption in health care ahead. Part of that change will necessarily have to come from health care entrepreneurs and watchdogs connecting code to research. In the future, a move to open science and perhaps establish a health data commons could accelerate that change.

The ability of caregivers and patients alike to make better data-driven decisions is limited by access to data. To make a difference, that data will also need to be meaningful to both the patient and the clinician, said Dr. Gawande. He continued:

“[Health data] needs to be able to connect the abstract world of data to the physical world of what really happens, which means it has to be timely data. A six-month turnaround on data is not great. Part of what has made Wal-Mart powerful, for example, is they took retail operations from checking their inventory once a month to checking it once a week and then once a day and then in real-time, knowing exactly what’s on the shelves and what’s not. That equivalent is what we’ll have to arrive at if we’re to make our systems work. Timeliness, I think, is one of the under-recognized but fundamentally powerful aspects because we sometimes over prioritize the comprehensiveness of data and then it’s a year old, which doesn’t make it all that useful. Having data that tells you something that happened this week, that’s transformative.”

Health data, in other words, will need to be open, interoperable, timely, higher quality, baked into the services that people use, and put at the fingertips of caregivers, as US CTO Todd Park explains in the video below:

There is more that needs to be done than simply putting “how to live better” information online or into an app. To borrow a phrase from Robert Kirkpatrick, for data to change health care, we’ll need to apply the wisdom of the crowds, the power of algorithms and the intuition of experts to find meaning in health data and help patients and caregivers alike make better decisions.

That isn’t to say that health data, once published, can’t be removed or filtered. Witness the furor over the removal of a malpractice database from the Internet last year, along with its restoration.

But as more data about doctors, services, drugs, hospitals and insurance companies goes online, the ability of those institutions to control public perception will shift, just as it has with government and media. Given flaws in devices or poor outcomes, patients deserve such access, accountability and insight.

Enabling better health-data-driven decisions to happen across the world will be far from easy. It is, however, a future worth building toward.


October 16 2012

How we can consumerize health care

Recently I wrote about one of my key product principles that is particularly relevant for designing software for the enterprise. The principle is called the Zero Overhead Principle, and it states that no feature may add training costs to the user.

The essence of the Zero Overhead Principle is that consumer products have figured out how to turn the “how-to manual” into a relic. They’ve focused on creating a glide path for the user to quickly move from newbie to proficient in minimal time. Put another way, the products must teach the user how they should be used.

Just this weekend, I downloaded a new game for my son on the iPad, and he was a pro in a matter of minutes (or at least proficient enough to kick my butt). No manual required. In fact, he didn’t even read anything before starting to play. This highly optimized glide path is exactly what we need to focus on when we talk about the consumerization of the enterprise.

This week, the first Strata Rx conference will focus on bringing data and health together. Just as in national security (the place where we came up with the Zero Overhead Principle to help combat the lack of tech adoption by overloaded security analysts), there is tremendous opportunity to apply lessons learned in the consumer space to the health care sector. We know the space needs disruption, and this is a way to create constructive disruption with a rapid adoption cycle.

Take, for example, my sister, who recently started her residency at one of the most well-regarded medical institutions in the world. How did her training start? With a meet and greet so all the residents could get to know each other? An orientation on what they could expect from their training? Or perhaps, an inspiring speech about the choice they’ve made? No, none of the above. Her first day started with an eight-hour training on health IT systems! Only the day after did they finally have a meet and greet.

As another example, a chief physician of a major health care division told me about how they have worked to create programs where one physician can help another increase proficiency on their state-of-the-art IT systems. When I pushed on why they would want to spend such critical time in this way, this person remarked on the amount of time physicians spend working in those systems (e.g., reviewing films, tests, etc.). To me this is horrifying. Health care professionals (not just the physicians) go through tremendous training, already have brutal hours where they are literally making life and death decisions, and now we’re subjecting them to an absurd amount of overhead training due to the IT systems. Sure, the systems are complex, but so are Google, Facebook, and LinkedIn. When was the last time you looked at a manual or took a training class for any of those products? We must not allow complexity to be used as an excuse to accept technology that requires anything short of the Zero Overhead Principle.

The time to consumerize health care is now. Just as entrepreneurs are doing with the enterprise, we need to apply consumer product principles to health care. Let’s build systems that create a dialogue with users. For example, when someone enters a dosage of 100 mg/liter instead of 10 mg/liter, the technology presents a dialogue box that simply asks, “Did you mean 10 mg/liter?” instead of providing either an obtrusive error message or, worse, letting the mistake happen. Let’s focus on the design of our technology to be interactive so that it facilitates efficiency. For example, when a physician enters a diagnosis, they are also presented with a dialogue that says, “Some people who were diagnosed with X were also diagnosed with Y.” This functionality is the equivalent of the serendipity that Amazon provides when users are browsing or checking out on its site, and it can be totally relevant and helpful for health care systems.
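The dosage dialogue described above can be sketched as a simple order-of-magnitude check. This is an illustrative sketch only: the drug name, the typical range, and the function name are all invented for the example, not part of any real health IT system, and real dosage validation would draw on clinical reference data.

```python
# Hypothetical sketch of the "Did you mean 10 mg/liter?" check described above.
# The drug name and range below are assumptions made up for illustration.

TYPICAL_RANGES_MG_PER_L = {
    "drug_x": (5.0, 20.0),  # assumed plausible concentration range
}

def check_dosage(drug, value):
    """Return None if the value looks plausible, otherwise a gentle prompt,
    checking for a likely misplaced decimal point (factor-of-ten slip)."""
    low, high = TYPICAL_RANGES_MG_PER_L[drug]
    if low <= value <= high:
        return None
    # A common entry error is an off-by-ten value; suggest the correction.
    for candidate in (value / 10, value * 10):
        if low <= candidate <= high:
            return f"Did you mean {candidate:g} mg/liter?"
    return f"Value {value:g} mg/liter is outside the typical range."

print(check_dosage("drug_x", 100))  # → "Did you mean 10 mg/liter?"
print(check_dosage("drug_x", 10))   # → None
```

The point of the design is that the system opens a dialogue rather than blocking the user or silently accepting the mistake.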

The bottom line is there is no excuse for violating the Zero Overhead Principle even in areas as complex as health care. By leveraging the key lessons that are now standard in the consumer space, we can put the power back into the health care provider’s hands.

Dr. DJ Patil is a Data Scientist in Residence at Greylock Partners. More details can be found on his LinkedIn profile and you can follow him on Twitter at @dpatil.


Photo: Operating room in the Elliot Community Hospital by Keene and Cheshire County (NH) Historical Photos, on Flickr


September 18 2012

When data disrupts health care

Health care appears immune to disruption. It’s a space where the stakes are high, the incumbents are entrenched, and lessons from other industries don’t always apply.

Yet, in a recent conversation between Tim O’Reilly and Roger Magoulas it became evident that we’re approaching an unparalleled opportunity for health care change. O’Reilly and Magoulas explained how the convergence of data access, changing perspectives on privacy, and the enormous expense of care are pushing the health space toward disruption.

As always, the primary catalyst is money. The United States is facing what Magoulas called an “existential crisis in health care costs” [discussed at the 3:43 mark]. Everyone can see that the current model is unsustainable. It simply doesn’t scale. And that means we’ve arrived at a place where party lines are irrelevant and tough solutions are the only options.

“Who is it that said change happens when the pain of not changing is greater than the pain of changing?” O’Reilly asked. “We’re now reaching that point.” [3:55]

(Note: The source of that quote is hard to pin down, but the sentiment certainly applies.)

This willingness to change is shifting perspectives on health data. Some patients are making their personal data available so they and others can benefit. Magoulas noted that even health companies, which have long guarded their data, are warming to collaboration.

At the same time there’s a growing understanding that health data must be contextualized. Simply having genomic information and patient histories isn’t good enough. True insight — the kind that can improve quality of life — is only possible when datasets are combined.

“Genes aren’t destiny,” Magoulas said. “It’s how they interact with other things. I think people are starting to see that. It’s the same with the EHR [Electronic Health Record]. The EHR doesn’t solve anything. It’s part of a puzzle.” [4:13]

And here’s where the opportunity lies. Extracting meaning from datasets is a process data scientists and Silicon Valley entrepreneurs have already refined. That means the same skills that improve mindless ad-click rates can now be applied to something profound.

“There’s this huge opportunity for those people with those talents, with that experience, to come and start working on stuff that really matters,” O’Reilly said. “They can save lives and they can save money in one of the biggest and most critical industries of the future.” [5:20]

The language O’Reilly and Magoulas used throughout their conversation was telling. “Save lives,” “work on stuff that matters,” “huge opportunity” — these aren’t frivolous phrases. The health care disruption they discussed will touch everyone, which is why it’s imperative the best minds come together to shape these changes.

The full conversation between O’Reilly and Magoulas is available in the following video.

Here are key points with direct links to those segments:

  • Internet companies used data to solve John Wanamaker’s advertising dilemma (“Half the money I spend on advertising is wasted; the trouble is I don’t know which half”). Similar methods can apply to health care. [17 seconds in]
  • The “quasi-market system” of health care makes it harder to disrupt than other industries. [3:15]
  • The U.S. is facing an existential crisis around health care costs. “This is bigger than one company.” [3:43]
  • We can benefit from the multiple data types coming “on stream” at the same time. These include electronic medical records, inexpensive gene sequencing, and personal sensor data. [4:28]
  • The availability of different datasets presents an opportunity for Silicon Valley because data scientists and technologists already have the skills to manage the data. Important results can be found when this data is correlated: “The great thing is we know it can work.” [5:20]
  • Personal data donation is a trend to watch. [6:40]
  • Disruption is often associated with trivial additions to the consumer Internet. With an undisrupted market like health care, technical skills can create real change. [7:04]
  • “There’s no question this is going to be a huge field.” [8:15]

If the disruption of health care and associated opportunities interests you, O’Reilly has more to offer. Check out our interviews, ongoing coverage, our recent report, “Solving the Wanamaker problem for health care,” and the upcoming Strata Rx conference in San Francisco.

This post was originally published on strata.oreilly.com.

September 13 2012

Growth of SMART health care apps may be slow, but inevitable

This week has been teeming with health care conferences, particularly in Boston, and was declared by President Obama to be National Health IT Week as well. I chose to spend my time at the second ITdotHealth conference, where I enjoyed many intense conversations with some of the leaders in the health care field, along with news about the SMART Platform at the center of the conference, the excitement of a Clayton Christensen talk, and the general panache of hanging out at the Harvard Medical School.

SMART, funded by the Office of the National Coordinator at Health and Human Services, is an attempt to slice through the Babel of EHR formats that prevent useful applications from being developed for patient data. Imagine if something like the wealth of mash-ups built on Google Maps (crime sites, disaster markers, restaurant locations) existed for your own health data. This is what SMART hopes to do. They can already showcase some working apps, such as overviews of patient data for doctors, and a real-life implementation of the heart disease user interface proposed by David McCandless in WIRED magazine.

The premise and promise of SMART

At this conference, the presentation that gave me the most far-reaching sense of what SMART can do was by Nich Wattanasin, project manager for i2b2 at Partners. His implementation showed SMART not just as an enabler of individual apps, but as an environment where a user could choose the proper app for his immediate needs. For instance, a doctor could use an app to search for patients in the database matching certain characteristics, then select a particular patient and choose an app that exposes certain clinical information on that patient. In this way, SMART can combine the power of many different apps that had been developed in an uncoordinated fashion, and make a comprehensive data analysis platform from them.

Another illustration of the value of SMART came from lead architect Josh Mandel. He pointed out that knowing a child’s blood pressure means little until one runs it through a formula based on the child’s height and age. Current EHRs can show you the blood pressure reading, but none does the calculation that shows you whether it’s normal or dangerous. A SMART app has been developed to do that. (Another speaker claimed that current EHRs in general neglect the special requirements of child patients.)
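The calculation Mandel describes can be sketched in miniature. Real pediatric blood-pressure norms come from published regression tables keyed to age, sex, and height percentile; the threshold numbers and function names below are invented placeholders purely to show the shape of the lookup, not clinical values.

```python
# Illustrative sketch only: real pediatric BP percentiles come from published
# reference tables (age, sex, height percentile). The thresholds below are
# made-up placeholders, not clinical values.

# (age_years, height_percentile) -> assumed 95th-percentile systolic BP (mmHg)
HYPOTHETICAL_SYSTOLIC_P95 = {
    (5, 50): 108,
    (10, 50): 114,
    (15, 50): 126,
}

def classify_systolic(age_years, height_percentile, systolic_mmhg):
    """Flag a child's systolic reading against an assumed 95th-percentile
    threshold; the raw number alone means little without this context."""
    threshold = HYPOTHETICAL_SYSTOLIC_P95[(age_years, height_percentile)]
    return "elevated" if systolic_mmhg >= threshold else "normal"

# The same reading is interpreted differently at different ages:
print(classify_systolic(5, 50, 110))   # → "elevated"
print(classify_systolic(15, 50, 110))  # → "normal"
```

This is exactly the kind of contextual calculation that a plug-in app can layer on top of an EHR that only stores the raw reading.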

SMART is a close companion to the Indivo patient health record. Both of these, along with the i2b2 data exchange system, were covered in an article from an earlier conference at the medical school. Let’s see where platforms for health apps are headed.

How far we’ve come

As I mentioned, this ITdotHealth conference was the second to be held. The first took place in September 2009, and people following health care closely can be encouraged by reading the notes from that earlier instantiation of the discussion.

In September 2009, the HITECH act (part of the American Recovery and Reinvestment Act) had defined the concept of “meaningful use,” but nobody really knew what was expected of health care providers, because the ONC and the Centers for Medicare & Medicaid Services did not release their final Stage 1 rules until more than a year after this conference. Aneesh Chopra, then the Federal CTO, and Todd Park, then the CTO of Health and Human Services, spoke at the conference, but their discussion of health care reform was a “vision.” A surprisingly strong statement for patient access to health records was made, but speakers expected it to be accomplished through the CONNECT Gateway, because there was no Direct. (The first message I could find on the Direct Project forum dated back to November 25, 2009.) Participants had a sophisticated view of EHRs as platforms for applications, but SMART was just a “conceptual framework.”

So in some ways, ONC, Harvard, and many other contributors to modern health care have accomplished an admirable amount over three short years. But in some ways we are frustratingly stuck. For instance, few EHR vendors offer API access to patient records, and existing APIs are proprietary. The only SMART implementation for a commercial EHR mentioned at this week’s conference was one created on top of the Cerner API by outsiders (although Cerner was cooperative). Jim Hansen of Dossia told me that there is little point in encouraging programmers to create SMART apps while the records are still behind firewalls.

Keynotes

I couldn’t call a report on ITdotHealth complete without an account of the two keynotes by Christensen and Eric Horvitz, although these took off in different directions from the rest of the conference and served as hints of future developments.

Christensen is still adding new twists to the theories laid out in The Innovator’s Dilemma and other books. He has been a backer of the SMART project from the start and spoke at the first ITdotHealth conference. Consistent with his famous theory of disruption, he dismisses hopes that we can reduce costs by reforming the current system of hospitals and clinics. Instead, he projects the way forward through technologies that will enable less extensively trained practitioners to successively take over tasks that used to be performed in more high-cost settings. Thus, nurse practitioners will be able to do more and more of what doctors do, primary care physicians will do more of what we currently delegate to specialists, and ultimately the patients and their families will treat themselves.

He also has a theory about the progression toward openness. Radically new technologies start out tightly integrated, and because they benefit from this integration they tend to be created by proprietary companies with high profit margins. As the industry comes to understand the products better, they move toward modular, open standards and become commoditized. Although one might conclude that EHRs, which have been around for some forty years, are overripe for open solutions, I’m not sure we’re ready for that yet. That’s because the problems the health care field needs to solve are quite different from the ones current EHRs solve. SMART is an open solution all around, but it could serve a marketplace of proprietary solutions and reward some of the venture capitalists pushing health care apps.

While Christensen laid out the broad environment for change in health care, Horvitz gave us a glimpse of what he hopes the practice of medicine will be in a few years. A distinguished scientist at Microsoft, Horvitz has been using machine learning to extract patterns in sets of patient data. For instance, in a collection of data about equipment uses, ICD codes, vital signs, etc. from 300,000 emergency room visits, they found some variables that predicted a readmission within 14 days. Out of 10,000 variables, they found 500 that were relevant, but because the relational database was strained by retrieving so much data, they reduced the set to 23 variables to roll out as a product.
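
The winnowing step Horvitz described, from thousands of candidate variables down to a relevant handful, can be sketched with a toy univariate screen. Scoring each variable by its correlation with the outcome is a stand-in for illustration only, almost certainly simpler than whatever method the Microsoft team actually used:

```python
import random

# Toy sketch of variable winnowing: score each candidate variable by its
# absolute correlation with the outcome and keep the top k. This univariate
# screen is an illustrative stand-in, not the method described in the talk.

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def top_k_variables(variables, outcome, k):
    """variables: dict of name -> list of values; outcome: list of 0/1 labels."""
    scores = {name: abs(correlation(vals, outcome)) for name, vals in variables.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Synthetic data: 'prior_visits' tracks readmission, the others are noise.
random.seed(0)
outcome = [i % 2 for i in range(100)]
variables = {
    "prior_visits": [o * 3 + random.random() for o in outcome],
    "noise_a": [random.random() for _ in range(100)],
    "noise_b": [random.random() for _ in range(100)],
}
print(top_k_variables(variables, outcome, 1))  # the informative variable ranks first
```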

Another project predicted the likelihood of medical errors from patient states and management actions. This was meant to address a study claiming that most medical errors go unreported.

A study that would make the privacy-conscious squirm was based on the willingness of individuals to provide location data to researchers. The researchers tracked searches on Bing along with visits to hospitals and found out how long it took between searching for information on a health condition and actually going to do something about it. (Horvitz assured us that personally identifiable information was stripped out.)

His goal is to go beyond measuring known variables, and to find new ones that could be hidden causes. But he warned that, as is often the case, causality is hard to prove.

As prediction turns up patterns, the data could become a “fabric” on which many different apps are based. Although Horvitz didn’t talk about combining data sets from different researchers, it’s clearly suggested by this progression. But proper de-identification and flexible patient consent become necessities for data combination. Horvitz also hopes to move from predictions to decisions, which he says is needed to truly move to evidence-based health care.

Did the conference promote more application development?

My impression (I have to admit I didn’t check with Dr. Ken Mandl, the organizer of the conference) was that this ITdotHealth aimed to persuade more people to write SMART apps, provide platforms that expose data through SMART, and contribute to the SMART project in general. I saw a few potential app developers at the conference, and a good number of people with their hands on data who were considering the use of SMART. I think they came away favorably impressed–maybe by the presentations, maybe by conversations that the meeting allowed them to have with SMART developers–so we may see SMART in wider use soon. Participants came far for the conference; I talked to one from Geneva, for instance.

The presentations were honest enough, though, to show that SMART development is not for the faint-hearted. On the supply side–that is, for people who have patient data and want to expose it–you have to create a “container” that presents data in the format expected by SMART. Furthermore, you must make sure the data conforms to industry standards, such as SNOMED for diagnoses. This could be a lot of conversion.
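
A tiny sketch of that conversion chore, with a made-up local-to-SNOMED crosswalk (the code pairs below are illustrative placeholders, not a real terminology map):

```python
# Sketch of the terminology-mapping chore a SMART "container" implementer
# faces: local diagnosis codes must be normalized to an industry standard
# such as SNOMED CT before the data is exposed. The mapping table is an
# illustrative placeholder, not a real crosswalk.

LOCAL_TO_SNOMED = {
    "HTN": "38341003",  # hypothetical local code -> SNOMED-style concept id
    "DM2": "44054006",
}

def normalize_diagnoses(local_codes):
    """Return (mapped SNOMED-style ids, local codes with no mapping yet)."""
    mapped, unmapped = [], []
    for code in local_codes:
        if code in LOCAL_TO_SNOMED:
            mapped.append(LOCAL_TO_SNOMED[code])
        else:
            unmapped.append(code)
    return mapped, unmapped
```

Multiply the unmapped list by every coding quirk in a legacy system and the "lot of conversion" the speakers warned about comes into focus.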

On the application side, you may have to deal with SMART’s penchant for Semantic Web technologies such as OWL and SPARQL. This will scare away a number of developers. However, speakers who presented SMART apps at the conference said development was fairly easy. No one matched the developer who said their app was ported in two days (most of which were spent reading the documentation) but development times could usually be measured in months.

Mandl spent some time airing the idea of a consortium to direct SMART. It could offer conformance tests (but probably not certification, which is a heavy-weight endeavor) and interact with the ONC and standards bodies.

After attending two conferences on SMART, I’ve got the impression that one of its most powerful concepts is that of an “app store for health care applications.” But correspondingly, one of the main sticking points is the difficulty of developing such an app store. No one seems to be taking it on. Perhaps SMART adoption is still at too early a stage.

Once again, we are butting our heads against the walls erected by EHRs to keep data from being extracted for useful analysis. And behind this stands the resistance of providers, the users of EHRs, to give their data to their patients or to researchers. This theme dominated a federal government conference on patient access.

I think SMART will be more widely adopted over time because it is the only useful standard for exposing patient data to applications, and innovation in health care demands these apps. Accountable Care Organizations, smarter clinical trials (I met two representatives of pharmaceutical companies at the conference), and other advances in health care require data crunching, so those apps need to be written. And that’s why people came from as far as Geneva to check out SMART–there’s nowhere else to find what they need. The technical requirements to understand SMART seem to be within developers’ grasp.

But a formidable phalanx of resistance remains, from those who don’t see the value of data to those who want to stick to limited exchange formats such as CCDs. And as Sean Nolan of Microsoft pointed out, one doesn’t get very far unless the app can fit into a doctor’s existing workflow. Privacy issues were also raised at the conference, because patient fears could stymie attempts at sharing. Given all these impediments, the government is doing what it can; perhaps the marketplace will step in to reward those who choose a flexible software platform for innovation.

September 05 2012

The future of medicine relies on massive collection of real-life data

Health care costs rise as doctors try batches of treatments that don’t work in search of one that does. Meanwhile, drug companies spend billions on developing each drug and increasingly end up with nothing to show for their pains. This is the alarming state of medical science today. Shahid Shah, device developer and system integrator, sees a different paradigm emerging. In this interview at the Open Source convention, Shah talks about how technologies and new ways of working can open up medical research.

Shah will be speaking at Strata Rx in October.

Highlights from the full video interview include:

  • Medical science will come unstuck from the clinical trials it has relied on for a couple hundred years, and use data collected in the field [Discussed at the 0:30 mark]
  • Failing fast in science [Discussed at the 2:38 mark]
  • Why and how patients will endorse the system [Discussed at the 3:00 mark]
  • Online patient communities instigating research [Discussed at the 3:55 mark]
  • Consumerization of health care [Discussed at the 5:15 mark]
  • The pharmaceutical company of the future: how research will get faster [Discussed at the 6:00 mark]
  • Medical device integration to preserve critical data [Discussed at the 7:20 mark]

You can view the entire conversation in the following video:

Strata Rx — Strata Rx, being held Oct. 16-17 in San Francisco, is the first conference to bring data science to the urgent issues confronting healthcare.

Save 20% on registration with the code RADAR20

August 29 2012

Analyzing health care data to empower patients

The stress of falling seriously ill often drags along the frustration of having no idea what the treatment will cost. We’ve all experienced the maddening stream of seemingly endless hospital bills, and testimony by E-patient Dave DeBronkart and others shows just how absurd U.S. payment systems are.

So I was happy to seize the opportunity to ask questions of three researchers from Castlight Health about the service they’ll discuss at the upcoming Strata Rx conference about data in health care.

Castlight casts its work in the framework of a service to employers and consumers. But make no mistake about it: they are a data-rich research operation, and their consumers become empowered patients (e-patients) who can make better choices.

As Arjun Kulothungun, John Zedlewski, and Eugenia Bisignani wrote to me, “Patients become empowered when actionable information is made available to them. In health care, like any other industry, people want high quality services at competitive prices. But in health care, quality and cost are often impossible for an average consumer to determine. We are proud to do the heavy lifting to bring this information to our users.”

Following are more questions and answers from the speakers:

1. Tell me a bit about what you do at Castlight and at whom you aim your services.

We work together in the Research team at Castlight Health. We provide price and quality information to our users for most common health care services, including those provided by doctors, hospitals, labs, and imaging facilities. This information is provided to patients through a user-friendly web interface and mobile app that shows their different healthcare options customized to their health care plan. Our research team has built a sophisticated pricing system that factors in a wide variety of data sources to produce accurate prices for our users.

At a higher level, this fits into our company’s drive toward healthcare transparency, to help users better understand and navigate their healthcare options. Currently, we sell this product to employers to be offered as a benefit to their employees and their dependents. Our product is attractive to self-insured employers who operate a high-deductible health plan. High-deductible health plans motivate employees to explore their options, since doing so helps them save on their healthcare costs and find higher quality care. Our product helps patients easily explore those options.

2. What kinds of data do you use? What are the challenges involved in working with this data and making it available to patients?

We bring in data from a variety of sources to model the financial side of the healthcare industry, so that we can accurately represent the true cost of care to our users. One of the challenges we face is that the data is often messy. This is due to the complex ways that health care claims are adjudicated, and the largely manual methods of data entry. Additionally, provider data is not highly standardized, so it is often difficult to match data from different sources. Finally, in a lot of cases the data is sparse: some health care procedures are frequent, but others are only seldom performed, so it is more challenging to determine their prices.

The variability of care received also presents a challenge, because the exact care a patient receives during a visit cannot always be predicted ahead of time. A single visit to a doctor can yield a wide array of claim line items, and the patient is subsequently responsible for the sum of these services. Thus, our intent is to convey the full cost of the care patients are to receive. We believe patients are interested in understanding their options in a straightforward way, and that they don’t think in terms of claim line items and provider billing codes. So we spend a lot of time determining the best way to reflect the total cost of care to our users.

3. How much could a patient save if they used Castlight effectively? What would this mean for larger groups?

For a given procedure or service, prices in a local area can vary by 100% or more. For instance, right here in San Francisco, we can see that the cost for a particular MRI varies from $450 to nearly $3000, depending on the facility that a patient chooses, while an office visit with a primary care doctor can range from $60 to $180. But a patient may not always wish to choose the lowest cost option. A number of different factors affect how much a patient could save: the availability of options in their vicinity, the quality of the services, the patient’s ability to change the current doctor/hospital for a service, personal preferences, and the insurance benefits provided. Among our customers, the empowerment of patients adds up to employer savings of around 13% in comparison to expected trends.

In addition to cost savings, Castlight also helps drive better quality care. We have shown a 38% reduction in gaps in care for chronic conditions such as diabetes and high blood pressure. This will help drive further savings as individuals adhere to clinically proven treatment schedules.

4. What other interesting data sets are out there for healthcare consumers to use? What sorts of data do you wish were available?

Unfortunately, data on prices of health care procedures is still not widely available from government sources and insurers. Data sources that are available publicly are typically too complex and arcane to be actionable for average health care consumers.

However, CMS has recently made a big push to provide data on hospital quality. Their “hospital compare” website is a great resource to access this data. We have integrated the Medicare statistics into the Castlight product, and we’re proud of the role that Castlight co-founder and current CTO of the United States Todd Park played in making it available to the public. Despite this progress on sharing hospital data, the federal government has not made the same degree of progress in sharing information for individual physicians, so we would love to see more publicly collected data in this area.

5. Are there crowdsourcing opportunities? If patients submitted data, could it be checked for quality, and how could it further improve care and costs?

We believe that engaging consumers by asking them to provide data is a great idea! The most obvious place for users to provide data is by writing reviews of their experiences with different providers, as well as rating those providers on various facets of care. Castlight and other organizations aggregate and report on these reviews as one measure of provider quality.

It is harder to use crowdsourced information to compute costs. There are significant challenges in matching crowdsourced data to providers and especially to services performed, because line items are not identified to consumers by their billing codes. Additionally, rates tend to depend on the consumer’s insurance plan. Nonetheless, we are exploring ways to use crowdsourced pricing data for Castlight.

August 21 2012

Hawaii and health care: A small state takes a giant step forward

In an era characterized by political polarization and legislative stalemate, the tiny state of Hawaii has just demonstrated extraordinary leadership. The rest of the country should now recognize, applaud, and most of all, learn from Hawaii’s accomplishment.

Hawaii enacted a new law that harmonizes its state medical privacy laws with HIPAA, the federal medical privacy law. Hawaii’s legislators and governor, along with an impressive array of patient groups, health care providers, insurance companies, and health information technologists, agreed that having dozens of unique Hawaii medical privacy laws in addition to HIPAA was confusing, expensive, and bad for patients. HB 1957 thus eliminates the need for entities covered by HIPAA to also comply with Hawaii’s complex array of medical privacy laws.

How did this thicket of state medical privacy laws arise?

Hawaii’s knotty web of state medical privacy laws is not unique. There are vast numbers of state health privacy laws across the country — certainly many hundreds, likely thousands. Hawaii alone has more than 50. Most were enacted before HIPAA, which helps explain why there are so many; when no federal guarantee of health privacy existed, states took action to protect their constituents from improper invasions of their medical privacy. These laws grew helter-skelter over decades. For example, particularly restrictive laws were enacted after inappropriate and traumatizing disclosures of HIV status during the 1980s.

These laws were often rooted in a naïve faith that patient consent, rather than underlying structural protection, is the be-all and end-all of patient protection. Consent requirements thus became more detailed and demanding. Countless laws, sometimes buried in obscure areas of state law, created unique consent requirements over mental health, genetic information, reproductive health, infectious disease, adolescent, and disability records.

When the federal government created HIPAA, a comprehensive and complex medical privacy law, the powers in Washington realized that preempting this thicket of state laws would be a political impossibility. As every HIPAA 101 class teaches, HIPAA thus became “a floor, not a ceiling.” All state laws stricter than HIPAA continue to exist in full force.

So what’s so bad about having lots of state health privacy laws?

The harmful consequences of the state medical privacy law thicket coexisting with HIPAA include:

  • Adverse patient impact — First and foremost, the privacy law thicket is terrible for individual patients. The days when we saw only doctors in one state are long gone. We travel, we move, we get sick in different states, we choose caregivers in different states. We need our health information to be rapidly available to us and our providers wherever we are, but these state consent laws make it tough for providers to share records. Even providing patients with our own medical records — which is mandated by HIPAA — is impeded by perceptions that state-specific, or even institution-specific, consent forms must be used instead of national HIPAA-compliant forms.
  • Harmful to those intended to be protected — Paradoxically, laws intended to protect particular groups of patients, like those with HIV or mental health conditions, now undermine their clinical care. Providers sending records containing sensitive content are wary of letting complete records move, yet may be unable to mask the regulated data. When records are incomplete, delayed, or simply unavailable, providers can make wrong decisions and patients can get hurt.
  • Antiquated and legalistic consent forms and systems — Most providers feel obliged to honor a patient’s request to move medical records only in the form of a “wet signature” on a piece of paper. Most then insist that the piece of paper be moved only in person or by 1980s-era fax machines, despite the inconvenience to patients who don’t have a fax machine at hand. HIPAA allows the disclosure of health information for treatment, payment, and health care operations (all precisely defined terms), but because so many state laws require consent for particular situations, it is easier (and way more CYA) for institutions to err on the side of strict consent forms for all disclosures, even when permitted by HIPAA.
  • Obstacles to technological innovation and telemedicine — Digital systems to move information need simplicity — either, yes, the data can move, or no, it cannot. Trying to build systems when a myriad of complex, and essentially unknowable, laws govern whether data can move, who must consent, on what form, for what duration, or what data subsets must be expurgated, becomes a nightmare. No doubt, many health innovators today are operating in blissful ignorance of the state health privacy law thicket, but ignorance of these laws does not protect against enforcement or class action lawsuits.
  • Economic waste — As taxpayers, the state legal thicket hurts us all. Redundant tests and procedures are often ordered when medical records cannot be timely produced. Measuring the comparative effectiveness of alternative treatments and the performance of hospitals, providers, and insurers is crucial to improving quality and reducing costs, but state laws can restrict such uses. The 2009 stimulus law provided billions of dollars for health information technology and information exchange, but some of our return on that national investment is lost when onerous state-specific consent requirements must be baked into electronic health record (EHR) and health information exchange (HIE) design.

What can we learn from Hawaii?

Other states should follow Hawaii’s lead by having the boldness and foresight to wipe their own medical privacy laws off the books in favor of a simpler and more efficient national solution that protects privacy and facilitates clinical care. Our national legal framework is HIPAA, plus HITECH, a 2009 law that made HIPAA stricter, plus other new federal initiatives intended to create a secure, private, and reliable infrastructure for moving health information. While that federal framework isn’t perfect, that’s where we should be putting our efforts to protect, exchange, and make appropriate use of health information. Hawaii’s approach of reducing the additional burden of the complex state law layer just makes sense.

Some modest progress has occurred already. A few states are harmonizing their laws affecting health information exchanges (e.g., Kansas and Utah). Some states exempt HIPAA-regulated entities subject to new HITECH breach requirements from also having to comply with the state breach laws (e.g., Michigan and Indiana). These breach measures are helpful in a crisis, to be sure, by saving money on wasteful legal research, but irrelevant from the standpoint of providing care for patients or designing technology solutions or system improvements. California currently has a medical law harmonization initiative underway, which I hope is broadly supported in order to reduce waste and improve care.

To be blunt, we need much more dramatic progress in this area. In the case of health information exchange, states are not useful “laboratories of democracy”; they are towers of Babel that disserve patients. The challenges of providing clinical care, let alone making dramatic improvements while lowering costs, in the context of this convoluted mess of state laws, are severe. Patients, disease advocacy groups, doctors, nurses, hospitals, and technology innovators should let their state legislators know that harmonizing medical privacy laws would be a huge win for all involved.

Photo: Knots by Uncle Catherine, on Flickr

August 14 2012

Solving the Wanamaker problem for health care

By Tim O’Reilly, Julie Steele, Mike Loukides and Colin Hill

“The best minds of my generation are thinking about how to make people click ads.” — Jeff Hammerbacher, early Facebook employee

“Work on stuff that matters.” — Tim O’Reilly


In the early days of the 20th century, department store magnate John Wanamaker famously said, “I know that half of my advertising doesn’t work. The problem is that I don’t know which half.”

The consumer Internet revolution was fueled by a search for the answer to Wanamaker’s question. Google AdWords and the pay-per-click model transformed a business in which advertisers paid for ad impressions into one in which they pay for results. “Cost per thousand impressions” (CPM) was replaced by “cost per click” (CPC), and a new industry was born. It’s important to understand why CPC replaced CPM, though. Superficially, it’s because Google was able to track when a user clicked on a link, and was therefore able to bill based on success. But billing based on success doesn’t fundamentally change anything unless you can also change the success rate, and that’s what Google was able to do. By using data to understand each user’s behavior, Google was able to place advertisements that an individual was likely to click. They knew “which half” of their advertising was more likely to be effective, and didn’t bother with the rest.
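
The arithmetic behind that shift is worth making concrete; the rates in this sketch are invented for illustration:

```python
# Invented illustrative numbers: compare what an advertiser effectively pays
# per *click* under CPM (pay per thousand impressions, regardless of clicks)
# as the click-through rate changes. Under CPC the price per click is fixed;
# under CPM it depends entirely on how well the ads are targeted.
impressions = 100_000
cpm_rate = 2.00  # dollars per 1,000 impressions (invented)
cpc_rate = 0.50  # dollars per click (invented)

def cost_per_click_under_cpm(click_through_rate):
    clicks = impressions * click_through_rate
    total_spend = impressions / 1000 * cpm_rate
    return total_spend / clicks

print(cost_per_click_under_cpm(0.001))  # 0.1% CTR: about $2.00 per click
print(cost_per_click_under_cpm(0.01))   # 1% CTR: about $0.20 per click
```

Raising the click-through rate is the only way CPM spending gets cheaper per click, and better targeting driven by user data is exactly the knob Google could turn.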

Since then, data and predictive analytics have driven ever deeper insight into user behavior such that companies like Google, Facebook, Twitter, Zynga, and LinkedIn are fundamentally data companies. And data isn’t just transforming the consumer Internet. It is transforming finance, design, and manufacturing — and perhaps most importantly, health care.

How is data science transforming health care? There are many ways in which health care is changing, and needs to change. We’re focusing on one particular issue: the problem Wanamaker described when talking about his advertising. How do you make sure you’re spending money effectively? Is it possible to know what will work in advance?

Too often, when doctors order a treatment, whether it’s surgery or an over-the-counter medication, they are applying a “standard of care” treatment or some variation that is based on their own intuition, effectively hoping for the best. The sad truth of medicine is that we don’t really understand the relationship between treatments and outcomes. We have studies to show that various treatments will work more often than placebos; but, like Wanamaker, we know that much of our medicine doesn’t work for half of our patients; we just don’t know which half. At least, not in advance. One of data science’s many promises is that, if we can collect data about medical treatments and use that data effectively, we’ll be able to predict more accurately which treatments will be effective for which patient, and which treatments won’t.

A better understanding of the relationship between treatments, outcomes, and patients will have a huge impact on the practice of medicine in the United States. Health care is expensive. The U.S. spends over $2.6 trillion on health care every year, an amount that constitutes a serious fiscal burden for government, businesses, and our society as a whole. These costs include over $600 billion of unexplained variations in treatments: treatments that cause no differences in outcomes, or even make the patient’s condition worse. We have reached a point at which our need to understand treatment effectiveness has become vital — to the health care system and to the health and sustainability of the economy overall.

Why do we believe that data science has the potential to revolutionize health care? After all, the medical industry has had data for generations: clinical studies, insurance data, hospital records. But the health care industry is now awash in data in a way that it has never been before: from biological data such as gene expression, next-generation DNA sequence data, proteomics, and metabolomics, to clinical data and health outcomes data contained in ever more prevalent electronic health records (EHRs) and longitudinal drug and medical claims. We have entered a new era in which we can work on massive datasets effectively, combining data from clinical trials and direct observation by practicing physicians (the records generated by our $2.6 trillion of medical expense). When we combine data with the resources needed to work on the data, we can start asking the important questions, the Wanamaker questions, about what treatments work and for whom.

The opportunities are huge: for entrepreneurs and data scientists looking to put their skills to work disrupting a large market, for researchers trying to make sense out of the flood of data they are now generating, and for existing companies (including health insurance companies, biotech, pharmaceutical, and medical device companies, hospitals and other care providers) that are looking to remake their businesses for the coming world of outcome-based payment models.

Making health care more effective

Downloadable Editions

This report will soon be available in PDF, EPUB and Mobi formats.

What, specifically, does data allow us to do that we couldn’t do before? For the past 60 or so years of medical history, we’ve treated patients as some sort of an average. A doctor would diagnose a condition and recommend a treatment based on what worked for most people, as reflected in large clinical studies. Over the years, we’ve become more sophisticated about what that average patient means, but that same statistical approach didn’t allow for differences between patients. A treatment was deemed effective or ineffective, safe or unsafe, based on double-blind studies that rarely took into account the differences between patients. The exceptions to this are relatively recent and have been dominated by cancer treatments, the first being Herceptin for breast cancer in women who over-express the Her2 receptor. With the data that’s now available, we can go much further, for a broad range of diseases and interventions that are not just drugs but also include surgery, disease management programs, medical devices, patient adherence, and care delivery.

For a long time, we thought that Tamoxifen was roughly 80% effective for breast cancer patients. But now we know much more: we know that it’s 100% effective in 70 to 80% of the patients, and ineffective in the rest. That’s not word games, because we can now use genetic markers to tell whether it’s likely to be effective or ineffective for any given patient, and we can tell in advance whether to treat with Tamoxifen or to try something else.

Two factors lie behind this new approach to medicine: a different way of using data, and the availability of new kinds of data. It’s not just stating that the drug is effective on most patients, based on trials (indeed, 80% is an enviable success rate); it’s using artificial intelligence techniques to divide the patients into groups and then determine the difference between those groups. We’re not asking whether the drug is effective; we’re asking a fundamentally different question: “for which patients is this drug effective?” We’re asking about the patients, not just the treatments. A drug that’s only effective on 1% of patients might be very valuable if we can tell who that 1% is, though it would certainly be rejected by any traditional clinical trial.
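The shift in question is easy to see in miniature. Here is a toy sketch, with entirely hypothetical trial data, of asking “for which patients is this drug effective?” rather than “is this drug effective?”: split patients by a biomarker and compute the response rate within each group.

```python
# Illustrative sketch with invented data: an aggregate 70% response rate
# hides the fact that the drug works for one biomarker group and not the other.
from collections import defaultdict

def response_rate_by_group(patients):
    """Return the fraction of responders within each biomarker group."""
    counts = defaultdict(lambda: [0, 0])  # marker -> [responders, total]
    for marker, responded in patients:
        counts[marker][0] += int(responded)
        counts[marker][1] += 1
    return {m: r / n for m, (r, n) in counts.items()}

# Hypothetical trial: (biomarker status, responded to drug)
patients = ([("positive", True)] * 70 + [("positive", False)] * 10
            + [("negative", False)] * 20)

rates = response_rate_by_group(patients)
# Overall, 70 of 100 patients responded; but the marker-positive group
# responds at 87.5%, while marker-negative patients don't respond at all.
```

Real subgroup discovery uses far richer features and genuine machine learning, but the question being asked is exactly this one.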

More than that, asking questions about patients is only possible because we’re using data that wasn’t available until recently: DNA sequencing was only invented in the mid-1970s, and is only now coming into its own as a medical tool. What we’ve seen with Tamoxifen is as clear a solution to the Wanamaker problem as you could ask for: we now know when that treatment will be effective. If you can do the same thing with millions of cancer patients, you will both improve outcomes and save money.

Dr. Lukas Wartman, a cancer researcher who was himself diagnosed with terminal leukemia, was successfully treated with sunitinib, a drug that was only approved for kidney cancer. Sequencing the genes of both the patient’s healthy cells and cancerous cells led to the discovery of a protein that was out of control and encouraging the spread of the cancer. The gene responsible for manufacturing this protein could potentially be inhibited by the kidney drug, although it had never been tested for this application. This unorthodox treatment was surprisingly effective: Wartman is now in remission.

While this treatment was exotic and expensive, what’s important isn’t the expense but the potential for new kinds of diagnosis. The price of gene sequencing has been plummeting; it will be a common doctor’s office procedure in a few years. And through Amazon and Google, you can now “rent” a cloud-based supercomputing cluster that can solve huge analytic problems for a few hundred dollars per hour. What is now exotic inevitably becomes routine.

But even more important: we’re looking at a completely different approach to treatment. Rather than a treatment that works 80% of the time, or even 100% of the time for 80% of the patients, a treatment might be effective for a small group. It might be entirely specific to the individual; the next cancer patient may have a different protein that’s out of control, an entirely different genetic cause for the disease. Treatments that are specific to one patient don’t exist in medicine as it’s currently practiced; how could you ever do an FDA trial for a medication that’s only going to be used once to treat a certain kind of cancer?

Foundation Medicine is at the forefront of this new era in cancer treatment. They use next-generation DNA sequencing to discover DNA sequence mutations and deletions that are currently used in standard of care treatments, as well as many other actionable mutations that are tied to drugs for other types of cancer. They are creating a patient-outcomes repository that will be the fuel for discovering the relation between mutations and drugs. Foundation has identified DNA mutations in 50% of cancer cases for which drugs exist but are not currently part of the standard of care for the patient’s particular cancer (information via a private communication).

The ability to do large-scale computing on genetic data gives us the ability to understand the origins of disease. If we can understand why an anti-cancer drug is effective (what specific proteins it affects), and if we can understand what genetic factors are causing the cancer to spread, then we’re able to use the tools at our disposal much more effectively. Rather than using imprecise treatments organized around symptoms, we’ll be able to target the actual causes of disease, and design treatments tuned to the biology of the specific patient. Eventually, we’ll be able to treat 100% of the patients 100% of the time, precisely because we realize that each patient presents a unique problem.

Personalized treatment is just one area in which we can solve the Wanamaker problem with data. Hospital admissions are extremely expensive. Data can make hospital systems more efficient and help them avoid preventable complications such as blood clots and hospital re-admissions. It can also help address the challenge of hot-spotting (a term coined by Atul Gawande): finding people who use an inordinate amount of health care resources. By looking at data from hospital visits, Dr. Jeffrey Brenner of Camden, NJ, was able to determine that “just one per cent of the hundred thousand people who made use of Camden’s medical facilities accounted for thirty per cent of its costs.” Furthermore, many of these people came from two apartment buildings. Designing more effective medical care for these patients was difficult: it doesn’t fit our health insurance system, the patients are often dealing with many serious medical issues (addiction and obesity are frequent complications), and they have trouble trusting doctors and social workers. It’s counter-intuitive, but spending more on these patients now results in spending less on them later, by keeping them from becoming really sick. While it’s a work in progress, it looks like building appropriate systems to target these high-risk patients and treat them before they’re hospitalized will bring significant savings.

Many poor health outcomes are attributable to patients who don’t take their medications. Eliza, a Boston-based company started by Alexandra Drane, has pioneered approaches to improve compliance through interactive communication with patients. Eliza improves patient drug compliance by tracking which types of reminders work on which types of people; it’s similar to the way companies like Google target advertisements to individual consumers. By using data to analyze each patient’s behavior, Eliza can generate reminders that are more likely to be effective. The results aren’t surprising: if patients take their medicine as prescribed, they are more likely to get better. And if they get better, they are less likely to require further, more expensive treatment. Again, we’re using data to solve Wanamaker’s problem in medicine: we’re spending our resources on what’s effective, on the reminders that are most likely to get patients to take their medications.
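The targeting idea is simple enough to sketch. This toy example (the segments, channels, and data are all invented, not Eliza’s actual system) tallies which reminder channel has worked for each patient segment and picks the best performer for the next reminder:

```python
# Minimal sketch of reminder targeting with hypothetical data:
# track per-(segment, channel) success rates, then choose the channel
# with the best observed rate for each segment.
from collections import defaultdict

class ReminderTargeter:
    def __init__(self):
        # (segment, channel) -> [successes, attempts]
        self.stats = defaultdict(lambda: [0, 0])

    def record(self, segment, channel, took_medication):
        s = self.stats[(segment, channel)]
        s[0] += int(took_medication)
        s[1] += 1

    def best_channel(self, segment, channels):
        """Pick the channel with the highest observed success rate."""
        def rate(ch):
            successes, attempts = self.stats[(segment, ch)]
            return successes / attempts if attempts else 0.0
        return max(channels, key=rate)

t = ReminderTargeter()
t.record("over_65", "phone_call", True)
t.record("over_65", "phone_call", True)
t.record("over_65", "text_message", False)
t.record("under_40", "text_message", True)

t.best_channel("over_65", ["phone_call", "text_message"])   # phone_call
t.best_channel("under_40", ["phone_call", "text_message"])  # text_message
```

A production system would balance exploring untried channels against exploiting known good ones (a classic bandit problem), but the core bookkeeping looks like this.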

More data, more sources

The examples we’ve looked at so far have been limited to traditional sources of medical data: hospitals, research centers, doctor’s offices, insurers. The Internet has enabled the formation of patient networks aimed at sharing data. Health social networks are now some of the largest patient communities. As of November 2011, PatientsLikeMe has over 120,000 patients in 500 different condition groups; ACOR has over 100,000 patients in 127 cancer support groups; 23andMe has over 100,000 members in their genomic database; and diabetes health social network SugarStats has over 10,000 members. And these are just the larger communities; thousands of smaller communities have formed around rare diseases, or even uncommon experiences with common diseases. All of these communities are generating data that they voluntarily share with each other and the world.

Increasingly, what they share is not just anecdotal, but includes an array of clinical data. For this reason, these groups are being recruited for large-scale crowdsourced clinical outcomes research.

Thanks to ubiquitous data networking through the mobile network, we can take several steps further. In the past two or three years, there’s been a flood of personal fitness devices (such as the Fitbit) for monitoring your personal activity. There are mobile apps for taking your pulse, and an iPhone attachment for measuring your glucose. There has been talk of mobile applications that would constantly listen to a patient’s speech and detect changes that might be the precursor for a stroke, or would use the accelerometer to report falls. Tanzeem Choudhury has developed an app called Be Well that is intended primarily for victims of depression, though it can be used by anyone. Be Well monitors the user’s sleep cycles, the amount of time they spend talking, and the amount of time they spend walking. The data is scored, and the app makes appropriate recommendations, based both on the individual patient and data collected across all the app’s users.
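To make the idea concrete, here is a toy version of the kind of daily scoring an app like Be Well performs. The weights and targets below are invented for illustration, not Be Well’s actual algorithm: each behavior is scored against a target and the results are averaged into a 0–100 score.

```python
# Hypothetical daily wellness score: sleep, conversation, and walking time
# are each scored against an (invented) target, then averaged.

def wellness_score(sleep_hours, talk_minutes, walk_minutes):
    """Combine three behaviors into a single 0-100 score."""
    sleep_component = min(sleep_hours / 8.0, 1.0)    # target: 8 hours of sleep
    talk_component = min(talk_minutes / 60.0, 1.0)   # target: 60 minutes talking
    walk_component = min(walk_minutes / 30.0, 1.0)   # target: 30 minutes walking
    return round(100 * (sleep_component + talk_component + walk_component) / 3)

wellness_score(8, 60, 30)   # 100: all targets met
wellness_score(4, 30, 0)    # 33: short sleep, little conversation, no walking
```

The real app also compares each user against data collected across all users; the point here is just how raw sensor data becomes an actionable signal.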

Continuous monitoring of critical patients in hospitals has been normal for years; but we now have the tools to monitor patients constantly, in their home, at work, wherever they happen to be. And if this sounds like Big Brother, note that at this point most of the patients are willing participants. We don’t want to transform our lives into hospital experiences; far from it! But we can collect and use the data we constantly emit, our “data smog,” to maintain our health, to become conscious of our behavior, and to detect oncoming conditions before they become serious. The most effective medical care is the medical care you avoid because you don’t need it.

Paying for results

Once we’re on the road toward more effective health care, we can look at other ways in which Wanamaker’s problem shows up in the medical industry. It’s clear that we don’t want to pay for treatments that are ineffective. Wanamaker wanted to know which part of his advertising was effective, not just to make better ads, but also so that he wouldn’t have to buy the advertisements that wouldn’t work. He wanted to pay for results, not for ad placements. Now that we’re starting to understand how to make treatment effective, now that we understand that it’s more than rolling the dice and hoping that a treatment that works for a typical patient will be effective for you, we can take the next step: Can we change the underlying incentives in the medical system? Can we make the system better by paying for results, rather than paying for procedures?

It’s shocking just how badly the incentives in our current medical system are aligned with outcomes. If you see an orthopedist, you’re likely to get an MRI, most likely at a facility owned by the orthopedist’s practice. On one hand, it’s good medicine to know what you’re doing before you operate. But how often does that MRI result in a different treatment? How often is the MRI required just because it’s part of the protocol, when it’s perfectly obvious what the doctor needs to do? Many men have had PSA tests for prostate cancer; but in most cases, aggressive treatment of prostate cancer is a bigger risk than the disease itself. Yet the test itself is a significant profit center. Think again about Tamoxifen, and about the pharmaceutical company that makes it. In our current system, what does “100% effective in 80% of the patients” mean, except for a 20% loss in sales? That’s because the drug company is paid for the treatment, not for the result; it has no financial interest in whether any individual patient gets better. (Whether a statistically significant number of patients has side-effects is a different issue.) And at the same time, bringing a new drug to market is very expensive, and might not be worthwhile if it will only be used on the remaining 20% of the patients. And that’s assuming that one drug, not two, or 20, or 200 will be required to treat the unlucky 20% effectively.

It doesn’t have to be this way.

In the U.K., Johnson & Johnson, faced with the possibility of losing reimbursements for their multiple myeloma drug Velcade, agreed to refund the money for patients who did not respond to the drug. Several other pay-for-performance drug deals have followed since, paving the way for the ultimate transition in pharmaceutical company business models in which their product is health outcomes instead of pills. Such a transition would rely more heavily on real-world outcome data (are patients actually getting better?), rather than controlled clinical trials, and would use molecular diagnostics to create personalized “treatment algorithms.” Pharmaceutical companies would also focus more on drug compliance to ensure health outcomes were being achieved. This would ultimately align the interests of drug makers with patients, their providers, and payors.

Similarly, rather than paying for treatments and procedures, can we pay hospitals and doctors for results? That’s what Accountable Care Organizations (ACOs) are about. ACOs are a leap forward in business model design, where the provider shoulders any financial risk. ACOs represent a new framing of the much maligned HMO approaches from the ’90s, which did not work. HMOs tried to use statistics to predict and prevent unneeded care. The ACO model, rather than controlling doctors with what the data says they “should” do, uses data to measure how each doctor performs. Doctors are paid for successes, not for the procedures they administer. The main advantage that the ACO model has over the HMO model is how good the data is, and how that data is leveraged. The ACO model aligns incentives with outcomes: a practice that owns an MRI facility isn’t incentivized to order MRIs when they’re not necessary. It is incentivized to use all the data at its disposal to determine the most effective treatment for the patient, and to follow through on that treatment with a minimum of unnecessary testing.

When we know which procedures are likely to be successful, we’ll be in a position where we can pay only for the health care that works. When we can do that, we’ve solved Wanamaker’s problem for health care.

Enabling data

Data science is not optional in health care reform; it is the linchpin of the whole process. All of the examples we’ve seen, ranging from cancer treatment to detecting hot spots where additional intervention will make hospital admission unnecessary, depend on using data effectively: taking advantage of new data sources and new analytics techniques, in addition to the data the medical profession has had all along.

But it’s too simple just to say “we need data.” We’ve had data all along: handwritten records in manila folders on acres and acres of shelving. Insurance company records. But it’s all been locked up in silos: insurance silos, hospital silos, and many, many doctor’s office silos. Data doesn’t help if it can’t be moved, if data sources can’t be combined.

There are two big issues here. First, a surprising number of medical records are still either hand-written, or in digital formats that are scarcely better than hand-written (for example, scanned images of hand-written records). Getting medical records into a format that’s computable is a prerequisite for almost any kind of progress. Second, we need to break down those silos.

Anyone who has worked with data knows that, in any problem, 90% of the work is getting the data into a form in which it can be used; the analysis itself is often simple. We need electronic health records: patient data in a more-or-less standard form that can be shared efficiently, data that can be moved from one location to another at the speed of the Internet. Not all data formats are created equal, and some are certainly better than others; but at this point, any machine-readable format, even simple text files, is better than nothing. While there are currently hundreds of different formats for electronic health records, the fact that they’re electronic means that they can be converted from one form into another. Standardizing on a single format would make things much easier, but just getting the data into some electronic form, any form at all, is the first step.
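As a small illustration of that point, once a record is electronic, conversion between machine-readable formats is mechanical. Here a simple CSV export (the field names are invented, not any real EHR schema) becomes JSON in a few lines:

```python
# Hypothetical CSV export of patient records, converted to JSON.
# Once data is in *any* machine-readable form, translation is routine.
import csv
import io
import json

csv_export = """patient_id,date,procedure,outcome
1001,2012-05-01,MRI,no change
1001,2012-06-12,physical therapy,improved
"""

records = list(csv.DictReader(io.StringIO(csv_export)))
as_json = json.dumps(records, indent=2)
# Each CSV row is now a JSON object with named fields, ready to be
# shipped to another system or loaded into an analytics pipeline.
```

Real EHR conversion involves vastly messier vocabularies and semantics, of course; the contrast being drawn is with paper, where no such mechanical translation is possible at all.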

Once we have electronic health records, we can link doctor’s offices, labs, hospitals, and insurers into a data network, so that all patient data is immediately stored in a data center: every prescription, every procedure, and whether that treatment was effective or not. This isn’t some futuristic dream; it’s technology we have now. Building this network would be substantially simpler and cheaper than building the networks and data centers now operated by Google, Facebook, Amazon, Apple, and many other large technology companies. It’s not even close to pushing the limits.

Electronic health records enable us to go far beyond the current mechanism of clinical trials. In the past, once a drug has been approved in trials, that’s effectively the end of the story: running more tests to determine whether it’s effective in practice would be a huge expense. A physician might get a sense for whether any treatment worked, but that evidence is essentially anecdotal: it’s easy to believe that something is effective because that’s what you want to see. And if it’s shared with other doctors, it’s shared while chatting at a medical convention. But with electronic health records, it’s possible (and not even terribly expensive) to collect documentation from thousands of physicians treating millions of patients. We can find out when and where a drug was prescribed, why, and whether there was a good outcome. We can ask questions that are never part of clinical trials: is the medication used in combination with anything else? What other conditions is the patient being treated for? We can use machine learning techniques to discover unexpected combinations of drugs that work well together, or to predict adverse reactions. We’re no longer limited by clinical trials; every patient can be part of an ongoing evaluation of whether his treatment is effective, and under what conditions. Technically, this isn’t hard. The only difficult part is getting the data to move, getting data in a form where it’s easily transferred from the doctor’s office to analytics centers.

To solve problems of hot-spotting (individual patients or groups of patients consuming inordinate medical resources) requires a different combination of information. You can’t locate hot spots if you don’t have physical addresses. Physical addresses can be geocoded (converted from addresses to longitude and latitude, which is more useful for mapping problems) easily enough, once you have them, but you need access to patient records from all the hospitals operating in the area under study. And you need access to insurance records to determine how much health care patients are requiring, and to evaluate whether special interventions for these patients are effective. Not only does this require electronic records, it requires cooperation across different organizations (breaking down silos), and assurance that the data won’t be misused (patient privacy). Again, the enabling factor is our ability to combine data from different sources; once you have the data, the solutions come easily.
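Once the records can be combined, the analysis itself is almost trivially simple, which is the point. A sketch with made-up claims data (real hot-spotting would work on geocoded, de-identified records from many institutions):

```python
# Hot-spotting sketch with hypothetical combined claims records:
# aggregate cost by patient address and rank the addresses.
from collections import defaultdict

claims = [
    # (patient address, cost of visit)
    ("100 Main St", 12000), ("100 Main St", 9500), ("100 Main St", 20000),
    ("7 Oak Ave", 800), ("22 Elm St", 1500), ("7 Oak Ave", 600),
]

cost_by_address = defaultdict(int)
for address, cost in claims:
    cost_by_address[address] += cost

hot_spots = sorted(cost_by_address.items(), key=lambda kv: kv[1], reverse=True)
# The top entry is "100 Main St": one address dominates the costs,
# exactly the pattern Brenner found in Camden's apartment buildings.
```

The hard part is everything upstream of this loop: getting the records, linking them, and protecting patient privacy.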

Breaking down silos has a lot to do with aligning incentives. Currently, hospitals are trying to optimize their income from medical treatments, while insurance companies are trying to optimize their income by minimizing payments, and doctors are just trying to keep their heads above water. There’s little incentive to cooperate. But as financial pressures rise, it will become critically important for everyone in the health care system, from the patient to the insurance executive, to ensure that they are getting the most for their money. While there’s intense cultural resistance to be overcome (through our experience in data science, we’ve learned that it’s often difficult to break down silos within an organization, let alone between organizations), the pressure of delivering more effective health care for less money will eventually break the silos down. The old zero-sum game of winners and losers must end if we’re going to have a medical system that’s effective over the coming decades.

Data becomes infinitely more powerful when you can mix data from different sources: many doctor’s offices, hospital admission records, address databases, and even the rapidly increasing stream of data coming from personal fitness devices. The challenge isn’t employing our statistics more carefully, precisely, or guardedly. It’s about letting go of an old paradigm that starts by assuming only certain variables are key and ends by correlating only those variables. This paradigm worked well when data was scarce, but if you think about it, those assumptions arise precisely because data is scarce. We didn’t study the relationship between leukemia and kidney cancers because that would have meant asking a huge set of questions and collecting a lot of data; and a connection between leukemia and kidney cancer is no more likely than a connection between leukemia and flu. But the existence of data is no longer a problem: we’re collecting the data all the time. Electronic health records let us move the data around so that we can assemble a collection of cases that goes far beyond a particular practice, a particular hospital, a particular study. So now, we can use machine learning techniques to identify and test all possible hypotheses, rather than just the small set that intuition might suggest. And finally, with enough data, we can get beyond correlation to causation: rather than saying “A and B are correlated,” we’ll be able to say “A causes B,” and know what to do about it.
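Screening all variable pairs, rather than only the ones intuition suggests, can be sketched in a few lines. The data here is synthetic and the Pearson correlation is written out by hand for self-containment; a real analysis would use thousands of variables, proper multiple-testing corrections, and far more than correlation:

```python
# Sketch of hypothesis screening over synthetic data: compute Pearson
# correlation for every variable pair and keep the strong ones.
from itertools import combinations
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

variables = {  # hypothetical per-patient measurements
    "a": [1, 2, 3, 4, 5],
    "b": [2, 4, 6, 8, 10],   # perfectly tracks "a"
    "c": [5, 1, 4, 2, 3],    # unrelated noise
}

strong = [(u, v) for u, v in combinations(variables, 2)
          if abs(pearson(variables[u], variables[v])) > 0.9]
# Only the ("a", "b") pair survives the screen.
```

Exhaustive screening like this is exactly what was infeasible when every hypothesis required its own data collection effort.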

Building the health care system we want

The U.S. ranks 37th among developed economies in life expectancy and other measures of health, while far outspending other countries on per-capita health care. We spend 18% of GDP on health care, while other countries on average spend on the order of 10% of GDP. We spend a lot of money on treatments that don’t work, because we have, at best, a poor understanding of what will and won’t work.

Part of the problem is cultural. In a country where even pets can have hip replacement surgery, it’s hard to imagine not spending every penny you have to prolong Grandma’s life — or your own. The U.S. is a wealthy nation, and health care is something we choose to spend our money on. But wealthy or not, nobody wants ineffective treatments. Nobody wants to roll the dice and hope that their biology is similar enough to a hypothetical “average” patient. No one wants a “winner take all” payment system in which the patient is always the loser, paying for procedures whether or not they are helpful or necessary. Like Wanamaker with his advertisements, we want to know what works, and we want to pay for what works. We want a smarter system where treatments are designed to be effective on our individual biologies; where treatments are administered effectively; where our hospitals are used effectively; and where we pay for outcomes, not for procedures.

We’re on the verge of that new system now. We don’t have it yet, but we can see it around the corner. Ultra-cheap DNA sequencing in the doctor’s office, massive inexpensive computing power, the availability of EHRs to study whether treatments are effective even after the FDA trials are over, and improved techniques for analyzing data are the tools that will bring this new system about. The tools are here now; it’s up to us to put them into use.

August 09 2012

Five elements of reform that health providers would rather not hear about

The quantum leap we need in patient care requires a complete overhaul of record-keeping and health IT. Leaders of the health care field know this and have been urging the changes on health care providers for years, but the providers are having trouble accepting the changes for several reasons.

What’s holding them back? Change certainly costs money, but the industry is already groaning its way through enormous paradigm shifts to meet the current financial and regulatory climate, so the money might as well be directed to things that work. Training staff to handle patients differently is also difficult, but the staff on the floor of these institutions are experiencing burn-out and can be inspired by a new direction. The fundamental resistance seems to be expectations by health providers and their vendors about the control they need to conduct their business profitably.

A few months ago I wrote an article titled Five Tough Lessons I Had to Learn About Health Care. Here I’ll delineate some elements of a new health care system that are promoted by thought leaders, that echo the evolution of other industries, that will seem utterly natural in a couple of decades–but that providers are loath to consider. I feel that leaders in the field are not confronting that resistance with an equivalent sense of conviction that these changes are crucial.

1. Reform will not succeed unless electronic records standardize on a common, robust format

Records are not static. They must be combined, parsed, and analyzed to be useful. In the health care field, records must travel with the patient. Furthermore, we need an explosion of data analysis applications in order to drive diagnosis, public health planning, and research into new treatments.

Interoperability is a common mantra these days in talking about electronic health records, but I don’t think the power and urgency of record formats can be conveyed in eight-syllable words. It can be conveyed better by a site that uses data about hospital procedures, costs, and patient satisfaction to help consumers choose a desirable hospital. Or an app that might prevent a million heart attacks and strokes.

Data-wise (or data-ignorant), doctors are stuck in the 1980s, buying proprietary record systems that don’t work together even between different departments in a hospital, or between outpatient clinics and their affiliated hospitals. Now the vendors are responding to pressures from both government and the market by promising interoperability. The federal government has taken this promise as good coin, hoping that vendors will provide windows onto their data. It never really happens. Every baby step toward opening up one field or another requires additional payments to vendors or consultants.

That’s why exchanging patient data (health information exchange) requires a multi-million dollar investment, year after year, and why most HIEs go under. And that’s why the HL7 committee, putatively responsible for defining standards for electronic health records, keeps on putting out new, complicated variations on a long history of formats that were not well enough defined to ensure compatibility among vendors.

The Direct project and perhaps the nascent RHEx RESTful exchange standard will let hospitals exchange the limited types of information that the government forces them to exchange. But it won’t create a platform (as suggested in this PDF slideshow) for the hundreds of applications we need to extract useful data from records. Nor will it open the records to the masses of data we need to start collecting. It remains to be seen whether Accountable Care Organizations, which are the latest reform in U.S. health care and are described in this video, will be able to use current standards to exchange the data that each member institution needs to coordinate care. Shahid Shaw has laid out in glorious detail the elements of open data exchange in health care.

2. Reform will not succeed unless massive amounts of patient data are collected

We aren’t giving patients the most effective treatments because we just don’t know enough about what works. This extends throughout the health care system:

  • We can’t prescribe a drug tailored to the patient because we don’t collect enough data about patients and their reactions to the drug.

  • We can’t be sure drugs are safe and effective because we don’t collect data about how patients fare on those drugs.

  • We don’t see a heart attack or other crisis coming because we don’t track the vital signs of at-risk populations on a daily basis.

  • We don’t make sure patients follow through on treatment plans because we don’t track whether they take their medications and perform their exercises.

  • We don’t target people who need treatment because we don’t keep track of their risk factors.

Some institutions have adopted a holistic approach to health, but as a society there’s a huge amount more that we could do in this area. O’Reilly is hosting a conference called Strata Rx on this subject.

Leaders in the field know what health care providers could accomplish with data. A recent article even advises policy-makers to focus on the data instead of the electronic records. The question is whether providers are technically and organizationally prepped to accept it in such quantities and variety. When doctors and hospitals think they own the patients’ records, they resist putting in anything but their own notes and observations, along with lab results they order. We’ve got to change the concept of ownership, which strikes deep into their culture.

3. Reform will not succeed unless patients are in charge of their records

Doctors are currently acting in isolation, occasionally consulting with the other providers seen by their patients but rarely sharing detailed information. It falls on the patient, or a family advocate, to remember that one drug or treatment interferes with another or to remind treatment centers of follow-up plans. And any data collected by the patient remains confined to scribbled notes or (in the modern Quantified Self equivalent) a web site that’s disconnected from the official records.

Doctors don’t trust patients. They have some good reasons for this: medical records are complicated documents in which a slight rewording or typographical error can change the meaning enough to risk a life. But walling off patients from records doesn’t insulate them against errors: on the contrary, patients catch errors entered by staff all the time. So ultimately it’s better to bring the patient onto the team and educate her. If a problem with records altered by patients–deliberately or through accidental misuse–turns up down the line, digital certificates can be deployed to sign doctor records and output from devices.

The amounts of data we’re talking about get really big fast. Genomic information and radiological images, in particular, can occupy dozens of gigabytes of space. But hospitals are moving to the cloud anyway. Practice Fusion just announced that they serve 150,000 medical practitioners and that “One in four doctors selecting an EHR today chooses Practice Fusion.” So we can just hand over the keys to the patients and storage will grow along with need.

The movement for patient empowerment will take off, as experts in health reform told US government representatives, when patients are in charge of their records. To treat people, doctors will have to ask for the records, and the patients can offer the full range of treatment histories, vital signs, and observations of daily living they’ve collected. Applications will arise that can search the data for patterns and relevant facts.

Once again, the US government is trying to stimulate patient empowerment by requiring doctors to open their records to patients. But most institutions meet the formal requirements by providing portals that patients can log into, the way we can view flight reservations on airlines. We need the patients to become the pilots. We also need to give them the information they need to navigate.

4. Reform will not succeed unless providers conform to practice guidelines

Now that the government is forcing doctors to release information about outcomes, patients can start to choose doctors and hospitals that offer the best chances of success. The providers will have to apply more rigor to their activities, using checklists and more, to bring up the scores of the less successful providers. Medicine is both a science and an art, but many lag on the science (that is, doing what has been statistically proven to produce the best likely outcome), even at prestigious institutions.

Patient choice is restricted by arbitrary insurance rules, unfortunately. These also contribute to the utterly crazy difficulty of determining what a medical procedure will cost, as reported by e-Patient Dave and WBUR radio. Straightening out this problem goes way beyond the doctors and hospitals, and settling on a fair, predictable cost structure will benefit them almost as much as patients and taxpayers. Even some insurers have started to see that the system is reaching a dead end and are creating new payment mechanisms.

5. Reform will not succeed unless providers and patients can form partnerships

I’m always talking about technologies and data in my articles, but none of that constitutes health. Just as student testing is a poor model for education, data collection is a poor model for medical care. What patients want is time to talk intensively with their providers about their needs, and providers voice the same desires.

Data and good record keeping can help us use our resources more efficiently and deal with the physician shortage, partly by spreading out jobs among other clinical staff. Computer systems can’t deal with complex and overlapping syndromes, or persuade patients to adopt practices that are good for them. Relationships will always have to be in the forefront. Health IT expert Fred Trotter says, “Time is the gas that makes the relationship go, but the technology should be focused on fuel efficiency.”

Arien Malec, former contractor for the Office of the National Coordinator, used to give a speech about the evolution of medical care. Before the revolution in antibiotics, doctors had few tools to actually cure patients, but they lived with their patients in the same community and knew their needs through and through. As we’ve improved the science of medicine, we’ve lost that personal connection. Malec argued that better records could help doctors really know their patients again. But conversations are necessary too.

August 08 2012

Technical requirements for coordinating care in an Accountable Care Organization

The concept of an Accountable Care Organization (ACO) reflects modern hopes to improve medicine and cut costs in the health system. Tony McCormick, a pioneer in the integration of health care systems, describes what is needed on the ground to get doctors working together.

Highlights from the full video interview include:

  • What an Accountable Care Organization is. [Discussed at the 00:19 mark]
  • Biggest challenge in forming an ACO. [Discussed at the 01:23 mark]
  • The various types of providers who need to exchange data. [Discussed at the 03:08 mark]
  • Data formats and gaps in the market. [Discussed at the 03:58 mark]
  • Uses for data in ACOs. [Discussed at the 05:39 mark]
  • Problems with current Medicare funding and solutions through ACOs. [Discussed at the 07:50 mark]

You can view the entire conversation in the following video:

August 03 2012

StrataRx: Data science and health(care)

By Mike Loukides and Jim Stogdill

We are launching a conference at the intersection of health, health care, and data. Why?

Our health care system is in crisis. We are experiencing epidemic levels of obesity, diabetes, and other preventable conditions while at the same time our health care system costs are spiraling higher. Most of us have experienced increasing health care costs in our businesses or have seen our personal share of insurance premiums rise rapidly. Worse, we may be living with a chronic or life-threatening disease while struggling to obtain effective therapies and interventions — finding ourselves lumped in with “average patients” instead of receiving effective care designed to work for our specific situation.

In short, particularly in the United States, we are paying too much for too much care of the wrong kind and getting poor results. All the while our diet and lifestyle failures are demanding even more from the system. In the past few decades we’ve dropped from the world’s best health care system to the 37th, and we seem likely to drop further if things don’t change.

The very public fight over the Affordable Care Act (ACA) has brought this to the fore of our attention, but this is a situation that has been brewing for a long time. With the ACA’s arrival, increasing costs and poor outcomes, at least in part, are going to be the responsibility of the federal government. The fiscal outlook for that responsibility doesn’t look good and solving this crisis is no longer optional; it’s urgent.

There are many reasons for the crisis, and there’s no silver bullet. Health and health care live at the confluence of diet and exercise norms, destructive business incentives, antiquated care models, and a system that has severe learning disabilities. We aren’t preventing the preventable, and once we’re sick we’re paying for procedures and tests instead of results; and those interventions were designed for some non-existent average patient so much of it is wasted. Later we mostly ignore the data that could help the system learn and adapt.

It’s all too easy to be gloomy about the outlook for health and health care, but this is also a moment of great opportunity. We face this crisis armed with vast new data sources, the emerging tools and techniques to analyze them, an ACA policy framework that emphasizes outcomes over procedures, and a growing recognition that these are problems worth solving.

Data has a long history of being “unreasonably effective.” And at least from the technologist point of view it looks like we are on the cusp of something big. We have the opportunity to move from “Health IT” to an era of data-illuminated technology-enabled health.

For example, it is well known that poverty places a disproportionate burden on the health care system. Poor people don’t have medical insurance and can’t afford to see doctors, so when they’re sick they go to the emergency room, at great cost and often after they are much sicker than they need to be. But what happens when you look deeper? One project showed that two apartment buildings in Camden, NJ accounted for a hugely disproportionate number of hospital admissions.

Targeting those buildings, and specific people within them, with integrated preventive care and medical intervention has led to significant savings.

That project was made possible by the analysis of hospital admissions, costs, and intervention outcomes — essentially, insurance claims data — across all the hospitals in Camden. Acting upon that analysis and analyzing the results of the action led to savings.
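
The core of that analysis is straightforward aggregation. A minimal sketch over invented claims records (the field names are illustrative, not a real claims format):

```python
# Aggregate hospital admissions and cost by patient address to surface
# "hotspot" locations that are candidates for targeted preventive care.
# All records below are synthetic.
from collections import defaultdict

claims = [
    {"address": "123 Main St", "cost": 12000},
    {"address": "123 Main St", "cost": 8500},
    {"address": "45 Oak Ave",  "cost": 3000},
    {"address": "123 Main St", "cost": 15000},
    {"address": "9 Elm Rd",    "cost": 2000},
]

totals = defaultdict(lambda: {"admissions": 0, "cost": 0})
for claim in claims:
    entry = totals[claim["address"]]
    entry["admissions"] += 1
    entry["cost"] += claim["cost"]

# Rank addresses by total cost, highest first.
hotspots = sorted(totals.items(), key=lambda kv: kv[1]["cost"], reverse=True)
for address, stats in hotspots:
    print(address, stats["admissions"], stats["cost"])
```

The hard part in Camden was not the computation but getting the claims data out of competing hospitals and into one pool.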

But claims data isn’t the only game in town anymore. Even more is possible as electronic medical records (EMR), genomic, mobile sensor, and other emerging data streams become available.

With mobile-enabled remote sensors like glucometers, blood pressure monitors, and futuristic tools like digital pills that broadcast their arrival in the stomach, we have the opportunity to completely revolutionize disease management. By moving from discrete and costly data events to a continuous stream of inexpensive remotely monitored data, care will improve for a broad range of chronic and life-threatening diseases. By involving fewer office visits, physician productivity will rise and costs will come down.
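
Even a simple rule over such a stream beats a quarterly office visit for catching problems early. A sketch, assuming readings arrive as (minute, mg/dL) pairs; the thresholds are illustrative, not clinical guidance:

```python
# Turn a continuous stream of remote glucometer readings into alerts.
# Thresholds are illustrative only, not clinical guidance.
def flag_readings(stream, low=70, high=180):
    """Yield (minute, value, label) for readings outside the target range."""
    for minute, value in stream:
        if value < low:
            yield minute, value, "low"
        elif value > high:
            yield minute, value, "high"

# One simulated hour of readings at 15-minute intervals.
readings = [(0, 95), (15, 110), (30, 190), (45, 65), (60, 120)]
alerts = list(flag_readings(readings))
print(alerts)
```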

We are also beginning to see tantalizing hints of the future of personalized medicine in action. Cheap gene sequencing, better understanding of how drug molecules interact with our biology (and each other), and the tools and horsepower to analyze these complex interactions for a specific patient with specific biology in near real time will change how we do medicine. In the same way that Google’s AdSense took cost out of advertising by using data to target ads with precision, we’ll soon be able to make medical interventions that are much more patient-specific and cost effective.

StrataRx is based on the idea that data will improve health and health care, but we aren’t naive enough to believe that data alone solves all the problems we are facing. Health and health care are incredibly complex and multi-layered and big data analytics is only one piece of the puzzle. Solving our national crisis will also depend on policy and system changes, some of them to systems outside of health care. However, we know that data and its analysis have an important role to play in illuminating the current reality and creating those solutions.

StrataRx is a call for data scientists, technologists, health professionals, and the sector’s business leadership to convene, take part in the discussion, and make a difference!

Strata Rx — Strata Rx, being held Oct. 16-17 in San Francisco, is the first conference to bring data science to the urgent issues confronting health care.

Save 20% on registration with the code RADAR20


August 02 2012

Why microchips in pills matter

Earlier this week, Proteus announced that they have been approved by the FDA to market their ingestible microchips for pills.

Generally, the FDA approval process for devices that are totally new like this is a painful one, with much suffering. So it is a big deal for anyone to get approved for anything.

But this is a far more important accomplishment than a mere incremental improvement. It is an entirely new kind of medical device and, most importantly, a whole new potential data stream on one of the most critical issues in the delivery of health care.

Modern drugs work wonders, but it does not help a patient if the patient does not take them. Historically, and I do mean the stone age here, the ability to consistently take pills correctly has been called “compliance.” But please do not call it that. Most participants in the movement for patient rights regard that term as paternalistic.

“Adherence” is a much more respectful term (although I have certainly heard it used in a paternalistic manner). In fact, as we progress, I should define that when I say “adherence,” what I mean is “adherence to a plan that belongs to the patient.” If a pill that I am taking makes me so sick to my stomach that I cannot take it any more, then when I decide to stop taking it, I am not being “non-compliant” or “non-adherent” to a medication plan; I have changed my medication plan, and I have yet to discuss the issue with my doctor.

Still, even for patients who want to consistently take their pills according to a plan, it can be extraordinarily difficult. It is hard to remember when a pill has been taken. It is hard to remember to pick up a new supply or to call for a renewal. It is easy to forget pills on trips and run out unexpectedly far from home. Pills frequently must be taken with food, or without food, or with or without specific foods. They must be taken before bed or before breakfast. Personally, I have trouble remembering to take even a single pill consistently. But many people need to manage a tremendous number of pills, and they frequently go off one set of pills and onto another. In short, pill management is a huge mess, and it is difficult to organize anything.

The Proteus technology allows a person to wear a patch that can detect when any pill carrying a Proteus microchip is swallowed. Using that patch and a wireless connection of some kind, it is pretty simple to enable all kinds of clever “take your pill” UX. With this technology, it will finally be possible to reliably perform “quantified self” on pill regimes.
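
What “quantified self” on a pill regime might look like at its simplest: match the ingestion events detected by the patch against planned dose times and compute an adherence rate. The 30-minute matching window is an assumption:

```python
# Compare detected ingestion events against a dose plan.
# Times are minutes since midnight; the matching window is an assumption.
def adherence(planned, detected, window=30):
    """Fraction of planned doses matched by a detected ingestion event."""
    taken = 0
    remaining = sorted(detected)
    for dose_time in sorted(planned):
        for event in remaining:
            if abs(event - dose_time) <= window:
                taken += 1
                remaining.remove(event)
                break
    return taken / len(planned)

planned = [480, 720, 960, 1200]   # doses at 8:00, 12:00, 16:00, 20:00
detected = [475, 735, 1210]       # three swallows reported by the patch
rate = adherence(planned, detected)
print(rate)  # 0.75: the 16:00 dose was missed
```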

It is hard to imagine just how large of an impact this will have on the problem of pills, but “huge” is a conservative estimate. New systems will be able to have digital pharmacy dispensers in the home, which will enable schedules like “take one of these every 56 hours until you have taken 10, and then take one after that in a Fibonacci retracement.” Or complex sequences like “take A and B every two hours, take C every three hours, take D every other day and take E as needed but not more than once a day.”
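
A digital dispenser would start by expanding a regimen into concrete dose times. A minimal sketch with hypothetical drugs and intervals:

```python
# Expand a multi-drug regimen into a day's worth of concrete dose times,
# the kind of schedule a digital pharmacy dispenser could drive.
# Drug names and intervals are hypothetical.
def build_schedule(regimen, hours=24):
    """Expand {drug: interval_in_hours} into a sorted list of (hour, drug)."""
    schedule = []
    for drug, interval in regimen.items():
        for hour in range(0, hours, interval):
            schedule.append((hour, drug))
    return sorted(schedule)

# A and B every two hours, C every three hours.
plan = build_schedule({"A": 2, "B": 2, "C": 3})
print(plan[:4])  # [(0, 'A'), (0, 'B'), (0, 'C'), (2, 'A')]
```

Conditional rules like “every other day” or “as needed, but not more than once a day” layer on top of this, which is exactly where ingestion detection earns its keep.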

This should not be taken to mean that this technology will take the thought out of taking pills. Instead, think of it as a mechanism to enable greater self-knowledge through numbers. Already, Proteus engineer Nancy Dougherty has demonstrated that the technology could revolutionize placebo research. Dougherty’s work is utterly surprising and delightful, and it demonstrates that the potential uses for this technology are unlimited.

As with any game-changing tech, Proteus will have to make sure it uses its new super power for good. I see no reason at all why they would not be in a position to do that (and yes, I fully recognize that many people will disagree).

A quick survey of pill management

In order to celebrate this accomplishment, I thought I would take a quick look at the state of pill management until now. Here is an assortment of the pill management systems on sale at my local pharmacy:

Untitled

In the following picture, can you tell these pills apart? One is over the counter and one is prescription. Can you imagine trying to sort through these every day?

Two pills

Pictured below are the “big” pill organizers. The one on the bottom is enormous. I think you could cook cupcakes in it. I seriously hope I will never have to use something like that. Note that the ones with the little blue stickers are specifically designed to be easy to open for patients with arthritis.

Big Pill Organizers

I am releasing these and all of my “pill box” Flickr set under a Creative Commons license to mark the Proteus accomplishment.


Photo of pills: “Untitled” by tr0tt3r, on Flickr.


July 25 2012

Democratizing data, and other notes from the Open Source convention

There has been an enormous amount of talk over the past few years about open data and what it can do for society, but proponents have largely come to admit: data is not democratizing in itself. This topic is hotly debated, and a nice summary of the viewpoints is available in this PDF containing articles by noted experts. At the Open Source convention last week, I thought a lot about the democratizing potential of data and how it could be realized.

Who benefits from data sets

At a high level, large businesses and other well-funded organizations have three natural advantages over the general public in the exploitation of data sets:

  • The resources to gather the data
  • The resources to do the necessary programming to crunch and interpret the data
  • The resources to act on the results

These advantages will probably always exist, but data can be useful to the public too. We have some tricks that can compensate for each of the large institutions’ advantages:

  • Crowdsourcing can create data sets that can help everybody, including the formation of new businesses. OpenStreetMap, an SaaS project based on open source software, is a superb example. Its maps have been built up through years of contributions by people trying to support their communities, and it supports interesting features missing from proprietary map projects, such as tools for laying out bike paths.

  • Data-crunching is where developers, like those at the Open Source convention, come in. Working at non-profits, during weekend challenges, or just on impulse, they can code up the algorithms that make sense of data sets, along with apps that visualize the results and accept input from people with less technical training.

  • Some apps, such as reports of neighborhood crime or available health facilities, can benefit individuals, but we can really drive progress by joining together in community organizations or other associations that use the data. I saw a fantastic presentation by high school students in the Boston area who demonstrated a correlation between funding for summer jobs programs and lowered homicides in the inner city–and they won more funding from the Massachusetts legislature with that presentation.
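
The students’ analysis boils down to a correlation between two series. A sketch with invented figures, where only the method matters:

```python
# Pearson correlation between program funding and homicide counts.
# All figures are invented for illustration; only the method matters.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

funding = [1.0, 1.5, 2.0, 2.5, 3.0]   # invented: program funding, $M/year
homicides = [42, 39, 33, 30, 26]      # invented: homicides in the same year

r = pearson(funding, homicides)
print(round(r, 3))
```

A strongly negative r is what made the presentation persuasive, though of course correlation alone does not prove the funding caused the drop.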

Health care track

This year was the third in which the Open Source convention offered a health care track. IT plays a growing role in health care, but a lot of the established institutions are creaking forward slowly, encountering lots of organizational and cultural barriers to making good use of computers. This year our presentations clustered around areas where innovation is most robust: personal tracking, using data behind the scenes to improve care, and international development.

Open source coders Fred Trotter and David Neary gave popular talks about running and tracking one’s achievements. Bob Evans discussed a project named PACO that he started at Google to track productivity by individuals and in groups of people who come together for mutual support, while Anne Wright and Candide Kemmler described the ambitious BodyTrack project. Jason Levitt gave a talk on the science of sitting (and how to make it better for you).

In a high-energy presentation, systems developer Shahid Shah described the cornucopia of high-quality, structured data that will be made available when devices are hooked together. “Gigabytes of data is being lost every minute from every patient hooked up to hospital monitors,” he said. DDS, HTTP, and XMPP are among the standards that will make an interconnected device mesh possible. Michael Italia described the promise of genome sequencing and the challenges it raises, including storage requirements and the social impacts of storing sensitive data about people’s propensity for disease. Mohamed ElMallah showed how it was sometimes possible to work around proprietary barriers in electronic health records and use them for research.

Representatives from OpenMRS and IntraHealth International spoke about the difficulties and successes of introducing IT into very poor areas of the world, where systems need to be powered by their own electricity generators. A maintainable project can’t be dropped in by external NGO staff, but must cultivate local experts and take a whole-systems approach. Programmers in Rwanda, for instance, have developed enough expertise by now in OpenMRS to help clinics in neighboring countries install it. Leaders of OSEHRA, which is responsible for improving the Department of Veterans Affairs’ VistA and developing a community around it, spoke to a very engaged audience about their work untangling and regularizing twenty years’ worth of code.

In general, I was pleased with the modest growth of the health care track this year (most sessions drew about thirty people, and several drew a lot more) and with both the energy and the expertise of the people who came. Many attendees play an important role in furthering health IT.

Other thoughts

The Open Source convention reflected much of the buzz surrounding developments in computing. Full-day sessions on OpenStack and Gluster were totally filled. A focus on developing web pages came through in the popularity of talks about HTML5 and jQuery (now a platform all its own, with extensions sprouting in all directions). Perl still has a strong community. A few years ago, Ruby on Rails was the must-learn platform, and knock-off derivatives appeared in almost every other programming language imaginable. Now the Rails paradigm has been eclipsed (at least in the pursuit of learning) by Node.js, which was recently ported to Microsoft platforms, and its imitators.

No two OSCons are the same, but the conference continues to track what matters to developers and IT staff and to attract crowds every year. I enjoyed nearly all the speakers, who often pump excitement into the driest of technical topics through their own sense of continuing wonder. This is an industry where imagination’s wildest thoughts become everyday products.
