December 22 2012

14 big trends to watch in 2013

2012 was a remarkable year for technology, government and society. In our 2012 year in review, we looked back at 10 trends that mattered. Below, we look ahead to the big ideas and technologies that will change the world, again.

Liquid data

In 2012, people still kept publishing data in PDFs or trapping it in paper. In 2013, as entrepreneurs and venture capitalists look to use government data as a platform, civic startups that digitize documents will help make data not just open but liquid, flowing across sectors previously stuck in silos.

Networked accountability

In 2012, mobile technology, social media and the Internet gave first responders and government officials new ways to improve situational awareness during natural disasters like Hurricane Sandy. A growing number of free or low-cost online tools empowers people to do more than just donate money or blood: now they can donate time or expertise or, increasingly, act as sensors. In 2013, expect mobile sensors, “sensor journalism” and efforts like Safecast to add to that skein of networked accountability.

Data as infrastructure

When natural disasters loomed in 2012, public open government data feeds became critical infrastructure. In 2013, more of the public sector will see open data as a strategic national resource that merits stewardship and investment.

Social coding

The same peer networks that helped build the Internet are forming around building digital civic infrastructure, from collaboration between newsrooms to open government hackers working together around the country. 2012 was a breakout year for GitHub’s use in government and media. 2013 will be even bigger.

Data commons

Next year, more people will take a risk to tap into the rewards of a health data commons. Open science will be part of the reward equation. (Don’t expect revolutionary change here, just evolutionary change.)

Lean government

The idea of “lean government” gained some traction in 2012, as cities and agencies experimented with applying the lean startup approach to the public sector. With GOV.UK, the British government both redefined the online government platform and showed how citizen-centric design can be done right. In 2013, the worth of a lean government approach will be put to the test when the work of the White House Innovation Fellows is released.

Smart government

Gartner analyst Andrea DiMaio is now looking at the intersection of government and technology through the lens of “smart government.” In 2013, I expect to hear much more about that, from smartphones to smarter cities to smart disclosure.

Sharing economy

Whether it’s co-working, bike sharing, exchanging books and videos, or cohabiting hackerspaces and community garden spaces, there are green shoots throughout the economy that suggest the way we work, play and learn is changing due to the impact of connection technologies and the Great Recession. One of the most dynamic sectors of the sharing economy is the trend toward more collaborative consumption — and the entrepreneurs have followed, from Airbnb to Getable to Freecycle. The private sector and public sector are saving real money through collaborative consumption. Given support from across the ideological spectrum, expect more adoption in 2013.

Preemptive health care

Data science and new health IT offer an extraordinary opportunity to revolutionize health care, a combination that gave Dr. Atul Gawande hope when we spoke in 2012. In 2013, watch for a shift toward “preemptive health care,” as behavioral science becomes part of how accountable care organizations try to keep patients healthy.

Predictive data analytics

Just as doctors hope to detect disease earlier, professionals across industry and the public sector will look to make sense of the data deluge using new tools next year. Predictive data analytics saved lives and taxpayer dollars in New York City in 2012. U.S. cities have now formed a working group to share predictive data analytics skills. Look for data science to be applied to regulatory data more in 2013.

Algorithmic censorship and algorithmic transparency

Expect speech online to continue to be a flashpoint next year. As algorithmic censorship becomes a common approach to moderation on social networks and predictive analytics are applied in law enforcement, media, commerce and regulation, there will be even more interest in understanding bias in these systems and the civil rights implications of big data.

Personal data ownership

Should the Freedom of Information Act apply to private companies? In 2012, a report from the World Economic Forum and McKinsey Consulting described personal data as a new asset class. Much of the time, however, people are separated from their personal data. In 2013, expect to see more data disclosed to consumers and citizens and applied in new choice engines.

Open journalism

In 2012, Guardian Editor Alan Rusbridger shared 10 principles for open journalism. While the process of gathering and sharing news in a hyper-networked environment will only grow more messy as more people gain access to tools to publish around the world, this trend isn’t going backward. Despite the trend toward the “broadcast-ification of social media,” there are many more of us listening and sharing now than ever before. Expect journalism to be a more participatory experience in 2013.

Automation, artificial intelligence and employment

The combination of big data, automation and artificial intelligence looked like something new in 2012, from self-driving cars to e-discovery software to “robojournalism” to financial advisers to medical diagnostics. Wherever it’s possible, “software is eating the world.” In 2013, the federal government will need an innovation agenda to win the race against the machines.

September 13 2012

Growth of SMART health care apps may be slow, but inevitable

This week has been teeming with health care conferences, particularly in Boston, and was declared by President Obama to be National Health IT Week as well. I chose to spend my time at the second ITdotHealth conference, where I enjoyed many intense conversations with some of the leaders in the health care field, along with news about the SMART Platform at the center of the conference, the excitement of a Clayton Christensen talk, and the general panache of hanging out at the Harvard Medical School.

SMART, funded by the Office of the National Coordinator (ONC) in Health and Human Services, is an attempt to slice through the Babel of EHR formats that prevent useful applications from being developed for patient data. Imagine if something like the wealth of mash-ups built on Google Maps (crime sites, disaster markers, restaurant locations) existed for your own health data. This is what SMART hopes to do. They can already showcase some working apps, such as overviews of patient data for doctors, and a real-life implementation of the heart disease user interface proposed by David McCandless in WIRED magazine.

The premise and promise of SMART

At this conference, the presentation that gave me the most far-reaching sense of what SMART can do was by Nich Wattanasin, project manager for i2b2 at Partners. His implementation showed SMART not just as an enabler of individual apps, but as an environment where a user could choose the proper app for his immediate needs. For instance, a doctor could use an app to search for patients in the database matching certain characteristics, then select a particular patient and choose an app that exposes certain clinical information on that patient. In this way, SMART can combine the power of many different apps that had been developed in an uncoordinated fashion, and make a comprehensive data analysis platform from them.

Another illustration of the value of SMART came from lead architect Josh Mandel. He pointed out that knowing a child’s blood pressure means little until one runs it through a formula based on the child’s height and age. Current EHRs can show you the blood pressure reading, but none does the calculation that shows you whether it’s normal or dangerous. A SMART app has been developed to do that. (Another speaker claimed that current EHRs in general neglect the special requirements of child patients.)
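
To make that concrete, here is a minimal sketch of the kind of check such an app performs. The cutoff values and the norms lookup are invented placeholders, not clinical data; real norms come from published pediatric tables keyed to age, sex, and height percentile.

```python
# Illustrative sketch only: the cutoffs and the `norms` table below are
# invented placeholders, not real pediatric norms, which come from
# published tables keyed to age, sex, and height percentile.

def bp_category(systolic_mmhg, age_years, height_pct, norms):
    """Classify a child's systolic blood pressure against age/height norms."""
    p90, p95 = norms[(age_years, height_pct)]  # hypothetical lookup table
    if systolic_mmhg < p90:
        return "normal"
    if systolic_mmhg < p95:
        return "prehypertensive"
    return "hypertensive"

# Hypothetical entry: (age 8, 50th height percentile) -> (90th, 95th) cutoffs
norms = {(8, 50): (106, 110)}
print(bp_category(112, 8, 50, norms))  # -> hypertensive
```

The point of the example is that the raw reading (112 mmHg) is meaningless on its own; the same number can be normal for one child and dangerous for another.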

SMART is a close companion to the Indivo patient health record. Both of these, along with the i2b2 data exchange system, were covered in an article from an earlier conference at the medical school. Let’s see where platforms for health apps are headed.

How far we’ve come

As I mentioned, this ITdotHealth conference was the second to be held. The first took place in September 2009, and people following health care closely can be encouraged by reading the notes from that earlier instantiation of the discussion.

In September 2009, the HITECH act (part of the American Recovery and Reinvestment Act) had defined the concept of “meaningful use,” but nobody really knew what was expected of health care providers, because the ONC and the Centers for Medicare & Medicaid Services did not release their final Stage 1 rules until more than a year after this conference. Aneesh Chopra, then the Federal CTO, and Todd Park, then the CTO of Health and Human Services, spoke at the conference, but their discussion of health care reform was a “vision.” A surprisingly strong statement for patient access to health records was made, but speakers expected it to be accomplished through the CONNECT Gateway, because there was no Direct. (The first message I could find on the Direct Project forum dated back to November 25, 2009.) Participants had a sophisticated view of EHRs as platforms for applications, but SMART was just a “conceptual framework.”

So in some ways, ONC, Harvard, and many other contributors to modern health care have accomplished an admirable amount over three short years. But in other ways we are frustratingly stuck. For instance, few EHR vendors offer API access to patient records, and existing APIs are proprietary. The only SMART implementation for a commercial EHR mentioned at this week’s conference was one created on top of the Cerner API by outsiders (although Cerner was cooperative). Jim Hansen of Dossia told me that there is little point in encouraging programmers to create SMART apps while the records are still behind firewalls.

Keynotes

I couldn’t call a report on ITdotHealth complete without an account of the two keynotes by Christensen and Eric Horvitz, although these took off in different directions from the rest of the conference and served as hints of future developments.

Christensen is still adding new twists to the theories laid out in The Innovator’s Dilemma and other books. He has been a backer of the SMART project from the start and spoke at the first ITdotHealth conference. Consistent with his famous theory of disruption, he dismisses hopes that we can reduce costs by reforming the current system of hospitals and clinics. Instead, he projects the way forward through technologies that will enable less-trained practitioners to successively take over tasks that used to be performed in higher-cost settings. Thus, nurse practitioners will be able to do more and more of what doctors do, primary care physicians will do more of what we currently delegate to specialists, and ultimately patients and their families will treat themselves.

He also has a theory about the progression toward openness. Radically new technologies start out tightly integrated, and because they benefit from this integration they tend to be created by proprietary companies with high profit margins. As the industry comes to understand the products better, they move toward modular, open standards and become commoditized. Although one might conclude that EHRs, which have been around for some forty years, are overripe for open solutions, I’m not sure we’re ready for that yet. That’s because the problems the health care field needs to solve are quite different from the ones current EHRs solve. SMART is an open solution all around, but it could serve a marketplace of proprietary solutions and reward some of the venture capitalists pushing health care apps.

While Christensen laid out the broad environment for change in health care, Horvitz gave us a glimpse of what he hopes the practice of medicine will be in a few years. A distinguished scientist at Microsoft, Horvitz has been using machine learning to extract patterns from sets of patient data. For instance, in a collection of data about equipment use, ICD codes, vital signs, etc. from 300,000 emergency room visits, his team found variables that predicted a readmission within 14 days. Out of 10,000 variables, they found 500 that were relevant, but because the relational database was strained by retrieving so much data, they reduced the set to 23 variables to roll out as a product.
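
As a rough sketch of that winnowing workflow: the talk did not describe the team’s tooling, so scikit-learn and the synthetic data below are stand-ins for the real pipeline, not a reconstruction of it.

```python
# Sketch of the variable-winnowing workflow described above, using
# scikit-learn as a stand-in; the data and column counts are synthetic.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10000))  # visits x candidate variables
y = rng.integers(0, 2, size=1000)   # 1 = readmitted within 14 days

# Screen 10,000 candidate variables down to the 500 most predictive ...
screen = SelectKBest(f_classif, k=500).fit(X, y)
X_500 = screen.transform(X)

# ... then keep only 23 for deployment, so scoring stays cheap against
# an operational database.
top23 = np.argsort(screen.scores_[screen.get_support()])[-23:]
model = LogisticRegression(max_iter=1000).fit(X_500[:, top23], y)
print(model.predict_proba(X_500[:2, top23])[:, 1])  # readmission risk scores
```

The two-stage reduction mirrors the trade-off Horvitz described: statistical relevance gets you from 10,000 variables to 500, and operational constraints get you from 500 to 23.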

Another project predicted the likelihood of medical errors from patient states and management actions. This was meant to address a study claiming that most medical errors go unreported.

A study that would make the privacy-conscious squirm was based on the willingness of individuals to provide location data to researchers. The researchers tracked searches on Bing along with visits to hospitals and found out how long it took between searching for information on a health condition and actually going to do something about it. (Horvitz assured us that personally identifiable information was stripped out.)

His goal is to go beyond measuring known variables and to find new ones that could be hidden causes. But he warned that, as is often the case, causality is hard to prove.

As prediction turns up patterns, the data could become a “fabric” on which many different apps are based. Although Horvitz didn’t talk about combining data sets from different researchers, it’s clearly suggested by this progression. But proper de-identification and flexible patient consent become necessities for data combination. Horvitz also hopes to move from predictions to decisions, which he says is needed to truly move to evidence-based health care.

Did the conference promote more application development?

My impression (I have to admit I didn’t check with Dr. Ken Mandl, the organizer of the conference) was that this ITdotHealth aimed to persuade more people to write SMART apps, provide platforms that expose data through SMART, and contribute to the SMART project in general. I saw a few potential app developers at the conference, and a good number of people with their hands on data who were considering the use of SMART. I think they came away favorably impressed–maybe by the presentations, maybe by conversations that the meeting allowed them to have with SMART developers–so we may see SMART in wider use soon. Participants came far for the conference; I talked to one from Geneva, for instance.

The presentations were honest enough, though, to show that SMART development is not for the faint-hearted. On the supply side–that is, for people who have patient data and want to expose it–you have to create a “container” that presents data in the format expected by SMART. Furthermore, you must make sure the data conforms to industry standards, such as SNOMED for diagnoses. This could be a lot of conversion.
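
As a toy illustration of that conversion step, the sketch below maps invented local diagnosis labels onto SNOMED CT concept codes; real mappings are vastly larger and professionally curated rather than hand-written.

```python
# Toy sketch of the "conversion" burden: mapping a site's local diagnosis
# labels onto SNOMED CT codes before data can flow through a SMART
# container. The local labels are invented; the SNOMED codes are real
# concept identifiers, but a production mapping is far larger.
LOCAL_TO_SNOMED = {
    "DM2":    "44054006",   # Diabetes mellitus type 2
    "HTN":    "38341003",   # Hypertensive disorder
    "ASTHMA": "195967001",  # Asthma
}

def to_snomed(local_code):
    code = LOCAL_TO_SNOMED.get(local_code)
    if code is None:
        # Unmapped codes are exactly where the conversion work piles up.
        raise ValueError(f"no SNOMED mapping for local code {local_code!r}")
    return code

print(to_snomed("DM2"))  # -> 44054006
```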

On the application side, you may have to deal with SMART’s penchant for Semantic Web technologies such as OWL and SPARQL. This will scare away a number of developers. However, speakers who presented SMART apps at the conference said development was fairly easy. No one matched the developer who said their app was ported in two days (most of which were spent reading the documentation), but development times could usually be measured in months.
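
For a flavor of what that Semantic Web stack involves, here is a small example that uses the rdflib library to run a SPARQL query over a few RDF triples. The namespace and record contents are invented for illustration and are not the actual SMART vocabularies.

```python
# Tiny rdflib/SPARQL example in the spirit of SMART's RDF payloads.
# The namespace and triples are invented; the real SMART vocabularies differ.
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/records#")
g = Graph()
patient = URIRef("http://example.org/patients/123")
g.add((patient, EX.hasMedication, Literal("lisinopril")))
g.add((patient, EX.hasMedication, Literal("metformin")))

results = g.query("""
    PREFIX ex: <http://example.org/records#>
    SELECT ?med WHERE { ?patient ex:hasMedication ?med . }
""")
for row in results:
    print(row.med)  # -> lisinopril, metformin
```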

Mandl spent some time airing the idea of a consortium to direct SMART. It could offer conformance tests (but probably not certification, which is a heavy-weight endeavor) and interact with the ONC and standards bodies.

After attending two conferences on SMART, I’ve got the impression that one of its most powerful concepts is that of an “app store for health care applications.” But correspondingly, one of the main sticking points is the difficulty of developing such an app store. No one seems to be taking it on. Perhaps SMART adoption is still at too early a stage.

Once again, we are butting our heads up against the walls erected by EHRs to keep data from being extracted for useful analysis. And behind this stands the resistance of providers, the users of EHRs, to give their data to their patients or to researchers. This theme dominated a federal government conference on patient access.

I think SMART will be more widely adopted over time because it is the only useful standard for exposing patient data to applications, and innovation in health care demands these apps. Accountable Care Organizations, smarter clinical trials (I met two representatives of pharmaceutical companies at the conference), and other advances in health care require data crunching, so those apps need to be written. And that’s why people came from as far as Geneva to check out SMART–there’s nowhere else to find what they need. The technical requirements to understand SMART seem to be within the developers’ grasps.

But a formidable phalanx of resistance remains, from those who don’t see the value of data to those who want to stick to limited exchange formats such as CCDs. And as Sean Nolan of Microsoft pointed out, one doesn’t get very far unless the app can fit into a doctor’s existing workflow. Privacy issues were also raised at the conference, because patient fears could stymie attempts at sharing. Given all these impediments, the government is doing what it can; perhaps the marketplace will step in to reward those who choose a flexible software platform for innovation.

September 05 2012

The future of medicine relies on massive collection of real-life data

Health care costs rise as doctors try batches of treatments that don’t work in search of one that does. Meanwhile, drug companies spend billions on developing each drug and increasingly end up with nothing to show for their pains. This is the alarming state of medical science today. Shahid Shah, device developer and system integrator, sees a different paradigm emerging. In this interview at the Open Source convention, Shah talks about how technologies and new ways of working can open up medical research.

Shah will be speaking at Strata Rx in October.

Highlights from the full video interview include:

  • Medical science will come unstuck from the clinical trials it has relied on for a couple hundred years, and use data collected in the field [Discussed at the 0:30 mark]
  • Failing fast in science [Discussed at the 2:38 mark]
  • Why and how patients will endorse the system [Discussed at the 3:00 mark]
  • Online patient communities instigating research [Discussed at the 3:55 mark]
  • Consumerization of health care [Discussed at the 5:15 mark]
  • The pharmaceutical company of the future: how research will get faster [Discussed at the 6:00 mark]
  • Medical device integration to preserve critical data [Discussed at the 7:20 mark]

You can view the entire conversation in the following video:

Strata Rx — Strata Rx, being held Oct. 16-17 in San Francisco, is the first conference to bring data science to the urgent issues confronting healthcare.

Save 20% on registration with the code RADAR20

August 30 2012

A marriage of data and caregivers gives Dr. Atul Gawande hope for health care

Dr. Atul Gawande (@Atul_Gawande) has been a bard in the health care world, straddling medicine, academia and the humanities as a practicing surgeon, medical school professor, best-selling author and staff writer at the New Yorker magazine. His long-form narratives and books have helped illuminate complex systems and wicked problems for a broad audience.

One recent feature that continues to resonate for those who wish to apply data to the public good is Gawande’s New Yorker piece “The Hot Spotters,” where Gawande considered whether health data could help lower medical costs by giving the neediest patients better care. That story brings home the challenges of providing health care in a city, from cultural change to gathering data to applying it.

This summer, after meeting Gawande at the 2012 Health DataPalooza, I interviewed him about hot spotting, predictive analytics, networked transparency, health data, feedback loops and the problems that technology won’t solve. Our interview, lightly edited for content and clarity, follows.

Given what you’ve learned in Camden, N.J. — the backdrop for your piece on hot spotting — do you feel hot spotting is an effective way for cities and people involved in public health to proceed?

Gawande: The short answer, I think, is “yes.”

Here we have this major problem of both cost and quality — and we have signs that some of the best places that seem to do the best jobs can be among the least expensive. How you become one of those places is a kind of mystery.

It really parallels what happened in the police world. Here is something that we thought was an impossible problem: crime. Who could possibly lower crime? One of the ways we got a handle on it was by directing policing to the places where there was the most crime. It sounds kind of obvious, but it was not apparent that crime is concentrated and that medical costs are concentrated.

The second thing I knew but hadn’t put two and two together about is that the sickest people get the worst care in the system. People with complex illness just don’t fit into 20-minute office visits.

The work in Camden was emblematic of work happening in pockets all around the country where you prioritize. As soon as you look at the system, you see hundreds, thousands of things that don’t work properly in medicine. But when you prioritize by saying, “For the sickest people — the 5% who account for half of the spending — let’s look at what their $100,000 moments are,” you then understand it’s strengthening primary care and it’s the ability to manage chronic illness.

It’s looking at a few acute high-cost, high-failure areas of care, such as how heart attacks and congestive heart failure are managed in the system; looking at how renal disease patients are cared for; or looking at a few things in the commercial population, like back pain, being a huge source of expense. And then also end-of-life care.

With a few projects, it became more apparent to me that you genuinely could transform the system. You could begin to move people from depending on the most expensive places where they get the least care to places where you actually are helping people achieve goals of care in the most humane and least wasteful ways possible.

The data analytics office in New York City is doing fascinating predictive analytics. That approach could have transformative applications in health care, but it’s notable how careful city officials have been about publishing certain aspects of the data. How do you think about the relative risks and rewards here, including balancing social good with the need to protect people’s personal health data?

Gawande: Privacy concerns can sometimes be a barrier, but I haven’t seen them be the major barrier here. There are privacy concerns in the police data as well, in the data about households.

The reason it works well for the police is not just because you have a bunch of data geeks who are poking at the data and finding interesting things. It’s because they’re paired with people who are responsible for responding to crime, and above all, reducing crime. The commanders who have the responsibility have a relationship with the people who have the data. They’re looking at their population saying, “What are we doing to make the system better?”

That’s what’s been missing in health care. We have not married the people who have the data with people who feel responsible for achieving better results at lower costs. When you put those people together, they’re usually within a system, and within a system, there is no privacy barrier to being able to look and say, “Here’s what we can be doing in this health system,” because it’s often that particular.

The beautiful aspect of the work in New York is that it’s not at a terribly abstract level. Yes, they’re abstracting the data, but they’re also helping the police understand: “It’s this block that’s the problem. It’s shifted in the last month into this new sector. The pattern of the crime is that it looks more like we have a problem with domestic violence. Here are a few more patterns that might give you a clue about what you can go in and do.” There’s this give and take about what can be produced and achieved.

That, to me, is the gold in the health care world — the ability to peer in and say: “Here are your most expensive patients and your sickest patients. You didn’t know it, but here, there’s an alcohol and drug addiction issue. These folks are having car accidents and major trauma and turning up in the emergency rooms and then being admitted with $12,000 injuries.”

That’s a system that could be improved and, lo and behold, there’s an intervention here that’s worked before to slot these folks into treatment programs, which by and large, we don’t do at all.

That sense of using the data to help you solve problems requires two things. It requires data geeks and it requires the people in a system who feel responsible, the way that Bill Bratton made commanders feel responsible in the New York police system for the rate of crime. We haven’t had physicians who felt that they were responsible for 10,000 ICU patients and how well they do on everything from the cost to how long they spend in the ICU.

Health data is creating opportunities for more transparency into outcomes, treatments and performance. As a practicing physician, do you welcome the additional scrutiny that such collective intelligence provides, or does it concern you?

Gawande: I think that transparency of our data is crucial. I’m not sure that I’m with the majority of my colleagues on this. The concerns are that the data can be inaccurate, that you can overestimate or underestimate the sickness of the people coming in to see you, and that my patients aren’t like your patients.

That said, I have no idea who gets better results at the kinds of operations I do and who doesn’t. I do know who has high reputations and who has low reputations, but it doesn’t necessarily correspond to the kinds of results they get. As long as we are not willing to open up data to let people see what the results are, we will never actually learn.

The experience of what happens in fields where the data is open is that it’s the practitioners themselves that use it. I’ll give a couple of examples. Mortality for childbirth in hospitals has been available for a century. It’s been public information, and the practitioners in that field have used that data to drive the death rates for infants and mothers down from the biggest killer in people’s lives for women of childbearing age and for newborns into a rarity.

Another field that has been able to do this is cystic fibrosis. They had data for 40 years on the performance of the centers around the country that take care of kids with cystic fibrosis. They shared the data privately. They did not tell centers how the other centers were doing. They just told you where you stood relative to everybody else and they didn’t make that information public. About four or five years ago, they began making that information public. It’s now available on the Internet. You can see the rating of every center in the country for cystic fibrosis.

Several of the centers had said, “We’re going to pull out because this isn’t fair.” Nobody ended up pulling out. They did not lose patients in hordes and go bankrupt unfairly. They were able to see from one another who was doing well and then go visit and learn from one another.

I can’t tell you how fundamental this is. There needs to be transparency about our costs and transparency about the kinds of results. It’s murky data. It’s full of lots of caveats. And yes, there will be the occasional journalist who will use it incorrectly. People will misinterpret the data. But the broad result, the net result of having it out there, is so much better for everybody involved that it far outweighs the value of closing it up.

U.S. officials are trying to apply health data to improve outcomes, reduce costs and stimulate economic activity. As you look at the successes and failures of these sorts of health data initiatives, what do you think is working and why?

Gawande: I get to watch from the sidelines, and I was lucky to participate in Datapalooza this year. I mostly see that it seems to be following a mode that’s worked in many other fields, which is that there’s a fundamental role for government to be able to make data available.

When you work in complex systems that involve multiple people who have to, in health care, deal with patients at different points in time, no one sees the net result. So, no one has any idea of what the actual experience is for patients. The open data initiative, I think, has innovative people grabbing the data and showing what you can do with it.

Connecting the data to the physical world is where the cool stuff starts to happen. What are the kinds of costs to run the system? How do I get people to the right place at the right time? I think we’re still in primitive days, but we’re only two or three years into starting to make something more than just data on bills available in the system. Even that wasn’t widely available — and it usually was old data and not very relevant to this moment in time.

My concern all along is that data needs to be meaningful to both the patient and the clinician. It needs to be able to connect the abstract world of data to the physical world of what really happens, which means it has to be timely data. A six-month turnaround on data is not great. Part of what has made Wal-Mart powerful, for example, is they took retail operations from checking their inventory once a month to checking it once a week and then once a day and then in real-time, knowing exactly what’s on the shelves and what’s not.

That equivalent is what we’ll have to arrive at if we’re to make our systems work. Timeliness, I think, is one of the under-recognized but fundamentally powerful aspects because we sometimes over-prioritize the comprehensiveness of data and then it’s a year old, which doesn’t make it all that useful. Having data that tells you something that happened this week, that’s transformative.

Are you using an iPad at work?

Gawande: I do use the iPad here and there, but it’s not readily part of the way I can manage the clinic. I would have to put in a lot of effort to make it actually useful in my clinic.

For example, I need to be able to switch between radiology scans and past records. I predominantly see cancer patients, so they’ll have 40 pages of records that I need to have in front of me, from scans to lab tests to previous notes by other folks.

I haven’t found a better way than paper, honestly. I can flip between screens on my iPad, but it’s too slow and distracting, and it doesn’t let me talk to the patient. It’s fun if I can pull up a screen image of this or that and show it to the patient, but it just isn’t that integrated into practice.

What problems are immune to technological innovation? What will need to be changed by behavior?

Gawande: At some level, we’re trying to define what great care is. Great care means being able to provide optimally knowledgeable care in the right time and the right way for people and not wasting resources.

Some of it’s crucially aided by information technology that connects information to where it needs to be so that good decision-making happens, both by patients and by the clinicians who work with them.

If you’re going to be able to make health care work better, you’ve got to be able to make that system work better for people, more efficiently and less wastefully, less harmfully and with much better teamwork. I think that information technology is a tool in that, but fundamentally you’re talking about making teams that can go from being disconnected cowboys in care to pit crews that actually work together toward solving a problem.

In a football team or a pit crew, technology is really helpful, but it’s only a tiny part of what makes that team great. What makes the team great is that they know what they’re aiming to do, they’re very clear about their goals, and they are able to make sure they execute every basic thing that’s crucial for that success.

What do you worry about in this surge of interest in more data-driven approaches to medicine?

Gawande: I worry the most about a disconnect between the people who have to use the information and technology and tools, and the people who make them. We see this in the consumer world. Fundamentally, there is not a single [health] application that is remotely like my iPod, which is instantly usable. There are a gazillion ways in which information would make a huge amount of difference.

That sense of being able to understand the world of the user, the task that’s accomplished and the complexity of what they have to do, and connecting that to the people making the technology — there just aren’t that many lines of marriage. In many of the companies that have some of the dominant systems out there, I don’t see signs that that’s necessarily going to get any better.

If people gain access to better information about the consequences of various choices, will that lead to improved outcomes and quality of life?

Gawande: That’s where the art comes in. There are problems because you lack information, but when you have information like “you shouldn’t drink three cans of Coke a day — you’re going to put on weight,” then having that information is not sufficient for most people.

Understanding what is sufficient to be able to either change the care or change the behaviors that we’re concerned about is the crux of what we’re trying to figure out and discover.

When the information is presented in a really interesting way, people have gradually discovered — for example, having a little ball on your dashboard that tells you when you’re accelerating too fast and burning off extra fuel — how that begins to change the actual behavior of the person in the car.

No amount of presenting the information that you ought to be driving in a more environmentally friendly way ends up changing anything. It turns out that change requires the psychological nuance of presenting the information in a way that provokes the desire to actually do it.

We’re at the very beginning of understanding these things. There’s also the same sorts of issues with clinician behavior — not just information, but how you are able to foster clinicians to actually talk to one another and coordinate when five different people are involved in the care of a patient and they need to get on the same page.

That’s why I’m fascinated by the police work, because you have the data people, but they’re married to commanders who have responsibility and feel responsibility for looking out on their populations and saying, “What do we do to reduce the crime here? Here’s the kind of information that would really help me.” And the data people come back to them and say, “Why don’t you try this? I’ll bet this will help you.”

It’s that give and take that ends up being very powerful.

Strata Rx — Strata Rx, being held Oct. 16-17 in San Francisco, is the first conference to bring data science to the urgent issues confronting health care.

Save 20% on registration with the code RADAR20

August 29 2012

Analyzing health care data to empower patients

The stress of falling seriously ill often drags along the frustration of having no idea what the treatment will cost. We’ve all experienced the maddening stream of seemingly endless hospital bills, and testimony by E-patient Dave DeBronkart and others shows just how absurd U.S. payment systems are.

So I was happy to seize the opportunity to ask questions of three researchers from Castlight Health about the service they’ll discuss at the upcoming Strata Rx conference on data in health care.

Castlight casts its work in the framework of a service to employers and consumers. But make no mistake about it: they are a data-rich research operation, and their consumers become empowered patients (e-patients) who can make better choices.

As Arjun Kulothungun, John Zedlewski, and Eugenia Bisignani wrote to me, “Patients become empowered when actionable information is made available to them. In health care, like any other industry, people want high quality services at competitive prices. But in health care, quality and cost are often impossible for an average consumer to determine. We are proud to do the heavy lifting to bring this information to our users.”

Following are more questions and answers from the speakers:

1. Tell me a bit about what you do at Castlight and at whom you aim your services.

We work together in the Research team at Castlight Health. We provide price and quality information to our users for most common health care services, including those provided by doctors, hospitals, labs, and imaging facilities. This information is provided to patients through a user-friendly web interface and mobile app that shows their different healthcare options customized to their health care plan. Our research team has built a sophisticated pricing system that factors in a wide variety of data sources to produce accurate prices for our users.

At a higher level, this fits into our company’s drive toward healthcare transparency, to help users better understand and navigate their healthcare options. Currently, we sell this product to employers to be offered as a benefit to their employees and their dependents. Our product is attractive to self-insured employers who operate a high-deductible health plan. High-deductible health plans motivate employees to explore their options, since doing so helps them save on their healthcare costs and find higher quality care. Our product helps patients easily explore those options.

2. What kinds of data do you use? What are the challenges involved in working with this data and making it available to patients?

We bring in data from a variety of sources to model the financial side of the healthcare industry, so that we can accurately represent the true cost of care to our users. One of the challenges we face is that the data is often messy. This is due to the complex ways that health care claims are adjudicated, and the largely manual methods of data entry. Additionally, provider data is not highly standardized, so it is often difficult to match data from different sources. Finally, in a lot of cases the data is sparse: some health care procedures are frequent, but others are only seldom performed, so it is more challenging to determine their prices.

The variability of care received also presents a challenge, because the exact care a patient receives during a visit cannot always be predicted ahead of time. A single visit to a doctor can yield a wide array of claim line items, and the patient is subsequently responsible for the sum of these services. Thus, our intent is to convey the full cost of the care patients are to receive. We believe patients are interested in understanding their options in a straightforward way, and that they don’t think in terms of claim line items and provider billing codes. So we spend a lot of time determining the best way to reflect the total cost of care to our users.
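
As a minimal sketch of that rollup, with invented line items, codes, and prices, the logic looks something like this: sum the allowed amounts behind a visit and surface a single number to the consumer.

```python
# Minimal sketch of rolling individual claim line items up into one
# consumer-facing "total cost of care" figure; the line items, codes,
# and prices are invented for illustration.
from collections import defaultdict

claim_lines = [
    {"visit_id": "V1", "code": "99213", "desc": "office visit",    "allowed": 95.00},
    {"visit_id": "V1", "code": "80053", "desc": "metabolic panel", "allowed": 28.50},
    {"visit_id": "V1", "code": "36415", "desc": "blood draw",      "allowed": 12.00},
]

totals = defaultdict(float)
for line in claim_lines:
    totals[line["visit_id"]] += line["allowed"]

# The user sees one number per visit, not the billing codes behind it.
print({visit: round(total, 2) for visit, total in totals.items()})  # {'V1': 135.5}
```

The hard part, as the answer above notes, is everything around this loop: matching messy claims to providers, handling sparse data for rarely performed procedures, and predicting which line items a visit will generate in the first place.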

3. How much could a patient save if they used Castlight effectively? What would this mean for larger groups?

For a given procedure or service, the difference in prices in a local area can vary by 100% or more. For instance, right here in San Francisco, we can see that the cost for a particular MRI varies from $450 to nearly $3000, depending on the facility that a patient chooses, while an office visit with a primary care doctor can range from $60 to $180. But a patient may not always wish to choose the lowest cost option. A number of different factors affect how much a patient could save: the availability of options in their vicinity, the quality of the services, the patient’s ability to change the current doctor/hospital for a service, personal preferences, and the insurance benefits provided. Among our customers, the empowerment of patients adds up to employer savings of around 13% in comparison to expected trends.

In addition to cost savings, Castlight also helps drive better quality care. We have shown a 38% reduction in gaps in care for chronic conditions such as diabetes and high blood pressure. This will help drive further savings as individuals adhere to clinically proven treatment schedules.

4. What other interesting data sets are out there for healthcare consumers to use? What sorts of data do you wish were available?

Unfortunately, data on prices of health care procedures is still not widely available from government sources and insurers. Data sources that are available publicly are typically too complex and arcane to be actionable for average health care consumers.

However, CMS has recently made a big push to provide data on hospital quality. Their “hospital compare” website is a great resource to access this data. We have integrated the Medicare statistics into the Castlight product, and we’re proud of the role that Castlight co-founder and current CTO of the United States Todd Park played in making it available to the public. Despite this progress on sharing hospital data, the federal government has not made the same degree of progress in sharing information for individual physicians, so we would love to see more publicly collected data in this area.

5. Are there crowdsourcing opportunities? If patients submitted data, could it be checked for quality, and how could it further improve care and costs?

We believe that engaging consumers by asking them to provide data is a great idea! The most obvious place for users to provide data is by writing reviews of their experiences with different providers, as well as rating those providers on various facets of care. Castlight and other organizations aggregate and report on these reviews as one measure of provider quality.

It is harder to use crowdsourced information to compute costs. There are significant challenges in matching crowdsourced data to providers and especially to services performed, because line items are not identified to consumers by their billing codes. Additionally, rates tend to depend on the consumer’s insurance plan. Nonetheless, we are exploring ways to use crowdsourced pricing data for Castlight.

August 24 2012

The Direct Project has teeth, but it needs pseudonymity

Yesterday, Meaningful Use Stage 2 was released.

You can read the final rule here and you can read the announcement here.

As we read and parse the 900 or so pages of government-issued goodness, you can expect lots of commentary and discussion. Geek Doctor already has a summary, and Motorcycle Guy can be expected to help us all parse the various health IT standards that have been newly blessed. Expect Brian Ahier to also be worth reading over the next couple of days.

I just wanted to highlight one thing about the newly released rules. As suspected, the actual use of the Direct Project will be a requirement. That means certified electronic health record (EHR) systems will have to implement it, and doctors and hospitals will have to exchange data with it. Awesome.

More importantly, this will be the first health IT interoperability standard with teeth. The National Institute of Standards and Technology (NIST) will be setting up an interoperability test server. It will not be enough to say that you support Direct. People will have to prove it. I love it. This has been the problem with Health Level 7 et al. for years: with no central testing, a standard is always unreliable and weak. Make no mistake, this is a critical and important move from the Office of the National Coordinator for Health Information Technology (ONC).

(Have I mentioned that I love that Farzad Mostashari — our current National Coordinator — uses Twitter? I also love that he has a sense of humor!)

Now we just need to make sure that patient pseudonymity is supported on the Directed Exchange network. To do otherwise is to force patients to trust the whole network rather than to merely trust their own doctors. I have already made that case, but it is really nice to see that both Arien Malec (founding coordinator of the Direct Project) and Sean Nolan (chief architect at Microsoft HealthVault) have weighed in with similar thoughts. Malec wrote a lovely piece that details how to translate patient pseudonymity into NIST assurance levels. Nolan talked about how difficult it would be for HealthVault to have to do identity proofing on patients.

In order to emphasize my point in a more public way, I have beaten everyone to the punch and registered the account DaffyDuck@direct.healthvault.com. Everyone seems to think this is just the kind of madness that we need to avoid. But this is just the kind of madness that patients need to really protect their privacy.

Here’s an example. Let’s imagine that I am a pain patient seeking treatment from a pain specialist named Dr. John Doe, who works at the Pain No More clinic. His Direct address might be john.doe@direct.painnomore.com.

Now if I provide DaffyDuck@direct.healthvault.com to Dr. Doe and Dr. Doe can be sure that he is always talking to me when he communicates with that address, then there is nothing else that needs to happen here. There never needs to be a formal cryptographic association between DaffyDuck@direct.healthvault.com and Fred Trotter. I know that there is a connection and my doctor knows that there is a connection and those are the only people that need to know.

If any cryptographic or otherwise published association were to exist, then anyone who had access to my public certificates and/or knew of communication between john.doe@direct.painnomore.com and DaffyDuck@direct.healthvault.com could make a pretty good guess about my health care status. I am not actually interested in trusting the Directed Exchange network. I am interested in trusting through the Directed Exchange network. Pseudonymity gives both me and my doctor that privilege. If a patient wants to give a different Direct email address to every doctor they work with, they should have that option.
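
To make the idea concrete, here is a hypothetical sketch of a patient minting a fresh, unlinkable address for each provider. Nothing below is part of the Direct specification or HealthVault’s actual behavior; the address format and helper are invented purely for illustration.

```python
# Hypothetical sketch of the "different Direct address per doctor" idea.
# Nothing here is part of the Direct specification; it only illustrates
# that a pseudonym can be a random handle with no derivable link to the
# patient's identity or to the patient's other addresses.
import secrets

def new_pseudonymous_address(hisp_domain="direct.healthvault.com"):
    """Mint a fresh, unlinkable Direct address at the patient's HISP."""
    handle = secrets.token_hex(8)  # random; reveals nothing about the patient
    return f"patient-{handle}@{hisp_domain}"

# The patient keeps this mapping privately; nothing cryptographic ties
# the addresses to each other or to a real name.
address_book = {
    "john.doe@direct.painnomore.com": new_pseudonymous_address(),
    "jane.roe@direct.cardiology.example": new_pseudonymous_address(),
}
print(address_book)
```

Because the handles are random rather than derived from the patient’s identity, observing one address (or even several) tells an outsider nothing about who the patient is or which other addresses belong to them.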

This is a critical patient privacy feature of the Direct protocol and it was designed in from the beginning. It is critical that later policy makers not screw this up.

Strata Rx — Strata Rx, being held Oct. 16-17 in San Francisco, is the first conference to bring data science to the urgent issues confronting healthcare.

Save 20% on registration with the code RADAR20

August 21 2012

Hawaii and health care: A small state takes a giant step forward

In an era characterized by political polarization and legislative stalemate, the tiny state of Hawaii has just demonstrated extraordinary leadership. The rest of the country should now recognize, applaud, and most of all, learn from Hawaii’s accomplishment.

Hawaii enacted a new law that harmonizes its state medical privacy laws with HIPAA, the federal medical privacy law. Hawaii’s legislators and governor, along with an impressive array of patient groups, health care providers, insurance companies, and health information technologists, agreed that having dozens of unique Hawaii medical privacy laws in addition to HIPAA was confusing, expensive, and bad for patients. HB 1957 thus eliminates the need for entities covered by HIPAA to also comply with Hawaii’s complex array of medical privacy laws.

How did this thicket of state medical privacy laws arise?

Hawaii’s knotty web of state medical privacy laws is not unique. There are vast numbers of state health privacy laws across the country — certainly many hundreds, likely thousands. Hawaii alone has more than 50. Most were enacted before HIPAA, which helps explain why there are so many; when no federal guarantee of health privacy existed, states took action to protect their constituents from improper invasions of their medical privacy. These laws grew helter-skelter over decades. For example, particularly restrictive laws were enacted after inappropriate and traumatizing disclosures of HIV status during the 1980s.

These laws were often rooted in a naïve faith that patient consent, rather than underlying structural protection, is the be-all and end-all of patient protection. Consent requirements thus became more detailed and demanding. Countless laws, sometimes buried in obscure areas of state law, created unique consent requirements over mental health, genetic information, reproductive health, infectious disease, adolescent, and disability records.

When the federal government created HIPAA, a comprehensive and complex medical privacy law, the powers in Washington realized that preempting this thicket of state laws would be a political impossibility. As every HIPAA 101 class teaches, HIPAA thus became “a floor, not a ceiling.” All state laws stricter than HIPAA continue to exist in full force.

So what’s so bad about having lots of state health privacy laws?

The harmful consequences of the state medical privacy law thicket coexisting with HIPAA include:

  • Adverse patient impact — First and foremost, the privacy law thicket is terrible for individual patients. The days when we saw only doctors in one state are long gone. We travel, we move, we get sick in different states, we choose caregivers in different states. We need our health information to be rapidly available to us and our providers wherever we are, but these state consent laws make it tough for providers to share records. Even providing patients with our own medical records — which is mandated by HIPAA — is impeded by perceptions that state-specific, or even institution-specific, consent forms must be used instead of national HIPAA-compliant forms.
  • Harmful to those intended to be protected — Paradoxically, laws intended to protect particular groups of patients, like those with HIV or mental health conditions, now undermine their clinical care. Providers sending records containing sensitive content are wary of letting complete records move, yet may be unable to mask the regulated data. When records are incomplete, delayed, or simply unavailable, providers can make wrong decisions and patients can get hurt.
  • Antiquated and legalistic consent forms and systems — Most providers feel obliged to honor a patient’s request to move medical records only in the form of a “wet signature” on a piece of paper. Most then insist that the piece of paper be moved only in person or by 1980s-era fax machines, despite the inconvenience to patients who don’t have a fax machine at hand. HIPAA allows the disclosure of health information for treatment, payment, and health care operations (all precisely defined terms), but because so many state laws require consent for particular situations, it is easier (and way more CYA) for institutions to err on the side of strict consent forms for all disclosures, even when permitted by HIPAA.
  • Obstacles to technological innovation and telemedicine — Digital systems to move information need simplicity — either, yes, the data can move, or no, it cannot. Trying to build systems when a myriad of complex, and essentially unknowable, laws govern whether data can move, who must consent, on what form, for what duration, or what data subsets must be expurgated, becomes a nightmare. No doubt, many health innovators today are operating in blissful ignorance of the state health privacy law thicket, but ignorance of these laws does not protect against enforcement or class action lawsuits.
  • Economic waste — The state legal thicket hurts us all as taxpayers. Redundant tests and procedures are often ordered when medical records cannot be timely produced. Measuring the comparative effectiveness of alternative treatments and the performance of hospitals, providers, and insurers is crucial to improving quality and reducing costs, but state laws can restrict such uses. The 2009 stimulus law provided billions of dollars for health information technology and information exchange, but some of our return on that national investment is lost when onerous state-specific consent requirements must be baked into electronic health record (EHR) and health information exchange (HIE) design.

What can we learn from Hawaii?

Other states should follow Hawaii’s lead by having the boldness and foresight to wipe their own medical privacy laws off the books in favor of a simpler and more efficient national solution that protects privacy and facilitates clinical care. Our national legal framework is HIPAA, plus HITECH, a 2009 law that made HIPAA stricter, plus other new federal initiatives intended to create a secure, private, and reliable infrastructure for moving health information. While that federal framework isn’t perfect, that’s where we should be putting our efforts to protect, exchange, and make appropriate use of health information. Hawaii’s approach of reducing the additional burden of the complex state law layer just makes sense.

Some modest progress has occurred already. A few states are harmonizing their laws affecting health information exchanges (e.g., Kansas and Utah). Some states exempt HIPAA-regulated entities subject to new HITECH breach requirements from also having to comply with the state breach laws (e.g., Michigan and Indiana). These breach measures are helpful in a crisis, to be sure, by saving money on wasteful legal research, but irrelevant from the standpoint of providing care for patients or designing technology solutions or system improvements. California currently has a medical law harmonization initiative underway, which I hope is broadly supported in order to reduce waste and improve care.

To be blunt, we need much more dramatic progress in this area. In the case of health information exchange, states are not useful “laboratories of democracy”; they are towers of Babel that disserve patients. The challenges of providing clinical care, let alone making dramatic improvements while lowering costs, in the context of this convoluted mess of state laws, are severe. Patients, disease advocacy groups, doctors, nurses, hospitals, and technology innovators should let their state legislators know that harmonizing medical privacy laws would be a huge win for all involved.

Photo: Knots by Uncle Catherine, on Flickr

August 14 2012

Solving the Wanamaker problem for health care

By Tim O’Reilly, Julie Steele, Mike Loukides and Colin Hill

“The best minds of my generation are thinking about how to make people click ads.” — Jeff Hammerbacher, early Facebook employee

“Work on stuff that matters.” — Tim O’Reilly

In the early days of the 20th century, department store magnate John Wanamaker famously said, “I know that half of my advertising doesn’t work. The problem is that I don’t know which half.”

The consumer Internet revolution was fueled by a search for the answer to Wanamaker’s question. Google AdWords and the pay-per-click model transformed a business in which advertisers paid for ad impressions into one in which they pay for results. “Cost per thousand impressions” (CPM) was replaced by “cost per click” (CPC), and a new industry was born. It’s important to understand why CPC replaced CPM, though. Superficially, it’s because Google was able to track when a user clicked on a link, and was therefore able to bill based on success. But billing based on success doesn’t fundamentally change anything unless you can also change the success rate, and that’s what Google was able to do. By using data to understand each user’s behavior, Google was able to place advertisements that an individual was likely to click. They knew “which half” of their advertising was more likely to be effective, and didn’t bother with the rest.
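
A back-of-the-envelope calculation with invented prices makes the point: switching the billing unit is revenue-neutral on its own; lifting the click-through rate through targeting is what changes the economics.

```python
# Back-of-the-envelope illustration with invented prices: changing the
# billing unit from impressions (CPM) to clicks (CPC) is revenue-neutral
# by itself; raising the click-through rate via targeting is what pays.
impressions = 1_000_000

cpm_revenue  = impressions / 1000 * 0.50    # $0.50 per 1,000 impressions
cpc_same_ctr = impressions * 0.001 * 0.50   # 0.1% CTR at $0.50 per click
cpc_targeted = impressions * 0.004 * 0.50   # targeting lifts CTR to 0.4%

print(cpm_revenue, cpc_same_ctr, cpc_targeted)  # 500.0 500.0 2000.0
```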

Since then, data and predictive analytics have driven ever deeper insight into user behavior such that companies like Google, Facebook, Twitter, Zynga, and LinkedIn are fundamentally data companies. And data isn’t just transforming the consumer Internet. It is transforming finance, design, and manufacturing — and perhaps most importantly, health care.

How is data science transforming health care? There are many ways in which health care is changing, and needs to change. We’re focusing on one particular issue: the problem Wanamaker described when talking about his advertising. How do you make sure you’re spending money effectively? Is it possible to know what will work in advance?

Too often, when doctors order a treatment, whether it’s surgery or an over-the-counter medication, they are applying a “standard of care” treatment or some variation that is based on their own intuition, effectively hoping for the best. The sad truth of medicine is that we don’t really understand the relationship between treatments and outcomes. We have studies to show that various treatments will work more often than placebos; but, like Wanamaker, we know that much of our medicine doesn’t work for half of our patients; we just don’t know which half. At least, not in advance. One of data science’s many promises is that, if we can collect data about medical treatments and use that data effectively, we’ll be able to predict more accurately which treatments will be effective for which patient, and which treatments won’t.
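As a sketch of what that promise looks like in code, here’s a minimal example using scikit-learn on synthetic data; the features are stand-ins for whatever clinical or genomic signals a real system would use:

```python
# Learn, from past cases, which patients respond to a treatment.
# The data is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))   # e.g., age, biomarker level, marker status
# Synthetic ground truth: response depends mostly on the second feature.
y = (X[:, 1] + 0.2 * rng.normal(size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)

new_patient = np.array([[0.1, 1.4, -0.3]])
p = model.predict_proba(new_patient)[0, 1]
print(f"Predicted probability that this patient responds: {p:.2f}")
```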

A better understanding of the relationship between treatments, outcomes, and patients will have a huge impact on the practice of medicine in the United States. Health care is expensive. The U.S. spends over $2.6 trillion on health care every year, an amount that constitutes a serious fiscal burden for government, businesses, and our society as a whole. These costs include over $600 billion of unexplained variations in treatments: treatments that cause no differences in outcomes, or even make the patient’s condition worse. We have reached a point at which our need to understand treatment effectiveness has become vital — to the health care system and to the health and sustainability of the economy overall.

Why do we believe that data science has the potential to revolutionize health care? After all, the medical industry has had data for generations: clinical studies, insurance data, hospital records. But the health care industry is now awash in data in a way that it has never been before: from biological data such as gene expression, next-generation DNA sequence data, proteomics, and metabolomics, to clinical data and health outcomes data contained in ever more prevalent electronic health records (EHRs) and longitudinal drug and medical claims. We have entered a new era in which we can work on massive datasets effectively, combining data from clinical trials and direct observation by practicing physicians (the records generated by our $2.6 trillion of medical expense). When we combine data with the resources needed to work on the data, we can start asking the important questions, the Wanamaker questions, about what treatments work and for whom.

The opportunities are huge: for entrepreneurs and data scientists looking to put their skills to work disrupting a large market, for researchers trying to make sense out of the flood of data they are now generating, and for existing companies (including health insurance companies, biotech, pharmaceutical, and medical device companies, hospitals and other care providers) that are looking to remake their businesses for the coming world of outcome-based payment models.

Making health care more effective

What, specifically, does data allow us to do that we couldn’t do before? For the past 60 or so years of medical history, we’ve treated patients as some sort of average. A doctor would diagnose a condition and recommend a treatment based on what worked for most people, as reflected in large clinical studies. Over the years, we’ve become more sophisticated about what that average patient means, but the same statistical approach didn’t allow for differences between patients. A treatment was deemed effective or ineffective, safe or unsafe, based on double-blind studies that rarely took into account the differences between patients. The exceptions are relatively recent, and they have been dominated by cancer treatments, the first being Herceptin for breast cancer in women who over-express the Her2 receptor. With the data that’s now available, we can go much further, for a broad range of diseases and interventions that are not just drugs but include surgery, disease management programs, medical devices, patient adherence, and care delivery.

For a long time, we thought that Tamoxifen was roughly 80% effective for breast cancer patients. But now we know much more: we know that it’s 100% effective in 70 to 80% of the patients, and ineffective in the rest. That’s not word games, because we can now use genetic markers to tell whether it’s likely to be effective or ineffective for any given patient, and we can tell in advance whether to treat with Tamoxifen or to try something else.

Two factors lie behind this new approach to medicine: a different way of using data, and the availability of new kinds of data. It’s not just stating that the drug is effective on most patients, based on trials (indeed, 80% is an enviable success rate); it’s using artificial intelligence techniques to divide the patients into groups and then determine the difference between those groups. We’re not asking whether the drug is effective; we’re asking a fundamentally different question: “for which patients is this drug effective?” We’re asking about the patients, not just the treatments. A drug that’s only effective on 1% of patients might be very valuable if we can tell who that 1% is, though it would certainly be rejected by any traditional clinical trial.
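Here’s a minimal illustration of that shift in the question, using invented trial data: the pooled number answers “does the drug work?”, while the grouped numbers answer “for whom does it work?”

```python
# Stratify outcomes by a (hypothetical) genetic marker instead of
# reporting one pooled success rate. All data is invented.
import pandas as pd

trial = pd.DataFrame({
    "marker":    ["A", "A", "A", "B", "B", "B", "B", "A", "B", "A"],
    "responded": [1,   1,   1,   0,   0,   1,   0,   1,   0,   1],
})

pooled = trial["responded"].mean()
by_group = trial.groupby("marker")["responded"].mean()

print(f"Pooled response rate: {pooled:.0%}")  # the Wanamaker average
print(by_group)  # per-subgroup rates: 100% for marker A, 20% for marker B
```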

More than that, asking questions about patients is only possible because we’re using data that wasn’t available until recently: DNA sequencing was only invented in the mid-1970s, and is only now coming into its own as a medical tool. What we’ve seen with Tamoxifen is as clear a solution to the Wanamaker problem as you could ask for: we now know when that treatment will be effective. If you can do the same thing with millions of cancer patients, you will both improve outcomes and save money.

Dr. Lukas Wartman, a cancer researcher who was himself diagnosed with terminal leukemia, was successfully treated with sunitinib, a drug that was only approved for kidney cancer. Sequencing the genes of both the patient’s healthy cells and cancerous cells led to the discovery of a protein that was out of control and encouraging the spread of the cancer. The gene responsible for manufacturing this protein could potentially be inhibited by the kidney drug, although it had never been tested for this application. This unorthodox treatment was surprisingly effective: Wartman is now in remission.

While this treatment was exotic and expensive, what’s important isn’t the expense but the potential for new kinds of diagnosis. The price of gene sequencing has been plummeting; it will be a common doctor’s office procedure in a few years. And through Amazon and Google, you can now “rent” a cloud-based supercomputing cluster that can solve huge analytic problems for a few hundred dollars per hour. What is now exotic inevitably becomes routine.

But even more important: we’re looking at a completely different approach to treatment. Rather than a treatment that works 80% of the time, or even 100% of the time for 80% of the patients, a treatment might be effective for a small group. It might be entirely specific to the individual; the next cancer patient may have a different protein that’s out of control, an entirely different genetic cause for the disease. Treatments that are specific to one patient don’t exist in medicine as it’s currently practiced; how could you ever do an FDA trial for a medication that’s only going to be used once to treat a certain kind of cancer?

Foundation Medicine is at the forefront of this new era in cancer treatment. They use next-generation DNA sequencing to discover DNA sequence mutations and deletions that are currently used in standard of care treatments, as well as many other actionable mutations that are tied to drugs for other types of cancer. They are creating a patient-outcomes repository that will be the fuel for discovering the relation between mutations and drugs. Foundation has identified DNA mutations in 50% of cancer cases for which drugs exist (information via a private communication), but are not currently used in the standard of care for the patient’s particular cancer.

The ability to do large-scale computing on genetic data gives us the ability to understand the origins of disease. If we can understand why an anti-cancer drug is effective (what specific proteins it affects), and if we can understand what genetic factors are causing the cancer to spread, then we’re able to use the tools at our disposal much more effectively. Rather than using imprecise treatments organized around symptoms, we’ll be able to target the actual causes of disease, and design treatments tuned to the biology of the specific patient. Eventually, we’ll be able to treat 100% of the patients 100% of the time, precisely because we realize that each patient presents a unique problem.

Personalized treatment is just one area in which we can solve the Wanamaker problem with data. Hospital admissions are extremely expensive. Data can make hospital systems more efficient and help them avoid preventable complications such as blood clots and hospital re-admissions. It can also help address the challenge of hot-spotting (a term coined by Atul Gawande): finding people who use an inordinate amount of health care resources. By looking at data from hospital visits, Dr. Jeffrey Brenner of Camden, NJ, was able to determine that “just one per cent of the hundred thousand people who made use of Camden’s medical facilities accounted for thirty per cent of its costs.” Furthermore, many of these people came from two apartment buildings. Designing more effective medical care for these patients was difficult; it doesn’t fit our health insurance system, and the patients are often dealing with many serious medical issues (addiction and obesity are frequent complications) and have trouble trusting doctors and social workers. It’s counter-intuitive, but spending more on these patients now means spending less on them later, because fewer of them end up critically ill. While it’s a work in progress, it looks like building appropriate systems to target these high-risk patients and treat them before they’re hospitalized will bring significant savings.

Many poor health outcomes are attributable to patients who don’t take their medications. Eliza, a Boston-based company started by Alexandra Drane, has pioneered approaches to improve compliance through interactive communication with patients. Eliza improves patient drug compliance by tracking which types of reminders work on which types of people; it’s similar to the way companies like Google target advertisements to individual consumers. By using data to analyze each patient’s behavior, Eliza can generate reminders that are more likely to be effective. The results aren’t surprising: if patients take their medicine as prescribed, they are more likely to get better. And if they get better, they are less likely to require further, more expensive treatment. Again, we’re using data to solve Wanamaker’s problem in medicine: we’re spending our resources on what’s effective, on the reminders that are most likely to get patients to take their medications.
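A minimal sketch of that targeting idea (this is not Eliza’s actual system): track which reminder channel works for which kind of patient, then send the one with the best record:

```python
# Track per-(segment, channel) success rates and pick the historical best.
from collections import defaultdict

stats = defaultdict(lambda: [0, 0])  # (segment, channel) -> [successes, attempts]

def record(segment, channel, took_medication):
    s = stats[(segment, channel)]
    s[0] += int(took_medication)
    s[1] += 1

def best_channel(segment, channels=("call", "text", "email")):
    # Choose the channel with the highest observed success rate;
    # fall back to a default when there is no data yet.
    rates = {c: stats[(segment, c)][0] / stats[(segment, c)][1]
             for c in channels if stats[(segment, c)][1] > 0}
    return max(rates, key=rates.get) if rates else "text"

record("elderly", "call", True)
record("elderly", "text", False)
print(best_channel("elderly"))  # -> "call"
```

A production system would need to keep exploring the other channels occasionally, but the principle is the same one ad targeting uses.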

More data, more sources

The examples we’ve looked at so far have been limited to traditional sources of medical data: hospitals, research centers, doctor’s offices, insurers. The Internet has enabled the formation of patient networks aimed at sharing data. Health social networks are now among the largest patient communities. As of November 2011, PatientsLikeMe has over 120,000 patients in 500 different condition groups; ACOR has over 100,000 patients in 127 cancer support groups; 23andMe has over 100,000 members in their genomic database; and diabetes health social network SugarStats has over 10,000 members. These are just the larger communities; thousands of small communities have formed around rare diseases, or even uncommon experiences with common diseases. All of these communities are generating data that they voluntarily share with each other and the world.

Increasingly, what they share is not just anecdotal, but includes an array of clinical data. For this reason, these groups are being recruited for large-scale crowdsourced clinical outcomes research.

Thanks to ubiquitous mobile data networks, we can take this several steps further. In the past two or three years, there’s been a flood of personal fitness devices (such as the Fitbit) for monitoring your personal activity. There are mobile apps for taking your pulse, and an iPhone attachment for measuring your glucose. There has been talk of mobile applications that would constantly listen to a patient’s speech and detect changes that might be the precursor of a stroke, or would use the accelerometer to report falls. Tanzeem Choudhury has developed an app called Be Well that is intended primarily for people suffering from depression, though it can be used by anyone. Be Well monitors the user’s sleep cycles, the amount of time they spend talking, and the amount of time they spend walking. The data is scored, and the app makes appropriate recommendations, based both on the individual patient and on data collected across all the app’s users.
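As a rough illustration of this kind of scoring (the real app’s weights and thresholds aren’t public, so these are guesses), a daily score might combine the three signals like this:

```python
# Combine sleep, conversation, and activity into one daily score.
# The targets and weights below are invented for illustration.
def wellness_score(sleep_hours, talk_minutes, walk_minutes):
    sleep = min(sleep_hours / 8.0, 1.0)     # vs. a rough 8-hour target
    social = min(talk_minutes / 60.0, 1.0)  # vs. an hour of conversation
    active = min(walk_minutes / 30.0, 1.0)  # vs. a half hour of walking
    return round(100 * (0.4 * sleep + 0.3 * social + 0.3 * active))

print(wellness_score(6.5, 20, 45))  # -> 72
```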

Continuous monitoring of critical patients in hospitals has been normal for years; but we now have the tools to monitor patients constantly, in their home, at work, wherever they happen to be. And if this sounds like big brother, at this point most of the patients are willing. We don’t want to transform our lives into hospital experiences; far from it! But we can collect and use the data we constantly emit, our “data smog,” to maintain our health, to become conscious of our behavior, and to detect oncoming conditions before they become serious. The most effective medical care is the medical care you avoid because you don’t need it.

Paying for results

Once we’re on the road toward more effective health care, we can look at other ways in which Wanamaker’s problem shows up in the medical industry. It’s clear that we don’t want to pay for treatments that are ineffective. Wanamaker wanted to know which part of his advertising was effective, not just to make better ads, but also so that he wouldn’t have to buy the advertisements that wouldn’t work. He wanted to pay for results, not for ad placements. Now that we’re starting to understand how to make treatment effective, now that we understand that it’s more than rolling the dice and hoping that a treatment that works for a typical patient will be effective for you, we can take the next step: Can we change the underlying incentives in the medical system? Can we make the system better by paying for results, rather than paying for procedures?

It’s shocking just how badly the incentives in our current medical system are aligned with outcomes. If you see an orthopedist, you’re likely to get an MRI, most likely at a facility owned by the orthopedist’s practice. On one hand, it’s good medicine to know what you’re doing before you operate. But how often does that MRI result in a different treatment? How often is the MRI required just because it’s part of the protocol, when it’s perfectly obvious what the doctor needs to do? Many men have had PSA tests for prostate cancer; but in most cases, aggressive treatment of prostate cancer is a bigger risk than the disease itself. Yet the test itself is a significant profit center. Think again about Tamoxifen, and about the pharmaceutical company that makes it. In our current system, what does “100% effective in 80% of the patients” mean, except for a 20% loss in sales? That’s because the drug company is paid for the treatment, not for the result; it has no financial interest in whether any individual patient gets better. (Whether a statistically significant number of patients has side-effects is a different issue.) And at the same time, bringing a new drug to market is very expensive, and might not be worthwhile if it will only be used on the remaining 20% of the patients. And that’s assuming that one drug, not two, or 20, or 200 will be required to treat the unlucky 20% effectively.

It doesn’t have to be this way.

In the U.K., Johnson & Johnson, faced with the possibility of losing reimbursements for their multiple myeloma drug Velcade, agreed to refund the money for patients who did not respond to the drug. Several other pay-for-performance drug deals have followed since, paving the way for the ultimate transition in pharmaceutical company business models in which their product is health outcomes instead of pills. Such a transition would rely more heavily on real-world outcome data (are patients actually getting better?), rather than controlled clinical trials, and would use molecular diagnostics to create personalized “treatment algorithms.” Pharmaceutical companies would also focus more on drug compliance to ensure health outcomes were being achieved. This would ultimately align the interests of drug makers with patients, their providers, and payors.

Similarly, rather than paying for treatments and procedures, can we pay hospitals and doctors for results? That’s what Accountable Care Organizations (ACOs) are about. ACOs are a leap forward in business model design, in which the provider shoulders part of the financial risk. ACOs represent a new framing of the much-maligned HMO approaches from the ’90s, which did not work. HMOs tried to use statistics to predict and prevent unneeded care. The ACO model, rather than controlling doctors with what the data says they “should” do, uses data to measure how each doctor performs. Doctors are paid for successes, not for the procedures they administer. The main advantage that the ACO model has over the HMO model is how good the data is, and how that data is leveraged. The ACO model aligns incentives with outcomes: a practice that owns an MRI facility isn’t incentivized to order MRIs when they’re not necessary. It is incentivized to use all the data at its disposal to determine the most effective treatment for the patient, and to follow through on that treatment with a minimum of unnecessary testing.

When we know which procedures are likely to be successful, we’ll be in a position where we can pay only for the health care that works. When we can do that, we’ve solved Wanamaker’s problem for health care.

Enabling data

Data science is not optional in health care reform; it is the linchpin of the whole process. All of the examples we’ve seen, ranging from cancer treatment to detecting hot spots where additional intervention will make hospital admission unnecessary, depend on using data effectively: taking advantage of new data sources and new analytics techniques, in addition to the data the medical profession has had all along.

But it’s too simple just to say “we need data.” We’ve had data all along: handwritten records in manila folders on acres and acres of shelving. Insurance company records. But it’s all been locked up in silos: insurance silos, hospital silos, and many, many doctor’s office silos. Data doesn’t help if it can’t be moved, if data sources can’t be combined.

There are two big issues here. First, a surprising number of medical records are still either hand-written, or in digital formats that are scarcely better than hand-written (for example, scanned images of hand-written records). Getting medical records into a format that’s computable is a prerequisite for almost any kind of progress. Second, we need to break down those silos.

Anyone who has worked with data knows that, in any problem, 90% of the work is getting the data into a form in which it can be used; the analysis itself is often simple. We need electronic health records: patient data in a more-or-less standard form that can be shared efficiently, data that can be moved from one location to another at the speed of the Internet. Not all data formats are created equal, and some are certainly better than others: but at this point, any machine-readable format, even simple text files, is better than nothing. While there are currently hundreds of different formats for electronic health records, the fact that they’re electronic means that they can be converted from one form into another. Standardizing on a single format would make things much easier, but just getting the data into some electronic form, any form at all, is the first step.
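That point is worth making concrete: once a record is machine-readable at all, converting between representations is mechanical. A toy example (the field names are invented) turning a flat CSV export into JSON:

```python
# Convert a flat CSV export of patient records into JSON.
import csv
import io
import json

csv_export = io.StringIO(
    "patient_id,date,code,description\n"
    "12345,2012-06-01,E11.9,Type 2 diabetes\n"
    "12345,2012-07-15,I10,Essential hypertension\n"
)

records = list(csv.DictReader(csv_export))
print(json.dumps(records, indent=2))
```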

Once we have electronic health records, we can link doctor’s offices, labs, hospitals, and insurers into a data network, so that all patient data is immediately stored in a data center: every prescription, every procedure, and whether that treatment was effective or not. This isn’t some futuristic dream; it’s technology we have now. Building this network would be substantially simpler and cheaper than building the networks and data centers now operated by Google, Facebook, Amazon, Apple, and many other large technology companies. It’s not even close to pushing the limits.

Electronic health records enable us to go far beyond the current mechanism of clinical trials. In the past, once a drug has been approved in trials, that’s effectively the end of the story: running more tests to determine whether it’s effective in practice would be a huge expense. A physician might get a sense for whether any treatment worked, but that evidence is essentially anecdotal: it’s easy to believe that something is effective because that’s what you want to see. And if it’s shared with other doctors, it’s shared while chatting at a medical convention. But with electronic health records, it’s possible (and not even terribly expensive) to collect documentation from thousands of physicians treating millions of patients. We can find out when and where a drug was prescribed, why, and whether there was a good outcome. We can ask questions that are never part of clinical trials: is the medication used in combination with anything else? What other conditions is the patient being treated for? We can use machine learning techniques to discover unexpected combinations of drugs that work well together, or to predict adverse reactions. We’re no longer limited by clinical trials; every patient can be part of an ongoing evaluation of whether his treatment is effective, and under what conditions. Technically, this isn’t hard. The only difficult part is getting the data to move, getting data in a form where it’s easily transferred from the doctor’s office to analytics centers.
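Here’s a sketch of the kind of question pooled records make cheap to ask: does the outcome rate differ when two drugs are taken together? The records are invented, and a real analysis would have to adjust for confounders:

```python
# Tally outcomes for each drug pair observed across pooled records.
from collections import defaultdict
from itertools import combinations

patients = [
    {"drugs": {"A", "B"}, "improved": True},
    {"drugs": {"A"},      "improved": False},
    {"drugs": {"A", "B"}, "improved": True},
    {"drugs": {"B"},      "improved": False},
    {"drugs": {"A", "B"}, "improved": False},
]

outcomes = defaultdict(lambda: [0, 0])  # pair -> [improved, total]
for p in patients:
    for pair in combinations(sorted(p["drugs"]), 2):
        outcomes[pair][0] += int(p["improved"])
        outcomes[pair][1] += 1

for pair, (ok, total) in outcomes.items():
    print(pair, f"{ok}/{total} improved")
```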

Solving problems of hot-spotting (individual patients or groups of patients consuming inordinate medical resources) requires a different combination of information. You can’t locate hot spots if you don’t have physical addresses. Physical addresses can be geocoded (converted from addresses to longitude and latitude, which is more useful for mapping problems) easily enough, once you have them, but you need access to patient records from all the hospitals operating in the area under study. And you need access to insurance records to determine how much health care patients are requiring, and to evaluate whether special interventions for these patients are effective. Not only does this require electronic records, it requires cooperation across different organizations (breaking down silos), and assurance that the data won’t be misused (patient privacy). Again, the enabling factor is our ability to combine data from different sources; once you have the data, the solutions come easily.
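A sketch of those mechanics on invented data; geocode() here is a stand-in for a real geocoding service:

```python
# Geocode addresses, then aggregate costs by location to find hot spots.
from collections import Counter

def geocode(address):
    # Stand-in: a real system would call a geocoding API here.
    lookup = {"100 Main St": (39.94, -75.12), "200 Oak Ave": (39.95, -75.10)}
    return lookup[address]

visits = [
    ("100 Main St", 1200.0), ("100 Main St", 8400.0),
    ("100 Main St", 310.0),  ("200 Oak Ave", 95.0),
]

costs = Counter()
for address, cost in visits:
    costs[geocode(address)] += cost

for location, total in costs.most_common():
    print(location, f"${total:,.0f}")
```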

Breaking down silos has a lot to do with aligning incentives. Currently, hospitals are trying to optimize their income from medical treatments, while insurance companies are trying to optimize their income by minimizing payments, and doctors are just trying to keep their heads above water. There’s little incentive to cooperate. But as financial pressures rise, it will become critically important for everyone in the health care system, from the patient to the insurance executive, to ensure that they are getting the most for their money. While there’s intense cultural resistance to be overcome (through our experience in data science, we’ve learned that it’s often difficult to break down silos within an organization, let alone between organizations), the pressure of delivering more effective health care for less money will eventually break the silos down. The old zero-sum game of winners and losers must end if we’re going to have a medical system that’s effective over the coming decades.

Data becomes infinitely more powerful when you can mix data from different sources: many doctor’s offices, hospital admission records, address databases, and even the rapidly increasing stream of data coming from personal fitness devices. The challenge isn’t employing our statistics more carefully, precisely, or guardedly. It’s about letting go of an old paradigm that starts by assuming only certain variables are key and ends by correlating only those variables. This paradigm worked well when data was scarce, but if you think about it, the assumptions arise precisely because data is scarce. We didn’t study the relationship between leukemia and kidney cancers because doing so would have required asking a huge set of questions and collecting a lot of data, and a connection between leukemia and kidney cancer seemed no more likely than a connection between leukemia and flu. But the existence of data is no longer a problem: we’re collecting the data all the time. Electronic health records let us move the data around so that we can assemble a collection of cases that goes far beyond a particular practice, a particular hospital, a particular study. So now, we can use machine learning techniques to identify and test all possible hypotheses, rather than just the small set that intuition might suggest. And finally, with enough data, we can get beyond correlation to causation: rather than saying “A and B are correlated,” we’ll be able to say “A causes B,” and know what to do about it.
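A minimal sketch of what hypothesis screening at scale looks like; the data here is pure noise, which is exactly why the correction for the number of tests matters:

```python
# Test every pair of variables, not just the ones intuition suggests,
# with a Bonferroni correction so chance correlations don't fool us.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
data = rng.normal(size=(1000, 20))  # 1,000 patients, 20 variables
names = [f"var{i}" for i in range(20)]

pairs = [(i, j) for i in range(20) for j in range(i + 1, 20)]
alpha = 0.05 / len(pairs)  # Bonferroni-corrected threshold

for i, j in pairs:
    r, p = pearsonr(data[:, i], data[:, j])
    if p < alpha:  # with pure noise, this should almost never fire
        print(names[i], names[j], f"r={r:.2f} p={p:.1e}")
```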

Building the health care system we want

The U.S. ranks 37th among developed economies in life expectancy and other measures of health, while far outspending other countries on health care per capita. We spend 18% of GDP on health care, while other countries on average spend on the order of 10% of GDP. We spend a lot of money on treatments that don’t work, because we have, at best, a poor understanding of what will and won’t work.

Part of the problem is cultural. In a country where even pets can have hip replacement surgery, it’s hard to imagine not spending every penny you have to prolong Grandma’s life — or your own. The U.S. is a wealthy nation, and health care is something we choose to spend our money on. But wealthy or not, nobody wants ineffective treatments. Nobody wants to roll the dice and hope that their biology is similar enough to a hypothetical “average” patient. No one wants a “winner take all” payment system in which the patient is always the loser, paying for procedures whether or not they are helpful or necessary. Like Wanamaker with his advertisements, we want to know what works, and we want to pay for what works. We want a smarter system where treatments are designed to be effective on our individual biologies; where treatments are administered effectively; where our hospitals are used effectively; and where we pay for outcomes, not for procedures.

We’re on the verge of that new system now. We don’t have it yet, but we can see it around the corner. Ultra-cheap DNA sequencing in the doctor’s office, massive inexpensive computing power, the availability of EHRs to study whether treatments are effective even after the FDA trials are over, and improved techniques for analyzing data are the tools that will bring this new system about. The tools are here now; it’s up to us to put them into use.

August 09 2012

Five elements of reform that health providers would rather not hear about

The quantum leap we need in patient care requires a complete overhaul of record-keeping and health IT. Leaders of the health care field know this and have been urging the changes on health care providers for years, but the providers are having trouble accepting the changes for several reasons.

What’s holding them back? Change certainly costs money, but the industry is already groaning its way through enormous paradigm shifts to meet the current financial and regulatory climate, so the money might as well be directed to things that work. Training staff to handle patients differently is also difficult, but the staff on the floor of these institutions are experiencing burnout and can be inspired by a new direction. The fundamental resistance seems to be the expectations of health providers and their vendors about the control they need to conduct their business profitably.

A few months ago I wrote an article titled Five Tough Lessons I Had to Learn About Health Care. Here I’ll delineate some elements of a new health care system that are promoted by thought leaders, that echo the evolution of other industries, and that will seem utterly natural in a couple of decades–but that providers are loath to consider. I feel that leaders in the field are not confronting that resistance with an equivalent sense of conviction that these changes are crucial.

1. Reform will not succeed unless electronic records standardize on a common, robust format

Records are not static. They must be combined, parsed, and analyzed to be useful. In the health care field, records must travel with the patient. Furthermore, we need an explosion of data analysis applications in order to drive diagnosis, public health planning, and research into new treatments.

Interoperability is a common mantra these days in talking about electronic health records, but I don’t think the power and urgency of record formats can be conveyed in eight-syllable words. It can be conveyed better by a site that uses data about hospital procedures, costs, and patient satisfaction to help consumers choose a desirable hospital. Or an app that might prevent a million heart attacks and strokes.

Data-wise (or data-ignorant), doctors are stuck in the 1980s, buying proprietary record systems that don’t work together even between different departments in a hospital, or between outpatient clinics and their affiliated hospitals. Now the vendors are responding to pressures from both government and the market by promising interoperability. The federal government has taken this promise as good coin, hoping that vendors will provide windows onto their data. It never really happens. Every baby step toward opening up one field or another requires additional payments to vendors or consultants.

That’s why exchanging patient data (health information exchange) requires a multi-million dollar investment, year after year, and why most HIEs go under. And that’s why the HL7 committee, putatively responsible for defining standards for electronic health records, keeps on putting out new, complicated variations on a long history of formats that were not well enough defined to ensure compatibility among vendors.

The Direct project and perhaps the nascent RHEx RESTful exchange standard will let hospitals exchange the limited types of information that the government forces them to exchange. But it won’t create a platform (as suggested in this PDF slideshow) for the hundreds of applications we need to extract useful data from records. Nor will it open the records to the masses of data we need to start collecting. It remains to be seen whether Accountable Care Organizations, which are the latest reform in U.S. health care and are described in this video, will be able to use current standards to exchange the data that each member institution needs to coordinate care. Shahid Shah has laid out in glorious detail the elements of open data exchange in health care.

2. Reform will not succeed unless massive amounts of patient data are collected

We aren’t giving patients the most effective treatments because we just don’t know enough about what works. This extends throughout the health care system:

  • We can’t prescribe a drug tailored to the patient because we don’t collect enough data about patients and their reactions to the drug.

  • We can’t be sure drugs are safe and effective because we don’t collect data about how patients fare on those drugs.

  • We don’t see a heart attack or other crisis coming because we don’t track the vital signs of at-risk populations on a daily basis.

  • We don’t make sure patients follow through on treatment plans because we don’t track whether they take their medications and perform their exercises.

  • We don’t target people who need treatment because we don’t keep track of their risk factors.

Some institutions have adopted a holistic approach to health, but as a society there’s a huge amount more that we could do in this area. O’Reilly is hosting a conference called Strata Rx on this subject.

Leaders in the field know what health care providers could accomplish with data. A recent article even advises policy-makers to focus on the data instead of the electronic records. The question is whether providers are technically and organizationally prepped to accept it in such quantities and variety. When doctors and hospitals think they own the patients’ records, they resist putting in anything but their own notes and observations, along with lab results they order. We’ve got to change the concept of ownership, which strikes deep into their culture.

3. Reform will not succeed unless patients are in charge of their records

Doctors are currently acting in isolation, occasionally consulting with the other providers seen by their patients but rarely sharing detailed information. It falls on the patient, or a family advocate, to remember that one drug or treatment interferes with another or to remind treatment centers of follow-up plans. And any data collected by the patient remains confined to scribbled notes or (in the modern Quantified Self equivalent) a web site that’s disconnected from the official records.

Doctors don’t trust patients. They have some good reasons for this: medical records are complicated documents in which a slight rewording or typographical error can change the meaning enough to risk a life. But walling off patients from records doesn’t insulate them against errors: on the contrary, patients catch errors entered by staff all the time. So ultimately it’s better to bring the patient onto the team and educate her. If a problem with records altered by patients–deliberately or through accidental misuse–turns up down the line, digital certificates can be deployed to sign doctor records and output from devices.
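A minimal sketch of that signing idea, using the Python cryptography package; real deployments would also need key management, which is omitted entirely here:

```python
# Sign a record with the doctor's key; any later alteration is detectable.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

doctor_key = Ed25519PrivateKey.generate()
record = b'{"patient": 12345, "rx": "tamoxifen 20mg daily"}'
signature = doctor_key.sign(record)

public_key = doctor_key.public_key()
public_key.verify(signature, record)  # untouched record: verification passes
try:
    public_key.verify(signature, record + b"!")  # tampered record
except InvalidSignature:
    print("record was altered after signing")
```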

The amounts of data we’re talking about get really big fast. Genomic information and radiological images, in particular, can occupy dozens of gigabytes of space. But hospitals are moving to the cloud anyway. Practice Fusion just announced that they serve 150,000 medical practitioners and that “One in four doctors selecting an EHR today chooses Practice Fusion.” So we can just hand over the keys to the patients and storage will grow along with need.

The movement for patient empowerment will take off, as experts in health reform told US government representatives, when patients are in charge of their records. To treat people, doctors will have to ask for the records, and the patients can offer the full range of treatment histories, vital signs, and observations of daily living they’ve collected. Applications will arise that can search the data for patterns and relevant facts.

Once again, the US government is trying to stimulate patient empowerment by requiring doctors to open their records to patients. But most institutions meet the formal requirements by providing portals that patients can log into, the way we can view flight reservations on airlines. We need the patients to become the pilots. We also need to give them the information they need to navigate.

4. Reform will not succeed unless providers conform to practice guidelines

Now that the government is forcing doctors to release information about outcomes, patients can start to choose doctors and hospitals that offer the best chances of success. The providers will have to apply more rigor to their activities, using checklists and more, to bring up the scores of the less successful providers. Medicine is both a science and an art, but many lag on the science–that is, doing what has been statistically proven to produce the best likely outcome–even at prestigious institutions.

Patient choice is restricted by arbitrary insurance rules, unfortunately. These also contribute to the utterly crazy difficulty of determining what a medical procedure will cost, as reported by e-Patient Dave and WBUR radio. Straightening out this problem goes way beyond the doctors and hospitals, and settling on a fair, predictable cost structure will benefit them almost as much as patients and taxpayers. Even some insurers have started to see that the system is reaching a dead end and are setting up new payment mechanisms.

5. Reform will not succeed unless providers and patients can form partnerships

I’m always talking about technologies and data in my articles, but none of that constitutes health. Just as student testing is a poor model for education, data collection is a poor model for medical care. What patients want is time to talk intensively with their providers about their needs, and providers voice the same desires.

Data and good record keeping can help us use our resources more efficiently and deal with the physician shortage, partly by spreading out jobs among other clinical staff. Computer systems can’t deal with complex and overlapping syndromes, or persuade patients to adopt practices that are good for them. Relationships will always have to be in the forefront. Health IT expert Fred Trotter says, “Time is the gas that makes the relationship go, but the technology should be focused on fuel efficiency.”

Arien Malec, former contractor for the Office of the National Coordinator, used to give a speech about the evolution of medical care. Before the revolution in antibiotics, doctors had few tools to actually cure patients, but they lived with their patients in the same community and knew their needs through and through. As we’ve improved the science of medicine, we’ve lost that personal connection. Malec argued that better records could help doctors really know their patients again. But conversations are necessary too.

August 08 2012

Technical requirements for coordinating care in an Accountable Care Organization

The concept of an Accountable Care Organization (ACO) reflects modern hopes to improve medicine and cut costs in the health system. Tony McCormick, a pioneer in the integration of health care systems, describes what is needed on the ground to get doctors working together.

Highlights from the full video interview include:

  • What an Accountable Care Organization is. [Discussed at the 00:19 mark]
  • Biggest challenge in forming an ACO. [Discussed at the 01:23 mark]
  • The various types of providers who need to exchange data. [Discussed at the 03:08 mark]
  • Data formats and gaps in the market. [Discussed at the 03:58 mark]
  • Uses for data in ACOs. [Discussed at the 05:39 mark]
  • Problems with current Medicare funding and solutions through ACOs. [Discussed at the 07:50 mark]

You can view the entire conversation in the following video:

August 02 2012

Why microchips in pills matter

Earlier this week, Proteus announced that they have been approved by the FDA to market their ingestible microchips for pills.

Generally, the FDA approval process for devices that are totally new like this is a painful one, with much suffering. So it is a big deal for anyone to get approved for anything.

But this is a far more important accomplishment than a mere incremental improvement. It is an entirely new kind of medical device and, most importantly, a whole new potential data stream on one of the most critical issues in the delivery of health care.

Modern drugs work wonders, but it does not help a patient if the patient does not take them. Historically, and I do mean the stone age here, the ability to consistently take pills correctly has been called “compliance.” But please do not call it that. Most participants in the movement for patient rights regard that term as paternalistic.

“Adherence” is a much more respectful term (although I have certainly heard it used in a paternalistic manner). In fact, as we progress, I should define that when I say “adherence,” what I mean is “adherence to a plan that belongs to the patient.” If a pill that I am taking makes me so sick to my stomach that I cannot take it any more, then when I decide to stop taking it, I am not being “non-compliant” or “non-adherent” to a medication plan; I have changed my medication plan, and I have yet to discuss the issue with my doctor.

Still, even for patients who want to consistently take their pills according to a plan, it can be extraordinarily difficult. It is hard to remember when a pill has been taken. It is hard to remember to pick up a new supply or to call for a renewal. It is easy to forget pills on trips and run out unexpectedly far from home. Pills frequently must be taken with food, or without food, or with or without specific foods. They must be taken before bed or before breakfast. Personally, I have trouble remembering to take even a single pill consistently. But many people need to manage a tremendous number of pills, and they frequently go off one set of pills and onto another. In short, pill management is a huge mess, and it is difficult to organize anything.

The Proteus technology allows a person to wear a patch that can detect when any pill carrying a Proteus microchip is swallowed. Using that patch and a wireless connection of some kind, it is pretty simple to enable all kinds of clever “take your pill” UX. With this technology, it will finally be possible to reliably perform “quantified self” on pill regimens.

It is hard to imagine just how large an impact this will have on the problem of pills, but “huge” is a conservative estimate. New systems will be able to include digital pharmacy dispensers in the home, which will enable schedules like “take one of these every 56 hours until you have taken 10, and then take one after that in a Fibonacci retracement.” Or complex sequences like “take A and B every two hours, take C every three hours, take D every other day and take E as needed but not more than once a day.”
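Here’s a sketch of the reconciliation such a system enables (Proteus’s actual software is not public): compare detected ingestion events against the plan and flag overdue doses:

```python
# Flag drugs whose last confirmed dose is older than the plan allows.
from datetime import datetime, timedelta

plan = {"drug A": timedelta(hours=2), "drug C": timedelta(hours=3)}
detected = {  # ingestion events reported by the patch
    "drug A": [datetime(2012, 8, 2, 8, 0), datetime(2012, 8, 2, 10, 5)],
    "drug C": [datetime(2012, 8, 2, 8, 0)],
}

now = datetime(2012, 8, 2, 12, 0)
for drug, interval in plan.items():
    last = max(detected.get(drug, []), default=None)
    if last is None or now - last > interval:
        print(f"reminder: {drug} is due (last confirmed dose: {last})")
```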

This should not be taken to mean that this technology will take the thought out of taking pills. Instead, think of it as a mechanism to enable greater self-knowledge through numbers. Already, Proteus engineer Nancy Dougherty has demonstrated that the technology could revolutionize placebo research. Dougherty’s work is utterly surprising and delightful, and it demonstrates that the potential uses for this technology are unlimited.

As with any game-changing tech, Proteus will have to make sure it uses its new super power for good. I see no reason at all why they would not be in a position to do that (and yes, I fully recognize that many people will disagree).

A quick survey of pill management

In order to celebrate this accomplishment, I thought I would take a quick look at the state of pill management until now. Here is an assortment of the pill management systems on sale at my local pharmacy:

Untitled

In the following picture, can you tell these pills apart? One is over the counter and one is prescription. Can you imagine trying to sort through these every day?

Two pills

Pictured below are the “big” pill organizers. The one on the bottom is enormous. I think you could bake cupcakes in it. I seriously hope I will never have to use something like that. Note that the ones with the little blue stickers are specifically designed to be easy to open for patients with arthritis.

Big Pill Organizers

I am releasing these, and all of my “pill box” Flickr set, under a Creative Commons license to mark the Proteus accomplishment.

Photo of pills: “Untitled” by tr0tt3r, on Flickr.

July 25 2012

Democratizing data, and other notes from the Open Source convention

There has been enormous talk over the past few years of open data and what it can do for society, but proponents have largely come to admit: data is not democratizing in itself. This topic is hotly debated, and a nice summary of the viewpoints is available in this PDF containing articles by noted experts. At the Open Source convention last week, I thought a lot about the democratizing potential of data and how it could be realized.

Who benefits from data sets

At a high level, large businesses and other well-funded organizations have three natural advantages over the general public in the exploitation of data sets:

  • The resources to gather the data
  • The resources to do the necessary programming to crunch and interpret the data
  • The resources to act on the results

These advantages will probably always exist, but data can be useful to the public too. We have some tricks that can compensate for each of the large institutions’ advantages:

  • Crowdsourcing can create data sets that can help everybody, including the formation of new businesses. OpenStreetMap, a SaaS project based on open source software, is a superb example. Its maps have been built up through years of contributions by people trying to support their communities, and it supports interesting features missing from proprietary map projects, such as tools for laying out bike paths.

  • Data-crunching is where developers, like those at the Open Source convention, come in. Working at non-profits, during weekend challenges, or just on impulse, they can code up the algorithms that make sense of data sets, along with apps that visualize the results and accept interaction from people with less technical training.

  • Some apps, such as reports of neighborhood crime or available health facilities, can benefit individuals, but we can really drive progress by joining together in community organizations or other associations that use the data. I saw a fantastic presentation by high school students in the Boston area who demonstrated a correlation between funding for summer jobs programs and lower homicide rates in the inner city–and they won more funding from the Massachusetts legislature with that presentation.

Health care track

This year was the third in which the Open Source convention offered a health care track. IT plays a growing role in health care, but a lot of the established institutions are creaking forward slowly, encountering lots of organizational and cultural barriers to making good use of computers. This year our presentations clustered around areas where innovation is most robust: personal tracking, using data behind the scenes to improve care, and international development.

Open source coders Fred Trotter and David Neary gave popular talks about running and tracking one’s achievements. Bob Evans discussed a project named PACO that he started at Google to track productivity by individuals and in groups of people who come together for mutual support, while Anne Wright and Candide Kemmler described the ambitious BodyTrack project. Jason Levitt presented the science of sitting (and how to make it better for you).

In a high-energy presentation, systems developer Shahid Shah described the cornucopia of high-quality, structured data that will be made available when devices are hooked together. “Gigabytes of data is being lost every minute from every patient hooked up to hospital monitors,” he said. DDS, HTTP, and XMPP are among the standards that will make an interconnected device mesh possible. Michael Italia described the promise of genome sequencing and the challenges it raises, including storage requirements and the social impacts of storing sensitive data about people’s propensity for disease. Mohamed ElMallah showed how it was sometimes possible to work around proprietary barriers in electronic health records and use them for research.

Representatives from OpenMRS and IntraHealth International spoke about the difficulties and successes of introducing IT into very poor areas of the world, where systems need to be powered by their own electricity generators. A maintainable project can’t be dropped in by external NGO staff, but must cultivate local experts and take a whole-systems approach. Programmers in Rwanda, for instance, have developed enough expertise by now in OpenMRS to help clinics in neighboring countries install it. Leaders of OSEHRA, which is responsible for improving the Department of Veterans Affairs’ VistA and developing a community around it, spoke to a very engaged audience about their work untangling and regularizing twenty years’ worth of code.

In general, I was pleased with the modest growth of the health care track this year–most sessions drew about thirty people, and several drew a lot more–and with both the energy and the expertise of the people who came. Many attendees play an important role in furthering health IT.

Other thoughts

The Open Source convention reflected much of the buzz surrounding developments in computing. Full-day sessions on OpenStack and Gluster were totally filled. A focus on developing web pages came through in the popularity of talks about HTML5 and jQuery (now a platform all its own, with extensions sprouting in all directions). Perl still has a strong community. A few years ago, Ruby on Rails was the must-learn platform, and knock-off derivatives appeared in almost every other programming language imaginable. Now the Rails paradigm has been eclipsed (at least in the pursuit of learning) by Node.js, which was recently ported to Microsoft platforms, and its imitators.

No two OSCons are the same, but the conference continues to track what matters to developers and IT staff and to attract crowds every year. I enjoyed nearly all the speakers, who often pump excitement into the driest of technical topics through their own sense of continuing wonder. This is an industry where imagination’s wildest thoughts become everyday products.
