
June 26 2012

Health records support genetics research at Children's Hospital of Philadelphia

Michael Italia leads a team of programmers and scientists at the Children's Hospital of Philadelphia (CHOP) Center for Biomedical Informatics, where they develop applications, data repositories, and web interfaces to support CHOP's leading roles in both treatment and research. We recently recorded an interview discussing the collection of data at CHOP and its use to improve both care and long-term research.

Italia, who will speak on this topic at OSCON, describes how the informatics staff derived structured data from electronic health record (EHR) forms developed by audiologists to support both research and clinical care. He describes the custom web interface that makes data available to researchers and discusses the exciting potential of genomic sequencing to improve care. He also lists tools used to collect and display data, many of which are open source.

Particular topics in this video include:

  • The relationship between clinical care and research at Children's. [Discussed at the 00:22 mark]
  • The value of research using clinical data. [Discussed at the 02:30 mark]
  • The challenge of getting good data from health records. [Discussed at the 03:30 mark]
  • Tools for capturing, exporting, and displaying data. [Discussed at the 05:41 mark]
  • Making data useful to clinicians through a simple, modular web interface; tools used. [Discussed at the 12:07 mark]
  • Size of the database and user cohort. [Discussed at the 17:19 mark]
  • The ethical and technical issues of genome sequencing in medical treatment; benefits of sequencing. [Discussed at the 18:23 mark]
  • "Pick out the signal from the noise": integrating genetic information into the electronic health record and "actionable information". [Discussed at the 24:27 mark]

You can view the entire conversation in the following video:

OSCON 2012 Health Care Track — The conjunction of open source and open data with health technology promises to improve creaking infrastructure and give greater control and engagement to patients. Learn more at OSCON 2012, being held July 16-20 in Portland, Oregon.

Save 20% on registration with the code RADAR


June 21 2012

The state of Health Information Exchange in Massachusetts

I recently attended the Massachusetts Health Data Consortium's (MHDC) conference on Health Information Exchange (HIE), modestly titled "The Key to Integration and Accountability." Although I'm a health IT geek, I felt I needed help understanding life outside the electronic health record (EHR) world. So, I roped in Char Kasprzak, statistical data analyst at Massachusetts Health Quality Partners, to give me a better picture of the quality implications of HIE (and to help me write this post).

John Halamka, CIO of Caregroup/Beth Israel Deaconess Medical Center, took the stage first and blasted through all the progress being made establishing the necessary frameworks for HIE to occur in Massachusetts. The takeaway message from John's talk was that there have been many changes since September 2011 in the financial, technical, and legal structures involved in building health information exchange. The lessons learned from the initial pilot should enable Massachusetts to be ready for the first stage of statewide HIE.

HIE development in Massachusetts

Health care providers historically thought of HIE as a large institution run by a state or a major EHR vendor. It carried out the exchange of patient records in the crudest and most heavyweight way: setting up one-to-one relationships with local hospitals and storing the records. (Some of the more sophisticated HIEs instead linked hospitals directly, much as Napster linked end users for file exchange.) These institutions still dominate, but HIE is now used in a much broader sense, referring to the ability of institutions to share data with each other, and even with patients, over a variety of channels.

Despite the push for the health IT industry to use "HIE" as a verb rather than a noun, there was quite a lot of discussion at the event surrounding the structures and applications involved. Although HIE should be conceptually identified as a process (verb), having the structures and organizations (nouns) necessary to facilitate exchange is a challenge facing health care entities across the country. This conference did a good job of articulating these organizational challenges, and it presented clear plans on how Massachusetts is addressing them.

In Massachusetts, the model moving forward for phase one of HIE will be based on the Direct Project, with one central Health Information Service Provider (HISP) that will focus on PKI and S/MIME certificate management, maintaining a provider/entity directory, creating a web portal for those not ready for Direct, and maintaining an audit log of transactions. The concept of HISP was created in the Direct Project Implementation and Best Practices workgroups, and was designed to be an organizational and functional framework for the management of directed exchange between health care providers. The statewide HISP will consist of several existing HISP organizations, including Berkshire Health, Partners, Athena Health, and the New England Health Exchange Network. No small task, but not insurmountable.

I remain skeptical about the ability of providers and even hospitals to install EHRs capable of sending Direct-compliant messages conforming to the XDR/XDM IHE Profile for Direct Messaging. Not because it doesn't work or is some Herculean task, but essentially because it hasn't been mandated. That may change, though, with the inclusion of Direct messaging in the transport standards for Meaningful Use Stage 2. In Massachusetts, the creation of a health information highway (phase 1) is set to go live on October 15, 2012. Phase 2 will add analytics and population health, and phase 3 will add search and retrieve, including a governance model for an Electronic Master Patient Index (EMPI) and Record Locator Service (RLS). Phases 2 and 3 will set a framework for querying patient data across entities, which is one of the biggest technical barriers to HIE. Currently, one of the best methods for this process is the Patient Identifier Cross-Referencing (PIX) profile, but few organizations use it to its full potential.

What are the challenges?

When experts talk about exchanging health information, they tend to focus on the technology. Micky Tripathi, CEO and executive director of the Massachusetts eHealth Collaborative, pointed out at the event that the problem isn't the aggregation or analysis of data, but the recording of data during the documentation process. In my experience, this is quite accurate: Having exchange standards and the ability to analyze big data is useless if you don't capture the data in the first place, or capture it in a non-standard way. This was highlighted when the Massachusetts eHealth Collaborative ran the same reports on 44 quality measures, first using popHealth data, then again with Massachusetts eHealth Collaborative data, and received conflicting results for each measure. There are certainly lessons to be learned from this pilot about the importance of specifying numerators, denominators, vocabularies, and transmission templates.

Determining what to capture can be as important as how the data is captured. Natasha Khouri elaborated on the challenges of accurate data capture during her presentation on "Implementing Race and Ethnicity Data Collection in Massachusetts Hospitals — Not as Easy as It Sounds." In 2006, Massachusetts added three new fields and 33 categories to more accurately record race and ethnicity information. The purpose of this is to address health disparities, which is something I'm very excited to see discussed at a health IT conference.

With accurate data in hand, direct interventions in communities can be more targeted and effective. However, the largest barrier to this seems to have been getting providers to ask questions about race and ethnicity. This was due to high training costs, staff resistance, and workflow changes necessary for collecting the demographic data. This problem was particularly interesting to me, having worked with the Fenway Health Institute to craft their Meaningful Use Stage 2 comments regarding the inclusion of gender identity and sexual orientation in the demographics criteria. Recording accurate data on vulnerable populations is vital to improving public health campaigns.

What about patients?

For a conference with no patient speakers, there was a surprising amount of discussion about how patients will be involved in HIE and the impact EHRs have on patients. Dr. Lawrence Garber, who serves as the medical informatics director for Reliant Medical Group, examined issues of patient consent. The research he discussed showed that when given the choice, about 5% of patients will opt out of HIE, while 95% will opt in. When patients opt in at the entity/organizational level, this enables automated exchange between providers, entities, care teams, and patients. Organizations use a Data Use and Reciprocal Support Agreement (DURSA) to establish a trust framework for authenticating the entities that exchange data (presumably for the benefit of patients). DURSAs will likely play an important role as organizations move toward Accountable Care Organization models of care.

Information exchange should also lead to greater patient satisfaction with medical visits: patients will be able to spend more time talking with their doctors about current concerns instead of wasting time reviewing medical history from records that may be incomplete or inaccessible.

Dana Safran, VP of performance measurement and improvement at Blue Cross Blue Shield, explained at the conference that patients can expect better quality of care because quality improvement efforts start with being able to measure processes and outcomes. With HIE, it will be possible to get actual clinical data with which to enhance patient-reported outcome measures (PROMs) and really make them more reliable. Another topic that can be better measured with HIE is provider practice pattern variation. For example, identifying which providers are "outliers" in the number of tests they order, and showing them where they stand compared to their peers, can motivate them to more carefully consider whether each test is needed. Fewer unnecessary tests means cost savings for the whole system, including patients.

Toward the end of the conference, Dr. Nakhle A. Tarazi gave a presentation on his Elliot M. Stone Intern Project on the impact of EHRs on patient experience and satisfaction. The results were quite interesting, including:

  • 59% of patients noticed no change in time spent with their provider.
  • 65% of patients noticed no change in eye contact with their provider.
  • 67% of patients noticed no change in wait time in the office.

The sample size was small, interviewing only 50 patients, but the results certainly warrant a larger, more in-depth study.

In Massachusetts, it seems like the state of the HIE is strong. The next year should be quite exciting. By this time in 2013, we should have a statewide HISP and a web portal service that enables exchange between providers. Halamka has promised that on October 15 the walls between Massachusetts health care orgs will begin to come down. If it is successful in Massachusetts, it could be a valuable model for other states. We also have the opportunity to involve patients in the process, and I hope organizations such as The Society for Participatory Medicine and Direct Trust will be involved in making patients active partners in the exchange of health data.



Clinician, researcher, and patients working together: progress aired at Indivo conference

While thousands of health care professionals were flocking to the BIO International Convention this week, I spent Monday in a small library at the Harvard Medical School listening to a discussion of the Indivo patient health record and related open source projects with about 80 intensely committed followers. Lead Indivo architect Daniel Haas, whom I interviewed a year ago, succeeded in getting the historic 2.0 release of Indivo out on the day of the conference. This article explains the significance of the release in the health care field and the promise of the work being done at Harvard Medical School and its collaborators.

Although still at the early adoption stages, Indivo and the related SMART and i2b2 projects merit attention and have received impressive backing. The Office of the National Coordinator funded SMART, and NIH funded i2b2. National Coordinator Farzad Mostashari was scheduled to attend Monday's conference (although he ended up having to speak over a video hookup). Indivo inspired both Microsoft HealthVault and Google Health, and a good deal of its code underlies HealthVault. Australia has undertaken a nationwide PHR initiative inspired by Indivo. A Partners HealthCare representative spoke at the conference, as did someone from the MIT Media Lab. Clayton M. Christensen et al. cited Indivo as a good model in The Innovator's Prescription: A Disruptive Solution for Health Care. Let's take a look at what makes the combination so powerful.

Platform and reference implementation

The philosophy underlying this distributed open source initiative is to get clinicians, health researchers, and patients to share data and work together. Today, patient data is locked up in thousands of individual doctor and hospital repositories; whether they're paper or electronic hardly makes a difference, because they can't be combined or queried. Patients usually can't see their own data, as I described in an earlier posting, much less offer it to researchers. Dr. Kenneth Mandl, opening the conference, pointed out that currently an innovative company in the field of health data will die on the vine, because it can't get data without making deals with each individual institution and supporting its proprietary EHR.

The starting point for changing all that, so far as this conference goes, is the SMART platform. It simply provides data models for storing data and APIs to retrieve it. If an electronic health record can translate data into a simple RDF model and support the RESTful API, any other program or EHR that supports SMART can access the data. OAuth supports security and patient control over access.
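To make the idea concrete, here is a minimal sketch of a client building an authorized call against a SMART-style RESTful endpoint. The URL layout, the `medications` resource, and the bearer-style token are all illustrative assumptions; a real SMART container defines its own endpoints, and its OAuth flow involves more than a single header.

```python
import urllib.request

def smart_request(base_url, record_id, token):
    """Build an authorized request for a hypothetical SMART-style endpoint.

    The path layout and bearer-style token below are assumptions for
    illustration; they are not the actual SMART API.
    """
    url = f"{base_url}/records/{record_id}/medications/"
    req = urllib.request.Request(url)
    req.add_header("Authorization", f"Bearer {token}")
    # SMART models data in RDF, so ask for an RDF serialization
    req.add_header("Accept", "application/rdf+xml")
    return req

req = smart_request("https://smart.example.org", "p123", "s3cret")
print(req.full_url)
```

The point of the sketch is how little machinery is needed on the client side: a URL convention, a token, and a content type.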

Indivo is a patient health record (or, to use the term preferred by the conference speakers, a personally controlled health record). It used to have its own API, and the big significance of Monday's 2.0 release is that it now supports SMART. The RESTful interface will make Indivo easy to extend beyond its current Java and Python interfaces. So there is now a far-reaching platform for giving patients access to data and working seamlessly with other cooperating institutions.

The big missing piece is apps, and a hackathon on Tuesday (which I couldn't attend) was aimed at jump-starting a few. Already, a number of researchers are using SMART to coordinate data sharing and computation through the i2b2 platform developed by Partners. Ultimately, the SMART and Indivo developers hope to create an app store, inspired by Apple's, where a whole marketplace can develop. Any app written to the SMART standard can run in Indivo or any other system supporting SMART. The concept of an app in SMART and Indivo differs from a consumer market, though. The administrator of the EHR or PHR would choose apps, vetting them for quality and safety, and then a doctor, researcher, or patient could use one of the chosen apps.

Shawn Murphy of Partners described the use of i2b2 to choose appropriate patients for a clinical study. Instead of having to manually check many different data repositories for patients meeting the requirements (genetic, etc.), a researcher could issue automated queries over SMART to the databases. The standard also supports teamwork across institutions. Currently, 60 different children's hospitals' registries talk to each other through i2b2.

It should be noted that i2b2 does not write into a vendor's EHR system (which the ONC and many others call an important requirement for health information exchange), because putting data back into a silo isn't disruptive innovation. It's better to give patients a SMART-compatible PHR such as Indivo.

Regarding Tuesday's hackathon, Haas wrote me, "By the end of the day, we had several interesting projects in the works, including an app to do contextualized search based on a patient's Problems list (integration with and MedlinePlus), and app integration with BodyTrack, which displays Indivo labs data in time-series form alongside data from several other open API inputs, such as Fitbit and Zeo devices."

Standards keep things simple

All the projects mentioned are low-budget efforts, so they borrow and repurpose whatever open source tools they can. As Mostashari said in his video keynote, they believe in "using what you've got." I have already mentioned SMART's dependence on standards, and Indivo is just as beholden to other projects, particularly Django. For instance, Indivo allows data to be stored in Django's data models (Python structures that represent basic relational tables). Indivo also provides an even simpler JSON-based data model.
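To give a feel for how simple a JSON-based data model can be, here is a sketch of a lab-result document and a serialization round trip. The field names are illustrative assumptions, not Indivo's actual schema.

```python
import json

# A hypothetical lab-result document in the spirit of Indivo's JSON data
# models; the field names here are invented for illustration and are not
# Indivo's actual schema.
lab_result = {
    "__modelname__": "LabResult",
    "test_name": "Hemoglobin A1c",
    "value": 6.1,
    "unit": "%",
    "collected_at": "2012-06-01T09:30:00Z",
}

# The whole "data model" is just JSON: trivial to store, send, and parse
serialized = json.dumps(lab_result)
restored = json.loads(serialized)
print(restored["test_name"])
```

The attraction of this style is that any language with a JSON library can produce or consume the record without a schema compiler or code generation step.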

The format of the data is just as important as the exchange protocol if interoperability is to succeed. The SMART team chose to implement several "best-of-breed" standards that would cover 80% of use cases: for instance, SNOMED for medical conditions, RxNorm for medications, and LOINC for labs. Customers using other terminologies will have to translate them into the supported standards, so SMART contains provenance fields indicating the data source.
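A translation layer of this kind can start out as little more than a lookup table plus a provenance tag. The sketch below assumes a made-up local vocabulary and made-up RxNorm identifiers; it only illustrates the shape of the mapping, not real codes.

```python
# Sketch of translating a local medication vocabulary into RxNorm codes,
# keeping a provenance field so downstream consumers know where each code
# came from. Local codes and RxNorm identifiers are invented for
# illustration.
LOCAL_TO_RXNORM = {
    "MED-001": "197361",   # hypothetical mapping for a lisinopril product
    "MED-002": "860975",   # hypothetical mapping for a metformin product
}

def translate(local_code, source_system):
    rxnorm = LOCAL_TO_RXNORM.get(local_code)
    if rxnorm is None:
        raise KeyError(f"No RxNorm mapping for {local_code}")
    # Tag the translated code with its origin, in the spirit of SMART's
    # provenance fields
    return {"code": rxnorm, "system": "RxNorm", "provenance": source_system}

med = translate("MED-001", "hospital-pharmacy-system")
print(med)
```

Keeping the provenance alongside the translated code lets a consumer decide how much to trust a mapping, or trace it back when two sources disagree.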

The software is also rigorously designed to be modular, so both the original developers and other adopters can replace pieces as desired. Indivo already has plenty of fields about patient data and about context (provider names, etc.), but more can be added ad infinitum to support any health app that comes along. Indivo 2.0 includes pluggable data models, which allow a site to customize every step from taking in data to removing it. It also supports new schemas for data of any chosen type.

The simplicity of Indivo, SMART, and i2b2, so much in contrast with most existing health information exchanges, is reminiscent of Blue Button. Mandl suggested that a Blue Button app would be easy to write. But the difference is that Blue Button aimed to be user-friendly, whereas the projects at this conference are developer-friendly. That means they can add some simple structure and leave it up to app developers to present the data to users in a friendly manner.

The last hurdle

Because SMART and Indivo ultimately want the patient to control access to data, trust is a prerequisite. OAuth is widely used by Twitter apps and other sites across the web, but hasn't been extensively tested in a health care environment. We'll need more experience with OAuth to see whether the user experience and users' sense of security are adequate. Beyond that, trust is up to the institutions adopting Indivo or SMART. A couple of speakers pointed out that huge numbers of people already entrust their financial account passwords to online services, so once they learn the benefits of access to patient records, they should adopt Indivo as well. An Indivo study found that 84% of people are willing to share data with social networks for research and learning.

SMART, Indivo, and i2b2 make data sharing easier than ever. But as many have pointed out, none of this will get very far until patients, government, and others demand that institutions open up. Mandl suggested that one of the major reasons Google Health failed was that it could never get enough data to gain traction; the health providers just wouldn't work with the PHR. At least the open source standards take away some of the technical excuses they have used up to now.

June 20 2012

How the federal government helps health care standards evolve

Health information exchange is on the front lines of cost control and health care improvements. To provide simple tools and channels for hospitals, doctors, and other institutions to exchange data with the government, patients, and each other, the Department of Health and Human Services coordinates an initiative called the Federal Health Architecture (FHA).

Dr. Lauren Thompson, director of the FHA, speaks in this interview about the FHA's accomplishments and the current state of health information exchange.

Topics in the interview include:

  • How the FHA arose as a solution to the problem of exchanging health data, both across government agencies and with partners in the private sector. [Discussed at the 0:45 mark]
  • Status of the Nationwide Health Information Network (NwHIN), who is participating in the NwHIN Exchange, and the requirements for participation. [Discussed at the 4:45 mark]
  • Initiatives at the Department of Defense and the Department of Veteran Affairs. [Discussed at the 7:45 mark]
  • Future of the NwHIN Exchange and the CONNECT project, including support for the Direct standard. [Discussed at the 9:45 mark]
  • Cost savings and other benefits offered by the work of the Federal Health Architecture to the health care field. [Discussed at the 13:15 mark]
  • Promoting services to Health Information Exchanges. [Discussed at the 14:45 mark]

You can view the entire conversation in the following video:


June 19 2012

Why health IT systems integrate poorly today, and what future EHRs can do about it

Physicians, patients, healthcare providers, and other health industry participants have been clamoring for modernization of health IT systems for years. Recently, the HITECH Act, meaningful use, and other major government initiatives led by the Office of the National Coordinator (ONC) have been accelerating the demand. Unfortunately, as stated eloquently in the recent New England Journal of Medicine (NEJM) article "Escaping the EHR Trap - The Future of Health IT," health IT systems are trapped in legacy infrastructures:

"It is a widely accepted myth that medicine requires complex, highly specialized information-technology (IT) systems. This myth continues to justify soaring IT costs, burdensome physician workloads, and stagnation in innovation — while doctors become increasingly bound to documentation and communication products that are functionally decades behind those they use in their 'civilian' life."

The problem is not that engineers don't know how to create the right technology solutions or that we're facing a big governance problem. Rather, the real cross-industry issue is much bigger: Our approach and the methods we have chosen for integration are opaque, decades old, and they reward closed systems. Drs. Mandl and Kohane summarize it well in their NEJM article by saying "a few companies controlling much of the market remain entrenched in 'legacy' approaches, threatening other vendors' viability." They elaborated further on what they feel is the reason:

"We believe that EHR [electronic health record] vendors propagate the myth that health IT is qualitatively different from industrial and consumer products in order to protect their prices and market share and block new entrants. In reality, diverse functionality needn't reside within single EHR systems, and there's a clear path toward better, safer, cheaper, and nimbler tools for managing healthcare's complex tasks."

From the 1950s through the mid-1990s, systems integration required every system to know about each other in advance, agree on what data they would share, engage in governance meetings, put memoranda of understanding or contracts in place, and so on. In the age of the web, the approach has changed to one where the owner of the data provides whatever they decide (e.g., through a web server) and whoever wants it can come get it through a secure access method (e.g., through a browser or HTTP client). This kind of revolutionary approach to systems integration is what the health IT and medical device sectors are sorely lacking, and something that ONC, the U.S. Department of Health and Human Services (HHS), and the National Institute of Standards and Technology (NIST) can help promote. No amount of government money will solve health IT integration issues so long as our approach is incorrect.

As users of health IT systems, Drs. Mandl and Kohane have identified the problem of legacy approaches doing a lot of damage. What can we in the technology industry do to help? Let's take a look at the major issues holding back modernization of IT and integration of systems in healthcare, and what the government and systems owners — such as EHR vendors — can do about it.

We don't support shared identities, single sign-on (SSO), and industry-neutral authentication and authorization

Most health IT systems create their own custom logins and identities for users, storing metadata about roles, permissions, access controls, etc., in an opaque part of a proprietary database. Without identity sharing and exchange, there can be no easy and secure application integration, no matter how good the formats are. ONC should mandate that all future EHRs use industry-neutral, well-supported identity management technologies so that each system at least has the ability to share identities. ONC does not need to do anything new — it can simply piggyback on the White House's National Strategy for Trusted Identities in Cyberspace (NSTIC), which is already defined and being managed by NIST.

I'm continually surprised how little attention is paid to this cornerstone of application integration. There are very good open identity exchange protocols, such as SAML, OpenID, and OAuth, as well as open roles- and permissions-management protocols, such as XACML, that allow identity and permission sharing. Free open source tools such as OpenAM, Apache Directory, OpenLDAP, and Shibboleth, along with offerings from many commercial vendors, make it almost trivial to do identity sharing, SSO, attribute-based access control (ABAC), and role-based access control (RBAC). It's hard to believe, but most current enterprise health IT systems don't even support Active Directory or LDAP.
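The core idea behind ABAC is simpler than the XACML syntax suggests: a policy names required attributes, and access is granted only when the subject's attributes satisfy all of them. The toy evaluator below is a sketch of that idea only; real XACML policies are XML documents evaluated by a policy decision point, and the attribute names here are invented.

```python
# Toy attribute-based access control (ABAC) check in the spirit of XACML:
# a policy is a set of required attribute values, and access is granted
# only when the subject's attributes satisfy every one of them. This is a
# sketch of the concept, not XACML itself.
def permits(policy, subject):
    return all(subject.get(attr) == value for attr, value in policy.items())

# Hypothetical policy: only cardiology clinicians may view this record
policy = {"role": "clinician", "department": "cardiology"}

dr_lee = {"role": "clinician", "department": "cardiology"}
analyst = {"role": "analyst", "department": "cardiology"}

print(permits(policy, dr_lee))    # attributes satisfy the policy
print(permits(policy, analyst))   # wrong role, so access is denied
```

Once identities and attributes are shared across systems, a check like this can run anywhere, which is exactly what proprietary per-system logins make impossible.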

We're too focused on "structured data integration" instead of "practical app integration" in our early project phases

In the early days of data collection and dissemination (it's sad to say that after 50 years of computing, health IT is still in those early days, but it's true), it's not important to share structured data at detailed machine-computable levels. Instead, different applications need immediate access to portions of data they don't already manage. When industries take on structured data integration too early, they often waste time because they don't understand the use cases well enough to specify best-case solutions. Poor implementations result.

For example, instead of asking for HL7 (the health IT vendors' evolved standard) or other structured data about patients, we can use simple techniques like HTML widgets to share "snippets" of our apps. Widgets are portions of apps that can be embedded or "mashed up" in other apps without tight coupling. The Department of Veterans Affairs' successful Blue Button approach has demonstrated the power of app integration versus structured data integration. It provides immediate benefit to users while the data geeks figure out what they need for analytics, computations, etc.
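A widget can be as simple as a URL the producing app exposes and an iframe the consuming app embeds, with no structured-data agreement between them. The sketch below generates such an embed snippet; the URL and parameters are hypothetical.

```python
# Minimal sketch of sharing an app "snippet" as an embeddable widget: the
# producing app exposes a URL, and any consumer drops an iframe into its
# own page. The widget URL and its query parameter are hypothetical.
def widget_embed(widget_url, width=400, height=300):
    return (f'<iframe src="{widget_url}" width="{width}" height="{height}" '
            'frameborder="0"></iframe>')

snippet = widget_embed("https://ehr.example.org/widgets/med-list?patient=p123")
print(snippet)
```

The consuming page never parses medication data at all; the producing app stays responsible for rendering it, which is what makes this kind of integration so much faster to stand up than structured data exchange.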

Once app integration, SSO, identity sharing, and simple formats like JSON are in good shape, we can shift our focus to structured data integration, with all the governance and analytics associated with it. Future EHRs must master the production and consumption of secure authenticated application widgets using industry-standard approaches such as JavaScript and JSON.

We focus more on "pushing" versus "pulling" data than is warranted early in projects

A question we commonly ask at the beginning of every integration project is "what data can you send me?" This is the "push" model, where the system that contains the data is responsible for sending it to everyone who is interested (or to some central provider, such as a health information exchange). What future EHRs should do instead is implement syndicated Atom-like feeds (which could carry HL7 or other formats) for all the data they can share, and allow anyone who wants the data to subscribe to it. This is the "pull" model. Data holders allow secure authenticated subscriptions to their data and don't worry about direct coupling with other apps. If our future EHRs became completely decoupled, many of our integration problems would go away. With Atom and JSON as formats, the Open Data Protocol (OData), which has free open source implementations, could open patient data in days rather than months.
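In the pull model, the EHR publishes a feed and subscribers poll it on their own schedule. Here is a minimal Atom feed of the kind such a system might expose, parsed with the standard library; the entry contents are invented for illustration.

```python
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"

# A minimal Atom feed such as a "pull"-style EHR might publish; subscribers
# poll the feed and fetch new entries on their own schedule. The entries
# here are invented for illustration.
feed_xml = f"""<feed xmlns="{ATOM}">
  <title>New lab results</title>
  <entry><title>HbA1c result for patient p123</title>
         <id>urn:uuid:0001</id></entry>
  <entry><title>Lipid panel for patient p456</title>
         <id>urn:uuid:0002</id></entry>
</feed>"""

# A subscriber only needs generic XML tooling to consume the feed
root = ET.fromstring(feed_xml)
titles = [e.findtext(f"{{{ATOM}}}title") for e in root.findall(f"{{{ATOM}}}entry")]
print(titles)
```

Nothing in the consumer is specific to the publisher: any system that can read Atom can subscribe, which is the decoupling the pull model buys.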

To make sure security and privacy are maintained in the decoupled systems, automated auditing of protected health information can be enabled by logging data transfers through use of syslog and other reliable methods with proper access control rules expressed in standards like XACML.
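An audit trail for disclosures can reuse ordinary logging infrastructure. The sketch below writes to an in-memory stream so it is self-contained; in production the handler would be something like `logging.handlers.SysLogHandler`, and the record layout here is an assumption, not a standard.

```python
import io
import logging

# Sketch of an audit trail for PHI disclosures. A StringIO stream stands in
# for syslog so the example is self-contained; in production you would
# attach logging.handlers.SysLogHandler instead. The field layout is an
# assumption for illustration.
stream = io.StringIO()
audit = logging.getLogger("phi.audit")
audit.setLevel(logging.INFO)
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
audit.addHandler(handler)

def log_disclosure(actor, patient_id, resource):
    # One record per disclosure: who accessed whose data, and what part
    audit.info("actor=%s patient=%s resource=%s", actor, patient_id, resource)

log_disclosure("dr_lee", "p123", "medications")
record = stream.getvalue()
print(record)
```

Because every transfer flows through one logging call, the access-control rules and the audit trail can evolve independently of the apps doing the exchanging.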

We're too focused on heavyweight industry-specific formats instead of lightweight or micro formats

Appointment scheduling in the health IT ecosystem is a major source of health IT integration pain (in fact, much worse than most other areas). If EHRs just used industry-standard iCalendar/ICS publishing and subscribing we could solve, based on my experience, a large majority of appointment schedule integration problems. Think about how your iPad can sync with your Outlook/Exchange server at work — it's not magic; it's an industry-neutral standard widely used and supported.
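To show how lightweight iCalendar is, here is a minimal RFC 5545 event built by hand with no libraries at all. The appointment details are invented; a production implementation would also handle line folding, escaping, and time zones.

```python
# Building a minimal iCalendar (RFC 5545) event by hand. The appointment
# details are invented for illustration; a real implementation would also
# handle line folding, escaping, and time zones.
def make_ics(uid, start_utc, end_utc, summary):
    lines = [
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//example-ehr//EN",
        "BEGIN:VEVENT",
        f"UID:{uid}",
        f"DTSTART:{start_utc}",
        f"DTEND:{end_utc}",
        f"SUMMARY:{summary}",
        "END:VEVENT",
        "END:VCALENDAR",
    ]
    return "\r\n".join(lines) + "\r\n"   # iCalendar requires CRLF line endings

ics = make_ics("appt-001@example.org", "20120716T140000Z",
               "20120716T143000Z", "Follow-up visit")
print(ics)
```

Any calendar client that speaks ICS, from Outlook to an iPad, can subscribe to a feed of events like this without knowing anything about the EHR that produced them.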

Another example of outmoded industry practice is the use of HL7 ADTs for patient profile exchanges instead of more common and better supported SAML (which emerged to meet the need for industry-neutral user identities and profile exchange). If you've ever used your Google account/profile to log into another app on another website, you're using SAML. Again, no magic. It works millions of times a day with "good enough" security and user-controlled privacy.

Data emitted is not tagged using semantic markup, so it's not securable or shareable by default

In many existing contracts CIOs have signed, the vendors of the systems that house the data also "own" the data. The data can't be easily liberated because the vendors actively prevent it from being shared. The healthcare industry sets up large data governance structures where vendors are cajoled, and often begged, for access to patient data, but vendors claim that it's not easy or not possible because health data is special. However, Drs. Mandl and Kohane, like me, think otherwise, clearly stating that "some types of data used in healthcare are stored and used in ways that are unique to the medical field, but the field is not unusual in its need to share data across diverse electronic systems." Even when systems are opened up after data governance establishes the sources and sinks of data, along with specifications of data ownership rules, vendors do only the minimal tagging possible. They do structured data integration and then present information on the screen (usually as HTML), failing to tag the data with proper semantic markup even when it's basically free to do so (no extra development is required).

One easy way to create semantically meaningful patient data is to have all HTML tags generated with companion RDFa or HTML5 data attributes, using industry-neutral schemas and microformats. Using microformats and RDFa as a start, EHRs can then begin tagging (in backward-compatible HTML) so that metadata is easier to discover and data is simpler to secure and share. None of this is technically challenging if we really care about integration and are not just giving it lip service. Google's recent implementation of its Knowledge Graph is a great example of the utility of this semantic approach. Once even basic microformats are in place, with RDFa for authenticated or unauthenticated semantic tagging, we can then create SPARQL endpoints to make the data easier to query and understand.
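As a sketch of the idea, the helper below wraps a clinical value in HTML5 `data-*` attributes so the same markup a clinician reads on screen is also machine-discoverable. The attribute names (`data-obs-*`) are a hypothetical vocabulary invented for illustration; a real deployment would use a published schema or microformat.

```python
from html import escape

def tag_observation(label, value, unit, code):
    """Emit human-readable HTML whose data-* attributes carry the
    machine-readable observation (hypothetical vocabulary)."""
    return ('<span data-obs-code="{c}" data-obs-value="{v}" data-obs-unit="{u}">'
            '{l}: {v} {u}</span>').format(
                c=escape(code), v=escape(str(value)),
                u=escape(unit), l=escape(label))

# 8867-4 is the LOINC code for heart rate
html = tag_observation("Heart rate", 72, "bpm", "8867-4")
```

The page renders exactly as before ("Heart rate: 72 bpm"), but now any downstream system can scrape the coded observation without brittle text parsing — the "basically free" tagging the paragraph describes.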

When health IT systems produce HTML, CSS, JavaScript, JSON, and other common outputs, it's not done in a security- and integration-friendly manner

Future EHRs should start to use industry-neutral CSS frameworks like Twitter's Bootstrap (which is free and open source). When using JavaScript, EHRs should use common lightweight and integration-friendly libraries like jQuery, instead of JavaScript frameworks that take over the app and prevent easy discovery and integration. Lastly, instead of emitting just complex XML or non-semantically aware HTML, EHRs should emit JSON from APIs so client-side applications can be easily written to take advantage of the data. They should also offer both JSON and JSONP, so that pages on other origins can integrate without running afoul of the browser's same-origin policy (while guarding against the script-injection risks that JSONP introduces). Modern engineers who care about integration should always assume that their user interfaces (UI) might be "scraped" or connected to other systems. These interfaces should make it easy for others to securely take UI-focused data and create secure secondary uses.
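A minimal sketch of the JSON/JSONP point: the same endpoint can serve plain JSON to same-origin clients and a JSONP-wrapped payload when the caller supplies a callback name. The function and field names are hypothetical.

```python
import json

def render_api_response(data, callback=None):
    """Return (body, content_type). Plain JSON by default; if the client
    supplies a callback parameter, wrap the payload as JSONP so a page on
    another origin can consume it via a <script> tag."""
    body = json.dumps(data)
    if callback:
        # A real service must whitelist callback names to block script injection.
        return "%s(%s);" % (callback, body), "application/javascript"
    return body, "application/json"

body, ctype = render_api_response(
    {"patient_id": "p-001", "allergies": ["penicillin"]},
    callback="handleAllergies")
```

Today the same goal is usually met with CORS headers instead of JSONP, but in either case the design principle is the same: assume other systems will consume your output, and make that consumption safe and easy.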

All of these techniques I've mentioned are widely accepted secure web practices that need to make their way into our EHRs. Drs. Mandl and Kohane summed up the benefits of these approaches perfectly in their NEJM article:

"Health IT vendors should adapt modern technologies wherever possible. Clinicians choosing products in order to participate in the Medicare and Medicaid EHR Incentive Programs should not be held hostage to EHRs that reduce their efficiency and strangle innovation. New companies will offer bundled, best-of-breed, interoperable, substitutable technologies — several of which are being developed with ONC funding — that can be optimized for use in healthcare improvement. Properly nurtured, these products will rapidly reach the market, effectively addressing the goals of 'meaningful use,' signaling the post-EHR era, and returning to the innovative spirit of EHR pioneers."

I'll go one step further and say that the government's multi-billion-dollar incentives push won't do much if the technical methods and approaches being promoted don't match the commonly accepted, lightweight, and modern approaches mentioned above.

OSCON 2012 Healthcare Track — The conjunction of open source and open data with health technology promises to improve creaking infrastructure and give greater control and engagement to patients. Learn more at OSCON 2012, being held July 16-20 in Portland, Oregon.

Save 20% on registration with the code RADAR


June 15 2012

Games for Health covers current status of behavior change

I had a chance yesterday to attend one day of the Games for Health conference, which covers one of the fastest-growing areas of mobile apps and an area of innovation that clinicians and policy-makers are embracing with growing enthusiasm.

The gamification of everyday life has become a theme of modern business, as well as public health and other groups interested in motivating people. Fun is now the ally, not the enemy, of intelligence, productivity, social engagement, and well-being. Here are a few existing or upcoming projects that illustrate what games are doing in health care:

  • A researcher developed a game for people with Attention Deficit Disorder that pops up distractions from time to time. If the player gives in to a distraction, the game ends. As the player gets better at ignoring distractions, they become more frequent to keep testing the player's focus. The researcher claims that a few hours of this game eliminated the symptoms of ADD for several months afterward in many children, achieving more than drugs and other therapies.

  • A company is working with the Department of Defense on a game that encourages wounded soldiers to do their physical therapy. Normally, PT is an hour or more of boring, repetitive, painful exercise (I know, having undergone it). The game simply presents you with obstacles that you have to remove by performing one of the motions prescribed by the physical therapist. Thus, it keeps you engaged and randomizes the exercises to keep them fresh.

  • A web-based game asks you to wager game currency on whether an individual is likely to get a particular disease. The game presents you with increasing amounts of information about the relationships between genes and disease. The overall message of the game is that knowing your personal genome doesn't offer much guidance on whether you'll get the disease or how to avoid it.

  • A soccer ball is loaded with a device that measures how much it's moving. From this, a hub can determine how much children are playing and track activity over time.

The last device, clever as it is, arouses depressing thoughts in me. When I was a kid (insert appropriate background music here), nobody had to provide sensors or track our progress to persuade us to take a ball to an empty lot across the street for a game. But that particular lot is now covered with tract housing and the street is so busy that not even the most danger-immune wild child would try to cross it. Meanwhile, parents are afraid (sometimes for good reason and sometimes not) of letting kids wander unattended, and the lures of cable TV and social networks keep them on their couches. So I'm happy to see the digital incentives to increase exercise.

And although gaming hasn't reached the mainstream of health care yet, it's getting there. The Department of Health and Human Services has championed games, and major research centers in health care are developing programs for clinicians.

Getting to the conference at the Hyatt Harborside on the Boston waterfront was the first challenge, and after earning that badge, my next hurdle was avoiding the breakfast buffet. But as an attendee pointed out to me, being physically isolated helped keep people on site and talking to each other. Certainly, the location was spectacular, with lunch on the patio facing a view of the Boston skyline.

Personal control and empowerment in all areas of life was the theme of the day, expertly introduced in the opening keynote by well-known researcher Jane McGonigal. She started by reviewing the major regrets people express at the end of their lives. I don't think that I'll regret spending time listening to Jane McGonigal. Although she was pushing the use of her SuperBetter tool for personal growth, the basic principles are easy to follow independently: pick a difficult but achievable goal that means a lot to you; measure what you do each week; enlist friends for support and positive thinking. I'm doing it myself, and maybe next year I won't eat the muffins.

Jane McGonigal's keynote
Jane McGonigal's keynote.

The government is here to help you

There's a fine line between games that promote general health and games that have a special medical purpose. I would guess (as a lay person) that the latter category includes the game to combat ADD and the game to promote PT. And this category is subject to regulation by the FDA. We had a session by lawyer James M. Flaherty, Jr. on this seemingly dull topic, and I'm happy that a lot of people came and treated the subject respectfully. When we entrust a medical matter to something, even a game, we need to trust that it will have the desired effect and not harm us.

Thus, if a game is tied to a particular medical device that the FDA is already regulating, the game is subject to the same regulation. That may require the manufacturer to go so far as to arrange a clinical trial and get approval from an Institutional Review Board. A game could also be subject to FDA regulation if the manufacturer claims a medical benefit. (On the other hand, a doctor is free to advise patients to use a game for some medical purpose without triggering FDA regulation.)

FDA regulations are undergoing major changes in this area. A year ago the agency released a Draft Guidance Document on Mobile Medical Applications, which is worth a close look from game developers, and documents specifically addressing games are likely to follow. Recognizing that current registration procedures are cumbersome, Congress is well along the way to passing legislation that would reform the regulations and ask the FDA to hold discussions with people in the field--discussions that Flaherty urged us all to join. Game-makers also have to start thinking of experiments that can demonstrate the safety and effectiveness of their products.

Too healthy for your own good?

I brought away only a couple dystopic thoughts from Games for Health. One revolved around the privacy worries that accompany every activity modern people do online. Doctors and other professionals engaged in our care are regulated concerning whom they share our information with, and for what purposes. But game manufacturers and sites that offer to track us are not covered by rules like HIPAA. We should check their privacy policies before using them, and be aware that they have lots of incentives to mine the data and use it for marketing and other purposes.

The other, related, worry was about compelled participation. If your employer forces you to enroll in a program to lose weight, or your insurance company bases its premiums on your blood sugar levels, it's a game-changer. One journalist recently compared self-tracking and Quantified Self to B.F. Skinner-like behaviorism, which struck me as absurd because in self-driven health movements the individual is making choices all along. The comparison takes on more relevance if an outsider is trying to control your behavior.

And if external rewards are tied to game-playing, incentives to cheat tail along. People will hack devices to report better results than they actually achieve, hire people to do things that they report themselves doing, etc. Certificates and encryption will have to be put in place. The landscape of health and gamification will be degraded.

Let's reserve these concerns for policy-making, while keeping them in mind while designing games that people use voluntarily and enjoy.

June 13 2012

Health care privacy discussed as an aspect of patient control

If health care reform depends on patient engagement and the mining of public health data, it depends equally on protecting the patient's privacy. Moreover, real-life stories from victimized patients show that privacy is caught up with issues of security, clinical decision-making, mobile health, and medical errors. After the patient access summit and the health data initiative forum, therefore, it was supremely appropriate for me to attend the second annual health privacy summit, which I helped to organize.

Joy Pritts and others on panel
Joy Pritts and others on panel.

The conference this year had even more detail and more subtle nuance than the conference I reported on last year. Last year's summit put a valuable stake in the ground to acknowledge the importance of privacy in health policy, and this year we took off from that point. Two leading members of the Office of the National Coordinator at the Department of Health and Human Services came to speak--National Coordinator Farzad Mostashari and Chief Privacy Officer Joy Pritts--and Patient Privacy Rights, the conference organizer, presented a new Louis D. Brandeis privacy award to Congressmen Joe Barton and Ed Markey, world-renowned security expert Ross Anderson, and long-term privacy researcher Alan Westin.

About 150 people came to the conference, which took place Wednesday and Thursday last week. Hundreds more followed webcasts live, and these will be posted online.

Scope of the privacy debate

The health care field is divided between those who think privacy is pretty good already and should not suck up resources that could go into other reforms, and those who insist on reviewing all changes to practices and technology. The latter sometimes say that it need not be a "zero-sum game" (in fact, Mostashari stated that in his keynote). On the contrary, they suggest that a patient's trust in privacy protection is actually a prerequisite to data sharing and good medical care, because a patient will just keep embarrassing information secret if she is afraid it will fall into the wrong hands.

The debate can get complicated because it involves laws that have changed over time and vary from state to state, common practices that undermine stated commitments to following the law (such as taking data home on unencrypted laptops), ignorance on many sides, and bad actors who are not dissuaded by even the best regulations and institutional practices. Because the debate was covered in my article from last year's conference, I'll just update that to say that more speakers this year affirmed a tension between privacy and the kind of data sharing needed to improve patient care. I heard several statements along the lines of one by Ann Freeman Cook, a psychology professor and ethics researcher, who found IRBs struggling, and often failing, to reconcile patient privacy with the needs of researchers and the public.

Fred Trotter (who co-authored a book explaining the health IT field for O'Reilly) recently wrote that the most urgent needs in health care data were letting patients see their records and correcting errors in the records. He's one of the "privacy is good enough" activists, but his concerns came up at the privacy conference as well. One of the major announcements at the conference, in fact, was a draft of a Consumer Health Privacy Bill of Rights that drew on the White House's recent Consumer Privacy Bill of Rights. The Health Privacy bill goes far beyond keeping patients' data out of unauthorized hands. It also addresses the right of patients to read data written by their doctors, to correct errors, and to be told when their data is shared outside the context in which they offered it.

A number of heart-rending stories from patients were shared at the beginning of the summit. If one examined them carefully, one could cavil over whether each story really represented a privacy breach. Some of the stories were more about errors or about poorly recorded decisions (often in EHRs that were too rigid to accurately represent patient complaints). And the privacy breaches were sometimes just bad luck--more the result of a malicious actor bypassing safeguards than a lack of safeguards.

Nevertheless, I accepted that all of them fell under the umbrella of "privacy protections." Privacy is about the right of the patient to control his data, and it involves all these things. So the topics at this conference are relevant to all the issues health care advocates talk about regularly: data exchange and ACOs, clinical research, the use of apps on mobile devices, the Quantified Self movement, and social networking in patient empowerment.


Here are some of the interesting topics mentioned at the conference.

  • Leading privacy researcher Latanya Sweeney showed off her Data Map that shows all the places patient data gets sent in the normal run of treatment, payment, public health, and research. Suggestions are requested.

  • Built-in privacy: Mostashari pointed out that a concern for privacy led the group designing the Direct project to make sure that the middleman routing data should never know who is sending or receiving. Identities are buried in the encrypted body of the message.

  • Ross Anderson delivers keynote
    Ross Anderson delivers keynote.

    Security expert Ross Anderson, who has studied health care systems all over Europe, suggested a number of measures to protect patient privacy. Some are standard security measures: keep information scattered in different repositories (this would mandate HIEs in the US that query doctors for information instead of uploading it to their own servers); don't give central authorities automatic access to data; use role-based access (but that's hard to do properly). Another safeguard is to let the patients audit their own data. Anderson pointed out that longitudinal data--which researchers value highly--is impossible to de-identify because there is too much data snoopers can use to link the data with other sources about the patient. He also said problems arise when the government tries to move fast and throws a lot of money at a problem, which sounds uncomfortably like the meaningful use payments.

    Three companies were chosen for the best health privacy technologies of 2012:

    Trend Micro wins technology award
    Trend Micro wins technology award.

    • Jericho Systems captures patient consents and translates them into technological controls. A patient can see in his PHR who is making a request for his data, for instance.

    • Trend Micro's Deep Security incorporates the standard security protections for a networked environment (virus scanner, firewall, file integrity checker, etc.) into a cloud solution. Thus, even if the server is breached, the system may be able to prevent data from being extracted.

    • ID Experts' RADAR offers response services to breaches.

  • Segmented data, which means the ability to share certain specific information while hiding other, more sensitive information, came up several times. The field is nowhere near ready, technically or organizationally, to support something like sharing information about your broken arm while hiding your psychiatric records. But several institutions are working on standards.

  • Several panelists called for privacy by default: it isn't fair to present a complex document to a patient and expect her to understand all the implications (which no one can do anyway). Maneesha Mithal reported a policy at the Federal Trade Commission that the most important privacy impacts must be highlighted, not buried in an inscrutable policy. Information technology researcher Andrew Dillon suggested that, instead of educating patients about the awful forms they sign, we should improve the forms (and by implication, the policies they define).

  • A couple doctors spoke up to say that they felt uneasy entering information into records (particularly psychiatric information) because they didn't know who would end up seeing it.

  • A lot of discussion covered who should explain privacy policies to the patient. Handing them a form at the start of a visit is not an effective way to get meaningful consent. Some said the doctor herself should ideally explain the privacy implications of the visit, although this eats into the severely restricted time that the doctor has with the patient.

  • Two speakers--EPIC representative Lillie Coney and re-identification expert Daniel Barth-Jones--reported that, luckily, it's quite hard to re-identify patient data that has been de-identified for the purposes of research and public health. Barth-Jones doubted that anyone has performed any actual re-identifications, other than researchers proving that re-identification is theoretically possible.

  • Ann Freeman Cook pointed out that people often agree to share data, tissues, and other samples with researchers in order to get free care. Therefore, the poor and uninsured are more likely to relinquish privacy safeguards. And these samples are kept for a long time, so it's impossible to know how they'll be used.

  • The ONC's Standards & Interoperability Framework got contrasting reviews. On the one hand, it is hard to understand because it refers to so many technologies and standards. On the other hand, these references root it firmly in state-of-the-art practices and make implementation feasible.


Last week's series of conferences in Washington--of which I attended maybe half--was the most intense concentration I've seen of health care events. A few people got to bounce around and experience everything. Only that elite tends to put in the research to really understand all the facets of patient engagement, data sharing, application development, business opportunities, privacy issues, and points of leverage for the institutional change that will really improve our health care system and lower costs. I think that most providers, administrators, and researchers stumble along with good intentions but without a full vision.

We can fix our health care systems if we educate doctors and patients to work together; create teams that have incentives to deliver the best care; open up data about the health care industry; incorporate low-cost devices into patient-centered medical homes; and incorporate the best research into clinical decision support. I'm sure readers could suggest other related elements of a solution. A crucial background role will be played by technological improvements and standards. All this is extremely hard to explain in a single coherent vision, although numerous books about radical reform to the health care system have come out over the past couple years. Those with expertise in a particular area of technology or organizational development must do their best to educate themselves with the wider vision, and then act locally to make it happen.

June 12 2012

Data in use from public health to personal fitness

Back in 2010, the first health data initiative forum by the Dept. of Health and Human Services introduced the public to the idea of an agency releasing internal data in forms easy for both casual viewers and programmers to use. The third such forum, which took place last week in Washington, DC, was so enormous (1,400 participants) that it had to be held in a major convention center. Todd Park, who as CTO made HHS a leader in the open data movement, has moved up to take a corresponding role for the entire federal government. Open data is a world movement, and the developer challenges that the HDI forum likes to highlight are standard strategies for linking governments with app programmers.

Todd Park on main stage
Todd Park on main stage.

Following my attendance at a privacy access summit the previous day, the HDI forum made me think of a government bent on reform and an open-minded public crossing hands over the heads of the hidebound health institutions that blunder onward without the benefits of tapping their own data. I am not tossing all hospitals, doctors, and clinics into this category (in fact, I am constantly talking to institutions that work with available data to improve care), but the way information is recorded and stored in health care generally retards the efforts of anyone interested in change.

The "datapalooza" was already covered on Radar by Alex Howard, so here I'll list some of the observations I made during the parts I attended.

Health and Human Services chooses torrents over leaks

Able to attend the forum only on the first day, I spent a lot of it in a session on HHS data sets, because I wanted to know exactly what the department has to offer and how the data is being used.

HHS staff at break-out session
HHS staff at break-out session.

Several things impressed me about the procession of HHS staff that crossed the stage to give five- or ten-minute presentations on data sets. First was the ethos of data sharing that the department heads have instilled. Each staff person showed visible pride in finding data that could be put on the Web. A bit of competitive spirit drives different departments that may have more or fewer resources, and data that comes naturally in a more structured or less structured form. One person, for instance, said, "We're a small division and don't have the resources of the others, but we managed to release several data sets this year and one has an API."

Second, the department is devoting resources to quality. I've heard several complaints in the field about lack of consistency and other problems in public health data. One could hardly avoid such issues when data is being collected from hundreds of agencies scattered across the country. But the people I talked to at the HHS forum had ways of dealing with it, such as by requiring the researchers who collect data to submit it (so that trained professionals do the data entry), and running it through quality checks to look for anomalies.

Third, the department knows that outside developers coming to their site will need extra help understanding the data being collected: what the samples represent, what the scope of collection was, and so forth. In addition to a catalog powered by a Solr search engine, HHS provides direct guidance to the perplexed for those developing apps. They are also adding Linked Data elements to help developers combine data sets.

A few examples of data sets include:

  • The Center for Medicare & Medicaid Services offers aggregate data on emergency visits, hospital readmission rates (a major source of waste in health costs), and performance measurement.

  • The Administration for Children and Families has a Head Start locator that helps parents find services, aggregate data on people who apply for Low Income Home Energy Assistance, etc.

  • The Agency for Healthcare Research and Quality has longitudinal data about spending on health care and its effect on outcomes, based on an annual survey, plus a service offering statistics on hospital treatments, morbidity, etc.

  • The Assistant Secretary for Planning and Evaluation tracks workforce development, particularly in health IT, and measures the affordability of health care reflected in costs to employers, patients, and the government.

Recently, HHS has intensified its efforts by creating a simple Web interface where its staff can enter data about new data sets. Data can be uploaded automatically from spreadsheets. And a new Data Access and Use Committee identifies data sets to release.

So now we have public health aids like the Community Indicators Data Portal, which maps the use of Medicaid services to poverty indicators, infant mortality, etc.

HealthMap, created by Children's Hospital Boston, is used by a fascinating range of projects. They scoop in huge amounts of data--mostly from news sites, but also blogs, and social networks--in multiple languages around the world, and apply a Bayesian filter to determine what's a possible report of a recent disease outbreak. After a successful flu-tracking program based on accepting reports from the public, they did a dengue-tracking program and, in Haiti, a cholera-tracking program.
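HealthMap's actual pipeline is far more sophisticated, but the core idea of a Bayesian filter over news text can be sketched with a toy naive Bayes classifier. Everything here — the training snippets, the tokenization, the class labels — is invented for illustration.

```python
import math
from collections import Counter

class NaiveBayesOutbreakFilter:
    """Toy naive Bayes text filter in the spirit of HealthMap's approach:
    score whether a news snippet likely reports a disease outbreak."""

    def __init__(self):
        self.counts = {True: Counter(), False: Counter()}  # word counts per class
        self.docs = {True: 0, False: 0}                    # documents per class

    def train(self, text, is_outbreak):
        self.docs[is_outbreak] += 1
        self.counts[is_outbreak].update(text.lower().split())

    def score(self, text):
        """Log-odds that the text is an outbreak report (positive = likely)."""
        total = {c: sum(self.counts[c].values()) for c in (True, False)}
        vocab = len(set(self.counts[True]) | set(self.counts[False])) or 1
        logodds = math.log((self.docs[True] + 1) / (self.docs[False] + 1))
        for w in text.lower().split():
            # Laplace-smoothed per-class word probabilities
            p_t = (self.counts[True][w] + 1) / (total[True] + vocab)
            p_f = (self.counts[False][w] + 1) / (total[False] + vocab)
            logodds += math.log(p_t / p_f)
        return logodds

f = NaiveBayesOutbreakFilter()
f.train("cholera outbreak reported in port city", True)
f.train("officials confirm new dengue cases", True)
f.train("city council debates school budget", False)
```

With only three training snippets this is a cartoon, but it shows why the approach scales to multilingual news streams: the filter learns from labeled examples rather than hand-written keyword rules.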

But valuable as HHS data is to public health, most of it is not very sexy to the ordinary patient or consumer. If you're curious how your Medicare charges compare with average payments for your county, go ahead and mine the data. But what about something immediately practical, such as finding the best hospital for a procedure?

Recently, it turns out, HHS has been collecting and releasing data on that level, such as comparative information on the quality of care at hospitals. So a datapalooza like the HDI forum really takes on everyday significance. HHS also provides a consumer-facing site with services such as finding insurance plans for individuals and small groups.

Other jurisdictions are joining the health data movement. Many countries have more centralized systems and therefore can release large amounts of data about public health. The United Kingdom's National Health Service was featured at the HDI forum, where they boasted of posting 3,000 health indicators to their web site.

The state of Louisiana showed off a cornucopia of data, ranging from user restaurant ratings to ratings of oyster beds. Pregnancy risk factors, morbidity rates, etc. are broken down by race, sex, and other demographics. The representative freely admitted that the state has big health problems, and urgently called on developers to help it mine its data. The state recently held a "Cajun codefest" to kick off its effort. HHS also announced five upcoming local datapaloozas in other states around the U.S.

I talked to Sunnie Southern, a cofounder of a Cincinnati incubator called Innov8 for Health. They offer not only challenges for new apps, but guidance to help developers turn the apps into sustainable businesses. The organization also signs up local hospitals and other institutional users to guarantee a market to app developers. Southern describes Innov8 for Health as a community-wide initiative to support local developers and attract new ones, while maintaining deep roots among multiple stakeholders: health care providers, universities, startups, investors, and employers. In the inaugural class, which just took place, eight companies were chosen to receive intensive mentoring, introductions and connections to potential customers and investors, and $20,000 to start their companies in 12 weeks. Health data is a core element.

How far can a datapalooza take the health care field?

Health apps are a fast-growing segment of mobile development, and the government can certainly take some of the credit, along with VC and developer recognition that there's a lot of potential money to be made fixing health care. As Todd Park said, "The health innovation ecosystem is beautifully chaotic, self-propelled, and basically out of control." That means the toothpaste can't be put back in the tube, which is a good thing.

The HDI forum is glitzy and exciting--everybody in health care reform shows up, and the stage show is slickly coordinated--but we must remember the limits of apps in bringing about systemic change. It's great that you can use myDrugCo$ to find a discount drug store near you. Even better, if your employer hooks you up to data sets provided by your insurer, myDrugCo$ can warn you about restrictions that affect costs. But none of this will change the crazy pricing in the insurance plans themselves, or the overuse of drugs in medicine, or the inefficient development and testing methods that lead to high medication prices in the first place.

Caucus of Society for Participatory Medicine and friends
Caucus of Society for Participatory Medicine and friends.

Transparency by one department on one level can lead to expectations of transparency in other places too. As pricing in health care becomes more visible, it will become less defensible. But this requires a public movement. We could do great things if we could unlock the data collected by each hospital and insurance agency, but they see that data as their competitive arsenal and we are left with a tragedy of the anti-commons. It would be nice to say, "You use plenty of public data to aid your decision-making, now reciprocate with some of your own." This can be a campaign for reformers such as the Society for Participatory Medicine.

At the HDI forum, United Healthcare reported that it had enough data to profile patients at risk for diabetes and bring them in for a diabetes prevention program. This is only a sample of what can be done with data that is not yet public.

Aetna presenter shows CarePass on the main conference stage.

Aetna is leading the way with a service called CarePass, currently holding a developer challenge. CarePass offers Aetna's data through an API, and they partner with other major data centers (somewhat as Microsoft does with HealthVault) to hook up data. Practice Fusion is also offering some data to researchers.

Even the bright-faced entrepreneurs launching businesses around data from HHS and elsewhere give me pause. Certainly their success is one of the goals of the open data movement, but I worry that they will recreate the silos of the health care field in the area of patient data. What are they collecting on us as we obsessively enter our personal statistics into those devices? Who will be able to use the aggregate data building up on their servers?

So there are hints of a qualitative change that can come from quantitative growth in the release and reuse of health care data. The next step involves the use of personal data, which raises its own litany of issues in quality and privacy. That will be the subject of the last posting in this series.

June 11 2012

Health reform leaders focus on patient access to records as key barrier

A convocation of trend-setters and organizational leaders in U.S. health care was called together in Washington last Monday, June 4. The attendees advised two government organizations driving health reform--the Office of the National Coordinator at the Dept. of Health and Human Services, and the Dept. of Veterans Affairs--on how to push forward one of their top goals, patient engagement.

The results of the meeting, to me, demonstrated mostly the primitive state of communications and coordinated care in the U.S. health system. In an earlier posting I discussed the sorry state of health data exchange, and Monday's patient access summit centered on the same factors of siloing and data hoarding as barriers to patient engagement.

Farzad Mostashari, the National Coordinator for Health Information Technology, tried to set the scope of the meeting as an incubator to suggest practical ways patients could use the data they get from health providers. (As I'll explain later, we also touched on data patients generate themselves.) His reasoning, which I endorse, is that patients currently can't do much with data except keep it somewhere and pass it to other health providers, so in order to engage them we need to provide tools for them to improve their health with this data.

But the pulse of the 75 or so attendees gave quite a different message: that we're nowhere near ready to discuss uses of data, and that our efforts at patient engagement should start with getting the data to the patients in the first place.

Several attendees have already blogged about various aspects of the meeting:

  • Brian Ahier summarizes the purpose and outcomes.

  • Dave Chase urges the government to create an environment that encourages the release of data to the patient.

  • Keith Boone focuses on some interesting statements and ideas aired at the meeting.

In this posting, I'll discuss:

  • Why patient access is so important, and why it doesn't happen
  • The major topics of debate at the summit
  • The three action items that came out of the meeting

Why patient access is so important, and why it doesn't happen

The notions of patients poring over doctors' notes, correlating their own test results, and making demands on their care providers may carry a faint whiff of utopianism, but thousands of patients do these things every day--and do them even when deprived of the electronic aids that could make these activities natural. The people in the room for the patient access summit were by no means utopians. They are intense movers in the health care field with deadlines to meet and budgets to allocate. So when they call for patient access to data, it's because they all see it as critical to solving the quality and cost problems their own organizations face.

Patient engagement is critical because most health care takes place outside the doctor's office or operating room. Patients need to take control of their own lifestyles for the problems that put a lot of strain on our health care system, such as obesity. They need to follow through on post-release instructions and monitor themselves for symptoms.

And in the siloed state of today's health system, the patients need to make sure their data gets to health providers. We heard over and over at the patient access summit how patients have entered treatment centers without the information needed to treat them, how doctors would refuse point-blank (in violation of the law) to give patients their folders, and how patients received inadequate care because of the lack of information.

Patient participation in health care is not only good for the individuals who do it, but is also crucial for prying open the system as a whole. The providers, vendors, and insurers are moving too slowly. Their standards and electronic health records lack fields for all the data people are generating through their Fitbits and Zeos, and they don't have pathways for continuously uploading patient-generated data. This lapse can be turned into a plus: device manufacturers and programmers out in the field will develop new, more flexible, more robust standards that will become the next generation of EHRs and personal health records. A strong push from empowered patients can really change the way doctors work, and the associated costs.

Major topics of debate

Opinions differ about the roles of electronic records, interchange systems, culture, and business models in the recalcitrance of doctors to release patient data, which I'll discuss in the last section of the article. Getting the answers to these questions right should determine the strategy government and consumers use to breach the silos. But the consensus at the patient access summit was that we need to pursue these strategies fast, and that the fate of the rest of health care reform will rest on our success.

The first half of the Washington meeting meandered through various classic areas under constant debate in the health care field. This seemed necessary so that the participants in the summit could feel each other out, untangle some of their differences and ultimately come to a position of trust so they could agree on the topics in the previous section. I noted the following topics that threaded through the debate without resolution.

Technology versus culture

Whenever organizational change is on the agenda, debates come up about the importance of the technologies people use versus their workflows, attitudes, and willingness to change. I find the discussions silly because people usually find themselves pushed to an either-or position, and that just doesn't make sense. Of course technology can facilitate change, and of course the technology will be a big waste of time and money if the human participants fail to understand the behavior changes they need to make along the way.

But the Washington attendees raised these issues as part of the strategy-setting I mentioned earlier. Certainly, the government would prefer to avoid creating or mandating the use of certain technologies. The question is whether the ONC and VA can set goals and leave it up to the market to find the way.

Sometimes the health care field is so distorted and dysfunctional that the government feels it has to step in, such as when HHS created CONNECT and then Direct. Without these, the health care providers and health information exchanges (HIEs) would claim that exchanging patient data was an expensive or intractable problem. One might also interpret the release of VistA and BlueButton to the general public as the VA's statements about how health care should be conducted.

So Mostashari's original call for actions that patients could take fits into the technology end of the debate. By suggesting technological paths forward, we can effect cultural change. For instance, if a patient uses an app or web site to view all the potential reactions between the drugs she takes (and I heard one estimate this week that people in their 80s take between five and eight medications), she can warn her own doctor about an adverse reaction.

Ultimately, the working groups that today's meeting settled on included a lot of technological innovation.

The need for standards

Standard setting is another perennial area for disagreement, because premature standard-setting, like premature optimization, can have an effect opposite to what you want. If we took all the effort that companies have put into standards that bombed in the marketplace and devoted those resources over the decades to competition between innovations, we might have an explosion of new technologies. So even if you accept the value of technology to effect culture change, you can ask where and when governments and standards committees can intervene positively.

And this caution applies to health care too. The old guard of EHRs and HIEs suffers from a lack of (useful) standards. But as I mentioned earlier, an exciting explosion of patient-centered apps and devices is developing in the absence of standards. The Washington meeting ended up endorsing many standard-setting efforts, although these applied mostly to mature fields such as EHRs.

Transfer standards versus data format standards

Mixed up in the debate over the timing of standards was a distinction between standards used for sending data around and standards used to represent the data. The former are called protocols in the communications field. HTTP is a transfer standard, for instance, whereas HTML is a data format standard. Both are needed to make the World Wide Web operate. And both ended up part of the action items from the patient access summit.
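The web analogy is worth making concrete. In the sketch below (plain Python strings, no real network traffic), the HTTP headers play the role of the transfer standard and the HTML payload plays the role of the data format; health data exchange needs both layers in the same way--something like Direct to move the record, and a document format to express it.

```python
# Illustrative only: a transfer standard (HTTP) wraps a payload written in
# a data format standard (HTML). The two layers are independent.
html_body = "<html><body><p>Lab result: A1C 5.6%</p></body></html>"

http_response = (
    "HTTP/1.1 200 OK\r\n"                         # transfer standard: status line
    "Content-Type: text/html; charset=utf-8\r\n"  # transfer standard: headers
    f"Content-Length: {len(html_body)}\r\n"
    "\r\n"
    + html_body                                   # data format standard: payload
)

# A receiver peels the layers apart: the headers describe how the message
# moved, and the body is the document itself.
headers, _, body = http_response.partition("\r\n\r\n")
```

Swapping either layer independently--carrying the same HTML over a different protocol, or a different document format over HTTP--is exactly why the two kinds of standards get debated separately.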

Privacy versus data availability

As I reported from the first health privacy conference, health care advocates argue over the importance of privacy. At the patient access summit, everybody who spoke on this topic prioritized the exchange of data. Privacy concerns are the magic amulet that providers wave at patients to ward off their requests for data. But in fact, the much-derided Health Insurance Portability and Accountability Act (HIPAA) requires providers to give patients data: that's what the terms Portability and Accountability in the name refer to. The providers are required to take reasonable steps to preserve privacy--and the Direct project aims to simplify these--but the patient can waive even these modest safeguards if he or she is anxious to get the data quickly.

Given our skepticism toward claims of security concerns, a bit of security theater we encountered as we entered the conference center is illustrative. We were warned ahead of time that the facility was secure and told to bring a government-issued photo ID. Indeed, the guard checked my ID and looked at my face when I entered, but nobody checked my name against a list to see whether I was actually supposed to be there.

A later article in this series will explore the relationships between privacy, security, patient access, accuracy, and accountability that create a philosophy of control.

Motivations for doctors versus patients

Another topic at the patient access summit that reflected a dilemma in the health care field is how much effort to aim at the doctors versus the patients, when trying to change the behavior of both. Many patients try to engage as adults in their own care and are stymied by resistant doctors. And as I pointed out in an earlier posting, the patients who need the most lifestyle changes ignore their own perilous conditions. So these considerations would suggest focusing on motivations for doctors to change.

But a market approach would suggest that, when enough patients want to have a say in their care, and have the means to choose their doctors, change will reach the examination rooms. The conclusions of the patient access summit did not reflect any particular position along this spectrum. Participants pointed out, however, that institutions such as Kaiser Permanente that wanted patients to use their portals invested a lot in advertising them.

Pushing versus pulling data

Telephone calls, email, and online chats are push technologies, in that the person sending them decides when (approximately) they are delivered. The web is a pull technology, because the recipient visits the site at his or her choosing. In health data exchange, one doctor may push a patient's records to the next provider, or the next provider can pull them when the patient is due to arrive. Though sometimes articulated unhelpfully as a battle of push versus pull, our discussion revealed that each has its uses.

The issue is especially salient when a patient has records stored by multiple institutions. Currently, a patient can pull records from each and (if they use a common format such as BlueButton) combine them. In fact, a mobile app named iBlueButton allows a patient to show data from providers to a doctor during a visit. But it would be much better for each institution to push information to the patient as it's added to the institution's record. This would bring us closer to the ideal situation where records are stored by a site on behalf of the patient, not the doctor.

Three action items from today's meeting

Now we get to the meat of the summit. Leaders asked participants to define areas for research and to make commitments to incorporate the results of the research teams into their products and activities. Three action items were chosen, and two were excluded from consideration at this round.

Automated downloads

A number of organizations, such as Aetna Health Plans, have adopted the BlueButton format created at the VA. In the line-up of data formats available for storing health information, BlueButton is shockingly casual. But its list of plain-text fields is easy to read and unfrightening for patients. It is also undeniably popular: the number of VA patients downloading their data approaches one million. So the immediate impetus for the first goal of the patient access summit, dubbed "automating BlueButton," is to keep patients' records up to date and integrated by pushing data to them from institutional EHRs.

But BlueButton can be massaged into other formats that are easier for programs to manipulate, so the "automating BlueButton" task really refers to the entire movement to empower patients who want control over their records. One way to state the principle is that every action in a hospital's or doctor's EHR will be accompanied by an update to the patient's copy of the data. Hopefully this movement will soon lead to simple but program-friendly XML formats, robust transfer standards such as Direct, and universal integration of hospital and clinic EHRs with patient health records.
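Because the format is just labeled plain text, "massaging" it into a program-friendly structure takes only a few lines. The sketch below is hypothetical: the section and field names are invented for illustration and do not reflect the actual VA BlueButton schema.

```python
import json

# Hypothetical BlueButton-style record; the labels here are invented for
# illustration and do not reflect the real VA download layout.
RECORD = """\
-------- MEDICATIONS --------
Medication: Lisinopril
Dosage: 10 mg daily
Medication: Metformin
Dosage: 500 mg twice daily
"""

def parse_fields(text):
    """Turn 'Label: value' lines into (label, value) pairs."""
    pairs = []
    for line in text.splitlines():
        if line.startswith("---") or ":" not in line:
            continue  # skip section rules and blank lines
        label, _, value = line.partition(":")
        pairs.append((label.strip(), value.strip()))
    return pairs

pairs = parse_fields(RECORD)
as_json = json.dumps(pairs)  # now trivially machine-readable
```

From here the same pairs could be re-emitted as XML or fed to any app, which is all the "program-friendly" goal really asks for.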

Identification and access technologies

Congress has ruled out a single nation-wide ID for patients, thanks to worries from privacy advocates that the system could facilitate identity theft and commercial data mining. Some have proposed a Voluntary Universal Healthcare Identifier (VUHID), but that's encumbered with the same problems. Identification systems used nowadays for HIE are cumbersome and error-prone, and revolve around cooperating health care institutions rather than individual patients with few resources. Individual hospitals can verify patients' email addresses and passwords when they come in for treatment, but in-person authentication doesn't scale to data exchange.

A more rational solution revolves around certificates and digital signatures, which security-conscious institutions in government and industry have used for years. The approach has gotten a bit of a bad rep because it has been poorly implemented on the Web (where browsers trust too many certificate authorities, and system administrators fail to keep certificates accurate), but the health care system is quite capable of implementing it properly. The Direct Trust project is creating a set of practices and hopefully will stimulate the industry to create such a system. In fact, I think Direct Trust is already addressing the issues listed under this task. OAuth and the National Strategy for Trusted Identities in Cyberspace were also mentioned repeatedly at the summit.

The questions of identifying oneself and of authorizing access to data are linked, so they were combined in a single working group even though they are somewhat distinct technically.

Standards for content

The final task approved at the patient access summit was to work further on data standards. It was late in the day and the task was defined only in a very broad manner. But I think it's an important leg of the patient access stool because current standards for patient data, such as HL7's CDA, were meant for communicating the results of clinical interventions. They'll be hard to use when patients generate and store their own data, both because they lack the appropriate fields and because they aren't designed for continuous uploads of data. Segmented access (allowing providers to see certain records while withholding records that the patient considers sensitive) was also mentioned.

Patient-generated data

I mentioned at the summit that patients are starting to generate data that could be invaluable in their treatment, and that the possession of this data gives them leverage. Doctors who are serious about treating common chronic issues such as hypertension, or any condition that can be improved through careful monitoring, will want the patient data. And patients can use their leverage to open up doctors' EHRs. As patients get more involved in their care, the very term "provider" (meaning a doctor or other professional who provides diagnosis and treatment) will become obsolete. Patients will be co-providers along with their professional team.

Patient-generated data got some attention during the day, but the attendees concluded that not enough time had been spent on it to turn it into an action item.


Privacy

The final issue on the agenda for the day was privacy. I estimate that we spent a full half-hour on it at one point, in addition to which it came up at other times. Because I am covering privacy in the third article of this series, I'll simply say here that the attendees were most concerned with removing privacy as an excuse for blocking data exchange, and did not treat risks to privacy as a problem to be fixed.

What did the patient access summit accomplish?

I'm proud that the ONC and VA created a major discussion forum for patient access. I think the issues that came up were familiar to all participants in the meeting, and that ONC together with industry partners is already moving forward on them. The summit provided affirmation that the health care field as a whole takes the issues seriously, and the commitments that will arise from the meeting will lend more weight to government efforts.

And a lot of the time, knowledgeable patients need to know that progressive health care leaders and the government have "got their back" as they demand their right to know what's going on in their bodies. The Office for Civil Rights has publicly championed patients' right to their data (in fact, the biggest fine it has levied for a HIPAA violation concerned a refusal to release data to a patient), and the initiatives we all supported last Monday will give it more tools to use.

Regulations can make a difference. A representative from Practice Fusion told me they offered a patient download option on their EHR service years ago, but that most doctors refused to allow it. After the ONC's meaningful use regulations required patient access, adoption by doctors went up 600%.

Having laid the groundwork for patient access, we are ready to look forward to the wonderful things patients and providers can do with data. That will be the subject of my next article in the series, which will cover the health data initiative forum I attended the next day.

June 08 2012

mHealth apps are just the beginning of the disruption in healthcare from open health data

Two years ago, the potential of government making health information as useful as weather data felt like an abstraction. Healthcare data could give citizens a "blue dot" for navigating health and illness, akin to the one GPS data fuels on the glowing maps of the geolocated mobile devices in more and more hands.

After all, profound changes in entire industries take years, even generations, to occur. In government, the pace of progress can feel even slower, measured in evolutionary time and epochs.

Sometimes, history works differently, particularly given the effect of rapid technological changes. It's only a little more than a decade since President Clinton announced he would unscramble global positioning system (GPS) data for civilian use. Todd Park, President Obama's second U.S. chief technology officer, has estimated that GPS data has unlocked some $90 billion in value in the United States.

In that context, the arc of the Health Data Initiative (HDI) in the United States might leave some jaded observers with whiplash. From a small beginning, the initiative to put health data to work has expanded around the United States and attracted great interest from abroad, including observers from England's National Health Service eager to understand what strategies have unlocked innovation around public data sets.

While the potential of government health data driving innovation may well have felt like an abstraction to many observers, in June 2012, real health apps and services are here -- and their potential to change how society accesses health information, delivers care, lowers costs, connects patients to one another, creates jobs, empowers caregivers and cuts fraud is profound. The venture capital community seems to have noticed the opportunity: according to HHS Secretary Sebelius, investment in healthcare startups is up 60% since 2009.

Headlines about rockstar Bon Jovi 'rocking Datapalooza' and the smorgasbord of health apps on display, however, while both understandable and largely warranted, don't convey the deeper undercurrent of change.

On March 10, 2010, the initiative started with 36 people brainstorming in a room. On June 2, 2010, approximately 325 in-person attendees saw 7 health apps demoed at an historic forum in the theater of the Institute of Medicine in Washington, D.C., with another 10 apps packed into an expo in the rotunda outside. All of the apps or services used open government data from the United States Department of Health and Human Services (HHS).

In 2012, 242 applications or services based upon or using open data were submitted for consideration to the third annual Health Datapalooza. About 70 health app exhibitors made it to the expo. The conference itself had some 1,400 registered attendees, not counting press and staff, and sold out in advance of the event in the cavernous Washington Convention Center in DC. On Wednesday, I asked Dr. Bob Kocher, now of Venrock and the Brookings Institution, how the Health Data Initiative has grown and evolved. Dr. Kocher was instrumental in its founding when he served in the Obama administration. Our interview is embedded below:

Revolutionizing the healthcare industry -- in HHS Secretary Sebelius's words, reformulating Wired executive editor Thomas Goetz's "latent data" into "lazy data" -- has meant years of work unlocking government data and actively engaging the developer, entrepreneurial and venture capital communities. While the process of making health data open and machine-readable is far from done, there has been incontrovertible progress in standing up new application programming interfaces (APIs) that enable entrepreneurs, academic institutions and government itself to retrieve it on demand. On Monday, in concert with the Health Datapalooza, a new version of the HHS site launched, including the release of new data sets that enable not just hospital quality comparisons but insurance fee comparisons as well.

Two years later, the blossoming of the HDI Forum into a massive conference that attracted the interest of the media, venture capitalists and entrepreneurs from around the nation is a short-term development that few people would have predicted in 2010 -- but a welcome one for a nation starved for solutions to spiraling healthcare costs and for some action from a federal government that all too frequently looks broken.

"The immense fiscal pressure driving 'innovation' in the health context actually means belated leveraging of data insights other industries take for granted from customer databases," said Chuck Curran, executive director and general counsel of the Network Advertising Initiative, when interviewed at this year's HDI Forum. For example, he suggested, look at "the dashboarding of latent/lazy data on community health, combined with geographic visualizations, to enable 'hotspot'-focused interventions, or service plan information like the new HHS interface for insurance plan data (including the API)."

Curran also highlighted the role that fiscal pressure is playing in making both individual payers and employers a natural source of business funding and adoption for entrepreneurs innovating with health data, with apps like My Drugs Costs holding the potential to help citizens and businesses alike cut down on an estimated $95 billion in annual unnecessary spending on pharmaceuticals.

Curran said that health app providers have fully internalized smart disclosure: "it's not enough to have open data available for specialist analysis -- there must be simplified interfaces for actionable insights and patient ownership of the care plan."

For entrepreneurs eyeing the healthcare industry and established players within it, the 2012 Health Datapalooza offered an excellent opportunity to "take the pulse of mHealth," as Jody Ranck wrote at GigaOm this week:

Roughly 95 percent of the potential entrepreneur pool doesn’t know that these vast stores of data exist, so the HHS is working to increase awareness through the Health Data Initiative. The results have been astounding. Numerous companies, including Google and Microsoft, have held health-data code-a-thons and Health 2.0 developer challenges. These have produced applications in a fraction of the time it has historically taken. Applications for understanding and managing chronic diseases, finding the best healthcare provider, locating clinical trials and helping doctors find the best specialist for a given condition have been built based on the open data available through the initiative.

In addition to the Health Datapalooza, the Health Data Initiative hosts other events which have spawned more health innovators. RockHealth, a Health 2.0 incubator, launched at its SXSW 2011 White House Startup America Roundtable. In the wake of these successful events, StartUp Health, a network of health startup incubators, entrepreneurs and investors, was created. The organization is focused on building a robust ecosystem that can support entrepreneurs in the health and wellness space.

This health data ecosystem has now spread around the United States, from Silicon Valley to New York to Louisiana. During this year's Health Datapalooza, I spoke with Ramesh Kolluru, a technologist who works at the University of Louisiana, about his work on a hackathon in Louisiana, the "Cajun Codefest," and his impressions of the forum in Washington:

One story that stood out from this year's crop of health data apps was Symcat, an mHealth app that enables people to look up their symptoms and find nearby hospitals and clinics. The application was developed by two medical students at Johns Hopkins University who happened to share a passion for tinkering, engineering and healthcare. They put their passion to work--and somehow found the time (remember, they're in medical school) to build a beautiful, usable health app. The pair landed a $100,000 prize from the Robert Wood Johnson Foundation for their efforts. In the video embedded below, I interview Craig Munsen, one of the medical students, about his application. (Notably, the pair intends to use the prize to invest in the business, not pay off medical school debt.)

There are more notable applications and services to profile from this year's expo--and in the weeks ahead, expect to see some of them here on Radar. For now, it's important to recognize the work of all of the men and women who have worked so hard over the past two years to create public good from public data.

Releasing open health data and making it useful, however, is about far more than these mHealth apps: it's about saving lives, improving the quality of care, adding more transparency to a system that needs it, and creating jobs. Park spoke with me this spring about how open data relates to much more than consumer-facing mHealth apps:

As the US CTO seeks to scale open data across federal government by applying the lessons learned in the health data initiative, look for more industries to receive digital fuel for innovation, from energy to education to transit and finance. The White House digital government strategy explicitly embraces releasing open data in APIs to enable more accountability, civic utility and economic value creation.

While major challenges lie ahead, from data quality to security or privacy, the opportunity to extend the data revolution in healthcare to other industries looks more tangible now than it has in years past.

Business publications, including the Wall Street Journal, have woken up to the disruptive potential of open government data. As Michael Hickins wrote this week, "The potential applications for data from agencies as disparate as the Department of Transportation and Department of Labor are endless, and will affect businesses in every industry imaginable. Including yours. But if you can think of how that data could let someone disrupt your business, you can stop that from happening by getting there first."

This growing health data movement is not based in any single city, state, agency or company. It's beautifully chaotic, decentralized, and self-propelled, said Park this past week.

"The Health Data Initiative is no longer a government initiative," he said. "It's an American one."

June 06 2012

Who owns patient data?

Who owns a patient's health information?

  • The patient to whom it refers?
  • The health provider that created it?
  • The IT specialist who has the greatest control over it?

The notion of ownership is inadequate for health information. For instance, no one has an absolute right to destroy health information. But we all understand what it means to own an automobile: You can drive the car you own into a tree or into the ocean if you want to. No one has the legal right to do things like that to a "master copy" of health information.

All of the groups above have a complex series of rights and responsibilities relating to health information that should never be trivialized into ownership.

Raising the question of ownership at all is a hash argument. What is a hash argument? Here's how Julian Sanchez describes it:

"Come to think of it, there's a certain class of rhetoric I'm going to call the 'one-way hash' argument. Most modern cryptographic systems in wide use are based on a certain mathematical asymmetry: You can multiply a couple of large prime numbers much (much, much, much, much) more quickly than you can factor the product back into primes. A one-way hash is a kind of 'fingerprint' for messages based on the same mathematical idea: It's really easy to run the algorithm in one direction, but much harder and more time consuming to undo. Certain bad arguments work the same way — skim online debates between biologists and earnest ID (Intelligent Design) aficionados armed with talking points if you want a few examples: The talking point on one side is just complex enough that it's both intelligible — even somewhat intuitive — to the layman and sounds as though it might qualify as some kind of insight ... The rebuttal, by contrast, may require explaining a whole series of preliminary concepts before it's really possible to explain why the talking point is wrong."

The question "Who owns the data?" presumes that the notion of ownership is valid, and it jettisons those foolish enough to try to answer the question into a needless circular debate. Once you mistakenly assume that the question is answerable, you cannot help but back an unintelligible position.
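Sanchez's cryptographic metaphor is easy to demonstrate with Python's standard library: computing a hash is one cheap call, while going backward degenerates into brute-force search. (The four-digit PIN below is an invented toy input space, chosen only to keep the search small enough to run.)

```python
import hashlib

# Forward direction: one cheap, instant call.
digest = hashlib.sha256(b"Who owns the data?").hexdigest()

# Backward direction: even knowing the input is a four-digit PIN, recovery
# means trying candidates one by one until a hash matches -- thousands of
# hash computations instead of one.
target = hashlib.sha256(b"7919").hexdigest()
attempts = 0
for pin in range(10000):
    attempts += 1
    if hashlib.sha256(str(pin).encode()).hexdigest() == target:
        break
```

That asymmetry--trivial one way, laborious the other--is exactly the shape of the rhetorical "one-way hash" argument: cheap to assert, expensive to rebut.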

Ownership is a poor starting point for health data because the concept itself doesn't map well to the people and organizations that have relationships with that data. The following chart shows what's possible depending on a given role.

Sourcing provider

  • Delete their copy of the data: No. HIPAA mandates that the provider who creates HIPAA-covered data must ensure that a copy of the record remains available. Mere deletion is not a privilege that providers have with their copies of patient records, and most EHR systems enforce this rule.

  • Arbitrarily (without logs) edit their copy of the data: No. While providers can change the contents of the EHR, they are not allowed to do so without a log of those changes being maintained. Many EHRs have a concept of "signing" EHR data, which means the data has entered a state where it can no longer be changed without logging.

  • Correct the provider's copy of the data: Yes. Providers can correct their copy of the EHR data, provided they maintain a copy of the incorrect version. Again, EHR software enforces this rule.

  • Append to the provider's copy of the data: Yes. Providers can add to the data without changing the "correctness" of previous entries. EHR systems should handle this case seamlessly.

  • Acquire copies of HIPAA-covered data: Sometimes. Depending on the ongoing "treatment" status of the patient, providers typically have the right to acquire copies of treatment data from other treating providers. If they are "fired," they can lose this right.

Patient

  • Delete their copy of the data: Yes. Patients can delete their own copies of their patient records, but requests to providers that their charts be deleted will be denied.

  • Arbitrarily (without logs) edit their copy of the data: No. Patients cannot change the "canonical" version of a patient record.

  • Correct the provider's copy of the data: No. While patients have the right to comment on and amend the file, they can merely suggest that the "canonical" version of the patient record be updated.

  • Append to the provider's copy of the data: Yes. The patient has the right to append to EHR records under HIPAA. HIPAA does not require that the amendment change the "canonical" version of the record, but the addition must be present somewhere, and providers who fail to act in a clinically responsible manner on amended data likely face substantial civil liability. The relationship between "patient amendments" and the "canonical version" is a complex procedural and technical issue that will see lots of attention in the years to come.

  • Acquire copies of HIPAA-covered data: Usually. Patients typically have the right to access the contents of an EHR system, assuming they pay a copying cost. EHRs frequently make this copying cost unreasonable, and the results are so dense that they are not useful. There are also exceptions to this "right to read," including psychiatric notes and legal investigations.

True copyright ownership (i.e., the relationship you have with a paper you have written or a photo you have taken)

  • Delete their copy of the data: Yes. You can destroy things you own.

  • Arbitrarily (without logs) edit their copy of the data: Yes. You can change things you own without recording what changes you made.

  • Correct the provider's copy of the data: No. If you hold the copyright to material and someone has purchased a right to a copy of that material, you cannot make them change it, even if you make "corrections." Some rights holders use licensing rather than mere "copy sales" to gain this power (Microsoft might reserve the right to change your copy of Windows, for example).

  • Append to the provider's copy of the data: No. Again, you have no right to change another person's copy of something you hold the copyright to; again, some use licensing as a means to gain this power rather than the mere sale of a copy.

  • Acquire copies of HIPAA-covered data: No. You do not have an automatic right to copies of other people's copyrighted works, even if they depict you somehow. (This is why your family photographer can gouge you on reprints.)

IT specialist

  • Delete their copy of the data: Kind of. Regulations dictate that IT specialists and vendors should not have the right to delete patient records, but root (or admin) access to the underlying EHR database means that people with backend access can truly delete them; only direct access to the source code or the database can completely circumvent EHR logging systems. Deleting a record entirely without detection remains difficult, however, since someone (the patient, for instance) will likely know the record should be present.

  • Arbitrarily (without logs) edit their copy of the data: Yes. Source-code or database-level access allows patient records to be modified without logging.

  • Correct the provider's copy of the data: Yes, for the same reason.

  • Append to the provider's copy of the data: Yes, for the same reason.

  • Acquire copies of HIPAA-covered data: No. Typically, database administrators and programmers do not have the standing to request medical records from other sources.
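The privilege matrix above can be read as a small data structure. The Python sketch below is a toy model of my own, with role and privilege names invented for illustration (they appear in no real EHR); it only makes the article's point mechanical: no role holds the full, unconditional bundle of privileges that "ownership" would imply.

```python
# Toy model of the privilege matrix. True means the role can exercise the
# privilege on its own authority; strings mark the qualified cases.
PRIVILEGES = {
    "sourcing_provider": {
        "delete_own_copy": False,        # HIPAA: a copy must stay available
        "edit_without_logs": False,      # "signed" data is change-logged
        "correct_canonical": True,       # must retain the incorrect version
        "append_canonical": True,
        "acquire_covered_copies": "sometimes",  # depends on treatment status
    },
    "patient": {
        "delete_own_copy": True,         # their copies only, not the chart
        "edit_without_logs": False,
        "correct_canonical": False,      # may only suggest corrections
        "append_canonical": True,        # HIPAA right to amend
        "acquire_covered_copies": "usually",    # right to read, with exceptions
    },
    "copyright_owner": {
        "delete_own_copy": True,
        "edit_without_logs": True,
        "correct_canonical": False,
        "append_canonical": False,
        "acquire_covered_copies": False,
    },
    "it_specialist": {
        "delete_own_copy": "kind of",    # root access circumvents the rules
        "edit_without_logs": True,
        "correct_canonical": True,
        "append_canonical": True,
        "acquire_covered_copies": False,
    },
}

def owns_outright(role):
    """'Ownership' would mean holding every privilege unconditionally."""
    return all(v is True for v in PRIVILEGES[role].values())

# No role in the matrix holds the data in the unconditional sense:
print([role for role in PRIVILEGES if owns_outright(role)])  # → []
```

The empty list is the whole argument in one line: every role's privilege bundle differs from every traditional notion of ownership.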

Ergo, neither a patient nor a doctor nor a programmer has an "ownership" relationship with patient data. Each of them has a unique set of privileges that does not line up with any traditional notion of "ownership." Ironically, it is neither the patient nor the provider (by "provider" I usually mean a doctor) who comes closest to "owning" the data: the programmer has the most complete access and is the only one able to circumvent the rules that electronic health record (EHR) software automatically enforces.

So, asking "who owns the data?" is a meaningless, time-wasting, and shallow conceptualization of the issue at hand.

The real issue is: "What rights do patients have regarding health care data that refers to them?" This is a deep question because patients' rights to data vary depending on how the data was acquired. For instance, a standalone personal health record (PHR) is governed primarily by the end-user license agreement (EULA) between the patient and the PHR provider (which gives patients widely varying rights), while rights to a doctor's EHR data are dictated by both HIPAA and the Meaningful Use standards.

Usually, what people really mean when they say "The patient owns the data" is "The patient's needs and desires regarding data should be respected." That is a wonderful instinct, but unless we are going to talk about specific privileges enabled by regulation or law, it really means "whatever the provider/programmer holding the data thinks it means."

For instance, while the current Meaningful Use rules do require providers to give patients digital access to summary documents, there is no requirement for "complete" and "instant" access to the full contents of the EHR. And while HIPAA mandates "complete" access, printed copies of digitized patient data are so unwieldy as to be useless. The devil is in the details here, and when people go on about "the patient owning the data," they are really encouraging a mental shortcut that cannot readily be undone.

Note: This is a refresh of an article originally published here. Photo on home and category pages: Stethoscope by rosmary, on Flickr

Meaningful Use and Beyond: A Guide for IT Staff in Health Care — Meaningful Use underlies a major federal incentives program for medical offices and hospitals that pays doctors and clinicians to move to electronic health records (EHR). This book is a Rosetta Stone for the IT implementer who wants to help organizations harness EHR systems.


May 21 2012

Health Information Technology: putting the patient back into health care

(Background: Most government advisory committees are stocked with representatives of corporations and special interest groups who distort government policies, sometimes unconsciously and with good intentions, to fertilize their own turfs. In health information technology, we have a rare chance to ensure that the most affected members of the public actually have their own direct representative. The GAO is directed by law to propose members for a Health Information Technology Policy Committee, and there is an opening for someone who "advocates for patients or consumers." A movement is building in support of Regina Holliday, nationally famous for her work on opening patient data, her comments on Meaningful Use, and the images in her Walking Gallery. My letter follows. Letters to the GAO are due May 25.)

Government Accountability Office

441 G Street NW.

Washington, DC 20548

Dear Sirs and Madams:

I am writing in support of appointing Regina Holliday as a patient and consumer advocate on the Health Information Technology Policy Committee. I suggest this on two grounds: that she would be an excellent contributor to the committee, and that it is critical for the committee to hear directly from patients rather than from the proxies who usually insert themselves in the patients' place.

Ms. Holliday is nationally recognized among patient advocates as a leading expert on the patient experience and on the information technology required to improve health care, particularly the tools that will enable patient engagement, the Holy Grail of health care reform. Ms. Holliday is an expert on the Meaningful Use requirements that embody the health provisions of the American Recovery and Reinvestment Act (having submitted substantial comments on Stage 1 of Meaningful Use) and has advocated over many years for both technologies and policies that can improve delivery of health care and health information to patients.

Furthermore, Ms. Holliday is in an excellent position to reflect the influence of public opinion on the HIT Policy Committee. She is a tireless researcher and advocate in the area of patient engagement, mastering both traditional channels such as lectures and modern Web-based media. In her Walking Gallery she collects stories from other people who have engaged intensively with the health care system and reflects those widespread experiences in her advocacy work. She is articulate and clear about the demands made by the public.

Finally, I would like to stress the importance of appointing an independent expert voice such as Ms. Holliday to the HIT Policy Committee. Organizations claiming to represent patients have institutional agendas that always take first priority in their advocacy work. Members of the HIT Policy Committee who are paid representatives of established organizations are constantly tempted to bend policies to favor those institutions, and the actual needs of the patient are never paramount. The thrust of the patient advocacy movement is to elevate the health of the patient above the continuity or profit of the institutions, which is why a voice like Ms. Holliday's is crucial.

Andrew Oram

Editor, O'Reilly Media

(This letter represents my personal view only)

What do mHealth, eHealth and behavioral science mean for the future of healthcare?

We're living through one of the most dynamic periods in healthcare in our collective history. Earlier this year, Dr. Farzad Mostashari, the national coordinator of health IT, highlighted how the web, data and epatients are poised to revolutionize healthcare. The Internet is shaping healthcare in many ways, from the quantified self movement to participatory medicine, and even through the threat of a new "data divide" driven by unequal access to information, algorithms, and processing power.

Into this heady mix, add the mobile computing revolution, where smart devices rest in the pockets of hundreds of millions of citizens, collecting data and providing access to medical information. There's also the rapidly expanding universe of healthcare apps that promise to revolutionize how many services are created, distributed and delivered.

This month, I had the opportunity to discuss some of these trends with Dr. Audie Atienza (@AudieAtienza), a researcher who focuses on behavioral science and healthcare. Our interview, lightly edited for content and clarity, follows.

We first met when you were a senior health technology adviser at the U.S. Department of Health and Human Services (HHS). What do you do now?

Audie Atienza: Working with Todd Park at the Department of Health and Human Services (HHS) was a distinct privilege and an honor. I learned a great deal working at HHS with Todd. I am now at the new Science of Research and Technology Branch of the National Cancer Institute, National Institutes of Health.  My title is Behavioral Scientist and Health Scientist Administrator. In a typical week, I attend health-technology-related conferences and meetings, work with colleagues across HHS and the federal government on health-technology-related initiatives, discuss funding opportunities with extramural researchers, and engage in scientific research related to health technology and/or cancer control.

How well did your education prepare you for your work?

Audie Atienza: My undergraduate, graduate and post-doctoral education has provided me with the critical thinking skills and knowledge that is required of a health researcher. My interest in health technology actually started when I was a Fellow at Stanford University, where I was gathering data on cardiovascular disease risk factors using paper and pencil diaries.  Using paper and pencil measures seemed so inefficient. Study participants sometimes forgot to complete the diaries or had incomplete entries — and sometimes the handwriting was difficult to decipher.  So, my mentor, Dr. Abby King, and I collaborated with Dr. BJ Fogg (also at Stanford) and we "went digital" with the cardiovascular disease risk factor assessments. (We used "state of the art" PDAs at the time.)  This fortuitous collaboration and the "there has to be a better way to do this" idea launched me into the field of electronic and mobile health.

What does "eHealth" mean now?

Audie Atienza: After my postdoctoral fellowship at Stanford, I accepted a position at the National Cancer Institute (NCI), Health Promotion Research Branch.  The NCI offered me the opportunity to further explore the field of electronic health (or eHealth) on a national (U.S.) and international scale.  The term "eHealth" generally represents the use of electronic or digital information technology to assess and/or modify health behaviors, states and outcomes.

When I arrived at NCI, I was asked to bring the best and brightest behavioral researchers together to discuss how to assess health in "real time." A book was published based on this meeting: "The Science of Real-Time Data Capture: Self-Reports in Health Research." Other national and international conferences followed, including the 2010 mHealth Summit, in which I was intimately involved.

How does behavioral science affect our capacity to understand the causes of cancer?

Audie Atienza: It is clear that behavioral factors contribute to cancer and many other diseases, like diabetes and heart disease. For example, the link between smoking and cancer is well established. There is also a solid body of research that has linked obesity, physical inactivity, and poor diet to various cancers. The Centers for Disease Control (CDC) reports that 69% of U.S. adults are currently overweight or obese.

Accurately measuring and changing these everyday health behaviors — including smoking, physical activity, what people eat — is not easy. This is where technology can be of great assistance. Through sensors, cell phones, GPS systems, social networking technology, and web-based technology, we may be able to better assess and hopefully improve these key health behaviors that contribute to cancer and other diseases.

We are, however, just at the beginning of discovering how to best develop and utilize technology to improve the health of individuals and the public.  There is much work to be done to determine what is effective and what isn't.

How do mobile devices figure into that work?

Audie Atienza: Mobile technology is everywhere. We are seeing more integrated devices, like smartphones with cameras, accelerometers, GPS, and all types of apps.  But it isn't about the technology — a phrase I have borrowed from Todd Park. It's really about addressing health issues and improving the health of individuals and the public.  If technology can facilitate this, then great. But using technology may not always be the best way to improve health and well-being.  This is a critical research question.

How is mobile technology being applied to specific health issues?

Audie Atienza: Mobile technology can be (and is being) applied to address many different health and disease issues: infectious disease (AIDS/HIV, tuberculosis, influenza), chronic disease (heart disease, cancer, diabetes, arthritis, asthma), mental health (depression, stress, anxiety), child and maternal health (pregnancy, infant care, childhood obesity), gerontology (healthy living in place, falls prevention, caregiving), health promotion (e.g., exercise, diet, smoking cessation, cancer screening, sun safety), and health-provider-related issues (medication adherence, patient-provider communication, point-of-care diagnostics, vital signs monitoring).

Mobile technology cuts across the disease and health spectrum with great potential to address problems that have been previously difficult to solve.  It is difficult to say which mobile health technology is most important because they are all addressing distinct and critical issues.  Heart disease and cancer are the leading causes of death in the United States. Others may argue that infectious diseases and maternal/child health are the most critical issues to address globally. Still others may argue for tobacco control and reducing obesity (increasing physical activity and improving nutrition).  The National Institutes of Health (NIH) has 27 institutes and centers (ICs), each with a particular mission.  More than 20 of the 27 ICs are currently funding mobile technology-related research.

What do we need next in mHealth?

Audie Atienza: More research. We need to better understand what works and what does not. Researchers who have systematically reviewed smartphone health apps (e.g., smoking cessation, diabetes) have found that most are not based on established public health or clinical guidelines. Very few have actually assessed whether the apps are effective in changing health outcomes. With thousands of apps, sensors, and other mobile health tools currently available, it can be difficult for the user to know what is effective, useful, and (most importantly) safe.

How close are we to a real tricorder? (There's now an X Prize for that.)

Audie Atienza: I love science-fiction and "Star Trek"!  Certainly, mobile sensors and monitors currently exist that can accurately monitor physiological states and vital signs. And the technology is becoming increasingly integrated and more powerful.  But, to have an all-in-one mobile device that can assess and diagnose health and diseases as well as, if not better than, a clinical provider is a very tall order. If such a tool or prototype is developed, it will be science and research that will determine if the "tricorder" is effective or not.  Time will tell whether such a tool can be developed.  While I am all for reducing diagnostic errors, I personally would be hesitant to accept a diagnosis from only a mobile device without the clinical judgment of a medical or health professional.

OSCON 2012 Healthcare Track — The conjunction of open source and open data with health technology promises to improve creaking infrastructure and give greater control and engagement to patients. Learn more at OSCON 2012, being held July 16-20 in Portland, Oregon.

Save 20% on registration with the code RADAR20


May 16 2012

How to start a successful business in health care at Health 2.0 conference

Great piles of cash are descending on entrepreneurs who develop health care apps, but that doesn't make it any easier to create a useful one that your audience will adopt. Furthermore, lowered costs and streamlined application development techniques let you fashion a working prototype faster than ever, but that also reduces the time you can fumble around looking for a business model. These were some of the insights I got at Spring Fling 2012: Matchpoint Boston, put on by Health 2.0 this week.

This conference was a bit of a grab-bag, including one-on-one meetings between entrepreneurs and their potential funders and customers, keynotes and panels by health care experts, round-table discussions among peers, and lightning-talk demos. I think the hallway track was the most potent part of this conference, and it was probably planned that way. The variety at the conference mirrors the work of Health 2.0 itself, which includes local chapters, challenges, an influential blog, and partnerships with a range of organizations. Overall, I appreciated the chance to get a snapshot of a critical industry searching for ways to make a positive difference in the world while capitalizing on ways to cut down on the blatant waste and mismanagement that bedevil the multi-trillion-dollar health care field.

Let's look, for instance, at the benefits of faster development time. Health IT companies go through fairly standard early stages (idea, prototype, incubator, venture capital funding) but cochairs Indu Subaiya and Matthew Holt showed slides demonstrating that modern techniques can leave companies in the red for less time and accelerate earnings. On the other hand, Jonathan Bush of athenahealth gave a keynote listing bits of advice for company founders and admitting that his own company had made significant errors that required time to recover from. Does the fast pace of modern development leave less room for company heads to make the inevitable mistakes?

I also heard Margaret Laws, director of the California HealthCare Foundation's Innovations Fund, warn that most of the current applications being developed for health care aim to salve common concerns among doctors or patients but don't address what she calls the "crisis points" in health care. Brad Fluegel of Health Evolution Partners observed that, with the flood of new entrepreneurs in health IT, a lot of old ideas are being recycled without adequate attention to why they failed before.

I'm afraid this blog is coming out too negative, focusing on the dour and the dire, but I do believe that health IT needs to acknowledge its risks in order to avoid squandering the money and attention it's getting, and on the positive side to reap the benefits of this incredibly fertile moment of possibilities in health care. Truly, there's a lot to celebrate in health IT as well. Here are some of the fascinating start-ups I saw at the show:

  • hellohealth aims at that vast area of health care planning and administration that cries out for efficiency improvements--the area where we could do the most good by cutting costs without cutting back on effective patient care. Presenter Shahid Shah described the company as the intersection of patient management with revenue cycle management. They plan to help physicians manage appointments and follow-ups better, and rationalize the whole patient experience.

  • hellohealth will offer portals for patients as well. They're unique, so far as I know, in charging patients for certain features.

  • Corey Booker demo'd onPulse, which aims to bring together doctors with groups of patients, and patients with groups of the doctors treating them. For instance, when a doctor finds an online article of interest to diabetics, she can share it with all the patients in her practice suffering from diabetes. onPulse also makes it easier for a doctor to draw in others who are treating the same patient. The information built up about their interactions can be preserved for billing.

    onPulse overlaps in several respects with HealthTap, a doctor-patient site that I've covered several times and for which an onPulse staffer expressed admiration. But HealthTap leaves discussions out in the open, whereas onPulse connects doctors and patients in private.

  • is another one of these patient/doctor services with a patient portal. It allows doctors to upload continuity of care documents in the standard CCD format to the patient's site, and supports various services such as making appointments.

    A couple of weeks ago I reported a controversy over hospitals' claims that they couldn't share patient records with the patients. Check out the innovative services I've just highlighted here as a context for judging whether the technical and legal challenges for hospitals are really too daunting. I recognize that each of the sites I've described picks off a particular piece of the EHR problem and that opening up the whole kit and caboodle is a larger task, but these sites still prove that all the capabilities are in place for institutions willing to exploit them.

  • GlobalMed has recently released a suitcase-sized box that contains all the tools required to do a standard medical exam. This allows traveling nurse practitioners or other licensed personnel to do a quick check-up at a patient's location without requiring a doctor or a trip to the clinic. Images can also be taken. Everything gets uploaded to a site where a doctor can do an assessment and mark up records later. The suitcase weighs about 30 pounds, rolls on wheels, and costs about $30,000 (price to come down if they start manufacturing in high quantities).

  • SwipeSense won Health 2.0's 100 Day Innovation Challenge. They make a simple device that hospital staff can wear on their belts and wipe their hands on. This may not be as good as washing your hands, but takes advantage of people's natural behavior and reduces the chance of infections. It also picks up when someone is using the device and creates reports about compliance. SwipeSense is being tested at the Rush University Medical Center.

  • Thryve, one of several apps that helps you track your food intake and make better choices, won the highest audience approval at Thursday's Launch! demos.

  • Winner of last weekend's developer challenge was No Sleep Kills, an app that aims to reduce accidents related to sleep deprivation (I need a corresponding app to guard against errors from sleep-deprived blogging). You can enter information on your recent sleep patterns and get back a warning not to drive.

It's worth noting that the last item in that list, No Sleep Kills, draws information from Health and Human Services's Healthy People site. This raises the final issue I want to bring up in regard to the Spring Fling. Sophisticated developers know their work depends heavily on data about public health and on groups of patients. HHS has actually just released another major trove of public health statistics. Our collective knowledge of who needs help, what works, and who best delivers the care would be immensely enhanced if doctors and institutions who currently guard their data would be willing to open it up in aggregate, non-identifiable form. I recently promoted this ideal in coverage of Sage Congress.

In the entirely laudable drive to monetize improvements in health care, I would like the health IT field to choose solutions that open up data rather than keep it proprietary. One of the biggest problems with health care, in this age of big data and incredibly sophisticated statistical tools, is our tragedy of the anti-commons where each institution seeks to gain competitive advantage through hoarding its data. They don't necessarily use their own data in socially beneficial ways, either (they're more interested in ratcheting up opportunities for marketing expensive care). We need collective sources of data in order to make the most of innovation.
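One common safeguard when releasing data "in aggregate, non-identifiable form" is small-cell suppression: counts below a threshold are withheld, because a cell describing one or two patients can identify them. The Python sketch below is a minimal illustration under assumptions of my own (the record fields and the threshold of five are invented); real de-identification, such as under the HIPAA Safe Harbor method, involves considerably more than this.

```python
from collections import Counter

# Hypothetical per-patient records; field names are illustrative only.
records = [
    {"zip3": "021", "age_band": "40-49", "diagnosis": "diabetes"},
    {"zip3": "021", "age_band": "40-49", "diagnosis": "diabetes"},
    {"zip3": "021", "age_band": "40-49", "diagnosis": "diabetes"},
    {"zip3": "021", "age_band": "40-49", "diagnosis": "diabetes"},
    {"zip3": "021", "age_band": "40-49", "diagnosis": "diabetes"},
    {"zip3": "946", "age_band": "70-79", "diagnosis": "asthma"},
]

def aggregate(records, keys, min_cell=5):
    """Count patients per cell, then suppress cells smaller than min_cell,
    a minimal (and by itself insufficient) de-identification safeguard."""
    counts = Counter(tuple(r[k] for k in keys) for r in records)
    return {cell: n for cell, n in counts.items() if n >= min_cell}

print(aggregate(records, ("zip3", "age_band", "diagnosis")))
# The lone asthma patient in ZIP prefix 946 is suppressed; only the
# cell of five diabetes patients is released.
```

The point is that aggregation of this kind lets institutions contribute to collective knowledge without exposing individual charts, undercutting the main excuse for hoarding.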


May 06 2012

The state of health IT according to the American Hospital Association

Last week, the American Hospital Association released a major document. Framed as comments on the proposed Stage 2 Meaningful Use criteria from the Centers for Medicare & Medicaid Services (CMS), the letter also conveys a rather sorrowful message about the state of health IT in the United States. One request--to put the brakes on the requirement that hospitals let patients see their own information electronically--has received particularly strong coverage and vigorous responses from e-Patient Dave deBronkart, Regina Holliday, Dr. Adrian Gropper, Fred Trotter, the Center for Democracy and Technology, and others.

I think the AHA has overreached in its bid to slow down patient access to data, which I'll examine later in this article. But to me, the most poignant aspect of the AHA letter is its careful accumulation of data to show the huge gap between what health care calls for and what hospitals, vendors, standards bodies, and even the government are capable of providing.

Two AHA staff were generous enough to talk to me on very short notice and offer some clarifications that I'll include with the article.

A survey of the U.S. health care system

According to the AHA (translated into my own rather harsh words), the state of health IT in American hospitals is as follows:

  • Few hospitals and doctors can fulfill basic requirements of health care quality and cost control. For instance, 62% could not record basic patient health indicators such as weight and blood pressure in electronic health records (EHRs) (page 51 of their report).

  • Many EHR vendors can't support the meaningful use criteria in real-life settings, even when their systems were officially certified to do so. I'll cite some statements from the AHA report later in the article. Meaningful use is a big package of reforms promulgated over just a few years, but it's also difficult because vendors and hospitals had long been heading in the opposite direction: toward closed systems with limited functionality.

  • Doctors still record huge globs of patient data in unstructured text format, where they are unavailable for quality reporting, tracking clinical effectiveness, etc. Data is often unstructured because humans are complex and their symptoms don't fit into easy categories. Yet doctors have learned to make diagnoses for purposes of payment and other requirements; we need to learn what other forms of information are worth formalizing for the sake of better public health.

  • Quality reporting is a mess. The measures currently being reported are unreliable, and standards have not been put in place to allow valid comparisons of measures from different hospitals.

  • Government hasn't stepped up to the plate to perform its role in supporting electronic reporting. For instance, the Centers for Medicare & Medicaid Services (CMS) wants the hospitals to report lots of quality measures, but its own electronic reporting system is still in the testing stages, so hospitals must enter data through a cumbersome and error-prone manual "attestation." States aren't ready to accept electronic submissions either. The Direct project is moving along, but its contribution to health data exchange is still very new.

There's no easy place to assign blame for a system that is killing hundreds of thousands of people a year while sticking the US public with rising costs. The AHA letter constantly assures us that the association approves of the meaningful use objectives, but says their implementation in a foreseeable time frame is unfeasible. "We can envision a time when all automated quality reporting will occur effortlessly in a reliable and valid fashion. However, we are not there yet." (pp. 42-43)

So the AHA's petition to CMS can be summarized as, "Slow everything down, but keep the payments coming."

AHA staff referred to the extensively researched article, A Progress Report On Electronic Health Records In U.S. Hospitals. It corroborates observations that adoption of EHRs has vastly increased between 2010 and 2011. However, the capabilities of the EHRs and hospitals using them have not kept up with meaningful use requirements, particularly among small rural hospitals with few opportunities to hire sophisticated computer technicians, etc. Some small hospitals have trouble even getting an EHR vendor to talk to them.

Why all this matters

Before looking at some details, let me lay out some of the reasons that meaningful use criteria are so important to patients and the general public:

  • After treatment, data must be transferred quickly to patients and the next organizations treating them (such as rehab centers and visiting nurses) so that the patients receive proper care.

  • Quality measures are critical so that hospitals can be exposed to sunshine, the best disinfectant, and be shamed into lowering costs and reducing errors.

  • Data must be collected by public agencies so that data crunchers can find improvements in outreach and treatment. Hospitals love to keep their data private, but that gives them relatively tiny samples on which to base decisions, and they often lack the skills to analyze the data.

No one can predict what will break logjams and propel health care forward, but patient engagement seems crucial because most health care problems in developed countries involve lifestyle issues such as smoking and body weight. To provide the kind of instant, pervasive patient engagement that can produce change, we need electronic records that are open to innovative apps, that can accept data from the patient-centered medical home, and that link together all care-givers.

The state of electronic health records

The EHR industry does not come out well in the AHA list of woes. The letter cites "unworkable, but certified, vendor products" (p. 3) and says, "Current experience is marked by limited vendor and workforce capacity." (p. 7) The latter complaint points to one of the big hurdles facing health care reform: we don't have enough staff who understand computer systems and who can adapt their behavior to use them effectively.

Functionality falls far short of real hospital needs: "One hospital system spent more than $1 million on a quality reporting tool from its vendor that was, for the most part, an unwieldy data entry screen. Even medication orders placed using CPOE [computerized physician order entry] needed to be manually re-entered for the CQM [clinical quality measure] calculation. Even then, the data were not reliable, despite seven months of working with the vendor to attempt to get it right. Thus, after tremendous investment of financial and human resources, the data are not useful." (p. 45)

The AHA claims that vendors were lax in testing their systems, and that the government abetted the omission: "the proposals within the certification regulation require vendors to incorporate all of the data elements needed to calculate only one CQM. There is no proposal to require that certified EHRs be capable of generating all of the relevant CQMs proposed/finalized by CMS." (p. 41) With perhaps a subtle sarcasm, the AHA proposes, "CMS should not require providers to report more e-measures than vendors are required to generate." (p. 36)

Vendors kind of take it on the chin for fundamental failures in electronic capabilities. "AHA survey data indicate that only 10 percent of hospitals had a patient portal of any kind in Fall 2011. Our members report that none had anywhere near the functionality required by this objective. In canvassing vendors, they report no technology companies can currently support this volume of data or the listed functions." (p. 26)

We can add an observation from the College of Healthcare Information Management Executives (CHIME): "In Stage 1, some vendors were able to dictate which clinical quality measures providers chose to report--not based on the priorities of the provider, but based on the capabilities of the system. Subsequently, market forces corrected this and vendors have gone on to develop more capabilities. But this anecdote provides an important lesson when segmenting certification criteria--indeed for most technologies in general--flexibility for users necessitates consistent and robust standards for developers. In short, the 2014 Edition must require more of the vendor community if providers are to have space to pursue meaningful use of Meaningful Use." (p. 2)

Better standards--which take time to develop--could improve the situation, which is why the Office of the National Coordinator (ONC) has set up a Health IT Standards Committee. For instance, the AHA says, "we have discovered that vendors needed to program many decisions into EHRs that were not included in the e-specifications. Not only has this resulted in rampant inconsistencies between different vendors, it produced inconsistent measure results when the e-measures are compared to their counterparts in the Inpatient Quality Reporting (IQR) Program." (p. 35)

The AHA goes so far as to say, "The market cannot sustain this level of chaos." (p. 7) They conclude that the government is pushing too hard. One of their claims, though, comes across as eccentric: "Providers and vendors agree that the meaningful use program has stifled innovation in the development of new uses of EHRs." (p. 9)

To me, all the evidence points in the opposite direction. The vendors were happy for decades to push systems that performed minimal record-keeping and modest support such as formularies at huge costs, and the hospitals that adopted EHRs failed to ask for more. It wasn't a case of market failure because, as I have pointed out (and others have too), health care is not a market. But nothing would have changed had not the government stepped in.

Patient empowerment

Now for the point that has received the most press, the AHA's request to weaken the rules giving patients access to their data. Once again, the AHA claims to favor patient access--and indeed, it has helped hospitals over the years give patients summaries of care, mostly on paper--but it is passing on the evidence accumulated from its members that the systems will not be in place to support electronic distribution for some time. I won't repeat all the criticisms of the experts mentioned at the beginning of this article, but will provide some perspective on patient engagement.

Let's start with the AHA's request to let the hospital choose the format for patient data (pp. 25-26). So long as hospitals can do that, we will be left with formats that are not interoperable. Many hospitals will choose formats that are human-readable but not machine-readable, so that correlations and useful data cannot be extracted programmatically. Perhaps the technology lags in this area--but if the records are not in structured format already, hospitals themselves lose critical opportunities to check for errors, mine data for trends, and perform other useful tasks with their records.

The AHA raises alarms at the difficulties of providing data. They claim that for each patient who is treated, the hospital will have to invest resources "determining which records are relevant and appropriate." (p. 26) "It is also unclear whether a hospital would be expected to spend resources to post information and verify that all of the data listed are available within 36 hours." (p. 27)

From my perspective, the patient download provisions would simply require hospitals to clean up their ways of recording data so that it is in a useable and structured format for all, including their own staff. Just evaluate what the AHA is admitting to in the following passage: "Transferring these clinical observations into a structured, coded problem list in the EHR requires significant changes to work flows and training to ensure accuracy. It also increases time demands for documentation by physicians who already are stretched thin." (p. 27)

People used to getting instant information from commercial web sites find it very hard to justify even the 36-hour delay offered by the Stage 2 meaningful use guidelines. Any online retailer can provide me with information on all my current and recent orders. Google offers each registered user a dashboard that shows me everything they track about me, including all my web searches going back to mid-2006. They probably do this to assure people that they are not the egregious privacy violators they are regularly accused of being. Nevertheless, it shows that sites collecting data can make it available to users without friction, and with adequate security to manage privacy risks.

The AHA staff made a good point in talking to me. The CMS "transmit" requirement would let a patient ask the hospital to send his records to any institution or individual of his choice. First of all, this would assume that the recipient has encrypted email or access to an encrypted web site. And it could be hard for a hospital to make sure both the requester and the intended recipient are who they claim to be. "The transmit function also heightens security risks, as the hospital could be asked to send data to an individual with whom it has no existing relationship and no mechanism for authentication of their identity." (p. 27) Countering this claim, Gropper and the Society for Participatory Medicine offer the open OAuth standard to give patients easy and secure access. But while OAuth is a fairly stable standard, the AHA's concerns are justified because it hasn't been applied yet to the health care field.

Unfortunately, allowing a patient to send his or her data to a third party is central to Accountable Care Organizations (ACOs), which hold the promise of improving patient care by sharing data among cooperating health care providers. If the "transmit" provision is delayed, I don't see how ACOs can take off.

The AHA would drastically reduce the information hospitals have to give patients, at least for the next stage of the requirements. Among the material it would remove are diagnoses, the reason for hospitalization, providers of care during hospitalization, vital signs at discharge, laboratory test results, the care transition summary and plan for the next provider of care, and discharge instructions for the patient. (p. 27) All this vastly diminishes the value of the data for improving quality of care. For instance, removing lab test results will lead to expensive and redundant retesting. (However, the AHA staff told me they support the ability of patients to get results directly from the labs.)

I'll conclude this section with the interesting observation that the CHIME comments on meaningful use I mentioned earlier say nothing about the patient engagement rules. In other words, the hospital CIOs in CHIME don't back up the hospitals' own claims.

Some reasonable AHA objections

Now I'm happy to turn to AHA proposals that leave fewer impediments to the achievement of better health care. Their 49-page letter (plus appendices) details many aspects of Stage 2 that seem unnecessarily burdensome or of questionable value.

It seems reasonable to me to ask the ONC, "Remove measures that make the performance of hospitals and EPs contingent on the actions of others." (p. 2) For instance, to engage in successful exchanges of patient data, hospitals depend on their partners (labs, nursing homes, other hospitals) to have Stage 2 capabilities, and given the slow rate of adoption, such partners could be really hard to find.

The same goes for patient downloads. Not only do hospitals have to permit patients to get access to data over the Internet, but they have to get 10% of the patients to actually do it. I don't think the tools are in place yet for patients to make good use of the data. When data is available, apps for processing the data will flood the market and patients will gradually understand the data's value, but right now there are few reasons to download it: perhaps to give it to a relative who is caring for the patient or to a health provider who doesn't have the technical means to request the data directly. Such uses may allow hospitals to reach the 10% required by the Stage 2 rule, but why make them responsible?

The AHA documents a growing digital divide among hospitals and other health care providers. "Rural, smaller and nonteaching hospitals have fewer financial and technical resources at their disposal. They also are starting from a lower base of adoption." (p. 59) The open source community needs to step up here. There are plenty of free software solutions to choose from, but small providers can't use them unless they become as easy to set up and configure as MySQL or even LibreOffice.

The AHA is talking from deep experience when it questions whether patients will actually be able to make use of medical images. "Images are generally very large files, and would require that the individual downloading or receiving the file have specialized, expensive software to access the images. The effort required to make the images available would be tremendous." (p. 26) We must remember that parts of our country don't even have high-speed Internet access.

The AHA's detailed comments about CMS penalties for the slow adoption of EHRs (pp. 9-18) also seem to reflect the hard realities out in the field.

But their attitude toward HIPAA is unclear. They point out that Congress required meaningful use to "take into account the requirements of HIPAA privacy and security law." (p. 25) Nevertheless, they ask the ONC to remove its HIPAA-related clauses from meaningful use because HIPAA is already administered by the Office for Civil Rights (OCR). It's reasonable to remove redundancy by keeping regulations under a single agency, but the AHA admits that the OCR proposal itself is "significantly flawed." Their staff explained to me that their goal is to wait for the next version of the OCR's own proposal, which should be released soon, before creating a new requirement that could well be redundant or conflicting.

Unless we level the playing field for small providers, an enormous wave of buy-outs and consolidation will occur. Market forces and the push to form ACOs are already causing such consolidation. Maybe it's even a good thing--who feels nostalgic for the corner grocery? But consolidation will make it even more important to empower patients with their data, in order to counterbalance the power of the health care institutions.

A closing note about hospital inertia

The AHA includes in its letter some valuable data about difficulties and costs of implementing new systems (pp. 47-48). They say, "More than one hospital executive has reported that managing the meaningful use implementation has been more challenging than building a new hospital, even while acknowledging the need to move ahead." (p. 49)

What I find particularly troublesome about their report is that the AHA offers no hint that the hospitals spent all this money to put in place new workflows that could improve care. All the money went to EHRs and the minimal training and installation they require. What will it take for hospitals to make the culture changes that reap the potential benefits of EHRs and data transfers? The public needs to start asking tough questions, and the Stage 2 requirements should be robust enough to give these questions a basis.

Principles of patient access in Directed Exchange

The Health Insurance Portability and Accountability Act (HIPAA) is good law. HIPAA formalized principles of patient privacy that had been industry norms for more than 50 years and should have been codified long before (better late than never). HIPAA gave patients in the U.S. the right to access their own healthcare records. The law struck reasonable balances on hundreds of complicated issues in order to achieve these goals. It solved far more problems than it created, which is as close to the definition of good government as I can imagine. Patients are better off after HIPAA than before.

Sadly, the "letter of the law" in HIPAA is frequently either ignored or, worse, fully embraced in order to make patient access to their own healthcare data more cumbersome. This is evidenced nowhere better than in Regina Holliday's experience with access to her husband's medical records. To make a long story short, she was able to acquire an unpublished manuscript of a Stephen King novel sooner, and for less money, than she was able to get her husband's medical records.

Principle zero: Some clinicians will do anything they can to make patient access to their health records impossible or cumbersome.

Regina's work detailing her experience with her husband is titled "73 Cents," because that's how much it cost to get one page of her husband's medical record. HIPAA allows hospitals and clinicians to charge a "reasonable" copying fee for access to patient records. The problem is that in the digital age, a single healthcare record printout looks like this:

A single EHR record, printed out
A partial printout of a patient's medical record.

This is what happens when you print out a digital health record. Charging patients copying costs for access to medical records rests on a simple presumption: that there are only a few pages. Obviously no patient will be able to afford copying costs in the age of all-digital records.

Principle one: Patient access to their own healthcare records must be digital once the record is digital.

Once you concede that access to the patient's medical record must be digital, we can discuss the push vs. pull question. When someone else on the Internet has data that is important to you, you can generally find ways to have it "pushed" to you or you can choose to "pull" it. The simplest example is the weather. You can always check the weather easily online by visiting a website (by pulling). But you can also have software text you when it is going to rain (by pushing).
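The weather example above can be sketched in a few lines of code. This is a minimal illustration only, with invented names; real directed exchange uses secure messaging, not in-memory callbacks:

```python
# "Pull": the client asks for data whenever it wants it.
# "Push": the client registers interest once and is notified on updates.

class RecordStore:
    def __init__(self):
        self.records = {}
        self.subscribers = []            # push: registered callbacks

    def get(self, patient_id):
        # Pull: caller requests the current record on demand.
        return self.records.get(patient_id)

    def subscribe(self, callback):
        # Push: caller asks to be notified of future changes.
        self.subscribers.append(callback)

    def update(self, patient_id, data):
        # New data arrives; every subscriber is notified immediately.
        self.records[patient_id] = data
        for notify in self.subscribers:
            notify(patient_id, data)

store = RecordStore()
alerts = []
store.subscribe(lambda pid, data: alerts.append((pid, data)))   # push
store.update("pt-1", {"note": "discharge summary"})

print(store.get("pt-1"))   # pull: {'note': 'discharge summary'}
print(alerts)              # push: [('pt-1', {'note': 'discharge summary'})]
```

The point of the sketch is that push requires the data holder to know where to deliver updates, which is exactly what directed exchange provides.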

There are advantages to both push and pull approaches for patient access to data. People who are excited about the pull model tend to focus on the benefits of the "portal" requirements in Meaningful Use, and those who favor the push model are excited about directed exchange. Without getting into the debate, I can posit that there are some cases where push access to patient data is critical. Without supporting patient participation in directed exchange, we relegate patients to second-class citizenship in healthcare exchange. That is unacceptable. Patients should be first-class citizens in healthcare exchange.

Principle two: Patients should be able to participate in health information exchange as first-class citizens.

The Office of the National Coordinator for Health Information Technology (ONC) should be applauded for requiring directed exchange with patients in the current proposed rule. I hope that ONC does not back off of this new requirement.

The current proposed rulemaking, however, is silent on a critical issue for directed health information exchange: how do we ensure that providers will not refuse to communicate with patients over directed exchange because of bogus "security concerns"? As we saw with the copying costs under HIPAA, every potential barrier to a patient's access to data will be used against patients.

There are already rumors of cases in the pilots of directed exchange where organizations are using the trust architecture of the Direct Project to refuse to communicate with certain parties. While that might be reasonable between institutions (do you really think Planned Parenthood will ever automate communication with Catholic charity clinics or vice-versa?), it is absolutely critical that this not hamper patient-clinician communication.

When we first designed the Direct Project trust model, we presumed that patient-clinician communication would take place based on "business-card" identity verification. That meant that when a patient provided a clinician with a public key (no matter how they did that), the clinician would trust it because the patient provided it. We did this because we knew that if clinicians could reject a patient's public key based on "security concerns," they would do so. Either the clinicians (or, more likely, the vendors they hired) would choose directed exchange "partners" that were "approved" and "secure," ensuring that the patient's experience of directed exchange was merely a more extensive menu of patient portal options. Patient data is very valuable, and controlling the flow of patient data is central to more business plans than I care to count.

In order for patients to be first-class citizens in health information exchange, they should have the right to send their records, in an automated fashion, anywhere they want, even to a service that the patient is enthusiastic about but the clinician disapproves of. In the world of secure email enabled by public-key infrastructure (PKI), that translates to a requirement that clinicians accept any public key/direct address presented by a patient in a reasonable manner. This acceptance must be unconditional, but should probably limit the use of that key to communication with just that patient. Anything less means that the patient is a second-class citizen with regard to the exchange of their own data.
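The acceptance rule described above can be modeled in a few lines. This is a hypothetical sketch of the policy, not the Direct Project's actual certificate machinery; the class and key names are invented:

```python
# Policy sketch: a clinician's system accepts ANY public key a patient
# presents (no vendor "approval" step), but scopes the resulting trust
# to communication with that one patient.

class PatientKeyStore:
    def __init__(self):
        self.keys = {}   # patient_id -> set of accepted public keys

    def accept_key(self, patient_id, public_key):
        # Unconditional acceptance of the patient-presented key.
        self.keys.setdefault(patient_id, set()).add(public_key)

    def may_send(self, patient_id, public_key):
        # Trust in the key is limited to messages with that patient.
        return public_key in self.keys.get(patient_id, set())

store = PatientKeyStore()
store.accept_key("pt-42", "KEY-A")

print(store.may_send("pt-42", "KEY-A"))   # True: key accepted for pt-42
print(store.may_send("pt-99", "KEY-A"))   # False: not scoped to pt-99
```

Scoping the key to one patient is what keeps unconditional acceptance from becoming a general trust grant to unknown parties.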

Conclusion: ONC should require that clinicians communicate with a patient's chosen directed exchange provider, which means accepting any public key presented by a patient in a reasonable manner.

The community at Direct Trust is working hard to agree on what "reasonable manner" should mean, exactly. Here is my latest proposal on the subject, and here are similar ideas from Dr. David Kibbe. Eventually the Direct Trust community will hammer out a firm understanding of the specific ways that might be "reasonable" for a patient to provide a certificate. But we are certainly agreed that without firm requirements on certificate acceptance, this issue will be used by clinicians to limit where patients can send their own data.

As the U.S. federal government prepares to pay healthcare providers to adopt electronic health records (EHRs), it will insist that those doctors, hospitals, and other providers show that they are using the new software in clinically meaningful ways. Monday (May 7, 2012) is the deadline for comments on the second stage of the requirements that clinicians must meet in order to receive compensation. These requirements are usually short-handed as "meaningful use."

I will be submitting this blog post as my comments to that process. Others will be submitting comments that directly contradict the principles and conclusions I write here. Most notably the American Hospital Association (AHA) has argued that the requirements for patient portals and for providing patients with access to their digital record should be entirely removed from the meaningful use standards (PDF). Specifically:

"Our members are particularly concerned with the proposed objective to provide patients with the ability to view, download and transmit large volumes of protected health information via the Internet (a "patient portal"). The AHA believes that this objective is not feasible as proposed, raises significant security issues, and goes well beyond current technical capacity. We also believe that CMS should not include this objective because the Office of Civil Rights, and not CMS, regulates how health care providers and other covered entities fulfill their obligations under the Health Insurance Portability and Accountability Act (HIPAA), including the obligation to give patients access to their health records."

This is fairly ironic, since the report also says:

"To date, OCR has received comments on its own significantly flawed original proposal to implement this section of HITECH, but has yet to finalize the standard."

Apparently, AHA is not satisfied with any government agency's interpretation of giving electronic access to patient data. The AHA would prefer that patients continue to wait the same amount of time for access to their digital records that they do for their paper records. Specifically:

"Further, 30 days are necessary to make determinations about how to respond to a request no matter the format of the protected health information. While providing an electronic copy of protected health information maintained in an EHR eventually may be facilitated more easily by technology, the process of determining which records are relevant and appropriate takes the same amount of time as it does for evaluating paper records."

Of course, this is entirely false. HIPAA does exempt certain parts of healthcare records (e.g., a psychiatrist's notes) and certain disclosures (e.g., when the FBI asks for records) from patient access. An EHR should be capable of understanding which parts of a record are subject to HIPAA access requests and which are not. If the EHR system can make this distinction, then responses to HIPAA requests can be made in near-real-time. If the EHR system cannot determine which portions of the record to provide automatically in order to honor a HIPAA patient access request, then even 30 days is not going to be enough. Can you imagine a nurse reading through the entire stack of papers above to ensure that a certain mental health diagnosis is redacted?
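To see why this distinction makes near-real-time responses possible, consider a record whose sections carry access tags. This is an illustrative sketch only; the section names and tagging scheme are invented, not from any real EHR:

```python
# If each section of a record is tagged, the portions exempt from a
# HIPAA patient access request can be filtered out automatically,
# instead of by a 30-day manual review.

EXEMPT_SECTIONS = {"psychotherapy_notes", "law_enforcement_disclosures"}

def patient_access_copy(record):
    """Return only the sections of a tagged record a patient may receive."""
    return {section: content
            for section, content in record.items()
            if section not in EXEMPT_SECTIONS}

record = {
    "medications": ["lisinopril 10mg"],
    "lab_results": {"a1c": 6.1},
    "psychotherapy_notes": "withheld from patient copies",
}

print(patient_access_copy(record))
# {'medications': ['lisinopril 10mg'], 'lab_results': {'a1c': 6.1}}
```

The hard part, of course, is not the filter but getting the data tagged and structured in the first place, which is exactly what the meaningful use rules push hospitals toward.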

One of the most critical features of patient participation in directed exchange is the patient's capacity to prevent the spread of bad information as it is happening. Apparently, the AHA believes that patients should tolerate the spread of misinformation in their health records to other institutions for a month before correcting it. This, of course, works in every situation where patients can wait a whole month to get correct information to other hospitals and clinicians.

I would like to be the first to welcome the American Hospital Association to the digital age. (Okay, maybe the second.) From a technology perspective, there is nothing at all that would prevent patients from receiving copies of their updated digital health records seconds after they are "signed" by their clinicians. Inside those seconds is plenty of time to digitally determine whether sharing with the patient is appropriate, legal and safe. Seconds after a patient like me receives data, I intend to process it in an automated fashion. It is not unreasonable, in this new digital world, for me to get a text message that a doctor has ordered a medication that I am allergic to. I wish to get that message after the doctor has ordered the medication, but before I receive it in my IV.
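The automated check I have in mind is genuinely simple. Here is a toy version, with invented drug names and no real alerting channel, just to show how little computation stands between a pushed order and a patient-side warning:

```python
# A patient-side program receives a new medication order (pushed via
# directed exchange) and compares it against a personal allergy list.

ALLERGIES = {"penicillin", "sulfamethoxazole"}

def check_order(order):
    """Return an alert string if the ordered drug is a known allergen."""
    drug = order["drug"].lower()
    if drug in ALLERGIES:
        return f"ALERT: ordered drug '{drug}' is on your allergy list"
    return None   # no conflict; stay quiet

print(check_order({"drug": "Penicillin", "dose": "500mg"}))
# ALERT: ordered drug 'penicillin' is on your allergy list
print(check_order({"drug": "ibuprofen", "dose": "200mg"}))   # None
```

A check like this is only useful if the data arrives in seconds, which is the whole argument against a 36-hour delay.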

In this new digital world, 36 hours is unreasonable. It means that humans continue to be involved in tasks that can be performed perfectly by a computer without errors. Even 36 hours means that doctors, nurses and hospital administrators are still "thinking in paper." Thirty-six hours means that you still do not view me, the patient, as an equal data partner. It means that I am blind to the data in your hospital at the only time it really matters, which is right now. Health data that is 36-hours old can only be analyzed as a post-mortem and data that is 30-days old is already rotting. As a patient, 36 hours is a short-term solution. It is an opportunity for you to rethink how information flows in your hospitals. It is an opportunity for you to rethink the notions of "inside" the hospital and "outside" the hospital.

This is not to say that I do not take your point regarding the reconciliation of the policies from the perspective of HIPAA and meaningful use. Maintaining two timelines for compliance is difficult. But the reconciliation is to speed HIPAA up, not to slow meaningful use down. The notion that you will give patients a stack of paper like the one above 30 days after it is useful is a bad joke. It was a bad joke 20 years ago, when the technologies already existed to fix the problem, but you decided that the patient's experience was not worth the investment.

There is always something you can do, if you feel as strongly about this as I do.

Meaningful Use and Beyond: A Guide for IT Staff in Health Care — Meaningful Use underlies a major federal incentives program for medical offices and hospitals that pays doctors and clinicians to move to electronic health records (EHR). This book is a Rosetta Stone for the IT implementer who wants to help organizations harness EHR systems.

Photo: Medical record printout by jodi0327, on Flickr


May 02 2012

Recombinant Research: Breaking open rewards and incentives

In the previous articles in this series I've looked at problems in current medical research, and at the legal and technical solutions proposed by Sage Bionetworks. Pilot projects have shown encouraging results but to move from a hothouse environment of experimentation to the mainstream of one of the world's most lucrative and tradition-bound industries, Sage Bionetworks must aim for its nucleus: rewards and incentives.

Previous article in the series: Sage Congress plans for patient engagement.

Think about the publication system, that wretchedly inadequate medium for transferring information about experiments. Getting the data on which a study was based is incredibly hard; getting the actual samples or access to patients is usually impossible. Just as boiling vegetables drains most of their nutrients into the water, publishing results of an experiment throws away what is most valuable.

But the publication system has been built into the foundation of employment and funding over the centuries. A massive industry provides distribution of published results to libraries and research institutions around the world, and maintains iron control over access to that network through peer review and editorial discretion. Even more important, funding grants require publication (though only very recently have they begun to require the data behind a study). And of course, advancement in one's field requires publication.

Lawrence Lessig, in his keynote, castigated for-profit journals for restricting access to knowledge in order to puff up profits. A chart in his talk showed skyrocketing prices for for-profit journals in comparison to non-profit journals. Lessig is not out on the radical fringe in this regard; Harvard Library is calling the current pricing situation "untenable" in a move toward open access echoed by many in academia.

Lawrence Lessig keynote at Sage Congress
Lawrence Lessig keynote at Sage Congress.

How do we open up this system that seemed to serve science so well for so long, but is now becoming a drag on it? One approach is to expand the notion of publication. This is what Sage Bionetworks is doing with Science Translational Medicine in publishing validated biological models, as mentioned in an earlier article. An even more extensive reset of the publication model is found in Open Network Biology (ONB), an online journal. The publishers require that an article be accompanied by the biological model, the data and code used to produce the model, a description of the algorithm, and a platform to aid in reproducing results.

But neither of these worthy projects changes the external conditions that prop up the current publication system.

When one tries to design a reward system that gives deserved credit to other things besides the final results of an experiment, as some participants did at Sage Congress, great unknowns loom up. Is normalizing and cleaning data an activity worth praise and recognition? How about combining data sets from many different projects, as a Synapse researcher did for the TCGA? How much credit do you assign researchers at each step of the necessary procedure for a successful experiment?

Let's turn to the case of free software to look at an example of success in open sharing. It's clear that free software has swept the computer world. Most web sites use free software ranging from the server on which they run to the language compilers that deliver their code. Everybody knows that the most popular mobile platform, Android, is based on Linux, although fewer realize that the next most popular mobile platforms, Apple's iPhones and iPads, run on a modified version of the open BSD operating system. We could go on and on citing ways in which free and open source software have changed the field.

The mechanism by which free and open source software staked out its dominance in so many areas has not been authoritatively established, but I think many programmers agree on a few key points:

  • Computer professionals encountered free software early in their careers, particularly as students or tinkerers, and brought their predilection for it into jobs they took at stodgier institutions such as banks and government agencies. Their managers deferred to them on choices for programming tools, and the rest is history.

  • Of course, computer professionals would not have chosen the free tools had they not been fit for the job (and often best for the job). Why is free software so good? Probably because the people creating it have complete jurisdiction over what to produce and how much time to spend producing it, unlike in commercial ventures with requirements established through marketing surveys and deadlines set unreasonably by management.

  • Different pieces of free software are easy to hook up, because one can alter their interfaces as necessary. Free software developers tend to look for other tools and platforms that could work with their own, and provide hooks into them (Apache, free database engines such as MySQL, and other such platforms are often accommodated). Customers of proprietary software, in contrast, experience constant frustration when they try to introduce a new component or change components, because the software vendors are hostile to outside code (except when they are eager to fill a niche left by a competitor with market dominance). Formal standards cannot overcome vendor recalcitrance--a painful truth particularly obvious in health care with quasi-standards such as HL7.

  • Free software scales. Programmers work on it tirelessly until it's as efficient as it needs to be, and when one solution just can't scale any more, programmers can create new components such as Cassandra, CouchDB, or Redis that meet new needs.

Are there lessons we can take from this success story? Biological research doesn't fit the circumstances that made open source software a success. For instance, researchers start out low on the totem pole in very proprietary-minded institutions, and don't get to choose new ways of working. But the cleverer ones are beginning to break out and try more collaboration. Software and Internet connections help.

Researchers tend to choose formats and procedures on an ad hoc, project by project basis. They haven't paid enough attention to making their procedures and data sets work with those produced by other teams. This has got to change, and Sage Bionetworks is working hard on it.

Research is labor-intensive. It needs desperately to scale, as I have pointed out throughout this article, but to do so it needs entire new paradigms for thinking about biological models, workflow, and teamwork. This too is part of Sage Bionetworks' mission.

Certain problems are particularly resistant in research:

  • Conditions that affect small populations have trouble raising funds for research. The Sage Congress initiatives can lower research costs by pooling data from the affected population and helping researchers work more closely with patients.

  • Computation and statistical methods are very difficult fields, and biological research is competing with every other industry for the rare individuals who know these well. All we can do is bolster educational programs for both computer scientists and biologists to get more of these people.

  • There's a long lag time before one knows the effects of treatments. As Heywood's keynote suggested, this is partly solved by collecting longitudinal data on many patients and letting them talk among themselves.

Another process change has revolutionized the computer field: agile programming. That paradigm stresses close collaboration with the end-users whom the software is supposed to benefit, and a willingness to throw out old models and experiment. BRIDGE and other patient initiatives hold out the hope of a similar shift in medical research.

All these things are needed to rescue the study of genetics. It's a lot to do all at once. Progress on some fronts was more apparent than others at this year's Sage Congress. But as more people get drawn in, and sometimes fumbling experiments produce maps for changing direction, we may start to see real outcomes from the efforts in upcoming years.

All articles in this series, and others I've written about Sage Congress, are available through a bundle.

OSCON 2012 — Join the world's open source pioneers, builders, and innovators July 16-20 in Portland, Oregon. Learn about open development, challenge your assumptions, and fire up your brain.

Save 20% on registration with the code RADAR20

May 01 2012

Recombinant Research: Sage Congress plans for patient engagement

Clinical trials are the pathway for approving drug use, but they aren't good enough. That has become clear as a number of drugs (Vioxx being the most famous) have been blessed by the FDA, only to be disqualified after years of widespread use revealed either lack of efficacy or dangerous side effects. And the measures taken by the FDA recently to solve this embarrassing problem continue the heavyweight bureaucratic methods it has always employed: more trials, raising the costs of every drug and slowing down approval. Although I don't agree with the opinion of Avik S. A. Roy (reprinted in Forbes) that Phase III trials tend to be arbitrary, I do believe it is time to look for other ways to test drugs for safety and efficacy.

First article in the series: Recombinant Research: Sage Congress Promotes Data Sharing in Genetics.

But the Vioxx problem is just one instance of the wider malaise afflicting the drug industry. They just aren't producing enough new medications, either to solve pressing public needs or to keep up their own earnings. Vicki Seyfert-Margolis of the FDA built on her noteworthy speech at last year's Sage Congress (reported in one of my articles about the conference) with the statistic that drug companies submitted 20% fewer medications to the FDA between 2001 and 2007. Their blockbuster drugs produce far fewer profits than before as patents expire and fewer new drugs emerge (a predicament called the "patent cliff"). Seyfert-Margolis intimated that this crisis is the cause of layoffs in the industry, although I heard elsewhere that the companies are outsourcing more research, so perhaps the downsizing is just a reallocation of the same money.

Benefits of patient involvement

The field has failed to rise to the challenges posed by new complexity. Speakers at Sage Congress seemed to feel that genetic research has gone off the tracks. As the previous article in this series explained, Sage Bionetworks wants researchers to break the logjam by sharing data and code in GitHub fashion. And surprisingly, pharma is hurting enough to consider going along with an open research system. They're bleeding from a situation where as much as 80% of the effort in each clinical analysis goes to retrieving, formatting, and curating the data. Meanwhile, Kathy Giusti of the Multiple Myeloma Research Foundation says that in their work, open clinical trials are 60% faster.

Attendees at a breakout session I sat in on, including numerous managers from major pharma companies, expressed confidence that they could expand public or "pre-competitive" research in the direction Sage Congress proposed. The sector left to engage is the one that's central to all this work--the public.

If we could collect wide-ranging data from, say, 50,000 individuals (a May 2013 goal cited by John Wilbanks of Sage Bionetworks, a Kauffman Foundation Fellow), we could uncover a lot of trends that clinical trials are too narrow to turn up. Wilbanks ultimately wants millions of such data samples, and another attendee claimed that "technology will be ready by 2020 for a billion people to maintain their own molecular and longitudinal health data." And Jamie Heywood of PatientsLikeMe, in his keynote, claimed to have demonstrated through shared patient notes that some drugs were ineffective long before the FDA or manufacturers made the discoveries. He decried the current system of validating drugs for use and then failing to follow up with more studies, snorting that "Validated means that I have ceased the process of learning."

But patients have good reasons to keep a close hold on their health data, fearing that an insurance company, an identity thief, a drug marketer, or even their own employer will find and misuse it. They already have little enough control over it, because the annoying consent forms we always have shoved in our faces when we come to a clinic give away a lot of rights. Current laws allow all kinds of funny business, as shown in the famous case of the Vermont law against data mining, which gave the Supreme Court a chance to say that marketers can do anything they damn please with your data, under the excuse that it's de-identified.

In a noteworthy poll by Sage Bionetworks, 80% of academics claimed they were comfortable sharing their personal health data with family members, but only 31% of citizen advocates would do so. If that 31% is more representative of patients and the general public, how many would open their data to strangers, even when supposedly de-identified?

The Sage Bionetworks approach to patient consent

It's basic research that loses. So Wilbanks and a team have been working for the past year on a "portable consent" procedure. This is meant to overcome the hurdle by which a patient has to be contacted and give consent anew each time a new researcher wants data related to his or her genetics, conditions, or treatment. The ideal behind portable consent is to treat the entire research community as a trusted user.

The current plan for portable consent provides three tiers:

Tier 1

No restrictions on data, so long as researchers follow the terms of service. Hopefully, millions of people will choose this tier.

Tier 2

A middle ground. Someone with asthma may state that his data can be used only by asthma researchers, for example.

Tier 3

Carefully controlled. Meant for data coming from sensitive populations, along with anything that includes genetic information.

Synapse provides a trusted identification service. If researchers find a person with useful characteristics in the last two tiers, and are not authorized automatically to use that person's data, they can contact Synapse with the random number assigned to the person. Synapse keeps the original email address of the person on file and will contact him or her to request consent.
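The tiered consent model and the pseudonymous contact step could be sketched roughly as follows. To be clear, this is my own illustration under stated assumptions: the class names, tier logic, and `request_consent` helper are all hypothetical, not Sage Bionetworks' actual implementation.

```python
# Hypothetical sketch of the three consent tiers and the pseudonymous
# contact step. All names and logic here are illustrative only.
TIER1, TIER2, TIER3 = 1, 2, 3

class ConsentRecord:
    def __init__(self, pseudonym, tier, allowed_fields=None, email=None):
        self.pseudonym = pseudonym          # random ID that researchers see
        self.tier = tier
        self.allowed_fields = allowed_fields or set()  # e.g. {"asthma"}
        self._email = email                 # held by the registry, never shared

def may_use(record, research_field):
    """Tier 1: open use under the terms of service; Tier 2: restricted
    to the fields the participant declared; Tier 3: never automatic."""
    if record.tier == TIER1:
        return True
    if record.tier == TIER2:
        return research_field in record.allowed_fields
    return False  # Tier 3: fall through to an explicit consent request

def request_consent(record):
    # The registry, not the researcher, contacts the participant by email.
    return f"consent request sent for {record.pseudonym}"

alice = ConsentRecord("a91f", TIER2, {"asthma"}, "alice@example.org")
print(may_use(alice, "asthma"))    # True
print(may_use(alice, "oncology"))  # False
```

The key design point is that the researcher only ever sees the random pseudonym; the mapping to a real email address stays inside the trusted identification service.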

Portable consent also involves a lot of patient education. People will sign up through a software wizard that explains the risks. After choosing portable consent, the person decides how much to put in: 23andMe data, prescriptions, or whatever they choose to release.

Sharon Terry of the Genetic Alliance said that patient advocates currently try to control patient data in order to force researchers to share the work they base on that data. Portable consent loosens this control, but the field may be ready for its more flexible conditions for sharing.

Pharma companies and genetics researchers have lots to gain from access to enormous repositories of patient data. But what do the patients get from it? Leaders in health care already recognize that patients are more than experimental subjects and passive recipients of treatment. The recent ONC proposal for Stage 2 of Meaningful Use includes several requirements to share treatment data with the people being treated (which seems kind of a no-brainer when stated this baldly) and the ONC has a Consumer/Patient Engagement Power Team.

Sage Congress is fully engaged in the patient engagement movement too. One result is the BRIDGE initiative, a joint project of Sage Bionetworks and Ashoka with funding from the Robert Wood Johnson Foundation, to solicit questions and suggestions for research from patients. Researchers can go for years researching a condition without even touching on some symptom that patients care about. Listening to patients in the long run produces more cooperation and more funding.

Portable consent requires a leap of faith because, as Wilbanks admits, releasing aggregates of patient data means that, over time, a patient is almost certain to be re-identified. Statistical techniques are getting too sophisticated, and computing power is growing too fast, for anyone to hide behind current tricks such as using only the first three digits of a five-digit postal code. Portable consent requires the data repository to grant access only to bona fide researchers and to set terms of use, including a ban on re-identifying patients. Still, researchers will have rights to do research, redistribute data, and derive products from it. Audits will be built in.
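The postal-code trick, and why it no longer suffices, can be seen in a tiny k-anonymity-style check: generalize the quasi-identifiers, then count how many records share each generalized profile. The records below are invented for illustration.

```python
# Sketch of "first three digits of the ZIP code" generalization with a
# crude k-anonymity check. The data is made up; a group size of 1 means
# that record is unique even after generalization -- i.e., re-identifiable.
from collections import Counter

records = [
    {"zip": "02139", "birth_year": 1970},
    {"zip": "02142", "birth_year": 1970},
    {"zip": "02139", "birth_year": 1985},
]

def generalize(r):
    # Keep only coarse quasi-identifiers: ZIP3 and decade of birth.
    return (r["zip"][:3], r["birth_year"] // 10 * 10)

groups = Counter(generalize(r) for r in records)
k = min(groups.values())  # how well the worst-off record hides
print(k)  # 1
```

Even this coarse generalization leaves one record unique; add a few more attributes (diagnosis, prescription dates, genetic markers) and nearly every record becomes unique, which is Wilbanks' point.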

But as Kelly Edwards of the University of Washington noted, tools and legal contracts can contribute to trust, yet trust is ultimately based on shared values. Portable consent, properly done, engages with frameworks like Synapse to create a culture of respect for data.

In fact, I think the combination of the contractual framework in portable consent and a platform like Synapse, with its terms of use, might make a big difference in protecting patient privacy. Seyfert-Margolis cited predictions that 500 million smartphone users will be using medical apps by 2015. But mobile apps are notoriously greedy for personal data and cavalier toward user rights. Suppose all those smartphone users stored their data in a repository with clear terms of use and employed portable consent to grant access to the apps? We might all be safer.

The final article in this series will evaluate the prospects for open research in genetics, with a look at the grip of journal publishing on the field, and some comparisons to the success of free and open source software.

Next: Breaking Open Rewards and Incentives. All articles in this series, and others I've written about Sage Congress, are available through a bundle.


April 30 2012

Recombinant Research: Sage Congress promotes data sharing in genetics

Given the exponential drop in the cost of personal genome sequencing (you can get a basic DNA test from 23andMe for a couple hundred dollars, and a full sequence will probably soon come down to one thousand dollars in cost), a new dawn seems to be breaking forth for biological research. Yet the assessment of genetics research at the recent Sage Congress was highly cautionary. Various speakers chided their own field for tilling the same ground over and over, ignoring the urgent needs of patients, and just plain researching the wrong things.

Sage Congress also has some plans to fix all that. These projects include tools for sharing data and storing it in cloud facilities, running challenges, injecting new life into collaboration projects, and ways to gather more patient data and bring patients into the planning process. Through two days of demos, keynotes, panels, and breakout sessions, Sage Congress brought its vision to a high-level cohort of 230 attendees from universities, pharmaceutical companies, government health agencies, and others who can make change in the field.

In the course of this series of articles, I'll pinpoint some of the pain points that can force researchers, pharmaceutical companies, doctors, and patients to work together better. I'll offer a look at the importance of public input, legal frameworks for cooperation, the role of standards, and a number of other topics. But we'll start by seeing what Sage Bionetworks and its pals have done over the past year.

Synapse: providing the tools for genetics collaboration

Everybody understands that change is driven by people and the culture they form around them, not by tools, but good tools can make it a heck of a lot easier to drive change. To give genetics researchers the best environment available to share their work, Sage Bionetworks created the Synapse platform.

Synapse recognizes that data sets in biological research are getting too large to share through simple data transfers. For instance, in his keynote about cancer research (where he kindly treated us to pictures of cancer victims during lunch), UC Santa Cruz professor David Haussler announced plans to store 25,000 cases at 200 gigabytes per case in the Cancer Genome Atlas, also known as TCGA in what seems to be a clever pun on the four nucleotides in DNA. Storage requirements thus work out to 5 petabytes, which Haussler wants to be expandable to 20 petabytes. In the face of big data like this, the job becomes moving the code to the data, not moving the data to the code.
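The arithmetic behind those figures is easy to verify:

```python
# Back-of-envelope check of the TCGA storage figures quoted above,
# using decimal units (1 PB = 1,000,000 GB).
cases = 25_000
gb_per_case = 200
total_gb = cases * gb_per_case
total_pb = total_gb / 1_000_000
print(total_pb)  # 5.0
```

At that scale, shipping a 200 GB case file to every interested lab is hopeless, which is why the code has to travel to the data instead.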

Synapse points to data sets contributed by cooperating researchers, but also lets you pull up a console in a web browser to run R or Python code on the data. Some effort goes into tagging each data set with associated metadata: tissue type, species tested, last update, number of samples, etc. Thus, you can search across Synapse to find data sets that are pertinent to your research.
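That metadata-driven search can be sketched with a toy in-memory index. Everything here--the field names, the `syn...` identifiers, the records--is hypothetical, standing in for what the real Synapse service exposes through its web and programmatic interfaces.

```python
# Toy sketch of metadata-driven data set search, in the spirit of
# Synapse's tagged repositories. Records and fields are invented.
datasets = [
    {"id": "syn001", "tissue": "breast", "species": "human", "samples": 2000},
    {"id": "syn002", "tissue": "lung",   "species": "mouse", "samples": 150},
    {"id": "syn003", "tissue": "breast", "species": "human", "samples": 480},
]

def search(**criteria):
    """Return data sets whose metadata match every given key/value pair."""
    return [d for d in datasets
            if all(d.get(k) == v for k, v in criteria.items())]

hits = search(tissue="breast", species="human")
print([d["id"] for d in hits])  # ['syn001', 'syn003']
```

The payoff of consistent tagging is exactly this: a researcher can find every pertinent data set with one query instead of emailing a dozen labs.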

One group working with Synapse has already harmonized and normalized the data sets in TCGA so that a researcher can quickly mix and run stats on them to extract emerging patterns. The effort took about one and a half full-time employees for six months, but the project leader is confident that with the system in place, "we can activate a similar size repository in hours."

This contribution highlights an important principle behind Synapse (appropriately called "viral" by some people in the open source movement): when you have manipulated and improved upon the data you find through Synapse, you should put your work back into Synapse. This work could include cleaning up outlier data, adding metadata, and so on. To make work sharing even easier, Synapse has plans to incorporate the Amazon Simple Workflow Service (SWF). It also hopes to add web interfaces to allow non-programmers to do useful work with data.

The Synapse development effort was an impressive one, coming up with a feature-rich Beta version in a year with just four coders. And Synapse code is entirely open source. So not only is the data distributed, but the creators will be happy for research institutions to set up their own Synapse sites. This may make Synapse more appealing to geneticists who are prevented by inertia from visiting the original Synapse.

Mike Kellen, introducing Synapse, compared its potential impact to that of moving research from a world of journals to a world like GitHub, where people record and share every detail of their work and plans. Along these lines, Synapse records who has used a data set. This has many benefits:

  • Researchers can meet up with others doing related work.

  • It gives public interest advocates a hook with which to call on those who benefit commercially from Synapse--as we hope the pharmaceutical companies will--to contribute money or other resources.

  • Members of the public can monitor accesses for suspicious uses that may be unethical.

There's plenty more work to be done to get data in good shape for sharing. Researchers must agree on some kind of metadata--the dreaded notion of ontologies came up several times--and clean up their data. They must learn about data provenance and versioning.

But sharing is critical for such basics of science as reproducing results. One source estimates that 75% of published results in genetics can't be replicated. A later article in this series will examine a new model in which enough metainformation is shared about a study for it to be reproduced, and even more important to be a foundation for further research.

With this Beta release of Synapse, Sage Bionetworks feels it is ready for a new initiative to promote collaboration in biological research. But how do you get biologists around the world to start using Synapse? For one, try an activity that's gotten popular nowadays: a research challenge.

The Sage DREAM challenge

Sage Bionetworks' DREAM challenge asks genetics researchers to find predictors of the progression of breast cancer. The challenge uses data from 2,000 women diagnosed with breast cancer, combining information on DNA alterations affecting how their genes were expressed in the tumors, clinical information about their tumor status, and their outcomes over ten years. The challenge is to build models integrating the alterations with molecular markers and clinical features to predict which women will have the most aggressive disease over a ten-year period.

Several hidden aspects of the challenge make it a clever vehicle for Sage Bionetworks' values and goals. First, breast cancer is a scourge whose urgency is matched by its stubborn resistance to diagnosis. The famous 2009 recommendations of the U.S. Preventive Services Task Force, after all the controversy was aired, left us with the dismal truth that we don't know a good way to predict breast cancer. Some women get mastectomies in the total absence of symptoms based just on frightening family histories. In short, breast cancer puts the research and health care communities in a quandary.

We need finer-grained predictors to say who is likely to get breast cancer, and standard research efforts up to now have fallen short. The Sage proposal is to marshal experts in a new way that combines their strengths, asking them to publish models that show the complex interactions between gene targets and influences from the environment. Sage Bionetworks will publish data sets at regular intervals that it uses to measure the predictive ability of each model. A totally fresh data set will be used at the end to choose the winning model.
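The evaluation process described above--interim scoring on published data sets, with a fresh hold-out set deciding the winner--can be sketched in a few lines. The toy "models" and scores below are my own illustration, not the challenge's actual scoring code or metric.

```python
# Minimal sketch of the challenge's scoring loop: models are ranked on
# interim data releases, and a never-before-seen final set picks the
# winner. Models here are toy predictors on (risk_score, outcome) pairs.
def score(model, data):
    """Fraction of cases the model predicts correctly."""
    return sum(model(x) == y for x, y in data) / len(data)

models = {
    "always_aggressive": lambda x: True,
    "threshold":         lambda x: x > 0.5,
}

interim = [(0.9, True), (0.2, False), (0.7, True), (0.4, False)]
final_holdout = [(0.8, True), (0.3, False), (0.6, True)]

leaderboard = {name: score(m, interim) for name, m in models.items()}
winner = max(models, key=lambda name: score(models[name], final_holdout))
print(winner)  # threshold
```

Holding the final data set back until the end is the crucial design choice: it keeps entrants from overfitting their models to the leaderboard data.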

The process behind the challenge--particularly the need to upload code in order to run it on the Synapse site--automatically forces model builders to publish all their code. According to Stephen Friend, founder of Sage Bionetworks, "this brings a level of accountability, transparency, and reproducibility not previously achieved in clinical data model challenges."

Finally, the process has two more effects: it shows off the huge amount of genetic data that can be accessed through Synapse, and it encourages researchers to look at each other's models in order to boost their own efforts. In less than a month, the challenge has already received more than 100 models from 10 sources.

The reward for winning the challenge is publication in a respected journal, the gold medal still sought by academic researchers. (More on shattering this obelisk later in the series.) Science Translational Medicine will accept results of the evaluation as a stand-in for peer review, a real breakthrough for Sage Bionetworks because it validates their software-based, evidence-driven process.

Finally, the DREAM challenge promotes use of the Synapse infrastructure, and in particular the method of bringing the code to the data. Google is donating server space for the challenge, which levels the playing field for researchers, freeing them from paying for their own computing.

A single challenge doesn't solve all the problems of incentives, of course. We still need to persuade researchers to put up their code and data on a kind of genetic GitHub, persuade pharmaceutical companies to support open research, and persuade the general public to share data about their phenomes (life data) and genes--all topics for upcoming articles in the series.

Next: Sage Congress Plans for Patient Engagement. All articles in this series, and others I've written about Sage Congress, are available through a bundle.

