
May 21 2012

Health Information Technology: putting the patient back into health care

(Background: Most government advisory committees are stocked with representatives of corporations and special interest groups who distort government policies, sometimes unconsciously and with good intentions, to fertilize their own turfs. In health information technology, we have a rare chance to ensure that the most affected members of the public actually have their own direct representative. The GAO is directed by law to propose members for a Health Information Technology Policy Committee, and there is an opening for someone who "advocates for patients or consumers." A movement is building in support of Regina Holliday, nationally famous for her work on opening patient data, comments on Meaningful Use, and her images in her Walking Gallery. My letter follows. Letters to the GAO, HITCommittee@gao.gov, are due May 25.)

Government Accountability Office

441 G Street NW.

Washington, DC 20548

Dear Sirs and Madams:

I am writing in support of appointing Regina Holliday as a patient and consumer advocate on the Health Information Technology Policy Committee. I suggest this on two grounds: that she would be an excellent contributor to the committee, and that it is critical for the committee to hear directly from patients rather than from the proxies who usually insert themselves in the patients' place.

Ms. Holliday is nationally recognized among patient advocates as a leading expert on the patient experience and on the information technology required to improve health care, particularly the tools that will enable patient engagement, the Holy Grail of health care reform. Ms. Holliday is an expert on the Meaningful Use requirements that embody the health provisions of the American Recovery and Reinvestment Act (having submitted substantial comments on Stage 1 of Meaningful Use) and has advocated over many years for both technologies and policies that can improve delivery of health care and health information to patients.

Furthermore, Ms. Holliday is in an excellent position to bring public opinion to bear on the HIT Policy Committee's work. She is a tireless researcher and advocate in the area of patient engagement, mastering both traditional channels such as lectures and modern Web-based media. In her Walking Gallery she collects stories from other people who have engaged intensively with the health care system and reflects those widely shared experiences in her advocacy work. She is articulate and clear about the demands made by the public.

Finally, I would like to stress the importance of appointing an independent expert voice such as Ms. Holliday to the HIT Policy Committee. Organizations claiming to represent patients have institutional agendas that always take first priority in their advocacy work. Members of the HIT Policy Committee who are paid representatives of established organizations are constantly tempted to bend policies to favor those established institutions, and the actual needs of the patient are never paramount. The thrust of the patient advocacy movement is to elevate the health of the patient above the continuity or profit of the institutions, which is why the voice of someone like Ms. Holliday is crucial.

Andrew Oram

Editor, O'Reilly Media

(This letter represents my personal view only)

What do mHealth, eHealth and behavioral science mean for the future of healthcare?

We're living through one of the most dynamic periods in healthcare in our collective history. Earlier this year, Dr. Farzad Mostashari, the national coordinator of health IT, highlighted how the web, data and epatients are poised to revolutionize healthcare. The Internet is shaping healthcare in many ways, from the quantified self movement to participatory medicine, and even through the threat of a new "data divide" driven by unequal access to information, algorithmic and processing power.

Into this heady mix, add the mobile computing revolution, where smart devices rest in the pockets of hundreds of millions of citizens, collecting data and providing access to medical information. There's also the rapidly expanding universe of healthcare apps that promise to revolutionize how many services are created, distributed and delivered.

This month, I had the opportunity to discuss some of these trends with Dr. Audie Atienza (@AudieAtienza), a researcher who focuses on behavioral science and healthcare. Our interview, lightly edited for content and clarity, follows.

We first met when you were a senior health technology adviser at the U.S. Department of Health and Human Services (HHS). What do you do now?

Audie Atienza: Working with Todd Park at the Department of Health and Human Services (HHS) was a distinct privilege and an honor. I learned a great deal working at HHS with Todd. I am now at the new Science of Research and Technology Branch of the National Cancer Institute, National Institutes of Health.  My title is Behavioral Scientist and Health Scientist Administrator. In a typical week, I attend health-technology-related conferences and meetings, work with colleagues across HHS and the federal government on health-technology-related initiatives, discuss funding opportunities with extramural researchers, and engage in scientific research related to health technology and/or cancer control.

How well did your education prepare you for your work?

Audie Atienza: My undergraduate, graduate and post-doctoral education has provided me with the critical thinking skills and knowledge that are required of a health researcher. My interest in health technology actually started when I was a Fellow at Stanford University, where I was gathering data on cardiovascular disease risk factors using paper and pencil diaries. Using paper and pencil measures seemed so inefficient. Study participants sometimes forgot to complete the diaries or had incomplete entries — and sometimes the handwriting was difficult to decipher. So, my mentor, Dr. Abby King, and I collaborated with Dr. BJ Fogg (also at Stanford) and we "went digital" with the cardiovascular disease risk factor assessments. (We used "state of the art" PDAs at the time.) This fortuitous collaboration and the "there has to be a better way to do this" idea launched me into the field of electronic and mobile health.

What does "eHealth" mean now?

Audie Atienza: After my postdoctoral fellowship at Stanford, I accepted a position at the National Cancer Institute (NCI), Health Promotion Research Branch.  The NCI offered me the opportunity to further explore the field of electronic health (or eHealth) on a national (U.S.) and international scale.  The term "eHealth" generally represents the use of electronic or digital information technology to assess and/or modify health behaviors, states and outcomes.

When I arrived at NCI, I was asked to bring the best and brightest behavioral researchers together to discuss how to assess health in "real-time." A book was published based on this meeting: "The Science of Real-Time Data Capture: Self-Reports in Health Research." Other national and international conferences followed, including the 2010 mHealth Summit, in which I was intimately involved.

How does behavioral science affect our capacity to understand the causes of cancer?

Audie Atienza: It is clear that behavioral factors contribute to cancer and many other diseases, like diabetes and heart disease. For example, the link between smoking and cancer is well established. There is also a solid body of research that has linked obesity, physical inactivity, and poor diet to various cancers. The Centers for Disease Control (CDC) reports that 69% of U.S. adults are currently overweight or obese. [Data on adults: PDF; data on children: PDF]

Accurately measuring and changing these everyday health behaviors — including smoking, physical activity, what people eat — is not easy. This is where technology can be of great assistance. Through sensors, cell phones, GPS systems, social networking technology, and web-based technology, we may be able to better assess and hopefully improve these key health behaviors that contribute to cancer and other diseases.

We are, however, just at the beginning of discovering how to best develop and utilize technology to improve the health of individuals and the public.  There is much work to be done to determine what is effective and what isn't.

How do mobile devices figure into that work?

Audie Atienza: Mobile technology is everywhere. We are seeing more integrated devices, like smartphones with cameras, accelerometers, GPS, and all types of apps.  But it isn't about the technology — a phrase I have borrowed from Todd Park. It's really about addressing health issues and improving the health of individuals and the public.  If technology can facilitate this, then great. But using technology may not always be the best way to improve health and well-being.  This is a critical research question.

How is mobile technology being applied to specific health issues?

Audie Atienza: Mobile technology can be (and is being) applied to address many different health and disease issues: infectious disease (AIDS/HIV, tuberculosis, influenza), chronic disease (heart disease, cancer, diabetes, arthritis, asthma), mental health (depression, stress, anxiety), child and maternal health (pregnancy, infant care, childhood obesity), gerontology (healthy living in place, falls prevention, caregiving), health promotion (e.g., exercise, diet, smoking cessation, cancer screening, sun safety), and health-provider-related issues (medication adherence, patient-provider communication, point-of-care diagnostics, vital signs monitoring).

Mobile technology cuts across the disease and health spectrum with great potential to address problems that have been previously difficult to solve. It is difficult to say which mobile health technology is most important because they are all addressing distinct and critical issues. Some would point to heart disease and cancer, the leading causes of death in the United States. Others may argue that infectious diseases and maternal/child health are the most critical issues to address globally. Still others may argue for tobacco control and reducing obesity (increasing physical activity and improving nutrition). The National Institutes of Health (NIH) has 27 institutes and centers (ICs), each with a particular mission. More than 20 of the 27 ICs are currently funding mobile technology-related research.

What do we need next in mHealth?

Audie Atienza: More research. We need to better understand what works and what does not. Researchers who have systematically reviewed smartphone health apps (e.g., smoking cessation, diabetes) have found that most are not based on established public health or clinical guidelines. Very few have actually assessed whether the apps are effective in changing health outcomes. With thousands of apps, sensors, and other mobile health tools currently available, it can be difficult for the user to know what is effective, useful, and (most importantly) safe.

How close are we to a real tricorder? (There's now an X Prize for that.)

Audie Atienza: I love science-fiction and "Star Trek"!  Certainly, mobile sensors and monitors currently exist that can accurately monitor physiological states and vital signs. And the technology is becoming increasingly integrated and more powerful.  But, to have an all-in-one mobile device that can assess and diagnose health and diseases as well as, if not better than, a clinical provider is a very tall order. If such a tool or prototype is developed, it will be science and research that will determine if the "tricorder" is effective or not.  Time will tell whether such a tool can be developed.  While I am all for reducing diagnostic errors, I personally would be hesitant to accept a diagnosis from only a mobile device without the clinical judgment of a medical or health professional.

OSCON 2012 Healthcare Track — The conjunction of open source and open data with health technology promises to improve creaking infrastructure and give greater control and engagement to patients. Learn more at OSCON 2012, being held July 16-20 in Portland, Oregon.

Save 20% on registration with the code RADAR20


May 16 2012

How to start a successful business in health care at Health 2.0 conference

Great piles of cash are descending on entrepreneurs who develop health care apps, but that doesn't make it any easier to create a useful one that your audience will adopt. Furthermore, lowered costs and streamlined application development techniques let you fashion a working prototype faster than ever, but that also reduces the time you can fumble around looking for a business model. These were some of the insights I got at Spring Fling 2012: Matchpoint Boston, put on by Health 2.0 this week.

This conference was a bit of a grab-bag, including one-on-one meetings between entrepreneurs and their potential funders and customers, keynotes and panels by health care experts, round-table discussions among peers, and lightning-talk demos. I think the hallway track was the most potent part of this conference, and it was probably planned that way. The variety at the conference mirrors the work of Health 2.0 itself, which includes local chapters, challenges, an influential blog, and partnerships with a range of organizations. Overall, I appreciated the chance to get a snapshot of a critical industry searching for ways to make a positive difference in the world while capitalizing on ways to cut down on the blatant waste and mismanagement that bedevil the multi-trillion-dollar health care field.

Let's look, for instance, at the benefits of faster development time. Health IT companies go through fairly standard early stages (idea, prototype, incubator, venture capital funding) but cochairs Indu Subaiya and Matthew Holt showed slides demonstrating that modern techniques can leave companies in the red for less time and accelerate earnings. On the other hand, Jonathan Bush of athenahealth gave a keynote listing bits of advice for company founders and admitting that his own company had made significant errors that required time to recover from. Does the fast pace of modern development leave less room for company heads to make the inevitable mistakes?

I also heard Margaret Laws, director of the California HealthCare Foundation's Innovations Fund, warn that most of the current applications being developed for health care aim to salve common concerns among doctors or patients but don't address what she calls the "crisis points" in health care. Brad Fluegel of Health Evolution Partners observed that, with the flood of new entrepreneurs in health IT, a lot of old ideas are being recycled without adequate attention to why they failed before.

I'm afraid this blog is coming out too negative, focusing on the dour and the dire, but I do believe that health IT needs to acknowledge its risks in order to avoid squandering the money and attention it's getting, and on the positive side to reap the benefits of this incredibly fertile moment of possibilities in health care. Truly, there's a lot to celebrate in health IT as well. Here are some of the fascinating start-ups I saw at the show:

  • hellohealth aims at that vast area of health care planning and administration that cries out for efficiency improvements--the area where we could do the most good by cutting costs without cutting back on effective patient care. Presenter Shahid Shah described the company as the intersection of patient management with revenue cycle management. They plan to help physicians manage appointments and follow-ups better, and rationalize the whole patient experience.

  • hellohealth will offer portals for patients as well. They're unique, so far as I know, in charging patients for certain features.

  • Corey Booker demo'd onPulse, which aims to bring together doctors with groups of patients, and patients with groups of the doctors treating them. For instance, when a doctor finds an online article of interest to diabetics, she can share it with all the patients in her practice suffering from diabetes. onPulse also makes it easier for a doctor to draw in others who are treating the same patient. The information built up about their interactions can be preserved for billing.

    onPulse overlaps in several respects with HealthTap, a doctor-patient site that I've covered several times and for which an onPulse staffer expressed admiration. But HealthTap leaves discussions out in the open, whereas onPulse connects doctors and patients in private.

  • HealthPasskey.com is another one of these patient/doctor services with a patient portal. It allows doctors to upload continuity of care documents in the standard CCD format to the patient's site, and supports various services such as making appointments.

    A couple weeks ago I reported a controversy over hospitals' claims that they couldn't share patient records with the patients. Check out the innovative services I've just highlighted here as a context for judging whether the technical and legal challenges for hospitals are really too daunting. I recognize that each of the sites I've described picks off a particular piece of the EHR problem and that opening up the whole kit and caboodle is a larger task, but these sites still prove that all the capabilities are in place for institutions willing to exploit them.

  • GlobalMed has recently released a suitcase-sized box that contains all the tools required to do a standard medical exam. This allows traveling nurse practitioners or other licensed personnel to do a quick check-up at a patient's location without requiring a doctor or a trip to the clinic. Images can also be taken. Everything gets uploaded to a site where a doctor can do an assessment and mark up records later. The suitcase weighs about 30 pounds, rolls on wheels, and costs about $30,000 (price to come down if they start manufacturing in high quantities).

  • SwipeSense won Health 2.0's 100 Day Innovation Challenge. They make a simple device that hospital staff can wear on their belts and wipe their hands on. This may not be as good as washing your hands, but takes advantage of people's natural behavior and reduces the chance of infections. It also picks up when someone is using the device and creates reports about compliance. SwipeSense is being tested at the Rush University Medical Center.

  • Thryve, one of several apps that help you track your food intake and make better choices, won the highest audience approval at Thursday's Launch! demos.

  • Winner of last weekend's developer challenge was No Sleep Kills, an app that aims to reduce accidents related to sleep deprivation (I need a corresponding app to guard against errors from sleep-deprived blogging). You can enter information on your recent sleep patterns and get back a warning not to drive.
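    Out of curiosity about what such an app computes, here is a toy sketch in Python; the eight-hour guideline and the warning thresholds are placeholders of mine, not No Sleep Kills' actual logic.

        # Hypothetical thresholds, not the app's real rules.
        RECOMMENDED_HOURS = 8.0

        def drive_warning(recent_sleep_hours):
            """Warn if recent sleep suggests the user shouldn't drive."""
            debt = sum(RECOMMENDED_HOURS - h for h in recent_sleep_hours)
            if debt > 4.0 or recent_sleep_hours[-1] < 5.0:
                return "You appear significantly sleep-deprived: please do not drive."
            return "No acute warning, but keep logging your sleep."

        print(drive_warning([6.5, 6.0, 4.5]))   # warns: large cumulative debt plus a short last night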

It's worth noting that the last item in that list, No Sleep Kills, draws information from Health and Human Services' Healthy People site. This raises the final issue I want to bring up in regard to the Spring Fling. Sophisticated developers know their work depends heavily on data about public health and on groups of patients. HHS has actually just released another major trove of public health statistics. Our collective knowledge of who needs help, what works, and who best delivers the care would be immensely enhanced if doctors and institutions who currently guard their data would be willing to open it up in aggregate, non-identifiable form. I recently promoted this ideal in coverage of Sage Congress.

In the entirely laudable drive to monetize improvements in health care, I would like the health IT field to choose solutions that open up data rather than keep it proprietary. One of the biggest problems with health care, in this age of big data and incredibly sophisticated statistical tools, is our tragedy of the anti-commons where each institution seeks to gain competitive advantage through hoarding its data. They don't necessarily use their own data in socially beneficial ways, either (they're more interested in ratcheting up opportunities for marketing expensive care). We need collective sources of data in order to make the most of innovation.

OSCON 2012 Healthcare Track — The conjunction of open source and open data with health technology promises to improve creaking infrastructure and give greater control and engagement to patients. Learn more at OSCON 2012, being held July 16-20 in Portland, Oregon.

Save 20% on registration with the code RADAR20

May 06 2012

The state of health IT according to the American Hospital Association

Last week, the American Hospital Association released a major document. Framed as comments on a major federal initiative, the proposed Stage 2 Meaningful Use criteria by the Centers for Medicare & Medicaid Services (CMS), the letter also conveys a rather sorrowful message about the state of health IT in the United States. One request--to put brakes on the requirement for hospitals to let patients see their own information electronically--has received particularly strong coverage and vigorous responses from e-Patient Dave deBronkart, Regina Holliday, Dr. Adrian Gropper, Fred Trotter, the Center for Democracy and Technology, and others.

I think the AHA has overreached in its bid to slow down patient access to data, which I'll examine later in this article. But to me, the most poignant aspect of the AHA letter is its careful accumulation of data to show the huge gap between what health care calls for and what hospitals, vendors, standards bodies, and even the government are capable of providing.

Two AHA staff were generous enough to talk to me on very short notice and offer some clarifications that I'll include with the article.

A survey of the U.S. health care system

According to the AHA (translated into my own rather harsh words), the state of health IT in American hospitals is as follows:

  • Few hospitals and doctors can fulfill basic requirements of health care quality and cost control. For instance, 62% could not record basic patient health indicators such as weight and blood pressure (page 51 of their report) in electronic health records (EHRs).

  • Many EHR vendors can't support the meaningful use criteria in real-life settings, even when their systems were officially certified to do so. I'll cite some statements from the AHA report later in the article. Meaningful use is a big package of reforms, of course, promulgated over just a few years, but it's also difficult because vendors and hospitals had long been heading in the opposite direction: toward closed, limited functionality.

  • Doctors still record huge globs of patient data in unstructured text format, where they are unavailable for quality reporting, tracking clinical effectiveness, etc. Data is often unstructured because humans are complex and their symptoms don't fit into easy categories. Yet doctors have learned to make diagnoses for purposes of payment and other requirements; we need to learn what other forms of information are worth formalizing for the sake of better public health.

  • Quality reporting is a mess. The measures currently being reported are unreliable, and standards have not been put in place to allow valid comparisons of measures from different hospitals.

  • Government hasn't stepped up to the plate to perform its role in supporting electronic reporting. For instance, the Centers for Medicare & Medicaid Services (CMS) wants the hospitals to report lots of quality measures, but its own electronic reporting system is still in the testing stages, so hospitals must enter data through a cumbersome and error-prone manual "attestation." States aren't ready to accept electronic submissions either. The Direct project is moving along, but its contribution to health data exchange is still very new.

There's no easy place to assign blame for a system that is killing hundreds of thousands of people a year while sticking the US public with rising costs. The AHA letter repeatedly assures us that the association approves of the meaningful use objectives, but says that implementing them in a foreseeable time frame is unfeasible. "We can envision a time when all automated quality reporting will occur effortlessly in a reliable and valid fashion. However, we are not there yet." (pp. 42-43)

So the AHA's petition to CMS can be summarized overall as, "Slow everything down, but keep the payments coming."

AHA staff referred to the extensively researched article, A Progress Report On Electronic Health Records In U.S. Hospitals. It corroborates observations that adoption of EHRs has vastly increased between 2010 and 2011. However, the capabilities of the EHRs and hospitals using them have not kept up with meaningful use requirements, particularly among small rural hospitals with few opportunities to hire sophisticated computer technicians, etc. Some small hospitals have trouble even getting an EHR vendor to talk to them.

Why all this matters

Before looking at some details, let me lay out some of the reasons that meaningful use criteria are so important to patients and the general public:

  • After treatment, data must be transferred quickly to patients and the next organizations treating them (such as rehab centers and visiting nurses) so that the patients receive proper care.

  • Quality measures are critical so that hospitals can be exposed to sunshine, the best disinfectant, and be shamed into lowering costs and reducing errors.

  • Data must be collected by public agencies so that data crunchers can find improvements in outreach and treatment. Hospitals love to keep their data private, but that gives them relatively tiny samples on which to base decisions, and they often lack the skills to analyze the data.

No one can predict what will break logjams and propel health care forward, but patient engagement seems crucial because most health care problems in developed countries involve lifestyle issues such as smoking and body weight. And to provide the kind of instant, pervasive patient engagement that can produce change, we need electronic records that are open to innovative apps, that can accept data from the patient-centered medical home, and that link together all care-givers.

The state of electronic health records

The EHR industry does not come out well in the AHA list of woes. The letter cites "unworkable, but certified, vendor products" (p. 3) and says, "Current experience is marked by limited vendor and workforce capacity." (p. 7) The latter complaint points to one of the big hurdles facing health care reform: we don't have enough staff who understand computer systems and who can adapt their behavior to use them effectively.

Functionality falls far short of real hospital needs:

...one hospital system spent more than $1 million on a quality reporting tool from its vendor that was, for the most part, an unwieldy data entry screen. Even medication orders placed using CPOE [computerized physician order entry] needed to be manually re-entered for the CQM [clinical quality measure] calculation. Even then, the data were not reliable, despite seven months of working with the vendor to attempt to get it right. Thus, after tremendous investment of financial and human resources, the data are not useful. (p. 45)

The AHA claims that vendors were lax in testing their systems, and that the government abetted the omission: "the proposals within the certification regulation require vendors to incorporate all of the data elements needed to calculate only one CQM. There is no proposal to require that certified EHRs be capable of generating all of the relevant CQMs proposed/finalized by CMS." (p. 41) With perhaps a subtle sarcasm, the AHA proposes, "CMS should not require providers to report more e-measures than vendors are required to generate." (p. 36)

Vendors kind of take it on the chin for fundamental failures in electronic capabilities. "AHA survey data indicate that only 10 percent of hospitals had a patient portal of any kind in Fall 2011. Our members report that none had anywhere near the functionality required by this objective. In canvassing vendors, they report no technology companies can currently support this volume of data or the listed functions." (p. 26)

We can add an observation from the College of Healthcare Information Management Executives (CHIME):

...in Stage 1, some vendors were able to dictate which clinical quality measures providers chose to report--not based on the priorities of the provider, but based on the capabilities of the system. Subsequently, market forces corrected this and vendors have gone on to develop more capabilities. But this anecdote provides an important lesson when segmenting certification criteria--indeed for most technologies in general--flexibility for users necessitates consistent and robust standards for developers. In short, the 2014 Edition must require more of the vendor community if providers are to have space to pursue meaningful use of Meaningful Use. (p. 2)

Better standards--which take time to develop--could improve the situation, which is why the Office of the National Coordinator (ONC) has set up a Health IT Standards Committee. For instance, the AHA says, "we have discovered that vendors needed to program many decisions into EHRs that were not included in the e-specifications. Not only has this resulted in rampant inconsistencies between different vendors, it produced inconsistent measure results when the e-measures are compared to their counterparts in the Inpatient Quality Reporting (IQR) Program." (p. 35)

The AHA goes so far as to say, "The market cannot sustain this level of chaos." (p. 7) They conclude that the government is pushing too hard. One of their claims, though, comes across as eccentric: "Providers and vendors agree that the meaningful use program has stifled innovation in the development of new uses of EHRs." (p. 9)

To me, all the evidence points in the opposite direction. The vendors were happy for decades to push systems that performed minimal record-keeping and modest support such as formularies at huge costs, and the hospitals that adopted EHRs failed to ask for more. It wasn't a case of market failure because, as I have pointed out (and others have too), health care is not a market. But nothing would have changed had not the government stepped in.

Patient empowerment

Now for the point that has received the most press, the AHA's request to weaken the rules giving patients access to their data. Once again, the AHA claims to favor patient access--and actually, they have helped hospitals over the years to give patients summaries of care, mostly on paper--but they are passing on evidence accumulated from their members that the systems to support electronic distribution will not be in place for some time. I won't repeat all the criticisms from the experts mentioned at the beginning of this article, but will offer some perspective on patient engagement.

Let's start with the AHA's request to let each hospital choose the format for patient data (pp. 25-26). So long as hospitals can do that, we will be left with formats that are not interoperable. Many hospitals will choose formats that are human-readable but not machine-readable, so that correlations and useful data cannot be extracted programmatically. Perhaps the technology lags in this area--but if the records are not in structured format already, hospitals themselves lose critical opportunities to check for errors, mine data for trends, and perform other useful tasks with their records.
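To see why machine readability matters, here is a minimal sketch in Python contrasting the two kinds of format; both records are invented for illustration and don't reflect any particular EHR's layout.

    import json
    import re

    # An invented structured (machine-readable) record...
    structured = json.dumps({"vitals": {"systolic_bp": 142, "diastolic_bp": 91, "units": "mmHg"}})
    # ...and the same fact buried in an invented free-text note.
    narrative = "Pt seen today, BP one-forty-two over ninety-one. Advised follow-up."

    # Structured data: one unambiguous lookup.
    record = json.loads(structured)
    print(record["vitals"]["systolic_bp"])            # 142

    # Free text: pattern matching is brittle and misses spelled-out values.
    match = re.search(r"(\d{2,3})\s*/\s*(\d{2,3})", narrative)
    print(match.group(1) if match else "not found")   # not found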

The AHA raises alarms at the difficulties of providing data. They claim that for each patient who is treated, the hospital will have to invest resources "determining which records are relevant and appropriate." (p. 26) "It is also unclear whether a hospital would be expected to spend resources to post information and verify that all of the data listed are available within 36 hours." (p. 27)

From my perspective, the patient download provisions would simply require hospitals to clean up their ways of recording data so that it is in a useable and structured format for all, including their own staff. Just evaluate what the AHA is admitting to in the following passage: "Transferring these clinical observations into a structured, coded problem list in the EHR requires significant changes to work flows and training to ensure accuracy. It also increases time demands for documentation by physicians who already are stretched thin." (p. 27)

People used to getting instant information from commercial web sites find it very hard to justify even the 36-hour delay offered by the Stage 2 meaningful use guidelines. Amazon.com can provide me with information on all my current and recent orders. Google offers each registered user a dashboard that shows everything the company tracks about them, including (in my case) all my web searches going back to mid-2006. Google probably does this to assure people that it is not the egregious privacy violator it is regularly accused of being. Nevertheless, it shows that sites collecting data can make it available to users without friction, and with adequate security to manage privacy risks.

The AHA staff made a good point in talking to me. The CMS "transmit" requirement would let a patient ask the hospital to send his records to any institution or individual of his choice. First of all, this would assume that the recipient has encrypted email or access to an encrypted web site. And it could be hard for a hospital to make sure both the requester and the intended recipient are who they claim to be. "The transmit function also heightens security risks, as the hospital could be asked to send data to an individual with whom it has no existing relationship and no mechanism for authentication of their identity." (p. 27) Countering this claim, Gropper and the Society for Participatory Medicine offer the open OAuth standard to give patients easy and secure access. But while OAuth is a fairly stable standard, the AHA's concerns are justified because it hasn't been applied yet to the health care field.
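For readers unfamiliar with the approach Gropper advocates, here is a minimal sketch in Python of a standard OAuth 2.0 authorization-code flow, in which the patient, rather than the hospital, approves a third party's access. Every URL, client ID, and scope below is a hypothetical placeholder, not a real hospital API or an endorsed health-care profile of the standard.

    import requests

    AUTHORIZE_URL = "https://hospital.example.org/oauth/authorize"   # hypothetical endpoint
    TOKEN_URL = "https://hospital.example.org/oauth/token"           # hypothetical endpoint
    CLIENT_ID = "patient-records-app"                                # hypothetical client
    REDIRECT_URI = "https://app.example.org/callback"

    # Step 1: send the patient to the hospital's authorization page, where they
    # (not the hospital) decide whether the third party may read their records.
    consent_link = (
        f"{AUTHORIZE_URL}?response_type=code&client_id={CLIENT_ID}"
        f"&redirect_uri={REDIRECT_URI}&scope=read:records"
    )
    print("Ask the patient to visit:", consent_link)

    # Step 2: after approval, the hospital redirects back with a short-lived code,
    # which the app exchanges for an access token over an encrypted connection.
    def exchange_code(code: str) -> str:
        resp = requests.post(TOKEN_URL, data={
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": REDIRECT_URI,
            "client_id": CLIENT_ID,
        })
        resp.raise_for_status()
        return resp.json()["access_token"]

The appeal of this pattern for the authentication problem the AHA raises is that the hospital never has to vouch for the recipient on its own; it only honors the grant the patient makes at the consent screen, and can revoke that grant later.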

Unfortunately, allowing a patient to send his or her data to a third party is central to Accountable Care Organizations (ACOs), which hold the promise of improving patient care by sharing data among cooperating health care providers. If the "transmit" provision is delayed, I don't see how ACOs can take off.

The AHA would drastically reduce the information hospitals have to give patients, at least for the next stage of the requirements. Among the material they would remove are diagnoses, the reason for hospitalization, providers of care during hospitalization, vital signs at discharge, laboratory test results, the care transition summary and plan for the next provider of care, and discharge instructions for the patient. (p. 27) All this vastly reduces the value of the data for improving the quality of care. For instance, removing lab test results will lead to expensive and redundant retesting. (However, the AHA staff told me they support the ability of patients to get results directly from the labs.)

I'll conclude this section with the interesting observation that the CHIME comments on meaningful use I mentioned earlier say nothing about the patient engagement rules. In other words, the hospital CIOs in CHIME don't back up the hospitals' own claims.

Some reasonable AHA objections

Now I'm happy to turn to AHA proposals that leave fewer impediments to the achievement of better health care. Their 49-page letter (plus appendices) details many aspects of Stage 2 that seem unnecessarily burdensome or of questionable value.

It seems reasonable to me to ask the ONC, "Remove measures that make the performance of hospitals and EPs contingent on the actions of others." (p. 2) For instance, to engage in successful exchanges of patient data, hospitals depend on their partners (labs, nursing homes, other hospitals) to have Stage 2 capabilities, and given the slow rate of adoption, such partners could be really hard to find.

The same goes for patient downloads. Not only do hospitals have to permit patients to get access to data over the Internet, but they have to get 10% of the patients to actually do it. I don't think the tools are in place yet for patients to make good use of the data. When data is available, apps for processing the data will flood the market and patients will gradually understand the data's value, but right now there are few reasons to download it: perhaps to give it to a relative who is caring for the patient or to a health provider who doesn't have the technical means to request the data directly. Such uses may allow hospitals to reach the 10% required by the Stage 2 rule, but why make them responsible?

The AHA documents a growing digital divide among hospitals and other health care providers. "Rural, smaller and nonteaching hospitals have fewer financial and technical resources at their disposal. They also are starting from a lower base of adoption." (p. 59) The open source community needs to step up here. There are plenty of free software solutions to choose from, but small providers can't use them unless they become as easy to set up and configure as MySQL or even LibreOffice.

The AHA is talking from deep experience when it questions whether patients will actually be able to make use of medical images. "Images are generally very large files, and would require that the individual downloading or receiving the file have specialized, expensive software to access the images. The effort required to make the images available would be tremendous." (p. 26) We must remember that parts of our country don't even have high-speed Internet access.

The AHA's detailed comments about CMS penalties for the slow adoption of EHRs (pp. 9-18) also seem to reflect the hard realities out in the field.

But their attitude toward HIPAA is unclear. They point out that Congress required meaningful use to "take into account the requirements of HIPAA privacy and security law." (p. 25) Nevertheless, they ask the ONC to remove its HIPAA-related clauses from meaningful use because HIPAA is already administered by the Office of Civil Rights (OCR). It's reasonable to remove redundancy by keeping regulations under a single agency, but the AHA admits that the OCR proposal itself is "significantly flawed." Their staff explained to me that their goal is to wait for the next version of the OCR's own proposal, which should be released soon, before creating a new requirement that could well be redundant or conflicting.

Unless we level the playing field for small providers, an enormous wave of buy-outs and consolidation will occur. Market forces and the push to form ACOs are already causing such consolidation. Maybe it's even a good thing--who feels nostalgic for the corner grocery? But consolidation will make it even more important to empower patients with their data, in order to counterbalance the power of the health care institutions.

A closing note about hospital inertia

The AHA includes in its letter some valuable data about difficulties and costs of implementing new systems (pp. 47-48). They say, "More than one hospital executive has reported that managing the meaningful use implementation has been more challenging than building a new hospital, even while acknowledging the need to move ahead." (p. 49)

What I find particularly troublesome about their report is that the AHA offers no hint that the hospitals spent all this money to put in place new workflows that could improve care. All the money went to EHRs and the minimal training and installation they require. What will it take for hospitals to make the culture changes that reap the potential benefits of EHRs and data transfers? The public needs to start asking tough questions, and the Stage 2 requirements should be robust enough to give these questions a basis.

May 02 2012

Recombinant Research: Breaking open rewards and incentives

In the previous articles in this series I've looked at problems in current medical research, and at the legal and technical solutions proposed by Sage Bionetworks. Pilot projects have shown encouraging results, but to move from a hothouse environment of experimentation into the mainstream of one of the world's most lucrative and tradition-bound industries, Sage Bionetworks must aim for that industry's nucleus: rewards and incentives.

Previous article in the series: Sage Congress plans for patient engagement.

Think about the publication system, that wretchedly inadequate medium for transferring information about experiments. Getting the data on which a study was based is incredibly hard; getting the actual samples or access to patients is usually impossible. Just as boiling vegetables drains most of their nutrients into the water, publishing results of an experiment throws away what is most valuable.

But the publication system has been built into the foundation of employment and funding over the centuries. A massive industry provides distribution of published results to libraries and research institutions around the world, and maintains iron control over access to that network through peer review and editorial discretion. Even more important, funding grants require publication (and have only very recently begun to require the data behind a study). And of course, advancement in one's field requires publication.

Lawrence Lessig, in his keynote, castigated for-profit journals for restricting access to knowledge in order to puff up profits. A chart in his talk showed skyrocketing prices for for-profit journals in comparison to non-profit journals. Lessig is not out on the radical fringe in this regard; Harvard Library is calling the current pricing situation "untenable" in a move toward open access echoed by many in academia.

Lawrence Lessig keynote at Sage Congress.

How do we open up this system that seemed to serve science so well for so long, but is now becoming a drag on it? One approach is to expand the notion of publication. This is what Sage Bionetworks is doing with Science Translational Medicine in publishing validated biological models, as mentioned in an earlier article. An even more extensive reset of the publication model is found in Open Network Biology (ONB), an online journal. The publishers require that an article be accompanied by the biological model, the data and code used to produce the model, a description of the algorithm, and a platform to aid in reproducing results.

But neither of these worthy projects changes the external conditions that prop up the current publication system.

When one tries to design a reward system that gives deserved credit to other things besides the final results of an experiment, as some participants did at Sage Congress, great unknowns loom up. Is normalizing and cleaning data an activity worth praise and recognition? How about combining data sets from many different projects, as a Synapse researcher did for the TCGA? How much credit do you assign researchers at each step of the necessary procedure for a successful experiment?

Let's turn to the case of free software to look at an example of success in open sharing. It's clear that free software has swept the computer world. Most web sites use free software ranging from the server on which they run to the language compilers that deliver their code. Everybody knows that the most popular mobile platform, Android, is based on Linux, although fewer realize that the next most popular mobile platforms, Apple's iPhones and iPads, run on a modified version of the open BSD operating system. We could go on and on citing ways in which free and open source software have changed the field.

The mechanism by which free and open source software staked out its dominance in so many areas has not been authoritatively established, but I think many programmers agree on a few key points:

  • Computer professionals encountered free software early in their careers, particularly as students or tinkerers, and brought their predilection for it into jobs they took at stodgier institutions such as banks and government agencies. Their managers deferred to them on choices for programming tools, and the rest is history.

  • Of course, computer professionals would not have chosen the free tools had they not been fit for the job (and often best for the job). Why is free software so good? Probably because the people creating it have complete jurisdiction over what to produce and how much time to spend producing it, unlike in commercial ventures with requirements established through marketing surveys and deadlines set unreasonably by management.

  • Different pieces of free software are easy to hook up, because one can alter their interfaces as necessary. Free software developers tend to look for other tools and platforms that could work with their own, and provide hooks into them (Apache, free database engines such as MySQL, and other such platforms are often accommodated.) Customers of proprietary software, in contrast, experience constant frustration when they try to introduce a new component or change components, because the software vendors are hostile to outside code (except when they are eager to fill a niche left by a competitor with market dominance). Formal standards cannot overcome vendor recalcitrance--a painful truth particularly obvious in health care with quasi-standards such as HL7.

  • Free software scales. Programmers work on it tirelessly until it's as efficient as it needs to be, and when one solution just can't scale any more, programmers can create new components such as Cassandra, CouchDB, or Redis that meet new needs.

Are there lessons we can take from this success story? Biological research doesn't fit the circumstances that made open source software a success. For instance, researchers start out low on the totem pole in very proprietary-minded institutions, and don't get to choose new ways of working. But the cleverer ones are beginning to break out and try more collaboration. Software and Internet connections help.

Researchers tend to choose formats and procedures on an ad hoc, project by project basis. They haven't paid enough attention to making their procedures and data sets work with those produced by other teams. This has got to change, and Sage Bionetworks is working hard on it.

Research is labor-intensive. It needs desperately to scale, as I have pointed out throughout this article, but to do so it needs entire new paradigms for thinking about biological models, workflow, and teamwork. This too is part of Sage Bionetworks' mission.

Certain problems are particularly resistant in research:

  • Conditions that affect small populations have trouble raising funds for research. The Sage Congress initiatives can lower research costs by pooling data from the affected population and helping researchers work more closely with patients.

  • Computation and statistical methods are very difficult fields, and biological research is competing with every other industry for the rare individuals who know these well. All we can do is bolster educational programs for both computer scientists and biologists to get more of these people.

  • There's a long lag time before one knows the effects of treatments. As Heywood's keynote suggested, this is partly solved by collecting longitudinal data on many patients and letting them talk among themselves.

Another process change has revolutionized the computer field: agile programming. That paradigm stresses close collaboration with the end-users whom the software is supposed to benefit, and a willingness to throw out old models and experiment. BRIDGE and other patient initiatives hold out the hope of a similar shift in medical research.

All these things are needed to rescue the study of genetics. It's a lot to do all at once. Progress on some fronts was more apparent than on others at this year's Sage Congress. But as more people get drawn in, and sometimes-fumbling experiments produce maps for changing direction, we may start to see real outcomes from these efforts in upcoming years.

All articles in this series, and others I've written about Sage Congress, are available through a bit.ly bundle.

OSCON 2012 — Join the world's open source pioneers, builders, and innovators July 16-20 in Portland, Oregon. Learn about open development, challenge your assumptions, and fire up your brain.

Save 20% on registration with the code RADAR20

May 01 2012

Recombinant Research: Sage Congress plans for patient engagement

Clinical trials are the pathway for approving drug use, but they aren't good enough. That has become clear as a number of drugs (Vioxx being the most famous) have been blessed by the FDA, only to be disqualified after years of widespread use revealed either lack of efficacy or dangerous side effects. And the measures the FDA has recently taken to solve this embarrassing problem continue the heavyweight bureaucratic methods it has always employed: more trials, which raise the cost of every drug and slow down approval. Although I don't agree with the opinion of Avik S. A. Roy (reprinted in Forbes) that Phase III trials tend to be arbitrary, I do believe it is time to look for other ways to test drugs for safety and efficacy.

First article in the series: Recombinant Research: Sage Congress Promotes Data Sharing in Genetics.

But the Vioxx problem is just one instance of the wider malaise afflicting the drug industry. Companies just aren't producing enough new medications, either to solve pressing public needs or to keep up their own earnings. Vicki Seyfert-Margolis of the FDA built on her noteworthy speech at last year's Sage Congress (reported in one of my articles about the conference) with the statistic that drug companies submitted 20% fewer medications to the FDA between 2001 and 2007. Their blockbuster drugs produce far fewer profits than before as patents expire and fewer new drugs emerge (a predicament called the "patent cliff"). Seyfert-Margolis intimated that this crisis is the cause of layoffs in the industry, although I heard elsewhere that the companies are outsourcing more research, so perhaps the downsizing is just a reallocation of the same money.

Benefits of patient involvement

The field has failed to rise to the challenges posed by new complexity. Speakers at Sage Congress seemed to feel that genetic research has gone off the tracks. As the previous article in this series explained, Sage Bionetworks wants researchers to break the logjam by sharing data and code in GitHub fashion. And surprisingly, pharma is hurting enough to consider going along with an open research system. The companies are bleeding from a situation where as much as 80% of the effort in each clinical analysis goes to retrieving, formatting, and curating the data. Meanwhile, Kathy Giusti of the Multiple Myeloma Research Foundation says that in their work, open clinical trials are 60% faster.

Attendees at a breakout session I sat in on, including numerous managers from major pharma companies, expressed confidence that they could expand public or "pre-competitive" research in the direction Sage Congress proposed. The sector left to engage is the one that's central to all this work--the public.

If we could collect wide-ranging data from, say, 50,000 individuals (a May 2013 goal cited by John Wilbanks of Sage Bionetworks, a Kauffman Foundation Fellow), we could uncover a lot of trends that clinical trials are too narrow to turn up. Wilbanks ultimately wants millions of such data samples, and another attendee claimed that "technology will be ready by 2020 for a billion people to maintain their own molecular and longitudinal health data." And Jamie Heywood of PatientsLikeMe, in his keynote, claimed to have demonstrated through shared patient notes that some drugs were ineffective long before the FDA or manufacturers made the discoveries. He decried the current system of validating drugs for use and then failing to follow up with more studies, snorting that, "Validated means that I have ceased the process of learning."

But patients have good reasons to keep a close hold on their health data, fearing that an insurance company, an identity thief, a drug marketer, or even their own employer will find and misuse it. They already have little enough control over it, because the annoying consent forms we always have shoved in our faces when we come to a clinic give away a lot of rights. Current laws allow all kinds of funny business, as shown in the famous case of the Vermont law against data mining, which gave the Supreme Court a chance to say that marketers can do anything they damn please with your data, under the excuse that it's de-identified.

In a noteworthy poll by Sage Bionetworks, 80% of academics claimed they were comfortable sharing their personal health data with family members, but only 31% of citizen advocates would do so. If that 31% is more representative of patients and the general public, how many would open their data to strangers, even when supposedly de-identified?

The Sage Bionetworks approach to patient consent

It's basic research that loses. So Wilbanks and a team have been working for the past year on a "portable consent" procedure. This is meant to overcome the hurdle by which a patient has to be contacted and give consent anew each time a new researcher wants data related to his or her genetics, conditions, or treatment. The ideal behind portable consent is to treat the entire research community as a trusted user.

The current plan for portable consent provides three tiers:

  • Tier 1: No restrictions on data, so long as researchers follow the terms of service. Hopefully, millions of people will choose this tier.

  • Tier 2: A middle ground. Someone with asthma may state that his data can be used only by asthma researchers, for example.

  • Tier 3: Carefully controlled. Meant for data coming from sensitive populations, along with anything that includes genetic information.

Synapse provides a trusted identification service. If researchers find a person with useful characteristics in the last two tiers, and are not authorized automatically to use that person's data, they can contact Synapse with the random number assigned to the person. Synapse keeps the original email address of the person on file and will contact him or her to request consent.
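To picture how the tiers and the re-contact step might fit together, here is a rough sketch in Python; the tier rules, field names, and notify_participant() helper are my own hypothetical illustration, not Sage Bionetworks or Synapse code.

    from dataclasses import dataclass

    @dataclass
    class Participant:
        random_id: str        # the only identifier researchers ever see
        email: str            # held by the identification service, never released
        tier: int             # 1, 2, or 3, as described above
        allowed_topics: set   # consulted only for tier 2

    def notify_participant(email: str, topic: str) -> None:
        # Stand-in for the service emailing the participant to request consent.
        print(f"Would email {email} to request consent for {topic} research")

    def may_use(p: Participant, research_topic: str) -> str:
        if p.tier == 1:
            return "granted"   # open to any researcher who accepts the terms of service
        if p.tier == 2 and research_topic in p.allowed_topics:
            return "granted"
        notify_participant(p.email, research_topic)   # tier 3, or a tier-2 mismatch
        return "consent requested"

    p = Participant("a41f9c", "patient@example.org", tier=2, allowed_topics={"asthma"})
    print(may_use(p, "asthma"))     # granted
    print(may_use(p, "diabetes"))   # consent requested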

Portable consent also involves a lot of patient education. People will sign up through a software wizard that explains the risks. After choosing portable consent, the person decides how much to put in: 23andMe data, prescriptions, or whatever they choose to release.

Sharon Terry of the Genetic Alliance said that patient advocates currently try to control patient data in order to force researchers to share the work they base on that data. Portable consent loosens this control, but the field may be ready for its more flexible conditions for sharing.

Pharma companies and genetics researchers have lots to gain from access to enormous repositories of patient data. But what do the patients get from it? Leaders in health care already recognize that patients are more than experimental subjects and passive recipients of treatment. The recent ONC proposal for Stage 2 of Meaningful Use includes several requirements to share treatment data with the people being treated (which seems kind of a no-brainer when stated this baldly) and the ONC has a Consumer/Patient Engagement Power Team.

Sage Congress is fully engaged in the patient engagement movement too. One result is the BRIDGE initiative, a joint project of Sage Bionetworks and Ashoka with funding from the Robert Wood Johnson Foundation, to solicit questions and suggestions for research from patients. Researchers can go for years researching a condition without even touching on some symptom that patients care about. Listening to patients in the long run produces more cooperation and more funding.

Portable consent requires a leap of faith, because as Wilbanks admits, releasing aggregates of patient data means that over time, a patient is almost certain to be re-identified. Statistical techniques are getting too sophisticated, and compute power is growing too fast, for anyone to hide behind current tricks such as using only the first three digits of a five-digit postal code. Portable consent requires the data repository to grant access only to bona fide researchers and to set terms of use, including a ban on re-identifying patients. Still, researchers will have rights to do research, redistribute data, and derive products from it. Audits will be built in.
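To make the postal-code trick concrete, here is a tiny sketch in Python of that kind of generalization; the record is invented, and the point is only why the measure is weak on its own, not how any formal de-identification standard works.

    record = {"zip": "02139", "birth_year": 1961, "sex": "F", "diagnosis": "asthma"}   # invented

    def generalize_zip(zip_code: str) -> str:
        """Keep only the first three digits of the postal code."""
        return zip_code[:3] + "XX"            # "02139" -> "021XX"

    deidentified = {**record, "zip": generalize_zip(record["zip"])}
    print(deidentified)
    # A three-digit ZIP prefix still narrows the location considerably; combined
    # with birth year, sex, and rich longitudinal or genomic data, the pool of
    # candidates shrinks fast, which is why Wilbanks expects re-identification
    # to become almost certain over time.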

But as Kelly Edwards of the University of Washington pointed out, tools and legal contracts can contribute to trust; trust, however, is ultimately based on shared values. Portable consent, properly done, engages with frameworks like Synapse to create a culture of respect for data.

In fact, I think the combination of the contractual framework in portable consent and a platform like Synapse, with its terms of use, might make a big difference in protecting patient privacy. Seyfert-Margolis cited predictions that 500 million smartphone users will be using medical apps by 2015. But mobile apps are notoriously greedy for personal data and cavalier toward user rights. Suppose all those smartphone users stored their data in a repository with clear terms of use and employed portable consent to grant access to the apps? We might all be safer.

The final article in this series will evaluate the prospects for open research in genetics, with a look at the grip of journal publishing on the field, and some comparisons to the success of free and open source software.

Next: Breaking Open Rewards and Incentives. All articles in this series, and others I've written about Sage Congress, are available through a bit.ly bundle.

OSCON 2012 — Join the world's open source pioneers, builders, and innovators July 16-20 in Portland, Oregon. Learn about open development, challenge your assumptions, and fire up your brain.

Save 20% on registration with the code RADAR20

April 30 2012

Recombinant Research: Sage Congress promotes data sharing in genetics

Given the exponential drop in the cost of personal genome sequencing (you can get a basic DNA test from 23andMe for a couple hundred dollars, and a full sequence will probably soon come down to a thousand dollars), a new dawn seems to be breaking for biological research. Yet the assessment of genetics research at the recent Sage Congress was highly cautionary. Various speakers chided their own field for tilling the same ground over and over, ignoring the urgent needs of patients, and just plain researching the wrong things.

Sage Congress also has some plans to fix all that. These include tools for sharing data and storing it in cloud facilities, research challenges, efforts to inject new life into collaboration projects, and ways to gather more patient data and bring patients into the planning process. Through two days of demos, keynotes, panels, and breakout sessions, Sage Congress brought its vision to a high-level cohort of 230 attendees from universities, pharmaceutical companies, government health agencies, and others who can make change in the field.

In the course of this series of articles, I'll pinpoint some of the pain points that can force researchers, pharmaceutical companies, doctors, and patients to work together better. I'll offer a look at the importance of public input, legal frameworks for cooperation, the role of standards, and a number of other topics. But we'll start by seeing what Sage Bionetworks and its pals have done over the past year.

Synapse: providing the tools for genetics collaboration

Everybody understands that change is driven by people and the culture they form around them, not by tools, but good tools can make it a heck of a lot easier to drive change. To give genetics researchers the best environment available to share their work, Sage Bionetworks created the Synapse platform.

Synapse recognizes that data sets in biological research are getting too large to share through simple data transfers. For instance, in his keynote about cancer research (where he kindly treated us to pictures of cancer victims during lunch), UC Santa Cruz professor David Haussler announced plans to store 25,000 cases at 200 gigabytes per case in the Cancer Genome Atlas, also known as TCGA in what seems to be a clever pun on the four nucleotides in DNA. Storage requirements thus work out to 5 petabytes, which Haussler wants to be expandable to 20 petabytes. In the face of big data like this, the job becomes moving the code to the data, not moving the data to the code.

Synapse points to data sets contributed by cooperating researchers, but also lets you pull up a console in a web browser to run R or Python code on the data. Some effort goes into tagging each data set with associated metadata: tissue type, species tested, last update, number of samples, etc. Thus, you can search across Synapse to find data sets that are pertinent to your research.

One group working with Synapse has already harmonized and normalized the data sets in TCGA so that a researcher can quickly mix and run stats on them to extract emerging patterns. The effort took about one and a half full-time employees for six months, but the project leader is confident that with the system in place, "we can activate a similar size repository in hours."

This contribution highlights an important principle behind Synapse (appropriately called "viral" by some people in the open source movement): when you have manipulated and improved upon the data you find through Synapse, you should put your work back into Synapse. This work could include cleaning up outlier data, adding metadata, and so on. To make work sharing even easier, Synapse has plans to incorporate the Amazon Simple Workflow Service (SWF). It also hopes to add web interfaces that allow non-programmers to do useful work with data.
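
As a rough idea of what that "give back what you improve" loop could look like from a script, here is a minimal sketch assuming the Synapse Python client (synapseclient); the credentials, entity IDs, and clean-up step are placeholders, and the exact calls may differ from the Beta platform described here.

    # Sketch of pulling a shared data set from Synapse, improving it locally,
    # and storing the result back for others, assuming the Synapse Python
    # client (synapseclient). IDs and credentials below are placeholders.
    import synapseclient
    from synapseclient import File

    def clean_up_outliers(path):
        """Placeholder for your own analysis; would write and return a cleaned copy."""
        return path

    syn = synapseclient.Synapse()
    syn.login("my_username", "my_password")     # placeholder credentials

    raw = syn.get("syn0000001")                 # placeholder entity ID
    cleaned_path = clean_up_outliers(raw.path)

    # Contribute the improved version back so other researchers can build on it.
    cleaned = File(cleaned_path, parent="syn0000002")   # placeholder project ID
    syn.store(cleaned)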

The Synapse development effort was an impressive one, coming up with a feature-rich Beta version in a year with just four coders. And the Synapse code is entirely open source. So not only is the data distributed, but the creators will be happy for research institutions to set up their own Synapse sites. This may make Synapse more appealing to geneticists whose inertia keeps them from visiting the original Synapse site.

Mike Kellen, introducing Synapse, compared its potential impact to that of moving research from a world of journals to a world like GitHub, where people record and share every detail of their work and plans. Along these lines, Synapse records who has used a data set. This has many benefits:

  • Researchers can meet up with others doing related work.

  • It gives public interest advocates a hook with which to call on those who benefit commercially from Synapse--as we hope the pharmaceutical companies will--to contribute money or other resources.

  • Members of the public can monitor accesses for suspicious uses that may be unethical.

There's plenty more work to be done to get data in good shape for sharing. Researchers must agree on some kind of metadata--the dreaded notion of ontologies came up several times--and clean up their data. They must learn about data provenance and versioning.

But sharing is critical for such basics of science as reproducing results. One source estimates that 75% of published results in genetics can't be replicated. A later article in this series will examine a new model in which enough metainformation is shared about a study for it to be reproduced and, even more important, to serve as a foundation for further research.

With this Beta release of Synapse, Sage Bionetworks feels it is ready for a new initiative to promote collaboration in biological research. But how do you get biologists around the world to start using Synapse? For one, try an activity that's gotten popular nowadays: a research challenge.

The Sage DREAM challenge

Sage Bionetworks' DREAM challenge asks genetics researchers to find predictors of the progression of breast cancer. The challenge uses data from 2,000 women diagnosed with breast cancer, combining information on DNA alterations affecting how their genes were expressed in the tumors, clinical information about their tumor status, and their outcomes over ten years. The task is to build models integrating the alterations with molecular markers and clinical features to predict which women will have the most aggressive disease over a ten-year period.

Several hidden aspects of the challenge make it a clever vehicle for Sage Bionetworks' values and goals. First, breast cancer is a scourge whose urgency is matched by its stubborn resistance to diagnosis. The famous 2009 recommendations of the U.S. Preventive Services Task Force, after all the controversy was aired, left us with the dismal truth that we don't know a good way to predict breast cancer. Some women get mastectomies in the total absence of symptoms based just on frightening family histories. In short, breast cancer puts the research and health care communities in a quandary.

We need finer-grained predictors to say who is likely to get breast cancer, and standard research efforts up to now have fallen short. The Sage proposal is to marshal experts in a new way that combines their strengths, asking them to publish models that show the complex interactions between gene targets and influences from the environment. Sage Bionetworks will publish data sets at regular intervals that it uses to measure the predictive ability of each model. A totally fresh data set will be used at the end to choose the winning model.
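
To make the mechanics concrete, here is a minimal sketch of that train-on-released-data, score-on-fresh-data loop, using scikit-learn on synthetic stand-in features. The features, the logistic regression model, and the AUC metric are illustrative assumptions, not the challenge's actual data or official scoring rule.

    # Illustrative sketch of the challenge mechanics: fit a model on released
    # data, then score it on a data set the modeler has never seen. Synthetic
    # features and the AUC metric are assumptions, not the DREAM challenge's
    # actual data or scoring rule.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    def make_cohort(n):
        """Stand-ins for molecular markers plus clinical features."""
        X = rng.normal(size=(n, 20))
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)) > 0
        return X, y.astype(int)

    X_release, y_release = make_cohort(2000)   # data released to participants
    X_holdout, y_holdout = make_cohort(500)    # fresh data used to pick a winner

    model = LogisticRegression(max_iter=1000).fit(X_release, y_release)
    score = roc_auc_score(y_holdout, model.predict_proba(X_holdout)[:, 1])
    print(f"Held-out AUC: {score:.3f}")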

The process behind the challenge--particularly the need to upload code in order to run it on the Synapse site--automatically forces model builders to publish all their code. According to Stephen Friend, founder of Sage Bionetworks, "this brings a level of accountability, transparency, and reproducibility not previously achieved in clinical data model challenges."

Finally, the process has two more effects: it shows off the huge amount of genetic data that can be accessed through Synapse, and it encourages researchers to look at each other's models in order to boost their own efforts. In less than a month, the challenge has already received more than 100 models from 10 sources.

The reward for winning the challenge is publication in a respected journal, the gold medal still sought by academic researchers. (More on shattering this obelisk later in the series.) Science Translational Medicine will accept results of the evaluation as a stand-in for peer review, a real breakthrough for Sage Bionetworks because it validates their software-based, evidence-driven process.

The DREAM challenge also promotes use of the Synapse infrastructure, and in particular the method of bringing the code to the data. Google is donating server space for the challenge, which levels the playing field for researchers, freeing them from paying for their own computing.

A single challenge doesn't solve all the problems of incentives, of course. We still need to persuade researchers to put up their code and data on a kind of genetic GitHub, persuade pharmaceutical companies to support open research, and persuade the general public to share data about their phenomes (life data) and genes--all topics for upcoming articles in the series.

Next: Sage Congress Plans for Patient Engagement. All articles in this series, and others I've written about Sage Congress, are available through a bit.ly bundle.

OSCON 2012 — Join the world's open source pioneers, builders, and innovators July 16-20 in Portland, Oregon. Learn about open development, challenge your assumptions, and fire up your brain.

Save 20% on registration with the code RADAR20

April 19 2012

Sage Congress: The synthesis of open source with genetics

For several years, O'Reilly Radar has been covering the exciting potential that open source software, open data, and a general attitude of sharing and cooperation bring to health care. Along with many exemplary open source projects in areas directly affecting the public — such as the VA's Blue Button in electronic medical records and the Direct project (http://wiki.directproject.org/) in data exchange — the study of disease is undergoing a paradigm shift.

Sage Bionetworks stands at the center of a wide range of academic researchers, pharmaceutical companies, government agencies, and health providers realizing that the old closed system of tiny teams who race each other to a cure has got to change. Today's complex health problems, such as Alzheimer's, AIDS, and cancer, are too big for a single team. And these institutions are slowly wrenching themselves out of the habit of data hoarding and finding ways to work together.

A couple weeks ago I talked to the founder of Sage Bionetworks, Stephen Friend, about recent advances in open source in this area, and the projects to be highlighted at the upcoming Sage Commons congress (http://sagecongress.org/). Steve is careful to call this a "congress" instead of a "conference" because all attendees are supposed to pitch in and contribute to the meme pool. I covered Sage Congress in a series of articles last year. The following podcast ranges over topics such as:

  • what is Sage Bionetworks [Discussed at the 00:25 mark];
  • the commitment of participants to open source software [Discussed at the 01:01 mark];
  • how open source can support a business model in drug development [Discussed at the 01:40 mark];
  • a look at the upcoming congress [Discussed at the 03:47 mark];
  • citizen-led contributions or network science [Discussed at the 06:12 mark];
  • data sharing philosophy [Discussed at the 09:01 mark];
  • when projects are shared with other institutions [Discussed at the 12:43 mark];
  • how to democratize medicine [Discussed at the 17:10 mark];
  • a portable legal consent approach where the patient controls his or her own data [Discussed at the 20:07 mark];
  • solving the problem of non-sharing in the industry [Discussed at the 22:15 mark]; and
  • key speakers at the congress [Discussed at the 26:35 mark].

Sessions from the congress will be broadcast live via webcast and posted on the Internet.

March 27 2012

Four short links: 27 March 2012

  1. Five Tough Lessons I Had To Learn About Healthcare (Andy Oram) -- I don't normally link to things from Radar but this gels 110% with my limited experience with the healthcare industry.
  2. Makematics: Math for Makers -- I want the hardware hackers who are building the next generation of DIY 3D printers to be able to turn topological algorithms and concepts into open source tool path generation software that creates more efficient gcode and enables the fabrication of previously impossible physical forms. I don’t know the best way to go about this, but this site is intended to act as home for my experiments.
  3. CASH Music -- they build open source tools for musicians and labels to make money. What Wordpress did for bloggers, we're doing for musicians. (via New York Times)
  4. PL101: Create Your Own Programming Language -- you'll build it in Javascript as you learn how programming languages and compilers work. It'll run on AppEngine and be hosted on GitHub.

March 26 2012

Five tough lessons I had to learn about health care

Working in the health care space has forced me to give up many hopes and expectations that I had a few years ago. Forgive me for being cynical (it's an easy feeling to have following the country's largest health IT conference, as I reported a month ago), and indeed some positive trends do step in to shore up hope. I'll go over the redeeming factors after listing the five tough lessons.

1. The health care field will not adopt a Silicon Valley mentality

Wild, willful, ego-driven experimentation--a zeal for throwing money after intriguing ideas with minimal business plans--has seemed to work for the computer field, and much of the world is trying to adopt a "California optimism." A lot of venture capitalists and technology fans deem this attitude the way to redeem health care from its morass of expensive solutions that don't lead to cures. But it won't happen, at least not the way they paint it.

Health care is one of the most regulated fields in public life, and we want it that way. From the moment we walk into a health facility, we expect the staff to be following rigorous policies to avoid infections. (They don't, but we expect them to.) And not just anybody can hang a shingle outside the door and call themselves a doctor. In the nineteenth century it was easier, but we don't consider that a golden age of medicine.

Instead, doctors go through some of the longest and most demanding training that exists in the world today. And even after they're licensed, they have to regularly sign up for continuing education to keep practicing. Other fields in medicine are similar. The whole industry is constrained by endless requirements that make sure the insiders remain in their seats and no "disruptive technologies" raise surprises. Just ask a legal expert about the complex mesh of Federal and state regulations that a health care provider has to navigate to protect patient privacy--and you do want your medical records to be private, don't you?--before you rave about the Silicon Valley mentality. Also read the O'Reilly book by Fred Trotter and David Uhlman about the health care system as it really is.

Nor can patients change treatments with the ease of closing down a Facebook account. Once a patient has established a trust relationship with a doctor and obtained a treatment plan, he or she won't say, "I think I'll go down the road to another center that charges $100 less for this procedure." And indeed, health reform doesn't prosper from breaking down treatments into individual chunks. Progress lies in the opposite direction: the redemptive potential of long-term relationships.

2. Regulations can't force change

I am very impressed with the HITECH act (a product of the American Recovery and Reinvestment Act, more than the Affordable Care Act) that set modern health reform in motion, as well as the efforts of the Department of Health and Human Services to push institutions forward. But change in health care, like education, boils down to the interaction in a room between a professional and a client. Just as lesson plans and tests can't ensure that a teacher inspires a child to learn, regulations can't keep a doctor from ordering an unnecessary test to placate an anxious patient.

We can offer clinical decision support to suggest what has worked for other patients, but we can't keep a patient from asking for an expensive procedure that has a 10% chance of making him better (and a 20% chance of making him worse), nor can we make the moral decision about what treatment to pursue, for the patient or the doctor. Each patient is different, anyway. No one wants to be a statistic.

3. The insurance companies are not the locus of cost and treatment problems

Health insurers are a favorite target of hatred by Americans, exemplified by Michael Moore's 2007 movie Sicko and more surprisingly in the 1997 romantic comedy As Good as it Gets, where I saw an audience applaud as Helen Hunt delivered a rant against health maintenance organizations. A lot of activists, looking at other countries, declare that our problems would be solved (well, would improve a lot) if we got private insurers out of the picture.

Sure, there's a lot of waste in the current insurance system, which deliberately stretches out the task of payment and makes it take up the days of full-time staff in each doctor's office. But that's not the cause of the main problems in either costs or treatment failures. The problems lie with the beloved treatment staff. We can respect their hard work and the lives they save, but we don't have to respect them for releasing patients from hospitals without adequate follow-up, or for ordering unnecessary radiation that creates harm for patients, or for the preventable errors that still (after years of publicity) kill 90,000 to 100,000 patients a year.

4. Doctors don't want to be care managers

The premise of health reform is to integrate patients into a larger plan for managing a population. A doctor is supposed to manage a case load and keep his or her pipeline full while not spending too much. The thrust of various remuneration schemes, old and new, that go beyond fee for service (capitation, global payment systems) is to reward a doctor for handling patients of a particular type (for instance, elderly people with hypertension) at a particular cost. But doctors aren't trained for this. They want to fix the immediate, presenting complaint and send the patient home until they're needed again. Some think longitudinally, and diligently try to treat the whole person rather than a symptom. But managing their treatment options as a finite resource is just not in their skill set.

The United Kingdom--host of one of the world's great national care systems--is about to launch a bold new program where doctors have to do case management. The doctors are rebelling. If this is the future of medicine, we'll have to find new medical personnel to do it.

5. Patients don't want to be care managers

Now that the medical field has responded superbly to acute health problems, we are left with long-term problems that require lifestyle and environmental changes. The patient is even more important than the doctor in these modern ills. But the patients who cost the most and need to make the most far-ranging changes are demonstrating an immunity to good advice. They didn't get emphysema or Type 2 diabetes by acting healthily in the first place, and they aren't about to climb out of their condition voluntarily either.

You know what the problem with chronic disease is? Its worst effects are not likely to show up early in life when lifestyle change could make the most difference. (Serious pain can come quickly from some chronic illnesses, such as asthma and Crohn's disease, but these are also hard to fix through lifestyle changes, if by "lifestyle change" you mean breathing clean air.) The changes a patient would have to make to prevent smoking-related lung disease or obesity-related problems would require a piercing re-evaluation of his course of life, which few can do. And incidentally, they are neither motivated nor trained to store their own personal health records.

Hope for the future

Despite the disappointments I've undergone in learning about health care, I expect the system to change for the better. It has to, because the public just won't tolerate more precipitous price hikes and sub-standard care.

There's a paucity of citations in my five lessons because they tend not to be laid out bluntly in research or opinion pieces; for the most part, they emerged gradually over many hallway conversations I had. Each of the five lessons contains a "not," indicating that they attack common myths. Myths (in the traditional sense) are in fact very useful constructs, because they organize the understanding of the world that societies have trouble articulating in other ways. We can realize that myths are historically inaccurate while finding positive steps forward in them.

The Silicon Valley mentality will have some effect through new devices and mobile phone apps that promote healthy activity. They can help with everything from basic compliance--remembering to take prescribed meds--to promoting fitness crazes and keeping disabled people in their homes. Lectures given once in a year in the doctor's office don't lead to deep personal change, but having a helper nearby (even a digital one) can impel a person to act better, hour by hour and day by day. This has been proven by psychologists over and over: motivation is best delivered in small, regular doses (a theme found in my posting from HIMSS).

Because the most needy patients are often the most recalcitrant ones, personal responsibility has to intersect with professional guidance. A doctor has to work with the patient, and other staff can shore up good habits as well. This requires the doctors' electronic record systems to accept patient data, such as weight and mood. Projects such as Indivo X support these enhancements, which traditional electronic record systems are ill-prepared for.

Although doctors eschew case management, there are plenty of other professionals who can help them with it, and forming Accountable Care Organizations gives the treatment staff access to such help. Tons of potential savings lie in the data that clinicians could collect and aggregate. Still more data is being loaded by the federal government regularly at Health.Data.Gov. ACOs and other large institutions can hire people who love to crunch big data (if such staff can be found, because they're in extremely high demand now in almost every industry) to create systems that slide seamlessly into clinical decision support and provide guidelines for better treatment, as well as handle the clinic's logistics better. So what we need to do is train a lot more experts in big data to understand the health care field and crunch its numbers.

Change will be disruptive, and will not be welcomed with open arms. Those who want a better system need to look at the areas where change is most likely to make a difference.

February 29 2012

Report from HIMSS 12: wrap-up of the largest health IT conference

This is a time of great promise in health care, yet an oppressive atmosphere hung over much of HIMSS (http://www.himssconference.org/). All the speakers--not least the government representatives who announced rules for the adoption of electronic health records--stressed commendable practices such as data exchange, providing the patient with information, and engaging with the patient. Many berated hospitals, doctors, and vendors for neglecting the elements that maintain health. But the thrust of most sessions was on such details as how to convert patient records to the latest classification of diseases (ICD-10).

Intelligent Hospital pavilion shows off tempting technology.

I have nothing against ICD-10 and I'm sure adopting it is a big headache that deserves attention at the conference. The reason I call the atmosphere oppressive is that I felt stuck among health care providers unable to think long-term or to embrace the systems approach that we'll need to cure people and cut costs. While some health care institutions took the ICD-10 change-over seriously and put resources into meeting the deadline, others pressured the Dept. of Health and Human Services to delay implementation, and apparently won a major reprieve. The health IT community, including HIMSS, criticized the delay. But resistance to progress usually does not break out so overtly, and remains ingrained in day-to-day habits.

But ICD-10 is a sideline to the major issue of Stage 2 meaningful use. Why, as I reported on Wednesday, were so many of the 35,000 HIMSS attendees wrapped up in the next step being forced on them by the federal government? The scandal is that these meaningful use concepts (using data to drive care, giving care-givers information that other care-givers have collected about the patient) have to be forced on them. Indeed, institutions like Kaiser Permanente that integrated their electronic records years ago and concentrated on the whole patient had relatively little work to do to conform to Stage 1, and probably have the building blocks for Stage 2 in place. And of course these things are part of the landscape of health care in other countries. (The proposed regulations were finally posted last Thursday.)

Recipients of Regina Holliday jackets record patient involvement stories.

Haven't our providers heard that an ounce of prevention is worth a pound of cure? Don't well-educated and well-paid executives invest in quality measures with the expectation that they'll pay off in the long run? And aren't we all in the field for the good of the patients? What is that snickering I hear?

Actually, I don't accept the premise that providers are all in it for the money. If so many are newly incentivized to join the government's program for a mere $15,000 per doctor (plus avoiding some cuts in Medicare payments), which is a small fraction of the money they'll have to spend implementing the program, they must know that it's time to do the right thing. Meaningful use can be a good framework to concretize the idealistic goals of health care reform, but I just wish the vendors and doctors would keep their eyes more on the final goal.

Redwood MedNet in Northern California is an example of a health information exchange that adopted standards (CONNECT, before the Direct project was in place) to simplify data exchange between health providers. Will Ross of Redwood MedNet told me that qualifying for Stage 2 would be simple for them, "but you won't hear that from many vendors in this exhibit hall."

Annual surveys by Family Practice Management journal about their readers' satisfaction with EHRs, reviewed in one HIMSS session, showed widespread dissatisfaction that doesn't change from year to year. For instance, 39% were dissatisfied with support and training, although a few vendors rated quite high. Still, considering that doctors tend to veer away from open source solutions and pay big bucks for proprietary ones out of a hope of receiving better support and training, they deserve better. It's worth noting that the longer a practice uses its system, the more they're likely to express satisfaction. But only 38% of respondents would purchase the same systems now if they weren't already locked in.

That's the big, frustrating contradiction at HIMSS. The vendors have standards (HL7 and others), they've been setting up health information exchanges (under various other names) for years, they have a big, popular interoperability lab at each conference--and yet most patients still have to carry paper records and CDs with images from one doctor to another. (A survey of HIMSS members showed that one-quarter allowed access by patients to their data, which is an advance but still just a start.) The industry as a whole has failed to make a dent in the 90,000 to 100,000 needless deaths that occur in treatment facilities each year. And (according to one speaker) 20% of patients hospitalized under Medicare have to return to the hospital shortly after discharge.

Omens of change

Suffice it to say that by my fourth day at HIMSS I was not happy. Advances come, but slowly. Examples of developments I can give a thumbs-up to at HIMSS were data sharing among physicians who use Practice Fusion, a popular example of a growing move to web services for electronic records, and a CardioEngagement Challenge funded by Novartis to encourage at-risk patients to take more interest in their health. The winner was a Sensei mobile app that acts as an automated coach. Sensei CEO Robert Schwarzberg, a cardiologist, told me he had put together phone-in coaching services for heart patients during the years before mobile apps, and was frustrated that these coaches were available less than once a week when what patients needed was round-the-clock motivation. Sensei Wellness is one of the many mobile apps that make both patients and doctors more connected, and HIMSS quite properly devoted a whole section of the exhibit floor to them.

Talking about Sensei Wellness with Dr. Robert Schwarzberg.

I dropped by the IBM booth for the obligatory demo of Watson's medical application, and some background from Dr. Josko Silobrcic. I also filled in some of this report from an earlier conversation with tech staff.

Medical diagnosis involves more structured data than solving Jeopardy riddles, structure that appears mostly in the form of links between data sets. For instance, medicines are linked to diagnoses, to lab results, and to other medicines (for example, some drugs are contraindicated when the patient is taking other drugs). Watson follows these relationships.

But because Watson is a natural language processing application--based on UIMA, which IBM donated to the Apache Foundation--it doesn't try to do much reasoning to pick out the best diagnosis or treatment, both of which are sometimes requested of it. Instead, it dumps huge indexes of medical articles into its data stores on one side, and takes in the text about the patient's complaint and doctor's evaluation on the other. Matching them up is not so different from a Jeopardy question, after all. Any possible match is considered and kept live until the final round of weighing answers, even if the chance of matching is near zero.
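
As a toy analogue of that matching step (emphatically not IBM's implementation), the sketch below ranks a few invented article snippets against free-text case notes with TF-IDF similarity, keeping a score for every candidate rather than discarding any early.

    # Toy analogue of the matching described above: rank indexed article text
    # against free-text case notes, keeping every candidate alive with a score.
    # A simple TF-IDF illustration, not IBM Watson's method; the snippets are
    # invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    articles = [
        "Beta blockers reduce mortality after myocardial infarction.",
        "Metformin is first-line therapy for type 2 diabetes mellitus.",
        "Inhaled corticosteroids control persistent asthma symptoms.",
    ]
    case_notes = "55-year-old with type 2 diabetes and elevated HbA1c, on no medication."

    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(articles + [case_notes])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

    # Even near-zero candidates keep a score until the final weighing.
    for score, text in sorted(zip(scores, articles), reverse=True):
        print(f"{score:.3f}  {text}")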

Dr. Josko Silobrcic before Watson demonstration.

Also because of the NLP basis for matching, there is rarely a need to harmonize disparate data taken in from different journals or medical sources.

I assumed that any processing that uses such a large data set and works so fast must run on a huge server farm, but the staff assured me it's not as big as one would think. For production use, of course, they'll need to take into account the need to scale. The medical informatics equivalent of a Christmas rush on sales would be an epidemic where everybody in the region is urgently hitting Watson for critical diagnoses.

Coming to peace

Healing came to me on my last day at HIMSS, at two related conferences off to the side of the main events: a meeting of Open Health Tools members and the eCollaboration forum, run by health activists who want to break down barriers to care. Both groups have partnerships with HIMSS.

Open Health Tools positions itself as an umbrella organization for projects making free software for a lot of different purposes in health care: recording, treatment, research and more. One illustrative project I got to hear about at their meeting was the Medical Imaging Network Transport (MINT), which Johns Hopkins is working on in coordination with other teams.

MINT cuts down on the transfers of huge images by doing some processing in place and transferring only portions of the data. Switching to modern storage formats (XML and JSON) and better methods of data transfer also reduces waste. For instance, current DICOM vendors transmit images over TCP, which introduces more overhead than necessary when handling the packet losses engendered by transmitting files that are several gigabytes in size. MINT allows UDP and other protocols that are leaner than TCP.
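
As a very rough illustration of the "transfer only the portion you need" idea (not the MINT protocol itself), the sketch below asks an imaging server for just the first megabyte of a large object using a plain HTTP Range request; the URL is a placeholder.

    # Rough illustration of partial transfer with an HTTP Range request.
    # The URL is a placeholder and this is not MINT's actual API.
    import requests

    STUDY_URL = "https://imaging.example.org/studies/123/frames/1"  # hypothetical

    # Ask for only the first megabyte of a multi-gigabyte object.
    resp = requests.get(STUDY_URL, headers={"Range": "bytes=0-1048575"}, timeout=30)

    if resp.status_code == 206:   # Partial Content: the server honored the range
        print(f"Received {len(resp.content)} bytes of the requested slice")
    else:
        print(f"Got status {resp.status_code}; the server may not support ranges")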

Best of all, MINT DICOM images can be displayed through HTML5, which means any browser can view them in good resolution, there is no need to install a specialized viewer at each location where the doctor is checking the image, and dependence on proprietary software is reduced. (The same reliance on standard browsers is also claimed by eMix in a recent interview.)

At the eCollaboration forum, E-patient Dave DeBronkart reported that being an engaged patient still means swimming upstream. It's hard to get one's records, hard to find out what treatments will cost, and hard to get taken seriously as an adult interested in monitoring one's own care. Meg McCabe of Aetna says that insurers need to offer more sophisticated guidance to patients trying to choose a health provider--simple lists of options are confusing and hard to choose from.

One speaker warned providers that if they try to open their data for collaborative care, they may find themselves hampered by contracts that maintain vendor ownership of EHR data. But speakers assured us vendors are not evil. The issue is what the providers ask for when they buy the EHR systems.

Here's the strange thing about the eCollaboration forum: they signed up enough people to fill the room ahead of time and left many potential attendees lamenting that they couldn't get in. Yet on the actual day of the event, there were about eight empty seats for every attendee. Maybe HIMSS attendees felt they had to devote all their time to the Stage 2 regulations mentioned earlier. But I take the disappointing turn-out as a sign of the providers' and vendors' lack of commitment to change. Shown a dazzling roster of interesting talks about data exchange, open record sharing, and patient engagement, they're quick to sign up--but they don't show up when it counts.

As members of the general public, we can move the health care field forward by demanding more from our providers, at the point where we have some influence. Anyone looking for concrete guidance for increasing their influence as a patient can try e-Patients Live Longer: The Complete Guide to Managing Health Care Using Technology, by Nancy B. Finn.

Public attention and anger have been focused on insurers, who have certainly engaged in some unsavory practices to avoid paying for care--but nothing as destructive as the preventable errors and deaths caused by old-fashioned medical practices. And while economists complain about the 30 cents out of every dollar wasted in the American hodge-podge of payment systems, we know that unnecessary medical procedures, and conversely preventive steps that were omitted, also suck up a lot of money. One speaker at the eCollaboration forum compared the sky-rocketing costs of health care and insurance to a financial bubble that can't last. Let's all take some responsibility for instituting better medical and reporting systems so the costs come down in a healthy manner.

Other articles about HIMSS were posted last Tuesday and Wednesday.

February 27 2012

Big data is the next big thing in health IT

During the 2012 HIMSS conference in Las Vegas I was invited by Dell Healthcare, along with a group of health IT experts, to discuss issues in health information technology. The session sparked some passionate discourse about the challenges and opportunities that are important to the health IT industry.

Moderator Dan Briody started the event with a question about things we had seen at HIMSS that had changed our thinking about health IT. Never being shy, I jumped right in and spoke about the issues of payment reform and how the private market is beginning to show signs of disruptive innovation. After a great deal of back and forth among the panelists it seemed we slipped into listing many of the barriers — technological, political and cultural — that health IT faces. I was hoping we would get back to sharing possible solutions, so I made the proposal that big data is the next big thing in health IT (see video below).

When I talk about "big data" I am referring to a dataset that is too large for a typical database software tool to store, manage, and analyze. Obviously, as technology changes and improves, the size of a dataset that would be qualify as "big data" will change as well. There is also a big data difference between healthcare and other industry sectors, since there are different tools available and the required datasets have varying sizes. Since health data is very personal and sensitive, it also has special security and privacy protections. This makes sharing, aggregating, sorting and analyzing the data sometimes challenging.

Another difficulty in making the most of big data in healthcare is that those who control different pools of data have different financial incentives. There is a lack of transparency in performance, cost and quality: payers would gain from decreasing revenue to providers, but the providers control the clinical data that must be analyzed in order to pay for value. The payers control another pool, claims data, which is not very useful for advanced analysis that will provide real insight. But enabling transparency of the data will help to identify and analyze sources of variability as well as find waste and inefficiencies. Publishing quality and performance data will also help patients make informed health decisions.

The proliferation of digital health information, including both clinical and claims information, is creating some very large datasets. This also creates some significant opportunity. For instance, analyzing and synthesizing clinical records and claims data can help identify patients appropriate for inclusion in a particular clinical trial. These new datasets can also help to provide insight into improved clinical decision making. One great example of this is when an analysis of a database of 1.4 million Kaiser Permanente members helped determine that Vioxx, a popular pain reliever that was widely used by arthritis patients, was dangerous. Vioxx was a big moneymaker for Merck, generating about $2.5 billion in yearly sales, and there was quite a battle to get the drug off the market. Only by having the huge dataset available from years of electronic health records, and tools to properly analyze the data, was this possible.

The big data portion of the Dell think tank discussion is embedded below. You can find video from the full session here.


February 24 2012

Four short links: 24 February 2012

  1. Excel Cloud Data Analytics (Microsoft Research) -- clever--a cloud analytics backend with Excel as the frontend. Almost every business and finance person I've known has been way more comfortable with Excel than any other tool. (via Dr Data)
  2. HTTP Client -- Mac OS X app for inspecting and automating a lot of HTTP. cf the lovely Charles proxy for debugging. (via Nelson Minar)
  3. The Creative Destruction of Medicine -- using big data, gadgets, and sweet tech in general to personalize and improve healthcare. (via New York Times)
  4. EFF Wins Protection of Time Zone Database (EFF) -- I posted about the silliness before (the maintainers of the only comprehensive database of time zones were being threatened by astrologers). The EFF stepped in, beat back the buffoons, and now we're back to being responsible when we screw up timezones for phone calls.

February 23 2012

Direct Project will be required in the next version of Meaningful Use

The Office of the National Coordinator for Health Information Technology (ONC) announced that the Direct Project would be required in Stage 2 of Meaningful Use.

As usual, the outside world knew almost instantly because of Twitter, with nearly simultaneous posts from @ahier (Brian Ahier) and @techydoc (Steven Waldren, MD). More information followed shortly after from @amalec (Arien Malec), a former leader of the Direct Project.


There are some other important announcements ahead of the official release, such as the end of support for the CCR format, but this requirement element has the deepest implications. This is jaw-dropping news! Meaningful Use is the standard by which all doctors and hospitals receive money for Electronic Health Record (EHR) systems from the federal government. In fact, the term "Electronic Health Record" is really just a synonym for "meaningful use software" (at least in the U.S. market). Meaningful Use is at the heart of what health IT will look like in the United States over the coming decades.

The Direct Project has a simple but ambitious goal: to replace the fax machine as the point-to-point communications tool for healthcare. That goal depends on adoption and nothing spurs adoption like a mandate. Every Health Information Exchange (HIE) in the country is going to be retooling as the result of this news. Some of them will be totally changing directions.
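
In the plainest terms, a Direct exchange looks like ordinary email between provider-specific addresses. The sketch below shows only that shape: the addresses, host, and attachment are placeholders, and a real Direct message must also be S/MIME signed and encrypted against certificates in a trust framework, which this sketch omits.

    # Bare-bones sketch of sending a care summary to a Direct address over SMTP.
    # Addresses, host, and file are placeholders; real Direct messages must also
    # be S/MIME signed and encrypted, which this sketch omits.
    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "dr.jones@direct.examplehospital.org"   # hypothetical Direct address
    msg["To"] = "dr.smith@direct.exampleclinic.org"       # hypothetical Direct address
    msg["Subject"] = "Referral: summary of care"
    msg.set_content("Summary of care attached.")

    with open("summary_of_care.xml", "rb") as f:          # e.g., an exported care summary
        msg.add_attachment(f.read(), maintype="application", subtype="xml",
                           filename="summary_of_care.xml")

    with smtplib.SMTP("smtp.examplehospital.org", 587) as smtp:   # hypothetical relay
        smtp.starttls()
        smtp.send_message(msg)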



This mandate will make the Direct Project into the first Health Internet platform. Every doctor in the country will eventually use this technology to communicate. Given the way that healthcare is financed in the U.S., it is reasonable to say that doctors will either have a Direct email address to communicate with other doctors and their patients in a few years, or they will probably retire from the practice of medicine.

It was this potential, to be the first reliable communications platform for healthcare information, that has caused me to invest so heavily in this project. This is why I contributed so much time to the Direct Project Security and Trust Working Group when the Direct Protocol was just forming. This is an Open Source project that can still use your help.



The Direct Project is extensively covered in "Meaningful Use and Beyond" (chapter 11 is on interoperability). I wrote about the advantages of the Direct Project architecture. I helped arrange talks about Direct at OSCON in 2010, and in 2011 I gave an OSCON keynote about the Health Internet, which featured Direct. I wrote a commentary for the Journal of Participatory Medicine about how accuracy is more important than privacy for healthcare records and how to use the Direct Project to achieve that accuracy. I pointed out that the last significant impact from Google Health would be to make Direct more important. I am certainly not the only person at O'Reilly who has recognized the significance of the Direct Project, but I am one of the most vocal and consistent advocates of the Direct Project technology approach. So you can see why I think this is a big announcement.

Of course, we will not know for sure exactly what has been mandated by the new revisions of Meaningful Use, but it is apparent that this is a huge victory for those of us who have really invested in this effort. My hat is off to Sean Nolan and Umesh Madan from Microsoft, to Brian Behlendorf and Arien Malec, who were both at ONC during the birth of Direct, to Dr. David Kibbe, Brett Peterson and to John Moehrke. There are countless others who have contributed to the Direct Project, but these few are the ones who had to tolerate contributing with me, which I can assure you, is above and beyond the call of duty.

Obviously, we will be updating "Meaningful Use and Beyond" to include this new requirement as well as the other changes to the next version of Meaningful Use (which apparently will no longer be called "stage 2"). Most of the book will not change however, since it focuses on covering what you need to know in order to understand the requirements at all. While the requirements will be more stringent as time goes on, the core health IT concepts that are needed to understand them will not change that much. However, I recommend that you get a digital copy of the book directly through O'Reilly, because doing so entitles you to future versions of the book for free. You can get today's version and know we will update your digital edition with the arrival of subsequent versions of the Meaningful Use standard.



I wonder what other changes will be in store in the new requirements. ONC keeps promising to release the new rule "tomorrow." Once the new rules emerge, they will be devoured instantly, and you can expect to read more about the new standards here. The new rule will be subject to a 60-day comment period. It will be interesting to see if the most dramatic aspects of the rule survive that period. Supporters of CCR will be deeply upset and there are many entrenched EHR players who would rather not support Direct. Time will tell if this is truly a mandate, or merely a strong suggestion.


Meaningful Use and Beyond: A Guide for IT Staff in Health Care — Meaningful Use underlies a major federal incentives program for medical offices and hospitals that pays doctors and clinicians to move to electronic health records (EHR). This book is a rosetta stone for the IT implementer who wants to help organizations harness EHR systems.


Report from HIMSS 2012: toward interoperability and openness

I was wondering how it would feel to be in the midst of 35,000 people whose livelihoods are driven by the decisions of a large institution at the moment when that institution releases a major set of rules. I didn't really find out, though. The 35,000 people I speak of are the attendees of the HIMSS conference and the institution is the Department of Health and Human Services. But HHS just sort of half-released the rules (called Stage 2 of meaningful use), telling us that they would appear online tomorrow and meanwhile rushing over a few of the key points in a presentation that drew overflow crowds in two rooms.

The reaction, I sensed, was a mix of relief and frustration. Relief because Farzad Mostashari, National Coordinator for Health Information Technology, promised us the rules would be familiar and hew closely to what advisors had requested. Frustration, however, at not seeing the details. The few snippets put up on the screen contained enough ambiguities and poorly worded phrases that I'm glad there's a 60-day comment period before the final rules are adopted.

There isn't much one can say about the Stage 2 rules until they are posted and the experts have a chance to parse them closely, and I'm a bit reluctant to throw onto the Internet one of potentially 35,000 reactions to the announcement, but a few points struck me enough to be worth writing about. Mostashari used his pulpit for several pronouncements about the rules:

  • HHS would push ahead on goals for interoperability and health information exchange. "We can't wait five years," said Mostashari. He emphasized the phrase "standards-based" in referring to HIE.

  • Patient engagement was another priority. To attest to Stage 2, institutions will have to allow at least half their patients to download and transfer their records.

  • They would strive for continuous quality improvement and clinical decision support, key goals enabled by the building blocks of meaningful use.

Two key pillars of the Stage 2 announcement are requirements to use the Direct project for data exchange and HL7's consolidated CDA for the format (the only data exchange I heard mentioned was a summary of care, which is all that most institutions exchange when a patient is referred).

The announcement demonstrates the confidence that HHS has in the Direct project, which it launched just a couple of years ago and which exemplifies a successful joint government/private sector project. Direct will allow health care providers of any size and financial endowment to use email or the Web to share summaries of care. (I mentioned it in yesterday's article.) With Direct, we can hope to leave the cumbersome and costly days of health information exchange behind. The older and more complex CONNECT project will be an option as well.

The other half of that announcement, regarding adoption of the CDA (incarnated as a CCD for summaries of care), is a loss for the older CCR format, which was an option in Stage 1. The CCR was the Silicon Valley version of health data, a sleek and consistent XML format used by Google Health and Microsoft HealthVault. But health care experts criticized the CCR as not rich enough to convey the information institutions need, so it lost out to the more complex CCD.

The news on formats is good overall, though. The HL7 consortium, which has historically funded itself by requiring organizations to become members in order to use its standards, is opening some of them for free use. This is critical for the development of open source projects. And at an HL7 panel today, a spokesperson said they would like to head more in the direction of free licensing and have to determine whether they can survive financially while doing so.

So I'm feeling optimistic that U.S. health care is moving "toward interoperability and openness," the phrase I used in the title of this article and also used in a posting from HIMSS two years ago.

HHS allowed late-coming institutions (those who began the Stage 1 process in 2011) to continue at Stage 1 for another year. This is welcome because they have so much work to do, but means that providers who want to demonstrate Stage 2 information exchange may have trouble because they can't do it with other providers who are ready only for Stage 1.

HHS endorsed some other standards today as well, notably SNOMED for diseases and LRI for lab results. Another nice tidbit from the summit is the requirement to use electronic medication administration (for instance, bar codes to check for errors in giving medicine) to foster patient safety.

February 22 2012

Data for the public good

Can data save the world? Not on its own. As an age of technology-fueled transparency, open innovation and big data dawns around the world, the success of new policy won't depend on any single chief information officer, chief executive or brilliant developer. Data for the public good will be driven by a distributed community of media, nonprofits, academics and civic advocates focused on better outcomes, more informed communities and the new news, in whatever form it is delivered.

Advocates, watchdogs and government officials now have new tools for data journalism and open government. Globally, there's a wave of transparency that will wash over every industry and government, from finance to healthcare to crime.

In that context, open government is about much more than open data — just look at the issues that flow around the #opengov hashtag on Twitter, including the nature of identity, privacy, security, procurement, culture, cloud computing, civic engagement, participatory democracy, corruption, civic entrepreneurship and transparency.

If we accept the premise that Gov 2.0 is a potent combination of open government, mobile, open data, social media, collective intelligence and connectivity, the lessons of the past year suggest that a tidal wave of technology-fueled change is still building worldwide.

The Economist's support for open government data remains salient today:

"Public access to government figures is certain to release economic value and encourage entrepreneurship. That has already happened with weather data and with America's GPS satellite-navigation system that was opened for full commercial use a decade ago. And many firms make a good living out of searching for or repackaging patent filings."

As Clive Thompson reported at Wired last year, public sector data can help fuel jobs, and "shoving more public data into the commons could kick-start billions in economic activity." In the transportation sector, for instance, transit data is open government fuel for economic growth.

There is a tremendous amount of work ahead in building upon the foundations that civil society has constructed over decades. If you want a deep look at what the work of digitizing data really looks like, read Carl Malamud's interview with Slashdot on opening government data.

Data for the public good, however, goes far beyond government's own actions. In many cases, it will happen despite government action — or, often, inaction — as civic developers, data scientists and clinicians pioneer better analysis, visualization and feedback loops.

For every civic startup or regulation, there's a backstory that often involves a broad number of stakeholders. Governments have to commit to open up themselves but will, in many cases, need external expertise or even funding to do so. Citizens, industry and developers have to show up to use the data, demonstrating that there's not only demand, but also skill outside of government to put open data to work in the service of accountability, citizen utility and economic opportunity. Galvanizing the co-creation of civic services, policies or apps isn't easy, but tapping the potential of the civic surplus has attracted the attention of governments around the world.

There are many challenges before that vision comes to pass. For one, data quality and access remain poor. Socrata's open data study identified progress, but also pointed to a clear need for improvement: Only 30% of developers surveyed said that government data was available, and of that, 50% of the data was unusable.

Open data will not be a silver bullet to all of society's ills, but an increasing number of states are assembling platforms and stimulating an app economy.

Results-oriented mayors like Rahm Emanuel and Mike Bloomberg are committing to opening Chicago and opening government data in New York City, respectively.

Following are examples of where data for the public good is already having an impact upon the world we live in, along with some ideas about what lies ahead.

Financial good

Anyone looking for civic entrepreneurship will be hard pressed to find a better recent example than BrightScope. The efforts of Mike and Ryan Alfred are in line with traditional entrepreneurship: identifying an opportunity in a market that no one else has created value around, building a team to capitalize on it, and then investing years of hard work to execute on that vision. In the process, BrightScope has made government data about the financial industry more usable, searchable and open to the public.

Due to the efforts of these two entrepreneurs and their California-based startup, anyone who wants to learn more about financial advisers before tapping one to manage their assets can do so online.

Prior to BrightScope, the adviser data was locked up at the Securities and Exchange Commission (SEC) and the Financial Industry Regulatory Authority (FINRA).

"Ryan and I knew this data was there because we were advisers," said BrightScope co-founder Mike Alfred in a 2011 interview. "We knew data had been filed, but it wasn't clear what was being done with it. We'd never seen it liberated from the government databases."

While they knew the public data existed and had their idea years ago, Alfred said it didn't happen because they "weren't in the mindset of being data entrepreneurs" yet. "By going after 401(k) first, we could build the capacity to process large amounts of data," Alfred said. "We could take that data and present it on the web in a way that would be usable to the consumer."

Notably, the government data that BrightScope has gathered on financial advisers goes further than a given profile page. Over time, as search engines like Google and Bing index the information, the data has become searchable in places consumers are actually looking for it. That's aligned with one of the laws for open data that Tim O'Reilly has been sharing for years: Don't make people find data. Make data find the people.

As agencies adapt to new business relationships, consumers are starting to see increased access to government data. Now, more data that the nation's regulatory agencies collected on behalf of the public can be searched and understood by the public. Open data can improve lives, not least through adding more transparency into a financial sector that desperately needs more of it. This kind of data transparency will give the best financial advisers the advantage they deserve and make it much harder for your Aunt Betty to choose someone with a history of financial malpractice.

The next phase of financial data for good will use big data analysis and algorithmic consumer advice tools, or "choice engines," to help people make better decisions. The vast majority of consumers are unlikely to ever look directly at raw datasets themselves. Instead, they'll use mobile applications, search engines and social recommendations to make smarter choices.

There are already early examples of such services emerging. Billshrink, for example, lets consumers get personalized recommendations for a cheaper cell phone plan based on calling histories. Mint makes specific recommendations on how a citizen can save money based upon data analysis of the accounts added. Moreover, much of the innovation in this area is enabled by the ability of entrepreneurs and developers to go directly to data aggregation intermediaries like Yodlee or CashEdge to license the data.
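To make the idea of a choice engine concrete, here is a minimal sketch in Python. It assumes nothing about how Billshrink or Mint actually work; the plan names, prices and usage figures are invented for illustration.

# A minimal sketch of a "choice engine": given a consumer's usage history,
# rank hypothetical cell phone plans by estimated monthly cost.
# All plan names and prices here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    base_price: float       # monthly fee
    included_minutes: int
    overage_per_minute: float

def estimated_cost(plan, minutes_used):
    overage = max(0, minutes_used - plan.included_minutes)
    return plan.base_price + overage * plan.overage_per_minute

def recommend(plans, usage_history):
    # Use average monthly usage as the basis for comparison.
    avg_minutes = int(sum(usage_history) / len(usage_history))
    return min(plans, key=lambda p: estimated_cost(p, avg_minutes))

plans = [
    Plan("Basic 300", 29.99, 300, 0.25),
    Plan("Plus 900", 49.99, 900, 0.10),
    Plan("Unlimited", 69.99, 10**9, 0.0),
]
print(recommend(plans, [250, 320, 410]).name)

The same pattern, estimating the cost of each option from a consumer's own data and then ranking the options, generalizes to energy plans, bank accounts or health coverage.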


Transit data as economic fuel

Transit data continues to be one of the richest and most dynamic areas for co-creation of services. Around the United States and beyond, there has been a blossoming of innovation in the city transit sector, driven by the passion of citizens and fueled by the release of real-time transit data by city governments.

Francisca Rojas, research director at the Harvard Kennedy School's Transparency Policy Project, has investigated the dynamics behind the disclosure of data by transit agencies in the United States, which she calls one of the most successful implementations of open government. "In just a few years, a rich community has developed around this data, with visionary champions for disclosure inside transit agencies collaborating with eager software developers to deliver multiple ways for riders to access real-time information about transit," wrote Rojas.

The Massachusetts Bay Transportation Authority (MBTA) learned from Portland, Oregon's TriMet that open data is better. "This was the best thing the MBTA had done in its history," said Laurel Ruma, O'Reilly's director of talent and a long-time resident of greater Boston, in her 2010 Ignite talk on real-time transit data. The MBTA's move to make real-time data available and support it has spawned a new ecosystem of mobile applications, many of which are featured at MBTA.com.

There are now 44 different consumer-facing applications for the TriMet system. Chicago, Washington and New York City also have a growing ecosystem of applications.
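Most of these applications are built on the same raw material: schedule and real-time feeds published by the agencies, commonly in the GTFS format (with real-time feeds layered on top of the static schedule). As a rough illustration, the Python sketch below reads a static GTFS stop_times.txt file and lists the next departures from a stop; the file path and stop ID are placeholders rather than any particular agency's data.

# A minimal sketch of what transit apps build on: reading a static GTFS feed
# to list scheduled departures at a stop. Path and stop ID are placeholders.

import csv

def departures_for_stop(stop_times_path, stop_id, limit=5):
    rows = []
    with open(stop_times_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["stop_id"] == stop_id:
                rows.append((row["departure_time"], row["trip_id"]))
    return sorted(rows)[:limit]

for departure_time, trip_id in departures_for_stop("gtfs/stop_times.txt", "STOP_123"):
    print(departure_time, trip_id)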

As more sensors go online in smarter cities, tracking traffic patterns will enable public administrators to optimize routes, schedules and capacity, driving efficiency and a better allocation of resources.

Transparency and civic goods

As John Wonderlich, policy director at the Sunlight Foundation, observed last year, access to legislative data brings citizens closer to their representatives. "When developers and programmers have better access to the data of Congress, they can better build the databases and tools that let the rest of us connect with the legislature."

That's the promise of the Sunlight Foundation's work in general: Technology-fueled transparency will help fight corruption and fraud and reveal the influence behind policies. That work is guided by data generated, scraped and aggregated from government and regulatory bodies. The Sunlight Foundation has been focused on opening up Congress through technology since the organization was founded. Some of its efforts culminated recently with the publication of a live XML feed for the House floor and a transparency portal for House legislative documents.

There are other horizons for transparency through open government data, which broadly refers to public sector records that have been made available to citizens. For a canonical resource on what makes such releases truly "open," consult the "8 Principles of Open Government Data."

For instance, while gerrymandering has been part of American civic life since the birth of the republic, one of the best policy innovations of 2011 may offer hope for improving the redistricting process. DistrictBuilder, an open-source tool created by the Public Mapping Project, allows anyone to easily create legal districts.

"During the last year, thousands of members of the public have participated in online redistricting and have created hundreds of valid public plans," said Micah Altman, senior research scientist at Harvard University Institute for Quantitative Social Science, via an email last year.

"In substantial part, this is due to the project's effort and software. This year represents a huge increase in participation compared to previous rounds of redistricting — for example, the number of plans produced and shared by members of the public this year is roughly 100 times the number of plans submitted by the public in the last round of redistricting 10 years ago," Altman said. "Furthermore, the extensive news coverage has helped make a whole new set of people aware of the issue and has re framed it as a problem that citizens can actively participate in to solve, rather than simply complain about."

Principles for data in the public good

As a result of digital technology, our collective public memory can now be shared and expanded upon daily. In a recent lecture on public data for public good at Code for America, Michal Migurski of Stamen Design made the point that part of the global financial crisis came through a crisis in public knowledge, citing "The Destruction of Economic Facts," by Hernando de Soto.

To create virtuous feedback loops that amplify the signals citizens, regulators, executives and elected leaders need to make better decisions amid a flood of information, data providers and infomediaries will need to embrace the key principles that Migurski's lecture outlined.

First, "data drives demand," wrote Tim O'Reilly, who attended the lecture and distilled Migurski's insights. "When Stamen launched crimespotting.org, it made people aware that the data existed. It was there, but until they put visualization front and center, it might as well not have been."

Second, "public demand drives better data," wrote O'Reilly. "Crimespotting led Oakland to improve their data publishing practices. The stability of the data and publishing on the web made it possible to have this data addressable with public links. There's an 'official version,' and that version is public, rather than hidden."

Third, "version control adds dimension to data," wrote O'Reilly. "Part of what matters so much when open source, the web, and open data meet government is that practices that developers take for granted become part of the way the public gets access to data. Rather than static snapshots, there's a sense that you can expect to move through time with the data."

The case for open data

Accountability and transparency are important civic goods, but adopting open data requires grounded arguments for a city chief financial officer to support these initiatives. When it comes to making a business case for open data, John Tolva, the chief technology officer for Chicago, identified four areas that support the investment in open government:

  1. Trust — "Open data can build or rebuild trust in the people we serve," Tolva said. "That pays dividends over time."
  2. Accountability of the work force — "We've built a performance dashboard with KPIs [key performance indicators] that track where the city directly touches a resident."
  3. Business building — "Weather apps, transit apps ... that's the easy stuff," he said. "Companies built on reading vital signs of the human body could be reading the vital signs of the city."
  4. Urban analytics — "Brett [Goldstein] established probability curves for violent crime. Now we're trying to do that elsewhere, uncovering cost savings, intervention points, and efficiencies."

New York City is also using data internally. The city is doing things like applying predictive analytics to building code violations and housing data to try to understand where potential fire risks might exist.
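A simplified sketch of that kind of analysis appears below. It is not the city's actual model; it just shows how open violation and complaint records could feed a risk score that ranks buildings for inspection, using invented features and data.

# A minimal sketch (not the city's actual model) of scoring buildings for
# inspection priority from violation data. Features and data are invented.

from sklearn.linear_model import LogisticRegression

# Each row: [open code violations, building age in years, prior complaints]
X = [[0, 10, 0], [3, 55, 2], [7, 80, 5], [1, 30, 0], [5, 70, 4], [0, 15, 1]]
y = [0, 0, 1, 0, 1, 0]  # 1 = a serious incident occurred later

model = LogisticRegression().fit(X, y)

# Rank new buildings by predicted risk so inspectors visit the riskiest first.
candidates = [[6, 75, 3], [1, 20, 0], [4, 60, 2]]
scores = model.predict_proba(candidates)[:, 1]
for features, score in sorted(zip(candidates, scores), key=lambda t: -t[1]):
    print(features, round(float(score), 2))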

"The thing that's really exciting to me, better than internal data, of course, is open data," said New York City chief digital officer Rachel Sterne during her talk at Strata New York 2011. "This, I think, is where we really start to reach the potential of New York City becoming a platform like some of the bigger commercial platforms and open data platforms. How can New York City, with the enormous amount of data and resources we have, think of itself the same way Facebook has an API ecosystem or Twitter does? This can enable us to produce a more user-centric experience of government. It democratizes the exchange of information and services. If someone wants to do a better job than we are in communicating something, it's all out there. It empowers citizens to collaboratively create solutions. It's not just the consumption but the co-production of government services and democracy."

The promise of data journalism

The ascendance of data journalism in media and government will continue to gather force in the years ahead.

Journalists and citizens are confronted by unprecedented amounts of data and an expanded number of news sources, including a social web populated by our friends, family and colleagues. Newsrooms, the traditional hosts for information gathering and dissemination, are now part of a flattened environment for news. Developments often break first on social networks, and that information is then curated by a combination of professionals and amateurs. News is then analyzed and synthesized into contextualized journalism.

Data is being scraped by journalists, generated from citizen reporting, or gleaned from massive information dumps — such as with the Guardian's formidable data journalism, as detailed in a recent ebook. ScraperWiki, a favorite tool of civic coders at Code for America and elsewhere, enables anyone to collect, store and publish public data. As we grapple with the consumption challenges presented by this deluge of data, new publishing platforms are also empowering us to gather, refine, analyze and share data ourselves, turning it into information.
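The underlying pattern is simple: fetch a public dataset, store it in a queryable form, and republish the result. The following Python sketch illustrates that scrape-store-publish loop using the standard library and the requests package; the URL and table layout are hypothetical, and this is not ScraperWiki's own API.

# A minimal sketch of the scrape-store-publish pattern that tools like
# ScraperWiki support. The URL and table layout here are hypothetical.

import csv
import io
import sqlite3
import requests

resp = requests.get("https://example.gov/spending.csv")  # placeholder URL
resp.raise_for_status()

conn = sqlite3.connect("public_data.db")
conn.execute("CREATE TABLE IF NOT EXISTS spending (agency TEXT, amount REAL)")

for row in csv.DictReader(io.StringIO(resp.text)):
    conn.execute("INSERT INTO spending VALUES (?, ?)",
                 (row["agency"], float(row["amount"])))
conn.commit()

# Once stored, the data can be republished as JSON, an API, or a visualization.
for agency, total in conn.execute(
        "SELECT agency, SUM(amount) FROM spending GROUP BY agency"):
    print(agency, total)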

There are a growing number of data journalism efforts around the world, from New York Times interactive features to the award-winning investigative work of ProPublica. Here are just a few promising examples:

  • Spending Stories, from the Open Knowledge Foundation, is designed to add context to news stories based upon government data by connecting stories to the data used.
  • Poderopedia is trying to bring more transparency to Chile, using data visualizations that draw upon a database of editorial and crowdsourced data.
  • The State Decoded is working to make the law more user-friendly.
  • Public Laboratory is a tool kit and online community for grassroots data gathering and research that builds upon the success of Grassroots Mapping.
  • Internews and its local partner Nai Mediawatch launched a new website that shows incidents of violence against journalists in Afghanistan.

Open aid and development

The World Bank has been taking unprecedented steps to make its data more open and usable to everyone. The data.worldbank.org website that launched in September 2010 was designed to make the bank's open data easier to use. In the months since, more than 100 applications have been built using the data.

"Up until very recently, there was almost no way to figure out where a development project was," said Aleem Walji, practice manager for innovation and technology at the World Bank Institute, in an interview last year. "That was true for all donors, including us. You could go into a data bank, find a project ID, download a 100-page document, and somewhere it might mention it. To look at it all on a country level was impossible. That's exactly the kind of organization-centric search that's possible now with extracted information on a map, mashed up with indicators. All of sudden, donors and recipients can both look at relationships."

Open data efforts are not limited to the World Bank. More data-driven transparency in aid spending is also coming online. Last year, the United States Agency for International Development (USAID) launched a public engagement effort to raise awareness about the devastating famine in the Horn of Africa. The FWD campaign includes a combination of open data, mapping and citizen engagement.

"Frankly, it's the first foray the agency is taking into open government, open data, and citizen engagement online," said Haley Van Dyck, director of digital strategy at USAID, in an interview last year.

"We recognize there is a lot more to do on this front, but are happy to start moving the ball forward. This campaign is different than anything USAID has done in the past. It is based on informing, engaging, and connecting with the American people to partner with us on these dire but solvable problems. We want to change not only the way USAID communicates with the American public, but also the way we share information."

USAID built and embedded interactive maps on the FWD site. The agency created the maps with open source mapping tools and published the datasets it used to make these maps on data.gov. All are available to the public and media to download and embed as well.

Publishing the maps and the open data that drives them online at the same time is a significant step forward for a government agency, and it sets a worthy bar for future efforts to meet. USAID accomplished this by migrating its data to an open, machine-readable format.

"In the past, we released our data in inaccessible formats — mostly PDFs — that are often unable to be used effectively," said Van Dyck. "USAID is one of the premiere data collectors in the international development space. We want to start making that data open, making that data sharable, and using that data to tell stories about the crisis and the work we are doing on the ground in an interactive way."

Crisis data and emergency response

Unprecedented levels of connectivity now exist around the world. According to a 2011 survey from the Pew Internet & American Life Project, more than 50% of American adults use social networks, 35% of American adults have smartphones, and 78% of American adults are connected to the Internet. When combined, those factors mean that we now see earthquake tweets spread faster than the seismic waves themselves. Networked publics can now share the effects of disasters in real time, providing officials with unprecedented insight into what's happening. Citizens act as sensors in the midst of the storm, creating an ad hoc system of networked accountability through data.

The growth of an Internet of Things is an important evolution. What we saw during Hurricane Irene in 2011 was the increasing importance of an Internet of people, where citizens act as sensors during an emergency. Emergency management practitioners and first responders have woken up to the potential of using social data for enhanced situational awareness and resource allocation.

An historic emergency social data summit in Washington in 2010 highlighted how relevant this area has become. And last year's hearing in the United States Senate on the role of social media in emergency management was "a turning point in Gov 2.0," said Brian Humphrey of the Los Angeles Fire Department.

The Red Cross has been at the forefront of using social data in a time of need. That's not entirely by choice, given that news of disasters has consistently broken first on Twitter. The challenge is for the men and women entrusted with coordinating response to identify signals in the noise.

First responders and crisis managers are using a growing suite of tools for gathering information and sharing crucial messages internally and with the public. Structured social data and geospatial mapping suggest one direction where these tools are evolving in the field.

A web application from ESRI deployed during historic floods in Australia demonstrated how crowdsourced social intelligence provided by Ushahidi can enable emergency social data to be integrated into crisis response in a meaningful way.

The Australian flooding web app includes the ability to toggle layers from OpenStreetMap, satellite imagery, and topography, and then filter by time or report type. By adding structured social data, the web app provides geospatial information system (GIS) operators with valuable situational awareness that goes beyond standard reporting, including the locations of property damage, roads affected, hazards, evacuations and power outages.
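A structured report of that kind can be as simple as a GeoJSON feature carrying a location, a timestamp and a report type. The sketch below shows one such record; the field names and values are illustrative rather than the schema ESRI or Ushahidi actually use.

# A minimal sketch of a structured, geotagged crisis report that a mapping
# application could layer and filter. Fields and values are illustrative.

import json
from datetime import datetime, timezone

report = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [153.03, -27.47]},  # lon, lat
    "properties": {
        "report_type": "road_affected",
        "description": "Water over roadway, impassable to cars",
        "reported_at": datetime(2011, 1, 12, 6, 30, tzinfo=timezone.utc).isoformat(),
        "source": "twitter",
    },
}

# GIS tools can filter features like this by time or report type before mapping.
print(json.dumps(report, indent=2))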

Long before the floods or the Red Cross joined Twitter, however, Brian Humphrey of the Los Angeles Fire Department (LAFD) was already online, listening. "The biggest gap directly involves response agencies and the Red Cross," said Humphrey, who currently serves as the LAFD's public affairs officer. "Through social media, we're trying to narrow that gap between response and recovery to offer real-time relief."

After the devastating 2010 earthquake in Haiti, the evolution of volunteers working collaboratively online also offered a glimpse into the potential of citizen-generated data. Crisis Commons has acted as a sort of "geeks without borders." Around the world, developers, GIS engineers, online media professionals and volunteers collaborated on information technology projects to support disaster relief for post-earthquake Haiti, mapping streets on OpenStreetMap and collecting crisis data on Ushahidi.

Healthcare

What happens when patients find out how good their doctors really are? That was the question that Harvard Medical School professor Dr. Atul Gawande asked in the New Yorker, nearly a decade ago.

The narrative he told in that essay makes the history of quality improvement in medicine compelling, connecting it to the creation of a data registry at the Cystic Fibrosis Foundation in the 1950s. As Gawande detailed, that data was privately held. After it became open, life expectancy for cystic fibrosis patients tripled.

In 2012, the new hope is in big data, where techniques for finding meaning in the huge amounts of unstructured data generated by healthcare diagnostics offer immense promise.

The trouble, say medical experts, is that data availability and quality remain significant pain points that are holding back existing programs.

There are, however, bright spots that suggest what's possible. Dr. Gawande's 2011 essay, which considered whether "hotspotting" with health data could help lower medical costs by giving the neediest patients better care, offered another perspective on the issue. Early outcomes made the approach look compelling. As Dr. Gawande detailed, when a Medicare demonstration program offered medical institutions payments that financed the coordination of care for their most chronically expensive beneficiaries, hospital stays and trips to the emergency room dropped more than 15% over the course of three years. A test program adopting a similar approach in Atlantic City saw a 25% drop in costs.

Through sharing data and knowledge, and then creating a system to convert ideas into practice, clinicians in the ImproveCareNow network were able to improve the remission rate for Crohn's disease from 49% to 67% without the introduction of new drugs.

In Britain, researchers found that the outcomes for adult cardiac patients improved after the publication of information on death rates. With the release of meaningful new open government data about performance and outcomes from the British national healthcare system, similar improvements may be on the way.

"I do believe we are at the beginning of a revolutionary moment in health care, when patients and clinicians collect and share data, working together to create more effective health care systems," said Susannah Fox, associate director for digital strategy at the Pew Internet and Life Project, in an interview in January. Fox's research has documented the social life of health information, the concept of peer-to-peer healthcare, and the role of the Internet among people living with chronic disease.

In the past few years, entrepreneurs, developers and government agencies have been collaboratively exploring the power of open data to improve health. In the United States, the open data story in healthcare is evolving quickly, from new mobile apps that lead to better health decisions to data spurring changes in care at the U.S. Department of Veterans Affairs.

Since he entered public service, Todd Park, the first chief technology officer of the U.S. Department of Health and Human Services (HHS), has focused on unleashing the power of open data to improve health. If you aren't familiar with this story, read the Atlantic's feature article that explores Park's efforts to revolutionize the healthcare industry through better use of data.

Park has focused on releasing data at Health.Data.Gov. In a speech to a Hacks and Hackers meetup in New York City in 2011, Park emphasized that HHS wasn't just releasing new data: "[We're] also making existing data truly accessible or usable," he said, taking "stuff that's in a book or on a website and turning it into machine-readable data or an API."

Park said it's still quite early in the project and that the work isn't just about data — it's about how and where it's used. "Data by itself isn't useful. You don't go and download data and slather data on yourself and get healed," he said. "Data is useful when it's integrated with other stuff that does useful jobs for doctors, patients and consumers."

What lies ahead

There are four trends that warrant special attention as we look to the future of data for public good: civic network effects, hybridized data models, personal data ownership and smart disclosure.

Civic network effects

Community is a key ingredient in successful open government data initiatives. It's not enough to simply release data and hope that venture capitalists and developers magically become aware of the opportunity to put it to work. Marketing open government data is what repeatedly brought federal Chief Technology Officer Aneesh Chopra and Park out to Silicon Valley, New York City and other business and tech hubs.

Despite the addition of topical communities to Data.gov, conferences and new media efforts, government's attempts to act as an "impatient convener" can only go so far. Civic developer and startup communities, from BuzzData to Socrata to newer efforts like Max Ogden's DataCouch, are building a distributed ecosystem that can help foster that community.

Smart disclosure

There are enormous economic and civic good opportunities in the "smart disclosure" of personal data, whereby a private company or government institution provides a person with access to his or her own data in open formats. Smart disclosure is defined by Cass Sunstein, administrator of the White House Office of Information and Regulatory Affairs, as a process that "refers to the timely release of complex information and data in standardized, machine-readable formats in ways that enable consumers to make informed decisions."

For instance, the quarterly financial statements of the top public companies in the world are now available online through the Securities and Exchange Commission.

Why does it matter? The interactions of citizens with companies or government entities generate a huge amount of economically valuable data. If consumers and regulators had access to that data, they could tap it to make better choices about everything from finance to healthcare to real estate, much in the same way that web applications like Hipmunk and Zillow let consumers make more informed decisions.

Personal data assets

When a trend makes it to the World Economic Forum (WEF) in Davos, it's generally evidence that the trend is gathering steam. A WEF report titled "Personal Data: The Emergence of a New Asset Class" suggests that 2012 will be the year when citizens start thinking more about data ownership, whether that data is generated by private companies or the public sector.

"Increasing the control that individuals have over the manner in which their personal data is collected, managed and shared will spur a host of new services and applications," wrote the paper's authors. "As some put it, personal data will be the new 'oil' — a valuable resource of the 21st century. It will emerge as a new asset class touching all aspects of society."

The idea of data as a currency is still in its infancy, as Strata Conference chair Edd Dumbill has emphasized. The Locker Project, which provides people with the ability to move their own data around, is one of many approaches.

The growth of the Quantified Self movement and of online communities like PatientsLikeMe and 23andMe shows how much momentum the idea has gained. In the U.S. federal government, the Blue Button initiative, which enables veterans to download personal health data, has now spread to all federal employees and earned adoption at Aetna and Kaiser Permanente.

In early 2012, a Green Button was launched to unleash energy data in the same way. Venture capitalist Fred Wilson called the Green Button an "OAuth for energy data."

Wilson wrote:

"It is a simple standard that the utilities can implement on one side and web/mobile developers can implement on the other side. And the result is a ton of information sharing about energy consumption and, in all likelihood, energy savings that result from more informed consumers."

Hybridized public-private data

Free or low-cost online tools are empowering citizens to do more than donate money or blood: Now, they can donate time or expertise, or even act as sensors. In the United States, we saw a leading edge of this phenomenon in the Gulf of Mexico, where Oil Reporter, an open source oil spill reporting app, provided a prototype for data collection via smartphone. In Japan, an analogous effort called Safecast grew and matured in the wake of the nuclear disaster that resulted from a massive earthquake and subsequent tsunami in 2011.

Open source software and citizens acting as sensors have steadily been integrated into journalism over the past few years, most dramatically in the videos and pictures uploaded after the 2009 Iran election and during 2011's Arab Spring.

Citizen science looks like the next frontier. Safecast is combining open data collected by citizen science with academic, NGO and open government data (where available), and then making it widely available. It's similar to other projects in which public data and citizen-collected experimental data are beginning to percolate together.

Public data is a public good

Despite the myriad challenges presented by legitimate concerns about privacy, security, intellectual property and liability, the promise of more informed citizens is significant. McKinsey's 2011 report dubbed big data as the next frontier for innovation, with billions of dollars of economic value yet to be created. When that innovation is applied on behalf of the public good, whether it's in city planning, transit, healthcare, government accountability or situational awareness, those effects will be extended.

We're entering the feedback economy, where dynamic feedback loops between customers and corporations, partners and providers, citizens and governments, or regulators and companies can both drive efficiencies and produce leaner, smarter governments.

The exabyte age will bring with it the twin challenges of information overload and overconsumption, both of which will require organizations of all sizes to use the emerging toolboxes for filtering, analysis and action. To create public good from public goods — the public sector data that governments collect, the private sector data that is being collected and the social data that we generate ourselves — we will need to collectively forge new compacts that honor existing laws and visionary agreements that enable the new data science to put the data to work.

Photo: NYTimes: 365/360 - 1984 (in color) by blprnt_van, on Flickr


Report from HIMSS: health care tries to leap the chasm from the average to the superb

I couldn't attend the session today on StealthVest--and small wonder. Who wouldn't want to come see an Arduino-based garment that can hold numerous health-monitoring devices in a way that is supposed to feel like a completely normal piece of clothing? As with many events at the HIMSS conference, which has registered over 35,000 people (at least four thousand more than last year), the StealthVest presentation drew an overflow crowd.

StealthVest sounds incredibly cool (and I may have another chance to report on it Thursday), but when I gave up on getting into the talk I walked downstairs to a session that sounds kind of boring but may actually be more significant: Practical Application of Control Theory to Improve Capacity in a Clinical Setting.

The speakers in this session, from Banner Gateway Medical Center in Gilbert, Arizona, laid out a fairly standard use of analytics to predict when the hospital's units are likely to exceed their capacity, and then to reschedule patients and provider schedules to smooth out the curve. The basic idea comes from chemical engineering, and requires them to monitor all the factors that lead patients to come in to the hospital and that determine how long they stay. Queuing theory can show when things are likely to get tight. Hospitals care a lot about these workflow issues, as Fred Trotter and David Uhlman discuss in the O'Reilly book Meaningful Use and Beyond, and they have a real effect on patient care too.
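To give a flavor of the math involved, the sketch below uses a textbook M/M/c (Erlang C) queuing model to estimate the chance that an arriving patient finds every bed occupied. The arrival rate, length of stay and bed count are invented numbers, not Banner Gateway's actual parameters, and a real capacity model would be considerably richer.

# A simplified illustration of the queuing math behind capacity planning:
# an M/M/c (Erlang C) model estimating the chance an arriving patient must
# wait for a bed. All parameters below are invented for illustration.

from math import factorial

def prob_wait(arrival_rate, avg_stay, beds):
    """Erlang C probability that an arrival finds all beds busy."""
    load = arrival_rate * avg_stay          # offered load, in beds
    if load >= beds:
        return 1.0                          # unstable: demand exceeds capacity
    top = (load ** beds / factorial(beds)) * beds / (beds - load)
    bottom = sum(load ** k / factorial(k) for k in range(beds)) + top
    return top / bottom

# E.g., 5 admissions per day, 4-day average stay, 24 staffed beds.
print(round(prob_wait(5.0, 4.0, 24), 3))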

The reason I find this topic interesting is that capacity planning leads fairly quickly to visible cost savings. So hospitals are likely to do it. Furthermore, once they go down the path of collecting long-term data and crunching it, they may extend the practice to clinical decision support, public health reporting, and other things that can make a big difference to patient care.

A few stats about data in U.S. health care

Do we need a big push to do such things? We sure do, and that's why meaningful use was introduced in the HITECH provisions of the American Recovery and Reinvestment Act. HHS released mounds of government health data on Health.data.gov, hoping to serve a similar purpose. Let's just take a look at how far the United States is from using its health data effectively.

  • Last November, a CompTIA survey (reported by Health Care IT News) found that only 28% of providers have comprehensive EHRs in use, and another 17% have partial implementations. One has to remember that even a "comprehensive" EHR is unlikely to support the sophisticated data mining, information exchange, and process improvement that will eventually lead to lower costs and better care.

  • According to a recent Beacon Partners survey (PDF), half of the responding institutions have not yet set up an infrastructure for pursuing health information exchange, although 70% consider it a priority. The main problem, according to a HIMSS survey, is budget: HIEs are shockingly expensive. There's more to this story, which I reported on from a recent conference in Massachusetts.

Stats like these have to be kept in mind when HIMSS board chair Charlene S. Underwood extolled the organization's achievements in the morning keynote. HIMSS has promoted good causes, but only recently has it addressed the cost, interoperability, and open source issues that could allow health IT to break out beyond the elite institutions large or sophisticated enough to adopt the right practices.

As signs of change, I am particularly happy to hear of HIMSS's new collaboration with Open Health Tools and its acquisition of the mHealth Summit. These should guide the health care field toward more patient engagement and adaptable computer systems. HIEs are another area crying out for change.

An HIE optimist

With the flaccid figures for HIE adoption in mind, I met Charles Parisot, chair of interoperability standards and testing for EHRA, which is HIMSS's Electronic Health Records Association. The biggest EHR vendors and HIEs come together in this association, and Parisot was just stoked with positive stories about their advances.

His take on the cost of HIEs is that most of them just do it in a brute force manner that doesn't work. They actually copy the data from each institution into a central database, which is hard to manage from many standpoints. The HIEs that have done it right (notably in New York state and parts of Tennessee) are sleek and low-cost. The solution involves:

  • Keeping the data at the health care providers, and storing in the HIE only some glue data that associates the patient and the type of data to the provider.

  • Keeping metadata about formats out of the HIE's own code, so that new formats, new codes, and new types of data can easily be introduced into the system without recoding the HIE.

  • Breaking information exchange down into constituent parts--the data itself, the exchange protocols, identification, standards for encryption and integrity, etc.--and finding standard solutions for each of these.

So EHRA has developed profiles (also known by their ONC term, "implementation specifications") that indicate which standard is used for each part of the data exchange. Metadata can be stored in the core HL7 document, the Clinical Document Architecture, and differences between implementations of HL7 documents by different vendors can also be documented.
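In rough outline, the "glue data" approach works like a record locator: the exchange stores pointers and metadata, while the clinical documents stay with the providers that created them. The sketch below illustrates the idea with invented structures and field names; it is not an actual EHRA or IHE specification.

# A minimal sketch of a record-locator registry. Each entry is "glue data":
# who the patient is, what kind of document exists, which format it uses, and
# where to retrieve it from the provider that holds it. All values are invented.

registry = [
    {"patient_id": "P123", "doc_type": "discharge_summary", "format": "CDA-R2",
     "provider": "General Hospital", "endpoint": "https://hospital.example/docs/991"},
    {"patient_id": "P123", "doc_type": "lab_results", "format": "CDA-R2",
     "provider": "Community Clinic", "endpoint": "https://clinic.example/docs/57"},
]

def locate(patient_id, doc_type=None):
    """Return pointers to matching documents; the HIE never stores the documents."""
    return [entry for entry in registry
            if entry["patient_id"] == patient_id
            and (doc_type is None or entry["doc_type"] == doc_type)]

for pointer in locate("P123"):
    print(pointer["provider"], pointer["endpoint"])

Because new format codes are just new metadata values, new kinds of documents can be registered without changing the exchange's own software.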

A view of the different architectures in this approach can be found in an EHRA white paper, Supporting a Robust Health Information Exchange Strategy with a Pragmatic Transport Framework. As a testament to its success, Parisot claimed that the interoperability lab (a huge part of the exhibit hall floor space, and a popular destination for attendees) could set up the software connecting all the vendors' and HIEs' systems in one hour.

I asked him about the simple email solution promised by the government's Direct project, and whether that may be the path forward for small, cash-strapped providers. He accepted that Direct is part of the solution, but warned that it doesn't make things so simple. Unless two providers have a pre-existing relationship, they need to be part of a directory or even a set of federated directories, and assure their identities through digital signatures.

And what if a large hospital receives hundreds of email messages a day from various doctors who don't even know to whom their patients are being referred? Parisot says metadata must accompany any communications--and he's found that it's more effective for institutions to pull the data they want than for referring physicians to push it.

Intelligence for hospitals

Finally, Parisot told me EHRA has developed standards for submitting data to EHRs from 350 types of devices, and has 50 manufacturers working on devices with these standards. I visited the iSirona booth as an example. They accept basic monitoring data, such as pulse readings, from systems that use different formats, and translate over 50 items of information into a simple text format that they transmit to an EHR. They also add networking to devices that communicate only over cables. Outlying values can be rejected by a person monitoring the data. The vendor pointed out that format translation will be necessary for some time to come, because neither vendors nor hospitals will replace their devices simply to implement a new data transfer protocol.
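Conceptually, that translation layer maps each vendor's format onto one simple observation record and screens out implausible values. The sketch below illustrates the idea; the input fields and output layout are hypothetical, not iSirona's actual format.

# A minimal sketch of device-data normalization: vendor-specific readings are
# translated into one simple text line per observation before being sent to an
# EHR. Vendor names, fields and the output layout are hypothetical.

def normalize(vendor, raw):
    # Map each vendor's field names onto a single observation.
    if vendor == "monitor_a":
        value, unit = raw["pulse_bpm"], "bpm"
    elif vendor == "monitor_b":
        value, unit = raw["hr"], raw["units"]
    else:
        return None
    # Hold back implausible values for human review instead of forwarding them.
    if not 20 <= value <= 250:
        return None
    return "|".join([raw["patient_id"], "heart_rate", str(value), unit, raw["timestamp"]])

print(normalize("monitor_a",
                {"patient_id": "P123", "pulse_bpm": 72, "timestamp": "2012-02-21T10:00Z"}))
print(normalize("monitor_b",
                {"patient_id": "P123", "hr": 300, "units": "bpm", "timestamp": "2012-02-21T10:01Z"}))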

For more about devices, I dropped by one of the most entertaining parts of the conference, the Intelligent Hospital Pavilion. Here, after a badge scan, you are somberly led through a series of locked doors into simulated hospital rooms where you get to watch actors in nursing outfits work with lifesize dolls and check innumerable monitors. I think the information overload is barely ameliorated and may be worsened by the arrays of constantly updated screens.

But the background presentation is persuasive: by attaching RFIDs and all sorts of other devices to everything from people to equipment, and basically making the hospital more like a factory, providers can radically speed up responses in emergency situations and reduce errors. Some devices use the ISM "junk" band, whereas more critical ones use dedicated spectrum. Redundancy is built in throughout the background servers.

Waiting for the main event

The US health care field held its breath most of last week, waiting for Stage 2 meaningful use guidelines from HHS. The announcement never came, nor did it come this morning as many people had hoped. Because meaningful use is the major theme of HIMSS, and many sessions were planned on helping providers move to Stage 2, the delay in the announcement put the conference in an awkward position.

HIMSS is also nonplussed over a delay in another initiative, the adoption of a new standard in the classification of disease and procedures. ICD-10 is actually pretty old, having been standardized in the 1980s, and the U.S. lags decades behind other countries in adopting it. Advantages touted for ICD-10 are:

  • It incorporates newer discoveries in medicine than the dominant standard in the U.S., ICD-9, and therefore permits better disease tracking and treatment.

  • Additionally, it's much more detailed than ICD-9 (with an order of magnitude more classifications). This allows the recording of more information but complicates the job of classifying a patient correctly.

ICD-10 is rather controversial. Some people would prefer to base clinical decisions on SNOMED, a standard described in the Meaningful Use and Beyond book mentioned earlier. Ultimately, doctors lobbied hard against the HHS timeline for adopting ICD-10 because providers are so busy with meaningful use. (But of course, the goals of adopting meaningful use are closely tied to the goals of adopting ICD-10.) It was this pushback that led HHS to accede and announce a delay. HIMSS and many of its members were disappointed by the delay.

In addition, there is an upcoming standard, ICD-11, whose sandal some say ICD-10 is not even worthy to lace. A strong suggestion that the industry just move to ICD-11 was aired in Government Health IT, and the possibility was raised in Health Care IT News as well. In addition to reflecting the newest knowledge about disease, ICD-11 is praised for its interaction with SNOMED and its use of Semantic Web technology.

That last point makes me a bit worried. The Semantic Web has not been widely adopted, and if people in the health IT field think ICD-10 is complex, how are they going to deal with drawing up and following relationships through OWL? I plan to learn more about ICD-11 at the conference.

February 21 2012

HIMSS asks: Who is Biz Stone and what is Twitter?


Today, one of the founders of Twitter, Biz Stone, gave the opening keynote at HIMSS.

This is probably going to be the best keynote at HIMSS, followed by a speech from Dr. Farzad Mostashari, which will also be excellent. It goes downhill after that: there will be a talk about politics and another talk from an "explorer." I am sure those will be great talks, but when I go to HIMSS, I want to hear about health information technology. Want to know what @biz actually said? As usual, Twitter itself provides an instant summary.

HIMSS stands for the Healthcare Information and Management Systems Society. The annual HIMSS conference is the largest health IT gathering on the planet. Almost 40,000 people will show up to discuss healthcare information systems. Many of them will be individuals sent by their hospitals to try to find out what solutions they will need to purchase in order to meet meaningful use requirements. But many of the attendees are old-school health IT experts, many of whom have spent entire careers trying to bring technology into a healthcare system that has resisted computerization tooth and nail. This year will likely break all kinds of attendance records for HIMSS. Rightly so: The value of connecting thousands of health IT experts with tens of thousands who are seeking health IT experts has never been higher.

It is ironic that Biz Stone is keynoting this year's conference, because Twitter has changed the health IT game so substantially. I say Twitter specifically, and not "social media" generally. I do not think Facebook or Google+ or your social media of choice has had nearly the impact that Twitter has had on healthcare communications.

HIMSS, and in many cases traditional health IT along with it, is experiencing something of a whirlwind. One force adding wind has been the fact that President Obama has funded EHR systems through meaningful use, and made it clear that the future of healthcare funding will take place at Accountable Care Organizations (ACOs) that are paid to keep people healthy rather than to cover procedures when they are sick. It is hard to overstate the importance of this. Meaningful Use and ACOs will do more to computerize medicine in five years than the previous 50 years did without these incentive changes.

But in the same breath, we must admit that the healthcare system as a whole is strained and unable to meet the needs of millions of its patients. The new force in healthcare is peer-to-peer medicine. There are really only a few things that doctors provide to patients: treatment, facts, or context for those facts. More and more, patients are seeking those facts, and the context for them, from the Internet generally and from other patients specifically. This can be dangerous, but when done correctly it can be revolutionary.

It's not rocket science really; our culture has changed. Baby boomers still wonder if it is OK to discuss sexual issues in polite company. Their kids blog about their vasectomies. It's not just that we blog about vasectomies. We read blogs about vasectomies and consider it normal.

Someday, I will decide whether or not I should get a vasectomy. (I would like to have kids first.) When I make that decision, I might just give @johnbiggs a shout and ask him how it's going. He might not have time to answer me. But some vasectomy patient somewhere will have the time to tell me what it is like. Some epatient will be willing to spend an hour talking to me about what it meant to them to have this procedure. I can talk with patients who had a good experience, and I can talk to patients who had a bad experience. I will have access to insights that my urologist does not have, and most importantly does not have time to discuss with me in any case.

For whatever reason, the epatient community centers around Twitter. More than likely this is because of the fundamentally open nature of this network. Although it is possible to "protect" tweets, most account holders tend to tweet to the whole world. If you are interested in a particular health-related issue, you can use Twitter to find the group of people who are discussing that issue. Twitter is a natural way for people who are connected by a common thought or issue to organize. Facebook, on the other hand, is about connecting with people you already know. The famous quote applies: "Facebook is about people you used to know; Twitter is about people you'd like to know better." You could change that quote to read "Twitter is about people you'd like to know who have had vasectomies."

There are people on Twitter right now discussing very personal health issues. All you need to experience this is to do a little research to understand what hashtag a community is using to connect with each other. For instance:

I intentionally chose diseases that are not easy to discuss in person. Discussion of these delicate issues among people dealing with them happens all the time on Twitter. Very often, Twitter is the place to find and meet people who are dealing with the same healthcare issues that you are, and then to discover another place on the web where patients with similar conditions are gathering and helping each other. For better or worse, Twitter has become a kind of peer-to-peer healthcare marketplace. I think this is about a billion times more interesting than surgeons who update families via Twitter, although that is cool, too.

At Health 2.0 or the OSCON healthcare track, these kinds of insights are regarded as somewhat obvious. It is obvious that patients are seeking each other out using social media technologies and that this must somehow eventually be reconciled with the process that doctors are just undertaking to computerize medicine. But at HIMSS this is a revolutionary idea. HIMSS is full of old-school EHR vendors who are applying technology that was cutting edge in 1995 to 2012 problems. HIMSS is full of hospital administrators who recognize that their biggest barrier to meaningful use dollars is not an EHR, but the fact that 50% of their nurses do not know how to type.

I can promise you that the following conversation will be happening thousands of times in the main hall at HIMSS before Biz Stone speaks:

Attendee 1: Who is this speaking?

Attendee 2: Biz Stone.

Attendee 1: Who is that?

Attendee 2: One of the founders of Twitter.

Attendee 1: What is Twitter?

For this audience, Biz Stone talking about how Twitter revolutionizes healthcare will be electric. I wish I could be there.

Meaningful Use and Beyond: A Guide for IT Staff in Health Care — Meaningful Use underlies a major federal incentives program for medical offices and hospitals that pays doctors and clinicians to move to electronic health records (EHR). This book is a Rosetta Stone for the IT implementer who wants to help organizations harness EHR systems.


February 10 2012

Preview of HIMSS 2012

I am very happy to be attending the Healthcare Information and Management Systems Society (HIMSS) conference this year. We are at a pivotal moment in the history of healthcare in this country and health IT is playing a very prominent role. This will be one of the most important healthcare conferences of the year. If you can't make it to Las Vegas in person, there are opportunities to attend virtually. Just go to himssvirtual.org for more information.

I will be moderating panel presentations at the HIMSS Social Media Center on Tuesday and Wednesday. This year I expect social media to have a much larger presence at the conference, and the new location for the pavilion will put it front and center. Since the keynote this year is from one of the founders of Twitter, Biz Stone, I'm sure there will be a social media flavor throughout the event.

I will also be participating in the brand new eCollaboration Forum at HIMSS on Thursday. The Collaborative Health Consortium has partnered with HIMSS to sponsor a new, exclusive event at the conference focused on the shift to collaborative care platforms as foundations for the transformation to accountable care. Attendees will be able to learn what a collaborative healthcare platform is and why the healthcare industry needs it, discover paths to effectively implement collaborative technologies, and get further resources to help evaluate the solutions available in the shift toward an accountable care model.

I am honored to be moderating a panel with David C. Kibbe, MD MBA, senior advisor at the American Academy of Family Physicians; Jonathan Hare, chairman of Resilient Network Systems; and Scott Rea, vice president GOV/EDU Relations and senior PKI Architect at DigiCert.

Our session, "Developing Trust in the Health Internet as a Platform," will focus on the tools, technologies and rules we must decide upon to establish trust in the Internet as the platform for healthcare. Effective health information exchange of any resource requires deep trust, following from the right architecture and the right rules. We will discuss efforts like DirectTrust.org and the EHR/HIE Interoperability Workgroup as conveners that are creating a community to move us forward.

My fellow Radar blogger Andy Oram will also be on hand to provide context and his own unique perspective (as well as keep me focused on what matters).


February 06 2012

Small Massachusetts HIT conference returns to big issues in health care

I've come to look forward to the Massachusetts Health Data Consortium's annual HIT conference because--although speakers tout the very real and impressive progress made by Massachusetts health providers--you can also hear acerbic and ruthlessly candid critiques of policy and the status quo. Two notable take-aways from last year's conference (which I wrote up at the time) were the equivalence of old "managed care" to new "accountable care organizations" and the complaint that electronic health records were "too expensive, too hard to use, and too disruptive to workflow." I'll return to these claims later.

The sticking point: health information exchange

This year, the spears were lobbed by Ashish Jha of Harvard Medical School, who laid out a broad overview of progress since the release of meaningful use criteria and then accused health care providers of undermining one of its main goals, the exchange of data between different providers who care for the same patient. Through quantitative research (publication in progress), Jha's researchers showed a correlation between fear of competition and low adoption of HIEs. Hospitals with a larger, more secure position in their markets, or in more concentrated markets, were more likely to join an HIE.

The research bolsters Jha's claim that the commonly cited barriers to using HIEs (technical challenges, cost, and privacy concerns) are surmountable, and that the real problem is a refusal to join because a provider fears that patients would migrate to other providers. It seems to me that the government and public can demand better from providers, but simply cracking the whip may be ineffective. Nor should it be necessary. An urgent shortage of medical care exists everywhere in the country, except perhaps in a few posh neighborhoods. There's plenty of demand for all providers. Once insurance is provided to all the people in need, no institution should need to fear a lack of business, unless its performance record is dismal.

Jha also put up some research showing a strong trend toward adopting electronic health records, although the small offices that deliver half the treatment in the United States are still left behind. He warned that to see big benefits, we need to bring in health care institutions that are currently given little attention by the government--nursing homes, rehab facilities, and so forth--and give them incentives to digitize. He wrapped up by quoting David Blumenthal, former head of the ONC, on the subject of HIEs. Blumenthal predicted that we'd see EHRs in most providers over the next few years, and that the real battle would be getting them to adopt health information exchange.

Meanwhile, meaningful use could trigger a shake-out in the EHR industry, as vendors who have spent years building siloed products fail to meet the Stage 2 requirements that fulfill the highest aspirations of the HITECH act that defined meaningful use, including health information exchange. At the same time, a small but steadily increasing number of open source projects have achieved meaningful use certification. So we'll see more advances in the adoption of both EHRs and HIEs.

Low-hanging fruit signals a new path for cost savings

The big achievement in Massachusetts, going into the conference today, was a recent agreement between the state's major insurer, Blue Cross Blue Shield, and the 800-pound gorilla of the state's health care market, Partners HealthCare System. The pact significantly slows the skyrocketing costs that we've all become accustomed to in the United States, through the adoption of global payments (that is, fixed reimbursements for treating patients in certain categories). That two institutions of such weight can relinquish the old, imprisoning system of fee-for-service is news indeed.

Note that the Blue Cross/Partners agreement doesn't even involve the formation of an Accountable Care Organization. Presumably, Partners believes it can pick some low-hanging fruit through modest advances in efficiency. Cost savings you can really count on will come from ACOs, where total care of the patient is streamlined through better transfers of care and intensive communication. Patient-centered medical homes can do even more. So an ACO is actually much smarter than old managed care. But it depends on collecting good data and using it right.

The current deal is an important affirmation of the path Massachusetts took long before the rest of the country in aiming for universal health coverage. We all knew at the time that the Massachusetts bill was not addressing costs and that these would have to be tackled eventually. And at first, of course, health premiums went up because a huge number of new people were added to the rolls, and many of them were either sick or part of high-risk populations.

The cost problem is now being addressed through administrative pressure (at one point, Governor Deval Patrick flatly denied a large increase requested by insurers), proposed laws, and sincere efforts at the private level such as the Blue Cross/Partners deal. I asked a member of the Patrick administration whether the problem could be solved without a new law, and he expressed the opinion that there's a good chance it could be. Steven Fox of Blue Cross Blue Shield said that 70% of their HMO members go to physicians in their Alternative Quality Network, which features global payments. And he said these members have better outcomes at lower costs.

ACOs have a paradoxical effect on health information exchange

Jha predicted that ACOs, while greatly streamlining the exchanges between their member organizations because those exchanges save money, will resist exchanging data with outside providers, because keeping patients is even more important for ACOs than for traditional hospitals and clinics. Only by keeping a patient can an ACO reap the benefits of the investments it makes in long-term patient health.

As Doris Mitchell received an award for her work with the MHDC, executive director Ray Campbell mentioned the rapid growth and new responsibilities of her agency, the Group Insurance Commission, which negotiates all health insurance coverage for state employees, as cities and towns have been transferring their municipal employees to it. A highly contentious bill last year that allowed the municipalities to transfer their workers to the GIC was widely interpreted as a blow against unionized workers, when it was actually just a ploy to save money through the familiar gambit of combining the insured into a larger pool. I covered this controversy at the time.

A low-key conference

Attendance was down at this year's conference, with about half as many attendees and vendors as last year. The lowered interest seemed to be reflected in the fact that none of the three CEOs receiving awards turned up to represent their institutions (the two institutions mentioned earlier for their historic cost-cutting deal--Blue Cross Blue Shield and Partners HealthCare--along with Steward Health Care).

The morning started with a thoughtful look at the requirements for ACOs by Frank Ingari of Essence Healthcare, who predicted a big rise in investment by health care institutions in their IT departments. Later speakers echoed this theme, saying that hospitals should invest less in state-of-the-art equipment that leads to immediately billable activities, and more in the underlying IT that will allow them to collect research data and cut down waste. Some of the benefits available through this research were covered in a talk at the Open Source convention a couple years ago.

Another intriguing session covered technologies available today that could be more widely adopted to improve health care. Videos of robots always draw an enthusiastic response, but a more significant innovation ultimately may be a database McKesson is developing that lets doctors evaluate genetic tests and decide when such tests are worth the money and trouble.

The dozen vendors were joined by a non-profit, Sustainable Healthcare for Haiti. Their first project is one of the most basic health interventions one can make: providing wells for drinkable water. They have a local sponsor who can manage their relationship with the government, and an ambitious mission that includes job development, an outpatient clinic, and an acute care children's hospital.
