August 23 2012

Balancing health privacy with innovation will rely on improving informed consent

Society now faces the question of how to balance the privacy of the individual patient with the immense social good that could come from greater health data sharing. Making health data more open and fluid holds the potential to be both hugely beneficial for patients and enormously harmful. As my colleague Alistair Croll put it this summer, big data may well be a civil rights issue that much of the world doesn’t know about yet.

This tension will likely persist throughout my lifetime as technology spreads around the world. While big data breaches are likely to make headlines, more subtle uses of health data have the potential to enable employers, insurers or governments to discriminate — or worse. Analyzing shopping habits can allow a company to determine that a teenager is pregnant before her father does. People simply don’t realize how much about their lives can be inferred from their data exhaust.

To unlock the potential of health data for the public good, informed consent must mean something. Patients must be given the information and context for how and why their health data will be used in clear, transparent ways. To do otherwise is to duck the responsibility that comes with the immense power of big data.

In search of an informed opinion on all of these issues, I called up Deven McGraw (@HealthPrivacy), the director of the Health Privacy Project at the Center for Democracy and Technology (CDT). Our interview, lightly edited for content and clarity, follows.

Should people feel better about, say, getting their genome decoded because the Patient Protection and Affordable Care Act (PPACA) was upheld by the Supreme Court? What about other health-data-based discrimination?

Deven McGraw: The reality that someone could get data and use it in a way that harms people, such as making it impossible to get affordable health insurance or to get insurance at all, has been a significant driver of the concerns people have about health data for a very long time.

It’s not the only driver of people’s privacy concerns. Just removing the capacity for entities to do harm to individuals using their health data is probably not going to fully resolve the problem.

It’s important to pursue from a policy standpoint, but it’s also the case that people feel stigmatized by their health data. They feel health care is something they want to be able to pursue privately, even if the chances are very low that anybody could get the information and actually harm them with it by denying them insurance or denying them employment, which is an area we actually haven’t fully fixed. Your ability to get life insurance or disability insurance was not fixed by the Affordable Care Act.

Even if you fix all of those issues, privacy protections are about building an ecosystem in health care that people will trust. When they need to seek care that might be deemed to be sensitive to them, they feel like they can go get care and have some degree of confidence that that information isn’t going to be shared outside of those who have a need to know it, like health care providers or their insurance company if they are seeking to be reimbursed for care.

Obviously, public health can play a role. The average individual doesn’t realize that, often, their health data is sent to public health authorities if they have certain conditions or diseases, or even just as a matter of routine reporting for surveillance purposes.

Some of this is about keeping a trustworthy environment for individuals so they can seek the care they need. That’s a key goal for privacy. The other aspect of it is making sure we have the data available for important public purposes, but in a way that respects the fact that this data is sensitive.

We need to not be disrupting the trust people have in the health care system. If you can’t give people some reasonable assurance about how their data is used, there are lots of folks who will decline to seek care or will lie about health conditions when truthfulness is important.

Are health care providers and services being honest about health data use?

Deven McGraw: Transparency and openness about how we use health data in this country is seriously lacking. Part of it is the challenge of being up front with people, disclosing things they need to know but not overwhelming them with so much information in a consent form that they just sign on the bottom and don’t read it and don’t fully understand it.

It’s really hard to get notice and transparency right, and it’s a constant struggle. The FTC report on privacy talks a lot about how hard it is to be transparent with people about data sharing on the Internet or data collection on your mobile phone.

Ideally, for people to be truly informed, you’d give them an exhaustive amount of information, right? But if you give them too much information, the chances that they’ll read it and understand it are really low. So then people end up saying “yes” to things they don’t even realize they’re saying “yes” to.

On the other hand, we haven’t put enough effort into trying different ways of educating people. We, for too long, have assumed that, in a regulatory regime that provides permissive data sharing within the health care context, people will just trust their doctors.

I’ve been to a seminar on researchers getting access to data. The response of one of the researchers to the issue of “How do you regulate data uses for research?” and “What’s the role of consent?” and “What’s the role of institutional review boards?” was, “Well, people should just trust researchers.”

Maybe some people trust researchers, but that’s not really good enough. You have to earn trust. There’s a lot of room for innovative thinking along those lines. It’s something I have been increasingly itchy to try to dive into in more detail with folks who have expertise in other disciplines, like sociology, anthropology and community-building. What does it take to build trusted infrastructures that are transparent, open and that people are comfortable participating in?

There’s no magic endpoint for privacy, like, “Oh, we have privacy now,” versus, “Oh, we don’t have privacy.” To me, the magic endpoint is whether we have a health care data ecosystem that most people trust. It’s not perfect, but it’s good enough. I don’t think we’re quite there yet.

What specifically needs to happen on the openness and transparency side?

Deven McGraw: When I hear about state-based or community-based health information exchanges (HIEs) holding town meetings with people in advance of building the HIE, and working with the physicians in their communities to make sure they’re having conversations with their patients about what’s happening in the community, the electronic records movement, and the HIE they’re building, that’s exactly the kind of work you need to do. When I hear about initiatives where people have actually spent the time and resources to educate patients, it warms my heart.

Yes, it’s fairly time- and resource-intensive, but in my view, it pays huge dividends on the back end, in terms of the level of trust and buy-in the community has in what you’re doing. It’s not that big of a leap. If you live in a community where people tend to go to church on Sundays, reach out to the churches. Ask pastors if you can speak to their congregations. Or bring them along and have them speak to their own congregations. Do tailored outreach to people through vehicles they already trust.

I think a lot of folks are pressed for time and resources, and feeling like digitization of the health care system should have happened yesterday. People are dying from errors in care and not getting their care coordinated. All of that is true. But this is a huge change in health care, and we have to do the hard work of outreach and engagement of patients in the community to do it right. In many ways, it’s a community-by-community effort. We’re not one great ad campaign away from solving the issue.

Is there mistrust for good reason? There have been many years of data breaches, coupled with new fears sparked by hacks enabled by electronic health record (EHR) adoption.

Deven McGraw: Part of it is when one organization has a breach, it’s like they all did. There is a collective sense that the health care industry, overall, doesn’t have its act together. It can’t quite figure out how to do electronic records right when we have breach after breach after breach. If breaches were rare, that would be one thing, but they’re still far too frequent. Institutions aren’t taking the basic steps they could take to reduce breaches. You’re never going to eliminate them, but you certainly can reduce them below where we are today.

In the context of certain secondary data uses, like when parents find out after the fact that blood spots collected from their infants at birth are being used for multiple purposes, you don’t want to surprise people about what you’re doing with their health information, the health information of their children, and that of other family members.

I think most people would be quite comfortable with many uses of health data, including those that do not necessarily directly benefit them but benefit human beings generally, or people who have the same disease, or people like them. In general, we’re actually a fairly generous people, but we don’t want to be surprised by unexpected use.

There’s a tremendous amount of work to do. We have a tendency to think issues like secondary use get resolved by asking for people’s consent ahead of time. Consent certainly plays an important role in protecting people’s privacy and giving them some sense of control over their health care information, but because consent in practice actually doesn’t do such a great job, we can’t over-rely on it to create a trust ecosystem. We have to do more on the openness and transparency side so that people are brought along with where we’re going with these health information technology initiatives.

What do doctors’ offices need to do to mitigate risks from EHR adoption?

Deven McGraw: It’s absolutely true that digitizing data in the absence of the adoption of technical security safeguards puts it much more at risk. You cannot hack into a paper file. If you lose a paper file, you’ve lost one paper file. If you lose a laptop, you’ve lost hundreds of thousands of records, if they’re on there and you didn’t encrypt the data.

Having said that, there are so many tools you can adopt with data in digital form that are much stronger from a security standpoint than anything possible with paper. You can set role-based access controls for who can access a file and track who has accessed it. You can’t do that with paper. You can use encryption technology. You can use stronger identity and authentication measures to make sure the person accessing the data is, in fact, authorized to do so and is who they say they are on the other end of the transaction.
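To make those categories concrete, here is a minimal Python sketch of the three safeguards just described: encryption at rest, role-based access control, and an audit trail of who accessed what. Everything in it (the record store, the roles, the field values) is hypothetical, and a real EHR system would rely on certified, vetted components rather than anything hand-rolled like this.

```python
# Illustrative sketch only: the record store, roles, and values here are
# hypothetical, not any real EHR product's API.
from datetime import datetime, timezone
from typing import Optional

from cryptography.fernet import Fernet  # pip install cryptography

# Encryption at rest: a lost laptop yields only ciphertext without the key.
key = Fernet.generate_key()  # in practice, held in a key vault, not beside the data
cipher = Fernet(key)

records = {
    "patient-001": cipher.encrypt(b"dx: hypertension; rx: lisinopril"),
}

# Role-based access control: what each (hypothetical) role may do.
ROLE_PERMISSIONS = {
    "physician": {"read", "write"},
    "billing": {"read"},
    "front_desk": set(),
}

audit_log = []  # append-only trail of every access attempt, allowed or not


def read_record(user: str, role: str, record_id: str) -> Optional[bytes]:
    """Return plaintext if the role permits reading; log the attempt either way."""
    allowed = "read" in ROLE_PERMISSIONS.get(role, set())
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "record": record_id,
        "allowed": allowed,
    })
    if not allowed:
        return None
    return cipher.decrypt(records[record_id])


print(read_record("dr_lee", "physician", "patient-001"))    # plaintext bytes
print(read_record("visitor", "front_desk", "patient-001"))  # None, but logged
```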

We do need people to adopt those technologies and to use them. You’re talking about a health care industry that has stewardship over some of the most sensitive data we have out there. It’s not the nuclear codes, but for a lot of people, it’s incredibly sensitive data — and yet, we trust the security of that data to rank amateurs. Honestly, there’s no other way around that. The people who create the data are physicians. Most of them don’t have any experience in digital security.

We have to count on the vendors of those systems to build in security safeguards. Then, we have to count on giving physicians and their staffs as much guidance as we can so they can actually deploy those safeguards and don’t create workarounds to them that create bigger holes in the security of the data and potentially create patient safety issues. It’s an enormously complex problem, but it’s not the reason to say, “Well, we can’t do this.”

Due to the efforts of many advocates, as you well know, health data has become a big part of the discussion around open data. What are the risks and benefits?

Deven McGraw: Honestly, people throw the term “open data” around a lot, and I don’t think we have a clear, agreed-upon definition for what that is. It’s a mistake to think that open data means all health data, fully identifiable, available to anybody, for any purpose, for any reason. That would be a totally “open data” environment. No rules, no restrictions, you get what you need. It certainly would be transformative and disruptive. We’d probably learn an awful lot from the data. But at the same time, we’ve potentially completely blown trust in the system because we can give no guarantees to anybody about what’s going to happen with their data.

Open data means creating rules that provide greater access to data but with certain privacy protections in place, such as protections on minimizing the identifiability of the data. That typically has been the way government health data initiatives, for example, have been put forth: the data that’s open, that’s really widely accessible, is data with a very low risk of being identified with a particular patient. The focus is typically on the patient side, but I think, even in the government health data initiatives that I’m aware of, it’s also not identifiable to a particular provider. It’s aggregate data that says, “How often is that very expensive cardiac surgery done and in what populations of patients? What are the general outcomes?” That’s all valuable information but not data at the granular level, where it’s traceable to an individual and, therefore, puts at risk the notion they can confidentially receive care.

We have a legal regime that opens the doors to data use much wider if you mask identifiers in data, remove them from a dataset, or use statistical techniques to render data to have a very low risk of re-identification.
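As a rough illustration of what masking and removing identifiers can look like, here is a sketch loosely in the spirit of HIPAA’s Safe Harbor method, which enumerates 18 categories of identifiers to remove or generalize. The field names and rules below are invented for the example; real de-identification calls for expert determination.

```python
# Hedged illustration: field names and rules are invented, not the actual
# HIPAA Safe Harbor specification.

DIRECT_IDENTIFIERS = {"name", "ssn", "phone", "email", "medical_record_number"}


def deidentify(record: dict) -> dict:
    """Drop direct identifiers and generalize quasi-identifiers."""
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

    # Generalize ZIP code to its first three digits (Safe Harbor style).
    if "zip" in out:
        out["zip"] = out["zip"][:3] + "XX"

    # Report only age, and lump ages over 89 together (a Safe Harbor rule).
    if "birth_year" in out:
        age = 2012 - out["birth_year"]
        out["age"] = "90+" if age > 89 else age
        del out["birth_year"]

    return out


raw = {"name": "Jane Doe", "ssn": "123-45-6789", "zip": "96813",
       "birth_year": 1948, "diagnosis": "type 2 diabetes"}
print(deidentify(raw))
# {'zip': '968XX', 'diagnosis': 'type 2 diabetes', 'age': 64}
```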

We don’t have a perfect regulatory regime on that front. We don’t have any strict prohibitions against re-identifying that data. We don’t have any mechanisms to hold people accountable if they do re-identify the data, or if they release a dataset that then is subsequently re-identified because they were sloppy in how they de-identified it. We don’t have the regulatory regime that we need to create an open data ecosystem that loosens some of the regulatory constraints on data but in a way that still protects individual privacy to the maximum extent possible.

Again, it’s a balance. What we’re trying to achieve is a very low risk of re-identification; it’s impossible to achieve no risk of re-identification and still have any utility in the data whatsoever, or so I’m told by researchers.
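One way researchers put a number on that residual risk is k-anonymity: a dataset is k-anonymous if every combination of quasi-identifiers (ZIP prefix, age band, and so on) is shared by at least k records, so that no individual stands out. A minimal sketch, using an invented dataset:

```python
from collections import Counter


def smallest_group(records, quasi_identifiers):
    """Size of the rarest quasi-identifier combination; higher is safer."""
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(combos.values())


dataset = [
    {"zip": "968XX", "age_band": "60-69", "diagnosis": "diabetes"},
    {"zip": "968XX", "age_band": "60-69", "diagnosis": "hypertension"},
    {"zip": "967XX", "age_band": "30-39", "diagnosis": "asthma"},  # unique combo
]

k = smallest_group(dataset, ["zip", "age_band"])
print(f"dataset is {k}-anonymous")  # 1-anonymous: the third record stands out
```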

It is absolutely the path we need to proceed down. Our health care system is so messed up and fails so many people so much of the time. If we don’t start using this data, learning from it and deploying testing initiatives more robustly, getting rid of the ones that don’t work and more aggressively pursuing the interventions that do, we’re never going to move the needle. And consumers suffer from that. They suffer as much — or more, quite frankly — than they do from violations of their privacy. The end goal here is we need to create a health care system that works and that people trust. You need to be pursuing both of those goals.

Congress hasn’t had much appetite for passing new health care legislation in this election year, aside from the House trying to repeal PPACA 33 times. That would seem to leave reform up to the U.S. Department of Health and Human Services (HHS), for now. Where do we stand with rulemaking around creating regulatory regimes like those you’ve described?

Deven McGraw: HHS certainly has made progress in some areas and is much more proactive on the issue of health privacy than I think they have been in the past. On the other hand, I’m not sure I can point to significant milestones that have been met.

Some of that isn’t completely their fault. Within an administration, there are multiple decision-makers. For any sort of policy matter where you want to move the ball forward, there’s a fair amount of process and approval up the food chain that has to happen. In an election year, in particular, that whole mechanism gets jammed up in ways that are often disappointing.

We still don’t have finalized HIPAA rules from the HITECH changes, which is really unfortunate. And I’m now thinking we won’t see them until November. Similarly, there was a study on de-identification that Congress called for in the HITECH legislation. It’s two years late, creeping up on three, and we still haven’t seen it.

You can point to those and you sort of throw up your hands and say, “What’s going on? Who’s minding the store?” If we know and appreciate that we need to build this trust environment to move the needle forward on using health IT to address quality and cost issues, then it starts to look very bad in terms of a report card for the agency on those elements.

On the other hand, you have the Office of the National Coordinator for Health IT doing more work through setting funding conditions on states to get them to adopt privacy frameworks for health information exchanges.

You have progress being made by the Office for Civil Rights on HIPAA enforcement. They’re doing audits. They have taken more enforcement actions in the past year than in all the years the regulations were in effect before that. They’re getting serious.

From a research perspective, the other thing I would mention is the effort to make the Common Rule — the set of rules that governs federally funded research — more consistent with HIPAA and more workable for researchers. But there’s still a lot of work to be done on that initiative as well.

We started the conversation by saying these are really complex issues. They don’t get fixed overnight. In some respects, fast action is less important than getting it right, but we really should be making faster progress than we are.

What does the trend toward e-patients and peer-to-peer health care mean for privacy, prevention and informed consent?

Deven McGraw: I think the e-patient movement and the increase in people’s use of Internet technologies, like social media, to connect with one another and to share data and experiences in order to improve their care is an enormously positive development. It’s a huge game-changer. And, of course, it will have an impact on privacy.

One of the things we’re going to have to keep an eye on is the fact that one out of six people, when surveyed, say they practice what we call “privacy-protective behaviors.” They lie to their physicians. They don’t seek the care they need, which is often the case with respect to mental illness. Or they seek care outside their area to prevent people they might know who work in their local hospital from seeing their data.

But that’s only one out of six people who say that, so there are an awful lot of people who, from the start, even when they’re healthy, are completely comfortable being open with their data. Certainly when you’re sick, your desire is to get better. And when you’re seriously sick, your desire is to save your life. If sharing your data can help with that, then whatever qualms you may have had about it, if they existed at all, go right out the window.

On the other hand, we have to build an ecosystem that the one out of six people can use as well. That’s what I’m focusing on, in particular, in the consumer-facing health space, the “Health 2.0 space” and on social media sites. It really should be the choice of the individual about how much data they share. There needs to be a lot of transparency about how that data is used.

When I look at a site like PatientsLikeMe, I know some privacy advocates think it’s horrifying and that those people are crazy for sharing the level of detail in their data on that site. On the other hand, I have read few privacy policies that are as transparent and open about what they do with data as PatientsLikeMe’s policy. They’re very up front about what happens with that data. I’m confident that people who go on the site absolutely know what they’re doing. It’s not my job to tell them they can’t do it.

But we also need to create environments so people can get the benefits of sharing their experiences with other patients who have their disease — because it’s enormously empowering and groundbreaking from a research standpoint — without telling people they have to throw all of their inhibitions out the door.

You clearly care about these issues deeply. How did you end up in your current position?

Deven McGraw: I was working at the National Partnership for Women and Families, which is another nonprofit advocacy organization here in town [Washington, D.C.], as their chief operating officer. I had been working on health information technology policy issues — specifically, the use of technology to improve health care quality and to normalize or reduce costs. I was getting increasingly involved as a consumer representative at meetings on health information technology adoption, applauding that adoption and thinking about what the benefits for consumers were and how we could make sure those happened.

The one issue that kept coming up in those conversations was that we know we need to build in privacy protections for this data and we know we have HIPAA — so where are the gaps? What do we need to do to move the ball forward? I never had enough time to really drill down on that issue because I was the chief operating officer of a nonprofit.

At the time, the Health Privacy Project was an independent nonprofit organization that had been founded and led by one dynamic woman, Janlori Goldman. She was living in New York and was ready to transition the work to somebody else. When CDT approached me about being the director of the Health Privacy Project, they were moving it into CDT to take advantage of all the technology and Internet expertise there, at a time when we were trying to move health care aggressively into the digital space. It was a perfect storm: I had been wishing for more time to think through the privacy issues, and this job aligned with the way I like to do policy work, which is to sit down with stakeholders and try to figure out a solution that ideally works for everybody.

From a timing perspective, it couldn’t have been more perfect. It was right during the consideration of bills on health IT. There were hearings on health information technology that we were invited to testify at. We wrote papers to put ourselves on the map in terms of our theory about how to do privacy well in health IT and what the role of patient consent should be, because a lot of the debate was really spinning around that one issue. It’s been a terrific experience. It’s an enormous challenge.

Strata Rx, being held Oct. 16-17 in San Francisco, is the first conference to bring data science to the urgent issues confronting health care.


August 21 2012

Hawaii and health care: A small state takes a giant step forward

In an era characterized by political polarization and legislative stalemate, the tiny state of Hawaii has just demonstrated extraordinary leadership. The rest of the country should now recognize, applaud, and most of all, learn from Hawaii’s accomplishment.

Hawaii enacted a new law that harmonizes its state medical privacy laws with HIPAA, the federal medical privacy law. Hawaii’s legislators and governor, along with an impressive array of patient groups, health care providers, insurance companies, and health information technologists, agreed that having dozens of unique Hawaii medical privacy laws in addition to HIPAA was confusing, expensive, and bad for patients. HB 1957 thus eliminates the need for entities covered by HIPAA to also comply with Hawaii’s complex array of medical privacy laws.

How did this thicket of state medical privacy laws arise?

Hawaii’s knotty web of state medical privacy laws is not unique. There are vast numbers of state health privacy laws across the country — certainly many hundreds, likely thousands. Hawaii alone has more than 50. Most were enacted before HIPAA, which helps explain why there are so many; when no federal guarantee of health privacy existed, states took action to protect their constituents from improper invasions of their medical privacy. These laws grew helter-skelter over decades. For example, particularly restrictive laws were enacted after inappropriate and traumatizing disclosures of HIV status during the 1980s.

These laws were often rooted in a naïve faith that patient consent, rather than underlying structural protection, is the be-all and end-all of patient protection. Consent requirements thus became more detailed and demanding. Countless laws, sometimes buried in obscure areas of state law, created unique consent requirements over mental health, genetic information, reproductive health, infectious disease, adolescent, and disability records.

When the federal government created HIPAA, a comprehensive and complex medical privacy law, the powers in Washington realized that preempting this thicket of state laws would be a political impossibility. As every HIPAA 101 class teaches, HIPAA thus became “a floor, not a ceiling.” All state laws stricter than HIPAA continue to exist in full force.

So what’s so bad about having lots of state health privacy laws?

The harmful consequences of the state medical privacy law thicket coexisting with HIPAA include:

  • Adverse patient impact — First and foremost, the privacy law thicket is terrible for individual patients. The days when we saw doctors in only one state are long gone. We travel, we move, we get sick in different states, we choose caregivers in different states. We need our health information to be rapidly available to us and our providers wherever we are, but these state consent laws make it tough for providers to share records. Even providing patients with our own medical records — which is mandated by HIPAA — is impeded by perceptions that state-specific, or even institution-specific, consent forms must be used instead of national HIPAA-compliant forms.
  • Harmful to those intended to be protected — Paradoxically, laws intended to protect particular groups of patients, like those with HIV or mental health conditions, now undermine their clinical care. Providers sending records containing sensitive content are wary of letting complete records move, yet may be unable to mask the regulated data. When records are incomplete, delayed, or simply unavailable, providers can make wrong decisions and patients can get hurt.
  • Antiquated and legalistic consent forms and systems — Most providers feel obliged to honor a patient’s request to move medical records only in the form of a “wet signature” on a piece of paper. Most then insist that the piece of paper be moved only in person or by 1980s-era fax machines, despite the inconvenience to patients who don’t have a fax machine at hand. HIPAA allows the disclosure of health information for treatment, payment, and health care operations (all precisely defined terms), but because so many state laws require consent for particular situations, it is easier (and way more CYA) for institutions to err on the side of strict consent forms for all disclosures, even when permitted by HIPAA.
  • Obstacles to technological innovation and telemedicine — Digital systems to move information need simplicity: either, yes, the data can move, or no, it cannot. Trying to build systems when a myriad of complex, essentially unknowable laws govern whether data can move, who must consent, on what form, for what duration, or what data subsets must be expurgated becomes a nightmare (see the sketch after this list). No doubt, many health innovators today are operating in blissful ignorance of the state health privacy law thicket, but ignorance of these laws does not protect against enforcement or class-action lawsuits.
  • Economic waste — The state legal thicket hurts us all as taxpayers. Redundant tests and procedures are often ordered when medical records cannot be produced in time. Measuring the comparative effectiveness of alternative treatments and the performance of hospitals, providers, and insurers is crucial to improving quality and reducing costs, but state laws can restrict such uses. The 2009 stimulus law provided billions of dollars for health information technology and information exchange, but some of our return on that national investment is lost when onerous state-specific consent requirements must be baked into electronic health record (EHR) and health information exchange (HIE) design.
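To see why the innovation obstacle above matters to system builders, compare the decision procedure HIPAA alone would require with the one the state-law thicket forces. This is a sketch with invented placeholder rules, not real statutes; the point is the shape of the logic, not its content.

```python
# Under HIPAA alone, whether a record may move for a given purpose is a
# simple predicate.
def can_move_under_hipaa(purpose: str) -> bool:
    return purpose in {"treatment", "payment", "operations"}


# Layer on state consent laws and the same question explodes into per-state,
# per-category special cases. These rules are invented placeholders.
STATE_CONSENT_RULES = {
    "A": {"mental_health": "wet-signature form, expires in 90 days"},
    "B": {"mental_health": "state consent form", "hiv": "notarized consent"},
    "C": {"genetic": "separate consent per disclosure"},
}


def can_move_under_state_thicket(purpose, state_from, state_to,
                                 categories, consents_on_file):
    """Both states' rules may apply, and it is often unclear which controls."""
    if not can_move_under_hipaa(purpose):
        return False
    for state in (state_from, state_to):
        for category in categories:
            required = STATE_CONSENT_RULES.get(state, {}).get(category)
            if required and required not in consents_on_file:
                return False  # the record is blocked, or must be redacted
    return True


# A routine treatment transfer that HIPAA permits outright can still fail:
print(can_move_under_hipaa("treatment"))  # True
print(can_move_under_state_thicket(
    "treatment", "A", "B", {"mental_health"}, consents_on_file=set()))  # False
```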

What can we learn from Hawaii?

Other states should follow Hawaii’s lead by having the boldness and foresight to wipe their own medical privacy laws off the books in favor of a simpler and more efficient national solution that protects privacy and facilitates clinical care. Our national legal framework is HIPAA, plus HITECH, a 2009 law that made HIPAA stricter, plus other new federal initiatives intended to create a secure, private, and reliable infrastructure for moving health information. While that federal framework isn’t perfect, that’s where we should be putting our efforts to protect, exchange, and make appropriate use of health information. Hawaii’s approach of reducing the additional burden of the complex state law layer just makes sense.

Some modest progress has occurred already. A few states are harmonizing their laws affecting health information exchanges (e.g., Kansas and Utah). Some states exempt HIPAA-regulated entities subject to new HITECH breach requirements from also having to comply with the state breach laws (e.g., Michigan and Indiana). These breach measures are helpful in a crisis, to be sure, by saving money on wasteful legal research, but irrelevant from the standpoint of providing care for patients or designing technology solutions or system improvements. California currently has a medical law harmonization initiative underway, which I hope is broadly supported in order to reduce waste and improve care.

To be blunt, we need much more dramatic progress in this area. In the case of health information exchange, states are not useful “laboratories of democracy”; they are towers of Babel that disserve patients. The challenges of providing clinical care, let alone making dramatic improvements while lowering costs, in the context of this convoluted mess of state laws, are severe. Patients, disease advocacy groups, doctors, nurses, hospitals, and technology innovators should let their state legislators know that harmonizing medical privacy laws would be a huge win for all involved.

Photo: Knots by Uncle Catherine, on Flickr
