Facebook and Healthcare Data: the contrarian view.

(Update May 2019.) Publishing this article was part of what started my recent investigation into Facebook’s cybersecurity and privacy practices. The outcome of that investigation means that I do not have any hopeful or positive things to say regarding Facebook, and I believe that I was… and we all were… naive. I no longer hold the position that Facebook can be trusted with any clinical information. Please see https://missingconsent.org for more information and context.

Recently, I read an article reporting that Facebook had been considering partnering with hospitals to connect its social data with the hospitals’ patient data in order to provide improved services to patients. Facebook had decided, in the wake of the Cambridge Analytica fiasco, to put those plans on hold. Here is the video version of the report, which is definitely worth watching.

I thought this was disappointing, because I know that many patients rely on social media generally, and Facebook in particular, to coordinate patient care. Connecting healthcare data with a patient’s social graph, when done with permission and with limited and intelligent goals, could result in real improvements in patient care, especially for our most vulnerable populations.

I tweeted as much:

I have been surprised by the subsequent reactions; very few of my tweets seem to garner this much attention or engagement. Given this reaction, I thought it wise to defend my position more carefully.

I do not think anyone should claim “expertise” in anything as nebulous and unknowable as healthcare cybersecurity currently is, but I am definitely comfortable saying that I am not a novice.

I have spent time thinking carefully about the intersection of healthcare information systems, cybersecurity, and privacy. This has led me to be frequently at odds with other cybersecurity experts who are legitimately concerned about the dangers of connecting too early.

The problem that I see again and again is knee-jerk policy reactions to technological potential and, more generally, a tendency toward talking-head histrionics regarding healthcare information privacy. Probably the most extreme of these, historically, has been my friend Dr. Deborah Peel. Dr. Peel has continued to suggest that all health information exchange halt until it can be made entirely secure and can fully respect patient privacy and ongoing consent.

The problem with that approach is that it tends to drive healthcare data exchange efforts “underground”. The discussion about Facebook’s change in policies is a good example of such fear-mongering. Note how CNBC chose to frame the news that Facebook was NOT going to reach out to hospitals. Let me quote some of the article, highlighting some of the terms that I find concerning.

Facebook sent a doctor on a secret mission to ask hospitals to share patient data

Facebook was in talks with top hospitals and other medical groups as recently as last month about a proposal to share data about the social networks of their most vulnerable patients.

The idea was to build profiles of people that included their medical conditions, information that health systems have, as well as social and economic factors gleaned from Facebook.

Now, CNBC is not as given as some of the other networks to outright fear-mongering, but I do need to quibble with this type of reporting. First, if you read the article closely you will see that the project intended to link data using a two-sided hashing mechanism. This serves to protect the privacy of both the Facebook user data and the hospitals’ patient data. The headline makes it seem like it would be trivial for both the hospital and Facebook to identify these patients. Of course, such a dataset would be relatively simple to re-identify, given how much of Facebook’s user data is public information. But it is highly unlikely that either Facebook or the hospitals intended to release this merged dataset to the public. Still, de-identifying a dataset like this is a useful precaution to ensure that researchers are not tempted to violate patient privacy.
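The article does not spell out the exact mechanism, so the sketch below is only my illustration of how a “two-sided” keyed-hash linkage commonly works, not a description of what Facebook actually built: each party applies the same keyed hash (HMAC here) with a pre-agreed secret to a normalized identifier, and records are joined on the resulting tokens, so neither side ever hands the other a raw name or email. The identifiers, key, and function names are hypothetical.

```python
import hmac
import hashlib

def linkage_token(identifier: str, shared_key: bytes) -> str:
    """Keyed hash of a normalized identifier.

    Both parties normalize identifiers the same way and use the same
    secret key, so matching identifiers yield matching tokens, but the
    tokens cannot be reversed (or brute-forced) without the key.
    """
    normalized = identifier.strip().lower()
    return hmac.new(shared_key, normalized.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical shared secret, negotiated out of band between the two parties.
SHARED_KEY = b"high-entropy-secret-agreed-in-advance"

# Each party hashes the identifiers it holds locally, before sharing anything.
hospital_tokens = {linkage_token(e, SHARED_KEY) for e in ["jane.doe@example.com"]}
facebook_tokens = {linkage_token(e, SHARED_KEY) for e in ["Jane.Doe@example.com "]}

# Records are then joined on tokens, never on raw names or emails.
print(hospital_tokens & facebook_tokens)  # one matching token, no identity exposed
```

The point of the keyed (rather than plain) hash is that an outsider who obtains the tokens cannot simply hash a list of known emails to re-identify them without also having the secret key.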

This type of de-identification strategy would have made the resulting dataset almost useless for Facebook’s main profit center: selling targeted advertising. But the article makes it sound like this is the aim, because the partnership was seeking to “build a profile”. A profile that is not connected to an identity is… well, it’s not a profile. A profile is a kind of aggregation of multiple people, while a dossier is about one specific person. De-identified data is not really either of those things: it is about a specific person, unlike a profile, but unlike a dossier, no identity is attached. In this respect, building a “profile” does not seem like such a big deal; it’s an aggregate of people, a single average that is useful to help understand many potential individuals…

In fact, “building a profile” is clearly not the aim of such an endeavor, but only an intermediate goal. The reason such a “double anonymized” merged dataset would be useful is that you could learn how to help patients by studying it. The research might help the hospitals, and Facebook, to understand how to better serve the patients that they both have as customers. A non-public anonymized dataset like this, shared only between a limited set of researchers representing the two parties who contributed data, is pretty hard to abuse.

In fact, this is exactly the type of research that both Facebook and the hospitals have an independent and shared ethical obligation to undertake. More patients share clinical information across Facebook than through any software designed for that purpose (typically those products are called PHR systems). Facebook users use the platform every day to coordinate caregiving for their friends and family. They use it to coordinate whose turn it is to make dinner, and to coordinate which “friend” is going to show up and hold their loved one’s hand. They use it to promote the GoFundMe pages that have frequently taken the place of comprehensive health insurance in this country. They use it to request prayers, when the pain is really bad, and the pills no longer work.

Is this a good idea? Well, there are many who would warn that sharing data publicly like this is dangerous, and they are profoundly correct. But this sharing is not done because users trust Facebook; quite the contrary. Facebook is tolerated as a gateway to the friendships and family members that Facebook users so desperately need when they become seriously ill.

That use-case is not what Mark Zuckerberg imagined in his Harvard dorm. Frankly, it was a case of great foresight for Mark to guess that people might use his young platform to get laid. But as Facebook has become the de facto mechanism to connect with friends and family, especially across generations, it has also become a very common place for patients to connect with their care community. Or at least the parts of their care community that are NOT professional clinicians.

The professional clinicians not only fail to connect with patients’ friend-and-family networks, they also fail to connect digitally with each other. Instead, they are rewarded for hoarding their portion of a patient’s medical data, a problem regularly referred to as the “silo” problem in healthcare informatics.

For more than three decades, there has been no technical reason why patient data could not be regularly shared between healthcare providers. However, our healthcare system continues to financially reward providers who hoard rather than share data. Healthcare technologists like myself do not, despite appearances to the contrary, work to make data sharing technically possible; instead, we spend our careers desperately seeking technical solutions to health data exchange that are politically palatable.

So I hope you can understand that when two parts of the healthcare ecosystem start to consider collaborating in a way that helps patients, this is something that we should celebrate with… concerned… optimism.

And I am concerned. I am very concerned that Facebook’s basic structure does little to protect those who already share healthcare information across its network.

Just as we should be concerned that Apple’s recently announced hospital integrations will serve to reduce the investments that hospitals make in other patient-data sharing methods, which might widen the digital health divide. Poor people in this country have trouble affording iPhones, which could soon be one of the few ways to conveniently access their own hospital data. But we should cautiously celebrate Apple’s work in this area.

We should be concerned that Google has recently announced multiple new Health IT API initiatives, despite having unceremoniously shut down its previous healthcare API offering.

We should celebrate Grindr’s efforts to encourage regular STD testing, even if this action has been clearly overshadowed by the news that they were sharing HIV status with third-party companies.

Look, if you are this far into the article and thinking that I am defending Facebook’s egregious cybersecurity mistakes, its constantly over-reaching data grabs, and its generally cavalier (even sometimes malicious) attitude towards personal privacy, then you are missing my point entirely. As Twitter user _j3lena_ correctly pointed out, it is only reasonable to assume that there are dozens of other organizations that have Facebook data on the same scale as Cambridge Analytica. That is just the one that we see. (updated to acknowledge _j3lena_’s comments)

Facebook has been a privacy nightmare for years, and I am very hopeful that they might see their failures in these areas as an existential threat. Because they should go out of business if they cannot ensure that their platform is something more than a monetized privacy-abuse vector. Facebook deserves to go the way of the Dodo if it cannot help its users differentiate between real and fake news. Make no mistake, Russians advertising on Facebook is a big problem, but it pales in comparison to the personal consequences for a person who is convinced not to give their child a vaccine because of a Facebook group.

My point is just this: we need to give companies credit when they embrace security best practices as they pursue ethically reasonable goals, like leveraging a hashing-based de-identification scheme in an attempt to use patients’ data to help clinicians improve the care they give those patients. We need to criticize, and if needed, boycott and regulate companies that abuse our data. We need national policies that create real consequences for companies that abuse their positions of trust.

But we also need to give credit where credit is due, and Facebook was probably trying to do some good work with this hospital collaboration.

I hope this better explains my tweet.

-ft

Good Questions:

As always, the Twitter community has given me new things to think about.

  • First, it is not clear what it means for this to have been done “in secret”. If this deal included non-disclosure agreements, that is problematic.
  • Second, and this is something that I did not get into, but that CNBC did a good job emphasizing, especially in the video version of the report: it is not clear how, or if, explicit patient consent would have been involved.

Updates:

Added several good points from Twitter, and, as @corbinpetro pointed out, it’s CNBC and not CBS.