Security in Medical Devices, implications

There are more and more examples of standard hacking techniques being applied in healthcare, with serious consequences. Recent examples include RFID hacking and interference problems.

Recently, a BlackHat talk on hacking medical devices, including pacemakers, has been making the rounds on popular blogs.

What is most dangerous about this is not actually the hack itself, but the fact that knowledge of the hack could become widespread. Think about it: there is no real benefit to a hacker in simply killing a person. It is a serious crime, and unless there is something to gain by doing it, it is unlikely to generate much interest among blackhat hackers.

Now that the information regarding the vulnerability is in normal media channels, a Cracker (another name for a blackhat hacker) can blackmail a person with a pacemaker: “Give me ten thousand dollars or I will remotely shut down your heart.” Before, a victim would have said “that’s impossible” and not worried about it. Now they can go to Google and discover that it is possible. Both Victim and Cracker know that the only way for the Cracker to prove to the Victim that he can stop the Victim’s heart is for the Cracker to actually kill the Victim. So the Victim is left wondering, “Can I afford to take this chance?”

If this happens even once in the real world, you will see a slew of social engineering attacks with this threat as the basis. A Cracker could simply threaten a hundred people with this attack and see how many pay up. The Cracker would not even need to know how to make the hack work. All he would need is a list of people with pacemakers.

Now we get to the real implications. Where is the information about who has a pacemaker installed and who does not? Perhaps someday someone will invent “pacemaker wardriving,” but for the time being, the easiest way to get a list of people with pacemakers is to hack into someone’s Electronic Health Record system.

Currently, the healthcare industry under-invests in information technology. However, with these new vulnerabilities, the value of personal health information is steadily rising. Until now, the typical cracker strategy has been to use identifying information inside PHI to steal someone’s identity, or to use healthcare information (a sexually transmitted disease, for example) to blackmail someone. These new vulnerabilities increase the potential profit from hacking into an EHR, and hospitals, even large ones, do not typically have the kind of defense systems that banks invest in.

Have you ever considered why “The Club” works? These devices are relatively easy for a determined thief to overcome. They work because when you park your BMW in a parking lot and put The Club on it, there is typically another BMW in the lot without one. The thief will take the car that is easier to take. The Club works because of the “low-hanging fruit” principle of security. A person who has decided to take an unethical risk by stealing or cracking is basically saying: “I can tolerate this risk, because it is easier to do this than to make a similar economic gain by legitimate means.” Perhaps some are thrill-seekers, but typically people who break the rules for profit are lazy. The “low-hanging fruit” principle might be phrased: “A thief or cracker will always try the easiest way to profit unethically first.”

As the number of ways to profit from PHI goes up, hospitals and practices will become the low-hanging fruit. This is a problem because your small country doctor is already being squeezed by third-party payers. He does not feel he has the money to invest in proper electronic security measures, and he does not have the skills to judge what legitimate security measures would look like in any case. Information-technology mom-and-popism is rampant in healthcare. The “computer guy” for many doctors is the nephew of the office manager; he might be the smartest kid in the 9th grade, but he has no idea how to properly secure PHI. Healthcare institutions have always been easy to hack, but now they are becoming profitable to hack. They are becoming “low-hanging fruit.”

Concern for these kinds of issues will do little but grow.

-FT

Update: Jon Bartels wrote to mention that Chinese researchers have pushed this concept further.

5 thoughts on “Security in Medical Devices, implications”

  1. Well-written and insightful.

    You compare the security aims of hospitals to those of banks, and I consider this the most telling of all industry comparisons.

    Both health care and finance deal with valuable cargo: lives on the one hand, money on the other. Financial organizations, such as banks and hedge funds, hang a sword over their own necks because of their dependence on investors. They must push themselves to squeeze more and more return from their investments. The criterion for optimization is clear.

    However, when lives *and* money enter the picture, as in the health care industry, even identifying a goal becomes impossible. For example, when I think of “universal health care,” it’s hard for me to define precisely what I mean. Do I mean health care for everyone who needs it? If so, who needs it? This becomes a philosophical question, and I get so mired in determining the right course of action that no action happens.

    I can take the free-market perspective and claim that the goals of finance and health care are the same: to maximize profit. But when health care administrators take this approach, security loses importance.

    I’ve made many unfounded and unfair generalizations, so I hope I’ve sparked some thought. These are issues to which I am still trying to find answers.

  2. I hope I’m not too late to comment on this; it’s been a while since I read the blog.

    The angle taken here has a huge amount of FUD in it, though the underlying points are valid. Yes, remote control of and access to pacemakers is becoming possible. However, “remote” means a few meters at best. It means that the doc (or med tech) can wave a wand around instead of plugging into an interface or having to do surgery again.

    Without specific sources handy, I suspect the ranges involved are on par with those used to skim touchless credit cards. If you’re a cracker after easy money, that’s lower-hanging fruit than pacemakers.

    The real moneymaking criminal element here comes from FUD. People hear ‘remote’ and think ‘ohmigosh, he can kill me from Kerblopistan! Gotta pay the ransom!’ The first step is making this another educational point for the doctor to discuss with the patient. It’s like older implants being sensitive to high-RF environments: you have to tell your patients what’s being put in them, the risks, and their responsibilities.

    Long term, these devices should have some access controls, but they will ultimately be left fairly open. For example, a device could be coded to respond only to the patient’s cardiologist. But what happens in an emergency, when the ER doctor can’t tweak the pacemaker to keep the patient’s heart rate under control? That risk seems much higher, and it would be mitigated by an emergency override or a general ‘any doctor’ access code. That workaround then makes it just a bit harder for a black hat to get at it (there is a rough sketch of what I mean at the end of this comment).

    Further threat mitigation can be done by limiting the effective range. A wand held up to your chest would be perfect: the equipment stays small, and you have to be right there with a doctor or tech for your pacemaker tune-up. If a black hat has to get that close, they might as well just shoot you.

    As always, this provoked some thought. Saketh made some interesting points about earning a fair income vs. maximizing profit vs. serving the patient, tensions I think we all run into; let’s hope the patient comes first and the profit follows.
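
    To make that concrete, here is a minimal sketch (in Python, and purely hypothetical; the key names and the whole interface are my own invention, not anything a real implant exposes) of a ‘cardiologist key plus emergency override’ check:

    ```python
    import hashlib
    import hmac

    # Hypothetical sketch only -- no real pacemaker exposes an interface like this.
    # Idea: the implant accepts a programming command only if it carries a valid
    # MAC under the cardiologist's per-device key, but it also honors a separate
    # emergency key so that any credentialed ER system can override.

    CARDIOLOGIST_KEY = b"per-device secret provisioned at implant time"
    EMERGENCY_KEY = b"escrowed 'any doctor' override key"

    def _valid(key: bytes, command: bytes, tag: bytes) -> bool:
        """Check a command's authentication tag against one key."""
        expected = hmac.new(key, command, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

    def accept_command(command: bytes, tag: bytes) -> bool:
        """Accept a command authenticated by the patient's cardiologist or by
        the 'any doctor' emergency override key; reject everything else."""
        return _valid(CARDIOLOGIST_KEY, command, tag) or _valid(EMERGENCY_KEY, command, tag)
    ```

    The catch is exactly the one above: the more widely that emergency key is shared, the closer the device gets to ‘fairly open’, which is why limiting the effective range still has to do most of the real work.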

  3. The flaw I see in this argument is that although a pacemaker could potentially be hacked in order to kill someone, it would be much easier and cheaper to kill someone in a variety of other ways. I wouldn’t even need to hack an EHR first; I could simply send out threatening letters and see who would respond with payment to my blackmail attempts.

    Far too often, especially in healthcare, I’ve seen absurd arguments such as this one for why we need to limit our reliance on technology. Time and time again I’ve seen healthcare professionals limit the technology implemented in healthcare settings because of irrational security fears. We need to focus on moving forward with innovative solutions first and not let potential security flaws that may or may not actually surface ten years from now stop our progress.

    I can’t even count the number of times that a potential technical solution to a problem has been shot down under the guise of “won’t comply with HIPAA requirements.” Our banks and financial institutions are not only more secure but have kept up with modern technology, building security in to work with the technology rather than dismissing it. We need to move past the mindset of technology fear that is rampant in healthcare and that is preventing great technologies from gaining widespread adoption.

  4. ccrosbie,
    I can understand how I might appear alarmist. I too have seen technophobia based on trumped-up security concerns. I hope my comments on medical device security will not be used to prevent progress.

    However, I disagree with your point regarding “other ways to kill”. It is actually pretty difficult to kill someone without leaving a trace of the fact that you did it. Knives are available all over the world, and killing someone with a knife is pretty straightforward. But a knife killing leaves a substantial amount of evidence behind. To hide a knife murder, you want to ensure that there are no witnesses, and you want to clean up the evidence carefully.

    To hide a pacemaker murder, you would want to ensure that there were hundreds of people around, so that no one could be sure who sent the dangerous signal. Victims would be sought in places like large sporting events, exactly the wrong place to stab someone and get away with it.

    My point, which is still valid, is that this represents a new attack vector, with new implications.

    Being aware of threats, and discussing them, should not directly lead to poor decisions based on fear.

    Thanks for commenting,

    -FT
