I have long been an advocate of a reasonable and measured reaction to “privacy scare tactics”. I have argued, for instance, that it was a good thing that HIPAA does not cover PHR systems. But that does not mean that I do not think privacy is important. In fact, something has been nagging at the back of my brain for several years now:
We typically use technology to provide security, but we typically use policy to protect privacy.
That is deeply problematic. To see why, let us carefully consider privacy and security in a simple context.
Imagine that you have purchased a safe-deposit box at a bank. Safe-deposit boxes are a great example because, if the bank accidentally gives away the money from your account, it’s not really a big deal… after all, it’s a bank; if they lose your money, they probably have some lying around somewhere else. The “lost money” could be restored to you from bank profits. But if you have baby pictures, your medical records, your will, Document X, or your family jewels (not a metaphor) in your box, the bank cannot replace them.
Now, “protecting the contents” really subdivides into two issues: security and privacy.
Security is “the degree of protection against danger, loss, and criminals”… according to Wikipedia… at least for today 😉
In the bank context, “security” means that no one can easily break into the bank vault, grab your “Document X” from your box, and run off. Of course, there are many movies about how bank security can be overrun, but, thankfully, it is not a typical event for a bank to have its safe-deposit boxes robbed. Banks use both technology (the vault, the alarm system, the video system) and policy (only the bank manager can open the vault, the vault can only be opened during the day, etc.) to protect the security of the bank boxes. Note that security is a spectrum from more secure to less secure; there is no such thing as just “secure”. Security can almost always be improved, but eventually improving security begins to interfere with the usefulness of something. If the bank vault could only be opened once a year, it would be more secure, but not very useful.
In the world of health information, “security” means that it is difficult for a hacker to break in and get access to someone’s private health record. That is an oversimplification, but a useful one for this discussion.
Privacy is something else. The source for all knowledge (at least until someone changes it) says: “Privacy is the ability of an individual or group to seclude themselves or information about themselves and thereby reveal themselves selectively.”
In the bank example, privacy is all about who the bank lets into the deposit box. For instance, if the bank suddenly decided that all blood cousins would now get automatic access to each other’s safe-deposit boxes, that would be a violation of your privacy. If your cousin could get access to your Document X because the bank let him, then the problem is not that the vault walls are not thick enough or that the combination is not hard enough. The problem is that the bank changed its basic relationship with you in terms of privacy. This is the first of several principles: Security technology does not necessarily help with privacy.
Perhaps you want your spouse to be able to get access to your box. Perhaps you even want your cousin to have access. But the idea that the bank can just “change the deal” from what you had explicitly allowed is pretty strange. Thankfully banks rarely do that. But you could use technology to ensure that your privacy was protected, even if the bank arbitrarily changed its policy.
If you wanted, you could keep a safe inside your safe-deposit box, and keep your Document X inside the safe. Then you could give your combination to your spouse, so that she/he could also open the safe (as long as you had also told the bank to give your spouse access to the safe-deposit box). Even if the bank decided that your cousin should have access to your box too, it would not matter, since your cousin could not open your safe. (We will pretend, for the sake of this analogy, that explosives and other means of circumventing the security of the safe would not work, and that the safe could not be removed from the safe-deposit box.) Our next principle: It is possible to use technology to help protect privacy.
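The safe-inside-a-safe trick has a direct software analogue: client-side encryption, where you encrypt your data with a key that only you (and anyone you choose, such as a spouse) hold, before handing the data to any service. Here is a toy sketch in Python, using a throwaway XOR one-time pad purely for illustration; a real system would use a vetted cipher, and every name here is made up for the example:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of the data with the key (a toy one-time pad)."""
    return bytes(d ^ k for d, k in zip(data, key))

# You generate the key yourself: the "combination" to your inner safe.
document_x = b"Document X: extremely private"
my_key = secrets.token_bytes(len(document_x))  # never leaves your hands

# Only the ciphertext is handed over to the bank / service provider.
stored_at_bank = xor_bytes(document_x, my_key)

# Even if the bank later grants your cousin access to the box,
# the cousin sees only unreadable ciphertext...
assert stored_at_bank != document_x

# ...while you (or a spouse you share the key with) can still recover it.
assert xor_bytes(stored_at_bank, my_key) == document_x
```

The point of the sketch is where the key lives: the provider's policy about who may open the box becomes irrelevant, because the provider never had the means to read the contents in the first place.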
Which brings me to my point. We need to have more technologies available for protecting health information privacy. We have lots of technologies available for protecting security, but these do not protect privacy at all. These technologies, if they are going to work, need to give people the power to ensure that their health information is protected even from the people who provide a given technology service.
So far we have several principles:
- Security technology does not necessarily help with privacy.
- Privacy policies should not be the only thing protecting our privacy.
- It is possible to use technology to help protect privacy.
- Privacy technologies should prevent unwanted access from insiders and authority figures as well as from “bad guys”.
To these I would like to add some implied corollaries:
Encryption, by itself, is not a privacy technology. It is a security technology; whether it also protects privacy depends entirely on who has the keys. This was the problem with the infamous Clipper chip. The first issue with information-system privacy is, and will always be, “who has the keys?”. So when a service responds to a privacy challenge with “oh, don’t worry, it’s all encrypted,” that’s like saying: “You are afraid that I will fax Document X to your mother? Don’t worry, I keep Document X at home in my safe!” If you are afraid that I am going to fax Document X to someone, the fact that it is now in my safe should not make you feel more comfortable. I can still get into my safe, and I can still fax Document X wherever I want. Document X being in a safe is only helpful if you trust everyone with the keys to the safe.
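To make this corollary concrete, here is a hypothetical sketch (not modeled on any real service) of “encryption at rest” where the provider generates and keeps the key. The toy XOR cipher stands in for any real algorithm:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy cipher; a stand-in for any real encryption algorithm.
    return bytes(d ^ k for d, k in zip(data, key))

class StorageService:
    """Hypothetical service that encrypts data at rest but keeps the key."""

    def store(self, plaintext: bytes) -> None:
        # The SERVICE generates and holds the key, not the user.
        self._key = secrets.token_bytes(len(plaintext))
        self._ciphertext = xor_bytes(plaintext, self._key)

    def disclose(self) -> bytes:
        # "Don't worry, it's all encrypted": yet the service can decrypt
        # and fax your document anywhere, because it holds the keys.
        return xor_bytes(self._ciphertext, self._key)

service = StorageService()
service.store(b"Document X")

# Encryption at rest gives security against outside thieves,
# but zero privacy from the service itself:
assert service.disclose() == b"Document X"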
The other thing is that you simply cannot trust proprietary software to provide privacy. If you cannot read the source code to see what the software does, then it does not matter what kinds of privacy features it advertises, or even who has the keys. At any time the developers could change the code and make it violate your privacy. To further extend and abuse our example, it does not help you to have a safe inside the safe-deposit box if the bank can change the safe’s combination whenever it wants, without you being any the wiser. Only Open Source systems can be trusted to provide privacy features. I do not argue that this is enough, but it is an important starting point.
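One small way technology can back up this argument is a published source digest: if you can hash the code you were actually shipped and compare it against the digest of the published source, a silently changed “combination” becomes detectable. A minimal sketch, in which the source strings are stand-ins rather than real code:

```python
import hashlib

# The project publishes its source and a digest of that source.
published_source = b"def open_safe(combination): ..."
published_digest = hashlib.sha256(published_source).hexdigest()

# Before trusting the software, re-hash what you were actually shipped.
shipped_source = b"def open_safe(combination): ..."
assert hashlib.sha256(shipped_source).hexdigest() == published_digest

# A silently modified build no longer matches the published digest.
tampered_source = b"def open_safe(combination): leak_to_bank(); ..."
assert hashlib.sha256(tampered_source).hexdigest() != published_digest
```

This only works because the source is open: with proprietary software there is no published source to hash against in the first place.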
I have been working on a relatively complicated system for achieving these goals. My initial application is blessedly simple, and so I have been able to avoid some of the issues that commonly make these kinds of systems intractable. My service is still in skunkworks and will continue to be very hush-hush until I make the beta launch. But I will be announcing my designs soon, and I have already submitted those designs to some Internet security professionals whom I respect, to make sure I am moving in a sane direction. This kind of thing is really technically difficult, so I am certainly not promising that I can deliver this kind of technology, but perhaps I can deliver something working that would give other people a place to start. As you might expect, the source code for this project will eventually be made available under a FOSS license.
I do not want to get into the design details yet (so please don’t ask), or even talk about my new application (which is just entering closed beta), but I wanted to start talking about why these issues are important. Please feel free to comment on the features that privacy-protecting technology should have…