Security Reviews in Open Source Health Software

Recently, Shawn Hernan wrote a piece on Microsoft’s security blog that argues that the “many eyeballs” effect does not ensure that open source software is secure.

His argument is excellent, and while I disagree somewhat with his conclusions, his central point is undeniable. His argument, if it can be boiled down to a single quote, is:

The key word in Raymond’s argument is can. I’ll concede that open source can be reviewed by more people than proprietary software, but I don’t think it is reviewed by more people than proprietary software.

The simple reality is that -most- Open Source software projects are -not- popular enough to create a sub-culture of project developers who devote themselves to fixing bugs and ensuring quality code. Note that just because Open Source software is popular with its users does not mean that the size and makeup of its developer community is large enough to encourage such a subculture.

This is where Shawn’s piece breaks down. He does not bother segmenting the various “Open Source” projects, nor does he segment proprietary software companies. One of the most important lessons to learn about software development is that very, very few proprietary software companies are capable of operating at the level Microsoft does.

Let’s imagine for a second what methods would produce the most bug-free, and therefore most secure, software.

The most important factor in determining how buggy software will be is the “game” its developers are playing. “The game” is the basic motivational structure that each individual developer is subject to when writing the given software. This assumes that there is some variance in the rate at which developers debug their own code, and review other developers’ code, based on some kind of reputational, financial, or pleasure incentive.

What happens if you make bug-fixing the primary focus of the game?

If the primary culture of the development team were to develop highly secure and bug-free code, then simple competition and social pressure inside the core development team would help ensure secure code. If a new developer joining the project or company were seeking to establish credibility, he or she would know that finding and fixing a bug would increase their own prominence, probably at the expense of the developer who wrote the bug. The original developer would know that, and would make every effort to slow down and write good code in order not to lose credibility.

The problem with this “game” is that any Open Source project that took this approach would slow to a crawl. New features would be added very slowly, as developers spent lots of time reviewing, testing, and generally obsessing over bugs in any portion of the code. In fact, if secure code and bug-free software were truly the goal, such a project would intentionally continue to slow down as long as going any slower produced better, more stable code.

Frankly, no reasonable business plan for a proprietary software company could ever tolerate such an intentionally metered pace. However, Open Source software presents the opportunity for any community culture to grow and flourish. The OpenBSD project has a focus almost identical to the one I have just described. By several measures, the OpenBSD operating system is the most secure operating system in the world. You might imagine that a project so obsessed with security and bug-fixing would have a web page devoted to its thoughts and practices on the subject, and you would be right.

Shawn Hernan neglects to acknowledge that projects like OpenBSD are possible in Open Source but impossible for proprietary software companies. But it is important that we not straw-man Shawn’s argument. Proprietary software companies have one substantial advantage over Open Source projects: they can pay developers to follow procedures that ensure high-quality code, and they can pay some developers to do nothing but professionally audit code. Microsoft is very good about this. But then, Microsoft is in a fairly unique position regarding its software. You could restate Microsoft’s business plan as “protect the billions that we already make selling operating systems”. The profits from anything the company is doing “to compete with Google”, for instance, pale in comparison to the profits from selling operating systems. But Microsoft recognizes that if it does not compete with Google now, and at least sometimes on Google’s terms, Google will eventually hit Microsoft where it lives, by subverting the all-important operating system profits.

Microsoft has developed strict code-review procedures, like the ones Shawn Hernan mentions, because every time a vulnerability comes out for Microsoft software, the alternatives look better and better. At this stage in Microsoft’s history, it has decided that security is a financial priority, but only after years of ignoring it in favor of other, more profitable business priorities. Just as most Open Source projects are not focused on being fundamentally secure, most proprietary software companies do not have a financial incentive to invest in programmers and procedures that produce fewer bugs and more secure code. In healthcare software especially, bug-free code is an afterthought. This is just the nature of for-profit endeavors: the focus is always on what makes the most money. Microsoft is now in a position where secure code protects profits. Proprietary EHR companies are not in that position. They write code that helps them sell to doctors, and since doctors are typically irrational purchasers, the feature sets and priorities of typical EHR companies are similarly irrational.

It should also be noted that just because Microsoft has a financial incentive to produce secure code in one product line does not mean that this extends to all of its product lines.

There is another way an Open Source project can be more secure and bug-free than code developed by a proprietary software company with a financial incentive to produce solid code. It’s pretty simple really, and Shawn has already acknowledged it:

But could “enough” code review, which might happen for popular projects like Linux and the Apache Web Server compensate for the more comprehensive approach of the SDL?….

Shawn is acknowledging here that there are differences between Open Source projects. Ironically, both the Apache project and the Linux kernel are often subjected to distributed attempts at systematic bug detection similar in scope to the whole contents of Microsoft’s SDL (Security Development Lifecycle). A great example of this is the Security-Enhanced Linux (SELinux) project run by the NSA. The problem with software bugs is that they are often unimportant to the average user, yet a cracker can still use them to break into the system. This point is made most eloquently in the classic paper by Anderson and Needham, “Programming Satan’s Computer”. Projects like SELinux attempt to make such bugs less likely to become security weaknesses, an important step. This is exactly the kind of comprehensive security improvement that, according to Shawn, mere auditing cannot deliver but Microsoft’s SDL does. But the many-eyes principle is not limited to mere audits: the whole point is that anyone can shoehorn bug-detecting methods onto an Open Source project. I will use the term “audit” to embrace all of the bug-detection and bug-fixing techniques that Shawn mentioned, since simple human auditing is often the most onerous and unlikely task for all but the most motivated developers.
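To make the Anderson and Needham point concrete, here is a minimal C sketch, entirely my own illustration rather than code from any project mentioned here, of the kind of bug that no ordinary user ever triggers but that hands a cracker control of the process, and that automated bug-detection tools flag mechanically:

    #include <stdio.h>
    #include <string.h>

    /* No ordinary user types a 16-character name, so this bug stays
       invisible in normal use; an attacker who controls the input can
       overflow buf and corrupt adjacent memory. Static analyzers and
       fuzzers flag unbounded copies like this one. */
    void greet_unsafe(const char *name) {
        char buf[16];
        strcpy(buf, name);               /* no bounds check */
        printf("Hello, %s\n", buf);
    }

    /* The fix is boring: a bounded copy that truncates rather than
       overflowing. */
    void greet_safe(const char *name) {
        char buf[16];
        snprintf(buf, sizeof buf, "%s", name);
        printf("Hello, %s\n", buf);
    }

    int main(void) {
        greet_safe("Dr. Jones");
        return 0;
    }

Tools can find the unsafe version without any human reading the code at all, which is exactly why “audit” should embrace more than manual review.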

So we are now able to make a little ranking: who has the most secure and bug-free development practices, and what kind of “game” do they use to achieve that security? I think it looks like this:

  1. Best: Security Obsessed Open Source: Open Source Projects that build-in bug-fixing at the expense of everything else, like OpenBSD.
  2. Really Good: Popular Open Source with audit teams: Open Source Projects that are so popular -with developers- that they can afford to create a security obsessed sub-culture within the development team. Projects like Apache, Linux, and MySQL (before… well you know…)
  3. Really Good: Proprietary Vendors who pay for audit: Proprietary software vendors who have a financial incentive (Microsoft) or cultural imperative (Fog Creek) to create a development structure that reduces bugs.
  4. OK: Audited Open Source: Open Source projects that are sufficiently popular with developers to actually get some kind of formal code audit, preferably both automated and by a programmer trained in testing.
  5. Crappy: Unaudited Open Source: Open Source projects that are essentially one-man shows, whose code is rarely used and even more rarely looked at. (The vast majority of Open Source software projects in existence fall into this category)
  6. Worst: Unaudited Proprietary: Proprietary companies without a financial incentive to pay for expensive testing and bug-detection essentially have a financial incentive to -ignore- bugs. For most proprietary companies, a software problem is only actually a bug if it prevents a sale or loses a client.

Shawn’s post correctly points out that Microsoft, which is typically in group #3, is much better than Open Source generally, which is typically in group #5. Of course, his argument will be embraced and touted by companies in group #6.

Which brings me back to my subject of passion: Open Source medical software. I believe, tragically, that most Open Source health software projects are in category #5 (Unaudited Open Source). I think this is changing: as companies like Medsphere and ClearHealth mature, they can afford to sponsor auditors for their projects. Others, like MOSS and Indivo, are starting out with a security focus. I think #2 is the right target for us, since it is not clear that OpenBSD is a -better project- just because it is -more secure-. Linux is still more secure than Microsoft Windows, and it is developed at a much greater pace.

Still, I think the lesson here is that the best security happens when people focus on the boring and mundane, starting with reading someone else’s code and moving all the way up to using software to make software more secure. We simply do not have enough security focus in our corner of the Open Source world, and I have to admit it is partly my fault. Before coming to the world of Open Source health software I was trained in information security at the Air Force Information Warfare Center in San Antonio. That paid off in later security contracting with Rackspace, Verisign, and Nokia. I have never been 100% devoted to code auditing in any of my previous roles, but I know enough from rubbing shoulders to know when I should be nervous. As I look around the Open Source health software arena, there is a lot to be nervous about. There are also upsides. Our community has embraced at least two other serious security people: Alesha Adamson with MOSS was trained as a security geek at one of the few NSA National Centers of Academic Excellence, and Ben Adida with Indivo X has a PhD from MIT under the Cryptography and Information Security group.

Both of these people are fully cognizant of security research, and they are in leadership roles in their respective projects and in the community as a whole. But it is important that the whole community learn from their strong kung-fu. We need to develop a security-focused sub-group within the Open Source health information movement. We need people willing to do security audits on not one but several important Open Source codebases. Perhaps we should be trying to get members of the larger security community involved with auditing Open Source healthcare software. I will be thinking more about this, but I would love to hear my readers’ thoughts on the subject.

-FT

9 thoughts on “Security Reviews in Open Source Health Software”

  1. The biggest security problem with open source is one I wrote about at ZDNet some time ago.

    It’s not regularly updated.

    I think my source was a Palamida study. Since major open source projects don’t force updates on all registered users, and many users aren’t even registered anyway, many people are running old code that may have been compromised.

  2. Good point!! But that applies to so much software generally.

    I think the reason that open source has been hurt by this more than others is that geeks are generally offended by the notion of automatic updates. So they do not build automatic updates into their applications directly and instead rely on aggregation projects like Ubuntu/Fedora to do this work.

    -FT

  3. What will change it is when the likes of CCHIT/FDA (at least on your side of the pond) measure quality rather than feature sets. To some extent the FDA do already (and CE marking on this side), but much software doesn’t come under their remit. There appears to be a harmonization to EN 62304, which is good – but again, only for the software under that remit (which isn’t as wide as perhaps it should be).

    PS Seems your WP is 5 releases out! 🙂 Still – better than mine 🙁

  4. I think there are two other phenomena that can impact a security process:

    1) Once code is released into production, it can come under external attack or be subject to internal attacks. Where there are active programmers developing the code, this almost always leads to a security-updating process.

    2) Security is part of the original design goal, i.e. the code has explicit security objectives to achieve during development.

    I would like to note that #2 does not necessarily make a delivered IT environment, such as a health care delivery system, more secure. This is also true of code-auditing exercises.

    The reason for this is that total security is only as strong as the weakest link. Most code is not used in isolation from other collections of code. For example, PHP or .NET code runs inside a web server not programmed by the developer; that web server runs inside an operating system not programmed by the developer; that operating system runs inside a network using code also not programmed by the developer; and that network in turn exposes the original code to user interactions from entirely separate systems (i.e. a workstation running a browser), also not programmed by the developer.

    Fred discusses the ‘game’, or reward system, used to develop code. This turns on financial concerns. One of the primary things that any security process must take into account is a risk/threat assessment. Since there are usually limits on financial resources, security activities, if rational, would proceed down a list ranked by most-likely times most-costly.

    So, when developing a PHP system that is intended to be secure, does the programmer have any incentive at all to pay attention to the server delivery environment? I say not usually. But the customer does. And therein lies one of the serious weaknesses in any market for IT systems that need to be secure: the customers must provide the financial incentives, but they have no way to quantify total system risk. So instead they focus on shifting liability away from themselves and engage in what Bruce Schneier calls security theater.

  5. Shawn pulls a common diversion to confuse us.
    He takes Eric Raymond’s statement “Given enough eyeballs, all bugs are shallow.” and converts it to another statement, “Code review makes software more secure”, and then attacks the second statement by stating that there aren’t that many programmers doing code review for open source and that, of course, Microsoft has more programmers.
    If we go back and evaluate Eric’s statement, we find it much harder to refute. For any given piece of software used by x number of people, there will be a higher percentage of eyeballs engaged in finding bugs in the open source software than in the proprietary software. This is because proprietary software does not give you access to the code and the community that open source does.
    I have found and reported many bugs in open source software just because I can. I have found lots more bugs in Microsoft software, but have not had the community or source code, or even a venue, to report the bugs. The hard part of bug-fixing is finding the bugs and being able to describe exactly the conditions that trigger them. Once these are specified, it is easy to fix the bug. This is Eric’s point… (and Shawn misses this completely… and probably intentionally).

  6. Fred, how do you recommend someone become an auditor of open source software, without becoming a PhD from MIT?

    “bug-free and therefore secure software.”

    Alas, bug-free software may still not be secure in the world of Open Source, because of the diversity of tools used to build the software.

    Let’s take for example the case where a Social Security Number (a practice that should be stomped out) is used as a patient ID. Secure coding techniques would have you zero the memory buffer when the information is no longer required and the function using the memory exits. One compiler may dutifully zero the memory, while another will remove the zeroing as an optimization, because it saw that the memory was not subsequently used. So the same software may be bug-free and secure in one office where it is used, but not in another, because a different person built it with different tools. Some compilers do have appropriate functions to handle this, but they are not common enough for most developers to know about them.
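    To make the dead-store problem concrete, here is a minimal sketch in C (my own illustration, with a hypothetical lookup function). The volatile-pointer scrub is one well-known way to keep an optimizer from deleting the zeroing:

        #include <stddef.h>

        /* Writes through a volatile pointer count as observable
           behavior, so the optimizer cannot remove them as dead
           stores. */
        static void secure_zero(void *p, size_t n) {
            volatile unsigned char *vp = p;
            while (n--)
                *vp++ = 0;
        }

        void lookup_patient(void) {
            char ssn[12] = "123-45-6789";    /* sensitive data */
            /* ... hypothetical: use ssn to perform the lookup ... */

            /* A plain memset(ssn, 0, sizeof ssn) could be silently
               deleted here, since ssn is never read again, leaving
               the SSN in memory. */
            secure_zero(ssn, sizeof ssn);    /* survives optimization */
        }

        int main(void) {
            lookup_patient();
            return 0;
        }

    C11’s optional memset_s() and the BSD/glibc explicit_bzero() are examples of the “appropriate functions” mentioned above; the volatile loop is a portable fallback.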

    I expect you are familiar with these, but your readers might find them of interest:

    “Secure Coding in C and C++”
    http://www.cert.org/books/secure-coding/
    [What would be even better is to dump these unsafe languages and use safer languages such as Ada/SPARK or Erlang. Erlang would be most interesting due to its distributed nature. It also supports live updates to running systems.]

    http://www.cert.org/secure-coding/

    That site brings up another interesting wrinkle in Open Source development: that of domain. For example, the “Managed String Library” aims to handle strings securely to prevent things like buffer overflows. Where domain enters is that this ‘safe’ library can never be used in a safety-critical embedded system, because of its use of dynamic memory (MISRA rule 20.4).

    Application Security Procurement Language and contract:
    http://www.sans.org/appseccontract/

  7. Great links and ideas, but you are one stage past what I really want.
    I want people to read the code more. I want more automated testing tools.

    I am willing to assume that the programming language, the database application, and the rest of the “stack” for that matter, are 100% secure. Not because they are, but because it is not practical to try to do security auditing at that level with the resources we could muster in the FOSS Health IT community. Further, we have enough warts in the FOSS code itself that we really do not need to obsess too much about the compiler. It is our own code that should be secure first. So I would be much more interested in ensuring that FOSS EHR systems do not use the Social Security number as a primary ID than in ensuring that the memory for the Social Security number is freed correctly.
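    As a minimal sketch of that priority, and purely my own illustration rather than code from any existing EHR: give every patient an opaque random identifier, and store the Social Security number as ordinary access-controlled data that never doubles as the record key (this assumes a POSIX system with /dev/urandom):

        #include <stdio.h>

        /* Generate an opaque 128-bit surrogate patient ID as 32 hex
           characters, drawn from the system entropy pool. Returns 0
           on success, -1 on failure. */
        int new_patient_id(char out[33]) {
            unsigned char buf[16];
            FILE *f = fopen("/dev/urandom", "rb");
            if (f == NULL)
                return -1;
            if (fread(buf, 1, sizeof buf, f) != sizeof buf) {
                fclose(f);
                return -1;
            }
            fclose(f);
            for (size_t i = 0; i < sizeof buf; i++)
                sprintf(out + 2 * i, "%02x", buf[i]);
            return 0;
        }

        int main(void) {
            char id[33];
            if (new_patient_id(id) == 0)
                printf("patient-id: %s\n", id);
            return 0;
        }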

    It’s not that you are wrong; I just prefer to focus on solvable problems first.
