Back in July 2001, two security researchers from eEye Digital Security, Marc Maiffret and Ryan Permeh, discovered the Code Red worm - a piece of malware that targeted Microsoft's IIS web server software and propagated wildly until it was stopped. A string of further vulnerabilities and threats followed, until Microsoft was compelled to launch its Trustworthy Computing initiative in 2002.
Since then, Microsoft has changed the way it approaches software development, and security is now treated as being just as important as functionality.
Flash-forward to 2014 and we are, again, at a security crossroads. This time, it's open source software that is in the hackers' crosshairs.
In late December 2013, the identifier CVE-2014-0160 was reserved in the Common Vulnerabilities and Exposures system; the flaw itself was publicly disclosed in April 2014. We know it as Heartbleed. At the time, it was considered to be one of the most significant threats ever to hit the Internet, with up to 17% of the Internet's secure web servers estimated to be affected.
With less fanfare - perhaps because it lacked a catchy name and fancy logo - the CCS Injection Vulnerability, designated CVE-2014-0224, reared its head in June 2014. Like Heartbleed, it was a flaw in OpenSSL, the SSL/TLS library shipped with many open source *nix distributions.
In September, we saw the arrival of Shellshock, also known as Bashdoor (CVE-2014-6271). This flaw affects the Bash shell, which many systems use to process commands. It allows malicious parties to execute their own commands and grant themselves access to a computer system.
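To illustrate the mechanism: Shellshock abused the way Bash imported function definitions from environment variables, running any commands appended after the definition. The widely circulated probe below (assuming a `bash` binary is on the PATH) demonstrates this - a patched shell prints only the final echo, while a vulnerable one also prints "vulnerable".

```shell
# Classic Shellshock probe: set an environment variable shaped like an
# exported Bash function definition, with extra commands appended after it.
# A patched Bash ignores the trailing commands; a vulnerable Bash executes
# them while importing the variable, printing "vulnerable" first.
env x='() { :;}; echo vulnerable' bash -c 'echo probe complete'
```

The key point is that the attacker never needs shell access: anything that passes attacker-controlled data into environment variables before invoking Bash - CGI scripts being the notorious example - becomes a remote command-execution vector.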
Earlier this year, security consultant Dan Klein spoke at AusCERT 2014. He told the audience: "When you look at the Heartbleed bug, and you look at the SSL code, it's incomprehensible, uncommented and untested. There are no unit tests".
Herein lies the open source movement's great challenge. Within corporate development environments, substantial rigour can be built into the software development lifecycle. Before code can move from design to development to testing and to production, there are a number of gatekeepers who check that the code conforms to a set of standards.
That's not to say all closed source or proprietary code is perfect. Software is developed and tested by humans who can make mistakes. However, there are checks and balances in place to mitigate the risks.
Furthermore, the way software is developed has changed significantly in recent years, with developers building new skills in understanding how and why buffer overruns occur, along with the other vectors used for malicious code injection and execution.
As proprietary software tools and developers have improved their security posture, the focus of hackers has started to shift towards other platforms and attack methods. And that's meant vulnerabilities in open source platforms are now a more significant target.
How does code get approved into open source?
Peter Lees is the Chief Technologist in APAC for SUSE. We asked him about the process for getting code approved so that malicious or poorly written code doesn’t enter the mainstream.
"The way that open source projects are managed actively works against the inclusion of malicious code. Firstly, the fact that the source is open and available to be audited by a large audience means that there is a much greater opportunity to find deliberate attacks than there is in closed source. It was source code audits that found the bugs in openSSL and Bash. Malicious code would have to be carefully hidden - without leaving trace that is being hidden - to avoid detection. Obfuscated code immediately attracts attention".
Furthermore, the reputation of a software developer is important. Lees pointed out that when a new developer tries to contribute to an open source project, their code is checked more thoroughly.
"It's human nature that when a new programmer joins a project, his or her code will be vetted more carefully than that of already-known contributors. Establishing and maintaining a high reputation requires significant effort, so this works against the rapid or repeated insertion of malicious code", he added.
Trust, but Verify
Ronald Reagan was famous for his use of the Russian proverb "Trust, but verify". And this is probably the right posture for companies that have either already invested in open source software or who are looking at deploying open source solutions.
Like any software, open source solutions need to be verified before deployment. In one sense, open source offers some significant advantages as it's possible to inspect the source code before loading a program into your environment. But that means having the expertise in place to conduct thorough code reviews.
Chances are, there's already some open source running within your organisation or within systems you're accessing from cloud or other providers. That means you need to know what platforms all of your key systems are working on.
Almost every security expert we talk to advocates completing a detailed system audit in order to know exactly what you have so that you can identify potential weaknesses and know if a previously unknown vulnerability affects you.
Armed with that review, you can then identify software modules that are critical in maintaining system security. Those modules can be prioritised according to their risk profile and proactively reviewed before a potential vulnerability is exploited.
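A minimal sketch of the inventory step such an audit might begin with is shown below. The component list here is illustrative only - a real audit would cover every package on every host, typically via the platform's package manager - but even recording the versions of a few security-critical tools lets you check them quickly against vulnerability announcements such as the Heartbleed and Shellshock advisories.

```shell
#!/bin/sh
# Illustrative audit sketch: record the versions of a few security-critical
# components so they can be compared against vulnerability announcements.
# Components not installed on this host are noted rather than skipped.
for tool in bash openssl; do
  if command -v "$tool" >/dev/null 2>&1; then
    case "$tool" in
      openssl) openssl version ;;           # OpenSSL uses "version", not --version
      *)       "$tool" --version | head -n 1 ;;
    esac
  else
    echo "$tool: not installed"
  fi
done
```

In practice this output would be collected centrally from all hosts, so that when the next advisory lands you can answer "are we affected?" in minutes rather than days.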
It's time for the open source community to pick up its game
Proprietary software developers such as Apple, Microsoft and Adobe have all suffered the ignominy of having their software blamed for various weaknesses and exploits. As a result, all have changed their operations to make security a greater focus. That's not to say they are flawless, but there has been a cultural change in many large software companies that has led to better processes and fewer significant flaws being released in mainstream software.
It's time for the open source industry to apply the same rigour.
Interestingly, there are dozens of code management tools available as open source applications that can be used for software development, but it seems that rigorous code review is not always exercised. That means deploying open source software needs to involve a different testing regime to that used for proprietary software.
Operating system distributions have more structured release processes: even though many desktop and server operating systems are distributed for free, there are paid options that include support and ensure releases are built to a particular level of quality.
Lees told us that SUSE doesn't release code into open source until it has been carefully checked.
"When SUSE contributes changes - either as new product features or patches - the software goes through our QA team to test functionality, stability, scalability and for regressions including vulnerabilities, as well as integration testing with our partners. SUSE makes heavy use of testing automation, and a lot of effort goes into maintaining & refining our automated methods. SUSE also relies on individual expertise to review results and find errors".
However, as we saw with Heartbleed, it is possible for broken code to be widely distributed. And, if we're to accept Dan Klein's assertion, the problem stemmed from poor controls and programming practices.
This means there is a responsibility for the open source developer community to actively review each other's code, especially with respect to modules that have a direct bearing on security.
"At the end of the day for any software - open source, closed source or embedded - it's a question of confidence," said Lees. "The attractive thing about open source is that there is potential for a much higher degree of confidence than for closed source. It is much harder to hide mistakes, much harder to secretly introduce malicious code, and much more likely that a programmer wanting to make a name for him or herself will discover and fix problems when the source is open and available for review. Ultimately, if the users of open source software want to perform rigorous and exhaustive examinations of code, they can. The option is not even there for closed source software".
Is this Open Source's Code Red Moment?
Although there are significant differences in how open and closed software are developed, tested and deployed, it is arguable that the open source software movement is at an important place in its development.
On one hand, open source code is developed and reviewed in an informal setting that allows many parties to conduct their own testing. This offers benefits beyond the structured testing that corporate software development employs, as there is a potentially larger pool of people reviewing and testing code.
However, as processes are less formal, it's possible for flaws in code to go undetected for long periods of time.
On the other hand, as proprietary software has improved its security posture, hackers have moved to other targets that they perceive as offering better returns on their investment. That has meant a shifting focus towards systems that were previously ignored or less heavily targeted.
As a result, open source software is now being probed and exploited by hackers.
We would argue that this makes the time we live in analogous to Microsoft's Code Red moment. The question then is - will the open source community step up to ensure all existing code is reviewed and all new code is developed with security front of mind?
This article is brought to you by Enex TestLab, content directors for CSO Australia.