Open source: Big benefits, big flaws

Given the dominance of open source in the IT marketplace, any significant debate over its value might be considered moot.

As Eric Cowperthwaite, vice president, advanced security and strategy at Core Security, told CSO recently, "Open-source code has conquered the world."

Indeed, its advantages are numerous and well known. Among the most compelling are that it is free, it is open to everybody, users can customize it to fit their needs and there is a community of thousands -- perhaps millions -- of eyes on the code to spot bugs or flaws so they can be fixed quickly, before they are exploited by cybercriminals.

[ ALSO ON CSO: The state of open source security ]

When the source code is "open to the world, you are going to have multiple eyes viewing the same configuration," said Andrew Ostashen, security engineer at Redspin, "so if issues arise, the owners will be able to remediate faster."

Still, world conqueror or not, a number of security and legal experts -- while they agree in general with Ostashen and are not issuing blanket condemnations of open source -- continue to warn both organizations and individual users that it is not perfect, or even the right fit for everybody.

It is critical, they say, to be aware that some of the characteristics that make it so attractive also make it risky. Obviously, if the flaws in code are exposed for all to see, criminals can see them as well. And even millions of eyes on open-source code are no guarantee that every flaw will be found and fixed.

"There have been claims that open source software is inherently more secure due to the openness and the 'millions of eyes' that can review the source code," said Rafal Los, director of solutions research at Accuvant. "This was thoroughly debunked by bugs like Heartbleed and others."

Indeed, Kevin McAleavey, cofounder and chief architect of the KNOS Project, somewhat sardonically refers to it as "open sores."

"Open source publishes the source code, and many eyes claim to review it, thus exposing any possible bad code," he said. "And yet ... Heartbleed. The defective code was right there for those 'many eyes' to spot since its release in February 2012, yet nobody spotted it until more than two years later, after the exploits had become overwhelming."
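At its core, Heartbleed came down to trusting an attacker-supplied length field in OpenSSL's heartbeat handler without checking it against the actual payload. The real bug was in C; the Python sketch below is a deliberately simplified, hypothetical illustration of the same over-read pattern (all names and the fake "memory" contents are invented for the example):

```python
# Simplified, hypothetical illustration of a Heartbleed-style over-read.
# A process buffer holds the request payload followed by unrelated
# secrets that happen to sit next to it in memory.
MEMORY = b"ping" + b"|SECRET_PRIVATE_KEY|SESSION_TOKEN"

def heartbeat_vulnerable(claimed_len: int) -> bytes:
    """Echo back 'claimed_len' bytes, trusting the client's length field."""
    return MEMORY[:claimed_len]          # no bounds check: leaks adjacent data

def heartbeat_fixed(claimed_len: int, payload_len: int = 4) -> bytes:
    """The fix: refuse any length larger than the actual payload."""
    if claimed_len > payload_len:
        return b""                       # silently drop the malformed request
    return MEMORY[:claimed_len]

# An honest client asks for its 4-byte payload back; an attacker asks for 40.
print(heartbeat_vulnerable(40))          # leaks the adjacent 'secrets'
print(heartbeat_fixed(40))               # returns nothing
```

The point of the sketch is how small the defect is: one missing comparison between a claimed length and a real one, sitting in plain view of every reviewer.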

Another example he and others cite is the "GHOST" vulnerability in the GNU C Library (glibc), which dates back to 2000 but was publicly disclosed only in January 2015.

"Again, nobody ever spotted that one either until after exploits were piling up like cordwood," McAleavey said. "There was also the 'Shellshock' exploit in the Bash shell, which similarly was published, seen by many eyes and dates back to version 1.03 in 1989."

That is because having millions of eyes on the code doesn't mean all of those eyes are qualified to spot flaws.

"Just because you have a critical mass of people reviewing the code, are they qualified to do so?" asked Aaron Tantleff, a partner at Foley & Lardner. "There are no credentials to speak of, or certification that can be given to code reviewed by the open source community."

That is McAleavey's view as well. "Just because the source code is there doesn't mean that all of those eyeballs understand what the code actually does, or does incorrectly," he said.

And even if flaws are spotted and patches created, that doesn't guarantee they will be installed in every device or system that could be affected.

Tantleff said recent history is proof. "One need not look back very far to find examples of the risk of open source in one's environment," he said. "Park 'n Fly suffered from attacks due to an open source-based security vulnerability that existed in the Joomla content management platform.

"A security patch had been issued well before the attack, but unfortunately the patch was never installed," he said.

McAleavey, who said he started working with Linux, one of the most popular open-source operating systems, when it came on the scene more than 20 years ago, said this problem exists largely because open source tends to exist as "two separate entities."

In the case of Linux, "there is the 'kernel team,' which is the primary operating system itself, and then there are 'application maintainers,'" he said.

"Any changes to the Linux kernel itself still have to be approved by creator Linus Torvalds personally or through one of his handful of trusted kernel maintainers. They, and only they, determine what happens to the core kernel OS itself," he said.

[ ALSO: A brief history of Linux malware ]

"But they have no interest whatsoever in what happens among the literally thousands of other open-source developers who each maintain a single application or 'package' within the various 'distros,' or distributions, of Linux. They're pretty much on their own."

That, he said, has led to "absolute anarchy in userland. And that's not good for stability or security. No one is in charge."

Los said closed-source software is "just as susceptible to being 'abandoned' as open source," but noted that the incentive to maintain and update commercial or proprietary software is there "if the vendor truly cares for their product quality."

But, like McAleavey, Los said open-source components used in commercial applications "are a massive problem, primarily because they're forgotten. Take, for instance, the OpenSSL library and the issues that popped up when a series of major flaws were discovered in it. Open-source and commercial software alike fell victim to the dire need to patch, but where OpenSSL was used in commercial applications, many of the end users simply weren't aware that it was there and so didn't know it needed to be patched."
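One way to surface a forgotten OpenSSL dependency is simply to ask the runtime which version it was built against. Python's standard `ssl` module exposes this via `ssl.OPENSSL_VERSION`; the vulnerable-range check below is a rough sketch, relying on the fact that Heartbleed affected OpenSSL 1.0.1 through 1.0.1f:

```python
import ssl

def openssl_fingerprint() -> str:
    """Report the OpenSSL version this Python runtime is linked against."""
    return ssl.OPENSSL_VERSION          # e.g. 'OpenSSL 1.1.1k  25 Mar 2021'

def looks_heartbleed_vulnerable(version_string: str) -> bool:
    """Rough check: Heartbleed affected OpenSSL 1.0.1 through 1.0.1f."""
    parts = version_string.split()
    if len(parts) < 2 or parts[0] != "OpenSSL":
        return False
    return parts[1] in (
        "1.0.1", "1.0.1a", "1.0.1b", "1.0.1c", "1.0.1d", "1.0.1e", "1.0.1f"
    )

print(openssl_fingerprint())
print(looks_heartbleed_vulnerable(openssl_fingerprint()))
```

A check like this only covers the copy of OpenSSL your own interpreter links; the harder part, as Los notes, is finding the copies statically bundled into third-party commercial products.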

Tantleff noted the same problem. "Just because a patch exists doesn't mean the problem is gone. Someone still has to install it. Generally, there's no 'autoupdate' for open-sourced applications," he said.

The benefit of customizing or modifying the code to fit the needs of a developer or organization can boomerang as well, he said. All those modifications lead to "many modified or forked versions of open sourced programs. Many of those modified applications are re-published for the world to use. The question then becomes which version do you use? Sometimes you cannot tell."

That, he said, can mean that a user or developer "thinks he has the patched application, but in reality installed a version that was based on the unpatched application, and the vulnerability remains."
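A basic defence against installing the wrong, or tampered, fork is to verify a download against the checksum the upstream project publishes alongside its releases. A minimal sketch using Python's standard `hashlib` (the filename and digest in the usage comment are placeholders, not real values):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_download(path: str, expected_digest: str) -> bool:
    """Compare the file's digest to the one published upstream."""
    return sha256_of(path) == expected_digest.lower()

# Usage (placeholder values): refuse to install on a mismatch.
# if not verify_download("some-package.tar.gz", "de6ba1..."):
#     raise SystemExit("Checksum mismatch -- do not install")
```

Checksums only prove you got the bytes the publisher intended; deciding whether that publisher is the genuine upstream, rather than an unpatched fork, still requires knowing the source, as Tantleff advises.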

Open to modification also means open to mischief. Tantleff said the code "could be later injected with malware, or worse, specifically written to address an issue that people are seeking open-source applications for, with malware hidden inside from the start -- malware by design. Unfortunately, none of this is theoretical, as there are examples of each of these."

Finally, a community of thousands makes it difficult to hold anyone accountable for legal or compliance problems. "If nobody's in charge, who do you sue?" asked McAleavey.

Of course, defenders of open source say the community surrounding it can be much more dependable than a company with a proprietary system. Writing in CMS Critic, Daniel Threlfall noted that "if the single company managing the proprietary system goes under, what then? An open-source CMS, on the other hand, has a life of its own. No one entity owns it. Thus, there will always (presumably) be a support network and stable foundation upon which it can exist. The community is the stability."

How, then, can users get the best out of open source while avoiding the worst?

The answer may not be easy, Tantleff said, but it is relatively simple: "Companies should treat open source like all other software," he said.

It starts with knowing the source. "It is important that one knows where the software is coming from -- that it's a trusted source," he said. "You should also gather diligence on the software from other trusted sources."

Even if the source is trustworthy, "they need a proper, controlled process and program for vetting software before deploying it within the enterprise."

Finally, "they also need a program in place to manage and monitor the software once in the environment, including making sure the organization is aware of vulnerabilities, available patches, and most importantly, ensuring that the patches are installed," he said.
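Tantleff's last step -- knowing what is deployed and whether it is patched -- can be approximated with even a very simple component inventory, comparing what is running against the minimum patched versions tracked from upstream advisories. A toy sketch (the component names, versions and the crude version parser are all illustrative, not a production approach):

```python
# Hypothetical inventory of deployed open-source components.
deployed = {
    "openssl": "1.0.1e",
    "bash": "4.3",
    "joomla": "3.4.6",
}

# Minimum safe versions, maintained from upstream security advisories.
minimum_patched = {
    "openssl": "1.0.1g",   # Heartbleed fixed in 1.0.1g
    "bash": "4.3",
    "joomla": "3.4.6",
}

def version_tuple(v: str):
    """Crude parse: numeric dot-parts compare numerically, the rest lexically."""
    return tuple(int(p) if p.isdigit() else p for p in v.split("."))

def outdated(inventory, baseline):
    """Return the components running below the tracked patched version."""
    return sorted(
        name for name, ver in inventory.items()
        if name in baseline and version_tuple(ver) < version_tuple(baseline[name])
    )

print(outdated(deployed, minimum_patched))   # → ['openssl']
```

Real deployments would feed this from package managers and a vulnerability database rather than hand-maintained dictionaries, but the principle is the one Tantleff describes: you cannot patch what you do not know you are running.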

Stories by Taylor Armerding
