In fact, this topic came into focus of late during a pair of online events, one hosted by the Institute for Security and Technology’s Ransomware Task Force and the other by the Open Source Security Foundation (OpenSSF). The events included speakers from the federal government, private industry and the open source community writ large. They all came together to discuss strategies and possible actions, with a goal of making it easier to find and fix flaws in software that uses open source code.
The U.S. simply should not abandon open source code, which is interwoven into most of today’s software, including commercial offerings, speakers said. There’s a good reason for its widespread use: Open source presents free-to-use software building blocks that spare developers from having to reinvent the wheel, said Marc Rogers, vice president of cybersecurity at identity and access management (IAM) company Okta. Even the federal government heavily uses such offerings.
“The federal government is one of — if not the — largest users of open source software in the world,” and open source also underpins critical infrastructure sectors, said Jack Cable, senior technical adviser at the Cybersecurity and Infrastructure Security Agency (CISA).
But when something goes wrong in widely used open source components, the effects can be severe. In 2021, CISA director Jen Easterly reportedly called Log4Shell the “most serious” vulnerability she’d seen in her career.
So what does a real fix look like, and what efforts are underway?
A UNIQUE SECTOR
Federal officials and other stakeholders know that dealing with the open source ecosystem will take a special touch. Policymakers cannot just treat it the same way they do the private sector, said Anjana Rajan, assistant national cyber director for technology security.
For one, the communities behind open source projects are decentralized and often international.
“This is a — by default — global problem,” Cable said. “If we stick to, say, within the United States government, we are not going to have a comprehensive solution. So, we do want to very much get out there and encourage international collaboration around this.”
Rajan said policies should be harmonized across borders. Otherwise, an open source community could face conflict if members in one country are subject to a different set of policies than members in another. The federal government is looking to onboard more people with technical backgrounds to help guide its approach. CISA, for one, has announced plans to hire an open source software security lead, according to FCW.
“We need more technical folks in the policymaking space to be able to write good policy on cybersecurity,” Rajan said.
Another difference with the private sector is that the open source tech world is powered by volunteers.
Rogers said that while these volunteers have roles to play in the open source ecosystem's security, it doesn’t make sense to hold them fully responsible.
Contributors to open source projects don’t decide who incorporates their offerings or how. They lack the insight to ensure their code is used only in low-risk ways, and they have no power to compel users to patch. The private developers and companies that incorporate open source components therefore need to take some responsibility for assessing those risks first.
“When you take code from somebody else and bake that into your product, only you can properly assess what the impact of that code failure is going to be,” Rogers said, advocating for a shared responsibility model.
Plus, putting all the security burden on open source developers could chill interest in contributing to open source projects, Rogers said.
“It’s important that we don't dictate to the open source development world … We need to sit down as equal parties and talk and work out what the best approach is, to make sure that we're not scaring people off,” Rogers said. “You can't have three people in Nebraska who build a very cool software product getting the lion's share of the focus from billion-dollar corporations. It's not realistic, and it's not going to solve anyone's problems.”
PROACTIVE SECURITY AND FASTER RESPONSE
Regulators shouldn’t try to penalize software engineers for bugs in their code — that’s just a natural part of the process of creating, Rajan said. But it’s important to push initiatives that can build code securely from the get-go and subsequently maintain it more effectively.
Using memory-safe languages is one practice that can go a long way, Rajan said.
“Research has shown that when you look at a critical infrastructure or a large enough codebase that's written in memory-unsafe language, and you migrate it to a memory-safe language, the number of vulnerabilities decreased by up to 70 percent. And that's a pretty remarkable lever to pull,” she said.
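The difference Rajan describes can be seen in miniature. As an illustration (not an example discussed at the events), here is a minimal Rust sketch: an out-of-bounds read, the class of bug behind vulnerabilities like Heartbleed, is caught by the language rather than silently returning adjacent memory.

```rust
fn main() {
    let buf = [10u8, 20, 30];

    // In a memory-unsafe language, reading past the end of a buffer can
    // silently return whatever bytes sit in adjacent memory.
    // Rust's safe `get` accessor is bounds-checked and returns None instead.
    assert_eq!(buf.get(10), None);

    // In-bounds access works normally.
    assert_eq!(buf.get(1), Some(&20));

    // Direct indexing past the end (`buf[10]`) would panic at runtime
    // rather than corrupt memory; an out-of-bounds read is never silent.
    println!("bounds-checked read of index 10: {:?}", buf.get(10));
}
```

Eliminating this bug class at the language level, rather than hunting each instance, is the lever Rajan describes.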
OpenSSF also aims to provide resources supporting secure practices. These include a guide helping developers evaluate open source offerings before using them and a guide helping developers coordinate open source vulnerability disclosures, OpenSSF CTO Brian Behlendorf said.
Funding and workforce are other pieces of the issue. Rajan said this year and next will see the federal government discuss how it can better fund the ecosystem. Meanwhile, Cable said, CISA helps give back by “default[ing] to open source for everything we put out.”
Cable also suggested that the federal government could rally its sizable workforce, which includes many software developers. Some companies incentivize employees to spend 20 percent of their work time on open source, and there is potential for the U.S. government to do something similar. But legal barriers and practical questions would need to be addressed first, Cable said.
When it comes to addressing open source software defects, speed also matters.
Behlendorf said OpenSSF has taken some steps to help find and fix flaws. For example, its Alpha-Omega Project, launched last year, provides grant funding to specific open source foundations. The money is intended to help them build up their security capabilities, such as by staffing up teams or implementing security policies. The initiative also involves scanning open source projects to identify and fix the kinds of “low-lying defects and simple bugs that frankly don’t get addressed,” he said.
In April, OpenSSF also released a finalized, updated version of its Supply Chain Levels for Software Artifacts framework. This framework “provides specifications for software supply chain security,” and helps make final software packages traceable back to their original source code.
The organizations that use open source also must maintain their software. Rogers said the speed at which Log4j was exploited shows it’s no longer enough to patch every 30 days. Nowadays, “you need to use threat intelligence capabilities to track the vulnerabilities and see how they're evolving and understand the risks,” he said.
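The core check that vulnerability tracking automates can be sketched in a few lines: compare an installed dependency version against the first release that fixes a known flaw. This is an illustrative sketch with hypothetical helper names, not a real tool’s API; the Log4j fix version comes from the public CVE record, where 2.15.0 was the first release patching Log4Shell (CVE-2021-44228).

```rust
// Hedged sketch of the comparison a dependency-audit tool automates.
// `parse` and `is_vulnerable` are illustrative helpers, not a real API.
fn parse(version: &str) -> Vec<u32> {
    version.split('.').filter_map(|part| part.parse().ok()).collect()
}

fn is_vulnerable(installed: &str, first_fixed: &str) -> bool {
    // Vec<u32> compares lexicographically, which matches numeric
    // major.minor.patch ordering.
    parse(installed) < parse(first_fixed)
}

fn main() {
    // Log4j 2.15.0 was the first release patching Log4Shell (CVE-2021-44228).
    assert!(is_vulnerable("2.14.1", "2.15.0"));
    assert!(!is_vulnerable("2.17.1", "2.15.0"));
    println!("2.14.1 vulnerable: {}", is_vulnerable("2.14.1", "2.15.0"));
}
```

Real scanners layer advisory feeds and transitive-dependency resolution on top of this comparison, which is why Rogers frames it as a threat-intelligence capability rather than a calendar-driven patch cycle.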
And when crises happen, strategies need to be in place to keep momentum going until the incident has been fully addressed. Unpatched versions of Log4j remain prevalent, as do unpatched versions of OpenSSL still exposed to the Heartbleed vulnerability disclosed in 2014. A recent scan found 50,000 servers still vulnerable to Heartbleed, Rogers said.
“Think of an incident like this like an oil spill,” he said. “What do you do when there's a massive oil spill? You contain it. You stop it spreading. You work on cleaning it up. And then you work on solving the source of the problem. We didn't do any of that for Log4j. There are repositories out there still putting out the vulnerable Log4j package.”