Google’s open-source security move may be pointless. In a perfect world, it should be.

Credit to Author: Evan Schuman | Date: Tue, 31 May 2022 02:30:00 -0700

One of the bigger threats to enterprise cybersecurity involves repurposed third-party and open-source code, so you'd
think Google's Assured Open Source Software service would be a big help.

Think again.

Here’s Google’s pitch: “Assured OSS enables enterprise and public sector users of open source software to easily incorporate the same OSS packages that Google uses into their own developer workflows. Packages curated by the Assured OSS service are regularly scanned, analyzed, and fuzz-tested for vulnerabilities; have corresponding enriched metadata incorporating Container/Artifact Analysis data; are built with Cloud Build including evidence of verifiable SLSA-compliance; are verifiably signed by Google; and are distributed from an Artifact Registry secured and protected by Google.”

This service may or may not be useful, depending on the end user. For small and mid-sized businesses with no dedicated IT team, it might have real value. But for larger enterprises, things are very different.

Like everything in cybersecurity, we must start with trust. Should IT trust Google's efforts here? First, we already know that many malware-laden or otherwise problematic apps have been approved for Google's app store, Google Play. (To be fair, it's just as bad within Apple's app store.)

That makes the point. Finding any security issues in code is extraordinarily difficult. No one is going to do it perfectly and Google (and Apple) simply don’t have the business model to staff those areas properly. So they rely on automation, which is spotty. 

Don’t get me wrong. What Google is attempting is a very good thing. But the key enterprise IT question is whether this program will allow them to do anything differently. I argue that it won’t.

IT needs to scan every single piece of code — especially open source — for any problems. That might include intentional problems, such as malware, ransomware, backdoors, or anything else nefarious. But it will also include accidental holes. It’s hard to fully fight against typos or sloppy coding. 

Coders and programmers can't justify skipping the double-check just because code comes through this Google program. And no, the knowledge that this is what Google uses internally shouldn't make any CIO, IT director or CISO feel all warm and fuzzy.

That brings up a bigger issue: all enterprises should check and double-check every line of code that they access from elsewhere — no exceptions. That said, this is where reality meets ideal. 
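To make that "check everything from elsewhere" posture concrete, here is a deliberately minimal sketch of one layer of it: pinning the SHA-256 digest of a third-party artifact at the moment it was last reviewed, and refusing any download whose digest no longer matches. The function name and workflow here are illustrative assumptions, not anything Google's Assured OSS prescribes; a real pipeline would layer vulnerability scanning and signature verification on top of a check like this.

```python
import hashlib
import hmac

def verify_artifact(data: bytes, pinned_sha256: str) -> bool:
    """Return True only if the artifact's SHA-256 matches the pinned digest.

    The pinned digest is assumed to have been recorded when the code was
    last reviewed; any tampering or silent upstream change fails the check.
    """
    actual = hashlib.sha256(data).hexdigest()
    # compare_digest is constant-time; overkill here, but a good habit.
    return hmac.compare_digest(actual, pinned_sha256.lower())

# Illustrative usage with a made-up payload standing in for a downloaded package:
payload = b"print('hello')\n"
pinned = hashlib.sha256(payload).hexdigest()  # recorded at review time
print(verify_artifact(payload, pinned))                  # unmodified artifact
print(verify_artifact(payload + b"# tampered", pinned))  # altered artifact
```

The point of the sketch is the policy, not the hashing: the enterprise, not the upstream provider, decides what "known good" means, which is exactly the responsibility the column argues can't be outsourced.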

I discussed the Google move with Chris Wysopal, one of the founders of software security firm Veracode, and he made some compelling points. There are two disconnects at issue: one between developers/coders and IT management, the other between IT management (the CIO) and security management (the CISO).

As for the first disconnect, IT can issue as many policy proclamations as it wants. If developers in the field choose to ignore those edicts, it comes down to enforcement. With every line-of-business executive breathing down IT's neck demanding everything right away (and those people generate the revenue, which means they will likely win any battles with the CFO or CEO), enforcement is difficult.

That assumes IT has, indeed, issued edicts demanding that outside code be checked twice to see what code is naughty and nice. That’s the second conflict: CISOs, CSOs and CROs will all want code-checking to happen routinely, while IT Directors and CIOs may take a less aggressive position.

There is a risk from this Google move, one that can be described as a false sense of security. There will be a temptation from some in IT to use Google’s offering as an opportunity to give in to the time pressure from LOBs and to waive cybersecurity checks on anything from Google’s Assured program. To be blunt, that means deciding to fully (and blindly) trust Google’s team to catch absolutely everything.

I can't imagine an IT exec at a Fortune 1000 company (or its privately held counterparts) believing that and acting that way. But for those getting pressure from business leaders to move quickly, it's a relatively face-saving excuse to do what they know they shouldn't do.

This forces us to deal with some uncomfortable facts. Is Google Assured more secure than unchecked code? Absolutely. Will it be perfect? Of course not. Therefore, prudence dictates that IT needs to continue what it was doing before and check all code. That makes Google’s effort rather irrelevant to the enterprise.

But it's not that simple and it never is. Wysopal argues that many enterprises simply do not check what they should. If that's true — and I sadly concede it likely is — then Google Assured is an improvement over what we had last month.

In other words, if you’re already cutting too many corners and plan to continue doing so, Google’s move can be a good thing. If you’re strict about code-checking, it’s irrelevant. 

Wysopal also argues that Google’s scale is far too small to help much, regardless of an enterprise’s code-checking approach. “This project would have to scale 10-fold to make a big difference,” Wysopal said. 

What do those IT leaders who do not strictly check code do? “They wait for someone else to find the vulnerability (and then fix it). The enterprise is kind of a dumb consumer of open source. If a vulnerability is found by someone else, they want a system in place where they can update,” Wysopal said. “It’s rare to find an enterprise with a strict policy and that they are enforcing well. Most allow developers to select open source without any strict process. As soon as app security starts to slow things down, it gets bypassed.”

Google’s move is good news for those who’ve cut too many security corners. How many of those enterprises are out there? That’s debatable, but I am afraid that Wysopal may be more right than anyone wants to admit.
