CISO Reflections On GitHub’s State of The Octoverse
TL;DR: How do you know you are shifting security left? A quick checklist appears below. I unpack it through the lens of a security industry report that rightly encourages us all to shift left...at least in part.
The Octoverse CliffsNotes
Have you seen “The State of the Octoverse”? Despite its Lovecraftian name, it’s not a sci-fi flick. It’s GitHub’s annual report on open source software. This volume covers best practices and metrics for software security, particularly open source software (OSS), with a focus on vulnerability and dependency patching.
Here are my CliffsNotes:
- Vulnerabilities are not abating, including OSS-based ones.
- Vulnerabilities tend to live for four years before disclosure.
- Repositories that automatically generate pull requests patch within 33 days on average. That’s 13 days better (or 1.4 times faster) than those that don’t.
- Elite teams focused on automation are 4.9 times more likely to update dependencies and fix vulnerabilities without breakage.
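GitHub’s own Dependabot is one way to get that kind of repo-integrated pull-request automation. A minimal configuration, assuming a hypothetical npm project, might look like this:

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"     # ecosystem to watch for outdated dependencies
    directory: "/"               # where the package manifest lives
    schedule:
      interval: "weekly"         # how often to open update pull requests
    open-pull-requests-limit: 10 # cap concurrent update PRs
```

With this in place, out-of-date dependencies arrive as pull requests in the developers’ own workflow, which is exactly the mechanism behind the 13-day improvement the report measures.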
It’s an epiphany of the obvious: automation that makes it easy for developers to do the right thing wins. That’s why I agree with the Octoverse’s authors: the repo-integrated approach it proposes should be standard practice (and while you’re at it, add distroless to your mix!)
It's About the Developers!
The report’s headline metric hinges on shortening the time to live (TTL) of unapplied patches by 13 days on average. The cause of that improvement is automating pull requests for out-of-date dependencies and vulnerabilities, getting that information into the hot hands of busy developers without friction.
There is a generalizable principle here. Getting the right data, in an actionable format, into developers’ hands faster reduces patching time. This starts by integrating directly with existing developer toolchains.
From a DevOps perspective, integrating into existing toolchains is “meeting people where they are.” This is a principle that security (particularly vendors) tends to get wrong. You want to keep developers from context switching out of their own toolchains into yours; that switch adds time and compounds cost, demotivating developers while creating waste.
From a decision-making perspective, you must go directly to the developer. Developers are in the best position to determine what is and isn’t relevant to the services they are responsible for.
Forcing a security decision maker into the middle of this process inflates TTL. Yes, security should have visibility (governance) and influence policy. But if security is in the driver’s seat, you have opted for slowness that leads to inaction and, eventually, breach.
Shifting Left Means Shifting the Work to Development
If you take a pure view of shifting security left, it occurs “pre-deploy.” This means security work needs to shift to developers so they can test code early in development. This is how we prevent avoidable issues from ever materializing; it is the highest-impact thing you can do.
It keeps avoidable vulnerabilities and configuration issues from materializing as developer rework. This is what we’re doing at Soluble: giving developers security solutions they can use within their toolchains and workflows to help them deliver secure, reliable code to production.
When the work is shifted left, fewer defects and misconfigurations make it to production, the security team has less to chase, and developers spend far less time on rework and remediation. Contact us to learn more; we’re happy to show you the solution.
Practical Ways to Lower Risk
In the end, it’s not really about how many days of TTL reduction you get. It’s about taking a principled approach to continuous improvement based on removing waste, particularly in pipeline-based development. Do that, and you reap the benefits of lower TTL, lower cost, and lower breach risk.
Do the following, and you are meeting the principles of shifting left, whether pre- or post-deploy (as in the Octoverse):
Meet people where they are
- Go to their toolchains – not the other way around
- Avoid developer context switching
Get data to the responsible decision makers first, fast, and in an actionable format
- Don’t put security people in the middle
- Remember that developers know their code best
- Ideally, shift the security work left so they can test their own code within their CI/CD pipelines
Set goals for improvement
- Reduce the TTL of undeployed patches and dependencies
- Prevent avoidable issues from materializing in the first place.
The Octoverse measured TTL as an average. I encourage you to eventually measure TTL “on a curve.” As a first step, you could track the following:
- 50% of patches deployed within 33 days
- No more than 10% taking longer than 90 days
- No more than 1% taking longer than 365 days
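The percentile view above can be sketched in a few lines of Python. The patch records here are hypothetical, not real data; each is a (patch-available, patch-deployed) date pair, and TTL is simply the gap in days:

```python
from datetime import date

# Hypothetical patch records: (date patch became available, date it was deployed)
patches = [
    (date(2020, 1, 1), date(2020, 1, 20)),
    (date(2020, 1, 5), date(2020, 2, 10)),
    (date(2020, 2, 1), date(2020, 2, 15)),
    (date(2020, 2, 10), date(2020, 8, 1)),
    (date(2020, 3, 1), date(2021, 6, 1)),
]

# TTL in days for each patch, sorted ascending
ttls = sorted((deployed - available).days for available, deployed in patches)

def percentile(values, pct):
    """Nearest-rank percentile over a sorted list."""
    idx = min(len(values) - 1, int(pct / 100 * len(values)))
    return values[idx]

# Compare each percentile against the targets from the checklist above
for pct, target in [(50, 33), (90, 90), (99, 365)]:
    p = percentile(ttls, pct)
    status = "OK" if p <= target else "MISS"
    print(f"p{pct} TTL = {p} days (target <= {target}): {status}")
```

Tracking p50, p90, and p99 side by side is what exposes the “back of the herd”: an average can improve while the tail silently grows.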
You should want to do this because you could improve at the 50th percentile while getting worse at the 99th. Bad guys attack the back of the herd: aging vulnerabilities are exposed to attackers longer, and attackers accumulate more exploitation techniques as a vulnerability ages. Monitoring more of the distribution ensures your capabilities are getting good coverage.
Arrival rates are the other side of the coin: they are how you measure the effectiveness of your shift-left work, which shows up as less rework in production. I won’t bore you with how to do that yet; I have to save something for other posts!
As always, contact us to learn more. We'd love to hear your feedback.