Security Scorecards & Envoy — Automating supply chain analysis

Kim Lewandowski
Published in Envoy Proxy
Dec 17, 2020 · 3 min read


The Security Scorecards project is one of my favorite projects I’ve worked on while at Google. We announced it under the OpenSSF umbrella several weeks ago. It auto-generates a “security score” by running a number of checks against OSS projects. I like this project so much because it’s simple to understand, fully automated, and based on objective criteria, and it can make a large impact across the OSS ecosystem by driving awareness and inspiring projects to improve their security posture.

Right after the initial announcement, we learned that the Envoy project was looking for a mechanism to understand, and enforce policy on, the health of the projects it takes dependencies on. This gave us the opportunity to test Scorecards on a real-world project used in critical systems across the industry!

We helped Harvey Tuch, a maintainer of Envoy, try out and evaluate Scorecards for this use case as part of Envoy’s new policy for external dependencies.

“Until recently, we’ve had no stance on external dependencies or criteria for determining if a new external dependency is acceptable.”

First, for fun, let’s run Scorecards on the Envoy project itself, and then we can run it against all of Envoy’s dependencies.
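If you want to follow along at home, here’s a rough sketch of invoking the CLI yourself. I’m assuming the scorecard binary from the OpenSSF repo is built and on your PATH; flag names and output format may have changed since this was written.

```python
# A minimal sketch: shell out to the Scorecards CLI for a single repo.
# Assumes `scorecard` (from github.com/ossf/scorecard) is on PATH and
# accepts a --repo flag; details may vary by version.
import subprocess

result = subprocess.run(
    ["scorecard", "--repo=github.com/envoyproxy/envoy"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # roughly one line per check: name, status, confidence
```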

For Envoy, we get these results:

Not too shabby! The results prompted an issue to sign Envoy’s releases, as well as a fix to the Scorecards project itself.

Taking this one level deeper, here’s a snippet of the output against Envoy’s external dependencies:

(green = pass, red = fail)

It was awesome to see the conversations about improvements taking place among the maintainers of those projects: “hey, can we get fuzzing integrated into this project?”

It’s working!! 😏

The Envoy project plans to integrate OpenSSF Scorecards into its dependency metadata and enforce policies around its dependencies in CI. Scorecards will reduce the toil and manual effort of maintaining Envoy’s supply chain. A key aspect of the new policy is that automated criteria are applied first, and exceptions are then made for non-conforming projects where necessary. This deliberative process gives maintainers the opportunity to consider the relevant scorecard criteria, ask questions about missing criteria, and evaluate alternatives. No automated system will be perfect, but Envoy plans to collaborate with OpenSSF Scorecards to improve accuracy and relevance.
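To make that concrete, here’s a hypothetical sketch of what such a CI policy gate could look like. The dependency list, required checks, and JSON field names below are all illustrative, and I’m assuming the scorecard binary supports --repo and --format=json; check the output schema of your version before relying on anything like this.

```python
# Hypothetical CI policy gate: run Scorecards against each external
# dependency and fail the build if a required check doesn't pass.
# Assumes `scorecard` (github.com/ossf/scorecard) is on PATH; the
# dependency list, required checks, and JSON keys are illustrative.
import json
import subprocess
import sys

DEPENDENCIES = [  # stand-ins for Envoy's real dependency metadata
    "github.com/nghttp2/nghttp2",
    "github.com/c-ares/c-ares",
]
REQUIRED_CHECKS = {"Fuzzing", "CI-Tests", "Code-Review"}

def failing_checks(repo: str) -> set:
    """Return the required checks that do not pass for one repo."""
    out = subprocess.run(
        ["scorecard", f"--repo={repo}", "--format=json"],
        capture_output=True, text=True, check=True,
    ).stdout
    report = json.loads(out)
    return {
        c["Name"]
        for c in report.get("Checks", [])
        if c["Name"] in REQUIRED_CHECKS and not c.get("Pass", False)
    }

if __name__ == "__main__":
    failures = {repo: failing_checks(repo) for repo in DEPENDENCIES}
    failures = {repo: checks for repo, checks in failures.items() if checks}
    for repo, checks in failures.items():
        print(f"{repo}: failing {sorted(checks)}")
    sys.exit(1 if failures else 0)  # non-zero exit fails the CI job
```

A non-conforming dependency would then need an explicit, documented exception rather than silently slipping through.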

I’m looking forward to seeing more case studies like this. It’s really motivating to see the beginnings of a success story and cross-community collaboration. If you’re a maintainer of an OSS project and are interested in trying out Scorecards the way Envoy did, tell me about it! You can find me and others working on projects like this in the Securing Critical Projects Slack channel.

Now if we can just figure out how to stop bumping up against GitHub’s API limits. ;)
