Previously in our Supply chain security for Go series, we covered dependency and vulnerability management tools and how Go ensures package integrity and availability, all part of Go’s commitment to countering the rise in supply chain attacks in recent years.
In this final installment, we’ll discuss how “shift left” security can help make sure you have the security information you need, when you need it, to avoid unwelcome surprises.
The software development life cycle (SDLC) refers to the series of steps that a software project goes through, from planning all the way through operation. It’s a cycle because once code has been released, the process continues and repeats through actions like coding new features, addressing bugs, and more.
Shifting left means implementing security practices earlier in the SDLC. Consider scanning dependencies for known vulnerabilities: many organizations do this as part of continuous integration (CI), which ensures that code has passed security scans before it is released. However, if a vulnerability is first found during CI, significant time has already been invested building code on top of an insecure dependency. Shifting left in this case means allowing developers to run vulnerability scans locally, well before the CI-time scan, so they can learn about issues with their dependencies before investing time and effort in new code built on vulnerable dependencies or functions.
Go provides several features that help you address security early in your process, including govulncheck and pkg.go.dev discussed in Supply chain security for Go, Part 1. Today’s post covers two more features of special interest to supply chain security: the Go extension for Visual Studio Code and built-in fuzz testing.
In 2022, Go became the first major programming language to include fuzz testing in its standard toolset with the release of Go 1.18. Fuzzing is a type of automated testing that continuously alters program inputs to find bugs. It plays a huge role in keeping the Go project itself secure – OSS-Fuzz has discovered eight vulnerabilities in the Go standard library since 2020.
Fuzz testing can find security exploits and vulnerabilities in edge cases that humans often miss, not only in your code, but also in your dependencies—which means more insight into your supply chain. With fuzzing included in the standard Go toolset, developers can more easily shift left, fuzzing earlier in their development process. Our tutorial walks you through how to set up and run your fuzzing tests.
If you maintain a Go package, your project may be eligible for free and continuous fuzzing provided by OSS-Fuzz, which supports native Go fuzzing. Fuzzing your project, whether on demand through the standard toolset or continuously through OSS-Fuzz, is a great way to help protect the people and projects who will use your module.
In the same way that we’re working toward "secure Go practices" becoming "standard Go practices," the future of software will be more secure for everyone when secure practices are simply “standard development practices.” Supply chain security threats are real and complex, but we can contribute to solving them by building solutions directly into open source ecosystems.
If you’ve enjoyed this series, come meet the Go team at Gophercon this September! And check out our closing keynote—all about how Go’s vulnerability management can help you write more secure and reliable software.
Security reviewers must develop the confidence and skills to make fast, difficult decisions. A simplistic piece of advice to reviewers is “just be confident” but in reality that takes practice and experience. Confidence comes with time, and people are there to support each other as we learn. This post shares advice we give to people doing security reviews for Chrome.
Chrome has a lightweight launch process. Teams write requirements and design documents outlining why the feature should be built, how the feature will benefit users, and how the feature will be built. Developers write code behind a feature flag and must pass a Launch Review before turning it on. Teams think about security early on and coordinate with the security team. Teams are responsible for the safety of their features and ensuring that the security team is able to say ‘yes’ to its security review.
Security review focuses on the design of a proposed feature, not its implementation details, and is distinct from code review. Chrome changes need approval from engineers familiar with the code being changed, but not necessarily from security experts. It is not practical for security engineers to scrutinize every change. Instead, we focus on the feature’s architecture and how it might affect people using Chrome.
Reviewers function best in an open and supportive engineering culture. Security review is not an easy task – it applies security engineering insights in a social context that could become adversarial and fractious. Google, and Chrome, embody a security-centric engineering culture, where respectful disagreement is valued, where we learn from mistakes, where decisions can be revisited, and where developers see the security team as a partner that helps them ship features safely. Within the security team we support each other by encouraging questioning & learning, and provide mentorship and coaching to help reviewers enhance their reviewing skills.
Start with some help. As a new reviewer, you may not feel you’re 100% ready — don’t let that put you off. The best way to learn is to observe and see what’s involved before easing into doing reviews on your own. Start by shadowing to get a feel for the process. Ask the person you are shadowing how they plan to approach the review, then look at the materials yourself. Concentrate on learning how to review rather than on the details of the thing you are reviewing. Don’t get too involved but observe how the reviewer does things and ask them why. Next time try to co-review something - ask the feature team some questions and talk through your thoughts with the other reviewer. Let them make the final approval decision. Do this a few times and you’ll be ready to be the main reviewer, and remember that you can always reach out for help and advice.
Read a lot, but know when to stop. Understand what the feature is doing, what’s new, and what’s built on existing, approved, mechanisms. Focus on the new things. If you need to educate yourself, skim older docs or code for context. It can help to look at related reviews for repeated issues and solutions. It is tempting to try to understand everything and at first you’ll dig deeper than you need to. You’ll get better at knowing when to stop after a few reviews. Treat existing, approved, features as building blocks that you don’t need to fully understand, but might be useful to skim as background.
Launch review is a gate. It’s ok to ask feature teams to have the materials ready. Try to use your time wisely — if a design doc is very brief and lacks any security discussion you can quickly say "please add a security considerations section" and stop thinking about it until the team comes back with more complete documentation. If the design document doesn’t fully explain something, that is a sign the document needs to be expanded — if something isn’t clear to you or isn’t covered, start asking questions. Remember that you’re not looking for every possible bug, but ensuring that major concerns are addressed upfront.
As you’re reading, read actively and write down observations and questions as you go. Cross them off if you find an answer later. For your first reviews this will take a long time. Don’t worry too much about that - you won't know yet which details matter. Over time you’ll learn where to focus your attention. This is also a good time to pair up with a seasoned reviewer. Schedule a chat to go over your thoughts before you share them with the feature team. This will help you understand the process people go through and allows a safe evaluation of your thoughts before you share them more widely - this will help you build confidence. Next, clarify any questions with the feature team. Try to write a sentence or two describing the feature - if you can’t do this it indicates you need more information.
You have permission to be ignorant! Use it! Ask questions until you understand areas of uncertainty. Asking questions provides real value, and often triggers the team to realize that something should be done differently. In particular — if it’s confusing to you it’s probably badly explained or badly thought out, or shows that an assumption or tacit knowledge is missing from a design document. If you’re worried about looking ignorant, make use of the more experienced reviewers around you — ask on the chat or book some time to talk over your thoughts one-on-one. This should help you formulate your question so that it’s useful to the feature team. Try to write out what you think is happening, and let the feature team tell you if you’re close or not.
The chances that you’ll understand everything immediately are very low, and that’s ok. In meetings about a feature a favorite question of mine is ‘what are you secretly worried about?’ followed by an awkward pause. People will absolutely tell you things! Sometimes there's a domain-knowledge mismatch when you don't have the right words to ask the question, so you can't get a useful answer. Always ask for a diagram that shows which process or component different parts of a feature are happening in — this helps you home in on the critical interfaces, and will illustrate the design more clearly than screenfuls of text or code.
We’re here to help people. Try to center people in your thoughts and arguments. How will people use the feature? Who are they? Who might harm them and how? Are there particular groups of people that might be more vulnerable than others, and what can we do to protect them? How does the feature make people feel? How will their experience of the application change? How will their lives be affected? Think about how a bad actor might abuse the feature. What implicit assumptions is the implementation making about the people using it? What or who are we asking people to trust? What if someone modifies traffic, changes a message, passes in bad data, or tricks someone into using the feature when they don't want to? This is a great thing to discuss when you’re pairing with another reviewer — be sure to ask them what they like to think about.
Take time to think, bringing an adversarial mindset and a different perspective. In some ways the purpose of a security review is to stop and think before unleashing new ideas on the world. Make focus time in your calendar or sit somewhere unusual to give yourself space to think. A skeptical, enquiring mindset is more useful than deep knowledge. You’re there to ask the questions the feature team won’t have thought about. They will naturally focus on what they need to do to make the feature work. Security review is about thinking about what else might happen when it’s working, or what might happen if someone deliberately tries to do things the designers didn’t expect. Try to take a different perspective.
Trust your spidey-senses. Sometimes you can't quite put your finger on what might go wrong, but something feels off. Sometimes a feature is just plain complicated, or in a risky area of code, or feels like it's been rushed. It can be difficult to articulate these concerns to a team without rubbing people the wrong way. Use people you trust to bounce your thoughts off and home in on what you are worried about. Discuss with other reviewers whether and how these risks can be communicated. Your spidey-senses are probably correct, and they're as important as any single concrete solvable threat you've spotted.
Pause then approve. Once you’ve understood what’s happening and iterated through any concerns you’ve raised you’ll be ready to approve the feature for launch. It’s worth taking a short pause here to let your brain do its thinking in the background before you press the button. Try to concisely describe the feature — if you can’t then go back and ask more questions! It’s important to get questions and concerns to teams quickly but final approval can wait for some digestion time. If you cannot come up with a clear decision then reach out to other reviewers to discuss what to do next. Let the feature team know you’re working on it and when you’ll get back to them. After a pause, if nothing else occurs to you then click Approved and write a short paragraph saying why. Note any follow-on work the team has promised to complete before launching. This is also a great time to leave yourself a short note for your performance review — it’s easy to lose track of what you reviewed and the changes your input led to — having a rolling document will both help you spot patterns, and help you tell the story of the work you’ve done.
Nothing we do in software is forever, and many mistakes will be found and fixed later. You will make mistakes. Mainly small ones that won’t really matter. Security is about evaluating new risks in the context of the value provided to people using a product. This tradeoff extends into the design and launch process of which you are just a small part. You only have so much time, and it’s inevitable that you might sometimes see things that aren’t there, or not notice things that are. Security reviewers are one element in a layered defense and the consequences of a mistake will be contained by things you did spot. It’s good to try to find specific problems, but more important to locate and apply general security principles like sandboxing and the rule of two. Sometimes you might think something is fine, but later realize that it isn’t. This often happens when we learn something new about a feature, or discover that an assumption was invalid. This is where careful communication is important. Feature teams will be happy to know about any problems you uncover, and will find time to fix them later if possible. Remember that Looks Good To Me doesn’t mean Looks Perfect To Me.
Experienced reviewers can always improve, and apply their insights widely within their organization.
It takes time to learn security engineering and build a working knowledge of the architecture of a complex product. Reviewing is different from the normal development journey - when an engineer works on a feature they start in an ambiguous situation and gradually learn or invent everything needed to deeply understand and solve the problem. To be effective as a security reviewer we have to embrace ambiguity and ignorance, and learn how to swiftly learn just enough to have a useful opinion, before starting again for our next review. This may seem daunting - and it is - but over time reviewers get better at knowing where to focus their efforts.
Security reviewing can feel invisible. Security is not an all-or-nothing quality of a feature. Rather it forms one concern that a product must balance while still shipping, adding new features, and appealing to people that use it. Security is an important concern (for Chrome it’s both a critical engineering pillar, and something people say they value when choosing Chrome) but it’s not the only factor. It’s our job to identify and articulate security risks, and advocate for better approaches, but sometimes another concern dominates. If deviations from our advice are well justified we shouldn’t feel ignored - we did our bit.
Your peers are there to help you. If you need support, ask questions on the reviewing team’s chat, or schedule thirty minutes or a coffee with another reviewer to discuss a particular review.
Remember that developers know what they are doing, but might not be thinking about the things you are thinking about. You might not be confident in what you know about their feature, but imagine how the feature team feels coming to the mysterious halls of the security people! Often we’ll ask a team to implement one or more of our layered defenses before they get to launch their feature. This might be the first time they’ve had to write a fuzzer or harden a library. You’ll get requests for examples or help with implementation. Find an expert or spend time doing these things yourself. The security process should be as smooth a speed bump as possible. Any familiarity you have with these techniques will improve our interactions and maintain our reputation as a helpful team. If we ask someone to do something but can’t help them make progress we will be a source of frustration. If we help people they will be likely to approach us early-on next time they have a security question.
Develop mind-tricks and frameworks for having difficult conversations. Sometimes (especially when you get involved early in a project’s design phase) you will need to disagree with a feature’s design, or nudge a team in a more secure direction. While a supportive technical culture should make it safe to surface and resolve technical differences, it takes energy and patience to work through these conflicts. It’s harder still to say ‘no’, or ask a team to commit to more work than they were expecting. These are skills you can practice and become more comfortable doing. Look for courses you can take. Some suggestions include “having difficult conversations”, “mentoring”, “coaching”, “persuasive writing”, and “threat modeling”.
Find ways to scale your impact. Security decisions are made based on judgment and mechanisms, but judgment doesn’t scale! To maintain a sustainable security workload for ourselves, and empower feature teams to make their own decisions, we need to make judgment as small a part of the puzzle as possible.
Encourage good patterns. If a design addresses a security concern, say so on the launch bug or a mailing list. This helps for later reviews, and provides useful feedback to the design team. Help newer reviewers see good or bad patterns, and the rhythm of reviews by telling a few stories of what went well and what got missed in the past. Establish architectural patterns that contain the consequences of a problem. Make these easy to follow while preventing anti-patterns - ideally a bad security idea shouldn’t even compile.
Write guidance or policies. Distill decisions into FAQs, threat models, principles or rules. Get involved with the people building foundational pieces of your product, and get them to own their security guidance so that it gets applied as part of that team’s advice to other teams. A checklist of things to look for in a particular area is a great starting point for the team making the next feature in that space, and for the reviewer that signs off at the end.
Level-up your developers. We can raise the level of expertise across the wider developer community, and reduce the burden of reviewing for security teams. Through repeated engagements with the same team you can start to set expectations - each time, drop some hints about what could be done better next time. Encourage system diagrams, risk assessments, threat modeling or sandboxing. Soon teams will start with these, and reviews will be much smoother.
Anoint security champions. In larger feature teams encourage a couple of security champions within the group to serve as initial points of contact and a first line of review. Support these people! Offer to talk them through their design docs and help them think about security concerns. They will grow into local experts who know when to call on security specialists. They can write security principles for their area, leading to secure features and smooth launch reviews.
Do a few reviews to develop confidence in your decisions. You won't understand all the details of a feature. You will sometimes say yes to the wrong things or get teams to do unnecessary work. You'll also ask insightful questions and improve designs.
Remember that security reviewing is difficult. Remember that people are there to help you. Remember that every good decision you encourage keeps people safe from harm, and increases their trust in you and your product. As you mature, maintain a supportive culture where reviewers can grow, and where you help other teams develop new features with safety in mind.
Most modern consumer messaging platforms (including Google Messages) support end-to-end encryption, but users today are limited to communicating with contacts who use the same platform. This is why Google is strongly supportive of regulatory efforts that require interoperability for large end-to-end messaging platforms.
For interoperability to succeed in practice, however, regulations must be combined with open, industry-vetted standards, particularly in the areas of privacy, security, and end-to-end encryption. Without robust standardization, the result will be a spaghetti of ad hoc middleware that could lower security standards to cater for the lowest common denominator and raise implementation costs, particularly for smaller providers. Lack of standardization would also make advanced features such as end-to-end encrypted group messaging impossible in practice – group messages would have to be encrypted and delivered multiple times to cater for every different protocol.
With the recent publication of the IETF’s Message Layer Security (MLS) specification RFC 9420, messaging users can look forward to this reality. For the first time, MLS enables practical interoperability across services and platforms, scaling to groups of thousands of multi-device users. It is also flexible enough to allow providers to address emerging threats to user privacy and security, such as quantum computing.
By ensuring a uniformly high security and privacy bar that users can trust, MLS will unleash a huge field of new opportunities for the users and developers of interoperable messaging services that adopt it. This is why we intend to build MLS into Google Messages and support its wide deployment across the industry by open sourcing our implementation in the Android codebase.
In February, we expanded Google Workspace client-side encryption (CSE) capabilities to include Gmail and Calendar in addition to Drive, Docs, Slides, Sheets, and Meet.
CSE in Gmail was designed to provide commercial and public sector organizations an additional layer of confidentiality and data integrity protection beyond the existing encryption offered by default in Workspace. When CSE is enabled, email messages are protected using encryption keys that are fully under the customer’s control. The data is encrypted on the client device before it’s sent to Google servers that do not have access to the encryption keys, which means the data is indecipherable to us–we have no technical ability to access it. The entire process happens in the browser on the client device, without the need to install desktop applications or browser extensions, which means that users get the same intuitive productivity and collaboration experiences that they enjoy with Gmail today. Let’s take a deeper look into how it works.
How we built Client-side Encryption for Workspace
We invented and designed a new service called Key Access Control List Service (KACLS) that is used across all essential Workspace applications. Then, we worked directly with customers and partners to make it secure, reliable, and simple to deploy. KACLS performs cryptographic operations with encryption keys after validating end-user authentication and authorization. It runs in a customer's controlled environment and provides the key management API called by the CSE-enabled Workspace clients. We have multiple partners providing software implementations of the KACLS API that can be used by our customers.
At a high level, Workspace client code takes advantage of envelope encryption to encrypt and decrypt the user content on the client with a Data Encryption Key (DEK), and leverages the KACLS to encrypt and decrypt the DEK. To provide separation of duties, we use the customer's OpenID Connect (OIDC) IdP to authenticate end-users and provide a JSON Web Token assertion with a claim identifying the user (3P_JWT). For every encryption/decryption request sent to KACLS, the application (e.g. Gmail) provides a JSON Web Token assertion with a claim authorizing the current end-user operation (G_JWT). KACLS validates these authentication and authorization tokens before returning, for example, a decrypted DEK to the user’s client device.
More details on KACLS are available in Google Workspace Encryption Whitepaper and CSE reference API.
How we built CSE into Gmail
Google Workspace Engineering teams have been hard at work over multiple years to deliver to our customers the ability to have their data protected with client-side encryption. This journey required us to work closely with customers and partners to provide a capability that was secure, easy to use, intuitive and easily deployable. It was also important for CSE to work seamlessly across the Workspace products: you can create a Meet CSE scheduled meeting in Calendar CSE and follow-up with Gmail CSE emails containing links to Drive CSE files.
Client-side encryption in Gmail was built with openness and interoperability in mind. The underlying technology being used is S/MIME, an open standard for sending encrypted messages over email. S/MIME is already supported in most enterprise email clients, so users are able to communicate securely, outside of their domain, regardless of what provider the recipient is using to read their mail, without forcing the recipients to log into a proprietary portal. S/MIME uses asymmetric encryption. The public key and the email of each user are included in the user's S/MIME certificate. Similarly to TLS used for HTTPS, each certificate is digitally signed by a chain of certificate authorities up to a broadly trusted root certificate authority. The certificate acts as a virtual business card, enabling anyone getting it to encrypt emails for that user. The user's private keys are kept secure under customer control and are used by users for decryption of incoming emails and digital signature of outgoing emails.
To keep private keys as safe as possible, we decided to leverage the CSE paradigm used for Drive CSE and not keep them on the device. Instead, we extended our KACLS API to support asymmetric encryption and signature operations. This enables our customers to centrally provision and enable S/MIME, on the KACLS, for all their users without having to deploy certificates individually to each user device.
CSE in Gmail uses the end-user client’s existing cryptographic functionality (for instance, the Web Crypto API in web browsers) to perform local encryption operations, and runs client-side code to perform all S/MIME message generation.
Now let's cover the detailed user flows:
When sending an email, the Gmail client generates a MIME message and encrypts it with a random Data Encryption Key (DEK). It then uses the recipients' public keys to encrypt the DEK, and calls KACLS (with the user authenticated by the customer's IdP and authorized by Google) to digitally sign the content. Finally, it sends the authenticated and encrypted S/MIME message, which contains both the encrypted email and the encrypted DEK, to Google servers for delivery to the recipients. Below is an animated screenshot showing the user interface of Gmail when using CSE.
When receiving an email, Gmail will verify that the digital signature of the email is valid and matches the sender's identity, which protects the email against tampering. Gmail will trust digital identities signed by Root CA PKI as well as custom domain configurations. The Gmail client will call KACLS (with the authentication and authorization JWT) to decrypt the email encryption key, then can decrypt the email and render it to the end-user.
How we protect the application
Workspace already uses the latest cryptographic standards to encrypt all data at rest and in transit between its facilities for all services. Additionally, Gmail uses Transport Layer Security (TLS) by default for communication with other email service providers. CSE in Gmail, however, provides an additional layer of protection for sensitive content. The security of Gmail CSE is paramount to us, and we developed new additional mechanisms to ensure CSE content would be locked into a secure container. On the web, we have been leveraging iframe origin isolation, strict postMessage API, and Content Security Policy to protect the user's sensitive data. Those security controls provide multiple layers of safety to ensure that CSE content stays isolated from the rest of the application. See this simplified diagram covering the isolation protecting CSE emails during composition or display.
What’s next for Client-side encryption and why it’s important
CSE in Gmail uses S/MIME to encrypt and digitally sign emails using public keys supplied by customers, which adds an additional level of confidentiality and integrity to emails. This is done with extensive security controls to protect the confidentiality of user data, while being transparently integrated into the Gmail UI to delight our users. However, our work is not done, and we are actively partnering with Google Research to further develop client-side capabilities. You can see some of our progress in this field with our presentation at the RSA Security Conference last year where we provided insight into the challenges and the practical strategies to provide advanced capabilities, such as AI-driven phishing protection for CSE.
“Secure your dependencies”—it’s the new supply chain mantra. With attacks targeting software supply chains sharply rising, open source developers need to monitor and judge the risks of the projects they rely on. Our previous installment of the Supply chain security for Go series shared the ecosystem tools available to Go developers to manage their dependencies and vulnerabilities. This second installment describes the ways that Go helps you trust the integrity of a Go package.
Go has built-in protections against three major ways packages can be compromised before reaching you:
A new, malicious version of your dependency is published
A package is withdrawn from the ecosystem
A malicious file is substituted for a currently used version of your dependency
In this blog post we look at real-world scenarios of each situation and show how Go helps protect you from similar attacks.
In 2018, control of the JavaScript package event-stream passed from the original maintainer to a project contributor. The new owner purposefully published version 3.3.6 with a new dependency named flatmap-stream, which was found to be maliciously executing code to steal cryptocurrency. In the two months that the compromised version was available, it had been downloaded 8 million times. This raises the question: how many users were unaware that they had adopted a new indirect dependency?
Go ensures reproducible builds by automatically fixing dependencies to a specific version (“pinning”). A newly released dependency version will not affect a Go build until the package author explicitly chooses to upgrade. This means that all updates to the dependency tree must pass code review. In a situation like the event-stream attack, developers would have the opportunity to investigate their new indirect dependency.
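Pinned versions live in a module’s go.mod file (with corresponding hashes in go.sum). A sketch, with a hypothetical module path and dependencies:

```
module example.com/myapp

go 1.21

require (
	github.com/some/dependency v1.4.2 // builds use exactly this version
	golang.org/x/text v0.9.0 // indirect: pinned even though not imported directly
)
```

An upgrade only happens when someone runs an explicit command such as `go get github.com/some/dependency@v1.5.0`, producing a go.mod and go.sum diff that goes through code review like any other change.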
In 2016, an open-source developer pulled his projects from npm after a disagreement with npm and patent lawyers over the name of one of his open-source libraries. One of these pulled projects, left-pad, seemed to be small, but was used indirectly by some of the largest projects in the npm ecosystem. Left-pad had 2.5 million downloads in the month before it was withdrawn, and its disappearance left developers around the world scrambling to diagnose and fix broken builds. Within a few hours, npm took the unprecedented action to restore the package. The event was a wake up call to the community about what can happen when packages go missing.
Go guarantees the availability of packages. The Go Module Mirror serves packages requested by the go command, rather than going to the origin servers (such as GitHub). The first time any Go developer requests a given module, it’s fetched from upstream sources and cached within the module mirror. When a module has been made available under a standard open source license, all future requests for that module simply return the cached copy, even if the module is deleted upstream.
In December 2022, users who installed the package pyTorch-nightly via pip, downloaded something they didn’t expect: a package that included all the functionality of the original version but also ran a malicious binary that could gain access to environment variables, host names, and login information.
This compromise was possible because pyTorch-nightly had a dependency named torchtriton that shipped from the pyTorch-nightly package index instead of PyPI. An attacker claimed the unused torchtriton namespace on PyPI and uploaded a malicious package. Since pip checks PyPI first when performing an install, the attacker got their package out in front of the real package—a dependency confusion attack.
Go protects against these kinds of attacks in two ways. First, it is harder to hijack a namespace on the module mirror because publicly available projects are added to it automatically—there are no unclaimed namespaces of currently available projects. Second, package authenticity is automatically verified by Go's checksum database.
The checksum database is a global list of the SHA-256 hashes of source code for all publicly available Go modules. When fetching a module, the go command verifies the hashes against the checksum database, ensuring that all users in the ecosystem see the same source code for a given module version. In the case of pyTorch-nightly, a checksum database would have detected that the torchtriton version on PyPI did not match the one served earlier from pyTorch-nightly.
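The verification step can be sketched in a few lines of Go. This is only an illustration of the idea: the go command actually hashes an entire module tree using the H1 algorithm from golang.org/x/mod/sumdb/dirhash and compares the result against the go.sum entry and the checksum database, while this sketch hashes a single blob:

```go
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// pin computes a base64 SHA-256 digest of some content, standing in for the
// entry the go command records in go.sum for a module version.
func pin(content []byte) string {
	sum := sha256.Sum256(content)
	return "h1:" + base64.StdEncoding.EncodeToString(sum[:])
}

// verify recomputes the hash of downloaded content and compares it to the
// pinned value, rejecting any module whose bytes have changed.
func verify(content []byte, pinned string) bool {
	return pin(content) == pinned
}

func main() {
	original := []byte("package torchtriton // legitimate source")
	pinned := pin(original)

	fmt.Println(verify(original, pinned))                                   // content matches the pin
	fmt.Println(verify([]byte("package torchtriton // malicious"), pinned)) // substitution detected
}
```

In a dependency-confusion scenario like torchtriton, the substituted package hashes to a different value than the recorded pin, so the download is rejected before it ever reaches a build.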
How do we know that the values in the Go checksum database are trustworthy? The Go checksum database is built on a Transparent Log of hashes of every Go module. The transparent log is backed by Trillian, a production-quality, open-source implementation also used for Certificate Transparency. Transparent logs are tamper-evident by design and append-only, meaning that it's impossible to delete or modify Go module hashes in the logs without the change being detected.
The Go team supports the checksum database and module mirror as services so that Go developers don't need to worry about disappearing or hijacked packages. The future of supply chain security is ecosystem integration, and with these services built directly into Go, you can develop with confidence, knowing your dependencies will be available and uncorrupted.
The final part of this series will discuss the Go tools that take a “shift left” approach to security—moving security earlier in the development life cycle. For a sneak peek, check out our recent supply chain security talk from Google I/O!