diff --git a/content/posts/2024-04-03-demystifying-sboms-the-backbone-of-modern-software-security.md b/content/posts/2024-04-03-demystifying-sboms-the-backbone-of-modern-software-security.md index beca8ce..43c960e 100644 --- a/content/posts/2024-04-03-demystifying-sboms-the-backbone-of-modern-software-security.md +++ b/content/posts/2024-04-03-demystifying-sboms-the-backbone-of-modern-software-security.md @@ -1,63 +1,152 @@ --- -title: 'Demystifying SBOMs: The Backbone of Modern Software Security' -description: "An introduction to Software Bill of Materials (SBOMs), explaining what they are, their types (SPDX, CycloneDX, SWID), how to generate them, and their role in vulnerability audits." +title: "Demystifying SBOMs: The Backbone of Modern Software Security" +description: "What is an SBOM? A Software Bill of Materials is a machine-readable inventory of every component in your software. Learn about SBOM formats (SPDX, CycloneDX), generation tools, vulnerability management, and compliance requirements." categories: - education -tags: [sbom, security, intro] +tags: [sbom, security, intro, supply-chain] +tldr: "An SBOM is a machine-readable inventory of every component in a piece of software — libraries, frameworks, and transitive dependencies included. SBOMs make vulnerability response fast (seconds instead of days), enable continuous monitoring, and are increasingly required by regulations like EO 14028 and the EU CRA. The two dominant formats are SPDX and CycloneDX." author: display_name: Cowboy Neil login: Cowboy Neil url: https://sbomify.com -author_login: Cowboy Neil -author_url: https://sbomify.com -wordpress_id: 153 -wordpress_url: https://sbomify.com/?p=153 -date: '2024-04-03 17:18:12 +0200' -date_gmt: '2024-04-03 17:18:12 +0200' -comments: [] +faq: + - question: "What is an SBOM?" 
+ answer: "An SBOM (Software Bill of Materials) is a machine-readable inventory that lists every component in a piece of software, including direct dependencies, transitive dependencies, their versions, suppliers, and licenses. It is often compared to a nutritional label for food — it tells you exactly what is inside." + - question: "What are the main SBOM formats?" + answer: "The two dominant SBOM formats are SPDX (Software Package Data Exchange), maintained by the Linux Foundation, and CycloneDX, maintained by OWASP. Both support JSON and XML serialization. SPDX has deeper roots in license compliance, while CycloneDX was designed for security and supply chain use cases. A third format, SWID Tags, is used primarily for installed software identification." + - question: "Why are SBOMs important for security?" + answer: "SBOMs enable rapid vulnerability response. When a new CVE is disclosed, an SBOM lets you determine in seconds whether your software contains the affected component — without manually auditing source code or build systems. This is critical for monitoring against databases like the CISA KEV catalog and for meeting regulatory compliance requirements." + - question: "How do I generate an SBOM?" + answer: "SBOMs can be generated using tools like the sbomify GitHub Action, Syft, Trivy, or CycloneDX CLI plugins. The sbomify GitHub Action automatically selects the best generator for your ecosystem and enriches the output with package metadata. The most accurate SBOMs are generated at build time, when the full dependency tree is resolved." + - question: "Are SBOMs required by law?" + answer: "Increasingly, yes. U.S. Executive Order 14028 requires SBOMs for software sold to the federal government. The EU Cyber Resilience Act requires manufacturers of products with digital elements to identify and document components. The FDA requires SBOMs for medical device software. Many procurement contracts now include SBOM requirements as well." 
+date: 2024-04-03 slug: demystifying-sboms-the-backbone-of-modern-software-security --- -In the ever-evolving landscape of software development and cybersecurity, Software Bill of Materials (SBOMs) have emerged as a crucial tool for enhancing transparency, security, and compliance. SBOMs provide a detailed inventory of all the components that make up a software product, akin to a nutritional label for food products, detailing every ingredient that goes into the mix. This comprehensive insight is invaluable not only for developers and vendors but also for buyers and regulatory bodies, ensuring that software is secure, compliant, and free from vulnerabilities. Let's delve into the intricacies of SBOMs, exploring their types, generation, usage, and role in vulnerability audits. +When the Log4Shell vulnerability ([CVE-2021-44228](https://nvd.nist.gov/vuln/detail/CVE-2021-44228)) was disclosed in December 2021, organizations around the world scrambled to answer a simple question: _does our software use Log4j?_ For most, the answer took days or weeks of manual investigation across thousands of applications. For the few that maintained [Software Bills of Materials](/what-is-sbom/), the answer took seconds. -### What are SBOMs? +An **SBOM (Software Bill of Materials)** is a machine-readable inventory of every component in a piece of software — every library, framework, and transitive dependency, along with version numbers, suppliers, and relationships. Think of it as a nutritional label for software: it tells you exactly what is inside. And just as food labels transformed public health by making ingredients transparent, SBOMs are transforming software security by making the software supply chain visible. -An SBOM is essentially a nested inventory, a document that lists all components in a piece of software. Software components might include libraries, packages, modules, and snippets, among others, each potentially having its own set of dependencies. 
SBOMs can vary in format and detail, but at their core, they serve the same purpose: to provide visibility into the software supply chain. +## What Is Inside an SBOM? -See [What is an SBOM](/what-is-sbom/) to better understand what SBOMs are. +A well-formed SBOM contains structured information about each component in your software: -### Types of SBOMs +- **Component name and version** — The specific library or package and its exact version (e.g., `log4j-core 2.14.1`) +- **Supplier** — Who authored or published the component +- **Package URL (purl)** — A universal identifier for the component (e.g., `pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1`) +- **Dependency relationships** — How components relate to each other (direct dependency, transitive dependency, dev dependency) +- **Licenses** — The license terms governing each component +- **Hashes** — Cryptographic digests for integrity verification -SBOMs can be classified based on the depth of detail they offer and the format in which they are provided. Common formats include: +This information enables automated tooling to match components against vulnerability databases, verify license compliance, and detect supply chain anomalies. -- **SPDX (Software Package Data Exchange)**: Developed by the Linux Foundation, SPDX is a popular format that offers a standardized way to communicate the components, licenses, and copyrights associated with software packages. -- **CycloneDX**: A lightweight SBOM standard designed for use in application security contexts and supply chain component analysis. -- **SWID (Software Identification) Tags**: XML or JSON documents that identify and describe a software product within its operational environment. +## SBOM Formats -### Generating SBOMs +Three formats dominate the SBOM landscape. For a detailed comparison, see our [SBOM formats guide](/2026/01/15/sbom-formats-cyclonedx-vs-spdx/). -Creating an SBOM can be as simple or complex as the tools and methods employed. 
With the rise of DevOps and continuous integration/continuous deployment (CI/CD) practices, the generation of SBOMs has become more automated and integrated into the software development lifecycle. Popular tools for generating SBOMs include: +### SPDX (Software Package Data Exchange) -- **GitHub CLI**: GitHub has introduced capabilities to generate SBOMs directly within its CI/CD pipelines, allowing developers to create and update SBOMs as part of their regular development and deployment processes. -- **Docker**: The containerization platform enables users to generate SBOMs for container images, providing insights into the components that constitute the Docker containers. +[SPDX](https://spdx.dev/) is maintained by the Linux Foundation and is an ISO/IEC 5962 international standard. Originally designed for license compliance, SPDX has evolved to support security use cases with SPDX 3.0's security profile. It supports JSON, XML, RDF, tag-value, and YAML serialization formats. -### Using SBOMs +### CycloneDX -Once generated, SBOMs can be used in several ways to enhance software security and compliance. They are often shared with stakeholders, including buyers, regulatory bodies, and security teams, to provide transparency into the software components and their provenance. This transparency is crucial for: +[CycloneDX](https://cyclonedx.org/) is maintained by OWASP and was designed from the start for security and supply chain use cases. It supports JSON and XML, and has specialized capabilities including vulnerability disclosure (VEX), [cryptographic BOMs (CBOM)](/2024/04/10/future-proofing-cybersecurity-with-the-cryptography-bill-of-materials-cbom/), and service dependency mapping. CycloneDX is an ECMA standard (ECMA-424). -- **Compliance**: Ensuring that software complies with licenses and regulations. -- **Security**: Identifying known vulnerabilities within components. -- **Supply Chain Risk Management**: Assessing risks associated with third-party components. 
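To make the formats concrete, here is a minimal hand-written CycloneDX-style JSON document and a few lines of Python that query it. The top-level field names (`bomFormat`, `specVersion`, `components`, `purl`) follow the CycloneDX JSON schema, but the document itself and the component values are illustrative, not a complete SBOM:

```python
import json

# A hand-written, minimal CycloneDX-style SBOM (illustrative values only).
SBOM_JSON = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {
      "type": "library",
      "name": "log4j-core",
      "version": "2.14.1",
      "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"
    }
  ]
}
"""

sbom = json.loads(SBOM_JSON)

# Index components by package URL for fast lookup.
by_purl = {c["purl"]: c for c in sbom["components"] if "purl" in c}

# The Log4Shell question, answered in one dictionary lookup:
affected = "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"
print(affected in by_purl)  # True
```

This lookup-by-purl pattern is the core of what SBOM tooling automates at scale: matching a disclosed vulnerable package identifier against an inventory.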
+### SWID Tags -SBOMs are typically shared through secure channels or integrated into product documentation, making them accessible to those who need them while maintaining the security of sensitive information. +[SWID (Software Identification) Tags](https://csrc.nist.gov/projects/Software-Identification-SWID) are an ISO/IEC 19770-2 standard primarily used for identifying installed software in enterprise asset management. SWID tags are less common in the development-focused SBOM ecosystem but are referenced in some government procurement requirements. -### SBOMs and Vulnerability Audits +For most organizations, the choice is between SPDX and CycloneDX. Both are mature, well-tooled, and accepted by regulatory frameworks. Many organizations generate both. -One of the most critical uses of SBOMs is in the context of vulnerability audits. By providing a detailed list of a software product's components, SBOMs enable security teams to quickly identify and address known vulnerabilities. Tools like the [OVS](https://osv.dev) can cross-reference SBOM data against databases of known vulnerabilities, facilitating rapid detection and remediation. +## How SBOMs Are Generated -Moreover, the ongoing maintenance of an SBOM throughout a product's lifecycle ensures that new vulnerabilities can be identified and mitigated promptly, maintaining the integrity and security of the software over time. +SBOMs can be generated at different points in the software lifecycle, with trade-offs in accuracy and completeness. -### Conclusion +### Build-Time Generation -As software ecosystems become increasingly complex and interconnected, the importance of SBOMs in ensuring the security, compliance, and integrity of software products cannot be overstated. By offering a transparent view into the components that make up software, SBOMs play a pivotal role in modern cybersecurity practices. 
With the right tools and processes in place, generating and utilizing SBOMs can become a seamless part of the software development and deployment lifecycle, paving the way for safer, more reliable software systems. +The most accurate SBOMs are generated during the build process, when the full dependency tree has been resolved and lock files are available. Build-time tools include: + +- **[sbomify GitHub Action](https://github.com/sbomify/sbomify-action/)** — A multi-ecosystem SBOM generator that automatically selects the best generation tool for your stack, enriches the output with package metadata, and optionally augments it with business information +- **[Syft](https://github.com/anchore/syft)** — Generates SBOMs from container images, file systems, and archives in both SPDX and CycloneDX formats +- **[Trivy](https://trivy.dev/)** — Aqua Security's scanner that generates SBOMs alongside vulnerability reports +- **[CycloneDX CLI tools](https://cyclonedx.org/tool-center/)** — Language-specific plugins for npm, Maven, Gradle, pip, Go, and more + +### Analysis-Time Generation + +Post-hoc tools analyze compiled binaries or container images to infer components. These are useful when source code or build systems are not available but may miss components that are statically linked or vendored without metadata. + +### The Accuracy Problem + +An SBOM is only as useful as it is accurate. [Lock file drift](/2024/07/30/what-is-lock-file-drift/) — where the dependency manifest and lock file fall out of sync — is one of the most common causes of inaccurate SBOMs. If your lock file doesn't reflect what's actually installed, neither will your SBOM. + +## Why SBOMs Matter + +### Rapid Vulnerability Response + +The primary security value of an SBOM is speed. 
When a new [CVE](/2025/12/18/cve-vulnerability-explained/) is disclosed — especially one confirmed as actively exploited in the [CISA KEV catalog](/2025/12/30/what-is-kev-cisa-known-exploited-vulnerabilities/) — you need to know immediately whether your software is affected. Without an SBOM, this means manually investigating each application's dependencies. With an SBOM, you query a database and get the answer in seconds. + +This is not a hypothetical benefit. During Log4Shell, organizations with SBOMs identified affected applications within hours. Those without SBOMs spent weeks. + +### Continuous Monitoring + +SBOMs enable _continuous_ vulnerability monitoring, not just point-in-time audits. By ingesting SBOMs into a management platform like [sbomify](https://sbomify.com) or [OWASP Dependency-Track](https://dependencytrack.org/), you can automatically cross-reference your components against vulnerability databases (like [Google OSV](https://osv.dev/)) as new vulnerabilities are disclosed. For details on building this pipeline, see our guide on [SBOM scanning for vulnerability detection](/2026/02/01/sbom-scanning-vulnerability-detection/). + +### Supply Chain Visibility + +Modern software is overwhelmingly composed of third-party code. Studies consistently show that 70-90% of a typical application consists of open-source components. An SBOM makes this composition visible, enabling informed decisions about which dependencies to adopt, which to replace, and which pose unacceptable risk. For more on [the role of SBOMs in cybersecurity](/2026/02/08/sbom-cybersecurity-role/), see our dedicated guide. + +### Compliance + +SBOMs are increasingly required — not just recommended — by regulators and procurement frameworks: + +- **[Executive Order 14028](/compliance/eo-14028/)** requires SBOMs for software sold to the U.S. 
federal government +- **[EU Cyber Resilience Act](/compliance/eu-cra/)** requires manufacturers of products with digital elements to document components and dependencies +- **FDA** requires SBOMs for pre-market submissions of medical device software +- **[CISA minimum elements](/compliance/cisa-minimum-elements/)** define what an SBOM must contain to be useful for cross-organizational sharing +- **NIST SP 800-161** recommends SBOMs as part of supply chain risk management + +## SBOMs in Practice: The Lifecycle + +Generating an SBOM is not a one-time event. SBOMs are most valuable when treated as living documents managed throughout the software lifecycle. + +1. **Generate** — Produce SBOMs as part of your CI/CD pipeline, at every build +2. **Sign** — Wrap SBOMs in [in-toto attestations](/2024/08/14/what-is-in-toto/) signed via [Sigstore](/2024/08/12/what-is-sigstore/) for integrity and provenance +3. **Store** — Ingest SBOMs into a management platform ([sbomify](https://sbomify.com)) for centralized visibility +4. **Monitor** — Continuously scan SBOMs against vulnerability databases as new CVEs are published +5. **Share** — Provide SBOMs to customers, regulators, and partners as required by contracts or regulations +6. **Update** — Regenerate SBOMs whenever dependencies change, and retire SBOMs for decommissioned software + +## Getting Started + +The fastest path to SBOM adoption: + +1. **Pick a format.** CycloneDX is the simplest starting point for security-focused use cases. SPDX is a good choice if license compliance is a primary driver. See our [format comparison](/2026/01/15/sbom-formats-cyclonedx-vs-spdx/). +2. **Add SBOM generation to CI.** Use the [sbomify GitHub Action](https://github.com/sbomify/sbomify-action/), Syft, Trivy, or a CycloneDX plugin for your language. Generate the SBOM alongside your build artifacts. +3. 
**Ingest into a management platform.** [sbomify](https://sbomify.com) provides centralized SBOM storage, vulnerability analysis via Google OSV, and sharing capabilities. +4. **Monitor continuously.** Set up alerts for new vulnerabilities affecting your components. + +For step-by-step generation guides across different ecosystems, see our [SBOM generation guides](/guides/). + +## Frequently Asked Questions + +### What is an SBOM? + +An SBOM (Software Bill of Materials) is a machine-readable inventory that lists every component in a piece of software, including direct dependencies, transitive dependencies, their versions, suppliers, and licenses. It is often compared to a nutritional label for food — it tells you exactly what is inside. See [What is an SBOM?](/what-is-sbom/) for a deeper overview. + +### What are the main SBOM formats? + +The two dominant SBOM formats are [SPDX](https://spdx.dev/) (Software Package Data Exchange), maintained by the Linux Foundation, and [CycloneDX](https://cyclonedx.org/), maintained by OWASP. Both support JSON and XML serialization. SPDX has deeper roots in license compliance, while CycloneDX was designed for security and supply chain use cases. For a detailed comparison, see our [format guide](/2026/01/15/sbom-formats-cyclonedx-vs-spdx/). + +### Why are SBOMs important for security? + +SBOMs enable rapid vulnerability response. When a new [CVE](/2025/12/18/cve-vulnerability-explained/) is disclosed, an SBOM lets you determine in seconds whether your software contains the affected component — without manually auditing source code or build systems. This is critical for monitoring against databases like the [CISA KEV catalog](/2025/12/30/what-is-kev-cisa-known-exploited-vulnerabilities/) and for meeting regulatory compliance requirements. + +### How do I generate an SBOM? 
+ +SBOMs can be generated using tools like the [sbomify GitHub Action](https://github.com/sbomify/sbomify-action/), [Syft](https://github.com/anchore/syft), [Trivy](https://trivy.dev/), or [CycloneDX CLI plugins](https://cyclonedx.org/tool-center/). The most accurate SBOMs are generated at build time, when the full dependency tree is resolved. + +### Are SBOMs required by law? + +Increasingly, yes. U.S. [Executive Order 14028](/compliance/eo-14028/) requires SBOMs for software sold to the federal government. The [EU Cyber Resilience Act](/compliance/eu-cra/) requires manufacturers of products with digital elements to identify and document components. The FDA requires SBOMs for medical device software. Many procurement contracts now include SBOM requirements as well. diff --git a/content/posts/2024-04-10-future-proofing-cybersecurity-with-the-cryptography-bill-of-materials-cbom.md b/content/posts/2024-04-10-future-proofing-cybersecurity-with-the-cryptography-bill-of-materials-cbom.md index 32595a3..27f335c 100644 --- a/content/posts/2024-04-10-future-proofing-cybersecurity-with-the-cryptography-bill-of-materials-cbom.md +++ b/content/posts/2024-04-10-future-proofing-cybersecurity-with-the-cryptography-bill-of-materials-cbom.md @@ -1,46 +1,165 @@ --- -title: Future-Proofing Cybersecurity with the Cryptography Bill of Materials (CBOM) -description: "Introduction to CycloneDX's Cryptography Bill of Materials (CBOM) for managing cryptographic assets and preparing for quantum-resistant security in the post-quantum era." +title: "What Is a CBOM? The Cryptography Bill of Materials Explained" +description: "What is a CBOM? The Cryptography Bill of Materials inventories every cryptographic asset in your software — algorithms, keys, certificates, and protocols. Learn how CBOMs prepare organizations for the post-quantum transition." 
+categories: + - education +tags: [cbom, cryptography, security, post-quantum] +tldr: "A CBOM (Cryptography Bill of Materials) is an inventory of every cryptographic asset in your software: algorithms, key lengths, certificates, and protocols. Defined by CycloneDX, CBOMs are essential for the post-quantum transition — you cannot migrate to quantum-resistant cryptography if you do not know what cryptography you are using today." author: display_name: Cowboy Neil login: Cowboy Neil url: https://sbomify.com -author_login: Cowboy Neil -author_url: https://sbomify.com -wordpress_id: 197 -wordpress_url: https://sbomify.com/?p=197 -date: '2024-04-10 19:13:20 +0200' -date_gmt: '2024-04-10 19:13:20 +0200' -categories: - - education -tags: [cbom, cryptography, security] -comments: [] +faq: + - question: "What is a CBOM?" + answer: "A CBOM (Cryptography Bill of Materials) is a structured inventory of all cryptographic assets used in a piece of software, including algorithms, key lengths, certificates, protocols, and their usage contexts. It is defined as a capability of the CycloneDX SBOM standard and enables organizations to identify cryptographic risks, plan quantum-resistant migrations, and maintain cryptographic agility." + - question: "How is a CBOM different from an SBOM?" + answer: "An SBOM inventories software components (libraries, packages, dependencies). A CBOM inventories cryptographic assets (algorithms, keys, certificates, protocols). They are complementary: an SBOM tells you what software you have, a CBOM tells you what cryptography that software uses. CycloneDX supports both in the same format." + - question: "Why do I need a CBOM now?" + answer: "Quantum computers capable of breaking RSA and elliptic curve cryptography are expected within the next decade. NIST finalized its first post-quantum cryptography standards (ML-KEM, ML-DSA, SLH-DSA) in August 2024. The migration will take years, and the first step is knowing what cryptography you use today. 
CBOMs provide that inventory." + - question: "What is the harvest-now-decrypt-later threat?" + answer: "Harvest-now-decrypt-later (HNDL) is a strategy where adversaries capture encrypted data today with the intention of decrypting it once quantum computers become available. Data with long-term confidentiality requirements (state secrets, medical records, financial data) is particularly at risk. CBOMs help identify which systems use vulnerable algorithms so migration can be prioritized." + - question: "How do I generate a CBOM?" + answer: "CycloneDX tools can generate CBOMs by analyzing codebases for cryptographic library usage, certificate stores, and protocol configurations. IBM's CBOM toolkit and Crypto Discovery tools provide automated scanning. For many organizations, the first CBOM is compiled manually by inventorying TLS configurations, key management systems, and cryptographic library dependencies." +date: 2024-04-10 slug: future-proofing-cybersecurity-with-the-cryptography-bill-of-materials-cbom --- -In the rapidly evolving landscape of cybersecurity, the dawn of quantum computing presents both an unprecedented opportunity and a formidable challenge. The traditional cryptographic frameworks that have long served as the bedrock of our digital security are facing potential obsolescence, ushered in by the quantum era. Recognizing this pivotal shift, the CycloneDX initiative has introduced a groundbreaking tool: the Cryptography Bill of Materials (CBOM). This comprehensive guide not only charts a path for organizations navigating the complexities of quantum vulnerabilities but also heralds a new era of cybersecurity readiness. +Organizations know what software libraries they depend on — or at least they _should_, if they maintain [SBOMs](/what-is-sbom/). But ask most organizations what cryptographic algorithms they use, what key lengths protect their data, or which certificates expire next month, and the answer is usually silence. 
This blind spot is about to become critical: NIST finalized its first [post-quantum cryptography standards](https://csrc.nist.gov/projects/post-quantum-cryptography) in August 2024, starting a migration that will touch every system that uses public-key cryptography. You cannot migrate what you cannot see. + +A **CBOM (Cryptography Bill of Materials)** is the answer. Defined as a capability of the [CycloneDX](https://cyclonedx.org/capabilities/cbom/) standard, a CBOM is a structured inventory of every cryptographic asset in your software: algorithms, key lengths, certificates, protocols, and their usage contexts. Just as an SBOM makes your software composition visible, a CBOM makes your cryptographic posture visible — and visibility is the prerequisite for action. + +## What Is Inside a CBOM? + +A CBOM catalogs cryptographic assets across several categories: + +### Algorithms and Parameters + +- **Algorithm name** — The specific cryptographic algorithm (e.g., AES-256-GCM, RSA-2048, SHA-384, ECDSA P-256) +- **Key length** — The size of cryptographic keys in use +- **Mode of operation** — How the algorithm is applied (e.g., CBC, GCM, CTR for block ciphers) +- **Usage context** — What the algorithm protects (data at rest, data in transit, authentication, digital signatures) + +### Certificates + +- **Certificate authority** — Who issued the certificate +- **Subject and issuer** — What entity the certificate identifies +- **Validity period** — Expiration dates for proactive renewal +- **Signature algorithm** — The algorithm used to sign the certificate (critical for quantum readiness) + +### Protocols + +- **Protocol version** — TLS 1.2, TLS 1.3, SSH, IPsec, etc. 
+- **Cipher suites** — The specific combination of algorithms negotiated for each connection +- **Deprecated configurations** — Protocols or cipher suites that are no longer considered secure (e.g., TLS 1.0, RC4, 3DES) + +### Key Management + +- **Key storage locations** — HSMs, key vaults, file-based stores +- **Key rotation policies** — How often keys are rotated and by what mechanism +- **Key lifecycle state** — Active, expired, revoked, or compromised + +## Why CBOMs Matter Now + +### The Post-Quantum Transition + +In August 2024, NIST published three finalized post-quantum cryptography standards: + +- **[FIPS 203 (ML-KEM)](https://csrc.nist.gov/pubs/fips/203/final)** — Module-Lattice-Based Key-Encapsulation Mechanism, replacing key exchange mechanisms like ECDH +- **[FIPS 204 (ML-DSA)](https://csrc.nist.gov/pubs/fips/204/final)** — Module-Lattice-Based Digital Signature Algorithm, replacing signature schemes like RSA and ECDSA +- **[FIPS 205 (SLH-DSA)](https://csrc.nist.gov/pubs/fips/205/final)** — Stateless Hash-Based Digital Signature Algorithm, providing an alternative signature scheme based on different mathematical assumptions + +These standards mark the beginning of the largest cryptographic migration in computing history. Every system that uses RSA, ECDSA, ECDH, or other public-key algorithms vulnerable to quantum attack will need to transition to post-quantum alternatives. NIST's guidance calls for beginning migration immediately, with a target of deprecating vulnerable algorithms by 2035. + +The scale of this migration is staggering. It affects TLS configurations, code signing, certificate authorities, VPNs, database encryption, API authentication, and every other system that relies on public-key cryptography. Without a CBOM, organizations have no systematic way to identify what needs to change. 
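As a sketch of what that triage can look like, the snippet below scans a simplified CBOM for algorithms built on quantum-vulnerable public-key primitives. The `cryptographic-asset` component type and `cryptoProperties` key follow CycloneDX 1.6, but the document fragment, the algorithm naming, and the prefix-matching heuristic are illustrative assumptions, not a production scanner:

```python
import json

# Public-key primitives vulnerable to quantum attack (integer
# factorization and discrete-log based schemes).
QUANTUM_VULNERABLE = {"RSA", "ECDSA", "ECDH", "DSA", "DH"}

# A hand-written, simplified CBOM fragment. The "cryptographic-asset"
# component type follows CycloneDX 1.6; names and nesting are illustrative.
CBOM_JSON = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.6",
  "components": [
    {"type": "cryptographic-asset", "name": "RSA-2048",
     "cryptoProperties": {"assetType": "algorithm"}},
    {"type": "cryptographic-asset", "name": "AES-256-GCM",
     "cryptoProperties": {"assetType": "algorithm"}},
    {"type": "cryptographic-asset", "name": "ECDH-P256",
     "cryptoProperties": {"assetType": "algorithm"}}
  ]
}
"""

cbom = json.loads(CBOM_JSON)

# Flag any cryptographic asset whose name starts with a vulnerable primitive.
flagged = [
    c["name"]
    for c in cbom["components"]
    if c["type"] == "cryptographic-asset"
    and any(c["name"].startswith(alg) for alg in QUANTUM_VULNERABLE)
]
print(flagged)  # ['RSA-2048', 'ECDH-P256']
```

Note that symmetric algorithms like AES-256 pass through unflagged: Grover's algorithm weakens them only modestly, so the migration effort concentrates on public-key cryptography.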
+
+### Harvest-Now, Decrypt-Later
+
+The most urgent reason to act now — even before quantum computers can break current encryption — is the **harvest-now, decrypt-later (HNDL)** threat. Nation-state adversaries and sophisticated attackers are already capturing encrypted network traffic and storing it, with the expectation that future quantum computers will be able to decrypt it.
+
+Data with long-term confidentiality requirements is particularly at risk:
+
+- Government classified information
+- Medical records (retention rules can require decades of storage)
+- Financial transactions and trade secrets
+- Intellectual property and research data
+
+For this data, the threat window is not "when quantum computers arrive" — it is _now_. A CBOM helps identify which systems handle long-lived sensitive data and which cryptographic algorithms protect it, enabling prioritized migration to quantum-resistant alternatives.
+
+### Cryptographic Agility
+
+Even outside the quantum context, organizations regularly need to retire compromised or deprecated algorithms. SHA-1, MD5, RC4, DES, and TLS 1.0/1.1 have all been deprecated over the past decade, and each deprecation required organizations to find and update every system using the affected algorithm. Without a CBOM, this is an ad-hoc, error-prone process.
+
+**Cryptographic agility** — the ability to quickly swap out cryptographic algorithms across your infrastructure — depends on knowing exactly what is deployed where. A CBOM provides this foundation.
+
+## CBOMs and SBOMs
+
+CBOMs and [SBOMs](/what-is-sbom/) are complementary inventories that address different dimensions of software transparency.
+ +| | SBOM | CBOM | +| -------------------- | -------------------------------------------------------------- | ----------------------------------------------------- | +| **Inventories** | Software components (libraries, packages) | Cryptographic assets (algorithms, keys, certificates) | +| **Primary use case** | Vulnerability management, license compliance | Cryptographic risk assessment, quantum migration | +| **Format** | [SPDX, CycloneDX](/2026/01/15/sbom-formats-cyclonedx-vs-spdx/) | CycloneDX | +| **Answers** | "What's in my software?" | "What crypto does my software use?" | + +CycloneDX supports both in the same document. An organization generating CycloneDX SBOMs can extend them to include cryptographic asset data, producing a unified inventory that covers both component composition and cryptographic posture. + +For organizations managing SBOMs with [sbomify](https://sbomify.com), adding CBOM data to the same CycloneDX documents means cryptographic assets become part of the same management, monitoring, and sharing workflow already in place for software components. + +## Generating a CBOM + +### Automated Discovery + +Several tools support automated cryptographic asset discovery: + +- **[CycloneDX CBOM tools](https://cyclonedx.org/capabilities/cbom/)** — Generate CycloneDX-formatted CBOMs by analyzing dependencies and configurations +- **IBM Quantum Safe tools** — Scan codebases and running systems for cryptographic usage patterns +- **Certificate inventory tools** — Tools like `certbot`, `openssl`, and cloud provider APIs can enumerate certificates across infrastructure + +### Manual Inventory + +For many organizations, the first CBOM begins as a manual inventory: + +1. **TLS configurations** — Audit web servers, load balancers, and API gateways for supported protocol versions and cipher suites +2. **Certificate stores** — Enumerate all certificates, their issuers, expiration dates, and signature algorithms +3. 
**Key management systems** — Inventory HSMs, cloud KMS services (AWS KMS, Azure Key Vault, GCP Cloud KMS), and application-level key stores +4. **Cryptographic library dependencies** — Check your SBOM for libraries like OpenSSL, BoringSSL, libsodium, Bouncy Castle, and their configured algorithms +5. **Data-at-rest encryption** — Document database encryption, disk encryption, and backup encryption configurations + +### Ongoing Maintenance + +Like SBOMs, CBOMs should be treated as living documents. Certificate expirations, algorithm deprecations, and new deployments all require CBOM updates. Integrate CBOM generation into your CI/CD pipeline alongside SBOM generation for continuous visibility. + +## The Regulatory Landscape + +Cryptographic transparency is gaining regulatory attention: + +- **NSA CNSA 2.0** — The Commercial National Security Algorithm Suite 2.0 sets timelines for U.S. national security systems to transition to quantum-resistant algorithms, starting in 2025 +- **[Executive Order 14028](/compliance/eo-14028/)** and subsequent White House memoranda on quantum computing direct federal agencies to inventory cryptographic systems and develop migration plans +- **[EU Cyber Resilience Act](/compliance/eu-cra/)** requires products with digital elements to implement "appropriate" cryptographic protection — documenting that cryptography via CBOMs supports compliance +- **PCI DSS 4.0** requires inventories of cryptographic algorithms and key management practices for payment card environments -## Embracing Quantum-Resistant Cryptography +Organizations that proactively build CBOMs will be ahead of the compliance curve as these requirements mature. -Quantum computing, with its ability to process complex calculations at astonishing speeds, poses a significant threat to the cryptographic algorithms that protect our most sensitive data. The CBOM, developed as part of the CycloneDX project, emerges as a crucial asset for organizations striving to mitigate these risks. 
By providing a structured framework for the management of cryptographic assets, the CBOM enables entities to identify and address vulnerabilities, ensuring a robust defense against quantum computing's challenges. +## Frequently Asked Questions -## A Tool for Comprehensive Asset Management +### What is a CBOM? -At its core, the CBOM serves as a dynamic inventory system, cataloging cryptographic assets such as keys, certificates, and algorithms. This meticulous approach to asset management is more than a best practice; it's a necessity in an era where cryptographic agility—the ability to swiftly adapt and switch between cryptographic primitives—is paramount. The CBOM facilitates this agility, allowing organizations to respond rapidly to vulnerabilities and maintain compliance with evolving security standards. +A CBOM (Cryptography Bill of Materials) is a structured inventory of all cryptographic assets used in a piece of software, including algorithms, key lengths, certificates, protocols, and their usage contexts. It is defined as a capability of the [CycloneDX](https://cyclonedx.org/capabilities/cbom/) SBOM standard and enables organizations to identify cryptographic risks, plan quantum-resistant migrations, and maintain cryptographic agility. -## Guiding the Transition to Quantum-Safe Systems +### How is a CBOM different from an SBOM? -The guide provides a roadmap for organizations preparing for the transition to quantum-safe cryptography. It outlines practical examples, dependencies, and the anatomy of a CBOM, highlighting the significance of cryptographic standards and the management of cryptographic certifications. This resource is indispensable for professionals seeking to fortify their systems against the quantum threat landscape, offering insights into post-quantum cryptography readiness and the identification of weak cryptographic algorithms. +An [SBOM](/what-is-sbom/) inventories software components (libraries, packages, dependencies). 
A CBOM inventories cryptographic assets (algorithms, keys, certificates, protocols). They are complementary: an SBOM tells you what software you have, a CBOM tells you what cryptography that software uses. CycloneDX supports both in the same format. -## A Collaborative Effort +### Why do I need a CBOM now? -The development of the CBOM is a testament to the collaborative spirit of the CycloneDX community and industry experts. This collective endeavor reflects a shared commitment to advancing cybersecurity standards and fostering an ecosystem that is both inclusive and forward-looking. By leveraging the collective wisdom of the global cybersecurity community, the CBOM stands as a beacon of innovation and excellence in the face of quantum computing's challenges. +Quantum computers capable of breaking RSA and elliptic curve cryptography are expected within the next decade. NIST finalized its first post-quantum cryptography standards (ML-KEM, ML-DSA, SLH-DSA) in August 2024. The migration will take years, and the first step is knowing what cryptography you use today. CBOMs provide that inventory. -## Securing Our Digital Future +### What is the harvest-now-decrypt-later threat? -The introduction of the CBOM by CycloneDX marks a pivotal moment in the evolution of cybersecurity. As we stand on the brink of the quantum era, this tool equips organizations with the knowledge and strategies needed to navigate the shifting landscape. Embracing the CBOM is not merely an act of preparedness; it is a decisive step towards securing our digital future, ensuring that our systems and data remain protected in the face of quantum computing's transformative potential. +Harvest-now-decrypt-later (HNDL) is a strategy where adversaries capture encrypted data today with the intention of decrypting it once quantum computers become available. Data with long-term confidentiality requirements (state secrets, medical records, financial data) is particularly at risk. 
CBOMs help identify which systems use vulnerable algorithms so migration can be prioritized. -In conclusion, the CycloneDX's Cryptography Bill of Materials is more than a guide; it's a blueprint for future-proofing our cybersecurity infrastructure. As the digital world braces for the impact of quantum computing, the CBOM shines as a beacon of hope, guiding the way towards a secure, quantum-resistant future. +### How do I generate a CBOM? -You can learn more about CBOMs [here](https://cyclonedx.org/capabilities/cbom/). +[CycloneDX tools](https://cyclonedx.org/capabilities/cbom/) can generate CBOMs by analyzing codebases for cryptographic library usage, certificate stores, and protocol configurations. IBM's Quantum Safe toolkit provides automated scanning. For many organizations, the first CBOM is compiled manually by inventorying TLS configurations, key management systems, and cryptographic library dependencies. diff --git a/content/posts/2024-04-25-openssf-and-openssf-scorecards-bolstering-open-source-security.md b/content/posts/2024-04-25-openssf-and-openssf-scorecards-bolstering-open-source-security.md index 87bae83..062b921 100644 --- a/content/posts/2024-04-25-openssf-and-openssf-scorecards-bolstering-open-source-security.md +++ b/content/posts/2024-04-25-openssf-and-openssf-scorecards-bolstering-open-source-security.md @@ -1,62 +1,166 @@ --- -title: 'OpenSSF and OpenSSF Scorecards: Bolstering Open Source Security' -description: "Introduction to the Open Source Security Foundation and OpenSSF Scorecards, automated tools that assess the security health of open source projects." +title: "What Is OpenSSF? Scorecards, SLSA, and the Open Source Security Ecosystem" +description: "What is OpenSSF? The Open Source Security Foundation coordinates industry-wide efforts to secure open source software. Learn about OpenSSF Scorecards, how to run them, what they measure, and how they connect to SBOMs, SLSA, and supply chain security." 
categories: - education -tags: [openssf, security, open-source] +tags: [openssf, security, open-source, supply-chain] +tldr: "The Open Source Security Foundation (OpenSSF) coordinates the industry's efforts to secure open source software. Its most visible tool — OpenSSF Scorecards — automatically evaluates the security practices of any GitHub project across 18+ checks, producing a 0-10 score. Scorecards help consumers choose trustworthy dependencies and help maintainers identify security gaps." author: display_name: Cowboy Neil login: Cowboy Neil url: https://sbomify.com -author_login: Cowboy Neil -author_url: https://sbomify.com -wordpress_id: 213 -wordpress_url: https://sbomify.com/?p=213 -date: '2024-04-25 20:02:29 +0200' -date_gmt: '2024-04-25 20:02:29 +0200' -comments: [] +faq: + - question: "What is OpenSSF?" + answer: "The Open Source Security Foundation (OpenSSF) is a cross-industry initiative under the Linux Foundation that brings together developers, security professionals, and organizations to improve the security of open source software. It coordinates projects including Scorecards, SLSA, Sigstore, GUAC, Alpha-Omega, and the Best Practices Badge program." + - question: "What are OpenSSF Scorecards?" + answer: "OpenSSF Scorecards is an automated tool that evaluates the security practices of open source projects on GitHub. It runs 18+ checks covering areas like branch protection, code review, dependency management, CI/CD security, fuzzing, and vulnerability disclosure. Each check produces a score from 0 to 10, providing a quick assessment of a project's security posture." + - question: "How do I run OpenSSF Scorecards?" + answer: "You can run Scorecards via the CLI (scorecard --repo=github.com/org/repo), as a GitHub Action on your own repository, or look up pre-computed scores on the Scorecard website (securityscorecards.dev). The GitHub Action can run on every pull request and post results to the repository's security tab." 
+ - question: "How do Scorecards relate to SBOMs?" + answer: "Scorecards help evaluate the security quality of your dependencies, which is complementary to SBOM-based vulnerability monitoring. An SBOM tells you what dependencies you have; Scorecards tell you how well-maintained and secure those dependencies' development practices are. Combining both gives a more complete picture of supply chain risk." + - question: "Is OpenSSF the same as the Linux Foundation?" + answer: "OpenSSF is a project within the Linux Foundation, not a separate organization. It was formed in 2020 by merging the Core Infrastructure Initiative (CII) and the Open Source Security Coalition. Its members include Google, Microsoft, Amazon, Intel, IBM, and many other organizations invested in open source security." +date: 2024-04-25 slug: openssf-and-openssf-scorecards-bolstering-open-source-security --- -## Introducing OpenSSF: A Beacon for Open Source Security +After the [Log4Shell vulnerability](https://nvd.nist.gov/vuln/detail/CVE-2021-44228) exposed how a single widely-used open source library could affect hundreds of thousands of organizations, the technology industry confronted an uncomfortable question: who is responsible for securing the open source software that underpins the modern internet? The answer, increasingly, is everyone — and the **[Open Source Security Foundation (OpenSSF)](https://openssf.org/)** is the organization coordinating that effort. -In today's digital-first landscape, open-source software is the backbone of countless applications across industries. However, this widespread adoption brings challenges, particularly in the realm of security. Enter the Open Source Security Foundation (OpenSSF), a collaborative effort that unites leaders from the most influential projects and companies in the tech world. OpenSSF’s mission is clear: improve the security of open-source software, ensuring it's not just widely used but also well-protected. 
+OpenSSF is a cross-industry initiative under the Linux Foundation that brings together developers, security professionals, and organizations to improve the security of open source software. Formed in 2020 by merging the Core Infrastructure Initiative (CII) and the Open Source Security Coalition, OpenSSF includes members like Google, Microsoft, Amazon, Intel, IBM, and dozens of other companies that depend on open source and have a shared interest in securing it. -### Who's Behind OpenSSF? +## The OpenSSF Ecosystem -OpenSSF was formed by the merger of the Open Source Security Coalition and the Core Infrastructure Initiative, under the umbrella of the Linux Foundation. This powerhouse brings together experts from big names like Google, Microsoft, IBM, and more, all committed to safeguarding the open-source ecosystem through initiatives, security tooling, best practices, and more. +OpenSSF is not a single tool — it is an umbrella for a portfolio of projects that together address different layers of the software supply chain security problem. 
-## Understanding OpenSSF Scorecards: The Security Litmus Test +| Project | What It Does | +| ---------------------------------------------------------------- | ------------------------------------------------------------------------------------- | +| **[Scorecards](https://securityscorecards.dev/)** | Automated security health assessment for open source projects | +| **[SLSA](/2024/08/17/what-is-slsa/)** | Build integrity and provenance framework (Supply chain Levels for Software Artifacts) | +| **[Sigstore](/2024/08/12/what-is-sigstore/)** | Keyless signing and verification infrastructure | +| **[GUAC](https://guac.sh/)** | Graph for Understanding Artifact Composition — aggregates supply chain metadata | +| **[Alpha-Omega](https://openssf.org/community/alpha-omega/)** | Funds security improvements for critical open source projects | +| **[Best Practices Badge](https://www.bestpractices.dev/)** | Self-assessment program for open source project security maturity | +| **[Package Analysis](https://github.com/ossf/package-analysis)** | Detects malicious packages in open source registries | +| **[Allstar](https://github.com/ossf/allstar)** | Enforces security policies on GitHub organizations | -One of the standout tools developed under the OpenSSF banner is the OpenSSF Scorecard. These scorecards are automated tools designed to give a quick, clear assessment of the security health of open source projects. Think of them as a health check-up for software, spotlighting potential vulnerabilities before they can cause major issues. +This guide focuses on **Scorecards** — the most widely used and immediately actionable tool in the OpenSSF portfolio. -### What Do OpenSSF Scorecards Measure? +## OpenSSF Scorecards: How They Work -OpenSSF Scorecards evaluate a variety of security practices across open source projects to ensure they meet high standards of security. 
Key areas of focus include: +[OpenSSF Scorecards](https://securityscorecards.dev/) is an automated tool that evaluates the security practices of open source projects on GitHub. It examines a project's repository configuration, CI/CD setup, dependency management, and contribution practices, and produces a score from 0 to 10 for each check. -- **Security Policies**: Are there clear security guidelines for the project? -- **Vulnerability Management**: How effectively does the project handle known vulnerabilities? -- **Code Review Standards**: Is there a rigorous code review process in place? -- **CI/CD Practices**: How robust are the integration and deployment processes? -- **Dependency Management**: How well does the project manage its software dependencies? +### What Scorecards Measure -Each of these areas is crucial, as vulnerabilities in one can compromise the entire project. By measuring these, the scorecards provide an open, transparent indicator of a project’s security posture. +Scorecards currently run 18+ checks. The most important ones: -### Why Are OpenSSF Scorecards Important? +**Branch Protection** — Is the default branch protected? Are force pushes blocked? Are status checks required before merging? Branch protection prevents unauthorized code from entering the main branch. -The value of OpenSSF Scorecards lies in their ability to standardize the evaluation of open source security. For developers and companies relying on open source projects, scorecards offer a quick snapshot of security risks. This not only helps in making informed decisions about which open source projects to trust but also drives improvements in projects that want to maintain high scores. +**Code Review** — Are pull requests reviewed before merging? Code review is one of the most effective defenses against malicious or accidental introduction of vulnerabilities. -Scorecards are also a boon for project maintainers. 
They serve as a benchmark, helping maintainers understand where their project stands in terms of security and where it can improve. Regular assessments with scorecards encourage ongoing vigilance and continuous enhancement of security measures. +**CI Tests** — Does the project run tests in CI? Automated testing catches regressions that manual review might miss. -## How OpenSSF Scorecards Work +**Dependency Update Tool** — Does the project use Dependabot, Renovate, or a similar tool to keep dependencies current? Outdated dependencies are a common source of vulnerabilities. -Using OpenSSF Scorecards is straightforward. They are publicly accessible and can be run against any open source project on GitHub. The results provide a score for each checked category, alongside recommendations for improvement. This makes the scorecards an essential tool not just for evaluating security but also for educating project maintainers and contributors on best practices in software security. +**Pinned Dependencies** — Are CI/CD workflow dependencies pinned to specific versions (by hash, not tag)? Unpinned dependencies in GitHub Actions workflows are a supply chain attack vector — an attacker who compromises a dependency can inject code into every workflow that references it by tag. -For those who are keen on delving deeper into the specifics of OpenSSF Scorecards and their implications for open-source security, [Nerding Out with Cowboy Neil]( Neil) offers a detailed deep dive. +**Vulnerabilities** — Does the project have unaddressed vulnerabilities in the OSV database? This checks whether known security issues are being resolved. + +**Security Policy** — Does the project have a `SECURITY.md` file describing how to report vulnerabilities? A clear disclosure process encourages responsible reporting. + +**Signed Releases** — Are releases cryptographically signed? Signed releases let consumers verify artifact integrity. 
This is where [Sigstore](/2024/08/12/what-is-sigstore/) and Scorecards intersect. + +**Fuzzing** — Does the project participate in OSS-Fuzz or use other fuzzing infrastructure? Fuzzing finds bugs that unit tests typically miss. + +**SAST** — Does the project run static analysis security testing tools? SAST catches common vulnerability patterns in code. + +**Token Permissions** — Are GitHub Actions workflow tokens scoped to minimum necessary permissions? Overly broad tokens increase the blast radius of a compromised workflow. + +**Dangerous Workflow** — Does the project have CI workflows that run untrusted code in a privileged context (e.g., `pull_request_target` with checkout)? This is a known attack vector for GitHub Actions. + +### Running Scorecards + +**CLI:** + +```bash +# Install +go install github.com/ossf/scorecard/v5/cmd/scorecard@latest + +# Run against any public GitHub repo +scorecard --repo=github.com/apache/log4j +``` + +**GitHub Action (for your own repos):** + +```yaml +- uses: ossf/scorecard-action@v2 + with: + results_file: results.sarif + publish_results: true +``` + +The GitHub Action posts results to the repository's Security tab and can be configured to run on every push or PR. + +**Website:** + +Pre-computed scores for popular projects are available at [securityscorecards.dev](https://securityscorecards.dev/). + +### Interpreting Results + +Scorecard results are most useful when viewed at the individual check level, not just as an aggregate score. A project might score 9/10 overall but have a 0 on Branch-Protection — which is a critical gap regardless of the overall score. + +When evaluating a dependency, focus on the checks that matter most for your threat model: + +- **If you're concerned about supply chain attacks:** Prioritize Branch-Protection, Code-Review, Pinned-Dependencies, Signed-Releases, and Dangerous-Workflow. 
+- **If you're concerned about vulnerability management:** Prioritize Vulnerabilities, Dependency-Update-Tool, and Maintained. +- **If you're concerned about build integrity:** Prioritize CI-Tests, SAST, and Fuzzing. + +## Scorecards and SBOMs + +Scorecards and [SBOMs](/what-is-sbom/) address different dimensions of dependency risk, and they are most powerful when used together. + +An SBOM tells you _what_ dependencies you have and [whether they contain known vulnerabilities](/2026/02/01/sbom-scanning-vulnerability-detection/). Scorecards tell you _how well those dependencies are maintained_ — which is a leading indicator of future vulnerability risk. A dependency with no known CVEs but a Scorecard score of 2/10 (no code review, no CI tests, no branch protection) is a risk that an SBOM alone would not flag. + +**Practical workflow:** + +1. Generate an [SBOM](/what-is-sbom/) for your project to identify all dependencies +2. Run Scorecards against your critical dependencies (or check pre-computed scores on securityscorecards.dev) +3. Use Scorecard results to inform dependency selection: prefer well-maintained dependencies with strong security practices +4. Monitor both SBOMs (for vulnerabilities) and Scorecards (for security practice degradation) over time + +For organizations using [sbomify](https://sbomify.com) for SBOM management, Scorecard data provides a complementary signal: while sbomify tracks known vulnerabilities in your components, Scorecards assess the development practices that determine how quickly those dependencies will respond to future vulnerabilities. 
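+The check-level triage described above is easy to script. A minimal Python sketch, assuming the result shape Scorecard emits as JSON (a top-level `checks` array of objects with `name` and `score` fields — verify the field names against your Scorecard version); the sample result is hypothetical:

```python
# Flag low-scoring Scorecard checks for a chosen threat model.
# The result shape (a "checks" list of {"name", "score"} objects) is
# assumed from Scorecard's JSON output; scores run 0-10, and -1 means
# the check could not be evaluated, so it sorts below any threshold.

SUPPLY_CHAIN_CHECKS = {
    "Branch-Protection", "Code-Review", "Pinned-Dependencies",
    "Signed-Releases", "Dangerous-Workflow",
}

def low_scoring_checks(result, relevant, threshold=5):
    """Return sorted (name, score) pairs for relevant checks below threshold."""
    return sorted(
        (c["name"], c["score"])
        for c in result.get("checks", [])
        if c["name"] in relevant and c["score"] < threshold
    )

# A hypothetical result: a strong aggregate score can still hide a
# critical gap at the individual-check level.
result = {
    "score": 9.0,
    "checks": [
        {"name": "Branch-Protection", "score": 0},
        {"name": "Code-Review", "score": 10},
        {"name": "Pinned-Dependencies", "score": 7},
        {"name": "Fuzzing", "score": 10},
    ],
}
for name, score in low_scoring_checks(result, SUPPLY_CHAIN_CHECKS):
    print(f"{name}: {score}/10")
```

+Here the hypothetical project would pass a casual glance at its 9.0 aggregate but fail the supply-chain triage on Branch-Protection — exactly the gap the aggregate score hides.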
+ +## Scorecards and SLSA + +Scorecards and [SLSA](/2024/08/17/what-is-slsa/) are both OpenSSF projects, and they reinforce each other: + +- Scorecards evaluate whether a project _follows_ good security practices (code review, CI testing, dependency management) +- SLSA verifies that a specific artifact _was built_ through a secure, untampered process + +A project can have a perfect Scorecard but ship a compromised artifact if the build system is attacked. SLSA provenance catches that. Conversely, a project can have SLSA Build L3 provenance but still ship vulnerable code if it lacks code review or CI testing. Scorecards catch that. + +Together, they provide layered defense: Scorecards for development practices, SLSA for build integrity, and [SBOMs](/what-is-sbom/) for component visibility. {{< video-embed video_id="KdgkiWdhpZ8" title="OpenSSF Scorecards Deep Dive" description="A detailed look at OpenSSF Scorecards and their implications for open-source security." >}} -## Conclusion +## Frequently Asked Questions + +### What is OpenSSF? + +The Open Source Security Foundation (OpenSSF) is a cross-industry initiative under the Linux Foundation that brings together developers, security professionals, and organizations to improve the security of open source software. It coordinates projects including Scorecards, [SLSA](/2024/08/17/what-is-slsa/), [Sigstore](/2024/08/12/what-is-sigstore/), GUAC, Alpha-Omega, and the Best Practices Badge program. + +### What are OpenSSF Scorecards? + +[OpenSSF Scorecards](https://securityscorecards.dev/) is an automated tool that evaluates the security practices of open source projects on GitHub. It runs 18+ checks covering areas like branch protection, code review, dependency management, CI/CD security, fuzzing, and vulnerability disclosure. Each check produces a score from 0 to 10. + +### How do I run OpenSSF Scorecards? 
+ +You can run Scorecards via the CLI (`scorecard --repo=github.com/org/repo`), as a [GitHub Action](https://github.com/ossf/scorecard-action) on your own repository, or look up pre-computed scores at [securityscorecards.dev](https://securityscorecards.dev/). + +### How do Scorecards relate to SBOMs? + +Scorecards help evaluate the security quality of your dependencies, which is complementary to [SBOM](/what-is-sbom/)-based vulnerability monitoring. An SBOM tells you what dependencies you have; Scorecards tell you how well-maintained and secure those dependencies' development practices are. Combining both gives a more complete picture of supply chain risk. + +### Is OpenSSF the same as the Linux Foundation? -The OpenSSF, with its comprehensive approach and powerful tools like the scorecards, is leading the charge in fortifying the security of open-source software. In an era where cyber threats are ever-evolving, the importance of such an initiative cannot be overstated. Whether you’re a developer, a project maintainer, or a company leveraging open source, engaging with OpenSSF and utilizing its scorecards can significantly enhance your software's security, ensuring it remains robust and resilient against threats. Embracing these resources is a step forward in fostering a safer, more secure open-source ecosystem. +OpenSSF is a project _within_ the Linux Foundation, not a separate organization. It was formed in 2020 by merging the Core Infrastructure Initiative (CII) and the Open Source Security Coalition. Its members include Google, Microsoft, Amazon, Intel, IBM, and many other organizations invested in open source security. 
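+A practical starting point for the SBOM-plus-Scorecards pairing described above is to enumerate which components in a CycloneDX SBOM resolve to GitHub repositories, so each can be passed to `scorecard --repo=`. A minimal Python sketch — the `components`, `purl`, and `externalReferences` field names follow the CycloneDX JSON spec, but the slug-extraction regex is an illustrative heuristic (not a full purl resolver) and the sample SBOM fragment is hypothetical:

```python
# Map CycloneDX SBOM components to GitHub repository slugs that can be
# passed to `scorecard --repo=github.com/org/repo`. Field names follow
# the CycloneDX JSON spec; the regex is a heuristic, not a purl resolver.
import re

_GITHUB = re.compile(r"github\.com/([\w.-]+)/([\w.-]+?)(?:\.git)?(?:[/@#].*)?$")

def github_repos_from_sbom(sbom):
    """Collect github.com/org/repo slugs from component purls and VCS refs."""
    repos = set()
    for component in sbom.get("components", []):
        candidates = [component.get("purl", "")]
        for ref in component.get("externalReferences", []):
            if ref.get("type") == "vcs":
                candidates.append(ref.get("url", ""))
        for text in candidates:
            match = _GITHUB.search(text)
            if match:
                repos.add(f"github.com/{match.group(1)}/{match.group(2)}")
    return repos

# Hypothetical SBOM fragment for illustration.
sbom = {
    "bomFormat": "CycloneDX",
    "components": [
        {"name": "scorecard",
         "purl": "pkg:golang/github.com/ossf/scorecard/v5@v5.0.0"},
        {"name": "requests",
         "externalReferences": [
             {"type": "vcs", "url": "https://github.com/psf/requests.git"}]},
    ],
}
for repo in sorted(github_repos_from_sbom(sbom)):
    print(repo)
```

+Each slug this yields can then be checked against the pre-computed results on securityscorecards.dev or scanned directly with the Scorecard CLI; components that do not resolve to a GitHub repository simply need manual review.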
diff --git a/content/posts/2024-07-10-understanding-the-eu-cyber-resilience-act-the-role-of-sboms-in-enhancing-cybersecurity.md b/content/posts/2024-07-10-understanding-the-eu-cyber-resilience-act-the-role-of-sboms-in-enhancing-cybersecurity.md index eb4af58..138e150 100644 --- a/content/posts/2024-07-10-understanding-the-eu-cyber-resilience-act-the-role-of-sboms-in-enhancing-cybersecurity.md +++ b/content/posts/2024-07-10-understanding-the-eu-cyber-resilience-act-the-role-of-sboms-in-enhancing-cybersecurity.md @@ -1,71 +1,178 @@ --- -title: 'Understanding the EU Cyber Resilience Act: The Role of SBOMs in Enhancing Cybersecurity' -description: "Comprehensive guide to the EU Cyber Resilience Act explaining SBOM requirements for digital products, implementation best practices, and compliance strategies." +title: "Understanding the EU Cyber Resilience Act: SBOM Requirements and Compliance" +description: "What does the EU Cyber Resilience Act require? The CRA mandates SBOMs, vulnerability handling, and security updates for all products with digital elements sold in the EU. Learn the timeline, product categories, and how to prepare." categories: - compliance -tags: [CRA, security, sbom, standards] +tags: [CRA, security, sbom, standards, eu] +tldr: "The EU Cyber Resilience Act (CRA) — adopted in October 2024 and enforceable from December 2027 — requires manufacturers of products with digital elements to provide SBOMs, handle vulnerabilities throughout the product lifecycle, report actively exploited vulnerabilities to ENISA within 24 hours, and deliver security updates for at least five years. It covers everything from consumer IoT devices to enterprise software."
author: display_name: Cowboy Neil login: Cowboy Neil -author_login: Cowboy Neil -author_url: https://sbomify.com -wordpress_id: 277 -wordpress_url: https://sbomify.com/?p=277 -date: '2024-07-10 09:11:34 +0200' -date_gmt: '2024-07-10 09:11:34 +0200' -comments: [] + url: https://sbomify.com +faq: + - question: "What is the EU Cyber Resilience Act?" + answer: "The EU Cyber Resilience Act (CRA) is EU legislation that establishes mandatory cybersecurity requirements for all products with digital elements placed on the EU market. Adopted in October 2024, it requires manufacturers to implement security by design, maintain SBOMs, handle vulnerabilities throughout the product lifecycle, and provide security updates for at least five years." + - question: "When does the CRA take effect?" + answer: "The CRA entered into force in December 2024. Vulnerability reporting obligations apply from September 2026. The full set of essential requirements, including SBOM obligations, becomes enforceable in December 2027. Manufacturers selling products in the EU should be preparing now." + - question: "Does the CRA require SBOMs?" + answer: "Yes. The CRA requires manufacturers to identify and document the components and dependencies of their products, including by drawing up an SBOM. The SBOM must cover at minimum the top-level dependencies of the product. Machine-readable formats are recommended, and the SBOM must be kept up to date throughout the product's support period." + - question: "Does the CRA apply to open source software?" + answer: "Non-commercial open-source development is exempt. However, open-source software stewards — organizations that systematically provide support for open-source products intended for commercial activities — have lighter obligations, including establishing a cybersecurity policy and cooperating with market surveillance authorities. Open-source software integrated into commercial products is covered through the manufacturer's obligations."
+ - question: "What are the penalties for non-compliance?" + answer: "The CRA provides for fines of up to 15 million euros or 2.5% of worldwide annual turnover (whichever is higher) for non-compliance with essential cybersecurity requirements. Non-compliance with other obligations can result in fines of up to 10 million euros or 2% of turnover." +date: 2024-07-10 slug: understanding-the-eu-cyber-resilience-act-the-role-of-sboms-in-enhancing-cybersecurity --- -In an era where digital transformation is the norm, cybersecurity has become a paramount concern for organizations and governments worldwide. The European Union (EU) is at the forefront of this endeavor with its Cyber Resilience Act, a landmark legislation designed to bolster the cybersecurity of products with digital elements. A critical component of this act is the Software Bill of Materials (SBOM), a tool that can significantly enhance transparency, security, and resilience in the software supply chain. In this post, we'll delve into the EU Cyber Resilience Act with a particular focus on the importance and implications of SBOMs. +In October 2024, the European Union adopted the **Cyber Resilience Act (CRA)** — the most ambitious cybersecurity product regulation ever enacted. For the first time, a major market is requiring that _all_ products with digital elements meet mandatory cybersecurity requirements, including the maintenance of Software Bills of Materials, vulnerability handling processes, and long-term security update commitments. The CRA affects every company that places software or connected hardware on the EU market, from consumer IoT device makers to enterprise software vendors. -## What is the EU Cyber Resilience Act? +For organizations already invested in [SBOM practices](/what-is-sbom/), the CRA validates what they have been building. For those that have not started, the compliance clock is now ticking. 
For a practitioner perspective on CRA compliance, see our [interview with EU CRA expert Sarah Fluchs](/2026/01/06/cra-explained-cyber-resilience-act-for-device-manufacturers/). -The EU Cyber Resilience Act aims to establish a unified cybersecurity framework across the EU, ensuring that digital products and services meet stringent security requirements. This legislation mandates that manufacturers, developers, and distributors of digital products, including hardware and software, adhere to comprehensive cybersecurity standards. The act covers a wide range of products, from consumer devices to industrial control systems, emphasizing the need for robust security measures throughout the product lifecycle. +## Timeline -## The Importance of SBOMs +The CRA's obligations phase in over a three-year transition period: -A Software Bill of Materials (SBOM) is a detailed inventory of all components, libraries, and modules that make up a software product. It provides a transparent view of the software supply chain, allowing organizations to identify and address vulnerabilities more effectively. SBOMs play a crucial role in the EU Cyber Resilience Act for several reasons: +| Date | Milestone | +| ------------------ | ----------------------------------------------------------------------------------------------------------------------------------------- | +| **October 2024** | CRA adopted by European Parliament and Council | +| **December 2024** | CRA entered into force (published in Official Journal) | +| **September 2026** | Vulnerability reporting obligations apply — manufacturers must report actively exploited vulnerabilities to ENISA within 24 hours | +| **December 2027** | Full essential requirements apply, including SBOM obligations, security by design, conformity assessment, and security update commitments | -### 1. **Enhanced Transparency** +Organizations should not wait until 2027.
Building SBOM infrastructure, establishing vulnerability handling processes, and ensuring products meet security-by-design requirements takes time. Starting now provides a buffer for the organizational and technical changes required. -SBOMs offer unparalleled visibility into the components used in software products. This transparency helps organizations understand the origins of each component, track changes, and identify potential security risks. By knowing exactly what is in their software, organizations can make informed decisions about managing and mitigating vulnerabilities. +## What the CRA Covers -### 2. **Improved Vulnerability Management** +### Scope: Products with Digital Elements -With an SBOM, organizations can quickly identify which components are affected by newly discovered vulnerabilities. This rapid identification is crucial for timely patching and mitigation, reducing the window of opportunity for attackers. The EU Cyber Resilience Act emphasizes the need for proactive vulnerability management, and SBOMs are a vital tool in achieving this goal. +The CRA applies to **all products with digital elements** placed on the EU market. This includes: -### 3. **Supply Chain Security** +- **Software** — standalone applications, operating systems, firmware, mobile apps +- **Connected hardware** — IoT devices, routers, smart home devices, industrial controllers +- **Components** — software libraries and hardware components intended for integration into other products -The complexity of modern software often involves multiple third-party components, each potentially introducing security risks. SBOMs enable organizations to assess the security posture of their entire supply chain, ensuring that all components meet the required security standards. This holistic view is essential for maintaining the integrity of digital products and services. +The CRA uses three product categories based on risk: -### 4. 
**Regulatory Compliance** +**Default category** — The majority of products. Manufacturers can self-assess compliance (no third-party audit required). Examples: photo editing software, smart speakers, hard drives. -Adhering to the EU Cyber Resilience Act requires organizations to demonstrate compliance with its security requirements. SBOMs provide a clear and auditable record of the components used in software products, simplifying the compliance process. Organizations can use SBOMs to prove that they have taken the necessary steps to secure their software and comply with the legislation. +**Important products (Class I)** — Products with a higher cybersecurity risk. Compliance requires either a harmonized standard or third-party assessment. Examples: password managers, VPNs, routers, operating systems. -## Implementing SBOMs: Best Practices +**Important products (Class II)** — Higher-risk products requiring mandatory third-party conformity assessment. Examples: hypervisors, firewalls, tamper-resistant microcontrollers, industrial IoT gateways. -To fully leverage the benefits of SBOMs, organizations should adopt best practices for their implementation and management: +**Critical products** — The highest risk category. Requires European cybersecurity certification. Examples: hardware security modules, smart meter gateways, smartcard devices. -### 1. **Automate SBOM Generation** +### Exemptions -Manual creation of SBOMs can be time-consuming and error-prone. Automated tools can streamline the process, ensuring accuracy and completeness. Automation also facilitates continuous updating of SBOMs as software components change. +The CRA does _not_ apply to: -### 2. 
**Integrate SBOMs into DevSecOps** +- Products already covered by sector-specific EU regulations (medical devices, automotive, aviation) +- Non-commercial open-source software developed outside a commercial context +- Cloud services and SaaS (covered by NIS2 instead) -Incorporating SBOMs into DevSecOps practices ensures that security is considered at every stage of the software development lifecycle. This integration helps identify and address vulnerabilities early in the development process, reducing the risk of security issues in the final product. +## Essential Cybersecurity Requirements -### 3. **Educate and Train Staff** +The CRA mandates a set of essential requirements that products must meet. The most consequential for software producers: -Organizations should invest in training their staff on the importance of SBOMs and how to use them effectively. This education fosters a security-first mindset and ensures that employees are equipped to contribute to the organization's cybersecurity efforts. +### Security by Design -### 4. **Collaborate with Suppliers** +Products must be designed and developed with security in mind from the outset: -Engage with suppliers to ensure that they provide SBOMs for their components. Collaboration with suppliers enhances the overall security of the software supply chain and helps build a culture of transparency and trust. +- No known exploitable vulnerabilities at time of release +- Secure default configuration (no default passwords, minimum necessary privileges) +- Protection of data confidentiality and integrity +- Availability and resilience, including protection against denial of service -## Conclusion +### SBOM Requirements -The EU Cyber Resilience Act represents a significant step forward in enhancing cybersecurity for digital products and services. SBOMs are a cornerstone of this legislation, providing the transparency and insight needed to manage vulnerabilities and secure the software supply chain effectively. 
By embracing SBOMs and following best practices for their implementation, organizations can not only comply with the EU Cyber Resilience Act but also build a more resilient and secure digital ecosystem. +The CRA explicitly requires manufacturers to **identify and document components and dependencies**, including by drawing up a Software Bill of Materials. Key aspects: -At sbomify, we are committed to helping organizations navigate the complexities of SBOMs and achieve their cybersecurity goals. Contact us today to learn how we can support your journey towards enhanced cyber resilience. +- The SBOM must cover at minimum the top-level dependencies of the product +- Machine-readable format is recommended (e.g., [CycloneDX or SPDX](/2026/01/15/sbom-formats-cyclonedx-vs-spdx/)) +- The SBOM must be maintained and updated throughout the product's support period +- The SBOM is part of the technical documentation that must be available to market surveillance authorities + +For organizations already generating SBOMs in CI/CD pipelines and managing them through platforms like [sbomify](https://sbomify.com), CRA SBOM compliance is largely a matter of formalizing existing practices. For those starting from scratch, see our [guide to demystifying SBOMs](/2024/04/03/demystifying-sboms-the-backbone-of-modern-software-security/) and [SBOM generation guides](/guides/). 
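In practice, keeping the SBOM current across the support period means generating it on every build rather than treating it as a one-off document. A minimal sketch of a GitHub Actions workflow using the sbomify action mentioned above; note that the version pin, secret name, and inputs shown here are illustrative assumptions, not a definitive configuration:

```yaml
# Illustrative workflow: regenerate the SBOM on every tagged release so the
# CRA technical documentation always reflects the shipped artifact.
name: release-sbom
on:
  push:
    tags: ["v*"]
jobs:
  sbom:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Generate and upload SBOM
        uses: sbomify/sbomify-action@master   # pin a released version in real use
        env:
          TOKEN: ${{ secrets.SBOMIFY_TOKEN }} # assumed secret name
          LOCK_FILE: package-lock.json        # assumed input for a Node.js project
```

Because the workflow runs at build time, the resulting SBOM reflects the fully resolved dependency tree of the artifact that actually ships, which is what market surveillance authorities will expect the technical documentation to describe.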
+ +### Vulnerability Handling + +Manufacturers must establish and maintain vulnerability handling processes throughout the product's support period: + +- **Coordinated vulnerability disclosure (CVD)** — A documented process for receiving and handling vulnerability reports from external researchers +- **Security updates** — Free security patches delivered without undue delay for the entire support period (minimum five years from market placement) +- **Vulnerability monitoring** — Active monitoring for vulnerabilities in product components, including third-party and open-source dependencies + +This is where SBOMs and [vulnerability scanning](/2026/02/01/sbom-scanning-vulnerability-detection/) become essential. Monitoring for newly disclosed [CVEs](/2025/12/18/cve-vulnerability-explained/) — especially those in the [CISA KEV catalog](/2025/12/30/what-is-kev-cisa-known-exploited-vulnerabilities/) — requires knowing exactly what components are in your product. An SBOM provides that inventory; vulnerability scanning tools provide the monitoring. + +### Vulnerability Reporting + +Starting September 2026, manufacturers must report: + +- **Actively exploited vulnerabilities** — to ENISA (the EU Agency for Cybersecurity) within **24 hours** of becoming aware, with a full report within 72 hours +- **Severe incidents** — to ENISA and affected users without undue delay + +This reporting obligation applies even to vulnerabilities in third-party components used in the product. Maintaining current SBOMs is the practical prerequisite for meeting this requirement — you cannot report a vulnerability in a component you do not know you use. + +## Open Source and the CRA + +The CRA's treatment of open source has been one of the most debated aspects of the legislation. + +**Non-commercial open source is exempt.** Open-source software developed and distributed without commercial intent falls outside the CRA's scope. 
Individual developers contributing to open-source projects on a voluntary basis are not manufacturers under the CRA. + +**Open-source software stewards** — organizations like the Apache Software Foundation, the Eclipse Foundation, or the Linux Foundation that systematically support open-source products used in commercial contexts — have lighter obligations. They must establish a cybersecurity policy, cooperate with market surveillance authorities, and document vulnerabilities, but they do not bear the full conformity assessment obligations of manufacturers. + +**Manufacturers remain responsible** for the open-source components they integrate into their commercial products. If a manufacturer ships a product containing an open-source library with a known vulnerability, the manufacturer — not the open-source project — bears the CRA compliance obligation. + +## Implementing CRA Compliance + +### Step 1: Inventory Your Products + +Identify all products with digital elements that you place on the EU market. Classify each into the appropriate risk category (default, important, critical). + +### Step 2: Establish SBOM Practices + +Generate SBOMs for all covered products. Integrate SBOM generation into your CI/CD pipeline using tools like the [sbomify GitHub Action](https://github.com/sbomify/sbomify-action/), Syft, or Trivy so SBOMs are produced at every build and kept current. Use [sbomify](https://sbomify.com) to manage, monitor, and share SBOMs across your product portfolio. + +### Step 3: Build Vulnerability Handling Processes + +Establish a coordinated vulnerability disclosure process. Set up [continuous vulnerability monitoring](/2026/02/01/sbom-scanning-vulnerability-detection/) against your SBOMs. Prepare the internal workflow for the 24-hour reporting obligation starting September 2026. + +### Step 4: Ensure Security by Design + +Review products against the CRA's essential requirements. 
Address default passwords, unnecessary network exposure, and other common security anti-patterns. Document your secure development lifecycle. + +### Step 5: Prepare Technical Documentation + +The CRA requires technical documentation including risk assessments, SBOM, design and development documentation, conformity assessment records, and EU declaration of conformity. Begin compiling this documentation now. + +## CRA and Other Frameworks + +The CRA intersects with several existing compliance frameworks: + +- **[Executive Order 14028](/compliance/eo-14028/)** — U.S. federal SBOM requirements are complementary; organizations meeting EO 14028 requirements are well-positioned for CRA compliance +- **[CISA minimum elements](/compliance/cisa-minimum-elements/)** — The CISA SBOM minimum elements align with CRA SBOM expectations +- **NIS2 Directive** — Covers operators of essential and important services (overlaps with CRA for some product categories) +- **[SLSA](/2024/08/17/what-is-slsa/)** and [in-toto](/2024/08/14/what-is-in-toto/) — Build provenance and supply chain integrity frameworks that support CRA's security-by-design requirements +- **CE marking** — Products meeting CRA requirements will carry the CE marking for cybersecurity + +## Frequently Asked Questions + +### What is the EU Cyber Resilience Act? + +The EU Cyber Resilience Act (CRA) is EU legislation that establishes mandatory cybersecurity requirements for all products with digital elements placed on the EU market. Adopted in October 2024, it requires manufacturers to implement security by design, maintain [SBOMs](/what-is-sbom/), handle vulnerabilities throughout the product lifecycle, and provide security updates for at least five years. + +### When does the CRA take effect? + +The CRA entered into force in December 2024. Vulnerability reporting obligations apply from September 2026. The full set of essential requirements, including SBOM obligations, becomes enforceable in September 2027. 
Manufacturers selling products in the EU should be preparing now. + +### Does the CRA require SBOMs? + +Yes. The CRA requires manufacturers to identify and document the components and dependencies of their products, including by drawing up a Software Bill of Materials. The SBOM must cover at minimum the top-level dependencies. Machine-readable formats ([CycloneDX, SPDX](/2026/01/15/sbom-formats-cyclonedx-vs-spdx/)) are recommended. + +### Does the CRA apply to open source software? + +Non-commercial open-source development is exempt. However, open-source software stewards have lighter obligations, including establishing a cybersecurity policy and cooperating with market surveillance authorities. Open-source software integrated into commercial products is covered through the manufacturer's obligations. + +### What are the penalties for non-compliance? + +The CRA provides for fines of up to 15 million euros or 2.5% of worldwide annual turnover (whichever is higher) for non-compliance with essential cybersecurity requirements. Non-compliance with other obligations can result in fines of up to 10 million euros or 2% of turnover. diff --git a/content/posts/2024-07-30-what-is-lock-file-drift.md b/content/posts/2024-07-30-what-is-lock-file-drift.md index 1be5b60..d8e2727 100644 --- a/content/posts/2024-07-30-what-is-lock-file-drift.md +++ b/content/posts/2024-07-30-what-is-lock-file-drift.md @@ -1,50 +1,190 @@ --- -title: "Understanding Lock File Drift: A Hidden Risk in Dependency Management" -description: "What lock file drift is, why it causes inconsistent builds and security vulnerabilities, and best practices to detect and prevent it in your projects." -author: - display_name: Cowboy Neil -author_login: Cowboy Neil -date: '2024-07-31 09:00:35 +0200' +title: "What Is Lock File Drift? A Hidden Risk in Dependency Management" +description: "What is lock file drift? 
When your dependency manifest and lock file fall out of sync, builds become unreproducible and SBOMs become inaccurate. Learn how to detect, prevent, and fix lock file drift across npm, Python, Go, Rust, and more." categories: - education -tags: [dependencies, security, devops] -comments: [] +tags: [dependencies, security, devops, sbom] +tldr: "Lock file drift occurs when your dependency manifest (package.json, requirements.txt, go.mod) and its lock file (package-lock.json, poetry.lock, go.sum) fall out of sync. The result: non-reproducible builds, silent security regressions, and SBOMs that don't reflect what's actually running. The fix is simple — strict install commands in CI and pre-commit hooks that catch drift before it reaches production." +author: + display_name: Cowboy Neil + login: Cowboy Neil + url: https://sbomify.com +faq: + - question: "What is lock file drift?" + answer: "Lock file drift is the condition where a project's dependency lock file (such as package-lock.json, poetry.lock, or Cargo.lock) is out of sync with its primary dependency manifest (such as package.json, pyproject.toml, or Cargo.toml). This means the versions recorded in the lock file do not match the constraints in the manifest, leading to unreproducible builds and potential security issues." + - question: "Why does lock file drift matter for security?" + answer: "Lock file drift can silently downgrade dependencies to versions with known vulnerabilities, or allow untested versions to enter production. It also makes SBOMs inaccurate — if your SBOM is generated from a drifted lock file, it does not reflect what is actually installed in your application, undermining vulnerability monitoring and compliance." + - question: "How do I detect lock file drift?" 
+ answer: "Most package managers offer strict install commands that fail when drift is detected: npm ci for Node.js, poetry check --lock for Python, cargo check --locked for Rust, and go mod tidy followed by checking for changes in Go. Run these in CI to catch drift automatically." + - question: "How do I prevent lock file drift?" + answer: "Use strict install commands in CI (npm ci instead of npm install), add pre-commit hooks that regenerate the lock file and fail if it changes, require lock file updates in pull request review checklists, and use automated dependency update tools like Dependabot or Renovate that update both files together." +date: 2024-07-30 slug: what-is-lock-file-drift --- -In the world of software development, managing dependencies is crucial for ensuring the stability and security of applications. One often overlooked aspect of this process is the phenomenon known as "lock file drift," where the dependency lock file is out of sync with the primary dependency file. This misalignment can lead to inconsistent builds, security vulnerabilities, and compatibility issues, ultimately compromising the integrity and functionality of a project. By understanding the causes of lock file drift and implementing best practices to detect and prevent it, development teams can maintain consistent, secure, and reliable builds. +A developer adds a new dependency to `package.json`, runs the application locally, confirms it works, and pushes the change. But they forget to run `npm install` to update `package-lock.json`. CI passes because it uses `npm install` (which silently regenerates the lock file) instead of `npm ci` (which would fail on the mismatch). The application deploys with a different set of dependency versions than anyone tested against. This is **lock file drift** — and it is one of the most common, least visible risks in modern dependency management. + +## What Is a Lock File?
+ +Every modern package manager uses two files to manage dependencies: + +1. **A manifest** — the file where developers declare what they need (`package.json`, `pyproject.toml`, `Cargo.toml`, `go.mod`, `Gemfile`, `composer.json`). Manifests typically specify version _ranges_ (e.g., `^1.2.0`), giving the package manager flexibility to resolve compatible versions. + +2. **A lock file** — the file generated by the package manager that records the _exact_ versions resolved for every direct and transitive dependency (`package-lock.json`, `poetry.lock`, `Cargo.lock`, `go.sum`, `Gemfile.lock`, `composer.lock`, `pnpm-lock.yaml`). Lock files ensure that every environment installs identical versions. + +The manifest says _what you want_. The lock file says _what you got_. When these two diverge, you have lock file drift. + +## How Drift Happens + +Lock file drift is usually accidental. The most common causes: + +**Manual manifest edits without reinstalling.** A developer adds or bumps a dependency version in the manifest but does not run the package manager to regenerate the lock file. The manifest now declares constraints that the lock file does not satisfy. + +**Merge conflicts resolved incorrectly.** Two branches modify dependencies independently. When merged, the manifest may reflect both changes, but the lock file — which is often large, machine-generated, and difficult to review — may be resolved incorrectly or left in an inconsistent state. + +**Automated tools that update one file but not the other.** CI scripts, Dockerfiles, or custom tooling may modify the manifest (e.g., bumping a version) without running the corresponding install command to propagate the change to the lock file. + +**Different package manager versions.** Different developers or CI environments running different versions of the same package manager can produce structurally different lock files from the same manifest, even when the resolved versions are identical. 
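Concretely, a drifted state looks like this (the package name and versions are illustrative): the manifest has been bumped to a patched range, but the lock file still records the old resolution:

```
# package.json (manifest): asks for the patched range
"dependencies": { "example-lib": "^2.1.0" }

# package-lock.json (lock file): still pins the old resolution
"node_modules/example-lib": { "version": "2.0.3" }
```

A strict install command such as `npm ci` fails on this state with a lock-file mismatch error; a lenient `npm install` silently rewrites the lock file instead, hiding the problem.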
+ +## Why Lock File Drift Is Dangerous + +### Non-Reproducible Builds + +The entire purpose of a lock file is build reproducibility. When drift occurs, the lock file no longer accurately describes what gets installed. Different environments may resolve different versions depending on when they run, what's available in the registry, and which resolution strategy the package manager uses. The result: "works on my machine" failures, intermittent test breakage, and production behavior that nobody tested. + +### Silent Security Regressions + +Lock file drift can prevent security patches from being applied _or_ introduce vulnerable versions without anyone noticing. + +Consider a scenario where the manifest is updated to require `library >= 2.1.0` (which patches a [CVE](/2025/12/18/cve-vulnerability-explained/)), but the lock file still pins `library@2.0.3` (the vulnerable version). If CI uses a lenient install command, it may honor the lock file and install the vulnerable version. The developer believes the patch is applied; it is not. + +The reverse is also dangerous: the lock file may resolve to a newer version that introduces breaking changes or new vulnerabilities that the manifest's range technically allows but that no one has tested. + +### Inaccurate SBOMs + +This is where lock file drift intersects directly with [software supply chain security](/2025/12/26/software-supply-chain-management/). When you generate an [SBOM](/what-is-sbom/) from your project, most SBOM generation tools read the lock file to determine the exact versions of your dependencies. If the lock file is drifted, the SBOM is wrong — it lists versions that do not match what is actually installed in the built artifact. 
+ +An inaccurate SBOM undermines every downstream process that depends on it: [vulnerability scanning](/2026/02/01/sbom-scanning-vulnerability-detection/) misses real exposures or flags false positives, [KEV monitoring](/2025/12/30/what-is-kev-cisa-known-exploited-vulnerabilities/) operates on stale data, and compliance attestations become unreliable. For organizations using [sbomify](https://sbomify.com) or similar platforms to manage and monitor SBOMs, lock file drift is a data quality problem at the source. + +## Detecting Drift by Ecosystem + +Each package manager has mechanisms to detect lock file drift. The key principle: **use strict install commands in CI that fail when the lock file doesn't match the manifest.** + +### Node.js (npm / Yarn / pnpm) + +```bash +# npm: fails if package-lock.json is out of sync with package.json +npm ci + +# Yarn: fails if yarn.lock needs updating +yarn install --frozen-lockfile + +# pnpm: fails if pnpm-lock.yaml needs updating +pnpm install --frozen-lockfile +``` + +`npm ci` is the single most important command for preventing Node.js lock file drift. Unlike `npm install`, it does not modify the lock file — it installs exactly what the lock file specifies, and fails if the lock file is inconsistent with the manifest. + +### Python (pip / Poetry / Pipenv) + +```bash +# Poetry: fails if poetry.lock is out of date +poetry check --lock + +# Pipenv: verify lock file integrity +pipenv verify + +# pip with hashes: fails if installed packages don't match pinned hashes +pip install --require-hashes -r requirements.txt +``` + +### Rust (Cargo) + +```bash +# Check if Cargo.lock is up to date (in CI) +cargo check --locked +``` + +The `--locked` flag tells Cargo to refuse to update the lock file. If `Cargo.toml` and `Cargo.lock` are out of sync, the command fails. 
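### PHP (Composer)

A sketch assuming Composer's documented `validate` behavior: `composer validate` warns when `composer.lock` is out of date with `composer.json`, and `--strict` turns those warnings into a non-zero exit code suitable for CI:

```bash
# Fails in strict mode if composer.lock is out of sync with composer.json
composer validate --strict
```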
+ +### Go + +```bash +# Tidy modules and check for changes +go mod tidy +git diff --exit-code go.sum +``` + +Go does not have a single "strict mode" flag, but running `go mod tidy` followed by checking whether `go.sum` changed is the standard CI pattern. + +### Ruby (Bundler) + +```bash +# Fails if Gemfile.lock needs updating +bundle install --frozen +``` + +## Preventing Drift + +### CI Pipeline Checks + +The most effective prevention is a CI step that runs the strict install command and fails the build on drift. This should be the _first_ step in your pipeline — before tests, before linting, before anything that depends on installed packages. + +```yaml +# Example: GitHub Actions step for Node.js +- name: Check for lock file drift + run: npm ci +``` + +### Pre-Commit Hooks + +Add a pre-commit hook that regenerates the lock file and checks for changes: + +```bash +#!/bin/sh +# Regenerate lock file and fail if it changed +npm install --package-lock-only +git diff --exit-code package-lock.json || { + echo "Lock file drift detected. Run 'npm install' and commit the updated lock file." + exit 1 +} +``` + +### Automated Dependency Updates + +Tools like [Dependabot](https://docs.github.com/en/code-security/dependabot) and [Renovate](https://docs.renovatebot.com/) update both the manifest and lock file together in a single pull request, eliminating the most common source of drift. -## What is a Lock File? +### Pull Request Review Discipline -Before diving into lock file drift, it's essential to understand the purpose of lock files. In most modern package managers, a lock file (such as `package-lock.json` for npm or `Pipfile.lock` for Pipenv) records the exact versions of dependencies installed in a project. This ensures that all environments (development, staging, production) use the same dependency versions, leading to consistent and reproducible builds. +Require that any PR modifying a dependency manifest also includes the corresponding lock file update. 
Code review tools can flag PRs that change `package.json` without changing `package-lock.json`. -## The Emergence of Lock File Drift +## Lock File Drift and SBOM Accuracy -Lock file drift occurs when the lock file becomes unsynchronized with the primary dependency file (`package.json`, `Pipfile`, `requirements.txt` etc.). This misalignment can happen due to several reasons: +For organizations subject to [software supply chain regulations](/compliance/eo-14028/) or generating SBOMs for compliance with the [EU Cyber Resilience Act](/compliance/eu-cra/), lock file drift is not just a developer inconvenience — it is a compliance risk. -1. **Manual Editing:** Developers manually update the primary dependency file without running the package manager to regenerate the lock file. -2. **Merge Conflicts:** In a collaborative environment, different branches might modify dependencies, leading to conflicts that are not correctly resolved in both files. -3. **Automated Tools:** Continuous integration (CI) tools or scripts might update dependencies but fail to refresh the lock file accordingly. +An SBOM generated from a drifted lock file is effectively a false attestation: it claims the software contains specific component versions when it may actually contain different ones. This matters for: -## Consequences of Lock File Drift +- **[Vulnerability monitoring](/2026/02/01/sbom-scanning-vulnerability-detection/)** — Scanners checking your SBOM against [CVE](/2025/12/18/cve-vulnerability-explained/) and [KEV](/2025/12/30/what-is-kev-cisa-known-exploited-vulnerabilities/) databases will produce incorrect results if the SBOM doesn't match the deployed artifact. +- **Audit trails** — Compliance frameworks expect SBOMs to accurately represent what is shipped. A drifted SBOM fails this requirement. +- **Incident response** — When a new vulnerability is disclosed, you need to know _exactly_ what versions are in your software. A drifted lock file makes this impossible. 
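In CI, the ordering that protects all three is to verify lock file consistency before the SBOM step ever runs. A minimal GitHub Actions sketch for a Node.js project; the SBOM generator shown is an illustrative choice, and any lock-file-based tool fits:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Fail fast: npm ci refuses to run if package-lock.json has drifted
      - run: npm ci
      # Only now generate the SBOM, from a lock file known to be consistent
      - uses: anchore/sbom-action@v0   # illustrative generator choice
        with:
          format: cyclonedx-json
```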
-Lock file drift can introduce significant risks and challenges to a project: +The fix is straightforward: ensure that CI enforces lock file consistency _before_ the SBOM generation step. If the lock file is accurate, the SBOM will be accurate. -1. **Inconsistent Builds:** When the lock file and the primary dependency file are out of sync, different environments might install varying versions of dependencies, leading to inconsistencies. This makes debugging difficult and can result in unexpected behavior in production. +## Frequently Asked Questions -2. **Security Vulnerabilities:** Lock file drift can prevent critical security updates from being applied. If the primary dependency file specifies a newer, secure version of a package but the lock file remains outdated, the application could be exposed to known vulnerabilities. +### What is lock file drift? -3. **Compatibility Issues:** Dependencies often rely on specific versions of other packages. A drifted lock file might cause version mismatches, leading to compatibility issues and potential runtime errors. +Lock file drift is the condition where a project's dependency lock file (such as `package-lock.json`, `poetry.lock`, or `Cargo.lock`) is out of sync with its primary dependency manifest (such as `package.json`, `pyproject.toml`, or `Cargo.toml`). This means the versions recorded in the lock file do not match the constraints in the manifest, leading to unreproducible builds and potential security issues. -## Detecting and Preventing Lock File Drift +### Why does lock file drift matter for security? -To mitigate the risks associated with lock file drift, teams can adopt several best practices: +Lock file drift can silently downgrade dependencies to versions with known vulnerabilities, or allow untested versions to enter production. 
It also makes [SBOMs](/what-is-sbom/) inaccurate — if your SBOM is generated from a drifted lock file, it does not reflect what is actually installed in your application, undermining vulnerability monitoring and compliance. -1. **Regular Audits:** Periodically check that the lock file is in sync with the primary dependency file. Tools like `npm audit` or `pipenv check` can help identify discrepancies and vulnerabilities. +### How do I detect lock file drift? -2. **Automated CI Checks:** Integrate checks into your CI pipeline to ensure the lock file is updated whenever the primary dependency file changes. This can be achieved by running commands like `npm install` or `pipenv install` during the CI process. +Most package managers offer strict install commands that fail when drift is detected: `npm ci` for Node.js, `poetry check --lock` for Python, `cargo check --locked` for Rust, and `go mod tidy` followed by checking for changes in Go. Run these in CI to catch drift automatically. -3. **Consistent Workflow:** Encourage developers to follow a consistent workflow where changes to dependencies are always followed by regenerating the lock file. This can be enforced through pre-commit hooks or other automated scripts. +### How do I prevent lock file drift? -4. **Education and Awareness:** Educate your development team about the importance of +Use strict install commands in CI (`npm ci` instead of `npm install`), add pre-commit hooks that regenerate the lock file and fail if it changes, require lock file updates in pull request review checklists, and use automated dependency update tools like Dependabot or Renovate that update both files together. 
diff --git a/content/posts/2024-08-12-what-is-sigstore.md b/content/posts/2024-08-12-what-is-sigstore.md index 1d2562f..78493d2 100644 --- a/content/posts/2024-08-12-what-is-sigstore.md +++ b/content/posts/2024-08-12-what-is-sigstore.md @@ -1,62 +1,180 @@ --- -title: "Understanding Sigstore: Securing the Software Supply Chain" -description: "Introduction to Sigstore, the open-source project for cryptographic signing of software artifacts using Cosign, Rekor transparency logs, Fulcio CA, and Gitsign." -author: - display_name: Cowboy Neil +title: "What Is Sigstore? Keyless Signing for the Software Supply Chain" +description: "What is Sigstore? The CNCF-graduated project makes cryptographic signing effortless with keyless signing via Fulcio, transparency logging via Rekor, and container signing via Cosign. Learn how Sigstore secures artifacts, SBOMs, and supply chains." categories: - education -tags: [sigstore, security, signing] +tags: [sigstore, security, signing, supply-chain] +tldr: "Sigstore eliminates the biggest barrier to software signing: key management. Using keyless signing, short-lived certificates, and a public transparency log, it lets developers sign and verify artifacts without managing long-lived keys. Sigstore is now a CNCF graduated project and powers signing for npm, PyPI, Kubernetes, and GitHub artifact attestations." +author: + display_name: Cowboy Neil + login: Cowboy Neil + url: https://sbomify.com +faq: + - question: "What is Sigstore?" + answer: "Sigstore is an open-source project that provides free, easy-to-use tools for cryptographically signing, verifying, and protecting software artifacts. Its keyless signing model eliminates the need for developers to manage long-lived cryptographic keys, making signing accessible to every project. Sigstore is a CNCF graduated project." + - question: "What is keyless signing?" 
+ answer: "Keyless signing is Sigstore's approach to cryptographic signing that replaces long-lived keys with short-lived certificates tied to an identity (like a GitHub or Google account) via OpenID Connect. The certificate lives only long enough to create the signature, then expires. A transparency log (Rekor) records the signing event, providing permanent proof that the signature was valid at the time it was created." + - question: "What are the main components of Sigstore?" + answer: "Sigstore consists of three core services: Cosign (signs and verifies container images and other artifacts), Fulcio (a certificate authority that issues short-lived certificates based on OIDC identity), and Rekor (a transparency log that records all signing events in a tamper-evident ledger). Gitsign extends this to Git commit signing." + - question: "How is Sigstore related to SLSA and in-toto?" + answer: "Sigstore provides the signing infrastructure that SLSA and in-toto rely on. SLSA provenance attestations (which use the in-toto attestation format) are typically signed using Sigstore's keyless signing. GitHub artifact attestations, for example, use in-toto format signed via Sigstore's Fulcio and recorded in Rekor." + - question: "Who uses Sigstore?" + answer: "Sigstore is used by major package registries (npm, PyPI, Maven Central), container platforms (Kubernetes, Distroless), CI/CD systems (GitHub Actions, GitLab), and Linux distributions. GitHub's artifact attestation feature uses Sigstore under the hood." date: 2024-08-12 slug: what-is-sigstore --- -**Summary**: In an era where software supply chain attacks are becoming more common and sophisticated, [Sigstore](https://www.sigstore.dev/) represents a critical advancement in securing software development practices. By lowering the barriers to cryptographic signing and increasing the transparency of the signing process, Sigstore helps ensure that the software we all rely on is both secure and trustworthy. 
Whether you're a developer, a security professional, or a software user, the adoption of Sigstore has the potential to significantly enhance the integrity of the software ecosystem. +Before [Sigstore](https://www.sigstore.dev/), signing a software artifact meant generating a GPG key pair or a PEM-encoded signing key, storing the private key securely, distributing the public key, rotating keys on a schedule, and revoking compromised keys. Most projects never bothered. The result: the vast majority of open-source software shipped unsigned, and consumers had no way to verify that a package came from who it claimed to come from. + +Sigstore changed this by removing the hardest part of the problem — key management — entirely. Using short-lived certificates, identity-based signing, and a public transparency log, Sigstore makes it possible to sign and verify software artifacts without managing a single long-lived key. Launched in 2021 as a collaboration between Google, Red Hat, and Purdue University (with Chainguard later becoming a major contributor), Sigstore is now hosted by the [Open Source Security Foundation (OpenSSF)](https://openssf.org/) under the Linux Foundation and underpins signing infrastructure for npm, PyPI, Kubernetes, and GitHub artifact attestations. + +## How Sigstore Works: Keyless Signing + +Sigstore's central innovation is **keyless signing** — a model where developers never see or manage cryptographic keys. Instead, signing is tied to an _identity_ (like a GitHub or Google account) through short-lived certificates. Here is the flow: + +1. **Authenticate.** The developer proves their identity via OpenID Connect (OIDC), using an existing identity provider like GitHub, Google, or Microsoft. +2. **Get a certificate.** Sigstore's certificate authority, **Fulcio**, issues a short-lived X.509 certificate (valid for roughly 10 minutes) that binds the developer's verified identity to an ephemeral key pair. +3. **Sign the artifact.** The developer uses **Cosign** (or another signing client) to sign the artifact with the ephemeral private key. +4.
**Log the event.** The signature, certificate, and artifact digest are recorded in **Rekor**, Sigstore's transparency log — a tamper-evident, append-only public ledger. +5. **Key expires.** The ephemeral private key is discarded. Because the signing event is permanently recorded in Rekor with a timestamp, anyone can later verify that the signature was valid at the time it was created, even though the certificate has long since expired. + +This is what makes keyless signing possible: the transparency log replaces long-lived keys as the source of trust. Instead of trusting that a key has not been compromised over months or years, verifiers trust that the signing event was recorded in Rekor at a specific time, with a valid certificate, for a verified identity. + +## Core Components + +### Cosign + +[Cosign](https://docs.sigstore.dev/cosign/signing/overview/) is the primary signing and verification tool. It supports: + +- **Container images** — signs OCI images and stores signatures in OCI registries alongside the image +- **Blobs** — signs arbitrary files (binaries, archives, SBOMs) +- **[in-toto attestations](/2024/08/14/what-is-in-toto/)** — signs and verifies in-toto attestations, including SLSA provenance + +Cosign integrates with CI/CD pipelines (GitHub Actions, GitLab CI, Jenkins) and can verify signatures as part of admission control in Kubernetes clusters. + +### Fulcio + +[Fulcio](https://docs.sigstore.dev/fulcio/overview/) is Sigstore's certificate authority. It issues short-lived certificates (typically 10 minutes) after verifying the requester's identity via OIDC. Because certificates are short-lived, there is no need for certificate revocation lists or OCSP — the certificate simply expires before it can be meaningfully abused. + +Fulcio supports identity providers including GitHub Actions (for workload identity in CI/CD), Google, Microsoft, and any OIDC-compliant provider. 
In CI/CD environments, the build system's workload identity is used automatically, requiring no human interaction. + +### Rekor + +[Rekor](https://docs.sigstore.dev/rekor/overview/) is the transparency log — an immutable, append-only ledger that records every signing event. Each entry includes the artifact digest, the signature, and the signing certificate. Rekor provides: -## What is Sigstore? +- **Tamper evidence** — any modification to the log is detectable through cryptographic proofs (Merkle tree inclusion proofs) +- **Timestamping** — each entry has a trusted timestamp, proving when the signature was created +- **Public auditability** — anyone can query Rekor to verify that a signature exists and when it was made -Sigstore is an open-source project that provides a set of tools and services for developers to cryptographically sign, verify, and protect their software artifacts. This includes container images, binaries, and source code, among other elements. The primary goal of Sigstore is to make signing software artifacts as easy and widely adopted as possible, thereby increasing the overall trust in the software supply chain. +Rekor is what enables keyless signing to work: because the signing event is permanently recorded with a timestamp, the short-lived certificate does not need to remain valid for verification to succeed. -Sigstore was born out of a collaboration between Google, Red Hat, Purdue University, and other key contributors in the open-source community. The project has quickly gained momentum due to its focus on addressing critical security concerns in modern software development practices. +### Gitsign + +[Gitsign](https://docs.sigstore.dev/gitsign/overview/) applies Sigstore's keyless signing to Git commits and tags. Instead of configuring GPG keys, developers authenticate via OIDC and sign commits with a short-lived certificate, with the signing event recorded in Rekor. 
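The keyless flow described earlier (ephemeral key, short-lived certificate, permanent log entry) can be made concrete with a toy model. The sketch below is illustrative and stdlib-only: dicts and SHA-256 digests stand in for real ECDSA signatures, Fulcio certificates, and the Rekor API, and all names are invented. What it shows is why verification still succeeds long after the certificate has expired.

```python
import hashlib

# Toy stand-in for Rekor: an append-only list of signing events.
LOG = []

def keyless_sign(artifact: bytes, identity: str, now: float) -> None:
    # Steps 1-3: authenticate via OIDC, receive a short-lived certificate,
    # and sign with an ephemeral key (modeled here as just cert metadata).
    cert = {"identity": identity, "issued": now, "expires": now + 600}
    # Step 4: the signing event is recorded in the transparency log.
    LOG.append({
        "digest": hashlib.sha256(artifact).hexdigest(),
        "cert": cert,
        "logged_at": now,
    })
    # Step 5: the ephemeral private key is discarded; there is nothing to keep.

def keyless_verify(artifact: bytes, identity: str) -> bool:
    # Trust comes from the log, not from a still-valid key: we only require
    # that the event was logged while the certificate was valid.
    digest = hashlib.sha256(artifact).hexdigest()
    return any(
        entry["digest"] == digest
        and entry["cert"]["identity"] == identity
        and entry["cert"]["issued"] <= entry["logged_at"] <= entry["cert"]["expires"]
        for entry in LOG
    )

keyless_sign(b"release.tar.gz", "ci@example.com", now=1_000.0)
print(keyless_verify(b"release.tar.gz", "ci@example.com"))   # True, even years later
print(keyless_verify(b"tampered.tar.gz", "ci@example.com"))  # False
```

The real system adds what the toy omits: Fulcio binds the identity into an X.509 certificate chain, the signature is a genuine ECDSA signature over the artifact digest, and Rekor backs the log with a Merkle tree so tampering with past entries is detectable.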
## Why Sigstore Matters -The integrity of software artifacts is essential to prevent malicious code from infiltrating production systems. Traditionally, ensuring this integrity has involved complex processes of generating, managing, and distributing cryptographic keys for signing software. These processes, while effective, are often cumbersome, leading to low adoption rates and a general lack of cryptographic signing in many projects. +### The Key Management Problem + +Traditional code signing requires generating a key pair, protecting the private key (often in an HSM or vault), distributing the public key (via a keyserver or out-of-band), rotating keys periodically, and revoking them when compromised. Each of these steps is a potential point of failure, and the operational burden means most projects skip signing entirely. + +Sigstore eliminates every one of these steps. There are no long-lived keys to generate, store, rotate, or revoke. The developer's existing identity _is_ the credential, and the transparency log _is_ the distribution mechanism. + +### Real-World Supply Chain Attacks + +The attacks that Sigstore helps prevent are not theoretical: + +- **[SolarWinds (2020)](https://en.wikipedia.org/wiki/2020_United_States_federal_government_data_breach)** — Attackers compromised the build system and injected malicious code into signed updates. If the build system had used identity-bound, transparency-logged signing, the anomalous signing identity would have been visible in the public log. +- **[Codecov (2021)](https://about.codecov.io/security-update/)** — A modified Bash uploader script was distributed for months. Transparency-logged signatures would have made the unauthorized modification detectable. +- **[XZ Utils (2024)](/2024/04/13/what-really-happened-to-xz/)** — A malicious contributor backdoored the build process.
Sigstore's identity-based signing would tie every signed artifact to a specific verified identity, making it harder for a pseudonymous attacker to sign releases undetected. + +### Adoption at Scale + +Sigstore's impact is measured by who uses it: + +- **npm** — Packages published to the npm registry can carry Sigstore-based provenance attestations, generated opt-in with `npm publish --provenance` from a supported CI/CD system, allowing users to verify that a package was built from a specific repository and commit. +- **PyPI** — Python package attestations use Sigstore to sign and verify distributions uploaded from trusted CI/CD publishers. +- **Maven Central** — The Sigstore Java client enables signing for JVM-based artifacts. +- **Kubernetes** — All Kubernetes release artifacts are signed with Sigstore, providing verifiable provenance for the most widely deployed container orchestrator. +- **GitHub artifact attestations** — GitHub's [attestation feature](https://docs.github.com/en/actions/security-for-github-actions/using-artifact-attestations/using-artifact-attestations-to-establish-provenance-for-builds) uses Sigstore for signing [in-toto attestations](/2024/08/14/what-is-in-toto/), including [SLSA](/2024/08/17/what-is-slsa/) provenance. +- **Distroless images** — Google's Distroless container images are signed with Cosign, allowing Kubernetes admission controllers to enforce signature verification. + +## Sigstore and SBOMs + +Sigstore provides the signing layer that gives [SBOMs](/what-is-sbom/) integrity and provenance. An unsigned SBOM is a claim — a signed SBOM is evidence. + +- **Signing SBOMs** — Cosign can sign SBOM files (CycloneDX, SPDX) or attach signed SBOMs to container images. This proves who generated the SBOM and when. +- **SBOM attestations** — An SBOM can be wrapped in an [in-toto attestation](/2024/08/14/what-is-in-toto/) and signed via Sigstore, producing a verifiable, transparency-logged SBOM that consumers can independently validate.
+- **Verification in pipelines** — Organizations consuming SBOMs can use `cosign verify-attestation` to confirm that the SBOM was generated by a trusted CI/CD identity before [scanning it for vulnerabilities](/2026/02/01/sbom-scanning-vulnerability-detection/). + +For organizations using [sbomify](https://sbomify.com) for SBOM management, Sigstore-signed SBOMs provide assurance that the inventory data was generated by an authorized pipeline, not fabricated or modified after the fact. + +## Getting Started with Sigstore + +### Signing a Container Image + +```bash +# Install cosign +go install github.com/sigstore/cosign/v2/cmd/cosign@latest + +# Sign an image (keyless — opens browser for OIDC auth) +cosign sign my-registry.io/my-image:latest + +# Verify the image +cosign verify my-registry.io/my-image:latest \ + --certificate-identity=user@example.com \ + --certificate-oidc-issuer=https://accounts.google.com +``` + +### Signing in CI/CD (GitHub Actions) -Sigstore simplifies this process by providing a transparent, community-driven solution that reduces the barriers to entry for signing software artifacts. This democratization of signing practices helps ensure that more software components are cryptographically verified, making it harder for attackers to introduce malicious code into the software supply chain. +In GitHub Actions, Cosign uses the workflow's OIDC identity automatically — no browser interaction required: -## Key Components of Sigstore +```yaml +- uses: sigstore/cosign-installer@v3 +- run: cosign sign my-registry.io/my-image:${{ github.sha }} +``` -Sigstore is comprised of several key components that work together to provide a secure and user-friendly signing experience: +### Verifying Attestations -1. **Cosign**: A command-line tool that enables developers to sign, verify, and store signatures for container images and other artifacts. 
Cosign leverages existing cloud-native technologies and integrates seamlessly with popular CI/CD pipelines, making it easy to incorporate signing into existing workflows. +```bash +# Verify a SLSA provenance attestation +cosign verify-attestation my-registry.io/my-image:latest \ + --type slsaprovenance \ + --certificate-identity-regexp="https://github.com/my-org/*" \ + --certificate-oidc-issuer=https://token.actions.githubusercontent.com +``` -2. **Rekor**: A transparency log service that records all signature and metadata information in a tamper-evident manner. Rekor acts as a public ledger, allowing anyone to verify the integrity of an artifact and its associated signature. This transparency log is critical for ensuring that signed artifacts cannot be altered without detection. +For more on attestation formats, see our guide on [in-toto](/2024/08/14/what-is-in-toto/). -3. **Fulcio**: A certificate authority (CA) that issues short-lived certificates based on OpenID Connect (OIDC) identities. Fulcio enables developers to obtain cryptographic certificates without the need for complex key management, streamlining the process of signing artifacts. ## Sigstore in the Compliance Context -4. **Gitsign**: A tool for signing Git commits and tags using Sigstore's infrastructure. Gitsign ensures that code changes can be traced back to their source, providing an additional layer of security for version control systems. +Sigstore supports compliance with emerging software supply chain regulations: -## How Sigstore Works +- **[Executive Order 14028](/compliance/eo-14028/)** requires software producers to attest to secure development practices and provide SBOMs. Sigstore provides the cryptographic signing infrastructure to make those attestations verifiable. +- **[SLSA requirements](/2024/08/17/what-is-slsa/)** — Sigstore is the de facto signing mechanism for SLSA provenance. SLSA Build Level 2 and above require signed provenance, and Sigstore is the most common way to produce those signatures.
+- **[EU CRA](/compliance/eu-cra/)** requires demonstrating supply chain security and vulnerability handling. Sigstore-signed artifacts and SBOMs provide auditable proof of integrity. +- **[NIST SP 800-53](/compliance/nist-800-53/) SI-7** (Software, Firmware, and Information Integrity) requires integrity verification mechanisms. Sigstore's signing and verification model directly addresses this control. -Sigstore's architecture is designed to be both secure and easy to use. When a developer signs an artifact using Cosign, the following steps typically occur: +## Frequently Asked Questions -1. **Obtain a Certificate**: The developer uses Fulcio to obtain a short-lived certificate, which is tied to their OIDC identity (such as a GitHub or Google account). This certificate is used to sign the artifact. +### What is Sigstore? -2. **Sign the Artifact**: The developer uses Cosign to create a signature for the artifact. This signature, along with the certificate, is then uploaded to Rekor's transparency log. +Sigstore is an open-source project that provides free, easy-to-use tools for cryptographically signing, verifying, and protecting software artifacts. Its keyless signing model eliminates the need for developers to manage long-lived cryptographic keys, making signing accessible to every project. Sigstore is hosted by the Open Source Security Foundation (OpenSSF). -3. **Store and Verify**: The signed artifact, along with its signature and certificate, can be stored in a registry. Anyone can later verify the artifact by checking the signature against the information in Rekor's transparency log. +### What is keyless signing? -This process ensures that the integrity and origin of the software artifact can be independently verified by anyone, making it much harder for malicious actors to compromise the software supply chain.
+Keyless signing is Sigstore's approach to cryptographic signing that replaces long-lived keys with short-lived certificates tied to an identity (like a GitHub or Google account) via OpenID Connect. The certificate lives only long enough to create the signature, then expires. A transparency log (Rekor) records the signing event, providing permanent proof that the signature was valid at the time it was created. -## The Benefits of Sigstore +### What are the main components of Sigstore? -Sigstore offers several significant benefits to the software development community: +Sigstore consists of three core services: [Cosign](https://docs.sigstore.dev/cosign/signing/overview/) (signs and verifies container images and other artifacts), [Fulcio](https://docs.sigstore.dev/fulcio/overview/) (a certificate authority that issues short-lived certificates based on OIDC identity), and [Rekor](https://docs.sigstore.dev/rekor/overview/) (a transparency log that records all signing events in a tamper-evident ledger). [Gitsign](https://docs.sigstore.dev/gitsign/overview/) extends this to Git commit signing. -- **Enhanced Security**: By making cryptographic signing accessible and easy to use, Sigstore helps protect against supply chain attacks and other security threats. +### How is Sigstore related to SLSA and in-toto? -- **Transparency**: The use of a public transparency log ensures that all signed artifacts are publicly auditable, increasing trust in the software supply chain. +Sigstore provides the signing infrastructure that [SLSA](/2024/08/17/what-is-slsa/) and [in-toto](/2024/08/14/what-is-in-toto/) rely on. SLSA provenance attestations (which use the in-toto attestation format) are typically signed using Sigstore's keyless signing. GitHub artifact attestations, for example, use in-toto format signed via Sigstore's Fulcio and recorded in Rekor. 
-- **Ease of Use**: Sigstore's integration with existing tools and workflows means that developers can adopt secure signing practices without significant changes to their processes. +### Who uses Sigstore? -- **Community-Driven**: As an open-source project, Sigstore benefits from the contributions and oversight of a broad community, ensuring that it remains responsive to the needs of developers and organizations. +Sigstore is used by major package registries (npm, PyPI, Maven Central), container platforms (Kubernetes, Distroless), CI/CD systems (GitHub Actions, GitLab), and Linux distributions. GitHub's artifact attestation feature uses Sigstore under the hood. diff --git a/content/posts/2024-08-14-what-is-in-toto.md b/content/posts/2024-08-14-what-is-in-toto.md index 3045a28..88e5436 100644 --- a/content/posts/2024-08-14-what-is-in-toto.md +++ b/content/posts/2024-08-14-what-is-in-toto.md @@ -1,70 +1,177 @@ --- -title: "Understanding in-toto: Securing the Software Supply Chain" -description: "Introduction to in-toto, the open-source framework from NYU for securing software supply chains through layout definitions, signed link metadata, and verification." -author: - display_name: Cowboy Neil +title: "What Is in-toto? Securing the Software Supply Chain End to End" +description: "What is in-toto? The CNCF-graduated framework uses layouts, signed link metadata, and end-to-end verification to cryptographically prove every build step happened as intended. Learn how it underpins SLSA, GitHub attestations, and SBOM integrity." categories: - education -tags: [in-toto, security, supply-chain] +tags: [in-toto, security, supply-chain, attestation] +tldr: "in-toto is a framework that cryptographically verifies every step of your software supply chain — from source commit to deployed artifact. Developed at NYU's Secure Systems Lab, it is now a CNCF graduated project whose attestation format underpins SLSA provenance, GitHub artifact attestations, and sigstore signing." 
+author: + display_name: Cowboy Neil + login: Cowboy Neil + url: https://sbomify.com +faq: + - question: "What is in-toto?" + answer: "in-toto is an open-source framework for securing the software supply chain. It provides a system of layouts, signed link metadata, and end-to-end verification to ensure that every step in building and delivering software happened exactly as intended, with no unauthorized modifications. It is a CNCF graduated project." + - question: "What is the in-toto attestation framework?" + answer: "The in-toto attestation framework defines a standard envelope format for supply chain metadata. It wraps a typed predicate (such as SLSA provenance, vulnerability scan results, or SBOM data) inside a signed envelope, providing a universal format for making and verifying claims about software artifacts." + - question: "How is in-toto related to SLSA?" + answer: "SLSA (Supply chain Levels for Software Artifacts) builds directly on in-toto. SLSA provenance attestations use the in-toto attestation format, and SLSA's verification model is an extension of in-toto's end-to-end verification concept. in-toto provides the technical foundation; SLSA provides a maturity framework on top of it." + - question: "Is in-toto a CNCF project?" + answer: "Yes. in-toto joined the Cloud Native Computing Foundation (CNCF) as a sandbox project in 2019 and graduated in 2023. It is one of only a few supply chain security projects to reach CNCF graduated status, alongside The Update Framework (TUF)." + - question: "How does in-toto relate to SBOMs?" + answer: "in-toto attestations can carry SBOM data as a predicate type, providing a signed, verifiable wrapper around SBOM content. This means you can not only generate an SBOM but also cryptographically prove who generated it, when, and from which source materials — adding integrity and provenance to your SBOM."
date: 2024-08-14 slug: what-is-in-toto --- -In today's software landscape, securing the software supply chain is more crucial than ever. With increasing concerns about vulnerabilities and supply chain attacks, developers and organizations are looking for robust solutions to ensure the integrity of their software from development to deployment. One such solution is **[in-toto](https://github.com/in-toto/in-toto)**. In this post, we'll explore what in-toto is, how it works, and why it might be the key to securing your software supply chain. +When the [XZ Utils backdoor](/2024/04/13/what-really-happened-to-xz/) was discovered in March 2024, it revealed how a malicious contributor could spend years infiltrating an open-source project to inject a supply chain compromise. The attack succeeded because there was no cryptographic verification that the build steps producing the XZ binary matched the intended, authorized process. This is precisely the problem that **[in-toto](https://in-toto.io/)** was designed to solve. -### What is in-toto? +in-toto is a framework for securing the _entire_ software supply chain — from source commit to deployed artifact — by cryptographically recording and verifying every step along the way. Originally developed at New York University's [Secure Systems Lab](https://ssl.engineering.nyu.edu/) by Professor Justin Cappos and Santiago Torres-Arias, in-toto has grown from an academic research project into a [CNCF graduated project](https://www.cncf.io/projects/in-toto/) and the foundation of the modern software supply chain security ecosystem. The name comes from the Latin phrase meaning "in total" or "completely," reflecting its end-to-end approach. + +## How in-toto Works + +in-toto establishes a cryptographic chain of trust across every step in a software supply chain. It does this through three core concepts: layouts, link metadata, and verification. 
+ +### Layouts: The Blueprint + +A layout is a signed document that defines the _intended_ supply chain. It specifies: + +- **The steps** involved in producing the software (e.g., clone source, run tests, compile, package) +- **Who is authorized** to perform each step (identified by cryptographic keys) +- **What materials and products** each step should consume and produce (e.g., the compile step takes source files as input and produces a binary as output) +- **Inspection rules** that must hold true across steps (e.g., the files that leave one step must be the same files that enter the next) + +The layout is signed by the project owner — the authority who defines what the supply chain _should_ look like. Think of it as a security policy for your build pipeline, expressed in a machine-verifiable format. + +### Link Metadata: The Evidence + +As each step in the layout is executed, the entity performing it generates _link metadata_ — a signed record containing: + +- The **command** that was run +- The **materials** (input files and their cryptographic hashes) consumed by the step +- The **products** (output files and their cryptographic hashes) produced by the step +- The **cryptographic signature** of the authorized functionary who performed the step + +Each link is signed by the functionary's private key, creating tamper-evident proof that the step was performed by an authorized party and that the inputs and outputs are exactly what was recorded. + +### Verification: The Guarantee + +Once all steps are complete, in-toto verification checks the entire chain: + +1. The layout signature is valid and from a trusted project owner +2. Every step defined in the layout has corresponding signed link metadata +3. Each link was signed by an authorized functionary for that step +4. The material/product relationships between steps are consistent — the products of one step match the materials of the next +5. 
All inspection rules pass + +If any check fails — a step was skipped, an unauthorized party performed a step, or files were modified between steps — verification fails and the artifact is rejected. This is what makes in-toto an _end-to-end_ verification system: it does not just check individual steps in isolation, but verifies the integrity of the entire pipeline. + +## The in-toto Attestation Framework -In-toto is an open-source framework designed to secure the entire software supply chain. It provides a way to ensure that all steps in the software development and deployment process are executed as intended, without tampering or unintended changes. The name "in-toto" is derived from the Latin phrase "in toto," meaning "in total" or "completely," which reflects the framework's comprehensive approach to security. +Beyond the original layout/link model, the in-toto project developed the **[in-toto attestation framework](https://github.com/in-toto/attestation)** — a more general-purpose format for making signed, verifiable claims about software artifacts. This framework has become the de facto standard for supply chain metadata across the industry. -The in-toto framework was originally developed by researchers at New York University as part of the Supply Chain Integrity, Transparency, and Trust (SCITT) project. It is now maintained as an open-source project and has been adopted by various organizations to enhance the security of their development pipelines. +An in-toto attestation consists of: -### How Does in-toto Work? +- **An envelope** — a signed wrapper (using [DSSE](https://github.com/secure-systems-lab/dsse), the Dead Simple Signing Envelope) that provides authentication +- **A subject** — the artifact(s) the attestation refers to, identified by cryptographic digest +- **A predicate** — a typed payload containing the actual claim -In-toto operates by establishing a chain of trust throughout the entire software supply chain. 
It does this through a series of steps: +The predicate type system is what makes the framework so versatile. Different predicate types include: -1. **Defining a Layout**: The first step in using in-toto is to define a "layout," which is essentially a blueprint of the software supply chain. The layout specifies the steps involved in the development process, the expected materials and products of each step, and the parties responsible for executing those steps. +- **[SLSA Provenance](/2024/08/17/what-is-slsa/)** — records how an artifact was built, including source, builder, and build parameters +- **SBOM** — wraps a CycloneDX or SPDX [SBOM](/what-is-sbom/) in a signed attestation, providing provenance and integrity +- **Vulnerability scan results** — attests to the findings of a security scan +- **Code review** — records that a human reviewed the code before it was built -2. **Creating and Signing Link Metadata**: As each step in the layout is executed, in-toto generates "link metadata," which records the details of the step, including the command run, the materials used (such as source code), and the products generated (such as binaries). This metadata is then signed by the entity responsible for the step, creating a cryptographic proof of its execution. +This predicate-based architecture means the in-toto attestation format can carry _any_ type of supply chain metadata in a standardized, signable envelope. -3. **Verifying the Supply Chain**: Once all steps are completed, in-toto verifies the entire supply chain by checking that each step was executed according to the layout and that the link metadata is valid. This ensures that no unauthorized changes were made during the process. +## Why in-toto Matters -4. **Reproducibility and Transparency**: In-toto emphasizes transparency by making all the metadata and verification processes open and auditable. 
This allows for third-party verification and ensures that the integrity of the software can be independently verified. +### Preventing Supply Chain Attacks -### Why is in-toto Important? +The core value of in-toto is that it makes supply chain tampering detectable. Consider how common supply chain attacks work: -In-toto addresses several key challenges in securing the software supply chain: +- **A build system is compromised** and injects malicious code during compilation. in-toto verification detects that the products of the build step do not match the expected transformation of the source materials. +- **An unauthorized actor pushes a release.** in-toto verification fails because the release step was not signed by an authorized functionary. +- **A dependency is swapped** between the resolve and build steps. in-toto's material/product matching across steps detects the inconsistency. -- **Preventing Supply Chain Attacks**: By providing end-to-end verification of the software supply chain, in-toto helps prevent attacks where malicious actors might attempt to inject vulnerabilities during the development process. +Without in-toto, these attacks can go undetected because each individual step _appears_ to have succeeded. in-toto's end-to-end verification catches tampering that step-level checks miss. -- **Ensuring Compliance**: For organizations that need to comply with regulatory requirements or internal security policies, in-toto provides a clear and auditable trail of the software's development history. +### The XZ Utils Lesson -- **Building Trust**: In-toto's transparent and verifiable process builds trust among developers, users, and stakeholders, ensuring that the software they rely on has not been tampered with. +The [XZ Utils backdoor (CVE-2024-3094)](/2024/04/13/what-really-happened-to-xz/) is a textbook case for why in-toto matters. 
The attacker modified the build process to inject a backdoor that only appeared in the distributed tarballs, not in the git source. A supply chain secured with in-toto would have required the build step to be performed by an authorized functionary using the exact source materials from the repository, and verification would have caught the discrepancy between the source and the tarball. -### Real-World Use Cases +## Real-World Adoption -In-toto has been adopted by several organizations and integrated into popular tools and frameworks: +in-toto and its attestation format have been widely adopted across the software supply chain ecosystem. -- **The Open Source Security Foundation (OpenSSF)**: In-toto is part of the OpenSSF's efforts to secure open-source software, providing a framework to protect against supply chain attacks in critical open-source projects. +**CNCF Graduated Project** — in-toto [graduated from the CNCF](https://www.cncf.io/projects/in-toto/) in 2023, joining a small set of projects that have achieved this level of maturity and adoption. It is part of the cloud-native supply chain security ecosystem alongside [sigstore](/2024/08/12/what-is-sigstore/) and [The Update Framework (TUF)](https://theupdateframework.io/). -**Tern**: Tern, a tool for container image inspection, uses in-toto to track and verify the steps involved in building a container image, ensuring its integrity. +**SLSA Framework** — [SLSA (Supply chain Levels for Software Artifacts)](/2024/08/17/what-is-slsa/) uses the in-toto attestation format for all its provenance attestations. When a SLSA-compliant build system generates provenance, it produces an in-toto attestation.
+ +**GitHub Artifact Attestations** — GitHub's [artifact attestation](https://docs.github.com/en/actions/security-for-github-actions/using-artifact-attestations/using-artifact-attestations-to-establish-provenance-for-builds) feature uses in-toto attestations signed via [sigstore](/2024/08/12/what-is-sigstore/) to establish provenance for artifacts built with GitHub Actions. Every time you see SLSA provenance on a GitHub release, it is an in-toto attestation under the hood. + +**Kubernetes** — The Kubernetes project generates SLSA Level 3 provenance for its releases using in-toto attestations, allowing users to verify that official Kubernetes binaries were built from the expected source by authorized infrastructure. + +**Datadog** — Datadog integrated in-toto into its CI/CD pipelines to provide end-to-end verification of its Agent software, ensuring that the binaries delivered to customers match the intended build process. + +## in-toto and SBOMs + +in-toto and [SBOMs](/what-is-sbom/) are complementary. An SBOM tells you _what_ is in your software. in-toto tells you _how_ it got there and _who_ was responsible for each step. + +The intersection is practical: + +- **SBOMs as attestation predicates** — An in-toto attestation can wrap an SBOM as its predicate, producing a signed, verifiable SBOM. This proves not only what components are in the software, but also who generated the SBOM and from what source materials. +- **Build provenance for SBOM accuracy** — in-toto provenance attestations record the exact build inputs and process, which is precisely the information needed to generate an accurate SBOM. Build-time SBOMs generated alongside in-toto link metadata can be trusted to reflect the actual build. +- **Continuous verification** — When [monitoring SBOMs for vulnerabilities](/2026/02/01/sbom-scanning-vulnerability-detection/), in-toto attestations provide assurance that the SBOM was not tampered with after generation. 
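To make the first point concrete, here is a hand-written sketch of an in-toto Statement carrying a CycloneDX SBOM as its predicate. All values (the digest, component name, and version) are placeholders, and the SBOM body is truncated to a single component:

```json
{
  "_type": "https://in-toto.io/Statement/v1",
  "subject": [
    { "name": "my-app", "digest": { "sha256": "8f4343..." } }
  ],
  "predicateType": "https://cyclonedx.org/bom",
  "predicate": {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
      { "type": "library", "name": "example-lib", "version": "1.4.2" }
    ]
  }
}
```

The statement is then serialized and signed inside a DSSE envelope and distributed alongside the artifact, so consumers can verify both the SBOM's contents and who produced it.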
+ +For organizations managing SBOMs with a platform like [sbomify](https://sbomify.com), in-toto attestations add a layer of integrity: you can verify that the SBOM was generated by an authorized CI/CD pipeline from the correct source, not fabricated or modified after the fact. + +## Getting Started with in-toto + +The in-toto project provides implementations in multiple languages: + +- **[in-toto-python](https://github.com/in-toto/in-toto)** — The reference implementation, installable via `pip install in-toto` +- **[in-toto-golang](https://github.com/in-toto/in-toto-golang)** — Go implementation, used by many cloud-native tools +- **[in-toto-java](https://github.com/in-toto/in-toto-java)** — Java implementation for JVM-based build systems +- **[in-toto-rs](https://github.com/in-toto/in-toto-rs)** — Rust implementation + +For most organizations, the fastest path to adopting in-toto is through tools that already use it: + +1. **Enable SLSA provenance** in your CI/CD system. GitHub Actions, Google Cloud Build, and other platforms support SLSA provenance generation, which produces in-toto attestations automatically. +2. **Verify attestations** on artifacts you consume. Tools like `cosign verify-attestation` ([sigstore](/2024/08/12/what-is-sigstore/)) and `gh attestation verify` (GitHub CLI) check in-toto attestations. +3. **Generate signed SBOMs** by wrapping your SBOM output in an in-toto attestation using your CI/CD signing infrastructure. + +For deeper control — such as defining custom layouts with specific functionary keys and step-level verification — the in-toto Python or Go libraries provide the full layout/link/verify workflow. The [in-toto documentation](https://in-toto.readthedocs.io/) includes step-by-step tutorials. 
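Whichever route you take, the attestations involved share one wire format: a DSSE envelope whose `payload` field is a base64-encoded in-toto Statement. A stdlib-only sketch of unwrapping one to inspect its predicate type — the envelope below is constructed inline for illustration, and signature verification (which real tooling such as `cosign` performs first) is deliberately omitted:

```python
import base64
import json

def decode_statement(envelope: dict) -> dict:
    """Decode the in-toto Statement carried in a DSSE envelope.

    NOTE: this only unwraps the payload -- it does NOT verify the
    envelope's signatures, which real verifiers must check first.
    """
    if envelope.get("payloadType") != "application/vnd.in-toto+json":
        raise ValueError("not an in-toto attestation payload")
    return json.loads(base64.b64decode(envelope["payload"]))

# A toy envelope, built inline for illustration (no real signature).
statement = {
    "_type": "https://in-toto.io/Statement/v1",
    "subject": [{"name": "my-app", "digest": {"sha256": "abc123"}}],
    "predicateType": "https://slsa.dev/provenance/v1",
    "predicate": {},
}
envelope = {
    "payloadType": "application/vnd.in-toto+json",
    "payload": base64.b64encode(json.dumps(statement).encode()).decode(),
    "signatures": [{"keyid": "example", "sig": "..."}],
}

decoded = decode_statement(envelope)
print(decoded["predicateType"])  # → https://slsa.dev/provenance/v1
```

The same unwrapping step is what `gh attestation verify` and `cosign verify-attestation` perform internally after checking signatures, which is why any predicate — SLSA provenance, an SBOM, a scan result — can ride in the same envelope.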
+ +## in-toto in the Compliance Context + +in-toto aligns with emerging supply chain security requirements: + +- **[Executive Order 14028](/compliance/eo-14028/)** directs federal agencies to enhance software supply chain security, including provenance and integrity verification — core capabilities of in-toto. +- **[NIST SSDF (SP 800-218)](https://csrc.nist.gov/publications/detail/sp/800-218/final)** recommends protecting all forms of code from unauthorized access and tampering. in-toto's cryptographic verification of build steps directly addresses this requirement. +- **[SLSA requirements](/2024/08/17/what-is-slsa/)** — Organizations pursuing SLSA compliance are inherently using in-toto attestation formats for provenance. +- **[EU CRA](/compliance/eu-cra/)** requires vulnerability handling and supply chain security processes. in-toto provides the verification mechanism to demonstrate that software was built through an authorized, untampered pipeline. + +## Frequently Asked Questions + +### What is in-toto? -- **Datadog**: Datadog, a cloud monitoring platform, has integrated in-toto into its CI/CD pipeline to secure the delivery of its software updates. +in-toto is an open-source framework for securing the software supply chain. It provides a system of layouts, signed link metadata, and end-to-end verification to ensure that every step in building and delivering software happened exactly as intended, with no unauthorized modifications. It is a CNCF graduated project. -### Getting Started with in-toto +### What is the in-toto attestation framework? -To start using in-toto, you can visit the [in-toto GitHub repository](https://github.com/in-toto/in-toto) for detailed documentation and examples. The repository provides guides on how to define layouts, generate link metadata, and verify the software supply chain. +The in-toto attestation framework defines a standard envelope format for supply chain metadata. 
It wraps a typed predicate (such as SLSA provenance, vulnerability scan results, or SBOM data) inside a signed envelope, providing a universal format for making and verifying claims about software artifacts. -In-toto is implemented in Python, and you can install it via pip: +### How is in-toto related to SLSA? -```bash -pip install in-toto -``` +[SLSA](/2024/08/17/what-is-slsa/) (Supply chain Levels for Software Artifacts) builds directly on in-toto. SLSA provenance attestations use the in-toto attestation format, and SLSA's verification model is an extension of in-toto's end-to-end verification concept. in-toto provides the technical foundation; SLSA provides a maturity framework on top of it. -From there, you can start defining your software supply chain's layout and securing your development pipeline. +### Is in-toto a CNCF project? -### Wrapping up +Yes. in-toto joined the Cloud Native Computing Foundation (CNCF) as a sandbox project in 2019 and graduated in 2023. It is one of only a few supply chain security projects to reach CNCF graduated status, alongside [sigstore](/2024/08/12/what-is-sigstore/) and The Update Framework (TUF). -In-toto offers a powerful and flexible framework for securing the software supply chain, addressing the growing concerns of supply chain attacks and software integrity. By incorporating in-toto into your development processes, you can ensure that your software is built and delivered exactly as intended, without any unauthorized changes. As supply chain security becomes increasingly important, tools like in-toto are invaluable in safeguarding the integrity of your software. +### How does in-toto relate to SBOMs? -Stay secure, and happy coding! +in-toto attestations can carry SBOM data as a predicate type, providing a signed, verifiable wrapper around [SBOM](/what-is-sbom/) content. 
This means you can not only generate an SBOM but also cryptographically prove who generated it, when, and from which source materials — adding integrity and provenance to your SBOM. diff --git a/content/posts/2024-08-17-what-is-slsa.md b/content/posts/2024-08-17-what-is-slsa.md index f989630..c7dc576 100644 --- a/content/posts/2024-08-17-what-is-slsa.md +++ b/content/posts/2024-08-17-what-is-slsa.md @@ -1,55 +1,222 @@ --- -title: "Securing the Software Supply Chain with SLSA: What You Need to Know" -description: "Understanding SLSA (Supply chain Levels for Software Artifacts), Google's framework for securing software supply chains through build integrity and provenance tracking." -author: - display_name: Cowboy Neil +title: "What Is SLSA? Understanding Supply Chain Levels for Software Artifacts" +description: "What is SLSA? The OpenSSF framework for software supply chain security defines three build levels for provenance and integrity. Learn how SLSA works, how it builds on in-toto, and how to adopt it with GitHub Actions." categories: - education -tags: [slsa, security, supply-chain] +tags: [slsa, security, supply-chain, provenance] +tldr: "SLSA (pronounced 'salsa') is a framework of three progressively stronger build levels that ensure software artifacts have verifiable provenance — a signed, tamper-evident record of how, where, and by whom they were built. Maintained by the OpenSSF, SLSA builds on the in-toto attestation format and is natively supported by GitHub Actions, Google Cloud Build, and other CI/CD platforms." +author: + display_name: Cowboy Neil + login: Cowboy Neil + url: https://sbomify.com +faq: + - question: "What is SLSA?" + answer: "SLSA (Supply chain Levels for Software Artifacts, pronounced 'salsa') is a security framework that defines three build levels of increasing rigor for producing software artifacts with verifiable provenance. 
It provides a common language for describing build security maturity, from basic provenance documentation (Build L1) to hardened, tamper-resistant build platforms (Build L3). SLSA is maintained by the Open Source Security Foundation (OpenSSF)." + - question: "What are the SLSA build levels?" + answer: "SLSA v1.0 defines three build levels. Build L1 requires that provenance exists and documents the build process. Build L2 requires that provenance is generated by a hosted build platform (not the developer's machine), making it harder to forge. Build L3 requires a hardened build platform with isolated, tamper-resistant build environments, preventing even the build platform's own administrators from manipulating individual builds." + - question: "How is SLSA related to in-toto?" + answer: "SLSA uses the in-toto attestation format for all its provenance documents. An SLSA provenance attestation is an in-toto attestation with a specific predicate type that records build parameters, source information, and builder identity. in-toto provides the format; SLSA provides the security requirements." + - question: "How do I get started with SLSA?" + answer: "The fastest path is to use a CI/CD platform that already supports SLSA provenance generation. GitHub Actions provides SLSA Build L1 provenance through its artifact attestation feature. The slsa-github-generator from the SLSA project can produce Build L2/L3 provenance. Google Cloud Build also supports SLSA provenance natively." + - question: "Is SLSA required by any regulations?" + answer: "SLSA is not mandated by name in any regulation, but its requirements align with Executive Order 14028 (which requires attestation of secure development practices), the NIST SSDF (which requires protecting build integrity), and the EU Cyber Resilience Act (which requires supply chain security processes). Adopting SLSA helps demonstrate compliance with these frameworks." 
date: 2024-08-17 slug: what-is-slsa --- -## Abstract +In 2020, attackers compromised SolarWinds' build system and injected malicious code into Orion software updates that were distributed to roughly 18,000 organizations, including multiple U.S. government agencies. The attack succeeded because there was no verifiable record of what the build system was _supposed_ to produce — so no one noticed when it started producing something different. + +**[SLSA](https://slsa.dev/)** (Supply chain Levels for Software Artifacts, pronounced "salsa") is a framework designed to prevent exactly this kind of attack. It defines a set of progressively stronger requirements for how software is built, producing a signed, tamper-evident record — called **provenance** — that documents exactly how, where, and by whom an artifact was created. Maintained by the [Open Source Security Foundation (OpenSSF)](https://openssf.org/), SLSA provides a common language for describing build security maturity, from basic documentation to hardened, tamper-resistant build platforms. + +## The SLSA Build Levels + +SLSA v1.0 defines three build levels, each adding stronger guarantees against different classes of supply chain threats. + +### Build L1: Provenance Exists + +At Level 1, the build process produces provenance — a document that records what was built, from what source, using which build system. The provenance format is an [in-toto attestation](/2024/08/14/what-is-in-toto/) with an SLSA Provenance predicate. + +**Requirements:** + +- Provenance is generated and available to consumers +- The provenance document follows the SLSA provenance format +- The build process is documented + +**What it prevents:** Build L1 does not prevent attacks, but it creates a baseline of transparency. If a compromised artifact is later discovered, provenance helps trace where the failure occurred. It also enables consumers to make informed decisions about which artifacts to trust. 
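For a sense of what "provenance exists" means in practice, here is a heavily trimmed sketch of a SLSA v1.0 provenance statement. All values are placeholders (the `buildType` URI in particular is invented for illustration), and a real predicate carries considerably more build metadata:

```json
{
  "_type": "https://in-toto.io/Statement/v1",
  "subject": [
    { "name": "my-app", "digest": { "sha256": "abc123..." } }
  ],
  "predicateType": "https://slsa.dev/provenance/v1",
  "predicate": {
    "buildDefinition": {
      "buildType": "https://example.com/build-types/ci-workflow/v1",
      "externalParameters": {
        "repository": "https://github.com/my-org/my-app",
        "ref": "refs/tags/v1.2.3"
      }
    },
    "runDetails": {
      "builder": { "id": "https://github.com/actions/runner" },
      "metadata": { "startedOn": "2026-02-20T14:30:00Z" }
    }
  }
}
```

Even at Build L1, having this document attached to every artifact gives consumers something to check source and build claims against.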
+ +### Build L2: Hosted Build Platform + +At Level 2, provenance must be generated by a _hosted build service_ — not the developer's local machine. The build service signs the provenance, attesting that it accurately reflects the build it performed. + +**Requirements:** + +- All Build L1 requirements +- Builds run on a hosted build platform (e.g., GitHub Actions, Google Cloud Build, GitLab CI) +- The build platform generates and signs the provenance (the developer cannot forge it) +- Provenance includes the builder identity, source reference, and build configuration + +**What it prevents:** Build L2 prevents developers from fabricating provenance. Because the build platform generates the provenance, a compromised developer account cannot create forged provenance claiming an artifact was built from clean source code when it was not. + +### Build L3: Hardened Builds + +At Level 3, the build platform itself must be hardened against tampering. Builds run in isolated environments where even the build platform's own administrators cannot manipulate individual builds. + +**Requirements:** + +- All Build L2 requirements +- Builds run in isolated, ephemeral environments +- The build platform prevents cross-build contamination (one build cannot influence another) +- Secrets and signing keys are isolated from build workloads +- The build platform has a verifiable record of its own integrity + +**What it prevents:** Build L3 prevents insider threats at the build platform level. Even if an attacker compromises the CI/CD infrastructure, the isolation guarantees mean they cannot inject code into a specific build without detection. This is the level required to defend against SolarWinds-class attacks. + +## How SLSA Provenance Works + +SLSA provenance is a machine-readable document that answers three questions about a software artifact: + +1. **What was built?** — The artifact's digest (cryptographic hash) +2. 
**How was it built?** — The source repository, commit, build configuration, and builder identity +3. **Who built it?** — The build platform and its identity credentials + +Provenance is expressed as an [in-toto attestation](/2024/08/14/what-is-in-toto/) — a signed envelope containing a typed predicate. The SLSA Provenance predicate type includes fields for the builder, build configuration, source materials, and metadata. The attestation is signed using [Sigstore](/2024/08/12/what-is-sigstore/) (keyless signing) or traditional key-based signing, and the signature is recorded in a transparency log. + +A simplified example of what provenance captures: + +``` +Subject: my-app:v1.2.3 (sha256:abc123...) +Builder: GitHub Actions (github.com/actions/runner) +Source: github.com/my-org/my-app @ commit def456 +Build: .github/workflows/release.yml +Timestamp: 2026-02-20T14:30:00Z +``` + +Consumers verify provenance by checking the signature, confirming the builder identity, and validating that the source and build configuration match their expectations. Tools like `cosign verify-attestation` and `slsa-verifier` automate this process. + +## The Supply Chain Threats SLSA Addresses + +SLSA's levels map to specific threat categories in the software supply chain. The framework's [threat model](https://slsa.dev/spec/v1.0/threats-overview) identifies attacks at each stage of the development process. 
+ +| Threat | Example | SLSA Level | +| -------------------------- | ---------------------------------------------------- | ----------------------- | +| No provenance | Consumers cannot verify how an artifact was built | Build L1 | +| Forged provenance | Developer fabricates provenance for a modified build | Build L2 | +| Compromised build platform | Attacker injects code via CI/CD infrastructure | Build L3 | +| Compromised source | Unauthorized commit pushed to repo | Source track (separate) | +| Compromised dependencies | Malicious transitive dependency | Not yet covered by SLSA | + +Note that SLSA v1.0 focuses on the _build_ track. Source integrity (protecting the source repository from unauthorized changes) is being developed as a separate track. Dependency management is out of scope for SLSA but is addressed by complementary tools like [SBOMs](/what-is-sbom/) and [vulnerability scanning](/2026/02/01/sbom-scanning-vulnerability-detection/). + +## Real-World Adoption + +### GitHub Actions + +GitHub's [artifact attestation](https://docs.github.com/en/actions/security-for-github-actions/using-artifact-attestations/using-artifact-attestations-to-establish-provenance-for-builds) feature generates SLSA Build L1 provenance for any artifact built with GitHub Actions. The provenance is an [in-toto attestation](/2024/08/14/what-is-in-toto/) signed via [Sigstore](/2024/08/12/what-is-sigstore/) and stored alongside the artifact. Verification is available via the GitHub CLI: + +```bash +gh attestation verify my-artifact.tar.gz --owner my-org +``` + +For Build L2/L3, the [slsa-github-generator](https://github.com/slsa-framework/slsa-github-generator) uses GitHub's reusable workflows to generate provenance in an isolated build context, preventing the calling workflow from tampering with the provenance. + +### Google Cloud Build + +Google Cloud Build natively generates SLSA Build L3 provenance for container images. 
Provenance is automatically signed and can be verified using Binary Authorization for admission control in GKE clusters. + +### npm and PyPI + +The npm registry includes SLSA provenance for packages built via GitHub Actions, and PyPI supports provenance through its Trusted Publishers feature. Both use [Sigstore](/2024/08/12/what-is-sigstore/) for signing, allowing consumers to verify that a package was built from the stated source. + +### Kubernetes + +All Kubernetes release artifacts include SLSA Build L3 provenance, allowing cluster operators to verify that the binaries they deploy were produced by the Kubernetes project's official build infrastructure. + +## SLSA and SBOMs + +SLSA and [SBOMs](/what-is-sbom/) address different parts of the software supply chain security problem, and they are most effective when used together. + +- **SLSA provenance** answers: _How was this artifact built, and can I trust the build process?_ +- **SBOMs** answer: _What components are inside this artifact, and do any have known vulnerabilities?_ + +An organization that generates SLSA provenance for its builds _and_ produces SBOMs for its artifacts covers both integrity (the build was not tampered with) and visibility (the contents are known and monitorable). Provenance without an SBOM means you trust the build but do not know what is inside. An SBOM without provenance means you know what is inside but cannot verify the build was legitimate. + +Both SLSA provenance and SBOMs can be expressed as [in-toto attestations](/2024/08/14/what-is-in-toto/), signed with [Sigstore](/2024/08/12/what-is-sigstore/), and managed alongside artifacts. For organizations using [sbomify](https://sbomify.com) for SBOM management, pairing SBOMs with SLSA provenance provides a comprehensive picture: verified build integrity and a complete component inventory. + +## Getting Started with SLSA + +### Step 1: Achieve Build L1 + +Generate provenance for your builds. 
If you use GitHub Actions, enable artifact attestations: + +```yaml +permissions: + id-token: write + contents: read + attestations: write + +steps: + - uses: actions/checkout@v4 + - run: make build + - uses: actions/attest-build-provenance@v2 + with: + subject-path: dist/my-app +``` + +This produces an SLSA Build L1 provenance attestation signed via Sigstore. + +### Step 2: Move to Build L2/L3 + +Use the [slsa-github-generator](https://github.com/slsa-framework/slsa-github-generator) for isolated provenance generation that the calling workflow cannot tamper with: + +```yaml +jobs: + build: + # ... your build steps ... + provenance: + needs: build + uses: slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@v2.1.0 + with: + base64-subjects: ${{ needs.build.outputs.digest }} +``` + +### Step 3: Verify Provenance -In a world where software is integral to almost every aspect of life, securing the software supply chain is more critical than ever. The increasing complexity of software systems has given rise to sophisticated cyberattacks, particularly those targeting the software supply chain. To combat these threats, Google introduced **SLSA (Supply chain Levels for Software Artifacts)**—a framework that provides a structured approach to safeguarding software development and ensuring that software artifacts are secure and tamper-free throughout their lifecycle. Pronounced "salsa," SLSA is essential for defending against supply chain attacks, maintaining software integrity, providing transparency, encouraging best practices, and meeting compliance standards. Additionally, GitHub now supports artifact attestations, further enhancing the ability to establish and verify provenance within the SLSA framework. +Consumers can verify provenance using the SLSA verifier: -## What is SLSA? 
+```bash +slsa-verifier verify-artifact my-app \ + --provenance-path my-app.intoto.jsonl \ + --source-uri github.com/my-org/my-app +``` -SLSA, which stands for Supply chain Levels for Software Artifacts, is a security framework designed to protect the integrity of software artifacts from the earliest stages of development through to deployment. Introduced by Google, SLSA offers a standardized approach to securing the software supply chain, addressing vulnerabilities that could otherwise be exploited by cybercriminals. +## SLSA in the Compliance Context -The framework is structured into four levels, each corresponding to a progressively higher degree of security maturity: +SLSA aligns with emerging supply chain security regulations: -1. **SLSA Level 1: Basic Build Integrity** - At this foundational level, organizations implement basic security practices, such as using version control systems (VCS) and documenting their build processes. This establishes a baseline for more advanced security measures. +- **[Executive Order 14028](/compliance/eo-14028/)** requires software producers to attest to secure development practices, including build integrity. SLSA provenance provides the mechanism for this attestation. +- **[NIST SSDF (SP 800-218)](https://csrc.nist.gov/publications/detail/sp/800-218/final)** recommends protecting all forms of code from unauthorized access and tampering. SLSA's build levels directly address this by progressively hardening the build process. +- **[EU CRA](/compliance/eu-cra/)** requires demonstrating supply chain security processes. SLSA provenance provides auditable evidence of build integrity. +- **[CISA minimum elements](/compliance/cisa-minimum-elements/)** recommend that SBOMs include build information. SLSA provenance complements SBOMs by providing detailed, verified build records. -2. 
**SLSA Level 2: Higher Build Integrity** - Building on Level 1, this stage introduces stronger controls like using a dedicated build service to prevent unauthorized changes. The goal is to enhance the integrity of the software as it progresses through the development pipeline. +## Frequently Asked Questions -3. **SLSA Level 3: Provenance** - Provenance is a key feature at this level, offering traceability and transparency regarding the origin of software artifacts. This level ensures that organizations can verify the entire history of their software, including who built it and how it was built. GitHub, one of the most widely used platforms for software development, now supports artifact attestations, enabling developers to create and verify attestations for software artifacts built using GitHub Actions. This capability allows organizations to seamlessly integrate provenance tracking into their development workflows, aligning with SLSA Level 3 requirements. +### What is SLSA? -4. **SLSA Level 4: Hermetic Builds and Two-Person Review** - The highest level of SLSA emphasizes maximum security with hermetic builds—completely isolated build environments—and a mandatory two-person review process. These practices help detect any issues before software deployment, ensuring the highest level of security. +SLSA (Supply chain Levels for Software Artifacts, pronounced "salsa") is a security framework that defines three build levels of increasing rigor for producing software artifacts with verifiable provenance. It provides a common language for describing build security maturity, from basic provenance documentation (Build L1) to hardened, tamper-resistant build platforms (Build L3). SLSA is maintained by the [Open Source Security Foundation (OpenSSF)](https://openssf.org/). -## Why SLSA Matters +### What are the SLSA build levels? -The importance of SLSA in today’s digital landscape cannot be overstated. 
As cyber threats continue to evolve, securing the software supply chain has become essential to preventing widespread damage and maintaining trust in digital systems. Here’s why SLSA is crucial: +SLSA v1.0 defines three build levels. Build L1 requires that provenance exists and documents the build process. Build L2 requires that provenance is generated by a hosted build platform (not the developer's machine), making it harder to forge. Build L3 requires a hardened build platform with isolated, tamper-resistant build environments, preventing even the build platform's own administrators from manipulating individual builds. -1. **Defending Against Supply Chain Attacks** - Supply chain attacks are a growing concern, with attackers targeting the software development process to introduce malicious code. SLSA helps organizations defend against these threats by implementing robust security measures at every stage of software development. +### How is SLSA related to in-toto? -2. **Ensuring Software Integrity and Trust** - SLSA ensures that software remains untampered throughout its lifecycle, which is critical for maintaining trust, especially in industries where security is paramount, such as healthcare, finance, and government. +SLSA uses the [in-toto attestation format](/2024/08/14/what-is-in-toto/) for all its provenance documents. An SLSA provenance attestation is an in-toto attestation with a specific predicate type that records build parameters, source information, and builder identity. in-toto provides the format; SLSA provides the security requirements. -3. **Providing Transparency with Provenance** - The concept of provenance, introduced at SLSA Level 3, offers a clear record of the software’s origin and development history. With GitHub's support for artifact attestations, organizations can now generate and verify these attestations directly within their development workflows, ensuring that the software’s authenticity and history are transparent and verifiable. 
+### How do I get started with SLSA? -4. **Encouraging Adoption of Best Practices** - By following the SLSA framework, organizations naturally adopt industry best practices for software security. This strengthens their own systems and contributes to the overall security of the software ecosystem. +The fastest path is to use a CI/CD platform that already supports SLSA provenance generation. GitHub Actions provides SLSA Build L1 provenance through its [artifact attestation feature](https://docs.github.com/en/actions/security-for-github-actions/using-artifact-attestations/using-artifact-attestations-to-establish-provenance-for-builds). The [slsa-github-generator](https://github.com/slsa-framework/slsa-github-generator) can produce Build L2/L3 provenance. Google Cloud Build also supports SLSA provenance natively. -5. **Achieving Compliance and Regulatory Standards** - With cybersecurity regulations becoming more stringent, adhering to SLSA can help organizations meet compliance requirements, avoiding legal consequences and demonstrating a commitment to security that can serve as a competitive advantage. +### Is SLSA required by any regulations? -By adopting SLSA and leveraging tools like GitHub's artifact attestations, organizations can protect their software from threats, ensure its integrity, and contribute to a safer digital environment for all. +SLSA is not mandated by name in any regulation, but its requirements align with [Executive Order 14028](/compliance/eo-14028/) (which requires attestation of secure development practices), the [NIST SSDF](https://csrc.nist.gov/publications/detail/sp/800-218/final) (which requires protecting build integrity), and the [EU Cyber Resilience Act](/compliance/eu-cra/) (which requires supply chain security processes). Adopting SLSA helps demonstrate compliance with these frameworks. 
diff --git a/content/posts/2025-12-30-what-is-kev-cisa-known-exploited-vulnerabilities.md b/content/posts/2025-12-30-what-is-kev-cisa-known-exploited-vulnerabilities.md index 474ba72..6ab4175 100644 --- a/content/posts/2025-12-30-what-is-kev-cisa-known-exploited-vulnerabilities.md +++ b/content/posts/2025-12-30-what-is-kev-cisa-known-exploited-vulnerabilities.md @@ -1,31 +1,35 @@ --- title: "What Is a KEV? Understanding CISA's Known Exploited Vulnerabilities Catalog" -description: "Learn what the CISA KEV catalog is, how it differs from CVE and CVSS, and how SBOMs enable automated KEV monitoring for vulnerability prioritization." +description: "What is a KEV? CISA's Known Exploited Vulnerabilities catalog lists CVEs under active attack. Learn how KEV differs from CVE and CVSS, and how SBOMs automate KEV monitoring." categories: - education tags: [kev, cisa, vulnerability, security] -tldr: "CISA's Known Exploited Vulnerabilities (KEV) catalog lists CVEs that are actively being exploited in the wild. Pairing KEV data with SBOMs lets you instantly identify which of your software components are affected by real-world attacks — not just theoretical risks." +tldr: "A KEV is a vulnerability that attackers are exploiting right now. CISA's KEV catalog — about 1,500 entries and growing — is the fastest way to know which CVEs pose an immediate threat. Pairing KEV with SBOMs lets you instantly find which of your components are under active attack." author: display_name: Cowboy Neil login: Cowboy Neil url: https://sbomify.com faq: - question: "What is a KEV?" - answer: "A KEV (Known Exploited Vulnerability) is a CVE vulnerability that CISA has confirmed is being actively exploited in real-world attacks. The CISA KEV catalog lists these vulnerabilities along with remediation deadlines and required actions." + answer: "A KEV (Known Exploited Vulnerability) is a CVE vulnerability that CISA has confirmed is being actively exploited in real-world attacks. 
The CISA KEV catalog lists these vulnerabilities along with remediation deadlines and required actions. Being listed in the KEV catalog means the vulnerability is not just theoretically dangerous — it is being used by attackers right now." - question: "What is the KEV catalog?" - answer: "The CISA Known Exploited Vulnerabilities Catalog is a curated, continuously updated list of CVE vulnerabilities with confirmed active exploitation. Established by Binding Operational Directive 22-01, it requires U.S. federal agencies to remediate listed vulnerabilities within specified timeframes and is freely available as JSON, CSV, and via the CISA website." + answer: "The CISA Known Exploited Vulnerabilities Catalog is a curated, continuously updated list of CVE vulnerabilities with confirmed active exploitation. Established by Binding Operational Directive 22-01, it requires U.S. federal agencies to remediate listed vulnerabilities within specified timeframes. The catalog is freely available as JSON, CSV, and via the CISA website, and is widely used by both government and private sector organizations for patch prioritization." - question: "How is KEV different from CVE?" - answer: "CVE is a system for assigning unique identifiers to all publicly known vulnerabilities, regardless of whether they are being exploited. The KEV catalog is a curated subset of CVEs that have confirmed evidence of active exploitation. There are over 330,000 CVE entries but only about 1,500 KEV entries." + answer: "CVE is a system for assigning unique identifiers to all publicly known vulnerabilities, regardless of whether they are being exploited. The KEV catalog is a curated subset of CVEs that have confirmed evidence of active exploitation. There are over 330,000 CVE entries but only about 1,500 KEV entries. A CVE tells you a vulnerability exists; a KEV listing tells you attackers are actively using it." - question: "Who must comply with the KEV catalog?" - answer: "BOD 22-01 legally requires U.S. 
federal civilian executive branch (FCEB) agencies to remediate KEV-listed vulnerabilities by the specified due dates. However, CISA strongly recommends that all organizations use the KEV catalog for prioritization, and many private sector organizations have adopted it as a standard input to their vulnerability management programs." + answer: "BOD 22-01 legally requires U.S. federal civilian executive branch (FCEB) agencies to remediate KEV-listed vulnerabilities by the specified due dates. However, CISA strongly recommends that all organizations use the KEV catalog for prioritization. Many private sector organizations, state and local governments, and critical infrastructure operators have adopted KEV as a standard input to their vulnerability management programs." - question: "How can I monitor the KEV catalog automatically?" - answer: "Generate SBOMs for your applications, ingest them into a vulnerability management platform like OWASP Dependency-Track, and use vulnerability analysis to identify known CVEs in your components. Then cross-reference those CVEs against the KEV catalog, which CISA publishes as a machine-readable JSON feed." + answer: "Generate SBOMs for your applications, ingest them into a vulnerability management platform like OWASP Dependency-Track, and use vulnerability analysis to identify known CVEs in your components. Then cross-reference those CVEs against the KEV catalog. CISA publishes the KEV catalog as a machine-readable JSON feed that can be consumed by automated tools." + - question: "How often is the KEV catalog updated?" + answer: "CISA updates the KEV catalog multiple times per week. New vulnerabilities are added as soon as CISA confirms reliable evidence of active exploitation and verifies that a remediation action exists. There is no fixed schedule — additions are driven by threat intelligence, so the catalog may see several updates in a single week or occasional pauses." 
+ - question: "How many vulnerabilities are in the KEV catalog?" + answer: "As of early 2026, the KEV catalog contains approximately 1,500 entries. The catalog launched with about 300 entries in November 2021, passed 1,000 in late 2023, and has been growing at roughly 20% per year. Despite this growth, the KEV catalog remains a small fraction of the 330,000+ CVEs in the broader vulnerability ecosystem, which is precisely what makes it useful for prioritization." date: 2025-12-30 slug: what-is-kev-cisa-known-exploited-vulnerabilities --- -A KEV (Known Exploited Vulnerability) is a vulnerability that has been confirmed as actively exploited in the wild. The [CISA Known Exploited Vulnerabilities Catalog](https://www.cisa.gov/known-exploited-vulnerabilities-catalog), maintained by the Cybersecurity and Infrastructure Security Agency, is the authoritative list of these vulnerabilities. Unlike CVE databases that catalog all publicly known flaws, or CVSS scores that estimate theoretical severity, the KEV catalog answers a more urgent question: _is this vulnerability being exploited right now?_ +A KEV — Known Exploited Vulnerability — is a vulnerability that attackers are exploiting _right now_. When Log4Shell ([CVE-2021-44228](https://nvd.nist.gov/vuln/detail/CVE-2021-44228)), the remote code execution flaw in Apache Log4j, surfaced in December 2021, it took CISA just days to add it to the [Known Exploited Vulnerabilities Catalog](https://www.cisa.gov/known-exploited-vulnerabilities-catalog). That single listing told every organization in the world: this is not theoretical — patch now. The KEV catalog, maintained by the Cybersecurity and Infrastructure Security Agency, is the authoritative registry of CVEs with confirmed active exploitation.
While CVE databases catalog all publicly known flaws and CVSS scores estimate theoretical severity, the KEV catalog answers a more urgent question: _is this vulnerability being exploited right now?_ ![Vulnerability prioritization flow from CVE through CVSS and KEV to SBOM-based remediation](/assets/images/d2/vulnerability-prioritization.svg) @@ -33,7 +37,7 @@ A KEV (Known Exploited Vulnerability) is a vulnerability that has been confirmed The CISA KEV catalog is a curated list of [CVE](/2025/12/18/cve-vulnerability-explained/) vulnerabilities that have reliable evidence of active exploitation. CISA launched the catalog in November 2021 alongside [Binding Operational Directive (BOD) 22-01](https://www.cisa.gov/news-events/directives/bod-22-01-reducing-significant-risk-known-exploited-vulnerabilities), which requires U.S. federal civilian executive branch (FCEB) agencies to remediate KEV-listed vulnerabilities within specified timeframes. -As of early 2026, the KEV catalog contains nearly 1,500 entries. New vulnerabilities are added as CISA confirms evidence of exploitation, and each entry includes: +The catalog started with roughly 300 entries at launch, grew past 1,000 by late 2023, and contained approximately 1,500 entries as of early 2026. CISA adds new vulnerabilities multiple times per week as exploitation evidence is confirmed, and each entry includes: - **CVE ID** identifying the vulnerability - **Vendor and product** affected @@ -57,11 +61,23 @@ These three systems are complementary, not competing. Each answers a different q A CVE tells you a vulnerability exists. CVSS tells you how bad it _could_ be. KEV tells you it _is_ being exploited. Effective vulnerability prioritization uses all three signals together. -Consider two vulnerabilities, both with CVSS scores of 9.8 (Critical). One is listed in the KEV catalog; the other is not. 
The KEV-listed vulnerability should be patched first because there is confirmed evidence that attackers are actively exploiting it, whereas the other, while theoretically severe, may not have working exploits in circulation. +### A Worked Example: Log4Shell + +Consider [CVE-2021-44228](https://nvd.nist.gov/vuln/detail/CVE-2021-44228), the Log4Shell vulnerability in Apache Log4j: + +- **CVE ID:** CVE-2021-44228 — assigned under the CVE program, giving the flaw a unique, trackable identifier +- **CVSS score:** 10.0 (Critical) — the maximum possible score, reflecting remote code execution with no authentication required +- **KEV status:** Added to the KEV catalog on December 10, 2021, just days after public disclosure + +All three systems describe the same vulnerability, but each contributes different information. The CVE provides a common name. The [CVSS score](/2026/02/05/what-is-cvss-vulnerability-scoring/) quantifies theoretical severity. The KEV listing confirms that attackers were already exploiting it in the wild, making it an immediate priority rather than a theoretical risk. + +Now consider two vulnerabilities, both with CVSS scores of 9.8 (Critical). One is listed in the KEV catalog; the other is not. The KEV-listed vulnerability should be patched first because there is confirmed evidence of active exploitation, whereas the other — while theoretically severe — may not have working exploits in circulation. + +Beyond these three systems, [EPSS](https://www.first.org/epss/) (Exploit Prediction Scoring System) offers a forward-looking probability estimate of exploitation. Used alongside KEV, EPSS can help identify vulnerabilities that are _likely_ to be exploited soon, even before CISA confirms active exploitation. For more on how CVSS and EPSS complement each other, see our [CVSS scoring guide](/2026/02/05/what-is-cvss-vulnerability-scoring/).
## Binding Operational Directive 22-01 -[BOD 22-01](https://www.cisa.gov/news-events/directives/bod-22-01-reducing-significant-risk-known-exploited-vulnerabilities) ("Reducing the Significant Risk of Known Exploited Vulnerabilities") is the CISA directive that established the KEV catalog's operational role. It requires FCEB agencies to: +[BOD 22-01](https://www.cisa.gov/news-events/directives/bod-22-01-reducing-significant-risk-known-exploited-vulnerabilities) ("Reducing the Significant Risk of Known Exploited Vulnerabilities") is the CISA directive that established the KEV catalog's operational role. It covers over 100 federal civilian executive branch agencies and requires them to: 1. **Review** the KEV catalog on an ongoing basis 2. **Remediate** each KEV vulnerability by the due date specified in the catalog @@ -69,34 +85,36 @@ Consider two vulnerabilities, both with CVSS scores of 9.8 (Critical). One is li While BOD 22-01 only legally binds federal civilian agencies, CISA strongly recommends that all organizations — including state and local governments, critical infrastructure operators, and private sector companies — use the KEV catalog as a prioritization input for their vulnerability management programs. -The remediation timelines in BOD 22-01 are aggressive. Newly added KEVs typically have due dates of two to three weeks from the date of addition. This reflects the urgency: if a vulnerability is being actively exploited, delayed patching means continued exposure. +The remediation timelines in BOD 22-01 are aggressive. Vulnerabilities added to the KEV catalog since early 2022 typically carry a remediation deadline of 14 days from the date of addition, though some early entries in the catalog had longer windows of up to six months. This reflects the urgency: if a vulnerability is being actively exploited, delayed patching means continued exposure. 
## How the KEV Catalog Is Maintained CISA adds vulnerabilities to the KEV catalog based on three criteria, all of which must be met: 1. **The vulnerability has an assigned CVE ID.** Only cataloged vulnerabilities with standard identifiers qualify. -2. **There is reliable evidence of active exploitation.** This evidence may come from CISA's own threat intelligence, reports from federal agencies, industry partners, or trusted cybersecurity organizations. +2. **There is reliable evidence of active exploitation.** CISA draws this evidence from multiple sources: federal agency incident reports, industry partners, commercial and open-source threat intelligence feeds, and cybersecurity researchers. 3. **A clear remediation action exists.** Typically this means a vendor patch or mitigation is available. CISA does not add vulnerabilities for which there is no known fix, as doing so would disclose exploited flaws without offering a path to resolution. +A high CVSS score alone is _not_ enough for KEV inclusion. A vulnerability can be rated Critical (9.0+) and still not appear in the KEV catalog if CISA lacks evidence of active exploitation. Conversely, a Medium-severity vulnerability _can_ be added to the KEV catalog if attackers are exploiting it in the wild. The criterion is exploitation evidence, not theoretical severity. + Vulnerabilities are rarely removed from the KEV catalog once added — even after the remediation deadline passes. While CISA has removed entries in rare cases where evidence of exploitation was later found insufficient, the catalog serves primarily as a persistent historical record of exploitation activity. ## Using the KEV Catalog for Patch Prioritization -Most organizations face far more vulnerabilities than they can patch simultaneously. The KEV catalog provides a practical prioritization signal that cuts through the noise. +Most organizations face far more vulnerabilities than they can patch simultaneously. 
The KEV catalog provides a practical prioritization signal that cuts through the noise. With approximately 245 KEVs added in 2025 alone — roughly 20% growth in a single year — the pace of confirmed exploitation is increasing, making principled prioritization more important than ever. ### A Prioritization Framework A common approach combines CVSS severity with KEV status and deployment context: 1. **Critical + KEV-listed + Internet-facing** — Patch immediately (within 24-48 hours) -2. **Critical + KEV-listed + Internal** — Patch within the BOD 22-01 deadline (typically 2-3 weeks) +2. **Critical + KEV-listed + Internal** — Patch within the BOD 22-01 deadline (typically 14 days) 3. **Critical + Not KEV-listed** — Patch within standard SLA (typically 30 days) -4. **High + KEV-listed** — Treat as critical; patch within 2-3 weeks +4. **High + KEV-listed** — Treat as critical; patch within 14 days 5. **High + Not KEV-listed** — Patch within standard SLA 6. **Medium/Low + Not KEV-listed** — Schedule for regular maintenance windows -This framework is a starting point. Organizations should adjust based on their risk tolerance, asset criticality, and compensating controls. +This framework is a starting point. Organizations should adjust based on their risk tolerance, asset criticality, and compensating controls. Adding [EPSS](https://www.first.org/epss/) as a forward-looking signal can further refine prioritization: a vulnerability with a high EPSS score that is not yet in the KEV catalog may still warrant accelerated patching. ### The Ransomware Flag @@ -106,6 +124,8 @@ Since October 2023, CISA has included a "Known Ransomware Campaign Use" flag in The real power of the KEV catalog emerges when combined with [SBOMs](/what-is-sbom/). An SBOM provides a machine-readable inventory of every component in your software. The KEV catalog provides a machine-readable list of actively exploited vulnerabilities. Connecting the two creates automated, continuous monitoring. 
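The tiered framework in the prioritization section above can be sketched as a small decision function. This is illustrative only: the SLA values for the KEV tiers come straight from the list, while the 90-day maintenance-window figure is our own placeholder assumption.

```python
def patch_sla_days(severity: str, kev_listed: bool, internet_facing: bool) -> int:
    """Suggested patch window in days, following the tiered framework above.

    `severity` is the CVSS severity label ("critical", "high", "medium", "low").
    """
    severity = severity.lower()
    if kev_listed and severity == "critical" and internet_facing:
        return 2    # patch immediately (24-48 hours)
    if kev_listed and severity in ("critical", "high"):
        return 14   # BOD 22-01 deadline; High + KEV is treated as critical
    if severity in ("critical", "high"):
        return 30   # standard SLA
    return 90       # regular maintenance window (placeholder value)
```

As noted above, these numbers are a starting point; asset criticality, compensating controls, and forward-looking signals like EPSS should shift them.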
+The logic is straightforward: you cannot check your components against the KEV catalog if you do not know what components you have. Without an SBOM, responding to a new KEV entry means manually investigating every application to determine whether it uses the affected library or product. With SBOMs, you can answer that question in seconds across your entire portfolio. + ### How It Works 1. **Generate SBOMs** for all your applications using tools from our [SBOM generation guides](/guides/) @@ -114,14 +134,14 @@ The real power of the KEV catalog emerges when combined with [SBOMs](/what-is-sb 4. **Cross-reference with KEV** — check which of those CVEs appear in the KEV catalog to identify actively exploited vulnerabilities 5. **Prioritize remediation** using the KEV due date and your deployment context -Because CISA adds new KEVs multiple times per week, building this cross-referencing into your workflow is important. The KEV catalog's machine-readable formats make this feasible to automate. +Because CISA adds new KEVs multiple times per week, building this cross-referencing into your workflow is important. The KEV catalog's machine-readable formats make this feasible to automate. For a deeper look at building this pipeline, see our guide on [SBOM scanning for vulnerability detection](/2026/02/01/sbom-scanning-vulnerability-detection/). 
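The cross-referencing step above is, at its core, a set intersection. Assuming you already have the set of CVE IDs your SBOM scan produced, a minimal sketch looks like this — the function names are illustrative, but the `vulnerabilities` and `cveID` field names match the KEV JSON feed:

```python
import json
import urllib.request

# CISA's machine-readable KEV feed (also published as CSV)
KEV_FEED_URL = (
    "https://www.cisa.gov/sites/default/files/feeds/"
    "known_exploited_vulnerabilities.json"
)

def fetch_kev_catalog(url: str = KEV_FEED_URL) -> dict:
    """Download the current KEV catalog as a dict."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def kev_matches(kev_catalog: dict, sbom_cves: set) -> list:
    """Return KEV entries whose `cveID` appears among the CVEs from your SBOM scan."""
    return [
        entry
        for entry in kev_catalog.get("vulnerabilities", [])
        if entry.get("cveID") in sbom_cves
    ]
```

In practice you would feed `kev_matches` the CVE list emitted by a scanner such as Grype or Dependency-Track, then sort the hits by their `dueDate` and `knownRansomwareCampaignUse` fields to drive remediation order.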
### Integration Points Several tools and data sources support KEV-SBOM workflows: -- **[CISA KEV JSON feed](https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json)** — Machine-readable, updated as new KEVs are added - **[sbomify](https://sbomify.com)** — SBOM management platform with vulnerability analysis via Google OSV integration +- **[CISA KEV JSON feed](https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json)** — Machine-readable, updated as new KEVs are added - **[OWASP Dependency-Track](https://dependencytrack.org/)** — Ingests SBOMs and performs vulnerability analysis using multiple data sources - **[Grype](https://github.com/anchore/grype)** — Command-line vulnerability scanner that can match against vulnerability data - **[OSV](https://osv.dev/)** — Google's open source vulnerability database @@ -137,6 +157,18 @@ The KEV catalog intersects with several compliance frameworks: - **[NIST SP 800-53](/compliance/nist-800-53/) SI-5** requires receiving and acting on security alerts and advisories — the KEV catalog is a primary source for this control. - **[EU CRA](/compliance/eu-cra/)** requires vulnerability handling processes, and KEV status is a valuable input for prioritizing which vulnerabilities to address first. +## Notable KEVs and Real-World Impact + +A few high-profile cases illustrate why the catalog matters, how quickly confirmed exploitation can escalate, and how the inclusion criteria play out in practice. + +**Log4Shell (CVE-2021-44228)** — Added to the KEV catalog in December 2021, Log4Shell was one of the most consequential vulnerabilities in the catalog's early history. A remote code execution flaw in the ubiquitous Apache Log4j logging library, it affected hundreds of thousands of applications worldwide. Mass exploitation began within hours of public disclosure, and the vulnerability remains a common attack vector years later.
+ +**MOVEit Transfer (CVE-2023-34362)** — Added in June 2023, this SQL injection vulnerability in Progress Software's MOVEit Transfer file-sharing platform was exploited at scale by the Cl0p ransomware group. The campaign compromised over 2,500 organizations and exposed data belonging to tens of millions of individuals, making it one of the largest mass-exploitation events linked to a single KEV entry. + +**XZ Utils (CVE-2024-3094)** — A cautionary contrast. This was a deliberate supply chain backdoor planted in the XZ compression library by a contributor who spent years building trust in the open-source project. It was discovered in March 2024, just before shipping in major Linux distributions, narrowly averting a widespread compromise. Because the backdoor was caught before any confirmed in-the-wild exploitation, it did not meet the KEV catalog's inclusion criteria: a reminder that the catalog tracks exploitation evidence, not severity or notoriety. + +These cases share a common thread: each involved widely used software, each demanded an immediate answer to the question "are we affected?", and each would have been easier to triage for organizations that maintained current SBOMs of their software components. + ## Frequently Asked Questions ### What is a KEV? @@ -158,3 +190,11 @@ BOD 22-01 legally requires U.S. federal civilian executive branch (FCEB) agencie ### How can I monitor the KEV catalog automatically? Generate SBOMs for your applications, ingest them into a vulnerability management platform like OWASP Dependency-Track, and use vulnerability analysis to identify known CVEs in your components. Then cross-reference those CVEs against the KEV catalog. CISA publishes the KEV catalog as a machine-readable [JSON feed](https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json) that can be consumed by automated tools. + +### How often is the KEV catalog updated? + +CISA updates the KEV catalog multiple times per week.
New vulnerabilities are added as soon as CISA confirms reliable evidence of active exploitation and verifies that a remediation action exists. There is no fixed schedule — additions are driven by threat intelligence, so the catalog may see several updates in a single week or occasional pauses. + +### How many vulnerabilities are in the KEV catalog? + +As of early 2026, the KEV catalog contains approximately 1,500 entries. The catalog launched with about 300 entries in November 2021, passed 1,000 in late 2023, and has been growing at roughly 20% per year. Despite this growth, the KEV catalog remains a small fraction of the 330,000+ CVEs in the broader vulnerability ecosystem — which is precisely what makes it useful for prioritization.