Google and many other organizations, such as NIST, IETF, and NSA, believe that migrating to post-quantum cryptography is important due to the large risk posed by a cryptographically-relevant quantum computer (CRQC). In August, we posted about how Chrome Security is working to protect users from the risk of future quantum computers by leveraging a new form of hybrid post-quantum cryptographic key exchange, Kyber (ML-KEM)[1]. We’re happy to announce that we have enabled the latest Kyber draft specification by default for TLS 1.3 and QUIC on all desktop Chrome platforms as of Chrome 124.[2] This rollout revealed a number of previously existing bugs in several TLS middlebox products. To assist with the deployment of fixes, Chrome is offering a temporary enterprise policy to opt out.

Launching opportunistic quantum-resistant key exchange is part of Google’s broader strategy to prioritize deploying post-quantum cryptography in systems today that are at risk if an adversary has access to a quantum computer in the future. We believe that it’s important to inform standards with real-world experience, by implementing drafts and iterating based on feedback from implementers and early adopters. This iterative approach was a key part of developing QUIC and TLS 1.3. It’s part of why we’re launching this draft version of Kyber, and it informs our future plans for post-quantum cryptography.

Chrome’s post-quantum strategy prioritizes quantum-resistant key exchange in HTTPS, and increased agility in certificates from the Web PKI. While PKI agility may appear somewhat unrelated, its absence has contributed to significant delays in past cryptographic transitions and will continue to do so until we find a viable solution in this space. A more agile Web PKI is required to enable a secure and reliable transition to post-quantum cryptography on the web.

To understand this, let’s take a look at HTTPS and the current state of post-quantum cryptography. In the context of HTTPS, cryptography is primarily used in three different ways:

  • Symmetric Encryption/Decryption. HTTP is transmitted as data inside a TLS connection using an authenticated cipher (AEAD) such as AES-GCM. These algorithms are broadly considered safe against quantum cryptanalysis and can remain in place.
  • Key Exchange. Symmetric cryptography requires a secret key. Key exchange is a form of asymmetric cryptography in which two parties can mutually generate a shared secret key over a public channel. This secret key can then be used for symmetric encryption and decryption. All current forms of asymmetric key exchange standardized for use in TLS are vulnerable to quantum cryptanalysis.
  • Authentication. In HTTPS, authentication is achieved primarily through the use of digital signatures, which are used to establish server identity, authenticate the handshake, and provide transparency for certificate issuance. All of the digital signature and public key algorithms standardized for authentication in TLS are vulnerable to quantum cryptanalysis.

This results in two separate quantum threats to HTTPS.

The first is the threat to traffic being generated today. An adversary could store encrypted traffic now, wait for a CRQC to be practical, and then use it to decrypt the traffic after the fact. This is commonly known as a store-now-decrypt-later attack. This threat is relatively urgent, since it doesn’t matter when a CRQC is practical—the threat comes from storing encrypted data now. Defending against this attack requires the key exchange to be quantum resistant. Launching Kyber in Chrome enables servers to mitigate store-now-decrypt-later attacks.
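
Chrome handles this automatically on the client side, but both peers need to support the hybrid group for it to be negotiated. As a rough illustration (not Chrome’s implementation), here is a minimal sketch of a TLS 1.3 client that offers hybrid post-quantum key exchange, written in Go and assuming Go 1.24 or later, where crypto/tls exposes the hybrid group as tls.X25519MLKEM768 (earlier drafts used the name X25519Kyber768):

```go
// Minimal sketch: a TLS client that prefers hybrid post-quantum key exchange.
// Assumes Go 1.24+, where crypto/tls exposes the hybrid group as X25519MLKEM768.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	cfg := &tls.Config{
		MinVersion: tls.VersionTLS13,
		// Offer the hybrid group first, with X25519 as a classical fallback
		// for servers that have not deployed post-quantum key exchange yet.
		CurvePreferences: []tls.CurveID{tls.X25519MLKEM768, tls.X25519},
	}

	conn, err := tls.Dial("tcp", "www.google.com:443", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	fmt.Println("negotiated:", tls.VersionName(conn.ConnectionState().Version))
}
```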

The second threat is that future traffic is vulnerable to impersonation by a quantum computer. Once a CRQC actually exists, it could be used to break the asymmetric cryptography used for authentication in HTTPS. To defend against impersonation from a CRQC, we need to migrate all of the asymmetric cryptography used for authentication to post-quantum variants. However, breaking authentication only affects traffic generated after the availability of CRQCs. This is because breaking authentication on a recorded transcript doesn’t help the attacker impersonate either party—the conversation has already finished.

In other words, there’s no store-now-decrypt-later equivalent for authentication, and so while migrating key exchange and authentication to post-quantum variants are both important, migrating authentication is less urgent than key exchange. This is good, because there are a variety of challenges for migrating to post-quantum authentication. Specifically, size.

Post-quantum cryptography is big compared to the pre-quantum cryptographic algorithms used in HTTPS. A Kyber key exchange is ~1KB transmitted per peer, whereas an X25519 key exchange is only 32 bytes per peer, an over 30x increase. The actual key exchange operation in Kyber is quite fast; transmitting Kyber keys is quite slow. The extra size from Kyber means the TLS ClientHello no longer fits in a single network packet and must be split into two, resulting in a 4% median latency increase to all TLS handshakes in Chrome on desktop. On desktop platforms, combined with HTTP/2 and HTTP/3 connection reuse, this increase is not large enough to be noticeable in Core Web Vitals. Unfortunately, it is noticeable on Android, where Internet connections are often lower bandwidth and higher latency, and so we have not yet launched on Android.

The size issues are even worse for authentication. ML-DSA (Dilithium) keys and signatures are ~40X the size of ECDSA keys and signatures. A typical TLS connection today uses two public keys and five signatures to fulfill all of the authentication requirements. A naive swap to ML-DSA would add ~14KB to the TLS handshake. Cloudflare anticipates it would increase latency by 20-40%, and we’ve seen that a single kilobyte was already impactful. Instead, we need alternate approaches to authentication in HTTPS that provide the desired properties and transmit fewer signatures and public keys.
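
To make the ~14KB figure concrete, here is a hedged back-of-the-envelope calculation. The sizes are approximate published parameter sizes for ML-DSA-44 (Dilithium2) and ECDSA P-256, and the key and signature counts mirror the typical connection described above; treat them as illustrative assumptions rather than exact protocol accounting:

```go
// Back-of-the-envelope sketch of the "naive swap to ML-DSA adds ~14KB" figure.
// Sizes are approximate parameter sizes, used here only for illustration.
package main

import "fmt"

func main() {
	const (
		mlDSAPublicKey = 1312 // bytes, ML-DSA-44 public key (assumed)
		mlDSASignature = 2420 // bytes, ML-DSA-44 signature (assumed)
		ecdsaPublicKey = 65   // bytes, uncompressed P-256 point
		ecdsaSignature = 72   // bytes, typical DER-encoded P-256 signature
		publicKeys     = 2    // e.g. leaf and intermediate keys sent on the wire
		signatures     = 5    // e.g. handshake signature, 2 certificate signatures, 2 SCTs
	)

	mlDSATotal := publicKeys*mlDSAPublicKey + signatures*mlDSASignature
	ecdsaTotal := publicKeys*ecdsaPublicKey + signatures*ecdsaSignature

	fmt.Printf("ECDSA total:  %d bytes\n", ecdsaTotal)
	fmt.Printf("ML-DSA total: %d bytes\n", mlDSATotal)
	fmt.Printf("difference:   %d bytes (~%.1f KB)\n",
		mlDSATotal-ecdsaTotal, float64(mlDSATotal-ecdsaTotal)/1024)
}
```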

We think the important next step for quantum-resistant authentication in HTTPS is to focus on enabling trust anchor agility. Historically, the public Web PKI could not deploy new algorithms quickly. This is because site operators typically provision a single certificate for all supported clients and browsers. This certificate must be issued from a trust hierarchy that every browser or client the site operator supports trusts, and it must be compatible with each of those clients.

The single certificate model makes it difficult for the Web PKI to evolve. As security requirements change, site operators may find that there is no longer an intersection between certificates trusted by deployed clients, certificates trusted by new clients, algorithms supported by deployed clients, and algorithms supported by new clients (all crossed with every separate browser and root store). These clients range from different browsers, and older versions of those browsers no longer receiving updates, all the way to applications on smart TVs or payment terminals. As requirements diverge, site operators have to choose between security for new clients and compatibility with older clients.

This conflict, in turn, limits new clients from making PKI changes that improve user security, such as transitioning to post-quantum. Under a single-certificate deployment model, the newest clients cannot diverge too far from the oldest clients, or server operators will be left with no way to maintain compatibility. We propose to solve this by moving to a multi-certificate deployment model, where servers may be provisioned with multiple certificates and automatically send the correct one to each client. This enables trust anchor agility and allows clients to evolve at different rates. Clients that are up to date and reliably receiving updates could access the authentication mechanisms best suited for the Internet as it evolves, without being hamstrung by old clients no longer receiving updates. Certification authorities and trust stores could introduce new post-quantum trust anchors without needing to wait for the slowest actor to add support. This would drastically simplify the post-quantum transition, since it also enables the seamless addition and removal of hierarchies using experimental post-quantum authentication methods.

At first glance, TLS may appear to have trust anchor agility by way of cross-signatures and signature algorithm negotiation. However, neither of these mechanisms provide true trust anchor agility, nor were they intended to.

A cross-signature is when a CA creates two different certificates for a single subject and public key pair, but with different issuers and signatures. The first certificate is issued and signed as usual, by the CA itself. The second is issued and signed by a different trust hierarchy, often by a different organization. For example, the original Let’s Encrypt intermediate certificate existed in two forms.[3] The “regular” intermediate was signed by the Let’s Encrypt root, whereas the “cross-signed” intermediate was signed by IdenTrust. This approach of cross-signing a new PKI hierarchy with an older, more broadly available PKI hierarchy allows a new CA to bootstrap its trust on old devices, so long as the older devices support the signing algorithm. Cross-signatures, however, rely on significant cooperation among often competing CAs, and may not be suitable when different clients have different needs. This limits when site operators can use cross-signs. Additionally, devices that do not support a new algorithm will still need to be updated to use the new signing algorithm in the newer certificate, regardless of whether or not it is cross-signed.

Signature algorithm negotiation allows TLS peers to agree on the algorithm to be used for the handshake signature. This algorithm needs to correspond with the key type used in the certificate. Endpoints can infer that if the peer supports an algorithm such as ECDSA for the handshake signature, it must also support ECDSA certificates. This signal can be used to multiplex between an RSA-based chain and a smaller ECDSA-based chain. For example, Google’s RSA-based large compatibility chain is four certificates and ~4.1KB, whereas the shortest ECDSA-based chain is three certificates and only ~1.7KB.[4]
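
As an illustration of this kind of multiplexing, the sketch below configures a server with both an ECDSA chain and an RSA chain. It relies on the fact that Go’s crypto/tls (since Go 1.14) automatically selects the first configured certificate that is compatible with what the client advertises; the certificate and key file names are placeholders:

```go
// Minimal sketch: serve the smaller ECDSA chain to clients that support it,
// and fall back to the RSA chain for older clients. Go picks the first
// configured certificate that is compatible with the ClientHello.
package main

import (
	"crypto/tls"
	"log"
	"net/http"
)

func main() {
	ecdsaCert, err := tls.LoadX509KeyPair("ecdsa_chain.pem", "ecdsa_key.pem") // placeholder files
	if err != nil {
		log.Fatal(err)
	}
	rsaCert, err := tls.LoadX509KeyPair("rsa_chain.pem", "rsa_key.pem") // placeholder files
	if err != nil {
		log.Fatal(err)
	}

	server := &http.Server{
		Addr: ":443",
		TLSConfig: &tls.Config{
			// Listed in order of preference: modern clients get the ECDSA
			// chain, older clients fall back to the RSA chain.
			Certificates: []tls.Certificate{ecdsaCert, rsaCert},
		},
	}
	log.Fatal(server.ListenAndServeTLS("", "")) // certificates come from TLSConfig
}
```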

Signature algorithm negotiation does not provide trust anchor agility. While the signature algorithm information implies algorithm support, it provides no information about which trust anchors a client actually trusts. A client can support ECDSA, but not have the latest ECDSA root certificate from a specific CA. Due to the wide variety of trust stores in use, many organizations still need to be conservative about when they serve ECDSA certificates, and may need to provide a longer, cross-signed chain for maximum compatibility.

Neither cross-signatures nor signature algorithm negotiation are solutions to migrating to post-quantum cryptography for authentication. Cross-signatures do not help with new algorithms, and signature algorithm negotiation is solely about negotiating algorithms, not providing information about trust anchors. We expect a gradual transition to post-quantum cryptography. Inferring information about the contents of the trust store from the result of signature algorithm negotiation risks ossifying to a specific version of a specific trust store, rather than purely being used for algorithm negotiation.

Instead, to introduce agility to TLS we need an explicit mechanism for trust anchor negotiation, to allow the client and server to efficiently determine which certificate to use. At the November 2023 IETF meeting in Prague, Chrome proposed “Trust Expressions” as a mechanism for trust anchor negotiation in TLS. Chrome is currently seeking community input on Trust Expressions via the IETF process. We think the goal of being able to cleanly deploy multiple certificates to handle a range of clients is much more important than the specific mechanisms of the proposal.

From there, we can explore more efficient ways to authenticate servers, such as Merkle Tree Certificates. We view introducing some mechanism for trust anchor agility as a necessity for efficient post-quantum authentication. Experimentation will be extremely important as proposals are developed. Agility also enables using different solutions in different contexts, rather than sending extra data for the lowest common denominator; solutions like Merkle Tree Certificates and intermediate elision require up-to-date clients.

Given these constraints, priorities, and risks, we think agility is more important than defining exactly what a post-quantum PKI will look like at this time. We recommend against immediately standardizing ML-DSA in X.509 for use in the public Web PKI via the CA/Browser Forum. We expect that ML-DSA, once NIST completes standardization, will play a part in a post-quantum Web PKI, but we’re focusing on agility first. This does not preclude introducing ML-DSA in X.509 as an option for private PKIs, which may be operating on more strict post-quantum timelines and have fewer constraints around certificate size, handshake latency, issuance transparency, and unmanaged endpoints.

Ultimately, we think that any approach to post-quantum authentication has the same first requirement—a migration mechanism for clients to opt-in to post-quantum secure authentication mechanisms when servers support it. Post-quantum authentication presents significant challenges to the Web ecosystem, but we believe trust anchor agility will enable us to overcome them and lead to a more secure, robust, and performant post-quantum web.

Notes


  1. The draft is X25519Kyber768, which is a combination of the pre-quantum algorithm X25519 and the post-quantum algorithm Kyber768. Kyber is being renamed to ML-KEM; however, for the purposes of this post, we will use “Kyber” to refer to the hybrid algorithm defined for TLS. 

  2. As the standards from NIST and IETF are not yet complete, this will be later removed and replaced with the final versions. At this stage of standardization, we expect only early adopters to use the primitives. 

  3. It actually exists in considerably more than two forms, but from an organizational perspective, there are versions that are signed by other Let’s Encrypt certificates, and a version that is signed by IdenTrust, which is a completely separate certification authority from Let’s Encrypt. 

  4. The chain length includes the root certificate and leaf certificate. The byte numbers are what is transmitted over the wire, and so they include the leaf certificate but not the root certificate. 

Posted by David Adrian, Bob Beck, David Benjamin and Devon O'Brien

TL;DR: Automated certificate issuance and management strengthens the underlying security assurances provided by Transport Layer Security (TLS) by increasing agility and resilience. This post describes the benefits of automation and upcoming changes to the Chrome Root Program policy that represent Chrome Security’s ongoing commitment to improving web security.   



Introduction


One of the most common tools for enhancing user security on the Internet is “Transport Layer Security” (TLS), formerly known as “Secure Sockets Layer” (SSL). At its most basic level, TLS is a security protocol that encrypts data such that only the intended recipient can read it. 


Encryption makes the Internet more secure, but only if consistently and reliably deployed. The adoption of modern practices, like automated TLS certificate issuance and management, helps achieve this goal. 



Background: TLS - The Foundation for Encrypted Communications on the Internet


You’re probably more familiar with TLS than you think, as it’s the underlying technology that puts the ‘S’ (referencing “Secure”) in HTTPS. We recently wrote about HTTPS and how it’s become the norm, with over 92% of page loads in Chrome on Android, Chrome OS, macOS, and Windows being transmitted using HTTPS.


TLS is the cryptographic protocol that establishes a secure channel between a web browser and the web server hosting the website a user is browsing. It provides a few core security properties:

  • Encryption: Ensures data being transmitted can’t be intercepted and understood by third parties or unintended recipients.

  • Authentication: Ensures the web server or application a web browser is connecting to is who it claims to be.

  • Integrity: Ensures data has not been altered while in transit.


To establish a TLS connection, a web browser and server introduce themselves and agree on the rules used to secure subsequent communication. This introduction is referred to as the “TLS Handshake.” If you’d like to look closer and improve your understanding of how TLS connections are established, check out this resource. 


X.509 certificates, sometimes referred to as “certificates,” “TLS certificates,” or “server authentication certificates,” are an essential part of the TLS Handshake. Certificates are issued by trusted entities called “Certification Authorities” (CAs), which are responsible for verifying and subsequently binding a domain name (e.g., google.com) to a corresponding public key. The certificate allows the web browser to verify it’s communicating with an authorized web server (i.e., server identity verification).
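
For a concrete view of that binding, here is a minimal sketch using Go’s standard library. It performs a TLS handshake, then prints the issuer and DNS names from the server’s certificate and checks that the certificate covers the expected hostname:

```go
// Minimal sketch: inspect the certificate a server presents during the TLS
// handshake and confirm it binds the expected domain name to a public key.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	conn, err := tls.Dial("tcp", "google.com:443", &tls.Config{})
	if err != nil {
		log.Fatal(err) // the handshake itself already verifies the chain and hostname
	}
	defer conn.Close()

	leaf := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("issued by: ", leaf.Issuer)
	fmt.Println("DNS names: ", leaf.DNSNames)
	fmt.Println("name check:", leaf.VerifyHostname("google.com")) // nil means the name is bound
}
```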


It’s important to note that TLS isn’t a perfect solution, nor does its use guarantee a website is completely safe. Remember, using TLS ensures web traffic is encrypted while in transit to or from the corresponding web server; it does not guarantee the safety or security of that content. TLS does not prevent phishing or malicious content like malware or viruses from being served to a website’s users. Removing opportunities for confusion related to the terms “encrypted” (a security property provided by TLS) and “safe” (a subjective feeling) is one of the reasons why, beginning in Chrome 117, Chrome replaced the lock icon in the address bar with a new security-neutral “tune” icon.



The Power of Automation


As outlined above, server authentication certificates underpin the encrypted connections between web browsers and web servers. Publicly trusted certificates – those trusted in products like Chrome by default – must adhere to both industry-wide and web browser-specific policies, like the CA/Browser Forum “Baseline Requirements” and the Chrome Root Program policy. One such requirement is that a certificate’s maximum validity is no more than 398 days. 


Certificate validity is defined in RFC 5280 and determines how long a certificate may be considered valid for use in establishing TLS connections. While today the maximum certificate validity is set to 398 days, this hasn’t always been the case. In just over ten years, the ecosystem has trended from unlimited certificate lifetime to 60 months (2012), to 39 months (2015), to 825 days (2018), to 398 days (2020). With each reduction in maximum validity, the underlying goal was always the same: improving security. 


Shortening certificate lifetimes protects users by reducing the impact of compromised certificate keys and by speeding up the replacement of insecure technologies and practices across the web. Key compromises (i.e., when a web server certificate’s corresponding private key is accidentally or intentionally exposed) and the discovery of internet security weaknesses (e.g., the Heartbleed bug) are common events that can lead to real-world harm, and the web’s users should be better protected against them. 


The decreasing lifetime of certificates and the increasing number of certificates that organizations rely on have created a growing need for website operators to become more agile in managing certificates and corresponding infrastructure. Automation is one of the best methods of achieving increased agility, reliability, and security.



What is Certificate Automation?


While there isn’t a one-size-fits-all definition of certificate automation, there is one shared element: the requirement for “hands-on” input from humans during initial certificate issuance and ongoing renewal is minimized or eliminated. Certificate automation simplifies the often complex and error-prone tasks associated with managing certificates, enhancing security and operational efficiency.


In the Web Public Key Infrastructure (“Web PKI”), there are two major categories of certificate automation solutions: open solutions relying on standards such as the Automatic Certificate Management Environment (ACME) protocol, and solutions that often rely on proprietary tools or protocols.
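
As a small illustration of the open, standards-based category, the sketch below uses the golang.org/x/crypto/acme/autocert package, one ACME client implementation among many. It obtains a certificate from an ACME CA on first use and renews it automatically before expiry; the domain name and cache directory are placeholders:

```go
// Minimal sketch of ACME-based automation: autocert obtains a publicly trusted
// certificate on first use and renews it automatically before it expires.
package main

import (
	"log"
	"net/http"

	"golang.org/x/crypto/acme/autocert"
)

func main() {
	manager := &autocert.Manager{
		Prompt:     autocert.AcceptTOS,                    // agree to the CA's terms of service
		HostPolicy: autocert.HostWhitelist("example.com"), // placeholder: only this name is managed
		Cache:      autocert.DirCache("certs"),            // persist keys and certs across restarts
	}

	server := &http.Server{
		Addr:      ":443",
		TLSConfig: manager.TLSConfig(), // solves ACME challenges and serves managed certificates
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			w.Write([]byte("hello over automatically managed TLS\n"))
		}),
	}
	log.Fatal(server.ListenAndServeTLS("", ""))
}
```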



Benefits of Automation


Automated certificate issuance and management:


  • promotes agility.

    • Automation increases the speed at which the benefits of new security capabilities are realized.

  • increases resilience and reliability.

    • Automation eliminates human error and can help scale the certificate management process across complex environments.

    • Automation coupled with monitoring protects against website outages due to certificate expiration that could result in a loss of traffic, reputation, or revenue. 

    • Innovations like ACME Renewal Information (ARI) present opportunities to seamlessly protect website operators and organizations from outages related to unforeseen events. ARI allows CAs to communicate to web servers that they should attempt to renew a certificate during a defined window, for example, before a certificate is revoked due to an incident. A simplified sketch of this idea appears after this list.

  • increases efficiency.

    • Automation reduces the time and resources required to manage certificates manually. Though there is an initial investment to automate, over time, team members have increased availability to focus on more strategic, value-adding activities.
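
To illustrate the ARI idea mentioned in the list above, here is a simplified, illustrative sketch. The struct mirrors the suggested-renewal-window object that ARI defines, but the endpoint details, error handling, and real client plumbing are deliberately omitted; consult the ARI specification for the authoritative protocol:

```go
// Illustrative sketch of ACME Renewal Information (ARI): the CA publishes a
// suggested renewal window for a certificate, and the client renews once the
// window has opened. This is a simplified model, not a complete ARI client.
package main

import (
	"fmt"
	"time"
)

// RenewalInfo models the response an ACME client would fetch from the CA's
// renewal-information resource for a specific certificate.
type RenewalInfo struct {
	SuggestedWindow struct {
		Start time.Time `json:"start"`
		End   time.Time `json:"end"`
	} `json:"suggestedWindow"`
	ExplanationURL string `json:"explanationURL,omitempty"`
}

// shouldRenewNow applies a simple policy: renew as soon as the window opens.
// A real client would typically pick a random instant inside the window to
// spread load across the CA's infrastructure.
func shouldRenewNow(info RenewalInfo, now time.Time) bool {
	return !now.Before(info.SuggestedWindow.Start)
}

func main() {
	var info RenewalInfo
	info.SuggestedWindow.Start = time.Now().Add(-1 * time.Hour) // e.g. the CA asks for early renewal
	info.SuggestedWindow.End = time.Now().Add(24 * time.Hour)

	fmt.Println("renew now?", shouldRenewNow(info, time.Now()))
}
```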



Why does Automation Lead to Better Security?


Automation improves security posture and increases resilience in response to unexpected events including CA incidents, Internet security weaknesses, and cryptographic deprecations.


CA Incidents


The Baseline Requirements prescribe response expectations for some types of CA incidents, and many of these responses include marking affected certificates as no longer trusted (“revoked”). Four years ago, Let’s Encrypt self-reported a bug that affected over 3 million certificates. In response to the incident, nearly 2 million certificates were revoked, meaning website operators needed to intervene and trigger replacement to avoid a potential outage. While the scale of this incident was atypical, Web PKI incidents that necessitate certificate re-issuance are commonplace. 


There are two important conclusions from this incident. First, the ACME protocol pioneered by and relied on by Let’s Encrypt presented the opportunity for affected website operators to recover from the incident with limited manual effort. More than 1.7 million affected certificates were replaced in less than 48 hours. Second, the incident resulted in Let’s Encrypt’s commitment to developing and deploying a new protocol (ARI, described above) capable of improving response to future CA incidents such that certificate replacement can occur automatically without human intervention. Let’s Encrypt announced a production deployment of ARI in March 2023. Other CAs have the opportunity to deploy this open protocol (e.g., Google Trust Services announced their production deployment of ARI in May 2023) to improve incident response.



Internet Security Weaknesses 

In April 2014, a security vulnerability (“Heartbleed”) that broke the security properties provided by TLS was discovered in a popular cryptographic software library used to secure the majority of servers on the Internet. It was estimated that in response to the bug, over 500,000 active publicly accessible server authentication certificates needed to be revoked and replaced. Despite a demonstrated vulnerability, remediation efforts from website operators were slow. Only 14% of affected websites completed the necessary remediation steps within a month of disclosure. About 33% of affected devices remained vulnerable nearly three years after disclosure. 


The maximum certificate validity permitted by the Baseline Requirements at the time was five years. For some website operators, this meant they incorrectly assumed the need to revisit the state of their TLS configuration was years away, which partly explains the observed remediation inaction. Further, CAs who elected to revoke certificates faced significant costs related to hosting revocation information, estimated for one CA to be between $400,000 and $952,992.40 USD per month. The Baseline Requirements obligate CAs to host revocation information for each certificate they issue until the end of its validity period, meaning these costs may have needed to be sustained over several years, representing potentially catastrophic financial consequences to the organizations responsible for underpinning the web’s security. 


Minimally, modern automation technologies like ACME and ARI would have reduced the touch labor required of website operators to reissue affected certificates. Considering the concerns related to vulnerable private key reuse, popular ACME clients like Certbot and Lego automatically create new key material for each certificate request. Further, had certificate validity been shorter, the maximum window of opportunity for attackers would have been significantly reduced from the five-year window. As the degree of automation increases, so does the ease of transition to reduced certificate validity. Indeed, many sites are already using certificates with much shorter validity than today’s maximum of 398 days. For example, Facebook has implemented a highly automated certificate issuance and management workflow to protect its network edge and corresponding devices with certificates that are used for just a few days. Other CAs are defaulting to certificates valid for only 30 days. A final point of interest: peer-reviewed research demonstrates that, in response to the manual intervention necessitated by Heartbleed, system administrators who implemented automation were more prompt in performing certificate replacements than those who did not.

Cryptographic Deprecations

Cryptographic hash functions — mathematical algorithms that produce a fixed-length output from an arbitrarily sized input — are central to the security of certificates. In 2005, researchers demonstrated the first weaknesses in the widely used SHA-1 hash function. In response to growing security concerns, in 2014, Chrome announced a deprecation timeline, with the CA/Browser Forum ultimately prohibiting the issuance of certificates that used SHA-1 after January 1, 2016. 


Unfortunately, this deprecation took years. Browsers had to wait for almost all affected certificates to be renewed, many of them manually, to avoid mass breakage. Modern automation technologies like ACME and ARI would have reduced the touch labor needed to reissue affected certificates. When coupled with reduced certificate validity, the web would have been able to transition away from SHA-1 much faster. And these cryptographic weaknesses weren't theoretical: in February 2017, researchers demonstrated a devastating vulnerability in SHA-1. The web barely avoided a crisis because Chrome had finished removing support for affected certificates just weeks before.


Cryptographic deprecations aren't as infrequent as you might think, since there is a steady stream of legacy cryptography in TLS and PKI that Chrome is working to eradicate and modernize, ideally before it becomes vulnerable.



The Opportunity for and Cost of Failure


Expired certificates bring a website down, causing loss of productivity, reputational harm, and missed service level expectations.

When considering failed TLS connections observed in Chrome versions released within the last year (i.e., Chrome 106 and greater) on all platforms, over 22% of these resulted from certificates with an invalid validity date. 

A 2019 study found that 3.9% of all HTTPS sites have expired certificates. 



State of Automation in the Ecosystem 


Between December 2022 and January 2023, our team ran a survey with owners of CAs included in the Chrome Root Store. The intent of the survey was to better understand existing and planned adoption of automated certificate issuance and management solutions. 


When coupled with publicly available data from Certificate Transparency logs and tools like crt.sh, the survey data estimated that 58% of the certificates issued by the Web PKI today rely on the ACME protocol. There is clearly broad website operator support for issuing and managing certificates using ACME, and by extension, a strong demand for certificate automation in the Web PKI. The survey also highlighted that the set of CA owners that offer ACME support today and are included in the Chrome Root Store represent more than 95% of the Web PKI’s certificate population. 70% of those corresponding CA owners self-reported increasing demand for ACME services, which we interpret as a strong indicator of a healthy and growing ACME user population across the ecosystem. None of the CA owners supporting ACME today indicated that ACME demand was decreasing. 


To better understand other types of automated certificate issuance and management solutions offered by CA owners included in the Chrome Root Store, we ran a separate survey between April and June 2023. When again coupled with publicly available data from Certificate Transparency logs and tools like crt.sh, the survey data indicated that more than 80% of the certificates issued by the Web PKI today are issued using some form of automation (which includes ACME). Organizations included in the Chrome Root Store that self-reported no automation support represented approximately 0.08% of the Web PKI certificate population. 



Our Commitment to Automation 


The Chrome Root Program provides governance and security review to determine the set of CAs trusted by default in Chrome. We’ve blogged about the Chrome Root Program in the past [1 and 2], but if you missed it, we keep users safe online by:

  • Administering policy and governance activities to manage the set of CAs trusted by default in Chrome,

  • evaluating impact and corresponding security implications related to public security incident disclosures by participating CAs, and

  • leading positive change to make the ecosystem more resilient.


Specific to the last point, the June 2022 release (Version 1.1) of the Chrome Root Program policy introduced the Chrome Root Program’s “Moving Forward, Together” (MFT) initiative, which set out to share our vision of a future that includes modern, reliable, highly agile, purpose-driven PKIs with a focus on automation, simplicity, and security. 



Moving Forward, Together


While “Moving Forward, Together” is non-normative and therefore not policy, it represents future initiatives on which we hope to collaborate further with members of the Web PKI ecosystem. To explore and understand the broader ecosystem impacts of the related proposals described in MFT, we:

  • Study ecosystem data from publicly available tools like crt.sh and Censys,

  • interpret data resulting from Chrome tools, experiments, and usage data, 

  • evaluate peer-reviewed research, and, 

  • collect feedback through surveys like the ones related to automation solutions described earlier. 


Some of the MFT initiatives might be achieved through collaborations within the CA/Browser Forum. In other cases, it might be most appropriate for corresponding changes to land only in the Chrome Root Program policy, as not all CA owners who adhere to the CA/Browser Forum Baseline Requirements intend to serve Chrome’s focused PKI use case of server authentication - or wish to be trusted by default in Chrome. Regardless of how these proposals might eventually be implemented, we are committed to collaborating with community members to minimize adverse ecosystem impacts when appropriate and possible.



Upcoming Policy Changes


As announced last week at CA/Browser Forum Face-to-Face Meeting 60, we’ll soon be pre-releasing an updated version of the Chrome Root Program policy to collect feedback and requested clarifications from CA owners included in the Chrome Root Store.


One of the major focal points of Version 1.5 is a new requirement that applicants seeking inclusion in the Chrome Root Store must support automated certificate issuance and management. We’ve been communicating our intent to require automation over the past year, including in past Face-to-Face Meeting updates in February and June.


It’s important to note that these new requirements do not prohibit Chrome Root Store applicants from supporting “non-automated” methods of certificate issuance and renewal, nor require website operators to only rely on the automated solution(s) for certificate issuance and renewal. The intent behind this policy update is to make automated certificate issuance an option for a CA owner’s customers.


While we prefer ACME solutions over those that rely on proprietary protocols or tools, both forms of automation satisfy the intent of the new policy requirement. Specifically, we prefer ACME because of its widespread ecosystem support and adoption. Further, ACME is open and benefits from continued innovation and enhancements from a robust set of ecosystem participants. There is an extensive set of well-documented ACME client options spanning multiple languages and supported platforms. Last but not least, ACME was designed specifically to meet the TLS certificate issuance needs of the Web PKI.



Future Opportunities Related to Automation


Promoting broader ubiquity of automated certificate issuance and management will establish an important foundation for the next generation of the Web PKI. Increased use of automation will also unlock future opportunities for more modern and agile infrastructures where strengthened security properties can be realized, for example, where maximum certificate validity can be reduced with minimal downsides. 


Continued collaboration across members of the Web PKI ecosystem (e.g., web browsers, CAs, website operators, and hosting providers) is necessary to make automation a viable option for all website operators. We’ve been encouraged by recent developments within the ACME ecosystem, including ACME Renewal Information and Automated Certificate Management Environment for Subdomains. These initiatives aim to better protect website operators from unforeseen events that could affect certificate status and lead to outages, as well as to make it easier for popular server authentication use cases to be supported by ACME. There’s further opportunity related to improved fail-over (e.g., allowing a graceful transition to a new CA if the preferred provider is unavailable at the time of a request). We’re hopeful that as more CA owners support their customers in adopting automation, we’ll see continued developments such as these, making it even easier for website operators to securely obtain and manage server authentication certificates.



Learn More


If you’re a website operator, we encourage you to discover the potential of automated certificate issuance and management and get started today! While we’ve compiled the resources below to improve your understanding, we encourage you to reach out to your corresponding CA owner to learn how they support, or plan to support, automation. 


Resources


If you previously investigated implementing an automated certificate issuance and management solution and determined that it was either too difficult or that there were too many obstacles to make it a viable solution, we encourage you to reconsider. The Web PKI continues to evolve, and recent developments have made it easier than ever to adopt automation. Modern web server platform providers like Caddy help website operators configure TLS by default, as do many third-party hosting provider organizations. 


If you depend on software or a service provider that does not support automated certificate issuance and management, share this post and ask the corresponding organization to include support for automation on their future product roadmap.


Finally, if you’d like to share with us any challenges, lessons learned, or opportunities for improvement related to certificate automation, let us know at chrome-root-program [at] google [dot] com.


Note: the service providers listed in this post should not be considered exhaustive or an endorsement. The references are only intended to be informational.


Posted by Chrome Root Program, Chrome Security Team