This blog post explores how a Certificate Authority (CA) could maliciously issue a certificate with a valid signature but an impossibly distant expiration date, far beyond the CA's own validity period. This "fake future" certificate wouldn't trigger typical browser warnings because the signature checks out. However, by comparing the certificate's notAfter date with Signed Certificate Timestamps (SCTs) from publicly auditable logs, inconsistencies can be detected. An SCT is proof of inclusion in a log at a specific time, effectively timestamping when the certificate was issued. If an SCT postdates the end of the CA's validity period while the certificate claims an issuance date within that period, the mismatch points to potential foul play. The post further demonstrates how this discrepancy can be checked programmatically using open-source tools.
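As a rough illustration of that comparison, the sketch below (using Python's cryptography library; the file names are placeholders) pulls the embedded SCTs out of a leaf certificate and checks their timestamps, and the leaf's lifetime, against the issuing CA's validity window. It is a simplified approximation of the kind of check the post describes, not the author's actual tooling.

```python
# Sketch: compare a leaf certificate's embedded SCT timestamps and lifetime
# against its issuing CA's validity window. File names are placeholders.
from cryptography import x509
from cryptography.x509 import PrecertificateSignedCertificateTimestamps

with open("leaf.pem", "rb") as f:
    leaf = x509.load_pem_x509_certificate(f.read())
with open("issuing-ca.pem", "rb") as f:
    ca = x509.load_pem_x509_certificate(f.read())

# A leaf that outlives its issuing CA is already suspect.
if leaf.not_valid_after > ca.not_valid_after:
    print(f"Suspicious: leaf expires {leaf.not_valid_after}, "
          f"CA expires {ca.not_valid_after}")

# Embedded SCTs record when the (pre)certificate was submitted to CT logs;
# this raises ExtensionNotFound if the certificate carries no SCTs.
scts = leaf.extensions.get_extension_for_class(
    PrecertificateSignedCertificateTimestamps
).value

for sct in scts:
    logged_at = sct.timestamp  # UTC datetime assigned by the CT log
    if not (ca.not_valid_before <= logged_at <= ca.not_valid_after):
        print(f"Suspicious: SCT logged at {logged_at}, outside the CA's "
              f"validity window {ca.not_valid_before} - {ca.not_valid_after}")
```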
CSRF and CORS address distinct web security risks, so both are necessary. CSRF (Cross-Site Request Forgery) protection guards against malicious sites tricking a user's browser into making unintended requests to a trusted site where the user is already authenticated; this is typically achieved through tokens that prove the request originated from the trusted site itself. CORS (Cross-Origin Resource Sharing), on the other hand, dictates which external origins are permitted to read resources from a particular server, protecting the server's resources from scripts running on other origins. While both deal with cross-site interactions, CSRF protection prevents malicious exploitation of a user's existing session, whereas CORS restricts which origins can access the server's resources in the first place.
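A toy sketch can make the contrast concrete. The Flask app below (the secret key, route names, and allowed origin are invented for illustration) hands out and verifies a CSRF token on a state-changing request, and separately declares a CORS policy that browsers enforce when foreign origins try to read a response.

```python
# Toy sketch contrasting CSRF protection and CORS in one Flask app.
# The secret key, route names, and allowed origin are placeholders.
import secrets
from flask import Flask, abort, jsonify, request, session

app = Flask(__name__)
app.secret_key = "replace-me"  # needed for the session cookie

@app.get("/form")
def form():
    # CSRF: hand out a per-session token that a foreign site cannot read...
    session["csrf_token"] = secrets.token_urlsafe(32)
    return jsonify(csrf_token=session["csrf_token"])

@app.post("/transfer")
def transfer():
    # ...and require it back on state-changing requests, proving the request
    # came from our own pages rather than an attacker's site.
    submitted = request.form.get("csrf_token", "")
    if not submitted or not secrets.compare_digest(
        submitted, session.get("csrf_token", "")
    ):
        abort(403)
    return "ok"

@app.get("/api/data")
def data():
    # CORS: the server declares which foreign origin may read this response;
    # the browser enforces that policy on the server's behalf.
    resp = jsonify(value=42)
    resp.headers["Access-Control-Allow-Origin"] = "https://trusted-partner.example"
    return resp
```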
Hacker News users discussed the nuances of CSRF and CORS, pointing out that while both address security concerns around cross-origin requests, they protect against different threats. Several commenters emphasized that CORS primarily protects the server from unauthorized access by other origins, via a policy the server itself controls. CSRF protection, on the other hand, shields users from malicious sites exploiting their existing authenticated sessions on another site. One commenter offered a clear analogy: CORS is like a bouncer at a club deciding who can enter, while CSRF protection is like checking someone's ID to make sure they're not using a stolen membership card. The discussion also touched on practical differences in implementation, such as preflight requests in CORS and the use of tokens in CSRF prevention. Some comments questioned the clarity of the original blog post's title, suggesting it might conflate the two distinct mechanisms.
Firefox now fully enforces Certificate Transparency (CT) logging for all TLS certificates, significantly bolstering web security. This means that all newly issued website certificates must be publicly logged in approved CT logs for Firefox to trust them. This measure prevents malicious actors from secretly issuing fraudulent certificates for popular websites, as such certificates would not appear in the public logs and thus be rejected by Firefox. This enhances user privacy and security by making it considerably harder for attackers to perform man-in-the-middle attacks. Firefox’s complete enforcement of CT marks a major milestone for internet security, setting a strong precedent for other browsers to follow.
HN commenters generally praise Mozilla for implementing Certificate Transparency (CT) enforcement in Firefox, viewing it as a significant boost to web security. Some express concern about the potential for increased centralization and the impact on smaller Certificate Authorities (CAs). A few suggest that CT logs themselves are a single point of failure and advocate for further decentralization. There's also discussion around the practical implications of CT enforcement, such as the risk of legitimate websites being temporarily inaccessible due to log issues, and the need for robust monitoring and alerting systems. One compelling comment highlights the significant decrease in mis-issued certificates since the introduction of CT, emphasizing its positive impact. Another points out that CT enforcement could also affect domain-fronting abuse.
DigiCert, a Certificate Authority (CA), issued a DMCA takedown notice against a Mozilla Bugzilla post detailing a vulnerability in their certificate issuance process. This vulnerability allowed the fraudulent issuance of certificates for *.mozilla.org, a significant security risk. While DigiCert later claimed the takedown was accidental and retracted it, the initial action sparked concern within the Mozilla community regarding potential censorship and the chilling effect such legal threats could have on open security research and vulnerability disclosure. The incident highlights the tension between responsible disclosure and legal protection, particularly when vulnerabilities involve prominent organizations.
HN commenters largely express outrage at DigiCert's legal threat against Mozilla for publicly disclosing a vulnerability in their software via Bugzilla, viewing it as an attempt to stifle legitimate security research and responsible disclosure. Several highlight the chilling effect such actions can have on vulnerability reporting, potentially leading to more undisclosed vulnerabilities being exploited. Some question the legality and ethics of DigiCert's response, especially given the public nature of the Bugzilla entry. A few commenters sympathize with DigiCert's frustration with the delayed disclosure but still condemn their approach. The overall sentiment is strongly against DigiCert's handling of the situation.
Latacora's blog post "How (not) to sign a JSON object" cautions against signing JSON by stringifying it before applying a signature. This approach is vulnerable to attacks that modify whitespace or key ordering, which changes the string representation without altering the JSON's semantic meaning. The correct method involves canonicalizing the JSON object first – transforming it into a standardized, consistent byte representation – before signing. This ensures the signature validates only identical JSON objects, regardless of superficial formatting differences. The post uses examples to demonstrate the vulnerabilities of naive stringification and advocates using established JSON Canonicalization Schemes (JCS) for robust and secure signing.
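A minimal sketch of the difference, with an HMAC standing in for a public-key signature: serializing with sorted keys and fixed separators approximates canonicalization (a full JCS implementation per RFC 8785 also pins down number and string encoding), so two differently formatted but semantically identical objects sign to the same value.

```python
# Sketch: sign a canonical byte serialization of a JSON object instead of an
# arbitrary stringification. sort_keys plus fixed separators approximates
# canonicalization; full JCS (RFC 8785) also fixes number and string encoding.
# An HMAC stands in for a public-key signature to keep the example short.
import hashlib
import hmac
import json

def canonical_bytes(obj) -> bytes:
    return json.dumps(
        obj, sort_keys=True, separators=(",", ":"), ensure_ascii=False
    ).encode("utf-8")

def sign(obj, key: bytes) -> str:
    return hmac.new(key, canonical_bytes(obj), hashlib.sha256).hexdigest()

key = b"demo-key"
a = {"amount": 100, "to": "alice"}
b = {"to": "alice", "amount": 100}  # same meaning, different key order

# Signatures agree despite superficial formatting differences.
assert sign(a, key) == sign(b, key)
```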
HN commenters largely agree with the author's points about the complexities and pitfalls of signing JSON objects. Several highlighted the importance of canonicalization before signing, with some mentioning specific libraries like JWS and json-canonicalize to ensure consistent formatting. The discussion also touches upon alternatives like JWT (JSON Web Tokens) and COSE (CBOR Object Signing and Encryption) as potentially better solutions, particularly JWT for its ease of use in web contexts. Some commenters delve into the nuances of JSON's flexibility, which can make secure signing difficult, such as varying key order and whitespace handling. A few also caution against rolling your own cryptographic solutions and advocate for using established libraries where possible.
Mozilla's code signing journey began with a simple, centralized system using a single key and evolved into a complex, multi-layered approach. Initially, all Mozilla software was signed with one key, posing significant security risks. This led to the adoption of per-product keys, offering better isolation. Further advancements included build signing, allowing for verification even before installer creation, and update signing to secure updates delivered through the application. The process also matured through the use of hardware security modules (HSMs) for safer key storage and automated signing infrastructure for increased efficiency. These iterative improvements aimed to enhance security by limiting the impact of compromised keys and streamlining the signing process.
HN commenters generally praised the article for its clarity and detail in explaining a complex technical process. Several appreciated the focus on the practical, real-world challenges and compromises involved, rather than just the theoretical ideal. Some shared their own experiences with code signing, highlighting additional difficulties like the expense and bureaucratic hurdles, particularly for smaller developers. Others pointed out the inherent limitations and potential vulnerabilities of code signing, emphasizing that it's not a silver bullet security solution. A few comments also discussed alternative or supplementary approaches to software security, such as reproducible builds and better sandboxing.
OAuth2 is a delegation protocol that lets a user grant a third-party application limited access to their resources on a server, without sharing their credentials. Instead of handing your username and password to the app, you authorize it through the provider's authorization server (such as Google or Facebook). This authorization process yields an access token, which the app then uses to access specific resources on your behalf, within the scope you've permitted. OAuth2 handles authorization only, not authentication: it doesn't verify the user's identity, and it relies on other mechanisms, such as OpenID Connect, for that purpose.
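As a concrete illustration, the sketch below shows the token-exchange step of the authorization code flow; the endpoint URL, client credentials, and redirect URI are placeholders for whatever a real provider documents.

```python
# Sketch of the token-exchange step of the OAuth2 authorization code flow.
# The endpoint URL, client credentials, and redirect URI are placeholders.
import requests

def exchange_code_for_token(code: str) -> dict:
    resp = requests.post(
        "https://auth.example.com/oauth2/token",   # provider's token endpoint
        data={
            "grant_type": "authorization_code",
            "code": code,                          # short-lived code from the redirect
            "redirect_uri": "https://app.example.com/callback",
            "client_id": "my-client-id",
            "client_secret": "my-client-secret",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # typically contains access_token, expires_in, scope

# The access token, never the user's password, is then sent to the resource
# server, e.g.:
# requests.get("https://api.example.com/me",
#              headers={"Authorization": f"Bearer {token['access_token']}"})
```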
HN commenters generally praised the article for its clear explanation of OAuth2, calling it accessible and well-written, particularly appreciating the focus on the "why" rather than just the "how." Some users pointed out potential minor inaccuracies or areas for further clarification, such as the distinction between authorization code grant with PKCE and implicit flow for client-side apps, the role of refresh tokens, and the implications of using a third-party identity provider. One commenter highlighted the difficulty of finding good OAuth2 resources and expressed gratitude for the article's contribution. Others suggested additional topics for the author to cover, such as the challenges of cross-domain authentication. Several commenters also shared personal anecdotes about their experiences implementing or troubleshooting OAuth2.
Summary of Comments (43)
https://news.ycombinator.com/item?id=43285671
Hacker News users discuss the practicality and implications of the blog post's method for detecting malicious Sub-CAs. Several commenters point out the difficulty of implementing this at scale due to the computational cost and potential performance impact of checking every certificate against a large CRL set. Others express concerns about the feasibility of maintaining an up-to-date list of suspect CAs, given their dynamic nature. Some question the overall effectiveness, arguing that sophisticated attackers could circumvent such checks. A few users suggest alternative approaches like using DNSSEC and DANE, or relying on operating system trust stores. The overall sentiment leans toward acknowledging the validity of the author's points while remaining skeptical of the proposed solution's real-world applicability.
The Hacker News post "How to distrust a CA without any certificate errors" sparked a discussion with several insightful comments. Many commenters engaged with the core concept of the blog post, which explored how a Certificate Authority (CA) could potentially manipulate certificates without triggering typical browser warnings.
One commenter highlighted the difficulty in detecting this type of manipulation, emphasizing that the trust model inherently relies on the CA. They pointed out the unlikelihood of a user independently verifying the certificate chain, especially given the complexity involved. This reinforced the article's point about the potential vulnerability.
Another commenter discussed the "certificate pinning" technique, which involves hardcoding expected certificate information into an application. While effective against the specific attack described in the article, they acknowledged that it's not a widespread solution and introduces its own set of management challenges, such as the need to update pinned certificates regularly.
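For illustration, a crude form of pinning can be done with Python's standard library alone: hash the server's leaf certificate and compare it to a hardcoded fingerprint. The hostname and pin below are placeholders, and the example also shows why commenters call pinning a maintenance burden, since the pinned value must be updated on every certificate renewal.

```python
# Sketch of leaf-certificate pinning with the standard library: fetch the
# server's certificate and compare its SHA-256 fingerprint to a hardcoded
# ("pinned") value. The hostname and pin below are placeholders.
import hashlib
import socket
import ssl

PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def fingerprint_matches(host: str, port: int = 443) -> bool:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)  # DER-encoded leaf cert
    return hashlib.sha256(der).hexdigest() == PINNED_SHA256

print(fingerprint_matches("example.com"))
```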
The conversation also touched upon the broader issues of trust and security in the digital landscape. One commenter expressed a general lack of trust in CAs, citing past incidents of compromised CAs and questionable practices. This sentiment resonated with other users who echoed concerns about the centralized nature of the CA system.
Some commenters suggested alternative approaches to enhance security. One idea involved using DNSSEC combined with DANE (DNS-based Authentication of Named Entities), allowing domain owners to specify which CAs are authorized to issue certificates for their domain. Another suggestion was to explore decentralized identity solutions, moving away from the reliance on centralized CAs altogether.
Several comments also delved into the technical details of the attack described in the article, with users discussing specific aspects of certificate validation and the potential implications for different browsers and operating systems. There was some debate about the practicality of the attack and the likelihood of it being exploited in the wild, with some arguing that the complexity involved would make it a less attractive option for attackers.
Overall, the comments section reflected a general understanding of the potential vulnerabilities associated with the current CA system and a desire for more robust and transparent security solutions. While no single solution emerged as a clear winner, the discussion provided valuable insights into the challenges and potential future directions of certificate management and online trust.