CSRF and CORS address distinct web security risks and therefore both are necessary. CSRF (Cross-Site Request Forgery) protects against malicious sites tricking a user's browser into making unintended requests to a trusted site where the user is already authenticated. This is achieved through tokens that verify the request originated from the trusted site itself. CORS (Cross-Origin Resource Sharing), on the other hand, dictates which external sites are permitted to access resources from a particular server, focusing on protecting the server itself from unauthorized access by scripts running on other origins. While they both deal with cross-site interactions, CSRF prevents malicious exploitation of a user's existing session, while CORS restricts access to the server's resources in the first place.
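To make the token mechanism concrete, here is a minimal, framework-free sketch of a synchronizer-token check; the function names and the dictionary standing in for session storage are hypothetical, not any particular framework's API:

```python
import hmac
import secrets

def issue_csrf_token(session_store: dict) -> str:
    """Mint an unpredictable token and remember it server-side (hypothetical store)."""
    token = secrets.token_urlsafe(32)
    session_store["csrf_token"] = token
    return token  # the trusted site embeds this in the forms it serves

def verify_csrf_token(session_store: dict, submitted: str) -> bool:
    """Constant-time comparison so the check does not leak timing information."""
    expected = session_store.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted)

# A forged cross-site request carries the victim's cookies automatically,
# but it cannot include the token, so verification fails.
session = {}
token = issue_csrf_token(session)
assert verify_csrf_token(session, token)
assert not verify_csrf_token(session, "attacker-guess")
```

Because the token lives outside the cookie jar, the browser's automatic credential attachment, which CSRF exploits, is no longer sufficient to forge a valid request.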
The blog post argues that implementing HTTP/2 within your internal network, behind a load balancer that already terminates HTTP/2, offers minimal performance benefits and can introduce unnecessary complexity. Since the connection between the load balancer and backend services is typically fast and reliable, the advantages of HTTP/2, such as header compression and multiplexing, are less impactful. The author suggests that using a simpler protocol like HTTP/1.1 for internal communication is often more efficient and easier to manage, avoiding potential debugging headaches associated with HTTP/2. They contend that focusing optimization efforts on other areas, like database queries or application logic, will likely yield more substantial performance improvements.
Hacker News users discuss the practicality of HTTP/2 behind a load balancer. Several commenters agree with the article's premise, pointing out that the benefits of HTTP/2, like header compression and multiplexing, are most effective on the initial connection between client and load balancer. Once past the load balancer, the connection between it and the backend servers often involves many short-lived requests, negating HTTP/2's advantages. Some argue that HTTP/1.1 with keep-alive is sufficient in this scenario, while others mention the added complexity of managing HTTP/2 connections behind the load balancer. A few users suggest that gRPC or other protocols might be a better fit for backend communication, and some bring up the potential benefits of HTTP/3 with its connection migration capabilities. The overall sentiment is that HTTP/2's value diminishes beyond the load balancer and alternative approaches may be more efficient.
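The "HTTP/1.1 with keep-alive is sufficient" position sketches out to a configuration roughly like the following; addresses, ports, and certificate paths are placeholders:

```nginx
upstream backend {
    server 10.0.0.11:8080;   # placeholder backend addresses
    server 10.0.0.12:8080;
    keepalive 32;            # pool of reusable HTTP/1.1 connections per worker
}

server {
    listen 443 ssl http2;    # HTTP/2 terminates here, at the edge
    ssl_certificate     /etc/nginx/tls/cert.pem;   # placeholder paths
    ssl_certificate_key /etc/nginx/tls/key.pem;

    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;          # plain HTTP/1.1 on the internal hop
        proxy_set_header Connection "";  # required for upstream keepalive to work
    }
}
```

As it happens, nginx's proxy module only speaks HTTP/1.x to upstreams anyway, which underlines the thread's conclusion that the edge is where HTTP/2 earns its keep.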
The blog post "Nginx: try_files is evil too" argues against using the try_files directive in Nginx configurations, especially for serving static files. While seemingly simple, its behavior can be unpredictable and lead to unexpected errors, particularly when dealing with rewritten URLs or when file existence checks are bypassed due to caching. The author advocates using simpler, more explicit location blocks to define how different types of requests should be handled, leading to improved clarity, maintainability, and potentially better performance. They suggest separate location blocks for specific file types and a final catch-all block for dynamic requests, promoting a more transparent and less error-prone approach to configuration.
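The shape the author advocates might look roughly like this; the document root, file extensions, and backend address are placeholders:

```nginx
# Explicit, per-type static handling instead of a try_files fallback chain.
location ~* \.(css|js|png|jpg|svg|woff2)$ {
    root /var/www/site;          # placeholder document root
    expires 30d;                 # static assets are safe to cache aggressively
}

# Final catch-all: anything that is not a known static type goes to the app.
location / {
    proxy_pass http://127.0.0.1:9000;   # placeholder application server
}
```

Each request's fate is visible directly in the matching block, rather than depending on a runtime chain of file-existence checks.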
Hacker News commenters largely disagree with the article's premise that try_files is inherently "evil." Several point out that the author's proposed alternative, location blocks with regular expressions, is less performant and more complex, especially for simpler use cases. Some argue that the author mischaracterizes the purpose of try_files, which is primarily to serve static files efficiently, not to perform complex routing. Others agree that try_files can be misused, leading to confusing configurations, but contend that when used appropriately it is a valuable tool. The discussion also touches on alternative approaches, such as using a separate frontend proxy or load balancer for more intricate routing logic. A few commenters express appreciation for the article prompting a re-evaluation of their Nginx configurations, even if they don't fully agree with the author's conclusions.
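For reference, the "used appropriately" case the directive's defenders have in mind is typically the single-page-app fallback pattern; the document root here is a placeholder:

```nginx
# Serve the requested file if it exists, then try it as a directory,
# and otherwise fall back to one explicit entry point.
location / {
    root /var/www/site;                  # placeholder document root
    try_files $uri $uri/ /index.html;    # conventional single-page-app pattern
}
```

Used this way, try_files expresses one simple fallback chain rather than doubling as a general-purpose router.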
Httptap is a command-line tool for Linux that intercepts and displays the HTTP and HTTPS traffic generated by any specified program. It works by running the target program in an isolated network environment where it can observe requests and responses before they reach the real network. This provides a convenient way to inspect an application's HTTP communication without setting up a separate proxy or modifying its source code. Httptap presents the captured data in a human-readable format, showing details such as headers, body content, and timing information.
Hacker News users discuss httptap, focusing on its potential uses and comparing it to existing tools. Some praise its simplicity and ease of use for quickly inspecting HTTP traffic, particularly for debugging. Others suggest alternative tools such as mitmproxy, tcpdump, and Wireshark, highlighting their more advanced features, such as SSL decryption and broader protocol support. The conversation also touches on the limitations of httptap, including its current lack of HTTPS decryption and potential performance impact. Several commenters express interest in contributing features, particularly HTTPS support. Overall, the sentiment is positive, with many appreciating httptap as a lightweight and convenient option for simple HTTP inspection.
Summary of Comments (64)
https://news.ycombinator.com/item?id=43231411
Hacker News users discussed the nuances of CSRF and CORS, pointing out that while they both address security concerns related to cross-origin requests, they protect against different threats. Several commenters emphasized that CORS primarily protects the server from unauthorized access by other origins, controlled by the server itself. CSRF, on the other hand, protects users from malicious sites exploiting their existing authenticated sessions on another site, controlled by the user's browser. One commenter offered a clear analogy: CORS is like a bouncer at a club deciding who can enter, while CSRF protection is like checking someone's ID to make sure they're not using a stolen membership card. The discussion also touched upon the practical differences in implementation, like preflight requests in CORS and the use of tokens in CSRF prevention. Some comments questioned the clarity of the original blog post's title, suggesting it might confuse the two distinct mechanisms.
The Hacker News post titled "Why do we have both CSRF protection and CORS?" generated a robust discussion with numerous comments exploring the nuances and interplay between these two security mechanisms. The central theme revolves around the distinct, yet complementary, roles CSRF protection and CORS play in securing web applications.
Several commenters emphasize that while both mechanisms address security concerns related to cross-origin requests, they target different attack vectors. CORS, they explain, is primarily browser-enforced and focuses on protecting the server from unauthorized access by scripts running in a different origin. It acts as a gatekeeper, allowing the server to specify which origins are permitted to make requests and what types of requests are allowed. This helps prevent malicious websites from directly accessing resources on a different domain without explicit server authorization.
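The preflight requests mentioned in the thread are the visible face of this gatekeeping: before a non-simple cross-origin request, the browser sends an OPTIONS probe and only proceeds if the server's response permits it. A rough sketch of such an exchange, with placeholder origins and paths:

```http
OPTIONS /api/orders HTTP/1.1
Host: api.example.com
Origin: https://app.example.com
Access-Control-Request-Method: PUT
Access-Control-Request-Headers: Content-Type

HTTP/1.1 204 No Content
Access-Control-Allow-Origin: https://app.example.com
Access-Control-Allow-Methods: GET, POST, PUT
Access-Control-Allow-Headers: Content-Type
Access-Control-Max-Age: 86400
```

If the response headers do not authorize the origin and method, the browser never sends the actual PUT.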
CSRF protection, on the other hand, is focused on protecting the user from unknowingly making requests to a trusted site while authenticated. Commenters highlight how CSRF attacks exploit the browser's automatic inclusion of cookies and other authentication details in cross-origin requests. By tricking a user into interacting with a malicious website, an attacker can leverage the user's existing session to perform unwanted actions on the trusted site. CSRF tokens, as explained in the comments, act as a secret, unpredictable value included with each request that the server can verify to ensure the request originated from a legitimate source, not a malicious third-party site.
A key point of discussion revolves around how CORS alone does not prevent CSRF attacks. Commenters explain scenarios where a malicious website, even if blocked by CORS from reading the response from a cross-origin request, can still send the request. This means a CSRF attack can still be executed, potentially modifying data or performing actions on the targeted website without the user's knowledge, even if the attacker cannot directly see the results.
Some comments delve into the specific mechanics of how CSRF tokens work, explaining how they are typically generated on the server, embedded in forms or included as custom headers, and validated upon submission. They also touch upon different methods of implementing CSRF protection, such as double submit cookies and the Synchronizer Token Pattern.
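The double-submit variant mentioned above relies on the same-origin policy rather than server-side state: the token is set both as a cookie and as a form field, and the two must match. A minimal sketch, with hypothetical function names and plain dictionaries standing in for the request's cookies and form data:

```python
import hmac
import secrets

def new_double_submit_token() -> str:
    """Token the server sets both as a cookie and as a hidden form field."""
    return secrets.token_urlsafe(32)

def check_double_submit(cookies: dict, form: dict) -> bool:
    """Accept only if the cookie value and the submitted form field match.

    A cross-site attacker can make the victim's browser *send* the cookie,
    but cannot read it, so they cannot reproduce the matching form field.
    """
    cookie_val = cookies.get("csrf", "")
    form_val = form.get("csrf", "")
    return bool(cookie_val) and hmac.compare_digest(cookie_val, form_val)

token = new_double_submit_token()
assert check_double_submit({"csrf": token}, {"csrf": token})
assert not check_double_submit({"csrf": token}, {"csrf": "forged"})
```

The appeal of this pattern, relative to the synchronizer token, is that the server needs no per-session token storage; the trade-off is that it depends on the attacker being unable to set or read cookies for the target domain.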
Several commenters provide practical examples to illustrate the differences between CSRF and CORS, using scenarios involving banking transactions, social media interactions, and other web applications. These examples help clarify the distinct vulnerabilities each mechanism addresses and why both are necessary for comprehensive security.
Finally, some commenters discuss the limitations of both CSRF protection and CORS and highlight the importance of employing multiple layers of security. They mention other security best practices, such as proper input validation and output encoding, as essential components of a robust security strategy. The general consensus is that while CSRF and CORS are vital tools, they are not silver bullets, and a comprehensive approach to web security is crucial.