The blog post argues that implementing HTTP/2 within your internal network, behind a load balancer that already terminates HTTP/2, offers minimal performance benefits and can introduce unnecessary complexity. Since the connection between the load balancer and backend services is typically fast and reliable, the advantages of HTTP/2, such as header compression and multiplexing, are less impactful. The author suggests that using a simpler protocol like HTTP/1.1 for internal communication is often more efficient and easier to manage, avoiding potential debugging headaches associated with HTTP/2. They contend that focusing optimization efforts on other areas, like database queries or application logic, will likely yield more substantial performance improvements.
The blog post "There isn't much point to HTTP/2 past the load balancer" argues that while HTTP/2 offers significant performance benefits between a client (like a web browser) and a load balancer, extending HTTP/2 further into the internal network, between the load balancer and application servers, often yields negligible performance improvements and can even introduce complexities. The author bases this argument on empirical observations made within their specific Ruby on Rails application environment.
The author describes their testing methodology in detail. They compare performance using both HTTP/1.1 and HTTP/2 for communication between the load balancer (HAProxy) and the application servers (Puma), and they run load tests with wrk, simulating real-world traffic patterns. Their focus is primarily on latency and requests per second, two key indicators of web application performance.
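For illustration, the kind of measurement the author ran with wrk can be approximated with a tiny stdlib-only Python harness that reports the same two metrics, latency and requests per second, against a throwaway local server. Everything here (the server, the request count, the single connection) is a stand-in for the article's actual HAProxy/Puma setup, not a reproduction of it:

```python
import http.client
import statistics
import threading
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):
        pass  # silence per-request logging

# Throwaway local server standing in for an application server.
server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
latencies = []
start = time.perf_counter()
for _ in range(200):
    t0 = time.perf_counter()
    conn.request("GET", "/")
    conn.getresponse().read()
    latencies.append(time.perf_counter() - t0)
elapsed = time.perf_counter() - start
conn.close()
server.shutdown()

print(f"requests/sec: {200 / elapsed:.0f}")
print(f"median latency: {statistics.median(latencies) * 1e6:.0f} microseconds")
```

Unlike this single-connection loop, wrk drives many concurrent connections; the sketch only illustrates which metrics are being compared.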
The results of their experimentation demonstrate that the performance difference between using HTTP/2 and HTTP/1.1 for communication between the load balancer and application servers is statistically insignificant. In some cases, HTTP/2 even performs slightly worse. The author attributes this lack of improvement to the nature of their application's internal network. Since the communication between the load balancer and the application servers happens within a fast, low-latency local network environment, the benefits of HTTP/2, such as header compression and multiplexing, become less impactful. The overhead introduced by HTTP/2, albeit small, can sometimes outweigh the potential gains in such a scenario.
Furthermore, the author highlights that implementing HTTP/2 between the load balancer and application servers introduced additional complexity to their infrastructure. This complexity necessitates more sophisticated configuration and monitoring, potentially leading to increased operational overhead.
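To make the configuration burden concrete, the sketch below shows roughly what enabling backend HTTP/2 looks like in HAProxy. It is illustrative only, not taken from the article: the backend names and addresses are invented, and it assumes HAProxy 1.9+ and application servers that can speak cleartext HTTP/2 (h2c).

```
frontend fe_main
    # HTTP/2 is negotiated with clients via ALPN at TLS termination.
    bind :443 ssl crt /etc/haproxy/site.pem alpn h2,http/1.1
    default_backend be_app

backend be_app
    # Default: plain HTTP/1.1 to the application servers.
    server app1 10.0.0.11:3000 check
    # Alternative being debated: cleartext HTTP/2 (h2c) to the backend,
    # assuming the application server supports it.
    # server app1 10.0.0.11:3000 proto h2 check
```

The extra knob (`proto h2`) is small in itself; the operational cost the author describes comes from monitoring and debugging a second protocol hop.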
The author concludes that while HTTP/2 is undoubtedly beneficial between the client and load balancer, extending it to the backend, specifically in scenarios involving a low-latency internal network, is often not worth the added complexity. They suggest that the limited performance gains, if any, are often outweighed by the increased operational overhead. The specific context of a Ruby on Rails application with Puma as the application server is emphasized, implicitly acknowledging that different application architectures and network environments might yield different results. The author encourages others to conduct similar experiments within their own environments before deciding on implementing HTTP/2 past the load balancer.
Summary of Comments (231)
https://news.ycombinator.com/item?id=43168533
Hacker News users discuss the practicality of HTTP/2 behind a load balancer. Several commenters agree with the article's premise, pointing out that the benefits of HTTP/2, like header compression and multiplexing, are most effective on the initial connection between client and load balancer. Once past the load balancer, the connection between it and the backend servers often involves many short-lived requests, negating HTTP/2's advantages. Some argue that HTTP/1.1 with keep-alive is sufficient in this scenario, while others mention the added complexity of managing HTTP/2 connections behind the load balancer. A few users suggest that gRPC or other protocols might be a better fit for backend communication, and some bring up the potential benefits of HTTP/3 with its connection migration capabilities. The overall sentiment is that HTTP/2's value diminishes beyond the load balancer and alternative approaches may be more efficient.
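The "HTTP/1.1 with keep-alive is sufficient" argument rests on connection reuse: persistent HTTP/1.1 connections already avoid per-request TCP setup between the load balancer and the backend. A small stdlib-only Python sketch (using a throwaway local server, not any real load-balancer setup) shows that consecutive HTTP/1.1 requests ride the same TCP socket:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # keep-alive is the default in HTTP/1.1
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):
        pass  # keep output quiet

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
conn.request("GET", "/")
conn.getresponse().read()
first_socket = conn.sock            # socket after the first request

conn.request("GET", "/")
conn.getresponse().read()
reused = conn.sock is first_socket  # same TCP connection was reused

conn.close()
server.shutdown()
print(reused)  # True
```

With the connection already persistent, HTTP/2's remaining wins on this hop are multiplexing and header compression, which is exactly what the commenters argue matters little on a fast internal network.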
The Hacker News post "There isn't much point to HTTP/2 past the load balancer" sparked a discussion with several insightful comments. Many commenters agreed with the premise of the article, noting that the benefits of HTTP/2, such as header compression and multiplexing, are most effective on the often congested public internet, and less so on the typically faster and more reliable internal network between a load balancer and backend servers.
Several commenters brought up the point that TLS overhead can negate the benefits of HTTP/2 in backend connections (in practice, HTTP/2 is almost always deployed over TLS). One commenter suggested that if internal connections are already fast, encrypting and decrypting traffic for HTTP/2 might introduce more latency than it saves. This led to discussions about alternative protocols for internal communication, like gRPC or custom TCP-based protocols, which could provide performance benefits without the overhead of HTTP/2 and TLS.
Some commenters discussed specific scenarios where HTTP/2 between the load balancer and backend could be beneficial. These scenarios included environments with a high number of small requests, where multiplexing might offer some improvement, or situations where the connection between the load balancer and backend servers is less than ideal, such as in a geographically distributed setup.
One commenter noted that while HTTP/2 might not offer significant performance gains internally, it could simplify infrastructure by using a single protocol throughout the system. This simplification could reduce operational complexity and potentially ease troubleshooting.
A few commenters offered counterpoints to the article's premise. One argued that connection coalescing, a feature of HTTP/2, is still beneficial internally, especially with backend services making outbound calls. Another commenter suggested that the article overlooks potential future optimizations that could make HTTP/2 more attractive for internal communication.
There was also a discussion on the trade-offs between performance and security. Some commenters emphasized the importance of end-to-end encryption, even internally, and argued that the benefits of HTTP/2 combined with TLS justify the potential performance overhead. They highlighted potential security vulnerabilities in internal networks and suggested that assuming the internal network is secure is a risky proposition.
Overall, the comments on Hacker News provided a nuanced perspective on the use of HTTP/2 behind a load balancer, highlighting the potential downsides while acknowledging specific scenarios where it could be beneficial. The discussion explored various alternatives and touched upon the trade-offs between performance, security, and operational simplicity.