The blog post argues that implementing HTTP/2 within your internal network, behind a load balancer that already terminates HTTP/2, offers minimal performance benefits and can introduce unnecessary complexity. Since the connection between the load balancer and backend services is typically fast and reliable, the advantages of HTTP/2, such as header compression and multiplexing, are less impactful. The author suggests that using a simpler protocol like HTTP/1.1 for internal communication is often more efficient and easier to manage, avoiding potential debugging headaches associated with HTTP/2. They contend that focusing optimization efforts on other areas, like database queries or application logic, will likely yield more substantial performance improvements.
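The arrangement the post recommends is easy to sketch. Below is a minimal Go example (an illustration, not the author's code) of a front door that terminates HTTP/2 over TLS for public clients and proxies to a backend over plain HTTP/1.1 with keep-alive; the backend address and certificate paths are placeholders.

```go
// Minimal sketch: terminate HTTP/2 at the edge, proxy to the backend
// over plain HTTP/1.1 with keep-alive. The address and cert paths are
// illustrative placeholders, not values from the article.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	backend, err := url.Parse("http://10.0.0.5:8080") // hypothetical internal service
	if err != nil {
		log.Fatal(err)
	}

	proxy := httputil.NewSingleHostReverseProxy(backend)
	// The default http.Transport speaks HTTP/1.1 to the upstream and
	// reuses idle connections, which is the simpler internal protocol
	// the post argues for.
	proxy.Transport = &http.Transport{
		MaxIdleConnsPerHost: 100, // keep a pool of warm backend connections
	}

	// ListenAndServeTLS negotiates HTTP/2 via ALPN automatically, so
	// public clients still get multiplexing and header compression.
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", proxy))
}
```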
Summary of Comments (231)
https://news.ycombinator.com/item?id=43168533
Hacker News users discuss the practicality of HTTP/2 behind a load balancer. Several commenters agree with the article's premise, pointing out that the benefits of HTTP/2, like header compression and multiplexing, are most effective on the client-facing connection between the client and the load balancer. Past the load balancer, traffic to the backend servers consists of many short-lived requests over fast internal links where pooled connections already perform well, blunting HTTP/2's advantages. Some argue that HTTP/1.1 with keep-alive is sufficient in this scenario, while others mention the added complexity of managing HTTP/2 connections behind the load balancer. A few users suggest that gRPC or other protocols might be a better fit for backend communication, and some bring up the potential benefits of HTTP/3 with its connection migration capabilities. The overall sentiment is that HTTP/2's value diminishes beyond the load balancer and that alternative approaches may be more efficient.
The Hacker News post "There isn't much point to HTTP/2 past the load balancer" sparked a discussion with several insightful comments. Many commenters agreed with the premise of the article, noting that the benefits of HTTP/2, such as header compression and multiplexing, are most effective on the often congested public internet, and less so on the typically faster and more reliable internal network between a load balancer and backend servers.
Several commenters brought up the point that TLS overhead can negate the benefits of HTTP/2 in backend connections. One commenter suggested that if internal connections are already fast, encrypting and decrypting traffic for HTTP/2 might introduce more latency than it saves. This led to discussions about alternative protocols for internal communication, like gRPC or custom TCP-based protocols, which could provide performance benefits without the overhead of HTTP/2 and TLS.
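As an aside to that point, HTTP/2 does not strictly require TLS: the cleartext variant ("h2c") provides multiplexing and header compression without the encryption cost the commenter describes, though it forgoes the security benefits raised later in the thread. A minimal Go sketch using the golang.org/x/net/http2/h2c package, with placeholder addresses:

```go
// Sketch of cleartext HTTP/2 ("h2c") between internal services, which
// avoids the TLS handshake and crypto cost discussed above. The
// loopback address and port are placeholders.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net"
	"net/http"
	"time"

	"golang.org/x/net/http2"
	"golang.org/x/net/http2/h2c"
)

func main() {
	// Server side: h2c.NewHandler accepts HTTP/2 over plain TCP.
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "hello over %s\n", r.Proto) // "HTTP/2.0" for h2c clients
	})
	go http.ListenAndServe("127.0.0.1:8080", h2c.NewHandler(handler, &http2.Server{}))
	time.Sleep(100 * time.Millisecond) // crude wait for the listener; fine for a sketch

	// Client side: an http2.Transport that skips TLS entirely.
	client := &http.Client{Transport: &http2.Transport{
		AllowHTTP: true, // permit the "http" scheme
		DialTLS: func(network, addr string, _ *tls.Config) (net.Conn, error) {
			return net.Dial(network, addr) // plain TCP instead of TLS
		},
	}}

	resp, err := client.Get("http://127.0.0.1:8080/")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Print(string(body)) // prints: hello over HTTP/2.0
}
```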
Some commenters discussed specific scenarios where HTTP/2 between the load balancer and backend could be beneficial. These scenarios included environments with a high number of small requests, where multiplexing might offer some improvement, or situations where the connection between the load balancer and backend servers is less than ideal, such as in a geographically distributed setup.
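To make the "high number of small requests" scenario concrete, here is a hedged Go sketch (not from the discussion) that fires a burst of concurrent requests at a hypothetical internal endpoint. When the server negotiates HTTP/2, Go's default client multiplexes all of them over a single TCP connection; over HTTP/1.1 the same burst would queue behind a limited connection pool or open many connections.

```go
// Sketch: a burst of small concurrent requests. With an HTTP/2 server,
// Go's default client multiplexes these over one TCP connection. The
// hostname and path are placeholders for an internal service.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"sync"
)

func main() {
	url := "https://backend.internal:8443/item" // hypothetical endpoint

	var wg sync.WaitGroup
	for i := 0; i < 100; i++ { // 100 small requests in flight at once
		wg.Add(1)
		go func() {
			defer wg.Done()
			resp, err := http.Get(url)
			if err != nil {
				log.Println(err)
				return
			}
			io.Copy(io.Discard, resp.Body) // drain so the stream can be reused
			resp.Body.Close()
		}()
	}
	wg.Wait()
	fmt.Println("done")
}
```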
One commenter noted that while HTTP/2 might not offer significant performance gains internally, it could simplify infrastructure by using a single protocol throughout the system. This simplification could reduce operational complexity and potentially ease troubleshooting.
A few commenters offered counterpoints to the article's premise. One argued that connection coalescing, a feature of HTTP/2, is still beneficial internally, especially with backend services making outbound calls. Another commenter suggested that the article overlooks potential future optimizations that could make HTTP/2 more attractive for internal communication.
There was also a discussion on the trade-offs between performance and security. Some commenters emphasized the importance of end-to-end encryption, even internally, and argued that the benefits of HTTP/2 combined with TLS justify the potential performance overhead. They highlighted potential security vulnerabilities in internal networks and suggested that assuming the internal network is secure is a risky proposition.
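For a sense of what end-to-end encryption on the internal hop can look like, the following Go sketch configures a client for mutual TLS against an internal certificate authority while keeping HTTP/2 on the backend connection. All file paths, the CA, and the endpoint are hypothetical placeholders.

```go
// Sketch of encrypted backend traffic with mutual TLS (mTLS). File
// paths and the endpoint are placeholders; an internal CA is assumed.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net/http"
	"os"
)

func main() {
	// Client certificate and key presented to backend services.
	cert, err := tls.LoadX509KeyPair("client.pem", "client-key.pem")
	if err != nil {
		log.Fatal(err)
	}

	// Trust only the internal CA rather than the system roots.
	caPEM, err := os.ReadFile("internal-ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{
			Certificates: []tls.Certificate{cert},
			RootCAs:      pool,
		},
		// A custom TLSClientConfig disables automatic HTTP/2, so ask
		// for it explicitly to keep HTTP/2 on the encrypted hop.
		ForceAttemptHTTP2: true,
	}}

	resp, err := client.Get("https://backend.internal:8443/health") // hypothetical
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println(resp.Status)
}
```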
Overall, the comments on Hacker News provided a nuanced perspective on the use of HTTP/2 behind a load balancer, highlighting the potential downsides while acknowledging specific scenarios where it could be beneficial. The discussion explored various alternatives and touched upon the trade-offs between performance, security, and operational simplicity.