The article "The Size of Packets" explores the distribution of IP packet sizes on the internet, emphasizing the enduring prevalence of small packets despite increasing bandwidth. It analyzes data from various sources, highlighting that the median packet size remains stubbornly around 400-500 bytes, even on high-speed links. This challenges the assumption that larger packets dominate modern networks and underscores the importance of optimizing network infrastructure for small-packet efficiency. The piece also delves into the historical context of packet sizes, touching on Ethernet's influence and the continued relevance of TCP/IP headers, which account for a significant share of each packet's size, especially for small payloads.
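To make the header-overhead point concrete, here is a rough back-of-the-envelope sketch. The 20-byte IPv4 and 20-byte TCP header sizes are the minimal, option-free values and are an assumption for illustration, not figures taken from the article.

```typescript
// Fraction of each packet consumed by headers, assuming minimal
// 20-byte IPv4 and 20-byte TCP headers (no options, no link-layer framing).
const HEADER_BYTES = 20 + 20;

for (const payloadBytes of [0, 50, 100, 400, 1460]) {
  const totalBytes = payloadBytes + HEADER_BYTES;
  const overheadPct = (HEADER_BYTES / totalBytes) * 100;
  console.log(
    `${payloadBytes} B payload -> ${totalBytes} B packet, ` +
    `${overheadPct.toFixed(1)}% headers`
  );
}
```

A bare ACK (zero payload) is all header, while a full 1460-byte payload brings the overhead under 3%, which is why header cost matters most for the small packets the article focuses on.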
In the 1980s, computer enthusiasts, particularly in Europe, could download games and other software from radio broadcasts. Shows like the UK's "Microdrive" transmitted audio data that could be captured using cassette recorders and then loaded onto computers like the Sinclair ZX Spectrum. This method, while slow and prone to errors, provided access to a wealth of software, often bypassing the cost of commercial cassettes. These broadcasts typically included instructions, checksums for error verification, and even musical interludes while longer programs loaded. The practice demonstrates an early form of digital distribution, leveraging readily available technology to share software within a community.
Hacker News commenters on the article about downloading games from the radio in the 1980s largely reminisced about their own experiences. Several users recalled using cassette recorders to capture data from radio broadcasts, mentioning specific shows like "Bits & Bytes" in the UK. Some shared technical details about the process, including the use of different audio frequencies representing 0s and 1s, and the challenges of getting a clean recording. A few commenters also pointed out the historical context, highlighting the prevalence of BBSs and the slow speeds of early modems as factors contributing to the popularity of radio broadcasts as a distribution method for games and software. Others discussed the variety of content available, including games, utilities, and even early forms of digital art. The discussion also touched upon regional variations in these practices, with some noting that the phenomenon was more common in Europe than in the US.
FilePizza allows for simple, direct file transfers between browsers using WebRTC. It establishes a peer-to-peer connection, eliminating the need for an intermediary server to store the files. The sender generates a unique URL that they share with the recipient. When the recipient opens the URL, a direct connection is established and the file transfer begins. Once the transfer is complete, the connection closes. This allows for fast and secure file sharing, particularly useful for larger files that might be cumbersome to transfer through traditional methods like email or cloud storage.
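The flow described above maps onto the browser's standard WebRTC APIs. The sketch below is not FilePizza's actual implementation; it assumes a hypothetical `signal` helper standing in for the offer/answer exchange that happens behind the shared URL, and it omits ICE candidate handling and backpressure for brevity.

```typescript
// Hypothetical signaling step: deliver our offer to the peer (e.g. via the
// shared URL's rendezvous backend) and get their answer back.
declare function signal(
  offer: RTCSessionDescriptionInit
): Promise<RTCSessionDescriptionInit>;

async function sendFile(file: File): Promise<void> {
  const pc = new RTCPeerConnection();
  const channel = pc.createDataChannel("file-transfer");

  channel.onopen = async () => {
    // Stream the file to the peer in fixed-size chunks.
    const CHUNK_SIZE = 16 * 1024;
    for (let offset = 0; offset < file.size; offset += CHUNK_SIZE) {
      const chunk = await file.slice(offset, offset + CHUNK_SIZE).arrayBuffer();
      channel.send(chunk);
    }
    channel.close(); // the connection goes away once the transfer is done
  };

  // Offer/answer handshake; the recipient's answer arrives through signaling.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  const answer = await signal(offer);
  await pc.setRemoteDescription(answer);
}
```

Because the bytes move directly between the two peers, nothing is stored on a server in between, which is the property the summary highlights.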
HN commenters generally praised FilePizza's simplicity and clever use of WebRTC for direct file transfers, avoiding server-side storage. Several appreciated its retro aesthetic and noted its usefulness for quick, informal sharing, particularly when privacy or speed are paramount. Some discussed potential improvements, like indicating transfer progress more clearly and adding features like drag-and-drop. Concerns were raised about potential abuse for sharing illegal content, along with the limitations inherent in browser-based P2P, such as needing both parties online simultaneously. The ephemeral nature of the transfer was both praised for privacy and questioned for practicality in certain scenarios. A few commenters compared it favorably to similar tools like Snapdrop, highlighting its minimalist approach.
ACCESS.bus, developed by Philips and Digital Equipment Corporation in the early 1990s, was a short-lived, low-cost, low-power bus for connecting peripherals like keyboards and mice, and an early rival to USB. Leveraging the already established I²C protocol, it aimed for simplicity and minimal hardware requirements. Despite backing from several major manufacturers and some limited adoption, ACCESS.bus ultimately failed to gain significant traction against the rapidly growing dominance of USB, fading into obscurity by the early 2000s. Its failure was largely due to USB's broader industry support, superior performance for higher-bandwidth devices, and its eventual standardization and adoption across diverse platforms.
Several Hacker News commenters discussed ACCESS.bus's technical merits compared to USB. Some argued that while ACCESS.bus offered advantages like cheaper connectors and isochronous data transfer crucial for audio, its downfall was due to poorer marketing and industry support compared to the Intel-backed USB. Others pointed out that ACCESS.bus's use of a 7-bit addressing scheme limited it to 127 devices, a significant constraint compared to USB's much larger capacity. The conversation also touched upon the complexity of ACCESS.bus drivers and its apparent susceptibility to noise, alongside its prevalence in specific niches like high-end audio equipment in Japan. A few commenters reminisced about using ACCESS.bus devices and noted the lack of readily available information about the technology today, contributing to its "forgotten" status.
Without TCP or UDP, internet communication as we know it would cease to function. Applications wouldn't have standardized ways to send and receive data over IP. We'd lose the reliability (guaranteed, in-order delivery) provided by TCP, and the speed and simplicity offered by UDP. Developers would have to implement custom protocols for each application, leading to immense complexity, incompatibility, and a much less efficient and robust internet. Essentially, networked applications would regress to a pre-internet state of ad-hoc solutions and significantly reduced interoperability.
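As a minimal illustration of what those two transports hand to applications today, here is a sketch using Node.js's built-in `net` (TCP) and `dgram` (UDP) modules; the ports and loopback address are arbitrary choices for the example.

```typescript
import * as net from "net";
import * as dgram from "dgram";

// TCP: the stack handles connection setup, retransmission, and ordering,
// so the application just reads a reliable byte stream.
const tcpServer = net.createServer((socket) => {
  socket.on("data", (data) => {
    console.log(`TCP received: ${data}`);
    socket.end();
    tcpServer.close();
  });
});
tcpServer.listen(9000, () => {
  const client = net.createConnection(9000, "127.0.0.1", () => {
    client.end("hello over a reliable stream");
  });
});

// UDP: each message is an independent, best-effort datagram; the only
// services added on top of IP are ports and a checksum.
const udpServer = dgram.createSocket("udp4");
udpServer.on("message", (msg) => {
  console.log(`UDP received: ${msg}`);
  udpServer.close();
});
udpServer.bind(9001, () => {
  const client = dgram.createSocket("udp4");
  client.send("hello as a datagram", 9001, "127.0.0.1", () => client.close());
});
```

Strip those layers away and every application would have to reinvent port multiplexing, acknowledgements, and retransmission on top of raw IP, which is the regression the paragraph describes.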
Hacker News users discussed alternatives to TCP/UDP and the implications of not using them. Some highlighted the potential of QUIC and HTTP/3 as successors, emphasizing their improved performance and reliability features. Others explored lower-level protocols like SCTP as a possible replacement, noting its multi-streaming capabilities and potential for specific applications. A few commenters pointed out that TCP/UDP abstraction is already somewhat eroded in certain contexts like RDMA, where applications can interact more directly with the network hardware. The practicality of replacing such fundamental protocols was questioned, with some suggesting it would be a massive undertaking with limited benefits for most use cases. The discussion also touched upon the roles of the network layer and the possibility of protocols built directly on IP, acknowledging potential issues with fragmentation and reliability.
Ggwave is a small, cross-platform C library designed for transmitting data over sound using short, data-encoded tones. It focuses on simplicity and efficiency, supporting various payload formats including text, binary data, and URLs. The library handles both sending and receiving, using a frequency-shift keying (FSK) modulation scheme. It features adjustable parameters like volume, data rate, and error correction level, allowing optimization for different environments and use cases. Ggwave is designed to be easily integrated into other projects due to its small size and minimal dependencies, making it suitable for applications like device pairing, configuration sharing, or proximity-based data transfer.
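To sketch the general idea behind frequency-shift keying, the snippet below maps each bit to one of two assumed tone frequencies and synthesizes raw PCM samples. This illustrates the modulation concept only; it is not ggwave's actual API or tone plan, which uses its own multi-frequency scheme plus error correction.

```typescript
const SAMPLE_RATE = 48000;  // samples per second
const TONE_SECONDS = 0.05;  // duration of each bit's tone
const FREQ_ZERO = 1500;     // Hz, assumed tone for a 0 bit
const FREQ_ONE = 2500;      // Hz, assumed tone for a 1 bit

function encodeBits(bits: number[]): Float32Array {
  const samplesPerTone = Math.floor(SAMPLE_RATE * TONE_SECONDS);
  const out = new Float32Array(bits.length * samplesPerTone);
  bits.forEach((bit, i) => {
    const freq = bit ? FREQ_ONE : FREQ_ZERO;
    for (let n = 0; n < samplesPerTone; n++) {
      out[i * samplesPerTone + n] =
        Math.sin(2 * Math.PI * freq * (n / SAMPLE_RATE));
    }
  });
  return out;
}

// Encode the byte 0b01000001 ("A"), most significant bit first.
const samples = encodeBits([0, 1, 0, 0, 0, 0, 0, 1]);
console.log(`Generated ${samples.length} PCM samples`);
```

A receiver runs the reverse process, detecting which of the two frequencies dominates each time slice; the Goertzel algorithm mentioned in the comments below is one efficient way to do that per-frequency detection.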
HN commenters generally praise ggwave's simplicity and small size, finding it impressive and potentially useful for various applications like IoT device setup or offline data transfer. Some appreciated the clear documentation and examples. Several users discuss potential use cases, including sneaker authentication, sharing WiFi credentials, and transferring small files between devices. Concerns were raised about real-world robustness and susceptibility to noise, with some suggesting potential improvements like forward error correction. Comparisons were made to similar technologies, mentioning limitations of existing sonic data transfer methods. A few comments delve into technical aspects, like frequency selection and modulation techniques, with one commenter highlighting the choice of Goertzel algorithm for decoding.
This blog post explores improving type safety and reducing boilerplate when communicating between iOS apps and watchOS complications using Swift. The author introduces two domain-specific languages (DSLs) built with Swift's result builders. The first DSL simplifies defining data models shared between the app and complication, automatically generating the necessary Codable conformance and WatchConnectivity transfer code. The second DSL streamlines updating complications, handling the asynchronous nature of data transfer and providing compile-time checks for supported complication families. By leveraging these DSLs, the author demonstrates a cleaner, safer, and more maintainable approach to iOS/watchOS communication, minimizing the risk of runtime errors.
HN commenters generally praised the approach outlined in the article for its type safety and potential to reduce bugs in iOS/watchOS communication. Some expressed concern about the verbosity of the generated code and suggested exploring alternative approaches like protobuf or gRPC, while acknowledging their added complexity. Others questioned the necessity of a DSL for this specific problem, suggesting that Swift's existing features might suffice with careful design. The potential benefits for larger teams and complex projects were also highlighted, where the enforced type safety could prevent subtle communication errors. One commenter pointed out the similarity to Apache Thrift. Several users appreciated the author's clear explanation and practical example.
Summary of Comments (28)
https://news.ycombinator.com/item?id=43723884
HN users generally agree with the article's premise that smaller packets are better for latency. Several commenters note the importance of considering protocol overhead when discussing packet size, particularly in the context of VoIP and gaming where latency is critical. Some point out the trade-off between smaller packets (lower latency) and larger packets (higher throughput), suggesting that the "optimal" packet size depends on the specific application and network conditions. One commenter questions the article's dismissal of jumbo frames, arguing they can be beneficial in certain scenarios like data centers. Others offer additional resources and technical explanations regarding packet fragmentation and reassembly. A few commenters discuss the historical context of packet size, referencing older protocols and network limitations.
The Hacker News post "The Size of Packets" (https://news.ycombinator.com/item?id=43723884), which links to an article discussing internet packet sizes, has a moderate number of comments that delve into various aspects of networking and performance.
Several commenters discuss the historical context of packet sizes and the evolution of network technology. One commenter highlights the limitations of early Ethernet, which had a maximum transmission unit (MTU) of 1500 bytes, and how this influenced the common packet size seen today. Another points out that the introduction of jumbo frames, which allow for larger packets, aimed to improve efficiency but faced adoption challenges due to fragmentation issues and inconsistent support across networks. The complexities of balancing larger packet sizes for efficiency against the potential for increased latency and retransmissions due to errors are explored in several comments.
The topic of network overhead is also raised, with commenters discussing the proportion of a packet dedicated to headers versus actual data. The impact of different protocols, such as IPv4 and IPv6, on packet overhead is mentioned. One commenter provides specific calculations showing the overhead percentages for various scenarios, highlighting the significance of this overhead, especially with smaller packets.
Performance implications are a central theme. Some comments discuss the relationship between packet size, latency, and throughput, acknowledging that larger packets can reduce overhead and improve throughput but also increase latency in certain situations. The practical challenges of tuning network parameters to optimize for specific applications are also acknowledged.
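As a rough illustration of that trade-off, serialization delay grows linearly with packet size while the per-packet header cost shrinks. The 10 Mbit/s link rate and 40-byte header figure below are assumptions chosen for the arithmetic, not numbers from the thread.

```typescript
const LINK_BITS_PER_SECOND = 10e6; // assumed 10 Mbit/s link
const HEADER_BYTES = 40;           // minimal IPv4 + TCP headers

for (const packetBytes of [128, 576, 1500, 9000]) {
  const serializationMs = ((packetBytes * 8) / LINK_BITS_PER_SECOND) * 1000;
  const payloadShare = ((packetBytes - HEADER_BYTES) / packetBytes) * 100;
  console.log(
    `${packetBytes} B packet: ${serializationMs.toFixed(2)} ms on the wire, ` +
    `${payloadShare.toFixed(1)}% payload`
  );
}
```

A 9000-byte jumbo frame spends over 7 ms on this link before its last bit leaves, while a 128-byte packet is gone in about a tenth of a millisecond; that is the latency side of the trade-off, and the payload-share column shows the throughput side.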
Security considerations are briefly touched upon. One commenter points out that smaller packets can be beneficial for security in some contexts by reducing the impact of a single lost or corrupted packet.
Finally, a few comments offer anecdotal experiences and observations related to network performance and packet sizes in different environments. One commenter shares an experience with satellite internet where smaller packets were found to be more reliable, illustrating the real-world impact of these technical details.
Overall, the comments provide a range of perspectives on the nuances of packet sizes and their implications for network performance and efficiency. They highlight the ongoing balancing act between maximizing throughput while minimizing latency and ensuring reliability in diverse network environments. The discussion is grounded in technical details but also incorporates practical experience and historical context, offering a valuable supplement to the linked article.