Running extra fiber optic cable during initial installation, even if it seems excessive, is a highly recommended practice. Future-proofing your network infrastructure with spare fiber significantly reduces cost and effort later on. Pulling new cable is disruptive and expensive, while having readily available dark fiber allows for easy expansion, upgrades, and redundancy without the hassle of major construction or downtime. This upfront investment pays off in the long run by providing flexibility and adaptability to unforeseen technological advancements and increasing bandwidth demands.
56k modems' upstream speeds were limited to 33.6kbps by the analog-to-digital conversion performed at the phone company's central office. Downloads, however, could reach 56kbps because they rode a mostly-digital path from the ISP's equipment to the user's modem. This asymmetry existed because the phone company's infrastructure carried calls digitally internally, even for analog phone lines; the audio was converted to analog only on the last mile, at the user's local central office. A 56k modem downloading data was therefore receiving a signal that had undergone only a single digital-to-analog conversion, avoiding the quantization noise of an analog-to-digital step and thus achieving higher throughput. Uploads, originating from the analog modem, had to be fully digitized at the central office, and that quantization limited them to the lower speed.
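The numbers fall out of the telephone network's digital voice channel. A standard DS0 carries 8,000 samples per second at 8 bits each (64 kbps); robbed-bit signaling on T1 trunks could steal the least-significant bit, leaving 7 usable bits and hence 56 kbps downstream. Upstream, which must pass through a real analog-to-digital conversion, is bounded by the Shannon capacity of the voiceband loop. A quick sketch (the 35 dB loop SNR is an illustrative assumption, not a measurement):

```python
# Back-of-the-envelope numbers behind the 56k asymmetry.
# Assumptions: standard DS0 voice channel (8 kHz, 8-bit mu-law),
# robbed-bit signaling reducing usable bits to 7, and a ~3.1 kHz
# analog loop with roughly 35 dB SNR for the upstream bound.
import math

SAMPLE_RATE = 8000        # samples/s in the telco's PCM voice channel
BITS_PER_SAMPLE = 8       # mu-law companded 8-bit samples

ds0 = SAMPLE_RATE * BITS_PER_SAMPLE                # full digital channel
downstream = SAMPLE_RATE * (BITS_PER_SAMPLE - 1)   # LSB lost to robbed-bit signaling

# Upstream must survive a true analog-to-digital conversion, so it is
# bounded by the Shannon capacity of the analog loop:
bandwidth_hz = 3100       # usable voiceband, roughly 300-3400 Hz
snr_db = 35               # typical loop SNR (assumption)
shannon = bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10))

print(f"DS0 channel rate:       {ds0} bit/s")        # 64000
print(f"Downstream ceiling:     {downstream} bit/s") # 56000
print(f"Upstream Shannon bound: {shannon:.0f} bit/s")
```

The Shannon bound lands around 36 kbit/s under these assumptions, consistent with V.34's 33.6 kbps upstream ceiling.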
Several Hacker News commenters pointed out that the article's title is misleading. They clarified that 56k modems didn't rely on digital phone lines in the way the title implies. Instead, they exploited the fact that the trunk lines between central offices were digital, while the "last mile" to the user's home remained analog. This allowed the modem to receive data digitally at the CO's end and convert it to analog for the final leg, maximizing the speed within the constraints of the analog local loop. Some users also shared anecdotal memories of early modem technology and discussed the limitations imposed by analog lines. One commenter noted the importance of echo cancellation in achieving these higher speeds. A few commenters discussed related topics like the technical reasons behind the asymmetry of upload and download speeds and the different standards used for upstream communication.
The IEEE Spectrum article argues that the current trajectory of 6G development, focused on extremely high frequencies and bandwidth, might be misguided. While these frequencies offer theoretical speed improvements, they suffer from significant limitations like extremely short range and susceptibility to atmospheric interference. The article proposes a shift in focus towards utilizing the existing, and largely underutilized, mid-band spectrum for 6G. This approach, combined with advanced signal processing and network management techniques, could deliver substantial performance gains without the drawbacks of extremely high frequencies, offering a more practical and cost-effective path to a truly impactful next-generation wireless network.
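The range penalty the article points to can be illustrated with the free-space path loss formula, FSPL(dB) = 20·log10(d) + 20·log10(f) + 20·log10(4π/c): loss grows 20 dB for every tenfold increase in frequency, before atmospheric absorption even enters. The 3.5 GHz and 140 GHz figures below are representative band choices, not taken from the article:

```python
# Free-space path loss comparison: mid-band vs. a sub-THz candidate band.
# FSPL(dB) = 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB between isotropic antennas."""
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / C))

d = 100.0  # meters
mid_band = fspl_db(d, 3.5e9)    # typical 5G mid-band frequency
sub_thz  = fspl_db(d, 140e9)    # a sub-THz frequency floated for 6G

print(f"FSPL at 3.5 GHz, 100 m: {mid_band:.1f} dB")
print(f"FSPL at 140 GHz, 100 m: {sub_thz:.1f} dB")
print(f"Extra loss at 140 GHz:  {sub_thz - mid_band:.1f} dB")
```

The 140 GHz link loses about 32 dB more over the same distance, a factor of ~1,600 in power, which is why sub-THz cells shrink to tens of meters while mid-band covers whole neighborhoods.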
HN commenters largely agree that focusing on 6G is premature and driven by hype, especially given 5G's under-delivered promises and niche applications. Several express skepticism about the need for the speeds 6G promises, arguing current infrastructure improvements and better utilization of existing technologies are more pressing. Some suggest focusing on improving coverage, affordability, and power efficiency instead of chasing higher theoretical speeds. There's also concern about the research itself, with comments highlighting the impracticality of some proposed technologies and the lack of clear use cases beyond vague "future applications." A few commenters point out the cyclical nature of these G cycles, driven by marketing and telco interests rather than genuine user needs.
The blog post explores the feasibility and potential advantages of using existing telephone wiring (specifically the unused pairs in twisted-pair copper lines) for home networking. It highlights POTS's robust infrastructure and broad availability, even in areas lacking cable or fiber internet. The author discusses various modulation techniques like G.hn that could deliver speeds comparable to or exceeding current home network technologies while potentially offering better security and interference resistance than Wi-Fi. They also acknowledge challenges such as distance limitations, potential crosstalk with active phone lines (if present), and the need for new hardware. Overall, the post suggests that repurposing telephone wiring could be a viable and even superior alternative to traditional home networking methods.
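The plausibility of near-gigabit rates over a spare copper pair can be sanity-checked with the Shannon limit, C = B·log2(1 + SNR). The bandwidth and SNR figures below are illustrative assumptions for a short in-home run, not measurements from the post:

```python
# Rough Shannon-capacity sketch for a short twisted pair, to see why
# G.hn-style wideband modulation can reach high speeds on phone wiring.
# Bandwidth and SNR values are illustrative assumptions.
import math

def capacity_mbps(bandwidth_mhz: float, snr_db: float) -> float:
    """Shannon limit C = B * log2(1 + SNR), returned in Mbit/s."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_mhz * math.log2(1 + snr_linear)

# A short, clean run vs. a wider-band but noisier one:
print(f"50 MHz @ 30 dB SNR:  {capacity_mbps(50, 30):.0f} Mbit/s")
print(f"100 MHz @ 20 dB SNR: {capacity_mbps(100, 20):.0f} Mbit/s")
```

Even these conservative assumptions put the theoretical ceiling in the hundreds of megabits, which is why the post's comparison to current home networking technologies is not far-fetched for short runs; real rates drop quickly with distance and crosstalk.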
Hacker News users generally found the idea of networking over phone lines intriguing, though impractical in most modern contexts. Several commenters discussed the technical details, pointing out limitations in bandwidth and the potential interference issues with existing phone services like DSL. Some reminisced about earlier phone networking attempts, including using BBS systems and dedicated phone-line networking hardware. The consensus was that while the technical challenge is interesting, existing solutions like Ethernet and WiFi are far superior for most residential networking needs, making this approach a novelty rather than a practical solution. A few users pointed out niche use cases, such as situations where running new cables is impossible or extremely difficult, suggesting a very limited potential application.
The AMD Instinct MI300A boasts a massive, unified memory subsystem, key to its performance as an APU designed for AI and HPC workloads. It provides 128GB of HBM3 memory arranged as eight 16GB stacks, offering impressive bandwidth. This memory is unified across the CPU and GPU chiplets, simplifying programming and boosting efficiency. AMD achieves this through a sophisticated design combining Infinity Fabric links, memory controllers integrated into the base I/O dies beneath the compute chiplets, and a complex scheduling system to manage data movement. This architecture allows the MI300A to access and process large datasets efficiently, crucial for the demanding tasks it targets.
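The headline figures follow from simple arithmetic over the widely reported HBM3 configuration (eight stacks, a 1024-bit interface per stack, 5.2 Gbit/s per pin; treat these as assumptions rather than a spec citation):

```python
# Where the MI300A's capacity and memory bandwidth come from (sketch).
# Assumed configuration: 8 HBM3 stacks, 1024-bit bus per stack,
# 5.2 Gbit/s per pin -- figures widely reported for this part.
STACKS = 8
STACK_CAPACITY_GB = 16
BUS_WIDTH_BITS = 1024     # per HBM3 stack
PIN_RATE_GBPS = 5.2       # Gbit/s per data pin

capacity_gb = STACKS * STACK_CAPACITY_GB
bandwidth_gbs = STACKS * BUS_WIDTH_BITS * PIN_RATE_GBPS / 8  # GB/s

print(f"Capacity:  {capacity_gb} GB")                  # 128 GB
print(f"Bandwidth: {bandwidth_gbs / 1000:.1f} TB/s")   # ~5.3 TB/s
```

Under these assumptions the peak aggregate bandwidth works out to roughly 5.3 TB/s, shared coherently by the CPU and GPU chiplets.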
Hacker News users discussed the complexity and impressive scale of the MI300A's memory subsystem, particularly the challenges of managing coherence across such a large and varied memory space. Some questioned the real-world performance benefits given the overhead, while others expressed excitement about the potential for new kinds of workloads. The innovative use of HBM and on-die memory alongside standard DRAM was a key point of interest, as was the potential impact on software development and optimization. Several commenters noted the unusual architecture and speculated about its suitability for different applications compared to more traditional GPU designs. Some skepticism was expressed about AMD's marketing claims, but overall the discussion was positive, acknowledging the technical achievement represented by the MI300A.
Summary of Comments (86)
https://news.ycombinator.com/item?id=43471177
HN commenters largely agree with the author's premise: running extra fiber is cheap insurance against future needs and troubleshooting. Several share anecdotes of times extra fiber saved the day, highlighting the difficulty and expense of retrofitting later. Some discuss practical considerations like labeling, conduit space, and potential damage during construction. A few offer alternative perspectives, suggesting that focusing on good documentation and flexible network design can sometimes be more valuable than simply laying more fiber. The discussion also touches on the importance of considering future bandwidth demands and the increasing prevalence of fiber in residential settings.
The Hacker News post "If you get the chance, always run more extra network fiber cabling" generated a lively discussion with several insightful comments. Many commenters strongly agreed with the premise of running extra fiber, emphasizing the relatively low cost of the cable itself compared to the labor involved in installation, making it a worthwhile investment for future-proofing.
Several users shared anecdotes reinforcing this point. One commenter recounted a situation where pre-running extra fiber saved them significant time and money when they unexpectedly needed to expand their network infrastructure. Another highlighted the difficulty and expense of retrofitting fiber in older buildings, emphasizing the wisdom of over-provisioning during initial construction.
A few commenters offered practical advice on implementing this strategy: label cables clearly, use high-quality cable for longevity, and plan for future bandwidth needs. One commenter specifically recommended OM5 fiber for its higher bandwidth capacity, while another cautioned against going overboard, arguing against running exorbitant amounts of fiber "just because" and instead recommending a sensible, needs-based approach to over-provisioning.
The discussion also touched on the importance of proper documentation. Commenters stressed the need for accurate records of cable runs, including detailed diagrams and labeling, to facilitate future maintenance and upgrades. This was highlighted as particularly important in larger or more complex installations where tracking cable runs can become difficult.
Some users also mentioned the potential benefits of dark fiber – unused optical fiber – for future expansion or leasing opportunities. This was presented as another argument for installing more fiber than immediately necessary.
Finally, a few comments addressed the broader context of network planning, emphasizing the importance of considering not just fiber but also other aspects of network infrastructure like conduit space and power distribution. These commenters argued for a holistic approach to network design, considering all interconnected elements.
Overall, the comments on Hacker News strongly supported the idea of running extra fiber cabling whenever possible, citing cost savings, future-proofing, and the challenges of retrofitting. The discussion provided practical advice on implementation and highlighted the importance of documentation and a comprehensive approach to network planning.