"A Tale of Four Kernels" examines the performance characteristics of four different operating system microkernels: Mach, Chorus, Windows NT, and L4. The paper argues that microkernels, despite their theoretical advantages in modularity and flexibility, have historically underperformed monolithic kernels due to high inter-process communication (IPC) costs. Through detailed measurements and analysis, the authors demonstrate that while Mach and Chorus suffer significantly from IPC overhead, L4's highly optimized IPC mechanisms allow it to achieve performance comparable to monolithic systems. The study reveals that careful design and implementation of IPC primitives are crucial for realizing the potential of microkernel architectures, with L4 showcasing a viable path towards efficient and flexible OS structures. Windows NT, despite being marketed as a microkernel, is shown to have a hybrid structure closer to a monolithic kernel, sidestepping the IPC bottleneck but also foregoing the modularity benefits of a true microkernel.
In December 2008, a dike holding back a massive coal ash pond at the Tennessee Valley Authority's Kingston Fossil Plant failed, releasing over a billion gallons of toxic sludge. The deluge inundated the surrounding community, burying homes and covering hundreds of acres in a thick layer of coal ash, a combustion byproduct that contains heavy metals and radioactive materials. The disaster displaced families, damaged property, and spurred long-term health concerns among residents and cleanup workers, many of whom later developed cancers and other illnesses linked to coal ash exposure. The TVA ultimately took responsibility for the spill, which was attributed to faulty dike construction, and undertook a lengthy and expensive cleanup.
HN commenters largely focus on the lack of accountability for TVA and the devastating long-term health consequences for the Kingston community. Several highlight the inadequacy of the $43 million settlement given the scale of the disaster and the ongoing health problems. Some commenters point to the inherent risks of coal ash storage and the need for better regulations and enforcement. Commenters also contrast the treatment of the Kingston community with the response a similar disaster would likely have drawn in a wealthier area, with many feeling that environmental injustice played a significant role. A few comments provide further context on coal ash disposal and regulatory failures, referencing other similar incidents. Some also express frustration with the slow pace of cleanup and the scant media attention the disaster received.
The 2008 blog post argues that Windows wasn't truly "free" for businesses, despite the common perception. While the OS itself came bundled with PCs, the associated costs of management, maintenance, software licensing (especially for Microsoft Office and server products), antivirus, and dealing with malware dwarfed the nominal cost of the OS. The author contends that these hidden expenses made Windows a more expensive option than perceived free alternatives like Linux, particularly for smaller businesses. Ultimately, the "free" Windows license functioned as a loss leader for Microsoft's other revenue streams, making it a profitable, albeit deceptive, business model.
Hacker News users discussed the complexities of Microsoft's "free" Windows licensing model for businesses. Several pointed out that while the OS itself might not have a direct upfront cost, it's bundled with hardware purchases, making it an indirect expense. Others highlighted the ongoing costs associated with Windows, such as Software Assurance for updates and support, along with the costs of managing Active Directory and other related infrastructure. The general consensus was that "free" is a misleading term, and the true cost of Windows for businesses is substantial when considering the total cost of ownership. Some commenters also discussed the historical context of the article (from 2008) and how Microsoft's licensing and business models have evolved since then.
In 2008, amidst controversy surrounding its initial Chrome End User License Agreement (EULA), Google clarified that the license only applied to Chrome itself, not to user-generated content created using Chrome. Matt Cutts explained that the broad language in the original EULA was standard boilerplate, intended for protecting Google's intellectual property within the browser, not claiming ownership over user data. The company quickly revised the EULA to eliminate ambiguity and explicitly state that Google claims no rights to user content created with Chrome. This addressed concerns about Google overreaching and reassured users that their work remained their own.
HN commenters in 2023 discuss Matt Cutts' 2008 blog post clarifying Google's Chrome license agreement. Several express skepticism of Google, pointing out that the license has changed since the post and that Google's data collection practices are extensive regardless. Some commenters suggest the original concern arose from a misunderstanding of legalese surrounding granting a license to use software versus a license to user-created content. Others mention that granting a license to "sync" data is distinct from other usage and requires its own scrutiny. A few commenters reflect on the relative naivety of concerns about data privacy in 2008 compared to the present day, where such concerns are much more widespread. The discussion ultimately highlights the evolution of public perception regarding online privacy and the persistent distrust of large tech companies like Google.
The Kaminsky DNS vulnerability exploited a weakness in how DNS resolvers cache responses. By querying a resolver for a stream of nonexistent subdomains, an attacker could race the legitimate nameserver with forged responses, poisoning the resolver's cache with a malicious IP address. Because the DNS transaction ID is only 16 bits, and many resolvers generated it predictably, it was relatively easy to guess the correct ID and have a forged response accepted. This let attackers redirect traffic intended for legitimate websites to malicious servers, facilitating phishing and other attacks. The vulnerability was mitigated chiefly by randomizing resolvers' UDP source ports in addition to transaction IDs, greatly increasing the entropy an attacker must guess and making forged responses far less likely to be accepted.
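To see why 16 bits of transaction ID was too little, and why source-port randomization mattered, here is a back-of-envelope C calculation. The numbers (forged replies per race, number of races) are illustrative choices of my own, not figures from the guide; the model assumes each forged reply independently matches the outstanding query with probability 1/N, and that Kaminsky's bogus-subdomain trick lets the attacker rerun the race at will.

```c
/* Rough odds for the Kaminsky race under simplifying assumptions.
 * N is the size of the space the attacker must guess: 2^16 with only
 * the transaction ID, roughly 2^32 once source ports are randomized. */
#include <stdio.h>
#include <math.h>

/* P(at least one of k independent guesses hits a space of size n) */
static double hit_prob(double n, double k) {
    return 1.0 - pow(1.0 - 1.0 / n, k);
}

int main(void) {
    double k = 100.0;       /* forged replies landed per race window */
    double races = 10000.0; /* each bogus subdomain starts a new race */

    printf("per-race success with %.0f forged replies:\n", k);
    printf("  TXID only (N = 2^16):   %.4f\n", hit_prob(65536.0, k));
    printf("  TXID + port (N = 2^32): %.10f\n", hit_prob(4294967296.0, k));

    printf("overall success after %.0f races:\n", races);
    printf("  TXID only:   %.4f\n",
           1.0 - pow(1.0 - hit_prob(65536.0, k), races));
    printf("  TXID + port: %.6f\n",
           1.0 - pow(1.0 - hit_prob(4294967296.0, k), races));
    return 0;
}
```

With only the transaction ID to guess, repeated races push the attacker's overall odds toward certainty within minutes; the extra port entropy drops the same attack to a fraction of a percent.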
The Hacker News comments on the illustrated guide to the Kaminsky DNS vulnerability largely praise the clarity and helpfulness of the guide, especially its visual aids. Several commenters reminisce about dealing with the vulnerability when it was discovered, highlighting the urgency and widespread impact it had at the time. Some discuss technical details, including the difficulty of patching all affected DNS servers and the intricacies of the exploit itself. One commenter points out that the same underlying issue (predictable transaction IDs) has cropped up in other protocols besides DNS. Another emphasizes the importance of the vulnerability's disclosure and coordinated patching process as a positive example of handling security flaws responsibly. A few users also link to related resources, including Dan Kaminsky's own presentations on the vulnerability.
Summary of Comments (5)
https://news.ycombinator.com/item?id=43404617
Hacker News users discuss the practical implications and historical context of the "Four Kernels" paper. Several commenters highlight the paper's effectiveness in teaching OS fundamentals, particularly for those new to the subject. The simplicity of the kernels, along with the provided code, allows for easy comprehension and experimentation. Some discuss how valuable this approach is compared to diving straight into a complex kernel like Linux. Others point out that while pedagogically useful, these simplified kernels lack the complexities of real-world operating systems, such as memory management and device drivers. The historical significance of MINIX 3 is also touched upon, with one commenter mentioning Tanenbaum's involvement and the influence of these kernels on educational materials. The overall sentiment is that the paper is a valuable resource for learning OS basics.
The Hacker News post titled "A Tale of Four Kernels [pdf] (2008)" linking to a paper comparing microkernels has a modest number of comments, primarily focusing on the practicality and performance implications of microkernels.
One commenter highlights the historical context of the paper, mentioning that it was written during a time when multicore systems were becoming prevalent, leading to renewed interest in microkernels due to their potential advantages in terms of isolation and modularity. They also point out the paper's focus on the perceived performance disadvantages of microkernels, which had often been cited as a major drawback.
Another commenter discusses L4's IPC "fast path", explaining that while Mach (an earlier microkernel) incurred significant overhead for inter-process communication, L4 made the common case extremely fast by streamlining the message-passing mechanism and minimizing context-switching overhead.
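For illustration, here is a schematic C sketch of that fast-path idea. Every name in it (tcb_t, switch_to, ipc_slow_path, the register count) is a hypothetical stand-in, not the real L4 interface or implementation: if the message fits in the virtual message registers and the receiver is already blocked waiting, the kernel copies the payload directly and switches straight to the receiver, reserving the general machinery for everything else.

```c
/* Schematic fast-path/slow-path split; stubs stand in for real kernel
 * machinery so the sketch runs as an ordinary program. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

enum { MSG_REGS = 8 };                /* payload that fits "in registers" */

typedef struct tcb {
    const char   *name;
    bool          waiting_for_ipc;    /* receiver blocked in recv()?     */
    unsigned long mrs[MSG_REGS];      /* virtual message registers       */
} tcb_t;

static void switch_to(tcb_t *next) {  /* stub for a direct context switch */
    printf("fast path: switch to %s, mr0=%lu\n", next->name, next->mrs[0]);
}

static long ipc_slow_path(tcb_t *src, tcb_t *dst,
                          const unsigned long *msg, size_t words) {
    (void)src; (void)msg;             /* stub for the general machinery   */
    printf("slow path: buffered %zu-word send to %s\n", words, dst->name);
    return 0;
}

long ipc_send(tcb_t *self, tcb_t *dst, const unsigned long *msg, size_t words)
{
    /* Fast path: short message and receiver already blocked waiting.
       Copy payload register-to-register and switch straight to the
       receiver: no scheduler pass, no in-kernel buffering. */
    if (words <= MSG_REGS && dst->waiting_for_ipc) {
        for (size_t i = 0; i < words; i++)
            dst->mrs[i] = msg[i];
        dst->waiting_for_ipc = false;
        switch_to(dst);
        return 0;
    }
    /* Everything else (long messages, receiver not ready, timeouts)
       falls back to the general, much costlier path. */
    return ipc_slow_path(self, dst, msg, words);
}

int main(void) {
    tcb_t server = { "server", true,  {0} };
    tcb_t client = { "client", false, {0} };
    unsigned long msg[2] = { 42, 7 };

    ipc_send(&client, &server, msg, 2);   /* takes the fast path */
    ipc_send(&client, &server, msg, 2);   /* server no longer waiting */
    return 0;
}
```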
A further comment elaborates on the performance trade-offs of microkernels, acknowledging the inherent overhead of message passing but arguing that careful design and optimization can mitigate this significantly. They suggest that the benefits of microkernels, such as improved security and reliability, can outweigh the performance costs in certain applications.
One commenter notes the difficulty in achieving ideal performance with microkernels, especially when dealing with shared memory. They point to the challenges of managing memory access and maintaining consistency across different components of the system.
A user mentions seL4, a formally verified microkernel, as a significant advancement in the field. They explain that formal verification provides strong guarantees about the correctness of the kernel, potentially leading to improved security and reliability.
Finally, a commenter highlights the historical preference for monolithic kernels in widely adopted operating systems like Windows, macOS, and Linux, attributing this to their perceived simplicity and performance advantages. They suggest that the complexities of microkernel design and implementation have hindered their widespread adoption.
In summary, the comments revolve around the trade-off between performance and other desirable characteristics such as security and modularity, highlighting both ongoing advances in microkernel design and the challenges microkernels face in competing with established monolithic kernels.