The blog post "Windows BitLocker – Screwed Without a Screwdriver" details a frustrating and potentially data-loss-inducing scenario involving Windows BitLocker encryption and a Secure Boot configuration change. The author recounts how they inadvertently triggered a BitLocker recovery key prompt after updating their computer's firmware. This seemingly innocuous update modified the Secure Boot configuration, specifically by enabling the Platform Key (PK) protection. BitLocker, designed with robust security in mind, interpreted this change as a potential security compromise, suspecting that an unauthorized actor might have tampered with the boot process. As a safeguard against potential malicious activity, BitLocker locked the drive and demanded the recovery key.
The author emphasizes the surprising nature of this event. There were no explicit warnings about the potential impact of a firmware update on BitLocker. The firmware update process itself didn't highlight the Secure Boot modification in a way that would alert the user to the potential consequences. This lack of clear communication created a situation where a routine update turned into a scramble for the BitLocker recovery key.
The post underscores the importance of securely storing the BitLocker recovery key. Without access to this key, the encrypted data on the drive becomes inaccessible, effectively resulting in data loss. The author highlights the potential severity of this situation, especially for users who may not have readily available access to their recovery key.
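For readers acting on that advice, here is a minimal sketch of how the recovery key can be viewed and BitLocker temporarily suspended from an elevated Windows command prompt. This is general BitLocker tooling, not something the blog post itself walks through:

```
REM Show all key protectors for C:, including the numerical recovery password
manage-bde -protectors -get C:

REM Suspend BitLocker for one reboot so a planned firmware update does not
REM trigger the recovery prompt; protection resumes automatically afterward
manage-bde -protectors -disable C: -RebootCount 1
```

Suspending protection before a deliberate firmware or Secure Boot change is the standard mitigation for exactly the scenario the author hit.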
Furthermore, the post subtly criticizes the design of BitLocker and its interaction with Secure Boot. The author argues that triggering a recovery key prompt for a legitimate firmware update, especially one initiated by the user themselves, is an overreaction. A more nuanced approach, perhaps involving a warning or a less drastic security measure, would have been preferable. The author suggests that the current implementation creates unnecessary anxiety and potential data loss risks for users who perform routine system updates.
Finally, the post serves as a cautionary tale for other Windows users who utilize BitLocker. It stresses the necessity of understanding the implications of Secure Boot changes and the critical role of the BitLocker recovery key. It encourages proactive measures to ensure the recovery key is safely stored and accessible, mitigating the risk of data loss in similar scenarios. The author implies that better communication and more user-friendly design choices regarding BitLocker and Secure Boot interactions would significantly improve the user experience and reduce the risk of unintended data loss.
Microsoft has announced that it will cease providing support for Microsoft 365 applications on the Windows 10 operating system after October 14, 2025. This means that after this date, users who continue to utilize Windows 10 will no longer receive security updates, bug fixes, or technical support for their Microsoft 365 apps, which include popular productivity software like Word, Excel, PowerPoint, Outlook, and Teams. This effectively ends the functional lifespan of Microsoft 365 on Windows 10, although the applications may continue to operate for a period afterward, albeit with increasing security risks and potential compatibility issues.
This decision aligns with Microsoft's broader strategy of encouraging users to migrate to Windows 11, the company's latest operating system. Although Windows 10 itself receives security updates until that same October 14, 2025 cutoff, the loss of support for crucial productivity applications like Microsoft 365 makes it a far less attractive platform for businesses and individuals who rely on those applications for their daily workflow. The move underscores the importance of keeping operating systems and software current for ongoing compatibility and security: users who want full Microsoft 365 support after the deadline will need to upgrade to Windows 11, while those who stay put face growing security vulnerabilities and lose access to the latest features and functionality. In effect, the announcement deprecates Windows 10 as a viable platform for the Microsoft 365 suite and pushes users toward the Windows 11 ecosystem.
The Hacker News post titled "Microsoft won't support Office apps on Windows 10 after October 14th" has generated a number of comments discussing the implications of Microsoft's decision. Several commenters express frustration and cynicism regarding Microsoft's perceived strategy of pushing users towards newer operating systems and subscription services.
One highly upvoted comment points out the confusion this creates for users, especially given that Windows 10 is still supported until 2025. They highlight the discrepancy between supporting the OS but not the core productivity suite on that OS, questioning the logic behind this move. The commenter suggests this is a tactic to force upgrades to Windows 11, even if users are content with their current setup.
Another commenter echoes this sentiment, expressing annoyance at the constant pressure to upgrade, particularly when they are satisfied with the performance and stability of their existing software. They feel this is a blatant attempt by Microsoft to increase revenue through forced upgrades and subscriptions.
The theme of planned obsolescence is also raised, with one user arguing that this is a classic example of a company artificially limiting the lifespan of perfectly functional software to drive sales. They express disappointment in this practice and the lack of consideration for users who prefer stability over constant updates.
Some commenters discuss the technical implications, questioning the specific reasons why Office apps wouldn't function on a supported OS. They speculate about potential security concerns or underlying changes in the software architecture that necessitate the change. However, there's a general skepticism towards these explanations, with many believing it's primarily a business decision rather than a technical necessity.
A few users offer practical advice, suggesting alternatives like LibreOffice or using older, perpetual license versions of Microsoft Office. They also discuss the possibility of using virtual machines to run Windows 11 if necessary.
Several comments mention the security implications, with some suggesting that this move might actually improve security by forcing users onto a more modern and regularly updated platform. However, this is countered by others who argue that forced upgrades can disrupt workflows and create vulnerabilities if not handled properly.
Overall, the comments reflect a general sentiment of frustration and skepticism towards Microsoft's decision. Many users perceive it as a manipulative tactic to drive revenue and force upgrades, rather than a move based on genuine technical necessity or user benefit. The discussion highlights the ongoing tension between software companies' desire for continuous updates and users' preference for stability and control over their systems.
This blog post, titled "Why is my CPU usage always 100%? (Upgrading my Chumby 8 kernel part 9)", details the author's ongoing journey to upgrade the Linux kernel on their Chumby 8, a now-discontinued internet appliance. A persistent issue of 100% CPU utilization plagues the device after the kernel upgrade, prompting a deep dive into diagnosing the root cause.
Initially, the author suspects a runaway process is consuming all available CPU cycles. Using the top command, they identify the culprit as the kworker process, specifically a kernel thread dedicated to handling software interrupts. This discovery shifts the focus from a misbehaving user-space application to a problem within the kernel itself.
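For illustration, a kernel thread hogging the CPU can typically be spotted from a shell like this (a generic sketch; the Chumby's BusyBox userland may support fewer options):

```
# Kernel threads appear in top with their names in square brackets, e.g. [kworker/0:1]
top -b -n 1 | head -20

# ps can rank processes by CPU usage to confirm which thread is busiest
ps -eo pid,comm,%cpu --sort=-%cpu | head
```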
The author's investigation then explores various potential sources of excessive software interrupts. They meticulously eliminate possibilities such as network interrupts by disconnecting the device from the network, and timer interrupts by analyzing their frequency and confirming they are within expected parameters.
The post highlights the challenges of debugging kernel-level issues, especially on an embedded system with limited resources and debugging tools. The author leverages the available tools, including top, /proc/interrupts, and kernel debugging messages, to progressively narrow down the problem.
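For example, interrupt activity can be tracked entirely from /proc, which suits a resource-constrained target like this one (a generic sketch, not the author's exact procedure):

```
# Snapshot the hardware interrupt counters, wait, and diff to see which lines fire
cat /proc/interrupts > /tmp/irq.1
sleep 10
cat /proc/interrupts > /tmp/irq.2
diff /tmp/irq.1 /tmp/irq.2

# Software interrupt (softirq) activity is broken out separately
cat /proc/softirqs
```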
Through a process of elimination and careful observation, the author eventually identifies the excessive software interrupts as stemming from the SD card driver. The continuous stream of interrupts from the SD card controller overwhelms the system, leading to the observed 100% CPU usage. While the exact reason for the SD card driver's behavior remains unclear at the end of the post, the author pinpoints the source of the problem and sets the stage for further investigation in future installments. The post concludes by emphasizing the iterative nature of debugging and the importance of systematically eliminating potential causes.
The Hacker News post discussing the blog post "Why is my CPU usage always 100%? Upgrading my Chumby 8 kernel (Part 9)" has several comments exploring various aspects of the situation and offering potential solutions.
One commenter points out the inherent difficulty in debugging such embedded systems, highlighting the lack of sophisticated tools and the often obscure nature of the problems. They sympathize with the author's struggle, acknowledging the frustration that can arise when dealing with limited resources and cryptic error messages.
Another commenter questions the author's decision to stick with the older kernel (2.6.32), suggesting that moving to a more modern kernel might be a more efficient approach in the long run. They acknowledge the author's stated reasons for remaining with the older kernel (familiarity and control) but argue that the benefits of a newer kernel, including potential performance improvements and bug fixes, might outweigh the effort involved in upgrading.
A third commenter focuses on the specific issue of the kworker process consuming high CPU. They suggest investigating whether a driver is misbehaving or if some background process is stuck in a loop, and propose using tools like strace or perf to pinpoint the culprit and gain a better understanding of the kernel's behavior. This commenter also mentions the possibility of a hardware issue, although they consider it less likely.
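A rough sketch of those suggestions, assuming strace and perf are actually available on the target, which is far from guaranteed on a small embedded system:

```
# Live view of where the system is spending CPU time, kernel symbols included
perf top

# Record ten seconds of system-wide samples with call graphs, then inspect them
perf record -g -a -- sleep 10
perf report

# Summarize the system calls of a specific suspect process (the PID is a placeholder)
strace -c -p 1234
```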
Further discussion revolves around the challenges of real-time systems and the potential impact of interrupt handling on CPU usage. One commenter suggests examining interrupt frequencies and considering the possibility of interrupt coalescing to reduce overhead.
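Interrupt coalescing is most familiar from network drivers, where it can be tuned with ethtool; a hypothetical example follows (the interface name and values are placeholders, and an embedded driver may not support this at all):

```
# Show the current coalescing settings for a network interface
ethtool -c eth0

# Wait up to 100 microseconds before raising a receive interrupt,
# trading a little latency for far fewer interrupts
ethtool -C eth0 rx-usecs 100
```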
Finally, there's a brief exchange about the Chumby device itself, with one commenter expressing nostalgia for the device and another sharing their own experience with embedded systems development. This adds a touch of personal reflection to the technical discussion.
Overall, the comments provide a valuable extension to the blog post, offering diverse perspectives on debugging embedded systems, troubleshooting high CPU usage, and the specific challenges posed by the Chumby 8 and its older kernel. The commenters offer practical suggestions and insights drawn from their own experiences, creating a collaborative problem-solving environment.
The blog post "DOS APPEND" from the OS/2 Museum meticulously details the functionality and nuances of the APPEND
command in various DOS versions, primarily focusing on its evolution and differences compared to the PATH
command. APPEND
, much like PATH
, allows programs to access data files located in directories other than their current working directory. However, while PATH
focuses on executable files, APPEND
extends this capability to data files, specified by various file extensions.
The article begins by explaining the initial purpose of APPEND in DOS 3.3, highlighting its ability to search specified directories for data files when a program attempts to open a file not found in the current directory. This eliminates the need for programs to explicitly handle path information for data files. The post then traces the development of APPEND through later DOS versions, including DOS 3.31, where a significant bug related to networked drives was addressed.
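As a hypothetical illustration of that behavior (the directory and file names are invented):

```
REM Tell DOS to search these directories when a data file is not found locally
APPEND C:\DATA;C:\SHARED

REM TYPE opens REPORT.TXT; if it is absent from the current directory,
REM the appended directories are searched in order
TYPE REPORT.TXT
```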
A key distinction between APPEND and PATH is elaborated upon: PATH affects only the search for executable files (.COM, .EXE, and .BAT), while APPEND applies to the data files that programs open. This difference is crucial for understanding their respective roles within the DOS environment.
The blog post further delves into the various ways APPEND can be used, outlining the command-line switches and their effects. These switches include /E, which loads the appended directories into an environment variable; /PATH:ON, which enables searching the appended directories even when a full path is provided for a file; and /PATH:OFF, which disables this behavior. The post also explains the use of /X, which extends the functionality of APPEND to affect EXEC function calls, thus influencing child processes.
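A sketch of how those switches look in practice, following the descriptions above (exact syntax varies across DOS versions, so treat this as illustrative):

```
REM /E keeps the search list in the APPEND environment variable; in MS-DOS it
REM must be given on the first invocation, before any directories are listed
APPEND /E
APPEND C:\DATA;C:\UTIL

REM /PATH:ON searches appended directories even when a full path is supplied;
REM /PATH:OFF restores the default behavior
APPEND /PATH:ON

REM /X extends APPEND to EXEC calls, so child processes are affected too
APPEND /X
```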
The evolution of APPEND continues to be discussed, noting the removal of the problematic /X:ON and /X:OFF switches in later versions due to their instability. The article also touches upon the differences in behavior between APPEND in MS-DOS/PC DOS and DR DOS, particularly concerning the handling of the ; delimiter in the APPEND list and the search order when multiple directories are specified.
Finally, the post concludes by briefly discussing the persistence of APPEND in later Windows versions for compatibility, even though its utility diminishes in these more advanced operating systems with their more sophisticated file management capabilities. The article thoroughly explores the intricacies and historical context of the APPEND command, offering a comprehensive understanding of its functionality and its place within the broader DOS ecosystem.
The Hacker News post titled "DOS APPEND" with the link https://www.os2museum.com/wp/dos-append/ has several comments discussing the utility of the APPEND
command in DOS and OS/2, as well as its quirks and comparisons to other operating systems.
One commenter recalls using APPEND frequently and finding it incredibly useful, particularly for accessing data files located in different directories without having to constantly change directories or use full paths. They highlight the convenience it offered in the days before sophisticated integrated development environments (IDEs).
Another commenter draws a parallel between APPEND and the modern concept of environment variables like $PATH in Unix-like systems, which serve a similar purpose of specifying locations where the system should search for executables. They also touch on how APPEND differed slightly in OS/2, specifically regarding the handling of data files versus executables.
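The Unix-side analogue the commenter has in mind, sketched in shell form:

```
# PATH is a colon-separated list of directories searched for executables
export PATH="$PATH:/opt/tools/bin"
echo "$PATH"

# Unix has no general equivalent for data files, which is what made APPEND
# distinctive; the closest analogues are per-purpose variables like MANPATH
```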
Further discussion revolves around the intricacies of APPEND's behavior. One comment explains how APPEND didn't just search the appended directories but actually made them appear as if they were part of the current directory, creating a virtualized directory structure. This led to some confusion and unexpected behavior in certain situations, especially with programs that relied on obtaining the current working directory.
One user recounts experiences with the complexities of managing multiple directories and files in early versions of Turbo Pascal, illustrating the context where a tool like APPEND would have been valuable. This comment also highlights the limited tooling available at the time, emphasizing the appeal of features like APPEND for streamlining development workflows.
Someone points out the potential for conflicts and unexpected results when using APPEND with programs that create files in the current directory. They suggest that APPEND's behavior could lead to files being inadvertently created in a directory different from the intended one, depending on how the program handled relative paths.
The security implications of APPEND are also addressed, with a comment mentioning the risks associated with accidentally executing programs from untrusted directories added to the APPEND path. This highlights the potential security vulnerabilities that could arise from misuse or improper configuration of the command.
Finally, there's a mention of a similar feature called apppath in the REXX language, further illustrating the cross-platform desire for this kind of directory management functionality.
Overall, the comments paint a picture of APPEND as a powerful but somewhat quirky tool that provided a valuable solution to directory management challenges in the DOS/OS/2 era, while also introducing potential pitfalls that required careful consideration. The discussion showcases how APPEND reflected the computing landscape of the time and how its functionality foreshadowed concepts that are commonplace in modern operating systems.
This blog post by Naehrdine explores an unexpected reboot phenomenon observed on an iPhone running iOS 18 and details the process of reverse engineering the operating system to pinpoint the root cause. The author begins by describing the seemingly random nature of the reboots, noting they occurred after periods of inactivity, specifically overnight while the phone was charging and seemingly unused. This led to initial suspicions of a hardware issue, but traditional troubleshooting steps, like resetting settings and even a complete device restore using iTunes, failed to resolve the problem.
Faced with the persistence of the issue, the author embarked on a deeper investigation involving reverse engineering iOS 18. This involved utilizing tools and techniques to analyze the operating system's inner workings. The post explicitly mentions the use of Frida, a dynamic instrumentation toolkit, which allows for the injection of custom code into running processes, enabling real-time monitoring and manipulation. The author also highlights the use of a disassembler and debugger to examine the compiled code of the operating system and trace its execution flow.
The investigation focused on system daemons, which are background processes responsible for essential system operations. Through meticulous analysis, the author identified a specific daemon, 'powerd', as the likely culprit. 'powerd' is responsible for managing the device's power state, including sleep and wake cycles. Further examination of 'powerd' revealed a previously unknown internal check within the daemon related to prolonged inactivity. This check, under certain conditions, was triggering an undocumented system reset.
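To give a flavor of the dynamic instrumentation involved, here is a generic sketch of Frida's command-line tools rather than the author's actual commands; the traced symbol pattern is a placeholder:

```
# List processes on a USB-attached device that frida-server can reach
frida-ps -U

# Attach to the powerd daemon and trace C functions matching a pattern
frida-trace -U -n powerd -i "*Sleep*"
```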
The blog post then meticulously details the specific function within 'powerd' that was causing the reboot, providing the function's name and a breakdown of its logic. The author's analysis revealed that the function appears to be designed to mitigate potential hardware or software issues arising from extended periods of inactivity by forcing a system restart. However, this function seemed to be malfunctioning, triggering the reboot even in the absence of any genuine problems.
While the author stops short of providing a definitive solution or patch, the post concludes by expressing confidence that the identified function is indeed responsible for the unexplained reboots. The in-depth analysis presented provides valuable insights into the inner workings of iOS power management and offers a potential starting point for developing a fix, either through official Apple updates or community-driven workarounds. The author's work demonstrates the power of reverse engineering in uncovering hidden behaviors and troubleshooting complex software issues.
The Hacker News post titled "Reverse Engineering iOS 18 Inactivity Reboot" sparked a discussion with several insightful comments.
One commenter questioned the necessity of the inactivity reboot, especially given its potential to interrupt important tasks like long-running computations or data transfers. They also expressed concern about the lack of user control over this feature.
Another commenter pointed out the potential security implications of the reboot, particularly if a device is left unattended and unlocked in a sensitive environment. They suggested the need for an option to disable the automatic reboot for specific situations.
A different commenter shared their personal experience with the inactivity reboot, describing the frustration of having their device restart unexpectedly during a long process. They emphasized the importance of giving users more control over such system behaviors.
Several commenters discussed the technical aspects of the reverse engineering process, praising the author of the blog post for their detailed analysis. They also speculated about the potential reasons behind Apple's implementation of the inactivity reboot, such as memory management or security hardening.
One commenter suggested that the reboot might be related to preventing potential exploits that rely on long-running processes, but acknowledged the inconvenience it causes for users.
Another commenter highlighted the potential negative impact on accessibility for users who rely on assistive technologies, as the reboot could interrupt their workflow and require them to reconfigure their settings.
Overall, the comments reflect a mix of curiosity about the technical details, concern about the potential drawbacks of the feature, and a desire for more user control over the behavior of their devices. The commenters generally appreciate the technical analysis of the blog post author while expressing a need for Apple to provide options or clarity around this feature.
Summary of comments (57) on the Hacker News thread at https://news.ycombinator.com/item?id=42747877:
HN commenters generally concur with the article's premise that relying solely on BitLocker without additional security measures like a TPM or Secure Boot can be risky. Several point out how easy it is to modify boot order or boot from external media to bypass BitLocker, effectively rendering it useless against a physically present attacker. Some commenters discuss alternative full-disk encryption solutions like Veracrypt, emphasizing its open-source nature and stronger security features. The discussion also touches upon the importance of pre-boot authentication, the limitations of relying solely on software-based security, and the practical considerations for different threat models. A few commenters share personal anecdotes of BitLocker failures or vulnerabilities they've encountered, further reinforcing the author's points. Overall, the prevailing sentiment suggests a healthy skepticism towards BitLocker's security when used without supporting hardware protections.
The Hacker News post "Windows BitLocker – Screwed Without a Screwdriver" generated a moderate amount of discussion, with several commenters sharing their perspectives and experiences related to BitLocker and disk encryption.
Several commenters discuss alternative full-disk encryption solutions they consider more robust or user-friendly than BitLocker. Veracrypt is mentioned multiple times as a preferred open-source alternative. One commenter specifically highlights its support for multiple bootloaders and ease of recovery. Others bring up LUKS on Linux as another open-source full-disk encryption option they favor.
The reliance on closed-source solutions for critical security measures like disk encryption is a concern raised by some. They emphasize the importance of transparency and the ability to inspect the code, particularly when dealing with potential vulnerabilities or backdoors. In contrast, one user expressed confidence in Microsoft's security practices, suggesting that the closed-source nature doesn't necessarily imply lower security.
A few commenters shared personal anecdotes of BitLocker issues, including problems recovering data after hardware failures. These stories highlighted the real-world implications of relying on a system that can become inaccessible due to unforeseen circumstances.
There's a discussion about the potential dangers of relying solely on the TPM for key protection, given TPMs' susceptibility to vulnerabilities and physical attacks. One user suggests storing the recovery key offline, independent of the TPM, to mitigate this risk. Another points out the importance of physically securing the machine itself, since a stolen laptop whose BitLocker key is released automatically by the TPM could potentially be attacked.
Some users questioned the specific scenario described in the original blog post, with one suggesting that the inability to boot may have been due to a Secure Boot issue unrelated to BitLocker. They also highlighted the importance of carefully documenting the recovery key to prevent data loss.
Finally, one commenter mentions encountering similar issues with FileVault on macOS, illustrating that the challenges and complexities of disk encryption are not unique to Windows. They note that while these solutions are designed to protect data, they can sometimes hinder access, especially in non-standard scenarios like hardware failures or OS upgrades.