The blog post "Windows BitLocker – Screwed Without a Screwdriver" details a frustrating and potentially data-loss-inducing scenario involving Windows BitLocker encryption and a Secure Boot configuration change. The author recounts how they inadvertently triggered a BitLocker recovery key prompt after updating their computer's firmware. This seemingly innocuous update modified the Secure Boot configuration, specifically by enabling the Platform Key (PK) protection. BitLocker, designed with robust security in mind, interpreted this change as a potential security compromise, suspecting that an unauthorized actor might have tampered with the boot process. As a safeguard against potential malicious activity, BitLocker locked the drive and demanded the recovery key.
The author emphasizes the surprising nature of this event. There were no explicit warnings about the potential impact of a firmware update on BitLocker. The firmware update process itself didn't highlight the Secure Boot modification in a way that would alert the user to the potential consequences. This lack of clear communication created a situation where a routine update turned into a scramble for the BitLocker recovery key.
The post underscores the importance of securely storing the BitLocker recovery key. Without access to this key, the encrypted data on the drive becomes inaccessible, effectively resulting in data loss. The author highlights the potential severity of this situation, especially for users who may not have readily available access to their recovery key.
Furthermore, the post subtly criticizes the design of BitLocker and its interaction with Secure Boot. The author argues that triggering a recovery key prompt for a legitimate firmware update, especially one initiated by the user themselves, is an overreaction. A more nuanced approach, perhaps involving a warning or a less drastic security measure, would have been preferable. The author suggests that the current implementation creates unnecessary anxiety and potential data loss risks for users who perform routine system updates.
Finally, the post serves as a cautionary tale for other Windows users who utilize BitLocker. It stresses the necessity of understanding the implications of Secure Boot changes and the critical role of the BitLocker recovery key. It encourages proactive measures to ensure the recovery key is safely stored and accessible, mitigating the risk of data loss in similar scenarios. The author implies that better communication and more user-friendly design choices regarding BitLocker and Secure Boot interactions would significantly improve the user experience and reduce the risk of unintended data loss.
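One concrete proactive measure of the kind the post argues for (a hedged sketch, assuming an elevated PowerShell prompt on a standard Windows install with the BitLocker module available): back up the recovery key, then suspend BitLocker for one reboot before applying a firmware update, so the Secure Boot change is re-measured and resealed instead of triggering the recovery prompt.

```shell
# Back up / display the recovery key before touching firmware (elevated prompt).
manage-bde -protectors -get C:

# Suspend BitLocker for one reboot so a firmware/Secure Boot change
# does not trigger the recovery prompt; protection re-arms automatically.
Suspend-BitLocker -MountPoint "C:" -RebootCount 1

# ...apply the firmware update and reboot...

# Verify that protection has resumed afterwards.
Get-BitLockerVolume -MountPoint "C:"
```

Whether a given OEM firmware updater does this automatically varies; the safe habit is to check the recovery key is stored somewhere off the machine before any firmware change.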
The blog post "Let's talk about AI and end-to-end encryption" by Matthew Green on cryptographyengineering.com delves into the complex relationship between artificial intelligence and end-to-end encryption (E2EE), exploring the perceived conflict between allowing AI access to user data for training and maintaining the privacy guarantees provided by E2EE. The author begins by acknowledging the increasing calls to give AI models access to encrypted data, driven by the desire to train more powerful and capable systems. Models trained solely on public data are often less accurate and less useful than those trained on broader datasets that include private user data.
Green meticulously dissects several proposed solutions to this dilemma, outlining their technical intricacies and inherent limitations. He starts by examining the concept of training AI models directly on encrypted data, a technically challenging feat that, while theoretically possible in limited contexts, remains largely impractical and computationally expensive for the scale required by modern AI development. He elaborates on the nuances of homomorphic encryption and secure multi-party computation, explaining why these techniques, while promising, are not currently viable solutions for practical, large-scale AI training on encrypted datasets.
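The secure multi-party computation idea mentioned above can be illustrated with a toy additive secret-sharing sketch (purely illustrative, not a scheme from the post): each party's private input is split into random shares that sum to it modulo a prime, and only the total is ever reconstructed.

```python
import random

P = 2**61 - 1  # Mersenne prime modulus; all arithmetic is mod P

def share(secret, n_parties):
    """Split a value into n additive shares that sum to it mod P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def mpc_sum(all_shares):
    """Each party sums the one share it received from every user;
    combining the partial sums reveals only the total, never any
    individual input."""
    partials = [sum(col) % P for col in zip(*all_shares)]
    return sum(partials) % P

inputs = [13, 29, 58]                     # private values of three users
shared = [share(x, 3) for x in inputs]    # each user distributes 3 shares
assert mpc_sum(shared) == sum(inputs) % P
```

Even this toy version shows why the technique does not solve large-scale AI training: it computes one fixed aggregate, whereas training requires many rounds of complex computation over the raw data.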
The post then transitions into discussing proposals involving client-side scanning, often framed as a means to detect illegal content, such as child sexual abuse material (CSAM). Green details how these proposals, while potentially well-intentioned, fundamentally undermine the core principles of end-to-end encryption, effectively creating backdoors that could be exploited by malicious actors or governments. He meticulously outlines the technical mechanisms by which client-side scanning operates, highlighting the potential for false positives, abuse, and the erosion of trust in secure communication systems. He emphasizes that introducing any form of client-side scanning necessitates a shift away from true end-to-end encryption, transforming it into something closer to client-to-server encryption with client-side pre-decryption scanning, thereby compromising the very essence of E2EE's privacy guarantees.
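The structural point about client-side scanning can be made concrete with a minimal sketch (an illustration of the pipeline shape only, not any proposed system): the content check runs on the plaintext before encryption, which is exactly why critics say the result is no longer true end-to-end encryption. Real proposals use perceptual hashes rather than the exact cryptographic hash used here, which is where the false-positive risk comes from.

```python
import hashlib

# Server-distributed blocklist of flagged-content digests (illustrative).
BLOCKLIST = {hashlib.sha256(b"known-bad-payload").hexdigest()}

def scan_then_encrypt(plaintext: bytes, encrypt):
    """The crux of the E2EE objection: the check runs on the
    plaintext *before* encryption, so the privacy guarantee now
    depends on what the scanner matches and to whom it reports."""
    digest = hashlib.sha256(plaintext).hexdigest()
    if digest in BLOCKLIST:
        return None  # message flagged/reported instead of sent
    return encrypt(plaintext)

# Stand-in cipher for the sketch; a real client would use the app's E2EE.
def toy_encrypt(pt: bytes) -> bytes:
    return bytes(b ^ 0x5A for b in pt)

assert scan_then_encrypt(b"hello", toy_encrypt) is not None
assert scan_then_encrypt(b"known-bad-payload", toy_encrypt) is None
```

Note that nothing in this structure limits what goes into `BLOCKLIST`, which is the technical basis of the scope-expansion concern discussed next.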
Furthermore, Green underscores the slippery slope argument, cautioning against the potential for expanding the scope of such scanning beyond CSAM to encompass other types of content deemed undesirable by governing bodies. This expansion, he argues, could lead to censorship and surveillance, significantly impacting freedom of expression and privacy. The author concludes by reiterating the importance of preserving end-to-end encryption as a crucial tool for protecting privacy and security in the digital age. He emphasizes that the perceived tension between AI advancement and E2EE necessitates careful consideration and a nuanced approach that prioritizes user privacy and security without stifling innovation. He suggests that focusing on alternative approaches, such as federated learning and differential privacy, may offer more promising avenues for developing robust AI models without compromising the integrity of end-to-end encrypted communication.
The Hacker News post "Let's talk about AI and end-to-end encryption" has generated a robust discussion with several compelling comments. Many commenters grapple with the inherent tension between the benefits of AI-powered features and the preservation of end-to-end encryption (E2EE).
One recurring theme is the practicality and potential misuse of client-side scanning. Some commenters express skepticism about the feasibility of truly secure client-side scanning, arguing that any client-side processing inherently weakens E2EE and creates vulnerabilities for malicious actors or governments to exploit. They also voice concerns about the potential for function creep, where systems designed for specific purposes (like detecting CSAM) could be expanded to encompass broader surveillance. The chilling effect on free speech and privacy is a significant concern.
Several comments discuss the potential for alternative approaches, such as federated learning, where AI models are trained on decentralized data without compromising individual privacy. This is presented as a potential avenue for leveraging the benefits of AI without sacrificing E2EE. However, the technical challenges and potential limitations of federated learning in this context are also acknowledged.
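The federated-learning idea raised in those comments can be sketched minimally (a toy illustration, not a production FedAvg implementation): each client fits a model on its own data and only the fitted parameters, weighted by dataset size, ever leave the device.

```python
# Minimal federated-averaging sketch: each client fits a 1-D linear
# model y = w*x locally; only the weight w leaves the device.

def local_fit(xs, ys):
    """Least-squares slope through the origin on one client's data."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def fed_avg(client_weights, client_sizes):
    """Server aggregates weights, weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * n for w, n in zip(client_weights, client_sizes)) / total

# Three clients, each holding private data drawn from y = 2x.
clients = [([1, 2], [2, 4]), ([3], [6]), ([1, 4], [2, 8])]
weights = [local_fit(xs, ys) for xs, ys in clients]
sizes = [len(xs) for xs, _ in clients]
global_w = fed_avg(weights, sizes)
assert abs(global_w - 2.0) < 1e-9  # server recovers the shared model
```

The limitation the commenters acknowledge is visible even here: the shared weights themselves can leak information about local data, which is why federated learning is usually paired with techniques like differential privacy or secure aggregation.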
The "slippery slope" argument is prominent, with commenters expressing worry that any compromise to E2EE, even for seemingly noble purposes, sets a dangerous precedent. They argue that once the principle of E2EE is weakened, it becomes increasingly difficult to resist further encroachments on privacy.
Some commenters take a more pragmatic stance, suggesting that the debate isn't necessarily about absolute E2EE versus no E2EE, but rather about finding a balance that allows for some beneficial AI features while mitigating the risks. They suggest exploring technical solutions that could potentially offer a degree of compromise, though skepticism about the feasibility of such solutions remains prevalent.
The ethical implications of using AI to scan personal communications are also a significant point of discussion. Commenters raise concerns about false positives, the potential for bias in AI algorithms, and the lack of transparency and accountability in automated surveillance systems. The potential for abuse and the erosion of trust are recurring themes.
Finally, several commenters express a strong defense of E2EE as a fundamental right, emphasizing its crucial role in protecting privacy and security in an increasingly digital world. They argue that any attempt to weaken E2EE, regardless of the intended purpose, represents a serious threat to individual liberties.
According to a January 15, 2025, Reuters report, TikTok was preparing for a potential shutdown of its services within the United States, anticipated to occur as early as Sunday, January 19, 2025. While the precise nature of the impending shutdown remained ambiguous, the report indicated that the platform was actively undertaking preparatory measures. The potential shutdown stemmed from ongoing and escalating tensions between the U.S. government and TikTok's parent company, ByteDance, a Chinese technology conglomerate. These long-simmering tensions revolve primarily around data-security concerns and the possibility that the Chinese government could access user information collected by the platform.

The Reuters report cites unspecified "information reports" as the basis for the claim and stops short of definitively confirming the shutdown, acknowledging the fluidity of the situation and the possibility that the anticipated disruption might not materialize. Nevertheless, it highlights the serious consideration TikTok was giving to this possibility and the tangible steps being taken to mitigate the potential fallout. The implications of a U.S. shutdown would be substantial, given the platform's vast American user base and significant cultural influence. The report does not detail the preparatory measures themselves, leaving open questions about their nature and their likely efficacy in softening the impact of a shutdown.
The Hacker News post titled "TikTok preparing for U.S. shut-off on Sunday" (linking to a Reuters article about TikTok potentially being shut down in the US) has generated a number of comments discussing the implications of such a move.
Several commenters express skepticism about the likelihood of a shutdown actually happening, citing previous threats and the potential legal challenges involved. Some point out the difficulty of enforcing such a ban, considering the technical complexities and the potential for users to circumvent restrictions using VPNs. The perceived political motivations behind the potential ban are also a recurring theme, with some suggesting it's more about data security concerns and others viewing it as a form of protectionism for US tech companies.
A significant portion of the discussion revolves around the potential impact on users, particularly content creators who rely on TikTok for income. Some commenters express concern about the loss of a creative outlet and the potential fragmentation of online communities. Others discuss the possible migration of users to alternative platforms, speculating on which platforms might benefit most from a TikTok ban.
The technical feasibility of a shutdown is also debated, with some commenters questioning the government's ability to effectively block access to the app. Discussions about the role of app stores (Apple App Store and Google Play Store) in enforcing a ban also emerge. Some users propose alternative scenarios, such as a forced sale of TikTok's US operations to an American company, as a more likely outcome than a complete ban.
The potential economic consequences of a shutdown are also considered, with some commenters pointing out the potential job losses and the impact on the advertising industry. The broader implications for free speech and internet censorship are also touched upon, with some expressing concern about the precedent that a ban might set.
Some of the most compelling comments highlight the complex interplay of political, economic, and social factors surrounding the issue. One commenter argues that the potential ban is a symptom of a larger geopolitical struggle between the US and China, while another suggests that the focus on TikTok overlooks the data collection practices of American social media companies. A particularly insightful comment points out the potential for unintended consequences, such as driving users to less regulated platforms, if TikTok is banned. Another compelling comment highlights the potential impact on smaller creators who rely on TikTok for income and may not have the same reach on other platforms.
The blog post "Homomorphic Encryption in iOS 18" by Bastian Bohm details the introduction of homomorphic encryption capabilities within Apple's iOS 18 operating system, specifically focusing on the newly available APIs for performing calculations on encrypted data without requiring decryption. The author expresses excitement about this development, highlighting the potential for enhanced privacy and security in various applications.
The post begins by explaining the concept of homomorphic encryption, emphasizing its ability to process encrypted information directly, thus preserving the confidentiality of sensitive data. It distinguishes between Fully Homomorphic Encryption (FHE), which supports arbitrary computations, and Partially Homomorphic Encryption (PHE), which is limited to specific operations like addition or multiplication. The post clarifies that iOS 18 implements PHE, specifically focusing on additive homomorphic encryption.
The core of the post revolves around the newly introduced `SecKeyEncryptedData` class and its associated methods. The author provides a concise code example demonstrating how to create encrypted integers using this class and how to perform homomorphic addition on these encrypted values. The resulting sum remains encrypted, and only the holder of the decryption key can reveal its true value. The author meticulously breaks down the code snippet, explaining the role of each function and parameter. For instance, the post elucidates the process of generating a public key specifically designated for encrypted-data operations and how this key is subsequently used to encrypt integer values. It also explains the significance of the `perform` method in executing homomorphic operations on these encrypted integers.
Furthermore, the post discusses the underlying cryptographic scheme employed by Apple, revealing that it leverages a variant of the Paillier cryptosystem. This choice is deemed suitable for integer additions and is acknowledged for its established security properties. The post also touches upon the practical limitations of PHE, specifically noting the inability to perform other operations like multiplication or comparison directly on the encrypted data without decryption.
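Since the post names the Paillier cryptosystem, its additive property can be demonstrated with a from-scratch toy implementation (tiny fixed primes for readability; illustrative only, as real deployments use moduli of thousands of bits). Multiplying two ciphertexts yields an encryption of the sum of their plaintexts, which is exactly the operation the iOS API is described as exposing.

```python
import random
from math import gcd

# Toy Paillier parameters -- NOT secure, for illustration only.
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
g = n + 1                      # conventional generator choice g = n + 1
lam = (p - 1) * (q - 1)
mu = pow(lam, -1, n)           # decryption constant for g = n + 1

def encrypt(m):
    r = random.randrange(2, n)
    while gcd(r, n) != 1:      # r must be invertible mod n
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    L = (pow(c, lam, n2) - 1) // n   # L(x) = (x - 1) / n
    return (L * mu) % n

def add_encrypted(c1, c2):
    """Additive homomorphism: multiplying ciphertexts adds plaintexts."""
    return (c1 * c2) % n2

a, b = encrypt(20), encrypt(22)
assert decrypt(add_encrypted(a, b)) == 42   # 20 + 22, computed encrypted
```

The limitation noted in the post is also visible here: there is no corresponding ciphertext operation for multiplying or comparing two encrypted values, which is what separates this kind of partially homomorphic scheme from FHE.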
Finally, the author speculates on the potential applications of this technology within the Apple ecosystem. The example given is privacy-preserving data collection, suggesting how homomorphic encryption could enable the aggregation of user statistics without compromising individual data privacy. This could be useful for applications like collecting usage metrics or accumulating health data while ensuring that the individual contributions remain confidential. The author concludes with an optimistic outlook on the future implications of homomorphic encryption within the iOS environment and expresses anticipation for further advancements in this field.
The Hacker News post titled "Homomorphic encryption in iOS 18" spawned a modest discussion with a handful of comments focusing on the practicalities and limitations of the technology, rather than the announcement itself. No one expressed outright excitement or skepticism about the announcement, instead offering pragmatic observations.
One commenter pointed out that the homomorphic encryption being utilized is limited to integer addition (plus multiplication of a ciphertext by a plaintext constant), and thus isn't fully homomorphic encryption (FHE) in the broader, more powerful sense. They clarified that true FHE allows arbitrary computation on encrypted data, which is not what Apple is implementing. This comment served as an important clarification to distinguish the specific type of homomorphic encryption being employed.
Another user expanded on this by mentioning that the specific technique used is called "additive homomorphic encryption" and likely leverages the Paillier cryptosystem. This added technical depth to the discussion, providing a potential underlying mechanism for Apple's implementation. They then speculated about its use case, suggesting it could be applied to scenarios like federated learning or aggregated metrics collection.
A subsequent comment explored the performance limitations of homomorphic encryption. The commenter noted the significant computational overhead associated with these techniques, which makes them unsuitable for many real-time or performance-sensitive applications. This comment highlighted the trade-offs involved in using homomorphic encryption, emphasizing that while it offers enhanced privacy, it comes at the cost of performance.
Finally, one commenter linked to a related project called "Concrete," further adding context to the types of operations and optimizations possible within the homomorphic encryption space. This provides an avenue for those interested in learning more about practical implementations and advancements in the field.
Overall, the comments section offers a concise and informed discussion focusing on the technical nuances of Apple's implementation rather than broad speculation or hype. They provide valuable context and clarification regarding the specific type of homomorphic encryption being used and its inherent limitations.
Summary of Comments (57)
https://news.ycombinator.com/item?id=42747877
HN commenters generally concur with the article's premise that relying solely on BitLocker without additional security measures like a TPM or Secure Boot can be risky. Several point out how easy it is to modify boot order or boot from external media to bypass BitLocker, effectively rendering it useless against a physically present attacker. Some commenters discuss alternative full-disk encryption solutions like Veracrypt, emphasizing its open-source nature and stronger security features. The discussion also touches upon the importance of pre-boot authentication, the limitations of relying solely on software-based security, and the practical considerations for different threat models. A few commenters share personal anecdotes of BitLocker failures or vulnerabilities they've encountered, further reinforcing the author's points. Overall, the prevailing sentiment suggests a healthy skepticism towards BitLocker's security when used without supporting hardware protections.
The Hacker News post "Windows BitLocker – Screwed Without a Screwdriver" generated a moderate amount of discussion, with several commenters sharing their perspectives and experiences related to BitLocker and disk encryption.
Several commenters discuss alternative full-disk encryption solutions they consider more robust or user-friendly than BitLocker. Veracrypt is mentioned multiple times as a preferred open-source alternative. One commenter specifically highlights its support for multiple bootloaders and ease of recovery. Others bring up LUKS on Linux as another open-source full-disk encryption option they favor.
The reliance on closed-source solutions for critical security measures like disk encryption is a concern raised by some. They emphasize the importance of transparency and the ability to inspect the code, particularly when dealing with potential vulnerabilities or backdoors. In contrast, one user expressed confidence in Microsoft's security practices, suggesting that the closed-source nature doesn't necessarily imply lower security.
A few commenters shared personal anecdotes of BitLocker issues, including problems recovering data after hardware failures. These stories highlighted the real-world implications of relying on a system that can become inaccessible due to unforeseen circumstances.
There's a discussion about the potential dangers of relying solely on TPM for key protection. The susceptibility of TPMs to vulnerabilities or physical attacks is raised as a concern. One user suggests storing the recovery key offline, independent of the TPM, to mitigate this risk. Another points out the importance of physically securing the machine itself, as a stolen laptop with BitLocker enabled but dependent on TPM could be potentially vulnerable to attack.
Some users questioned the specific scenario described in the original blog post, with one suggesting that the inability to boot may have been due to a Secure Boot issue unrelated to BitLocker. They also highlighted the importance of carefully documenting the recovery key to prevent data loss.
Finally, one commenter mentions encountering similar issues with FileVault on macOS, illustrating that the challenges and complexities of disk encryption are not unique to Windows. They note that while these solutions are designed to protect data, they can sometimes hinder access, especially in non-standard scenarios like hardware failures or OS upgrades.