PunchCard Key Backup is an open-source tool that allows you to physically back up cryptographic keys, like PGP or SSH keys, onto durable, punch-out cards. It encodes the key as a grid of punched holes, readable by a webcam and decodable by the software. This provides a low-tech, offline backup method resistant to digital threats and EMP attacks, ideal for long-term storage or situations where digital backups are unavailable or unreliable. The cards are designed to be easily reproducible and verifiable, and the project includes templates for printing your own cards.
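To make the encoding concrete, here is a minimal sketch of the general idea, assuming a simple row-major bit layout; the project's actual grid format, framing marks, and checksums are not reproduced here:

```python
def key_to_grid(key: bytes, cols: int = 16):
    """Map a key's bits onto a grid of rows, where 1 = punch a hole."""
    bits = [(byte >> (7 - i)) & 1 for byte in key for i in range(8)]
    return [bits[r:r + cols] for r in range(0, len(bits), cols)]

def grid_to_key(grid) -> bytes:
    """Recover the key bytes from a scanned grid of 0/1 values."""
    bits = [b for row in grid for b in row]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[n:n + 8]))
        for n in range(0, len(bits), 8)
    )

key = bytes.fromhex("8f3a2c91d4e5b607")
grid = key_to_grid(key)
for row in grid:
    print("".join("o" if b else "." for b in row))  # 'o' marks a punched hole
assert grid_to_key(grid) == key
```

A real implementation would add alignment marks so the webcam can register the grid, plus a checksum or error-correcting code to tolerate misread holes.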
A security researcher discovered a vulnerability in O2's VoLTE implementation that allowed anyone to determine the approximate location of an O2 customer simply by placing a phone call to them. By inspecting the raw IMS/SIP signaling exchanged during call setup on a rooted handset, the researcher found that O2's network responses included debugging headers disclosing the recipient's serving cell. These headers contained cell tower identifiers, which can be easily correlated with geographic locations, and the target did not even need to answer the call for the information to leak. The vulnerability highlighted a lack of data minimization in O2's VoLTE signaling, potentially affecting millions of customers. The issue has since been reported and patched by O2.
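To illustrate the class of leak, here is a hypothetical sketch of pulling network-location headers out of captured SIP signaling. The message and header values below are illustrative of IMS debugging headers, not a reproduction of O2's actual traffic:

```python
def parse_sip_headers(raw: str) -> dict:
    """Parse the header block of a SIP message into a dict."""
    headers = {}
    for line in raw.split("\r\n")[1:]:   # skip the request/status line
        if not line:
            break                        # a blank line ends the headers
        name, _, value = line.partition(":")
        headers.setdefault(name.strip().lower(), value.strip())
    return headers

# Illustrative SIP response resembling IMS call-setup signaling.
message = (
    "SIP/2.0 183 Session Progress\r\n"
    "P-Asserted-Identity: <sip:+447700900123@ims.example.net>\r\n"
    "Cellular-Network-Info: 3GPP-E-UTRAN-FDD; utran-cell-id-3gpp=2341501234A67B8\r\n"
    "\r\n"
)

headers = parse_sip_headers(message)
# A cell identity embeds the country/network codes and a cell ID that
# public tower databases can map to approximate coordinates.
print(headers.get("cellular-network-info"))
```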
Hacker News users discuss the feasibility and implications of the claimed O2 VoLTE vulnerability. Some express skepticism about the ease with which an attacker could exploit this, pointing out the need for specialized equipment and the potential for detection. Others debate the actual impact, questioning whether coarse location data (accurate to a cell tower) is truly a privacy violation given its availability through other means. Several commenters highlight the responsibility of mobile network operators to address such security flaws and emphasize the importance of ongoing security research and public disclosure. The discussion also touches upon the trade-offs between functionality (like VoLTE) and security, as well as the potential legal ramifications for O2. A few users mention similar vulnerabilities in other networks, suggesting this isn't an isolated incident.
The European Union is launching its own vulnerability database, the European Union Vulnerability Database (EUVD), operated by the EU cybersecurity agency ENISA, aiming to bolster cybersecurity within the bloc and reduce reliance on the US National Vulnerability Database (NVD). Concerns over the NVD's perceived declining quality, slow updates, and limited scope have driven the EU's initiative. The EUVD plans to offer multilingual support, prioritize vulnerabilities affecting EU member states, and incorporate information from various sources, including national CERTs and open-source intelligence, ultimately striving to provide a more comprehensive and timely resource for European users.
Hacker News users discussed the potential effectiveness and challenges of the EU's new vulnerability database. Some expressed skepticism about the database's ability to improve security, citing concerns about bureaucracy, potential for misuse by malicious actors, and the existing vulnerability disclosure ecosystem. Others viewed the EU's effort as a positive step towards standardized vulnerability reporting and potentially a more balanced approach compared to the US system, particularly given perceived issues with the US's Vulnerabilities Equities Process (VEP). There was also discussion about the practicalities of vulnerability disclosure, the impact on smaller companies, and the difficulties in classifying vulnerability severity. Some commenters highlighted the need for careful consideration of responsible disclosure practices and potential unintended consequences. Several commenters compared the EU's database to similar initiatives, and debate arose around mandatory versus voluntary reporting, along with questions of whether the database will cover both hardware and software vulnerabilities.
The author argues that modern personal computing has become "anti-personnel," designed to exploit users rather than empower them. Software and hardware are increasingly complex, opaque, and controlled by centralized entities, fostering dependency and hindering user agency. This shift is exemplified by the dominance of subscription services, planned obsolescence, pervasive surveillance, and the erosion of user ownership and control over data and devices. The essay calls for a return to the original ethos of personal computing, emphasizing user autonomy, open standards, and the right to repair and modify technology. This involves reclaiming agency through practices like self-hosting, using open-source software, and engaging in critical reflection about our relationship with technology.
HN commenters largely agree with the author's premise that much of modern computing is designed to be adversarial toward users, extracting data and attention at the expense of usability and agency. Several point out the parallels with Shoshana Zuboff's "The Age of Surveillance Capitalism." Some offer specific examples like CAPTCHAs, cookie banners, and paywalls as prime instances of "anti-personnel" design. Others discuss the inherent tension between free services and monetization through data collection, suggesting that alternative business models are needed. A few counterpoints argue that the article overstates the case, or that users implicitly consent to these tradeoffs in exchange for free services. A compelling exchange centers on whether the described issues are truly "anti-personnel," or simply the result of poorly designed systems.
Mistral AI has released Le Chat, an enterprise-grade AI assistant designed for on-premise deployment. This focus on local deployment prioritizes data privacy and security, addressing concerns surrounding sensitive information. Le Chat offers customizable features allowing businesses to tailor the assistant to specific needs and integrate it with existing workflows. It leverages Mistral's large language models to provide functionalities like text generation, summarization, translation, and question answering, aiming to improve productivity and streamline internal processes.
Hacker News users discuss Mistral AI's release of Le Chat, an enterprise-focused AI assistant. Several commenters express skepticism about the "on-prem" claim, questioning the actual feasibility and practicality of running large language models locally given their significant resource requirements. Others note the rapid pace of open-source LLM development and wonder if proprietary models like Le Chat will remain competitive. Some commenters see value in the enterprise focus, particularly around data privacy and security. There's also discussion about the broader trend of "LLMOps," with commenters pointing out the ongoing challenges in managing and deploying these complex models. Finally, some users simply express excitement about the potential of Le Chat and similar tools for improving productivity.
The author describes using zip bombs to defend their server against malicious bots and vulnerability scanners. Rather than blocking abusive clients outright, they serve them a small, highly compressed response (a few megabytes of gzip data that inflates to many gigabytes) with the appropriate Content-Encoding header. Well-behaved visitors never receive the payload, but bots that blindly decompress it exhaust their memory and crash or stall. Candidates for this treatment are identified by their behavior, such as ignoring robots.txt or probing for known exploit paths. This approach offers a cheap, low-maintenance deterrent compared with maintaining ever-growing blocklists.
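A minimal sketch of the core trick, assuming the gzip-bomb approach described above (the sizes and parameters here are illustrative): build a tiny compressed payload that inflates to a gigabyte, and let an abusive client's HTTP stack do the damage when it honors Content-Encoding: gzip.

```python
import zlib

def make_gzip_bomb(decompressed_size: int = 1 << 30, chunk: int = 1 << 20) -> bytes:
    """Compress `decompressed_size` bytes of zeros into a tiny gzip payload."""
    comp = zlib.compressobj(9, zlib.DEFLATED, 31)  # wbits=31 -> gzip container
    out = bytearray()
    zeros = b"\x00" * chunk
    for _ in range(decompressed_size // chunk):
        out += comp.compress(zeros)
    out += comp.flush()
    return bytes(out)

bomb = make_gzip_bomb()
print(f"{len(bomb)} compressed bytes -> 1 GiB when inflated")
# A server would return this body with "Content-Encoding: gzip" so that
# the client's HTTP library inflates it automatically on receipt.
```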
Hacker News users discussed various aspects of zip bomb protection. Some questioned the practicality and effectiveness of using zip bombs defensively, suggesting alternative methods like resource limits and input validation are more robust. Others debated the ethics and legality of such a defense, with concerns about potential harm to legitimate users or scanners. Several commenters highlighted the "Streisand effect" – that publicizing this technique might attract unwanted attention and testing. There was also discussion of specific tools and techniques for decompression, emphasizing the importance of security-focused libraries and cautious handling of compressed data. Some users shared anecdotal experiences of encountering zip bombs in the wild, reinforcing the need for appropriate safeguards.
Google is allowing businesses to run its Gemini AI models on their own infrastructure, addressing data privacy and security concerns. This on-premises offering of Gemini, delivered through Google Distributed Cloud, gives companies greater control over their data and model customizations while still leveraging Google's powerful AI capabilities. The move allows clients, particularly in regulated industries like healthcare and finance, to benefit from advanced AI without compromising sensitive information.
Hacker News commenters generally expressed skepticism about Google's announcement of Gemini availability for private data centers. Many doubted the feasibility and affordability for most companies, citing the immense infrastructure and expertise required to run such large models. Some speculated that this offering is primarily targeted at very large enterprises and government agencies with strict data security needs, rather than the average business. Others questioned the true motivation behind the move, suggesting it could be a response to competition or a way for Google to gather more data. Several comments also highlighted the irony of moving large language models "back" to private data centers after the trend of cloud computing. There was also some discussion around the potential benefits for specific use cases requiring low latency and high security, but even these were tempered by concerns about cost and complexity.
Hackers breached the Office of the Comptroller of the Currency (OCC), the US Treasury Department agency responsible for regulating national banks, gaining access to approximately 150,000 emails. The OCC discovered the breach during its investigation of the MOVEit Transfer vulnerability exploitation, confirming its systems were compromised between May 27 and June 12. While the agency says there is no evidence that other Treasury systems were affected or that sensitive data beyond email content was accessed, it is continuing its investigation and working with law enforcement.
Hacker News commenters express skepticism about the reported 150,000 compromised emails, questioning the actual impact and whether this number represents unique emails or includes forwards and replies. Some suggest the number is inflated to justify increased cybersecurity budgets. Others point to the OCC's history of poor cybersecurity practices and a lack of transparency. Several commenters discuss the potential legal and regulatory implications for Microsoft, the email provider, and highlight the ongoing challenge of securing cloud-based email systems. The lack of detail about the nature of the breach and the affected individuals also drew criticism.
23andMe offers two data deletion options. "Account Closure" removes your profile and reports, disconnects you from DNA Relatives, and ends further participation in research. However, de-identified genetic data may be retained for internal research unless you specifically opt out. "Spit Kit Destruction" goes further, requiring you to contact customer support to have your physical sample destroyed. 23andMe notes that anonymized data may still be used, but asserts it can no longer be linked back to you. For the most comprehensive removal, pursue both Account Closure and Spit Kit Destruction.
HN commenters largely discuss the complexities of truly deleting genetic data. Several express skepticism that 23andMe or similar services can fully remove data, citing research collaborations, anonymized datasets, and the potential for data reconstruction. Some suggest more radical approaches like requesting physical sample destruction, while others debate the ethical implications of research using genetic data and the individual's right to control it. The difficulty of separating individual data from aggregated research sets is a recurring theme, with users acknowledging the potential benefits of research while still desiring greater control over their personal information. A few commenters also mention the potential for law enforcement access to such data and the implications for privacy.
Amazon is discontinuing on-device processing for Alexa voice commands. All future requests will be sent to the cloud for processing, regardless of device capabilities. While Amazon claims this will lead to a more unified and improved Alexa experience with faster response times and access to newer features, it effectively removes the local processing option previously available on some devices. This change means increased reliance on a constant internet connection for Alexa functionality and raises potential privacy concerns regarding the handling of voice data.
HN commenters generally lament the demise of on-device processing for Alexa, viewing it as a betrayal of privacy and a step backwards in functionality. Several express concern about increased latency and dependence on internet connectivity, impacting responsiveness and usefulness in areas with poor service. Some speculate this move is driven by cost-cutting at Amazon, prioritizing server-side processing and centralized data collection over user experience. A few question the claimed security benefits, arguing that local processing could enhance privacy and security in certain scenarios. The potential for increased data collection and targeted advertising is also a recurring concern. There's skepticism about Amazon's explanation, with some suggesting it's a veiled attempt to push users towards newer Echo devices or other Amazon services.
A misconfigured Amazon S3 bucket exposed over 86,000 medical records and personally identifiable information (PII) belonging to users of the nurse staffing platform ESHYFT. The exposed data included names, addresses, phone numbers, email addresses, Social Security numbers, medical licenses, certifications, and vaccination records. The breach highlights the continued risk of unsecured cloud storage and the potential consequences for sensitive personal information. ESHYFT, dubbed the "Uber for nurses," provides on-demand healthcare staffing. While the company has since secured the bucket, the extent of the exposure and the potential for identity theft and fraud remain a serious concern.
HN commenters were largely critical of ESHYFT's security practices, calling the exposed data "a treasure trove for identity thieves" and expressing concern over the sensitive nature of the information. Some pointed out the irony of a company handling such sensitive data being vulnerable to so basic a misconfiguration. Others questioned the competence of ESHYFT's leadership and engineering team, with one commenter stating, "This isn't rocket science." Several commenters highlighted the recurring nature of these breaches and the need for stronger regulations and consequences for companies that fail to adequately protect user data. A few users debated the efficacy of relying on cloud providers like AWS for security, emphasizing the shared responsibility model.
This project introduces a JPEG image compression service that incorporates partially homomorphic encryption (PHE) to enable compression of encrypted images without decryption. Leveraging the additive homomorphism of the Paillier cryptosystem, the service can evaluate linear operations such as the Discrete Cosine Transform (DCT) and quantization scaling directly on encrypted data. While fully homomorphic encryption remains computationally expensive, this approach provides a practical compromise, preserving privacy while still permitting some image processing in the encrypted domain. The resulting compressed image remains encrypted, requiring the appropriate key for decryption and viewing.
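As a concrete illustration of the underlying idea, here is a sketch using the python-paillier (`phe`) library, not this project's code: a DCT coefficient is a linear combination of pixel values, so it can be computed on Paillier ciphertexts using only ciphertext additions and plaintext scalar multiplications.

```python
import math
from phe import paillier  # pip install phe

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Encrypt one 8-pixel row of an image block.
pixels = [52.0, 55.0, 61.0, 66.0, 70.0, 61.0, 64.0, 73.0]
enc_pixels = [public_key.encrypt(p) for p in pixels]

def encrypted_dct_coeff(enc_vals, k, n=8):
    """Evaluate the k-th 1-D DCT-II coefficient over Paillier ciphertexts."""
    scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    total = None
    for i, c in enumerate(enc_vals):
        w = scale * math.cos((2 * i + 1) * k * math.pi / (2 * n))
        term = c * w                     # ciphertext * plaintext scalar
        total = term if total is None else total + term
    return total

enc_coeff = encrypted_dct_coeff(enc_pixels, k=1)
print(private_key.decrypt(enc_coeff))    # matches the plaintext DCT value
```

Multiplying two ciphertexts together is not possible under Paillier, which is why nonlinear steps like entropy coding still happen outside the encrypted domain.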
Hacker News users discussed the practicality and novelty of the JPEG compression service using homomorphic encryption. Some questioned the real-world use cases, given the significant performance overhead compared to standard JPEG compression. Others pointed out that the homomorphic encryption only applies to the DCT coefficients and not the entire JPEG pipeline, limiting the actual privacy benefits. The most compelling comments highlighted this limitation, suggesting that true end-to-end encryption would be more valuable but acknowledging the difficulty of achieving that with current homomorphic encryption technology. There was also skepticism about the claimed 10x speed improvement, with requests for more detailed benchmarks and comparisons to existing methods. Some commenters expressed interest in the potential applications, such as privacy-preserving image processing in medical or financial contexts.
South Korea's Personal Information Protection Commission has accused DeepSeek, the Chinese AI chatbot developer, of transferring South Korean users' data to ByteDance without consent. The regulator alleges the DeepSeek app sent personal information, including user inputs and device data, to ByteDance-operated servers in China without proper user consent, violating South Korean privacy laws. The commission suspended new downloads of the app in South Korea while its review proceeds, and DeepSeek faces a potential fine and a corrective order.
Several Hacker News commenters express skepticism about the accusations against DeepSeek, pointing out the lack of concrete evidence presented and questioning the South Korean regulator's motives. Some speculate this could be politically motivated, related to broader US-China tensions and a desire to protect domestic companies like Kakao. Others discuss the difficulty of proving data sharing, particularly with the complexity of modern AI models and training data. A few commenters raise concerns about the potential implications for open-source AI models, wondering if they could be inadvertently trained on improperly obtained data. There's also discussion about the broader issue of data privacy and the challenges of regulating international data flows, particularly involving large tech companies.
The blog post explores encoding arbitrary data within seemingly innocuous emoji. By exploiting Unicode variation selectors, the author demonstrates how to append invisible characters to an emoji, embedding hidden data in the sequence. The data can later be extracted by scanning for these normally unrendered code points. While seemingly a novelty, the author highlights potential security implications, such as bypassing filters or exfiltrating data subtly. This hidden channel could be used in scenarios where visible communication is restricted or monitored.
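A minimal sketch of the variation-selector scheme, assuming one byte per selector: bytes 0-15 map to VS1-VS16 (U+FE00-U+FE0F) and bytes 16-255 to VS17-VS256 (U+E0100-U+E01EF), so any byte string can ride invisibly behind a single emoji.

```python
def byte_to_vs(b: int) -> str:
    """Map one byte to a Unicode variation selector."""
    return chr(0xFE00 + b) if b < 16 else chr(0xE0100 + b - 16)

def vs_to_byte(ch: str):
    cp = ord(ch)
    if 0xFE00 <= cp <= 0xFE0F:
        return cp - 0xFE00
    if 0xE0100 <= cp <= 0xE01EF:
        return cp - 0xE0100 + 16
    return None  # not a variation selector

def hide(carrier: str, payload: bytes) -> str:
    return carrier + "".join(byte_to_vs(b) for b in payload)

def reveal(text: str) -> bytes:
    return bytes(b for b in map(vs_to_byte, text) if b is not None)

secret = hide("😀", b"hello")
print(secret)           # renders as a plain emoji in most UIs
print(reveal(secret))   # b'hello'
```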
Several Hacker News commenters express skepticism about the practicality of the emoji data smuggling technique described in the article. They point out the significant overhead and inefficiency introduced by the encoding scheme, making it impractical for any substantial data transfer. Some suggest that simpler methods like steganography within image files would be far more efficient. Others question the real-world applications, arguing that such a convoluted method would likely be easily detected by any monitoring system looking for unusual patterns. A few commenters note the cleverness of the technique from a theoretical perspective, while acknowledging its limited usefulness in practice. One commenter raises a concern about the potential abuse of such techniques for bypassing content filters or censorship.
The FTC is taking action against GoDaddy for allegedly failing to adequately protect its customers' sensitive data. GoDaddy reportedly allowed unauthorized access to customer accounts and hosting environments on multiple occasions due to lax security practices, including failing to implement multi-factor authentication and neglecting to address known vulnerabilities. These lapses facilitated phishing attacks and other fraudulent activity, impacting millions of customers. Under the proposed settlement, GoDaddy is required to implement a comprehensive information security program subject to independent third-party assessments for the next 20 years.
Hacker News commenters generally agree that GoDaddy's security practices are lacking, with some pointing to personal experiences of compromised sites hosted on the platform. Several express skepticism about the effectiveness of the FTC's actions, suggesting the fines are too small to incentivize real change. Some users highlight the conflict of interest inherent in GoDaddy's business model, where they profit from selling security products to fix vulnerabilities they may be partially responsible for. Others discuss the wider implications for web hosting security and the responsibility of users to implement their own protective measures. A few commenters defend GoDaddy, arguing that shared responsibility exists and users also bear the burden for securing their own sites. The discussion also touches upon the difficulty of patching WordPress vulnerabilities and the overall complexity of website security.
DualQRCode.com offers a free online tool to create dual QR codes. These codes seamlessly embed a smaller QR code within a larger one, allowing for two distinct links to be accessed from a single image. The user provides two URLs, customizes the inner and outer QR code colors, and downloads the resulting combined code. This can be useful for scenarios like sharing a primary link with a secondary link for feedback, donations, or further information.
Hacker News users discussed the practicality and security implications of dual QR codes. Some questioned the real-world use cases, suggesting existing methods like shortened URLs or link-in-bio services are sufficient. Others raised security concerns, highlighting the potential for one QR code to be swapped with a malicious link while the other remains legitimate, thereby deceiving users. The technical implementation was also debated, with commenters discussing the potential for encoding information across both codes for redundancy or error correction, and the challenges of displaying two codes clearly on physical media. Several commenters suggested alternative approaches, such as using a single QR code that redirects to a page containing multiple links, or leveraging NFC technology. The overall sentiment leaned towards skepticism about the necessity and security of the dual QR code approach.
The blog post details how the author lost access to a BitLocker-encrypted drive due to a Secure Boot policy change, even with the correct password. The TPM chip, responsible for storing the BitLocker recovery key, perceived the modified Secure Boot state as a potential security breach and refused to release the key. This highlighted a vulnerability in relying solely on the TPM for BitLocker recovery, especially when dual-booting or making system configuration changes. The author emphasizes the importance of backing up recovery keys outside the TPM, as recovery through Microsoft's account proved difficult and unhelpful in this specific scenario. Ultimately, the data remained inaccessible despite possessing the password and knowing the modifications made to the system.
HN commenters generally concur with the article's premise that relying solely on BitLocker without additional security measures like a TPM or Secure Boot can be risky. Several point out how easy it is to modify boot order or boot from external media to bypass BitLocker, effectively rendering it useless against a physically present attacker. Some commenters discuss alternative full-disk encryption solutions like Veracrypt, emphasizing its open-source nature and stronger security features. The discussion also touches upon the importance of pre-boot authentication, the limitations of relying solely on software-based security, and the practical considerations for different threat models. A few commenters share personal anecdotes of BitLocker failures or vulnerabilities they've encountered, further reinforcing the author's points. Overall, the prevailing sentiment suggests a healthy skepticism towards BitLocker's security when used without supporting hardware protections.
The blog post "Let's talk about AI and end-to-end encryption" explores the perceived conflict between the benefits of end-to-end encryption (E2EE) and the potential of AI. While some argue that E2EE hinders AI's ability to analyze data for valuable insights or detect harmful content, the author contends this is a false dichotomy. They highlight that AI can still operate on encrypted data using techniques like homomorphic encryption, federated learning, and secure multi-party computation, albeit with performance trade-offs. The core argument is that preserving E2EE is crucial for privacy and security, and perceived limitations in AI functionality shouldn't compromise this fundamental protection. Instead of weakening encryption, the focus should be on developing privacy-preserving AI techniques that work with E2EE, ensuring both security and the responsible advancement of AI.
Hacker News users discussed the feasibility and implications of client-side scanning for CSAM in end-to-end encrypted systems. Some commenters expressed skepticism about the technical challenges and potential for false positives, highlighting the difficulty of distinguishing between illegal content and legitimate material like educational resources or artwork. Others debated the privacy implications and potential for abuse by governments or malicious actors. The "slippery slope" argument was raised, with concerns that seemingly narrow use cases for client-side scanning could expand to encompass other types of content. The discussion also touched on the limitations of hashing as a detection method and the possibility of adversarial attacks designed to circumvent these systems. Several commenters expressed strong opposition to client-side scanning, arguing that it fundamentally undermines the purpose of end-to-end encryption.
TikTok was reportedly preparing for a potential shutdown in the U.S. on Sunday, January 19, 2025, according to information reviewed by Reuters. This involved discussions with cloud providers about data backup and transfer in case a forced sale or ban materialized. However, a spokesperson for TikTok denied the report, stating the company had no plans to shut down its U.S. operations. The report suggested these preparations were contingency plans and not an indication that a shutdown was imminent or certain.
HN commenters are largely skeptical of a TikTok shutdown actually happening on Sunday. Many believe the Reuters article misrepresented the Sunday deadline as a shutdown deadline when it actually referred to a deadline for ByteDance to divest from TikTok. Several users point out that previous deadlines have come and gone without action, suggesting this one might also be uneventful. Some express cynicism about the US government's motives, suspecting political maneuvering or protectionism for US social media companies. A few also discuss the technical and logistical challenges of a shutdown, and the potential legal battles that would ensue. Finally, some commenters highlight the irony of potential US government restrictions on speech, given its historical stance on free speech.
iOS 18 introduces homomorphic encryption for features such as Live Caller ID Lookup and Enhanced Visual Search in Photos, allowing Apple's servers to answer queries without ever seeing them in the clear. The device encrypts a query, the server computes directly on the ciphertext, and the encrypted result is returned for the device alone to decrypt, so Apple never accesses the underlying data. The implementation builds on Apple's open-source swift-homomorphic-encryption library, which uses the Brakerski-Fan-Vercauteren (BFV) scheme. While promising improved privacy, the post raises concerns about potential performance impacts and about implementation details that Apple hasn't fully disclosed.
Hacker News users discussed the practical implications and limitations of homomorphic encryption in iOS 18. Several commenters expressed skepticism about Apple's actual implementation and its effectiveness, questioning whether it's fully homomorphic encryption or a more limited form. Performance overhead and restricted use cases were also highlighted as potential drawbacks. Some pointed out that the touted benefits, like encrypted search and image classification, might be achievable with existing techniques, raising doubts about the necessity of homomorphic encryption for these tasks. A few users noted the potential security benefits, particularly regarding protecting user data from cloud providers, but the overall sentiment leaned towards cautious optimism pending further details and independent analysis. Some commenters linked to additional resources explaining the complexities and current state of homomorphic encryption research.
HN users generally praised the project for its cleverness and simplicity, viewing it as a fun and robust offline backup method. Some discussed the practicality, pointing out limitations like the 255-bit key size being smaller than modern standards. Others suggested improvements such as using a different encoding scheme for greater density or incorporating error correction. Durability of the cards was also a topic, with users considering lamination or metal stamping for longevity. The overall sentiment was positive, appreciating the project as a novel approach to cold storage.
The Hacker News post titled "Show HN: PunchCard Key Backup" generated a moderate discussion with several interesting comments. Many commenters expressed appreciation for the novelty and physicality of the punchcard backup system, contrasting it with the more abstract and digital nature of typical key backup methods.
One commenter highlighted the advantage of this system being resistant to electromagnetic pulses (EMPs), a concern for some individuals preparing for disaster scenarios. They further elaborated on the potential longevity of punchcards, pointing out their durability and resistance to data degradation over time compared to electronic storage media. Another commenter echoed this sentiment, emphasizing the robustness and simplicity of the punchcard approach.
Several commenters discussed the practicality of the system. One questioned the number of keys that could be reasonably stored on a punchcard, while another suggested potential improvements like using a more robust material than card stock for the punchcards. The discussion also touched upon the potential for errors during the punching process and the possibility of developing tools to assist with accurate punching.
One user jokingly compared the method to storing secrets on bananas, poking fun at the unconventional storage medium while acknowledging the cleverness of the punchcard concept.
Some commenters explored the historical context of punchcards, drawing parallels to their use in early computing. One mentioned the potential for using existing punchcard readers to interface with the backup system, bridging the gap between this modern application and its historical roots.
The security aspect was also addressed. A commenter raised the concern that punchcards might not be as secure as other backup methods if not stored carefully, as they are visually decipherable. This led to a discussion about the importance of physical security in any backup strategy, regardless of the medium.
Overall, the comments reflected a mixture of amusement, appreciation for the ingenuity, and practical considerations regarding the punchcard key backup system. The discussion highlighted the trade-offs between simplicity, durability, security, and practicality inherent in this unconventional approach.