"The NSA Selector" details a purported algorithm and scoring system used by the NSA to identify individuals for targeted surveillance based on their communication metadata. It describes a hierarchical structure where selectors, essentially search queries on metadata like phone numbers, email addresses, and IP addresses, are combined with modifiers to narrow down targets. The system assigns a score based on various factors, including the target's proximity to known persons of interest and their communication patterns. This score then determines the level of surveillance applied. The post claims this information was gleaned from leaked Snowden documents, although direct sourcing is absent. It provides a technical breakdown of how such a system could function, aiming to illustrate the potential scope and mechanics of mass surveillance based on metadata.
Swiss-based privacy-focused company Proton, known for its VPN and encrypted email services, is considering leaving Switzerland due to a new surveillance law. The law grants the Swiss government expanded powers to spy on individuals and companies, requiring service providers like Proton to hand over user data in certain circumstances. Proton argues this compromises their core mission of user privacy and confidentiality, potentially making them "less confidential than Google," and is exploring relocation to a jurisdiction with stronger privacy protections.
Hacker News users discuss Proton's potential departure from Switzerland due to new surveillance laws. Several commenters express skepticism of Proton's claims, suggesting the move is motivated more by marketing than genuine concern for user privacy. Some argue that Switzerland is still more privacy-respecting than many other countries, questioning whether a move would genuinely benefit users. Others point out the complexities of running a secure email service, noting the challenges of balancing user privacy with legal obligations and the potential for abuse. A few commenters mention alternative providers and the increasing difficulty of finding truly private communication platforms. The discussion also touches upon the practicalities of relocating a company of Proton's size and the potential impact on its existing infrastructure and workforce.
John L. Young, co-founder of Cryptome, a crucial online archive of government and corporate secrets, passed away. He and co-founder Deborah Natsios established Cryptome in 1996, dedicating it to publishing information suppressed for national security or other questionable reasons. Young tirelessly defended the public's right to know, facing numerous legal threats and challenges for hosting controversial documents, including internal memos, manuals, and blueprints. His unwavering commitment to transparency and freedom of information made Cryptome a vital resource for journalists, researchers, and activists, leaving an enduring legacy of challenging censorship and promoting open access to information.
HN commenters mourn the loss of John Young, co-founder of Cryptome, highlighting his dedication to free speech and government transparency. Several share anecdotes showcasing Young's uncompromising character and the impact Cryptome had on their lives. Some discuss the site's role in publishing sensitive documents and the subsequent government pressure, admiring Young's courage in the face of legal threats. Others praise the simple, ad-free design of Cryptome as a testament to its core mission. The overall sentiment expresses deep respect for Young's contribution to online freedom of information.
The author argues that modern personal computing has become "anti-personnel," designed to exploit users rather than empower them. Software and hardware are increasingly complex, opaque, and controlled by centralized entities, fostering dependency and hindering user agency. This shift is exemplified by the dominance of subscription services, planned obsolescence, pervasive surveillance, and the erosion of user ownership and control over data and devices. The essay calls for a return to the original ethos of personal computing, emphasizing user autonomy, open standards, and the right to repair and modify technology. This involves reclaiming agency through practices like self-hosting, using open-source software, and engaging in critical reflection about our relationship with technology.
HN commenters largely agree with the author's premise that much of modern computing is designed to be adversarial toward users, extracting data and attention at the expense of usability and agency. Several point out the parallels with Shoshana Zuboff's "Surveillance Capitalism." Some offer specific examples like CAPTCHAs, cookie banners, and paywalls as prime examples of "anti-personnel" design. Others discuss the inherent tension between free services and monetization through data collection, suggesting that alternative business models are needed. A few counterpoints argue that the article overstates the case, or that users implicitly consent to these tradeoffs in exchange for free services. A compelling exchange centers on whether the described issues are truly "anti-personnel," or simply the result of poorly designed systems.
Thai authorities are systematically using online doxxing to intimidate and silence critics. The Citizen Lab report details how government agencies, particularly the Royal Thai Army, leverage social media and messaging platforms to collect and disseminate personal information of dissidents. This information, including names, addresses, family details, and affiliations, is then weaponized to publicly shame, harass, and threaten individuals online, fostering a climate of fear and self-censorship. The report highlights the coordinated nature of these campaigns, often involving fake accounts and coordinated posting, and the chilling effect they have on freedom of expression in Thailand.
HN commenters discuss the chilling effect of doxxing and online harassment campaigns orchestrated by Thai authorities to silence dissent, particularly targeting young activists. Some express concern about the increasing sophistication of these tactics, including the use of seemingly grassroots social media campaigns to amplify the harassment and create an environment of fear. Others highlight the vulnerability of individuals lacking strong digital security practices, and the difficulty of holding perpetrators accountable. The conversation also touches on broader themes of internet freedom, the role of social media platforms in facilitating such campaigns, and the potential for similar tactics to be employed by other authoritarian regimes. Several commenters draw parallels to other countries where governments utilize online harassment and disinformation to suppress political opposition. Finally, there's a brief discussion of potential countermeasures and the importance of supporting organizations that protect digital rights and online privacy.
Doctorow's "Against Transparency" argues that calls for increased transparency are often a wolf in sheep's clothing. While superficially appealing, transparency initiatives frequently empower bad actors more than they help the public. The powerful already possess extensive information about individuals, and forced transparency from the less powerful merely provides them with more ammunition for exploitation, harassment, and manipulation, without offering reciprocal accountability. This creates an uneven playing field, furthering existing power imbalances and solidifying the advantages of those at the top. Genuine accountability, Doctorow suggests, requires not just seeing through systems, but also into them – understanding the power dynamics and decision-making processes obscured by superficial transparency.
Hacker News users discussing Cory Doctorow's "Against Transparency" post largely agree with his premise that forced transparency often benefits powerful entities more than individuals. Several commenters point out how regulatory capture allows corporations to manipulate transparency requirements to their advantage, burying individuals in legalese while extracting valuable data for their own use. The discussion highlights examples like California's Prop 65, which is criticized for its overbroad warnings that ultimately desensitize consumers. Some users express skepticism about Doctorow's proposed solutions, while others offer alternative perspectives, emphasizing the importance of transparency in specific areas like government spending and open-source software. The potential for AI to exacerbate these issues is also touched upon, with concerns raised about the use of personal data for exploitative purposes. Overall, the comments paint a picture of nuanced agreement with Doctorow's central argument, tempered by practical concerns and a recognition of the complex role transparency plays in different contexts.
Terms of Service; Didn't Read (ToS;DR) is a community-driven project that simplifies and rates the terms of service and privacy policies of various websites and online services. It uses a simple grading system (Class A to Class E) to quickly inform users about potential issues regarding their rights, data usage, and other key aspects hidden within lengthy legal documents. The goal is to increase transparency and awareness, empowering users to make informed decisions about which services they choose to use based on how those services handle their data and respect user rights. ToS;DR relies on volunteer contributions to analyze and summarize these complex documents, making them easily digestible for the average internet user.
HN users generally praise ToS;DR as a valuable resource for understanding the complexities of terms of service. Several highlight its usefulness for quickly assessing the key privacy and data usage implications of various online services. Some express appreciation for the project's crowd-sourced nature and its commitment to transparency. A few commenters discuss the inherent difficulties in keeping up with constantly changing terms of service and the challenges of accurately summarizing complex legal documents. One user questions the project's neutrality, while another suggests expanding its scope to include privacy policies. The overall sentiment is positive, with many viewing ToS;DR as a vital tool for navigating the increasingly complex digital landscape.
Pressure is mounting on the UK Parliament's Intelligence and Security Committee (ISC) to hold its hearing on Apple's data privacy practices in public. The ISC plans to examine claims made in a recent report that Apple's data extraction policies could compromise national security and aid authoritarian regimes. Privacy advocates and legal experts argue a public hearing is essential for transparency and accountability, especially given the significant implications for user privacy. The ISC typically operates in secrecy, but critics contend this case warrants an open session due to the broad public interest and potential impact of its findings.
HN commenters largely agree that Apple's argument for a closed-door hearing regarding data privacy doesn't hold water. Several highlight the irony of Apple's public stance on privacy conflicting with their desire for secrecy in this legal proceeding. Some express skepticism about the sincerity of Apple's privacy concerns, suggesting it's more about competitive advantage. A few commenters suggest the closed hearing might be justified due to legitimate technical details or competitive sensitivities, but this view is in the minority. Others point out the inherent conflict between national security and individual privacy, noting that this case touches upon that tension. A few express cynicism about government overreach in general.
Internet shutdowns across Africa reached a record high in 2024, with 26 documented incidents, primarily during elections or periods of civil unrest. Governments increasingly weaponized internet access, disrupting communication and suppressing dissent. These shutdowns, often targeting mobile data and social media platforms, caused significant economic damage and hampered human rights monitoring. Ethiopia and Senegal were among the countries experiencing the longest and most disruptive outages. The trend raises concerns about democratic backsliding and the erosion of digital rights across the continent.
HN commenters discuss the increasing use of internet shutdowns in Africa, particularly during elections and protests. Some point out that this tactic isn't unique to Africa, with similar actions seen in India and Myanmar. Others highlight the economic damage these shutdowns inflict, impacting businesses and individuals relying on digital connectivity. The discussion also touches upon the chilling effect on free speech and access to information, with concerns raised about governments controlling narratives. Several commenters suggest that decentralized technologies like mesh networks and satellite internet could offer potential solutions to bypass these shutdowns, although practical limitations are acknowledged. The role of Western tech companies in facilitating these shutdowns is also questioned, with some advocating for stronger stances against government censorship.
EFF warns that age verification laws, ostensibly designed to restrict access to adult content, pose a serious threat to online privacy. While initially targeting pornography sites, these laws are expanding to encompass broader online activities, such as accessing skincare products, potentially requiring users to upload government IDs to third-party verification services. This creates a massive database of sensitive personal information vulnerable to breaches, government surveillance, and misuse by private companies, effectively turning age verification into a backdoor for widespread online monitoring. The EFF argues that these laws are overbroad, ineffective at their stated goals, and disproportionately harm marginalized communities.
HN commenters express concerns about the slippery slope of age verification laws, starting with porn and potentially expanding to other online content and even everyday purchases. They argue that these laws normalize widespread surveillance and data collection, creating honeypots for hackers and potentially enabling government abuse. Several highlight the ineffectiveness of age gates, pointing to easy bypass methods and the likelihood of children accessing restricted content through other means. The chilling effect on free speech and the potential for discriminatory enforcement are also raised, with some commenters drawing parallels to authoritarian regimes. Some suggest focusing on better education and parental controls rather than restrictive legislation. The technical feasibility and privacy implications of various verification methods are debated, with skepticism towards relying on government IDs or private companies.
The UK's National Cyber Security Centre (NCSC), the part of GCHQ responsible for public security guidance, quietly removed official advice recommending the use of Apple's device encryption for protecting sensitive information. While no official explanation was given, the change coincides with the UK government's ongoing push for legislation enabling access to encrypted communications, suggesting a conflict between promoting security best practices and pursuing surveillance capabilities. The removal raises concerns about the government's commitment to strong encryption and the potential chilling effect on individuals and organizations who relied on such advice for data protection.
HN commenters discuss the UK government's removal of advice recommending Apple's encryption, speculating on the reasons. Some suggest it's due to Apple's upcoming changes to client-side scanning (now abandoned), fearing it weakens end-to-end encryption. Others point to the Online Safety Bill, which could mandate scanning of encrypted messages, making previous recommendations untenable. A few posit the change is related to legal challenges or simply outdated advice, with Apple no longer being the sole provider of strong encryption. The overall sentiment expresses concern and distrust towards the government's motives, with many suspecting a push towards weakening encryption for surveillance purposes. Some also criticize the lack of transparency surrounding the change.
Apple has removed its iCloud Advanced Data Protection feature, which offers end-to-end encryption for almost all iCloud data, from its beta software in the UK. This follows reported concerns from the UK's National Cyber Security Centre (NCSC) that the enhanced security measures would hinder law enforcement's ability to access data for investigations. Apple maintains that the feature will be available to UK users eventually, but hasn't provided a clear timeline for its reintroduction. While the feature remains available in other countries, this move raises questions about the balance between privacy and government access to data.
HN commenters largely agree that Apple's decision to pull its child safety features, specifically the client-side scanning of photos, is a positive outcome. Some believe Apple was pressured by the UK government's proposed changes to the Investigatory Powers Act, which would compel companies to disable security features if deemed a national security risk. Others suggest Apple abandoned the plan due to widespread criticism and technical challenges. A few express disappointment, feeling the feature had potential if implemented carefully, and worry about the implications for future child safety initiatives. The prevalence of false positives and the potential for governments to abuse the system were cited as major concerns. Some skepticism towards the UK government's motivations is also evident.
The UK government is pushing amendments to the Investigatory Powers Act that would compel tech companies like Apple to remove security features, including end-to-end encryption, if deemed necessary for national security investigations. This would effectively create a backdoor, allowing government access to user data without users' knowledge or consent. Apple argues that this undermines user privacy and security, making everyone more vulnerable to hackers and authoritarian regimes. The proposal faces strong opposition from privacy advocates and tech experts who warn of its potential for abuse and its chilling effect on free speech.
HN commenters express skepticism about the UK government's claims regarding the necessity of this order for national security, with several pointing out the hypocrisy of demanding backdoors while simultaneously promoting end-to-end encryption for their own communications. Some suggest this move is a dangerous precedent that could embolden other authoritarian regimes. Technical feasibility is also questioned, with some arguing that creating such a backdoor is impossible without compromising security for everyone. Others discuss the potential legal challenges Apple might pursue and the broader implications for user privacy globally. A few commenters raise concerns about the chilling effect this could have on whistleblowers and journalists.
Thailand has disrupted utilities to a Myanmar border town notorious for housing online scam operations. The targeted area, Shwe Kokko, is reportedly a hub for Chinese-run criminal enterprises involved in various illicit activities, including online gambling, fraud, and human trafficking. By cutting off electricity and internet access, Thai authorities aim to hinder these operations and pressure Myanmar to address the issue. This action follows reports of thousands of people being trafficked to the area and forced to work in these scams.
Hacker News commenters are skeptical of the stated efficacy of Thailand cutting power and internet to Myanmar border towns to combat scam operations. Several suggest that the gangs are likely mobile and adaptable, easily relocating or using alternative power and internet sources like generators and satellite connections. Some highlight the collateral damage inflicted on innocent civilians and legitimate businesses in the affected areas. Others discuss the complexity of the situation, mentioning the involvement of corrupt officials and the difficulty of definitively attributing the outages to Thailand. The overall sentiment leans towards the action being a performative, ineffective measure rather than a genuine solution.
The Substack post details how the content filtering in DeepSeek, an AI chatbot, can be circumvented by encoding potentially censored keywords as hexadecimal strings. Because the filter matches on the raw query text while the underlying model happily decodes hex, a query containing "0x736578" (hex for "sex") returns results that a plain query for "sex" would block. The post argues this reveals a basic flaw in DeepSeek's censorship implementation: filtering based purely on keyword matching is trivially bypassed with simple encoding. It highlights the limitations of automated content moderation and the unintended consequences of relying on simplistic filtering methods.
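For illustration, the encoding trick is trivial to reproduce. The sketch below makes no assumptions about DeepSeek's internals beyond what the post describes: a filter that matches only the raw query text, sitting in front of a system that understands hex.

```python
BLOCKLIST = {"sex"}  # stand-in for whatever terms the real filter matches

def naive_filter(query: str) -> bool:
    """Blocks the query only if a blocklisted term appears in it verbatim."""
    return any(term in query.lower() for term in BLOCKLIST)

keyword = "sex"
hex_query = "0x" + keyword.encode("ascii").hex()      # -> "0x736578"

print(naive_filter(keyword))    # True:  the plain query is blocked
print(naive_filter(hex_query))  # False: the hex form sails past the filter

# Anything downstream that decodes hex recovers the blocked term intact.
print(bytes.fromhex(hex_query[2:]).decode("ascii"))   # "sex"
```

The mismatch is the whole flaw: the filter and the system behind it disagree about what the query means, so any encoding the back end understands but the filter does not becomes a bypass.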
Hacker News users discuss potential censorship-evasion techniques, prompted by an article detailing how DeepSeek, an AI chatbot, appears to suppress results related to specific topics. Several commenters explore the idea of encoding sensitive queries in hexadecimal as a workaround. However, skepticism arises regarding the long-term effectiveness of such a tactic, with commenters predicting that DeepSeek would adapt and detect such encodings. The discussion also touches upon the broader implications of censorship in AI systems, with some arguing that DeepSeek's approach might hinder access to valuable information while others emphasize the platform's right to curate its content. The efficacy and ethics of censorship are debated, with no clear consensus emerging. A few comments delve into alternative evasion strategies and the general limitations of censorship against a determined community.
Cory Doctorow's "It's Not a Crime If We Do It With an App" argues that enclosing formerly analog activities within proprietary apps often transforms acceptable behaviors into exploitable data points. Companies use the guise of convenience and added features to justify these apps, gathering vast amounts of user data that is then monetized or weaponized through surveillance. This creates a system where everyday actions, previously unregulated, become subject to corporate control and potential abuse, ultimately diminishing user autonomy and creating new vectors for discrimination and exploitation. The post uses the satirical example of a potato-tracking app to illustrate how seemingly innocuous data collection can lead to intrusive monitoring and manipulation.
HN commenters generally agree with Doctorow's premise that large corporations use "regulatory capture" to avoid legal consequences for harmful actions, citing examples like Facebook and Purdue Pharma. Some questioned the framing of the potato tracking scenario as overly simplistic, arguing that real-world supply chains are vastly more complex. A few commenters discussed the practicality of Doctorow's proposed solutions, debating the efficacy of co-ops and decentralized systems in combating corporate power. There was some skepticism about the feasibility of truly anonymized data collection and the potential for abuse even in decentralized systems. Several pointed out the inherent tension between the convenience offered by these technologies and the potential for exploitation.
A federal court ruled the NSA's warrantless searches of Americans' data under Section 702 of the Foreign Intelligence Surveillance Act unconstitutional. The court found that the "backdoor searches," querying a database of collected communications for information about Americans, violated the Fourth Amendment's protection against unreasonable searches. This landmark decision significantly limits the government's ability to search this data without a warrant, marking a major victory for digital privacy. The ruling specifically focuses on querying data already collected, not the collection itself, and the government may appeal.
HN commenters largely celebrate the ruling against warrantless searches of 702 data, viewing it as a significant victory for privacy. Several highlight the problematic nature of the "backdoor search" loophole and its potential for abuse. Some express skepticism about the government's likely appeals and the long road ahead to truly protect privacy. A few discuss the technical aspects of 702 collection and the challenges in balancing national security with individual rights. One commenter points out the irony of the US government criticizing other countries' surveillance practices while engaging in similar activities domestically. Others offer cautious optimism, hoping this ruling sets a precedent for future privacy protections.
The blog post "Right to root access" argues that users should have complete control over the devices they own, including root access. It contends that manufacturers artificially restrict user access for anti-competitive reasons, forcing users into walled gardens and limiting their ability to repair, modify, and truly own their devices. This restriction extends beyond just software to encompass firmware and hardware, hindering innovation and consumer freedom. The author believes this control should be a fundamental digital right, akin to property rights in the physical world, empowering users to fully utilize and customize their technology.
HN users largely agree with the premise that users should have root access to devices they own. Several express frustration with "walled gardens" and the increasing trend of manufacturers restricting user control. Some highlight the security and repairability benefits of root access, citing examples like jailbreaking iPhones to enable security features unavailable in the official iOS. A few more skeptical comments raise concerns about users bricking their devices and the potential for increased malware susceptibility if users lack technical expertise. Others note the conflict between right-to-repair legislation and software licensing agreements. A recurring theme is the desire for modular devices that allow component replacement and OS customization without voiding warranties.
Summary of Comments (68)
https://news.ycombinator.com/item?id=44044459
HN users discuss the practicality and implications of the "NSA selector" tool described in the linked GitHub repository. Some express skepticism about its real-world effectiveness, pointing out limitations in matching capabilities and the potential for false positives. Others highlight the ethical concerns surrounding such tools, regardless of their efficacy, and the potential for misuse. Several commenters delve into the technical details of the selector's implementation, discussing regular expressions, character encoding, and performance considerations. The legality of using such a tool is also debated, with differing opinions on whether simply possessing or running the code constitutes a crime. Finally, some users question the authenticity and provenance of the tool, suggesting it might be a hoax or a misinterpretation of actual NSA practices.
The Hacker News post titled "The NSA Selector" (linking to a GitHub repository describing a supposed NSA spying tool) drew a moderate number of comments: enough for substantive discussion, though not an overwhelmingly large thread. Most express strong skepticism about the authenticity and significance of the "NSA selector" the repository describes.
Several commenters question the technical details presented, pointing out apparent inconsistencies or lack of evidence. One commenter notes the absence of crucial information about how the alleged tool would integrate with existing systems, making it difficult to assess its plausibility. Others express doubt about the claimed capabilities of the tool, suggesting they are exaggerated or based on misunderstandings of network security principles. The lack of verification from reputable sources is a recurring theme, with commenters emphasizing the need for stronger evidence before taking the claims seriously.
Some commenters engage in more speculative discussion, exploring hypothetical scenarios even while acknowledging the uncertainty surrounding the "selector." They discuss the potential implications if such a tool were real, considering its possible impact on privacy and security. However, these discussions remain grounded in the prevailing skepticism, treating the "selector" as more of a thought experiment than a confirmed threat.
A few comments offer alternative explanations for the information presented in the GitHub repository. One commenter suggests it could be a misunderstanding of existing network monitoring techniques, while another speculates it might be a deliberate hoax or disinformation campaign. These alternative theories further contribute to the overall sense of doubt surrounding the "NSA selector."
In short, the thread is dominated by skepticism and caution: commenters highlight the lack of verifiable evidence, question the technical details, and propose alternative explanations. Even the speculative discussions of potential implications proceed from doubt, with most insisting on more substantial proof before taking the claims at face value.