The blog post encourages readers to experiment with a provided Python script that demonstrates how easily location can be estimated from publicly available Wi-Fi network data and the Wigle.net API. By inputting the BSSIDs (the unique hardware identifiers of nearby Wi-Fi access points), even without connecting to those networks, the script queries Wigle.net and returns a surprisingly accurate location estimate. The post highlights the privacy implications of this accessible technology, emphasizing that readily available information about wireless networks can pinpoint someone's location with a simple script, regardless of whether location services are enabled on a device. This reinforces the previous post's message about the pervasiveness of location tracking.
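The post's script isn't reproduced here, but a minimal sketch of the same idea might look like the following. It assumes a free Wigle.net API name/token pair; the endpoint path and the `trilat`/`trilong` response fields follow Wigle's public v2 API and should be checked against the current docs.

```python
# Minimal sketch of a Wigle.net BSSID lookup (not the post's exact script).
# Assumes a free API name/token from wigle.net; endpoint and field names
# follow Wigle's public v2 API and may change.
import requests

WIGLE_API = "https://api.wigle.net/api/v2/network/search"

def locate_bssid(bssid: str, api_name: str, api_token: str):
    """Return (lat, lon) for a BSSID if Wigle has observed it, else None."""
    resp = requests.get(
        WIGLE_API,
        params={"netid": bssid},
        auth=(api_name, api_token),  # HTTP Basic auth per Wigle's docs
        timeout=10,
    )
    resp.raise_for_status()
    results = resp.json().get("results", [])
    if not results:
        return None
    # 'trilat'/'trilong' are Wigle's trilaterated coordinates for the network.
    return results[0]["trilat"], results[0]["trilong"]

if __name__ == "__main__":
    # Hypothetical BSSID and credentials, for illustration only.
    print(locate_bssid("00:11:22:33:44:55", "AIDxxxxxxxx", "token"))
```

Averaging the results for several nearby BSSIDs narrows the estimate further, which is presumably where the post's "surprisingly accurate" results come from.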
The post "Everyone knows all the apps on your phone" argues that the extensive data collection practices of mobile advertising networks effectively reveal which apps individuals use, even without explicit permission. Through deterministic and probabilistic methods linking device IDs, IP addresses, and other signals, these networks can create detailed profiles of app usage across devices. This information is then packaged and sold to advertisers, data brokers, and even governments, allowing them to infer sensitive information about users, from their political affiliations and health concerns to their financial status and personal relationships. The post emphasizes the illusion of privacy in the mobile ecosystem, suggesting that the current opt-out model is inadequate and calls for a more robust approach to data protection.
Hacker News users discussed the privacy implications of app usage data being readily available to mobile carriers and how this data can be used for targeted advertising and even more nefarious purposes. Some commenters highlighted the ease with which this data can be accessed, not just by corporations but also by individuals with basic technical skills. The discussion also touched upon the ineffectiveness of current privacy regulations and the lack of real control users have over their data. A few users pointed out the potential for this data to reveal sensitive information like health conditions or financial status based on app usage patterns. Several commenters expressed a sense of resignation and apathy, suggesting the fight for data privacy is already lost, while others advocated for stronger regulations and user control over data sharing.
A journalist drove 300 miles through rural Virginia, then filed public records requests with law enforcement agencies to see what surveillance footage they had of his car. He received responses from various agencies, including small town police, sheriff's departments, and university police. Some agencies had no footage, while others had license plate reader (LPR) data or images from traffic cameras. The experience highlighted the patchwork nature of public surveillance, with data retention policies and access procedures varying widely. While some agencies promptly provided information, others were unresponsive or claimed exemptions. The experiment ultimately revealed the growing, yet inconsistent, presence of automated surveillance in even rural areas and raised questions about data security and public access to this information.
Hacker News users discuss the implications of widespread police surveillance and the journalist's experience requesting footage of his own vehicle. Some express concern about the lack of transparency and potential for abuse, highlighting the ease with which law enforcement can track individuals. Others question the legality and oversight of such data collection practices, emphasizing the need for stricter regulations. A few commenters suggest technical countermeasures, such as license plate covers, while acknowledging their limited effectiveness and potential legal ramifications. The practicality and cost-effectiveness of storing vast amounts of surveillance data are also debated, with some arguing that the data's usefulness in solving crimes doesn't justify the privacy intrusion. Several users share personal anecdotes of encountering ALPRs (Automatic License Plate Readers), reinforcing the pervasiveness of this technology. Finally, the discussion touches upon the challenges of balancing public safety with individual privacy rights in an increasingly surveilled society.
Amazon has removed the "Do Not Send" toggle in Alexa's privacy settings that previously prevented voice recordings from being reviewed by human annotators. Users can still delete their voice history and opt out of the "Help improve Alexa" program, but automatic deletion is no longer available, meaning some voice recordings will be retained for an unspecified period for ongoing model development. Amazon claims the change simplifies privacy settings and reflects the primary way customers already manage their data, namely through activity deletion.
Hacker News users reacted with cynicism and resignation to the news that Amazon silently removed the Alexa voice recording privacy option. Many expressed the belief that Amazon never truly honored the setting in the first place, speculating the data was still collected regardless of user preference. Some commenters suggested that this move further erodes trust in Amazon and reinforces the perception that "big tech" companies prioritize data collection over user privacy. Others recommended alternative smart home solutions that respect privacy or simply avoiding such devices altogether. A few wondered about the technical or legal reasons behind the change, with some speculating it might be related to training large language models.
The Register reports that Google collects and transmits Android user data, including hardware identifiers and location, to its servers even before a user opens any apps or completes device setup. This pre-setup data collection involves several Google services and occurs during the initial boot process, transmitting information such as the IMEI, hardware serial number, SIM serial number, and nearby Wi-Fi access point details. While Google claims this data is crucial for essential services like fraud prevention and software updates, the article raises privacy concerns, particularly because users are neither informed of the collection nor given any opportunity to opt out.
HN commenters discuss the implications of Google's data collection on Android even before app usage. Some highlight the irony of Google's privacy claims contrasted with their extensive tracking. Several express resignation, suggesting this behavior is expected from Google and other large tech companies. One commenter mentions a study showing Google collecting data even when location services are disabled, and another points to the difficulty of truly opting out of this tracking without significant technical knowledge. The discussion also touches upon the limitations of using alternative Android ROMs or de-Googled phones, acknowledging their usability compromises. There's a general sense of pessimism about the ability of users to control their data in the Android ecosystem.
Mozilla's Firefox Terms state that they collect information you input into the browser, including text entered in forms, search queries, and URLs visited. This data is used to provide and improve Firefox features like autofill, search suggestions, and syncing. Mozilla emphasizes that they handle this information responsibly, aiming to minimize data collection, de-identify data where possible, and provide users with controls to manage their privacy. They also clarify that while they collect this data, they do not collect the content of web pages you visit unless you explicitly choose features like Pocket or Firefox Screenshots, which are governed by separate privacy policies.
HN users express concern and skepticism over Mozilla's claim to own "information you input through Firefox," interpreting it as overly broad and potentially invasive. Some argue the wording is likely a clumsy attempt to cover necessary data collection for features like sync and breach alerts, not a declaration of ownership over user-created content. Others point out the impracticality of Mozilla storing and utilizing such vast amounts of data, suggesting it's a legal safeguard rather than a reflection of actual practice. A few commenters highlight the contrast with Firefox's privacy-focused image, questioning the need for such strong language. Several users recommend alternative browsers like LibreWolf and Ungoogled Chromium, perceiving them as more privacy-respecting alternatives.
The blog post "Removing Jeff Bezos from My Bed" details the author's humorous, yet slightly unsettling, experience with Amazon's Echo Show 15 and its personalized recommendations. The author found that the device, positioned in their bedroom, consistently suggested purchasing a large, framed portrait of Jeff Bezos. While acknowledging the technical mechanisms likely behind this odd recommendation (facial recognition misidentification and correlated browsing data), they highlight the potential for such personalized advertising to become intrusive and even creepy within the intimate space of a bedroom. The post emphasizes the need for more thoughtful consideration of the placement and application of AI-powered advertising, especially as smart devices become increasingly integrated into our homes.
Hacker News users generally found the linked blog post humorous and relatable. Several commenters shared similar experiences with unwanted targeted ads, highlighting the creepiness factor and questioning the effectiveness of such highly personalized marketing. Some discussed the technical aspects of how these ads are generated, speculating about data collection practices and the algorithms involved. A few expressed concerns about privacy and the potential for misuse of personal information. Others simply appreciated the author's witty writing style and the absurdity of the situation. The top comment humorously suggested an alternative headline: "Man Discovers Retargeting."
Meta's Project Aria research kit consists of smart glasses and a wristband designed to gather first-person data like video, audio, eye-tracking, and location, which will be used to develop future AR glasses. This data is anonymized and used to train AI models that understand the real world, enabling features like seamless environmental interaction and intuitive interfaces. The research kit is not a consumer product and is only distributed to qualified researchers participating in specific studies. The project emphasizes privacy and responsible data collection, employing blurring and redaction techniques to protect bystanders' identities in the collected data.
Several Hacker News commenters express skepticism about Meta's Project Aria research kit, questioning the value of collecting such extensive data and the potential privacy implications. Some doubt the project's usefulness for AR development, suggesting that realistic scenarios are more valuable than vast amounts of "boring" data. Others raise concerns about data security and the possibility of misuse, drawing parallels to previous controversies surrounding Meta's data practices. A few commenters are more optimistic, seeing potential for advancements in AR and expressing interest in the technical details of the data collection process. Several also discuss the challenges of processing and making sense of such a massive dataset, and the limitations of relying solely on first-person visual data for understanding human behavior.
A UK watchdog is investigating Apple's compliance with its own App Tracking Transparency (ATT) framework, questioning why Apple's first-party apps seem exempt from the same stringent data collection rules imposed on third-party developers. The Competition and Markets Authority (CMA) is particularly scrutinizing how Apple gathers and uses user data within its own apps, given that it doesn't require user permission via the ATT pop-up prompts like third-party apps must. The probe aims to determine if this apparent double standard gives Apple an unfair competitive advantage in the advertising and app markets, potentially breaching competition law.
HN commenters largely agree that Apple's behavior is hypocritical, applying stricter tracking rules to third-party apps while seemingly exempting its own. Some suggest this is classic regulatory capture, where Apple leverages its gatekeeper status to stifle competition. Others point out the difficulty of proving Apple's data collection is for personalized ads, as Apple claims it's for "personalized experiences." A few commenters argue Apple's first-party data usage is less problematic because the data isn't shared externally, while others counter that the distinction is irrelevant from a privacy perspective. The lack of transparency around Apple's data collection practices fuels suspicion. A common sentiment is that Apple's privacy stance is more about marketing than genuine user protection. Some users also highlight the inherent conflict of interest in Apple acting as both platform owner and app developer.
A recent study argues that CAPTCHAs are essentially a profitable tracking system disguised as a security measure. While ostensibly designed to differentiate bots from humans, CAPTCHAs let companies like Google collect vast amounts of user data for targeted advertising and other purposes. The system has cost users an estimated 819 million hours of time globally while generating nearly $1 trillion in value, primarily for Google. The study contends that the actual security benefit of CAPTCHAs is minimal compared to the immense value of the user data they collect, raising concerns about the balance between online security and user privacy and suggesting CAPTCHAs function more as a data harvesting tool than an effective bot deterrent.
Hacker News users generally agree with the premise that CAPTCHAs are exploitative. Several point out the irony of Google using them for training AI while simultaneously claiming they prevent bots. Some highlight the accessibility issues CAPTCHAs create, particularly for disabled users. Others discuss alternatives, such as Cloudflare's Turnstile, and the privacy implications of different solutions. The increasing difficulty and frequency of CAPTCHAs are also criticized, with some speculating it's a deliberate tactic to push users towards paid "captcha-free" services. Several commenters express frustration with the current state of CAPTCHAs and the lack of viable alternatives.
Tim investigated the precision of location data used for targeted advertising by requesting his own data from ad networks. He found that location information shared with these networks, often through apps on his phone, was remarkably precise, pinpointing his location to within a few meters. He successfully identified his own apartment and even specific rooms within it based on the location polygons provided by the ad networks. This highlighted the potential privacy implications of sharing location data with apps, demonstrating how easily and accurately individuals can be tracked even without explicit consent for precise location sharing. The experiment revealed a lack of transparency and control over how this granular location data is collected, used, and shared by advertising ecosystems.
HN commenters generally agreed with the article's premise that location tracking through in-app advertising is pervasive and concerning. Some highlighted the irony of privacy policies that claim not to share precise location while effectively doing so through ad requests containing latitude/longitude. Several discussed technical details, including the surprising precision achievable even without GPS and the potential misuse of background location data. Others pointed to the broader ecosystem issue, emphasizing the difficulty in assigning blame to any single actor and the collective responsibility of ad networks, app developers, and device manufacturers. A few commenters suggested potential mitigations like VPNs or disabling location services entirely, while others expressed resignation to the current state of surveillance. The effectiveness of "Limit Ad Tracking" settings was also questioned.
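The lat/long leakage commenters describe typically rides along in real-time-bidding traffic. As a simplified illustration (the request below is invented, though the `device.geo` field names follow the OpenRTB 2.x spec), extracting those coordinates takes almost no work:

```python
# Simplified, invented OpenRTB-style bid request; the device.geo field names
# (lat, lon, type, accuracy) follow the OpenRTB 2.x spec.
import json

bid_request = json.loads("""
{
  "id": "example-bid-1",
  "device": {
    "ip": "203.0.113.7",
    "geo": {"lat": 37.774929, "lon": -122.419416, "type": 1, "accuracy": 5}
  }
}
""")

geo = bid_request["device"]["geo"]
# Five decimal places of latitude is roughly 1.1 m of north-south resolution,
# so coordinates like these identify a building, not merely a city.
print(f"Device at ({geo['lat']}, {geo['lon']}), ~{geo['accuracy']} m accuracy")
```

Any party receiving bid requests, not just the auction winner, can log these coordinates, which is why commenters found blame so hard to assign to a single actor.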
The Asurion article outlines how to manage various Apple "intelligence" features, which personalize and improve the user experience but also collect data. It explains how to disable Siri suggestions, location tracking (per app or entirely), personalized ads, analytics sharing with Apple, and features like Significant Locations and personalized recommendations in apps such as Music and TV. The article notes that disabling these features may impact the functionality of certain apps and services, and offers steps for both iPhone and Mac.
HN commenters largely express skepticism and distrust of Apple's "intelligence" features, viewing them as data collection tools rather than genuinely helpful features. Several comments highlight the difficulty in truly disabling these features, pointing out that Apple often re-enables them with software updates or buries the relevant settings deep within menus. Some users suggest that these "intelligent" features primarily serve to train Apple's machine learning models, with little tangible benefit to the end user. A few comments discuss specific examples of unwanted behavior, like personalized ads appearing based on captured data. Overall, the sentiment is one of caution and a preference for maintaining privacy over utilizing these features.
Hacker News users generally agreed with the article's premise, expressing concern over the ease with which location can be approximated or even precisely determined using readily available data and relatively simple techniques. Several commenters shared their own experiences replicating the author's methods, often with similar success in pinpointing locations. Some highlighted the chilling implications for privacy, particularly in light of data breaches and the potential for malicious actors to exploit this vulnerability. A few offered suggestions for mitigating the risk, such as VPN usage or scrutinizing browser extensions, while others debated the feasibility and effectiveness of such measures. Some questioned the novelty of the findings, pointing to prior discussions on similar topics, while others emphasized the importance of continued awareness and education about these privacy risks.
The Hacker News post titled "Everyone knows your location, Part 2: try it yourself and share the results" generated a moderate amount of discussion, with commenters offering a mix of reactions and insights on the article's location-tracking claims.
Several commenters shared their own experiences attempting the location tracking techniques described in the article, with varying degrees of success. Some reported being able to pinpoint locations with surprising accuracy, while others found the methods less effective or inconsistent. This led to a discussion about the reliability and practicality of these techniques in real-world scenarios.
A key point of discussion revolved around the ethical implications of readily accessible location tracking methods. Commenters debated the potential for misuse and the need for greater awareness and control over personal location data. Some argued for stricter regulations and increased transparency from companies collecting and utilizing location information.
Technical details of the tracking methods were also examined. Commenters discussed the specifics of IP address geolocation, Wi-Fi positioning, and other techniques, including their limitations and potential vulnerabilities. Some commenters with expertise in networking and security offered insights into the accuracy and feasibility of these methods, pointing out factors that could influence the results.
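Of the techniques discussed, IP geolocation is the easiest to try first-hand. A minimal sketch against ip-api.com, one of several free lookup services (field names per its public JSON docs), shows both how simple it is and why it is usually only city-accurate:

```python
# Minimal IP geolocation sketch using the free ip-api.com service
# (no API key needed for light use; field names per its public JSON docs).
# IP-based lookups are typically city-accurate at best, unlike Wi-Fi positioning.
import requests

def geolocate_ip(ip: str) -> dict:
    resp = requests.get(f"http://ip-api.com/json/{ip}", timeout=10)
    resp.raise_for_status()
    data = resp.json()
    if data.get("status") != "success":
        raise ValueError(f"lookup failed: {data.get('message')}")
    return {k: data[k] for k in ("country", "city", "lat", "lon")}

if __name__ == "__main__":
    print(geolocate_ip("8.8.8.8"))  # a well-known public resolver, for illustration
```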
The conversation touched upon the trade-offs between convenience and privacy in the context of location-based services. Commenters acknowledged the benefits of location services for navigation, personalized recommendations, and other applications, but also expressed concerns about the potential for surveillance and data breaches.
Some commenters also discussed potential mitigations and defenses against unwanted location tracking. Suggestions included using VPNs, disabling location services on devices, and being mindful of the permissions granted to apps.
Finally, a few commenters questioned the overall novelty of the information presented in the article, suggesting that the methods described were already well-known within the security and privacy community. However, they acknowledged the value in raising public awareness about these issues and making them accessible to a wider audience.