France's data protection watchdog, CNIL, fined Apple €8 million and Meta (Facebook's parent company) €60 million for violating EU privacy law. The fines stem from how each company implemented targeted advertising on its own platform. CNIL found that users were not given a sufficiently simple mechanism to opt out of personalized ads; while both companies offered some control, users had to navigate multiple settings. Specifically, Apple enabled personalized ads by default, requiring users to actively disable them, while Meta made ad personalization integral to its terms of service and offered no comparably simple way to refuse it. CNIL considered both approaches violations of EU regulations that require clear and straightforward consent for personalized advertising.
Wired's article argues that Meta built its social media dominance through acquisitions like Instagram and WhatsApp, and that it initially embraced interoperability with other platforms while doing so. Once its position was secure, however, Meta strategically reversed course, restricting access and data portability to stifle competition and maintain control over the digital landscape. This behavior, as highlighted in the FTC's antitrust lawsuit, demonstrates Meta's opportunistic approach to collaboration: interoperability was a tool to be exploited rather than a principle to uphold. The article emphasizes how Meta's actions ultimately harmed users by limiting choice and innovation.
HN commenters largely agree with the premise of the Wired article, pointing out Meta/Facebook's history of abandoning projects and partners once they've served their purpose. Several commenters cite specific examples like Facebook's treatment of Zynga and the shuttering of Parse. Some discuss the broader implications of platform dependence and the inherent risks for developers building on closed ecosystems controlled by powerful companies like Meta. Others note that this behavior isn't unique to Meta, highlighting similar patterns in other large tech companies, like Google and Apple, where services and APIs are discontinued with little notice, disrupting reliant businesses. A few voices suggest that regulatory intervention is necessary to address this power imbalance and prevent the stifling of innovation. The general sentiment is one of distrust towards Meta and a wariness about relying on their platforms for long-term projects.
Simon Willison speculates that Meta's decision to open-source its Llama large language model might be a strategic move to comply with the upcoming EU AI Act. The Act places greater regulatory burdens on "foundation models" (powerful, general-purpose AI models like Llama), especially those deployed commercially. By open-sourcing Llama, Meta potentially sidesteps these stricter obligations, since the open release arguably diminishes Meta's direct control over the model and thus its responsibility as a provider under the Act. The move also lets Meta benefit from community contributions and improvements while possibly avoiding the costs and limitations of being classified as a commercial foundation model provider under the EU's framework.
Several commenters on Hacker News discussed the potential impact of the EU AI Act on Meta's decision to release Llama as "open source." Some speculated that the Act's restrictions on foundation models might incentivize companies to release models openly to avoid stricter regulations applied to closed-source, commercially available models. Others debated the true openness of Llama, pointing to the community license's restrictions on commercial use at scale, arguing that this limitation makes it not truly open source. A few commenters questioned if Meta genuinely intended to avoid the AI Act or if other factors, such as community goodwill and attracting talent, were more influential. There was also discussion around whether Meta's move was preemptive, anticipating future tightening of "open source" definitions within the Act. Some also observed the irony of regulations potentially driving more open access to powerful AI models.
The FTC's antitrust lawsuit against Meta kicked off in federal court. The FTC argues that Meta's acquisition of Within, maker of the popular VR fitness app Supernatural, would unlawfully entrench its dominance in the nascent virtual reality market, and it is asking the court to block the deal. Meta contends that the acquisition is pro-competitive, benefiting consumers and developers alike. The trial's outcome holds significant weight for the future of VR and for the FTC's ability to challenge Big Tech acquisitions in emerging markets.
HN commenters discuss the difficulty of defining the relevant market in the Meta antitrust case, with some arguing that virtual reality fitness is a distinct market from broader social media or even general VR, while others believe the focus should be on Meta's overall social media dominance. Several commenters express skepticism about the FTC's case, believing it's weak and politically motivated, and unlikely to succeed given the high bar for antitrust action. The acquisition of Within is seen by some as a relatively small deal unlikely to warrant such scrutiny. Some discussion also revolves around the potential chilling effect of such lawsuits on acquisitions by large companies, potentially stifling innovation. A few commenters also mention the unusual courtroom setup with VR headsets provided, highlighting the novelty of the technology involved in the case.
Meta has announced Llama 4, a collection of foundation models that boast improved performance and expanded capabilities compared to their predecessors. Llama 4 is available in various sizes and has been trained on a significantly larger dataset of text and code. Notably, Llama 4 introduces multimodal capabilities, allowing it to process both text and images, which enables tasks like image captioning and visual question answering. Meta frames the release as part of its commitment to open innovation and responsible development, publishing Llama 4 under its community license, which permits research and most commercial use but restricts very large-scale commercial deployments, with the aim of fostering broader community involvement in AI development and safety research.
Hacker News users discussed the implications of Llama 4's multimodal capabilities, particularly its image understanding. Some expressed excitement about potential applications like image-based Q&A and generating alt-text for accessibility. Skepticism arose around how open the release really is, given the restrictions in Meta's community license. Several commenters debated the competitive landscape, comparing Llama 4 to Google's Gemini and to fully open-source models, questioning whether Llama 4 offered significant advantages. The license restrictions also raised concerns about reproducibility of research and community contributions. Others noted the rapid pace of AI advancement and speculated on future developments. A few users highlighted the potential for misuse, such as generating misinformation.
The Nieman Lab article highlights the growing role of journalists in training AI models for companies like Meta and OpenAI. These journalists, often working as contractors, are tasked with fact-checking, identifying biases, and improving the quality and accuracy of the information generated by these powerful language models. Their work includes crafting prompts, evaluating responses, and essentially teaching the AI to produce more reliable and nuanced content. This emerging field presents a complex ethical landscape for journalists, forcing them to navigate potential conflicts of interest and consider the implications of their work on the future of journalism itself.
Hacker News users discussed the implications of journalists training AI models for large companies. Some commenters expressed concern that this practice could lead to job displacement for journalists and a decline in the quality of news content. Others saw it as an inevitable evolution of the industry, suggesting that journalists could adapt by focusing on investigative journalism and other areas less susceptible to automation. Skepticism about the accuracy and reliability of AI-generated content was also a recurring theme, with some arguing that human oversight would always be necessary to maintain journalistic standards. A few users pointed out the potential conflict of interest for journalists working for companies that also develop AI models. Overall, the discussion reflected a cautious approach to the integration of AI in journalism, with concerns about the potential downsides balanced by an acknowledgement of the technology's transformative potential.
This GitHub repository offers a comprehensive exploration of Llama 2, aiming to demystify its inner workings. It covers the architecture, training process, and implementation details of the model. The project provides resources for understanding Llama 2's components, including its attention mechanism and rotary positional embeddings (RoPE), sketched below. It also delves into the training data and methodology used to develop the model, along with practical guidance on implementing and running Llama 2 from scratch. The goal is to equip users with the knowledge and tools necessary to effectively utilize and potentially extend the capabilities of Llama 2.
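To make the rotary-embedding idea concrete, here is a minimal sketch of RoPE in the half-split ("rotate-half") convention used by common Llama implementations. The function name and shapes are illustrative assumptions, not code taken from the repository.

```python
# A toy, self-contained illustration of rotary positional embeddings
# (RoPE); names and shapes are illustrative, not from the repository.
import numpy as np

def apply_rope(x: np.ndarray, base: float = 10000.0) -> np.ndarray:
    """Rotate x of shape (seq_len, head_dim) by position-dependent angles."""
    seq_len, head_dim = x.shape
    half = head_dim // 2
    # Per-pair frequencies: theta_i = base^(-2i / head_dim)
    freqs = base ** (-2.0 * np.arange(half) / head_dim)
    # One rotation angle per (position, frequency) pair
    angles = np.outer(np.arange(seq_len), freqs)   # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]              # "rotate-half" split
    # Standard 2-D rotation applied to each (x1_i, x2_i) pair
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=-1)

# Applying the same rotation to queries and keys makes their dot product
# depend only on relative position, which is RoPE's key property.
q = np.random.randn(8, 64)   # 8 positions, head dimension 64
print(apply_rope(q).shape)   # (8, 64)
```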
Hacker News users discussed the practicality and accessibility of training large language models (LLMs) like Llama 2. Some expressed skepticism about the feasibility of truly training such a model "from scratch" given the immense computational resources required, questioning whether the author was simply fine-tuning an existing model. Others highlighted the value of the resource for educational purposes, even if full-scale training isn't achievable for most individuals. There was also discussion about the potential for optimized training methods and the possibility of leveraging smaller, more manageable datasets for specific tasks. The ethical implications of training and deploying powerful LLMs were also touched upon. Several commenters pointed out inconsistencies or potential errors in the provided code examples and training-process description.
Meta is arguing that its acquisition of pirated books wasn't illegal because, it claims, there's no evidence it was "seeding" (actively uploading and distributing) the copyrighted material over BitTorrent. The company contends it was merely "leeching" (downloading), which it argues doesn't constitute unlawful distribution. This defense comes as authors and publishers sue Meta for downloading vast quantities of pirated books to train its AI models, claiming significant financial harm. Meta asserts that the plaintiffs haven't demonstrated that the company contributed to distributing the infringing content beyond simply obtaining it.
Hacker News users discuss Meta's defense against accusations of book piracy, with many expressing skepticism towards Meta's "we're just a leech" argument. Several commenters point out the flaw in this logic, arguing that downloading constitutes an implicit form of seeding, as portions of the file are often shared with other peers during the download process. Others highlight the potential hypocrisy of Meta's position, given their aggressive stance against copyright infringement on their own platforms. Some users also question the article's interpretation of the legal arguments, and suggest that Meta's stance may be more nuanced than portrayed. A few commenters draw parallels to previous piracy cases involving other companies. Overall, the consensus leans towards disbelief in Meta's defense and anticipates further legal challenges.
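The commenters' "implicit seeding" point rests on how BitTorrent exchanges pieces: a downloader typically serves the pieces it already holds to other peers. Below is a toy sketch of that dynamic; it is not a real BitTorrent client, and all names are invented for illustration.

```python
# Toy model of BitTorrent-style piece exchange (not a real client): a
# peer that is still downloading ("leeching") also serves the pieces it
# already holds, which is the behavior commenters call implicit seeding.
import random

NUM_PIECES = 8

class Peer:
    def __init__(self, name, pieces=None):
        self.name = name
        self.pieces = set(pieces or [])

    def request_from(self, other):
        """Download one piece this peer is missing from `other`."""
        missing = other.pieces - self.pieces
        if missing:
            piece = random.choice(sorted(missing))
            self.pieces.add(piece)
            print(f"{self.name} got piece {piece} from {other.name}")

seeder = Peer("seeder", range(NUM_PIECES))
leech_a = Peer("leech_a")
leech_b = Peer("leech_b")

for _ in range(NUM_PIECES):
    leech_a.request_from(seeder)
    # leech_b downloads from leech_a: even mid-download, leech_a is
    # uploading (distributing) the pieces it has already received.
    leech_b.request_from(leech_a)
```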
Meta's Project Aria research kit consists of smart glasses and a wristband designed to gather first-person data like video, audio, eye-tracking, and location, which will be used to develop future AR glasses. This data is anonymized and used to train AI models that understand the real world, enabling features like seamless environmental interaction and intuitive interfaces. The research kit is not a consumer product and is only distributed to qualified researchers participating in specific studies. The project emphasizes privacy and responsible data collection, employing blurring and redaction techniques to protect bystanders' identities in the collected data.
Several Hacker News commenters express skepticism about Meta's Project Aria research kit, questioning the value of collecting such extensive data and the potential privacy implications. Some doubt the project's usefulness for AR development, suggesting that realistic scenarios are more valuable than vast amounts of "boring" data. Others raise concerns about data security and the possibility of misuse, drawing parallels to previous controversies surrounding Meta's data practices. A few commenters are more optimistic, seeing potential for advancements in AR and expressing interest in the technical details of the data collection process. Several also discuss the challenges of processing and making sense of such a massive dataset, and the limitations of relying solely on first-person visual data for understanding human behavior.
Meta's AI Demos website showcases a collection of experimental AI projects focused on generative AI for images, audio, and code. These demos allow users to interact with and explore the capabilities of these models, such as creating images from text prompts, generating variations of existing images, editing images using text instructions, translating speech in real-time, and creating music from text descriptions. The site emphasizes the research and development nature of these projects, highlighting their potential while acknowledging their limitations and encouraging user feedback.
Hacker News users discussed Meta's AI demos with a mix of skepticism and cautious optimism. Several commenters questioned the practicality and real-world applicability of the showcased technologies, particularly the image segmentation and editing features, citing potential limitations and the gap between demo and production-ready software. Some expressed concern about the potential misuse of such tools, particularly for creating deepfakes. Others were more impressed, highlighting the rapid advancements in AI and the potential for these technologies to revolutionize creative fields. A few users pointed out the similarities to existing tools and questioned Meta's overall AI strategy, while others focused on the technical aspects and speculated on the underlying models and datasets used. There was also a thread discussing the ethical implications of AI-generated content and the need for responsible development and deployment.
Summary of Comments (174): https://news.ycombinator.com/item?id=43770337
Hacker News commenters generally agree that the fines levied against Apple and Meta (formerly Facebook) are insignificant relative to their revenue, suggesting the penalties are more symbolic than impactful. Some point out the absurdity of the situation, with Apple being fined for giving users more privacy controls, while Meta is fined for essentially ignoring them. The discussion also questions the effectiveness of GDPR and similar regulations, arguing that they haven't significantly changed data collection practices and mostly serve to generate revenue for governments. Several commenters expressed skepticism about the EU's motives, suggesting the fines are driven by a desire to bolster European tech companies rather than genuinely protecting user privacy. A few commenters note the contrast between the EU's approach and that of the US, where similar regulations are seemingly less enforced.
The Hacker News post "Apple and Meta fined millions for breaching EU law" generated a discussion focused primarily on the perceived absurdity of the fines and the EU's regulatory approach.
Several commenters expressed skepticism about the effectiveness and rationale behind the fines. One user questioned the logic of fining companies for allegedly violating user privacy while simultaneously mandating features like App Tracking Transparency (ATT) that purportedly aim to protect it, highlighting the apparent contradiction of being forced to implement a mechanism and then penalized over the very behavior it governs.
Another commenter pointed out the relatively small amount of the fines compared to the companies' vast revenues, suggesting that such penalties are unlikely to deter future behavior. They argued that these fines essentially amount to a "cost of doing business" rather than a genuine deterrent.
The discussion also touched on the complexities of obtaining user consent and the practical challenges of adhering to regulations like GDPR. One commenter sarcastically remarked that users cannot realistically be expected to engage meaningfully with complex consent pop-ups, let alone carefully weigh the implications of every consent request.
One comment questioned the actual impact on user privacy, suggesting that the fines might be more about generating revenue for the EU than genuinely protecting users. They also suggested the possibility of regulatory capture, implying that regulators might be influenced by larger tech companies.
Finally, a comment highlighted the seeming disparity in the application of GDPR regulations, observing that smaller companies face stricter enforcement while larger companies often seem to escape significant consequences. They used the analogy of enforcing traffic laws strictly on bicycles while ignoring violations by large trucks.
In essence, the comments reflect a general sentiment of skepticism and cynicism towards the EU's approach to regulating tech giants, questioning the effectiveness and motivations behind the fines, and highlighting the practical difficulties and perceived inconsistencies in their application.