Belgian artist Dries Depoorter created "The Flemish Scrollers," an art project that uses AI to detect Belgian politicians using their phones during parliamentary livestreams and publicly shame them. The project automatically clips video of each instance and posts it to a Twitter bot account, tagging the politician involved. Depoorter's stated aim is to highlight politicians' potential inattentiveness during official proceedings.
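Depoorter hasn't published the implementation summarized here, but the described pipeline (spot a phone in a frame, match nearby faces against a roster of politicians, post a tagged clip) can be sketched with off-the-shelf tools. Everything below, the model choice, the roster files, the stream URL, is an assumption for illustration:

```python
# Hypothetical sketch of a "Flemish Scrollers"-style pipeline, not
# Depoorter's actual code: detect phones in livestream frames, match
# faces against a roster of politicians, and flag the overlap.
import cv2
import face_recognition
from ultralytics import YOLO

PHONE_CLASS_ID = 67  # "cell phone" in the COCO label set used by YOLO models

model = YOLO("yolov8n.pt")  # small pretrained detector; any COCO model works

# Precomputed face encodings for known politicians (placeholder headshots,
# each assumed to contain exactly one face).
known_names = ["Politician A", "Politician B"]
known_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(p))[0]
    for p in ["politician_a.jpg", "politician_b.jpg"]
]

cap = cv2.VideoCapture("https://example.org/parliament/live.m3u8")  # hypothetical stream
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Step 1: look for a phone anywhere in the frame.
    detections = model(frame, verbose=False)[0]
    if not any(int(box.cls) == PHONE_CLASS_ID for box in detections.boxes):
        continue
    # Step 2: identify faces in the same frame.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    for encoding in face_recognition.face_encodings(rgb):
        matches = face_recognition.compare_faces(known_encodings, encoding)
        for name, matched in zip(known_names, matches):
            if matched:
                print(f"{name} appears to be on their phone")  # clip + tweet here
cap.release()
```

A real version would also need to associate the phone with the person holding it (the sketch only checks that both appear in the same frame), buffer the surrounding video for the clip, and post through the Twitter API.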
Micah Lee's blog post investigates leaked data purportedly from a Ukrainian paramilitary group. He assesses the leak's authenticity, noting corroboration with open-source information and the inclusion of sensitive operational details that make a forgery less likely. Lee focuses on the technical aspects of the leak, examining file metadata and directory structure, which suggest an insider leak rather than an external hack. He concludes that while definitive attribution is difficult, the leak appears genuine and offers a rare glimpse into the group's inner workings, including training materials, equipment lists, and members' personal information.
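As a rough illustration of the kind of metadata pass Lee describes (not his actual tooling, and with a hypothetical path), clustered modification times across a leaked tree can hint at a single bulk export rather than files gathered piecemeal over time:

```python
# Illustrative metadata pass over a leaked directory tree: if most file
# modification times fall on one day, that suggests a bulk copy, which is
# consistent with an insider export. Path and interpretation are assumptions.
from collections import Counter
from datetime import datetime, timezone
from pathlib import Path

leak_root = Path("leak/")  # hypothetical location of the extracted archive

mtimes = [
    datetime.fromtimestamp(p.stat().st_mtime, tz=timezone.utc)
    for p in leak_root.rglob("*")
    if p.is_file()
]

# Bucket modification times by day and show the heaviest days.
by_day = Counter(ts.date() for ts in mtimes)
for day, count in by_day.most_common(5):
    print(day, count)
```

Archive formats don't always preserve original timestamps, so a signal like this is suggestive rather than conclusive.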
Hacker News users discussed the implications of easily accessible paramilitary manuals and the potential for misuse. Some commenters debated the actual usefulness of such manuals, arguing that real-world training and experience are far more valuable than theoretical knowledge gleaned from a PDF. Others expressed concern about the ease with which extremist groups could access these resources and potentially use them for nefarious purposes. The ethical implications of hosting such information were also raised, with some suggesting that platforms have a responsibility to prevent the spread of potentially harmful content, while others argued for the importance of open access to information. A few users highlighted the historical precedent of similar manuals being distributed, pointing out that they've been available for decades, predating the internet.
Body doubling uses the presence of another person, virtual or in-person, to improve focus and productivity on tasks that are hard to start or finish alone. The technique leverages accountability and shared work sessions to combat procrastination and maintain motivation, and it is especially helpful for people with ADHD, autism, or other conditions affecting executive function. BodyDoubling.com offers resources and a platform to connect with others for body doubling sessions, highlighting its effectiveness in overcoming procrastination and fostering a sense of shared purpose while working toward individual goals.
Hacker News users discussed the effectiveness of body doubling, with many sharing personal anecdotes of its benefits for focus and productivity, especially for those with ADHD. Some highlighted the accountability and subtle social pressure as key drivers, while others emphasized the reduction of procrastination and feeling less alone in tackling tasks. A few skeptical commenters questioned the long-term viability and potential for dependency, suggesting it might be a crutch rather than a solution. The discussion also touched upon virtual body doubling tools and the importance of finding a compatible partner, along with the potential for it to evolve into co-working. Some users drew parallels to other productivity techniques like the Pomodoro method, and there was a brief debate about the distinction between body doubling and simply working in the same space.
An Oregon woman discovered her private nude photos had been widely shared in her small town, tracing the source back to the local district attorney, Marco Bocci, and a sheriff's deputy. The photos were taken from her phone while it was in police custody as evidence. Despite the woman's distress and the clear breach of privacy, both Bocci and the deputy are shielded from liability by qualified immunity (QI), preventing her from pursuing legal action against them. The woman, who had reported a stalking incident, now feels further victimized by law enforcement. An independent investigation confirmed the photo sharing but resulted in no disciplinary action.
HN commenters largely discuss qualified immunity (QI), expressing frustration with the legal doctrine that shields government officials from liability. Some argue that QI protects bad actors and prevents accountability for misconduct, particularly in cases like this where the alleged actions seem clearly inappropriate. A few commenters question the factual accuracy of the article or suggest alternative explanations for how the photos were disseminated, but the dominant sentiment is critical of QI and its potential to obstruct justice in this specific instance and more broadly. Several also highlight the power imbalance between citizens and law enforcement, noting the difficulty individuals face when challenging authority.
Simon Willison argues that computers cannot be held accountable because accountability requires subjective experience, including understanding consequences and feeling remorse or guilt. Computers, as deterministic systems following instructions, lack these crucial components of consciousness. While we can and should hold humans accountable for the design, deployment, and outcomes of computer systems, ascribing accountability to the machines themselves is a category error, akin to blaming a hammer for hitting a thumb. This doesn't absolve us from addressing the harms caused by AI and algorithms, but it does require focusing responsibility on the human actors involved.
HN users largely agree with the premise that computers, lacking sentience and agency, cannot be held accountable. The discussion centers around the implications of this, particularly regarding the legal and ethical responsibilities of the humans behind AI systems. Several compelling comments highlight the need for clear lines of accountability for the creators, deployers, and users of AI, emphasizing that focusing on punishing the "computer" is a distraction. One user points out that inanimate objects like cars are already subject to regulations and their human operators held responsible for accidents. Others suggest the concept of "accountability" for AI needs rethinking, perhaps focusing on verifiable safety standards and rigorous testing, rather than retribution. The potential for individuals to hide behind AI as a scapegoat is also raised as a major concern.
Cory Doctorow's "It's Not a Crime If We Do It With an App" argues that enclosing formerly analog activities within proprietary apps often transforms acceptable behaviors into exploitable data points. Companies use the guise of convenience and added features to justify these apps, gathering vast amounts of user data that is then monetized or weaponized through surveillance. This creates a system where everyday actions, previously unregulated, become subject to corporate control and potential abuse, ultimately diminishing user autonomy and creating new vectors for discrimination and exploitation. The post uses the satirical example of a potato-tracking app to illustrate how seemingly innocuous data collection can lead to intrusive monitoring and manipulation.
HN commenters generally agree with Doctorow's premise that large corporations use "regulatory capture" to avoid legal consequences for harmful actions, citing examples like Facebook and Purdue Pharma. Some questioned the framing of the potato tracking scenario as overly simplistic, arguing that real-world supply chains are vastly more complex. A few commenters discussed the practicality of Doctorow's proposed solutions, debating the efficacy of co-ops and decentralized systems in combating corporate power. There was some skepticism about the feasibility of truly anonymized data collection and the potential for abuse even in decentralized systems. Several pointed out the inherent tension between the convenience offered by these technologies and the potential for exploitation.
Summary of Comments (105)
https://news.ycombinator.com/item?id=43278473
HN commenters largely criticized the project for being creepy and invasive, raising privacy concerns about publicly shaming politicians for normal behavior. Some questioned the legality and ethics of facial recognition used in this manner, particularly without consent. Several pointed out the potential for misuse and the chilling effect on free speech. A few commenters found the project amusing or a clever use of technology, but these were in the minority. The practicality and effectiveness of the project were also questioned, with some suggesting politicians could easily circumvent it. There was a brief discussion about the difference between privacy expectations in public vs. private settings, but the overall sentiment was strongly against the project.
The Hacker News comments section for the post "Automatically tagging politician when they use their phone on the livestreams" (regarding the project "The Flemish Scrollers") contains a robust discussion with a variety of perspectives on the project's implications.
Several commenters express concerns about privacy and surveillance. They question the ethics of publicly shaming politicians for using their phones, arguing that phone use doesn't necessarily indicate wrongdoing. Some highlight the potential for misuse of this technology and the slippery slope toward increased surveillance of individuals. The worry that this could normalize such tracking and lead to its application to everyday citizens recurs throughout the thread. Some also point out the potential for false positives and the lack of context surrounding phone usage: a politician might be responding to an urgent matter or using their phone for work-related tasks, and the automatic tagging system doesn't differentiate between these scenarios.
Others see the project as a valuable tool for transparency and accountability. They argue that it holds politicians accountable for their attention during public sessions and allows the public to see how engaged their representatives are. Some suggest that it could discourage distractions and encourage politicians to be more present during important discussions. The sentiment that the public has a right to know what their elected officials are doing is prevalent in these comments.
A few commenters discuss the technical aspects of the project, including the use of facial recognition and AI. They delve into the accuracy of the system and the potential for biases in the algorithms. Some express interest in the technical implementation details and the challenges involved in identifying individuals and tracking their phone usage in real-time.
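To make the accuracy worry concrete, here is the kind of toy precision check a skeptical commenter might sketch, with hand-labeled frames standing in for real data (all values illustrative):

```python
# Toy precision estimate for an automatic phone-use tagger.
# Each tuple: (frame id, tagger's prediction, hand-labeled ground truth).
labeled = [
    ("frame_001", True,  True),
    ("frame_002", True,  False),  # e.g. politician holding notes, not a phone
    ("frame_003", False, False),
    ("frame_004", True,  True),
]
true_positives = sum(1 for _, pred, actual in labeled if pred and actual)
predicted_positives = sum(1 for _, pred, _ in labeled if pred)
print(f"precision: {true_positives / predicted_positives:.2f}")  # 0.67 here
```

Even a high-precision tagger would leave the context problem raised above unsolved: a correct "phone in hand" detection says nothing about why the phone is in hand.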
There's also a discussion about the broader implications of this technology beyond just politicians. Some commenters speculate about its potential use in other contexts, such as monitoring student attention in classrooms or employee engagement in meetings. The ethical implications of such applications are debated, with some arguing that it could be a useful tool while others express concern about the potential for abuse.
Finally, a handful of comments offer alternative perspectives or humorous takes on the situation. Some suggest that the project is more of an art piece or social commentary than a practical tool. Others joke about the potential reactions of politicians to being caught using their phones.
Overall, the comments section reveals a complex and nuanced discussion about the project's ethical, technical, and societal implications. There is a clear divide between those who see it as a positive step towards transparency and accountability and those who view it as a potentially invasive form of surveillance. The discussion highlights the important questions surrounding the use of AI and facial recognition technology in public spaces and the balance between privacy and public access to information.