A persistent, though likely apocryphal, story claims an ancient law mandates a bale of hay (sometimes straw) be hung from Charing Cross railway bridge. This supposed law is often linked to a public execution or a builder's compensation for lost river access due to the bridge's construction. However, no evidence supports the existence of such a law, and its origins likely lie in humorous speculation and urban legend. The story's longevity is attributed to its amusing and intriguing nature, even without factual basis.
Swiss-based privacy-focused company Proton, known for its VPN and encrypted email services, is considering leaving Switzerland due to a new surveillance law. The law grants the Swiss government expanded powers to spy on individuals and companies, requiring service providers like Proton to hand over user data in certain circumstances. Proton argues this compromises its core mission of user privacy and confidentiality, potentially making it "less confidential than Google," and is exploring relocation to a jurisdiction with stronger privacy protections.
Hacker News users discuss Proton's potential departure from Switzerland due to new surveillance laws. Several commenters express skepticism of Proton's claims, suggesting the move is motivated more by marketing than genuine concern for user privacy. Some argue that Switzerland is still more privacy-respecting than many other countries, questioning whether a move would genuinely benefit users. Others point out the complexities of running a secure email service, noting the challenges of balancing user privacy with legal obligations and the potential for abuse. A few commenters mention alternative providers and the increasing difficulty of finding truly private communication platforms. The discussion also touches upon the practicalities of relocating a company of Proton's size and the potential impact on its existing infrastructure and workforce.
The blog post "You Wouldn't Download a Hacker News" argues against the trend of building personal websites as complex web applications. The author contends that static sites, while seemingly less technologically advanced, are superior for personal sites due to their simplicity, speed, security, and ease of maintenance. Building a dynamic web application for a personal site introduces unnecessary complexity and vulnerabilities, akin to illegally downloading a car—it's more trouble than it's worth when simpler, legal alternatives exist. The core message is that personal websites should prioritize content and accessibility over flashy features and complicated architecture.
The Hacker News comments discuss the blog post's analogy of downloading a car (representing building software in-house) versus subscribing to a car service (representing using SaaS). Several commenters find the analogy flawed, arguing that software is more akin to designing and building a custom factory (in-house) versus renting a generic factory space (SaaS). This highlights the flexibility and control offered by building your own software, even if it's more complex. Other commenters point out the hidden costs of SaaS, such as vendor lock-in, data security concerns, and the potential for price hikes. The discussion also touches on the importance of considering the specific needs and resources of a company when deciding between building and buying software, acknowledging that SaaS can be a viable option for certain situations. A few commenters suggest the choice also depends on the stage of a company, with early-stage startups often benefiting from the speed and affordability of SaaS.
France's data protection watchdog, CNIL, fined Apple €8 million and Meta (Facebook's parent company) €60 million for violating EU privacy law. The fines stem from how the companies implemented targeted advertising on iOS and Android respectively. CNIL found that users were not given a simple enough mechanism to opt out of personalized ads; while both companies offered some control, users had to navigate multiple settings. Specifically, Apple enabled personalized ads by default, requiring users to actively disable them, while Meta made ad personalization integral to its terms of service, requiring users to take active steps to receive non-personalized ads. The CNIL considered both approaches violations of EU rules requiring clear and straightforward consent for personalized advertising.
Hacker News commenters generally agree that the fines levied against Apple and Meta (formerly Facebook) are insignificant relative to their revenue, suggesting the penalties are more symbolic than impactful. Some point out the absurdity of the situation, with Apple being fined for giving users more privacy controls, while Meta is fined for essentially ignoring them. The discussion also questions the effectiveness of GDPR and similar regulations, arguing that they haven't significantly changed data collection practices and mostly serve to generate revenue for governments. Several commenters expressed skepticism about the EU's motives, suggesting the fines are driven by a desire to bolster European tech companies rather than genuinely protecting user privacy. A few commenters note the contrast between the EU's approach and that of the US, where similar regulations are seemingly less enforced.
Wired's article argues that Meta's dominance in social media, built through acquisitions like Instagram and WhatsApp, allowed it to initially embrace interoperability with other platforms. However, once its monopoly was secured, Meta strategically reversed course, restricting access and data portability to stifle competition and maintain its control over the digital landscape. This behavior, as highlighted in the FTC's antitrust lawsuit, demonstrates Meta's opportunistic approach to collaboration, treating interoperability as a tool to be exploited rather than a principle to uphold. The article emphasizes how Meta's actions ultimately harmed users by limiting choice and innovation.
HN commenters largely agree with the premise of the Wired article, pointing out Meta/Facebook's history of abandoning projects and partners once they've served their purpose. Several commenters cite specific examples like Facebook's treatment of Zynga and the shuttering of Parse. Some discuss the broader implications of platform dependence and the inherent risks for developers building on closed ecosystems controlled by powerful companies like Meta. Others note that this behavior isn't unique to Meta, highlighting similar patterns in other large tech companies, like Google and Apple, where services and APIs are discontinued with little notice, disrupting reliant businesses. A few voices suggest that regulatory intervention is necessary to address this power imbalance and prevent the stifling of innovation. The general sentiment is one of distrust towards Meta and a wariness about relying on their platforms for long-term projects.
Harvey Silverglate's book, "Three Felonies a Day," argues that the average American unknowingly commits three felonies daily due to the vast and often vague nature of federal criminal law. The proliferation of broadly worded statutes, coupled with expansive interpretations by prosecutors, allows for the criminalization of acts that individuals wouldn't perceive as illegal. This creates a system where selective prosecution becomes easy, allowing the government to target almost anyone they choose. Silverglate illustrates this with examples of seemingly innocuous actions that could be construed as felonies, highlighting the potential for abuse and the erosion of due process. The book serves as a cautionary tale about the overreach of federal power and the dangers of an overly complex and opaque legal system.
HN commenters discuss Harvey Silverglate's book and the idea that the average American unknowingly commits three felonies daily due to the overabundance and complexity of laws. Several express concern about the erosion of mens rea (criminal intent) in many laws, leading to situations where individuals can be prosecuted for actions they didn't realize were illegal. Some debate the accuracy of Silverglate's "three felonies" claim, viewing it as hyperbole, while others find it plausible given the vastness of the legal code. A few commenters point out the potential for selective enforcement and abuse of power this legal complexity creates, while others highlight the difficulty of proving intent even in cases where it exists. The discussion also touches on the expansion of regulatory offenses, victimless crimes, and the contrast between the public perception of crime and the realities of the legal system. Some share personal anecdotes of encountering obscure or complex regulations, reinforcing the idea that everyday actions can unintentionally violate laws.
The US National Labor Relations Board (NLRB) has paused two cases against Apple involving alleged retaliation and suppression of union activity. This follows President Biden's appointment of Gwynne Wilcox, a lawyer representing a group accusing Apple of labor violations in one of the cases, to a key NLRB position. To avoid a conflict of interest, the NLRB’s general counsel has withdrawn from the cases until Wilcox is officially confirmed and recuses herself. This delay could impact the timing and outcome of the cases.
HN commenters discuss potential conflicts of interest arising from Gwynne Wilcox's appointment to the NLRB, given her prior involvement in cases against Apple. Some express concern that this appointment could influence future NLRB decisions, potentially favoring unions and hindering Apple's defense against unfair labor practice allegations. Others argue that recusal policies exist to mitigate such conflicts and that Wilcox's expertise is valuable to the board. A few commenters note the broader implications for labor relations and the increasing power of unions, with some suggesting this appointment reflects a pro-union stance by the current administration. The discussion also touches upon the specifics of the Apple cases, including allegations of coercive statements and restrictions on union organizing. Several commenters debate the merits of these allegations and the overall fairness of the NLRB's processes.
California's new "friend compound" laws, effective January 1, 2024, significantly ease restrictions on building multiple housing units on a single-family lot. Senate Bills 9 and 10 streamline the process for splitting lots and building duplexes, triplexes, and fourplexes, while maintaining local control over design standards. These laws aim to increase housing density and affordability by overriding outdated zoning regulations, though their effectiveness remains to be seen given potential loopholes and local implementation challenges. They represent a notable step towards addressing California's housing crisis.
Hacker News users discussed the complexities and potential downsides of California's recently enacted "Friend Compound" ADU law (AB-2221). Several commenters questioned the financial viability, pointing out that the costs associated with building multiple ADUs on a single lot could outweigh the potential rental income, especially with rising interest rates. Others raised concerns about parking, increased density impacting neighborhood character, and the potential for exploitation by developers seeking to maximize profits. The lack of clear guidelines within the law regarding utility connections and other practical considerations was also a recurring theme. Some expressed skepticism about whether the law would meaningfully address the housing crisis, suggesting it might primarily benefit wealthier homeowners. The overall sentiment seemed to be cautious optimism tempered by a healthy dose of pragmatism.
A US appeals court upheld a ruling that AI-generated artwork cannot be copyrighted. The court affirmed that copyright protection requires human authorship, and since AI systems lack the necessary human creativity and intent, their output cannot be registered. This decision reinforces the existing legal framework for copyright and clarifies its application to works generated by artificial intelligence.
HN commenters largely agree with the court's decision that AI-generated art, lacking human authorship, cannot be copyrighted. Several point out that copyright is designed to protect the creative output of people, and that extending it to AI outputs raises complex questions about ownership and incentivization. Some highlight the potential for abuse if corporations could copyright outputs from models they trained on publicly available data. The discussion also touches on the distinction between using AI as a tool, akin to Photoshop, versus fully autonomous creation, with the former potentially warranting copyright protection for the human's creative input. A few express concern about the chilling effect on AI art development, but others argue that open-source models and alternative licensing schemes could mitigate this. A recurring theme is the need for new legal frameworks better suited to AI-generated content.
Peter Roberts, an immigration attorney specializing in working with Y Combinator and startup companies, hosted an "Ask Me Anything" (AMA) on Hacker News. He offered to answer questions related to visas for founders, employees, and investors, particularly focusing on the complexities of navigating U.S. immigration law for early-stage companies. He emphasized his experience with O-1A visas for individuals with extraordinary ability, H-1Bs for specialty occupations, and E-2 treaty investor visas, as well as green cards. Roberts also touched upon the challenges and nuances of immigration law, encouraging participants to ask specific questions to receive the most accurate and helpful advice.
Commenters on the "Ask Me Anything" with immigration attorney Peter Roberts largely focus on practical questions related to visas, green cards, and startup-related immigration issues. Several ask about the specifics of the O-1 visa, its requirements, and success rates. Others inquire about the timelines and challenges associated with obtaining green cards through employment, particularly for those on H-1B visas. Some commenters express frustration with the current immigration system and its complexities, while others seek advice on navigating the process for specific scenarios, such as international founders or employees. There's significant interest in Roberts's experience with YC companies and the common immigration hurdles they face. A few commenters also touch upon the ethical considerations of immigration law and the impact of policy changes.
EFF warns that age verification laws, ostensibly designed to restrict access to adult content, pose a serious threat to online privacy. While initially targeting pornography sites, these laws are expanding to encompass broader online activities, such as accessing skincare products, potentially requiring users to upload government IDs to third-party verification services. This creates a massive database of sensitive personal information vulnerable to breaches, government surveillance, and misuse by private companies, effectively turning age verification into a backdoor for widespread online monitoring. The EFF argues that these laws are overbroad, ineffective at their stated goals, and disproportionately harm marginalized communities.
HN commenters express concerns about the slippery slope of age verification laws, starting with porn and potentially expanding to other online content and even everyday purchases. They argue that these laws normalize widespread surveillance and data collection, creating honeypots for hackers and potentially enabling government abuse. Several highlight the ineffectiveness of age gates, pointing to easy bypass methods and the likelihood of children accessing restricted content through other means. The chilling effect on free speech and the potential for discriminatory enforcement are also raised, with some commenters drawing parallels to authoritarian regimes. Some suggest focusing on better education and parental controls rather than restrictive legislation. The technical feasibility and privacy implications of various verification methods are debated, with skepticism towards relying on government IDs or private companies.
Right to Repair legislation has now been introduced in all 50 US states, marking a significant milestone for the movement. While no state has yet passed a comprehensive law covering all product categories, the widespread introduction of bills signifies growing momentum. These bills aim to compel manufacturers to provide consumers and independent repair shops with the necessary information, tools, and parts to fix their own devices, from electronics and appliances to agricultural equipment. This push for repairability aims to reduce electronic waste, empower consumers, and foster competition in the repair market. Though the fight is far from over, with various industries lobbying against the bills, the nationwide reach of these legislative efforts represents substantial progress.
Hacker News commenters generally expressed support for Right to Repair legislation, viewing it as a win for consumers, small businesses, and the environment. Some highlighted the absurdity of manufacturers restricting access to repair information and parts, forcing consumers into expensive authorized repairs or planned obsolescence. Several pointed out the automotive industry's existing right to repair as a successful precedent. Concerns were raised about the potential for watered-down legislation through lobbying efforts and the need for continued vigilance. A few commenters discussed the potential impact on security and safety if unqualified individuals attempt repairs, but the overall sentiment leaned heavily in favor of the right to repair movement's progress.
Several key EU regulations are slated to impact startups in 2025. The Data Act will govern industrial data sharing, requiring companies to make data available to users and others upon request, potentially affecting data-driven business models. The revised Payment Services Directive (PSD3) aims to enhance payment security and foster open banking, impacting fintechs with stricter requirements. The Cyber Resilience Act mandates enhanced cybersecurity for connected devices, adding compliance burdens on hardware and software developers. Additionally, the EU's AI Act, though expected later, could still influence product development strategies throughout 2025 with its tiered risk-based approach to AI regulation. These regulations necessitate careful preparation and adaptation for startups operating within or targeting the EU market.
Hacker News users discussing the upcoming EU regulations generally express concerns about their complexity and potential negative impact on startups. Several commenters predict these regulations will disproportionately burden smaller companies due to the increased compliance costs, potentially stifling innovation and favoring larger, established players. Some highlight specific regulations, like the Digital Services Act (DSA) and the Digital Markets Act (DMA), and discuss their potential consequences for platform interoperability and competition. The platform liability aspect of the DSA is also a point of contention, with some questioning its practicality and effectiveness. Others note the broad scope of these regulations, extending beyond just tech companies, and affecting sectors like manufacturing and AI. A few express skepticism about the EU's ability to effectively enforce these regulations.
A Brazilian Supreme Court justice ordered internet providers to block access to the video platform Rumble within 72 hours. The platform is accused of failing to remove content promoting the January 8th riots in Brasília and of spreading disinformation about the Brazilian electoral system. Rumble was given a deadline to comply with removal orders, which it missed, leading to the ban. Justice Alexandre de Moraes argued that the platform's actions posed a risk to public order and democratic institutions.
Hacker News users discuss the implications of Brazil's ban on Rumble, questioning the justification and long-term effectiveness. Some argue that the ban is an overreach of power and sets a dangerous precedent for censorship, potentially emboldening other countries to follow suit. Others point out the technical challenges of enforcing such a ban, suggesting that determined users will likely find workarounds through VPNs. The decision's impact on Rumble's user base and revenue is also debated, with some predicting minimal impact while others foresee significant consequences, particularly if other countries adopt similar measures. A few commenters draw parallels to previous bans of platforms like Telegram, noting the limited success and potential for unintended consequences like driving users to less desirable platforms. The overall sentiment expresses concern over censorship and the slippery slope towards further restrictions on online content.
Simon Willison argues that computers cannot be held accountable because accountability requires subjective experience, including understanding consequences and feeling remorse or guilt. Computers, as deterministic systems following instructions, lack these crucial components of consciousness. While we can and should hold humans accountable for the design, deployment, and outcomes of computer systems, ascribing accountability to the machines themselves is a category error, akin to blaming a hammer for hitting a thumb. This doesn't absolve us from addressing the harms caused by AI and algorithms, but requires focusing responsibility on the human actors involved.
HN users largely agree with the premise that computers, lacking sentience and agency, cannot be held accountable. The discussion centers around the implications of this, particularly regarding the legal and ethical responsibilities of the humans behind AI systems. Several compelling comments highlight the need for clear lines of accountability for the creators, deployers, and users of AI, emphasizing that focusing on punishing the "computer" is a distraction. One user points out that inanimate objects like cars are already subject to regulations and their human operators held responsible for accidents. Others suggest the concept of "accountability" for AI needs rethinking, perhaps focusing on verifiable safety standards and rigorous testing, rather than retribution. The potential for individuals to hide behind AI as a scapegoat is also raised as a major concern.
Cory Doctorow's "It's Not a Crime If We Do It With an App" argues that enclosing formerly analog activities within proprietary apps often transforms acceptable behaviors into exploitable data points. Companies use the guise of convenience and added features to justify these apps, gathering vast amounts of user data that is then monetized or weaponized through surveillance. This creates a system where everyday actions, previously unregulated, become subject to corporate control and potential abuse, ultimately diminishing user autonomy and creating new vectors for discrimination and exploitation. The post uses the satirical example of a potato-tracking app to illustrate how seemingly innocuous data collection can lead to intrusive monitoring and manipulation.
HN commenters generally agree with Doctorow's premise that large corporations use "regulatory capture" to avoid legal consequences for harmful actions, citing examples like Facebook and Purdue Pharma. Some questioned the framing of the potato tracking scenario as overly simplistic, arguing that real-world supply chains are vastly more complex. A few commenters discussed the practicality of Doctorow's proposed solutions, debating the efficacy of co-ops and decentralized systems in combating corporate power. There was some skepticism about the feasibility of truly anonymized data collection and the potential for abuse even in decentralized systems. Several pointed out the inherent tension between the convenience offered by these technologies and the potential for exploitation.
The Lawfare article argues that AI, specifically large language models (LLMs), are poised to significantly impact the creation of complex legal texts. While not yet capable of fully autonomous lawmaking, LLMs can already assist with drafting, analyzing, and interpreting legal language, potentially increasing efficiency and reducing errors. The article explores the potential benefits and risks of this development, acknowledging the potential for bias amplification and the need for careful oversight and human-in-the-loop systems. Ultimately, the authors predict that AI's role in lawmaking will grow substantially, transforming the legal profession and requiring careful consideration of ethical and practical implications.
HN users discuss the practicality and implications of AI writing complex laws. Some express skepticism about AI's ability to handle the nuances of legal language and the ethical considerations involved, suggesting that human oversight will always be necessary. Others see potential benefits in AI assisting with drafting legislation, automating tedious tasks, and potentially improving clarity and consistency. Several comments highlight the risks of bias being encoded in AI-generated laws and the potential for misuse by powerful actors to further their own agendas. The discussion also touches on the challenges of interpreting and enforcing AI-written laws, and the potential impact on the legal profession itself.
Peter Roberts, an immigration attorney working with Y Combinator and startups, hosted an AMA on Hacker News. He primarily addressed questions about visas for startup founders, including the O-1A visa for individuals with extraordinary ability, the E-2 treaty investor visa, and the H-1B visa for specialty occupations. He discussed the requirements and challenges associated with each visa, emphasizing the importance of a strong application with ample evidence of achievement. Roberts also touched on topics such as incorporating in the US, the process of obtaining a green card, and the difficulties international founders face when raising capital. He highlighted the complexities of US immigration law and offered general advice while encouraging individuals to seek personalized legal counsel.
Commenters on the "Ask Me Anything" with immigration attorney Peter Roberts largely focused on practical questions related to visas for startup founders and employees. Several inquiries revolved around the complexities of the O-1 visa, particularly regarding demonstrating extraordinary ability and the impact of prior visa denials. Others asked about alternatives like the E-2 treaty investor visa and the H-1B visa, including strategies for navigating the lottery system. A few commenters also discussed the broader challenges of US immigration policy and its impact on the tech industry, specifically the difficulty of attracting and retaining global talent. Some expressed frustration with the current system while others shared personal anecdotes about their immigration experiences.
The Supreme Court upheld the federal law that effectively bans TikTok in the United States, citing national security concerns. However, former President Trump, who initially pushed for the ban, has suggested he might offer TikTok a reprieve if certain conditions are met. This potential lifeline could involve an American company taking over TikTok's U.S. operations. The situation remains uncertain, with TikTok's future in the U.S. hanging in the balance.
Hacker News commenters discuss the potential political motivations and ramifications of the Supreme Court upholding a TikTok ban, with some skeptical of Trump's supposed "lifeline" offer. Several express concern over the precedent set by banning a popular app based on national security concerns without clear evidence of wrongdoing, fearing it could pave the way for future restrictions on other platforms. Others highlight the complexities of separating TikTok from its Chinese parent company, ByteDance, and the technical challenges of enforcing a ban. Some commenters question the effectiveness of the ban in achieving its stated goals and debate whether alternative social media platforms pose similar data privacy risks. A few point out the irony of Trump's potential involvement in a deal to keep TikTok operational, given his previous stance on the app. The overall sentiment reflects a mixture of apprehension about the implications for free speech and national security, and cynicism about the political maneuvering surrounding the ban.
Summary of Comments (125)
https://news.ycombinator.com/item?id=44060156
HN commenters discuss the curious law requiring a bale of hay to hang from Charing Cross bridge. Several express skepticism about the veracity of the "ancient law," with one pointing out the bridge's relatively young age (1864) and suggesting the story is likely apocryphal, perhaps a humorous anecdote started by a construction worker. Others question the practicality and safety of such a law, wondering about the frequency of replacement and the potential fire hazard. The overall sentiment leans towards amusement and disbelief, with some appreciating the quirky nature of the story even if untrue. Some commenters also explore the possibility of it being a "jest" inserted into a contract or planning document, rather than an actual enforceable law.
The Hacker News post titled "Ancient law requires a bale of straw to hang from Charing Cross rail bridge" has generated a moderate number of comments, most of which express skepticism about the veracity of the "ancient law" claim made in the linked article. Several commenters delve into the history of the bridge and surrounding area, offering alternative explanations for the presence of hay bales.
One of the most compelling comments points out the lack of any corroborating evidence for such a law, suggesting the story is likely apocryphal or a misinterpretation of historical practices. This commenter highlights the absence of any mention of this law in historical records or legal texts, arguing that if such a peculiar law existed, it would likely be well-documented. They propose that the story might have originated from a local tradition or a humorous anecdote that has been taken too literally.
Another commenter speculates that the hay bales might have been used for practical purposes, such as erosion control or as a temporary barrier during construction or maintenance work on the bridge. This practical explanation contrasts sharply with the more whimsical notion of an ancient legal requirement.
Several commenters also discuss the challenges of verifying historical information and the prevalence of misinformation online. They emphasize the importance of critical thinking and seeking reliable sources before accepting claims as factual. One commenter even jokingly suggests the "law" might be a clever marketing ploy by local hay farmers.
While some commenters entertain the possibility of the law being real, most express doubt and call for more substantial evidence. The overall tone of the comments is one of healthy skepticism, with users engaging in reasoned discussion and offering alternative explanations for the presence of hay bales on the bridge. No one presents definitive proof disproving the original article's claim, but the lack of supporting evidence and the plausibility of alternative explanations lead most commenters to believe the story is likely untrue.