The author, initially enthusiastic about AI's potential to revolutionize scientific discovery, came to realize that current AI/ML tools are primarily useful for accelerating specific, well-defined tasks within existing scientific workflows, rather than driving paradigm shifts or independently generating novel hypotheses. While AI excels at tasks like optimizing experiments or analyzing large datasets, its dependence on existing data and human-defined parameters limits its capacity for genuine scientific creativity. The author concludes that augmenting scientists with these powerful tools, rather than replacing them, is the more realistic and beneficial approach, since real scientific breakthroughs still rely heavily on human intuition and expertise.
Summary of Comments (200)
https://news.ycombinator.com/item?id=44037941
Several commenters on Hacker News agreed with the author's sentiment about the hype surrounding AI in science, pointing out that the "low-hanging fruit" has already been plucked and that significant advancements are becoming increasingly difficult. Some highlighted the importance of domain expertise and the limitations of relying solely on AI, emphasizing that AI should be a tool used by experts rather than a replacement for them. Others discussed the issue of reproducibility and the "black box" nature of some AI models, making scientific validation challenging. A few commenters offered alternative perspectives, suggesting that AI still holds potential but requires more realistic expectations and a focus on specific, well-defined problems. The misleading nature of visualizations generated by AI was also a point of concern, with commenters noting the potential for misinterpretations and the need for careful validation.
The Hacker News post titled "I got fooled by AI-for-science hype–here's what it taught me" generated a moderate discussion with several insightful comments. Many commenters agreed with the author's core premise that AI hype in science, particularly regarding drug discovery and materials science, often oversells the current capabilities.
Several users highlighted the distinction between using AI for discovery versus optimization. One commenter pointed out that AI excels at optimizing existing solutions, making incremental improvements based on vast datasets. However, they argued it's less effective at genuine discovery, where novel concepts and breakthroughs are needed. This was echoed by another who mentioned that drug discovery often involves an element of "luck" and creative leaps that AI struggles to replicate.
Another recurring theme was the "garbage in, garbage out" problem. Commenters stressed that AI models are only as good as the data they're trained on. In scientific domains, this can be problematic due to limited, biased, or noisy datasets. One user specifically discussed materials science, explaining that the available data is often incomplete or inconsistent, hindering the effectiveness of AI models. Another mentioned that even within drug discovery, datasets are often proprietary and not shared, further limiting the potential of large-scale AI applications.
Some commenters offered a more nuanced perspective, acknowledging the hype while also recognizing the potential of AI. One suggested that AI could be a valuable tool for scientists, particularly for automating tedious tasks and analyzing complex data, but it shouldn't be seen as a replacement for human expertise and intuition. Another commenter argued that AI's role in science is still evolving, and while current applications may be overhyped, future breakthroughs are possible as the technology matures and datasets improve.
A few comments also touched on the economic incentives driving the AI hype. One user suggested that venture capital and media attention create pressure to exaggerate the potential of AI, leading to unrealistic expectations and inflated claims. Another mentioned the "publish or perish" culture in academia, which can incentivize researchers to oversell their results to secure funding and publications.
Overall, the comments section presents a generally skeptical view of the current state of AI-for-science, highlighting the limitations of existing approaches and cautioning against exaggerated claims. However, there's also a recognition that AI holds promise as a scientific tool, provided its limitations are acknowledged and expectations are tempered.