The post "Limits of Smart: Molecules and Chaos" argues that relying solely on "smart" systems, particularly AI, for complex problem-solving has inherent limitations. It uses protein folding as an example of how brute-force computational approaches, even with advanced algorithms, struggle with the sheer combinatorial explosion of possibilities in systems governed by physical laws. While AI excels at specific tasks within defined boundaries, it falters when faced with the chaotic, unpredictable nature of reality at the molecular level. The post suggests that a more effective approach involves embracing the inherent randomness and exploring "dumb" methods, like directed evolution in biology, which leverage natural processes to navigate complex landscapes and discover solutions that purely computational methods might miss.
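The combinatorial explosion the post leans on can be made concrete with a back-of-the-envelope, Levinthal-style estimate. The specific numbers below (100 residues, 3 conformations each) are illustrative assumptions, not figures from the post:

```python
# Rough illustration of combinatorial explosion in protein folding
# (Levinthal-style estimate; all numbers are illustrative assumptions).

residues = 100          # a small protein
states_per_residue = 3  # assumed backbone conformations per residue

conformations = states_per_residue ** residues
print(f"~{conformations:.2e} conformations")  # ~5.15e+47

# Even sampling a trillion conformations per second, exhaustive search
# would take vastly longer than the age of the universe (~4.35e17 s).
seconds_needed = conformations / 1e12
age_of_universe_s = 4.35e17
print(f"~{seconds_needed / age_of_universe_s:.2e} universe-ages of search")
```

Real proteins fold in milliseconds to seconds, which is exactly why the post treats exhaustive "smart" search as a dead end and points toward approaches that exploit the physics instead of enumerating it.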
Summary of Comments (2)
https://news.ycombinator.com/item?id=43495476
HN commenters largely agree with the premise of the article, pointing out that intelligence and planning often fail in complex, chaotic systems like biology and markets. Some argue that "smart" interventions can exacerbate problems by creating unintended consequences and disrupting natural feedback loops. Several commenters suggest that focusing on robustness and resilience, rather than optimization for a specific outcome, is a more effective approach in such systems. Others discuss the importance of understanding limitations and accepting that some degree of chaos is inevitable. The idea of "tinkering" and iterative experimentation, rather than grand plans, is also presented as a more realistic and adaptable strategy. A few comments offer specific examples of where "smart" interventions have failed, like the use of pesticides leading to resistant insects or financial engineering contributing to market instability.
The Hacker News post "Limits of Smart: Molecules and Chaos" discussing the Dynomight Substack article of the same name sparked a moderately active discussion with 17 comments. Several commenters engaged with the core ideas presented in the article, focusing on the inherent unpredictability of complex systems and the limitations of reductionist approaches.
One compelling thread explored the implications for large language models (LLMs). A commenter argued that LLMs, while impressive, are ultimately statistical machines limited by their training data and incapable of true understanding or generalization beyond that data. This limitation, they argued, ties back to the article's point about the inherent chaos and complexity of the world. Another commenter built upon this idea, suggesting that LLMs may be effective within specific niches but struggle with broader, more nuanced contexts where unforeseen variables and emergent behaviors can dominate.
Another commenter focused on the practical implications of the article's thesis for fields like medicine and engineering. They highlighted the challenges of predicting outcomes in complex biological systems and the limitations of current modeling techniques. They posited that a more holistic, systems-based approach might be necessary to overcome these challenges.
Several commenters offered personal anecdotes or examples to illustrate the article's points. One shared an experience from the semiconductor industry, highlighting the unexpected and often counterintuitive behavior of materials at the nanoscale. Another discussed the limitations of weather forecasting, drawing a parallel to the article's discussion of chaos and unpredictability.
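The weather-forecasting parallel rests on sensitive dependence on initial conditions, which is easy to demonstrate with the logistic map, a standard toy chaotic system. The parameter values below are the usual textbook choices, not anything from the thread:

```python
# Sensitive dependence on initial conditions in the logistic map
# x_{n+1} = r * x_n * (1 - x_n), which is chaotic at r = 4.0.

def logistic_orbit(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0, returning the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-10)  # perturb the tenth decimal place

# The two trajectories begin indistinguishably close, but the tiny
# perturbation roughly doubles each step, so they decorrelate
# completely within a few dozen iterations.
for n in (0, 10, 30, 50):
    print(f"step {n:2d}: |a - b| = {abs(a[n] - b[n]):.3e}")
```

This is the same mechanism that caps useful weather forecasts at about two weeks: measurement error, however small, is amplified exponentially until the prediction carries no information about the actual state.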
Some commenters offered critiques or alternative perspectives. One commenter questioned the article's framing of "smart" and suggested that the real issue lies in our limited understanding of complex systems rather than any inherent limitation of intelligence. Another commenter pushed back against the idea that reductionism is inherently flawed, arguing that it remains a valuable tool for scientific inquiry, even in the face of complex phenomena.
A few comments offered tangential observations or links to related resources. One commenter shared a link to a paper discussing the concept of "emergence" in complex systems. Another commented on the writing style of the original article, praising its clarity and accessibility.
Overall, the comments on Hacker News reflect a thoughtful engagement with the ideas presented in the "Limits of Smart" article. The discussion covered a range of topics, from the implications for artificial intelligence to the challenges of predicting outcomes in complex systems. While there wasn't a single dominant narrative, the comments collectively explored the inherent limitations of reductionist approaches and the need for a more nuanced understanding of complex phenomena.