Story Details

  • As an experienced LLM user, I don't use generative LLMs often

    Posted: 2025-05-05 17:22:40

    Despite the hype, even experienced users find limited practical applications for generative LLMs like ChatGPT. While acknowledging their potential, the author primarily leverages them for specific tasks like summarizing long articles, generating regex, translating between programming languages, and quickly scaffolding code. The core issue isn't the technology itself, but rather the lack of reliable integration into existing workflows and the inherent unreliability of generated content, especially for complex or critical tasks. This leads to a preference for traditional, deterministic tools where accuracy and predictability are paramount. The author anticipates future utility will depend heavily on tighter integration with other applications and improvements in reliability and accuracy.
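
    The reliability concern maps onto a concrete workflow: accept LLM suggestions for mechanical tasks like regex generation, but always check them with a deterministic test before trusting them. A minimal sketch of that pattern (the regex and test cases here are hypothetical illustrations, not taken from the article):

```python
import re

# Hypothetical example: a regex an LLM might suggest for matching
# ISO-8601 dates (YYYY-MM-DD). The pattern is an assumption for
# illustration, not something quoted from the article.
llm_suggested_pattern = r"^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$"

def matches_iso_date(s: str) -> bool:
    """Deterministically check a string against the suggested pattern."""
    return re.fullmatch(llm_suggested_pattern, s) is not None

# A small deterministic test suite catches edge cases the model
# may have gotten wrong, restoring the predictability the author wants.
assert matches_iso_date("2025-05-05")
assert not matches_iso_date("2025-13-01")  # month out of range
assert not matches_iso_date("05-05-2025")  # wrong field order
```

    The point is not the regex itself but the division of labor: the generative step is treated as a fallible suggestion, and a traditional, deterministic check remains the source of truth.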

    Summary of Comments (148)
    https://news.ycombinator.com/item?id=43897320

    Hacker News users generally agreed with the author's premise that LLMs are currently more hype than practical for experienced users. Several commenters emphasized that while LLMs excel at specific tasks like generating boilerplate code, writing marketing copy, or brainstorming, they fall short in areas requiring accuracy, nuanced understanding, or complex reasoning. Some suggested that current LLMs are best used as "augmented thinking" tools, enhancing existing workflows rather than replacing them. The lack of source reliability and the tendency for "hallucinations" were cited as major limitations. One compelling comment highlighted the difference between experienced users, who approach LLMs with specific goals and quickly recognize their shortcomings, versus less experienced users who might be more easily impressed by the surface-level capabilities. Another pointed out the "Trough of Disillusionment" phase of the hype cycle, suggesting that the current limitations are to be expected and will likely improve over time. A few users expressed hope for more specialized, domain-specific LLMs in the future, which could address some of the current limitations.