Dbushell's blog post "Et Tu, Grammarly?" criticizes Grammarly's tone detector for flagging neutral phrasing as overly negative or uncertain. He provides examples where simple, straightforward sentences are deemed problematic, arguing that the tool pushes users towards an excessively positive and verbose style that hinders clear communication. This, he suggests, reflects a broader trend of AI writing tools prioritizing a particular, and potentially undesirable, style over clarity and conciseness. He worries this reinforces corporate jargon and ultimately diminishes the quality of writing.
The Hacker News post asks for insider perspectives on Yann LeCun's criticism of current deep learning architectures, particularly his advocacy for moving beyond systems trained solely on pattern recognition. LeCun argues that these systems lack fundamental capabilities like reasoning, planning, and common sense, and believes a paradigm shift is necessary to achieve true artificial intelligence. The post author wonders about the internal discussions and research directions within organizations like Meta/FAIR, influenced by LeCun's views, and whether there's a disconnect between his public statements and the practical work being done.
The Hacker News comments on Yann LeCun's push against current architectures are largely speculative, lacking insider information. Several commenters discuss the potential of LeCun's "autonomous machine intelligence" approach and his criticisms of current deep learning methods, with some agreeing that current architectures struggle with reasoning and common sense. Others express skepticism or downplay the significance of LeCun's position, pointing to the success of current models in specific domains. There's a recurring theme of questioning whether LeCun's proposed solutions are substantially different from existing research or if they are simply rebranded. A few commenters offer alternative perspectives, such as the importance of embodied cognition and the potential of hierarchical temporal memory. Overall, the discussion reflects the ongoing debate within the AI community about the future direction of the field, with LeCun's views being a significant, but not universally accepted, contribution.
The author argues that Go's context.Context is overused and often misused as a dumping ground for arbitrary values, leading to unclear dependencies and difficult-to-test code. Instead of propagating values through Context, they propose using explicit function parameters, promoting clearer code, better separation of concerns, and easier testability. They contend that using Context primarily for cancellation and timeouts, its intended purpose, would streamline code and improve its maintainability.
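To make the contrast concrete, here is a minimal sketch in the spirit of what the author advocates; the ProcessOrder function, OrderStore type, and IDs are hypothetical illustrations, not code from the post. Dependencies and values travel as explicit parameters, and the Context is used only for cancellation and deadlines.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// OrderStore is a hypothetical dependency passed explicitly rather than
// smuggled through a Context value.
type OrderStore struct{}

// Save observes the Context only for cancellation or deadline expiry.
func (s *OrderStore) Save(ctx context.Context, orderID string) error {
	select {
	case <-ctx.Done():
		return ctx.Err()
	case <-time.After(10 * time.Millisecond): // simulate I/O
		return nil
	}
}

// ProcessOrder takes its inputs and dependencies as explicit parameters,
// so they are visible in the signature and trivial to fake in tests.
func ProcessOrder(ctx context.Context, store *OrderStore, orderID, userID string) error {
	if err := store.Save(ctx, orderID); err != nil {
		return fmt.Errorf("saving order %s for user %s: %w", orderID, userID, err)
	}
	return nil
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	if err := ProcessOrder(ctx, &OrderStore{}, "order-123", "user-456"); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("order processed")
}
```

Compared with pulling the store or user ID out of ctx.Value, nothing here is hidden: a test can call ProcessOrder directly with a fake store and a plain context.Background().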
HN commenters largely agree with the author's premise that context.Context in Go is overused and often misused for dependency injection or as a dumping ground for miscellaneous values. Several suggest that structured concurrency, improved error handling, and better language features for cancellation and deadlines could alleviate the need for context in many cases. Some argue that context is still useful for request-scoped values, especially in server contexts, and shouldn't be entirely removed. A few commenters express concern about the practicality of removing context given its widespread adoption and integration into the standard library. There is a strong desire for better alternatives, rather than simply discarding the existing mechanism without a replacement. Several commenters also mention the similarities between context overuse in Go and similar issues with dependency injection frameworks in other languages.
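The request-scoped case those commenters defend usually looks like the conventional middleware pattern below; this is an illustrative sketch of that pattern, not code from the thread, and the withRequestID middleware and header name are assumptions.

```go
package main

import (
	"context"
	"fmt"
	"net/http"
)

// An unexported key type avoids collisions with values set by other packages.
type ctxKey int

const requestIDKey ctxKey = 0

// withRequestID attaches a request ID to the request's Context so logging
// and tracing further down the handler chain can retrieve it.
func withRequestID(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		id := r.Header.Get("X-Request-ID")
		if id == "" {
			id = "generated-id" // in practice, generate a unique ID here
		}
		ctx := context.WithValue(r.Context(), requestIDKey, id)
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}

func handler(w http.ResponseWriter, r *http.Request) {
	id, _ := r.Context().Value(requestIDKey).(string)
	fmt.Fprintf(w, "handled request %s\n", id)
}

func main() {
	http.Handle("/", withRequestID(http.HandlerFunc(handler)))
	_ = http.ListenAndServe(":8080", nil) // error ignored for brevity in this sketch
}
```

The distinction the thread draws is that a request ID is genuinely request-scoped metadata, whereas a database handle or business input belongs in a function signature.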
The author details a frustrating experience with GitHub Actions where a seemingly simple workflow to build and deploy a static website became incredibly complex and time-consuming due to caching issues. Despite attempting various caching strategies and workarounds, builds remained slow and unpredictable, ultimately leading to increased costs and wasted developer time. The author concludes that while GitHub Actions might be suitable for straightforward tasks, its caching mechanism's unreliability makes it a poor choice for more complex projects, especially those involving static site generation. They ultimately opted to migrate to a self-hosted solution for improved control and predictability.
Hacker News users generally agreed with the author's sentiment about GitHub Actions' complexity and unreliability. Many shared similar experiences with flaky builds, obscure error messages, and difficulty debugging. Several commenters suggested exploring alternatives like GitLab CI, Drone CI, or self-hosted runners for more control and predictability. Some pointed out the benefits of GitHub Actions, such as its tight integration with GitHub and the availability of pre-built actions, but acknowledged the frustrations raised in the article. The discussion also touched upon the trade-offs between convenience and control when choosing a CI/CD solution, with some arguing that the ease of use initially offered by GitHub Actions can be overshadowed by the difficulties encountered as projects grow more complex. A few users offered specific troubleshooting tips or workarounds for common issues, highlighting the community-driven nature of problem-solving around GitHub Actions.
The author argues that Knuth's vision of literate programming, where code is written for humans within a narrative explaining its logic, hasn't achieved mainstream adoption because it fundamentally misunderstands the nature of programming. Rather than a linear, top-down process suitable for narrative explanation, programming is inherently exploratory and iterative, involving frequent refactoring and restructuring. Literate programming tools force a rigid structure onto this fluid process, making it cumbersome and ultimately counterproductive. The author proposes "exploratory programming" as a more realistic approach, emphasizing tools that facilitate quick exploration, refactoring, and visualization of code relationships, allowing understanding to emerge organically from the code itself.
Hacker News users discuss the merits and flaws of Knuth's literate programming style. Some argue that his approach, while elegant, prioritizes code as literature over practicality, making it difficult to navigate and modify, particularly in larger projects. Others counter that the core concept of intertwining code and explanation remains valuable, but modern tooling like Jupyter notebooks and embedded documentation offer better solutions. The thread also explores alternative approaches like docstrings and the use of comments to generate documentation, emphasizing the importance of clear and concise explanations within the codebase itself. Several commenters highlight the benefits of separating documentation from code for maintainability and flexibility, suggesting that the ideal approach depends on the project's scale and complexity. The original post is criticized for misrepresenting Knuth's views and focusing too heavily on superficial aspects like tool choice rather than the underlying philosophy.
Summary of Comments (47)
https://news.ycombinator.com/item?id=43514308
HN commenters largely agree with the author's criticism of Grammarly's aggressive upselling and intrusive UI. Several users share similar experiences of frustration with the constant prompts to upgrade, even after dismissing them. Some suggest alternative grammar checkers like LanguageTool and ProWritingAid, praising their less intrusive nature and comparable functionality. A few commenters point out that Grammarly's business model necessitates these tactics, while others discuss the potential negative impact on user experience and writing flow. One commenter mentions the irony of Grammarly's own grammatical errors in their marketing materials, further fueling the sentiment against the company's practices. The overall consensus is that Grammarly's usefulness is overshadowed by its annoying and disruptive upselling strategy.
The Hacker News post "Et Tu, Grammarly?" discussing Dbushell's blog post about Grammarly's apparent shift towards AI-driven features and potential decline in core grammar checking functionality, sparked a lively discussion with several compelling comments.
Several users shared anecdotal experiences mirroring the author's sentiment. One user lamented the perceived decline in Grammarly's ability to catch basic grammatical errors, contrasting it with the tool's past performance. They specifically noted that the tool now misses simple mistakes, suggesting its focus has shifted away from fundamental grammar rules. Another commenter echoed this, expressing frustration with Grammarly's increasing tendency to offer stylistic suggestions instead of addressing core grammatical issues. This user found the stylistic suggestions disruptive and ultimately deactivated the tool due to its perceived ineffectiveness in its primary function.
The conversation also touched upon the broader implications of AI integration in writing tools. One commenter cautioned against relying solely on AI for writing and editing, emphasizing the importance of human oversight and the development of strong writing skills. They argued that tools like Grammarly should be used as aids, not replacements for critical thinking and careful editing. Another user suggested that the perceived decline in Grammarly's core functionality might be a deliberate strategy to push users towards the AI-powered features and premium subscriptions, speculating that the free version might be intentionally "dumbed down."
Some users offered alternative solutions and perspectives. One commenter recommended LanguageTool as a potential replacement for Grammarly, praising its open-source nature and perceived superiority in catching grammatical errors. Another user pointed out that while Grammarly might not be perfect, it still offers valuable assistance, particularly for non-native English speakers. This commenter highlighted the importance of acknowledging the tool's limitations and using it judiciously.
Finally, one commenter offered a more technical perspective, suggesting that the shift towards AI might be due to the inherent difficulty in maintaining and improving rule-based grammar checking systems. They speculated that machine learning models, despite their current limitations, might offer a more scalable and adaptable approach to grammar checking in the long run.
In summary, the comments on Hacker News reflect a mixed sentiment towards Grammarly's recent changes. While some users appreciate the new AI features, many express concern over the perceived decline in basic grammar checking capabilities, sparking a broader discussion about the role of AI in writing and the future of grammar-checking tools.