Dbushell's blog post "Et Tu, Grammarly?" criticizes Grammarly's tone detector for flagging neutral phrasing as overly negative or uncertain. He provides examples where simple, straightforward sentences are deemed problematic, arguing that the tool pushes users towards an excessively positive and verbose style, ultimately hindering clear communication. This, he suggests, reflects a broader trend of AI writing tools prioritizing a specific, and potentially undesirable, writing style over actual clarity and conciseness. He worries this reinforces corporate jargon and ultimately diminishes the quality of writing.
Em dashes (—) are versatile and primarily used to indicate a break in thought—like this—or to set off parenthetical information. They can also replace colons or commas for added emphasis. En dashes (–) are shorter than em dashes and mainly connect ranges of numbers, dates, or times, like 9–5 or January–June. Hyphens (-) are the shortest and connect compound words (e.g., long-term) or parts of words broken at the end of a line. Use two hyphens together (--) if you don't have access to an em dash or en dash.
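For developers handling these characters programmatically, the three dashes have distinct Unicode code points. The sketch below is a simple illustration (not tied to any particular editor or tool) of those code points and the common double-hyphen-to-em-dash substitution mentioned above:

```javascript
// Unicode code points for the three dash characters.
const EM_DASH = "\u2014"; // —
const EN_DASH = "\u2013"; // –
const HYPHEN = "\u002D";  // -

// A simple text-replacement helper: converts the double-hyphen
// fallback ("--") into a true em dash, a common editor substitution.
function replaceDoubleHyphens(text) {
  return text.replace(/--/g, EM_DASH);
}

console.log(replaceDoubleHyphens("A break in thought--like this--works."));
// A break in thought—like this—works.
```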
HN users generally appreciate Merriam-Webster's explanation of em and en dash usage. Some find the spacing rules around em dashes overly pedantic, especially in informal writing, suggesting that as long as the dash stands out, the spacing is less crucial. A few commenters discuss the challenges of typing these dashes efficiently, with suggested keyboard shortcuts and text replacement tools mentioned for macOS and Linux. One commenter points out the increasing trend of using hyphens in place of both en and em dashes, expressing concern that proper usage might be fading. Another highlights the ambiguity created by different coding styles rendering en/em dashes visually identical, leading to potential misinterpretations for developers.
Affixes.org is a comprehensive resource dedicated to English affixes (prefixes and suffixes). It provides a searchable database of these morphemes, offering definitions, examples of their use within words, and etymological information. The site aims to improve vocabulary and understanding of English word formation by breaking down words into their constituent parts and explaining how affixes modify the meaning of root words. It serves as a valuable tool for anyone interested in expanding their lexical knowledge and gaining a deeper appreciation for the intricacies of the English language.
Hacker News users generally praised the Affixes website for its clean design, intuitive interface, and helpful examples. Several commenters pointed out its usefulness for learning English, particularly for non-native speakers. Some suggested improvements like adding audio pronunciations, more example sentences, and the ability to search by meaning rather than just the affix itself. One commenter appreciated the site's simplicity compared to more complex dictionary sites, while another highlighted the value of understanding affixes for deciphering unfamiliar words. A few users shared related resources, including a Latin and Greek root word website and a book recommendation for vocabulary building. There was some discussion on the etymology of specific affixes and how they've evolved over time.
BritCSS is a humorous CSS framework that replaces American English spellings in CSS properties and values with their British English equivalents. It aims to provide a more "civilised" (British English spelling) styling experience, swapping terms like color for colour and center for centre. While functionally identical to standard CSS, it serves primarily as a lighthearted commentary on the dominance of American English in web development.
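The substitution BritCSS performs can be approximated with a few regex replacements. This is an illustrative sketch only; the summary does not describe BritCSS's actual implementation, and the word list here is an assumption:

```javascript
// Map a handful of British spellings back to standard CSS vocabulary.
// (Hypothetical word list; BritCSS's real mapping may differ.)
const replacements = [
  [/\bcolour\b/g, "color"],
  [/\bcentre\b/g, "center"],
  [/\bgrey\b/g, "gray"],
];

function toStandardCss(css) {
  // Apply each replacement in turn to the stylesheet text.
  return replacements.reduce((out, [re, sub]) => out.replace(re, sub), css);
}

console.log(toStandardCss("p { colour: grey; text-align: centre; }"));
// p { color: gray; text-align: center; }
```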
Hacker News users generally found BritCSS humorous but impractical. Several commenters pointed out the inherent problems with trying to localize CSS, given its global nature and the established convention of using American English. Some suggested it would fragment the community and create unnecessary complexity in workflows. One commenter jokingly suggested expanding the idea to include other localized CSS versions, like Australian English, further highlighting the absurdity of the project. Others questioned the motivation behind targeting American English specifically, suggesting it stemmed from a place of anti-American sentiment. There was also discussion of the technical limitations and challenges of such an undertaking, like handling existing libraries and frameworks. While some appreciated the satire, the consensus was that BritCSS wasn't a serious proposal.
Ohm is a parsing toolkit designed for creating parsers in JavaScript and TypeScript that are both powerful and easy to use. It features a grammar definition syntax closely resembling EBNF, enabling developers to express complex syntax rules clearly and concisely. Ohm's built-in support for semantic actions allows users to directly embed JavaScript or TypeScript code within their grammar rules, simplifying the process of building abstract syntax trees (ASTs) and performing other actions during parsing. The toolkit provides excellent error reporting capabilities, helping developers quickly identify and fix syntax errors. Its flexible architecture makes it suitable for various applications, from validating user input to building full-fledged compilers and interpreters.
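Ohm's real API pairs a grammar definition with a separate dictionary of semantic actions (via `ohm.grammar()` and `createSemantics()`). The dependency-free sketch below illustrates that general idea, pairing rule functions with the actions that build values, but it is hand-written illustration code, not Ohm:

```javascript
// Each "rule" is a function from (input, pos) to {value, pos} or null,
// with its semantic action inlined. Ohm separates grammar and actions;
// this sketch only conveys the concept.

// number = digit+ ; action: convert the matched digits to a number
function number(input, pos) {
  let end = pos;
  while (end < input.length && input[end] >= "0" && input[end] <= "9") end++;
  if (end === pos) return null; // no digits matched
  return { value: parseInt(input.slice(pos, end), 10), pos: end };
}

// expr = number ("+" number)* ; action: sum the terms
function expr(input, pos = 0) {
  let left = number(input, pos);
  if (!left) return null;
  while (left.pos < input.length && input[left.pos] === "+") {
    const right = number(input, left.pos + 1);
    if (!right) return null; // dangling "+": parse error
    left = { value: left.value + right.value, pos: right.pos };
  }
  return left;
}

console.log(expr("1+2+3").value); // 6
```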
HN users generally expressed interest in Ohm, praising its user-friendliness, clear documentation, and the power offered by its grammar-based approach to parsing. Several compared it favorably to traditional parser generators like PEG.js and nearley, highlighting Ohm's superior error messages and easier learning curve. Some users discussed potential applications, including building linters, formatters, and domain-specific languages. A few questioned the performance implications of its JavaScript implementation, while others suggested potential improvements like adding support for left-recursive grammars. The overall sentiment leaned positive, with many eager to try Ohm in their own projects.
Mark Rosenfelder's "The Language Construction Kit" offers a practical guide for creating fictional languages, emphasizing naturalistic results. It covers core aspects of language design, including phonology (sounds), morphology (word formation), syntax (sentence structure), and the lexicon (vocabulary). The book also delves into writing systems, sociolinguistics, and the evolution of languages, providing a comprehensive framework for crafting believable and complex constructed languages. While targeted towards creating languages for fictional worlds, the kit also serves as a valuable introduction to linguistics itself, exploring the underlying principles governing real-world languages.
Hacker News users discuss the Language Construction Kit, praising its accessibility and comprehensiveness for beginners. Several commenters share nostalgic memories of using the kit in their youth, sparking their interest in linguistics and constructed languages. Some highlight specific aspects they found valuable, such as the sections on phonology and morphology. Others debate the kit's age and whether its information is still relevant, with some suggesting updated resources while others argue its core principles remain valid. A few commenters also discuss the broader appeal and challenges of language creation.
The blog post details methods for eliminating left and mutual recursion in context-free grammars, crucial for parser construction. Left recursion, where a non-terminal derives itself as the leftmost symbol, is problematic for top-down parsers. The post demonstrates how to remove direct left recursion using factorization and substitution. It then explains how to handle indirect left recursion by ordering non-terminals and systematically applying the direct recursion removal technique. Finally, it addresses mutual recursion, where two or more non-terminals derive each other, converting it into direct left recursion, which can then be eliminated using the previously described methods. The post uses concrete examples to illustrate these transformations, making it easier to understand the process of converting a grammar into a parser-friendly form.
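The direct rewrite described above replaces A -> A a | b with A -> b A' and A' -> a A' | eps. A minimal sketch of that transformation follows; the grammar representation (an object mapping each non-terminal to a list of productions) is an assumption for illustration, not the post's code:

```javascript
// Remove direct left recursion from non-terminal `nt`.
// Grammar representation: { nonterminal: [ [symbols...], ... ] },
// where [] denotes an epsilon production.
function removeDirectLeftRecursion(grammar, nt) {
  const recursive = [];    // the "alpha" tails of A -> A alpha
  const nonRecursive = []; // the "beta" bodies of A -> beta
  for (const prod of grammar[nt]) {
    if (prod[0] === nt) recursive.push(prod.slice(1));
    else nonRecursive.push(prod);
  }
  if (recursive.length === 0) return grammar; // nothing to do
  const ntPrime = nt + "'";
  return {
    ...grammar,
    [nt]: nonRecursive.map(beta => [...beta, ntPrime]),   // A  -> beta A'
    [ntPrime]: [
      ...recursive.map(alpha => [...alpha, ntPrime]),     // A' -> alpha A'
      [],                                                 // A' -> eps
    ],
  };
}

// Classic example: E -> E + T | T  becomes  E -> T E' ; E' -> + T E' | eps
const g = removeDirectLeftRecursion({ E: [["E", "+", "T"], ["T"]] }, "E");
console.log(JSON.stringify(g));
```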
Hacker News users discussed the potential inefficiency of the presented left-recursion elimination algorithm, particularly its reliance on repeated string concatenation. They suggested alternative approaches using stacks or accumulating results in a list for better performance. Some commenters questioned the necessity of fully eliminating left recursion in all cases, pointing out that modern parsing techniques, like packrat parsing, can handle left-recursive grammars directly. The lack of formal proofs or performance comparisons with established methods was also noted. A few users discussed the benefits and drawbacks of different parsing libraries and techniques, including ANTLR and various parser combinator libraries.
The Stack Exchange post explores why "zero" takes the plural form of a noun. It concludes that "zero" functions similarly to other quantifiers like "two," "few," and "many," which inherently refer to pluralities. While "one" signifies a single item, "zero" indicates the absence of any items, conceptually similar to having multiple absences or a group of nothing. This aligns with how other languages treat zero, and using the singular with zero can create ambiguity, especially in contexts discussing countable nouns where "one" is a possibility. Essentially, "zero" grammatically behaves like a plural quantifier because English reserves the singular for exactly one item.
Hacker News users discuss the seemingly illogical pluralization of "zero." Some argue that "zero" functions as a placeholder for a plural noun, similar to other quantifiers like "many" or "few." Others suggest that its plural form stems from its representation of a set containing no elements, which conceptually could contain multiple (zero) elements. The notion that "zero apples" still denotes a set of apples, albeit an empty one, and is therefore treated as grammatically plural, was also raised. The prevalent feeling is that the pluralization is more a quirk of language evolution than strict logical adherence, echoing the original Stack Exchange post's accepted answer. Some users pointed to different conventions in other languages, highlighting the English language's idiosyncrasies. A few comments humorously question the entire premise, wondering why such a seemingly trivial matter warrants discussion.
This blog post explores a simplified variant of Generalized LR (GLR) parsing called "right-nulled" GLR. Instead of maintaining a graph-structured stack during parsing ambiguities, this technique uses a single stack and resolves conflicts by prioritizing reduce actions over shift actions. When a conflict occurs, the parser performs all possible reductions before attempting to shift. This approach sacrifices some of GLR's generality, as it cannot handle all types of grammars, but it significantly reduces the complexity and overhead associated with maintaining the graph-structured stack, leading to a faster and more memory-efficient parser. The post provides a conceptual overview, highlights the limitations compared to full GLR, and demonstrates the algorithm with a simple example.
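The reduce-before-shift policy can be illustrated with a toy shift-reduce loop. Note that this sketch implements only that conflict-resolution policy on a single stack, for a trivial made-up grammar; it is not GLR or the right-nulled variant itself:

```javascript
// Toy grammar (illustrative): E -> E + n | n
const productions = [
  { rhs: ["E", "+", "n"], lhs: "E" },
  { rhs: ["n"], lhs: "E" },
];

function parse(tokens) {
  const stack = [];
  const input = [...tokens];
  while (true) {
    // Reduce greedily: apply any production whose RHS matches the
    // top of the stack before considering a shift.
    const match = productions.find(p =>
      p.rhs.length <= stack.length &&
      p.rhs.every((sym, i) => stack[stack.length - p.rhs.length + i] === sym));
    if (match) {
      stack.splice(stack.length - match.rhs.length, match.rhs.length, match.lhs);
      continue;
    }
    if (input.length === 0) break; // no reduction and no input left
    stack.push(input.shift());     // shift only when no reduction applies
  }
  return stack.length === 1 && stack[0] === "E"; // accepted?
}

console.log(parse(["n", "+", "n", "+", "n"])); // true
console.log(parse(["n", "+"]));                // false
```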
Hacker News users discuss the practicality and efficiency of GLR parsing, particularly in comparison to other parsing techniques. Some commenters highlight its theoretical power and ability to handle ambiguous grammars, while acknowledging its potential performance overhead. Others question its suitability for real-world applications, suggesting that simpler methods like PEG or recursive descent parsers are often sufficient and more efficient. A few users mention specific use cases where GLR parsing shines, such as language servers and situations requiring robust error recovery. The overall sentiment leans towards appreciating GLR's theoretical elegance but expressing reservations about its widespread adoption due to perceived complexity and performance concerns. A recurring theme is the trade-off between parsing power and practical efficiency.
Summary of Comments (47): https://news.ycombinator.com/item?id=43514308
HN commenters largely agree with the author's criticism of Grammarly's aggressive upselling and intrusive UI. Several users share similar experiences of frustration with the constant prompts to upgrade, even after dismissing them. Some suggest alternative grammar checkers like LanguageTool and ProWritingAid, praising their less intrusive nature and comparable functionality. A few commenters point out that Grammarly's business model necessitates these tactics, while others discuss the potential negative impact on user experience and writing flow. One commenter mentions the irony of Grammarly's own grammatical errors in their marketing materials, further fueling the sentiment against the company's practices. The overall consensus is that Grammarly's usefulness is overshadowed by its annoying and disruptive upselling strategy.
The Hacker News post "Et Tu, Grammarly?", which discusses Dbushell's blog post about Grammarly's apparent shift toward AI-driven features and the potential decline of its core grammar checking, sparked a lively discussion with several compelling comments.
Several users shared anecdotal experiences mirroring the author's sentiment. One user lamented the perceived decline in Grammarly's ability to catch basic grammatical errors, contrasting it with the tool's past performance. They specifically mentioned missing simple mistakes, suggesting a shift in focus from fundamental grammar rules. Another commenter echoed this, expressing frustration with Grammarly's increasing tendency to offer stylistic suggestions instead of addressing core grammatical issues. This user found the stylistic suggestions disruptive and ultimately deactivated the tool due to its perceived ineffectiveness in its primary function.
The conversation also touched upon the broader implications of AI integration in writing tools. One commenter cautioned against relying solely on AI for writing and editing, emphasizing the importance of human oversight and the development of strong writing skills. They argued that tools like Grammarly should be used as aids, not replacements for critical thinking and careful editing. Another user suggested that the perceived decline in Grammarly's core functionality might be a deliberate strategy to push users towards the AI-powered features and premium subscriptions, speculating that the free version might be intentionally "dumbed down."
Some users offered alternative solutions and perspectives. One commenter recommended LanguageTool as a potential replacement for Grammarly, praising its open-source nature and perceived superiority in catching grammatical errors. Another user pointed out that while Grammarly might not be perfect, it still offers valuable assistance, particularly for non-native English speakers. This commenter highlighted the importance of acknowledging the tool's limitations and using it judiciously.
Finally, one commenter offered a more technical perspective, suggesting that the shift towards AI might be due to the inherent difficulty in maintaining and improving rule-based grammar checking systems. They speculated that machine learning models, despite their current limitations, might offer a more scalable and adaptable approach to grammar checking in the long run.
In summary, the comments on Hacker News reflect a mixed sentiment towards Grammarly's recent changes. While some users appreciate the new AI features, many express concern over the perceived decline in basic grammar checking capabilities, sparking a broader discussion about the role of AI in writing and the future of grammar-checking tools.