Good writing is about clarity and saying something substantive; it's not about flowery language or trying to sound smart. The key is thinking clearly, which translates into clear writing: discarding bad ideas, dissecting good ones, and expressing the resulting thoughts simply and directly. Good writing, in short, is the product of clear thinking, relentless editing, and a genuine desire to communicate effectively with the reader. It's a continuous process of refinement, akin to rewriting a paragraph again and again until it says precisely what you intend.
The author champions the semicolon, arguing that its bad reputation stems from misuse rather than any inherent flaw. The semicolon's power, they contend, lies in linking closely related yet distinct thoughts, offering a nuanced pause stronger than a comma but softer than a full stop, and thereby creating dramatic tension and anticipation. This "delicious tension," the author claims, elevates writing, lending it rhythmic flow and letting complex ideas unfold gracefully. They encourage writers to embrace the semicolon's potential rather than fear it, in order to craft more compelling and sophisticated prose.
HN commenters largely discuss their personal preferences and experiences with semicolons in programming. Some defend their use, citing improved code clarity and the prevention of subtle errors, particularly those caused by JavaScript's automatic semicolon insertion (ASI). Others argue against them as unnecessary clutter, especially in languages like Go and Python where they are optional or absent. A few highlight the importance of consistency within a codebase, regardless of personal preference. The most compelling comments offer specific examples of where semicolons prevent ambiguity or improve readability, countering the argument that they are purely stylistic. Several users mention the historical context of semicolons and how their necessity has changed as programming languages evolved.
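A representative example of the kind of pitfall those comments describe (an illustration, not code quoted from the thread) is JavaScript's treatment of a newline after `return`:

```javascript
// Classic ASI pitfall: a newline after `return` makes JavaScript
// insert a semicolon, so the object literal is never returned.
function broken() {
  return          // ASI rewrites this line as `return;`
  { value: 42 };  // parsed as an unreachable block statement
}

function fixed() {
  return {        // keeping the brace on the same line defeats ASI
    value: 42,
  };
}

console.log(broken()); // undefined
console.log(fixed());  // { value: 42 }
```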
This 1990 paper by Sriyatha offers a computational-linguistic approach to the complex roles of Greek particles like μέν, δέ, γάρ, and οὖν. It argues against treating them merely as discourse markers, proposing instead a framework based on "coherence relations" between segments of text. The particles, it suggests, signal specific relationships such as elaboration, justification, or contrast, guiding interpretation of how different parts of a text relate to one another. This framework enables computational analysis of those relationships, moving beyond simple grammatical description toward a more nuanced account of how particles contribute to the overall meaning and coherence of Greek texts.
HN users discuss the complexity and nuance of ancient Greek particles, praising the linked article for its clarity and insight. Several commenters share anecdotes about their struggles learning Greek, highlighting the difficulty of mastering these seemingly small words. The discussion also touches on the challenges of translation, the limitations of relying solely on dictionaries, and the importance of understanding the underlying logic and rhetoric of the language. Some users express renewed interest in revisiting their Greek studies, inspired by the article's approachable explanation of a complex topic. One commenter points out the connection between Greek particles and similar structures in other languages, particularly Indian languages, suggesting a shared Indo-European origin for these grammatical features.
Dbushell's blog post "Et Tu, Grammarly?" criticizes Grammarly's tone detector for flagging neutral phrasing as overly negative or uncertain. He provides examples where simple, straightforward sentences are deemed problematic, arguing that the tool pushes users towards an excessively positive and verbose style, ultimately hindering clear communication. This, he suggests, reflects a broader trend of AI writing tools prioritizing a specific, and potentially undesirable, writing style over actual clarity and conciseness. He worries this reinforces corporate jargon and ultimately diminishes the quality of writing.
HN commenters largely agree with the author's criticism of Grammarly's aggressive upselling and intrusive UI. Several users share similar experiences of frustration with the constant prompts to upgrade, even after dismissing them. Some suggest alternative grammar checkers like LanguageTool and ProWritingAid, praising their less intrusive nature and comparable functionality. A few commenters point out that Grammarly's business model necessitates these tactics, while others discuss the potential negative impact on user experience and writing flow. One commenter mentions the irony of Grammarly's own grammatical errors in their marketing materials, further fueling the sentiment against the company's practices. The overall consensus is that Grammarly's usefulness is overshadowed by its annoying and disruptive upselling strategy.
Em dashes (—) are versatile and primarily used to indicate a break in thought—like this—or to set off parenthetical information. They can also replace colons or commas for added emphasis. En dashes (–) are shorter than em dashes and mainly connect ranges of numbers, dates, or times, like 9–5 or January–June. Hyphens (-) are the shortest and connect compound words (e.g., long-term) or parts of words broken at the end of a line. Use two hyphens together (--) if you don't have access to an em dash or en dash.
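For reference, the three marks are distinct Unicode code points, which matters when typing them or searching for them programmatically; a quick way to check, shown here in JavaScript:

```javascript
// Em dash, en dash, and hyphen-minus are three separate code points.
console.log('\u2014'); // — em dash  (U+2014)
console.log('\u2013'); // – en dash  (U+2013)
console.log('\u002D'); // - hyphen   (U+002D, hyphen-minus)
```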
HN users generally appreciate Merriam-Webster's explanation of em and en dash usage. Some find the spacing rules around em dashes overly pedantic, especially in informal writing, suggesting that as long as the dash stands out, the spacing is less crucial. A few commenters discuss the challenges of typing these dashes efficiently, with suggested keyboard shortcuts and text replacement tools mentioned for macOS and Linux. One commenter points out the increasing trend of using hyphens in place of both en and em dashes, expressing concern that proper usage might be fading. Another highlights the ambiguity created by different coding styles rendering en/em dashes visually identical, leading to potential misinterpretations for developers.
Affixes.org is a comprehensive resource dedicated to English affixes (prefixes and suffixes). It provides a searchable database of these morphemes, offering definitions, examples of their use within words, and etymological information. The site aims to improve vocabulary and understanding of English word formation by breaking down words into their constituent parts and explaining how affixes modify the meaning of root words. It serves as a valuable tool for anyone interested in expanding their lexical knowledge and gaining a deeper appreciation for the intricacies of the English language.
Hacker News users generally praised the Affixes website for its clean design, intuitive interface, and helpful examples. Several commenters pointed out its usefulness for learning English, particularly for non-native speakers. Some suggested improvements like adding audio pronunciations, more example sentences, and the ability to search by meaning rather than just the affix itself. One commenter appreciated the site's simplicity compared to more complex dictionary sites, while another highlighted the value of understanding affixes for deciphering unfamiliar words. A few users shared related resources, including a Latin and Greek root word website and a book recommendation for vocabulary building. There was some discussion on the etymology of specific affixes and how they've evolved over time.
BritCSS is a humorous CSS framework that replaces American English spellings in CSS properties and values with their British English equivalents. It aims to provide a more "civilised" (note the British spelling) styling experience, swapping terms like "color" for "colour" and "center" for "centre". While functionally identical to standard CSS, it serves primarily as a lighthearted commentary on the dominance of American English in web development.
Hacker News users generally found BritCSS humorous but impractical. Several commenters pointed out the inherent problems with localizing CSS, given its global nature and the established convention of American English. Some suggested it would fragment the community and add unnecessary complexity to workflows. One commenter jokingly proposed expanding the idea to other localized CSS versions, like Australian English, underscoring the absurdity of the project. Others questioned the motivation behind targeting American English specifically, suggesting it stemmed from anti-American sentiment. There was also discussion of the technical challenges of such an undertaking, such as handling existing libraries and frameworks. While some appreciated the satire, the consensus was that BritCSS wasn't a serious proposal.
Ohm is a parsing toolkit designed for creating parsers in JavaScript and TypeScript that are both powerful and easy to use. It features a grammar definition syntax closely resembling EBNF, enabling developers to express complex syntax rules clearly and concisely. Ohm's built-in support for semantic actions allows users to directly embed JavaScript or TypeScript code within their grammar rules, simplifying the process of building abstract syntax trees (ASTs) and performing other actions during parsing. The toolkit provides excellent error reporting capabilities, helping developers quickly identify and fix syntax errors. Its flexible architecture makes it suitable for various applications, from validating user input to building full-fledged compilers and interpreters.
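To give a feel for the workflow, here is a minimal sketch using the ohm-js package; the arithmetic grammar and its semantics are illustrative examples, not code from Ohm's documentation:

```javascript
const ohm = require('ohm-js');

// An illustrative arithmetic grammar. Note the left-recursive rules,
// which Ohm supports directly.
const g = ohm.grammar(`
  Arith {
    Exp    = Exp "+" Term     -- plus
           | Term
    Term   = Term "*" Factor  -- times
           | Factor
    Factor = "(" Exp ")"      -- paren
           | number
    number = digit+
  }
`);

// Semantic actions embed plain JavaScript to evaluate the parse tree.
const semantics = g.createSemantics().addOperation('eval', {
  Exp_plus(left, _op, right) { return left.eval() + right.eval(); },
  Term_times(left, _op, right) { return left.eval() * right.eval(); },
  Factor_paren(_open, exp, _close) { return exp.eval(); },
  number(_digits) { return parseInt(this.sourceString, 10); },
});

const match = g.match('2 * (3 + 4)');
if (match.succeeded()) {
  console.log(semantics(match).eval()); // 14
} else {
  console.log(match.message); // Ohm's detailed error report
}
```

The left-recursive rules above are worth noting: unlike most PEG-based tools, Ohm handles them directly.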
HN users generally expressed interest in Ohm, praising its user-friendliness, clear documentation, and the power offered by its grammar-based approach to parsing. Several compared it favorably to traditional parser generators like PEG.js and nearley, highlighting Ohm's superior error messages and easier learning curve. Some users discussed potential applications, including building linters, formatters, and domain-specific languages. A few questioned the performance implications of its JavaScript implementation, while others suggested potential improvements like adding support for left-recursive grammars. The overall sentiment leaned positive, with many eager to try Ohm in their own projects.
Mark Rosenfelder's "The Language Construction Kit" offers a practical guide for creating fictional languages, emphasizing naturalistic results. It covers core aspects of language design, including phonology (sounds), morphology (word formation), syntax (sentence structure), and the lexicon (vocabulary). The book also delves into writing systems, sociolinguistics, and the evolution of languages, providing a comprehensive framework for crafting believable and complex constructed languages. While targeted towards creating languages for fictional worlds, the kit also serves as a valuable introduction to linguistics itself, exploring the underlying principles governing real-world languages.
Hacker News users discuss the Language Construction Kit, praising its accessibility and comprehensiveness for beginners. Several commenters share nostalgic memories of using the kit in their youth, sparking their interest in linguistics and constructed languages. Some highlight specific aspects they found valuable, such as the sections on phonology and morphology. Others debate the kit's age and whether its information is still relevant, with some suggesting updated resources while others argue its core principles remain valid. A few commenters also discuss the broader appeal and challenges of language creation.
The blog post details methods for eliminating left and mutual recursion in context-free grammars, crucial for parser construction. Left recursion, where a non-terminal derives itself as the leftmost symbol, is problematic for top-down parsers. The post demonstrates how to remove direct left recursion using factorization and substitution. It then explains how to handle indirect left recursion by ordering non-terminals and systematically applying the direct recursion removal technique. Finally, it addresses mutual recursion, where two or more non-terminals derive each other, converting it into direct left recursion, which can then be eliminated using the previously described methods. The post uses concrete examples to illustrate these transformations, making it easier to understand the process of converting a grammar into a parser-friendly form.
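As a concrete sketch of the direct case: the rule A → A α | β becomes A → β A′, with a new non-terminal A′ → α A′ | ε. The helper below performs this rewrite on a toy grammar representation (a plain object mapping each non-terminal to its productions); the representation is an assumption for illustration, not the post's own code:

```javascript
// Remove direct left recursion from one non-terminal.
// A -> A α | β   becomes   A -> β A' ;  A' -> α A' | ε
function removeDirectLeftRecursion(grammar, nt) {
  const recursive = [];    // the α parts of A -> A α
  const nonRecursive = []; // the β productions
  for (const prod of grammar[nt]) {
    if (prod[0] === nt) recursive.push(prod.slice(1));
    else nonRecursive.push(prod);
  }
  if (recursive.length === 0) return grammar; // nothing to do

  const ntPrime = nt + "'";
  return {
    ...grammar,
    // A -> β A' for every non-recursive production β
    [nt]: nonRecursive.map((beta) => [...beta, ntPrime]),
    // A' -> α A' | ε (ε encoded as the empty production [])
    [ntPrime]: [...recursive.map((alpha) => [...alpha, ntPrime]), []],
  };
}

// Example: Expr -> Expr "+" Term | Term
const g = { Expr: [['Expr', '+', 'Term'], ['Term']] };
console.log(JSON.stringify(removeDirectLeftRecursion(g, 'Expr')));
// {"Expr":[["Term","Expr'"]],"Expr'":[["+","Term","Expr'"],[]]}
```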
Hacker News users discussed the potential inefficiency of the presented left-recursion elimination algorithm, particularly its reliance on repeated string concatenation. They suggested alternative approaches using stacks or accumulating results in a list for better performance. Some commenters questioned the necessity of fully eliminating left recursion in all cases, pointing out that modern parsing techniques, like packrat parsing, can handle left-recursive grammars directly. The lack of formal proofs or performance comparisons with established methods was also noted. A few users discussed the benefits and drawbacks of different parsing libraries and techniques, including ANTLR and various parser combinator libraries.
The Stack Exchange post explores why "zero" takes the plural form of a noun. It concludes that "zero" functions like other quantifiers such as "two," "few," and "many," which inherently refer to pluralities. While "one" signifies a single item, "zero" indicates the absence of any items, placing it conceptually alongside the plural quantifiers rather than alongside "one." This aligns with how other languages treat zero, and using the singular with zero can create ambiguity, especially with countable nouns where "one" is a live possibility. Essentially, "zero" behaves grammatically like a plural quantifier because English reserves the singular for exactly one item; any other quantity, including none, takes the plural.
Hacker News users discuss the seemingly illogical pluralization of "zero." Some argue that "zero" functions as a quantifier of a plural noun, much like "many" or "few." Others suggest the plural form stems from zero denoting a set containing no elements, and sets are spoken of in the plural even when empty. One commenter raised the related notion that "zero apples" names a single set of apples that happens to be empty, yet remains grammatically plural. The prevalent feeling is that the pluralization is more a quirk of language evolution than a matter of strict logic, echoing the accepted answer on the original Stack Exchange post. Some users pointed to different conventions in other languages, highlighting the English language's idiosyncrasies. A few comments humorously question why such a seemingly trivial matter warrants discussion at all.
This blog post explores a simplified variant of Generalized LR (GLR) parsing called "right-nulled" GLR. Instead of maintaining a graph-structured stack during parsing ambiguities, this technique uses a single stack and resolves conflicts by prioritizing reduce actions over shift actions. When a conflict occurs, the parser performs all possible reductions before attempting to shift. This approach sacrifices some of GLR's generality, as it cannot handle all types of grammars, but it significantly reduces the complexity and overhead associated with maintaining the graph-structured stack, leading to a faster and more memory-efficient parser. The post provides a conceptual overview, highlights the limitations compared to full GLR, and demonstrates the algorithm with a simple example.
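For illustration, the conflict-resolution policy amounts to ordering a state's candidate actions so that every reduction runs before any shift; the action shape below is hypothetical, not the post's code:

```javascript
// Toy sketch of "reduce before shift": order all the actions an LR
// table offers for one (state, lookahead) pair. The {type, ...}
// action objects are a hypothetical representation.
function orderActions(actions) {
  const reduces = actions.filter((a) => a.type === 'reduce');
  const shifts  = actions.filter((a) => a.type === 'shift');
  return [...reduces, ...shifts];
}

console.log(orderActions([
  { type: 'shift', state: 7 },
  { type: 'reduce', rule: "Expr -> Expr '+' Term" },
]));
// [ { type: 'reduce', ... }, { type: 'shift', state: 7 } ]
```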
Hacker News users discuss the practicality and efficiency of GLR parsing, particularly in comparison to other parsing techniques. Some commenters highlight its theoretical power and ability to handle ambiguous grammars, while acknowledging its potential performance overhead. Others question its suitability for real-world applications, suggesting that simpler methods like PEG or recursive descent parsers are often sufficient and more efficient. A few users mention specific use cases where GLR parsing shines, such as language servers and situations requiring robust error recovery. The overall sentiment leans towards appreciating GLR's theoretical elegance but expressing reservations about its widespread adoption due to perceived complexity and performance concerns. A recurring theme is the trade-off between parsing power and practical efficiency.
Hacker News users largely agreed with Paul Graham's essay on good writing, praising its clarity and actionable advice. Several commenters highlighted the importance of rewriting and editing, echoing Graham's emphasis on the process. Some offered additional tips, such as reading your work aloud and focusing on clarity for the reader. A few pointed out the inherent difficulty of writing well, while others appreciated the essay's encouragement to strive for better writing despite the challenge. The value of simple language and clear thinking was a recurring theme, with some sharing personal anecdotes of how Graham's writing style influenced them. A minor point of contention arose regarding Graham's dismissal of certain stylistic choices, with some defending the occasional use of more complex sentence structures.
The Hacker News post titled "Good Writing," linking to Paul Graham's essay on the same subject, generated a moderate amount of discussion with 29 comments. Many commenters generally agree with Graham's points about clarity and conciseness being crucial for good writing.
Several commenters emphasize the importance of rewriting and editing, echoing Graham's advice. One commenter highlights the benefit of reading one's own work aloud to catch awkward phrasing and improve flow. Another suggests using tools like Grammarly, Hemingway Editor, and ProWritingAid to help identify areas for improvement.
Some commenters delve into specific techniques mentioned in Graham's essay. One discusses the value of using simple words and avoiding jargon. Another explores the concept of "through-lines" in writing, emphasizing the importance of maintaining a clear and consistent narrative thread.
A few commenters offer additional advice beyond what Graham covers. One suggests focusing on the reader and their needs, emphasizing empathy as a key component of effective communication. Another highlights the importance of understanding the specific context and audience for any piece of writing.
One commenter challenges the notion that all writing should strive for absolute clarity, arguing that some forms of writing, such as poetry or fiction, can benefit from ambiguity and layered meaning. This leads to a brief discussion about the different goals and styles of writing.
A couple of commenters share personal anecdotes about their writing process and the challenges they face. One discusses the difficulty of balancing clarity with creativity, while another describes their struggles with perfectionism.
While there isn't one overwhelmingly compelling comment that stands out from the rest, the discussion provides a valuable extension of Graham's essay by offering practical tips, exploring nuances, and sharing personal experiences related to the craft of writing. The general consensus affirms the core principles of good writing outlined by Graham, with commenters offering further insights and perspectives on the topic.