This post explores a shift in thinking about programming languages from individual entities to sets or families of languages. Instead of focusing on a single language's specific features, the author advocates for considering the shared characteristics and relationships between languages within a broader group. This approach involves recognizing core concepts and abstractions that transcend individual syntax, allowing for easier transfer of knowledge and the development of tools that can operate across multiple languages within a set. The author uses examples like the ML language family and the Lisp dialects to illustrate how shared underlying principles can unify seemingly disparate languages, leading to a more powerful and adaptable approach to programming.
Hillel Wayne presents a seemingly straightforward JavaScript code snippet involving a variable assignment within a conditional statement containing a regular expression match. The unexpected behavior arises from how JavaScript's RegExp object handles the global flag. Because the global flag is enabled, subsequent calls to test() on the same regex object continue matching from the previous match's position. This leads to the conditional evaluating differently on subsequent runs, so the variable assignment happens only once even though the conditional appears to be true multiple times. Effectively, the regex remembers its position between calls, causing confusion for those expecting each call to test() to start from the beginning of the string. The post highlights the subtle yet crucial difference between using a fresh regex literal each time versus reusing a regex object, which retains state.
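The summary doesn't reproduce Wayne's snippet, but the stateful behavior it describes is easy to demonstrate. A minimal sketch (not the original code):

```typescript
// A regex with the "g" flag tracks its position in lastIndex between
// calls, so reusing one object makes test() stateful.
const re = /ab/g;
const s = "ab ab";

console.log(re.test(s), re.lastIndex); // true  2
console.log(re.test(s), re.lastIndex); // true  5
console.log(re.test(s), re.lastIndex); // false 0  (no match left; lastIndex resets)

// A fresh literal on each call has no memory of prior matches:
console.log(/ab/g.test(s)); // true, every time
```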
Hacker News users discuss various aspects of the perplexing JavaScript parsing puzzle. Several commenters analyze the specific grammar rules and automatic semicolon insertion (ASI) behavior that lead to the unexpected result, highlighting the complexities of JavaScript's parsing logic. Some point out that the ++ operator binds more tightly than the optional chaining operator (?.), explaining why the increment applies to the property-access result rather than the object itself. Others mention the importance of tools like ESLint and other linters for catching such potential issues and suggest that relying on ASI can be problematic. A few users share personal anecdotes of encountering similar unexpected JavaScript behavior, emphasizing the need for careful consideration of these parsing quirks. One commenter suggests the puzzle demonstrates why "simple" languages can be more difficult to master than initially perceived.
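The puzzle itself isn't reproduced in the summary, but the kind of ASI hazard the commenters describe is easy to illustrate with a classic example (not the original puzzle):

```typescript
// Without a semicolon, a line starting with "[" continues the
// previous statement instead of beginning a new one.
const data = [1, 2, 3]
const result = data
[0, 1, 2].forEach(i => console.log(i))
// Parses as: const result = data[(0, 1, 2)].forEach(...)
// The comma operator yields 2, so this is data[2].forEach(...),
// which throws at runtime because numbers have no forEach method.
```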
The blog post demonstrates how to implement symbolic differentiation using definite clause grammars (DCGs) in Prolog. It leverages the elegant, declarative nature of DCGs to parse mathematical expressions represented as strings and simultaneously construct their derivatives. By defining grammar rules for the basic arithmetic operations (addition, subtraction, multiplication, division, and exponentiation), applying the chain rule, and handling constants and variables, the Prolog program can differentiate a wide range of expressions. The post highlights the concise and readable nature of this approach, showcasing the power of DCGs for tackling symbolic computation tasks.
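The post's Prolog isn't reproduced here, and its DCG parses and differentiates in a single pass; but the derivative rules it encodes can be sketched in a few lines. A minimal version over an explicit expression tree, in TypeScript rather than Prolog (exponentiation and the chain rule omitted for brevity):

```typescript
// Expression tree for a single variable x.
type Expr =
  | { kind: "const"; value: number }
  | { kind: "var" } // the variable x
  | { kind: "add" | "sub" | "mul" | "div"; left: Expr; right: Expr };

// d/dx via the usual sum, product, and quotient rules.
function diff(e: Expr): Expr {
  switch (e.kind) {
    case "const": return { kind: "const", value: 0 };
    case "var":   return { kind: "const", value: 1 };
    case "add":
    case "sub":   // (f ± g)' = f' ± g'
      return { kind: e.kind, left: diff(e.left), right: diff(e.right) };
    case "mul":   // (fg)' = f'g + fg'
      return {
        kind: "add",
        left:  { kind: "mul", left: diff(e.left), right: e.right },
        right: { kind: "mul", left: e.left, right: diff(e.right) },
      };
    case "div":   // (f/g)' = (f'g - fg') / g^2
      return {
        kind: "div",
        left: {
          kind: "sub",
          left:  { kind: "mul", left: diff(e.left), right: e.right },
          right: { kind: "mul", left: e.left, right: diff(e.right) },
        },
        right: { kind: "mul", left: e.right, right: e.right },
      };
  }
}
```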
Hacker News users discussed the elegance and power of using definite clause grammars (DCGs) for symbolic differentiation, praising the conciseness and declarative nature of the approach. Some commenters pointed out the historical connection between Prolog and DCGs, highlighting their suitability for symbolic computation. A few users expressed interest in exploring further applications of DCGs beyond differentiation, such as parsing and code generation. The discussion also touched upon the performance implications of using DCGs and compared them to other parsing techniques. Some commenters raised concerns about the readability and maintainability of complex DCG-based systems.
This blog post details how to implement custom syntax highlighting in Emacs using tree-sitter. The author demonstrates creating a minor mode for highlighting TODO items and FIXMEs in comments within C++ code. This involves defining specific queries that target the comment nodes in the tree-sitter parse tree and then associating faces (colors and styles) with the captured nodes. The example provides a practical illustration of leveraging tree-sitter's structured code understanding to achieve more precise and context-aware highlighting than traditional regular expression-based approaches. The post also briefly covers how to incorporate these queries into a theme for broader application and includes a troubleshooting tip for ensuring tree-sitter highlighting is active.
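The post's exact queries aren't shown in this summary, but a tree-sitter query for that kind of highlighting has roughly the following shape (the capture name is illustrative; in Emacs it would be mapped to a face):

```scheme
;; Capture comment nodes whose text matches TODO or FIXME, so the
;; editor can apply a face to the captured range.
((comment) @keyword.todo
 (#match? @keyword.todo "TODO|FIXME"))
```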
HN commenters largely praised the integration of tree-sitter into Emacs, highlighting the significant improvements in syntax highlighting accuracy and performance. Some expressed excitement over the potential for more advanced features like semantic highlighting and code navigation enabled by tree-sitter's deeper understanding of code structure. A few users shared their personal experiences with setting up and using tree-sitter in Emacs, offering tips and workarounds for common issues. One commenter noted the wider adoption of tree-sitter across various editors and its positive impact on the developer experience. Others discussed the technical details of tree-sitter's implementation, comparing it to traditional regular expression-based highlighting. A couple of comments touched on the potential for future improvements, such as asynchronous parsing and better support for more obscure languages.
This 2015 blog post demonstrates how to leverage Lua's flexible syntax and metamechanisms to create a Domain Specific Language (DSL) for generating HTML. The author uses Lua's tables and functions to create a clean, readable syntax that abstracts away the verbosity of raw HTML. By overloading the concatenation operator and utilizing metatables, the DSL allows users to build HTML elements and structures in a declarative way, mirroring the structure of the output. This approach simplifies HTML generation within Lua, making the code cleaner and more maintainable. The post provides concrete examples showing how to define tags, attributes, and nested elements, offering a practical guide to building similar DSLs for other output formats.
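Lua's metatables and operator overloading don't translate directly to other languages, but the declarative shape such a DSL produces does. A rough sketch of the same idea in TypeScript (names illustrative, not the post's code):

```typescript
// Minimal HTML-builder sketch: each tag function takes attributes
// and children and returns the rendered string.
const tag = (name: string) =>
  (attrs: Record<string, string>, ...children: string[]): string => {
    const attrStr = Object.entries(attrs)
      .map(([k, v]) => ` ${k}="${v}"`)
      .join("");
    return `<${name}${attrStr}>${children.join("")}</${name}>`;
  };

const html = tag("html"), body = tag("body"),
      h1 = tag("h1"), p = tag("p");

// The nesting of calls mirrors the nesting of the output:
console.log(
  html({},
    body({ class: "main" },
      h1({}, "Hello"),
      p({}, "Generated from a tiny DSL."))));
```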
Hacker News users generally praised the article for its clear explanation of building a DSL in Lua, particularly appreciating the focus on leveraging Lua's existing features and metamechanisms. Several commenters shared their own experiences and preferences for using Lua for DSLs, including its use in game development and configuration management. One commenter pointed out potential performance considerations when using this approach, suggesting that precompilation could mitigate some overhead. Others discussed alternative methods for building DSLs, such as using parser generators. The use of Lua's setfenv was highlighted, with some acknowledging its power and others expressing caution due to potential debugging difficulties. A few users also mentioned other languages like Fennel and Janet as interesting alternatives to Lua for similar purposes.
This blog post chronicles the author's weekend project of building a compiler for a simplified C-like language. It walks through the implementation of a lexical analyzer, parser (using recursive descent), and code generator targeting x86-64 assembly. The compiler handles basic arithmetic operations, variable declarations and assignments, if/else statements, and while loops. The post emphasizes simplicity and educational value over performance or completeness, providing a practical example of compiler construction principles in a digestible format. The code is available on GitHub for readers to explore and experiment with.
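The compiler itself lives in the linked repository, but the recursive-descent technique it uses is easy to sketch. A minimal expression parser in TypeScript that evaluates instead of emitting assembly, to keep the example short (not the post's code):

```typescript
// Minimal recursive-descent parser for "+"/"*" expressions over
// integers, honoring precedence: expr -> term ("+" term)*,
// term -> factor ("*" factor)*, factor -> number | "(" expr ")".
function parse(src: string): number {
  const tokens = src.match(/\d+|[+*()]/g) ?? [];
  let pos = 0;
  const peek = () => tokens[pos];
  const next = () => tokens[pos++];

  function factor(): number {
    if (peek() === "(") {
      next();              // consume "("
      const v = expr();
      next();              // consume ")" (no error handling in this sketch)
      return v;
    }
    return parseInt(next(), 10);
  }
  function term(): number {
    let v = factor();
    while (peek() === "*") { next(); v *= factor(); }
    return v;
  }
  function expr(): number {
    let v = term();
    while (peek() === "+") { next(); v += term(); }
    return v;
  }
  return expr();
}

console.log(parse("2+3*(4+1)")); // 17
```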
HN users largely praised the TinyCompiler project for its educational value, highlighting its clear code and approachable structure as beneficial for learning compiler construction. Several commenters discussed extending the compiler's functionality, such as adding support for different architectures or optimizing the generated code. Some pointed out similar projects or resources, like the "Let's Build a Compiler" tutorial and the Crafting Interpreters book. A few users questioned the "weekend" claim in the title, believing the project would take significantly longer for a novice to complete. The post also sparked discussion about the practical applications of such a compiler, with some suggesting its use for educational purposes or embedding in resource-constrained environments. Finally, there was some debate about the complexity of the compiler compared to more sophisticated tools like LLVM.
pdfsyntax is a tool that visually represents the internal structure of a PDF file using HTML. It parses a PDF, extracts its objects and their relationships, and presents them in an interactive HTML tree view. This allows users to explore the document's components, such as fonts, images, and text content, along with the underlying PDF syntax. The tool aims to aid in understanding and debugging PDF files by providing a clear, navigable representation of their often complex internal organization.
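For readers unfamiliar with what such a tool visualizes: a PDF's body is a sequence of numbered objects that reference one another, in a syntax like the following fragment (illustrative raw PDF, not pdfsyntax's output):

```text
1 0 obj                              % object number 1, generation 0
<< /Type /Catalog /Pages 2 0 R >>    % dictionary; "2 0 R" references object 2
endobj

2 0 obj
<< /Type /Pages /Kids [3 0 R] /Count 1 >>
endobj
```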
Hacker News users generally praised the PDF visualization tool for its clarity and potential usefulness in debugging PDF issues. Several commenters pointed out its helpfulness in understanding PDF internals and suggested potential improvements like adding search functionality, syntax highlighting, and the ability to manipulate the PDF structure directly. Some users discussed the complexities of the PDF format, with one highlighting the challenge of extracting clean text due to the arbitrary ordering of elements. Others shared their own experiences with problematic PDFs and expressed hope that this tool could aid in diagnosing and fixing such files. The discussion also touched upon alternative PDF libraries and tools, further showcasing the community's interest in PDF manipulation and analysis.
Ohm is a parsing toolkit designed for creating parsers in JavaScript and TypeScript that are both powerful and easy to use. It features a grammar definition syntax resembling EBNF, enabling developers to express complex syntax rules clearly and concisely. Ohm's support for semantic actions lets users attach JavaScript or TypeScript code to grammar rules, via semantics objects kept separate from the grammar itself, simplifying the process of building abstract syntax trees (ASTs) and performing other work during parsing. The toolkit provides excellent error reporting capabilities, helping developers quickly identify and fix syntax errors. Its flexible architecture makes it suitable for various applications, from validating user input to building full-fledged compilers and interpreters.
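A minimal sketch of what using Ohm looks like, based on its documented API (the grammar and values are illustrative):

```typescript
import * as ohm from "ohm-js";

// A tiny additive-expression grammar.
const g = ohm.grammar(String.raw`
  Add {
    Exp = num ("+" num)*
    num = digit+
  }
`);

// Semantic actions live in a semantics object alongside the grammar,
// not inline within it.
const semantics = g.createSemantics().addOperation("eval", {
  Exp(first, _ops, rest) {
    return rest.children.reduce((sum, n) => sum + n.eval(), first.eval());
  },
  num(_digits) {
    return parseInt(this.sourceString, 10);
  },
});

const m = g.match("1 + 2 + 39");
console.log(m.succeeded() ? semantics(m).eval() : m.message); // 42
```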
HN users generally expressed interest in Ohm, praising its user-friendliness, clear documentation, and the power offered by its grammar-based approach to parsing. Several compared it favorably to traditional parser generators like PEG.js and nearley, highlighting Ohm's superior error messages and easier learning curve. Some users discussed potential applications, including building linters, formatters, and domain-specific languages. A few questioned the performance implications of its JavaScript implementation, while others suggested potential improvements like adding support for left-recursive grammars. The overall sentiment leaned positive, with many eager to try Ohm in their own projects.
The blog post details the reverse engineering process of Apple's proprietary Typed Stream format used in various macOS features like Spotlight search indexing and QuickLook previews. The author, motivated by the lack of public documentation, utilizes a combination of tools and techniques including analyzing generated Typed Stream files, using class-dump on relevant system frameworks, and examining open-source components like CoreFoundation, to decipher the format. They ultimately discover that Typed Streams are essentially serialized property lists with a specific header and optional compression, allowing for efficient storage and retrieval of typed data. This reverse engineering effort provides valuable insight into the inner workings of macOS and potentially enables interoperability with other systems.
HN users generally praised the author's reverse-engineering effort, calling it "impressive" and "well-documented." Some discussed the implications of Apple using a custom format, speculating about potential performance benefits or tighter integration with their hardware. One commenter noted the similarity to Google's Protocol Buffers, suggesting Apple might have chosen this route to avoid dependencies. Others pointed out the difficulty in reverse-engineering these formats, highlighting the value of such work for interoperability. A few users discussed potential use cases for the information, including debugging and data recovery. Some also questioned the long-term viability of relying on undocumented formats.
The blog post details methods for eliminating left and mutual recursion in context-free grammars, crucial for parser construction. Left recursion, where a non-terminal derives itself as the leftmost symbol, is problematic for top-down parsers. The post demonstrates how to remove direct left recursion using factorization and substitution. It then explains how to handle indirect left recursion by ordering non-terminals and systematically applying the direct recursion removal technique. Finally, it addresses mutual recursion, where two or more non-terminals derive each other, converting it into direct left recursion, which can then be eliminated using the previously described methods. The post uses concrete examples to illustrate these transformations, making it easier to understand the process of converting a grammar into a parser-friendly form.
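As a concrete instance of the standard transformation (the post's own examples aren't reproduced here): a left-recursive rule of the form A → A α | β becomes right-recursive by factoring the recursion into a tail non-terminal A′ → α A′ | ε. Applied to an expression grammar:

```text
Left-recursive (breaks top-down parsing):    After elimination:

  Expr -> Expr "+" Term                        Expr  -> Term Expr'
        | Term                                 Expr' -> "+" Term Expr'
                                                      | ε
```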
Hacker News users discussed the potential inefficiency of the presented left-recursion elimination algorithm, particularly its reliance on repeated string concatenation. They suggested alternative approaches using stacks or accumulating results in a list for better performance. Some commenters questioned the necessity of fully eliminating left recursion in all cases, pointing out that modern parsing techniques, like packrat parsing, can handle left-recursive grammars directly. The lack of formal proofs or performance comparisons with established methods was also noted. A few users discussed the benefits and drawbacks of different parsing libraries and techniques, including ANTLR and various parser combinator libraries.
This blog post explores a simplified variant of Generalized LR (GLR) parsing called "right-nulled" GLR. Instead of maintaining a graph-structured stack during parsing ambiguities, this technique uses a single stack and resolves conflicts by prioritizing reduce actions over shift actions. When a conflict occurs, the parser performs all possible reductions before attempting to shift. This approach sacrifices some of GLR's generality, as it cannot handle all types of grammars, but it significantly reduces the complexity and overhead associated with maintaining the graph-structured stack, leading to a faster and more memory-efficient parser. The post provides a conceptual overview, highlights the limitations compared to full GLR, and demonstrates the algorithm with a simple example.
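The post's algorithm isn't reproduced in this summary, but the reduce-before-shift discipline it describes gives the parser's driver loop a simple shape. A schematic sketch in TypeScript, with hypothetical table interfaces (a real parser also needs a proper accept action and a symbol stack):

```typescript
// Schematic LR-style driver that resolves shift/reduce conflicts by
// always reducing first, using a single stack instead of GLR's
// graph-structured stack. Table interfaces are hypothetical.
interface Tables {
  reduction(state: number, lookahead: string): { lhs: string; rhsLen: number } | null;
  shift(state: number, lookahead: string): number | null;
  goto(state: number, lhs: string): number;
}

function parse(tokens: string[], t: Tables): boolean {
  const stack: number[] = [0]; // single stack of parser states
  let pos = 0;
  while (true) {
    const la = tokens[pos] ?? "$"; // "$" marks end of input
    const top = stack[stack.length - 1];

    // Perform every available reduction before trying to shift.
    const r = t.reduction(top, la);
    if (r) {
      stack.length -= r.rhsLen; // pop the rule's right-hand side
      stack.push(t.goto(stack[stack.length - 1], r.lhs));
      continue;
    }

    if (la === "$") return true;  // accepted (schematically)
    const s = t.shift(top, la);
    if (s === null) return false; // parse error
    stack.push(s);
    pos++;
  }
}
```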
Hacker News users discuss the practicality and efficiency of GLR parsing, particularly in comparison to other parsing techniques. Some commenters highlight its theoretical power and ability to handle ambiguous grammars, while acknowledging its potential performance overhead. Others question its suitability for real-world applications, suggesting that simpler methods like PEG or recursive descent parsers are often sufficient and more efficient. A few users mention specific use cases where GLR parsing shines, such as language servers and situations requiring robust error recovery. The overall sentiment leans towards appreciating GLR's theoretical elegance but expressing reservations about its widespread adoption due to perceived complexity and performance concerns. A recurring theme is the trade-off between parsing power and practical efficiency.
Keon is a new serialization/deserialization (serde) format designed for human readability and writability, drawing heavy inspiration from Rust's syntax. It aims to be a simple and efficient alternative to formats like JSON and TOML, offering features like strongly typed data structures, enums, and tagged unions. Keon emphasizes being easy to learn and use, particularly for those familiar with Rust, and focuses on providing a compact and clear representation of data. The project is actively being developed and explores potential use cases like configuration files, data exchange, and data persistence.
Hacker News users discuss KEON, a human-readable serialization format resembling Rust. Several commenters express interest, praising its readability and potential as a configuration language. Some compare it favorably to TOML and JSON, highlighting its expressiveness and Rust-like syntax. Concerns arise regarding its verbosity compared to more established formats, particularly for simple data structures, and the potential niche appeal due to the Rust syntax. A few suggest potential improvements, including a more formal specification, tools for generating parsers in other languages, and exploring the benefits over existing formats like Serde. The overall sentiment leans towards cautious optimism, acknowledging the project's potential but questioning its practical advantages and broader adoption prospects.
Summary of Comments (20): https://news.ycombinator.com/item?id=43360287
The Hacker News comments discuss the concept of "language sets" introduced in the linked gist. Several commenters express skepticism about the practical value and novelty of the idea, questioning whether it genuinely offers advantages over existing programming paradigms like macros, polymorphism, or code generation. Some find the examples unconvincing and overly complex, suggesting simpler solutions could achieve the same results. Others point out potential performance implications and the added cognitive load of managing language sets. However, a few commenters express interest, seeing potential applications in areas like DSL design and metaprogramming, though they also acknowledge the need for further development and clearer examples to demonstrate its usefulness. Overall, the reception is mixed, with many unconvinced but a few intrigued by the possibilities.
The Hacker News post "From Languages to Language Sets" sparks a discussion around the linked gist, which proposes the idea of "language sets" – combining multiple programming languages for different parts of a project based on their strengths. The comments section is moderately active, containing a mix of agreement, disagreement, and explorations of related concepts.
Several commenters express enthusiasm for the idea, highlighting the potential benefits of using specialized languages for specific tasks. One commenter points out how this approach mirrors existing practices, such as using SQL for database interactions within a larger application written in a different language. They argue that explicitly recognizing and formalizing these "language sets" could lead to better tool development and more structured project organization. Another commenter emphasizes the productivity gains that could be achieved by choosing the right language for each job, rather than being constrained by a single language's limitations. They also suggest that improved tooling around language sets could simplify the process of integrating different languages.
Others express skepticism or raise concerns. One commenter questions the novelty of the idea, suggesting that it simply describes the status quo of using multiple languages within a project. They argue that the term "language set" doesn't add much to the existing understanding of polyglot programming. Another commenter raises the issue of increased complexity when managing multiple languages, particularly regarding tooling, debugging, and team communication. They acknowledge the potential benefits but caution against overlooking the practical challenges.
The discussion also delves into related topics. One commenter mentions the concept of "internal DSLs" (Domain-Specific Languages) and suggests that creating small, specialized languages within a larger project could be a more effective alternative to full-blown language sets. Another commenter draws parallels to the microservices architecture pattern, arguing that language sets could be seen as a similar approach applied to programming languages rather than services.
Overall, the comments reflect a mixed reception to the idea of "language sets." While some see it as a valuable way to formalize and improve existing polyglot programming practices, others question its novelty and express concerns about increased complexity. The discussion also touches upon related concepts like internal DSLs and microservices, enriching the conversation around the central theme of choosing the right tools for the job.