Jonathan Protzenko announced the release of EverCrypt 1.0 for Python, providing a high-assurance cryptography library with over 15,000 lines of formally verified code. This release leverages the HACL* cryptographic library, which has been mathematically proven correct, and makes it readily available for Python developers through a simple and performant interface. EverCrypt aims to bring robust, verified cryptographic primitives to a wider audience, improving security and trustworthiness for applications that depend on strong cryptography. It offers a drop-in replacement for existing libraries, significantly enhancing security guarantees without requiring extensive code changes.
The author explores incorporating Haskell-inspired functional programming concepts into their Python code. They focus on immutability by using tuples and namedtuples instead of lists and dictionaries where appropriate, leveraging list comprehensions and generator expressions for functional transformations, and adopting higher-order functions like `map`, `filter`, and `reduce` (via `functools`). While acknowledging that Python isn't inherently designed for pure functional programming, the author demonstrates how these techniques can improve code clarity, testability, and potentially performance by reducing side effects and encouraging a more declarative style. They also highlight the benefits of type hinting for enhancing readability and catching errors early.
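A minimal sketch of the style the post describes, with invented example data (this is illustrative, not the author's code):

```python
from collections import namedtuple
from functools import reduce

# Immutable record instead of a mutable dict.
Order = namedtuple("Order", ["item", "quantity", "unit_price"])

orders = (
    Order("widget", 3, 2.50),
    Order("gadget", 1, 9.99),
    Order("widget", 2, 2.50),
)

def order_total(order: Order) -> float:
    """Pure function: the result depends only on the input."""
    return order.quantity * order.unit_price

# Declarative pipeline: filter, then map, then fold.
widget_revenue = reduce(
    lambda acc, total: acc + total,
    map(order_total, filter(lambda o: o.item == "widget", orders)),
    0.0,
)
print(widget_revenue)  # 12.5
```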
Commenters on Hacker News largely appreciated the author's journey of incorporating Haskell's functional paradigms into their Python code. Several praised the pragmatic approach, noting that fully switching languages isn't always feasible and that adopting beneficial concepts piecemeal can be highly effective. Some pointed out specific areas where Haskell's influence shines in Python, like using list comprehensions, generators, and immutable data structures for improved code clarity and potentially performance. A few commenters cautioned against overusing functional concepts in Python, emphasizing the importance of readability and maintaining a balance suitable for the project and team. There was also discussion about the performance implications of these techniques, with some suggesting profiling to ensure benefits are realized. Some users shared their own experiences with similar "Haskelling" or "Lisping" of other languages, further demonstrating the appeal of cross-pollinating programming paradigms.
While often derided for its verbosity and perceived outdatedness, Objective-C possesses a unique charm for some developers. Its Smalltalk-inspired message-passing paradigm, dynamic nature, and human-readable syntax foster a sense of playfulness and expressiveness that can be missing in more rigid languages. This article argues that Objective-C's idiosyncrasies, including its use of square brackets and descriptive method names, contribute to a more approachable and understandable coding experience, particularly for those coming from a less technical background. Despite its decline in popularity since Swift's arrival, Objective-C's enduring legacy and distinct character continue to resonate with a dedicated community who appreciate its subjective appeal.
HN commenters largely agree that Objective-C's verbosity, while initially appearing cumbersome, contributes to its readability and maintainability. Several users appreciate the explicit nature of message passing and how it clarifies code intention. Some argue that modern Objective-C, with features like literals and blocks, addresses many of the verbosity complaints. The dynamic nature of the language and the power of its runtime are also highlighted as benefits. A few commenters express nostalgia for Objective-C, contrasting it with Swift, which they perceive as less enjoyable or flexible, despite its modern syntax. There's also a discussion around the challenges of learning Objective-C and the impact of Apple's transition to Swift.
Guy Steele's "Growing a Language" advocates for designing programming languages with extensibility in mind, enabling them to evolve gracefully over time. He argues against striving for a "perfect" initial design, instead favoring a core language with powerful mechanisms for growth, akin to biological evolution. These mechanisms include higher-order functions, allowing users to effectively extend the language themselves, and a flexible syntax capable of accommodating new constructs. Steele emphasizes the importance of "bottom-up" growth, where new features emerge from practical usage and are integrated into the language organically, rather than being imposed top-down by designers. This allows the language to adapt to unforeseen needs and remain relevant as the programming landscape changes.
Hacker News users discuss Guy Steele's "Growing a Language" lecture, focusing on its relevance even decades later. Several commenters praise Steele's insights into language design, particularly his emphasis on evolving languages organically rather than rigidly adhering to initial specifications. The concept of "worse is better" is highlighted, along with a discussion of how seemingly inferior initial designs can sometimes win out due to their adaptability and ease of implementation. The challenge of backward compatibility in evolving languages is also a key theme, with commenters noting the tension between maintaining existing code and incorporating new features. Steele's humor and engaging presentation style are also appreciated. One commenter links to a video of the lecture, while others lament that more modern programming languages haven't fully embraced the principles Steele advocates.
Yes, it's technically still possible to write a plain C iOS app in 2025 (and beyond). While Apple pushes developers towards Swift and SwiftUI, and Objective-C is slowly fading, the underlying iOS APIs are still C-based. This means you can use C, potentially with some Objective-C bridging for UI elements or higher-level functionalities, to create a functional app. However, this approach is significantly harder and less efficient than using Swift or Objective-C, lacking modern tools, libraries, and simplified memory management. Maintaining and updating a C-based iOS app would also be a considerable challenge compared to using more modern, officially supported languages and frameworks. Therefore, while possible, it's not generally recommended for practical development.
Hacker News users discussed the practicality and challenges of writing a plain C iOS app in 2025. Several commenters pointed out that while technically possible, using only C would be incredibly difficult and time-consuming, requiring significant workarounds to interact with essential iOS frameworks (mostly written in Objective-C and Swift). Some suggested leveraging existing C libraries and frameworks like SDL or raylib for cross-platform compatibility and easier graphics handling. Others questioned the motivation behind such an endeavor, given the availability of more suitable languages and tools for iOS development. The general consensus was that while a pure C app is theoretically achievable, it's a highly impractical approach for modern iOS development. Some pointed out that Apple's increasing restrictions on low-level access make a pure C app even more challenging going forward.
This blog post concludes a series exploring functional programming (FP) concepts in Python. The author emphasizes that fully adopting FP in Python isn't always practical or beneficial, but strategically integrating its principles can significantly improve code quality. Key takeaways include favoring pure functions and immutability whenever possible, leveraging higher-order functions like `map` and `filter`, and understanding how these concepts promote testability, readability, and maintainability. While acknowledging Python's inherent limitations as a purely functional language, the series demonstrates how embracing a functional mindset can lead to more elegant and robust Python code.
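To make the takeaway concrete, here is a small sketch (mine, not the series') of a pure function over an immutable value, with type hints:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)  # Immutable: attribute assignment raises after construction.
class Account:
    owner: str
    balance: float

def deposit(account: Account, amount: float) -> Account:
    """Pure: returns a new Account instead of mutating the input."""
    return replace(account, balance=account.balance + amount)

a = Account("ada", 100.0)
b = deposit(a, 50.0)
assert a.balance == 100.0  # The original is untouched, which keeps tests trivial.
assert b.balance == 150.0
```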
HN commenters largely agree with the author's general premise about functional programming's benefits, particularly its emphasis on immutability for managing complexity. Several highlighted the importance of distinguishing between pure and impure functions and strategically employing both. Some debated the practicality and performance implications of purely functional data structures in real-world applications, suggesting hybrid approaches or emphasizing the role of immutability even within imperative paradigms. Others pointed out the learning curve associated with functional programming and the difficulty of debugging complex functional code. The value of FP concepts like higher-order functions and composition was also acknowledged, even if full-blown FP adoption wasn't always deemed necessary. There was some discussion of specific languages and their suitability for functional programming, with Clojure receiving positive mentions.
Haskell offers a powerful and efficient approach to concurrency, leveraging lightweight threads and clear communication primitives. Its unique runtime system manages these threads, enabling high performance without the complexities of manual thread management. Instead of relying on shared mutable state and locks, which are prone to errors, Haskell uses software transactional memory (STM) for safe concurrent data access. This allows developers to write concurrent code that is more composable, easier to reason about, and less susceptible to deadlocks and race conditions. Combined with asynchronous exceptions and other features, Haskell provides a robust and elegant framework for building highly concurrent and parallel applications.
Hacker News users generally praised the article for its clarity and conciseness in explaining Haskell's concurrency model. Several commenters highlighted the elegance of software transactional memory (STM) and its ability to simplify concurrent programming compared to traditional locking mechanisms. Some discussed the practical performance characteristics of STM, acknowledging its overhead but also noting its scalability and suitability for certain workloads. A few users compared Haskell's approach to concurrency with other languages like Clojure and Rust, sparking a brief debate about the trade-offs between different concurrency models. One commenter mentioned the learning curve associated with Haskell but emphasized the long-term benefits of its powerful type system and concurrency features. Overall, the comments reflect a positive reception of the article and a general appreciation for Haskell's approach to concurrency.
"Hacktical C" is a free, online guide to the C programming language aimed at aspiring security researchers and exploit developers. It covers fundamental C concepts like data types, control flow, and memory management, but with a specific focus on how these concepts are relevant to low-level programming and exploitation techniques. The guide emphasizes practical application, featuring numerous code examples and exercises demonstrating buffer overflows, format string vulnerabilities, and other common security flaws. It also delves into topics like interacting with the operating system, working with assembly language, and reverse engineering, all within the context of utilizing C for offensive security purposes.
Hacker News users largely praised "Hacktical C" for its clear writing style and focus on practical application, particularly for those interested in systems programming and security. Several commenters appreciated the author's approach of explaining concepts through real-world examples, like crafting shellcode and exploiting vulnerabilities. Some highlighted the book's coverage of lesser-known C features and quirks, making it valuable even for experienced programmers. A few pointed out potential improvements, such as adding more exercises or expanding on certain topics. Overall, the sentiment was positive, with many recommending the book for anyone looking to deepen their understanding of C and its use in low-level programming.
The author argues that Go channels, while conceptually appealing, often lead to overly complex and difficult-to-debug code in real-world scenarios. They contend that the implicit blocking nature of channels introduces subtle dependencies and makes it hard to reason about program flow, especially in larger projects. Error handling becomes cumbersome, requiring verbose boilerplate and leading to convoluted control structures. Ultimately, the post suggests that callbacks, despite their perceived drawbacks, offer a more straightforward and manageable approach to concurrency, particularly when dealing with complex interactions and error propagation. While channels might be suitable for simple use cases, their limitations become apparent as complexity increases, leading to code that is harder to understand, maintain, and debug.
HN commenters largely disagree with the article's premise. Several point out that the author's examples are contrived and misuse channels, leading to unnecessary complexity. They argue that channels are a powerful tool for concurrency when used correctly, offering simplicity and efficiency in many common scenarios. Some suggest the author's preferred approach of callbacks and mutexes is more error-prone and less readable. A few commenters mention the learning curve associated with channels but acknowledge their benefits once mastered. Others highlight the importance of understanding the appropriate use cases for channels, conceding they aren't a universal solution for every concurrency problem.
Erlang's defining characteristics aren't lightweight processes and message passing, but rather its error handling philosophy. The author argues that Erlang's true power comes from embracing failure as inevitable and providing mechanisms to isolate and manage it. This is achieved through the "let it crash" philosophy, where individual processes are allowed to fail without impacting the overall system, combined with supervisor hierarchies that restart failed processes and maintain system stability. The lightweight processes and message passing are merely tools that facilitate this error handling approach by providing isolation and a means for asynchronous communication between supervised components. Ultimately, Erlang's strength lies in its ability to build robust and fault-tolerant systems.
Hacker News users discussed the meaning and significance of "lightweight processes and message passing" in Erlang. Several commenters argued that the author missed the point, emphasizing that the true power of Erlang lies in its fault tolerance and the "let it crash" philosophy enabled by lightweight processes and isolation. They argued that while other languages might technically offer similar concurrency mechanisms, they lack Erlang's robust error handling and ability to build genuinely fault-tolerant systems. Some commenters pointed out that immutability and the single assignment paradigm are also crucial to Erlang's strengths. A few comments focused on the challenges of debugging Erlang systems and the potential performance overhead of message passing. Others highlighted the benefits of the actor model for concurrency and distribution. Overall, the discussion centered on the nuances of Erlang's design and whether the author adequately captured its core value proposition.
Rust enums can surprisingly be smaller than expected. Naively, one might assume an enum's size is determined by the largest variant plus a discriminant to track which variant is active, but the compiler optimizes this. If an enum's largest variant contains data with internal padding, the discriminant can sometimes be stored within that padding, avoiding an increase in the overall size. This optimization applies even when using `#[repr(C)]` or `#[repr(u8)]`, so long as the layout allows it. Essentially, the compiler cleverly utilizes existing unused space within variants to store the variant tag, minimizing the enum's memory footprint.
Hacker News users discussed the surprising optimization where Rust can reduce the size of an enum if its variants all have the same representation. Some commenters expressed admiration for this detail of the Rust compiler and its potential performance benefits. A few questioned the long-term stability of relying on this optimization, wondering if changes to the enum's variants could inadvertently increase its size in the future. Others delved into the specifics of how this optimization interacts with features like `repr(C)` and niche-filling optimizations. One user linked to a relevant section of the Rust Reference, further illuminating the compiler's behavior. The discussion also touched upon the potential downsides, such as making the generated assembly more complex, and how using `#[repr(u8)]` might offer a more predictable and explicit way to control enum size.
PlanetScale's Vitess project, which uses a Go-based MySQL interpreter, historically lagged behind C++ in performance. Through focused optimization efforts targeting function call overhead, memory allocation, and string conversion, they significantly improved Vitess's speed. By leveraging Go's built-in profiling tools and making targeted changes like using custom map implementations and byte buffers, they achieved performance comparable to, and in some cases exceeding, a similar C++ interpreter. These improvements demonstrate that with careful optimization, Go can be a competitive choice for performance-sensitive applications like database interpreters.
Hacker News users discussed the benchmarks presented in the PlanetScale blog post, expressing skepticism about their real-world applicability. Several commenters pointed out that the microbenchmarks might not reflect typical database workload performance, and questioned the choice of C++ implementation used for comparison. Some suggested that the Go interpreter's performance improvements, while impressive, might not translate to significant gains in a production environment. Others highlighted the importance of considering factors beyond raw execution speed, such as memory usage and garbage collection overhead. The lack of details about the specific benchmarks and the C++ implementation used made it difficult for some to fully assess the validity of the claims. A few commenters praised the progress Go has made, but emphasized the need for more comprehensive and realistic benchmarks to accurately compare interpreter performance.
This post outlines a vision for first-class WebAssembly support in Swift, enabling developers to compile Swift code directly to Wasm for use in web browsers and other Wasm environments. The proposal emphasizes seamless integration with existing JavaScript ecosystems, allowing bidirectional communication between Swift and JavaScript code. It also aims for near-native performance by leveraging Wasm's capabilities, and proposes tools and workflows to simplify the development process, such as automatic generation of JavaScript bindings for Swift code. The ultimate goal is to empower Swift developers to build high-performance web applications and leverage the growing Wasm ecosystem, while maintaining Swift's core values of safety, performance, and expressiveness.
Hacker News users discussed the potential and challenges of Swift for WebAssembly. Some expressed excitement about the prospect of using Swift for frontend development, highlighting its performance and type safety as advantages over JavaScript. Others were more cautious, pointing to the existing maturity of JavaScript and its ecosystem, and questioning whether Swift could gain significant traction. Concerns were raised about the size of Swift compiled output and the integration with existing JavaScript libraries and frameworks. The potential for full-stack Swift development and server-side applications with WebAssembly was also mentioned as a motivating factor. Several users suggested that prioritizing the developer experience and tooling would be crucial for adoption.
Nvidia has introduced native Python support to CUDA, allowing developers to write CUDA kernels directly in Python. This eliminates the need for intermediary languages like C++ and simplifies GPU programming for Python's vast scientific computing community. The new CUDA Python compiler, integrated into the Numba JIT compiler, compiles Python code to native machine code, offering performance comparable to expertly tuned CUDA C++. This development significantly lowers the barrier to entry for GPU acceleration and promises improved productivity and code readability for researchers and developers working with Python.
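The announcement's new API isn't reproduced here; as a rough illustration of writing a CUDA kernel directly in Python, the established Numba CUDA style looks like this (assumes a CUDA-capable GPU plus the `numba` and `numpy` packages):

```python
import numpy as np
from numba import cuda

@cuda.jit  # Compiles this Python function to a native GPU kernel.
def axpy(a, x, y, out):
    i = cuda.grid(1)           # Global thread index.
    if i < x.shape[0]:         # Guard against out-of-range threads.
        out[i] = a * x[i] + y[i]

n = 1 << 20
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

threads = 256
blocks = (n + threads - 1) // threads
axpy[blocks, threads](2.0, x, y, out)  # Launch configuration in brackets.

np.testing.assert_allclose(out, 2.0 * x + y, rtol=1e-6)
```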
Hacker News commenters generally expressed excitement about the simplified CUDA Python programming offered by this new functionality, eliminating the need for wrapper libraries like Numba or CuPy. Several pointed out the potential performance benefits of direct CUDA access from Python. Some discussed the implications for machine learning and the broader Python ecosystem, hoping it lowers the barrier to entry for GPU programming. A few commenters offered cautionary notes, suggesting performance might not always surpass existing solutions and emphasizing the importance of benchmarking. Others questioned the level of "native" support, pointing out that a compiled kernel is still required. Overall, the sentiment was positive, with many anticipating easier and potentially faster CUDA development in Python.
Bill Gates reflects on the recently released Altair BASIC source code, a pivotal moment in Microsoft's history. He reminisces about the challenges and excitement of developing this early software for the Altair 8800 with Paul Allen, including the limited memory constraints and the thrill of seeing it run successfully for the first time. Gates emphasizes the importance of this foundational work, highlighting how it propelled both Microsoft and the broader personal computer revolution forward. He also notes the collaborative nature of early software development and encourages exploration of the code as a window into the past.
HN commenters discuss the historical significance of Microsoft's early source code release, noting its impact on the industry and the evolution of programming practices. Several commenters reminisce about using these early versions of BASIC and DOS, sharing personal anecdotes about their first experiences with computing. Some express interest in examining the code for educational purposes, to learn from the simple yet effective design choices. A few discuss the legal implications of releasing decades-old code, and the potential for discovering hidden vulnerabilities. The challenges of understanding code written with now-obsolete practices are also mentioned. Finally, some commenters speculate on the motivations behind Microsoft's decision to open-source this historical artifact.
JavaScript's "weirdness" often stems from its rapid development and need for backward compatibility. The post highlights quirks like automatic semicolon insertion, the flexible nature of this
, and the unusual behavior of ==
(loose equality) versus ===
(strict equality). These behaviors, while sometimes surprising, are generally explained by the language's design choices and attempts to accommodate various coding styles. The author encourages embracing these quirks as part of JavaScript's identity, understanding the underlying reasons, and leveraging linters and style guides to mitigate potential issues. Ultimately, recognizing these nuances allows developers to write more predictable and less error-prone JavaScript code.
HN users largely agreed with the author's points about JavaScript's quirks, with several sharing their own anecdotes about confusing behavior. Some praised the blog post for clearly articulating frustrations they've felt. A few commenters pointed out that while JavaScript has its oddities, many are rooted in its flexible, dynamic nature, which is also a source of its power and widespread adoption. Others argued that some of the "weirdness" described is common to other languages or simply the result of misunderstanding core concepts. One commenter offered that focusing too much on these quirks distracts from appreciating JavaScript's strengths and suggested embracing the language's unique aspects. There's a thread discussing the performance implications of the `+` operator vs. template literals, and another about the behavior of loose equality (`==`). Overall, the comments reflect a mixture of exasperation and acceptance of JavaScript's idiosyncrasies.
Edsger Dijkstra argues against "natural language programming," believing it a foolish endeavor. He contends that natural language's inherent ambiguity and imprecision make it unsuitable for expressing the rigorous logic required in programming. Instead of striving for superficial readability through natural language, Dijkstra advocates for focusing on developing formal notations and abstractions that are clear, concise, and verifiable, even if they appear less "natural" initially. He emphasizes that programming requires a level of precision and unambiguity that natural language simply cannot provide, and attempting to bridge this gap will ultimately lead to more confusion and less reliable software.
HN commenters generally agree with Dijkstra's skepticism of "natural language programming." Some highlight the ambiguity inherent in natural language as fundamentally incompatible with the precision required for programming. Others point out the success of domain-specific languages (DSLs) as a middle ground, offering a more human-readable syntax without sacrificing clarity. One commenter suggests Dijkstra's critique is more aimed at vague specifications disguised as programs rather than genuinely well-defined natural language programming. Several commenters mention the value of formal methods and mathematical notation for clear program design, echoing Dijkstra's sentiments. A few offer historical context, suggesting the "natural language programming" Dijkstra criticized likely refers to early, overly ambitious attempts, and that modern NLP advancements might warrant revisiting the concept.
The blog post explores how Python code performance can be affected by CPU caching, though less predictably than in lower-level languages like C. Using a matrix transpose operation as an example, the author demonstrates that naive Python code suffers from cache misses due to its row-major memory layout conflicting with the column-wise access pattern of the transpose. While techniques like NumPy's transpose function can mitigate this by leveraging optimized C code under the hood, writing cache-efficient pure Python is difficult due to the interpreter's memory management and dynamic typing hindering fine-grained control. Ultimately, the post concludes that while awareness of caching can be beneficial for Python programmers, particularly when dealing with large datasets, focusing on algorithmic optimization and leveraging optimized libraries generally offers greater performance gains.
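As a rough illustration of the post's point (the matrix-transpose example itself isn't reproduced, and exact timings vary by machine):

```python
import time
import numpy as np

a = np.random.rand(5000, 5000)

def sum_rows(m):
    """Row-major traversal: consecutive elements are adjacent in memory."""
    total = 0.0
    for i in range(m.shape[0]):
        total += m[i, :].sum()
    return total

def sum_cols(m):
    """Column traversal: each step strides a full row, defeating the cache."""
    total = 0.0
    for j in range(m.shape[1]):
        total += m[:, j].sum()
    return total

for f in (sum_rows, sum_cols):
    t0 = time.perf_counter()
    f(a)
    print(f.__name__, time.perf_counter() - t0)
# Row-wise is typically noticeably faster: m[i, :] is a contiguous view,
# while m[:, j] is strided, so the summation pays the memory-access cost.
```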
Commenters on Hacker News largely agreed with the article's premise that Python code, despite its interpreted nature, is affected by CPU caching. Several users provided anecdotal evidence of performance improvements after optimizing code for cache locality, particularly when dealing with large datasets. One compelling comment highlighted that NumPy, a popular Python library, heavily leverages C code under the hood, meaning that its performance is intrinsically linked to memory access patterns and thus caching. Another pointed out that Python's garbage collector and dynamic typing can introduce performance variability, making cache effects harder to predict and measure consistently, but still present. Some users emphasized the importance of profiling and benchmarking to identify cache-related bottlenecks in Python. A few commenters also discussed strategies for improving cache utilization, such as using smaller data types, restructuring data layouts, and employing libraries designed for efficient memory access. The discussion overall reinforces the idea that while Python's high-level abstractions can obscure low-level details, underlying hardware characteristics like CPU caching still play a significant role in performance.
F# offers a compelling blend of functional and object-oriented programming, making it suitable for diverse tasks from scripting and data science to full-fledged applications. Its succinct syntax, strong type system, and emphasis on immutability enhance code clarity, maintainability, and correctness. Features like type inference, pattern matching, and computational expressions streamline development, enabling developers to write concise yet powerful code. While benefiting from the .NET ecosystem and interoperability with C#, F#'s distinct functional-first approach fosters a different, often more elegant, way of solving problems. This translates to improved developer productivity and more robust software.
Hacker News users discuss the merits of F#, often comparing it to other functional languages like OCaml, Haskell, and Clojure. Some commenters appreciate F#'s practicality and ease of use, especially within the .NET ecosystem, highlighting its strong typing and tooling. Others find its functional purity less strict than Haskell's, viewing it as both a benefit (pragmatism) and a drawback (potential for less elegant code). The discussion touches on F#'s suitability for specific domains like data science and web development, with some expressing enthusiasm while others note the prevalence of C# in those areas within the .NET world. Several comments lament the comparatively smaller community and ecosystem surrounding F#, despite acknowledging its technical strengths. The overall sentiment appears to be one of respect for F# but also a recognition of its niche status.
Mads Tofte's "Four Lectures on Standard ML" provides a concise introduction to the core concepts of SML. It covers the fundamental aspects of the language, including its type system with polymorphism and type inference, its support for functional programming with higher-order functions, and its module system for structuring large programs. The lectures emphasize clarity and practicality, demonstrating how these features contribute to writing reliable and reusable code. Examples illustrate key concepts like pattern matching, data structures, and abstract data types. The text aims to provide a solid foundation for further exploration of SML and its applications.
Hacker News users discuss Mads Tofte's "Four Lectures on Standard ML" with appreciation for its clarity and historical context. Several commenters highlight the document as an excellent introduction to ML and type inference, praising its conciseness and accessibility compared to more modern resources. Some note the significance of seeing the language presented shortly after its creation, offering a glimpse into its original design principles. The lack of dependent types is mentioned, with one commenter pointing out that adding them would significantly alter ML's straightforward type inference. Others discuss the influence of ML on later languages like Haskell and OCaml, and the enduring relevance of its core concepts. A few users reminisce about their experiences learning ML and using related tools like SML/NJ.
Inko is a programming language designed for building reliable and efficient concurrent software. It features a static type system with algebraic data types and pattern matching, aiding in catching errors at compile time. Inko's concurrency model leverages actors and message passing to avoid shared memory and the associated complexities of mutexes and locks. This actor-based approach, coupled with automatic memory management via garbage collection, aims to simplify the development of concurrent programs and reduce the risk of data races and other concurrency bugs. Furthermore, Inko prioritizes performance and offers efficient compilation to native code. The language seeks to provide a practical and robust solution for modern concurrent programming challenges.
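Inko's own syntax isn't shown in the summary; as a loose illustration of the actor pattern it describes, here is a minimal Python sketch (the class and message names are invented for the example):

```python
import threading
import queue

class CounterActor:
    """Owns its state; the only way in is via messages on its mailbox."""
    def __init__(self):
        self.mailbox = queue.Queue()
        self.count = 0
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            msg, reply = self.mailbox.get()
            if msg == "increment":
                self.count += 1   # No locks: only this thread touches count.
            elif msg == "get":
                reply.put(self.count)

    def send(self, msg):
        reply = queue.Queue(maxsize=1)
        self.mailbox.put((msg, reply))
        return reply

actor = CounterActor()
for _ in range(1000):
    actor.send("increment")
print(actor.send("get").get())  # 1000: all mutations were serialized.
```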
Hacker News users discussed Inko's features, drawing comparisons to Rust and Pony. Several commenters expressed interest in the actor model and ownership/borrowing system for concurrency. Some questioned Inko's practicality and adoption potential given the existing competition, while others were curious about its performance characteristics and real-world applications. The garbage collection aspect was a point of contention, with some viewing it as a drawback for performance-critical applications. A few users also mentioned their previous experiences with the language, highlighting both positive and negative aspects. There was general curiosity about the language's maturity and the size of its community.
Rivulet is a new esoteric programming language designed to produce visually appealing source code that resembles branching river networks. The language's syntax utilizes characters like `/`, `\`, `|`, and `-` to direct the "flow" of the program, creating tree-like structures. While functionally simple, primarily focused on integer manipulation and output, Rivulet prioritizes aesthetic form over practical utility, offering programmers a way to create visually interesting code art. The resulting programs, when visualized, evoke a sense of natural formations, hence the name "Rivulet."
Hacker News users discussed Rivulet, a language for creating generative art. Several commenters expressed fascination with the project, praising its elegance and the beauty of the generated output. Some discussed the underlying techniques, connecting it to concepts like domain warping and vector fields. Others explored potential applications, such as animating SVGs or creating screensavers. A few commenters compared it to other creative coding tools like Shadertoy and Processing, while others delved into technical aspects like performance optimization and the choice of using JavaScript. There was general interest in understanding the language's syntax and semantics.
The blog post "You Need Subtyping" argues that subtyping, despite sometimes being viewed as complex or unnecessary, is a crucial tool for writing flexible and maintainable code. It emphasizes that subtyping allows for writing generic algorithms that operate on a range of related types without needing modification for each specific type. The author illustrates this through examples using shapes and animal sounds, demonstrating how subtyping enables reusable functions that handle different subtypes without explicit type checks. The post further champions subtype polymorphism as a superior alternative to approaches like typeclasses or enums for handling diverse data types, highlighting its ability to gracefully accommodate future type extensions without altering existing code. Ultimately, the author advocates for embracing subtyping as a fundamental concept for building robust and adaptable software systems.
HN users generally disagreed with the premise that subtyping is needed. Several commenters argued that subtyping adds complexity, especially in larger projects, and that its benefits are often overstated. Alternatives like composition and pattern matching were suggested as potentially superior approaches. Some argued that the author conflated subtyping with polymorphism, while others pointed out that the benefits mentioned in the article, like code reuse and extensibility, could be achieved without subtyping. A few commenters discussed the specific example used in the blog post, highlighting its contrived nature and suggesting better alternatives. The overall sentiment was that subtyping is a tool, sometimes useful, but not a necessity.
The blog post "Zlib-rs is faster than C" demonstrates how the Rust zlib-rs
crate, a wrapper around the C zlib library, can achieve significantly faster decompression speeds than directly using the C library. This surprising performance gain comes from leveraging Rust's zero-cost abstractions and more efficient memory management. Specifically, zlib-rs
uses a custom allocator optimized for the specific memory usage patterns of zlib, minimizing allocations and deallocations, which constitute a significant performance bottleneck in the C version. This specialized allocator, combined with Rust's ownership system, leads to measurable speed improvements in various decompression scenarios. The post concludes that careful Rust wrappers can outperform even highly optimized C code by intelligently managing resources and eliminating overhead.
Hacker News commenters discuss potential reasons for the Rust zlib implementation's speed advantage, including compiler optimizations, different default settings (particularly compression level), and potential benchmark inaccuracies. Some express skepticism about the blog post's claims, emphasizing the maturity and optimization of the C zlib implementation. Others suggest potential areas of improvement in the benchmark itself, like exploring different compression levels and datasets. A few commenters also highlight the impressive nature of Rust's performance relative to C, even if the benchmark isn't perfect, and commend the blog post author for their work. Several commenters point to the use of miniz, a single-file C implementation of zlib, suggesting this may not be a truly representative comparison to zlib itself. Finally, some users provided updates with their own benchmark results attempting to reconcile the discrepancies.
This post explores a shift in thinking about programming languages from individual entities to sets or families of languages. Instead of focusing on a single language's specific features, the author advocates for considering the shared characteristics and relationships between languages within a broader group. This approach involves recognizing core concepts and abstractions that transcend individual syntax, allowing for easier transfer of knowledge and the development of tools that can operate across multiple languages within a set. The author uses examples like the ML language family and the Lisp dialects to illustrate how shared underlying principles can unify seemingly disparate languages, leading to a more powerful and adaptable approach to programming.
The Hacker News comments discuss the concept of "language sets" introduced in the linked gist. Several commenters express skepticism about the practical value and novelty of the idea, questioning whether it genuinely offers advantages over existing programming paradigms like macros, polymorphism, or code generation. Some find the examples unconvincing and overly complex, suggesting simpler solutions could achieve the same results. Others point out potential performance implications and the added cognitive load of managing language sets. However, a few commenters express interest, seeing potential applications in areas like DSL design and metaprogramming, though they also acknowledge the need for further development and clearer examples to demonstrate its usefulness. Overall, the reception is mixed, with many unconvinced but a few intrigued by the possibilities.
C Plus Prolog is a project that embeds a Prolog interpreter within C++ code, allowing for logic programming within a C++ application. It aims to provide a seamless integration where Prolog predicates can be called directly from C++ and vice-versa, enabling the combination of Prolog's declarative power with C++'s performance and imperative features. The project leverages a modified version of SWI-Prolog, a popular open-source Prolog implementation, and offers a bidirectional interface for data exchange between the two languages. This facilitates the development of applications that benefit from both efficient procedural code and the logical reasoning capabilities of Prolog.
Hacker News users discussed the practicality and niche appeal of C Plus Prolog. Some expressed interest in its potential for specific applications like implementing rule engines or program analysis tools, while others questioned the performance implications of embedding Prolog within C++. One commenter suggested that a cleaner approach might involve interfacing Prolog with a language like Rust. Several pointed out the project's age and apparent inactivity, raising concerns about maintainability and documentation. The potential for improved tooling using C++-based IDEs was mentioned as a possible benefit. Overall, the discussion centered around the specialized nature of the project and the trade-offs involved in its approach.
Shopify developed a new type inference algorithm called interprocedural sparse conditional type propagation (ISCTP) for their Ruby codebase. ISCTP significantly improves the performance of Sorbet, their gradual type checker, by more effectively propagating type information across method boundaries and within conditional branches. This addresses the common issue of "union types" exploding in complexity when analyzing code with many branching paths. By selectively tracking only relevant type refinements within each branch, ISCTP dramatically reduces the amount of computation required, resulting in faster type checking and fewer false positives. This improvement enables Shopify to scale their type checking efforts across their large and dynamic Ruby on Rails application.
HN commenters generally expressed interest in Sorbet's type system and its performance improvements. Some questioned the practical impact of these optimizations for most users and the tradeoffs involved. One commenter highlighted the importance of constant propagation and the challenges of scaling static analysis, while another compared Sorbet's approach to similar features in other typed languages. There was also a discussion regarding the specifics of Sorbet's implementation, including its handling of runtime type checks and the implications for performance. A few users expressed curiosity about the "sparse" aspect and how it contributes to the overall efficiency of the system. Finally, one comment pointed out the potential for this optimization to significantly improve code analysis tools and IDE features.
Microsoft is developing a native port of the TypeScript compiler, reimplementing the existing JavaScript-based `tsc` in Go. The port aims to drastically improve compilation speed, with the team reporting builds up to 10x faster than the current compiler, particularly for large projects. Because the new compiler mirrors the existing compiler's architecture and behavior, projects are expected to type-check identically. While still experimental, initial benchmarks show significant improvements. The team is actively working on refining the compiler and invites community feedback as it progresses towards a production-ready release.
Hacker News users discussed the potential impact of a native TypeScript compiler. Some expressed skepticism about the claimed 10x speed improvement, emphasizing the need for real-world benchmarks and noting that compile times aren't always the bottleneck in TypeScript development. Others questioned the long-term viability of the project given Microsoft's previous attempts at native compilation. Several commenters pointed out that JavaScript's dynamic nature presents inherent challenges for ahead-of-time compilation and optimization, and wondered how the project would address issues like runtime type checking and dynamic module loading. There was also interest in whether the native compiler would support features like decorators and reflection. Some users expressed hope that a faster compiler could enable new use cases for TypeScript, like scripting and game development.
Python 3.14 introduces an experimental, limited form of tail-call optimization. While not true tail-call elimination as seen in functional languages, it optimizes specific tail calls within the same frame, significantly reducing stack frame allocation overhead and improving performance in certain scenarios like deeply recursive functions using accumulators. The optimization specifically targets calls where the last operation is a call to the same function and local variables aren't modified after the call. While promising for specific use cases, this optimization does not support mutual recursion or calls in nested functions, and it is currently hidden behind a flag. Performance benchmarks reveal substantial speed improvements, sometimes exceeding 2x, and memory usage benefits, particularly for tail-recursive functions previously prone to exceeding recursion depth limits.
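The flag name and exact conditions are as the post describes and aren't reproduced here; the shape of code the optimization targets is accumulator-style self-recursion, illustrated below:

```python
import sys

def factorial(n: int, acc: int = 1) -> int:
    # The recursive call is the final operation and targets the same
    # function: the pattern the described optimization can reuse a frame for.
    if n <= 1:
        return acc
    return factorial(n - 1, acc * n)

# Without frame reuse, deep inputs exhaust the default recursion limit:
try:
    factorial(sys.getrecursionlimit() + 100)
except RecursionError:
    print("hit the recursion limit without tail-call frame reuse")
```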
HN commenters largely discuss the practical limitations of Python's new tail-call optimization. While acknowledging it's a positive step, many point out that the restriction to self-recursive calls severely limits its usefulness. Some suggest this limitation stems from Python's frame introspection features, while others question the overall performance impact given the existing bytecode overhead. A few commenters express hope for broader tail-call optimization in the future, but skepticism prevails about its wide adoption due to the language's design. The discussion also touches on alternative approaches like trampolining and the cultural preference for iterative code in Python. Some users highlight specific use cases where tail-call optimization could be beneficial, such as recursive descent parsing and certain algorithm implementations, though the consensus remains that the current implementation's impact is minimal.
The paper "Constant-time coding will soon become infeasible" argues that maintaining constant-time implementations for cryptographic algorithms is becoming increasingly challenging due to evolving hardware and software environments. The authors demonstrate that seemingly innocuous compiler optimizations and speculative execution can introduce timing variability, even in carefully crafted constant-time code. These issues are exacerbated by the complexity of modern processors and the difficulty of fully understanding their intricate behaviors. Consequently, the paper concludes that guaranteeing constant-time execution across different architectures and compiler versions is nearing impossibility, potentially jeopardizing the security of cryptographic implementations relying on this property to prevent timing attacks. They suggest exploring alternative mitigation strategies, such as masking and blinding, as more robust defenses against side-channel vulnerabilities.
HN commenters discuss the implications of the research paper, which suggests constant-time programming will become increasingly difficult due to hardware optimizations like speculative execution. Several express concern about the future of cryptography and security-sensitive code, as these rely heavily on constant-time implementations to prevent side-channel attacks. Some doubt the practicality of the attack described, citing existing mitigations and the complexity of exploiting microarchitectural side channels. Others propose software-based defenses, such as using interpreter-based languages, formal verification, or inserting random delays. The feasibility and cost of deploying these mitigations are also debated, with some arguing that the burden will fall disproportionately on developers. There's also skepticism about the paper's claims of "infeasibility," with commenters suggesting that constant-time coding will become more challenging but not impossible.
Summary of Comments (130)
https://news.ycombinator.com/item?id=43731165
Hacker News users discussed the implications of having 15,000 lines of verified cryptography in Python, focusing on the trade-offs between verification and performance. Some expressed skepticism about the practical benefits of formal verification for cryptographic libraries, citing the difficulty of verifying real-world usage and the potential performance overhead. Others emphasized the importance of correctness in cryptography, arguing that verification offers valuable guarantees despite its limitations. The performance costs were debated, with some suggesting that the overhead might be acceptable or even negligible in certain scenarios. Several commenters also discussed the challenges of formal verification in general, including the expertise required and the limitations of existing tools. The choice of Python was also questioned, with some suggesting that a language like OCaml might be more suitable for this type of project.
The Hacker News post titled "15,000 lines of verified cryptography now in Python" (https://news.ycombinator.com/item?id=43731165) sparked a discussion with several insightful comments.
Many commenters expressed enthusiasm for the project, highlighting the importance of verifiable cryptography and the potential benefits of its Python implementation. The accessibility of Python was seen as a key advantage, making this formally verified cryptography more readily available to a wider audience of developers.
A prevalent theme in the discussion revolved around the practicality and performance implications of using verified code in real-world applications. Some users questioned the runtime performance overhead, expressing concerns about the feasibility of using such libraries in performance-sensitive scenarios. Others countered this, arguing that the security benefits might outweigh the potential performance trade-offs, particularly in high-security applications.
Several commenters delved into the technical details of the project, discussing the use of the F* language and its role in the verification process. The challenges of integrating formally verified code with existing Python ecosystems were also mentioned.
Some users raised concerns about the limited scope of the current implementation, noting that 15,000 lines of code represent only a fraction of a complete cryptographic library. However, the author's response clarified that the current focus was on core cryptographic primitives, with plans for future expansion.
The discussion also touched on the broader implications of formal verification in software development. Commenters acknowledged the difficulty and cost of formal verification but also emphasized its potential to significantly enhance software security and reliability.
While generally positive, the comments also expressed a degree of cautious optimism. The novelty and complexity of the project mean its long-term success and adoption remain to be seen. Some users pointed out the need for thorough testing and real-world evaluation to fully assess the project's viability.