A productive monorepo requires careful attention to a few key ingredients. Effective dependency management is crucial: a package manager shared across the repo and explicit dependency declarations keep the build graph clear and reproducible. Automated tooling, especially around testing and code quality (linting, formatting), is essential for maintaining consistency across the projects that share the repo. A well-defined structure, typically organized around bounded contexts or domains, keeps the codebase navigable and prevents it from becoming unwieldy. Finally, continuous integration and deployment (CI/CD) tailored to the monorepo's structure enables efficient, automated builds, tests, and releases of individual projects or the entire repo, maximizing the benefits of the shared codebase.
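As a rough illustration of that last ingredient (not taken from the article), the first step of a monorepo CI pipeline is usually working out which projects a change actually touches. Dedicated tools like Nx or Bazel do this against a real dependency graph; the hypothetical Go sketch below just maps changed file paths to top-level directories, assuming each top-level directory is one project and the script runs inside a git checkout:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List files changed relative to the main branch; the base ref is illustrative.
	out, err := exec.Command("git", "diff", "--name-only", "origin/main...HEAD").Output()
	if err != nil {
		panic(err)
	}

	// Treat each top-level directory as one project / bounded context and
	// collect the set of projects touched by the change.
	affected := map[string]bool{}
	for _, path := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if path == "" {
			continue
		}
		project := strings.SplitN(path, "/", 2)[0]
		affected[project] = true
	}

	// A real pipeline would hand these off to the build tool; here we just print them.
	for project := range affected {
		fmt.Println("needs build/test:", project)
	}
}
```

Real monorepo tooling replaces the per-directory convention with the explicit dependency declarations mentioned above, which is what lets it rebuild only the genuinely affected targets.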
Xtool is a cross-platform command-line tool designed to replace Xcode for building iOS, macOS, watchOS, and tvOS apps. It aims to provide a faster and more flexible build system, particularly for developers working on Linux or Windows. Utilizing Swift's new build system, Xtool offers improved performance and concurrency over Xcode, and simplifies dependency management by leveraging the Swift Package Manager. It supports building for Apple devices via connected hardware or simulators, and while currently experimental, the project actively welcomes community involvement.
Hacker News users discussed Xtool's potential and limitations. Some expressed excitement about cross-platform iOS development, particularly for CI/CD pipelines and those without access to Macs. Others were skeptical about its long-term viability given Apple's control over the iOS ecosystem, questioning whether it could truly replicate Xcode's functionality, especially for debugging and profiling. Concerns were also raised about potential legal challenges from Apple. Several commenters mentioned existing solutions like Flutter and React Native as potentially better alternatives for cross-platform development, although acknowledging Xtool's unique focus on native Swift. The complexity of replicating Xcode's tight integration with Apple's hardware and software was a recurring theme, with some suggesting that a cloud-based macOS solution might be a more practical approach.
"Less Slow C++" offers practical advice for improving C++ build and execution speed. It covers techniques ranging from precompiled headers and unity builds (combining source files) to link-time optimization (LTO) and profile-guided optimization (PGO). It also explores build system optimizations like using Ninja and parallelizing builds, and coding practices that minimize recompilation such as avoiding unnecessary header inclusions and using forward declarations. Finally, the guide touches upon utilizing tools like compiler caches (ccache) and build analysis utilities to pinpoint bottlenecks and further accelerate the development process. The focus is on readily applicable methods that can significantly improve C++ project turnaround times.
Hacker News users discussed the practicality and potential benefits of the "less_slow.cpp" guidelines. Some questioned the emphasis on micro-optimizations, arguing that focusing on algorithmic efficiency and proper data structures is generally more impactful. Others pointed out that the advice seemed tailored for very specific scenarios, like competitive programming or high-frequency trading, where every ounce of performance matters. A few commenters appreciated the compilation of optimization techniques, finding them valuable for niche situations, while some expressed concern that blindly applying these suggestions could lead to less readable and maintainable code. Several users also debated the validity of certain recommendations, like avoiding virtual functions or minimizing branching, citing potential trade-offs with code design and flexibility.
Feldera drastically reduced Rust compile times for a project with over a thousand crates from 30 minutes to 2 minutes by strategically leveraging sccache. They initially tried using a shared volume for the sccache directory but encountered performance issues. The solution involved setting up a dedicated, high-performance sccache server, accessed by developers via SSH, which dramatically improved cache hit rates and reduced compilation times. Additionally, they implemented careful dependency management, reducing unnecessary rebuilds by pinning specific crate versions in a lockfile and leveraging workspaces to manage the many inter-related crates effectively.
HN commenters generally praise the author's work in reducing Rust compile times, while also acknowledging that long compile times remain a significant issue for the language. Several point out that the demonstrated improvement is largely due to addressing a specific, unusual dependency issue (duplicated crates) rather than a fundamental compiler speedup. Some express hope that the author's insights, particularly around dependency management, will contribute to future Rust development. Others suggest additional strategies for improving compile times, such as using sccache and focusing on reducing dependencies in the first place. A few commenters mention the trade-off between compile time and runtime performance, suggesting that Rust's speed often justifies the longer compilation.
Bazel's next generation focuses on improving build performance and developer experience. Key changes include Starlark, a Python-like language for build rules offering more flexibility and maintainability, as well as a transition to a new execution phase, Skyframe v2, designed for increased parallelism and scalability. These upgrades aim to simplify complex build processes, especially for large projects, while also reducing overall build times and improving caching effectiveness through more granular dependency tracking and action invalidation. Additionally, remote execution and caching are being streamlined, further contributing to faster builds by distributing workload and reusing previously built artifacts more efficiently.
Hacker News commenters generally agree that Bazel's remote caching and execution are powerful features, offering significant build speed improvements. Several users shared positive experiences, particularly with large monorepos. Some pointed out the steep learning curve and initial setup complexity as drawbacks, with one commenter mentioning it took their team six months to fully integrate Bazel. The discussion also touched upon the benefits for dependency management and build reproducibility. A few commenters questioned Bazel's suitability for smaller projects, suggesting the overhead might outweigh the advantages. Others expressed interest in alternative build systems like BuildStream and Buck2. A recurring theme was the desire for better documentation and easier integration with various languages and platforms.
This blog post details how to create a statically linked Go executable that utilizes C code, overcoming the challenges typically associated with CGO and external dependencies. The author leverages Zig as a build system and cross-compiler, using its ability to compile C code and link it directly into a Go-compatible archive. This approach eliminates the need for a system C toolchain on the target machine during deployment, producing a truly self-contained binary. The post provides a practical example, guiding the reader through the necessary Zig build script configuration and explaining the underlying principles. This allows for simplified deployment, particularly useful for environments like scratch Docker containers, and offers a more robust and reproducible build process.
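As a hedged sketch of the general shape (the article's own Zig build-script configuration is more involved), a cgo program like the one below can be pointed at Zig's bundled C toolchain by setting the CC environment variable; the musl target and the exact flags are illustrative assumptions, not the post's verbatim commands:

```go
package main

/*
// A tiny piece of C that cgo hands to whatever C compiler CC points at --
// here assumed to be `zig cc`, which can cross-compile and link against musl.
int add(int a, int b) { return a + b; }
*/
import "C"

import "fmt"

func main() {
	// Call the C function through cgo.
	fmt.Println("2 + 3 =", int(C.add(2, 3)))
}
```

Building with something like CGO_ENABLED=1 CC="zig cc -target x86_64-linux-musl" go build (typically adding -ldflags "-linkmode external -extldflags -static" for a fully static link) produces a binary with the C code compiled in and no dependency on a system C library, which is what makes scratch-container deployment straightforward.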
Hacker News users discuss the clever use of Zig as a build tool to statically link C dependencies for Go programs, effectively bypassing the complexities of cgo and resulting in self-contained binaries. Several commenters praise the approach for its elegance and practicality, particularly for cross-compilation scenarios. Some express concern about the potential fragility of relying on undocumented Go internals, while others highlight the ongoing efforts within the Go community to address static linking natively. A few users suggest alternative solutions like using Docker for consistent build environments or exploring fully statically-linked C libraries. The overall sentiment is positive, with many appreciating the ingenuity and potential of this Zig-based workaround.
Debian's "bookworm" release now offers officially reproducible live images. This means that rebuilding the images from source code will result in bit-for-bit identical outputs, verifying the integrity and build process. This achievement, a first for official Debian live images, was accomplished by addressing various sources of non-determinism within the build system, including timestamps, random numbers, and build paths. This increased transparency and trustworthiness strengthens Debian's security posture.
Hacker News commenters generally expressed approval of Debian's move toward reproducible builds, viewing it as a significant step for security and trust. Some highlighted the practical benefits, like easier verification of image integrity and detection of malicious tampering. Others discussed the technical challenges involved in achieving reproducibility, particularly with factors like timestamps and build environments. A few commenters also touched upon the broader implications for software supply chain security and the potential influence on other distributions. One compelling comment pointed out the difference between "bit-for-bit" reproducibility and the more nuanced "content-addressed" approach Debian is using, clarifying that some variation in non-functional aspects is still acceptable. Another insightful comment mentioned the value of this for embedded systems, where knowing exactly what's running is crucial.
Christian Tietze reflects on the "software rake," a metaphor for accumulating small, seemingly insignificant tasks that eventually hinder progress on larger, more important work. He breaks down the rake's "prongs" into categories like maintenance, distractions, context switching, and unexpected issues. These prongs snatch time and attention, creating a sense of being busy but unproductive. Tietze advocates for consciously identifying and addressing these prongs through techniques like timeboxing, focused work sessions, and ruthless prioritization to clear the way for meaningful progress on significant projects.
Hacker News users discussed the various "prongs" of the rake metaphor, agreeing with the author's general premise about complexity in software. Several commenters shared their own experiences wrestling with similar issues, particularly around build systems and dependency management. One pointed out the irony of Rake (the Ruby build tool) itself being a complex build system, while another suggested that embracing complexity is sometimes unavoidable, especially as projects mature. The impact of the "worse is better" philosophy was debated, with some arguing it contributes to the problem and others suggesting it's a pragmatic necessity. A few users highlighted specific prongs they found particularly relevant, including the struggle to maintain compatibility and the pressure to adopt new technologies. Some offered alternative solutions, like focusing on smaller, composable tools and simpler languages, while others emphasized the importance of careful planning and design upfront to mitigate future complexity. There was also discussion about the role of organizational structure and communication in exacerbating these issues.
The blog post details a performance optimization for Nix's evaluation process. By pre-resolving store paths for built-in functions, specifically fetchers, Nix can avoid redundant computations during evaluation, leading to significant speed improvements. This is achieved by introducing a new builtins attribute in the Nix expression language containing pre-computed hashes for commonly used fetchers. This change eliminates the need to repeatedly calculate these hashes during each evaluation, resulting in faster build times, particularly noticeable in projects with many dependencies. The post demonstrates benchmark results showing a substantial reduction in evaluation time with this optimization, highlighting its potential to improve the overall Nix user experience.
Hacker News users generally praised the technique described in the article for improving Nix evaluation performance. Several commenters highlighted the cleverness of pre-computing store paths, noting that it bypasses a significant bottleneck in Nix's evaluation process. Some expressed surprise that this optimization wasn't already implemented, while others discussed potential downsides, like the added complexity to the tooling and the risk of invalidating the cache if the store path changes. A few users also shared their own experiences with Nix performance issues and suggested alternative optimization strategies. One commenter questioned the significance of the improvement in practical scenarios, arguing that derivation evaluation is often not the dominant factor in overall build time.
This video demonstrates the incredibly fast incremental compilation of the Zig self-hosted compiler. A small, seemingly trivial change is made to a source file within the compiler's codebase, and the rebuild completes in around 25 milliseconds. This highlights Zig's efficient build system and its focus on fast iteration times, a key advantage for developer productivity.
Hacker News users generally praised the Zig compiler's fast incremental compilation demonstrated in the video. Several commenters highlighted the impressive speed and how it contributes to a positive developer experience. Some pointed out that while the demo is compelling, real-world project builds with dependencies might not be as instantaneous. Others discussed the potential of Zig's self-hosting capability and build system, comparing it favorably to other languages and build tools. A few users also expressed interest in Zig's memory management and safety features. There was some discussion about the practical limitations of incremental compilation and the importance of understanding its inner workings.
Several Linux distributions, including Arch Linux, Debian, Fedora, and NixOS, are collaborating to improve reproducible builds. This means ensuring that compiling source code results in identical binary packages, regardless of the build environment or timing. This joint effort aims to increase security by allowing independent verification that binaries haven't been tampered with and simplifies debugging by guaranteeing consistent build outputs. The project involves sharing tools and best practices across distributions, improving build reproducibility across different architectures, and working upstream with software developers to address issues that hinder reproducibility.
Hacker News commenters generally expressed support for the reproducible builds initiative, viewing it as a crucial step towards improved security and trustworthiness. Some highlighted the potential to identify malicious code injections, while others emphasized the benefits for debugging and verifying software integrity. A few commenters discussed the practical challenges of achieving reproducible builds across different distributions, citing variations in build environments and dependencies as potential obstacles. One commenter questioned the feasibility of guaranteeing bit-for-bit reproducibility across all architectures, prompting a discussion about the nuances of the goal and the acceptability of minor, non-functional differences. There was also some discussion of existing tooling and the importance of community involvement in driving the project forward.
The Hacker News post discusses whether any programming languages allow specifying package dependencies directly within import or include statements, rather than separately in a dedicated dependency management file. The original poster highlights the potential benefits of this approach, such as improved clarity and ease of understanding dependencies for individual files. They suggest a syntax where version numbers or constraints could be incorporated into the import statement itself. While no existing mainstream languages seem to offer this feature, some commenters mention related concepts like import maps in JavaScript and conditional imports in some languages. The core idea is to make dependency management more localized and transparent at the file level.
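One partial, real-world analogue (my illustration, not something cited in the post) is Go's convention of encoding the major version in the import path itself, seen most literally in the older gopkg.in scheme; the exact minor and patch versions still come from go.mod, so it only gets partway to what the poster describes:

```go
package main

import (
	"fmt"

	// The ".v3" in the path constrains the major version at the import site;
	// the precise release is still resolved through go.mod rather than here.
	yaml "gopkg.in/yaml.v3"
)

func main() {
	out, err := yaml.Marshal(map[string]int{"answer": 42})
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // prints "answer: 42"
}
```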
The Hacker News comments discuss the pros and cons of specifying package requirements directly within import statements. Several commenters appreciate the clarity and explicitness this would bring, as it makes dependencies immediately obvious and reduces the need for separate dependency management files. Others argue against it, citing potential drawbacks like redundancy, increased code verbosity, and difficulties managing complex dependency graphs. Some propose alternative solutions, like embedding version requirements in comments or using language-specific mechanisms for dependency specification. A few commenters mention existing languages or tools that offer similar functionality, such as Nix and Dhall, pointing to these as potential examples or inspiration for how such a system could work. The discussion also touches on the practical implications for tooling and build systems, with commenters considering the impact on IDE integration and compilation processes.
The author recounts their four-month journey building a simplified, in-memory relational database in Rust. Motivated by a desire to deepen their understanding of database internals, they built the project on top of 647 open-source crates, a count that highlights Rust's rich ecosystem. The project, named "Oso," implements core database features like SQL parsing, query planning, and execution, though it omits persistence and advanced functionality. While acknowledging the extensive use of external libraries, the author emphasizes the value of the learning experience and the practical insights gained into database architecture and Rust development. The project served as a personal exploration, prioritizing educational value over production readiness.
Hacker News commenters discuss the irony of the blog post title, pointing out the potential hypocrisy of criticizing open-source reliance while simultaneously utilizing it extensively. Some argued that using numerous dependencies is not inherently bad, highlighting the benefits of leveraging existing, well-maintained code. Others questioned the author's apparent surprise at the dependency count, suggesting a naive understanding of modern software development practices. The feasibility of building a complex project like a database in four months was also debated, with some expressing skepticism and others suggesting it depends on the scope and pre-existing knowledge. Several comments delve into the nuances of Rust's compile times and dependency management. A few commenters also brought up the licensing implications of using numerous open-source libraries.
HN commenters largely agree with the author's points on the importance of good tooling for a successful monorepo. Several users share their positive experiences with Nx, echoing the author's recommendation. Some discuss the tradeoffs between a monorepo and many separate repos, with a few highlighting the increased complexity and potential for slower build times in a monorepo setup, particularly with JavaScript projects. Others point to the value of clear code ownership and modularity, regardless of the repository structure. One commenter suggests Bazel as an alternative build tool and another recommends exploring Pants v2. A couple of users mention that "productive" is subjective and emphasize the importance of adapting the approach to the specific team and project needs.
The Hacker News post titled "The Ingredients of a Productive Monorepo" (https://news.ycombinator.com/item?id=44086917) sparked a discussion with several insightful comments. Many users shared their experiences and opinions on monorepo tooling and best practices.
One compelling comment thread discussed the importance of a fast build system, with one user emphasizing that a monorepo is only as good as its build system allows it to be. This led to a discussion of various build systems like Bazel and Buck, and how they address the challenges of scaling builds in a large monorepo. Some users shared their positive experiences with these tools, highlighting features like remote caching and fine-grained dependency management. Others cautioned against the complexity of setting up and maintaining these systems, suggesting simpler alternatives might be more appropriate for smaller projects.
Another key discussion revolved around code sharing and discoverability within a monorepo. One user suggested that clear conventions and strong documentation are essential for navigating a large codebase. Another pointed out that the ease of code sharing can be a double-edged sword, potentially leading to unwanted dependencies and tighter coupling between components if not managed carefully. The idea of "bounded contexts" was brought up as a way to mitigate this risk, encouraging developers to think carefully about module boundaries and dependencies.
Several comments touched on the cultural aspects of adopting a monorepo. One user argued that a successful monorepo requires a strong engineering culture that values collaboration and code ownership. Another emphasized the importance of clear communication and shared understanding of the monorepo's structure and conventions.
Finally, the topic of tooling support for refactoring and dependency management was also discussed. Users highlighted the benefits of automated tools for tasks like renaming symbols and updating imports across the entire codebase, while others pointed out that the complexity of these tools can be a barrier to entry.
In summary, the comments on the Hacker News post offer a valuable perspective on the practical considerations of implementing and maintaining a productive monorepo, covering topics ranging from build systems and tooling to code organization and engineering culture. The discussion highlights both the potential benefits and the challenges of adopting a monorepo approach, providing valuable insights for anyone considering this architectural pattern.