This post discusses a common problem in game physics: preventing jittering and instability in stacked rigid bodies. It introduces a technique called "speculative contacts," where potential collisions in the next physics step are predicted and pre-emptively resolved. This allows for stable stacking by ensuring the bodies are prepared for contact, rather than reacting impulsively after penetration occurs. The post emphasizes the improved stability and visual quality this method offers compared to traditional solutions like increasing solver iterations, which are computationally expensive. It also highlights the importance of efficiently identifying potential contacts to maintain performance.
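A minimal one-dimensional sketch of the idea (the names and numbers here are illustrative, not the post's code): before integrating, check whether the body's velocity would carry it past a nearby surface within the next step, and clamp the velocity so the body arrives exactly at contact instead of tunnelling through.

```python
# Minimal 1-D speculative contact: before integrating, clamp the approach
# velocity so the body can reach, but never penetrate, the surface.
def speculative_contact(position, velocity, ground=0.0, dt=1.0 / 60.0):
    distance = position - ground          # gap to the potential contact
    if velocity < 0 and -velocity * dt > distance:
        velocity = -distance / dt         # arrive exactly at the surface
    return position + velocity * dt, velocity

# A body 0.01 units above the ground, falling fast: it lands without tunnelling.
pos, vel = speculative_contact(position=0.01, velocity=-5.0)
```

Real implementations apply this per contact pair against a time-of-impact estimate, but the clamp is the essence of why stacks stay stable: bodies never penetrate, so the solver never has to push them back out after the fact.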
The blog post details the author's successful porting of both Terraria and Celeste to WebAssembly using a custom C# runtime built upon Mono. This allows both games to run directly within a web browser without plugins or installation. The author highlights the challenges encountered, particularly with handling graphics and input, and explains the solutions implemented, such as utilizing SDL2 and Emscripten. While performance isn't yet perfect, particularly with Terraria's more complex world generation, both games are demonstrably playable in the browser, showcasing the potential of WebAssembly for running demanding applications.
HN users discussed the technical challenges and successes of porting games to WebAssembly. Several commenters praised the developer's work, particularly the performance achieved with Celeste, noting it felt native. Some discussed the complexities of handling game inputs, audio, and file system access within the browser environment. A few users expressed interest in the potential of WASM for game development, seeing it as a viable platform for distributing and playing games without installations. Others shared their experiences with similar projects and offered suggestions for optimization. The legality of distributing ROMs online was also briefly touched upon.
Wall Go is a browser-based recreation of the "Wall" minigame from the Korean flash game "Devil's Plan 2." Players control a character who must dodge incoming walls by moving left or right across the screen. The game features increasing difficulty, simple controls, and a retro aesthetic reminiscent of the original flash game.
HN commenters were generally impressed with the Wall Go implementation, praising the developer for their attention to detail in recreating the original mini-game's feel and difficulty. Some users reminisced about playing Devil's Plan 2, while others suggested improvements like difficulty settings, different maze sizes, or a "rewind" feature. A few commenters discussed the original game's logic and optimal strategies, including pre-calculating moves based on the predictable enemy patterns. The overall sentiment was positive, with many appreciating the nostalgic throwback and well-executed browser version.
"The Level Design Book" is a free, collaborative, online resource dedicated to the craft of video game level design. It covers a broad range of topics, from foundational principles like spatial design and gameplay flow to more advanced concepts such as scripting and lighting. The book aims to be a practical guide for aspiring and professional level designers, offering concrete advice, examples, and resources to help them create compelling and engaging player experiences. It emphasizes the importance of understanding player psychology, clear communication, and iterative design. The project welcomes contributions from experienced designers, encouraging the ongoing growth and evolution of the resource as best practices and technologies change.
Commenters on Hacker News largely praised "The Level Design Book," highlighting its comprehensive nature and practical approach. Several expressed excitement about finally having a dedicated resource for level design, noting the scarcity of quality materials on the topic. Some appreciated the book's focus on fundamental principles applicable across genres and its avoidance of becoming overly software-specific. A few commenters with professional experience in game development vouched for the author's expertise and the book's relevance to industry practices. Some discussion revolved around the book's price, with some finding it reasonable while others hesitated due to the cost. Finally, several users expressed interest in the book's coverage of accessibility and inclusivity in level design.
Older games often achieve a lasting appeal that many modern titles lack, due to a combination of factors. Simpler designs and smaller scopes meant more focus on core gameplay loops, which fosters replayability and allows communities to master and explore the mechanics in depth, even creating their own content through modding. Modern games, burdened by larger budgets, often prioritize graphics and complex systems that can detract from engaging core gameplay and become outdated quickly. Additionally, live service models with ongoing updates and microtransactions can fracture communities and make it difficult to revisit older versions, effectively killing the game as it existed at launch. These older, simpler games remain accessible and enjoyable precisely because they are complete and unchanging experiences.
HN users generally agreed with the premise that older games are more replayable, citing factors like simpler design focusing on core gameplay loops, and a lack of aggressive monetization schemes. Some argued that "new" in the title really meant AAA games with bloated budgets and feature creep, contrasting them with indie games which often capture the spirit of older titles. Several commenters highlighted the importance of moddability and community-driven content in extending the lifespan of older games. Others pointed out the nostalgia factor and the rose-tinted glasses through which older games are viewed, acknowledging that many releases from the past were simply forgotten. A few dissenting voices argued that newer games also have staying power, especially in genres like strategy and grand strategy, suggesting the author's generalization was too broad.
John Carmack's talk at Upper Bound 2025 focused on the complexities of AGI development. He highlighted the immense challenge of bridging the gap between current AI capabilities and true general intelligence, emphasizing the need for new conceptual breakthroughs rather than just scaling existing models. Carmack expressed concern over the tendency to overestimate short-term progress while underestimating long-term challenges, advocating for a more realistic approach to AGI research. He also discussed potential risks associated with increasingly powerful AI systems.
HN users discuss John Carmack's 2012 talk on "Independent Game Development." Several commenters reminisce about Carmack's influence and clear communication style. Some highlight his emphasis on optimization and low-level programming as key to achieving performance, particularly in resource-constrained environments like mobile at the time. Others note his advocacy for smaller, focused teams and "lean methodologies," contrasting it with the bloat they perceive in modern game development. A few commenters mention specific technical insights they gleaned from Carmack's talks or express disappointment that similar direct, technical presentations are less common today. One user questions whether Carmack's approach is still relevant given advancements in hardware and tools, sparking a debate about the enduring value of optimization and the trade-offs between performance and developer time.
The blog post argues against the prevalent "architectural" approach to level design in games, where spaces are treated as interconnected rooms separated by walls. This approach, often dictated by level editors, limits creativity and leads to predictable, boxy environments. The author advocates for a more "sculptural" approach, emphasizing the continuous nature of 3D space and using tools that allow for more organic shaping of the environment. This shift would enable the creation of more immersive and surprising levels that move beyond the limitations of traditional room-based design.
Several Hacker News commenters agreed with the author's premise that over-architecting systems leads to rigidity and difficulty in adapting to change. Some discussed the challenges of balancing upfront design with emergent design, emphasizing the importance of iterative development and refactoring. One commenter highlighted the value of "Worse is Better" as a design philosophy, suggesting that a simpler, less perfect initial design that can be improved over time is often preferable to a complex, "perfect" design that is difficult to change. Another pointed out the connection to Conway's Law, noting how organizational structure influences system design, and how decentralization can lead to more organic, adaptable systems. The idea of "fitness functions" for system design also arose, with commenters suggesting that defining clear goals and metrics is crucial for effective evolution of a system. A few commenters offered practical examples of how they had encountered and addressed these issues in their own work.
90s.dev is a web-based game maker designed to evoke the look and feel of classic DOS games. It offers a simplified development environment with a drag-and-drop interface for placing sprites and backgrounds, along with a scripting language reminiscent of older programming styles. The platform aims to make game development accessible to beginners while providing a nostalgic experience for seasoned developers. Created games can be played directly in the browser and shared easily online.
Commenters on Hacker News largely praised 90s.dev for its nostalgic appeal and ease of use, with several comparing it favorably to simpler, pre-Unity game development environments like Klik & Play. Some expressed excitement for its potential as a teaching tool, particularly for introducing children to programming concepts. A few users questioned the long-term viability of the project given its reliance on a custom runtime, while others offered suggestions for improvements like mobile support, local storage, and improved documentation. The discussion also touched upon the challenges of web-based game development, including performance and browser compatibility. Several commenters shared their own experiences with similar projects or reminisced about the golden age of shareware games.
The author envisions a future (2025 and beyond) where creating video games without a traditional game engine becomes increasingly viable. This is driven by advancements in web technologies like WebGPU, which offer native performance, and readily available libraries handling complex tasks like physics and rendering. Combined with the growing accessibility of AI tools for asset creation and potentially even gameplay logic, the barrier to entry for game development lowers significantly. This empowers smaller teams and individual developers to bring their unique game ideas to life, focusing on creativity rather than wrestling with complex engine setup and low-level programming. This shift mirrors the transition seen in web development, moving from manual HTML/CSS/JS to higher-level frameworks and tools.
Hacker News users discussed the practicality and appeal of the author's approach to game development. Several commenters questioned the long-term viability of building and maintaining custom engines, citing the significant time investment and potential for reinventing the wheel. Others expressed interest in the minimalist philosophy, particularly for smaller, experimental projects where creative control is paramount. Some pointed out the existing tools like raylib and Love2D that offer a middle ground between full-blown engines and building from scratch. The discussion also touched upon the importance of understanding underlying principles, regardless of the chosen tools. Finally, some users debated the definition of a "game engine" and whether the author's approach qualifies as engine-less.
The Nintendo 64, despite its limited color palette, employed clever tricks to create dynamic lighting effects. Rather than calculating light values per pixel, developers dynamically shifted the colors within a palette itself, changing the overall color ramps assigned to textures to give the illusion of light and shadow moving across surfaces. This technique was often combined with vertex shading, allowing for smooth gradients across polygons. By strategically updating palettes, they simulated various lighting conditions, including time-of-day changes and colored light sources, while conserving precious processing power and memory.
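The trick can be illustrated with a toy palette (this is a sketch of the concept, not N64 code): the texture's color indices never change, only the handful of palette entries they point at.

```python
# Toy illustration of palette-based lighting: instead of relighting every
# pixel, darken the few palette entries a texture indexes into.
def shade_palette(palette, brightness):
    """Scale each (r, g, b) entry; the texture's indices stay untouched."""
    return [tuple(min(255, int(c * brightness)) for c in rgb) for rgb in palette]

palette = [(255, 200, 150), (128, 100, 75), (64, 50, 37)]
dusk = shade_palette(palette, 0.5)   # every texel using this palette dims at once
```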
Hacker News users discuss various aspects of the N64's rendering techniques. Several commenters express fascination with the creativity and ingenuity required to achieve impressive lighting effects within the console's limited hardware capabilities. Some highlight the clever use of vertex colors and dithering patterns to simulate complex lighting scenarios. Others note the importance of understanding the N64's architecture and the interplay between the Reality Coprocessor (RCP) and the central processing unit (CPU). One commenter points out the impact these techniques had on the overall aesthetic of N64 games, contributing to their distinctive look and feel. Another emphasizes the value of articles like this in preserving and disseminating knowledge about older hardware and software techniques. Several users share personal anecdotes about their experiences with N64 development and their admiration for the developers who pushed the console's limits.
Jason Thorsness's blog post "Tower Defense: Cache Control" uses the analogy of tower defense games to explain how caching improves website performance. Just like strategically placed towers defend against incoming enemies, various caching layers intercept requests for website assets (like images and scripts), preventing them from reaching the origin server. These layers, including browser cache, CDN, and server-side caching, progressively filter requests, reducing server load and latency. Each layer has its own "rules of engagement" (cache-control headers) dictating how long and under what conditions resources are stored and reused, optimizing the delivery of content and improving the overall user experience.
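The layering can be sketched as a chain of caches, each consulted in turn (the class and names below are illustrative, not from the post):

```python
import time

# Toy model of layered caching: each layer keeps a response for its own TTL,
# and only misses fall through toward the origin server.
class CacheLayer:
    def __init__(self, name, ttl, upstream):
        self.name, self.ttl, self.upstream = name, ttl, upstream
        self.store = {}

    def get(self, url):
        entry = self.store.get(url)
        if entry and time.monotonic() - entry[0] < self.ttl:
            return entry[1], self.name                 # hit at this layer
        body, source = self.upstream(url)              # miss: ask the next layer
        self.store[url] = (time.monotonic(), body)
        return body, source

origin = lambda url: (f"body of {url}", "origin")      # the "base" being defended
cdn = CacheLayer("cdn", ttl=3600, upstream=origin)
browser = CacheLayer("browser", ttl=60, upstream=cdn.get)

_, first = browser.get("/app.js")    # nothing cached yet: reaches the origin
_, second = browser.get("/app.js")   # answered by the browser cache
```

The differing TTLs play the role of cache-control headers: each layer has its own rules for how long a stored response stays valid.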
Hacker News users discuss the blog post about optimizing a Tower Defense game using aggressive caching and precomputation. Several commenters praise the author's in-depth analysis and clear explanations, particularly the breakdown of how different caching strategies impact performance. Some highlight the value of understanding fundamental optimization techniques even in the context of a seemingly simple game. Others offer additional suggestions for improvement, such as exploring different data structures or considering the trade-offs between memory usage and processing time. One commenter notes the applicability of these optimization principles to other domains beyond game development, emphasizing the broader relevance of the author's approach. Another points out the importance of profiling to identify performance bottlenecks, echoing the author's emphasis on data-driven optimization. A few commenters share their own experiences with similar optimization challenges, adding practical perspectives to the discussion.
React Three Fiber (R3F) is a React renderer for Three.js, bringing declarative, component-based development to 3D web experiences. It simplifies complex Three.js code, allowing developers to create and compose 3D scenes using familiar React patterns. The broader React Three ecosystem, built around R3F, provides additional tools and libraries, such as Drei for commonly used helpers and effects and use-cannon for physics simulations, along with curated examples and templates. This ecosystem aims to lower the barrier to entry for web-based 3D graphics and empowers developers to build immersive experiences with greater ease and efficiency.
Hacker News users generally expressed enthusiasm for React Three Fiber (R3F) and its ecosystem, praising its ease of use compared to Three.js directly, and its ability to bridge the gap between declarative React and the imperative nature of WebGL. Several commenters highlighted the practical benefits of using R3F, including faster prototyping and improved developer experience. Some discussed the potential of Drei, a helper library for R3F, for simplifying complex tasks and reducing boilerplate code. Performance concerns were also raised, with some questioning the overhead of React in 3D rendering, while others argued that R3F's optimizations mitigate these issues in many cases. A few users mentioned other relevant libraries like react-babylonjs and wondered about their comparative strengths and weaknesses. Overall, the sentiment was positive, with many commenters excited about the future of R3F and its potential to democratize 3D web development.
The blog post "15 Years of Shader Minification" reflects on the evolution of techniques to reduce shader code size, crucial for performance in graphics programming. Starting with simple regex-based methods, the field progressed to more sophisticated approaches leveraging abstract syntax trees (ASTs) and dedicated tools like Shader Minifier and GLSL optimizer. The author emphasizes the importance of understanding GLSL semantics for effective minification, highlighting challenges like varying precision and cross-compiler quirks. The post concludes with a look at future directions, including potential for machine learning-based optimization and the increasing complexity posed by newer shader languages like WGSL.
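The "simple regex-based methods" the post starts from can be approximated in a few lines (a deliberately naive sketch; real tools like Shader Minifier work on an AST precisely because rules like these break on edge cases):

```python
import re

# Naive regex-based minifier: strip comments, collapse whitespace, and tighten
# spacing around punctuation. Fragile by design -- it illustrates the baseline.
def minify(src):
    src = re.sub(r"//[^\n]*|/\*.*?\*/", "", src, flags=re.S)     # drop comments
    src = re.sub(r"\s+", " ", src)                               # collapse whitespace
    return re.sub(r"\s*([{}();,=+*/-])\s*", r"\1", src).strip()  # tighten punctuation

shader = """
// trivial fragment shader
void main() {
    gl_FragColor = vec4(1.0);  /* opaque white */
}
"""
minified = minify(shader)  # "void main(){gl_FragColor=vec4(1.0);}"
```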
HN users discuss the challenges and intricacies of shader minification, reflecting on its evolution over 15 years. Several commenters highlight the difficulty in optimizing shaders due to the complex interplay between hardware, drivers, and varying precision requirements. The effectiveness of minification is questioned, with some arguing that perceived performance gains often stem from improved compilation or driver optimizations rather than the minification process itself. Others point out the importance of considering the specific target hardware and the potential for negative impacts on precision and stability. The discussion also touches upon the trade-offs between shader size and readability, with some suggesting that smaller shaders aren't always faster and can be harder to debug. A few commenters share their experiences with specific minification tools and techniques, while others lament the lack of widely adopted best practices and the ongoing need for manual optimization.
The author criticizes Unity's decision to ban the VLC library from its Asset Store while simultaneously utilizing and profiting from other open-source projects like LLVM and Mono. They argue that Unity's justification for the ban, citing VLC's GPLv2 license incompatibility with their terms of service, is hypocritical. The author points out that Unity's own products benefit from GPLv2-licensed software, suggesting the ban is motivated by competitive concerns, specifically the potential disruption VLC's inclusion could have on their own video player offering. This selective enforcement of licensing terms, according to the author, reveals a double standard regarding open source and demonstrates a prioritization of profit over community contributions.
The Hacker News comments discuss Unity's seemingly contradictory stance on open source, banning VLC while simultaneously using open-source software themselves. Several commenters point out the potential hypocrisy, questioning whether Unity truly understands open-source licensing. Some suggest the ban might stem from VLC's GPL license, which could obligate Unity to open-source their own engine if they bundled it. Others speculate about practical reasons for the ban, like avoiding potential legal issues arising from VLC's broad codec support, or preventing users from easily ripping game assets. A few defend Unity, arguing that they are within their rights to control their platform and that the GPL's implications can be challenging for businesses to navigate. There's also discussion about the lack of clarity from Unity regarding their reasoning, which fuels speculation and distrust within the community. Finally, some commenters express concern over the precedent this sets, worrying that other closed-source platforms might adopt similar restrictions on open-source software.
Terry Cavanagh has released the source code for his popular 2D puzzle platformer, VVVVVV, under the MIT license. The codebase, primarily written in C++, includes the game's source, assets, and build scripts for various platforms. This release allows anyone to examine, modify, and redistribute the game, fostering learning and potential community-driven projects based on VVVVVV.
HN users discuss the VVVVVV source code release, praising its cleanliness and readability. Several commenters highlight the clever use of fixed-point math and admire the overall simplicity and elegance of the codebase, particularly given the game's complexity. Some share their experiences porting the game to other platforms, noting the ease with which they were able to do so thanks to the well-structured code. A few commenters express interest in studying the game's level design and collision detection implementation. There's also a discussion about the use of SDL and the challenges of porting older C++ code, with some reflecting on the game development landscape of the time. Finally, several users express appreciation for Terry Cavanagh's work and the decision to open-source the project.
This post proposes a taxonomy for classifying rendering engines based on two key dimensions: the scene representation (explicit vs. implicit) and the rendering technique (rasterization vs. ray tracing). Explicit representations, like triangle meshes, directly define the scene geometry, while implicit representations, like signed distance fields, define the scene mathematically. Rasterization projects scene primitives onto the screen, while ray tracing simulates light paths to determine pixel colors. The taxonomy creates four categories: explicit/rasterization (traditional real-time graphics), explicit/ray tracing (becoming increasingly common), implicit/rasterization (used for specific effects and visualizations), and implicit/ray tracing (offering unique capabilities but computationally expensive). The author argues this framework provides a clearer understanding of rendering engine design choices and future development trends.
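The implicit/ray tracing quadrant can be made concrete with a tiny sphere-tracing sketch (illustrative only): the scene is a signed distance function, and a ray marches forward by the safe distance the SDF guarantees until it reaches the surface.

```python
import math

# Implicit scene + ray tracing: the geometry is a signed distance function,
# and sphere tracing steps along the ray by the SDF value until it hits.
def scene_sdf(p):
    # implicit scene: a unit sphere at the origin
    return math.sqrt(sum(c * c for c in p)) - 1.0

def sphere_trace(origin, direction, max_steps=64, eps=1e-4):
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = scene_sdf(p)
        if d < eps:
            return t          # hit: distance along the ray
        t += d                # safe to advance by the SDF value
    return None               # miss

hit = sphere_trace((0.0, 0.0, -3.0), (0.0, 0.0, 1.0))  # sphere surface at t = 2
```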
Hacker News users discuss the proposed taxonomy for rendering engines, mostly agreeing that it's a useful starting point but needs further refinement. Several commenters point out the difficulty of cleanly categorizing existing engines due to their hybrid approaches and evolving architectures. Specific suggestions include clarifying the distinction between "tiled" and "immediate" rendering, addressing the role of compute shaders, and incorporating newer deferred rendering techniques. The author of the taxonomy participates in the discussion, acknowledging the feedback and indicating a willingness to revise and expand upon the initial classification. One compelling comment highlights the need to consider the entire rendering pipeline, rather than just individual stages, to accurately classify an engine. Another insightful comment points out that focusing on data structures, like the use of a G-Buffer, might be more informative than abstracting to rendering paradigms.
MTerrain is a Godot Engine plugin offering a highly optimized terrain system with a dedicated editor. It uses a chunked LOD approach for efficient rendering of large terrains, supporting features like splatmaps (texture blending) and customizable shaders. The editor provides tools for sculpting, painting, and object placement, enabling detailed terrain creation within the Godot environment. Performance is a key focus, leveraging multi-threading and optimized mesh generation for smooth gameplay even with complex terrains. The plugin aims to be user-friendly and integrates seamlessly with Godot's existing workflows.
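Chunked LOD selection itself is simple to sketch (the thresholds below are invented for illustration; this is not MTerrain's code): each chunk picks a mesh resolution from its distance to the camera, so triangle count stays bounded however large the terrain.

```python
# Generic chunked-LOD selection: nearby chunks get full-detail meshes, distant
# chunks get progressively coarser ones. Thresholds are made up for the sketch.
def lod_for_chunk(chunk_center, camera, thresholds=(32.0, 64.0, 128.0)):
    dist = max(abs(a - b) for a, b in zip(chunk_center, camera))  # Chebyshev distance
    for lod, limit in enumerate(thresholds):
        if dist < limit:
            return lod        # 0 = full detail
    return len(thresholds)    # coarsest mesh beyond the last threshold

camera = (0.0, 0.0)
lods = [lod_for_chunk((x, 0.0), camera) for x in (10.0, 40.0, 100.0, 500.0)]
# → [0, 1, 2, 3]
```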
The Hacker News comments express general enthusiasm for the MTerrain Godot plugin, praising its performance improvements over Godot's built-in terrain system. Several commenters highlight the value of open-source contributions like this, especially for game engines like Godot. Some discuss the desire for improved terrain tools in Godot and express hope for this project's continued development and potential integration into the core engine. A few users raise questions about specific features, like LOD implementation and performance comparisons with other engines like Unity, while others offer suggestions for future enhancements such as better integration with Godot's built-in systems and the addition of features like holes and caves. One commenter mentions having used the plugin successfully in a personal project, offering a positive firsthand account of its capabilities.
Warren Robinett's Adventure, released in 1979 (not 1980 as the title suggests), for the Atari 2600, is a groundbreaking game considered the first action-adventure and the first to feature an "Easter egg" – Robinett's hidden signature. Developed despite Atari's policy of not crediting programmers, Adventure's simple graphics represented a fantasy world where players retrieved a jeweled chalice while navigating mazes, battling dragons, and interacting with objects like keys and bridges. Its open-world gameplay and multiple screens were innovative for the time, significantly influencing later game design. The game's success helped legitimize the role of programmers and contributed to the rise of the video game industry.
Commenters on Hacker News discussed the ingenuity of Warren Robinett hiding his name in the game "Adventure" given the corporate culture at Atari at the time, which didn't credit developers. Some recalled their childhood experiences discovering the Easter egg and the sense of mystery it evoked. Others debated the impact of "Adventure" on gaming history, with some arguing its significance in popularizing the action-adventure genre and others highlighting its technical achievements given the 2600's limitations. A few commenters also shared personal anecdotes about working with or meeting Robinett. One commenter even linked a video showing how to trigger the Easter egg.
Inspired by the HD-2D art style of Octopath Traveler II, a developer created their own pixel art editor. The editor, written in TypeScript and using HTML Canvas, focuses on recreating the layered sprite effect seen in the game, allowing users to create images with multiple light sources and apply depth effects to individual pixels. The project is open-source and available on GitHub, and the developer welcomes feedback and contributions.
Several commenters on the Hacker News post praise the pixel art editor's clean UI and intuitive design. Some express interest in the underlying technology and ask about the framework used (Godot 4). Others discuss the challenges of pixel art, particularly around achieving a consistent style and the benefits of using dedicated tools. A few commenters share their own experiences with pixel art and recommend other software or resources. The developer actively engages with commenters, answering questions about the editor's features, planned improvements (including animation support), and the inspiration drawn from Octopath Traveler II's distinct HD-2D style. There's also a short thread discussing the merits of different dithering algorithms.
WorldGen is an open-source Python library for procedurally generating 3D scenes. It aims to be versatile, supporting various use cases like game development, VR/XR experiences, and synthetic data generation. Users define scenes declaratively using a YAML configuration file, specifying elements like objects, materials, lighting, and camera placement. WorldGen boasts a modular and extensible design, allowing for the integration of custom object generators and modifiers. It leverages Blender as its rendering backend, exporting scenes in common 3D formats.
Hacker News users generally praised WorldGen's potential and its open-source nature, viewing it as a valuable tool for game developers, especially beginners or those working on smaller projects. Some expressed excitement about the possibilities for procedural generation and the ability to create diverse and expansive 3D environments. Several commenters highlighted specific features they found impressive, such as the customizable parameters, real-time editing, and export compatibility with popular game engines like Unity and Unreal Engine. A few users questioned the performance with large and complex scenes, and some discussed potential improvements, like adding more biomes or improving the terrain generation algorithms. Overall, the reception was positive, with many eager to experiment with the tool.
This tutorial demonstrates building a basic text adventure game in C. It starts with a simple framework using printf and scanf for output and input, focusing on creating a game loop that processes player commands. The tutorial introduces core concepts like managing game state with variables, handling different actions (like "look" and "go") with conditional statements, and defining rooms with descriptions. It emphasizes a step-by-step approach, expanding the game's functionality by adding new rooms, objects, and interactions through iterative development. The example uses simple string comparisons to interpret player commands and a rudimentary structure to represent the game world. The tutorial prioritizes clear explanations and aims to be an accessible introduction to game programming in C.
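The loop structure the tutorial builds is language-agnostic; sketched here in Python (the rooms and commands are hypothetical, not the tutorial's code): rooms as data, a state variable for the current room, and string comparison to dispatch commands.

```python
# Sketch of a text-adventure core loop: rooms as data, game state as the
# current room name, and plain string comparison to interpret commands.
rooms = {
    "hall": {"desc": "A dusty hall. Exits: north.", "exits": {"north": "library"}},
    "library": {"desc": "Shelves of old books. Exits: south.", "exits": {"south": "hall"}},
}

def step(state, command):
    room = rooms[state]
    if command == "look":
        return state, room["desc"]
    if command.startswith("go "):
        dest = room["exits"].get(command[3:])
        if dest:
            return dest, rooms[dest]["desc"]
    return state, "You can't do that."

state, output = step("hall", "go north")   # now in the library
```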
Commenters on Hacker News largely praised the tutorial for its clear, concise, and beginner-friendly approach to C programming and game development. Several appreciated the focus on fundamental concepts and the avoidance of complex libraries, making it accessible even to those with limited C experience. Some suggested improvements like using getline() for safer input handling and adding features like saving/loading game state. The nostalgic aspect of text adventures also resonated with many, sparking discussions about classic games like Zork and the broader history of interactive fiction. A few commenters offered alternative approaches or pointed out minor technical details, but the overall sentiment was positive, viewing the tutorial as a valuable resource for aspiring programmers.
This pull request introduces initial support for Apple's visionOS platform in the Godot Engine. It adds a new build target enabling developers to create and export Godot projects specifically for visionOS headsets. The implementation leverages the existing xr interface and builds upon the macOS platform support, allowing developers to reuse existing XR projects and code with minimal modifications. This preliminary support focuses on enabling core functionality and rendering on the device, paving the way for more comprehensive visionOS features in future updates.
Hacker News users generally expressed excitement about Godot's upcoming native visionOS support, viewing it as a significant step forward for the engine and potentially a game-changer for VR/AR development. Several commenters praised Godot's open-source nature and its commitment to cross-platform compatibility. Some discussed the potential for new types of games and experiences enabled by visionOS and the ease with which existing Godot projects could be ported. A few users raised questions about Apple's closed ecosystem and its potential impact on the openness of Godot's implementation. The implications of Apple's developer fees and App Store policies were also briefly touched upon.
"Find the Odd Disk" presents a visual puzzle where players must identify a single, subtly different colored disk among a grid of seemingly identical ones. The difference in color is minimal, challenging the player's perception and requiring careful observation. The game provides no hints or feedback beyond the user's clicks, increasing the difficulty and rewarding attentive analysis. Successfully clicking the odd disk reveals the next level, featuring progressively more disks and subtler color variations, making each round more demanding than the last.
HN users generally enjoyed the "Find the Odd Disk" color puzzle, praising its elegant simplicity and clever design. Several pointed out the effectiveness of using just noticeable differences (JNDs) in color to create a challenging but solvable puzzle. Some discussed optimal strategies, with one suggesting binary search as the most efficient approach. A few users shared their completion times, and others expressed their satisfaction in solving it. There was some light debate over whether the color differences were truly at the just-noticeable threshold or slightly larger, but the overall consensus was positive, with many appreciating the break from more complex or stressful topics typically discussed on HN.
Pike is a dynamic programming language combining high-level productivity with efficient performance. Its syntax resembles Java and C, making it easy to learn for programmers familiar with those languages. Pike supports object-oriented, imperative, and functional programming paradigms. It boasts powerful features like garbage collection, advanced data structures, and built-in support for networking and databases. Pike is particularly well-suited for developing web applications, system administration tools, and networked applications, and is free and open-source software.
HN commenters discuss Pike's niche as a performant, garbage-collected language used for specific applications like the Roxen web server and MUDs. Some recall its origins in LPC and its association with LPMud-style games. Several express surprise that it's still maintained, while others share positive experiences with its speed and C-like syntax, comparing it favorably to Java in some respects. One commenter highlights its use in high-frequency trading due to its performance characteristics. The overall sentiment leans towards respectful curiosity about a relatively obscure but seemingly capable language.
A developer created an incredibly small, playable first-person shooter inspired by Doom that fits entirely within the data capacity of a QR code. The game, called "Backrooms DOOM," leverages extremely limited graphics and simple gameplay mechanics to achieve this feat. Scanning the QR code redirects to a webpage where the game can be played directly in a browser.
Hacker News users generally expressed admiration for the technical achievement of fitting a Doom-like game into a QR code. Several commenters questioned the actual playability, citing the extremely limited resolution and controls. Some discussed the clever compression techniques likely used, and others shared similar projects, like fitting Wolfenstein 3D into a tweet or creating even smaller games. A few questioned the use of the term "Doom-like," suggesting it was more of a tech demo than a truly comparable experience. The practicality was debated, with some seeing it as a fun novelty while others considered it more of a technical exercise. There was some discussion about the potential of pushing this concept further with future advancements in QR code capacity or display technology.
Defold is a free and open-source 2D game engine designed for rapid development. It features a streamlined workflow with its own integrated editor, supports Lua scripting, and offers a wide range of built-in tools for graphics, physics, animation, and sound. Targeting multiple platforms including iOS, Android, HTML5, Windows, macOS, and Linux, Defold simplifies cross-platform deployment with a single-click build process. Its focus on efficiency allows for small game sizes and optimal performance, making it suitable for a variety of game genres and platforms.
Hacker News users discuss Defold's ease of use, especially for beginners, and its suitability for 2D games. Some praise its small executable size and fast iteration times, while others highlight the active community and helpful documentation. Concerns include its limited 3D capabilities, the small talent pool, and uncertainty about its long-term viability despite its acquisition by King and subsequent independence. Several users share their positive experiences using Defold for both personal projects and commercially released games, citing its performance and streamlined workflow. The editor is lauded as clean and efficient. Some express disappointment in King's handling of the engine after acquiring it, but also optimism about its future as an independent entity once again.
The author reflects positively on their experience using Lua for a 60k-line project. They praise Lua's speed, small size, and ease of embedding. While acknowledging the limited ecosystem and tooling compared to larger languages, they found the simplicity and resulting stability to be major advantages. Minor frustrations included the standard library's limitations, especially regarding string manipulation, and the lack of static typing. Overall, Lua proved remarkably effective for their needs, offering a productive and efficient development experience despite some drawbacks. They highlight LuaJIT's exceptional performance and recommend it for CPU-bound tasks.
Hacker News users generally agreed with the author's assessment of Lua, praising its speed, simplicity, and ease of integration. Several commenters highlighted their own positive experiences with Lua, particularly in game development and embedded systems. Some discussed the limitations of the standard library and the importance of choosing good third-party libraries. The lack of static typing was mentioned as a drawback, though some argued that good testing practices mitigate this issue. A few commenters also pointed out that 60k lines of code is not exceptionally large, providing context for the author's experience. The overall sentiment was positive towards Lua, with several users recommending it for specific use cases.
Ubisoft has open-sourced Chroma, a software tool they developed internally to simulate various forms of color blindness. This allows developers to test their games and applications to ensure they are accessible and enjoyable for colorblind users. Chroma provides real-time colorblindness simulation within a viewport, supporting several common types of color vision deficiency. It integrates easily into existing workflows, offering both standalone and Unity plugin versions. The source code and related resources are available on GitHub, encouraging community contributions and wider adoption for improved accessibility across the industry.
HN commenters generally praised Ubisoft for open-sourcing Chroma, finding it a valuable tool for developers to improve accessibility in games. Some pointed out the potential benefits beyond colorblindness, such as simulating different types of monitors and lighting conditions. A few users shared their personal experiences with colorblindness and appreciated the effort to make gaming more inclusive. There was some discussion around existing tools and libraries for similar purposes, with comparisons to Daltonize and mentioning of shader implementations. One commenter highlighted the importance of testing with actual colorblind individuals, while another suggested expanding the tool to simulate other visual impairments. Overall, the reception was positive, with users expressing hope for wider adoption within the game development community.
The blog post "Everything wrong with MCP" criticizes the reliance on MCP (Mod Coder Pack) as the de facto intermediary for modding Minecraft Java Edition. The author argues that MCP, being community-maintained and reverse-engineered, introduces instability, obfuscates the modding process, and complicates debugging, and that the arrangement leaves Mojang with outsized control over the modding ecosystem. They propose that Mojang should instead release an official modding API based on clean, human-readable source code, which would foster a more stable, accessible, and innovative modding community. This would give modders a clearer understanding of the game's internals, streamline development, and ultimately benefit players with a richer and more reliable modded experience.
Hacker News users generally agreed with the author's criticisms of Minecraft's Marketplace. Several commenters shared personal anecdotes of frustrating experiences with low-quality content, misleading pricing practices, and the predatory nature of some microtransactions targeted at children. The lack of proper moderation and quality control from Microsoft was a recurring theme, with some suggesting it damages the overall Minecraft experience. Others pointed out the irony of Microsoft's approach, contrasting it with their previous stance on open-source and community-driven development. A few commenters argued that the marketplace serves a purpose, providing a platform for creators, though acknowledging the need for better curation. Some also highlighted the role of parents in managing children's spending habits within the game.
Whatsit.today is a new word guessing game where players try to decipher a hidden five-letter word by submitting guesses. Feedback is provided after each guess, revealing which letters are correct and if they are in the correct position within the word. The game offers a daily puzzle and the opportunity for unlimited practice. The creator is seeking feedback on their project.
HN users generally praised the simple, clean design and addictive gameplay of the word game. Several suggested improvements, such as a dark mode, a way to see definitions, and a larger word list. Some questioned the scoring system and offered alternative methods. A few pointed out similar existing games, and others offered encouragement for further development and monetization strategies. One commenter appreciated the creator's humility in presenting the game and mentioned their own mother's enjoyment of simple word games, creating a sense of camaraderie. The overall sentiment was positive and supportive.
Summary of Comments (4)
https://news.ycombinator.com/item?id=44127173
HN users discuss various aspects of rigid body simulation, focusing on the challenges of achieving stable "rest" states. Several commenters highlight the inherent difficulties with numerical methods, especially in stacked configurations where tiny inaccuracies accumulate and lead to instability. The "fix" proposed in the linked tweet, of directly zeroing velocities below a threshold, is deemed by some as a hack, while others appreciate its pragmatic value in specific scenarios. A more nuanced approach of damping velocities based on kinetic energy is suggested, as well as a pointer to Bullet Physics' strategy for handling resting contacts. The overall sentiment leans towards acknowledging the complexity of robust rigid body simulation and the need for a balance between physical accuracy and computational practicality.
The Hacker News post "Putting Rigid Bodies to Rest" links to a tweet showcasing a demo of a physics engine. The comments section is relatively short, with a primary focus on the specifics of the demo and some broader discussion about physics engines and game development.
One commenter points out that the demo is not actually putting rigid bodies to rest in the traditional physics engine sense. Instead, it's cleverly using joints to create the illusion of stability. They explain that true resting behavior usually involves detecting minimal movement and then freezing the object to prevent further computation. This commenter's observation sparks a small discussion about the practicality and efficiency of this approach versus true resting implementations.
Another commenter highlights the nostalgic aspect of the demo, comparing it to early 3D games and demoscene productions. They express appreciation for the visual simplicity and the focus on a single, well-executed effect.
A further comment dives a bit deeper into the technical details, speculating on how the demo might be handling collision detection and response, given the jointed nature of the construction. They posit that a specialized collision detection algorithm might be used to optimize performance.
The rest of the comments are brief, mostly expressing general interest in the demo or agreeing with previous points. One commenter simply states their appreciation for the "satisfying" nature of the simulation. There's no extensive debate or deeply technical analysis, likely due to the limited scope of the original tweet and the straightforward nature of the demo itself.