Chris Siebenmann's blog post, "The history and use of /etc/glob in early Unixes," delves into the historical context and functionality of /etc/glob, the external program that performed wildcard expansion on behalf of the shell in Version 6 Unix and its predecessors. Siebenmann begins by highlighting the tight memory constraints of these early Unix systems, which kept the shell deliberately small; rather than building pattern expansion into the shell itself, the work was delegated to a separate program, /etc/glob, invoked only when a command line actually contained a wildcard.
The post explains the operation of /etc/glob. When the shell encountered an unquoted wildcard character such as * or ? in a command's arguments, it did not expand the pattern itself. Instead, it executed /etc/glob, handing it the command name and all of its arguments; /etc/glob expanded the patterns against the filesystem and then itself executed the target command with the expanded argument list. The shell never saw the expanded arguments at all, because the entire job, expansion and final exec alike, was handed off to the helper program.
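To make that division of labor concrete, here is a minimal, hedged sketch in C of how such an expand-then-exec helper can work. It is an illustration of the idea, not the historical /etc/glob source; the name mini_glob and its simplifications (current directory only, no bracket patterns) are inventions for the example.

```c
/* mini_glob.c -- illustrative sketch of an external "glob" helper.
 * Invoked as: mini_glob command arg-or-pattern...
 * It expands * and ? against the current directory, then execs the
 * command with the expanded arguments -- the same expand-then-exec
 * division of labor the early Unix shell relied on. */
#include <dirent.h>
#include <fnmatch.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define MAXARGS 4096

static char *out[MAXARGS];
static int nout;

static void add_arg(char *arg)
{
    if (nout >= MAXARGS - 1) {
        fprintf(stderr, "argument list too long\n");
        exit(1);
    }
    out[nout++] = arg;
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
        return 1;
    }
    add_arg(argv[1]);                        /* the command itself passes through */

    for (int i = 2; i < argc; i++) {
        if (strpbrk(argv[i], "*?") == NULL) {
            add_arg(argv[i]);                /* no wildcard: keep the argument as-is */
            continue;
        }
        int matched = 0;
        DIR *d = opendir(".");
        struct dirent *e;
        while (d != NULL && (e = readdir(d)) != NULL) {
            if (e->d_name[0] == '.')
                continue;                    /* skip dotfiles for simplicity */
            if (fnmatch(argv[i], e->d_name, 0) == 0) {
                add_arg(strdup(e->d_name));
                matched = 1;
            }
        }
        if (d != NULL)
            closedir(d);
        if (!matched) {
            fprintf(stderr, "%s: no match\n", argv[i]);
            return 1;                        /* this sketch simply gives up here */
        }
    }
    out[nout] = NULL;

    execvp(out[0], out);                     /* hand control to the real command */
    perror(out[0]);
    return 1;
}
```

Run as mini_glob ls '*.c' (with the pattern quoted so a modern shell doesn't expand it first), it expands the pattern itself and then execs ls with the matching filenames, which is roughly how the early shell leaned on /etc/glob.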
Siebenmann provides details gleaned from historical Unix sources, illustrating the practical consequences of this design. Because both the expansion and the final exec happened inside /etc/glob, diagnostics for patterns that matched nothing came from glob rather than from the shell, and a command invoked with wildcards was effectively launched by glob instead of directly by the shell. The post also emphasizes the quirks and limitations that followed from doing expansion outside the shell, behavior that users of later shells, where parsing, quoting, and expansion are handled in one place, would find surprising.
The post further discusses the historical evolution of this mechanism. While wildcard expansion initially lived in the standalone /etc/glob program, the functionality was eventually incorporated directly into the shell itself in later Unix versions. This integration streamlined the command parsing process and obviated the need for a separate program. The reasons for the transition likely stemmed from efficiency improvements, loosening memory constraints, and a desire for a more unified approach to command interpretation.
Finally, Siebenmann draws a parallel between /etc/glob and the built-in filename expansion of modern shells. The end result is much the same, but /etc/glob differed in that expansion happened in a separate process, after the shell had already split the command line into words. This distinction underlines the evolution of command processing in Unix systems, moving from an external, bolt-on expansion helper to the integrated and more flexible approach prevalent today. The post concludes by noting the enduring influence of /etc/glob, whose name survives in the term "globbing" and in library routines such as glob(3).
This comprehensive guide, titled "BCPL Programming on the Raspberry Pi," serves as an introduction to the BCPL programming language specifically tailored for use on the Raspberry Pi platform. It aims to provide novice programmers, particularly young individuals, with a foundational understanding of BCPL and equip them with the necessary skills to develop functional programs on their Raspberry Pi.
The document begins with a brief historical overview of BCPL, highlighting its influence as a precursor to the widely-used C programming language. This historical context establishes BCPL's significance in the evolution of programming languages. The guide then proceeds to detail the installation process of the Cintcode BCPL interpreter on a Raspberry Pi system, offering clear, step-by-step instructions to ensure a smooth setup.
Following the installation, the core concepts of BCPL programming are systematically introduced. This includes a detailed explanation of fundamental data types like integers and vectors (arrays), along with guidance on using operators for arithmetic and logical operations. Control flow mechanisms, crucial for directing program execution, are also covered comprehensively, encompassing conditional statements (IF, TEST), loops (WHILE, FOR), and switch statements (SWITCHON). The guide emphasizes the importance of structured programming techniques to promote clarity and maintainability in BCPL code.
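For readers who already know C, the constructs the guide covers map fairly directly onto C equivalents. The sketch below is not taken from the guide; it is a generic illustration, with the corresponding BCPL construct noted in a comment next to each C statement.

```c
/* Rough C analogues of the BCPL constructs the guide introduces.
 * The BCPL counterparts are noted in comments; the C code itself is
 * only an illustration, not a translation of any program in the guide. */
#include <stdio.h>

int main(void)
{
    int v[5] = {3, 1, 4, 1, 5};            /* BCPL: LET v = VEC 4 (indices 0..4) */
    int sum = 0;

    for (int i = 0; i < 5; i++)            /* BCPL: FOR i = 0 TO 4 DO ...        */
        sum = sum + v[i];                  /* BCPL: v!i indexes a vector         */

    if (sum > 10)                          /* BCPL: IF sum > 10 DO ... (one-armed) */
        printf("big sum\n");

    if (sum % 2 == 0)                      /* BCPL: TEST ... THEN ... ELSE (two-armed) */
        printf("even\n");
    else
        printf("odd\n");

    int i = 0;
    while (i < 3)                          /* BCPL: WHILE i < 3 DO ...           */
        i++;

    switch (sum & 3) {                     /* BCPL: SWITCHON sum & 3 INTO ...    */
    case 0:  printf("zero\n");  break;     /* BCPL: CASE 0: ... ENDCASE          */
    case 1:  printf("one\n");   break;
    default: printf("other\n"); break;     /* BCPL: DEFAULT: ...                 */
    }
    return 0;
}
```

The main conceptual difference is that BCPL is typeless: every value is simply a machine word, so v, sum, and i would all be plain words rather than declared ints.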
The guide further delves into more advanced topics such as procedures (functions) and the concept of separate compilation. It elucidates how to define and call procedures, enabling modular program design and code reuse. The separate compilation feature allows developers to break down larger programs into smaller, manageable modules that can be compiled independently and then linked together. This promotes efficient development and simplifies debugging.
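Separate compilation in BCPL follows the same shape familiar from C: a shared set of declarations, independently compiled modules, and a final link step. The following C sketch shows the general pattern; the file names and the average function are invented for the illustration.

```c
/* --- stats.h: shared declaration ------------------------------------ */
int average(const int *xs, int n);

/* --- stats.c: one independently compiled module ---------------------- */
#include "stats.h"

int average(const int *xs, int n)
{
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum += xs[i];
    return n > 0 ? sum / n : 0;
}

/* --- main.c: another module that calls into stats.c ------------------ */
#include <stdio.h>
#include "stats.h"

int main(void)
{
    int data[] = {4, 8, 15, 16};
    printf("average = %d\n", average(data, 4));
    return 0;
}

/* Build each module separately, then link the pieces:
 *   cc -c stats.c
 *   cc -c main.c
 *   cc -o demo stats.o main.o
 */
```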
Input and output operations are also addressed, demonstrating how to interact with the user via the console and how to manipulate files. The guide provides examples of reading and writing data to files, enabling persistent storage of information.
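The BCPL library traditionally expresses this with routines such as findinput, findoutput, selectinput, selectoutput, rdch, and wrch. As a rough analogue of the same copy-a-file pattern, here is what it looks like in C; the file names are placeholders for the example.

```c
/* Copy one file to another a character at a time -- the same pattern a
 * BCPL program would express with findinput/selectinput, rdch, wrch,
 * findoutput/selectoutput, and endread/endwrite. */
#include <stdio.h>

int main(void)
{
    FILE *in  = fopen("input.txt", "r");     /* BCPL: findinput("input.txt")   */
    FILE *out = fopen("output.txt", "w");    /* BCPL: findoutput("output.txt") */
    if (in == NULL || out == NULL) {
        perror("fopen");
        return 1;
    }

    int ch;
    while ((ch = fgetc(in)) != EOF)          /* BCPL: rdch() until endstreamch */
        fputc(ch, out);                      /* BCPL: wrch(ch)                 */

    fclose(in);                              /* BCPL: endread()/endwrite()     */
    fclose(out);
    return 0;
}
```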
Throughout the guide, numerous examples of BCPL code snippets are interspersed to illustrate the practical application of the concepts being discussed. These practical demonstrations reinforce the theoretical explanations and facilitate a deeper understanding of BCPL syntax and functionality. The document concludes with a series of suggested programming exercises designed to challenge the reader and encourage further exploration of BCPL's capabilities on the Raspberry Pi. These exercises provide hands-on experience and promote the development of practical programming skills. In essence, the document serves as a self-contained, accessible resource for anyone interested in learning BCPL programming in the context of the Raspberry Pi.
The Hacker News post titled "Young Persons Guide to BCPL Programming on the Raspberry Pi [pdf]" has several comments discussing the linked PDF and BCPL in general. A recurring theme is nostalgia and appreciation for the simplicity and elegance of BCPL.
One commenter recalls using BCPL on a Xerox Data Systems Sigma 9 in the early 1980s, highlighting its influence on C and emphasizing its small size and speed. They appreciate the document for its historical context and clear explanation of bootstrapping.
Another commenter focuses on the educational value of the document, suggesting that working through it provides valuable insight into how software works at a fundamental level, from bare metal up. They praise the clear writing style and the practical approach of using a Raspberry Pi.
A few comments delve into the history of BCPL, mentioning its relationship to CPL and C, and how it was a dominant language for systems programming before C took over. One user explains that BCPL was instrumental in the development of the original boot ROM for the Amiga. They also mention its continued use in some specialized areas due to its compact runtime.
Some comments express interest in trying BCPL on a modern platform like the Raspberry Pi. They discuss the potential benefits of learning such a foundational language and the practical experience it offers in understanding system architecture and bootstrapping.
Several commenters share personal anecdotes about their experiences with BCPL or related languages, giving the discussion a sense of historical perspective. One person talks about using BCPL in the 1970s and remembers the challenges of using paper tape. Another recounts learning C before BCPL and finding the differences fascinating.
The overall sentiment in the comments is positive, with many expressing admiration for BCPL's simplicity and power. The document is praised for being well-written, informative, and historically relevant. The discussion provides a glimpse into the enduring interest in older programming languages and the desire to understand the foundations of modern computing.
The blog post "DOS APPEND" from the OS/2 Museum meticulously details the functionality and nuances of the APPEND
command in various DOS versions, primarily focusing on its evolution and differences compared to the PATH
command. APPEND
, much like PATH
, allows programs to access data files located in directories other than their current working directory. However, while PATH
focuses on executable files, APPEND
extends this capability to data files, specified by various file extensions.
The article begins by explaining the initial purpose of APPEND in DOS 3.3, highlighting its ability to search specified directories for data files when a program attempts to open a file not found in the current directory. This eliminates the need for programs to explicitly handle path information for data files. The post then traces the development of APPEND through later DOS versions, including DOS 3.31, where a significant bug related to networked drives was addressed.
A key distinction between APPEND and PATH is elaborated upon: PATH affects only the search for executable files (.COM, .EXE, and .BAT), while APPEND applies to data files opened from the directories the user specifies. This difference is crucial for understanding their respective roles within the DOS environment.
The blog post further delves into the various ways APPEND can be used, outlining the command-line switches and their effects. These switches include /E, which stores the appended directory list in an environment variable; /PATH:ON, which enables searching the appended directories even when a full path is provided for a file; and /PATH:OFF, which disables this behavior. The post also explains the use of /X, which extends APPEND to the DOS search and EXEC functions, so that programs can also be located and run from the appended directories.
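To make the search behavior concrete, here is a hedged C sketch of the idea behind APPEND, not of how DOS actually implements it. It assumes the /E form, in which the directory list is kept in an environment variable named APPEND with entries separated by semicolons: an open that fails in the current directory is retried against each listed directory.

```c
/* A hedged sketch of the idea behind APPEND's data-file search -- not how
 * DOS implements it. The directory list is read from an environment
 * variable named APPEND (the /E form), with entries separated by ';'. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static FILE *open_with_append(const char *name, const char *mode)
{
    FILE *f = fopen(name, mode);              /* normal lookup first */
    if (f != NULL)
        return f;

    const char *list = getenv("APPEND");      /* e.g. "data;shared;/tmp/extra" */
    if (list == NULL)
        return NULL;

    char *copy = strdup(list);
    for (char *dir = strtok(copy, ";"); dir != NULL; dir = strtok(NULL, ";")) {
        char path[1024];
        snprintf(path, sizeof path, "%s/%s", dir, name);   /* DOS would use '\' */
        f = fopen(path, mode);                /* retry in each appended directory */
        if (f != NULL)
            break;
    }
    free(copy);
    return f;
}

int main(void)
{
    FILE *f = open_with_append("REPORT.DAT", "r");
    puts(f != NULL ? "opened via current directory or the append list"
                   : "not found anywhere");
    if (f != NULL)
        fclose(f);
    return 0;
}
```

The real APPEND performs this fallback transparently inside DOS's file-open services, which is why unmodified programs benefit without knowing the feature exists.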
The evolution of APPEND continues to be discussed, noting the removal of the problematic /X:ON and /X:OFF switches in later versions due to their instability. The article also touches upon the differences in behavior between APPEND in MS-DOS/PC DOS and DR DOS, particularly concerning the handling of the ; delimiter in the APPEND list and the search order when multiple directories are specified.
Finally, the post concludes by briefly discussing the persistence of APPEND in later Windows versions for compatibility, even though its utility diminishes in these more advanced operating systems with their more sophisticated file management capabilities. The article thoroughly explores the intricacies and historical context of the APPEND command, offering a comprehensive understanding of its functionality and its place within the broader DOS ecosystem.
The Hacker News post titled "DOS APPEND" with the link https://www.os2museum.com/wp/dos-append/ has several comments discussing the utility of the APPEND
command in DOS and OS/2, as well as its quirks and comparisons to other operating systems.
One commenter recalls using APPEND
frequently and finding it incredibly useful, particularly for accessing data files located in different directories without having to constantly change directories or use full paths. They highlight the convenience it offered in a time before sophisticated development environments and integrated development environments (IDEs).
Another commenter draws a parallel between APPEND
and the modern concept of environment variables like $PATH
in Unix-like systems, which serve a similar purpose of specifying locations where the system should search for executables. They also touch on how APPEND
differed slightly in OS/2, specifically regarding the handling of data files versus executables.
Further discussion revolves around the intricacies of APPEND
's behavior. One comment explains how APPEND
didn't just search the appended directories but actually made them appear as if they were part of the current directory, creating a virtualized directory structure. This led to some confusion and unexpected behavior in certain situations, especially with programs that relied on obtaining the current working directory.
One user recounts experiences with the complexities of managing multiple directories and files in early versions of Turbo Pascal, illustrating the context where a tool like APPEND
would have been valuable. This comment also highlights the limited tooling available at the time, emphasizing the appeal of features like APPEND
for streamlining development workflows.
Someone points out the potential for conflicts and unexpected results when using APPEND
with programs that create files in the current directory. They suggest that APPEND
's behavior could lead to files being inadvertently created in a directory different from the intended one, depending on how the program handled relative paths.
The security implications of APPEND
are also addressed, with a comment mentioning the risks associated with accidentally executing programs from untrusted directories added to the APPEND
path. This highlights the potential security vulnerabilities that could arise from misuse or improper configuration of the command.
Finally, there's a mention of a similar feature called apppath
in the REXX language, further illustrating the cross-platform desire for this kind of directory management functionality.
Overall, the comments paint a picture of APPEND
as a powerful but somewhat quirky tool that provided a valuable solution to directory management challenges in the DOS/OS/2 era, while also introducing potential pitfalls that required careful consideration. The discussion showcases how APPEND
reflected the computing landscape of the time and how its functionality foreshadowed concepts that are commonplace in modern operating systems.
This meticulously detailed blog post, "Ascending Mount FujiNet," chronicles the author's multifaceted journey to achieve robust and reliable networking capabilities for their Tandy Color Computer 3. The narrative begins by outlining the existing limitations of networking solutions for this vintage hardware, primarily focusing on the speed constraints of the serial port. The author then introduces the FujiNet project, an ambitious endeavor to implement a modern network interface for the CoCo 3 utilizing an ESP32 microcontroller. This endeavor isn't merely about connecting the machine to the internet; it involves crafting a sophisticated system that emulates legacy peripherals like hard drives and floppy drives, streamlining the process of transferring files and interacting with the retro hardware.
The author meticulously documents their methodical exploration of various hardware and software components required for the FujiNet implementation. They delve into the specifics of setting up the ESP32, configuring the necessary software, and integrating it with the CoCo 3. The challenges encountered are described in detail, including addressing conflicts with memory addresses and navigating the complexities of interrupt handling. The narrative emphasizes the iterative nature of the process, highlighting the adjustments made to hardware configurations and software parameters to overcome obstacles and optimize performance.
A significant portion of the post is dedicated to elucidating the intricacies of network booting. The author explains the process of configuring the CoCo 3 to boot from the network, leveraging the capabilities of the FujiNet system. They discuss the importance of network boot ROMs and the modifications required to accommodate the enhanced functionality offered by FujiNet. The post also delves into the mechanisms of loading different operating systems and disk images remotely, showcasing the versatility of the network booting setup.
Furthermore, the author explores the integration of specific software, such as the RS-DOS operating system, demonstrating how FujiNet seamlessly bridges the gap between the vintage hardware and modern network resources. The ability to access files stored on a network share as if they were local drives is highlighted, underscoring the practical benefits of the FujiNet system for everyday use with the CoCo 3. The overall tone conveys the author's enthusiasm for retro computing and their meticulous approach to problem-solving, resulting in a comprehensive guide for others seeking to enhance their CoCo 3 experience with modern network connectivity. The post concludes with a sense of accomplishment and a glimpse into the future possibilities of the FujiNet project.
The Hacker News post "Ascending Mount FujiNet" discussing a blog post about the FujiNet networking device for 8-bit Atari systems generated several interesting comments.
One commenter expressed excitement about the project, highlighting the appeal of modernizing retro hardware without resorting to emulation. They appreciated the ability to use original hardware with modern conveniences. This sentiment was echoed by others who found the blend of old and new technology compelling.
Another commenter, identifying as the author of the blog post, clarified some technical details. They explained that while the current implementation uses ESP32 modules for Wi-Fi, the long-term goal is to develop a dedicated ASIC for a more integrated and potentially faster solution. This prompted a discussion about the feasibility and cost-effectiveness of ASIC development, with other commenters weighing in on the potential challenges and benefits.
There was also a discussion about the broader implications of the FujiNet project and its potential impact on the retro gaming community. Some commenters speculated on whether similar projects could be developed for other retro platforms, expanding the possibilities for online play and other modern features.
Several commenters shared their personal experiences with retro networking solutions, comparing FujiNet to other options and discussing the advantages and disadvantages of each. This led to a conversation about the challenges of preserving and maintaining retro hardware, and the importance of projects like FujiNet in keeping these systems accessible and enjoyable for future generations.
Finally, a few commenters focused on the technical aspects of the FujiNet implementation, discussing topics like network protocols, data transfer speeds, and the challenges of integrating modern networking technology with older hardware. These comments provided valuable insights into the complexities of the project and the ingenuity required to overcome them.
This GitHub repository, titled "Elite - Source Code (Commodore 64)," meticulously presents the original source code for the seminal video game Elite, specifically the version developed for the Commodore 64 home computer. It is not simply a dump of the original code; rather, it represents a painstaking effort to make the code understandable to modern programmers and those interested in the history of game development. Mark Moxon, the repository's author, has undertaken the extensive task of annotating the 6502 assembly language code with detailed comments and explanations. This documentation clarifies the function of individual code sections, algorithms employed, and the overall structure of the game's logic.
The repository includes not just the core game code, but also the associated data files necessary for Elite to run on a Commodore 64. This comprehensive approach allows for a complete reconstruction of the original development environment. Beyond the raw source code, the repository provides a wealth of supplementary material. This includes documentation regarding the game's intricate algorithms, such as those governing procedural generation of the game world, 3D graphics rendering on limited hardware, and the underlying physics engine. Furthermore, the repository likely incorporates explanations of the various data structures employed within the game, shedding light on how information like ship specifications, trade commodities, and planetary data were stored and manipulated.
The stated goal of this project is to provide a deep dive into the technical ingenuity behind Elite, making its inner workings accessible to a broader audience. By providing clear annotations and supplementary documentation, the repository aims to serve as both an educational resource for aspiring programmers and a historical archive preserving a landmark achievement in video game development. This detailed reconstruction of the original Elite source code provides valuable insights into the constraints and challenges faced by developers working with the limited resources of 8-bit home computers in the 1980s and showcases the innovative solutions they devised to create such a groundbreaking and influential game.
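As a generic illustration of the kind of procedural generation mentioned above, where a compact seed reproduces a whole game world so nothing needs to be stored on disk, here is a hedged sketch of the general technique in C. It is not Elite's algorithm, and every name and constant in it is invented for the example.

```c
/* Deterministic procedural generation from a small seed: the same seed
 * always yields the same "galaxy", so the world needs no stored data.
 * This is a generic sketch of the technique, not Elite's own routines. */
#include <stdint.h>
#include <stdio.h>

static uint32_t state;

/* A simple linear congruential generator stands in for whatever
 * pseudo-random step a real game would use. */
static uint32_t next_rand(void)
{
    state = state * 1664525u + 1013904223u;
    return state >> 16;
}

struct system_info {
    int economy;    /* 0..7  */
    int tech_level; /* 0..15 */
    int population; /* arbitrary units */
};

static struct system_info generate_system(uint32_t galaxy_seed, int index)
{
    state = galaxy_seed ^ ((uint32_t)index * 2654435761u);  /* per-system seed */
    struct system_info s;
    s.economy    = (int)(next_rand() % 8);
    s.tech_level = (int)(next_rand() % 16);
    s.population = (int)(next_rand() % 60) + 1;
    return s;
}

int main(void)
{
    /* Regenerating system i from the same galaxy seed always gives the
     * same attributes, which is the heart of the trick. */
    for (int i = 0; i < 5; i++) {
        struct system_info s = generate_system(0x5A4Au, i);
        printf("system %d: economy %d, tech %d, population %d\n",
               i, s.economy, s.tech_level, s.population);
    }
    return 0;
}
```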
The Hacker News post titled "Documented and annotated source code for Elite on the Commodore 64" generated a fair number of comments, primarily expressing appreciation for the effort involved in documenting and annotating this classic piece of gaming history.
Several commenters reminisced about their experiences with Elite on the Commodore 64, sharing personal anecdotes about the impact the game had on them. Some discussed the technical challenges of developing for the C64, especially with its limited resources, praising the ingenuity of the original programmers. The clever use of 6502 assembly language tricks and mathematical optimizations were frequently mentioned and analyzed.
A few comments delved into specific aspects of the code, such as the use of fixed-point arithmetic, the generation of the game world, and the rendering of the wireframe graphics. These technical discussions highlighted the elegant solutions implemented within the constraints of the C64's hardware.
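As a general illustration of the fixed-point arithmetic those comments mention, here is a sketch of the generic technique in C, not a transcription of Elite's 6502 routines: fractional values are stored as scaled integers and multiplied with a shift.

```c
/* Generic 8.8 fixed-point arithmetic, the kind of trick 8-bit games used
 * to avoid floating point. This is an illustrative sketch in C, not a
 * transcription of Elite's own 6502 routines. */
#include <stdint.h>
#include <stdio.h>

typedef int16_t fix88;                        /* 8 integer bits, 8 fraction bits */

#define FIX_ONE 256                           /* 1.0 in 8.8 representation */

static fix88  fix_from_int(int x)     { return (fix88)(x * FIX_ONE); }
static double fix_to_double(fix88 x)  { return x / (double)FIX_ONE; }

static fix88 fix_mul(fix88 a, fix88 b)
{
    /* widen to 32 bits, multiply, then shift the extra fraction bits away */
    return (fix88)(((int32_t)a * (int32_t)b) >> 8);
}

int main(void)
{
    fix88 speed  = fix_from_int(3) + FIX_ONE / 2;   /* 3.5  */
    fix88 scale  = FIX_ONE / 4;                     /* 0.25 */
    fix88 result = fix_mul(speed, scale);           /* 3.5 * 0.25 = 0.875 */
    printf("%.3f\n", fix_to_double(result));        /* prints 0.875 */
    return 0;
}
```

On a CPU like the 6502, which has no multiply instruction, the same representation is used but the multiplication itself is done with shift-and-add loops.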
The meticulous documentation and annotation work by Mark Moxon was highly praised. Commenters emphasized the value of this effort for preserving gaming history and for educational purposes, allowing aspiring programmers to learn from classic code examples. The accessibility of the annotated code was also appreciated, making it easier to understand the intricacies of the game's inner workings.
Some comments linked to related resources, including other versions of Elite's source code and articles discussing the game's development. Others expressed interest in exploring the code further and potentially contributing to the documentation effort.
A particularly compelling comment thread discussed the difficulties of reverse engineering old code, especially without original documentation. The work involved in deciphering the original programmers' intentions and adding meaningful annotations was recognized as a significant undertaking.
Overall, the comments reflected a strong sense of nostalgia and respect for the technical achievements of the original Elite developers. The appreciation for the detailed documentation and annotation work underscores the importance of preserving and understanding classic software for future generations.
Summary of Comments (23)
https://news.ycombinator.com/item?id=42680437
HN commenters discuss the blog post's exploration of /etc/glob in early Unix. Several highlight the post's clarification of the mechanism's purpose: filename expansion was not yet handled by the shell itself, but delegated to /etc/glob as an external helper program. Some commenters share anecdotes about encountering this archaic feature, while others express fascination with this historical curiosity and the evolution of Unix. The overall sentiment is appreciation for the post's shedding light on a forgotten piece of Unix history and prompting reflection on how modern systems have evolved. Some debate the actual impact and usage prevalence of /etc/glob, with some suggesting it was likely rarely used even in early Unix.
The Hacker News post titled "The history and use of /etc/glob in early Unixes" has generated a moderate discussion with several interesting comments. The comments primarily focus on historical context, technical details related to globbing, and personal anecdotes about using or encountering this somewhat obscure Unix feature.
One commenter provides further historical context by mentioning that the Version 6 Unix shell did not support globbing, meaning the expansion of wildcard characters like * and ?, directly. Instead, /etc/glob was used as an external program to perform this expansion. This detail highlights the evolution of the shell and its built-in capabilities over time.
Another commenter elaborates on the mechanics of how /etc/glob interacted with the shell. They explain that the shell would identify commands containing an unescaped wildcard, then execute /etc/glob to expand the wildcards. The expanded argument list was then passed to the actual command being executed. This clarifies the role of /etc/glob as an intermediary for handling wildcards in older Unix systems.
A subsequent comment thread discusses the use of set -f (or noglob) in modern shells to disable wildcard expansion. This connection is made to illustrate that while globbing is now integrated into the shell itself, mechanisms to disable it still exist, echoing the older behavior where globbing wasn't a default shell feature.
Someone shares a personal anecdote about encountering remnants of /etc/glob in a much later version of Unix (4.3BSD). Although no longer functional, the presence of the /etc/glob file serves as a historical artifact, reminding users of earlier Unix implementations.
Another comment explains the security implications of directly executing the output of programs in the shell. They highlight that directly substituting the output of /etc/glob into the command line could lead to command injection vulnerabilities if filenames contained special characters. This observation points to the potential risks associated with early implementations of globbing.
A commenter also mentions the influence of Multics on early Unix, suggesting that some of these design choices might have been inherited or influenced by Multics' features. This provides a broader context by linking the development of Unix to its predecessors.
Finally, a few comments touch upon alternative globbing mechanisms like the use of backticks, further enriching the discussion by presenting different approaches to handling filename expansion in older shells.
Overall, the comments on the Hacker News post provide valuable insights into the historical context, technical details, and practical implications of /etc/glob in early Unix systems. They offer a glimpse into the evolution of the shell and its features, as well as the challenges and considerations faced by early Unix developers.