The Smithsonian Magazine article, "Can You Read This Cursive Handwriting? The National Archives Wants Your Help," elucidates a fascinating citizen science initiative spearheaded by the National Archives and Records Administration (NARA). This ambitious undertaking seeks to enlist the aid of the public in transcribing a vast and historically significant collection of handwritten documents, many of which are penned in the elegant, yet often challenging to decipher, script known as cursive. These documents, representing a crucial segment of America's documentary heritage, offer invaluable insights into the past, covering a wide array of topics from mundane daily life to pivotal moments in national history. However, due to the sheer volume of material and the specialized skill required for accurate interpretation of cursive script, the National Archives faces a monumental task in making these records readily accessible to researchers and the public alike.
The article details how this crowdsourced transcription effort, facilitated through a dedicated online platform, empowers volunteers to contribute meaningfully to the preservation and accessibility of these historical treasures. By painstakingly deciphering the often intricate loops and flourishes of cursive handwriting, participants play a crucial role in transforming these handwritten artifacts into searchable digital text. This digitization process not only safeguards these fragile documents from the ravages of time and physical handling but also democratizes access to historical information, allowing anyone with an internet connection to explore and learn from the rich narratives contained within these primary source materials. The article emphasizes the collaborative nature of the project, highlighting how the collective efforts of numerous volunteers can achieve what would be an insurmountable task for archivists alone. Furthermore, it underscores the inherent value of cursive literacy, demonstrating how this seemingly antiquated skill remains relevant and vital for unlocking the secrets held within historical archives. The initiative, therefore, serves not only as a means of preserving historical records but also as a testament to the power of community engagement and the enduring importance of paleographic skills in the digital age.
The website "WTF Happened In 1971?" presents a collection of graphs depicting various socio-economic indicators in the United States, primarily spanning from the post-World War II era to the present day. The overarching implication of the website is that a significant inflection point occurred around 1971, after which several key metrics seemingly diverged from their previously established trends. This divergence often manifests as a decoupling between productivity and compensation, a stagnation or decline in real wages, and a dramatic increase in metrics related to cost of living, such as housing prices and healthcare expenses.
The website does not explicitly propose a singular causative theory for this shift. Instead, it presents a compelling visual argument for the existence of a turning point in American economic history, inviting viewers to draw their own conclusions. The graphs showcase a variety of indicators, including, but not limited to:
Productivity and real hourly wages: These graphs illustrate a strong correlation between productivity and wages prior to 1971, with both rising in tandem. Post-1971, however, productivity continues to climb while real wages stagnate, creating a widening gap. This suggests that the benefits of increased productivity were no longer being equitably distributed to workers.
Housing prices and housing affordability: The website depicts a sharp escalation in housing costs relative to income after 1971. This is visualized through metrics like the house price-to-income ratio and the number of years of median income required to purchase a median-priced house, two views of the same quantity (see the short worked example after this list). This indicates a growing difficulty for the average American to afford housing.
Healthcare costs: Similar to housing, the cost of healthcare exhibits a dramatic increase after 1971, becoming a progressively larger burden on household budgets.
Debt levels (both household and national): The website presents graphs showcasing a substantial rise in debt levels, particularly after 1971. This includes metrics like household debt as a percentage of disposable income and the national debt as a percentage of GDP, suggesting a growing reliance on borrowing to maintain living standards.
College costs and college tuition as a percentage of median income: The cost of higher education undergoes a significant increase post-1971, making college less accessible for many.
Income inequality: The website visually represents the growing disparity in income distribution, with the share of wealth held by the top 1% increasing significantly after 1971, further exacerbating the economic challenges faced by the majority of the population.
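To pin down the affordability metrics named above, here is a tiny worked sketch using entirely hypothetical figures; the site's underlying data is not reproduced here.

```python
# Hypothetical inputs, chosen only for illustration (not data from the site).
median_house_price = 400_000       # USD
median_household_income = 80_000   # USD per year

# The price-to-income ratio and "years of median income to buy a median house"
# are the same quantity, read either as a ratio or as a duration.
ratio = median_house_price / median_household_income
print(f"price-to-income ratio: {ratio:.1f}")          # 5.0
print(f"years of median income needed: {ratio:.1f}")  # 5.0 years
```

A rise in this ratio over time is exactly the divergence the site's housing charts display.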
In essence, "WTF Happened In 1971?" visually argues that a fundamental change occurred in the American economy around that year, marked by the decoupling of productivity and wages, exploding costs of essential goods and services like housing and healthcare, and a widening gap between the wealthy and the rest of the population. The website refrains from explicitly attributing this shift to any specific cause, leaving interpretation and analysis to the observer.
The Hacker News post titled "WTF Happened in 1971?" generated a significant amount of discussion, with many commenters offering various perspectives on the claims made in the linked article. While some expressed skepticism about the presented correlations, others offered supporting arguments, additional historical context, and alternative interpretations.
A recurring theme in the comments was the acknowledgment that 1971 was a pivotal year with numerous significant global events. The end of the Bretton Woods system, under which currencies were pegged to the US dollar and the dollar was convertible to gold until the "Nixon shock" of August 1971, was frequently cited as a key factor contributing to the economic shifts highlighted in the article. Commenters debated the long-term consequences of this change, with some arguing it led to increased financial instability and inequality.
Several commenters pointed out potential flaws in the article's methodology, suggesting that simply correlating various metrics with the year 1971 doesn't necessarily imply causation. They argued that other factors, such as the oil crisis of the 1970s, increasing globalization, and technological advancements, could have contributed to the observed trends. Some suggested that focusing solely on 1971 oversimplifies a complex historical period and that a more nuanced analysis is required.
Some commenters offered alternative explanations for the trends shown in the article. One commenter proposed that the post-World War II economic boom, driven by reconstruction and pent-up demand, was naturally slowing down by the early 1970s. Another suggested that the rise of neoliberal economic policies, beginning in the 1970s and 80s, played a significant role in the growing income inequality.
Other commenters focused on the social and cultural changes occurring around 1971. They mentioned the rise of counterculture movements, the changing role of women in society, and the increasing awareness of environmental issues as potential factors influencing the trends discussed. Some argued that these societal shifts were intertwined with the economic changes, creating a complex and multifaceted picture of the era.
A few commenters delved deeper into specific data points presented in the article, challenging their accuracy or offering alternative interpretations. For example, the discussion around productivity and wages prompted debate about how these metrics are measured and whether they accurately reflect the lived experiences of workers.
While the article itself presents a particular narrative, the comments on Hacker News offer a broader range of perspectives and interpretations. They highlight the complexities of historical analysis and the importance of considering multiple factors when examining societal shifts. The discussion serves as a valuable reminder that correlation does not equal causation and encourages a critical approach to understanding historical trends.
The website "IRC Driven" presents itself as a modern indexing and search engine specifically designed for Internet Relay Chat (IRC) networks. It aims to provide a comprehensive and readily accessible archive of public IRC conversations, making them searchable and browsable for various purposes, including research, historical analysis, community understanding, and retrieving information shared within these channels.
The service operates by connecting to IRC networks and meticulously logging the public channels' activity. This logged data is then processed and indexed, allowing users to perform granular searches based on keywords, specific channels, date ranges, and even nicknames. The site highlights its commitment to transparency by offering clear explanations of its data collection methods, privacy considerations, and its dedication to respecting robots.txt and similar exclusion protocols to avoid indexing channels that prefer not to be archived.
IRC Driven emphasizes its modern approach, contrasting it with older, often outdated IRC logging methods. This modernity is reflected in its user-friendly interface, its robust search functionality, and the comprehensive scope of its indexing. The site also stresses its scalability and ability to handle the vast volume of data generated by active IRC networks.
The project is presented as a valuable resource for researchers studying online communities, individuals seeking historical context or specific information from IRC discussions, and community members looking for a convenient way to review past conversations. It's posited as a tool that can facilitate understanding of evolving online discourse and serve as a repository of knowledge shared within the IRC ecosystem. The website encourages users to explore the indexed channels and utilize the search features to discover the wealth of information contained within the archives.
The Hacker News post for "IRC Driven – modern IRC indexing site and search engine" has generated several comments, discussing various aspects of the project.
Several users expressed appreciation for the initiative, highlighting the value of searchable IRC logs for retrieving past information and context. One commenter mentioned the historical significance of IRC and the wealth of knowledge contained within its logs, lamenting the lack of good indexing solutions and seeing IRC Driven as filling that gap.
Some users discussed the technical challenges involved in such a project, particularly concerning the sheer volume of data and the different logging formats used across various IRC networks and clients. One user questioned the handling of logs with personally identifiable information, raising privacy concerns. Another user inquired about the indexing process, specifically whether the site indexes entire networks or allows users to submit their own logs.
The project's open-source nature and the use of SQLite were praised by some commenters, emphasizing the transparency and ease of deployment. This sparked a discussion about the scalability of SQLite for such a large dataset, with one user suggesting alternative database solutions.
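Since the thread specifically names SQLite, here is a minimal sketch of what full-text indexing of IRC logs on top of SQLite's FTS5 extension might look like. The table layout, column names, and sample data are assumptions made for illustration; IRC Driven's actual schema is not described in the thread.

```python
# Minimal SQLite/FTS5 sketch of IRC log indexing and per-channel keyword search.
import sqlite3

conn = sqlite3.connect("irc_index.db")
conn.execute("""
    CREATE VIRTUAL TABLE IF NOT EXISTS messages USING fts5(
        network, channel, nick, ts UNINDEXED, body
    )
""")

# Index one logged line (in practice this would be a bulk ingest pipeline).
conn.execute(
    "INSERT INTO messages VALUES (?, ?, ?, ?, ?)",
    ("libera", "#python", "alice",
     "2024-01-15T12:00:00Z", "has anyone benchmarked fts5 at scale?"),
)
conn.commit()

# Granular search: a keyword query restricted to a single channel,
# mirroring the per-channel filtering described earlier.
rows = conn.execute(
    "SELECT nick, ts, body FROM messages WHERE messages MATCH ? AND channel = ?",
    ("benchmarked", "#python"),
).fetchall()
print(rows)
```

Whether a single FTS5 index of this shape scales to multi-network archives is precisely the question raised in the thread; sharding by network or channel would be one natural refinement.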
Several comments focused on potential use cases, including searching for specific code snippets, debugging information, or historical project discussions. One user mentioned using the site to retrieve a lost SSH key, demonstrating its practical value. Another commenter suggested features like user authentication and the ability to filter logs by channel or date range.
There's a thread discussing the differences and overlaps between IRC Driven and other similar projects like Logs.io and Pine. Users compared the features and functionalities of each, highlighting the unique aspects of IRC Driven, such as its decentralized nature and focus on individual channels.
A few users shared their personal experiences with IRC logging and indexing, recounting past attempts to build similar solutions. One commenter mentioned the difficulties in parsing different log formats and the challenges of maintaining such a system over time.
Finally, some comments focused on the user interface and user experience of IRC Driven. Suggestions were made for improvements, such as adding syntax highlighting for code snippets and improving the search functionality.
Chris Siebenmann's blog post, "The history and use of /etc/glob in early Unixes," delves into the historical context and functionality of /etc/glob, the external program responsible for wildcard expansion in Version 6 Unix and its predecessors. Siebenmann begins by highlighting the limited disk space and memory of these early systems, constraints that made it attractive to keep the shell itself as small as possible. Offloading wildcard expansion to a separate program was one such economy: the shell contained no globbing code at all and simply delegated the work to /etc/glob.
The post explains the operation of /etc/glob in detail. When a command line contained an unescaped wildcard character such as *, ?, or [, the shell did not expand it itself. Instead, it executed /etc/glob, passing along the command name and the raw arguments. glob matched the patterns against the filesystem, built the expanded argument list, and then exec'd the actual command with that list, so the command being run never saw the wildcards themselves.
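To make the handoff concrete, here is a minimal sketch in Python, with glob.glob and os.execvp standing in for what the shell and /etc/glob did in C. It illustrates the mechanism described above; it is not a reconstruction of the original source, and the historical program's edge cases (quoting, exact error messages) are simplified.

```python
# Sketch of the shell -> /etc/glob -> command handoff. Illustrative only.
import os
import sys
from glob import glob

def etc_glob(argv):
    """argv[0] is the command; expand wildcard arguments, then exec the command."""
    if not argv:
        sys.exit("usage: etc_glob.py command [args...]")
    expanded = [argv[0]]
    saw_pattern = False
    n_matches = 0
    for arg in argv[1:]:
        if any(c in arg for c in "*?["):   # a pattern: expand it against the filesystem
            saw_pattern = True
            matches = sorted(glob(arg))
            expanded.extend(matches)
            n_matches += len(matches)
        else:                              # an ordinary argument passes through untouched
            expanded.append(arg)
    if saw_pattern and n_matches == 0:
        sys.exit("No match")               # the historical glob gave up here rather than
                                           # passing unmatched patterns to the command
    os.execvp(expanded[0], expanded)       # replace this process with the real command

if __name__ == "__main__":
    etc_glob(sys.argv[1:])                 # e.g.: python etc_glob.py ls '*.c'
```

Quoting the pattern in the usage example matters when testing from a modern shell, which would otherwise expand *.c itself before the sketch ever runs, which is exactly the behavioral shift the post describes.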
Siebenmann illustrates the practical consequences of this design. The effect is easiest to see with a command such as ls *.c: the shell hands the unexpanded pattern to /etc/glob, and by the time ls actually runs, its argument list already contains the matching filenames; the program itself never sees a wildcard. One behavior that surprises modern readers is that the historical glob treated a set of patterns matching no files as an error ("No match") rather than passing the patterns through literally, as later Bourne-style shells do.
The post further discusses the historical evolution of this arrangement. While wildcard expansion initially lived in a standalone program, the functionality was eventually incorporated directly into the shell itself in later Unix versions, with the Version 7 Bourne shell performing its own expansion. This integration streamlined command processing and obviated the need for a separate program; the transition likely reflected both loosening memory constraints and a desire for a more unified approach to command interpretation.
Finally, Siebenmann connects /etc/glob to the present. The program is the reason wildcard expansion is still called "globbing," and the name survives in interfaces such as the glob(3) library routine. While modern shells expand patterns internally, the essential behavior, wildcards being expanded before the command runs rather than inside it, is inherited directly from this early design. The post concludes by noting the enduring influence of /etc/glob on contemporary features, a historical precursor to the command-line capabilities we take for granted in modern shells.
The Hacker News post titled "The history and use of /etc/glob in early Unixes" has generated a moderate discussion with several interesting comments. The comments primarily focus on historical context, technical details related to globbing, and personal anecdotes about using or encountering this somewhat obscure Unix feature.
One commenter provides further historical context, noting that the Version 6 Unix shell did not itself support globbing, i.e., the expansion of wildcard characters like * and ?. Instead, /etc/glob was used as an external program to perform this expansion. This detail highlights the evolution of the shell and its built-in capabilities over time.
Another commenter elaborates on the mechanics of how /etc/glob interacted with the shell: the shell would identify commands containing an unescaped wildcard, then execute /etc/glob to expand the wildcards, and the expanded argument list was then passed to the actual command being executed. This clarifies the role of /etc/glob as an intermediary for handling wildcards in older Unix systems.
A subsequent comment thread discusses the use of set -f (or noglob) in modern shells to disable wildcard expansion. The connection illustrates that while globbing is now integrated into the shell itself, mechanisms to disable it still exist, echoing the older behavior in which globbing was not a built-in shell feature.
Someone shares a personal anecdote about encountering remnants of /etc/glob in a much later version of Unix (4.3BSD). Although no longer functional, the file's presence served as a historical artifact, a reminder of earlier Unix implementations.
Another comment explains the security implications of directly executing the output of programs in the shell, highlighting that substituting the output of /etc/glob into the command line could lead to command injection vulnerabilities if filenames contained special characters. This observation points to the potential risks associated with early implementations of globbing.
A commenter also mentions the influence of Multics on early Unix, suggesting that some of these design choices might have been inherited or influenced by Multics' features. This provides a broader context by linking the development of Unix to its predecessors.
Finally, a few comments touch upon alternative globbing mechanisms like the use of backticks, further enriching the discussion by presenting different approaches to handling filename expansion in older shells.
Overall, the comments on the Hacker News post provide valuable insights into the historical context, technical details, and practical implications of /etc/glob in early Unix systems. They offer a glimpse into the evolution of the shell and its features, as well as the challenges and considerations faced by early Unix developers.
A new, specialized search engine and Freedom of Information Act (FOIA) request facilitator has been launched, specifically designed to aid in the retrieval of United States veteran records. This resource, hosted at birls.org, aims to streamline the often complex and time-consuming process of obtaining these vital documents. Traditionally, requesting information through the FOIA has meant navigating bureaucratic hurdles: locating the correct agency, understanding each agency's specific requirements, and enduring lengthy waiting periods. This tool seeks to mitigate those challenges by providing a user-friendly interface for searching existing records and a streamlined, web-based system for submitting FOIA requests, using fax to interact with government agencies. The implied benefit is a more accessible and efficient way for veterans, their families, researchers, and other interested parties to obtain information about military service. The website itself presumably hosts a searchable database of already digitized veteran records, allowing users to find information without filing a formal request. For records not yet digitized or publicly available, the integrated FOIA system purports to simplify the process by automatically generating and submitting the necessary paperwork via fax to the relevant agency, potentially reducing processing time and administrative overhead for the user. The core service is offered free of charge, further lowering the barrier to entry, though the comment discussion below touches on a paid expedited option.
The Hacker News post titled "Show HN: New search engine and free-FOIA-by-fax-via-web for US veteran records" linking to birls.org generated several comments, largely focusing on the practicalities and potential impact of the service.
Several commenters expressed appreciation for the service, highlighting the difficulty and often prohibitive cost usually associated with obtaining veteran records. They saw this as a valuable tool for veterans, their families, and researchers seeking information. The simplification of the FOIA request process via fax automation was specifically praised.
Some questioned the legality of charging for expedited processing of FOIA requests, a feature mentioned on the site. This sparked a discussion around the nuances of FOIA law and whether the service was charging for the expedited processing itself or for the value-added service of preparing and submitting the request.
Technical aspects of the service were also discussed. One commenter inquired about the search engine's underlying data source and indexing methods. Another questioned the choice of fax as the communication medium, suggesting more modern, potentially more efficient methods. The reliance on fax was explained by the creator as a workaround for government agencies that are slow to adopt modern technology, particularly regarding FOIA requests.
The creator of the website actively participated in the discussion, responding to questions and clarifying the service's functionality and purpose. They explained the motivation behind the project, emphasizing the desire to make veteran records more accessible. They also addressed the pricing model, stating the fee was for the service provided and not for the expedited processing itself, which is at the discretion of the government agency.
Overall, the comments section reflected a mixture of enthusiasm for the service's potential to simplify access to veteran records, queries about its technical implementation and legal aspects, and appreciation for the creator's initiative in tackling a complex bureaucratic process. The discussion highlights the challenges of navigating the FOIA process and the need for services that can bridge the gap between individuals and government information.
The Hacker News thread for the National Archives transcription article (https://news.ycombinator.com/item?id=42745334) drew roughly 175 comments. HN commenters were largely enthusiastic about the transcription project, viewing it as a valuable contribution to historical preservation and a fun challenge. Several users shared their personal experiences with cursive, lamenting its decline in education and expressing nostalgia for its use. Some questioned the choice of Zooniverse as the platform, citing usability issues and suggesting alternatives like FromThePage. A few technical points were raised about the difficulty of deciphering 18th and 19th-century handwriting, especially with variations in style and ink, and the potential benefits of using AI/ML for pre-processing or assisting with transcription. There was also a discussion about the legal and historical context of the documents, including the implications of slavery and property ownership.
The Hacker News post "Can you read this cursive handwriting? The National Archives wants your help" generated a moderate number of comments, mostly focusing on the practicality of the project and the state of cursive education.
Several commenters expressed skepticism about the crowdsourcing approach's efficacy, questioning the accuracy and efficiency of relying on volunteers. One commenter pointed out the potential for "trolling and garbage entries," while another suggested that employing a small group of trained paleographers would be more effective. This led to a small discussion about the potential cost-effectiveness of different approaches, with some arguing that the crowdsourcing route, even with its flaws, is likely more economical.
A recurring theme was the decline of cursive writing skills. Many commenters lamented the loss of this skill, expressing concern about the ability of future generations to access historical documents. Several shared personal anecdotes, some emphasizing cursive's importance in their education and others admitting they now rarely use it. One commenter even suggested that teaching cursive should be mandatory, reflecting a nostalgic view of its role in education.
A few commenters discussed the technical aspects of the project, including the platform used for transcription (Zooniverse) and the potential for using AI/ML to aid in the process. One individual with experience in handwriting recognition suggested that machine learning could significantly help but acknowledged the challenges posed by variations in historical handwriting.
A couple of users offered practical tips for those interested in participating, such as focusing on deciphering keywords and context rather than getting bogged down in individual letters. Others highlighted the importance of the project, emphasizing the value of making historical documents accessible to the public.
Finally, some commenters simply expressed their enjoyment of the challenge and their intention to participate, demonstrating a genuine interest in contributing to the preservation of historical records. While not a large number of comments, the discussion touched upon several key aspects of the project, from its feasibility and methodology to the broader implications for the preservation of historical documents and the changing landscape of handwriting skills.