This post advocates for using Ruby's built-in features, specifically Struct, to create value objects. It argues against using gems like Virtus or hand-rolling complex classes, emphasizing simplicity and performance. The author demonstrates how Struct provides concise syntax for defining immutable attributes, automatic equality comparisons based on attribute values, and a convenient way to represent data structures focused on holding values rather than behavior. This approach aligns with Ruby's philosophy of minimizing boilerplate and leveraging existing tools for common patterns. By using Struct, developers can create lightweight, efficient value objects without sacrificing readability or conciseness.
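A minimal sketch of the pattern being described, using illustrative names that are not from the original post:

```ruby
# A small value object built on Struct: instances are compared by value.
Money = Struct.new(:amount, :currency) do
  def to_s
    format("%.2f %s", amount, currency)
  end
end

a = Money.new(10.0, "USD")
b = Money.new(10.0, "USD")
a == b      # => true, equality is based on attribute values
a.frozen?   # => false, freeze instances if you want true immutability
a.freeze
```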
Jazco's post argues that Bluesky's "lossy" timelines, where some posts aren't delivered to all followers, are actually beneficial. Instead of striving for perfect delivery like traditional social media, Bluesky embraces the imperfection. This lossiness, according to Jazco, creates a more relaxed posting environment, reduces the pressure for virality, and encourages genuine interaction. It fosters a feeling of casual conversation rather than a performance, making the platform feel more human and less like a broadcast. This approach prioritizes the experience of connection over complete information dissemination.
HN users discussed the tradeoffs of Bluesky's sometimes-lossy timeline, with many agreeing that occasional missed posts are acceptable for a more performant, decentralized system. Some compared it favorably to email, which also isn't perfectly reliable but remains useful. Others pointed out that perceived reliability in centralized systems is often an illusion, as data loss can still occur. Several commenters suggested technical improvements or alternative approaches like local-first software or better synchronization mechanisms, while others focused on the philosophical implications of accepting imperfection in technology. A few highlighted the importance of clear communication about potential data loss to manage user expectations. There's also a thread discussing the differences between "lossy" and "eventually consistent," with users arguing about the appropriate terminology for Bluesky's behavior.
This post outlines essential PostgreSQL best practices for improved database performance and maintainability. It emphasizes using appropriate data types, including choosing smaller integer types when possible and avoiding generic text fields in favor of more specific types like varchar or domain types. On indexing, it advocates for indexes on frequently queried columns and foreign keys while cautioning against over-indexing. For queries, the guide recommends using EXPLAIN to analyze performance, writing selective WHERE clauses, and avoiding leading wildcards in LIKE queries. The post also champions prepared statements for security and performance gains and suggests connection pooling for efficient resource utilization. Finally, it underscores the importance of regular vacuuming to reclaim dead tuples and prevent bloat.
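As a rough illustration of two of those recommendations, here is a sketch using Ruby's pg gem; the database, table, and column names are hypothetical and not from the post:

```ruby
require "pg"

conn = PG.connect(dbname: "app_db")  # hypothetical database

# Prepared statement: parsed and planned once, then reused with parameters,
# which avoids SQL injection and repeated planning overhead.
conn.prepare("user_by_email", "SELECT id, name FROM users WHERE email = $1")
user = conn.exec_prepared("user_by_email", ["alice@example.com"]).first
puts user["name"] if user

# EXPLAIN ANALYZE shows how the planner actually executes a query,
# e.g. whether an index on orders.user_id is being used.
plan = conn.exec("EXPLAIN ANALYZE SELECT * FROM orders WHERE user_id = 42")
plan.each { |row| puts row["QUERY PLAN"] }
```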
Hacker News users generally praised the linked PostgreSQL best practices article for its clarity and conciseness, covering important points relevant to real-world usage. Several commenters highlighted the advice on indexing as particularly useful, especially the emphasis on partial indexes and understanding query plans. Some discussed the trade-offs of using UUIDs as primary keys, acknowledging their benefits for distributed systems but also pointing out potential performance downsides. Others appreciated the recommendations on using ENUM types and the caution against overusing triggers. A few users added further suggestions, such as using pg_stat_statements for performance analysis and considering connection pooling for improved efficiency.
Latacora's blog post "How (not) to sign a JSON object" cautions against signing JSON by stringifying it before applying a signature. This approach is vulnerable to attacks that modify whitespace or key ordering, which changes the string representation without altering the JSON's semantic meaning. The correct method involves canonicalizing the JSON object first – transforming it into a standardized, consistent byte representation – before signing. This ensures the signature validates only identical JSON objects, regardless of superficial formatting differences. The post uses examples to demonstrate the vulnerabilities of naive stringification and advocates using established JSON Canonicalization Schemes (JCS) for robust and secure signing.
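As a rough sketch of the idea (not the full JCS algorithm, which also pins down number and string serialization), canonicalization followed by signing might look like this in Ruby:

```ruby
require "json"
require "openssl"

# Recursively sort object keys so that semantically equal JSON documents
# serialize to the same bytes. Real JCS (RFC 8785) also normalizes numbers
# and string escaping; use an established implementation in practice.
def canonicalize(value)
  case value
  when Hash  then value.sort.to_h { |k, v| [k, canonicalize(v)] }
  when Array then value.map { |v| canonicalize(v) }
  else value
  end
end

def sign(payload, key)
  OpenSSL::HMAC.hexdigest("SHA256", key, JSON.generate(canonicalize(payload)))
end

a = { "b" => 1, "a" => [1, 2] }
b = { "a" => [1, 2], "b" => 1 }          # same meaning, different key order
sign(a, "secret") == sign(b, "secret")   # => true after canonicalization
```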
HN commenters largely agree with the author's points about the complexities and pitfalls of signing JSON objects. Several highlighted the importance of canonicalization before signing, with some mentioning specific libraries like JWS and json-canonicalize to ensure consistent formatting. The discussion also touches upon alternatives like JWT (JSON Web Tokens) and COSE (CBOR Object Signing and Encryption) as potentially better solutions, particularly JWT for its ease of use in web contexts. Some commenters delve into the nuances of JSON's flexibility, which can make secure signing difficult, such as varying key order and whitespace handling. A few also caution against rolling your own cryptographic solutions and advocate for using established libraries where possible.
This spreadsheet documents a personal file system designed to mitigate data loss at home. It outlines a tiered backup strategy using various methods and media, including cloud storage (Google Drive, Backblaze), local network drives (NAS), and external hard drives. The system emphasizes redundancy by storing multiple copies of important data in different locations, and incorporates a structured approach to file organization and a regular backup schedule. The author categorizes their data by importance and sensitivity, employing different strategies for each category, reflecting a focus on preserving critical data in the event of various failure scenarios, from accidental deletion to hardware malfunction or even house fire.
Several commenters on Hacker News expressed skepticism about the practicality and necessity of the "Home Loss File System" presented in the linked Google Doc. Some questioned the complexity introduced by the system, suggesting simpler solutions like cloud backups or RAID would be more effective and less prone to user error. Others pointed out potential vulnerabilities related to security and data integrity, especially concerning the proposed encryption method and the reliance on physical media exchange. A few commenters questioned the overall value proposition, arguing that the risk of complete home loss, while real, might be better mitigated through insurance rather than a complex custom file system. The discussion also touched on potential improvements to the system, such as using existing decentralized storage solutions and more robust encryption algorithms.
HN commenters largely criticized the article for misusing or misunderstanding the term "value object." They argued that true value objects are defined by their attributes and compared by value, not identity, using examples like 5 == 5 even if they are different instances of the integer 5. They pointed out that the author's use of Comparable and overriding == based on specific attributes leaned more towards a Data Transfer Object (DTO) or a record. Some questioned the practical value of the approach presented, suggesting simpler alternatives like using structs or plain Ruby objects with attribute readers. A few commenters offered different ways to implement proper value objects in Ruby, including using the Values gem and leveraging immutable data structures.

The Hacker News post titled "How to create value objects in Ruby – the idiomatic way" has generated several comments discussing various aspects of value objects in Ruby and alternative approaches.
One commenter points out that using Struct for value objects can be problematic when dealing with inheritance, particularly when attributes are added to subclasses. They suggest using Data.define as a potential solution to this issue, as it creates immutable objects by default. This commenter also mentions that the Comparable module provides a more concise way to define equality and comparison methods based on the value object's attributes. They provide a code example illustrating this approach.
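A sketch of what that commenter describes, with illustrative class names that aren't from the thread:

```ruby
# Data.define (Ruby 3.2+) creates frozen value objects with value-based equality.
Point = Data.define(:x, :y)
Point.new(x: 1, y: 2) == Point.new(x: 1, y: 2)  # => true

# Comparable derives <, <=, ==, >=, and > from a single <=> definition.
class Version
  include Comparable
  attr_reader :major, :minor

  def initialize(major, minor)
    @major = major
    @minor = minor
  end

  def <=>(other)
    [major, minor] <=> [other.major, other.minor]
  end
end

Version.new(1, 2) < Version.new(1, 10)   # => true
Version.new(1, 2) == Version.new(1, 2)   # => true, via Comparable's ==
```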
Another commenter questions the necessity of the article's approach, suggesting that a simple class with an initialize method and attribute readers would suffice in many cases. They argue against over-engineering simple value objects, emphasizing the importance of readability and maintainability. This commenter also raises the potential for performance implications when using modules like Comparable, suggesting benchmarking to determine the actual impact.

A different user focuses on the use of ::new in the original article's example, explaining that it's not required and is likely a stylistic choice. They point out that using just .new would be the more common and concise approach in Ruby.

The conversation then shifts towards a discussion of the benefits and drawbacks of using Struct versus defining a custom class. One commenter highlights that Struct can be handy for quick prototyping or when the value object is extremely simple. However, they acknowledge the limitations of Struct, such as difficulties with inheritance and the inability to easily add custom methods. Another commenter mentions using OpenStruct as an alternative, but acknowledges its own set of trade-offs, particularly regarding performance.

Finally, a commenter draws attention to the dry-struct gem from the dry-rb ecosystem, advocating for its use in creating more robust and feature-rich value objects. They specifically mention the gem's ability to handle type coercion and validation, making it a suitable option for more complex scenarios. Another comment chimes in endorsing dry-struct and adding that using it is generally superior to relying on Struct. They mention dry-struct's ability to specify types, which aids in catching errors early.
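For reference, a minimal dry-struct sketch along the lines of what those commenters describe, with hypothetical attribute names:

```ruby
require "dry-struct"

# dry-struct builds on dry-types; a Types module is the conventional setup.
module Types
  include Dry.Types()
end

class Money < Dry::Struct
  attribute :amount,   Types::Coercible::Integer  # coerces "10" to 10
  attribute :currency, Types::Strict::String      # rejects non-strings
end

Money.new(amount: "10", currency: "USD").amount   # => 10
Money.new(amount: "ten", currency: "USD")         # raises, coercion fails
```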