Story Details

  • An alternative construction of Shannon entropy

    Posted: 2024-11-13 16:45:13

    This blog post presents a different perspective on deriving Shannon entropy, distinct from the traditional axiomatic approach. Instead of starting with desired properties and deducing the entropy formula, it begins with a fundamental problem: quantifying the average number of bits needed to optimally represent outcomes from a probabilistic source. The author argues this approach provides a more intuitive and grounded understanding of why the entropy formula takes the shape it does.

    The post meticulously constructs this derivation. It starts by considering a source emitting symbols from a finite alphabet, each with an associated probability. The core idea is to group these symbols into sets based on their probabilities, specifically targeting sets whose cumulative probability is a power of two (1/2, 1/4, 1/8, and so on). This allows for efficient representation with binary codes, since each such set can be uniquely identified by a binary prefix.
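
    As a concrete illustration of the power-of-two idea (a hypothetical toy example, not taken from the post itself): if every probability in a small alphabet happens to be an exact power of two, then giving each symbol a codeword of length -log2(p) bits already yields a valid prefix code. A minimal Python sketch:

        import math

        # Hypothetical toy source; every probability is an exact power of two.
        probs = {"a": 1/2, "b": 1/4, "c": 1/8, "d": 1/8}

        # Codeword length for each symbol: -log2(p) bits.
        lengths = {s: int(-math.log2(p)) for s, p in probs.items()}

        # Kraft's inequality: sum(2**-l) <= 1 guarantees a prefix code exists.
        assert sum(2 ** -l for l in lengths.values()) <= 1

        # Assign codewords canonically: shortest lengths first, counting upward.
        code, next_val, prev_len = {}, 0, 0
        for s, l in sorted(lengths.items(), key=lambda kv: kv[1]):
            next_val <<= l - prev_len          # extend the running prefix
            code[s] = format(next_val, f"0{l}b")
            next_val += 1
            prev_len = l

        print(code)  # {'a': '0', 'b': '10', 'c': '110', 'd': '111'}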

    The process begins with the most probable symbol and continues iteratively, grouping less probable symbols into progressively larger sets until all symbols are assigned. The author demonstrates how this grouping mirrors the process of building a Huffman code, a well-known algorithm for creating optimal prefix-free codes.
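
    For comparison, here is a compact Huffman construction in Python (a standard textbook sketch, not code from the post); on power-of-two probabilities it reproduces exactly the -log2(p) codeword lengths from the grouping above:

        import heapq
        from itertools import count

        def huffman_lengths(probs):
            """Return Huffman codeword lengths for a {symbol: probability} dict."""
            tiebreak = count()  # keeps heap entries comparable when probabilities tie
            heap = [(p, next(tiebreak), {s: 0}) for s, p in probs.items()]
            heapq.heapify(heap)
            while len(heap) > 1:
                p1, _, d1 = heapq.heappop(heap)  # two least probable subtrees
                p2, _, d2 = heapq.heappop(heap)
                merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
                heapq.heappush(heap, (p1 + p2, next(tiebreak), merged))
            return heap[0][2]

        print(huffman_lengths({"a": 1/2, "b": 1/4, "c": 1/8, "d": 1/8}))
        # {'a': 1, 'b': 2, 'c': 3, 'd': 3} -- exactly -log2(p) for each symbol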

    The post then carefully analyzes the expected number of bits required to encode a symbol under this scheme: the sum, over all sets, of the number of bits used to address a set (roughly the negative base-2 logarithm of that set's cumulative probability) weighted by the cumulative probability of the symbols in that set.
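
    In symbols (the notation here is mine, not necessarily the post's): if the alphabet is partitioned into sets S_1, ..., S_m with cumulative probabilities P_j, and set S_j is addressed with roughly -log2(P_j) bits, then the expected number of bits spent identifying a set is

    \[
      \mathbb{E}[\ell] \;=\; \sum_{j=1}^{m} P_j\,\ell_j \;\approx\; -\sum_{j=1}^{m} P_j \log_2 P_j,
      \qquad P_j = \sum_{x \in S_j} p(x).
    \]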

    Through a series of mathematical manipulations and approximations, leveraging the properties of logarithms and the behavior of probabilities as the number of samples increases, the author shows that this expected number of bits converges to the familiar Shannon entropy formula: the negative sum of each symbol's probability multiplied by the logarithm base 2 of that probability.
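
    Written out, the limiting formula is the Shannon entropy

    \[
      H(X) \;=\; -\sum_{i} p_i \log_2 p_i \quad \text{bits per symbol.}
    \]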

    Crucially, the derivation highlights the relationship between optimal coding and entropy. It shows that Shannon entropy is the theoretical lower bound on the average number of bits needed to encode messages from a given source, a bound that optimal schemes such as Huffman coding approach and, when all probabilities are powers of two, attain exactly. This construction emphasizes that entropy is not just a measure of uncertainty or information content but is intrinsically linked to efficient data compression and representation. The post concludes by suggesting that this alternative construction offers a more concrete, less abstract understanding of Shannon entropy's significance in information theory.
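
    A quick numeric illustration of that lower bound, on a made-up (non-power-of-two) distribution rather than anything from the post: a Huffman code's average length always lands within one bit of the entropy, i.e. H <= L < H + 1.

        import math

        def entropy(probs):
            return -sum(p * math.log2(p) for p in probs.values() if p > 0)

        probs = {"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1}  # hypothetical source
        lengths = {"a": 1, "b": 2, "c": 3, "d": 3}        # Huffman lengths for this source
        avg_len = sum(probs[s] * lengths[s] for s in probs)

        print(f"H = {entropy(probs):.3f} bits, Huffman average = {avg_len:.2f} bits")
        # H = 1.846 bits, Huffman average = 1.90 bits -- within one bit of the bound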

    Summary of Comments (11)
    https://news.ycombinator.com/item?id=42127609

    The Hacker News post titled "An alternative construction of Shannon entropy," linking to an article exploring a different way to derive Shannon entropy, has generated a moderate discussion with several interesting comments.

    One commenter highlights the pedagogical value of the approach presented in the article. They appreciate how it starts with desirable properties for a measure of information and derives the entropy formula from them, contrasting this with the more common textbook presentation, where the formula is given first and then shown to satisfy those properties. They believe this method makes the concept of entropy more intuitive.

    Another commenter focuses on the historical context, mentioning that Shannon's original derivation was indeed based on desired properties. They point out that the article's approach is similar to the one Shannon employed, further reinforcing the pedagogical benefit of seeing the formula emerge from its intended properties rather than the other way around. They link to a relevant page within a book on information theory which seemingly discusses Shannon's original derivation.

    A third commenter questions the novelty of the approach, suggesting that it seems similar to standard treatments of the topic. They wonder if the author might be overselling the "alternative construction" aspect. This sparks a brief exchange with another user who defends the article, arguing that while the fundamental ideas are indeed standard, the specific presentation and the emphasis on the grouping property could offer a fresh perspective, especially for educational purposes.

    Another commenter delves into more technical details, discussing the concept of entropy as a measure of average code length and relating it to Kraft's inequality. They connect this idea to the article's approach, demonstrating how the desired properties lead to a formula that aligns with the coding interpretation of entropy.
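
    For reference, the standard statement being alluded to (my phrasing, not the commenter's): any binary prefix code with codeword lengths \ell_i must satisfy Kraft's inequality, and that constraint forces the average codeword length to be at least the entropy,

    \[
      \sum_i 2^{-\ell_i} \le 1
      \qquad\Longrightarrow\qquad
      \sum_i p_i\,\ell_i \;\ge\; -\sum_i p_i \log_2 p_i \;=\; H(p).
    \]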

    Finally, a few comments touch upon related concepts such as cross-entropy and Kullback-Leibler divergence, briefly extending the discussion beyond the scope of the original article. One commenter gives an example of entropy's practical use, noting that minimizing log-loss when training a neural network can be read as pushing the predicted distribution as close as possible to the true distribution.
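
    The standard identity behind that last remark (again in my notation, not quoted from the thread): the cross-entropy between the true distribution p and the predicted distribution q decomposes as

    \[
      H(p, q) \;=\; -\sum_i p_i \log_2 q_i \;=\; H(p) \,+\, D_{\mathrm{KL}}(p \,\|\, q),
    \]

    so minimizing log-loss (cross-entropy) in q can only shrink the KL term, which is zero exactly when q equals p.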

    Overall, the comments section provides a valuable supplement to the article, offering different perspectives on its significance, clarifying some technical points, and connecting it to broader concepts in information theory. While not groundbreaking, the discussion reinforces the importance of pedagogical approaches that derive fundamental formulas from their intended properties.