The blog post revisits William Benter's groundbreaking 1994 paper detailing the statistical model he used to successfully predict horse race outcomes in Hong Kong. Benter's approach went beyond simply ranking horses on past performance. He gathered a wide array of variables, recognizing the importance of factors like track condition, jockey skill, and individual horse form. His model employed advanced statistical techniques, including multinomial logit regression and careful data normalization, to weigh these factors and generate accurate probability estimates for each horse's chance of winning. This allowed him to identify profitable betting opportunities by comparing his predicted probabilities with the publicly available odds, exploiting inefficiencies in the betting market. The post highlights the rigor, depth of analysis, and innovative application of statistical methods that underpinned Benter's success, presenting it as a landmark achievement in predictive modeling.
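A minimal sketch of that core idea, assuming decimal tote odds: convert per-horse model scores into win probabilities with the multinomial-logit (softmax) form, then flag horses whose modeled probability beats the odds-implied probability. The scores, odds, and edge threshold below are hypothetical placeholders, not Benter's actual parameters.

```python
# Illustrative sketch only, not Benter's actual code. The softmax form
# matches the multinomial (conditional) logit family his paper describes;
# all numbers here are made up.
import math

def win_probabilities(scores):
    """Convert per-horse logit scores into win probabilities via softmax."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def find_value_bets(model_probs, market_odds, min_edge=0.05):
    """Flag horses whose modeled win probability exceeds the
    probability implied by the public (decimal) odds by min_edge."""
    bets = []
    for i, (p, odds) in enumerate(zip(model_probs, market_odds)):
        implied = 1.0 / odds  # decimal odds -> implied probability
        if p - implied >= min_edge:
            bets.append((i, p, implied))
    return bets

# Hypothetical three-horse race: scores from a fitted model, odds from the tote.
probs = win_probabilities([1.2, 0.4, -0.3])
print(find_value_bets(probs, market_odds=[2.1, 3.5, 6.0]))
```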
Chicago is offering an unusual investment opportunity tied to the future revenue of Bally's Chicago, the city's first casino. Investors can buy a "Chicago Casino Bond" that pays a variable rate based on a percentage of the casino's adjusted gross receipts. While it offers potentially high returns, the investment carries significant risk, as casino revenue is unpredictable. Factors like competition, economic downturns, and the quality of the casino's management could all affect payouts, and there is no guarantee of principal return. Essentially, it's a bet on the long-term success of the casino itself.
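To make the variable-rate mechanics concrete, here is a hedged sketch of how such a coupon would float with receipts. The revenue-share rate, bond count, face value, and receipts figures are invented placeholders, not the actual terms of the offering.

```python
# Hypothetical payout mechanics for a receipts-linked bond. None of these
# numbers come from the actual Bally's Chicago offering documents.
def annual_payout(agr, revenue_share, bonds_outstanding, face_value):
    """Per-bond coupon as a slice of adjusted gross receipts (AGR),
    returned both in dollars and as an effective yield on face value."""
    coupon = agr * revenue_share / bonds_outstanding
    return coupon, coupon / face_value

# If receipts swing with the economy, so does the holder's yield:
for agr in (400e6, 250e6, 100e6):  # hypothetical annual receipts
    coupon, eff_yield = annual_payout(
        agr, revenue_share=0.005, bonds_outstanding=50_000, face_value=250
    )
    print(f"AGR ${agr:,.0f}: coupon ${coupon:.2f} -> {eff_yield:.1%} yield")
```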
HN commenters are skeptical of the investment opportunity, questioning the projected 16% IRR. Several point out the inherent risks of casino ventures, citing competition, changing regulations, and reliance on optimistic revenue projections. Some highlight the unusual nature of the offering and the lack of transparency surrounding the issuer. The overall sentiment leans toward caution, with commenters advising thorough due diligence and expressing doubt that such a high return is viable in a saturated market like Chicago. Some also suggest publicly traded casino companies as a potentially safer way to invest in the sector.
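As a sanity check on a headline IRR figure, one can solve for the rate that zeroes the net present value of an assumed cash-flow stream. The purchase price, coupon stream, and ten-year horizon below are invented for illustration; they are not the offering's actual terms.

```python
# Hedged sketch: checking what cash flows a claimed 16% IRR implies.
# All cash-flow figures are fabricated placeholders.
def npv(rate, cashflows):
    """Net present value of cashflows[t] received at year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """Find the discount rate where NPV crosses zero, by bisection."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Pay $250 today; receive hypothetical $40 coupons, plus face value at year 10.
flows = [-250] + [40] * 9 + [40 + 250]
print(f"implied IRR: {irr(flows):.1%}")
```

Note how sensitive the result is to the coupon assumption: the 16% only materializes if receipts stay high for the full horizon.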
The blog post "Kelly Can't Fail" argues against the common misconception that the Kelly criterion is dangerous due to its potential for large drawdowns. It demonstrates that, under specific idealized conditions (including continuous trading and accurate knowledge of the true probability distribution), the Kelly strategy cannot go bankrupt, even when facing adverse short-term outcomes. This "can't fail" property stems from Kelly's logarithmic growth nature, which ensures eventual recovery from any finite loss. While acknowledging that real-world scenarios deviate from these ideal conditions, the post emphasizes the theoretical robustness of Kelly betting as a foundation for understanding and applying leveraged betting strategies. It concludes that the perceived risk of Kelly is often due to misapplication or misunderstanding, rather than an inherent flaw in the criterion itself.
The Hacker News comments discuss the limitations and practical challenges of applying the Kelly criterion. Several commenters point out that Kelly assumes perfect knowledge of the probability distribution of outcomes, which is rarely available in real-world scenarios. Others emphasize the difficulty of estimating one's "edge" accurately, and how even small errors can lead to substantial drawdowns. The emotional toll of large swings, even if theoretically optimal, is also discussed, with some suggesting fractional Kelly strategies as a more palatable approach. Finally, commenters raise the computational complexity of Kelly sizing for portfolios of correlated assets, which makes implementation challenging beyond simple examples. A few defend Kelly, arguing that its supposed failures often stem from misapplication or from overlooking its long-term nature.
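A hedged sketch of the fractional-Kelly trade-off raised in the thread: scaling the Kelly stake by a fraction c < 1 gives up some growth in exchange for shallower drawdowns. The win probability, odds, and horizon are arbitrary illustrative choices.

```python
# Comparing full vs. fractional Kelly on a repeated binary bet.
# All parameters are made-up illustrative values.
import random

def simulate(c, p=0.55, b=1.0, rounds=5000, seed=1):
    """Run one wealth path at c times the Kelly fraction;
    return final wealth and maximum peak-to-trough drawdown."""
    f = c * (p - (1 - p) / b)   # fractional Kelly stake
    wealth, peak, max_dd = 1.0, 1.0, 0.0
    rng = random.Random(seed)
    for _ in range(rounds):
        stake = f * wealth
        wealth += stake * b if rng.random() < p else -stake
        peak = max(peak, wealth)
        max_dd = max(max_dd, 1 - wealth / peak)
    return wealth, max_dd

for c in (1.0, 0.5, 0.25):
    w, dd = simulate(c)
    print(f"{c:.2f}x Kelly: final wealth {w:.3g}, max drawdown {dd:.0%}")
```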
Summary of Comments (45)
https://news.ycombinator.com/item?id=44105470
HN commenters discuss Bill Benter's horse racing prediction model, praising its statistical rigor and innovative approach. Several highlight the importance of feature engineering and data quality, emphasizing that Benter's edge came from meticulous data collection and refinement rather than complex algorithms. Some note the parallels to modern machine learning, while others point out the unique challenges of horse racing, like limited data and dynamic odds. A few commenters question the replicability of Benter's success today, given the increased competition and market efficiency. The ethical considerations of algorithmic gambling are also briefly touched upon.
The Hacker News post titled "Revisiting the algorithm that changed horse race betting (2023)", which links to an annotated version of Bill Benter's paper, has generated a moderate amount of discussion. Several commenters focus on the complexities and nuances of Benter's approach, moving beyond the simplified narrative often presented.
One compelling point raised is the crucial role of accurate data. Multiple comments emphasize that Benter's success wasn't solely due to a brilliant algorithm but relied heavily on obtaining and cleaning high-quality data, a task that required significant effort and resources. This highlights the often-overlooked role of data integrity in machine-learning successes. One commenter even suggests that Benter's real edge was his superior data collection and processing rather than the algorithm itself.
Another key theme revolves around the idea of diminishing returns and the efficient market hypothesis. Commenters discuss how Benter's success likely influenced the market, making it more efficient and thus harder for similar strategies to achieve the same level of profitability today. This illustrates the dynamic nature of prediction markets and how successful strategies can eventually become self-defeating. The discussion touches on the constant need for adaptation and refinement in such environments.
Some commenters delve into the technical aspects of Benter's model, mentioning the challenges of overfitting and the importance of feature selection. They acknowledge the impressive nature of building such a system in the pre-internet era with limited computational power. The discussion around feature engineering hints at the depth and complexity of Benter's work, going beyond simply plugging data into an algorithm.
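The overfitting concern lends itself to a short illustration: fit on one block of data, score on a held-out block, and pick the regularization strength by out-of-sample loss. The dataset and the twelve candidate features below are fabricated, and scikit-learn's logistic regression stands in for whatever model family one would actually use.

```python
# Hedged sketch of an out-of-sample overfitting check; the data are synthetic
# and the features are stand-ins for real handicapping variables.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 12))          # 12 candidate handicapping features
# Only the first two features carry signal; the rest are noise.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for C in (100.0, 1.0, 0.01):             # weaker to stronger L2 regularization
    model = LogisticRegression(C=C, max_iter=1000).fit(X_tr, y_tr)
    loss = log_loss(y_te, model.predict_proba(X_te))
    print(f"C={C}: held-out log loss {loss:.4f}")
```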
Finally, a few comments provide interesting anecdotes and context, such as Benter's collaboration with Alan Woods and the broader landscape of quantitative horse-race betting. These comments enrich the discussion by providing a historical perspective and highlighting the collaborative nature of such endeavors.
Overall, the comments section offers valuable insights into the practical realities and complexities of applying quantitative methods to prediction markets, moving beyond the often romanticized narratives of algorithmic success. They emphasize the importance of data quality, the dynamic nature of markets, and the ongoing need for adaptation and refinement in the face of competition and changing conditions.