Story Details

  • Are polynomial features the root of all evil? (2024)

    Posted: 2025-04-22 16:49:55

    The blog post explores the potential downsides of using polynomial features in machine learning, focusing on their instability in high dimensions. While polynomial expansion can improve model fit by capturing non-linear relationships, it can also lead to extreme sensitivity to input changes, causing wild oscillations and poor generalization. The author demonstrates this with visualizations of simple polynomials raised to high powers, showing how even small perturbations in the input can drastically alter the output. They suggest Bernstein polynomials as a more stable alternative, highlighting properties such as non-negativity and partition of unity, which contribute to smoother behavior and better extrapolation. The post concludes that while polynomial features can be beneficial, their inherent instability calls for careful handling and may justify exploring alternative basis functions like Bernstein polynomials.
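
    A small sketch (not from the post; the function names and the degree are illustrative) makes the contrast concrete. Both bases below span the same degree-15 polynomials on [0, 1], but the design matrix built from raw powers is typically far worse conditioned than the Bernstein one, which is one way the instability the post describes shows up when fitting:

        import numpy as np
        from math import comb

        def power_features(x, degree):
            # Standard polynomial features: columns [1, x, x^2, ..., x^degree].
            return np.stack([x ** k for k in range(degree + 1)], axis=1)

        def bernstein_features(x, degree):
            # Bernstein basis on [0, 1]: B_{k,n}(x) = C(n, k) * x^k * (1 - x)^(n - k).
            # Each column is non-negative and the columns sum to 1 at every x
            # (partition of unity), the properties the post credits for stability.
            return np.stack(
                [comb(degree, k) * x ** k * (1 - x) ** (degree - k)
                 for k in range(degree + 1)],
                axis=1,
            )

        degree = 15
        x = np.linspace(0.0, 1.0, 50)

        for name, features in [("power", power_features(x, degree)),
                               ("bernstein", bernstein_features(x, degree))]:
            # The condition number bounds how much a least-squares fit on these
            # features can amplify small perturbations in the data.
            print(f"{name:9s} condition number: {np.linalg.cond(features):.2e}")

    Both feature maps represent the same function class; only the parameterization, and hence the numerical behavior under perturbation, differs.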

    Summary of Comments (1)
    https://news.ycombinator.com/item?id=43764101

    HN users discuss the downsides of polynomial features, particularly overfitting and poor interpretability. Some push back on the broad categorization of them as "evil," arguing they can be valuable when applied judiciously with proper regularization. One commenter points out their usefulness for approximating non-linear functions and stresses the importance of understanding the underlying data and model behavior. Others bring up alternatives such as splines, which offer more local control and flexibility, and note the role of feature scaling in mitigating problems with polynomial features. The trade-off between complexity and interpretability is a recurring theme, with commenters emphasizing the importance of choosing the right tool for the specific problem and dataset.
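
    The alternatives raised in the thread (splines, regularization, feature scaling) compose directly in scikit-learn. The sketch below is illustrative only; the degrees, knot count, and regularization strength are arbitrary choices, not taken from any comment:

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import (PolynomialFeatures, SplineTransformer,
                                           StandardScaler)

        rng = np.random.default_rng(0)
        X = rng.uniform(0.0, 1.0, size=(200, 1))
        y = np.sin(2 * np.pi * X[:, 0]) + rng.normal(scale=0.1, size=200)

        # Global polynomial basis: every coefficient influences the entire
        # input range, so scaling and regularization matter a great deal.
        poly_model = make_pipeline(
            PolynomialFeatures(degree=15, include_bias=False),
            StandardScaler(),
            Ridge(alpha=1e-3),
        )

        # Cubic B-spline basis: each basis function is non-zero only on a few
        # adjacent knot intervals, giving the local control commenters mention.
        spline_model = make_pipeline(
            SplineTransformer(degree=3, n_knots=10),
            Ridge(alpha=1e-3),
        )

        for name, model in [("polynomial", poly_model), ("spline", spline_model)]:
            model.fit(X, y)
            print(f"{name:10s} train R^2: {model.score(X, y):.3f}")

    Swapping the transformer is the whole change: the spline pipeline keeps misbehavior local to a few knot intervals, while a high-degree polynomial term propagates it everywhere.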