Lehmer's continued fraction factorization algorithm offers a way to find factors of a composite integer n. It leverages the convergents of the continued fraction expansion of √n to generate pairs of integers x and y such that x² ≡ y² (mod n). If x is not congruent to ±y (mod n), then gcd(x-y, n) and gcd(x+y, n) will yield non-trivial factors of n. While not as efficient as more advanced methods like the general number field sieve, it provides a relatively simple approach to factorization and serves as a stepping stone towards understanding more complex techniques.
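As a rough illustration of the idea (not code from the article), here is a minimal Python sketch along Lehmer–Powers lines: it expands √n as a continued fraction, and whenever the denominator term Q_k is a perfect square at an even index, the corresponding convergent numerator p satisfies p² ≡ y² (mod n) and gcd(p − y, n) is tested. The function name, the step budget, and the absence of any sieving or early-abort refinements are simplifications for illustration.

```python
from math import gcd, isqrt

def cf_factor(n, max_steps=100_000):
    """Toy sketch of continued-fraction factoring (Lehmer/Powers style).

    Expands sqrt(n) as a continued fraction.  The convergent numerators
    satisfy p_{k-1}^2 - n*q_{k-1}^2 = (-1)^k * Q_k, so whenever Q_k is a
    perfect square y^2 at an even index k we get p_{k-1}^2 ≡ y^2 (mod n)
    and can try gcd(p_{k-1} - y, n) for a non-trivial factor.
    """
    a0 = isqrt(n)
    if a0 * a0 == n:            # n is a perfect square
        return a0
    P, Q, a = 0, 1, a0          # state of the expansion: (sqrt(n) + P) / Q
    p_prev, p = 1, a0 % n       # convergent numerators, kept mod n
    for k in range(1, max_steps):
        P = a * Q - P
        Q = (n - P * P) // Q    # this is Q_k
        a = (a0 + P) // Q
        if k % 2 == 0:          # sign (-1)^k is +1, so Q_k can be a square
            y = isqrt(Q)
            if y * y == Q:
                g = gcd((p - y) % n, n)
                if 1 < g < n:
                    return g
        p_prev, p = p, (a * p + p_prev) % n
    return None                 # no factor found within the step budget
```

For example, cf_factor(77) returns 7. Practical implementations do not wait for a square Q_k to appear on its own; they combine many Q_k values over a factor base to manufacture a congruence of squares, which is what makes the method usable for larger n.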
Terry Tao explores the problem of efficiently decomposing a large factorial n! into a product of factors, each of size roughly √n. He outlines several approaches, including a naive iterative method that repeatedly divides n! by the largest integer below √n, and a more sophisticated approach that works from the prime factorization of n!. The prime factorization method groups primes into products close to the target size, offering significant computational advantages. While both methods achieve the desired decomposition, the prime factorization technique highlights the interplay between the smooth structure of factorials (captured by their prime decomposition) and the goal of obtaining uniformly large factors. Tao emphasizes the efficiency gains from working with the prime factorization and suggests potential generalizations and connections to other mathematical concepts, such as smooth numbers and the Dickman function.
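To make the prime-factorization idea concrete, here is a small Python sketch (an illustrative toy under simple assumptions, not Tao's actual construction): it computes the exponent of each prime in n! with Legendre's formula and then greedily multiplies prime factors together until each chunk reaches ⌊√n⌋. The helper names and the greedy grouping rule are choices made for illustration only.

```python
from math import isqrt

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, isqrt(n) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytes(len(range(p * p, n + 1, p)))
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def legendre_exponent(n, p):
    """Exponent of the prime p in n! (Legendre's formula)."""
    e, q = 0, p
    while q <= n:
        e += n // q
        q *= p
    return e

def factorial_into_chunks(n):
    """Greedy toy: group the prime factors of n! into factors of size at
    least isqrt(n).  The product of the returned chunks equals n!, but
    individual chunks can overshoot the target considerably.
    """
    target = isqrt(n)
    prime_factors = []
    for p in primes_up_to(n):
        prime_factors += [p] * legendre_exponent(n, p)
    prime_factors.sort(reverse=True)      # place the large primes first
    chunks, current = [], 1
    for p in prime_factors:
        current *= p
        if current >= target:
            chunks.append(current)
            current = 1
    if current > 1:
        chunks.append(current)            # leftover, possibly below target
    return chunks
```

For n = 6 this returns [5, 3, 3, 2, 2, 2, 2], whose product is 720 = 6!. The point is that the prime decomposition lets one manipulate a multiset of small numbers instead of repeatedly dividing n! itself as an enormous integer; keeping every factor genuinely close to the target size is where the more careful grouping discussed in the post comes in.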
Hacker News users discussed the surprising difficulty of factoring large factorials, even when not seeking prime factorization. One commenter highlighted the connection to cryptography, pointing out that if factoring factorials were easy, breaking RSA would be as well. Another questioned the practical applications of this type of factorization, while others appreciated the mathematical puzzle aspect. The discussion also touched upon the computational complexity of factoring and the effectiveness of different factoring algorithms in this specific context. Some commenters shared resources and further reading on related topics in number theory. The general sentiment was one of appreciation for the mathematical curiosity presented by Terry Tao's blog post.
Summary of Comments (3)
https://news.ycombinator.com/item?id=43524385
Hacker News users discuss Lehmer's algorithm, mostly focusing on its impracticality despite its mathematical elegance. Several commenters point out its exponential complexity, which makes it slower than trial division for realistically sized numbers. The discussion touches on the algorithm's reliance on finding small quadratic residues, a process that quickly becomes computationally expensive. Some express interest in its historical significance and its connection to other factoring methods, while others question the article's claim that the algorithm is "simple" given its actual complexity. A few users note the lack of practical applications, emphasizing its theoretical nature. The overall sentiment leans towards appreciation of the mathematical beauty of the algorithm while acknowledging its limited real-world use.
The Hacker News post titled "Lehmer's Continued Fraction Factorization Algorithm", which links to a Substack article on the same topic, has generated several comments discussing various aspects of the algorithm and its historical context.
Several commenters discuss the practical efficiency of Lehmer's algorithm. One commenter points out that although it's not the fastest known factorization method, it held the record for some time and is still a relatively straightforward algorithm to understand and implement. Another clarifies that while trial division is generally faster for very small factors, continued fraction methods like Lehmer's become more efficient for composite numbers with larger factors. The discussion also touches on the computational complexity, with a commenter noting that continued fraction factorization methods, including Lehmer's, fall into a subexponential but superpolynomial runtime category.
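For reference (this figure is not quoted in the thread itself), the heuristic running time usually cited for continued fraction factorization is subexponential but superpolynomial in the size of n:

$$
L_n\!\left[\tfrac{1}{2},\,\sqrt{2}\,\right] \;=\; \exp\!\Big(\big(\sqrt{2} + o(1)\big)\sqrt{\ln n \,\ln\ln n}\Big),
$$

compared with L_n[1/2, 1] for the quadratic sieve and L_n[1/3, (64/9)^{1/3}] for the general number field sieve.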
A thread delves into the history of factoring algorithms and their relation to cryptography, highlighting that while Lehmer's algorithm isn't competitive with modern methods like the general number field sieve, it was a significant advancement in its time. The conversation then expands to include mentions of Fermat's factorization method and how these older algorithms contributed to the foundation of more sophisticated techniques.
The performance of Lehmer's algorithm relative to other historical methods is another topic of discussion. One commenter mentions the quadratic sieve as a successor to continued fraction methods, offering a substantial speed improvement. Another clarifies that while the general number field sieve is the most efficient algorithm known for very large numbers, the quadratic sieve still performs better for numbers within a certain range. This comparison provides context for how Lehmer's algorithm fits within the broader landscape of factoring algorithms.
Some comments offer practical perspectives. One points out the potential use of these older algorithms for educational purposes or as a starting point for understanding more complex methods. This echoes the sentiment that even though superseded by more powerful algorithms, Lehmer's method holds historical and pedagogical value. Finally, at least one commenter provides links to additional resources on factoring algorithms, allowing readers to further explore the topic if they wish.