Harper's LLM code generation workflow centers on iterative code refinement rather than complete program generation. They start with a vague idea, translate it into a natural language prompt, and use an LLM (often GitHub Copilot) to generate a small code snippet. That output is critically evaluated, edited, and fed back to the LLM for further refinement. The cycle continues, focusing on small, manageable pieces of code and treating the LLM as a powerful autocomplete tool. The overall strategy prioritizes human control and understanding of the code, positioning the LLM as an assistant in the coding process rather than a replacement for the developer. They highlight the importance of clearly communicating intent through the prompt and emphasize that developers must retain responsibility for the final code.
Harper Reed, in their blog post "My LLM codegen workflow atm," details their current process for using Large Language Models (LLMs) in software development, emphasizing that the workflow is still evolving and likely to change. Currently, Reed employs LLMs primarily for generating small, functional units of code rather than complete programs. This includes tasks such as crafting regular expressions, converting data structures (like JSON to YAML), and producing short snippets in various languages (e.g., Python, JavaScript, Bash). Reed specifically avoids asking LLMs to create entire classes or complex architectural components.
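To make the scale of these tasks concrete, here is a minimal sketch (not taken from the post) of the kind of JSON-to-YAML conversion Reed describes handing off to an LLM; it assumes the PyYAML package is installed.

```python
# Illustrative only: a small JSON-to-YAML conversion of the kind Reed
# describes delegating to an LLM. Requires PyYAML (pip install pyyaml).
import json
import yaml

def json_to_yaml(json_text: str) -> str:
    """Convert a JSON document to YAML, preserving key order."""
    data = json.loads(json_text)
    return yaml.safe_dump(data, sort_keys=False)

if __name__ == "__main__":
    sample = '{"name": "harper", "langs": ["python", "javascript", "bash"]}'
    print(json_to_yaml(sample))
```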
Their process typically begins with a clear and concise prompt describing the desired functionality, often including specific input and expected output examples. This precise prompting, according to Reed, is crucial for obtaining satisfactory results. They then feed this prompt to an LLM, usually through a dedicated coding assistant tool like GitHub Copilot. Upon receiving the generated code, Reed doesn't blindly accept it but meticulously reviews and tests the output, ensuring it aligns with the intended behavior and adheres to best practices. This testing phase frequently involves manual adjustments and refinements to the LLM-generated code.
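As a hypothetical illustration of that prompt-then-verify step, the snippet below pairs a prompt containing explicit input/output examples with a quick check of the regex an LLM might return; neither the prompt nor the regex comes from Reed's post.

```python
# Hypothetical illustration of the prompt-then-verify loop: the prompt text,
# a regex an LLM might plausibly return, and a quick test against the
# examples stated in the prompt before accepting it.
import re

PROMPT = """Write a Python regular expression that extracts ISO dates (YYYY-MM-DD)
from free text.
Input:  "Shipped on 2024-11-05, returned 2024-12-01."
Output: ["2024-11-05", "2024-12-01"]"""

# A plausible response to the prompt above (not actual Copilot output).
candidate = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")

# Manual check against the prompt's own examples before integrating the snippet.
text = "Shipped on 2024-11-05, returned 2024-12-01."
assert candidate.findall(text) == ["2024-11-05", "2024-12-01"]
```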
Reed highlights the importance of understanding the generated code rather than treating the LLM as a black box. They believe comprehending the underlying logic is essential both for integrating a generated snippet into the larger project and for debugging issues, and that it makes the code easier to modify and adapt as project requirements evolve. While Reed acknowledges the potential of LLMs to revolutionize software development, their current approach uses these tools to augment their own coding rather than replace it. They view LLMs as powerful assistants for tedious or repetitive coding tasks, freeing the developer to focus on higher-level design and problem-solving.
Summary of Comments (146)
https://news.ycombinator.com/item?id=43094006
HN commenters generally express skepticism about the author's LLM-heavy coding workflow. Several suggest that focusing on improving fundamental programming skills and using traditional debugging tools would be more effective in the long run. Some see the workflow as potentially useful for boilerplate generation, but worry about over-reliance on LLMs leading to a decline in core coding proficiency and an inability to debug or understand generated code. The debugging process described by the author, involving repeatedly prompting the LLM, is seen as particularly inefficient. A few commenters raise concerns about the cost and security implications of sharing sensitive code with third-party LLM providers. There's also a discussion about the limited context window of LLMs and the difficulty of applying them to larger projects.
The Hacker News post titled "My LLM codegen workflow atm" (linking to https://harper.blog/2025/02/16/my-llm-codegen-workflow-atm/) generated a moderate amount of discussion. Several commenters shared their own experiences and perspectives on using LLMs for code generation.
A recurring theme was the acknowledgment that LLM code generation is a powerful tool, but it's not a magic bullet. One commenter emphasized the importance of understanding what you're asking the LLM to do and structuring the prompts effectively. They pointed out that LLMs can produce impressive-looking code that is fundamentally flawed if the prompt doesn't accurately capture the desired logic. This reinforces the idea that the user still needs a strong understanding of the underlying problem and coding principles.
Another commenter shared a similar sentiment, stating that LLMs are best used for automating tedious tasks or generating boilerplate code. They cautioned against relying on LLMs for complex logic or critical parts of an application, emphasizing the need for careful review and testing of any LLM-generated code.
Several commenters discussed the importance of iterative prompting and refinement when working with LLMs. They described a process of giving the LLM an initial prompt, reviewing the output, and then providing feedback or more specific instructions to guide it toward the desired result, underscoring the conversational, back-and-forth nature of LLM-assisted code generation.
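A minimal sketch of that feedback loop, under the assumption of a stand-in `ask_llm` function (not a real library call) and a human- or test-driven review step, might look like this:

```python
# A minimal sketch of the prompt -> review -> refine loop commenters describe.
# `ask_llm` is a placeholder for whatever completion API or assistant is in use.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("wire this up to your LLM provider of choice")

def refine(initial_prompt: str, looks_right, max_rounds: int = 3) -> str:
    """Re-prompt with concrete feedback until the output passes review.

    `looks_right` is a callable returning (ok, feedback); it stands in for
    the human review or test run that decides whether the code is acceptable.
    """
    prompt = initial_prompt
    code = ask_llm(prompt)
    for _ in range(max_rounds):
        ok, feedback = looks_right(code)
        if ok:
            return code
        # Fold the reviewer's feedback into a more specific follow-up prompt.
        prompt = f"{prompt}\n\nThe previous attempt had a problem: {feedback}\nPlease fix it."
        code = ask_llm(prompt)
    return code  # hand back the best attempt; the developer takes it from here
```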
One commenter found LLMs particularly useful for generating unit tests, explaining that they can often produce a comprehensive suite of tests and save developers considerable time and effort.
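As a rough illustration of what such generated tests can look like, the block below shows the kind of suite an LLM might draft for a small, hypothetical `slugify` helper; it is not output from any commenter's workflow.

```python
# Illustrative only: the sort of test suite an LLM can draft for a small helper.
# `slugify` is a hypothetical function under test, not from the post or comments.
import re
import unittest

def slugify(text: str) -> str:
    """Lowercase, replace runs of non-alphanumerics with '-', trim dashes."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

class TestSlugify(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_punctuation_collapses(self):
        self.assertEqual(slugify("Rock & Roll!!"), "rock-roll")

    def test_leading_trailing_noise(self):
        self.assertEqual(slugify("  --Already-Slugged--  "), "already-slugged")

    def test_empty_input(self):
        self.assertEqual(slugify(""), "")

if __name__ == "__main__":
    unittest.main()
```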
While many commenters focused on the practical aspects of using LLMs for code generation, others discussed the broader implications of this technology. One commenter raised concerns about the potential for LLMs to generate insecure code and the need for robust security testing. Another commenter speculated on the future of software development, envisioning a scenario where LLMs become integral to the entire development process.
Overall, the comments on the Hacker News post reflect a cautiously optimistic view of LLM code generation. While acknowledging the potential benefits and expressing enthusiasm for the technology, commenters also emphasized the importance of careful use, thorough testing, and a continued need for human oversight.