This blog post details how to run the DeepSeek R1 671B large language model (LLM) entirely on a ~$2000 server built with an AMD EPYC 7452 CPU, 256GB of RAM, and consumer-grade NVMe SSDs. The author emphasizes affordability and accessibility, demonstrating a setup that avoids expensive server-grade hardware and leverages readily available components. The post provides a comprehensive guide covering hardware selection, OS installation, configuring the necessary software like PyTorch and CUDA, downloading the model weights, and ultimately running inference using the optimized llama.cpp implementation. It highlights specific optimization techniques, including using bitsandbytes for quantization and offloading parts of the model to the CPU RAM to manage its large size. The author successfully achieves a performance of ~2 tokens per second, enabling practical, albeit slower, local interaction with this powerful LLM.
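As a concrete illustration of the llama.cpp route described above, the following is a minimal Python sketch of CPU-only inference through the llama-cpp-python bindings, assuming a pre-quantized GGUF build of the model. The model path, quantization level, context size, and thread count are placeholder assumptions, not the author's actual settings.

```python
# Minimal sketch of CPU-only inference with llama-cpp-python (bindings for llama.cpp).
# The GGUF path, quantization level, and thread count below are assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="models/deepseek-r1-671b-q2_k.gguf",  # hypothetical pre-quantized GGUF file
    n_ctx=4096,        # modest context window to keep the KV cache small
    n_threads=48,      # roughly one thread per physical core (assumed)
    n_gpu_layers=0,    # CPU-only: no layers offloaded to a GPU
    use_mmap=True,     # memory-map the weights so the OS pages them in from RAM/NVMe
)

result = llm("Explain, step by step, why the sky is blue.", max_tokens=256, temperature=0.6)
print(result["choices"][0]["text"])
```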
The blog post "How to Run DeepSeek R1 671B Fully Locally on a $2000 EPYC Rig" details the author's successful endeavor to run the large language model DeepSeek R1 671B on a relatively affordable, self-assembled server. The primary motivation behind this project was to achieve cost-effective, private, and locally accessible large language model inference, avoiding the costs and potential privacy concerns associated with cloud-based solutions like OpenAI's API.
The author carefully selected hardware components to balance performance and budget. The centerpiece of the system is a dual-socket AMD EPYC 7F72 server, chosen for its high core count (48 cores per CPU, 96 in total) and large L3 cache, both crucial for handling the substantial memory demands of the 671B-parameter model. The system also includes 512GB of DDR4 ECC RAM; while this is not enough to hold the entire model, the remainder can be offloaded to NVMe storage while the CPU's large cache is leveraged effectively. Three 2TB NVMe SSDs are configured in RAID 0 to maximize read speed for faster model loading and processing. A relatively modest 1000W power supply was deemed sufficient, further contributing to the cost-effectiveness of the build.
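To see why this hardware mix matters, a back-of-envelope estimate shows that token generation on a CPU/NVMe rig is bandwidth bound. Every figure in the sketch below is an illustrative assumption rather than a measurement from the article; only the ~37B active parameters per token comes from DeepSeek R1's published mixture-of-experts specification.

```python
# Back-of-envelope sketch of why generation speed is bandwidth bound on this rig.
# All figures are assumptions for illustration, not measurements from the article.
active_params = 37e9      # DeepSeek R1 is a mixture-of-experts model; ~37B parameters
                          # are active per generated token (published model spec)
bytes_per_param = 0.6     # ~5 bits/weight for an aggressive GGUF-style quant (assumed)
ram_bw = 200e9            # assumed sustained DDR4 bandwidth across 8 channels, bytes/s
nvme_bw = 10e9            # assumed combined sequential read of a 3-drive NVMe RAID 0, bytes/s

bytes_per_token = active_params * bytes_per_param
print(f"RAM-resident upper bound  : {ram_bw / bytes_per_token:.1f} tokens/s")
print(f"NVMe-streaming lower bound: {nvme_bw / bytes_per_token:.2f} tokens/s")
# Real throughput lands somewhere in between, depending on how many expert
# weights stay cached in RAM versus being paged in from the SSDs.
```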
The software setup involved installing Ubuntu 22.04 and meticulously configuring the necessary dependencies, including CUDA drivers, Python libraries, and the specific DeepSeek inference code. The author stresses the importance of matching driver versions and provides detailed installation instructions that address potential compatibility issues. They also outline the steps to download the DeepSeek model weights and convert them to a format suitable for local inference. Optimizations, such as 8-bit quantization with the bitsandbytes library, are applied to reduce the memory footprint and improve performance, allowing the model to run within the available RAM, albeit with longer processing times.
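Since the summary names bitsandbytes for 8-bit quantization, here is a minimal sketch of how 8-bit loading is typically wired up through the transformers integration. The repository id, device map, and memory budgets are assumptions rather than the author's exact configuration, and bitsandbytes' int8 kernels generally expect a CUDA-capable GPU to be present.

```python
# Hedged sketch: 8-bit loading via the bitsandbytes integration in transformers.
# The repo id and memory budgets are illustrative assumptions; bitsandbytes'
# int8 path generally requires a CUDA-capable GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_ID = "deepseek-ai/DeepSeek-R1"  # assumed Hugging Face repo id

bnb_config = BitsAndBytesConfig(load_in_8bit=True)  # store linear-layer weights in int8

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=bnb_config,
    device_map="auto",                          # let accelerate spill layers to CPU RAM
    max_memory={0: "24GiB", "cpu": "480GiB"},   # assumed budgets for GPU 0 and system RAM
)

inputs = tokenizer("Hello from a local DeepSeek R1 setup.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```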
The post then walks through the process of running the model using the command-line interface, explaining the relevant parameters and demonstrating a basic example of text generation. The author emphasizes that, while performance is slower compared to cloud-based solutions or systems with larger RAM capacity, the setup successfully achieves local inference with a reasonable response time. The post concludes by acknowledging potential improvements, like utilizing larger RAM or implementing more aggressive quantization techniques, and reinforces the overall feasibility and cost-effectiveness of running large language models locally on a budget-conscious server build. The project effectively demonstrates a practical approach to bringing powerful language models within reach of individuals and small teams without relying on external cloud services.
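For a sense of how a figure like ~2 tokens per second might be observed in practice, the following hedged sketch streams a generation through llama-cpp-python and reports an effective rate. The path, thread count, and sampling parameters are again placeholders, and streamed chunks are counted as a rough proxy for generated tokens.

```python
# Hedged sketch: timing a streamed generation to estimate effective tokens/second.
# Model path, thread count, and sampling parameters are illustrative assumptions.
import time
from llama_cpp import Llama

llm = Llama(
    model_path="models/deepseek-r1-671b-q2_k.gguf",  # hypothetical GGUF file
    n_ctx=4096,
    n_threads=48,
    n_gpu_layers=0,
)

start, n_chunks = time.time(), 0
for chunk in llm("Summarize the benefits of local LLM inference.", max_tokens=128, stream=True):
    print(chunk["choices"][0]["text"], end="", flush=True)
    n_chunks += 1  # each streamed chunk roughly corresponds to one generated token

elapsed = time.time() - start
print(f"\n{n_chunks} chunks in {elapsed:.1f}s ≈ {n_chunks / elapsed:.2f} tokens/s")
```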
Summary of Comments (157)
https://news.ycombinator.com/item?id=42897205
HN commenters were skeptical about the true cost and practicality of running a 671B parameter model on a $2,000 server. Several pointed out that the $2,000 figure only covered the CPUs, excluding crucial components like RAM, SSDs, and GPUs, which would significantly inflate the total price. Others questioned the performance on such a setup, doubting it would be usable for anything beyond trivial tasks due to slow inference speeds. The lack of details on power consumption and cooling requirements was also criticized. Some suggested cloud alternatives might be more cost-effective in the long run, while others expressed interest in smaller, more manageable models. A few commenters shared their own experiences with similar hardware, highlighting the challenges of memory bandwidth and the potential need for specialized hardware like Infiniband for efficient communication between CPUs.
The Hacker News post discussing running a large language model (LLM) like DeepSeek R1 671B on a relatively inexpensive EPYC server generated a fair amount of discussion. Several commenters focused on the practicality and nuances of the setup described in the article.
One key point of discussion revolved around the actual cost and complexity of the setup. While the article highlights a $2000 server, commenters pointed out that this price likely doesn't encompass the cost of GPUs, which are essential for running such a large model effectively. They argued that the true cost would be significantly higher when factoring in suitable GPUs. Furthermore, the expertise required to set up and maintain such a system was also a topic of conversation, with commenters suggesting that it's not a trivial task and requires specialized knowledge.
Another thread of discussion centered on the performance trade-offs. Running a 671B parameter model on a less powerful setup compared to what's typically used in large-scale deployments would inevitably lead to slower inference speeds. Commenters discussed the impact of this slower performance on practical usability, suggesting that while it might be technically feasible to run the model, the response times could be too long for many applications.
The potential benefits of running a large language model locally were also acknowledged. Commenters mentioned the advantages of data privacy and control, as locally hosted models don't require sending data to external servers. This aspect was particularly relevant for sensitive data or applications where data security is paramount.
Finally, some commenters expressed skepticism about the overall feasibility and practicality of the approach outlined in the article. They questioned whether the performance gains, even with optimized libraries and techniques, would be sufficient to justify the complexity and cost involved in setting up and maintaining a local LLM of this size. They also raised concerns about the power consumption and cooling requirements for such a system. Overall, the comments reflected a mixture of intrigue and pragmatism, acknowledging the potential benefits while also highlighting the challenges and limitations of running large language models on less powerful hardware.