
Artificial Intelligence - StarfiniteX
February 8, 2025 at 05:18 AM
*4 ways to run LLMs like DeepSeek-R1 locally on your computer:*
Running LLMs locally is like having a superpower:
- Cost savings: no per-token API fees
- Privacy: Your data stays on your computer
- Plus, it's incredibly fun
Let's explore some of the best ways to do this.
1️⃣ *Ollama*
* Running a model through Ollama is as simple as executing a command: ollama run deepseek-r1
* You can also install Ollama with a single command: curl -fsSL https://ollama.com/install.sh | sh
* Once it's running, you can query it over its local REST API (see the example below).
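A minimal sketch, assuming Ollama's default port (11434) and that deepseek-r1 has already been pulled:
  # send a prompt to the local Ollama API
  curl http://localhost:11434/api/generate -d '{
    "model": "deepseek-r1",
    "prompt": "Why is the sky blue?"
  }'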
2️⃣ *LM Studio*
* LM Studio installs as a desktop app on your computer.
* It offers a ChatGPT-like interface, letting you load and eject models like swapping tapes in a tape deck; it can also serve the loaded model over a local OpenAI-compatible API (see the sketch below).
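A minimal sketch, assuming LM Studio's local server is started on its default port (1234) and an R1 distill is loaded; the model identifier below is a placeholder, so use the one shown in the app:
  # chat with the model through LM Studio's OpenAI-compatible endpoint
  curl http://localhost:1234/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
      "model": "deepseek-r1-distill-qwen-7b",
      "messages": [{"role": "user", "content": "Hello!"}]
    }'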
3️⃣ *vLLM*
* vLLM is a fast and easy-to-use library for LLM inference and serving.
* It offers state-of-the-art serving throughput ⚡️
* With a few lines in your terminal, you can run DeepSeek locally as an OpenAI-compatible server with reasoning enabled (see the sketch below).
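A minimal sketch, assuming a recent vLLM (the reasoning flags below are from vLLM 0.7.x and may differ in other versions) and a small R1 distill that fits on one GPU:
  pip install vllm
  # serve the model; the reasoning parser separates DeepSeek's <think> output
  vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B \
    --enable-reasoning --reasoning-parser deepseek_r1
  # the server now speaks the OpenAI API at http://localhost:8000/v1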
4️⃣ *llama.cpp (the OG)*
* llama.cpp enables LLM inference with minimal setup and state-of-the-art performance (see the sketch below).
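A minimal sketch, assuming you install prebuilt binaries via Homebrew and have already downloaded a GGUF quant of an R1 distill (the filename below is a placeholder):
  brew install llama.cpp   # or build from source
  # -m points at the GGUF file, -p is the prompt
  llama-cli -m ./DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M.gguf -p "Why is the sky blue?"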
That's a wrap!
If you're interested in:
- ML/AI Engineering ⚙️
- GenAI
Consider following →
https://linktr.ee/AiCommunity ✔️
Author: *Niraj Lunavat* (https://www.linkedin.com/in/nirajlunavat)