Grive
February 5, 2025 at 04:05 PM
I've noticed a trend of people using DeepSeek R1, but keep in mind that it's biased, and chats sent to the hosted service are transferred to servers in China. If you're planning to use R1, be aware of this. Many YouTubers recommend installing Ollama or LM Studio to run models locally without internet access, which does help protect your data. That's true, but there are some important things to consider:

1. Bias remains – Running the model locally doesn't remove its built-in biases. If the model itself is flawed or manipulated, using it offline won't change that.

2. Heavy hardware requirements – If you want a DeepSeek model that rivals GPT-class models, you'd need to run DeepSeek R1-671B, which is roughly 404 GB in size. No consumer laptop can handle that; only high-end AI servers or powerful workstations can run it efficiently. Smaller distilled versions may be runnable, but they won't match the performance of leading AI models.

3. Privacy risks still exist – Ollama and LM Studio are great for running models locally, but they are still third-party applications. Their policies could change at any time, potentially allowing data collection or cloud-based processing. Relying solely on them for privacy isn't foolproof.

In short, open-source and private models are great alternatives, but they require proper knowledge and resources to use effectively. Don't just follow trends: stay informed and cautious.
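The hardware point above is easy to sanity-check with back-of-envelope math: a model's footprint is roughly its parameter count times bits per parameter. Here's a minimal sketch; the ~4.8-bit figure is an assumption chosen to match the 404 GB number, in line with common quantized formats, not an official spec:

```python
def model_size_gb(params_billion: float, bits_per_param: float) -> float:
    """Rough disk/VRAM footprint: parameters x bits, ignoring runtime overhead."""
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

# Full R1 (671B parameters) at ~4.8 bits/param lands near the 404 GB cited above
print(f"R1-671B: ~{model_size_gb(671, 4.8):.0f} GB")

# A hypothetical 7B distilled variant at 4-bit is laptop-sized, but far weaker
print(f"7B distill: ~{model_size_gb(7, 4.0):.1f} GB")
```

This is why the small distilled checkpoints run on consumer hardware while the full 671B model needs server-class memory.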