Transcript for:
DeepSeek R1 Local Model Overview

I'm sure everyone's seen all the talk about DeepSeek R1, which disrupted how we view AI models and caused panic in the stock market. It's the best free and open-source model available, but using it on their website stores your information on servers in China under a loose privacy policy. So I'll show you how you can run it locally on your computer in just a few minutes. If you've never run a model locally before, it's super easy with this method.

The way most people are using it is on deepseek.com, so I'll start there to show the comparison. Click Start Now, log in, then click the DeepThink R1 icon to turn it on. Then add your prompt: if a plant that doubles its size every day covers a lake in 30 days, how much time will it take for two plants to cover half the lake? And send it. Sometimes the most interesting part is just watching its chain of thought; it thinks in such a human-sounding way. Okay, it thought for 92 seconds, reasoned its way through, and got the right answer. You can open and close the sidebar, click here for a new chat, and that's all there is to it.

A reasoning model isn't necessary for your everyday questions, but it's great when you need multi-step reasoning, and it even comes up with really creative stuff for things like poetry or song lyrics. But here's a snippet from their privacy policy. Basically, it tracks everything in the way you'd expect.

There are already other options to run it on sites whose servers are located in the US. Perplexity gives you five R1 queries per day on the free plan, more if you're on Pro, and a slightly distilled model is free on Groq and is insanely fast. But the ideal way to be fully private is to run it locally, so no information ever leaves your computer. You can do that really easily through LM Studio. On the home page, just download the version for your computer. Once that's done, open it up, go to the Discover tab, then type DeepSeek into the search bar.
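By the way, you can sanity-check the riddle's answer yourself with a few lines of plain Python (this is just my own verification sketch, not anything from DeepSeek). The key insight: if one plant covers the lake on day 30, the lake is 2^30 plant-units, and the script below just doubles the plants' combined area day by day until it hits the target.

```python
# The riddle: one plant doubles daily and covers the lake on day 30,
# so the lake's area is 2**30 in units of a plant's day-0 size.
LAKE = 2 ** 30

def days_to_cover(num_plants: int, target: float) -> int:
    """Return the first day the plants' combined area reaches target."""
    day = 0
    area = num_plants  # each plant has area 1 on day 0
    while area < target:
        area *= 2      # every plant doubles each day
        day += 1
    return day

print(days_to_cover(1, LAKE))      # 30 -- matches the riddle's premise
print(days_to_cover(2, LAKE / 2))  # 28 -- two plants, half the lake
```

So the answer is 28 days: two plants are just one plant a day ahead, and half the lake is one more doubling away from the full lake.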
A bunch of options will pop up for how distilled the model is. The more hardware you have, the bigger the model you'll be able to run. The full DeepSeek R1, with 671 billion parameters, needs serious hardware that very few people have, but the distilled versions go all the way down to sizes just about any computer can handle. Right now I'm on the 8-billion-parameter model, although my computer could handle more than that if I wanted. I can go even bigger once I set something up with this RTX 5090 that Nvidia sent me, but I'm using this for now.

There are also different options for how quantized each model is. They'll all feel roughly the same, and what's cool about LM Studio is that it will tell you whether your GPU can handle one; that's what it means when it's green right here. I'll download this Q4_K_M it suggested.

Once that's done downloading, all you have to do is go up to the top, make sure the right model is selected, and start prompting. This is running fully locally; nothing is leaving my computer and going to servers anywhere else. You can see it still gets the correct answer, just like on the website.

That's your quick start guide; you should be up and running. As you can see, all of these options got the correct answer, and it can do far more than what I demoed here. It's the most capable open-source model we have. If you want to go far more in depth, on Futurepedia we have over 20 comprehensive courses on AI and how to incorporate it into your life and career to get ahead and save time. You can get started for free using the link in the description. Thank you so much for watching. I'll see you in the next one.
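One footnote on the hardware question: a rough rule of thumb for whether a model will fit is that a quantized model's file weighs in at roughly parameters times bits-per-weight divided by 8, plus some extra for context and activations. This is my own back-of-the-envelope estimate, not LM Studio's exact math, and treating Q4_K_M as about 4.5 effective bits per weight is an assumption.

```python
# Back-of-the-envelope size estimate for a quantized model.
# Assumption: Q4_K_M averages roughly 4.5 bits per weight; actual
# file sizes vary by quantization scheme and architecture.
def approx_model_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate model file size in GB (decimal, ignoring overhead)."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return round(bytes_total / 1e9, 1)

print(approx_model_gb(8, 4.5))    # ~4.5 GB: the 8B distill fits on most GPUs
print(approx_model_gb(671, 4.5))  # ~377 GB: the full R1 needs serious hardware
```

That gap is exactly why the 8B distill runs on an ordinary laptop while the full 671-billion-parameter model is out of reach for almost everyone.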