Run DeepSeek R1 with Ollama + Open WebUI (Docker), the Easy Way

This is how to run DeepSeek locally on your computer:

Download and install Ollama.
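On Linux, the official one-line install script is usually the quickest route (on macOS and Windows, use the installer from ollama.com):

curl -fsSL https://ollama.com/install.sh | sh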

After installing, run the command below in a terminal, or pick a different model from the Ollama library:

ollama run deepseek-r1:8b
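The first run downloads the model weights (several GB for the 8b variant), then drops you into an interactive chat. To confirm the model is installed and Ollama's local API is reachable (it listens on port 11434 by default), you can check with:

ollama list
curl http://localhost:11434/api/tags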

Download and install Docker.
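On Linux, Docker ships a convenience install script (on macOS and Windows, install Docker Desktop instead):

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh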

Run this if you have an NVIDIA card:

docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
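This variant assumes the NVIDIA Container Toolkit is installed so Docker can pass the GPU through. A quick sanity check is running nvidia-smi inside a throwaway container (the CUDA image tag below is just an example; any recent one works):

docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi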

Run this if you don't have an NVIDIA card:

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
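Either way, you can confirm the container came up and watch its startup logs with:

docker ps
docker logs -f open-webui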

To access it, open localhost:3000 in your browser and start chatting with the AI locally.
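The --add-host=host.docker.internal:host-gateway flag is what lets the container reach the Ollama server running on your host. If Open WebUI doesn't pick up your Ollama models automatically, you can point it at Ollama explicitly via its OLLAMA_BASE_URL environment variable (adjust the port if your Ollama setup differs from the default 11434):

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -e OLLAMA_BASE_URL=http://host.docker.internal:11434 -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main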

If you prefer an online version instead, you can use DeepSeek here.
