This guide will walk you through the steps to set up and run the DeepSeek R1 model on Google Colab using the Ollama server. Additionally, you’ll learn how to create a Gradio UI to interact with the model in a user-friendly way.
- A Google account (to access Google Drive and Google Colab).
- Basic familiarity with Google Colab and terminal commands.
- Go to Google Drive and log in.
- Click on the + New button and select Folder.
- Name the folder (e.g., `DeepSeek-R1-Project`).
- Open the folder you just created.
- Right-click inside the folder, select More, and then choose Google Colaboratory.
- A new Colab notebook will be created. Name it (e.g., `DeepSeek-R1-Notebook`).
- To improve the model's performance, you can enable a GPU (Ollama runs on NVIDIA GPUs; it cannot use Colab's TPUs):
- Go to Runtime > Change runtime type.
- Under Hardware accelerator, select GPU.
- Click Save; the runtime will restart with the new accelerator. (A quick check to confirm the GPU is attached follows this list.)
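If you selected a GPU, an optional sanity check in a Colab cell confirms it is attached (this assumes an NVIDIA-backed runtime):

```python
# Optional sanity check (NVIDIA GPU runtimes only): lists the attached GPU
!nvidia-smi
```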
- In the Colab notebook, paste the following commands to install the necessary libraries:

```python
!pip install langchain
!pip install langchain-core
!pip install langchain-community
!pip install colab-xterm
```
- Run the cells to install the libraries. An optional import check below confirms they installed correctly.
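This snippet is a minimal sanity check, not required for the rest of the guide; it assumes the pip installs above completed without errors:

```python
# Optional check: these imports should succeed once the pip installs finish
import langchain
import langchain_core
import langchain_community

print("LangChain version:", langchain.__version__)
```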
- Paste the following commands to load `colab-xterm` and open a terminal:

```python
%load_ext colabxterm
%xterm
```
- A terminal will open in the cell output within Colab.
- In the terminal, run the following command to install Ollama:

```bash
curl -fsSL https://ollama.com/install.sh | sh
```

- Wait for the installation to complete. The optional version check below confirms the install.
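Once the script finishes, you can verify the install from a Colab cell (the `!` prefix runs a shell command from the notebook):

```python
# Optional: confirm the Ollama CLI is installed and print its version
!ollama --version
```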
- In the terminal, start the Ollama server and load the DeepSeek R1 model by running the following commands:

```bash
ollama serve &
ollama run deepseek-r1:7b
```
- The DeepSeek R1 model will be downloaded on first run and will then be ready to use. You can confirm the server is responding with the optional check below.
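Before wiring up a UI, it can help to confirm the server answers requests. Here is a minimal sketch using Ollama's HTTP API, assuming the server is running on its default port (11434):

```python
# Minimal sketch: call Ollama's HTTP API directly (assumes `ollama serve` is
# running on the default port 11434 and the model has finished downloading)
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "deepseek-r1:7b", "prompt": "Say hello in one sentence.", "stream": False},
)
print(resp.json()["response"])
```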
To make it easier to interact with the DeepSeek R1 model, you can add a Gradio UI. Follow these steps:
- Install the Gradio library by running the following command in a Colab cell:

```python
!pip install gradio
```
- Define a function that sends user prompts to the DeepSeek R1 model and returns the response. Paste the following code into a Colab cell:

```python
import subprocess

def query_deepseek_r1(prompt):
    # Pass the arguments as a list (no shell) so prompts containing quotes
    # cannot break the command
    result = subprocess.run(
        ["ollama", "run", "deepseek-r1:7b", prompt],
        capture_output=True,
        text=True,
    )
    return result.stdout
```
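A quick smoke test of the helper (assumes the Ollama server from the earlier step is still running):

```python
# Smoke test for the helper defined above
print(query_deepseek_r1("What is the capital of France?"))
```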
- Create a Gradio interface using the function above. Paste the following code into a Colab cell:

```python
import gradio as gr

def gradio_interface(prompt):
    response = query_deepseek_r1(prompt)
    return response

iface = gr.Interface(
    fn=gradio_interface,
    inputs="text",
    outputs="text",
    title="DeepSeek R1 Chatbot",
    description="Ask anything to the DeepSeek R1 model!",
)
iface.launch(share=True)
```
- Run the cell. Gradio will generate a public link to your UI.
- Open the Gradio UI link in your browser.
- Type a prompt (e.g., "What is the capital of France?") in the input box.
- Click Submit.
- The DeepSeek R1 model will process the prompt and display the response in the output box.
- To save your Colab notebook, go to File > Save a copy in Drive.
- Choose the folder you created earlier (e.g., `DeepSeek-R1-Project`).
- If the Ollama server fails to start, ensure that the installation was successful and try running the commands again; a restart sketch follows this list.
- If you encounter any issues with the model, check your internet connection and ensure that you have sufficient resources (e.g., GPU/TPU enabled).
- If the Gradio UI does not load, ensure that the `share=True` parameter is set and that your Colab runtime is active.
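If the terminal session running `ollama serve` is lost, you can relaunch the server from a Colab cell instead of reopening the terminal. This is a minimal recovery sketch, assuming Ollama is already installed:

```python
# Minimal recovery sketch: relaunch the Ollama server in the background
# (assumes the `ollama` binary is installed; Popen returns immediately)
import subprocess
import time

subprocess.Popen(["ollama", "serve"])
time.sleep(5)  # give the server a moment to bind to port 11434
```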
You have successfully set up and run the DeepSeek R1 model on Google Colab using the Ollama server and added a Gradio UI for easy interaction. Enjoy experimenting with the model!
If you have any questions or run into issues, let me know! 😊
