How to Run DeepSeek on Your Server Using Ollama and NVIDIA GPUs

DeepSeek is a family of powerful open-weight AI models that can be deployed efficiently on your own server. Its substantial capabilities are changing how we process data and interact with information. To make managing and running DeepSeek easier, we can use Ollama, a lightweight command-line tool designed for running large AI models locally or on a cloud-based VPS. This guide walks you through setting up and running DeepSeek on your server with Ollama, step by step.

By the end of this tutorial, you will have a fully functional AI system at your disposal, ready for tasks such as content generation, data analysis, and automation.

Why Use Ollama for DeepSeek?

Ollama simplifies the process of running large language models by:

  • Offering a streamlined command-line interface.
  • Handling dependencies and configurations automatically.
  • Supporting local execution for better privacy and security.
  • Providing a lightweight yet powerful platform for AI enthusiasts and developers.

Requirements

Before we start, make sure the server meets the following requirements:

System Requirements

  • Operating System: Ubuntu 20.04 (recommended) or any Linux distribution.
  • Hardware:
    • Minimum: 8GB RAM, quad-core CPU, 20GB free disk space.
    • Recommended: 16GB+ RAM, dedicated GPU for enhanced performance.
  • Dependencies:
    • curl
    • ca-certificates
    • Docker (for additional features)
  • Network Requirements: Stable and high-speed internet connection.
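Before proceeding, you can quickly inspect a server against these requirements, and get a rough idea of which DeepSeek variant it can comfortably run, with a short shell sketch. The RAM thresholds below are informal rules of thumb for choosing a model tag, not official requirements:

```shell
#!/bin/sh
# Rough sketch: inspect system resources and suggest a deepseek-r1 tag.
# Thresholds are informal rules of thumb, not official requirements.

suggest_model() {
  ram_gib=$1
  if   [ "$ram_gib" -ge 64 ]; then echo "deepseek-r1:70b"
  elif [ "$ram_gib" -ge 32 ]; then echo "deepseek-r1:32b"
  elif [ "$ram_gib" -ge 16 ]; then echo "deepseek-r1:14b"
  elif [ "$ram_gib" -ge 8  ]; then echo "deepseek-r1:7b"
  else echo "deepseek-r1:1.5b"
  fi
}

# Total RAM in GiB, CPU cores, and free disk on the root filesystem.
total_ram=$(awk '/MemTotal/ {printf "%d", $2/1048576}' /proc/meminfo)
echo "RAM: ${total_ram} GiB, CPUs: $(nproc)"
df -h /
echo "Suggested model: $(suggest_model "$total_ram")"
```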

Step 1: Preparing the Server

Update and upgrade your server so all packages are current by executing:

sudo apt update && sudo apt upgrade -y

Additionally, confirm that you have sudo privileges to execute administrative tasks.
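The dependencies listed in the requirements section can be installed in the same session; as a sketch:

```shell
# Install the prerequisite packages (curl and ca-certificates).
sudo apt install -y curl ca-certificates
```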

Step 2: Installing Ollama

To install Ollama on your server, run the following command:

curl -fsSL https://ollama.com/install.sh | sh

This script will automatically download and configure Ollama on your system. Once installed, verify its installation by running:

ollama --version

If the installation is successful, you should see the version number displayed in the terminal.
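The installer typically registers Ollama as a systemd service listening on port 11434. One way to double-check that the service is up and its API is reachable (assuming a systemd-based distribution):

```shell
# Confirm the Ollama service is running and its API responds.
systemctl status ollama --no-pager
curl -s http://127.0.0.1:11434/api/version
```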

Step 3: Downloading and Running DeepSeek Model

Once Ollama is installed on your server, the next step is to pull the DeepSeek model. Note that deepseek-r1:70b is a very large download (roughly 40 GB) and needs substantial RAM or GPU memory; if your server is modest, smaller variants such as deepseek-r1:7b or deepseek-r1:14b are also available. Execute the following command:

ollama pull deepseek-r1:70b

This command will download and install DeepSeek. After installation, you can run the model using:

ollama run deepseek-r1:70b

Now, you can start interacting with DeepSeek by entering your queries.
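Besides the interactive prompt, Ollama exposes an HTTP API on port 11434, so you can also query the model from scripts. A minimal sketch (the prompt text here is just an example):

```shell
# Send a one-off, non-streaming prompt to the model via Ollama's REST API.
curl -s http://127.0.0.1:11434/api/generate -d '{
  "model": "deepseek-r1:70b",
  "prompt": "Summarize what a reverse proxy does in one sentence.",
  "stream": false
}'
```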

Step 4: Installing Docker for OpenWeb UI Integration

If you want to manage DeepSeek using a graphical interface, you can install Docker to run a web-based UI like OpenWebUI. Follow these steps to install Docker:

Add Docker’s GPG key:

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo tee /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

Add the Docker repository:

echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Install Docker:

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io

Verify the installation by checking the Docker version:

docker --version
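As an optional sanity check, you can confirm Docker can actually run containers, and (if you prefer not to prefix every command with sudo) add your user to the docker group:

```shell
# Run a throwaway test container to confirm Docker works end to end.
sudo docker run --rm hello-world

# Optional: allow your user to run docker without sudo (log out/in to apply).
sudo usermod -aG docker "$USER"
```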

Step 5: Running a Web UI for DeepSeek

To manage DeepSeek via a browser, deploy OpenWebUI with Docker:

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
-e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
-v open-webui:/app/backend/data --name open-webui --restart always \
ghcr.io/open-webui/open-webui:main

Once running, access OpenWebUI in your browser at:

http://127.0.0.1:3000

You can now interact with DeepSeek through an intuitive graphical interface.

Step 6: Monitoring Performance and Troubleshooting

Checking GPU Usage

If your server has a GPU, use NVIDIA’s monitoring tool to track its usage:

nvidia-smi
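For continuous monitoring while the model is answering queries, nvidia-smi can poll a few key metrics on an interval, for example:

```shell
# Poll GPU utilization and memory every 5 seconds in CSV form (Ctrl+C to stop).
nvidia-smi --query-gpu=timestamp,utilization.gpu,memory.used,memory.total \
  --format=csv -l 5
```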

Common Issues and Fixes

Here are some common issues you might encounter when setting up DeepSeek and how to fix them. 

1. Docker Container Not Running
Run the following command to check the status of Docker:

docker ps

If OpenWebUI is not running, restart the container:

docker restart open-webui
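If the container keeps failing after a restart, its logs usually explain why; for example:

```shell
# Show the most recent log output from the OpenWebUI container.
docker logs --tail 50 open-webui
```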

2. DeepSeek Model Not Loading
Check installed models:

ollama list

If the DeepSeek model is missing, reinstall it:

ollama pull deepseek-r1:70b

3. Server Access Issues
Ensure firewall rules allow traffic on necessary ports (e.g., 11434 for Ollama and 3000 for OpenWebUI):

sudo ufw allow 11434
sudo ufw allow 3000
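After opening the ports, you can confirm the firewall rules took effect and that something is actually listening on them:

```shell
# Check the active firewall rules.
sudo ufw status

# Confirm Ollama (11434) and OpenWebUI (3000) are listening.
ss -tlnp | grep -E ':(11434|3000)'
```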

Step 7: Exploring the Features of OpenWebUI

OpenWebUI Dashboard:

The OpenWebUI dashboard is designed for simplicity and efficiency. Key sections include:

  1. New Chat: Start conversational queries directly with your installed AI models.
  2. Models Tab: View and manage your installed models, enabling or disabling them based on your requirements.
  3. Settings Tab: Customize global settings, such as API connections, prompt configurations, and database management.

This structure provides a user-friendly environment for interacting with AI models and configuring workflows.


Important Configurations in Settings of OpenWebUI

The Settings tab offers several customization options:

  1. General Settings: Configure user preferences such as auto-login and default prompts for a smoother interaction experience.
  2. Connections: Add third-party APIs, such as OpenAI or Ollama, to link models to external services for extended functionality.
  3. Web Search Integration: Enable the AI to fetch real-time information from the internet for dynamic query responses. This feature is ideal for research and fact-checking tasks.

Step 8: How to Manage LLM Models using OpenWebUI

Model Management using Admin Panel

The Models Tab in Admin Panel allows you to:

  • View Models: Check the list of all installed models, including details such as size and version.
  • Enable/Disable Models: Toggle models on or off to control which ones are active for queries.
  • Add/Delete Models: Upload new model files or remove outdated ones to optimize resource usage.

How to Make Changes to Model Settings using Admin Dashboard in OpenWebUI

Each model can be individually configured to suit your needs:

  • Set Default Prompts: Predefine instructions to guide model outputs.
  • Link APIs: Connect specific models to external APIs for seamless integration with other systems.
  • Adjust Response Formats: Tailor how the models present their outputs, such as text length or tone, to better suit your tasks.

Step 9: How to Use Advanced Features in OpenWebUI

Manage Database from Admin Panel

The Database Tab provides tools for managing system data:

  1. Import/Export Settings: Save your configurations as JSON files for backups or transfer them to another system.
  2. Reset the Database: Clear all data to troubleshoot issues or start fresh without affecting the software.

Configure Audio and Video Settings from Admin Panel

Advanced settings include:

  • Audio: Choose Text-to-Speech (TTS) and Speech-to-Text (STT) engines, such as Whisper or Azure, to let the AI interact through audio.
  • Images: Enable or disable image storage to save disk space and maintain privacy. This is particularly useful for AI models that generate visual outputs.

Step 10: Manage User Roles in OpenWebUI

How to Assign Admin and Pending Roles to users from Admin Panel

The Users Tab in the OpenWebUI admin panel allows administrators to manage user roles. Users can be set as Admin or remain Pending depending on the access level they require. To update the role of a user, simply click on the pencil icon next to their name.


Step 11: How to Fix Account Activation Issues in OpenWebUI

Check the Activation Status of the user in OpenWebUI Dashboard

If users face restricted access, they may see an Account Activation Pending message. This issue occurs when admin approval is needed. Navigate to the admin panel and activate user accounts to resolve this problem. Contact the administrator if the issue persists.


Step 12: Chat Interaction with DeepSeek

Here’s an example of a simple chat interaction with DeepSeek. When a user requests a basic “Hello World” HTML page, the model promptly generates a clean, concise code snippet. Such interactions demonstrate the flexibility and responsiveness of DeepSeek when deployed with Ollama.

Conclusion

By following this guide, you have installed and set up DeepSeek on your server using Ollama. You now have two ways to work with the model: a command-line interface and a web-based interface, which makes running AI tasks more convenient.

From here, you can get more out of the model by tuning its settings, establishing API integrations, and taking advantage of GPU acceleration for greater efficiency. Happy AI computing!

About the writer

This article was written by Vinayak Baranwal. Connect with Vinayak on LinkedIn for more insightful content or collaboration opportunities.
