
DeepSeek is a family of open large language models that can be deployed efficiently on your own server, giving you local control over data processing and AI interaction. To make managing and running DeepSeek easier, we can use Ollama, a lightweight command-line tool designed for running large AI models locally or on a cloud-based VPS. This guide walks you through setting up and running DeepSeek on your server with Ollama, step by step.
By the end of this tutorial, you will have a fully functional AI system at your disposal, ready for tasks such as content generation, data analysis, and automation.
Why Use Ollama for DeepSeek?
Ollama simplifies the process of running large language models by:
- Offering a streamlined command-line interface.
- Handling dependencies and configurations automatically.
- Supporting local execution for better privacy and security.
- Providing a lightweight yet powerful platform for AI enthusiasts and developers.
Requirements
Before we start, make sure the server meets the following requirements:
System Requirements
- Operating System: Ubuntu 20.04 or later (recommended), or another modern Linux distribution.
- Hardware:
  - Minimum: 8GB RAM, quad-core CPU, 20GB free disk space (enough for the smaller DeepSeek tags).
  - Recommended: 16GB+ RAM and a dedicated GPU for enhanced performance. Note that the deepseek-r1:70b model used later in this guide needs roughly 40GB+ of memory.
- Dependencies:
  - curl
  - ca-certificates
  - Docker (for the optional web UI)
- Network Requirements: A stable, high-speed internet connection (model downloads can run to tens of gigabytes).
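Before moving on, you can sanity-check these requirements from the shell. A minimal sketch, assuming a standard Linux host with coreutils and procps installed:

```shell
# Pre-flight check against the requirements above (assumes a Linux host).
cores=$(nproc)                                   # CPU core count
mem=$(free -h | awk '/^Mem:/ {print $2}')        # total RAM
disk=$(df -h / | awk 'NR==2 {print $4}')         # free space on the root filesystem
echo "CPU cores     : $cores"
echo "Total memory  : $mem"
echo "Free disk on /: $disk"
```

Compare the printed values against the minimum and recommended figures before pulling any models.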
Step 1: Preparing the Server
Update and upgrade your server to ensure all packages are current by executing:
sudo apt update && sudo apt upgrade -y
Additionally, confirm that you have sudo privileges to execute administrative tasks.
Step 2: Installing Ollama
To install Ollama on your server, run the following command:
curl -fsSL https://ollama.com/install.sh | sh
This script will automatically download and configure Ollama on your system. Once installed, verify its installation by running:
ollama --version
If the installation is successful, you should see the version number displayed in the terminal.
Step 3: Downloading and Running DeepSeek Model
Once Ollama is installed on your server, the next step is to pull the DeepSeek model. Execute the following command:
ollama pull deepseek-r1:70b
This downloads the 70-billion-parameter DeepSeek-R1 model, which requires substantial memory (roughly 40GB+); on smaller servers, pull a lighter tag such as deepseek-r1:7b instead. Once the download completes, run the model with:
ollama run deepseek-r1:70b
Now you can start interacting with DeepSeek by entering your queries at the prompt.
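Besides the interactive prompt, Ollama exposes an HTTP API on port 11434, which is handy for scripting. A hedged sketch of a non-interactive query (the /api/generate endpoint, the port, and the stream flag are Ollama defaults; the prompt text is just an example):

```shell
# Query the Ollama HTTP API instead of using the interactive prompt.
# Falls back to a notice when no Ollama server is reachable.
payload='{"model": "deepseek-r1:70b", "prompt": "Summarize what Ollama does in one sentence.", "stream": false}'
if curl -sf http://127.0.0.1:11434/api/version > /dev/null 2>&1; then
  # Server is up: send the generation request and capture the JSON response.
  reply=$(curl -s http://127.0.0.1:11434/api/generate -d "$payload")
else
  reply="Ollama is not reachable on 127.0.0.1:11434 -- is the service running?"
fi
echo "$reply"
```

This is the same endpoint OpenWebUI talks to in the later steps, so it doubles as a connectivity test.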
Step 4: Installing Docker for OpenWeb UI Integration
If you want to manage DeepSeek using a graphical interface, you can install Docker to run a web-based UI like OpenWebUI. Follow these steps to install Docker:
Add Docker’s GPG key:
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo tee /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
Add the Docker repository:
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Install Docker:
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
Verify the installation by checking the Docker version:
docker --version
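If you would rather not prefix every Docker command with sudo, membership in the docker group is what grants that. A small check script (the usermod command it suggests is the standard fix; it only takes effect after you log out and back in):

```shell
# Check whether the current user can use Docker without sudo.
user=$(id -un)
if id -nG "$user" | grep -qw docker; then
  docker_hint="$user is already in the docker group"
else
  # Suggest the standard fix; requires a fresh login session to apply.
  docker_hint="Run: sudo usermod -aG docker $user   (then log out and back in)"
fi
echo "$docker_hint"
```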
Step 5: Running a Web UI for DeepSeek
To manage DeepSeek via a browser, deploy OpenWebUI with Docker. The command below publishes the UI on port 3000 and points it at the Ollama instance running on the host:
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
-e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
-v open-webui:/app/backend/data --name open-webui --restart always \
ghcr.io/open-webui/open-webui:main
Once running, access OpenWebUI in your browser at:
http://127.0.0.1:3000
If you are connecting from another machine, replace 127.0.0.1 with your server's IP address.
You can now interact with DeepSeek through an intuitive graphical interface.
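If the page does not load, a quick way to tell whether anything is answering on the port is to ask for the HTTP status code. A small sketch, assuming curl is available:

```shell
# Probe the OpenWebUI port; curl prints 000 when nothing answers.
code=$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:3000 2>/dev/null || true)
echo "HTTP status from 127.0.0.1:3000: ${code:-unknown}"
```

A 200 means the UI is up; 000 usually means the container is not running or the port is blocked.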
Step 6: Monitoring Performance and Troubleshooting
Checking GPU Usage
If your server has a GPU, use NVIDIA’s monitoring tool to track its usage:
nvidia-smi
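On servers without a GPU, nvidia-smi will not exist, so a guarded one-shot query is safer in scripts. A sketch using nvidia-smi's CSV query mode:

```shell
# One-shot GPU utilization report; degrades gracefully on CPU-only servers.
if command -v nvidia-smi > /dev/null 2>&1; then
  gpu_stats=$(nvidia-smi --query-gpu=utilization.gpu,memory.used,memory.total --format=csv)
else
  gpu_stats="No NVIDIA driver found -- GPU monitoring not available on this server"
fi
echo "$gpu_stats"
```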
Common Issues and Fixes
Here are some common issues you might encounter when setting up DeepSeek and how to fix them.
1. Docker Container Not Running
Run the following command to check the status of Docker:
docker ps
If OpenWebUI is not running, restart the container:
docker restart open-webui
2. DeepSeek Model Not Loading
Check installed models:
ollama list
If the DeepSeek model is missing, reinstall it:
ollama pull deepseek-r1:70b
3. Server Access Issues
Ensure firewall rules allow traffic on necessary ports (e.g., 11434 for Ollama and 3000 for OpenWebUI):
sudo ufw allow 11434
sudo ufw allow 3000
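Before blaming the firewall, you can confirm that the two services are actually listening. A sketch assuming iproute2's ss is installed (as it is on Ubuntu by default):

```shell
# List listening TCP sockets and filter for the Ollama and OpenWebUI ports.
if command -v ss > /dev/null 2>&1; then
  listening=$(ss -tln | grep -E ':(11434|3000)\b' || echo "Ports 11434/3000 are not listening yet")
else
  listening="ss not available on this system -- install iproute2 or use netstat"
fi
echo "$listening"
```

If neither port shows up, restart the Ollama service and the open-webui container before adjusting firewall rules.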
Step 7: What Are the Features of OpenWebUI?
OpenWebUI Dashboard:
The OpenWebUI dashboard is designed for simplicity and efficiency. Key sections include:
- New Chat: Start conversational queries directly with your installed AI models.
- Models Tab: View and manage your installed models, enabling or disabling them based on your requirements.
- Settings Tab: Customize global settings, such as API connections, prompt configurations, and database management.
This structure provides a user-friendly environment for interacting with AI models and configuring workflows.
Important Configurations in Settings of OpenWebUI
The Settings tab offers several customization options:
- General Settings: Configure user preferences such as auto-login and default prompts for a smoother interaction experience.
- Connections: Add third-party APIs, such as OpenAI or Ollama. Use this to link models to external services for extended functionality.
- Web Search Integration: Enable AI to fetch real-time information from the internet for dynamic query responses. This feature is ideal for research and fact-checking tasks.
Step 8: How to Manage LLM Models using OpenWebUI
Model Management using Admin Panel
The Models Tab in Admin Panel allows you to:
- View Models: Check the list of all installed models, including details such as size and version.
- Enable/Disable Models: Toggle models on or off to control which ones are active for queries.
- Add/Delete Models: Upload new model files or remove outdated ones to optimize resource usage.
How to Make Changes to Model Settings using Admin Dashboard in OpenWebUI
Each model can be individually configured to suit your needs:
- Set Default Prompts: Predefine instructions to guide model outputs.
- Link APIs: Connect specific models to external APIs for seamless integration with other systems.
- Adjust Response Formats: Tailor how the models present their outputs, such as text length or tone, to better suit your tasks.
Step 9: How to Use Advanced Features in OpenWebUI
Manage Database from Admin Panel
The Database Tab provides tools for managing system data:
- Import/Export Settings: Save your configurations as JSON files for backups or transfer them to another system.
- Reset the Database: Clear all data to troubleshoot issues or start fresh without affecting the software.
Configure Audio and Video Settings from Admin Panel
Advanced settings include:
- Audio: Choose Text-to-Speech (TTS) and Speech-to-Text (STT) engines like Whisper or Azure. This enables AI to interact through audio.
- Images: Enable or disable image storage to optimize storage space and maintain privacy. This is particularly useful for AI models generating visual outputs.
Step 10: Manage User Roles in OpenWebUI
How to Assign Admin and Pending Roles to Users from the Admin Panel
The Users Tab in the OpenWebUI admin panel allows administrators to manage user roles. Users can be set as Admin or remain Pending depending on the access level they require. To update the role of a user, simply click on the pencil icon next to their name.
Step 11: How to Fix Account Activation Issues in OpenWebUI
Check the Activation Status of the user in OpenWebUI Dashboard
If users face restricted access, they may see an Account Activation Pending message. This issue occurs when admin approval is needed. Navigate to the admin panel and activate user accounts to resolve this problem. Contact the administrator if the issue persists.
Step 12: Chat Interaction with DeepSeek
Here’s an example of how you can create a simple HTML page during a chat interaction with DeepSeek. In one such conversation, the user requested a simple “Hello World” HTML page, and the model promptly generated a clean, concise code snippet. Such interactions demonstrate the flexibility and responsiveness of DeepSeek when deployed with Ollama.
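For illustration, the generated page looked roughly like the following (a hypothetical reconstruction, not the model's verbatim output); the script writes it to /tmp/hello.html so you can open it in a browser:

```shell
# Write a minimal "Hello World" page like the one produced in that chat.
cat > /tmp/hello.html <<'EOF'
<!DOCTYPE html>
<html>
<head><title>Hello World</title></head>
<body>
  <h1>Hello, World!</h1>
</body>
</html>
EOF
echo "Wrote /tmp/hello.html ($(wc -c < /tmp/hello.html) bytes)"
```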
Conclusion
In this guide, you installed and set up DeepSeek on your server using Ollama. You now have two ways to work with the model: the command line for quick queries and scripting, and the OpenWebUI web interface for a more visual workflow.
From here, you can get more out of the deployment by experimenting with different model tags, setting up API integrations, and taking advantage of GPU acceleration for greater efficiency. Happy AI computing!
About the writer
Vinayak Baranwal wrote this article. Use the provided link to connect with Vinayak on LinkedIn for more insightful content or collaboration opportunities.