How to Install and Use Ollama and OpenWebUI on your VPS

This guide will show you how to install, configure, and use Ollama and OpenWebUI. By the end of this guide, you will have a fully operational system for managing and interacting with advanced AI models, with detailed explanations of every process and feature.

1. What Are Ollama and OpenWebUI?

Ollama is a command-line tool to manage advanced AI models like Llama on local machines. It allows users to install, configure, and run AI models with minimal effort. Ollama’s simplicity makes it ideal for researchers, developers, and AI enthusiasts who want local control over model operations without relying on external servers.

OpenWebUI is a web-based graphical user interface that complements Ollama. It provides a visual way to interact with AI models and manage workflows: you can configure API integrations, manage models, and handle prompts. Together, the two tools offer a fully integrated AI experience, combining the CLI’s robustness with a web UI’s simplicity.

2. Requirements

System Requirements

Before you begin the installation process, make sure that your system meets the following requirements:

  • Operating System: A Linux distribution like Ubuntu 20.04 or newer is highly recommended for compatibility.
  • Hardware Specifications: A minimum of 8GB RAM, a quad-core processor, and at least 20GB of free disk space are required for basic setups. Larger models like Llama may demand 16GB or more, and a GPU is strongly recommended for faster responses.
  • Dependencies: Important tools include curl, ca-certificates, and Docker. These components are critical for downloading and running the required software.
  • Networking: A stable and fast internet connection is mandatory to download large model files, Docker images, and other dependencies.
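The minimums above can be checked in one pass. This is a hedged sketch using standard Linux tools; adjust the thresholds if your target model needs more:

```shell
#!/bin/sh
# Preflight check against the minimums above: 8 GB RAM, 4 cores, 20 GB free disk.

ram_gb=$(awk '/MemTotal/ {printf "%d", $2 / 1024 / 1024}' /proc/meminfo)
cores=$(nproc)
disk_gb=$(df --output=avail -BG / | tail -1 | tr -dc '0-9')

echo "RAM: ${ram_gb} GB, CPU cores: ${cores}, free disk on /: ${disk_gb} GB"

[ "$ram_gb" -ge 8 ]   || echo "WARNING: less than 8 GB RAM"
[ "$cores" -ge 4 ]    || echo "WARNING: fewer than 4 CPU cores"
[ "$disk_gb" -ge 20 ] || echo "WARNING: less than 20 GB free disk space"
```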

Pre-Installation Preparation

Preparing your system ensures a smooth installation. First, run the following commands to update and upgrade your system packages:

sudo apt update && sudo apt upgrade

This ensures that you are working with the latest software versions. Verify administrative access by testing sudo commands, as you’ll need root privileges during installation. Finally, check your internet speed to avoid interruptions when downloading large files.
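A quick way to confirm those prerequisites before proceeding (a sketch; package names follow Ubuntu/Debian conventions):

```shell
# Confirm sudo works before the installation steps that need it.
sudo -v && echo "sudo access OK"

# curl and ca-certificates are required for the download steps below.
sudo apt install -y curl ca-certificates
```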

3. How to Install Ollama on your VPS

Step 1: Run the Installation Command

The Ollama installation script is simple. Copy the command from the Ollama website and execute it in your terminal:

curl -fsSL https://ollama.com/install.sh | sh

This script will download the required files, configure the system environment, and make the Ollama CLI available immediately. The process is automated, so you won’t need to perform additional manual steps.


Step 2: Verify the Installation

Verify the installation by running the following:

ollama list

This command displays a list of installed models and confirms whether the Ollama CLI is functioning correctly.


If the list is empty, that is expected on a fresh install; you will add models in the next step.
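Beyond ollama list, you can check that the CLI and its background service are healthy (assuming the default API port 11434 and a systemd-based install):

```shell
# Print the installed CLI version.
ollama --version

# The installer registers a systemd service; this should print "active".
systemctl is-active ollama

# The local API answers on port 11434 when the server is up.
curl -s http://127.0.0.1:11434/
```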

Step 3: Add and Use Models

To add a model, such as Llama 3.2, use the following command:

ollama pull llama3.2:3b

This downloads and installs the selected model. Once installed, you can interact with it by running:

ollama run llama3.2:3b

You can now type queries, and the model will generate real-time responses. For example, you might ask:


The model will provide a detailed answer, showcasing its capabilities.
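Interactive chat is not the only option. Assuming the llama3.2:3b tag pulled above and the default API port, you can also run one-off prompts from scripts or call Ollama's local REST API:

```shell
# One-off prompt: pass it as an argument instead of opening an interactive chat.
ollama run llama3.2:3b "Summarize what a VPS is in one sentence."

# The same model over the local REST API (stream disabled for a single JSON reply).
curl -s http://127.0.0.1:11434/api/generate \
  -d '{"model": "llama3.2:3b", "prompt": "Hello", "stream": false}'
```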

4. How to Install Docker for OpenWebUI

Step 1: Add Docker’s Repository Key

Docker is essential for running OpenWebUI. Start by adding Docker’s GPG key to your system:

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo tee /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

This step ensures that your system can trust the Docker repository and prevents installation errors caused by invalid keys.

Step 2: Add the Docker Repository

Next, add Docker’s stable repository to your system’s package manager:

echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

This configuration lets you download and install Docker components from the official repository.
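Before installing, you can confirm that apt now resolves packages from Docker's repository (a quick sanity check; the output should list candidates from download.docker.com):

```shell
# Refresh the package index so the new repository is read.
sudo apt-get update

# The policy output should show download.docker.com as a source for docker-ce.
apt-cache policy docker-ce
```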


Step 3: Install Docker Components

Update your package list and install Docker along with its necessary components:

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

These tools will allow you to run Docker containers, manage images, and compose multi-container setups, all of which are critical for OpenWebUI.
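Before moving on, it is worth verifying that the daemon runs end to end (a hedged sketch; the usermod step is optional and takes effect after you log out and back in):

```shell
# Confirm the client and daemon are working.
sudo docker --version
sudo docker run --rm hello-world

# Optional: run docker without sudo by joining the docker group.
sudo usermod -aG docker "$USER"
```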

5. How to Install and Configure OpenWebUI

Step 1: Run OpenWebUI Using Docker

To run OpenWebUI, execute the following Docker command:

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
-v open-webui:/app/backend/data \
-e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
--name open-webui ghcr.io/open-webui/open-webui:main

This command starts OpenWebUI in the background, maps the interface to port 3000, stores its data in the open-webui Docker volume so it persists across restarts, and points the container at your local Ollama API. Once the container reports it is running, you can move on to the web interface.
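If the interface does not come up right away, the container's state and startup logs usually explain why (assuming the container name open-webui used above):

```shell
# Is the container running?
docker ps --filter "name=open-webui"

# Follow its startup logs; press Ctrl+C to stop.
docker logs -f open-webui
```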

Step 2: Access the Interface

Open a browser and go to:

http://127.0.0.1:3000 or http://yourdomain.com:3000

Create an admin account with your name, email, and password. This account will give you access to all OpenWebUI features, including managing models, running queries, and configuring settings.


6. How to Use OpenWebUI Features

Dashboard Overview

The OpenWebUI dashboard is designed for simplicity and efficiency. Key sections include:

  1. New Chat: Start conversational queries directly with your installed AI models.
  2. Models Tab: View and manage your installed models, enabling or disabling them as needed.
  3. Settings Tab: Customize global settings, such as API connections, prompt configurations, and database management.

This structure provides a user-friendly environment for interacting with AI models and configuring workflows.


Key Configurations in Settings

The Settings tab offers several customization options:

  1. General Settings: Configure user preferences like auto-login and default prompts for a smoother interaction experience.
  2. Connections: Add third-party APIs, such as OpenAI or Ollama. Use this to link models to external services for extended functionality.
  3. Web Search Integration: This feature enables AI to fetch real-time information from the Internet for dynamic query responses. It is ideal for research and fact-checking tasks.

7. How to Manage AI Models in OpenWebUI

Model Management

The Models Tab allows you to:

  • View Models: Check the list of all installed models, including details such as size and version.
  • Enable/Disable Models: Toggle models on or off to control which ones are active for queries.
  • Add/Delete Models: Upload new model files or remove outdated ones to optimize resource usage.

Customizing Model Settings

Each model can be individually configured to suit your needs:

  • Set Default Prompts: Predefine instructions to guide model outputs.
  • Link APIs: Connect specific models to external APIs for seamless integration with other systems.
  • Adjust Response Formats: Tailor how the models present their outputs, such as text length or tone, to better suit your tasks.

8. How to Use Advanced Features in OpenWebUI

Database Management

The Database Tab provides tools for managing system data:

  1. Import/Export Settings: Save your configurations as JSON files for backups or transfer them to another system.
  2. Reset the Database: Clear all data to troubleshoot issues or start fresh without affecting the software.

Audio and Visual Configurations

Advanced settings include:

  • Audio: Choose Text-to-Speech (TTS) and Speech-to-Text (STT) engines like Whisper or Azure. This enables AI to interact through audio.
  • Images: Enable or turn off image storage to optimize storage space and maintain privacy. This is particularly useful for AI models generating visual outputs.

9. Managing User Roles in OpenWebUI

Assigning Admin and Pending Roles

The Users Tab in OpenWebUI’s admin panel allows administrators to manage user roles. Users can be set as Admin or remain Pending depending on their required access level. To update a user’s role, click the pencil icon next to their name.


10. Resolving Access Activation Issues

Checking Activation Status

Users may see an Account Activation Pending message if they face restricted access. This issue occurs when admin approval is needed. To resolve this problem, navigate to the admin panel and activate user accounts. Contact the administrator if the issue persists.


11. Configuring API Connections in OpenWebUI

Adding and Managing API Keys

Administrators can configure API connections for OpenAI, Ollama, and other providers in the Settings Tab for finer control over model access. A dropdown button lets you switch between the available APIs, enabling dynamic model integration.


12. Customizing Text-to-Speech and Image Features

Setting Up TTS and Image Configurations

OpenWebUI allows users to configure advanced features under the admin panel’s Audio and Image Settings sections. You can enable text-to-speech engines like Whisper or adjust image storage settings for optimized resource use. Make sure you’ve selected the appropriate settings based on your project needs.


13. Database Management in OpenWebUI

Importing and Exporting Database Configurations

The Database Tab enables administrators to manage system data easily. They can import existing configurations via JSON files for quick setup or export current backup settings. The database can also be reset entirely to troubleshoot persistent issues.

14. Monitoring GPU Usage and Performance

Real-Time GPU Monitoring with NVIDIA-SMI

The nvidia-smi command allows users to monitor GPU performance in real time. It provides detailed information, including the driver version, CUDA version, GPU memory usage, and the processes utilizing the GPU. This tool is essential for ensuring optimal resource allocation when running large models like Llama 3.2.

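For continuous monitoring while a model generates, two common variants (assuming the NVIDIA driver and nvidia-smi are installed):

```shell
# Refresh the full status screen every second.
watch -n 1 nvidia-smi

# Or log only selected fields as CSV, once per second.
nvidia-smi --query-gpu=utilization.gpu,memory.used,memory.total --format=csv -l 1
```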

15. Selecting and Managing AI Models in OpenWebUI

Switching Between Available Models

OpenWebUI provides the flexibility to manage multiple AI models, such as Llama 3.2 and the Arena Model. Use the Model Selector to switch between models based on project requirements. Each model offers unique capabilities tailored to specific use cases, such as natural-language assistance or image generation.


16. Making Two Models Chat Using Shared Data

To let two AI models engage with each other, OpenWebUI provides a way for the models to exchange data. This can be valuable for more complex applications, such as collaborative troubleshooting or benchmarking one model against another.

Steps to Enable Model Interaction:

  1. Load Both Models: Make sure both models are available and active in the interface.
  2. Select the Data Exchange Option: Utilize the settings that allow models to share relevant data streams.
  3. Monitor Interaction: View and analyze the ongoing exchanges to ensure coherence and accuracy.

17. Enabling Temporary Chat in OpenWebUI

Temporary chats in OpenWebUI are private and short-lived, making them suitable for experimenting with ideas, reviewing AI output, or holding provisional conversations. Because no session history is recorded, the feature is useful for quick trials, one-off questions, or sensitive conversations.

Steps to Enable Temporary Chat:

  1. Activate Temporary Mode: Switch the chat mode to “Temporary” via the user interface.
  2. Start Interaction: Begin inputting prompts without concern for long-term storage.
  3. Finalize Output: Export or note down outputs manually, if necessary, before closing the session.

18. How to Troubleshoot Common Issues

Docker Fails to Start

If Docker doesn’t start, restart the service:

sudo systemctl restart docker

Ensure that Docker is installed correctly, and check that your user has the required permissions to run Docker commands.
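These checks usually pinpoint the cause (a sketch; journalctl is available on systemd systems):

```shell
# Inspect the daemon's state and recent log output.
sudo systemctl status docker
sudo journalctl -u docker --since "10 minutes ago" --no-pager

# "permission denied" errors usually mean your user is not in the docker group.
groups "$USER" | grep -q '\bdocker\b' || echo "Fix: sudo usermod -aG docker $USER"
```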

Model Not Loading

Run the following command to check installed models:

ollama list

If the required model is missing, add it with ollama pull <model_name> and confirm its availability before running queries.

Access Problems

If OpenWebUI is inaccessible, check the status of the Docker container:

docker ps

Restart the container if necessary, and verify that your browser is pointing to the correct server address.
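A quick restart-and-probe sequence (assuming the container name and port used earlier in this guide):

```shell
# Restart the container and confirm the UI answers locally.
docker restart open-webui
curl -sI http://127.0.0.1:3000 | head -n 1
```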

19. Conclusion

Now, you’ve successfully installed and configured Ollama and OpenWebUI. These tools combine CLI and GUI functionalities to make AI workflows simple. With their robust features, you can manage AI models, customize workflows, and unlock the full potential of advanced AI-driven applications. Explore and experiment to make the most of this setup.

About the writer


This article was written by Vinayak Baranwal. For more insightful content or collaboration opportunities, feel free to connect with Vinayak on LinkedIn using the provided link.
