This guide will show you how to install, configure, and use Ollama and OpenWebUI. By the end of this guide, you will have a fully operational system for managing and interacting with advanced AI models, with detailed explanations of every process and feature.
1. What Are Ollama and OpenWebUI?
Ollama is a command-line tool to manage advanced AI models like Llama on local machines. It allows users to install, configure, and run AI models with minimal effort. Ollama’s simplicity makes it ideal for researchers, developers, and AI enthusiasts who want local control over model operations without relying on external servers.
OpenWebUI is a web-based graphical user interface that complements Ollama. It provides a visual way to interact with AI models and manage workflows, covering tasks such as API integration, model configuration, and prompt handling. Combined, the two tools provide a fully integrated AI experience, pairing the CLI’s robustness with the Web UI’s simplicity.
2. Requirements
System Requirements
Before you begin the installation process, make sure that your system meets the following requirements:
- Operating System: A Linux distribution like Ubuntu 20.04 or newer is highly recommended for compatibility.
- Hardware Specifications: A minimum of 8GB RAM, a quad-core processor, and at least 20GB of free disk space are required for a basic setup. Larger models like Llama may demand 16GB of RAM or more. A dedicated GPU is strongly recommended for faster AI responses.
- Dependencies: Important tools include curl, ca-certificates, and Docker. These components are critical for downloading and running the required software.
- Networking: A stable and fast internet connection is mandatory to download large model files, Docker images, and other dependencies.
Pre-Installation Preparation
Preparing your system ensures a smooth installation. First, run the following commands to update and upgrade your system packages:
sudo apt update && sudo apt upgrade
This ensures that you are working with the latest software versions. Verify administrative access by testing sudo commands, as you’ll need root privileges during installation. Finally, check your internet speed to avoid interruptions when downloading large files.
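As a sketch of these checks, the snippet below verifies free disk space and sudo access before installing. The 20GB threshold mirrors the requirement above; the helper function name is our own, for illustration:

```shell
#!/usr/bin/env bash
# Pre-installation sanity check (illustrative sketch).

# True when the available kilobytes meet a minimum number of gigabytes.
meets_min_gb() {
  local avail_kb=$1 min_gb=$2
  [ "$avail_kb" -ge $(( min_gb * 1024 * 1024 )) ]
}

# Free space on the root filesystem, in kilobytes.
avail_kb=$(df -k --output=avail / | tail -1 | tr -d ' ')

if meets_min_gb "$avail_kb" 20; then
  echo "Disk space OK ($(( avail_kb / 1024 / 1024 )) GB free)"
else
  echo "Need at least 20GB free on /"
fi

# Confirm sudo works before starting a long install.
sudo -n true 2>/dev/null && echo "sudo access confirmed" \
  || echo "run 'sudo -v' to cache credentials first"
```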
3. How to Install Ollama on your VPS
Step 1: Run the Installation Command
The Ollama installation script is simple. Copy the command from the Ollama website and execute it in your terminal:
curl -fsSL https://ollama.com/install.sh | sh
This script will download the required files, configure the system environment, and make the Ollama CLI available immediately. The process is automated, so you won’t need to perform additional manual steps.
Step 2: Verify the Installation
Verify the installation by running the following:
ollama list
This command displays a list of installed models and confirms whether the Ollama CLI is functioning correctly.
If the list is empty, that is expected on a fresh install; you will add models in the next step.
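Beyond ollama list, a quick scripted check can confirm that the binary is on your PATH and that the installer’s systemd service is active (a sketch; the helper name is our own):

```shell
# True when the ollama binary is available on PATH.
ollama_on_path() { command -v ollama >/dev/null 2>&1; }

if ollama_on_path; then
  ollama --version
  # The install script registers a systemd service on most Linux systems.
  systemctl is-active ollama 2>/dev/null || echo "ollama service is not active"
else
  echo "ollama binary not found; re-run the install script"
fi
```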
Step 3: Add and Use Models
To add a model, such as Llama, use the following command:
ollama pull llama3.2:3b
This downloads and installs the selected model. Once installed, you can interact with it by running:
ollama run llama3.2:3b
You can now type queries, and the model will generate detailed responses in real time, showcasing its capabilities.
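Besides the interactive CLI, Ollama exposes a local REST API on port 11434. The sketch below builds a request body for the /api/generate endpoint and sends it only if a local server answers (the helper name is ours; the model tag assumes the pull above):

```shell
# Build a JSON body for Ollama's /api/generate endpoint.
make_generate_body() {
  printf '{"model":"%s","prompt":"%s","stream":false}' "$1" "$2"
}

body=$(make_generate_body "llama3.2:3b" "Why is the sky blue?")

# Only send the request when a local Ollama server is reachable.
if curl -fsS --max-time 2 http://127.0.0.1:11434/api/version >/dev/null 2>&1; then
  curl -s http://127.0.0.1:11434/api/generate -d "$body"
else
  echo "Ollama API not reachable on 127.0.0.1:11434"
fi
```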
4. How to Install Docker for OpenWebUI
Step 1: Add Docker’s Repository Key
Docker is essential for running OpenWebUI. Start by adding Docker’s GPG key to your system:
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo tee /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
This step ensures that your system can trust the Docker repository and prevents installation errors caused by invalid keys.
Step 2: Add the Docker Repository
Next, add Docker’s stable repository to your system’s package manager:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
This configuration lets you download and install Docker components from the official repository.
Step 3: Install Docker Components
Update your package list and install Docker along with its necessary components:
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
These tools will allow you to run Docker containers, manage images, and compose multi-container setups, all of which are critical for OpenWebUI.
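A quick way to confirm the components installed correctly is to ask each for its version. The checks below are guarded so they are safe to run even on a machine where Docker is absent (the have helper is our own):

```shell
# True when a command exists on PATH.
have() { command -v "$1" >/dev/null 2>&1; }

if have docker; then
  docker --version
  docker compose version || echo "compose plugin missing"
  docker buildx version  || echo "buildx plugin missing"
else
  echo "docker not on PATH; check the apt install output"
fi
```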
5. How to Install and Configure OpenWebUI
Step 1: Run OpenWebUI Using Docker
To run OpenWebUI, execute the following Docker command:
docker run -d --network host -v open-webui:/app/backend/data \
  -e OLLAMA_BASE_URL=http://127.0.0.1:11434 \
  --name open-webui ghcr.io/open-webui/open-webui:main
This starts OpenWebUI in the background, stores its data in the open-webui Docker volume, and points the interface at the local Ollama API on port 11434.
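Once started, a guarded check like this confirms the container is actually running (the helper name is ours; it assumes the container name open-webui used above):

```shell
# True when a container with the given name is in the "running" state.
container_running() {
  docker ps --filter "name=$1" --filter status=running \
    --format '{{.Names}}' 2>/dev/null | grep -qx "$1"
}

if command -v docker >/dev/null 2>&1; then
  if container_running open-webui; then
    echo "open-webui is running"
  else
    echo "open-webui is not running; inspect it with: docker logs open-webui"
  fi
else
  echo "docker is not installed"
fi
```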
Step 2: Access the Interface
Open a browser and go to:
http://127.0.0.1:8080 or http://yourdomain.com:8080
(With host networking, OpenWebUI listens on port 8080 by default.)
Create an admin account with your name, email, and password. This account will give you access to all OpenWebUI features, including managing models, running queries, and configuring settings.
6. How to Use OpenWebUI Features
Dashboard Overview
The OpenWebUI dashboard is designed for simplicity and efficiency. Key sections include:
- New Chat: Start conversational queries directly with your installed AI models.
- Models Tab: View and manage your installed models, toggling them on or off as needed.
- Settings Tab: Customize global settings, such as API connections, prompt configurations, and database management.
This structure provides a user-friendly environment for interacting with AI models and configuring workflows.
Key Configurations in Settings
The Settings tab offers several customization options:
- General Settings: Configure user preferences, such as auto-login and default prompts, for a smoother interaction experience.
- Connections: Add third-party APIs, such as OpenAI or Ollama. Use this to link models to external services for extended functionality.
- Web Search Integration: This feature enables AI to fetch real-time information from the Internet for dynamic query responses. It is ideal for research and fact-checking tasks.
7. How to Manage AI Models in OpenWebUI
Model Management
The Models Tab allows you to:
- View Models: Check the list of all installed models, including details such as size and version.
- Enable/Disable Models: Toggle models on or off to control which ones are active for queries.
- Add/Delete Models: Upload new model files or remove outdated ones to optimize resource usage.
Customizing Model Settings
Each model can be individually configured to suit your needs:
- Set Default Prompts: Predefine instructions to guide model outputs.
- Link APIs: Connect specific models to external APIs for seamless integration with other systems.
- Adjust Response Formats: Tailor how models present their output, such as text length or tone, to better suit your tasks.
8. How to Use Advanced Features in OpenWebUI
Database Management
The Database Tab provides tools for managing system data:
- Import/Export Settings: Save your configurations as JSON files for backups or transfer them to another system.
- Reset the Database: Clear all data to troubleshoot issues or start fresh without affecting the software.
Audio and Visual Configurations
Advanced settings include:
- Audio: Choose Text-to-Speech (TTS) and Speech-to-Text (STT) engines like Whisper or Azure. This enables AI to interact through audio.
- Images: Enable or disable image storage to save space and maintain privacy. This is particularly useful for AI models that generate visual outputs.
9. Managing User Roles in OpenWebUI
Assigning Admin and Pending Roles
The Users Tab in OpenWebUI’s admin panel allows administrators to manage user roles. Users can be set as Admin or remain Pending depending on their required access level. To update a user’s role, click the pencil icon next to their name.
10. Resolving Access Activation Issues
Checking Activation Status
Users may see an Account Activation Pending message if they face restricted access. This issue occurs when admin approval is needed. To resolve this problem, navigate to the admin panel and activate user accounts. Contact the administrator if the issue persists.
11. Configuring API Connections in OpenWebUI
Adding and Managing API Keys
API connections for OpenAI, Ollama, and other providers can be configured in the Settings Tab, giving administrators finer control over integrations. A dropdown lets you switch between the available APIs, keeping model capabilities flexible and dynamic.
12. Customizing Text-to-Speech and Image Features
Setting Up TTS and Image Configurations
OpenWebUI allows users to configure advanced features under the admin panel’s Audio and Image Settings sections. You can enable text-to-speech engines like Whisper or adjust image storage settings for optimized resource use. Make sure you’ve selected the appropriate settings based on your project needs.
13. Database Management in OpenWebUI
Importing and Exporting Database Configurations
The Database Tab enables administrators to manage system data easily. They can import existing configurations via JSON files for quick setup, or export the current settings as a backup. The database can also be reset entirely to troubleshoot persistent issues.
14. Monitoring GPU Usage and Performance
Real-Time GPU Monitoring with NVIDIA-SMI
The nvidia-smi command lets you monitor GPU performance in real time. It reports detailed information, including the driver version, CUDA version, GPU memory usage, and the processes using the GPU. This tool is essential for ensuring optimal resource allocation when running large models such as Llama 3.2.
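For scripted monitoring, nvidia-smi’s query mode emits machine-readable values. The threshold helper below is our own illustration of flagging memory pressure from such values:

```shell
# True when used MiB exceeds the given percentage of total MiB.
mem_pressure() { [ $(( $1 * 100 / $2 )) -ge "$3" ]; }

if command -v nvidia-smi >/dev/null 2>&1; then
  # One-shot, CSV-formatted snapshot of each GPU.
  nvidia-smi --query-gpu=name,utilization.gpu,memory.used,memory.total \
    --format=csv,noheader
else
  echo "nvidia-smi not found (NVIDIA driver not installed)"
fi

# Example: 7000 MiB used of 8192 MiB total is above an 80% threshold.
mem_pressure 7000 8192 80 && echo "GPU memory is under pressure"
```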
15. Selecting and Managing AI Models in OpenWebUI
Switching Between Available Models
OpenWebUI provides the flexibility to manage multiple AI models, such as Llama 3.2, the Arena Model, and more. Use the Model Selector to switch between models based on project requirements. Each model offers unique capabilities tailored to specific use cases, such as natural-language assistance or image generation.
16. Making Two Models Chat Using Shared Data
OpenWebUI lets two AI models exchange data and engage with each other. This can be valuable for more complex applications, such as collaborative troubleshooting or benchmarking one model against another.
Steps to Enable Model Interaction:
- Load Both Models: Make sure both models are available and active in the interface.
- Select the Data Exchange Option: Utilize the settings that allow models to share relevant data streams.
- Monitor Interaction: View and analyze the ongoing exchanges to ensure coherence and accuracy.
17. Enabling Temporary Chat in OpenWebUI
Temporary chats in OpenWebUI are private and short-lived, making them ideal for experimenting with ideas, reviewing AI output, or holding provisional conversations. Because no session history is recorded, the feature suits quick trials, one-on-one exchanges, and sensitive conversations.
Steps to Enable Temporary Chat:
- Activate Temporary Mode: Switch the chat mode to “Temporary” via the user interface.
- Start Interaction: Begin inputting prompts without concern for long-term storage.
- Finalize Output: Export or note down outputs manually, if necessary, before closing the session.
18. How to Troubleshoot Common Issues
Docker Fails to Start
If Docker doesn’t start, restart the service:
sudo systemctl restart docker
Ensure that Docker is installed correctly, and check that your user has the required permissions to run Docker commands.
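These checks can be scripted; the word_in and has_group helpers below are our own sketch for verifying the service state and docker group membership:

```shell
# True when a word appears in a space-separated list (note: $1 is
# intentionally unquoted so the list splits into one word per line).
word_in() { printf '%s\n' $1 | grep -qx "$2"; }

# True when the user belongs to the named group.
has_group() { word_in "$(id -nG "$1" 2>/dev/null)" "$2"; }

# Show recent Docker service logs if the service is not active.
if command -v systemctl >/dev/null 2>&1; then
  systemctl is-active docker 2>/dev/null \
    || journalctl -u docker --no-pager -n 20 2>/dev/null \
    || true
fi

if has_group "$USER" docker; then
  echo "$USER can run docker without sudo"
else
  echo "add yourself with: sudo usermod -aG docker $USER (log out and back in)"
fi
```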
Model Not Loading
Run the following command to check installed models:
ollama list
If the required model is missing, add it with ollama pull <model_name> and confirm its availability before running queries.
Access Problems
If OpenWebUI is inaccessible, check the status of the Docker container:
docker ps
Restart the container if necessary, and verify that your browser is pointing to the correct server address.
19. Conclusion
Now, you’ve successfully installed and configured Ollama and OpenWebUI. These tools combine CLI and GUI functionalities to make AI workflows simple. With their robust features, you can manage AI models, customize workflows, and unlock the full potential of advanced AI-driven applications. Explore and experiment to make the most of this setup.
About the writer
This article was written by Vinayak Baranwal. For more insightful content or collaboration opportunities, feel free to connect with Vinayak on LinkedIn using the provided link.