This guide will show you how to install, configure, and use Ollama and OpenWebUI. By the end of this guide, you will have a fully operational system for managing and interacting with advanced AI models, with detailed explanations of every process and feature.
Ollama is a command-line tool to manage advanced AI models like Llama on local machines. It allows users to install, configure, and run AI models with minimal effort. Ollama’s simplicity makes it ideal for researchers, developers, and AI enthusiasts who want local control over model operations without relying on external servers.
OpenWebUI is a web-based graphical user interface that complements Ollama. It provides a visual way to interact with AI models and manage workflows, including API integration, model configuration, and prompt handling. Together, the two tools deliver a fully integrated AI experience, combining the CLI’s robustness with a web UI’s simplicity.
Before you begin the installation process, make sure that your system meets the following requirements:
Preparing your system ensures a smooth installation. First, run the following commands to update and upgrade your system packages:
sudo apt update && sudo apt upgrade


This ensures that you are working with the latest software versions. Verify administrative access by testing sudo commands, as you’ll need root privileges during installation. Finally, check your internet speed to avoid interruptions when downloading large files.
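The checks above can be scripted. Here is a minimal, hedged sketch: it verifies that the tools the guide relies on are present and that the Ollama download host is reachable (the URL matches the install command used below; everything else is illustrative):

```shell
# Pre-flight checks before installing Ollama.
have() { command -v "$1" >/dev/null 2>&1; }   # true if a binary is on PATH

for tool in curl sudo; do
    have "$tool" && echo "ok: $tool" || echo "missing: $tool" >&2
done

# Basic reachability check against the Ollama download host.
if have curl && curl -fsI --max-time 10 https://ollama.com >/dev/null 2>&1; then
    echo "ok: network"
else
    echo "warning: could not reach ollama.com" >&2
fi
```

If any line prints a warning, resolve it before continuing; the installer downloads several hundred megabytes per model.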
The Ollama installation script is simple. Copy the command from the Ollama website and execute it in your terminal:

curl -fsSL https://ollama.com/install.sh | sh
This script will download the required files, configure the system environment, and make the Ollama CLI available immediately. The process is automated, so you won’t need to perform additional manual steps.

Verify the installation by running the following:
ollama list
This command displays a list of installed models and confirms whether the Ollama CLI is functioning correctly.

If the list is empty, that is expected on a fresh install: the CLI is working correctly, and you will add models in the next step.
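If you want to confirm that the CLI and its background server are healthy before pulling anything, a quick sketch (the systemd service name "ollama" is an assumption based on the default Linux install script):

```shell
# Confirm the CLI binary and its background service are healthy.
SERVICE="ollama"   # service name created by the install script (assumed default)

if command -v ollama >/dev/null 2>&1; then
    ollama --version                          # prints the installed CLI version
    systemctl is-active "$SERVICE" || true    # "active" when the server is running
fi
```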
To add a model, such as Llama 3.2, use the following command:
ollama pull llama3.2:3b

This downloads and installs the selected model. Once installed, you can interact with it by running:
ollama run llama3.2:3b
You can now type queries, and the model will generate responses in real time. For example, you might ask it to explain a concept or summarize a paragraph, and it will provide a detailed answer, showcasing its capabilities.
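The interactive session can also be driven non-interactively, which is useful in scripts. A hedged sketch (the model tag should match whatever you pulled earlier; the guard simply makes the snippet a no-op on machines without Ollama):

```shell
MODEL="llama3.2:3b"   # the tag pulled in the previous step

# One-shot prompt: prints the answer and exits instead of opening the REPL.
if command -v ollama >/dev/null 2>&1; then
    ollama run "$MODEL" "Explain what a large language model is in one sentence."
fi
```

You can also pipe a prompt in from a file with ollama run "$MODEL" < prompt.txt.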
Docker is essential for running OpenWebUI. Start by adding Docker’s GPG key to your system:
sudo mkdir -p /etc/apt/keyrings

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo tee /etc/apt/keyrings/docker.asc > /dev/null

sudo chmod a+r /etc/apt/keyrings/docker.asc

This step ensures that your system can trust the Docker repository and prevents installation errors caused by invalid keys.
Next, add Docker’s stable repository to your system’s package manager:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
This configuration lets you download and install Docker components from the official repository.

Update your package list and install Docker along with its necessary components:
sudo apt-get update

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

These tools will allow you to run Docker containers, manage images, and compose multi-container setups, all of which are critical for OpenWebUI.
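Before moving on, it is worth confirming that the daemon actually started. A hedged smoke-test sketch (hello-world is Docker's standard test image; prefix docker with sudo if your user is not yet in the docker group):

```shell
# Post-install smoke test for Docker.
check_docker() {
    command -v docker >/dev/null 2>&1 || { echo "docker not on PATH" >&2; return 1; }
    docker --version
}

if check_docker; then
    # Pulls a tiny image and prints a greeting if the daemon is reachable.
    docker run --rm hello-world ||
        echo "daemon not reachable; try: sudo docker run --rm hello-world" >&2
fi
```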
To run OpenWebUI, execute the following Docker command:
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
-v open-webui:/app/backend/data \
-e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
--name open-webui --restart always ghcr.io/open-webui/open-webui:main

This command starts OpenWebUI in a detached container, stores its data in a persistent Docker volume, and points the interface at the local Ollama API.
Open a browser and go to:
http://127.0.0.1:3000 or http://yourdomain.com:3000

Create an admin account with your name, email, and password. This account will give you access to all OpenWebUI features, including managing models, running queries, and configuring settings.

The OpenWebUI dashboard is designed for simplicity and efficiency. Key sections include:
This structure provides a user-friendly environment for interacting with AI models and configuring workflows.

The Settings tab offers several customization options:



The Models Tab allows you to:

Each model can be individually configured to suit your needs:

The Database Tab provides tools for managing system data:

Advanced settings include:


The Users Tab in OpenWebUI’s admin panel allows administrators to manage user roles. Users can be set as Admin or remain Pending depending on their required access level. To update a user’s role, click the pencil icon next to their name.

Users may see an Account Activation Pending message when their access is restricted. This occurs when admin approval is required. To resolve it, navigate to the admin panel and activate the user's account. If the issue persists, contact the administrator.

API connections for OpenAI, Ollama, and other providers can be configured in the Settings Tab, giving administrators finer control. A dropdown lets you switch between the available APIs, enabling dynamic model integration.
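When debugging an API connection, it can help to query the Ollama endpoint directly; /api/generate is Ollama's documented generation endpoint, and the URL below matches the default local port. A hedged sketch:

```shell
# Query the Ollama API directly -- the same endpoint OpenWebUI uses
# via OLLAMA_BASE_URL -- to verify the connection works.
OLLAMA_URL="http://127.0.0.1:11434"

curl -s "$OLLAMA_URL/api/generate" -d '{
  "model": "llama3.2:3b",
  "prompt": "Reply with the single word: ready",
  "stream": false
}' || echo "Ollama API not reachable at $OLLAMA_URL" >&2
```

A JSON response here means OpenWebUI should be able to reach the same backend.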
OpenWebUI allows users to configure advanced features under the admin panel’s Audio and Image Settings sections. You can enable speech-to-text engines like Whisper or adjust image storage settings for optimized resource use. Make sure you’ve selected the appropriate settings based on your project needs.

The Database Tab enables administrators to manage system data easily. They can import existing configurations via JSON files for quick setup or export current backup settings. The database can also be reset entirely to troubleshoot persistent issues.
The nvidia-smi command allows users to monitor GPU performance in real time. It provides detailed information, including the driver version, CUDA version, GPU memory usage, and the processes utilizing the GPU. This tool is essential for ensuring optimal resource allocation when running large models like Llama 3.2 3B.
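For scripting or logging, nvidia-smi's query mode gives a compact view of the same data; the field names below are standard --query-gpu identifiers. A hedged sketch:

```shell
# Compact, scriptable GPU snapshot (fields are standard nvidia-smi query names).
GPU_QUERY="name,driver_version,memory.used,memory.total,utilization.gpu"

if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu="$GPU_QUERY" --format=csv
    # For a live view, refresh every second:
    # watch -n 1 nvidia-smi
fi
```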

OpenWebUI provides the flexibility to manage multiple AI models such as Llama 3.2 3B, Arena Model, and more. Use the Model Selector feature to switch between models based on project requirements. Each model offers unique capabilities tailored to specific use cases, such as natural language assistance or image generation.

To let two AI models engage with each other, OpenWebUI provides a function for the models to exchange data. This can be valuable for more complex applications, such as collaborative troubleshooting or benchmarking one model against another.

Temporary chats within OpenWebUI are private and short-lived, allowing you to experiment with concepts, review AI output, or hold provisional conversations. Because no session history is recorded, the feature is well suited to quick trials, one-off questions, or sensitive conversations.

If Docker doesn’t start, restart the service:
sudo systemctl restart docker

Ensure that Docker is installed correctly, and check that your user has the required permissions to run Docker commands.
Run the following command to check installed models:
ollama list

If the required model is missing, add it using ollama pull <model_name> and confirm its availability before running queries.
If OpenWebUI is inaccessible, check the status of the Docker container:
docker ps

Restart the container if necessary, and verify that your browser is pointing to the correct server address.
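The restart-and-verify steps can be sketched as follows; docker restart, logs, and inspect are standard Docker CLI subcommands, and the container name matches the one used earlier in this guide:

```shell
# Recovery steps if the OpenWebUI interface does not load.
CONTAINER="open-webui"

if command -v docker >/dev/null 2>&1; then
    docker restart "$CONTAINER" &&
    docker logs --tail 50 "$CONTAINER" &&                 # recent startup output
    docker inspect -f '{{.State.Status}}' "$CONTAINER" || # expect "running"
    echo "could not query container $CONTAINER" >&2
fi
```

If the inspect line prints anything other than "running", the logs output usually shows why the container exited.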
Now, you’ve successfully installed and configured Ollama and OpenWebUI. These tools combine CLI and GUI functionalities to make AI workflows simple. With their robust features, you can manage AI models, customize workflows, and unlock the full potential of advanced AI-driven applications. Explore and experiment to make the most of this setup.

Vinayak Baranwal wrote this article. Use the provided link to connect with Vinayak on LinkedIn for more insightful content or collaboration opportunities.