OpenWebUI Tutorial: Setting Up and Using Local Llama 3.2 with Ollama
Introduction
This tutorial provides a step-by-step guide to setting up OpenWebUI as a user-friendly interface for the Llama 3.2 model using Ollama. By the end of this tutorial, you will have a fully functional local AI chatbot running on your computer.
Prerequisites
- Basic knowledge of command-line usage
- Installed Docker Desktop
- Installed Ollama
Tutorial Duration: 1 Hour
Step 1: Install and Set Up Docker (10 min)
Docker allows us to run OpenWebUI easily.
- Download Docker Desktop from https://www.docker.com/products/docker-desktop/.
- Install Docker and ensure it is running.
- Open a terminal (Command Prompt/PowerShell/Terminal) and verify installation:
docker --version
If a version number appears, Docker is installed correctly.
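As an optional sanity check, you can also run Docker's test image, which pulls a tiny container and prints a confirmation message:
docker run hello-world
If you see the "Hello from Docker!" message, Docker can pull and run containers.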
Step 2: Install Ollama (5 min)
Ollama is required to run Llama 3.2 locally.
- Download and install Ollama from https://ollama.com/download.
- Open a terminal and check if it is installed correctly:
ollama --version
Step 3: Download Llama 3.2 Model (10 min)
We now download the Llama 3.2 model for local use.
- Open a terminal and run:
ollama pull llama3.2
- Wait for the download to complete (this may take some time depending on internet speed).
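Once the download finishes, you can confirm the model is available locally by listing installed models:
ollama list
The output should include an entry for llama3.2.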
Step 4: Start Ollama (5 min)
Once the model is downloaded, start Ollama.
- Run the following command:
ollama serve
- This starts the local Ollama server on port 11434. Note that if you installed the Ollama desktop app, the server may already be running in the background; in that case this command reports that the address is already in use, which is harmless.
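To confirm the server is reachable, query its API from another terminal:
curl http://localhost:11434/api/tags
This returns a JSON list of the models installed locally, which should include llama3.2.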
Step 5: Install and Run OpenWebUI (15 min)
Now, we will install and start OpenWebUI using Docker.
- Pull the OpenWebUI Docker image:
docker pull ghcr.io/open-webui/open-webui:main
- Run OpenWebUI with the following command:
docker run -d --name openwebui -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui-data:/app/backend/data --restart unless-stopped ghcr.io/open-webui/open-webui:main
OpenWebUI listens on port 8080 inside the container, so this maps it to port 3000 on your machine. The --add-host flag lets the container reach the Ollama server running on your host via host.docker.internal.
- Verify that OpenWebUI is running:
docker ps
If you see a container named openwebui, it is running.
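If the container is not listed, inspect its logs to see what went wrong:
docker logs openwebui
A typical cause of failure is that port 3000 is already in use by another application.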
Step 6: Access OpenWebUI (5 min)
Now, open the user interface in a web browser.
- Go to http://localhost:3000 in your browser.
- Create an account and log in. The first account created becomes the administrator account.
Step 7: Configure OpenWebUI to Use Ollama (5 min)
- Go to Settings → Connections (exact menu names can vary between OpenWebUI versions).
- Check that the Ollama API URL points at your local server; when OpenWebUI runs in Docker, this is typically http://host.docker.internal:11434.
- Save the settings.
- In the chat view, select llama3.2 from the model dropdown.
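If the model does not appear in OpenWebUI, a quick way to isolate the problem is to query Ollama directly, bypassing the UI:
curl http://localhost:11434/api/generate -d '{"model": "llama3.2", "prompt": "Say hello", "stream": false}'
If this returns a JSON response containing generated text, Ollama and the model are working, and the issue lies in the OpenWebUI connection settings.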
Step 8: Test the AI Chatbot (5 min)
Now, let’s check if everything is working:
- Open the chat window.
- Type a message, such as:
What is AI?
- If the AI responds, the setup is complete!
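You can also run the same prompt directly from the terminal to compare the model's behaviour outside the UI:
ollama run llama3.2 "What is AI?"
This performs a one-off generation against the same local model that OpenWebUI is using.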
Conclusion
By following this tutorial, you have successfully:
✅ Installed Docker and Ollama
✅ Downloaded and run Llama 3.2
✅ Installed and configured OpenWebUI
✅ Connected OpenWebUI to Ollama
✅ Tested the chatbot
You now have a fully functional local AI chatbot running securely on your machine! 🚀