Saturday, 15 March 2025

OpenWebUI - Beginner's Tutorial


OpenWebUI Tutorial: Setting Up and Using Local Llama 3.2 with Ollama

Introduction

This tutorial provides a step-by-step guide to setting up OpenWebUI as a user-friendly interface for the Llama 3.2 model using Ollama. By the end of this tutorial, you will have a fully functional local AI chatbot running on your computer.

Prerequisites

  • Basic knowledge of command-line usage
  • Installed Docker Desktop
  • Installed Ollama

Tutorial Duration: 1 Hour

Step 1: Install and Set Up Docker (10 min)

Docker allows us to run OpenWebUI easily.

  1. Download Docker Desktop from https://www.docker.com/products/docker-desktop/.
  2. Install Docker and ensure it is running.
  3. Open a terminal (Command Prompt/PowerShell/Terminal) and verify installation:
    docker --version
    
    If a version number appears, Docker is installed correctly.
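These checks can also be scripted. The sketch below is a hypothetical helper (the name `cli_version` is ours, not part of Docker or Ollama) that reports whether a command-line tool is installed, using only the Python standard library:

```python
# Minimal sketch: report whether a CLI tool is installed and print its version.
# `cli_version` is a hypothetical helper name, not part of Docker or Ollama.
import shutil
import subprocess

def cli_version(cmd):
    """Return the tool's --version output, or None if it is not on PATH."""
    if shutil.which(cmd) is None:
        return None
    out = subprocess.run([cmd, "--version"], capture_output=True, text=True)
    return out.stdout.strip() or None

for tool in ("docker", "ollama"):
    print(tool, "->", cli_version(tool) or "not installed")
```

Running it after Steps 1 and 2 should print a version string for both tools.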

Step 2: Install Ollama (5 min)

Ollama is required to run Llama 3.2 locally.

  1. Download and install Ollama from https://ollama.com/download.
  2. Open a terminal and check if it is installed correctly:
    ollama --version
    

Step 3: Download Llama 3.2 Model (10 min)

We now download the Llama 3.2 model for local use.

  1. Open a terminal and run:
    ollama pull llama3.2
    
  2. Wait for the download to complete (this may take some time depending on internet speed).

Step 4: Start Ollama (5 min)

Once the model is downloaded, start Ollama.

  1. Run the following command:
    ollama serve
    
  2. This starts the local Ollama server on port 11434. (If the Ollama desktop app is already running, the server is already up and this command will report that the address is in use; that is fine.)
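If you prefer to verify the server from code rather than the terminal, the sketch below queries Ollama's documented `/api/tags` endpoint (11434 is Ollama's default port; the helper name `list_local_models` is ours):

```python
# Hedged check: ask the local Ollama server which models are installed.
# Uses Ollama's documented /api/tags endpoint; returns None if unreachable.
import json
import urllib.error
import urllib.request

def list_local_models(base_url="http://localhost:11434"):
    """Return model names from Ollama's /api/tags endpoint, or None if unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=3) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError):
        return None

models = list_local_models()
print(models if models is not None else "Ollama server not reachable")
```

After Step 3 you should see `llama3.2` in the returned list.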

Step 5: Install and Run OpenWebUI (15 min)

Now, we will install and start OpenWebUI using Docker.

  1. Pull the OpenWebUI Docker image:
    docker pull ghcr.io/open-webui/open-webui:main
    
  2. Run OpenWebUI with the following command:
    docker run -d --name openwebui -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui-data:/app/backend/data --restart unless-stopped ghcr.io/open-webui/open-webui:main
    
  3. Verify that OpenWebUI is running:
    docker ps
    
    If you see a container named openwebui, it is running.

Step 6: Access OpenWebUI (5 min)

Now, open the user interface in a web browser.

  1. Go to http://localhost:3000 in your browser.
  2. Create an account and log in.
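Before opening the browser, you can confirm the container is answering HTTP requests. This is a minimal sketch (the function name `webui_up` is ours) that probes the port we mapped in Step 5:

```python
# Hedged check: is OpenWebUI answering on the host port mapped in Step 5?
# Returns False (rather than crashing) if nothing is listening yet.
import urllib.error
import urllib.request

def webui_up(url="http://localhost:3000"):
    """Return True if OpenWebUI answers an HTTP request at the given URL."""
    try:
        with urllib.request.urlopen(url, timeout=3) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

print("OpenWebUI is up" if webui_up() else "OpenWebUI not reachable yet")
```

The container can take a minute to initialize on first start, so a `False` right after `docker run` does not necessarily mean something is wrong.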

Step 7: Configure OpenWebUI to Use Ollama (5 min)

  1. Go to Settings → LLM Provider.
  2. Select Ollama.
  3. Enter the model name: llama3.2.
  4. Save the settings.

Step 8: Test the AI Chatbot (5 min)

Now, let’s check if everything is working:

  1. Open the chat window.
  2. Type a message, such as:
    What is AI?
    
  3. If the AI responds, the setup is complete!
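The same test can be run against Ollama directly, bypassing the UI. This sketch posts a one-shot prompt to Ollama's documented `/api/generate` endpoint (`ask_llama` is a hypothetical helper name, and it assumes the `llama3.2` model from Step 3 is installed):

```python
# Hedged sketch: send one prompt to Ollama's /api/generate endpoint and
# return the reply text, or None if the server is not reachable.
import json
import urllib.error
import urllib.request

def ask_llama(prompt, model="llama3.2", base_url="http://localhost:11434"):
    """POST a non-streaming generate request to Ollama; None if unreachable."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        f"{base_url}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=60) as resp:
            return json.load(resp).get("response")
    except (urllib.error.URLError, OSError):
        return None

reply = ask_llama("What is AI?")
print(reply if reply is not None else "Ollama not reachable")
```

If this prints a model reply but the chat window does not respond, the problem is in the OpenWebUI configuration (Step 7) rather than in Ollama itself.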

Conclusion

By following this tutorial, you have successfully:

✅ Installed Docker and Ollama
✅ Downloaded and run Llama 3.2
✅ Installed and configured OpenWebUI
✅ Connected OpenWebUI to Ollama
✅ Tested the chatbot

You now have a fully functional local AI chatbot running securely on your machine! 🚀
