Saturday, 15 March 2025

iMMAi - Set up using Local LLM Ollama, and OpenwebUI [Docker Image]

 

Here’s a step-by-step guide to setting up a local Ollama LLM with the Llama 3.2 model and using OpenWebUI as the interface. This guide is designed for iMMbiZSofTians, so I'll keep it simple.


Step 1: Install Docker

We will use Docker to run OpenWebUI easily.

  1. Download and install Docker Desktop from https://www.docker.com/products/docker-desktop/.
  2. After installation, open Docker Desktop and make sure it is running. You can verify it from a terminal as shown below.
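
A quick sanity check (assuming Docker Desktop added the docker command to your PATH, which it does by default):

    docker --version
    docker info

If docker info prints server details without errors, Docker is up and running.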

Step 2: Install Ollama

Ollama is the tool that helps run LLM models locally.

  1. Download and install Ollama from https://ollama.com/download.

  2. Open a terminal (Command Prompt / PowerShell / Terminal) and type:

    ollama --version
    

    If it shows a version number, that means Ollama is installed correctly.


Step 3: Download Llama3.2 Model

Now, let's download the Llama 3.2 model.

  1. Open a terminal and run:

    ollama pull llama3.2
    

    This will download the Llama 3.2 model from the Ollama model library.

For iMMbizsoft, a custom Modelfile can be used instead, as sketched below.
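
Here is a minimal Modelfile sketch for that approach. The custom model name (immai), the system prompt, and the temperature value are illustrative assumptions, not an official iMMbizsoft configuration:

    # Modelfile - builds a custom model on top of Llama 3.2 (name and prompt are examples)
    FROM llama3.2
    PARAMETER temperature 0.7
    SYSTEM """You are iMMAi, a helpful assistant for iMMbizsoft."""
    

Build and run the custom model with:

    ollama create immai -f Modelfile
    ollama run immai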


Step 4: Run Ollama API

Once the model is downloaded, start Ollama so it can be used by OpenWebUI.

  1. In the terminal, run:

    ollama serve
    

    This starts the Ollama server and exposes its API at http://localhost:11434 (the default port). If the Ollama desktop app is already running, the server may already be up, which you can confirm with the test request below.
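
To confirm the API is reachable, send a quick test request to the standard Ollama generate endpoint (the prompt text is just an example):

    curl http://localhost:11434/api/generate -d '{"model": "llama3.2", "prompt": "Say hello in one sentence.", "stream": false}'
    

You should get back a JSON response containing the model's reply.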


Step 5: Run OpenWebUI using Docker

Now, let’s set up OpenWebUI, which provides a user-friendly chat interface.

  1. Pull the OpenWebUI Docker image:

    docker pull ghcr.io/open-webui/open-webui:main
    
  2. Run OpenWebUI and connect it to Ollama:

    docker run -d --name openwebui -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui-data:/app/backend/data --restart unless-stopped ghcr.io/open-webui/open-webui:main
    
    • -d → Runs the container in the background.
    • --name openwebui → Names the container openwebui.
    • -p 3000:8080 → Maps host port 3000 to the container's internal UI port 8080.
    • --add-host=host.docker.internal:host-gateway → Lets the container reach the Ollama API running on your host machine.
    • -v open-webui-data:/app/backend/data → Stores chats and settings persistently in a Docker volume.
    • --restart unless-stopped → Restarts the container automatically unless you stop it yourself.
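
To confirm the container started correctly, you can check it with standard Docker commands:

    docker ps --filter name=openwebui
    docker logs -f openwebui
    

The first command should list the openwebui container as Up; the second streams its startup logs (press Ctrl+C to stop following).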

Step 6: Access OpenWebUI

  1. Open a web browser.
  2. Go to http://localhost:3000.
  3. You should see the OpenWebUI login page.
  4. Create an account and log in.
  5. Go to Settings → Connections and make sure Ollama is set as the model provider (it typically points to http://host.docker.internal:11434).
  6. Now, you can chat with Llama 3.2 locally!
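
If the login page does not load, a quick check that something is listening on port 3000 (just a basic HTTP probe, nothing OpenWebUI-specific):

    curl -I http://localhost:3000
    

An HTTP response (for example 200 OK) means the UI is up; a connection error usually means the container is not running or the port mapping is wrong.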

Step 7: Test the Model

To check if everything is working, type:

ollama run llama3.2

If you see a response, your local AI model is ready!
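
You can also pass a one-shot prompt directly on the command line instead of starting an interactive chat (the prompt here is just an example):

    ollama run llama3.2 "Explain in one sentence what a Docker volume is."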


Final Summary

✅ Installed Docker
✅ Installed Ollama
✅ Downloaded Llama 3.2 model
✅ Started Ollama API
✅ Pulled and ran OpenWebUI
✅ Accessed OpenWebUI in the browser
✅ Connected OpenWebUI to Ollama

Now you have a fully functional local AI chatbot that runs entirely on your machine, with no internet connection needed once the model is downloaded! 🚀

For a deeper dive 🦘: https://docs.openwebui.com/getting-started/

Let me know if you need help! 😊

