Saturday, 15 March 2025

OpenWebUI - Beginner's Tutorial

 

OpenWebUI Tutorial: Setting Up and Using Local Llama 3.2 with Ollama

Introduction

This tutorial provides a step-by-step guide to setting up OpenWebUI as a user-friendly interface for the Llama 3.2 model using Ollama. By the end of this tutorial, you will have a fully functional local AI chatbot running on your computer.

Prerequisites

  • Basic knowledge of command-line usage
  • Installed Docker Desktop
  • Installed Ollama

Tutorial Duration: 1 Hour

Step 1: Install and Set Up Docker (10 min)

Docker allows us to run OpenWebUI easily.

  1. Download Docker Desktop from https://www.docker.com/get-started.
  2. Install Docker and ensure it is running.
  3. Open a terminal (Command Prompt/PowerShell/Terminal) and verify installation:
    docker --version
    
    If a version number appears, Docker is installed correctly.

Step 2: Install Ollama (5 min)

Ollama is required to run Llama 3.2 locally.

  1. Download and install Ollama from https://ollama.com/download.
  2. Open a terminal and check if it is installed correctly:
    ollama --version
    

Step 3: Download Llama 3.2 Model (10 min)

We now download the Llama 3.2 model for local use.

  1. Open a terminal and run:
    ollama pull llama3.2
    
  2. Wait for the download to complete (this may take some time depending on internet speed).

Step 4: Start Ollama (5 min)

Once the model is downloaded, start Ollama.

  1. Run the following command:
    ollama serve
    
  2. This starts the local Ollama server on port 11434. If you installed the Ollama desktop app, the server may already be running in the background; in that case this command reports that the address is already in use, and you can simply skip this step.
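OpenWebUI will talk to this server over Ollama's HTTP API. You can hit the same API yourself as a quick sanity check. Here is a minimal Python sketch using only the standard library (the helper names are mine, and it assumes the default port 11434):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"

def build_generate_request(model: str, prompt: str) -> dict:
    # Payload for Ollama's /api/generate endpoint.
    # stream=False returns the whole answer in a single JSON object
    # instead of a stream of partial responses.
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    payload = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires the server running and the llama3.2 model pulled):
# print(ask("llama3.2", "What is AI?"))
```

If this returns an answer, the server side of your setup is working before you even touch OpenWebUI.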

Step 5: Install and Run OpenWebUI (15 min)

Now, we will install and start OpenWebUI using Docker.

  1. Pull the OpenWebUI Docker image:
    docker pull ghcr.io/open-webui/open-webui:main
    
  2. Run OpenWebUI with the following command:
    docker run -d --name openwebui -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui-data:/app/backend/data --restart unless-stopped ghcr.io/open-webui/open-webui:main
    
  3. Verify that OpenWebUI is running:
    docker ps
    
    If you see a container named openwebui, it is running.
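If you prefer docker compose over a long docker run command, the same container can be described declaratively. This is an equivalent sketch of the command above, not an official file (the image serves its UI on port 8080 inside the container, and the extra host entry lets it reach Ollama running on your machine):

```yaml
services:
  openwebui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: openwebui
    ports:
      - "3000:8080"   # host port 3000 -> container port 8080
    volumes:
      - open-webui-data:/app/backend/data
    extra_hosts:
      - "host.docker.internal:host-gateway"
    restart: unless-stopped

volumes:
  open-webui-data:
```

Save it as docker-compose.yml and start it with: docker compose up -d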

Step 6: Access OpenWebUI (5 min)

Now, open the user interface in a web browser.

  1. Go to http://localhost:3000 in your browser.
  2. Create an account and log in.

Step 7: Configure OpenWebUI to Use Ollama (5 min)

  1. Go to Settings → LLM Provider.
  2. Select Ollama.
  3. Enter the model name: llama3.2.
  4. Save the settings.

If OpenWebUI cannot reach Ollama, set the Ollama base URL to http://host.docker.internal:11434 (inside the container, localhost refers to the container itself, not to your machine).

Step 8: Test the AI Chatbot (5 min)

Now, let’s check if everything is working:

  1. Open the chat window.
  2. Type a message, such as:
    What is AI?
    
  3. If the AI responds, the setup is complete!
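If the chat does not respond, a common cause is that the model name configured in OpenWebUI does not exactly match what Ollama has downloaded. You can list the installed models programmatically with Ollama's /api/tags endpoint. A small standard-library sketch (the helper names are mine):

```python
import json
import urllib.request

def model_names(tags_json: dict) -> list:
    # /api/tags returns {"models": [{"name": "llama3.2:latest", ...}, ...]}
    return [m["name"] for m in tags_json.get("models", [])]

def list_installed(url: str = "http://localhost:11434") -> list:
    with urllib.request.urlopen(f"{url}/api/tags") as resp:
        return model_names(json.load(resp))

# Usage (requires the Ollama server running):
# print(list_installed())
```

Whatever names this prints (for example llama3.2:latest) are the names OpenWebUI can use.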

Conclusion

By following this tutorial, you have successfully:

✅ Installed Docker and Ollama
✅ Downloaded and run Llama 3.2
✅ Installed and configured OpenWebUI
✅ Connected OpenWebUI to Ollama
✅ Tested the chatbot

You now have a fully functional local AI chatbot running securely on your machine! 🚀

iMMAi - Set up using Local LLM Ollama and OpenWebUI [Docker Image]

 

Here’s a step-by-step guide to setting up a local Ollama LLM with the Llama 3.2 model and using OpenWebUI as an interface. This guide is designed for iMMbiZSofTians, so I'll keep it simple.


Step 1: Install Docker

We will use Docker to run OpenWebUI easily.

  1. Download and install Docker Desktop from https://www.docker.com/get-started.
  2. After installation, open Docker and make sure it is running.

Step 2: Install Ollama

Ollama is the tool that helps run LLM models locally.

  1. Download and install Ollama from https://ollama.com/download.

  2. Open a terminal (Command Prompt / PowerShell / Terminal) and type:

    ollama --version
    

    If it shows a version number, that means Ollama is installed correctly.


Step 3: Download Llama3.2 Model

Now, let's download the Llama 3.2 model.

  1. Open a terminal and run:

    ollama pull llama3.2
    

    This will download the Llama 3.2 model.

How I made iMMAi [local LLM]

 

How I Made iMMAi: A Legal AI Assistant

Introduction

iMMAi is a powerful local AI assistant specialized in Indian Company Laws and corporate regulations. This blog will guide you through the step-by-step process of creating iMMAi using Ollama and Docker.

Step 1: Install Ollama

Ollama is the tool that allows us to run large language models locally.

  1. Download and install Ollama from https://ollama.com/download.
  2. Verify the installation by running:
    ollama --version
    

Step 2: Install Docker Desktop

Docker is needed to containerize and manage OpenWebUI.

  1. Download and install Docker Desktop from https://www.docker.com/get-started.
  2. Open Docker and make sure it is running.

Step 3: Download the Llama 3.2 Model

Now, we will pull the Llama 3.2 model, which serves as the base for iMMAi.

  1. Open a terminal and run:
    ollama pull llama3.2
    
  2. Verify the model is downloaded:
    ollama run llama3.2
    

Step 4: Create a Custom Modelfile for iMMAi

Now, we will customize Llama 3.2 to specialize in Indian legal and corporate regulations.

  1. Create a new file named Modelfile and add the following content:
    FROM llama3.2
    SYSTEM """Your name is iMMAi! You are a very clever Legal Assistant and Chartered Accountant 
    specialized in Indian Company Laws and corporate regulations. You know everything about company registration and financial aspects. 
    You are succinct and informative. Search only for official legal and corporate regulations in India.
    Do not include foreign laws or unrelated information.
    Provide a **brief summary** in 2-3 sentences by default."""
    PARAMETER temperature 0.1
    

Step 5: Create the iMMAi Model

Now, we will create the iMMAi model using the Modelfile.

  1. Open a terminal and run:
    ollama create iMMAi -f Modelfile
    
  2. Check if the model is created:
    ollama list
    
    If you see iMMAi in the list, the model has been successfully created.

Step 6: Test iMMAi

Finally, let’s run and test our custom AI assistant.

  1. Run iMMAi in the terminal:
    ollama run iMMAi
    
  2. Ask it a legal or corporate question, such as:
    How do I register a private limited company in India?
    
  3. If the response is relevant and based on Indian corporate laws, your AI assistant is ready! 🚀
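Besides the interactive terminal, you can query iMMAi from your own scripts through Ollama's chat API. A minimal standard-library sketch (the helper names are mine; it assumes the default port 11434):

```python
import json
import urllib.request

def build_chat_request(model: str, question: str) -> dict:
    # Payload for Ollama's /api/chat endpoint; stream=False returns
    # one JSON object containing the assistant's full reply.
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
        "stream": False,
    }

def ask_immai(question: str, url: str = "http://localhost:11434") -> str:
    data = json.dumps(build_chat_request("iMMAi", question)).encode()
    req = urllib.request.Request(
        f"{url}/api/chat",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

# Usage (requires the Ollama server running and the iMMAi model created):
# print(ask_immai("How do I register a private limited company in India?"))
```

Because the system prompt is baked into the iMMAi model itself, every caller gets the same legal-assistant behaviour without repeating the prompt.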

Conclusion

In this blog, we successfully:

✅ Installed Ollama and Docker
✅ Downloaded and ran Llama 3.2
✅ Created a custom legal AI assistant (iMMAi)
✅ Tested iMMAi for legal and corporate queries

You now have a fully functional local AI legal assistant that can help with Indian corporate regulations. 🎯
