Techalicious Academy / 2026-02-11-openwebui

(Visit our meetup for more great tutorials)

OPENWEBUI - YOUR LOCAL CHATGPT

What We're Building Tonight

You've used ChatGPT. Maybe you've used Claude. They're slick, they're fast, they work. But every message you send leaves your computer, travels to someone else's servers, and gets processed on hardware you don't own. Someone else sees your prompts. Someone else sets the rules about what you can and can't ask.

Tonight we're building the same experience, but on your own machine.

OpenWebUI is an open source web interface that looks and feels like ChatGPT. It connects to Ollama, which runs AI models locally on your hardware. The result is a private, powerful chat interface where you control everything.

Why Bother?

Three reasons.

First, privacy. Nothing leaves your network. Your conversations, your documents, your questions stay on your machine. No terms of service. No data harvesting.

Second, freedom. No content filters you didn't choose. No "I can't help with that" when you're asking a perfectly reasonable question. You pick the model, you set the rules.

Third, cost. After the initial setup, there's no monthly subscription. No per-token charges. Run it as much as you want. The only cost is the electricity your Mac uses.

What OpenWebUI Actually Is

OpenWebUI is a web application that runs on your computer. You open it in your browser at localhost:3000 and it looks almost identical to ChatGPT. You get a chat interface, conversation history, the ability to upload files, and a model selector at the top.
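As a preview of what "runs on your computer" means in practice, the whole interface comes up with one Docker command. This sketch follows the invocation published in the OpenWebUI README; verify the image tag against the current docs before running it:

```shell
# A sketch of the OpenWebUI launch command (based on the project's README):
#   -p 3000:8080                 -> serve the UI on localhost:3000
#   --add-host=...:host-gateway  -> lets the container reach Ollama on the host
#   -v open-webui:/app/...       -> persists chats and settings across restarts
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

Don't worry about memorizing this; we'll walk through it step by step during the install.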

Under the hood, it talks to Ollama through an API. When you type a message, OpenWebUI sends it to Ollama, which runs the AI model on your hardware and streams the response back.

The architecture looks like this:

+------------------------------------------+
|  Your Mac                                |
|                                          |
|  Browser -----> OpenWebUI -----> Ollama  |
|  (localhost)    (Docker)         (local) |
+------------------------------------------+

Everything stays on your machine. The browser talks to OpenWebUI, which talks to Ollama. No internet required once everything is installed.
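If you're curious what that API traffic actually looks like, here's a sketch of a request you could send to Ollama yourself once it's running. 11434 is Ollama's default port, and the model name is just an example; use one you've pulled:

```shell
# Talk to Ollama's HTTP API directly (requires Ollama running locally).
# OpenWebUI sends requests shaped much like this on your behalf.
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

OpenWebUI is doing the same kind of call under the hood, just wrapped in a friendlier front end.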

What This Tutorial Covers

By the end of tonight, you'll know how to:

  1. Install and run OpenWebUI with Docker
  2. Connect it to Ollama and pull models
  3. Navigate the interface and start chatting
  4. Create custom AI agents in the Workspace
  5. Write system directives that shape model behavior
  6. Understand and tweak parameters like temperature and top-p
  7. Use vision models to analyze images
  8. Troubleshoot common problems
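To give a taste of items 5 and 6: Ollama lets you bake a system directive and sampling parameters into a custom model with a Modelfile. A minimal sketch, where the base model and parameter values are only examples:

```
# Modelfile sketch -- base model and values are illustrative
FROM llama3.2
SYSTEM "You are a concise assistant who answers in plain English."
PARAMETER temperature 0.7
PARAMETER top_p 0.9
```

You'd build it with `ollama create my-assistant -f Modelfile`. OpenWebUI's Workspace exposes the same knobs through the UI, which is how we'll do it tonight.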

We'll also walk through several models, talk about the hardware you need to run them, and discuss when you'd pick one over another.

What You'll Need

Hardware:

Apple Silicon Mac (M1, M2, M3, or M4)
16GB unified memory minimum, 32GB recommended
About 30GB free disk space for models

Software:

Colima + Docker CLI (we'll install both)
Ollama (we'll install it)
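If you want a head start, everything above installs from Homebrew. A sketch, assuming a standard brew setup; formula names are current as of writing, so check `brew info` if in doubt:

```shell
# Install the container runtime, Docker CLI, and Ollama
brew install colima docker ollama

# Start the Docker runtime (Colima's defaults are fine for this tutorial)
colima start

# Start the Ollama server (runs in the foreground; use a second terminal)
ollama serve
```

We'll do this together during the session, so there's no need to run it beforehand.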

If you were at our Claude Code session two weeks ago, you probably have Ollama already. Good. You're ahead of the game.

What This Is Not

This is a casual walkthrough, not a certification course. We're going to set things up together, poke around the interface, and have some fun with local AI models.

If something breaks on your specific machine, we'll debug it together or you can hit us up after. But the goal tonight is to get everyone from zero to a working local ChatGPT setup.

Let's Get Started

Next up: we'll check your hardware and make sure you've got the right specs before we install anything.