Techalicious Academy / 2026-02-11-openwebui


WORKSPACE - CREATING CUSTOM MODELS

This is where OpenWebUI goes from "nice ChatGPT clone" to something genuinely more powerful. The Workspace lets you create custom model configurations that act like specialized AI agents.

Think of it this way. The base models from Ollama are blank slates. A custom model is that same brain, but with a job description, a personality, and specific settings tuned for a task.

Accessing the Workspace

Click "Workspace" in the sidebar. You'll see three tabs:

Models:      Custom model configurations
Knowledge:   Document collections for reference (RAG)
Prompts:     Reusable prompt templates

We're going to focus on Models for now.

Creating Your First Custom Model

Click the "+" button or "Create a Model" in the Models tab.

You'll see a form with several sections. Let's walk through each one.

The Basics

Name:

Give it a descriptive name. Something like "Code Reviewer" or
"Writing Assistant" or "Explain Like I'm 5." This is what shows
up in the model dropdown when you're chatting.

Model ID:

A unique identifier, auto-generated from the name. You can
customize it if you want something shorter.

Description:

A brief note about what this model does. Shows up when browsing
your models. Helps you remember why you created it six months
from now.

Avatar:

Upload a custom image. Purely cosmetic but nice for visual
identification. Supports animated GIFs if you're feeling fancy.

Selecting the Base Model

This is the core decision. The Base Model dropdown shows every model available through your Ollama connection. Whatever you select becomes the "brain" of your custom model.

The custom model doesn't create a new AI. It wraps an existing model with your specific instructions and settings. So "Code Reviewer" might use qwen3:30b-a3b as its base, but with a system prompt that tells it to focus on code quality and security issues.
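To make the "wrapper, not a new AI" idea concrete, here's a minimal sketch of what a custom model amounts to: a config record pointing at a base model, plus your instructions. (The field names below are illustrative, not OpenWebUI's actual schema.)

```python
from dataclasses import dataclass, field

@dataclass
class CustomModel:
    name: str               # what shows up in the model dropdown
    base_model: str         # which Ollama model does the thinking
    system_prompt: str      # the "job description"
    capabilities: list = field(default_factory=list)

# Same brain, different wrapper:
reviewer = CustomModel(
    name="Code Reviewer",
    base_model="qwen3:30b-a3b",
    system_prompt="Focus on code quality and security issues.",
)
```

No new weights are downloaded and no training happens. Deleting the custom model just deletes the wrapper; the base model is untouched.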

Pick based on the task:

General conversation, reasoning:  qwen3:30b-a3b
Long documents, deep analysis:    qwen3-next (if you have 64GB of RAM)
Tasks needing image analysis:     qwen3-vl or mistral-small3.2
Fast, lightweight responses:      qwen3-vl:4b

The System Prompt

This is the most important field. The system prompt defines who this model is and how it behaves. Every conversation with this custom model starts with your system prompt silently loaded before the user says anything.

We'll cover system prompts in depth in the next chapter, but here's a quick example:

You are a code reviewer. When the user shares code, analyze it for:
bugs, security issues, performance problems, and readability.
Be direct and specific. Point to exact lines. Suggest fixes with
code examples. If the code looks good, say so briefly.

The model reads this before every conversation. It shapes everything that follows.
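Under the hood, "silently loaded" just means the system prompt travels as the first message of every request to the backend. A rough sketch of the payload, assuming Ollama's /api/chat message format (the prompt text is the Code Reviewer example above, shortened):

```python
# What OpenWebUI effectively sends when you chat with a custom model:
# the system prompt rides along before anything the user typed.
payload = {
    "model": "qwen3:30b-a3b",
    "messages": [
        {"role": "system",
         "content": "You are a code reviewer. Analyze code for bugs, "
                    "security issues, performance, and readability."},
        {"role": "user",
         "content": "def load(path): return eval(open(path).read())"},
    ],
    "stream": False,
}
```

The user never sees the system message in the chat window, but the model sees it on every turn.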

OpenWebUI supports dynamic variables in system prompts using Jinja2 syntax:

{{CURRENT_DATE}}     Inserts today's date
{{CURRENT_TIME}}     Inserts current time
{{USER_NAME}}        Inserts the logged-in user's name

So you could write:

Today is {{CURRENT_DATE}}. The user's name is {{USER_NAME}}.
Greet them by name when appropriate.
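The substitution itself is simple string templating. Here's a stand-in sketch of what happens server-side before the prompt reaches the model (the real OpenWebUI implementation may differ in details):

```python
from datetime import datetime

def render_prompt(template: str, user_name: str) -> str:
    # Replace each supported variable with its current value.
    now = datetime.now()
    values = {
        "{{CURRENT_DATE}}": now.strftime("%Y-%m-%d"),
        "{{CURRENT_TIME}}": now.strftime("%H:%M"),
        "{{USER_NAME}}": user_name,
    }
    for placeholder, value in values.items():
        template = template.replace(placeholder, value)
    return template

rendered = render_prompt(
    "Today is {{CURRENT_DATE}}. The user's name is {{USER_NAME}}.",
    "Sam",
)
```

Because the variables are resolved per conversation, a prompt with {{CURRENT_DATE}} stays current without you ever editing the model.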

Prompt Suggestions (Starter Questions)

Below the system prompt, you can add prompt suggestions. These are clickable chips that appear when someone starts a new conversation with this model. They give the user ideas for how to use it.

For a Code Reviewer model, you might add:

"Review this Python function for security issues"
"What are the performance bottlenecks in this code?"
"Suggest a cleaner way to write this"

For an Explain Like I'm 5 model:

"Explain how the internet works"
"What is quantum computing?"
"Why is the sky blue?"

These aren't required, but they make the model feel polished and help users who aren't sure what to ask.

Capabilities

Below the prompt setup, you'll see toggle switches for capabilities:

Vision:           Enable image analysis (needs a vision base model)
Web Search:       Let the model search the web for current info
Image Generation: Enable image creation (needs DALL-E or similar)
File Upload:      Allow document uploads with context injection

Toggle these based on what your custom model needs. A code reviewer probably doesn't need image generation. A research assistant probably wants web search.

Knowledge Bindings

You can attach Knowledge collections to a custom model. When you do, the model automatically has access to those documents in every conversation. No need for the user to upload or reference anything.

This is powerful for things like:

A company FAQ bot with your docs pre-loaded
A study assistant with your textbook chapters attached
A coding helper with your project's documentation bound

We'll cover Knowledge in more detail later, but know that this is where you connect them to specific models.
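The mechanism behind a Knowledge binding is retrieval: score the stored document chunks against the user's question and inject the best matches into the prompt. Here's a toy sketch of that idea. Real RAG uses vector embeddings and a vector database; plain word overlap stands in here to keep the example dependency-free.

```python
import re

def words(text: str) -> set:
    # Lowercased word set with punctuation stripped.
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(chunks: list, question: str) -> str:
    # Return the chunk sharing the most words with the question.
    return max(chunks, key=lambda c: len(words(question) & words(c)))

faq = [
    "Refunds are processed within 5 business days.",
    "Support hours are 9am to 5pm, Monday through Friday.",
]
best = retrieve(faq, "What are your support hours?")
prompt = f"Answer using this context:\n{best}"
```

The model then answers from the injected context instead of (or in addition to) its training data, which is why a bound FAQ bot can quote your actual policies.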

Tool Bindings

Similar to Knowledge, you can attach specific Tools to a model. Tools let the model interact with external systems or perform specific actions like running calculations or executing code.

OpenWebUI comes with some built-in tools and you can create custom ones. For now, just know the option exists.
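For a feel of what a custom tool looks like, here's the rough shape: a Python class whose methods, with type hints and docstrings, become actions the model can call. (The exact scaffolding varies by OpenWebUI version; treat this as a sketch, not a drop-in tool file.)

```python
class Tools:
    """A minimal tool class: each method is one callable action."""

    def add_numbers(self, a: float, b: float) -> float:
        """Add two numbers and return the result."""
        return a + b
```

The type hints and docstring matter: they're what tells the model what the tool does and what arguments it expects.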

Saving and Using Your Model

Click Save. Your custom model now appears in the model dropdown alongside the base Ollama models. Select it, start chatting, and everything you configured takes effect automatically.

You can create as many custom models as you want. Same base model, different system prompts, different parameters. One for code, one for writing, one for brainstorming, one for explaining things to your grandmother.

Example: Building a Cooking Assistant

Let's build one together to see the full flow.

Name:         Home Chef
Base Model:   qwen3:30b-a3b
System Prompt:

You are an experienced home cook who helps people with recipes,
substitutions, and cooking techniques. You focus on practical,
everyday cooking, not restaurant-level complexity.

When suggesting recipes, always include approximate prep time
and cooking time. Assume the user has a standard home kitchen.
If they mention dietary restrictions, respect those in every
suggestion going forward.

Keep responses conversational. Nobody wants to read an essay
when they just want to know how long to boil an egg.

Starter Questions:

"What can I make with chicken thighs and rice?"
"I need a quick weeknight dinner for two"
"How do I make a good pasta sauce from scratch?"
"What's a good substitute for heavy cream?"

Save that and select "Home Chef" from the dropdown. Start chatting about dinner. Notice how the responses are shaped by the system prompt without you having to explain what you want every time.

Next Up

System prompts deserve their own chapter. Let's dig into what makes a good one and common mistakes to avoid.