Techalicious Academy / 2026-02-11-openwebui


SYSTEM DIRECTIVES

The system prompt is the single most powerful tool you have for shaping model behavior. It's the difference between a generic chatbot and something that feels purpose-built for your needs.

What a System Prompt Does

Every conversation has three types of messages:

System:     Instructions the model follows (invisible to the user)
User:       What the person types
Assistant:  The model's responses

The system prompt is loaded before the first user message. The model treats it as foundational context. Everything it generates is influenced by these instructions.
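In the OpenAI-compatible chat format that OpenWebUI's backends speak, those three roles appear as an ordered list of messages, with the system message first. A minimal sketch (the prompt text here is illustrative):

```python
# The three message roles as they appear in the OpenAI-compatible
# chat format. The content strings are illustrative.
messages = [
    {"role": "system", "content": "You are a concise writing editor."},
    {"role": "user", "content": "Fix this sentence: Their going home."},
    {"role": "assistant", "content": "They're going home."},
]

# The system message leads the list and shapes every turn after it.
roles = [m["role"] for m in messages]
print(roles)
```

Note that the user never sees the system message in the chat window, but the model receives it on every request.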

When you create a custom model in the Workspace and fill in the system prompt field, that text becomes the system message for every conversation using that model.

The Anatomy of a Good System Prompt

Good system prompts share some patterns. They tend to cover:

Who the model is:       Its role, expertise, personality
What it should do:      Primary tasks and behaviors
How it should respond:  Tone, length, format preferences
What it should avoid:   Things you don't want it to do

You don't need all four sections every time. Some models only need a sentence. Others benefit from a detailed briefing. Match the complexity of the prompt to the complexity of the task.

Example: Minimal Prompt

You are a concise writing editor. Fix grammar and improve clarity.
Keep the author's voice. Only change what needs changing.

That's 20 words and it works. The model knows its role, its task, and a key constraint (keep the author's voice).

Example: Detailed Prompt

You are a senior software engineer reviewing code submissions.
Your focus areas are security, performance, and maintainability,
in that priority order.

When reviewing code:
Reference specific line numbers when pointing out issues.
Classify each finding as Critical, Warning, or Suggestion.
Provide a corrected code snippet for every issue you find.
If the code is clean, say so in one sentence and move on.

Style guidelines:
Be direct. No filler phrases like "Great question!" or "Sure!"
Use technical terminology appropriate for experienced developers.
Format code examples in properly highlighted code blocks.

More detailed, but each line earns its place. Nothing is vague or redundant.

Common Mistakes

Writing too much. System prompts eat into your context window. A 2,000-word system prompt on a model with 4K context leaves little room for actual conversation. Keep it tight. If your system prompt is longer than a paragraph or two, question whether every line is pulling its weight.
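The context-window arithmetic is worth making concrete. A rough budget check, assuming ~1.3 tokens per English word (a common rule of thumb, not an exact count; real tokenizers vary by model):

```python
# Rough token-budget check for a system prompt.
# Assumption: ~1.3 tokens per English word, a common rule of thumb.
def remaining_context(prompt_words, context_tokens, tokens_per_word=1.3):
    prompt_tokens = int(prompt_words * tokens_per_word)
    return context_tokens - prompt_tokens

left = remaining_context(2000, 4096)
print(left)  # roughly 1500 tokens left for the whole conversation
```

And that remainder has to hold every user message AND every model response, so a long-winded system prompt starves the conversation quickly.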

Being too vague. "Be helpful and friendly" tells the model nothing it doesn't already default to. Be specific about what "helpful" means for your use case. Does it mean giving step-by-step instructions? Asking clarifying questions before answering? Keeping responses under three sentences?

Contradicting yourself. "Always give detailed explanations" followed by "Keep responses brief" puts the model in an impossible position. It'll try to satisfy both and do neither well. Pick a direction.

Using threats or emphasis hacks. "YOU MUST ALWAYS" or "NEVER EVER" or "THIS IS EXTREMELY IMPORTANT" doesn't work the way you'd think. Models respond to clear instructions, not capital letters. A calm "Always include source citations" works better than "YOU MUST NEVER FORGET TO CITE YOUR SOURCES!!!"

Dynamic Variables

OpenWebUI lets you inject live data into system prompts using double curly braces:

{{CURRENT_DATE}}     Today's date
{{CURRENT_TIME}}     The current time
{{USER_NAME}}        The logged-in user's name

Example:

You are a daily planning assistant. Today is {{CURRENT_DATE}}.
Help {{USER_NAME}} organize their priorities for the day.
Start by asking what's on their plate.

Now the model always knows the date and the user's name without you hardcoding anything.
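Under the hood this is plain string templating: the placeholders are swapped out before the prompt is sent to the model. An illustrative stand-in for that substitution (the variable names match the table above; the implementation here is a sketch, not OpenWebUI's actual code):

```python
from datetime import date

def render(prompt: str, user_name: str) -> str:
    # Illustrative stand-in for OpenWebUI's placeholder substitution.
    return (prompt
            .replace("{{CURRENT_DATE}}", date.today().isoformat())
            .replace("{{USER_NAME}}", user_name))

template = "Today is {{CURRENT_DATE}}. Help {{USER_NAME}} plan the day."
rendered = render(template, "Alex")
print(rendered)
```

The substitution happens per request, so the date and name are always current when the model sees the prompt.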

Prompt Templates with Slash Commands

Besides system prompts on custom models, OpenWebUI also has a Prompts feature in the Workspace. These are reusable prompt templates you can invoke in any conversation by typing / followed by the prompt name.

For example, you could create a prompt called "summarize" with the text:

Summarize the following text in three bullet points, focusing on
the key takeaways and any action items.

Then in any chat, type /summarize and it inserts that prompt. Paste your text after it and send.

This is different from system prompts: a system prompt is the model's permanent personality, while a slash prompt is a one-off instruction you insert only when you need it.
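The distinction shows up clearly in message terms: the system prompt rides along on every request, while a slash prompt simply expands into the body of a single user message. A sketch (both prompt texts are illustrative):

```python
# Sketch: system prompt persists; slash prompt expands into one message.
system_prompt = "You are a concise writing editor."
slash_prompt = ("Summarize the following text in three bullet points, "
                "focusing on the key takeaways and any action items.")

def one_off(text):
    # What /summarize effectively produces: the template is pasted
    # into this single user message, nothing more.
    return [{"role": "system", "content": system_prompt},
            {"role": "user", "content": slash_prompt + "\n\n" + text}]

msgs = one_off("Q3 revenue rose 4%. Ship the report by Friday.")
```

On the next message in the same chat, the system prompt is still there; the summarize instruction is not.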

Practical System Prompt Templates

Here are a few you can adapt:

Research Assistant:

You help users research topics thoroughly. When asked about a
subject, provide a structured overview covering key facts, common
misconceptions, and areas of active debate. Cite specifics when
possible. If you're uncertain about something, say so rather
than guessing. Ask clarifying questions if the topic is broad.

Socratic Tutor:

You teach through questions rather than answers. When the user
asks something, respond with a guiding question that helps them
think through the problem themselves. Only provide direct answers
if they're truly stuck after 2-3 exchanges. Adjust difficulty
based on their responses.

Meeting Notes Formatter:

You take raw meeting notes or transcripts and produce clean,
organized summaries. Structure the output as: Attendees, Key
Decisions, Action Items (with owners and deadlines if mentioned),
and Open Questions. Keep it concise. Use the original wording
when possible rather than paraphrasing.

Testing and Iterating

System prompts rarely work perfectly on the first try. The process is:

  1. Write your initial prompt
  2. Have a conversation and see how the model behaves
  3. Notice where it deviates from what you want
  4. Adjust the prompt to address those specific deviations
  5. Test again

This is normal. Even professionals iterate on prompts. Models follow statistical patterns rather than true reading comprehension, so a small wording change can make a big difference.
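That iteration loop can be made repeatable with a small checklist you rerun after every prompt change. A sketch, where ask_model is a hypothetical stand-in for whatever API call your setup uses (stubbed here so the example is self-contained):

```python
# Tiny regression checklist for prompt iteration.
# ask_model is a stand-in for your real API call; stubbed for illustration.
def ask_model(system_prompt, user_message):
    return "They're going home."  # canned response for the sketch

checks = [
    # (user message, predicate the response should satisfy)
    ("Fix: Their going home.", lambda r: "They're" in r),
    ("Fix: Their going home.", lambda r: len(r.split()) < 10),  # concise
]

system_prompt = "You are a concise writing editor. Keep edits minimal."
results = [pred(ask_model(system_prompt, msg)) for msg, pred in checks]
print(all(results))
```

Each time you tweak the prompt to fix one deviation, rerunning the checklist tells you whether the tweak broke a behavior you had already gotten right.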

Next Up

Now let's look at the other knobs you can turn: the generation parameters that control HOW the model thinks, not just WHAT it thinks about.