TROUBLESHOOTING
Things will break. It's software. Here are the most common problems you'll hit and how to fix them.
OpenWebUI Won't Load in Browser
You go to http://localhost:3000 and get a connection error.
First, check that the container is running:
docker compose ps
If you don't see open-webui listed, it's not running. Start it:
docker compose up -d
If it shows "Restarting" in a loop, check the logs:
docker compose logs --tail 50
Common causes:
Port 3000 is already in use by another application. Stop the
other app, or change the host port (the left-hand number) in
your docker-compose.yml:
ports:
  - "8080:8080"
Then restart:
docker compose down
docker compose up -d
Now access it at http://localhost:8080 instead.
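For reference, here is where that change lives in a typical docker-compose.yml (the service and image names below are the common defaults; check your own file). The left-hand number is the host port you browse to; the right-hand number is the port inside the container and shouldn't change:

```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "8080:8080"   # host port : container port
```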
No Models in the Dropdown
OpenWebUI loads but the model selector is empty.
This means it can't reach Ollama. Check these in order:
- Is Ollama running? Look for the Ollama icon in your Mac's menu bar. If it's not there, open the Ollama application from your Applications folder. Or check from Terminal:
curl http://localhost:11434/api/version
If this fails, Ollama isn't running. Launch it.
- Can OpenWebUI reach Ollama? For Docker installs, OpenWebUI connects through host.docker.internal:11434. Go to Admin Panel > Settings > Connections > Ollama and verify the URL is set to:
http://host.docker.internal:11434
Click the check mark to test the connection.
- Did you pull any models? Even if the connection works, you need models downloaded:
ollama list
If nothing shows up, you haven't pulled any models yet:
ollama pull qwen3:30b-a3b
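If you want a single command that covers the first check, something like this works from Terminal (the fallback message is just for readability; --max-time keeps it from hanging):

```shell
# Does Ollama answer on its default port? Prints the version JSON when
# it's running, a short hint when it's not.
curl -s --max-time 2 http://localhost:11434/api/version \
  || echo "Ollama is not answering on port 11434"
```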
Model Responds With Gibberish
If the model produces incoherent output, random characters, or repetitive nonsense, the most likely cause is a parameter issue.
Check your temperature setting. Anything above about 1.5 asks the model to be too random; lower it to around 0.7 and try again.
Also check if you've set conflicting parameters. Running Mirostat and Top P simultaneously can produce weird results. Use one approach or the other, not both.
If you're using a custom model with a very long system prompt, the prompt might be eating most of the context window. Try shortening the system prompt or increasing the context length.
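If you find yourself re-fixing the same sliders, one option is to bake sane defaults into a model via an Ollama Modelfile. This is a sketch: the base model and values are examples, and note it sets top_p without Mirostat, not both:

```shell
# Write a Modelfile that pins conservative sampling defaults.
# PARAMETER lines use Ollama's Modelfile syntax; values are examples.
cat > Modelfile <<'EOF'
FROM qwen3:30b-a3b
PARAMETER temperature 0.7
PARAMETER top_p 0.9
EOF
```

Then build it with ollama create qwen3-sane -f Modelfile and select the new model in OpenWebUI.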
Model Is Extremely Slow
Local inference is slower than cloud services. That's expected. But if it's unreasonably slow:
Check what else is running. Open Activity Monitor and look at
memory pressure. If it's in the red, your Mac is swapping to
disk and everything slows down.
Try a smaller model. If you're running qwen3-next (80B) on a
32GB Mac, it's going to crawl. Drop to qwen3:30b-a3b.
Close memory-hungry applications. Safari with 50 tabs, Docker
Desktop itself, and large applications all compete for memory.
Check that the model is using GPU. Run ollama ps while a model
is loaded. The PROCESSOR column should show GPU percentage. If
it says CPU, something is wrong with your Ollama installation.
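For reference, `ollama ps` output looks roughly like the sample below (illustrative only; real output is space-aligned rather than tab-separated, and with a live model you would run `ollama ps` itself and read the PROCESSOR column):

```shell
# Illustrative stand-in for `ollama ps` output, filtered down to the
# PROCESSOR column. "100% GPU" is what you want to see.
printf 'NAME\tID\tSIZE\tPROCESSOR\tUNTIL\nqwen3:30b-a3b\tabc123\t20 GB\t100%% GPU\t4 minutes\n' \
  | awk -F'\t' 'NR > 1 {print $4}'   # prints: 100% GPU
```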
Model Forgets Earlier Context
You're in a long conversation and the model starts forgetting things you discussed earlier or repeating questions you already answered.
This is a context window issue. The conversation has exceeded the model's context length and older messages have been pushed out.
Solutions:
Start a new conversation for a new topic. Don't try to have one
infinite-length chat.
Increase context length in the model parameters. But remember
this costs RAM.
Be more concise in your messages. Shorter messages mean more
messages fit in context.
Summarize the conversation periodically. Ask the model to
summarize the key points so far. That compressed summary then
serves as context going forward.
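A quick way to see how much of the window a prompt consumes: a rough heuristic for English text is about 4 characters per token (real tokenizers vary, so treat this as an estimate only):

```shell
# Estimate how many tokens a prompt uses: ~4 characters per token.
prompt="You are a meticulous assistant. Always answer formally and cite your sources."
chars=$(printf '%s' "$prompt" | wc -c)
echo "approx tokens: $((chars / 4))"
```

By that heuristic, an 8,000-character system prompt is roughly 2,000 tokens, which is half of a 4,096-token window before the conversation even starts.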
Upload Image But Model Ignores It
You uploaded an image but the model responds as if there's no image.
- Make sure you're using a vision model (qwen3-vl, mistral-small3.2)
- If using a custom model, check that Vision is enabled in the Capabilities toggles
- Make sure the image actually uploaded (you should see a thumbnail in the message area before sending)
- Some very large images might fail silently. Try a smaller or lower-resolution version.
Docker Takes Too Much Disk Space
Docker accumulates old images, stopped containers, and build cache over time. To see what's taking up space, run:
docker system df
To clean up:
docker system prune -a
This removes unused images, stopped containers, and build cache. Your open-webui data volume is safe: prune doesn't remove volumes unless you add the --volumes flag.
Updating OpenWebUI
To get the latest version, from the directory with your docker-compose.yml:
docker compose pull
docker compose up -d
That's it. Docker pulls the new image and recreates the container. Your data persists because it's in the named volume, not the container. You're just replacing the application code while keeping all your conversations, models, and settings.
Resetting Everything
If you want a completely fresh start:
docker compose down -v
Then run docker compose up -d again. The -v flag removes the
named volume too, which wipes everything: conversations,
settings, user accounts. Nuclear option.
Getting Help
OpenWebUI has an active community:
GitHub: github.com/open-webui/open-webui
Discord: Check the GitHub repo for the invite link
Docs: docs.openwebui.com
Or ask us at the next meetup. That's what we're here for.
Next Up
Let's wrap up with a quick reference card you can keep handy.