09-troubleshooting.txt
From: Running OpenClaw Locally with Ollama on Apple Silicon
TROUBLESHOOTING
=================
Everything that can go wrong, why it goes wrong, and how to fix it.
This section is organized by symptom, because when something breaks
you don't know the root cause yet. You know what you see.
"0/200k tokens" or TUI Shows No Response
------------------------------------------
This is the most reported issue. You type a message, hit enter, and
nothing happens. The token counter stays at zero. The cursor blinks.
Silence.
Root cause 1: Missing or wrong "api" field in the config.
The "api" field must be "ollama" (for native mode) or
"openai-responses" (for compatibility mode). If it's missing or
set to something else, OpenClaw sends requests in a format that
Ollama doesn't understand. Ollama ignores the malformed request
and sends nothing back. No error on either side.
Fix: Add "api": "ollama" to your provider config.
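As a sketch, a provider entry with the field in place (the surrounding
structure of openclaw.json is assumed here; only the "api" and
"baseUrl" values come from this guide):

```json
{
  "providers": {
    "ollama": {
      "api": "ollama",
      "baseUrl": "http://127.0.0.1:11434"
    }
  }
}
```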
Root cause 2: Mismatched baseUrl and api mode.
With api: "ollama", the baseUrl must be http://127.0.0.1:11434
(no /v1). With api: "openai-responses", it must be
http://127.0.0.1:11434/v1. Cross them and you get silence.
Fix: Match them correctly. See the configuration section.
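The pairing rule is easy to get backwards, so here it is encoded as a
tiny shell helper you can eyeball against your config. check_pairing
is purely illustrative and not part of OpenClaw or Ollama:

```shell
# Illustrative only -- not an OpenClaw command. Prints "ok" when the
# api mode and baseUrl follow the pairing rule, "mismatch" otherwise.
check_pairing() {
  case "$1|$2" in
    "ollama|http://127.0.0.1:11434")              echo ok ;;
    "openai-responses|http://127.0.0.1:11434/v1") echo ok ;;
    *)                                            echo mismatch ;;
  esac
}

check_pairing ollama http://127.0.0.1:11434        # -> ok
check_pairing ollama http://127.0.0.1:11434/v1     # -> mismatch
```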
Root cause 3: Ollama isn't running.
curl http://localhost:11434/
If that doesn't return "Ollama is running", start Ollama:
ollama serve
Or launch the Ollama app from Applications.
Root cause 4: The model isn't loaded.
ollama list | grep qwen2.5-coder
If the model doesn't appear, it wasn't pulled successfully.
Re-pull it:
ollama pull qwen2.5-coder:14b
"No API key found for provider ollama"
-----------------------------------------
OpenClaw's provider initialization requires a non-empty API key string
even though Ollama doesn't use one.
Fix: Set OLLAMA_API_KEY=ollama-local in your environment or add
"apiKey": "ollama-local" to the Ollama provider config in
openclaw.json. The value is ignored by Ollama but required by
OpenClaw.
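Putting both fixes from this page together, the provider entry might
look like this (the surrounding structure of openclaw.json is assumed;
the "apiKey" value is the placeholder from above, which Ollama never
reads):

```json
{
  "providers": {
    "ollama": {
      "api": "ollama",
      "baseUrl": "http://127.0.0.1:11434",
      "apiKey": "ollama-local"
    }
  }
}
```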
Gateway Port Conflict (EADDRINUSE)
------------------------------------
Error: "listen EADDRINUSE: address already in use :::18789"
Something else is using port 18789. Find out what:
lsof -i :18789
Common culprits:
A leftover clawdbot-gateway or moltbot-gateway service from before
the project was renamed. Check:
launchctl list | grep claw
Another OpenClaw instance. Kill it:
pkill -f openclaw
Or just use a different port:
OPENCLAW_GATEWAY_PORT=19001 openclaw gateway run --bind loopback
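If you'd rather script the port hunt than pick a number by hand, a
small loop over lsof does it (an illustrative sketch; only
OPENCLAW_GATEWAY_PORT comes from this guide):

```shell
# Find the first free port at or above 19001. lsof exits non-zero
# when nothing is listening on the port, which ends the loop.
port=19001
while lsof -i ":$port" >/dev/null 2>&1; do
  port=$((port + 1))
done
echo "Using port $port"
```

Then start the gateway with
OPENCLAW_GATEWAY_PORT=$port openclaw gateway run --bind loopback.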
"unauthorized: gateway token missing" (WebSocket 1008)
--------------------------------------------------------
The WebSocket connection to the gateway was rejected because no auth
token was provided or the token was wrong.
Fix:
openclaw doctor --generate-gateway-token
openclaw dashboard --no-open
The second command prints the authenticated URL with the token
embedded. Copy that URL to your browser. Since version 2026.2.19,
OpenClaw auto-generates and persists a gateway token at startup, so
this error mostly appears with older versions or corrupted configs.
Model Too Large for RAM
-------------------------
When the model exceeds available memory, macOS starts swapping to disk.
Symptoms: responses take minutes instead of seconds, fans spin up, the
entire system becomes unresponsive. Ollama doesn't crash; it just gets
painfully slow.
How to diagnose: Open Activity Monitor, click the Memory tab. If the
"Memory Pressure" graph is yellow or red, you're swapping.
Fixes, in order of preference:
1. Close memory-hungry apps. Each Chrome tab uses 100-300MB. Close
everything you're not actively using.
2. Reduce context window. Edit your Modelfile or config to set
num_ctx to 4096 or 8192 instead of the default.
3. Use a smaller quantization. Q3_K_M is about 7.3GB for 14B,
compared to 9GB for Q4_K_M:
ollama pull qwen2.5-coder:14b-q3_K_M
4. Use a smaller model entirely. Qwen3 8B at roughly 5GB fits
easily on 16GB machines:
ollama pull qwen3:8b
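For fix 2, the context window is set through a Modelfile. A minimal
sketch, assuming you want an 8K-context variant (the variant name
qwen2.5-coder-8k is made up for this example):

```
FROM qwen2.5-coder:14b
PARAMETER num_ctx 8192
```

Build it with ollama create qwen2.5-coder-8k -f Modelfile, then point
OpenClaw's model setting at the new tag.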
Agent Describes Actions Instead of Doing Them
------------------------------------------------
You ask the agent to list files and it says "I would run the ls
command to list your files..." instead of actually running ls and
showing results.
This means tool calling isn't working. The model is generating
conversational text instead of structured tool-call responses.
Root causes:
Wrong model. DeepSeek-R1 and some older models don't support tool
calling. Stick with qwen2.5-coder:14b.
Wrong api mode. Make sure "api" is set to "ollama" (not missing,
not something else).
reasoning: true in model config. Set it to false for Ollama models.
When true, OpenClaw sends a "developer" role message that confuses
the model.
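For the reasoning flag, a model entry with it disabled might look like
this sketch (the exact location of per-model settings in openclaw.json
is assumed here; only the reasoning: false value comes from this
guide):

```json
{
  "models": {
    "qwen2.5-coder:14b": {
      "reasoning": false
    }
  }
}
```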
macOS Permissions Aren't Taking Effect
-----------------------------------------
You toggled Full Disk Access or Accessibility in System Settings but
OpenClaw still can't access files or use certain tools.
The fix is embarrassingly simple: QUIT TERMINAL COMPLETELY and reopen
it. macOS does not apply permission changes to already-running terminal
sessions. Just closing the tab isn't enough. Fully quit the Terminal
app (Cmd+Q) and relaunch it.
This catches everyone at least once. It's a 45-minute debugging trap
because nothing in the error messages suggests "restart Terminal" as
the solution.
Ollama Crashes Mid-Conversation
----------------------------------
Ollama can crash due to memory pressure, macOS putting the process to
sleep, or bugs in specific model inference paths.
What to do:
1. OpenClaw's gateway maintains session state independently. Your
conversation history is preserved in
~/.openclaw/agents/<agentId>/sessions/
2. Restart Ollama:
pkill ollama
ollama serve &
3. Re-warm the model:
ollama run qwen2.5-coder:14b "ping" --verbose
4. Send your next message in OpenClaw. It reconnects automatically.
Cloud Fallback Is Happening
------------------------------
You configured Ollama but the logs show requests going to Anthropic
or OpenAI.
Diagnosis:
openclaw logs --follow
# Send a test message, then check logs for "anthropic" or "openai"
env | grep -iE "anthropic|openai|openrouter|gemini"
If any cloud API keys are set in your environment, OpenClaw will use
them instead of Ollama regardless of your JSON config. The priority
system puts cloud providers first.
Fix: Unset the environment variables:
unset ANTHROPIC_API_KEY
unset OPENAI_API_KEY
And remove them from your shell profile (~/.zshrc, ~/.bashrc) if
they're set there permanently.
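If you'd rather keep the keys in your profile for other tools, env -u
strips a variable from a single command's environment without touching
your shell session (a sketch; env -u is supported by the env shipped
with macOS and Linux):

```shell
# Simulate a key being set, then run a child command without it.
# `env -u NAME cmd` removes NAME from cmd's environment only.
export ANTHROPIC_API_KEY=sk-demo
env -u ANTHROPIC_API_KEY sh -c 'echo "${ANTHROPIC_API_KEY:-unset}"'   # -> unset
```

Chain multiple -u flags to hide several keys at once, e.g.
env -u ANTHROPIC_API_KEY -u OPENAI_API_KEY openclaw.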
+----------------------------------------------------------+
| THE NUCLEAR OPTION                                       |
|                                                          |
| If everything is broken beyond diagnosis:                |
|                                                          |
|   rm -rf ~/.openclaw                                     |
|   openclaw onboard --install-daemon                      |
|   # Re-enter Ollama config manually                      |
|                                                          |
| Warning: this also deletes saved sessions and config.    |
|                                                          |
| For Ollama specifically:                                 |
|                                                          |
|   pkill ollama                                           |
|   ollama serve &                                         |
|   ollama run qwen2.5-coder:14b "test"                    |
+----------------------------------------------------------+
The Full Diagnostic Command
-----------------------------
When in doubt, run the doctor:
openclaw doctor --fix
This checks everything: config validity, provider connectivity,
gateway status, dependency versions, file permissions, and known
issues. The --fix flag auto-remediates what it can. Start here
when something breaks and you're not sure what.