Techalicious Academy / 2026-02-24-openclaw-ollama


CONFIGURING OPENCLAW FOR LOCAL-ONLY OPERATION

This is where most people get stuck. The onboarding wizard creates a basic config, but for a reliable local setup, especially for a demo, manual configuration is more predictable and easier to troubleshoot.

The Configuration File

OpenClaw's main config lives at:

~/.openclaw/openclaw.json

Open it in your editor. If the wizard already created one, you'll see some content. Replace the relevant sections with this, or create it from scratch if it doesn't exist:

{
  "models": {
    "providers": {
      "ollama": {
        "baseUrl": "http://127.0.0.1:11434",
        "apiKey": "ollama-local",
        "api": "ollama",
        "models": [
          {
            "id": "qwen2.5-coder:14b",
            "name": "Qwen 2.5 Coder 14B",
            "reasoning": false,
            "input": ["text"],
            "cost": {
              "input": 0,
              "output": 0,
              "cacheRead": 0,
              "cacheWrite": 0
            },
            "contextWindow": 32768,
            "maxTokens": 8192
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "ollama/qwen2.5-coder:14b"
      }
    }
  },
  "gateway": {
    "bind": "loopback",
    "port": 18789,
    "auth": {
      "mode": "token",
      "token": "REPLACE-WITH-RANDOM-TOKEN"
    }
  },
  "tools": {
    "web": {
      "search": { "enabled": false },
      "fetch": { "enabled": true }
    }
  }
}

Generate a proper random token for the gateway auth:

openssl rand -hex 32

Copy the output and replace REPLACE-WITH-RANDOM-TOKEN in the config.
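A stray comma or missing brace in this file tends to surface as the same unhelpful silence as the misconfigurations described below, so it's worth validating the JSON before starting OpenClaw. A minimal sketch using Python's stdlib parser (shown against a temp file so it's self-contained; point it at ~/.openclaw/openclaw.json for the real check):

```shell
# Validate a JSON config with python3's stdlib parser; a parse error
# prints the offending line and exits non-zero. Demo uses a temp file;
# substitute ~/.openclaw/openclaw.json for the real check.
cfg=$(mktemp)
printf '%s\n' '{ "gateway": { "bind": "loopback", "port": 18789 } }' > "$cfg"
result=$(python3 -m json.tool "$cfg" > /dev/null 2>&1 && echo "config OK" || echo "config BROKEN")
echo "$result"
rm -f "$cfg"
```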

Critical Configuration Details

There are three fields that cause the most confusion. Get these wrong and OpenClaw silently fails with no useful error message.

THE "api" FIELD:

This tells OpenClaw how to talk to Ollama. There are two modes:

"api": "ollama"
  Uses Ollama's native /api/chat endpoint. This is the preferred mode
  because it supports streaming and tool calling simultaneously. When
  using this mode, the baseUrl must NOT have /v1 at the end.

  Correct:   "baseUrl": "http://127.0.0.1:11434"
  Wrong:     "baseUrl": "http://127.0.0.1:11434/v1"

"api": "openai-responses"
  Uses Ollama's OpenAI-compatible endpoint. When using this mode, the
  baseUrl MUST have /v1 at the end.

  Correct:   "baseUrl": "http://127.0.0.1:11434/v1"
  Wrong:     "baseUrl": "http://127.0.0.1:11434"

Mixing these up causes a silent failure: the model simply never
responds, and nothing useful is logged. Use "ollama" mode with
no /v1 path. It's the better option.
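Before wiring either mode into OpenClaw, you can confirm which endpoint styles your local Ollama actually serves. A quick sketch with curl against Ollama's standard listing endpoints (/api/tags on the native API, /v1/models on the OpenAI-compatible one); run it with Ollama up:

```shell
# Probe both endpoint styles to see which paths Ollama serves.
# /api/tags is the native API; /v1/models is the OpenAI-compatible one.
BASE="http://127.0.0.1:11434"
curl -s --max-time 2 "$BASE/api/tags" > /dev/null \
  && echo "native API reachable (baseUrl without /v1)" \
  || echo "native API not reachable"
curl -s --max-time 2 "$BASE/v1/models" > /dev/null \
  && echo "OpenAI-compat API reachable (baseUrl with /v1)" \
  || echo "OpenAI-compat API not reachable"
```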

THE "reasoning" FIELD:

Set this to false for Ollama models. When reasoning is true, OpenClaw
sends a "developer" role message that Ollama doesn't understand. The
result is a cryptic error or no response at all. Just set it to false.

THE "apiKey" FIELD:

Ollama doesn't require or use an API key. But OpenClaw's provider
initialization code requires a non-empty string to activate the
provider. The value "ollama-local" is a convention. You can use
anything. It gets sent to Ollama and ignored.

Environment Variables

Create the file ~/.openclaw/.env:

OLLAMA_API_KEY=ollama-local
OPENCLAW_DISABLE_BONJOUR=1
OPENCLAW_GATEWAY_PORT=18789

What each does:

OLLAMA_API_KEY: Same dummy value as in the JSON config. The env var
and the config value serve the same purpose. Belt and suspenders.

OPENCLAW_DISABLE_BONJOUR: This is important. By default, OpenClaw
broadcasts its presence on your local network using mDNS (Bonjour).
Every device on the same Wi-Fi can discover your OpenClaw instance.
In "full" mode, the broadcast even includes filesystem paths and
your username. Setting this to 1 disables broadcasting entirely.

OPENCLAW_GATEWAY_PORT: Matches the port in the JSON config.
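One way to create that file with safe permissions from the start is to set a restrictive umask before writing it, so the file is never world-readable even for a moment. A sketch (writing to a temp directory so it's self-contained; target ~/.openclaw/.env for real):

```shell
# umask 077 makes newly created files mode 600 (owner read/write only).
# Demo writes to a temp dir; for real use, target ~/.openclaw/.env.
dir=$(mktemp -d)
(
  umask 077
  cat > "$dir/.env" <<'EOF'
OLLAMA_API_KEY=ollama-local
OPENCLAW_DISABLE_BONJOUR=1
OPENCLAW_GATEWAY_PORT=18789
EOF
)
perms=$(ls -l "$dir/.env" | cut -c1-10)
echo "$perms"   # -rw-------
rm -rf "$dir"
```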

Making Sure No Cloud Keys Leak In

This is the sneaky one. OpenClaw has a provider priority system:

Anthropic > OpenAI > OpenRouter > Gemini > ... > Ollama

Ollama is at the bottom of the priority list. If ANY cloud API key is set anywhere in your environment, OpenClaw will use that cloud provider instead of your local Ollama, even if you explicitly configured Ollama in the JSON file. This includes keys in:

Your shell profile (~/.zshrc, ~/.bashrc)
Any .env file in the current directory
Environment variables set by other tools
macOS Keychain entries (less common but possible)

Check for leaked keys:

env | grep -iE "anthropic|openai|openrouter|gemini"

That should return nothing. If it returns anything, you need to unset those variables or remove them from your shell profile. Otherwise your "local" AI assistant is secretly calling Anthropic's API.
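A clean environment now doesn't guarantee a clean environment after your next login, so it's worth grepping the shell startup files too. A sketch; the file list is an assumption, so add whatever your shell actually sources:

```shell
# Search common shell startup files for cloud provider keys that would
# override Ollama. The file list is a guess; extend it as needed.
PATTERN='ANTHROPIC|OPENAI|OPENROUTER|GEMINI'
out=$(grep -nE "$PATTERN" \
  ~/.zshrc ~/.bashrc ~/.bash_profile ~/.profile 2>/dev/null)
if [ -n "$out" ]; then
  echo "$out"
else
  echo "no cloud keys in shell profiles"
fi
```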

There's a known bug (GitHub issue #5790) where some versions silently fall back to Anthropic even when Ollama is configured and no Anthropic key seems to be present. After you start OpenClaw and send a test message, verify the logs:

openclaw logs --follow

Look for any references to "anthropic", "openai", or other cloud provider names. If you see them, the fallback is happening and you need to dig deeper into where the key is coming from.

Also check which models OpenClaw has registered:

openclaw models list

This should show only your Ollama model. If it shows cloud models, something is wrong with the configuration.

File Permissions

Before moving on, lock down the config files:

chmod 700 ~/.openclaw
chmod 600 ~/.openclaw/openclaw.json
chmod 600 ~/.openclaw/.env

The ~/.openclaw/ directory contains your gateway auth token, session transcripts, and potentially OAuth tokens for messaging platforms. All stored as plaintext JSON and Markdown. These permissions ensure only your user account can read them.

Also check for legacy directories left behind by earlier versions of the project (which previously shipped under the names ClawDBot and MoltBot):

ls -la ~/.clawdbot/ ~/.moltbot/ 2>/dev/null

If these exist, they may contain old credentials. Secure or delete them.
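If you'd rather audit their contents before deleting, one option is to strip group and other access first. A sketch:

```shell
# If the legacy directories exist, remove group/other access before
# deciding whether to delete them outright.
checked=0
for d in ~/.clawdbot ~/.moltbot; do
  checked=$((checked + 1))
  if [ -d "$d" ]; then
    chmod -R go-rwx "$d"
    echo "locked down $d"
  fi
done
echo "checked $checked legacy locations"
```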

Configuration is done. Next up: security hardening. This is not optional.