08-using-openclaw.txt
From: Running OpenClaw Locally with Ollama on Apple Silicon
USING OPENCLAW
================
Good news: you do not need WhatsApp, Telegram, or any messaging
platform to use OpenClaw. The built-in interfaces work great and
don't require any external services.
The Web Dashboard
------------------
The fastest way to start chatting:
openclaw dashboard
This opens your browser to http://127.0.0.1:18789/ with an
authenticated URL (the gateway token is embedded). You get a full
chat interface where you can type messages, see responses, and monitor
tool calls in real time.
This is the best option for demos: it's visual, easy to understand, and
works in any browser. No setup beyond what we've already done.
The Terminal UI
-----------------
If you prefer staying in the terminal:
openclaw tui
This gives you an interactive chat right in your terminal. It supports
slash commands:
/status Check connection to Ollama
/model Switch models (e.g., /model ollama/qwen2.5-coder:14b)
/new Start a new conversation session
/history View past sessions
Prefix a line with ! to run shell commands directly:
!git status
!ls -la
The TUI is lightweight and keeps everything in one window. Nice for
working alongside other terminal tools.
One-Shot CLI
--------------
For scripting or quick questions:
openclaw agent --message "List the files in my home directory"
This sends a single message, gets a response, and exits. Useful for
automation or when you just want a quick answer without entering an
interactive session.
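Because the one-shot form exits after a single exchange, it slots neatly into scripts. Here's a minimal sketch that wraps it in a helper function; it assumes `openclaw` is on your PATH and falls back to a stub line when it isn't, so the script still runs elsewhere:

```shell
#!/bin/sh
# Sketch: wrap the one-shot CLI for use in scripts. Assumes `openclaw` is on
# PATH; prints a stub line when it is not, so the script degrades gracefully.
ask() {
  if command -v openclaw >/dev/null 2>&1; then
    openclaw agent --message "$1"
  else
    echo "openclaw not installed; would have asked: $1"
  fi
}

ask "Summarize the files in the current directory"
```

The same pattern works in cron jobs or git hooks: call `ask` with a generated prompt and capture the reply with command substitution.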
Add --thinking high for more detailed reasoning:
openclaw agent --message "Explain this error: <paste error here>" --thinking high
What You Can Do With It
-------------------------
Once you're in a conversation (via Dashboard, TUI, or CLI), the agent
can do things on your behalf. This is where tool calling comes in.
The model doesn't just generate text. It can invoke tools to interact
with your system.
Try these, escalating in complexity:
Basic response test:
"What day is it today?"
This confirms the model responds at all. If you get nothing, check
the troubleshooting section.
Tool-calling test:
"List the files in my current directory"
The agent should invoke a shell command and return real results from
your filesystem. If it describes what it WOULD do instead of actually
doing it, tool calling isn't working. Check that your model supports
tool calling (qwen2.5-coder:14b does) and that the "api" field in
your config is set correctly.
Multi-step test:
"Create a file called test.txt with 'Hello from OpenClaw' inside it,
then read it back to me"
This tests file I/O tools and multi-step reasoning. The agent should
create the file, then read it, then report the contents.
Code generation test:
"Write a Python script that downloads a web page and counts the
word frequencies, then save it as wordcount.py"
This tests the agent's ability to write code and save files.
+----------------------------------------------------------+
| If the agent describes actions instead of doing them, |
| tool calling is broken. Check model compatibility |
| and the "api" field in openclaw.json. |
+----------------------------------------------------------+
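One quick way to inspect that field from the shell is with jq. Note the config path and the top-level position of "api" are both assumptions here; adjust them to match where your openclaw.json actually lives and how it is structured:

```shell
# Sketch: print the "api" field the warning above refers to.
# ~/.openclaw/openclaw.json is an ASSUMED location, and a top-level "api"
# key is an ASSUMED layout -- check your actual config first.
cfg="$HOME/.openclaw/openclaw.json"
if [ -f "$cfg" ]; then
  jq -r '.api // "(not set)"' "$cfg"
else
  echo "config not found at $cfg"
fi
```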
Optional: Connecting Telegram
-------------------------------
If you want a more impressive demo, Telegram is the simplest messaging
platform to set up. It uses long-polling, which means no public URL,
no webhooks, no port forwarding. Everything works behind your NAT.
1. Open Telegram. Message @BotFather. Send /newbot.
2. Follow the prompts to name your bot. Copy the bot token.
3. During openclaw onboard (or manually in the config), select
Telegram and enter the token.
4. Message your new bot on Telegram. You'll receive a pairing code.
5. Approve: openclaw pairing approve telegram <CODE>
Now you can chat with your local AI through Telegram on your phone.

The messages travel through Telegram's servers (encrypted in transit,
though bot chats are not end-to-end encrypted), but the AI processing
happens entirely on your Mac.
Important caveat: all messaging platforms (Telegram, WhatsApp,
Discord, Signal) require their respective cloud servers for message
transport. For a truly air-gapped setup with zero cloud contact, use
the Web Dashboard or TUI only.
Session Management
-------------------
OpenClaw maintains conversation sessions. Each session has its own
context and history. Sessions are stored locally at:
~/.openclaw/agents/<agentId>/sessions/
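You can poke at that directory directly. This sketch counts session files per agent using the path above; it prints only "done" if no agent has chatted yet:

```shell
# Sketch: count stored session files per agent, using the documented path.
# Prints only "done" if no agent has any sessions yet.
count_sessions() {
  for dir in "$HOME"/.openclaw/agents/*/sessions; do
    [ -d "$dir" ] || continue
    printf '%s: %s file(s)\n' "$dir" "$(ls "$dir" | wc -l)"
  done
}
count_sessions
echo "done"
```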
If Ollama crashes mid-conversation (it happens), don't panic. The
session state is preserved independently. Restart Ollama, send another
message, and OpenClaw reconnects automatically. You might need to
re-warm the model:
ollama run qwen2.5-coder:14b "ping" --verbose
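Before re-sending a message, you can confirm Ollama is answering at all. This sketch probes Ollama's HTTP API (11434 is its default port; /api/tags lists installed models):

```shell
# Sketch: check whether Ollama is answering before continuing a conversation.
# 11434 is Ollama's default port; /api/tags is its model-listing endpoint.
if curl -sf http://127.0.0.1:11434/api/tags >/dev/null 2>&1; then
  ollama_up=yes
  echo "ollama is up -- safe to continue the conversation"
else
  ollama_up=no
  echo "ollama is down -- restart it, then re-warm the model"
fi
```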
The conversation continues where it left off.
Checking The Logs
------------------
If something seems off, watch the logs in real time:
openclaw logs --follow
In another terminal, send a message through the Dashboard or TUI.
Watch the log output for errors, warnings, or references to cloud
providers (which would indicate the local-only configuration isn't
working).
Log files are stored at:
/tmp/openclaw/openclaw-YYYY-MM-DD.log
They're in JSON Lines format (one JSON object per line). For verbose
debugging:
OPENCLAW_LOG_LEVEL=debug openclaw gateway run --verbose
This produces a LOT of output. Useful for diagnosing stubborn issues,
overwhelming for everyday use.
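Since the logs are JSON Lines, jq is handy for cutting through the volume. The "level" and "msg" field names below are assumptions; inspect one line of your own log first. A synthetic log is used here so the example is runnable as-is:

```shell
# Sketch: pull error-level entries out of a JSON Lines log with jq.
# The "level" and "msg" field names are ASSUMPTIONS -- check a real log line.
# A synthetic log stands in for /tmp/openclaw/openclaw-YYYY-MM-DD.log.
log=$(mktemp)
printf '%s\n' \
  '{"level":"info","msg":"gateway started"}' \
  '{"level":"error","msg":"connection refused"}' > "$log"

jq -r 'select(.level == "error") | .msg' "$log"   # prints: connection refused
rm -f "$log"
```

Point the same filter at the real log path (or pipe `openclaw logs --follow` through it) once you've confirmed the field names.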