From: Running OpenClaw Locally with Ollama on Apple Silicon
INSTALLING OLLAMA
==================
Ollama is the local model server. It handles downloading models, loading
them into memory, running inference on Apple Silicon's GPU via Metal, and
exposing an API that other tools (like OpenClaw) can talk to. Think of
it as the engine. OpenClaw is the dashboard.
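To make that split concrete, here is a sketch of talking to the API
directly. It assumes Ollama's default port (11434) and its /api/generate
endpoint; "llama3" is just a placeholder model name, and the probe is
safe to run even before anything is installed:

```shell
# Probe the local Ollama API, then send a generation request if it's up.
# Port 11434 is Ollama's default; "llama3" is a placeholder model name.
if curl -sf http://localhost:11434/ >/dev/null 2>&1; then
  curl -s http://localhost:11434/api/generate \
    -d '{"model": "llama3", "prompt": "Say hi", "stream": false}'
else
  echo "Ollama server not running yet"
fi
```

OpenClaw does essentially this under the hood: plain HTTP requests to
localhost:11434.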
Two Ways To Install
--------------------
Option A: Direct download (recommended)
Go to https://ollama.com/download/mac and download the DMG. Open it,
drag the app to your Applications folder, and launch it. On first run,
it installs the command-line tool at /usr/local/bin/ollama.
That's it. Done.
Option B: Homebrew
brew install ollama
This puts the binary at /opt/homebrew/bin/ollama on Apple Silicon Macs.
Same result, different path.
Either way works fine. The direct download is simpler for people who
don't have Homebrew set up.
Verify The Installation
------------------------
Open a terminal and run:
ollama --version
You should see a version number. If you get "command not found", the CLI
didn't install properly. Launch the Ollama app from Applications once
(the first launch is what installs the CLI), then open a new terminal.
Next, check the model list:
ollama list
This should return an empty table since you haven't pulled any models
yet. If it returns an error about connecting to the server, make sure
the Ollama app is running (check your menu bar for the Ollama icon).
Confirming Metal Acceleration
------------------------------
This is important. Ollama must be running natively on Apple Silicon to
use Metal GPU acceleration. If it's running under Rosetta 2 (the x86
emulator), Metal is completely disabled and everything will be painfully
slow.
Verify the binary architecture:
file $(which ollama)
You want to see "arm64" in the output. If you see "x86_64", something
went wrong. Reinstall from the official download.
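The check can be scripted. check_native below is an illustrative helper
name, not part of Ollama's tooling:

```shell
# Print whether a binary is native arm64 (illustrative helper).
check_native() {
  if file "$1" | grep -q arm64; then
    echo "arm64: $1"
  else
    echo "NOT arm64: $1"
  fi
}
# Falls back to the direct-install path if ollama isn't on PATH yet.
check_native "$(command -v ollama || echo /usr/local/bin/ollama)"
```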
Also verify the path makes sense:
which ollama
Expected paths:
/usr/local/bin/ollama (direct install)
/opt/homebrew/bin/ollama (Homebrew)
If it's pointing somewhere unexpected, you might have a stale
installation.
+----------------------------------------------------------+
| NEVER run Ollama under Rosetta 2.                        |
| It disables Metal entirely. No GPU acceleration.         |
| Verify with: file $(which ollama)                        |
| Must show arm64.                                         |
+----------------------------------------------------------+
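Beyond checking the binary, you can ask macOS whether the current
process is being translated. The sysctl key sysctl.proc_translated
reports 1 under Rosetta 2 and 0 when native; it doesn't exist on Intel
Macs or non-macOS systems, hence the fallback:

```shell
# sysctl.proc_translated: 1 = running under Rosetta 2, 0 = native.
# The key is absent on Intel Macs (and non-macOS), hence the fallback.
translated=$(sysctl -n sysctl.proc_translated 2>/dev/null || true)
if [ "$translated" = "1" ]; then
  echo "running under Rosetta 2 -- Metal will be disabled"
else
  echo "not translated (native, or key unavailable)"
fi
```

Note this reports the shell's own translation status; a terminal running
under Rosetta launches its child processes translated too, so a clean
result here is worth having.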
A Note About Docker
--------------------
You might be tempted to run Ollama in Docker. Don't. Docker on macOS
runs inside a Linux VM, and that VM has no access to Apple Silicon's
GPU. Your model would run on CPU only, which means maybe 1-2 tokens
per second instead of 10-40. It's unusable.
Always run Ollama natively on macOS for Metal acceleration.
Starting Ollama
----------------
If you installed via the DMG, just launch the app. You'll see a small
llama icon in your menu bar. The server starts automatically.
If you installed via Homebrew and want to start it manually:
ollama serve
This starts the server in the foreground. You'll see log output. Open
a new terminal tab for running commands.
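If you'd rather not dedicate a terminal tab to the server, one option is
to background it and capture the logs. The /tmp paths below are just
examples; put the log wherever you like:

```shell
# Start the server in the background, logging to a file.
# (/tmp paths here are examples, not anything Ollama requires.)
if command -v ollama >/dev/null 2>&1; then
  nohup ollama serve > /tmp/ollama-serve.log 2>&1 &
  echo "$!" > /tmp/ollama-serve.pid
  echo "server started, pid $(cat /tmp/ollama-serve.pid)"
else
  echo "ollama not on PATH"
fi
```

Stop it later with kill $(cat /tmp/ollama-serve.pid). The DMG app
manages all of this for you via the menu bar icon.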
To verify the server is running:
curl http://localhost:11434/
This should return: "Ollama is running"
If it does, you're good. The model server is up, Metal is active, and
we're ready to pull a model.
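All of the checks in this section can be bundled into a quick preflight
script before moving on. check is an illustrative helper, not an Ollama
command:

```shell
# Preflight: run each check and print one ok/FAIL line per check.
check() {
  label=$1; shift
  if "$@" >/dev/null 2>&1; then
    echo "ok:   $label"
  else
    echo "FAIL: $label"
  fi
}
check "ollama CLI on PATH"    command -v ollama
check "server responding"     curl -sf http://localhost:11434/
check "binary reports arm64"  sh -c 'file "$(command -v ollama)" | grep -q arm64'
```

Three "ok" lines and you're ready for the next step: pulling a model.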