02-hardware-requirements.txt

From: Running OpenClaw Locally with Ollama on Apple Silicon

HARDWARE REQUIREMENTS
=====================

Before you install anything, let's talk about what your Mac actually needs
to do here and whether your specific machine is up to it.

Why Apple Silicon Works So Well For This
-----------------------------------------

Traditional computers have separate CPU memory and GPU memory. If you want
to run a big model on the GPU, the model has to fit in your graphics card's
dedicated VRAM. A gaming GPU might have 8 or 12 gigs of VRAM. A
professional card, 24 or 48. Either way, there's a hard wall.

Apple Silicon is different. The CPU and GPU share the same unified memory
pool. If your Mac has 32GB of RAM, the GPU can use all 32GB. No separate
VRAM, no transfers between memory pools, no bottleneck. This is why a
$1,600 MacBook Pro can run models that would need a $2,000 GPU on a PC.

Metal is Apple's GPU framework, and Ollama uses it automatically. No CUDA
drivers to install, no configuration needed. It just works. This is
genuinely one of the few cases where Apple's walled-garden approach pays
off in simplicity.

The 14B Model Memory Budget
-----------------------------

We're running a 14-billion-parameter model at Q4_K_M quantization. That
means the model weights have been compressed from 16-bit floating point
down to roughly 4 bits per parameter. The resulting file is about 9GB.

But the file size isn't the whole story. When the model runs, it also
needs memory for the KV cache, which stores the conversation context. The
KV cache scales linearly with the context window size. Here's what the
total memory footprint looks like:

  At 4K context:  roughly 9.5 GB total
  At 8K context:  roughly 10.7 GB total
  At 16K context: roughly 12.5 GB total
  At 32K context: roughly 15-16 GB total

macOS itself needs about 4-5GB to run, plus whatever you have open. Chrome
is a notorious memory hog; each tab uses 100 to 300 MB. Safari is lighter,
but it still adds up.
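That linear KV-cache growth makes back-of-the-envelope budgeting easy. As a
sketch: take the ~9GB of weights and add a per-1K-tokens increment. The 0.2
GB per 1K figure below is my own rough fit to the numbers above, not an
exact formula; real usage varies with the model's architecture and Ollama's
overhead.

```shell
# Rough total-memory estimate for a 14B model at Q4_K_M:
# ~9 GB of weights plus ~0.2 GB of KV cache per 1K tokens of context.
# (0.2 GB/1K is an approximation fitted to the table above.)
estimate_gb() {
  awk -v c="$1" 'BEGIN { printf "%.1f", 9 + 0.2 * c }'
}

for ctx in 4 8 16 32; do
  echo "${ctx}K context: ~$(estimate_gb "$ctx") GB"
done
```

The estimates land within about half a gigabyte of the table, which is
close enough for deciding whether a given context size fits your machine.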
What Each RAM Tier Gets You
-----------------------------

16GB (base M1/M2/M3/M4): This is tight but functional. After macOS takes
its share, you have maybe 10-12GB free. That's enough for the model at 4K
to 8K context, but you need to close your browser first. Seriously. Quit
Chrome, quit Safari, quit everything you don't need. If macOS starts
swapping to disk, performance falls off a cliff. You'll go from 10 tokens
per second to waiting a full minute for each response.
Verdict: It works. Barely. Close everything else first.

24GB (M2/M3/M4 Pro): This is the sweet spot. Comfortable headroom for 16K
context with background apps running. You can keep a few browser tabs open
without worrying. This is the minimum I'd recommend for regular use.
Verdict: Good experience. Recommended minimum.

32GB+ (Pro/Max): Ideal. You can run Q5_K_M quantization (better quality,
larger file) with 32K+ context and still have room to breathe. Or stick
with Q4_K_M and enjoy the headroom.
Verdict: Great experience. No compromises at 14B.

48GB+ (Max/Ultra): Overkill for a 14B model. If you have this much memory,
consider running a 32B model instead, like qwen2.5-coder:32b. You've got
the RAM. Use it.
Verdict: Skip 14B. Go bigger.

How To Check Your Specs
------------------------

Apple menu, then About This Mac. You need three things:

1. Chip: Must say M1, M2, M3, or M4 (with or without Pro/Max/Ultra). If it
   says Intel, stop here. This guide is not for you.
2. Memory: The number next to "Memory" or "Unified Memory". This is your
   total. 16GB minimum, 24GB recommended.
3. macOS version: Must be Sonoma (14) or newer. If you're on Ventura (13)
   or older, update first.

For a more detailed view, open Activity Monitor (it's in Applications,
Utilities). Click the Memory tab. The "Memory Pressure" graph at the
bottom tells you how stressed your system is right now. If it's yellow or
red before you even start Ollama, you need to close some apps.
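If you prefer Terminal to About This Mac, the same three facts are
available from the command line. sysctl and sw_vers are standard macOS
tools; the fallbacks below are only there so the snippet degrades
gracefully if you run it somewhere else.

```shell
# Chip name, total unified memory, and macOS version (macOS-only
# commands; each falls back to a placeholder on other systems).
chip=$(sysctl -n machdep.cpu.brand_string 2>/dev/null || echo "unknown")
mem_bytes=$(sysctl -n hw.memsize 2>/dev/null || echo 0)
os_ver=$(sw_vers -productVersion 2>/dev/null || echo "unknown")

echo "Chip:   $chip"                           # want: Apple M1/M2/M3/M4
echo "Memory: $((mem_bytes / 1073741824)) GB"  # want: 16 minimum, 24+ better
echo "macOS:  $os_ver"                         # want: 14 (Sonoma) or newer
```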
+----------------------------------------------------------+
| The single biggest factor in your experience is RAM.     |
| Not the chip variant, not the clock speed. RAM.          |
| 16GB will work. 24GB will be comfortable. 32GB+ ideal.   |
+----------------------------------------------------------+

One More Thing: Disk Space
---------------------------

The model file itself is about 9GB. Ollama stores models in
~/.ollama/models/ by default. OpenClaw and its dependencies need another
couple of gigs. Budget at least 15GB of free disk space to be safe. If
you're tight on storage, clean up before you start:

  df -h /   # Check the "Available" column

If you have the specs, let's install Ollama.
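One related check worth knowing before the install: how much space any
already-downloaded models are using in Ollama's default store
(~/.ollama/models, as noted above). A sketch combining it with the free
space check:

```shell
# Free space on the root volume (same info as df -h / above).
free=$(df -h / | awk 'NR==2 { print $4 }')
echo "Free on /: $free"

# Size of Ollama's default model store, if it exists yet.
models_dir="$HOME/.ollama/models"
if [ -d "$models_dir" ]; then
  du -sh "$models_dir"
else
  echo "No models downloaded yet ($models_dir not found)."
fi
```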
