Techalicious Academy / 2026-02-06-claude-code-local


PREREQUISITES

Before we install anything, let's make sure you have what's needed.

Check Your Hardware

For Apple Silicon Macs, open Terminal and run:

sysctl -n machdep.cpu.brand_string

Should show something like "Apple M4" or "Apple M2 Max".

Check your memory:

sysctl -n hw.memsize | awk '{print $1/1024/1024/1024 " GB"}'

You need 16GB minimum. 32GB or more is better.

Why Memory Matters

Local LLMs load entirely into RAM (unified memory on Apple Silicon). A 30B parameter model needs roughly 20GB just for the weights.
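As a rough rule of thumb (an approximation, not an exact figure), weight memory is parameters × bits-per-weight ÷ 8, plus roughly 30% overhead for the KV cache and runtime buffers. For a 30B model quantized to 4 bits:

```shell
params=30000000000   # 30B parameters
bits=4               # typical 4-bit quantization
# bytes = params * bits / 8, then scale by 1.3 for ~30% runtime overhead
echo "$params $bits" | awk '{printf "%.1f GB\n", $1 * $2 / 8 / 1024 / 1024 / 1024 * 1.3}'
# prints 18.2 GB
```

Higher-precision quantizations (5- or 6-bit) push that past 20GB, which is why the table above recommends 32GB for the 30B tier.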

Model Size        Minimum RAM      Recommended
--------------------------------------------------------
7B parameters     8GB              16GB
14B parameters    12GB             24GB
30B parameters    24GB             32GB+
70B parameters    48GB             64GB+

The model we recommend (qwen3-coder) is a "mixture of experts" model: 30B total parameters, but only about 3B active per token. All 30B weights still have to fit in RAM, but each token only runs through 3B of them, so generation is much faster than with a dense 30B model.

Check Disk Space

df -h ~

You need about 30GB free: roughly 20GB for the model weights, plus headroom for Node.js, Claude Code, and temporary files during download.
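If you'd rather script the check, here's a sketch using macOS's `df -g`, which reports sizes in 1GB blocks (on Linux, `df -BG` is the closer equivalent):

```shell
# Column 4 of df's output is available space; NR==2 skips the header row
free_gb=$(df -g ~ | awk 'NR==2 {print $4}')
if [ "$free_gb" -lt 30 ]; then
  echo "Only ${free_gb}GB free - clear some space before downloading models."
else
  echo "Disk space OK: ${free_gb}GB free."
fi
```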

Install Node.js

Claude Code's installer requires Node.js. Check if you have it:

node --version

If not installed, get it from nodejs.org or use Homebrew:

brew install node
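Claude Code needs a reasonably recent Node. The minimum of 18 below is an assumption - check the official Claude Code install docs for the current requirement. A quick version gate:

```shell
# Strip the leading "v" and take the major version (e.g. "v20.11.1" -> "20")
major=$(node --version | sed 's/^v//' | cut -d. -f1)
if [ "$major" -ge 18 ]; then
  echo "Node.js major version ${major} looks fine"
else
  echo "Node.js ${major} is too old - upgrade via nodejs.org or Homebrew"
fi
```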

Check Network (For Remote Ollama)

If running Ollama on a separate machine, verify connectivity:

ping 10.0.0.79   # Replace with your AI server's IP

If that works, try the Ollama port:

nc -zv 10.0.0.79 11434

Should show "Connection succeeded" or similar.
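An open port doesn't guarantee the service is healthy. Ollama answers a plain GET on its root URL with "Ollama is running", so a curl probe confirms the server itself is up (the `ollama_reachable` helper is just a name used here for illustration):

```shell
# Returns 0 if an Ollama server answers at the host:port given as $1
ollama_reachable() {
  curl -sf --max-time 3 "http://$1/" >/dev/null
}

if ollama_reachable "10.0.0.79:11434"; then   # replace with your AI server's IP
  echo "Ollama answered"
else
  echo "No response - check the server, the port, and the firewall"
fi
```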

Terminal Basics

You should be comfortable with:

cd ~/some/directory      # Change directory
ls -la                   # List files
cat file.txt             # View file contents
export VAR="value"       # Set environment variable
nano file.txt            # Edit a file (or vim/emacs)

If these are foreign, spend 30 minutes on a command line tutorial first. We're not covering terminal basics here.

Firewall Considerations

If running Ollama on a separate machine, that machine's firewall needs to allow incoming connections on port 11434.

macOS:

System Settings > Network > Firewall > Options
(System Preferences > Security & Privacy > Firewall on older macOS)
Make sure Ollama isn't blocked.

Linux:

# Allow incoming TCP connections on port 11434
sudo ufw allow 11434/tcp

Checklist

Before proceeding, confirm:

[ ] Apple Silicon Mac (or Linux with GPU)
[ ] 16GB+ RAM (32GB+ preferred)
[ ] 30GB+ free disk space
[ ] Node.js installed
[ ] Network connectivity to AI server (if remote)
[ ] Basic terminal comfort
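On macOS, the checks above can be replayed in one go (the commands and thresholds are the ones from this chapter):

```shell
echo "CPU:  $(sysctl -n machdep.cpu.brand_string)"
echo "RAM:  $(sysctl -n hw.memsize | awk '{print $1/1024/1024/1024 " GB"}')"
echo "Disk: $(df -g ~ | awk 'NR==2 {print $4 " GB free"}')"
echo "Node: $(node --version 2>/dev/null || echo 'not installed')"
```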

If You're Using a Single Machine

Everything on one box? The setup is simpler: you can skip the network and firewall steps above, since Ollama listens on localhost by default.

The only downside is that your main machine gets hot and slow while the model is generating. A dedicated AI server lets you keep working while the model thinks.

Next Up

Next chapter: Installing and configuring Ollama on your AI server (or the same machine if you're doing everything locally).