TROUBLESHOOTING
Common problems and how to fix them.
"Connection refused" when calling Ollama
The Ollama server isn't running.
Fix: Start it with:
ollama serve
Keep this terminal open while you work. Ollama needs to be running to accept requests.
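A quick way to confirm the server is reachable before retrying your request (this assumes the default port, 11434):

```shell
# Health check against the default Ollama port. -s silences curl's progress
# output; -f turns HTTP errors into a nonzero exit so the else branch fires.
if curl -sf http://localhost:11434/api/tags >/dev/null 2>&1; then
  STATUS="running"
else
  STATUS="not reachable -- start it with: ollama serve"
fi
echo "Ollama is $STATUS"
```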
"Model not found" error
You're trying to use a model that isn't downloaded.
Fix: Pull the model first:
ollama pull ministral
Check what you have installed:
ollama list
Request times out
Vision models are slower than text models. They need time to process images.
Fixes:
- Increase the timeout in your code (we used 120 seconds)
- Check your system resources (close other apps)
- Try a smaller model (minicpm-v is lighter than ministral)
- Make sure you have enough RAM (16GB+ recommended for larger models)
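With curl, the timeout lives on the client side. A sketch of a request with a generous limit — the model name and prompt are placeholders, and 120 seconds matches the timeout used earlier in this guide:

```shell
# --max-time bounds the entire request, including the model's generation time.
# "ministral" and the prompt are stand-ins for your own model and question.
BODY='{"model": "ministral", "prompt": "Describe this image.", "stream": false}'
curl -s --max-time 120 http://localhost:11434/api/generate -d "$BODY" \
  || echo "request failed (is ollama serve running?)"
```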
Response doesn't follow the format
The model is ignoring your structured prompt.
Fixes:
- Try a different model (some follow instructions better)
- Simplify your prompt (fewer questions)
- Be more explicit ("Answer ONLY with YES or NO")
- Lower the temperature to 0.1 for more consistent output
- Make sure your prompt ends with the expected format
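Temperature is set through the `options` field of the Ollama API rather than a top-level key. A sketch combining the low temperature and the explicit-answer instruction (model name is a placeholder):

```shell
# "options" is where /api/generate accepts sampling parameters like temperature.
# The prompt demands a fixed format; temperature 0.1 makes output more consistent.
BODY='{"model": "ministral", "prompt": "Is there a cat in this image? Answer ONLY with YES or NO.", "stream": false, "options": {"temperature": 0.1}}'
curl -s http://localhost:11434/api/generate -d "$BODY" \
  || echo "request failed (is ollama serve running?)"
```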
Response is empty or garbled
The model may have crashed, run out of memory, or received input it couldn't handle.
Fixes:
- Check Ollama logs in the terminal where it's running
- Try a simpler prompt first to verify the model works
- Verify your Base64 encoding is correct
- Make sure the image isn't corrupted
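One way to verify the Base64 step: decode the string back and compare byte-for-byte with the original. A mismatch means the encoding, not the model, is the problem. GNU flags shown; on older macOS you may need `base64 -i file` to encode and `base64 -D` to decode.

```shell
# Round-trip check: encode, decode, compare. The sample file is a stand-in --
# substitute the path to your real image.
printf 'stand-in for your image bytes' > sample.bin
base64 sample.bin > sample.b64
base64 -d sample.b64 > roundtrip.bin     # base64 -D on older macOS
cmp -s sample.bin roundtrip.bin && echo "Base64 round-trip OK"
```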
"Image too large" or memory errors
The image file is too big, or the Base64 string exceeds your shell's command-line length limit (ARG_MAX).
Fixes:
- Resize images before encoding (768px max dimension is usually fine)
- Check that images are under 20MB
- Use a more efficient format (JPEG vs PNG for photos)
To resize on macOS:
sips --resampleHeightWidthMax 768 image.png --out resized.png
Check your system's command-line length limit (ARG_MAX):
getconf ARG_MAX
Check your Base64 string size:
base64 -i image.png | tr -d '\n' | wc -c
If the count is over 800,000 characters, resize the image first.
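The size check can be folded into a guard before you send anything. A sketch using the 800,000-character budget above — `image.png` is a placeholder, and `tr -d '\n'` strips the newlines base64 inserts:

```shell
# Gate on encoded length before building the request. Placeholder file shown;
# point this at your real image instead.
printf 'stand-in for your image bytes' > image.png
B64_LEN=$(base64 image.png | tr -d '\n' | wc -c)   # older macOS: base64 -i image.png
if [ "$B64_LEN" -gt 800000 ]; then
  echo "Too large ($B64_LEN chars) -- resize before encoding"
else
  echo "Base64 length $B64_LEN is within budget"
fi
```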
JSON parsing errors
The response isn't valid JSON.
Fixes:
- Check if Ollama is returning an error message instead
- Make sure stream is set to false (streaming returns one JSON object per line instead of a single object)
- Use jq -r to see the raw response before parsing
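Ollama reports failures as {"error": "..."} instead of {"response": "..."}, so a jq expression that falls back from one to the other shows you which you got. The canned response below stands in for your actual curl output:

```shell
# jq's "//" operator tries .response first, then .error if it's missing.
RESPONSE='{"response": "YES"}'            # stand-in for the real curl output
echo "$RESPONSE" | jq -r '.response // .error'
```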
curl command not working
Common issues:
- Missing quotes around the JSON
- Variable not expanding (use double quotes around $IMAGE_B64; single quotes block expansion)
- Wrong endpoint URL
- Ollama not running
Test with a simple request first:
curl http://localhost:11434/api/tags
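If that works, the usual culprit is quoting. A sketch of a correctly expanded payload — `$IMAGE_B64` holds a stand-in value here, and the `images` field is where /api/generate expects Base64 image data:

```shell
# Double quotes around the body let $IMAGE_B64 expand; single quotes would
# send the literal text "$IMAGE_B64" instead of your data.
IMAGE_B64="aGVsbG8="   # stand-in payload
BODY="{\"model\": \"ministral\", \"prompt\": \"Describe this image.\", \"images\": [\"$IMAGE_B64\"], \"stream\": false}"
echo "$BODY" | python3 -m json.tool >/dev/null && echo "payload is valid JSON"
```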
Inconsistent results
Running the same image twice gives different answers.
Fixes:
- Lower temperature to 0.1 (more deterministic)
- Make prompt more specific
- Accept that AI has some randomness (run multiple times and vote)
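"Run multiple times and vote" can be a short loop: collect the answers, count duplicates, keep the most common. `ask_model` is a hypothetical wrapper — replace its body with your real Ollama request.

```shell
# Majority vote over 5 runs: sort groups identical answers, uniq -c counts
# them, sort -rn puts the most common first, awk keeps just the answer text.
ask_model() { echo "YES"; }   # stand-in: swap in your real curl call
VOTE=$(for i in 1 2 3 4 5; do ask_model; done | sort | uniq -c | sort -rn | awk 'NR==1 {print $2}')
echo "Majority answer: $VOTE"
```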
Still stuck?
Check the Ollama terminal for error messages. Most problems show up there before they show up in your code.
Ollama GitHub issues:
https://github.com/ollama/ollama/issues
Techalicious forum:
https://techalicious.forum