Surprise Launch Week - Day 4 - Ollama Support

TL;DR

One toggle → your automations run on-device, no cloud hop.

# .env
ENABLE_OLLAMA=true
LLM_KEY=OLLAMA
OLLAMA_MODEL=qwen2.5:7b-instruct
OLLAMA_SERVER_URL=http://host.docker.internal:11434
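
A quick sanity check before flipping the toggle (the model tag and port mirror the defaults above; adjust if yours differ):

# Pull the model into your local Ollama registry
ollama pull qwen2.5:7b-instruct

# Confirm the server is up and can see the model (default port 11434)
curl http://localhost:11434/api/tags

One note on the URL: host.docker.internal lets the Skyvern container reach an Ollama server running on the host machine. If Ollama runs in its own container instead, point OLLAMA_SERVER_URL at that container.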

Why you’ll care

  • Zero data leaves the box – ideal for air-gapped or PII-heavy flows
  • Drop-in swap – test the latest Gemma or Qwen model with a one-line change (see the sketch below)
  • Run custom fine-tunes – fine-tune your own open-source LLM and try it out with Skyvern
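
Swapping models really is a one-line edit to the same .env file. The names below are examples (gemma2:9b and my-finetune stand in for whatever you've pulled or built locally):

# .env – point Skyvern at a different local model
OLLAMA_MODEL=gemma2:9b   # or any tag shown by `ollama list`

# Custom fine-tune: package it once with a Modelfile, then reference it by name
#   ollama create my-finetune -f ./Modelfile
# OLLAMA_MODEL=my-finetune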