
Ollama v0.19

Massive local model speedup on Apple Silicon with MLX

AI & Machine Learning · Developer Tools
Imported from Product Hunt
April 2, 2026

About

Ollama v0.19 rebuilds Apple Silicon inference on top of MLX, delivering significantly faster local inference for coding and agent workflows. It also adds NVFP4 support, along with smarter cache reuse, snapshots, and eviction for more responsive sessions.
