Ollama 0.12.11 was released this week as the newest feature update to this easy-to-run tool for deploying OpenAI GPT-OSS, DeepSeek-R1, Gemma 3, and other large language models. The exciting addition in Ollama 0.12.11 is support for the Vulkan API.
Launching Ollama with the OLLAMA_VULKAN=1 environment variable set now enables Vulkan API support as an alternative to the likes of AMD ROCm and NVIDIA CUDA acceleration. This is great for open-source Vulkan drivers, older AMD graphics cards lacking ROCm support, or any AMD setup with the RADV driver present but no ROCm installation. As we have seen when testing Llama.cpp with Vulkan, in some cases Vulkan can even be faster than ROCm.
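From a shell this amounts to setting the variable before starting the server, e.g. running "ollama serve" with OLLAMA_VULKAN=1 in the environment. As a rough sketch of how the same thing could be scripted, the snippet below starts the server with Vulkan enabled and sends one generation request to the local API. The OLLAMA_VULKAN variable and the /api/generate endpoint on port 11434 are Ollama's documented defaults; the model name, wait time, and prompt are illustrative assumptions (the model is assumed to be pulled already and ollama assumed to be on PATH).

```python
import json
import os
import subprocess
import time
import urllib.request

# Start the Ollama server with the Vulkan backend opt-in from 0.12.11.
env = dict(os.environ, OLLAMA_VULKAN="1")
server = subprocess.Popen(["ollama", "serve"], env=env)

try:
    time.sleep(5)  # crude wait for the server to come up; adjust as needed

    # Issue a small non-streaming generation request against the local API.
    payload = json.dumps({
        "model": "gemma3",               # assumption: already pulled locally
        "prompt": "Say hello in one sentence.",
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        "http://127.0.0.1:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["response"])
finally:
    server.terminate()
```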
PSA: Ollama is just a slightly more convenient llama.cpp wrapper with a few controversies. If you can spare the extra effort, try using llama.cpp itself. It has a lot more granularity in terms of control, is faster, and has supported Vulkan and many other backends for a while now.



