

Managed to run it with llama.cpp. It was a great suggestion, thank you! MiniCPM-o-2_6 IQ4 managed to read text from a picture of a shirt that Gemma could not get right.
Ok, it turned out to be as simple as downloading the llama.cpp binaries, a GGUF of Gemma 3, and an mmproj file, then running it all like this:
./llama-server -m ~/LLM-models/gemma-3-4b-it-qat-IQ4_NL.gguf --mmproj ~/LLM-models/gemma-3-4b-it-qat-mmproj-F16.gguf --port 5002
(Could be even easier if I'd let it download the weights itself and just used the -hf option instead of -m and --mmproj.)
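For reference, the -hf form looks something like this; the repo name below is an example from the llama.cpp multimodal docs, so treat it as an assumption and swap in whichever quant you actually want:

```shell
# Let llama-server fetch the GGUF and mmproj from Hugging Face itself;
# the repo name here is an example and may differ for your quant.
./llama-server -hf ggml-org/gemma-3-4b-it-GGUF --port 5002
```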
And now I can use it from my browser at localhost:5002, llama.cpp already provides an interface there that supports images!
Tested high-resolution images, and it seems to either downscale them, cut them into chunks, or both. The main thing is that 20-megapixel photos work fine, even on my laptop with no GPU; they just take a couple of minutes to process. And while a 4B model is not very smart (especially quantized), it could still read and translate text for me.
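If you want to pre-downscale photos yourself before sending them (to cut those couple of minutes down), the math is simple. A quick sketch of fitting an image into a pixel budget while keeping the aspect ratio; the budget value is just an illustration, not what llama.cpp actually uses internally:

```python
import math

def fit_to_budget(width: int, height: int, max_pixels: int) -> tuple[int, int]:
    # Scale both sides by the same factor so the total pixel
    # count comes in at or under max_pixels.
    if width * height <= max_pixels:
        return width, height
    scale = math.sqrt(max_pixels / (width * height))
    return max(1, int(width * scale)), max(1, int(height * scale))

# A ~20 MP photo (5472x3648) squeezed into a ~1 MP budget:
print(fit_to_budget(5472, 3648, 1_000_000))
```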
Need to test more with other models, but just wanted to leave this here already in case someone stumbles upon this question and wants to try it themselves. It turned out to be much more accessible than I expected.
Sounds like what I’m looking for! What do you use for inference?
Thank you, I hadn't heard of it before and it looks really interesting! I need to test how it works with llama.cpp; in particular, I wonder what happens with resolutions higher than supported, and whether the image will get downscaled.
I'm not sure if I'm doing something wrong here, but Open WebUI has been weird for me. I tried running nanonets-ocr, but it only read the last lines visible in the photo. And other models would start reprocessing the whole chat and ignoring the latest image I posted, answering with the context of the previous reply instead... Using web search with it is easy, though, so I think I'll keep an eye on it and maybe try again later.