Ollama Now Supports Llama 3.2 Vision 

Why Ollama Is Good for Running LLMs on Your Computer

Ollama now supports Llama 3.2 Vision, a multimodal LLM that can recognise, reason about, and caption images. The platform, which enables users to run Llama models locally, can now run Llama 3.2 Vision in both its 11B and 90B sizes.

Ollama’s blog post announcing the news included several examples of using Llama 3.2 Vision for OCR, image recognition and Q&A, visual data analysis, and handwriting recognition. The integration is supported in Open WebUI, but not in Llama.cpp.

Since Ollama runs locally on your system, uploading images to the tool raises none of the privacy concerns of a cloud-based service. The team has also provided instructions on integrating and using Llama 3.2 Vision inside Ollama.
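Beyond the command line, a running Ollama server exposes a local REST API. A minimal sketch of how a vision request could be assembled, assuming Ollama's documented `/api/chat` endpoint on port 11434 (the image bytes and prompt here are placeholders):

```python
import base64
import json

def build_vision_request(prompt: str, image_bytes: bytes,
                         model: str = "llama3.2-vision") -> dict:
    """Assemble a chat payload that attaches one image to a prompt.

    Ollama's chat API expects images as base64-encoded strings inside
    the message's "images" field.
    """
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": prompt,
            # Base64-encode the raw image bytes as the API expects.
            "images": [base64.b64encode(image_bytes).decode("ascii")],
        }],
        "stream": False,  # ask for a single JSON response, not a stream
    }

payload = build_vision_request("What is in this image?", b"<raw image bytes>")
print(json.dumps(payload, indent=2))
```

The payload can then be POSTed to `http://localhost:11434/api/chat` (for example with `urllib.request`) once an Ollama server with the model pulled is running; nothing leaves your machine.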

The feature was released in v0.4 of Ollama. The same version also brought faster follow-on requests to vision models, and Ollama can now import models from Safetensors without a Modelfile when running “ollama create my-model”.
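In practice, the Safetensors import runs from the directory holding the weights. A minimal sketch, assuming v0.4 or later and a local directory of Safetensors weights (the directory and model name here are hypothetical):

```shell
cd ./my-model-weights        # hypothetical directory containing *.safetensors
ollama create my-model       # no Modelfile needed as of v0.4
ollama run my-model "Say hello"
```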

Meta released Llama 3.2 at the end of September this year, claiming it beats Claude 3 Haiku and GPT-4o on vision-based tasks. Meta is offering both small and medium vision LLMs (11B and 90B) and lightweight models (1B and 3B) optimised for on-device use, with support for both Qualcomm and MediaTek hardware.
Ollama has been at the forefront of providing free infrastructure to help users run LLMs from their local terminal. Earlier this year, when AIM reviewed the best tools to run large language models (LLMs) locally on a computer, Ollama stood out as the most efficient solution, offering unmatched flexibility.

However, a few days ago, a report revealed six critical flaws in Ollama, four of which were assigned CVEs (Common Vulnerabilities and Exposures) and patched in an update, while the other two were disputed by Ollama’s maintainers.

The report was published by Oligo’s research team, which revealed that “collectively, the vulnerabilities could allow an attacker to carry out a wide range of malicious actions with a single HTTP request, including Denial of Service (DoS) attacks, model poisoning, model theft, and more”.

As mentioned earlier, one of the main advantages of Ollama is that it runs LLMs locally, minimising the risks of using one online. However, if more such vulnerabilities surface in the future, Ollama’s popularity and position in the AI ecosystem could be severely compromised.

The post Ollama Now Supports Llama 3.2 Vision appeared first on Analytics India Magazine.
