Ollama - Lessons Learned and Recommendations

Background: I am using and testing your framework exclusively on a private network, using Ollama and a number of LLMs on various servers.

Documentation Update: When you update your documentation in the next release, please include your recommendations, knowledge, and best practices for working with Ollama. Both your framework and Ollama are rapidly evolving and having the associated growing pains. The comment I just read about Agent replacing Assistant, … is exactly the kind of thing that would be helpful (and easy to do) in an Ollama usage recommendation note.

I’ve been testing across your Cookbook examples, converting them to use Ollama exclusively. Things I’ve ‘discovered’ from my testing include (see the sketch after this list):

- In most cases, if an agent uses tools it is better to use OllamaTools (but not necessarily in all cases).
- Control agents in a multi-agent environment should only use Ollama.
- Different LLMs run under Ollama can generate similar results, yet differ enough to adversely affect agent execution.
- Prompts that work well with OpenAI models are in (some?) cases unsuitable for Ollama LLMs.
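To illustrate the first two points, the split I converged on looks roughly like this. Treat it as a sketch, not a recommendation from your docs: the model id, the DuckDuckGo tool, and the team wiring are stand-ins from my own tests and may need adjusting for other setups.

```python
from phi.agent import Agent
from phi.model.ollama import Ollama, OllamaTools
from phi.tools.duckduckgo import DuckDuckGo

# Tool-using worker: OllamaTools formats tool calls in a way local
# models follow more reliably (in most, though not all, of my tests).
searcher = Agent(
    name="searcher",
    model=OllamaTools(id="llama3.1"),  # example model id
    tools=[DuckDuckGo()],
    show_tool_calls=True,
)

# Control agent coordinating the team: plain Ollama worked better here.
controller = Agent(
    model=Ollama(id="llama3.1"),
    team=[searcher],
)

controller.print_response("Find and summarize today's top AI news.")
```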

My biggest request, after more overview documentation, is to fix all modes of MemoryAgent usage to work with Ollama. Without conversation and session memory, it’s difficult to build a robust virtual assistant.
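For reference, the sort of setup I’m after looks roughly like this. It is only a sketch against my reading of the current API; SqlAgentStorage, the history flags, and the model id are assumptions taken from the docs and may differ across versions.

```python
from phi.agent import Agent
from phi.model.ollama import Ollama
from phi.storage.agent.sqlite import SqlAgentStorage

# A virtual assistant that persists its sessions to SQLite so a
# conversation can be resumed across runs.
assistant = Agent(
    model=Ollama(id="llama3.1"),   # example model id
    storage=SqlAgentStorage(table_name="assistant_sessions", db_file="assistant.db"),
    session_id="bills-assistant",  # reuse the same id to resume the session
    add_history_to_messages=True,  # feed prior turns back into the prompt
    num_history_responses=5,
)

assistant.print_response("My favorite editor is Emacs. Please remember that.")
assistant.print_response("Which editor do I prefer?")
```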

Finally, it would be helpful to know your development priorities. The fastest, best, and most financially lucrative focus is supporting the OpenAI models and ecosystem. But there is the issue of privacy, which will inhibit major corporations from fully embracing that environment. So what comes next for you?

Thanks, and you are on course to develop a truly great product.

Hi @bills
Thank you for reaching out and using Phidata! I’ve tagged the relevant engineers to assist you with your query. We aim to respond within 48 hours.
If this is urgent, please feel free to let us know, and we’ll do our best to prioritize it.
Thanks for your patience!

Hi @bills
Thanks for the insight! We work directly with customers who have many of the same requirements around using open-source models via Ollama. I agree, we need to give you (the user) much better insight into using Ollama. I have added it to my list of documentation updates to make.

We have focused on OpenAI since they are generally ahead of the curve in terms of features and are well known, but that doesn’t mean Ollama is of lesser concern to us. At the moment “Memory Summarization” doesn’t work with Ollama, but other than that memory should work fine with Ollama.
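As a workaround until that’s fixed, you can keep chat history while leaving session summaries off when running on Ollama, along these lines. This is a sketch against the current AgentMemory flags; the exact import path and flag names may shift between releases.

```python
from phi.agent import Agent
from phi.memory.agent import AgentMemory
from phi.model.ollama import Ollama

agent = Agent(
    model=Ollama(id="llama3.1"),  # example model id
    # Explicitly leave session summaries off: summarization is the part
    # that currently breaks on Ollama; history itself still works.
    memory=AgentMemory(create_session_summary=False),
    add_history_to_messages=True,
)
```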

The truth is also that we are a small team and can’t get to everything. Feedback from the community greatly influences where we put our focus next, so Ollama will move up the list.