Background: I am testing your framework exclusively on a private network, using Ollama and a number of LLMs on various servers.
Documentation Update: When you update your documentation in the next release, please include your recommendations, knowledge, and best practices for using Ollama. Both your framework and Ollama are rapidly evolving, with the associated growing pains. The comment I just read about Agent replacing Assistant, for example, would be a helpful (and easy) addition to an Ollama usage recommendation note.
I’ve been testing across your Cookbook examples, converting them to use Ollama exclusively. Things I’ve ‘discovered’ from my testing include:

- In most cases, if an agent uses tools, it is better to use OllamaTools (but not necessarily in all cases).
- Control agents in a multi-agent environment should only use Ollama.
- Different LLMs run under Ollama can generate similar results, but the differences can be large enough to adversely affect agent execution.
- Prompts that work well with OpenAI models are, in some(?) cases, unsuitable for Ollama LLMs.
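To make the first two points concrete, here is a minimal sketch of the selection rule I've settled on. This is a hypothetical helper of my own, not your framework's API; only the names Ollama and OllamaTools are taken from the framework, and even there I am just echoing the class names I've been using.

```python
# Hypothetical helper capturing the rule of thumb from my testing:
# tool-using agents usually do better with OllamaTools, while
# control/orchestration agents should stick with plain Ollama.
# pick_model_class itself is illustrative, not part of the framework.

def pick_model_class(uses_tools: bool, is_control_agent: bool) -> str:
    """Return which Ollama model wrapper to use for a given agent role."""
    if is_control_agent:
        # Control agents in a multi-agent setup: plain Ollama only.
        return "Ollama"
    if uses_tools:
        # Tool-using worker agents: OllamaTools in most (not all) cases.
        return "OllamaTools"
    return "Ollama"

print(pick_model_class(uses_tools=True, is_control_agent=False))  # OllamaTools
print(pick_model_class(uses_tools=True, is_control_agent=True))   # Ollama
```

The exceptions ("not necessarily in all cases") are exactly what I'd like the documentation to spell out.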
My biggest request, after more overview documentation, is to fix all modes of MemoryAgent usage to work with Ollama. Without conversation and session memory, it’s difficult to build a robust virtual assistant.
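For clarity, this is the kind of conversation/session memory I mean. It is a framework-agnostic sketch of my own, not your MemoryAgent API; every name in it is a stand-in:

```python
# Framework-agnostic sketch of the conversation + session memory I am
# asking for. None of these names come from the framework's MemoryAgent.
from collections import defaultdict


class SessionMemory:
    """Keeps per-session message history so an assistant can refer back."""

    def __init__(self) -> None:
        # session_id -> ordered list of {"role": ..., "content": ...} messages
        self._sessions: dict[str, list[dict[str, str]]] = defaultdict(list)

    def add(self, session_id: str, role: str, content: str) -> None:
        self._sessions[session_id].append({"role": role, "content": content})

    def history(self, session_id: str) -> list[dict[str, str]]:
        # Returned in insertion order, ready to prepend to the next prompt.
        return list(self._sessions[session_id])


mem = SessionMemory()
mem.add("s1", "user", "My name is Pat.")
mem.add("s1", "assistant", "Nice to meet you, Pat.")
print(len(mem.history("s1")))  # 2
```

If all MemoryAgent modes behaved like this against Ollama-served models, the virtual-assistant use case would be in good shape.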
Finally, it would be helpful to know your development priorities. The fastest, best, and most financially lucrative focus is supporting the OpenAI models and ecosystem. But there is the issue of privacy, which will inhibit major corporations from fully embracing that environment. So what is next for you?
Thanks, and you are on course to develop a truly great product.