Inconsistent Tool Call Format with Ollama + Llama 3.3 (`<|python_tag|>`)

Hi,

I have two tools: one for discovering the user’s location and another for getting the weather based on the location.

I’m using Ollama + Llama 3.3 70B. When I ask, “How is the temperature here?”:

  • Using Gemini, it makes two tool calls: one to get the location and, based on that, another to get the weather.
  • However, with Ollama + Llama 3.3, the first call works correctly, but the second call comes back as raw `<|python_tag|>` text instead of a structured tool call.
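
For context, here’s roughly how the agent is wired up (a simplified sketch: the tool bodies are stubbed, and the exact import paths may vary with your Phidata version):

```python
from phi.agent import Agent
from phi.model.ollama import Ollama


def discover_user_location() -> str:
    """Stub: return the user's current city."""
    return "New York"


def get_weather_from_location(city: str) -> str:
    """Stub: return the current weather for the given city."""
    return f"72F and sunny in {city}"


agent = Agent(
    model=Ollama(id="llama3.3"),
    tools=[discover_user_location, get_weather_from_location],
    show_tool_calls=True,
    debug_mode=True,  # produces the DEBUG log shown below
)

agent.print_response("how is the temperature here?")
```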

Here’s an example of the debug output:

```plaintext
DEBUG    ============== user ==============                                    
DEBUG    how is the temperature here?
DEBUG    ============== assistant ==============                                
DEBUG    Tool Calls: [                                                         
           {                                                                   
             "type": "function",                                               
             "function": {                                                     
               "name": "discover_user_location",                               
               "arguments": "{}"                                               
             }                                                                 
           }                                                                   
         ]                                                                     
DEBUG    ============== tool ==============                                    
DEBUG    New York
DEBUG    ============== assistant ==============                               
DEBUG    <|python_tag|>get_weather_from_location(city="New York")
```

I expected the second response to be in the same format as the first one, rather than using `<|python_tag|>`.

Can anyone help me understand why this is happening or how to fix it?

Hi @paulok
Thank you for reaching out and using Phidata! I’ve tagged the relevant engineers to assist you with your query. We aim to respond within 24 hours.
If this is urgent, please feel free to let us know, and we’ll do our best to prioritize it.
Thanks for your patience!

Hey @paulok, this is most likely Llama hallucinating the tool-call format. You can try adding an instruction telling the model to strictly follow a particular output schema, or use `response_model` in the `Agent` class to define the structure of your output.
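
For example, here’s a minimal sketch of the `response_model` approach (assuming a recent Phidata version; the schema fields and stubbed tools are just illustrative):

```python
from pydantic import BaseModel, Field

from phi.agent import Agent
from phi.model.ollama import Ollama


def discover_user_location() -> str:
    """Stub: return the user's current city."""
    return "New York"


def get_weather_from_location(city: str) -> str:
    """Stub: return the current weather for a city."""
    return f"72F and sunny in {city}"


class WeatherReport(BaseModel):
    """Illustrative output schema; adjust the fields to your use case."""

    city: str = Field(..., description="City the report is for")
    temperature: str = Field(..., description="Current temperature")


agent = Agent(
    model=Ollama(id="llama3.3"),
    tools=[discover_user_location, get_weather_from_location],
    response_model=WeatherReport,
)

result = agent.run("how is the temperature here?")
print(result.content)
```

When the model complies, `result.content` comes back parsed as a `WeatherReport` instead of free-form text, which tends to keep the output format stable across turns.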