Understanding LLM vs Model, Assistant vs Agent in Phidata

Hi Phidata community!

I’m exploring different approaches in Phidata and noticed some inconsistencies when working with LLMs vs models and assistants vs agents. Specifically, using from phi.llm.anthropic import Claude with from phi.assistant import Assistant works well for structured output, but switching to the model/agent equivalents doesn’t produce the same results.

Could someone help clarify:

  1. What’s the key difference between using LLM vs model in Phidata?
  2. When should we use Assistant vs Agent, and how do they handle structured outputs differently?
  3. Is there a recommended approach for consistent structured outputs across these different implementations?

Thanks for helping me understand these distinctions!

In the same vein, is there any overview documentation that answers this question and explains how and why your GitHub code is organized as it is? Is there some sort of architecture diagram or document, and a statement of implementation philosophy? Either would be very helpful both for navigating and for contributing to your open source code base. Finally, do you have an implementation priority plan of some sort? It’s clear that using and supporting OpenAI was your initial emphasis, and that you’re now working on including other third-party environments. In our case we are currently focused on Ollama, but that may not be one of your priorities. Feel free to answer here for others or contact me directly. Thanks.


Hi @Southern_Push2935
Thank you for reaching out and using Phidata! I’ve tagged the relevant engineers to assist you with your query. We aim to respond within 24 hours.
If this is urgent, please feel free to let us know, and we’ll do our best to prioritize it.
Thanks for your patience!

Hi @Southern_Push2935

  1. LLM is deprecated in favour of model. Any discrepancies between the two are unintended; please let us know where model falls short. Similarly, Agent has replaced Assistant. In our next major version both LLM and Assistant will be removed.
  2. You should use Agent. It supports structured outputs more accurately.
  3. The docs should cover the basics, but let us know what issues you are seeing.

Hi @bills
Our documentation should cover most of your questions. We have a section on Ollama there, and it is definitely one of our main implementations. With our next major release, planned for the next 2-3 weeks, the code and docs should be much clearer. In the meantime you should be able to succeed with an Agent using the Ollama model. Feel free to check out the cookbooks on Ollama as well: phidata/cookbook/providers/ollama at b65c7a73f3f98549f30d140114eb345d1e7b0eb4 · phidatahq/phidata · GitHub

Thanks for the clarification about LLM/model and Assistant/Agent deprecation. I’m experiencing some inconsistencies when trying to transition to the newer approach. Here’s a minimal example that works with Assistant:

from phi.assistant import Assistant
from phi.llm.anthropic import Claude
from pydantic import BaseModel, Field
from typing import List

class SectionReport(BaseModel):
    section_number: int = Field(...)
    section_title: str = Field(...)
    status: str = Field(...)
    errors: List[str] = Field(default_factory=list)  # default_factory avoids a shared mutable default
    total_errors: int = Field(default=0)

# This works perfectly with structured output
qc_assistant = Assistant(
    llm=Claude(id="claude-3-5-sonnet-20241022", temperature=0.1),
    description=description,      # description and instructions are defined elsewhere
    instructions=instructions,
    output_model=SectionReport,
)

Even with structured_outputs=True, the Agent implementation doesn’t produce the same structured output as the Assistant; it only emits a warning:

WARNING  Failed to convert response to pydantic model: 1 validation error for SectionReport
           Invalid JSON: expected ident at line 1 column 2 [type=json_invalid, input_value='I\'ll review this sectio...nd source requirements.', input_type=str]
             For further information visit https://errors.pydantic.dev/2.10/v/json_invalid
WARNING  Failed to convert response to response_model
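The warning indicates that Claude returned conversational prose ("I'll review this sectio…") rather than JSON, and phidata then tried to parse that prose into the Pydantic model. A minimal reproduction with just Pydantic (no phidata) shows the same json_invalid error:

```python
from typing import List
from pydantic import BaseModel, Field, ValidationError

class SectionReport(BaseModel):
    section_number: int = Field(...)
    section_title: str = Field(...)
    status: str = Field(...)
    errors: List[str] = Field(default_factory=list)
    total_errors: int = Field(default=0)

# A prose reply, like the one Claude produced, is not valid JSON
try:
    SectionReport.model_validate_json("I'll review this section and source requirements.")
except ValidationError as e:
    print(e.errors()[0]["type"])  # -> json_invalid

# A proper JSON reply parses cleanly into the model
report = SectionReport.model_validate_json(
    '{"section_number": 1, "section_title": "Intro", "status": "pass"}'
)
print(report.total_errors)  # -> 0
```

So the failure is upstream of Pydantic: the model was never coaxed into emitting JSON in the first place.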

Here’s my Agent implementation:

qc_agent = Agent(
    model=Claude(id="claude-3-5-sonnet-20241022", temperature=0.1),
    description=description,
    instructions=instructions,
    response_model=SectionReport,
    structured_outputs=True,
)

Could you point me to any specific documentation or examples for handling structured outputs with Agent?

Hi, I’d suggest removing structured_outputs=True. That flag only works with models that natively support structured output. We are aiming to improve this developer experience and documentation soon.
I can point you to the cookbook for Claude structured output.
