Asking to configure an OpenAI API key even though I am using a Groq model

On running the code, it asks me to configure an OpenAI API key. I am using the Groq model llama3-groq-70b-8192-tool-use-preview, and nowhere in the code is OpenAI being used. I then configured the OpenAI API key, but the phidata tool internally uses the GPT-4 model. Can we configure it to use GPT-4 mini or GPT-3 instead? And why do I need to configure an OpenAI API key when I am not even using it?


Can you please share your code so that we can figure out why it's using OpenAI?

Hi @rb26 ,

This is my Day 1 of working with Phidata, and I'm trying to build my first AI agent. I ran into a similar issue when I tried creating a team agent. Initially, I had specified the model name in the member agents but had not set a model for the team agent.

Later, when I added the model as Groq(id="") to the team agent as well, I started seeing results.


from phi.agent import Agent
from phi.model.groq import Groq
from phi.tools.duckduckgo import DuckDuckGo
from phi.tools.yfinance import YFinanceTools
from dotenv import load_dotenv


load_dotenv()

web_agent = Agent(
    name="Web Agent",
    model=Groq(id="llama-3.3-70b-versatile"),
    tools=[DuckDuckGo(search=True)],
    instructions=["Always include sources"],
    show_tool_calls=True,
    markdown=True,
)

finance_agent = Agent(
    name="Finance Agent",
    role="Get financial data",
    model=Groq(id="llama-3.3-70b-versatile"),
    tools=[YFinanceTools(stock_price=True, analyst_recommendations=True, stock_fundamentals=True)],
    show_tool_calls=True,
    markdown=True,
    instructions=["Use tables to display data."],
    debug_mode=True,
)

agent_team = Agent(
    model=Groq(id="llama-3.3-70b-versatile"),
    team=[web_agent, finance_agent],
    instructions=["Always include sources", "Use tables to display data"],
    show_tool_calls=True,
    markdown=True,
    debug_mode=True,
)

agent_team.print_response("Summarize analyst recommendations and share the latest news for NVDA", stream=True)


Awesome, yes, that looks like a correct configuration. I hope it's working for you now!

We plan to make both model and model ID explicitly required going forward to avoid issues like these.