Unable to Use Agent Without LLM in Workflow

Description

I am trying to use an agent (images_extractor) in my workflow for extracting images from a PDF. The agent is defined as:

```python
from phi.agent import Agent  # Phidata import; path may vary by version

images_extractor: Agent = Agent(
    tools=[extract_images_from_pdf]
)
```

However, when I attempt to invoke the workflow via the run method, I encounter an error related to OpenAI API key configuration. My workflow is structured as follows:

```python
def run(self, pdf_path: str, data_source: str, use_cache: bool = True) -> Iterator[RunResponse]:
    """Main workflow logic for image extraction."""

    logger.info(f"Generating Images from given PDF: {pdf_path}")

    # Step 1: Extract images
    extraction_results = self.images_extractor.run(pdf_path)
```

Error Message

```text
SmartCap\venv-smartcapAgents\Lib\site-packages\openai\_client.py", line 110, in __init__
    raise OpenAIError(
openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
```

Issue Details

  • I am not explicitly using LLMs in this workflow, but it seems the Agent class implicitly requires an LLM or a valid OpenAI API key to function.
  • My use case only involves invoking a tool (extract_images_from_pdf) without any reliance on OpenAI’s LLM features.

Question

Is there a way to configure the Agent to work without an LLM? How can I decouple the agent’s dependency on the OpenAI API key when only using tools? Alternatively, is there a workaround to bypass the need for an LLM in this scenario?

Any help or guidance would be appreciated!

Hey @atharvx, thanks for trying out Phidata!

Agents, by definition, exist to accomplish tasks using language models. If no model is passed, OpenAI is used as the default, which is why the client asks for an API key.
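
If you do want the agent to run with a model, you can pass one explicitly instead of relying on the default. Here's a rough sketch (the import path and model id below are placeholders and may differ in your Phidata version; `extract_images_from_pdf` is your existing tool):

```python
from phi.agent import Agent
from phi.model.openai import OpenAIChat  # import path may vary by Phidata version

# Pass a model explicitly; OPENAI_API_KEY still needs to be set for OpenAI models.
images_extractor: Agent = Agent(
    model=OpenAIChat(id="gpt-4o-mini"),  # placeholder model id
    tools=[extract_images_from_pdf],
)
```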

Since your use case doesn't need a model, I believe using the tool definition directly might be a better option.
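
For example, something like this in your workflow, skipping the Agent entirely (a minimal sketch assuming `extract_images_from_pdf` is a plain Python function and that `RunResponse` accepts a `content` argument; adjust the imports to your version):

```python
from typing import Iterator

from phi.workflow import RunResponse  # assumed import path; adjust to your version


# Inside your Workflow subclass:
def run(self, pdf_path: str, data_source: str, use_cache: bool = True) -> Iterator[RunResponse]:
    """Main workflow logic for image extraction, without an Agent."""
    logger.info(f"Generating Images from given PDF: {pdf_path}")

    # Step 1: Call the tool function directly; no model or OPENAI_API_KEY is involved
    extraction_results = extract_images_from_pdf(pdf_path)

    yield RunResponse(content=extraction_results)
```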