I encountered a "context_length_exceeded" error while using a Python function as an agent tool

I used the code below to fetch option chain data from Yahoo Finance.
It errored out with the following error:

openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 16385 tokens. However, your messages resulted in 301730 tokens (301701 in the messages, 29 in the functions). Please reduce the length of the messages or functions.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}

from phi.agent import Agent, RunResponse, Tool
#from phi.tools.openbb_tools import OpenBBTools
from phi.model.openai import OpenAIChat
from phi.model.google import Gemini
from phi.tools.yfinance import YFinanceTools
import yfinance as yf
from phi.tools.duckduckgo import DuckDuckGo
from dotenv import load_dotenv
import os
from typing import Dict, List, Iterator
import time
import pandas as pd
import json  # For JSON serialization
import httpx

# Load API key from .env or environment variables
load_dotenv()
openai_api_key = os.getenv("OPENAI_API_KEY")
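
# NOTE (assumption): convert_timestamps is called below but was not included in
# the original post. A minimal sketch of what it presumably does: recursively
# convert pandas Timestamp values to ISO strings so json.dumps can serialize them.
def convert_timestamps(obj):
    if isinstance(obj, list):
        return [convert_timestamps(item) for item in obj]
    if isinstance(obj, dict):
        return {key: convert_timestamps(value) for key, value in obj.items()}
    if isinstance(obj, pd.Timestamp):
        return obj.isoformat()
    return obj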

def fetch_option_chain(ticker_symbol: str, retries=3, delay=5) -> str:
    try:
        print(f"Fetching ticker data for: {ticker_symbol}")
        # Fetch the ticker data
        ticker = yf.Ticker(ticker_symbol)

        print("Fetching available expiry dates for options...")
        # Get available expiry dates for options
        expirations = ticker.options
        print(f"Available expirations: {expirations}")

        # Store option chain data
        option_chain_data = []

        for expiry in expirations:
            print(f"Fetching option chain for expiration date: {expiry}")
            attempt = 0
            while attempt < retries:
                try:
                    # Fetch options for each expiration date
                    opt = ticker.option_chain(expiry)
                    print(f"Storing calls and puts for {expiry}")
                    
                    # Append calls and puts to the list with expiry date
                    calls = opt.calls.to_dict(orient='records')
                    for call in calls:
                        call['type'] = 'call'
                        call['expirationDate'] = expiry

                    puts = opt.puts.to_dict(orient='records')
                    for put in puts:
                        put['type'] = 'put'
                        put['expirationDate'] = expiry

                    option_chain_data.extend(calls)
                    option_chain_data.extend(puts)

                    break  # Exit the retry loop on success
                except Exception as fetch_error:
                    attempt += 1
                    print(f"Retry {attempt}/{retries} for {expiry} due to: {fetch_error}")
                    time.sleep(delay)
                    if attempt == retries:
                        print(f"Failed to fetch data for expiration: {expiry}")

        print("Option chain data fetching complete.")
        print(f"Data type of option_chain_data: {type(option_chain_data)}")

        # Serialize to JSON string to ensure compatibility
        option_chain_data = json.dumps(convert_timestamps(option_chain_data))
        return option_chain_data
    except Exception as e:
        print(f"An error occurred: {e}")
        return json.dumps([])


agent = Agent(
    model=OpenAIChat(id="gpt-3.5-turbo"),
    tools=[fetch_option_chain], 
    show_tool_calls=True, 
    markdown=True)
agent.print_response("Find the top 5 Option chain contracts of Apple?", stream=True)

Hi @nirmalya
Thank you for reaching out and using Phidata! I’ve tagged the relevant engineers to assist you with your query. We aim to respond within 48 hours.
If this is urgent, please feel free to let us know, and we’ll do our best to prioritize it.
Thanks for your patience!

Hi @nirmalya

  1. I can suggest switching to gpt-4o-mini (it has a much larger context window). It's just better in general.
  2. The function result of fetch_option_chain is a massive list of dictionary objects. That result is sent back to the model so it can complete its task, and it blows past the context limit. You'd have to reduce the list to only the information the model needs to complete the task; see the sketch below.
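
For example (a minimal sketch, not from the original reply; KEEP_FIELDS and slim_contract are illustrative names), you could strip each contract dict down to a handful of fields before serializing inside fetch_option_chain:

KEEP_FIELDS = ["contractSymbol", "strike", "lastPrice", "volume",
               "openInterest", "impliedVolatility", "type", "expirationDate"]

def slim_contract(contract: dict) -> dict:
    # Keep only the columns the model needs to answer the question
    return {key: contract[key] for key in KEEP_FIELDS if key in contract}

# ...inside fetch_option_chain, just before json.dumps:
option_chain_data = [slim_contract(c) for c in option_chain_data]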

@Dirk,
I tried gpt-4o-mini and encountered the same error as before. Is it possible to do chunking through this agent? If yes, what should change in the code?

openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 298200 tokens (298171 in the messages, 29 in the functions). Please reduce the length of the messages or functions.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}

Hi
Yeah, you would still need to change the output of the function; it is just too much. Chunking doesn't really apply here. What you want to do is filter out the values in the dictionaries that are not needed, so the function output becomes smaller. A sketch of one way to do that is below.
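
One way to go further (a sketch only; fetch_top_option_contracts, top_n, and the column list are illustrative names, not from the thread): since the prompt only asks for the top 5 contracts, rank and truncate inside the tool so only a few small records are ever sent to the model.

import json
import yfinance as yf
from phi.agent import Agent
from phi.model.openai import OpenAIChat

def fetch_top_option_contracts(ticker_symbol: str, top_n: int = 5) -> str:
    # Look at the nearest expiration only and keep the most-traded contracts,
    # so the tool result stays far below the model's context limit.
    ticker = yf.Ticker(ticker_symbol)
    expiry = ticker.options[0]
    chain = ticker.option_chain(expiry)

    keep_cols = ["contractSymbol", "strike", "lastPrice", "volume", "openInterest"]
    contracts = []
    for frame, opt_type in ((chain.calls, "call"), (chain.puts, "put")):
        top = frame.sort_values("volume", ascending=False).head(top_n)[keep_cols]
        for row in top.to_dict(orient="records"):
            row["type"] = opt_type
            row["expirationDate"] = expiry
            contracts.append(row)

    # default=str guards against any values json can't serialize directly
    return json.dumps(contracts, default=str)

agent = Agent(
    model=OpenAIChat(id="gpt-4o-mini"),
    tools=[fetch_top_option_contracts],
    show_tool_calls=True,
    markdown=True)
agent.print_response("Find the top 5 Option chain contracts of Apple?", stream=True)

With roughly a dozen small dictionaries instead of the full chain, the tool output should be only a few hundred tokens.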

Thanks! I will try it.