I'm facing a challenge where I'm asking the AI (Navigator, Argon, Neon, and OpenAI models) to parse through 10 different websites and, on each website, conduct 15 actions to look up data. With all models, it always stops short after one or two websites, completing only one or two of the requested actions. The instructions explicitly ask it to go through each and every website rather than stopping partway. When I look at the steps taken, it runs through them correctly for the first two websites and a few lookups, but then quits as if the task were complete when it hasn't finished all the steps. Any suggestions on how to debug this further?
Break your large task into smaller batches: processing 10 websites with 15 actions each may exceed model limits. Try processing 2-3 websites at a time instead of all 10 at once.
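As a rough illustration, the batching idea can be sketched like this. Note this is a hypothetical outline, not real Claygent code: `run_agent_on_site` and the website list are placeholders for however you actually invoke the model.

```python
# Hypothetical sketch: split the full job into small batches so each
# AI run stays well within model limits, instead of one giant prompt
# covering all 10 websites and 150 actions at once.

def chunk(items, size):
    """Yield successive batches of `size` items from `items`."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

websites = [f"https://example{n}.com" for n in range(10)]  # placeholder list

results = []
for batch in chunk(websites, 3):  # 2-3 websites per run
    for site in batch:
        # Each site gets its own focused prompt covering its 15 actions.
        # `run_agent_on_site(site)` would go here; this is a stand-in.
        results.append(f"processed {site}")
```

The point of the chunking is that each model invocation sees a task small enough to finish before the model decides it is "done."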
Check your API tier limits: lower tiers require smaller batches (around 50 rows on Anthropic tier 1). Also monitor your usage and API quotas, since hitting them can cause premature task termination.
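If rate limits or quota errors are silently cutting runs short, a simple retry with exponential backoff can keep long jobs alive. This is a generic sketch, not a Claygent or Anthropic API; `RuntimeError` stands in for whatever rate-limit exception your client actually raises:

```python
import time

def call_with_backoff(fn, max_retries=5, base_delay=1.0):
    """Call `fn`, retrying with exponential backoff on a rate-limit error.

    `RuntimeError` is a placeholder for your API client's real
    rate-limit exception class.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except RuntimeError:
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return fn()  # final attempt; let any error propagate to the caller
```

Wrapping each per-website lookup in something like this makes it obvious (via the eventually raised error) when the task ended because of a quota rather than because the model thought it was finished.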
Consider using Claygent Neon for better task completion - it's specifically designed for agent tasks and excels at data extraction, though other models like GPT-4 offer enhanced reasoning. [1]
Start with smaller test runs to ensure smooth scaling before attempting the full workflow.
If you still need help with debugging your AI task, I’m here to assist. Would you like to provide more details about what you’re trying to achieve or any specific issues you’re encountering?
