Hey, I'm getting a lot of 'Error' outputs on my AI and GPT tasks. I've never experienced this before. Can someone assist?
For AI and GPT task errors, try these solutions:
- Clear your browser cache and refresh with Cmd/Ctrl + Shift + R.
- Check your OpenAI API key - errors often occur due to an incorrect key. Find your key at OpenAI API Keys and update it in Clay.
- Verify your API limits - you need 30,000 TPM for ChatGPT columns and 450,000 TPM for Claygent. Check your usage tier at OpenAI API Usage Limits.
- For processing delays, make your queries specific, break large requests into smaller batches, and monitor your API quotas.
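Before pasting a key into Clay, a cheap local sanity check can catch obviously malformed values (a minimal sketch; the `sk-` prefix is OpenAI's key format convention, and a key that passes can still be revoked or over quota, so the real test is a small API request):

```python
def looks_like_openai_key(key: str) -> bool:
    """Cheap local sanity check for an OpenAI API key.

    Only catches obvious problems (empty value, wrong prefix, embedded
    whitespace); it does NOT verify the key against OpenAI's servers.
    """
    key = key.strip()
    return key.startswith("sk-") and len(key) > 20 and " " not in key
```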
Hey Petar R., that usually happens when there's a formatting issue, or the input being passed to the AI is empty or not structured as expected. Try checking whether the fields being referenced (like columns or variables) have valid data. Also, refreshing the table or duplicating the block sometimes clears it up. Let me know if you're still stuck - happy to help more!
You're right about the common causes! For AI and GPT task errors, try these steps:

Quick fixes:
- Clear browser cache and refresh with Cmd/Ctrl + Shift + R
- Verify your OpenAI API key is correct at OpenAI API Keys

Check your data:
- Ensure referenced columns contain valid, non-empty data
- Verify the input data format is correct
- Make search queries specific and well-defined

API limits:
- Check your OpenAI usage tier and quotas - you need sufficient TPM limits

If errors persist after these steps, the issue may require further investigation of your specific setup.
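The "check your data" step above can be sketched as a pre-filter that drops rows whose referenced fields are blank before they reach the AI column (a plain-Python illustration of the idea, not how Clay does it internally; the field names are hypothetical):

```python
def rows_with_valid_input(rows, required_fields):
    """Keep only rows where every referenced field is present and non-empty.

    Rows with missing or blank fields are the ones that typically come back
    as 'Error', so filtering them out first avoids wasted API calls.
    """
    return [
        row for row in rows
        if all(str(row.get(field, "")).strip() for field in required_fields)
    ]

rows = [
    {"website": "https://website.com"},
    {"website": ""},       # blank input -> would likely error
    {"name": "Acme"},      # missing field -> would likely error
]
print(rows_with_valid_input(rows, ["website"]))  # keeps only the first row
```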
Abdul S., thanks. When I re-run rows one by one I get good output; the errors show up when I run 100 rows at once. Might need to clear cache. The input is a website column and it looks fine, e.g. https://website.com
Running 100 rows at once can cause processing delays due to API tier limits. On lower API tiers, process data in smaller batches of around 50 rows. For better performance, break large requests into batches and monitor your API quotas. Always test with 5-10 rows first, then scale to 50, then 500, so you catch issues early.
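The batching advice above can be sketched in a few lines (a minimal illustration, assuming your rows sit in a plain list; the right batch size and any pause between batches depend on your API tier):

```python
def batches(rows, size=50):
    """Yield successive fixed-size batches from a list of rows."""
    for start in range(0, len(rows), size):
        yield rows[start:start + size]

rows = list(range(100))
for batch in batches(rows, size=50):
    print(len(batch))  # 100 rows -> two batches of 50
```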
