For Claygent processing delays, try these solutions:
• Break into smaller batches - Process fewer rows at a time instead of 500 at once
• Check your OpenAI API limits - Claygent requires 450,000 TPM for reliable performance. Verify your limits at https://platform.openai.com/settings/organization/limits
• Optimize your prompts - Ensure search queries are specific and well-defined
• Try a hard refresh - Performance issues can occur with large tables: Ctrl+Shift+R (Windows) or Cmd+Shift+R (Mac)
Even on tier 5, processing speed depends on your OpenAI API tier and query complexity.
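If you'd rather confirm those limits from code than from the dashboard, here's a minimal sketch that reads the rate-limit headers OpenAI returns with every API response. It assumes the `requests` library, an `OPENAI_API_KEY` environment variable, and `gpt-4o-mini` as a stand-in model; adjust to whatever model Claygent is configured to use. Claygent's 450,000 TPM requirement should sit well below the `x-ratelimit-limit-tokens` value.

```python
import os
import requests

# Minimal request whose only purpose is to read the rate-limit headers
# OpenAI attaches to every response (x-ratelimit-limit-tokens is the TPM ceiling).
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4o-mini",  # example model; use any model your key can access
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 1,
    },
    timeout=30,
)
resp.raise_for_status()

print("TPM limit:     ", resp.headers.get("x-ratelimit-limit-tokens"))
print("TPM remaining: ", resp.headers.get("x-ratelimit-remaining-tokens"))
print("RPM limit:     ", resp.headers.get("x-ratelimit-limit-requests"))
```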
150,000,000 is our TPM
With 15,000,000,000 TPD
With 150,000,000 TPM, your OpenAI API tier far exceeds Claygent's requirement of 450,000 TPM. Since API limits aren't the issue, the slowness is likely due to query complexity or batch size. Try processing smaller batches (50-100 rows) and ensure your prompts are specific and well-defined.
With 15 billion TPD (tokens per day), your OpenAI API limits are extremely high and definitely not causing the slowness. Since you have abundant API capacity, focus on optimizing your Claygent setup:
• Process smaller batches - Try 50-100 rows instead of 500
• Refine your prompts - Ensure search queries are specific and well-defined
• Break large requests into smaller chunks - This improves processing efficiency
The delay is likely due to query complexity or batch size rather than API limitations.
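To put those numbers in perspective, here is a quick back-of-the-envelope check using the 450,000 TPM figure above and the limits you quoted:

```python
claygent_tpm_needed = 450_000        # TPM Claygent needs for reliable performance
your_tpm = 150_000_000               # your account's tokens-per-minute limit
your_tpd = 15_000_000_000            # your account's tokens-per-day limit

# Minute-level headroom: ~333x what Claygent requires
print(f"TPM headroom: {your_tpm / claygent_tpm_needed:.0f}x what Claygent needs")

# Even running flat-out at Claygent's required rate for 24 hours:
tokens_per_day_at_required_rate = claygent_tpm_needed * 60 * 24   # 648,000,000
print(f"Daily usage at that rate: {tokens_per_day_at_required_rate:,} "
      f"of {your_tpd:,} TPD ({tokens_per_day_at_required_rate / your_tpd:.1%})")
```

In other words, even at Claygent's full required rate you would use roughly 4% of your daily token allowance, so the API quota is not the bottleneck.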
We have to run 500,000 rows - I can't divide into batches of 50s and 100s
For processing 500,000 rows with Claygent, you can use larger batch sizes than 50-100 rows. Try these approaches:
• Use "Run Starting from a Specific Row" - Right-click the column > Run Column > Choose Number of rows to run, then specify larger batches (like 1,000-5,000 rows) and set your starting row
• Process in manageable chunks - Break your 500,000 rows into larger batches of several thousand rows each rather than tiny batches
• Ensure specific, well-defined prompts - This reduces processing complexity and improves speed
With your high OpenAI API limits, you can handle much larger batches than 50-100 rows while still maintaining efficiency.
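If you want to plan those chunked runs up front, a hypothetical helper like the sketch below splits the 500,000 rows into start-row/size pairs you can feed into "Run Starting from a Specific Row". The 5,000-row chunk size is just an example; tune it to whatever your table handles comfortably.

```python
def plan_batches(total_rows: int, batch_size: int, start_row: int = 1):
    """Yield (start_row, rows_to_run) pairs covering the whole table."""
    row = start_row
    remaining = total_rows
    while remaining > 0:
        rows_to_run = min(batch_size, remaining)
        yield row, rows_to_run
        row += rows_to_run
        remaining -= rows_to_run

# Example: 500,000 rows in chunks of 5,000 -> 100 runs
for start, count in plan_batches(500_000, 5_000):
    print(f"Run {count:,} rows starting from row {start:,}")
```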
Source: Run Options