Hey team, we are seeing very slow OpenAI responses in Clay on a client project. The org is Tier 5 and rate limits look high, and we are using gpt-4o-mini, so it does not look like we are hitting TPM or RPM limits. Has anyone run into this and found a fix in Clay or on the API side?
Even with Tier 5 and high rate limits, slow OpenAI responses can happen due to a few things:
First, double-check your API key is correct at https://platform.openai.com/api-keys and update it in Clay if needed.
If you see yellow warnings in Clay, that indicates the action may run slowly due to low limits on your account for that specific model, even if your overall tier looks good.
For reliable performance, Clay recommends 30,000 TPM for content generation columns and 450,000 TPM for web research/Claygent workflows. You can verify your specific model limits at https://platform.openai.com/settings/organization/limits.
The slowness might be model-specific rate limiting rather than your overall account tier. Try checking your usage and limits for gpt-4o-mini specifically in your OpenAI dashboard.
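Beyond the dashboard, OpenAI also reports per-model limits in the `x-ratelimit-*` headers it returns with every chat-completions response. A minimal sketch for reading them (the header names are from OpenAI's rate-limit docs; the parsing helper and the usage shown in the comment are our own, not a Clay feature):

```python
def parse_rate_limit_headers(headers):
    """Extract OpenAI's documented x-ratelimit-* response headers.

    Returns a dict like {"limit_tokens": 150000000, ...}; headers
    that are absent are simply omitted.
    """
    keys = (
        "x-ratelimit-limit-requests",
        "x-ratelimit-limit-tokens",
        "x-ratelimit-remaining-requests",
        "x-ratelimit-remaining-tokens",
    )
    out = {}
    for key in keys:
        if key in headers:
            # Strip the "x-ratelimit-" prefix and normalise to snake_case.
            out[key.removeprefix("x-ratelimit-").replace("-", "_")] = int(headers[key])
    return out

# With the official Python SDK you can surface the raw headers like this
# (requires `pip install openai` and a live API key, so it is not run here):
#
#   from openai import OpenAI
#   client = OpenAI()
#   raw = client.chat.completions.with_raw_response.create(
#       model="gpt-4o-mini",
#       messages=[{"role": "user", "content": "ping"}],
#   )
#   print(parse_rate_limit_headers(raw.headers))
```

If `remaining_tokens` is dropping close to zero mid-run, you are throttled on that model even when the account tier looks fine.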
Did that answer your question?
If you still need help with the slow OpenAI responses, I’m here to assist further. Would you like to provide more details about what you’re trying to resolve or understand?
we are using 4o mini with 1,500,000 / 150,000,000 TPM and there is a green tick next to the api in clay, no warning
With those high TPM limits and green status, the slowness likely isn't rate limiting. A few other things to check:
- OpenAI's side: Even with high limits, OpenAI can have regional latency or model-specific performance issues that vary by time of day
- Request complexity: Large prompts or complex workflows can slow responses regardless of rate limits
- Clay processing: The delay might be in Clay's processing rather than OpenAI's API response time
Try testing the same prompts directly in OpenAI's playground to see if you get similar slow responses; that will help isolate whether it's an OpenAI issue or something in Clay's implementation.
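If you'd rather measure from a script than eyeball the playground, a small timing wrapper isolates raw API latency from anything Clay adds on top. The helper is generic; the OpenAI usage in the comment is an assumption about your setup:

```python
import time

def time_call(fn, *args, **kwargs):
    """Run fn(*args, **kwargs) and return (result, elapsed_seconds).

    Uses a monotonic high-resolution clock, so wall-clock adjustments
    can't skew the measurement.
    """
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Example against the OpenAI SDK (needs `pip install openai` and an API
# key, so it is left as a comment):
#
#   from openai import OpenAI
#   client = OpenAI()
#   _, elapsed = time_call(
#       client.chat.completions.create,
#       model="gpt-4o-mini",
#       messages=[{"role": "user", "content": "the same prompt you run in Clay"}],
#   )
#   print(f"direct API latency: {elapsed:.2f}s")
```

Run it a few times at the same time of day the Clay table is slow; if direct calls come back fast, the bottleneck is on the Clay side.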
You could also try switching to gpt-4o (regular) temporarily to see if performance improves, which might indicate a model-specific issue with 4o-mini.
in sandbox it seems to run fine
That's a helpful clue! If OpenAI runs fine in sandbox mode but slowly in your main table, it suggests the issue is specific to your production environment rather than OpenAI itself.
This could be due to:
- Higher data volume or complexity in your production table
- Different prompt structures or input data between sandbox and production
- Resource allocation differences between the two environments
Try comparing the exact same prompts and data between sandbox and production to isolate what's causing the performance difference.
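When the prompts look identical by eye, a quick diff can still catch invisible differences (trailing whitespace, smart quotes, an extra merge field) between sandbox and production inputs. A sketch using the standard library; `sandbox_prompt` and `prod_prompt` are placeholders for your own strings:

```python
import difflib

def diff_prompts(a, b):
    """Return unified-diff lines between two prompt strings, or [] if identical."""
    return list(difflib.unified_diff(
        a.splitlines(), b.splitlines(),
        fromfile="sandbox", tofile="production", lineterm="",
    ))

# Hypothetical example: the production prompt has a trailing space.
sandbox_prompt = "Summarise the company in one sentence."
prod_prompt = "Summarise the company in one sentence. "

for line in diff_prompts(sandbox_prompt, prod_prompt):
    print(line)
```

An empty result means the strings really are byte-for-byte identical, which rules prompt drift out and points back at the table itself.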
i ran the same prompts on similar volume in other tables and it worked just fine
Since the same prompts work fine in other tables with similar volume, this points to a table-specific performance issue. This can happen when there are large tables within the workbook and multiple actions running simultaneously, causing latency as cell data takes longer to load.
Try a hard refresh first:
- Mac: Command + Shift + R
- Windows: Ctrl + Shift + R
You could also test using sandbox mode on this specific table; it creates a locked copy with selected test rows, so you can see whether the issue persists in a cleaner environment.
it happens across this entire specific workspace
If it's happening across your entire workspace, this suggests a workspace-level performance issue rather than table-specific problems. This can occur when there are multiple large tables and simultaneous actions running across the workspace, causing resource contention.
Try these steps:
- Hard refresh your browser (Cmd+Shift+R on Mac, Ctrl+Shift+R on Windows)
- Check if there are other heavy workflows running simultaneously across different tables in the workspace
- Monitor your OpenAI API usage limits to ensure you're not hitting workspace-level quotas
Since this affects the entire workspace while the same prompts run fine in tables elsewhere, it's likely related to the data volume or concurrent operations specific to this workspace.
