hello! I'm experiencing a lot of slowness on my AI enrichments; tables that took a couple of hours to run are now at 10h+ and still not finished. I contacted support through the website more than 2h ago and haven't heard back
AI enrichments in Clay can run much slower than usual due to rate limits, especially if you're using heavier prompts, running large tables, or on a lower API tier. Recent updates may have increased token usage per run, which can further slow processing. To resolve this:
Break large jobs into smaller batches (e.g., 50 rows at a time if you're on a lower Anthropic API tier); a minimal batching sketch follows this list.
Monitor which columns are most affected and standardize prompt settings if possible.
Check your API usage limits and consider upgrading your API tier if you frequently hit limits.
Start with smaller test runs to ensure smooth scaling.
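For example, if you're calling the Anthropic API directly outside Clay, batching might look like the sketch below. The rows list, prompt, batch size, and pause are all illustrative assumptions; tune them to your tier's actual limits:

```python
# Minimal sketch: processing rows in small batches to stay under a
# lower-tier Anthropic rate limit. The table data and prompt are
# hypothetical placeholders, not Clay internals.
import time
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

rows = [{"company": f"Example Co {i}"} for i in range(200)]  # stand-in data
BATCH_SIZE = 50  # e.g., 50 rows at a time on a lower API tier

for start in range(0, len(rows), BATCH_SIZE):
    batch = rows[start:start + BATCH_SIZE]
    for row in batch:
        message = client.messages.create(
            model="claude-3-5-sonnet-latest",
            max_tokens=256,
            messages=[{"role": "user",
                       "content": f"Summarize what {row['company']} does."}],
        )
        row["enrichment"] = message.content[0].text
    if start + BATCH_SIZE < len(rows):
        time.sleep(60)  # pause between batches to keep requests-per-minute low
```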
For persistent issues, you can request human assistance by clicking on "Talk to a person" in this chat.
Is that what you were looking for?
You're not alone: AI enrichments can slow down when Clay's OpenAI queue is busy or when a large batch of rows is running at once. If your table normally finishes in a few hours but is now taking 10+, it's likely temporary performance throttling on Clay's backend. You did the right thing by contacting support; they usually prioritize these cases quickly. In the meantime, you can try the following (a retry-with-backoff sketch follows the list):
Duplicating the table and running a smaller batch (e.g., 500–1,000 rows).
Switching the model to GPT-4o mini or Claude 3.5 Sonnet if available — they tend to process faster.
Checking that your “Run If” conditions aren’t rerunning all previous rows unnecessarily.
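If you do hit throttling, a retry-with-backoff pattern keeps a run moving instead of failing outright. Here's a minimal sketch against the Anthropic Python SDK; the prompt, model alias, and retry counts are illustrative assumptions:

```python
# Minimal sketch: retrying one enrichment call with exponential backoff
# when the provider returns a rate-limit error (HTTP 429).
import time
import anthropic

client = anthropic.Anthropic()

def enrich_with_backoff(prompt: str, max_retries: int = 5) -> str:
    delay = 2.0
    for attempt in range(max_retries):
        try:
            message = client.messages.create(
                model="claude-3-5-sonnet-latest",
                max_tokens=256,
                messages=[{"role": "user", "content": prompt}],
            )
            return message.content[0].text
        except anthropic.RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            time.sleep(delay)  # wait, then retry with a longer delay
            delay *= 2
    raise RuntimeError("unreachable")
```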
If you want, I can help you check what’s likely causing the bottleneck in your specific setup — want to share which AI column or model you’re using?
Extreme slowness in Clay AI enrichments is often caused by rate limits, especially if you're running large tables, using heavier prompts, or on a lower API tier. Recent updates (such as the move to Claude 3.7 Sonnet) have increased token usage per run, which can further slow processing. To address this:
Break large jobs into smaller batches (e.g., 50 rows at a time if using a lower Anthropic API tier) to avoid hitting rate limits.
Monitor which columns are most affected and standardize prompt settings across columns for consistency.
Check your API usage limits and tier—upgrading may help if you frequently hit limits.
Start with smaller test runs to ensure smooth scaling before running large jobs.
Audit your column configurations to ensure prompt settings align with your intended outcomes.
Monitor system updates for new features like retries and pacing, which help manage rate limits and improve stability (a client-side pacing sketch follows this list).
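For the pacing idea in the last point, here's what a client-side version might look like if you're driving the API yourself. The 50 requests-per-minute budget is an assumed lower-tier figure, not a Clay or Anthropic guarantee:

```python
# Minimal sketch: client-side pacing, spacing calls evenly so a run
# never exceeds a requests-per-minute budget.
import time

REQUESTS_PER_MINUTE = 50
MIN_INTERVAL = 60.0 / REQUESTS_PER_MINUTE  # seconds between consecutive calls

def run_paced(tasks):
    """Run each zero-argument task, pausing so calls stay under the RPM budget."""
    results = []
    last_call = 0.0
    for task in tasks:
        wait = MIN_INTERVAL - (time.monotonic() - last_call)
        if wait > 0:
            time.sleep(wait)
        last_call = time.monotonic()
        results.append(task())
    return results

# Hypothetical usage, binding each row at definition time:
#   run_paced([lambda r=row: enrich(r) for row in rows])
```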
If you need further help, you can request human assistance by clicking "Talk to a person" in this chat.
Is that what you were looking for?
I understand the frustration with the slow enrichments and hope the suggestions provided were helpful. Would you like to share more details about your AI column or model so I can assist you further?
thanks Kabeer A.!
You're welcome! If you have any more questions or need further assistance, feel free to ask. I'm here to help.
Hari K., can you check this please? I shared the URL with you on the support chat through the website. My table is "frozen"; the enrichments aren't going through
Extreme slowness in AI enrichments is often due to API tier limits or rate limiting, especially if you’re processing large tables or using a lower API tier. To improve performance, try breaking your data into smaller batches (e.g., 50 rows at a time for lower Anthropic tiers) and monitor your API usage and tier settings. If you need further help, you can request human assistance by clicking "Talk to a person" in this chat.
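If you want to check how close a run is to your Anthropic limits, the API reports them in response headers. Here's a minimal sketch using the Python SDK's raw-response mode; the header names follow Anthropic's documented conventions, so treat them as an assumption to verify against your account:

```python
# Minimal sketch: inspecting Anthropic's rate-limit response headers to
# see how much request/token quota a tier has left.
import anthropic

client = anthropic.Anthropic()

raw = client.messages.with_raw_response.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=16,
    messages=[{"role": "user", "content": "ping"}],
)
print("requests remaining:", raw.headers.get("anthropic-ratelimit-requests-remaining"))
print("tokens remaining:", raw.headers.get("anthropic-ratelimit-tokens-remaining"))
message = raw.parse()  # the usual Message object is still available
```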
Is that what you were looking for?
If you still need help with the slowness in your AI enrichments, I’m here to assist. Would you like to provide more details about your current setup or any specific issues you’re encountering?
