I'm finding Claygent often fails or returns incomplete results on the first run, but then works perfectly when I rerun the exact same row without changing anything. Might take 2 or 3 runs to get a proper response. Is this a known issue? It's costing me a lot in API use + time having to rerun rows repeatedly.
Claygent can sometimes return incomplete or failed results on the first run due to several factors:
Prompt specificity and input quality: Vague or broad prompts, or missing/invalid input data, can lead to inconsistent or partial results. Ensuring your prompts are clear and your data is well-formatted helps improve accuracy [1].
Processing delays and rate limits: Clay enforces rate limits, especially on heavier prompts or lower API tiers. If you’re running many rows or complex enrichments, you may hit these limits, causing failures or incomplete runs. Clay is actively improving retry logic and pacing to reduce these issues, but reruns may still be needed in some cases (there’s a retry sketch after these points).
Redundant or repeated actions: If enrichments are triggered multiple times (manually or automatically), this can lead to extra credit usage and repeated runs. Monitoring your workflow and avoiding unnecessary reruns can help manage credit consumption.
These behaviors are known and can be mitigated by refining prompts, breaking up large tasks, and monitoring your usage and workflow setup. If you notice consistent failure patterns, updating your prompts to address those cases can significantly reduce reruns and improve reliability.
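If you’re calling the model with your own key outside Clay, a retry wrapper with backoff is the standard way to absorb transient failures like these. Here’s a minimal sketch, assuming enrich_fn returns a dict of section names to text and that an all-"Unknown" dict means a dud; both are assumptions based on this thread, not Clay’s internals:

```python
import random
import time

def run_with_retry(enrich_fn, row, max_attempts=3, base_delay=2.0):
    """Call an enrichment function and retry with exponential backoff
    when the result looks like a dud (every field missing or "Unknown").

    Assumes enrich_fn(row) returns a dict mapping section names to text.
    """
    result = None
    for attempt in range(1, max_attempts + 1):
        result = enrich_fn(row)  # e.g. a wrapper around your OpenAI call
        # Keep the result as soon as any field has a real value.
        if result and any(v not in (None, "", "Unknown") for v in result.values()):
            return result
        if attempt < max_attempts:
            # Exponential backoff with jitter, so retries don't land
            # in bursts that re-trip provider rate limits.
            time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, 1))
    return result  # last attempt, even if it's still a dud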
Is that what you were looking for?
Kind of, but that doesn't really help. If rate limiting is the problem, then why isn't there a rate limit option?
I understand your frustration with the rate limiting and reruns. I’ll connect you to a human agent who can provide more detailed assistance.
Our support team has got your message and we'll get back to you soon!
If you’re dealing with a specific table, drop the URL below so we can help you quicker. Otherwise, someone from our team will be in touch soon!

Hey there, thanks for the table link. I'm seeing several Claygent columns there, all using OpenAI models with a private key.
Did you see this "failure on first run" issue in all three? And did this seem to affect just about every cell, or only a few oddballs?
I'd be really curious to know any specific rows/records that took a few iterations to get a suitable result.
I've experienced it a lot over the past 6 months or so. The most affected column in this table was "Growth Challenges Insights".
First run: approx 3k "dud" responses
Ran again: approx 1.4k "dud" responses
Ran again: approx 800 "dud" responses
Ran again: approx 400, etc.
Thanks. I had a suspicion it might be that one – the green circle / orange triangle / red square icons indicate much more variance in the confidence level.
Can you tell me a bit more about what a "dud" response looks like? Is it just a low-effort "Unknown", where a later run reaches a firmer conclusion? Or is it an indication of an error?
Not an error. A low-effort result like:
Overview: Unknown
Recent News: None found
Company Growth Pains: Unknown
Growth Signals: Unknown
Buyer Roles: Unknown
Messaging Angles: Unknown
But I could run it again (might take 2 or 3 retries) and it would complete the research as expected.
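In case it's useful, this is roughly how the remaining duds could be counted from a CSV export of the table. The export filename and the "three or more Unknown sections" threshold are placeholders; the column name and markers match the example above:

```python
import pandas as pd

# Section markers seen in the low-effort "dud" responses above.
DUD_MARKERS = [
    "Overview: Unknown",
    "Company Growth Pains: Unknown",
    "Growth Signals: Unknown",
    "Buyer Roles: Unknown",
]

def looks_like_dud(cell) -> bool:
    """Treat a cell as a dud if three or more sections came back Unknown."""
    text = str(cell)
    return sum(marker in text for marker in DUD_MARKERS) >= 3

df = pd.read_csv("claygent_export.csv")  # hypothetical export filename
duds = df[df["Growth Challenges Insights"].apply(looks_like_dud)]
print(f"{len(duds)} dud rows out of {len(df)} still need a rerun")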
Hey there Joseph, jumping in here for Mark. Another question for you: are there specific models you're seeing this issue occur with more frequently than others? Or is it consistent across all the models here?
