It looks like the Gemini API was fixed. Can anyone else confirm?
Gemini API issues can be intermittent due to Google's shared capacity model - even when your own limits aren't hit, you might get rate limited when Google's overall capacity is constrained.
We recently added the Gemini 2.0 Flash and Flash Lite models, which should provide better performance. [1] If you're still seeing issues, try switching between the different Gemini model options or check if the problem persists across multiple runs.
Is that what you were looking for?
Human
Got it — I’m connecting you to a human teammate now.
Our support team has received your message and we'll get back to you soon!
If you’re dealing with a specific table, drop the URL below so we can help you quicker. Otherwise, someone from our team will be in touch soon!

Hey,
Yes, there were a few tweaks on our side. If you’re using Clay’s Gemini integration, you should see fewer issues now. If you’re using your own Gemini API key, you may still run into it depending on usage and limits.
Let me know if you have more questions.
We haven't heard back from you here, so we're going to go ahead and close this thread out.
Still need help here? Reply back and someone will jump back in.
Have a question that's not related to this thread? We recommend kicking off a new ticket in the support channel!
Bo: It's not working for me today, but it was on Friday.
I'm using my own API key
Hey,
If you’re using your own Gemini API key, this behavior is expected right now.
Gemini has very low rate limits, and those limits are shared across their infrastructure. When you run anything that involves online search or data retrieval, it relies on their retrieval system, which hits those limits fast. Once you cross that threshold, requests will start failing or stalling, even if it worked earlier in the week.
We’re already operating at the maximum throughput Gemini allows on our side. There isn’t a setting in Clay to bypass this when you bring your own key. The only difference you may notice is that Clay-managed Gemini keys tend to be more stable because we batch and throttle them more aggressively.
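In the meantime, if you want to soften the failures on your own key, here's a minimal sketch of retrying with exponential backoff when Gemini returns a 429. It assumes the google-generativeai Python SDK; the model name, key placeholder, and the generate_with_backoff helper are illustrative, not part of Clay:

```python
import time
import random

import google.generativeai as genai
from google.api_core import exceptions as gexc

# Placeholder key and model name; swap in your own values.
genai.configure(api_key="YOUR_GEMINI_API_KEY")
model = genai.GenerativeModel("gemini-2.0-flash")


def generate_with_backoff(prompt: str, max_retries: int = 5) -> str:
    """Call Gemini, backing off exponentially when the key's rate limit is hit."""
    for attempt in range(max_retries):
        try:
            return model.generate_content(prompt).text
        except gexc.ResourceExhausted:
            # 429 from Gemini: wait longer each attempt, with jitter, then retry.
            delay = (2 ** attempt) + random.uniform(0, 1)
            time.sleep(delay)
    raise RuntimeError("Gemini rate limit still hit after retries")


print(generate_with_backoff("Summarize rate limiting in one sentence."))
```

This won't raise your limits, but it spaces out retries instead of hammering the same quota window.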
We’re actively working with Gemini on this, and we’ll be pushing improvements after the webinar window when they raise limits on their end.
Let me know if you have more questions.
Bo: My problem is that I've already spent over 5,000 credits, while it would have cost me less than $50, maybe $25, in my API credits. Is there any way to reimburse the credit spend while this issue is happening?
I'll waste another 3-5k credits this week while the API is not working.
Hey,
I get the frustration here. Short answer: we can’t reimburse credits for this.
What’s happening isn’t a Clay-side issue. Our API is running normally and already operates with much higher rate limits than most personal API keys. The credit usage you’re seeing is coming from retries and slower responses caused by your own API key limits being too low, not from a failure on our end.
When requests take too long or get throttled upstream, Clay still has to process them, and those runs consume credits. That’s why the spend looks disproportionate compared to what you’d expect with a properly scaled API setup.
The fix here is to request higher rate limits on your API key so calls don’t stall or retry. Once that’s in place, the credit burn should normalize.
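Until a quota increase comes through, a simple client-side pacer can also keep calls under your key's requests-per-minute budget so they don't stall and retry. This is just a sketch: the 10 RPM figure is an assumption, and generate_with_backoff is the hypothetical retry helper from the earlier sketch, so plug in whatever your key actually allows:

```python
import time
from collections import deque


class RpmThrottle:
    """Blocks just long enough to keep calls under a requests-per-minute budget."""

    def __init__(self, max_per_minute: int = 10):  # assumed limit; use your key's real RPM
        self.max_per_minute = max_per_minute
        self.call_times = deque()

    def wait(self) -> None:
        now = time.monotonic()
        # Drop timestamps older than the 60-second window.
        while self.call_times and now - self.call_times[0] > 60:
            self.call_times.popleft()
        if len(self.call_times) >= self.max_per_minute:
            # Sleep until the oldest call ages out of the window.
            time.sleep(60 - (now - self.call_times[0]))
        self.call_times.append(time.monotonic())


throttle = RpmThrottle(max_per_minute=10)


def call_gemini_paced(prompt: str) -> str:
    throttle.wait()
    # generate_with_backoff is the retry helper sketched earlier in this thread.
    return generate_with_backoff(prompt)
```

Pacing requests below the key's limit keeps responses fast and avoids the retry loops that drive up credit usage.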
Let me know if you have more questions.
We never even reached 10% of our limits.
