Hey team, what's with Clay's reported token usage sometimes being 100x the actual usage (measured with an external token counter tool)? I was also wondering why the same LLM query using our OpenAI key in Clay is vastly more expensive than using the same model via OpenRouter.
Clay's token estimates can be conservative, especially after recent updates for Claude 3.7 Sonnet that doubled the estimated token usage per run to minimize failures and enhance stability.
For cost comparison: using your own OpenAI API key is cheaper (around 0.003 cents vs 1 cent per call), but Clay's shared key is faster. The variable credit system means some prompts may require additional credits based on token count, but this only applies when needed, so not every row incurs higher costs.
The conservative estimates help prevent rate limit issues but can make costs appear inflated compared to external tools.
Did that explanation clear up your concerns?
nope
I'm sorry for the confusion. I'll connect you to a human agent who can assist you further with this issue.
Our support team has received your message and we'll get back to you soon! We're currently outside of our standard office hours (9 AM to 9 PM EST, Monday through Friday), so it may take a little longer for us to respond.
If you’re dealing with a specific table, drop the URL below so we can help you quicker. Otherwise, someone from our team will be in touch soon!

Hi, try testing with 'Run with Logs' in Clay to see real token usage per step. You'll spot where the extra tokens come from and can optimize your setup more easily.
Hi Hunain. Where's that option?
Hey Barnaby, when you run an AI column, you should be able to click into the cell and see the token usage. Please let me know if you have any other questions.
Amit K., you've literally told me nothing new.
How would I have written my original message had I not known what you just told me?
Hey Barnaby, apologies. Clay's AI responses can consume different amounts of tokens even for identical prompts due to varying context processing. Clay often adds system prompts, formatting instructions, and additional context behind the scenes that increase token usage.
100x seems more than excessive. I have 500 token prompts costing me 50k tokens on Clay
Hey Barnaby, that's understandable. Transparently, we have a ton of other calls running under the hood which increase the cost, but they provide the best outputs possible. Let me know if you have any other questions.
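For anyone who wants to sanity-check the gap themselves, here's a minimal sketch in pure Python. It uses the common ~4 characters-per-token heuristic for English text rather than a real tokenizer (swap in a library like tiktoken for exact counts), and the 50k figure below is just the number reported in this thread, not a Clay-documented value:

```python
# Rough local token estimate to compare against the usage Clay reports.
# Heuristic: OpenAI-family tokenizers average roughly 4 characters per
# token for English prose. For exact counts, use a real tokenizer
# (e.g. the tiktoken library) instead of this approximation.

def estimate_tokens(text: str) -> int:
    """Approximate token count: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def reported_multiplier(prompt: str, reported_tokens: int) -> float:
    """How many times larger the platform-reported usage is vs the raw prompt."""
    return reported_tokens / estimate_tokens(prompt)

# Illustrative ~500-token prompt (a 50-character sentence repeated 40 times).
prompt = "Summarize this company's website in two sentences." * 40

print(estimate_tokens(prompt))             # -> 500 (rough local count)
print(reported_multiplier(prompt, 50_000)) # -> 100.0 (50k reported vs ~500 local)
```

If the multiplier is consistently this large, 'Run with Logs' is the way to see which hidden calls and injected context account for the difference.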
We haven't heard back from you here, so we're going to go ahead and close this thread out.
Still need help here? Reply back and someone will jump back in.
Have a question that's not related to this thread? We recommend kicking off a new ticket in the support channel!
