Hey Support, I've been using OpenAI to enrich lead data, which has taken me from Usage Tier 1 to Usage Tier 4 very quickly and has cost me about $225 in OpenAI fees. I still need to do more enrichment before the campaign is finished, but I'm unable to use the OpenAI integration. Clay is showing the error: "Could not finish the message because max_tokens or model output limit was reached. Please try again with higher max_tokens."

I messaged OpenAI support and the bot responded with:

"Hi! I'm an AI support agent here to help. The error message you're getting ("Could not finish the message because max_tokens or model output limit was reached") usually isn't about credits or your usage tier. Instead, it means that either the prompt you (or Clay.com) are sending is too long, or the response you're asking for would go over the model's output token limit. Here's how you can address this:
- Try sending a shorter prompt or asking for a shorter response.
- If possible on Clay.com, look for a "max_tokens" setting in your integration and set it to a higher value (but not above the model's documented max).
- Make sure that your overall prompt + expected output stay within the limits set for the model you are using (for most GPT-4 models, that's usually 128k tokens, but check what's available on Clay.com). Your credits and tier affect how much you can use the API, but they do not increase the per-request token or output limit.
- If Clay.com's platform has its own internal limits, you may need to reach out to their support to see if they allow changing max_tokens or if there are other platform-specific restrictions.
Let me know if you want details on per-model token limits, or need help finding these settings in your Clay.com workflow!"

I'm not sure where the problem lies or what I should do. Can you offer any help or guidance?
The error you're seeing is related to token limits, not your OpenAI usage tier. In Clay, you can resolve this by adjusting the "Clay Credit Budget" setting in your AI column.
Edit your AI column and scroll to the "Clay Credit Budget" section - increase this number to allow more credits per row for longer/complex prompts. [1] This controls the maximum charge per row and only uses extra credits when needed for larger token counts.
You can also control output length by setting a maximum output length in your model configuration to manage token usage. [2]
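For context, when the output cap is hit, the OpenAI API itself reports it: the choice's `finish_reason` comes back as "length" instead of "stop". A minimal sketch of that check, using a plain dict to stand in for the client's response object so it runs without an API key (the real client returns objects with the same field names):

```python
# When a chat completion is cut off by the max_tokens output cap, the
# OpenAI API sets finish_reason to "length"; a normal completion ends
# with "stop". Responses below are plain dicts standing in for the
# real client's response objects (same field names).

def was_truncated(response: dict) -> bool:
    """True if the model stopped because it hit the output token cap."""
    return response["choices"][0]["finish_reason"] == "length"

truncated = {"choices": [{"finish_reason": "length",
                          "message": {"content": "Acme Corp is a..."}}]}
complete = {"choices": [{"finish_reason": "stop",
                         "message": {"content": "Acme Corp makes widgets."}}]}

assert was_truncated(truncated)
assert not was_truncated(complete)
```

If a platform surfaces this condition as an error (as Clay does here), raising the output budget or shortening the request are the two levers available.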
Did that answer your question?
I do not see a "Clay Credit Budget" section in the column
Sorry for the trouble you're experiencing. I’m routing you to our human team now so they can assist you further.
Clay Support is sharing a file with you.
Our support team has got your message and we'll get back to you soon!
If you’re dealing with a specific table, drop the URL below so we can help you quicker. Otherwise, someone from our team will be in touch soon!
Hey Chris! Thanks for reaching out! I believe OpenAI's suggestion that the prompt is too long is accurate. I would suggest throwing it into our meta-prompter ('Generate') to shorten it.
Hey! That error usually happens when Clay sends a long input (big prompt or enrichment context) that exceeds the token limit for the OpenAI model — it’s not about your usage tier or credit balance. You can fix it by:
Shortening the enrichment prompt (remove unnecessary context).
Reducing row inputs: Clay sometimes passes multiple fields that bloat tokens.
Switching models: GPT-4o Mini, for example, handles long prompts more efficiently and costs less per run.
If Clay doesn’t expose the max_tokens field, try splitting the enrichment into two columns (smaller tasks).
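The fixes above all come down to keeping prompt tokens plus the requested output budget inside the model's context window. A rough pre-flight sketch of that check (the ~4 characters-per-token ratio is a crude English-text heuristic, not a real tokenizer, and the context sizes are illustrative; confirm both against OpenAI's model documentation):

```python
# Rough pre-flight check: estimate whether prompt + requested output
# fit inside a model's context window before sending the request.

MODEL_CONTEXT = {            # illustrative limits; verify in OpenAI's docs
    "gpt-4o": 128_000,
    "gpt-4o-mini": 128_000,
}

def estimate_tokens(text: str) -> int:
    """Very rough token estimate (~4 characters per token for English)."""
    return max(1, len(text) // 4)

def fits_context(prompt: str, max_output_tokens: int,
                 model: str = "gpt-4o-mini") -> bool:
    """True if estimated prompt tokens plus the output budget stay
    inside the model's context window."""
    return estimate_tokens(prompt) + max_output_tokens <= MODEL_CONTEXT[model]

short = "Summarize this company in one sentence."
assert fits_context(short, max_output_tokens=500)

huge = "x" * 600_000         # ~150k estimated tokens, over the window
assert not fits_context(huge, max_output_tokens=500)
```

If a prompt fails this check, that is the signal to trim row inputs or split the enrichment into two smaller columns as suggested above.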
We haven't heard back from you here, so we're going to go ahead and close this thread out.
Still need help here? Reply back and someone will jump back in.
Have a question that's not related to this thread? We recommend kicking off a new ticket in the support channel!
