I am not able to use GPT-5 mini in Claygent; I get this message: 'Could not finish the message because max_tokens or model output limit was reached. Please try again with higher max_tokens.'
This error indicates your OpenAI API key doesn't meet Claygent's requirements. Claygent requires Tier 2 or higher (≥450,000 TPM) to function properly. Check your API usage tier at OpenAI's limits page. If you're on a free plan or Tier 1, you'll need to upgrade your OpenAI plan to meet the 450,000 TPM requirement. The token limit error occurs because lower API tiers have insufficient capacity for Claygent's processing needs.
I'm on Usage Tier 4
Since you're on Usage Tier 4, your OpenAI API key meets Claygent's requirements (≥450,000 TPM). The token limit error you're experiencing with GPT-4o mini might be due to the specific prompt or task complexity. Try increasing the "Max Cost" parameter in your Claygent settings to allow for higher token usage, or consider breaking down complex prompts into smaller, more specific tasks.
I am facing problems with GPT-5
GPT-5 isn't currently available in Claygent. The available OpenAI models are GPT-4 and GPT-4o mini. For Claygent, we recommend using our optimized models: Claygent Helium (best price-performance), Claygent Neon (excellent for data extraction), or Claygent Argon (strongest for complex analysis). Since you're on Usage Tier 4, your OpenAI API meets our requirements (≥450,000 TPM).
It is showing in Claygent
You're correct! GPT-5 was recently introduced in Claygent on August 7, 2025, bringing enhanced research and conversational writing capabilities to Clay workflows. You can now select GPT-5 as a model option in Claygent alongside the other available models like Claygent Helium, Neon, and Argon.
Hi Raj, thanks for reaching out to support. You're absolutely right that GPT-5 has been experiencing some instability since its recent launch. OpenAI is dealing with high demand, which is causing intermittent issues with tokens, speed, and throughput that we're seeing across the platform. Here are a couple of solutions that have been working well for other users experiencing similar issues:

Solution 1: API Key Reset
* Regenerate a new API key in your OpenAI account
* Remove the old connection in your Clay settings
* Add the new API key as a fresh connection

This has been resolving the issue about 50% of the time for users experiencing GPT-5 connectivity problems.

Solution 2: Adjust Model Output Limits
* Go to your model settings and increase the model output limit
* If you're using the "create content" use case, the default is currently set to 4k tokens
* GPT-5 has a much higher limit (around 128k), so you can safely increase this setting

These solutions should help stabilize your GPT-5 experience while OpenAI continues to optimize their infrastructure for the new model. If you continue experiencing issues after trying these steps, please let us know and we can explore additional troubleshooting options. Hope this helps! 🚀
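The "max_tokens" behavior in Solution 2 can be sketched in code. This is a minimal, hypothetical example assuming an OpenAI-style Chat Completions response, where `finish_reason == "length"` signals that the output-token cap was hit; the helper names and the doubling strategy are illustrative, not Clay's or OpenAI's actual implementation. It runs on plain dicts, so no API key or live call is needed.

```python
# Hypothetical sketch: detect when a completion was cut off by the
# max_tokens cap and pick a higher limit for a retry. Assumes an
# OpenAI-style Chat Completions response shape, where
# finish_reason == "length" means the model hit the output-token limit.

def was_truncated(response: dict) -> bool:
    """True if the first choice stopped because max_tokens was reached."""
    return response["choices"][0]["finish_reason"] == "length"

def next_max_tokens(current: int, ceiling: int = 128_000) -> int:
    """Double the output limit for a retry, capped at the model ceiling
    (GPT-5's output limit is around 128k per the reply above)."""
    return min(current * 2, ceiling)

# Mocked response objects (no API call is made):
truncated = {"choices": [{"finish_reason": "length",
                          "message": {"content": "partial..."}}]}
complete = {"choices": [{"finish_reason": "stop",
                         "message": {"content": "done."}}]}

print(was_truncated(truncated))  # True
print(was_truncated(complete))   # False
print(next_max_tokens(4_000))    # 8000
```

The same idea applies in the Claygent UI: if you see the max_tokens error, raise the model output limit setting (e.g. from the 4k default) rather than retrying with the same cap.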
Even the GPT-4 model has become too slow
Hi Raj, if you're using your own OpenAI API key, large calls may stall because of your API rate limits. Feel free to run smaller batches to avoid this, and try running one AI column at a time for maximum speed. Hope this helps!
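The "run smaller batches" advice can be sketched as a simple chunking helper. This is an illustrative example only, assuming you have a list of rows an AI column would process; the batch size you choose would depend on your actual OpenAI rate limits.

```python
# Hypothetical sketch of "run smaller batches": chunk the rows an AI
# column would process so each run stays under your OpenAI rate limits.
from typing import Iterator, List

def batched(rows: List[str], batch_size: int) -> Iterator[List[str]]:
    """Yield rows in fixed-size batches; the last batch may be smaller."""
    for start in range(0, len(rows), batch_size):
        yield rows[start:start + batch_size]

rows = [f"row-{i}" for i in range(10)]
print([len(b) for b in batched(rows, 4)])  # [4, 4, 2]
```

Processing one batch at a time (and one AI column at a time) spreads requests out, so a tier's tokens-per-minute cap is less likely to stall a large run.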
How do I do this?
'Solution 2: Adjust Model Output Limits
* Go to your model settings and increase the model output limit
* If you're using the "create content" use case, the default is currently set to 4k tokens
* GPT-5 has a much higher limit (around 128k), so you can safely increase this setting'
Hi Raj - thanks so much for reaching out. This seems to be a bug, and I've flagged your case for the engineering team so they can take a closer look. You'll hear more updates from us here. If you could share the URL of the table where you're seeing the "Could not finish the message because max_tokens or model output limit was reached. Please try again with higher max_tokens" error, that would help the team dive deeper.
It's happening for every table I try
Understood - as mentioned, I've already flagged this for the team, but sharing a specific table would still be helpful.
How do I share a table here?
If you could just drop the URL to the table you're experiencing the issue in, that would be helpful:
I’m seeing the same issues with all gpt-5 calls: Could not finish the message because max_tokens or model output limit was reached. Please try again with higher max_tokens.
Same issue with me for all tables
Hello Raj! Thanks for reaching out! I can see the engineers are actively working on the bug, and we will reach out to you as soon as we get an update. To confirm: you are currently facing this issue with GPT-5, right? Did the issue with GPT-4 get resolved? I am not 100% sure the two are connected, but I can check for you.
I only have the issue with GPT-5
Same
Okay got it! Will keep you updated, thanks! :)
Any update on this?
Hey Raj - I'll go ahead and follow up with the engineering team. They're currently investigating the issue, and we'll have an update soon.
Thank you for your patience!
Has a fix been found for this?
I don't think so
The team is still working on it, but we'll get back to you here when we have more updates.