Hi, it looks like Clay is using the wrong OpenAI model when we connect our own API keys. If you select GPT-4.1, 4.1-mini, or 4.1-nano, Clay runs GPT-4o, which is 10x more expensive. I caught this because my OpenAI bill suddenly skyrocketed. Is the team aware of this bug? This could be costing customers thousands of dollars without them realizing it.
We don't have specific information about the bug you've mentioned. Our integration with OpenAI allows users to leverage various GPT models within Clay workflows, but we don't have details about individual model behavior or pricing. If you're experiencing unexpected charges or model behavior, we recommend checking your OpenAI API usage and limits within your OpenAI account. This will help you verify which models are being used and their associated costs. Have you already checked your OpenAI account for detailed usage information that might confirm this issue?
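One quick way to check on the API side (outside Clay) is to compare the model you request with the `model` field OpenAI returns on each response, which reports the model that actually served the call. A minimal sketch, assuming the official openai Python SDK (v1+) and an OPENAI_API_KEY in your environment:

```python
# Minimal sketch: verify which model actually serves a request.
# Assumes the official "openai" Python SDK (v1+) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

requested = "gpt-4.1-nano"
resp = client.chat.completions.create(
    model=requested,
    messages=[{"role": "user", "content": "ping"}],
    max_tokens=1,
)

# The response's "model" field reports the model that handled the call;
# it may carry a dated snapshot suffix (e.g. "gpt-4.1-nano-2025-04-14"),
# so compare by prefix rather than exact match.
print(f"requested={requested}  served_by={resp.model}")
if not resp.model.startswith(requested):
    print("WARNING: the request was served by a different model")
```

The usage page in your OpenAI account also breaks down cost by model, which is where this kind of mismatch tends to show up first.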
Hi Juan, thanks for your message. You're saying that when you select an OpenAI model in Clay while using your personal OpenAI key, the call is made to a more expensive model instead?
If yes, can you share the URL of the table where you're seeing this so I can take a closer look at how the action was executed on our end?
Hey Bruno. My business partner is the one who handles Clay, so I'll ask him for that link. In the meantime I can share some screenshots with you.
This is the log from OpenAI for that table.
My business partner is traveling, so it might take a few hours for him to reply. He sent me this overnight. Thankfully we caught it early, but there could be other users who haven’t realized they’re calling the wrong model and are seeing their costs skyrocket too.
Unfortunately, we won't be able to backtrack what happened here unless you share the URL of the table where this issue popped up so we can take a closer look at our backend. I just tested those models and was not able to replicate the issue on my end.
It would be helpful to have a table to check what may be causing this. Sorry about this, Juan; as soon as you get your hands on the table URL, please send it over to us.
I will send you the link as soon as he sends it to me.
Hey Bruno R., I spoke with my business partner and he told me he fixed it by migrating the OpenAI enrichment to the new version. The issue only affects the legacy version, so we're good now. But you might have other clients still on the legacy version, and they're probably running into the same issue.
Hi Bruno R.! As Juan said, all my enrichments were using the legacy OpenAI enrichment, and that's where the problem was. I selected the GPT-4.1-nano model, but it was actually using GPT-4o, which is 10x more expensive. I fixed it by migrating to the newest version. I didn't even know I was using a legacy enrichment option, so this may well be happening to other users too. The attached image shows the version I was using.
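For anyone else reading this, here is a purely hypothetical illustration (not Clay's actual code) of how this kind of mismatch can happen: if a legacy integration maps display names to API model IDs and silently falls back to a default for names it doesn't recognize, a newer selection like GPT-4.1-nano can end up routed to GPT-4o.

```python
# Hypothetical illustration only -- not Clay's actual implementation.
# A stale display-name -> API-model map with a silent default can route
# newer selections to an older, pricier model without any visible error.
MODEL_MAP = {
    "GPT-4o": "gpt-4o",
    "GPT-4o mini": "gpt-4o-mini",
    # "GPT-4.1-nano" was never added, so it falls through to the default below.
}

def resolve_model(selected_name: str) -> str:
    # Silent fallback: any unrecognized name quietly becomes gpt-4o.
    return MODEL_MAP.get(selected_name, "gpt-4o")

print(resolve_model("GPT-4.1-nano"))  # -> "gpt-4o" (roughly 10x nano's price)
```

A safer pattern is to raise on unknown names (or pass the selected model ID through unchanged) so a mismatch fails loudly instead of silently inflating the bill.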
Hi all, thank you for bringing this to our attention. Could you possibly share the table where you're seeing the incorrect model routing on the legacy GPT action? We're trying to find instances of this in parallel.