When I run the same prompt via the ChatGPT API in two different tables, the costs differ significantly. To double-check, I even gave it the same website to skim (which is part of the prompt), and still the costs were different. I can see the tokens used are different for both, but how is that possible if the prompt is the same (and the website is the same; it's the only variable I use in the prompt)? Both times the ChatGPT API is used. Can somebody explain this?
ChatGPT API costs in Clay vary based on token usage, which depends on prompt complexity and response length. Even identical prompts can consume different amounts of tokens due to varying AI responses or context processing. Clay uses variable credit pricing: you only pay extra credits when a prompt exceeds the default budget limit. Credit costs are based on actual prompt and token usage, not a fixed rate per row. The same website content might generate different response lengths or require different processing, causing cost variations between tables.
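To make the token-based pricing concrete, here's a minimal sketch of why identical prompts can cost different amounts: billing is per token, and the response length varies between runs. The per-token prices below are illustrative assumptions, not Clay's or OpenAI's actual rates.

```python
# Illustrative per-1M-token prices (assumptions, not actual rates)
INPUT_PRICE_PER_M = 0.15    # $ per 1M input (prompt) tokens
OUTPUT_PRICE_PER_M = 0.60   # $ per 1M output (response) tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Rough per-call cost from token counts."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Same prompt (same input tokens), but the model answered at different lengths:
run_a = estimate_cost(input_tokens=2_000, output_tokens=150)
run_b = estimate_cost(input_tokens=2_000, output_tokens=600)
print(run_a, run_b)  # run_b costs more even though the prompt was identical
```

The point being: even with the prompt held constant, a longer response alone is enough to change the bill.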
So odd, and now it changed to this
Hey Jelle O., just thought I'd jump in to help! (Since the Clay support team will likely only come online later on)
When you say the prompts are the same, are you writing out the exact same prompt for both? Or perhaps you are writing the same text, but you are letting Claygent generate the prompt?
Or it might be because the answer to your prompt is fairly open-ended, so even though it's the exact same prompt, OpenAI handles each run differently.
Lastly, if you are gonna be running this same prompt for many rows and you want some certainty about the max cost per row, my best recommendation would be to set a max cost like $0.001. This ensures some predictability when it comes to cost per row.
This looks different: the Claygent version came up automatically though, before I put in ChatGPT. With the Claygent version (where we use the same ChatGPT API, it seems, no?) costs are significantly higher.
Asher thanks Asher. Regarding your question, yes, exactly the same prompt (I don't put it in the generate section, because of possibly different output, but in the configure section). You say to set a max cost per row: will this prevent some rows from running/creating output, or will it be done more economically?
Jelle O. If you do set a max cost, a few scenarios might happen:
The prompt runs successfully without any issue (Especially if it's a simple prompt and task. Using some of the examples you've done, it does seem like putting a max of $0.001 would be fine)
The prompt runs successfully, but it doesn't go the 'extra mile'. For every prompt you do, especially if it's web research, depending on the complexity of information, Claygent might take more steps to find or verify the information. If you set a limit, it will likely just give you whatever it can find within the cost limit. (Again, it shouldn't be a big issue if your prompt and task is simple)
As best practice, I would always recommend this:
If possible, try not to let AI do too many things. Split it into simple tasks, and get AI to just do 1-2 things per prompt. So in this case, the $0.001 limit will be fine.
If you need it to do a more complex task, use something like the 4o model (has more reasoning power) instead of the 4o mini model, but it will be more expensive. And if you do use the 4o model, don't put cost limits on it.
If you want to control costs but also not limit what the AI does, my best recommendation would be to test on 50-100 rows first, look at the average cost per row, and from there set the limits accordingly.
There's no right or wrong answer, but those are just things to think about 🙂
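The "test on a sample first, then set the limit" approach above can be sketched in a few lines. The sample costs here are made-up numbers standing in for Clay's reported per-row costs, and the 20% headroom factor is just one reasonable choice.

```python
# Made-up per-row costs from a hypothetical 5-row test batch
sample_costs = [0.0008, 0.0011, 0.0009, 0.0015, 0.0010]

average = sum(sample_costs) / len(sample_costs)
# Give the limit some headroom over the worst observed row (assumed 20%)
suggested_limit = round(max(sample_costs) * 1.2, 4)

print(f"average per row: ${average:.5f}, suggested max cost: ${suggested_limit}")
```

In practice you'd run 50-100 rows rather than 5, but the idea is the same: base the cap on observed costs instead of guessing.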
One final question: how do I set the max cost per row? Just tried to do it in 'edit column' but cannot find it.
Here? Maximum output length?
Ahhh I see, you are using the use case of 'Create or modify content'. In that case, you can't set the max cost, but it will most of the time be really cheap! Quick explainer video for you - https://www.tella.tv/video/for-jelle-explanation-on-max-cost-ff0k
Asher thank you! I was distracted by something else, so a bit of a later reply. I tried doing that: set the max cost to 0.001, saved without running, and then re-ran an individual cell. I got this as the cost though. To double-check, I also tried it on a new cell (which I hadn't run before), and there it also exceeded the max cost.
In this case, it's because your prompt requires a fairly large amount of tokens. So it might go a little over 0.001, but it will never go to 0.002, for example.
Jelle O. Ahhh, I saw it wrongly. Hmmm, even if you rerun it, it's still the same? It costs you 0.01? That shouldn't happen 😕 Might have to let the Clay support team look into your Clay tables then! The things that should happen are:
The prompt runs successfully without any issue.
The prompt runs successfully but doesn't go the 'extra mile', giving you whatever it can find within the cost limit (as described above).
Or worst case, it shows 'Error', and the reason for the error is that the max cost is set too low.
Hey Jelle O. Even if the prompt and website look the same, token usage can still vary due to hidden differences, like slight changes in how the site loads, formatting, or system-level variables Clay adds behind the scenes. Check both runs in Clay's logs and compare the full request and token breakdown. Also, try pasting the prompt into OpenAI's tokenizer to verify the token count.
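To illustrate the hidden-difference point: two scraped copies of a page can render identically on screen yet differ at the character level, and any tokenizer will then count them differently. The snapshots below are hypothetical; "Acme Corp" is a made-up site.

```python
# Two page snapshots that look identical when rendered, but differ invisibly:
# snapshot_b has a trailing space after "Home" and a non-breaking space
# (\u00a0) instead of a regular space in "Welcome to".
snapshot_a = "Acme Corp - Home\nWelcome to Acme Corp."
snapshot_b = "Acme Corp - Home \nWelcome\u00a0to Acme Corp."

print(snapshot_a == snapshot_b)  # False: the two runs scraped different characters
print(len(snapshot_a), len(snapshot_b))  # even the raw lengths differ
```

This is why comparing the full request in the logs (or in OpenAI's tokenizer) is worthwhile: differences that are invisible to the eye still change the token count.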