Hi Team, sometimes when running Claygent with GPT models, it returns the wrong output even when its reasoning arrives at the right answer. For example, if it should return either "Yes" or "No", it can return "No" even though its reasoning correctly determines the answer should be "Yes".
Any idea why this could be?
Example would be the column titled "CHU - Incorrect" in this table - https://app.clay.com/workspaces/223473/workbooks/wb_0t58zs6Fdsku4T6gC5k/tables/t_0t58zt2HyboAysDdKDz/views/gv_0t58zt2iTp3ziXpAnix
Also, when setting my API key, I'm getting the error below (I'm Tier 5, so not sure why this would happen):
"Could not finish the message because max_tokens or model output limit was reached. Please try again with higher max_tokens."
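For context on why tier doesn't matter here: a minimal sketch of an OpenAI-style request payload (Clay builds this internally, so the payload shape shown is an assumption, though `max_tokens` is the real OpenAI Chat Completions parameter). `max_tokens` is a per-request cap on output length and is separate from account-tier rate limits, so even a Tier 5 key hits this error when the cap is set too low for the response.

```python
# Hypothetical request payload for illustration only; Clay constructs the
# actual request on its side. "max_tokens" caps the OUTPUT length of a
# single completion and has nothing to do with account tier.
payload = {
    "model": "gpt-4o",  # assumed model name for the example
    "messages": [
        {"role": "user", "content": "Answer Yes or No, with reasoning."}
    ],
    "max_tokens": 2048,  # raising this avoids the truncation error
}
print(payload["max_tokens"])
```

If Claygent's reasoning plus answer exceeds the configured cap, the response gets cut off mid-generation, which would also explain a final "Yes"/"No" that doesn't match the reasoning above it.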