Issues with Claygent GPT Model Outputs: Understanding Errors
Hi Team, sometimes when running Claygent with GPT models, it returns the wrong output even though its own reasoning arrives at the correct answer. For example, if it should return either "Yes" or "No", it can return "No" even though its reasoning correctly determines that the answer should be "Yes". Any idea why this could be? An example is the column titled "CHU - Incorrect" in this table: https://app.clay.com/workspaces/223473/workbooks/wb_0t58zs6Fdsku4T6gC5k/tables/t_0t58zt2HyboAysDdKDz/views/gv_0t58zt2iTp3ziXpAnix
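For context, here is a minimal sketch (outside of Clay, using a hypothetical OpenAI-style call, not the actual Claygent prompt) of the two-part output I'm expecting: the model returns its reasoning and a final "Yes"/"No" answer as separate fields, and only the answer field is used. The problem I'm describing is the equivalent of the reasoning concluding "Yes" while the answer field comes back "No".

```python
# Hypothetical illustration only (not the actual Claygent setup):
# ask for reasoning and a final Yes/No answer as separate JSON fields,
# then read only the "answer" field.
import json
from openai import OpenAI  # assumes the official openai Python SDK

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = "Does this company use a cloud HR platform? Answer Yes or No."

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    response_format={"type": "json_object"},
    messages=[
        {
            "role": "system",
            "content": (
                "Think through the question, then respond with JSON: "
                '{"reasoning": "<your reasoning>", "answer": "Yes" or "No"}. '
                "The answer field must match the conclusion of your reasoning."
            ),
        },
        {"role": "user", "content": question},
    ],
)

result = json.loads(response.choices[0].message.content)
print(result["reasoning"])  # free-text reasoning
print(result["answer"])     # final label; this is what the column should show
```

Even with an explicit instruction that the answer must match the reasoning, I'm still seeing the two disagree in the linked table.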