Claygent isn't sticking to the defined output structure at scale. When I set Sonnet 4 to investigate a website and return either "Valid" or "Invalid", it works fine when rows are run individually, but it gets very confused when hundreds of rows are run at once.
It starts ignoring the output structure entirely. This is very frustrating, as it's been a recurring issue across other tables and different models.
https://app.clay.com/workspaces/223473/workbooks/wb_0t6sw90Zb35DKX93pJf/tables/t_0t8aefk36Zg2vYU5ewS/views/gv_0t8aefk7TeKJBBixK2c
Hi team, is Clay not processing Claygent runs? When I click "Run", it looks like it's running, but nothing actually executes (as evidenced by the Stop button never becoming available).
Hi team, sometimes when running Claygent with GPT models, it returns the wrong output even when its reasoning arrives at the right one. For example, if it should return either "Yes" or "No", it may return "No" even though its reasoning correctly determines the answer should be "Yes".
Any idea why this could be?
An example is the column titled "CHU - Incorrect" in this table - https://app.clay.com/workspaces/223473/workbooks/wb_0t58zs6Fdsku4T6gC5k/tables/t_0t58zt2HyboAysDdKDz/views/gv_0t58zt2iTp3ziXpAnix
Also, when setting my API key, I get the error below (I'm on Tier 5, so I'm not sure why this issue would show):
"Could not finish the message because max_tokens or model output limit was reached. Please try again with higher max_tokens."