Inconsistent Performance of 4o-mini Model in Claygent Tasks
Hi everyone, has anyone had recent bad experiences with 4o-mini inside of Clay, especially for Claygent tasks? I usually use 4o-mini for simple, short tasks since it is cheaper and has worked well, but recently I've found it isn't consistent enough.

For example: when I use Claygent with 4o-mini (via my own API key) to check whether a website is down, it fails roughly 1 out of 4 times, while 4.1 mini is almost 100% accurate with the same prompt.

I was wondering if something changed in OpenAI's model limitations that I'm not aware of. I wanted to ask the community here first, in case any of you have had similar experiences, especially after the new models came out a few weeks ago. Thanks!
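For what it's worth, a deterministic check can serve as a baseline to measure the model's error rate against. This is just a minimal sketch, assuming the Claygent prompt is essentially verifying that the site returns a non-error HTTP status (your actual prompt may check more than that):

```python
import urllib.request
import urllib.error

def site_is_up(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL responds with a non-error HTTP status (< 400)."""
    try:
        # Some sites reject requests without a browser-like User-Agent.
        req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, TimeoutError):
        # Covers DNS failures, connection refusals, timeouts, and
        # HTTPError (4xx/5xx), which is a subclass of URLError.
        return False
```

Running this over the same list of domains you feed Claygent would tell you how often the model disagrees with a plain HTTP check.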