I'm using Claygent and it seems the full prompt is not taken into consideration by the agent. It worked fine last week. My prompt has grown since last week - is there a character limitation to be aware of with the prompt?
While there isn't an explicit character limitation documented, if you're experiencing issues with prompt processing, here are some recommended solutions:
1. Break down your large prompt into smaller, more specific queries
2. Verify your input data format is correct
3. Double-check your prompts and search parameters before running tasks
4. Use our metaprompters to help craft a better prompt
If these solutions don't resolve your issue, you may want to request human support for further assistance.
Looks like you're still looking for help. We're escalating this over to our support team and someone should be in touch soon. In the meantime, feel free to include any other information that may be helpful in solving your issue!
If you run into the upper bound of your context window, you will be warned through the interface, so this should not be the root cause of issues with your output. I suggest clicking the "help me" button at the bottom right of your prompting window - this tool will help you fine-tune your prompt for optimal results!
ok, clear, will experiment further.
Thank you!
That sounds great! Let us know if you have any other questions here.
I have conducted tests which seem to indicate the prompt gets truncated, possibly when the ensuing research goes beyond a certain limit. I'd prefer to show you via a video call or send a Loom privately
due to sensitive data.
Is there a way for me to see the actual prompts getting input to the model?
Based on the results I'm getting right now, compared to just last week, it seems from my POV that something has happened with the model.
The same prompt with OpenAI 4o works perfectly.
Thanks for sharing your concerns, Philip. To properly investigate what's happening with the prompts, I can take a look at your specific table through its URL (just copy it from your browser's address bar). This lets me see exactly what's going on while keeping your data private. While different AI models handle prompts in their own ways and receive regular updates, significant performance changes usually come down to the input data and prompt structure. Would you mind sharing that table URL so I can investigate further?
Hey there - just wanted to check in here to see if you needed anything else! Feel free to reply back here if you do.
We haven't heard back from you in a bit, so we're going to go ahead and close things out here - feel free to let us know if you still need something!