Anyone ever experience getting a different result/output when using GPT-4o in Clay vs. just natively in GPT? I'm entering the exact same prompt and info, but the output from the native GPT client is a lot better than what I'm getting in a Clay cell 🤔
Hi Daniil, thanks for reaching out! One reason you could be getting different results from Clay vs. native GPT is that the Clay enrichment does not use context from previous responses; the only context it gets is what you put into your prompt. Could you share a link to your table as well as a screenshot of the result you get with GPT? Happy to take a closer look!
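To illustrate what that means in practice, here's a minimal sketch using the OpenAI Python SDK (Clay's actual internals aren't public, and the model name and prompt text below are placeholders). A chat session resends the whole message history with every request, while an enrichment-style call sends one standalone prompt per row, so the model has nothing else to condition on:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Chat-style call: the full conversation history rides along with every
# request, so earlier turns shape the next answer.
chat_history = [
    {"role": "user", "content": "Summarize this company: Acme Corp"},
    {"role": "assistant", "content": "Acme Corp makes industrial anvils..."},
    {"role": "user", "content": "Make it more succinct."},  # relies on prior turns
]
chat_reply = client.chat.completions.create(model="gpt-4o", messages=chat_history)

# Enrichment-style call: one standalone prompt, no prior turns.
# The model only sees what the prompt itself contains.
cell_reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize this company: Acme Corp"}],
)

print(chat_reply.choices[0].message.content)
print(cell_reply.choices[0].message.content)
```

So if the answer you liked in ChatGPT came after a bit of back-and-forth, the Clay cell won't have any of that refinement unless you fold it into the prompt itself.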
Hi Tanvi, thanks for the quick reply! Sure thing, here's the table: https://app.clay.com/workspaces/241406/workbooks/wb_Ro3fHp7Mny9s/tables/t_tC3dQMitBPah/views/gv_VuPHZzntrj2P
For reference, I'm using the inputs and outputs in Row 10
Here's what GPT gave me (a more succinct answer, which I much prefer)
As opposed to the Clay one
I performed the same test on some other columns, and I'm getting the same output in both Clay and GPT. Weird, maybe it's just that particular one
Hi there! Thanks for the additional details. It's normal for AI responses to vary, even when asked the same question multiple times. The model re-generates its answer on every run, sampling from many possible wordings, so outputs can differ slightly even with identical inputs. If the response isn't what you want, feel free to explore our metaprompters to create a different prompt and tweak it :) https://www.clay.com/university/lesson/ai-metaprompter-guide
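To make the sampling point concrete, here's a small sketch, again assuming the OpenAI Python SDK and a placeholder prompt (Clay may pin different settings under the hood). Lowering the temperature and fixing a seed makes outputs much more repeatable run to run:

```python
from openai import OpenAI

client = OpenAI()

prompt = [{"role": "user", "content": "Describe Acme Corp in one sentence."}]

# Default sampling (temperature around 1.0): each run may word things differently.
varied = client.chat.completions.create(model="gpt-4o", messages=prompt)

# Lower temperature plus a fixed seed reduces run-to-run drift
# (seed gives best-effort determinism, not a hard guarantee).
stable = client.chat.completions.create(
    model="gpt-4o",
    messages=prompt,
    temperature=0,
    seed=42,
)

print(stable.choices[0].message.content)
```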
Hey there - just wanted to check in to see if you needed anything else! Feel free to reply here if you do.
We haven't heard back from you in a bit, so we're going to go ahead and close things out here - feel free to let us know if you still need something!