My Claygent is responding with "no data found" when it shouldn't. If I give ChatGPT the same prompt that I gave Claygent, it does what I tell it to (see attached images). This is in a workflow. I will test outside of a workflow and update if that fixes the issue. https://app.clay.com/shared-table/share_fGKHk4CaCHxr
Yeah, it's still pretty trash even when it's not in a workbook. The prompt I'm using currently is very simple because I assumed the original one I gave it was too complex, but it still doesn't seem to be giving good responses. Attached is what I got from ChatGPT and what I got from Claygent with basically the same prompt.
Bruno R. - you mentioned GPT cannot scrape websites. I used the paid version of ChatGPT 4o, which actually does scrape websites; you can see in my query that it searched the website to perform the task. LinkedIn will not work for my task because my end goal is to find all the products the company sells and get a breakdown, and that must be taken from the website. The problem I am encountering is that Claygent says it can't find any info when the info is definitely available. As stated, I gave the same query to ChatGPT and it was able to scrape the website to find the information, so I am not sure why Claygent cannot. See this video for an explanation: https://www.loom.com/share/9c9234b889e04d05813e0feb9de1dc79?sid=09e46d02-fe81-4f18-af42-566ca7e053b7 P.S. I am a relatively advanced user of Clay, so you don't need to worry about explaining the basic features to me. Thank you though.
I want to note that the prompt I gave Claygent is just to "summarize the website". I did that because my more complex prompts also weren't working, so I scaled it down as much as I could to a task that I know should be possible. Claygent isn't performing even this basic summarization task, which should be possible if you just go to the website and look at it, so that tells me it's a problem with Claygent.
We haven't heard back from you in a bit, so we're going to go ahead and close things out here - feel free to let us know if you still need something!
Hi Kierin, sorry for the delay! What you're seeing as references in your GPT-4o query outputs are actually references to training data; they are reasoning guidelines rather than live scrapes of the websites.

Regarding your goal of identifying the current product line a company is selling, Claygent is a suitable choice for this task. When prompting Claygent, approach it as if you were conducting the research manually, then reverse engineer those steps into a repeatable prompt that you can give Claygent. The task of summarizing a company's activities can be handled effectively using the prompt I outline in this video. Remember, the reliability of the output depends on two key factors: how well-crafted your prompt is and whether the relevant information is actually available on the website you're targeting.

I also see that you're using your own API key to run Claygent with 4o, and it looks like your usage tier may be causing performance issues. We now require 30,000 TPM (tokens per minute) for ChatGPT columns and 450,000 TPM for Claygent. If you're trying to use Claygent with GPT-4 below Tier 2, or with GPT-3.5 below Tier 4, it won't work; ChatGPT columns, however, will work just fine. You can check your ChatGPT usage and limits at these links: • Usage • Limits • Usage Tiers. OpenAI's support team will grant the increase if you have prepaid the minimum amount required, and they're quick to assist if you need to raise your limits or troubleshoot any issues.

I hope this was helpful. If there's anything else I can assist you with, please let me know!
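As a side note, if you'd rather check your tier programmatically than through the dashboard, OpenAI returns documented x-ratelimit-* headers on every API response. A minimal sketch in Python (the model name and one-token prompt are just placeholders, and OPENAI_API_KEY is assumed to be set in your environment):

```python
# Minimal sketch: read the account's rate limits from OpenAI's response headers.
import os
import requests

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4o",  # placeholder; use whichever model you run Claygent with
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 1,  # keep the probe call as cheap as possible
    },
)

# These x-ratelimit-* headers are documented in OpenAI's rate-limit guide.
print("TPM limit:    ", resp.headers.get("x-ratelimit-limit-tokens"))
print("TPM remaining:", resp.headers.get("x-ratelimit-remaining-tokens"))
print("RPM limit:    ", resp.headers.get("x-ratelimit-limit-requests"))
```

If the TPM limit printed here is below 450,000, that would match the Claygent constraint described above.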
Bruno R. - ChatGPT Plus can search the web. It is not referencing its training data; it uses a headless version of Bing search to perform keyword searches. Here is an article by OpenAI about how it works: https://help.openai.com/en/articles/8077698-how-do-i-use-chatgpt-browse-with-bing-to-search-the-web The problem I am having is that my Claygent is working neither for the very simple task of just summarizing a website nor for the more complex product-identification workflow that you said it should be able to handle. I am not sure if it is something I am doing wrong or a bug in my environment, but I gave you the ChatGPT example because I know the task should be possible, given that ChatGPT can search the website and give me a result.
It may be my usage tier that is blocking me from performing it. I would be surprised, given how simple the prompt is and the fact that it sometimes works, but it is a possibility.
Hey Kierin, thanks for reaching out! 😊 I can see why this might be a bit confusing. While ChatGPT and Clay have similar features, they handle certain tasks differently. The ChatGPT API doesn't have internet access, meaning it can't browse the web or summarize live websites. ChatGPT Plus, used with the browsing feature in the regular interface on their platform, can search the web, but that capability is not available in the API. That's why we built Claygent and Claygent Neon. In your case, it might be a limitation of your current plan or settings that's preventing the workflow from working as expected, but we can easily check. Could you send me the real table URL (the one from your browser's URL bar) so I can investigate further? Let's figure this out together! 😊 Let me know if you need any other clarification.
Bo - This is the URL: https://app.clay.com/workspaces/276918/workbooks/wb_XvxcSSPANrf3/tables/t_Zbf7cESxQzF5/views/gv_SpQGwsts7VuG. As you can see from the screenshot, it does sometimes work, but only 2 out of 8 times in this test case. After checking the URLs by hand, I confirmed that the information is there and that the same prompt works on those URLs with ChatGPT Plus, so I am not sure what is causing it to fail.
Claygent also used to work fine for me on more complex tasks in the past, so I imagine an update changed something about the way it works. It could be the token limitation that Bruno brought up. I will look into upgrading that when I have the time, but given that it does sometimes work, I'm not sure why that would be the issue.
You can't directly compare ChatGPT and Clay in this way. One reason ChatGPT might understand your simple prompts is that it has accumulated context from your past conversations, so it knows what you mean. The ChatGPT API doesn't have that context: a simple prompt here is like giving a stranger vague instructions with very little information, which might not make sense to the model. Check out the GPT bot that can help structure better prompts as well. We also have pre-made prompts in our prompt library that can help; I've already added some for you in the Claygent Neon column. I also noticed you're using GPT-4o mini, which isn't the most powerful model, so it's not making things easier for you (see attached screenshot).

It's important to remember that ChatGPT might give great results in a one-off instance, but over time it can derail or hallucinate. That's why prompts need to be well-structured. Since the model doesn't return the same results every time, re-running columns with "no data found" can often change the outcome. I've added three versions: Claygent Neon, Claygent with GPT-4o, and another with GPT-4o mini. Check the output for each row to compare results and get a better sense of how each model works. Let me know if you need further clarification! 😊
I tried re-running the "no data found" rows many times
I started a new session in ChatGPT and cleared the history as well, to make sure it wasn't something I did
The prompt for the Claygent is just "summarize /website". It isn't able to do that, nor was it able to handle my more complex prompt
I also had it set to 4o mini because I thought it might have something to do with the model
But it didn't work with any of the models I tried
Sorry, the prompt for Claygent is actually "Act as a marketing consultant. Go to the website of the company /website and create a detailed breakdown of what this company does."
See, this is what happens when I change your prompt to GPT-4
Yes, I saw it in your table 😊! But as you noticed, that prompt might not deliver the best results. Have you checked out the GPT bot that I sent and already added to your table? And have you compared the different models? The first is Neon, the second GPT-4o, and the last GPT-4o mini; you should see a clear difference. It should help you structure better prompts and improve the output. Let me know how it works for you! 👍
It works with Claygent Neon but not GPT-4o. The majority of my GPT-4o Claygent prompts are working and I don't know why. It is, however, going to their website and returning something; it's just not useful
^ are not working*
That's correct: the prompt you're referring to is tailored for Claygent Neon, which uses specified inputs and behaves differently across models. If you check the table, you'll notice the updated prompt with GPT-4o has also produced great results.
This is what happens when I switch the Claygent 4o column over to my own API
So it has to do with my API, I guess?
As in, I changed the OpenAI account
I'm open to a call if it's easier for you to troubleshoot/explain that way
AI can sometimes return "no data found" during enrichment with Clay credits, just as it can with any other system. That's simply AI being AI: it processes each request slightly differently, which can lead to varying results. While it has improved (hallucination rates have dropped from roughly 40% to 15% over the last year), it's still a factor to consider. The best way to handle this is by adjusting the prompts and using better models when possible.

The mention of Bruno earlier was to clarify that a lower-tier plan might prevent you from running certain enrichments at all, but it wouldn't directly change the results themselves. As you've already noticed, every time we re-run the rows, the AI returns different responses; it doesn't return the exact same information twice. Neon should give better results than the other models, though, because we've developed our own AI models tailored to specific needs. That's also why it doesn't use an API key: it runs on a different architecture.

I re-ran all the rows, and some now have results while others still don't. We've seen cases where people run their entire table, get some results, wait for it to finish, then re-run the rows with "no data found" and get additional results. The new prompt should work fine; feel free to run the table and then re-run those "no data found" rows. Let me know if you need further clarification! 😊
It feels like Claygent used to work better
Would you be able to explain why ChatGPT Plus is better at performing these tasks than Claygent?
I understand Claygent Neon would be best for this but I would rather use my credits on things like email/phone verification
I guess, since as you said web search isn't available through the API, the Clay team had to create their own keyword search to search the web. Given Clay doesn't have the resources of OpenAI, it wouldn't be as good as theirs
Do you have any examples that come to mind that might help me better understand your perspective? As for ChatGPT Plus, it isn't necessarily performing better or worse; it's designed for single calls, whereas here you're performing tasks at a much higher rate, like 10 queries in 10 seconds. If you'd like, you can try running the same task multiple times in ChatGPT, and you'll likely see similar variation in the results. Interestingly, OpenAI is one of our clients: they use ChatGPT within Clay alongside our Claygent models, and they've shared positive feedback, which is on our website. That said, I understand AI can be quite complex and unpredictable, with many variations. Using more refined prompts and clear guidelines can significantly improve the output, and if you're not satisfied with the results, you can always re-run the row for better accuracy. Let me know if you have any further questions or concerns! 😊
Well, the example would be that basic prompt: "Act as a marketing consultant. Go to the website of the company /website and create a detailed breakdown of what this company does." Clay has been consistently incorrect 20-30% of the time, while ChatGPT has been correct every time
I just reran it and got 40% this time. It's similar with your prompts too
I’m sorry for that, Kierin. Let’s focus on fixing the actual issue here. 😊 Are you happy with the prompt I added, or would you prefer to modify it? If you’d like to make any changes, please go ahead and do so, and then try running the table across all rows. After that, we can re-run the rows that returned “No Data Found.” Let me know when you’ve done that, and we’ll move forward from there! 👍
I don't have a problem with coming up with prompts myself. My issue is that Claygent gives me approximately 30% incorrect data no matter what prompt I use. I re-ran yours and it returned 30% incorrect again. Having 3 out of 10 results be wrong, while still costing me money and forcing me to click play on every failure, is not good.
Is it common to be returned 30% incorrect data when using Claygent?
I see you just re-ran it with Clay's account and it returned 100% correct data. Are you sure it's not my API?
Actually, never mind, you got 80%
That's not common, but it can happen if the columns are constantly re-run; AI won't return the same response twice 😊. I've already updated it to use our credits and ran it for 300 rows so you can see a bigger sample. You can modify the prompt, switch it back to your API, and get similar results. From there, you can add the response as a column, use the table filters to find "no data found", and re-run those rows. You might get some extra results that way! 😊 Let me know how it goes!
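If you ever want to automate that filter-and-re-run step outside of Clay's UI, the underlying pattern is just a bounded retry loop. A rough sketch in Python (ask_model is a hypothetical stand-in for whatever enrichment call you make, not a Clay or OpenAI API):

```python
# Sketch of the "re-run rows until you get data" pattern, outside Clay.
import time
from typing import Optional

def ask_model(prompt: str) -> Optional[str]:
    # Hypothetical stand-in: replace with your real enrichment/LLM call.
    return None

def enrich_with_retries(prompt: str, max_attempts: int = 3, delay_s: float = 2.0) -> Optional[str]:
    for _ in range(max_attempts):
        answer = ask_model(prompt)
        # Treat empty answers and "no data found" as retryable misses.
        if answer and "no data found" not in answer.lower():
            return answer
        time.sleep(delay_s)  # brief pause so retries don't hammer rate limits
    return None  # still nothing after all attempts
```

Because the model's output varies between runs, a second or third attempt genuinely can succeed where the first returned nothing, which is the same effect as re-running the filtered rows in the table.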
I'm not trying to make a problem, btw. Also not angry; I love Clay. I just don't understand why it's suddenly not working for me. I have used Claygent hundreds of times with consistently good results
Like, do you see how many "no data found" results you got from running all those 300 rows? This has never happened to me before
Oh, don't worry, I'm super happy to help 🙂 I've also checked whether we had any other bugs, and I'm super grateful to have a chat with you!! I just want to make sure we focus on fixing the issue here. It's also possible that other prompts return more results, depending on their level of complexity. If you'd prefer more consistency, you can also use the Scrape Website enrichment and then feed that data to a ChatGPT enrichment 🙂 (see the sketch below for the general idea)
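For illustration, here's what that two-step "scrape, then summarize" pattern looks like outside Clay. This is only a sketch, not Clay's implementation: it assumes the requests, beautifulsoup4, and openai Python packages, an OPENAI_API_KEY in the environment, and example URL/model values.

```python
# Sketch: scrape a page's text first, then hand it to a chat model to summarize.
import os
import requests
from bs4 import BeautifulSoup
from openai import OpenAI

def scrape_text(url: str) -> str:
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    # Drop scripts and styles so the model sees only the visible page text.
    for tag in soup(["script", "style", "noscript"]):
        tag.decompose()
    return " ".join(soup.get_text(separator=" ").split())

client = OpenAI()  # reads OPENAI_API_KEY from the environment
page_text = scrape_text("https://example.com")[:20_000]  # crude cap to stay under token limits

completion = client.chat.completions.create(
    model="gpt-4o",  # example model
    messages=[
        {"role": "system", "content": "Act as a marketing consultant."},
        {
            "role": "user",
            "content": "Create a detailed breakdown of what this company does, "
                       "based only on the following page text:\n\n" + page_text,
        },
    ],
)
print(completion.choices[0].message.content)
```

Separating the scrape from the summarization makes the failure mode visible: if the scrape returns text, the model has something concrete to work from, so "no data found" answers become much rarer.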
Yeah 118 of the 300 did not return any data. That's almost 40%
And it's still running. It may get to 50% inconsistency with the larger data set
This is why I'm saying it feels like it's gotta be a bug and not user error
It ended at 120/300 returning "No data found". That's exactly 40%
Hey Kierin, I’m checking with the team to confirm everything! I’ll keep you posted with any updates. Thanks so much for your patience! 😊
Ok sounds good. Thank you
So you think it's a bug, right? Not user error
Please let me know. It hasn't been working since I reported it on Monday, so I've already gone quite a while without being able to use it
Hey Kierin, thanks for the follow-up, and I appreciate your patience! Bo is out today, but I asked him whether there were any updates from his conversation with the engineering team; we'll share them as soon as he's available. For now, I re-ran the Claygent column using 4o and got a few more results, bringing it down to ~40 outstanding inaccurate results. I refunded the credits and added a bit more back on top as well. Once we get those insights we'll know more about what the cause could be with these AI models, but in my experience it does depend on the model and the wording of the prompt. One example with ChatGPT models is that they like to run a lot of Google searches to back up their findings, but if you constrain them to browse only the site, they sometimes can't navigate to a subpage when the main page doesn't have enough information. We'll keep you posted. Thanks!
Hey Kierin, just wanted to flag that we're still waiting on some info from the engineers. These are typically ranked by priority and scope; we'll keep you posted as soon as we hear back! 😊
Ok, thank you. Please keep me updated. As I mentioned, I can't really use Clay without Claygent working, so it's a big issue for me
Totally understand! I went ahead and ran the entire table, and I got some great results. 😊 Hopefully, that helps get you unblocked in the meantime!
Over 25% of them are still returning "No Data", but I can probably continue the table build-out in the meantime. The problem is that I obviously won't be able to properly use the output until this is fixed, because of how many are wrong. Also, if every Claygent column has a 20-40% error rate, the errors compound when I chain multiple Claygents one after another: at 70% accuracy each, two chained columns leave only about 0.7 × 0.7 ≈ 49% of rows fully correct
Yes, I totally get it. We'll keep you posted on any changes here, Kierin :)
Hey there - just wanted to check in here to see if you needed anything else! Feel free to reply back here if you do.