My Claygent is responding with "no data found" when it shouldn't. If I give the same prompt that I gave Claygent to ChatGPT, it does what I tell it to (see attached images). This is in a workflow; I will test outside of a workflow and update if that fixes the issue. https://app.clay.com/shared-table/share_fGKHk4CaCHxr
Yeah, it's still pretty trash even when it's not in a workbook. The prompt I'm using currently is very simple because I assumed the original one I gave it was too complex. However, it just doesn't seem to be giving good responses. Attached is what I got from ChatGPT and what I got from Claygent with basically the same prompt.
Bruno (Clay) - you mentioned GPT cannot scrape websites. I used the paid version of ChatGPT 4o, which actually does scrape websites. You can see in my query that it searched the website to perform the task. LinkedIn will not work for my task because my end goal is to find all the products the company sells and get a breakdown, and that must be taken from the website. The problem I am encountering is that Claygent says it can't find any info when the info is definitely available. As stated, I gave the same query to ChatGPT and it was able to scrape the website to find the information, so I am not sure why Claygent cannot. See this video for an explanation: https://www.loom.com/share/9c9234b889e04d05813e0feb9de1dc79?sid=09e46d02-fe81-4f18-af42-566ca7e053b7 P.S. I am a relatively advanced user of Clay, so you don't need to worry about explaining the basic features to me. Thank you though.
I want to note that the prompt I gave Claygent is just "summarize the website". I did that because my more complex prompts also weren't working, so I stripped it down to a task I know should be possible. Claygent isn't performing even this basic summarization, which you could do just by visiting the website and reading it, so that tells me the problem is with Claygent.
We haven't heard back from you in a bit, so we're going to go ahead and close things out here - feel free to let us know if you still need something!
Bruno (Clay) Clay (Welcome Bot) I added to the bug report 2 days ago. Please don't close this.
Hi Kieran, sorry for the delay! What you're seeing as references in your GPT-4o query outputs are actually references to training data; they are reasoning guidelines rather than live scrapes of the websites.

Regarding your goal of identifying the current product line a company is selling, Claygent is a suitable choice for this task. When prompting Claygent, approach it as if you were conducting the research manually, then reverse-engineer those steps into a repeatable prompt you can give to Claygent. The task of summarizing a company's activities can be handled effectively using the prompt I outline in this video. Remember, the reliability of the output depends on two key factors: how well-crafted your prompt is and whether the relevant information is actually available on the website you're targeting.

I also see that you're running Claygent with 4o on your own API key, and it looks like your usage tier may be causing the performance issues. We now require 30,000 TPM (tokens per minute) for ChatGPT columns and 450,000 TPM for Claygent. If you're trying to use Claygent with a model below GPT-4, or on Tier 2, or with GPT-3.5 on Tier 4, it won't work; ChatGPT columns, however, will work just fine. You can check your ChatGPT usage and limits at these links:
• Usage
• Limits
• Usage Tiers

Their support team will grant the request if you have prepaid the minimum amount required, and they're quick to assist if you need to increase your limits or troubleshoot any issues. I hope this was helpful.
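As an aside, you can verify which tier you're on programmatically: OpenAI's API reports per-minute limits in response headers such as `x-ratelimit-limit-tokens`. A minimal sketch below compares that value against the thresholds quoted in this thread (the 30,000 / 450,000 TPM figures come from this message; `tier_supports` and the sample header value are illustrative, not part of Clay's or OpenAI's SDKs):

```python
# Thresholds quoted in this support thread (assumed, not an official API):
CHATGPT_COLUMN_REQUIRED_TPM = 30_000
CLAYGENT_REQUIRED_TPM = 450_000

def tier_supports(headers: dict, required_tpm: int) -> bool:
    """Check whether an account's tokens-per-minute limit, as reported in
    OpenAI's rate-limit response headers, meets a feature's threshold."""
    limit = int(headers.get("x-ratelimit-limit-tokens", 0))
    return limit >= required_tpm

# Hypothetical headers from a completions response on a lower tier:
headers = {"x-ratelimit-limit-tokens": "80000"}
print(tier_supports(headers, CHATGPT_COLUMN_REQUIRED_TPM))  # True: ChatGPT columns OK
print(tier_supports(headers, CLAYGENT_REQUIRED_TPM))        # False: below Claygent's bar
```

In practice you would read the headers off any real completions response (e.g. `response.headers` when calling the API over HTTP) rather than hard-coding them as above.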
If there's anything else I can assist you with, please let me know!
Bruno (Clay) - ChatGPT Plus can search the web; it is not just referencing its training data. It uses a headless version of Bing Search to perform keyword searches. Here is an article by OpenAI about how it works: https://help.openai.com/en/articles/8077698-how-do-i-use-chatgpt-browse-with-bing-to-search-the-web The problem I am having is that my Claygent is not working for either the very simple task of just summarizing a website or the complex workflow of identifying products that you said it should be able to handle. I am not sure if it is something I am doing wrong or a bug in my environment, but I gave you the ChatGPT example because I know the task should be possible, given that ChatGPT can search the website and give me a result.
It may be my usage tier that is blocking me. I would be surprised, given how simple the prompt is and that it sometimes works, but it is a possibility.
Bruno (Clay) - are you saying the usage tier is likely my issue? I'm a little confused about what my solution should be.
Hey Kieran, thanks for reaching out! 😊 I can see why this might be a bit confusing. While ChatGPT and Clay have similar features, they handle certain tasks differently. The ChatGPT API, for example, doesn't have internet access, meaning it can't browse the web or summarize live websites. ChatGPT Plus, in the regular chat interface on their platform, does have a browsing feature that can search the web, but that feature is not available through the API. That's why we built Claygent and Claygent Neon.

In your case, it might be a limitation of your current plan or settings that's preventing the workflow from working as expected, but we can easily check. Could you send me the real table URL (the one from the URL bar) so I can investigate further? Let's figure this out together! 😊 Let me know if you need any other clarification.
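Because the chat-completions API can't browse, the usual workaround (and roughly what agents like Claygent do for you) is to fetch the page yourself, strip it to plain text, and pass that text into the prompt. A minimal stdlib-only sketch, with the model call left as an illustrative comment since it needs an API key:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect the visible text fragments from an HTML document."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        text = data.strip()
        if text:
            self.chunks.append(text)

def page_text(html: str) -> str:
    """Reduce raw HTML to a single whitespace-joined text string."""
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)

# The extracted text would then be embedded in the prompt, e.g. (illustrative):
# client.chat.completions.create(
#     model="gpt-4o",
#     messages=[{"role": "user",
#                "content": f"Summarize this website:\n{page_text(html)}"}],
# )
```

For example, `page_text("<h1>Acme</h1><p>We sell widgets.</p>")` yields `"Acme We sell widgets."`, which is a form the API can summarize without any browsing ability. A production version would also drop `<script>`/`<style>` contents and truncate to the model's context window.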
Bo (Clay) This is the URL: https://app.clay.com/workspaces/276918/workbooks/wb_XvxcSSPANrf3/tables/t_Zbf7cESxQzF5/views/gv_SpQGwsts7VuG. As you can see from the screenshot, it does sometimes work, but only 2 out of 8 times in this test case. After checking the URLs by hand, I confirmed that the information is there, and the same prompt works on those URLs with ChatGPT Plus, so I am not sure what is causing it to fail here.
Claygent was also working fine for me in the past on more complex tasks, so I imagine an update changed something about the way it works. It could be the token limitation Bruno brought up; I will look into upgrading when I have time, but given that it does sometimes work, I'm not sure why that would be the issue.
You can’t directly compare ChatGPT and Clay in this way. The reason ChatGPT might understand your simple prompts is that you’ve been training it over time, so it knows what you mean. The ChatGPT API doesn’t have that same context; when you use a simple prompt here, it’s like giving a stranger vague instructions with very little information, which might not make sense to the model.

Check out this GPT bot, which can help structure better prompts. We also have pre-made prompts in our prompt library that can help, and I’ve already added some for you in the Claygent Neon column. I also noticed you’re using GPT-4o mini, which isn’t the most powerful model, so it’s not making things easier for you (see the attached screenshot).

It’s important to remember that ChatGPT might give great results in a one-off instance, but over time it can derail or hallucinate. That’s why prompts need to be well-structured. Since the model doesn’t return the same results every time, re-running columns with “no data found” can often change the outcome.

I’ve added three versions: Claygent Neon, Claygent with GPT-4o, and another with GPT-4o Mini. Check the output for each row to compare results and get a better sense of how each model works. Let me know if you need further clarification! 😊