Hey team, I am getting this error when I run Claygent. May I kindly ask for assistance? The error is in the step:
I notice you mentioned an error, but I don't see the attached image in our conversation. Without seeing the specific error message, I can help you troubleshoot common Claygent issues:

1. Check your OpenAI API requirements:
   • Claygent requires 450,000 TPM (tokens per minute).
   • If you're on a free plan or tier 1, this won't be sufficient.
2. To improve task processing:
   • Make your search queries more specific.
   • Break large data requests into smaller batches.
   • Verify your network connection is stable.
   • Check your usage limits and API quotas.

Could you please share the specific error message you're seeing? This will help me provide more targeted assistance.
Here is the table and the column is called "Company Website Finder": https://app.clay.com/workspaces/425830/workbooks/wb_5ajB7Crf4hbi/tables/t_6tnHknss89ic/views/gv_4FAjoWeFxR9A
To resolve errors when running Claygent, try these steps:

1. Ensure your search queries are specific and well-defined.
2. Break large data requests into smaller batches.
3. Monitor your usage limits, tiers, and API quotas.
4. Check your network connection stability.
5. Verify your input data format is correct.
6. Double-check prompts and search parameters before running tasks.
7. Use our metaprompters to craft better prompts.

If you're experiencing issues related to OpenAI API limitations:

1. Check your API usage tier on the OpenAI API Usage Limits page.
2. Ensure your plan meets the required tokens per minute (TPM): 30,000 TPM for ChatGPT columns and 450,000 TPM for Claygent (see the quick sketch below for one way to check this yourself).
3. Upgrade your plan if necessary, as free plans or tier 1 won't support these requirements.

Can you provide more details about the specific error message you're seeing when running Claygent?
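If you want to double-check your current TPM limit yourself, here's a rough sketch that reads the rate-limit headers OpenAI returns on any API response. It assumes Node 18+ (global fetch), an OPENAI_API_KEY environment variable, and uses an example model name; it is not part of Clay itself.

```typescript
// Rough sketch: check your OpenAI tokens-per-minute (TPM) limit by reading the
// rate-limit headers returned on any API response. Assumes Node 18+ and that
// OPENAI_API_KEY is set; the model name below is just an example.
async function checkTpmLimit(): Promise<void> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: "ping" }],
      max_tokens: 1,
    }),
  });

  // OpenAI includes the account's current limits in the response headers.
  const tpm = res.headers.get("x-ratelimit-limit-tokens");
  const remaining = res.headers.get("x-ratelimit-remaining-tokens");
  console.log(`TPM limit: ${tpm}, remaining this minute: ${remaining}`);

  if (tpm && Number(tpm) < 450_000) {
    console.log("Below the 450,000 TPM Claygent needs - consider upgrading your tier.");
  }
}

checkTpmLimit().catch(console.error);
```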
Looks like you're still looking for help. We're escalating this over to our support team and someone should be in touch soon. In the meantime, feel free to include any other information that may be helpful in solving your issue!
Hey Valentin, thanks for reaching out and sorry to hear this was causing some friction. Which column are you getting this error in? And are you referring to the "Cannot read properties of null (reading 'scoredChunks')" message, or a different aspect of the error?
Hey Owen, I mentioned in the thread of my message that it's the "Company Website Finder" column, and yes, I'm referring to that error. But also, do you see that the domain is visible in the webhook but not in the output from the prompt?
Hey there Valentin, thanks for reaching out - jumping in for Owen here. Taking a look at the exact error messages on our end, the issue is that the "Company Matcher" column was never able to properly access these websites, so when this AI column took in the results from that column, the same issue persisted. https://downloads.intercomcdn.com/i/o/w28k1kwz/1320523793/856a46b5bb36562d83d9e4589bfb/image.png?expires=1735933500&signature=5c49c13206ca87a5c3e81837034bf815786cd718b4b9654e5a3b57a8248498ba&req=dSMlFsx8noZWWvMW1HO4zV0nnmIB77uYoxL8leG0%2F%2FX8FrAy%2FuKXLgF3tDr%2F%0AI27%2B%0A
Hey, so the problem is in the Company Matcher prompt? Because it accesses the website for 80% of the companies.
So the error message appears when Clay is trying to read data from a website (factsfound.news in this case) but encounters a null value in the "scoredChunks" property. This typically happens when the scraping integration can't properly access or parse certain elements from the webpage.

The most common reasons for this error are:
1. The website might be blocking automated access.
2. The content you're trying to scrape might not be loading properly.

To resolve this, you could try:
1. Verify the URL is correct and accessible.
2. Check if the website content is dynamically loaded (in which case you might need a different scraping approach).

If you keep seeing this error, it would help to share the full scraping setup you're using so I can suggest more specific solutions. I just went into your table, though, and I haven't been able to find that column again - is this expected?
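To illustrate the failure mode, here's a simplified sketch - not Clay's actual code. The scrapePage function and the ScrapeResult shape are hypothetical stand-ins; the point is just that reading a property on a null scrape result produces exactly this error message.

```typescript
// Simplified sketch of the failure mode - not Clay's actual code.
// `scrapePage` and the `ScrapeResult` shape are hypothetical stand-ins.
interface ScrapeResult {
  scoredChunks: { text: string; score: number }[];
}

// Pretend scraper: returns null when the site rejects automated access.
async function scrapePage(url: string): Promise<ScrapeResult | null> {
  const res = await fetch(url).catch(() => null);
  if (!res || !res.ok) return null; // blocked or unreachable
  return { scoredChunks: [{ text: await res.text(), score: 1 }] };
}

async function summarize(url: string): Promise<string> {
  const result = await scrapePage(url);

  // Without this guard, `result.scoredChunks` throws
  // "Cannot read properties of null (reading 'scoredChunks')".
  if (!result) {
    return `Could not access ${url} - the site may block automated access.`;
  }
  return result.scoredChunks.map((c) => c.text).join("\n");
}

summarize("https://factsfound.news").then(console.log);
```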
Hey Bo, yes, it should be there - I haven't removed anything. What's strange to me is that it works for 80% of the websites and outputs what I need, so the approach shouldn't be wrong.
Can you let me know which rows exactly? I'm still not able to find it! You can also create a new view and only highlight the rows that are affected - I tried finding 'https://noordhollandsdagblad.nl' but haven't been able to find it. You can also try re-running the row to see if that fixes it! :) https://downloads.intercomcdn.com/i/o/w28k1kwz/1321230739/f5b7b4bc0a2865a275a01bc56b10/CleanShot+2025-01-04+at+_28ncjNXWTi%402x.png?expires=1736005500&signature=ff9e467c20a0ae2d24fd3da7639ef2092ba5c430ffdbc11a7a79f339334418f9&req=dSMlF8t9nYZcUPMW1HO4zYLUMqYJ3VhpcF7v6Fs28TcYCKKauWMRfYHRfWSS%0ArNz%2F%0A
Rows 2, 8, and 10
Hey Valentin, I do see this for those rows on the backend: "Error accessing website: All access was rejected." Have you noticed this with a lot of values inside your table, or only those 3?
Hey there - just wanted to check in here to see if you needed anything else! Feel free to reply back here if you do.
We haven't heard back from you in a bit, so we're going to go ahead and close things out here - feel free to let us know if you still need something!
Hi Valentin I.! This thread was recently closed by our Support team. If you have a moment, please share your feedback:
Thanks! We've reopened this thread. You can continue to add more detail directly in this thread.
It is happening on a lot of cells; the workflow is taking hits because of this.
Hey Valentin - I noticed this happened 7 times out of 78 rows, but in all of those steps, I can still see the "Selected Company" data that you were looking for. Which data were you specifically trying to pull from the website that you're not currently getting?

For those blocked sites, we could try using Scrape Website to extract the data and feed it to the ChatGPT integration instead of web research. Some websites have privacy settings that block LLMs from accessing their data to prevent model training. I've tested with our other LLMs too and got the same results. Let me know what specific data points you're after and we can explore alternatives.

https://downloads.intercomcdn.com/i/o/w28k1kwz/1324887545/f9b77b2dd95e5ac6e9f046845018/CleanShot+2025-01-07+at+_50M6tkzDpj%402x.png?expires=1736274600&signature=be17f6ba4dad61329f93337197e41cb88031d80cbcab995af2628503019e8934&req=dSMlEsF2moRbXPMW1HO4zbi0%2FMdKILJFxtx4W0syA5jwH%2BT79TZ0dGrvU3M%2F%0AbxjI%0A
https://downloads.intercomcdn.com/i/o/w28k1kwz/1324886549/9ef225e9a26877ce3dfc1bd14cd3/CleanShot+2025-01-07+at+_12sOHig5Zg%402x.png?expires=1736274600&signature=8e8f9724ac9b1e60e954576000a61306415a2ace9f1e3073e2687d0bab69e5d1&req=dSMlEsF2m4RbUPMW1HO4zYwNag5YWO1minofBOYWxcW1Av6b3yWo5jA3Tmz0%0AFtvD%0A
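To make that alternative concrete outside of Clay, here's a rough sketch of the scrape-then-ChatGPT idea. It assumes Node 18+ and an OPENAI_API_KEY environment variable; the URL, model name, and tag-stripping step are just examples, not the integration's actual behavior.

```typescript
// Rough sketch of the "scrape the page, then let ChatGPT extract what you need" idea.
// Assumes Node 18+ and an OPENAI_API_KEY env var; the URL and model are just examples.
async function extractCompanyInfo(url: string): Promise<string> {
  // 1. Scrape the raw page (a blocked site fails here, not in the LLM step).
  const page = await fetch(url);
  if (!page.ok) throw new Error(`Could not access ${url}: ${page.status}`);
  const html = await page.text();

  // 2. Strip tags crudely and cap the length so it fits in the prompt.
  const text = html
    .replace(/<script[\s\S]*?<\/script>/gi, "")
    .replace(/<[^>]+>/g, " ")
    .slice(0, 8000);

  // 3. Feed the text to the chat completions API for extraction.
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [
        { role: "system", content: "Summarize what this company does in one sentence." },
        { role: "user", content: text },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

extractCompanyInfo("https://example.com").then(console.log).catch(console.error);
```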
I want to extract the information on the home page and the "About us" section so the next prompt can categorize the companies.
To extract company information effectively for the companies whose websites the AI can't access:

1. Use the "Find Redirect Page" integration instead of trying to find the redirect through prompts.
2. For scraping the homepage:
   - Use the Scrape Website integration.
   - In the input settings, specify to only return the body content to avoid excess data.
3. For the About Us page:
   - Check the Sitemap integration (already in your table) to locate the About Us URL (see the sketch below for roughly what that lookup does).
   - Apply the same scraping approach to get focused content.

This way, you'll get data for the rows that are currently missing it! :) https://downloads.intercomcdn.com/i/o/w28k1kwz/1325037356/980fdbc180d67f5e940c1803f0e2/CleanShot+2025-01-07+at+_41t1tOLiPo%402x.png?expires=1736280900&signature=7380bbdf9c8d0332115d515ec1113f4e1d4ce7d74c19dceec831e46592a55339&req=dSMlE8l9moJaX%2FMW1HO4zR1LftHzLb3eHXe5NlsHnkAe0W%2Fr0KPtuFCOLzxs%0AMx8W%0A
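For the sitemap step, here's roughly what that lookup is doing behind the scenes - a sketch, not the integration's actual code. It assumes Node 18+ and that the site exposes a sitemap.xml at the default path; the domain is just an example.

```typescript
// Rough sketch of the sitemap idea: pull sitemap.xml and look for an "about" page URL.
// Assumes Node 18+; the domain is just an example and real sitemaps vary in structure.
async function findAboutPage(domain: string): Promise<string | null> {
  const res = await fetch(`https://${domain}/sitemap.xml`);
  if (!res.ok) return null; // no sitemap exposed at the default path

  const xml = await res.text();

  // Grab every <loc>...</loc> entry and pick the first URL that looks like an About page.
  const urls = [...xml.matchAll(/<loc>\s*(.*?)\s*<\/loc>/g)].map((m) => m[1]);
  return urls.find((u) => /about/i.test(u)) ?? null;
}

findAboutPage("example.com").then((url) =>
  console.log(url ?? "No About page found in the sitemap")
);
```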
Hey there - just wanted to check in here to see if you needed anything else! Feel free to reply back here if you do.
We haven't heard back from you in a bit, so we're going to go ahead and close things out here - feel free to let us know if you still need something!
Hi Valentin I.! This thread was recently closed by our Support team. If you have a moment, please share your feedback: