Hello! I'm just joining the community and have a few questions.
How does Clay crawl websites to extract information? Does it scan multiple pages? Are there limits on the number of pages or characters? Can we extract a company logo, and if so, how would that work? More generally, I'd like to understand in detail how website crawling works in Clay.
How can we integrate Clay with our database/system under our current plan? Would this work via an API? Ideally, we'd like to minimize the manual work involved in enriching accounts and syncing the data back into our system. Any details you can share on this would be much appreciated!
Website Crawling:
Database Integration:
Yes, Clay integrates via HTTP API on all plans. Use it for enrichment (adding data to existing records) or as a source (importing data to create new tables). [3] You can also use webhooks to send data to Clay, process it, then push results back to your system via HTTP API.
Clay connects to any tool with API access using our HTTP API feature. [4]
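To make the webhook half of that round trip concrete, here is a minimal sketch of posting one record to a Clay webhook source using only the Python standard library. The webhook URL and the field names are placeholders, not Clay's real endpoint; copy the actual URL from your Clay table's webhook source settings.

```python
import json
import urllib.request

def make_webhook_request(webhook_url: str, row: dict) -> urllib.request.Request:
    """Build an HTTP POST request that sends one record to a Clay webhook.

    The URL and payload fields are illustrative placeholders; use the real
    webhook URL shown in your Clay table's source settings.
    """
    body = json.dumps(row).encode("utf-8")
    return urllib.request.Request(
        webhook_url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Example (built but not sent): one company record destined for a Clay table.
req = make_webhook_request(
    "https://example.invalid/clay-webhook",  # placeholder URL
    {"company_name": "Acme Inc", "domain": "acme.example"},
)
# To actually send it: urllib.request.urlopen(req)
```

Each incoming payload becomes a row in the Clay table, and an HTTP API enrichment column can later push results back out the same way.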
Is that what you were looking for?
If you need any more help understanding how website crawling or API integration works with Clay, I’m here to assist. Would you like to provide more details about what you’re trying to achieve or any specific questions you have?
"Clay uses AI-based enrichment to scrape websites" didn't really help here. I need more specific technical details from you. This conversation may need to be handed over to someone from Engineering or Product. Your answer about extracting logos from websites is also unclear. Additionally, the documentation link you shared for HTTP API integrations is not correct. At the moment, the answers provided are too generic and not very helpful.
I'm sorry for the frustration. I’ll connect you to a human agent who can provide more detailed assistance.

Our support team has received your message and we'll get back to you soon! We're currently outside of our standard office hours (9 AM to 9 PM EST, Monday through Friday), so it may take a little longer for us to respond.
If you’re dealing with a specific table, drop the URL below so we can help you quicker. Otherwise, someone from our team will be in touch soon!

Hi Bruno, thanks for reaching out! If you're looking to extract a company logo, I would recommend using a different action called "Enrich Company", which will return the company logo's URL alongside other data points like size, industry, and website:
Sharing more about the capabilities of Claygent (our AI web scraper) below! There are no limits on the number of pages or characters.
- Scrapes JavaScript-heavy dynamic pages
- Can extract data that isn't in a standard HTML structure
- Works with natural-language prompts (e.g., "Find the pricing page URL" or "Extract the company logo from this domain")
- The Claygent Navigator feature can take human-like actions on pages
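To give a rough idea of what "extract the company logo from this domain" typically reduces to under the hood, here is a hedged stand-alone sketch (not Claygent's actual implementation) that pulls a logo URL out of a page's `og:image` meta tag using only Python's standard-library HTML parser:

```python
from html.parser import HTMLParser

class OgImageParser(HTMLParser):
    """Collects the first og:image meta tag, a common home for a company logo."""
    def __init__(self):
        super().__init__()
        self.logo_url = None

    def handle_starttag(self, tag, attrs):
        if tag == "meta" and self.logo_url is None:
            d = dict(attrs)
            if d.get("property") == "og:image":
                self.logo_url = d.get("content")

def extract_logo_url(html: str):
    """Return the og:image URL from raw HTML, or None if absent."""
    parser = OgImageParser()
    parser.feed(html)
    return parser.logo_url

# Sample page with a logo declared in its Open Graph metadata.
sample = (
    '<html><head>'
    '<meta property="og:image" content="https://acme.example/logo.png">'
    '</head></html>'
)
```

A real scraper would also fall back to favicons or `<img>` heuristics when no Open Graph tag exists; Claygent handles those cases (plus JavaScript-rendered pages) for you.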
For your second question, do you mind sharing more about what your current system looks like, i.e., are you using a CRM?
Hi Natalie! Thank you so much for jumping into it! I’ll look into it soon and get back to you with more details 🙂
Awesome thanks Bruno!
Hey. So,
What you're saying is that if I go to Add column → Add enrichment → Enrich Company, that would automatically extract all those data points for me for 1 credit per row. Is that right?
What if I only need the company logo URL?
Couldn't I edit my current prompt in the Subcontractor Extraction column to also extract the logo URL?
Regarding the web scraper: does this mean Claygent will simply go through all available/accessible pages on a website?
On the integration topic: we do have a CRM (Salesforce), but we don't want it to be our source of truth—our database should be, and we can sync data from there to Salesforce. In my head, there would be a "low-tech" integration between our database and Clay where I could (i) extract data from a database table (e.g., run a SQL query for the company websites I want to enrich), (ii) send it to Clay for enrichment, and (iii) get the enriched data back into the database somehow—maybe not directly, but at least in a format that our internal systems can use to update those companies. Does that make sense? I'd like to automate this process.
Thanks Bruno! I filmed a loom in my test account to walk through an explanation: https://www.loom.com/share/f4b42fa7a4344bdab47a5c76831fe0a0. Let me know if you have further questions or need additional help! 😊
Hey Natalie, thank you so much for the video!
It doesn't seem that Enrich Company is free (please see the screenshot below). It shows 1 credit per row for me in the UI. Could you clarify this, please?
Regarding the web scraper, my question is mainly about what happens when I don't specify a particular page to look at—just something like “extract from this website.” Does it crawl through every possible page it can find?
And finally, regarding integration: the connection with Salesforce is clear to me, but I'm interested in connecting to a database (e.g., Postgres). Do you have documentation on how this would work via an API?
