Is there a list somewhere of factors that can affect how quickly a table runs? I was running a few tables yesterday with 20k rows and none were finished after 4 hours of running. This morning I ran one with 20k rows on the exact same table and it was finished within an hour. Any help would be appreciated so I can optimize how I'm using Clay!
Several factors affect table processing speed in Clay:

1. Table size - larger tables (like your 20k rows) naturally take longer to process
2. Complexity - multiple tables linked together or complex workflows slow performance
3. Concurrent actions - running multiple operations simultaneously affects processing time

To optimize performance:

* Break large data into smaller batches
* Test with small samples (5-10 rows) before scaling to full tables
* Use proper indexing for frequently used columns
* Monitor your usage limits and API quotas
* Ensure search queries are specific and well-defined
1. All tables I've run have been 20k rows, so this shouldn't affect it
2. I don't have any tables linked together
3. What do you mean by "running multiple operations simultaneously". Do you mean running multiple Clay tables at once?
Yes, "running multiple operations simultaneously" refers to running multiple tables or actions at the same time in Clay. When you have multiple tables processing data concurrently, it can affect performance and cause latency issues. This is especially true with large tables (like your 20k rows) or complex workflows. Clay has limits on concurrent operations - for example, a maximum of 10 tables can be connected when sending data between tables. Running fewer operations simultaneously can help improve processing speed.
Ah okay so the only reason is that I was running 4 20k tables at once yesterday when today I only ran 1 20k table
Yes, running multiple large tables simultaneously is a key factor affecting processing speed. When multiple actions run concurrently, it creates latency issues. Other optimization factors include:

* Breaking large data into smaller batches
* Using specific, well-defined search queries
* Monitoring usage limits and API quotas
* Implementing proper indexing for frequently used columns
* Testing with small samples (5-10 rows) before scaling
Is there any way to speed this up? I have 3 list builders on my team right now all running lists simultaneously. What happens if I have 10 list builders - will Clay not be able to handle this?
Yes, you can optimize performance when multiple list builders are working simultaneously:

1. Break large datasets into smaller batches instead of running full 20k tables at once
2. Test with small samples (5-10 rows) before scaling to full tables
3. Use specific, well-defined search queries
4. Consider using Bulk API for efficient data processing of large volumes

Clay has a limit of 10 connected tables maximum, which includes tables across workbooks. With 10 list builders running simultaneously, you'll likely experience performance issues due to latency when multiple large tables and actions run concurrently.
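As a rough illustration of the batching idea above, here is a minimal Python sketch that splits a large CSV export into smaller files you could import into Clay as separate, smaller tables. The filename and batch size are assumptions for the example, not Clay-specific values:

```python
import csv


def split_csv(path, batch_size=5000):
    """Split a large CSV into smaller batch files, repeating the header in each."""
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        batch, part, outputs = [], 0, []
        for row in reader:
            batch.append(row)
            if len(batch) == batch_size:
                part += 1
                outputs.append(_write_batch(path, part, header, batch))
                batch = []
        if batch:  # write any remaining rows as a final, smaller batch
            part += 1
            outputs.append(_write_batch(path, part, header, batch))
    return outputs


def _write_batch(path, part, header, rows):
    # e.g. "contacts.csv" -> "contacts_part1.csv"
    out_path = f"{path.rsplit('.', 1)[0]}_part{part}.csv"
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(rows)
    return out_path
```

A 20k-row export split with `batch_size=5000` would yield four part files, which can then be run one at a time rather than all at once.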
Can I talk to a person please - that didn't help
Hey Teddy, thanks for reaching out. Can you do a hard refresh for me? A hard refresh will clear the cache for the page that you're on and load the most recent version of the page. This can often resolve issues with outdated content or errors that have been fixed on the server side but are still being displayed due to cached data. Here are the steps you can follow depending on your operating system and browser:

For Mac:

* Chrome/Firefox: Press Cmd + Shift + R
* Safari: Press Cmd + Option + R
For PC:

* Chrome/Firefox/Microsoft Edge: Press Ctrl + F5
If it doesn't resolve it, do you mind sending the link (URL) to the table so we can take a look?