I got around 150k results from serper.dev through an HTTP API call, but I'm hitting Clay's 50k row limit per table. Is there a workaround, or an easier way to segment the list? I'm using Write to Table now.
For handling data beyond the 50,000-row limit per table, there are two main options:

1. If you're an Enterprise customer, you can use Passthrough tables. These tables automatically process incoming data, enrich it, and forward it to a designated destination before deleting the original entries, which lets you bypass the standard row limit while keeping your tables manageable.

2. If you're not an Enterprise customer, you'll need to segment your 150k results into multiple tables, since Clay tables are limited to 50,000 rows and 70 columns each.

Note: Passthrough features do not work with CSV files or bulk uploads at high volumes.
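If you do end up segmenting manually, one low-effort approach is to split the serper.dev results into 50k-row CSVs before they ever reach Clay, then import each file into its own table. Here's a minimal sketch, assuming `results` is the list of record dicts you already pulled from the API (the file names and `write_chunks` helper are just illustrative):

```python
import csv

CHUNK_SIZE = 50_000  # Clay's per-table row limit

def write_chunks(results, chunk_size=CHUNK_SIZE):
    """Split `results` into CSV files of at most `chunk_size` rows,
    one file per Clay table."""
    # Union of all keys across records, so every column is captured
    # even if some records are missing fields.
    fieldnames = sorted({key for row in results for key in row})
    for i in range(0, len(results), chunk_size):
        chunk = results[i : i + chunk_size]
        path = f"serper_results_part_{i // chunk_size + 1}.csv"
        with open(path, "w", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=fieldnames)
            writer.writeheader()
            writer.writerows(chunk)  # missing keys are written as ""
        print(f"Wrote {len(chunk)} rows to {path}")
```

For ~150k rows this produces three files, each safely under the limit, which you can then upload as three separate tables.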