With Passthrough Tables, you can define how many rows are retained (the default is 100), and you can view all the retained data at once in your table.
You can also use the row formatting controls to set custom row limits: click the "Rows" button to set a starting row and a row limit, or use "Show All Rows" to display everything at once. [1]
So the display isn't limited to small chunks like 10k or 20k rows; you control it through your retention settings and view preferences.
No, Bulk Enrichment is different from Passthrough Tables. Both handle unlimited rows on Enterprise, but they work differently:
Bulk Enrichment processes millions of records from your CRM (like Salesforce) and sends results directly to external destinations without storing data in Clay. [1] You access it via New → Bulk Enrichment from your homepage. [2]
Passthrough Tables are regular Clay tables that automatically delete older rows to maintain continuous data flow while staying within your workspace.
For unlimited rows in a standard table, you'd use Passthrough Tables, not Bulk Enrichment.
You're right that Enterprise includes unlimited rows, but you need to enable "Passthrough Tables" to access this feature.
Regular tables are still capped at 50,000 rows even on Enterprise. Passthrough Tables work differently - they handle rows on a rolling basis and automatically delete older rows to maintain continuous data flow.
To enable Passthrough Tables for unlimited rows, you'll need to activate this specific Enterprise feature in your table settings. [1]
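The rolling behavior described above (keep the newest N rows, automatically drop the oldest) can be sketched with a fixed-size queue. This is an illustrative model only, not Clay's actual implementation; the `retention` parameter stands in for the retained-row setting:

```python
from collections import deque

def make_passthrough_table(retention=100):
    """Illustrative sketch of rolling retention: a bounded queue that
    silently evicts the oldest row once the retention limit is reached."""
    return deque(maxlen=retention)

# With retention=3, appending a fourth row evicts the first.
table = make_passthrough_table(retention=3)
for row in ["row1", "row2", "row3", "row4"]:
    table.append(row)

print(list(table))  # ['row2', 'row3', 'row4']
```

The point of the sketch is that total throughput is unbounded even though the table's size at any moment is capped.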
If you still need help with Claygent's performance or have other questions, I'm here to assist. Would you like to provide more details about what you're trying to resolve or understand?
Claygent performance can vary based on several factors. Recent updates to Claude 3.7 Sonnet doubled estimated token usage per run, which may slow processing but improves stability.
Common causes of delays include:
- Rate limits on heavier prompts or lower API tiers
- Large data requests that need breaking into smaller batches
- Network connection issues
However, we've also introduced GPT-5, which improved research accuracy by 8-17% while consuming 38% fewer tokens, making it faster overall. [1]
To optimize performance, use specific queries, monitor your usage limits, and break large requests into smaller batches.
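The last tip, breaking a large request into smaller batches, comes down to simple chunking. Here's a generic sketch (the batch size is an assumption; pick one that fits your rate limits):

```python
def batches(records, batch_size):
    """Yield successive chunks of `records`, each no larger than `batch_size`."""
    for start in range(0, len(records), batch_size):
        yield records[start:start + batch_size]

# Example: 10 records split into batches of at most 4.
rows = list(range(10))
print([len(batch) for batch in batches(rows, 4)])  # [4, 4, 2]
```

Submitting each chunk separately keeps individual runs small enough to stay under rate limits and makes retries cheaper when one batch fails.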