Is there any way to tell when exactly the scheduled run will run? I’m sending new rows out via the HTTP API, and I want to process the output in batches, not individually. Since this is scheduled to run weekly, maybe I could run the processing the day after the Clay run, but I need to know the date of the Clay run. Or is there another workaround for processing new rows (maybe more of a workflow/automation/engineering question than a Clay question)?
We don't provide an exact date for scheduled runs, but we offer flexible scheduling options. You can set sources to run daily, weekly, or monthly. For processing in batches, you could:
1. Schedule your source to run weekly
2. Set up a scheduled column to process the data the day after
This approach would create a consistent workflow for batch processing. If you need more precise control or a custom solution, you might want to explore our API capabilities further. If you need additional assistance, please ask to speak with a human agent for more detailed guidance.
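The "process the day after" idea above is easy to sketch outside Clay. This is a minimal example, assuming you have the run's timestamp available in ISO 8601 form (the endpoint and field names are hypothetical, not part of Clay's API):

```python
from datetime import datetime, timedelta

def day_after(run_timestamp: str) -> str:
    """Given a scheduled run's timestamp (ISO 8601),
    return the following day as YYYY-MM-DD."""
    run = datetime.fromisoformat(run_timestamp)
    return (run + timedelta(days=1)).strftime("%Y-%m-%d")

# If the weekly run fired on 2024-05-06, process the batch on 2024-05-07.
print(day_after("2024-05-06T09:30:00"))
```

You could use the returned date to schedule your downstream batch job in whatever automation tool triggers your AI workflow.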
I need a way to group all of the new rows from the scheduled run into one place, so I can have AI analyze the new rows all at once. Sending out each individual row via the HTTP API means a bunch of new triggers for the AI workflow. Either I need to catch them all and then run the AI workflow, or I need a way to group all the new rows together in Clay and send out one HTTP API action with all of them, so that the AI workflow only receives one trigger, with all the data.
Hey,
To see when a scheduled run happened, you can unhide the hidden columns in your table—this lets you see when each row was created or last updated. Just note that you can’t reference the “Updated At” field in formulas due to circular-reference issues. Instead, create a formula that checks when a specific column was last run: use the forward slash (/) to reference the column, then return its timestamp. For example, return the last time that column updated and format the output as YYYY-MM-DD so it’s easy to work with.

To group new rows, give them all a shared value (like a domain or tag) when they’re added. Then, in another table, use a Lookup to pull in all rows with that shared value. This way, you can trigger your HTTP API once with the grouped data instead of sending individual rows.

Here’s a quick example that walks through this setup—finding duplicates in the table and numbering them (so you don’t run on the duplicate values): https://www.loom.com/share/a39a4569d95244b7886026f3c70812c7

Let me know if you want help applying this to your table.
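To make the "one HTTP call with the grouped data" idea concrete, here is a minimal sketch of what the receiving end of that single trigger might look like. The endpoint URL, tag value, and field names are placeholders, not anything Clay defines:

```python
import json
import urllib.request

# Hypothetical rows pulled together by a Lookup, all sharing one tag
# (e.g. the run date used as the shared value).
new_rows = [
    {"name": "Row A", "tag": "2024-05-06"},
    {"name": "Row B", "tag": "2024-05-06"},
]

# Build ONE payload containing every new row, so the downstream AI
# workflow receives a single trigger instead of one per row.
payload = json.dumps({"tag": "2024-05-06", "rows": new_rows}).encode()

req = urllib.request.Request(
    "https://example.com/ai-workflow",  # placeholder endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # uncomment to actually send the batch
```

The key design choice is that the batch boundary lives in the payload (one request, many rows), so the AI workflow never has to buffer or deduplicate individual triggers.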
What does “Run this source” in my screenshot above do? Does that determine when it will search Reddit in this case and add new rows?
“Run this source” lets you manually fetch new results—so in your Reddit example, it would pull in any new posts that match your search. You can also set it to run on a schedule, like once a week. That way, it automatically updates and adds new results without you needing to click anything.