Clay S. - I manually triggered a source to update earlier and it imported 5k+ rows, despite the previous trigger only bringing in 70 results. This has instantly tapped all of our ChatGPT spend AND over half of our 10k credits, which only renewed today. Can someone explain why this happened? It's a bit concerning that half of our credits have been burnt on something that (if you look at previous execution results) should not have happened.
While I can't determine the exact cause of the unexpected large import in your specific case, here are important points to help prevent such situations in the future:

1. Before running large updates, it's recommended to test enrichments on a few rows first to verify the expected behavior.
2. To prevent unintended credit usage, you can control auto-updates by:
• Turning off auto-update at the table level using the three-dot icon beside the table name
• Managing auto-update settings for specific columns through Run Settings
3. For ChatGPT specifically, you have the option to use your own API key instead of Clay's managed key, which would prevent consuming Clay's ChatGPT credits. However, you'll need to manage your own API usage and costs in this case.

To understand exactly what happened with this specific import and credit usage, you may want to request help from a human agent who can investigate the details of your table's configuration and the trigger event.
Was that helpful?
Looks like you're still looking for help. We're currently outside of our standard office hours (9 AM to 9 PM EST, Monday through Friday), so it may take a little longer for us to respond. If you’re dealing with a specific table, drop the URL below so we can help you quicker. Otherwise, someone from our team will be in touch soon!
Credits renewed today. 95% of the results should not have been brought in, as they're companies that match the EXCLUDE INDUSTRIES criteria, etc.
Alex, sorry that happened. Can you show me exactly where it happened? Could you send me the table URL? I'm not able to find it inside your workspace, so I'll need to see it.
The third source in the list - you can see in the Run History
😞
I had to stop it processing as it got down to like 3K credits. RIP
Hey Alex, from what I can see, the biggest consumer of Clay credits in the last week was this table: https://app.clay.com/workspaces/443643/workbooks/wb_J9AAnXuBhFj2/tables/t_PChHsnPn4Ejt/views/gv_M5v2GiUPRK6o I see a bit over ~10k credits used on March 3, which I've rounded up to 11k and added back to your workspace. Do you recall making any changes/updates to the parameters of the search? Odd that it would have that sort of jump, and I don't see any obvious causes for that.
I've also built failsafes into the rest of the table, so that if it does pull in a high number of results, it will filter them out and only enrich results in line with the original parameters.
Hey there Alex, jumping in for Mark here. Can you point out which column contains the failsafe so that we can take a look?