Is there a way to enforce the auto-dedupe function when importing data from another table? As in, I'm using the "Write to Other Table" tool to send rows from table A to table B, and table B has auto-dedupe enabled on a column. However, there seems to be some sort of race condition that's stopping it from preventing duplicate rows from being created...
Unfortunately, there's no direct way to enforce auto-dedupe when using the "Write to Other Table" tool; auto-dedupe is a separate feature that runs independently. To minimize duplicates, you can:

1. Enable auto-dedupe on the destination table before using "Write to Other Table".
2. Use the "Only run if" condition in the "Write to Other Table" settings to control when data is transferred.

If duplicates persist, you may need to manually dedupe the column after the transfer: right-click the column, select "Dedupe", then "Delete Duplicate Row". For more complex scenarios, or if the issue continues, you might need to request human assistance for a tailored solution.
I'd like to talk to a human if possible
Hey! Just to make sure I understand: are you saying that duplicates are still being created even though auto-dedupe is enabled on Table B? If that's what you're seeing, let me know which column you're using for deduplication, and I can take a closer look or suggest a workaround to enforce it more reliably.
Yes, that's exactly what I mean. My "Job Openings" table exports rows to a "Companies" table. The "Companies" table has auto-dedupe enabled on the "LinkedIn URL" column to prevent duplicate company entries, since rows in "Job Openings" may share the same value in their "Company URL" column. It does successfully dedupe rows in "Companies" as long as the duplicate rows aren't being exported in the same "batch", but if they are, it doesn't.
One thing that would also be useful, and would fix this issue, is the ability to have a column run sequentially across rows rather than in parallel.
Hey, thanks for the extra context; that helps a lot. I've flagged this behavior to our engineering team, since it looks like a bug in how auto-dedupe handles rows exported in the same batch. In the meantime, one workaround is to add a formula column that counts how many times each LinkedIn URL has appeared, then filter out any rows where the count is greater than one, keeping only the first instance. Here's a walkthrough: https://www.loom.com/share/a39a4569d95244b7886026f3c70812c7 Let me know if you'd like help setting that up or need another approach.
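For anyone following along, the workaround's logic can be sketched in plain Python. This is only an illustration of the "running count, keep the first instance" idea (Clay formula columns are not Python, and the row/key names here are made up for the example):

```python
from collections import defaultdict

def first_instance_filter(rows, key="linkedin_url"):
    """Keep only the first row seen for each key value.

    A row is kept when its running count for the key equals 1;
    any row whose count is greater than 1 is a duplicate and dropped.
    This mirrors the formula-column + filter workaround described above.
    """
    counts = defaultdict(int)
    kept = []
    for row in rows:
        counts[row[key]] += 1
        if counts[row[key]] == 1:
            kept.append(row)
    return kept

# Hypothetical batch with two rows sharing one LinkedIn URL:
rows = [
    {"company": "Acme", "linkedin_url": "linkedin.com/company/acme"},
    {"company": "Acme Inc", "linkedin_url": "linkedin.com/company/acme"},
    {"company": "Globex", "linkedin_url": "linkedin.com/company/globex"},
]
print(first_instance_filter(rows))  # only "Acme" and "Globex" survive
```

Because the count is computed over the whole batch before filtering, this handles the same-batch case that auto-dedupe currently misses.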
Okay, thank you for your help. Is there a chance you could give me an approximate estimate of when something like this could be fixed? I'd like to know if it's worth adjusting my current setup rather than wait for a patch to be deployed
Hey, totally get that. Right now we can't give a specific timeline; it depends on how this bug is prioritized among others. Since we handle bugs based on severity and impact, it's hard to say where this one will fall just yet. If adjusting your setup now saves you time, that might be the better move rather than waiting. Let me know!
Yeah I'll probably rework my current setup then. One thing that I thought of that would be useful and probably fix this issue was to be able to run a column's rows sequentially rather than in parallel - although I'm not sure how feasible that would be to implement technically speaking, just an idea though! Thank you so much for your time
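The user's intuition here is sound: the likely bug is a check-then-insert race, where two parallel rows both pass the "URL not in table" check before either insert lands. A minimal sketch, assuming (this is an assumption, not knowledge of Clay's internals) that each exported row does a check-then-insert, shows how serializing that step fixes it:

```python
import threading

# Hypothetical destination table, keyed on LinkedIn URL.
table = set()
lock = threading.Lock()

def insert_if_absent(url):
    """Insert a URL only if it isn't already in the table.

    The lock serializes the check-then-insert, emulating sequential row
    execution: the second row's check always sees the first row's insert.
    Without the lock, two parallel rows with the same URL could both pass
    the membership check and both insert, producing the duplicate.
    """
    with lock:
        if url in table:
            return False  # duplicate: skip
        table.add(url)
        return True       # first instance: inserted
```

Running rows sequentially (or taking a lock per destination key) removes the window between the duplicate check and the write, which is exactly what the same-batch failure mode exploits.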
Thanks for the idea — really appreciate you sharing it. I’ll make sure to pass it along to the team.
Same issue here. Need to manually dedupe every time
Hey, thanks for flagging this. Our engineers are aware of the issue, and I'll let you know as soon as it's fixed. Sorry for the hassle in the meantime.