Inconsistent Claygent Results: Understanding 40% Variation Issues
We're having an issue with the consistency of Claygent results. We run a Claygent across a population of rows; for each row it visits the provided domain and performs a binary YES/NO check for a specified characteristic, e.g. whether the business is a fund manager. When we re-run an identical duplicate of the same Claygent column, we consistently get different results: on average, about 40% of rows change their answer between runs. This seems deeply flawed, and I'm certain other people must be encountering it. Any ideas why this happens, and how does one address it? Someone suggested it's due to the model's default temperature setting, but I'm unsure (Clay.com's settings don't let us make the model more deterministic). We typically use GPT-4.1-mini.
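One common workaround when you can't control temperature is to run the same check several times and take a majority vote, then measure row-level agreement between runs to quantify the variation. Below is a minimal sketch of both ideas in plain Python. The function names (`agreement_rate`, `majority_vote`) and the tie-breaking rule are my own assumptions for illustration, not anything Clay or OpenAI provides:

```python
from collections import Counter

def agreement_rate(run_a, run_b):
    """Fraction of rows where two runs of the same binary check agree.
    A 40% variation corresponds to an agreement rate of about 0.6."""
    assert len(run_a) == len(run_b), "runs must cover the same rows"
    matches = sum(a == b for a, b in zip(run_a, run_b))
    return matches / len(run_a)

def majority_vote(answers):
    """Collapse repeated YES/NO answers for one row into a single label.
    Ties fall back to 'NO' (a conservative default I chose for this
    sketch; it is not Clay's behavior)."""
    counts = Counter(answers)
    return "YES" if counts["YES"] > counts["NO"] else "NO"

# Example: three independent runs of the same check over four rows.
runs = [
    ["YES", "NO", "YES", "NO"],
    ["YES", "YES", "YES", "NO"],
    ["YES", "NO", "NO", "NO"],
]
stabilized = [majority_vote(row) for row in zip(*runs)]
```

Voting over 3-5 runs won't make a non-deterministic model deterministic, but it sharply reduces flip-flopping on rows where the model is mostly confident, and `agreement_rate` gives you a concrete number to track instead of an eyeballed "40%".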