Also noticed that AI columns aren't respecting the outputs recently. Not sure if this is an LLM problem or a Clay problem.
If AI columns aren’t “respecting” your defined outputs, it’s usually an output definition / mapping issue rather than the model itself. Double-check that your output fields are correctly defined and mapped in the prompt settings (so the model knows exactly what to populate).
Also verify you’re not accidentally treating a structured output (an object) as a plain string in a downstream formula. If a formula references the whole object, you can get confusing results (like [object Object]) - fix this by referencing the specific key/field you want instead.
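For example, in a JavaScript-style formula (the field names here are hypothetical):

```javascript
// Hypothetical AI column output with structured fields
const aiOutput = { company_name: "Acme Inc", employee_count: 120 };

// Problem: interpolating the whole object coerces it to a string
const bad = `Company: ${aiOutput}`;  // "Company: [object Object]"

// Fix: reference the specific key you actually want
const good = `Company: ${aiOutput.company_name}`;  // "Company: Acme Inc"
```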
If behavior differs across AI columns, compare the column configurations/options - different settings can produce different output behavior even with similar prompts.
Is that what you were looking for?
Doing everything properly
If your output setup is correct and the AI column still isn’t “respecting” it, the most common causes are:
- Output fields not being correctly defined and mapped in the prompt configuration (even if the AI “finds” the info, it won’t populate the field unless the output mapping is right).
- Different AI column options/use cases (web research vs content creation, different prompt options) producing different output behavior. Compare the column configurations and standardize them if you need consistent results across columns.
- In the Use AI “Fields” output format, make sure each field name is defined and the correct data type is selected in the dropdown for that output (see the sketch after this list). [1]
- Also, click into an enriched cell to preview the generated details and confirm what the AI actually returned vs what the output mapping expects. [2]
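Conceptually, the “Fields” format behaves like a structured-output schema handed to the model: it only populates fields that are explicitly declared with a name and a type. A minimal sketch of that idea (the schema shape below is illustrative, not Clay’s internal format):

```javascript
// Illustrative schema only - mirrors what the "Fields" dropdown configures,
// where each output needs an explicit name and data type
const outputSchema = {
  type: "object",
  properties: {
    company_name:   { type: "string" },  // text field
    employee_count: { type: "number" },  // numeric field
    is_hiring:      { type: "boolean" }, // true/false field
  },
  required: ["company_name", "employee_count", "is_hiring"],
};
```

If the model returns a value that doesn’t match the declared type (say, “about 120 staff” for a number field), the mapped cell can come back empty even though the AI “found” the answer - which looks exactly like the column not respecting its outputs.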
Did that answer your question?
If you still need help with the AI column output issue, I’m here to assist further. Would you like to provide more details about what you’re trying to resolve or understand?
