
How to Verify AI Results for Errors & Use ChatGPT for Proofreading

I've been loving using AI in Clay. But every once in a while, it outputs something strange or deviates from its instructions. That can be difficult to spot when there are thousands of rows. Does anyone have any recommendations or best practices for double-checking AI results for strange errors? I tried exporting CSV files and having ChatGPT/Code Interpreter scan them for mistakes. Sometimes it works, but I'm getting a lot of errors from ChatGPT when trying to do this. Any suggestions or better workflows?
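One alternative to having ChatGPT re-read the export (which can itself make mistakes) is a small deterministic script over the CSV. A minimal sketch, where the column name and the list of suspect phrases are assumptions you'd adapt to your own table and prompts:

```python
import csv
import io

# Refusal/filler markers that often signal a bad AI row -- an assumption;
# extend this tuple based on the failures you actually see.
SUSPECT_PHRASES = ("as an ai", "i'm sorry", "i cannot", "language model")

def scan_csv(csv_text, column):
    """Flag rows whose AI output is empty or contains a suspect phrase.

    Returns (row_number, value) pairs; row numbers match the CSV file
    (header is row 1, so data starts at row 2).
    """
    flagged = []
    for i, row in enumerate(csv.DictReader(io.StringIO(csv_text)), start=2):
        value = (row.get(column) or "").strip()
        if not value or any(p in value.lower() for p in SUSPECT_PHRASES):
            flagged.append((i, value))
    return flagged

# Hypothetical export with one good row, one refusal, one empty cell.
sample = (
    "company,summary\n"
    "Acme,\"Acme builds rockets for hobbyists.\"\n"
    "Globex,\"I'm sorry, I couldn't find information on this company.\"\n"
    "Initech,\n"
)
print(scan_csv(sample, "summary"))
```

Running this locally (or pasting it into Code Interpreter as an explicit script rather than a vague "find mistakes" prompt) gives repeatable results instead of a second layer of model judgment.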

  • Avatar of Roshan K.
    Roshan K.

Keep in mind it's GPT that's causing these "hallucinations," not Clay. Are you experiencing this with GPT-4 or GPT-3.5, Jaemin Y.?

  • Avatar of Jaemin Y.
    Jaemin Y.

Yup, I totally know it's GPT! Just curious whether people have come up with efficient and effective ways of checking for these. I've heard Eric N. mention that he has GPT proofread the GPT input. Curious how people are doing it. I've been using GPT-3.5, which probably hallucinates and ignores instructions more than GPT-4.

  • Avatar of Eric N.
    Eric N.

We give the AI tons of examples to work from, and I usually require a prefix that every sentence should start with. Then I run a formula on the result column to check whether it contains that prefix.
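The same prefix check can be reproduced outside Clay on an exported CSV. A minimal Python sketch, where the column name and the prefix are assumptions standing in for whatever your prompt enforces:

```python
import csv
import io

# Hypothetical column name and required prefix -- adjust to your table.
RESULT_COLUMN = "ai_result"
REQUIRED_PREFIX = "Based on their profile,"

def flag_bad_rows(csv_text, column, prefix):
    """Return (row_number, value) pairs whose result lacks the prefix.

    Row numbers match the CSV file (header is row 1, data starts at row 2).
    """
    bad = []
    for i, row in enumerate(csv.DictReader(io.StringIO(csv_text)), start=2):
        value = (row.get(column) or "").strip()
        if not value.startswith(prefix):
            bad.append((i, value))
    return bad

# Hypothetical export: one compliant row, one row that ignored the prompt.
sample = (
    "name,ai_result\n"
    "Acme,\"Based on their profile, Acme fits our ICP.\"\n"
    "Globex,\"Sorry, I cannot determine that.\"\n"
)
print(flag_bad_rows(sample, RESULT_COLUMN, REQUIRED_PREFIX))
```

The idea is the same as the in-Clay formula: the prefix acts as a cheap compliance signal, so any row missing it almost certainly deviated from the instructions and is worth a manual look.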

  • Avatar of Roshan K.
    Roshan K.

I second what Eric N. said haha, I've found that giving as many examples as possible is what helps most. GPT-3.5 hallucinates more, but with the guardrails Eric described, it should still give you the consistent output you want.

  • Avatar of Clay S.
    Clay S.

Jaemin Y., software like gentrace.ai helps with this issue.

  • Avatar of Jaemin Y.
    Jaemin Y.

    Thanks guys, appreciate the responses! Very helpful.