I want to research whether our ICP is matched by doing an OpenAI lookup on each individual website. I tried to do this in the column "ICP assessment" and it sort of worked, but it didn't come back with just the 2 options "ICP" or "Not ICP match" — instead it returned lots of different results, including URLs and more. I wrote quite a detailed prompt thinking that would help, but I think my final outcome-based instructions might need improving. https://app.clay.com/workspaces/236043/tables/t_ToPwenw3J4AJ/views/gv_ezcVkwMvQGub
To get consistent "ICP" or "Not ICP match" responses, you'll need to structure your prompt more effectively:
1. Set a clear system prompt that defines the AI's role and output requirements
2. Follow these best practices for your prompt:
• Start with clear context about the assessment task
• Provide specific step-by-step instructions
• Explicitly specify that the output should only be either "ICP" or "Not ICP match"
3. In the column outputs section, set the data type as "Select" with only two options: "ICP" and "Not ICP match"

This structured approach will constrain the AI to provide only the two desired response options instead of varied results or URLs.

Sources: Prompt Engineering Crash Course [Clay 101] - Clay University • Use AI integration overview | Documentation | Clay University
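As a rough illustration of the steps above, here's a minimal Python sketch of what a constrained classification setup might look like. The system-prompt wording, the fallback behavior, and the `normalize_label` helper are illustrative assumptions for this example, not Clay's actual internals:

```python
# Sketch of a constrained-output ICP classification prompt (illustrative only).

SYSTEM_PROMPT = (
    "You are an ICP (Ideal Customer Profile) classifier. "
    "Assess the company described by the user against the ICP criteria. "
    "Respond with exactly one of two strings: 'ICP' or 'Not ICP match'. "
    "Do not include URLs, reasoning, or any other text."
)

ALLOWED_LABELS = {"ICP", "Not ICP match"}

def normalize_label(raw: str) -> str:
    """Map a raw model response onto one of the two allowed labels.

    Anything that isn't exactly one of the allowed labels (extra prose,
    URLs, stray quotes) falls back to 'Not ICP match' rather than being
    passed through as free text.
    """
    cleaned = raw.strip().strip('"').strip("'")
    return cleaned if cleaned in ALLOWED_LABELS else "Not ICP match"
```

Pairing a strict system prompt with a post-processing guard like this is a belt-and-braces approach: even if the model occasionally rambles, the column only ever receives one of the two expected values.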
Did that answer your question?
No, I changed the prompt and I spent credits on OpenAI, and it came back with an error. This was a prompt that was designed by asking OpenAI for the prompt to put into Clay based on the criteria. I tried changing the formatting, but this again is costing me more credits for it to return an error and no results. It seemed to work much more easily the last time I ran OpenAI prompts. Will I get the $30 back for it continually not returning any results?
For API calls that return invalid data or errors, our refund policy works as follows: If a provider charges us for an API call, we will charge Clay credits for that call. However, refunds are only provided if the provider refunds us due to their inability to deliver valid data.

Different providers have different policies regarding charges for unsuccessful queries. Some providers will charge just for attempting to find a result, regardless of whether they find something or not.

You can check which actions are eligible for refunds in our refund policy document.

Sources: FAQ | Clay.com • Providers Do Not Return Valid Data
Was that helpful?
Looks like you're still looking for help. If you’re dealing with a specific table, drop the URL below so we can help you quicker. Otherwise, someone from our team will be in touch soon!
That makes me not want to use one of the main time-savers of Clay, if I'm unable to know before I hit send whether I'm going to get a result. By contrast, if I don't get a valid email returned, you don't charge anything.
Hey Mark, thanks for reaching out. Totally hear you; prompt engineering to get a desired result across thousands of rows can take some time to hone in on.

First, I noticed your OpenAI account is not on a Tier 2 plan, meaning the rate limits will lead to slower loading times and challenges when trying to perform web scraping (more info below). I've added some extra credits so you can adjust the prompt a bit.

When it comes to this prompt specifically, there are a few areas we can improve on. The initial question about TAM is relatively subjective for an AI to interpret. What may be more effective is asking the AI to look at any customers or testimonials featured on their website, which can be used as a guide to determine the size of companies they're reaching out to.

As for the order size: let's say you weren't using AI here. I'm curious how you would find what the average order size is. Is this publicly available on their website, or anywhere else we could look?

Industry fit could be a separate enrichment, which may make this process more effective.

As for growth stage and needs: how would you determine whether a company is struggling with lead generation, or has tried cold outreach? What information on their website would point to this? The same goes for marketing team insights. Employee count is something we should remove and have as a separate enrichment.

The theme here is that a lot of these questions don't have direct data to point to that is publicly and readily accessible for the AI to use to make a decision. Additionally, while there are many questions prompted to the AI, there isn't a clear path for the AI to make a decision. E.g., for "Does the company seem to be struggling with predictable growth or lead generation?", even if the answer were easily available, there's no clear connection between the answer to this question and whether the company meets your ICP fit or not.
Overall, the prompt is asking a ton of subjective questions that likely don't have data to point to when making a decision. And without a clear connection between the answers to these questions and how they impact whether this is an ICP fit or not, the AI is going to struggle.

I would recommend separating out a few of these questions (number of employees, marketing headcount, industry fit) into their own separate enrichments:
* Headcount is an enrichment waterfall we provide
* Headcount of specific departments is another enrichment we provide via our "Find contacts at company by criteria" enrichment
* For industry fit, you can pull the company's industry using our "Enrich company from profile" enrichment, and we can filter out companies that don't match your ICP

To conclude, it's not that these data points can't be found and used to make an ICP determination. However, trying to do them all at once, in a single prompt, without clear instructions on what to do with those answers once found, will not leave the AI enough context to make a reliable decision. Curious to hear your thoughts here. This task is challenging, and doing it at scale means having very specific criteria for success, whether you're prompting or using other enrichments.

About Rate Limits
----------------------------
Today, Clay requires 30,000 TPM (tokens per minute) for "ChatGPT - Generate Text" columns and 450,000 TPM for Claygent. "ChatGPT - Generate Text" will work with any tier, as long as the API key has access/can be used. To use Claygent, you will need a Tier 2 API key from OpenAI, as it supports the required TPM rates. You can check your OpenAI API usage or limits following these links:
• Usage
• Limits
• Usage Tiers
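One way to picture the recommendation above — objective enrichments feeding a clear decision path instead of one subjective mega-prompt — is a simple rule-based combination of the separate enrichment outputs. The thresholds, the target industries, and the `icp_fit` function below are purely illustrative assumptions, not Clay's enrichment output format:

```python
def icp_fit(headcount: int, marketing_headcount: int, industry: str) -> str:
    """Combine separate enrichment outputs (company headcount, marketing
    department headcount, industry) into a single ICP decision via
    explicit rules. All thresholds and industry names are illustrative.
    """
    target_industries = {"E-commerce", "Retail"}  # hypothetical ICP industries

    if industry not in target_industries:
        return "Not ICP match"
    if not (10 <= headcount <= 200):  # hypothetical company-size band
        return "Not ICP match"
    if marketing_headcount < 1:  # must have at least one marketer
        return "Not ICP match"
    return "ICP"
```

Because each input comes from its own enrichment, every "Not ICP match" is traceable to a specific, objective criterion, which is exactly the kind of clear decision path a single open-ended AI prompt lacks.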
OpenAI's support team can help if you need to increase your usage limits. I believe you can request a review from the "Usage Limits" page in your OpenAI account settings. Just click on "Need help?", submit your request, and they'll review and follow up with you. They will typically grant the request if you have already prepaid the minimum amount required.
Thanks Owen C. A lot to digest there. I guess the confusion is in ChatGPT creating the prompt; I assumed there would be some kind of base level of knowledge, as I'd indicated the prompt was for OpenAI through Clay.com. I understand your point about there needing to be an easily accessible data source for the web AI researcher to access.
Of course, apologies for the knowledge dump. Ah, got it. Claygent doesn't have direct access to our database of people or company data. This agent is used frequently for web research and for finding data that may not be available through our other enrichments. Hope this helps! Happy to continue brainstorming; otherwise, let me know if you have any other questions or concerns. :)
