Hi Clay team, I'm trying to use Claygent to analyze websites and determine whether a company is a good/bad fit for my agency services. Been modifying my prompts but I can't get it to work properly. Any advice on how to organize it?
Column: Target Worthiness & Prospect Evaluation
Scrape the following website: /website. Analyze whether it's a company worth targeting for our custom software development and design services firm. If they have an app, platform, portal, or marketplace, it's a good fit: mark it Yes. If they don't, mark it No. If it is an agency or studio, mark it Agency. Mark any company that specializes in or offers services related to Web Development, Product Development, Mobile App Development, Software Development, Software Solutions, Custom Development, IT Solutions, Consulting, Advertising, or Marketing as No.
I don't think it understands the line "Analyze if it's a company worth targeting for our custom software development and design services firm." It keeps marking competitors that offer the same dev/design services as "Yes".
Hey Gonzalo B., try this prompt. I ran a few rows from your table through it and the results looked better. You can also try GPT-4 as the model for Claygent if the quality still isn't good.
Task: Evaluate a website to determine its suitability as a target for a custom software development and design services firm.
Website: /website
Criteria:
AGENCY: If the company is an agency or studio providing services.
SUITABLE: If the company has an app, platform, user portal, or marketplace.
UNSUITABLE: If the company specializes in or offers services related to:
Web Development
Product Development
Mobile Development
Application Development
Software Development
Software Solutions
Custom Development
IT Solutions
Branding
Consulting
Advertising
Marketing
Instructions: Provide the evaluation result based on the above criteria. Your response should be exactly one of: SUITABLE, UNSUITABLE, or AGENCY.
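If you want to sanity-check the prompt on a few websites before spending Clay credits, here's a rough Python sketch using the OpenAI chat completions API. This is only an illustration, not how Claygent runs it internally: Claygent also scrapes the page for you, while this just passes the URL to the model. The model name, sample URLs, and the classify helper are placeholders you'd swap for your own.

```python
# Rough local test harness for the classification prompt (illustrative only;
# inside Clay, Claygent handles the scraping and the model call for you).
from openai import OpenAI  # assumes the `openai` package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT_TEMPLATE = """Task: Evaluate a website to determine its suitability as a target \
for a custom software development and design services firm.

Website: {website}

Criteria:
AGENCY: If the company is an agency or studio providing services.
SUITABLE: If the company has an app, platform, user portal, or marketplace.
UNSUITABLE: If the company specializes in or offers services related to Web Development, \
Product Development, Mobile Development, Application Development, Software Development, \
Software Solutions, Custom Development, IT Solutions, Branding, Consulting, Advertising, or Marketing.

Instructions: Provide the evaluation result based on the above criteria.
Your response should be exactly one of: SUITABLE, UNSUITABLE, or AGENCY."""

def classify(website: str, model: str = "gpt-4") -> str:
    """Send the filled-in prompt to the chosen model and return its one-word verdict."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(website=website)}],
        temperature=0,  # keep the classification output as stable as possible
    )
    return response.choices[0].message.content.strip()

# Hypothetical sample URLs; replace with rows from your own table.
for site in ["https://example-saas.com", "https://example-devshop.com"]:
    print(site, "->", classify(site))
```

Running the same handful of rows through "gpt-3.5-turbo" and "gpt-4" this way is a cheap check on how much the two models disagree before you commit a whole column to one of them. In Clay itself you'd keep using the /website merge field as before.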
Thanks, Osman. I'll try it!
No prob, let me know how it goes! Takes some experimentation to get AI prompts right.
Is it normal to get such different results between models? The first column with GPT-3.5 missed almost all the unsuitable ones; GPT-4 was more accurate. My prompt with 3.5 actually worked better than yours 🤔
Yup, that's expected. GPT-4 is more expensive, but the quality is much higher.