Hey Support, what are the limits for writing prompts for the Anthropic and OpenAI integrations? Do they differ for content creation vs web research?
For the Anthropic (Claude) integration:
- Maximum output length is 4096 tokens
- You can adjust the creativity setting (defaults to 0.4)
- You can include both a main prompt and an optional system prompt for context

For web research using Claygent:
- You can set a maximum cost limit (defaults to 0.10)
- You can specify the desired answer format for structured outputs

The limits and settings are based on the model you choose rather than the specific use case (content creation vs web research). GPT-4 Turbo and Claude models are noted to be 20x more expensive than GPT-3.5 but offer higher quality output.

Sources:
- Claygent (AI Web Researcher) with Artificial Intelligence
- Claude: Generate text with Anthropic
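For reference, here's a rough sketch of how those settings map onto a direct call to Claude via the Anthropic Python SDK (this is outside of Clay; the model name and prompt text below are placeholders, not values from Clay's interface):

```python
# Sketch only: shows the Anthropic Messages API parameters that correspond to
# Clay's Claude integration fields (max output tokens, creativity/temperature,
# system prompt + main prompt). Model name and prompt text are placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-haiku-20240307",         # placeholder model name
    max_tokens=4096,                         # maximum output length
    temperature=0.4,                         # the default "creativity" setting
    system="You are a helpful copywriter.",  # optional system prompt for context
    messages=[
        {"role": "user", "content": "Write a two-sentence product blurb."}  # main prompt
    ],
)
print(response.content[0].text)
```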
Right, but I'm asking about limitations in the prompts
Hi Tanner, thank you for reaching out. The token limits for prompts depend on the model used. OpenAI's GPT-4-turbo has a 128k token context window, while GPT-3.5-turbo has a 16k limit, including both input and output tokens. Anthropic's Claude models vary, with Claude 2 supporting up to 100k tokens, also accounting for both input and output. Regarding web research, we always suggest using Claygent, which has been optimised to access and retrieve information from the web. I recorded this quick video for you explaining the differences between GPT and Claygent: https://www.loom.com/share/01ade1a8fb1d4624917f303589cc8cd3?sid=1612ff71-1e8e-4d59-b3e2-a76db1d7de9c Let me know if this helps!
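If it helps to check whether a prompt fits within those context windows, here's a small sketch using the tiktoken library. It uses the cl100k_base encoding (the one used by GPT-3.5-turbo and GPT-4-class models), so the Claude figure is only an approximation; the window sizes are the ones quoted above.

```python
# Rough sketch: count tokens in a prompt and compare against the context
# windows mentioned above. cl100k_base is the GPT-3.5-turbo / GPT-4-turbo
# encoding; Claude's tokenizer differs, so its figure is approximate here.
import tiktoken

CONTEXT_WINDOWS = {
    "gpt-4-turbo": 128_000,
    "gpt-3.5-turbo": 16_000,
    "claude-2": 100_000,  # approximate, measured with the OpenAI tokenizer
}

def fits_in_context(prompt: str, model: str, reserved_for_output: int = 4096) -> bool:
    """Return True if the prompt plus reserved output tokens fits the model's window."""
    enc = tiktoken.get_encoding("cl100k_base")
    prompt_tokens = len(enc.encode(prompt))
    return prompt_tokens + reserved_for_output <= CONTEXT_WINDOWS[model]

print(fits_in_context("Summarise this company's positioning ...", "gpt-3.5-turbo"))
```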
Got it, so the context window through Clay's Claygent interface isn't any lower than what you'd get using the models through an API call or their native chat products?
Hi, there is a lower context limit through Claygent, but Claygent can access the web with much better results than ChatGPT, which is why most people tend to use Claygent when trying to get results from the internet, whereas ChatGPT tends to be used for writing tasks.
More focused on the content creation option for using Anthropic in Clay, not web research - is that context window smaller in Clay than it would be using Claude natively?
Hi, the context tier is aligned with the highest usage tier of the Claude API key, meaning the context windows are the same whether you use your own API key natively or Clay's key. Let me know if this clarifies things.
Excellent yep, thank you
Happy to help!