Limits for Writing Prompts in Anthropic & OpenAI Integrations

Hey Support, what are the limits for writing prompts for the Anthropic and OpenAI integrations? Do they differ for content creation vs web research?

  • Channeled
    APP

    Hi Tanner B.! This thread was recently closed by our Support team. If you have a moment, please share your feedback.

  • Channeled
    APP

    Thanks! We've reopened this thread. You can continue to add more detail directly here.

  • George D.

    Hey there - we're having a bit of trouble with our support tooling and closed this conversation out prematurely. We're reopening it here, and someone from our team will get back to you soon!

  • Channeled
    APP

    For the Anthropic (Claude) integration:
      • Maximum output length is 4096 tokens
      • You can adjust the creativity setting (defaults to 0.4)
      • You can include both a main prompt and an optional system prompt for context

    For web research using Claygent:
      • You can set a maximum cost limit (defaults to 0.10)
      • You can specify the desired answer format for structured outputs

    The limits and settings are based on the model you choose rather than the specific use case (content creation vs web research). GPT-4 Turbo and Claude models are noted to be 20x more expensive than GPT-3.5 but offer higher-quality output.

    Sources: Claygent (AI Web Researcher) with Artificial Intelligence; Claude: Generate text with Anthropic
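
    For reference, those settings line up with the standard parameters of Anthropic's Messages API (max_tokens, temperature, system). The sketch below is only illustrative: the model name, prompt text, and key handling are assumptions, not Clay's internal implementation.

    ```python
    # Minimal sketch of an Anthropic Messages API call using the settings quoted above.
    # Model name, prompt text, and key handling are illustrative assumptions, not Clay internals.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.messages.create(
        model="claude-3-haiku-20240307",  # placeholder model name
        max_tokens=4096,                  # the maximum output length mentioned above
        temperature=0.4,                  # the default creativity setting
        system="You write concise B2B outreach copy.",  # optional system prompt
        messages=[
            {"role": "user", "content": "Write a two-sentence intro for the company."},
        ],
    )
    print(response.content[0].text)
    ```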

  • Tanner B.

    Right, but I'm asking about limitations in the prompts

  • Stefano

    Hi Tanner, thank you for reaching out. The token limits for prompts depend on the model used. OpenAI’s GPT-4-turbo has a 128k token context window, while GPT-3.5-turbo has a 16k limit, including both input and output tokens. Anthropic’s Claude models vary, with Claude 2 supporting up to 100k tokens, also accounting for both input and output. Regarding web research, we always suggest using Claygent, which has been optimised to access and retrieve information from the web. I recorded this quick video for you explaining the differences between GPT and Claygent: https://www.loom.com/share/01ade1a8fb1d4624917f303589cc8cd3?sid=1612ff71-1e8e-4d59-b3e2-a76db1d7de9c Let me know if this helps!
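
    If you want to sanity-check a prompt against those context windows before running a column, a rough client-side token count is enough. This is only a sketch under stated assumptions: it uses tiktoken's cl100k_base encoding (the GPT-3.5/GPT-4 tokenizer) for all three models, so the Claude count is an approximation, and the window sizes are the figures quoted above.

    ```python
    # Rough client-side check of prompt size against the context windows quoted above.
    # Claude has its own tokenizer, so using cl100k_base for it is only an approximation.
    import tiktoken

    CONTEXT_WINDOWS = {
        "gpt-4-turbo": 128_000,
        "gpt-3.5-turbo": 16_000,
        "claude-2": 100_000,
    }

    def fits_in_window(prompt: str, model: str, reserved_output_tokens: int = 4096) -> bool:
        """True if the prompt plus reserved output space fits the model's context window."""
        encoding = tiktoken.get_encoding("cl100k_base")  # GPT-3.5/GPT-4 encoding
        prompt_tokens = len(encoding.encode(prompt))
        return prompt_tokens + reserved_output_tokens <= CONTEXT_WINDOWS[model]

    print(fits_in_window("Summarize this company's pricing page in three bullets.", "gpt-3.5-turbo"))
    ```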

  • Tanner B.

    Got it, so there's no lower context-window limit through Clay's Claygent interface than there is when using the models through an API call or their native chat products?

  • Stefano

    Hi, there is a lower limit for context, but Claygent can access the web with much better results than ChatGPT. That's why most people tend to use Claygent when pulling results from the internet, whereas ChatGPT tends to be used for writing tasks.

  • Tanner B.

    I'm more focused on the content creation option for using Anthropic in Clay, not web research - is that context window smaller in Clay than it would be when using Claude natively?

  • Stefano

    Hi, the context tier is aligned with the highest usage tier of the Claude API key, meaning that the context windows are the same for a native API call and for Clay's key. Let me know if this clarifies things.

  • Tanner B.

    Excellent yep, thank you

  • Stefano

    Happy to help!

  • Channeled
    APP

    This thread was picked up by our in-app web widget and will no longer sync to Slack. If you are the original poster, you can continue this conversation by logging into https://app.clay.com and clicking "Support" in the sidebar. If you're not the original poster and require help from support, please post in 02 Support.