
Dynamically Generating JSON Schema for LLM Prompts

Is it possible to dynamically generate the "JSON schema" used in an LLM prompt? For example, could I store it in a text column and then use /my_schema to insert it into the "Define Outputs" section of the LLM config?

  • Channeled (APP)

    When defining outputs for LLM responses, you have two main options:

    1. Fields Option: The standard way, where you create individual fields that map to columns in your table. Each field can be set as Text, Number, True/False, URL, Email, or Select.
    2. JSON Schema Option: An advanced option that provides more flexibility for structuring your data in JSON format. By default, all fields will return as Text unless you specify another data type.

    However, based on the available documentation, there's no built-in functionality to dynamically generate or insert a JSON schema from a text column into the "Define Outputs" section. You would need to manually configure the output structure through the interface.
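
    For illustration, a schema of the kind the JSON Schema option expects might look like the sketch below (shown as a Python dict for readability; the field names are hypothetical, so check Clay's docs for the exact shape it accepts):

    ```python
    import json

    # Hypothetical output schema: standard JSON Schema keywords, example field names.
    schema = {
        "type": "object",
        "properties": {
            "company_name": {"type": "string"},
            "employee_count": {"type": "number"},
            "is_hiring": {"type": "boolean"},
        },
        "required": ["company_name"],
    }

    # The equivalent JSON text is what you would paste into "Define Outputs".
    print(json.dumps(schema, indent=2))
    ```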

  • Channeled (APP)

    Was that helpful?

  • Mathew I.

    no

  • Channeled (APP)

    Looks like you're still looking for help.

    If you’re dealing with a specific table, drop the URL below so we can help you quicker. Otherwise, someone from our team will be in touch soon!

  • Bo (.

    Hey Mathew! 👋 Right now, it’s not possible to dynamically generate the “JSON schema” for an LLM prompt in that way. However, I’d love to understand more about your goal so I can share that with the team. Let me know!

  • Mathew I.

    It would allow me to use the AI web-research via API. I could send a webhook into a table with the prompt / JSON schema and send the response back to my software
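
    For context, a rough sketch of that workflow (the webhook URL and field names here are placeholders, not a real Clay endpoint):

    ```python
    import json
    import requests

    # Placeholder webhook URL for a table; replace with the real one.
    WEBHOOK_URL = "https://example.com/table-webhook"

    payload = {
        # Prompt for the AI web-research step.
        "prompt": "Find the headquarters city and employee count for {{company}}.",
        # Desired output structure, stored as a JSON string in a text column.
        "json_schema": json.dumps({
            "type": "object",
            "properties": {
                "hq_city": {"type": "string"},
                "employee_count": {"type": "number"},
            },
        }),
    }

    response = requests.post(WEBHOOK_URL, json=payload, timeout=30)
    response.raise_for_status()
    ```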

  • Bo (.

    Got it! Why not include the JSON structure directly in the prompt to ensure the response is formatted correctly? You could also add another step to validate the JSON format before sending it back. Either way, I’ll share this feedback with the team. Thanks for the suggestion! 🚀
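
    As a hedged sketch of that suggestion (the field names and validation rules are just examples):

    ```python
    import json

    # Spell out the exact JSON shape in the prompt itself...
    prompt = (
        "Research the company and reply with ONLY valid JSON in this exact shape: "
        '{"hq_city": "<string>", "employee_count": <number>}'
    )

    # ...then validate the model's reply before sending it back.
    def validate_reply(reply_text: str) -> dict:
        data = json.loads(reply_text)  # raises if the reply is not valid JSON
        if not {"hq_city", "employee_count"} <= data.keys():
            raise ValueError("reply is missing expected keys")
        return data
    ```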

  • Nik G.

    Bo (. - I'm having the same issue: if I put the JSON output format in the body section of the HTTP request to an LLM, will that return it in separate JSON fields instead of the "content" part of a chat completion?

  • Bo (.

    No, it will still come back in the content, just formatted as JSON (i.e. the response will be a JSON string rather than individual values). You can then parse or adapt that JSON when sending it back to your HTTP API.
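
    In practice, with a typical chat-completion-style response you would parse the JSON out of the content field yourself, roughly like this (the response shape shown follows the common OpenAI-style format and may differ from your integration):

    ```python
    import json

    # Example response body; the model's JSON arrives as a string inside
    # choices[0].message.content, not as separate top-level fields.
    api_response = {
        "choices": [
            {"message": {"content": '{"hq_city": "Berlin", "employee_count": 120}'}}
        ]
    }

    content = api_response["choices"][0]["message"]["content"]
    parsed = json.loads(content)  # now individual values you can forward to your API
    ```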

  • Nik G.

    Thank you! Bo (.