Is it possible to dynamically generate the "JSON schema" used in an LLM prompt? For example, could I store it in a text column and then use /my_schema to insert it into the "Define Outputs" section of the LLM config?
When defining outputs for LLM responses, you have two main options:

1. Fields Option: This is the standard way, where you create individual fields that map to columns in your table. Each field can be set as Text, Number, True/False, URL, Email, or Select.
2. JSON Schema Option: This is an advanced option that provides more flexibility for structuring your data in JSON format. By default, all fields return as Text unless you specify another data type.

However, based on the available documentation, there's no built-in functionality to dynamically generate or insert a JSON schema from a text column into the "Define Outputs" section. You would need to configure the output structure manually through the interface.
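For illustration, a minimal schema for the JSON Schema option might look like the sketch below. This is an assumed example, not taken from the docs, and the field names are placeholders:

```python
# Illustrative JSON schema for the "Define Outputs" JSON Schema option.
# The field names and types are assumptions, not from the documentation.
schema = {
    "type": "object",
    "properties": {
        "company_name": {"type": "string"},
        "employee_count": {"type": "number"},
        "is_hiring": {"type": "boolean"},
    },
    "required": ["company_name"],
}
```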
Was that helpful?
no
Looks like you're still looking for help.
If you’re dealing with a specific table, drop the URL below so we can help you quicker. Otherwise, someone from our team will be in touch soon!
Hey Mathew! 👋 Right now, it’s not possible to dynamically generate the “JSON schema” for an LLM prompt in that way. However, I’d love to understand more about your goal so I can share that with the team. Let me know!
It would allow me to use the AI web-research via API. I could send a webhook into a table with the prompt / JSON schema and send the response back to my software.
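Roughly, the inbound side I have in mind would look something like this sketch (the webhook URL and payload keys are placeholders for whatever the table expects):

```python
import json

import requests  # pip install requests

# Hypothetical inbound webhook URL for the table; replace with your own.
WEBHOOK_URL = "https://example.com/webhooks/my-table"

payload = {
    "prompt": "Research this company and summarize what it does.",
    # The schema travels as a string, e.g. stored in a text column.
    "json_schema": json.dumps({
        "type": "object",
        "properties": {"summary": {"type": "string"}},
    }),
}

# Send the prompt and schema into the table; fail loudly on HTTP errors.
requests.post(WEBHOOK_URL, json=payload, timeout=30).raise_for_status()
```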
Got it! Why not include the JSON structure directly in the prompt to ensure the response is formatted correctly? You could also add another step to validate the JSON format before sending it back. Either way, I’ll share this feedback with the team. Thanks for the suggestion! 🚀
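Here’s a minimal sketch of that workaround, assuming a Python setup with the third-party jsonschema package; the schema, field names, and prompt are placeholders:

```python
import json

from jsonschema import validate  # pip install jsonschema

# Placeholder schema; swap in the fields you actually need.
schema = {
    "type": "object",
    "properties": {
        "company_name": {"type": "string"},
        "employee_count": {"type": "number"},
    },
    "required": ["company_name"],
}

# Embed the desired structure directly in the prompt text.
prompt = (
    "Research the company at the given URL and respond only with JSON "
    f"matching this schema exactly: {json.dumps(schema)}"
)

def validate_response(raw: str) -> dict:
    """Parse the LLM's reply and confirm it matches the schema before
    sending it anywhere. Raises if the JSON is malformed or non-conforming."""
    data = json.loads(raw)                  # fails on malformed JSON
    validate(instance=data, schema=schema)  # fails on schema mismatch
    return data
```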
No, it will come back in the content, formatted as JSON (i.e., the response will be a single JSON-formatted string rather than individual field values). You could then parse or adapt that JSON when sending it back to your HTTP API.
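If it helps, here’s a rough sketch of adapting that JSON-formatted content before relaying it (the endpoint URL is a placeholder):

```python
import json

import requests  # pip install requests

def forward_result(content: str) -> None:
    """Parse the JSON-formatted content string and relay the individual
    values to your own HTTP API. The endpoint here is hypothetical."""
    data = json.loads(content)  # the response arrives as one JSON string
    resp = requests.post(
        "https://example.com/api/research-results",  # placeholder endpoint
        json=data,
        timeout=30,
    )
    resp.raise_for_status()
```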