Happy new year, team! I have a large table (±20K rows) that I’m trying to run the OpenAI API on and it’s been quite slow. Any tips here? I’ve been re-running and force-running it repeatedly, yet most of the rows seem to be stuck at “Queued”: https://app.clay.com/workspaces/83297/tables/t_x3c7e7a8SDRg/views/gv_2su98bbowJfk
Column: “Service Suitability”
Here are several steps you can take to improve performance:

1. Check your OpenAI API usage tier - ensure you have at least 30,000 TPM (tokens per minute) for ChatGPT columns. If you're on a free plan or tier 1, you'll need to upgrade to tier 2 for better performance.
2. Break down your data processing (see the rough pacing sketch after this message):
   • Try running smaller batches instead of all 20,000 rows at once
   • Use the "Run First 10 Rows" option to test your setup
   • Consider running specific sections using "Run Starting from a Specific Row"
3. Consider using AI formulas instead of the AI integration. AI formulas are credit-free and can be more efficient for large tables, potentially saving hundreds or thousands of credits while improving performance.
4. Ensure your queries and prompts are specific and well-defined, and verify your input data format is correct.

Sources: AI Formulas + Conditional Runs [Clay 101] - Clay University; Run Options
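Not an official Clay snippet, but if you want to sanity-check your key's throughput outside of Clay before re-running the table, a rough pacing loop along these lines can help. It assumes the official `openai` Python package (v1+) with `OPENAI_API_KEY` set in the environment; the model name, sample inputs, and the 600-token-per-call estimate are placeholders, not Clay's internals:

```python
# Rough pacing sketch for running prompts against an OpenAI key outside of Clay.
# Assumes the official `openai` Python package (v1+); model, inputs, and token
# estimate below are placeholders.
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

rows = ["Acme Corp - HVAC services", "Globex - freight logistics"]  # placeholder row data

TPM_BUDGET = 30_000        # tokens per minute your tier allows
EST_TOKENS_PER_CALL = 600  # rough guess at prompt + completion tokens per row

calls_per_minute = TPM_BUDGET // EST_TOKENS_PER_CALL
delay = 60.0 / calls_per_minute  # space requests out so you stay under the budget

for row in rows:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": f"Rate the service suitability of: {row}"}],
    )
    print(resp.choices[0].message.content)
    time.sleep(delay)  # naive pacing; a real batcher would track actual token usage
```

With those placeholder numbers, 30,000 TPM works out to roughly 50 rows per minute, so 20K rows would take on the order of 400 minutes - which is why the tier upgrade and smaller batches matter.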
^^^ Nice automation
To use the 'Service Suitability' column with the OpenAI API for a large table of about 20,000 rows, you'll need to ensure your OpenAI API key has sufficient usage limits. For ChatGPT columns, we require 30,000 tokens per minute (TPM).

Here's how to set it up:

1. Open your table in Clay
2. Click "Add Column" > "Add Column using AI"
3. Enter a description for the 'Service Suitability' information you want to generate
4. Click "Continue" and set up the parameters
5. Save your settings

Remember to check your OpenAI API usage tier at https://platform.openai.com/settings/organization/limits to ensure it meets the required TPM (there's also a quick header check sketched after this message).

Could you provide more details about what specific 'Service Suitability' information you're looking to generate for your table?

Source: ChatGPT: Generate text with OpenAI/GPT
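As a side note, if you'd rather not dig through the dashboard, you can also read your key's limits off the API's rate-limit response headers. This is a minimal sketch, assuming the official `openai` Python package (v1+) and the `x-ratelimit-*` headers OpenAI documents (header names may change); the model name is a placeholder:

```python
# Minimal check of the rate limits attached to an OpenAI key, read from response headers.
# Assumes the official `openai` Python package (v1+) with OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# with_raw_response exposes the underlying HTTP response so we can inspect its headers.
raw = client.chat.completions.with_raw_response.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[{"role": "user", "content": "ping"}],
    max_tokens=1,
)

print("TPM limit:    ", raw.headers.get("x-ratelimit-limit-tokens"))
print("TPM remaining:", raw.headers.get("x-ratelimit-remaining-tokens"))
print("RPM limit:    ", raw.headers.get("x-ratelimit-limit-requests"))
```

If the `x-ratelimit-limit-tokens` value comes back below 30,000, that's the first thing to fix before re-running the column.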
I'm glad you find it helpful! I'm here if you need any further assistance.
Thanks! We've reopened this thread. You can continue to add more detail directly in this thread.
The column is still stuck on “Queued.” Can someone help me get it to run? (The “Service Suitability” column & “Service Suitability (2)”)
Looks like you're still looking for help. We're escalating this over to our support team and someone should be in touch soon. In the meantime, feel free to include any other information that may be helpful in solving your issue!
Hey Pierre, thanks for reaching out! I can't see any queued data inside this column - seems like it got fixed? https://downloads.intercomcdn.com/i/o/w28k1kwz/1318967463/ef1f8cd41130ad44eaa69425e4fd/CleanShot+2025-01-02+at+_11mobyiaU5%402x.png?expires=1735836300&signature=0d1d111fe0b63b59b6be832d5ead1bc9bbb79fc74a6d5895c9237454f5b2d1aa&req=dSMmHsB4moVZWvMW1HO4zc%2BVwigKlTet6fpICsGxbPTcW9JufiNiiD7Z7nSw%0A2EYI%0A
It’s the other column 🙂
It’s been stuck on queued all day 😕
Oh ok - I can see this in the Service Suitability column (not Service Suitability 2). The column most likely got stuck in a queued state because it was stopped and restarted multiple times. I've flagged this to our team to investigate, so please keep the column in place for now; we'll need to create a fresh column and run it again to fix this. Let me know how many credits were used and I'll make sure to get those credited back to your account.
We used our OpenAI API 😕
Can always count on you 🙂
Can you show me a screenshot of the usage? That way I can add the corresponding amount!
Welcome! By the way - Happy New Year 🏆
Honestly it shouldn’t be more than $10-15 so not too worried!
Happy new year my man 💪
Okay, perfect! Let me know if you run into any other issues! :)
Will do
Should I just delete the column and run it again? 😅
Yeah, but don't delete it - just leave it there so our engineers can see what went wrong, please!
Oops.
I just did 🤦‍♂️
What I will say is your initial assumption is most likely accurate. I’ve tried rerunning this column like 5x
Because I got impatient
Sorry - this is quite time-sensitive, as we’re already behind on launch, so I wanted to prioritize speed here