Token Limit Workaround for GPT API: Seeking Solutions and Ideas
Hey Clay fam 👋 I’m running into a challenge that might be familiar to some of you. Maybe someone has found a smart workaround? Here’s the deal:

⚙️ My Setup:
• Using Claygent + my own GPT-4 API key (Tier 1 = 30,000 tokens/min limit)
• Running 500+ rows in Clay
• Claygent itself works super fast – no issues at all
• But my custom GPT-connected columns (API) throw tons of errors from hitting the token-per-minute limit

💥 The Problem:
• I can’t upgrade to Tier 2 until I’ve spent $50 on the API, which I haven’t yet
• I’d love to run everything in one flow, just slower – e.g. 10 rows/min
• Currently there’s no built-in throttle I can set per GPT-connected column 😕

🧠 What I’m looking for:
• Any workaround to force Clay to run the table at a slower pace
• A way to delay execution per row or per column
• Or something like a “run selected rows in intervals” option

💬 Has anyone solved this, or built a workaround using Run buttons / batches / Make, etc.? Would love your ideas 🙏
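For anyone wanting to sketch the "run slower" idea outside of Clay (e.g. a script or a Make/HTTP step calling the GPT API directly), here’s a minimal sliding-window token-budget throttle. This is just an illustration, not anything built into Clay or OpenAI’s SDK – `TokenBudgetThrottle`, `run_row`, and the 500-token-per-row estimate are all my own assumptions:

```python
import time
from collections import deque

class TokenBudgetThrottle:
    """Sliding-window throttle: acquire() blocks until the request's
    estimated token cost fits inside the per-minute budget
    (e.g. OpenAI Tier 1 = 30,000 tokens/min)."""

    def __init__(self, tokens_per_minute, window_seconds=60.0):
        self.budget = tokens_per_minute
        self.window = window_seconds
        self.events = deque()  # (timestamp, token_cost) pairs

    def _used(self, now):
        # Drop events older than the window, then sum what's left.
        while self.events and now - self.events[0][0] > self.window:
            self.events.popleft()
        return sum(cost for _, cost in self.events)

    def acquire(self, cost):
        # Block until `cost` tokens fit under the rolling budget.
        while True:
            now = time.monotonic()
            if self._used(now) + cost <= self.budget:
                self.events.append((now, cost))
                return
            # Sleep until the oldest event ages out of the window.
            time.sleep(self.window - (now - self.events[0][0]) + 0.01)

throttle = TokenBudgetThrottle(tokens_per_minute=30_000)

def run_row(row, est_tokens=500):
    """Hypothetical per-row worker: throttle first, then call GPT."""
    throttle.acquire(est_tokens)
    # ... call your GPT-4 endpoint here instead of a Clay column ...
    return f"processed {row}"
```

With ~500 tokens per row, this paces a Tier 1 key at roughly 60 rows/min and simply sleeps instead of erroring when the budget is exhausted; the same budget-then-sleep shape could be mimicked with batched Run buttons inside Clay.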