Hey there, I'm using an external API to scrape comments on LinkedIn posts. The API returns the first 50 comments; if there are more, the API says there are "5 pages", and I then have to loop the scraper 5 times with the pagination token. What's the best way to organize this "loop" in a Clay table? I need it to be dynamic: if there are 3 pages, then 3 loops; if 35, then 35, etc. Thanks for your help!
My problem isn't so much the pagination or the API provider, but rather how to organize this in Clay. The API provider is this one: https://rapidapi.com/rockapis-rockapis-default/api/linkedin-api8/playground/apiendpoint_34fc9d87-cff5-426a-b34f-4d4b492cb84b. Some rows will need a 2nd, 3rd or even 4th API call to get the full list of comments, and I guess I have to organize this in a separate Clay table in the workbook?
Hi Louis, thanks for your message - can you share the documentation for the endpoint you're calling? There may be a way to specify which page you want data returned from, which could help with the pagination issue.
AFAIK there's no easy way to loop in Clay. If you have a defined threshold, i.e. a maximum of 50*5 comments, you can set up 5 columns for pagination. A workaround would be to use n8n to get the data outside of Clay, then push it into Clay via webhooks. Also, the API you are using (Get Profile Post Comments) has rate limits, which will start erroring out in Clay if you run a lot of rows (if I'm not wrong).
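If you do go the external route, here's a minimal sketch of what that loop could look like in Python, assuming the endpoint returns a pagination token field and that you've created a Clay webhook source to receive rows. The endpoint path, parameter names, and the `paginationToken` field are assumptions here - check the RapidAPI playground for the exact request/response shape:

```python
import requests

# Hypothetical endpoint path - confirm it in the RapidAPI playground
RAPIDAPI_URL = "https://linkedin-api8.p.rapidapi.com/get-profile-posts-comments"
RAPIDAPI_HEADERS = {
    "x-rapidapi-key": "YOUR_RAPIDAPI_KEY",
    "x-rapidapi-host": "linkedin-api8.p.rapidapi.com",
}
# Copy this from your Clay "Import data from webhook" source
CLAY_WEBHOOK_URL = "https://api.clay.com/v3/sources/webhook/YOUR_WEBHOOK_ID"

def scrape_all_comments(post_url: str) -> None:
    token = None
    while True:
        params = {"postUrl": post_url}          # parameter names are assumptions
        if token:
            params["paginationToken"] = token   # token returned by the previous page
        resp = requests.get(RAPIDAPI_URL, headers=RAPIDAPI_HEADERS, params=params, timeout=30)
        resp.raise_for_status()
        data = resp.json()

        # Push each comment to Clay as one webhook row
        for comment in data.get("data", []):
            requests.post(CLAY_WEBHOOK_URL, json={"post_url": post_url, "comment": comment}, timeout=30)

        token = data.get("paginationToken")     # missing/None once the last page is reached
        if not token:
            break
```

The same logic maps onto an n8n workflow (HTTP Request node in a loop, then a webhook node pointing at Clay); the script just makes the stop condition explicit: keep paging while a token comes back, stop when it doesn't.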
That actor needs to scrape with my own cookie, which I won't do. I have another provider that handles its own cookies, so I use HTTP requests rather than a direct Apify-Clay integration.
I would suggest using the Apify Actor for LinkedIn Post Scraping and importing the results into Clay through the Apify integration.
To dynamically organize and automate looping through paginated API responses in Clay, you can use a combination of the HTTP API integration and our "write to table" feature. Here's a suggested approach:
1. Set up an initial HTTP API call to get the total number of pages.
2. Use Clay formulas to generate URLs for each page based on the total count.
3. Create a separate table for the paginated results.
4. Use the HTTP API integration to scrape each page, one by one.
5. Use the "write to table" feature to add the scraped data from each page to your results table.
6. Set up conditional run criteria to ensure the process stops when all pages are scraped.
This approach allows for dynamic looping based on the number of pages returned by the API.
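To make step 2 concrete, here's a small Python sketch of the transformation a formula column would perform: given the total page count from the initial call, produce one record per page, each of which becomes a row in the results table via "write to table". The field and parameter names are illustrative only; mirror whatever the RapidAPI endpoint actually expects:

```python
def build_page_requests(post_url: str, total_pages: int) -> list[dict]:
    """One dict per page; each becomes a row in the paginated-results table."""
    return [
        {"post_url": post_url, "page": page}
        for page in range(1, total_pages + 1)
    ]

# Example: the initial call reported 5 pages for this post
rows = build_page_requests("https://www.linkedin.com/posts/example", 5)
# -> [{"post_url": "...", "page": 1}, ..., {"post_url": "...", "page": 5}]
# The HTTP API column in the results table then calls the endpoint once per row,
# passing the page number (or pagination token) as the query parameter.
```

The conditional run criteria in step 6 then simply restrict the HTTP API column to rows where the page value exists and the comment data hasn't been fetched yet.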