My request timed out. I have a dataset of LinkedIn profiles in Apify that I scraped. The actor has already run, and I did everything from the Apify Console; I did not trigger the actor from Clay. I now want to import the dataset into a Clay table, but I can't find a way to do it. I'm also trying with Make.com, where the 'Get Dataset Items' module successfully retrieves everything, but I don't know how to then use an HTTP module to send the data to Clay's webhook in the correct format.
👋 Hey there! Our support team has got your message - we'll be back in touch within 24 hours (often sooner!). If you haven't already, please include the URL of your table in this thread so that we can help you as quickly as possible!
Easiest way: download the dataset and upload it to Clay.
Trickier way: use a Clay webhook to send the data from Make.com to your Clay table.
I have a video about that: https://www.loom.com/share/b7365260050c4791a4c2b53aaf9bd5fe?sid=9e237436-6147-40e7-8b03-cd594a75689d
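For anyone following along, here's a minimal sketch in Python of what the Make.com HTTP module needs to do. The webhook URL is a placeholder (copy the real one from your Clay table's "Import Data from Webhook" source), and it assumes Clay creates one table row per JSON POST, so each dataset item goes out as its own request:

```python
import requests

# Placeholder: copy the real URL from your Clay table's webhook source.
CLAY_WEBHOOK_URL = "https://api.clay.com/v3/sources/webhook/<your-webhook-id>"

def send_items_to_clay(items):
    """POST each Apify dataset item to the Clay webhook.

    Assumes Clay creates one table row per request, so items are sent
    one at a time rather than as a single array.
    """
    for item in items:
        resp = requests.post(CLAY_WEBHOOK_URL, json=item, timeout=30)
        resp.raise_for_status()  # fail loudly if Clay rejects the payload
```

In Make.com terms, that loop is your iterator feeding an HTTP module set to POST with a JSON body of one dataset item.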
The dataset contains too many columns... I first need to combine several columns into one to make it work. For instance, details about somebody's previous job are spread over many columns: 'PreviousJob0Title', 'PreviousJob0CompanyName', 'PreviousJob0StartDate', 'PreviousJob0Description'... and there are many previous jobs. So I want to combine data from multiple columns into one 'Previous Job 1' column that contains everything about that role.
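A rough sketch of that merge step in Python, run on the downloaded JSON before uploading to Clay. The 'PreviousJob<N>...' field pattern and the file names are assumptions based on the columns described above; adjust them to match your actual export:

```python
import json
import re
from collections import defaultdict

# Assumed column pattern, e.g. "PreviousJob0Title" -> job 0, field "Title".
JOB_FIELD = re.compile(r"^PreviousJob(\d+)(.+)$")

def merge_previous_jobs(item):
    """Collapse PreviousJob<N>* columns into one 'Previous Job N' text column each."""
    merged, jobs = {}, defaultdict(list)
    for key, value in item.items():
        match = JOB_FIELD.match(key)
        if match and value:
            n, field = int(match.group(1)), match.group(2)
            jobs[n].append(f"{field}: {value}")
        elif not match:
            merged[key] = value  # keep non-job columns untouched
    for n in sorted(jobs):
        merged[f"Previous Job {n + 1}"] = "; ".join(jobs[n])
    return merged

# Hypothetical file names for the downloaded and cleaned datasets.
with open("dataset_items.json") as f:
    items = json.load(f)

flattened = [merge_previous_jobs(item) for item in items]

with open("dataset_flattened.json", "w") as f:
    json.dump(flattened, f, indent=2)
```

The flattened file has one "Previous Job N" column per role instead of four or more, which should keep you under the column limit.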
Hey! 👋 There are definitely some creative ways to tackle this issue. As suggested by @Muhammad, one approach could be to download the dataset and filter out only the most relevant columns to avoid hitting any column limits.

Alternatively, you could paginate the GET request: with Apify datasets, you can retrieve the data in manageable chunks by adding the offset and limit query parameters to your request. Here's an example URL with pagination:

https://api.apify.com/v2/datasets/{datasetId}/items?format=json&limit=1000&offset=0

limit=1000 retrieves 1,000 items per request; offset=0 starts from the first item, and you can increment it to retrieve the next batch. This lets you pull smaller portions of the dataset and import them into Clay iteratively. Hope this helps! Let me know if you have any more questions or need further assistance.
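A sketch of that pagination loop in Python, using the standard Apify dataset items endpoint. The dataset ID and token are placeholders; the token parameter is only needed for private datasets:

```python
import requests

DATASET_ID = "<your-dataset-id>"   # placeholder
API_TOKEN = "<your-apify-token>"   # placeholder; needed for private datasets
PAGE_SIZE = 1000

def fetch_all_items():
    """Pull the whole dataset in PAGE_SIZE chunks using limit/offset pagination."""
    items, offset = [], 0
    while True:
        resp = requests.get(
            f"https://api.apify.com/v2/datasets/{DATASET_ID}/items",
            params={
                "format": "json",
                "limit": PAGE_SIZE,
                "offset": offset,
                "token": API_TOKEN,
            },
            timeout=60,
        )
        resp.raise_for_status()
        page = resp.json()
        items.extend(page)
        if len(page) < PAGE_SIZE:  # a short page means we've reached the end
            break
        offset += PAGE_SIZE
    return items
```

Each chunk can then be sent to the Clay webhook (or merged with the flattening step above) before fetching the next one, so no single request times out.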