Hey Daniela,
what I mean is, for example, row #2 in the table:
I have the results from the URL in Clay (screenshot 1) and the ones from scraping manually (going top to bottom, results #21 to #30).
I used the same link https://www.google.com/search?q=gesundheit+site:digistore24.com&uule=w+CAIQICIGQmVybGlu&start=20 for both. The results probably differ not just because of the region I force Claygent to search from, but in general because of cookies in my browser, etc.?
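One likely culprit is personalization: a logged-in browser carries cookies and history that Claygent's clean session doesn't, so even identical URLs can return different rankings. A minimal sketch of pinning the URL parameters the same way on both sides (the hl, gl, and pws values are assumptions I've added; pws=0 is a long-standing anti-personalization parameter that Google may ignore nowadays):

```python
from urllib.parse import urlencode

# Build the exact same search URL for the Claygent run and the manual check.
params = {
    "q": "gesundheit site:digistore24.com",
    "uule": "w CAIQICIGQmVybGlu",  # the Berlin location token from the original link
    "hl": "de",                    # interface language (assumed)
    "gl": "de",                    # result country (assumed)
    "num": 10,                     # results per page
    "start": 20,                   # offset: 20 -> results #21-30
    "pws": 0,                      # historically disables personalization (may be ignored)
}
print("https://www.google.com/search?" + urlencode(params))
```

Comparing against an incognito/logged-out browser window is the fairer baseline for the manual check.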
Hey Luis,
thanks for the answer about outputting the result number.
But what should I do about getting different results when scraping via Claygent vs. scraping manually? 😞
Maybe I'm overcomplicating stuff, but overall it seems to work quite well (even though it would be awesome to also know how many results we can scrape, or to have a breaking point for when the results get too far off).
It seems to work via Claygent visiting the search query URL, but I keep getting very long queue times (waiting around 1 minute and then having the run error out).
What could fix that issue? 😞
Hey, I want to scrape Google results based on search queries.
The issue I have is that the searches keep starting from the beginning, but I'd need to start much further in (e.g. on page #5) so I don't always get the same results.
Is there any way to solve this, e.g. by going for a different approach?
Check this: I had the same questions and got it fixed here
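For anyone landing on this later: the usual fix is Google's `start` URL parameter, which offsets the results (with the default 10 results per page, page n begins at start=(n-1)*10). A minimal sketch, using the query from this thread:

```python
from urllib.parse import urlencode

def google_search_url(query: str, page: int, per_page: int = 10) -> str:
    """Build a Google search URL that begins at a given results page.
    Page 1 -> start=0, page 5 -> start=40, and so on."""
    offset = (page - 1) * per_page
    return "https://www.google.com/search?" + urlencode(
        {"q": query, "num": per_page, "start": offset}
    )

# Start directly on page 5 instead of from the beginning.
print(google_search_url("gesundheit site:digistore24.com", page=5))
```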
Hey Mustafa! Thanks for reaching out. Happy to help ✨. The reason it's returning an error is that the data isn't in list format. "List" here refers to the data being structured as an array of objects. The workaround is to use "ChatGPT: Generate Text" to convert the field into a JSON array, which can then be mapped into the Write to Table column to be added as new rows in a separate table (the target shape is sketched below the links).
Here's a quick guide for using JSON mode in ChatGPT: https://www.loom.com/share/317de47023fb429a80022de0f4a5d6dc
Also sharing an additional resource for using write to other table: https://www.loom.com/share/d46fd74ae9004d229142a2474ac61866?sid=440fafc1-5d4c-4ed4-bebe-449748573397
Let me know if this helps!
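To make the expected "list format" concrete, here's a minimal sketch of the target shape (field and column names are made up for illustration): each object in the JSON array becomes one new row, and its keys map onto the columns of the target table.

```python
import json

# Hypothetical JSON array of objects that "Write to Table" can consume:
# each object becomes one row; keys map onto columns of the target table.
rows = [
    {"rank": 21, "company": "Acme GmbH", "domain": "acme.example"},
    {"rank": 22, "company": "Beta AG", "domain": "beta.example"},
]

print(json.dumps(rows, indent=2))
```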
Hey people,
Any tips for the PhantomBuster pull integration, so I can get data updated regularly (every time the Phantom runs)?
Or at least a way to get all 450 (or however many) results I have in PhantomBuster into a table, without having to fetch every container ID manually? :(
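Not from the thread, but for reference: PhantomBuster's v2 API can list an agent's past containers and fetch each run's result object, which avoids copying container IDs by hand. A rough sketch under that assumption; the endpoint names and response shapes here are from memory, so please verify them against the current API docs:

```python
import requests

API_KEY = "YOUR_PHANTOMBUSTER_API_KEY"  # from your PhantomBuster account settings
AGENT_ID = "YOUR_AGENT_ID"              # the Phantom whose results you want
HEADERS = {"X-Phantombuster-Key-1": API_KEY}
BASE = "https://api.phantombuster.com/api/v2"

# List the agent's containers (past runs) instead of copying IDs by hand.
# Assumed response shape: {"containers": [{"id": ...}, ...]}
containers = requests.get(
    f"{BASE}/containers/fetch-all",
    headers=HEADERS,
    params={"agentId": AGENT_ID},
).json().get("containers", [])

# Pull each run's result object; its contents depend on the Phantom used.
for container in containers:
    result = requests.get(
        f"{BASE}/containers/fetch-result-object",
        headers=HEADERS,
        params={"id": container["id"]},
    ).json()
    print(container["id"], result.get("resultObject"))
```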