Hi all, I'm trying to figure out how I can use clay to analyse google maps images to see where solar panels are installed. Anyone have experience with this or know the most accurate way?
I saw a workflow that used the Google Earth API plus an LLM trained to recognize specific objects like solar panels. Not sure who built it, but it might help you get an idea.
Max M. this touches on a workflow we are exploring in the AI cohort. You can pass latitude and longitude into the following URL format (https://www.google.com/maps/@(LATITUDE),(LONGITUDE),200m/data=!3m1!1e3), where 200m means the view spans roughly 200 meters.
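If it helps, here's a quick sketch of how you'd build that URL from a lat/long pair. The function name and defaults are just illustrative; the URL pattern is the one above, where `data=!3m1!1e3` is the part that switches Google Maps into satellite view:

```python
def maps_satellite_url(lat: float, lng: float, view_m: int = 200) -> str:
    """Build a Google Maps satellite-view URL centred on (lat, lng).

    The trailing `data=!3m1!1e3` switches the map to satellite imagery;
    `{view_m}m` sets the visible area to roughly that many metres across.
    """
    return f"https://www.google.com/maps/@{lat},{lng},{view_m}m/data=!3m1!1e3"

# Example: the Googleplex
print(maps_satellite_url(37.4221, -122.0841))
# https://www.google.com/maps/@37.4221,-122.0841,200m/data=!3m1!1e3
```

You'd generate this URL as a formula column in Clay, then point the screenshot enrichment at it.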
Then you can use the 'capture screenshot' enrichment on that URL to get a screenshot of Google Maps, and the 'analyze image' enrichment with specific instructions to identify solar panels in the image (maybe estimate square meters too?). You may then need to pass that output into an AI text model (GPT 4.1 mini, for example) to parse out specific results.
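For that last parsing step, one approach is to prompt the model to answer in JSON and then extract it. This is just a sketch under that assumption; the field names (`panels_detected`, `estimated_sq_meters`) are made up for illustration, and the regex fallback handles models that wrap the JSON in extra prose:

```python
import json
import re

def parse_panel_result(model_output: str) -> dict:
    """Parse the text returned by the image-analysis step.

    Assumes the model was prompted to reply with JSON like
    {"panels_detected": true, "estimated_sq_meters": 40}. Falls back to
    scraping the first {...} block if the model adds surrounding prose.
    """
    try:
        return json.loads(model_output)
    except json.JSONDecodeError:
        match = re.search(r"\{.*\}", model_output, re.DOTALL)
        if match:
            return json.loads(match.group(0))
        # Nothing parseable: return empty fields rather than crashing.
        return {"panels_detected": None, "estimated_sq_meters": None}

print(parse_panel_result('Sure! {"panels_detected": true, "estimated_sq_meters": 40}'))
# {'panels_detected': True, 'estimated_sq_meters': 40}
```

In Clay you'd do the equivalent with a prompt instruction like "respond only with JSON", but having a fallback like this keeps the table from breaking when the model misbehaves.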
I would recommend you find an example or two where solar panels are clearly visible on Google Maps, and use those as test cases to refine your prompts until it's giving you consistently accurate results.