Optimizing LinkedIn Job Scraping to Handle Long Descriptions
Hey! I’m currently running a LinkedIn job-scraping process with several enrichment steps, such as using ChatGPT to identify tools mentioned in the posting, flag remote roles, and more. However, I’ve hit an issue: some LinkedIn job descriptions exceed Clay’s data size limit, which breaks my processing chain. Is there a way to still parse the essential information from these longer descriptions, perhaps by truncating the content before passing it through the rest of the chain? I’d appreciate any suggestions. Thanks!
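For reference, this is roughly the kind of truncation I have in mind as a pre-processing step before the enrichment columns run. It's only a sketch: the 10,000-character limit is a placeholder for whatever Clay's actual cap turns out to be, and the function name is my own.

```python
def truncate_description(text: str, limit: int = 10000) -> str:
    """Trim a job description to at most `limit` characters, cutting at
    the last sentence boundary (falling back to a word boundary) so the
    downstream ChatGPT enrichment steps still see coherent text.

    The 10,000-character default is a placeholder, not Clay's real limit.
    """
    if len(text) <= limit:
        return text
    clipped = text[:limit]
    # Prefer ending on a sentence or paragraph break; otherwise end on a word.
    cut = max(clipped.rfind(". "), clipped.rfind("\n"))
    if cut == -1:
        cut = clipped.rfind(" ")
    return clipped[: cut + 1].rstrip() if cut != -1 else clipped
```

Something like this could run in an HTTP/code step (or be approximated with a formula column) so that only the trimmed text is handed to the later enrichment steps. If anyone knows the actual size cap, or a built-in Clay way to do this, that would be even better.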