Performing the same test for some other columns, and I'm getting the same output in both Clay and GPT. Weird, maybe it's just that particular one
As opposed to the Clay one
Here's what GPT gave me (a more succinct answer, which I much prefer)
For reference, I'm using inputs and outputs in Row 10
Hi Tanvi, thanks for the quick reply! Sure thing, here's the table: https://app.clay.com/workspaces/241406/workbooks/wb_Ro3fHp7Mny9s/tables/t_tC3dQMitBPah/views/gv_VuPHZzntrj2P
Anyone ever experience getting a different result / output when using GPT-4o in Clay vs. just natively in GPT? I'm entering the exact same prompt & info, but the output from the native GPT client is a lot better than what I'm getting in a Clay cell 🤔
What's the best way to scrape LinkedIn job descriptions and extract the role's responsibilities? I'm trying to do it with Neon but getting inconsistent / incomplete results. For example, from this job it's only grabbing like 5 out of the 14 bullet points. https://www.linkedin.com/jobs/view/revenue-operations-rev-ops-lead-baltimore-at-akko-4024797914/
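One workaround if the AI extraction keeps dropping bullets: pull the raw page HTML into a cell first, then parse every `<li>` deterministically instead of asking a model to summarize. A minimal sketch in Python using only the stdlib `html.parser` (the `sample` markup below is a hypothetical stand-in, not LinkedIn's actual DOM, so you'd adjust the targeting for the real page structure):

```python
from html.parser import HTMLParser

class BulletExtractor(HTMLParser):
    """Collects the text content of every <li> element fed to it."""
    def __init__(self):
        super().__init__()
        self.in_li = False
        self.bullets = []
        self._buf = []

    def handle_starttag(self, tag, attrs):
        if tag == "li":
            self.in_li = True
            self._buf = []

    def handle_endtag(self, tag):
        if tag == "li" and self.in_li:
            self.in_li = False
            text = "".join(self._buf).strip()
            if text:
                self.bullets.append(text)

    def handle_data(self, data):
        if self.in_li:
            self._buf.append(data)

# Hypothetical job-description snippet standing in for the scraped HTML
sample = """
<div class="description__text">
  <ul>
    <li>Own the revenue operations roadmap</li>
    <li>Maintain CRM hygiene</li>
    <li>Build reporting dashboards</li>
  </ul>
</div>
"""

parser = BulletExtractor()
parser.feed(sample)
print(parser.bullets)
```

Since this grabs every list item rather than having a model decide what's "responsibilities," you'd get all 14 bullets instead of 5; filtering down to just the responsibilities section can then be a separate, much easier prompt over clean text.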
tysm
alright sweet!