The replacement for PhantomBuster's LinkedIn engagement scrape. One Unipile-backed CLI, structured JSON, no per-action pricing.
Natural language requests to scrape a LinkedIn post's engagement activate the skill inside Claude Code.
The LinkedIn Scraping skill wraps the Unipile API to pull engagement data from any public LinkedIn post: likers, commenters, reactions, and the post itself. The skill writes structured JSON to `00_Inbox/linkedin_scrape_{type}_{date}.json`, ready for downstream qualification.
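The exact schema isn't spelled out here, so treat the following as a minimal sketch of what a likers scrape might produce. Every field name is an illustrative assumption; the skill is documented below as capturing name, headline, and profile URL per engager, plus a scrape timestamp.

```json
{
  "post_url": "https://www.linkedin.com/posts/<post-id>",
  "scrape_type": "likers",
  "scraped_at": "2025-01-15T09:30:00Z",
  "engagers": [
    {
      "name": "Jane Doe",
      "headline": "Head of Growth at ExampleCo",
      "profile_url": "https://www.linkedin.com/in/janedoe",
      "reaction": "LIKE"
    }
  ]
}
```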
Where PhantomBuster charges per scrape and brings rate-limit drama, this skill uses the Unipile API, which is included in your Unipile subscription. For Earleads workflows that scrape every one of Othmane's LinkedIn posts weekly to find engaged prospects, this skill is the workhorse.
The LinkedIn Scraping skill sits at the **intake** node when the lead source is "people who engaged with a LinkedIn post". It complements Crustdata (database queries) with engagement-specific data Crustdata doesn't ship.
Output flows directly into earleads-leads-qualification or linkedin-visitor-qualification for scoring, then into a campaign via unipile-campaign. The skill is the first step in the engager-to-customer pipeline.
The LinkedIn engagement intake node. Yalc invokes this skill when a post needs to be mined for prospects. Output is structured JSON at a known path, picked up by the next skill in the chain.
Requires two environment variables, `UNIPILE_API_KEY` and `UNIPILE_DSN`, plus the Unipile CLI at `~/bin/unipile/cli.mjs` and at least one connected LinkedIn account. First-time setup goes through the unipile-outreach skill's connect verb.
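A minimal setup sketch; the two variable names and the CLI path come from this skill, while the placeholder values and the existence check are just ordinary shell:

```bash
# Credentials the skill expects; values come from your Unipile dashboard.
export UNIPILE_API_KEY="your-api-key"
export UNIPILE_DSN="your-unipile-dsn"

# Confirm the CLI the skill calls is in place before a first run.
test -f ~/bin/unipile/cli.mjs \
  && echo "Unipile CLI found" \
  || echo "missing: run the unipile-outreach connect setup first"
```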
Cost model. PhantomBuster bills per phantom run (around $1-3 per scrape of 100 likers). This skill bills nothing extra; you're already paying for Unipile. Latency is similar; structured output is more developer-friendly.
Posts with up to about 1,000 reactions and 200 comments scrape in a single pass. Larger posts require batching; the skill auto-batches and concatenates the results.
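The batching happens inside the skill; purely as an illustration of the concatenation step, assuming hypothetical per-batch files that share the shape sketched earlier, jq's slurp mode does the merge:

```bash
# Hypothetical batch file names. Keep the first batch's metadata and
# concatenate every batch's engagers array into one list.
jq -s '.[0] + {engagers: (map(.engagers) | add)}' \
  00_Inbox/linkedin_scrape_batch_*.json \
  > 00_Inbox/linkedin_scrape_likers_2025-01-15.json
```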
Yes. The skill accepts both personal and company post URLs. The engagement data structure is the same.
It pulls headline, name, and profile URL by default. For deeper profile data (current company, seniority, location), pass the result through Crustdata enrichment downstream.
Real-time when the scrape runs. The skill records the scrape timestamp. The list of engagers can change if more people engage after you scrape, so re-run weekly for active posts.
Earleads convention. The Inbox folder is the staging area for fresh data before processing. Yalc's other skills (qualification, campaign) read from this path by default. Override with the `--output` flag if needed.
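A sketch of both conventions. The first command only assumes the engager-array shape from the earlier sketch; in the second, the verb is a guess, and only the CLI path and the `--output` flag itself come from this doc:

```bash
# Default: downstream skills pick the file up from the Inbox staging path.
jq '.engagers | length' 00_Inbox/linkedin_scrape_likers_2025-01-15.json

# Override: route the output elsewhere. The scrape-post verb below is an
# assumption; only the CLI path and the --output flag are documented here.
node ~/bin/unipile/cli.mjs scrape-post "https://www.linkedin.com/posts/<post-id>" \
  --output ./scratch/likers.json
```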
Clone the Yalc skill set, drop in your env vars, and run it from your next Claude Code session.
```bash
gh repo clone Othmane-Khadri/YALC-the-GTM-operating-system \
  && cp -r YALC-the-GTM-operating-system/.claude/skills/linkedin-scraping ./.claude/skills/
```