
LinkedIn Post Engagement Scraping skill and the Yalc Framework

The replacement for PhantomBuster's LinkedIn engagement scrape. One Unipile-backed CLI, structured JSON, no per-action pricing.

Yalc Fit Score: 9/10
License: MIT (Yalc)
Verbs: 5 CLI ops
Output: JSON to Inbox folder
Last reviewed: 2026-04-29
Trigger phrases

Say this to fire the LinkedIn Post Engagement Scraping skill

Any of these natural language phrases activates the skill inside Claude Code.

  • scrape LinkedIn post
  • get LinkedIn post likers
  • get LinkedIn post commenters
  • who liked this LinkedIn post
  • who commented on this LinkedIn post
  • LinkedIn engagement scrape
  • scrape this post

What it does

LinkedIn Post Engagement Scraping, plainly

The LinkedIn Scraping skill wraps the Unipile API to pull engagement data from any public LinkedIn post: likers, commenters, reactions, and the post itself. The skill writes structured JSON to `00_Inbox/linkedin_scrape_{type}_{date}.json`, ready for downstream qualification.

Where PhantomBuster charges per scrape and adds rate-limit drama, this skill uses the Unipile API, which is already included in your Unipile subscription. For Earleads workflows that scrape every Othmane LinkedIn post weekly to find engaged prospects, this skill is the workhorse.

Where it slots in

Position in the GTM operating system

Intake → Enrich → Score → Route → Draft → Send → Listen

The LinkedIn Scraping skill sits at the **intake** node when the lead source is "people who engaged with a LinkedIn post". It complements Crustdata (database queries) with engagement-specific data Crustdata doesn't ship.

Output flows directly into earleads-leads-qualification or linkedin-visitor-qualification for scoring, then into a campaign via unipile-campaign. The skill is the first step in the engager-to-customer pipeline.

The Yalc Framework

Running the LinkedIn Post Engagement Scraping skill end to end

Workflow position

The LinkedIn engagement intake node. Yalc invokes this skill when a post needs to be mined for prospects. Output is structured JSON at a known path, picked up by the next skill in the chain.

Required inputs

  • → A LinkedIn post URL
  • → Engagement type (likers, commenters, both, post details)
  • → A connected Unipile LinkedIn account (run unipile-outreach connect first)

Outputs

  • → JSON file at 00_Inbox/linkedin_scrape_{type}_{date}.json
  • → Structured records with name, headline, profile URL, engagement type, timestamp
  • → Resolution-ready provider IDs (LinkedIn slugs) for downstream messaging

Chaining recommendations

Upstream: Yalc prompt with a LinkedIn post URL → linkedin-scraping
Downstream: linkedin-scraping JSON → earleads-leads-qualification → unipile-campaign

Anti patterns to avoid

  • Don't scrape posts older than 30 days for fresh leads. Engagement data is timestamped; old engagers may have changed roles or companies. The skill warns when post age exceeds the threshold.
  • Don't scrape the same post twice in 7 days. Likers don't change that fast. Cache the JSON output and reuse it.
  • Don't scrape posts with more than 1,000 reactions in one shot. Unipile rate limits will throttle you. Use the Unipile MCP's batch API for large posts.

Operator take

Pros, cons, who it's for

Pros

  • Replaces PhantomBuster's LinkedIn engagement scrape with no per-action cost
  • Structured JSON output ready for downstream skills
  • Resolves LinkedIn slugs into provider IDs needed for messaging
  • Includes timestamps so engagement freshness is queryable
  • Uses the same Unipile credentials as outreach skills (no separate setup)

Cons

  • Public posts only. Private or member-restricted posts are not accessible.
  • Rate limits on very large posts (1k+ reactions) require batching.
  • Comment thread depth is single-level. Replies to comments are not pulled.
  • JSON output requires downstream parsing; not directly readable.

Who it's for

  • Yalc operators who post on LinkedIn and want to mine engagement weekly
  • Agencies turning client LinkedIn posts into qualified outbound lists
  • GTM engineers building engager-to-customer pipelines

Dependencies

What this skill expects to find

Other skills

unipile-outreach — first-time setup goes through its connect verb to link a LinkedIn account.

Environment variables

Requires the Unipile CLI at `~/bin/unipile/cli.mjs` and at least one connected LinkedIn account.

FAQ

Frequently asked

How does this skill differ from PhantomBuster's LinkedIn engagement phantom?

Cost model. PhantomBuster bills per phantom run (around $1-3 per scrape of 100 likers). This skill bills nothing extra; you're already paying for Unipile. Latency is similar; structured output is more developer-friendly.

What's the maximum post size that scrapes cleanly?

Posts up to about 1000 reactions and 200 comments scrape in a single pass. Larger posts require batching; the skill auto-batches and concatenates results.
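The auto-batching described above is presumably a chunk-and-concatenate loop over the reactions list. A minimal sketch of that pattern; the 100-item page size and the `fetch_page(offset, limit)` shape are assumptions standing in for the real Unipile call.

```python
from typing import Callable

PAGE_SIZE = 100  # assumed per-request cap; the real Unipile limit may differ

def scrape_in_batches(total: int, fetch_page: Callable[[int, int], list[dict]]) -> list[dict]:
    """Fetch `total` reactions in PAGE_SIZE chunks and concatenate the results.

    `fetch_page(offset, limit)` is a placeholder for the underlying API call.
    """
    results: list[dict] = []
    for offset in range(0, total, PAGE_SIZE):
        limit = min(PAGE_SIZE, total - offset)
        results.extend(fetch_page(offset, limit))
    return results
```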

Can I scrape company page posts?

Yes. The skill accepts both personal and company post URLs. Engagement data structure is the same.

Does it pull profile data (headline, current company) or just IDs?

It pulls headline, name, and profile URL by default. For deeper profile data (current company, seniority, location), pass the result through Crustdata enrichment downstream.

How fresh is the engagement data?

Real-time when the scrape runs. The skill records the scrape timestamp. The list of engagers can change if more people engage after you scrape, so re-run weekly for active posts.
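With weekly re-runs, the interesting set is usually the engagers who are new since the last scrape. A sketch of diffing two scrape runs by profile URL; the `profile_url` key is assumed from the Outputs list.

```python
def new_engagers(previous: list[dict], current: list[dict], key: str = "profile_url") -> list[dict]:
    """Return records present in the current scrape but absent from the previous one."""
    seen = {r[key] for r in previous}
    return [r for r in current if r[key] not in seen]
```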

Why does the JSON go to the 00_Inbox folder?

Earleads convention. The Inbox folder is the staging area for fresh data before processing. Yalc's other skills (qualification, campaign) read from this path by default. Override with the --output flag if needed.

Get the LinkedIn Post Engagement Scraping skill

Clone the Yalc skill set, drop in your env, run from your next Claude Code session.

gh repo clone Othmane-Khadri/YALC-the-GTM-operating-system && cp -r YALC-the-GTM-operating-system/.claude/skills/linkedin-scraping ./.claude/skills/