The client-facing SEO audit deliverable. Comparative mode is the killer feature: re-audits a quarter later show score deltas, resolved findings, and new gaps overlaid on the prior report.
Any of these natural language phrases activates the skill inside Claude Code.
The SEO/GEO Audit skill runs a full audit on a client website (technical SEO, content, schema, GEO/AI search readiness, sitemap, etc.), then deploys an interactive Vercel report under a custom subdomain (e.g. `audit-clientname.vercel.app`). Output is structured JSON consumed by the report template at `~/Desktop/earleads-seo-report/`.
Comparative mode is unique to this skill: when a prior audit JSON exists for the same client, the skill auto-archives the old audit and produces a `comparative` block (score deltas, resolved findings, still-open findings, new findings). The new audit deploys to a `-v2` or `-v3` subdomain so prior audits stay live for client reference.
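The diff logic described above can be sketched in a few lines. This is a hedged illustration only: the field names (`findings`, `id`, `scores`) are assumptions for the sake of the example, not the skill's documented schema.

```python
# Sketch: diff two audit JSON payloads into a `comparative` block.
# Field names (`findings`, `id`, `scores`) are illustrative assumptions,
# not the skill's actual schema.

def build_comparative(prior: dict, current: dict) -> dict:
    prior_ids = {f["id"] for f in prior["findings"]}
    current_ids = {f["id"] for f in current["findings"]}

    return {
        "score_deltas": {
            section: current["scores"][section] - prior["scores"].get(section, 0)
            for section in current["scores"]
        },
        "resolved": sorted(prior_ids - current_ids),    # gone since last audit
        "still_open": sorted(prior_ids & current_ids),  # present in both
        "new": sorted(current_ids - prior_ids),         # introduced since last audit
    }


# Toy payloads to exercise the diff:
prior = {
    "scores": {"overall": 37, "schema": 20},
    "findings": [{"id": "missing-llms-txt"}, {"id": "no-org-schema"}],
}
current = {
    "scores": {"overall": 65, "schema": 57},
    "findings": [{"id": "missing-llms-txt"}, {"id": "thin-blog-pages"}],
}

print(build_comparative(prior, current))
```

Set-difference on stable finding IDs is what makes "resolved vs. still open vs. new" cheap to compute once both audits share an identifier scheme.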
The SEO Audit skill sits at the **listen** node when the audit is the deliverable. It also produces actionable recommendations that flow into downstream content or dev work, so it's adjacent to **draft**.
The skill is most powerful when run quarterly. Comparative mode shows clients what improved (justifies the retainer), what's still open (sets the next quarter's plan), and what's newly broken (surfaces unexpected regressions).
The audit deliverable. Yalc runs this on demand at the start of each engagement (baseline audit) and quarterly thereafter (comparative audits).
- `FIRECRAWL_API_KEY`
- `BRAVE_API_KEY`
- `VERCEL_TOKEN` (for deploy)
- Report template repo at `~/Desktop/earleads-seo-report/`

The skill deploys via the Vercel CLI. Prior audits are archived automatically as `audit-results-YYYY-MM-DD.json`. v2.1.0 introduced comparative mode (April 24, 2026), with the DataScaleHR audit moving from a baseline score of 37 to 65.
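A minimal pre-run environment setup might look like the following. Variable names match the list above; every value is a placeholder, not a real key.

```shell
# Placeholders only -- substitute your real keys before running the skill.
export FIRECRAWL_API_KEY="fc-..."
export BRAVE_API_KEY="BSA..."
export VERCEL_TOKEN="..."   # consumed by the Vercel CLI at deploy time
```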
First audit is 45 to 90 minutes depending on site size. Comparative re-audits are faster (30 to 60 minutes) because the framework is already known and the diff focus narrows the analysis.
A `comparative` block in the JSON output with score deltas (overall and per-section), resolved findings (gone since last audit), still-open findings (still present), and new findings (introduced since last audit). The Vercel template renders this as a progress overlay.
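As a rough sketch, the block could look like this in the output JSON. Keys and values here are illustrative guesses, not the template's actual schema:

```json
{
  "comparative": {
    "prior_audit": "audit-results-2026-03-18.json",
    "score_deltas": { "overall": 28, "schema": 37, "ai_search": 59 },
    "resolved": ["no-org-schema", "missing-alt-text"],
    "still_open": ["thin-blog-pages"],
    "new": ["broken-canonical"]
  }
}
```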
A custom subdomain like `audit-clientname.vercel.app` for production. Comparative re-audits deploy to `audit-clientname-v2.vercel.app` (and v3, v4, etc.) so prior versions stay live.
Yes. Drop battlecards, keyword files, content strategy docs, and prior SEO work into the client's `01_Projects/Clients/Active/<client>/` folder. The skill reads them at runtime and grounds findings in the client's specific context.
First audit: March 18, 2026, score 37. Re-audit: April 24, 2026, score 65 (+28). The comparative overlay shows AI Search +59, Schema +37, Images +36, Content +32, with 13 findings resolved and 20 still open. Live at `audit-datascalehr-v2.vercel.app`.
Yes. The audit checks AI crawler accessibility (GPTBot, ClaudeBot, PerplexityBot), llms.txt compliance, brand mention signals, and passage-level citability. GEO is a first-class section, not bolted on.
Clone the Yalc skill set, drop in your env, run from your next Claude Code session.
```bash
gh repo clone Othmane-Khadri/YALC-the-GTM-operating-system && \
  cp -r YALC-the-GTM-operating-system/.claude/skills/earleads-seo-audit ./.claude/skills/
```