Intelligence Guide

The Complete Guide to Competitor Benchmarking

How to measure where you actually stand, what the numbers mean, and what to do when you find out you're behind.

By Elevated Signal Research Team · March 30, 2026 · 17 min read

Key takeaways

  1. There are 5 types of benchmarking (competitive, strategic, process, performance, internal). Most companies only do one and miss the other four.
  2. Real 2026 benchmarks: SaaS median NRR is 102%, ecommerce conversion averages 3.44%, manufacturing OEE averages 66.8%.
  3. For private companies (most of your competitors), proxy data works: H-1B filings reveal salary bands, OSHA logs estimate headcount, job postings reveal strategy 3-6 months early.
  4. 73% of companies using data-driven benchmarking report improved ROI (ResearchAndMetric 2025). Companies without structured evaluation face 3.2x higher project failure rates.
  5. The hidden danger: benchmarking competitors can trap you into copying instead of innovating. Use it for gap analysis, not as your strategy.

Competitor benchmarking (sometimes called competitive benchmarking in management literature) is the practice of measuring your company's performance against specific rivals using hard numbers. Not gut feelings, not assumptions about who's winning. Actual metrics: their revenue growth vs. yours, their customer retention vs. yours, their website converting at 4.2% while yours sits at 1.8%. Where exactly are you ahead, and where are you losing ground?

That distinction matters because most businesses confuse benchmarking with competitive analysis, which is broader and more qualitative. Competitive analysis asks "who are they and what are they doing?" Benchmarking asks something narrower: "how do we compare on the things that actually determine who wins?" The U.S. Small Business Administration draws a similar line: market research finds customers, competitive analysis finds your edge, and benchmarking tells you exactly where that edge is sharp or dull.

Xerox formalized the practice in 1989, and Bain & Company's Management Tools survey (running since 1993) consistently ranks benchmarking in the top three most-used management tools globally. But that same survey shows satisfaction with benchmarking falls below the average. Companies do it constantly and are frequently disappointed with the results. Why does that happen, and how do you avoid it?

Types

What are the 5 types of benchmarking?

Robert Camp at Xerox published the original taxonomy in 1989. APQC (the American Productivity & Quality Center) uses a four-type model. The blogosphere cites five. Here are the five most commonly referenced, with what each one actually does for you.

1. Competitive benchmarking

Direct comparison against named rivals. Your NPS vs. theirs, your pricing vs. theirs, your page load time vs. theirs. This is what most people mean when they say "benchmarking." It gives you an immediate reality check, but the challenge is obvious: your competitors are not publishing their internal data for you.

2. Strategic benchmarking

Comparing business models, market positioning, and long-term direction. When a legacy on-premise software company studies how Salesforce transitioned to cloud subscriptions, that is strategic benchmarking. It answers "where should we be heading?" rather than "how are we performing today?"

3. Process benchmarking

Analyzing operational workflows against someone who does a specific process better, regardless of industry. The classic case: Great Ormond Street Hospital benchmarked their ICU patient handoff procedure against Ferrari's Formula One pit stop routine and cut handoff errors by 66%.

4. Performance benchmarking

Purely quantitative. Revenue growth, profit margins, market share, customer acquisition cost. APQC maintains over 4,400 standardized performance measures for this exact purpose. It answers "what numbers should we be hitting?"

5. Internal benchmarking

Comparing across your own business units, regions, or time periods. A manufacturing company might compare defect rates between its facilities in Ohio and Texas. The advantage: you control the data. The limitation: you only learn from yourself.

One useful pattern from the research: internal benchmarking yields roughly 10% improvements, competitive benchmarking around 20%, and functional or generic benchmarking (looking outside your industry) can drive 35% gains. The further you look from your own backyard, the bigger the breakthroughs.

Metrics

What should you actually measure?

Most guides dump 50 KPIs on you and call it a day. That is not helpful. The right metrics depend on your business model, your competitive set, and the decisions you are trying to make. Six categories cover the specific numbers that matter.

Financial metrics

Revenue growth rate, gross and net profit margins, revenue per employee, and market share. For SaaS companies, add blended CAC ratio, CAC payback period, and the Rule of 40 (growth rate plus profit margin should exceed 40%). For public companies, 10-K filings through SEC EDGAR give you everything. For private companies, you need proxy data (more on that below).
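The Rule of 40 mentioned above is simple arithmetic; a minimal sketch (the example inputs are illustrative, not real company data):

```python
def rule_of_40(growth_rate_pct: float, profit_margin_pct: float) -> bool:
    """Rule of 40: revenue growth rate plus profit margin should be >= 40 points."""
    return growth_rate_pct + profit_margin_pct >= 40.0

# 30% growth at a 15% margin clears the bar (45 points);
# 26% growth at a -20% margin does not (6 points).
print(rule_of_40(30, 15), rule_of_40(26, -20))  # True False
```

The same pattern works as a screening column in a benchmark spreadsheet: compute the score for every competitor you can estimate growth and margin for, and rank.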

Product metrics

Feature parity, release velocity, pricing structure, and customer satisfaction (NPS, G2 ratings). Release velocity is underrated. How often do competitors ship updates? A company pushing weekly releases operates differently than one releasing quarterly. That cadence reveals engineering investment, risk tolerance, and responsiveness to market shifts.

Marketing and digital metrics

Website traffic, organic keyword rankings, estimated ad spend, content publishing cadence, and social media share of voice. But which of these matters most? Conversion rate, not traffic. A competitor getting 200K visits at 0.5% conversion is doing worse than one getting 50K visits at 3%.
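The traffic-versus-conversion point is just multiplication; here is the example above worked out (figures taken from the paragraph, function name is mine):

```python
def monthly_customers(visits: int, conversion_rate: float) -> float:
    """Visitors who actually convert, regardless of raw traffic volume."""
    return visits * conversion_rate

high_traffic = monthly_customers(200_000, 0.005)  # 200K visits at 0.5%
low_traffic = monthly_customers(50_000, 0.03)     # 50K visits at 3%
print(low_traffic > high_traffic)  # True: the smaller site wins
```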

Operations and workforce metrics

Employee headcount growth, hiring velocity by department, Glassdoor ratings (average is roughly 3.3 out of 5), and patent filings. Hiring patterns are a leading indicator: if a competitor that historically hired marketers suddenly posts 30 machine learning engineering roles at $180K base salary, they are pivoting toward AI infrastructure. You are finding out months before any product announcement.

Customer metrics

Review ratings across G2 and Trustpilot, sentiment extracted from social media and forums, complaint volumes (BBB, CFPB databases), and churn rate estimates. Where do you find the most honest competitor intelligence? Reddit. G2 reviews are curated. A competitor's subreddit threads are not. Pseudonymous structure means people say what they actually think.

Digital infrastructure metrics

Page load speed, Core Web Vitals, mobile optimization, technology stack (via BuiltWith), and security posture. In 2026, add AI visibility: how often is the competitor cited in ChatGPT, Perplexity, and Google AI Overviews? Digital health scorecards can quantify all of this in a single audit.

Industry data

What do good benchmark numbers look like?

"Good" depends entirely on your industry. A 3% conversion rate is excellent in ecommerce and terrible in food delivery. Here are verified 2025-2026 benchmarks across three sectors that come up in almost every competitive analysis we run.

SaaS (Software as a Service)

Metric | Median | What it means
Net Revenue Retention | 102% | Each cohort of customers is worth slightly more the following year. Below 100% means you are shrinking even if you add new logos.
Gross Revenue Retention | 90% | Nine out of ten customers renew. Below 85% signals a product-market fit problem.
Revenue Growth Rate | 26% | Top quartile has slowed from 60% (2023) to about 50% (2025). The "growth at all costs" era is over.
Expansion Revenue | 40% of new ARR | For companies above $50M ARR, expansion revenue is nearly 60% of growth. Mature SaaS grows by upselling, not acquiring.

Sources: Benchmarkit 2025 B2B SaaS Report, SaaS Capital 2025, ChurnZero retention data
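The standard formulas behind the first two rows, sketched in Python (the cohort numbers are hypothetical, chosen to land on the medians above):

```python
def net_revenue_retention(start_arr, expansion, contraction, churn):
    """NRR over a period: starting ARR plus expansion, minus downgrades and churn."""
    return (start_arr + expansion - contraction - churn) / start_arr

def gross_revenue_retention(start_arr, contraction, churn):
    """GRR ignores expansion: it measures only what you kept."""
    return (start_arr - contraction - churn) / start_arr

# A $1M cohort with $120K expansion, $40K downgrades, and $60K churn
# lands on the medians: NRR 102%, GRR 90%.
print(net_revenue_retention(1_000_000, 120_000, 40_000, 60_000))  # 1.02
print(gross_revenue_retention(1_000_000, 40_000, 60_000))         # 0.9
```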

Ecommerce

Industry | Avg. conversion rate | Context
Food & Beverage | 6.0-6.1% | Highest converting category. Low price points, repeat purchases, subscription models.
Health & Beauty | 4.2-4.6% | Strong social proof. Influencer-driven. Predictable replenishment cycles.
Fashion & Apparel | 2.9-3.0% | Sizing uncertainty kills conversions. Return rates are the silent margin killer.
Home & Furniture | 1.2-1.3% | High AOV, long consideration phase, shipping friction.
Cross-industry average | 2.5-3.0% | Desktop and mobile reached parity at roughly 2.8% in early 2025. Referral traffic converts at 5.4%.

Sources: Shopify 2025 benchmarks, Dynamic Yield, Statista global aggregate

Manufacturing

The gold standard metric is Overall Equipment Effectiveness (OEE), which multiplies equipment availability by performance efficiency by quality rate. World-class facilities hit 85% or higher. The average across discrete manufacturing? 66.8%, based on Godlan's 2025 data covering 1,470+ operations. Medical device manufacturing leads at 78.2%. Low-volume trailer production trails at 57.2%.
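OEE is the product of those three rates, as described above; a minimal sketch (the example factor values are hypothetical):

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """Overall Equipment Effectiveness: availability x performance x quality (each 0-1)."""
    return availability * performance * quality

# Each factor can look healthy on its own while the product lags:
# 90% availability x 85% performance x 98% quality is only ~75% OEE,
# well short of the 85% world-class bar.
print(round(oee(0.90, 0.85, 0.98), 3))  # 0.75
```

The multiplicative structure is the point: a facility must be strong on all three factors at once, because a weak factor drags the whole score down.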

Forty-eight percent of manufacturers report severe challenges filling production roles (Deloitte 2025 Smart Manufacturing Survey), which is driving massive investment in automation. Leading manufacturers allocate over 20% of their improvement budgets to smart manufacturing initiatives.

Framework

How do you benchmark competitors step by step?

Most guides give you a theoretical seven-step framework that looks great on a slide deck and collapses under real-world conditions. Below is the process we actually use when producing competitive intelligence reports. Five steps, each tied to a specific output.

Step 1: Define your competitive set (3 to 5 companies)

Pick three to five competitors. More than that and you drown in data. Include at least one direct competitor (same product, same market), one aspirational competitor (where you want to be in two years), and one adjacent competitor (different product, same customer).

How do you find competitors you do not know about? Check G2 "alternatives" pages. Read Reddit threads where people discuss your category. Look at who bids on your branded keywords. Search SEC filings of public companies in your space for vendor names. Each source surfaces different blind spots.

Step 2: Choose 10 to 15 metrics that map to your strategic questions

This is where most benchmarking projects go wrong. They measure everything available instead of everything relevant. If your CEO is asking "why are we losing deals?" then benchmark pricing, feature parity, win/loss rates, and G2 ratings. If the question is "why is our growth slowing?" benchmark acquisition channels, content output, and market share — including video where most B2B competitors now publish their best demand-gen material. Tracking competitor YouTube channels is its own discipline; we built YouTube content intelligence specifically to surface what your competitors are uploading, what is getting watched, and which topics are shifting their organic demand.

Limit yourself to 10-15 metrics. Spread them across at least three of the six categories above (financial, product, marketing, operations, customer, digital). Single-dimension benchmarking gives you a distorted picture.

Step 3: Collect data from multiple layers

Free sources first: SEC EDGAR for financials, Google Trends for relative interest, LinkedIn for headcount, Glassdoor for employee sentiment. Paid tools add depth: SEMrush or Ahrefs for digital metrics, SimilarWeb for traffic estimates, BuiltWith for technology stacks. The unconventional layer is where it gets interesting. H-1B visa filings from the Department of Labor expose salary bands and hiring priorities at private companies. OSHA injury logs can proxy facility headcount. Patent filings reveal R&D direction 18 months before product launches.

Most benchmarking efforts stop at the first layer and wonder why their insights feel shallow. The real signal is in layers two and three.

Step 4: Score, normalize, and find the gaps

Raw numbers are not comparable across companies. A $10M company and a $500M company both growing at 25% are in very different situations. Normalize metrics to common denominators: revenue per employee, customer acquisition cost as a percentage of first-year contract value, marketing spend as a percentage of revenue.

Then build a weighted scoring model. Not every metric matters equally. Weight them based on what drives competitive advantage in your market. A gap analysis is only useful if it shows you where closing the gap actually moves the needle.
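One naive way to implement the normalize-and-weight step described above; the metric names, weights, and scoring rule here are illustrative, not a standard:

```python
def weighted_gap_score(ours: dict, theirs: dict, weights: dict) -> float:
    """Sum of weighted relative gaps; positive means we lead on balance."""
    score = 0.0
    for metric, weight in weights.items():
        # Relative gap normalizes away scale differences between metrics.
        gap = (ours[metric] - theirs[metric]) / theirs[metric]
        score += weight * gap
    return score

# Hypothetical numbers for one competitor comparison:
ours = {"revenue_per_employee": 210_000, "nrr": 0.98, "conversion": 0.018}
theirs = {"revenue_per_employee": 250_000, "nrr": 1.05, "conversion": 0.042}
weights = {"revenue_per_employee": 0.3, "nrr": 0.4, "conversion": 0.3}

print(weighted_gap_score(ours, theirs, weights))  # negative: we trail overall
```

Running it per metric (weight 1.0 on one metric at a time) shows where the overall deficit comes from, which is exactly the gap analysis the step calls for.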

Step 5: Turn insights into decisions (this is where most projects die)

Databox research puts the number at 46%: nearly half of companies that claim to do benchmarking have no process beyond routine reporting. The benchmark report gets built, circulated, filed away, and nothing changes. We have seen this dozens of times.

Every insight must connect to a decision. "Their content team publishes 4x more than ours" is an observation. "We need to hire two writers and publish 8 articles per month targeting these 12 keywords where they rank and we do not" is a decision. If a benchmark does not lead to a resource allocation change, a product roadmap adjustment, or a go-to-market shift, it was not worth collecting.

Intelligence sources

How do you benchmark when competitors are private?

Public companies file quarterly reports with the SEC. Private companies do not. And most of your competitors are probably private. So how do you get numbers on companies that do not publish numbers?

H-1B visa filings (Department of Labor)

Every employer hiring foreign nationals under H-1B files a Labor Condition Application with the Department of Labor. These are public, updated regularly, and extremely detailed. Query a competitor's LCAs and you get exact salary bands by role and location, hiring velocity, and strategic direction. Thirty new machine learning engineer LCAs at $180K means something different than thirty sales development rep LCAs at $65K.

OSHA injury logs

Industrial and manufacturing companies report workplace injuries to OSHA alongside total hours worked. In the absence of published employee counts, "total hours worked" is a direct mathematical proxy: divide annual hours by roughly 2,000 hours per full-time worker and you get an estimated facility headcount, plus a read on shift volume and facility utilization.

Job postings and LinkedIn

Job postings reveal strategic priorities 3-6 months before product announcements. LinkedIn employee counts over time proxy growth rate. Hiring velocity by department tells you where money is flowing. A company that was all sales hires six months ago and is now all engineers has shifted strategy.

Forums, communities, and Glassdoor

Formal review platforms like G2 and Capterra are gamed with incentivized reviews. Anonymous forums are not. Pseudonymous architecture and community moderation produce unfiltered sentiment. We analyze archived consumer discussions across these platforms for exactly this reason.

Technology stack profiling (BuiltWith, Wappalyzer)

Identify the exact tools powering a competitor's digital operations. An enterprise-tier analytics platform, a specific marketing automation stack, or a high-cost CRM tells you about their budget, sophistication, and process maturity without them saying a word.

No single source gives you the complete picture. Layer them. Cross-reference. A claim from one source is a hypothesis; the same signal from three sources is intelligence. Valona Intelligence calls this triangulation. Without it, benchmarking misleads as often as it informs.

Case studies

What does competitor benchmarking look like in practice?

Theory is cheap. Three companies used benchmarking to make specific, measurable changes, and one case shows why benchmarking alone is not a strategy.

Toyota studied Ford and built something better

In 1950, Eiji Toyoda spent three months at Ford's Rouge plant in Dearborn, Michigan. Ford was producing 8,000 cars per day. Toyota was producing 2,500 per year. Toyota's engineers did not come back and copy the assembly line. They identified its core limitation (rigidity; retooling for a new model was expensive and slow) and built the Toyota Production System around flexibility instead: just-in-time inventory, pull-based production, and continuous improvement. The result became the foundation of lean manufacturing and helped Toyota overtake General Motors as the world's largest automaker.

Good benchmarking works this way. You study the leader not to copy them, but to understand what they solved and find a better answer.

Walmart benchmarked supply chain metrics across industries

Walmart compared its logistics performance against not only retail competitors but the best supply chain operators in any industry. They benchmarked order-to-delivery cycle times, stockout rates, and distribution center throughput against companies like FedEx and Amazon. The gaps they found drove cross-docking, vendor-managed inventory, and satellite-linked replenishment systems. The result: Walmart's supply chain became its competitive moat, not just a cost center.

Functional benchmarking at work. The breakthroughs came from looking outside retail.

An APQC member retailer saved $2 million on invoice processing

Using APQC's benchmarking database, a retailer discovered its peers spent 40% less on invoice processing through automation. They moved to e-invoicing, cut processing time from 10 days to 3 days, and saved $2 million annually. Similarly, Ford benchmarked Mazda's accounts payable department in the 1980s and discovered that Mazda ran the entire function with five people while Ford employed over 500. Ford did not try to cut headcount by 20%. They completely redesigned the process and reduced headcount by 75%.

When you benchmark and discover a 40% gap, incremental improvement is the wrong response. The gap is telling you the process needs to be rebuilt.

ResearchAndMetric: 73% higher ROI from structured evaluation

A 2025 study by ResearchAndMetric found that companies using data-driven assessment (a form of systematic benchmarking) achieved 73% higher ROI than companies relying on intuition. Companies without structured evaluation were 3.2 times more likely to fail on major initiatives. The finding is not surprising: measuring against external standards forces honest assessment in a way that internal-only reviews do not.

Counter-argument

The hidden danger of benchmarking (and when to ignore it)

Most articles about benchmarking read like sales brochures. This section is the counterweight.

The biggest risk of relentless benchmarking is strategic convergence. When every company in an industry monitors the same competitors and copies the same moves, the market commoditizes. Strategy consultant John Olivant puts it bluntly: benchmarking creates a "sea of sameness" and gives you a "false sense of progress" because you feel like you are improving, but you are running in circles.

He is right. If your entire strategy is "close the gap with Competitor X," the best possible outcome is parity. You become a slightly delayed copy of them. Michael Porter made this argument decades ago: sustainable advantage comes from being different, not from being better at the same things.

Blue Ocean Strategy (Kim and Mauborgne) goes further: if you are only benchmarking existing players, you are competing in a "red ocean" of known market space. Real profitability comes from creating markets, not from fighting over existing ones.

So where does that leave benchmarking? It is a diagnostic tool, not a strategy. It tells you where you stand and what is possible. It cannot tell you where to go. The best practitioners use it to match table stakes (the minimum viable performance to stay in the game) while investing separately in the things that make them genuinely different.

There is a second risk: analysis paralysis. Benchmarking tools make it easy to generate thousands of data points. If those data points do not translate to resource allocation changes within 30 days, the exercise was corporate theater.

Monitoring

Should you benchmark once or monitor continuously?

Both. One-time benchmarking works for strategic questions: "Where do we stand relative to the market?" That answer is valid for a quarter or two. Tactical signals (pricing changes, new product launches, hiring spikes, reputation shifts) move weekly and need continuous tracking.

A practical cadence: run a deep benchmark twice a year. Between those deep dives, maintain automated monitoring on the 5-10 signals that matter most (pricing pages, job boards, review sentiment, keyword rankings). When something significant triggers, pull it into the next strategic benchmark. For the continuous-signal layer specifically, see brand reputation monitoring: tools and pricing.

Enterprise teams with CI platforms like Crayon or Klue handle the continuous monitoring side. Companies without that infrastructure (or without someone to interpret the alerts) benefit more from periodic done-for-you intelligence reports that deliver analysis and recommendations together. Continuous intelligence monitoring bridges the gap: automated tracking with human interpretation layered on top.

Approach | Best for | Cadence
Deep benchmark report | Strategic planning, board presentations, annual reviews | Quarterly or semi-annually
Continuous monitoring | Pricing shifts, hiring signals, product launches, reputation changes | Weekly or real-time
Ad hoc analysis | Responding to competitive moves, preparing for deals, due diligence | As needed
Tools

What tools do you need for competitor benchmarking?

More than you think. No single tool covers all six benchmarking dimensions: financial, operational, strategic, customer, digital, and human capital. The minimum viable stack combines a search tool for public filings (SEC EDGAR), a web-traffic estimator, a review aggregator, a hiring signal tracker, and a pricing-page monitor. For most mid-market companies this runs under $300 a month.

Category | Tools | Approximate cost
SEO & traffic | SEMrush, Ahrefs, SimilarWeb | $130-$500/mo per tool
CI platforms | Crayon, Klue, Kompyte | $12,500-$47,000/yr
Tech stack | BuiltWith, Wappalyzer | $295-$995/mo
Social listening | Brandwatch, Sprout Social | $800-$6,000/mo
Financial data | SEC EDGAR (free), Crunchbase, PitchBook | Free to $24,000/yr
Review & sentiment | G2, Trustpilot, Reddit analysis | Free (manual) to $1,000/mo (automated)

Add it up for a midsized company covering all six dimensions: $2,000 to $6,000 per month in tooling, plus the labor to run them. That is before interpretation. Kompyte (now owned by Semrush) claims AI reduces competitor analysis to about one hour per week, but that assumes someone is already configuring the platform, filtering noise, and knowing what to look for.

Or skip the tool stack entirely and get a finished benchmarking report delivered. That is how our competitive intelligence reports work. We run the tools, cross-reference the data, and hand you a report with analysis and recommendations. One deliverable instead of six dashboards.

Framework

What are the 4 P's of competitor analysis?

Product, Price, Place, and Promotion. Borrowed from the marketing mix framework, adapted for competitive comparison. It is a fast way to structure an initial competitive scan before going deeper into quantitative benchmarking.

Product

What do they sell? How does the feature set compare? What is their release velocity? Where are the gaps in their offering that your customers complain about?

Price

What do they charge? How are tiers structured? Where are the hidden costs (setup fees, per-seat pricing, overages)? Are they pricing to penetrate or to skim?

Place

Distribution strategy: direct enterprise sales, self-serve PLG, channel partners, marketplace listings. How a company sells tells you as much as what they sell.

Promotion

Where do they market? Which keywords do they rank for? Look at ad spend, content calendars, and social following. The real question: which channels are actually producing pipeline for them?

The 4 P's give you structure for qualitative comparison. Benchmarking gives you the numbers underneath. Use them together: the 4 P's frame the question, benchmarking answers it.

Pitfalls

What mistakes waste the most time in competitor benchmarking?

A handful of recurring mistakes waste most benchmarking time: comparing companies at radically different stages (a Series B startup against a public competitor produces useless gaps), using stale data that no longer reflects current performance, treating a one-time benchmark as if it were an ongoing program, and picking metrics that sound important but don't drive decisions. Each one burns analyst hours with zero actionable output. Six patterns show up often enough to call out by name.

Benchmarking too many competitors

Three to five is the right number. Ten competitors across 30 metrics creates a spreadsheet nobody reads. Focus and depth beat breadth.

Measuring vanity metrics

Social media followers without engagement data is meaningless. Website traffic without conversion context is noise. Always ask: does this metric connect to revenue?

One-time benchmarking treated as evergreen truth

A benchmark from January is stale by July. Digital metrics shift weekly. Build a cadence, not a snapshot.

Single-source data

Every tool has blind spots. SEMrush estimates drop in accuracy for sites under 50,000 monthly visits. Glassdoor ratings are skewed by companies that incentivize reviews. Cross-reference.

Benchmarking the wrong metrics because they're easy to get

An example from security benchmarking: if your phishing test uses obvious simulations, a 2% failure rate means nothing next to a competitor running targeted spear-phishing simulations at 10%. The metric you can easily measure is not always the one that matters.

No connection between insight and action

If the benchmark report does not lead to a specific resource allocation change within 30 days, you wasted the effort. Every data point must answer: "So what? What do we do differently?"

Common Questions

Competitor Benchmarking FAQ

What are the 5 types of benchmarking?
The five types are: (1) competitive benchmarking, comparing metrics against direct rivals; (2) strategic benchmarking, evaluating business models and long-term positioning; (3) process benchmarking, analyzing operational workflows; (4) performance benchmarking, measuring output KPIs like revenue and market share; and (5) internal benchmarking, comparing across your own business units. Robert Camp formalized this classification at Xerox in 1989, and Bain & Company still ranks benchmarking among the top three most-used management tools worldwide.
How often should you benchmark competitors?
It depends on the metric. Digital signals (pricing pages, ad campaigns, job postings) shift weekly, so those need continuous or monthly tracking. Financial and strategic benchmarks only change quarterly. Full competitive positioning reviews should happen twice a year at minimum. Data older than six months is generally stale for tactical decisions. If your industry moves fast (SaaS, ecommerce), lean toward monthly scans with quarterly deep dives.
What is the difference between benchmarking and competitive analysis?
Competitive analysis asks "who are our competitors and what are they doing?" It is qualitative, broad, and strategic. Benchmarking asks "how do we compare on specific numbers?" It is quantitative, narrow, and specific. You need both. Analysis tells you that a competitor is growing quickly. Benchmarking tells you their revenue grew 34% while yours grew 12%, and their customer acquisition cost is 40% lower than yours. One gives you context; the other gives you targets. For a structured approach to the qualitative side, see our competitive analysis template. For the costs of outsourcing the whole thing, see our BI consulting cost guide.
How do you benchmark private companies?
Private companies do not file 10-K reports with the SEC, so you use proxy data. LinkedIn employee counts and hiring velocity reveal growth. H-1B visa filings (public through the Department of Labor) expose salary bands and hiring direction. Job postings reveal strategic priorities. BuiltWith shows their technology stack and budget. Reddit and Glassdoor provide unfiltered sentiment. OSHA records can estimate facility headcount from "total hours worked" data. No single source is complete, but layered together they paint a reliable picture.
What are competitor benchmarks?
Competitor benchmarks are specific, measurable performance data points you track across your competitive set. Examples include a SaaS company tracking net revenue retention (industry median: 102%), a manufacturer measuring Overall Equipment Effectiveness (industry average: 66.8%), or an ecommerce brand comparing conversion rates (global average: 2.5-3%). The benchmark becomes your target or baseline for improvement.
What are the 4 P's of competitor analysis?
Product, Price, Place, and Promotion. Borrowed from the classic marketing mix, applied to competitive comparison. Product: what are they selling and how does it compare? Price: what do they charge and how is it structured? Place: how do they distribute (direct sales, channel partners, self-serve)? Promotion: where and how do they market? The 4 P's give you a fast structural comparison before you go deeper into quantitative benchmarking.

How we researched this guide

This guide cross-references 50 primary sources including APQC benchmarking research, Bain & Company management tools survey, SaaS Capital retention benchmarks (2025), Benchmarkit B2B SaaS reports, Godlan manufacturing OEE data, Dynamic Yield ecommerce conversion studies, and practitioner discussions on Reddit and LinkedIn. Industry benchmarks reflect 2025-2026 verified data. The five-step framework reflects how our team approaches benchmarking engagements.

Related Comparisons

Evaluating benchmarking platforms?

Head-to-head breakdowns of the CI platforms this guide references — pricing, source coverage, and when a per-report model wins instead.

Done-for-you benchmarking

Want competitor benchmarking done for you?

We benchmark your company against 3-5 competitors across multi-source dimensions. Finished report with analysis, gap scoring, and specific recommendations. No tooling. No internal headcount.