Data Playbook

Content-Market Fit Scoring: The Open-Source Framework for Measuring What Actually Works

Feb 2026 · 16 min read · By Lukas Timm

Most B2B content teams measure vanity metrics — impressions, likes, follower count. They build dashboards full of numbers that go up and to the right while pipeline stays flat. The CMO reports that "brand awareness is growing." The sales team reports that inbound is not. Everyone nods politely and nothing changes.

We built a scoring system that predicts which posts will generate pipeline. Not which posts will get the most likes. Not which posts will reach the most people. Which posts will move a buyer closer to a decision.

We call it Content-Market Fit scoring — CMF. The name is deliberate. Just as Product-Market Fit is the moment when your product meets genuine demand, Content-Market Fit is the moment when your content meets genuine buyer intent. A post that scores high on CMF does not just get engagement. It generates profile visits from decision-makers. It triggers DMs from prospects. It creates the kind of recognition that makes a cold outreach feel warm.

This article is the complete, open-source framework. The five scoring dimensions. The step-by-step process for scoring your own posts. A copy-paste LLM prompt for automated scoring. And the pattern data from scoring 1,000+ posts across 15+ B2B tech companies over 18 months. Everything you need to stop guessing and start measuring.

[Figure: CMF scoring framework overview. Radar chart of the five dimensions (Hook Strength, Value Density, Audience Precision, Conversion Signal, Shareability), each scored 1-10 for a total of 50.]

Why Traditional Content Metrics Fail

Before we get into the framework, it is worth understanding why the metrics most teams rely on are fundamentally broken for B2B. This is not a nuance problem. It is a category error. The metrics that matter for media companies and consumer brands are actively misleading for B2B companies selling into enterprise.

Impressions Don't Measure Intent

An impression means your post appeared on someone's screen. It does not mean they read it. It does not mean they cared. It does not mean they are a potential buyer. LinkedIn reports impressions for anyone who scrolls past your post, including the recruiter in Manila, the college student browsing during class, and the bot accounts that inflate your numbers by 15-20%.

We have seen posts with 50,000 impressions generate zero pipeline. We have seen posts with 2,000 impressions generate three qualified enterprise conversations. The difference is not reach. It is who you reached and what they did next.

Likes Don't Correlate with Pipeline

Likes are social currency. People like posts to signal agreement, to maintain relationships, to be seen engaging with certain topics. A VP of Engineering who likes your post about team culture is not signaling buying intent. They are being polite on the internet.

In our data across 15+ B2B tech companies, the correlation between like count and pipeline generation is 0.12. That is barely above random noise. The posts that generate the most likes are often inspirational stories, hot takes on industry drama, and memes. These are not the posts that generate enterprise conversations.

Follower Growth Is a Lagging Indicator

By the time follower growth shows up in your analytics, the content strategy that drove it is already weeks or months old. Follower count tells you what worked in the past. It tells you nothing about whether your current content is connecting with buyers. And a large following of the wrong people is worse than a small following of the right ones — it dilutes your feed relevance and teaches the algorithm to show your content to people who will never buy.

The Real Question

The metric that matters is simple: Did this post move a buyer closer to a decision?

That is not a single number. It is a composite of signals — who engaged, how they engaged, what they did after engaging, and whether the engagement pattern matches the behavior of people who eventually become customers. You need a framework that weights engagement quality, not quantity. That is what CMF scoring does.

The CMF Scoring Framework (5 Dimensions)

CMF scoring evaluates every post across five dimensions, each scored from 1 to 10. The total score is out of 50. Each dimension captures a different aspect of content effectiveness, and together they give you a composite picture that no single metric can provide.
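The arithmetic is simple, but encoding it keeps scoring consistent across reviewers. A minimal sketch in Python (the class and field names are ours, not part of the framework):

```python
from dataclasses import dataclass, fields

@dataclass
class CMFScore:
    """One post's Content-Market Fit score: five dimensions, each 1-10."""
    hook_strength: int
    value_density: int
    audience_precision: int
    conversion_signal: int
    shareability: int

    def __post_init__(self):
        # Reject out-of-range scores up front so bad data never enters the log.
        for f in fields(self):
            v = getattr(self, f.name)
            if not 1 <= v <= 10:
                raise ValueError(f"{f.name} must be 1-10, got {v}")

    @property
    def total(self) -> int:
        """Composite CMF score out of 50."""
        return sum(getattr(self, f.name) for f in fields(self))

# Example: the robotics-founder post scored later in this article.
score = CMFScore(9, 7, 8, 7, 7)
print(score.total)  # 38
```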

[Figure: Walkthrough of CMF scoring applied to a real LinkedIn post, showing how each of the five dimensions is evaluated and the resulting scores.]

1. Hook Strength

Score: 1-10

What it measures: The stopping power of your first two lines. Did the hook create enough curiosity, tension, or relevance to make someone pause their scroll and read the rest?

Scoring criteria:

- 1-3: Generic opening, no curiosity gap, no specificity
- 4-6: Relevant topic but weak execution
- 7-8: Strong curiosity gap with a specific data point or claim
- 9-10: Exceptional: combines specificity, emotion, and an irresistible gap

Proxy metrics: Click-through rate on "see more," ratio of impressions to full reads, first-line engagement in comments ("This hook got me").

2. Value Density

Score: 1-10

What it measures: How much actionable, specific value the post delivers per paragraph. Not length — density. A 100-word post with two concrete frameworks scores higher than a 500-word post with vague advice.

Scoring criteria:

- 1-3: Platitudes and generic advice
- 4-6: Some useful information, mostly common knowledge
- 7-8: Multiple actionable insights, frameworks, or specific data points
- 9-10: Reference-grade: the reader screenshots or bookmarks it

Proxy metrics: Save rate, screenshot mentions in DMs, "bookmarked this" comments, time-on-post (if measurable).

3. Audience Precision

Score: 1-10

What it measures: How tightly the post targets your ideal buyer persona. A post that resonates with everyone resonates with no one. Audience precision is about whether the right people — the people who can sign purchase orders — feel that this post was written for them.

Scoring criteria:

- 1-3: Could be any industry, any audience
- 4-6: Industry-relevant but not buyer-specific
- 7-8: Buyer-specific language; decision-makers clearly engaging
- 9-10: Surgically precise; comments from people with buying authority

Proxy metrics: Comment quality (are VPs and directors engaging, or only peers and juniors?), profile visit quality, DM quality.

4. Conversion Signal

Score: 1-10

What it measures: What happened after someone engaged with the post. Did they visit your profile? Did they DM you? Did they click a link? Conversion signal is the bridge between content engagement and business outcomes.

Scoring criteria:

- 1-3: Likes only, no profile visits or DMs
- 4-6: Some profile visits, no further action
- 7-8: DMs, connection requests, or CTA clicks from target accounts
- 9-10: Direct pipeline creation, meetings booked

Proxy metrics: Profile visits within 24 hours, DMs received, connection requests with personalized notes, CTA click-through rate, "saw your post" mentions in sales calls.

5. Shareability

Score: 1-10

What it measures: The likelihood that someone shares the post with their network, either through reposts, tags, or private forwards. Shareability is the organic amplification engine — it extends your reach beyond your existing audience into the networks of people who already trust you.

Scoring criteria:

- 1-3: No sharing impulse
- 4-6: Mildly shareable
- 7-8: "Tag a founder who needs this" energy
- 9-10: Screenshot-worthy, saved to Notion, forwarded in Slack

Proxy metrics: Repost count, tag count in comments, "sharing this with my team" comments, screenshot evidence in DMs.

How to Score Your Posts (Step by Step)

Scoring is not useful as a one-time exercise. It becomes powerful when you do it consistently, building a dataset that reveals patterns specific to your audience, your industry, and your voice. Here is the process.

Step 1: Pull Your Last 20 Posts

Open LinkedIn Analytics. Export or manually collect the data for your last 20 posts. For each post, you need: the full text, impression count, like count, comment count, repost count, and any available data on profile visits or link clicks. Twenty posts gives you enough data to identify patterns without making the exercise overwhelming.

Step 2: Score Each Post Across 5 Dimensions

For each post, go through the five dimensions and assign a score from 1 to 10. Use the criteria above. Be honest — this only works if you resist the urge to inflate your own scores. If you are unsure between two scores, pick the lower one. Better to be surprised by improvement than to mask a problem.

Tip: score the hook first while reading only the first two lines. Then read the full post for value density, audience precision, and shareability. Score conversion signal last, using your analytics data.

Step 3: Calculate Total CMF Score

Add the five dimension scores for a total out of 50. Record it alongside the post date, topic, and format (text-only, carousel, image post, video).
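A simple way to keep that record is to append each scored post to a CSV. Everything below (the file name, the column names, the helper) is a suggested shape, not part of the framework:

```python
import csv
from pathlib import Path

# Hypothetical log file and row layout; adapt to your own tracker.
LOG = Path("cmf_scores.csv")
FIELDS = ["date", "topic", "format", "hook", "value", "audience",
          "conversion", "share", "total"]

def log_score(date, topic, fmt, dims):
    """Append one scored post; dims is the five 1-10 scores in order."""
    row = dict(zip(FIELDS[:3], (date, topic, fmt)))
    row.update(zip(FIELDS[3:8], dims))
    row["total"] = sum(dims)  # composite out of 50
    write_header = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow(row)

log_score("2026-02-01", "warehouse automation", "text", (9, 7, 8, 7, 7))
```

After a few weeks the CSV doubles as the input for the monthly pattern analysis described later.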

Step 4: Benchmark Against Tiers

[Figure: CMF benchmark tiers. Below 25: Needs Work; 25-35: Solid; 35-45: High Performer; 45+: Viral Potential, with the share of posts that typically fall in each tier.]

Below 25 / 50 — Needs Work

The post did not connect. Either the hook was weak, the value was thin, or the targeting was off. These posts typically get low engagement and zero downstream action. Roughly 30% of all B2B LinkedIn posts fall here. The fix is usually not editing — it is rewriting with a different angle.

25-35 / 50 — Solid

The post did its job. Decent engagement, some profile visits, but nothing exceptional. These posts maintain your presence and keep you visible to your network. About 40% of posts land here. They are the baseline of a functioning content engine.

35-45 / 50 — High Performer

The post created real business impact. Multiple profile visits from target accounts, DMs, saves, and shares. These posts often become the "I saw your post about..." reference in future sales conversations. About 20% of posts from well-run content operations hit this tier.

45+ / 50 — Viral Potential

Everything aligned. The hook was exceptional, the value was reference-grade, the audience felt personally addressed, and the post triggered shares and DMs. Fewer than 10% of posts reach this level. When they do, they often become the posts your prospects cite months later in sales calls. Study these closely — they contain the pattern DNA of your most effective content.
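If you track scores in code, the tier lookup is one small function. Note that the published bands overlap at 25, 35, and 45, so this sketch treats them as half-open intervals, which is our assumption rather than something the framework specifies:

```python
def cmf_tier(total: int) -> str:
    """Map a composite CMF score to its benchmark tier.

    Bands are interpreted as half-open: [5,25) Needs Work, [25,35) Solid,
    [35,45) High Performer, [45,50] Viral Potential.
    """
    if not 5 <= total <= 50:
        raise ValueError("five 1-10 dimensions imply a 5-50 total")
    if total < 25:
        return "Needs Work"
    if total < 35:
        return "Solid"
    if total < 45:
        return "High Performer"
    return "Viral Potential"

print(cmf_tier(38))  # High Performer
```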

Step 5: A Scoring Example

Let us walk through a real scoring example. Consider this post from a robotics company founder:

"78% of warehouse automation pilots fail. Not because the technology doesn't work. Because procurement takes 14 months and the champion who approved the pilot has moved to a different role by the time deployment starts. We lost our first two enterprise deals this way. Here's what we changed..."
| Dimension | Score | Reasoning |
| --- | --- | --- |
| Hook Strength | 9 | Specific stat (78%), unexpected cause (procurement, not tech), and personal vulnerability ("We lost our first two deals") in the opening lines. Strong curiosity gap. |
| Value Density | 7 | The insight about champion turnover during procurement cycles is genuinely useful and non-obvious. "Here's what we changed" promises actionable follow-through. |
| Audience Precision | 8 | Directly addresses founders and operators in warehouse automation. The procurement/champion problem is deeply familiar to this audience but rarely discussed publicly. |
| Conversion Signal | 7 | Generated 12 profile visits from target accounts, 3 DMs, and 1 meeting request within 48 hours. The vulnerability made the founder approachable. |
| Shareability | 7 | 5 reposts. Multiple "tag your BD team" comments. The 78% stat became a reference point in several follow-up conversations. |

Total CMF Score: 38 / 50 (High Performer)

This post sits in the High Performer tier. It generated real business outcomes because it combined a strong hook with genuine insider knowledge targeted at the right audience. The vulnerability (admitting lost deals) actually increased conversion signal because it made the founder human and approachable, not just a thought leader broadcasting opinions.

The LLM Prompt for Automated CMF Scoring

Scoring 20 posts manually takes about 90 minutes. Scoring them with an LLM takes about 5 minutes. The following prompt is the exact template we use across our client portfolio. Copy it, paste it into Claude or GPT, and batch-score your entire post history.

You are a B2B content analyst specializing in LinkedIn content performance
for deep tech and enterprise technology companies.

I will give you a LinkedIn post and its performance metrics. Score the
post across 5 dimensions of Content-Market Fit (CMF), each from 1-10.

THE 5 CMF DIMENSIONS:

1. HOOK STRENGTH (1-10)
   How effectively do the first 2 lines stop the scroll?
   - 1-3: Generic, no curiosity gap, no specificity
   - 4-6: Relevant topic but weak execution
   - 7-8: Strong curiosity gap with specific data or claim
   - 9-10: Exceptional. Combines specificity + emotion + irresistible gap

2. VALUE DENSITY (1-10)
   How much actionable insight per paragraph?
   - 1-3: Platitudes, generic advice
   - 4-6: Some useful info, mostly common knowledge
   - 7-8: Multiple actionable insights, frameworks, or specific data
   - 9-10: Reference-grade. Reader screenshots or bookmarks it

3. AUDIENCE PRECISION (1-10)
   How targeted to ideal buyer persona?
   - 1-3: Could be any industry, any audience
   - 4-6: Industry-relevant but not buyer-specific
   - 7-8: Decision-makers clearly engaging, buyer-specific language
   - 9-10: Surgically precise, comments from people with buying authority

4. CONVERSION SIGNAL (1-10)
   What downstream actions did it trigger?
   - 1-3: Likes only, no profile visits or DMs
   - 4-6: Some profile visits, no further action
   - 7-8: DMs, connection requests, CTA clicks from target accounts
   - 9-10: Direct pipeline creation, meetings booked

5. SHAREABILITY (1-10)
   How likely is someone to share or repost?
   - 1-3: No sharing impulse
   - 4-6: Mildly shareable
   - 7-8: "Tag a founder who needs this" energy
   - 9-10: Screenshot-worthy, saved to Notion, forwarded in Slack

INSTRUCTIONS:
- Score each dimension with a number AND a one-sentence justification
- Calculate total CMF score (sum of 5 dimensions, out of 50)
- Classify into tier: Below 25 (Needs Work) | 25-35 (Solid) |
  35-45 (High Performer) | 45+ (Viral Potential)
- Identify the #1 improvement that would raise the score most
- Be rigorous. Do not inflate scores. Most posts score 25-35.

FORMAT:
Hook Strength: [X]/10 - [justification]
Value Density: [X]/10 - [justification]
Audience Precision: [X]/10 - [justification]
Conversion Signal: [X]/10 - [justification]
Shareability: [X]/10 - [justification]

TOTAL: [XX]/50 ([tier])
TOP IMPROVEMENT: [one specific, actionable suggestion]

---

POST TEXT:
[Paste your full LinkedIn post here]

METRICS (if available):
- Impressions: [X]
- Likes: [X]
- Comments: [X]
- Reposts: [X]
- Profile visits (24h after): [X]
- DMs received: [X]
- Link clicks: [X]

How to Batch-Score 20 Posts in 5 Minutes

The most efficient workflow for batch scoring:

  1. Export your data. Pull your last 20 posts from LinkedIn Analytics. For each post, copy the full text and the engagement metrics into a single document.
  2. Format as batch input. Stack all 20 posts in a single message, separated by "---POST [number]---" dividers. Include the metrics inline after each post text.
  3. Submit to the LLM. Paste the prompt above at the top, followed by all 20 posts. The model will score each one sequentially with consistent criteria.
  4. Transfer to spreadsheet. Copy the output into a spreadsheet with columns for each dimension score, total CMF, tier, and the top improvement recommendation.
  5. Sort by total CMF. Your highest-scoring posts reveal the patterns you should double down on. Your lowest-scoring posts reveal the patterns you should stop.

This gives you a scored dataset in five minutes that would take a human analyst most of a workday. The LLM scores are not perfect — they tend to be 5-10% more generous than a trained human scorer — but the relative rankings are remarkably consistent. The post that scores highest with the LLM is almost always the post that actually performed best in business outcomes.
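If you script the batch step, two helpers cover most of it: one to stack posts under the scoring prompt with the divider format from step 2, and one to pull the TOTAL lines back out of the model's response. A sketch (the actual API call is omitted; use whichever client you already have):

```python
import re

def build_batch_input(prompt: str, posts: list[dict]) -> str:
    """Stack posts under the scoring prompt with ---POST [n]--- dividers."""
    parts = [prompt]
    for n, post in enumerate(posts, 1):
        metrics = "\n".join(
            f"- {k}: {v}" for k, v in post.get("metrics", {}).items()
        )
        parts.append(f"---POST {n}---\n{post['text']}\n\nMETRICS:\n{metrics}")
    return "\n\n".join(parts)

def parse_totals(llm_output: str) -> list[tuple[int, str]]:
    """Extract (total, tier) pairs from lines like 'TOTAL: 38/50 (High Performer)'."""
    pattern = re.compile(r"TOTAL:\s*(\d+)\s*/\s*50\s*\(([^)]+)\)")
    return [(int(m.group(1)), m.group(2)) for m in pattern.finditer(llm_output)]
```

Paste `build_batch_input(...)` into the model, then feed its response to `parse_totals` and drop the pairs into your spreadsheet, sorted by total.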

Get Your Content Scored

We score and analyze content for 15+ B2B tech companies every week. Want us to run your last 20 posts through our CMF framework and show you exactly where the pipeline opportunities are?

Request Your Content Audit

What CMF Data Tells You (Pattern Analysis)

Numbers are useful. Patterns are powerful. After scoring 1,000+ posts across 15+ B2B tech companies over 18 months, we have enough data to identify the structural patterns that separate high-performing content from noise. Here is what the data shows.

[Figure: Correlation of each CMF dimension with reach, saves and shares, and pipeline generation; Audience Precision shows the strongest pipeline correlation at 0.81.]

Hook Strength Is the #1 Predictor of Reach

Correlation with impression count: 0.73

This is the most intuitive finding, but the magnitude surprises people. A post with a hook scoring 9 gets, on average, 3.4x the impressions of a post with a hook scoring 5 — even when the content quality is similar. The LinkedIn algorithm amplifies content that stops the scroll, and the hook is the primary scroll-stopping mechanism.

The implication is stark: spending 50% of your writing time on the first two lines is not excessive. It is mathematically justified. Most founders spend 90% of their time on the body and 10% on the hook. Invert that ratio and your reach will increase proportionally.

Value Density Is the #1 Predictor of Saves and Shares

Correlation with save + repost rate: 0.68

People do not save or share content that makes them feel good. They save content that makes them feel smarter. The posts with the highest save rates in our dataset are not the inspiring ones or the hot takes. They are the ones that contain a framework, a specific data point, or a step-by-step process that the reader wants to reference later.

Value density below 6 almost never generates saves. Above 7, save rates increase sharply. The threshold effect suggests that there is a minimum level of specificity and actionability required before a reader's brain shifts from "interesting" to "I need to keep this."

Audience Precision Is the #1 Predictor of Pipeline

Correlation with downstream pipeline activity: 0.81

This is the most important finding in the entire dataset. Audience precision — how tightly your post targets your ideal buyer persona — has the strongest correlation with actual pipeline generation by a significant margin. It is more predictive than hook strength, more predictive than value density, and more predictive than overall CMF score.

Posts scoring 8 or above on audience precision generate 5x more DMs from qualified prospects than posts scoring 5-7. The effect is not linear — it is exponential. A post that feels like it was written specifically for a VP of Manufacturing at an automotive OEM will generate more pipeline than a post about "manufacturing challenges" that could apply to any industry.

The mechanism is recognition. When a buyer reads a post and thinks "this person understands my exact situation," trust is established instantly. That trust lowers the barrier to outreach from "I need a compelling reason to DM a stranger" to "I should connect with someone who clearly gets it."
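To run the same analysis on your own scored dataset, plain Pearson's r between a dimension column and an outcome column is all that the figures above require. A self-contained sketch; the arrays are made-up illustrations, not our dataset:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length numeric series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative only: audience precision scores vs. pipeline events per post.
precision = [3, 5, 6, 8, 9]
pipeline = [0, 0, 1, 3, 4]
print(round(pearson_r(precision, pipeline), 2))
```

With 20+ scored posts in a spreadsheet, running this per dimension reproduces the correlation table for your own audience.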

Content Type Patterns

Across our 1,000+ scored posts, clear patterns emerge by content type:

| Content Type | Avg CMF | Top Dimension | Weakest Dimension |
| --- | --- | --- | --- |
| Industry truth posts | 38 | Hook Strength (8.2) | Value Density (6.8) |
| Framework posts | 34 | Value Density (8.0) | Hook Strength (6.1) |
| Data/proof posts | 35 | Shareability (7.5) | Audience Precision (6.4) |
| Personal story posts | 31 | Hook Strength (7.4) | Conversion Signal (5.2) |
| Company news posts | 22 | Audience Precision (5.8) | Shareability (3.4) |

The data reveals a consistent pattern: industry truth posts have the highest average CMF because they naturally combine strong hooks (uncomfortable truths stop the scroll) with audience precision (industry-specific language attracts the right people). Framework posts score highest on value density but often underperform on hooks — the "here is my framework" opening is not as compelling as an industry truth or data-driven claim.

Company news posts score lowest across the board. "We just raised a Series B" or "Excited to announce our partnership with..." are the content equivalent of empty calories. They generate polite congratulations from your existing network and zero engagement from potential buyers. If you must post company news, lead with the buyer-relevant implication ("This means X for the 200 companies currently struggling with Y") rather than the announcement itself.
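Reproducing this breakdown from your own score log is a one-liner per content type. A sketch, assuming you store (content_type, total) pairs:

```python
from collections import defaultdict

def avg_cmf_by_type(posts):
    """Average total CMF per content type; posts is an iterable of (type, total)."""
    buckets = defaultdict(list)
    for content_type, total in posts:
        buckets[content_type].append(total)
    return {t: sum(scores) / len(scores) for t, scores in buckets.items()}

print(avg_cmf_by_type([("framework", 34), ("framework", 30), ("news", 22)]))
```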

The Sweet Spot: Audience Precision 8+

If there is a single insight to take from this entire analysis, it is this: posts scoring 8 or above on audience precision generate 5x more DMs than posts scoring below 8. This is the highest-leverage dimension in the framework. You can have a mediocre hook, average value density, and modest shareability — but if your post speaks directly to the right buyer with surgical precision, it will generate pipeline.

The practical takeaway: before you publish any post, ask yourself one question. "If my ideal buyer read this, would they feel it was written specifically for their situation?" If the answer is not a clear yes, rewrite until it is.

Building Your Content Engine Around CMF

Scoring posts is not the end goal. Building a system that compounds learning over time is. CMF scoring becomes a competitive advantage when it feeds into a continuous improvement cycle. Here is how to operationalize it.

[Figure: CMF content engine flywheel. Four phases feeding into each other: score weekly, extract patterns monthly, pivot quarterly, compound results.]

Weekly: Score Last Week's Posts (30 Minutes)

Every week, score the posts you published in the previous 7 days. This is a 30-minute ritual that compounds over time. After 12 weeks, you have a dataset of 36-60 scored posts that reveals clear patterns about what works for your audience.

The weekly cadence matters. If you wait until the end of the month, you lose the ability to course-correct in real time. A weekly review lets you spot a declining trend in audience precision and fix it next week, not next quarter.

Monthly: Extract Patterns (60 Minutes)

Once a month, step back from individual post scores and look at the aggregate data. Which topics and formats consistently score highest? Which dimension is your persistent weak spot? Which hooks earned the 8+ scores?

Document these patterns. They become your content strategy — not a strategy based on best practices from blogs, but a strategy based on evidence from your actual audience.

Quarterly: Strategy Pivot (2 Hours)

Every quarter, use your 12 weeks of CMF data to make strategic decisions: double down on the topics and formats that consistently score 35+, retire the patterns that stay below 25, and rebalance your content mix toward your strongest dimensions.

The Flywheel Effect

Score, learn, optimize, score higher. That is the flywheel. Each cycle makes your content more effective because you are learning from your own data, not from generic advice. After 6 months of consistent CMF scoring, most companies see their average CMF shift from 25-28 (the baseline for unoptimized content) to 32-36 (the range where content reliably generates pipeline).

That shift sounds modest. It is not. In our data, the difference between a 28-average and a 34-average CMF score translates to roughly 3x more DMs from qualified prospects and 2x more "saw your post" mentions in sales conversations. Content goes from being a brand awareness exercise to being a pipeline generation engine.

What to Do Next

CMF scoring is the measurement layer. To build a complete content engine, you need the rest of the stack on top of it.

The gap between companies that grow through content and companies that post into the void is not creativity. It is not budget. It is not frequency. It is measurement. The companies that win are the ones that know, with data, which posts move buyers and which posts waste keystrokes.

CMF scoring gives you that data. The framework is open source. The prompt is copy-paste. The only thing left is to start scoring.


Ready to build a data-driven content engine?

We score, analyze, and optimize content for 15+ B2B tech companies. CMF scoring is just the measurement layer — we build the full engine that turns content into pipeline.

Request Your Campaign