
Make Something People Want: the compounding cost of building what no one wants

2025-09-04

But I know what people want... right?

If you’ve ever watched a sprint review where everyone nods at a polished feature that quietly ships into a void, you’ve felt it: we’re busy, but are we building what people actually want?

“No market need” remains the number one reason startups fail. It’s not usually incompetence—it’s a compounding misallocation problem. Small weekly bets on the wrong things add up until burn, morale, and turnover converge.

References:

  • CB Insights, “Top Reasons Startups Fail” (#1: No market need) — https://www.cbinsights.com/research/startup-failure-reasons-top/
  • HBR, “Why Most Product Launches Fail” — https://hbr.org/2011/04/why-most-product-launches-fail
  • YC’s mantra — “Make something people want.” — https://www.ycombinator.com/library/4g-make-something-people-want

The silent killer: no market need is rarely visible in week 2, but unmistakable by month 12

Early on, “usage is light because we just launched.” Six months later, “we’re pre-traction but learning.” A year in, runway is tense and the roadmap is full of features that don’t move any core activation or revenue metric.

The common pattern:

  • Attention anchored on inside views (roadmaps, opinions, internal stakeholders).
  • External signal quality is low (few customer conversations, cherry-picked threads, anecdotal DMs).
  • Time-to-signal is long (wait for GA launches to learn).
  • When signals do arrive, they’re not comparable or trusted.

A simple model: the compounding cost of building the wrong thing

Let’s quantify.

Assume a lean product team:

  • 5 builders (3 engineers, 1 designer, 1 PM)
  • Fully loaded monthly cost per person ≈ $18k (salary + benefits + overhead)
  • Monthly burn on product team ≈ $90k

If 50% of shipped work has low or no pull from users (no measurable lift in activation, retention, or revenue):

  • Direct waste: $45k/month on features that don’t move needles
  • Opportunity cost: you’re also not shipping the right thing
  • Compounding: even “small” misallocations create downstream maintenance, support, and complexity tax

After 9 months, that’s ≈ $405k in direct misallocation—before you count the morale hit of seeing usage dashboards stay flat.

The real killer is not one bad bet; it’s the portfolio of unvalidated bets compounding maintenance and context-switching taxes while burning runway.
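To make the arithmetic concrete, here’s a minimal sketch of the model above (a back-of-the-envelope, not a financial model). The head count, loaded cost, and 50% low-signal share are the assumptions stated earlier; the `complexity_tax` knob is a hypothetical stand-in for the maintenance and support drag each low-signal month leaves behind.

```python
# Back-of-the-envelope sketch of the misallocation model above.
# Assumptions: 5 builders at ~$18k fully loaded per month, 50% of shipped
# work has no measurable user pull; complexity_tax is a hypothetical knob
# for the maintenance/support drag each low-signal month adds.

def misallocation_cost(months: int,
                       team_size: int = 5,
                       cost_per_person: float = 18_000,
                       low_signal_share: float = 0.5,
                       complexity_tax: float = 0.02) -> float:
    """Cumulative spend on work with no measurable user pull, plus drag."""
    monthly_burn = team_size * cost_per_person      # $90k/month
    total, drag = 0.0, 0.0
    for _ in range(months):
        wasted = monthly_burn * low_signal_share    # $45k/month direct waste
        total += wasted + drag
        drag += wasted * complexity_tax             # each bad month adds upkeep
    return total

print(f"9 months, direct waste only: ${9 * 45_000:,}")                 # $405,000
print(f"9 months, with compounding drag: ${misallocation_cost(9):,.0f}")
```

Tweak the inputs to your own team; the point is the slope, not the exact dollar figure.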

Opportunity cost of misaligned bets

Some notes:

  • If you plot cumulative burn on low-signal work vs. validated work, the slope diverges by month 3–4.
  • Complexity grows non-linearly; each low-impact feature still adds surface area to maintain, test, and re-think later.

The human side: morale, velocity, and turnover

  • Morale: Shipping into silence is demoralizing. Builders want to see impact.
  • Velocity: Teams ship slower as the codebase accretes low-use features.
  • Turnover: PMs and engineers leave when they feel like a feature factory with no feedback loop. Replacements take 2–3 months to hire and another 1–2 to onboard—more compounding delay.

Signals you’re drifting:

  • “We need more marketing” becomes the reflex answer.
  • PRDs cite “competitive parity” more than user quotes or links.
  • Usability tests are positive, production telemetry is flat.
  • Backlog items lack source links; “customer demand” = internal hearsay.


A weekly operating cadence to de-risk “no one wants it”

  1. Source reality (ongoing)
    • Scan public places where your ICP complains (Reddit, HN, GitHub Issues, relevant forums).
    • Save the links. Save the quotes. Don’t summarize away the pain.
  2. Cluster and rank (weekly)
    • Cluster similar complaints to separate “one-off” gripes from repeatable pain.
    • Rank by multi-factor signal (frequency, severity/intent, engagement, author diversity); a minimal scoring sketch follows this list.
    • Make the weighting transparent so you can defend bets.
  3. Decide small, learn fast (weekly)
    • Convert top-ranked pains into small, testable bets (brief, metric, decision date).
    • Define a minimal leading indicator (e.g., sign-up conversion on a “notify me” page, usage of a behind-a-flag endpoint, a willingness-to-pay survey with thresholds).
  4. Show your work (weekly review)
    • For each bet: include source links, the factor breakdown behind its rank, and what you learned this week (even if that’s “not enough signal—keep watching”).
  5. Ruthlessly prune
    • Archive or deprecate features that don’t reach a minimum usage threshold in a set window.
    • Celebrate kills that save future time.
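Here’s a minimal sketch of step 2’s ranking, assuming a simple weighted sum over the four factors. The weights, normalization, and example numbers are illustrative assumptions for a spreadsheet-level version, not FounderRadar’s actual Opportunity Score.

```python
# Minimal sketch of step 2: a transparent, multi-factor opportunity score.
# Factor names mirror the list above; the weights and max-normalization are
# illustrative assumptions, not FounderRadar's actual Opportunity Score.
from dataclasses import dataclass

@dataclass
class PainCluster:
    name: str
    frequency: int         # mentions in the window
    severity: float        # 0-1, how urgent/high-intent the language is
    engagement: int        # upvotes + comments across sources
    author_diversity: int  # unique authors

WEIGHTS = {"frequency": 0.35, "severity": 0.30,
           "engagement": 0.15, "author_diversity": 0.20}

def score(cluster: PainCluster, pool: list[PainCluster]) -> float:
    """Weighted sum of factors, each normalized against the pool's max."""
    def norm(attr: str) -> float:
        top = max(getattr(c, attr) for c in pool)
        return getattr(cluster, attr) / top if top else 0.0
    return sum(weight * norm(attr) for attr, weight in WEIGHTS.items())

clusters = [
    PainCluster("env var sync pain", frequency=19, severity=0.9,
                engagement=140, author_diversity=17),
    PainCluster("config profiles", frequency=6, severity=0.4,
                engagement=25, author_diversity=5),
]
for c in sorted(clusters, key=lambda c: score(c, clusters), reverse=True):
    print(f"{c.name}: {score(c, clusters):.2f}")   # explainable, defendable rank
```

Because the weights are explicit, anyone can challenge them; that argument is cheaper than a quarter spent building the wrong thing.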

What we built in FounderRadar to make this easy

We built FounderRadar for this exact cadence.

  • PainGraph (cluster): Groups semantically similar complaints across Reddit, HN, and GitHub so you see the problem, not just the post.
  • Explainable Opportunity Score: Transparent weights across Frequency, Severity, Engagement, and Author Diversity so you can defend prioritization.
  • Defensible links: Every item cites sources. You can click through to the original threads and issues.
  • Trends and alerts: Weekly ranked opportunities, plus alerts when an item spikes so you can pounce quickly.

Result: fewer “we shipped into a void” moments, more “we shipped into demand.”

A short vignette: the feature that almost shipped

A devtools startup planned a month on “config profiles.” Before kickoff, they pulled external signals:

  • 19 mentions in 5 days about env var sync pain (17 unique authors) with high-intent phrasing (“nightmare,” “error-prone across teams”).
  • “Config profiles” had scattered mentions but low engagement.

They paused the profiles project, shipped a small env var sync proof in 6 days, and got 80 signups on a waitlist page linked in threads. That changed the next 8 weeks of roadmap—and probably their survival odds.

Metrics that matter more than “features shipped”

  • Activation lift for a specific audience segment (week over week).
  • Time-to-first-value from new entry points (by cohort).
  • % of roadmap items with external source links attached.
  • Bet hit-rate: % of bets that clear a pre-set leading indicator threshold.
  • Decision latency: time from external signal → team decision → shipped experiment.

If these don’t trend, it’s a sign your discovery loop is too weak.
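Two of these, bet hit-rate and decision latency, are easy to compute from a simple bet log. The sketch below assumes a hypothetical record shape with `signal`, `decided`, `shipped`, and `hit` fields, chosen only to illustrate the calculation.

```python
# Sketch of two metrics above: bet hit-rate and decision latency.
# The bet records and field names are hypothetical, for illustration only.
from datetime import date
from statistics import median

bets = [
    {"name": "env var sync proof",    "signal": date(2025, 8, 1),
     "decided": date(2025, 8, 4),     "shipped": date(2025, 8, 10), "hit": True},
    {"name": "config profiles spike", "signal": date(2025, 8, 2),
     "decided": date(2025, 8, 12),    "shipped": date(2025, 8, 25), "hit": False},
]

# Bet hit-rate: share of bets that cleared their pre-set leading indicator.
hit_rate = sum(b["hit"] for b in bets) / len(bets)

# Decision latency: days from external signal to decision, and to shipped experiment.
to_decision = [(b["decided"] - b["signal"]).days for b in bets]
to_shipped = [(b["shipped"] - b["signal"]).days for b in bets]

print(f"Bet hit-rate: {hit_rate:.0%}")
print(f"Median signal to decision: {median(to_decision)} days")
print(f"Median signal to shipped experiment: {median(to_shipped)} days")
```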

How to start next week (2 hours)

  • 30 min: Pick 2–4 ICP venues (subreddits, HN tags, GitHub repos) and start collecting links.
  • 30 min: Cluster the top 50 items and rank them with a simple weighted score (ratios are fine; transparency beats precision); a naive clustering sketch follows this list.
  • 30 min: Write two “bets” you can ship in 3–5 days to address the top item. Define a leading indicator (and what you’ll decide).
  • 30 min: Kill something. Archive a low-use feature or postpone a “nice-to-have” to pay down complexity tax.
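As a starting point for the clustering step, here’s a naive sketch that groups complaints whose text shares a couple of keywords. The stopword list and overlap threshold are arbitrary assumptions; embeddings (or a tool that does this for you) work better, but this is enough to separate one-offs from repeatable pain in an afternoon.

```python
# Naive sketch of "cluster the top 50 items": group saved complaints whose
# text shares at least a couple of keywords. The stopword list and overlap
# threshold are arbitrary assumptions; embeddings (or a tool) do this better.
STOPWORDS = {"the", "a", "an", "is", "are", "to", "for", "of", "and", "in", "with", "my", "our"}

def keywords(text: str) -> set[str]:
    return {w.lower().strip(".,!?\"'") for w in text.split()} - STOPWORDS

def cluster(items: list[str], min_overlap: int = 2) -> list[list[str]]:
    groups: list[tuple[set[str], list[str]]] = []
    for item in items:
        kws = keywords(item)
        for group_kws, members in groups:
            if len(kws & group_kws) >= min_overlap:
                members.append(item)
                group_kws |= kws          # grow the cluster's keyword set
                break
        else:
            groups.append((kws, [item]))
    return [members for _, members in groups]

saved = [
    "env var sync across teams is a nightmare",
    "env var sync is error-prone between staging and prod",
    "wish there were config profiles per environment",
]
for group in cluster(saved):
    print(len(group), group)   # repeatable pains show up as larger groups
```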

Where FounderRadar fits in your loop

  • Replace ad-hoc browsing with a weekly ranked opportunity report.
  • Add the factor breakdown to your PRD template.
  • Paste source links directly into tickets so engineers see the real request, not only a summary.
  • Use alerts to catch emergent pains before your competitors do.


TL;DR

  • The #1 startup killer is “no market need”—and you rarely see it until burn, morale, and turnover stack up.
  • Misallocation compounds. Even 50% “low signal” work can waste ~$400k over 9 months for a small product team, while slowing velocity.
  • Run a weekly loop that sources external complaints, clusters them, ranks with transparent factors, and turns the top item into a small testable bet.
  • FounderRadar makes this loop cheap and trustworthy: clusters (PainGraph), explainable scores, defensible links, and alerts—so you build what people actually ask for.