How to Find Real Product Demand Before You Build
Most product ideas sound better in your notes than they do in the market. Here’s a practical way to separate noisy trends from repeated pain points and explicit buyer intent before you commit to building.

Most bad product bets do not fail because the team could not build them. They fail because the original signal was weak.
A founder sees a few viral posts, a clever AI demo, or a fast-growing niche on X, and the idea starts to feel inevitable. But popularity is not the same as pain. Attention is not the same as demand. And one loud complaint is not the same as a repeatable market problem.
If you are an indie hacker, SaaS builder, or part of a lean product team, the real challenge is rarely “how do I come up with ideas?” It is “how do I stop wasting time on ideas that only look good from a distance?”
Start with pain, not concepts

A stronger product discovery process begins with user pain that appears repeatedly and specifically.
That means looking for signals like:
- People describing a workflow that keeps breaking
- Users asking for a workaround, tool, or automation
- Buyers explicitly saying they would pay for a fix
- The same complaint showing up across multiple communities or weeks
- Frustrations tied to an existing budget, team process, or business goal
This matters because specific pain creates clearer product boundaries. When you know what someone is struggling with, in what context, and how often it happens, you can define a product around an actual job to be done. Without that, you are usually just building around a theme.
“AI for recruiting” is a theme.
“Agency recruiters keep manually rewriting candidate summaries between ATS exports and client updates, and several are actively asking for automation” is a product starting point.
The three signals that matter most
When reviewing potential opportunities, three signals tend to matter more than everything else.
1. Repetition
One complaint can be random. Repeated complaints are more useful.
If the same pain point appears across different threads, different users, or over a longer time window, it becomes harder to dismiss as edge-case noise. Repetition suggests the problem is structural, not incidental.
2. Intent
Not every frustrated user is a buyer. Look for language that suggests urgency, budget, or active search behavior:
- “Does anyone know a tool for this?”
- “I would pay for…”
- “We are still doing this manually”
- “We tried X and it still does not solve Y”
- “Need a better way to handle…”
Intent is what turns interesting research into commercial potential.
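Intent phrases like these can even be flagged mechanically when you are scanning a large pile of saved posts. A minimal sketch in Python; the phrase list and the matching approach are illustrative assumptions, not an exhaustive or production-grade classifier:

```python
import re

# Illustrative buyer-intent phrases; a real list would be tuned per niche.
INTENT_PATTERNS = [
    r"does anyone know a tool",
    r"i would pay for",
    r"still doing this manually",
    r"need a better way",
]

def has_buyer_intent(post: str) -> bool:
    """Return True if the post contains explicit demand language."""
    text = post.lower()
    return any(re.search(pattern, text) for pattern in INTENT_PATTERNS)

posts = [
    "Honestly, I would pay for something that cleans this up automatically.",
    "AI is going to change everything in recruiting!",
]
print([has_buyer_intent(p) for p in posts])  # [True, False]
```

A filter like this will miss paraphrased intent, so treat it as a first pass that surfaces candidates for human reading, not a verdict.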
3. Weak-signal discipline
Some opportunities are real, but early. Others are just vague trends wearing the costume of opportunity.
You need a way to distinguish between:
- strong bets with repeated pain and clear demand language
- weak signals worth tracking, but not building around yet
That distinction protects you from premature conviction, which is one of the most expensive habits in product work.
A simple demand-validation workflow

You do not need a giant research team to do better validation. You do need a repeatable process.
Step 1: Define the audience narrowly
Do not start with “small businesses” or “creators.” Start with a narrower group and context:
- solo accountants managing client reporting
- RevOps teams cleaning CRM data
- Shopify operators handling refunds at scale
- engineering managers tracking sprint bottlenecks
Specific audiences produce more useful research because their pains are easier to compare.
Step 2: Collect raw language from public conversations
Reddit and X are useful because people often describe their problems in direct, unpolished language. But they are also noisy. You will find jokes, trend chasing, vague complaints, and low-context opinions mixed in with valuable signals.
Your job is not to gather everything. It is to identify statements that reveal:
- what the person is trying to do
- what keeps failing
- what they have already tried
- whether they are looking for a tool
- whether the problem appears repeatedly
Step 3: Group by pain point, not by topic
A common mistake is sorting research into broad topics like “marketing,” “AI,” or “productivity.”
Instead, group by pain pattern:
- manually moving data between systems
- poor reporting visibility across clients
- repetitive customer support triage
- broken handoffs between teams
- inability to trust outputs from existing tools
This makes it easier to see whether a problem is repeated and whether a solution could be focused enough to matter.
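If you want to make the grouping step repeatable, a crude keyword map can route raw complaints into pain patterns before you review them by hand. The pattern names and keywords below are made-up examples; in practice most grouping is manual judgment, and this only pre-sorts the pile:

```python
# Illustrative keyword rules mapping raw complaints to pain patterns.
PAIN_PATTERNS = {
    "manual data moves": ["copy", "export", "re-enter", "spreadsheet"],
    "reporting visibility": ["report", "dashboard", "visibility"],
    "support triage": ["ticket", "triage", "shared inbox"],
}

def classify(complaint: str) -> str:
    """Assign a complaint to the first pain pattern whose keywords match."""
    text = complaint.lower()
    for pattern, keywords in PAIN_PATTERNS.items():
        if any(keyword in text for keyword in keywords):
            return pattern
    return "unclassified"

print(classify("We export the CSV and re-enter everything in the CRM"))
# manual data moves
```

Anything landing in "unclassified" is exactly the material worth reading closely: it may be noise, or it may be a pain pattern you have not named yet.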
Step 4: Rank opportunities by evidence
A decent idea should not survive on intuition alone.
Rank each opportunity by questions like:
- How often does this pain appear?
- How specific is the complaint?
- Is there explicit buyer intent?
- Are users already paying for adjacent tools?
- Does the pain connect to time, revenue, compliance, or operational risk?
- Is the current workaround obviously inefficient?
This helps you compare opportunities on signal strength, not excitement.
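The questions above can be collapsed into a simple evidence score so that opportunities are compared on the same axes. A minimal sketch; the criteria fields and weights are assumptions to illustrate the ranking idea, not a standard rubric:

```python
from dataclasses import dataclass

@dataclass
class Opportunity:
    name: str
    mentions: int          # how often the pain appears
    specific: bool         # is the complaint concrete and contextual?
    explicit_intent: bool  # "I would pay for...", active tool search
    adjacent_spend: bool   # already paying for nearby tools
    costly: bool           # tied to time, revenue, compliance, or risk

def evidence_score(o: Opportunity) -> int:
    """Weighted evidence score; weights are illustrative, tune to your own bar."""
    score = min(o.mentions, 10)  # cap repetition so volume alone can't dominate
    score += 3 * o.specific
    score += 5 * o.explicit_intent  # explicit intent weighted heaviest
    score += 3 * o.adjacent_spend
    score += 2 * o.costly
    return score

opps = [
    Opportunity("ATS summary rewriting", mentions=8, specific=True,
                explicit_intent=True, adjacent_spend=True, costly=True),
    Opportunity("AI for recruiting", mentions=30, specific=False,
                explicit_intent=False, adjacent_spend=False, costly=False),
]
for o in sorted(opps, key=evidence_score, reverse=True):
    print(o.name, evidence_score(o))
```

Note the cap on raw mention counts: the vague trend gets more chatter, but the specific, intent-backed problem still ranks higher, which is the whole point of scoring on evidence rather than excitement.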
Step 5: Track patterns over time
Some problems flare up because of a product launch, API change, or temporary platform issue. Others persist for months. The second category is usually more valuable.
That is why longitudinal tracking matters. A pain point that keeps resurfacing is often more investable than one that arrives in a burst and disappears.
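Longitudinal tracking does not require tooling: counting how many distinct weeks a pain point surfaces in is often enough to separate persistence from a one-off spike. A sketch, with made-up labels and dates for illustration:

```python
from collections import defaultdict
from datetime import date

# (pain_point, date_seen) pairs, e.g. logged while reading Reddit/X threads.
mentions = [
    ("manual data moves", date(2024, 3, 4)),
    ("manual data moves", date(2024, 3, 18)),
    ("manual data moves", date(2024, 4, 1)),
    ("viral demo hype",   date(2024, 3, 4)),
    ("viral demo hype",   date(2024, 3, 5)),
]

# Collect the distinct (ISO year, ISO week) pairs each pain point appears in.
weeks_seen = defaultdict(set)
for pain, seen in mentions:
    weeks_seen[pain].add(seen.isocalendar()[:2])

# A pain recurring across distinct weeks is more investable than a burst.
for pain, weeks in sorted(weeks_seen.items(), key=lambda kv: -len(kv[1])):
    print(f"{pain}: active in {len(weeks)} distinct week(s)")
```

Here the persistent problem shows up in three separate weeks, while the hype burst collapses into one, even though both have multiple mentions.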
Why founders still get this wrong
Even experienced builders fall into a few predictable traps.
They confuse audience size with demand quality
A huge market with vague pain is often less attractive than a smaller niche with urgent, repeated frustration and clear willingness to pay.
They overweight novelty
Something can feel new and still be commercially weak. In fact, novelty often masks weak validation because people discuss it more than they buy it.
They trust their own interpretation too early
Founders are good at connecting dots. The downside is that they can connect dots that are not there. Raw evidence matters because it slows down self-deception.
They do not separate research from excitement
If your research process does not force you to label weak signals as weak, you will naturally promote them in your own mind.
When manual research stops being efficient

There is real value in doing your own early research. You hear market language firsthand, notice nuance, and build better product instincts. But there is also a point where manual scanning becomes a tax.
If you are searching Reddit and X every day, copying links into docs, trying to remember which complaints repeated last month, and arguing with yourself about whether a trend is real, you are probably spending too much energy on collection and not enough on judgment.
That is where curated research can help. For builders who want cleaner demand signals without digging through social noise themselves, Ethanbase’s Miner is one relevant option. It is a paid daily brief built for people choosing what to build, validating niches, and tracking repeated pain points over time. Instead of treating every conversation as equally important, it focuses on validated pain, explicit buyer intent, and the difference between stronger opportunities and weak signals worth watching.
The important point is not that you need one more feed to read. It is that your research input should reduce noise, not multiply it.
A better standard for product ideas
Before you commit weeks or months to a product, ask for more than inspiration.
Ask:
- Can I point to repeated evidence of the same pain?
- Do users describe the problem in operational, costly terms?
- Is there explicit intent to find or pay for a solution?
- Does this problem recur over time?
- Am I seeing a real workflow break, or just general interest in a category?
If the answers are thin, keep researching. If the answers are strong, you have something much more useful than a trendy idea: you have a problem with evidence behind it.
Build from signal, not hope
There will always be too many possible products to build. The advantage goes to teams that can identify stronger demand earlier and ignore weaker stories, even when those stories sound exciting.
That usually means spending less time hunting for flashy ideas and more time studying repeated pain, buyer intent, and persistence over time. The goal is not to predict the future perfectly. It is to make better bets with better evidence.
Explore one research option
If your current product research process is mostly manual and noisy, it may be worth looking at Miner by Ethanbase. It is a practical fit for indie hackers, SaaS builders, and lean teams that want daily high-signal demand research from Reddit and X before committing to what they build next.