How to Validate a Product Idea Before You Build Anything
Most product ideas sound better in your head than they look in the market. Here’s a practical way to test demand using repeated pain points, buyer intent, and signal quality before you commit weeks of build time.

A lot of weak product ideas survive because they feel plausible.
You notice a complaint on Reddit, see a few excited replies on X, and your brain fills in the rest: market demand, urgency, willingness to pay, maybe even distribution. But scattered attention is not the same thing as validated need.
For indie hackers, SaaS builders, and lean product teams, the real challenge is not generating ideas. It is filtering them. The market produces an endless stream of “someone should build this” moments. Most are noise. A few are real opportunities.
The difference usually comes down to whether you can find three things:
- a specific recurring pain point
- clear evidence that the pain matters enough to solve
- signs that people will actually change behavior or spend money
Start with pain, not features

Founders often validate the wrong thing.
They test whether people like a concept, a clever workflow, or a new layer of AI. What they should test first is whether a painful job keeps showing up in the wild without prompting.
That means looking for language like:
- “I keep wasting hours on…”
- “Is there a tool that can…”
- “Why is this still so manual?”
- “I’d pay for something that…”
- “We tried X and it still doesn’t solve…”
This is stronger than general interest. It points to friction inside a workflow, and workflow pain is where durable products usually begin.
A useful rule: if the conversation is mostly about novelty, the signal is weak. If the conversation is about repeated frustration, workarounds, abandoned tools, and urgency, the signal is much stronger.
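To make this scannable at scale, the phrase patterns above can be turned into a crude keyword search. This is a minimal sketch, not a vetted lexicon: the `PAIN_PATTERNS` list and the function name are illustrative, and you would extend the patterns per niche.

```python
import re

# Illustrative pain-signal patterns drawn from the phrases above; extend per niche.
PAIN_PATTERNS = [
    r"keep wasting (hours|time)",
    r"is there a tool",
    r"why is this still so manual",
    r"i'?d pay for",
    r"still doesn'?t solve",
]

def pain_signals(text: str) -> list[str]:
    """Return the pain patterns that appear in a post (case-insensitive)."""
    lowered = text.lower()
    return [p for p in PAIN_PATTERNS if re.search(p, lowered)]

posts = [
    "I keep wasting hours on weekly reports. I'd pay for something that fixes this.",
    "This new AI model looks cool!",
]
for post in posts:
    print(pain_signals(post))
```

A keyword scan like this will miss paraphrased pain and catch some false positives, so treat it as a first filter, not a verdict.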
Look for repeated complaints across contexts
One post means almost nothing.
What matters is repetition across different users, threads, and moments. If the same issue appears in multiple communities, with similar wording but different use cases, you may be looking at a genuine market gap rather than a temporary complaint.
For example, a stronger opportunity often has patterns like:
- the same pain appears in separate subreddits or X threads
- users describe current tools as bloated, expensive, or incomplete
- people have stitched together spreadsheets, Zapier, scripts, or manual steps
- the pain appears over weeks, not just after one product launch or API change
This is where many builders lose time. The raw material exists, but manually checking Reddit and X every day is slow, inconsistent, and mentally expensive. You can do it, but it becomes easy to over-index on whatever you happened to read most recently.
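One way to make "repetition across contexts" concrete is to tally, for each pain theme, how many distinct communities and distinct weeks it shows up in. The schema and sample data below are illustrative assumptions, not real observations:

```python
from collections import defaultdict

# Each observation: (pain_theme, community, iso_week) -- an illustrative schema.
observations = [
    ("manual reporting", "r/SaaS", "2024-W18"),
    ("manual reporting", "r/Entrepreneur", "2024-W19"),
    ("manual reporting", "x/buildinpublic", "2024-W21"),
    ("new AI wrapper hype", "x/ai", "2024-W21"),
]

def repetition_summary(obs):
    """Count distinct communities and weeks per pain theme."""
    communities = defaultdict(set)
    weeks = defaultdict(set)
    for theme, community, week in obs:
        communities[theme].add(community)
        weeks[theme].add(week)
    return {
        theme: {"communities": len(communities[theme]), "weeks": len(weeks[theme])}
        for theme in communities
    }

summary = repetition_summary(observations)
# A theme seen in 2+ communities across 2+ weeks is worth a closer look.
print(summary)
```

The point of the tally is to force the distinction the text describes: a theme with one community and one week is a moment, not a pattern.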
Separate emotional noise from buyer intent
Not every loud complaint is a business.
People complain online for many reasons: boredom, status signaling, venting, dunking on large products, or reacting to trends. What you want is buyer intent, not just emotional energy.
Signals worth taking seriously include:
- users actively asking for recommendations
- people comparing paid tools
- budget or ROI language
- teams describing failed attempts to fix the problem internally
- requests tied to concrete workflows, deadlines, or job outcomes
By contrast, be careful with:
- vague “this should exist” comments
- excitement around new categories without operational detail
- complaints that disappear once a small workaround is mentioned
- requests from non-buyers who will never make the purchase decision
A good validation habit is to ask: Would this person change tools, change workflow, or pay to remove this pain? If the answer is unclear, the signal may still be too soft.
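The intent-versus-noise split above can be sketched as a simple tagging heuristic. The marker lists here are assumptions for illustration; a real filter would need richer signals than substring matches:

```python
# Illustrative marker lists based on the signals described above.
INTENT_MARKERS = ["recommend", "alternative to", "budget", "roi", "we tried", "deadline"]
NOISE_MARKERS = ["should exist", "would be cool", "someone should build"]

def classify_comment(text: str) -> str:
    """Tag a comment as 'intent', 'noise', or 'unclear' (crude heuristic)."""
    lowered = text.lower()
    if any(m in lowered for m in INTENT_MARKERS):
        return "intent"
    if any(m in lowered for m in NOISE_MARKERS):
        return "noise"
    return "unclear"

print(classify_comment("Can anyone recommend an alternative to Zapier? We have budget."))
print(classify_comment("someone should build this"))
```

Note that "unclear" is a useful third bucket: per the question above, a signal you cannot classify is probably still too soft to build on.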
Rank opportunities by strength, not by how exciting they sound

Some ideas are attractive because they are legible. You can explain them in one sentence. They fit current trends. They seem easy to ship.
That does not make them strong.
A better ranking system looks something like this:
Strong bets
These have repeated pain, evidence of urgency, visible buyer language, and failed existing solutions.
Weak signals worth tracking
These are interesting but premature. People care, but demand is inconsistent, or the category is still forming.
Noise
These generate discussion but not commitment. Lots of reaction, little real intent.
This ranking matters because many builders waste months on ideas in the second or third bucket. The opportunity was not entirely fake. It was simply not mature enough yet.
If this kind of daily sifting is already part of your workflow, a tool like Miner from Ethanbase can be a practical shortcut. It is built for builders who want daily, high-signal demand research from Reddit and X, with attention to validated pain points, buyer intent, and the difference between stronger opportunities and weaker signals, without manually combing through everything themselves.
Use an evidence log before you commit to build
One of the simplest ways to improve idea quality is to keep a lightweight evidence log.
For each product direction, collect:
- exact user quotes
- source links
- frequency of similar complaints
- whether users mention current alternatives
- signs of willingness to pay
- your current confidence level
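The fields above map naturally onto a small data structure. This is a minimal sketch: the class and field names are hypothetical, and the example URL is a placeholder.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    quote: str                          # exact user quote
    source_url: str                     # link to the thread or post
    mentions_alternative: bool = False  # did they name current tools?
    willingness_to_pay: bool = False    # budget or "I'd pay" language

@dataclass
class IdeaLog:
    direction: str
    confidence: str = "low"             # your current confidence level
    evidence: list[Evidence] = field(default_factory=list)

    def frequency(self) -> int:
        """How many similar complaints have been logged so far."""
        return len(self.evidence)

log = IdeaLog(direction="automated weekly reporting")
log.evidence.append(Evidence(
    quote="I keep wasting hours on weekly status reports",
    source_url="https://example.com/thread/123",  # placeholder link
    willingness_to_pay=True,
))
print(log.frequency())
```

A spreadsheet with the same columns works just as well; the structure matters more than the tooling.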
Then review the log after a week or two.
If the idea still looks strong after repeated observation, it is probably worth deeper validation. If it weakens once the initial excitement fades, that is useful too. You just saved yourself from building on momentum instead of market truth.
This also helps teams avoid a common trap: confusing internal enthusiasm with external demand.
Validate the workflow, not just the idea
A niche can be real while your proposed product is wrong.
That is why validation should move beyond “Is this a problem?” into “How exactly does this problem show up in work?”
Try to understand:
- what triggers the pain
- how often it occurs
- who owns it
- what the current workaround costs
- what a user would need to trust a replacement
For SaaS and AI products especially, workflow detail matters more than category hype. A narrow problem with obvious urgency is often more valuable than a broad idea with vague appeal.
Track patterns over time

The best opportunities usually do not appear once. They repeat.
A frustration that keeps resurfacing over time is far more useful than a hot topic that peaks for three days and vanishes. That is why archives and historical review matter. Looking back at past signals helps you distinguish enduring workflow pain from temporary platform chatter.
For founders who want a more structured version of that process, especially if they are choosing between several product directions, a research brief can be more useful than another generic trend report. The key is whether it surfaces evidence, not just topics.
A practical decision rule
Before you build, ask whether your idea has at least:
- repeated pain from multiple real users
- concrete workflow context
- clear signs of buyer intent or willingness to switch
- evidence that current solutions are unsatisfying
- consistency over time, not just one spike of attention
If you cannot get four out of five, keep researching.
That may feel conservative, but it is usually faster than building first and discovering later that the problem was never strong enough.
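The checklist and the four-out-of-five rule can be expressed as a tiny scoring function. The criterion names are paraphrases of the list above, and the threshold is the rule as stated:

```python
# Paraphrased from the five criteria above; fill in honestly per idea.
criteria = {
    "repeated_pain_from_multiple_users": True,
    "concrete_workflow_context": True,
    "buyer_intent_or_willingness_to_switch": False,
    "current_solutions_unsatisfying": True,
    "consistent_over_time": True,
}

def ready_to_build(checks: dict[str, bool], threshold: int = 4) -> bool:
    """Apply the four-out-of-five rule to a criteria checklist."""
    return sum(checks.values()) >= threshold

print(ready_to_build(criteria))  # prints True: 4 of 5 criteria met
```

The value of writing it down this way is not the code itself but the forced honesty: each criterion gets an explicit yes or no instead of a vague overall feeling.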
A grounded next step
If your biggest bottleneck is finding reliable demand signals in the first place, especially across Reddit and X, it may be worth exploring Miner. It is a fit for indie hackers, SaaS builders, and lean teams that want evidence-backed product opportunities and repeated pain points without doing all the manual digging themselves.
The goal is not to outsource judgment. It is to start from better signals.