The Prototype Paradox: Why Looking at Startups is the Wrong Way to Judge Feasibility
Why the Most Innovative AI Products Come from Ignoring What Others Have Already Built
TL;DR:
Existing startups are a lagging indicator of what's possible, not a leading one
The most innovative AI products came from people who ignored conventional wisdom
Rapid AI advancement means yesterday's impossibilities are today's standard features
Prototyping reveals hidden constraints and opportunities invisible to passive observers
Small experiments yield outsized insights about what's truly possible in AI development
The Innovation Fallacy
As AI product managers, we often fall into a common trap when evaluating new opportunities: searching for proof that something is possible before committing resources to it.
"Has anyone built this before?" "Are there startups working on this problem?" "What does the competitive landscape look like?"
These questions seem rational. After all, why reinvent the wheel or chase impossible dreams? But this approach fundamentally misunderstands how innovation happens—especially in the rapidly evolving AI landscape.
The hard truth: If you're only building what others have proven possible, you're always following, never leading.
Why Existence Proofs Are Overrated
ChatGPT wasn't built because OpenAI had seen other successful chatbots with similar capabilities. DALL-E wasn't created because other image generators had proven the market. The teams behind these products operated from a different mindset entirely.
They asked: "What if this could work?"
This shift—from requiring proof to exploring possibility—is what separates incremental improvement from breakthrough innovation.
Key insight: The most significant AI products of the past three years came from teams that temporarily suspended disbelief about technical limitations.
The Self-Fulfilling Prophecy of Impossibility
Here's what typically happens when we approach problems with a validation-first mindset:
We assume something is impossible without evidence
We don't attempt it because "it's impossible"
No one builds it because "it's impossible"
The lack of existence becomes proof of impossibility
Rinse and repeat
This circular logic keeps us trapped in a narrow band of possibility, blind to opportunities hiding just beyond our assumptions.
Practical takeaway: Your competitors are making the same assumptions about impossibility. The advantage goes to whoever tests those assumptions first.
Case Study: How Prototyping Reveals Hidden Possibilities
GPT-3's launch in 2020 provides a perfect example of prototyping revealing unexpected possibilities. When OpenAI first built GPT-3, most experts believed large language models couldn't perform complex reasoning or follow nuanced instructions without extensive fine-tuning.
The conventional wisdom was that these models were useful primarily for text completion and basic generation tasks. Competitors were focusing on narrow, domain-specific applications rather than general-purpose assistants.
But when OpenAI released the API and let developers experiment, something unexpected happened. The "few-shot learning" approach—showing the model a handful of examples of the desired output—was the central finding of the GPT-3 paper itself, and once real users got access, they pushed it much further: prompt engineering techniques emerged that unlocked capabilities no one had anticipated, coaxing the model into tasks nobody had explicitly trained it to do.
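The few-shot pattern is simple to sketch: prepend worked examples to the prompt so the model infers the task from them. Here's a minimal illustration in Python—the sentiment-classification task and the example pairs are invented for illustration, not taken from the GPT-3 paper:

```python
def build_few_shot_prompt(examples, query):
    """Assemble a few-shot prompt: worked examples, then the new input."""
    lines = []
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    # The final entry leaves the label blank for the model to complete.
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

# Hypothetical labeled examples that demonstrate the task
examples = [
    ("The battery lasts all day.", "positive"),
    ("It broke within a week.", "negative"),
]
prompt = build_few_shot_prompt(examples, "Setup took thirty seconds.")
print(prompt)
```

The resulting string is what you'd send as the model input; the model continues the pattern and fills in the missing label.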
This insight came not from market analysis but from actually putting the prototype in users' hands.
Actionable tips:
Build the smallest experiment that could challenge your core assumption
Put real prototypes in front of real users before making feasibility judgments
Track which "impossible" features users keep requesting despite your explanations
The Rapid Evolution Effect
The AI landscape evolves so quickly that "impossible" has an increasingly short shelf life:
2021: "AI can't write code that actually compiles"
2022: "AI can't create realistic images from text"
2024: "AI can't reason about complex problems"
2025: "AI can't..."
You can see where this is going. What's deemed impossible today becomes a standard feature within months.
Strategic implication: In AI product development, the "impossible today, standard tomorrow" cycle is measured in quarters, not years. Your product roadmap should account for this acceleration.
The Prototype-First Framework for AI PMs
Instead of looking outward for validation, try this approach:
Define the outcome: What would solve the user's problem completely?
Suspend disbelief: Temporarily assume technical limitations don't exist
Prototype rapidly: Build the smallest thing that could test core assumptions
Observe reality: Let actual constraints reveal themselves through experimentation
Iterate based on evidence: Use what you learn to refine your understanding of what's possible
This approach doesn't ignore reality—it discovers it through active experimentation rather than passive observation.
PM tools for feasibility testing:
Low-fidelity simulations (the "Wizard of Oz" technique)
Feature stubbing with manual processes behind the scenes
Isolated capability tests focused on specific technical challenges
User interviews with prototype demonstrations
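Feature stubbing and the Wizard of Oz technique share one trick: the user sees a working feature while a hidden human supplies the answers. A minimal sketch of that setup, with hypothetical names and a contract-summary use case chosen purely for illustration:

```python
import queue

class WizardOfOzFeature:
    """Stub an 'AI' feature: requests are secretly routed to a human operator.

    Users experience the feature as real; the team learns what the automated
    version must actually do before committing to build it.
    """

    def __init__(self):
        self.pending = queue.Queue()  # requests awaiting a human answer
        self.log = []                 # every interaction, for later analysis

    def request(self, user_input):
        """Called by the product UI; returns a placeholder immediately."""
        self.pending.put(user_input)
        return "Working on it..."

    def operator_answer(self, answer):
        """Called from the hidden operator console to resolve a request."""
        user_input = self.pending.get()
        self.log.append((user_input, answer))
        return answer

feature = WizardOfOzFeature()
feature.request("Summarize this contract")
response = feature.operator_answer("Key terms: 12-month term, net-30 payment.")
```

The `log` is the real payoff: a corpus of genuine requests and acceptable answers that tells you exactly what the eventual model needs to handle.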
Ethical Considerations
The prototype-first mindset comes with responsibility. When exploring what's possible:
Be transparent with users about experimental features
Consider potential harms if your "impossible" feature becomes possible
Develop ethical guardrails alongside technical capabilities
Document your learning to help future teams navigate similar challenges
The goal isn't to push boundaries recklessly, but to discover where the real boundaries are through responsible experimentation.
From Impossible to Inevitable: The Implementation Path
Once your prototyping reveals something is possible (even partially), how do you move forward?
Map the constraint space: What specific limitations did you discover?
Identify acceptable compromises: Which constraints can you work around vs. which require fundamental solutions?
Sequence your approach: Build the highest-value, most feasible components first
Create feedback loops: Design systems that improve as they gather more data
Document learning: Capture insights about what worked and what didn't
Example implementation path: If your AI-powered content moderation system works for 70% of cases but struggles with the rest, launch with human review for edge cases while continuously improving your models with the new data.
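The routing logic behind that launch strategy can be sketched in a few lines: auto-resolve cases where the model is confident, and queue the rest for human review. The function name, confidence scores, and threshold below are illustrative assumptions, not a specific system's API:

```python
def route_moderation(item, score, threshold=0.9):
    """Auto-resolve high-confidence moderation calls; queue the rest for humans.

    `score` is the model's confidence in its decision; `threshold` is a
    tunable launch parameter you tighten or relax as the model improves.
    """
    if score >= threshold:
        return ("auto", item)
    return ("human_review", item)

# Hypothetical scored items from a moderation model
decisions = [
    route_moderation(text, score)
    for text, score in [("spam link", 0.97), ("sarcastic joke", 0.55)]
]
```

Everything the human reviewers decide becomes labeled training data, which is the feedback loop that gradually shrinks the 30% of cases the model can't handle.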
Final Thoughts: The Advantage of Trying First
The best signal for whether a problem can be solved isn't looking at existing startups—it's trying to solve it yourself.
Every unsolved problem lives in one of three states:
Genuinely impossible (rare in software, especially AI)
Possible but not yet attempted effectively
Possible but abandoned because someone gave up too soon
Most "impossible" problems fall into categories 2 and 3. The only way to know which category yours falls into is to attempt a solution.
Remember this: No meaningful innovation comes from waiting for someone else to prove something is possible. It comes from being willing to test boundaries yourself.
This Week's Challenge
Identify one "impossible" feature that would dramatically improve your product if it existed. Now, design the smallest possible experiment that could test whether it's truly impossible.
Have any followup questions on this? Hit reply and I'll answer them. Or share your own experiences with prototyping the "impossible"—I'm always looking for great examples to feature in future newsletters.
Until next week,
Miqdad Jaffer
Product Lead, OpenAI
p.s. Ready to build the "impossible"? For readers of this newsletter, I’m offering $500 off the #1 rated AI course on Maven (the biggest discount I offer). Use code ED4 or [this link] to sign up.