
How To Ship AI Features Without Cannibalizing Your Core Product
Becoming AI-First Without Losing the Plot: Crunchbase’s CPO on Question-Behind-the-Question Search, Scout, and Product Pragmatism
Turning a beloved data product into an AI-first experience isn’t about sprinkling in a chat box—it’s about helping users answer the question behind the question. In this episode, we unpack how to evolve core workflows with AI without breaking user trust or muscle memory, what “post-model” product thinking looks like, and why the right adjacencies beat shiny features every time.
Guest Introduction
Megh Gautam, Chief Product Officer at Crunchbase, has spent the past two years reshaping how millions of users query company data. From launching natural-language search and the Scout agent to grounding answers in Crunchbase’s trusted dataset, Megh shares hard-won lessons on balancing rapid model progress with pragmatic product priorities.
Why AI-First Must Still Be User-First
Question > Query: Users don’t just want “how much have they raised?”—they want “is this company growing, healthy, and worth my time?”
Trust as a Constraint: Answers must be grounded in Crunchbase data, even if that means returning “not found” instead of a hallucinated answer.
Avoid Buzzword Compliance: Don’t ship “reasoning” or other trendy modes unless they solve your users’ problems.
From Filters to Conversation: Reinventing Search
Natural-Language On-Ramp: AI search removed the “blank page” problem and drove a spike in query volume by letting users “talk to the dataset.”
Scout: Turns plain English into lists and searches, with no fiddly filter-building required.
Context Retention: Keep follow-ups inside Crunchbase to prevent disintermediation and preserve intent.
What Surprised Us in Prototyping
Muscle-Memory Mismatch: Long-time users needed guidance as interaction patterns shifted from filters to chat.
Edge-Case Reality: Ambiguous names, non-venture funding (debt, crowdfunding), and non-canonical paths forced clearer definitions and off-ramps.
Sync vs. Async: Large, long-running searches sometimes work better in the background—UX must adapt.
Measuring Value in a Non-Deterministic World
Personas Drive Paths: VCs (sourcing depth), tech pros (competitive research), and journalists (verification) each need different defaults.
Signals That Matter: Thumbs up/down, follow-up tone, and completion behavior—plus rigorous qual + quant review.
Staged Rollouts: Start with early adopters, then broaden while keeping feedback ratios healthy.
Tech Choices Without the Hype
Model Upgrades with Restraint: Evaluate new releases against concrete use cases—not headlines.
Grounding Over Generic RAG: Early experiments reinforced that answers must be anchored to Crunchbase records (see the sketch after this list).
Right Adjacencies Only: Partnerships (e.g., combining world knowledge with Crunchbase’s private-market depth) should serve clear user jobs.
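To make the grounding idea concrete, here is a minimal, hypothetical sketch of the pattern described above: retrieve matching records first, answer only from them, and fall back to “not found” rather than letting the model guess. The names (answer_question, retriever, llm) are illustrative assumptions, not Crunchbase’s actual APIs.

```python
# Hypothetical sketch of a grounded-answer flow. The retriever and llm
# callables are stand-ins for whatever search index and model you use;
# nothing here reflects Crunchbase's real implementation.

def answer_question(question: str, retriever, llm) -> str:
    """Answer only from retrieved records; otherwise say the data isn't there."""
    records = retriever(question)  # look up matching company records
    if not records:
        # Prefer an explicit "not found" over a plausible-sounding guess.
        return "Not found in the dataset."

    context = "\n".join(record["summary"] for record in records)
    prompt = (
        "Answer using ONLY the records below. If they do not contain the "
        "answer, reply exactly: Not found in the dataset.\n\n"
        f"Records:\n{context}\n\nQuestion: {question}"
    )
    return llm(prompt)  # the model call is constrained to the retrieved context
```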
Post-Model Product Thinking
After the API Call: The real work is UX, feedback loops, evals, and operational choices (long-running agents vs. synchronous flows).
Platform Creep Caution: Point solutions shouldn’t reflexively morph into “platforms” without user-validated scope.
No “AI Pillar”: AI is embedded across pillars—core UX, integrations, API—so tradeoffs map to customer value, not org charts.
Practical Advice for Builders
Start Where It Hurts: Make something faster or more explainable (often in customer support or customer success) before chasing “magic buttons.”
Pick the Right Layer: Don’t slap AI on a platform-level problem; find quick wins, then earn the right to deepen.
Harness Team Energy: Let curiosity fuel small bets, but graduate features only when they consistently help real users.
This episode is a field guide for product leaders evolving established products into AI-first systems—without losing sight of trust, workflow fit, and the everyday questions users actually need answered.
Interested in being a guest on Future Proof? Reach out to forrest.herlick@useparagon.com