Level up discovery with these AI tools
Okay, so I think that we have a few different tentacles here.
A few different paths to explore.
A few different tentacles to grab at things out there in the market.
Like a couple of Ursulas.
If it counts for anything, I got that Little Mermaid reference as a fellow father.
Hey everybody, welcome to Prompt and Circumstance.
My name is David and today we are going to be doing part two of Vibe Discovery.
On today's episode, we're going to talk about a pivot that we made, and then how we used AI to accelerate that process of discovery.
And then we're going to take you through an example where we show you how you can hone your hypotheses and the experiments to run in just a few minutes with AI tools.
I like that we are pivoting early, because if you think about other successful companies, they pivoted too, right?
I mean, Slack took what, two, three years to pivot?
Yeah, we beat them by an order of magnitude or so.
Yeah.
That's right.
We're definitely the next unicorn over here.
Yep.
What I'm thinking, David, is that we open up a document and just start noting down our hypotheses.
And let's try to come up with a decent list, say eight to ten hypotheses to test.
They don't have to be good.
They don't have to be well formulated.
They just need to be jot notes off the top of your head.
And we're going to use AI to turn those into some decent hypotheses.
Since we're talking about product discovery, let's tell you about one of our favorite tools to use in this process, Querio. Querio is an AI agent which sits on top of your data stack and allows you to ask natural language questions to pull out data insights.
It's the perfect tool if you're looking to understand market segments and user data, as
well as trends within your product.
Click on the link in our show notes to try it today and let them know that David and Ilan
sent you for two months free.
That is a $1,200 value for our listeners.
All right.
So what we're going to do is create this document and just put together our thoughts on
what the problem is.
Some "how might we"s, if you follow design thinking principles, and put together some hypotheses for us to then go out and test.
And so this has all just been hand-typed out.
All right.
So at the top right, for those who use Google Pro, there is this Gemini option here.
And what that lets you do is use AI right directly in your document.
So what I'm gonna do is select these problem statements and say something like: split this into two problem statements from the perspective of a principal product manager, be succinct.
Okay, so.
As we had discussed, there's sort of two distinct problems here, right?
There's one about uh analyzing the data and the other is about presenting insights.
So here's what it generated.
It's not bad.
So let's go ahead and click the insert.
Boom.
All right, so what we got from Gemini was users struggle to quickly derive valuable
insights from raw data extracts without dedicated BI tools.
And then the other is users lack effective ways to visualize and present findings from
their data analysis in common extract formats like Google Sheets.
I would say that this is a good summary of the two user types that we talked about and the
problems that they're facing.
Do we want to get a little bit more specific on what we mean by users?
Um, I think no, because we're not sure yet.
I think that's part of our hypothesis generation: which users are the ones who have struggles.
I think that's one of the hypotheses that we need to validate in our discovery.
So we're going to generate some hypotheses here.
And these are top of head, not supposed to be great.
And we're going to use AI to help us refine these into hypotheses that can be experimented on.
David, what do you got?
How about: users are dissatisfied with the current data analysis solutions.
All right, I have a hypothesis that small business owners, especially in e-commerce, struggle to manage their data analysis given the tools that they have.
Okay.
Alright, so managers of teams who need to present to leadership are unhappy that they are currently unable to make something nice and professional.
So I'm deliberately being a little bit clunky with the words to see what the AI is going to do with this.
On the other side.
All right, so I said individual contributors who need to make their cases to management
are unhappy with how they are unable to make compelling data-driven arguments through
visualization.
Great.
And so I'd like to now get to the willingness-to-pay hypothesis.
Alright, so the first one would be, the pain experienced by the personas above is
significant enough that they are willing to pay for a solution.
And I'd like to add a little bit to that as well.
So the second one is the personas above have the means to pay for a solution.
The thinking is that you might want to pay for a solution, but you maybe don't have the money to do so.
Makes sense.
All right.
I think that we're in a pretty good place here, David.
All right, so we're in Claude.
We're giving Claude a prompt.
We're telling it that it is a number of product influencers and that it's going to review the problem statement and hypotheses from a product team. And it's going to help critique these hypotheses and suggest better formulations that we can prove or disprove using experimentation.
And I'm just gonna slam all of the information that we came up with inside, and let's see what it comes up with.
That is quite the chimera, by the way, that you've created.
It's the mashing up of all these product influencers.
You know, with all of those people, we should be able to get something decent, right?
Imagine if we had them all in a room critiquing our hypotheses.
All right, so it's come up with an improved hypothesis framework.
It told us, by the way, we had lack of specificity and measurability.
We had undefined user segments, and we had missing outcome metrics, all three of which are
very true, and what I was expecting to hear out of this.
First is the user segment hypotheses. The first one is: e-commerce business owners with 10 to 50 employees who currently spend four-plus hours weekly managing sales data in Google Sheets will engage with an automated insight generation tool at least three times per week.
The second one is marketing managers at SaaS companies who present monthly performance
reports to executives will adopt a visualization tool that reduces their report
preparation time by 50%.
Great.
Yeah.
And that's the user segmentation hypotheses.
And then there's the value proposition hypotheses. The first one being: when presented with automatically generated insights that highlight their top three revenue drivers, e-commerce users will rate the tool's value as eight-plus out of 10 within their first week of use.
The second one would be: marketing managers who can create executive-ready visualizations in under 15 minutes, versus their current two-plus hours, will convert to paid users within their trial period.
So this is assuming that we're going to do a sort of freemium approach to it.
And then we have the willingness-to-pay hypotheses.
E-commerce owners currently spending $200-plus a month on analytics tools will pay $99 a month for a solution that provides actionable insights 3x faster than their current workflow.
And 50% of users who save five-plus hours per week on data analysis tasks will upgrade from a free trial to a paid plan within 30 days.
So I've got to say, these are better than what we came up with, but they also have some baked-in assumptions.
It gave us some well-formulated hypotheses for if you had a product that was live, had a lot of users, and that you were able to run experiments on at scale.
What I did was use this prompt chain as a sounding board. I basically questioned first whether or not these really made sense for validating a product idea, period, with no basis.
And then it gave me some more hypotheses that still had things like "50% of users interviewed said this."
And this is where there are some limitations to AI, and you really need to use your knowledge as a product manager.
What happens if three people tell me this is a really big problem for them?
Does that mean that my idea is not valid?
So I questioned it yet again.
And where I got at the end was six hypotheses that I think are pretty good and that we can work from.
Sounds good.
Now, you had mentioned a term that I'm not sure all of our audience knows, which is prompt chain.
And by that, do you mean the thread of the conversation?
I do mean the thread of the conversation.
Yeah.
In a previous episode, when we talked about building agents, one of the things I mentioned is that there are a number of terms that get thrown around in the world of AI and agentic AI.
And sometimes you can assume that they're more complicated than they really are.
But prompt chain really just means that you're creating a thread of prompts where the
output of the previous prompt becomes the input or the context for the next prompt.
Yep.
That makes sense.
Let me tell you where we got to.
The improved hypothesis framework has, under problem validation, that in 20 customer interviews, at least five people will unprompted mention frustration or pain with data analysis.
The second one, yeah, I know, I love the second one: at least three people will say that they've spent money or significant time trying to solve this problem.
The third one: at least two people will ask "when can I get this?" or "how much will it cost?" during interviews.
And we can probably quibble with the numbers, but I think those are good directional
signals that this is a real problem.
Right?
When you talk to users, they express real frustration or real pain with this problem.
And they may have already tried to solve it themselves, spending significant time or with some costs associated.
Yep, that's a really good point and an important principle for product management: if you're trying to detect whether a problem is worth solving, looking for evidence that your target customer has tried solving it in a meaningful way is a very good signal.
That's right. And then the next set of hypotheses are around the market reality check.
And these ones are, one, we can identify at least three existing paid solutions that
people actually use for similar problems.
At least one person we interview is currently paying for partial solutions to this problem.
And we can find evidence, for example, job postings or forum posts, et cetera, that
companies are hiring people specifically to handle this problem.
Yeah.
Again, it's great to see these hypotheses expressed this way.
It makes me think about The Mom Test, if anybody's ever read that book by Rob Fitzpatrick.
It really emphasizes that when you're doing discovery, there's this tendency where if you're talking about how cool your solution is, how great it is, people will just be nice and say, yeah, that's a good idea, sounds good.
But that's not a signal whatsoever.
The real signal is whether they're willing to put something at stake to use your solution.
Definitely.
So given this improved hypothesis framework from our friend Claude, how do you feel about where we ended up?
Are you happy with this set of six hypotheses for us to test and validate?
I like where we landed.
You know, it's a significant improvement from where we were before.
You're absolutely right that it had assumed this was some product that might have already seen some product-market fit.
This is very early, and this framework is a lot more appropriate for early discovery.
Like you said, some of the numbers I think we can discuss, but the hypotheses themselves and how they're phrased, I think that's really good.
That's something that we can work with.
That's something that we can work with.
Yeah.
And I think that's the takeaway here: this process got us from really poorly formulated hypotheses that we jammed into a Google Doc to six definitely testable hypotheses in maybe 10 to 20 minutes.
If you're a busy person and you have a hundred things on your mind and you need to move on
to the next thing, I think this really shows how you can use these tools as a sounding
board.
Right.
And this would be the first AI discovery tip: use an LLM of your choice as a sounding board to get you to a place where you have real hypotheses that you can test and validate.
Yeah, that makes a lot of sense.
Taste, as you mentioned in a previous episode, was essential here: understanding what good looks like and what mediocre looks like, and being able to make that distinction.
Okay, so where do we go from here?
Let's give the audience a little bit of a preview of what they can expect coming into the
next episode.
So I think the next step from here, logically, would be for us to go and execute on
testing these hypotheses.
Definitely.
So that probably looks like a couple of different things, given the hypotheses we have
here.
One is for us to talk to users, ideally record those conversations so that we can take the
output and put it into a project.
And then the second step is doing some market reality check.
Like it says here, looking at the fundamentals: what tools are out there, what are people talking about?
And I think there we can really leverage AI tools.
All right, exciting.
We're making progress step by step.
step by step.
All right, with that, thank you all for listening to this week's episode of Prompt and Circumstance.
Alright, don't forget to like and subscribe.
You can follow us on the socials at @pandcpodcast and we'll catch you next week.
Bye everyone.