We spend a fair amount of time on user and market research to optimize decisions around marketing experiments (more commonly called campaigns). Optimizing for product validation and optimizing campaign experiments overlap a great deal.
Philip Morgan recently wrote a neat post on validating ideas with your email list and whether you should do one-shot validation or a lot of micro-validations.
I thought I’d write up a few thoughts about validation research. This was prompted by a response from Josh Earl, another list member. Josh warned that your top email subscribers’ enthusiasm for an idea might lead you to overshoot your sales projections:
The top 1% of your email subscribers tend to love you so much, they will literally sign up for anything, do anything, and buy anything you create.
– Josh Earl
I think that’s a valid concern, and there are a lot of common traps and things to consider with validation research. A few more:
The ceiling assumption
If you’re new enough to overshoot when generalizing interest from a validation, you’re also new enough to undershoot it.
Setting too conservative an initial validation threshold can lead you to nix a product idea or campaign experiment instead of revising it.
The mistake here is making a ceiling (instead of floor) assumption. More often than not, an initial interest figure will be your floor, not your ceiling.
The ceiling assumption is why no one took Donald Trump seriously when he decided to run. Statisticians assumed his initial support numbers were the ceiling, that his base had spoken, and that was the max. But alas, that figure was actually the floor and here we are.
A real example of poor initial list interest for what ended up being a completely valid product
This happened recently – a client with info products and a 12k list added a PS to a few emails at my suggestion (a PS often gets read more carefully than the skimmed body of an email). She wrote something like, “I’m doing this beta launch thing on getting rid of <common problem adult women have>, reply if you’d like to be shortlisted” and got 12 replies. That is 0.1% interest – low by any standard. On direct follow-up, only half responded, bringing it down to 0.05%.
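For concreteness, here’s that arithmetic as a quick sketch (the figures come from the example above; everything else is just illustration):

```python
# Figures from the example above.
list_size = 12_000
replies = 12
followed_up = 6  # only half responded to the direct follow-up

raw_interest = replies / list_size
confirmed_interest = followed_up / list_size

print(f"raw interest: {raw_interest:.2%}")           # raw interest: 0.10%
print(f"after follow-up: {confirmed_interest:.2%}")  # after follow-up: 0.05%
```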
She had been doing similar work in one-on-one coaching and knew there was more to it. So she pushed onward, and two weeks later she had 500 people signed up for a webinar about it, with a 100% success rate among those who bought. She’s currently in her first re-launch.
The point is we often take an incomplete approach to validation.
When you’re flying solo, not generating initial interest feels like failing or being rejected.
It becomes hard to see past that.
“No one liked my poetry so I should stop writing poetry.”
In reality, without a well-tuned approach, the way you validate means very little. Look at all the successful Kickstarters that became failed companies. I’m reminded of Coolest Cooler, the second most-funded Kickstarter of all time, with tens of thousands of backers still waiting for their coolers five summers later.
List bias is good
When selling to your list, list bias is a good thing. What’s wrong with 1% of your list being interested as a result of fandom? Someday your list will be much bigger, and that 1% of raving fans will keep growing with it. If an idea is valid enough to pursue based on pre-launch sales, that seems good enough to me.
Relatedly, you should skew decisions in favor of what your top 1% wants. They are the ones feeding you, so they should get a disproportionate vote.
You can do this by segmenting responses from your most engaged subscribers, or by comparing results from buyers and non-buyers when you send out a survey.
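Here’s a minimal sketch of that buyer/non-buyer comparison, assuming your survey export is a CSV with one row per respondent; the file name and column names (is_buyer, interested) are hypothetical:

```python
import pandas as pd

# Hypothetical export: one row per respondent, an 'interested' yes/no
# answer from the survey, and an 'is_buyer' flag joined in from your
# purchase records.
responses = pd.read_csv("survey_responses.csv")
responses["interested"] = responses["interested"].eq("yes")

# Interest rate by segment: buyers vs. non-buyers.
by_segment = responses.groupby("is_buyer")["interested"].mean()
print(by_segment)

# If the segments diverge sharply, weight the buyers' answer more
# heavily -- they're the ones feeding you.
```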
The “that’s a great idea” to “take my money” gap
This is probably similar to the selection bias Josh Earl mentioned.
A common practice in market research is to poll x people on whether they would buy something, take those who said yes, and then ask them to make the purchase on the spot.
The idea is that what people say they’d do and what they actually do are often different. By accounting for the gap between the two, you can get a more accurate read on true purchase intent.
We don’t do this, though. It’s manipulative. If anything, it’s a turn-off to people in your target market who are already doing you a favor. And updating your projections based on this gap will likely make them less accurate.
The online equivalent is sending users to a product page and not telling them the product is unavailable until checkout. Manipulative and illegal.
Over time you get a pretty good feel for what you can generalize, so we choose to trust that people who say they would buy would do so given optimal circumstances (timing, price, need, trust).
Validation is not binary – it’s a scale
Whether you ultimately decide to execute on a product, idea, or experiment is binary, but the validation part is not. Multiple variables go into the function of estimating value to a group.
Let’s say you write a book. No one is willing to publish it. You have a big list. You decide to self-publish.
If 1% of a 100k list is willing to pay $5 for your ebook, but 2% would pay $1, what do you do?
$5k vs $2k – no brainer right?
Not necessarily. At 2,000 purchases you could potentially hit the best-seller lists on Amazon and the NYT. The visibility, and the follow-on sales that come with already being a best seller, could easily be worth the $3k difference in list sales.
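Here’s the back-of-the-napkin math, using the numbers from the example above:

```python
list_size = 100_000

# Price and expected conversion for each option (from the example above).
scenarios = {"$5 ebook": (5.0, 0.01), "$1 ebook": (1.0, 0.02)}

for name, (price, conversion) in scenarios.items():
    units = int(list_size * conversion)
    revenue = units * price
    print(f"{name}: {units:,} purchases, ${revenue:,.0f} in list sales")

# $5 ebook: 1,000 purchases, $5,000 in list sales
# $1 ebook: 2,000 purchases, $2,000 in list sales
# The $1 price gives up $3,000 in list revenue for double the purchase
# volume -- and purchase volume is what best-seller lists count.
```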
The lower price actually validates the book launch. Ramit Sethi did this when he launched “Your Move”: he put it up for $1 and watched it skyrocket to the top of the charts. If you don’t subscribe to his emails, the book is a good summary of the point of view and teachings his list subscribers have learned from him over the years.
The point is you are estimating how much value you can deliver to how many people, more than you are saying “yay/nay.”
Doing this allows your idea the flexibility it needs to get to validation.
All the idea birds, one stone
Even better, let’s say you know you want to do something. You have a couple of ideas and you want to pick the best one, because let’s face it, action is better than inaction, and you’d grow old waiting for perfect validation on any one idea.
So if you’re asking your list to choose their own adventure by comparing ideas (“would you be more interested in product A or product B?”), then you automatically control for any overzealousness anyway.
Past vs future audience
Your current audience is your past audience. By that I just mean they signed up based on something you did in the past. Your past audience won’t necessarily all be on the same page about your future directions.
With every new direction chosen, you’ll become less relevant to swaths of your list. Heck, this happens with time anyway. People change jobs every 16 minutes these days.
In validating ideas, it’s important not just to look at who raises their hand, but also to have a good enough sense of your list’s breakdown to know who you’ll be alienating with your new adventure.
I watched this happen to someone we did some consulting for a long time ago. They ranked for really strong fitness keywords in Google as well as really strong science keywords.
Then they decided to do something completely different than either of those things. Five years later their list size is about what it was five years ago.
I don’t think they regret it, but it would have been worthwhile to create a plan to serve the people with those other interests – say, a simple autoresponder product-launch loop for science-y non-buyers, and maybe an affiliate partnership around a great product they found for the fitness-y list members.
You cannot invalidate an idea
Idea validation is tricky because you can validate an idea, but you can never really invalidate it.
Your advantages, approach, messaging, targeting, trust, and price are all variable at the idea stage.
A recent example, though not for a digital product
Ann sent a Google form to the list of a client who owns boutique fitness studios. The survey asked whether subscribers would be interested in a weekend retreat and what they would pay.
The response was okay (about 50 replies), but the average stated price was $250, with a few in the $250 to $500 range, for a 3-day, 2-night retreat. Conservative estimates would have her losing money or putting everyone up in a roach motel. She set the price at $600 or $900 depending on bed type and sent the offer only to those who had expressed interest in the survey.
After one email, it was more than half full (~25% conversion on the initial interest group). And it’s not until the fall, so there’s plenty of time to fill the remaining spots.
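Rough math on that fill rate, as a sketch (the ~50 interested and ~25% conversion are from the example; the room capacity is my assumption for illustration):

```python
interested = 50      # survey respondents who raised a hand
conversion = 0.25    # approximate share who booked after one email
bookings = int(interested * conversion)  # ~12 spots taken

capacity = 22        # hypothetical room count, for illustration only
print(f"{bookings} bookings, {bookings / capacity:.0%} full")  # 12 bookings, 55% full
```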
The more real it is, the more accurate your validation
The change in the above example (I think) is that before the survey it was just an idea. Once she hashed out the what and when of the weekend, it became real. Compare your price sensitivity for these options:
- A 3-day, 2-night retreat in the late summer or fall.
- A retreat with limited rooms available on a hip farm near the beach 90 minutes outside of Philly.
The second one made it more real for people.
We can get more into micro vs one-shot validation tomorrow and dive into why you should be validating outside of your list.