During a startup’s lifetime, countless hypotheses are tested to find what pushes it forward. As long as a startup is alive, it should run a constant cycle of validation that tests different approaches, ideas, and concepts.
Why? Because a startup constantly needs to grow. Hardly any project reaches the point where its founder stops and says, “Well, I think we are done, let’s wrap it up.” If you want to evolve and succeed, validation needs to be an endless process.
But since the validation cycle doesn’t stop, it means you can potentially waste unlimited amounts of money on it. So a logical question arises, “How do you test hypotheses quicker and cheaper?”
In this article, I’ll tell you how Admitad Startups optimizes its costs without stopping its HADI cycles.
How to test an idea cheaper and faster
Personally, I do not know a better option than designing an experiment. Based on your experience, expertise, benchmarks, and observations, you model the funnel of the experiment, predicting how it will go.
For example, a startup team has the following hypothesis: “If we call 10 people and interview them, we’ll confirm that they need a solution to problem X.” Since it is not yet clear how many people you should contact to get 10 respondents, you have to rely on hypothetical conversion rates.
If the estimated conversion rate is 50%, you need to get in touch with 20 people to conduct 10 interviews. Next, you estimate the cost. Let’s say your experience tells you that obtaining a respondent costs an average of 100 bucks, so conducting these 10 interviews will cost you 1,000 bucks.
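The back-of-the-envelope math above can be captured in a tiny helper. This is just an illustrative sketch: the function name, the 50% conversion rate, and the $100 cost per respondent are all placeholder assumptions you would replace with your own estimates.

```python
import math

def design_experiment(target_interviews, est_conversion, cost_per_respondent):
    """Estimate outreach volume and budget for a validation experiment.

    All inputs are guesses: est_conversion is a hypothetical rate,
    cost_per_respondent comes from past experience or benchmarks.
    """
    # How many people to contact so that enough of them convert to interviews
    contacts_needed = math.ceil(target_interviews / est_conversion)
    # Budget is driven by the respondents you actually obtain
    budget = target_interviews * cost_per_respondent
    return contacts_needed, budget

# 10 interviews, 50% assumed conversion, $100 per obtained respondent
contacts, budget = design_experiment(10, 0.5, 100)
print(contacts, budget)  # 20 contacts, $1000 budget
```

Changing the assumed conversion rate immediately shows how sensitive your budget ceiling is to that one guess, which is exactly why writing the funnel down before spending anything pays off.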
Of course, in 95% of cases, nothing goes as expected. Nevertheless, designing an experiment before conducting it gives you some ground under your feet. It’s useful to understand your approximate budget and know that you can’t move beyond it, since these restrictions greatly help with focus.
All in all, this is the framework that makes validation budget-friendly. Without distractions, the process becomes much quicker, and cheaper too, since you’ll use a limited set of tools to validate your idea.
Running a few tests at once
One of the ways to speed up the validation is to launch several cycles in parallel. It’s especially relevant for corporations with a huge budget since they have enough people and resources to test a few product hypotheses simultaneously.
But a single startup with a small team usually has no such option, so it has to check its product hypotheses one by one. Larger teams can be divided into 2–3 groups to distribute hypotheses among them, but again, for many startups, this is simply impossible at the initial stage.
At Admitad Startups, we can simultaneously check 2–5 hypotheses, depending on their formats. If the validation cycle requires huge resources, then we prefer to stay focused and test our assumptions one at a time. But with ‘mature’ hypotheses (something like A/B testing of the color or location of the button), we usually run several tests at once.
Let me illustrate how this works with an example. Say, we came up with a USP for our Collabica project (“Increase your average check by partnering with other brands on the Collabica platform”), and now we need to validate it. Since USP hypotheses are high-tier and complex, we prefer to focus on one USP at a time.
However, each USP includes lower-tier hypotheses — for instance, “If we do X, we will bring in Y leads.” Such ideas are typically quicker to test, so we can run a few validation cycles in parallel:
- Send emails
- Launch ads on LinkedIn
- Launch ads on Facebook
- Leave messages in thematic chats in Telegram, Slack, etc.
Essentially, instead of dedicating one week to LinkedIn and another week to Facebook, we check our assumptions simultaneously. But it can only work with low-tier hypotheses that don’t require huge time and labor investments.
How many marketers are needed to launch a validation cycle?
In hypothesis testing, the rule ‘the more, the faster’ does not work: nine people cannot deliver a baby in one month. So instead of gathering a crowd of people to disperse the work between them, you need to be smarter about it.
Any validation cycle starts with a product manager. Being the one responsible for the entirety of a product hypothesis, they are constantly involved as a brain center of operations. Hence, a product manager decides what skills and/or professionals are required to complete a single validation cycle. It usually depends on 1) the very nature of a hypothesis and 2) what methods a product manager wants to use.
For example, to check some assumptions, it is enough to get your marketer to send some emails, and you will be done in a week. But testing more complex hypotheses that describe business models and products can sometimes take up to several months. In such cases, naturally, more people have to be involved in the verification cycle.
In our practice, there were instances where even the whole team wasn’t enough. So I recommend adding freelancers if you lack human resources. It also makes sense to hire them when there’s a lot of monkey work, such as sending 1,000 emails. Admitad Startups tries to outsource such tasks because otherwise team members might turn into very expensive monkeys.
“But wait!” you might say, “I get it, everything is relative and no two hypotheses are alike. But it would be nice to have reference points, don’t you think?” If you ask me, you usually need 2–3 people to test a single internal hypothesis:
- a marketer able to find respondents or leads,
- and an interviewer (or two) able to pull information out of them.
It gets significantly easier when a business has already built a lead list (for instance, from previous research or validation cycles). In this case, all you need is interviewers who will start making phone calls.
Combining CustDev and pre-sale
To test a hypothesis, whether through a landing page, an interview, or something else, you need to find potential customers. And that often takes very hard work: it’s not easy to generate leads when all you have is a non-existent product that solves a problem you haven’t verified and offers a value you haven’t discovered.
I believe this is the most common bottleneck. To try and tackle it, you need to define your buyer persona — this way, you will at least know who you are supposed to attract. For example, in one of Collabica’s iterations, our ideal customers were Shopify sellers with certain revenue and certain UIA who have no marketer in their staff.
Your next step is finding actual customers that fit your buyer persona and conducting interviews. (By the way, we shared more on Customer Development in this article.) At this stage, teams of Admitad Startups use this rare lifehack to speed up the process.
We turn to respondent recruitment services like Respondent.io to connect with our target audience. The service must allow specific, accurate queries: this way, you filter out in advance the consumers who don’t fit the description of your buyer persona. Having accumulated a pool of respondents, we use them not only for interviews but also for pre-sales.
For example, as we discuss with a potential client their pain points and possible solutions, we can also show them a slide deck and ask, “Would you want to buy such a thing?” It blends well with CustDev interviews: asking why (or why not) provides useful insights into the product while possibly generating a lead.
Great research leads to instant ‘sales’
A hypothesis is confirmed when several members of the target audience perform the target action. In industry lingo, we often call it ‘selling’ or ‘buying a hypothesis’, but it doesn’t necessarily involve an actual purchase. Instead, it can be something like:
- agreeing to buy a pilot version
- entering into an agreement
- using our service, and so on
At Admitad Startups, all the research work of the incubator is aimed at getting potential customers to ‘buy’ our proposition. If it does not happen, it means the product hypothesis is not confirmed, so we move on to the next one.
What does the research process look like? We analyze the niche, formulate an offer, and start suggesting it to potential buyers. As a rule, they rarely get immediately ‘sold’ on a product, but we get to learn something new from them. After refining what we have, we come back to our target audience for more feedback. This cycle can run several times until our potential clients are satisfied with the offer.
The importance of preliminary research is not to be underestimated. The more carefully it is carried out, the more time it will save you. Here’s a case study from our practice: we developed an SDK for mobile applications called CashBro. The audience was ‘sold’ on it instantly thanks to the extensive knowledge we’d acquired beforehand. I mean, when the product manager came to me and said, “We should validate our cashback browser”, we were prepared 110%. So in a day, I whipped together a slide deck and handed it over to our salesperson. It turned out our potential customers were ready to buy CashBro from the get-go.
If you want to run cheaper and faster validation cycles for your hypotheses, you can try the following methods.
- Design an experiment. Plan the funnel of all test stages beforehand, thinking through how each is going to turn out. If you have no conversion data backed by real evidence yet, use your best estimates.
- Run a few tests in parallel, distributing hypotheses between your team members.
- Be smart about the number of people you task with validation. Based on our experience, I would say that you only need two specialists: an interviewer and a marketer.
- Try to combine the Customer Development stage with pre-sales. Apart from interviewing respondents, you can offer them your product and see if they are ready to buy it (and ask useful questions in case they are not).
- Don’t strive for validation through sales. If you focus on achieving that, it might only hinder your progress since not all hypotheses need to result in a sale.