Why and How Startups Should Run Growth Experiments

You know what’s as predictable as death, taxes, and your mom calling right when you’re in an important meeting? It’s you, pouring resources into ideas based purely on gut-feel, only to find out they’re as effective as a plastic teapot.

One thing’s for certain: growth isn’t just about scaling; it’s about smart scaling.

So you’ve got a product or a service, and it’s out there in the wild. You’re not in the idea stage anymore; you’re in the “let’s explode this thing” phase. But here’s the curveball: diving deep into uncharted growth tactics based solely on gut feelings can be as dicey as betting all your savings on a “sure-win” tip your neighbor told you about.

This brings us to our sly, often overlooked villain: Confirmation Bias. Oh, it’s crafty! It convinces you that your particular growth strategy is gold, just because it feels right.

It’s that annoying friend who always tells you, “You’re absolutely right!” even when you’re way off base. It’s the itch in a startup’s mindset, making founders believe they’re on the right path simply because the direction feels right. In essence, it means you’re prone to seeing only the evidence that supports your existing beliefs while blissfully ignoring anything that contradicts it.

Think of it this way: It’s like wearing rose-tinted glasses in a paint store. Everything looks rosy, so you end up buying ten buckets of red paint, only to realize at home that it’s actually brown… or whatever that muddy color is.

Reminds me of that time when I was convinced that sending out emails with neon-colored fonts would grab more attention. Spoiler: they didn’t.

Most of them landed in the spam folder, and those that didn’t, well, let’s just say they hurt a few eyeballs. Had I just taken a step back and run a simple A/B test, I would’ve known better.
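
For the curious, here’s roughly what that sanity check looks like in code: a minimal two-proportion z-test on click-through rates. It assumes you log sends and clicks per variant; the numbers below are made up, not from that ill-fated campaign.

```python
from math import erf, sqrt

def two_proportion_z_test(clicks_a, sends_a, clicks_b, sends_b):
    """Compare the click-through rates of two email variants with a z-test."""
    p_a, p_b = clicks_a / sends_a, clicks_b / sends_b
    # Pooled rate under the null hypothesis "both variants perform the same"
    p_pool = (clicks_a + clicks_b) / (sends_a + sends_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical numbers: variant A = plain fonts, variant B = neon fonts
p_a, p_b, z, p_value = two_proportion_z_test(clicks_a=120, sends_a=2000,
                                             clicks_b=80, sends_b=2000)
print(f"CTR A: {p_a:.1%}, CTR B: {p_b:.1%}, z = {z:.2f}, p = {p_value:.4f}")
```

If the p-value is tiny and the neon variant’s CTR is lower, the data has spoken, no matter how much you love neon.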

Growth experiments are to startups what wind tunnels are to aerodynamics. Essential, enlightening, and sometimes downright surprising.

And here’s a framework to run them while minimizing confirmation bias.

The Framework of a Growth Experiment (with practical examples from Kernel)

Building a Hypothesis:

While optimism is good, don’t be blind. Remember: Love can be blind, especially for your own ideas!

  • Problem Identification: Every great experiment starts with a question. Maybe it’s customers fumbling with a feature or a dip in user engagement after an update.
  • Potential Solution: Formulate an explanation and devise a potential fix for the identified ‘customer struggle’.

Example

  • Problem Identification: Our new users are struggling with creating the first invoice on the platform. We suspect two reasons:
    • New users are unfamiliar with the invoice templates we’re offering them;
    • New users are having trouble envisioning the end product of their usage.
  • Potential Solution: We came up with two:
    • Offering a productive onboarding where we guide them to create their first invoice;
    • Providing a dynamic invoice template on the side of the screen that fills in as they progress through onboarding.

Hypothesis: Providing a skippable productive onboarding with an end product always shown on the side of the screen (so they always know what their document will look like) will help them create their first invoice.
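
One trick that keeps confirmation bias honest: write the hypothesis down as structured data, not as a vibe. Here’s a minimal sketch of what that could look like; the class and field names are mine, not a standard and not Kernel’s internal format.

```python
from dataclasses import dataclass

@dataclass
class GrowthHypothesis:
    """A written-down, falsifiable statement of what we expect and why."""
    problem: str              # the observed customer struggle
    suspected_causes: list    # why we think it is happening
    proposed_change: str      # what we will actually ship
    expected_outcome: str     # what should move if we are right

invoice_onboarding = GrowthHypothesis(
    problem="New users struggle to create their first invoice",
    suspected_causes=[
        "Unfamiliar with the invoice templates we offer",
        "Can't envision the end product of their usage",
    ],
    proposed_change="Skippable productive onboarding with a live invoice preview on the side",
    expected_outcome="More new users create their first invoice",
)
print(invoice_onboarding.expected_outcome)
```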

Designing the Action:

Lay out what you want the user to do. It’s the heart of your experiment. Ensure actions aren’t just engaging but offer genuine value.

Example

  1. The first action was starting the onboarding OR skipping it;
  2. The second was setting up the company;
  3. And the final action was finalizing their first invoice.
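
To make those actions measurable later on, it helps to treat each one as a named event you can count. A rough sketch of that instrumentation is below; the event names and the in-memory list are illustrative, not Kernel’s actual tracking setup (in real life you’d write to a database).

```python
from datetime import datetime, timezone

# The three actions of the experiment, in the order users should take them
ONBOARDING_FUNNEL = [
    "onboarding_started_or_skipped",
    "company_set_up",
    "first_invoice_created",
]

events = []  # stand-in for a real events table in your database

def track(user_id: str, step: str) -> None:
    """Record that a user completed a funnel step, with a timestamp."""
    assert step in ONBOARDING_FUNNEL, f"unknown funnel step: {step}"
    events.append({
        "user_id": user_id,
        "step": step,
        "at": datetime.now(timezone.utc).isoformat(),
    })

# Example: one user going through the first two steps
track("user_42", "onboarding_started_or_skipped")
track("user_42", "company_set_up")
print(events)
```
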
Establishing a Trigger:

Determine the precise moment and method to initiate your action. Understand your audience’s rhythm. Not too early, not too late.

Sending a promotional email at 3 AM? Might as well address it to the moon!

Define: audience, message and timing.

Example

  • Audience: Users who haven’t created their first invoice;
  • Message: A skippable productive onboarding;
  • Timing: As soon as they enter the product.
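
Put together, the trigger boils down to one check that runs the moment a user enters the product: are they in the audience, and is this the right time? A minimal sketch, with hypothetical field names:

```python
def should_show_onboarding(user: dict) -> bool:
    """Fire the skippable onboarding only for the right audience, at the right moment."""
    # Audience: users who haven't created their first invoice yet
    in_audience = user.get("invoices_created", 0) == 0
    # Timing: as soon as they enter the product, i.e. on session start
    right_moment = user.get("last_event") == "session_started"
    return in_audience and right_moment

print(should_show_onboarding({"invoices_created": 0, "last_event": "session_started"}))  # True
print(should_show_onboarding({"invoices_created": 3, "last_event": "session_started"}))  # False
```
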
Measuring the Outcome:

Set definitive and achievable goals and define clear metrics to gauge success, like click-through rates or engagement scores.

In the world of data, not measuring is like wandering without a map. Always remember: If it’s not measured, it’s guessed.

Google Sheets and a proper database are really all you need.

Example

We just tracked 3 metrics:

  • Skip % – Anything BELOW 40% was acceptable.
  • Company setup % – Anything ABOVE 40% was good.
  • Invoice Creation % – Anything ABOVE 20% was good.

It’s very important to set TARGETS. Otherwise, you won’t be able to tell if it worked or not.
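
And as a sketch, checking those metrics against their targets is a handful of lines over whatever you log; the counts below are placeholders, not our real numbers.

```python
def pct(part: int, whole: int) -> float:
    return 100.0 * part / whole if whole else 0.0

# Placeholder counts; in practice these come from your database or a Google Sheet
users_entering = 500
users_skipping = 150
users_company_setup = 240
users_invoice_created = 130

results = {
    # metric name: (measured value, target check)
    "Skip %": (pct(users_skipping, users_entering), lambda v: v < 40),
    "Company setup %": (pct(users_company_setup, users_entering), lambda v: v > 40),
    "Invoice Creation %": (pct(users_invoice_created, users_entering), lambda v: v > 20),
}

for name, (value, target_met) in results.items():
    status = "on target" if target_met(value) else "off target"
    print(f"{name}: {value:.1f}% ({status})")
```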

And in our case, it really did. We improved all three metrics.

So…

We’re not just throwing darts in the dark here. Growth experiments are about sharpening the darts, adjusting our aim, and, sometimes, realizing the board is on a different wall altogether!

You see, it’s all about dancing to the rhythm of our users – understanding their moves, predicting the next step, and sometimes even leading the dance.

It’s about making informed bets, where the odds are shaped by insights rather than mere chance.