The fastest-growing software companies of the last decade (such as Atlassian, Slack, and Shopify) were built with product-led growth at the center of their go-to-market strategies, and growth teams behind them.
Furthermore, this model opens the doors to incredible scaling potential:
Low CAC, powered by a unique distribution channel – the product itself – lets you acquire customers more cheaply and efficiently than your competition.
However, post-COVID we're seeing reports that PLG companies are actually unprofitable, and growth teams at PLG darlings such as Airtable and Snyk are being cut.
So what's going on here?
Besides startups (and scaleups) ignoring the Rule of 40, getting drunk on the COVID boom, and overhiring, growth teams have always had a certain fuzziness around economics, performance, and ROI.
Despite the economic downturn, having a growth team can be your best shot at winning because:
Growth teams are best positioned to work on these challenges. They can be your secret weapon or a bust. There are many traps, even if you have the right skill set in place.
This guide is meant to show what the traps look like in real-life scenarios and how to avoid them. Then growth leaders can prove ROI, and founders and leaders can select the right team for the right job.
“PLG companies’ R&D spend hasn’t produced new business at the same rate as a dollar invested in sales & marketing post-COVID.”
– Tomasz Tunguz, VC
This is the Achilles heel for growth in 2023 when it's all about profitability and ROI. Growth teams, like other teams, have four main variables that determine costs:
And five main variables that determine ROI:
In another article, I outlined four levels of growth teams. In order for growth teams to produce new business, it's vital to understand that:
Although I don't have exact data, I have years of experience and have seen firsthand how growth teams are formed and how they operate at many startups and scaleups. I’ve personally dealt with many of the challenges outlined in this article. My educated guess is that 80%–90% of growth teams get stuck going from level 2 to level 3 and thus don't make it.
That's why I'll be focusing on this area.
This is the crucial make-or-break point, where the first growth team or squad is formed (it can sit in product or in marketing), and there's a limited time window to gain leadership's trust while facing many growth challenges. This happens in early-stage startups (series A, series B) or in brand-new areas of a mature company (e.g., a new product line).
I'll give you three simple examples to see how it plays out in three common scenarios:
We'll use a hypothetical B2B SaaS company with the following profile and unit economics:
This is a simplified example with the purpose of showing you when the work of growth teams is working economically and when it isn't (without looking into CAC, expansion, different segments, pricing tiers, etc.).
Growth teams are spread across acquisition, activation, retention, and monetization. In these examples, we'll focus mainly on acquisition.
Why? Because that's where early-stage or inexperienced growth teams usually start and set themselves up for failure.
The growth team identifies that the web-to-signup conversion rate is 2x below the benchmark, so they decide to focus there.
A single A/B test will probably require around 6–7 weeks from idea to results: 2 weeks to design and launch the test and around 4 weeks to get the results.
In most cases, you'll need a growth PM, a designer, 1–3 engineers, and an analyst. This is an oversimplification, but let's assume the team costs ~$3,000/day ($500 per person), which is approximately $90,000/month in salaries (US-based).
Add another five figures for tooling (your A/B testing tool, data and analytics, design and engineering tools, project management, etc.). So you have a small growth team with a burn rate of $300,000+ for the quarter.
In one quarter, this team is able to ship around four A/B tests (given the traffic and conversion volume, and the need to avoid running overlapping tests). The team might launch roughly two per month, but a few usually take longer. Each will require ~6–7 weeks to produce results, and a good benchmark is around a 30% success rate (which is optimistic, because the failure rate is high).
Say you get one successful A/B test this quarter: You improved your web-to-signup CVR by 20% – Congrats!
In our example, we had 100,000 monthly visitors, 1000 signups, and 1% CVR, so a 20% improvement is 1.2% CVR and you'd get 200 more signups every month. The average value of a signup is $120 ARR, so a 20% improvement means roughly a $288,000 improvement in ARR over the course of 12 months.
Now, on paper, you're already losing money. This quarter your growth team cost $300,000 and produced results worth $288,000 in ARR (and if the growth team sits in the product org, that cost probably isn't even counted in CAC).
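The back-of-the-envelope math above can be sketched in a few lines of Python. All figures come from the hypothetical example; the $30k tooling line item is my assumption, chosen to round the quarter to the ~$300k burn stated above:

```python
# Scenario 1: low-volume A/B testing economics (hypothetical figures from the text)

monthly_visitors = 100_000
baseline_cvr = 0.01          # 1% web-to-signup conversion
arr_per_signup = 120         # $120 ARR per signup

relative_lift = 0.20         # one winning test: +20% relative CVR improvement
extra_signups = monthly_visitors * baseline_cvr * relative_lift   # 200/month

arr_gain_12m = extra_signups * arr_per_signup * 12   # value over 12 months
quarterly_burn = 90_000 * 3 + 30_000                 # salaries + assumed ~$30k tooling

print(f"Extra signups/month: {extra_signups:.0f}")       # 200
print(f"ARR gain over 12 months: ${arr_gain_12m:,.0f}")  # $288,000
print(f"Quarterly burn: ${quarterly_burn:,.0f}")         # $300,000
```

You spend $300k in the quarter to (eventually, with a margin of error) produce $288k in ARR: negative on paper before compounding is even considered.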
You might argue that this improvement will compound. This is true theoretically, but in practice it will play out differently.
It will get worse.
First, depending on your code environment and DevOps, you will likely need to wait another few weeks for the winning variant to roll out to 100% of your users. So 6–7 weeks become 2 months before the effect actually shows up.
Second, that 20% improvement is relative, and a projection that only holds if all conditions remain the same. Over 12 months it could be just 10%, or 5%, or nothing (aka the “margin of error”). Imagine what happens if, all of a sudden, another version gets pushed because an executive felt like it. Your gains disappear (it happens).
Realistically, with 16 A/B tests per year, that means probably only 1–2 successful A/B tests per year (or none). You need high velocity to hit benchmarks, and it doesn't matter how talented the team is: they're constrained by volume and lack of data, not skill.
So, your team costs $1.2m per year and can produce maybe $500,000 ARR worth of improvement in the best case (assuming two successful A/B tests with similar wins).
Of course, you need to consider long-term effects, but imagine bringing this to your CFO in 2023. Trying to explain that this is just how growth works is not going to help.
But it gets even harder.
The success of growth teams depends heavily on alignment with the rest of the GTM teams (marketing and sales). That 20% improvement in web-to-signup conversion rate won't be felt across the other GTM teams.
For example, assuming 20% (very optimistic) of the 200 extra signups are PQLs (product qualified leads) and the company has three AEs (account executives), that's 13 extra PQLs per AE.
If you ask your account executives, they almost certainly won't notice any difference. This is because those 13 extra PQLs won't happen precisely every month but “maybe” on average over the course of the next 12 months.
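To make the "AEs won't notice" point concrete, the arithmetic looks like this (the 20% PQL rate is the article's deliberately optimistic assumption):

```python
extra_signups_per_month = 200   # from the 20% CVR lift above
pql_rate = 0.20                 # very optimistic share of signups that qualify
num_aes = 3

extra_pqls = extra_signups_per_month * pql_rate   # 40 PQLs/month, team-wide
per_ae = extra_pqls / num_aes                     # on average, per AE
print(f"Extra PQLs per AE per month: {per_ae:.1f}")   # 13.3
```

Spread unevenly over a year, ~13 extra leads per AE per month simply disappears into normal pipeline noise.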
When everyone is pressured to deliver results to keep their jobs, you can see how easily other GTM teams might lose trust in growth teams.
Now let's look at another situation with different volumes but the same unit economics:
The growth team decides to focus on improving web-to-signup CVR again because it's still well below benchmark. But this time, the growth team is able to ship ~16 A/B tests per quarter, or 64 per year.
With a bigger sample size (more traffic and monthly conversions), your time to results decreases, and you won't need to wait as long before shipping the next A/B test. It's still not the highest velocity, but it's starting to make more sense.
It gets better.
With a 30% win rate, that's realistically 10–20 successful A/B tests per year, so you're far more likely to hit the benchmark. And with more tests, your team will improve their execution time and increase their chances of success through accumulated learnings. That will likely mean a ~30–40% improvement with strong compounding effects.
A 20% improvement in web-to-signup CVR is already 600 more signups per month and a $720,000 ARR improvement. Your margin of error also becomes smaller because of a bigger sample size.
Your team and tools cost roughly the same. Economics works in the short term. Your team costs you $1.2m, but the return is around $3m/year.
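Putting the two scenarios side by side, using the annual figures from the text:

```python
# Same team cost in both cases; only volume (and thus test velocity) differs
scenarios = {
    "low volume (4 tests/quarter)":   {"cost": 1_200_000, "arr_gain": 500_000},
    "high volume (16 tests/quarter)": {"cost": 1_200_000, "arr_gain": 3_000_000},
}
for name, s in scenarios.items():
    roi = s["arr_gain"] / s["cost"]
    print(f"{name}: {roi:.1f}x return on team cost")   # 0.4x vs. 2.5x
```

Same team, same burn rate; the only variable that changed is volume, and with it the number of shots on goal.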
If you didn't overhire in sales, your AEs will more than likely notice the difference through a higher number of PQLs per AE.
Leadership and other GTM teams will gain trust, so long-term effects can kick in. You'll also get more budget, can start expanding into different areas, and make progress.
Bottom line: The likelihood of positive ROI increases with volume. This holds across all areas: acquisition, activation, retention, and monetization.
Not all growth teams should be running A/B tests and focusing only on the top of the funnel. And what do you do if you don't have the volume?
The ultimate goal of the growth team is to unlock compounding growth.
A popular explanation is to focus on incremental improvements every day that compound over time. This type of thinking leads to a focus on “improvements,” which (at best) only gets you to a local maximum. You improve the efficiency of what you already have, but there's a ceiling (that's why it's called a local maximum).
Another important element of the compound effect is achieved through “building” not improving. You stack things together on top of one another; things that are different in nature but complement each other and thus fuel your growth.
Real-life case studies:
The third important element is innovation. You do something radically different with something you already have, or completely different from your competition.
Real-life case studies:
In the early stages, you don't have much to improve/optimize and you don't have the volume. For these reasons, early-stage growth teams should be building and innovating.
Furthermore, early on you probably won't have a robust data infrastructure to effectively measure performance (setting it up is not the job of a growth team).
In other words, go for big bets (bigger swings) vs. incremental improvements that usually center around:
Bigger swings entail higher risks, but the rewards are usually outsized. Plus, you won't need a sophisticated data infrastructure to measure the outcomes; you'll feel them. That said, big bets can affect revenue and retention to varying degrees, so you need to keep an eye on both.
Once you have enough data volume and an infrastructure (from product analytics to feature flagging), you can start rolling out controlled experimentation (A/B testing) faster and scale velocity.
Not all of the examples above are the work of growth teams. These simple examples are meant to show you the areas and approaches that will more likely lead to a positive ROI. You want your growth team to work on worthwhile projects.
This does not exclude more mature companies. Strong innovation culture in general can be a major growth driver and a way for attracting top talent.
Let's get back to the same hypothetical B2B SaaS company but with a lot less volume:
Using tools like Ahrefs or Semrush, you identify high search volume (~50,000 monthly searches) for relevant terms (centered around the problem your product solves) with a medium level of competition.
Should you hire a $500,000/year growth team to run A/B tests? The ROI will be negative, guaranteed.
In this case, you'd be better off hiring a growth marketer who knows SEO. Someone who can work together with an early product manager or a content marketing agency to build out a content loop.
This would cost you anywhere from $100k to $250k/year, depending on where in the world you hire. You might be able to capture 20–30% of that traffic within one year. That's ~20,000 new organic monthly visitors, bringing you ~300 more signups per month, worth ~$49,800 in ARR per month and $597,600 in ARR per year.
I'm obviously making a lot of assumptions in this hypothetical example (e.g., the CVR might be different for organic traffic). And as with A/B testing, this would be a projection and not immediate revenue (or even pipeline). It will take time to materialize and it would include a margin of error.
But now you have a $200,000/year team working on a $600,000/year bet (with a high compound effect).
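As a sanity check on that bet, using the article's cost range and 12-month projection:

```python
# SEO content bet: hypothetical figures from the text
annual_cost_low, annual_cost_high = 100_000, 250_000
projected_arr_gain = 597_600    # ~300 extra signups/month, valued over 12 months

best_case = projected_arr_gain / annual_cost_low    # cheap hire, projection lands
worst_case = projected_arr_gain / annual_cost_high  # expensive hire
print(f"Projected return: {worst_case:.1f}x to {best_case:.1f}x team cost")
```

Even at the high end of the salary range, and with the same kind of margin of error as the A/B testing projections, the bet stays comfortably positive, unlike the low-volume experimentation scenario.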
My friends at Omniscient Digital have an excellent guide on the hows and whys of measuring the ROI of content marketing.
For growth teams to deliver a positive ROI, growth leaders and company leadership need to understand all the constraints and prerequisites surrounding their stage and product.
Tools to use: