Doubling SaaS Revenue By Changing The Pricing Model

Most technical founders abominably misprice their SaaS offerings to start out.  I’m as guilty of this as anyone, so I wrote up my observations about un-borking this as The Black Arts of SaaS pricing a few months ago.  (It went out to my mailing list — sign up and you’ll get it tomorrow.)  A few companies implemented advice in there to positive effect, and one actually let me write about it, so here we go:

Aligning Price With Customer Value

Server Density does server monitoring to a) give you peace of mind when all is well and b) alert you really darn quickly when all isn’t.  (Sidenote: If you run a software business, you absolutely need some form of server monitoring, because the application being down costs you money and trust.  I personally use Scout because of great Ruby integration options.  They woke me up today, as a matter of fact — apparently I had misconfigured a cronjob last night.)

Anyhow, Server Density previously used a pricing system much beloved by technical founders: highly configurable pricing.

Why do geeks love this sort of pricing?  Well, on the surface it appears to align price with customer success (bigger customers pay more money), it gives you the excuse to have really fun widgets on your pricing page, and it seems to offer low-cost entry options which then scale to the moon.

I hate, hate, hate this pricing scheme.  Let me try to explain the pricing in words so that you can understand why:

  • It costs $11 per server plus $2 per website.
  • Except if you have more than 10 servers it costs $8 per server plus $2 per website.
  • Except if you have more than 50 servers it costs $7 per server plus $2 per website.
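Written out as code — a hypothetical reconstruction of those rules, not Server Density’s actual billing logic — it looks like:

```python
def monthly_price(servers, websites):
    """Hypothetical sketch of the old variable pricing rules."""
    if servers > 50:
        per_server = 7   # second volume-discount tier
    elif servers > 10:
        per_server = 8   # first volume-discount tier
    else:
        per_server = 11
    return per_server * servers + 2 * websites
```

A single server with one website comes to $13 a month, and the sweet-spot seven-server customer (more on them below) pays $79.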

This is very complicated and does not align pricing with customer success.  Why not?

Pricing Scaling Linearly When Customer Value Scales Exponentially Is A Poor Decision

Dave at Server Density explained to me that their core, sweet-spot customer has approximately 7 servers, but that the per-server pricing was chosen to be cheap to brand-new single-server customers.  They were very concerned with competing with free.

Regardless of whether this wins additional $13 accounts, it clearly under-values the service for 7-server accounts, because their mission-critical server monitoring software in charge of paging the $10,000 a month on-call sysadmin to stop thousands of dollars of losses per minute only costs $79.  You don’t get 7x the value from server monitoring if you increase your server fleet by 7x; you get (at least) 50x the value.  After you get past the hobby-project stage you quickly get into the realms of a) serious revenue being directly dependent on the website, b) serious hard costs like fully-loaded developer salaries for doing suboptimal “cobble it together ourselves from monit scripts” solutions, and c) serious career/business reputational risks if things break.

Let’s talk about those $13 accounts for a moment.  Are $13 accounts for server monitoring likely to be experienced sysadmins doing meaningful work for businesses who will solve their own problems and pay without complaint every month?  No, they’re going to be the worst possible pathological customers.  They’ll be hobbyists.  Their servers are going to break all the time.  They’re going to misconfigure Server Density and then blame it for their server breaking all the time.  They’ll complain that Server Density costs infinity percent more than OSS, because they value their own time at zero, not having to e.g. pay salaries or account for a budget or anything.

My advice to Dave was that Server Density switch to a SaaS pricing model with 3~4 tiers segmented loosely by usage, and break with the linear charging.  The advantages:

  • Trivial to buy for non-technical stakeholders: name the plans correctly and they won’t even need to count servers to pick the right one.  (“We’re an enterprise!  Of course we need the Enterprise plan!”)
  • Predictable pricing.  You know that no matter what the sysadmins do this month, you’re likely to end up paying the same amount.
  • Fewer decisions.  Rather than needing to do capacity planning, gather data internally, and then use a custom-built web application to determine your pricing, you can just read the grid and make a decision in 30 seconds.
  • More alignment with business goals.  Unless you own a hosting company, “number of servers owned” is not a metric your CEO cares about.  It only tends to weakly proxy revenue.  Yes, in general, a company with 10 servers tends to have more commercial success than a company with 1 server, but there are plenty of single-server companies with 8 figures of revenue.

(Speaking of custom-built web applications to determine pricing, the best product with the worst pricing strategy is Heroku.  Enormously successful, but I’m pretty sure they could do better, and have been saying so for years.  All Heroku would have to do is come up with four tiers of service, attach reasonable dynos/workers/databases to them, and make that the core offering for 90% of new accounts.  You could even keep the actual billing model entirely intact: make the plans an abstraction over sensible defaults picked for the old billing model, and have the Spreadsheet Samurai page somewhere where power users and the sales team can find it.)

Ditching Linear Scaling In Favor Of A Plan Model

After thinking on my advice, Server Density came up with this redesign:

I love this.

  • The minimum buy-in for the service is now $99 a month, which will segment away customers who are less serious about their server uptime.
  • You now only need to make one decision, rather than needing to know two numbers (which many people at their customers’ companies won’t have at hand).
  • The segmentation on users immediately triples the price for serious businesses using the service, irrespective of the size of their server fleet.  This is good because serious businesses generate a lot of money no matter how many servers they have.
  • Phone support will be an absolute requirement at many companies, and immediately pushes them into the $500 a month bucket.

My minor quibbles:

  • I still think it is underpriced at the top end.  Then again I say that about everything.
  • Did you notice the real Enterprise pricing?  (Bottom right corner, titled “More than 100?”) Like many SaaS services, Server Density will quote you a custom plan if you have higher needs.  Given that these customers are extraordinarily valuable to the business both for direct sales and for social proof, I might make this one a little more prominent.

Results From Testing: 100% Increase In Revenue

Server Density implemented an A/B test of the two pricing strategies using Visual Website Optimizer.

At this point, there’s someone in the audience saying “That’s illegal!”  That person is just plain wrong.  There is no carbon in a water molecule, and price testing is not illegal.

What if the fact of the price testing were discovered?  Not really that problematic: you can always offer to switch someone to the most advantageous pricing model for them.  Since most existing customers would pay less under variable pricing than they would under the above pricing grid, simply grandfathering them in on it removes any problem from people who have an actual stake in the business.  For new customers who get the new pricing grid but really, really feel that they should be a $13 a month account, you can always say “Oh, yep, we were testing.  I’ll give you the $13 pricing if you want it.”  (David from Server Density says that this is in fact what they did, three times, and had no lasting complaints.)

Most customers will not react like this because most customers do not care about price.  (Those that do are disproportionately terrible customers.  To quote David from Server Density, “We had the occasional complaint that pricing was too high but this was from users with either just a single server or very low cost VPSs where the cost of monitoring (even at $10/m) was more than the cost of the server.”)

Anyhow, where were we?  Oh yeah, making Server Density piles of money.  They requested that I not disclose the interval the test was conducted over, to avoid anyone reasoning back to their e.g. top-line revenues, but were OK with publishing exact stats otherwise.

Variable pricing: 150 free trial signups / 2161 visitors

Pricing plans: 113 free trial signups / 2153 visitors

At this point, variable pricing is clobbering the pricing plans (the plans get 25% fewer signups, and the confidence that they are inferior at maximizing trials is over 99%)… but let’s wait until this cohort reaches the end of the trial period, shall we?
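As a sanity check on that confidence figure, here is a pooled two-proportion z-test — one standard way to compare conversion rates, though not necessarily what Visual Website Optimizer computes internally:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Pooled two-proportion z-test (normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

z, p = two_proportion_z(150, 2161, 113, 2153)
# z comes out around 2.3 — a two-sided p-value of about 0.02,
# or roughly 99% one-sided confidence that the difference is real.
```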

Server Density does not make credit card capture mandatory.  (I might suggest revising that decision as another test.)

Variable pricing: 23 credit cards added / 2161 visitors

Pricing plans: 18 credit cards added / 2153 visitors

That’s a fairly similar attachment rate for credit cards.  But collecting credit cards doesn’t actually keep the lights on — the important thing is how much you successfully charge them, and that is highly sensitive to the prices.

Variable pricing: $420 monthly revenue added / 2161 visitors   (~$0.19 a visitor)

Pricing plans: $876 monthly revenue added / 2153 visitors  (~$0.41 a visitor)

+100% revenue (and revenue-per-visitor) for that cohort.  Pretty cool.

(P.S. Mathematically inclined readers might get puzzled at the exact revenue numbers — how do you get $876 from summing $99, $299, and $499?  Long story short: Server Density is a UK company and there are conversion issues from GBP to USD and back again.  They distort the exact revenue numbers a wee bit, but it comes out in the wash statistically.)

We Doubled Revenue?!  Can We Trust That Result?

Visual Website Optimizer displays on the dashboard that it is 93% confident that there was indeed a difference between the two.  (The reported confidence intervals are $0.19 +/- 0.08 and $0.41 +/- $0.16.  How to read that?  Well, draw your bell curves and do some shading, but for a qualitative description, “Our best guess is that we doubled performance, but there’s some room for error in these approximations.  What would those errors look like?  Well, calculus happens, here we go: it is more likely that the true performance improvement is more than ~3x than it is that there was, in fact, no increase in performance.”)

Truth be told, I don’t know if I trust that confidence in improvement or not, because I don’t understand the stats behind it.  I understand the reported confidence intervals and what they purport to measure, I just don’t know of a defensible way to get the data to that point.  The ways I’m aware of for generating confidence intervals for averages/aggregates of a particular statistic (like, say, “Average monthly revenue per visitor of all visitors who would ever sign up under the pricing plan”) all have to assume something about the population distribution.  One popular assumption is “Assume normality”, but that’s known to be clearly wrong — no plausible arrangement of numbers makes X% $99, Y% $299, Z% $499 into a normal distribution.  Even in the absence of a rigorous test for statistical confidence, though, there’s additional information that can’t be put in this public writeup which causes me to put this experiment in the “highly probable win” column.  (If my Stats 102 is failing me and there’s a simple test I am neglecting, feel free to send me an email or drop a mention in the comments.)
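For what it’s worth, one distribution-free route to such an interval — making no normality assumption at all — is the percentile bootstrap: treat the cohort (the handful of purchases plus all the $0 visitors) as the population, resample it with replacement a few thousand times, and read the 95% interval off the resulting distribution of means.  A sketch, using an invented mix of plan sales since the individual purchases aren’t published:

```python
import random

def bootstrap_mean_ci(purchases, n_visitors, iters=2000, seed=0):
    """Percentile-bootstrap 95% CI for revenue per visitor.

    `purchases` lists the nonzero sales; every other visitor
    in the cohort contributes $0.
    """
    rng = random.Random(seed)
    cohort = list(purchases) + [0] * (n_visitors - len(purchases))
    means = sorted(
        sum(rng.choice(cohort) for _ in range(n_visitors)) / n_visitors
        for _ in range(iters)
    )
    return means[int(0.025 * iters)], means[int(0.975 * iters)]

# Invented example: four plan sales out of 2,153 visitors.
low, high = bootstrap_mean_ci([99, 99, 299, 499], 2153, iters=500)
```

At these sample sizes the interval comes out wide, which is the honest answer: a handful of sales dominates the estimate, so collecting more cohorts beats fancier statistics.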

Note that since this is a SaaS business that is monthly revenue added.  Increasing your monthly revenue from a particular cohort by $450 increases your predicted revenue over the next year by in excess of $4,000.  (The calculation is dependent on your churn rate.  I’m just making a wild guess for Server Density’s, biased to be conservative and against their interests.)
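The back-of-the-envelope version of that projection, with the churn guess as an explicit input:

```python
def year_one_revenue(monthly_added, monthly_churn, months=12):
    """Revenue a cohort's added MRR contributes over `months`,
    decaying at a constant (guessed) monthly churn rate."""
    return sum(monthly_added * (1 - monthly_churn) ** t for t in range(months))
```

Even a guessed 5% monthly churn — likely pessimistic for sticky infrastructure software — leaves the $450-a-month cohort contributing north of $4,000 in its first year.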

Now, in the real world, SaaS customers’ value can change over time via plan upgrades and downgrades, and one would ideally collect many months of cohort analyses to see how things shook out.  Unfortunately, in the equally real world which we actually live in, sometimes we have to reason from incomplete data.  If you saw a win this dramatic in your business and were wondering whether you could “take your winnings” now by adopting the new pricing across all new accounts, I would suggest informing that decision with what you previously know about customer behavior vis-a-vis number of servers over time.  My naive guess is that once a server goes into service it gets taken out of service quite rarely indeed and, as a consequence, most Server Density accounts probably have roughly static value and the few that change overwhelmingly go up.

And what about the support load?  Well, true to expectations, it has largely been from paid experts at larger companies, rather than from hobbyists complaining that they don’t get the moon and stars for their $13 a month.  Dave was particularly impressed by how many were happy to hop on a phone call to talk about requirements (which helps them learn about their customer segments and shape future development and marketing roadmaps) — meanwhile, the variable pricing customers largely a) don’t want to talk about things and b) need a password reset right now WTF is taking so long.

Server Density expects that their plan customers will be much less aggravating to deal with in the future, but it is early days yet and they don’t have firm numbers to back up that impression.

Testing Pricing Can Really Move The Needle For Your Business

Virtually no one gets pricing right on the first try.

(When I wrote the pricing grid for Appointment Reminder I snuck a $9 plan in there, against my own better judgment, and paid for that decision for a year.  I recently nixed it and added a $199 plan instead.  Both of those changes have been nothing but win.)

Since you probably don’t have optimum pricing, strongly consider some sort of price testing.  If I can make one concrete recommendation, consider more radical “packaging” restructurings rather than e.g. keeping the same plan structure and playing around with the plan prices +/- $10.  (This means that, in addition to tweaking numbers, you find some sort of differentiation in features or the consolidated offering that you can use to segment a particular group of customers into a higher plan than they would otherwise be at numerically.)

For more recommendations, again, you probably want to be on my mailing list.  You’ll get an email today with a link to a 45 minute video about improving your app’s first run experience, the email about SaaS pricing tomorrow, and then an email weekly or biweekly about topics you’ll find interesting.  Server Density is not the only company telling me that those emails have really been worth people’s time, but if they don’t appeal to your taste, feel free to unsubscribe (or drop me an email to tell me what you’d rather read) at any time.

Disclosure: Server Density is not a client, which is very convenient for me, because I’m not ordinarily at liberty to talk about doubling a client’s revenue.

About Patrick

Patrick is the founder of Kalzumeus Software. Want to read more stuff by him? You should probably try this blog's Greatest Hits, which has a few dozen of his best articles categorized and ready to read. Or you could mosey on over to Hacker News and look for patio11 -- he spends an unhealthy amount of time there.

26 Responses to “Doubling SaaS Revenue By Changing The Pricing Model”

  1. Brennan Dunn August 13, 2012 at 9:32 am #

    Excellent post. I’ve been guilty of tying my pricing plans to usage (“If you have one user, your monthly cost is $X, two users, $2X, and so on”), instead of really looking at my accounts and thinking: Forgetting the if/then’s of my code’s pricing logic, how are my customers actually using this?

    I’ve recently started doing some research, and armed with usage data and a lot of customer interviews, it’s becoming obvious that the “consultant” umbrella is pretty large. A freelancer working from a coffee shop probably shouldn’t be charged 1/5th that of a larger agency who has five developers using my software. The respective account owner’s worldviews and needs are radically different.

    I’ll report back once I’ve been able to experiment with this new pricing plan setup.

    • Patrick August 13, 2012 at 9:50 am #

      Very interested in hearing about your results, Brennan.

    • Allen Rohner August 13, 2012 at 4:19 pm #

      At CircleCI, we recently switched away from per-user pricing ($18/developer/mo), to the standard 3 plan setup like you see above.

      Turns out, users *hated* the per-user pricing, even compared to paying the same amount of money, or more, on a different plan. i.e. We found customers with teams of 5 developers, who would be happy paying “$100 mo”, as opposed to “$18/developer/mo * 5 devs”.

      I don’t have good numbers yet, but it feels like conversion rates are up, and most of the people who emailed to tell me they wouldn’t buy because “prices were too high”, on the old plan, are now happy on a new plan.

      I’ll stop now, I think I need to turn this into a blog post…

  2. Michael Selik August 13, 2012 at 9:57 am #

    You can assume normality for the probability distribution of the average revenue per customer. That’s the central limit theorem. This is true even though the revenue per customer is non-normal. If you take several random samples and calculate the average within each sample, you will find that this calculated value has an approximately normal distribution. This is pretty much the way that normal distributions show up in nature — through aggregation.

    • Michael Selik August 13, 2012 at 10:18 am #

      About constructing a significance test, an ordinary least squares linear regression should be fine. This tests the relationship $ Y = mX + b $, with Y being your explained variable and X being your explanatory variable (or a vector of explanatory variables).

      If we are testing the hypothesis that a relationship exists between visitor revenue and visitor pricing plan, then we construct a null hypothesis that no relationship exists. In other words, our null hypothesis is that the slope parameter in our line equation is zero, $ m = 0 $.

      After we collect data, we can estimate the slope parameter and construct a confidence interval for that parameter. It is safe to assume that the probability distribution of the parameter is normal. If we had a different random sample of data, the estimated parameter would be slightly different. Repeated samplings and estimations would reveal a normal distribution for the values of the parameter. If the 95% confidence interval of the slope parameter does not cover zero, then we can reject the null hypothesis and have (95%) confidence that there is a relationship between our explanatory and explained variables.

      Unfortunately, it’s not quite that simple. In your case, because the possible monthly revenue values are four discrete values (500, 300, 100, 0), that will probably create some heteroskedasticity — prediction errors whose variance is not constant across observations. That violates an assumption of ordinary least squares, but there are standard methods to adjust for that, such as heteroskedasticity-robust regression. That’s another story. Still, ordinary least squares will give you a pretty good answer. If it’s close to the edge, maybe look for more rigorous tests. If your null hypothesis is knocked out of the park, I wouldn’t worry about deeper tests.

      • Michael Selik August 13, 2012 at 10:29 am #

        I didn’t mean that to come off as pedantic. I’ve been thinking a lot lately about how to teach stats.

        • Paul August 13, 2012 at 3:03 pm #

          As a developer just beginning to learn statistics, I for one really appreciated such a thorough contribution. Thanks!

  3. Derek Haynes August 13, 2012 at 10:11 am #

    Our experiences (Scout – same market) going the opposite way (from tiered to per-server pricing):

    * Prior to December 2011, we offered tiered pricing similar to what Server Density offers now. We switched to per-server ($10-$20/server/month).
    * We had smaller defined capacity increments (3 servers/8/16) vs. Server Density (10/50/100).
    * We’ve always required a credit card on signup

    Observations after 9 months or so:

    * Our signup rate increased significantly (kind of obvious – lower price point) – growth has as well. I’d associate the growth more to general awareness.
    * Our signups stay with us longer. They’re paying for exactly what they need and feel better about that.
    * It’s a big win for our customers. Server usage is becoming more elastic and our customers like paying for exactly what they need.
    * With per-server pricing, you lose the ability to identify potential large customers.
    * 100% agree – hobbyists are (understandably) the most fickle customers. You’re selling to a consumer, not a business.

    So, is one better than the other? I don’t know :). Our customers are happier with per-server pricing. I like that.

    • Patrick August 13, 2012 at 12:06 pm #

      I really appreciate the comment, Derek.

  4. TVD August 13, 2012 at 11:07 am #

    “Spreadsheet Samurai page” => ROFL

    All jokes aside, great article Patrick. At this point, I consider you the Starter Godfather.

    I myself have been guilty of suggesting a variable pricing solution.

    Needless to say, I’ll definitely look at that option with a fine-tooth comb in the future.

    Simplicity > Complexity.

  5. Jorge August 13, 2012 at 1:08 pm #

    What’s your opinion on anchor pricing? and how competitor’s pricing affects new customers.

  6. Reuben Swartz August 13, 2012 at 5:37 pm #

    Great article. Giving people options to configure things that they don’t really need to configure creates unnecessary barriers in the sales process.

    Loved your tip about making sure your value and price scale at approximately the same rate.

  7. Colin August 13, 2012 at 5:49 pm #

    With pricing you need to start somewhere… so we made some best guesses. But one of the things I’m really proud of with how we approached pricing is we didn’t start adding complex logic for the limits of each plan. In fact, the only distinction is “paid / not paid”

    This gives us the freedom to find the right pricing without having to re-write logic in the app each time.

  9. Eydun August 13, 2012 at 9:39 pm #

    I like the old structure better. Why? Because I have 3 servers, and they used to cost $39, and now they cost $99. The old structure was more fair.

  10. Steven Forth August 14, 2012 at 4:13 am #

    Great article. You should post it to the Professional Pricing Society group on LinkedIn and to some of the SaaS groups. The pricing metric is a key focal point of innovation and with cloud solutions we can all be a lot more innovative around how we structure price and link how we price to how our users get value from our solutions.

  11. Pat Ransil August 14, 2012 at 10:07 am #

    When I did the original pricing plan for Amazon’s SimpleDB I made the same mistakes. The goal was to do ‘cost-based’ pricing and it came out WAY too complex. I agree that ‘value-based’ pricing is good for the company selling the service but correctly modeling customer value requires significant assumptions about the nature of the customer’s business and their use of your service. You must also remain cost-competitive with comparable services the customer could choose.

    I don’t think Amazon will go away from the linear ‘pay only for what you use’ pricing. I am no longer there so I have no inside knowledge but it is core to their model. If other companies compete directly with AWS, it may be hard to compete on pricing models that are significantly more expensive so you will have to differentiate in a way that adds customer value over what AWS offers.

    But I definitely agree with the goal of simplified pricing and value-based pricing when possible.

  12. Lilia August 14, 2012 at 1:46 pm #

    Thanks for a case study Patrick. I suspect that AB testing on pricing might not be equally practical for all businesses, depending on their strategic goals (growth comes to mind) and many other factors (customer loyalty, competition, etc). Also, long-term customer retention (and LTV) is another consideration when deciding on a pricing strategy and a short term AB test might not paint a full picture.

    • Dennis Gorelik August 14, 2012 at 4:30 pm #

      Lilia,

      Customer loyalty and retention could be very important factors indeed.

      Unfortunately, in the A/B tests Patrick describes he does not even mention retention, but that likely plays a very important role in overall revenue.

      It’s hard to test for retention, in particular because such tests take a very long time (at least several months).

  13. David Zhao August 14, 2012 at 2:12 pm #

    Interesting case study. Our company is currently looking at our own pricing model and this article really made us think.

    Are there any studies on how many people would self service a $299 plan though? Seems like quite a big number for a web signup type service.

  14. Folke Lemaitre August 14, 2012 at 10:48 pm #

    Awesome blog post. I am actually a happy user of ServerDensity and it looks like we will be paying them more in the future (if they push their new pricing to existing customers), but am OK with that ;-)

    For my own startup, we are still looking on how we can optimize our pricing. At this moment, we know that we are kinda cheap, but that has been a decision we made initially for increasing our reach and awareness.

    We are currently in the process of properly segmenting our current user base and create different kinds of packages with specific features added to cover their needs.

    What is still a difficult one for me is how you can properly A/B test different pricing schemes without users being aware of this. Apparently for ServerDensity this hasn’t been a big issue, so I think we’ll do this anyhow ;-)

  15. Peter B August 21, 2012 at 12:25 am #

    Since ServerDensity changed their pricing, they’ve pushed it out of reach of our customers. Let me explain why…

    Our development shop works with a number of startups. To keep billing simple (and in such a market, to reduce our potential liability) we make each startup sign up direct with the provider for hosting, monitoring etc.

    That worked with the old pricing model, but the new one assumes either a) each company is large, or b) we’re going to resell.
