
Pricing AI: How to capture value in a world of AI abundance

By Davis Treybig

Originally posted on Davis Treybig's Substack

TL;DR: I think you can differentiate as an AI SaaS startup by being more expensive and embracing usage-based pricing, rather than trying to hide it. This lets you build reasoning-native features, avoids the need for dark patterns like invisible model downgrading that have crept into seat-based products and alienate the most engaged users, and aligns better with the mental model that forward-thinking buyers have.


“Amp is unconstrained in token usage (and therefore cost). Our sole incentive is to make it valuable, not to match the cost of a subscription.” - Amp Code Owners Manual

Pricing of AI-native SaaS tools has been widely discussed over the past year or two - particularly the margin challenges that can be associated with AI SaaS products (Windsurf Gets Margin Called) and the idea that with AI you can sell “outcomes”, not work (Sell the Work).

However, there is one overlooked angle of pricing AI products which I increasingly find interesting from a startup perspective: the idea that usage-based pricing, particularly in a world of reasoning models, can act as a competitive advantage. In other words - I think more products should price for abundance.

Most AI SaaS products out there today try to map their pricing back to some kind of seat-based model. Cursor charges $20/month, Greptile charges $30/month, Perplexity charges $20/month. While this is a simple and predictable pricing model for users, it comes at a cost: because of the massive variance in COGS across a user base when you offer a low fixed seat price, you often end up needing to adopt dark patterns to control margins.

For example - you can find quite a few discussions online about how various AI code generation tools seem to “invisibly” downgrade the model they use behind the scenes if you use the product too much in a given billing period. This leads to a ton of user frustration as the product suddenly seems worse for a reason the user can’t quite explain.

Invisible or opaque downgrading or rate limiting of AI products is widespread

Claude Code’s recent rollout of invisible “usage limits” reflects the same problem - they are likely trying to maintain their per-seat-per-month business model, but as a result they have to curb extreme use because otherwise the top 5-10% of users will ruin the margins of the entire product.

This approach feels fundamentally flawed to me. First, it alienates your biggest power users - many of whom would likely pay much higher prices given the value they receive. Indeed, many of the more forward-thinking executive teams I know are aiming not to reduce token usage but to increase it (see the Shopify CTO quote below). Why would you design a pricing model counter to this?

Shopify CTO on why you should reward token spend - Shopify maintains a leaderboard of the top token spenders and treats climbing the leaderboard as a goal

Second, this business model prevents you as a company from leaning into what is special about foundation model-based SaaS tools - namely, that you can pay more for them to do more. This is especially true in the wake of reasoning models and foundation model systems, where the quality and accuracy of results are directly correlated with how much money you are willing to spend on incremental compute.

My central argument is that you can actually build a differentiated product experience by embracing this concept, rather than trying to obfuscate it behind a seat-oriented pricing model.

For example:

  1. What if I could set a policy for Cognition’s Devin agent to spend a lot to try to automatically fix sev0 or sev1 bug reports from customers, but use weaker models & less reasoning for standard customer feature requests?
  2. What if I could set a policy for a code review agent to think a lot longer with a large reasoning model for code changes that touch the database, but spend far less to review simple client-side visual changes?
  3. What if, as a lawyer using an AI legal tool, I could modulate how much effort goes into drafting a contract based on how important and complex the contract is?
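As a concrete sketch, a “dynamic effort policy” like those above could be as simple as a table mapping task categories to a model choice, a reasoning budget, and a cost ceiling. Everything here - the tier names, model names, and dollar figures - is an illustrative assumption, not any vendor's real API:

```python
# Hypothetical sketch of a "dynamic effort policy": the stakes of the
# task, not a flat seat price, decide how much compute an agent may spend.
from dataclasses import dataclass

@dataclass
class EffortPolicy:
    model: str              # which (hypothetical) model to route the task to
    reasoning_tokens: int   # max "thinking" budget per task
    max_spend_usd: float    # hard cost ceiling for the task

# A buyer could tune this table per task category.
POLICIES = {
    "sev0_bug":        EffortPolicy("large-reasoning-model", 50_000, 25.00),
    "sev1_bug":        EffortPolicy("large-reasoning-model", 20_000, 10.00),
    "feature_request": EffortPolicy("small-fast-model",       2_000,  0.50),
    "db_code_review":  EffortPolicy("large-reasoning-model", 30_000, 15.00),
    "ui_code_review":  EffortPolicy("small-fast-model",       1_000,  0.25),
}

def pick_policy(task_type: str) -> EffortPolicy:
    """Unknown task types fall back to the cheapest tier."""
    return POLICIES.get(task_type, POLICIES["feature_request"])
```

The point of the sketch is that the spend/quality tradeoff becomes an explicit, user-controllable product surface rather than something hidden behind a flat seat price.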

You can map out these sorts of “dynamic effort policies” for basically all key categories of AI SaaS tools. There are always cases where you are willing to spend more to get higher-quality work done faster - and architecturally, it is well known how to get foundation model systems to achieve this (e.g. more reasoning, repeated sampling, best-of-N parallelization, etc.). It's simply a question of whether your business model allows these sorts of ideas to be built into the product. I think it is basically impossible to truly embrace reasoning-model product paradigms in a SaaS tool if you do not put usage-based pricing front and center.
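Of the techniques listed above, best-of-N sampling is the simplest to sketch: generate N candidate outputs and keep the highest-scoring one, so N becomes a direct spend-for-quality knob. In this toy version, `generate` is a stand-in for a real model call plus evaluator (here it just draws a random quality score), not any real API:

```python
# Toy best-of-N sampling: more samples (more spend) -> better expected output.
import random

def generate(prompt: str, rng: random.Random) -> tuple[str, float]:
    # Stand-in for one model call; returns a draft and its quality score.
    quality = rng.random()
    return f"draft(q={quality:.2f})", quality

def best_of_n(prompt: str, n: int, seed: int = 0) -> tuple[str, float]:
    # Spend n generations, keep the highest-scoring draft.
    rng = random.Random(seed)
    drafts = [generate(prompt, rng) for _ in range(n)]
    return max(drafts, key=lambda d: d[1])

# With the same seed, the n=16 run includes the n=1 draft, so its best
# score can only be equal or higher - paying for more samples buys quality.
_, q1 = best_of_n("fix the bug", n=1)
_, q16 = best_of_n("fix the bug", n=16)
```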

If executed correctly, I think this approach may actually allow you to invert the phenomenon you see with Claude Code - where power users are alienated in order to maintain a simple, low-cost seat-based model for everyone else. In gaming, it is long established that extreme power users subsidize everyone else - e.g. in mobile gaming the top 1% of users generate ~60% of all revenue. It is becoming increasingly clear that many of these AI categories show a similar distribution of user behavior, with top users using the product 1,000x as much as the regular user if not more, but the business models for the most part do not accommodate this. If you instead embraced this idea, could the power users of a tool like Claude Code actually subsidize the low end?

Taking a step back - never before in the history of SaaS was it possible to pay more for a product to do more. Products like Docusign, Salesforce, Hubspot, and similar were, for the most part, fixed in functionality: they could not modulate quality no matter how much a customer was willing to spend on compute for a given task. Even traditional ML startups could not really achieve this - e.g. a demand forecasting model is going to keep producing the same results for the same features regardless of a customer's willingness to pay more for forecasting X than Y.

Correspondingly, I think foundation model SaaS tools should lean into the fact that they can offer differential quality based on compute - as this is part of what allows you to counter-position and build a better product than existed before. Note that this is not necessarily outcome-based pricing - which is nice when you can do it, but is realistically unfeasible in most domains where there is no clear, obvious outcome to charge against. Rather, it is “effort-based” pricing.

There are, of course, numerous downsides to this type of pricing strategy. It is often harder to get users comfortable with usage-based pricing. How much will it actually cost me? How do I explain it to my CFO? How do I ensure it doesn't go completely out of bounds? But, on balance, I think it is better to address these problems head-on (e.g. support usage caps, limits, guardrails, alerts, real-time spend dashboards, etc.) rather than deal with the impedance mismatch that occurs when you build an AI-native SaaS on a seat-oriented model.
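A minimal sketch of one such guardrail - a monthly spend cap with an alert threshold - might look like the following; the class name, default threshold, and dollar figures are all illustrative assumptions:

```python
# Hypothetical spend guardrail: hard monthly cap plus an early-warning alert.
class SpendGuardrail:
    def __init__(self, monthly_cap_usd: float, alert_fraction: float = 0.8):
        self.cap = monthly_cap_usd
        self.alert_threshold = alert_fraction * monthly_cap_usd
        self.spent = 0.0
        self.alerts: list[str] = []

    def record(self, cost_usd: float) -> bool:
        """Refuse (return False) any charge that would exceed the cap."""
        if self.spent + cost_usd > self.cap:
            return False
        self.spent += cost_usd
        # Fire a single warning once 80% of the budget is consumed.
        if self.spent >= self.alert_threshold and not self.alerts:
            self.alerts.append(f"{self.spent:.0f} of {self.cap:.0f} USD used")
        return True
```

In a real product this would sit between the agent and the model API, so the buyer's CFO gets a predictable worst case while power users can raise their own cap.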

You are starting to see some startups embrace this. I began the article with a quote from Amp Code, which has embraced this pricing philosophy the most in the coding agents space. I suspect this pricing philosophy will rapidly become more commonplace in applied AI startups. The startups that adopt it will be able to innovate more on product and take far greater advantage of recent model capabilities in reasoning, because they will not need to deeply consider and model COGS for every single product decision they make.
