
SaaS AI Features Meet Applications Without Moats


Several enterprise SaaS companies have announced generative AI features recently, which is a direct threat to AI startups that lack a sustainable competitive advantage


Back in July, we dug into generative AI startups from Y Combinator's W23 batch — specifically, the startups leveraging large language models (LLMs) like GPT, the model that powers ChatGPT. We identified some big trends across these startups: a focus on very specific problems and customers (e.g. marketing content for SMBs), integrations with existing software (e.g. CRM platforms like Salesforce), and the ability to customize large language models for specific contexts (e.g. your organization's brand voice).

A secondary, less-emphasized part of that article was around moat risks — quoting from back then:

A key risk with several of these startups is the potential lack of a long-term moat. It's difficult to read too much into it given the stage of these startups and the limited public information available, but it's not difficult to poke holes in their long-term defensibility. For instance:

If a startup is built on the premise of taking base LLMs (large language models) like GPT, building integrations into helpdesk software to understand the knowledge base & writing style, and then generating draft responses, what's stopping a helpdesk software giant (think Zendesk, Salesforce) from copying this feature and making it available as part of their product suite?

If a startup is building a cool interface for a text editor that helps with content generation, what's stopping Google Docs (which is already experimenting with auto-drafting) and Microsoft Word (which is already experimenting with Copilot tools) from copying that? One step further, what's stopping them from providing a 25% worse product and giving it away for free with an existing product suite (e.g. Microsoft Teams eating into Slack's market share)?

That's exactly what has played out over the past few months. Several large enterprise SaaS companies have announced and/or launched their generative AI products — Slack, Salesforce, Dropbox, Microsoft, and Google, to name a few. This is a direct threat to generative AI startups that are building useful productivity applications for enterprise customers but have limited sustainable competitive advantage (i.e. are moatless). In this article, we'll dive into:

  • Recap of the AI value chain
  • New AI features from enterprise SaaS companies
  • How startups can build moats in this environment

We won't spend much time on this, but as a quick reminder, one way to think about how companies can derive value from AI is through the concept of the AI value chain. Specifically, you can break the value chain down into three layers:

  • Infrastructure (e.g. NVIDIA makes chips to run AI applications, Amazon AWS provides cloud computing for AI, OpenAI provides large language models like GPT for building products)
  • Platform (e.g. Snowflake provides a cloud-based solution to manage all your data needs in one place, from ingestion to cleaning to processing)
  • Applications (e.g. a startup building a product that helps SMBs quickly create marketing content)
AI value chain; Source: author

Though the generative AI wave began with OpenAI's launch of ChatGPT, which is powered by the GPT model (infrastructure layer), it's becoming increasingly clear that the infrastructure layer is commoditizing, with several large players entering the market with their own LLMs, including Facebook (LLaMA), Google (LaMDA), and Anthropic, to name a few. The commoditization is explained by the fact that most of these models are trained on the same corpus of publicly available data (like Common Crawl, which crawls sites across the web, and Wikipedia).

Outside of this data pool, every large company that holds a sizable corpus of first-party data is either locking that data down for itself or creating licensing models, which means this data is going to be either unavailable or available to every model provider for training — i.e. commoditization. This is a similar story to what played out in the cloud computing market, where Amazon AWS, Microsoft Azure, and Google Cloud now own a large part of the market but compete aggressively with one another.

While the platform layer is a bit less commoditized and there is likely room for more players to cater to a variety of customer needs (e.g. startups vs. SMBs vs. enterprise customers), it is moving in the direction of commoditization, and the big players are beginning to beef up their offerings (e.g. Snowflake, a data warehousing platform, recently acquired Neeva to unlock LLM applications for enterprises; Databricks, an analytics platform, acquired MosaicML to power generative AI for its customers).

Therefore, a majority of the value from AI is going to be generated at the application layer. The open question, however, is which companies are likely to reap the benefits of applications unlocked by large language models (like GPT). Unsurprisingly, of 269 startups in Y Combinator's W23 batch, ~31% had a self-reported AI tag. While the applications are all objectively useful and unlock value for their customers, particularly in the enterprise SaaS world, it's becoming more and more clear that incumbent SaaS companies are in a much better position to reap the benefits from AI.

There has been a flurry of announcements from SaaS companies in the past few weeks. Let's walk through a few.

Slack initially started by supporting the ChatGPT bot operating inside your Slack workspace, both for summarizing threads and for helping draft replies. This was quickly expanded to support a Claude bot (Claude is Anthropic's equivalent of the GPT model). More importantly, Slack announced its own generative AI built natively within the app, which supports a wide range of summarization capabilities across threads and channels (e.g. tell me what happened in this channel today, tell me what project X is). What could have been plugins built by startups is now a native feature built by Slack, because Slack can easily pick up models like GPT off the shelf and build a generative AI feature. This is not terribly difficult to do, and it also saves Slack the hassle of dealing with integrations and clunky user experiences from unknown plugins.
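The "pick up a model off the shelf" point is worth making concrete: a thread-summarization feature is mostly prompt assembly. Below is a minimal sketch using an OpenAI-style chat payload; `call_llm` is a stub standing in for whichever provider's API you'd plug in, and all names and message shapes here are illustrative assumptions, not Slack's actual implementation.

```python
# Sketch of a thread-summarization feature built on an off-the-shelf LLM.

def call_llm(messages):
    # In a real product this would POST to a provider's chat-completion
    # endpoint (OpenAI, Anthropic, etc.) and return the model's reply.
    raise NotImplementedError("plug in your provider's client here")

def build_summary_prompt(thread):
    """Flatten a chat thread into an OpenAI-style payload asking for a summary."""
    transcript = "\n".join(f"{m['user']}: {m['text']}" for m in thread)
    return [
        {"role": "system", "content": "You summarize workplace chat threads."},
        {"role": "user", "content": f"Summarize this thread in 2 sentences:\n{transcript}"},
    ]

thread = [
    {"user": "ana", "text": "The deploy failed on step 3."},
    {"user": "raj", "text": "Rolled back; root cause is a missing env var."},
]
payload = build_summary_prompt(thread)
```

Nearly all of the engineering here is glue: collecting the thread and formatting it, not modeling — which is exactly why an incumbent that already holds the data can ship it quickly.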

Another announcement came from Salesforce. Their product Einstein GPT is positioned as generative AI for their CRM. It will let Salesforce users query a wide range of things (e.g. who are my top leads right now), automatically generate and iterate on email drafts, and even create automated workflows based on these queries. It's likely that the feature looks nicer in screenshots than it is in reality, but it would be a fair bet that Salesforce can build a reasonably seamless product in a year's time. This, in fact, is the exact functionality being built by some of the generative AI startups today. While useful in the short term, success for these startups depends not just on being better than Einstein GPT, but on being so much better that an enterprise SaaS buyer would be willing to take on the friction of onboarding a new product (I'm not going to name startups in my critique, because building products from the ground up is hard and writing critiques is easy).

In a similar vein, Dropbox announced Dropbox Dash, which is positioned as AI-powered universal search. It supports a wide range of functionality, including Q&A across all the documents stored on Dropbox, summarizing content in documents, and answering specific questions from a document's content (e.g. when is this contract expiring). Again, there are generative AI startups today that are essentially building these functionalities piecemeal, and Dropbox has an easier path to long-term success given it already has access to the data it needs and the ability to create a seamless interface within its product.
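The retrieve-then-answer shape behind features like Dash can be illustrated with a toy retriever. Production systems use embedding models and vector indexes rather than raw word overlap; this stdlib-only sketch just shows the pattern of scoring stored documents against a question and handing the best match to an LLM for answering.

```python
import math
from collections import Counter

def vectorize(text):
    # Bag-of-words term counts; real systems would use an embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def top_document(question, docs):
    """Return the stored document most similar to the question."""
    q = vectorize(question)
    return max(docs, key=lambda d: cosine(q, vectorize(d)))

docs = [
    "This contract expires on 31 March 2025 unless renewed.",
    "Quarterly revenue grew 12% driven by enterprise sales.",
]
best = top_document("when is this contract expiring", docs)
# `best` would then be passed to an LLM along with the question.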

The list continues:

  • Zoom announced Zoom AI, which provides meeting summaries, answers questions in-meeting if you missed a beat and want to catch up, and summarizes chat threads. Several startups today are building these features as separate products (e.g. note-taking tools).
  • Microsoft 365 Copilot will read your unread emails and summarize them, answer questions from all your documents, and draft documents, among other things. These capabilities can also be embedded seamlessly into the interfaces of products like Word, Excel, OneNote, and OneDrive.
  • Google has an equivalent product, Duet AI, for its productivity suite
  • Even OpenAI (though not a dominant SaaS company) launched ChatGPT Enterprise, which can essentially plug into all of a company's tools and provide easy answers to any question from an employee

I'm, by no stretch, claiming that the battle is over. If you have used any generative AI products so far, there are some wow moments but more not-wow moments. The pitches for the products above are appealing, but most of them are either being run as pilots or are news announcements describing a future state of the product.

There are also several unresolved issues limiting the adoption of these products. Pricing is all over the place, with some products offering AI features for free to compete, while broader copilot products charge a fee per seat. Microsoft 365 Copilot is priced at $30/user/month and ChatGPT Enterprise at around $20/user/month — while this seems palatable at face value for a consumer, several enterprise buyers might find this price laughable at scale, especially given that costs add up quickly across thousands of employees. Data sharing concerns are another big blocker, given that enterprises are hesitant to share sensitive data with language models (despite enterprise AI offerings explicitly stating they won't use customer data for training purposes).
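To make the scale point concrete, here is the arithmetic for a hypothetical 10,000-seat enterprise at Microsoft's published $30/user/month price:

```python
def annual_copilot_cost(seats, price_per_seat_per_month):
    """Total yearly spend for a per-seat copilot subscription."""
    return seats * price_per_seat_per_month * 12

# Hypothetical 10,000-employee enterprise at $30/user/month:
cost = annual_copilot_cost(10_000, 30)
print(f"${cost:,}/year")  # prints $3,600,000/year
```

A $30 line item looks trivial per user, but a multi-million-dollar annual commitment gets a very different level of procurement scrutiny.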

That said, these are solvable problems, and the focus with which large SaaS companies are building AI features suggests they will be unblocked in the near term. Which brings us back to the moat problem — generative AI startups building for enterprise customers need to figure out strong moats if they want to continue to thrive in the face of SaaS incumbents' AI features.

Let's start with the obvious non-moats: taking a large language model off the shelf and building a small value proposition on top of it (e.g. a better user interface, plugging into one data source) doesn't create a long-term, sustainable advantage. These are fairly easy to mimic, and even if you have first-mover advantage, you'll either lose to an incumbent (that has easier access to data or more flexibility with interfaces) or end up in a pricing race to the bottom.

Here are some non-exhaustive approaches to building a moat around enterprise AI products.

1. Domain / vertical specialization

Some domains / verticals are better suited to building AI applications than others. For instance, building on top of CRM software is really hard to defend, because CRM companies like Salesforce have both the data connections and the control over interfaces to do it better. You can come up with really smart innovations (e.g. a LinkedIn plugin to auto-draft outreach emails using CRM data), but innovators / first-to-market players don't always win the market.

Legal is one example of a vertical where AI startups could shine. Legal documents are long, take an incredible number of person-hours to read, and make for a frustrating process for everyone involved. Summarizing and analyzing contracts, Q&A over contract content, summarizing legal arguments, and extracting evidence from documents are all time-consuming tasks that can be done effectively by LLMs. Casetext and Harvey.ai are two startups that have copilot products catering to lawyers, and have built custom experiences that specifically cater to legal use cases.

Another vertical in dire need of efficiency is healthcare. There are several challenges with deploying AI in healthcare, including data privacy and sensitivities, a complex mesh of software (ERP, scheduling tools, etc.) to work with, and a lack of technical depth and agility among the large companies that build products for healthcare. These are clear opportunities for startups to launch products quickly and use the first-to-market position as a moat.

2. Data / network effects

Machine learning models (including large language models) perform better the more data they have been trained on. This is one of the biggest reasons why, for instance, Google Search is the world's most performant search engine — not because Google has all the pages in the world indexed (other search engines do that as well), but because billions of people use the product, and each user interaction is a data point that feeds into the search relevance model.

The challenge with enterprise products, however, is that enterprise customers will explicitly prohibit providers of SaaS or AI software from using their data for training (and rightfully so). Enterprises have a lot of sensitive information — from data on customers to data on company strategy — and they don't want this data fed into OpenAI's or Google's large language models.

Therefore, this is a difficult one to build a moat around, but it can be possible in certain scenarios. For instance, the content generated by AI tools for advertising or marketing purposes is less sensitive, and enterprises are more likely to allow this data to be used for improving models (and consequently their own future performance). Another approach is having a non-enterprise version of your product where usage data is opted into training by default — individuals and SMB users are more likely to be okay with this approach.

3. Bring in multiple data sources

The hardest part of applying large language models to a specific enterprise use case is not picking a model off the shelf and deploying it, but building the pipes needed to funnel a company's relevant data to the model.

Let's say you're a large company like Intuit that sells accounting and tax software to SMBs. You support tens of thousands of SMB customers, and when one of them reaches out with a support query, you want to provide a personalized response. Very likely, data on which products this customer uses sits in one internal database, data on the customer's recent interactions with the products sits in another database, and their past support query history lives in a helpdesk SaaS product. One approach for generative AI startups to build a moat is by identifying specific use cases that require multiple data sources that are not owned by a single large SaaS incumbent, and building in the integrations to pipe this data in.
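A sketch of that pattern, with stub fetch functions standing in for the three systems in the Intuit example above (the function names, fields, and data are all illustrative assumptions; real integrations would call each system's API):

```python
# Assembling LLM context for a support reply from three separate systems.

def fetch_products(customer_id):
    return ["Payroll", "Invoicing"]            # stub: internal product database

def fetch_recent_activity(customer_id):
    return "Ran payroll twice last week"       # stub: usage/analytics database

def fetch_ticket_history(customer_id):
    return ["2024-01: login issue (resolved)"] # stub: helpdesk SaaS API

def build_context(customer_id):
    """Funnel all three sources into one prompt context for the model."""
    return (
        f"Products: {', '.join(fetch_products(customer_id))}\n"
        f"Recent activity: {fetch_recent_activity(customer_id)}\n"
        f"Past tickets: {'; '.join(fetch_ticket_history(customer_id))}"
    )

context = build_context("cust-42")
```

The integration plumbing, not the model call, is where the defensible work lives: each extra data source a startup wires in is one more thing an incumbent that owns only a single silo cannot trivially replicate.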

This has worked incredibly well in other contexts — for instance, the entire market of Customer Data Platforms emerged from the need to pull in data from multiple sources to create a centralized view of customers.

4. Data silo-ing

Large enterprises don't want to expose sensitive data to models, especially models owned by companies that are competitors or have too much leverage in the market (i.e. companies with whom enterprises are forced to share data due to a lack of alternatives).

From the YC W23 article, CodeComplete is a great example of a company that emerged from this pain point:

The idea for CodeComplete first came up when its founders tried to use GitHub Copilot while at Meta and their request was rejected internally due to data privacy concerns. CodeComplete is now an AI coding assistant tool that is fine-tuned on customers' own codebases to deliver more relevant suggestions, and the models are deployed directly on-premise or in the customers' own cloud.

5. Build a fuller product

For all the reasons above, I'm personally skeptical that a majority of standalone AI applications have the potential to become businesses with long-term moats, particularly those targeting enterprise customers. Being first to market is definitely a play and may indeed be a good path to a quick acquisition, but the only real way to build a strong moat is to build a fuller product.

A company focused on just AI copywriting for marketing will always stand the risk of being competed away by a larger marketing tool, like a marketing cloud or a creative generation tool from a platform like Google/Meta. A company building an AI layer on top of a CRM or helpdesk tool is very likely to be mimicked by an incumbent SaaS company.

The way to solve for this is by building a fuller product. For instance, if the goal is to enable better content creation for marketing, a fuller product would be a platform that solves core user problems (e.g. the time it takes to create content, having to create multiple sizes of content) and then includes a powerful generative AI feature set (e.g. generate the best visual for Instagram).

I'm excited about the amount of productivity generative AI can unlock. While I personally haven't had a step-function productivity jump so far, I do believe it will happen in the near-to-mid term. Given that the infrastructure and platform layers are getting reasonably commoditized, most of the value from AI-fueled productivity is going to be captured by products at the application layer. Particularly in the enterprise products space, I do think a substantial amount of the value is going to be captured by incumbent SaaS companies, but I'm optimistic that new fuller products, with an AI-forward feature set and consequently a meaningful moat, will emerge.
