The Economics of Generative AI

-

What’s the business model for generative AI, given what we know today about the technology and the market?

Photo by Ibrahim Rifath on Unsplash

OpenAI has built one of the fastest-growing businesses in history. It may also be one of the most expensive to run.

The ChatGPT maker could lose as much as $5 billion this year, according to an analysis by The Information, based on previously undisclosed internal financial data and people involved in the business. If we’re right, OpenAI, most recently valued at $80 billion, will need to raise more cash in the next 12 months or so.

The Information

I’ve spent some time in my writing here talking about the technical and resource limitations of generative AI, and it is very interesting to watch these challenges becoming clearer and more urgent for the industry that has sprung up around this technology.

The question I think this brings up, however, is what the business model really is for generative AI. What should we expect, and what’s just hype? What’s the difference between the promise of this technology and the practical reality?

I’ve had this conversation with a number of people, and heard it discussed quite a bit in the media. The difference between a technology being a feature and being a product is essentially whether it holds enough value in isolation that people would purchase access to it alone, or whether it demonstrates most or all of its value only when combined with other technologies. We’re seeing “AI” tacked on to lots of existing products right now, from text/code editors to search to browsers, and these applications are examples of “generative AI as a feature”. (I’m writing this very text in Notion, and it’s continually trying to get me to do something with AI.) On the other hand, we have Anthropic, OpenAI, and various other businesses trying to sell products where generative AI is the central component, such as ChatGPT or Claude.

This can start to get a bit blurry, but the key factor I think about is this: for the “generative AI as product” crowd, if generative AI doesn’t live up to the expectations of the customer, whatever those might be, they’ll discontinue use of the product and stop paying the provider. On the other hand, if someone finds (understandably) that Google’s AI search summaries are junk, they can complain and turn them off, and continue using Google’s search as before. The core business value proposition is not built on the foundation of AI; it’s just an additional potential selling point. This results in much less risk for the overall business.

The way Apple has approached much of the generative AI space is a good example of conceptualizing generative AI as feature, not product, and to me their apparent strategy has more promise. At the last WWDC, Apple revealed that they’re engaging with OpenAI to let Apple users access ChatGPT through Siri. There are a few key components to this that are important. First, Apple is not paying anything to OpenAI to create this relationship: Apple is bringing access to its highly economically attractive users to the table, and OpenAI has the chance to turn those users into paying subscribers to ChatGPT, if they can. Apple takes on no risk in the relationship. Second, this doesn’t preclude Apple from making other generative AI offerings, such as Anthropic’s or Google’s, available to its user base in the same way. They aren’t explicitly betting on a particular horse in the larger generative AI arms race, even though OpenAI happens to be the first partnership to be announced. Apple is of course working on Apple AI, their own generative AI solution, but they’re clearly targeting these offerings to augment their existing and future product lines (making your iPhone more useful) rather than selling a model as a standalone product.

All this is to say that there are multiple ways of thinking about how generative AI can and should be worked into a business strategy, and building the technology itself is not guaranteed to be the most successful path. When we look back in a decade, I doubt that the companies we’ll think of as the “big winners” in the generative AI business space will be the ones that actually developed the underlying tech.

Okay, you might think, but someone’s got to build it, if the features are valuable enough to be worth having, right? If the money isn’t in the actual creation of generative AI capability, are we going to have this capability? Is it going to reach its full potential?

I should acknowledge that many investors in the tech space do believe that there is plenty of money to be made in generative AI, which is why they’ve already sunk many billions of dollars into OpenAI and its peers. However, I’ve also written in several previous pieces about how, even with those billions in hand, I believe pretty strongly that we’re going to see only mild, incremental improvements to the performance of generative AI going forward, instead of a continuation of the seemingly exponential technological advancement we saw in 2022–2023. (In particular, the limitations on the amount of human-generated data available for training can’t simply be solved by throwing money at the problem.) This means I’m not convinced that generative AI is going to get a whole lot more useful or “smart” than it is right now.

With all that said, and whether you agree with me or not, we should remember that having a highly advanced technology is very different from being able to create a product from that technology that people will purchase, and from building a sustainable, renewable business model out of it. You can invent a cool new thing, but as any product team at any startup or tech company will tell you, that is not the end of the process. Figuring out how real people can and will use your cool new thing, communicating that, and making people believe your cool new thing is worth a sustainable price is extremely difficult.

We’re definitely seeing lots of proposed ideas for this coming out of many channels, but some of these ideas are falling pretty flat. OpenAI’s new beta of a search engine, announced last week, already had major errors in its outputs. Anyone who’s read my prior pieces about how LLMs work won’t be surprised. (I was personally just surprised that they didn’t think about this obvious problem when developing the product in the first place.) Even the ideas that are somewhat appealing can’t just be “nice to have”, or luxuries; they need to be essential, because the price required to make this business sustainable has to be very high. When your burn rate is $5 billion a year, in order to become profitable and self-sustaining, your paying user base must be astronomical, and/or the price those users pay must be eye-watering.
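To make that arithmetic concrete, here is a back-of-the-envelope sketch. It uses the reported ~$5 billion annual burn figure from the quote above and ChatGPT Plus’s public $20/month price; everything else (ignoring API revenue, free-tier costs, and per-subscriber inference costs) is a simplifying assumption, so treat this as an illustration of scale, not a financial model.

```python
import math

def subscribers_to_break_even(annual_burn_usd: float, monthly_price_usd: float) -> int:
    """Paying subscribers needed for subscription revenue alone to cover an annual burn rate."""
    annual_revenue_per_user = monthly_price_usd * 12
    return math.ceil(annual_burn_usd / annual_revenue_per_user)

# Reported ~$5B/year burn at the $20/month ChatGPT Plus price point:
print(subscribers_to_break_even(5_000_000_000, 20))   # → 20833334 (~21 million subscribers)

# Or, holding the user base fixed, the price must climb instead:
print(subscribers_to_break_even(5_000_000_000, 100))  # → 4166667 (~4.2 million at $100/month)
```

Roughly 21 million people paying $20 every month just to break even, before any growth in costs, is the scale the text above is gesturing at.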

This leaves the people who are most interested in pushing the technological boundaries in a difficult spot. Research for research’s sake has always existed in some form, even when the results aren’t immediately practically useful. But capitalism doesn’t really have a good channel for this kind of work to be sustained, especially not when that research costs mind-bogglingly high amounts to participate in. The United States has been draining academic institutions dry of resources for decades, so scholars and researchers in academia have very little chance to even participate in this kind of research without private investment.

I think this is a real shame, because academia is the place where this kind of research could be done with appropriate oversight. Ethical, security, and safety concerns can be taken seriously and explored in an academic setting in ways that simply aren’t prioritized in the private sector. The culture and norms around research for academics are able to value money below knowledge, but when private-sector businesses are running all the research, those choices change. The people our society trusts to do “purer” research don’t have access to the resources required to meaningfully participate in the generative AI boom.

Of course, there’s a significant chance that even these private companies don’t have the resources to sustain the mad dash toward training more and bigger models, which brings us back around to the quote I started this article with. Because of the economic model governing our technological progress, we may miss out on potential opportunities. Applications of generative AI that make sense but don’t generate the kind of billions necessary to sustain the GPU bills may never get deeply explored, while socially harmful, silly, or useless applications get investment because they pose greater opportunities for money grabs.
