Jul 11, 2023

Vertical AI-Powered SaaS — Beyond the AI Prompt Box

Boris Bogatin

There’s no doubt that the rise of LLMs, and most importantly their evolution to a level of capability now valuable and reliable enough for core use cases, has turned the world upside down. There has also been a lot of talk about the new AI-powered stack, across the foundation, infrastructure, and application layers, and about the opportunities that lie ahead at each. And yet the most valuable opportunity may very well be at the application layer, as “apps” get built to deliver business and consumer value as transformative as, or even more transformative than, what today’s tech leaders like Apple, Google, Amazon, Salesforce, Uber, and many others have delivered through applications. As always, the essence is in the details. Here’s what we are distilling at Catio.

First, it’s helpful to reflect on familiar paradigms: as often as they get challenged (and sometimes overturned), they remain a great source of perspective and thoughtful inquiry. Take linear programming and AI. One might think that to achieve reasoning, we could simply extend what we’ve already achieved mathematically, arriving at the right answers through optimization, but now scaled up dramatically with AI. Yet AI has transcended this notion extensively, going well beyond hundreds of decision variables and equations to a far larger space, and LLMs have changed the paradigm entirely, proving that probabilistic rather than reinforcement-learning-based approaches can be highly effective even while remaining generalized (though their value arguably still varies depending on the use case, general or specific).
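
To make that contrast concrete, here is a minimal sketch of the classical optimization paradigm: a toy linear program with two decision variables and a handful of constraints, solved with SciPy. The objective and constraint numbers are invented purely for illustration.

```python
# A toy linear program: maximize 3x + 5y subject to linear constraints.
# Classical optimization finds the single provably optimal answer,
# but only for problems we can fully specify in advance.
from scipy.optimize import linprog

c = [-3, -5]            # linprog minimizes, so negate to maximize 3x + 5y
A_ub = [[1, 2],         #  x + 2y <= 14
        [-3, 1],        # -3x +  y <= 0
        [1, -1]]        #  x -  y <= 2
b_ub = [14, 0, 2]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimal decision variables and objective value
```

The point of the contrast: a program like this only works once every variable, coefficient, and constraint is known up front, whereas LLMs trade that precision for the ability to generalize from patterns.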

Since “intelligence” has existed and been developing in humans for millions of years, human intelligence is a natural paradigm to consider. The notion of patterns and probabilities can conceivably be compared to “instincts” and “impulses”: the innate, natural side of our intelligence that forms the foundation for the further generalized intelligence humans acquire over a lifespan, a form we might loosely call “street smarts”. This spans the seemingly involuntary actions we learn to perform (e.g., fight or flight, the pleasure of eating), beyond those triggered purely by our physiology (e.g., breathing and eating itself), all the way through the generalized intelligence we acquire to safely cross the road or hold a meaningful conversation. In this loose sense, LLMs appear to fit the bill, as their “foundation model” naming suggests.

But the other side of learning is the one that yields “book smarts” and/or “trade smarts”, ideally in combination: intelligence that achieves expertise in a specific area beyond the “general public” (e.g., the ability to perform brain surgery, design a competitively superior engine, or discover general relativity). This form of intelligence is most effectively attained not through generalized intelligence and memorization, but through truly understanding the “why” of a concept (Richard Feynman describes this rather well when reflecting on his talk with his dad about the name of a bird). By that very virtue, it is a relative concept. While hunting with a sharpened rock may well have counted as “book and trade smarts” expertise in the Stone Age, today that knowledge has largely been absorbed into generalized intelligence, even if the average person doesn’t know the intricacies of that craft at the level of a Stone Age expert. Reinforcement learning on proprietary datasets, with ample ability to provide that reinforcement, including through clearly confined simulated environments, is an example of AI with a proven ability to achieve this type of outcome.
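
As a loose illustration of that last point, here is a minimal sketch of reinforcement learning inside a confined simulated environment: tabular Q-learning on a toy corridor task, where the agent learns purely from simulated reward why stepping right is the better decision. The environment, rewards, and hyperparameters are all invented for illustration; a real enterprise system would train against proprietary data and a domain-specific simulator.

```python
# Tabular Q-learning on a toy 1-D corridor: start at position 0, goal at 4.
import random

N_STATES = 5          # positions 0..4; reaching state 4 ends the episode
ACTIONS = [-1, +1]    # step left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # epsilon-greedy action selection
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else -0.01
        # Q-learning update: learn from simulated feedback, not labeled examples
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy walks straight toward the goal
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])
```

The expertise here is narrow but grounded in consequences: the agent is rewarded for outcomes, not for imitating existing examples.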

To provide some additional context, one could view the supervised-learning methodology used in the original AlphaGo, and the traditional “best-in-class” benchmarking commonly used in strategy consulting, as paradigms similar to the foundation-model training of LLMs: all aim at generalized intelligence. The overall benefit is the ability to generalize and scale across widely available datasets and to pursue AI insights on the back of probabilities. On the other hand, training AI to achieve optimal outcomes through reinforcement learning, akin to the approach used in AlphaGo Zero, and through understanding the underlying causes and effects (and hopefully, eventually, the actual science), enables true expert systems that can outperform in specific domains because their decisions are rooted in understanding the why behind them. The contrast between AlphaGo and AlphaGo Zero is a good reflection of these two paradigms.

Keeping this in mind, we completely agree with the essence of Index Ventures’ blog post on Vertical AI-Powered SaaS: “vertically-focused AI platforms, bundled alongside workflow SaaS, built on top of models which have been uniquely trained on industry-specific datasets” as the next iteration of Enterprise SaaS. However, the important additional context is exactly how these Enterprise SaaS applications will leverage AI. Will they provide interfaces to LLMs as part of their workflow? Should they create a layer of abstraction that protects enterprise data but still leverages the important enterprise context when prompting LLMs? Or will they build out independent AI models that develop the specific expertise, the kind of “why” understanding, around the enterprise area of focus?

Breaking this down, our view is that there is certainly an “it depends” qualification, and that many use cases add enterprise value beyond the core business process value, but the following approaches appear to deliver the highest business-process impact:

  • In the near term, leverage LLMs through a concept akin to what Dmitri points out in his post on Semantic Brokers (reach out to Anyquest for more info): a layer of safety and further semantic processing that interfaces with LLMs on behalf of enterprises, and similarly translates the feedback from LLMs back into the enterprise in a semantically relevant manner (see the sketch after this list)
(Diagram omitted; source: Anyquest.ai)
  • In the longer term, develop in-house AI expertise based on proprietary data and learnings. This AI expertise will either leverage LLMs for extensive augmentation, or add value on top of LLMs by applying extensive fine-tuning and processing expertise to extend them with enterprise-specific expertise. Yet at its heart, this in-house AI expertise will be differentiated only when it’s rooted in access to proprietary enterprise data and in applying training and reinforcement learning to it. Used in conjunction, this is how SaaS providers will gain the “book and trade smarts” essential to providing accurate, dependable, and potentially mission-critical insight into enterprise workflows.
  • The extensive LLM augmentation will come in two forms: “generalized insight”, which is post-processed by proprietary ML into specific expert insight, and “generalized hypotheses and recommendations”, which are used as experimentation fodder at scale, both in place of human feedback for reinforcement learning in well-simulated cases and as inputs to human feedback that then gets incorporated into post-human-feedback reinforcement processing (a loose sketch of this pipeline follows below)
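
To make these bullets a bit more concrete, below is a loose illustration of the pattern, not Anyquest’s actual design nor any specific vendor’s API: a broker layer that redacts enterprise-specific identifiers before prompting a general-purpose LLM, then maps the generalized answer back into enterprise terms and hands it to proprietary, domain-specific post-processing. The redaction map, `call_llm`, and `proprietary_postprocess` are hypothetical placeholders.

```python
# Hypothetical map of enterprise-specific terms to safe placeholder tokens
REDACTIONS = {"Acme Corp": "<COMPANY>", "Project Falcon": "<PROJECT>"}

def redact(text: str) -> str:
    """Replace proprietary identifiers before anything leaves the enterprise."""
    for secret, token in REDACTIONS.items():
        text = text.replace(secret, token)
    return text

def restore(text: str) -> str:
    """Map placeholder tokens back to enterprise terms on the way back in."""
    for secret, token in REDACTIONS.items():
        text = text.replace(token, secret)
    return text

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to a general-purpose LLM API."""
    return f"Generalized guidance for {prompt!r} ..."

def proprietary_postprocess(generalized: str) -> str:
    """Stand-in for the in-house model that turns generalized insight into
    expert, enterprise-specific recommendations."""
    return generalized + " [checked against internal architecture data]"

def broker(question: str) -> str:
    safe_prompt = redact(question)       # enterprise data never leaves raw
    generalized = call_llm(safe_prompt)  # general-purpose "street smarts"
    return proprietary_postprocess(restore(generalized))  # enterprise "trade smarts"

print(broker("How should Acme Corp re-architect Project Falcon's data pipeline?"))
```

The near-term value sits mostly in the broker and redaction layer; the longer-term differentiation sits in the post-processing step, which is where the proprietary “book and trade smarts” live.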

Yet in the end, this will all need to be synthesized into Enterprise SaaS that truly appeals to end-user needs. Phones and consumer electronics were advanced by Steve Jobs’s maniacal focus on serving customers with superb design and technical capability married into one, ushering in mass-market adoption of personal computing; digital services were advanced by the usability and customer-support focus of companies like Amazon and Zappos, ushering in mass-market adoption of e-commerce; AI faces a very similar paradigm. Mass-market adoption of AI in the enterprise will not come from adoption of its raw capability. It will come from adoption of Enterprise SaaS that delights users with highly usable services, elegantly and efficiently addressing specific enterprise use cases, enabled by the value propositions and differentiation of AI versus existing baselines. Whether that’s aiding drug discovery, optimizing supply chains, or informing software development that better achieves its objectives, AI will have to be baked into customer-lauded Enterprise SaaS services that teams can effectively adopt into their ongoing workflows.

As McKinsey points out in its recent report on Generative AI, the Enterprise SaaS opportunity is tremendous and will advance the productivity of the enterprise sector to the next frontier. A number of use cases are predicted to benefit, with software engineering at the very front of value creation. One area we’d argue is yet to be properly sized up is the advancement in decision-making ability (as opposed to automation): given its impact on ROI and its ability to scale up the most valuable decision-making resources in enterprises, it can yield outsized, even exponential, returns, whereas the returns from automation ultimately appear more linear in how they advance human-capital efficiency.

(Figure omitted; source: McKinsey & Company)

In conclusion, Vertical AI-Powered SaaS is going to unlock tremendous value across the enterprise sector and may very well be where AI is ushered in most disruptively. It will be built on the back of “book and trade smarts” developed by deeply understanding the “why” of enterprise needs and expertise, while leveraging the “generalized intelligence” of LLMs at the foundation level. And this value will be realized by the Enterprise SaaS pioneers best able to marry their “book and trade smarts” AI expertise with Enterprise SaaS offerings that delight their users, advancing their decision-making capabilities as well as their overall productivity through automation.

Sincere thanks for inputs and contributions to this blog post go out to the entire Catio Team, Mike Ostrovsky, Hashem Alsaket, Fred Zhang, Iman Makaremi, Dmitri Tcherevik, and Dorian Kieken.

------------------------------

Follow this post on LinkedIn for additional comments and discussion.