Transcript for:
Understanding Agentic AI and Causal Reasoning

Hello, and welcome to a special edition of the Road to Intelligent Data Apps, where we're going to continue the discussion around the rapidly evolving AI marketplace. Again, you know, you can't go a day without hearing something new. Today, I'm excited to be joined by Scott Hebner, who's our newest principal analyst, really focused on keeping us current on the latest developments that are shaping the future of AI, and on how to best prepare now rather than later, because obviously you don't want to be late to this market. It's moving so fast. Our topic today is the research note that Scott put out on the critical role that causal AI plays in future-state agentic AI systems. And I think, again, if you haven't read it, go take a look at it. We'll actually link to it. Thanks and welcome, Scott. And again, I don't know that we can, you know, go a week without something new happening in this space, especially. Yeah, it's incredible. It's incredible, isn't it? It's moving at a pace that's probably double the last technology transformation. And that one was probably double the one before. And so it seems to be accelerating every time. Right. Yeah. And one thing I've certainly learned from dealing with all the customers and businesses that I have over the years is that in this marketplace, you not only have to focus on today's ROI, but you've always got to be thinking a couple steps down the line, at least preparing for what's coming next. Otherwise, it's too easy to lag. Yeah, no, I think that to me is the biggest thing with this market: there are so many different companies out there, and new ones sprouting up over and over. And I think part of it is, how do we not reinvent the wheel? How do we learn from the past? Because AI has been around, and a lot of this AI has been around and has been built upon. And just because there's agentic technology and AI in the mix doesn't mean that we get rid of all the other AI, the good stuff.
And I think this is a lot of what's been, you know, keeping you busy, for sure. Yeah, no, I think, you know, what's interesting about this technological revolution that's underway is that it's the ultimate team sport. If you think about it, everything builds on everything else, right? Nothing can be done effectively in the world of AI without wiring together a bunch of other components, right? There's no AI without an information architecture. The information architecture has to connect to all different kinds of sources. You have to have the fabric, right? You can't build all your own models; you've got to feed off some of the larger models. You know, it goes on and on, and so it is really about an ecosystem and how you wire everything together. And that creates opportunities for the long tail to thrive, where you have more and more entrants into the marketplace that have new innovations, because they plug in. It's sort of a different model than cloud, and the internet model before that, and IoT, if you really think about it, because it all needs to come together. Absolutely. And it's evolving so fast. So before we go deeper into your findings, let's kind of level set on agentic AI, what it's all about, and how it really differs from AI assistants and chatbots, which, again, we all have our perspectives on. Yeah, we'll bring up a chart here. I was actually reading the other day some statistics that about 70% of businesses across the globe have AI chatbots or assistants. And there are some three to four billion inquiries, or resolutions, I think they called it, per year. You think about that, that's a huge number of people using a chatbot or an AI assistant. But if you think about what they are, they're task-based. They're about a task, right? There is a prompt, an explicit prompt, and there's a known way to achieve the response. And it's about automating, you know, simple tasks, right?
It's good, it's important, it has been very productive. I think an AI agent now takes the next level of sophistication in the use case, and obviously the technology, to become goal-based. It's about achieving a goal, right? A goal is not an explicit, you know, prompt or command. It's a little bit more undefined. So the first thing it needs to do is autonomously help you figure out how to achieve the goal, right? And usually a goal involves some sort of dynamic set of conditions around it. So going from a task to a goal creates a whole new level of sophistication. And then, of course, there are going to be very complex goals, and perhaps it's not just one individual user trying to solve a goal; it's a whole organization trying to solve a goal. That's too much for one agent to handle. So that's when you get into what I think we're considering agentic AI, which is systems of agents that are collaborating, negotiating; they all have their own goals, their own knowledge, their own data sets, right, and they come together to help, you know, create a plan for solving a goal and then figure out how you get there, right. So from tasks, to goals that are a little bit more straightforward and usually individual, to an organizational set of goals in a very dynamic environment. I think that's the progression of what we're talking about here. Yeah, and I think, again, to your point, we talk about it as potentially large action models or small action models that are built off of a number of either small or large language models, for that matter, and even other AI as well, as we were talking off camera here. Yeah, no, and there was also a great study that's in the research I'm putting out, from Capgemini. I think it was 1,000 or... yeah, 1,100 decision makers across large enterprises. About 10% of them report having deployed AI agents, using the definition that we're using here.
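The task-to-goal progression described above can be sketched in code. This is a hypothetical illustration, not any vendor's framework: all function and task names are invented. The point is that a goal-based agent must first produce its own plan from the goal and current conditions, and adapt that plan when conditions change, whereas a task-based assistant just executes one explicit command.

```python
# Minimal sketch of a goal-based agent loop (illustrative names only).
# A task-based assistant executes one explicit command; a goal-based
# agent decomposes an open-ended goal into tasks given the conditions.

def plan(goal, conditions):
    """Decompose a goal into tasks, adapting to current conditions."""
    tasks = ["gather_data", "analyze", "act"]
    if conditions.get("data_stale"):
        tasks.insert(0, "refresh_data")  # the plan changes with conditions
    return tasks

def run_agent(goal, conditions):
    completed = []
    tasks = plan(goal, conditions)
    while tasks:
        task = tasks.pop(0)
        completed.append(task)           # a real agent would execute here
        if task == "refresh_data":
            conditions["data_stale"] = False
    return completed

print(run_agent("reduce churn", {"data_stale": True}))
# ['refresh_data', 'gather_data', 'analyze', 'act']
```

In an agentic system, many such loops, each with its own goal and data, would negotiate and hand tasks to one another rather than run in isolation.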
But that's expected to grow to 82% in the next three years. And by the end of next year, I think it was 50-plus percent, 52%. So there's definitely an investment stream underway to progress the chatbots and the assistants into this notion of an agent. Yeah. Yeah, I think people look at the bots and they have their limitations. Like you said, they're focused on one task. If you get outside of that, then it has to be bumped out to a human in the loop, in many cases, hopefully. In some cases, it doesn't; it just fails. But, you know, in good implementations, they bump it out. That's just one example of the challenges. But what are some of the other challenges that organizations are dealing with when they're looking at going from AI assistants to AI agents? Well, that is the big gap, right? And that's why there's going to be a lot of elegant, you know, math and technology and algorithmic capabilities that have to be infused into the AI models to make this happen. And the chart we'll bring up here is one from Gartner, which I think nails it, right? If you look at what it's basically saying, the deterministic chatbots are simple, right? They're much more explicit in what you're trying to accomplish. They work in environments that are, you know, simpler, right? Very straightforward. The LLMs in generative AI take that a step forward, I think. You get a little bit more support for a dynamic environment, you know, more complex data sets they can work off.
But as you can see on the chart here, there's a huge gap in red between task-oriented assistants and chatbots and the whole idea of an AI agent, right, one that's adaptive, that's able to plan, that's able to deal with dynamic and changing conditions; it starts to infuse autonomous activity and actions under the covers to help you, and perhaps can even act autonomously to resolve something when human, you know, intervention isn't feasible, right? How do you make that happen? That red area is what needs to get solved here. And one thing that's true about AI agents is that pure correlation, you know, identifying patterns and anomalies, is not going to allow you to achieve this. You're going to have to be able to get support from the AI to plan, right? And you're going to need to make decisions, and you're going to be doing problem solving, not in a static environment but in a dynamic environment. So you just start, you know, processing how to make that happen with today's AI. There need to be new methods, new ingredients, you know, in the mix here to make this all happen. So it's not just about building the agents and orchestrating them and governing them and getting them to communicate and, you know, collaborate together. It's what's going to feed them with the intelligence, the decision intelligence, if you will, to be able to make this all happen. That's where the innovation is going to come. Yeah. And I think both of us look at that gap and know that it's going to take more than just gen AI to close it. Things like causality come into the picture. And I think this is where we violently agree. Is that where you see it? Yeah. And that's where the research paper goes into more depth than we can here. But if you bring up the next chart, what we're really talking about is the notion of causal reasoning.
And obviously there are different degrees of reasoning when it comes to what AI is capable of doing, but every little baby step forward over time is going to add tremendous value, and it's going to build on itself. So again, when you're trying to solve a goal, you need to make decisions. Decisions have consequences. There's never a single decision. There are always alternate decisions you can make, each of which has its own consequences, which then lead to other decisions with their own consequences. It is the process of problem solving. Humans are by nature causal. The way we solve any problem, like how to drive home, you know, whatever it may be, is by processing cause and effect, right? So causal reasoning becomes a really critical ingredient to fill that red gap that we saw in the Gartner chart. You have to have causal relationships mixed in there. You have to infuse tacit know-how into it. And it has to start to be able to understand why certain things may happen, given certain conditions. And that is the key for the AI agent. It'll not only be able to tell you what you can do and what you should do, but it can tell you how to go about doing it and why you should do it. Given that there are probably different ways you can do it, what's the best path for doing it? And then it can actually explain it to you, right? Descriptively, predictively, and prescriptively, in the sense of, you know, how do you actually get there, right? And I think that's what we need to infuse into the mix, if you will, the new ingredients into the AI mix of LLMs and small language models and gen AI and all that, to be able to make this all happen. It's a new layer, if you will, in the ecosystem of models.
Yeah, I agree, and it takes me back, because I remember in the late 90s all the rage in what we would now call observability (back then we called it management products: network management, application management, APM). When it first started out, it was really about these rules engines where you would build just a ton of if-then-else rules, very complex and playing off of each other. And then kind of the earliest stages of causal AI started to emerge to help break this down. How do you see this working in practice for real businesses now, and for data scientists? Yeah, and we kind of touched on this in our last conversation, right? I mean, today's AI models, LLMs, are correlative in nature, right? They work on patterns and associations, probabilistic statistics, if you will. Knowledge graphs, right, and graph RAG allow you to infuse relationships among different entities. That can give you a workflow of how different things relate to each other. That's a step forward, and more and more of the models today are incorporating that. But those knowledge graphs and the relationships between the entities still don't tell you about the causality between those entities, the cause-and-effect relationships, right? They're not causal; they're just relationships, right? Now let's put that back on the agentic AI definition that we had. We have multiple agents that are engaging with each other and trying to solve some problem or help you make some decision. How do you know which agent to invoke, and ask for what, if you're unable, in a dynamic world, to know through cause and effect which is the right one to go to? Because every cause has an effect, right? Things just don't happen magically.
So how that all gets orchestrated is why this causal reasoning becomes so important, not just in making a decision within an agent, but in how these agents are actually going to interact with each other. And again, I think this is going to be a very progressive journey over time. And if you bring up the chart here, there are a bunch of new ingredients getting added into the mix. And these are real, live things today. There's a ton of vendors out there, growing at good rates, that are providing the toolkits and the methods to do this. And often when I talk about this to people, I put it in the context of the human brain, you know, how we all think. So think of LLMs as sort of being like your limbic brain, which essentially is your memory and, you know, drives instincts, right? Memories being the data sets, right, and your instincts being, you know, the networks, the neural networks and so forth, the different approaches you take that turn those memories into instincts, right? My dog knew I was coming up here this morning when I got in the shower first thing in the morning, because I never do that, right? You know, that memory turns into instinct. That's what an LLM basically operates with. When you start to infuse all this causal stuff, that's when you get into the notion of mimicking how people reason. And it starts off with, like, the cerebral cortex, which is really about skills and know-how. I think that's where knowledge graphs start to come in, where you can sort of encode, incrementally, know-how, skills, how you actually get something done, how the entities relate to each other, which the LLM can't figure out on its own. And then the neocortex is the next layer of causality, which is actually the reasoning and the problem solving. This is where the real cause and effect comes in.
Because think about it: probabilities are great and important in an LLM, but when you're reasoning, you actually know how to deal with the probabilities, you know, how they change when everything around you changes. Yeah, I think that makes total sense. And I think, to your point, it's not a straight line. It's a line that kind of weaves around, comes back, and loops back in there. And how do you deal with the interlooping of the different pieces that have to come together on this as well? I would assume that's where this toolkit comes in, this causal AI toolkit that you talked about there. Yeah. So there are dozens and dozens of methods, open source toolkits, vendors that are essentially democratizing these new tools, these new methods that you can start to infuse. I think of them as ingredients, right? And when you start to integrate these things together, and it starts to become more of an architectural approach, that's where the power comes in. So it's something you can start on very simply. A very simple approach is a monotonic control, right? Very straightforward, very simple. A lot of people are using it today. You start to incrementally put new ingredients, new spices, into your AI mix, and you build over time. So it's not like you start over again. You build off what you have. It's very incremental. So it's very low-risk to start experimenting. And I'm going to cover that in a future research note, where I'll go much deeper into neurocausal networks and Bayesian models and things of that nature. What's on the chart here is more the abstraction of what you're able to do. Like, now you can intervene, right, and say, wait a minute, if I do X versus Y, what would be the difference in the outcome? If I have a bunch of different approaches I can take, you know, what-if scenarios, right? By understanding cause and effect, I can deal with that. I can do counterfactual reasoning.
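The "if I do X versus Y, what's the difference in the outcome?" question above is exactly what an intervention on a structural causal model answers. Here is a deliberately tiny sketch under invented assumptions: a pricing model where price drives demand, which drives revenue. The numbers and mechanism are made up for illustration; real causal toolkits fit such mechanisms from data rather than hard-coding them.

```python
import random
import statistics

random.seed(1)

# Toy structural causal model: price -> demand -> revenue.
# do(price = p) asks: what would average revenue be if we SET the price?

def simulate(price, n=2000):
    revenues = []
    for _ in range(n):
        # Invented demand mechanism with noise, for illustration only.
        demand = max(0.0, 100 - 8 * price + random.gauss(0, 5))
        revenues.append(price * demand)
    return statistics.mean(revenues)

# "If I do X versus Y, what's the difference in the outcome?"
rev_a = simulate(price=5.0)    # do(price = 5): expected revenue ~300
rev_b = simulate(price=10.0)   # do(price = 10): expected revenue ~200
print(rev_a > rev_b)
```

Because the model encodes the mechanism, it can compare actions it has never observed, which pattern-matching over historical data alone cannot do reliably.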
I can infuse known tacit information, you know, essentially intuition, right, or known conditions into a model. I can understand root cause: what are the influences on an outcome, and rank those influences for me. Some may be irrelevant, others may be very relevant, which is the idea of identifying confounding effects. You know, often bias or hallucinations in the LLM are the result of hidden influences, right? Confounding effects and things of that nature. It's going to allow me to understand the pathways to a resolution: not just "here's the answer," but "here are the logical steps for why we think that's the right answer," right? You know, it rationalizes it for you. So there are toolkits and methods that allow you to do all these different kinds of things. And that is what's going to allow us to incrementally evolve over time to have more and more causal reasoning in these models, which then become the underpinnings for the agents, right? That sort of stuff's getting exciting. And as I covered in my last research note on the marketplace, it's not just a lot of these startups that are doing this. It's vertical-industry players, like process manufacturing software companies, supply chain vendors, marketing mix vendors, that are starting to infuse causality into their offerings. And all the big guys, IBM, Meta, Google, OpenAI, they all have research underway in causal AI, which I think will eventually fuse into their core offerings. So this stuff's going to become more prevalent, I believe, in the years ahead, for sure. Yeah, no, I think this is super exciting and such a quickly changing area of study. And I think we can expect more on this from you as well, right? Yeah. So, you know, step one is what we're doing now, which is sort of the concept of causal AI and why it's important to agentic AI. Then I'm going to put a paper out on the use cases. So I'm going to go through a whole array of, like, real live use cases
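Ranking the influences on an outcome, as described above, can also be sketched with a toy structural model. Everything here is invented for illustration: three candidate drivers with hard-coded effect sizes, one of which is irrelevant. Estimating each driver's effect by intervening on it (holding the others fixed) and sorting by that effect is a minimal picture of causal root-cause ranking; real toolkits must first discover or validate the structure from data.

```python
import random
import statistics

random.seed(2)

# Toy outcome with three candidate drivers; 'c' is deliberately irrelevant.
# Ranking drivers by how much the outcome moves under intervention on each
# one surfaces the real root cause and exposes the non-driver.

def outcome(a, b, c):
    return 3.0 * a + 0.5 * b + 0.0 * c + random.gauss(0, 0.1)

def effect(driver, n=2000):
    """Average change in outcome when one driver is flipped 0 -> 1."""
    base = {"a": 0.0, "b": 0.0, "c": 0.0}
    diffs = []
    for _ in range(n):
        lo = outcome(**base)
        hi = outcome(**{**base, driver: 1.0})
        diffs.append(hi - lo)
    return statistics.mean(diffs)

ranked = sorted("abc", key=effect, reverse=True)
print(ranked)   # ['a', 'b', 'c'] -- 'c' contributes nothing
```

In observational data, a non-driver like `c` can still look important if a hidden confounder links it to the outcome, which is why the intervention-based ranking, not the correlation, is what identifies the root cause.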
drawn from customers. Then I'm going to do a deep dive on the actual technology, so all these methods I'm referring to. And then finally I'll talk in more detail about the architecture of how this all comes together within an agentic system. I'm going to be collaborating with you and George Gilbert on all this. In fact, ending on this last chart here, just to give you a preview of where I think this is all heading: this agentic AI system, right? And you kind of alluded to this as we were talking through it. The LLMs are the source of this enterprise-wide intelligence, right? They're the company-wide, you know, place to go for generative AI and to get information. From there, though, you start building these domain-specific small language models, right? The S meaning small, but also specialized, sovereign, secure systems, wired together. And that's where I think you start to infuse knowledge through the knowledge graphs and graph RAG and all that stuff. And you're able to wire those domain-specific things into workflows and tasks. And then within those small language models, you create what Jimenos, one of the vendors that I've been talking with, calls a causal component model. Think of a microservices architecture, where you have a whole bunch of causal components that get wired together within these models, right? And if one causal component changes, it ripples through the rest. So you're wiring together a neural network, essentially, through microservices that can start to understand the cause-and-effect mechanisms, and therefore why things happen. That is what makes decision intelligence a reality. Decision intelligence means it helps you intelligently make decisions. No one can make a decision without cause and effect. Then that gets into the agents, right? And that's how you can figure out what to do, how to do it, and why to do it. And what's great about it is that it's an ecosystem.
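The "causal component model" idea above, where a change in one component ripples through the rest, can be sketched as a small dependency graph. The components and their wiring below are invented supply-chain placeholders, not from the vendor mentioned; the sketch just shows how, once cause-and-effect edges are explicit, the downstream impact of any change can be computed mechanically.

```python
# Sketch of a causal-component wiring: each component lists its direct
# causes, forming a DAG. A change in one component "ripples" to every
# component reachable downstream of it. Names are illustrative only.

components = {
    "supplier_delay": (),                    # no upstream causes
    "inventory":      ("supplier_delay",),   # supplier_delay -> inventory
    "fulfillment":    ("inventory",),
    "revenue":        ("fulfillment", "inventory"),
}

def downstream(changed):
    """Every component that a change in `changed` ripples through to."""
    affected = set()
    frontier = [changed]
    while frontier:
        node = frontier.pop()
        for name, causes in components.items():
            if node in causes and name not in affected:
                affected.add(name)
                frontier.append(name)
    return affected

print(sorted(downstream("supplier_delay")))
# ['fulfillment', 'inventory', 'revenue']
```

Like microservices, each component can be owned and updated independently; the explicit cause-and-effect edges are what let the system explain why a downstream metric moved.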
So as you infuse knowledge graphs into the small language models, and the causal mechanisms within those knowledge graphs, it learns, and then it feeds back to the LLM, to the company-wide intelligence, and makes that more intelligent, more accurate, less biased. And it's a feedback loop that just makes everything more and more intelligent. Yeah, we all agree. The closed-loop nature of it has to be there, because, again, you have to learn from all of the learnings as it learns, because it may see stuff that you don't see, and bring that back in, not just exit out. So, hey, thanks for coming on board. I'm really excited about this, excited to be partnering with you and with George and with Dave and the rest of the team on this, because I think, again, there's just a ton of learnings for us to, you know, really unpack here. So thanks. Absolutely, you bet, a lot of fun. And thank you for joining the Road to Intelligent Data Apps. Stay tuned for more research from Scott and the team as we really dive deep here on theCUBE, the leader in tech news and analysis. See you soon.