I appreciate everybody who has joined us this morning to learn about how you can supercharge your Inductive Automation SCADA system with some built-in AI. My name is Brian Thichen. I'm the Chief Revenue Officer here at Sorba.
I'd like to introduce Aldo Fronte. He's the CIO and co-founder. And then...
And more importantly, I'd love to introduce Travis Cox, who is the Chief Technology Evangelist with Inductive Automation. Travis, I'd love to have you provide an introduction about yourself and your role within the organization there, and then we can go into a little conversation about AI in the space. Yeah, thanks, Brian. Thanks for having me here.
So, yeah, I'm Travis Cox. I'm the Chief Technology Evangelist for Inductive Automation. I've been with the company for 20 years.
And over my career, it's really been focused on helping customers build the right architectures, integrate different systems, and ultimately apply best practices to solve the challenges that they have. And it's been a real joy to see over the years the technology options that are available that they can start using with their systems.
And it's really exciting here to explore a lot of that today. Awesome. Thank you so much.
So I think just to start off the conversation: AI is at the top of everybody's mind. Every C-suite is talking about it, regardless of what industry you're in. You know, new ChatGPT-style solutions are coming out.
People are inventing constantly. How do you see the role of AI and ML in the industrial landscape? Where do you see it being most impactful? Yeah, I think AI is going to play a crucial role, you know, in manufacturing and on the plant floor.
And I mean, it's really to get more insights out of that data, right? To do what it promises, to bring all kinds of data together. And that's not just process data; it's process data, it's quality data, it's MES-related data, it's machine and equipment information, documentation, operator experience. All these things feed into systems that can provide more insights. And these insights are going to help companies do things like predictive maintenance, or optimizing their production processes, or improving quality control, whatever it might be. But it is going to play a crucial role. And because there's so much out there, so much data, it's very difficult for people to make sense of it all. And AI is really good at learning on that data over time and helping you hone in on what you should be doing. Yeah, I couldn't agree more.
I mean, there's so much data, and computers and machines are really built to solve massive data problems, looking for the patterns that then allow subject matter experts to dive in and make true decisions on it. While there's clearly a ton of benefit, as we can all see coming in the future, what do you see as some of the biggest challenges with customers adopting AI into their facilities, mills, plants, et cetera? Yeah, I do think there are some significant challenges.
And a lot of it's not technology, right? A lot of it is more political, cultural. So I think number one is a business has to have a use case. They have to have something they can rally around.
What are they trying to solve? What are they going to use AI to do? And start small with that, right?
Find a use case where they're going to get an ROI, get a win, and put these systems in place to see that result. And I think for a lot of companies, it's hard to identify that use case, or sometimes they think too large, where they're trying to eat the entire elephant rather than starting small. So I think that is one big area.
It's really being clear about the use case. But secondly, it has to come from leadership. Companies have to drive, from leadership all the way down, that this is where we want to go, right? This is what we want to do. This is how we're going to use these systems, because that gets the teams rallied around that business objective.
And they're all working towards that common goal, right? If you don't have that leadership and that culture change, it is always us versus them, right? And we've seen it so much.
Look at the OT/IT divide, right? For a lot of companies, it's still there. It's still very prevalent.
You know, it's "don't touch my systems" from the OT side. And IT is getting pressure from cybersecurity and from the business to do more with that data.
And it's just not going to work, right, unless we can drive that change all the way through. So to me, that is the number one challenge. The companies that are going to be successful in this journey are the ones that solve that, and that make this, from leadership down, part of the culture of the company. And I think that will be very, very important.
The last one I want to point out real quick, and I think we'll get into more of this today, is that you can't just drive AI from the top, right? We talked about the culture, bringing the teams together; that's really important.
But you also have to have the right foundation from the bottom, right? You put the right foundation in that ultimately allows us to unify the systems and bring that data to higher levels, and get the people who are the domain experts to model the data and understand what they're trying to do closest to the source, right?
Because if you can do that, then as you bring these teams together, it's much easier to leverage that data and do more with it. So those, to me, are the three big challenges, but really it's that culture, I think, Brian. Yeah, I agree 100%.
I think the two big key points you hit on that really resonate with me are, one, change management is always a challenge in deploying any technology, not just AI: getting people on board with what those changes will look like and how they impact production from the plant floor all the way up to the C-suite. And two, enabling subject matter experts to do what they do, rather than bringing in bespoke individuals who have exceptional skill sets but don't necessarily understand the day-to-day operations. The people in the plant know exactly where to hit that motor in the right spot to make it stop doing that, because they've been there for 30 years.
It's enabling that knowledge to be put into the software, not having it be people who come in with a data science background but don't understand the process of making things. Where do you see the most immediate impact from deploying AI on the plant floor, or in a mill, a plant, refineries, et cetera?
Yeah, I mean, I think there are a couple of big use cases that are occurring more and more. One would be predicting machine failures, right? I think that's a pretty big one. For a lot of companies, downtime, of course, is huge, right?
You want to reduce that as much as possible. And so it's being able to really get the most use out of your equipment, and to predict when things are going to fail or have issues, so that we can get ahead of it and plan properly without having to keep very expensive products on the shelf, right?
Inventory. So it's that balance. And I think AI really helps in that arena. And it's one where you can definitely start small, right? You start with your most critical machines that you care about, that you want to look at.
So I think that's definitely one area. The other, though, that I think is more important is tied to the connected worker, the people who are running these operations. Over my career, that's always been near and dear to my heart, because those are the people we're designing systems for, right?
We're helping them do their job better. And unfortunately, they get pinned for everything, right?
Whether they do something right or do something wrong, they're always the ones who are there, the ones who are looked at very critically.
And so if we can help them do their job, augment them with the right information and data so that they're not sitting at a panel HMI all day, or in a control room, so those things are taken care of and they can start focusing on the bigger issues, in terms of how they know these processes should run. To me, that is an area where I get really excited about AI, because if you look at all the tools we use in our personal lives, right? I mean, look at Gemini with Google, look at Apple and their AI assistant, all these things; they're there to help you, right?
You just ask basic questions and get really great responses back. And I think that will start coming into the platform more and more, right? Where that operator is able to ask these questions, get good answers, and have it learn, you know, and together they're able to solve challenges that would have been very difficult to do before. So I get excited about that use case in particular.
It is not one you can just put in tomorrow, right? There's a lot of work that has to be done to get there. But I think that's going to be the most impactful area, because it's been proven over the years, right?
Especially with OEE; downtime was huge. And there was a lot of resistance from operators about "let's not put it in, because it's a big brother thing." But when they realized what it could do, and that with the information it provides they could use their expertise to run that system better, it was proven there, right? This is taking that to the next level, right?
Or where we can do the same thing. Absolutely. So, last question here before we get into the meat of what people are here to see.
How critical do you think integrating tools into SCADA is? How does this help operators and engineers maximize the impact of AI and ML within their systems? Yeah.
So this is one where, when you see a lot of these projects going in, it's a lot of "let's introduce new tools on top of other things," where people are writing code, or programming, or trying to clean the data that was already there in order to use it more. I think you can be successful that way. But ultimately, let's face it, SCADA is where all the operations happen, on that layer, layer three. That is where all the people are; those are the tools and systems they have to have in place.
We cannot run these systems without a SCADA system, right? We know this. And of course, in order to run that, we have to have the data.
And what we should stop doing is thinking about the data as specifically just for SCADA. We need it for those operations, but if we can model our data properly, bring it into SCADA, of course, but then have it available at higher levels, then we can really win. So I think SCADA becomes that point where digital transformation can't happen without the right piece in place, right?
One that is based on open standards, that ultimately integrates with lots of tools, because one thing can't serve every purpose, right? One that unifies the plant floor and is deployed in the modern way that IT needs from a management and security standpoint. So to me, with SCADA, if you look at those domain experts, they're the ones who can actually figure out how we should standardize our data across the enterprise.
And if we do that here, building a superior SCADA system and making that data accessible, we are going to win, right? We're going to be able to accomplish this. So I think it all starts with SCADA, to be honest. But it's not just SCADA in the old sense of the word, right?
It's a unified SCADA. It's one that is designed to integrate with other tools. That's exactly why we're here today, because this is how these worlds play so well together. I couldn't agree more. And again, when you have your domain experts working in SCADA every day, you can provide additional tools they can use on the fly to create value and insight immediately, rather than having to move between panes of glass.
You really can derive value as quickly as possible, see that impact, and then continue to iterate. Well, with that, Travis, if you could talk a little bit about Ignition for anybody on the call who is unfamiliar with Inductive Automation and the Ignition product, and then we'll get into what Sorba does and into the demo.
Yeah, so I'll keep this real brief. We're Inductive Automation. We are an independently owned software company.
We were founded back in 2003 by systems integrators, so we lived and breathed the world of solving challenges for customers, and that's still very much in the DNA of our company today. And we provide software that is rooted in OT, that helps from a SCADA operations standpoint, but that can connect all these systems together. We have a very diversified customer base across all verticals, a very strong integrator program, and there are a lot of good case studies and references on the website.
Now, our product Ignition is a unified industrial integration platform. It's one that allows you, of course, to build a SCADA system or an MES system, or, from an IoT standpoint, get data where it needs to go. But it's a platform that allows for a lot of scalability, not just in its licensing model being unlimited, so that we can add more connections to devices, more tags, more information, and build more projects and screens for the plant floor, but also in that it has an SDK, so we can extend the functionality further and allow direct integration with other tools, along with using IT-standard technologies and open standards like OPC UA, MQTT, SQL, REST, Kafka, and many other things that allow data to integrate between different services.
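As a concrete taste of what those open standards look like in practice, here is a minimal Python sketch of subscribing to Sparkplug B data over MQTT; the broker address is a hypothetical placeholder, and fully decoding the protobuf payloads would require a Sparkplug B library such as Eclipse Tahu.

```python
# Minimal Sparkplug-over-MQTT subscriber sketch (paho-mqtt 1.x style client;
# broker hostname is a hypothetical placeholder).
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # Sparkplug B topics follow spBv1.0/<group>/<msg_type>/<edge_node>[/<device>]
    client.subscribe("spBv1.0/#")

def on_message(client, userdata, msg):
    # Payloads are Sparkplug B protobuf; decoding needs the Sparkplug schema
    # (e.g., Eclipse Tahu). Here we just show that data is arriving.
    print(msg.topic, len(msg.payload), "bytes")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.example.local", 1883)
client.loop_forever()
```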
So all that is Ignition. It's very modern from a technology standpoint, cross-platform, with industrial-strength security and stability; it can be deployed any way customers want to deploy it and can be managed really easily. And really, it's a tool that OT uses and understands, but IT also understands.
We're speaking the same language, and I think that's really important as we look into the future here. So that's a real short introduction on us and Ignition. I'll kick it back to you, Brian.
Awesome. So appreciate that. So who is Sorba and who's using it? So very similar to the origins of Inductive, the company was founded by people from industry.
Aldo and Yandi, the two co-founders of the company, come from industrial automation; myself, 20-plus years in the space. Really, the goal and purpose of Sorba was to enable subject matter experts, controls engineers, process engineers, people on the plant floor, to have a solution they can deploy. So we have solutions deployed across the globe.
We've been around for nine years now, really tackling things in every different industry out there. How we accomplish this is by delivering an end-to-end solution for how data comes into the system, enabling the ability to connect to any data source, whether that's up in databases that reside in the cloud, or historians on site, all the way down to PLCs and other native level 0 and level 1 devices that are out there.
The next piece is really creating the no-code environment. So you don't have to have data science experience to build AI models in Sorba. And we'll demonstrate later on the ability to let the software pick the best model for you, deploy it, and manage it en masse.
There's also the AutoML on the back end and the MLOps, so that you can continue to iterate without having to be intimately involved with that data, letting it do everything in an automatic fashion so you can focus on your expertise and running that facility. We can deploy on-prem, in the cloud, or hybrid. We are agnostic to devices. We are a pure software play that can deploy across all those things. And of course, the key here is the no-code.
We offer all the different pieces that are needed to get to that end-to-end solution. So from that data ingestion all the way to the operations on the back end. Additionally, we offer video analytics as well.
If you have Python models that you want to bring in, we have an interface for you to do that as well. If you are familiar with using Node-RED, there's an interface. But at the end of the day, subject matter experts like yourself can use drop-down menus and drive through a wizard environment to create very robust AI and ML models, deployed wherever you feel the compute is required.
Additionally, we have some built-in products that come off the shelf: performance analysis and OEE, vibration, creating a digital twin (we look at a digital twin from a forecasting perspective, looking at your historic data and how a machine performed, and then how you might want to see the machine perform if you change certain variables), and advanced predictive maintenance.
And then of course, the core of what we do: the choose-your-own-adventure APC, where you take the data you have within your system, pick the values that impact a process or a model, and then build that model and deploy it at scale.

As we all know, everybody faces the same challenges out there in the space: quality control; efficiency; supply chain, where you have bottlenecks or a lack of certain supplies coming in and have to pivot your production; equipment maintenance, as Travis has keyed in on; and of course regulatory compliance. If you're looking at water treatment, you need to make sure your dosing is correct. There are all sorts of things we're constantly being challenged with in the manufacturing and production space, and using AI and ML can really help solidify some of those things.

One of our key customers is in the brewing space. A brief little story before I turn it over to Aldo to go into the demo: we were challenged with reducing filtration within the brewing process. We were given three variables to control, and within a five-week period we were able to build and deploy a control model in closed-loop control. So we were writing back to the PLC to create the value, allowing a very, very efficient company, from an engineering perspective, to create even more exceptional value, letting the equipment operate at its peak envelope. Being able to deploy AI and ML not only reduced that filtration substantially; we increased the amount of beer throughput, and we also reduced the amount of energy required to pump across that.

And so it just allowed the system to look at all of its historic data, and all the knowledge that comes from the subject matter experts inside these systems, to help train these models to perform at that optimal level at all times. With that, I'm going to turn it over to Aldo to give you a quick overview of the two pieces of our software, and then dive into, one, the Sorba standalone software solution, and then the piece where you can operate all this with Inductive Automation's Ignition platform. Aldo?
Thanks. Thanks, Brian. So what I want to talk about today is how we can take AI and make it operational in real time. A machine learning algorithm on its own does not provide value unless you have all the other components that are part of the entire system.
So the way we've designed Sorba is in two separate areas. Our DataOps, or data operations, allows you to pull in data from any source, whether it's from an automation PLC system, SCADA, sensors, or OPC UA; we use connectors to pull data in. And then we also have very powerful IoT connectors that point to different endpoints: MQTT, Sparkplug, SQL, historians, as well as cloud services from Azure, AWS, and of course, Ignition. So depending on where that data resides, we can pull it in and unify it, and as part of that unification and importing, we can model the data and contextualize it so it simulates, or is similar to, how your process is laid out.
In addition, we auto-cleanse. Built into Sorba DataOps is an auto-ETL, and that auto-ETL automatically cleanses the data for you.
Over 50% of the time a machine learning engineer spends is on data cleansing; that's where a lot of the heavy lifting is. So we've automated that whole process for you. Then let's move into the MLOps, or machine learning operations. Here's where you can build machine learning models.
You can then version-manage them, and you pull in the training sets, either from our built-in persistent time-series database or from flat files.
And then we additionally process the data for improved data cleansing. That data cleansing on the MLOps may be specific to a data set or a model. Once that data is cleansed and imported in, then we go into the step of the auto ML.
So AutoML is our own proprietary machine learning application; we've actually patented it. It incorporates many different techniques, including generative AI to synthetically create data to improve the model's performance. The AutoML takes care of all of the engineering: splitting the data, balancing, dimensionality reduction, pre-processing, selecting the best algorithm, auto-tuning the hyperparameters, and the post-processing. It's very involved, and people who have done any type of machine learning development understand there are a lot of complexities.
So through a few simple clicks, you can generate a machine learning model, and the output of that model generates performance metrics with which you can measure how confident the model is and how well it can perform. We've also introduced our very own proprietary advanced process controller and optimization. This is a very powerful tool
that allows you to take a machine learning algorithm and, based on a goal, whether that is to improve efficiency, reduce energy, or improve quality, you tell the model the goal you want to hit. You identify the control parameters, and the model will look for optimal conditions and then recommend set point changes: either through the HMI, where the human in the loop can make those changes, or through our DataOps, sending that information back to the controller and automatically adjusting those set points in real time. So this is a truly operational, unified machine learning platform for industrial manufacturing. Next slide, Brian.
So the other thing that Brian mentioned is that our software can actually deploy on lightweight CPU hardware, as well as GPU. So you can actually run algorithms right on very lightweight modules. We have some OEMs that plug the modules into their control panel and run algorithms that interact with the PLC.
You can also run it in a data center, on a computer, either as a virtual machine or as a Docker container. So it's very versatile in how you run the application. So I'm going to jump into the software at this point.
Let me share my screen. Let's see here. Okay. Share. Move some of this around.
Minimize that. All right. Okay. So the first thing I would like to do is talk about how you build a machine learning model within the Sorba platform, just to show you how easy it is, and then compare that to how we can build a model within Ignition as well. So the first step: the application is deployed through a web browser, and even this can be pointing to an on-prem solution completely running on your premises, or in the cloud. And once you open up the web page, you then create an asset. So we adhere to the unified namespace.
So you can synchronize to any unified namespace and auto-sync, and either create this hierarchy automatically or create assets manually yourself. An asset consists of attributes that you can configure: alarms, dashboards, flows, tags, notifications, and scripts, where we can also build our own Python scripts. So you can do some customization or build your own models. But to pull the data in, you configure a channel.
So within the product, we offer built-in drivers for all of the automation protocols you have available to you. And then you can also add IoT connectors, which are very powerful for pulling in data from different higher-level systems: for example, MQTT, big data application services from Azure, and web services. Some sensor companies offer their data through their cloud, so we can connect through that web service and pull that data in to use in an algorithm.
And of course there are other historians we can pull from, and Kafka. So once you specify the source, you then specify the destination where it lands; it could land within the Sorba real-time or historical data. And now that data, as it comes into Sorba, can be used to train a model.
On its own, this DataOps can be used simply to move data in and out of different endpoints. So it's a very powerful tool for managing, contextualizing, and processing data. The next step, once you define an asset and some tags, is to go into the machine learning trainer. The trainer itself has an inventory of training sets; projects, which are collections of different algorithms; predictions, where you can predict based on flat files or run offline predictions as well; and work items, where you can import work orders.
And you can train and classify models based on a specific work order or an event. For example, if you had a bearing failure and you had that work order for that bearing failure, the system will then train it on that bearing failure and give you a lead time to that failure as well. Now, for the more advanced machine learning engineers, you can build your own custom models and then import your code inside this framework. And you separate it based on the training, offline prediction, runtime code as well.
So now, utilizing our framework, you can pull in your own custom ML algorithms, and then, using our auto-ETL, automatically cleanse that data. So it's a very powerful tool, even for advanced users. Once you import the data in, and we're looking at some motor data here, you can then explore the data:
simply look at the data and see what it looks like, zoom in, look at the autocorrelation, the data quality, and so on. The one thing I want to point out here is that when you specify a data set, you can either pull it in from a flat file or from within our hot data, which is our persistent time-series database. And we automatically do correlation analysis,
statistical outlier analysis, contribution analysis, and data drift analysis. And based on that, you can either choose to include that data or filter it out.
And I'll show you here real quickly. When you pull that data in, looking at the data quality, it will do a statistical outlier analysis and show you which inputs either passed or failed. And we also can show you where the outliers are located.
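As a rough, generic analogue of these checks (not Sorba's proprietary auto-ETL), the outlier pass/fail just described, and the drift analysis described next, can be sketched like this; the column names are hypothetical:

```python
import numpy as np
import pandas as pd

def outlier_mask(series, z_thresh=3.0):
    # Flag rows whose z-score exceeds the threshold (statistical outliers)
    z = (series - series.mean()) / (series.std(ddof=0) + 1e-9)
    return z.abs() > z_thresh

def drift_score(series, window=500):
    # Compare the newest window against the rest of the history; a large
    # normalized mean shift suggests the distribution is drifting
    recent, history = series.tail(window), series.head(len(series) - window)
    return abs(recent.mean() - history.mean()) / (history.std(ddof=0) + 1e-9)

# Stand-in motor data set with hypothetical columns
df = pd.DataFrame(np.random.randn(2000, 3), columns=["current", "voltage", "torque"])
for col in df.columns:
    print(col, "outliers:", int(outlier_mask(df[col]).sum()),
          "drift:", round(drift_score(df[col]), 3))
```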
And the data drift analysis tells you how uniform the data is. You may have seasonality, or operational conditions where the data drifts; that could indicate something going on with the data that could throw a prediction off. So it's really important to understand the drift and the quality of the data before you can count on getting a good model to use. Once you've done all that, you then go into the projects. And within this data set that I just imported, I'm going to create a model.
So the first thing you do is you pick or give it a name. We'll just call it Anomaly 5. These are the inputs for that data set. Within this configuration, you can further filter based on the model.
So, for example, I can apply rules. So if I'm looking at a motor and I'm going to build an anomaly detection algorithm for that motor, I might want to filter it to say I only want to train when the motor's running. So add a rule that looks at the speed, and if it's above, you know, five or whatever the value is, it'll filter any data that's below five.
You can also add synthetic tags, where in the event that you don't have a lot of inputs, you can take very few inputs and synthetically generate more information for the models, like a moving mean, standard deviation, and so on. So all this information gets processed. And then the next step is to pick the type of model you want to train. We offer, as Brian mentioned, clustering, classification, regression or inferencing, our optimization APC, which I'll show you in a second, our digital twin for building simulations or replicas of your machine or process, and forecasting.
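Taken together, the rule filter and synthetic tags above amount to something like this small pandas sketch; the column names, the speed threshold of five, and the window of 10 are taken from the example, while the helper itself is hypothetical:

```python
import numpy as np
import pandas as pd

def prepare_training_set(df):
    # Rule filter: only keep rows where the motor is actually running
    running = df[df["speed"] > 5.0].copy()
    # Synthetic tags: derive extra features from a few raw inputs
    for col in ["current", "vibration"]:
        running[col + "_mean_10"] = running[col].rolling(10).mean()
        running[col + "_std_10"] = running[col].rolling(10).std()
    # Drop rows where the rolling windows are still incomplete
    return running.dropna()

raw = pd.DataFrame({"speed": np.random.uniform(0, 60, 1000),
                    "current": np.random.randn(1000),
                    "vibration": np.random.randn(1000)})
print(prepare_training_set(raw).head())
```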
So by clicking the type of machine learning model, the default is auto. That is our auto ML, which does pretty much all the heavy lifting for you. You don't really have to do anything. But we also, if you're a machine learning engineer, you can pick from the list of different categories of machine learning algorithms or pick the custom model that you've imported into the system.
And there's a lot of technology; with machine learning, this technology changes almost by the month, so it's very hard to build in all the different versions of different types of machine learning. We're looking at building in transformers now,
and reinforcement learning, to add more capability out of the box. But for people who want to build their own custom models, you can definitely use our platform. And the nice thing about it is when you import that model, it's now available for other users to use in their training as well. So let me just pick one here, K-Means. And then if you click the advanced settings, again, if you're a machine learning engineer, you can go in and tweak these.
I'm not a machine learning engineer, so I don't even touch these parameters. I let the AutoML do its thing. You click Next and Next, and now it's going through the process of analyzing the data.
And you can see it as it goes through the workflow: the data cleansing, all of the dimensionality reduction, the pre-processing. And once it's all done, and I'll show you one here just for the sake of time, the result of the training, when you click on the anomaly detection tab, shows you on this trend color bars indicating where the anomalies are located. There's a threshold of 80%: as the anomaly score gets above 80%, it will indicate an anomaly.
So here it's pretty obvious there's something going on, but maybe in this area it might not be so obvious. You can zoom in and see there's a little disturbance there that it picked up. Again, it's based on the data: it may catch something that's very, very fine, or it could be catching something more generalized.
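For intuition, a generic K-Means anomaly score (the AutoML pipeline itself is proprietary) can be built by scoring each new point by its distance from the learned "normal" clusters, with the 80% threshold from the demo:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

X_train = np.random.randn(5000, 4)  # stand-in for healthy motor data
scaler = StandardScaler().fit(X_train)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scaler.transform(X_train))

# Distance to the nearest centroid, rescaled to a 0-100 "anomaly score"
train_dist = km.transform(scaler.transform(X_train)).min(axis=1)
ref = np.percentile(train_dist, 99)  # distance treated as "fully anomalous"

def anomaly_score(X_new):
    dist = km.transform(scaler.transform(X_new)).min(axis=1)
    return np.clip(100.0 * dist / ref, 0, 100)

scores = anomaly_score(np.random.randn(10, 4) * 3)
print(scores.round(1), "alarm:", bool((scores > 80).any()))  # 80% threshold
```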
At the end of the day, we always say the data will dictate how well you can predict or indicate a problem. The tag ranking is a very powerful tool that gives you a ranking, or contribution, of each input. This ranking gives you a probability for your root cause analysis:
what could be the root cause of this anomaly? And this ranking shows that power is ranked the highest, second is AC voltage, and third is torque.
So this is a really powerful tool when you put this in real time. And as these problems occur, now an operator doesn't have to do the whack-a-mole and try to figure out what's causing this alarm. The tag ranking will automatically give them that information that they can then go in, do an investigation, and then either correct the problem or, you know, do some preventative maintenance.
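One simple illustrative take on such a ranking (Sorba's actual contribution analysis is part of its AutoML) is to attribute the anomaly to each input by its normalized deviation from the training baseline:

```python
import numpy as np

def tag_ranking(x_row, train_mean, train_std, names):
    # Each input's share of the total deviation from "normal"
    z = np.abs((x_row - train_mean) / (train_std + 1e-9))
    contrib = z / z.sum()
    return sorted(zip(names, contrib.round(3)), key=lambda t: -t[1])

names = ["power", "ac_voltage", "torque", "frequency"]  # hypothetical inputs
print(tag_ranking(np.array([9.0, 4.0, 2.0, 0.5]),
                  np.zeros(4), np.ones(4), names))
# -> power ranked first, then AC voltage, then torque, as in the demo
```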
And then, of course, there are the quality metrics. The other one I want to show you is the APC, which I believe is very, very powerful.
So I have some chemical data here that I'm going to train to predict, or optimize, the chemical dosing. With APC, you have to know what your goal is. So let's create an APC: taking this chemical data, I'll go next here, and I'm going to select the model type of optimization.
And let me know, Brian, if I'm running close to the time limit. So in your configuration of the APC, you have to specify your optimization variable. In this chemical dosing application, where I'm dosing chemicals to reduce phosphorus in the process, I'm going to pick my chemical dosing pump as my optimization variable, and I'm going to minimize that.
The next step is to identify what independent variables the system has, things like influent flow. Independent variables can be environmental conditions that you have no control over. So I'm going to select my influent flow as my independent variable. And then for my control variable, I will control using the chemical dose ratio of the chemical pump.
And then the other inputs will be analyzed. You can either force the model to include them or not. And then at the bottom here, what we'll do is do a clustering analysis.
And what that does is, if there are some abnormal conditions, or the behavior of the data is different, the APC will automatically create different APCs for the different behaviors. So you can either let the clustering analysis do that for you, or you can pick a tag. Where this is very powerful is in a batch process, where every step of the batch would have its own optimization model.
And by selecting the tag for the batch step, the system will automatically generate multiple models, an ensemble of these machine learning models into one agent for you. So you click next, next. And then what you will see here is the output where the regression line will indicate where the optimal conditions are based on this APC.
So this green line, since we're minimizing chemical flow, will look for optimal conditions, those green dots, and it's going to build a digital twin around those optimal conditions. And the APC will then provide you a controller to tell you how to control the set points. And the critical part here is your optimal high and low limits.
So if an operator were to use this, they would look at their high and low limits and make adjustments within those optimal limits to provide minimized chemical dosing. And the metrics in the property section will even tell you your potential savings from reducing chemical usage. So this created two different clusters and two different models.
In one cluster, it's approximately a 5% reduction, and in the second cluster, it's approximately 8%. So it's a very, very powerful tool just to analyze and see if there is any opportunity for increasing efficiency or improving the process in any way.
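As a very rough sketch of the idea (an illustration under stated assumptions, not Sorba's patented optimizer): split the data into operating regimes, treat the low-dosing points in each regime as the optimal envelope, and report control-variable limits and potential savings:

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "influent_flow": rng.normal(100, 15, 4000),   # independent variable
    "dose_ratio": rng.uniform(0.5, 1.5, 4000),    # control variable
})
df["dosing_pump"] = 0.8 * df["dose_ratio"] * df["influent_flow"] + rng.normal(0, 3, 4000)

# Cluster into operating regimes, then find the "green dot" optimal points
df["regime"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    df[["influent_flow"]])
for r, grp in df.groupby("regime"):
    optimal = grp[grp["dosing_pump"] <= grp["dosing_pump"].quantile(0.10)]
    lo, hi = optimal["dose_ratio"].min(), optimal["dose_ratio"].max()
    savings = 1 - optimal["dosing_pump"].mean() / grp["dosing_pump"].mean()
    print("regime %d: optimal dose_ratio %.2f-%.2f, ~%.0f%% potential reduction"
          % (r, lo, hi, 100 * savings))
```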
All right. One last part I'll show you, trying to be cognizant of the time here. I took a model that I trained; okay, so I've done the training.
Now I'm going to make it operational. I'm going to put it in a runtime mode to allow me to make real time predictions that I could use in a process or a machine. So let's go to my motor asset.
And in the asset, in the model section, you can then add an instance. And here's where the MLOps comes in. You will then give it a name.
You select your algorithm from the project library: I'll go into my project name, select my model and the version of that model, and it identifies the type of model and the version. And then there's the auto learning; this is the other part of the MLOps that's very powerful.
Part of maintaining machine learning is that you have to retrain. Conditions change, and the model needs to adapt, whether to operational conditions, environmental conditions, whatever the case may be. That process typically takes another engineer. So the auto learning, once you enable it, automatically creates a way to trigger retraining. That trigger can happen manually, it can be based on a schedule, or it can be based on the prediction error.
So as that prediction error increases, you can trigger the auto learning to improve the model. And after that trigger occurs, it takes that data, the last 10 minutes in this case (you can adjust that), sends it to the ML trainer, trains the model, and if the performance metrics are better, it replaces it. But if not, it rolls back to the current one, which always guarantees you the best-performing model. So this is the way you pair operational MLOps with your DataOps to get good results and good predictions.
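The retrain-and-roll-back pattern just described boils down to logic like this minimal sketch (function names here are hypothetical; the real logic lives inside Sorba's MLOps):

```python
def auto_learn(current_model, recent_data, train_fn, score_fn, error_threshold=0.2):
    """Retrain on recent data when prediction error degrades; keep whichever
    model scores better, so the best performer always stays deployed."""
    current_error = score_fn(current_model, recent_data)
    if current_error < error_threshold:
        return current_model          # no trigger: error is still acceptable
    candidate = train_fn(recent_data) # e.g., the last 10 minutes of data
    if score_fn(candidate, recent_data) < current_error:
        return candidate              # candidate performs better: promote it
    return current_model              # otherwise roll back to the current model
```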
With that done, the last thing you have to do is map the data, and we have a way to auto-detect tags. Then for the outputs: every piece of information the model provides is a tag, or variable. So the prediction lands in a tag, and you can send that tag to a dashboard, you can send it to Ignition, you can send it to MQTT, wherever you want that data to reside. That's the beauty of having tags: you can manage how that information gets consumed. So I can create those automatically, and I save and apply changes.
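For example, pushing a prediction tag out over MQTT so a dashboard or Ignition can consume it might look like this (topic and broker names are hypothetical placeholders):

```python
import json
import paho.mqtt.client as mqtt

client = mqtt.Client()  # paho-mqtt 1.x style client
client.connect("broker.example.local", 1883)
client.loop_start()  # background network loop so the publish completes

payload = {"anomaly_score": 52.4, "prediction_drift": 0.03}
info = client.publish("plant/motor1/ml/outputs", json.dumps(payload), qos=1)
info.wait_for_publish()

client.loop_stop()
client.disconnect()
```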
So what I'm doing now is putting that agent in real-time mode. There's a window, currently defaulted to 10: every 10 rows of data that flow into the system, it will analyze in real time and give you an output. We have a built-in template, and you can start to analyze the algorithm, the anomaly score. There's the data coming from the motor, and it'll analyze it here in a minute as soon as it goes online.
There you go. So that 50% that it's trending at right now is a normal condition. As it increases toward 80%, when it detects a problem, it will then indicate an alarm, or I should say an anomaly.
The tag ranking in real time can give you the contribution: as that analysis occurs, it will tell you the root cause of the anomaly, if there were one. In this case, there isn't any anomaly.
We also give you the prediction drift and the data drift detection as well. So if that data does start to deviate, then you can easily indicate that, and you can either trigger the auto learning, or you can go in and do your own retraining as well. All right.
So now you've seen how you can do it in Sorba. Let's jump into Ignition. The first step to enable Ignition
to utilize Sorba is to install a module. It's called the Sorba Integration Module, and it gives you access to all of the different algorithm types that we offer.
So once you have that configured, you open up your Designer, and I'll just minimize some of these right now. The first thing you do is go into your UDT tab. What we've done is created UDT files, and you import them.
We provide them for you, and there's a different UDT for each type of algorithm. So once you import those UDTs, you can create instances from them. So I go into my tags, and I'll create an instance.
Let's do the clustering model. In that clustering model, I'll give it a name; I'm going to call it motor1.
Let's create another one, a regression one as well, for power. So let's do regression, and call this motor power.
So how do we set this up? The first thing you do is go into the motor tag. It creates several folders.
So within those folders, you have a configuration folder. And inside the configuration folder, I'll speed this up a bit. I know I'm running out of time. How much time do I have, Brian? You're good.
Good? All right. Yeah.
So we'll go into the configuration of the motor, and you have several parameters. So in order to initiate the training, you have to give it a start date and an end date for that training set. So you go to the start date.
And by the way, for the start date, you can specify any time here. Let me just pick a small data set for the sake of time, just a few minutes.
This is AM5. Set the end time, and we'll do it until now. Okay. And then the last piece in order to start the training: we have an output message that tells you where it's at right now.
Once I enable that Initialize Training, what the system will do is take that data set, send it to Sorba, create the model, train it, and then create all the outputs automatically for you. The same ones you saw in Sorba, it will create inside that UDT, and you can then use them for displays, alarms, or events, whatever you want to do with that data. So I'll click the initialize and turn it on.
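For reference, the same manual steps could be scripted inside Ignition with its standard system.tag functions; this Jython sketch assumes hypothetical tag paths modeled on the demo's UDT:

```python
# Runs inside Ignition's scripting environment (Jython); tag paths are
# assumptions based on the demo UDT, not the module's documented names.
base = "[default]motor1/configuration"
end = system.date.now()
start = system.date.addMinutes(end, -30)  # a small training window

system.tag.writeBlocking(
    [base + "/trainStartDate", base + "/trainEndDate", base + "/initializeTraining"],
    [start, end, True],
)

# Poll the output message to watch the training status
status = system.tag.readBlocking([base + "/outputMessage"])[0].value
print(status)
```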
Here in a second, you'll see that the message will go into a training mode saying "training the model." I can see it here. Let me go in and see if I picked... so there may not be any data there. Let me just quickly change the date and time. The joys of doing live demos. One of the things I wanted to highlight while I'm going through this: when you build out a model, one of the things I think is super important, depending on what industry you're in, whether you build a model around a VFD or a motor, or you're looking at something like, say, the oil and gas space where you have to replicate, is that we have the ability to build classes of models that you can then deploy at scale. So say you build an ESP model and you have 300 or 400 of these ESPs; you can take that one application you built and deploy it at scale.
And inside that model, you'll also have the ability to have the auto-retraining, and the triggering for that retraining, built into that application. So as new data comes in, it'll continue to do that on its own. You can kind of set and forget that application out there, let it run, and pull the necessary data back into your SCADA system.
So those are some of the things, again, as systems integrators, where we've really gone through all the different steps of the value people need in the plants, in the mills, in the field, to deploy effectively and efficiently so you can do these at scale. Yeah, it sort of jumped the gun; it actually did the training. It was pretty quick, I guess, because there's very little data.
But if I go back to my MLOps, you'll see that it did create the model and it passed. And now, after this point, we have the outputs. So now you can generate a template or a screen. Let's just go in here. And this is my data.
And by the way, the data, I'm actually taking the data from a motor, and I'm publishing that data to Sparkplug. And so Ignition is subscribing to that data. You can see here, this is the data coming from the Siemens drive.
And this is the data that I'm analyzing. And so when you go back to your model, Now you can get your outputs and your anomaly score, which is probably going to give you 100% because there's probably not a lot of data that it could give you a good analysis of the anomaly. So it's going to give you a high or an anomalous score. But you can see in addition to the anomaly score, you have all of the other attributes to that model exposed to you within Ignition.
The data drift. the actual prediction drift and the tag ranking as well. So every input, let's say the current, you can now, there's your current tag ranking, there is your frequency and so on. So now you can take this data, this is being generated by the model, sent to ignition in real time, and it's slowly coming down, but Eventually, if you had enough data, it would give you normal condition. There's no real problem with the data right now.
What I will do is generate that data. I also want to show you, if I have time, actually, let me go into my demo, because I am running out of time. So what I've got here is a Perspective screen I built that shows you the different types of things.
Here I'm looking at the anomaly for that motor, and I'm also trending the power and the prediction. The green line is the predicted power and the orange line is the actual power itself. And here is an advanced process controller built within Ignition. So now Ignition can act as an advanced APC to do control or make recommendations.
So here the set point is this line, telling the system how I should adjust my set point to reduce power on the motor. So real quickly, Brian, give me one last thing here and then we'll hand it back off. I'm going to create some disturbance in the system, and hopefully it's going to detect an anomaly.
So as the algorithm analyzes it, it'll start to look at all of the inputs, and, there it goes, it's starting to creep up. So it's detecting an anomaly, and the indicator tells us I've now got a probable unbalanced condition, and so on. You can see even the prediction of the power changed; the predicted power is following the actual power.
So this model is doing a pretty good job predicting power. Again, every application and use case is different. At the end of the day, it's all about the data. I think I'm pretty much out of time, and I know you want some time for questions, so I'm going to pass it back to you, Brian. Yeah, absolutely.
So a lot of questions coming in. I've been trying to answer as many of them as I can while we've been providing the demo. One of the questions that came in that's a little above my pay grade from a technical perspective is how is the classification model being trained to handle imbalanced data? And what methods can be used to adjust the decision threshold?
Aldo, I'll kick that over to you. I'll answer it the best I can. Part of our AutoML is the balancing, and depending on the performance and the metrics, it will make adjustments internally. Now, you can change some of those hyperparameters; it's up to you.
But that's the whole beauty of the AutoML: it will modify all the internal hyperparameters and settings to make sure you have the balancing correct, the splitting correct, and the hyperparameters as correct as the model can make them, to give you the best performance metrics. All right. Let me see here.
What is the typical sample rate required for usable ML on a motor application? Too slow and you lose important behavior; too fast and you overload the SCADA bandwidth. In general, you know, what is the typical data frequency we need to get going?
I know I answered in one of the questions that we've done it with as little as two days of data, but what is the normal amount of data we try to look for to build a model? I'm sure you've heard this answer before, but... it depends.
Every application is different. But typically, like you said, we can do it with a couple of days of data. It depends on the fidelity: if it's data coming in once an hour, obviously you need a lot more of it. But for most applications, within two to three weeks of data we can generate pretty good performing models.
Again, that's all part of the intellectual property we designed with that generative AI to help improve it: when there's not enough data, the synthetic data will help improve the performance. And one more question here.
Is it possible to send commands from Sorba to Ignition in case of an anomaly detection, like stopping a pump? So one of the things we were very focused on, as control engineers building this company out, is the ability to do closed-loop control. Any detections, any processes that you build within Sorba, or within Ignition using Sorba on the back end as the engine, can be used for closed-loop control.
So if you want to optimize a process, you can. If you want to say "let's shut down or throttle back a process because we hit a certain anomaly or went over a certain parameter," all those things can be done from a closed-loop perspective. You can also run in a co-pilot mode, which is super important when you want to deploy multiple models and determine, hey, maybe I want this tag to create some influence; run it for a day or two, see if there's any impact, play with the models, and determine which one creates the most value.
Let it run in a co-pilot mode, giving recommendations. And then when you're ready to go live and have it be closed-loop control, you just change the parameters of where you're writing to and have it go there. And I'll take one more question here. Could we use Sorba to cluster comments written by operators to categorize failures?
How do we annotate failures, Aldo, within Sorba from a clustering perspective? Yeah, so that's what I was showing you earlier; I didn't get a chance to go through it in detail, but inside the ML trainer, you can create what are called work items. Those are the annotations that you can set
on the model. So with anomaly detection, you can annotate it and specify this period of time as the classification I'm going to generate a classification model from, or you can import a work order, or just specify the dates manually. So there are several ways you can annotate to classify an event.
All right. Well, Travis, I would love to turn it over to you for your final thoughts: where you see this going, and the value of combining these two resources together to create value for the customers out there trying to make things for all of us to consume.
Yeah, no, this has been a great demo. It kind of shows just how easy it is to integrate these two platforms together. And at the end of the day, you know, here's the thing.
We don't necessarily know all the problems we're going to solve, right? But when we identify one, having tools that let you go in and try things out, without having to send the data to the cloud, write a crazy amount of mapping code, and deal with all the complexity of involving the IT teams and all that, the more you can stay in that domain expertise, the better. And ultimately, you're going to get some results, and that's going to grow bigger and bigger as you go forward.
So it's very exciting to see this evolution happen, and to leverage these platforms together to accomplish more goals. That's all I have to say there, but it's been a great demo.
Well, again, I want to thank everybody who came on. I will continue to reply to questions. If there's additional information that you need, please feel free to contact us at info@sorba.ai.
I think I've dropped my email in the chat as well. We also have Sorba University. So if you are interested in learning more about that, obviously Inductive Automation has an incredible library of videos and training. Their website is robust and has tons of partners.
And I really want to thank everybody for showing up and learning more about what we do here as a company and what the integration of an AI tool into Inductive can do for you guys. So if there are any additional questions, please feel free to contact me directly, and thank you all very much for taking the time.