Transcript for:
Summary of AI Innovation Summit

I'll paste that in later. And let's go ahead and get started with the Artificial Intelligence Innovation Summit. Make sure you put your LinkedIn URL in the chat for people to follow you and connect up with you.

These summits are held on the third Thursday of every month. Our next one's on November 21st, the week before Thanksgiving. They're three hours long.

We have five speakers who give 15-minute presentations. And most of the speakers provide PowerPoint slides or PDF handouts for all of you, so we'll be loading those into the chat throughout the summit.

So be watching for those. The format for this summit is inspired by Pecha Kucha, with its short-format talks, and by TED Talks, which are 18 minutes long. Today, we are fortunate to have one AI showcase speaker who presents for 45 minutes.

And then, as is the norm, we always have speed networking at the end for the final 45 minutes. So you get to speed network with all the speakers and connect up with them, and that'll be on the Zoom meetings we'll be running. So who do we have today? As a matter of fact, we're going to share their photos with you as we introduce them and they share their screens.

And our first speaker is Dr. Andy Armacost. He's the president of the University of North Dakota. Our second speaker is Mike Hruska, and Mike's with Problem Solutions, and his topic is the new way towards organizational performance.

And I forgot to mention Dr. Armacost's presentation. It's entitled AI in Higher Ed, and he's got a lot to share. Anna Roisman, the CEO of Innovate QA Events, will talk about AI solutions that close the productivity gaps in SDLC.

SDLC stands for Software Development Life Cycle. I also look forward to hearing from Rob Wolcott from Northwestern University and the University of Chicago. He's just written a new book called Proximity, so the title of his presentation is Proximity: AI and the Future of All Industries.

Jason Kaufman will be the fifth speaker, and he's CEO of Erevo, and AI in Action: Real-World Demonstrations to Supercharge Your Productivity is the title of his presentation. And the last speaker is Andrew Soltis, founder of Dragonfly Rising. He's a Google alum who worked on AI at Google, and he's going to introduce you to the topic of navigating the AI revolution and maximizing ROI. And I invite you all to be thinking about networking with the speakers throughout.

The speakers are engaged in the chat. You'll see them commenting on other speakers. And if you have questions, we have a Q&A in Zoom webinar.

And we'll be collecting those and sharing them with the speakers as part of follow-up and sending you a newsletter with their answers. So your voice will be heard. I also invite you to think about the no brainer tool.

NobrainerTools.com is an AI prompt tool that I created. And it's been around for a long time and does very well. LinkedinGroups.com is our website with our groups. And you've got the key topic right here. So our next speaker, I'm going to stop sharing my slides.

I'd like to introduce Andrew Armacost. Hi, Drew. Hi.

Why don't you give people a little 30-second summary of your background, because it's quite fascinating. Well, I'll do that at the first part of my presentation, but I currently serve as the president of the University of North Dakota, which is... here in Grand Forks. Actually, not here.

I'm actually in Seattle, Washington today. But Grand Forks, North Dakota, great home, a great university that's doing amazing things, including in the world of AI. Excited to share stuff today. Well, you can share your slides now. Will do.

And then we'll get rolling. All right. Hey, Gerald, you're still...

Okay, now I can share. There we go. Well, as I said, I'm Andy Armacost. I'm the president at the University of North Dakota.

UND was founded in 1883. It's our state's flagship comprehensive research university, rooted in the liberal arts. And we have an amazing collection. And I think you can see my screen now.

Gerald, give me the, I see Rob nodding. Good. It's home to the state's only school of medicine and only school of law. It has a broad set of academic programs and research activities from science and engineering to the arts and humanities. Our school has 15,000 students, about 70% of whom are on campus, 30% are online, and just a great place that's contributing to the knowledge in North Dakota and contributing to the national economy.

So let me offer my thanks to everybody for attending today. I saw people signing in. One caught my attention, the one who gets it done. So whoever said that.

Congratulations on getting it done. I think all of us are doing our best to do exactly that. Let me offer my thanks to Gerald Hammond for setting this up, inviting me to participate. Turns out he's also a University of North Dakota graduate.

It's where he spent his undergraduate days. So thanks, Gerald, for the great invitation. And thanks in particular to Jess Kinneberg, who helped with the graphics and assembling this presentation. So I'm really appreciative of her work. And today's was a bold charter.

Gerald asked me to answer questions such as: How is UND responding to the significant challenges and opportunities of AI? And, better, what are we doing that might be of interest to this audience of 40,000, or however many of you are listening today? Just know there are great speakers to follow; the lineup of these 15-minute talks is really something else, so be sure to stay tuned for each of them. So what to expect today: my insights on how one campus and one system of 11 universities is responding to the opportunities and challenges of AI, and then some ideas to catalyze an understanding of what we face in higher ed. Some have said higher ed will become less important in the wake of this revolution in AI. And I firmly believe the opposite is true, that what higher ed offers is certainly significant to understanding the balance of AI with being human.

But it requires, on our part, a sense of adaptation, collaboration, and also funding to make it happen. So let's see if this... Okay, good.

I wanted to start just by sharing who I am, how I got to where I am in this position, what that road was, and why I think it's relevant to today's discussion about AI. AI is a field closely related to where I grew up academically.

So I did my undergrad at Northwestern. You'll hear from Rob Wolcott later today; we were undergraduates together. We overlapped by two years. I did an industrial engineering degree there.

Then my graduate work was at MIT, both my master's and PhD in operations research. And then my academic life, I was active duty when I was at MIT, active duty in the Air Force. And I spent 20 years at the Air Force Academy as a professor, as a department chair, and also as...

the dean of the faculty or the provost, the chief academic officer. And from there, I joined the University of North Dakota just over four years ago as its president, serving to oversee the entire operation there, but still trying to use my background in the field of operations research and other areas to make a difference. So what the heck is operations research?

Many of you may have heard of it. Many of you might not have. It was born in World War II, and it focuses on using mathematics and computing.

It intersects with engineering and business, but it's really about quantifying and optimizing resource allocation. It involves, in many programs, optimization, both linear and nonlinear, as well as stochastic processes and queuing theory, and of course a heavy dose of statistics. And just from that description, you probably sense that it's algorithmic and very closely related, a cousin as it were, to AI.

So both are interdisciplinary fields. Again, OR focuses on that intersection of a bunch of disciplines, with applications to engineering, to science, to health, to any number of fields, just like AI. My undergraduate thesis was

in developing advanced algorithms called interior point methods to solve linear optimization problems. My master's and PhD focused on other forms of optimization. In particular, my PhD focused on large-scale combinatorial optimization to solve very large logistics problems, specifically logistics problems for UPS, the United Parcel Service, and their overnight delivery network of airplanes. And so how can you model that mathematically and then come up with solutions, using combinatorial optimization and other algorithms, that benefit the company?
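To make the flavor of that concrete, here is a minimal, illustrative sketch, not from the talk and far simpler than the UPS network problem, of the kind of linear optimization formulation operations research deals with, solved with SciPy. All numbers, products, and resource limits are invented for illustration.

```python
# Illustrative sketch only (not from the talk): a toy resource-allocation
# linear program of the kind operations research formulates. All numbers
# are made up. SciPy's linprog minimizes, so the profit objective is negated.
from scipy.optimize import linprog

c = [-40, -30]                   # negated profit per unit of products 1 and 2
A_ub = [[1, 1],                  # labor hours consumed per unit of each product
        [2, 1]]                  # machine hours consumed per unit of each product
b_ub = [40, 60]                  # available labor hours, available machine hours
bounds = [(0, None), (0, None)]  # production quantities cannot be negative

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("optimal production plan:", result.x)
print("maximum profit:", -result.fun)
```

Interior point methods and the combinatorial techniques mentioned above attack this same kind of formulation at vastly larger scale, and with integer decision variables for problems like routing a fleet of aircraft.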

And of course, we received actually significant acclaim for the work that we had done. This is work with Cindy Barnhart at MIT. And the impact we had was significant. Not only were we computationally able to solve problems that hadn't been solved yet, but also to make an impact organizationally at UPS. And in fact, right after the algorithms development, we were able to implement at UPS.

And they've had it in play for the last couple of decades, supporting their long-range network planning. So really just a great opportunity to see the power of computation and analytics to make a difference. So I'm going the wrong direction, I realized. So of course, there's a direct connection between OR and AI, just through the descriptions of the development of algorithms that I shared.

I think... My understanding of AI began with my father, who was in the same field as me in the early 80s when I was a high school student. He had talked about artificial intelligence and shared what it was and the impact it could make.

And those were the early days. Fast forward to 2017, I had returned back to Cambridge, Massachusetts, met with some of my old professors. And one of my professors, Dimitri Bertsimas, something he said really struck me. He said, Andy, I wouldn't have said this until recently.

But there's something special about machine learning models. We don't know why they work so well, but we can generate very good solutions to difficult problems. You need to make this a part of your work and your future.

And so, in fact, we adopted that mantra at the Air Force Academy when I was on faculty to launch a new data science program that focused heavily on AI. And we see the work that continues at the intersection between OR and AI, as many researchers are looking at how we can do things like create greater explainability of machine learning models by using the analytic frameworks that are prevalent in OR. So just a real opportunity to take advantage of a broad set of experiences. So the observations I wanted to share from my own research, given that background in computation and optimization: the reality is that mathematical and computer models can solve problems that humans just can't solve.

And we see this in basic operations research courses when you're solving problems involving two or three decision variables. As humans try to work through those problems, they have a hard time coming up with the optimal solution, even for very simple problems.

And when you're dealing with problems concerning billions and billions of decision variables, it becomes absolutely impossible for humans to do that work. So that's a fact: humans can't do a lot of this on their own.

Machines can help, computers can help. But the problem is we get negative reactions if models or computers are seen to impact a person's job; there's this natural tendency for people to say, I value what I do, and we don't want to see machines take over. But there's also a catch: getting good data is one of the tricky parts. If you want great models, whether they're AI models or other types of computational models, they rely on high-quality data. And then finally, customizing

models to a particular organizational need or decision-making framework requires a lot of work. There are very few push-button solutions out there, at least in the examples that I've seen. So we have to be mindful of the fact that while generative AI and other forms of AI can make good decisions, most AI models require a lot of training, a lot of development, and a lot of customization based on the actual needs of the decision problem at hand. So I should share my views of AI before ChatGPT. And again, my dad was the one who introduced it to me.

But I see AI as a set of powerful computational tools in a variety of disciplines and fields. It's certainly vital to many fields of research. When I thought of AI, again, before ChatGPT, I realized there are many campus functions that could be automated using the tools of AI and other kinds of digital approaches.

Of course, when we talked about AI, we recognized early, years ago, questions about academic honesty and whether AI tools could be used in a negative sense within the academic environment. And then, of course, there are the big questions about how big it might become, if you read Ray Kurzweil or the trio of Kissinger, Schmidt, and Huttenlocher. They offer their suppositions about what's going to happen and the big impact on the connection of machines and humanity.

And everybody should be paying attention to what these folks write, whether you agree with them or not. It's an important perspective to keep in mind. And then, of course, there's the date that

struck us all, at least those of us who weren't working in generative AI or large language models. When November 30th of 2022 came around, it certainly caught many of us by surprise and showed the power of large language models and generative AI. Of course, the reactions that we get are

kind of rooted in fear. What is this? Will students start cheating? Should we ban it? Just like New York City proposed to do early on, shortly after ChatGPT was released.

But of course, as you look at this and you see and you know the computational benefits that computing, AI, and other forms of computing have, there just appear to be endless opportunities to move forward. So we need to kind of weigh the challenges with the opportunities. And you saw it in the headlines as well. And you see some of those here about the questions and the banning.

And then you see the sense of hope too, about generating opportunities with AI and how it will transform decision-making. So a real mixed bag. And we still see these themes carried through today.

In fact, I got two emails today, one talking about the promise. It was an article on the promise of AI and one saying AI is stupid. It doesn't do anything.

So we just know that there's still that debate that's happening. So in the wake of all of this in January, and this is after we played around with ChatGPT and saw its capabilities and read a lot, we established or I established five campus focus areas. One is looking at how AI will impact one's field of study on a college campus and for everybody to develop a sense of technological awareness.

Then we would have to think about effective use of AI in the classroom, whether it's creating personalized tutors or whether it's helping us create syllabi. And hopefully not being used inappropriately for academic dishonesty. How can we use it in the classroom? How do we devise and create the right computing infrastructure on the campus to build upon our high-performance computing infrastructure that we already have? How do we mitigate the risks?

Data security, data privacy, but also the academic integrity issue. And then we launched significant numbers of discussion groups and summer seminars and opportunities for the campus to come together. And we also... Beyond what we did on campus, we provided an important leadership role across the state of North Dakota and within the university system.

The university system comprises 11 institutions: two research universities, four-year regional universities, and two-year community colleges, all together under one umbrella. And the resources available to each to chase the prospect of AI vary across that landscape. And so we took it as a responsibility to be a leader across the state.

We launched the North Dakota University System AI Forum, a monthly meeting like this to share ideas about what we're seeing in the world of AI. So we felt it was important for us as a flagship university to take that role. And in fact, just last week or two weeks ago, we hosted at Valley City State University a conference called Being Human and Working in the Age of AI.

And President Al LaFave, who has the greatest name, because if you write Al with a lowercase L, it can actually be read as AI. So I'm going to rename Al LaFave as AI LaFave. I think he's done great work at his university, Valley City State University.

So it was natural, as we considered what to do in our state, to think about what other states are doing. A number of states were highlighted at the Association of Public and Land-grant Universities conference last year; APLU focused on five universities that actually had a head start on us, on many of us, because they had invested in making their campuses kind of AI-friendly well before the launch of ChatGPT.

But it seemed to us that few were tackling the AI challenge across a whole system. And so this is something that caught our attention, and we realized that what we were doing in the state of North Dakota, a unified approach for all 11 schools, might in fact at that time be unique.

I think there's other systems doing this kind of work right now. But we noted several things with the other schools. These were schools with strong financial backing and industry partnerships.

And then also the second point was we couldn't find, again, early examples of system-wide change happening within their states. So a lot going on at UND that ties into this work. We had ongoing partnerships with NVIDIA, with our high-performance computing system, and connections with both Microsoft and OpenAI.

I'll share more about those later on in the presentation. And then one of the suggestions out of the APLU session was do an inventory of how AI is being done on your campus. And so we put out a short questionnaire.

And we only have 825 faculty members on campus. And what we found was that we had 165 campus members involved in AI work, which was much larger than what we had anticipated, certainly in research areas like those that you see on the screen. But there was also broad work happening in pedagogy, in how to build AI tools and techniques into the way that we teach our courses, and furthermore, how to introduce our students to AI in meaningful ways for their transition into the workforce.

So we had strong indications that traditional cultural barriers were being dropped. You know, the typical belief that the only place where AI is done is in the computer science department. We had active use across the entire campus or active interest across the entire campus to know more and to experiment. But what struck us as we did that initial analysis was not only the realm of the possible, but... Dang, this seems really expensive.

When you think about accessing models, whether it's fee-for-service to have the full capabilities of many of the common tools, and I don't list all of them here, there are also tools that are very specific to campus functions like finance, admissions, and so forth, and there's a cost to accessing each of these technologies. Do we have the computational infrastructure to support an increasing demand on the campus? Should we consider a North Dakota data center

that stores data and provides the computational support for the entire North Dakota enterprise, from K-12 to higher ed to state government and to industry partners? And there are lots of discussions happening about how that should happen. Should we build?

Should we lease space in someone else's facilities? We anticipate there'll be data centers that come physically to North Dakota. In fact, there's high interest. In case you don't know, North Dakota is principally an agricultural and energy economy, and we have ample energy resources that have driven huge interest from data centers and other industry partners wanting to come and get a part of that.

So we'll see how the data center question plays out; we're not committing to any strategy yet, but there are ongoing discussions about the realm of possibilities. We also have to focus on faculty and staff development. How do you keep them abreast of all that technology can offer within their fields of study? We have curriculum design. How do you

bring instructional designers to the campus that have the right techniques or the right abilities? How do you embed those abilities within your faculty members so that the design that we're offering is good for our students? We have to look at the whole range of business processes on the campus and figure out how on earth do we embed new tools and techniques to make those happen more efficiently. And then you have to hire people.

And one of the challenges, of course, is getting that expertise is tough. It's a competitive market to hire AI savvy people these days. It's just the way it is, and it's probably going to get tougher.

So the question is, where does the money come from? So let me tell you a quick story about a study group that emerged. We had a strategic planning initiative called Envision 2035 under our chancellor, Mark Hagerot.

He's the system chancellor. And Chancellor Hagerot launched this initiative to look at nine different areas, one of which was AI and digital sciences, really looking at curriculum issues and how we infuse these ideas into our curricula.

And this year-long strategic planning process had eight other study groups who, when asked what the biggest risk and the biggest environmental impact was, all responded that it was AI and the growth of digitization. And so early in the process, we realized that our study group, which was focusing on the academic delivery of AI and digital sciences, would now be expanded to look at an enterprise-wide solution across the whole university system.

And so I led the study group there and the results really looked at the following areas. I'm not going to go into the results, but we examined what do we teach? What topics? What do we teach within given fields that aren't necessarily the computational sciences?

We asked the question, how do we teach and how do we operate our campuses? In other words, the methods, the software, the hardware, and the infrastructure that we need to build to operate and to teach effectively in this new digital world needed to be examined. And then also, how do we amplify what it means to be human? I mentioned before that our campus is rooted in the liberal arts, even though our largest college right now is the College of Engineering and Mines.

But how do we connect the two? How do we connect technology with humanity? All important. And then in addition, we want our universities to be places that are considered to be essential for the developing of the very human skills.

What does it mean to be a human being and appreciate human things? What does it mean to be a critical thinker? And how do students appreciate what it is to be human in the face of this amplified technology?

And we proposed a structure that would then support all 11 schools within the system and also increase the role of the virtual system that we created years ago called the Dakota Digital Academy, which was really about how you deliver computational courses, computer science, data science, and AI courses, within an online framework unifying all 11 schools. So we had a lot of head starts there. And so now we're still building that. But most importantly, we expanded the effort to be a statewide AI study that would link K-12 with higher ed, with state agencies, and also with industry that exists within the state.

The fortunate thing is our executive, our governor is Doug Burgum. And you might have seen him on TV during presidential debates. But Governor Burgum was the founder of Great Plains Software, which was purchased by Microsoft back in 2001. He would later become a senior executive at Microsoft.

And he was also a venture capitalist in the world of IT. True love of technology and really is a catalyst to have us think about ways that our state can address the needs, both within higher ed and also across the state with respect to AI. So this study has an important outcome, and that's to create a legislative funding proposal to support broad investment across those agencies that I mentioned, with key focus areas on how do you govern it?

How do you create an organizational design that supports consultative work for the state agencies in North Dakota? How do you promote great development of a workforce through the education that you offer and the opportunities that you create?

How do you build the infrastructure, whether it's a data center or on-campus computational structures that support that goal? And then what legislation do we craft? We do have a key legislator, Josh Christie, who ran an IT company, a consulting company, and he's also a state legislator.

He's pivotal to how we sell the case to the broader legislature. The legislative session begins in January, and so it'll be exciting to present the case of AI, frankly, to a large number of people who are unfamiliar with how AI works and what the benefits are, as well as how you legislate to prevent the drawbacks. So there are many opportunities that lie ahead of us, and we're building that case currently with a great team from across the state. So with that as a backdrop, I figured I should talk a little bit about where we are now on our campus. This is a picture of the University of North Dakota in Grand Forks, a great place to exist as an academic, often forgotten because it's pretty far north.

But we have an amazing research portfolio that contributes in so many ways to national security, to health, to energy, to a variety of disciplines. And so I wanted to just talk about things that we've done on campus to promote AI and to host a variety of activities that we think benefit our campus, but also the local community. We hosted Greg Brockman, co-founder of OpenAI. He's currently on sabbatical from OpenAI. He is well connected to UND.

In fact, he began his collegiate academic career at UND. You might not know that. He grew up in Thompson, North Dakota, and as a high school student took

much of his coursework at UND, and then would later go on to Harvard and then MIT before saying, I'm going to go out and found some companies. He joined Stripe early on as its CTO, and then eventually was a co-founder of OpenAI. So it was great to host him, to hear his perspectives on the future of generative AI and also the march towards artificial general intelligence. Recently, we had Palmer Luckey from Anduril Industries. Palmer invented the Oculus, you know, the virtual reality goggles that were purchased by Facebook.

Palmer gave his perspectives on AI and automation and how autonomy is impacting both the commercial sector and the defense sector. So we've been very fortunate to be able to invite and welcome these two great national speakers to the stage at the University of North Dakota. I have an amazing team. Gerald had presented three questions that he wanted me to answer, and I called on members of our team.

These are all folks who not only contributed to some of the data that you'll see here in a minute, but have also been central to this broad adoption of AI across our campus and to the support that we're providing across the state. You can just look at all the different areas where they sit and imagine how AI is being used in each of their areas. So the first question that Gerald asked me was, how has AI changed UND? And I think it's important to recognize how it's amplified our research and discovery, whether it's health care, where we're pursuing a Center of Biomedical Research Excellence through the National Institutes of Health focused on AI, or our creation of eight additional positions

in AI, data science, and cybersecurity. We've certainly amplified our work on general autonomy and uncrewed aircraft systems. So AI has really created a catalyst to amplify the work in those areas.

It's re-energized our course and lesson design. In fact, faculty are looking very carefully at how to deliver their courses in ways that are AI enabled and AI safe all at once. And I'll share more examples here in a moment.

There's been a dramatic shift in many disciplines to recognize the importance of readying our students for the workforce. And it's interesting, workforce in some circles on campus used to be a word where they said, well, no, we're something bigger than just preparing people for the workforce. But AI has forced all disciplines to say, you know, this is something that as we send our students forward, they need to be facile with, and they need to understand the implications of AI.

So it's really done amazing work. We've cultivated cross-disciplinary work to a large extent.

AI and the many facets of the language of AI have brought many opportunities and collaborations to the forefront, both in research and course design and campus operations. We've also launched into chatbot mania, and I say this kind of facetiously.

It seems like every corner of the campus wants a chatbot. And so chatbots have forced us to have conversations: do we have a unified chatbot that is used across all areas of interest, whether it's admissions, student finance, library functions, whatever the interest?

Or do we purchase customized chatbots that are tailored for those individual areas of need? So just when we look at something as straightforward as a chatbot, because they're pretty common now, it forces some broad discussions about how we integrate across the campus. And that's really important. And speaking of bots, in case you're interested in our adoption of technology across the campus and autonomy, the KiwiBots are out and about every day, even in the snow and the rain, delivering food across the campus.

And I just thought it was an interesting statement about our campus and our willingness to move forward on all forms of technology. So the second question that Gerald asked was, what is UND doing with AI for students, faculty, and administration? And I think it's important to focus on the general digital literacy outcomes that were adopted in the summer of 2023. Originally designed in 2019, they were slowed down by the COVID pandemic, but AI certainly lit a fire under us to make sure that we are appropriately specifying digital literacy outcomes. And this was a system-wide request as well.

So there was a demand for this to happen across all 11 schools, and UND jumped right on it and made sure that AI was firmly embedded in those outcomes. We had to focus on acceptable use of AI, so guidance was given to the campus by our provost, and then within each course, instructors and faculty members will certainly specify their expectations for use of generative AI and other AI. We focused on professional development forums and on developing not only workshops but workshops in a box that can be deployed and delivered not by a central person but remotely by others. And the most recent workshop in a box is on crafting custom GPTs.

So now it's about getting into some interesting, fascinating areas that we know our students and faculty and staff will find quite interesting. And thanks to Anna Kinney and Ann Kelsch for all their work on these workshops, and to Lynette Kronelka, who runs our Teaching Transformation and Development Academy. This team has put together an amazing AI assignment library. You can find it at

commons.und.edu, under our AI assignment library, or you can Google it and find it pretty readily. It's a listing of 70 use cases of faculty members embedding AI-based curriculum examples and activities, with a full description of what was done and what the impact was. So it's a really nice way for other faculty members across the campus or across the nation to get some information about what was done there. And then finally, we had course development grants, small grants between $500 and $1,000 that would allow faculty members to go out and either participate in

external development activities or purchase equipment or software that they might need for their classes. And then finally, the third question was, what academic programs or courses on AI are being offered at UND? And of course, our computational, I call them computational sciences just as an umbrella term, but our traditional programs in computer science, data science, which we launched in 2018, 2019, cybersecurity, which has been around even longer.

but also a newly created AI and machine learning graduate program that was launched just months ago. So, great programs in the computational sciences, and then related programs across many different disciplines and schools, such as applied economics and predictive, I shouldn't say preventative, analytics. That sounds too harsh. Sorry, team. A typo on the slide, though.

It's predictive analytics, behavioral data analytics, business analytics, and then... classic programs in mathematics and applied statistics. And then what's really impressed me is the work that's going on with the design of new courses and programs.

And whether it's AI in the law, AI in the performing arts, or our philosophy of technology programs, we're hiring faculty members who have the expertise to address these technological questions in environments that we don't customarily see. So it's been gratifying to see the broad adoption, and it really gives me hope. that as we go through and continue to transform our campus, that the faculty and the staff and the students will be along for this incredible journey with us. So I have a couple final slides on commentary and the pace of growth that we're seeing and just offering some thoughts about what's ahead.

And I continue to find myself balancing this sense of urgency, like we've got to get stuff done, with a sense of patience, knowing that things are going to be wrung out and funding strategies will be developed. But we have to balance our interest in moving forward against the budget realities, which is why we're going before the legislature.

There's the great talent competition. And I'm always constantly reminded of the opportunities for the great partnerships with industry, for internships, for research, for course development. And there's ways to accelerate the progress at UND, in North Dakota, and across all of higher ed. But we're in the infancy of generative AI, for sure, and the capabilities.

that are being fielded are incredible. I just saw more demonstrations yesterday of people playing around with new tools in music and in image manipulation or image modification. But these AI tools can identify emergent properties or behaviors.

I know this through my own computational work: things that aren't readily identifiable by humans become available because of AI and other tools. And so the question is, to what extent will AI be able to come up with new applications or computational approaches that are not conceived by humans? And will this meta-ability exist for AI tools? There are some examples where reinforcement learning gets at that.

How can you have a higher level of thought or a higher level of computation that's done by your AI models? Or what does AutoML tell us? And that experience about having kind of a loop around the models to do even deeper types of thinking and assessment.

And so that's an open question, right? And things are moving at such a fast pace. We'll get better examples of how that works. So this really forces us to think about, will AI be good at words and images, or will it also be good at designing new products and systems autonomously?

Or will it always require, or should it always require, the intervention of human beings to be along for the ride? Right? So these are the fundamental questions that we in higher ed need to be wrestling with, as well as our industry partners.

There are many issues of technology acceptance. I said this before, people are frightened by new technology. We see that.

And technology doesn't always work as planned. Take, for instance, me setting up for today's session with the wireless network here.

We had to go hardwired just in case because of some disruptions. So it's technology robustness, but also the acceptance of that technology and the doubts that the lack of robustness can create in the minds of others. And then, of course, data privacy and data security are always at the forefront.

Some stand to benefit. I'm just going to be frank here; I know we have a lot of industry reps. Some stand to benefit richly from the success of these AI models, in particular the big ones, all these new generative AI tools, and it's moving at such a pace.

And so I think higher ed plays an important role because there's going to be winners, there's going to be losers, kind of an AI arms race, so to speak. And how does the desire to be first impact or compromise potentially the ethical standards that we need to understand and to implement? But higher ed expertise, I think, can help mitigate the risks of and be both an accelerator and a brake to make sure that we together are doing things right for society.

And I think higher ed can play an important role there. But how do we identify that winning technology? So as a university, when I'm contemplating where to invest our money, how do I know what's legit and what's not? There are certainly opportunities to make guesses early on, but we'll certainly have a chance to wring that out together, with higher ed and industry partnering.

And then we have legitimate resource issues that we're facing. And Gerald, I see you popped up. I think I'm still tracking for six minutes and eight seconds.

Right. You're right. OK, good.

Thanks for the heads up. There are legitimate resource issues as well, most notably energy and water, and lots of discussions here. When we hear of big data-oriented companies purchasing nuclear power plants, I think two in Pennsylvania over the last six months, and discussions of deploying small modular reactors for nuclear power, as well as aggressive moves toward energy-rich states to figure out if data centers could be located there, we know that there are going to be legitimate resource issues, both with energy and water, and we need to be

prepared to address each of those. Let me also talk about the role of higher ed and share some thoughts on the broad charter. We need to prepare our students for this technology-rich future, without question, and for how it applies to each of our fields. We need to connect each of our disciplines to this new environment and invest in that development. The days of faculty members stagnating, kind of doing the same things and having productive lives, that's great.

But we also need to have an eye towards the directions of the future of technology. And that's important. We need our universities, as places of learning and development, to focus absolutely on what it means to be human, keeping a focus on the humanities and the arts. The importance of those will still remain.

At the same time, how does the human being work cooperatively and in conjunction with these new technologies? And a primary role of higher ed is the dissemination of knowledge, both to students and to society. And I know that our systems of peer review... While they might seem cumbersome, they play an important role in making sure that truth is found and that technology is created the right way. They serve as a validation of new discoveries and also, I think, will play an important role in the evolution and the future of AI.

And this, of course, becomes trickier when copyright, trade secrets, and market caps are all involved in the discussion on the commercialization side. But again, I think higher ed must continue to play a central role in all of this work. Again, I'm bright on the future of AI. I'm also cautious about what the implications are for humanity.

I'm cautious on making predictions about what the future of humanity is. But I know that higher ed is a place where our newest members of the workforce need to be well-prepared, and higher ed is the place where that happens. I'm exceptionally proud of how UND is setting itself up to be one of those national leaders in the work that's happening in examining these technologies as we deploy them and as we study their impact. You can't help but have noticed last week's announcement of the Nobel Prize in Chemistry. It couldn't have come, at least for this presentation, at a more appropriate time.

Certainly, the recognition of researchers from Google DeepMind comes to mind for their work on protein folding. This was Hassabis and Jumper. Great work.

Keep in mind that earlier, the Nobel Prize in Physics was awarded to Geoffrey Hinton for his work on AI and backpropagation and what it allowed for the modeling of physical systems. But there's an interesting trace of credit here that seems to defy what's normally awarded with these prizes. Here we have a process of discovery that involved the development of tools, AI tools, that then impacted researchers in given fields, who then received

recognition in those fields. And so what this should highlight is the opportunity for collaboration across disciplines and the fundamental place that computational modeling and AI will hold in the support of other disciplines. Some will argue that these are misplaced awards, and others will argue that they are fundamental to the future of discovery. And so it's exciting to see how that conversation is going to play out.

My final two slides, I will quote two colleagues. One is from Ryan Adams, who is an associate dean for our College of Engineering and Mines. He's been central to our work across the campus in terms of AI adoption. And here's what he says. At the end of the day, what we need to keep in mind is that AI is intended and should be intended to make things easier.

It doesn't make us lazy. It allows us to focus on the things that we do best. So again, focusing on that connection between technology and humanity. And then finally, from Chancellor Hagerot, who has been a national leader in discussions of AI, and he serves on the U.S.

Navy's AI Advisory Committee, supporting many levels of AI work within the Navy. He talks about the role of state university systems. And I think this is important because, as I said earlier, most of the early work was going just by individual colleges and universities.

Now, current and future waves of technological disruption will shake and then reshape education, our society, economy, and government. State university systems will be challenged to thrive, or in some cases, even to survive the disruption.

But state systems have yet another calling. Across history, the academy has been a tool for society to navigate and even thrive through disruptive change. And we are being called upon to do so again, this time during the transition to a digitized, artificially intelligent world.

It's a great way to end this conversation, recognizing the power that our university systems have, what we're doing in the state of North Dakota, and then furthermore what we're doing specifically at UND. Thank you everybody for your great attention. Thank you for the little emojis that you've been flying my way throughout, and enjoy the rest of the speakers.

Thank you, Andy, and let's show some love to Andy Armacost. Look at all those hearts and handshakes and applause. So you covered all the questions I asked you and more. I greatly appreciate it. Thanks, Gerald.

Have a great rest of the session. Thank you. And Andy put his link in the chat.

If you want to connect with him on LinkedIn, he's open to connecting with you. And if you look in the chat, you'll see the slides have been uploaded several times. So you can...

review what Andy had to say and maybe feed it into AI and see what new ideas you can come up with inspired by Andy. So we are moving along. We just finished a 45-minute presentation and the chat lit up. I encourage all of you to copy the chat to a separate document.

so that you don't lose it when we wrap up the summit. So here we are. We've finished up with Andy.

I want to remind you that we have networking on Zoom at the end. Our next speaker is Mike Hruska. He is CEO of Problem Solutions. That's a name you won't forget.

And his topic is the new way towards organizational performance. Thank you very much. It's his turn. Go ahead.

Sounds great. Thanks a bunch, Gerald. And it's so refreshing to hear Andy talk about what he was talking about, because in our future, higher education empowering our workforce, and not just the workforce but our faculty, to move this ahead is going to be an amazing thing.

I'm really excited to talk to you about organizational performance. This isn't the first time that we've thought about organizational performance, but we have a different context to think about it in. I'm Mike Hruska, a former researcher from NIST. I spent the last 20 years building game-changer technologies, kind of straddling industry's and government's hard problems, using emerging tech to build competitive advantage.

And now I've launched some of my own game changing products for sales intelligence and professional development coaching. I want you to think about this. How many things do you have in your house that plug into the wall? At first, we only had a few, right?

Those were light bulbs. But AI is this new electricity, and the proliferation of things that we're going to see as we move past just the advent of making AI accessible for all is going to be absolutely unbelievable. I think in five years, we're going to look back on today like the Ask Jeeves moment. That's a QR code to my LinkedIn. I'll have it at the end and I'll throw it in there.

Love to connect to you. The problem is we're limited. We're limited by our creativity and our imagination. Can AI help us with that?

I think so. Now, I've been building agents over the last number of years that have totally transformed our organization and helped us shine the flashlight further out into the future. So organizations struggle in a number of areas.

These range from culture and strategy to risk and M&A and innovations and success. According to MIT, only about 25% of managers know their company's top priorities. PwC found 93% of employees don't understand their company's strategy. Gallup says 22% of employees believe their leaders don't actually have a clear direction. And 14% of managers believe the speed of decision-making in the organization is actually effective.

These are huge challenges. Imagine for a second the concept of baseball. A single baseball player cannot execute a game of baseball.

Baseball player can run and hit and bat and slide. But it takes a team of baseball players that can do all of those things. versus another team in order to execute a game of baseball.

Now, there's an emerging set of competencies that happens as that team forms to know where to be, to know the right things to do, to know the right things to respond. Now, the same set of competencies that a team has to execute a game of baseball cannot be used to execute a baseball season. There's another whole set of competencies that are emerging on top of that team related to logistics and coordination and planning as well. Now, the same team that can play a game that can execute a season cannot execute a baseball campaign, right? You've got your farm team.

You've got tickets. You have all these other emerging sets of capabilities. Every organization is just like this.

And typically, we've focused on making the individuals better players and not on enabling the collective performance of people in our organizations. So let me tell you a little bit about our story. I've spent a whole bunch of years building technologies for other people with the idea that we would eventually turn the corner and build some game-changer technologies based upon our niche expertise.

But what we first did was some people would say, eat your own dog food, but I say, drink your own champagne. It was build things that changed the way that we work. And essentially forget that AI is a thing and just consider it to be really good software and think about how really good software could augment an individual, a team, a group, and even the entire enterprise with knowledge in some ways.

And so this is the idea of what we call the augmented enterprise. As an example, we developed a lot of products for other people; they would come to us, we would interview them for an hour, and then spend eight to 12 hours working to determine what the best product concept brief would be. My multi-agent product-development concept brief agent does this in about eight seconds with very limited information and does a better job than humans.

Huge quantifiable value proposition. And we built these things all across our organization. You might think of this as a virtual org chart.

where individuals aren't just using things like Copilot to develop and do singular tasks, but they're thought partners along the way. You might've read Kahneman's Thinking Fast, Thinking Slow, in which he posits that there's system one and system two. System twos are evaluative and analytical thinking, and system one is our improv brain in some ways. But building the virtual org chart of the future is what's going to enable organizations to grow rapidly without necessarily having to grow their employees.

Some organizations might do more with less. A really good book that I would recommend, written pre-ChatGPT 3.5 yet positing a post-ChatGPT world, is The Age of Invisible Machines by Robb Wilson and Josh Tyson. They talk about concepts in there like BTHX, or better-than-human experience, and intelligent digital workers, or IDWs. It's a really good guide for thinking about how to apply AI in your augmented enterprise or in the organizations that you're working with along the way.

So where do organizations really need performance? Innovation and decision-making. certainly.

The fact that 14% of managers think their organization makes decisions fast enough or has proper decision call is a challenge. But this concept of collective intelligence, MIT has a center for collective intelligence, but imagine knowledge flows in the organization. New knowledge is created, that knowledge flows and people learn about that.

Rob Wolcott's going to talk about it a little bit later, but how do you have the collapse between supply and demand or close the proximity in that gap? How do you respond to customers better and how do you communicate better? Frankly, though, everyone has a standard set of corporate challenges, more with less, stakeholder pressure, margins, revenue growth, increasing customer value, operation efficiency, talent management, training.

Everyone's facing these issues. The question becomes, how do we thoughtfully design, prioritize, and maximize the ROI of these elements along the way? And there's a couple of things. If you're a design thinking nerd like I am, that you might ask HMW questions or how might we? How might we upskill our people?

How do we select use cases and pilots? How do we take them to scale? How do we improve our processes and tools or people and culture?

And how do we do this responsibly with governance, security, and privacy? At the end of the day, though, what we're trying to really build is augmented intelligence teams because these truly amplify the performance of an organization. So 51% of CEOs.

say they're hiring for Gen AI. Accenture says that 84% of global business orgs believe Gen AI will give them a competitive advantage. Over the next three years, 85% of CEOs are prioritizing adoption.

So there's a massive pull into the market. But just because we can do something doesn't mean we shouldn't. Being thoughtful about this and situating it in the use cases of the business is important.

But having a humanistic perspective is really... really important too. By 2025, 50% of the board's directors will be powered by AI.

I think that's fascinating. And I think that we all should be excited about that to most of a degree. So Thomas Malone founded the Center for Collective Intelligence at MIT.

And I really love the quote that he said, from humans in the loop to computers in the group. And really as combinations of people and intelligent machines. I was giving a workshop for a global manufacturer and 28 people, four people had ever used ChatGPT before. It's fascinating. Just taught them the state of the art, the art of the possible, the art of the probable, use cases and augmented enterprise design and sent them out into their way for an hour to discover their future.

And I saw two people that were working back and forth and they said, what are you doing? They said, we're using ChatGPT. I said, how do you use it?

And they said, we asked for ideas for AI manufacturing. I said, stop, start over, tell your names, your ages, your roles. your degrees, your tenure at the company, and tell it your mission, and ask it to ask you questions to help you think through it. And they looked at me a little sideways, and then they did.

When everyone came back together for the checkout, it was unbelievable the amount of ideas that it generated. That's an example of an augmented intelligence team or super team. So reality check, by 2027, Gen AI will augment 30% of all knowledge tasks.

There's huge leadership adoption in this direction. But not a lot of people agree that their organizations are educating employees. Huge opportunity for change. And not enough leaders believe that learning is really a core part of that. And not everyone's prepared to skill their workforce.

I mean, these are great challenges, but illuminating these in the context of your organization is absolutely critical. So why AI? There's a ton of things that you can do with it. You know, Robert Wolcott says, you know, in his book.

legacy assets and steep learning curves are effectively the blockers. So it's creativity and innovation about how do we apply this? He says to do things we've never done before. That means we don't need to think just about line extensions from our current thinking, but we need to really teleport our thinking. I call this perpendicular thinking, going out around the corner, coming back to tell us what's possible in a future that we don't see yet.

So what can AI really do? tons of things for customer service, supply chain, human resources, but it really helps organizations with two specific things, coordination and collective system one and system two thinking to improve both intuitive and fast thinking and deliberative slow thinking with thoughtful approaches. So AI is really good at a whole bunch of things.

Content creation, images, video, language, translation, content discovery, and summarization. Conversational AI is improving at an unbelievable rate and really enhancing its user experience. Nine out of the 10 most funded VC investments in the AI agent space are all focused on customer experience. This collapse between customer experience and employee experience, what I call org experience, I think is an important paradigm shift in the future as we think about applying creativity and innovation. And simulations with synthetic data and prediction and digital twins and manufacturing are poised to change the landscape very quickly.

Now, AI versus human is a really interesting thing. In terms of breadth, AI can beat it. In terms of depth, humans are a bit better.

Humans are a lot better at insights. AI doesn't have a lot of eureka moments, but it's... fast and it's cheap and it's available and it's scalable and has all of the memory from everything all the way.

So AI can really help us think differently. The question is, how do we harness that? And that really comes through agents. So a number of agent use cases that I've been involved with are building coaching advisor, consultant mentors, Socratic tutors, price negotiation, empathetic debt collection.

even podcast agents and succession planning, a text-based agent grading student literacy on an NSF-funded program, and even an advisor for parenting neurodivergent children. So we've got to address it in the flow of work.

This URL here, getmaya.coach, is a product I'm coming out of stealth mode with. It's a polymorphic, personalized, adaptive coach, agent, mentor, advisor, and consultant, and it can really unlock organizational potential. There's a first-in-line beta program coming up.

So if you're interested, please fill out the form there and add me as a referral or add Gerald as a referral. AI can help us as an organization work differently. So it can help us as individuals, but also help us as teams. Sales is number one thing for every organization.

I launched a sales intelligence tool that helps onboard, grow, helps people practice, prospect, and perform, and saves about 20 to 25% of a salesperson's time. That's like adding extra capacity to your team and keeping the same headcount. If you're interested, feel free to hit salesage.ai. So AI agents can really help improve our processes. So building specific AI agents for an organization is something I absolutely love doing.

And I shared some of the example use cases, but they can be process dependent. They could be voice-based. They could be multi-agent that do many things and return your answers.

You know, agent sequencing and orchestration is a really lovely way to impact the world. Now, every organization is somewhere along this curve. You're either reacting or you're building something, you're emerging or operational. Moving along this curve is important for creating strategic value inside the organization and getting to a level of maturity.

I've helped a range of organizations, from large consulting firms to manufacturing, to education, to technology companies. Think about this. There are three types of companies: companies that know, if I just do this one thing, it's going to hugely impact my business; companies that say, I don't know what to do, but I know we should do something;

And then there are a bunch of companies that say, there's so many things, I don't even know where to start. This is something I love helping organizations with because it gives you clarity, understanding the state of the art, the art of the possible, the art of the probable, and using that in conjunction with lean thinking helps you to get an edge and move forward. Now, there are five steps that a company needs to take towards the future.

Got to increase your technology understanding. As I said, creativity and innovation are paramount to really thinking about the art of the possible for your organization. Lining up your learning strategy with the business strategy.

What does it mean? What does it mean in our industry? And benchmarking.

Now, not everyone's going to be as forward-thinking or forward-leaning as you, so you've got to motivate and shift mindsets. But the key comes down to pilots, and pilots drive growth, giving you the ability to scale and sustain solutions. Now, you want to answer the key questions: why are you doing it? What are your hoped-for outcomes? And what are your metrics?

Those are the three critical places to start. What does your organization need to consider? What are the known unknowns and next steps?

And then what are your use cases, pilots? Is your data ready? Do you have the capabilities?

Do you need to beg, borrow, or steal the capabilities? What's your strategy, governance, and investments? These are the critical things in your future as you take those five steps.

So I love this subject, and I'm glad to continue the conversation about the new way towards organizational performance. This isn't a fad. You can't say, I'm going to stick with my fax machine. This is a momentous leap towards the future.

Thanks, Gerald, for having me. Thank you, Mike. That was outstanding.

Now we take a look at our agenda. We've had a whole bunch of people join us during the past two presentations. So if you're in the mood for speed networking,

know that we're going to be doing that at the end of the summit for 45 minutes in Zoom breakout rooms. We've heard from Andy and Mike. Now we go to Anna Roisman.

And Anna is CEO of Innovate QA Events, and she's done some of the most popular events in the world. And her topic is AI solutions close the productivity gap. So, Anna, it's your turn to share your 15 minutes of wisdom.

Thank you, Gerald. I appreciate the stage. And I'm going with my slides.

I hope that you will see. Yeah, cool. So, my name is Anna Roisman.

I'll be talking about AI solutions that close the productivity gaps in the software development lifecycle. Hi everyone. By skill, I am a software tester and software test manager, and that's my background. So that's where I am coming from. I'm a pragmatic person.

So when I look at AI, I always ask, what's in it for me right now? I love all those presentations about the future.

That all remains to be seen. But as a practitioner, I am looking for practical solutions. I'm the founder of the Quality Leadership Institute, and I developed Test Masters Academy, which runs test events throughout the world. I'm also co-founder of Innovate QA Events, which is the next wave of events, where we look at AI and AI innovation in different spaces; QA is quality assurance, and that's where we're looking at AI innovation. So what I'm going to talk about first is some harsh stats.

70% of projects fail to achieve their intended goals. 27% of projects exceed their allocated budget, maybe more. Those are the stats that I collected.

57% of projects fail to meet their deadlines, leaving stakeholders frustrated and objectives unmet. 17% of projects fail due to undefined or constantly changing project requirements. And 29% of project failures are attributed to poor communication. Those are huge statistics. And when you look at software development, there are different life cycles, different models of how you develop software. There are a lot of them out there. Some of them are called agile, which means small iterations of the software. You don't deliver the software all at once; you deliver small iterations.

So in two-week iterations, pieces of software are delivered. You have CI/CD, continuous delivery, which is sometimes called DevOps, sometimes DevSecOps, sometimes DevTestOps; whatever it is, it's a continuous delivery SDLC.

And in continuous delivery, you continuously deliver. Every day you deliver something, and it's a loop where you create processes and use tools to deliver continuously. Then there is the more traditional life cycle.

The software development life cycle, in a nutshell: you start with planning, you analyze your requirements, you design your software, you develop and implement it, and then you test it, release it, and take care of it when it's in production. That's usually the software development lifecycle.

There are variations of how it can be done, but in a nutshell, that's what it is, right? First, the idea goes to your business analysts, then to the developers, then to the testers, and then to production for support.

There are a lot of productivity impediments in the software development lifecycle. The major ones, obviously: unclear requirements, poor communication, lack of resources, technical debt, inadequate testing, scope creep, dependency on external teams, team conflicts, lack of management support, and inefficient processes. There are probably more, but these are the 10 most severe ones that really don't let you produce software as quickly as you want.

And when I was looking into ways to enhance productivity, I was looking into AI. And again, I want to say it again:

I only look at practical solutions, easy ones. Because if you have a huge learning curve for AI, and a lot of platforms expect a huge learning curve from you, it doesn't increase your productivity. It doesn't.

It just adds to the time you have to spend learning the thing and how it works. So if you have some huge platform, yes, it can create AI opportunities for you, but you have to measure how much learning is required, how much budget, and how much time it takes for people to get used to the new workflows.

So this is the reality of it. I picked four areas where productivity can be enhanced quickly, and in each of these cases, I will explain why I picked what I picked.

So my first area is knowing your customers. The problem with knowing your customers: when developers are developing the product, they're usually technologists, right? So they know about themselves.

And when they develop something for people who are not them, sometimes they develop something that people would not want to use. And the problem is that the project goals can be affected. The scope can be affected, because you're introducing something that people are not going to use, which means you need to change your priorities later. Technical debt, definitely; and the reputation of your software is going to be impacted if there is something on the market that nobody wants, right? So the problems are customer misalignment:

You're not solving customer problems. You don't fit customer lifecycle. You don't match customer skills.

Sometimes you assume your customers are smart about technology, but they are not technically savvy. And you could be missing critical bug fixes. I used Copilot, Microsoft Copilot. Easy. It's free.

And why did I use it? I used it to understand your customer. Why Copilot? Because Copilot was trained on a lot of data that is openly available on the internet. I would not suggest using Copilot for a deep dive into something, but for giving you some ideas of who the

people you are developing for are, it's great, because it has collected a lot of statistical data that is open on the internet. So I asked it: I want to develop a product for sports enthusiasts who want to start exercising to lose weight. What can you tell me about the customer, their life cycle, skills, and demographics?

And you will see that, even for me, there were some interesting things in what it produced. I found, for example, that if you are a sports enthusiast and you want to start losing weight, you already know certain things, right? You know how to watch your weight, you know how to set goals for yourself, and you know how to use supplements. So you're a little bit of an advanced user, and something very simple will not appeal to you.

This is an example of how you can understand your customer better, and how developers can look back at what they're developing and get some understanding of whether it is what the customer wants. The next big thing is defect prevention: good requirements and acceptance criteria. This is huge.

Ambiguous and incomplete requirements, changing requirements, stakeholder misalignment, technical feasibility and risk issues, and insufficient validation affect productivity big time. Rework, refactoring, missed deadlines, bug creep, scope creep, stakeholder relationships, and unaccounted risks: all of that is a big deal. There is one tool on the market called Spec2Test, and it introduces built-in validation of a requirement.

So you just say in plain language, my original requirement is: user can log in. What it gives you is different requirements; there are some functional, some non-functional, and some operational. And what's

very important, it gives you the questions for the stakeholders: how do you want certain error messages to be displayed, how do you want the process for users to reset or update their password. So all of these additional requirements are there. I would really suggest you just try it out, just try it out and see what it can give you. This is the skill of a business analyst, and it allows people who are not too skilled in business analysis to create requirements that are valid. So as I said, there is some validation built into this tool. And I will give you some promo codes later so you can see how to use it. It's really easy and it makes you really productive. I will give you some data on how productive it is.
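
As a rough illustration of the same idea (expanding a one-line requirement into categorized requirements plus stakeholder questions), here is a generic sketch using the OpenAI Python SDK. This is not the Spec2Test product itself; the model name and prompt wording are assumptions for illustration.

```python
# Generic sketch of requirement expansion with an LLM, not the Spec2Test product.
# Assumes the OpenAI Python SDK and an assumed model name.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

requirement = "User can log in."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; use whatever your account offers
    messages=[
        {
            "role": "user",
            "content": (
                f"Original requirement: '{requirement}'.\n"
                "Expand this into functional, non-functional, and operational "
                "requirements, then list clarifying questions to ask stakeholders "
                "(for example, error messages and the password reset flow)."
            ),
        }
    ],
)

print(response.choices[0].message.content)
```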

But seriously, try it out. You will see how easy it is to do. Next: defect detection.

Time constraints in creating tests, running tests, rerunning tests, and maintaining tests are always a problem. You could be dealing with changes that require maintenance, and that's a big deal for testing. People may not be skilled, and they may not have the resources to do real, skilled test automation.

You may have incomplete coverage of compatibility issues. So what will bugs affect? Product quality, obviously; missed deadlines, rework, technical debt, reputation, and stakeholder relationships.

One of the tools that I want to show you is easy, and I only select things that are really easy. It turns plain English into test automation. You don't need the skills of a test automation specialist. If you are a PM, user support, user acceptance, or a business analyst, you can use this. You write in plain language; it's called testRigor.

You use plain English, and it creates test automation for you with visuals, so you can see exactly what you are doing there. Very easy to work with, and again, it can enhance your productivity because you can write your tests even before you write your code. And the last area is communication with stakeholders and what the problems could be there: lack of clarity, information overload, failure to address stakeholder concerns, not tailoring the message to the audience.

What does it affect? Timely decision-making, number one. Trust, missed opportunities, obviously rework, budget constraints, project risk, and legal issues. I use Claude. I hope I pronounce it right, because it's a French name.

So I use it to rewrite your communication into stakeholder lingo that they can understand. For example: I am a developer. I need to write to the business team about a bug which creates an incorrect layout for a report field. How can I persuade them of its high priority?

And this is what Claude generated for me. It introduced business impact.

So it explains the business impact. This is what business stakeholders understand. It says that data is misinterpreted, client trust is affected, operational efficiency is affected.

It talks about the risks if it's not addressed: financial losses, compliance issues, and competitive disadvantage. Everything the business can understand. And again, Claude is free and you can use it. Why do I like Claude?

Because it was trained on business and legal documents. So again, low-hanging fruit, you can use it. I have 30 seconds. So what I do: I'm a quality strategist. I love enhancing quality and productivity too.

So I help engineering leaders and CTOs optimize their software development lifecycle with low-hanging fruit and with quality processes, and I run cool events. I was told that my conferences are among the best in the world; I'm on a list of 20. And please try out the tools that I talked about, testRigor and Agile AI Labs; I gave you some promo codes. Definitely try them out, because it's easy. It's low-hanging fruit. It's cheap, and it affects your productivity right away.

And connect with me. Thank you so much. Gerald, I want to hear you.

I don't think I hear you. Thank you, Anna. Sure.

You were right on time. And I commend you for putting together such a great presentation, considering you are dealing with hurricanes in Florida. Oh, yeah, yeah.

Productivity issues, right? Yes, we've been hit by Milton. Yes.

We still have no electricity outside on my street, yeah. But thank you all so very much for your support. And it's amazing, amazing how many people are here.

Yeah, I will connect to you all. Just talk to me about using those tools. It's easy.

And just try them out. It's really easy for you. And I put in the chat the link to connect with Anna on LinkedIn. And also several times slides were put in there too.

Oh, my God. Thank you. This is where we are right now. We are halfway through the summit, more than halfway through the summit.

We have three speakers left that are doing 15-minute presentations. And so for those of you that have just joined us, we've had several presenters, and they've all shared great information, and you're in for a real treat. Our next speaker, number four, is Rob Wolcott.

He is a professor at several universities, including Northwestern and the University of Chicago. And he's the author of the new book Proximity, and the title of his topic is, coincidentally, Proximity: AI and the Future of All Industries. So, Rob, thank you very much for joining us.

You have 15 minutes. Great. Thank you so much, Gerald.

It's great to be here and good to see so many friends and colleagues out there and to follow great speakers, Andy, Mike, Anna. In the 15 minutes I have with you, I'd like to share a simple yet profound concept that will help us predict the future. And that's not hyperbole. This is actually a predictive assertion, this notion of proximity.

So as Gerald mentioned, it's the subject of a new book from Columbia University Press with my co-author Kaihan Krippendorff. And I'll give you the punchline, then I'll back up and share where this came from, and then give examples of how this is rolling out in the world today. So proximity asserts that digital technologies of all sorts compel value creation ever closer to the moment of actual demand in time and space.

Now, know that what we're not saying is just a little better supply chain management responding faster to customers. Of course, we've always wanted to do that for years. That's a great thing, but that's not what proximity is asserting.

It's literally asserting that digital of all sorts pushes the creation of value ever closer to the moment of actual demand. In other words, encouraging us to set up technology platforms and business models that encourage us to procrastinate, to wait as long as possible for a specific need, a specific user with a specific set of demands, and then create and provide.

And that's the direction of every industry the rest of our careers. So let me back up a little bit and explain where this notion came from. So in 2014, I was at a tech conference, and many of you go to tech conferences.

And one of the things I noted, as you might as well, is that the second speaker sounded a lot like the first, and the third a lot like the second, and the fourth a lot like the third. And I thought, you know, we must have better foresight as to where things are going. And so that generated a question in my mind. And that is, what is fundamentally different about digital technologies?

compared to industrial age technologies. Now, this comes from the field of economics and economic history. We have a concept called the general purpose technology, which, by the way, is a coincidence.

It's not GPT like generative pre-trained transformer. A general purpose technology is a technology that can apply to almost anything. And when it cascades across an economy, it changes the basis of competition.

It changes how the economy and markets work. So an example would be the steam engine. and then later when electric dynamos rose. If you understood the fundamental operating characteristics, constraints, and opportunities of electric dynamos, you could then see where the future was going to go compared to the past.

And that's what we're doing with proximity. So what we saw when we asked this question, what is fundamentally different about all things digital compared to the industrial age, is digital allows us to compress more and more capabilities in smaller and smaller packages and distribute them all over the economy ever closer to each moment in time and space. And so we're talking certainly about mobile apps and AI. I'll come back to AI in a big way. But we're also talking about rooftop solar and 3D printing and anything that is digitally enabled.

Therefore, because we have this distributed capability of digital pushing capabilities ever closer to moment of demand, this compels the production and provision of value ever closer to the moment of demand. Let me start with an example we all understand quite well, and that's video streaming. Now, I know that the content is all digitalizable, so therefore it moves at the speed of light, but we'll get to physical products in a little bit because it equally applies to physical products, experiences, really everything.

So if you think about video streaming, all of video streaming is already 100% proximate. In other words, you can watch any video you want anytime, anywhere, on almost any device. Now, by the way, in the book Proximity, we're not always saying this is all great. I mean, imagine you can binge watch the Kardashians all the time. What could possibly go wrong?

But nonetheless, the point is that you can access any video content you want anywhere. So the provision of video content is already nearly 100% proximate. But the production of that video content, for the most part, is not, yet. It was probably produced months ago, maybe even decades ago, if we're watching a classic movie.

But where are we right now with generative AI? We're at the knee of the curve, the very beginning of real-time creation of video content: on the fly, in the moment, creating video content custom for that small audience, that moment, even an audience of one. Now, it's still early. If you've tried Sora or MetaVideo, you've seen

it's pretty cool, but it's still early. Imagine where it will be in five years, 10 years, 20 years; we know where things are going to go. And this isn't a guess. We know that over time, we'll be generating video content, experiential content, for audiences of one, customized in the moment.

So this is the distinction we make in the book in chapter two between the production of value, product, services, experiences, and the provision of that value. They operate differently. Nonetheless, they're both driving toward proximity. Let's talk for a moment about physical products.

So think about how difficult it is to buy one matching fork. It turns out it's really hard to buy one matching fork. How do we do it?

Well, somebody in Western Australia mines iron ore and they put it on a ship and send it to Southeast China and they melt it down and alloy it and then send it to another plant and they make 12 matching forks. They put them in a box onto a pallet, slide it onto a ship across the Pacific Ocean to a... a port to the train, to a truck, to a store, and I buy 12 matching metal forks. Today, it is already technologically trivial to download a design file, push a button, and print one fork to order. In fact, if Rob wants a picture of his kids on his fork, all I have to do is upload a picture, and voila, I have that one fork.

Now, this sort of silly story illustrates a powerful dynamic: what we will experience over the next decade or two is a hybridization of the global manufacturing supply chain. Today, we have a global supply chain optimized for scale manufacturing at a distance. The larger my plant, the lower my costs. That was because of the constraints and operating realities of industrial age technologies. With AI and all things digital, we'll be able to compress capabilities and create mid-sized plants distributed closer to demand.

You're seeing this already with reshoring, nearshoring, etc. We're going to start to see production of certain products in urban areas where people live and eventually on your counter at home. In the book, we look at different realms of life.

We look at how we work, how we eat, how we create and produce, how we prevent and cure, how we power, and how we defend. And in the final chapter, we look at the two horizons of the 21st century, space and virtual reality. By the way, we've never been there before, but we are definitely going there in this century.

And both space and virtual reality are 100% proximate. Think about it, as virtual reality improves, and I mean in the future when we eliminate the goggles and we have brain machine interfaces, it's coming eventually, we'll be able to have any experience anywhere, anytime. And that's the punchline of proximity.

Anything, anywhere, anytime. Now consider space. A good friend of mine, Dorit Donoviel, is a professor of space health at Baylor University. By the way, that's a pretty cool title, professor of space health. And before COVID, I was telling her about this concept of proximity.

And she said, wait, Rob, you know what? Everything we're doing in the world to help humans live and thrive in space is driving toward proximity. Why is that? Well, when you're on a spaceship to Mars for seven months, you've only got what's on the ship. And so therefore, by definition, all of the research and development going on right now to help us survive through that trip is driving

proximity capabilities. And some of them are already coming back here to Earth. So my second favorite chapter in the whole book, Proximity, is about healthcare. One, because it's so immediately important to all of us in our lives, but also because for me, there's a very personal story. My father, Bob Wolcott, died 20 years ago of an aortic aneurysm at the age of 63, way too early.

Two months before, he had had a full physical, and the doctor said to him, Bob, whatever you're doing, keep it up. You're in excellent health. Two months later, he was dead.

Now, fast forward to today. We have Oura rings and Fitbits and things like this monitoring us, but it's still rudimentary. Imagine in five years, 10 years, when we have systems monitoring every weak signal in our body, every part of our body at every moment, and an AI system is

monitoring those weak signals and analyzing and doing predictive prescription. And it contacts you and says, Rob, you need to call your doctor. And by the way, the doctor might be virtual, but let's set that aside. Contact your doctor. There could be a problem.

And the doctor says, you know what, Rob, it looks like you have stage zero pancreatic cancer. Not a big deal. We can solve it. Compare that to how we are today. We wait until we're doubled over in pain.

We call the doctor. We set an appointment. They run some tests and then discover you've got stage four pancreatic cancer. There's really nothing we can do about it.

So what does this anecdote tell us? Proximity and health care. Proximity compels health care from curing things to preventing things. Now, note, we're not just saying it's better to prevent.

Of course we're saying that, but people have been saying that for 5,000 years, and humans just don't like to do what's right for them. What we're saying instead is that proximity technologies, including always-on health monitoring and certainly AI analytics in the moment, are going to be so much more effective, so much cheaper, and lead to so much better, longer, healthier lives. They will, over time, drive healthcare from curing things to preventing things.

Now, finally, in my travels around the world with the World Innovation Network, TWIN Global, hello to any of our TWINians in the world with us today. We go all over the world to see where innovation is happening. And one of the things I'm seeing is developing economies leapfrogging the developed West. So I'll give you a couple of examples.

In Rwanda, over a decade ago, the government said we need to be better at getting medical supplies and blood all around our country, but the road system isn't yet developed. The traditional model would be to get billions of dollars from the World Bank or other aid organizations to build roads, and take a decade. And they said, we don't have a decade.

You know what? Let's use drones to deliver blood and medical supplies anywhere in the country. And today they do it.

It's called Zipline, and it works. In Bhutan, end of last year, I was hiking over a few miles to a temple complex, a beautiful bucolic area up the side of a mountain, and there were two high-tension wires over top. And I said to my Bhutanese friend, what are those for? And he said, those wires serve the temple complex.

There are only 30 people who live at this temple complex, and there are miles and miles of wires to bring electricity to that complex. And I said, well, why not have proximate power generation? Maybe it's rooftop solar with a battery backup.

And he said, oh, we're already doing it. When we got to the temple, he took me around a corner. You couldn't see this from the temple. He took me around a corner to a small building.

And in that building, they were building a small-scale hydroelectric plant. And earlier this year, they switched it on. And now all the power required by the temple complex and the people living there is provided by this hydro plant. The power lines now just act as backup. And they also allow them to send additional power back to the grid, which they sell to India.

This is proximate power. The key to proximity, and Mike Hruska pointed this out when he said everything we're using is electrified, will be proximate power. The more we can bring proximate electricity anywhere, anytime, any place, the more...

proximity will win. Now, imagine what AI means in this proximate environment. When you can compress capabilities into smaller and smaller packages, access AI systems, think large language models to small language models, and have capabilities and diagnostics closer to every moment, even the ability to make decisions as we start to cede authority and agency to AI systems, this by definition will drive proximity, will drive us to create, produce, and experience anything, anywhere, anytime. If you read the book, and I hope you do, please reach out to me.

I'd love your thoughts regarding this proximate future. And I'd ask you to consider for yourself and for your company, your organization, what is our proximity strategy? In the appendix to the book, we offer a proximity strategy workbook.

Again, Thanks so much to Gerald and the Solution People for having this extraordinary event. I look forward to hearing from you. And I ask you, what will your proximity strategy be? Thanks. I love it.

You end up with a question, Rob. And a lot of people on chat were talking about your book. So I put a link in the chat directly to Amazon so you can order the hard copy. Thank you.

The book or any other resources and so forth. I've also put... Rob's LinkedIn profile in the chat so you can connect up with him. And as you can see, we are making progress with hearing from dynamic speakers from all over the world. And today we are ready for speaker number five, Jason Kaufman.

Jason is CEO of Erivo. And his topic is AI in action, real world demonstrations to supercharge your productivity. So there we go, Jason.

It's your turn to share your wisdom in 15 minutes. Well, thank you very much. I really appreciate that. I really have been looking forward to the opportunity to present to all of you. One of the things that I've found, you know, over the

past couple of years that's really resonated with my audience and the clients that I work with is showing it in action, you know, actually showing demonstrations of what can be done. You know, I do talk, of course, and we will talk a little bit here about, you know, some of the higher level concepts, but I really want to try to reserve as much time as possible for the demonstrations. So we're going to look at navigating change.

with the concept of AI intuition, prompt engineering, curating prompts, but then, as I mentioned, getting into those use cases. And I really found that one of the things that is really important, especially in these first couple of years with generative AI hitting the mainstream, is understanding change, right? And really embracing it. And I was reminded of this book that some of you are probably familiar with; it came out, gosh, about 20 years ago. It's this parable about these mice, right?

They're in this maze and they're starting to run out of cheese. There are those that sort of held onto it and stayed in that one place and didn't explore that ran out of cheese, right? There were those that went through the maze to try to find where that cheese is.

So it's a great little parable, but... Considering how AI has really impacted all of us in all of our various industries, it's definitely something to take a look at. So in my industry, I've been in knowledge management consulting for over 20 years.

We've done technical writing, staffing, knowledge engineering. And when generative AI, ChatGPT, hit the mainstream, I had to reassess everything, right?

It flipped my world on its head. Here I was in this space where I built an entire business around writing content. And now we've got a tool that can write in seconds what may have otherwise taken someone hours, if not days, to create a draft.

And so I had a decision to make. I could either lean into it and learn everything that I possibly could, or go sell hammocks or something on the beach and fulfill that lifelong dream. So obviously, I chose to embrace it and to really try to anticipate what this means to my business, my customers, and the industry as a whole for knowledge workers. So really, letting go of that fear, keep moving, be proactive.

And really, a big part of that is just experimenting. There are so many tools out there now. These are just a few, but... ChatGPT has over 200 million weekly active users at this point. 2,000 generative AI tools currently available on the market, more than 2,000.

This is just a small sort of sample of some of those larger tools, larger models. And you'll see DoEasy AI. It's something I've mentioned in the chat there as well. It's our own product. So one of the things that we've done in speaking with our clients is really started to identify some of those gaps.

What were their actual needs? And we found opportunities to come at AI in a very creative way, an intuitive way. So I'll show a little bit of that today as part of our demonstrations.

So I like to start out at a really high level. I don't like to assume that everybody knows what prompting is, what prompt engineering is, or what tools there are. I like to really step back. This is the model that we use.

It's the TRACI model, a prompt engineering framework. It has to do with giving the AI a task and explaining that task, giving it a role or persona, telling it who the audience is for the output, creating an understanding of exactly what you want that output to be, and then the intent. It's a shame that in this acronym the intent comes last, because it's so important to give these AI systems context as to who you are and what you're trying to do.

Saying things like, my boss wants this by Wednesday. You wouldn't think to necessarily put that in, but sometimes that helps, gives that sense of urgency to the prompt. It's not like a Google keyword search, right? It's not just put something in and find your answer.

This is really about helping it help you. Prompt engineering is really that act of explaining to the AI how it can help you. If you're not sure what it can do, ask it what it can do, which is not always entirely intuitive.
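
One way to read the framework described above is as five fields assembled into a single prompt: task, role, audience, the output to create, and the intent. The field values below are made up purely for illustration.

```python
# Rough sketch of assembling a prompt from the five parts of the framework
# described above (task, role, audience, create/output, intent).
# The field values are hypothetical.

parts = {
    "role": "You are an experienced technical writer.",
    "task": "Draft an outline for a troubleshooting guide for our mobile app.",
    "audience": "The audience is non-technical customer support agents.",
    "create": "Produce a numbered outline with short section descriptions.",
    "intent": "I'm preparing this for a release next week and my manager needs it by Wednesday.",
}

prompt = " ".join([parts["role"], parts["task"], parts["audience"], parts["create"], parts["intent"]])
print(prompt)
```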

So one of the things I wanted to also speak about was curating prompts, right? So you, if you've used AI at all, you probably have a set of prompts that you like to use over and over again. If you're working in an organization, you may have multiple prompts that people use. I really encourage everybody to start a prompt library, even if it's something as simple as a spreadsheet, just tracking them, curating them.

Because these prompts you'll refine over time. You know, what worked last month may not work this month. These models change as well.

So keep that prompt and really treat it as content. For those of you who work in the knowledge management or content management space, you're definitely familiar with content repositories and knowledge base applications. Treat these prompts as content and give them that life cycle for continuous improvement.
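
Even a flat file gets you started. Here is a minimal sketch of a prompt library kept as plain data; the field names are arbitrary, and a spreadsheet works just as well as JSON.

```python
# Minimal sketch of a prompt library kept as plain data. Field names are
# arbitrary; the point is that each prompt gets an owner, a version, and a
# review date, like any other piece of content.
import json
from datetime import date

prompt_library = [
    {
        "id": "tech-writer-revision",
        "title": "Revise documentation to style guidelines",
        "prompt": "You are a technical writer and copy editor. Revise the text I paste next...",
        "owner": "docs-team",
        "version": 3,
        "last_reviewed": str(date(2024, 10, 1)),
    },
]

# Persist it next to your other content so it goes through the same review cycle.
with open("prompt_library.json", "w") as f:
    json.dump(prompt_library, f, indent=2)

print(f"{len(prompt_library)} prompt(s) saved")
```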

So that's one of the things that we do within our product, DoEasy. We really try to marry up knowledge and AI, because there's such an obvious connection there. One needs the other. One can inform the other, and both can improve each other.

So, like I said, I want to save as much time as possible to really demonstrate some things here. So I'm actually going to jump into ChatGPT.

I've got some preloaded discussions that I'm just going to continue here. But here's an example of a prompt within ChatGPT. I've given it a persona: you're a skilled technical writer. I tell it the intent, who I am.

You know, I'm a technical writer who needs assistance in ideation around a document I'm tasked with writing. And what I've done, actually, in the prompt is say, hey, don't start until I say, let's go, just for demonstration purposes here. But I'd like to draft a document on APIs for my new software product. So let's say you're just starting documentation for this new product.

And maybe you haven't done this type of documentation before. Well, I've written a prompt here. And you can write these and customize these any way you want.

But I basically said, hey, ask me this first question. And based on that, I said, use your intuition, right? Which isn't something that you would necessarily think to ask an AI or tell an AI.

But it's been a very powerful tool. As I say, use your intuition and basically fill in the blanks. Answer the rest of these questions for me. So it does. And what's great about this is I really believe 90% of the benefit we get from these types of systems are drafts, ideas, and research.

I'm not looking for the answer. And I think that's where a lot of people kind of get stuck. They get stuck in that, well, it gave me the wrong answer.

So what? If it was 90% correct and it got me to the 10-yard line, I can bring it in for a touchdown. That's the real value of the stuff. So in this particular case, kind of going through and looking at this, it's going to get me unstuck.

And that was the intent of this prompt. Just looking at a different way, different ideas, things that I may have otherwise overlooked. So just ideas. So this one's kind of fun. So I'm going to click on a customer support article from Canon.

USA. I'll just say it's somewhat lacking. So I'm just going to go ahead and copy this.

I'm going to copy all the text from this. And I'm going to go to ChatGPT. And I've got another preloaded prompt here. You're a technical writer, copy editor.

I'm a technical writer who needs assistance in revising documentation and so on. I've given it guidelines. Again, this is some of that context, right?

Telling it how to do it, how I want that output, what I want it to do. And one thing that I'd really pass on as a powerful tip: use AI to help you do the research to build better prompts, to gather the context to include in prompts.

If there's nothing else you take from this, please take that. It's a very powerful tool for researching this kind of context that you can later add to your prompts. So I just pasted this in.

And I'm going to go ahead and submit that. And here we go. So it is rewriting this entire document or page based on the requirements that I just gave it.

It's even formatting it pretty well. And I'll just show them side by side here. And I asked it, tell me what edits or adjustments you made.

So some people may be a little bit leery to actually just go, OK, go rewrite it for me, and then I have a draft. What's great about this is you can tell it, don't rewrite it, just give me a report on what you might change, so that you can make the decision yourself instead of just having it go into that black box. So let me just kind of cruise along here. I'm realizing I've only got about four or five minutes left.

Translation, super powerful tool for translation. So I'm going to take a copy. And forgive me for skipping around. It's just the nature of demos. I'm going to take a copy of that.

I'm going to put it over here. I'm going to say, so I'm going to ask it to translate what it just wrote into Mandarin. So I basically just, again, copied this whole context.

It can be ugly, which is fine. It's just text. The great thing about these generative AI systems is this is all they need. They don't need formatting.

Although Markdown can help give it a little additional context, it's not required. So you can see here, it's now rewriting this in Mandarin. And it does a really good job. I've actually played the telephone game with it quite often, where I'll go from one language to the next to the next, and then circle back to English and compare A, B, how it did.
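
For readers who want to reproduce that "telephone game" check, here is a rough sketch using the OpenAI Python SDK: translate out to another language, translate back, and compare the two English versions by eye. The model name and example sentence are assumptions.

```python
# Rough sketch of the round-trip ("telephone game") translation check
# described above. Assumes the OpenAI Python SDK and an assumed model name.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

original = "Turn the camera off before removing the memory card."
mandarin = ask(f"Translate into Mandarin Chinese, keeping the meaning exact:\n{original}")
back = ask(f"Translate into English:\n{mandarin}")

print("Original :", original)
print("Mandarin :", mandarin)
print("Back     :", back)  # wording may shift; the meaning should not
```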

The logic remains the same. Sometimes the words are changed, but the key spirit of the information definitely remains intact. This one I'm going to go ahead and show you quickly here, distilling community feedback. And I'll just run it real quick here.

So I've asked in this prompt for it to go out to this website, and this is actually a website on Apple's discussion page. And I said, go through this, look at the content of that page, and find opportunities where customers are asking questions. Then draft an article.

that will answer that question for them. That's super powerful. And if you can imagine this happening real time, you've got that agent for your company, like on that discussion thread, that's just looking at the information and what users are talking about, what's working for them, what's not, and drafting real time knowledge that of course a human should verify and vet, but that could be published out within minutes of a new issue happening.
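
As a rough sketch of that feedback-to-article idea: pull a discussion page, ask a model to find the customer questions, and draft an article for human review. The URL below is a hypothetical placeholder; a real forum may require an API or login rather than a plain GET, and a person should still vet anything before it is published.

```python
# Sketch of drafting a knowledge-base article from community feedback.
# The URL is a placeholder; the model name is an assumption.
import requests
from openai import OpenAI

client = OpenAI()

url = "https://example.com/community/discussion-thread"  # hypothetical placeholder
page_text = requests.get(url, timeout=30).text

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": (
                "Below is the raw text of a customer discussion page. "
                "Identify the questions customers are asking, then draft a short "
                "knowledge-base article that answers the most common one. "
                "Flag anything you are unsure about for human review.\n\n"
                + page_text[:20000]  # keep the prompt within context limits
            ),
        }
    ],
)

print(response.choices[0].message.content)
```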

It's a really powerful use case. So I'm not going to have time to get through all of these, but Gerald will be sending out a document that's got, not links, but the actual text of all of these prompts.

Take them, use them, customize them any way you'd like. Feel free to reach out to me directly. I'm always excited to talk about different use cases. But quickly here, we do provide consulting. One of the things that we'll do often is a discovery session and workshops for our clients.

Again, showing them the real world applications of this is really key. That's where I believe everybody really gets excited. You know, it's no longer sort of necessarily theory. It's application. And that's where I get excited as well.

And then there's our DoEasy product as well. So it is a... knowledge management system, as well as a prompt management system.

And it uses AI to help you use AI, you know, which again, is sort of up leveling there, right? I don't think that people necessarily have to know what model is the best model. I don't think they have to know what all it can do. People should just be able to ask for what they want.

And that's what I've really tried to build in this product. I do have some upcoming events I wanted to share. There's a Swarm community event I'll be presenting at Lavacon next week, TC World in Stuttgart the following week, and Enterprise KM World in Washington, D.C. in November.

If any of you are in those areas or attending any of those conferences, please feel free to reach out. I'd love to talk more about AI with you. Well, thank you, Jason. Thank you. That was wonderful.

Hands on. And it was nice to see you in ChatGPT using your prompts. And I want to remind everybody that if you look in the chat, you'll see Jason's handout with all the prompts.

And you will also see a link to his LinkedIn profile to connect up with him. And speaking of LinkedIn, I encourage all of you to join the Artificial Intelligence Innovators Summit group, which has a million members; you'll be getting an email about that tomorrow with a link to it. As you can see on the screen, we are now at our sixth speaker, and we're saving the best for last. We have Andrew Soltis, and he is the founder of Dragonfly Rising and an alumnus of Google, where he worked on AI as well.

And his title is Navigating the AI Revolution, Maximizing ROI. So we're going to hear about ROI. I like that.

So it's your turn. Thank you very much, Gerald. Okay. So Navigating the AI Revolution, Maximizing ROI. I'm going to tell you a story about the dragonfly.

I promise to try not to put you to sleep. So if you want to close your eyes, you can, and just listen. But let's engage with the dragonfly. If you think about a dragonfly, it starts its life underwater.

It may be underwater for months to years, molting up to 17 times. When it gets close to that final molting, it moves to the shore, finds a reed to crawl up, starts to dry, starts to expand its abdomen, pushes away its exoskeleton, opens its legs and its wings, and waits for them to dry. And this whole process has only taken about three hours, after it was in the water for months to seven years.

In about three hours, it dries and takes flight, breathing air and hunting. So here is something that goes from hunting, living, and breathing underwater to flying, hunting, and breathing air.

And its lifespan in the air and when it's above water is weeks to a couple of months. And for many of them, it's only weeks. If you think about transformation, there's probably nothing more transformational than what the dragonfly goes through.

So when we think about where we are today as enterprises with everything happening in AI, this is a massive transformational time that we're approaching. Think about that, and think about the dragonfly as symbolic of your type of transformation. So let's now pretend that we're all on an IT team, data science and engineering.

We're working at a mid-sized regional bank, and we're tasked with enhancing fraud detection. Our company does about 100 million transactions a year. We have fraud at about 0.1%, about 100,000 fraudulent transactions, and the cost is about $500 per transaction. So our loss is around $50 million.

So here we are on this team, and we're thinking about how are we going to solve that problem? So we start to think about this use case, right? We're going to implement some AI-powered fraud detection system. We then have this initiative that, okay, we're going to reduce fraud losses by 32%.

The use case we're going to build is enhanced fraud detection. And we want to be able to reduce the loss from fraud by 32%.

That's what we're going to measure. So we set out on our way and we're going about doing this. And as we start to progress, we start thinking through of how we're going to deliver this.

We use all these latest technologies. You've heard a lot of great presentations today about all sorts of AI technologies from quality and testing to prompting to things being in proximity. It's just all different ways of thinking about it, augmenting how you work.

And then when we get there, we start to think about how we measure success if we're going to do this. The typical ways: we may think of precision or recall or lift. Those are the typical ways that we, as a data science and data engineering team, might measure the success of the use case we're building. These are some of the typical measures.
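
As a quick sketch of those data-science measures, here is how precision, recall, and lift fall out of a confusion matrix. The counts are illustrative; they anticipate the fraud example worked through later in this talk.

```python
# Quick sketch of the typical data-science measures mentioned above,
# computed from a confusion matrix. Counts are illustrative.

tp, fp, fn, tn = 60_000, 140_000, 40_000, 99_760_000  # caught fraud, blocked ok, missed fraud, passed ok

precision = tp / (tp + fp)            # of what we flagged, how much was fraud
recall    = tp / (tp + fn)            # of all fraud, how much we caught
base_rate = (tp + fn) / (tp + fp + fn + tn)
lift      = precision / base_rate     # how concentrated fraud is in the flagged set

print(f"precision={precision:.1%}, recall={recall:.1%}, lift={lift:.0f}x")
```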

And we go through there and we start to build it out. And then we get to, well, what do we do? So now we've built it, but how do we actually quantify this value to the business? There's probably not a lot of business people that are going to understand what precision or recall means.

And does it matter? Precision is a really blunt tool. So can we actually quantify this to the business?

Can we help them understand that because our model is accurate, we are able to reduce fraud? We'd be really hard pressed, right? So we have this mismatch, if you will, between what we're used to doing as a data science and IT team and what the business understands. So when you think about transformation, as the IT side, we need to be able to reach across the proverbial aisle and start to work with the business. So then we say, okay, how do we do this?

And really, we're not alone, right? There's this AI value realization challenge that everyone's facing. Estimating and demonstrating business value is the number one AI adoption barrier. This is a recent Gartner study.

It's the number one barrier. Not the number one barrier to people experimenting and doing AI, but to putting it into production. You can't demonstrate the value. This is fascinating, from a recent IBM survey: on average, organizations show about 5.9% ROI on their AI initiatives.

And at the same time, the average cost of capital for companies is about 10%. So you're really better off in many cases, putting money in the market. In the best organizations, the top 10%, they see about 13% ROI. And so it's not that it is, wow, GPT doesn't work or these algorithms don't work.

or the AI is wrong or we have these problems. This is organizational, right? This is where, as organizations, we need to transform how we do business together and how we work together. So now you look at this dilemma from the top: there's a lack of accountability, right, of who's responsible and how we measure this.

There are clearly unclear business results. We may be reporting back to the business precision and recall, confusion matrices and all these things, and they're not sure what we're talking about.

Right. And then because of that, we can't demonstrate these business results. Well, then there's confusion of implementing it. And do we even put it into production?

Do you get more funding? I mean, we can't measure it. We can't understand it.

So then we're probably not going to put it in production. So you see a lot of testing and using of it, but not a lot of production and clearly not a lot of ROI. So we need to set out of how do we fix this and how do we fix this collectively as an organization?

So now let's imagine the data science team that we're on. And I'm guilty of some of these things we're talking about as well; I'm not saying that I've never been there. I've been on both sides of this fence, on the engineering side and on the business side.

When you look at this, our new team needs to be executive-led, with sponsorship, and cross-functional. So we need to make sure that our business teammates are upskilled on AI, and our tech teammates are upskilled on business. We all need to be speaking the same language. We need to understand the business perspective and the business that we're in. And the business colleagues need to understand AI, not down to the level the data scientists go to, of what precision or recall is or how you implement some of these technologies, but they need to understand the capabilities.

They need to be literate, right? So we need AI literacy across the board, and we need business literacy really from the technology side, right? We need to transform and start working together.

Now, if we do this and we start working this way together, how do we measure success? If we're in this room together and it's not just a bunch of engineers, now we're going to focus on the business metrics and really drive towards the idea that measuring success means business metrics.

So let's say in this case, it's false positives for fraud. Well, that means that my transaction is mistaken as fraud. Now as a consumer of a retail bank? Am I going to be upset if my transaction gets denied because they think it's fraud?

I'm going to be frustrated, but I'm also at the same time going to be happy that they tried to prevent it. They thought it was fraud. So I'm going to be happy that that happened, even though I may be inconvenienced.

I know that they're trying and I'd be a lot more upset if they let fraud go through. Then you got the false negatives, right? Where an actual fraudulent transaction gets through.

So now as this collective team, this is how we're going to measure. So now let's take that example that we had from when we were just a data science team, and let's kind of work this through as now part of the larger team. So now we make this assumption, our model is 99.8% accurate.

So we're okay with it flagging that 0.2%. We've got a lift of about 300, so fraud occurs 300 times more often than the average within that 0.2%. So if we look at this: we block 200,000 transactions; the percentage of blocked transactions that are fraudulent is 30%; the fraudulent transactions blocked, 60,000 of them; the false positives, 140,000; and the false negatives, 40,000.
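
Working those quoted figures through in code looks roughly like this, under the simple assumption that every blocked fraudulent transaction saves the full $500 average cost; the savings figure quoted on the slide may net out other costs, such as handling the false positives.

```python
# Simplified sketch of turning the quoted model figures into business terms.
# Assumes each blocked fraudulent transaction saves the full $500 average cost.

transactions   = 100_000_000
base_fraud     = 0.001          # 0.1% of all transactions are fraudulent
flag_rate      = 0.002          # the model blocks 0.2% of transactions
lift           = 300            # fraud is 300x more common in the blocked set
cost_per_fraud = 500            # dollars

blocked        = int(transactions * flag_rate)             # 200,000 blocked
fraud_in_block = base_fraud * lift                          # 30% of blocked are fraud
true_pos       = int(blocked * fraud_in_block)              # 60,000 frauds stopped
false_pos      = blocked - true_pos                         # 140,000 good transactions blocked
false_neg      = int(transactions * base_fraud) - true_pos  # 40,000 frauds missed

prevented = true_pos * cost_per_fraud
print(f"blocked={blocked:,}  TP={true_pos:,}  FP={false_pos:,}  FN={false_neg:,}")
print(f"gross fraud prevented: ${prevented:,}")
```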

So here now we could clearly see that if we look at it this way, we just saved as a bank $16 million. And so we're able to articulate and we're able to actually drive and understand how we're contributing to business value. So now we look at this use case, you know, and the same thing. Now, can we actually confirm this to the business? Now we absolutely can, right?

Now we know that yes, we can quantify it, and we can state that we're able to reduce the loss from fraud by 32%. So we're able to do this. So it's imperative that we start to change as organizations, and that we really start to work together on building these use cases, understand how to connect the dots between IT and business, and really drive towards business results.

So when you think about developing these impactful use cases, first questions, right? And there's a lot of questions that you need to get answered. But the very first are, are we even solving the right problems? Are we aligned with strategy? And if we are, okay, now we have some candidates to analyze, how do we solve these problems strategically and responsibly?

And how do we tie this together? You can think of this as kind of like managed AI. And how do we do this? As you look across all of the different initiatives that you have, you'll encounter, or you may embark on, different AI initiatives across these four general areas: whether you're doing things to support revenue, driving towards operational excellence, innovating, or reducing risk.

Those are usually the big buckets where you'll see yourself pursuing different key AI strategies and directions. With that, though, the thing you have to do as you go from use case up to the different buckets of initiatives is make sure your AI strategy and your organizational business strategy come together. They have to meet in the middle. So we have to have this happening.

It's a way we get everyone involved. This is a team sport. So when you look at what we've been doing at Dragonfly Rising, we really have this overall, comprehensive solution for it.

There's a school of data, teaching people, executives, and others how to do this: one, how do you improve literacy, but also how do you start to build these high-impact use cases? And then there's a software product called Terrain, which helps you map these use cases to your strategic objectives and your ROI goals, helps you compute the ROI that you're getting from these initiatives, and really makes sure that everyone's engaged and everyone's accountable across the whole thing.

As you get started in this, I have a free AI maturity checklist. You can scan the QR code there or go to the link at the bottom, and you'll get a checklist of, hey, here's where we are in our maturity, here are the boxes we need to check off. I'll leave it at that. If you want to continue to dig deeper into mastering how to build these high-impact use cases,

I really strongly encourage you to sign up for our workshop. It's November 19th. It's an hour-long live workshop. It's a whole bunch of things that you'll learn from that.

There's an opportunity to extend that and have four small group coaching sessions and access to an AI maturity assessment as well. And during this, you'll get kind of an exclusive ROI use case framework that's provided. So feel free, please register. It'd be an amazing time for you.

And with that, let's optimize this together. You can connect with me on LinkedIn. If you're interested in doing professional speaking as well, please submit a request via email or there.

I just started a podcast; it's the first week of it, so it's just getting off the ground: AI demystified for executives. Really, again, filling in that gap and helping people understand how to apply these technologies.

So thank you very much for your time. Thank you, Andrew. That was an outstanding presentation.

Congratulations on putting together a workshop that you're going to do for an hour. That'll get people more engaged. And I want to thank you all for staying with me so far on this AI Summit. We are...

about to wrap up one part of the summit, which is a presentation from the keynote speakers. And I want to invite you to join in for speed networking. And if you look in the chat, notice there's a special link that you'll need to click on to join.

So here we have the summit finishing up. We've had three hours of time together, five speakers spoke for 15 minutes each, one speaker spoke for 45 minutes, and you got lots of handouts from most of the speakers that you can use to further your learning and connect with people as well. I do encourage you now to copy and paste the entire chat, if you want, into a separate document so that you can retain the links to people, the connections, and the different comments that were made as well. And as you can see, we now have time for speed networking during those final 45 minutes.

But first, I want to say thank you to the six sponsored guest speakers: Dr. Andy Armacost, Mike Hruska, Anna Roisman, Rob Wolcott, Jason Kaufman, and Andrew Soltis. Thank you for what you shared today, and we look forward to having you back again. And speaking of speakers, if any of you know other people who are experts on AI and might be interested in speaking, our next summit is on the 21st of November, the week before Thanksgiving. And now it is time for us to say farewell.

You can send up some emojis and thanks to everybody. And you connected up with a bunch of people; I hope you make some rewarding discoveries as you collaborate.

And if you want to know the website domain to go to now, it's very simple. It's networkonzoom.com. It'll be two-click registration there. And we're going to take about a five-minute break right now to have everybody switch over to Zoom meeting, which takes a special link. I'm putting it in the chat one last time right now.

You will see the link to go to. And I'd like to thank Robert Sababati of Online Interpreters Worldwide, who has been the producer of this event and kept everything running smoothly. Thank you, Robert.

And I thank all of you for spending the time with the Artificial Intelligence Innovators Summit. Thank you.