Hello, hello. Welcome to Practical Strategies for AI Enterprise Adoption. We're going to give everybody just a few minutes to log in and join and get settled.
But for those of you early risers, those of you that have made it on already, feel free to start engaging in the chat session. We're going to make this fun and interactive. And the first question is: if you feel bold, you can introduce yourself. If not, just throw your favorite Star Wars character in the chat. We might use that for polling later.
But if you're so inclined, my guess is Jar Jar Binks is not going to hit the top of the list as favorites. But maybe a Porg or Chewbacca might make its way in there. So feel free.
But we'll get started in exactly one minute. And maybe just a quick sound check: Klaus and JP, can you hear me okay?
Great. Fantastic.
Well, thanks, everyone, for joining us today. We had hundreds of people sign up for this webinar. So we're going to dive right in and be conscious of time. But for those of you that have just joined and are starting to figure out why people are putting Star Wars characters in the chat window, that's going to be part of the fun, keeping this a little bit more engaging.
But feel free to share your favorite Star Wars character in the chat window as we go through this. And as we dive in, let's talk about who we have joining us today on this webinar.
We have JP Gounder, Principal Analyst and Vice President at Forrester. JP is a prolific writer and speaker, a true subject matter expert on this concept of reshaping the future of work, taking a hard look, certainly, at all the changes going on in the world of AI. His articles and quotes have been featured in prestigious publications such as the Wall Street Journal, the Financial Times, and many, many more. He's appeared on broadcast television, NBC, Bloomberg, and, a fun fact, when he's not helping companies redefine the future of work, he's an aspiring science fiction writer, which is absolutely perfect for this topic.
And then also joining us from Germany, Klaus Schmidt, partner and alliance leader at PwC. For those of you just vaguely familiar with PwC, they are huge: 151 countries, 364,000 people, and one of the leading professional services networks in the world.
Klaus has been helping companies for 23 years and really bringing together three unique perspectives in and around law, tax, and technology. So when he's not busy helping companies with their digital transformation strategy, he's out as a semi-professional skier and mountain biker. He can maybe share a little bit about that mountain biking disaster this weekend, but he's still in good shape and ready to join us for today's discussion. Let's dive in.
We're going to set the context for this. By the way, my name is Steve. I'm the CMO here at DeepL and absolutely thrilled to be part of this webinar as well.
So a little bit of an agenda for us. We're going to walk through some fun facts and market trends on AI. We're going to dive in deep with JP on the enterprise adoption insights, the latest research, hot-off-the-press stuff. Then we're going to get into a panel discussion.
And this is really where we're going to take kind of a Rubik's Cube look at the world of AI, not only from a research perspective, a technology perspective, and a customer perspective, but really kind of turning it on its side. So, you know, you'll get a full grasp of this topic by the time you leave. And then we're going to, you know, quickly walk through an overview of how DeepL fits into this landscape and then open it up for Q&A. So with that, we're going to use the Q&A window on the webinar for all of those questions.
So please make sure you're putting those questions out there. So a couple of things. The recording will be available. There's a tendency to throw a lot of information into these sessions, which is the intent.
We want to make sure that you're getting your money's worth, if you will, out of this session. But the recording will be available. Now, here's a challenge.
I've been on enough webinars. We've all done these in the past. And there is a tendency to gravitate toward email, Slack, distractions. I would challenge you to focus. And I know that's hard because there's a lot going on in everybody's lives and everybody's businesses.
But the reality is, the information we're going to cover in the next 60 minutes may change the course of your business for the next couple of years. And that's a really important decision.
And I certainly want to make sure all of you get the kind of full benefit of the discussion today. So again, recording will be available, but focus, right? Second is questions. So we have the Q&A pod. If you find your way to that and you want to just throw out some questions, throw them out throughout the discussion as soon as they're top of mind.
We'll answer them at the end. But please be, you know, just know that you can continue to put those out there and we'll stack those and get them ready for the end. And again, you know, the challenge here is focus for all of us as we're all bombarded with information.
Some of it AI-generated, by the way. So, you know, you want to stay in tune and really dial into this session today. So, look, a couple of fun market trends in and around this.
90% of enterprise decision makers are going to implement AI for internal and customer-facing use cases in the next 12 months. There's no doubt that this is kind of a tidal shift, if you will, in terms of how enterprises are thinking about it. And that's a good thing.
That's certainly why we're all here today. And many of those use cases certainly see productivity as the leading benefit. There are other benefits: customer support, customer service, you know, driving top-line growth. And JP is going to cover some of those in his deck as well.
But, you know, productivity is certainly one of the key drivers. Now, you know, look, it's not all, you know, rainbows and unicorns, right? There's risk here as we think about this world of AI. And the stats are proving that out, right? 77% of CEOs are concerned about AI security.
What's happening to my data? Is it being used to train models? Is it, you know, going to show up in somebody else's content?
And this is one of my favorites. I love this image. And the, you know, the image that it creates is, you know, these AI smugglers: 78% of AI users are bringing their own AI to work. In other words, you know, the adoption is not just a top-down decision that IT leaders and business leaders are making. It's kind of happening at all levels. And that's certainly something that's an important factor as we think about this topic. A few other things in this world of risk: tech leaders thought their organizations were moving too fast on AI, and not because they didn't want to embrace the new world of possibilities, but because they didn't feel the data was organized in a way that would make them more effective.
So all these things combined give us a fascinating and complex world that we're in today, this moment, and thinking about how do we then, you know, take this to the next step. So with us, again, J.P. Gounder. I'm going to stop sharing my screen. He's going to pick it up from here. And as J.P. is going through his session, please, again, feel free to share questions along the way. We'll get to as many of them as we can during this session.
And if we don't get to them, we're going to pick them up afterwards and, you know, send you a note personally. All right. Thank you for the introduction. Thank you, everybody.
I'm glad to be here. I am JP Gounder with Forrester. As you might imagine, we're spending an awful lot of time talking about AI these days, generative and otherwise.
And I'm going to give you about 25 minutes of our research on how enterprise AI adoption drives productivity at its base. You heard Steve say productivity is a leading metric that people are looking at.
It's certainly a leading metric in the conversations that I'm having and in the data that we are collecting. I would say that generative AI has brought springtime blooms to the whole AI space. And we think of generative AI as a set of technologies and techniques that leverage big corpuses of data.
including large language models, but increasingly other kinds of content as well, to generate something new. It could be an input that you use natural language prompts for, or other non-code and non-traditional inputs. So you're going to be creating text or translations or images or video or code or audio out of this, as you've probably been experimenting already. I say springtime blooms because we have faced AI winters before. Those of you who are longtime watchers of this space know that there have been what have been characterized (this is an academic piece) as AI winters, where things didn't exactly go the way people thought they were going to go.
Some of the fundamental problems that have been solved recently have made AI more conversational, more effective. It's a layer of interaction that makes the whole AI space more useful. That doesn't mean it's always easy, but we think that we're in a spring rather than a winter.
We also believe that, of course, as always, there's going to be some hype. And, you know, you're hearing about this, maybe generative AI is a fad. Ultimately, we don't believe it is. We think that it is going to be something that you'll be using forever. You can see in the newspaper here in 2000, internet may just be a passing fad.
There was a lot of hype around the internet. The internet was not a fad ultimately. And we think the same is true for AI. Why is it so important?
Well, for one thing, you can finally talk to it, and not in the way that you could just a few years ago, right? You can talk to it, it can talk to you, and it can automate important tasks on your behalf.
Again, that could be a translation, but it could be writing something for you or building something for you. And it does become a turbocharger for other things: predictive AI, computer vision, machine learning. They all become more valuable because you can interact with them in these new ways. So what I'm going to cover are some trends in adoption, a little bit about productivity and building the business case (I always get a lot of questions about that as you try to make investments), some risks and opportunities, the skills and preparation that we need for our employees, and what it all means.
So again, I think that AI is in fact fundamentally changing the way that we interact with technology and enterprise leaders are jumping on this as well. It's not a niche use case. Just in two years, we're finding that a massive number across customer-facing and internal use cases here, rounding up to about 90%, as Steve mentioned earlier.
And, you know, there's a lot of dynamism here. People are doing and planning quite a lot. And then there are a few firms that are learning.
Now, that doesn't mean that everything has turned to AI immediately, but it does mean that there is an incredible number of companies that are doing something, and they're ever expanding those somethings to more employees, more use cases, more customers, whatever the use case might be. By the end of 2024, the data that we just collected, kind of hot off the press, suggests that 33% of AI leaders believe that more than half of their organization's non-technical employees, so people who are not developers or in IT, will be using generative AI in some way, in some software that they're using.
That is a very, very fast adoption rate by historical standards. And it speaks to the fact that there is a lot of sort of optimism and sort of proven use cases that can come out. And again, there will also be failures, and I'll get to that in a bit.
There are things that can go wrong, but there's an awful lot that can go right as well. Also, 83% of leaders say that spending on generative AI has actually increased the likelihood that they will spend on other forms of AI as well. Because of that turbocharging effect that I mentioned, applications are going to be built that aren't just LLMs, right?
Large language models and generative AI are often just one part of an application. And so there could be other things built into an application that allow you to use them together.
So that's a powerful driver here. We forecast quite a bit of spending here: $124 billion on generative AI software by 2030. And in fact, this will be broken out into some generalized tools and some specialized tools. Specialized tools can solve specific problems. That could be translation; it could be generating some kind of role-specific output that is tailored to a particular job.
But there's going to be just a lot of growth in this industry over the next few years. So how do you then drive productivity? I can give you a little intro here to how we think about this problem. But let's just start with Bill Gates.
This year at Davos, he said, most of the applications of generative AI are just helping you be more productive; I found it's a real productivity increase. And of course, he knows a bit about this, having run Microsoft during some of its growth periods. But I think more importantly, other leaders agree with him, right? So these are the applications that people want to use generative AI for.
And you can see they loosely fit into two categories. One of them is generating content, which itself you could argue is a form of productivity, and the others are more directly employee productivity, whether that is simply saying, I want to support employee productivity, or enabling some kind of self-service, or even writing code.
There's an awful lot that you can do. Everything that's on this slide, much of it has some relationship to making your employees and your organization somehow more productive. When we ask leaders about the benefits to their organization, this is the percentage of decision makers.
You can see that there's a bit of a long tail here, but they want to increase automation of certain internal processes, improve their operational efficiency and effectiveness. Of course, improve employee experience and revenue growth, but also increase productivity directly and maybe reduce the workforce, which honestly is not the big driver right now.
I just want to warn you, more people say they want to reduce their workforce than are able to do that. But you can see, again, much of this is productivity related, as we've seen from our data. Steve already stole my thunder on this data point here, which is fine.
But productivity really is the leading, greatest benefit in the minds of these decision makers, alongside things like innovation, cost efficiency, revenue growth, or even moving into new markets. All of these things come up in conversations weekly with the clients that I talk to. So when we're able to do this right, we can basically augment any role, and we can do this in a couple of ways. On the bottom, you see there are some horizontal applications.
These would include things like making a translation, summarizing content, summarizing a document, summarizing a meeting, composition, you know, creating a new document. But then there are also specific or functional areas like marketing or design or IT, where you can tailor certain more vertical or role-based solutions, right?
So maybe in sales, AI generation is going to help them produce ideas, use inclusive language, create new content. I've heard many salespeople say that they are using generative AI as an aid to write a rough draft of a proposal or help them compose a more effective email. The same thing could be true in other areas, you know, like a data scientist might use it to produce and share data to train models without risking any personal information, right?
Or in operations, maybe there's some intelligence going on. So part of what you're going to be doing in coming years is figuring out: what are some of the horizontal applications that will apply to everyone or many of our workers? And then what are some of the more role-specific or department-specific use cases that you can use to drive productivity?
Building a business case. Well, this is important because most of us have to justify these investments along the way, and it can help you to sell a project internally. Like you may personally be bought into the idea that generative AI has some benefits when you do it right. But of course, you want to sell it internally and you also want to be able to track whether you've been successful and actually created the value that you had hoped to over time. These are all really smart business goals to have.
So I want to give you a framework for a very simple business case. It doesn't include everything, but here's how you can think about this. You have two different employees, so you may not be giving a license to every employee.
You know what they earn; each of them makes a different salary. You could break that out hourly if you want to. Let's say that the AI solution that you're using in this case is a SaaS solution and it's $10 a user per month. Obviously, you could plug in any number you want here, and it will be different if you're building something bespoke.
But let's assume it's a SaaS offering. Let's say that you're able to find out through your pilot and your trials that you're saving about four hours a month.
Now, it could be higher than that for certain roles, but this is not unrealistic based on the conversations I'm having. You then understand, based on the hourly salary that that person makes, how much they're saving per month and per year, and then you can subtract away the difference. So it's easy to figure out when you input some of these things, meaning the salary, the cost, the time savings,
you can figure out, can I cover my license cost? Okay, it's pretty obvious that for both of these salary levels, it is covering the license costs. Apologies to those of you who are in the EU, but these are in dollars, but it could easily be euros.
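To make that arithmetic concrete, here is a minimal sketch of the calculation, assuming purely illustrative numbers (a hypothetical $10-per-user-per-month license, four hours saved a month, and round salary figures that are not from the webinar):

```python
# Minimal sketch of the license-vs-savings math described above.
# All numbers are illustrative assumptions, not figures from the webinar.

WORKING_HOURS_PER_MONTH = 160   # rough full-time assumption
LICENSE_PER_USER_MONTH = 10.0   # hypothetical $10/user/month SaaS fee
HOURS_SAVED_PER_MONTH = 4.0     # time savings observed in a pilot

def monthly_net_benefit(annual_salary: float) -> float:
    """Value of time saved per month minus the license cost, in dollars."""
    hourly_rate = annual_salary / (12 * WORKING_HOURS_PER_MONTH)
    savings = hourly_rate * HOURS_SAVED_PER_MONTH
    return savings - LICENSE_PER_USER_MONTH

for salary in (60_000, 120_000):  # two hypothetical employees
    print(f"${salary:,} salary -> net benefit of ${monthly_net_benefit(salary):.2f} per month")
```

At both illustrative salary levels the value of the time saved comfortably covers the license fee, which is the point being made here; a fuller business case then layers in the management, security, training, and governance costs discussed next.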
However, there are some other costs that we need to take into account when we're building a business case. Costs of management. Is there someone in IT whose time is being used to manage this?
Are there data science people maybe who are spending a little bit of their time on this? Are there additional security things that you need to put into place? Because obviously that's crucial. Is there training? Well, I recommend that there be at least a little bit of training for any generative AI system that you're rolling out to your employees.
So you need to account for that. And of course, governance on an ongoing basis. So this is a very kind of quick intro to business case building, but I want to just say, this is something that clients are doing, people I talk to are doing. It is an achievable thing.
We will continue to refine our picture as we get more data, but it is something that is worth your time to start thinking through. Note that not everything is easily measured, however. That is definitely a problem we find today.
The best companies I'm talking to are often surveying the people that are using these solutions. They're doing focus groups. Where possible, they're using telemetry data on what features are used or how often it's being used.
and also employee experience surveys to try to understand how much is it being used for what kinds of scenarios and is it making employees feel better about their jobs, more effective, more productive. Ultimately, it's harder to measure things like better collaboration or higher creativity or, you know, maybe you can measure fewer errors. Some of these things are a work in progress and we can't expect to be able to measure everything right at this stage. But every month, I would say, new measurements come available to us.
And it's something that you can start tracking. Put together your wish list and say, things I wish I could measure, because soon you probably will be able to measure them a little bit better. Risks. Steve did a nice job, with his tightrope-walking individual, of pointing out that there's risk here. All AI projects require a degree of alignment in your company.
And this is not just about technology. It's across the data that you use, the technology you use, the business processes, and the people in those business processes. All of these things need to come together in the right way to make sure that you're using this. The people are as important as the technology.
The processes are as important as the data that you use. So when these things don't work, sometimes it's because there is a lack of alignment that is going on. And when we ask leaders about their barriers, certainly things like a lack of technical skill is a fear that they might have. Maybe it's on their IT department, or maybe it's among their workers themselves. Maybe they're concerned about integrating something with their existing infrastructure.
This is a solvable problem that you can look into as you are vetting whether to adopt something. Of course, data privacy and security, particularly in the EU, but really everywhere, we need to be on top of that. Governance and risk, right?
Do you have a good governance strategy for data, for AI in general? How are you adding this? Does it complement?
Is there work to be done? And then employee experience and readiness. All of these things are kind of closely bunched together.
They all kind of come up a lot. Interestingly, it's a long list of things. And as I say on the left, only 18% of respondents mentioned copyright and IP ownership. That's because some of these generative systems are more prone to those issues than others. And not everybody is concerned about that, but it is something to keep an eye on.
Other kinds of concerns that leaders have here, how concerned on a scale of one to five, these are fours and fives, privacy and data protection concerns, right? They don't want to violate laws. GDPR is not just for the EU, by the way.
A lot of American companies follow it because they do business in the EU. Misuse of generative AI's outputs, right? If I'm a salesperson and I'm writing a proposal with generative AI, I have to read through it. I need to edit it.
I need to make sure that I'm an active participant because we know generative systems can make mistakes. So there's a whole bunch of other things. One of them that Steve mentioned as well was the bring-your-own-AI behavior, where an employee goes to a public source and starts using that. Maybe there's some data leakage. Maybe it's just wrong.
It's unvetted by your organization. So leaders I'm talking to are very intent upon enabling their employees with sanctioned tools that have been vetted, that have security in place, and that give them the ability to know what was used.
Ultimately, Forrester's perspective on this is that you're going to be able to increase your operational excellence and ultimately business growth. Business growth is what we want. Over the last 40 years, the productivity picture in the US and in the EU has been very up and down.
There have been times where we made massive investments in technology and we did not get the increases in productivity we were looking for. This has the ability to perhaps be a fulcrum, which is that triangle at the base of the seesaw, that would allow us, if we do this right, to actually grow, to have higher productivity, to even give us time to be more creative and innovative as well. There are some things we need to bring to the table, though.
We need to be skillful and prepared and human adoption of technologies requires training and change management. You can see on the left here that the technology change is happening very quickly, but organizational change for those of us who've been working for a while, we know that it can lag sometimes. It's not always as fast as we would like. So what we want to do is through change management and training and communication, we want to increase the capacity for organizational change, make it something that is faster, something that we can achieve more quickly.
In the past also, we were dealing with different kinds of computing that were rather linear. There was a one-to-one correspondence between the command I gave and the output that I received. So I could press the $80 or €80 button on the ATM, and it will give me $80 or €80. I know what to expect. But in the world of generative AI, we're not always sure what we're going to get, right?
This is the difference between something that's really deterministic, where you have that one-to-one correspondence, and the more probabilistic world, where you're asking a computer to maybe write a document for you or to create an image for you. And I'm sure everyone on this webinar has experienced putting in a prompt and asking for an image or a text.
And what you get is not quite what you expected, and you need to prompt engineer it a little bit more to get something that you're looking for. This varies a lot, by the way. Different generative systems work differently. Some of them have more deterministic quality rather than just being purely probabilistic. But we do need skills to navigate this world.
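For readers who want to see the contrast being drawn here in code, this is a minimal, purely illustrative sketch; the "generative" function is a toy stand-in for a model, not anything from the webinar:

```python
# Minimal sketch of the deterministic-versus-probabilistic contrast above.
# The "generative" function is a toy stand-in, not a real model.
import random

def atm_withdraw(amount: int) -> int:
    """Deterministic: the same command always produces the same output."""
    return amount  # press the 80 button, get 80

def toy_generative_draft(prompt: str) -> str:
    """Probabilistic: the same prompt can produce different outputs each run."""
    variants = [
        f"Draft A responding to: {prompt}",
        f"Draft B responding to: {prompt}",
        f"Draft C responding to: {prompt}",
    ]
    return random.choice(variants)

print(atm_withdraw(80))                               # always 80
print(toy_generative_draft("summarize the meeting"))  # varies run to run
```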
I have a framework I use called AIQ. It's a play on IQ, and it basically assesses how ready we are to use generative AI and other AI systems. Do we have a basic understanding of how these technologies work?
Do we have some hard skills like prompt engineering? Have we received training? We also need some soft skills. I should feel confident about my ability to adapt, or I should feel motivated to learn how to use these systems.
Motivation is hard to teach, but it is part of this picture. And I need to have awareness of ethics and risk and privacy so that I don't violate customers' privacy by sharing their data, for example. We use a whole bunch of these different statements to construct AIQ.
It's 12 statements, and it's a one-to-five scale. We give it to employees, and they're able to say, you know, do they know when to question the output of AI, or do they understand privacy concerns? Are they feeling motivated? And when we put all 12 of these similar statements together, we can segment people into high, medium, and low. Now, understand this is a global survey, actually, US, UK, France, Germany, India, and Australia.
Being in the high AIQ category is actually the starting line in many senses. We want everyone to be in the high AIQ category. And if you're not, and over half of our workers today aren't there yet, we need to put some time into it.
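As a rough illustration of how a 12-statement, one-to-five survey like this can be segmented, here is a minimal sketch; the bucket cutoffs are hypothetical assumptions, not Forrester's actual scoring:

```python
# Minimal sketch of bucketing a 12-statement, 1-to-5 survey like AIQ.
# The 12-item, 1-to-5 structure comes from the talk; the cutoffs below
# are hypothetical assumptions, not Forrester's actual methodology.
from statistics import mean

def aiq_segment(responses: list[int]) -> str:
    """Classify one respondent's 12 Likert answers as low, medium, or high."""
    assert len(responses) == 12 and all(1 <= r <= 5 for r in responses)
    average = mean(responses)
    if average >= 4.0:    # hypothetical cutoff for "high"
        return "high"
    if average >= 3.0:    # hypothetical cutoff for "medium"
        return "medium"
    return "low"

print(aiq_segment([4, 5, 4, 4, 3, 5, 4, 4, 5, 4, 4, 5]))  # -> high
```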
We need to give these folks a little bit of training and help them, because they won't necessarily learn these skills on their own. So a few thoughts before I pass over to our panel conversation about what this all means for everyone today. You know, the chess champion Garry Kasparov has said that neither humans nor AI will be the future champions of chess. It'll be a human with a computer, a human plus an AI, who will be able to defeat any human or any AI individually.
So number one, we need to take the lesson that this is not a replacement, that it is going to be a world of us working with AI to be more productive and to get tasks done. That is the way forward for genuine productivity today.
Number two, we should remain people-centric by looking to increase employee experience. And employee experience is the driver of your success. It is the relationship that an employee feels with the organization and the work environment. Engaged employees who have a good employee experience are in fact more productive, more loyal, more creative. So keeping a human-first lens, giving people tools to help them use these technologies more effectively, positioning these technologies as a benefit that makes your day better because you're more productive, all of those things can work together in a positive way if you do it right. Invest in those skills that I mentioned, right?
So high AIQ is our starting line for successful AI deployments. Let's make sure we are building some time for training and giving resources to our employees. A higher AIQ mitigates a certain amount of risk. If I know when to question the output of AI, I'm not going to just take what AI gave me and send it out in an email or use it in a customer-facing document, right? I'm going to engage with it more carefully.
And of course, AIQ itself will change over time as new kinds of paradigms for computing using AI continue to emerge. And finally, I want everyone to just dispel myths. A lot of employees, 59% of employees, fear losing their job to AI and automation in the next 10 years when we survey them.
And 86% of them fear that other people will lose their jobs. So I might not even fear for my own job, but I'm fearful other people will. And the reality is that we have been forecasting this space for a decade. We look very carefully at this. We'll keep a strong eye on it. There are some jobs that may go away. But for the most part, the number of jobs influenced by generative AI is so much higher than the number of jobs that are lost. And that doesn't even count the certain number of jobs that get created as well. So I thank you for your attention.
I'm going to turn things back over to Steve, but I'm going to stick around so I can be part of panel and Q&A. Fantastic. JP, that was amazing.
So many topics. For those of you that are just kind of getting your head wrapped around the world of AI, I mean, in just 25 minutes, JP covered driving productivity, building your business case, the risks, the skills, your AIQ. This was a master's course in AI. So thank you, JP, for that.
I also like how you really kind of closed it out with like, what does this all mean? Because I think in many ways that the pace and the rate of information coming at all of us in and around this topic can be a little overwhelming. And you're just like, look, you know, don't freak out.
Don't hit the panic button. You know, humans will be a big part of it. I love that quote from Garry Kasparov about, you know, AI and humans working together, because that's exactly what we see, especially at DeepL, in terms of, you know, translation being this kind of cooperative pairing of the technology and the human expertise working together. So look, we're going to walk through the panel questions now.
And for those of you that have, you know, not submitted any questions yet, we're taking those in the Q&A section. So not the chat, the Q&A section.
Please feel free. We have a number of them loading up now, but we'll try to get to as many as we can during the panel discussion. And if you missed our introductions of our prestigious guest presenters here, we have JP, who in addition to being the expert at Forrester is the aspiring science fiction writer, and Klaus, who in addition to his work at PwC is a, you know, semi-pro skier and mountain bike expert, I guess we'll say. Now, if you think about those two passions, nothing better sends a message on what this world is like for all of us in AI, which is: it does feel a little bit like science fiction, and it is going downhill fast. I guess not downhill, but it's going fast.
And sometimes we might feel a little out of control. So with that as a little bit of a backdrop to the personalities behind this, let's tackle a couple of the questions that are coming through. I guess maybe we'll kick it off with you, Klaus.
You see a lot of companies. You see a lot of organizations that are going through business transformation initiatives. Building that business case, I know JP shared a little bit about how that translates into productivity savings.
What else are you seeing as organizations embrace transformation of any kind, especially big technical shifts, and where they're saying, these are worthwhile investments for us and how it's going to impact our business? Are they looking at just a couple of months of improvement? Are they looking out years? How would you say that the companies you work with are thinking about it?
First of all, thanks for your question and thanks for having me. Just to resolve a little bit of what you teased with respect to mountain biking: while mountain biking this weekend, I crashed over the handlebars and broke my chin. So I'm trying to articulate myself as clearly as I can.
But it's probably also a good segue into how this whole transformation journey has felt over the last few months and years, because if you think you have everything under control, you're probably not going fast enough. What we're seeing in the marketplace, what I see in the marketplace, is that everybody is talking about this. It's here to stay.
It will have a severe impact on the companies we're working with, and we have this experience ourselves, being our own client zero, basically. So we pretty quickly introduced a lot of the horizontal tools JP referred to, like Copilot, ChatGPT and things like that, just to avoid the AI smugglers, but we're also working on some specific tools, really fine-tuned models for tax, for legal. I personally think it's a once-in-a-lifetime opportunity to really reshape how this profession looks going forward.
But I think one of the key elements, also out of our conversations with clients, is that we shouldn't look at this as just another tool. I think you need to look at it as a really sophisticated assistant you're getting, and you need to have the ability and the passion to embed it really into your day-to-day workflow so that it really sticks. And as much as I agree with JP's comments around productivity, I think it provides the ability, and that's what we're trying to shape with a lot of our clients, to rethink how work gets done. How can you use those new AI capabilities to deliver additional value above and beyond what you're currently doing? Yeah.
I mean, again, I think that sentiment, that the reason for doing this is reducing jobs, was really low on the list. And if you look at all of those other reasons, whether it was productivity or global scale and growth or treating your customers better, there was a whole list of reasons. And I think if you ask most leaders, most business leaders, CEOs, they probably have a really long list of things that they would like to do, but they just don't have the resources to do it. And they see this as a way to open up those doors.
Now, Klaus, you touched on something in what you're describing that you see in the market around, you know, specialized models. Maybe we just touch on that for a little bit, because JP mentioned it at the beginning as well. People are probably pretty familiar with some of the, you know, household-name large language model tools, but specialized models, the distinction, how you see those shaping out in the market, and how people should be thinking about those. So again, kind of the general-use versus specialized models.
Just to answer that: the generalized tools we have put into the hands of all our people, like Copilot and our own PwC version of ChatGPT, are actually providing productivity gains and have the ability to just improve what you're doing on a day-to-day basis, the simple stuff embedded into day-to-day workflows in terms of preparing for meetings, summarizing meeting notes, coming up with new content and the like.
Versus that, we're building really bespoke models, for example, around tax and legal. That's where we do a lot of investment, but those are really very particular niche use cases. If you want an example: we used Harvey. We were building, on top of Harvey, a bespoke tax model, which we use and also bring to market just around tax; we launched this a few weeks ago in the UK. And if you look at legal work, I think that's also something where it's pretty obvious. I'm a lawyer by background, and I marked up SPAs until very late at night during my career. Now we have specific legal tools available which do all the redlining and which free up time for me to really focus on the complex stuff, on stuff where I can really add more value in the contract negotiations. And I think that's where we see this really going: specific fine-tuned models augmenting the way of working, the job profiles, and the skills we see at us, but also at our clients.
And JP, maybe the same question over to you, but when you talk to customers and clients of Forrester and they're like, hey, I'm trying to get my arms around this world of AI, do they come with a perception of like, I know I need a specialized thing or do they just know like kind of broadly, like there's these things out there, help me find my way? It's a little bit of both, but more organizations come to us saying, we know we need to work in this space, but we're not exactly sure where the real opportunities are. Oftentimes, a horizontal tool can be a good entry point for an organization.
It might be easier to kind of grasp, but some companies may start really specific as well. So there's a lot of variation, just as there is in business strategy. I think, to Klaus's point, sometimes you need to build something that is going to be really specialized to create the value. He used the example of finance or legal.
I mean, you need to equip that model with the right data so that it learns. It has to know what the law is, and it has to be accurate. And that's not something you can go and ask a public generative AI system.
And as we've learned, right, there have been these famous examples of public tools making up legal cases that lawyers submitted and got in trouble for. So, you know, I mentioned earlier that you need data, you need technology, you need process, you need people.
So by looking at those things, these very specific workflows may require specific data. They may require specific technology. Now, you both touched on this in your discussions around, you know, changing the way people work, and that change management is hard. Maybe just share with our audience two or three tips that you've seen work really well in terms of just getting people to, one, not be fearful, and two, you know, kind of embrace it and provide even more value. And then one of the questions from the audience here is around guardrails for BYO-AI, you know, like, hey, how do you keep people from kind of going off on their own and just starting their own little personal evaluation?
So, you know, maybe JP, you first, and then Klaus, over to you on this one. This is a sticky one. Yeah.
So change management always requires some forethought, no matter what kind of change management you're thinking of. One of the things that organizations have been doing is to do pilots where they have a group of people who they can kind of look at and see how is this implementation going, learn some lessons and listen to the employees about what is working and what's not working. When you start to do a pilot like that, you gain some record of success stories.
And those can be popularized throughout the whole workforce as you roll something out. I have talked to a lot of companies who, for example, are starting with a tool. And then they do little videos where they take a less than five minute video of somebody who was in the pilot and make that a resource. So rather than saying, hey, we're going to only use like external vendor driven training, you're going to see how somebody in your own company is using it. And that's very powerful.
So a human approach often is very helpful. On the bring-your-own-AI front, the best thing that you can do is to equip people with formal, sanctioned tools that allow them to not want to go and use those other things.
I mean, many companies are banning, you know, those websites behind the firewall. But everybody has a phone on their desk. If you're really busy and you want an answer, you know, you're going to be tempted.
So I think, you know, part of the reason we saw so many organizations who are in this process, about 90%, is that you need to give people options to take advantage of this technology because they really want to. Yeah, exactly. I'll add on that point, I think.
That was exactly the reason, JP, why we, for example, put those horizontal tools in the hands of everybody: because you can't avoid it. People use it. My kids are using it to do their work.
So why should our staff and team members not use it? I think this is something we did very, very early on. I would also agree that you need to offer really comprehensive training programs where you really upskill your people. And I'll just give two examples of that. For example, we developed, together with King's College, a prompting course for our people and for our clients, really to walk people through how you can get the best out of that technology. And you need to keep on addressing that.
The other point I want to mention is really that you need to gain the trust of the people actually using AI. If you look at these specific models, I think you will always have a human in the loop. For example, if you try to spin up legal opinions, there needs to be a human in the loop, and they need to have the ability to review whatever comes out of the machine and to see the references for where the AI actually took the knowledge from and how it assembled an answer.
I think that's a great point, a crucial point: to actually gain that trust and really have that ability to always have the human in the loop with this final control. And that's, I think, one really crucial element of it. Yeah, no, this is fantastic.
Now, one of the things we wanted to share today, and I just want to briefly bring my screen back, because we've had a number of questions from the audience about, well, why is DeepL talking about this? I know you guys as this kind of fantastic translation tool, and that is true. But one of the things that we wanted to do as part of this is just educate the audience a little bit more on what we're doing in the enterprise space, because this is so critical for us as an organization.
And as we think about this, a couple of things. One is, just last week, DeepL was actually rated the number three most popular AI tool of the year, just behind ChatGPT and Gemini.
So, you know, certainly above tools like Microsoft Copilot and things like that, which we're quite proud of. And, you know, so again, it gives you this sense of traction in and around this technology. But just a couple of words around this, right? So for those of you that don't know, over 100,000 companies are using DeepL as a solution, including 50% of the Fortune 500. So you see this kind of hunger within large organizations, you know, for this kind of technology. And, you know, certainly I'm not biased, really just not biased at all, but it's the world's most accurate translation solution.
Someone had asked in the questions about, you know, how we compare on that. We have an entire team of people just focused on translation accuracy. In fact, as we talk about this within the company, we're continually benchmarking and making sure that we're kind of top of the list. But something that many of you might not be aware of is we also introduced an AI assistant for brilliant writing.
This is kind of a multilingual Grammarly, if you will. But this is something for organizations that aren't really focused on translation. They just want to write better.
This is a tool for that. It's called DeepL Write Pro. So, again, as we think about bringing this all together into an enterprise solution, there are a couple of layers that are really important. One is just this kind of AI-native platform: very high quality, very high scale. Cool fact.
And JP and Klaus, we haven't talked about this, but the DeepL supercomputer, the power behind the DeepL platform, is actually one of the greenest technologies on the planet. We own our own facility around this, and it's 100% powered by renewable energy.
The construction of the place is actually wood-beam construction to reduce cement and steel. And even the heat from the GPUs is actually ported to a nearby company to dry wood pellets for heating homes in the local area.
So it's actually quite impressive underneath the hood. But on top of that, security. JP, you mentioned security is critical for organizations, and we hold ourselves to the highest standards around that. Something that we didn't talk about on the panel, but I do want to bring into the discussion, is this idea of personalizing that kind of content: your brand, your tone, your voice, your style is unique. And I guess, yeah, JP, you did mention this when you talked about training models specifically in and around specific content, and Klaus mentioned it around things like law, but we're talking about going beyond that, to even getting into the words you use and the phrases that capture the essence of your brand. And as we think about that enterprise
picture for DeepL, it's all of those things. And then the applications that sit on top for translation, writing, integrations to the tools and APIs and so forth. So not going to get into a full-blown kind of platform overview, but many of the questions that came up in the panel or in the Q&A were centered around like, what is DeepL's answer for this? So we have some time left. I think we have roughly 10 minutes to get into the Q&A.
So I want to go back to some of those questions as we go through this. By the way, if any of you are interested and you want to see kind of a demo of DeepL for Enterprise, just throw your interest in the chat and we'll follow up. You don't have to do anything else. But as we think about this kind of Q&A discussion, let me just stop sharing my screen.
There we go. We're back to the team. Best practices.
Like, you know, JP, maybe we'll start with you and then Klaus. You talk to customers and people in the market every single day. Words of wisdom that are just like, here are the top three things you've got to do. There's a lot, but here are the top three.
What do you leave them with? Yeah, I think as I've referenced AIQ, right? We want to get people up to speed. Most organizations don't plan enough time and energy for training.
Number two would be that it is a comprehensive program, that it's not just like a training webinar. There's a lot of other things you can do, many of which are not that expensive. For example, organizations using generative AI successfully often have peer-to-peer communities set up internally so that people can share best practices.
They can ask questions. They may have a weekly check-in where they can drop in for office hours. They're sending out proactive tips, like a tip of the week. There's all these forms of engagement, which are not tremendously expensive to do, but that get people thinking, bidirectionally, and allow them to connect to one another.
And then the third thing I would say is, you know, that the organizations that I'm working with are getting their data in order. And that is partly a security issue. It's partly a data hygiene issue.
But, you know, ultimately, it depends on what kind of generative AI we're talking about. If you're using your own documents, well, that's a point of failure if they're not secure or they're not accurate. I mean, you need to do the hard work of data hygiene and data management as a prerequisite to some of these kinds of scenarios.
So AIQ, sort of training slash support community in the broadest sense, and data. Fantastic. Thank you. I would echo some of the things JP also said. I think training is one of the key elements.
You need to really upskill your people and really give them the tools to try and test it. I would say what we see in practice really avoiding most distractions is if companies put some governance around how they go about AI use cases and how they go about pilots. Where we have really good experience is looking for patterns and not for singular, isolated use cases. I think this is really delivering quick results, and then you have the ability to showcase this and get other people excited with those success stories. I would absolutely echo the data point, because to really embrace GenAI capabilities you need to have your data in order, and we're experiencing a lot of clients where this is not yet the case. But you can manage that; you can build these capabilities up in parallel. And I think the last thing I would mention is: take data privacy concerns really seriously and put a solid governance around that to actually address it.
Yeah, I mean, building on that, one of the questions from the audience is around that AIQ method that you mentioned, JP. Is that a suitable method for assessing enterprise readiness? Do you do this as almost like a survey? Do you kind of just use this as a gut-instinct kind of barometer? How would you see companies actually implementing that?
Yeah, it's a tool, right? It's not the only tool in the world. I would never claim that.
It is a survey that we basically give to employees, and we also give a similar version to leaders. So leaders may believe that their employees are more ready than they are, or less ready than they are. That's possible as well.
So it's basically a five-minute survey that we do for Forrester clients. It's very easy to do. And it creates this starting point for just understanding where are my strengths, where are my weaknesses on the employee side, where are the misalignments between what leaders think and what employees are ready to do.
And it's a starting point for then embarking on that part of the journey. But again, it is a valuable tool, but it is one tool that many out in the world. Yeah.
Now, Klaus, this one's going to be for you. As you heard, Klaus described how this weekend, as he was out mountain biking, he tumbled over the handlebars on his bike and, you know, broke his chin. But he's joining us anyway, so a standing ovation for your tenacity. Now, at some point, as you're riding down that mountain, you probably felt like things are going so fast that you can't keep up.
The question here that we got from the audience is, how do we deal with the increasing number of AI tools in the workplace? Because this is only beginning and then the volume and the interest is only going to explode. It's a little bit like the marketing technology stacks of the last 10 years that have gone from a few dozen tools to thousands.
How should organizations think about this without becoming overwhelmed and maybe even paralyzed? I think it should just start with a reference you know I had with one of the Harvey founders one and a half years ago at dinner and he said something to me which still resonates with me he said you need to get used to the fact that that today was the slowest day in your life and since then it very much feels so but nevertheless I think putting some practical guardrails around that I think what we have found is redevelopable if you put in place a central governance, a central team who's really actually redacting those tools, giving it in the hands of the people of your organization. And this is true for horizontal tools as well as for the more specific tools and having the ability to route the right questions and challenges then to the right models and have some kind of reduction in the middle. Otherwise, I know that it's pretty easy that people can feel overwhelmed with all the dozens of different tools and different language models which are out there. But I think this is, from my perspective, really the most practical advice I can give to put something central in place, which the team is actually taking care of that.
JP, your thoughts on this? I guess as you kind of see this accelerating and we're maybe kind of going into that spring bloom, if you will, when the pollen is everywhere. What are your thoughts on how should organizations think about this without becoming a little bit overwhelmed?
Yeah, I think that in addition to what Klaus said, I mean, because I think that you do need to have some governance. These technologies can be misused, right? And if they're used poorly, you can get bad outcomes.
So you want some centralization, but you also want to listen to the users who have the licenses on an ongoing basis. Things are constantly changing. And the conditions of their employment and how they're using these things is changing.
You want to have a good listening mechanism in place to make sure you're getting all of the, you know, inputs that you need to tweak this. You want to continue to identify those use cases where you can learn from one area and port it over to another, right? Because you're not going to be able to do every specific use case at the same time. So what you want to do is to be able to take learnings from this project and move them to another project.
So a degree of professionalization, I think, of methodically evaluating on a business-analysis basis: what exactly works, what isn't working? Is it fixable? Is it something we should pivot from?
And then listen to what the employees are saying. All of those are really good practices. Fantastic. I absolutely agree with that. We collected thousands of use cases, for example, within our own company, and we tried to boil it down to six patterns.
I think this helps just to find the quick wins, and how we scale those use cases and patterns quickly across the organization. And that's also what we see happening at clients. Fantastic.
Well, with that, we are at the top of the hour. And I just want to thank, you know, again, our panelists here. JP, Klaus, you've been fantastic. I really appreciate the insight. I learned a lot.
And certainly, you know, I know from the comments that, you know, our audience has as well. And also, you know, from the over 1,000 employees around the world at DeepL, we want to thank you for joining us, all of our esteemed audience here. Thank you for taking your time to join us on this journey.
We appreciate that. And we certainly want to make sure that we're continuing to provide value for you. So with that, we're going to close it out. Thank you all again. Have a great rest of the day and a wonderful and happy Tuesday.