Tim Sanders here, with my latest thoughts on artificial intelligence for business. C-suite executives have a tremendous fear of not understanding, combined with the fear of missing out. I've never seen so much energy toward AI adoption as I've seen in 2023. It was ignited by the mass adoption of ChatGPT: for the first time, average business people actually used AI for direct benefit, and it changed everything.

But I think this is an important time, and I have this discussion almost every day with executives, to revisit a book from 2019, "The Technology Fallacy." In it, the researchers found that the number one dependency for deploying digital transformations like AI is finding people to do all the work. In the first comment here, I'll have a link to a fantastic article called "AI Is a Lot of Work." It takes many, many people to work on your data, to code, to manage quality, and to provide support and implementation. AI, whether it's machine learning or ChatGPT, is not something you buy off the shelf. In fact, the authors of "The Technology Fallacy" say you've got to treat external talent networks like Upwork as a strategic part of digital transformation if you want to succeed.

Now, the big discussions these days are around how to implement GenAI. And I say, you'd better be ready to find the talent for it, whether that's full-time talent, which is very difficult to hire these days given the optics during a time of uncertainty, or on-demand talent, which you can easily find on platforms like Upwork. But you need to understand exactly what the talent services model for GenAI looks like.

Here's what I think. For almost every organization, especially if you're starting from zero or you've just got a bunch of rogue users on ChatGPT, the first step toward GenAI adoption, particularly if you're adopting it across the critical path, is strategic consulting. By strategic consulting, I mean experts who can help you understand the business opportunities, or the existential competitive threats, and then how GenAI could be deployed across the organization, how to stage the use cases, how to measure the results, and, most importantly, how to connect all the efforts along the data pipeline for real value creation.

After that, I think you need systems architecture as a fast follow, and information security, especially to prevent leakage of intellectual property if you're using open-source solutions. Then, from a tactical standpoint, you need prompt engineers and quality engineers. These can be internal or external. They know what good looks like; they figure out what to take from a ChatGPT output and put directly into production, what to keep out of production, and how to document the feedback. And finally, you need tuning: AI engineers, for example, who specialize in fine-tuning your models and your chatbots so they're bespoke to your own business cases. That's what the services model looks like.

And here's my final point. If you look five years into the future, assuming these conversational agents are as game-changing as we all think they are, you're going to have a new workforce, and this is what it looks like. You'll have a core of full-time team members who are in the judgment business.
Just go read the book "Power and Prediction" and you'll find that as the cost of prediction drops, the value of judgment increases. So you'll have full-time team members who are co-designers, managers, and implementers of AI across the critical path. You'll have a rising population of on-demand talent, especially people with specialized skill sets: think prompt engineers, Python engineers, and so on, plus consultants and security architects. And you'll have a chatbot population. Think about that for a moment: a workforce that is part machine, part human, where you're going to have to manage not just simple issues like integration, but more complex issues like culture. Stay tuned. I'll have more thoughts.