Several months before, we were already talking to Gucci, and they were very much interested in this concept of using AI to help their own call centers, their own service centers. We worked together on what we called the Gucci voice. Essentially, the idea is that we can use AI to empower the client advisors with a distinctive voice, which is the Gucci voice.

Good morning, good afternoon, or good evening, depending on where you're listening. Welcome to AI and the Future of Work, episode 295. I'm your host, Dan Turchin, CEO of PeopleReign, the AI platform for IT and HR employee service. If you enjoy what we do, please tell a friend and give us a like and a rating on Apple Podcasts, Spotify, or wherever you listen. If you leave a comment, I'll be sure to share it in an upcoming episode. I like this one from Kumar in Hyderabad, who says he looks forward to this pod dropping each week and prefers listening to it on the treadmill to his old standby, Ravi Shankar. I have been called many things over the years, but never a replacement for Ravi Shankar. Thank you, Kumar.

We learn from AI thought leaders weekly on this show, and the added bonus: you get one AI fun fact each week. Today's fun fact: Brian Calvert writes in Vox that AI already uses as much energy as a small country, and it's only just beginning. According to the International Energy Agency, the combined energy consumption of data centers, cryptocurrency, and AI represented about 2% of global energy demand in 2022, and this demand could double by 2026, roughly equaling the electricity usage of Japan. Larger models consume more energy for both training and inference; we should consider when and how smaller models are equally effective. For example, training GPT-3 consumed around 1,300 megawatt-hours of electricity, which is equivalent to the annual consumption of about 130 US households. Another approach is only using generative AI when truly generative output is required: generative AI using LLMs can increase energy usage by 30 to 40 times versus alternative AI techniques. We haven't talked enough about the environmental impact of AI, but we will be focusing on that more in upcoming episodes. Of course, we'll link to that full article in today's show notes.
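As a quick sanity check on that household figure, assuming the commonly cited average US household electricity consumption of roughly 10 MWh per year (an assumption on my part, not a number from the article), the arithmetic works out:

$$\frac{1300\ \text{MWh}}{\approx 10\ \text{MWh per household per year}} \approx 130\ \text{households}$$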
Now, shifting to today's conversation. Silvio Savarese is a pioneer in AI research and leads the AI research team at one of the largest and most influential enterprise software companies on the planet. You might have heard of it; it's called Salesforce. Silvio spent nearly 11 years in academia at Stanford before joining Salesforce in 2021 as Executive Vice President and Chief Scientist. His previous research focused on robotics, machine learning, and language models. At Salesforce, Silvio's team has developed AI tools like CodeGen, to assist with coding using conversational AI, and Merlion, for time series intelligence to improve system availability by detecting potential failures. His team is also working on conversational AI for personalized summaries. Silvio is a strong advocate for practicing AI responsibly, which has led to Salesforce becoming one of the most vocal advocates for AI trust, safety, privacy, and security. To learn more, go back in the archives of this podcast and listen to my discussion with K Nunes, Salesforce VP of Research and Insights, from season two; seems like a long time ago. We'll link to that one in the show notes. Without further ado, Silvio, it's my pleasure to welcome you to AI and the Future of Work. Let's get started by having you share a bit more about your background and how you got into the space.

Yeah, Dan, it's a great pleasure to be here. Thank you for inviting me to this great podcast. As you mentioned, I've been in this space for quite some time. I joined Salesforce three years ago; before that I was a professor of computer science for almost 15 years, with the last stint at Stanford, where I led a group of researchers and scientists working on various topics in academia, including machine learning, machine vision, natural language processing, and robotics. Three years ago I had an opportunity to join Salesforce, and since then I've been running the AI research organization as chief scientist. It's a great honor to lead such a top-notch organization, which comprises a talented group of researchers, research engineers, and product managers. It's been a very exciting journey so far.

There have been a lot of academics, or former academics, reformed academics, on this podcast, and I'm most curious to know: how was the transition for you from academia to industry?

The transition has definitely been an interesting step in my career. In general, in academia the scope of research work as a faculty member is rather limited, if you wish. When I was at Stanford I focused on machine vision, robotics, and machine learning, as I said. At Salesforce, the scope of my organization is much broader: we span from natural language processing to language models to time series to software process automation to foundational models. There's a much larger variety of topics to handle, and in a way, following trends and staying up to date on the most recent techniques can be challenging. Another challenge is the structure of the organization. A typical research group in academia has 10 to 15 students, researchers, and postdocs; my group at Stanford back then was around 20 students, and the size of the group is pretty much a function of how much funding the faculty member can raise. But it's a simple organization: it's flat, and it has a very simple objective function, publish papers and work on first-in-class research. At Salesforce the team is larger; there are researchers, there are engineers, there are product managers, so this adds a lot more operational complexity. Another big difference is that in academia the PI has full decision power over the research agenda. In industry, it's important to align the agenda with the business needs, so our AI research organization doesn't operate in a vacuum; it needs to respond and react to the requirements of our product roadmap. It's important that our work still has a lot of scientific value, and we still do a lot of best-in-class research, but at the same time we need to do work that is relevant and brings value to the company.

Building on that theme: your boss, Marc Benioff, is legendary for growing an organization that's very values-based, and yet every three months he's got to get in front of Wall Street and report earnings numbers. How have you navigated the challenge of, perhaps unlike academia, being driven by an organization with commercial interests while still pursuing your work of advancing the science of AI?

This is a good question. As I mentioned, an important mission of research is to develop new AI innovations that can help Salesforce innovate in its products and technology roadmap. The way I organized the research organization at Salesforce is to operate around three major pillars: one is what we call foundational and basic research contributions, another is product incubation, and the third is product innovation.
Let me start with the first one. In the first pillar we really focus on developing fundamental research in the areas I mentioned earlier, machine learning, machine vision, and over the years we have produced an incredible number of influential papers published in top-tier academic and research venues. In fact, some of the seminal work that is currently used in GPT technology came from this very group. We have also produced a large body of software that has been open sourced and shared with industry and the academic community, and some of this work is now popular on repositories such as Hugging Face or GitHub. One recent example I want to mention is our own embedding model, which held first place on a very popular leaderboard site for quite some time, just as a testament to the kind of work we're doing. But this is all foundational work, research work; the output is papers, patents, open source.

Another area of investment is what we call incubation. Incubation is about the research work that goes into prototypes, into proofs of concept, and these are usually done in partnership with our customers. We talk to a customer and say: look, what are your pain points? What would bring value to your business? How can we help? We listen to the customers, understand their needs, and work together on a proof of concept. One recent example was the partnership with Gucci, which started almost two years ago, several months before ChatGPT took the world by storm. We were already talking to Gucci, and they were very much interested in this concept of using AI to help their own call centers, their own service centers. We worked together on what we called the Gucci voice. Essentially, the idea is that we can use AI to empower the client advisors with a distinctive voice, which is the Gucci voice: enhancing those conversations with the history of Gucci and the culture of Gucci when their own customers ask questions about their products. This resulted in a very successful collaboration. After our proof of concept was delivered and they started using it, they reported a fairly significant uplift in conversion rates, something close to 30%. This is an example where an early partnership led to a successful outcome, and Gucci was later considered the customer zero for generative AI at Salesforce.

Then the third pillar is product innovation. In this case, our research work goes straight into the products and the engineering work that is currently on the roadmap for Salesforce. We have a number of examples of how our technology is being deployed. For instance, in one example you mentioned earlier, our own LLM, CodeGen, is powering a current tool that helps developers be more productive and more effective, and there are many other features that have been deployed in the Einstein 1 platform that Salesforce has produced.

I mentioned in your bio that you have a background in robotics. I'm curious to know: when you're developing AI to, let's say, power, we'll call them silicon-based life forms, instead of what you're doing now with enterprise software, powering carbon-based life forms, humans, what's the difference in how you think about AI? And what has your background in robotics done to inform or influence your perspective on using AI for enterprise software?
This is a great question, and actually I've been thinking about that for quite some time; it's been one of my thought processes in this phase of my career at Salesforce. At Stanford, one segment of my research agenda, as I mentioned, was robotics. We built in-house robots that could learn social behaviors, learn social manners from humans, and we designed algorithms that let robots perform complex tasks such as cooking an omelet or making an espresso; these were some of the applications we were testing back then. It's very interesting that many of the techniques powering those capabilities can be transferred or adopted in the space of enterprise software. In robotics, teaching a robot to cook an omelet means teaching the robot to perform a sequence of tasks: take the eggs from the fridge, crack them, mix them, heat the oil, fry the eggs, and so on. It's a series of actionable steps that may succeed or may fail and require some adjustment along the way; it's a process. If at some point you realize the egg is damaged, you have to take a new one. You also have to adapt to the environment, make some changes along the way; nothing is fully scripted. In a way, this is very similar to teaching a digital agent to perform a task with a user, say, "write an email to a customer and describe this new product line." This also requires a series of steps, and some of the steps may fail: at some point, maybe the agent doesn't find the contact information for the customer and needs to escalate back to the human for help. Unlike robots, which operate in physical space, digital agents function in digital space; however, the methodologies for building the brain, the planner, the orchestrator for robots and for digital workers have a lot of commonalities. This is actually where I found that a lot of the insights I had while working in robotics transferred to the new space of digital workers.

I would imagine in some ways programming robots is easier, because they act in ways that are deterministic. You and I, we're messy; we don't act in deterministic ways; you don't always know what we're going to do given a certain set of recommendations. Is it harder to, quote, program humans with AI than it is to program robots?

But that's the thing: programming robots still means that you need to come up with a plan which has to react to the environment. The plan itself might not be messy, but the environment is messy. You might find situations where the robot cannot perform the task because of obstacles, or because the environment is adversarial. Similarly, when you design a flow process for an agent, you might encounter the same kind of situation.
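To make that plan-execute-adapt loop concrete, here is a minimal sketch, not Salesforce's implementation, with all names invented for illustration: an agent runner in which each step may fail, trigger a recovery adjustment, or escalate to a human.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical illustration of the plan/execute/adapt loop described above.
# Step, run_plan, and escalate_to_human are invented names for this sketch.

@dataclass
class Step:
    name: str
    action: Callable[[], bool]                     # returns True on success
    recover: Optional[Callable[[], None]] = None   # adjustment, e.g. "take a new egg"
    max_attempts: int = 2

def escalate_to_human(step: Step) -> None:
    # In a real system this might open a ticket or ping a client advisor.
    print(f"Escalating '{step.name}' to a human for help")

def run_plan(steps: list[Step]) -> bool:
    for step in steps:
        for _attempt in range(step.max_attempts):
            if step.action():
                break                  # step succeeded, move to the next one
            if step.recover:
                step.recover()         # adapt to the messy environment and retry
        else:
            escalate_to_human(step)    # all attempts failed
            return False
    return True

# Usage: a digital-agent analogue of the omelet example.
plan = [
    Step("find customer contact", action=lambda: False,
         recover=lambda: print("searching an alternate CRM record")),
    Step("draft email", action=lambda: True),
]
run_plan(plan)
```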
One of the other things that we take into account when we're designing AI for humans is what could go wrong that might have unintended implications, in ways that could potentially harm humans. Sometimes we talk about the ethics of AI and being very cognizant of what could go wrong and how it might impact the end users. What's your perspective, or even your team's perspective, on what it means to exercise AI responsibly?

In short, it means we want to build AI that is safe for users to consume; that's the short answer. The longer answer is that we want to make sure the AI is compliant with how we define the trusted AI principles. This is actually work we're doing in close collaboration with our partners on the ethics team, with Paula Goldman and other collaborators at Salesforce, where we work closely to understand the most important principles for guiding a safe AI, a trusted AI. Some of the principles include, first of all, accuracy: deliver verifiable results that balance the accuracy, precision, and recall of these models. This is very important for the enterprise, because at the enterprise level you cannot afford mistakes. If you want to generate an email that advertises a certain product, you don't want to give wrong information about that product; it has to be factually correct. The second thing is safety: make every effort to mitigate biases, toxicity, and any kind of harmful output. This requires a lot of work behind the scenes, conducting bias, explainability, and robustness assessments, and this is also where red teaming is very important. This phase is also very important to protect the privacy of any information that could potentially be exposed in the process of generating content. This is particularly true when it comes to generative AI, because we don't know exactly what kind of content will be produced, and the data we inject in the training process absolutely cannot contain any private information about users or customers. There's also an aspect of honesty and transparency: when collecting data to train and evaluate models, we need to respect data provenance and ensure we have the right consent for using data. With customers, the default is that we don't use any customer data for training or building models. Marc Benioff has been saying customer data is not our product, and this is something we take very seriously. There are situations where we do have pilots with customers; in those cases, customers might opt in to sharing some of their data, but we make sure that data is never used for training models, because we never know when a model might spit out, what we call regurgitate (it's not a very appetizing word, but it makes the concept clear), private data back into the output of the AI. We also care about empowerment: recognizing cases where AI should play a supporting role, augmenting humans, helping humans in situations where there's a need for additional support. And finally, sustainability, which also happens to be one of the values of Salesforce: we strive to build models that are not only accurate but also reduce or contain the carbon footprint.
A completely unsolicited compliment, but Salesforce is way ahead when it comes to AI customer safety; it's a really good model. We'll put a link to this in the show notes, but the Einstein Trust Layer lays out a really good, really mature framework for how to think about these things. I'd like to build on a comment you made about Benioff and "customer data is not our product." Through Einstein you do let customers, I don't know if it's through RAG techniques or fine-tuning a model, etc., introduce their own data, which they own and which obviously can't then be shared with other tenants. My question: one of the challenges that AI-first vendors have these days is that there's a lot of credible finger-pointing that can happen when something goes wrong. The customer says, "Salesforce, I trusted you to not let the model hallucinate, introduce bias, etc." And the vendor, I'm certainly not pointing a finger at Salesforce, can credibly say, "We own the platform, we own the algorithms, we told you how to use it, but you introduced the data that had the encoded bias in it." Everybody's pointing a finger, and yet ultimately it's the end user, the customer, who suffers the outcome of these biased results. How do you think about who owns ultimate responsibility when there's plausible deniability all around the table?

This is an excellent point, but I think it's important to make a clear distinction between the consumer space and the enterprise space, to actually create a separation between these two, although of course there are some areas of overlap. Most of the large language models that have been deployed and become very popular so far have been dedicated to consumer applications, for instance companion AI, or assistants: help me write an essay, find this kind of information on the web, write a summary for me. This is helping consumers perform certain tasks, making them more productive, more effective, more efficient. In this case, large language models need to absorb a lot of information, because in order to span from one task to another, from literature to Shakespeare to science to how to grow a bonsai, you need to have all possible knowledge on this planet, and typically this knowledge comes from the internet; that's where those large language models are trained from. Some of this data may be subject to copyright laws, which leads to some legal issues; some might contain biases; some might contain toxicity. At the end of the day, it's very difficult to control what the output can be when all that data gets fed into the models. In the enterprise space, the needs are different. You have very use-case-specific situations: the use case is specific, the tasks are specific, the domain can be specific. You operate in a CRM, in financial services, in healthcare, so you don't have to include all possible domains in one single model. This is an opportunity to build models which are much more specialized, and in fact can be smaller than those gigantic large language models. When you build smaller models, you also have more control over the data used in training. You still want those conversational capabilities, which are important because that's what makes these models so effective in practice, this ability to converse with AI, which is the new breakthrough we have seen over the past years.
But at the same time, the type of output we expect from the models can be aligned with a specific task, and in this case you can use much smaller models and train those models on data that is much more controlled. This allows us to reduce hallucination, reduce toxicity, reduce bias, and align those models with the customers' expectations. When it comes to privacy, that's actually a different story: we never use any customer data for training models. The way the Trust Layer works in the Einstein 1 platform is that all the customer information sits in Data Cloud, which is our own data and infrastructure platform. When there is a prompt that requires injecting some information, personal information, private information, that information comes from Data Cloud and is added to the prompt; through grounding, through RAG and other techniques, we enrich the prompt, and then it gets fed into the model. And the model has zero retention: it doesn't retain any information, any data that goes into it. It's more like a faucet: all the water goes through the faucet, and all the faucet does is regulate the flow; it doesn't keep the water. It's the same thing here: through the policies we've established with external vendors, and with our own models, the data in the prompt is never used for training, so there's no risk that the data we put in the prompt gets used in training. This allows us to preserve privacy and confidentiality. Then, after the model produces an output, there's another layer that checks for toxicity, for biases, for hallucinations. There are some very important approaches and methods that allow us to assess the confidence of the model, to say: OK, I think this is the answer, but I'm not 100% sure; in this case I need extra human validation to assess the quality of the output. These are some of the steps we're taking to ensure we mitigate those issues as much as possible.
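As a rough illustration of the flow Silvio describes, grounding from a data store, a zero-retention model call, then output checks with a confidence gate, here is a minimal sketch. It is not the actual Einstein Trust Layer, and every function name and threshold here is hypothetical.

```python
# Hypothetical sketch of grounding -> zero-retention inference -> output checks.
# None of these names are real Salesforce APIs.

LOW_CONFIDENCE = 0.8   # assumed threshold below which a human must review

def ground_prompt(template: str, record: dict) -> str:
    # Enrich the prompt with customer fields fetched at request time,
    # analogous to pulling from Data Cloud; nothing is persisted.
    return template.format(**record)

def call_model(prompt: str) -> tuple[str, float]:
    # Stand-in for a zero-retention model endpoint: the prompt flows
    # through (the "faucet") but is never stored or used for training.
    return "Draft reply about the order status.", 0.65

def passes_output_checks(text: str) -> bool:
    # Toy stand-in for the toxicity/bias/hallucination checking layer.
    banned = ("guaranteed cure", "slur_placeholder")
    return not any(term in text.lower() for term in banned)

def generate(template: str, record: dict) -> str:
    prompt = ground_prompt(template, record)
    answer, confidence = call_model(prompt)
    if not passes_output_checks(answer):
        return "[blocked: failed safety checks]"
    if confidence < LOW_CONFIDENCE:
        return f"[needs human review] {answer}"   # low confidence -> human validation
    return answer

print(generate("Reply to {name} about order {order_id}.",
               {"name": "Ada", "order_id": "A-123"}))
```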
That's a really thoughtful answer. I've asked versions of that question a lot, and oftentimes you get answers that are not satisfying, but I like that approach. So, you've published good thought leadership around when to use a large model versus a smaller model, and I actually teased that in the fun fact about the environmental footprint as it correlates to the size of the model. You made another good point that the larger the model, the more potentially toxic content might be in it, which could have unintended consequences. Could you summarize the current state of your and your team's thinking about the trade-offs between large and small, and what your research shows?

Absolutely. Large models are in general useful for generic conversational tasks; as I mentioned earlier, they're required for these very broad applications. But one thing that makes those large models powerful is the fact that they can take on any new task very quickly, without much need for tuning, retraining, or restructuring. In our technical terms we call it zero-shot learning: essentially, there is a new task for which you have almost zero examples, and all of a sudden the model knows how to perform the task. This is one of the most powerful and, in a way, unpredicted behaviors of those large language models, something that at the beginning we didn't think was possible but turned out to be possible. In a way, that's the reason there's a lot of excitement in building bigger and bigger models: you hope these models can abstract new capabilities that you haven't necessarily planned for at the beginning. So that's the good thing about large models. On the contrary, small models are useful when you know the task beforehand. You know what kind of things you want to do, what use cases you want to deal with, what domains you want to operate in, and in this case building smaller models makes much more sense; there's no need to operate in situations where the task is unpredictable. Our experiments show that if you reduce the scope and focus on the specific task, you can achieve on-par or even better performance compared to larger models, and I feel that was a big conclusion we reached. And again, operating with small models has the advantage of increasing training agility: we need less data, we have more control of the data that goes into it, and the training process is simpler, less convoluted. Cost to serve is a big plus: when you operate with smaller models, both in training and inference you deal with fewer parameters, which means it's also less expensive; we can run these models on cheaper hardware, and we've done a lot of studies showing that smaller models can give us a lot of computational savings. Latency is a big thing too: the larger the model, the slower the response, and we have seen that sometimes these models cannot operate well in real-time situations where you need a faster reply. If you notice, those big models often produce an output through a technique called streaming: the whole answer is not produced in one shot. There's actually a reason for that: the model takes time to produce the answer, and by streaming the answer gradually you work on the perceived latency and make it less visible to the users. There are also applications where you actually need smaller models. I'm thinking about mobile applications; there's a whole area of applications related to field service where having models on the device, on the client, on the cell phone, is very important. Sometimes there is no access to the network, because those workers operate in areas where there's no coverage, so running everything on the client is very important, and in this case you need small models. On my team we've built models as small as three billion parameters that perform better than comparable open source models in the same category. And of course the footprint: smaller models also have a much smaller impact on the environment.
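A toy sketch of the streaming point, with assumed per-token timings and no real model: total generation time is unchanged, but the user sees the first token almost immediately, which is why streaming makes latency feel lower.

```python
import time
from typing import Iterator

# Toy illustration (not a real model) of streamed token generation.

def generate_streaming(tokens: list[str], per_token_s: float = 0.05) -> Iterator[str]:
    for tok in tokens:
        time.sleep(per_token_s)   # stand-in for per-token compute
        yield tok                 # emit each token as soon as it's ready

answer = "Sure, here is a summary of the account history for you".split()

start = time.time()
for i, tok in enumerate(generate_streaming(answer)):
    if i == 0:
        print(f"first token after {time.time() - start:.2f}s")
    print(tok, end=" ", flush=True)
print(f"\nfull answer after {time.time() - start:.2f}s")
```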
You referenced earlier how AI should be, or in many cases is being, used to augment or complement humans. And yet you and I both live in a world, based on what we do, where there's a lot of apprehension, a lot of hand-wringing in call center and service desk automation, about whether or not AI will eventually be able to replace these kinds of frontline workers. What's your perspective? If we look forward five to ten years, and I believe, like me, you're on team human, you're an AI optimist, what do you think are the things that will never be replaced by AI when it comes to the call center?

Let me step back a little bit and illustrate the two trends we have seen in the space of generative AI. AI has traditionally been deployed for predictive tasks: let's say I have a certain behavior and I want to see what happens next, or I have some information I want to classify, yes or no, true or false. This is how AI has been used in the past, and we use some of these tools in Einstein; Einstein is making billions of predictions per year. But these are predictions, the type of tools used in those platforms, and they've essentially been used to support human users in performing those tasks. Generative AI provides a new set of tools which empower humans to be more efficient, more productive, and more creative as well; we have seen a lot of new tools that can also improve creativity. Those tools are essentially used as single features, if you wish: write an email for me, write a summary for me, build a new image that contains certain content. These are more ad hoc tools. The space we are moving into now, which is happening now but is going to become more and more prevalent, is the space of agents, and here there are two directions we have seen. One is what we call digital assistants. These help humans perform tasks more easily, faster, more efficiently, and the thing is, not only do they do things for you, they also take actions for you. They don't just write an email for you: when you say "write an email and send it to this person," the assistant writes the email, but also finds the contact information, makes sure the person is available, sends the email, looks at the answer, replies, summarizes the answer for you, and gets back to you. It's a series of tasks, similar to what we said earlier about digital workers: actionable steps that the AI takes on your behalf. But the goal is to assist you, to help you perform a task; you can delegate this task to these assistants and meanwhile do other things. Those agents can also be proactive in the future; they can send alerts or do certain things to help you, again making your work more productive, such as increasing resource allocations if, let's say, your compute device is running out of memory. But again, they are supporting the human user.
The other direction is what we call digital workers. Digital workers are fully autonomous agents that perform tasks under the hood, in the background, in full autonomy. These agents are essentially hidden resources for organizations, which creates an opportunity to scale up operations with a very limited budget increase. The tasks performed by these digital workers tend to be repetitive, menial duties; they tend to be very specific, very narrow. For instance: check if new content is available in my repository, create a summary each time, and make it available if I need it. These are the kinds of things that happen in the background, done by digital workers. So here there's an opportunity for humans to start considering different roles and organizing their work in different ways, to act more like an orchestrator of digital workers. Imagine now that you have a fleet of workers that can do things on your behalf, and you have your assistant that can also help you do things. Your assistant is more like a chief of staff supporting your work, and you also have this fleet of workers that can do things on your behalf. Your role becomes more like an orchestrator, more like a manager who helps allocate the resources: save me time on performing menial tasks, and give me time to think about other ways of performing my job that can be more effective, more empowering, and more interesting. But this requires a different mindset. It requires learning new skills, and also understanding what these tools can do for me, what AI can do and cannot do, and when it can be trusted and when it cannot be trusted. There's a lot that's new here. It's not just about replacing work; it's about understanding the limitations of AI, the benefits of AI, and how you can take full advantage of those tools that are emerging in this space.
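As a minimal sketch of that orchestrator pattern, with hypothetical names and not any Salesforce product: a fleet of narrow worker agents runs repetitive duties in the background while the human reviews the results and handles anything that needs escalation.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch of the "human as orchestrator" pattern described above:
# narrow, repetitive duties are delegated to a fleet of worker agents while
# the human reviews results and handles escalations.

def summarize_new_content(repo: str) -> str:
    # A narrow, repetitive digital-worker task (stand-in implementation).
    return f"summary of new items in {repo}"

def orchestrate(repos: list[str]) -> list[str]:
    results, escalations = [], []
    with ThreadPoolExecutor(max_workers=4) as fleet:   # the worker fleet
        for repo, outcome in zip(repos, fleet.map(summarize_new_content, repos)):
            if outcome:                                # worker succeeded
                results.append(outcome)
            else:
                escalations.append(repo)               # the human steps in
    return results + [f"NEEDS HUMAN: {r}" for r in escalations]

# Usage: the human reviews a digest instead of doing each duty by hand.
for line in orchestrate(["docs-repo", "support-kb", "release-notes"]):
    print(line)
```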
So orchestrating digital workers requires, like you mentioned, a different set of skills, but also a different ratio of orchestrators to task owners than the current model. What do you recommend? Let's assume there are a lot of call center agents who are going to be, quote, ratioed out of their current roles, but, like me, you probably believe a lot of new opportunities will be introduced in that void by some of these new technologies. What would your coaching be, let's say to your kids, about the skills that are future-proof, that they can invest in to be the orchestrators of digital workers in the future?

First of all, learn how to use these tools; don't be scared. I've been saying to my kids: if you need help doing your homework, you can use ChatGPT, it's fine, but remember to verify the sources, verify that you can trust and understand where the information comes from. Verification is very important, because remember, AI does hallucinate. The same applies here: I imagine this new generation of workers, of employees, that use AI needs to know not just how to use these tools, but at the same time how to make sure they are still in control; like a pilot with a control panel, you check and verify that the whole workflow is happening the right way. It also doesn't mean the user no longer has to be knowledgeable about the task, because if at some point there is a situation of escalation to humans, the human needs to know what to do and can't be unprepared. That's actually a risk many of us are facing: that we become lazy and think we don't need to understand how things work. We actually need to fully understand how these tasks are done, to potentially help in situations of escalation. It does require a different mindset, a different framework for operating, which is interesting; it's still a work in progress, and I think there's still a lot of work to do to fully understand how to make progress here.

Something we might have to unpack in a future version of the conversation; it's very nascent, but it's an important topic. Now, Silvio, we're about out of time, but you're not getting off the hot seat without answering one last important question for me: who are your role models, and who's influencing the kind of thinking you're doing that could have a corollary impact on millions of lives?

This is a great question. I do follow carefully what the other leaders in the space are saying. There are a lot of people I respect for their thought leadership, who I think are raising their voices about certain topics. I do think there's a bit of a situation where there are two camps, the more optimist and the more pessimist camps in AI: those who think AI can lead humanity into situations of great danger, great risk, and those who are more optimistic. I'm more in the optimistic camp. I actually wrote, with a great collaborator here at Salesforce, Peter Schwartz, our Chief Futures Officer, a blog article on AGI, and we definitely took a very positive approach to this topic. We do believe there are a lot of potential risks with this technology, but there are also great opportunities, and we need to push for and advocate for these opportunities. When people talk about the gloomier picture, I also think it's more important to be concrete. Instead of thinking about AI taking over the world and killing humans, think about the concrete areas of focus we need to have: how to make today's AI trustworthy, how to make sure the type of AI we're building is safe for our customers and consumers to use, and how to make it safe for our kids, most importantly, because they are the new generation of AI users. This is where we should put our focus, and I've been very aligned with many of the voices in this space when it comes to emphasizing the trustworthy aspects of AI.

A lot of people look to you and Salesforce as role models, so I appreciate all the good work that you're doing on topics of responsible AI, trust, safety, privacy, etc. Thank you to you and the team.

Thank you, Dan.

Silvio, we're out of time. Where can the audience learn more about you and the work that your team's doing?

We have a website where we have a lot of content, and I also have a page where I publish all my blogs. There's a lot of content there that aligns with what we discussed today, which I'm sure your audience can benefit from. There's also a new blog coming out on AI agents, which I'm happy to share in the next couple of weeks. That's definitely where you can see a lot of great content for your readers, your audience.

We'll link to all those resources in the show notes.
These are such important topics; we really just had time to get started. Hopefully you'll be able to come back in the future as some of these conversations evolve.

Yeah, thank you, Dan. I'll be happy to answer any more questions.

Appreciate that. It's just such a pleasure; we're all rooting for you and the team to succeed.

Thank you, thank you.

All right, that's all the time we have for this week on AI and the Future of Work. As always, I'm your host, Dan Turchin, from PeopleReign, and of course we're back next week with another fascinating guest.