How would you define AGI to the layman?

How geeky can I get?

Bret Taylor's here, co-founder and CEO of a startup called Sierra. It enables companies to build AI agents that directly interact with customers. How would you define that? What's an AI agent?

The word agent comes from agency, and I think it means affording software the opportunity to reason and make decisions autonomously. That's what we're helping our customers do at Sierra: build conversational AI that does all of that.

One of the unique things about you is that you've started companies and you've been acquired. Talk to me about founders working within a company.

Technology companies aren't entitled to their future success. AI, I think, will change the landscape of software. It's incredibly hard to do something that lasts beyond you, but that, I think, is the ultimate measure of a company.

Tell me about the Google Maps story. This is legend now, and I want to hear it from you.

What was your first real aha moment with AI, where you realized, "Holy [expletive], this is going to be huge"?

I had two separate aha moments. One that I don't think I appreciated at the time for how huge it would be, but which reset my expectations, was the launch of DALL-E in the summer of '22 (is that right? I might be off by a year) and the avocado chair it generated. While my background is in computer science and pretty technically deep, I hadn't been paying attention to large language models; I just didn't follow the progress after the Transformers paper. I saw that, and my reaction was: I had no idea computers could do that. Seeing a generated image of an avocado chair, I don't think I extrapolated to where we are now, but it shook me, and I realized I needed to pay more attention to this space, and to OpenAI specifically, than I had. That was the moment I realized I clearly had not been paying attention to something significant. And then six months later, coincidentally the month after I left Salesforce, ChatGPT came out. Before it became a phenomenon, though it did so quickly, I was already plugged into it, and from then on I could not stop thinking about it. But that avocado chair, I don't know why, was a bit of an emotional moment, where you saw a computer doing something that wasn't just rule-based but creative. The idea of a computer creating something from scratch, which doesn't seem so novel a few years later, just blew my mind at the time.

One of the unique things about you is that you've started companies. You've been acquired by Facebook and Salesforce. Inside those companies, you rose up to be the CTO at Facebook and the co-CEO at Salesforce. Talk to me about founders working for founders, and founders working within a company.

Yeah, it's a very challenging transition for a lot of founders to make. There are lots of examples of acquisitions that have been really transformative from a business standpoint, YouTube and Instagram being two of the more prominent ones that clearly changed the shape of the acquiring company. But even in those cases, the founders didn't stay around that long. That's maybe a little unfair; they stuck around for a little bit.
I think the interesting thing about being a founder is that it's not just building a business; it's very much your identity. I think it's very hard for people who aren't founders to understand. You take everything very personally, from the product to the customers to the press to your competitors, both the inner and outer measures of success. And when you're acquired, there's a business aspect to it, can you operate within a larger company, but that's intertwined with a sense of identity. You go from being the founder and CEO of a company, or the CTO, whatever your title happens to be as one of the co-founders, to being part of a larger organization. And to fully embrace that, you actually need to change your identity. You need to go from being the head of Instagram, or in my case the head of Quip, to being an employee of Salesforce, or from being the CEO of FriendFeed to being an employee of Facebook. What I've observed is that the identity shift is a prerequisite for most of the other things. It's not simply your ability to handle the politics and bureaucracy of a bigger company or to navigate a new structure. I actually think most founders don't make that leap where they truly identify with the new thing. It's even harder for some of the employees, because most of the time in an acquisition, an employee of the acquired company didn't choose that path. In fact, they chose to work for a different company, and the acquisition determined a different outcome. That's why integrating acquisitions is so nuanced. Having been acquired before, and having acquired some companies before, when I got to Salesforce I really tried to be self-aware about that. I really tried to be a part of Salesforce, to shift my identity and not be a single-issue voter around Quip, and to really embrace it. I think it's really hard for some founders to do, and some founders don't want to, honestly. Maybe they cash the check, and it's more of a transactional relationship. I really am so grateful for the experience of having been at Facebook and Salesforce. I learned so much, but it took a lot of effort on my part to transform my perception of myself and who I am to get that value out of the companies that acquired us.

How did it change how you did acquisitions at Salesforce? You did a lot of acquisitions while you were there, and you're acquiring founders and startups. I think Slack was while you were there too. How did that change how you went about integrating a company into the Salesforce culture?

I'll talk abstractly, and we can talk about some specific acquisitions too, but first: I tried to approach it with more empathy and more realism. One of the nuanced parts of acquisitions is that there's the period of deciding to do the acquisition, then the period after you've decided, of doing due diligence, and then the period when it's done and you're integrating the company, and the period after. One of the things I've observed is that for companies doing acquisitions, the part of deciding to do it is a bit of a mutual sales process.
You're trying to find a fair value for the company, and there's some back and forth, but at the end of the day there's usually some objective measure of that, influenced by a lot of factors, but there's some fair value. What you're really trying to articulate is what in corporate speak would be synergies: why do this, why is one plus one greater than two? That's why you do an acquisition, just from first principles, and it's often an exercise in storytelling. Bring this product together with our product, and customers will find the whole greater than the sum of its parts. This team, applied to our sales channel. Or, if you're a Google acquisition, imagine the traffic we can drive to this product experience. In the case of something like Instagram, imagine our ad sales team attached to your amazing product, and how quickly we can help you realize that value, whatever it might be. Because there's a craft of storytelling in getting both sides to come to the same conclusion that they should do this acquisition, I find that people sometimes simplify or sugarcoat some of the realities of it. Things like: how much control will the founding team of the acquired company have over those decisions? Will it be operated as a standalone business unit, or will your team be broken up into functional groups within the larger company? It's those, I'll say boring but important, things that often people don't talk enough about. You don't need to figure out every part of an acquisition to make it successful. But you can end up running into true third rails that you didn't find, because you were having these storytelling discussions rather than getting down to brass tacks about how things work and what's important. The other thing that I think is really important is being really clear about what success looks like. Sometimes it's a business outcome, sometimes it's a product goal. But I've found that if you went to most of the larger acquisitions in the Valley two weeks after they closed, interviewed the management teams of the acquiring company and the acquired company, and asked them what success looks like two years from now, my guess is 80% of the time you'd get different answers. It goes back to this storytelling thing, where you're talking about the benefits of the acquisition rather than about what success looks like. So I try to pull forward some of the harder conversations when I'm doing acquisitions, or even when I'm being acquired, since it's happened to me now twice. So that when you're approaching it, you not only get the "why is one plus one greater than two, everything's going to be awesome," but also: no, for real, what does success look like here? And then, as a founder of an acquired company, your job is to tell your team that and align your team to it. I think founders don't take on as much accountability for making these acquisitions successful as they should. And it goes back, again, to a certain naivety. You're not your company anymore. You're part of something larger.
And I think successful ones work when everyone embraces that.

At what point in the acquisition process does that conversation happen? Is it after we've signed our binding commitment, or should we have that conversation before, so I know what I'm walking into?

My personal take is you have to get to the point where the two parties want to merge, and that's obviously a financial decision, particularly if it's a public company, where there's a board and shareholders. Most acquisitions in the Valley are a larger firm acquiring a private firm. Not all of them, but I would say the vast majority. In those cases, there's often a qualitative threshold where someone says, "Yeah, let's do this." You have the high-level terms, sometimes formally a term sheet. I think it's right after that. People have really committed to the key things: how much value, why are we doing this, the big stuff. There are usually lots of lawyers being paid lots of money to turn those term sheets into a more complete set of documents, usually more complete due diligence, things like that. There's an awkward waiting period there. That's the time, I think, when the strategic decision-makers can get together and say, let's talk through what this really means. The nice part for all parties is that you've made the commitment to each other, so I think you have more social permission to have real conversations at that point. But you also haven't consummated the relationship, so the power imbalance isn't totally there, and you can really talk through it. It also engenders trust, because by having harder conversations in those moments, you're learning how to have real conversations and learning how each other works. So that's where my personal opinion would have it.

You mentioned the board. You've been on the board of Shopify, you're on the board of OpenAI, you're a founder. What's the role of a board, and how is it different when you're on the board of a founder-led company?

I really like being involved in boards, and I've been involved in multiple, because I am an operator through and through. I probably self-identify as an engineer first, more than anything else, and I love to build. Learning how to be an adviser is a very different vantage point. You see how other companies operate, and you learn how to have an impact and add value without doing it yourself. I think I've become a better leader having learned to do that. I have really only joined boards that were led by founders. You could ask them, but I think they sought me out because I'm a founder, and I like working with founder-led companies. I'm sure there are lots of studies on this, but I think founders drive better outcomes for companies. Founders tend to have permission to make bolder, more disruptive decisions about their business than a professional manager.
There are exceptions. Satya, I think, is one of the greatest, if not the greatest, CEOs of our generation, and he's a professional manager. But you look at everyone from Tobi Lütke to Marc Benioff to Mark Zuckerberg to Sam at OpenAI, and when you have founded a company, all your stakeholders, employees in particular, give you the benefit of the doubt. You created this thing, and if you say, hey, we need to do a major shift in our strategy, even hard things like layoffs, founders tend to get a lot of latitude and are judged differently, and I think rightfully so in some ways, because of the interconnection of their identity with the thing they've created. So I really believe in founder-led companies. One of the really interesting challenges is going from a founder-led company to not; Amazon has gone through that transition, Microsoft has gone through that transition, for that reason. But I love working with founders. I love working with people like Tobi and Sam because they're so different from me, and I can see how they operate their businesses, and I'm inspired by it and learn from it. And obviously, working for Marc at Salesforce, I'm like, "Wow, that's really interesting." Almost like an anthropologist: why did you do that? I want to learn more. So I love working with founders who inspire me, because I just learn so much from them. It's such an interesting front-row seat into what's happening.

Do you think founders go astray when they start listening to too many outside voices? This goes back to, I'm sure you're aware of it, Brian Chesky and founder mode. Talk to me about that.

I have such a nuanced point of view on this, because it is decidedly not simple. Broadly speaking, I really like the spirit of founder mode, which is having deep founder accountability for every decision at your company. I think that's how great companies operate. When you proverbially make decisions by committee, or you're more focused on process than outcomes, that produces all the experiences we hate as employees and as customers. That's the proverbial DMV, right? Process over outcomes. Similarly, look at the disruption in all industries right now because of AI: everyone can see where things are clearly going to change, it's like a slow-motion car wreck where everyone knows how it ends. You need that kind of decisiveness to break through boundaries and layers of management to actually make change as fast as is required in business right now. The issue I have is not with Brian's statements. Brian's amazing. It's with how people can interpret them and execute a caricature of what I think he means.
I remember after Steve Jobs passed away, and I didn't know Steve, I met him a couple of times but never worked with him in any meaningful way, but if you believe the stories, he was pretty hard on his employees and very exacting. I think a lot of founders mimicked that, down to wearing a black turtleneck and yelling at their employees. I'm not sure that was the cause. I think Steve Jobs's taste and judgment, executed through that packaging, were the cause of the success. Similarly, I think founder mode can be weaponized as an excuse for overt micromanagement, and that probably won't lead to great outcomes either. Most great companies are filled with extremely good individual contributors who make good decisions and work really hard, and companies that execute solely through the judgment of one individual probably aren't going to be able to scale to be truly great companies. So I have a very nuanced point of view, because I actually believe in founders. I believe in that accountability that comes from the top. I believe in cultures where founders have license to go all the way down to a small decision and fix it. The infamous question-mark emails from Jeff Bezos, that type of thing. That's a right way to run a company. But that doesn't mean you don't have a culture where individuals are accountable and empowered, and you don't want people making business decisions based on what will please a leader, which is the caricature of this. So after that came out, I could sort of see it all happening: some people will take it as "you know what, you're right, I need to go down and be in the details," and some people will do that and probably make everyone who works for them miserable, and probably both will happen as a consequence.

Thank you for the detail and nuance there. I love that. Do you think engineers make good leaders?

I do think engineers make good leaders, but one thing I've seen is that great CEOs and great founders usually start with one specialty and become specialists in all parts of their business. Businesses are multifaceted, and rarely is a business's success due to one thing, like engineering or product, which is where a lot of founders come from. Often your go-to-market model is important. For consumer companies, how you engage with the world and public policy becomes extremely important. And as you see founders grow from doing one thing to being a real, meaningful company like Airbnb or Meta, you can see those founders transform from being one thing to many things. So I do think engineers make great leaders. First-principles thinking and systems-design thinking really benefit things like organization design and strategy. But, going back to what we were saying earlier about identity, I think one of the main transitions founders need to make, especially engineers, is realizing you're not the product manager for the company, you're the CEO. On any given day, do you spend time recruiting an executive because you have a need? Do you spend time on sales because that will have the biggest impact?
Do you spend time on public policy or regulation, because if you don't, it will happen to you and could really impact your business in a negative way? I think engineers who are unwilling to elevate their identity from what they were to what it needs to be in the moment often cause plateaus in a company's growth. So 100%, I think engineers make great leaders, and it's not a coincidence that most of the great Silicon Valley CEOs came from engineering backgrounds. But I also don't think that's sufficient as your company scales, and making that transition, as all the great ones have, is incredibly important.

To what extent are all business problems engineering problems?

That's a deeper philosophical question than I think I have the capacity to answer. What is engineering? What I like about approaching problems as an engineer is first-principles thinking and understanding the root causes of issues rather than simply addressing the symptoms. Coming from a background in engineering helps with everything from process on down; the way engineers do a root-cause analysis of an outage on a server is a really great way to analyze why you lost a sales deal. I love the systematic approach of engineering. One thing I've seen, though, going back to good ideas that can become caricatures of themselves, is that engineers who go into other disciplines can overanalyze decisions in some domains. Take modern communications, which is driven by social media and very fast-paced. Having a systematic first-principles discussion about every tweet you post is probably not a great comms strategy. Similarly, there are aspects of, say, enterprise software sales that aren't rational but are human, like forming personal relationships and the importance of those to building trust with a partner. It's not all just product and technology. So I would say a lot of things can benefit from an engineering mindset, but taking it to its logical extreme can lead to analysis paralysis and to over-intellectualizing things that are fundamentally human problems. A lot can benefit from engineering, but I wouldn't say everything is an engineering problem, in my experience.

You've brought up first principles a couple of times. You're running your third startup now, Sierra, and it's going really well. How do you use first principles at work?

Yeah, it's particularly important right now because the market for AI is changing so rapidly. If you rewind two years, most people hadn't used ChatGPT yet; most companies hadn't heard the phrases "large language models" or "generative AI" yet. In two years, ChatGPT has become one of the most popular consumer services in history, faster than any service in history, and across so many domains in the enterprise there's been really rapid transformation. Law is being transformed. Marketing is being transformed. Customer service, which is where my company Sierra works, is being transformed. Software engineering is being transformed.
And the amount of change in such a short period of time is, I think, unprecedented. Perhaps I lack the historical context, but it feels faster than anything I've experienced in my career. As a consequence, if you're responding to the facts in front of you, and not thinking from first principles about why we're at this point and where things will probably be 12 months from now, the likelihood that you'll make the right strategic decision is almost zero. As an example, it's really interesting to me that with modern large language models, one of the careers being most transformed is software engineering. One of the things I think a lot about is: how many software engineers will we have at our company three years from now? What will the role of a software engineer be as we go from being authors of code to operators of code-generating machines? What does that mean for the type of people we should recruit? If I look at the actual craft of software engineering we're doing right now, I think it will literally be completely different two years from now. Yet I think a lot of people building companies hire for the problem in front of them rather than thinking that through. And two years is not that long; the people you hire now will just be getting really productive a couple of years from now. So we try to think about most of our long-term business from first principles. I'll give a couple of examples from our business. Our pricing model is really unique and comes from first-principles thinking. Rather than having our customers pay a license for the privilege of using our platform, we only charge our customers for outcomes. Meaning, if the AI agent they've built solves their customer's problem, there's usually a pre-negotiated rate for that. That comes from the principle that in the age of AI, software isn't just helping you be more productive; it's actually completing a task. What is the right and logical business model for something that completes a task? Charging for a job well done, rather than charging for the privilege of using the software. Similarly, with a lot of our customers, we deliver a fully working AI agent. We don't hand them a bunch of software and say, good luck, configure it yourself. The logic there is that in a world where making software is easier than it has ever been, and you're delivering outcomes for your customer, the delivery model of software probably should change as well. We've really tried to reimagine what the software company of the future should look like, and to model that in everything we do.

That's brilliant. How do you think software engineering will change? Will you have fewer people, or will people be organized differently? How do you see that?

How geeky can I get?

As geeky as you want, man.

I actually wrote a blog post right before Christmas about this. I think this is an area that deserves a lot more research. I'll describe where I think we are today, and smart people may disagree. A lot of the modern large language models, both the traditional large language models and the new reasoning models, are trained on a lot of source code; it's an important input to all of the knowledge they're trained on. As a consequence, even the early models were very good at generating code.
So, every single engineer at Sierra uses Cursor, which is a great product that basically integrates with the IDE Visual Studio Code to help you generate code more quickly. But it feels like a local maximum in a really obvious way to me. You have a bunch of code written by people, in programming languages that were designed to make it easy for people to tell a computer what to do. Probably the funniest example of this is Python. It almost looks like natural language, but it's notoriously not robust. Most Python bugs are found by running the program, because there's no static type checking. While you could run fancy static analysis, most bugs show up simply at runtime, because the language wasn't designed for that; it was designed to be ergonomic to write. We designed most of our programming systems to make it easy for the author of code to type it quickly. Yet we're using AI to generate that code. We're in a world where the marginal cost of generating code is going to zero, but we're still generating code in programming languages that were designed for human authors. And similarly, if you've ever looked at someone else's code, which a lot of people do professionally, it's called a code review, it's actually quite hard to do. You end up trying to put the system in your head and simulate it as you read the code to find errors in it. So there's an irony now: we're taking programming languages that were designed for authors and having humans do the job of essentially code-reviewing code written by an AI, and yet all of the AI is on the code-generation side. It's great, but we're generating a lot of code with flaws similar to what we've been generating before, from security holes to functional bugs, and in greater volumes. What I would like to see is this: if you start with the premise that generating code is free, or trending towards free, what programming systems would we design? For example, Rust is a programming language that was designed for safety, not for programming convenience. My understanding is that in the Mozilla project there were so many security holes in Firefox that they said, let's make a programming language that's very fast, but where everything can be checked statically, including memory safety. That's a really interesting direction: you're not optimizing for authorship convenience, you're optimizing for correctness. Are there programming language designs where a human looking at the code can very quickly evaluate: does this do what I intended it to do?
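To make the Python point concrete, here is a toy sketch (the function and the data are invented for illustration, not anything from Sierra): the program below passes a plain Python toolchain's default checks, and the bug only surfaces when the bad input actually arrives at runtime, which is exactly the class of error a statically checked language like Rust rejects at compile time.

```python
def total_price(items):
    # Ergonomic to write, but nothing checks the shape of `items`
    # before execution: a missing "price" key is only discovered
    # when this exact input reaches this line at runtime.
    return sum(item["price"] for item in items)

print(total_price([{"price": 10}, {"price": 5}]))  # 15, works fine
print(total_price([{"price": 10}, {"cost": 5}]))   # raises KeyError at runtime
```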
There's an area of computer science I studied in college called formal verification, which at the time was about turning computer programs into math proofs and finding inconsistencies. It sort of worked, though not as well as you'd hope. But in a world where AI is generating a lot of code, should we be investing more in formal verification, so that the operator of the code-generating machine can more easily verify that the code does, in fact, do what they intended? Could a combination of a programming language that is more structurally correct and structurally safe, and that exposes more primitives for verification, plus a tool to verify, make an operator of a code-generating machine 20 times more productive, and, more importantly, make the robustness of their output 20 times greater? Similarly, things go in and out of fashion, but take test-driven development: you write your unit test or your integration test first, then write code until it fulfills the test. Most really good programmers I know don't despise it, but it sounds better than it is in practice. But again, if writing code is free, then writing tests is free. How can you create a programming system combining great programming language design, formal verification, and robust tests, because you didn't have to do the tedious part of writing them all? Could you make something that made it possible to write increasingly complex systems that were increasingly robust? And then the elephant in the room for me is that the anchor tenant of most of these code-generating systems right now is an IDE, which obviously doesn't seem as important in this world. Even with coding agents, which is where the world is going, it doesn't change the question of who's accountable for the quality of the code and who's fixing it. I think there is a world where we can make reasonable software by just automating what we as software engineers do every day. But I have a strong suspicion that if we designed these systems with the role of the software engineer in mind being an operator of a machine rather than the author of the code, we could make the process much more robust and much more productive. It feels like a research problem to me. I think a lot of people, for good reason, including me, are just excited about the efficiency of software development going up. But I want to see the new thing. I'm constructively dissatisfied with where we are.

It's so interesting that if AI is good enough to write the code, it should be good enough to check the code.

That's a great question. It's still funny to me that we'd be generating Python, because anyone listening right now who has ever operated a web service running Python knows it's CPU-intensive and really inefficient. Should we be taking most of the unsafe C code we've written and converting it to a safer system like Rust? If authoring these things and checking them are relatively free, shouldn't all of our programs be incredibly efficient? Should they all be formally verified? Should they all be analyzed by a great agent? I do think it can be turtles all the way down; you can use AI to solve most problems in AI. The thing I'm trying to figure out is: what is the system that a human operator is using to orchestrate all those tasks?
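As one small illustration of the "writing tests is free" idea from a moment ago, here is a hedged sketch (my example, using the hypothesis property-based testing library; my_sort is a hypothetical stand-in for AI-generated code): the operator states the properties the code must satisfy once, and the tooling generates many inputs to check them, which is closer to operating a verification machine than to reading every line in review.

```python
# A minimal sketch of operator-side verification with the `hypothesis`
# property-based testing library. Assume `my_sort` was produced by a
# code-generating model and we only want to verify its behavior.
from hypothesis import given, strategies as st

def my_sort(xs):
    # Stand-in for AI-generated code under review.
    return sorted(xs)

@given(st.lists(st.integers()))
def test_my_sort(xs):
    out = my_sort(xs)
    # Property 1: the output is ordered.
    assert all(a <= b for a, b in zip(out, out[1:]))
    # Property 2: the output is a permutation of the input.
    assert sorted(xs) == out
```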
And I go back to the history of software development: most of the really interesting metaphors in software development came from breakthroughs in computing. The C programming language came from Unix, when time-sharing systems took us from punch cards to something a lot more agile. Smalltalk came out of the development of the graphical user interface at Xerox PARC; there was a confluence of message passing as a metaphor and the graphical user interface. And then a lot of really interesting principles came out of networking and distributed systems: distributed locking, sequencing. I think we should recognize that we're in a brand-new era, as significant as the GUI. It's a completely new era of software development. If you were to say, I'm going to design a programming system for this new world from first principles, what would it be? When we develop it, I think it will be really exciting, because rather than automating and turning up the speed of code generation within the same processes we have today, it will feel native to this new world and give a lot more control to the people orchestrating the system, in a way that I think will really benefit software overall.

Let's dive into AI a little bit. How would you define AGI to the layman?

I think a reasonable definition of AGI might be: any task that a person can do at a computer, that system can do on par or better. I'm not sure it's a precise definition, but I'll tell you where it comes from and its flaws. There's not a perfect or precise definition of AGI, in my opinion, though I'm sure there are good answers. One of the things about the G in AGI is generalization. Can you have a system that is intelligent in domains it wasn't explicitly trained to be intelligent in? That's one of the most important things: given a net-new domain, can this system become more competent and more intelligent than a person trained in that domain? Doing it at or better than a person is certainly a good standard, and that's sort of the definition of superintelligence. The reason I say "at a computer" is that if there's a digital interface to a system, it affords the ability for AI to interact with it, which is why that's a reasonable bar to hit. I say that because one of the interesting questions around AGI is how quickly it generalizes. There are domains where progress isn't necessarily limited by intelligence but by other social artifacts. As an example, and I'm not an expert in this area, if you think about the pharmaceutical industry, my understanding is that one of the main bottlenecks is clinical trials. So no matter how intelligent a system is at discovering new therapies, it may not materially change that. You may have something that's discovering new insights in math, and that would be delightful and amazing, but the existence of a system that's superintelligent in one domain may not translate to all domains equally.
I just heard at least a snippet of a talk by Tyler Cowen, the economist, and it was really interesting to hear his framing on which parts of the economy can absorb intelligence more quickly than others. So I choose that definition of AGI, recognizing that there's not a perfect one, because it captures the ability of this intelligence to generalize, while also recognizing that it might not apply across the domains of society with equal velocity, even once we reach the point of a system having that level of intelligence.

When I think about what artificial intelligence is limited by, the bottlenecks, if you will, I keep coming back to a couple of things: there's regulation, there's compute, there's energy, there's data, and there's LLMs. Am I missing anything?

So you're asking about the ingredients of AGI?

Yeah. There are limitations on each of those things, and they seem to be the main contributors to what's limiting us from accelerating even further at this point. Is that how you think about it?

Yeah, what you said is roughly how I think about it, but I'll put it in my own words. I think the three primary inputs are data, compute, and algorithms. Data is probably obvious, but one of the things about the transformer model is that it afforded an architecture with much greater parallelism, which meant models could be much bigger and train more quickly on much more data. That led to a lot of the breakthroughs; that's the "large" in large language models. And the scaling laws a couple of years ago indicated that the larger you make the model, the more intelligent it would be, at a degree of efficiency that was tolerable, and there we are. There's lots written about this, but in terms of textual content to train on, the availability of new content is certainly waning, and some people would say there's a data wall. I'm not an expert in that domain, but it's been talked about a lot, and you can read a lot about it. There are a lot of interesting opportunities to generate data, too. A lot of people are working on simulation. If you think about a domain like self-driving cars, simulation is a really interesting way to generate...

Is that synthetic data?

I would say simulation and synthetic data are a little different. You can generate synthetic data, like generating a novel. Simulation, and I'm sure academics might critique what I'm saying, I view as based on a set of principles, like the laws of physics. So if you build a real-world simulation for training a self-driving car, you're not just generating arbitrary data; the roads don't turn into loop-de-loops, because that's not possible with physics. By constraining a simulation with a set of real-world constraints, the data has more efficacy; it constrains the different permutations of data you can generate from it. So I think it's a little bit higher quality. But along those lines, a lot of people wonder, if you generate synthetic data, how much value can that add to a training process? Is it just regurgitating information the model already had?
What's really interesting about reasoning and reasoning models is that I feel really optimistic these models are generating net-new ideas, so they afford the opportunity to break through the data wall as well. So data is one thing, and I think both synthetic data and simulation are really interesting opportunities to grow there. Then you have compute, which is why there are so many data-center investments, and why Nvidia as a company has grown so much. Probably the more interesting breakthroughs there are the reasoning models, where there's not quite such a formal separation between the training process and the inference process, and you can spend more compute at inference time to generate more intelligence. That has been a breakthrough in a variety of ways, and it shows how you can run up against walls and find new opportunities. And then finally, algorithms. The biggest breakthrough was obviously the transformer model, "Attention Is All You Need," the paper from Google that led to where we are now. But there have been a number of really important papers since then, from the idea of chain-of-thought reasoning to what we did at OpenAI with the o1 model, which is to do reinforcement learning on those chains of thought to reach new levels of intelligence. I mention these anecdotes about breakthroughs because my view is that each of these inputs has its own problems. Compute is very capital intensive, and with a lot of these models, the half-life of their value is pretty short because new ones come out so frequently. So you wonder, what's the business case for investing this capex? Then you have a breakthrough like o1, and you say, gosh, with a distilled model and moving more to inference time, it changes the economics. With data, you say, gosh, we're running out of textual data to train on. Well, now we can generate reasoning data, we can do simulations. That's an interesting breakthrough. And on the algorithm side, as I mentioned, the idea of these reasoning models is really novel in itself. At any given point, if you talk to an expert in one of these areas, and I'm an expert in none of them, they will tell you the current plateau they can see on the horizon, and there usually is one. You'll talk to different people about how long the scaling laws will continue and get slightly different opinions, but no one thinks it will last forever. And in each of these areas, because you have so many smart people working on them, you often have people discovering a breakthrough. So as a consequence, I really do feel optimistic about the progress towards AGI. One of those plateaus might extend for a while if we just don't have the key idea we need to break through, but the idea that we will be stuck in all three of those domains at once feels very unlikely to me. In fact, because of the potential economic benefits of AGI, we're seeing breakthroughs in all three of them, and as a consequence you're seeing the blistering pace of progress of the past couple of years.
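To make the earlier simulation-versus-synthetic-data distinction concrete, here is a toy sketch (everything in it, the names and the constraint value, is invented for illustration): both functions generate training data, but only the simulated one is constrained by a stand-in for the laws of physics, so none of its samples are loop-de-loops.

```python
import math
import random

MAX_TURN_RATE = 0.12  # invented stand-in for a physical limit, radians per step

def synthetic_headings(n):
    # Unconstrained synthetic generation: any heading change is
    # possible, including physically impossible ones.
    return [random.uniform(-math.pi, math.pi) for _ in range(n)]

def simulated_headings(n):
    # Simulation: each sample is clamped to what a real car's physics
    # allows, so every training example respects the real-world constraint.
    steps = (random.gauss(0.0, 0.05) for _ in range(n))
    return [max(-MAX_TURN_RATE, min(MAX_TURN_RATE, s)) for s in steps]
```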
At what point does AI start making AI better than we can make it, or making it better while we're sleeping? We can't be too far from that.

Well, this might reflect back to our software engineering discussion, but broadly, this is the area of AGI around self-improvement, which is meaningful from an improvement standpoint, but obviously from a safety standpoint as well. I don't know when that will happen, but by some definition you could argue it's happening already, in the sense that every engineer in Silicon Valley is already using coding assistants and platforms like Cursor to help them code. So it's contributing already. And I imagine, as coding assistants give way to coding agents, most engineers in Silicon Valley will show up in the morning and...

This is sort of the difference between assisted driving in a Tesla versus full self-driving, right? At what point do we leap from "I'm a co-pilot in this" to "I don't have to do anything"?

There's so much nuance to that question, because I'm not sure you'd necessarily want that. I think for some software applications it's important, but when we were talking about the act of software development, people have to be accountable for the software they produce. For something simple like a software-as-a-service application, that means it's secure, it's reliable, and the functionality works as intended. For something as meaningful as an agent that is somewhat autonomous: does it have the appropriate guardrails? Does it actually do what the operators intended? Are there appropriate safety measures? So I'm not sure there's really any system where you'd want to flip a switch and go get your coffee. But to the point on these broader safety things, when you think about more advanced models, we need to be developing not only more and more advanced safety measures and safety harnesses, but also using AI to supervise AI, and things like that. My colleague on the board, Zico Kolter, is probably a better person to talk through some of the technical aspects, but there are a lot of prerequisites to get to that point, and I'm not sure it's simply the availability of the technology. At the end of the day, we are accountable for the safety of the systems we produce, not just OpenAI, every engineer, and that's a principle that should not change.

What does that mean? When we say safety and AI, it seems so vague that everybody interprets it quite differently. How do you think about that? And how do you think about it in a world where, let's say, we regulate safety in the United States and another country doesn't?

I'll answer broadly and then get to the regulatory question. I really like OpenAI's mission, which is to ensure that AGI benefits all of humanity. It isn't only about safety, and I believe intentionally so, though obviously the mission was created prior to my arrival. It's about safety in the Hippocratic sense, first do no harm, and I don't think one could credibly achieve that mission by creating something unsafe. So I would say that's the most important part of the mission.
But there are also a lot of other aspects of benefiting humanity. Is it universally accessible? Is there a digital divide, where some people have access to AGI and some don't? Similarly, are we maximizing the benefits and minimizing the downsides? Clearly AI will disrupt some jobs, but it also could democratize access to healthcare, education, expertise. So as I think about the mission, it starts with safety, but I like thinking about it more broadly, because at the end of the day, benefiting humanity is the mission. Safety is a prerequisite, but, going back to my analogy of the Hippocratic oath, a doctor's job is to cure you. First, do no harm, but then cure you. A doctor who did no harm but didn't cure you wouldn't be great either. So I like to think about it holistically. Zico or Sam might have a more complete answer here, but broadly, I think about whether the system that represents AGI aligns with the intentions of the people who created it and the intentions of the people operating it, so that it does what we want, and it's a tool that benefits humanity, a tool we're actively using to effect the outcomes we're looking for. That's the way I think about safety. It can be big things like misalignment or more subtle things like unintended consequences. That latter part is probably the area that's really interesting from an intellectual and ethical standpoint as well. Look at, what was the bridge in Canada that fell down, the one that motivated the ring a lot of engineers wear? I forget the name of it. Whether it's that, or the Tacoma Narrows bridge in Washington, or Three Mile Island, these are intersections where we engineered things that at the time people hoped would positively impact humanity, but something went horribly wrong. Sometimes it's engineering, sometimes it's bureaucracy, sometimes it's a lot of things. So when I think about safety, I don't just look at the technical measures; I look at how this technology manifests in society, and how we make decisions around it. Put another way, technology is rarely innately good or bad; it's what we do with it, and those social constructs matter a lot as well. I think it's a little early to tell, because we don't have this kind of superintelligence right now, and I think it won't just be a technology company defining how it manifests in society. You could imagine taking a very well-aligned AI system and a human operator directing it towards something that would objectively hurt society. Then there's the question of who gets to decide, who's accountable, and it's a perennial question. Whether you're deciding, say, should you use your smartphone in school, there are parents who will tell you, hey, it's my decision, it's my kid, and there are principals who will tell you it's not benefiting the school. I'm not sure it's going to be my place or our place to decide, but there will be a number of those conversations, much deeper than that question, that I think we'll need to answer. As it relates to regulation, there are two forces, not conflicting, but existing somewhat independently while relating to each other.
One is the pace of progress in AI, and ensuring that the folks working on frontier models are ensuring those models do benefit humanity. The other is the geopolitical landscape: do you want AGI to be created by the West, by democracies, or do you want it created by more totalitarian governments? So I think the inherent tension for regulators will be a sense of obligation to ensure that the technology organizations creating AGI are in fact focusing enough on benefiting humanity and all the other stakeholders whose interests they're accountable for, while ensuring that the West remains competitive. That's a really nuanced thing. My view is that it's very important that the West leads in AI, and I'm very proud of the fact that OpenAI is based here in the United States and that we're investing a lot in the United States; I think that's very important. And having seen the inside, I think we're really focused on benefiting humanity. I tend to think it needs to be a multistakeholder dialogue, but I think there's a really big risk that some regulations could have the unintended consequence of slowing down this larger conversation. I don't say that to be dismissive of it, either. It's actually just an impossibly hard problem, and I think you're seeing it play out, as you said, in really different ways in Canada, the United States, Europe, China, and elsewhere.

I want to come back to compute and the dollars involved. On one hand, I could start an AI company today by putting my credit card down at AWS and leveraging the infrastructure they've built, which they've spent hundreds of billions of dollars on, and I get to use it on a time-based model. On the other hand, you have people like OpenAI and Microsoft investing tons of money into things that may be more proprietary. How do you think about the different models competing? And the one that really throws me for a bit of a loop is Facebook. Sorry, Meta. Oh god, I'm aging myself here. So Meta comes along, and possibly it's for the good of humanity, but I tend to think Zuck is incredibly smart, so I don't think he's spending a hundred billion dollars to develop a free model and give it away to society. How do you think about that in terms of return on capital and return on investment?

It's a really complicated business to be in, just given the capex required to build a frontier model. But let me start with a couple of definitions of terms that I think are useful. Most large language models I would call foundation models, and I like the word foundation because I think they will be foundational to most intelligent systems going forward. Most people building modern systems, particularly if they involve language, images, or audio, shouldn't build a model from scratch. They should pick a foundation model, either use it off the shelf or fine-tune it. It's truly foundational in many ways, in the same way most people don't build their own servers anymore; they lease them from one of the cloud infrastructure providers.
I think foundation models will be trained by companies with a lot of capex and leased by a broad range of customers with a broad range of use cases. And in the same way that having a lot of data centers gave data-center builders the capital scale to build more data centers, I think the same will largely be true of building the huge clusters for training. Foundation models, I think, are somewhat distinct from frontier models. Frontier models, and I think credit for the term goes to Reid Hoffman, though I may be mistaken, that's where I heard it, are the one or two models that are clearly at the leading edge, o3 as an example from OpenAI. These frontier models are being built by labs that are trying to build AGI that benefits humanity. If you're deciding whether to build a foundation model, and what your business model around it is, that's a very different business than pursuing AGI. If you're pursuing AGI, there's really only one answer, which is to build, train, and move to the next frontier, because if you can truly build something that is AGI, the economic value is so great that there's a really clear business case. If you're pre-training a foundation model that's the fourth best, that's going to cost you a lot of money, and the return on that investment is probably fairly questionable, because why use the fourth-best large language model versus a frontier model or an open-source one from Meta? As a consequence, I think we probably have too many people building models right now. There's already been some consolidation, with companies being folded into Amazon and Microsoft and others. But I do think it will play out a bit like the cloud infrastructure business, where a very small number of companies with very large capex budgets are responsible for building and operating these data centers, and then developers and consumers will use things like ChatGPT, or license and rent one of these models in the cloud. How it will play out is a really great question. I heard one investor describe these models as the fastest-depreciating assets of all time. On the other hand, if you look at the revenue scale of something like OpenAI, and what I've read about places like Anthropic, let alone Microsoft and Amazon, it's pretty incredible as well. If you're one of those firms, you can't afford to sit on the sidelines as the world transforms. But I would personally have a hard time funding a startup that says, I'm going to do pre-training.
You know, what's your differentiation in this marketplace? A lot of those companies are already consolidating because they have the cost structure of a pharmaceutical company but not the business model.

But this is just it, though, right? OpenAI has a revenue model. Microsoft has a revenue model around their AI investments; they just updated the price of Teams with Copilot. Amazon has a revenue model around AI in the sense that they're getting other people to pay for it through AWS, and they're getting the advantages of it at Amazon too, across all their consumer projects; Bezos said in an interview last week that basically every project at Amazon has an AI component. Facebook, on the other hand, has spent all this money already, with presumably an endless amount to go, but they don't have a revenue model specifically around AI. It would obviously have been cheaper for them to use someone else's model, but that would presumably have required giving data away. I'm just trying to work through it from Zuck's point of view.

I'll take Mark at his word. The post he wrote about open source was very well written, and I encourage people to read it; I think that is his strategy. If you look at Facebook (now you've got me saying Facebook too; that was what it was called when I was there), the company has always embraced open source. From React to the Llama models now, courting developers around their ecosystem has always been a big part of their strategy, and Mark articulated some of it. I'm sure there's an element of commoditizing your complement, but I also think that if you can attract developers toward your models, there's strength in that. I'm not on the inside there, so I don't really have a perspective beyond this: I think it's genuinely great that different players with different incentives are all investing so much, because it's furthering the cause of bringing these amazing tools to society. And a lot changes. If you look at the price of GPT-4o mini, it is so much higher quality than the highest-quality model of two years ago, and much cheaper. I haven't done the math, but it's probably cheaper to use that than to self-host any of the open-source models. So even the existence of open-source models isn't free; inference costs money. There's a lot of complexity here, and honestly, even being relatively close to this stuff, I have no idea where things are going. You could talk to a smart engineer and they'll tell you, oh yeah, if you built your own servers you'd spend less than renting them from, say, Amazon Web Services or Azure. That's sort of true in absolute terms, but it misses the point: do you want someone on your team building servers? And if you change the way your service works and need a different SKU, say you're suddenly doing training and need NVIDIA H100s, now the servers you built are a worthless asset.
So I think with a lot of these models, the presence of open source is incredibly important, and I really appreciate it. I also think the economics of AI are pretty complex, because the hardware is very unique and the cost to serve is much higher. Techniques like distillation have really changed the economics of models, whether open-source or hosted and leased. Broadly speaking, it's kind of an amazing time for developers right now, because you have an incredibly wide menu of options. Just like in cloud computing, you end up with a price-performance-quality trade-off, and for any given engineering challenge there will be a different answer, and that's appropriate. Some people use open-source Kafka, some people work with Confluent. Great; that's just the way these things work.

So you don't think AGI is going to be winner-take-all? You think there will be multiple options that meet whatever the definition of AGI turns out to be?

Well, first, I believe OpenAI will play a huge part in it, both because of the technology, which I think OpenAI continues to lead on, and because of ChatGPT, which has become synonymous with AI for most consumers. More than that, it's the way most people access AI today. One of the interesting questions alongside "what is AGI?" (we talked about possible definitions) is how you use it. What's the packaging? Some of that intelligence will simply show up as outcomes, like the discovery of a new drug, which would be remarkable; hopefully we can cure some illnesses. But some of it is how you as an individual access it. Most of the people I know, if they're signing an apartment lease, will put it into ChatGPT to get a legal opinion. If you get lab results from your doctor, you can get a second opinion from ChatGPT. Clay and I use o1 pro mode to critique our strategy at Sierra all the time. What's so remarkable to me is that ChatGPT, this quirkily named research preview, has come to be synonymous with AI, and I do think it will be the delivery mechanism for AGI when it's produced, not just because of the many researchers at OpenAI but because of the amazing utility it has become for individuals. I think that's really neat, because if we'd been having this conversation three years ago about artificial general intelligence, I'm not sure either of us would have envisioned something as simple as a form factor you just talk to. And as I think about the mission of OpenAI, which is to ensure that AGI benefits humanity: what a simple, accessible form factor. There are free tiers of it. What a kick-ass way to benefit humanity. So I really think it will be central to what we as a society come to define as AGI.

You mentioned using it at Sierra to critique your business strategy. What do you know about prompting that other people miss? I mean, you must have the best prompts; people would assume that, since you're affiliated with it. You're not just going, "Here's my strategy, what do you think?" What are you putting in there?
With the reasoning models, which are slower, I often use a faster model first, GPT-4o, to refine my prompts. Over the holidays, partly because I was thinking about the future of software engineering (I've written a lot of compilers in my time, enough that it comes easily to me), I decided to see if I could have o1 pro mode generate a compiler front end, end to end: parsing the grammar, checking for semantic correctness, generating an intermediate representation, and then using LLVM, a very popular compiler infrastructure, to actually run it all. I would spend a lot of time iterating with GPT-4o to refine and make more complete and specific what I was looking for, and then I would put it into o1 pro mode, go get my coffee, and come back for the result. I'm not sure if that's a broadly viable technique, but it's really interesting, because in the spirit of AI being the solution to more problems in AI, a lower-latency, simpler model can help refine the request. I like to think of it like being a product manager asking an engineer what to do: is your product requirements document complete and specific enough? Waiting on the slow model for every iteration is painful, so I do it in stages. That's my trick. Someone from OpenAI is probably listening and rolling their eyes, but that's what I've found.

Who can I talk to at OpenAI that's the prompt ninja? I'm so curious about this, because I've recently taken to getting ChatGPT to write the prompt for me. I'll prompt it with: I'm prompting an AI, here are the key things I want to accomplish, what would an excellent prompt look like? Similar to your technique. Then I'll copy-paste the prompt it gives me back into the system. But I wonder what I'm missing.

It's a good technique, and there are lots of techniques like that. Self-reflection is one, where you have a model observe and critique a decision or a chain of thought; in general, that mechanism of self-reflection is a really effective technique. At Sierra, we help companies build customer-facing AI agents. If you're setting up a Sonos speaker, you'll now chat with an AI; if you're a SiriusXM subscriber, you can chat with Harmony, their AI, to manage your account. We use all these tricks: self-reflection to detect things like hallucination in decision-making, generating chains of thought for more complex tasks, making sure you're putting the most compute and cognitive load into the important tasks. There's a whole industry around figuring out how to extract robustness and precision from these models. It's really fun, but changing rapidly.
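To make that staged technique concrete, here is a minimal sketch using the OpenAI Python SDK. The model names, prompts, and helper functions are illustrative assumptions, not anything prescribed in the conversation; the point is the shape of it: a fast model refines the request, a reasoning model does the heavy lifting, and a critique pass applies the self-reflection idea.

```python
# A minimal sketch of the staged-prompting and self-reflection techniques
# described above. Model names ("gpt-4o", "o1") and prompts are illustrative
# assumptions; substitute whatever models are current.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def refine(rough_request: str) -> str:
    """Stage 1: a fast, cheap model turns a rough request into a complete spec."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": (
                "Rewrite the user's request as a complete, specific "
                "specification: goals, constraints, inputs, outputs, "
                "and acceptance criteria.")},
            {"role": "user", "content": rough_request},
        ],
    )
    return resp.choices[0].message.content

def reason(spec: str) -> str:
    """Stage 2: hand the refined spec to the slower reasoning model."""
    resp = client.chat.completions.create(
        model="o1",
        messages=[{"role": "user", "content": spec}],
    )
    return resp.choices[0].message.content

def self_critique(spec: str, answer: str) -> str:
    """Stage 3: self-reflection, where a model critiques the answer against the spec."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": (
            f"Specification:\n{spec}\n\nAnswer:\n{answer}\n\n"
            "Critique the answer: flag unsupported claims, gaps, and "
            "anything that looks hallucinated.")}],
    )
    return resp.choices[0].message.content

spec = refine("Critique our go-to-market strategy for a B2B AI product.")
answer = reason(spec)
print(self_critique(spec, answer))
```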
Hypothetical question: you've been hired to lead or advise a country that wants to become an AI superpower. What steps would you take? What policies would help create that? How would you bring investment from all over the world into that country, and researchers too? Now all of a sudden you're competing, and it's not the United States. How do you set up a country from first principles? What does that look like? What are the key variables?

This is definitely outside my domain of expertise, but I would say one of the key ingredients of modern AI is compute, which is a noun that wasn't a noun until recently, but now compute is a noun. That's one area where policymakers can help, because it involves a lot of things that touch federal and local governments: power, land, and then attracting the immense capital to finance the real estate, purchase the compute itself, and operate the data centers, which have really immense power requirements as well. Then there's attracting the right researchers and research labs to leverage all of that, but in general, where there is compute, the research labs will find you. There are also a lot of national security implications, because these models are very sensitive, at least the frontier models are, so your place in the geopolitical landscape is quite important: will research labs, and the US government, be comfortable with training happening there, given export restrictions and things like that? But my intuition is that as it relates to policy, a lot of it comes down to infrastructure. Right now so much of AI is constrained on infrastructure that it's the input to a lot of this. And then there's a lot around attracting talent, but as I said, if you look at the research labs, it's not actually that many people. The compute is the limited resource right now.

That's a really good way to think about it. I think about this through the lens of Canada: we don't have enough going on in AI, we tend to lose most of our great people to the States, where they set up infrastructure and don't bring it back to Canada, and I wonder how Canada can compete better. That's the lens I look at these questions through. How do you see the next generation of education? If you were setting up a school today from scratch, again hypothetical and not your domain of expertise, but using your lens on AI: what skills will kids need in the future, and what skills that we've been teaching them do we probably not need to teach anymore?

I'll start with the benefits, which are probably obvious but which I'm incredibly excited about. I think education can become much more personalized.

Oh, totally. Have you seen Synthesis Tutor, by the way? No, I have not. Synthesis, an AI company, developed a tutor that actually teaches kids, and it's so good that El Salvador, the country, recently adopted it in place of teachers. It teaches you specific to what you're missing, so it's not like every lesson is the same; it will say, you're not understanding this foundational concept. It covers roughly K through five or six right now. That's amazing. And the results are off the charts.
Well, it doesn't surprise me, and I don't necessarily view it as replacing a teacher. My view is that if you have a teacher with 28 kids in his or her class, the likelihood that they all learn the same way or at the same pace is very low. I can imagine, say, an English teacher or a history teacher orchestrating their students' learning journeys through a topic, say AP European History in the United States, where there's a curriculum they need to learn. How someone remembers something, or understands the significance of Martin Luther, is very different from person to person. You can generate an audio podcast for someone who might be an auditory learner. You can create flashcards for someone who needs that kind of repetition. You can visualize key moments in history for people who want to more viscerally appreciate why something was a meaningful event rather than a dry piece of history. And all of that, as you said, can be personalized to the way you learn and how you learn. I think that's incredibly powerful.

One of the things I think is neat about AI is that it's democratizing access to a lot of things that used to be fairly exclusive. A lot of wealthy people, if their child was having trouble in school, would pay for a tutor: a math tutor, a science tutor. And if you look at kids trying to get into big-name colleges, the ones with the means have someone prep them for the SATs or help with their college essays. All of that should be democratized if we're doing our jobs well, and it means we're not limiting people's opportunity by their means. I think that's the most American thing ever, Canadian as well: the most incredible thing for humanity. So I just think education will change for the positive in so many ways.

If you have little kids, they ask why, why, why, and at some point a parent just starts making up answers or being dismissive. Now we have ChatGPT. It's the best when you're traveling: put on advanced voice mode and say, ask away. And I'm listening too; you live through your children's curiosity. My daughter went to high school and came home with Shakespeare for the first time, and when she asked me a question I felt total inadequacy; I was very bad at this the first time around. Then we put it into ChatGPT, and it gave the most thoughtful answer, and she could ask follow-up questions, and I was right there with her thinking, I forgot about that; I didn't even think about that. So I just think it's incredible. And it will be really great when public school systems formally adopt these things and lean into tools like ChatGPT as mechanisms to raise the performance level of the classroom, and hopefully you'll see it in test scores and other measures, because kids can get the extra time even if the school system can't afford it for everyone, and, most importantly, explanations suited to their style of learning. As for skills:
It's really hard to predict right now, but I would say that learning how to learn and learning how to think will continue to be important. Most primary and secondary education isn't, and shouldn't be, purely vocational. Some of it is; I took auto shop and I'm glad I did, but I couldn't fix my electric car today with that knowledge. Things change. I don't think it needs to be purely non-vocational either, but the basics matter: learning how to think, writing, reading, math, physics, chemistry, biology, not because you need to memorize them but because you need to understand the mechanisms that create the world we live in.

I do think there's a risk of people becoming ossified in the tools they use. Go back to our discussion of software engineering for a second, though I'll give other examples. If you define your role as a software engineer by how quickly you type into your IDE, the next few years might leave you behind, because that is no longer, or will no longer be, a differentiated part of the software engineering experience. But your judgment as a software engineer will continue to be incredibly important: your agency in deciding what to build, how to build it, how to architect it, maybe using AI models as a creative foil. In the same way, if you're an accountant, using Excel doesn't make you less of an accountant, and just because you didn't handcraft the math, the results are no less valuable to your clients. We're going to go through a transformation in which the tools we use to create value in the world change dramatically, and some people who define their jobs by their ability to use the last generation's tools really effectively will be disrupted. But if we can empower people to reskill, and broaden the aperture by which they define the value they provide to the world, I think a lot of people can make the transition. The uncomfortable thing (not really in education, which just comes earlier in most people's lives) is that the pace of change exceeds that of most technology transitions, and I think it's unreasonable to expect most people to change the way they work that quickly. So the next five years will be really disruptive and tumultuous for some jobs. But if you take the longer view and fast-forward 25 or 50 years, I'm incredibly optimistic. The change will require, from society, from companies, and from individuals, an open-mindedness about reskilling and reimagining their jobs through the lens of this dramatically different new technology.

At what point do we start solving problems that humans haven't been able to solve? We're probably on the cusp of it now, and it's happening in pockets. Or eliminating paths that we're on, maybe with medical research: no, this whole thing you've spent $30 billion on is based on a 1972 study that was fabricated, and that one study had all these derivative studies, and I'm telling you it's false, because I can look at it through an objective lens. And you get rid of that $30 billion.
Why are you smiling? Oh, no, I just hope it's soon. One of the models, I can't remember which one, introduced a very long context window, and over the weekend a lot of people on X were putting their grad school theses into it, and it was critiquing them with surprising levels of fidelity. So we're sort of there, perhaps, with the right tools, and certainly over the next few years. We talked about what it means to generalize AI. Certainly in the areas of science that are largely represented through text and digital technology, math probably being the most applicable, there's not really anything keeping AI from getting really good: there's no interface to the real world, and you don't need to run a clinical trial to verify something's correct. So I feel a ton of optimism there. It'll be really interesting in areas like theoretical physics. You'll continue to have the divide between the applied and the theoretical people, but there could be really interesting new ideas there, perhaps finding logical inconsistencies in some of the fashionable theories, which has happened many times over the past few decades. I think we'll get there soon. And what's really neat is that most of the scientists I know, people actually doing science, are the most excited about these technologies, and they're using them already. I really hope we see more breakthroughs in science. One of the things I'm not an expert in, but have read a lot about as an amateur, is the slowdown in scientific breakthroughs over the past few decades, and some theories attribute it to the degree of specialization we demand of grad students. My completely personal theory is that AI, by democratizing access to expertise, will benefit deep generalists: the ability to understand a fair amount across a lot of domains, to know where to prompt the AI to go explore, and to bring those domains together will start to shift intellectual power from people who are extremely deep to people who can orchestrate intelligence across lots of different domains. I think that will be really good for society, because most scientific breakthroughs tend to come from cross-pollinating very important ideas from different domains.

How important is the context window?

I think it could be quite important. It certainly simplifies working with an AI if you can just give it everything and instruct it to do something, assuming it works. You can extend a context window, but the attention can be spread fairly thin and the robustness of the answer can become questionable. Assuming, for argument's sake, perfect robustness, I think it can really simplify the interface to AI, though not for all uses. We've been talking about open-source models and APIs.
But most of what I'm excited about in the software industry is not necessarily a large language model with a prompt and a response as the product; it's end-to-end closed-loop systems that use large language models as pieces of infrastructure. I think a lot of the value in software will be that, and for many of those applications the context window size can matter, but because you have contextual awareness of the process you're executing, it's a little less important. So I think it matters a lot to intelligence. Some researcher said, you know, put all of human knowledge in the context window and ask it to invent the next thing: an obviously reductive thought, but interesting. But I'm equally excited about the industrial applications of large language models, like my company Sierra. If you're returning a pair of shoes at a retailer, it's a process that's fairly complicated. Is it within the return window? Do you want to return it in store? Do you want to mail it? Do you want to print a QR code? The orchestration of that is as significant as the models themselves. Just like computers in general, there will be a lot of experiences where a computer is part of the experience but doesn't manifest itself as a computer. So I'm equally excited about those, and I think context windows are slightly less important in those applications.
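As a rough illustration of that closed-loop idea, with deterministic business rules doing the orchestration while the model only handles the conversational surface, here is a hedged sketch. The policy values, class names, and helpers are all invented for the example; this is not a description of Sierra's platform.

```python
# A sketch of "LLM as a piece of infrastructure": eligibility for a return is
# decided by deterministic business rules, never by the model itself; the model
# would only phrase the outcome and gather the customer's choice. All names and
# policy values here are invented for illustration.
from dataclasses import dataclass
from datetime import date, timedelta

RETURN_WINDOW = timedelta(days=30)  # hypothetical retailer policy

@dataclass
class Order:
    order_id: str
    purchase_date: date
    item: str

def check_return_eligibility(order: Order, today: date) -> tuple[bool, str]:
    """Deterministic guardrail: the agent cannot hallucinate eligibility."""
    if today - order.purchase_date <= RETURN_WINDOW:
        return True, "within the 30-day return window"
    return False, "outside the 30-day return window"

def handle_return_request(order: Order, today: date) -> str:
    eligible, reason = check_return_eligibility(order, today)
    if not eligible:
        return f"Sorry, your {order.item} is {reason}, so it can't be returned."
    options = ["return in store", "mail it with a prepaid label", "print a QR code"]
    # In a real system, an LLM call would phrase this conversationally and
    # collect the customer's choice; here we just template it.
    return f"Your {order.item} is {reason}. You can: {', '.join(options)}."

print(handle_return_request(
    Order("A1001", date(2025, 1, 2), "running shoes"), date(2025, 1, 20)))
```

The design point is that the context window matters less here because the orchestration code, not the model, carries the state of the process.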
Do you think the output from AI should be copyrightable or patentable? Let me take an example. I go to the US patent office and download a patent, say for the AeroPress, and I upload it to o1 pro. Well, I can't upload it yet, because you don't let me do PDFs, so I upload it to GPT-4o instead, and I say, what's the next logical leap that I could patent off this? It gives me back diagrams and an output, and presumably if I look at that and think, yeah, that's legit, I want to file that patent. Can I?

I don't know the answer to that; I'm not an expert in intellectual property. But there will be an interesting question of whether that was your idea, given that you used a tool to produce it. I think the answer is probably yes: you used a tool to do it. I also think that, in general, the marginal cost of intelligence will go down a lot, so we'll be in a renaissance of new ideas and intelligence being produced. That's broadly a good thing, and the marginal value of that one insight you had might be lower than it would have been years ago.

What I was hoping you would say is that it's going to become less and less important, because I feel like patent trolls slow down innovation in some ways. Obviously there are legitimate patents that people infringe on, and there should be legal recourse. But if I could just go and patent a hundred things a day, it seems like that should not be allowed. This is what I'm saying, though.

Well, in general, I think patents make sense if they're protecting something in active use that you invented; that's the standard legal rationale for patents. Just generating a bunch of ideas and patenting them seems destructive to the value of the system.

So here's the idea I had last night to counter this, because I was thinking, I don't want somebody doing that. What if prior art eliminates those patents? What if I just set up an instance and publish it on a website? Nobody has to read it: here's a billion ideas. It's basically patenting anything, except it's creating prior art for everything, so you can't compete on that anymore. I don't know; I thought it was fun.

Tell me about the Google Maps story. This is now legend, and I want to hear it from you.

My weekend of coding? Is that what you want to hear about? Yeah. I'll start with the abbreviated story of Google Maps. We had launched a product at Google called Google Local, which was sort of a yellow pages search engine. Most listeners probably don't even know what yellow pages are, but it was a thing back then. We had licensed maps from MapQuest, the dominant mapping provider at the time, and it was sort of an eyesore on the experience; it always felt like maps could be a more meaningful part of local search and navigation on Google. Larry Page in particular was really pushing us to invest more in maps. We found this small company, four people if I'm remembering correctly, started by Lars and Jens Rasmussen, called Where 2 Technologies. They had made a Windows application called Expedition that was just a beautiful mapping product. It was running on Windows long after it was out of fashion to make Windows apps, simply because that was the technology they were comfortable with, but their maps were modeled on the A to Z street atlases in the UK and were just beautiful, and they had a lot of passion for mapping. So we did a little acqui-hire, put the Google Local team together with Lars and Jens's team, and said, okay, let's take the good ideas from this Windows app and the good ideas from Google Local and bring them together to make something completely new. That's what became Google Maps.

But there were a couple of idiosyncrasies in the integration, because it was a Windows app, and that helped and hurt us in a number of ways. One way it helped us: the reason Google Maps let you drag the map, and was so much more interactive than any web application that preceded it, is that the standard for interactivity we needed to hit was set by a native Windows app, not by the legacy websites of the time. Because the goalposts were so far down the field, which was a quirk of Lars and Jens's technical choices, we made much bolder technical bets than we would have otherwise. I think we would have ended up with something much less interactive had we not started from that quirky technical decision. But the other thing was that it was a Windows app.
It's hard to describe the early 2000s to people who didn't live it, but XML was really in fashion. Most things on Windows and elsewhere were XML, and XSLT, which was a way of transforming XML into different XML, was the basis of everything; all of enterprise software was XML this, XML that. So when we took some of these ideas and put them in a web browser, we kind of went on autopilot and used a ton of XML, and it made everything really, really tedious. Google Maps launched with some really great ideas, like the draggable maps, and we did a bunch of stuff with the local search technology so you could overlay restaurant listings. It was a really successful launch; we were the hotshots within Google afterwards. But it really started to show its cracks. We got to the point where we decided we wanted to support the Safari web browser, which was relatively new at the time, before mobile phones, and there was much less XML support in Safari than in Internet Explorer and Firefox. So one of the engineers implemented a full XSLT transform engine in JavaScript to get it to work. It was just [ __ ] on top of [ __ ] on top of [ __ ]. What had been a really elegant, fast web application had quickly become something slow, and there were a lot of dial-up modems at the time, so you'd show up to Maps and it was just slow. It bothered me as someone who takes a lot of pride in their craft. So I got really energized, and over a weekend and a lot of coffee, I rewrote it.

The whole thing, though? More or less the whole thing. It took probably another week of working through the bugs, but I sent it out to the team after that weekend. The reason I was able to do it: yeah, I'm a decent programmer, but I'd also lived with every bad decision up to that point, so I knew exactly the output I was going for. I had simulated it in my head: if I could do it over again, this is the way I'd do it. By the time I put my hands on the keyboard on Friday night, it wasn't like I was designing a product. I'd been in every detail of that product since the beginning, and I'd made some of the bad decisions too, though not all of them, so it was very clear. I knew what I wanted to accomplish. Any engineer who has worked on a big system knows what it's like to have the whole system mapped out in your head. And I also knew there's a lot of pride of authorship in engineering and code, so I really wanted to finish it over the weekend, so that people could use it and see how fast it was, and overcome anyone who was protective of the code they had written a few months earlier. So I really wanted the prototype to go out, and I did it. It's funny: I never talked about it again, but Paul Buchheit, who was the co-creator of Gmail and who started FriendFeed with me, mentioned this story in an interview. So now all of a sudden everyone's talking about it, and I was like, well, thank you, Paul.
I'm a little embarrassed that people know about it, but it's a true story. And XML is just the worst.

Did you get a lot of flack from the people who had built the system you effectively replaced? You were part of that team, but everybody else had so much invested in it, even though it was [ __ ] on top of [ __ ] on top of [ __ ].

I wrote a lot of it too. I'm sure there was some, but good teams want to do great work, and there were a lot of people constructively dissatisfied with the state of things. The engineer who had written that XSLT transform engine, that was a lot of work, and having to throw out a lot of work feels bad. But Lars and Jens and I wanted to make great products, and at the end of the day everyone was like, wow, that's great: we went from a bundle size of 200K to a bundle size of 20K, and it was a lot faster and better. Broadly speaking, in good engineering cultures you don't want a culture of ready-fire-aim, but you do need to be really outcomes-oriented, and if people start to treat their code as too precious, it can really impede forward progress. My understanding is that a lot of the early self-driving car software was hand-coded heuristics and rules, and a lot of smart people think it will eventually be a more monolithic model that encodes many of the same rules. You have to throw out a lot of code in that transition, but that doesn't mean it's not the right thing to do. So yeah, there might have been some feathers ruffled, but at the end of the day everyone said, that's faster and better, let's do it, which I think is the right decision.

That's awesome. I'm going to give you another hypothetical, and I want you to share your inner monologue as you think through it. If I told you that you had to put 100% of your net worth into a single public company today, and you couldn't touch it for at least 20 years, what company would you invest in? Walk me through your thinking.

I literally don't know how to answer that question. But I'll give you how I'd think about it. Not having been a public company CEO for a couple of years, I blissfully don't pay as much attention to the public markets, and valuations have obviously gone up a lot right now, although because it's a long-term question, maybe that doesn't matter. What I'd be thinking about right now is this: over the next 20 years, what are the parts of the economy that will most benefit from this current wave of AI? That's not the only way to invest over a 20-year period, but it's certainly a domain I understand. I mentioned a talk I heard a snippet of from Tyler Cowen, whose point was that AI will probably benefit different parts of the economy disproportionately. There will be some parts of the economy where intelligence is a limiting factor to growth, where you can absorb almost arbitrary levels of intelligence and generate almost arbitrary levels of growth.
Obviously there are limits to all of this, because when you change one part of the economy, it impacts other parts; that was Tyler's point in his talk. But I would think about it that way, because over a 20-year period there are certain parts of society that won't be able to change extremely rapidly, and some parts that probably will, and those will probably be domains where intelligence is the scarce resource right now. Then I would try to find companies that will disproportionately benefit from that. I assume this is why Nvidia's stock is so high right now: if you want to be downstream of everything, Nvidia will probably benefit from all of the investments in AI. Though I'm not sure I would do that over a 20-year period, just assuming the infrastructure will shift. So I don't have an intelligent answer, but that's the way I would think about the exercise.

I love that. What does your intuition say about which areas of the economy are limited by intelligence? And not just the economy; perhaps politicians might be limited by this and benefit from it, in which case countries could benefit enormously from AI and unlock growth and potential in their economies. But maybe to scope the question: what areas of the economy are limited by intelligence, or by smart workers, which is another form of the same limit?

Two that I think will probably benefit a lot are technology and finance. In finance, if you can make better financial decisions than competitors, you generate outsized returns, which is why, over the past 30 years of machine learning, hedge funds and financial services institutions, with everything from fraud prevention to true investment strategies, have already been a domain of investment. Software is similar, as we talked about. At some point we will no longer be supply-constrained in software, but we're not anywhere close to that right now. You're taking what has always been the scarce resource, software engineers, and making it not scarce, and if you think about how much that industry can grow as a consequence, we don't know; we've been so constrained on software engineering as a resource. Who knows what the limits are over the next 20 years, but we'll find out, and to me, intellectually, there's just a ton of growth there. And broadly, areas that process information will benefit quite a bit. The thing I would think about over a 20-year period is second- and third-order effects, which is why I don't have an intelligent answer. If you're asking me to put all my money into something, I'd think about it for a while, and probably use o1 pro a little to help me. Because you can generate a bunch of growth in the short term, but if everyone does it, it commoditizes the whole industry. Before the introduction of the freezer, ice was a really expensive thing, and now it's free. So I think it's really important to think through those effects if you're talking about a time frame of 20 years.
And that's why, not having thought about this question ahead of time, I may be oversimplifying, but I would say software and finance are areas that stand to reason should benefit quite a bit.

I love that response. How do you balance having a young family with running a startup?

Again, I work a lot. I really care about and love working, so one thing to say is that there are always trade-offs in life. If I didn't love working, I wouldn't do it as much as I do, but I just love to create things and love to have an impact, so I jump out of bed in the morning, work out, go to work, and then spend time with my family. Being honest: first, I'm not perfect at it, and second, I don't have a ton of hobbies. I basically work and spend time with my family. The first time we talked, you saw the couple of guitars in my background; I haven't picked one of those up in a while. I literally pick one up occasionally, but I don't devote any time to it, and I don't regret that either. I'm so passionate about what we're building at Sierra, I'm so passionate about OpenAI, and I love my family so much that I don't really have any regrets. Life is all about where you spend your time, and mine is at work and with family. So that's how I do it. I don't know if I'm particularly balanced, but I don't strive to be either. I take a lot of pride in my work, and I love to work.

Having sold the companies you started twice, how does that influence what you think of Sierra? Are you thinking, I'm building this in order to sell it, or do you think of it differently: this is my life's work?

That's not going to happen. I absolutely intend Sierra to be an enduring company and an independent company, though to be honest, every entrepreneur starts every company that way. I'm really grateful to both Facebook and Salesforce for having acquired my previous companies, and hopefully I had an impact at both. But you don't start off, or at least I never started off, saying, hey, I want to make a company to sell it. With Sierra, we have a ton of traction in the marketplace. I really do think Sierra is a leader in helping consumer brands build customer-facing AI agents, and I'm really proud of that. So I see a path, and I joke with Clay that I want to be an old man sitting on his porch complaining about how the next generation of leaders at Sierra doesn't listen to us anymore. I want this to be something that is not only enduring but outlives me.

Actually, I don't think we've ever talked about this, but it was a really interesting moment for me when Google went from its one building in Mountain View to its first corporate campus. We moved into the Silicon Graphics campus, right over near Shoreline Boulevard in Mountain View. SGI had been a company successful enough to build a campus, and it was actually quite awkward: we moved into half the campus.
They were still in the other half. Here we were, this up-and-coming company, and they were declining. Then at Facebook, when we moved out of the second building we were in, a slightly larger building in Palo Alto that I think we leased from HP, our first real campus came from Sun Microsystems, which had gone through the Oracle acquisition and had been on the decline. It was striking to me, because both SGI and Sun had been started and grown to prominence in my lifetime (I was maybe a little young, obviously, but in my lifetime), successful enough to build an entire corporate campus, and had then declined fast enough to sell that campus to a new software company. To have done that twice, to move into a used campus from the previous owners, was a very stark reminder that technology companies aren't entitled to their future success.

And I think we'll see this now with AI. AI will change the landscape of software from tools of productivity to agents that actually accomplish tasks. It will help some companies, for whom it amplifies their existing value proposition, and it will really hurt others, where the seat-based model of legacy software will erode very quickly and really harm them. So when I think about what it means to build an enduring company, that is a really, really tall task in my mind right now, because it means not only making something that's financially enduring over the next ten years, but setting up a culture where a company can actually evolve to meet the changing demands of society and technology when the pace of change is unprecedented in history. I think it's one of the most fun business challenges of all time, and it has as much to do with culture as with technology, because every line of code in Sierra today will probably be completely different five years from now, let alone thirty. When I think about it, I just get so much energy, because it's incredibly hard, harder now than it's ever been, to do something that lasts beyond you. But that, I think, is the ultimate measure of a company.

You mentioned AI agents. How would you define that? What's an agent?

I'll define it broadly first, and then I'll tell you how we think about it at Sierra, which is a more narrow view. The word agent comes from agency, and to me it means affording software the opportunity to reason and make decisions autonomously. That's really all it means to me, and there are lots of different applications of it. There are three categories I think are meaningful, and I'll end with the Sierra one so I can talk about it a little more. The first is personal agents. I think most people will have probably one, but maybe a couple, AI agents that they use on a daily basis and that essentially amplify them as an individual. They'll do rote things, like helping you triage your email or scheduling a vacation (you're flying back to Edmonton; help me arrange the travel), and more complex things: I'm going to ask my boss for a promotion, help me role-play it; I'm setting up my resume for this job, help me do that; I'm applying for a new job,
help me find companies I haven't thought of that I should be applying to. I think these personal agents will be really powerful, but they might be a really hard product to build, because when you think about all the different services and people you interact with every day, it's kind of everything; an agent has to generalize a lot to be useful to you. And because of personal privacy and things like that, it has to work really well for you to trust it. So I think it's going to take a while: a lot of demos, and a long road to robustness.

The second category of agent fills a persona within a company: a coding agent, a paralegal agent, an analyst agent. These already exist. I mentioned Cursor; there's a company called Harvey that makes a legal agent; I'm sure there's a bunch in the analyst space. These do a job, and they're more narrow, but they're really commercially valuable, because most companies already hire people or consultants to do those things, like analyzing the contracts in your supply chain: a rote kind of law, but really important, and AI can do it really well. That's why this is the area of the economy I find really exciting, and I'm excited about all the startups in this space: you're essentially taking what used to be a combination of people and software and making something that solves the problem. By narrowing the domain of autonomy, you can have more robust guardrails, and even with current models you can achieve something effective enough to be commercially viable today. And by the way, it changes the total addressable market of these models too. I don't know what the total addressable market of legal software was three years ago, but it couldn't have been that big. I can't even name a legal software company; I probably should be able to. But if you think about the money we spend on lawyers, that's a lot. So you end up broadening the addressable market quite a lot.

The domain we're in, I think, is somewhat special: a company's branded, customer-facing agent. One could argue we're helping with customer service, which is a persona, a role, but I think it's broader than that. Think about a website, say your insurance company's website, and try to list all the things you can do on it. You can look up the stock quote. You can look up the management team. You can compare the company to all its competitors. You can file a claim. You can bundle your home and auto. You can add a member of your family to your premium. There are a million things you can do on it. Over the past 30 years, a company's website, singular, has come to be the universe of everything you can do with that company; I like to think of it as the digital instantiation of the company. And that's what we're helping our customers do at Sierra: build a conversational AI that does all of that. Most of our customers start with customer service, and it's a great application, because no one likes to wait on hold, and something that has perfect access to information, is multilingual, and is empathetic is just amazing.
But when you put a conversational AI at your digital front door, people will say anything they want to it. So we're now doing product discovery and considered purchases. Going back to the insurance example: hey, I've got a 15-year-old daughter, I'm really concerned about the cost of her premium until she grows up; tell me which plan I should be on; tell me why you'll be better than your competitors. That's a really complex interaction. You can't make a web page that does that, but it's a great conversation. So we really aspire that when you encounter a branded agent in the wild, Sierra is the platform that powers it.

And it's super important, because there was a case, at least in Canada, where an AI agent for Air Canada hallucinated a bereavement policy, and they were found liable; they were held to what the agent said. And it was an AI agent; there was no human involved at all.

Well, look, it's one thing if ChatGPT hallucinates something about your brand. It's another if your own AI agent hallucinates something about your brand. The bar just gets higher. The robustness of these agents, the guardrails, everything is more important when it's yours and it has your brand on it. So it's harder, but I'm also so excited about it, because, and this is a little overly intellectual, but I really like the framing: a modern website or mobile app is essentially a directory of functionality from which you can choose, and the main party with agency in that experience is the creator of the website, who decides the universe of options. When an AI agent represents your brand, the agency goes to the customer. They can express their problem any way they want, in a multifaceted way. Your customer experience goes from the enumerated set of functionality you've decided to put on your website to whatever your customers ask, and then you decide how to fulfill those requests, or whether you want to. I think it really changes the dynamic in a way that's empowering to consumers, as you said.

And that Air Canada case is the reason we exist. If companies try to build this themselves, there are a lot of ways to shoot yourself in the foot. In particular, your customer experience should not be wedded to one model, let alone to this current generation of models. With Sierra, you define your customer experience once, in a way that's abstracted from all of the technology, and it can be a chat, it can call you on the phone, all of those things, and as new models and new technology come out, our platform just gets better without you re-implementing your customer experience. That's really important. Think about what's happened over the past two years: can you imagine being a consumer brand like ADT home security and trying to maintain your AI agent through all of that? It's just not tenable; it's not what you do as ADT. So they've worked with us to build their AI agent.
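To make that "define the experience once, abstracted from the model" idea concrete, here is a minimal sketch of the decoupling pattern. This is not Sierra's actual architecture or API; every name below is invented for illustration, and the only real interface used is the OpenAI Python SDK.

```python
# A sketch of decoupling a brand's customer experience from any one model.
# Invented names throughout; this illustrates the abstraction, not Sierra's API.
from typing import Protocol

class ChatModel(Protocol):
    def complete(self, system: str, user: str) -> str: ...

class OpenAIBackend:
    """One interchangeable backend; newer models swap in without touching the agent."""
    def __init__(self, model_name: str = "gpt-4o"):
        from openai import OpenAI
        self.client = OpenAI()
        self.model_name = model_name

    def complete(self, system: str, user: str) -> str:
        resp = self.client.chat.completions.create(
            model=self.model_name,
            messages=[{"role": "system", "content": system},
                      {"role": "user", "content": user}],
        )
        return resp.choices[0].message.content

class BrandAgent:
    """The customer experience is defined once, against the abstraction."""
    def __init__(self, brand_policy: str, model: ChatModel):
        self.brand_policy = brand_policy
        self.model = model

    def reply(self, customer_message: str) -> str:
        return self.model.complete(self.brand_policy, customer_message)

# Upgrading to a newer model is a one-line change; the experience definition
# and the channels built on top (chat, phone) are untouched.
agent = BrandAgent(
    brand_policy=("You are ExampleCo's support agent. Answer only from "
                  "documented policy; escalate anything else to a human."),
    model=OpenAIBackend("gpt-4o"),
)
print(agent.reply("Can I return shoes I bought three weeks ago?"))
```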
How do you fend off complacency? A lot of these companies, and maybe not just in tech, get big, get dominant, and then take their foot off the gas, and that opens the door to competitors. There's almost a natural entropy to bureaucracy in some of these companies, and the bureaucracy sows the seeds of failure and competition. How do you fend that off constantly?

It is a really challenging thing to do at a company. There are two things I've observed that manifest as corporate complacency. One is bureaucracy. The root of bureaucracy is often that when something goes wrong, companies introduce a process to fix it, and over a sequence of 30 years, the layered sum of all those processes, each created for good reasons with good intentions, ends up being a bureaucratic machine in which the reasons for many of the rules and processes are rarely even remembered by the organization. That creates a natural inertia. Sometimes the inertia can be good; there are definitely stories of executives coming in with ready-fire-aim new strategies that backfire massively. But often it means that in the face of a technology shift or a new competitor, you just can't move fast enough to address it.

The second thing, which is more subtle, is that as a company grows in size, its internal narrative can become stronger than the truth from customers. I remember one time, around the peak of the smartphone wars, visiting a friend on Microsoft's campus. I got off the plane at Seattle-Tacoma airport, drove to Redmond, went onto the campus, and all of a sudden everyone I saw was using Windows phones. I assumed it must have been a requirement, formal or social; you were definitely uncool if you used anything else. From my perspective at the time, the war had already been lost; it was clearly a two-horse race between Apple and Google, iOS and Android. Sitting in the lobby, waiting for my friend to get me from the security check-in, I made a comment to someone at Microsoft. It wasn't confrontational, just curious: something along the lines of, are you required to use Windows phones? And I got a really bold answer: yeah, we're going to win; we're taking over the smartphone market. I didn't say anything, because it would have been socially awkward to say, no you're not; you lost four years ago. But there was a process there, something happening, that was preventing them from seeing reality.

And that's the thing. If you've ever worked at a large company, you know the difference. When you work at a small company, you care about your customers and your competitors, and you feel every bump in the road. When you're a junior vice president of whatever, eight levels below your CEO, with a set of objectives and key results, you might be focused on going from junior vice president to senior vice president, because that's what success looks like for you, and you end up with a myopic focus on that internal world.
In the same way, your kids will focus on the social dynamics of their high school, not the world outside of it. And it's probably rational, by the way, because their social life is more determined by those 10,000 kids in their high school than by everything outside. But that's the life of a person inside these big places. And so you end up where, if you have a very senior head of product saying, "This competitor says they're faster, but our next version is so much better," then all of a sudden everyone says it, the way everyone said Windows Phone was going to win. And you truly believe it, because everyone you meet says the same thing, and you end up reflecting customer anecdotes through that lens. You end up with a reality distortion field manifested from the sum of this myopic storytelling that exists within companies. What's interesting is that the ability of a culture to believe in something is actually a great strength of a culture, but it can lead to this as well. And so the combination of bureaucracy and inaccurate storytelling, I think, is the reason companies die. It's really remarkable to look at the BlackBerrys of the world: as the plane is crashing, you can still tell the story that it's not. And similarly, culturally, you can still have the person in the back of that crashing plane asking, "When am I going to get promoted to SVP?" And you're like, what the... I've seen it a hundred times.

So I think it really comes down to leadership. One of the things most great companies have is that they are obsessed with their customers. The free market doesn't lie. And so one of the most important things for any enduring culture, particularly in an industry that changes as rapidly as software, is how close your employees are to customers, and how much the direct voice of your customers can be part of your decision-making. That is something you need to constantly work at, because how employee number 30,462 actually, directly hears from customers is not a simple question to answer.

Is it direct? Is it filtered? How many filters are there?

That's exactly right. And then the other part of leadership, since we talked about bureaucracy: process is there to serve the needs of the business, and mid-level managers often don't get credit for removing process; they're held accountable when things go wrong. So I think it really takes top-down leadership to remove bureaucracy. And it is not always comfortable. When companies remove spans of control, all the people impacted react like antibodies, and for good reason; their lives are negatively impacted. But it almost has to come from the top, because you need to give air cover. Almost certainly something will go wrong, by the way; processes usually exist for a reason. But when they accumulate without end, you end up with bureaucracy.
So those are the two things I always look for. And you can smell it when you go into a really bureaucratic company: the inaccurate storytelling, the process over outcomes. It sort of sucks the energy out of you, and you feel it.

That's a great answer. We always end these interviews with the exact same question, which is: what is success for you?

Success for me? We talked about how I spend my time. With my family, it's having a happy, healthy family. At work, it's being able to work with my co-founder Clay for the rest of my life, making Sierra into an enduring company. That would be success for me. [Music]

Thanks for listening and learning with us. For a complete list of episodes, show notes, transcripts, and more, go to fs.blog/podcast or just Google "The Knowledge Project." Recently, I've started to record my reflections and thoughts about the interview. After the interview, I sit down, highlight the key moments that stood out for me, and talk about connections to other episodes and what's got me pondering that I maybe haven't quite figured out. This is available to supporting members of The Knowledge Project. You can go to fs.blog/membership, check out the show notes for a link, and sign up today. My reflections will be available in your private podcast feed, and you'll also skip all the ads at the front of the episode. The Farnam Street blog is also where you can learn more about my new book, Clear Thinking: Turning Ordinary Moments into Extraordinary Results. It's a transformative guide that hands you the tools to master your fate, sharpen your decision-making, and set yourself up for unparalleled success. Learn more at fs.blog/clear. Until next time. [Music]