Sam, welcome to TED. Thank you so much for coming.

Thank you. It's an honor.

Your company has been releasing crazy, insane new models pretty much every other week, it feels like. I've been playing with a couple of them, and I'd like to show you what I've been playing with. So, Sora — this is the image and video generator. I asked Sora this: what will it look like when you share some shocking revelations here at TED? Want to see how it imagined it? ... I mean, not bad, right? How would you grade that?

Five fingers on all hands. Very close to what I'm wearing.

You know, I've never seen you quite that animated. You're normally—

No, I'm not that animated a person. So maybe a B+.

But this one genuinely astounded me. When I asked it to come up with a diagram that shows the difference between intelligence and consciousness — like, how would you do that? — this is what it did. I mean, it's so simple, but it's incredible. What is the kind of process that would allow this? This is clearly not just image generation; it's linking into the core intelligence that your overall model has.

Yeah, the new image generation model is part of GPT-4o, so it's got all of the intelligence in there, and I think that's one of the reasons it's been able to do these things that people really love.

I mean, if I'm a management consultant and I'm playing with some of this stuff, I'm thinking, uh-oh — what does my future look like?

I think there are sort of two views you can take. You can say, "Oh man, it's doing everything I do — what's going to happen to me?"
Or you can say, as through every other technological revolution in history: OK, now there's this new tool, I can do a lot more — what am I going to be able to do? It is true that the expectation of what we'll have for someone in a particular job increases, but the capabilities will increase so dramatically that I think it'll be easy to rise to that occasion.

So this impressed me too. I asked it to imagine Charlie Brown thinking of himself as an AI, and it came up with this. I thought it was actually rather profound. What do you think?

I mean, the writing quality of some of the new models — not just here, but in detail — is really going to a new level.

Yeah. This is an incredible meta answer, but there's really no way to know if it is thinking that, or just saw it a lot of times in the training set. And of course, if you can't tell the difference, how much do you care?

So that's really interesting — we don't know. But isn't there, though... at first glance this looks like IP theft. You don't have a deal with the Peanuts estate, do you?

You can clap about that all you want. Enjoy. I will say that I think the creative spirit of humanity is an incredibly important thing, and we want to build tools that lift that up — that make it so that new people can create better art, better content, write better novels that we all enjoy. I believe very deeply that humans will be at the center of that. I also believe that we probably do need to figure out some sort of new model around the economics of creative output. People have been building on the creativity of others for a long time; people have taken inspiration for a long time. But as access to creativity gets incredibly democratized, and people are building off each other's ideas all the time, I think there are incredible new business models that we and others are excited to explore. Exactly what that's going to look like, I'm not sure. Clearly there's some cut-and-dried stuff, like:
you can't copy someone else's work. But how much inspiration can you take? If you say, "I want to generate art in the style of these seven people, all of whom have consented to that," how do you divvy up how much money goes to each one? These are big questions. But every time throughout history we have put better and more powerful technology in the hands of creators, I think we collectively get better creative output, and people do just more amazing stuff.

I mean, an even bigger question is when they haven't consented to it. In our opening session, Carole Cadwalladr showed, you know — ChatGPT, give a talk in the style of Carole Cadwalladr. And sure enough, it gave a talk that wasn't quite as good as the talk she gave, but it was pretty impressive. And she said, "OK, it's great, but I did not consent to this." How are we going to navigate this? Shouldn't it just be people who have consented? Or shouldn't there be a model that somehow says that any named individual in a prompt, whose work is then used, should get something for that?

So right now, if you use our image gen and say, "I want something in the style of a living artist," it won't do that. But if you say, "I want it in the style of this particular kind of vibe, or this studio, or this art movement," or whatever, it will. And obviously, if you say, "Output a song that is a copy of this song," it won't do that. The question of where that line should be, and how people say "this is too much" — we sorted that out before with copyright law and what fair use looks like. Again, I think in the world of AI there will be a new model that we figure out.

But from the point of view — I mean, creative people are some of the angriest people right now, or the most scared people, about AI. And the difference between feeling your work is being stolen from you, your future is being stolen from you, and feeling your work is being amplified, and can
be amplified — those are such different feelings. And if we could shift to the second one, I think that really changes how much humanity as a whole embraces all this.

Well, again, I would say some creative people are very upset, and some creatives are like, "This is the most amazing tool ever; I'm doing incredible new work." But, you know, it's definitely a change, and I have a lot of empathy for people who are just like, "I wish this change weren't happening. I liked the way things were before. I liked—"

Sorry, but in principle, you can calculate — from any given prompt, there should be some way of calculating what percentage of a subscription revenue, or whatever, goes towards each answer. In principle it should be possible, if one could get the rest of the rules figured out — it's obviously complicated — you could calculate some kind of revenue share.

Yeah. If you're a musician, and you spend your whole life, your whole childhood, listening to music, and then you get an idea and go compose a song that is inspired by what you've heard before but takes a new direction, it'd be very hard for you to say, "This much was from this song I heard when I was 11, this much from when I saw that."
That's right — but we're talking here about the situation where someone specifically names someone in a prompt.

Yeah. So again, right now, if you try to generate an image in the named style of a living artist, we just don't do it. But I think it would be cool to figure out a new model where, if you say, "I want to do it in the style of this artist," and they opt in, there's a revenue model there. That's OK; I think that's a good thing to explore.

So I think the world should help you figure out that model quickly, and I think it'll make a huge difference, actually. I want to switch topics quickly: the battle between your model and open source. How much were you shaken up by the arrival of DeepSeek?

I think open source has an important place. We actually just last night hosted our first community session to kind of decide the parameters of our open-source model and how we want to shape it. We're going to do a very powerful open-source model. I think this is important. We're going to do something near the frontier — I think better than any current open-source model out there. There will be people who use this in ways that some people in this room, maybe you or I, don't like. But there is going to be an important place for open-source models as part of the constellation here. And, you know, I think we were late to act on that, but we're going to do it really well now.

I mean, you're spending, it seems, an order or even orders of magnitude more than DeepSeek allegedly spent — although I know there's controversy around that. Are you confident that the actual better model is going to be recognized? Isn't this in some ways life-threatening to the notion that by going to massive scale — tens of billions of dollars of investment — you can maintain an incredible lead?

All day long I call people and beg them to give us their GPUs. We are so incredibly constrained. Our growth is going like this.
DeepSeek launched, and it didn't seem to impact it; there's other stuff that's happening.

Tell us about the growth, actually. You gave me a shocking number backstage. I mean, I have never seen growth in any company — one that I've been involved with or not — like this.

The growth of ChatGPT — it's really fun. I feel great, deeply honored. But it is crazy to live through, and our teams are exhausted and stressed, and we're trying to keep things up.

How many users do you have now?

I think the last time we said was 500 million weekly actives, and it is growing very rapidly.

I mean, you told me that it doubled in just a few weeks, in terms of compute.

That was privately, but I guess—

Oh, I misremembered. I'm so sorry — we can edit that out.

It's growing very fast.

So you're confident. You're seeing it grow, take off like a rocket ship; you're releasing incredible new models all the time. What are you seeing in your best internal models right now that you haven't yet shared with the world, but would love to, here on this stage?

So, first of all, you asked, you know, are we worried about this model or that model. There will be a lot of intelligent models in the world; very smart models will be commoditized to some degree. I think we'll have the best, and for some uses you'll want that. But honestly, the models are now so smart that for most of the things most people want to do, they're good enough. I hope that'll change over time, because people will raise their expectations. But if you're kind of a standard ChatGPT user, the model capability is very smart. We have to build a great product, not just a great model. There will be a lot of people with great models, and we will try to build the best product. People want their image gen — some Sora examples for video, earlier — they want to integrate it with all their stuff. We just launched a new feature — still just called "memory," but it's way better
than the memory before — where this model will get to know you over the course of your lifetime. And we have a lot more stuff to come, to build this great integrated product. I think people will stick with that. So there will be many models, but I think we will — I hope — continue to focus on building the best defining product in the space.

I mean, after I saw your announcement yesterday that ChatGPT will now know all of your query history, I entered, "Tell me about me, ChatGPT, from all you know," and my jaw dropped. It was shocking. It knew who I was, and all these sorts of interests — hopefully mostly appropriate and sharable. But it was astonishing, and I felt a sense of real excitement — a little bit queasy, but mainly excitement — at how much more that would allow it to be useful to me.

One of our researchers tweeted, kind of yesterday morning, that the upload happens bit by bit. It's not that you plug your brain in one day. But you will talk to ChatGPT over the course of your life, and someday, maybe, if you want, it'll be listening to you throughout the day and sort of observing what you're doing. It'll get to know you, and it'll become this extension of yourself, this companion, this thing that just tries to help you be the best, do the best you can.

In the movie Her, the AI basically announces that she's read all of his emails and decided he's a great writer, and persuades a publisher to publish him. Is that — that might be coming sooner than we think?

I don't think it'll happen exactly like that. But yeah, I think something in that direction — where you don't have to just go to ChatGPT and say, "I have a question, give me an answer," but you're proactively pushed things that help you, that make you better. That does seem like it's coming soon.

So what have you seen that's coming up
internally that you think is going to blow people's minds? Give us at least a hint of what the next big jaw-dropper is.

The thing that I'm personally most excited about, at this point, is AI for science. I am a big believer that the most important driver of the world — and of people's lives getting better and better — is new scientific discovery. We can do more things with less; we push back the frontier of what's possible. We're starting to hear a lot from scientists with our latest models that they're actually just more productive than they were before, and that's actually mattering to what they can discover.

What's a plausible near-term discovery? Like room-temperature superconductors — is that possible?

That would be a great one. Yeah, I don't think that's prevented by the laws of physics, so it should be possible — but we don't know for sure. I think you'll start to see some meaningful progress against disease with AI-assisted tools. Physics maybe takes a little bit longer, but I hope for it. So that's one direction. Another that I think is big is starting pretty soon, like in the coming months: software development has already been pretty transformed. It's quite amazing how different the process of creating software is now than it was two years ago, but I expect another move that big in the coming months, as agentic software engineering really starts to happen.

I've heard engineers say that they've had almost religious-like moments with some of the new models, where suddenly they can do in an afternoon what would have taken them two years.

Yeah — that's been one of my big "feel the AGI" moments.

But talk about the scariest thing that you've seen. Because outside, a lot of people picture you as — you have access to this stuff, and we hear all these rumors coming out of AI, and it's like, oh my god, they've seen consciousness, or they've seen AGI, or
they've seen some kind of apocalypse coming. Has there been a scary moment when you've seen something internally and thought, uh-oh, we need to pay attention to this?

There have been moments of awe, and with that always comes: how far is this going to go? What is this going to be? But we're not secretly sitting on a conscious model, or something capable of self-improvement, or anything like that. People have very different views of what the big AI risks are going to be, and I myself have evolved in thinking about where we're going to see those. But I continue to believe there will come very powerful models that people can misuse in big ways. People talk a lot about the potential for new kinds of bioterror; models that can present a real cybersecurity challenge; models capable of self-improvement in a way that leads to some sort of loss of control. So I think there are big risks there. And then there's a lot of other stuff, which honestly is kind of what I think many people mean, where people talk about disinformation, or models saying things that they don't like, or things like that.

Sticking with the first of those: do you check for that internally before release?

Of course, yeah. We have this preparedness framework that outlines how we do that.

I mean, you've had some departures from your safety team. How many people have departed? Why have they left?

I don't know the exact number, but there are clearly different views about AI safety systems. I would really point to our track record. There are people who will say all sorts of things. Something like 10 percent of the world uses our systems now, a lot, and we are very proud of the safety track record.

But track record isn't the issue, in a way. Because we're talking about an exponentially growing power, where we fear that we may wake up one
day and the world is ending. So it's really not about track record; it's about plausibly saying that the pieces are in place to shut things down quickly if we see a danger.

Yeah, of course — that's important. You don't wake up one day and say, "Hey, we didn't have any safety process in place; now we think the model's really smart, so now we have to care about safety." You have to care about it all along this exponential curve. Of course the stakes increase, and there are big challenges. But the way we learn how to build safe systems is this iterative process of deploying them to the world, getting feedback while the stakes are relatively low, and learning, "Hey, this is something we have to address." And as we move into these agentic systems, there's a whole big category of new things we have to learn to address.

So let's talk about agentic systems, and the relationship between that and AGI. I think there's confusion out there — I'm confused. Artificial general intelligence: it feels like ChatGPT is already a general intelligence. I can ask it about anything, and it comes back with an intelligent answer. Why isn't that AGI?

First of all, you can't ask it anything — that's a very nice thing to say, but there are a lot of things it's still embarrassingly bad at. But even if we fix those, which hopefully we will, it doesn't continuously learn and improve. It can't go get better at something it's currently weak at. It can't go discover new science and update its understanding. And even if we lower the bar, it can't just do any knowledge work you could do in front of a computer — actually, even without the ability to get better at something it doesn't know yet, I might accept that as a definition of AGI. But with the current systems, you can't say, "Hey, go do this task for my job," and have it go off and click around the internet and call someone and look at your files and do it. And
without that, it feels definitely short of it.

I mean, do you guys internally have a clear definition of what AGI is, and when do you think we may be there?

It's like the joke: if you got ten OpenAI researchers in a room and asked them to define AGI, you'd get fourteen definitions.

That's worrying, though, isn't it? Because that has been the mission: we're going to be the first to get to AGI, and we'll do so safely. But you don't have a clear definition of what it is.

I was going to finish the answer. What I think matters, though — and what people want to know — is not where this one magic moment of "we finished" is. Given that what looks like it's going to happen is that the models just get smarter and more capable, and smarter and more capable, on this long exponential, different people will call it AGI at different points. But we all agree it's going to go way, way past that — to whatever you want to call these systems that get much more capable than we are. The thing that matters is: how do we talk about a system that is safe through all of these steps and beyond — as the system gets more capable than we are, as the system can do things that we don't totally understand? More important than "when is AGI coming" and "what's the definition of it" is recognizing that we are on this unbelievable exponential curve. You can say, "This is what I think AGI is"; someone else can say, "Superintelligence is out here." But we're going to have to contend with — and get wonderful benefits from — this incredible system. So I think we should shift the conversation away from "what's the AGI moment" to a recognition that this thing is not going to stop, it's going to go way beyond what any of us would call AGI, and we have to build a society to get the tremendous benefits of this and figure out how to make it
safe.

Well, one of the conversations this week has been that the real change moment — AGI is a fuzzy thing, but what is clear is agentic AI: when AI is set free to pursue projects on its own and to put the pieces together. You've got a thing called Operator, which starts to do this, and I tried it out. I wanted to book a restaurant, and it's kind of incredible — it can go ahead and do it. But this is what it said: it was an intriguing process, and then, "Give me your credit card," and everything else. And in this case I declined to go forward. But I think this is the challenge people are going to have: it's an incredible superpower, and it's a little bit scary. Yoshua Bengio, when he spoke here, said that agentic AI is the thing to pay attention to — this is when everything could go wrong, as we give AI power to go out onto the internet to do stuff. Going out onto the internet was always, in the sci-fi stories, the moment where escape happened and things could go horribly wrong. How do you both release agentic AI and have guardrails in place so it doesn't go too far?

First of all, obviously, you can choose not to do this and say, "I don't want this; I'm going to call the restaurant and read them my credit card over the phone."

I could choose that, but someone else might say, "Go out, ChatGPT, onto the internet at large, and rewrite the internet to make it better for humans," or whatever.
Yeah. The point I was going to make is just that, with any new technology, it takes a while for people to get comfortable. I remember when I wouldn't put my credit card on the internet, because my parents had convinced me someone was going to read the number — you had to fill out the form and then call them. And then we all kind of said, OK, we'll build anti-fraud systems, and we can get comfortable with this. I think people are going to be slow to get comfortable with agentic AI in many ways. But I also really agree with what you said: even if some people are comfortable with it and some aren't, we are going to have AI systems clicking around the internet. And this is, I think, the most interesting and consequential safety challenge we have yet faced. Because with AI that you give access to your systems, your information, the ability to click around on your computer — when the AI makes a mistake, it's much higher stakes. We talked earlier about safety and capability; I kind of think they're increasingly becoming one and the same. A good product is a safe product. You will not use our agents if you do not trust that they're not going to empty your bank account, or delete your data, or who knows what else. People want to use agents they can really trust, that are really safe. We are gated in our ability to make progress by our ability to do that. It's a fundamental part of the product.

But in a world where agency is out there — say that open models are widely distributed, and someone says, "OK, AGI, I want you to go out onto the internet and spread a meme, however you can, that X people are evil," or whatever it is. It doesn't have to be an individual choice; a single person could let that agent out there, and the agent could decide, "Well, in order to execute on that function, I've got to copy myself everywhere." And, you know — are
there red lines that you have clearly drawn internally, where you know what the danger moments are, such that you cannot put out something that could go beyond them?

Yeah, so this is the purpose of our preparedness framework. We'll update it over time, but we've tried to outline where we think the most important danger moments are: what the categories are, how we measure them, and how we would mitigate something before releasing it. I could tell from the conversation — you're not a big AI fan.

No, on the contrary: I use it every day. I'm awed by it. I think this is an incredible time to be alive — I wouldn't want to be alive at any other time — and I cannot wait to see where it goes. But I think it's essential to hold both things at once: you can't divide people into those camps. You have to hold a passionate belief in the possibility, but not be over-seduced by it, because things could go horribly wrong.

What I was going to say is: I totally understand that. I totally understand looking at this and saying, "This is an unbelievable change coming to the world, and maybe I don't want this. Or maybe I love parts of it — maybe I love talking to ChatGPT, but I worry about what's going to happen to art, and I worry about the pace of change, and I worry about these agents clicking around the internet. And maybe, on balance, I wish this weren't happening, or I wish it were happening a little slower, or I wish it were happening in a way where I could pick and choose which parts of progress were going to happen." I think the fear is totally rational; the anxiety is totally rational. We all have a lot of it too. But, A, there will be tremendous upside — obviously, you use it every day, you like it. B, I really believe that society figures out over time, with some big mistakes along the way, how to get technology right. And C, this is going to happen. This is like a discovery of
fundamental physics that the world now knows about, and it's going to be part of our world. I think this conversation is really important. Talking about these areas of danger is really important; talking about new economic models is really important. But we have to embrace this with caution, not fear, or we will get run over by other people who use AI to do it better.

Well, you've actually been one of the most eloquent proponents of safety. You testified in the Senate. I think you said, basically, that we should form a new safety agency that licenses any effort — i.e., it would refuse to license certain efforts. Do you still believe in that policy proposal?

I have learned more about how the government works. I don't think this is quite the right policy proposal.

What is the right policy proposal?

I do think that as these systems get more advanced and have legitimate global impact, we need some way — maybe the companies themselves put together the right framework or the right model for this — but we need some way for very advanced models to undergo external safety testing, and for us to understand when we get close to some of these danger zones. I very much still believe in that.

It struck me as ironic that a safety agency might be what we want, and yet agency is the very thing that is unsafe — there's something odd about the language there. But anyway — I'm trying to click on the slide and it's not going. So I asked—

Can I say one more thing?

Yes, please.

I do think this concept is really important: we need to define rigorous testing for models, understand which threats we as a society collectively most want to focus on, and make sure that as models get more capable, we have a system where we all get to understand what's being released into the world. And I think we're not far away from models that are going to be of great public interest in that sense.

So, Sam, I asked your o1 Pro reasoning model — which is
incredibly—

Thank you.

—for $200 a month.

It's a bargain at the price.

I said, "What is the single most penetrating question I could ask you?" It thought about it for two minutes.

Two minutes?

You want to see the question?

I do.

"Sam, given that you're helping create technology that could reshape the destiny of our entire species, who granted you — or anyone — the moral authority to do that? And how are you personally accountable if you're wrong?"

It was good.

That was impressive. [Applause]

You've been asking me versions of this for the last half hour. What do you think?

I— well, here's my version of that question, but no answer.

What was your question for me?

Yeah — how would you answer it, in your shoes? I don't know. Well, I am puzzled by you. I'm kind of awed by you, because you built one of the most astonishing things out there. There are two narratives about you. One is that you are this incredible visionary who's done the impossible and shocked the world: with far fewer people than Google, you came out with something much more powerful than anything they'd done. It is amazing what you've built. But the other narrative is that you have shifted ground — that you've shifted from OpenAI as this open thing to the allure of building something super powerful, that you've lost some of your key people, that there's a narrative out there, which some people believe, that you're not to be trusted in this space. I would love to know who you are. What is your narrative about yourself? What are your core values, Sam, that can give the world confidence that someone with so much power here is entitled to it?

Look, I think, like anyone else, I'm a nuanced character that doesn't reduce well to one dimension here. So probably some of the good things are true, and probably some of the criticism is true. In terms of OpenAI: our goal is to make AGI and distribute it,
make it safe, for the broad benefit of humanity. I think by all accounts we have done a lot in that direction. Clearly our tactics have shifted over time. I think we didn't really know what we were going to be when we grew up; we didn't think we would have to build a company around this. We learned a lot about how it goes, and the realities of what these systems were going to take in terms of capital. But in terms of putting incredibly capable AI, with a high degree of safety, in the hands of a lot of people, and giving them tools to do whatever amazing things they're going to do — I think it'd be hard to give us a bad grade on that. I do think it's fair that we should be open-sourcing more. I think it was reasonable — for all of the reasons you asked about earlier, as we weren't sure about the impact these systems were going to have and how to make them safe — that we acted with precaution. I think a lot of your questions earlier would suggest at least some sympathy to the fact that we've operated that way. But now I think we have a better understanding, as a world, and it is time for us to put very capable open systems out into the world. If you invite me back next year, you will probably yell at me about somebody who has misused these open-source systems and say, "Why did you do that? That was bad.
You should not have gone back to your open roots." But there are trade-offs in everything we do, and we are one player, one voice, in this AI revolution, trying to do the best we can and steward this technology into the world in a responsible way. We've definitely made mistakes, and we'll definitely make more in the future. On the whole, over the last almost decade — it's been a long time now — we have mostly done the thing we set out to do. We have a long way to go in front of us, and our tactics will shift more in the future, but adherence to the mission and to what we're trying to do is, I think, very strong.

You posted this. OK, so here's the ring of power from The Lord of the Rings. Your rival — I will say, not your best friend at the moment — Elon Musk claimed that he thought you'd been corrupted by the ring of power. An allegation that, by the way, could be applied to Elon as well, to be fair. But I'm curious—

I might respond. I'm thinking about it. I might say something.

It's what is in everyone's mind as we see technology CEOs get more powerful, get richer: can they handle it? Or does it become irresistible — do the power and the wealth make it impossible to sometimes do the right thing, and you just have to cling tightly to that ring? Do you feel that ring sometimes?

How do you think I'm doing, relative to other CEOs who have gotten a lot of power and changed how they act, or done a bunch of stuff in the world? How do you think—

You have a beautiful — you are not a rude, angry person who comes out and says aggressive things to other people.

I do do that. That's my single vice.

No. I think, in the way that you personally conduct yourself, it's impressive. I mean, the
question some people ask is: is that the real you, or is there something else going on? But you have seen... you put up the Sauron ring of power, or whatever that thing is. So I'll take the feedback. What is something I have done where you think I've been corrupted by power? I think the fear is just the transition of OpenAI to a for-profit model. Some people say, "Well, there you go, you got corrupted by the desire for wealth." At one point there was going to be no equity in it; now it'll make you fabulously wealthy. By the way, I don't think that is your motivation personally. I think you want to build stuff that is insanely cool. What I worry about is the competitive feeling: that you see other people doing it, and it makes it impossible to develop at the right pace. But you tell me. So few people in the world have the kind of capability and potential you have; we don't know what it feels like. What does it feel like? Shockingly, the same as before. I think you can get used to anything step by step. If I were transported from 10 years ago to right now all at once, it would feel very disorienting, but anything does become sort of the new normal. So it doesn't feel any different, and it's strange to be sitting here talking about this, but the monotony of day-to-day life, which I mean in the best possible way, feels exactly the same. You're the same person? I'm sure I'm not, in all sorts of ways, but I don't feel any different. This was a beautiful thing you posted about your son. That last thing you said, "I've never felt love like this": I think any parent in the room knows that feeling, that wild biological feeling that humans have and AIs never will. You're holding your kid, and I'm wondering whether that's changed how you think about things. Like, say,
here's a black box with a red button on it. You can press that button and give your son likely the most unbelievable life, but you also inject a 10% chance that he gets destroyed. Do you press that button? In the literal case, no. If the question is whether I feel like I'm doing that with my work, the answer is I also don't feel like that. Having a kid changed a lot of things, and it is by far the most amazing thing that has ever happened to me. Everything everybody says is true. The thing my co-founder Ilya said once, and this is a paraphrase, is something like, "I don't know what the meaning of life is, but for sure it has something to do with babies." It's unbelievably accurate. It changed how much I'm willing to spend time on certain things; the cost of not being with my kid is just crazily high. But, you know, I really cared about not destroying the world before. I really care about it now. I didn't need a kid for that part. I definitely think more about what the future will be like for him in particular, but I feel a responsibility to do the best thing I can for the future, for everybody. Tristan Harris gave a very powerful talk here this week in which he said that the key problem, in his view, was that you and your peers building these other models all feel basically that the development of advanced AI is inevitable, that the race is on, and that there is no choice but to try to win that race and to do so as responsibly as you can. And maybe there's a scenario where your superintelligent AI can act as a brake on everyone else's, or something like that. But the very fact that everyone believes it is inevitable means that that is a pathway to serious risk and instability. Do you think that you and your peers do feel that it's inevitable? And can you see any pathway out of that, where we could collectively agree to slow things down a bit, have society as a whole weigh in a bit and say, "No,
let's, you know, we don't want this to happen quite as fast; it's too disruptive." First of all, I think people slow things down all the time: because the technology is not ready, because something's not safe enough, because something doesn't work. All of the efforts hold on things, pause on things, delay on things, don't release certain capabilities. So I think this happens, and again, this is where I think the track record does matter. If we were rushing things out and there were all sorts of problems, either the product didn't work as people wanted it to, or there were real safety issues, or other things there (and I will come back to a change we made), I think you could say that. There is communication between most of the efforts, with one exception. I think all of the efforts care a lot about AI safety, and I think there's really deep care to get this right. The caricature of this as just a crazy race or sprint misses the nuance: people are trying to put out models quickly and make great products for people, but people feel the impact of this so strongly that, if you could go sit in a meeting at OpenAI or other companies, you'd say, "Oh, these people really do care about this."
Now, we did make a change recently to how we think about one part of what's traditionally been understood as safety, which is, with our new image model, we've given users much more freedom on what we would traditionally think about as speech harms. You know, if you try to get offended by the model, will the model let you be offended? In the past, we've had much tighter guardrails on this, but I think part of model alignment is following what the user of a model wants it to do, within the very broad bounds of what society decides. So if you ask the model to depict a bunch of violence or something like that, or to reinforce some stereotype, there's a question of whether or not it should do that, and we're taking a much more permissive stance. Now, there's a place where that starts to interact with real-world harms, where we have to figure out how to draw the line. But I think there will be cases where a company says, "Okay, we've heard the feedback from society: people really don't want models to censor them in ways that they don't think make sense."
That's a fair safety negotiation. But to the extent that this is a problem of collective belief, the solution to those kinds of problems is to bring people together, meet at one point, and make a different agreement. If there were a group of people, say, here or out there in the world, who were willing to host a summit of the best ethicists and technologists (not too many people, small), and you and your peers, to try to crack what agreed safety lines could be across the world, would you be willing to attend? Would you urge your colleagues to? I'm much more interested in what our hundreds of millions of users want as a whole. You know, I think a lot of the room has historically been decided in small elite summits. One of the cool new things about AI is that our AI can talk to everybody on Earth, and we can learn the collective value preference of what everybody wants, rather than have a bunch of people who are blessed by society sit in a room and make these decisions. I think that's very cool, and I think you will see us do more in that direction. And when we've gotten things wrong, because the elites in the room had a different opinion about what people wanted for the guardrails on image gen than what people actually wanted, and we couldn't point to real-world harm, we made that change. I'm proud of that. I mean, there is a long track record of unintended consequences coming out of the actions of hundreds of millions of people, and also of 100 people in a room. And the hundreds of millions of people don't necessarily see what the next step could lead to. I hope that that is totally accurate and totally right. I am hopeful that AI can help us be wiser and make better decisions; it can talk to us, and if we say, "Hey, I want thing X," rather than just spinning that up for the crowd,
AI can say, "Hey, I totally understand that's what you want. If that's still what you want at the end of this conversation, you're in control and you pick. But have you considered it from this person's perspective, or the impact it'll have on these people?" I think AI can help us be wiser and make better collective governance decisions than we could before. Well, we're well out of time. Sam, I'll give you the last word: what kind of world do you believe, all things considered, your son will grow up into? I remember, it's so long ago now, I don't know when the first iPad came out, is it like 15 years, something like that? I remember watching a YouTube video at the time of a little toddler sitting in a doctor's office waiting room or something, and there was a magazine, one of those old glossy-cover magazines, and the toddler had her hand on it and was swiping like this, and getting kind of angry. To that toddler, it was like a broken iPad; she never thought of a world that didn't have touchscreens in it. And to all the adults watching, it was this amazing thing, because it was so new, so amazing, a miracle; of course magazines are the way the world works. My kids, hopefully, will never be smarter than AI. They will never grow up in a world where products and services are not incredibly smart, incredibly capable. They will never grow up in a world where computers don't just kind of understand you and do, for some definition of it, whatever you can imagine. It'll be a world of incredible material abundance, a world where the rate of change is incredibly fast and amazing new things are happening, and a world where individual ability and impact is just so far beyond what a person can do today. I hope that my kids, and all of your kids, will look back at us with some pity and nostalgia and say, "They lived such horrible lives; they were so limited;
the world sucked so much." I think that's great. It's incredible what you've built; it really is unbelievable. I think over the next few years you're going to have some of the biggest opportunities, the biggest moral challenges, the biggest decisions to make of perhaps any human in history, pretty much. You should know that everyone here will be cheering you on to do the right thing. We will do our best. Thank you very much. Thank you for coming to TED. Thank you.