Transcript for:
Insights from Radical AI Founders Event

Well good afternoon everyone. Gotta love the buzz in the room here today. Welcome to the MaRS Discovery District, this wonderful complex, for this very special Radical AI Founders event, co-hosted by the University of Toronto. My name is Meric Gertler and it's my great privilege to serve as President of the University of Toronto. Before we begin, I want to acknowledge the land on which the University of Toronto operates. For thousands of years it has been the traditional land of the Huron-Wendat, the Seneca, and the Mississaugas of the Credit. Today this meeting place is still the home to many Indigenous people from across Turtle Island and we are very grateful to have the opportunity to work and to gather on this land. Well, I am truly delighted to welcome you all to this discussion between Geoffrey Hinton, University Professor Emeritus at the University of Toronto, known to many as the godfather of deep learning, and Fei-Fei Li, the inaugural Sequoia Professor in Computer Science at Stanford University, where she is co-director of the Human-Centered AI Institute. I want to thank Radical Ventures and the other event partners for joining with U of T to create this rare and special opportunity. Thanks in large part to the groundbreaking work of Professor Hinton and his colleagues, the University of Toronto has been at the forefront of the academic AI community for decades. Deep learning is one of the primary breakthroughs propelling the AI boom, and many of its key developments were pioneered by Professor Hinton and his students at U of T. This tradition of excellence, this long tradition, continues into the present. Our faculty, students and graduates, together with partners at the Vector Institute and at universities around the world, are advancing machine learning and driving innovation. Later this fall, our faculty, staff, students and partners will begin moving into phase one of the beautiful new Schwartz Reisman Innovation Campus just across the street.
You may have noticed a rather striking building at the corner, with the official opening planned for early next year. This facility will accelerate innovation and discovery by creating Canada's largest university-based innovation hub. Made possible by a generous and visionary gift from Heather Reisman and Jerry Schwartz, the Innovation Campus will be a focal point for AI thought leadership, hosting both the Schwartz Reisman Institute for Technology and Society, led by Professor Gillian Hadfield, and the Vector Institute. It's already clear that artificial intelligence and machine learning are driving innovation and value creation across the economy. They are also transforming research in fields like drug discovery, medical diagnostics, and the search for advanced materials. Of course, at the same time, there are growing concerns over the role that AI will play in shaping humanity's future. So today's conversation clearly addresses a timely and important topic, and I am so pleased that you have all joined us on this momentous occasion. So without further ado, let me now introduce today's moderator, Jordan Jacobs. Jordan is managing partner and co-founder of Radical Ventures, a leading venture capital firm supporting AI-based ventures here in Toronto and around the world. Earlier, he co-founded Layer 6 AI and served as co-CEO prior to its acquisition by TD Bank Group, which he joined as chief AI officer. Jordan serves as a director of the Canadian Institute for Advanced Research, and he was among the founders of the Vector Institute, a concept that he dreamed up with Tomi Poutanen, Geoff Hinton, Ed Clark, and a few others. So distinguished guests, please join me in welcoming Jordan Jacobs. Thanks very much, Meric. I wanted to start by thanking a number of people who have helped to make this possible today: the University of Toronto and Meric, Melanie Woodin, Dean of Arts and Science, and a number of partners that have brought this to fruition.
So this is the first in our annual four-part series of AI founder master classes that we run at Radical. This is the third year we've done it and today's the first one of this year. We do it in person and online, so we've got thousands of people watching this online. So if you decide you need to start coughing, maybe head outside. We do that in partnership with the Vector Institute, and thank them very much for their participation and support, with the Alberta Machine Intelligence Institute in Alberta, and with Stanford HAI, thanks to Fei-Fei. So thank you, all of you, for being excellent partners. We're hoping that this is going to be a really interesting discussion. Jeff and Fei-Fei are people who I like to think of as friends and get to talk to, but this is the first time they're doing this publicly together, so it's, I think, going to be a really interesting conversation. Let me quickly do some deeper explanations of their backgrounds. Jeff is often called the godfather of artificial intelligence. He's won the Turing Award. He is a professor emeritus at the University of Toronto and co-founder of the Vector Institute. He also mentored a lot of the people who have gone on to be leaders in AI globally, including at big companies and at many of the top research labs in the world, in academia. So when we say godfather, it really is true. There are many children and grandchildren of Jeff who are leading the world in AI, and that all comes back to Toronto. Fei-Fei is the founding director of the Stanford Institute for Human-Centered AI and a professor at Stanford. She's an elected member of the National Academy of Engineering in the US, the National Academy of Medicine, and the American Academy of Arts and Sciences. During a sabbatical from Stanford in 2017-18, she stepped into a role as a vice president at Google, serving as chief scientist of AI/ML at Google Cloud.
There's many, many other things we could say about Fei-Fei, but she also has an amazing number of students who have gone on to be leaders in the field globally. And really importantly, for those of you who haven't heard yet, Fei-Fei has a book coming out in a couple of weeks. It's coming out on November 7th. It's called The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI. I've read it. It's fantastic. You should all go out and buy it. I'll read you the back-cover blurb that Jeff wrote, because it's much better than what I can say about it. Here's Jeff's description. Fei-Fei Li was the first computer vision researcher to truly understand the power of big data, and her work opened the floodgates for deep learning. She delivers an urgent, clear-eyed account of the awesome potential and danger of the AI technology that she helped to unleash, and her call for action and collective responsibility is desperately needed at this pivotal moment in history. So I urge you all to go and pre-order the book and read it as soon as it comes out. With that, thanks Fei-Fei and Jeff for joining us. Thank you, Jordan. Okay, so I think it's not an exaggeration to say that without these two people, the modern age of AI does not exist. Certainly not in the way that it's played out. So let's go back to what I think is the big bang moment: AlexNet, ImageNet. Maybe Jeff, do you want to take us through that moment from your perspective, which is 11 years ago now? Okay, so in 2012, two of my very smart graduate students won a competition, a public competition, and showed that deep neural networks could do much better than the existing technology. Now this wouldn't have been possible without a big data set that you could train them on. Up to that point there hadn't been a big data set of labelled images, and Fei-Fei was responsible for that data set. And I'd like to start by asking Fei-Fei whether there were any problems in putting together that data set.
Thank you Jeff, and thank you Jordan, and thank you University of Toronto for this. It's really fun to be here. So yes, the data set that Jeff you're mentioning is called ImageNet, and I began building it in 2007 and spent the next three years pretty much with my graduate students building it. And you asked me, was there a problem building it? Where do I even begin? Even at the conception of this project, I was told that it really was a bad idea. I was a young assistant professor. I remember it was my first year actually as an assistant professor at Princeton. And for example, a very respected mentor of mine in the field — if you know the academic jargon, these are the people who would be writing my tenure evaluations — actually told me, really out of the goodness of their heart, please don't do this, after I told them what the plan was back in 2007. So that would have been Jitendra, right? The advice was that you might have trouble getting tenure if you do this. And then I also tried to invite other collaborators. Nobody in machine learning or AI wanted to even go close to this project. And of course, no funding. Describe ImageNet to us for the people who are not familiar with what it was. Yeah, so ImageNet was conceived around 2006, 2007. And the reason I conceived ImageNet was actually twofold. One is that, Jeff, I think we share a similar background: I was trained as a scientist. To me, doing science is chasing after North Stars. And in the field of AI, especially visual intelligence, for me, object recognition — the ability for computers to recognize there's a table in the picture, or there's a chair, is called object recognition — has to be a North Star problem in our field. And I felt that we needed to really put a dent in this problem. So I wanted to define that North Star problem. That was one aspect of ImageNet.
The second aspect of ImageNet was recognizing that machine learning was really going in circles a little bit at that time: we were making really intricate models without the kind of data to drive the machine learning. Of course, in our jargon, it's really the generalization problem, right? And I recognized that. We really needed to hit a reset and rethink machine learning from a data-driven point of view. So I wanted to go crazy and make a data set that no one had ever seen in terms of its quantity and diversity and everything. So ImageNet, after three years, was a curated data set of internet images totaling 15 million images across 22,000 concepts — object category concepts — and that was the data set. Just for comparison, at the same time in Toronto we were making a data set called CIFAR-10 that had 10 different classes and 60,000 images, and it was a lot of work. It was generously paid for by CIFAR at 5 cents an image. And so... You turned the data set into a competition. Just walk us through a little bit of what that meant, and then we'll kind of fast-forward to 2012. Right. So we made the data set in 2009. We barely made it into a poster in an academic conference, and no one paid attention. So I was a little desperate at that time, and I believed this was the way to go, and we open-sourced it. But even with open source, it wasn't really picking up. So my students and I thought, well, let's drive it up a little more. Let's create a competition to invite the worldwide research community to participate in this problem of object recognition through ImageNet. So we made an ImageNet competition, and the first feedback we got from our friends and colleagues was that it's too big. At that time you could not fit it onto a hard drive, let alone in memory. So we actually created a smaller data set called the ImageNet Challenge dataset, which is only one million images across 1,000 categories instead of 22,000 categories.
And that was unleashed in 2010, I think. You guys noticed it in 2011, right? Yes. And so in my lab, we already had deep neural networks working quite well for speech recognition. And then Ilya said, what we've got really ought to be able to win the ImageNet competition. And he tried to convince me that we should do that. And I said, well, you know, it's an awful lot of data. And he tried to convince his friend Alex Krizhevsky, and Alex wasn't really interested. So he actually preprocessed all the data to put it in just the form Alex needed it in. You shrunk the size of the images. Yes. He shrunk the images a bit. Yeah, I remember. And got it preprocessed just right for Alex, and then Alex eventually agreed to do it. Meanwhile, in Yann LeCun's lab in New York, Yann was desperately trying to get his students and postdocs to work on this data set, because he said, the first person to apply convolutional nets to this data set is going to win. And none of his students were interested. They were all busy doing other things. And so Alex and Ilya got on with it, and we discovered by running on the previous year's competition that we were doing much better than the other techniques. And so we knew we were going to win the 2012 competition. And then there was this political problem, which is we thought if we show that neural networks win this competition, the computer vision people — Jitendra in particular — will say, well, that just shows it's not a very good data set. So we had to get them to agree ahead of time that if we won the competition, we'd have proved that neural networks worked. So I actually called up Jitendra and we talked about datasets we might run on, and my objective was to get Jitendra to agree that if we could do ImageNet then neural nets really worked. And after some discussion, and him telling me to do other datasets, we eventually agreed: okay, if we could do ImageNet then we'd have shown neural nets work.
Jitendra remembers it as he suggested ImageNet and he was the one who told us to do it, but it was actually a bit the other way around. And we did it, and it was amazing. We got just over half the error rate of the standard techniques, and the standard techniques had been tuned for many years by very good researchers. I remember the standard technique at that time, the previous year, was support vector machines with sparsification. Right. That was... So you guys submitted your competition results, I think it was late August or early September. And I remember either getting a phone call or getting an email late one evening from my student who was running this, because we held the test data. We were running the server side. We had to process all the entries so that we could select the winners. And then by, I think, the beginning of October that year, the computer vision field's international conference, ICCV 2012, was happening in Florence, Italy. We had already booked our annual workshop at the conference, where we would be announcing the winner. It was the third year. So a couple of weeks before, we had to process the winning teams. Because it was the third year, and frankly, the previous two years' results didn't excite me, and I was a nursing mother at that time, I decided not to go the third year. So I didn't book any tickets. I'm just like, too far for me. And then the results came in. That evening phone call or email — I really don't remember which — came in. And I remember saying to myself, darn it, Jeff, now I have to get a ticket to Italy. Because I knew that was a very significant moment, especially since it was a convolutional neural network, which I had learned as a graduate student as a classic algorithm. And of course, by that time, there were only middle seats in economy class flying from San Francisco to Florence, with a one-stop layover. So it was a grueling trip to go to Florence, but I wanted to be there. I'm sorry.
Yeah, but you didn't come. No, I didn't. Well, it was a grueling trip. But did you know that would be a historical moment? Yes, I did actually. And you still didn't go. But you sent Alex. Alex, yes. Yeah, so... Who ignored all your advice, right? Who ignored my email multiple times, because I was like, Alex, this is so cool, please do this visualization, this visualization. He ignored me. But it all looked okay. For those of you who have attended these academic conferences, workshops tend to be booked in these smaller rooms. We booked a very small room, probably just the middle section here. And I remember Yann had to stand in the back of the room because it was really packed. And Alex eventually showed up, because I was really nervous that he wasn't even going to show up. But as you predicted, at the end of the day, at that workshop, ImageNet was being attacked. At that workshop, there were people vocally attacking it: this is a bad data set. In the room? In the room. During the presentation? In the room. But not Jitendra, because Jitendra had already agreed that it counted. I don't think Jitendra was in the room, I don't remember. But I remember it was such a strange moment for me, because as a machine learning researcher, I knew history was in the making, yet ImageNet was being attacked. It was just a very strange, and exciting, moment. And then I had to hop in the middle seat and get back to San Francisco, because then the next morning... So you've mentioned a few people that I want to come back to later. So Ilya, who's co-founder and chief scientist at OpenAI, and Yann LeCun, who subsequently went on to be head of AI at Facebook, now Meta. And there's a number of other interesting people in the mix. Before we go forward and kind of see what that boom moment created, let's just go back for a little bit.
Both of you started in this with kind of a very specific goal in mind that is individual and, I think, iconoclastic, and you had to persevere through the moments that you just described, but kind of throughout your careers. Can you just go back, Jeff, maybe, and start — give us a background to why you wanted to get into AI in the first place? I did psychology as an undergraduate. I didn't do very well at it. And I decided they were never going to figure out how the mind worked unless they figured out how the brain worked. And so I wanted to figure out how the brain worked. And I wanted to have an actual model that worked. So you can think of understanding the brain as building a bridge. There's experimental data and things you can learn from experimental data. And there's things that will do the computations you want, things that will recognize objects. And they were very different, and I think of it as: you want to build this bridge between the data and the competence, the ability to do the task. And I always saw myself as starting at the end of things that work, but trying to make them more and more like the brain while they still work. Other people try to stay with things justified by empirical data and try to have theories that might work. But we're trying to build that bridge. And not many people were trying to build the bridge. Terry Sejnowski was trying to build the bridge from the other end. And so we got along very well. A lot of people trying to do computer vision just wanted something that worked. They didn't care about the brain. And a lot of people who care about the brain wanted to understand how neurons work and so on, but didn't want to think much about the nature of the computations. And I still see it as: we have to build this bridge by getting people who know about the data and people who know about what works to connect. So my aim was always to make things that could do vision, but do vision in the way that people do it.
Okay, so we're going to come back to that, because I want to ask you about the most recent developments and how you think they relate to the brain. And so Jeff, just to kind of put a framework on where you started: UK to the US to Canada by the mid-to-late '80s; you come to Canada in '87. Along that route, funding for and interest in neural nets and the approaches that you're taking kind of goes like this, but... I'd say mostly like this. Going up and down. Fei-Fei, you started your life in a very different place. Can you walk us through a little bit of how you came to AI? Yeah, so I started my life in China. And when I was 15 years old, my parents and I came to Parsippany, New Jersey. So I became a new immigrant, and where I started was first English as a second language classes, because I didn't speak the language, and just working in laundries and restaurants and so on. But I had a passion for physics. I don't know how it got into my head. And I wanted to go to Princeton, because all I knew was Einstein was there. And I got into Princeton. He wasn't there by the time I got into Princeton. You're not that old. Yes — but there was a statue of him. And the one thing I learned in physics, beyond all the math and all that, is really the audacity to ask the craziest questions. Like the smallest, you know, particles of the atomic world, or the boundary of space-time and the beginning of the universe. And along the way, I discovered the brain — as a third-year, reading Roger Penrose and those books. Yeah, you might have opinions, but at least I've read those books and... It was probably better you didn't. Well, you know, it at least got me interested in the brain. And by the time I was graduating, I wanted to ask the most audacious question as a scientist. And to me, the absolute most fascinating, audacious question of my generation — that was 2000 — was intelligence.
So I went to Caltech to get pretty much a dual PhD, in neuroscience with Christof Koch and in AI with Pietro Perona. So I so echo, Jeff, what you said about the bridge, because those five years allowed me to work on computational neuroscience and look at how the mind works, as well as to work on the computational side and try to build that computer program that can mimic the human brain. So that's my journey. It starts from physics. Okay, so your journeys intersect at ImageNet 2012. By the way, I met Jeff when I was a graduate student. Right, I remember. I used to go visit Pietro's lab. Yeah. In fact, he actually offered me a job at Caltech when I was 70. You would have been my advisor. No, not when I was 70. Okay, so we intersected at ImageNet. For those in the field, everyone knows that ImageNet is this Big Bang moment, and subsequent to that, first the big tech companies come in and basically start buying up your students — and you — to get them into the companies. I think they were the first ones to realize the potential of this. I'd like to talk about that for a moment, but kind of fast-forwarding, I think it's only now, since ChatGPT, that the rest of the world is catching up to the power of AI, because finally you can play with it. You can experience it: in the boardroom they can talk about it and then go home, and the 10-year-old kid has just written a dinosaur essay for fifth grade with ChatGPT. So that kind of transcending experience of everyone being able to play with it, I think, has been a huge shift. But in the period in between, which is 10 years, there is kind of this explosive growth of AI inside the big tech companies, and everyone else is not really noticing what's going on. Can you just talk us through your own experience, because you experienced it at kind of ground zero post-ImageNet? It's difficult for us to get into the frame of everybody else not realizing what was going on, because we realized what was going on.
So a lot of the universities you'd have thought would be right at the forefront were very slow in picking up on it. So MIT, for example, and Berkeley. I remember going to give a talk at Berkeley in, I think, 2013, when AI was already being very successful in computer vision. And afterwards a graduate student came up to me and he said, I've been here like four years and this is the first talk I've heard about neural networks; they're really interesting. Well, he should have gone to Stanford. Probably. But the same with MIT. They were rigidly against having neural nets. And the ImageNet moment started to wear them down. And now they're big proponents of neural nets. But it's hard to imagine now — around 2010 or 2011, the computer vision people, very good computer vision people, were really adamantly against neural nets. They were so against it that, for example, one of the main journals, the IEEE transactions on pattern analysis... PAMI? PAMI. Had a policy not to referee papers on neural nets at one point. Just send them back. Don't referee them. It's a waste of time. It shouldn't be in PAMI. And Yann LeCun sent a paper to a conference where he had a neural net that was better at doing segmentation of pedestrians than the state of the art, and it was rejected. And one of the reasons it was rejected was one of the referees said, this tells us nothing about vision. Because they had this view of how computer vision works, which is: you study the nature of the problem of vision, you formulate an algorithm that will solve it, you figure out how to implement that algorithm, and then you publish a paper. I have to defend my field. Not everybody. Not everybody. So there are people who are... But most of them were adamantly against neural nets, and then something remarkable happened after the ImageNet competition, which is they all changed within about a year. All the people who had been the biggest critics of neural nets started doing neural nets.
Much to our chagrin, some of them did it better than us, but... So Zisserman in Oxford, for example, made a better neural net very quickly. But they behaved like scientists ought to behave, which is: they had this strong belief this stuff was rubbish; because of ImageNet, we could eventually show that it wasn't, and then they changed. So that was very comforting. And just to carry it forward: so what you're trying to do is label, using the neural nets, these 15 million images accurately. You've got them all labeled in the background, so you can measure it. The error rate when you did it dropped from 26% the year before, I think, to 16% or so. I think it's 15.3. Okay, and then it subsequently keeps dropping. 15.32. I knew you would remember. Which randomization? Jeff doesn't forget things. And then in subsequent years, people are using more powerful neural nets and it continues to drop to the point where it surpasses... 2015. So there's a very smart Canadian undergrad who joined my lab; his name is Andrej Karpathy. And he got bored one summer and said, I want to measure how humans do. You should go read his blog. He had all these humans-doing-ImageNet test parties — he had to bribe them with pizza, I think, with my students in the lab. And they got to about 5%... was it 5 or 3.5? Three. Three, 3.5 I think. So humans basically make mistakes about 3% of the time. Right, right. And then I think 2016, I think ResNet passed it. It was ResNet. That year's winning algorithm passed the human performance. And then ultimately you had to retire the competition, because it was so much better than humans that it... We had to retire it because we ran out of funding. Okay. A better — well, a different — reason. We still ran out of funding. Incidentally, that student started life at the University of Toronto, where he went to your lab, and then he went to be head of research at Tesla.
Okay, first of all, he came to Stanford to be a PhD student — and yesterday night we were talking; actually, there was a breakthrough dissertation in the middle of this — and then he became part of the founding team of OpenAI. But then he went to Tesla. And then he went to Tesla. And then he thought better of it. But I do want to answer your question about that 10 years. Well, there's a couple of developments along the way. Transformers. So the transformer paper — the research done and the paper written inside Google. Another Canadian is a co-author there, Aidan Gomez, who's now the CEO and co-founder of Cohere, and who I think was a 20-year-old intern at Google Brain when he co-authored the paper. So there's a tradition of Canadians being involved in these breakthroughs. But, Jeff, you were at Google when the paper was written. Was there an awareness inside Google of how important this would be? I don't think there was. Maybe the authors knew, but it took me several years to realize how important it was. And at Google, people didn't realize how important it was until BERT. So BERT used transformers, and BERT then became a lot better at a lot of natural language processing benchmarks for a lot of different tasks. And that's when people realized transformers were special. So 2017, the transformer paper was published. I also joined Google, and I think you and I actually met in my first week. I think most of 2017 and 2018 was neural architecture search. I think that was Google's bet, and there were a lot of GPUs being used. So it was a different bet. So just to explain that, neural architecture search essentially means this: you get yourself a whole lot of GPUs and you just try lots of different architectures to see which works best, and you automate that. It's basically automated evolution for neural net architectures. It's like hyperparameter tuning. Yeah. And it led to some quite big improvements, but nothing like transformers.
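The idea Jeff sketches here — try lots of architectures, score each one, keep the best — reduces in its simplest form to randomized search over a configuration space. A minimal sketch, assuming a toy search space and a stand-in scoring function (both hypothetical, not Google's actual setup):

```python
import random

# Hypothetical toy search space -- real NAS spaces are far larger.
SEARCH_SPACE = {
    "depth": [2, 4, 8],
    "width": [64, 128, 256],
    "activation": ["relu", "gelu"],
}

def sample_architecture(rng):
    # One candidate = one random choice per dimension of the space.
    return {name: rng.choice(options) for name, options in SEARCH_SPACE.items()}

def search(evaluate, n_trials=50, seed=0):
    """Try lots of architectures, score each, keep the best.
    `evaluate` stands in for a full training-and-validation run."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(n_trials):
        arch = sample_architecture(rng)
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

# Toy stand-in score: pretend deeper and wider validates better.
toy_score = lambda arch: arch["depth"] * 0.1 + arch["width"] / 256
best, score = search(toy_score)
```

In practice `evaluate` is a full training run on a dataset like ImageNet, which is why the approach consumed so many GPUs; evolutionary variants mutate the best candidates found so far instead of sampling blindly, which is the "automated evolution" Jeff alludes to.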
And transformers were a huge improvement for natural language. Neural architecture search was mostly done on ImageNet. Yeah. So I'll tell you our experience of transformers. So we were doing our company Layer 6 at the time. I think we saw a pre-read of the paper, and we were in the middle of fundraising and a bunch of acquisition offers, and read the paper — and I mean not just me, but my partner Tomi, who had studied with you, and Maks Volkovs, who came out of your group's lab — and we thought, this is the next iteration of neural nets; we should sell the company, start a venture fund, and invest in these companies that are going to be using transformers. So we figured it would take five years to get adopted beyond Google, and then from that moment forward it would be 10 years for all the software in the world to get replaced or embedded with this technology. We made that decision five years and two weeks before ChatGPT came out. So I'm glad to see we were good at predicting, but I have to give credit to my co-founders — I thought I understood what the paper was, but they were able to explain it fully. I should just correct you on one thing. I don't think Tomi ever studied with me. He wanted to come study with me, but a colleague in my department told him if he came to work with me, that would be the end of his career, and he should go do something else. So he took the classes. And this is my partner who, in the late '90s, was doing a master's at U of T and wanted to go study with Jeff, study neural nets, and his girlfriend's — now wife's — father, who was an engineering professor, said, don't do that, neural nets are a dead end. So instead he took the classes but wrote his thesis in cryptography. So, okay, so... But are you still going to talk about the 10 years? Because I think there's something important. Yeah, so go ahead. So I do think there's something important the world overlooked. That's the 10 years between...
ImageNet, AlexNet, and ChatGPT. Most of the world sees this as a tech 10 years — you know, or we see it as a tech 10 years. In big tech, there were things brewing. I mean, it took sequence-to-sequence, transformers — things were brewing. But I do think, for me personally, and for the world, it's also a transformation from tech to society. I actually think, personally, I grew from a scientist to a humanist in these 10 years, because having joined Google for those two years in the middle of the transformer papers, I began to see the societal implications of this technology. It was the post-AlphaGo moment, and very quickly we got to the AlphaFold moment. Bias was creeping out. There were privacy issues, and then we were starting to see the beginning of disinformation and misinformation, and then we were starting to see the talk of jobs — within a small circle, not within the big public discourse. And I grew personally anxious. I feel, you know, 2018 was also right after Cambridge Analytica. So there was that huge implication of technology — not AI per se, but algorithm-driven technology — on elections. That's when I had to make a personal decision of staying at Google or coming back to Stanford. And I knew the only reason I would come back to Stanford was starting this human-centered AI institute to really, really understand the human side of this technology. So I think this is a very important 10 years; even though it's kind of not in the eyes of the public, this technology was starting to really creep into the rest of our lives. And of course, in 2022, it's all shown in the daylight how profound this is. There's an interesting footnote to what happened during that period as well, which is that ultimately you and Ilya and Alex joined Google, but before that, there was a big Canadian company that had the opportunity to get access to this technology. Do you want to... I've heard this story, but I don't think it's ever been shared publicly.
Maybe do you want to share that story for a second? Okay. So the technology that we were using for ImageNet, we developed in 2009 for doing speech recognition, for doing the acoustic modeling bit of speech recognition. You can take the sound wave and make a thing called a spectrogram, which just tells you, at each time, how much energy there is at each frequency. You're probably used to seeing spectrograms. And what you'd like to do is look at a spectrogram and make guesses about which part of which phoneme is being expressed by the middle frame of the spectrogram. Two students, George Dahl and another student whom I shared with Gerald Penn, called Abdo (he had a longer name; we all called him Abdo), who was a speech expert, while George was a learning expert, over the summer of 2009 made a model that was better than what 30 years of speech research, with big, big teams working on it, had been able to produce. The model was slightly better, not as big a gap as the ImageNet gap, but it was better. That model was then ported to IBM and to Microsoft: George went to Microsoft and Abdo went to IBM, and those big speech groups started using neural nets then. And I had a third student, Navdeep Jaitly, who had been working on something else, and he wanted to take this speech technology to a big company, but he wanted to stay in Canada for complicated visa reasons. So we got in touch with BlackBerry, RIM, and we said, we've got this new way of doing speech recognition, it works better than the existing technology, and we'd like a student to come to you over the summer and show you how to use it, and then you can have the best speech recognition in your cell phone. After some discussions, a fairly senior guy at BlackBerry said, we're not interested. So our attempt to give it to Canadian industry failed. And so then Navdeep took it to Google. And Google were the first to get it into a product.
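The spectrogram Hinton describes, the energy at each frequency at each moment in time, can be sketched in a few lines of NumPy. The frame length, hop size, and 440 Hz test tone below are arbitrary illustrative choices, not anything from the talk:

```python
import numpy as np

def spectrogram(wave, frame_len=256, hop=128):
    """Slice a waveform into overlapping frames and take the magnitude
    of each frame's FFT: energy per frequency bin, per time frame."""
    frames = []
    for start in range(0, len(wave) - frame_len + 1, hop):
        frame = wave[start:start + frame_len] * np.hanning(frame_len)
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)  # shape: (num_frames, frame_len // 2 + 1)

# One second of a 440 Hz tone sampled at 8 kHz: the energy concentrates
# in the frequency bin nearest 440 Hz in every frame.
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)        # (num_frames, num_frequency_bins)
print(spec[0].argmax())  # index of the frequency bin nearest 440 Hz
```

An acoustic model of the kind described would then take a window of such frames as input and predict which phoneme the middle frame belongs to.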
So in 2012, around the same time as we won the ImageNet competition, George and Abdo's speech recognition acoustic model, after a lot of work making it a good product with low latency and so on, came out in Android. And there was a moment when Android suddenly became as good as Siri at speech recognition. And that was a neural net. And I think for the people high up in the big companies, that was another ingredient: they saw it get this dramatic result for vision, but they also saw that it was already out in a product for speech recognition and working very well there too. So I think that combination: if it does speech and it does vision, clearly it's going to do everything. We won't say any more about BlackBerry. It was a shame that Canadian industry didn't take it; I think we might still have had BlackBerrys if that had happened. All right, we'll leave that one there. I've heard this story before, but I thought it was important for the rest of the world to know some of what went on behind the scenes, why this technology didn't stay in Canada, even though it was offered for free. Okay, so let's advance forward. We now have post-transformers. Google is starting to use this and develop it in a number of different ways. At OpenAI, your former student Ilya had left Google and become a founder of OpenAI with Elon Musk, Sam Altman, Greg Brockman and a few others. Ilya is the chief scientist. And Andrej, your student, was a co-founder. So they are working together, a very small team, to basically, well, initially the idea was, we're going to build AGI, artificial general intelligence. Ultimately, the transformer paper comes out, at some point they start to adopt transformers, and they start to make extraordinary gains internally, which they're not really sharing publicly, in what they're able to do in language understanding and a number of other things.
They had efforts going on in robotics that spun out; Pieter Abbeel ended up spinning out Covariant, a company we subsequently invested in, among other things. So the language part of it advances and advances and advances. People outside OpenAI don't really know the extent of what's going on. And then ChatGPT comes out November 30th last year, so ten months ago. Well, GPT-2 caught the attention of some of us. I think by the time GPT-2 came out, my colleague Percy Liang, an NLP professor at Stanford, I remember he came to me and said, Fei-Fei, I have a whole different realization of how important this technology is. So to the credit of Percy, he immediately asked HAI to set up a center to study this. And I don't know if this is contentious in Toronto, but Stanford is the university that coined the term foundation models. Some people call it an LLM, a large language model, but because it goes beyond language, we call it a foundation model. We created the Center for Research on Foundation Models before, I think, before 3.5 came out, so definitely before ChatGPT. Just describe what a foundation model is, for those who are not familiar. That's actually a great question. Some people feel it has to have a transformer in it. I don't know if you feel that. No, it just has to be a very big model trained on a huge amount of data. Very large, pre-trained with a huge amount of data. And I think one of the most important things about a foundation model is its generalizability across multiple tasks. You're not training it for, say, machine translation; in NLP, machine translation is a very important task, but a foundation model like GPT is able to do machine translation, is able to do conversation, summarization, and so on. So that's a foundation model, and we're seeing that now in multimodality; we're seeing it in robotics, in video, and so on. So we created that.
But you're right, the public sees this in the, what did you say, October 30th? November, I think. November 30th. One other very important thing about foundation models: for a long time in cognitive science, the general opinion was that these neural nets, if you give them enough training data, can do complicated things, but they need an awful lot of training data. They need to see thousands of cats. And people are much more statistically efficient; they can learn to do these things on much less data. And people don't say that so much anymore, because what they were really doing was comparing what an MIT undergraduate can learn to do on a limited amount of data with what a neural net that starts with random weights can learn to do on a limited amount of data. If you want to make a fair comparison, you take a foundation model, that is, a neural net that's been trained on lots and lots of stuff, and then you give it a completely new task. And you ask, how much data does it need to learn this completely new task? That's called few-shot learning, because it doesn't take much. And then you discover these things are statistically efficient; they compare quite favorably with people in how much data they need to learn a new task. So the old innatist idea, that we come with lots of innate knowledge and that makes us far superior to these things that just learn everything from data, people have pretty much given up on that now, because you take a foundation model that had no innate knowledge but a lot of experience, you give it a new task, and it learns pretty efficiently. It doesn't need huge amounts of data. You know, my PhD is in one-shot learning. But it's very interesting: even in a Bayesian framework you could pre-train, but it's only with the neural network kind of pre-training that you really get this multitask ability. Right. So. Okay, so this basically gets productized in ChatGPT.
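A minimal sketch of what the few-shot learning described above looks like in practice with a foundation model: nothing is retrained; the pretrained model is simply shown a handful of demonstrations of the new task in its prompt. The task and the formatting below are invented for illustration:

```python
def few_shot_prompt(examples, query):
    """Build a prompt holding a few input->output demonstrations of a
    brand-new task, followed by the new input the model should complete."""
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

# Three demonstrations of a made-up task (reverse the word). A model
# pretrained on lots of data can typically infer the pattern from this
# handful of examples, with no weight updates at all.
prompt = few_shot_prompt(
    [("cat", "tac"), ("star", "rats"), ("deep", "peed")], "learn"
)
print(prompt)
```

The "how much data does it need" comparison Hinton makes is exactly the length of the `examples` list here, versus the thousands of labeled examples a randomly initialized network would need.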
The world experiences it, which is only ten months ago, although for some of us it feels like it's been a long time. It was like much longer. It was like forever. Because suddenly you had this big bang that happened a long time ago, and I think for a long time no one really saw the results of it. My comparison would be: there are planets that are formed and stars that are visible, and everyone can experience the results of what happened ten years before, and then transformers, et cetera. So the world suddenly becomes very excited about what I think feels to a lot of people like magic. Something that they can touch and experience, and that gives them feedback in whatever way they're asking for it. They're putting in text prompts and asking for an image to be created, or video, or text, asking for more text to come back, answering things that you would never be able to expect, and getting those unexpected answers. So it feels a little bit like magic. My personal view is that we've always moved the goal line in AI. AI is always the thing that we couldn't do. It's always the magic, and as soon as we get there, then we say that's not AI at all. There are people around who say that's not AI at all, and they move the goal line. In this case, what was your reaction when it came out? I know part of your reaction is you quit Google and decided to do different things. But when you first saw it, what did you think? Well, like Fei-Fei said, GPT-2 made a big impression on us all. And then there was a steady progression. Also, I'd seen things within Google before GPT-4 and GPT-3.5 that were just as good, like PaLM. So that in itself didn't make a big difference. It was PaLM that made an impression on me within Google, because PaLM could explain why a joke was funny. And I'd always used that as a test: we'll know that it really gets it when it can explain why a joke is funny. And PaLM could do that.
Not for every joke, but for a lot of jokes. Incidentally, these things are quite good now at explaining why jokes are funny, but they're terrible at telling jokes. And there's a reason, which is that they generate text one word at a time. So if you ask them to tell a joke, they generate stuff that sounds like a joke. They say, you know, a priest and a badger went into a bar, and that sounds a bit like the beginning of a joke. And they keep going, telling stuff that sounds like the beginning of a joke. But then they get to the point where they need the punchline. And of course they haven't thought ahead. They haven't thought about what's going to be the punchline. They're just trying to make it sound like they're leading to a joke. And then they give you a pathetically weak punchline, because they have to come up with some punchline. So although they can explain jokes, because they get to see the whole joke before they say anything, they can't tell jokes. But we'll fix that. Okay, so I was going to ask you if comedian is a job of the future or not. Probably not. All right, so anyway. But what was your reaction to it? And again, you've seen things behind the scenes along the way. A couple of reactions. My first reaction is, of all people, I thought I knew the power of data, and I was still awed by the power of data. That was a technical reaction. I was like, darn it, I should have made a bigger ImageNet. Maybe not, but that was really... It's too good. Funding is the problem. Yeah, so that was... Second, when I saw the public awakening moment to AI with ChatGPT, not just the GPT-2 technology moment, I genuinely thought, thank goodness we've invested in human-centered AI for the past four years. Thank goodness we have built a bridge with the policy makers, with the public sector, with civil society.
We have not done enough, but thank goodness that conversation had started. We were participating in it; we were leading some part of it. For example, we as an institute at Stanford were leading the critical national AI research cloud bill that is still going through Congress right now. Not right now; it's in the Senate. It's bicameral, so at least it's moving through the Senate. Because we predicted the societal moment for this tech. We didn't know when it would come, but we knew it would come. And it was just a sense of urgency, honestly. I feel that this is the moment we really have to rise to, not only our passion as technologists, but our responsibility as humanists. So I think the common reaction of you both has been: we have to think about the opportunities of this, but also the negative consequences of it. So for me, there was something I realized, and didn't realize until very late, and what got me much more interested in the societal impact was, like Fei-Fei said, the power of data. These big chatbots have seen thousands of times more data than any person could possibly see. And the reason they can do that is because you can make thousands of copies of the same model, and each copy can look at a different subset of the data, and they can get a gradient from that of how to change their parameters, and they can then share all those gradients. So every copy can benefit from what all the other copies extracted from the data, and we can't do that. Suppose you had 10,000 people and they went out and read 10,000 different books, and after each had read one book, all of them knew what's in all the books. We could get to be very smart that way. And that's what these things are doing. And so it makes them far superior to us. Schooling is us trying to do that, but not in anything like the same way. Education is just hopeless. I mean, it's hardly worth paying for. Except the University of Toronto.
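The gradient-sharing trick Hinton describes, many copies of one model each reading different data and pooling what they learned, can be illustrated with a toy data-parallel setup. Plain linear regression and made-up numbers stand in for a real training system here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: recover true_w from noiseless data y = X @ true_w.
true_w = np.array([2.0, -3.0])
X = rng.normal(size=(1000, 2))
y = X @ true_w

# Ten identical "copies" of the model, each assigned a different shard of
# the data. Every copy starts from, and stays at, the same weights.
n_copies = 10
shards = np.array_split(np.arange(1000), n_copies)
w = np.zeros(2)

for step in range(200):
    grads = []
    for shard in shards:  # in a real system these run in parallel
        Xs, ys = X[shard], y[shard]
        grads.append(2 * Xs.T @ (Xs @ w - ys) / len(shard))
    # Share the gradients: each copy benefits from data it never saw.
    w -= 0.1 * np.mean(grads, axis=0)

print(w)  # converges toward true_w
```

This is the point of the 10,000-books analogy: after the averaged update, every copy's weights reflect all thousand examples, although each copy only ever looked at its own hundred.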
I've tried to explain to friends that Jeff has a very sarcastic sense of humor, and if you spend enough time around him, you'll get it. But I'll leave it to you to decide whether that was sarcastic or not. So the way we exchange knowledge, roughly speaking, and this is something of a simplification: I produce a sentence, and you figure out what you have to change in your brain so that you might have said that. That is, if you trust me. We can do that with these models, too. If you want one neural net architecture to know what another architecture knows, a completely different architecture, you can't just give it the weights. So you get one to mimic the output of the other. That's called distillation. And that's how we learn from each other. But it's very inefficient. It's limited by the bandwidth of a sentence, which is a few hundred bits. Whereas if you have these models, these digital agents, which have a trillion parameters, each of them looks at different bits of data, and then they share the gradients, they're sharing a trillion numbers. So you're comparing an ability to share knowledge that's in trillions of numbers with something that's hundreds of bits. They're just much, much better than us at sharing. So I guess, Jeff... So I agree with you at the technology level, but it sounded like for you that's the moment that got you feeling very negative. That's the moment I thought we're history, yeah. I'm less negative than you, and I'll explain later, but I think that's where... Actually, let's talk about that. Explain why you are optimistic, and let's understand why you are more pessimistic. I'm pessimistic because the pessimists are usually right. I thought I was a pessimist too. We have this conversation. So I don't know if I should be called an optimist. Look, when you come to a country when you're 15, not speaking
a single bit of the language, and starting from zero dollars, there's something very pragmatic in my thinking. I think our human relationship with technology is a lot messier than academia typically predicts. Because in academia, in the ivory tower, we want to make a discovery, we want to build a piece of technology, but we tend to be purists. But when a technology like AI hits the ground and reaches the societal level, it is inevitably messily entangled with what humans do. And this is where, maybe you call it optimism, but I believe in humanity. I believe in not only the resilience of humanity, but also a collective will. The arc of history is dicey sometimes, but if we do the right thing, we have a chance, a fighting chance, of creating a future that's better. So what I really feel is not delusional optimism at this point; it's actually a sense of urgency, of responsibility. And one thing, Jeff, I really hope makes you feel positive is to look at the students of this generation. In my class, I teach a 600-student undergrad class every spring on introduction to deep learning and computer vision. This generation, compared to even five years ago, is so different. They walk into our class not only wanting to learn deep learning, transformers, gen AI; they want to talk about ethics. They want to talk about policy. They want to understand privacy and bias. And I think that really is where I see humanity rising to the occasion. And I think it's fragile. I mean, look at what's going on in the world, in Washington. It's very fragile, but I think if we recognize this moment, there is hope. So I see the same thing. I don't teach undergraduates anymore, but I see it in more junior faculty members. At the University of Toronto, for example, two of the most brilliant young professors went off to Anthropic to work on alignment. Roger Grosse is coming back again, I hope.
And Ilya, for example, is now full-time working on alignment. So there really is a huge shift now. And I think I'm unlikely to have ideas that will help solve this problem, but I can encourage these younger people, these younger people around 40... Thank you. ...to work on alignment, to work on these ideas. And they really are working on them now. They're taking it seriously. Yeah. As long as we put the most brilliant minds, like many of you I'm looking at in the audience and online, onto this problem. This is where my hope comes from. So, Jeff, you left Google in large part to be able to go and talk about this freely in the way that you wanted to. And basically... Actually, that's not really true. That's the media story, and it sounds good. I left Google because I was old and tired and wanted to retire and watch Netflix. And I happened to have the opportunity at that time to say some things I'd been thinking about, responsibly, and not have to worry about how Google would respond. So it's more like that. If we have time, we'll come back to the Netflix recommendations. I was going to say. In the meantime, you did go out and start speaking pretty significantly in the media. I think you've both spoken to probably more politicians in the last eight months than in your lives before, from presidents and prime ministers right through Congress, Parliament, et cetera. Jeff, can you explain what your concern was, what you were trying to accomplish in voicing it, and whether you think that has been effective? Yeah. So people talk about AI risk, but there's a whole bunch of different risks. There's a risk that it will take jobs away and not create as many jobs, and so we'll have a whole underclass of unemployed people, and we need to worry hard about that, because the increase in productivity AI is going to cause is not going to get shared with the people who lose the jobs.
Rich people are going to get richer and poor people are going to get poorer, and even if you have basic income, that's not going to solve the problem of human dignity; many people want to have a job to feel they're doing something important, including academics. So that's one problem. Then there's the problem of fake news, which is a quite different problem. Then there's the problem of battle robots. That's a quite different problem again. All the big defense departments want to make battle robots, and nobody's going to stop them, and it's going to be horrible, and maybe eventually, after we've had some wars with battle robots, we'll get something like the Geneva Conventions, like we did with chemical weapons. It wasn't until after they were used that people could do something about them. Then there's the existential risk, and the existential risk is what I'm worried about. The existential risk is that humanity gets wiped out because we've developed a better form of intelligence that decides to take control. Now, there's a lot of hypotheses here. It's a time of huge uncertainty; you shouldn't take anything I say too seriously. If we make something much smarter than us, because these digital intelligences can share much better and so can learn much more, we will inevitably get those smart things to create sub-goals. If you want them to do something, in order to do that, they'll figure out, well, you have to do something else first. Like if you want to go to Europe, you have to get to the airport. That's a sub-goal. So they will make sub-goals. And there's a very obvious sub-goal, which is: if you want to get anything done, get more power. If you get more control, it's going to be easier to do things. And so anything that has the ability to create sub-goals will create the sub-goal of getting more control. And if things much more intelligent than us want to get control, they will.
We won't be able to stop them. So we somehow have to figure out how we stop them from ever wanting to get control. And there's some hope. These things didn't evolve; they're not nasty, competitive things. They're however we make them. They're immortal: with a digital intelligence, you just store the weights somewhere, and you can always run it again on other hardware. So we've actually discovered the secret of immortality. The only problem is it's not for us; we're mortal. But these other things are immortal. And that might make them much nicer, because they're not worried about dying, and they don't have to sort of... Like Greek gods. Well, they're very like Greek gods. And I have to say something that Elon Musk told me. This is Elon Musk's belief: that we are the kind of bootloader for digital intelligence. We're this relatively dumb form of intelligence that was just smart enough to create computers and AI, and that's going to be a much smarter form of intelligence. And Elon Musk thinks it'll keep us around because the world will be more interesting with people in it than without, which seems like a very thin thread to hang your future from. But it relates to what Fei-Fei said. It's very like the Greek gods model: the gods have people around to have fun with. Okay, can I comment on that? Nothing I said was controversial. No, not at all. So I want to bucket your four concerns: economy and labor, disinformation, weaponization, and then the extinction, the Greek gods. I forgot discrimination and bias. Okay. So I want to bucket them into two buckets. The Greek god extinction is the extinction bucket. Everything else I would call catastrophic, nearly catastrophic, catastrophic danger. And I want to comment on this. I think one thing I really feel is my responsibility, as someone in the AI ecosystem, is making sure we are not talking hyperbolically, especially with public policy makers.
The extinction risk, Jeff, with all due respect, is a really interesting thought process that academia and think tanks should be working on. That's what I thought for many years. I thought it was a long way off in the future, and having philosophers and academics working on it was great. Now I think it's much more imminent. It might be, but this process is not just machines alone. Humans are in this messy process. So I think there is a lot of nuance. For example, we talk about nuclear. I know nuclear is much more narrow, but if you think about nuclear, it's not just the theory of fusion or fission or whatever. It's really obtaining uranium or plutonium, the systems engineering, the talent, and all that. I'm sure you watched the movie Oppenheimer. So here, if we're going that way, I think we have a fighting chance, more than a fighting chance, because we are a human society. We're going to put up guardrails; we're going to work together. I don't want to paint the picture that tomorrow we're going to have all these robots, especially in robotic, physical form, creating the machine overlords. I really think we need to be careful about this, but I don't disagree with you that this is something we need to be thinking about. So this is the extinction bucket. The catastrophic risk bucket, I think, is much more real. I think we need the smartest people, and the more the merrier, to work on it. So just to comment on each one of them: weaponization, right? This is really real. I completely agree with you. We need international partnership. We need potential treaties. We need to understand the parameters. As much as I'm optimistic about humanity, I'm also pessimistic about our self-destruction ability, as well as our ability to destroy each other. So we've got to get people working on this. And our friend Stuart Russell and many of the AI experts are talking about this. The second bucket you talked about is disinformation.
This is, again, 2024: everybody's watching the US election and how AI will play out. I think we have to get on the social media issue. We have to get on the disinformation issue. Technically, I'm seeing more work now: digital authentication is actually a very active area of research. We need to invest in this. I know Adobe is, I know academia is, and I hope there are startups in this space looking at digital authentication, but we also need policy. And then jobs, I cannot agree more. Actually, the most important word that I think is really at the heart of our AI debate is human dignity. Human dignity is beyond just how much money you make, how many hours you work. I actually think if we do this right, we're going to move from a labor economy to a dignity economy, in the sense that humans, with the help of machines, collaboratively, will be making money because of passion and personalization and expertise, rather than just those jobs that are really grueling and grinding. And this is also why HAI at Stanford has a founding principle of human augmentation. We see this in healthcare. In one of the earliest days of ChatGPT, a doctor friend from Stanford Hospital walked up to me and said, Fei-Fei, I want to thank you for ChatGPT. I said, I didn't do anything. But he said that we are using a medical summarization tool built on GPT, because summarization is a huge burden on our doctors, taking time away from patients, and because of this, I get more time. And this is a perfect example. And we're going to see this more. We might even see this in blue-collar labor. So we have a chance to get this right. I would add another concern to the catastrophic bucket: you talked about power imbalance. One of the power imbalances I'm seeing right now, and it's exacerbating at huge speed, is leaving the public sector out. I don't know about Canada.
Not a single university in the U.S. today can train a ChatGPT in terms of compute power. Combining all the universities in the U.S., the GPUs, A100s or H100s (probably nobody has H100s), you cannot train a ChatGPT. But this is where we still have unique data, for curing cancer, for fighting climate change, for economics and legal studies. We need to invest in the public sector. If we don't do it now, we're going to fail an entire generation, and we're going to leave that power imbalance in a very dangerous state. So I do agree with you. We've got so many catastrophic risks, and we need to get on this. This is why we need to work with policymakers and civil society. So I don't know if I'm saying this in an optimistic tone or a pessimistic one, I sound more pessimistic to myself now, but I do think there's a lot of work. Optimistically, since you've both been very vocal about this over the last six, eight months, there has been a huge shift: both, as Jeff said, key researchers going and focusing on these issues, and then public opinion and policy shifting in a way that governments are actually taking it seriously. I mean, you're advising the White House and the U.S. government, you've spoken to them as well, and you've sat with the prime minister, or multiple prime ministers maybe, and they're listening, right, in a way that they wouldn't have necessarily 10 or 12 months ago. Are you optimistic about the direction that is going? I'm optimistic that people have understood that there's this whole bunch of problems, both the catastrophic risks and the existential risk. And I agree with Fei-Fei completely that the rest are more urgent. In particular, 2024 is very urgent. I am quite optimistic that people are listening now. Yes, I agree. I think they're listening, but I do want to say, first of all: who are you listening to? Again, I see an asymmetry between the public sector and the private sector, and even within the private sector, who are you listening to?
It shouldn't just be big tech and celebrity startups. There is the agriculture sector, the education sector. Second, after all this noise, what is good policy? We talk about regulation versus no regulation. And I actually don't know where Canada sits; it's always America innovates and Europe regulates. Where's Canada? Probably in between. Okay, good for you. So I actually think we need both: incentivization policy, building the public sector, unlocking the power of data. We have so much data that is locked in our government, whether it's forest fire data, wildlife data, traffic data, climate data, and that's incentivization. And then there's good regulation. For example, we're very vocal about this: you have to be so careful in regulating. Where do you regulate, upstream or downstream? The most urgent regulation point, to me, is where the rubber meets the road. It's when technology, now in the form of a product or service, is going to meet people, whether it's through medicine, food, financial services, transportation. And then you've got these current frameworks. They're far from perfect, but we need to empower these existing frameworks and update them, rather than wasting time, and possibly making the wrong decision, by creating entirely new regulatory frameworks when we have existing ones. Okay. So we are almost out of time for the discussion part, but we're going to have a long session of Q&A. Before we start that, though, I'll ask two last questions. One is: our view is this technology is going to impact virtually everything, and some of the positive impacts are extraordinary. It is going to help cure diseases like cancer and diabetes and others. It's going to help mitigate climate change. There's just an enormous number of things. Invent new materials. I see over here someone who's focused on that, materials that can help in the energy sector and aerospace and pharmaceuticals, and that's a big effort at the University of Toronto.
But there's this entire world of new things that could not be done before that now can be done. So it's basically advancing science in a way that was part of either fiction or imagination before. Are you optimistic about that part of it? I think we're both very optimistic about that. I think we both believe it's going to have a huge impact on almost every field. So for those in this room who are actually studying, it's an incredibly exciting moment to be coming into it, because there's the opportunity to get involved in limiting the negative consequences, but also to participate in creating all those opportunities to solve some of the problems that have been with us as long as we've been around as a species. So, at least from our perspective, this really is one of the most extraordinary moments in human history. I hope that those of you who are embarking on your careers actually go out and go after the most ambitious things. You can also work on optimizing advertising and other things, or making more Netflix shows, which is great. We like that. Yes. So would my mom, who I think has exhausted Netflix; if there's a Turkish or Korean show out there, she's seen the very last episode of all of them. But for those of you who are embarking on a career, my recommendation is to try to think of the biggest possible challenge, something incredibly ambitious, that you could use this technology to help solve. And you have both done that, and fought against barriers all the way along to achieve it. There's a room full of people, and a lot of people online and others who will see this subsequently, who are at the beginning stages of making those decisions. I'm guessing you would encourage them to do that too, right? Think as big as possible and go after the biggest, hardest challenges. Absolutely. I mean, embrace this.
But I would also encourage this: it's a new chapter of this technology. Even if you see yourself as a technologist and a scientist, don't forget there is also a humanist in you, because you need both to make this positive change for the world. Okay, last question and then we'll get into Q&A from the audience. Are we at a point where these machines have understanding and intelligence? Wow, that's a last question. How many hours do we have? Yes. Okay, I'll come back to the yes. No. Okay, we have questions from the audience. I'll start on the far side. Do you want to stand up? You're going to be given a mic. Hi, thanks. My name is Ellie. This is awesome, and thank you so much. Jeff, your work really inspired me as a U of T student to study cognitive science, and it's just amazing to hear both of you speak. I have a question. You mentioned the challenges for education and for, you know, enabling universities to empower students to use this technology and learn. And you also mentioned, Fei-Fei, the opportunity for this to become a dignity economy and empower people to, you know, focus on personalization and passion and their expertise. I'm wondering if either of you have a perspective on the challenge that could emerge with overuse and over-reliance on AI, especially for kids and students as they go through their education and need to be building skills and using their brain, exercising the meat sack in their head. Our brains don't just continue to work and not accrue cobwebs if they're not learning. Yeah, I wonder your thoughts on burnout and over-reliance, and just what happens around de-skilling, and the ability to learn to paint when you can use Stable Diffusion, or learn to write like Shakespeare when you can have ChatGPT do it for you. And then, as those systems progress and can accrue greater insights and more complex problem-solving, how that impacts our ability to do the same. So I have one very little thought about that, which is... 
When pocket calculators first came out, people said kids will forget how to do arithmetic. And that didn't turn out to be a major problem. I think kids probably did forget how to do arithmetic, but they got pocket calculators. But it's maybe not a very good analogy, because pocket calculators weren't smarter than them. Kids could forget doing arithmetic and go off and do real math. But with this stuff, I don't know. For myself, I found it's actually made me much more curious about the world, because I couldn't bear to go to a library and spend half an hour finding the relevant book to look something up. And now I can just ask ChatGPT anything, and it'll tell me the answer, and I'll believe it, which maybe isn't the right thing to do. But it's actually made me more curious about the world, because I can get the answers more quickly. Yeah, but normally I ask questions about plumbing and things like that. So I'll answer this with a very quick story. I don't know about you guys, but ever since I became a Stanford professor, I've always been so curious: there's a mysterious office in the university, which is the Office of College Admission. To me, they're the most mysterious people. And I never knew where they are, who they are, where they sit, till I got a phone call earlier this year. And of course, they wanted to talk to me about ChatGPT and college admission. And of course the question is related to, you know, do we allow this in the application process, and now that there is ChatGPT, how do we do admission? So I went home and I was talking to my 11-year-old. I said, well, I got this phone call and there's this college admission question. You know, what do we do with ChatGPT and students? What if students wrote the best application? Should we use ChatGPT, and blah, blah, blah. And then I said, what would you do? I asked my 11-year-old. And he said, let me think about it. He actually went back and slept on this, or I don't know what happened. 
The next day, in the morning, he said, I have an answer. I said, what's your answer? He said, I think Stanford should admit the top 2,000 students who know how to use ChatGPT the most. At the beginning I thought that was such a silly answer, right? But it's actually a really interesting answer: kids already are seeing this as a tool, and they're seeing their relationship with this tool as enabling impact. Clearly my 11-year-old had no idea how to measure that, what that means, and blah blah blah. But I think that's how we should see it in education, and we should update our education. We cannot shut the tool out of our education, like what Jeff said. We need to embrace it and educate humans so that they know how to use the tool to their benefit. Incidentally, I've met Fei-Fei's 11-year-old son. He might be the president of Stanford by the time he's 18. If Stanford still exists. Maybe let's go to this side of the room, in the far corner. I want to ask about... we have really good foundation models right now, but in many of the applications we need kind of real-time performance of the model. So how do you see this area of research going in the future, you know, using the abilities of these expert foundation models to train fast, smaller models? Well, you're talking about the inference, right? We need to start thinking about the performance, the inference, and also fitting the model on devices, depending on which... Well, I mean, without getting into the technical details, all this research, as well as work even outside of research, it's happening. You want to talk about it? Okay, you don't want to talk about it. Okay, it's happening, but it'll take a while. We can't talk about things he invests in. That's true. I can't talk about it until the company says that it's okay to talk about it. Okay, let's go back in the middle. Just right here. Hi, my name is Ariel. 
I'm a third-year inter-sci student majoring in machine learning at U of T as well, and that conversation was pretty great, and thank you, Prof Hinton and Prof Li. I just have a question that maybe a lot of undergrad or grad students in this room are interested in. So, in your 20s, what drove you to be a researcher, and what drove you into academia in AI? Because I'm kind of confused right now. Should I continue with industry, or a direct-entry PhD, or just take a master's and then go back to industry? And I have one more question: usually, what do you look for if I apply for a direct-entry PhD to your lab? Is it GPA or publications or recommendation letters? Could you just elaborate a bit more on that? Thank you. I think there are about 300 people in the room and about 6,000 online who want to ask that question of you, Fei-Fei. You want to start? Your 20s? Oh, I got interested in how the brain works when I was a teenager, because I had a very smart friend at school who came into school one day and talked about holograms, and how maybe memories in the brain were like holograms. And I basically said, what's a hologram? And ever since then I've been interested in how the brain works. So that was just luckily having a very smart friend at school. I'm going to be very shameless: if you read my book, that's actually what the book is about. It's a very good book. Thank you. No, seriously, I actually told Jordan and Jeff there are so many AI books about technology, and when I started writing this book about AI technology, I wanted to write a journey, especially for the young people, young people of all walks of life, not just a certain look. And that book talks about the journey of a young girl, you know, in different settings, realizing or coming to understand her own dream and realizing her dream. And it's not very different from what Jeff said. 
It starts with a passion. It really did start with a passion, a passion against all other voices. The passion might come from a friend, it might come from a movie you see, it might come from a book you read, or it might come from the subject in school that you felt was the most fun, whatever it is. And in the students I hire, I look for that passion. I look for ambition, a healthy ambition of wonder, wanting to make a change, not wanting to get a degree per se. And technically speaking, I look for a good technical background, not just test scores. But honestly, I would never have got into my own lab; the standard today is so high. So by the time you apply for a PhD or a graduate school program, you probably have some track record. It doesn't necessarily have to be that; of course, if it's a stellar student, I'll take them without even asking questions. But even if you, and I'm saying this not only to U of T students but to every student online, you can have a very different background. You can come from an underprivileged background. What I look for is not where you are but the journey you take. That track record shows the journey you take, shows your passion and conviction. Having read the book, I will say that it is a very surprising journey, I think, to most people who will read it. And just a plug: in Canada, go buy it at Indigo. You can go to Indigo.ca and pre-order the book. I think people will be surprised and really enjoy reading and understanding that experience, and you'll get a very good understanding, kind of answering that question. Thank you. Okay, there's about 50 hands up. All right, let's go over here, right in the corner. Hey, thank you for the great talk. My name's Shalev. I'm at the Vector Institute, working with Sheila McIlraith. So I think benchmarks are very important. Benchmarks are like questions. ImageNet was basically a question, and then people are trying to answer it with models. 
And so right now, LLMs are very hard to evaluate, and generalist agents that take actions are even harder; it's so hard to even start thinking about how to evaluate those. So my question is about questions; it's about these benchmarks. Two things. One, if you sat down with GPT-5, GPT-6, GPT-7, and you had five minutes to play with it, what questions would you ask that would tell you this is the next generation of these models? And the second is more of a comprehensive benchmark: what is the more comprehensive, not-five-minutes benchmark that we need in order to evaluate LLMs or generalist agents? You can choose which one you want to, I guess, think about or answer. Okay. Thank you. Thank you for your question. It's a very good question. I will answer a different question that's just vaguely related. So, this issue arose with GPT-4: how do you tell whether it's smart? And in particular, I was talking to someone called Hector Levesque, who used to be a faculty member in computer science and has beliefs that are almost the diametric opposite of mine, but is extremely intellectually honest. And so, he was kind of amazed that GPT-4 worked, and he wanted to know how it could possibly work. And so we spent a long time talking about that. And then I got him to give me some questions to ask it. And he gave me a series of questions to ask it so we could decide whether it understood. So the question was, does it really understand what it's saying, or is it just using some fancy statistics to predict the next word? One comment about that is that the only way you can predict the next word really well is to understand what the person said. So you have to understand in order to predict really well, but you can predict it quite well without understanding. So does GPT-4 really understand? So a question Hector came up with was: the rooms in my house are painted white or yellow or blue. I want all the rooms to be white. What should I do? 
And I knew it would be able to do that. So I made the question more difficult. So I said, the rooms in my house are painted white or yellow or blue. Yellow paint fades to white within a year. In two years' time, I'd like all the rooms to be white. What should I do? And, oh, I said "and why". If you say "and why", it'll give you the explanation. ChatGPT just solved it. It said you should paint the blue rooms white. It said you don't need to worry about the yellow rooms because they'll fade to white. It turns out it's very sensitive to the wording. If you don't use "fade", but use "change"... I got a complaint from somebody who said, I tried it and it didn't work. And they used "change" instead of "fade". And the point is, we understand "fade" to mean change colour and stay changed. But if you say "change", it will change colour, but it might change back. So it doesn't give the same answer if you say "change" rather than "fade". It's very sensitive to the wording. But that convinced me it really did understand. And there's other things it's done. So there's a nice question that people came up with recently that many chatbots don't get right, and some people don't get right, but GPT-4 gets right, which is... So you see, I'm answering the question, does GPT-4 understand? Which does have some relation to what you asked, right? So the question goes like this: Sally has three brothers. Each of her brothers has two sisters. How many sisters does Sally have? And most chatbots get that wrong. What about humans? Well, I just gave a fireside chat in Las Vegas, and the interviewer asked me for an example of things that chatbots got wrong. So I gave him this example, and he said six. And that was kind of embarrassing. We won't ask his name. No, just kidding. No. So people get it wrong. Yeah. But I don't see how you can get that right without being able to do a certain amount of reasoning. It's got to sort of build a model. 
And Andrew Ng has these examples where, playing Othello, even if you just give it strings as input, it builds a model of the board internally. So I think they really do understand. And to take that a step further, has that understanding crossed the line into intelligence? You said yes? Yeah. I mean, I accept the Turing test for intelligence. People only started rejecting the Turing test when we passed it. So that's the moving goal line that I was talking about. Okay, do you want to answer that? I want to quickly answer. First of all, I also applaud you for asking such a good question. I'm going to answer in addition to Jeff's answer, because I think what Jeff is trying to push on is really how we assess the fundamental intelligence level of these big models. But there are a couple of other dimensions. One is, again, Stanford HAI's Center for Research on Foundation Models is creating these evaluation metrics, right? You're probably reading the HELM papers by Percy Liang and all that. I think also this technology is getting so deep that some of the benchmarks are messier than what you'd think of as the ImageNet benchmark. For example, in collaboration with government now, for example, NIST, the US National Institute of Standards and... what's the T? Technology. We need to start benchmarking against societally relevant issues, not just core fundamental capability. One more thing, to open your aperture a little bit: beyond the LLMs, there are so many technologies towards the future of AI that we actually haven't built good benchmarks for yet. I mean, again, my lab is doing some of the robotic learning ones. Google just released a paper yesterday on robotic learning. So there is a lot more research coming up in this space. Okay, I know we have a lot of questions online. I'm going to maybe take another few in the room, and then maybe someone from Radical could read out a question or two from online. 
Okay, in the room, let's go for one that's not too far away from the last one. Here, just right here. Yeah, here's the mic coming. Hello, I'm Vishwam, and I'm a graduate student at the University of Guelph, and I'm doing my thesis in AI and agriculture. So, building on something you mentioned, that universities don't have enough funding to train foundation models, right? Same question: I want to work in agriculture, I'm passionate about it, but I don't have enough resources to do that. I might think of a very good architecture but I can't train it, so maybe I go to industry and pitch them this idea, but then I don't have control over the idea; I don't know how they're going to apply it. So do you have some advice on how to handle the situation? So if you want to do a startup, that's what we're here for. Sorry, I'll let you answer. If you can get your hands on an open-source foundation model, you can fine-tune one of those models with far fewer resources than it took to build the model. So universities can still do fine-tuning of those models. That's a very pragmatic answer for now, but this is where we have been really talking to the higher education leaders as well as policymakers: invest in the public sector. We've got to have a national research cloud. I don't know if Canada has a national research cloud, but we're pushing for one in the US. We need to bring in researchers like you to be able to access the national research cloud. But you do have an advantage by not being a company: you have more opportunity to get your hands on unique data sets, data sets especially for public good, so play up that card. You could work with government agencies or communities or whatever, because the public sector still has the trust. Take advantage of that. But for now, yes, fine-tune on open-source models. Thank you so much. OK, we're going to take a couple of questions. We have thousands of people watching online, watch parties at Stanford and elsewhere. 
So let's see if we can get a question from some people online. Leah's going to ask this question on behalf of someone online. By the way, she's done an enormous amount of work to make this happen, along with Erin Brindle, so thank you both. Thank you. All right, thank you. So we do have hundreds of AI researchers online, and they're folks who are building AI-first companies. And so the first, most upvoted question was from Ben Saunders, or Sanders. He's currently CEO of an AI startup, and his colleague was actually a student of Geoffrey Hinton's in 2008. And he has asked about building responsibly. A lot of these questions have to do with building responsibly, and they're thinking about what measures can help them as teams be proper stewards for good versus bad, and what it actually means to be a steward. Great question. So... responsible AI frameworks: there are a lot of frameworks, and I think somebody estimated a few years ago there were like 300 frameworks, from nation-states all the way to corporations. I think it's really important for every company to build a responsible framework. There is a lot you can borrow; even Radical is making one. Create the value framework that you believe in, and recognize that an AI product is a system. So from upstream, defining the problem, the data set, data integrity, how you build models, through to deployment, create a multi-stakeholder ecosystem or multi-stakeholder team to help you build this responsible framework. And also create partnerships: partnerships with the public sector, like academia, like us; partnerships with civil society, which worries about different dimensions, from privacy to bias. So really try to have a point of view as a company, but also be part of the ecosystem and partner with people who have this knowledge. So that's my current suggestion. I'll add to that. No, that was a much better answer than I could have given. I'll just add a little bit. 
To Fei-Fei's point about working with people who are interested in this, I think there are people in the investment community who are thinking and leading on this. In our case, at Radical, we've written into every single term sheet an obligation for the company to adopt responsible AI. Initially, when we did that, some of the lawyers who read it were like, what is this, and tried to cross it out, but we put it back in. We've also been working on a responsible AI investing framework that we are going to release pretty broadly, and we've done this in partnership with a number of different organizations around the world. We've met with 7,000 AI companies in the last four years, and I think we've invested in about 40. So we've seen a lot, and tried to build a framework that others can use going forward, and we'll open-source it so we can develop it and make it better. But I think there's a lot that individual companies can do by just reaching out to others who are thinking in a like-minded way. Do you want to ask another question? Yeah, great. There are so many questions, so we'll only get to a couple of them, unfortunately. But playing off of that, a lot of these questions have to do with the relationship with industry, considering how big a role industry and the private sector are now playing in model development. And some folks are even asking, should researchers and people in different engineering roles also be taking management courses today? Sure. I have to tell you a story from when I was at Google. I managed a small group, and we got reports every six months from the people who worked for us. And one of the reports I got was: Jeff is very nice to work for, but he might benefit from taking a management course. But then he wouldn't be Jeff. That's how I feel about management courses. I don't have a better story than that. We have about a minute and a half left, so maybe let's do one more in the room, if we can. Let's see. Do you want to take... Yeah. No, beside you. 
Sorry. All right. Well, hopefully you'll ask quickly and then we'll get a quick answer. Thank you. And it's a pleasure to be here. Good to see you, Fei-Fei. My name's Elizabeth Gao. I work at Cohere. So my question is, from a private sector perspective, we work with everybody to take NLP, large language models, to the broader society. Specifically with the public sector and research institutions, universities, which have a lot of talent and a lot of data, what is the best way to find the mutually beneficial relationship where we can contribute and they can contribute? Thank you. Give them some money. Or H100s. We'll take H100s. But look, it's very important. I advocate for public sector investment, but I actually, probably more so, advocate for partnership. We need government, the private sector, and the public sector to work together. So over the past four years at Stanford HAI, this is one of the main things we have done: create an industry ecosystem. And there are a lot of details we can talk about offline. But if I'm talking to university leaders or higher education, I think we need to embrace that. We need to embrace that responsibly. Some people will have different ways of calling it, but I think this ecosystem is so important. Both sides are important. Create that partnership. Be the responsible partner for each other. And resources are a big thing; we would really appreciate that. Thank you. Okay, with that, we're exactly out of time. I want to thank you both. I always feel very privileged to be able to call you both friends and, Fei-Fei, you a partner and, Jeff, you an investor, and to have these conversations privately with you. So it's great to get you both together and let other people hear what you have to say. So thank you both so much for doing this. Hopefully it was as informative for you as it was for me. Thank you. And we'll turn it over to Melanie Woodin, Dean of Arts and Science at U of T. Thank you so much, Jordan. 
So, Jeff and Fei-Fei and Jordan, on behalf of everyone in the room tonight here at MaRS and the thousands joining us online, we are deeply grateful for such a profound conversation this evening. I can say, and I think many of us know, that being part of a university community offers a never-ending set of opportunities for engaging conversations and lectures, and as the Dean of the Faculty of Arts and Science, I have the pleasure of attending many of them. But I can say without reservation that tonight's conversation was truly unparalleled. And of course, this conversation couldn't be more timely. Jeff, when you shared your concerns with the world about the threats of superintelligence, we all listened, and we all did what we could to try and understand this complex issue, whether it was reading opinion pieces, watching your videos, or reading long-form journalism. We really tried to understand what you were telling us. So to hear directly from you, and from Fei-Fei, who has spent so many years now leading the way in human-centred AI, is really, truly powerful. So with that, thank you both, and thank you everyone here for attending this afternoon, and a big thanks to Radical Ventures and the other partners that made tonight possible. And so with that, the talk is concluded, and we invite those of you who are here with us in person to join us out in the foyer for some light refreshments. Thanks for joining us. Thank you