Transcript for:
Dr. Fei-Fei Li's Insights on AI

Please welcome to the stage Dr. Fei-Fei Li, Sequoia Professor in Computer Science and Co-Director of the Human-Centered AI Institute at Stanford University, for a conversation with Bloomberg's Emily Chang.

AKA the godmother of artificial intelligence. How do you feel about that title? That's the first question.

You know, Emily, I would never call myself the godmother of anything, but when I was given that title I actually had to pause and think about it, and I was like, well, if men can be called godfathers of things, so can women. So I accept it 100 percent.

I mean, you are one of the most influential computer scientists of our time. You have written hundreds and hundreds of papers. You were the creator of ImageNet, which laid the foundation for modern AI, which was basically this database of images and their descriptions. Did you have any idea how influential it would be?

ImageNet was conceived back in 2007 as probably the inflection idea for big data's role in AI algorithms. So from a scientific point of view, I had the conviction that big data would fundamentally change the way we do AI. But I don't think I could have dreamed that the convergence of big data, neural networks, and GPUs would give birth to modern AI, and I could not have dreamed of the progress, the speed of progress, since then.

You are in rooms with the people who are making decisions about the future of this technology: Sam Altman, Sundar Pichai, Satya Nadella. You are testifying before Congress; you are on task forces. What is your main message to the people who have the power about how they should use that power?

Great question, Emily. Honestly, the message is the same whether I'm in the room at K-12 summer camps or at Stanford's introduction-to-AI courses: recognize this technology for what it is, and learn how to use it in the most responsible and thoughtful way. Embrace it, because it's a horizontal technology that is changing our civilization. It is bringing good; it is going to accelerate scientific discovery, help us find cures for cancer, map out our biodiversity, discover new materials with us. But also recognize all the consequences, and the potentially unintended consequences, and develop and deploy it responsibly. I just think that voice of balance, of thoughtfulness, is so important in today's conversations, whether it's in the White House or on campus.

Right now, and I don't know if you would call this a crisis or an inflection point, but AI models are running out of data to train on, and you've got companies turning to AI-generated data and synthetic data to train their models. How big a problem is this? What are the risks? What's the next step here?

So first of all, I think "AI models are running out of data" is a very narrow view. I know that you're implicitly referring to these large language models that are ingesting internet data, especially data from, you know, websites, Reddit, Wikipedia, whatever you can get a handle on. But even looking at language models, let's just stay in this narrow lane, I think there's so much more. We are seeing that differentiated data work can be used to really build customized models, whether it's, you know, journalism as a business, or in very different enterprise verticals, for example healthcare. We're not running out of data. In fact, there are many, many industries that have not even entered the digital age yet.
We have not taken advantage of the data, whether it's healthcare or environment or education and all that. We're not running out of data. Even in the lane of language models, I don't think we're running out of data from that point of view.

Do you think using AI-generated data to train models is a good thing, or does that take us further and further from the original source in a dangerous way?

That's a much more nuanced question. It's a good question. There are many ways to generate data. For example, in my Stanford lab, we do a lot of robotics research, robotic learning, and there simulated data is so important, because we simply don't have enough resources or enough opportunities to collect human-generated movements and all that. Simulation is really, really important. Would that take us onto a dangerous path? I think even with human-generated data we can go down a dangerous path, and simulation data likewise: if we don't do it in a responsible way, if we don't do it in a thoughtful way, of course it might take us down one. I mean, I don't even need to call it out. You know what bad human-generated data is out there, like the entire dark web. So the problem is not simulation itself. The problem is the data.

You're getting into the hot and crowded AI startup game. You're starting something, reportedly. Can you tell us anything about it?

Nope.

I'm just kidding. Okay, well, stay tuned. We also conducted a poll about trust in the age of AI. Can we bring up the results of that poll? The question was: how much do you trust tech companies to develop AI safely and securely? "I fully trust them": zero percent. "I'm skeptical": everyone. "Not at all": a significant portion of people.

Who is doing this?

The people in this room.

The people in this room. Okay.

If you had to rank the big AI players, who do you trust the most and who do you trust the least?

My trust is not placed in a single player. My trust is placed in the collective system we create together and in the collective institutions we create together. So maybe that's your trap question, but I'm not going to be able to call out anybody. I mean, think about the founding fathers of the United States: they did not place trust in a single person. They created a system that all of us can trust.

Are we doing that?

We're trying. At least the Stanford Institute for Human-Centered AI is trying. I think many people are trying. I get asked this question a lot, Emily: do you still have hope in AI? First of all, it's such a sad question. I do say my hope is not in AI; my hope is in people. And I'm not a delusional optimist. People are complex. I'm complex. You're complex. But the hope is in people, in our collective will, our collective responsibility. And many things are happening. Many of us are working to make this a trustworthy civilizational technology that can lift all of us.

There are so many risks that get talked about: human extinction, bad actors, bias, racial bias, all kinds of bias getting exaggerated. What is the thing that you worry about the most?

I worry about catastrophic social risks that are much more immediate. I also worry about the overhyping of human extinction risk. I think that is blown out of control. It belongs to the world of sci-fi, and there's nothing wrong with pondering all of this.
But compare that to the actual social risks: the disruption of disinformation and misinformation to our democratic process, the labor market shifts, the bias and privacy issues. These are true social risks that we have to face, because they impact real people's real lives.

Meta is leading an open-source AI campaign. What do you think should be open and what should not be open?

That is a nuanced question. I do believe in an open ecosystem, especially in our democratic world. I think the beauty of the past couple of hundred years, especially in our country, is the kind of innovation, entrepreneurship, and exchange of information. And so it's important that we advocate for that kind of open...

What is the biggest thing in AI that nobody's talking about? What should we be talking about?

Oh God, that's a long list, actually. We should talk about how we really can imagine using this technology. You know, I talk to doctors, I talk to biochemists, I talk to teachers, I talk to artists, I talk to farmers. There are so many ways we can imagine using this, so many ways we can use this to make people's lives better, their work better. I don't think we talk about that enough. We are talking about gloom and doom, and it's just a few people talking about gloom and doom, and then, you know, the media is amplifying that.

I don't know who you're talking about.

Yeah, my hand was waving vigorously. I don't think we give enough voice to the people who are actually out there, in the most imaginative, creative ways, trying to bring good to the world using AI.

Is there anyone, anything you want to call BS on? Like, anyone or any company that just kind of pisses you off?

I know where you're going. I already called them out. I wouldn't say BS. I just think there's over-indexing on the existential crisis. I'm sorry, the extinction crisis.

The existential extinction crisis.

Yeah, exactly. That is over-indexing. I am concerned about some of the bills in different parts of our country, in the state of California, that are over-indexing on that. They might come from a good place, but they're putting limits on models, might even inadvertently criminalize open source, and are not really being thoughtful about how to evaluate and assess these models. I am concerned about that.

So you think we might overregulate?

We might. We might overregulate in ways that we didn't mean to and hurt our ecosystem. But in the meantime, there are places where the rubber meets the road, like healthcare, transportation, finance, where we should look at the proper guardrails.

Did you talk to President Biden about that? Because I know you have, like, a line to him.

I can't tell you what I talked to him about. Actually, with President Biden, one of the things we talked about is the moonshot mentality to invest in public-sector AI, because, you know, we're here in the heart of Silicon Valley. It's not a secret that all the resources, in terms of talent, data, and compute, are concentrated in industry, actually in big tech. And America's public sector, academia, is falling off the cliff pretty fast in terms of AI resources. Stanford's Natural Language Processing Lab has 64 GPUs. 64.
Think about the contrast. And so we talked about resourcing the public sector, because that is the innovation engine of our country. It produces public goods, it produces scientific discoveries, and it produces trustworthy, you know, responsible evaluation and explanation of this technology for the public.

So, last question, and this is something I know you're really focused on in your lab and in general: there are not enough women and people of color in this field who have their hand on the dial. How serious is this risk, and what could it lead to?

Yeah, well, Emily, I know you have been advocating on this issue. Look, there isn't enough. In fact, I think the culture is not necessarily better. We're seeing more and more women and people of diverse backgrounds entering the field of tech and AI, but we're also seeing the voices of the men being much more lifted, and all that. And people do say, well, Fei-Fei, you're here talking. But there are so many people who are better than me, so many young women and people in tech from diverse backgrounds whose voices should be lifted, who should be given a bigger platform. If we don't hear from them, we're really wasting human capital. These are brilliant minds: innovators, technologists, educators, inventors, scientists. Not giving them a voice, not hearing their ideas, not lifting them up wastes our collective human capital.

I think godmother is a pretty good term. What do we all think? Do you agree? All right. Thank you. The godmother of artificial intelligence.