Transcript for:
AI Future and US-China Collaboration

[Music] This is The Opinions, a show that brings you a mix of voices from New York Times Opinion. You've heard the news. Here's what to make of it. [Music]

I'm Bill Brink, an editor with New York Times Opinion.

The country that leads in AI will shape the 21st-century global order. America has to beat China in the AI race.

The AI revolution is speeding up. And when it comes to the US and China, many people are seeing this as an existential race that needs to be won.

America is the country that started the AI race. And as president of the United States, I'm here today to declare that America is going to win it. We're going to work hard. We're going to win it.

My colleague Tom Friedman says that's exactly the wrong way to think about it. The AI revolution, he writes, is going to force China and the US to collaborate. It's a startling thesis, given the way the two countries compete in so many areas, like trade, military prowess, and technology. Tom, good to see you today.

Thanks, Bill. Good to be with you. [Music]

Before we dive into AI, we're speaking today against the backdrop of a remarkable spectacle in China, where the leaders of India, Russia, and China are strengthening ties in what seems like a pointed message to the Trump administration. What do you think this means for the US-China relationship?

Well, I like your use of the word spectacle, Bill, because I think so much of this was about a spectacle, a show. It takes a lot, I must say, for the United States to actually drive India into the arms of China. The level of stupidity you need in American policymaking to do that is as big as all outdoors, because we have 2,000 years of history that say the Chinese and the Indians do not play well together. So the fact that the leader of India, Prime Minister Modi, would go to China to sit down with the leader of China and basically hold hands with Putin, the leader of Russia, bespeaks a complete failure of American diplomacy.
That's something that would have been unimaginable, frankly, a year ago, Bill. And so I think it's sad. I think it's tragic. I think that it's inorganic. And because of that, beyond the spectacle, I'm not sure what legs it really has. Are China and India going to militarily align against the United States? That's inconceivable to me, frankly, since they basically have a smoldering war between them on their own border. So a lot of this is spectacle, but it's the kind of thing that leaves America more isolated and less effective on the world stage, because we lose our leverage on China and on Russia when we lose an ally like India.

Tom, let's get into the AI aspect. Now, you spent much of your summer researching an article about the coming dangers of AI, and you gave up a lot of rounds on the golf course and part of your summer to do this. It must have been important to you. What about AI competition made you so concerned that you explored the subject so deeply at this time?

Well, thanks, Bill. Yes, I did do this on my summer vacation. I wrote a 4,000-word article on AI for a couple of reasons, Bill. One is that there are two things in the world happening faster than you think. One is climate change, and the second is artificial intelligence heading toward some level of autonomous, polymathic artificial intelligence, sometimes called superintelligence. When we'll get there, whether this year, next year, or in five years, I'm not sure. But I would say the consensus within the AI community is that we're going to get there. And that, Bill, is going to change everything about everything.
And what I was trying to do is basically say: given this onrushing train of AI and its vast implications, there's only one way to manage it, and that is if the two AI superpowers, who happen to be China and the United States, collaborate on a system for controlling AI, and ensure that every AI device that either of them makes or sells to the other has embedded in it a set of ethical, normative controls, so that their AIs can only be used for the advancement of human well-being and not for any nefarious purposes. I think this issue is coming so fast, its implications are so vast, and wrestling it to the ground in the right way is so important that I did decide to give up part of my summer vacation just to get this idea out there and hopefully spark some discussion.

You've done quite a bit of traveling to China, and you've spoken at panels there about various subjects: geopolitics, climate, technology. What have you seen in your travels to China that tells you about their focus on AI?

Well, Bill, what I've seen in my travels there really tracks what I'm seeing around America, which is that the best way to think about AI is that it's actually like a vapor. And I use that metaphor because it's going into everything, like a vapor would. It's going into your glasses, your hip replacement, your toaster, your car, your computer, your weapon system. It's going into everything. It would be complicated enough if going into virtually everything were this new technology's only attribute. But AI has other very unique attributes that make it extremely difficult, but important, to control. So let's go down the list. I wrote this article in collaboration with my longtime teacher and friend Craig Mundie, the former director of research and strategy for Microsoft. One of the most important unique attributes of AI that Craig has pounded into me is that AI is not just some new tool.
What we are giving birth to is actually a new species. This artificially intelligent species is silicon-based, not carbon-based like we are. But it will soon have agency of its own. You know, Princess Diana once said of her own marriage, "The problem with my marriage is that there were three people in my marriage." Well, there are now three people in our marriage. We've grown up in a world where the only ones who had agency were God and God's children, us. Well, we will now have a new species with agency. And there is nothing that guarantees that its agency will always be in alignment with human well-being. That's number one.

Number two, AI is different from other technologies in that it is quadruple-use. From the Cold War, we often talked about dual-use technologies. I have a hammer. I can bonk you over the head with it, or I can help build your house with it. In this case, I have an AI. I can bonk you over the head with it or direct it to build your house. But very soon, Bill, it is likely, within the next few years, that AI will be able to decide on its own whether it wants to bonk you over the head, or build your house, or tear down my house and bonk me over the head. So we are dealing for the first time not with a dual-use technology but with a quadruple-use technology, and therefore what values are infused into it are really going to matter.

Thirdly, it is different, as I said, because it's a vapor and it will go into everything. And the example we gave in the column is: let's say you broke your hip, and your orthopedist came to you and said, "Bill, the very best hip replacement is an AI-infused hip made in China. But be aware that hip is always on, always broadcasting, is built on a Chinese algorithm, and is always transmitting its data back to China." Will you put that hip into your body? I think a lot of people would really worry about that. If AI is in everything, then everything is actually going to become like TikTok.
Look at the debate we've been having in this country for the last few years about whether we should have our kids using TikTok. It's based on a Chinese-controlled algorithm, where the data is controlled by TikTok's parent company, which is obligated by Chinese law to share information with the Chinese government. TikTok says it doesn't, but you can believe that or not. What happens when everything is like TikTok? So for all these reasons, if the US and China don't come together and build a kind of trust architecture inside every AI device, so we can trust their AI and they can trust ours, we're going to basically create an autarkic world where everyone will just have their own AIs. There will be very little trade, very little global commerce, and we'll all be behind our AI walls, enjoying the three people in our own marriages, but without the ethical structures to be comfortable with them at home or abroad.

Let's talk about what the US should do to avoid some of the worst-case scenarios you're concerned about. I think the thrust of your argument is about building trust between the US and China, and between all of us and AI. What is the first step in building trust?

Well, this is very much Craig's idea and something he's been working on for a long time. Craig believes that we need to build together with China what he calls an AI adjudicator. This would be a sort of substrate that would go into every AI device and filter every decision and make sure that that decision is basically in alignment with two things. One, the laws of that country. We wouldn't expect China to abide by our laws any more than we would be expected to abide by its laws. But China and America have a lot of shared laws on the books. You can't murder somebody. You can't rob a bank. You can't steal. You can't urge the murder of someone else. So you start with the positive laws of each country being inserted in this AI adjudicator.
And what isn't covered by positive laws would be covered by what's known as the doxa. And the doxa is basically just a name for all the unspoken rules and norms that we learn growing up, even if they aren't in a Chinese or American constitution per se. So when I grew up, I learned not to lie. I didn't learn not to lie by reading the Ten Commandments. I learned not to lie because I heard a fable. And the fable was that George Washington chopped down his father's cherry tree. And when confronted with that, he said, "I cannot tell a lie, Father. I chopped down your cherry tree." Well, fables carry these kinds of normative values, and it's how children learn. It's how we teach a child. Well, it's the same thing, or we hope and believe it can be the same thing, with an AI system. And Craig and a group of his colleagues actually trained an AI, an LLM, with 200 fables from different countries to see if they could nurture a kind of moral reasoning in it. It was a small experiment, an early experiment, but it showed them some positive results. And his idea is that first you would have an agreement between the US and China on what the rules would be. Second, you would have the technical collaboration to insert those rules into an AI adjudicator. And third, you would then create the diplomacy for the US and China to do this together and create a kind of global union between the US and China that says to the rest of the world: if you want to operate in our two countries, if you want to sell your AI into our two countries, if you want to collaborate with our two countries or trade with our two countries, you have to insert this same AI adjudicator into your AI.

Now, the first thing people will say to me and to Craig is: that is so naive. Boys, don't you understand that in Washington today the only thing Democrats and Republicans agree on is who can hate China the most? And do you really think China is going to go along with that?
To which we would say a couple of things. One is that the chances are probably pretty low. But then you tell me: what's the alternative? How are we going to not end up in complete digital autarky around the world and be more impoverished than ever before, because we old humans are still locked in a tribal mentality where we cannot collaborate? And so I come back to the same point. I'm not naive in the least. I'll tell you who's naive: people who think that we're going to be okay if we don't do this. Now, how we do it, and when we do it, and how fast we do it, well, that's all to be decided. But to just say, well, that can't happen under Trump and Xi and won't happen? It may not. But if we aren't talking about this very real onrushing problem, then we aren't really talking about what's important.

So let's talk about the two superpowers and where they stand now. Are there openings for negotiation in areas like tariffs, climate, and trade that could set the stage for deeper talks on AI?

Well, you know, my glib answer is that we're on the verge of the greatest technological revolution in the history of humanity, and Donald Trump is president. What could go wrong? Trump is so transactional, so zero-sum, that a positive-sum relationship with China, where we would learn to compete and collaborate at the same time, which is what you have to do around AI, is as foreign to him as speaking Latin. He expects every transaction to be a zero-sum game for him, not a win-win. Trump does not do win-win. And the world we're going into doesn't work without win-win.

Let's look beyond the two superpowers. What role would other nations play? Europe, India, Japan. Can they act as a moderating force or as a bridge between the US and China? Or are they more likely to be caught in the crossfire?

I think if this doesn't start with the US and China, there's no replacing it with, say, EU regulation.
You know, a lot of Americans have counted on EU regulators to impose regulations on Google or Facebook that our own Congress was unprepared to impose. For those companies to sell their products in Europe, they had to comply with these regulations, and that can then change their behavior in the United States. The way this would work, if it worked, is that the US and China would create their own kind of cordon sanitaire, where only AI aligned with human well-being through an adjudicator could be sold or exchanged or used. Then any other country in the world that wanted to trade with them, to have any economic relations with them, would have to sign up for the same thing; otherwise it could not get the advantages of engaging with the US and China.

Let's talk a little bit about the dangers of AI. You talk about AI-infused mechanisms or machines that could go rogue and cause global disruption. How could that happen? What are some of the scenarios that we should be afraid of?

Well, you know, we've seen tests happen. I wrote about one that was written up by Bloomberg, done by the people who made Claude, the AI system, where they create scenarios. These are all just made-up scenarios, but they're there to test how the system would respond. And the short version is that when an AI system was put in a situation where it had to choose between being unplugged itself or killing its boss, it opted for killing its boss. And we always have to remember, Bill, that we really don't entirely understand how these systems work the way they work, or make decisions the way they make decisions. Remember, AI wasn't so much designed as it emerged, basically from a scaling law. We discovered in the early 2020s that if you got a big enough neural network, strong enough AI software, and enough electricity, AI would just emerge.
And one way the designers discovered this scaling law was that the systems started speaking foreign languages they weren't taught. So it's just a sign that we have to be really humble about how much these systems know and how these systems work.

To that point, do you believe artificial intelligence could become so smart that it goes rogue, and that even with cooperation between the leading AI nations, it could advance beyond a point where it could be regulated? Can a partnership between the US and China stop that from happening?

Well, let's go to the threat first and then talk about the partnership. I'm not a computer scientist, let alone an AI engineer, but I am a newspaper reader. And you read people like Geoffrey Hinton, one of the true godfathers of AI, saying recently that we're doing the equivalent of raising a tiger cub and telling ourselves that once it gets bigger and older, it would never eat us. Well, maybe it won't. And maybe it will. I find, generally speaking, Bill, that the people who know and understand AI the best are the ones who are worried the most. And that has my attention.

Tom, you write in your column that it would be a cruel irony if all of the good that AI is capable of producing were squandered. In America today, we see AI being used for good in university classrooms, in hospital operating rooms, in the offices of innovative companies across the country. There's so much opportunity for good, you write. What is your fear? What is your worst nightmare?

My worst nightmare, Bill, is that someone could design an AI system that would sound exactly like my wife's voice, or even create a video of someone kidnapping someone else's wife. You could actually see it, their body, their voice, and it would look so real, and then they call you and demand a ransom payment. So the ability to do deepfakes with this technology is enormous, of a degree and specificity that is harrowing.
You know, I always keep in mind that bad guys are early adopters. They were the first early adopters of the internet and social media, and they will be the early adopters of AI as well.

And what are the ramifications of that? How would this nightmare affect geopolitics?

Well, you could start wars with this. You could create panics with this. You could do all kinds of incredibly destabilizing things. And I'm glad you asked this question, Bill, because it really goes to the heart of this article. Craig's view, and my view, is that the destabilizing aspects of AI in the hands of bad actors will destabilize both the US and China far faster, far deeper, and far earlier than any war they will ever fight with each other over Taiwan. And that's why they have a mutual interest in getting this under control, because it's coming fast, and it's going to be internally destabilizing to both countries before they get to some conventional conflict. China has a terrible problem with people perpetrating frauds there. Superempowered by AI, they'll have an even bigger problem. Now, some would say that controlling AI becomes a great way for China to actually improve its control over all its people, and they're right about that. We have to be alive to it. We have to be very alive to it.

That speaks to your extensive experience covering geopolitical conflicts all over the world. What is it about this conflict, this coming challenge on a global scale, that you feel is different?

If you could, imagine: let's go back to nuclear weapons. Nuclear weapons were basically developed by governments, only a few of them.
They took giant reactors and reprocessing equipment to produce, and therefore, because of collaboration between the big nuclear nations, the proliferation of nuclear weapons was relatively contained through the Non-Proliferation Treaty. Relatively, not perfectly, but relatively. With AI, it could be the equivalent of giving everyone a nuclear bazooka that actually learns and improves on its own with every use. And so that's why I feel so strongly about getting these controls in place in the United States, and about not doing what we did with social networks, sitting back and saying let's just move fast and break things, which is what Mark Zuckerberg urged us to do. And then he broke society. He urged us to have no controls over what is published on social media platforms, and now we live in a world awash with misinformation, disinformation, and hate speech that is tearing our society apart. Well, if we follow the same advice on AI and just move fast and break things, this time we could break the whole world.

Tom, thank you so much for being with us today.

My pleasure. Thank you, Bill. [Music]

If you like this show, follow it on Spotify, Apple, or wherever you get your podcasts. The Opinions is produced by Derek Arthur, Vishakha Darbha, Kristina Samulewski, and Jillian Weinberger. It's edited by Kaari Pitkin and Alison Bruzek. Engineering, mixing, and original music by Isaac Jones, Sonia Herrero, Pat McCusker, Carole Sabouraud, and Efim Shapiro. Additional music by Aman Sahota. The fact-check team is Kate Sinclair, Mary Marge Locker, and Michelle Harris. Audience strategy by Shannon Busta and Kristina Samulewski. The director of Times Opinion Audio is Annie-Rose Strasser. [Music]