Well, we have an absolutely fabulous morning, you know, and you saw the start of it, and it's going to go on, except for my presentation, but it's going to be a fabulous morning. And I am really glad that my friend Gary Corn helped arrange to have General Shanahan here, because General Shanahan is the man in terms of military artificial intelligence, and recently retired. I think Gary's going to do more of an introduction. Those of you who are veterans of the conference know Gary Corn.
He's been here a couple times, I think, Gary. He is now a retired Army colonel and judge advocate; he was previously the legal advisor to U.S. Cyber Command, and he's currently the director of the Technology, Law & Security Program and an adjunct professor at American University. Without further ado, and apologies, General, for the delay. We've been pretty good on schedule, but flexibility is the key to airpower. Thank you so much. No question.
Good morning, everybody. Sir, thanks for the intro. Yeah, that is a tough act to follow.
But I can't think of anyone better, certainly on this topic, to follow on the heels of that discussion. There were certainly some nuggets in there that drive my thinking about our discussion, and I'll come back to them in a minute.
But I have had the honor and privilege of knowing and working with Lieutenant General (Retired) Shanahan since about 2010, when I had the privilege of advising him as a JAG while he was the J39 on the Joint Staff, part of a 36-year career in the U.S. Air Force with a variety of jobs. I think what's most relevant to this discussion is that he was one of the leading thinkers in incorporating artificial intelligence for the department, thinking about how we're going to use artificial intelligence. I'm going to let him talk about Project Maven and the Joint Artificial Intelligence Center and his role with all that.
And he has continued since retiring to be extremely active in this space. He just came back last night from Copenhagen, where he was talking about artificial intelligence with a bunch of lawyers. And I think, again, he is a fan of the legal advice, other than mine, that he's gotten over the years. We could talk more about that.
But, sir, thank you for being here today. Yeah, I mean, one of the things that General McKenzie said, and it came up with one of the questions, was the focus he had on making sure that at the bottom of the pyramid, the people who had to actually execute at the tactical level had everything they needed to be successful. And we're now in this era where part of that discussion is, okay, are there new tools that we can provide them that will make them more successful? Part of that is machine learning capabilities, artificial intelligence. We'll unpack all this.
And also autonomy, right? And there have been discussions going on for several years now, international efforts to try and get our hands around what this means, from one end of the spectrum, ban killer robots and all the assumptions that go into that, to the other end, we need to break things and move fast, right?
I think we want to talk through all of that this morning. But I'd sort of like to level set. Yeah, I've had to wrestle with technology for the last decade or so.
I'm not a computer scientist. When I walked into the halls of the command, I had worked some of these issues on the Joint Staff, but when I walked into Cyber Command, it was a very steep learning curve to sit with all of the technologists and operators and understand what they were talking about when they would throw whiz-bang terms around, the flux capacitor and this, that, and the other thing, right? And AI is that much harder. But it's ubiquitous now in some sense. We're all dealing with it every day.
People probably used the GPS on their phone to get here, and so on and so forth. So can you start off by level-setting our understanding: what do we mean by artificial intelligence, which is not the same as autonomy? What do we mean by autonomy or autonomous weapons?
What's the relationship of all this stuff? It's like economists. When you ask a certain question of 10 economists, you get 12 different answers.
Artificial intelligence is a little bit the same way. I use a very simple explanation that will not satisfy everybody: machines that perform at or above the level of humans. That's OK until you get to this thing called artificial general intelligence, which is more advanced. And what do you mean by that? How broad is it?
Right now we're talking narrow, single domain. At some point in the future, could machines do everything a human can do, reason and do all these other things? That's sort of the place we start. We should not conflate it with the question of an autonomous system; everybody understands autonomy is just autonomy. You could add AI to it.
So you start with automated: your garage door opener, an ATM, automated. Autonomous means it acts based on how it was programmed and can take action by itself. Now, AI-enabled autonomy is this very different future in which the machine truly could end up doing things on its own that, if you go to a far scenario, could be unexpected or even undesired. We can get into a little bit of what we mean in the military context.
I'll also put this in a little bit of framing, and this may sound a little controversial to some of you. I think we are approaching, and I can't say this for certain because I'm not a historian or social scientist, they will make this determination, but I think we're at the cusp of a digital revolution. There was an agrarian revolution; there was an industrial revolution.
I do not call this the fourth industrial revolution. Some like to call it that. I think it is fundamentally going to be different because of a merger of humans, data, and machines that goes well beyond symbiosis.
The idea of cognition at scale that we have never seen before will affect power, economics, military strength. But I don't know when that's going to happen yet. I would say the AI we have today is somewhere close to that. So I'm predicting, based on what we think is coming in the future.
I think it will be seen in retrospect as a digital revolution. But, Gary, more specifically to your question, how did I get involved in this? In my last five years in uniform, I never expected to be working on AI.
I was handed a problem. My team was handed a problem we could not solve. And to be very quick about this, a lot of you in this room would understand it, especially those in uniform who have worked downrange, so to speak.
The full motion video feeds coming off of Predators and Reapers and other assets: we ran out of human capacity to analyze them. It's a very classic problem. There's no new problem here.
Humans cannot possibly analyze all this information. We had more information from more sources than at any point in history, quite literally, and it's only getting worse. So we looked around for a solution in the Department of Defense and could not find anything. The research labs, which I love, said, we're working on AI.
We said, great, we're ready to field it, right? No, it's about three years away. There was nothing available to us to figure out how to do this. So it was a Marine Corps colonel who led this program. We gave him the problem to solve, and he went out to Silicon Valley, and they said, yeah, we think we've got something for this called computer vision.
And then we were off to the races. We figured out how we could get a computer vision algorithm from a company, use Department of Defense data to turn it into an AI model, put it into a ground station that was processing this voluminous amount of information from these drones, and do three things: augmentation, acceleration, and automation.
I use automation third very deliberately, because people think their jobs are going away. I needed more intel analysts, not fewer of them. So I wanted to get through those processes in that order.
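Purely as an illustration of that augment-and-accelerate idea, here is a minimal sketch of how such a pipeline might be structured. The object classes, the Detection fields, the confidence threshold, and the triage logic are hypothetical stand-ins for this explanation, not anything taken from Maven itself.

```python
from dataclasses import dataclass
from typing import Dict, Iterable, List

# Hypothetical object classes of the kind described: buildings, people, vehicles.
OBJECT_CLASSES = ("building", "person", "vehicle")

@dataclass
class Detection:
    frame_id: int
    label: str         # one of OBJECT_CLASSES
    confidence: float  # model confidence, 0.0 to 1.0
    bbox: tuple        # (x, y, width, height) in pixels

def triage(detections: Iterable[Detection], review_threshold: float = 0.8) -> Dict[str, List[Detection]]:
    """Augment and accelerate rather than replace: confident detections go to an
    automated track queue, low-confidence ones go to a human analyst for review."""
    queues: Dict[str, List[Detection]] = {"auto_track": [], "analyst_review": []}
    for det in detections:
        key = "auto_track" if det.confidence >= review_threshold else "analyst_review"
        queues[key].append(det)
    return queues

if __name__ == "__main__":
    # Stand-in for a trained detector's output on one full-motion-video frame.
    frame_detections = [
        Detection(frame_id=1, label="vehicle", confidence=0.93, bbox=(40, 60, 32, 18)),
        Detection(frame_id=1, label="person", confidence=0.54, bbox=(120, 88, 8, 20)),
    ]
    queues = triage(frame_detections)
    print(len(queues["auto_track"]), "auto-tracked;", len(queues["analyst_review"]), "sent to an analyst")
```

The point of the sketch is the ordering he describes: the machine handles volume, and anything it is unsure about still lands in front of a human analyst.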
And we put a pilot project together. And by the way, to put this in additional context, there was an advisory board, the Defense Innovation Board, which some of you may have heard of, that was being extremely critical of the Department of Defense for missing the revolution that was happening in commercial industry in Silicon Valley.
They said, you're going to lose the next war unless you do something about it. So coincident with that criticism, we had this problem we were trying to solve. We went to the leadership of the Department of Defense, Deputy Secretary of Defense Bob Work, and pretty much on the spot he stood up this thing called Project Maven, also known as the Algorithmic Warfare Cross-Functional Team, which is a little bit hard to say, but it was true. Many of us envisioned a future of algorithm against algorithm, but really Project Maven, what everybody knows as Project Maven, was what I almost look at as Wright Brothers, 1903 kind of technology: detect, classify, and track three classes of objects, buildings, people, and vehicles.
That was it in those early days, very unsophisticated, but nothing the department had ever seen before. It's like, oh, where does this crazy technology come from? So I had a day job, which was not what I was brought into the Pentagon to do, but I was then put in charge of Project Maven as a three-star, and we were up and running with this Marine Corps colonel and his number two, Lieutenant Colonel Joe Larson, who's a Stanford law grad. That law experience, along with Drew Cukor's operational, acquisition, and contracting experience, proved to be sort of the game changer that I needed to get this program up and running. We demonstrated enough success.
If you were in the commercial industry, you would not have been blown away by the technology. But we had shown a culture shift was possible. You know, the idea of running a startup company in the Pentagon, that you could have a startup culture in the institutional bureaucracy, we showed it could be done. So as a result of that success, modicum of success though it might have been, I was asked to stand up the DoD Joint AI Center, which was to do everything beyond intelligence, surveillance, and reconnaissance. And we can maybe get into a little bit of what that means, what other things we were working on.
But the whole point is, and I'll stop here, this is going on too long: this was about not coming to the table with an AI solution in search of a problem, which too many people do. We had a problem we could not solve in any other way but AI, and it was an enabler.
Everybody wants to look at AI as this magic thing, but it's an enabling technology. It's like electricity to me. It's not a capital asset.
What really matters to me is how this technology will diffuse across the military over the next two decades. It won't happen instantly. You'll need organizational redesign, reform. Operating concepts will have to change.
It's going to take some time. Every technology throughout history has gone through the same cycle. It's just going to go through much faster. And that's what's uncomfortable for some people with AI today is the pace of change.
The rate of adoption and the breadth and depth of diffusion. So that's what's different. We did that in Project Maven and then in the JAIC. And then after about a year and a half of that, I retired and have been kind of involved in AI and national security ever since.
Because I'm fascinated by it. I see its flaws. It has a lot of flaws in this technology, but I also try to project where it could be. And I think it requires a vision.
It is easy to get skeptical about AI. We all get skeptical about ChatGPT. Look where ChatGPT was two years ago. Look where it is today.
Fundamentally different. And it will be hard to describe how different it will be in two years.
And if it's not, we're in trouble, because I think there is a case to be made that we've got to come up with a new way of training models beyond the current scaling laws, for those who have heard that term. A lot there.
Yeah, so a couple of things. One of the themes, as I've started to wade into this area, and I was just in Copenhagen with a lot of the same people about three weeks ago or so, is the question of, is this something new under the sun or not?
And, you know, we'll unpack that in the next panel discussion. Sometimes the sense of this being so new and different is based on some of the mythology that's grown around it. Some of it is based on real concerns.
I've heard you describe the original impetus for Project Maven as a success catastrophe, right? It's interesting. Or I think about it as an information catastrophe.
Data: we know there's just more and more data available to us everywhere, but data is not information, right? Data is just raw digits at some point unless you have the ability to fully unpack it. Now, from a military operational perspective, it makes sense, right?
You need to be able to decipher. More information is better for better operational planning. From a legal perspective, it's almost an obligation, right, to inform those decisions.
But the rate of change, has that changed your sense of when we are on the cusp, we're going to pass that cusp? That's a really important question, and I say not yet. And what you said, Gary, I'm going to emphasize this for everybody in this room.
It's easy to think of AI sometimes as black magic. The pace of change is very quick, but it's not that fast right now, but it could get that fast. Do I think it will?
I always come back to, let's focus on what's the same with this technology and the law versus any other technology that we've already dealt with. And this is where I differ a little bit from the academic world. I say I have 36 years in uniform. I'm used to watching military organizations adapt to new technology.
They do it really well. You do it through academics. You do it through simulation.
You do it through exercises, training, modeling and simulation, and so on. Why is that fundamentally different with AI? It shouldn't be, but we're treating it as something that's different.
So my starting position is: let's focus on the intersection of AI and the law and all the things that I think are common, and most of these things are common. There are some unique differences that I get into, both on jus ad bellum and jus in bello. Each has some things that I'm beginning to consider a little differently now than I did when the technology was brand new in the Department of Defense.
Can I interrupt for one second? Look, you just heard an operator with years of experience talking about jus in bello and jus ad bellum. The lawyers have done their jobs. And I didn't say juice, I said jus.
This is really important to me, and my keynote in Copenhagen was on the jus ad bellum piece of it. My going-in argument was that it should not change the way nations decide to go to war. Thucydides, 2,500 years ago, is still relevant.
Fear, honor, culture, and interest in various combinations. Why should AI be different? But then, and I caveat this, I say, well, here's what you ought to be thinking about differently.
First, speed. I always thought AI would be most valuable in giving humans more time to think. I actually have to start rethinking that myself, because we're going to get to a point where humans are unable to keep up with the technology and will not have as much time to think as we would otherwise hope. That's one way.
And there's another thing I think is worth saying here, because I had never thought of this until my keynote, and I put it in print. For 36 years, and in fact for almost five years since retirement, I've always said the character of warfare constantly evolves with new technology, but the nature of war, why we fight, is immutable.
I, for the first time, qualified that assertion with a fully autonomous system future, in which an autonomous system can interpret and execute human instructions in a way where no further human intervention is allowed or possible, whether by mistake or design. And once a machine can decide when and how to initiate, sustain, and terminate conflict, we are in a different world. And that's not a world I'm particularly optimistic about.
We're not there yet, which is why it's so important to talk about the intersection of law and AI right now: one, to get the ethicists in the room, but two, the lawyers in the room. And I did that all the time with these technologies, but there are some differences. I'm not going to get into details, but if we want to talk more, I'm glad to do that.
But the three you hear most common with AI, you're probably going to hear maybe on the panel, accountability problem, black box problem, control problem. Some people say accountability gap. I refuse to say that.
It's not a gap. It's a question of who's held accountable and responsible. I wore a uniform for a very long time.
Humans are held accountable and responsible for sins of commission or omission. It's not a machine, it's not a software developer, it's not a company; it's a human. And lawyers and commanders will work together to figure out where that accountability and responsibility falls.
Sometimes we get it wrong. We haven't held people accountable when we should have. It's a different problem.
I am not that concerned when it comes to the black box problem, not with current technology. It's actually more deterministic than you think, especially on things like computer vision.
The control problem is going to be an issue the more we delegate decision making to machines. So the question there will be, and I don't even use the terms in the loop or on the loop, I avoid them like the plague, where does the human come in? I think it's going to be more in the design and development phase, with lawyers there at the table, because at some point, when you field them and they're fully autonomous, it's too late unless you build in a failsafe mechanism, which maybe you can.
That's why you have these discussions early on. Yeah, on the accountability point, I think we'll talk about this in the next panel as well. You and I both contributed to a symposium on the Opinio Juris website, and there was some concordance, I think, not coordinated, but some concordance in what we said. I have some concern with...
the accountability discussion in the sense that, as we see in most of the political declarations, the ethical principles, and so on, there will be someone accountable and responsible. At the same time, and I think you said this in your article, we haven't fielded a weapon system yet that doesn't have some known error rate associated with it. So where's...
Where's the dividing line there? Where's the difference there? If a system is tested, certified, et cetera, and fielded, and the operator or commander employs it within the parameters for which it was certified, and it fails in some way, what's the question of accountability at that point? Yeah, well, first of all, I agree.
And I'll say this: there are no risk-free weapon systems. There never have been, and there never are. What you have is a process. And this is what I wrote about in that Opinio Juris post: look, we have a long tradition in the U.S. military, maybe it's different around the world, it is different around the world, of going through this process of designing and developing weapon systems, a very deliberate process we go through. By the time it's fielded, we're very comfortable with it.
When I got into an airplane, I was very comfortable that it had been through a whole lot of testing and evaluation. Why do we treat AI any differently than the hardware we're used to? And that's really the important question: is it that different? And where do you need to start bringing in these considerations of where it's different? So I, probably more than anybody else, emphasize, one, risk management frameworks.
Let's talk about risk. Every system has risk. How do you mitigate risk?
Let's go through and let's talk about it. but also the importance of test and evaluation for AI-enabled military systems. There's a tendency for people to say it's software. We need to go faster. We're going to lose the war.
I hate that way of looking at this. We've always done test and evaluation. What makes this different?
Could it be fast and at the speed of operational relevance? My answer is yes, because I was there. I know it. I had somebody who did this in Project Maven and the JAIC; she became the center of the universe for AI test and evaluation, Dr. Jane Pinelis of Johns Hopkins. Wonderful.
She built this program for the DoD. So we should be focusing on that, even though we all want to move faster and faster. There are no risk-free systems. But for those of you who might have some familiarity with this thing called DoD Directive 3000.09, Autonomy in Weapon Systems, it lays out a process, a framework.
It does not prohibit. It does not prohibit lethal autonomous weapons systems. But it gives a framework for saying, if you're going to do this.
Here's the process you go through. There's one review before development. There's one review before fielding.
It's at the very senior levels. Could all that be thrown out the window in the current administration? It could. I hope it doesn't happen that way. We should look at this the way we've looked at every other weapons system in history.
But what you said earlier, Gary, is focus on what is really different about this. And once you know what's different, I say you can do a couple of things.
You build in a combination of technical constraints, which could be hardware and/or software, procedural constraints, and policy constraints, and then things like rules of engagement and special instructions, which anybody who has served as a military JAG knows exactly what I'm talking about. That's how you bound the performance of these systems. Again, we do have to think about how a fully autonomous system may break through those boundaries and do things no human ever expected. We're not there yet.
We ought to be thinking about it, which is why sessions like this are so important. Yeah, I mean, to one of your earlier points about speed, I think we have been facing this problem at an accelerating rate. And there's sort of a feedback loop: the OODA loop generally, decision-making for military operations, for national security decision-making, has been condensing and condensing and condensing. Yeah. Which then tells you, I need help, right? I can't crunch through this at human speed. I need the assistance of some system.
But as you start to introduce those systems, it accelerates even faster, to the point where, yeah, I think there is a danger of abdication of responsibility. And we can talk about that a little bit more. But, you know, the risk question, I 100% agree. We have to come at this from identifying and managing risk.
Risk is consequence times probability, right? But there are different risk profiles depending on how you're using these capabilities. Maybe you could talk a little bit about that.
There's a spectrum of potential uses for the department. And how's the department thinking about that? What's the trajectory? It isn't all about killer robots.
No. And so I laid this out in that post I'm talking about. And it builds off some terrific groundwork from Lieutenant General (Retired) Ravi Panwar from India. We work closely together in an AI Track 2 dialogue.
For those who don't know Track 2, those are unofficial dialogues, multilateral, bilateral, whatever. And he built this risk framework, and I've just done some refinement to it.
You sort of begin with excessive risk and work down to negligible risk. Are there some systems that are negligible risk with AI? Yes.
Stuff like, say, human relations kind of stuff, quotidian work in your office, whatever. Maybe not negligible for the individual who is suffering from something going wrong with it, but negligible from the standpoint of geopolitics and warfare writ large, all the way up to excessive risk. Ravi says prohibited risk. I don't use that, because I don't think we're in the position of telling nations what is prohibited and what is not, unless we get to some sort of international agreement banning some of these systems.
For excessive risk, it's very black and white, and one thing that I can put in that category right off the bat is AI in nuclear command and control. A human will make the decision to launch, and a human will actually do the physical launching of a nuclear weapon. Easy to say.
In fact, President Biden, President Xi came to that agreement largely on the back of the work we had done in the Track 2 dialogue. But what about all those other systems that feed the nuclear decision-making process? Gets a little bit more complicated pretty quickly. So you put that at the top, negligible in the middle, then you work your way through to what is high risk, what is medium risk, what is, you know, acceptable risk and so on. And then you come up with these mitigation measures.
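As a purely illustrative sketch of the kind of tiered framework being described, the snippet below maps risk, treated as consequence times probability, to tiers with mitigation measures attached. The tier names, numeric thresholds, scales, and mitigations are invented for illustration; they are not the actual framework from the Track 2 work.

```python
from enum import Enum

class RiskTier(Enum):
    NEGLIGIBLE = 1
    ACCEPTABLE = 2
    MEDIUM = 3
    HIGH = 4
    EXCESSIVE = 5  # e.g., AI in nuclear command and control: human decision only

def assess_risk(probability_of_failure: float, consequence: float) -> RiskTier:
    """Toy scoring of risk = consequence x probability, mapped to illustrative tiers.
    The thresholds are made up for illustration; consequence is scaled 0-100."""
    score = probability_of_failure * consequence
    if score < 1:
        return RiskTier.NEGLIGIBLE
    if score < 5:
        return RiskTier.ACCEPTABLE
    if score < 20:
        return RiskTier.MEDIUM
    if score < 50:
        return RiskTier.HIGH
    return RiskTier.EXCESSIVE

# Illustrative mitigation measures attached to the higher tiers.
MITIGATIONS = {
    RiskTier.MEDIUM: "technical and procedural constraints; routine test and evaluation",
    RiskTier.HIGH: "senior review before development and again before fielding",
    RiskTier.EXCESSIVE: "human-only decision; do not delegate to the machine",
}

if __name__ == "__main__":
    tier = assess_risk(probability_of_failure=0.3, consequence=80)
    print(tier, "->", MITIGATIONS.get(tier, "document residual risk"))
```

The value of framing it this way is less the numbers than the habit: every system gets placed on the spectrum, and every tier carries an explicit mitigation and review expectation.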
The hard part right now is we don't have enough people, especially on the law side, who understand the technology, and a lot of the people making these military decisions also don't understand the technology, and they don't understand the law either. So we're in a bit of a period of everybody trying to get more comfortable with what we mean, and I had this experience with Project Maven. Nobody at OSD General Counsel at the time had any experience with AI.
That was a little bit of a problem for us, not because of weapon system AI. Because, as General McKenzie said this morning, there's a difference between op law and admin law, and this was the admin law: intellectual property discussions, contract discussions, data privacy rights, all those other things. Who owns what? We finally got an Army major in, a JAG, a reservist who worked in the technology industry, and we were off to the races.
By the time I got to the JAIC, I had a part-time lawyer, a civilian, but he was being shared with me by the chief information officer, because that was who my boss was. By the time I retired, I was adamant we needed a full-time lawyer. There are far too many considerations with this technology.
We can't be part-time on this anymore. Because at the time, we were working on a big project related to COVID called Project Salus. We were in all sorts of privacy questions about U.S. personal data, companies giving us data, and all these other things.
So there has to be. I said this to Emily just a little while ago. If you're at the intersection of AI and national security law, you're employed for life.
You're good. Don't worry about it, whether you're in uniform or not. We need it.
We need more of it. Because it is a growth industry, and you focus on the things that are most different. I think the law is the law. Ethics should be ethics.
Morality should be morality. But let's focus on the things that will surprise us when these systems work in ways that we didn't quite intend. That could happen in the future as we get to these very advanced large language models and we start crossing the digital to physical divide.
Human instructions will be interpreted by a system that then acts on them using this thing called AI agents; that's sort of the phrase of the day, agentic AI. Then you put that in a robot or autonomous system, and it's acting in the physical world based on your digital instructions. That's what's coming in the Department of Defense.
I made a prediction on a podcast the other day. I think this year there will be one big development in this area that will surprise people. And I said, I don't know what it is. I think it will be looked at as a biplane, 1903 Kitty Hawk, not putting a man on the moon.
But it will be the beginning of a journey where we have to think very seriously about how we constrain those systems to avoid offloading the ethical responsibility to a machine, which as far as I'm concerned is impossible to do. But I do think this idea of cognitive offloading or cognitive automation is a very serious concern we should be thinking about more, where, under the scenario you described, we're going so fast that we get into this world of automation bias, where you say, I don't have time to think; let the machine do what the machine does. And people trust machines, generally. On the other hand, the last thing I'll say on that subject is about humans.
People have written volumes of books on human biases, especially in wartime. Fear, chaos, friction, life, death. Why can't you use machines to help humans?
We focus so much on the negative parts of AI. I always thought one of the first things you should think about with AI is minimizing collateral damage and civilian casualties, better decision making. I hope that doesn't sound too pie in the sky. It's very real for me, and we proved some of that with Project Maven, which is being used, by the way, in support of Ukrainian operations right now.
In a different form, and you may hear different terms for it, but it's the same technology we used. Yeah, it's interesting. I think about it as the irony or paradox of anthropomorphizing this technology: on the one hand, we don't like conceptually the notion that we're going to allow a machine to take over human functionality.
On the other hand, we then say, but it has to act in ways that we don't actually require or expect humans to act in similar circumstances. Now, that may be appropriate as a policy matter, as an ethics matter. But, you know, to the point, the bias question: take a decision support system, like all the discussions around the use of Gospel and Lavender, for example, by the Israelis.
It's a resource question as much as anything else. If a commander had 10,000 analysts as opposed to 10, they'd take it in a heartbeat. You get that much more analytical input. But the commander's not going back and cross-examining every one of the 10,000 analysts to make sure the commander is satisfied.
There's a bias in the deference to a J2. Some of this, too, is, are we expecting things of the systems being developed that we don't expect of humans in the same context? Yeah, I think the danger is we set an unrealistically high bar.
I mean, we always should set a high bar. That's who we are. At least I hope that's who we stay. But you're not always going to get to that bar. But I'd much rather have it begin there than set a low bar so we can say
we succeeded. So I think, to me, the world is like everything else. You can describe the world in sine curves, cosines, and a few other Venn diagrams. In this case, it's a bell curve. The world of AI is a bell curve to me when it comes to AI-enabled military systems.
On one end, it's very conceivable you're going to have machines doing machine things at the speed that only machines can do, because the humans have delegated to those machines. I could think ISR. I could think cyber. A couple of other places, like electronic warfare. Okay, you just have to do that. On the other end, nuclear command and control: humans only.
Everything else in the middle is some blend of human-machine. I've been saying this for the last few years based on my very personal experiences at Maven and Jake. One of the most important things we should be working on over the next decade is human-machine interaction.
I don't say teaming, because some people react differently to that. I'll say interactions. Not that this hasn't been studied for a very long time, but in the world of AI, it's going to look and feel different. Anybody who uses ChatGPT, Perplexity, or Llama knows it's different.
I don't want to anthropomorphize, but I say 'you' when asking a question of ChatGPT. I can't get around it. I don't say 'it.' I say 'you,' and I get an answer back.
But I don't think for a second that it's a human. It's a very clever system which helps me out with brainstorming, editing, summarizing large volumes of information, and other things. So we have to figure out more of what we would call in the commercial software world UI and UX, user interface and user experience, and have lawyers understand it.
If you design it this way, here are some of the legal implications and ramifications of that design. Ooh, never thought about that before. Not that people haven't been studying this idea of human-machine interaction; of course they have, for a long time. But it's going to be different in a world of systems that are rapidly evolving in ways I don't think we're fully anticipating right now.
So it's going to be a mix of that. And that's why the lawyers are so important. Where can you turn over decision making entirely to a machine?
And at the end of that, if something goes wrong, what is the process for accountability? Not that it's not there, but it might be a little bit different than we're used to in the past. To your point about a breakthrough or a Wright Brothers moment in this: one, how do you react to the DeepSeek news?
Two, how does that fit into, are we in an AI race? My view is we are, with China. How does that impact whether or not we're running with scissors here?
And more broadly, it's interesting. We saw the Biden EO on AI, safety, security, et cetera, come down, but not the National Security Memorandum, to my knowledge. No, it is not. So it's a lot to throw in one big pot. Where do you see this heading?
Yeah, there is a lot there. So let me start with DeepSeek. I answer the question on DeepSeek in two ways: for the AI research community and then for the military.
In the AI research community, they call it the equivalent of an earthquake of 6.0 magnitude. It's not catastrophic by any stretch of the imagination, but it is a big, big wake-up call if you are at the epicenter, because they showed a very different way to do what had been costing hundreds of billions of dollars, massive amounts of compute, and internet-scale data.
Now, they relied on all that to do what they did, but they did it in a much cheaper, faster, more efficient way. That said to our research community, maybe we're not just going to be about scaling in the future. And maybe when we talk about $500 billion being spent on this thing called Stargate, we should be talking about the opportunity cost of that $500 billion.
Maybe you should be buying something different. So it hasn't changed the way most of the research community is coming at this, but it was a wake-up call.
No question about it. For the military, I don't see anything different because it isn't about diffusion of the technology. It's just a different kind of research showing here's what you can do with AI. Until a military unit takes that technology, comes up with a new operating concept, and finds something actually concrete to do with it, it's just a good idea.
That's all it is. And then on the second part of it, which I spend a lot of time thinking about because I'm in these dialogues that include representatives from China: those in this room who may have an international relations background know this thing called the security dilemma, escalatory spirals. I do feel like, yes, it is a race. I say it's a very aggressive competition, but I'm closer and closer to admitting it's a race.
I won't say arms race. I don't like that, because it's a dual-nature technology. It's not just military; it's economic power and innovation as a form of hard and soft power and so on.
But it is some kind of race. And because we have this mutual suspicion and fear that the other country is going to come up with this deus ex machina, this AI black swan that will win the war, a technological decapitation strike with no negative consequences, no legal consequences, everybody's happy. Of course it's unrealistic, but that's causing each country to go faster and faster in its development. You see it every day in the stories that are coming out. China, China, China, China.
What I don't want to see is an AI race to the bottom, which I think is a potential: somebody's going to get there first, but it's going to be untested, potentially unsafe, and potentially unlawful.
So don't let that happen. Be what General McKenzie said of a JAG: say, boss, stop, we shouldn't be doing this. I think that is a growing risk right now, that we're going to be in this escalatory spiral and going down a path when we shouldn't. That's why we have these conversations at the Track 2 level.
People are like, why are you talking to the Chinese? You're going to help them. No, it's in both countries' national interest to get this right, or AI will be shut off when there's a catastrophe.
Your national self-interest, our national self-interest. The importance of test and evaluation. Let's figure out how to actually get this right.
And lawyers have got to be part of that conversation. We do have lawyers that are involved in the track two dialogue. So that to me is what we're talking about with this competition. There's no question right now.
You can look at the sanctions, the CHIPS and Science Act. This is industrial policy for the first time in the United States in about 25 years, since it became a four-letter word, even though it's two words. I would say industrial policy is back
with a vengeance today. And I'm a supporter of that because China has an industrial policy. Russia has an industrial policy. But there is this real risk right now of going so fast trying to beat the other. To what?
What's the end game here? I don't know what the end game is. It's just a technology that could enable. So my last question, before we open it up to questions, is...
So within the context of that race, color me skeptical, but my experience is that China is very good at the do-as-I-say-not-as-I-do approach to things. They talk law when it suits them, but they don't actually follow it. So I don't have great faith that they're going to incorporate lawyers and these constraining principles into their development efforts.
Notwithstanding that, I agree with you. I've said this in a number of contexts lately. When you engage in a race to the bottom, it leads to only one place, the bottom.
There's no off-ramp for that. But do you feel like it would be an inhibitor? This is kind of the tension in the discussion now.
Well, the more we introduce lawyers and law and ethics and this and that into our development, the thinking goes, the more it's going to hinder innovation. Yeah, it's a tension, but it's good that it's a healthy tension. One of the things I'm very proud of doesn't get any publicity because it's not a weapon system.
The fact that we stood up a responsible AI division in the Joint AI Center is something I'm very proud of because we came up with AI ethics principles. Somebody else had done all the groundwork, but we presented them, changed them a little bit, and they exist today. You can find them online.
We're the only country you can find that puts these things out unclassified: here are our ethics principles and an implementation plan to go with them, which is even harder.
So to me, I understand the pressure to move fast. I always understand. I'm an operator.
I'm a practitioner. I'm not a theorist. I'm always trying to go faster and faster.
But there's a way to go fast and do it right. And I'm always interested in doing it right. And responsible AI, not magic, it's just here are the things you need to think about. Responsible, equitable, reliable, traceable, and governable. I've never forgotten them because they're that important to me.
I think that's what we should be focusing on. That's why we talked to other countries and said, look, you should be doing the same thing. And the last thing I'll say is I put Russia and China in two different places.
Russia will violate everything at the drop of a hat. I'm not convinced on China yet. I'm skeptical.
In the intel vernacular, I have medium confidence that they will do more. They actually have more going on in AI governance than the United States does. It's very interesting to me.
Now, the PLA: you talk about the black box, that's the black box. I'll leave it at that. We'll run over the schedule a bit because this is so important. Jason.
Thank you, John. What level of collaboration is there with allies on the military use of AI? Excellent question.
I spend a lot of my time on that. Anecdotes are always good. So when I was doing Project Maven, I also, in my other capacity, the reason I was brought into the Pentagon, had meetings with the Five Eyes partners at my level, the three-star equivalent.
And one day I'm bragging to them. We just started up Project Maven. I go, look at this cool stuff. There was an air vice marshal from the UK who was quite humorous in his approach to things.
He paused and said, Jack, what the F are you doing to us? We've seen this every time. The U.S. races ahead with the technology.
You turn around and say, where are the allies and partners? From that point on, Project Maven was a Five Eyes program. I learned from that. So when I got to the JAIC, we stood up something called the Partnership for Defense, which was 13 countries. It may still be 13 countries; it's close to it. Because we have to do a lot more on this idea of interoperability with allies and partners. We do not want a future where one country is at this place and another country hasn't even seen AI before.
And in NATO, that's what you have, from an Albania to a UK and everything else in between. So we spent a lot of time on that. Not surprisingly, the hurdles are often policy hurdles, less so technical hurdles. You can share this stuff together.
And by the way, our hubris catches up with us. You don't think that UK, Australia has great AI that they could also share with us? Of course they do. And what do they have that we don't have?
UK- and Australian-collected data. We want to make our systems as good as possible? Get as much data as you can. That's what AI is all about.
So I'm encouraged by that, but there's a lot more that needs to be done. Emily. I'll try to be loud so you can hear me. Oh, thank you. I just want to make sure.
I don't think you'll have an issue hearing me. I just want to make sure people can hear my question. Okay, so you talked a little bit about using AI to maybe give us solutions where human bias might come into play.
What are your thoughts on the machine bias that could come from the people who are doing the actual coding, and how that application will work? Well, I'm going to give you a contrary answer. There is no such thing as machine bias. There is human bias that is then translated into the machine, and the outputs can be skewed.
What's the distinction between skewed and biased? I don't know if I have one; I'll find one. Because it anthropomorphizes a machine to say the machine is biased.
I think what we have to acknowledge is every single piece of data that goes into a machine is skewed in some way. Anybody today thinks that they're getting an honest, fully comprehensive answer from any of these large language models? No. They're all doing some form of reinforcement learning with human feedback where they're still sort of correcting the answers to the test based on the company's guidance.
So there's always going to be bias built in on the human side. You will have skewed data. What we have to spend a lot of time talking about, and this is where lawyers come in here, is on the output side.
What do we need to be considering? We've seen bad examples of this: predictive policing, hiring practices. If everybody in the resume pool is a white 45-year-old male, guess what's going to be recommended in future hiring? Not a surprise.
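To make that concrete, here is a deliberately tiny, hypothetical sketch of how a skewed training pool produces skewed recommendations. The data, the single demographic feature, and the scoring rule are invented purely for illustration; no real system works on a three-record table like this.

```python
from collections import Counter

# Hypothetical historical hiring data: every past hire shares one demographic profile.
past_hires = [
    {"demographic": "45yo_male", "hired": True},
    {"demographic": "45yo_male", "hired": True},
    {"demographic": "45yo_male", "hired": True},
]

# A naive "model" that just learns base rates from the skewed pool.
hire_counts_by_group = Counter(h["demographic"] for h in past_hires if h["hired"])
total_hires = len(past_hires)

def recommend(candidate_demographic: str) -> float:
    """Score a candidate by how often their group appears among past hires.
    With a skewed pool, anyone outside the majority group scores zero."""
    return hire_counts_by_group.get(candidate_demographic, 0) / total_hires

print(recommend("45yo_male"))    # 1.0 -- always recommended
print(recommend("30yo_female"))  # 0.0 -- never recommended, purely from skewed data
```

The skew is entirely in the human-supplied data; the code just reproduces it, which is the point being made about the output side.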
This needs a lot of work. The good news is synthetic data, at least on the large language model side, is getting better and better and beginning to bring in more sources of data, more quality of data. But we should never, ever, ever expect that the outputs will not have some sort of, I'll use the word bias, or skewed in some direction.
We have to figure out what we do with that. But if you don't think every human in this room has their own biases and skewed way of looking at the world, it's a matter of applying it at the right place. The human judgment applied to the outputs and saying, why do I have confidence in this output?
What you will never have, or should never expect, is full confidence in those systems when they're deployed for the first time in combat versus in training. I've got examples of that from Project Maven as well. Let's go all the way to the back there. Good morning, sir.
Thank you for your talk. My question is, what things do you suggest that we do or specific sources to consult to gain a better understanding of AI at a fundamental level, especially for military officers? There's so much out there right now. I started with, my journey was honestly three things.
It was Deep Thinking by Garry Kasparov, about his loss to Deep Blue in chess. Visceral for him. Fascinating book.
Things are so much different now than they were then. The second one is the movie AlphaGo. A movie about the game of Go? Is that compelling?
It was to me at the time. And then a New York Times Magazine article on Google Translating. Google Translate, throwing out all the old ways of doing business and then deciding to go down the neural network as opposed to the symbolic way.
Symbolic is just rules-based, expert-based systems, whatever. Those got me started on the journey. After that, there's a hundred different books out there.
I spend about two hours a day, to be absolutely honest with you, catching up on Substack posts on AI. There's a lot of outstanding writing out there. Some of it is too technically detailed for me, but I'll read it.
I just know I'm not going to understand everything, because it's changing that fast. I've got to keep up. Every single day there are Substack posts,
from the skeptics like Gary Marcus, for those of you who know AI, he's like, this shit ain't what you think it is, all the way to, here's down in the weeds how you do prompt engineering, which for ChatGPT is an important thing.
So that's maybe not a fully satisfactory answer, but there's a lot out there. For the law, Corin Stone, for those who know Corin, wrote what I think is still the gold standard for the intel side of AI and national security law. Jamie Baker's written a book on AI and national security law.
Charlie Dunlap has written on AI. There are a lot of good things that are being published now just on the law piece of it. So it's a fertile ground to get smarter and smarter on. But I think it's still ripe for a lot more to come.
I'll just note that Corin Stone wrote that paper while she was part of the Tech, Law & Security Program at the Washington College of Law. It's excellent. She does a whole risk framework. I plagiarized from her ruthlessly. We have time for two more quick questions.
This lady back here. No, you. I'm sorry. I didn't mean to say you like that.
Why don't we go to another question and we'll get it fixed. All the way on the side there. Hey sir, thank you for being here.
I was actually one of your volunteer monkeys, Navy monkeys, back in the day, training the CAPTCHA images. I think it was about 2017 at the time.
I was also working with Palantir and their machine learning capabilities. Funny enough, we probably could have saved about 20 minutes if we had not had Palantir of the previous speaker. It was an absolutely incredible tool.
But my question was a little bit of pushback on the black box concerns and how it doesn't seem like you're really worried about them. The military has discrete data sets it can use, particularly raw SIGINT data and raw imagery data, that it can sort of right-size and keep the bias to an absolute minimum. When you're having a conversation about those data sets versus what we have in civilian practice, how do you make those adjustments?
Yeah, I'll answer this quickly. I agree completely with you. When I'm talking about that, I'm generally talking about computer vision and natural language processing. The game's changed a little bit with large language models and how they do their vector stores and all the other crazy mathematics that go on behind them. I can ask ChatGPT the exact same question 10 times.
I will get variations on 10 answers. It will not be the same. That's interesting. That is a little bit of a black box. Not worried about it yet because I don't have that tied into an autonomous weapon system.
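A rough illustration of why the same prompt can come back differently each time: these models sample from a probability distribution over possible next words rather than returning one fixed answer. The toy word scores and temperature value below are invented for illustration and are not how any particular model is configured.

```python
import math
import random

# Toy next-token distribution: raw scores (logits) for a few candidate words.
logits = {"tanks": 2.1, "vehicles": 1.9, "trucks": 1.7, "convoys": 1.2}

def sample_next_token(logits: dict, temperature: float = 0.8) -> str:
    """Softmax over the logits with a temperature, then sample.
    A nonzero temperature means repeated calls can return different tokens."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    max_s = max(scaled.values())
    weights = {tok: math.exp(s - max_s) for tok, s in scaled.items()}  # numerically stable softmax
    total = sum(weights.values())
    r = random.random() * total
    cumulative = 0.0
    for tok, w in weights.items():
        cumulative += w
        if r <= cumulative:
            return tok
    return tok  # fallback for floating-point edge cases

# Ask the "same question" ten times; the sampled continuation varies from run to run.
print([sample_next_token(logits) for _ in range(10)])
```

That sampling step, multiplied across every word of a long answer, is why ten identical prompts yield ten variations.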
So it's a fair point. I take the point that as we get to more advanced systems, the black box becomes more of an issue, which is why I go back to the importance of test and evaluation. That's where you figure out what you put in, what you get out, and what the difference is between those. As you know from your experience, once you field them, they're still going to act a little differently than you expected, and they have to be updated or they're going to drift and do those other things.
So, yeah, I may have been a little bit too stark in my representation. We have to think more about the black box. I'm just not worried about it in current military systems. Yeah. Thank you.
My name is Beth Berman, and I live here in Durham, and I have a very basic question to ask. We hear about the large language model, and... I had the idea that, oh, they take all these newspapers and books in print and things in the public domain and they feed it into the computers. Is it all in English?
How is this translated? I'm a retired translator, so I'm just wondering, is it all in English? It's actually, it's not a basic question.
It's actually a great question. I'm going to put this in a military context and say one of the limitations I know we have right now is that most of the U.S. large language models have been trained on what's publicly available in different languages, but it's not as much as you think it should be, especially in Mandarin, which is a character-based language and a little bit harder to do. Now, when you bring in the military side of it, you're not getting access to that, because it's hidden.
It's classified. It doesn't exist. I worry especially about what we would call psychological operations or information operations, where we're trying to come up with ways of influencing foreign populations, not our own, we don't do that, and where the machines are missing the cultural context that humans have, because they don't have that yet. I can bring in all sorts of data and ask, what do you think would happen in this scenario? And it'll give me an answer, but it's missing a few centuries' worth of writing in traditional Mandarin, never mind simplified Mandarin. This is what Taiwan's struggling with, because they use traditional Mandarin, which is even harder.
So a lot of work is going into this, and I think you're seeing models come out of places like Taiwan and China that have solved this for themselves. In the U.S., we have to come up with a solution as well, or we're going to have big gaps that we don't even know are gaps, because the models don't have all that cultural context going for us. It's a great question.
We are out of time for this portion of our AI discussion. General Shanahan, will you stay around? I'm here for the next panel, and I hope they don't say I was wrong about everything.
And here's the scheme of attack. After this, we are going to do the CLE presentation, because one of our speakers has to catch an airplane. We will end at one o'clock, and the last presentation is going to be from me about the role of JAGs and ROE and the TJAGs and all that sort of thing. I know everybody will want to stay to hear me. So we're going to have a little bit of switch around, but we'll end at one, and we're going to have a full program.
So thank you very much.