When I wrote about Anduril in 2018, the company explicitly said it wouldn’t build lethal weapons. Now you are building fighter planes, underwater drones, and other deadly weapons of war. Why did you make that pivot?
We responded to what we saw, not only inside our military but also across the world. We want to be aligned with delivering the best capabilities in the most ethical way possible. The alternative is that someone’s going to do that anyway, and we believe that we can do that best.
Were there soul-searching discussions before you crossed that line?
There’s constant internal discussion about what to build and whether there’s ethical alignment with our mission. I don’t think there’s a whole lot of utility in trying to set our own line when the government is actually setting that line. They’ve given clear guidance on what the military is going to do. We’re following the lead of our democratically elected government, which tells us its priorities and how we can be helpful.
What’s the proper role for autonomous AI in warfare?
Luckily, the US Department of Defense has done more work on this than maybe any other organization in the world, except the big generative-AI foundation-model companies. There are clear rules of engagement that keep humans in the loop. You want to take humans out of the dull, dirty, and dangerous jobs and make decisionmaking more efficient, while always keeping a person accountable at the end of the day. That’s the goal of all the policy that’s been put in place, regardless of how autonomy develops over the next five or 10 years.
In a conflict, there might be a temptation not to wait for humans to weigh in when targets present themselves in an instant, especially with weapons like your autonomous fighter planes.
The autonomous program we’re working on for the Fury aircraft [a fighter used by the US Navy and Marine Corps] is called CCA, Collaborative Combat Aircraft. There is a man in a plane controlling and commanding robot fighter planes and deciding what they do.
What about the drones you’re building that hang around in the air until they see a target and then pounce?
There’s a classification of drones called loitering munitions, which are aircraft that search for targets and then have the ability to go kinetic on those targets, kind of as a kamikaze. Again, you have a human in the loop who’s accountable.
War is messy. Isn’t there a genuine concern that those principles would be set aside once hostilities begin?
Humans fight wars, and humans are flawed. We make mistakes. Even back when we were standing in lines and shooting each other with muskets, there was a process to adjudicate violations of the rules of engagement. I think that will persist. Do I think there will never be a case where some autonomous system is asked to do something that feels like a gross violation of ethical principles? Of course not, because humans are still in charge. Do I believe it is more ethical to prosecute a dangerous, messy conflict with robots that are more precise, more discriminating, and less likely to lead to escalation? Yes. Deciding not to do this is to continue to put people in harm’s way.
I’m sure you’re familiar with Eisenhower’s final message about the dangers of a military-industrial complex that serves its own needs. Does that warning affect how you operate?
That’s one of the all-time great speeches—I read it at least once a year. Eisenhower was articulating a military-industrial complex where the government is not that different from contractors like Lockheed Martin, Boeing, Northrop Grumman, and General Dynamics. There’s a revolving door in the senior levels of these companies, and they become power centers because of that interconnectedness. Anduril has been pushing a more commercial approach that doesn’t rely on that closely tied incentive structure. We say, “Let’s build things at the lowest cost, utilizing off-the-shelf technologies, and do it in a way where we are taking on a lot of the risk.” That avoids some of the tension Eisenhower identified.