Anduril Founder Palmer Luckey Wants To Challenge the Masters of War

Published May 16, 2024, 8:00 AM

For years, a Hawaiian-shirt and flip-flop wearing, smack-talking entrepreneur has been promising to disrupt the US defense industry. Now, Palmer Luckey, founder of autonomous weapons startup Anduril Industries, isn’t just talking about it, he’s doing it. Emily Chang and Palmer Luckey sit down to discuss building Anduril, his relationship with Silicon Valley, how autonomous weapons are changing the battlefield in Ukraine, and culture clashes between Big Tech and the defense industry. 

I'm Emily Chang, and this is The Circuit. If you told me that one day I'd be driving a warship in the Pacific with Palmer Luckey, I probably wouldn't believe you. Are you sure I'm not gonna crash the boat?

Yeah, you're gonna be fine.

For the uninitiated, Luckey is the flip-flop-wearing, Hawaiian-shirt-sporting creator of the Oculus VR headset. After selling his company to Meta and then getting ousted from Silicon Valley over issues we'll get into later, Luckey switched gears to the defense industry.

Some of the United States' technology is very bad. Some of it is actually very, very good, but it's also extremely expensive and not necessarily adapted to the types of conflicts we're going to see in the future. The United States has a lot of investment in legacy weapons systems that are not necessarily having China quaking in their boots.

Modernizing the battlefield is key here, because the stakes have never been higher: Ukraine, tensions between China and Taiwan. Luckey is hoping his company Anduril's smart bets on modern weapons infused with AI can take a slice of the Pentagon's massive budget and play a critical role as the paradigm of war evolves. Joining me on this edition of The Circuit: Anduril Industries founder Palmer Luckey. What are you building here? What's the mission?

Ultimately, Anduril is trying to build radical new defense technologies that allow the United States to better deter violence against us, to stop conflicts that would drag us in, drag our partners in, drag our allies in. We're building tools that allow our partners and allies around the world to make themselves a lot more prickly so that people don't want to step on them.

Paint a picture for me just how bad is the technology that the US government is using right now.

Well, some of the United States' technology is very bad. Some of it is actually very, very good, but it's also extremely expensive and not necessarily adapted to the types of conflicts we're going to see in the future. The United States has a lot of investment in legacy weapons systems that are not necessarily having China quaking in their boots.

So historically military technology, weapons, vehicles super expensive. Now that's changing.

If you look at this in terms of the history of the United States, we've always had our most innovative companies working hand in hand with the DoD, and we've actually won a lot of wars by being better than the other guy at cheaply manufacturing large numbers of very good weapons systems. It's actually quite recent that it's gotten so out of whack. It's very recent, as in within living memory, that these weapons systems have become extraordinarily expensive for what they are. And that's really one of the problems Anduril is trying to solve. We're trying to use not just the right product decisions, but also modern manufacturing techniques, and to leverage a lot of the technology that's come out of Silicon Valley over the last few decades, to get back to building cost-effective and combat-effective weapons that are going to protect the United States.

So explain the state of play on the battlefield and where Anduril fits in.

Well, Anduril plays across basically every domain. We have things that are underwater, on the surface of the water, in the air, in space, on land. Our goal is not to go really deep on one particular area and be, you know, the AI torpedo company or the AI small quadcopter company. It's to become one of the major defense primes, one of the ones that is large enough to have an impact not just within their own company, but externally, to change the way that the government does procurement. I want to be in a position where I can have that impact, make those changes to the way the government develops and buys weapons.

How'd you get interested in defense technology?

I've been interested in it for a really long time, probably my whole life. I briefly was able to work as a lab technician on an Army project called BraveMind, which was treating veterans with PTSD using virtual reality exposure therapy. And that was actually before I started Oculus. But even as I ran my company Oculus, I kept in touch with a lot of friends in the defense industry, and what I heard over and over again is that it was broken. The incentives were wrong. They were being punished for doing the right thing and rewarded for doing the wrong thing. They made more money through these cost-plus contracts when they took longer than they were supposed to. They made more money when they were over budget. And that really got me worried.

You're trying to run a defense tech company like a startup. How does that compare to, like, Lockheed Martin and Boeing, and how do you get Washington to accept that?

Anduril thinks of ourselves as a defense product company more than a defense contractor. What that means is that we build products using our own money, and then we sell them to the customer. This is not a radical business model. In most industries, it's actually quite common, but in the defense space it's not the way that things are done. Most major weapons procurement is done on a cost-plus basis. Most new R and D is done on a cost-plus basis, meaning the contractor gets paid for their time, their materials, and then a fixed percentage of profit on top. Of course, that incentivizes you to come up with expensive solutions, to build them using expensive parts, and to drag it out as long as you can. Not necessarily consciously; people aren't waking up and saying, ah, another day, another chance to screw the US taxpayer. But the system naturally rewards people who run programs in these slower ways and these more expensive ways. Anduril, we're the opposite. Because we're a defense products company that makes things that work and sells them, rather than getting paid to do work, it means that when we do something faster, it helps our profit margins. When I spend less time developing a product, I'm going to make more money. It means that when I can reuse technological building blocks that we've built for other programs, sometimes worth hundreds of millions or billions of dollars in investment, I'm able to make more money, rather than losing out on money that otherwise would have been paid to me to redevelop those capabilities from scratch for a given program as a bespoke building block. And those are really all the incentives you need to be a fast-moving, modern-tech-company-style defense company.
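The incentive gap Luckey describes can be sketched in a few lines of arithmetic. This is an illustration only; the dollar figures and the ten percent fee below are invented, not from the interview:

```python
# Toy comparison of the two contracting models described above.
# All numbers are hypothetical.

def cost_plus_profit(hours, hourly_rate, materials, fee_rate=0.10):
    """Cost-plus: the government reimburses time and materials,
    then pays a fixed percentage fee on top. Profit scales with
    cost, so an overrun *increases* the contractor's profit."""
    cost = hours * hourly_rate + materials
    return cost * fee_rate

def fixed_price_profit(sale_price, hours, hourly_rate, materials):
    """Fixed-price product sale: the seller keeps whatever is left
    after costs. Profit grows when development is faster or cheaper."""
    cost = hours * hourly_rate + materials
    return sale_price - cost

# A program that slips from 1,000 to 2,000 hours of engineering work:
print(cost_plus_profit(1_000, 200, 50_000))              # 25000.0
print(cost_plus_profit(2_000, 200, 50_000))              # 45000.0
print(fixed_price_profit(300_000, 1_000, 200, 50_000))   # 50000
print(fixed_price_profit(300_000, 2_000, 200, 50_000))   # -150000
```

Under cost-plus, doubling the hours raises profit from 25,000 to 45,000; under a fixed-price sale, the same overrun turns a 50,000 profit into a 150,000 loss, which is the reversal of incentives the interview is pointing at.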

You're not the typical Silicon Valley founder wearing a black turtleneck, nor are you walking around and talking like a buttoned-up defense contractor CEO. Like, how does that play?

Well, I don't know. I am a little bit of a caricature, but it's because I just haven't changed. I'm wearing the same stuff I was wearing a decade ago, and I'm doing the same stuff I was doing a decade ago. I'm blessed with enough success to do whatever I want, and I think that's probably what's behind a lot of the more esoteric figures in Silicon Valley. They're people who have done well enough for themselves that they can afford to do whatever they want and have whatever image they want and don't necessarily have to adapt.

Do you wear a suit when you go to Washington?

Absolutely, you do.

Okay. So, like, there's a line.

So funerals, weddings, and Washington, D.C. They all get suits. Okay.

How often are you in DC? All the time? And what are you doing? Is it relationship building? Is it trying to win contracts and talking to lawmakers?

It's all of the above. A lot of the core of what we do is kind of centralized in DC, so not just the lawmakers who have the power of the purse, who fund new R and D, who decide how things are going to be procured, but also the decision makers on individual product sides. It's very different from my last job. At my last job at Oculus, we were trying to sell virtual reality headsets to millions of people. We had to convince them one by one that VR was something they needed in their lives. And the military is a little bit different. For a lot of these major programs, the number of people that you need to convince in order to make a major sale, right, is actually very, very small. It's a small number of people.

You also overcame some serious obstacles to get Anduril to this point, right? Like, SpaceX and Palantir. How did they pave the way?

First of all, they proved that it was possible for a new company to win significant contracts with the government and to use the competitive process to kind of force themselves into niches that they normally would not have been able to get into. It wasn't an easy thing. I mean, they were literally suing their customers in court, saying, you have to take our proposals seriously. You have to give them a fair shake, and you have to compare them against what other people are doing in a good way. You can't pay other people billions of dollars to make what we literally already have made for you to buy as an off-the-shelf product. That was a really big deal. Palantir and SpaceX also really paved the way for Anduril in that, in the last thirty-five years, they were the only defense unicorns. They were the only new defense companies that were worth over a billion dollars that were really at any kind of scale. I don't want to sound like a rich guy; a billion dollars is not actually that much money. It's a lot for a person, but it's not a lot for a company to be worth. You only need low hundreds of millions in revenue to be worth that. You're talking at that point about a fraction of a fraction of a tiny percentage of the Department of Defense budget. The other thing that Palantir and SpaceX proved is that you could definitely start a defense company if you bring in really significant resources. It's no coincidence that the only two companies to break through in the last thirty-five years, since the winding down of the Cold War, really, were both founded by billionaires. It's unfortunate, but it reflects the reality that we've created, where this muscle we used to have as a country, of turning small, innovative defense companies into large-scale providers of weapons, we lost it. And the only way to bypass that was to already have made billions of dollars somewhere else. But as a country, we need to do better.

Right, And you've raised over two billion dollars.

I don't know if I can say the exact number, but a lot of money. Raising money was not easy in the beginning. I mean, investing in weapons was actually explicitly against the rules in partner agreements for most venture capital firms, even ones that are quite forward-leaning. Like my favorite example, Founders Fund. Not only were they the first institutional investor to write a check into Oculus, my first company, but even as a forward-looking, contrarian, sometimes very controversial venture capital fund, they still had terms for their fund saying they could not invest in weapons companies. And they ended up changing that and then investing in Anduril. It was really easy for venture capital firms in twenty seventeen, when we started Anduril, to say, conflict is over. We're living at the end of history. This idea of putting our best brains towards things that can kill people is a waste of talent and a waste of money, and unethical, even. And that's not what anyone's saying anymore.

So what does it take to win?

What does it take to win? Well, I wish that it was just building the best thing, because that's what we want to believe, that we live in a pure meritocracy, a technocracy, where you build it and they will come. But it doesn't actually work that way. We have to be very conscious of what the DoD's problems are, what they believe their problems are, and what Congress is willing to fund. It's a much more complex set of stakeholders than, let's say, selling virtual reality headsets. For example, if Congress doesn't believe that something is a strategic priority for the nation, they're not likely to fund it. We also have to work on things that the Pentagon cares about, because we don't have the luxury of going through a ten-year or twenty-year development cycle and just getting paid to develop something for years and years on end. We have to build products that we can immediately sell. So if it's not a priority for the Pentagon, an urgent strategic priority, it's not going to go into production and purchase fast enough for us. And finally, we want to work on things that other people are doing a bad job of. Anduril does not want to be in the business, generally, of building things that have a lot of great, competent, affordable competition. If someone is building a great capability, whether it's a robotic system or a weapons system or a sensor system or a software system, if they're doing a good job of solving some problem for the DoD, I don't want to compete with them. I don't want to burn up my venture capital dollars burning their company down. I want those guys to succeed. I want us to have a flourishing ecosystem. I probably want to work with them and partner with them. So typically, where we are building things, it's things where we think we can do a lot better and a lot cheaper than the existing incumbents.

So let's talk about where the money comes from. Anduril has raised billions of dollars. Yep. How much of it is from VCs? Are you using your own money? How much is coming from the Pentagon?

So yeah, we have a lot of money that has come from venture capital firms and also other investors. You know, we're not only raising from venture capital firms. There are family offices that have invested, strategic investors who have invested, but certainly a lot of venture capital. I'm a big fan of venture capital as a concept. Allowing companies to grow at an inorganic rate is a very powerful thing, and I learned that during my Oculus days. I've seen it happen at the companies that many of my friends have started. I'm a big fan of it in general. I'd say that we get a little bit of money for development from time to time from the DoD, but what we really look for from them is demand signal. We say, listen, you don't have to pay us to develop technology, but we do need you to tell us if you would buy something if we build it. And sometimes it's an obvious, oh yeah, we would buy that. Sometimes it's, I don't understand why I would buy this; you have to explain to me why this makes any sense. And then it's on us to explain why the thing we're going to build is actually the better, cheaper, faster way to solve their problem than the thing they're already buying. And if we can get that demand signal, then we can make the decision to invest our own money, so the money of our investors; some of it is my own money. When Anduril started, we started in a warehouse that I had already bought for the purposes of doing this type of thing, so it allowed us to move faster than a company that would have had to start day one and then start raising money before they're able to even get started.

Is Anduril profitable, or are you making products?

As a whole, we are not profitable. Now, we have elements of the business that are very profitable. Our successful products, our mature products, the ones that we've been doing for the longest, have followed a very repeatable and reliable path: from investment, losing money, through to selling to the customer, making money, paying back the initial investment, and now generating cash that we use in the rest of the business to fund new research and development. All of the money that we make, we put back into new research and development. That's another big difference between us and a lot of other companies that are kind of at steady state, not really growing. We could just stop working on all the new stuff, kill all of our research and development, and be a profitable company, but that's not what we set out to do. We set out to be a major defense prime, and to do that, we're going to have to keep reinvesting the profits. We're going to have to keep putting more money into these new things, and I'm confident that on some timescale all, or at least most, of those bets are going to turn into profitable business lines.

So how do you get there? You want to go public? Your future peers, potentially, are public companies. How do you get ready for that?

The way that you get ready for that is building a company that is the right shape to be a publicly traded company. And that's a different shape than you want to be when you're in the early markets, in the kind of high-growth, high-risk venture capital world. You need to be a company that has not just a history of growth, but a clear path for future growth. You want to be a company that shows that you can reliably turn investment on your part into a profitable product line, and you want to show that the company as a whole is profitable. These things sound trite, like, you gotta make money, you gotta be good. But I mean, that's really what it boils down to, and a lot of companies don't get there. There's a lot of VC-backed companies that don't meet those requirements. They don't make money, and they're not growing. There's a lot of companies that are growing but not making money. You have to do both to be successful as a public company.

Well, as you said, the numbers are big. Microsoft got a twenty-two-billion-dollar contract to supply the US military with HoloLens mixed reality headsets. As the creator of Oculus, what do you think of that?

There have been so many programs over the years to put a radio and a head-mounted display onto a soldier and do really good stuff with it. It's a science fiction staple for a reason: the ability to have a set of glasses that tells you where the good guys are, shows you where the bad guys are, shows you where you're safe and where you're in danger; that allows you to have vision so you can see through buildings, by using tracks that are fed by other sensors in the air or on other soldiers, making it so that anything that any sensor can see is something that you have superhuman perception of. It's obviously the future of kind of close combat. So I'm a huge Microsoft fan. I actually had a launch party for Windows Seven at my house. That's right, me and my friends, we all got together at my house and we installed Windows Seven together.

With your lightsabers?

I mean, we have lightsabers, but we didn't bring them out for the party specifically. And, making it even nerdier, we were already running the release candidate software, so we'd actually already been using Windows Seven for the better part of a year at that point. But this was officially activating the final release. And of course, the IVAS program is kind of born of the investments they made in HoloLens. It's gone through some hiccups here and there, but IVAS is actually one of the coolest programs going on in the DoD, in my opinion. It's one of the ones that is truly looking at what the future should be, rather than just building an iterative upgrade for a legacy system. For the Army to say, we are going to put tens of billions of dollars of our resources towards something that is a radically new capability, you know, like transforming soldiers from people who see with their eyes to people who see with technology, I think it's as big as inventing the aircraft carrier. It's as big as inventing nuclear submarines. It's a really big bet that the future is going to be different. The DoD is full of programs that are not that. This one excites me.

Some tech employees have pushed back on working with the US government and the US military. Do you see where they're coming from?

So there's been Microsoft employees who are unhappy about this, unhappy about IVAS and HoloLens. There's people at Google who have been unhappy about it. There's people even in Amazon who have been unhappy about it, in terms of the cloud services that Amazon's providing for the Department of Defense. I think that if you're concerned about ethics in defense, then you should be more involved, not less. If these are the people who think that their opinions on how wars should be fought are so strong, they shouldn't just be on the periphery of these programs. They should be fighting to get transferred to the core of these programs, where they can make a difference. Fundamentally, I think it's an emotional thing. They came to a company to work on consumer tech. They weren't told that their work would potentially be used for violence, and they don't like that. And I empathize with that, because to them it feels like a little bit of a bait and switch: I came to Google to work on advertising, and instead my work is going to fight US adversaries. At Anduril, that's why we're so clear with everybody about exactly what we're doing.

Autonomous weapons powered by AI are now on the battlefield more than ever. How advanced are they today?

Not as advanced as they should be. Autonomous weapons, not necessarily using modern artificial intelligence techniques, though, have been deployed for, depending on the way you look at it, decades or even longer by the United States. There's a long history of autonomous weapons on the battlefield. So it's always interesting when people say, oh man, Pandora's box is being opened. You know, what are the ethical concerns? Has the DoD thought about this? The reality is the United States government has more thinking on this and more experience with this than any nation in the world, certainly than any think tank in the world that's just now looking at these issues for the first time. You get a lot of pushback from people who are, like, in the United Nations, saying, well, can't you agree that AI should never pull the trigger, that an AI should never be making the decision about when to kill and when to not? And I totally reject that premise. It's a soundbite that sounds good in the moment, but then you have to answer the question: what's the moral case for making, for example, an autonomous land mine that is not allowed to differentiate between a school bus of children and a Russian tank? I think it should be allowed to make that distinction. People say, well, what if it's not perfect? Well, that's a big problem. We need to make it so that it is perfect. But in the meanwhile, I'd rather get it right ninety-nine percent of the time than zero percent of the time with a dumb system that isn't able to act in that way. That's why I'm working in this space. If other people feel strongly about the ethics of weapons, I encourage them to not sit on the sidelines and critique. Get in the arena and work on them yourself, and show that you can make even more precise things that are even less damaging to people.

AI means a lot of things. Silicon Valley is in a frenzy right now about LLMs and generative AI, but that's really different than the artificial intelligence you're using.

Right. Anduril is at its core an AI company. We have been since the beginning. The first product we started building, and the core product that applies to all of our other hardware products, is our software platform, Lattice, an AI that fuses data, moves data, gets all the right information to the right people and the right robots at the right time, detecting and classifying targets across the battle space, across every domain.

Lattice is kind of like the control center, right? Or the bridge between the hardware and AI?

It's kind of like a distributed brain, and also the control interface, and also the processor, and it's a whole bunch of different things. It's everything you need to deploy autonomous weapons at scale. It's doing target identification, it's doing communication between people, communication between robots, deconfliction of good guys with bad guys to make sure you're going after the right things. It's really an all-encompassing solution that powers all of our products. Now, when we started as an AI company, it was at a time in early twenty seventeen when AI was not that hot. It was this weird thing that some of the nerds were paying attention to, but there was a lot of naysaying, a lot of people who believed it was going nowhere. And we invested in it because we believed that it was really, really important for the future of the DoD. We believed we would not be able to compete as a nation unless we had very capable artificial intelligence applied to weapons systems. Now, the AI that's going through this explosion is basically large language models as kind of the dominant hype. They're very different from what we're doing: building an AI that is able to, for example, look at all the weapons systems that you have in an area connected to a mesh network, identify all of the targets that you need to destroy in the very near future, and then come up with the optimal matchmaking between targets and weapons. Very different from what a large language model is going to be optimized to do. Much higher standards for accountability, much higher standards for explainability, much higher standards across the board for auditability. And so it's very different. But I will say the AI boom has been very good for Anduril in terms of convincing people on less the technical level and more the spiritual level. Like, people who didn't believe in AI before are like, oh, I've used ChatGPT. I was able to make it do things that I didn't believe a computer could do.
I now believe that Anduril actually can build AI that powers these things. This even goes for politicians. I'll meet with people in Congress who have been skeptical for years about Anduril and about whether we could really build these capabilities and have them be useful, and they say, yeah, I used ChatGPT and I asked it to write me a recipe based on the food in my refrigerator, and it did it. I think I understand what you guys do now.
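The target-weapon "matchmaking" Luckey describes is, at its simplest, an assignment problem. As a minimal, hypothetical sketch (the weapon names and effectiveness scores below are invented, and Lattice's actual algorithm is not public), a greedy pairing could look like:

```python
# Illustrative only: pair each weapon with at most one target by
# taking the highest-scoring remaining (weapon, target) pair first.

def match_weapons_to_targets(scores):
    """scores maps (weapon, target) pairs to an effectiveness score.
    Returns a dict assigning each weapon to at most one target."""
    pairs = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    used_weapons, used_targets, assignment = set(), set(), {}
    for (weapon, target), _score in pairs:
        if weapon not in used_weapons and target not in used_targets:
            assignment[weapon] = target
            used_weapons.add(weapon)
            used_targets.add(target)
    return assignment

scores = {
    ("interceptor", "drone"): 0.9,
    ("interceptor", "boat"): 0.4,
    ("torpedo", "drone"): 0.1,
    ("torpedo", "boat"): 0.8,
}
print(match_weapons_to_targets(scores))
# {'interceptor': 'drone', 'torpedo': 'boat'}
```

A greedy pass like this can miss the best overall pairing; a real system would more likely use an optimal method such as the Hungarian algorithm (e.g. SciPy's `linear_sum_assignment`) over a score matrix.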

You're all riding the same wave.

We're all riding the same wave.

Well, Anduril is a hardware company and a software company.

That's hard, right? Well, there's the old saying: hardware is hard. Software is where you can build a lot of the most durable advantages. The United States used to be able to build things that would fly twice as fast as our adversaries', and we'd be twice as fast for a decade. Those days are gone. Hardware advantages like that are going to be quickly copied by our adversaries. So a lot of the most durable advantages we build come from software. Using software to make decisions twice as fast or ten times as fast, or software that allows you to be ten percent better in the strategic process than an adversary who doesn't have the better tool, is a capability that I don't think our adversaries are close to copying.

There are concerns that AI could deepen the fog of war. What do you think about that now?

I super disagree. I think AI is going to give everybody a very clear picture of what's on the battlefield. I think that the benefits accrue to the sensing side much more than the fogging side. I think AI is going to be a tool to put all the cards on the table for everyone, for everyone to understand just how powerful the US is. My hope is that you're going to have dictators who make better decisions because even they have better information from AI. Let's use Putin as an example. I don't think he would have launched this invasion of Ukraine if he had understood what was actually going to happen. Remember, they believed this was like a three-day special operation. They were going to roll in, it was going to be over very, very quickly, and it was going to be a huge political victory, a military victory. I think that if he had a better understanding of what was going to happen, he probably would not have made the play. And then, if he had believed that he had the better play, the United States could have had greater confidence in that as well, and we would have reacted. So I think AI is actually going to bring a lot more clarity. I think that it's going to make warfare more like chess, in that everyone can see what everyone's doing on the board. The only unknown is what's in their mind.

What are the implications of a more transparent battlefield? Does it mean fewer, faster wars?

I think making the global stage more like a chessboard, where you can see what everyone has and understand the capabilities of what you can see quite well, is actually a net stabilizing force. We've seen this with nuclear weapons. Nuclear weapons and the concept and the doctrine of mutually assured destruction worked because it was so easy to model out. You know, it was actually not that many pieces on the board that were going to determine the outcome, and it was quite easy to model what would happen in these many scenarios. Conventional warfare is too messy for that. But I think that artificial intelligence could lead us to a world where we have much greater confidence in what wars are fightable, which wars are winnable.

So are you innovating yourself out of a job? Like, if the goal is fewer wars, that means fewer countries buying your technology, that means less need to make the technology, that means less money for Anduril.

I mean, honestly, I think there's a little too much money in defense right now. So, like, there's a world where the defense budget goes down and Anduril goes up. I'm not saying that that's necessarily what the United States government should be doing, but our whole thing is that we build these systems at a much lower cost, and we're building also largely defensive systems. Now, if I was building defensive systems that lasted forever and were forever better than all of our adversaries' and could never, ever be beat, then yeah, I would be putting myself out of a job. I have not yet come up with such a system, but the second that I do, I'm going to be very excited, because I would love to put myself out of a job. Like, I would love to live in that post-historical world that everyone thought we were living in in the early twenty-teens. But we don't. That's not the world we live in, and so I have to deal with reality, and human nature doesn't seem to be changing fast enough. I suspect that I'll be gone before war is.

There's a lot of fear out there. Have you seen the Black Mirror "Metalhead" episode?

I've seen every episode of Black Mirror.

The killer robots turning on us? Is that possible?

The thing about fictional scenarios, like in Black Mirror or in science fiction novels, is that they are fundamentally oriented around telling a story, sometimes about the technology. But also, there has to be conflict. There has to be a good guy, there has to be a bad guy. I know the film nuts are going to say that's not true, but that's what most people want from a story.

And that's the world we live in, as we just discussed.

Oh, it is. But I'll give a specific example. I'm friends with Ernest Cline, the author of Ready Player One, and in Ready Player One, virtual reality is this technology that has sort of destroyed the world economy. Everyone's living in stacks of trailers and slums, because that's the only world that matters, and the whole world's kind of falling apart as a result. Another one of my favorite books, The Unincorporated Man, has basically the same premise: virtual reality technology is outlawed because it destroyed the world's economy by making the market for physical goods disappear, among other problems. I've spoken with Ernie, and I've spoken with lots of other authors, and said, hey, do you actually think that this is what's gonna happen? They say, oh, hell no. I love virtual reality, but I have to tell a story. Imagine a story set in the future where there's no conflict. Tech is very advanced, people don't have to work nearly as hard, and there are amazing VR games that everyone just plays all the time, and everything's pretty great. That's not an interesting book. Nobody's gonna make a Hollywood adaptation out of that. I think that it's the same thing with Black Mirror. Like, yes, it can be a useful lens to look at the future, but they're not trying to come up with the most likely conflicts. They're not even trying to come up with likely problems. They're taking really the least likely problems and highlighting them as the thing to be concerned about. In my mind, it's counterproductive, the amount of attention we focus on these fictionalized scenarios, when there's actually much more threatening things, like, for example, artificial intelligence being used to generate novel biological warfare agents or chemical warfare agents in a garage. That's actually what the problems are gonna look like. And so it actually drives me kind of nuts.
It really grinds my gears when you have people in the United Nations putting up pictures from Terminator movies and saying, this is what's coming, we have to make sure this doesn't happen. What the heck are you talking about? That's not even worth talking about. Yes, it could happen theoretically, maybe someday, but it's like number one hundred on my list of things to be concerned about, and there are so many more concerning things. You know what, I'm more concerned about bad people with reasonably smart AI doing evil things. I'm way more concerned about that than super smart AI doing evil things on its own without people involved.

There are a lot of thorny ethical questions here. We're talking about a possible future of self-guided bombs and killer robots, of algorithms deciding who to kill. Who is liable? Is a human in the loop?

First of all, I'd say it's not a future of that. It's a present of that, and a past of that. We have a long history of totally autonomous weapons deciding when to kill and when not to. I mean, in Vietnam, we deployed radar-seeking missiles at scale that would fly over the horizon, look for emissions that they believed were correlated with a particular target, and decide to either go after it or fall into the ocean. Land mines are fundamentally autonomous systems, in that they're designed to analyze electromagnetic profiles and decide if something is a military asset or a civilian asset, decide if they're going to pop. We actually have a long, long history of autonomous weapons. The key is that a person is responsible for the deployment of those systems, that a person understands the limitations of those systems. The existence of an algorithm, or of an automated fusing system or an automated target acquisition system, cannot replace human responsibility for deploying that weapon system, and it has to be a person who deeply understands the limitations of that system and who's going to be held to account when it goes wrong. There will be people who are killed by AI who should not have been killed. That is a certainty if artificial intelligence becomes a core part of the way that we fight wars. We need to make sure that people remain accountable for that, because that's the only thing that will drive us to better solutions and fewer inadvertent deaths, fewer civilian casualties. And again, I don't want AI to do these things, but a lot of times the existing technologies are much worse.

Can AI start a war?

I think that anything, and anybody, could start a war. You could have a single rogue person cause something to happen that starts a war. You could have a malfunctioning weapons system start a war. You could have a miscalibrated weapons system, or a weapons system that's programmed with the wrong target information, start a war. There are a lot of ways you can inadvertently start a war. I think AI makes it less likely that those things happen, not more likely. It adds another safeguard in the chain. For example, you could have a system that is going to confirm not just the radiation signatures it's going after, but other things too. You could have an onboard AI saying, okay, I see an electronic signature from that, but I also see it in the thermal spectrum, in the visible spectrum. I'm literally doing video analysis of it. And it might say, oh, shoot, that's not what I'm actually looking for. Despite having a similar radar signature, I'm going to wave off. And so in almost every case that matters, AI is going to be an additional safeguard against problems, and it's going to increase accountability.

Changing tacks: you started with border security. How is your technology being used at the border now?

Today, our systems are deployed all along the US southern border, our northern border, and in a whole bunch of other places. We've actually saved a lot of lives. We've prevented a ton of criminal activity, drug trafficking, sex trafficking, things that have been going on for decades without enough of a technological counter to them. And the common complaint people have is, oh, but I don't like Anduril working on border security, because my preferred immigration policy does not align with that of the United States government. The point that I would make to them is that border security and immigration policy are related but very separate issues. Even if you want a different immigration policy, even if you want every person in the entire world to be able to walk across our border, get a passport, get citizenship, and, you know, get a free house and a free car, like the most extreme caricature of what someone who wants open borders would want, even that person should want border security. Because even if you have fully open immigration, you still need to know when weapons are being moved back and forth across the border. You need to know when people are being trafficked into slavery across the border. You need to know when people are moving fentanyl-laced marijuana that is killing people who didn't even know they were getting a laced product. These are things that need to be stopped with border security, regardless of immigration policy.

Well, another thing: critics see the technology you're building and worry it could be misused, on American citizens for example. Are they right to worry?

Oh, of course, anything can be misused. But if you want to point to things that can be misused against American citizens, I mean, the military has a lot of guns, the military has a lot of aircraft. The right place to control what our military can do is at the policy level, at the elected-leader level, not at the technological level. We can't say: a person might shoot somebody they shouldn't, so the military shouldn't have guns. The military might surveil someone they shouldn't have surveilled, so they shouldn't have surveillance airplanes. It's just not a tenable position. That's what leads to NATO being armed with squirt guns. You have to have trust in the system. You have to believe that democracy works, and you have to believe that the right way to control these things is on the policy side, not by saying you're not allowed to have the capability to deter Russia because you might use it against US citizens. If that happens, we're broken in ways that holding back weapons is not going to solve.

Speaking of democracy, Anduril's mission is to strengthen America. There's a clear argument to be made that former President Trump supercharged polarization in America. Is another Trump presidency good for America?

It's pretty early to be commenting on what that presidency might mean, one way or the other. I mean, I'm a Republican, so if Trump wins, I'm going to be supporting Trump. Now, does that mean I think Trump's perfect? No. But in a world where you make compromises and trade-offs, you select the candidate that is most aligned with what you want the world to look like. The DoD is pretty apolitical. It's not perfect at it, but they really do pride themselves on trying to be apolitical, because they know that the problems we face as a country transcend not just the politics of the White House but also the timelines of the White House. In a world where it's flipping back and forth every few years, and you have weapons programs that need to stay on track for decades, you kind of have to have an apolitical organization that's willing to work with both sides, with both parties. Anduril made a lot of money under President Trump. We've actually made a lot more money under President Biden, and we're going to make even more money under whoever's president next.

So even after everything, you'd still vote for Trump? Do you hope he's not the nominee? Do you think there's a better candidate?

I'm actually going to decline, not because I don't have an opinion. I have an opinion, but I think that going out and saying my opinion would be detrimental to my political objectives. The thing is, I'm actually not nearly as political a person as people think. Like, yes, I am...

I mean people think you got kicked out of Silicon Valley.

Well, but here, I mean, let's look at this. Like, yeah, I got kicked out of Silicon Valley because I made a nine-thousand-dollar political donation. I have an opinion as to who should be president. A lot of people think that's controversial, but let's not forget: I supported the guy who became president of the United States. And for a lot of people, that's the thing they want to talk about. Like, here we are on TV, talking about it: Palmer, you supported the guy who became president of the United States. That's so interesting. Let's talk about it.

So if you'd been in a defense context, it would have made more sense, because it is a political industry?

It's possible, but I think we would have gotten past it a lot faster. The reason that people pay attention to it, at the end of the day, is because it's novel for a person in tech to have supported the person who became president that year. I don't think that's that controversial. Here at Anduril we have a lot of Democrats, we have a lot of Republicans, we have a lot of libertarians, not very many communists. But generally we have a lot of political diversity here, and that's because what we're doing is way more important than the politics of it.

Which countries are you willing to sell your technology to? And who won't you sell to?

I'm willing to sell, more or less, to anyone the US government wants me to. And this is another one of those areas where I have to trust that the democratic process is going to come to the right conclusion. If I don't believe in it, if I just say, no, I'm going to decide who Anduril sells to, it puts our country in a pretty bad position, especially if every weapons company in the country acted that way. What if the United States says, we need to aid this nation for strategic reasons, and then every weapons CEO said, no, I don't want to, I don't like that country? That'd be a disaster. That means the president's not in charge. That means Congress isn't in charge. It means a handful of billionaires are in charge. I don't think anybody particularly wants a handful of billionaires to be directly in charge of the military decisions of our nation. There are also a lot of situations that seem crazy to people because they don't have all the information, including to myself. You know, there was a big brouhaha a long while back over the US selling certain defensive weapons systems to Turkey, and people were saying, oh, we shouldn't sell them those, we shouldn't give them those, because they might use them in bad ways. And then it later came out publicly, this is not controlled information, that we had nuclear-armed aircraft on that particular airbase in Turkey on standby. Now, the US can't go out and announce to the world, hey, the reason we're selling this particular capability is because we have an extremely important, tactically relevant reason to do so. In that situation, you kind of have to trust that people are doing it for a good reason, that they're doing it for a reason that's aligned with US interests, that's aligned with the interests of our citizens.

You went to Ukraine a couple months into the war. You met with President Zelensky personally. What did you see? What did you learn?

Well, I saw a lot of stuff. This wasn't the first time I'd met with Zelensky. I had actually talked with him prior to the war, here in the United States, about potentially using our border security technology on Ukraine's eastern border to help track Russian systems, when incursions might be happening, when buildup might be happening, and unfortunately we weren't able to make it happen. I saw a lot of things in Ukraine, some of which I can talk about, some of which I can't. You know, we've had weapons there since the second week of the war; we've been involved the entire time. But one of the things that really struck me was that the Russians who were going into Ukraine really did believe that they were the good guys. And they believed that because they had been fed a pack of lies, which made them willing to do things they never would have done had they understood the truth. There are people who came into that fight with literally four days of clothes and a parade uniform. That's what they thought they were going to need in Ukraine. They thought they needed four days of clothes and food, and then there was going to be a parade in the streets for them, because everyone was going to be all over them. And the reason that made such a big impact on me is it made me realize that Russia's most powerful weapons system isn't any piece of steel. It's actually the control of their media apparatus, which allows them to raise an entire generation of young Russians who believe these crazy things.

Ukraine has become sort of a testing ground for low cost, scalable technologies. How is this changing the nature of war? I mean, this is happening as we speak.

Imagine how different the war in Ukraine would be if they could deploy thousands of drones all at once that didn't have to have a dedicated pilot, if you could have one person managing hundreds or thousands of drones in an attack. That's the technology that Anduril's building, and that's the technology that will take what's going on in Ukraine and put it on steroids. I think that's what we're going to need to deter conflicts in the future.

There are reports of drones attacking targets in Ukraine without human control. What have you heard about this?

I know a lot about it, but this isn't the right venue to talk about it.

But that is sort of the fear of the robot apocalypse, you know. Like...

Right. I can't talk about operational specifics of how our systems or other systems might be used, or the specific tactics that are going on, but I'm aware of a lot of autonomous weapons being used in a lot of different places, not just Ukraine, and my thoughts on it are generally a big thumbs up. I am a big fan, for example, of having a drone that is taking out a guy who's trying to kill me not be reliant on a direct communications link, because if my drone is relying on a communications link, all he has to do to kill me is jam my communications. This is a huge advantage of autonomy. It means my weapons can go over there and get that guy even if he has an electronic warfare system that makes it impossible for me to fly my radio-controlled drones. I don't feel bad about that. I think it's great. I think it's fantastic. It means that Russia can't sever everything that we make with the push of an electronic warfare button.

China and Taiwan, how does this play out?

Everything that Anduril is working on on the R&D side is oriented towards that fight. The term the DoD uses is a great power conflict, or a fight in the Pacific, and what that means is fighting with China over Taiwan, or at least being in the area to scare China off from going after Taiwan. There are a lot of ways this could play out. Will they move quickly? Will they move slowly? Will it be a trade blockade that escalates into an invasion? What we do know is the only way to win this war is to build things that make China believe that they cannot take Taiwan without a cost that is unacceptable. Right now, China does not believe that. China looks at what we have, they look at what we're willing to do, they look at our posture, they look at what Taiwan has, and they say, we will reunite by force if necessary, and we're going to be able to do it in the next few years. We have to change their mind.

What is Anduril's role here? Who are you partnering with, and what are you deploying?

We're partnered with every branch of the United States DoD, building things that they tell us they desperately need to win a great power conflict, a fight in the Pacific, to deter a fight with China that nobody wants to fight, but that we need to be able to win. And we work very closely with them on this. We've got great ideas, we've got great tech, but at the end of the day, we are working hand in hand with the people who know what we need to build to stop this fight, so that we can try, over the course of the next few years, to build it. I'm probably going to eat these words, but if China ends up invading Taiwan and things go the worst way, I'm going to feel like we've really failed in our mission. I'm going to feel like we've failed in what we're doing, because in the same way that China is focused on the Taiwan conflict as the driving force for building their military, Anduril is focused on it as well. Taiwan is not just strategically important, not just morally important, it's economically extraordinarily important, and allowing Taiwan to fall is probably the worst signal we could ever send to the rest of the world. There are places in the world where you can debate whether we should be involved. I'm generally a non-interventionist. Taiwan is not the place to play that game.

You actually spent a lot of time in China working on Oculus headsets. What do you know about China's capabilities in AI? What don't you know?

I mean, I know quite a bit, some of which I can say and some of which I can't. Obviously, I'm building things that are designed specifically to go up against China's AI systems, so I know more than I can let on. But I did spend time in China back in the Oculus days, because that's where we did our manufacturing, and we didn't really have a choice. I deeply understand how dependent our country has become on Chinese manufacturing, Chinese engineering, Chinese supply-chain materials. It's really extraordinary how they've pulled themselves up from almost nothing to being an economic superpower that is on track to surpass the United States, in such a short period of time. And we did this too. We're the ones that gave them the blueprints, we're the ones that gave them the tech, we're the ones that shipped it all overseas, and I'm part of the problem. I'm one of the guys who did it. We tried to do our manufacturing in Mexico at one point, and we actually did do some early Oculus manufacturing in the US and Mexico. We just weren't able to make it work without China, and so I have sympathy for companies that do end up in China. But my strong advice to new companies is: do not build a company that is dependent on China.

Do you worry that China is outpacing us on technological innovation? Could the US military lose its edge to China.

Well, depending on who you ask, China has between fifty times and three hundred times the military shipbuilding capacity of the United States. This is a huge problem, especially if you're fighting a war where you lose all your ships and it takes you decades to rebuild, while they lose all their ships and rebuild the same year. This is really, inarguably, an area where China has outpaced the United States. Now, they haven't outpaced us everywhere, but in a lot of the areas that matter for a fight in the Pacific, they are kicking our ass, and the United States is not going to be able to win by following the same strategy they do. We're not going to be able to build enough shipyards and train enough welders to build three hundred times more ships. That's off the table. So we have to win with our brains.

The stakes are so much higher here than strapping on a headset and playing in the metaverse. How do you think about the human cost, the human toll of what you're building?

How do you deal with the human toll? You have to deal with the dilemma that people have dealt with for millennia: war is hell, people die, and our goal has to be to try to minimize that. Our goal has to be to build the things that prevent it to the best extent possible. Some people who have a different moral framework internally will say, well, how could you live with yourself if somebody got killed by one of your systems? And like, I'm not stoked about the fact that, you know, let's say an autonomous weapon blows up a Russian tank crew, but I also know that it's the right thing to happen. I can't say I lose much sleep over it, because in a world where things like that need to be done for the greater good, and to save the many more lives that could be lost by not making that decision, it's a really easy decision for me.

What if your technology kills the wrong person? Would you take responsibility for that? Would you apologize?

Well, it would depend on the situation. Did it kill the wrong person because someone in the military shot at the wrong person? Is it because it was attacked, or hacked by an adversary using components that China implanted in the supply chain of a command and control system? It's a hard hypothetical. But if a system killed somebody who should not have died, and it's entirely my fault, personally, Palmer Luckey's fault, because everyone in the military did their job, everyone on the deployment side did their job, everyone in my company did their job, and it was my personal failure? Yeah, I'd feel awful about it. But it wouldn't make me say, well, I guess we shouldn't build the tools that Taiwan needs to save themselves from being taken over by China. I don't think anyone who works in this space can afford to have that opinion. Every weapons company has made weapon systems that have malfunctioned at some point. Imagine if the bar for withdrawing from that duty was getting it wrong one time. The stakes are high. That's why we have to do it. That's why we have to keep doing it.

So, big picture: what does the future of warfare look like? And what is Anduril's role in it?

The future of warfare is going to be defined by large numbers of autonomous systems, managed at a high level by people who are able to focus on what people do best, leaving to robots what they do best. Every sensor is a sensor for every weapon. Every weapon is an appendage of every person who needs to use it, and the enemy will not be able to deny us our communications. They will not be able to deny us the big picture. We will have full awareness of what we are doing, why we are doing it, and confidence that it is the best way to accomplish our strategic aims. That's what I think the future of warfare looks like. It's an ambitious future, but we're building the tech that can make it happen.

Thanks so much for listening to this episode of The Circuit. You can watch the full episode featuring Palmer Luckey on Bloomberg Originals. I'm Emily Chang. Follow me on Twitter and Instagram at Emily Chang TV. You can watch new episodes of The Circuit on Bloomberg Television, on demand by downloading the Bloomberg app to your smart TV, or on YouTube, and check out other Bloomberg podcasts on Apple Podcasts, Spotify, or wherever you listen to your shows, and let us know what you think by leaving a review. I'm your host and executive producer. Our senior producers are Lauren Ellis and Alan Jeffries. Our editor is Alison Casey. See you next time.
