In April of 2023, a man carrying a small, black box walked into one of the nation’s most secure buildings, right next to the White House. In the box were ingredients that could be used to create a bioweapon. What the man revealed about how he got his hands on these ingredients was even scarier: an AI chatbot had given him the recipe.
On today’s Big Take, host David Gura speaks to Bloomberg healthcare reporter Riley Griffin about why that stunt alarmed White House officials and woke them up to the potential dangers of AI-made bioweapons.
Read more: AI-Made Bioweapons Are Washington’s Latest Security Obsession
Bloomberg Audio Studios: podcasts, radio, news. Bloomberg reporter Riley Griffin has a pretty unique beat.
I am a healthcare reporter and I'm based here in Washington, DC, and I have a particular interest in where health and national security meet.
And there's been no shortage of stories at that intersection of health and national security since the COVID-19 pandemic.
We were all incredibly humbled by that moment. And I think, you know, prior I had been really focused on the pipelines of pharmaceutical companies and how they were going to ensure a profitable and booming business. I was not thinking about healthcare as something that threatened our economy or threatened our ability to act in everyday normal ways.
Newly alert to the possibility that it could be a threat, Riley was intrigued when she got a tip.
I heard this from government officials who witnessed the briefing by a guy who had walked in with a black box.
A black box, and that briefing wasn't held just anywhere.
He brought that black box in with him to the Eisenhower Executive Office Building, the EEOB. That's the building right next to the West Wing. Really, it's where a lot of White House staffers are.
Riley says she talked with several people who were in that meeting, and they told her that inside that small plastic container
were test tubes with synthetic DNA.
Synthetic DNA that included ingredients that could be used to make deadly diseases. That was frightening enough, but Riley's sources told her what really startled them was when they found out how that guy, a former UN weapons inspector, had figured out what to put into those test tubes. It was with the help of an AI chatbot.
He had his team basically pretending to be a bioterrorist. If a bioterrorist wanted better instructions on how to create a biological weapon, or help with the process of actually making that in a laboratory, what would it ask? What would it need to know?
They typed in questions, and they got answers. But it wasn't just recipes for what to do with that synthetic DNA, Riley says. Those government officials heard about how that chatbot recommended where to put pathogens and how to deploy them to do the most damage, which raised an alarming question.
If a bad actor could acquire the synthetic DNA and the instructions to put it together in a lab, what would that mean?
I'm David Gura, and this is The Big Take from Bloomberg News. Today on the show: how information that could be used to make bioweapons slipped through AI safeguards, and what that tells us about the challenges of regulating a still pretty brand-new technology. Of course, information about how to create deadly weapons of all kinds is out there, and it has been for a while. Information that could help weaponize diseases is buried in books, and Riley Griffin says it's online if you know where to look.
There's a lot of information on the Internet about viruses, pathogens, respiratory illnesses, fungal diseases, right? And the reality is there's a lot of information on Google that is dangerous when it comes to the field of biology.
I asked Riley, what's different now? What could a bad actor, a bioterrorist do with an AI chatbot that he couldn't do with a standard search engine.
One thing that really struck me through the interview process was just hearing some of the researchers explain how good the chatbot was in the brainstorming phase, in offering up ideas that weren't necessarily out there in the first place.
AI companies know that their transformative technology has a lot of potential perils, and they're also aware of the damage it could do, not just to their business models. So many of them have tried to get ahead of a lot of this stuff, and Anthropic is one of them. It was founded by people who had worked at OpenAI.
Anthropic was created really with a bent towards safety. They wanted to put in checks and balances, test these things out before they reached the public, and commercialize at a little bit of a slower pace at the time.
It was Anthropic who hired that former UN weapons inspector who brought that black box to the heart of Washington. His name is Rocco Casagrande, and Anthropic hired him to put its AI chatbot, Claude, through its paces.
So that's the origin of this story. He had tested it out with a team for more than one hundred and fifty hours. He had virologists and microbiologists really examining what it could and couldn't do, and so they went through that process, and what they found scared Casagrande.
He was concerned by how simple it was to get pretty accurate instructions for how to make bioweapons and how to maximize harm. But Riley says something else made Casagrande even more worried.
Another notable piece that was shared with me was that the chatbot could advise on how to acquire the materials, where to go to purchase them, and how to evade scrutiny when doing so. Some companies that sell synthetic DNA, this man-made DNA, you know, look to know their customer. They want to see who is purchasing this. Is this a researcher at the CDC? Is this an academic who studies Ebola? If not, you know, a red flag is raised.
But Riley says not all companies that sell man made DNA have or follow strict protocols.
Like with all companies, there's a varying level of commitment to that cause, and the chatbot could recommend where to look for providers that might be better for evading such scrutiny.
That motivated Casagrande to put together that black box and to carry it with him into one of the most secure office buildings in the world. He called it a stunt, but he wanted to make a point, and Riley says it resonated with the people Casagrande met with.
He really struck the imagination of Washington with that black box.
Coming up after the break: how government officials responded to Rocco Casagrande's black box warning, and what that could mean for everything from AI safety to our ability to treat and cure diseases. Just a few months after Rocco Casagrande's briefing, in October of last year, President Joe Biden called for greater oversight of government-funded research and new safety protocols for tech companies and AI developers.
To realize the promise of AI and avoid the risk, we need to govern this technology. There's no other way around it. In my view, it must be governed.
He laid out a new way for AI companies to think about the threats posed by the technologies they're creating.
I'm about to sign an executive order, an executive order that is the most significant action any government anywhere in the world has ever taken on AI safety, security, and trust.
Since then, Bloomberg's Riley Griffin says, the debate over AI regulation in Washington has gotten louder and more urgent.
There's really begun in earnest a serious conversation about regulation, and what steps can be taken without hard and fast rules, just to limit the risks. So this is top of mind. Every agency is thinking about it: the Department of Homeland Security, the Commerce Department, the Department of Defense. I mean, it's a full government conversation.
Late last year, Vice President Kamala Harris announced the establishment of the US Artificial Intelligence Safety Institute, and in her remarks, Harris specifically singled out AI made bioweapons as a threat.
From AI-enabled cyberattacks at a scale beyond anything we've seen before, to AI-formulated bioweapons that could endanger the lives of millions of people, these threats are often referred to as the existential threats of AI, because, of course, they could endanger the very existence of humanity. These threats, without question, are profound, and they demand global action.
But at the same time as the government has put up new guardrails, some researchers are saying not so fast.
Nobody wants to stop, or certainly many, including in the scientific world, do not want to stop, the innovation that AI is contributing to around new treatments for diseases and ways of diagnosing healthcare conditions. There are a lot of benefits that scientists point to that they fear could be hindered by regulation.
One hundred and seventy-four academics have signed a letter pledging to use AI in their research responsibly. They argue that when it comes to AI, its benefits, quote, "far outweigh the potential for harm." Riley says they're advocating for a more measured approach to regulation.
You know, a focus of the letter is: we recognize there are some risks, but hold up, we don't want to limit the progress that can be achieved. We can do good things with these tools, and that is a priority, and we promise to move forward with that work while practicing safe behaviors, for example, only purchasing synthetic DNA from reliable providers. Which, again, isn't just an AI question. It's kind of a broader conversation about security in the field of biology as technology advances.
Some critics decry what's being called AI doomerism: what they see as a misplaced worry that generative AI poses an existential threat to humanity.
Some people think the conversation about biological threats and AI is a distraction. And Riley notes that while Casagrande and his team were able to do something with that chatbot that is objectively scary, the test they ran led to a positive outcome.
My reporting suggests that these initial briefings were really the beginning point of a massive undertaking by government to think about biological risk in a new way in the wake of a pandemic, but also in the wake of a fast moving emerging technology.
And those meetings also led to calls for a broader conversation, so that it's not just the largest AI companies, like Anthropic and OpenAI, that are briefing government officials.
You know, one interesting comment that I've heard is many would like to see smaller AI companies more present in the room. It's the biggest players that are engaged with government on this subject, but some of the smaller companies, particularly those tailored to biological data, may not have as active of a voice.
The debate continues, and according to Riley, the trend is looking better than it was last year when she got that tip about that black box.
Hopefully we're at a place where it's harder to get that information from the chatbots.
One of Anthropic's founders told Riley the company has made changes to address vulnerabilities Casagrande and his team identified, but more work is needed.
Riley says she doesn't want the takeaway from this story to be one of fear, but rather one about how we think about what we do when there is so much innovation happening in so many places.
You know, what I want the takeaway from the story to be is not that we are in a doomsday scenario where the worst is here and present and going to impact us tomorrow. It's more around the nuance of a debate about how to regulate a fast-moving technology. And I think everybody knows that there's a question around regulation of AI, because AI is accessible, but the field of biology is not so present in that conversation. And when we talk about synthetic biology, or this kind of revolution in the field of biology, it is moving just as quickly. So that will be kind of my note: what happens when two fast-moving revolutionary technologies crash into each other.
This is The Big Take from Bloomberg News. I'm David Gura. This episode was produced by Thomas Lou. It was fact-checked by Alex Segura. It was mixed by Robert Williams. Our senior producers are Naomi Shavin and Kim Gittleson, who also edited this episode along with Becca Greenfield. Our senior editor is Elizabeth Ponsot. Nicole Beemsterboer is our executive producer. Sage Bauman is Bloomberg's head of podcasts. Thanks so much for listening. Please follow and review The Big Take wherever you get your podcasts. It helps new listeners find the show. We'll be back on Monday.