Newt discusses the transformative potential of Artificial Intelligence (AI) with Neil Chilson, the leader of AI policy at the Abundance Institute. Chilson explains that while AI has the potential to revolutionize various sectors, including healthcare and creative fields, there is a pervasive fear and pessimism surrounding the technology. He argues that this fear-based approach could hinder the full potential of AI. Chilson also discusses the Abundance Institute's focus on Artificial Intelligence and energy, emphasizing the need for regulatory changes to foster innovation in these areas. He invites those interested in a positive technological future to get involved with the Abundance Institute.
On this episode of Newt's World: we're living in a time of transformation that may rival any other time in human history. We have a big problem. The narrative that surrounds the biggest and most groundbreaking technologies of today is one of pessimism and fear. While technology holds the power to transform our economy and our lives, we often throttle technological breakthroughs before they can fulfill their life-changing potential. Fueled by a mix of cultural anxieties and policy changes, this fear-based approach risks denying humans an abundant future. My guest today says, quote, our leaders are unprepared to address the rapid pace of technological change in a positive way. The Abundance Institute is a new, mission-driven nonprofit organization that is focused on creating space for emerging technologies to grow, thrive, and have a chance to reach their full potential. Here to discuss his new organization, I'm really pleased to welcome my guest, Neil Chilson. He is the new leader of Artificial Intelligence policy at the Abundance Institute. Neil, welcome, and thank you for joining me on Newt's World.
Thanks so much for having me. It's great to be here.
Can you talk about your background in artificial intelligence?
Sure. So I have an undergrad and a master's degree in computer science. And when I was in grad school, I focused on a couple different things, but did some work on what are known as agent-based systems. Now, when I was in grad school in the early two thousands, none of the huge breakthroughs that have since happened in machine learning had occurred. Some of them were maybe on the threshold, but nothing had broken the way that it has, and certainly hadn't broken into the public consciousness like it did with the release of ChatGPT two years ago. And so my focus was on another type of artificial intelligence. And maybe we'll get into this a little bit, but there have been many, many waves and many different types of what computer scientists have called artificial intelligence over time, and so I've since translated that expertise into the policy space. After grad school, I went to law school, spent some time doing telecommunications law, spent a lot of time at the Federal Trade Commission, where I was the chief technologist at one point, and my job there was to engage with new technologies and figure out how do they affect consumers? How can government set policies so that consumers are getting the best benefits out of it? And how can we drive innovation through good policy making? And so that's what I bring to this conversation.
You wrote a paper, Getting Out of Control: Emergent Leadership in a Complex World, which looks at different leadership paradigms. Is that published? Is it available?
Yeah, it's actually a book that's available on Amazon, and I also have a Substack at outofcontrol.substack.com where you can learn more about the book.
What led you to write it?
Well, you know, there are at least two paradigms, but there's one that's very dominant in DC in particular, and that's the idea that in a world of increasing complexity, and what might even look like chaos, what we really need is more control. We need people with more authority who take strong positions to jump into the fray and get things under control. And I think a lot of people, when they look at how complex the world is, are sort of wishing for that, that somebody will just take control and tell the world how to run. But the problem is that complexity brings so many benefits when we think about the ability of our ecosystem and our economy to adapt and to create new products and benefits for individuals, and those benefits rely on a complex network where nobody is really in control, but where many people have influence. And so I wrote that book to say that both in your personal life and in the policy space, we should be really focused not on trying to seize control, which can have some really negative side effects and get rid of a lot of the benefits, but on trying to increase our influence, and also on better understanding the complex systems that we're in, so that we can resist this impulse to grasp for control.
It's interesting. It's a little bit like Adam Smith's description of the invisible hand, which enables us, through the market mechanism, to move resources and increase innovation in a way that nobody can control, and if you try to control it, you actually kill it.
The invisible hand is an emergent phenomenon. It's something that happens when all of us, applying the information in front of us, are trying to solve the problems that we face, and then over time you get something that looks quite orderly from the outside, but which no one person designed. That's very powerful, and it says a lot about our capabilities as humans to solve really complex problems as a group, even when there's nobody setting out a design ahead of time.
The number of different things that have to come together for major breakthroughs is beyond the capacity of bureaucratic planning. They almost occur serendipitously.
Yeah, and you can actually see that often they occur in parallel, so you might have multiple people who are doing similar work on similar tracks, and there's a sort of threshold where all of the enabling technologies enable a lot of new people who are trying to solve lots of different problems to solve a very similar problem all at once. This happened with the light bulb, you know, and calculus was invented in parallel by two very, very smart people. And so I think this happens a lot, and I think it's happening in the artificial intelligence space right now, where a lot of people are seeing that these new capabilities are possible and trying to solve new problems, and there's just so much going on right now.
You saw that with the Wright brothers, where there were really many people in Europe and America trying to figure out how to fly, and they happened to be first, but there were a lot of other people working the problem, and in fact they often would write letters back and forth. They were certainly not working in isolation.
One of the great things about that example is that the Wright brothers were not scientists, right? They were bicycle mechanics who were trying to solve a problem that lots of scientists had tried to solve, and many scientists at the time, if you read the contemporary history, were quite skeptical that it was possible from the equations that they had worked out. And the Wright brothers, through trial and error and practical experience, were able to show that no, it actually was possible. And once they made that breakthrough, all of a sudden there was an even bigger flood of people trying to solve that problem, once they could see that it was possible to do.
The Smithsonian got fifty thousand dollars to try to build a heavier-than-air vehicle and failed. They did it with a very elegant design using a very powerful German motor, which required a very large structure to be able to hold the motor, and it was just way too big, and they didn't know what they were doing. And they actually cleverly decided to launch it off of a ship in the Potomac, so when it crashed, it went straight into the water and they couldn't figure out what went wrong. Meanwhile, the Wright brothers, for about a dollar per flight, are operating out of Kitty Hawk, doing ten, twelve, fifteen flights a day, and they weren't breaking their plane, so they could say, gee, that didn't quite work, let's do this and that. But when the Wright brothers broke through shortly after the Smithsonian failed, the Smithsonian was so angry that for years they wouldn't deal with the Wright brothers. That's one of those great examples where human nature transcended the scientific impulse. I am interested, by the way: you have a bachelor's in computer science from Harding and a master's in computer science from the University of Illinois, and you have a law degree from George Washington University Law School. So you really have a combination of the law and public policy with the science and technology. Do you think that gives you a different outlook than most of the people in the field?
I think it does, in two ways. It's very interesting, because when I was in grad school there wasn't really a path for computer scientists to work on public policy in the way that there is now, where it's much more common, and so I went to law school. What law school taught me is that the engineering paradigm that engineers typically use to try to solve problems is not the same as the paradigm for lawmaking. Unfortunately, we do see a lot of people who have tech expertise come to DC with a sort of engineering mindset that says, well, law is like code, and once we write it, of course it will work the way that it's intended, just like when I write code on a computer, it runs that way. But humans and the law, as you well know, are a complex system with lots of feedback loops. The things that you write don't run the way that you think they should, and so I often have to temper the enthusiasm of some of my engineering-background friends about what is possible in law, and when it's an appropriate approach to solve problems and when it isn't. And so I think having a window into both of those lets me speak across that chasm that I think exists between engineering and law.
I mean, in your study of artificial intelligence, you indicate that it's already evolved, in less than a decade, into about a hundred-billion-dollar industry, but you think in the next decade it could easily grow to something like a trillion three hundred billion dollars as an industry. The analysts at PricewaterhouseCoopers estimate that AI will add fifteen point seven trillion dollars to the global economy by twenty thirty. First of all, why is it accelerating that rapidly, and what is the nature of its impact?
So this is a difficult question to answer, and I think everybody trying to predict the future of AI comes up against a single difficult problem, which is that defining what exactly is artificial intelligence is quite difficult. This current wave of deep learning, large language models, I think is what most people sort of settle on as what they're trying to project off of right now. And the reason it's moving so fast right now is that there is this real confluence of capability in these new chips, these new techniques in the transformer model that was developed at Google but quickly spread beyond that company to lots of researchers, and this demonstrated, almost in the Wright brothers manner, this demonstrated potential in something like ChatGPT, where people are like, I can't believe this works as well as it does. I don't think that before ChatGPT was released, people were really aware, even people in the computer science field, that you could have something as useful as these large language models are turning out to be. And so I think once that was demonstrated, there's been such a flood of energy into this space. The impacts I think are still pretty unknown. But when you think of what these large language models and other deep learning models can do, what they can do is take a big bunch of unstructured data and pull essential patterns out of it in a way that reveals new things that weren't easy to identify in that data previously. And so I think it has the biggest potential in spaces where we have a lot of data but we don't know what to do with that data. And so healthcare to me is one of the biggest potential impacts. Here we can collect a lot of data about an individual's basic bodily functions, their heart rate, their breathing, their brain waves. We can see that data, but we don't quite understand what it means. And so I think using these types of deep learning techniques to pull out meaning from those large data sets is going to help us do things like personalized medicine, where we no longer treat people as just essentially an average human, but we can look at what are the specific conditions and functions of their body, and we can design treatments, including potentially medicines, that focus specifically on their particular body. And so I think there's a lot of potential there. There's obviously lots of potential in the creative fields, because you can use these techniques to generate well-written prose, to translate between lots of different languages, to create even pictures and videos now in a way that's quite impressive. And so I think it brings a lot of powerful content creation tools down to the average person in a way that is going to mean it's much easier to create high-quality content even if you're an individual or a small team. And so I think we'll see an explosion of creativity using these tools in the very near future as well.
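To make the pattern-extraction point above concrete, here is a minimal sketch in Python, assuming the open-source Hugging Face transformers library; the model names, the sample text, and the prompt are all illustrative choices, not anything Chilson references.

```python
# A minimal sketch of applying pre-trained deep learning models to
# unstructured text, illustrating the "pull essential patterns out of
# data" idea above. Assumes the Hugging Face `transformers` library;
# the models and inputs are illustrative, not from the conversation.
from transformers import pipeline

# Summarization distills the essential pattern out of raw free text.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
notes = (
    "Patient reports intermittent elevated heart rate over three days, "
    "worse after exertion, with normal temperature and no chest pain."
)
print(summarizer(notes, max_length=25, min_length=5)[0]["summary_text"])

# The same toolkit can generate fluent prose from a short prompt.
generator = pipeline("text-generation", model="gpt2")
print(generator("Personalized medicine means", max_new_tokens=25)[0]["generated_text"])
```

The same pipeline interface covers translation and other tasks, which is one reason these tools have spread so quickly to individuals and small teams.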
The process could be very, very positive, and almost certainly will be. But at the same time, there's a cultural aura that views all of it with fear. Why do you think we have drifted into this fear-based response to technological opportunity?
Humankind has always had a sort of technopanic curve, where you have the early adopters who are really excited, then you have a sort of peak moment where people are talking about the potential downsides, and then it gets accepted into the community and people forget that they ever debated whether novels were good or bicycles were good. And so I think it's partly that. But I think AI in particular raises these fears because it's such a vague technology. So much of what we call AI, artificial intelligence, is already around us. There are actually dozens and dozens of artificial intelligence algorithms on everybody's phone, right? They do all the things like helping you search your photo collection for a particular individual. Those are artificial intelligence algorithms. But when people hear the term artificial intelligence, they think of sci-fi movies, right? They think of the Terminator, or they think of 2001: A Space Odyssey and HAL, and in those movies, almost universally, AI is portrayed as an evil entity, or an entity that's gone wrong somehow and is threatening human safety. And so I think there's that sort of cultural piece. But more generally, I think the US has shifted from a frontier mindset, one where we're trying to push the edges, we're trying to explore, we want grand adventures, we want to be the next Wright brothers. We held up those people as icons for what it means to contribute to society, and we're afraid of that now. I worry that we're now in a time where maybe our kids' greatest adventures will be figuring out what particular trauma they are trying to deal with in their life, when their great adventures should be trying to come up with the next big invention, or exploring a new space of science, or maybe outer space. And so that cultural shift, I'm not one hundred percent sure why it's happened. I think in part it might be that we've gotten comfortable as a country, maybe, and so we're focusing on problems that are smaller when we should be looking to opportunities and problems that are bigger. But it really is a sort of new phenomenon. When I think of the late nineties, we were a country that was excited to be on the cutting edge of technology, and now, at least at the elite levels, people often talk about technology as if it's primarily a threat rather than an enormous opportunity for the United States.
Somebody said that the Europeans had decided they preferred regulation over innovation, and the result was that in almost every new innovative area, the US was just rapidly pulling away from Europe. And isn't there a real danger that some of our politicians would like to introduce a European-style bureaucratic overcontrol?
Absolutely, and in fact almost expressly. When you look at what California has done around some of its approaches to software development and privacy, they're borrowed directly from the European Union. And the European Union has a different mindset around technology. In the US, we generally think people have the right to build new things, and then we'll see what the effects are, and they might have to temper their solutions: the market might discipline them, but then also there might be real consumer harms that are possible. Whereas in Europe, they have a mindset that basically, until the government sort of authorizes an innovation in a particular space, nobody is really allowed to do it. And so that mindset is very chilling, because inevitably it's not the regulators who are best at trying to figure out what future technologies might come about; it's the people who are practically trying to make those things happen. And if they live in a culture that says, until you get the okay, you can't try something new, it's just a very difficult place to innovate in. So the US does maintain quite a good edge in that space, but we are at risk, at least at the policy level, of giving that up by trying to adopt these precautionary approaches to innovation.
Doesn't there seem to be a pretty big partisan split over regulation versus innovation, with people like Senate Majority Leader Chuck Schumer really pushing for all-out federal regulation?
I don't feel like artificial intelligence has been particularly politicized yet, but we have seen that many other cutting-edge technologies, especially in the software space, really do have a political valence to them. The default in the partisan space, I think, would be that Democrats tend to be much more precautionary in approaches to technology, and Republicans tend to be more permissionless, letting people build things. That's not one hundred percent across the board, and there are interesting opportunities, I think, in the AI space to think about that. On AI in particular, the Biden administration has very much taken a whole-of-government approach, one that says, hey, we as a government have to figure this thing out, and we need to get all our ducks in a row.
And not all of that is bad. Government uses of AI certainly should be thoughtful. But when we're trying to set up an environment in which we are making sure that innovators have both the incentive and the freedom to develop new things, a lot of government action can be pretty chilling to that. And so I do think the Biden administration has not really set a very positive vision. In contrast, in the late nineties, there was a very clear vision set for the commercial development of the Internet: that it was going to be market-driven rather than government-driven. And we have a very different mindset about artificial intelligence right now coming out of this administration.
You've sort of triggered a couple of thoughts on my part. One is, you could do a very interesting survey of the artificial intelligence that's already around us. I mean your whole point, for example, about your cell phone and how many different versions of AI you're relating to. And I think people would be shocked to realize that artificial intelligence isn't the future; artificial intelligence is the present, and it is going to expand into the future. There are an amazing number of places where we've actually been using a variation of artificial intelligence. Many, many years ago, I went out to San Diego to the Navy's labs and looked at how they had designed a carrier battle group defense system, and it was clearly what we would now call artificial intelligence. But that was thirty-five years ago. So in that sense, we're already surrounded by a large amount of artificial intelligence.
Absolutely. There's a great quote by AI pioneer John McCarthy, which is that as soon as it works, nobody calls it AI anymore. There was a time at which chess playing was cutting-edge artificial intelligence, or speech recognition, or recommendation algorithms like what you should watch next on Netflix. There was a point at which these were cutting-edge AI research. But now, because they're working, we just call that computers, and we don't call it artificial intelligence anymore. And so I do think that people don't realize that, and I think in part it's because these new technologies came out in a sort of chatbot form, right? You're talking, and it sort of seems like you're talking to something. I think that feels a little different to people. But ultimately, I do think it is helpful to point out that AI is around us. It's pretty ubiquitous.
In fact, the other side of that is the notion of getting across how many different ways AI helps us. I mean, I was thinking about this the other night, because I was trying to go somewhere, but I realized I don't care anymore, because I just plug in the address and the GPS system, which is artificial intelligence, knows where I am, knows where I am going, and has a sense of which route will be best. Now, if you think about it, that's an astonishing level of data in real time at no cost. If we thought about the number of ways AI is already helping us, then you could actually build out the potential over the next decade. I think particularly, for example, of helping Alzheimer's patients and Parkinson's patients have much better, richer lives, and their families having much better, richer lives, as new artificial intelligence systems are developed.
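For readers curious what "has a sense of which route will be best" looks like in code, here is a minimal sketch of the shortest-route computation at the heart of navigation systems, using Dijkstra's algorithm in Python; the toy road graph and travel times are invented for illustration, and real systems layer live traffic data on top of this idea.

```python
# Minimal sketch of the shortest-route computation behind turn-by-turn
# navigation (Dijkstra's algorithm). The road graph and travel times
# are invented toy data, not any real mapping service's API.
import heapq

roads = {  # intersection -> [(neighbor, minutes)], invented
    "home":     [("main_st", 4), ("back_rd", 7)],
    "main_st":  [("highway", 3), ("downtown", 9)],
    "back_rd":  [("downtown", 5)],
    "highway":  [("downtown", 2)],
    "downtown": [],
}

def fastest_route(start, goal):
    # Priority queue ordered by elapsed minutes so far.
    queue, visited = [(0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, minutes in roads[node]:
            heapq.heappush(queue, (cost + minutes, neighbor, path + [neighbor]))
    return None  # no route found

# Prints (9, ['home', 'main_st', 'highway', 'downtown'])
print(fastest_route("home", "downtown"))
```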
Yeah. Absolutely, And you know I was thinking. I had a bit of a health scare with my twenty month old last week where she had a seizure, and one of the things that doctors kept asking me was how long did it last? And I was literally able to look at my fit bit heart rate monitor and see when my heart rate shot through the roof to see when it started, and I could figure out from that. Now, it would have been even better if I could just ask the Alexa that was in my room like hey, you hurt us, yelling like how long did that last?
Right?
But I couldn't do that, in part because I think people worry about enabling those types of devices to monitor all the time, and sure, that can have some downsides. But all I could think was, man, how great it would have been if I could have pulled that data that I know was available, even if we weren't capturing it. I think AI opens the potential for that sort of really powerful empowerment of people to make the most of the information that's around them, and so I'm excited about it.
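Chilson's Fitbit anecdote is a nice miniature of the "data we collect but don't interrogate" theme, so here is a hypothetical sketch of the kind of query he wished he could run: scan a wearable's heart-rate log for the first reading that jumps well above its recent baseline. The sample data, the window, and the 1.4x threshold are all invented for illustration; no real device API is assumed.

```python
# Hypothetical sketch: find when a wearable's heart-rate log first spiked
# well above its recent baseline. Sample data and the 1.4x threshold are
# invented for illustration; this is not any real device's API.
from datetime import datetime

samples = [  # (timestamp, beats per minute), invented readings
    (datetime(2024, 5, 1, 21, 2), 62),
    (datetime(2024, 5, 1, 21, 3), 64),
    (datetime(2024, 5, 1, 21, 4), 61),
    (datetime(2024, 5, 1, 21, 5), 97),   # the scare begins here
    (datetime(2024, 5, 1, 21, 6), 103),
]

WINDOW, THRESHOLD = 3, 1.4  # compare each reading to the mean of the prior 3

for i in range(WINDOW, len(samples)):
    baseline = sum(bpm for _, bpm in samples[i - WINDOW:i]) / WINDOW
    timestamp, bpm = samples[i]
    if bpm > THRESHOLD * baseline:
        print(f"Spike at {timestamp:%H:%M}: {bpm} bpm vs ~{baseline:.0f} baseline")
        break
```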
I really want to ask you a little more about the Abundance Institute, because it's absolutely what I believe in, and I'm fascinated that you're doing it. But I noticed that you have focused on artificial intelligence and energy as your two major emphases. Why is that?
Thank you so much for your interest in what we're doing at the Abundance Institute. As you said, we just launched, this week actually, although we've been in soft launch for a little while. Right now we're very focused on AI and energy, because artificial intelligence is not only a hot topic, but this issue is not going away. The way people are talking about AI right now, the cultural and the policy decisions that we make right now, will have ramifications for the next fifty years, if not longer, and in a space that's moving this fast, those are really huge impacts. You already mentioned some of the numbers about why AI could drive such tremendous growth across the US economy and across the world. But if we don't get this right, we're going to cede the lead that the US has to other nations such as China, and I think it is really important that we get this right right now, which is part of why we're focusing on it. Energy, similarly, has enormous opportunity. There are new technologies in this space that could really drive us into a time of energy abundance. We've been operating in the US since the seventies under a sort of scarcity mindset around energy, the idea being that there's a limited amount of it and that our goal should be to conserve it. Whereas we know that the wealthiest countries are those that produce the most energy, and so here in the US we need to get better at producing energy. We need to do it in a way that's sustainable and creates a good environment for our people. But there's no reason that we should be stagnating when it comes to the amount of energy that we're creating. We have the technology, we have the capabilities. The main things that are holding us back are regulatory barriers, and we need to get those out of the way so that we can do things like bring nuclear back to the US in a way that produces a huge abundance of energy, which we're going to need for many different things, including artificial intelligence. And so that's why we're very focused in those two areas. I will say that in the longer term, we are also very interested in the biology space, the biotech space. There is a huge amount of potential there, especially as some of these artificial intelligence therapies come online, for really just a total step change in the way that we deal with health and healthcare in this country and around the world. And so we're very excited about the potential in that space as well.
So if somebody wanted to get involved with the Abundance Institute, what would they do? What could they do?
You can reach out to us. Our website is abundance.institute, and we have an email at hello@abundance.institute. What we're trying to do is invest in talent and talent assembly. We're also trying to build a community of optimists, founders, and inventors to combat this very pessimistic mindset about the future of technology. That mindset is very prevalent in DC, but I think there are lots of people talking that way in other places as well. And so we welcome all people of a similar mindset who are excited about the future, who think that humans have great potential to create solutions to big problems, and who want to get involved. We would love to hear from you.
We will certainly put the connections on our show page, and we will encourage people to look at the Abundance Institute and to get involved in helping think about a positive future. Now, I want to thank you for joining me. I'm looking forward to seeing what the Abundance Institute can accomplish. I think it's exactly the right direction. I know you're just getting started, so I hope in a few months you'll come back and join us again and report on what you're working on.
I would love to, and in a few months the AI technology will probably be different than it is now, and we'll have plenty to talk about. So I very much welcome the opportunity.
Thank you to my guest, Neil Chilson. You can learn more about the Abundance Institute on our show page at newtsworld.com. Newt's World is produced by Gingrich 360 and iHeartMedia. Our executive producer is Guernsey Sloan. Our researcher is Rachel Peterson. The artwork for the show was created by Steve Penley. Special thanks to the team at Gingrich 360. If you've been enjoying Newt's World, I hope you'll go to Apple Podcasts and both rate us with five stars and give us a review so others can learn what it's all about. Right now, listeners of Newt's World can sign up for my three free weekly columns at gingrich360.com/newsletter. I'm Newt Gingrich. This is Newt's World.