Episode 691: The Power of A.I. – An Overview

Published May 2, 2024, 3:11 AM

Newt talks with Professor Ajay Agrawal, a key player in the world of Artificial Intelligence and author of Power and Prediction: The Disruptive Economics of Artificial Intelligence. Agrawal discusses the rapid evolution of AI, highlighting the significant advancements made in the last decade. He explains that AI's core function is to improve prediction, but it also requires human clarity for value judgments within those predictions. Agrawal also discusses the potential for AI to dramatically increase productivity and trigger a large reallocation of capital. He predicts that AI will force society to be more explicit about value judgments and trade-offs, leading to increased transparency.

On this episode of Newt's World, what do we need to understand about artificial intelligence now? And why has AI become the buzzword of late? My guest today is one of the key players in the world of AI. He's written about it extensively. His latest book is Power and Prediction: The Disruptive Economics of Artificial Intelligence. I'm really pleased to welcome my guest, Professor Ajay Agrawal. He is the Geoffrey Taber Chair in Entrepreneurship and Innovation and Professor of Strategic Management at the University of Toronto's Rotman School of Management. In addition, he is a research associate at the National Bureau of Economic Research in Cambridge, Massachusetts, and a faculty affiliate at the Vector Institute for Artificial Intelligence in Toronto, Canada. Ajay, welcome, and thank you for joining me on Newt's World.

Thanks very much for having me, Newt.

You know, it sounds like you're very busy.

Well, I suspect you're very busy too. So we're all busy these days.

Well, and you picked a field that is growing and evolving so rapidly. It must be fascinating. You just get up every day and figure out what's going on.

You know, I've been working in the area of economics of innovation for a quarter century, and certainly this feels like nothing I've felt before in terms of the pace at which all this is moving.

There's sort of an S curve of technological change where you start really slowly and then suddenly you shoot straight up. Do you feel like we're at the early stages of that kind of sudden acceleration?

Yes, that's the perfect way to describe it. It feels like the slope is changing. I've been working in this area for about a decade with regard to machine intelligence in particular. I started to focus on it in twenty twelve, and in the last twenty four months the slope has been changing.

What happened in twenty seventeen that was sort of a game changer in the world of AI?

Well, a number of things happened, in particular the use of a particular technique some people call transformers, which in addition started to demonstrate for us the value of compute for machine intelligence. So I think in terms of the timeline: in twenty fifteen, Elon Musk recruited one of my colleagues, a fellow named Ilya Sutskever, to co-found OpenAI with him. Then in twenty eighteen they circulated the first draft of GPT for comments and feedback, and in twenty nineteen the second version of GPT for more feedback. Later that year Microsoft made their billion dollar investment in OpenAI, which was quite astonishing because at that time OpenAI was still a very small company, so it was a big investment for such a small team. And then in November of twenty twenty two they released ChatGPT, and that seemed to really set off, outside the computer science community, a real shift in people's perceptions. I recently saw a graph of the number of times generative AI has been mentioned on corporate earnings calls, and it's exactly what you describe in your S curve. It was almost nothing, almost nothing, and then November twenty twenty two, all of a sudden it started to take off, and then through twenty twenty three it just went through the roof.

One of the things that seems to be evolving in all of this is the concept of scale, in a way that would not have been true, say, fifteen years ago. What does that mean and why is it so important?

So usually when people refer to scale, they're talking about the size of their AI model, their neural network. They'll use something like, for example, the number of parameters in the model. You can think of that as just the size. Sort of picture in your mind a neural network, or even a tree, a tree with branches. It's got some big branches, and each big branch has smaller branches, and those branches have smaller branches, and so it just branches out. As you make the model bigger, it's got more and more branches and sub-branches, so the amount of computation required in order to, what they call, train the model increases. To give you just a sense of scale, for the language models produced by OpenAI, the first version of GPT had about one hundred million parameters, the second version had one point five billion, so fifteen times as much, and then the next version had something like one hundred seventy five billion parameters. So from one point five billion to one hundred seventy five billion, you can sort of see the huge jumps. And when they demonstrated what the capability was at that scale, it went from kind of a neat toy that computer scientists would show each other, you know, oh, look, you can type in a prompt and a machine generates a response, but it was not usable in a professional setting, to, once they scaled it up, showing that the same kind of model at a much larger scale could now generate text that was indistinguishable from human writing. So it went from not usable to usable, and at that moment everybody took notice and said, okay, this has now become an exercise in scaling. And that meant buying access to compute, and all of a sudden, Nvidia and Taiwan moved onto everyone's radar as mission critical.
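The jumps in scale described here are easy to sanity check with a little arithmetic. A minimal sketch, using the roughly one hundred million and one point five billion figures from the conversation, plus the widely reported one hundred seventy five billion parameter count for GPT-3 (all round, publicly cited numbers, not exact specifications):

```python
# Rough scale comparison across successive GPT models.
# Counts are round public figures used for illustration only.
gpt_params = {
    "GPT-1": 100_000_000,       # "about one hundred million"
    "GPT-2": 1_500_000_000,     # 1.5 billion, fifteen times larger
    "GPT-3": 175_000_000_000,   # widely reported at 175 billion
}

names = list(gpt_params)
for prev, curr in zip(names, names[1:]):
    factor = gpt_params[curr] / gpt_params[prev]
    print(f"{prev} -> {curr}: ~{factor:.0f}x more parameters")
```

Each generation is not a marginal improvement but a multiplicative jump of one to two orders of magnitude, which is why compute, rather than algorithms alone, became the bottleneck.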

In a sense, it's like a catalytic moment in chemistry, where all of a sudden the things that had not quite blended together suddenly do. Is that an accurate kind of parallel?

Yes, it's exactly right, Newt. We've had a few of these catalytic moments, I think, in this current wave that we're in. The first one happened in twenty twelve. A professor at Stanford named Fei-Fei Li was making a very big bet with her career that labeled data would be very important for creating machine intelligence, but she hadn't been able to prove it. Meanwhile in Toronto, a professor named Geoff Hinton had been building a type of algorithm, but he hadn't been able to prove that his algorithm was really as powerful as he believed. Then in twenty twelve, some of his students brought his algorithm to Fei-Fei Li's competition at Stanford, and that was the first catalytic moment. It was bringing those two things together: that algorithm, now called deep learning, with her large set of what they call labeled data, which was putting labels on pictures, like this is a door, this is a cat, this is a horse. Bringing those two things together was a catalytic moment. And then ChatGPT has been another catalytic moment. And now, as we all read in the papers every morning, there's so much capital people are pouring in in order to race to get compute to scale their models.

What does deep learning mean?

Deep learning is a type of computational approach, and it's the basis of a lot of what today we call artificial intelligence. Probably the most important thing for your listeners to consider, when they're hearing about AI or they try something like ChatGPT and it seems magical, is that there's no ghost in the machine. It's all computational statistics that does prediction. Deep learning is a type of computational statistics that does prediction, and almost all the AI that we talk about today is some form of computational statistics doing predictions. So, for example, if you see AI that's, let's say, like the language model ChatGPT, that is a model where you give it a prompt and then it predicts. It is a prediction machine. It predicts what's the most plausible sequence of words to follow your prompt. If you hear about AIs being used at banks, for example for fraud detection, that's AI being used as computational statistics that predicts, when a transaction is going through, whether it's a legitimate transaction or fraudulent. If you hear about AI in medicine doing diagnostics, like early detection of breast cancer or early detection of Alzheimer's, the AI is reading the medical records and reading the medical images and then making a prediction of the likelihood that, for example, someone's got early onset breast cancer. All of these applications of AI that you hear of, many of them are using some form of deep learning, and that is effectively doing statistics to generate predictions.
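The "prediction machine" framing above can be made concrete with a toy version of the fraud example. This is a hypothetical sketch, not any bank's actual system: a tiny logistic regression, the simplest form of the computational statistics being described, with feature names and weights invented purely for illustration.

```python
import math

# A toy "prediction machine": logistic regression scoring a transaction.
# The features and weights below are made up for illustration; a real
# system would learn them from millions of labeled transactions.
weights = {"amount_usd": 0.002, "foreign_country": 1.5, "night_time": 0.8}
bias = -4.0

def fraud_probability(transaction):
    # Weighted sum of features, squashed into a 0-1 probability (sigmoid).
    z = bias + sum(weights[k] * transaction.get(k, 0.0) for k in weights)
    return 1.0 / (1.0 + math.exp(-z))

# A large foreign transaction at night scores as roughly a coin flip.
tx = {"amount_usd": 950.0, "foreign_country": 1, "night_time": 1}
print(f"predicted fraud probability: {fraud_probability(tx):.2f}")
```

There is no understanding anywhere in this code, only arithmetic that converts observed features into a predicted probability, which is the point: scale it up by many orders of magnitude and you get the systems discussed in this episode.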

Nvidia suddenly showed up as a really big player. What is it that they did, and why is that central to the next phase of AI?

Nvidia has produced chips, processors that are particularly well suited for this type of machine learning, deep learning, these neural net types of calculations. There are two main things people have to do when they're building AI systems. One is they have to train their AI model, and that requires a lot of compute. And then when you're running it, what they call inference, to actually make its predictions, that's a second category of compute.

And so Nvidia's chips are particularly well designed for doing these types of calculations.
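The training versus inference distinction can be sketched in a few lines. This is a deliberately tiny, hypothetical model, nothing like a production system, but it shows why the two phases consume compute so differently: training loops over the data many times, while inference is a single cheap calculation per prediction.

```python
# Training vs. inference in miniature: a one-parameter linear model
# fit by gradient descent. Data and learning rate are illustrative.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs, roughly y = 2x

w = 0.0
for _ in range(500):  # TRAINING: many passes over the data (compute-heavy)
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad  # gradient descent step

def predict(x):       # INFERENCE: one cheap multiply per prediction
    return w * x

print(f"learned w = {w:.2f}, predict(4) = {predict(4):.2f}")
```

In real systems the same asymmetry holds at vastly larger scale, which is why training frontier models requires data centers full of accelerators while serving a single prediction is comparatively cheap.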

And the Nvidia chip, which as I understand it is a much more powerful chip than the traditional chips it has now, in effect, replaced for AI. It actually allows you to have an embedded algorithm in the chip itself. Is that accurate?

Different applications of AI are using these differently. The most common applications that you're hearing about are people that are using Nvidia chips in data centers. They are setting up their model in someone's data center, and that thing is very often running on Nvidia chips. They might be running it either on their own premises or, for example, on Google Cloud or on Amazon Web Services or Microsoft's cloud, and very often those are using Nvidia chips.

I have a sense that one of the choke points in the next phase of AI may turn out to be electricity, that the sheer volume of electricity that this system takes is astonishing and getting bigger. Your point about scale, I mean, as we get better and better and do more and more complex things, as I understand it, we're going to use more and more power. In your judgment, looking at the way things are evolving, where's that power going to come from?

Some people were speculating about this five or ten years ago, but I don't think anybody expected that we were going to arrive at this power topic as quickly as we have. So you're right. Especially with that catalytic moment you referred to, when everybody saw ChatGPT for the first time in November of twenty twenty two, that revealed the benefits of scaling. In the short term, the bottleneck isn't electricity, it's compute. It's getting access to these chips, and the demand for the chips shot up once people saw what you could do with them. At some point, as more and more supply comes online, and we've got lots of policies now to create further incentives to create capacity onshore, the CHIPS Act and so on, eventually the supply of compute will start to catch up with the demand. And as that supply increases, the cost of the chips will go down. We'll use more and more compute, and therefore the final stopping point is power, like you say. The demand for energy is already meaningful, but it will become much, much bigger as the cost of compute goes down, because right now what's limiting the use of electricity is just that compute is so expensive. As compute becomes cheaper, we demand more and more power to light up these chips. And so when you ask where's it going to come from, that's a great question for people who are trying to figure out what the evolution of the energy market looks like. Obviously, we're making all kinds of investments in trying to create domestic sources of power, for national security reasons as well as for cost reasons, and the urgency of that, I think, Newt, is much greater today than it was even twenty four months ago, because we never expected the demand for power to shoot up as fast as it has.

To shift just for a minute, what is a data wall?

Is there a moment in time where the sheer volume of information is so massive that we sort of run into a wall because we just literally can't handle any more?

There are concerns about hitting a wall. It's not so much that we can't handle any more data; it's that we've run out of data. In other words, these models have ingested everything on the Internet and there's nothing left for them to read. People talk about running out of high quality data and moving to more and more marginal quality data. I'm not convinced that that is a meaningful wall, and the reason is that most of the people saying that have largely come from the community working on what's called NLP, natural language processing, and what they're focused on is that they've run out of words on the Internet. But there are all kinds of other data that we are just beginning to scratch the surface on. One key example that I think is particularly promising is real world data. In other words, there's the data on the Internet, words in digital format, but there's also all the data of how we interact with the world, and so far we are just beginning to collect and use that through robots: robots that touch things, that pick things up, that throw things, that read things, that speak to people and get reactions from them. All of that is a whole new frontier of data for training models for the real world. Probably the largest collection of real world data is owned by Tesla, with all their cars. Think of that. That's just one company with one modality of sensors on a car, and there's so much more of that type of data, and we've only begun to collect it. So I know that the language people have been starting to raise alarm bells that they're running out of words, but I think words are just the beginning, and there's a lot more data that we're really just beginning to explore.

What's your sense of the impact of AI on society over the next ten or fifteen years?

I think it's going to have a very significant impact because the capability is so foundational. Think of all the things that we currently do with computers. If somebody calls you to ask you something, you'll interpret it with your brain, and then you might go look something up in a database.

You'll go and use computers for specific things that involve well structured numerical data in a database. And then every time you get to something unstructured, you know, a person will do it. And so our workflow is like people, people, and then access a machine for a database lookup or a search of something, and then back to people, people. Now that machines can do language, it's really a very significant unlock for doing so many things. The way I'm thinking about it, I think of three things. First off is productivity. I'm meeting now with a number of boards of directors and senior leadership teams, and where I see people converging is, you know, a CEO will ask each of their direct reports to come back to them with a plan for how they're going to improve productivity in their area by twenty percent over the next two years. So twenty percent seems to be a number that many are converging on, and that's for everybody, every direct report in a company to the CEO. Some of them run a business, so they'll have a P&L, a profit and loss, and their job is to come back with how they're going to increase productivity by twenty percent in two years using machine intelligence. Then others run what they call functional areas, not a business unit that has a profit and loss, but let's say finance or marketing or HR. They also have to come back with a plan for how they're going to improve their productivity by twenty percent. Twenty percent productivity means either you increase the numerator, the output, so let's say that's the sales or something, by twenty percent, or you shrink the denominator, like taking out twenty percent of the costs. So that's the first thing, productivity. It seems like over the past few years companies have been largely running pilots. Now they are starting to move them from pilots into production. So my guess is over the next two to three years we'll start seeing the first companies have very significant productivity gains.
And when that happens, I suspect it will trigger the second thing, which I think will be very significant: I wouldn't be surprised if we see over the next decade or two the largest reallocation of capital since the Second World War. And the reason that will happen is because there will be leaders and laggards in adopting AI. You know, everyone right now is focused on the AI companies themselves, like the Googles and Microsofts and Metas and so on.

But you know, the real impact is not going to be them. It's going to be all of the companies that are enabled, whether they're energy companies or car companies or drug discovery companies or retailers or banks or insurance companies, whatever they are, and there will be leaders and laggards. In the beginning, the leaders will just be viewed as a little more tech savvy, but very soon those that have the twenty percent productivity lift will become so much more productive that the others simply can't keep up. And that's when we'll see the big capital reallocation, all the capital starting to move to the ones that are more productive, and that will be, I think, a period of very significant creative destruction. The third thing is seeing new things. The first two things I described are really just banks doing what banks do, but more efficiently, and retailers doing what retailers do, but more efficiently. Then there will be all the new stuff, like the technological revolutions that you've described. We've already seen some canaries in the coal mine. An example is Uber. We've lived through the transition from every city having a handful of taxi companies to a global company that's really a software company that uses machine intelligence to allocate riders and drivers and predict the best route between two locations. In the United States, before Uber, there were about two hundred thousand people that drove taxis and limousines. Today there are between three and four million people that drive for Uber. Imagine, let's say, each of those people brings a twenty five thousand dollar car onto the road. Twenty five thousand dollars and four million people: it's one hundred billion dollars of capex brought into the transportation system, unlocked by a navigational AI.

You talk about a concept I find fascinating, and that is the notion that there are between times. Describe for us what you mean by between times.

Yeah, the between times, we think, is the time between when we've all observed the power of the technology on the one end and when it's widely deployed on the other. That would be equivalent to the late eighteen hundreds with electricity. In the late eighteen hundreds, there were demonstrations of electricity and everyone got to see what it could do, but almost nothing was electrified. Even twenty years after the demonstration of electricity, less than three percent of factories in the United States were electrified, so it took quite a bit of time for it to move into broad use. That's the period we're in now. With electricity, linking that to your question on the transformation: the original value proposition of electricity was that it will reduce your input costs. If you have oil lamps in your factory, it will make lighting a little cheaper. Nobody wanted to tear out the infrastructure in their factories just so they could shave off a little bit of input costs.

You know, once a few entrepreneurs were building new factories and decided to experiment with electricity, they realized that it did more than reduce their input costs. For example, they didn't need all the heavy infrastructure that was required to hold up the steel shafts that were previously used, like if you had steam, for example, turning the steel shafts, and the steel shafts had big wheels on them, and on those wheels were pulleys, and the pulleys were attached to each of the machines. You didn't need any of that infrastructure, and so the construction of the factories became much more lightweight and cheaper. And then you didn't need multi story factories anymore, because now you didn't need to have all your machines so close to the power source; you could just have a cable and a motor anywhere on the factory floor. So it allowed for single story factories. And once you had single story factories, you could redesign the layout of the workflow of the materials and the people and the machinery. So they started getting all these additional productivity gains as they kept innovating on what they could do, all unlocked by moving to electricity. And that's what explains the between times: why there was, back to your S curve, very slow adoption in the early days of electricity, and then, as people started to realize the increasing benefits of using electricity and how it enabled a total redesign of the factory, more and more adopted it, and we hit the very steep part of the S curve. The period we're in now is that between times. We've seen the capability of AI, but we've hardly scratched the surface of its use. We're just getting going. But when we hit that steep part of the curve, there's likely to be a very dramatic amount of capital reallocation.

One of the things I work on, I'd be very curious to get your reaction. I'm trying to get the Congress to put together a special Committee on the Application of Artificial Intelligence to rethinking the government.

Not to think about how to apply AI to the current government, but to ask yourself, if this is really the future, then what would the shape and nature of an effective system be? I tell people the Pentagon was opened the year I was born, nineteen forty three. It was designed so that twenty six thousand people could manage a global war with manual typewriters, carbon paper, and filing cabinets. Today we have smartphones, iPads, laptops, and twenty six thousand people. There's got to be some information exchange rate between carbon paper, manual typewriters, and filing cabinets and our current system. I tell audiences all the time, if you reduced the Pentagon to a triangle, you would actually have a dramatically better system. So part of what I've been thinking about is to back out from how do we apply AI to get marginal improvements, and instead ask the question: given the emerging reality of AI, if you didn't already have it, what would be the nature of the government you would design? Because I think it would be dramatically different.

Yeah, I mean, that's a great question. I think first of all it begs the question, what is the role of government? My first step would be to go back to first principles of what government is there to provide. I'll just take a couple of elements of government. One of the things that government does is make decisions that are designed to benefit people, and those decisions always have trade offs. In other words, any decision that benefits some will harm others, and how we make those trade offs is a function of values. What the AIs will be able to do that people cannot do is hold much more information, think of it as holding it in their minds concurrently, and optimize. There have been a number of great, careful studies of the ability of AIs to make decisions that are superhuman. For example, a doctor's decision when somebody comes in who might be having a heart attack, and the doctor has to decide whether to send them for a test, which is costly. A research team at the University of Chicago did a great study where the AI learned how to make these decisions and became superhuman. It was able to send for testing more of the people who should have been recommended for a test, while for many of those who ended up getting tested but, in retrospect, didn't need the test, the AI does not send them for a test. So it's much more efficient. It's a much better decision maker than docs, on average, for a complex decision like that. So, on the one hand, these AIs are much better at holding lots of disparate information in their minds and then making predictions. What they don't have is judgment. AIs have zero judgment. Only people have judgment. And judgment has to do with those trade offs, which is, if we advocate for a policy that does X, it's going to benefit these people but harm those people. Which are the trades that we should make, and under what conditions?
And so I would imagine this government of the future will be far more efficient. It will make much higher quality decisions. Now, whether we decide to follow them is a separate issue, but it will force us to be much more explicit about our value judgments and trade offs, because that will be explicit. In other words, we'll have to explicitly communicate those to the AI in order for it to complete its optimization tasks. So I think it's going to make things far more transparent than they are today. In order to benefit from the power of the AI, we'll have to be much more transparent.

I hadn't thought about it this way. You're really saying that to effectively interact with them, you actually have to improve your own understanding of what you're asking AI to do. It actually enhances the requirement for human clarity as a prelude to getting machine clarity.

That's exactly right. One of the scenarios that people often describe, a very common one, is what they call the trolley problem, where the car is all of a sudden spinning out of control and the driver has to make a split second decision of whether to hit two older people on one side of the road or one child on the other side. That doesn't show up on the driving test. But you're put into these scenarios: you're racing along a street, and it's wet, and it's nighttime, and the light turns yellow, and you have to decide whether to step on the gas and get through the light or step on the brake, where you might skid into the intersection. You've got to make a split second decision. Now, no one's ever asked us what we would decide in each of those scenarios, and you and I might be different. But the AI needs to be given guidance. It'll be put through many scenarios, and it will predict what it thinks it should do, and humans will observe it and then either say yes, you should do that, or no, you shouldn't. That means we'll have to make it explicit. The trolley problem is a small example, but in each of those cases you have to make it explicit and basically imbue the AI with our values.

Which gets me to my last big question for you. You talk about how AI's core function is to improve prediction. If I'm listening to you, you're also suggesting that we have to spell out with clarity the value judgments we want inside that prediction.

Yes. The practical uses of philosophers have been reasonably limited over the last number of decades, because a philosophy degree is not that practical. I think we're going to enter a period where all of a sudden that becomes valuable again, because as a society we're going to have to make a lot of philosophical decisions. We've already started to see the very first glimmers of this, for example with what happened with Google's Gemini that was in the news: AIs that are doing things, and we have to decide, are they aligned with our values? When they're just doing a few things and they're like party tricks, it doesn't really matter. But when we start embedding machine intelligence into our banks, deciding who gets loans; into our insurance companies, deciding whether a claim is legitimate and should be paid; into our medical systems, deciding which people should get which treatments; in every case we are going to be confronted with this. It is a decision that we are in some sense handing to the machine because it's got such superior prediction capabilities, but it doesn't have judgment, and so we will now have to make our judgment explicit. And it can't be left to the discretion of the person, because the person is not doing it. So it will become, I think, a flourishing of debate and discussion, bringing our judgment from the recesses of our minds out into a public discussion.

Well, let me say this has been an exhilarating conversation. Ajay, thank you for joining me. Your contributions to artificial intelligence and machine learning are amazing, and the way you're thinking it through is extraordinarily helpful. I want to encourage our listeners to get a copy of your book, Power and Prediction: The Disruptive Economics of Artificial Intelligence. It is available now on Amazon and in bookstores everywhere, and people can learn more at your website, Agrawal dot ca. I really appreciate you taking time to be with us.

Thank you very much for your interest in my work. It's been a real pleasure to be here.

Thank you to my guest, Professor Agrawal. You can get a link to buy his book, Power and Prediction, on our show page at newtsworld dot com. Newt's World is produced by Gingrich 360 and iHeartMedia. Our executive producer is Guernsey Sloan. Our researcher is Rachel Peterson. The artwork for the show was created by Steve Penley. Special thanks to the team at Gingrich 360. If you've been enjoying Newt's World, I hope you'll go to Apple Podcasts and both rate us with five stars and give us a review so others can learn what it's all about. Right now, listeners of Newt's World can sign up for my three free weekly columns at Gingrich 360 dot com slash newsletter.

I'm Newt Gingrich. This is Newt's World.
