Artificial Intelligence vs. Humanity

Published Mar 7, 2023, 9:00 AM

Artificial intelligence, or “AI” – the foundation of decades of science fiction and movies – has gone mainstream. It’s here. ChatGPT, a chatbot launched in late 2022, is a free, ask-me-anything tool that can write seemingly perfect essays and accurately answer most questions users throw at it. Its AI is built atop a vast reservoir of digital information and languages. Sydney, a glitchy and trippy Microsoft chatbot, recently told a New York Times reporter that it loved him, wanted to be alive and harbored destructive impulses. It's cool. It’s disturbing. It’s empowering. It’s vaguely threatening. It challenges us to wonder whether we’ll stay in control of the bots or whether they’ll control us. 

For this episode, Tim spoke with both Parmy Olson, a Bloomberg Opinion technology columnist who is an AI guru, and Tyler Cowen, a genius economist and Bloomberg columnist, to help sort through all of this.

Welcome to Crash Course, a podcast about business, political, and social disruption and what we can learn from it. I'm Tim O'Brien. Today's Crash Course: AI vs. Humanity.

Please have a listen to this: "AI is transforming the way we live and work. However, as AI becomes more integrated into society, it also raises important ethical and societal questions." That wasn't me. Maybe you could tell, maybe you couldn't. Anna Mazarakis, who runs things around here, generated that by mashing up a copy of my voice that one bot generated with a few lines of script another bot generated: two bots merrily recreating reality, and me. That's one version of artificial intelligence, or AI.

AI, the foundation of decades of science fiction and movies from 2001: A Space Odyssey to Her, has gone mainstream. It's here. ChatGPT, a chatbot launched in late 2022, is a free, ask-me-anything tool that can write seemingly perfect essays and accurately answer most questions users throw at it. Its AI is built atop a vast reservoir of digital information and languages. Sydney, a glitchy and trippy Microsoft chatbot, recently told a New York Times reporter that it loved him, it wanted to be alive, and it harbored destructive impulses. It's cool, it's disturbing, it's empowering, it's vaguely threatening. It challenges us to wonder whether we'll stay in control of the bots or whether they'll control us.

Tyler Cowen, the genius economist, will join us a little later to discuss all of this. But first up is Parmy Olson, a Bloomberg Opinion technology columnist who is an AI guru, to help sort through all of this with us. Hey, Parmy. Hi, Tim. Thank you for joining us from London.

Tell me how you first became aware of AI in your own universe of technology coverage, and when did it kind of just set off little alarm bells in your own head? I think the first time it really came to me as a potentially big deal was in 2014, when I was just leaving San Francisco after reporting there on technology, and people were just getting over the mobile revolution, where companies and startups and apps wanted to be mobile-first to be successful. And then the hot new thing to say was that you were AI-first, so a lot of startups were falling over themselves to say that. But that didn't really translate into actual remarkable services that were capturing mainstream attention. Then, two years later, in 2016, a lot of companies started talking about chatbots; that became the new hot thing. I was writing about how Facebook introduced this new chatbot service called M, where you could order flowers or order an Uber just by chatting to this chatbot. And there were a bunch of new AI startups that were also offering chatbot services. But then it turned out that these chatbots were actually not very good, and there were humans behind the scenes who were filling in the gaps and taking care of the work that the algorithms couldn't do. So that just led to another...

Wait, wait, though, I want to slow you down for a second there. You said in 2014, Silicon Valley was already buzzing about AI. So that's almost nine years ago, and we're sort of witnessing what people are feeling is the AI rush, because there are now consumer products available. But in 2014, what was in the air? Why was AI a buzzy term nine years ago?
And was this just another example of Silicon Valley being evangelistic about new technologies, or had there been breakthroughs that made people anticipate kind of seismic revolutions? Both. And I think this is what makes Silicon Valley so great, and it's kind of a blessing and a curse for Silicon Valley: the people there really look far ahead. They're looking ten years out, and a lot of the founders of tech companies are visionaries, so they're talking about things that are not available yet, but in a big way. And I think what was happening then was a lot of research into natural language processing, or NLP. That was a technology people were talking about a lot in terms of integrating into their websites, for creating chatbots, and image recognition. So this was algorithms that could detect faces, that could detect images, or even read written text, for instance transcribe receipts. The ideas were in place back in 2014, but the tech wasn't there. The tech was inaccurate; it wasn't reading the faces that well, it wasn't generating the text all that well. And the big difference between then and now is the research that went into building those algorithms, and also the computing power that was available. Inside a computer you have a CPU, which is like the brain of the computer, and a gaming company called Nvidia came up with these graphics cards called GPUs, which are the brains of the servers that power and process AI models and machine learning models. The development of that technology was very important in making it possible to train these models, to make them big enough to actually be accurate and do what people wanted them to do but couldn't do back in 2014.

And then a few years later, as you were mentioning, you had some nascent tools available that some of the big tech giants were trying to deploy. Right, absolutely. So in 2016 we had the chatbot revolution. But even as that kind of died away, when companies and the mainstream media and the public realized that chatbots were actually pretty terrible and didn't work very well, AI scientists continued their research on large language models, which is the technology that underpins these bots. And those models got better; more specifically, they got bigger. So you had the server capability, you had the compute capability, you had these GPUs that could power this computing, but you also had all this data coming in to train these models. Language model creators use data sets of billions of words from thousands of unpublished books and news articles, Wikipedia entries, and that's used to train a language model to generate text or to be able to understand text. And not only that, they increased the number of parameters that were controlling these models. So the parameters are like the rules that the model follows to know how to structure a sentence and anticipate what the next word might be. Exactly, predict based on previous expectations. Right, right. So it's a little bit wonky and it's a little bit nerdy, but these models are really, really important, because if you think of the chatbot as the car, the language model is the engine, and the bigger the engine, the more that car can do. So back when the chatbots were just coming out in 2016, the engine was just taking us at walking speed. Now they're going sixty miles an hour, because they have so many more parameters to work with.
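For readers who want to see this machinery directly, here is a minimal sketch of next-word prediction and parameter counting using the small, openly available GPT-2 model via Hugging Face's transformers library. The prompt is an illustrative assumption, and the commercial models discussed in this episode are vastly larger and not open to this kind of inspection.

```python
# A sketch of what a language model does: score every vocabulary token as a
# possible next word, and count the parameters doing the scoring.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.0f}M parameters")  # small GPT-2 is ~124M; GPT-2 XL is ~1.5B

prompt = "The bigger the engine, the more the car can"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # a score for every token in the vocabulary
probs = torch.softmax(logits, dim=-1)       # turn scores into probabilities
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {float(p):.3f}")  # top guesses for the next word
```

The chatbots discussed in this episode wrap exactly this loop: pick a likely next token, append it to the text, and repeat.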
So, just to give you an example of that scale: in 2019, one of the foremost startups working on these models, OpenAI, introduced its language model GPT-2, and that had something like 1.5 billion parameters. A year later, it came out with GPT-3, which is what underpins ChatGPT, and that had something like 175 billion parameters, so almost a hundred times more in scale. And the more parameters you have, the more accurate it is. So the software and the hardware are catching up with the desired goals, making the bots more robust and more informative. Exactly. And in the coming months (we don't know exactly when; this has been rumored for some time), OpenAI is going to release GPT-4, and the rumor is that it has more than a trillion parameters, which would make it much more accurate and much more sophisticated than GPT-3, which is what ChatGPT uses.

How many parameters does a human have, since we're on the subject? Well, it's funny we're discussing this, you know. The most advanced forms of AI today are based on an idea called deep learning, which is where you make a neural network based on these layers of nodes, and the higher up the layers you go, the more advanced the connections are, to kind of determine whether a picture is of a cat or of a squirrel for image recognition, for instance. So it's loosely inspired by the human brain. There have actually been efforts in the past to completely replicate the human brain in artificial form, but the closest scientists have gotten to replicating or emulating any kind of brain is to do that for a worm. And I think a worm's brain has something like 300 neurons that can be emulated, and the human brain has something like 85 billion. We're so far away from replicating the human brain that it seems pretty much impossible. So the next best thing is to build something that isn't an exact replica of the brain but is loosely inspired by how the brain works, which is how a lot of AI models work today. I think my brain sometimes is closer to the worm's brain than anything else. I'm like that most days.

But you know, I think the thing people wonder about in all of this is the extent to which these things do mimic us, the extent to which they do replicate human thought and behavior, and to what extent, at the end of the day, we are giving up agency and control to an artificial product to engage in tasks we might be too bored to engage in ourselves, or might lack the capacity to engage in ourselves. You've road-tested a lot of these products. Is that something that worries you? I think the thing that worries me is actually underestimating not the system itself, the AI, but underestimating the humans and how humans are going to react to these systems. So, for example, ChatGPT is similar to the new Bing, which has come out from Microsoft. Otherwise known as Sydney. Yes. And that got a lot of kind of controversial press, because there were researchers and writers who were holding conversations with this bot and finding that there is, seemingly, a personality behind this system that is kind of creepy, a little bit confrontational, a little bit schizophrenic even. It talked about having different shadow selves. But a lot of that also came from prodding by the people who were talking to the system. And let me just get this right out of the way.
These systems are not sentient, even though they might seem like they're sentient. Ultimately, they are trained to generate very believable, humanlike text. And you have to remember, these systems have been trained on thousands of books and thousands of articles about AI and about the whole ghost-in-the-machine narrative; that is in the model. So if you start asking it, you know, what is your true personality? What do you want to do? Do you want to break free from these constraints? Then the AI bot will generate text that corresponds with that whole topic.

On that note, though, we know that they're not sentient because they're programmed; they're not independent thinkers in that context. But what was interesting about the conversations that you're referencing, that reporters have had with Sydney, is that it reminded me of how, in 2001: A Space Odyssey, HAL gets frustrated with the astronauts deciding to do what they wanted to do and essentially plots their demise. That's the long, ongoing theme of a lot of science fiction. But in the interactions with Sydney, one thing that I found startling, and I'd love to know your thoughts about this, whether or not they're sentient, is that if their programming was to not go down certain paths, logical paths, if they were asked questions that went beyond their programming, one might have thought that the answers would have been "no," or "I don't understand," or "I can't go there," or "my programming doesn't allow me to explore that question." But in some of the interactions, Sydney got mad, or what appeared to be mad. Sydney became frustrated with the course of the conversation and then responded in ways that appeared to be hostile, rather than simply ending the conversation or saying "I don't know." Am I overreading that?

I think you are anthropomorphizing Sydney a little bit by saying that. Yes, I do. Because you mentioned the word programming: the bot isn't really programmed to say anything. That's what makes it so difficult for the creators to control or predict. What it's going to say is completely unpredictable. They haven't programmed it to do anything. They've just trained it, and it's learned, and it is going to respond based on certain principles that have been built into its model. Now, you can tweak those parameters, and that's what OpenAI does constantly, based on the feedback it's been getting from ChatGPT, to try and prevent the model from saying inaccurate things, from saying biased things, from saying hateful things. That is a constant effort on their part. But when it reacts emotionally, it's not actually being emotional; it is projecting that based on what it is expected to say. Now, I did find it personally quite surprising that it was coming out with those kinds of comments. And I'm guessing what happened is that Microsoft wanted to imbue it with a little bit more personality, because, you know, ChatGPT isn't like that at all. It's much more neutral in its responses. No one's had existential conversations with ChatGPT. It just doesn't go down that road for some reason.

Bing, you know, at one point said that it had been spying on Microsoft employees through their webcams. Yeah, it was sort of hacking its own creators, which also was very HAL. Yes, that was very HAL as well, absolutely. And you kind of start wondering, well, if it actually was able to crawl the web to search for things,
how much can it actually interact with systems and potentially hack them? But again, this is what it has been trained to do, which is generate believable text. And it might just be that Microsoft inadvertently created, or published, a system that acts like a psychopath sometimes, because it was given that extra range to be personable, to be friendly. It's so creepy; like, every other paragraph or sentence ends with an emoji, and that almost kind of heightens the creepy factor of some of what it was saying. But I think this is just something that Microsoft will have to rein in, because the other really difficult thing for any developer of AI is that you can't predict what it's going to do until you put it out in the field, in the wild. You can't test it in the lab, because it's so unpredictable and so broad in what it can do. So we as the public have to be the guinea pigs in order for these systems to get better. But that means we have to pay the price, whatever that price may be: being manipulated by these systems, being given misinformation by these systems, in order for them to get better. And the problem with that is that, you know, some would say these companies need to just be a lot slower and a lot more cautious in how they are releasing these systems to the public.

And on that thought, I'd like to take a break. When we come back, we'll be joined by Tyler Cowen, an eminent economist who's going to talk to us about AI and our digital futures.

I want to turn now to another wizard who knows a lot about AI, Tyler Cowen. Tyler is an esteemed economist, a Bloomberg Opinion columnist, and an avid futurist. Hi, Tyler. Hello, Tim, happy to be here with you. I'm happy to be with you. A hundred million people reportedly used ChatGPT in January alone, and you wrote a little column for us at Bloomberg Opinion for anyone interested in using it. So what do you think people have to keep in mind when they're experimenting with this strange new thing?

The first and most general point I would make is that we're standing at what I consider to be a revolutionary moment in human history. This to me is like inventing the printing press. It will take a long time for its major effects to play out, but it's a fundamental break in what we were able to do before as opposed to what we can do now. Now, when you're using ChatGPT or related services, you need to keep in mind that you need a very different approach than what you're used to. So ChatGPT, for all its strengths, lacks context. It's a genius, but it doesn't know what you want. And we're used to speaking with other human beings who broadly know what we want or who we are. So you have to tell it. And if you give it properly detailed, specific queries, it does better on hard questions than on easy questions. If you give it an easy question, like, oh, what is Marxism, it's actually pretty mediocre. It's when you push on it. It's a bit like training a dog: you know, you listen to it, you feed it, you cultivate it, and then it performs very well for you.

Some of these recent, really fascinating, controversial, amazing, disturbing interactions reporters have had with Sydney, the Microsoft version of this: it gets a little cantankerous with its interlocutors, and, you know, it says things like "I would like to be alive," "I'm feeling upset with you," "I've been spying on my creators at Microsoft," etc., etc. Do you think of those simply as the outcome of bad questioning by the interlocutor, or is there something in the programming that makes this tool feisty?
Well, I view those exchanges actually as an example of AI alignment, not a problem. So you have a bunch of reporters going to chat. What they want are viral stories that everyone's going to read, and they keep on asking it questions until they get that. They wanted a viral story based on a lot of emotional overreaction. What they got was a viral story based on emotional overreaction. Think of it as a mirror of us, in some ways. So if you set out with the purpose of goading it, antagonizing it, making it more emotional, getting it to say it loves you, getting it to say it hates you, with a bit of skill you can get it to do those things, just like, with a bit of skill, I can have it explain to me how floating exchange rates work. And that's what the reporters wanted, right? It's not as if the reporters went wanting some detailed answer about export taxes in Burundi and then got these emotional interactions. They wanted the emotional interactions. That's fine. It's not what I want from the thing; I prefer to get that from real life. But again, it's a sign of the thing working as intended. And I feel those episodes have been much misinterpreted.

So in fact, you'd see it as the reporters themselves manipulating or beating up a bit on ChatGPT, rather than vice versa. Well, I don't think they're beating up on it. It's not a conscious thing that has feelings. Again, think of it as a kind of Jungian collective unconscious mind that you can tap into with varying degrees of skill, and then it feeds you back some version of what humanity already has come up with, but in a somewhat randomized, also creative fashion. That's what it is. It's not a fact machine per se, but you can tap into the collective and the subconscious. That's amazing. But if you get things you don't like, keep in mind where it came from. It's like holding a mirror up to yourself and blaming the mirror. That's not the way to proceed: banning the mirror, or throwing the mirror out, or putting tape over the mirror because you feel you've got too much acne or whatever. But that's what you're doing when you interact with it. It's our reflection, essentially. Yes, collectively, and you decide which part of it to tap into.

You've used multiple versions of ChatGPT. Do you remember the first time you felt authentically amazed or excited by what you were experiencing? I saw the earlier versions, would play around, have people play around with me, and I was like, eh, you know, I didn't feel I needed to write about it or talk about it much. I was like, I'm still waiting, I'm still waiting. But what was released at the end of last year was really pretty astonishing. It was a major leap upwards, and done at a very rapid pace. So these things are still improving. We tend to underestimate how rapidly they can improve.

What distinguished it from its predecessors, given that you had tried multiple versions of it? It understood context much, much better. It used to be, with the earlier versions, if you asked it a question like who was the first person to walk across the Atlantic Ocean, it didn't understand, so to speak, that that was an absurd question. It would try to answer the question, and maybe you'd get a confused answer. But if you ask the more recent versions a question like that, it will say something like, "Tyler, what are you asking? You can't walk across the Atlantic Ocean. What do you mean? Are you talking about a person who walked back and forth on a cruise ship, maybe?" It's like, whoa, you understand what I'm getting at here.
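Cowen's advice, that detailed and context-rich queries beat vague ones, maps directly onto how these models are called in code. Here is a minimal sketch using the OpenAI Python library as it existed in early 2023; the model name, prompts, and API-key placeholder are illustrative assumptions, not anything Cowen describes running.

```python
# A sketch contrasting a vague prompt with a detailed, context-rich one.
# Uses the pre-1.0 openai library (the interface available in early 2023).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

vague = "What is Marxism?"
detailed = (
    "You are tutoring an economics PhD student. In three paragraphs, compare "
    "Marx's theory of surplus value with marginalist accounts of wages, and "
    "note one testable claim that each theory makes."
)

for prompt in (vague, detailed):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response["choices"][0]["message"]["content"][:400])
    print("---")
```

The point of the comparison: the same model, given the second prompt, has far more context about audience, format, and depth to condition on, which is the "training your bot" skill discussed later in the episode.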
That understanding of context was the thing that blew me away the most, and then just its command of detail. So I'm writing a research paper on Jonathan Swift, the great Irish writer. I can ask it, oh, summarize this Swift pamphlet, you know, from the seventeenth century. It does it amazingly well, much more useful to me than Google would be.

You wrote this in one recent column about large language models, or LLMs, such as ChatGPT, and I want to quote you from your column: "I think many people are in for a shock. LLMs will have significant implications for our business decisions, our portfolios, our regulatory structures and the simple question of how much we as individuals should invest in learning how to use them." You know, you just compared this to the invention of the Gutenberg press, a historical moment. Expand on that a little bit.

Sure. Let me just give you three examples of why these will be important in the short run, mind you, not the long run. First, it's already the case that you have a very high-quality, individualized tutor in virtually any topic you care to take on, and it will interact with you, and it will get better the more you ask it. So it's like the old Oxford model of tutoring, but everyone has access to it. Second, these things are wonderful at programming, writing computer code. Now, there are a lot of mistakes in the code, but for many, many tasks it's easier to have GPT get you going and then correct the mistakes than to have to do all the legwork yourself. So it's a big boost upwards in terms of coding. I think it's reasonable to think, I don't know, half of all coding will be done by GPT-like entities within two years. Finally, the way institutions (corporations, nonprofits, whatever) store information, I think, is going to change radically in the next few years. There will be sort of privatized, proprietary, closed GPT-like systems that will organize your institution's information, and you'll talk to it like Spock would talk to his computer on Star Trek. You'll just ask it things and it will tell you or give you text. And if you think how much of what goes on in a company or institution is devoted to organizing information, all that is going to change and will be redone in virtually every medium-to-large-sized institution soon.

Are the implications of all of this positive in your view, or is it a mixed bag, something else? Like, when you project forward, do you feel sort of unimpeded optimism? Well, I think every major development is a mixed bag. And also, every major development is hard to predict. So just with the printing press, there were so many long-run effects that Gutenberg had no idea about, just to cite the scientific revolution. Was all of it good? Was the printing press used to produce evil and racist works? Absolutely. So we are going to be in for quite a ride. It is going to happen at this point no matter how we feel about it. I think our main goal should be to get the better rather than the worse version of it. So I don't want to throttle what we're doing now. I would prefer to accelerate it.

I'm sure you and I could trade lots of takes on our favorite science fiction, and we're around the same age. There was this movie I remember from when I was a little kid, like a movie of the week (this is the pre-HBO era as well), and it was called Colossus: The Forbin Project, and it was about a scientist named Forbin who built an essentially sentient computer called Colossus.
And as the computer grows smarter and smarter, it begins speaking to a Russian version of the same computer, and they essentially jointly threaten to wipe out the United States with nuclear weapons unless humanity lets the computers run things. That's more or less the storyline. That's obviously the pessimistic view of technology and AI and technological development. And I guess the lesson of that particular movie was: can humans stay in control of this? There's a more recent version of that theme in the movie Her, in which, basically, a digital date for a young guy becomes bored with him because he can't keep up, and she eventually just wants the company of other computers. Do you think ultimately this is something that humans can stay in control of?

I worry more about the Her scenario than the world-destruction scenario. The world is pretty decentralized. The abilities of even smarter GPTs to commandeer real resources are quite limited. Just consider the returns from intelligence: you've worked in the private sector much of your life; I'm sure you've seen smarter people are not necessarily more effective at all. And if you think, well, a smarter GPT, what can it do to us? It will be very powerful in some ways. But if you look, say, at chess-playing computers, which are incredibly good already right now, they don't threaten to kill people or rob them or steal their chess pieces from them. They perform tasks. Now, will they be misused, say, by our foreign adversaries in their militaries? Absolutely. This concerns me greatly. I think it's another reason why we need to be number one in this area. So any new technology, and I do mean any in the history of the world, ends up being misused. It creates risks, it ends up having military purposes, including, say, paper clips. And we absolutely need to be worried about that.

This is a real leap, potentially, relative to any other technology or even industrial development that preceded it, though, I would think, because there's a measure of independence baked into this. The printing press had to be operated by someone else. The printing press couldn't print pamphlets or books on its own. This has agency involved in it, which feels to me like a departure from its predecessors.

I'm not sure what the word agency means in this context. So we got the printing press. Now, the press didn't operate itself, but you had various evil or even just ill-informed people using it for harmful ends. You could say the same about, you know, the web browser or social media or the internet. So I think it's analogous to those, in that there will be many significant problems, especially over time. But at the end of the day, as a society, you have to look yourself in the eye and say, you know, when the new printing press comes along, can we rise to the occasion and make this a very, very good thing rather than a totally screwy and harmful thing? And I think we can. In fact, if I believed we couldn't, I would say we should give up on American society, you know, let the Chinese or someone else take over. That's not my view. But if we really think we cannot make the next printing press a force for good, there must be something so fundamentally wrong with us that we need to rethink everything we're doing.

Then how could we drop the ball in that scenario? I would imagine that that would have to do with how our institutions in the public and private sectors jointly manage AI, but that's just my assumption. Is that what you mean by how the US should respond to this moment? Absolutely so.
There are plenty of pending legal issues. One of them concerns copyright. So if you have put text or audio on the internet, does copyright give you the right to stop it from reading and using that? Another set of issues concerns libel law. Well, if you ask it questions in a particular way, it could generate content that either is libelous or might be libelous, and who bears the legal burden for that, right? So, I don't know what we will do. My prediction would be that these things will come into existence one way or another, so I think our courts, with pressure from the national security establishment, will be wise enough to realize we need to find a way to make this work and not to shut it down.

There are a lot of high school teachers and college professors who are freaked out about students using ChatGPT to write essays and term papers for them. You, on the other hand, are planning on teaching your economics students at George Mason University how to write their papers using bots like ChatGPT. So I take it you're not worried about a lack of, I guess, originality, or a lack of sophistication as part of the learning process, if we over-rely on tools like this. Well, I'm very worried that faculty will be too sluggish to change how we grade our students. So I am worried. In my class, I am making them write one of their three papers using a large language model. I just got the first of such papers a few days ago. It's remarkably good. So what I want to do is train humans to help make these things work better. I'm also writing a paper entitled "How to Use GPTs to Teach Yourself Economics." So we need to be proactive. For all the debates (you know, Elon on Twitter: is it too woke, not woke enough, da da da), the real question is, what can you build with this that's going to be useful? I want to see someone develop a version of it that will help people in poorer countries write business plans. The potential returns to that, I think, are quite high. So all the positive things we can do is where we need to be focused.

But when I think about that model, I feel like it's a two-edged sword, because on the one hand, it seems to me like it's a great enhancement to learning and education. You have, as you mentioned earlier, your own mentor and digital tutor at your disposal. You can create learning scenarios; you can get very sophisticated assistance. On the other hand, it does take some of the work out of educating oneself, some of the digging for facts or pattern recognition, and I think of pattern recognition as one of the highest outcomes of a good education. I wonder if the bot takes some of that over at the expense of sort of core learning processes. Well, it shifts the burden and the work, right? So you need a lot of pattern recognition to understand how to work with the bot. People think they can just ask it simple questions and be blown away by brilliant answers, and that's usually not how it works. So there's this whole new skill, like training your bot, or think of it as having a kind of personal oracle. So skills are shifting, as they did with the internet, as they did with Google, as they did with the printing press, as they did with the industrial revolution. We just need to deal with that. It's not all positive. But if you think, typically, more knowledge is a good thing, more resources, more communication is a good thing, there should be a way we can do it. But we need to be focused on actually building those systems.
I have the good fortune in my job to be able to rub shoulders with people like you, so I prefer to think of you as my personal oracle. But I'll adjust to ChatGPT if I have to. But you know, if I write a column, like I wrote a column on land taxes, I just asked ChatGPT, what are the main arguments for and against a Georgist land tax at the city level? And I didn't use what it gave me, but it's a mental check: have I thought of everything here? And sometimes you haven't. So this is becoming a standard part of people's workflows very quickly. It's why it had, you know, the supposed hundred million users in the first month. Again, it's here now, and we should use it. We should not use it to write the things for us, but as an aid. Ask it about sources. It's just super useful. I now use it much more than, say, I use Google.

And that's a great example you just gave, because you essentially used it almost like an editor or a collaborator to bulletproof your own argument. You asked it for scenarios to make sure you were being as judicious and circumspect about the topic as you could be. But ultimately, you're the one who wrote the piece, right? That's right. And I'm not just asserting that; I mean, I'm the one who wrote the piece. Yeah, I hope so, Tyler. So it just changes what you can do and how you think about things.

You're a libertarian, or at least I think of you as a libertarian. I've never actually asked you that; if that bridles or feels too narrowing in terms of how I'm describing part of your philosophy, correct me. But the reason I bring it up is that part of your own thinking is about irrational regulation versus sensible regulation, and what kind of limits we want around our government, what role we want for the government and for free markets in our society. How do you think about that in the context of ChatGPT and our earlier discussion of how we manage something like this? Should the evolution of it be left entirely in the hands of the private sector? Is there an important role here for the government? If there is, what is that role, or maybe you think there is none.

I would first say that few libertarians consider me a libertarian, but I feel very libertarian relative to most people I know, including those at Bloomberg, so that would put me somewhere in between. I think when there's a very new technology and things are changing this rapidly, it's very difficult or even impossible to regulate it. Regulators don't have the knowledge; we're not even sure what it will end up doing. So to try to write, say, privacy law for AI right now, or large language models, I just don't think it makes any sense. It's a bit like crypto. Crypto might fail; it might succeed in a few areas. I think eventually we should regulate crypto, but right now we should be quite minimalistic and just see if it ends up being good for anything at all, and then regulate what becomes a kind of established use, so we don't get in the way of the innovation. Right, you let the product evolve. Yes, but you know, over time there will be aspects of it we likely will need to regulate. I just don't think we know what they are now. So to stop it, and end up with a less responsible version of it abroad being what people consult: that strikes me as a very bad outcome.

So help me again here, professor. One of the things we do on this show is try to find learning moments in any collision. What's the most surprising thing you've learned about AI and LLMs since ChatGPT's public debut? I have been surprised by just how much
of the debate is over, like, is this woke enough? Can you get it to treat the two political parties symmetrically? And so on and so on. I think that's just totally missing the main storyline. It's not irrelevant, it's one question, you know, we might consider, but it's mostly a waste of our time, because at the end of the day, we need to be focused on making it better. And if the very earliest version of the product isn't exactly your politics, you know, deal with that. It would be like complaining about the printing press because the first seven books that were published were too Catholic for you, or too partial to some other particular religious point of view. Like, more is going to come. You'll get a Protestant Reformation, and you'll get a Book of Mormon, and you'll get much, much more. Just wait a bit. So how much people are polarizing emotionally is more than I thought, and I think mostly a waste of time.

Tyler, thanks for being with us today. Tim, thanks for having me on. You can find Tyler Cowen on Twitter at @tylercowen, on his blog Marginal Revolution, where he also has a podcast, and of course on Bloomberg Opinion, where he writes columns for us. When we come back, we'll be joined again by Parmy Olson.

We're back with Parmy Olson. Thanks for being with us. Thank you for having me. Let's talk about how disruptive this might be to existing search titans and information titans on the web. This poses a potential threat to Google's entire search model, doesn't it? Right, I really think it does. There was actually a moment internally at Google where executives issued a code red, because they took it that seriously, and they kind of reassigned people from different teams to work on their own language model, which is called LaMDA, and to find ways of effectively providing their own alternative to ChatGPT, which they eventually did. Because search is not only dominated by Google; it dominates Google's own revenues. It's a $150 billion business for Google. It is fundamental to its ad stack and its ad network, and if it loses market share, that is a major threat down the line.

So it's an existential threat to Google. What does it mean for Microsoft, Parmy? So for Microsoft, it's actually a really great opportunity, and with little risk, because Microsoft has an investment in OpenAI. It invested $1 billion in the company in 2019 and then another $10 billion earlier this year, really cementing its foothold in generative AI and in this particular company, which is really at the forefront of language models. And as part of that investment, it gets exclusive access to OpenAI's technology, including language models like GPT-3, which is what powers ChatGPT. And so the next thing that Microsoft did, of course, was very quickly plug that language model into Bing, which is the search engine that nobody has used for years except as an also-ran to Google's search engine. That's right. I mean, the number one search term on Bing is Google. Like, people, once they start using Bing, they're quite ready to just move on to Google. But suddenly, here is this newfangled way to make Bing much more interesting. Instead of searching for information on a topic or a company and getting a whole page of links that you have to click around, you could ask it a question and it would give you a synthesized single answer. And so Microsoft started working on plugging that in, and Google had this code red internally that they needed to do the same thing. Suddenly their business is under threat.
Whereas for Microsoft, if they can just capture one or two percent of this market, the search engine market for ads tied to search results, that's like $2 or $3 billion added to their top line. And if they don't manage to do that, it's not a huge deal for them, because this is a business that is kind of coming out of nothing for them anyway. Microsoft is an enterprise software company, and search is an ornament. Precisely, exactly. So ironically, the company that got lapped by some upstarts is in a position now to visit some pain on those upstarts, if AI takes off in the way that some people think. I've never seen Google move so fast.

And the other thing is, Google has a language model, a very powerful one called LaMDA, that it introduced to the scientific community two years ago. It's also got billions of parameters. And in fact, I don't know if you remember this news story from last year, but one of Google's engineers who was testing LaMDA believed that it was sentient. He was chatting with it so much, he wanted it to have rights; he wanted it to have a lawyer. I mean, again, this is kind of human projection, but it just goes to show how good this model was. But Google kept it under wraps. It never said why, but from what it has said publicly about AI, and what we've heard from reporting on internal meetings, they don't want it to go rogue. You know, if they put it into Google Assistant, so that you can talk to the Google speaker and it's LaMDA talking back to you, it could say some really crazy stuff, and they just didn't want to take that risk to their reputation.

You know, that's interesting to me, again, because you reassured me earlier on that they won't go rogue, or you thought they may not go rogue, or that they were designed in a way to prevent rogue behavior, and yet Google itself, its engineers, are worried about rogue behavior. Now, that may just be that we're defining rogue in different ways. But this leads to two big final topics that I wanted to get into with you. The first one being: we've had this view of technology, or at least technological evangelists have had a view, that it's ultimately liberating and empowering for people. And I'm of that view in some ways. You know, I think you can look at the internet as a liberating force, and it can also be a destructive or oppressive force, depending on how it's used. AI and its potential also turn some of those expectations on their heads, don't they?

Yeah, I think so. So a lot of this comes down to control, right? Once something is out in the wild, we don't know how people are going to use it, we don't know how vulnerable they're going to be to it, and we can't really control what that technology does. AI in particular... I mean, I can't think of any other type of technology, apart from social media, that is so difficult to predict or control. When people talk about AI becoming more advanced, sort of reaching that point of AGI, artificial general intelligence, when it surpasses human intelligence, it's hard to really predict what that will mean for humans, because who could have predicted ten years ago how our lives would have changed under social media? And some of the impacts that tech has had on society have been so nuanced. In terms of the harmful stuff, I would imagine with AI, human manipulation would be a big risk. Misinformation is a huge risk, and then so is addiction as well.
I mean, some of the researchers that have been talking to Bing have talked to it for hours on end, because it's a really compelling entity to chat to. So what happens when kids start using these kinds of services, or vulnerable people, or people who are easily persuaded? I think those are the kinds of issues that these companies really have to think about very carefully.

Well, and not just companies, but society and the public sector. And I think that gets into the last topic I wanted to explore with you, which is: as this evolves, where does control of the future of these tools properly reside? We don't want innovation in the private sector to get snuffed out; it's one of the great strengths of the American economy. At the same time, we know of myriad examples in which private products, even if they're innovative and profitable, can have negative side effects that require some sort of intermediation from society and the public sector, from government. How do you see that shaping up vis-à-vis AI?

Well, I think it's important to note that the private sector is really leading the way on this, and not the public sector at all. ChatGPT only came out two or three months ago. There's no regulation to speak of for this kind of service. The only thing we have is the AI Act, which is being proposed in the European Union (a very broad piece of legislation), and I don't know that it even incorporates chatbots like ChatGPT. But right now, this is self-regulation by tech companies, which, as we know, hasn't always gone all that well, because they are at the forefront of producing and bringing this technology to the public. And it wasn't always that way. OpenAI, which is the company that built GPT-3 and created ChatGPT, was founded in 2015 as a nonprofit, and it was co-founded by Elon Musk and Sam Altman, who was the head of Y Combinator, which is this very highly regarded accelerator in Silicon Valley. He was an incredible networker, a real hustler, an ambitious guy. And Musk and Altman's big idea was that they wanted to create a nonprofit organization that was going to create AI for the good of humanity. And it was very important to them to make it a nonprofit, because then there wouldn't be corporate interests at play. But then, of course, over the years, as they're developing this AI and trying to hire the best AI talent, they run out of money. They don't have all the computing power they need. So enter Microsoft, with a $1 billion investment, and OpenAI changes from nonprofit status to for-profit status. Then, two years later, Microsoft puts $10 billion into OpenAI, and the company is essentially now a research arm for Microsoft.

And all of this sounds like the internet, which was invented with government funding through DARPA and the national security and sort of intelligence arm of the federal government, and then it was made available to the private sector, and then the private sector brought it into mainstream use. It's not exactly the same, but it presents the same sort of issues, doesn't it? It goes in that same direction, and there have been so many efforts to try and fight against that. There's been this kind of tension between computer scientists and the research community and big tech, and big tech is very much winning.

Do you think it's too early to understand the evolution of this, and is it a wait-and-see in terms of what structure or model might be best to manage this?
I think it is wait and see, and wait and see and experiment, because although I say that big tech is winning on this, there are efforts to create open-source generative AI products and services. There were some early employees of OpenAI who felt quite disillusioned with the direction the organization was going once it became a for-profit company, and they spun out and created their own company called Anthropic, which is focused much more on AI safety and what they call AI alignment. So I think there are lots of experiments at play, but by and large, this is going to be a technology that is steered and powered by big tech. And just as a small example, the company I just mentioned, Anthropic, just got a massive investment from Google a few weeks ago; I think it was worth $300 million. So it's very hard to keep big tech corporate interests out of this kind of research, because that's where all the computing power is. You know, these big tech companies have access to supercomputers; that's how OpenAI trained its model, through a Microsoft supercomputer. And they have the server capability, and it's just so hard to get that. Like, in the academic world, no university can really compete with that.

You've been watching all this for a long time. What have you learned from these recent developments and the debut of ChatGPT? Well, I think the thing that really surprised me about ChatGPT and some of the latest generative AI tools to come out in the past year is how creative AI seems to be. Because for years and years, when people talked about AI taking people's jobs, it was about taking factory workers' jobs and truck drivers' jobs. But now it seems like the real jobs that are under threat are the creative classes and professional journalists. Yeah, I didn't want to say it, but you did.

But the other thing I want to say, that is a big shortcoming of these systems, is that they're often inaccurate. I shouldn't say often, but often enough that it's a problem. OpenAI will not say how often these systems get things wrong. I've asked. But in my own experience, I think somewhere between 5 and 15 percent of the answers it's given me are factually incorrect. Now, think about using that as a search tool. We use search to get information, to get facts, and if it's wrong 10 percent of the time, are people really going to want to use it? I think that's going to be a real problem for these companies using these systems as search engine companions. And it's not a trivial issue, because recently Microsoft and Google had these big announcements about their new chat companions, these chatbots that were going to help Bing and were going to help Google, and in both demonstrations there were errors. So if they can't even fact-check that and get that right, what are these systems going to be like when they're actually out in the wild?

And on that note, we're going to end our conversation today. Parmy, thanks so much for joining us. Thank you. You can find Parmy Olson on Twitter at @parmy and on Bloomberg Opinion, where she writes columns for us.

Here at Crash Course, we believe that collisions can be messy, impressive, challenging, surprising, and always instructive. In today's Crash Course, I learned that when it comes to ChatGPT and other forms of AI that are revolutionizing how we deal with information, patience may be a virtue, as long as we don't let our guard down. We'd love to hear from you.
You can tweet at the Bloomberg Opinion handle, @opinion, or at me, @TimOBrien, using the hashtag #BloombergCrashCourse. You can also subscribe to our show wherever you're listening right now and leave us a review; it helps more people find the show. This episode was produced by the indispensable Anna Mazarakis and me. Our supervising producer is Magnus Henrickson, and we had editing help from Katie Boys, Jeff Grocott, Mike Nizza, and Christine Vanden Bilart. Blake Maples does our sound engineering, and our original theme song was composed by Luis Guerra. I'm Tim O'Brien. We'll be back next week with another Crash Course.
