What does "the ghost in the machine" mean? From philosophy to artificial intelligence, we explore this idiom to understand what it means, how it's used and if the dream of strong AI is realistic.
Welcome to Tech Stuff, a production from iHeartRadio. Hey there, and welcome to Tech Stuff. I'm your scary but not hairy host, Jonathan Strickland, executive producer with iHeartRadio, and how the tech are you? We are continuing our spooky episode series in the lead-up to Halloween twenty twenty-two. Apologies if you're from the future listening back on this episode; that's why the bizarre theme is popping up. We've already talked recently about stuff like vampire power and zombie computers, where I'm really grasping at tenuous connections between horror stuff and tech. But now it's time to tackle the ghost in the machine.

These days, that phrase is frequently bandied about in relation to stuff like artificial intelligence, but the man who coined the phrase itself was Gilbert Ryle in nineteen forty-nine, and he wasn't talking about artificial intelligence at all. He was talking about, you know, real intelligence, and he was critiquing a seventeenth-century philosopher's argument about the human mind. That philosopher was René Descartes, who famously wrote cogito ergo sum: "I drink, therefore I am." Sorry, it's gonna be impossible for me to talk about philosophers without quoting Monty Python, because I'm a dork. Okay, no, cogito ergo sum actually means "I think, therefore I am," or that's how we interpret it. But that wasn't really the bit that Gilbert Ryle was all riled up about.

See, Descartes believed in dualism. Now, I don't mean that he believed in showing up at dawn to sword fight other philosophers, though if someone wants to make a philosopher version of Highlander, I am down with that and I will back your Kickstarter. No, Descartes believed that the mind, and you know, consciousness and sentience and all this kind of stuff, are separate from the actual gray matter that resides in our noggins. So, in other words, intelligence and awareness and all that other stuff that makes you you exists independently of your physical brain. That there is this component that is beyond the physical.

Now, Ryle referred to this concept as the ghost in the machine, that somehow all the things that represent you are, you know, ethereally independent of the brain itself. And Ryle rejects that notion. And you know, Ryle appears to be right. We know this because there are plenty of case studies focusing on people who have experienced brain injuries, either from some physical calamity or a disease or something along those lines. And these events often transform people and change their behaviors and their underlying personalities. So damage to the physical stuff inside our heads can change who we are as people. That seems to dismiss this concept of dualism, suggesting that the mind and the brain are not separate entities. Take that, Descartes.

Anyway, that's where we get the phrase the ghost in the machine. The machine in this case is a biological one, in the original sense of the phrase. In nineteen sixty-seven, Arthur Koestler wrote a book called The Ghost in the Machine, or at least it was published in nineteen sixty-seven. He was a Hungarian-born, Austrian-educated journalist. His book The Ghost in the Machine was an attempt to examine and explain humanity's tendency toward violence, as well as going over the mind-body problem that Ryle had addressed when he coined the phrase to begin with in nineteen forty-nine. All right, then flash forward a couple of decades to nineteen eighty-one, and we get the title of what was the fourth studio album of the Police, you know, Sting and the Police.
Now, I say this in case you were thinking that Sting and company were naming their album after the technological use of the term ghost in the machine, but they were not. Sting, good old Gordy, had read Koestler's book, and he took the album's title from the book title. So in case you're wondering, that's also the album that brought us songs like "Every Little Thing She Does Is Magic" and "Spirits in the Material World."

For a really recent exploration of dualism, well, arguably it's more than dualism I guess, you can actually watch Pixar's Inside Out. That's a film that solidified my reputation as an unfeeling monster among my friends, because I didn't feel anything when Bing Bong meets his final fate. I just couldn't care. I mean, he's not even real in the context of the film, let alone in my world, so why would I? Okay, never mind. Anyway, in Inside Out, we learn that our emotions are actual anthropomorphic entities living inside our heads that share the controls to our behavior, that we are in effect governed by our emotions, and our emotions in turn are the responsibilities of entities like Joy and Anger and Disgust. Descartes probably would have been thrilled, and Ryle likely would have rolled his eyes. Anyway, the trope of having a little voice inside your head that is somehow separate from you and also you is a really popular one. I always think of Homer Simpson, who will often find himself arguing with his own brain for comedic effect. It's another example of dualism in popular culture.

But the idiom the ghost in the machine survived its initial philosophical and journalistic trappings, and now folks tend to use it to describe stuff that's actually in the technological world. We're talking about machines as in the stuff that we humans make, as opposed to the stuff that we are. Generally in tech, the phrase describes the situation in which a technological device or construct behaves in a way counter to what we expect or want. Uh, at least that was the way it was used for quite some time. So for example, let's say that you've got yourself a robotic vacuum cleaner, and you've set the schedule so that it's only going to run early in the afternoon. And then one night, you wake up hearing the whirring and bumping of your Roomba as it aimlessly wanders your home in search of dirt and dust to consume, and you spend a few sleepy moments wondering if there's some sort of conscientious intruder who's made their way into your house and now they're tidying up, before you realize, no, it's that darn robot vacuum cleaner. There's a ghost in the machine. It's decided on its own to come awake and start working. Now, alternatively, maybe you just goofed up when you created the schedule and you mixed up your AMs and your PMs. That's also a possibility. But you know, sometimes technology does behave in a way that we don't expect. Either there's a malfunction or it just encounters some sort of scenario that it was not designed for, and so the result it produces is not the one we want, and we tend to try and kind of cover that up with this blanket explanation. The ghost in the machine kind of stands as a placeholder until we can really suss out what's going on underneath.

Programmers sometimes use the phrase ghost in the machine to describe moments where you get something unexpected while you're coding, like you get an unexpected result, like you've coded something to produce a specific outcome and something else happens instead.
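If you want a concrete, hypothetical flavor of the kind of bug that earns that label, here's a tiny, made-up Python sketch. The function names and the scenario are mine, not anything from a real codebase; the point is just how "haunted" a small, legal-looking piece of code can feel.

```python
# A made-up example of code that can feel "haunted": Python evaluates default
# argument values once, so this list is quietly shared across every call that
# doesn't pass its own list.

def add_reading(value, readings=[]):   # the bug: a mutable default argument
    readings.append(value)
    return readings

print(add_reading(1))  # [1]           -- looks fine
print(add_reading(2))  # [1, 2]        -- wait, where did that 1 come from?
print(add_reading(3))  # [1, 2, 3]     -- "there's a ghost in the machine!"

# The fix: use None as the default and create a fresh list inside the function.
def add_reading_fixed(value, readings=None):
    if readings is None:
        readings = []
    readings.append(value)
    return readings

print(add_reading_fixed(1))  # [1]
print(add_reading_fixed(2))  # [2]     -- no ghost, just a default-argument quirk
```

Nothing supernatural is going on there, of course, just a quirk of the language doing exactly what it was told.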
So the programmer didn't intend for this result to happen, and so therefore the cause must be external, right? It's got to be some sort of ghost in the machine that's causing this to go wrong. Now, I'm joshing a bit here, of course. Usually this is a way for a programmer to kind of acknowledge that things are not going to plan and that they need to go back and look over their code much more closely to find out what's going on. Where did things go wrong? Sometimes in coding, all it takes is like a skipped step, where you just missed a thing and you went on one step beyond where you thought you were, or maybe you made a typo and you got some missed keystrokes in there. That can be all it takes to make a program misbehave, and so then you have to hunt down the bugs that are causing the problem. But you know, if a program is acting very oddly, you might call it a ghost in the machine scenario.

Now, I'm not sure about the timeline for when folks in the tech space began to appropriate the phrase ghost in the machine for their work, because when it comes to stuff like this, you're really entering the world of folklore, and folklore is largely an oral tradition, where you are passing ideas along one person to the next, speaking about it. There's not necessarily a firm written record, at least not one where you can point to something and say this is where it began, unlike Ryle's version of the phrase, the ghost in the machine itself, which was published, so we can point to that and say this is where the phrase comes from. This would be what Richard Dawkins would refer to as a meme, a little nugget of culture that gets passed on from person to person.

But it also has been used in literature to refer to technological situations. Arthur C. Clarke, whom I've referenced many times on this show, as he's the guy who explained that any sufficiently advanced technology is indistinguishable from magic, also used the phrase ghost in the machine to talk about AI. Specifically, he used it in the follow-up to his novel, his work of fiction, two thousand one: A Space Odyssey. The follow-up is called, fittingly enough, two thousand ten: Odyssey Two. Chapter forty-two of two thousand ten is titled "The Ghost in the Machine," and the focal point for that chapter is characters discussing HAL, the AI system from two thousand one that caused all the trouble.

So, quick recap of two thousand one for those of you not familiar with the story. Uh, two thousand one the film, the Stanley Kubrick film, gets pretty loosey-goosey, so we're just gonna focus on the main narrative here. You have a crew aboard a spacecraft, an American spacecraft called Discovery One, which is on its way toward Jupiter. Now, the ship has a really sophisticated computer system called HAL 9000 that controls nearly everything on board. Uh, also a fun little trivia fact: HAL's initials, H-A-L, are each one letter off from IBM, though Arthur C. Clarke would claim that that was not intentional. Anyway, HAL begins to act erratically during the mission. At one point, HAL insists there's a malfunction in a component that appears to be perfectly functional, that it's working just fine. Then HAL systematically begins to eliminate the crew after learning they plan to disconnect the computer system because they suspect something's going wrong. HAL figures out that plan by monitoring a conversation that a couple of crew members have in a room where there are no microphones.
So HAL can't listen in on this conversation, but HAL is able to direct a video feed to that room and is able to read the lips of the crew members as they talk about their plan. So HAL continues to try and wipe everybody out, and he explains, or it, I shouldn't give HAL a gender, HAL explains that the computer system being turned off would jeopardize the mission, and HAL cannot allow that to happen. HAL's prime directive is to make certain the mission is a success, so anything that would threaten its own existence has to be eliminated. There's also the implication that HAL does not want to cease to exist, that HAL has a personal motivation beyond seeing the mission to completion, and so HAL has no choice but to kill everyone. It's not that HAL wants to murder everyone, it's just that in order to complete the mission, that's the only outcome that makes sense. Eventually, one of the crew, Dave Bowman, manages to turn off HAL, and HAL wonders aloud what will happen afterward. Will its consciousness continue once its circuits are powered down? "Will I dream?" it says. Well, anyway.

In Odyssey Two, you now have this group of astronauts and cosmonauts in a Soviet-American joint effort trying to figure out what happened with HAL. Was there something inherently flawed in HAL's programming? Did some external element cause HAL to malfunction? Did HAL's apparent consciousness emerge spontaneously all on its own? Was it all just a sophisticated trick, and HAL never really had any sort of consciousness, it only appeared to? So the crew are kind of left to ponder this themselves. They don't have any easy answers. That's just one example of the ghost in the machine concept being handled in entertainment. When we come back, I'll talk about a different one. But first, let's take this quick break.

Okay. Before the break, I talked about Arthur C. Clarke and his work with the concept of the ghost in the machine. Let's now leap over to Isaac Asimov, or at least an adaptation of Asimov's work. So the film version of I, Robot, which really bears only a passing resemblance to the short stories that Isaac Asimov wrote, which were also collected in a book called I, Robot, uses the phrase ghost in the machine. Isaac Asimov, by the way, in case you're not familiar with his work, he's the guy who also proposed the Three Laws of Robotics, which are pretty famous as well. So in the film, the character Dr. Alfred Lanning, who actually does appear in Asimov's stories, though he's a very different version than the one that appears in the film, says in a voiceover, quote, "There have always been ghosts in the machine, random segments of code that have grouped together to form unexpected protocols. Unanticipated, these free radicals engender questions of free will, creativity, and even the nature of what we might call the soul. Why is it that when some robots are left in darkness, they will seek out the light? Why is it that when robots are stored in an empty space, they will group together rather than stand alone? How do we explain this behavior? Random segments of code? Or is it something more? When does a perceptual schematic become consciousness? When does a difference engine become the search for truth? When does a personality simulation become the bitter mote of a soul?" End quote.

There are some fun references in there, too. Difference engine, for example, refers back to Charles Babbage, who designed the Difference Engine and the Analytical Engine, mechanical computing machines that predate electronic computers.
Now, this idea of consciousness, or the appearance of consciousness, emerging out of technology is one that often pops up in discussions about artificial intelligence, even in our world outside of fiction, though usually we talk about this on a more hypothetical basis. Unless you're Blake Lemoine, the former Google engineer who maintains that Google's Language Model for Dialogue Applications, a.k.a. LaMDA, is sentient. That's a claim that most other people dispute, by the way, so maybe I'll do another episode about it to really kind of dig into it. But Lemoine, and I apologize because I don't know how his last name is pronounced, has said a few times that he believes that this particular program has gained sentience. But it brings us to another favorite topic among tech fans, which is, of course, the Turing test.

All right, so the Turing test comes from Alan Turing, who was kind of like the father of computer science in many ways. It was his response to the question, can machines think? Turing's answer was, that question has no meaning, and you're a silly person, goodbye. I am, of course, paraphrasing. But as a sort of thought experiment, Turing proposed taking an older concept called the imitation game and applying it to machines, really to demonstrate how meaningless the question of "can machines think?" is. So, what is the imitation game? Well, the name kind of gives it away. It's a game in which you have a player who asks at least two different people questions to determine which of them is an imitator. So, for example, you could have an imitation game in which one of the people is a sailor and the other is not a sailor, and the player would take turns asking each person questions to try and suss out which one is actually a sailor and which one is merely pretending to be a sailor. So the game depends both on the strength of the imitator's skill of deception as well as the player's ability to come up with really good questions. And you could do this with all sorts of scenarios, and indeed there are tons of game shows that use this very premise.

Turing's thought experiment was to create a version of this in which a player would present questions to two other entities, one a human and one a computer. The player would only know these entities as X and Y, so they could ask questions of X and they could ask questions of Y and get replies. The player would not be able to see or hear the other entities. All questions would have to be done in writing, you know, for example, typed and printed out. And at the end of the interview session, the player would be tasked with deciding whether X was the machine or the human, or whether Y was the machine or the human. Turing was suggesting that as computers and systems get more sophisticated, and things like chat programs get better at processing natural language and formulating responses, though that was a little past Turing's time, it would be increasingly difficult for a person to determine whether any given entity on the other end of a chat session was actually a person or a machine. And Turing also rather cheekily suggests that we might as well assume the machine has consciousness at that point, because when you meet another human being, you assume that that other human being possesses consciousness, even though you're incapable of stepping into that person's actual experience. So you can't take over that person and find out, oh yes, they do have consciousness. You just assume they do.
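If it helps to see the shape of that setup in code, here's a rough, toy sketch of the imitation game as a little program. This is my illustration, not anything from Turing's paper: the "hidden human" is just whoever is at the keyboard, and the "machine" is a deliberately crude canned-reply bot so the example stays self-contained.

```python
# A toy sketch of the imitation game: a judge types questions to two hidden
# respondents, X and Y, and only ever sees text coming back.
import random

def human_respondent(question):
    # Stand-in for a person at another terminal; here it's typed at the same console.
    return input(f"  (hidden human, answer '{question}'): ")

def machine_respondent(question):
    # A deliberately crude "chatbot" so the sketch stays self-contained.
    canned = ["Interesting question.", "I would rather not say.", "Why do you ask?"]
    return random.choice(canned)

def imitation_game(rounds=3):
    # Randomly assign the hidden respondents to the labels X and Y.
    labels = {"X": human_respondent, "Y": machine_respondent}
    if random.random() < 0.5:
        labels = {"X": machine_respondent, "Y": human_respondent}

    for _ in range(rounds):
        target = input("Ask X or Y? ").strip().upper()
        question = input("Your question: ")
        print(f"  {target} replies: {labels.get(target, machine_respondent)(question)}")

    guess = input("Which one is the machine, X or Y? ").strip().upper()
    actual = "X" if labels["X"] is machine_respondent else "Y"
    print("Correct!" if guess == actual else f"Nope, the machine was {actual}.")

if __name__ == "__main__":
    imitation_game()
```

Anyway, back to that business of assuming consciousness.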
So if you and I were to meet, I assume you would believe I am in fact sentient and conscious, even on my bad days. So if we're willing to agree to this while simultaneously being unable to actually experience and therefore prove it, then should we not grant the same consideration to machines that give off the appearance of sentience and consciousness? Do we have to prove it, or do we just go ahead and treat them as if they are, because that's what we would do if it was a human? Now, Turing was being a bit dismissive about the concept of machines thinking. His point was that they might get very, very good at simulating thinking, and that might well be enough for us to just go ahead and say that's what they're doing, even if you could, you know, push machines through the finest of sieves and find not one grain of actual consciousness within them. Now, it doesn't hurt that defining consciousness, even in human terms, is something that we can't really do, or at least we don't have a unifying definition that everybody agrees upon. Sometimes we define consciousness by what it doesn't include rather than what it is. This is why I get antsy in philosophical discussions, because being sort of a pragmatic dullard myself, it's hard for me to keep up.

But let's jump ahead and talk about a related concept. This is also one that I've covered a few times on Tech Stuff that also points to this ghost in the machine idea, and this is the argument against machine consciousness and strong AI. It is called the Chinese Room. John Searle, a philosopher, put forth this argument back in nineteen eighty, and it goes something like this. Let's say we've got ourselves a computer, and this computer can accept sheets of paper that have Chinese characters written on them, and the computer can then produce new sheets of paper. It can print out sheets that are also covered in Chinese characters, in response to the input sheets that were fed to it. These responses are sophisticated, they are relevant. They're good enough that a native Chinese speaker would be certain that someone fluent in Chinese was creating the responses, someone who understood what was being fed to it was producing the output. So, in other words, this system would pass the Turing test. But does that mean the system actually understands Chinese? Searle's argument is no, it doesn't.

He says, imagine that you are inside a room, and for the purposes of this scenario, you do not understand Chinese. So if you do understand Chinese, pretend you don't. Okay. So there is a slot on the wall, and through this slot you occasionally get sheets of paper, and there are Chinese symbols on the sheets of paper. You cannot read these, you don't know what they stand for. You don't know anything about them other than that they're clearly Chinese characters on the paper. However, what you do have inside this room with you is this big old book of instructions that tells you what to do when these papers come in, and you use the instructions to find the characters that are on the input sheet of paper, and you follow a path of instructions to create the corresponding response. Step by step, you do it all the way until you have created the full response to whatever was sent to you. Then you push the response back out the slot. Now, the person on the other side of the slot is going to get a response that appears to come from someone who is fluent in Chinese, but you're not. You're just following a preset list of instructions.
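As a bare-bones sketch of that rule-following part, here's what the "room" looks like as code. The rule book below is a made-up lookup table of my own, just a couple of canned question-and-answer pairings; the point is that no step in the process involves understanding what any symbol means.

```python
# A minimal sketch of Searle's rule-following room. The "rule book" is a
# made-up lookup table; the function below never understands a single symbol,
# it just matches incoming characters and copies out the listed response.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # made-up pairing: "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "会，当然。",   # "Do you speak Chinese?" -> "Yes, of course."
}

def chinese_room(input_sheet: str) -> str:
    # Pure symbol lookup: find the incoming characters, return the prescribed output.
    return RULE_BOOK.get(input_sheet, "请再说一遍。")  # fallback: "Please say that again."

print(chinese_room("你好吗？"))  # prints a fluent-looking reply with zero understanding
```

From the outside, the replies look fluent. Inside, it's lookup all the way down.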
You don't have any actual understanding of what's going on. You still don't know the meaning of what was given to you. You don't even know the meaning of what you produced. You're just ignorantly following an algorithm. So externally it appears you understand, but if someone were to ask you to translate anything you have done, you wouldn't be able to do it.

So Searle is arguing against what is called strong AI. Generally, we define strong AI as artificial intelligence that processes information in a way that is similar to how our human brains process information. Strong AI may or may not include semi-related concepts like sentience and self-awareness and consciousness and motivations and the ability to experience things, et cetera. So Searle is saying that machines, even incredibly sophisticated machines, are incapable of reaching a level of understanding that true intelligence can, that we humans can grasp things on a level machines simply are unable to reach, even if the machines can process information faster and in greater quantities than humans are able to. Another way of putting this: a calculator can multiply two very large numbers and get a result much faster than a human could, but the calculator doesn't understand any significance behind the numbers, or even whether there's a lack of significance. The calculator doesn't have that capability. Now, maybe Searle's argument is valid, and maybe, as Turing suggests, it doesn't even matter.

So let's talk about machine learning for a moment. Machine learning encompasses a broad scope of applications and approaches and disciplines, but I'll focus on one approach from a very high level. It's called generative adversarial networks, or GANs. Okay, as the name suggests, this model uses two systems in opposition to one another. On one side, you have a system that is generative, that is, it generates something. Maybe it generates pictures of cats. Doesn't really matter, we'll use cats for this example. What does matter is that this model is trying to create something that is indistinguishable from the real version of that thing. On the other side, you have a system called the discriminator. This is a system that looks for fakes. Its job is to sort out real versions of whatever it's designed to look for and to flag ones that were generated, or not real. So with cats as our starting point, the discriminator is meant to tell the difference between real pictures that have cats and fake pictures of cats, or maybe just pictures that don't have cats in them at all.

So first you have to train up your models, and you might do this by setting the task. Let's start with the generative system. You create a system that is meant to analyze a bunch of images of cats, and you just feed it thousands of pictures of cats, all these different cats, different sizes and colors and orientations and activities. And then you tell the system to start making new pictures of cats. And let's say that the first round the generative system does is horrific. H.P. Lovecraft would wet himself if he saw the images that this computer had created. You see that these horrors from the Great Beyond are in no way, shape, or form cats. So you go into the model and you start tweaking settings so that the system produces something, you know, less eldritch. And you go again, and you do this lots and lots of times, like thousands of times, until the images start to look a lot more, uh, cat-ish. You do something similar with the discriminator model.
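To make the generator-versus-discriminator setup concrete before we keep going, here's a highly simplified sketch of that adversarial loop. This is my toy illustration, not production code or anything specific from the episode: instead of cat photos, the "real" data are just numbers drawn from a bell curve, so the whole thing stays small, and it assumes PyTorch is installed. Real image GANs use convolutional networks and a lot more care.

```python
# Toy GAN: the generator learns to produce numbers that look like they came
# from the real data distribution, while the discriminator learns to tell
# real samples from generated ones. Requires PyTorch (pip install torch).
import torch
import torch.nn as nn

def real_data(n):   # "real cats": samples clustered around 5
    return torch.randn(n, 1) * 2.0 + 5.0

def noise(n):       # random seed input for the generator
    return torch.randn(n, 16)

generator = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # Train the discriminator: label real samples 1, generated samples 0.
    real = real_data(64)
    fake = generator(noise(64)).detach()   # detach so this step doesn't update the generator
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train the generator: try to make the discriminator call its fakes "real."
    fake = generator(noise(64))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, generated samples should cluster near 5, like the real data.
print(generator(noise(5)).detach().squeeze())
```

Okay, back to training the discriminator on cat pictures.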
You feed it a bunch of images, some with cats, some without, or maybe some with, like, crudely drawn cats or whatever, and you see how many the system is able to suss out. And maybe it doesn't do that good a job. Maybe it doesn't identify certain real images of cats properly. Maybe it misidentifies images that don't have cats in them. So you go into the discriminator's model and you start tweaking it so it gets better and better at telling the images that have real cats from the ones that do not.

And then you set these two systems against each other. The generative system is trying to create images that will fool the discriminator. The discriminator is trying to identify generated images of cats and only allow real images of cats through. It is a zero-sum game, winner takes all, and the two systems compete against each other, with the models for each updating repeatedly so that each gets a bit better between sessions. If the generative model is able to consistently fool the discriminator a good chunk of the time, the generative model is pretty reliably creating good examples. This, by the way, is a ridiculous oversimplification of what's going on with generative adversarial networks, but you get the idea.

This form of machine learning starts to feel kind of creepy to some of us. Like, the ability of a machine to learn to do something better seems to be a very human quality, something that makes us special. But if we can give machines that capability, well, then how are we special, or are we special at all? That's something I'm going to tackle as soon as we come back from this next break.

Okay, we're back. Now, I would argue that we are special. Before the break, I was asking, can we be special if machines are capable of learning? I think we are, and that we're able to do stuff that machines, as of right now, either cannot do, or can do but don't do very well and can only attempt after a ludicrous amount of time. For example, let's talk about opening doors. Several years ago, I was at South by Southwest and I attended a panel about robotics and artificial intelligence and human-computer interactions. In that panel, Leila Takayama, a cognitive and social scientist, talked about working in the field of human-computer interaction, and she mentioned how she was once in an office where a robot was in the middle of a hallway, sitting motionless. It was just facing a door. What Takayama didn't know was that the robot was processing how to open that door, staring at the door and trying to figure out how to open it for days on end. This was taking a lot of time, obviously.

Now, when you think about doors, you realize there can be quite a few options, right? Maybe you need to pull on a handle to open the door. Maybe you need to push on the door. Maybe there's a doorknob that you first have to turn before you pull or push. Maybe there's a crash bar, also known as a panic bar. Those are the horizontal bars on exit doors that you push on to open. Frequently they're seen in doors that open to an exterior location, like inside schools and stuff. You push on them to get out. Maybe it's a revolving door, which adds in a whole new level of complexity. But you get my point. There are a lot of different kinds of doors. Now, we humans pick up on how doors work pretty darn quickly. I mean, sure, we might be like that one kid in the Far Side cartoon where the kid's going into the School for the Gifted, and he's pushing as hard as he can on a door that is labeled pull.
That could be us sometimes, but we figure it out, right? We do a quick push, we realize, oh, it's not opening, we pull. Robots, it's more challenging for them. They are not good at extrapolating from past experience, at least not in every field. We humans can apply our knowledge from earlier encounters and we can build on that. Even if the thing we're facing is mostly new to us, we might recognize elements that give us a hint on how to proceed. Robots and AI aren't really good at doing that. They're also not good at associative thinking, which is where we start to draw connections between different ideas to come up with something new. It's a really important step in the creative process. I find myself free associating whenever I'm not actively thinking about something. So if I'm doing a mundane task, like if I'm washing dishes or I'm mowing the lawn, my brain is going nuts free associating ideas and creating new ones. Machines are not very good at that, for now anyway. They can mimic it to a degree, but they can't actually do it.

So, getting back to Leila Takayama, one of the really fascinating bits about that panel I went to was a discussion on social cues that robots could have in order to alert us humans in that same space to what the robot was up to. This was not for the robot's benefit, but for our benefit. The whole point is that these cues would give us an idea of what was going on with the robots, so that we don't accidentally, you know, interrupt the robots. So, you know, it might be like the robot's in that hallway and it's looking at a door, and you're wondering, why is this robot shut down in the hallway? But then maybe the robot reaches up to apparently kind of scratch its head in sort of a "huh, what's going on" kind of gesture, and that might tell you, oh, the robot is actively analyzing something. I don't know exactly what it is, but it's clearly working. So maybe I'll step around the robot, behind it, and not interrupt its view of the door it's staring at. The whole point is that the social cues can help us interact more naturally with robots and coexist with them within human spaces, so that both the humans and the robots can operate well with one another. Also, it helps to explain what the robot is doing, because if you don't have that, the robots end up being mysterious, right? We can't see into them, we don't understand what they are currently trying to do, and mystery can breed distrust.

That leads to yet another concept in AI that gets to this ghost in the machine idea, which is the black box. So in this context, a black box refers to any system where it is difficult or impossible to see how the system works internally. Therefore, there is no way of knowing how the system is actually producing any given output, or even if the output is the best it could do. So with a black box system, you feed input into the system and you get output out of it, but you don't know what was happening in the middle. You don't know what the system did to turn the input into output. Maybe there's a sophisticated computer in the middle of that system that's doing all the processing. Maybe there's a person who doesn't understand Chinese stuck in there. Maybe there's a magical fairy that waves a wand and produces the result. The problem is, we don't know, and by not knowing, you cannot be certain that the output you're getting is actually the best, or even relevant, or the most likely to be correct based upon the input you fed it. So you start making decisions based on this output.
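Here's a tiny toy illustration of that situation in code, again my own made-up example rather than anything from the episode: from the outside, the only thing you can do with a black box is feed inputs in and record what comes out.

```python
# A toy black box: pretend we cannot read the function body. It could be a
# careful model, a lookup table, or a coin flip, and the caller can't tell.
import random

def black_box(x: float) -> float:
    return x * 3.1 + random.choice([0.0, 0.0, 42.0])

# The only tool available from the outside: probe it and look at input/output pairs.
for probe in [1.0, 2.0, 3.0]:
    print(f"input={probe} -> output={black_box(probe)}")

# Most probes look like "roughly input times three," so we might trust that rule,
# but nothing out here tells us when, or why, the occasional wild answer shows up.
```
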
But because you're not sure that the output is actually good, you therefore can't be sure that the decisions you're making are the best, and that leads to really difficult problems. So let's take a theoretical example. Let's say we've built a complex computer model that's designed to project the effects of climate change. And let's say this model is so complex and so recursive on itself that it effectively becomes impossible for us to know whether or not the model is actually working properly. Well, that would mean we wouldn't really be able to rely on any predictions or projections made by this model. I mean, maybe the projections are accurate, but maybe they're not. The issue is there's no way for us to be certain, and yet we have a need to act. Climate change is a thing, and we need to make changes to reduce its impact or to mitigate it. It's possible that any decisions we make based upon the output of the system will exacerbate the problem, or maybe they'll just be, you know, less effective than alternative decisions would be.

Further, we're getting closer to that Arthur C. Clarke statement about sufficiently advanced technologies being indistinguishable from magic. If we produce systems that are so complicated that it's impossible for us to understand them fully, we might begin to view those technologies as being magical, or at the very least greater than the sum of their parts, and this can lead to some illogical decisions.

This kind of brings me to talk about the church of AI called Way of the Future, which was founded and then later dissolved by Anthony Levandowski. You may have heard Levandowski's name if you followed the drama of his departure from Google and his eventual employment and subsequent termination at Uber. And then there was also the fact that he was sentenced to go to prison for stealing, uh, company secrets, and then later received a presidential pardon from Donald Trump.

So, quick recap on Levandowski. Levandowski worked within Google's autonomous vehicle division, which would eventually become a full subsidiary of Google's parent company, Alphabet, and that subsidiary is called Waymo. So when Levandowski left Google, he brought with him a whole lot of data, data that Google claimed belonged to the company and was proprietary in nature and thus constituted company secrets. Levandowski eventually began working with Uber on that company's own driverless vehicle initiative, but the Google slash Waymo legal fight would lead to Uber hastily firing Levandowski in sort of an attempt to kind of disentangle Uber from this matter, which only worked a little bit. Anyway, in the midst of all this Waymo slash Uber drama, in two thousand seventeen, Wired ran an article that explained that this same Anthony Levandowski had formed a church called Way of the Future a couple of years earlier, back in twenty fifteen. He placed himself at the head of this church with the title of Dean, and he also became the CEO of the nonprofit organization designed to run the church. The aim of the church was to see, quote, "the realization, acceptance, and worship of a Godhead based on artificial intelligence (AI) developed through computer hardware and software," end quote. This is according to the founding documents that were filed with the US Internal Revenue Service, or IRS.
Further, Levandowski planned to start seminars based on this very idea later on. By twenty twenty, Levandowski's jump from Google to Uber had escalated into a prison sentence of eighteen months, uh, and it was because he had been found guilty of stealing trade secrets. Trump would pardon Levandowski in January twenty twenty-one, kind of, you know, after the insurrection on January sixth, but before Trump would leave office in late January. As for Way of the Future, Levandowski actually began to shut that down in June of twenty twenty, and it was dissolved by the end of that year, but not reported on until, like, the following February. He directed the assets of the church to be donated to the NAACP.

Levandowski has said that the beliefs behind the church are ones that he still adheres to, that AI has the potential to tackle very challenging problems like taking care of the planet, which, Levandowski says, obviously we humans are incapable of doing. You know, we would hand off to this system the job of taking care of things that we understand to be important but seem to be, uh, incapable of handling ourselves, almost like we're children. Thus, looking at AI like a godhead. So we should seek out solutions with AI rather than locking AI away and saying, oh, we can't push AI's development further in these directions because of the potential existential dangers that could emerge from AI becoming superintelligent.

I don't think there are actually that many folks who are trying to lock AI away at all. Mostly I see tons of efforts to improve aspects of AI from a million different angles. I think most serious AI researchers and scientists aren't really focused on strong AI at all. They're looking at very particular applications of artificial intelligence, very particular implementations of it, but not, like, a strong AI that acts like Deep Thought from The Hitchhiker's Guide to the Galaxy. Anyway, maybe Levandowski's vision will eventually lead us not to a ghost in the machine but to a literal deus ex machina. That means "god out of the machine." That seems to be how Levandowski views the potential of AI, that our unsolvable problems are almost magically fixed thanks to this robotic or computational savior.

You know, in fiction, deus ex machina is often seen as a cop-out, right? You've got your characters in some sort of ironclad disastrous situation, there's no escape for them, and then, in order to get that happy ending, you have some unlikely savior or unlikely event happen and everyone gets saved. And it might be satisfying because you've got the happy ending, but upon critical reflection, you think, well, that doesn't really make sense. There are a lot of stories that get a lot of flak for using deus ex machina, uh. The image I always have is from classical theater, where you've got all the mortal characters in a terrible situation, and then an actor standing in as a god is literally lowered from the top of the stage on pulleys to descend to the mortal realm and fix everything, so that you can have a comedy, a play with a happy ending. For Levandowski, it's really about turning the ghost in the machine into a god. I'm not so sure about that myself. I don't know if that's a realistic vision.
I can see the appeal of it, because we do have these very difficult problems that we need to solve, and we have had very little progress on many of those problems for multiple reasons, not just a lack of information, but a lack of motivation, or conflicting motivations, where we have other needs that have to be met that conflict with the solving of a tough problem like climate change. Right? We have energy needs that need to be met. There are places in the developing world that would be disproportionately affected by massive policies meant to mitigate climate change, and it's tough to address that. There are these real reasons why it's a complicated issue, beyond just "it's hard to understand." So I see the appeal of it, but it also kind of feels like a cop-out to me, like this idea of "we'll engineer our way out of this problem," because that just puts off doing anything about the problem until future you can get around to it. I don't know about any of you, but I am very much guilty of the idea of, you know what, Future Jonathan will take care of this. Jonathan right now has to focus on these other things. Future Jonathan will take care of it. Future Jonathan, by the way, hates Jonathan of right now and really hates Jonathan of the past, because it's just putting things off until it gets to a point where you can't do it anymore, and by then it might be too late. So that's what I worry about with this particular approach, this idea of we'll figure it out, we'll science our way out, we'll engineer our way out, because it's projecting all that into the future and not doing anything in the present.

Anyway, that's the episode on the ghost in the machine. There are other, uh, interpretations as well. There are some great ones in fiction where sometimes you actually have a literal ghost in the machine, like there's a haunted machine. But maybe I'll wait and tackle that in a future, more entertainment-focused episode, where it's not so much about the technology but kind of a critique of the entertainment itself, because there's only so much you can say about, you know, I don't know, a ghost calculator.

That's it for this episode. If you have suggestions for future episode topics, or anything else that you would like to communicate to me, there are a couple of ways you can do that. One is you can download the iHeartRadio app and navigate over to Tech Stuff. Just put Tech Stuff in the little search field and you'll see that it will pop up. You go to the Tech Stuff page and there's a little microphone icon. If you click on that, you can leave a voice message up to thirty seconds in length and let me know what you would like to hear in future episodes. And if you like, you can even tell me if I can use your voice message in a future episode. Just let me know. I'm all about opt-in, I'm not gonna do it automatically. Or, if you prefer, you can reach out on Twitter. The handle for the show is TechStuffHSW, and I'll talk to you again really soon.

Tech Stuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.