What is the technological singularity? How might the technological singularity come about? What do critics say about the idea of the singularity? Listen in as Jonathan and Lauren explore the technological singularity.
Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeart Podcasts, and how the tech are you? So it's January second, twenty twenty four as you listen to this. It's actually back in the past in twenty twenty three when I record it, and we are on vacation. You know, I brought you a whole bunch of new episodes over the holidays, but I figured it's about time for Jonathan to maybe rest a little bit, so we're going to listen to a classic episode today. This episode is called TechStuff Enters the Singularity. I mean, after all, we're in a new year, so why not talk about the future a little bit. This was recorded way back on February eleventh, twenty thirteen. Lauren Vogelbaum, now host of BrainStuff and Savor along with other things, was my co-host at the time, and we sat down to talk about this vision of the future, the singularity. What does it mean when we're talking about the technological singularity? So sit back, relax, and enjoy this classic episode from twenty thirteen called TechStuff Enters the Singularity. Get in touch with technology with TechStuff from HowStuffWorks dot com. Hey there, everyone, and welcome to TechStuff. My name is Jonathan Strickland, Host Extraordinaire.
And I'm Lauren Vogelbaum, host Extraordinarier.
That's very true, and today we wanted to talk about the future. The future. Yeah, really, we're talking about kind of a science fiction future. We're talking about the singularity. And long time listeners to TechStuff, and I'm talking about folks who listened way back before we ever topped out at thirty minutes, let alone an hour, may remember that we did an episode about how Ray Kurzweil works. And Ray Kurzweil is a futurist, and one of the things he talks about extensively, particularly if you corner him at a cocktail party, is the singularity. And so we wanted to talk about what the singularity is, what this idea is, you know, we really wanted to kind of dig down into it, and why is this a big deal, and how realistic is this vision of the future.
Yeah, because some people would take a little bit of issue, would argue with your concept of it being science fiction. They take it extremely seriously.
Oh yeah, they say it's science fact, science fact, it's science inevitability.
Yeah. The term was actually coined by a mathematician, John von Neumann, in the nineteen fifties, but it was popularized by a science fiction writer.
Yeah. It's also, there are a lot of different concepts that are tied up together, and it all depends upon whom you ask what is meant by the singularity. For instance, there are some people who, when you hear the term the singularity, what they say is, okay, that's a time when we get to the point where technological advances are coming so quickly that it's impossible to have a meaningful conversation of what the state of technology is, because it changes by the millisecond. Right. So that's one version. But most of the versions that we're familiar with, that the futurists talk about, incorporate an idea of superhuman intelligence or the intelligence explosion, right.
A kind of combination of human and technological development that just dovetails into this gorgeous you know, space baby from two thousand and one kind of that's.
An excellent way of putting it. The documentary two thousand and one. I remember specifically when the space baby looked at Earth. Okay, that documentary example doesn't work at all. It usually does, but not this Yeah, not this time.
Sorry, space babies are a poor example in this one instance.
But metaphorically speaking, yes, you're right on track, because the intelligence explosion, that was a term introduced by someone known as Irving John Good, or, if you want to go with his birth name, Isadore Jacob Gudak. I can see why he changed it. Yeah. He actually worked for a while at Bletchley Park with another fellow who made sort of a name for himself in computer science, a fellow named Alan Turing. Oh, oh, I guess I've heard of him. Yeah. Turing will come up in the discussion a little bit later, but for right now. So, Irving John Good, just a little quick anecdote that I thought was amusing. So Good was working with Turing to try and help break German codes. I mean, that's what Bletchley Park was all about, right, right. So Good apparently one day drew the ire of Turing when he decided to take a little cat nap, because he was tired, and it was Good's philosophy that if he was tired, he was not going to work at his best, and he might as well go ahead and nap, exactly, take a nap, get refreshed, and then tackle the problem again, and you're more likely to solve it. Whereas Turing was very much a workhorse, you know, he was no rest, no rest, we have to
do. So Turing,
when he discovered that Good had been napping, decided that Good was not so good, and Turing sort of treated him with disdain. He began to essentially not speak to Good. Good, meanwhile, began to think about the letters that were being used in Enigma codes to code German messages, and he began to think, what if these letters are not completely random? What if the Germans are relying on some letters more frequently than others? And he began to look at the frequency of these letters being used. He made up a table and mathematically analyzed the frequency with which certain letters were used, and discovered that there was a bias. There was a pattern. Yeah. So he said, well, with this bias, that means that we can start to narrow down the possibilities of these codes, and in fact he was able to demonstrate that this was a way to help break German codes. And Turing, when he saw Good's work, said, I could have sworn I tried that, but clearly it worked. And then Good, at another point, apparently went to sleep one day, and they'd been working on a code that they just could not break, and while he was sleeping, he dreamed that perhaps when the Germans were encoding this particular message, they used the letters in reverse of the way they were actually printed. And so he tried that when he woke up, and it turned out he was right. And so then his argument was, Turing, I need to go to bed.
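To make Good's frequency idea concrete, here is a minimal Python sketch. This is not the actual Bletchley Park method, just an illustration of tallying how often each letter shows up across intercepted messages and measuring how far that drifts from the flat distribution a truly random cipher would produce; the sample messages and the letter_bias name are made up for the example.

```python
from collections import Counter

def letter_bias(intercepts):
    """Tally letter frequencies across messages and report how far each
    letter drifts from the uniform 1/26 expected if letters were random."""
    counts = Counter(ch for msg in intercepts for ch in msg if ch.isalpha())
    total = sum(counts.values())
    expected = 1 / 26
    return {
        letter: round(count / total - expected, 4)  # positive = used more than chance
        for letter, count in counts.most_common()
    }

# Letters with the biggest positive bias narrow down which cipher settings
# are worth testing first.
print(letter_bias(["QWERTZU", "EINSXEINS", "WETTERBERICHT"]))
```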
So yeah, yeah, the moral of the story here is that naps are good?
Yes, and no one should talk to you, right, yeah, yeah, that's how I live my life. But yeah, so, Good's point. Anyway, he came up with this term of the intelligence explosion, and it was this sort of idea that we're going to reach a point where we are increasing either our own intelligence or some sort of artificial intelligence so far beyond what we are currently capable of understanding that life as we know it will change completely. And because it's going to go beyond what we know right now, there's no way to predict what our life will be like, right, because it's beyond our, because it
Is Yeah, by definition out of our comprehension.
Yes, as the Scots would say, it's beyond our ken.
Are we going to be doing accents of this episode?
So that was a terrible one. I actually regret doing it right now. I already knew I couldn't do Scottish, and yet there I went. Anyway, we're trailing off again. Yeah. So to kind of backtrack a bit before we really get into the whole singularity discussion, that was just a brief overview. A good foundation to start from is the concept of Moore's law. You know, originally Gordon Moore, who, by the way, was a co-founder of a little company called Intel, he originally observed back in nineteen sixty five, in a paper that, I'm going to mangle this, but it was called something like Cramming More Components onto Integrated Circuits, something like that. Cramming was definitely one of the words used, and circuit probably was too. Anyway, he noticed that over the course of, I think originally it was twelve months, but today we consider it two years.
Eighteen to twenty four months, I think is the official, unofficial.
Right, right, right. Yeah, that the number of discrete components on a square inch silicon wafer would double due to improvements in manufacturing and efficiency. So that, in effect, what this means to the layman is that our electronics, and particularly our computers, get twice as powerful every two years. So if you bought a computer in nineteen ninety eight and then bought another computer in two thousand, in theory, the computer in two thousand would be twice as powerful as the one from nineteen ninety eight. This is exponential growth. That's an important component, this idea of exponential growth, right. And it goes without saying that if you continue on this path, if this continues indefinitely, then, you know, you quickly get to computers of almost unimaginable power just a decade
out. Certainly, although, I mean, I still don't really understand what a gigabyte means, because when I first started using computers, we were not counting in that. I mean, I was still impressed by kilobytes at the time.
So yeah. Now, I remember the first time I got a hard drive, I think it had like a two hundred and fifty megabyte hard drive, and I thought, who needs that much space? Granted, that's space we're talking about, not even processing power, right, absolutely. So, yeah, it's one of those things where the older you are, the more incredible today is, right, because you start looking at computers and you think, I remember when these things came out, and they were essentially the equivalent of a really good desktop calculator. Right. So, but Moore's law states that this advance will continue indefinitely until we hit some sort of fundamental obstacle that we just cannot engineer our way around.
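The arithmetic behind that "twice as powerful every two years" claim is simple enough to sketch in Python; this assumes a clean doubling every two years (the textbook reading, not Moore's original twelve-month figure), and the function name is just made up for the example.

```python
def moores_law_factor(years, doubling_period=2.0):
    """Relative compute power after `years`, assuming one clean doubling
    every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# 1998 -> 2000 is one doubling, so roughly twice the power.
print(moores_law_factor(2))   # 2.0
# Carried out for a single decade, the exponential curve gets steep fast.
print(moores_law_factor(10))  # 32.0
```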
Oh right, you know, and people, that's why it's kind of in contention right now, because people are saying, well, there's only so much physical space that you can fit onto with silicone. There's a physical limitation to the material; there's only so much it can do. And so does Moore's law still apply if we're talking about other materials? And what's, you
Know, right, and and how small can you get before you start to run into quantum effects that are impossible to work around. Uh, and then do you change the geometry of a chip? Do you go three dimensional instead of two dimensional? Would that help? And yeah, there are a lot of engineers are working on this, and frankly, pretty much every couple of years, someone says, all right, this is the year Moore's Law ends to end. It's over, it's gone, it's done with. Five years later, you're still going strong. Yeah, And then on the six years someone else says More's Law is gonna end.
It's a little bit of a self-fulfilling prophecy. I think that a lot of companies attempt to
keep it going. To keep it going, oh sure, yeah, yeah, yeah. I mean, no one wants to be the one to say, uh, guys, guess what, we can't keep up with Moore's law anymore. No one wants to do that, so it is a good motivator.
Also, if I can footnote myself real quick, I'm pretty sure that I just pronounced silicon as silicone, and I would like to state for the record that I know that those are two different substances.
Okay, that's fair. Anyway, I was going to ask you about it, but by the time you were finished talking, I thought, let's just go. Yeah, that's cool. It's all right. If you knew how many times I have used that particular pronunciation to hilarious results. Excellent. So, moving on with this whole idea about Moore's law. I mean, the reason this plays into the singularity is that with the technological advances, you start to be able to achieve pretty incredible things. And even within one generation of Moore's law, which is kind of a meaningless term, but let's say you arbitrarily pick a date and then two years from that date you look and see what's possible with the new technology, getting to twice as much power, however you want to define it, doesn't necessarily mean that you've only doubled the amount of things you can do with that power. You may have limitless things you can do. So with that idea, you're talking about being able to power through problems way faster than you did before. And there's lots of different ways of doing that. For example, grid computing. Grid computing is when you are linking computers together to work on a problem all at once. Now, this works really well with certain problems, parallel problems we call them. These are problems where there are lots of potential solutions, and each computer essentially is working on one set of potential solutions. And that way you have all these different computers working on it at the same time, and it reduces the overall time it takes to solve that parallel problem. And so, like, if you've ever heard of anything like Folding at Home or the SETI project, where you could dedicate your computer's idle time, so the idle processes, the processes that are not being used while you're surfing the web, or writing How the Singularity Works, or, I don't know, building an architectural program in some sort of CAD application. Anything that you're not using can be dedicated to one of these projects. Same sort of idea, that you don't necessarily have to build a supercomputer to solve complex problems if you use a whole bunch of computers, a whole bunch of small ones. The Large Hadron Collider does this. Although they use very nice, advanced computers, they do a lot of grid computing as well. So just using those kinds of models, we see that we're able to do much more sophisticated things than we could
otherwise. Certainly. Yes, networks, as it turns out, are pretty cool.
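Here is a minimal sketch of that kind of embarrassingly parallel work-splitting in Python. It is not how Folding at Home or SETI actually built their systems; it just shows the shape of the idea, with worker processes on one machine standing in for volunteer computers, and a toy check_candidate predicate standing in for the real science.

```python
from multiprocessing import Pool

def check_candidate(candidate):
    """One independent unit of work: test a single candidate solution."""
    return candidate, (candidate * candidate) % 97 == 1  # toy predicate

if __name__ == "__main__":
    candidates = range(1_000_000)
    # Each worker tests its own slice of the candidate space and never needs
    # to talk to the others, which is exactly why this style of parallel
    # problem spreads so well across a grid of machines.
    with Pool() as pool:
        results = pool.map(check_candidate, candidates, chunksize=10_000)
    hits = [c for c, ok in results if ok]
    print(f"{len(hits)} candidates passed")
```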
Yeah, and networks play a part in this idea of the singularity. Actually, I guess now is a good time, we'll kind of transition into Vernor Vinge's, and I honestly don't know how to say his last name. I say Vinge, and it could end up being Vin-jee. But I just went with what you said. So that's great, that's fine. Let's do it. We'll say that Vinge says everything is silicone. So Vernor, though, he, Vern, I call him Vern. He suggested four different potential pathways that humans could take, or really that the world could take, yes, to arrive at the technological singularity. Okay, what are they? The four ways are: we could develop a superhuman artificial intelligence, so computers suddenly are able to think on a level that's analogous to the way humans think and can do it better than we can. Right. Whether or not that means computers are conscious, that's debatable. We'll get into that too. Computer networks could somehow become self-aware. That's number two. Okay. So yes, Skynet. So, like the grid computing we were just talking about, somehow, using these grid computers, the network itself.
Having enough cycles and enough pathways and enough loops back around, it starts going like, hey, I recognize this, yeah, and starts thinking.
Like thinking about IBM's Watson. But it's distributed across a network. So computers. You can think of computers as all being super powerful neurons in a brain, and that the network is actually neural pathways. And it's definitely a science fiction ye way of looking at things. Doesn't mean it won't happen, but strangers, my friends, it feels like a matrix kind of thing to me. Then we have the idea that computer human interfaces are so advanced and so intrinsically tied to who we are, that humans themselves evolve beyond being human, we become trans human. Okay, So this is an idea that we almost merge with computers at least on some.
level, via kind of nanobot technology, you know, stuff running through our bloodstreams, stuff in our cells, yep.
Or we just have brain interfaces where our consciousness is connected to it. So, for example, we might have it where, instead of connecting to the internet via some device like a smartphone or a computer, yeah, it's right there in our meat brains. So that, you know, you're sitting there having a conversation with someone, then you're like, oh, wait, what movie was that guy in? Let me just look up IMDb in my brain. And then you, you know, depending on how good your connection is, which means, by the way, if you are a journalist and you attend CES, you will automatically be dumber, because all the internet connectivity will be taken up, and so you'll be sitting there trying to ask good questions and drool will come out of your mouth, which to me is a typical CES.
I can only assume that wireless technology would advance also at this point, but one can only hope, fingers crossed.
There are certain technologies that are not advancing at the exponential rate of Moore's law, which is another problem. We'll talk about that. Yeah. And then the fourth and final method that Vernor had suggested the world may go would be that humans would advance so far in the biological sciences that it would allow us to engineer human intelligence, so that we could make ourselves as smart as we wanted to be. This is sort of that Gattaca future, another great documentary, where we engineer ourselves to be super smart. Right. So those are the four pathways: artificial intelligence, computer networks become self-aware, computer-human interfaces become really, really awesome, or we have biologically engineered human intelligence. And all four of these lead to a similar outcome, which is this intelligence explosion. And this is the idea that some form of superhuman intelligence is created, either artificially or within ourselves, and that at that point we will no longer be able to predict what our world will be like, because, by definition, we will have a superhuman intelligent entity involved. And because that's superhuman, it's beyond our ability to predict. Right, which is, you know, which
makes thought experiments about it a little bit, uh.
Philosophical. Yeah, that's the kind way of putting it. Pointless would be another way of putting it. Like, we could, you know, sit there and spitball a whole bunch of possible futures. But that's the thing, they're possible. We don't know which one could come out. We don't even know if these four pathways are inevitable. We have futurists who truly believe that this is something that will happen at some point. There are other people who are more skeptical, but we'll talk about them in a bit. So one of the outcomes that Vernor was talking about, and it's a fairly popular one in futurist circles, is the idea of the robo-apocalypse, essentially, right. This is where you've got the humans are bad, destroy all humans idea. Essentially, the idea is that humans would become extinct, either by definition, because we've evolved into something else, or because whatever the superhuman intelligence is, it decides we are a problem.
Yeah, and a lot of futurists are a lot more positive about that. They're more looking forward to it than being scared of it. It's less of a, oh no, big scary robots are coming to take over our society, and more of a, robots are coming to take over our society, like, free day!
Yeah, yeah, exactly. Yeah, I don't have to work anymore, and I don't, because robots are supplying all the things we need. There's no need for anyone to work anymore. There's no need for money anymore, because the only reason you need money is so you can buy stuff. But if everything's free, then you don't need it. So it becomes Star Trek, and we all, you know, run around in jumpsuits and, right, punch people. And if you're Kirk, you make out a lot. I mean, a lot. That dude, every week. Picard and Riker, if you add them together, make one
Kirk. And yes, in this documentary series.
Yeah, Star Trek. Yeah, I don't know about Archer, because I never watched Enterprise, so you guys have to get back to me on that. Yeah, sorry, sorry about that.
It's also a gap in my personal understanding.
I just took one look at that decontamination chamber and I said, yep, I'm out. Anyway. So that's Vernor Vinge. He's sort of popularized this idea, but there are other people whose names I think are synonymous with it, and we will talk about them in just a minute. And now back to the show. So, Vernor Vinge, again, very much associated with the idea of the singularity. But there's another name that comes up all the time: Ray Kurzweil.
Ray Kurzweil, and this is a fellow who has been referred to in various circles as the Thomas Edison of modern technology, or, perhaps more colorfully, the Willy Wonka of technology. That was by Jeff Duncan of Digital Trends, and I just wanted to shout that out because that was great. Nice.
But you get nothing. I shared a remix of Willy Wonka earlier today and it's still playing through my head.
We're fans, we might be fans of the Gene Wilder Willy Wonka. Everyone, homework assignment, go watch that. It has nothing to do with the singularity, the singularity at all.
I don't know there's some chocolate singularity in there, Chocolate Singularity. I want to do an episode on.
If I were better at cover band names, I totally would have said something witty right there.
Yeah, all right, well, fair enough, we'll say it's the Archies for Sugar Sugar, Oh dear, Oh my goodness.
Okay. So Ray Kurzweil. Yeah, Ray Kurzweil is the kind of cat who, you know, when he was in high school, invented a computer program. And this is in the mid nineteen sixties. This isn't like last year or something. In the mid nineteen sixties, he created a computer program that listened to classical music, found patterns in it, and then created new classical music based on that.
So it's a computer that composed classical music. Yes, following the rules of classical music that other composers had created. Yes. That's kind of cool. That's just something he did, you know. And yeah, dude's got credentials. Yeah.
He also kind of invented flatbed scanners, has done a whole bunch of stuff in speech recognition, and.
Which, that's interesting, because we'll talk about that in a second. But one of Kurzweil's big points is that he thinks that by, and this all depends upon which interview you read of Kurzweil, but in various interviews he has said that essentially, by twenty thirty, we will reach a point where we will be able to make an artificial brain. We'll have reverse engineered the brain, and we'll be able to create an artificial one. And there's a lot of debate in smarter circles than the ones I move in, that's not a slap against my friends, they're pretty bright, but none of us are neurologically gifted, and I include myself in that circle. But there are some very bright people who debate about this point, whether or not we'll be able, by the year twenty thirty, to reverse engineer the brain and design an artificial one. And I think the debate is not so much on whether or not we'll have the technological power necessary to simulate a brain. Sure, we can simulate brains on a certain superficial level today. Well, I
Mean hypothetically we could connect enough computers that we could make it go.
I think, yeah, we could probably get the computer horsepower, especially by twenty thirty, to simulate a human brain. The question is whether we will understand the human brain well enough to do so. Exactly. So that's sort of where the debate lies. It's not so much on the technological side of things as it is the biological side of things, which is kind of interesting. I've read a lot of critics who have really jumped on Kurzweil for this. Particularly, PZ Myers has written some pretty, yeah, strongly worded criticisms of Kurzweil's theories, saying that Kurzweil simply does not understand neurological development and activity, and that, you know, the interplay between the environment and the way our brains develop, the, you know, nurture versus nature, all of this stuff with the hormonal changes, electrochemical reactions.
Saying that there are so many little bits that make up our brains, so many hormones, so many processes, and we understand such a small fraction of what they do. This is why a lot of psychiatric drugs, for example, are kind of like, oh, well, we invented this thing, and we guess it does this thing, right? Take it and see what happens.
We do stuff? Yeah, we don't. It tends to make you happy. It also makes you perceive the color red as having the smell of oranges, like, you know, that's, we don't understand it fully. And in fact, there are other people, like Steven Novella, who is, uh, he's the author of the NeuroLogica Blog, and he also is a host on a wonderful podcast called The Skeptics' Guide to the Universe. If you guys haven't listened to that, you should try it out, especially if you like skepticism and critical thinking. But he's a doctor and a proponent of evidence-based medicine, and he talks about how we don't know how much we don't know about the brain. We have no way of knowing where the endpoint is as far as the brain is concerned, and therefore we cannot guess at how long it will take us to reverse engineer it, simply because we don't know where the finish line is.
Right, right. Yeah, Kurzweil has a new book, new as of, we're recording this in January twenty thirteen, it just came out in November of twenty twelve, called How to Create a Mind: The Secret of Human Thought Revealed. And in the book he theorizes that, okay, if you'll follow me for a second, atoms are tiny bits of data, okay. DNA is a form of a program. The nervous system is a computer that coordinates bodily functions, and thought is kind of simultaneously a program and the data that that program contains.
Gotcha. See, now, this is another problem that some scientists have, yeah, is reducing the human brain to the model of a computer.
Right, because it's, you know, it's a very elegant, interesting proposition. Sure. And it's kind of sexy like that, because you go, oh, well, that sort of makes sense, man. Like, let's go get a pizza and talk about this more.
Yeah, let me let me get a program that will allow me to suddenly know all kung fu.
Right, and when you're a programmer, that's a great plan. Yeah, I mean, yeah, that sounds terrific. But yeah, there's one specific guy I found, Jaron Lanier, who wrote a terrific thing called One Half of a Manifesto, which is a really entertaining read if you guys like this kind of thing, where he was saying that what futurists are talking about when they talk about the singularity is basically a religion. He was calling it cybernetic totalism, you know, like a fanatic ideology. He compares it to Marxism at some point. Interesting. Yeah, and he was saying that, you know, this theory is a terrific theory if you want to get into the philosophy of who we are and what we do and what technology is, but that, you know, cybernetic patterns aren't necessarily the best way to understand reality, and that they're not necessarily the best model for how people work, for how culture works, for how intelligence works, and that saying so is a gross oversimplification.
That's a good point, and we should also point out that it all depends on how you define intelligence as well, because Kurzweil himself has worded his own predictions in such a way that, some would argue, Novella argues, for example, that he has given himself enough room where he's going to be right no matter what. Like saying that by twenty thirty, we will be able to reverse engineer basic brain functions, and Novella says, well, technically you could argue that now. So that kind of gives you a lot of room, a little bit of a gimme there. Yeah. But whether or not it means total brain function, that's a totally different question. And so the other point is that we could theoretically create an artificial intelligence that does not necessarily reverse engineer the brain. It doesn't follow the human intelligence model. I mean, that's IBM's Watson, again, a good example of artificial intelligence that, you know, in some ways it mimics the brain, because it kind of has to. You know, we're coming at this, human beings are the ones creating this technology, and so, as human beings creating this technology, it's going to follow the rules as we understand them. So there's going to be some mimicry there. Right. But IBM's Watson, you know, you think about that, it doesn't really understand necessarily the data that's passing through it. It's looking for the connections, and that makes
it really savvy at making connections and recognizing patterns and spitting out useful information.
Yeah, it's looking for whatever answer is most likely the right one. It's all probability based, right. So, if it doesn't reach a certain threshold, it doesn't provide the answer. So, arbitrarily speaking, I don't know what the threshold is, so I'm just making up a number, eighty five percent. Let's say it has to be eighty five percent certain or higher for it to give that answer. If the certainty falls below that threshold, no answer is given. That's essentially how it worked when Watson was on Jeopardy, right. It would analyze the answer, in Jeopardy terms, and then come up with what it thought was probably the most accurate question for that answer, and occasionally it was wrong, to hilarious results. But it did sort of seem to kind of mimic the way humans think, at least on a superficial level.
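That answer-or-abstain threshold is easy to sketch. This is not IBM's actual Watson pipeline, and the eighty five percent figure was made up on the show; it's just an illustration of picking the highest-confidence candidate and staying quiet when the confidence doesn't clear the bar. The function name and the sample answers are invented for the example.

```python
def answer_or_abstain(candidates, threshold=0.85):
    """Return the highest-confidence candidate answer, or None (don't buzz in)
    if even the best candidate falls below the confidence threshold."""
    best_answer, confidence = max(candidates.items(), key=lambda kv: kv[1])
    return best_answer if confidence >= threshold else None

# Hypothetical candidate answers with made-up confidence scores.
print(answer_or_abstain({"What is Toronto?": 0.32, "What is Chicago?": 0.91}))  # answers
print(answer_or_abstain({"What is Toronto?": 0.55, "What is Chicago?": 0.60}))  # None
```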
And I mean, the thing about humans is that they're wrong a lot more than that fifteen percent of the time.
Yeah, it's, you know, we've got, well, we give answers even if we're not eighty five percent sure of a question. I certainly do.
Because we all know from going to trivia nights. Yeah, and there's a lot of, I've read a lot online about arguments of how it's our deficiencies, our memory biases, our irrational behavior, our weird hormonal stuff going on, that make us human, and that you can't teach a computer to be irrational.
That's true, although you can teach it to swear. You can. We just read a story last week, yeah, where IBM allowed Watson to read Urban Dictionary, and then Watson got a little bit of a potty mouth.
It got it got kind of fresh.
It did, it did. It started to say, uh, oh, see, what was it? Oh, I'm going to say something and it's going to be bleeped out, right, Tyler? Tyler just said so. Uh. Anyway, so there's one point where a researcher asked a question of Watson, and Watson included within the answer the word, well, since that was bleeped out, you probably don't know what it was, so go look it up. It was funny. It was really funny.
Yes, And then and then they basically nuked that part of Watson from orbit. They were like, you know what, never mind.
It was the only way to be sure. They wiped out the Urban Dictionary from Watson's memory. They also said that a very similar thing happened when they let Watson read Wikipedia. Oh no. No judgments here, just saying what IBM said. Anyway, again, the computer was unable to determine when it was appropriate and what the appropriate context was for dropping a swear word. Yeah, it didn't know, so it just started to speak kind of like my wife does. So yeah, I'm going to pay for that later. So anyway, that's an interesting point, though. Again, you're showing how machine intelligence and human intelligence are different, because the machine intelligence doesn't have that context. For sure.
And of course, you know, we're talking about science fiction, or science future, however you want to term it, so that, you know, we might very well come up with a fancy little program script that lets you introduce that kind of bias. But, you know, yeah, but again, from that documentary Star Trek, I mean, Data never figured out those contractions.
That's true, that's true. Turing actually had a great mental exercise, really, and it's called the Turing test, and this applies to artificial intelligence. Turing's point, and we've talked about the Turing test in previous episodes of TechStuff, but just as a refresher, Turing had suggested that you could create a test, and that if a machine could pass that test at the same level as a human, in other words, if you were unable to determine whether the person who took that test was human or machine, the machine had passed the Turing test and had essentially simulated human intelligence. And it usually works as an interview, so you have someone who's conducting an interview, and you have either a machine answering or a human answering, and there's a barrier up, so that of course the person asking the questions cannot see who is responding. And of course they're responding through, you know, text, usually, because if they're responding through a voice and it's like, I. Think. The. Answer. Is. you know, you'd be like, well, either it's a robot or the most boring person in the world. The idea being you would ask these questions over a computer monitor, get text responses, and if you were unable to say with a certain degree of accuracy whether or not it was a machine or a person, then you would say the machine passed the Turing test. And you could argue, well, that could just mean that the machine is very good at mimicking human intelligence, it does not actually possess human intelligence. Turing's point is, does that matter? Because I know that I am intelligent. I speak with someone like Lauren, who I assume is also intelligent based upon the responses she gives. But she could just be simulating intelligence. However, I have already bestowed, in my mind, the feature of intelligence upon Lauren, because what she does is very much akin to what I do. So Turing said, if you extend the courtesy to your fellow human being that they are intelligent, based on the fact that they act like you do, why would you not do the same thing for a machine? Does it matter if the machine can actually think? If the machine simulates thought well enough for it to pass as human, then you're giving it the same benefit of the doubt that you give anyone else you
meet. Right, this is what a lot of science fiction movies are about.
Actually, yeah, there's a lot of philosophy.
And yeah, a lot of philosophy, a lot of Isaac Asimov, a lot of Blade Runner, and that's not an author.
Sorry. Well, no, but you know, Philip K. Dick, look him up. So anyway, thank you. Do Androids Dream of Electric Sheep, I won't spoil it for you. So, to kind of wrap this all up, getting back into the discussion of philosophy, we had, very recently, we did a podcast about are we living in a computer simulation, right, right. And that kind of plays into this idea of the singularity, because that argument stated that if the singularity is in fact possible, if it's inevitable, if we are going to reach this level of transhumanism where we are no longer able to really predict what the future will be like, because it'll be beyond our understanding, then one thing we would expect to do is create simulations of our past to kind of study ourselves. Sure, right.
And to see what happens, play around with variables, yeah.
Yeah, and we could, if we're that advanced, we could in theory create such a realistic simulation that the inhabitants of that simulation would be incapable of knowing that they were artificial, and would be completely, you know, self-aware of themselves. You know, that was totally redundant, self-aware, but unable to know that they were a simulation. He said that if those things are possible, then there's no way of knowing, that, you know, the overwhelming possibility is that we are in a computer simulation.
It's a computer simulation, yeah, right now.
Because, yeah, if that's what's gonna happen, then there's no way of saying with certainty that we are not in fact the product of that. And so, uh, the point being, not necessarily that we are in fact living in a computer simulation, but that perhaps this singularity, this transhumanism thing, might not be realistic. It might not be the future that we're headed to. Maybe it ends up being a pipe dream that's not really possible for us to attain. Or maybe we'll wipe ourselves out through some terrible war or catastrophic accident. Maybe we create a biological entity that wipes us out, a la The Stand, or we create a black hole at the LHC. Which, come on, people, don't write me. I already know about that, and how tiny and, um, almost nonexistent they are, because they last so briefly, if it totally happened. Let's say that they do that thing where you look at that one website where the black hole forms in the parking lot outside the LHC and you just see the whole picture go, which, funny video. Anyway, that argument plays back into this. So I don't know. I don't know if we're going to ever see a future where the singularity becomes a thing. Oh, and we never really talked about it, but one of the big points that Kurzweil really punches in his singularity talks is the idea of digital immortality, right, right.
And he's been obsessed with this, and obsessed is probably a judgmental word, I apologize, but he's been very focused on this concept. His father died when he was about twenty four, and he's been exploring theories on life extension ever since then, and supposedly takes all kinds of supplements, and sells them as well, to extend life. Has all kinds of health plans.
Yeah, dietary, that he has, exercise, all of it, the idea being that if he can make his own life
last long enough that we hit the singularity, then he can become immortal.
Right, and either that, you know, we attain immortality through one of a thousand different ways. For example, we end up uploading our own intelligence into the cloud, right, and then we become part of a group consciousness, so we are no longer really individuals. Or we merge with computers in some other way so that we are technically immortal that way. Or we just conquer the genes that all guide the aging process and we stop it, and we stop disease.
You know, we take, like in Transmetropolitan, you just take a cancer pill and then you don't get cancer, because that's what you do.
Yeah, So again the singularity. That's kind of why I think a lot of critics also point to it as being more of a religion because it's kind of this sort of utopian pipe dream in their minds.
There's the former CEO of Lotus, Mitch Kapor, Kapper, I'm not sure how you say it, who once called it the intelligent design for the IQ one forty people.
Yeah, ouch, ouch. Well, meanwhile, Kurzweil's kind of laughing all the way to the bank. I hear that a company that rhymes with Shmugel hired him. Little, little people.
I mean, you probably wouldn't have heard of them. Yeah, but yeah, they just hired him on to be, uh, I have it in my notes, the official title. I think it's the director of engineering. Yeah, a director of engineering over there.
Yeah, they get some big names. I mean, they had Vint Cerf as the chief evangelist, and of course he was one of the fathers of the Internet. So Google's, Google's got a, they're known for getting some really smart people. And to be fair, while the singularity may or may not ever happen, I think it's important that we have optimists in the field of technology who are really pushing for our development, to try and make the world a better place for people.
Now, you know, oh, absolutely. So even if we never reach the point of digital immortality in our lifetimes or any others, I mean, if someone wants to think so big that they want to put in nanobots to make my body awesomer, I mean, and, not that, that came out possibly crude. It mostly means that I don't get cancer and die kind of stuff. That's terrific. I can't argue with any part of that.
Yeah, I'm going to be on video so much this year that I definitely need my body to be awesomer, so I'm all for that. Well, either way.
Yes. And Google, you know, Google looks forward so much to augmented reality. Augmented reality, I'm sorry, I can't pronounce anything today, I am on a non-roll. It's okay. And the Internet of Things and all of that wonderful future tech, that it seems like a terrific fit.
Yeah. Yeah, so we'll see how it goes. I mean, obviously, the nice thing about this is that all we have to do is live long enough to see it happen or not happen. And most predictions have the singularity hitting somewhere between twenty thirty and twenty fifty. Yeah, it all depends upon which futurist you're asking. And also, I think it's one of those rolling goalposts as well. You know how certain technologies are always twenty years away, or five years away, or ten years away. So we'll see. Maybe by twenty twenty we'll be saying, all right, we've revised our figures.
By the twenty seventies, definitely, but who knows.
We'll see. Guys, if you have any suggestions for future episodes of TechStuff, well, here's what you can do. You can write us an email. And a lot of people have been asking about our email address. I do say that every episode, but in case you've missed it, listen carefully. Our email address is TechStuff at Discovery dot com. Send an email. I'll prove it by writing back. Or drop us a line on Facebook or Twitter. Our handle at both of those is TechStuff HSW, and Lauren and I will talk to you again in the future. And that was the classic episode TechStuff Enters the Singularity, way back from twenty thirteen, more than a decade ago. Holy cats, I have been doing this show for so long, because Lauren, of course, was my second co-host. I had already done a couple hundred shows with a different co-host. Wow, really does hit me right in the brain. So I hope you all enjoyed that. I'm looking forward to seeing what comes next. I don't think we're gonna hit the Singularity this year. Maybe I should have put that as one of my predictions, but who knows. Maybe OpenAI is gonna create the next generative chatbot and it'll program the Skynet-like system that will bring us into the Singularity kicking and screaming. Or, if you're me, I'm probably already kicking and screaming, just because I'm grouchy. Anyway, I hope you're all well. I hope you had a happy and safe New Year, and I'll talk to you again really soon. TechStuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.