In this podcast, Toby Walsh is joined by Vision Australia assistive technology guru and 'Talking Tech' presenter David Woodbridge to chat all things Artificial Intelligence.
Toby Walsh is Laureate Fellow and Scientia Professor of Artificial Intelligence in the School of Computer Science and Engineering at the University of New South Wales, research group leader at Data61, adjunct professor at QUT, external Professor of the Department of Information Science at Uppsala University, an honorary fellow of the School of Informatics at Edinburgh University and an Associate Member of the Australian Human Rights Institute at UNSW.
Need more from Professor Toby Walsh?
Listen to our recent Vision Australia Library presentation, part of the Melbourne Writers Festival: In Conversation with Toby Walsh, as he examines the possibilities and perils of emerging AI technologies, from ChatGPT to facial recognition and self-driving cars, with Elizabeth McCarthy.
https://omny.fm/shows/interview-highlights/melbourne-writers-festival-special-in-conversati-1
Hi folks, and welcome to a Vision Australia Radio special podcast. Today I'm joined by Professor Toby Walsh. And I've already forgotten what I was going to call you, so I'll just call you chief science person from the AI Institute at the University of New South Wales, which I've probably just strangled. So, Toby, thanks for coming on the program today. Like I said, I have seven questions that I'll get through very, very quickly. But my first one, and it makes me want to grind my teeth a lot, is: why is there a need, even within AI itself and the people behind it, to actually try and make AI systems sound and react like human beings?
Fantastic question. It's a fatal mistake that we're making all the time, because it confuses people and it messes with people. We give them names, often female names. You know, we shouldn't call them Siri or Cortana. We should call them robot or computer, or names that certainly aren't gendered. You could call them Alpha or Omega, letters of the Greek alphabet, whatever it is. But it's a bad human trait: we anthropomorphize, we apply human attributes to things that are not human.
Exactly. Because in the demo from OpenAI last week, as we record this, they had it doing things like, I think, supposedly flirting with the person. It was laughing at its own jokes. And I kept thinking, you're a computer program, this is all false and not real. And that's why I keep thinking of your book Faking It, an excellent book about faking intelligence. This just reminds me that this is so far up there amongst faking it. I just don't know why people don't shrink away from this stuff and go, look, guys, I think we really are getting a bit carried away.
The story is actually even worse than that. Today they've actually had to retire the female voice for the new OpenAI chatbot, because they tried to buy the rights from Scarlett Johansson. She was the voice in the movie Her, and she was nominated for an Oscar, indeed, for the part where she played the AI operating system. She declined to license her voice to OpenAI. But the voice that they had was eerily like her voice. Lots of people said it reminded them of the movie, and many of Scarlett Johansson's friends apparently said they thought it was her. Because I think she's probably threatened to sue, she's probably taken legal action, and with the bad publicity, they've actually had to take the voice down.
Because, I mean, as a blind person, I've had talking computers since 1980. So in one form or another, I've had computers speak to me. So is it this thing that, for sighted people, it's really novel to have a computer quote-unquote speak to you? Or is it the fact that it's supposedly this real thing behind your computer screen that's really talking to you?
It is a deceit. I mean, it is fooling us that maybe there's more behind the scenes than there actually is. As another example, if you use one of these chatbots like ChatGPT, it slowly types out the answer to your question. If you actually understand how it works, it has the answer in a flash; it doesn't actually have to slowly type out the answer. But they've built the interface so that it does, so you get the feeling that there's a person behind the screen slowly typing away to you, to make it more personalized, to try and engage you more, to fool you into thinking it's smarter, more intelligent, maybe even more sentient than it is.
That's right. And the nicer the voice, the more I tend to dislike it, because as far as I'm concerned, you're a computer. I don't care what you sound like. Give me the information and make it accurate. And one of the issues that I've got, particularly with generative AI, is that if it's just doing straight word, sentence, paragraph, page prediction, how can it actually get things so badly incorrect sometimes? Most of the time it'll get it correct, almost accidentally I think sometimes, based on its data. But when it gets things wrong, it really gets things wrong.
Well, that's the thing that we should remember: it's artificial intelligence. It's not human intelligence. It's not going to break the way human intelligence does. I mean, the remarkable thing about human intelligence is how robust we are. Whereas people who program computers quickly realize computers are very brittle. They break, and they break very catastrophically. They don't have all of our common sense. They don't have our remarkable adaptability: I can drop you into a new circumstance and you could start doing stuff, whereas with a computer, you change the input very slightly and it falls over completely. Because it's an artificial, very different intelligence to human intelligence. And we're quick, especially when it sounds like us, to think it's going to be like us, not realizing that, no, actually it's quite a different type, quite a different flavor of intelligence.
I can't remember what Star Trek episode it was, but I think it was one of the original episodes with Spock, because he had to do one of his mind melds, but he did it with a computer. And this computer had some sort of bastardized name of Voyager, because NASA sent out the Voyager probe and this thing came back as an alien spacecraft. It was interesting, because what they were basically saying in the episode is that you can have all the knowledge in the world, and it doesn't actually mean you're intelligent. And I just think intelligence is such a misused word, because I know it's artificial, but I think it's a bit bold to use the word intelligence, because it's actually not intelligent. All it's doing is looking up facts; it's not really extrapolating anything. And I just want a new name for artificial intelligence, because it's not intelligent.
Well, it nearly got called many other things. It nearly got called cybernetics. In Europe, they call it informatics. At the end of the day, it's a terrible name. Intelligence is very poorly defined, so what the heck could artificial intelligence be? But Spock is a good example, because he was quite a smart guy, yet somewhat lacking in emotional intelligence. And you realize there are many different facets to intelligence: our social intelligence, our emotional intelligence, our creativity, which are really important to our interactions with each other. And Spock, despite clearly being a very smart cookie, was also quite lacking in his interpersonal skills.
Yeah, no, it used to be quite funny. It was great.
And the same is true of machines. They have some cognitive intelligence, some ability to answer general knowledge questions and do maths and those things. But they are very severely lacking in emotional intelligence, and that's likely to be a significant handicap for a long time going forwards, in part because they don't have emotions themselves. I mean, obviously they don't have emotions. Emotions are biochemical, and they're electrical devices, so they don't have anything like that. And one of the great things that we have is that we can reflect on our own emotions. We can say, well, how would I feel if someone said that to me, and then think, oh, I'd be really upset. Machines can't do that. They don't have any insight to reflect on and think about how I would feel, because they have no feelings themselves. So they're going to be very severely handicapped, certainly in that facet of their intelligence, for a long time.
I can remember the first thing I played with on my little Apple II with speech, back in the early 1980s, was that horrible, ridiculous psychotherapy program, Eliza. And I used to try and trick it. I'd say, oh look, my bed is feeling very sad today. And it's like, oh, when was the last time your bed felt sad? And it just got more and more ridiculous. And I thought, yeah, all you're really doing is word prediction; you're just doing straight reflection. So with smart speakers these days, the big ones, the Amazon ones and the Google ones, I'm assuming at the moment they're still not using large language models? I'm assuming they've got some sort of set series of instructions, and that's all they're currently working from at the moment.
Yes, something like Alexa is very heavily programmed, very heavily scripted. But the future is going to be that ultimately AI is going to be the operating system of those devices, and indeed all our devices: your smartwatch, your smartphone, your smart home, your smart toaster, your smart front door, your smart light switch. They're all going to have it. Actually, when we come back to Scarlett Johansson, it is going to be like that movie Her: AI is going to be the operating system of all of those devices. That's going to allow you to interact with them, so you can talk to them, they understand at a much higher level what you want them to do, and then do that stuff for you. We're only at the beginning of that journey, where devices get upgraded with more and more AI that allows us to have a richer and richer conversation with them. So ultimately, I think, people will become somewhat attached to these devices that they're always talking to.
The one thing that always gets me very nervous about anything to do with AI is when it comes to computer vision. Because I know in your book you were talking about how there's lots of data being trained on in medicine, for radiology and all that sort of stuff. The problem I have as a blind person is that whenever I use my smartphone to take a picture of something in the backyard or out the front, there's still about a 75 or maybe even 80% chance, if I was being nice, that what the camera's actually telling me the object is is incorrect. And I just wonder, is that just the state of computer vision? Or is it the fact that it doesn't have enough data to work out that, you know, my red garbage bin is not a fire hydrant on my grass verge, for example?
Yeah, it's getting better and better as we train them on more and more data. The idea ultimately of computer vision is to be able to understand the world so that computers, robots for example, or autonomous cars, can navigate around it. And I'm pretty confident we will get there, in part because, for example, if you're driving a Tesla, you're helping to train the next generation of Tesla. And the other thing that's really making a big difference, interestingly enough, is that Teslas actually do more of their computer vision training on simulators than they do in the real world. The car driving simulators are now so good, so realistic, that they drive more than ten times the distance every night in simulators. And of course, if you're doing it in a simulator, you can do it faster than real time; you can speed up the world ten times or a hundred times, and train the computer systems on those. The other great thing about simulators, of course, is you can do things that you wouldn't be allowed to do in the real world. You can practice accidents. You can make the conditions really difficult. You can put the sun into the eyes of the driver and make the road wet. You can do things where you might be risking the life of the driver and cause an accident, and then no one dies, which is great. And then what you can do in a simulator is say, well, okay, the computer failed to do it right that time, but let's train the algorithm again, see if we can correct it, and then run it again in exactly the same circumstances. You get replication of the experiment, which of course you can't do in the real world. You can never replicate exactly those conditions again.
You can say, let's repeat it until the computer gets it right and doesn't kill the driver. That's true. And so being able to train computers in simulators is actually going to really help us move forwards in leaps and bounds.
So are there any true level five self-driving cars on the road, or are they back down at, I think it's level two, isn't it? They're not really autonomous vehicles at the moment.
Level two to three, yes. I mean, the bad news there is that unfortunately sighted people are going to get autonomous driving before people with visual impairment, because it's slowly being put into our cars without us realizing. I was driving my car the other day, and I realized I'm driving less; I'm not looking over my shoulder as often as I used to. I suspect it's going to be our children who don't drive cars. They will drive, but they won't get around to getting their licenses. And in our case, for people who already can drive, it's going to slowly happen to us. So I imagine at some point in the future I'm going to go to the RTA to renew my driving license, and they're going to say to me, well, Mr. Walsh, we checked your computer records and it seems you haven't been doing much driving recently; it's actually been the car doing all the driving, all the autonomous assists in the car, and you really don't have the hours under your belt. So you have two choices now, Mr. Walsh. You can either take your test again, to prove that you can still actually drive the car and it's not just all the intelligent assistants, the lane following and the automatic braking and the automatic steering and parking. Or we can give you this new non-driving driving license. And I expect, well, it was too painful getting my license in the first place, I think I'll get the non-driving license. And they'll say, oh, by the way, Mr. Walsh, you'll get a discount on your insurance now.
Yeah. That's right. Yes.
Remove the dangerous human from the loop.
That's right. My wife often says it's actually the other drivers that are the problem.
And that is part of the problem. Yes, dealing with the uncertainty of other drivers. Once vehicles can talk to each other, there will be much less uncertainty. But I think it's slowly creeping up on us, and it will happen in special places first. So there'll be the inner city congestion charging zone, where you're only allowed in if you're in an electric car that's autonomous, or the high-speed lane of the highway, where you can platoon the cars together: you'll only be able to enter it if you've got the autonomous aids.
The only question I've got with autonomous cars is that it's fine for one to take you somewhere, but there seems to be no mention at the moment of the next step. Okay, so the car's parked you in the car park at Woolworths; now I need something to guide me to where I can actually enter the shop itself inside the complex. Is that the sort of stuff AI could actually do for people who are blind? So the GPS, or the car, gets you to the location, but then the AI has got to get you out of the car park and into the building.
Yeah, it's the last hundred metres, or the last kilometre, that is a problem. And of course, GPS doesn't work inside, and GPS doesn't have the accuracy to do that very high precision last hundred metres. Which is exactly where you want AI to step in, and exactly why we work on things like computer vision. Because you have to see the world. You have to see where the door handle is. You have to see that there's another pedestrian coming out of the door, and that someone's left a dog tied to the railings.
Yeah.
That's why you have to teach computers how to see the world, so you can navigate those last hundred metres.
Mm.
So is there any sort of 3D modelling or computer vision type stuff where, I don't know, a shopping centre could say, look, here's a complete 3D visual map of the whole complex, and then we'll actually use that data and imagery to guide the person around? So I can say, okay, the software or the AI knows that I'm in the building, I want to go to the storage section in Woolworths, you can see where I am. Can you guide me to that spot, just based on what it actually sees around me?
Yeah.
The real problem with that is that the world keeps on changing. So they have high precision maps and high precision 3D models of the world, but the world is always changing. So one of the things that we try to do in AI is what's called SLAM: simultaneous localization and mapping. That is, look at the world, understand where you are, and also map the world at the same time. So the computer vision is not only working out where you are with respect to, say, the front door and the door handle, but also working out what is in the world, to see, oh, there's a dog over there now, which was obviously not in my model of the world because it's just stepped into frame. That's something we put a lot of effort into in AI: trying to build systems that can situate themselves in the world, but also map the world as they find it, because the world keeps changing and you can never have up to date maps and models of it. They're always out of date. You've really got to go there, perceive the world, and work out what state the world is now in.
Yeah. And look, that's my ultimate thing to do with AI. I mean, I can sort of take or leave self-driving cars. But if I knew I had a computer vision system that could 100% independently navigate me around, into shopping centres or public infrastructure, transport hubs, the airport, that sort of stuff, that would be absolutely amazing.
The good news is that's coming. Right? Okay.
So it's not pie-in-the-sky stuff.
It's not magic, because we do it, right? We do it with our two eyes, stereoscopic vision. Humans have managed to do that. And we are slowly getting to the point where, with increasing accuracy, we can get computers to do the same thing. And the great thing about computers, of course, is that they're not limited to the visual spectrum. They can also see the world in microwaves and infrared. So if it's low light or bad weather, they can still see the world where vision alone might make it a really challenging problem. So AI has the possibility ultimately of actually helping us to see the world better than we can, because it can do it even if it's dark or pouring with rain.
Okay.
And just finally, has there been much research into how AI could impact people with a disability going forward?
Well, there has been work, though not enough work, I'm sure. Because when I think about what technologies are going to help people with limited hearing or limited vision, AI is exactly the technology that allows computers to hear the world and see the world, and then convey that information to people who would otherwise be more isolated than they need to be. So I have a lot of hope. The problem, of course, the fundamental problem, is not a technical one. It's a societal and a financial one: how do we ensure that there are incentives for the tech companies and business to do that? Because to begin with, it's going to cost them money.
Mm.
It's much easier for them to cater for the vast majority of people, who are normally sighted and normally hearing, and not invest the time and effort to actually use this as a way of making things more accessible for people with disability.
No.
And I get asked this a lot about robotics and artificial intelligence, but there's some interesting stuff coming out in mobility for blind and low vision people. One of the major products is called Glide, from Glide technology. What it is, well, people say it looks like a little vacuum cleaner: a little robot with a handle on it, and the blind person hangs onto the handle, and then it steers you around obstacles. And because it's connected to your smartphone, you've got an app that will tell you what's around you, as in businesses and that sort of stuff. But I'm just wondering how trustworthy such a system is, because I'm assuming it's using lidar and radar and infrared and all sorts of amazing stuff. At the end of the day, though, and I come back to that original thing I talked about, it doesn't have a level of human intelligence, or thinking outside of the box, if you'll pardon the pun, if the wheels fall off, for example.
Yes.
Yeah, obviously it doesn't replace what those wonderful guide dogs do. I've had the pleasure of meeting some blind people with their dogs, and they have a wonderful relationship with their dog. And it is about that as much as anything: that they can trust the dog to guide them across a busy road, putting their life in the hands of the dog, literally. Matching that is a significant technical milestone to me. But equally, I'm also aware of how expensive it is to train guide dogs and how limited their supply is. So I do think there is a possibility that we might be able to provide that mobility to people who perhaps can't get it now, because we don't have enough guide dogs to help people around. It would be wonderful if we had more and could afford to train more, but I'm not sure that's the world that we're in.
Um.
Even if it is perhaps not as good a solution.
Yeah, we might look.
And the other thing I always think about is the yes-but-if-then-when type stuff, all the time, because to me the world's not black and white; it's got various shades of grey. A self-driving robot type thing might be fine for just trundling down the footpath, maybe going around the occasional car that's parked on the footpath, or it sees a branch and goes, oh, hang on a minute, I know my user is 2.2 metres and that branch is at two metres, I'll stop and go around it, or something like that. The problem is when you get into the highly unpredictable things. I remember I was yelling at my guide dog one day because she wouldn't go in a straight line; she kept zigzagging, and I thought, what on earth is my dog doing? She was doing all sorts of nutty stuff. And then someone told me, yeah, somebody was actually repainting the manhole covers, and they were lifted off all down that footpath. And I went, oh, what a good guide dog my guide dog was.
So you're right.
These unpredictable things, these black swans, these corner cases, those are going to be the challenging ones. Which is why you're going to see these things turn up in more controlled environments first. So suppose you live on an estate, say an old person's estate that's a gated community: it's a much more controlled environment, and trundling around that environment you might be quite safe with a robot. Whereas as soon as you leave the gates of that community, you're out in the big, wild world, where there are people who inconveniently leave manhole covers off, and you might be better off using the services of a guide dog. So I can see those more constrained settings being the ones where this sort of technology first turns up, where you can be more sure that there are fewer of those black swan events, manhole covers waiting to trip you up.
Yeah.
And with the whole of AI, there's that famous saying, I can't remember it exactly, but something about who will guard those selfsame guardians. I'm just wondering, is there any sort of fallback system that says, no, the AI system has completely lost the plot, it's wrong, it's got the information wrong, it's got the orientation and mobility wrong, it's got everything wrong; now it's time to stop and just put a human being in charge?
Quis custodiet ipsos custodes? Who guards the guards?
That's the one.
My classical education finally got used.
Well done.
All that studying of Latin and Greek finally got used once.
Uh.
Yes. At the end of the day, ultimately humans have to be in charge, because only humans can be held accountable. So there are plenty of places where I think we're going to have to make sure that humans are left with overall responsibility.
Yeah, look, I tend to agree, because every time I look at AI it mostly gets it wrong, or somebody says to me, this system is the best thing since sliced bread, and it still gets it wrong.
You're right, it's still making mistakes. It's still not perfect. But it's also easy to forget how far it has advanced. I mean, I remember speech recognition systems 20 years ago: incredibly painful. They had to be speaker trained; you had to train them on your voice. They didn't work in the wild; you had to use the proper microphone and a quiet environment. And now people expect to buy a new smartphone, open it up, walk down the street and start talking.
True.
And it doesn't do a perfect job of transcribing what people say, but it does a pretty good job. And you're just thinking of the advance that we've made. That was just unthought of 20 years ago: the idea that it wouldn't be trained on your voice, that it would work in the wild with all the street noise and wind around you, and you could just start talking and it would get 90% of the words right. As someone who's been working in the field for those 20 years, I find that pretty amazing. It's still not good enough, still not perfect; you still shouldn't stake your life on it. But for wandering around a town, a strange environment, a country where they speak a different language, it's good enough to make yourself understood.
Well.
That's right. And look, I know when I first had a look at the Kurzweil Personal Reader that Ray Kurzweil brought out in the mid-to-late 70s, that thing wasn't perfect either. And I was sitting there going, oh my God, who the blazes can afford something that's worth about $55,000 Australian? And now it's in our pocket. And then I remember when I got my first Apple II, the synthesizer was even worse than the original Daleks on Doctor Who, because for a long time it kept telling me there was an "unclosed" error on the Apple II. And it wasn't until somebody said, it's actually saying "unknown", but it's sounding the K as a C. And I thought, that was only 40 years ago.
Yeah. Oh another example.
I mean, subtitles on the TV now. It used to be that people had to transcribe them; that was the only way you could get anything reasonable as a subtitle. Now it's done pretty much automatically. Not perfectly, but it's good enough to be able to work out what's happening if you can't hear the TV.
Exactly, exactly. And look, because I'm an Apple geek, I'm really looking forward to what's coming up at the Worldwide Developers Conference, WWDC, to see what Siri morphs into.
Apple has been very slow on the uptake of AI, though that's going to change. And the great thing about Apple is they're going to do it more and more on your device.
Mm.
Exactly. And that's what I'm looking forward to, because I really don't want my conversations, or whatever else I might be looking at via an image, going out to a cloud somewhere; you've got no idea where the information is going to end up in the cloud. So the more stuff that's done locally, to me, the more appropriate it's going to be.
But that's going to be the future, and certainly Apple is one of the companies that's been promoting that. The idea is that increasingly you don't want to share your data with everyone and anyone, and increasingly the sophisticated AI algorithms will be small enough and smart enough to run actually on your device. That solves lots of other problems. It solves the privacy problem, but it also solves the latency problem. There are lots of places where you don't have connectivity: you're in an urban canyon, you're in a tunnel, whatever it is. You can't depend upon the connectivity to send the data to the cloud and have it transcribed, or for the computer vision on your car to do its stuff. You need to be able to do it on the device. So that's the future of AI: increasingly, it's going to be powerful enough to run on the limited amount of hardware that you're actually carrying on your person.
And look, that's what I'm looking forward to. I've got about ten questions that I keep checking in with AI every three months or so, and at the moment it's getting better. The first couple of versions actually broke dramatically on my questions. It was funny, because before this interview I checked, and it doesn't think that you're a poker player anymore, which is cool.
Um.
It knows how many Bs are in the word now, which is also pretty cool.
Oh, it'll get taramasalata now, though.
All right.
Although I must admit, I struggle to spell taramasalata myself.
But anyway, yeah.
Actually, my favorite phrase at university in philosophy was reductio ad absurdum. I love that; I was like, oh, that's pretty cool. All right. So look, if people want to find out more about your books, because I know there's Faking It, there's 2062, there's Machines Behaving Badly, which I absolutely love. And now there's a fourth book which, for the life of me, I can't remember.
It's Alive: Artificial Intelligence from the Piano to Killer Robots. Actually the first book I wrote, which is.
Oh, okay.
About the history of.
AI.
Okay. Now, I've got all of them on Kindle. Have you done anything with Audible, for people who want to sit back with a glass of wine?
Good news: it's coming out next month. I have just recorded my book, being put out by Bolinda. Faking It is coming out in audio, and I have to say, I'm an absolute convert. I think it is the best version of my book. Because I realized that in some places I'm writing things and I'm annoyed, or upset, or excited, and if you read the page carefully, maybe, hopefully, you can tell that in what I've written. But in the audiobook it's very clear. When I spoke the book, I had the privilege and pleasure of reading my own book: I could laugh at the jokes in it, I could express disgust at what the tech companies are doing with my data. So there's a lot more information conveyed in the audiobook that's not conveyed in the written book, maybe.
And when I was reading the stuff about lethal autonomous weapon systems, I thought, that's just absolutely appalling. Giving drones the capability to actually kill people without any human intervention is just getting a bit beyond the pale, basically.
It is.
And I just thought, I wonder what you would have sounded like reading that particular paragraph.
Next month you could discover.
There you go, I will.
The great thing about the book was I actually only speak half the book.
Oh?
Because the book is all about how ChatGPT and examples like that make mistakes and get things wrong and amuse us. And I said to the publisher, well, I could read this, but why should I? Why not get the computer to read it? So I say half of it, and ChatGPT says the other half.
Yeah. Does it use your voice to do the second bit, or does it use a generic computer-synthesized voice?
Oh, no.
They've got a very nice synthesized voice for that. The publisher is very excited; they said this is the first book where they've really embraced the technology. So it is a conversation between me and the computer, and I hope you enjoy it.