Can a computer create art from scratch? How will Google's Magenta project actually work? And can a machine be truly creative?
Brought to you by Toyota. Let's go places. Welcome to Forward Thinking.

Hey there, and welcome to Forward Thinking, the podcast that looks at the future and says, "rewritten by a machine with new technology." I'm Jonathan, and I'm Joe McCormick. So, guys, you know, we've talked a little bit about AI and its potential role in creativity a few times. Back in September, we recorded an episode called The Future of Music Composition, and we talked about electronically aided composition as well as the idea of AI being able to compose music all by itself. One of the funny things is, on that episode I think we talked about, you know, "will we ever have something that can do this?", and then it turns out it's stuff we already have right now. Yeah, to be fair, some of it may not have been as publicly available then as it is now. Back then we had earlier versions of it. The base technology that we're going to be talking about today, deep learning, really hadn't been announced to the public yet. We cannot be blamed, is I think what we're trying to say here. We're futurists, not psychics. We're not even really futurists. Reporters? Yeah, we're not even really reporters. We kind of sit in front of a microphone. We're just going to keep stepping back. We don't know what we are, but we know what we're excited about. We're excited about this idea of AI being able to engage in works of creativity.

And just before we jumped into the studio, Joe, you came across an example of AI that works with music. Yeah, so I was wondering if you guys had listened to any of the creations of this app called Jukedeck. Before you asked us, I had not, but Lauren and I listened to a track. I decided to generate a music track that was in the electronica genre with a mood of "aggressive." Was it aggressive? No, it was kind of a little bouncy. Yeah, aggressively bouncy. What Jukedeck does is it will let you specify a mood and a genre from, like, four different choices. You can pick a piano thing, a folk thing, an electronic thing, or something else, I can't remember the last one. Then it'll let you pick a mood to go with that and say how long of a track you want, and it'll just generate for you an original, royalty-free track of music that was created by AI. So I first told it I wanted a ninety-second original track of electronic music for a "chilled" mood, and it gave me a track called Holistic Adventure. Ours was called Sweltering Seas. That's pretty good. And then it also made me a sixty-second original track of folk-style music for a "melancholic" mood, which was called Infinite Atoms. And beyond that, if you go further into it, they actually have other genres available as well, including cinematic, and you just have to pay for those. Yeah, those come when you've subscribed to the service. And one thing we should point out is that not every mood is available with every style of music, so you can't necessarily get a lot of aggressive folk. So I'm not exactly sure how this works. The way they explain it is, quote, "Our AI uses machine learning to understand how to write music chord by chord and note by note. This means that every track you create using Jukedeck is truly unique." And so what they're saying is that it truly is ground-up, auto-generated,
you know, artificial-intelligence-created music, and they're not just working from templates or something like that. But then again, it's hard to know what's really going on on the back end. At least if it works as advertised, I'm impressed. Neither one of the tracks I heard was, like, great music. It wasn't blowing my mind, I wasn't just amazed, "oh man, that's so good," but it was absolutely passable for its genre, totally good enough to be elevator music or the kind of music you'd play in a store or something like that. I was telling Lauren that it reminded me of the kind of music you would encounter in a small independent video game, like you're playing a little video game where clearly it's maybe one or two people who worked on it, and it totally was something that you would hear from that. Not bad. Not, like, a spectacular entry into the field, as you were saying, Joe, but not bad. Also, and I don't know for a fact that you can do this, but I was playing with it and it looked like there might even be the possibility of combining genres. So I wanted to find out what would happen if I did folk electronic with a really kind of action-oriented mood, but I didn't get a chance to get far enough into it to find out if, in fact, that is possible. Well, we have that little example there. And I don't mean to say "little" as in I'm dismissing it. I mean it's a relatively modest approach to it, and there are some really intelligent people behind it. It's a group of smart folks.

Meanwhile, we also covered another related topic. In August two thousand fourteen, we did a show called The Future of Art, and we talked about the merger of technology and art. And today we're going to talk about a project that's really trying to push this entire idea forward, Google Magenta. And of course it's by Google. You're wearing your "Google Fiber is coming" shirt, sir. I am, and that was not planned. I didn't think about it when I put it on this morning. But yeah, I'm wearing my Google Fiber Georgia shirt. And I understand they're laying Google Fiber along North Avenue right now as we speak. Yeah, which is right outside our office building. In fact, maybe that's what all those police cars were escorting this morning: the Google Fiber, escorting the fiber down Ponce de Leon Avenue. Right, like it gets a presidential-motorcade level of welcome. One can only hope. I'm still really holding out hope that I get Google Fiber before too long.

At any rate, you might be wondering what exactly is Google Magenta. Well, it's a project that falls under the Google Brain team. That's the department within Google dedicated to machine intelligence focused on deep learning. And in deep learning, engineers program networks of, like, virtual neurons that can look at data. So let's use images, for example. Okay, the neurons can look at a picture and assess some factor of it, the shapes or the colors maybe, and then the neurons can decide what those shapes or colors remind them of and assign that decision a probability. Probably this is a boat. Maybe it's a banana. Most likely it's not a centipede, but it is slightly centipede-ish. Would it recognize boats that don't have sails? I mean, you know, it all depends on what you fed it to tell it what it was about.
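To make that "probably a boat, maybe a banana" step a little more concrete, here's a minimal sketch of how a classifier's final layer typically turns raw scores into probabilities. The labels and numbers are invented for illustration; this isn't Google's or Magenta's code, just the general softmax idea.

```python
# A toy sketch of the "assign each guess a probability" idea described above.
# The class labels and scores are hypothetical stand-ins for what a trained
# image classifier's final layer might produce for one picture.
import numpy as np

labels = ["boat", "banana", "centipede"]

# Pretend these raw scores (logits) came out of a network's last layer
# after it looked at the shapes and colors in an image.
logits = np.array([4.2, 1.1, -2.0])

# Softmax turns raw scores into probabilities that sum to 1.
probabilities = np.exp(logits) / np.sum(np.exp(logits))

for label, p in zip(labels, probabilities):
    print(f"{label}: {p:.1%}")
# Roughly: boat ~95%, banana ~4%, centipede well under 1%.
# "Probably a boat, maybe a banana, slightly centipede-ish."
```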
Remember, this is very similar to that idea we talked about, where researchers fed all those pictures of cats to a machine learning system so that eventually the computer, without being told "this is a cat," starts to learn what a cat is. It knows that it's got certain features: it's really aloof, and it really doesn't care if you live or die. Exactly. So the neurons pass this information around to other neurons in their layer, and then that layer kind of compiles and passes its information up through other layers, and the system makes increasingly educated guesses about what exactly is going on in the picture. And as you feed the system more and more pictures, it makes more and more associations with particular shapes and colors, so it learns. Right.

And we also talked about a related product that came out of this group, Deep Dream. Oh, this is the project that turned my dog into a big old mess of caterpillars. It turned me into a big old mess of dogs. Yeah. This also came out of the Brain team group. That's the artificial neural network project that teaches computers how to recognize patterns in visual data, even when the patterns aren't necessarily there. It's not too different from when you look up at the clouds and you say, oh, that one's very like a whale. Right. So we were teaching computer programs how to hallucinate. Yeah, pretty much. And we talked about that in an episode called Deep Dreaming with Google, which published in July two thousand fifteen. And basically what is going on with Deep Dream is that instead of just making a guess that a picture contained a dog, for example, it changed the image so that everything that appeared to be a dog in it would look more dog-like. So a layer would say, well, this is probably a dog based on the shapes in it, so let's enhance the doggy shapes to really emphasize the dogginess. And then as that extrapolated image is passed through Deep Dream's layers, each one emphasizes whatever it guesses is going on in the photo, and yes, frequently what it guesses is going on in the photo is a nightmarescape of dog faces as far as the eye can see. Right.

Well, I think the most popular ones were definitely the animal recognition ones, but you could tweak the algorithm to recognize all kinds of stuff. And that was because they had started with feeding it so many different animal images. That was sort of their starting point over at Google when they were training this thing how to recognize different visual patterns. So if a fold in your clothing looked even remotely similar to a dog, guess what? You're wearing dogs now. Your arms have bugs in them, your shirt and your shoulders are dog heads, your dog actually is not a dog but a bunch of cats taped together. Yeah, I like that. Everything became very doggy for a while. But one of the other projects under the Brain team is one we're going to mention a little bit later called TensorFlow, which is a machine learning engine, and it's an open source project, meaning lots of people, anyone really, can access those tools, and not just access them, but tweak them, improve them, evolve them, and grow them. And that's a really interesting and potentially exciting development in machine learning.
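For the curious, here's a stripped-down sketch of the trick Deep Dream relies on, assuming TensorFlow 2.x and the stock InceptionV3 model that ships with Keras. The layer name, step size, and starting image are illustrative choices rather than the actual Deep Dream settings; the point is just that you repeatedly nudge the image in whatever direction makes a chosen layer's activations stronger.

```python
# A minimal Deep Dream style sketch (not Google's actual Deep Dream code):
# pick a layer, then adjust the input image so that layer fires harder.
import tensorflow as tf

base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet")
layer = base.get_layer("mixed3").output              # a mid-level feature layer
dream_model = tf.keras.Model(inputs=base.input, outputs=layer)

def dream_step(image, step_size=0.01):
    with tf.GradientTape() as tape:
        tape.watch(image)
        activations = dream_model(image)
        loss = tf.reduce_mean(activations)            # how strongly does this layer fire?
    grads = tape.gradient(loss, image)
    grads /= tf.math.reduce_std(grads) + 1e-8         # normalize the gradient
    return tf.clip_by_value(image + step_size * grads, -1.0, 1.0)

# Start from random noise; a real run would start from, say, a photo of your dog.
image = tf.random.uniform((1, 224, 224, 3), minval=-1.0, maxval=1.0)
for _ in range(50):
    image = dream_step(image)
```

Run enough steps and whatever faint dog-like or eye-like patterns the layer half-detects get amplified into the kind of hallucinated imagery described above.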
Yeah, but you can see how projects like this might be sort of evolving toward ultimately generative powers in AI, not just recognition and modification of visual images and other types of sensory input and data, but actually building things from the ground up, recognizing patterns and saying, I've got enough sense of what the patterns are that I can make one on my own. Right, exactly. You're no longer taking an image or a concept and saying, here, enhance this or alter this in some way, or recognize it. You're now saying, hey, you know what one of those happens to look like? Make one. Yeah. And that's a big leap, right? That's a huge leap. And you could do the same thing elsewhere. Somebody could say, tell me the plot of a James Bond movie that doesn't exist. You've seen enough of them. You know there's a standard pattern. You've got to have the gadgets, you've got to have the, you know, the... yeah, yeah, you've got to have a pit of sharks or something. You can put the pieces together. You've got to have at least one lady who is pretty much a good guy and one lady who's pretty much a bad guy, and Bond's got to mess with both of them. You know, there are certain rules. Both of what? Messed with? Hey, I'm being very family friendly here. So, like James Bond. Yes, exactly, James Bond, so family friendly.

So the Magenta project is aimed at developing artificial intelligence capable of actually creating art, not just altering something so that it looks like art, but creating art, both visual art and music. Now, the official launch date of Magenta is June one, two thousand sixteen. We are recording this on May twenty-six, two thousand sixteen, so it has not launched as of the time we're recording this, but we wanted to kind of talk about it, and it was just recently announced to the world, although not officially unveiled. Douglas Eck, who's working on the project, announced Magenta at Moogfest, I assume named after the Moog synthesizer, which everybody loves. That's actually a music and technology festival that takes place in North Carolina, so it was not too far away. He also stressed that while they're working on this project and while they have high hopes for it, AI is still a very long way from creating long narrative arcs. So it's not like within a year we're going to have computers writing the next great American novel or anything along those lines, but this is the first step toward computers taking on creative problem solving, and moving to a point where that problem solving isn't about tackling a question, but about creating something new, like music or a painting or anything along those lines, or even video.

Well, I mean, I think something literary and long and coherent like a novel would be one of the most difficult things, because that involves just the most, what might you call it, semantically diverse array of things that you're working with. I mean, a lot of moving parts. Not just the words that have to make sense in sentences and in paragraphs and in chapters, but also character development, character motivation, I mean, that doesn't even make sense to a computer, world building, et cetera. Right, yeah, exactly. There are a lot of elements in creating a long narrative that would take a long, long time to teach a computer. What does this mean?
I mean, you can imagine that a book written by a computer, before it has a full grasp on that, could end up being incredibly dull. You're just reading a very mundane account of a person. Or it could be the opposite. It could be like a Dan Brown novel where every page ends with a cliffhanger and you think, I need a break, you know. I was looking up Douglas Eck, so I was reading a little about him on his Google research page, and one of the things he had previously worked on was content... what would you call it? Oh, I'm losing the word for it. He worked on Google Play Music, delivering you the kind of music you would want. There's a term for that. Curation. Curation, sure, yeah, based on what you like to listen to. Figuring out, okay, what's the other type of music you haven't heard yet that fits in with the profile of the stuff that you like. Which is, you know, kind of a tough job, because how does the computer know that, oh, if Jonathan really likes this song by They Might Be Giants, he'll probably also really like this song by Slayer? Yeah, they're like, hey, you like rock and roll, Joe, so probably you like this Nickelback song, which is rock and roll. Yes, yes.

So at least with the Music Genome Project, which is what Pandora is based off of, the way that works is they have human beings who meta-tag every song with every kind of descriptor that would be relevant to that particular song, and then the algorithm starts looking for other songs that have several of those same meta tags associated with them. So it goes, you like melancholy guitar solos because you listened to The Decemberists, so probably you're going to like Dashboard Confessional. Right. So that may be similar to the way Google Play Music does it. I don't know, because he made it sound more like this was a more automated process. That's really interesting. And more difficult. Yeah, you're not having a human hold your hand, taking you through, okay, now this is what moody sounds like, this is melancholy. Right, that's really interesting.

Well, at the Moogfest conference, Magenta team member Adam Roberts showed off a digital synthesizer program where he could feed a few musical notes into it, I think it was a sequence of four notes, and then allow the program to build a melody off those basic notes. And what I'm picturing, kind of going off the Deep Dream concept, is, you know, okay, these notes sound like the beginning of a big band slow dance, so I'll just add more notes to make it sound more like that. Yeah. The example that they showed at the actual festival, when I watched it, I thought, ah, they picked bad initial notes, just because you listen to it and, to me, it sounds like someone aimlessly plunking on a synthesizer keyboard. It didn't sound like someone actually creating a melody. But I would argue that in this case, the way this works, the final tune is really only going to be as good as the initial input you give to the computer, because it's not creating it out of whole cloth. It's taking a foundation and then building upon it. If the foundation is faulty, then you can't really expect the rest of it to be awesome. So, but maybe that was just me. I also don't have the best ear, so perhaps I'm being particularly harsh.
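As a rough illustration of that "seed it with a few notes, then let it extend them" idea, and only an illustration, since the real demo uses trained neural networks rather than anything this simple, here's a toy melody continuer built on note-to-note transition counts. The training melodies and the four-note seed are made up.

```python
# A toy sketch of "give it a few notes and let it build a melody," using a
# first-order Markov chain over MIDI pitch numbers. This is a stand-in for
# the general idea, not the neural-network approach the Magenta demo used:
# learn note-to-note transition counts from example melodies, then sample a
# continuation from a short seed.
import random
from collections import Counter, defaultdict

# Hypothetical training melodies as lists of MIDI note numbers (60 = middle C).
training_melodies = [
    [60, 62, 64, 65, 67, 65, 64, 62, 60],
    [60, 64, 67, 72, 67, 64, 60],
    [62, 64, 65, 67, 69, 67, 65, 64],
]

# Count how often each note follows each other note.
transitions = defaultdict(Counter)
for melody in training_melodies:
    for current_note, next_note in zip(melody, melody[1:]):
        transitions[current_note][next_note] += 1

def continue_melody(seed, length=16):
    """Extend a short seed by sampling each next note from whatever followed
    the current note in the training melodies."""
    melody = list(seed)
    for _ in range(length):
        counts = transitions.get(melody[-1])
        if not counts:                          # unseen note: fall back to the seed
            melody.append(random.choice(seed))
            continue
        notes, weights = zip(*counts.items())
        melody.append(random.choices(notes, weights=weights)[0])
    return melody

print(continue_melody([60, 62, 64, 65]))        # a four-note seed, like in the demo
```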
Also, this is an early prototype that he's showing off, right? Yeah, and maybe what we see on June one will be a more comprehensive demonstration of the abilities of Magenta. And keep in mind, the Magenta... we had it compose this song and destroy these four countries. Wait, what did you do? For one thing, Magenta is supposed to be an ongoing project, right? This is not something that's a fully fleshed-out product that they're going to reveal on June one for people to play with. It's more like, here's the concept behind the project, here's how we're going to try and accomplish our goals, here's where we are now. That's more likely to be the announcement on June one.

So Eck's hope is that by feeding enough musical information to Magenta, enough songs, in other words, it will be able to produce its own music that is aesthetically pleasing to humans. Like, people persons? Yeah, I mean, my list is pretty small, but I know a couple. So the program first has to learn what makes music work. What are the rules of music? What are the sorts of things that we like to listen to? And once you know those rules, when is it okay, or even preferable, to break those rules? When is it all right to stray from the conventions of any particular musical genre, and do so in a way that's interesting? Maybe it's pleasing, maybe it's not pleasing, but it's the sort of thing that catches your attention, and that's what makes the music really stand out to you. Right. So these are not easy concepts, even for musicians, like human musicians. It can take years of study to really understand music theory and be able to craft something that is most likely to evoke the reaction you hope to get from your audience. You know, I know I've written a couple of songs. They're terrible. But at any rate.

So music is just the first type of art that Magenta is going to tackle. They're going to use the same sort of approach to generate images and perhaps even video in the future, and they mentioned eventually text too. Yeah, so again getting to that point where maybe we can get to that narrative arc. And this also kind of feeds into something, I don't know if we've ever talked about it on the podcast, but Google was recently in the news because a lot of people poked fun at Google for feeding romance novels to its digital assistant so it would learn better how people converse. The important thing being that romance novels tend to follow a very similar pattern, right? They're very formulaic, but different romance novels say the same thing in different ways. And so the hope is that by feeding it this kind of information, and romance novels were just one genre that was fed to it, but everyone focused on it because of course it's funny, you know, this idea that your digital assistant is going to be making some very sassy recommendations to you or explaining the weather in ways that are probably inappropriate. But the idea is that by feeding in all this information using machine learning, you would be able to get your finished product to be more capable of interacting with people using natural language. Very similar to what Magenta is doing, except in that case it's music and art. Right.
You're feeding more and more music into Magenta so that it has a quote-unquote understanding of what music is and can more likely produce something that is similar to what a human would make, without it actually just copying something that a human has already made. Yeah, I mean, I wonder what the end goal, what the expectation, is here, because on one hand, if we're to believe what's supposedly going on in the back end that creates these tracks we listened to at Jukedeck, I feel like here's a system that is already creating perfectly passable music. Again, like I said, it's not amazing. So I'm wondering, is Magenta aiming to create music that's really going to be amazing, and people will be like, wow, I love that song? I think Magenta, not to put words into the mouths of the people on the project, but I think Magenta, by using a very specific goal, is really about driving machine learning further. Yeah, I agree. I think that the art is absolutely secondary and kind of the headline grabber, and that the other applications, like voice recognition or something like that, are what they're really aiming for. Right. So a great example of this would be looking at the private space industry and saying, you know, you've got some pretty big ideas; how do you focus that in a way where you can actually engineer toward a solution? And then you just start taking specific questions, like how do we make sure that astronauts can breathe in space, and you take that first question and you start trying to solve it, and it ends up being one part of a much bigger picture. I think that's what we're going to see with Magenta. Yeah, I'd agree. I mean, I just wondered about the art itself, like what do they think it's going to be like? I'm very curious about that too. And obviously, since we're so early into the project, it's hard to say. I would love to be able to revisit this. In fact, maybe we will be able to revisit this sometime in the future and listen to some of the stuff Magenta has produced and say, does this sound like a human being made it? Or, you know, the fact that we know a computer made it, does that change what we feel about it?

And anyway, you might wonder, how the heck is this thing working? Well, to kind of build upon what Lauren's point was with the neural networks, they're specifically using this set of tools called TensorFlow, which falls into that category. That's the open source machine learning set of tools. And generally speaking, first, it's going to accept MIDI files, that's a very common format for music files, submitted by a community of contributors, to teach itself the basic rules and concepts around music. But TensorFlow itself, according to its web page, is "an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API." Sounds artistic to me. Couldn't be more transparent or simple. So I mean, I think the simple version is just that TensorFlow is, they say, open source software for machine intelligence. Yes, and it's essentially modeled after, or at least inspired by, rather, the way brains work. Yeah.
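Here's a small sketch of what that description means in practice, using modern TensorFlow (2.x), whose API looks different from the 2016-era version being quoted, but the picture is the same: wrapping a function in tf.function traces it into a dataflow graph whose nodes are operations and whose edges are the tensors flowing between them. The toy numbers are arbitrary.

```python
# A toy example of TensorFlow's "nodes are operations, edges are tensors"
# description, using the TensorFlow 2.x API. The numbers are arbitrary.
import tensorflow as tf

@tf.function
def tiny_model(x, w, b):
    return tf.matmul(x, w) + b       # two graph nodes: a matmul op and an add op

x = tf.constant([[1.0, 2.0]])        # a 1x2 tensor flowing along a graph edge
w = tf.constant([[3.0], [4.0]])      # a 2x1 tensor
b = tf.constant([[0.5]])

print(tiny_model(x, w, b))           # tf.Tensor([[11.5]], shape=(1, 1), dtype=float32)

# Peek at the traced graph: each entry below is one node (an operation).
graph = tiny_model.get_concrete_function(x, w, b).graph
print([op.name for op in graph.get_operations()])
```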
Now, TensorFlow is particularly well suited to processing information for visual analysis and recognition as well as speech recognition. That's what they were primarily intending it for when they made it open source and said that different developers could use this kind of tool set to give their various apps or projects the capabilities they would need in order to process visual information or to do some sort of speech recognition or speech activation kind of process. Now, the idea is that developers can use these tools to create the stuff they need. They can test the stuff that they have planned and see if it works, and if it does work, they don't have to write more new code; they can actually use the code from TensorFlow as is and incorporate it into their project. And like I said, it's not a complete set. It's something that will continue to evolve over time as people find shortcuts, or a more elegant way to do something, or a more robust way, which is part of why it's open source. I'm sure they're hoping that the people who want to get in there and work with it will also want to improve it.

Right. And again, going back to Lauren's point about probabilities, it really is all about assigning probabilities, assessing those probabilities, and refining them, so that you start looking at your individual options and determining which option is the best out of all of them. This might sound familiar to you if you remember IBM's Watson when it was on Jeopardy. There was a big bit about how does it come up with the answers, how does it know what the answer is? Well, Watson would receive the clue, and then it would come up with potential answers and assign each one a probability of how certain it is that that's the correct answer. And if the probability was above a certain threshold, that is when Watson would buzz in and send that answer in. And I think it was something like an eighty percent certainty, something along those lines. It was somewhere around there. And this is very similar to when we were talking about the Kepler space telescope and how, if the probability of a signal being an exoplanet was greater than a certain threshold, that was considered a verified exoplanet. So TensorFlow does the same sort of thing. It looks through these probabilities and then it goes with the highest option. It kind of makes you wonder how that works with music, and I don't know the answer to that, because we've reached the limit of my understanding of this particular approach to machine learning.

But I wanted to talk a little bit about a related project, not directly related to the Google team, but one about a computer trying to generate art. Yeah, and this was one that you covered for our video show. Yes, this would be a project that a group of art historians and computer scientists and researchers and developers tackled together, in an effort to create a new painting in the style of Rembrandt, the idea being that this should be a painting that Rembrandt could have painted himself, if only he'd had a 3D printer. If only he had had a 3D printer. So, yeah, they specifically used machine learning, computer algorithms, and a 3D printer to create a quote-unquote new Rembrandt painting, at least in the style of Rembrandt. And it's not quite the same thing as generating completely new art, because again, you're using Rembrandt's style as your starting point. That's your foundation.
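Before getting into the details of that Rembrandt project, here's a minimal sketch of the buzz-in-above-a-threshold idea mentioned a moment ago. The candidate answers, probabilities, and the threshold value are invented for illustration; they aren't Watson's actual numbers or code.

```python
# A toy sketch of "only buzz in if you're sure enough." The candidates,
# probabilities, and 0.8 threshold below are all made up for illustration.
def should_buzz(candidates, threshold=0.8):
    """candidates: dict mapping candidate answers to estimated probabilities.
    Returns (answer, probability) if the best guess clears the threshold,
    otherwise None (stay quiet)."""
    best_answer = max(candidates, key=candidates.get)
    best_prob = candidates[best_answer]
    return (best_answer, best_prob) if best_prob >= threshold else None

clue_guesses = {"What is Toronto?": 0.31, "What is Chicago?": 0.55, "What is Helsinki?": 0.14}
print(should_buzz(clue_guesses))   # None: not confident enough, stay quiet

clue_guesses = {"What is a boat?": 0.91, "What is a banana?": 0.07}
print(should_buzz(clue_guesses))   # ('What is a boat?', 0.91): buzz in
```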
It's kind of like those intro notes when they were building out that melody. Yeah, yeah. They also used a very specific type of Rembrandt painting in order to create a very specific type of Rembrandt painting. Right, right. Instead of feeding it every single Rembrandt painting that ever was, they took a specific subtype of Rembrandt painting. Now, granted, it was a specific subtype that Rembrandt did a whole lot of. Sure. It was of a dude, an older white dude, well, you know, oldish, probably not much younger than I am, actually. But at any rate, sitting, and it's a little bit of a profile shot, not a full profile, wearing black with a big old white collar and a big old black hat, looking somewhat pensive, as Rembrandt's subjects often did, probably thinking, is this guy going to let me sneeze, or something along those lines.

The way they did this was they actually had the computer analyze portrait after portrait after portrait in this style, and the computer began to take measurements of all the different little elements of these portraits to determine what the typical Rembrandt portrait is like. Like, what is the spacing of the eyes? How is the nose shaped in comparison to the eyes? When you get to the corner of the mouth, how does that look in a Rembrandt painting? All of these ideas, and of course, how do the brushstrokes look? So they used very high-tech scanning technology to get the texture of the brushstrokes as well. Once they did all this, they then fed in all that information and generated a Rembrandt-style portrait using all of those points of data as kind of a roadmap, a guide, saying, make sure that the eyes are this far apart, make sure that they are this large, make sure that they're spaced this far from the nose, all these little basic rules that they had established through the analysis of all those other portraits. And so the computer generated one, and then they sent it to a 3D printer, which was able to replicate the ridges you would find from brushstrokes. And the end result was a painting that looked an awful lot like a Rembrandt portrait, enough so that if you put it in a gallery of Rembrandt portraits and you brought a non-expert into the room, someone who was not familiar with every painting Rembrandt has ever done, and said, pick out the one that was done by a computer, I bet it would have been really hard to do, because it looked pretty much like every other Rembrandt.

But again, that was more about copying a specific style, right? It's incredibly impressive. I don't want to downplay how impressive the achievement was. It took them two years to do this. But it's not the same as trying to teach a computer what art is and then telling the computer, now, make something. Right. So it's a little different, because you're giving the computer way more of a roadmap in the Rembrandt approach. Now, assuming we get to a point where we actually are able to have AI produce music and art that we think has value to it, that doesn't just seem like a random representation of whatever, and we're not just tuning in for the novelty of it, what does that mean? I mean, that's a big question. What does it mean to us if AI is capable of creating something that falls into the realm of art? Now, I would argue, at least in the foreseeable future, we wouldn't say that the computer is actually expressing itself.
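To put the measure-then-generate process described above in very simplified concrete terms, here's a toy sketch: reduce each scanned portrait to a handful of measurements, then average them into the target spec the generated painting should match. The feature names and numbers are invented; the real project worked from far richer data, including 3D scans of brushstroke texture.

```python
# A toy sketch of "measure many portraits, then build a target spec."
# The features and values are hypothetical, purely for illustration.
import statistics

# Hypothetical measurements (in millimeters) taken from scanned portraits.
portrait_measurements = [
    {"eye_spacing": 31.0, "eye_width": 24.5, "nose_length": 48.0, "collar_width": 210.0},
    {"eye_spacing": 32.5, "eye_width": 25.0, "nose_length": 50.5, "collar_width": 198.0},
    {"eye_spacing": 30.2, "eye_width": 23.8, "nose_length": 47.2, "collar_width": 205.0},
]

def typical_portrait(measurements):
    """Average each feature across the analyzed portraits to get the
    'typical Rembrandt' spec the generator should aim for."""
    features = measurements[0].keys()
    return {f: round(statistics.mean(m[f] for m in measurements), 1) for f in features}

target_spec = typical_portrait(portrait_measurements)
print(target_spec)
# {'eye_spacing': 31.2, 'eye_width': 24.4, 'nose_length': 48.6, 'collar_width': 204.3}
```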
No, no. And I mean, for me it's more like the computer isn't really the artist; the programmers of the computer are kind of the artists. Yeah, just several stages removed. Yeah, the computer is the tool that they are using. I think of it personally, and let me know if you guys disagree, I'm really curious to hear your thoughts on this, I think of it as the computer could be thought of as an early-stage artist, someone who is learning their craft by copying the work of others. Not directly copying their work, but copying the style, and not necessarily expressing themselves through that art, but rather just trying to master the tools of creation for that art, and who has not made that next step where they are able to express something deeper, something new, something unique to themselves, rather than simply, I now understand how this works. I think that the computers will be on that first step and not on the second one. But if you disagree, or if you feel like I am being way too narrow-minded in how to define this, or that you think I'm looking at it in far too utilitarian a way, that's perfectly fine. And I didn't put this in the notes, so that's why I'm just springing it on you, to find out what you think.

I mean, I guess the question is really, do you think that an artist needs, for lack of a better term, a soul in order to actually create art? Or can a computer, not being truly self-aware, actually create art? Right. Here's another question. If an artist accidentally spills a bunch of paint onto a canvas and it turns out to be something that people think is very beautiful and they want to look at, is that art? Well, it depends. What is the artist's opinion of it? Did the artist feel that the accident was truly a moment of chaos, or was it something that was instigated through... I mean, these are questions people have been asking all along: what actually counts as art? Right. In fact, this is actually one of the things I wanted to look at. So, going back to Douglas Eck's Google research page, one of the things he says on that page about Magenta is that he wants to ask the question: can machines make music and art? If so, how? If not, why not? And that very last sentence was actually the most interesting part to me. I like the inclusion of this question, why not, because it's an interesting way to phrase it. It illuminates one of the roles Magenta could play in the larger world of AI development. So the study of artificial intelligence, to me, is not just about, can I get a computer program to perform X or Y intelligent behavior? It's about understanding the nature of the behavior to begin with, the nature of intelligence and intelligence-based labor. And in this case, that might mean that computers could help give us insights into questions like, what does it mean to create a piece of art? What we were just talking about a minute ago. I mean, think about in the past all the weird pieces of abstract art that people looked at and said, that's not art. You know, Jackson Pollock, does that count as art? John Cage, is this really music? He's just playing one note over and over, or he's just turning radios on and off. Is that really music? I don't know.
It makes me think, what makes us want to pay special attention to a particular display of shapes and colors, or a particular ordered sequence of sounds? And there's difficulty there, because when we encounter art in our lives, it's not devoid of context. So when you encounter a particular sequence of sounds or group of shapes and colors, it might come with social pressures, right? Like people saying, you know, this thing here is a piece of art, you should know that. And the mere fact that a piece of art is hanging in a museum gives it weight. Exactly. So people are telling you to pay attention to this thing, and that might make you pay attention to something that you wouldn't pay attention to otherwise, or maybe you would, who knows. But then again, this could also apply to artificial intelligence, because what if it's an AI making a piece of music, and you're saying, well, maybe I should listen to this because I want to hear what artificial intelligence can come up with, and it wouldn't really be all that interesting to you otherwise. What you were talking about earlier, actually: is it just the novelty of it? Sure.

Which, I guess, really, I mean, I don't know. I would suppose that, post-postmodernism, I would define art as a thing that makes you think or feel, and therefore a machine can totally create it. Absolutely, by that definition. I would say that if you were to have your AI create some music, and that made you feel something, and you define "music that makes me feel something" as art, whether it's, you know, happy or sad or energetic or whatever, then by that definition you have to say that the computer was able to generate art. And also, let's not forget that there are a lot of pieces of music out there throughout the ages where musicians were making very calculated decisions on how to craft that music to get a specific kind of feeling from it, to the point where you could even be cynical about it and be like, that song was manufactured from day one to be a single and get play on, like, the pop charts. And first of all, I don't think there's anything wrong with that. I don't look down my nose at the idea of a manufactured piece of music. If it makes someone happy, that's awesome, and that's all that really matters in my eyes. I know a lot of people have other opinions about it, and that's not bad either. But if a computer were capable of doing that, I would agree with you, Lauren. I'd say, well, that counts as art.

On a related note, let's imagine that we've got computers capable of making music. And they talked about the work of trying to create text. One wonders if you could then actually have computers capable of creating entire songs, like, not just music, but music with lyrics. We talked about this last time. We were saying that, you know, I can easily imagine computers creating perfectly passable instrumental music, not so much music with vocals. That seems a lot more difficult to me. So let me give you some poetry written by a machine. Remember when I told you that Google had fed its artificial intelligence all these novels? It started to try to create sentences, and it wasn't attempting to create poetry. It was creating a series of sentences that other people have looked at and said, there's something interesting here. Now, there's not an intent behind it, necessarily, but there is something that feels like really morose poetry. So here's a poem from Google's AI: "It made me want to cry.
No one had seen him since. It made me feel uneasy. No one had seen him. The thought made me smile. The pain was unbearable. The crowd was silent. The man called out. The old man said. The man asked."

That's a poem by Google AI. Yeah, I can dig that. I mean, it's interesting, though, because this is just AI trying to suss out the meaning or the intent behind words so that it can better understand, when we communicate with it and ask for something using different language, how it should respond. It's not attempting to create anything of meaning, but because we're humans, we find meaning where perhaps there was none intended. And I thought, you know, you could argue that maybe the art is created not through the computer writing it down, but through us reading it. Maybe that's where the art is created. And I mean, I've definitely read some, like, found poetry from spam emails, for example, you know, back before all of our email filters were so good that they don't let spam through anymore, that just terrific keyword salad that you would get, which was beautiful without intending to be. So the funny thing with found poetry is, who gets the byline on found poetry? It's the person who put it together. I mean, it's not whoever your text originally came from; it's the person who says, well, I edited together this found poem out of some text, I put this in this order. So is the suggestion here that the artist, when we read works by AI, is the reader? I mean, that's a possibility. It's a question that I think has merit. Yeah, I don't know. Well, I mean, in the case of yours, Jonathan, I suspect that somebody... was that a sequence fully generated by the AI itself, or did somebody pull pieces together? That's an excellent question. I suspected it was the latter, and in that case, I would say that the poet is the person who pulled those pieces together. Actually, here it is. Here's the way it works. Now that I'm reading this more closely, because when I first saw this, I just saw the examples and I didn't see how they were generating it: what they were doing was they gave the computer a starting word or sentence and an ending word or sentence, and then the computer had to generate a series of sentences to link the two together. And they began to say that these were telling stories. Now, they were sometimes abstract stories. Some of them are really weird. I wish we could find the one that I saw where it was really, really sad, and then it became about horses at the end. It was amazing. It was like Tina Belcher from Bob's Burgers had written a poem. But it was pretty phenomenal. And, you know, I bring it up mainly just to say that we're seeing some really interesting work in this field of machine learning and creation that could ultimately lead to things that perhaps don't replace any sort of human creativity, but either enhance something, maybe you do a partnership, quote unquote, with a computer to create something that's part you, part machine, and there's something interesting with that as well, or maybe it'll just be, you know, another option.

One of the things that the team talked about, or that people have chatted about, as far as the prospect of computers generating music, is using it to enhance or suppress certain moods. So, for example, you're wearing a smartwatch.
It's got an activity tracker on it, exactly, and it detects, perhaps, that you're getting stressed out. It knows that you're not moving around, but it detects, because of your physiological changes, that you're getting stressed out. And so through the headphones you're wearing, you start hearing music that's more soothing to you, and it's generated on the fly. It's unique music. It's not something that you're going to listen to and then just tune out because you've heard it a billion times before. Or maybe it detects that you're working out, and it says, oh, we need to generate some nice, fun, up-tempo type of stuff to keep the activity going at the right level, and it starts to create that on the fly. So that's a possible application for this. And that would be fascinating, because there's all this research into how, if you listen to music that has a similar beats-per-minute to your active heart rate, then you will keep going at that active heart rate for longer. I've found, just anecdotally, yeah, and by "research" I mean, like, sports blogs, right, right, anecdotally, I have certainly found that to be the case. Like, you know, I walk to and from the office, and if I'm listening to a podcast, I'm just strolling. If I'm listening to music and something with a beat comes on, and I suddenly pay attention, I realize I'm stepping on the beat. You're stayin' alive. Absolutely, exactly. Yeah, doing that CPR. So it's pretty cool.

And also, you know, to kind of conclude this discussion, I think ultimately what this is going to do is make a more robust machine learning system for problem solving in general. You could think of creating a piece of art as a problem, not a problem in the sense of, oh gosh, I've got a problem, but like an engineering problem. But imagine that you're able to create machine learning so that you could present a computer with a problem in the more colloquial sense, the more "I've got a problem, I don't know how to fix this" sense, and the computer, because it's studied all of that analysis, says, you've got a problem? Yo, I'll solve it. And then it takes your problem and gives you the solution that is most probably the right one according to its machine learning algorithm. And ultimately we could get to a point where we consult the great oracle, perhaps made by Oracle, that tells us what we should do for some questions that are particularly tricky, where you've got lots of different variables, and the suggestion is always "have a huge party, I've got the dance mix right here, get going," and it's all generated, and it has really morose poetry in it. Who knows, maybe that's the answer. I will not say no to a party. But I'm just throwing that out there. I think that this particular project is really interesting. I think the possible outcomes could be really cool, not just for the art that it creates, but for how it advances machine learning in general. And like I said, maybe in a year we'll come back to this and take a look and see if Magenta has produced anything, you know, noteworthy. Yeah, that's a pun. Musical notes. All right, so I'm going to wrap this up, guys. If you have any suggestions for future episodes of Forward Thinking, you should write us. Our email address is FWThinking at HowStuffWorks dot com. Or you can drop us a line on Twitter; we are FWThinking there. Or you can go to Facebook and search FW Thinking.
In the little search field, our profile will pop up, and you can leave us a message there. We love hearing from you guys, so if you've got any suggestions for future episodes, or questions or comments or anything, leave them there. We read all of them, and we will talk to you again really soon. For more on this topic and the future of technology, visit forward thinking dot com. Brought to you by Toyota. Let's go places.