Artificial intelligence has been the hot topic of debate in the business world for the last few years.
But increasingly, it's an area that is encroaching on the creative industries.
The latest OpenAI update is so advanced, fans online have used it to eerily replicate the hand-drawn art style of Japanese anime favourite Studio Ghibli.
It’s just the latest sign of AI coming for the arts, with recent headlines also highlighting concerns over entirely artificial models in ad campaigns, and fake movie trailers that look close to the real thing.
What protections are there in place for our creative sector, or could they become one of the first industries to fall to our new AI overlords?
Today on The Front Page, University of Sydney business school Associate Professor Sandra Peter is with us to take us through the impact of these emerging technologies.
Follow The Front Page on iHeartRadio, Apple Podcasts, Spotify or wherever you get your podcasts.
You can read more about this and other stories in the New Zealand Herald, online at nzherald.co.nz, or tune in to news bulletins across the NZME network.
Host: Chelsea Daniels
Sound Engineer: Richard Martin
Producer: Ethan Sills
Kia ora.
I'm Chelsea Daniels and this is The Front Page, a daily podcast presented by the New Zealand Herald. Artificial intelligence has been the hot topic of debate in the business world for the last few years, but increasingly it's an area that is encroaching on the creative industries. The latest OpenAI update is so advanced, fans online have used it to eerily replicate the hand-drawn art style of Japanese anime favourite Studio Ghibli. It's just the latest sign of AI coming for the arts, with recent headlines also highlighting concerns over entirely artificial models in ad campaigns and fake movie trailers that look close to the real thing. What protections are there in place for our creative sector, or could they become one of the first industries to fall to our new AI overlords? Today on The Front Page, University of Sydney Business School Associate Professor Sandra Peter is with us to take us through the impact of these emerging technologies.
Sandra, you've recently written about OpenAI's latest update. Can you explain what's happening here with these Studio Ghibli-style images?
So OpenAI's latest update to ChatGPT was a significantly improved image generation capability. This means it allows users to create really convincing images in the style of just about anything. And everyone's been trying out the Ghibli style because it's been one of those very, very clearly identifiable styles, and it was also used by Sam Altman to tell people about this thing. It's been so enormously popular that their systems have basically crashed since they released it, and I think about another ten million people have joined OpenAI's services based on this new image generation model.
And so how is this possible?
Has OpenAI just scraped all two dozen Ghibli movies in order to replicate this style?
They haven't. They've scraped the internet, and the internet has a lot of things on it. We can talk about what goes into training some of these models, but the idea with generative AI is that they've changed the way they create these images. They've moved from the traditional approach, what we call diffusion models, where models gradually refine noisy data, to something called autoregressive algorithms. No one wants to go into the details of that; all you need to know is that it basically treats images like language. So just as ChatGPT predicts the next word in a sentence, it can also predict visual elements in an image. This means it can use all the things GPT has learned about the Ghibli style, for instance, the fact that it uses soft pastels, that it's a Japanese type of animation, and so on, and it can use those to more accurately create these images from the precise prompts people give it.
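The images-as-language analogy Peter draws can be sketched in a few lines of Python. This is emphatically not OpenAI's model: the bigram counter below is a toy stand-in for a neural network, and the "visual tokens" (sky, cloud, grass, tree) are invented for illustration. What it does share with the real thing is the autoregressive loop: the image is treated as a sequence of discrete tokens, and each new token is predicted from the tokens generated so far, exactly like next-word prediction in text.

```python
# Toy autoregressive generation: an "image" is a flat sequence of discrete
# tokens, and each token is predicted from the one before it. A real model
# conditions on the whole prefix with a neural network; a bigram table is
# the simplest possible stand-in.
from collections import Counter, defaultdict

def train_bigram(sequences):
    """Count, for each token, which tokens most often follow it."""
    follows = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            follows[a][b] += 1
    return follows

def generate(follows, start, length):
    """Autoregressive decoding: each new token depends on the previous one."""
    out = [start]
    for _ in range(length - 1):
        candidates = follows[out[-1]].most_common(1)
        if not candidates:
            break  # no known continuation
        out.append(candidates[0][0])
    return out

# "Training data": tiny token sequences standing in for tokenised images.
data = [["sky", "sky", "cloud", "sky", "grass", "grass"],
        ["sky", "cloud", "sky", "grass", "tree", "grass"]]
model = train_bigram(data)
print(generate(model, "sky", 5))
```

Swap the bigram table for a transformer and the four toy tokens for a learned codebook of image patches, and this loop is, conceptually, how such a system reproduces a "style" it has seen many times: the statistics of which visual element tends to follow which.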
What are the copyright implications of this?
Oh, there are a lot of copyright issues. And can I actually say, when it comes to generative AI, this comes on top of many other controversies that we've had. But the ability to work with styles, and when I say styles, I don't just mean the Ghibli style. Everything becomes a style for generative AI. That means things like bananas, or cats, or bloody corporate emails, they all become styles. The ability to work with these styles becomes the heart of this controversy, because for many artists, their distinctive approach to how they create art is not a style that can be applied in a specific prompt. However, traditionally, copyright law doesn't protect styles, only very specific expressions, because we don't want to stifle creative expression. Right? If you could copyright things like impressionism, that would limit what people can do. But there's a very clear difference between a general style and the highly distinctive style that a person might have. If you remember, a while back there was a guy called Greg Rutkowski. This was a Polish artist, and people were using his style over and over again to generate images on Stable Diffusion, prompting "in the style of Greg Rutkowski". This threatened his livelihood and his craft. So creators have taken legal action against this on a number of fronts.
The newest trend is to create, and when I say create I mean imitate, Studio Ghibli images, truly just coming for one of our most beloved examples of human manual creativity. Everyone loves Studio Ghibli because it's beautiful, and also because it's painstakingly created by humans who love art. And now ChatGPT is trying to imitate that, creating images that are just worse. They're just soulless versions of Studio Ghibli. It's also worth noting that Miyazaki, the mastermind behind Studio Ghibli, once said of AI, "I would never wish to incorporate this technology into my work at all." And once, when he was presented with an example of how AI could be applied to his work, he saw the footage and said, quote, "I strongly feel that this is an insult to life itself."
It emerged a few months ago that Meta CEO Mark Zuckerberg allegedly approved his company using pirated versions of copyright-protected books, illegally uploaded to sites like LibGen, to train its AI software. If true, it seems like it might be breaching all sorts of laws, I suppose. But is there actually anything that can be done about it?
This is one of those huge "if true" kind of things, and also something that is still being debated and contested. There is a good likelihood that things like LibGen were ingested to train these models; we don't know at what stages and at what scale. It's also true that when we ingest what is commonly known as the internet, we will end up ingesting things that have copyright that we don't necessarily think of as being ingested. An easier way to think about that might be the controversy around generative AI models being able to write really, really good dialogue for movies because they've ingested OpenSubtitles.org, which has pirated subtitles for all the movies in all the languages. Even though a model wasn't trained on actual movies or movie scripts, ingesting the pirated subtitles means it's now very good at doing this. Yes, things can be done.
The law will evolve, much as the technology has, except that technology moves a lot faster than the law. But there's a lot of work under way on new legislation to try to balance the fact that we need an enormous amount of data to create these models against protecting things like artists' identities and their creative work. And it's obviously not just Studio Ghibli, right? There are the same concerns in written text and in music. We've had Billie Eilish and Pearl Jam voicing concerns about generative AI in music. So this is very widespread, and something that is being debated around the world now. As for how we'll balance the fact that we do need a lot of data, and there is scarcity in data, there are varied approaches to this legislation around the globe. There have been calls in the US to allow companies to use copyrighted information, to make this issue go away for companies training generative models, because we're in an AI race. This will all have to be worked out in the next couple of years.
We've also recently seen the YouTube channel Screen Culture demonetised. You may have seen some of their AI-generated trailers, which they make look like the real thing.
How advanced is AI getting? Are we going to be able to tell the real from the fakes in a few years' time? And what does that mean for film studios and the like?
That's a complex question. Let me take a step back and talk about the bigger picture. This is an arms race, and at the moment we're not really winning it, in that we are able to create very, very good fakes, deepfakes, and any type of content that is indistinguishable from the real thing. We were already able to do this quite well with images a few years ago. We are now increasingly good at doing it with voice, with audio, and we are increasingly good at doing it with video. The thing to focus on here is, first, that the detectors that are out there online cannot reliably tell you if a work is AI-generated or not. If you remember the Pope in a white Balenciaga jacket controversy, it's been an arms race since. And second, most of us can create convincing content using over-the-counter, commercially available software. If you were trying to create a deepfake of my voice, it would likely cost you two dollars a month with commercially available software, and it would be a fantastic clone of my voice that could fool my mom. If you were looking to create video, you'd probably need a minute and a half of video of me in the public domain to create deepfakes of me. So it is an arms race, and you know, we're not good at telling these things apart.
And it's not just movies.
Recently I saw global fashion giant H&M announce plans to use AI by making digital twins of thirty of its models. Now, the company has said that the models would own the rights to their likeness, but this is something that's been worrying fashion industry insiders for some time. Does there need to be better regulation in place before companies start to innovate and use AI in this way?
Obviously there needs to be better regulation in this space, but I think first it will be down to companies to figure out how they want to use these technologies ethically. Technology always evolves faster than the law, so the work in the legal space will take some time. So I think it's really, really important for executives, for leaders in this space, to upskill themselves around artificial intelligence, to understand what the ethical challenges are, what the practical challenges are, but also what some of the huge opportunities are in that space, and to make informed choices about what they do in their organisations. This has been a long time coming, right? We've had digital humans like Lil Miquela back in twenty sixteen. They were on Instagram, then they became, you know, digital flesh and blood. They did ads for, I think it was Prada back in the day, and Balmain. We had them develop music careers on Spotify. So this is not new, but it's up to companies, in the first instance, to really figure out how they want to do this the right way. It's also a huge opportunity for companies to lead in an ethical way in that space.
One of the most iconic brands bringing in the holiday season. But take a closer look at the new Coca-Cola commercial and you might notice that it was made with artificial intelligence. Social media certainly caught on to how lifeless that Christmas commercial is. Coca-Cola just put out an ad and ruined Christmas and their entire brand.
It's less festive, more creepy holiday vibes. There's a push for marketing efficiency, right? How do we create more with less?
That's just sort of business one-oh-one.
I've also seen, you know, entire ad campaigns generated using AI. Campaigns that would usually require entire teams, days of shoots, makeup artists, stylists, lighting technicians, directors, photographers, all replaced by just typing words into this kind of generator.
There's something sad about that, isn't there.
Well, there's something sad about the volume that we can make in this space. But there are, again, opportunities in that space as well. If you think about the kind of deepfake of me, we're not using it, obviously, to create videos of me, but if we need to do a little pick-up, it might be easier to do it by typing words in rather than booking the whole studio and everything else again and disrupting everybody's day. So there are opportunities in that space, but there's also a real danger that we might be just rehashing, recombining old things and not giving artists, creators, directors, producers, writers the opportunity to really bring our humanity to this. I think the useful way to think about AI is as an assistant. This is not a technology that should come to replace what we do. It's not there to take our jobs, but it's there to enhance us. So I would encourage people to think about what they can do with AI that we weren't able to do before, and think of it as adding to our capabilities rather than replacing us. There will be disruption, and I think all of us know that will happen. I always say it's not coming for your job, but it's definitely coming for your job description. But it's an important moment, I think, especially for people who lead businesses, who lead creative things, to upskill themselves on what this technology is. I would, of course, say the University of Sydney does the best work around AI fluency anywhere in the world, so do come and do this with us. But find ways to understand what this technology can do. It's really quite different to what we've been able to do before. Even the idea of style engines, it's not intuitive to us to think that, you know, things like bananas or corporate emails become styles. Cat-ness is a style, right? So try to understand what the tech can do, and then really be very mindful and very deliberate in how you implement it in your organisation.
I think it's up to people who lead, whether it's the creative industries or whether it's any business or government, to lead the way on how we think we ethically want to be doing this. It's not happening to us. We are making this future happen and the next two years will be crucial.
Are there any protections that you'd like to see put in place in order to better protect the creative industries? Because, given how fast the technology is evolving, there is a fear that it could stifle that creative production and that creative process.
I'm not a lawyer, so I leave the lawmaking and the details of this to those who know better. But I would want to see protections around artists, right? I do want to see protections around how their work is being used to train these models, because I think at the moment it's a bit of the wild west out there, and I am worried that AI-generated content might, in the long term, end up severely diluting the earnings of original creators, which also means that fewer people will want to join the creative arts. So ultimately, I think owners' consent to using any of this should be a requirement.
Thanks for joining us, Sandra.
Thanks for having me.
That's it for this episode of The Front Page. You can read more about today's stories and extensive news coverage at nzherald.co.nz. The Front Page is produced by Ethan Sills and Richard Martin, who is also our sound engineer.
I'm Chelsea Daniels.
Subscribe to the Front Page on iHeartRadio or wherever you get your podcasts, and tune in on Monday for another look behind the headlines.