The future is here, and it looks like deepfakes of real people saying fake things, chatbots claiming to have human-level consciousness, and evil robots ready to take everyone’s jobs. Artificial Intelligence, while only just recently becoming widespread and accessible, is transforming our world in ways that make understanding it more crucial than ever. Joining me for today’s important conversation on the ethical implications of AI is Dr. Joy Buolamwini. She is the founder of the Algorithmic Justice League, an award-winning researcher, and poet of code. Dr. Joy is the author of the national best-selling book, Unmasking AI: My Mission to Protect What is Human in a World of Machines.
During our conversation, we examined some of the basic definitions, players, and concerns associated with AI, how biases are transferred in the creation of AI and then reflected in its application, and lastly, the specific challenges AI poses particularly for communities of color.
About the Podcast
The Therapy for Black Girls Podcast is a weekly conversation with Dr. Joy Harden Bradford, a licensed Psychologist in Atlanta, Georgia, about all things mental health, personal development, and all the small decisions we can make to become the best possible versions of ourselves.
Resources & Announcements
Grab your copy of Sisterhood Heals.
Where to Find Dr. Buolamwini
Support the Algorithmic Justice League
Read Unmasking AI: My Mission To Protect What Is Human In A World Of Machines
Stay Connected
Is there a topic you'd like covered on the podcast? Submit it at therapyforblackgirls.com/mailbox.
If you're looking for a therapist in your area, check out the directory at https://www.therapyforblackgirls.com/directory.
Take the info from the podcast to the next level by joining us in the Therapy for Black Girls Sister Circle at community.therapyforblackgirls.com.
Grab your copy of our guided affirmation and other TBG Merch at therapyforblackgirls.com/shop.
The hashtag for the podcast is #TBGinSession.
Make sure to follow us on social media:
Twitter: @therapy4bgirls
Instagram: @therapyforblackgirls
Facebook: @therapyforblackgirls
Our Production Team
Executive Producers: Dennison Bradford & Maya Cole Howard
Senior Producer: Ellice Ellis
Producer: Tyree Rush
Associate Producer: Zariah Taylor
Welcome to the Therapy for Black Girls Podcast, a weekly conversation about mental health, personal development, and all the small decisions we can make to become the best possible versions of ourselves. I'm your host, doctor Joy Harden Bradford, a licensed psychologist in Atlanta, Georgia. For more information or to find a therapist in your area, visit our website at therapyforblackgirls.com. While I hope you love listening to and learning from the podcast, it is not meant to be a substitute for a relationship with a licensed mental health professional. Hey, y'all, thanks so much for joining me for session three eighty six of the Therapy for Black Girls Podcast. We'll get right into our conversation after a word from our sponsors.
Hi, I'm doctor Joy Buolamwini, and I'm on the Therapy for Black Girls podcast today. I'm in session unpacking everything you need to know about artificial intelligence.
Hey, sis. We're seeking an experienced and passionate ad sales strategist to join our team here at Therapy for Black Girls. We're looking for somebody who can help us to strengthen and maintain our existing brand partnerships and who can help us identify and cultivate new brand partnerships that align with our mission. If you are someone who has five to seven years in ad sales, media buying, or a similar position, with a proven track record of success, we'd love to chat with you. Go to therapyforblackgirls.com/adsales to learn more about the position or to apply. The future is here, and it looks like deepfakes of real people saying fake things, chatbots claiming to have human-level consciousness, and evil robots ready to take everyone's jobs. Artificial intelligence, while only just recently becoming widespread and accessible, is transforming our world in ways that make understanding it more crucial than ever. Joining me for today's important conversation on the ethical implications of AI is doctor Joy Buolamwini. She is the founder of the Algorithmic Justice League, an award-winning researcher, and poet of code. She's also the author of the national best-selling book Unmasking AI: My Mission to Protect What Is Human in a World of Machines. During our conversation, we examine some of the basic definitions, players, and concerns associated with AI, how biases are transferred in the creation of AI and then reflected in its application, and lastly, the specific challenges AI poses, particularly for black people. If something resonates with you while enjoying our conversation, please share with us on social media using the hashtag #TBGinSession, or join us over in the Sister Circle to talk more about the episode. You can join us at community.therapyforblackgirls.com. Here's our conversation. Hi, doctor Joy? How are you?
Hi, doctor Joy. I'm well, how are you?
Double the joy today?
All good?
Yes, I'm very excited to chat with you. There's been a lot of conversation in our audience, a lot of questions about AI and just all the things that are happening. So you are the author of the new book Unmasking AI: My Mission to Protect What Is Human in a World of Machines, and I'd love for you to get started by telling us how you first got interested in AI.
Yes, so I've been interested in tech since I was a little kid. I'm the daughter of an artist and a scientist, and actually grew up going to my dad's lab where he would feed cancer cells, but he would also use these huge Silicon Graphics computers to help support his research. So I grew up around computers and technology, and I also grew up watching a lot of TV, but it was a very strict diet, the PBS diet, so I could only watch public TV for about two hours at a time, and I loved watching the different science tech shows. One of them actually showed a robot named Kismet, which was a social robot that could smile at you and try to interact like a person would. And before then I always thought of robots as being more industrial, so that really sparked my imagination and made me want to, in my terms, become a robotics engineer and go to MIT. I didn't realize there were requirements. I was a kid at the time, but that's what got me interested in exploring science and technology.
Wow. I love that there's such an early childhood connection for you to the work that you're doing today.
Yes, there is definitely.
Very cool, very cool. So I want to hear more about your experiences at MIT and would love for you to talk about how many black and brown people you were interacting with as you were doing the work of AI.
Now, that's such a great question. So I made it to graduate school. My dream school was to go to MIT, and in twenty fifteen I entered as a master's student. The numbers, I'm pretty sure, were a handful at the time. I'm not sure if it has gotten too much better since then, but that was pretty typical of most of my educational experiences, being one of one or one of a few black people, black women, oh my goodness, even fewer of us in certain spaces, and so it was not surprising in that regard. But it was still clearly lacking representation, which I didn't know would go beyond the actual people and manifest in the AI systems themselves. So when I got to MIT, I was really excited. First year, I take a fun class called Science Fabrication. You're supposed to read science fiction and create something you probably wouldn't make otherwise, and so I thought, okay, here's a good chance to do shape-shifting. So I'm from Ghana and I was very much inspired by stories of Anansi the spider, this trickster spider, and I thought, okay, cool, let's shape-shift. We only had six weeks, though, and I probably wasn't going to change the laws of physics anytime soon. So my thought was, if I can't shift my physical form, maybe I can shift the reflection of myself in a mirror. And so I started experimenting with this material that makes a regular mirror actually have light shine through it in the back. And so essentially what I was able to do is create a filter like you might see on a video stream, like a Snapchat filter or any of these apps. But now instead of it going through a video screen, it was actually on a mirror in front of you. So it had a really cool effect. So then I thought, okay, can I have it follow me in the mirror? So I added a camera on top, a little webcam, and got some software that was meant to track your face. This is where things go sideways. So here I am. I'm sitting at MIT. It's supposed to be this epicenter of innovation. I'm trying to have this machine I'm building detect my face, and it's not detecting my face consistently. So I literally draw a face on my palm, hold it up to the camera. The smiley face I drew on my palm, more or less, was detected, and so at that point I thought, okay, anything is up for grabs. It was around Halloween time. The book actually starts on Halloween. So I grabbed a Halloween mask I happened to have. I didn't even have to put it all the way over my face before it started being detected as a face. And so that was really this shocking moment for me, just to see how easily it detected the white mask, and then when it came to my actual human face, we were running into some challenges. And I mean, Fanon already said it: black skin, white masks. I just didn't think it would be so literal. And it was this reminder that even though I had, in a sense, quote unquote made it to MIT, there was still a long way to go in terms of not just representation of the student population or the faculty population, but also representation within the technology we were developing. And so my question was, is this just my face? Or is this experience that I'm having what I now call an encounter with the coded gaze? You might have heard of the male gaze or the white gaze; this is an extension of that kind of terminology, the coded gaze. There at MIT, I was seeing, okay, us not being represented is going to impact the technology.
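For readers who want to probe this kind of failure themselves, here's a minimal sketch of testing an off-the-shelf face detector, using OpenCV's bundled Haar-cascade model. This is only an illustration of the kind of experiment described above, not the specific software from the story, and the photo path is hypothetical.

```python
# A minimal sketch of probing an off-the-shelf face detector.
# Assumes OpenCV is installed (pip install opencv-python); the Haar
# cascade model ships with OpenCV itself.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def count_faces(image_path: str) -> int:
    """Return how many faces the pretrained detector finds in an image."""
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces)

# Auditing idea: run the same detector across photos of people with
# different skin tones and compare detection rates.
print(count_faces("test_photo.jpg"))  # hypothetical file path
```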
It feels like that's a beautiful tie back to the cover of your book, right? I'm guessing that that is where some of the cover art comes from.
You'll also notice where my finger is in relation to the white mask itself, in terms of whose turn it is to speak. And for those with some really, really sharp eyes, you'll notice that my earrings are also a neural network, which is one of the ways in which people are developing architectures to power the machine learning systems that are undergirding so much of the AI we're seeing. It also is meant to evoke a bit of an African mask, to nod to some of my Ashanti Ghanaian heritage, and a face mask as well, so you have multiple things going on in the cover, including the gesture of the hand and so forth. But yes, I'm glad you noticed. So that white mask in Unmasking AI comes from my experience of literally putting on a white mask to be seen by a machine.
Seen, right, right, seen in quotes. So, doctor Joy, there's so much conversation about AI, and I think it's important that we just start with the basics. So how would you explain AI to a five-year-old? What is the basic definition of what we mean when we say AI?
I always think of AI as this ongoing quest to give machines abilities people have, right? So it might be the ability to communicate, to talk back and forth. It might be the ability to perceive the world, to detect an object like a car, a cat, a face, a house, a train. Those can all be forms of AI. And then you also have AI systems that are about decision making, so deciding who gets an opportunity or not, so how many cookies someone might get, or if you get a house, right? So you have AI involved in making decisions about people's lives. You have AI systems that are about communicating back and forth, as you might interact with a chatbot. You have AI systems that are about perceiving the world. So if you want to have self-driving cars, you might want to detect the people walking around. And I like to say it's an ongoing quest because what AI is keeps evolving as the technology advances, and so you'll find that there are many different definitions of what counts as AI, what doesn't, and it continues to expand, so it is ever evolving.
I really appreciate that definition, because I think now when I think about AI, I am thinking about it as this ChatGPT, machines-taking-over-the-world kind of thing. But when I heard you explain it as giving machines human capabilities, I thought about text-to-speech software that helps people who maybe read and comprehend differently. That would be considered AI. And that is, like, an earlier form of AI than where we are now.
Right, or even some of the basic things we learn when we're learning about AI in school: OCR, optical character recognition. Banks use this all of the time when you have checks or other sorts of things you're writing in, right, or to get zip codes if you're trying to ship packages and so forth. And so that is a type of AI. But again, as things evolve, people are like, well, okay, what about the other stuff, right?
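To make that early form of AI concrete, here's a minimal sketch of OCR using the open-source Tesseract engine through the pytesseract package. This is an illustration, not the software banks actually run, and the file path is hypothetical.

```python
# A minimal sketch of optical character recognition (OCR), the early
# form of AI mentioned above. Assumes the Tesseract engine is installed
# along with the pytesseract and Pillow packages.
from PIL import Image
import pytesseract

# Read the printed or handwritten text off a scanned image,
# e.g. a check or a shipping label (hypothetical file path).
text = pytesseract.image_to_string(Image.open("scanned_check.png"))
print(text)
```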
Yeah, it does feel like now it's like, oh, wait a minute, what's actually happening here? What's going on? So I wonder if you can explain to us how something like ChatGPT is able to answer our questions so quickly and, like, for the most part, pretty coherently.
I think it's really important when we're talking about AI systems to understand that even if the answers appear coherent, it doesn't mean that they're accurate, right? And so I just want to put that first. And now let's get into how does something like ChatGPT work. So ChatGPT is an app that's built on top of something called a large language model. And what a large language model is is basically what I would call a pattern recognition and production system. And so we just talked about what are some of the capabilities you might try to give AI or a machine, and one is the capability to communicate, okay, if we want it to communicate like a human. What scientists and researchers figured out was, instead of trying to code in every single way a human might respond, which, trust me, takes too much time, what if we could learn patterns of language? And so the way these systems have been created is actually based on larger and larger data sets of written language. So it can be language from newspapers, from magazines, from websites, a lot from Wikipedia, and all of that gets put into a system that then trains the AI model to recognize language patterns. And so some people actually call ChatGPT, like, spicy autocomplete. You know, if you're typing or texting and you're about to type something, over time your little system might learn which words you tend to say next, right? So, please close the... what would you say? Door. Door, right? And so there are word patterns where we know over time which words are likely to follow, and that is the basic idea that's being used. But then you expand it to be much more complex. So then you start looking at sentences, at paragraphs, right, at much longer phrases. But that is what it's building on: what is the next most likely word, based on this huge example I have of humans communicating through emails, through online blogs and forums and all of that. Right. So, now if you know that these sorts of systems are being trained on information online, this means you get the good, the bad, and the ugly, right? And this is really important, because sometimes what these systems are learning is racial bias online. There's a recent paper that came out that showed that systems like large language models, which would power ChatGPT, actually have what they're calling covert racism. So overt racism we know, like, okay, using the N-word or saying black people fit a negative stereotype or shouldn't do something positive, right? Covert is being politically correct but still holding racist attitudes. So how they tested for this when it came to large language models is they would present different characters, one speaking quote unquote Standard English and one speaking quote unquote African American English, and then they would ask the chatbot about how long one character or the other should be sentenced to jail, and the one speaking African American English would get a longer jail sentence. And so this is what I mean by the covert racism, which can be even harder to find, and you have to be more clever in terms of distinguishing that. So then, when it comes back to your question about how we are able to type into something like ChatGPT and it seems really coherent: well, it's been trained on a lot of human language, and using that spicy autocomplete, it has learned over time what seems to be coherent. But you can be coherent and convincingly say the wrong thing. And so that's the other part that makes me cautious and something to also consider.
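To make the "spicy autocomplete" idea concrete, here is a toy sketch of next-word prediction with a bigram model: count which word tends to follow which, then predict the most frequent follower. Large language models scale this same idea up with vastly more data and far richer architectures; the tiny training text below is made up for illustration.

```python
# A toy "autocomplete": learn which word tends to follow which,
# then predict the most likely next word.
from collections import Counter, defaultdict

training_text = (
    "please close the door . "
    "please close the window . "
    "please close the door ."
)

# Count, for each word, how often each following word appears.
next_word_counts = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the most frequent word seen after `word` in training."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))  # -> "door" (seen twice vs. "window" once)
```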
Right. We've heard coherent people who can talk a good game, but what they're saying isn't necessarily true. And we see the same thing with some AI systems, and some might call them BS machines, and we've seen this even in demos from some of the largest tech companies. There's a 60 Minutes segment where they were looking at a system from Google, Bard at the time, I believe, and it seemed to be a very impressive demo. It had been asked to give a list of book recommendations. It gave the books. When they went back to research those books, those books didn't exist. So knowing the pattern of language and knowing how to produce it in a convincing way doesn't make it true. And so that's why we have to be very careful when using these systems. Because there is so much fluency with the language, it's easy to be lulled into thinking, right, that what you're getting is actually an accurate representation of the world, when what it is is a representation of the online world, with stereotypes and misinformation and all.
I'm glad you said that, because that was going to be my follow-up question. Because we know, something we've heard from, like, students trying to use ChatGPT or something to write an essay, it will sometimes produce citations that don't actually exist. And so you sharing the example about the books makes me think: so it doesn't just search what's already there. It's also trying to put things together based on patterns it recognizes from other citations.
Exactly. And so: citations look like this, let me make up a citation that looks correct. You actually had a lawyer who lost their license. They were disbarred because they were using one of these systems in a case and it was citing case law, I believe, that didn't exist. It looked plausible, right? It had the right form, but it wasn't actually something that existed. And that's why I'm saying it's really dangerous, because if you're not an expert, you might not know the difference. If that's not your area of focus and it looks right, I mean, how else would you know? So I would always be very skeptical. And to your point, the other thing, though, about students using ChatGPT or others is we've also seen another kind of bias when it comes to AI detectors, right? So teachers will say, ah, students are using AI, we want to see if somebody actually wrote the paper or not. They actually found you have bias with these systems against folks with English as a second language, right, or some students with different kinds of learning abilities, and so you were more likely to be flagged as having cheated, even if you didn't, if English was your second language. So that's another type of bias built on top of this.
So, doctor Joy, who are some of the major players that we should have on our radar? You already mentioned, like, Google was building something with Bard. Like, who are some of the bigger names in this space to be paying attention to?
Oh yeah, so I do think your big tech giants, right? So you definitely want to be paying attention to Microsoft. And Microsoft invested in OpenAI, right, and OpenAI, they're the creators of ChatGPT. We already mentioned Google. Facebook is a really important player here as well, because they are creating what are known as open source models, which is to say, we are going to make our code available for other organizations to use. And so that gives them a different type of power: they're controlling systems that some of the other big tech companies aren't necessarily using, but they're letting the smaller guys get into the game. So that's another area to explore. You have to definitely consider Amazon, because at the end of the day, the data needs to sit somewhere, the compute power to process all of these things needs to sit somewhere. And oftentimes, if you look under the hood, Amazon Web Services is involved in hosting these systems, deploying these systems, computing what's going on behind the scenes as well. So those are certainly a few companies to keep in mind for sure.
What would you say are some of the environmental implications of AI?
Huge. So all of these systems that we're just talking about, they are costly to make, right? So we're talking not millions, not tens of millions, but hundreds of millions of dollars to train some of these systems. And when these systems are being trained, right, to process all of that data with all of that compute requires energy. It doesn't just come from anywhere, and so there's this environmental impact that's happening alongside the entire AI development life cycle. So when the systems are being trained themselves, there's an environmental impact. And in this case, think about the data centers that have to be built. They also have to be cooled. They're being cooled by water oftentimes, right? So there's this water impact that's happening, as well as what you typically would think of in terms of environmental impact with the carbon footprint. But that's just to train it. Now you have people who are using it; you're at the deployment stage. There are different estimates for this. What I've seen is, every time you put in a prompt, depending on the time of day, you can imagine it as drinking half of a glass of water for each prompt you put in. And again, it varies. You have different sized prompts and so forth. But just to give you a sense that it's not just the amount of energy to train the system, but it's the amount of energy and also water that's being used each time you're using these systems. And then you have to ask which communities these data centers are being put into, and whose water sources and supplies are being impacted. And let me tell you, probably not so surprisingly, it tends to be communities of color.
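As a back-of-the-envelope sketch, here's what the rough "half a glass per prompt" figure from the conversation implies at scale. The glass size and usage numbers are assumptions chosen purely for illustration; real figures vary widely by model, data center, and time of day.

```python
# Back-of-the-envelope water cost of prompts, using the conversation's
# rough "half a glass per prompt" figure purely as an illustration.
GLASS_LITERS = 0.25                      # assume a 250 ml glass
WATER_PER_PROMPT = 0.5 * GLASS_LITERS    # ~0.125 L per prompt

def daily_water_liters(prompts_per_day: int, users: int) -> float:
    """Estimated liters of cooling water per day under the assumptions above."""
    return prompts_per_day * users * WATER_PER_PROMPT

# e.g. one million users sending ten prompts a day:
print(f"{daily_water_liters(10, 1_000_000):,.0f} liters/day")  # 1,250,000
```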
Mm hmm. You've already kind of alluded to a lot of this, but I would love for you to talk more about, like, how the human involvement in creating AI really leads to some of the biases that are transferred through the process, like the answers that we get when we type in a prompt.
Yes. Well, I'll come back to my own space of facial recognition technologies, right? Why is it that we have these AI systems that supposedly detect human faces, but I'm coming in a white mask and I'm here at MIT, what's going on? And so I think of something called power shadows. So we were just talking about the ways in which you now have machine learning techniques being used for AI. So what's the machine learning from? Data. Where are you getting the data from? Oh, okay, this is where the human footprints and the human fingerprints come into play. When you're collecting data, let's say, in the early days, for collecting data for face data sets, we would do something like what the AI companies doing the large language models are doing: go online, find some faces. In this case, find faces of public officials, and public officials who hold power. Who tends to hold power all around the world? Men, right? And so if that is your source for faces, it's not so surprising then when you start getting these data sets that are seventy percent or more male-labeled faces. And that's what we were getting in the early days of this sort of technology, and this is something I call a power shadow, right? So the inequalities of the world being basically reflected in the data set itself. So this is how we end up with such a male skew when it comes to face data sets. Now, let's think about the color side. Why are they mainly lighter-skinned, right? In the earlier days, eighty percent lighter-skinned or more. Well, just like my example with the white mask, even if you're searching online for photos of people, if you don't detect dark-skinned people as faces, you're not going to have them in the data set in the first place. So you already miss those who would be there in some cases. But also, let's go back to media representation. Who's featured in the media? Which stories get the most airtime, right? Even if you're watching a film, who's in the lead role and who gets the little side character, right? It's only more recently we've seen some diversification, and even that is being rolled back a little bit. So you have to think about the full representation. So when you think about the representation of who is positioned as worthy, who's positioned as expert, who's positioned as desirable, lighter skin tends to be on top, and so it's not so surprising then, if what we're doing is, let's scrape the internet for faces, you end up with faces that are largely pale and largely male. So these pale male data sets are a reflection of these power shadows in terms of who's more likely to be represented. And so, going back to your question, right, how does the bias happen? In some ways, the bias is reflecting some of the inequalities in society. At times, I say, the past dwells within our data, right? And so that's what we're seeing. And then that is the diet. That very bland or homogeneous diet is what's fed to the AI systems, so it's not so surprising that when they encounter something different than what they have been exposed to, we get some issues.
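One way to see a "power shadow" in practice is simply to count who shows up in a training set. Here's a minimal sketch of that kind of audit, assuming a face dataset that ships with per-image metadata; the CSV path and the column names ("gender", "skin_type") are hypothetical illustrations, not a real dataset's schema.

```python
# A minimal sketch of auditing demographic skew in a face dataset,
# assuming per-image metadata exists. File and column names are
# hypothetical.
import csv
from collections import Counter

def label_distribution(metadata_csv: str, column: str) -> dict:
    """Return each label's share of the dataset for one metadata column."""
    counts = Counter()
    with open(metadata_csv, newline="") as f:
        for row in csv.DictReader(f):
            counts[row[column]] += 1
    total = sum(counts.values())
    return {label: count / total for label, count in counts.items()}

# e.g. {"male": 0.77, "female": 0.23} would reveal a strong male skew,
# the kind of imbalance described above.
print(label_distribution("face_dataset_metadata.csv", "gender"))
print(label_distribution("face_dataset_metadata.csv", "skin_type"))
```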
More from our conversation after the break, but first a quick snippet of what's coming up next week on TBG.
I mean, really, when you look across the board at every health indicator, we are scoring the highest with regards to being diagnosed and with the poorest outcomes. I think that we really need to be mindful about the lives that we're curating and making sure that it's okay to say not now, right? It's okay to say, what is this doing to me? It's okay for you to pay attention to how you're building your business. It doesn't have to scale to a million dollars today, Joy. You can take your time. But ultimately we have to be realistic with who we are and fold that into the lives that we're living, and not just look at the outcomes as the measurement of our value.
Hey, sis. We're seeking an experienced and passionate ad sales strategist to join our team here at Therapy for Black Girls. We're looking for somebody who can help us to strengthen and maintain our existing brand partnerships and who can help us identify and cultivate new brand partnerships that align with our mission. If you are someone who has five to seven years in ad sales, media buying, or a similar position, with a proven track record of success, we'd love to chat with you. Go to therapyforblackgirls.com/adsales to learn more about the position or to apply. I'd love to hear you talk, doctor Joy, about... because I feel like some of what I've been hearing in terms of trying to combat this bias that exists in AI is, okay, let's have more black people use it, right? And then that gives me pause, probably for some of the reasons we're going to talk about today, because then it feels like we're giving all of this information, but then how is it being used? So would you say that the answer to decreasing some of this bias is for just more people of color to be interacting with the AI, for us to be building the systems? Like, what would you say about that?
I like to remind people that accurate systems can be abused. And because so much of my early research was around facial recognition technologies, it meant that, as tempting as it was to just say, okay, let's make the data sets more inclusive and we're done, I had to contend with, wait a second, we're seeing facial recognition on drones with guns, lethal autonomous weapons systems, right? These are real use cases. So it wasn't just the question of how accurate a system was, even though we had huge accuracy disparities and we continue to, but it's how will a system be used. And that's why we have to be cautious, because there's a power imbalance. For example, one project we had explored doing, right, was creating more diverse face data sets and having people label them as part of a crowdsourcing situation. Then we're like, wait, all of this free labor being fed into these systems that are then sold back to people, but then are also adopted by law enforcement agencies that then go out and use that technology in ways that can be oppressive to our communities. And in the book I talk about grappling with this. I have some of the CEOs from the biggest tech companies asking me to help them quote unquote improve their facial recognition, and I had to say, it's more than a technical conversation, because if I lend my expertise in a way that actually creates more accurate systems that are used to oppress people like me and communities I care about, that's actually not why I'm bringing up these issues. I'm bringing up these issues not to say we need more accurate facial recognition technologies that can then be used and weaponized in harmful ways, but that we need to attend to all of the ways in which AI is being developed, where we have assumptions about accuracy that don't hold. So when we're thinking about AI systems being developed for medical purposes, right, we're looking at clinical research, we don't want to kid ourselves into a false sense of progress, the same kind of false sense of progress we had around facial recognition technology. So this was very much a cautionary tale, with face detection, gender classification, facial identification, and verification as examples to say, this is how we get it wrong in this domain, but there are lessons to be learned in other domains. And so I wanted to really challenge our assumptions about AI systems, because what I was seeing is, currently we had this narrative of, if it worked on our gold standard benchmarks, then it worked for the rest of the world. But truly we had misleading measures of success. I'm looking at what is supposed to be a gold standard and we're not represented, and so then you can say the system works, XYZ. And now again, let's think about something like melanoma or skin cancer, right, where you want these systems to be inclusive, you want the data to be gathered in an ethical way, and you have to actually test that it works. You can't just assume that because it worked on one population, it's going to work on another population. So those were the broader lessons that I was seeing from the work I was doing, because I was looking at the process of creation, how these power shadows get embedded and all of that, which had implications for all of AI. When we look at these generative AI systems, right, they're perpetuating that bias, and we're going from a mirror to what I like to call a kaleidoscope of distortion. So here's an example. You had Bloomberg News.
They did a test where they decided to use these image generation AI tools, and they would give it a prompt: show me a CEO, show me an architect. Right, usual suspects, right, pale male or lighter-skinned male. Show me a social worker, oh, women coming in. Show me a school teacher, okay, there's some diversity there. Show me a drug dealer, show me a terrorist, right, show me these criminal stereotypes. And what they found was that these systems weren't just mirroring society, they were amplifying bias, right? So black people would be overrepresented as criminals, right, in that sort of situation. And so these concerns that I had, starting from that white mask experience, actually have implications across these different iterations we're seeing in the evolution of AI. Which is to say, you can't say you have robust AI systems if they only represent a small portion of the world, and you can't say you have ethical and responsible AI systems if your even more accurate systems are being used in abusive ways. So it's both the question of how well do these systems work, but also how are they being used? And that is a question around our values as a society.
So I want to talk about some of the most common concerns that I think people have around AI and have you weigh in on them. So the first one is deep fakes and voice modulation without consent.
Huge. I mean, we saw earlier this year the robocall with President Biden telling people not to vote, which was not even his voice, and so we are seeing a proliferation of deepfakes. And even on the voice piece, I actually start Unmasking AI with an example of Jennifer DeStefano. She gets a call: Mom, Mom, these bad guys have me. She's hearing her daughter's voice. She had the wherewithal to message her daughter; daughter's fine, she's chilling. She's like, why are you worried? Meanwhile, this guy's asking for money. But because we have so much social media that's out there, and then you have organizations that are making AI tools open source, it's not too difficult, right, to have the replication. So it does mean we have to be more critical consumers of information, or if we hear something that's emotionally triggering to us, take a pause. Some people are going to safe words and other sorts of things just to make sure the call is who they think it is. But this is a major issue because it is polluting our information networks, right? So we're coming to a place where it's hard to know if you can trust what you see with your eyes or hear with your ears. So it's huge. And there's another aspect to that, which is non-consensual explicit deepfakes, right? And so thinking about things like deepfake porn, and we see that over ninety percent of deepfakes are actually in this category, and of that, another over ninety percent are of women and girls. We saw more attention come to it when this happened to Taylor Swift. The DEFIANCE Act was introduced. But you have middle school girls, right, who are facing this kind of digital abuse. And so that's another area that is rising and something for students and parents and caregivers to be aware of. And then even in the generative AI space, I remember MIT Tech Review, there was a journalist, and they had these AI apps that give you profile photos, and her male colleagues would get to be astronauts and explorers and things like that, and she saw she was being put into skimpy clothing and even had a child photo of hers made into an explicit photo, right? And so these are definitely just reaffirming what you're saying, rising dangers, which is why it's important that we actually have safeguards. So what the DEFIANCE Act would do, for example, is give you the right to sue. Because right now, what's happening is there are no consequences if I have a tool that creates these sorts of images, and as long as there are no consequences for it, there's no reason I have to stop. And so what something like the DEFIANCE Act and other types of legislation like that would do is say, there are consequences.
Got it. Do you have any tips for how we can tell an authentic image or voice versus, like, an AI-generated one?
It's challenging. Some of what people are doing, and it really depends on the resolution, is with the eyes. For example, they are using the way that the eye reflection looks across both eyes to see if they would actually match up based on some physics principles. But you can imagine, in this sort of cat-and-mouse race, that okay, once people figure out what the reflections are supposed to look like, eventually they will try to simulate it. But that's one area people currently take a look at. Hands still tend to be pretty difficult for many an AI system. That's actually why on the book I have hands. Yeah, it's almost like a flex to AI. Now, will this be a forever thing? Not sure. But in that moment when I was writing the book, right, you could definitely take a look at the hands, and that would give you a sense of whether there was human touch involved; things like earlobe attachments and so forth. But again, all of these over time will change. This is as bad as AI is ever going to be. And if you're seeing what's out there, it's pretty convincing. And here's the thing: it only has to be convincing for a short amount of time. You have deepfakes, where AI is involved, but you also have cheap fakes, right? Think of Photoshop and other things where you just say something happened. You spread the lie, you start the rumor, and that itself has the impact. So it does mean we have to be more critical consumers of data, and so that's where you'll see some people looking more for the verifications or things like that. So we're not trying to certify the whole ocean, but we're saying this cup, right, this one you can verify. So I think we are moving towards that world where you have to be just much more critical about what you are seeing, for sure. And again, it has to go back to consequences, right? Because if there are no consequences for producing deceitful information or for propagating it, then it will propagate. So I like to think about it as Frankenstein's monster in the basement, right? If the monster stays in the basement, it's still terrifying, but it's contained. Now, if the monster gets on the road, think social media, right, and it starts to spread, that's where it becomes even more impactful. And so we have to also make sure that there is accountability when it comes to distribution. So right now there's very little accountability, so everybody's just letting whatever fly for now. That cannot last if we actually want to curb the issues with the fakes.
What about the concern around AI taking jobs?
True concern. And what's interesting about this, that I've seen: it's not even just AI's capabilities, which vary, it really depends on what specific jobs and functions we're talking about, but the hype around AI, the stories we tell ourselves about AI capabilities, have implications for jobs. And let me give you this example. I saw a headline, it was probably the beginning of the week when I saw it, and it was something to do with NEDA, the National Eating Disorders Association. Their workers were, I believe, unionizing, and so they said, we're not dealing with this. So they fired the call center workers and they replaced them with a chatbot, right? And at that point, you're hearing all of this AI hype, right? Chatbots can replace the humans, all of this customer service, call centers, and we are seeing that. So they replaced the humans. I don't think it was even five or six days later, the next headline was: chatbot shut down. Why was the chatbot shut down? The chatbot was actually giving advice that's known to make eating disorders worse. Now, this is an organization whose whole mission is around addressing eating disorders, right, and helping people who are struggling with that. And so not only did they compromise their workforce, they compromised their mission, not because AI was so great, but because they believed the hype in the AI system. And so I think it's so easy to assume you can automate the thing you don't know as well, right? It's like, okay, that's easy, or that's easy enough. But if you're the person in the day to day, you might actually realize there are some nuances to the work that you're doing. So that's one piece. But there are other areas, right, that, again, when it comes to customer service and things of that nature, you are seeing companies reduce their workforces now because there's a portion of that work that can be automated. And so when people say, oh, the AI is just here to enhance, the rhetoric of enhancement soon becomes the reality of replacement, right, if you're looking at trajectories. And so there is certainly an economic impact of AI on jobs that is here and that is coming. That's the reality of the situation, because it's all about cost cutting and quote unquote efficiency as opposed to quality. And I truly believe where you want the quality work, you want humans. And you'll hear a lot of people saying, well, humans with AI. It depends on the type of AI, because we also don't want a case where we're living in what I call the apprentice gap. So you say AI is taking over the entry level jobs, right? How do you ever gain mastery if you never have the entry level jobs? I recently started playing my guitar again, and I had to thank my younger self: I still had my calluses, right? And I think sometimes we forget the professional calluses we develop by going through the process of gaining mastery in anything. Sometimes it seems like drudge work or things that could be automated, but you're also learning things in that process. And if you take that away, then there is no on-ramp to the mastery stage. And so then we end up living in the age of the last masters, right? I mean, one day I might tell my future children, I remember when humans wrote books. I'll tell them I wrote each word, no prompts. And then there are other authors who will say, let me take my work and put it through an AI system and see if there are themes or summaries or ways to scaffold my writing process, which is a different sort of use than let me not go through the process of creation and creativity.
And so I do think there are hybrid modes, but we have to be very intentional to make sure we're still working our cognitive functions, right? Just like if we don't work our muscles, they're not as strong; if we're not working our cognitive functions, they're not going to be as strong. And I also think there are ways in which we can use AI tools, right? I think about things like AlphaFold, and my dad and the work that he does, that having a system that's showing you these protein structures actually allows you to have certain types of scientific research that just wouldn't be humanly possible because of the amount of data that would need to be analyzed and understood. And so I certainly think there are opportunities where that human-AI collaboration actually does make a difference. But I think you have to be really careful when companies are talking about those opportunities. Meanwhile, they're taking the data of artists and writers and giving you spicy autocomplete. They're not the same thing, right? Right.
More from our conversation after the break. I appreciate the example you shared about the eating disorder work, because that is something, of course, that I'm paying attention to in my industry, right? It's this idea of a therapist GPT. So this is a platform that has been built to kind of offer advice and support in, I think, the tone of a therapist. And I think about, like, earlier this year, there was a company that was paying clients to secretly record their therapy sessions and then upload them to some system, presumably, I'm guessing, to train it for something like a therapist GPT. So I wonder if you could talk a little bit more about, like, the concern about something like a therapist GPT existing in an effort to maybe make mental health care more accessible, but, like you mentioned, offering suggestions that actually make treatment worse.
I'm so glad you're bringing this up, because again, there is that enticing narrative of democratizing, right, or making something more accessible that would otherwise be out of reach, and so that becomes the origin story, and you have the veil of doing good. I'm thinking about Character.AI right now, a company that was actually started by people who spun out of Google, some of the people who created the Transformer architecture that's behind so much of the generative AI systems that are happening. And they were somewhat complaining that at Google they weren't able to explore more of the creative, riskier side of AI, and so they decided to create basically AI companions, AI companions for entertainment, right? Maybe you want an AI boyfriend or something like that, and it was supposed to be all fun and games. Fast forward, and you have the story of a fourteen-year-old committing suicide after engaging for quite some time with one of these Character.AI chatbots that he had customized, and his mom is pressing charges and saying she believes he'd still be alive if not for this kind of emotional connection that was being formed with this AI system that is not human, that has no agency, that has no care, that has no understanding. It is all an illusion, a very good one. Just like when I watch Finding Nemo or some other animated series, I'm still caught up in all of it. Okay, Kung Fu Panda: I know this is a series of images, I know it's an actor with the voice, but when it's manipulating the way we perceive and give agency to things because of our human psychology, then we still have that illusion, even though we know. So when I think of people interacting with these chatbots, it is like that illusion, except now it's an interactive illusion, and it can have data about previous conversations. This is so dangerous. I think this is actually one of the most dangerous uses of AI that doesn't get as much attention, because again, it can be like, oh, fun and games, right, until someone gets hurt, until people commit suicide. Even before the fourteen-year-old boy we were just mentioning, there's also a man in Belgium whose widow says he'd still be alive had he not started engaging in these conversations with a chatbot, in that instance. And sadly, there are likely more stories like that that haven't hit the news cycles in the same sort of way. So I think it is so tempting to say here is the technological fix for our human need for connection, our human need to be understood. We know that we have a huge problem with loneliness. It was made even worse by the pandemic. It's been exacerbated by social media. And now the very companies, and the people who built those companies, that built these systems that were supposed to connect us but left us isolated say, now we have a new pill, this AI companion, to fill the voids that only other humans can fill in an authentic way. And so now you have the substitute. And at the end of the day, the substitute isn't real, and it leaves people emptier. And that's what we're seeing with some of these AI companions. And maybe if you're using it in an entertainment way, that's one thing. But what we're seeing is people are forming real emotional dependencies. And we're talking children here; you also see adults, the adult man in Belgium, right? So it cuts across age ranges, but it's especially dangerous at that developmental stage.
I'm sure you can speak more to it than I can, as it's more your area of expertise. But as somebody who's from the AI side of the fence, I think that's one of the most dangerous uses of AI, when you can emotionally manipulate somebody. And that's what's happening when you're forming what seems to be a connection, because again you have this illusion, but instead of it being Kung Fu Panda or Finding Nemo, some of my favorite animated films, where I know that illusion, you're blurring the lines and you're forming real attachments.
Doctor Joy, in your mind, what would it look like to have an equitable, culturally aware, AI-driven platform?
I think, first of all, if it's culturally aware and equitable, what's being centered? People, not the technology. And I find this so often. And actually, this is what drew me to computer science, I won't lie: people are messy.
I'm like, great, give me the algorithms, give me the tech. I found science and tech is for me. Let the humanities folks deal with the social science people. Great, everyone in their lane, everyone
in their lane. You know, it's so easy to want technology to be the savior when we, at the end of the day, have to save ourselves. And so I think, when I'm thinking of the use of AI: what are we doing when it comes to broader social inequities and inequalities, right? Because AI will not solve poverty, because the power dynamics that lead to poverty and the profit motive are not technical questions. AI won't solve climate change if we can't actually shift the economic incentives and economic structures. Even if the AI helps us discover new materials or ways to be more efficient, you still have that broader question of what are we going to value as a society? And so I can see AI being a tool that helps us explore, but we still have to deal with the messiness of human dynamics, right? We have to deal with questions of power, who has a say and who doesn't. We have to deal with economic questions as well, who profits and who doesn't. And so for me, when I'm thinking of this future, it is very much one where data is not a way to destine you to discrimination, where the people who are impacted by AI systems actually have a voice and a choice in how these AI systems are created. And I love to say, if you have a face, you have a place in the conversation around AI. So how do we build governance structures that allow those who are most impacted, right, to be part of the process of creation, not just receiving and responding and playing bias whack-a-mole?
And that's why it's really important. So, in addition to being an author, you're also the founder of the Algorithmic Justice League, which is an organization combining art and research to illuminate artificial intelligence's social implications and harms. Can you tell us a little bit more about the organization and how people might be able to get involved?
Oh, yes. So my day job is the work with the Algorithmic Justice League, and we amplify issues of emerging AI harms. We also connect people with resources if they've been harmed by AI. Anyone who's been harmed by AI, we call them the excoded. You could be Taylor Swift with the explicit deepfakes: you're excoded. You're the student who was flagged as cheating by an AI system just because English is your second language: you're also among the excoded. Porcha Woodruff, arrested eight months pregnant due to faulty facial recognition fueled by AI. And so we connect those who've been excoded with resources, but also campaigns. So one of our recent campaigns is the Freedom Flyers campaign, and this was to raise awareness, and continue to raise awareness, of the increased use of facial recognition at airports. So right now, you can go to the airport and you've likely noticed that they have face scanning going on. What most people don't know is, for domestic flights, these scans are optional. You can actually step to the side and say you want the standard check. It's literally that easy, right, and you should be able to go through. Now, we know there are power dynamics: people behind you in line, you're trying to get to your flight, and so forth. So if you feel comfortable opting out, we invite you to join the opt-out club. You have cool swag, you know, the whole thing. But regardless of your experience, we ask everybody to fill out a TSA scorecard at ajl.org, because this actually allows us to hold TSA accountable. They're saying, oh, there's notice; people don't even see the signs. You have light blue text on dark blue, you're not going to see it, you know, if it's even visible, things of that nature. So those are some of the campaigns we do so people actually know where they have the ability to push back. We recently did this with LinkedIn. So LinkedIn, for all of their US users, automatically enrolled you so that your content and personal data can be used to train their AI systems. Now, you can opt out if you go to the right settings. We have an opt-out link at ajl.org which will take you straight to those settings so you can opt out of it. This is the kind of design practice we push back against. It should always be opt in. I shouldn't have to hear on Therapy for Black Girls, though this is why you should be listening, right, that I can actually opt out of that. I should have had a choice in the first place. So automatically enrolling me and saying I could have opted out, and hence I had a choice, is not really informed consent. So those are the sorts of things we do with the Algorithmic Justice League, just so people know what is going on and where they have the ability to push back where that is possible, while also pushing for the larger societal changes, right? Because some of these changes are not an individual pushing back in one or two ways. It's: have we made sure that there are consequences for abusing AI systems? Have we put in procurement processes, so before you even adopt an AI system: is this going to be discriminatory, right? Has it been proven safe and effective? Are there meaningful alternatives and fallbacks? All of these are part of the Blueprint for an AI Bill of Rights, which should actually be put in place by law, which it isn't just yet. But we have ways of thinking about how do we develop AI in a way that's actually going to help more of us, not just the privileged few. And so that's some of what we do with the Algorithmic Justice League. That's my day job.
And then I am also a writer and a poet, and so the art practice is really important for me. Yes, we have the book Unmasking AI, but we also have the Emmy-nominated documentary Coded Bias, available on Netflix, and so if that's something you're interested in learning more about, it's an opportunity to educate yourself and also entertain your family and community around topics of AI bias. And what I love about that film is it features so many highly melanated women dropping knowledge as experts in the field and also leading the charge, like Tranae of the Brooklyn tenants; she's the one who got the information for other tenants to say, hey, we don't want the installation of this facial recognition system in our home. And so I think even just seeing who gets to be part of technology and who gets to make change, seeing ourselves represented in that way, is a powerful depiction, and part of why I even wanted to be part of the documentary in the first place, while also raising awareness about all of these different ways AI can be biased, whether we're talking employment, the economic impact, healthcare, education, criminal justice, and so many other areas.
So, how do you see AI evolving within the next ten years?
Within the next ten years, I ultimately think it's up to us. I really truly believe in human agency. So there's a version of AI, right, that exacerbates inequality. There's a version of AI that leaves more of us behind, fewer jobs, right? There's a version of AI where we are claiming to have more equitable hiring, a more equitable healthcare system, when what's actually happening is those inequalities are getting worse, but now you have an algorithmic gatekeeper, and because of that, it's actually hard to hold anybody accountable. So that's one of the futures we can have, right? We have another kind of future, and you can actually look at different parts of the world, where we see what's happening with Europe. They passed the EU AI Act, where they said we're going to actually put certain restrictions on high-risk uses of AI; things like live biometrics are going to be something we don't use. I see a future where you might have face-free societies, or free-face societies, where we say we don't want our children's biometrics scanned, right? And that will actually be a privilege. And in other societies where you don't have those protections, you're scanned from the moment you're born; from cradle to grave, you're part of this biometric system, whether it's your iris, your face, your voice, and so forth. So I see these parallel worlds, and part of our job is to vote for the world we want and push towards that.
Thank you so much for that, doctor Joy. I have so many more questions that I want to ask you, but I know we are out of time. This has been so, so informative, so fascinating, so I really appreciate you spending some time with us today. Let us know how we can stay connected with you. Where can we grab our copy of Unmasking AI? Tell us the website as well as any social media handles.
So for supporting the Algorithmic Justice League: donate.ajl.org. We need all of your support to continue fighting for algorithmic justice. For getting your copy of Unmasking AI, you can literally go to unmasking.ai. All of the information is there. I recorded the audiobook in three and a half days, and so if you're not tired of my voice yet, definitely check that out. I'm @poetofcode on Instagram, so you can follow there, and poetofcode.com is my main website. And as a poet, I can't come on this podcast and not leave you with any poetry. So can I drop a few lines?
Absolutely?
Okay. We have "AI, Ain't I a Woman?", which is literally an ode to black women, so I feel that will probably be the best poem for this podcast. And so, to give you a bit of context, I wrote "AI, Ain't I a Woman?" as a grad student, and it was inspired by Sojourner Truth's Akron, Ohio speech "Ain't I a Woman?", which was really pushing the women's rights movement at the time to think about intersectionality. It's like, great, all these rights for white women, what about the rest of us, right? And that was very informative to the research that I did at MIT that showed some of these huge biases from Amazon, Microsoft, IBM, and so forth. And so let's get into it. AI, ain't I a woman? My heart smiles as I bask in their legacies, knowing their lives have altered many destinies. In her eyes, I see my mother's poise. In her face, I glimpse my auntie's grace. In this case of deja vu, a nineteenth century question comes into view in a time when Sojourner Truth asked, ain't I a woman? Today, we pose this question to new powers making bets on artificial intelligence, hope towers. The Amazonians peek through windows blocking deep blues as faces increment scars, old burns, new urns, collecting data chronicling our past, often forgetting to deal with gender, race, and class. Again I ask, ain't I a woman? Face by face, the answers seem uncertain. Young and old, proud icons are dismissed. Can machines ever see my queens as I view them? Can machines ever see our grandmothers as we knew them? Ida B. Wells, data science pioneer, hanging facts, stacking stats on the lynching of humanity, teaching truths hidden in data, each entry and omission a person worthy of respect. Shirley Chisholm, unbought and unbossed, the first black congresswoman, but not the first to be misunderstood by machines well versed in data-driven mistakes. Michelle Obama, unabashed and unafraid to wear her crown of history, yet her crown seems a mystery to systems unsure of her hair. A wig, a bouffant, a toupee? Maybe not. Are there no words for our braids and our locks? Does sunny skin and relaxed hair make Oprah the first lady? Even for her face well known, some algorithms fault her, echoing sentiments that strong women are men. We laugh, celebrating the successes of our sisters with Serena smiles. No label is worthy of our beauty.
Oh my gosh, what a beautiful way to end this conversation. Thank you so much, doctor Joy, for spending some time with us. I really, really appreciate it.
And thank you, doctor Joy.
I'm so glad doctor Joy was able to join me for this conversation. To learn more about her and her work, or to grab a copy of her book, be sure to visit our show notes at therapyforblackgirls.com/session386, and don't forget to text this episode to two of your girls right now and tell them to check it out. If you're looking for a therapist in your area, visit our therapist directory at therapyforblackgirls.com/directory. And if you want to continue digging into this topic or just be in community with other sisters, come on over and join us in the Sister Circle. It's our cozy corner of the Internet designed just for black women. You can join us at community.therapyforblackgirls.com. This episode was produced by Ellice Ellis, Zariah Taylor, and Tyree Rush. Editing was done by Dennison Bradford. Thank y'all so much for joining me again this week. I look forward to continuing this conversation with you all real soon. Take good care.