"If you open a hole on the internet," UCLA professor Sarah T. Roberts tells us, "it gets filled with sh*t."
The tragic death of Megan Meier was a turning point for MySpace. As the first social media company to operate at massive scale, MySpace, along with its users, was forced to grapple with the consequences of that scale.
In this episode, Joanne is joined by Thomas Kadri of the University of Georgia School of Law to discuss how our legal system was ill-equipped to deal with the social media era. UCLA professor and author Sarah T. Roberts chronicles the early days of content moderation. And Bridget Todd and Scott Zakarin are back to talk about bullying in the MySpace era.
This is an iHeart original.
What's interesting, though, about the Drew case, of course, is that I think it's a very sympathetic impulse to want to use some sort of law here to convey that what Lori Drew and her daughter did in this case, not only did it have a tragic ending, but the act itself was quite cruel.
I'm Joanne McNeil, and this is Main Accounts: The Story of MySpace, Episode Seven: MySafety.

Users regularly experienced online harassment on MySpace, hateful and hurtful comments, even bullying, especially users who were very young. Yet at the time, with social networking still new and regarded as an experiment, these experiences often weren't taken seriously. And to those who did experience online harassment in the context of MySpace at the time, in the aughts, it was so unexpected or bizarre that they often didn't know how to process it. Take, for example, Roommates, the web series MySpace produced from two thousand and seven to two thousand and eight. The show was available to watch on MySpace. It was also promoted through the social network. It was critical for driving engagement. The team set up accounts for individual characters on the show; audiences could interact with the characters as if they were real people. It was exciting, but as Roommates creator Scott Zakarin soon found out, there were drawbacks to being this publicly available. Some of the comments made about the characters, played by real actors, were really cutting.
Yeah, I mean, it's happened a few times. I mean, in our early shows, when people would see the photos and people would comment on their looks, any actor, male, female, it was really painful for them. At first, we removed the garden-variety hater, or we warned them. We only would go to MySpace if there was something that, you know, was beyond what we should be doing. So you can get a sense of, you know, okay, this guy can be salvaged or ignored, but somebody else, if they start to get, you know, too sexual, or, you know, something that goes beyond our standards and practices, that's when we would kick it up to MySpace.
The line between stan and stalker is kind of a thin one. Like, the people who, like, really get enthusiastic about a celebrity, it can really easily tip the other way.
I think people were more nervous about doing that back then, because they didn't know what it would have caused. Now, tearing people apart is common interactivity.
Most users on MySpace did not have access to MySpace the company the way the Roommates cast and crew did. They could not make special requests for the company to intervene. Typically, it was expected back then that if you were harassed online, the only thing to do was ignore it: don't feed the trolls. This is something Bridget Todd, host of Beef and There Are No Girls on the Internet, commented on in our interview.
I think for far too long, people in positions of power, or like parents and educators and administrators, people who are in positions where they're meant to help young people understand the world around them, we have been telling them this complete fiction that what happens on the Internet is just the Internet, and like your real life is your real life, and like who cares what they're saying, It's just words on a screen. And so when young people are facing this kind of thing, there's oftentimes not a lot of adults in the room who can really understand what's happening and like talk to them about it in a real way. That's going to be meaningfully helpful when we're talking about things like online harassment. It is really important to keep that in mind. And I think that we're seeing that that attitude sort of slowly change, but I think it's changing far too slowly to actually, you know, deal with the problem at any kind of scale.
The tragic death of Megan Meier, which resulted in major news coverage ongoing for years following the Lori Drew trial, was a reckoning. The thinking until then, a mix of "On the Internet, nobody knows you're a dog," as a famous New Yorker cartoon caption from nineteen ninety three put it, combined with a moral panic over youth online that we addressed in earlier episodes, belied how sometimes the real threat of online harassment is more prosaic: your own neighbor could create indescribable pain for your family. Lori Drew, the mother of Megan Meier's classmate and a neighbor of the Meiers, became virtually universally scorned when her role in the bullying of Megan became public. But while most people familiar with the case believed that her behavior toward Megan was cruel, there was no clearly drawn path to accountability. There were no laws that squarely prohibited someone from behaving the way that Lori Drew had on MySpace. She was taken to court in the case United States v. Drew and faced felony computer hacking charges. Drew was convicted in two thousand and eight of misdemeanor offenses of unauthorized access to MySpace. The conviction was overturned in two thousand and nine, and Drew was fully acquitted.
Now to a new development in the case of the Missouri teenager who took her own life after she was harassed on the Internet. Her family wanted the mother allegedly behind the hoax to be prosecuted, but authorities hit a roadblock. Now there has been a surprising development.
What kind of reactions to the Drew case do you get from your students when you teach this case?
Specifically, I would say that by the time in the semester when I'm introducing the students to this case, they've already been quite outraged by the scope of the CFAA in certain other cases, so I think many of them are primed to think that the prosecutor here was really overreaching. They already have Aaron Swartz's story in their minds, as somebody who was mass-downloading academic articles from JSTOR and was prosecuted, and then ultimately, tragically, took his own life after being charged under the CFAA. They have him in their minds. They have other types of cases in their minds, where, you know, LinkedIn is trying to stop another company from scraping its website, the public profiles that people have posted on LinkedIn, and they're trying to use the CFAA for that. And so by the time they get to the Drew case, I think many of them are already a little skeptical of the ways in which especially sort of corporate actors can use a law like the CFAA to exert forms of power and control over websites that they create.
That's Thomas Kadri. He teaches at the University of Georgia School of Law, and he's an affiliated researcher with the Cornell Clinic to End Tech Abuse. As he just mentioned, United States v. Lori Drew is a case that he brings up in his classes.
What's interesting about the Drew case, though, of course, is that I think it's a very sympathetic impulse to want to use some sort of law here to convey that what Lori Drew and her daughter did in this case, not only did it have a tragic ending, but the act itself was quite cruel. And so I think there's at least, you know, there's this perception that what happened was, at the very least, uncivil and mean-spirited. And so I think the students do have a tough time squaring their opposition to this very far-reaching federal law, which probably makes all of them a cybercriminal in my classroom, in every class, when they drift off for a second and they go on a website and they, you know, violate a term of service without even realizing it.
After the break, we'll learn more about the CFAA and find out why this and other laws were too slippery to hold Lori Drew to account.
Essentially, the CFAA, the Computer Fraud and Abuse Act, is a federal criminal law that makes it a crime to access a computer without authorization or to exceed your authorized access on a computer. Now, what all of those magic words mean has been the subject of now decades of scholarly debate and different court decisions. Many different interpretations have been kind of put forward, and different cases have kind of tested the boundaries of what those key terms might mean, especially the idea of what it means to access a computer without authorization or to exceed your authorized access. That concept of unauthorized access is really at the heart of a lot of these disputes. Colloquially, the CFAA is talked about as the federal hacking law, but of course even what constitutes hacking is a kind of disputed concept, and some of the confusion surrounding the interpretation of the statute reflects some of those kind of colloquial tensions as well.
One thing that I was just curious about, because it is a law that, unless I'm mistaken, has been on the books for decades now. Has the perception changed over the decades because of changes in the technology? What does it mean to have a law that's that old?

Absolutely, yeah.
So one of the interesting things about the CFAA is that I think perceptions surrounding the law have changed, and the law itself has also changed. It's been amended by Congress several times since it was initially passed in the nineteen eighties. And so we've got these kind of two parallel changes that are going on, and they're not always synced up. So sometimes, I would say, public perceptions surrounding the law have changed in response to a case like the Lori Drew case, or a situation like Aaron Swartz, one of the founders of Reddit, who was famously charged under the CFAA. We've had other high-profile cases more recently than those two, but there are these kind of moments where there's increased public consciousness surrounding the CFAA, usually paired with opposition to how it's being enforced or interpreted.
The origin of the CFAA, possibly apocryphal, is that President Reagan watched WarGames at Camp David. It's that Matthew Broderick movie from nineteen eighty three. You know it: "The only winning move is not to play." Yeah, that one. Reagan was allegedly so disturbed by the hacking depicted in this movie that he whipped up legislation that extremely broadly outlawed computer access without authorization. In the summer of twenty twenty one, there was a Supreme Court case which narrowed the scope of the CFAA. The way that the CFAA was applied in the Lori Drew trial would probably now be considered obsolete. But also, I am very, very obviously not a lawyer, so I'll let Thomas Kadri take it from here.
I still teach the case to my students because I think it really helps to kind of ground the stakes of what the Supreme Court was really doing in this case last summer in saying, well, there's this one way that we could read the statute that might cover all of these forms of conduct, some of which may be harmful, but maybe not harmful in the way that this statute was designed to cover, and others of which may not be harmful at all, maybe really innocuous. And of course, one of the main things at issue in the Drew case, and in many other cases involving the CFAA, is whether the violation of terms of service or some other form of contractual agreement or written policy, whether violating that kind of a restriction, one that isn't bypassing some sort of technical restraint on access to a computer but is really doing something that you're not supposed to be doing under some sort of rule that's written down, or that's conveyed to you in some way, or maybe that's implied. Those were the cases that always, I think, gave judges some of the greatest discomfort in saying that the CFAA should apply there. But some courts and some judges, including the judge in the Lori Drew case, felt compelled to reach that conclusion, in part because the statute says that you're committing a crime when you access a computer "without authorization," but it gave no specialized definition of what "without authorization" should mean. And so often, when judges are faced with interpreting a law like that, they just look to the ordinary meanings, and we know what "without authorization" means. It's synonymous with things like "without permission," "not allowed." And so if something is forbidden in a written policy and you go ahead and do it anyway, it sort of makes sense to talk about that as lacking authorization of some kind, lacking permission. And so judges like the judge in the Lori Drew case felt compelled to say, well, these actions, because Lori Drew and her co-conspirators, as the court puts it, co-conspirators here being her daughter and her eighteen-year-old employee, the mother's eighteen-year-old employee, they violated various terms of service that MySpace had laid out, and so they were acting without authorization, and therefore they violated the law. And so I still teach it to my students because it's a fascinating case. I think a lot of people, my students included, have some sympathy with the idea that operators of websites should be able to set certain rules, and if those get violated, it's not just a question of, oh, you breached the contract, but you did something that violated a criminal law, and so we should be able to use the criminal law to get at those kinds of permissionless uses of computers. But I think the Drew case kind of pushes some of those impulses: to say, well, if this is allowed, then this is the extent that you could go to.

One question I have about this case, going back to this moment in time, two thousand and seven, two thousand and eight: I think a lot of reactions to the story of Megan Meier were, something terrible happened, and, you know, what does justice look like in this situation? What was on the books at all? And if the CFAA is an imperfect piece of legislation, was there anything at the time that could have been more suitable? Or, in the years since then, have there been developments for enforcement in cases of what I would call extreme online harassment, of this nature or any nature?
At the time, there certainly weren't as many laws that would apply as there are now, and that's one of the reasons why, I would imagine, federal prosecutors reached for a law like the CFAA, which, given its interpretation at the time, was something of a catchall, or at least it could help fill in the gaps where some other laws wouldn't apply. And so you might have had certain laws that prohibited forms of harassment but that didn't yet apply to Internet-based harassment. Or you might have had claims that could be brought for intentional infliction of emotional distress. That's a tort that had existed for a long time, but the government can't bring that as a criminal charge. That's a private lawsuit that needs to exist between people. So, you know, Megan Meier's parents, for example, might have been able to sue for intentional infliction of emotional distress, though actually there are all sorts of very complicated reasons, which we won't get into, why it's difficult for parents to sue when something like that happens to a child. But anyway, the bigger point is that, yes, that's one of the reasons why prosecutors reached for a law like the CFAA: it gives a legal basis for our sense of moral outrage that something bad happened and somebody needs to be held responsible. Since the Drew decision, which ultimately, remember, right, even though she was convicted, her conviction was ultimately overturned because of a constitutional challenge that she raised to her conviction. Since then, there have been a whole slew of cyberbullying and harassment and stalking statutes that have been passed in many states across the country, including one in Missouri, the home state where these events mainly took place. Missouri passed Megan's Law, which was a statute designed to get at various forms of cyberbullying and cyberharassment, and the terms of the statute certainly seem much closer to what happened here, right: it's actions that are taken, you know, with the purpose to frighten and intimidate and cause emotional distress. There are different provisions that apply depending on whether the perpetrator is a minor or an adult. So laws like this have since been passed, but they've also been subject to a lot of constitutional challenges as well, usually First Amendment challenges based on the freedom of speech. So courts tend to get, let's just say, a little more skittish when laws make it illegal to communicate with people with the intent to annoy, with the intent to, yeah, pester. You know, if it's with an intent to threaten, if it's with an intent to harass, generally, you know, courts are a little less likely to strike down those laws as unconstitutional. But the story of cyberbullying laws across the country has been one of a few successes and many failures in terms of those laws standing, the, sort of, well, being upheld by courts when they're challenged.

Yeah, that was a really helpful explanation there. It raised a question that I have now, which is: it seems like with online harassment legislation you have different stakeholders, the users, the people, the businesses, the executives of a platform, the victims of cyberbullying or online harassment. How do you negotiate, I mean, again, I imagine it's a very imperfect process, but how do you negotiate that balance between First Amendment rights and accountability? Who is responsible for what? And how has this process evolved over the past couple of decades?
Yeah, it's constantly evolving. It is by no means settled. And I'll add one additional complication, not that we need any more. We've got enough to be getting along with. But law is only one possible regulatory tool that can be used here to address some of these harmful forms of conduct, communication, interaction, right, that are conducted through technology. Technology itself is another regulatory force here. Technology can enable and constrain different forms of behavior in ways that are certainly not a direct analog to law, that can be complementary to law and sometimes not so complementary to law. And there are other regulatory forces as well, right? There can be certain market constraints on some of these forms of behavior, and social norms are working in the background as well, to, again, push certain types of behavior, enable it, or constrain it. But technology in particular is really important to think and talk about in this context, because your question asked about, you know, how do we navigate, for example, First Amendment rights to free speech, or just the political value of freedom of expression, right, with laws and other forms of regulation, including technology, that might seek to regulate this kind of behavior. And this is a constant process of evolution, I would say, that we see play out, right, everything from when a former president of the United States gets kicked off Twitter, whether that is a First Amendment issue, a free speech issue, whether those things are one and the same, right, they aren't, but they often get lumped together. The question of online harassment by cyber mobs, doxing, nonconsensual distribution of intimate images, other forms of kind of networked harassment, the values that are at stake in each of those different situations, the types of regulations that might be appropriate to deal with them, the constitutional issues at stake. In some ways, I like to think that, you know, they are all deserving of very distinct treatment, because they do often raise very different questions of how to try and mitigate or address some of those harms, and yet at the same time, they're all intimately connected. The types of lines that you draw in one context will inevitably at least have to be reckoned with in the other contexts, even if they don't directly apply. And so if we want Twitter to be able to, or, you know, let's use MySpace, right, since it is still around, if we want MySpace nowadays to be able to address certain forms of networked harassment or targeted threats that are, you know, communicated through its platform, that has a certain vision of the ability of those platforms to kind of govern and police the spaces that they've created online. That might also apply in the context of trying to deplatform somebody, or remove somebody's ability to engage in these kinds of expression, and how they go about doing that, right? Sometimes it's going to be a question of law. Sometimes it's going to be a question of other forms of regulation that they might put in place. But it's all pretty connected in this ecosystem.
Social media at scale is difficult to govern. Any proposed law that might aim to rid social networks of online harassment and prevent future Lori Drews could backfire in countless ways. And while online harassment is real, what you believe constitutes online harassment depends a lot on who you are.

When I write in this area, and when I teach these issues, I can't just teach law. I have to teach technology as well. I have to teach, to some extent, social norms, because they're all interacting in this space. There are occasionally laws that are going to be a major motivating factor, but often there are going to be other forces that are actually pushing some of the key protagonists in this space to act in certain ways, to remove certain types of content, to protect people from certain types of harm.
Well, it seemed like no one on the corporate side of MySpace cared what the users were doing. In fact, there were workers at MySpace who were tasked with removing objectionable content from the social network. More on the MySpace content moderators after the break.

MySpace seemed like a free-for-all, a place where you could post or upload anything, and some took advantage of the lax rules. There were users who uploaded incredibly vile content.
I once interviewed, very early on in my research time, a person who had been an executive at a digital media company, and that person said to me very, very wryly and sagely, "If you open a hole on the Internet, it gets filled with shit." And that was, like, you know, like, mic drop. So MySpace opened this hole in the Internet for people to fill in with photos and imagery. You know, there were also, like, you know, kind of crude computer graphics that were part of it too. So you can imagine how quickly swastikas would have shown up, or, you know what I mean, just whatever crappy thing people could do, they took the opportunity to do it. You know, it reminds me of, like, when there's a fresh piece of sidewalk cement that they've put in, you know, and they put some barriers around it when they put it down, and then in no time people are in there writing on it and putting their face in it, like Michael Scott in The Office, and just doing stuff to it, you know. And that's what this is like. It was like this blank slate, and then...
That's Sarah T. Roberts, author of Behind the Screen and a professor at UCLA. MySpace was the first social media company at massive scale, which meant that things like kicking people off the platform for, say, posting swastikas was not an easy process.
There's no size of labor force that you could employ that could have even gotten to all the material on MySpace, you know, much less on some of the platforms that are out there now that are just exponential in comparison.
A lot of Sarah's research and writing focuses on content moderation. Big social media platforms like Facebook and YouTube employ massive teams of workers, usually contract workers, to remove violent or sexual content that users have uploaded. The worst thing you can imagine? Well, someone has probably tried to get that up on a social media platform at some point.
Almost every major platform thinks of content moderation a little late. Like, they think of it because some crisis has precipitated a new conversation within the firm. Like, uh oh, we actually have to have some policies. Or, oh my god, I didn't think someone would nefariously do this, but here are a bunch of people doing this thing with our tooling or our systems, and not only is that distasteful to us, but maybe it's illegal, you know, in the case of circulating child sexual abuse material, which people do all the time, all the time, on social media, to this day. And it is illegal, right?
The thing about content moderation on social media is that it's treated as a trade secret: the practices, the specifics, who does what and where and exactly how. There's no consortium of social media companies getting together and being like, hey, we all have the same problem.
MySpace, as the first social media company at a massive scale and one that was largely image based, was the first social network to grapple with the consequences of scale.
There were, like, a series of maybe moral and ethical responsibilities that MySpace felt, and then there were also maybe some potential legal ones that kind of came into play, and so all of that necessitated some gatekeeping of some sort. But the firms have a hard time thinking about that kind of activity, gatekeeping, taking material down, enforcing rules, thinking about what can't be done. They have a hard time thinking about that as revenue-generating.
If you were a user who encountered some of this vile stuff, maybe someone left a testimonial with a picture of dead animals, it wasn't clear how to flag this material, and it wasn't clear what would happen if you did.
They often would have had no idea where it was going, and I think in many cases probably just presumed, oh, I'm sending it off to the computer, whatever that meant, when in fact, you know, they were sending it off to people. But they were doing some labor on the front end of triaging that material already. So, like, maybe at one point you had to go through a series of menus to find where you would report. Now the convention is usually to have that much more available to users, like, those buttons, the red button or something: I've got to report this. But, you know, it was a process of, like, flow-chart logic, where you would find this place to report, and then this is the macro category of why it's a problem: because it's violent, or because it's inappropriate sexual material, or because it's some other kind of thing.
I mean, I would argue that making a better, safer, more comfortable place for people ultimately will generate revenue, but that's kind of a longitudinal argument for companies that want quarterly returns, so it's hard to make that case. So what happened was, in the case of MySpace, you know, they had to build up a content moderation department, which meant they also had to create a bunch of policies simultaneously, because the policies governed the operations of content moderation.
Executives often rationalize these haphazard content moderation workforces, with their haphazard workflows, by assuming it will all get automated eventually. And for those who work as content moderators, the experience can be traumatizing. Sarah talked to moderators from multiple platforms for her book, including someone who moderated MySpace content.
She said, well, for the three years after I worked at MySpace, if I met someone, I wouldn't shake their hand. So I said, can you tell me more about what you're saying with that? She said, well, I know how people are, and people are nasty and they're gross, and I don't want to touch a stranger's hand, because I know the stuff they do. So this is kind of how she went forward in the world after that experience. She told me that she had coworkers, or employees, that she was worried about unleashing back into the world because of the harm that they underwent in what they were doing and seeing. You know, she told me, maybe some of these people started out a little bit weird, and this job just, you know, took them to the mat psychologically. And she said, you know, she often worried about what became of those people. Where did they end up?
And in case you're wondering, automating content moderation would be extremely difficult to do. In fact, many of the much-heralded AI applications depend on this kind of labor too. A recent Time magazine feature revealed that workers in Kenya moderate and filter ChatGPT content for less than two dollars an hour.
Does it have to be that way?

I guess companies think so, for now, and they throw a lot of resources at it, you know, again, computationally. But there's no getting away from the human.
The human ability to discern, that is so uniquely human. To take all of these inputs, symbols, language, cultural meaning, you know, the specificities of a particular region in Mexico, for example, and the political situation in that place, and having someone who knows that area intimately and can respond to it, like, that is nuanced, and it's so uniquely human in some ways. It's like that discernment and judgment. Like, yes, if, you know, there's too much boob in the photo, okay, a computer can make a decision about that, yes. But when we bring in all of these elements, language, culture, symbols, politics, you know, like, regional politics in some cases, very specific religion, all of these elements that are so complex that people spend entire careers studying them, or, you know, whatever, and then ask very lowly paid people in a completely different part of the world to decide about it, or we try to create algorithms that can imitate those decisions, you know, things fall through the cracks, and it's a really hard, hard problem to solve under the current business model of social media, which says: post it and we'll sort it out later.
MySpace still grapples with content moderation, because MySpace still exists. It is still around. It exists as a company; it exists as a platform. It collapsed, certainly, and no one I know has used it in a decade, but people still work there, and people post on it. What is MySpace now, in twenty twenty three? In the next episode, we're going to explore what's left of it.

Thanks for listening to Main Accounts: The Story of MySpace, an iHeart original podcast. Main Accounts: The Story of MySpace is written and hosted by me, Joanne McNeil. Editing and sound design by Mike Coscarelli and Mary Do. Original music by Elise McCoy. Mixing and mastering by Josh Fisher. Research and fact-checking by Austin Thompson, Jocelyn Sears, and Marissa Brown. Show logo by Lucy Quintanilla. Special thanks to Ryan Murdoch, Grace Views, and Heath Frasier. Our associate producer is Lauren Phillip, our senior producer is Mike Coscarelli, and our executive producer is Jason English. If you're enjoying the show, leave us a rating and review on your favorite podcast platform. Sadly, my MySpace page is no longer around, but you can find me on Twitter at jomc. Let us hear your MySpace story, and check out my book, Lurking. Main Accounts: The Story of MySpace is a production of iHeart Podcasts.