We've heard that social media can warp perceptions about our bodies, dieting and appearance. So what happens when TikTok invents a new type of algorithm – one that's incredibly powerful at tapping into our inner desires and anxieties? Join us for a new series of PsychTalks and hear Dr Scott Griffiths chat about his astonishing study on how TikTok can harm those vulnerable to eating disorders.
Discover more about Scott’s research and check out the Butterfly Foundation.
Featuring Dr Scott Griffiths, Psychology Researcher at the University of Melbourne’s School of Psychological Sciences. Scott specialises in body image and physical appearance.
Episode credits: Hosted by Associate Professor Cassie Hayward and Professor Nick Haslam. Produced by Carly Godden and Louise Sheedy. Mixed by David Calf. Music by Chris Falk.
This podcast was made on the lands of the Wurundjeri people, the Woiwurrung and the Boonwurrung. We'd like to pay respects to their Elders, past, present and emerging.
From the Melbourne School of Psychological Sciences at the University of Melbourne. This is PsychTalks.
Welcome to a new series of PsychTalks featuring your new hosts. I'm Cassie Hayward, an associate professor here at the University of Melbourne School of Psychological Sciences. My specialty is in applied psychology and behaviour change, and if I seem familiar, you might have heard me last series talking about why fear based ads can be so effective at changing our behaviour.
And I'm Professor Nick Haslam, also here at the Melbourne School of Psychological Sciences. I'm a social psychologist, and I'm interested in areas like stigma, psychiatric classification and mental health, and we're switching things up a little from last series in this series.
Each episode we chat to a psychology expert about some of the amazing findings they've uncovered in recent research, and then hopefully help you, our listeners, apply these findings to your own lives.
First up, we're exploring the world of social media and just how well the algorithms that power it know us and serve up the content, perhaps secretly, that we're looking for, which can be great if we're in a positive headspace. But what if we're not feeling so great about ourselves, perhaps struggling with issues around our body or appearance?
Yes, so we're delving into Dr Scott Griffiths' most recent study, and what I took away from it was just how powerful these new-generation social media algorithms are. So on TikTok, for example, it's almost like it knows you better than you know yourself, even when you're not consciously choosing certain videos. And that can have pretty serious consequences for people who are at risk of eating disorders.
Exactly. Then, how do you go about policing this harmful content? Well, our colleague Scott is here to give us some ideas. He's a psychology researcher focusing on body image, and he leads the Physical Appearance Research Team in the School of Psychological Sciences.
Thanks for joining us, Scott. Can you tell us a bit more about how the social media landscape today has an impact on people with eating disorders, or with a risk of developing an eating disorder?
Context around what social media used to be like will be really helpful here. So if I talk to my parents and I ask them what social media is, they're going to tell me that it's Facebook. It's the updates from the friends and family that they follow in their timeline, and the order in which those posts are presented is roughly chronological. The algorithms that powered that Facebook news feed, and that powered all of social media a decade to 15 years ago, were quite simple, to the point where the term 'social media algorithm' probably wasn't something anyone was talking about or thinking about.
In 2023, with TikTok in particular and, to a lesser extent, Instagram, these are algorithmically-driven, hyper-personalised content delivery machines.
When you are on TikTok, you are not seeing content predominantly from people that you follow or are in any sort of family network. You probably don't follow family members on TikTok. It's a very different beast, and it's that that has really changed the game for vulnerable people on social media.
So that's really helpful for old people like me, Scott, because I'm probably more like your mum and dad in this. So just backtracking a little bit to get more detail on the different social media platforms. Obviously, social media is a bit of a catch-all. Can you give us a more detailed rundown of precisely how Facebook versus Instagram versus TikTok works? And then we can maybe go more into the specifics of the TikTok algorithm.
Great.
So your experience on Facebook is you choosing who you're going to follow. They will largely be family and friends, and then your timeline or news feed, the content that it delivers to you, will largely be whatever they are posting, maybe pages that you follow as well. You might get advertisements from the Facebook Marketplace add-on that runs in parallel with Facebook.
And then Instagram is kind of the millennials' social media platform,
similar in the sense that you have a lot of content from people you follow. But it also introduced this thing called the Discover Page, which was entirely algorithmically driven. And this would use a lot of volitional actions that you might take on the platform, so content that you like, for example, and to a lesser extent, non-volitional actions.
In a nutshell, what's the distinction between volitional and non-volitional?
Volitional is the things that you do consciously, like liking a video or following someone, and non-volitional is the more unconscious things that you do, like the amount of time you spend watching a video or how long it takes for you to flick away from a video that you don't want to see. And then enter TikTok, which is the Zoomers' and Generation Alpha's platform of choice, and now
the entirety of your use there is algorithmically driven. There are no friends and family that you follow before you see your first video; it arrives at you, and what you see is less determined by your volitional actions. So you might think, well, OK, if I don't wanna see content of type X, I just won't like that content or follow people who produce it.
And social media of old would have been quite sensitive to that. That is not the case for newer platforms. They're responding to non volitional actions like the time you spend watching something.
And if engagement is just the time you spend watching it, it no longer has an affective valence. You can get more content that has made you anxious if you've spent more time watching it, even if you didn't like it. That's quite a substantial difference.
Hm. So non volitional in this sense just means stuff that you're not deliberately controlling.
That's it.
So it's still choice at some level, but you're not aware of how that choice is being manipulated or used.
Yeah, that's it. And in order to have control over those things like, say, time spent watching, you would need to be really critically engaged. You'd have to be this perfect, critical consumer of social media, which no one can be at all times, especially if you're using it in vulnerable moments.
Now, from one standpoint, you'd think this is marvellous. I don't need to make any choices; it'll just curate things specially suited to me. It'll know what I'm thinking, know what I care about, even without me having to make choices. That sounds wonderful. But I think what you're suggesting is there might be a kind of dark side to this, especially in relation to eating disorders. Can you clarify how that works?
Yes. So it is going to be a double-edged sword.
I really like TikTok. It's really captivating and entertaining. And the way I explain it to people is that contemporary algorithm driven social media will intensify the energy that you bring to it. If you are in a good place, then you're going to get delivered content that will interest you and that will make you laugh and will teach you things and you'll spend more time looking at that. You might also like it, follow those accounts, and the algorithm will be your best friend because it will just be this source of entertainment, you know and insight.
But if you're in a vulnerable place, it will intensify those vulnerabilities. You might spend a little more time looking at a video that makes you feel guilty about the size and shape of your body. Now you will get more of that; even if you didn't like it, you're more likely to get it. So in the context of eating disorders, where people, definitionally, are vulnerable to concerns and anxieties about their eating, their body weight, their body shape,
they are then at risk of falling into these eating disorder echo chambers that intensify over time, for which they may not have a frame of reference, because they don't know what other people's TikTok feeds look like.
And how extreme are those pro eating disorder types of content?
Pro-eating disorder content in this context is unequivocally extreme material. So a pro-eating disorder video might be something like teaching young people how to convince others that their weight loss is due to something other than their desire to be thin, so that they won't trigger any red flags or can get away with the weight loss for longer. It's the kind of content where, if you take 100 people, 99 are gonna go 'that's toxic'. It's not in this ambiguous dieting space.
And you can get there through, as you explained, these non-volitional moments of spending more time on content that might lead you down the path to that more extreme content.
Correct.
And I presume this kind of pro-eating disorder or pro-muscle content, or whatever your pathway ends up being, exists on the other social media platforms. So how is it so different with the TikTok algorithm, in terms of either what gets served to you or how that content is found by the user?
Good question. Yes, that content exists on, for example, Instagram. So TikTok and Instagram both have this type of content there. They both have efforts, which I presume they make in good faith, to censor some of the material that is hashtagged with obvious hashtags like pro-ana, which can be detected and censored.
Pro-ana means pro anorexia, right?
Yes. The distinction is that on Instagram, you again are predominantly being served content from users that you follow or like. And because that algorithm is more sensitive to volitional actions, you can put a set of changes in place, perhaps banning certain influencers or blocking them, and it becomes reasonably unlikely that you'll just come across it.
TikTok, because it's preferencing non-volitional behaviours, is always funnelling you back to that content. The arc is always there, and that's what gives it the greater potential for harm.
And I guess another side of TikTok that I think you mention in your research is the idea that you can use filters. So you can use glamour filters for your face, or skinny filters or six-pack filters for your body. How do those play in? Because that's volitional; you're choosing to use that filter. So that's a bit different to this kind of leading down an algorithmic path. But how do they play into the research that you're doing?
Great question. The research we've done has just been us getting inside the algorithm. But an entire line of research we want to do in 2024 is on exactly those beautification filters, appearance-changing filters, because they're at the point now where the sophistication is really, really impressive.
Two examples. Two filters came out on TikTok earlier this year, Bold and Glamour. And these were the first filters on social media that could update your appearance dynamically, even if you interrupt the field of view. So where your old-school filter might have required you to stay still so you could get a photo, with this one
you can record a whole video moving around, put your hand in front of your face, and you will keep whatever changes it's suggested. And it's changing the way people think about ideal bodies, because 10 to 15 years ago, if you talked to plastic surgeons, people would bring in photos of the people they wanted to look like. It might be a model: I want her nose, or I want his jaw.
Now it's becoming more common to bring in photos of yourself, just better. And it's a very challenging one, too, because you can always be a little bit more symmetrical or a little bit better. So people's idealised bodies are shifting towards themselves, filtered versions of themselves.
I think it's fascinating. We'll hopefully know more about it next year.
Is that likely to be better or worse? I mean more or less dangerous, because you could say, well, if you simply want a touched-up version of yourself, then at least that's a reason to have some level of pride. You sort of like the foundations, even if you could maybe improve around the details. Versus if I wanted to turn into Rob Lowe, again showing my generation, that's not gonna happen, right? So you could say this is more benign, if people are coming to a plastic surgeon with an enhanced version of themselves versus someone completely different.
Or does it have a downside, in that people can now actually see, in real time, dynamically, their better-looking self? Does that make them more passionate about having to change to be that self?
Great question. And we're gonna stay open-minded about it, because I think there can be pros and cons. Plastic surgeons often are having to deal with managing the expectations of clients that come to them.
And in theory, it's a lot easier to make a tweak that is based on someone's actual face than to work from them bringing in the face of someone who is not them.
In theory. But whether or not that's also making them more likely to pursue cosmetic surgery, or more fervent in that pursuit, I'm not sure.
Is it likely to be the same for facial beauty and physical shape and muscularity?
I suspect so, and these algorithms now can give you that muscularity. If you wanted to see what you would look like with a more V-shaped torso as a young man, you can do that now.
The sophistication that's out there is considerable, and I suspect that it is the variety of offerings, and how niche you can get, that will encourage that micromanagement of people's appearance even more than what's possible now.
Because if you wanted, for example, to see just what your face would look like if your ears were a little bit different, you can look for that specific filter now. I think there's an endless technical ability to critique one's own appearance.
It's fascinating and terrifying. I want to change tack a little bit. About 20 years ago, I'm showing my age now, Sam Gosling did research looking at how you could give a pretty good personality assessment of someone if you just looked at their office space or their living space or their bedroom.
Is the TikTok For You Page the kind of new and improved, extreme version of this kind of personality assessment? How much could you tell about someone just by looking at their For You Page?
Plenty.
A large part of the reason that we embarked on this was because of early media reports in 2018, 2019 of people being floored by the ability of this algorithm to show them things about themselves that they knew, but then also reveal things about them that, it's not that they didn't know it, but they certainly weren't at the point of speaking about it yet.
And those reports just kept cropping up. And I love social media and I've been using it all the way back since MySpace. TikTok's algorithm was a leap beyond whatever had come before it. It made Instagram's Discover algorithm look pretty weak.
The difference, surely, is that Sam Gosling in these old studies was looking at things people had chosen. So if you put photos of your kids or your family or loved ones on your desk in your office, it's a sign that you're an extrovert, and if you have a lot of nature shots, you're an introvert. Or take your bedroom, he also did bedrooms: if you've got
messy stuff everywhere, you're probably not so conscientious, and if everything is very neatly organised, you are conscientious.
It's all those volitional acts. But I guess what TikTok's algorithm is doing is picking up on all these non-volitional things, which say an awful lot about you.
Yeah, and I wanted to put that idea to the test a bit. So in our study, which I'm sure we'll talk about in more detail in just a second, we looked at how successful we were at predicting whether or not someone had an eating disorder, which we knew about in advance, based on their TikTok data. We could use either the TikTok algorithm and its preferencing, which is all of the non-volitional stuff, or just the volitional stuff. So we pitted what users are liking up against what they're spending time watching, among a set of other signals.
And in terms of their predictive accuracy, it's the non-volitional signals that are significantly more accurate than the volitional ones. What you like is less important, from a predictive standpoint, than just what you watch. So I think part of why TikTok's been successful, whether it intended it or not, is because of that shift.
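To make that comparison concrete, here's a minimal sketch of pitting volitional features (likes per content bucket) against non-volitional features (watch time per bucket) as predictors of a known group label. This is not the study's actual analysis: the feature buckets, the synthetic data and the logistic regression model are all assumptions for illustration; only the idea of comparing cross-validated accuracy across the two feature sets follows what Scott describes.

```python
# Minimal sketch, not the study's analysis: compare how well "volitional"
# features (likes) versus "non-volitional" features (watch time) predict a
# known eating-disorder group label. All data here is synthetic, so the
# printed accuracies are meaningless; only the comparison scaffold matters.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_users = 200

# Hypothetical per-user features, aggregated over four content buckets
# (e.g. dieting, appearance, fitness, pro-ED).
volitional = rng.poisson(3, size=(n_users, 4)).astype(float)   # likes per bucket
non_volitional = rng.gamma(2.0, 5.0, size=(n_users, 4))        # minutes watched per bucket
labels = rng.integers(0, 2, size=n_users)                       # 1 = eating disorder group

for name, X in [("volitional (likes)", volitional),
                ("non-volitional (watch time)", non_volitional)]:
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5).mean()
    print(f"{name}: {acc:.2f} cross-validated accuracy")
```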
Well, I mean, that means TikTok is sort of a behaviourist, to get all theoretical here. It's picking up what you actually do, which may be a truer portrayal at some level of who you are than the deliberate choices you make. And that's quite radical. And the fact that it's so powerful is maybe because it's capturing that sort of behaviour, represented through, as you say, how long you watch and stuff like that. But I think you agree. So let's go on to your study. Tell us about it.
OK. So 18 months ago, we saw the writing on the wall in terms of TikTok and its ascendancy to become the cultural driver of youth culture, which I think is now the case. Just as Millennials had Instagram, I think Gen Z and Gen Alpha have TikTok, and it's likely it will be there for, I would be comfortable saying, at least five years.
So we wanted a study that would not, as was being done in the age of Instagram, simply link using social media with a consequence like eating disorders, because that's reasonably straightforward to do. I can give you a self-report questionnaire and you say you use it all the time, you do an eating disorders questionnaire, and now we can say it's positively correlated.
So what we did is we used the Australian Privacy Act, which has a principle in it called APP 12. That means that TikTok in Australia is legally obliged to provide users their data if it's requested.
So with our users' consent, two groups, folks with eating disorders and healthy controls, we had them request their data from TikTok. Now, TikTok is obliged to provide you that data, but they're not obliged to provide it in a usable format. So we made sense of it and collated it all together into a master data file. But at that point it's a bunch of video URLs; it doesn't say much to you.
So we created an algorithm that would take these hundreds of thousands of TikTok videos and then scrape TikTok's application programming interface, or API. That is what allowed us to pull out all of the metadata: hashtags, whether or not someone had liked the content, if they had saved the content.
And over time we were progressively able to build out more and more of what we could see of the user experience. And now, as of this point in 2023, if you give us your consent, we can see every video the TikTok algorithm has ever sent you, everything you've ever liked, whether you've saved it, from the very first day, no matter how many years ago. So it's this fulsome data capture
that gives us a complete story of how your TikTok algorithm has unfolded up to 2023. And in terms of clinical utility, that's useful.
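As a rough illustration of that pipeline, here's a minimal sketch of collating a requested TikTok data export and bucketing the served videos by hashtag. The export field names, the fetch_metadata helper and the dieting hashtag bucket are all assumptions for illustration, not the team's actual code and not TikTok's documented API.

```python
# Illustrative sketch only -- not the study's pipeline. The export layout
# (keys like "Video Browsing History") and fetch_metadata() are assumptions;
# TikTok's data export format and API access change over time.
import json
from collections import Counter

DIET_TAGS = {"whatieatinaday", "caloriedeficit", "weightloss"}   # hypothetical bucket

def load_watch_history(path):
    """Pull (timestamp, video URL) pairs out of a requested TikTok data export."""
    with open(path) as f:
        export = json.load(f)
    items = export["Activity"]["Video Browsing History"]["VideoList"]  # assumed keys
    return [(item["Date"], item["Link"]) for item in items]

def fetch_metadata(video_url):
    """Placeholder for the enrichment step (hashtags, liked, saved).
    The real study pulled this from TikTok's API; here it just returns a stub."""
    return {"hashtags": [], "liked": False, "saved": False}

def bucket_counts(history):
    """Count how many served videos fall into the dieting bucket."""
    counts = Counter()
    for _, url in history:
        meta = fetch_metadata(url)
        if DIET_TAGS & set(meta["hashtags"]):
            counts["dieting"] += 1
        counts["total"] += 1
    return counts

if __name__ == "__main__":
    history = load_watch_history("tiktok_export.json")  # hypothetical file name
    print(bucket_counts(history))
```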
Is it just the likes and the saves and anything you've commented on in that file? Or is it also measuring that non-volitional content you mentioned before, in terms of dwell time?
Yeah, we have the dwell time as well. Even if it was just a video that appeared on your feed for half a second and you flicked away, we've got that too.
Wow. And what can you tell us about how your groups differed in that content, your eating disorder group and your healthy controls?
So the first thing we wanted to see was: are the algorithms of folks with eating disorders biased toward the kind of content that we know can intensify eating disorders?
So we created buckets of hashtags, and hashtags were how we were defining videos. We looked for things that were about dieting, things about physical appearance in general, eating and exercise, and then also the pro-eating disorder content we talked about earlier, so stuff that is unambiguously bad
but is ostensibly censored, or at least not supposed to be appearing in any significant numbers. And the algorithms of folks with eating disorders are notably more biased than those of folks without. For each and every video that the algorithm sends you, it has to make a choice about what you're going to get next.
If you have an eating disorder, you are between 350% and 550% more likely to get a dieting video. For physical appearance, it's in the order of 200% to 250%; for fitness or exercise, 150% to 250%. And then pro-eating disorder content is just a smidge over 4,000%.
A smidge over 4000%?
Four thousand per cent.
So that's just exponentially different to these, like wellness videos or healthy eating. And that's not just from their volitional actions?
Well, it was incumbent on us to see and answer the question of OK, there's all of this content that we're worried about. Is it the case that users are simply liking themselves into these echo chambers?
Because it is reasonable to think that this could be a problem with two contributors: the algorithm is not working in the favour of users, but users' volitional actions may not be working in their favour either, if they're liking this content. Well, all of our users, eating disorders or not, like this content less than all other content, significantly less. So anywhere from 30 to 50% less likely to be volitionally liking dieting videos, and so on.
And when we tracked this against eating disorder symptoms, because we wanted to see if it interacted, it only reverses for people with extremely high levels of eating disorder symptoms, and only for dieting videos.
So it is the case that once your symptoms are very intense, you are, through your volitional actions, contributing to these echo chambers developing. But in general, the evidence argues against people liking themselves into these issues. So our participants, nearly 100 young women, who it's very easy to dismiss with 'well, just don't like and follow that stuff': they're not. They're significantly less likely to be liking any of it.
So they might come across one of these videos from the algorithm serving them something that's more extreme diet or more extreme exercise. They don't actually like it, comment on it, share it, anything. They spend a little bit more time watching it, and then they're more likely to get it. And then they're more likely to get more extreme versions of that.
And you're saying that tracks along with these eating disorder symptoms.
Yeah, all of these are predictive of eating disorder symptoms over the time period where we've assessed the algorithm.
And this is more extreme than, you know, we joked before about predicting someone's personality from looking at their For You Page; you're saying you can predict if someone is likely to have an eating disorder from the content that they're being served.
Yeah, and compellingly for us, the self-report symptom measure does OK, around 75% accuracy, which is actually quite good for a complex mental health disorder. Then users' volitional actions sit at around 80 to 81%, and then the algorithm itself is up at 90%.
Oh, wow.
So are you able to make the claim or defend the claim that the algorithm is making people's symptoms worse?
I think to have that at the level of rigour that we'd like, we would need to have longitudinal data, not in the sense of the TikTok data, because that's longitudinal, but users' eating disorder symptom data. We need to know about that at more than one time point.
So what I really want to do, and what will challenge us, is to design a study that will allow us to use a technique called convergent cross mapping. You can think of it like this: you've got all of this TikTok data, millions of videos, and they're all perfectly time-stamped, so that's a perfect fabric of content.
Now for your eating disorder self-report data, say from a symptom questionnaire like the EPSI, you want to have a fabric too, maybe not as dense in terms of how many data points you have across time, but a lot more than usual. So if we had, say, 100 people and we followed them for six months and they were sent, at random, 100 self-report surveys of just their eating disorder symptoms,
then that 100 times 100 gets you to the point where you have a fabric of self-report symptom data to put against this perfectly time-stamped TikTok data, and the properties of convergent cross mapping allow you to infer cause and effect in this circumstance. And that's what you need to be able to say, credibly, that social media causes eating disorders.
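As a minimal sketch of the survey 'fabric' side of that design: 100 participants, each sent 100 randomly timed symptom surveys across roughly six months, so every self-report carries a timestamp that can sit against the time-stamped TikTok data. The dates and scheduling logic below are assumptions for illustration, and the convergent cross mapping analysis itself is not shown.

```python
# Sketch of the symptom-survey "fabric": 100 participants x 100 randomly
# timed prompts over ~6 months = 10,000 time-stamped self-reports.
# Purely illustrative; not the study's actual scheduling code.
import random
from datetime import datetime, timedelta

N_PARTICIPANTS = 100
SURVEYS_EACH = 100
STUDY_START = datetime(2024, 1, 1)   # hypothetical start date
STUDY_DAYS = 182                     # roughly six months

def schedule_surveys(seed=0):
    """Return {participant_id: sorted list of survey prompt timestamps}."""
    rng = random.Random(seed)
    schedule = {}
    for pid in range(N_PARTICIPANTS):
        times = [STUDY_START + timedelta(days=rng.uniform(0, STUDY_DAYS))
                 for _ in range(SURVEYS_EACH)]
        schedule[pid] = sorted(times)
    return schedule

if __name__ == "__main__":
    plan = schedule_surveys()
    total = sum(len(prompts) for prompts in plan.values())
    print(f"{total} survey prompts scheduled")   # 100 x 100 = 10,000
```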
And that's where it really needs to get to to satisfy all the empiricists in the room.
But it sounds like, for now, the best predictor of someone's actual eating disorder symptoms is looking at the algorithmic content that is being served to them.
Correct, which is what you would expect if that cause-and-effect story turns out to be true.
Now, you mentioned that this is with females. Have you done, or are you planning to do, a similar study that looks at, say, the kind of body dysmorphia or pro-muscle content we see in young males? Does the same pattern exist?
We're looking into that now. Our poor honours students on this project have to work very hard. We work with Grindr, a dating app that caters to sexual minority men.
And our arrangements with Grindr mean that we can slide into the DMs of every Grindr user in the country for a whole day. And we launched one of those ads last week. I'm actually gonna be in a meeting after this where my honours student tells me how we've done for recruitment. But that is exactly the idea: to see if this also tracks with a different population, a vulnerable population with similar but different clinical endpoints.
Like you said, muscle dysmorphia, steroid use, the more male oriented side of this disorder spectrum.
And I imagine you're talking about body dysmorphia, steroid users, eating disorders. But I imagine there's also a kind of less severe version of this, where it's just wellness gone a bit too far. So you could probably see the same pattern with females or males who might just be getting pushed to go to the gym a bit more or run a bit more or eat a bit less, but not to that level of a clinical eating disorder.
Do you think that same pattern would emerge in that kind of non-clinical sample?
It will be curious to see.
Yeah, I wouldn't be surprised.
I'm gonna put my mum hat on here, because a lot of what we've talked about today, I think, will scare a lot of parents about social media. And I think just banning social media is obviously not going to be the answer to helping kids. But what can parents do to help their kids use social media safely?
Maybe inoculate them against these risks that you've talked about, or help them out of a bad place if they know that their kids have kind of gotten to that level?
Well, just as a proviso, the intent is not to scare parents. Again, social media is great. Well, it can be great and it can be poor for the same person. Even for someone vulnerable, social media might be great at the start of the year, it might make things worse for them later in the year, and it might also help them come out of it.
They could find support groups.
Precisely, precisely. So you are right. You can't just outright say no to social media. It's not going to work.
Our approach is a harm minimisation one, which makes sense for really anything out there that has a lot of utility, has potential downsides, and is ubiquitous enough that banning is not practical.
What we have been arguing is that TikTok needs to take responsibility first and foremost, because if you change your algorithm so that it is more sensitive to users' non-volitional actions than their volitional ones, then you shift culpability for the harm of that algorithm from the user to you.
And the way you provide users more control over that is, simply, to provide them with control. Users have very limited control over what they see. The provision of things like a little flag, so you can report this content and see less of it, is
useful, but you would have to do that for every single video, and in our data people see tens of thousands per month. It's not reasonable. What we want is for TikTok to work on a model of volitional filters, of which one is what we're proposing, and I'll talk about that in a bit. But also to provide users insights into what their algorithm is showing them.
It's reasonably easy to visualise an algorithm. You can imagine what we call a waffle plot, where you've got an indicative 100 squares in a 10-by-10 grid, and this represents your algorithm. Every red square might be a dieting video, for example, and it can show you whether these have been increasing over time: this is the proportion of your algorithm that is dieting.
Has it increased since last year? Up arrow; if not, down. And then you can just tap it to see less of it: do you want to see less? Yes. Then it can be a filter-wide change that doesn't require you to select each and every video that comes across that you don't want to see. And a user can actually be insulated from the content that the algorithm will otherwise always be shunting them toward over time.
Like, that's a useful approach.
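For the visually minded, here's a minimal sketch of the waffle plot Scott describes: 100 squares in a 10-by-10 grid, with red squares standing in for the share of your feed that is, say, dieting content. The 18% proportion and the colours are made up for illustration.

```python
# Sketch of a 10x10 "waffle plot" of one user's algorithm, as described above.
# The proportion is invented; in practice it would come from the served-video data.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

def waffle(proportion, label):
    """Draw a 10x10 waffle where `proportion` of the squares are flagged red."""
    n_red = round(proportion * 100)
    grid = np.zeros(100)
    grid[:n_red] = 1                      # first n_red squares are the flagged content
    grid = grid.reshape(10, 10)

    fig, ax = plt.subplots(figsize=(4, 4))
    ax.imshow(grid, cmap=ListedColormap(["#e0e0e0", "#d62728"]), vmin=0, vmax=1)
    ax.set_xticks([])
    ax.set_yticks([])
    ax.set_title(f"{label}: {n_red}% of your feed")
    plt.show()

waffle(0.18, "Dieting videos")            # hypothetical proportion
```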
I should say that Cassie's mum hat is very stylish; it has lots of feathers. I mean, listeners can't see this, but it's a remarkable hat.
So this idea that harm minimisation is not just up to the consumer, but actually up to TikTok as well, sounds great. But what kind of leverage do you have with a huge company like this? What can you possibly hope to achieve?
Really smart strategic question. So TikTok has a reputational crisis at the moment, having been dragged over the coals in the US Senate hearings earlier in the year. Their provenance in China raises some unfounded, but also some founded, questions about their care for the health and well-being of users.
Unlike technology companies that they compete with, say Meta, who definitely have friends in Washington, DC, TikTok doesn't really have those friends. So they need to avoid the legislation calling for their outright banning in certain states in the US. It's even been discussed here: should TikTok just be banned in Australia? The legislation they're up against is company-ending.
So for them to get a win in terms of user well-being is a strategic imperative. So in a sense, we're taking advantage of this moment where TikTok feels more compelled to act in the interest of users' health and safety than perhaps they otherwise would, which is why the pressure needs to be multi-fronted, multifaceted.
And do you think it's likely that that will happen?
I think, insofar as the kind of filter that we're advocating for them to set up, we can, because we've positioned it as a win-win-win for everyone involved, and I actually think it is.
And I love this systems kind of approach, that we can fix the algorithm, fix the experience, so that it's not all up to the user.
But until that happens, I'm keeping my mum hat on: how does a parent help their kids navigate that experience more safely until you've got these checks and balances in place at a kind of systems level? What can we do in the meantime?
Well, education is part of it. I think a useful thing to tell kids and students, which we do, is that social media intensifies the energy you bring to it. If you're feeling good, it'll probably bring you content that makes you feel good.
If you're feeling poorly about yourself, you'll probably get stuff that intensifies those feelings. That includes anxiety as well. If you punch in 'do I have ADHD', which is an anxiety that a lot of young kids have, you will not feel better after the content you get given. It's not gonna take away the anxiety.
So, social media intensifies the energy you bring to it. If you're in a bad place, if you're feeling upset or stressed or anxious, don't get on it. Save it for when you're feeling good, which is kind of the approach for a lot of stuff that has harm minimisation as part of it, you know.
Don't use it when you're feeling bad, because you might end up feeling worse. You can apply that to all sorts of things, like drinking. And then on top of that, it would be to take advantage of the features that TikTok has, as inelegant as they are, to try to shape what you see. Don't rely on just 'oh, I didn't like that video'.
If it's a video that made you upset, say that you don't want to see more of that, because anything short of you saying 'I don't want to see more of that' is a signal for you to see more of it, since you probably spent more time looking at it by virtue of having that thought. And I don't think most people use social media like that. They very rarely will say 'I didn't like it'. They just scroll off it, but that doesn't matter anymore, unfortunately.
I love both of those points of advice. But I just want to go back to the idea of telling kids not to use social media when they feel bad, upset, whatever. I love it as a piece of advice, but I think it's really hard in practice, because, say, a kid has a really bad experience at school, gets in a fight with someone, gets left out, isn't invited to a party, has a falling out with someone.
We know from research that looks at rejection and ostracism, that after an experience like that, they are gonna want to regain that sense of belonging somehow, right? And for kids, that sense of belonging might come from going to their social media communities, and I use that term very loosely. But for them, that's where they might then get that sense of belonging back, right?
So how do we convince them that that's the wrong strategy? That these aren't real communities, that these aren't the way to get your sense of belonging back. And in fact, it's gonna make you feel worse, right?
A caveat here should probably be that if someone already has a community that makes them feel good and they want to turn to that community on TikTok when they're feeling poorly, then you're kind of bypassing the algorithm in a sense, right?
You can just go straight into interacting, because if you've got the community, presumably they might also be in your DMs. So you opening TikTok and using it could just be talking to people that you already have a nice rapport with. Or perhaps you've followed a bunch of those accounts, and maybe that's quite a significant part of your network, so it could be useful in that context.
It's more the less mindful scrolling where the algorithm is going to send you in directions that it shouldn't.
And another caveat is that sometimes there are communities that will give you a sense of community, but those communities entrench the disorder. In pro-anorexia communities, that's what you get, and being funnelled in the direction of that community is not beneficial.
I think that one's a whole new podcast: how we create and entrench mental illness. Where is this research juggernaut heading next, Scott?
We want to get this filter operational.
So what it will be is a volitional filter, and it has to be volitional for some strategic reasons, where users can turn this on. It's time-locked, so once it's on, it's on for, say, a period of one or two months, and what it does, by virtue of being a volitional filter, is give TikTok and us the licence to be very heavy-handed in what is delivered to you.
So I've pitched it as like the 'iron butterfly', kind of a play on, you know, the Iron Curtain and the Butterfly Foundation.
For those who don't know what the Butterfly Foundation is, Scott, can you just clarify that?
The Butterfly Foundation is Australia's peak eating disorders charity. And what we do is take Butterfly Foundation's excellent optics because they do great work in the community. Our team runs the technical implementation. TikTok gets to wash their hands clean of having to make this work because they are already up against, you know, claims of censorship.
But they also have no hope of doing this themselves. They're not eating disorders experts, nor would you expect them to be. And what it means is that, by our simulations, we can probably drop the amount of appearance-oriented content in general by over 90%. And if we can, as part of that, turn off beautification filters and so on.
It means that if you don't want to see this stuff, or you've had a moment of lucidity and clarity and thought, you know what, I don't think this stuff is good for me, I want to see what it's like if I don't see it, or you have an eating disorder and you're seeing a therapist, then you can shut off a giant part of what makes social media dangerous for you without taking away your access to it or the positive sides of it.
And I think we've got a very genuine shot at getting that instantiated. TikTok, in principle, are doing it.
That sounds amazing, that it actually could happen. And in the near future?
Yeah, but, expectation management: I always worry that the whole apparatus will fall apart. But I've had that worry for 18 months, and so.
But as you say, I think they are motivated to do this, if for no other reason than it might stop them getting banned in certain places.
Yeah, it also requires reasonably little of them beyond the technical implementation. And if it doesn't work, it's also not on them, which is handy from a strategic standpoint. It's a risk for Butterfly and ourselves, but that's the reality of strategy at that level.
And the back end of this, we've been calling it a filter, but it's different to the filters that you use on TikTok. It's your tool that will prevent this content getting through.
Could that then be used for other things? I know your expertise is in eating disorders and looking at that content, but the tool behind it, could that be used for other things that people might not want to see on social media?
Yeah, and this has been one of those projecting-forward conversations: if you had a filter like this that someone could turn on, there's no reason you couldn't have it for all kinds of things.
Now, in some domains, like 'I don't want to see stuff that contributes to an eating disorder', there aren't many people pushing back in the opposite direction. For some issues, you will absolutely find that as soon as you start talking about something more contentious, more culture-war-y, like the manosphere, for example, or feminism, suddenly any push this way will just be interpreted as censorship by the other side.
I think TikTok has an understandable concern about what the future of social media might look like if we're all able to create our own echo chambers by turning ourselves off of other things.
But again, as I've said, it's worth a try, because the current setup, where users have no control, is less useful, I think, than providing users with control and then seeing how that turns out.
Well, I think if you achieve even one quarter of what you aim for with this work, Scott, it will have been just a remarkable story of research impact. So thank you so much for talking to us today.
Thanks for having me.
I have such high hopes. My kids are eight, seven and six. So if you can just get it all sorted out before they're on TikTok, that would be great for me. Thanks, Scott.
You've been listening to PsychTalks with me Nick Haslam and Cassie Hayward. We'd like to thank our guest for today, Dr Scott Griffiths.
This episode was produced by Carly Godden with production assistance from Louise Sheedy. Our sound engineer was David Calf. Of course, you can find the links to the Butterfly Foundation and other resources in our show notes. Thanks for tuning in and we're very excited to bring you more episodes of PsychTalks in this new series. So watch this space. Bye for now.