We've changed the way we get information, and that has changed our perception of the world. How have echo chambers formed through tech, and what can you do to make sure you're not just getting information that confirms your existing perspective?
Learn more about your ad-choices at https://www.iheartpodcastnetwork.com
Get in touch with technology with TechStuff, from HowStuffWorks.com. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with HowStuffWorks and I love all things tech, and welcome to a week of darkness. I know we just did nuclear power, but we're not done with dark and scary topics. This week we're going to explore some stuff in the world of the web that is of concern, and today specifically, we're going to talk about echo chambers. Also, I should mention before I really jump into this: I'm still dealing with a cold. You may have heard from the nuclear power episodes that I was dealing with a cold. That cold is slowly moving from my head to my chest and it is affecting my voice, so I could get a little extra croaky over the next few episodes. Back in April, Facebook founder Mark Zuckerberg appeared before Congress to talk about some pretty heavy topics. One of the big ones was Cambridge Analytica, and the next two episodes will be about Cambridge Analytica. But the other was about how dozens of accounts linked to Russia had used Facebook as a platform to influence US voters during the 2016 election season, and this episode is going to be more about how Facebook and other social media platforms got to the point where they could be exploited in that way. Before I get deep into this topic: I am not going to take a specific political stance in this episode, so you don't need to worry about that. I am not here to argue one philosophy over another. This is not a pro-party or an anti-party episode in either case. Instead, I'm going to talk about the way social media platforms in general, and Facebook and Twitter in particular, work, to explain how features that were meant to do one thing could be twisted to do something else. And this can affect anyone of any philosophy, so it can be exploited by anyone who has ulterior motives; it doesn't matter whether that's for one party or for a different party. A lot of what I'm going to talk about is related to the concept of the metaphorical echo chamber. I'm sure most of you have heard about that and you probably know what it is, but just in case, I'm going to lay it out. It's a phrase we use to describe an environment in which a person's previously held beliefs are boosted through repetition of those beliefs within a closed system. So imagine that you're in a room full of people who all love one sports team and they all hate another sports team. They all share these common traits: they all love Team A and they all really hate Team B. And they talk about how great Team A is and how rotten Team B is. They all feel validated in their beliefs, right? They are getting validation from their peers, who are also saying, you know what, you're right, Team A does rule, and Team B can go kick sand. There are no dissenting opinions. There's no objective point of view. Everyone is biased. By the end of the meeting, everyone's pretty much sure that their team rules and the other team is literally the worst. That is an echo chamber. Social media platforms can become echo chambers too, and often do, partly because it's hard to have a nuanced conversation online where two people with different points of view can come to some sort of mutual understanding. That seems to be the exception rather than the rule when two people with opposing viewpoints interact online.
The culture of flame wars and memes, coupled with the lack of tone and body language in chat and other communication tools, means we're far more likely to have a shallow interaction. We're either patting each other on the back for confirming our common beliefs, or we're tearing each other down because we have different points of view, and we tend to get pushed toward the radical ends of our respective philosophies. It becomes an us-versus-them and not a we. So instead of "we should work this out," it becomes "how can we defeat them?" This is just something that happens online more frequently than it does in the real world, although it can happen in the real world too, obviously. Now, there have been actual studies about this sort of thing; I'm not just spouting armchair philosophy. But whenever it comes to psychological studies, I'm always personally a little bit cautious. Not because I lack respect for psychology, I have a great deal of respect for it, but rather because it is particularly challenging to design good scientific tests for psychological issues and scenarios, due to the enormous number of variables that come along with being a human being. But there is a study, published in 2012, called "Effects of Anonymity, Invisibility and Lack of Eye Contact on Toxic Online Disinhibition" that explores causes for our bad behaviors when we get behind a keyboard. The experiment looked at how anonymity, invisibility, and lack of eye contact could affect interpersonal interactions online. The researchers took test subjects and paired them together randomly, and each pair received a dilemma that they were supposed to discuss and resolve as best they could. The researchers saw the greatest evidence of negative, toxic behaviors when the subjects did not have eye contact with one another in their interactions. So whenever the subjects were in a setup that prevented them from being able to make eye contact with each other, the toxicity of those interactions would increase, where you have people with very different points of view arguing about the best solution to a dilemma. Now, maybe this means that when we're online and interacting with someone else, the fact that we cannot see that person and make eye contact with them means that, to a certain extent, that person doesn't seem real to us. We're interacting with what appears to just be words on a screen, not a human being. This is text. The fact that there is a real human being on the other end of those words is a level of abstraction we usually don't bother to process. We know, most of the time anyway, that there is an actual person on the other end of those words, but knowing it doesn't filter into our immediate behaviors. Dr. John Suler defines what he calls the online disinhibition effect, which consists of six factors that remove or reduce inhibitions we typically feel when we interact with one another in public spaces. The first of those factors is dissociative anonymity. That means the anonymous persona we adopt online isn't quote unquote really us, right? So we might act in ways that don't reflect who we are as individuals in our day-to-day lives because we're taking on a persona online, and maybe this makes us bolder than we normally would be, or more of what we think of as outspoken and other people might think of as really freaking rude. Anyone who has spent any time online can probably say, yeah, I get that.
Like, I'm a different person online than I am in real life. Then there's invisibility. That means no one can judge my tone or tell what I look like when I'm online, so that disinhibits me, right? If I have certain inhibitions in my day-to-day life because of the way I look or the tone I have, and I'm constantly adjusting my behavior because of that, that no longer applies when I'm online. Then there's asynchronicity; that is, your actions are not unfolding in real time. You can read a response, you can write something down, days can pass in between, and that changes things as well. Then there's solipsistic introjection, which is: because I cannot see this person, I must fill in the gaps as to what they intend and who they really are. In other words, when I interact with you online, I cannot see you, and I start to assign intent and behavior to the words that you're sending me, because I can't witness your intent myself, and those are pieces of information that are kind of necessary for me to understand the meaning of what you are saying. So I start to just assume what it is you mean. This is where you get into some of those problems where someone says, oh no, I was trying to make a joke, but you took it seriously. Which, by the way, can be a really lousy way of trying to cover up bad behavior, saying, oh, it's just a joke. Sometimes it can literally be just a joke and it ends up snowballing into something terrible, but there are a lot of people who will use "just a joke" as a way to try and excuse bad behavior in general. So it's not a great response. But it does mean that if you send me a message that was intended to be a joke (let's say it legitimately was meant to be a joke and wasn't mean-spirited or anything like that), but I misinterpreted it because I assigned a motivation to you based upon those words, that's me engaging in this particular behavior. Then there's dissociative imagination. That is the idea that this is all online, which isn't real life, and therefore these are not real people that I'm interacting with. And there's minimizing authority, which is: there are no Internet police, so I can do what I want. It's like the Wild West; I'm not going to get punished, so why should I worry about limiting my behavior? Now, those factors will influence people and lead them to behave in ways they might not behave normally in the public world, and sometimes they do that in a positive way. This doesn't have to be negative. It could mean that you might be more honest and open, and you might be more accepting. But other times it might mean being more negative and lashing out in attacks in a way that you would never do in any other context. And it may even be that those who attack are normally pretty decent people, that in their day-to-day lives they aren't aggressive or obnoxious or insulting, but when they get online, that changes, and these factors remove the inhibitions they would typically feel that guide them to being a decent person. Now, I'm not sure what that says about the real person underneath all that. It doesn't seem great. If the argument is that this person only behaves well because he or she feels they have to based upon the community they exist in, that's not a great argument for that person's character, but it is a reality. Out in the real world, we have communities and we adapt our behaviors to the communities we belong to, and this is a survival mechanism.
Human beings are social creatures, and to be social, one of the things you have to have is the ability to get along with people. If you start to alienate everyone you come into contact with, eventually you'll get ostracized from the community. And if we're talking about, you know, primitive humans, that might mean that you have severely reduced your chances for survival. So a survival mechanism that is important to develop is: how do I get along with everybody else and contribute in a way where I can be part of the group? So, whether it's laws or just the desire not to rock the boat, we tend to behave within the context of our community's values and rules. But online, those things aren't nearly as present or established or enforced, and so more destructive human tendencies tend to come to light there. In addition, communication online tends to be fairly short with each blast, and that means we have very little time to reflect on what we are actually saying before we say it. Now, back in the old days (gather around the fire, my friends), we would write letters on paper with pen or pencil, and it took ages to write out anything of substance. And by the time you were finished, you had to go find an envelope, address the envelope, put the letter in the envelope, put a stamp on the envelope, and go out to the mailbox or to the post office to mail it. By the time you did all that, you might have thought better of the words you wrote down, right? You might have thought, you know, there probably was a better way for me to put that. Maybe I shouldn't say that you are a horse's rear end. There might be a better way of putting my thoughts down on paper, and you would try again, or maybe you'd toss everything out. Venting on the page might help you work through your feelings. Maybe through this process you'd think about how the recipient of your letter would react to the words you had written, and that might guide you to write them in a much more constructive way. But online, when we can zap off a quick zinger in no time, it's instant gratification. A tweet or a Facebook update zooms out there before we've even really thought about the consequences of what we have just said, and the nature of online communication itself has catered to our more base natures. Now, to be clear, social media platforms did not create this problem. They just facilitate it very efficiently, and if the managers of those platforms do not intervene or do not enforce behavior policies, things can get out of hand. Next, we're going to take a look at how Facebook's algorithm works to get a better understanding of how Russian-linked accounts were able to really exploit that system. But first let's take a quick break to thank our sponsor. One of the big goals for any web-based property is to drive engagement. Engagement might mean viewing more pages, which ends up counting toward page views for advertising; or it might mean encouraging people to buy stuff from an online store; or it might mean getting people to sign up for newsletters or to join groups online. But it's all about getting people to go from being a momentary visitor to something more than that. And Facebook measures engagement in a few ways on their site. One of those ways is through the number of likes or reactions a post gets. If a post gets a lot of people hitting that like button, then engagement is high. If nobody has reacted to it, engagement is low.
Another is the number of times a post is shared to various people's walls. If I see a cute meme that features foxes, I know I need to share that on my friend Shay's wall, because she loves foxes. So I'll do that, and sharing it counts toward the engagement of that original post. And the third major way that Facebook gauges engagement (which I realize now sounds a little repetitive, but isn't) is through comments. If a post gets a lot of comments, particularly from a lot of different people, not just the same two people going back and forth, Facebook registers that as a post with high engagement. Now, why is that important? Why should you care? Well, the reason engagement is important with Facebook is that you, as a user (let's say that you have a Facebook profile; some of you may not), do not see all the stuff that your friends post on Facebook, even if they're posting it publicly or to their friends. Even if they are not specifically leaving you out of the posts, you're not seeing all their stuff. And this has been a common complaint among users, including myself. A lot of people have argued: I wish that Facebook would just send everything chronologically. It's my job to sit there and go through it all and read up on things, and if I miss stuff, I miss stuff. But Facebook doesn't do it that way. First of all, you don't necessarily get all the posts in chronological order even when you set it that way, because they keep changing the darn settings. But that's another argument for another time. You might notice when you log on that some of your friends seem pretty consistently active, and others appear to rarely post. In fact, some of those people who appear to rarely post might be posting regularly; you just aren't seeing it unless you actually take the effort to pop on over to that friend's wall, where you can see all the posts that you've been missing. What Facebook is doing is serving you a selection of the stuff your friends are sharing on the social media platform and leaving out the rest, and Facebook will serve up posts in your news feed that have high engagement. So if one of your friends shares a post from someone else on their feed that got a ton of responses, you'll probably see that when you scroll through your feed; or if the post has anything to do with anything you're interested in, you'll likely see it then too. Stuff goes viral, helped in no small part by Facebook's algorithm, which makes sure the more visible posts get even more visibility and even more engagement.
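To make that mechanism concrete, here's a minimal sketch in Python of an engagement-ranked feed. To be clear, this is my illustration, not Facebook's actual code: the weights, the scoring formula, and the feed size are all invented assumptions, and the real News Feed algorithm is proprietary and enormously more complex. The sketch just captures the shape of the idea: posts compete on an engagement score, and only the winners get shown to you.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    likes: int = 0                                # likes and other reactions
    shares: int = 0                               # times re-shared to other walls
    commenters: set = field(default_factory=set)  # unique people who commented

def engagement_score(post: Post) -> float:
    # Hypothetical weights: comments from many different people count most,
    # shares next, likes least. The real weighting is unknown to outsiders.
    return 1.0 * post.likes + 2.0 * post.shares + 3.0 * len(post.commenters)

def build_feed(candidates: list[Post], limit: int = 10) -> list[Post]:
    # Rank every candidate post by engagement and serve only the top slice.
    # Everything below the cutoff is silently dropped from the user's feed.
    return sorted(candidates, key=engagement_score, reverse=True)[:limit]
```

The thing to notice is that the score has no idea whether the engagement is supportive, hostile, truthful, or fabricated. An outraged comment thread is worth exactly as much as a thoughtful one, and that neutrality is the opening that bad actors walk through.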
Now, let me be clear: on the face of it, this seems like a no-brainer. After all, if a lot of people are interested in something, if someone writes a very thought-provoking post and it gets a lot of engagement, chances are you will be interested in it as well. You don't want to be left out of the next conversational topic or the next fun meme. But there's a big problem with this philosophy, and that is that you can exploit it, as long as you can make posts that drive a lot of engagement. And it doesn't have to be positive engagement, which makes it way easier. It can actually just be about stirring the pot and making people mad, and you can do it without directly violating Facebook policies. Now, if you go and post something that's outright hateful, racist, misogynistic, or offensive, that might get flagged and Facebook might remove it. It might not; I've seen some pretty awful things up on Facebook that have been left there even after people have complained. But Facebook does have policies about this sort of stuff, and if enough people flag it and Facebook takes notice, those posts might get taken down. So let's assume for the moment that Facebook has established some firm rules and is enforcing them, and take that off the table. What you could still do is link to stories that are written specifically to rile people up. There may not be an ounce of truth to those stories. They might be complete fabrications, specifically crafted just to get a reaction from people. People who agree with whatever the perspective of the story is will share a link to that story on their social media page, or at least they'll be more inclined to, and others will share that, and then they'll comment or engage in some way. In this manner, the message gets elevated and spread around a little bit more, and then you start to see it go viral, and there never has to be an ounce of truth in the original story for this to happen. I've been seeing this on Facebook for a few years, and it's not always political. The stuff I'm talking about today relates largely to politics, but I've seen it for all sorts of different stuff. And I'm sure many of you have had the experience of looking at a post that a friend of yours has shared, and there's an article it links to with a crazy headline, and you might say to yourself, hang on, that can't possibly be true. And then you click through to the article and you do a little digging, maybe, and you find out that the site hosting that article is a quote unquote satire or parody site. Typically for these sites, you'll find an about page somewhere that lays this out. But because real journalism has become so focused on creating clickable (or clickbait, if you prefer) headlines, the quote unquote joke articles don't seem that unreasonable within that same context, right? You've got legitimate news outlets that are creating ridiculous headlines because they work. They get people to click on the stories, and the stories might be very well written and very well researched. But that headline culture means that when we encounter a quote unquote satire or parody headline, it's not always clear that it's a joke, because we see crazy stuff all the time. Clickbait has made that a reality, and so it's only by going further in and reading the actual article that you can tell: all right, is this legit or is this a joke? What is this? Sometimes it's a joke, and sometimes it's very evident from the beginning; you start reading the article and you think, all right, this is a joke. But my friend clearly didn't read the article. They just saw the headline and shared it, and that became their contribution to the conversation, which really did nothing but add more traffic to this website. In other cases, it may not obviously be a joke, and it's only when you go to the about page on the website that you see that everything is supposed to be satire. Although for it to really be satire, you have to know it's satire. In reality, I would call those websites just purveyors of lies. They just make up junk and pass it off as true, and it's only if you dig into the website that you find the little disclaimer saying, oh, this is a satire site. That's not really satire; that's fake news. And I hate using the term fake news because it's so politicized, but there really is that stuff out there, and it's been there for years. And like I said, it's not just politics.
I've seen it for things like the entertainment industry, where you'll see some ridiculous headline, you think, well, that's just bizarre, and only through digging do you realize, oh, there's nothing to this. So the owners of those sites are really in a sweet spot. They can point to that about page and say, hey, it's not our intent to be taken as a serious source of news; just read our about page and you'll see that we're a joke site. But they also keep publishing articles that don't seem to be so much a joke as an outright fabrication, and they end up making huge amounts of money off of ads. High engagement means high traffic. High traffic means high page views. High page views mean you start making money off the ads that you serve against your site. So those deceitful articles are really just a means to an end. The consequences of those deceitful articles, the idea that they might be spreading misinformation, that's not really a consideration or concern for a lot of these sites. And I've read articles and interviews with people who wrote for these sites, and it's clear that they were just trying to think of ways to get more people to click on stories, and that beyond that, they didn't care. They just wanted to drive traffic, to get a lot of traffic, to make a lot of money, and so they would just come up with whatever outlandish stories they could think of that would play right into people's preconceived ideas in order to make money. It's very cynical, and as someone who works very hard to create content that I consider to be of high quality, I find it quite insulting, both as a reader and as someone who creates content. All right, so you've got these writers cynically creating inflammatory articles to drive traffic to a site. They might incorporate just enough real-world facts to give the article some believability, but even that isn't really necessary, as a lot of people are going to share an article if the headline gets their attention and seems to confirm their already-held beliefs. And if you believe deep in your heart that your local government was, I don't know, replaced with pod people or something, a headline that says as much confirms and validates your belief. And hey, you're busy, right? You can't be expected to read every article you come across just to see if it's legit, or to do further digging to make sure the site that hosted the article is in fact a real news site. So you just share it. Add to this the fact that there are countless sites out there that only exist to repurpose other people's content that performs well, in order to exploit that content for advertising revenue, and you have a recipe to make things worse. These are outfits that are all about: let's look and see what's trending. Hey, there's this article, let's say it's on BuzzFeed, and it's doing super well. Let's do our version of that exact same article, and we're going to piggyback off their success. There are a lot of different sites out there that are essentially doing this. They're taking other people's content, and they might change enough stuff so that it's not just a blatant copy-and-paste job, but ultimately they're again just about trying to get as much traffic as possible. The content can be terrible, by the way. It doesn't matter if the content is good or not. It just has to drive engagement.
So now you've got people trying to make legitimate, good content, content that is thorough, investigative, objective, and of high quality, and they're competing with people just throwing junk up online as fast as they possibly can to drive as many page views as possible. This is not a good environment if you want to make good content, because you get drowned out by all the noise. You can hope that your reputation is good enough that people take you seriously, but you're still going up against people who just don't care about quality. They care about quantity and engagement, and that is very demoralizing as you go on. Now, so far I've been talking about this just as a way to make money by serving up ads against lousy content. But when we come back, I'll talk about the dreaded fake news for political gain. I'm talking about the stuff fabricated to guide conversations and influence political elections. So stay tuned, because that's coming up next. But first let's take a quick break to thank our sponsor. Propaganda is a really old idea, and the definition of propaganda is that it is information, typically biased, sometimes disinformation or misleading information, maybe even an outright lie, that is used to promote a particular philosophical or political point of view. All sorts of organizations use propaganda to build support among the general public for a particular stance or action, such as electing a leader or putting your faith in an organization. Russian propaganda is kind of in a class all its own. For decades, the Soviet government used propaganda to praise Soviet leaders and demonize Western countries, particularly the United States, as well as the concept of capitalism. During the Soviet era, the various publications in the Soviet Union were state owned, so the government got to dictate what was communicated down to the citizens. It was very much a propaganda machine. Russian propagandists included artists who were great at capturing the public imagination. These days, at least when it comes to the Internet, Russian propagandists are more like assembly-line workers. In March 2018, Time magazine published an article titled "A Former Russian Troll Explains How to Spread Fake News." That person, Vitaly Bespalov, explained that he took a job with a company called the Internet Research Agency. A lot of people call that particular company a troll factory; it's infamous for employing people who do this professionally in Russia. The real purpose of this organization is not to conduct research, despite the name Internet Research Agency, but rather to spread propaganda as quickly and as effectively as possible. The employees were given instructions to create fake accounts on various social media sites like Facebook and Twitter, and to leave comments and posts that followed the directions of their superiors. Those directions could involve sharing an article that was written specifically to appeal to people with particular political or social views, or leaving comments on posts of that nature, and the whole point was to make those articles seem relevant and important and elevated and popular, to get a post rolling with enough momentum to make it go viral so that it could affect as many people as possible. The Russian government, by the way, doesn't place the same value on free speech as, say, United States citizens do.
The Russian government has laws that make it illegal to post certain types of material on social media pages, such as material meant to, quote, threaten public order, end quote, or posts that are extremist in nature. On the face of it, that sounds reasonable, because you don't want people inciting others to violence. But the Russian government's interpretation of this tends to be: we don't want you posting anything that criticizes the Russian government in general, or Vladimir Putin in particular, or any of Putin's buddies, generally speaking. So the Russian rules seem to say, we want to make sure that everything that is posted is true and reliable and objective, but in reality it's more: we don't want to see you posting anything that's critical of our president. Twitter has also been a target for these types of tactics. In July 2018, NPR ran a story about how that same Internet Research Agency had created nearly fifty Twitter accounts claiming to represent various US newspapers. Most of those newspapers were fake. They did not exist; they were just made up by the Twitter accounts. So you might see a city name and a newspaper title, and there's no actual newspaper called that from that city. The accounts were steadily gathering followers, and they were posting links to news that was relevant to the various regions the papers claimed to represent, like Chicago or Seattle. And some of these were taking the names of papers that at one time existed but haven't existed for decades, so they were trying to trade on that legitimacy. They were also trying to establish a sense of legitimacy by spreading links to real stories from those areas, and those were unbiased news reports. This was not a misinformation campaign at this point; there was no fake news being spread around. But NPR had uncovered this trend and realized that what was happening was that the Internet Research Agency was building trust online through these fake accounts, gathering followers that way and being seen as a reliable news source, and it was all in preparation to begin a misinformation campaign. It's just that NPR found out about it before they had moved into that phase. Twitter, like Facebook, measures engagement in stuff like replies, retweets, quote tweets, likes, that kind of thing, and those metrics guide Twitter to occasionally show tweets beyond the feeds of the people directly following those accounts. In other words, if I make a Twitter account and I follow, let's say, five people, I would expect that every time I log into Twitter, I'm just going to see the tweets from those five people and the stuff they retweet or quote, and that's all I'm going to see. So I might see tweets from others occasionally, but only because one of the people I follow retweeted or quoted them. Except that sometimes Twitter will show me tweets beyond those five people that were not retweeted or quoted by them. You might not follow one of those fake accounts, but you might still see a post from it because it drove a lot of engagement, and thus Twitter would serve it up to you, saying, well, this particular post seems to be really relevant; a lot of people are responding to it, so maybe we should show it to more people, because I bet more people will find it interesting.
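Here's another toy sketch, with the same caveat as before: this is my illustration, not Twitter's code. The engagement metric and the viral threshold are invented assumptions, and the real ranking system is proprietary. What the sketch shows is the mechanism just described: once a tweet crosses some engagement bar, it escapes the follower graph entirely.

```python
from typing import TypedDict

class Tweet(TypedDict):
    author: str
    replies: int
    retweets: int
    quotes: int
    likes: int

def engagement(tweet: Tweet) -> int:
    # All four signals count toward the metric; hostile replies count
    # just as much as friendly ones.
    return tweet["replies"] + tweet["retweets"] + tweet["quotes"] + tweet["likes"]

def timeline(follows: set[str], tweets: list[Tweet], viral_bar: int = 500) -> list[Tweet]:
    # You see tweets from accounts you follow, PLUS any tweet that has
    # crossed the (hypothetical) viral threshold, no matter who posted it.
    return [t for t in tweets
            if t["author"] in follows or engagement(t) >= viral_bar]
```

So a troll account with zero legitimate followers can still land in your timeline, provided it can manufacture enough raw engagement, and, as we've seen, outrage manufactures engagement very reliably.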
And that's one way Twitter can facilitate the spread of misinformation. Or the propagandists can pay for promoted tweets, which means Twitter will serve a tweet up to a larger number of people regardless of whether or not they follow the original account. And of course, if someone you follow retweets or quotes a tweet from a different account, you're going to see it then. So, like Facebook's algorithm, this approach is something people can exploit, and that seems to be what happened. Since the 2016 election in the United States, both Facebook and Twitter have cracked down on accounts that were fake in the sense that the entity the account claimed to represent was not the real owner of the account, or that existed specifically to spread misinformation. So you can see, our love of social media and the business model that supports social media have created a perfect situation for savvy people who wish to spread a specific message. We humans are less likely to feel empathy in online interactions with each other; we're more likely to respond quickly and with very little inhibition. This leads to other people responding to our words in a similar manner, either in support of our view or in an attempt to rip us to shreds, and the process continues. That same process fuels the visibility of the original piece of content that prompted the flame war in the first place, which means even more people see it and react to it. This rewards the social media site, as more engagement means more money from ads served on the site itself. So Facebook wants more engagement; they want to make more money through ads. It rewards whatever entity posted the content, because it means the message is getting out there and shaping the perception of whatever the topic happens to be, and it can also be a monetary reward if the content is served up with ads for those entities. And it serves to create a deeper divide between people who have opposing points of view on a given subject. And since it is a more challenging prospect to engage online in an empathetic way, we are not likely to come to any sort of agreement on that subject. Instead, we're going to push ourselves more toward that us-versus-them mentality. Now, I didn't even touch on other problems, such as the belief that journalism must give an equal amount of time and opportunity to all perspectives on a topic. Journalism should be objective and unbiased. It should be investigative. It should not take anything at face value. But that is not the same thing as giving all sides of a discussion equal opportunity to use the platform to get a message out. If there's a hate group that exists primarily to oppress some other group, it's not responsible journalism to give that group a platform to espouse those beliefs. It would be objective journalism to investigate that group to determine the motivations behind that group's philosophy, why they believe the things they do and why they act in the ways that they have chosen, and to publish that investigation. But it is not journalism's responsibility to act like a stage for anyone to jump up and use. So what can we do about this? Well, one thing we can try to remember is that the people online are mostly people. Mostly. I mean, there are bots and there are fake accounts, which has made this kind of muddy; it makes it a little trickier to just make a blanket statement, to be honest.
And in some cases, the quote unquote person on the other end may either not be a person at all, or they may be someone who is posting something they don't necessarily believe or care about. They're just posting it because it's literally their job to do it. They're just filling out an assignment. They don't care, because they have no investment in whatever the message is. But we should remember that most of the people we interact with online are people, and we need to keep that in our minds. If we are not willing to make that assumption, then that says something very troubling about our own character. Now, when I say this, I do not mean that we need to entertain racist or misogynist or hateful ideologies just because people hold them. I don't think there's any place for that kind of stuff, at least no place I want to be in. Another thing we can try to do is seek out information from reliable sources, not just information that seems to support our own personal worldview, but objective information about any number of topics. And it might mean that you find yourself questioning your own perspective about certain things, and maybe you even change your mind. For example, if you had asked me a year ago what I thought about universal basic income as a concept, I would probably have been pretty positive about it, and I still am, more or less. But I think instead of universal basic income, what I would prefer to see is some sort of universal guaranteed jobs program. In other words, I would like to see a program where anyone who wants to find a job can get a job. It's a guarantee. Now, those jobs would have to be created by various governments at various levels, everything from local governments to federal programs, but it could include all different types of work as well, and that would be very important. But I don't say this to convince anyone that my beliefs are the only legit ones, that my approach is the only right way, and that everyone should just subscribe to my idea, or at least the idea that I have already subscribed to. It's not my idea; other people have made universal guaranteed jobs proposals for years and years and years. I did not come up with that idea. I just wanted to give you an example of an area where originally I was thinking, yeah, universal basic income, that makes the most sense to me, but I now think that guaranteed jobs makes more sense, because it creates more direct benefits and I think it's an easier concept to get support behind. So there are reasons why I've changed my mind. But that wouldn't have been possible if I had not sought out more information about the subject from a variety of different sources. If I had only kept reading people who were advocating for universal basic income, I never would have taken any time to consider alternatives. So that's a simple example, and also an example that I admit is fairly shallow. It's not a huge shift to go from universal basic income to a universal guaranteed jobs program; that's not an enormous leap. It would take a lot more for me to change a more fundamental idea I have to something that is more in opposition to that idea. But it is important for us to seek those pieces of information out so that we aren't just confirming our biases, whether we consider ourselves liberal or conservative, whatever it may be. It's very important to try and seek that out.
Now, the real trick there, obviously, is trying to find sources that are as objective as possible, so you're not just reading propaganda that supports one viewpoint over another. I don't think we're going to get to a spot where everyone magically becomes totally objective and empathetic all at the same time and makes decisions that are responsible from a social and fiscal point of view. I don't think that's going to happen. But being aware of how online information, which has become the primary source of information for a growing number of people, can be used to manipulate those people, that's of critical importance. Only then can we spot what's happening, do our best to shut down abuses of the system, make informed decisions based on real information, and maybe remember that we're all human beings in the process. It's a big request, but I think we could do it if we wanted to. That wraps up this episode about echo chambers. In our next episode, we will start our discussion about Cambridge Analytica and the enormous mess that company found itself in. If you have suggestions for future episodes of TechStuff, send me an email; the address is techstuff@howstuffworks.com. Or you can drop me a line on Facebook or Twitter; the handle at both of those is TechStuff HSW. Don't forget we have a merchandise store at teepublic.com/techstuff, where you can get all your TechStuff merchandise needs. And make sure you follow us on Instagram, and I'll talk to you again really soon. For more on this and thousands of other topics, visit howstuffworks.com.