Tech News: You Got AI in my Tech News

Published Feb 28, 2023, 10:45 PM

We've got a ton of stories relating to AI to talk about today. Plus, VW's Car-Net service refuses to help detectives track down a stolen car (with a toddler inside it) unless they first pay the $150 reactivation fee. Ford proposes a future where cars repossess themselves. And everyone is banning TikTok.

Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio. And how the tech are you? It's time for the tech news for February twenty eighth, twenty twenty three. For such a beastly month as February, twenty-eight days as a rule are plenty. Shout out if you recognize that reference. It is another AI-dominated news day, though I promise there are a few stories I'll be covering that do not have AI as the central topic. But starting off, Elon Musk reportedly wants to found a new AI lab to compete against OpenAI. Now, for those of y'all who remember my episode about OpenAI, maybe this comes as a surprise, because Musk was actually one of the original co-founders of OpenAI in the first place. But back then, OpenAI was a not-for-profit organization, and it had the goal of using an open-source approach to developing an evolving artificial intelligence in a way that ideally would be universally beneficial. You know, none of this "AI that benefits one group at the expense of everyone else" kind of nonsense. But then Musk stepped down from the board of OpenAI, ostensibly because Tesla was also developing AI of its own and there was a potential conflict of interest, though Musk has since said that he was critical of OpenAI's direction. And since that day, Musk has gone on to criticize OpenAI, particularly once the organization founded a for-profit arm, ostensibly to help fund the nonprofit part. Musk has also criticized ChatGPT, saying OpenAI is quote "training AI to be woke" end quote. Yeah, Musk, I get it. You're a billionaire white guy. Why not punch down, because there's nowhere else left to punch, right? What a jerk. I'm, of course, being a little facetious. My opinion of Musk is obviously pretty low, but that's beside the point. I know no one really wants to hear that, so I'll drop it.
Musk wants to create this AI lab and pursue AI chatbots that are unfettered by the chains of wokeness. Considering how Musk has shown that his free speech absolutist stance isn't actually in alignment with his behavior (you can see how Twitter had banned mentions of competing services like Mastodon, Instagram, and others on its platform for evidence of this), I suspect all of this is going to come back to haunt him should he actually achieve this goal. Meanwhile, Tesla investors are likely further aggravated to see that the company's CEO continues to direct his attention to yet another endeavor rather than address problems with Tesla. And then over at OpenAI, the company is introducing a platform for developers that will give them access to OpenAI's tools, namely the company's machine learning models. So these are the very powerful AI compute systems that would require millions and millions of dollars to build out yourself. OpenAI is calling this offering Foundry, and it means that people who have an idea for apps or services that would feature AI in some way could have access to compute assets without having to build them all themselves. That is an enticing offer for developers who might otherwise have a great idea but lack the funds to be able to execute upon it. Details are somewhat scarce. OpenAI has not announced when we might expect Foundry to launch, but we do know that it's going to set developers back a pretty penny, actually a whole bunch of pretty pennies, to access these services. According to TechCrunch, three months of access to the lightweight version of GPT-3.5 would set you back seventy-eight grand. That's seventy-eight thousand dollars for three months of access. Wowsers. So this is well beyond the reach of your average home developer.
We're really talking more about startups and companies that have a real shot at seeing a return on investment, but lack the infrastructure or money to build out their own machine learning systems. Over at Meta, Mark Zuckerberg announced that the company is pursuing its own AI strategy. I'm just gonna read out his Facebook post, because it tells you everything you need to know. Quote: "We're creating a new top-level product group at Meta focused on generative AI to turbocharge our work in this area. We're starting by pulling together a lot of the teams working on generative AI across the company into one group focused on building delightful experiences around this technology into all of our different products. In the short term, we'll focus on building creative and expressive tools. Over the longer term, we'll focus on developing AI personas that can help people in a variety of ways. We're exploring experiences with text (like chat in WhatsApp and Messenger), with images (like creative Instagram filters and ad formats), and with video and multi-modal experiences. We have a lot of foundational work to do before getting to the really futuristic experiences, but I'm excited about all the new things we'll build along the way." End quote. So it sounds like Meta, like Musk, is on the path to create its own AI approach, or perhaps Meta will turn to OpenAI to tap into the power of ChatGPT. It's still early days. And we're not done with AI yet. Snapchat is also jumping into the AI game with a product it calls My AI. Only subscribers to Snapchat Plus will have access to this. According to Snapchat, the AI will do stuff like, if you ask it, give you recommendations for presents that you could buy friends and family.
I mean, presumably Snapchat would scan everything you've ever said to these people and start to pull suggestions out of that. Or it might give suggestions about things you could do with somebody, like, hey, I want to hang out with so-and-so, what's a good activity? And it might say, well, they really like the outdoors, how about you go hiking, that kind of thing. You can also apparently name this AI. However, the company also owns up to the fact that ChatGPT, which is the system powering My AI, isn't always reliable. We've talked about that a lot over the last couple of months. Or as Snapchat actually put it, quote, "as with all AI-powered chatbots, My AI is prone to hallucination and can be tricked into saying just about anything." End quote. Oh, also, all that communication with the AI is logged for the purposes of review and development, so anything you do say to My AI is being recorded. So that means it's best not to confide in My AI all your secrets, like Grandma's chocolate chip cookie recipe or where you hid the bodies, because someone somewhere could be reading over that log. The whole announcement dedicates a surprising amount of space to warning users that the tool might not work as intended, and that almost raises the question of why they're deploying this tool so early, if they're taking this amount of effort to say, hey, y'all, this thing might go haywire, and they really point out all the different ways that you can flag stuff so that people can review it and thus address any issues that pop up. Like, it's a significant amount of the announcement that is all about covering their butts, so to speak. So I suppose one answer as to why they're deploying it so early is that this turns Snapchat Plus subscribers into QA testers that they don't have to pay. Right? These aren't employees. They could turn the community into the QA team. It's the basic concept behind open beta programs, right?
You find out, via a wide deployment, where the problems are, and then you fix them before, you know, you deploy to an even larger audience in the future. Jordan Parker Erb wrote a piece for Insider titled "I asked ChatGPT to write messages to my Tinder matches. A dating coach said they gave off a creepy vibe." Now, I don't think anyone's really surprised by that. Heck, if you ever tuned in to Nothing, Forever, the AI-powered endless Seinfeld episode, you would probably guess this would be the inevitable outcome, because those episodes can get a little unsettling as well. And this piece in Insider indicates that ChatGPT's responses fell into some pretty common traps when one attempts to navigate the complicated world of modern dating, namely that ChatGPT wrote responses that are way too long. This, by the way, tells me that if I were single, I would probably be single forever at this point, because come on, y'all, there's no denying I will use a thousand words when ten would do just fine. So I would never do well on these kinds of apps. Also, ChatGPT leaned heavily on its emoji game, and as the title of the piece points out, some of the responses came across as creepy. Also, the coach pointed out that it's best for folks to just be themselves when using dating apps, because otherwise your prospective date will get the wrong impression, and that pretty much guarantees things aren't going to go well. Like, if a Tinder match thinks that the response was really cool, but the really cool response was written by AI, and then they meet you and you do not have the same vibe? That's a problem. I feel like I'm describing almost every romantic comedy that was written in the eighties and centered on teenagers, except that instead of it being AI, it's typically, you know, the well-meaning popular kids who are attempting to transform a person so that they become popular. It feels like that, except I guess I'm describing the next generation of teen-centered comedies.
I would not be surprised if we find a movie like that. Anyway, I highly recommend reading the actual article. Some of the AI-generated examples that Jordan shares are absolutely hilarious in that awkward, cringey sitcom kind of way. Again, it's a piece in Insider, and it is called "I asked ChatGPT to write messages to my Tinder matches." You can just search for that. I recommend reading it. It's good for, you know, a laugh. Of course, AI goes well beyond chatbots and machine learning. We've talked about other uses of AI and the dangers that they can present. One example that springs to mind, because it comes up time and again, is facial recognition technology. Even if the application of this technology is benign, there are frequently problems with the underlying tech. Unintended bias has been a huge issue with facial recognition technology for years, ranging from some services being unable to detect a person of color's face properly to misidentification, which can lead to traumatic experiences such as being targeted by law enforcement simply because a computer can't tell the difference between different people. Frequently, again, it is people of color who are disproportionately targeted and affected by such technology. Well, last week New Scientist presented another example, one with truly grim and terrifying implications. The magazine found a contract between a tech company called RealNetworks and the United States Air Force. RealNetworks offers a facial recognition platform that it calls Secure Accurate Facial Recognition, or SAFR, and the implication is that the Air Force is incorporating this technology into its unmanned aerial vehicle, or UAV, program. You know, drones. A lack of information has led to some speculation, some of which I think is definitely understandable and believable. After all, Special Forces units have been involved in clandestine operations that are at least difficult to separate from stuff like assassinations, and sometimes impossible.
Sometimes it's just outright an assassination. So it is not a stretch to imagine a unit like a Special Forces unit making use of a drone with this technology in an effort to identify and acquire targets. But the potential for misuse of such technology, let alone the chance that the tech could make a mistake, has led critics to raise the alarm about this approach, and I think that is a wise reaction. Even if the tech works perfectly, you still have to wrestle with the fact that people can sometimes be the absolute worst. They can abuse technology for their own purposes, and when it comes to something as potentially lethal as this, that is a major problem. Okay, we're going to take a quick break. When we come back, we'll have a lot more tech news to cover. We're back, and we're not done with AI yet. I do promise we have other stories besides AI, but we've got a couple more to get through, and one of the stories we have is about how AI is complicated, not just because of the technology, but because of people and the way we react to AI and interact with AI. I think that this is a truly fascinating topic that relates heavily to both psychology and technology. So I want to talk about a recent study out of my alma mater, the University of Georgia. Go Bulldogs. Nicole Davis, who is a graduate student at UGA, participated in a research project that I think is both interesting and has some upsetting, but not really surprising, conclusions. The project brought together a bunch of people and then asked them questions about stereotypes that relate to white, Black, and Asian ethnic groups, and generally the responses indicated that people saw Asian people as being the most competent and Black people as the least competent for any given task. It's a really ugly stereotype, but it's also undeniably a pretty common one.
Then the users were given a task, and it was to try and find a way to reduce the expense of a vacation rental, and they were going to make use of an AI-powered bot, a chatbot. They had a little avatar representing the AI. So these were cartoonish avatars, and there were some that were white, some that were Black, and some that were Asian in design, and the users were later asked to comment on the bot's performance, specifically how human and warm it was and how competent it was at helping the user reduce the vacation rental cost. Davis said, quote, "when we asked about the bot, we saw perceptions change. Even if they said, yes, I feel like Black people are less competent, they also said, yes, I feel like the Black AIs were more competent." This, Davis said, is an example of expectation violation theory, which posits that if someone enters into a situation with low expectations and then their experience is a positive one, they walk away feeling that it was an overwhelmingly positive experience. Not just that it was good, but because it exceeded their expectations, it was even better than that. Davis goes on to say that more research is needed to find ways in which bot representation can help to impact consumer perception in positive ways, like perhaps breaking down barriers people might otherwise have because of these stereotypes that they may unconsciously hold about different people. But this is obviously a complicated and sensitive challenge. Amazon has been using AI to help monitor delivery drivers for a while now, but this recently got more attention when a TikTok user with the handle Amber Gertz gave an explanation of how the delivery trucks' camera systems monitor driver behavior. She is an Amazon delivery driver. She created this TikTok that explains the whole thing, and she says that the system logs violations if a driver breaks protocol in any way. This can include stuff like failing to come to a complete stop at a stop sign, which, hey, that makes sense.
That is like one of the biggest violations a driver can commit, and you definitely need something to help ensure drivers follow the rules, because, I mean, they're on the road all day, so they have the potential for getting involved in collisions more than the average person does. You know, the average person is not on the road all day. And the system also tracks whether or not the driver has buckled their seat belt at the conclusion of each stop, and whether or not they've gotten out of their seat. However, the system will also trigger if a driver takes a drink without first pulling over to the side of the road to come to a complete stop. So if you've got your morning coffee with you and you're an Amazon delivery driver, you have to come to a complete stop before you can take a sip of coffee, or your image will get captured and a violation will be logged on your profile. Also, drivers aren't allowed to touch the center console without first stopping, because that's considered a distraction. And the cameras are not providing a live feed for the whole day. It's not like there's some security office within Amazon where there's this one person looking at a wall of monitors trying to keep up with all these different drivers. It would be impossible to do that. Instead, AI incorporated into the system monitors the camera view and captures video should a driver do anything that violates these policies, and it's all in the name of safety. As Amber Gertz says in the TikTok, pretty much every Amazon driver hates this system, which includes multiple cameras set up within the vehicle and also forward-facing cameras to track things like how far away you are from the traffic in front of you.
But she also generously says this is all in an effort to keep drivers and others safe. Also, Amazon drivers can dispute violation reports, and Gertz even mentions a case where a driver scratched his beard while he was driving, and the system mistakenly believed he was talking on a cell phone and so dinged him with a violation, and he was able to dispute that and get it reversed. Now, I can honestly say I feel really conflicted about this whole approach. On the one hand, this is taking employee surveillance to the extreme. There's no doubt about it. But on the other hand, the system has also allegedly contributed to a reduction in collision rates of thirty-five percent, and considering that collisions often result in injuries and property damage, that's significant. And I kind of wonder what Ben Franklin would have to say about all this, with his views on liberty versus safety and all. By the way, that famous quote is more complicated than the quote itself would indicate. I recommend looking into what he was talking about when he was chatting about liberty and safety. And hey, I mentioned TikTok. Let's talk about that really quickly. Canada has now banned TikTok from federal government devices. The White House here in the United States has done the same and has given federal employees thirty days to wipe TikTok off any government-owned devices. There are a few special cases where there are exceptions for things like security research or law enforcement, but for the most part this is a federal government-wide ban. Several state governments in the US have done the same sort of thing. The EU has started to take action as well. For the US and Canada, the main concern here is that TikTok's parent company, ByteDance, is a Chinese company, and as such could potentially be scouring the app for data in an effort to gather intelligence on behalf of the Chinese government, specifically the Communist Party.
For the EU, it gets a little more complicated, because even if you ignore the connection to China, TikTok itself is based in the United States, and the EU is a real stickler when it comes to protecting EU citizen data from being collected and exploited, and that includes keeping the information of the government safe. So they don't want the US to just get access to that. Meanwhile, China's Foreign Ministry issued a statement saying the US quote "has been overstretching the concept of national security and abusing state power to suppress other countries' companies. How unsure of itself can the US, the world's top superpower, be to fear a favorite young person's favorite app to such a degree?" End quote. First of all, I don't think that "favorite" thing was meant to be repeated, but secondly, shots fired. China's sick burn. Of course, I should also point out that there are literal laws in China that compel citizens and companies to act as agents gathering intelligence on behalf of the Communist Party, so there's not a healthy leg to stand on there. Also, China, oddly enough, has famously blocked tons of apps and services originating in the West in an effort to prevent its citizens from accessing them. So again, not exactly taking the high ground on that front either, but yeah, you zinged us, China. LastPass, the password vault company, revealed that hackers were able to access an employee's home computer, and in the process they got access to a decrypted vault, a corporate vault, not a user vault. This is on the corporate side. Now, you might recall that the same service revealed last year that hackers had penetrated some customer vaults through other means. Currently, LastPass says it does not look like this attack and those previous attacks were connected at all. Whatever the case, LastPass users should change not only all the passwords that they had stored in LastPass's vault, but also the master password for their LastPass account.
This is a worst-case scenario, and while Ars Technica points out that we do not know yet whether hackers have access to individual users' vaults and their passwords, you have to operate under the assumption that they do, and that, further, this data could end up being sold on the dark web. So you definitely want to get out there and start changing this stuff now. I've long advocated for password vaults, as they make the worst parts about passwords a little more user-friendly. That is, by using a password vault, it's easier to create unique, strong passwords for every service that you access. These passwords are difficult to crack, but they're also hard to remember, and because they're all unique, you've got this ton of different passwords that are hard for you to just keep in your memory. So it gets to the point where it can be impossible to remember all of your unique passwords. So a vault's a great solution, unless something like this happens. And while these security events are rare, we've seen they're not impossible, and it then falls to us to take action to make sure we keep our data and our services as safe as possible. LastPass is not the only target to have a catastrophic breach. Another is the United States Marshals Service, which announced last week that attackers were able to gain access to secure systems, or assumed-secure systems, and potentially retrieved sensitive information. The service did say that the information may include data about subjects who are currently under investigation. It might include administrative information and also personal data regarding some of the staff of the agency, among other things. However, one system that they said was unaffected was the Witness Security Program, which is more commonly known here in the US as the Witness Protection Program.
This is the famous program that aims to create new identities for witnesses who are involved in cases for major crimes, and this is all in an effort to keep those witnesses safe from retaliation. It's pretty much a key ingredient in a ton of movies and TV shows that are about the mafia. It's frequent that someone gets put into witness protection so that the mafia is unable to track them down and target them. According to the agency's representatives, the hackers were unable to breach that particular database, so that is some good news. Okay, we're gonna take another quick break. When we come back, I've got a few more news stories that we need to talk about. We're back. All right, so last year, News Corp, that's Rupert Murdoch's company that owns multiple newspapers and some other media outlets, announced that hackers had gained access to corporate systems. We found out about this in February twenty twenty two. However, now we have a little extra information, and it's that the hackers had essentially embedded themselves inside News Corp's systems for nearly two years. In a recent letter to at least some of the company's employees, the corporation revealed, quote, "based on the investigation, News Corp understands that, between February twenty twenty and January twenty twenty two, an unauthorized party gained access to certain business documents and emails from a limited number of its personnel's accounts in the affected system, some of which contained personal information." End quote. The letter also says that News Corp doesn't believe the intrusion was focused on stealing personal data, and that identity theft is likely not the purpose of this attack, but rather that the intruders were gathering intelligence.
When you look into the information that the hackers were able to access, it gets pretty gross for the employees who are affected, because it includes not just stuff like their names and addresses and birth dates, but also things like their Social Security number, their driver's license number, their passport number, that kind of thing. It's understandable that employees who are affected would be very much concerned about this, so the company is providing affected employees the option of identity-protection services through Experian to guard against identity theft and that kind of thing. The identity of the attackers remains unknown, so it's not really possible to say definitively what they were up to or how they intend to use the information they accessed. The leading hypothesis is that the attackers were aligned with the Chinese government, so this could be an example of a state-sponsored attack. But from what I've seen, there's nothing that definitively shows that, or at least nothing that anyone has publicly acknowledged, and my guess is the investigation is probably ongoing. The website The Drive has an article that brings up a potential hazard with autonomous vehicles that I hadn't really considered before, which is silly, because it's such an obvious use case that I'm sure most of y'all are way ahead of me. So this is really an oversight on my part, but the Ford Motor Company has recently been awarded a patent regarding vehicle repossessions. So instead of sending Emilio Estevez to repossess a car after the owner falls behind on their payments, shout out to anyone who recognizes that reference, Ford is suggesting that future vehicles that are outfitted with autonomous operation features would just drive themselves to a location where a tow truck could meet up with them, or go straight to the repossession agency, or maybe even a junkyard.
This would save the people who are driving tow trucks the potentially dangerous job of going to an owner's property to repossess a vehicle. So, in other words, a car would effectively repossess itself. Ford's patent also describes features for cars that would not necessarily have autonomous capabilities: Ford would be able to shut down certain options within the car remotely, some of them not even optional extras, some of them just outright standard features. So things like power locks or cruise control or air conditioning, or even disabling the engine itself, rendering the car inert. The patent describes the process by which an owner would be alerted in advance, which would give the owner the opportunity to make good on payments. Otherwise, well, a car might start to lose all those features, or eventually even drive itself to the repossession agency. Or, like I said, in cases where repossession would be viewed as being too expensive, like a bank saying, oh, it doesn't make sense financially for us to repossess this vehicle, they might just have the car drive itself to the junkyard, which gets kind of sinister when you think about it, right? Because a car autonomously driving itself to a junkyard to presumably be junked? That's grim stuff. Pixar could have a field day with that concept. And now for a horrifying and infuriating story involving a car company fully embracing a dystopic philosophy, or rather, a third party that works with a famous car company doing so. A sheriff's office in Illinois encountered unthinkable resistance from Volkswagen's Car-Net service while trying to track down a stolen VW vehicle that had a two-year-old boy inside it. So the story goes, this mom drives home with her two kids, and she pulls up in her driveway, gets out, takes one kid inside, comes back out to retrieve her second kid. But meanwhile, a group of car thieves had driven up behind her vehicle.
They ended up beating up the woman, stealing her car, running her over, and driving off with her two-year-old son in the car. She was able to call nine one one and report the car and her child having been taken by these thieves, and get medical attention as well. So anyway, the sheriff's office calls Car-Net, because Car-Net is a service that allows Volkswagen, well, really Car-Net itself, to remotely track and even control vehicles to an extent. So the detectives are like, we need to know the location of this vehicle right away, and the representative from Car-Net says, well, she let that subscription lapse, so I'm going to need a one hundred and fifty dollar reactivation fee before I can give you that information. A boy's life was hanging in the balance, and this representative for Car-Net is like, can't give you that info till you cough up the fee. The detective actually did pay the one hundred fifty dollars, because the detective was aware that a boy's life is worth more than one hundred and fifty dollars. It is taking everything in me not to swear during this news item. It is so unthinkably awful. The detective then, of course, posted about this incident on Facebook. Volkswagen has responded by calling it a quote unquote "serious breach" of its process for how it works with law enforcement. And again, to be clear, Car-Net is a third-party service that partners with Volkswagen, so ultimately Car-Net is responsible for this horrible incident, but Volkswagen shares some of the blame as well. As for the child, I am happy to report that the child was found safe. A witness saw the thieves pull into a parking lot. They took the kid out of the car, then they drove off, and this witness was able to rescue the kid before he could wander into traffic. The police subsequently found the woman's Volkswagen. The woman, who was seriously injured as she tried to rescue her son during the theft, is currently recovering in the hospital. And I think Car-Net has a really long way to go to atone for this.
This was unspeakably inhumane. Finally, on a brighter side, the Competition and Markets Authority, or the CMA, this is an antitrust kind of organization in the UK, one of those organizations that looks to make sure that the marketplace remains fair and competitive. The CMA has said that third-party reports indicate consoles could be moving away from the eight-to-ten-year cycle that we're familiar with, right, that typically there's around eight to ten years between generations of consoles, and that we might see them move to three-to-four-year cycles instead. So every three to four years you would have a new version of, say, Xbox or PlayStation. And that concerns me a little bit, simply because of the economic side of things. Like, it could be really exciting to people who are thinking, oh, every three or four years I'm going to get a chance to buy a new console with better components and better features, and that sort of thing, and that is exciting. The thing that concerns me, however, is that currently, the way companies like Microsoft and Sony typically market their consoles is they sell them at cost or sometimes even at a loss. And the reasoning behind it is that you go out, you buy your console, and then you end up spending money on games and services, and that's how companies like Microsoft and Sony end up seeing a profit from those sales. It's not from the hardware, where they're taking a loss, but from the use of that hardware. Well, that use is stretched over a decade, essentially, or eight years. That's a long tail for you to be able to make your profit off of these pieces of hardware. If that gets reduced down to three to four years, then we're probably also looking at a future where these consoles are going to cost more, because the companies are not going to be as willing to take a big loss on the hardware sales when there won't be as much time to recapture those costs over the lifespan of the consoles.
If people are changing consoles every three to four years, then the companies aren't necessarily, you know, realizing the profits off a single console generation that they would with a longer development cycle. So my guess is that such a future would see consoles being more expensive, or at least you'd be looking at the companies moving away from selling them at a loss. Maybe they would continue to sell them at cost, but I would guess they would choose a way to see better profit margins off the hardware sales, because otherwise they're leaving money on the table. It doesn't make sense to me otherwise. This is just my assumption. I don't know that for sure. Also, this is based off the CMA citing third-party reports. This isn't coming from Microsoft or Sony. So until we start seeing those announcements come from the actual companies, we could say that this is just a rumor, but it's one that makes me think that, if it were to come to pass, we would see more expensive consoles in the future. That's just my gut feeling on the matter. I don't have anything to base that on except, you know, thinking it through, and I could be totally wrong on this. All right, that's it for the tech news for Tuesday, February twenty eighth, twenty twenty three. I hope you are all well. If you have suggestions for stuff I should talk about on this show, well, there are a couple of ways of reaching out to me. One is to go to Twitter and tweet at TechStuff HSW. That's the Twitter handle for the show. Let me know what you would like to hear. Or you can download the iHeartRadio app. It's free to download and free to use. You can navigate over to TechStuff by putting it in the search field, hit that little microphone button, and that'll let you leave a voice message up to thirty seconds in length, and you can talk to me, Goose. Okay, that's enough references to the eighties in this show. Golly gee willikers, you can tell I'm getting old. All right.
I hope you are all well, and I'll talk to you again really soon. TechStuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.
