
TechStuff Looks at Three AI Startups

Published Dec 11, 2024, 9:35 PM

2024 has been a huge year for AI. But why should we focus only on the enormous companies like OpenAI, Microsoft, Google, Meta, and Amazon? We look at three startups that hope to make it huge in the AI space. 

Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host Jonathan Strickland. I'm an executive producer with iHeart Podcasts. And how the tech are you? Y'all, we're getting up to the time of year when I like to look back on, you know, the months that have passed and reflect on all that's happened in the tech space, and for twenty twenty four, it's been a heck of a lot. But I thought today I would talk about three startup companies in the AI tech space that are really hoping to make it big in the coming years. I'm sure I'll talk a lot about AI when we do our year in review episode, because this year has been largely about AI in different aspects, a lot of them kind of scary, right? But I wanted to talk about some startups because that's something we don't focus on that much in the news episodes. Most of the talk has been around companies like OpenAI and Microsoft and Meta and Google and Amazon. I thought maybe we could take a look at some startups in the space, because there are lots of those too. There are tons of AI startups, some of which are doing all right right now, some of which may be struggling. And honestly, there's this growing concern in the AI field that perhaps some versions of AI are starting to hit a wall when it comes to advancements. Not that they're not continuing to evolve, but that evolution is happening on a slower time frame than what we saw previously. Which, typically, that's what happens, right? Usually you have lots of early gains and then it starts getting harder and harder. Those of y'all who work out know what I'm talking about. Anyway, I thought I would give a shout out to the team over at Startup Savant. That website has put together a list that they called the one hundred top startups to watch in twenty twenty four. Now, that's a huge number of startups, right, and I'm not gonna go through all of that.
I'm going to look at just a few of them, and I want to comment a bit on what's been going on. So artificial intelligence is still in something of a boom period right now. Yes, there are these areas where AI could potentially be brushing up against some limitations to certain approaches, particularly in the large language model space. But AI, as I've said many times, is a very complicated topic. There's lots of nuance to AI. It's not just one big, monolithic discipline. It's lots of much smaller, subtle disciplines that collectively make up artificial intelligence. To that end, I want to talk about some of the startups mentioned in this piece in Startup Savant. One I would like to chat about is Suno, because Suno simultaneously puts on a really darn impressive display while also being indicative of some of the more challenging aspects of generative AI in particular. So Suno is based out of Cambridge, Massachusetts. Technically it's a twenty twenty three startup, and it released its first build of the generative AI tool that it created back in December twenty twenty three. But the first stable release of that application came out just this past November, so Suno really qualifies as a twenty twenty four AI company if you ask me. I actually downloaded the Suno app to give it a try, and I have to admit it is pretty impressive. So Suno uses generative AI to create music based off user prompts, and those prompts can be really specific or really vague. So, for example, you might write a prompt like, create a high energy dance track with lots of synthesizers and drums about going out on the town with a group of friends, sung by a female vocalist, right? Or you might give it much broader direction, like, create an Appalachian folk tune about witches. Suno can compose music including lyrics and synthesized vocals, which on a phone speaker sound pretty darn convincing. Like when I made it make folk tunes, you could even hear the breathing sounds.
I guess what I'm saying is, you know, it sounded organic, at least on a phone speaker. I'm sure if I were listening on really high end speakers, I could probably detect a bit of the artificiality. But to my dumb ears, they sounded pretty good. Now, what you do with that music track from that point forward depends on whether or not you're a paid subscriber to the service. If you are using their free basic plan, then you can use those tracks for any non commercial purposes. The actual ownership of the music tracks themselves, those belong to Suno, so you don't have ownership of the tracks you make if you are a free user of the basic service. But let's say that you were running a role playing game session. You're the game master and you really need some moody, mysterious music playing in the background as your adventurers walk through a dungeon. Well, you could use Suno to generate that music for you if you liked. It could be instrumental pieces, orchestral pieces, you know, synthwave, whatever it may need to be to suit the needs of your game. You could do that. And because it's non commercial, you know, you're just playing with friends, there's no problem there. Now, if you wanted to make commercial use of the music generated based off your prompts, then you would need to be a subscriber at the Pro or Premier level, and then you own whatever tracks are generated. You own the intellectual property of those songs, and Suno grants such users a commercial license. Copyright gets really complicated, because, at least here in the United States, you cannot copyright a work generated by AI. But what you can do is monetize the track, right? Like if you wanted to make a commercial podcast, like you want to monetize the podcast, and you wanted to use Suno to generate the theme song for your podcast, you could do that at the Pro or Premier level, because that's a commercial use. You'd be using it for a commercial podcast. You could then do that.
This does start to raise some questions, however. I mean, you could just start churning out songs, you could switch out prompts, you could tweak approaches in an effort to make something, you know, anything that resembles a catchy hit, and you don't have to hire musicians or singers or anything. You're just working with a computer. And if you have lots of time on your hands and you have one of those subscription levels where you don't have a lot of limitations on how many times you can use the app, well, you can just keep rolling the dice, and maybe you get lucky and you have a track that ends up getting crazy amounts of attention, and you're just flooding streaming services with track after track after track of AI generated music. This is something that is happening, and it's somewhat concerning, because meanwhile, you have actual human being musicians who are trying to get noticed, and if there's just a glut of AI generated material hitting the various streaming platforms, it gets harder and harder to get discovered as an artist. And that doesn't seem terribly fair, right? But to the person who's just using this tool to make track after track, it's a license to print money, baby. Of course, it's not as simple as that, and there's definitely some discomfort in the music industry over the rise of these AI tools. The stuff made by Suno sounds representative of the various genres, at least to my ears. In other words, if you give it the direction to make a song in a specific genre or subgenre, what you get sounds like it. It fits, pretty much. I didn't try things like rockabilly. I should try and do a rockabilly song about something and just see what it sounds like. But you know, for very broad categories like classical or folk or R and B or funk, the stuff I was getting sounded fairly representative of those genres.
Not necessarily brilliant, but, you know, not unlike something that you would hear if you were tuned into a radio station that catered to that specific genre of music. One thing to keep in mind is, again, anything generated by AI here in the United States is ineligible for copyright protection, because in order to qualify for a copyright, a work has to be created by a human being, and writing a text prompt into a field and then having an AI model generate music does not qualify as a work created by a human being. That means that if you did create this musical track, and if you did have a commercial license to make money from it, there's no copyright protection that would allow you to go after anyone who infringed upon your intellectual property and made copies of your music, whether in whole or in part, or sampled it or whatever. And I say your music in the sense of you own it, not you created it. And that's a real issue. Beyond that, there's a bigger concern in the music industry over how Suno trained its models in the first place. Like, AI doesn't just magically know that a folk song sounds like a folk song, or that, you know, rhythm and blues sounds a specific way, or that Chicago blues has a different style than New Orleans blues. The models had to learn the rules of music theory and the styles and the things that go along with the various styles. It had to have some form of understanding, which is a tough word, because I don't mean to imply that the model has general intelligence. It doesn't understand like humans understand, but it recognizes these qualities that are associated with different kinds of music, right? Like, it has to have that reference base, and it has to be able to generate music that we find nice to listen to. Otherwise it's just spitting out random noise.
So to get to that point, Suno presumably trained its models on a data set that included lots and lots of songs made by real life, living, breathing human beings, and that also raises concerns about the potential for plagiarism. Now, Suno asserts that its model has guardrails in place to prevent it from, say, just lifting chord progressions and melody lines and such from existing songs. But the music industry isn't so quick to accept that explanation. I mean, we just kind of had a pop culture moment in the form of Heretic. If you saw the movie, Heretic had Hugh Grant in it. It brought up the issue of plagiarism a couple of different ways, and one of the ways was he plays a song by the Hollies called The Air That I Breathe, and then he plays Radiohead's Creep, and shows, like, here's two songs that have the same chord progression. It's so similar that the Hollies sued Radiohead and were ultimately allowed to have some writing credit and royalties from Creep because of the similarity between the two songs. Hugh Grant's character also points out that Lana Del Rey's Get Free shares that same chord progression and most of the melodic line from Creep, and in fact, Radiohead, or Radiohead's agents or whatever, sued Lana Del Rey for plagiarism as well, which is wild because Radiohead had already been sued and essentially admitted to, or at least acknowledged, the similarities with the Hollies, and then Radiohead goes and sues Lana Del Rey. This is a really sticky subject in music already, this issue of how similar can one song be to another before you start to say, hey, wait a minute, I think you actually copied this other piece of music, as opposed to you both independently arrived at the same structure from different pathways. But imagine how much more complicated it gets if a company were to argue that an AI model was plagiarizing protected works by generating songs that conform to the rules of music in a way that's similar to existing pieces.
In June of this year, the Recording Industry Association of America, the RIAA, filed a lawsuit against Suno. The RIAA claims that Suno has engaged in copyright infringement by training its models on protected works. The lawsuit seeks damages of up to one hundred and fifty thousand dollars per copyrighted work that was used in training. Now, I'm going to guess that would be a huge amount of money, but I don't know for sure, because we can't actually see the data set that Suno used. It's behind closed doors, so we aren't allowed to see how many songs, or what kinds of songs, or from what catalogs Suno dipped into in order to train up its AI. I mean, it had to be a lot. When you're training AI, you need lots and lots and lots and lots of training material to get your model to start to home in on what it is you want it to do. In the case of image recognition, that could be millions of images in order to train a model to recognize one thing versus another. So presumably the training material for Suno was pretty darn extensive. I think it's pretty safe to say that every single record label out there likely has some music from their catalog that was used in the training. I don't know that for sure, that's just a guess, because, again, it's not like you can have the AI scan the radio for a second and then say, oh yeah, I got this, and then go and generate all that. Okay, well, we've got more to say about Suno plus some other AI startups. Before I get into any of that, though, let's take a quick break. Okay, we're back. We're still talking about Suno here, the AI company that makes a tool that allows you to create a song just based off a text prompt. So I think Suno really illustrates the pros and cons of generative AI in a neat little package. So the pros are, if you are not musically inclined, but you need to create some original music for whatever reason.
Suno is incredibly easy to use, and you can even continue to work on a piece. Like, let's say you prompt Suno to make something, you listen to it, and you're like, well, that's close, but it's not really what I want. You can keep on refining it by continuing to type in prompts and get it shaped into a place where you really like it. Now, if you don't know any musicians or you can't afford to hire musicians, arguably this is a tool that could work for you. But personally, I feel really icky about the idea of leaning too hard on AI to create art of any kind, or even calling AI generated material art in the first place. That doesn't seem right to me. I like to think that art has to have some sort of human intent behind the piece. There needs to be a human motivation beyond just a text prompt for it to be art, at least in my opinion. But art is a very subjective thing. And also I'm very old fashioned, so it's entirely possible that I'm out of step here, but it doesn't feel that way to me. Now, the cons are that Suno could really impact human musicians who otherwise could be hired to devote their skill and craft toward creating new stuff, thus making it harder for people who have honed their art to make a living off of that art. Like, think of the countless hours that songwriters and musicians and vocalists spend to get good at what they do. You know, people aren't just naturally flawless in their execution, right? It takes years of practice to get to where they are. And they do this not just from a love of the art, although I'm sure that's a big part of it, but also with the desire to make a living off of all that hard work. Well, tools like Suno arguably create a shortcut that might be very tempting for some folks out there to just bypass all that messy human stuff and the costs that come with it, and go for, you know, let's just go with AI if it's popular. Who cares if humans didn't make it? Because, you know, on the producer side, anyway, the goal is to make money off of music.
You don't really care where the music came from or how it was generated. You just care that it makes money. You don't even care if it resonates with your audience. You just need it to sell well. So that makes me feel icky. The balance between commerce and art is always a tricky thing. I had a really good conversation with an artistic director of a theater about this and how that's always a difficult balance. Like, how do you balance between commerce and art? And it's tricky, because, you know, we don't live in a world where we can just create art and not have to worry about paying the bills. We have to do both. Then there's also the plagiarism issue. So at what point do you say an AI tool is creating music the way a person might create music? So, for example, people lean on inspiration from previous works all the time, right? Like, you might hear something in a musical piece and think, oh, that's interesting, and you want to build off of it. I mean, sampling is built off this. There are entire genres of music that are rooted in this idea, where you take something that was used in one piece and you repurpose it to create something brand new and transformative. Doing that is okay, right? Even copyright law says that's okay if it's transformative, to a certain extent. Like, you can get into trouble if you don't do it correctly, but you kind of get to a point where you say, is the AI, you know, taking inspiration from this previous corpus of music, or is it actually just copying something that's already been done because the AI has determined this is the best way to do it? That's tricky. Now, we'll have to see where companies like Suno go in the future. I know that media companies are simultaneously curious and worried about this technology. If the media companies can make a buck off the tech, then it will come as a surprise to no one when we're flooded by AI generated tunes.
But if those media companies determine that working with AI puts their relationships with human artists at risk, you know, these are artists who could potentially be earning media companies billions of dollars every year, then it's a little different, right? Like, it would be a dumb move to experiment with AI if in the process you're alienating the superstars you regularly work with. I say this because I've listened in on meetings where there have been discussions about using AI to create music, and these were with people who worked with companies that had tight relationships with musicians and artists, and I had to ask, like, what do you think that's going to do to your relationships, your professional relationships, with these other people who clearly have a vested interest in not having AI flood the market? And that really gave pause to the meeting. I'm known as Debbie Downer at those types of meetings, because instead of, you know, kind of blue skying the whole AI thing, I say, let's take this into context. Let's really think critically about this, because otherwise we're putting lots of people's careers at risk. And not only that, but potentially we are suppressing artistic expression, because, again, I don't think you can call it artistic expression if it's AI generated. It might be conforming to certain conventions and rules in an effort to create music that sounds like, you know, whatever the prompt was, but that's not the same thing as artistic expression. All right, let's switch to a different startup. This is one that is also a couple of years old, but it completed its Series A funding just this year, and that's webAI. Now, I guess I should kind of cover what Series A funding actually means. So in the world of startups, and this is not just in tech, this is startups in general. But in the world of startups, there are typically multiple rounds of investment funding that are necessary for a business to get to a point where it can operate like a business.
So first up, you've got your seed funding, and this is used to get a company established in the very early stages. You might use seed funding to do things like, you know, incorporate, design a logo, or secure some early office space, which might be a sharing situation. Early on, there are a lot of businesses that started off taking up a corner of an existing business's office, for example. And I think of seed money kind of like how buskers will put a few dollars in their hat before performing on the street. Right? You see that street musician, they get their hat out, it's got some money in the hat. Often these musicians will put a couple of bucks in there to start with, and that money is seed money, and you're hoping it's going to grow and blossom as more people throw dollars into the hat. Seed funding is kind of similar. Seed money typically comes from folks like angel investors, so these could be really influential people in the sector, maybe people that the startup owners already know personally. Could be someone they went to college with, or an advisor or a relative. It could be something like that. It could also be from an incubator group. Incubators exist in order to foster ideas that could potentially grow into viable businesses. The incubator gets a stake in whatever the company is and thus profits if the company does well, and in return, the startup gets a little bit of stability and access to some assets. But seed money only goes so far, and typically it's enough to keep a company afloat for just a relatively short amount of time. What typically comes after that is Series A funding, and in this series the startup opens itself up to investments beyond that initial group of seed money investors. This could then be followed by additional rounds of investment. You can have Series B, Series C, et cetera.
And it does mean that the investment crowd gets more dense as you go on, because you get more investors, which means you have to pay out more shares. In the long run, the goal is either to become a scaled operation that has sustainable growth, potentially going public at some point so everyone makes their money back plus interest, or to get swallowed up by some bigger fish for a handsome payout and everyone goes home rich. So webAI concluded its Series A funding this year and saw about sixty million dollars flood into the company's coffers. Analysts value webAI in the seven hundred million dollar range. So yeah, they got sixty million in investment, but they're valued at around seven hundred million dollars, so they're closing in on that unicorn status, where you hit a billion dollar valuation. So what the heck does webAI do? It's interesting, because they're called webAI, but in fact they're focused on creating on device AI solutions, by which I mean instead of relying on a cloud based AI server farm, you do your AI processing on devices that are local to you, whether you're an individual or a business. Now, this is important, because most businesses aren't necessarily keen on using an off site AI solution, out of concern that the AI provider could possibly train AI models on the company's proprietary data. Right? Let's say that you are an analysis firm and you're using AI to assist in the analysis. If you find out that the company that provides these AI tools you're using is actually training its AI on your information, well, that brings your information security and privacy into question. If your business handles sensitive information, and let's face it, most businesses do to at least some extent, then you could have legitimate concerns about a third party gaining access to that data and potentially exploiting that access by training its own models.
You could even get into some real legal trouble if you are working with other parties, like partners, that have agreements that would prevent you from legally being able to share their information in the first place. So there are a lot of sticky situations around this particular approach to business. And this is not a hypothetical issue, this concern about AI potentially compromising security and privacy, because we have seen examples of various AI tools, generative AI tools, pulling information from other users' interactions with the AI. Now, usually this is because there's been some sort of error on the back end of the AI side. So theoretically, each customer's interactions should be siloed from everybody else's, but now and again mistakes happen, and you might be in a conversation with an AI chatbot and you end up starting to see information that was inserted by some other customer. Right? You start to see interactions that they had with the AI, and that obviously is a huge breach of privacy. That has happened a few times over the last couple of years. Now, companies obviously don't want those sorts of mistakes to include their intellectual property. That could include things like code for software, or business strategies, or, you know, trade secrets, all that kind of stuff. You don't want that to suddenly get dumped into an AI large language model, and then someone else is like, hey, what does company XYZ think about this? And then you find out, because those trade secrets have been included in the training material. That would be bad. So a better solution, if you plan on making use of any sort of AI process, might be to make sure that you can handle all that processing yourself. Now, depending on what your business does, that may or may not be practical. Like, OpenAI has an enormous number of computers running AI processes, and most companies would not be able to replicate that.
And if you're talking about a big company that needs to run some hefty AI processing functions, going that server route might not be a viable option. Those server farms for OpenAI are so large and so expensive that for a while this year it looked like OpenAI might even spend itself out of business just in order to pay the bills. But investors ultimately did swoop in and injected OpenAI with a mega truckload of cash, so bankruptcy has been staved off for now. Like, you might think of OpenAI as the next too big to fail company, even though OpenAI is spending money at a truly eye popping rate, because AI processing is hard. It takes lots of processing power if you're doing it the way OpenAI does it. All right, we're gonna take another quick break. When we come back, I'll talk more about webAI, and then we'll follow up with a third AI startup. Okay, we're back, and we're going back to webAI. So webAI has developed some products that allow for local AI processing. None of this goes over the cloud, none of it goes over the Internet. It's all contained on premises. Some of the products webAI is offering sound pretty nifty. The company claims to have taken a tailored approach for every single customer, optimizing the strategy to fit whatever that customer's needs happen to be. The company also says that no coding is needed to use webAI to build out functions, but programmers can take advantage of advanced settings if they want to stretch themselves a bit. Now, I haven't used webAI's tools, so I don't know how webAI integrates AI into the different applications and processes that businesses have. Like, I don't know what that looks like. I imagine it's got to be more complicated than just, see this, do this automatically. It has to be a little more complicated than that. I don't know how much more complicated, because I haven't had hands on time with the tools.
But some of the applications webAI lists on their web page include stuff like airline and airport logistics. You know, using webAI to help reduce turnaround times and improve efficiency at airports, so that planes are spending less time at gates and you have more flights arriving on time or ahead of schedule, and just improving efficiency in general. Or using webAI to help develop educational applications to customize learning approaches for classes, or even on a student by student level. That's something that's been talked about for a very long time, this futuristic vision of, imagine a world where every student has an education that is catered to their style of learning. That's obviously something that teachers cannot do right now. It's impossible. If you have a class of thirty students, you don't have the time to be able to craft and design and maintain a teaching plan for each and every student. But with technology, the idea seems closer. It may be that we can never really achieve it, but it seems like it's possible. Then webAI, man, I just say webi all the time, webAI also suggests medical applications that could improve the quality of patient care. And if all of this is sounding vague, that's because webAI is offering a platform upon which many different services and products can be built, so it's impossible to go through every single variation that webAI could empower. It's more like, these are some examples of what the tools are able to do. They're able to improve the performance of various processes in all these different fields. By the way, this is one of those approaches in AI that I can really get behind. I feel that creating on premises processing capabilities and optimizing an approach that makes sense for specific companies, that's the way to go. You know, don't do a one size fits all approach, because that just isn't realistic.
And I'm not saying that cloud based AI services don't have a place, but cloud based AI services concern me for lots of reasons, privacy and security being two of the big ones. Plus, you know, some AI companies are making some choices that I personally find a little questionable. Cough, OpenAI pairing up with Palmer Luckey's defense company, cough. But how about we cover an AI startup that has a much more focused purpose? This is the third and final of the three that I wanted to highlight today, and that brings us to Overjet. This is an AI company that caters to the world of dental care. Yep, we're bringing together the Terminator and dentistry to create an unstoppable tooth scraping supervillain. All right, I'm going a little far. I apologize. I recently rewatched Little Shop of Horrors and it has rubbed off on me. My apologies to all the dental hygienists and dentists out there. Now, this year, Overjet secured a Series C round of funding, so this was the third round after seed investment, and they raised more than fifty three million dollars in the process. The company was originally founded back in twenty eighteen, and according to a press release, the company's mission is to, quote, make dentistry patient centric by providing dental professionals with the AI tools they need to operate efficiently and give patients exceptional care, end quote. So how do they do this? Well, from what I can gather, one way is for Overjet to use image analysis tools to help diagnose dental diseases in patients and suggest methods of care to help treat or prevent dental issues. So, for example, if you were to get X rays done at your dental appointment, Overjet could be used to analyze those X rays and diagnose any issues. And maybe it tells you, hey, it looks like on this one side of your mouth you've got a little bit more damage. Like, maybe you've got more buildup of plaque, or, you know, maybe you have the beginnings of cavities.
Over here, it suggests that perhaps you're not reaching your teeth effectively when brushing that part of your mouth, and that it's something to be mindful of. Or, more seriously, it might detect early signs of oral disease and give your dentist more time to address the problem before it becomes much more serious. Moreover, a goal of Overjet is to create a sort of centralized point of information for every patient, and that would give dentists and other doctors, plus patients and even insurance companies, a common foundation to work from. So the goal is to smooth out any rough spots or miscommunication that could otherwise crop up between these different parties, and to make sure everyone has the same understanding. And I could definitely see where that could be helpful, right, where you have this tool that dentists could use to say to the insurance companies, for example, that this is something that absolutely is covered and needs to be reimbursed or whatever, because we have this common point of contact where we can have an understanding of what's going on with this particular patient. Overjet is actually the first artificial intelligence company to receive clearance from the US Food and Drug Administration, or FDA, to make use of AI in the detection and diagnosis of oral disease. And I think that's an incredible achievement. I mean, you have to be able to pass lots of inspections and analysis in order to do that. The FDA doesn't just rubber stamp stuff, and this is definitely an AI application that I really like. You know, anything that can help give more precise care to patients and improve that quality of care, to me, that's a really good thing, as long as the technology performs reliably and consistently across all patients. Obviously, we have seen cases of AI, I'm not talking about medical AI necessarily, but we have seen examples of AI that have performed really well with one set of people and not so well with other sets of people.
I'm thinking primarily of stuff like facial recognition technology and how it is not as reliable when looking at anyone who isn't a white dude. Essentially, if you're a white dude, it works pretty well, and then if you're not a white dude, the reliability of the tool starts to decline. So let's just say I want to make sure that any medical AI doesn't fall into those same sorts of biases, where just because something might be true for one subset of the population doesn't mean it's going to be true for everybody. And again, this takes us back, sort of like with education, to this futuristic view of a world where healthcare is customized down to every single patient. And that's the dream, right, where every person is given individualized healthcare so that they get the optimized approach to taking care of themselves, either to prevent illnesses, diseases, and conditions, or to treat the ones that they have. Because we don't live in a one size fits all world. You know, what works for me might not work for you. In fact, I personally experienced this the hard way this year. You might recall, at the end of twenty twenty three, I had what I like to refer to as my little medical whoopsie, and I was sent to the emergency room. And at the end of that, I was prescribed a blood pressure medication, and it turned out that that particular type of blood pressure medication was not effective for me. I didn't know it at the time, not until three days later, when I was so poorly off that I had to be admitted to the intensive care unit at the hospital. So I upgraded from ER to ICU. And part of the reason for that was that the blood pressure medication I was prescribed wasn't cutting it. It turned out my kidneys were really badly damaged. Fun times. They're much better now, by the way, just so y'all know. So at that point, I was then put on a different kind of medication, and then they fine tuned that so that it would work best for me, and that way, you know, I wouldn't die. 
And that's kind of how it has to go right now for most patients. Like, it's not something where a doctor can speak with one hundred percent confidence that a specific medication at a specific dosage is going to do the trick. Often it requires a lot of trial and error. The hope is that with AI we can get to a future where patients can receive a much more individualized approach to care that minimizes the risks of complications and hopefully the impact of stuff like side effects. Like, you're never going to get rid of side effects, but hopefully you'd be able to use these complex technologies to design a type of care that gives a patient a higher quality of life. That's the goal. Now, we still have a very long way to go before we get there, but I feel like Overjet's story is evidence that it's at least a potentially achievable goal. Maybe not one hundred percent achievable. I don't want to paint AI as being a perfect solution that's going to get rid of all these problems and mean we'll be magically living in a Star Trek universe. I don't want to suggest that. I do want to say that I think it can help us. Assuming it is responsibly and accountably designed and maintained, it can help us reach a better future, depending on how we implement it. So there are lots of ways where I think AI is going to be a good thing moving forward. I know on this show I can get really critical of AI, but that's because it does have the potential to do good things. It also has the potential to do really bad things, either intentionally or, as we have often seen, unintentionally. Intentionally? I worry about companies like OpenAI pairing up with defense contractors because I don't see that ending well. 
I see that going to a place that's very dark, and honestly something that I thought would only exist in science fiction throughout my life, and it turns out I was being naive. Unintentionally is arguably just as bad, because it shows a lack of oversight on the part of whoever's developing the AI process. And often when we have our sights set on a specific goal, we can have blinders put up to the potential consequences of our choices, and that can be a really bad thing too. But that doesn't mean we should shy away from AI. It just means that we have to be extremely mindful and careful as we develop and deploy AI solutions, because the potential for them to really improve our lives is definitely there. We just have to make sure we're doing the right stuff in order to get there. That's it for this episode, just a quick look at three AI startups. I mean, there's obviously lots more out there, but I wanted to pick three that would be fun to talk about for today's episode. We'll be back with more new episodes, including hopefully some special guests, in the very near future, and I will talk to you again really soon. Tech Stuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.
