In this week's monologue, Ed Zitron walks through how reckless members of the media continue to inflate the AI bubble - and the detailed notes he's been taking along the way.
---
LINKS: https://www.tinyurl.com/betterofflinelinks
Newsletter: https://www.wheresyoured.at/
Reddit: https://www.reddit.com/r/BetterOffline/
Discord: chat.wheresyoured.at
Ed's Socials:
https://www.instagram.com/edzitron
Cool Zone Media. Hello, and welcome to your weekly Better Offline monologue. I am, of course, Ed Zitron.

As you'll find out in tomorrow's episode, the future of generative AI hinges heavily on OpenAI raising tens of billions of dollars, the majority of it from SoftBank, a huge Japanese investment firm that has to take out billions of dollars of loans to fund them, along with its contribution to the Stargate data center project. And as I went into in yesterday's episode, Microsoft is pulling back from over a gigawatt of data center capacity, and it certainly looks like these moves are an intentional effort to distance itself from OpenAI, cutting back on data center expansion just as America's worst company needs more of it.

Meanwhile, Marc Benioff, CEO of Salesforce, has sounded the alarm, saying at a recent CNBC conference that he believed hyperscalers were under hypnosis in their aggressive pursuit of data center expansion and training larger and larger language models. Benioff believes that all of this, and I'm quoting him loosely here, referring to data center expansion and larger language models, "has to be rethought. Exactly what are you doing and why are you doing this?" That's a bloody good question, Marc. To be clear, Marc Benioff has been saying that Salesforce was adding some sort of Einstein AI shit for the best part of a decade as a means of boosting his stock price. So why is big tech's most effusive bullshitter saying this? Do you think it's because things are going well? Do you think it's because sales of Agentforce and other associated products are doing really well?

Look, as I've repeatedly said: where is the money in this industry? What have these companies actually built with generative AI? Where are the products that matter, and why do they matter? Do you really think ChatGPT is revolutionary? Do you think any of this is revolutionary?
We are two years in, and I'm still getting DMs from people asking me, "what would it take to make you believe that this is all the future?" And I'm so fucking tired of being asked this. The arguments I make are grounded in numbers and things that have happened: not just financial details and statistics, but objective evaluations of the products in question, their efficacy at tasks, and the people involved. Yet somehow, I and other critics are continually made to justify ourselves, while Sam Altman of OpenAI and Dario Amodei of Anthropic vaguely suggest that we'll have a conscious, autonomous computer by the year twenty twenty seven.

When I ask how OpenAI survives as it spends nine billion dollars to lose five billion dollars, I'm obliquely threatened by Casey Newton of Platformer and Hard Fork, who says he's taking detailed notes about anyone who believes that OpenAI might go bankrupt or run out of money. When Ezra Klein suggests that AGI is about to arrive in a conversation with some sort of former Biden administration AI con artist, I'm sent the link thirty times with people saying, "does this mean you're wrong?"

I realize I'm complaining, but I'm justified in doing so. Why the fuck do I and other critics have to make rigorously founded and persuasive arguments while AI companies spout fantastical nonsense? Why does Sam Altman get headlines when he posts about, and this did just happen, by the way, making an AI that can do creative writing? And I wish it was just ignorance. People like Casey Newton and Ezra Klein aren't stupid, but they're also fully willing to back the narratives of powerful people that they want to be friends with. They want the rich and powerful to win, and they want to be the people that write their narratives and get their interviews. And yeah, I'm being petty, these are people that ostensibly compete with my work. But people with such a large audience have a responsibility to said audience to report what's actually happening, not what they wish would come true.
And really, I've got to ask: how does all of this end? Right now, we've got Anthropic, a company that allegedly makes one hundred and fifty million dollars a month, but loses over five billion dollars a year or so, both as reported by The Information. And they make a commoditized product, one very similar to OpenAI's, a company that will also likely lose a shit ton of money, eleven billion dollars or more, in twenty twenty five. These companies are dependent on receiving billions or tens of billions of dollars a year in funding for an indeterminately long period of time, for an equally indeterminate goal.

I'm being completely objective here. There's nothing that these companies have made that suggests anything will change. Every new version of Claude Sonnet or GPT is iterative, and the products we see today are alarmingly similar to the ones we saw in the last two years. Despite everybody talking about agents, the actual agents that exist don't really work, and those that are able to kind of complete a task cost thousands of dollars and, again, don't always work. This industry is unprofitable, unsustainable, and does not appear to be able to create a product that people want to pay for, let alone one that they pay enough for to put the company making it in the green. We're two years in. How do we not have one profitable generative AI company? Other than, boy, is it Turing? They're a consultancy, it does not count. I do want to say, though, I'm not cheering for the apocalypse. What I've been describing for the last year is a group delusion where hundreds of billions of dollars got funneled into an environmentally and financially destructive distraction from the real problems that humanity faces.
The longer this goes on, the worse it will be for the tech industry, because once this bubble bursts, it will puncture everything: tens of thousands of people laid off, brutal damage done to tech valuations, and likely a glut of tech talent that depresses wages across the Valley. What's important to know is that so much of this could have been avoided. Microsoft could have chosen not to continue sustaining OpenAI, just as Google and Amazon could have refused to back Anthropic, or just not done this nonsense at all. So-called reporters like Casey Newton and Ezra Klein could have made these companies justify themselves rather than operating as so-called cautious optimists who end up mostly just parroting marketing materials. And the larger media could have covered generative AI based on what it does, rather than what they're told it might do by somebody who has a financial incentive to lie.

In any case, when this collapses, mark my words, I have been taking very, very detailed notes. I've been watching those who have sustained this bullshit narrative, and other bullshit narratives in cryptocurrency and the metaverse, willfully misleading the public in the process. And when the time is right, I will coldly and clinically read you every single time they've done so. Anyway, enjoy tomorrow's episode.