In this episode, Ed Zitron is joined by Carl Brown, a veteran software developer and host of The Internet of Bugs, to talk about the realities of software development, what coding LLMs can actually do, and how the media gets it wrong about software engineering at large.
https://www.youtube.com/@InternetOfBugs
New GitHub Copilot Research Finds 'Downward Pressure on Code Quality' - https://visualstudiomagazine.com/articles/2024/01/25/copilot-research.aspx
Report: AI coding assistants aren’t a panacea - https://techcrunch.com/2025/02/21/report-ai-coding-assistants-arent-a-panacea/
Internet of Bugs Videos to watch:
Debunking Devin: "First AI Software Engineer" Upwork lie exposed!
https://www.youtube.com/watch?v=tNmgmwEtoWE&t=3s
AI Has Us Between a Rock and a Hard Place
https://www.youtube.com/watch?v=fJGNqnq-aCA
Software Engineers REAL problem with "AI" and Jobs
https://www.youtube.com/watch?v=NQmN6xSorus&list=PLv0sYKRNTN6QhoxJdyTZTV6NauoZlDp99
AGILE & Scrum Failures stuck us with "AI" hype like Devin
https://www.youtube.com/watch?v=9C1Rxa9DMfI&t=1s
YOU CAN NOW BUY BETTER OFFLINE MERCH! Go to https://cottonbureau.com/people/better-offline and use code FREE99 for free shipping on orders of $99 or more.
---
LINKS: https://www.tinyurl.com/betterofflinelinks
Newsletter: https://www.wheresyoured.at/
Reddit: https://www.reddit.com/r/BetterOffline/
Discord: chat.wheresyoured.at
Ed's Socials:
https://www.instagram.com/edzitron
Cool Zone Media. Hello and welcome to Better Offline. I'm your host, Ed Zitron. Today I'm joined by Carl Brown, a veteran software engineer and host of the excellent YouTube channel Internet of Bugs. Carl, thank you for joining me.
Thanks for having me.
So I'm going to start with an easy one. What does a software developer actually do? What is the job?
So, basically, what we do is we take ideas about problems that people want to solve, generally, and we write software. We write code that gives the computer instructions for how to do the thing it needs to do to solve the problem the person wants solved.
Right.
Game programming is a little bit different, but most software development is basically that.
And this is another quite silly question, but necessary. How much of that is actually writing code?
It depends on how good the people that are asking for stuff are. As a general rule, I would say maybe between ten percent and twenty five percent.
Okay, I just want to sit with that: ten to twenty five percent. Even if we say thirty percent of the job, which is more than you said, that means the majority of this job is not actually writing code, right?
That's largely for folks that are farther up the chain, right? So if you're fresh out of school and you don't yet understand how to manage requirements or any of that kind of stuff, someone's going to basically hand you a thing to do, and in that case you're going to be spending a lot more time writing code than that. But for me, it's, you know, far far more talking to people and stuff than actually coding.
Right. The reason I asked that, and the reason we're doing this as well, is there have been a lot of stories around, like, LLMs replacing coders, LLMs replacing engineers, claiming that junior software engineers will be a thing of the past due to LLMs. How much validity is there in that?
Well, when it comes to the really, really, really fresh out of school kids, right.
You have to basically break everything down into little chunks of work, and an LLM can kind of do that, although the kid will get better over time and the LLM is pretty much fixed, right? But past that, it doesn't do a good job of being able to do any kind of long term thinking, and that's largely the job, right? I mean, this is not a job where, you know, I come in today, I do a thing today, I come in tomorrow having no understanding of what happened yesterday, do another self contained thing, and so on and so forth.
Right, that's not the job.
The job is a long sequence of building up on things, day after day after day, until we get to the point where the whole thing together works and does what it's supposed to do.
So one thing I've noticed, and one of the reasons I had you on as well, is that there are so many of these stories claiming that the software engineer's job is gone, that these companies will be writing all of their code with AI, and it doesn't even seem like that is possible. One of your videos, you did a really good segment, I'll link to this in the notes, on the claim that twenty to thirty percent of the code behind Meta, and I think it was Google, is written by AI now. Again, how much validity is there to that?
Well, I mean, one of the quotes was something to the effect of: thirty percent of the code is suggestions that were given by autocomplete that a human accepted, right? Which could be as little as, you know, the thing said, oh wait, you spelled this wrong.
Let me give you a suggestion about how to spell it correctly.
Right, right. I mean, how much of the actual text that you write is, you know, corrected by a spell checker?
Right?
If all that counts as AI, then what percentage of your stuff is written by AI?
Right?
Well, in my case, absolutely nothing, but I'm just a complete freak that way. But no, I get your point, and without being a coder myself, it's something I've really noticed across these stories, where people just kind of blindly push them out and say, oh, twenty to thirty percent of the code is written by AI, but there's no verifying this. And it also feels like it might create a bigger problem, which is: say we accept this idea, even though I don't, and it sounds like a pretty spurious one. Isn't code not just a series of things that you write to make a program work? It's connected to a bazillion other things, and if you don't know why something was written, because it was generated, is that not a huge problem?
Yes?
But worse, what we're finding when code gets generated is that you basically end up doing the same thing in a bunch of different places, but in each one of those different places, you do it a different way.
Can you give me an example?
So, for example, when you need to go fetch a thing from a server, right? Well, over here in this code you fetch a thing from the server, and over here in the code you fetch a different thing from the server. Normally, you'd be able to use the same block of code to do that, so that if there's a mistake in it, you can change it once and it's fixed everywhere, right? But the way the LLMs work is, you say, hey, I want to fetch a thing from the server, and it says cool, and it writes a whole thing for you that may or may not work the same way as the previous one.
Right. And so now you find, okay, under some circumstances we're having a problem fetching things from the server, and I don't know which one of these twelve implementations that fetch from the server is the one that's actually causing the problem.
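To make Carl's point concrete, here's a toy Python sketch. The function names and the JSON field are hypothetical, not from any real codebase: two generated blocks do "the same" parsing job in subtly different ways, so a fix applied to one copy never reaches the other.

```python
import json

# Toy illustration (hypothetical names): two generated blocks that parse
# "the same" server response in subtly different ways.

def parse_user(raw: str) -> str:
    # Generated copy 1: defensive, falls back when "name" is missing.
    data = json.loads(raw)
    return data.get("name", "unknown")

def parse_order(raw: str) -> str:
    # Generated copy 2: same job, but crashes on a missing "name".
    # The fix made to parse_user never reached this copy.
    data = json.loads(raw)
    return data["name"]

# The same malformed response behaves differently in each copy:
print(parse_user("{}"))          # "unknown"
try:
    parse_order("{}")
except KeyError as err:
    print("order parser crashed on missing field:", err)
```

Debugging "sometimes fetching fails" now means finding which of the near-duplicate copies is the broken one.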
Right. Well, so isn't there a security issue with large language models? Wouldn't all the code be quite similar, or at least more similar, if everyone's using Claude, or everyone's using, well, GitHub Copilot? I guess that's Claude now.
No, not really.
It basically kind of picks a random number at the beginning and goes, okay. I think of it kind of like you deal a deck of cards, right? Whichever card gets turned over first, that's the beginning of the autocomplete. And so depending on which example it's, I don't want to say thinking of, but depending on which example is represented by that card, I'm drastically oversimplifying, it's going to go down one path or another.
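To make the deck-of-cards analogy concrete: sampling-based generation draws from a probability distribution, so different random draws send the completion down different paths from the very first token. A toy Python sketch, where the candidate tokens and weights are made up, not real model output:

```python
import random

# Toy illustration of the "deal a deck of cards" point: the first random
# draw picks the opening token, and the completion diverges from there.
# Tokens and weights below are invented, not real model probabilities.

def next_token(seed: int) -> str:
    rng = random.Random(seed)
    candidates = ["requests.get(", "urllib.request.urlopen(", "httpx.get("]
    weights = [0.5, 0.3, 0.2]  # pretend probabilities for the next token
    return rng.choices(candidates, weights=weights, k=1)[0]

# Different seeds can turn over a different "first card":
for seed in range(5):
    print(seed, next_token(seed))
```

Same prompt, different draw, different fetch style: which is exactly why two generated fetch blocks in the same project need not match.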
Right. And so what are these large language model coding tools actually good for? Because I get a lot of people who respond by saying this is proof that AI is a big deal, and I'm just kind of like, I'm not even looking for a particular answer, just truly, what's useful about them?
So they are decent when you know what you want, and what you want is a fairly simple, self contained thing, and you know how to tell whether or not the self contained thing does what you want. It can type it faster than you can.
Like a souped-up autocorrect?
Basically, yes. It's like autocomplete, if you know exactly what you want. I mean, I use it a lot because I program in a bunch of different programming languages, right, on different projects at the same time, or in the same day or the same week. And it's really easy for me to go, okay, wait, which language am I in right now? Okay, how do I do this in this language, right?
So it's kind of, you can actually understand the generation when it comes to...
Yeah. It's like, I know what kind of loop I want, but I don't remember the syntax for this particular language, or I don't want to look it up. So I use it kind of like a Google Translate kind of thing, to go from one programming language to another sometimes.
But you wouldn't trust it to build a full software package.
Oh not at all?
Why not?
Well it wouldn't work to start with.
Why wouldn't it work?
Well, I mean, I've done some experimentation on that, where I've taken fairly complicated challenges.
Challenges?
Challenges are intended for programmers to basically get better at their craft, that kind of thing. And I've run AIs through them, where you tell it, step by step: okay, the challenge says this is your next step, do this. The challenge says this is your next step, do this. On really simple challenges, in programming languages like Python that it's got lots and lots of examples for, it does okay. Past the really simple kind of thing, they sometimes get to the point where they can't even create anything that builds at all.
Huh. Why do so many engineers swear by it, then?
Honestly, I'm not sure to what extent the engineers are swearing by it. I've talked to a lot of folks who are like, you know, my group at this big bank, a friend of mine, my group is getting Copilot jammed down our throats whether we want it or not, and the executives are all really excited about it, and none of us are interested.
So it's executive-pushed. I've personally had this theory that it's all about what the bosses want to see, rather than what it can actually do.
Sort of. There's a lot of wish fulfillment.
There's a lot of, like, we want to not have to deal with these programmers anymore, so we would rather deal with the AI thing, and we're just going to hope that the AI thing is going to be, you know, just as good as the programmers, or close to as good as the programmers, and not nearly as annoying.
Seems like a definitional, well, maybe that's not the right word, seems like the difference between a software engineer and a software developer, almost, because it's not just about flopping code out, it's about making sure the code does stuff.
Yeah, I mean, those terms get mashed together, yeah.
So part of the problem is that, like, I live in Texas, and in Texas you're not allowed to call yourself an engineer unless you've passed the engineering exam, right? So I literally can't call myself a software engineer legally in Texas, as I understand it. I'm not a lawyer, but that's my understanding. So the terms get all confused.
Right, So somewhat related, what is it that people misunderstand about the job?
Well, I mean, one of them is what you said earlier, which is that a very small percentage of the job is actually slinging code. A lot of it is basically trying to figure out what it is the code should do, based on what you've been told about the problem, you know, the solution to the problem that you're trying to solve.
Another thing is that.
A lot of the problem with the job is that every little decision builds up over time, and at some point a bug is going to happen. They're inevitable. And when that happens, basically there's this process where, if you're being competent, what you need to do is roll back through the series of decisions, figure out what caused that bug, and then figure out what other bugs are likely to have been caused by that same set of decisions, and then fix not just the bug that's been reported, but the bugs that might have also been caused by the same problem, right? And that kind of long term thinking is not a thing I've ever seen an LLM exhibit at all. I talk about it like: LLMs, or generative AI, are good at solving riddles, but actual software development is more like solving a murder.
Yes, you said that in that wonderful video. And it almost feels as if we're building towards an actual calamity of sorts. Maybe not an immediate one, maybe it'll be kind of sectioned off into areas. Because you've got a new generation of young people coming into software engineering, or what have you, learning to use AI tools rather than, and your videos definitely talk about this as well, actually learning how to develop software: how to make sure it works, make sure the infrastructural side is in line, and build it with the long term thinking that someone else might need to understand how this works. They're not learning that. So you've just got a generation kind of pumping the internet and organizations full of sloppier code.
Yes. Although, I mean, one of the problems we're having at the moment is that the hiring process for really junior engineers is actually pretty broken, and a lot of people are not hiring people that are fresh out of school, because they're expecting that basically a senior or a mid level developer, with the benefit of AI, and that's in air quotes, will be able to do the work of that person plus a couple of fresh-outs that they normally would have hired but aren't hiring at the moment. There are some statistics about how the people that are fresh out of school these days are historically underemployed relative to the general population, at least in the US, where I live.
It also feels like there's no intention behind the code. If you're just generating it, you don't really know why you made any of it. You could say, I chose these lines. But at some point, if you have large amounts of software developers using it, however large, or the young people in an organization using it to generate their code, they're neither learning to write better code nor really learning the job. They're just learning how to fill in blanks.
Yeah. I mean, the trick is, those of us that have spent a whole lot of time debugging software, right, finding the problems and digging into them and trying to figure out what's going on, that kind of stuff. It's going to be really hard for younger folks to get hired into those jobs so that they have time to build the experience to be able to do that. And I'm afraid we're going to end up with basically an older generation, or generations, retiring, and a newer generation that hasn't had the experience of doing that kind of debugging. And then it's going to be a real mess, especially since, from what I can tell, the code that the AIs generate is a lot buggier, and buggier in weirder, randomish kinds of ways. Stuff just kind of comes out of nowhere. I mean, I've debugged code from people that don't speak the same languages as I do, all that kind of stuff. AI code is different. It's just like, okay, why would anyone want to put that block there? That doesn't have anything to do with what we're trying to do at the moment.
And why is that? Is it just because it's probabilistic?
I guess so.
I mean, it's hard to say why. The idea of why an LLM does what it does is kind of, you know, anybody's guess.
Yeah. I keep thinking of the word calamity, because you sent me these studies as well, about how they found a downward pressure on the quality of code on GitHub. Would you mind walking me through what that means?
Yeah. So basically what that study found, there have been a couple of them, but what that particular study found, is that what they call code churn has gone up. And code churn is basically: you add a line of code, you push it into test or to production, and then within a short period of time, I don't remember exactly what the definition was, like a month or two months, that line of code changes, right? So basically what that means is that after the line got put in, somebody decided, oh wait, no, that doesn't work right, we're not happy with that, we're going to change it to be something else. And the percentage of lines that get changed fairly quickly after they get submitted has gone way up since the introduction of GitHub Copilot. And this is across, like, most of the giant, you know, millions of lines of code on GitHub.
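For illustration, here's a rough Python sketch of what a churn-style metric can look like. The study's actual definition is more careful than this, and the dates below are invented:

```python
from datetime import date, timedelta

# Rough sketch of a churn-style metric (the study's real definition is
# more careful): a line "churns" if it is changed within some window of
# first being committed. The window and the history are made up.

CHURN_WINDOW = timedelta(days=14)

def churn_rate(lines):
    """lines: list of (added_on, changed_on_or_None) pairs per line."""
    churned = sum(
        1 for added, changed in lines
        if changed is not None and changed - added <= CHURN_WINDOW
    )
    return churned / len(lines)

history = [
    (date(2024, 1, 1), date(2024, 1, 5)),   # changed 4 days later: churn
    (date(2024, 1, 1), None),               # never changed: stable
    (date(2024, 1, 2), date(2024, 3, 1)),   # changed 59 days later: not churn
    (date(2024, 1, 3), date(2024, 1, 10)),  # changed 7 days later: churn
]
print(churn_rate(history))  # 0.5
```

The finding Carl describes is that this kind of ratio, lines rewritten soon after landing, rose after Copilot's introduction.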
For a simpleton me, why does it being changed? Why is changing it so often bad?
Well, I mean, if you do it right the first time, you can move on to the next thing.
Ah.
Right.
It's like, you know, if you're writing a document, and you put the document in there, and you're in Google Docs tracking changes, and it's like, okay, this sentence has changed seventeen times. Obviously the person isn't happy with it, right?
So the generated code isn't good, right, and so people see they need to change it.
That's the presumption, yes.
It also talked about the code quality itself. Is that the only way they measured it, or were there other things as well?
So they measured, uh, like, moved code.
The moved code?
The thing I was talking about earlier, where you've got a bunch of different places in the code that all try to do the same function, but they do it in different ways. Normally, what would happen is: you do a thing here, right, and then at some point in the future you need to do that thing again in a different place. And so what you would do is move that original block that does the thing someplace else, and then you would call that block from both places, because it already works, right? And that way, however you fetch that stuff from the server, you're fetching it the same way. But with this thing, basically, instead of doing that, you've got copy paste: okay, we put another one here, and we put another one here, and let me put another one here. And it's a maintenance nightmare.
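The refactor Carl describes can be sketched in a few lines of Python. The names are hypothetical; the point is that both call sites share one block, so a fix lands everywhere at once instead of living in one copy:

```python
import json

# Sketch of the "move the block and call it from both places" refactor
# (hypothetical names): one shared implementation instead of copy paste.

def parse_response(raw: str) -> str:
    # The single shared block: fix a bug here and it is fixed for
    # every caller, instead of surviving in forgotten copies.
    data = json.loads(raw)
    return data.get("name", "unknown")

def parse_user(raw: str) -> str:
    return parse_response(raw)

def parse_order(raw: str) -> str:
    return parse_response(raw)

print(parse_user('{"name": "ada"}'))  # ada
print(parse_order("{}"))              # unknown
```

Both callers now behave identically on the same malformed input, which is exactly what the copy-paste version can't guarantee.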
So, for the audience as well, how does a software developer actually use GitHub? Really simple stuff, I realize, but it just occurred to me that this may be something most listeners don't know, which I think is good to cover.
Yeah.
So what we do is we make changes to code. We get to the point where we, the developer, are happy with the way it's set up on our machine, and then we do what's called a push, and we basically send all that code, submit all that code, up to GitHub. And then, theoretically, there can be automatic processes that kick in that check that code for particular things, and run tests on it, and that kind of stuff. And then at some point we have a thing called a pull request, which is basically a thing that says, okay, I would like this to go into production now, or, more or less, I would like this to get promoted into the next phase now. And then someone, theoretically, will look at it and go, okay, that's fine, and click the yes button, or say, hey, you forgot about this, go look at this, that kind of thing, right? And the pull request is kind of the unit of work, kind of.
So with GitHub, you almost use it like an organizational code dump, centralizing all the code. Sorry, just for the non-coders as well. And I think the LLM industry has done a really good job of dancing around these terms and selling them to people like me. Well, they didn't work on me, I am too stupid. But they've just been like, okay, lots of people use Copilot, that's good, and this is good because software is coding. But it kind of feels like, I don't know, all of this is taking one major part out of software development and ruining it. And I don't even mean coding. I mean the intentionality behind software design and infrastructure and maintenance. It seems like they're removing intention in multiple places.
So the way I would say it is when they talk about the AI being able to do the work of a programmer, what they're doing is they're devaluing all of the stuff that's not just hacking code, right, And so what they're saying is that basically the job of a developer is basically just you know, typing, basically, right, And that all of the work that we do to understand what the problem actually is and how it needs to work, and you know what other problems are likely to show up when we try to do that, and how to avoid those things as we go and that kind of thing, all that work is basically not important.
And, I mean, two words which would probably annoy you: vibe coding feels like the other part of this. So, correct me if I'm wrong, vibe coding is just typing stuff into an LLM and software comes out, and hopefully it works.
Yeah. Vibe coding is basically when you intentionally,
Well, I don't know about intentionally, but basically you make a point of not digging into the code and looking at what the LLM is doing, and you basically say, okay, I would like something that does X, right? I would like a game where I fly airplanes around a city, or something, right? And then you get what it spits out, and then you say, okay, let me try it. Okay, well, can we have more airplanes? And okay, can we have some balloons with, you know, signs on them now? And can we do this kind of thing? And you don't think about what the side effects are. You don't think about what things could go wrong, you don't think about error conditions, that kind of stuff. And you just hope that whatever you look at has the right vibe, and that, you know, if it looks kind of like what you wanted, then probably it's going to be fine, or hopefully it's going to be fine.
How do you feel about vibe coding?
So I do it sometimes. Vibe coding is great for a thing that you're going to do once and then throw away.
Yeah.
Right. So if it's like, you know, okay, I want to do a thing, I want to make this table go into this format over here, that kind of thing. You do it, you get the output you want, you throw the code away. No big deal, right?
Like a prototype, almost.
Yeah, basically. And, you know, we call them spikes or tracer bullets sometimes. It's like, let me get a thing that works at all, right, and then let me see what I can learn from that to move into my big maintainable project. But for anything that's like, this thing needs to run for a while, this thing needs to not get hacked, this thing needs to, you know, not crash? It's a really bad idea.
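The kind of disposable one-off Carl is describing might look like this: a throwaway Python script that turns a small CSV table into a Markdown table. The data is made up; you run it once, keep the output, and delete the script:

```python
import csv
import io

# Throwaway one-off: convert a small CSV table to a Markdown table,
# use the output once, delete the script. Data below is invented.

raw = """name,language,year
Carl,C,1989
Ed,English,2008"""

rows = list(csv.reader(io.StringIO(raw)))
header, body = rows[0], rows[1:]

lines = ["| " + " | ".join(header) + " |",
         "| " + " | ".join("---" for _ in header) + " |"]
lines += ["| " + " | ".join(r) + " |" for r in body]
print("\n".join(lines))
```

Nothing here needs to survive contact with users, attackers, or next month's requirements, which is exactly why vibe coding it is low risk.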
Yeah. And at some point, I feel like someone building a product that they don't really understand the workings of, it's kind of almost identical to generating a story with ChatGPT, except more complex and more prone to errors.
Yeah.
And the other thing is that there's an adversarial component, right? People will intentionally try to go hack that thing that's sitting on the internet, in a way that they don't intentionally try to go mess with the story that you wrote, right? And so even if it works all by itself, that doesn't mean it's going to work when somebody starts pounding on it, intentionally trying to break it. And if they can break it, then that's a whole other set of problems that you now have.
It feels like quality assurance is just never part of it. Oh no, are they claiming they're going to do quality assurance with large language models yet? They must be.
Some people are. Yeah, I mean, to be honest, a lot of companies have just been getting rid of quality assurance over the years.
Right.
Really. When I worked at IBM, we didn't have quality assurance at all. No, seriously. I was in IBM's Cloud group, and they would do these, what do they call them, hackathon kind of things. That's not what they called it; I don't know what they called it.
But basically, everybody in all the other development groups would get together and bang on the code that was about to get released from some other group, to try to see if they could break it. But they didn't have dedicated testers anymore, because, I guess, they decided they didn't think they were worth the money.
I don't know, but we had some issues because of that.
When did this happen?
I don't know exactly. I was at IBM in like twenty seventeen, twenty eighteen, right, so it would have been sometime prior to that.
When I got there, they didn't have any QA folks.
Really. It just feels like it's a management problem as well. It's management cutting people.
I would think so.
It's a real shame as well. And forgive me if I'm forgetting exactly where, but you've mentioned as well that there's like compound scar tissue from AI generated code, a larger problem from lots of this code being generated with AI.
Well, that's my extrapolation, right? Yeah, just a potential worry, right.
Right. So the more of this we get, and the more issues that we have, the more stuff we're going to have to dig out of, right? And what I'm honestly envisioning at some point, I don't know how long it will take,
The crypto bubble took way longer to pop than I expected.
so I don't know how long it's going to be before this one does, but I'm expecting that there's going to be this big push to try to clean up a bunch of this crap here in a few years, once people realize that a lot of the code being written and generated right now has all of these vulnerabilities that nobody's bothering to check for at the moment.
Right. And those vulnerabilities, again in a non-technical way: I read that LLMs will call upon things on GitHub that don't exist, so bad actors create something that resembles what it's pulling from.
So that's a more specific kind of one. I mean, there are a lot of things. There have been computer viruses since the eighties, right? You know, the Morris worm and that kind of stuff. And basically, there are known ways that, you have to write code in a particular way in order for it to be secure, right? And even then, sometimes people come up with novel ways of making something not secure.
And how do you have to write it to make it secure, if it's possible to explain?
Well, I mean, there's a big, long list of rules, right? One thing you can do is use languages that are what they call safer. But still, you have to make sure that any input that you get from the network, you're really, really careful to make sure that it doesn't get to overwrite parts of your program that actually execute things. You have to make sure that it doesn't have the opportunity to write to places on your disk that it shouldn't be able to write to. You have to make sure that it doesn't have access to read data that it shouldn't be able to read, all that kind of stuff. And when those things don't happen, you end up with, you know, so-and-so got hacked. You know, it turns out that somebody, we think maybe China, is reading the email of people in Congress. You get another letter in the mail that says your Social Security number has been leaked by some credit checking firm, or something like that.
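As one concrete instance of the rules Carl lists (a sketch, not a complete security checklist), here's a minimal Python example of refusing to let untrusted input write outside an allowed folder. The directory name is hypothetical:

```python
from pathlib import Path

# One instance of "network input must not pick arbitrary write paths":
# resolve the candidate path and confirm it stays inside the allowed
# root before writing. The root directory here is hypothetical.

UPLOAD_ROOT = Path("/srv/uploads")

def safe_upload_path(untrusted_name: str) -> Path:
    candidate = (UPLOAD_ROOT / untrusted_name).resolve()
    # A name like "../../etc/passwd" resolves outside the root and is
    # rejected instead of silently touching system files.
    if not candidate.is_relative_to(UPLOAD_ROOT):
        raise ValueError("path escapes upload root")
    return candidate

print(safe_upload_path("report.txt"))
try:
    safe_upload_path("../../etc/passwd")
except ValueError as err:
    print("rejected:", err)
```

Skip a check like this, and the "somebody got hacked" letter Carl mentions is one crafted filename away.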
Even, I think it was the big Target data breach from a while back, that was through the HVAC system. And that was with humans writing the code, right? Imagine if we didn't know. Oh god. It really does feel like, actually, no, I take it back. You were talking about Agile the other day, and I'm going to ask you to explain that in a second. But it sounds like for almost decades, management's been gnawing away at the sides of building good software and building good software culture.
Yes. I mean, there's an argument that says we never got it right in the first place. But if you think about it, software has been a thing for what, fifty, sixty, seventy years, right? Compare that to, like, construction engineering or bridge building, that kind of stuff. We're still, relatively speaking, in our infancy as an industry. It's been a constant evolution, and a lot of times the things that we did to solve a problem we had ended up causing other problems.
Right.
So, going back to Agile: in the long long ago, right, we used to manage software projects the same way we manage, like, bridge building and building construction projects. And it turns out that when you're going to build a bridge, you know beforehand what you need the bridge to do. When you're building software, a lot of times people are changing their minds as you go, right? You build a thing and you show it to them, and they're like, oh, why don't we put this over here, and why don't we change this, that kind of thing, right? Because you don't have the same kind of physical constraints that you do when you're trying to build a bridge. And so we got into this problem where you would create these project plans about how you were going to build this thing, and you would never be anywhere close to on time, because things would change the whole time. And so they created this thing called the Agile methodology. I'm drastically simplifying, there were steps in the middle, but basically, this Agile thing is where, instead of saying, okay, this is what the whole project's going to look like, we're going to be done in six months, and then having things change along the way, we block off a thing called a sprint. It's a week or two, or a month, maybe, it depends; everybody picks their own sprint length. And then you go, okay, I'm only going to talk about what's going to happen in the next sprint or two, right? And then you get to the end of those two weeks and you go, okay, cool, this is what we got done, what do we want to do next? And then, okay, that's what we got done, what do we want to do next? And that kind of thing. And that way, as you go, you have the opportunity to change things, to roll changes into the process, that kind of thing.
The problem with that is, kind of the same way that dates always ran long in waterfall land, projects can go way, way longer than they were expected to at the beginning, because everybody's focused on just two weeks at a time, and you never take the big step back like you ought to and go, okay, wait, we were supposed to be done two months ago.
When are we going to wrap this up?
Right. And how has that led to things getting worse? Is it just that software development culture has been perpetually focused on the short term?
The short term is part of it.
Part of it is there are, you know, unscrupulous developers out there that basically want to extend the length of the project so they can get more money out of it.
Right, right. That's always the case.
But the other thing is that, a lot of times, you end up with a real lack of long-term planning and long-term understanding, right, because, you know, it's the same kind of thing: companies are only worried about what happens next quarter.
Right.
If you're only worried about what's going to happen in the next week or the next four weeks, the things that you look at tend not to have the longer-term implications that sometimes you need, right? And there are times you get close to the end and you're like, oh, you know, we didn't think about this problem. Yeah.
And also, if you're in a two or three week thing, you're probably not thinking even about what you did last sprint.
Maybe the last one, but not, like, two or three sprints ago.
Is this a problem throughout organizations of all sizes? Is this a consultancy problem? Is it everywhere?
It's most places. There are some places that aren't.
Usually in startups, where we're a lot more ad hoc and we're a lot more, you know, focused on trying to get things done. Basically, the idea is: the larger you get as an organization, and the more money you're throwing at it, and the more management control you want, the more of this overhead you put in place, and the more complicated things get, just as a management-structure kind of thing.
And so this is something you'd see at, like, a Google and an Amazon as well.
Oh absolutely so.
Do you think it has the same organizational effects?
Largely, yes.
So those organizations tend to be, well, those organizations historically have tended to be...
Before the recent, like, enshittification wave.
I'm assuming I can swear on this. Yeah, yeah. Those organizations have historically been fairly more engineering-driven, which means you typically have people higher in the organization that are technical, that have been programmers, and who understand some of the implications, and so they tend to try, at least, to run interference with management, and to try to, you know, make sure everybody's on the same page, that kind of stuff. A lot, not all, but a lot of problems can get lessened if you have people in the organization at a higher level whose job is not to manage people, but whose job is basically to keep track and coordinate between different groups that are doing different technical things, right, to...
Make sure people aren't building the same thing, I'm guessing, or are building the right thing in the right way.
Yeah, and making sure that what this group is building is going to hook into what that group is building at some point in the future, and that when you get to the point where those two things need to talk to each other, they're both aware enough of what the other one is doing that the two things hook together correctly.
Yes. So, based on my analysis, at these companies that's definitely gone out the window, even with LLM integrations. There was a Johnson & Johnson story that went out in the Wall Street Journal a couple of weeks ago where it was like they had eight hundred and ninety generative AI projects, of which, the Pareto principle wins again, ten to fifteen percent were actually useful. And the thing that stunned me about that, the fact that confirmed my biases, which I love, was that there were eight hundred and ninety of the fucking things, and no one was like, should we have this many? There was no, like, software engineering culture that was like, hey, are we all chasing our tails? Is this useless? It sounds like they were all focused on their little boxes.
Yeah, I mean so the other thing.
So understand that, again, greatly oversimplifying: a lot of the new stuff that's happened with large language models and generative AI, people didn't expect, right? It was kind of a surprise when you threw a whole bunch more data at a large language model and it started spitting out text in a way that nobody really... there was no, like, mathematical reason to expect it to be as good at generating autocomplete stuff as it is.
Right. And so there's this belief that, if we did the thing and we unexpectedly got more than we asked for, then if we do more of the thing, maybe we'll unexpectedly get more of what we wanted. That hasn't really seemed to pan out the last couple of years, from what I can see, but this "we don't really understand enough about this to know whether it's going to work, so we might as well throw spaghetti at the wall and see if it sticks, because it might" kind of mentality is pervasive at the moment. And there's a lot of FOMO, a lot of, like, you know, well, our competitors are probably doing this, and so we don't want to get left behind. It kind of reminds me of the rumors they talked about back in the eighties, when the CIA was doing all this psychic research because supposedly the Russians were doing psychic research, and it was all complete crap, but both sides were convinced the other side was making some progress, and so everybody was dumping a ton of money into it.
The LLM Cold War, exactly. Yes, that's the title of the episode. So, okay, Cold War aside, is this something you're seeing in software development, though? Because I know I've seen it in management, where it's just, like, shoved into everything. This seems like an important thing, right? Are you seeing it within software development?
So I am seeing it within software planning, right. So when managers are sitting down and saying, okay, we need to build this new thing, we need to create a new group, we need to split this group apart, we need to decide what our headcount is going to be for next year, there's a lot of: okay, and what do we think the AI is going to do next year, and how much headcount do we think that's going to save us?
That kind of thing, right. There are some companies, Duolingo is one, Klarna is one, oh, sorry, BP, the former British Petroleum, that last year had a thing where they said they were cutting seventy percent of their contract software developers.
And in most of these they've kind of rolled them back as well.
And I don't think Duolingo has yet.
Oh, this is news to you? They did, like a day ago. Really, just like that. It's so funny. It's so funny that this just keeps happening in exactly the same way. It's like, oh, what a surprise, human beings need to do stuff. Yeah. But it kind of gets back to, I think, what you've said about everything with LLMs. It's like, you can teach something to say, yeah, I think the thing you're looking for is this, but you can't teach it context. And that's been a point you've made again and again. It seems the job of a software engineer is highly contextual, unless you're, like, in the earliest days.
Yeah, and I liken it sometimes to the Memento guy, from the Memento movie, right, where he can't form long-term memories. Do you really want the Memento guy to be the person that's building the software that makes the seven thirty-seven MAX able to compensate for its control inputs? Yeah.
Well, the thing is, though, with that argument, they would argue, and I know that there is a better argument here, they would argue: well, what if we just give it everything that's ever happened? What if we just show it every single thing we've ever done on GitHub? Surely then it would understand.
So, what I have seen from the papers that I have read is that LLMs have a basically squishy-middle context problem, kind of the way that you do, right? So if somebody gives you a big document to read, or a big long documentary to watch or something, and then they ask you questions, what they're going to find is that you remember a lot more from the beginning of that and the end of that than you do from the middle of that. Right? And LLMs have the same kind of problem. And the other problem the LLMs seem to have is that when you give them a whole bunch of instructions, just instructions piled on instructions piled on instructions, they can either get confused and forget some of the instructions, or they deadlock, or they just start going, okay, I can't satisfy all of these, I'm not even going to bother to satisfy any of them, or they'll pick one or two. The fact that you can take a million tokens and stick that in the memory block the GPU is going to process doesn't necessarily mean that all of the tokens in that memory block are actually going to be treated equally and actually understood.
Right. In theory, maybe, if you could, like, custom-train an LLM and modify all of its weights based on exactly what your stuff was, and do that day after day after day as things changed, you would theoretically get better. I still don't think it would understand the context as well, but anyway, that would be ridiculously expensive.
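The squishy-middle effect Carl describes is usually measured with "needle in a haystack" probes. Here's a minimal sketch of how such a probe gets built; the filler text, the needle fact, and the depths are all invented for illustration, and the actual LLM call is omitted:

```python
# A sketch of a "needle in a haystack" probe: bury one fact at a chosen
# depth in a long run of filler, then (in a real test) ask the model to
# recall it and score accuracy by depth. Everything here is invented.

FILLER = "The quick brown fox jumps over the lazy dog. "
NEEDLE = "The secret deployment code is 7421."

def build_probe(depth_pct, total_sentences=100):
    """Place the needle depth_pct percent of the way through the filler."""
    pos = int(total_sentences * depth_pct / 100)
    sentences = [FILLER] * total_sentences
    sentences.insert(pos, NEEDLE + " ")
    return "".join(sentences)

# Typical published finding: recall is strong for needles near 0% and
# 100% depth, and noticeably weaker in the middle of the context.
for depth in (0, 50, 100):
    prompt = build_probe(depth)
    assert NEEDLE in prompt  # every probe contains exactly one needle
```

In a real evaluation you would send each probe to the model, ask for the buried fact, and plot recall against depth.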
Yeah, and at that point you could train a person.
Yes. I mean, the person would probably be more annoying. That's, I mean, the point. I mean, a lot of this seems to be really, you know, the "we don't like dealing with the prima donna programmer" kind of thing, right? And, I mean, not just programmers, right? We also don't want to deal with the prima donna reporters or the prima donna illustrators. We just want to get rid of these people, right?
They're annoying. They ask for stuff, they want money.
Yeah, and days off, and sick leave, even, you know, healthcare.
It's just disgusting, how dare they. It's frustrating as well, because across software development and everything, but especially with software developers, it feels very insulting, because it doesn't seem like this stuff actually... here's a better question. Have you seen much of an improvement with, like, o1 and o3, these reasoning models? Do you think the reasoning models change things for the better? If so, how?
A little. They don't make as many stupid mistakes is basically what it boils down to. Going back to your first thing, though, right, there was a piece, actually a couple of pieces, recently. One of them was about, you know, tech workers are just like the rest of us, they're miserable. I'll give you links to these. The other one was a Cory Doctorow piece that was like, the future of Amazon coders is the present of Amazon warehouse workers, or vice versa. There has been a lot of deference given to software developers over time, because, you know, we have been kind of the engine that's made a lot of the last twenty, thirty years work, and there's a desire to make that not so anymore, and to make us just as interchangeable as everybody else. I guess, you know, from an economic standpoint, I kind of don't blame them.
I understand why they're trying to do what they're doing.
I don't... I mean, I don't think that the warehouse workers should be treated the way the warehouse workers are treated, you know, much less that everybody else gets treated that way. And it's been a lot worse since the giant layoffs at Twitter, now X. When that happened, and the thing didn't crash and completely burn like everybody, or not everybody, but a lot of people, were expecting it to, the sentiment became: well, maybe all these software developers aren't as important as we've always thought they were.
And, you know, we will see over time what the end result of that is. My guess is it's going to end up being a mess.
But, you know, I'm a software developer, right? It behooves me for it to be a mess, right? So it might just be my bias getting in the way.
I actually think that you're right, though, because I remember back in twenty twenty-one and onward, the kind of post-remote-work thing... there was the whole anti-remote-work push, but there was also the whole quiet quitting and things like that, that's twenty twenty-two, where it's like, software engineers, they just expect to be treated so well, because of twenty twenty-one's insane hiring, right? You saw tech companies, like, parking software workers. I think that played into it as well, where all of these companies who chose to pay these software engineers, they were the ones that made the offers, got pissed off at what they'd done. Someone thought we should cut all labor down to size, and then along come the coding LLMs. Almost makes me wonder if most of these venture capitalists talking about this don't know how to code themselves. Yeah, you gotta wonder.
I don't know many that do. Yeah, I know some that have at some point.
But if it's "at some point," it's like they're not part of modern software development culture, which I know sounds kind of wanky, but I mean, just how an organization builds software feels like something they should know. But then again, they don't know how to build a real organizational ethos either, so who the fuck knows? Yeah?
Well, I mean, honestly, a lot of it...
I've been in organizations that VCs basically killed, right? Because, you know, we built a thing, and that thing was, you know, a reasonable business. But VCs don't want a reasonable business. They want either a hundred-x return or they want a tax write-off, and they don't want anything in between, right? Yeah. So, I mean, what they're looking for, really... I mean, they're not trying to run a regular business, right? They're not trying to do the normal process. They're trying to either, you know, hit one out of the park or throw it away and move on. And so the rules for them are different, because what they're trying to accomplish is not what the rest of us are trying to accomplish, as a general rule.
The theme of the fucking show. It's just like, you have these people that don't code saying how coders should code, like Dario Amodei the other day saying that this year we're going to have the first one-person software company with a billion dollars in revenue, or something like that. I feel like there are some people who should not be allowed to speak as much sometimes, but it's just frustrating and insulting. But now that you've got me thinking about it, it does feel like this is an attempt to reset the labor market finally coming for software developers. And I don't mean finally in a good way, right?
I mean, it feels like that. Being in that industry at the moment, it really feels like that.
Is it scary right now?
Not for me, because I'm old enough to be semi-retired, right? But I mean, I've been talking to a lot of folks. I've been interviewing a bunch of folks that are listeners of my channel, kind of trying to get a feel for what's going on. And I've talked to folks that are, like I said... I talked to some folks that were like, you know, I work for a big bank, they're cramming Copilot down our throats whether we want it or not. I've talked to some folks that are like, every time I sit down with my boss, I'm thinking that, you know, this is going to be the day I find out my group is getting cut the way the other three groups in the company got cut.
There's a lot of artificial productivity-requirement increases, kind of thing, which is like, you know, we expect more tickets closed per, you know, two-week period than we've had before, because we're giving you this AI now, so you ought to be more productive. That kind of thing.
Would the ticket necessarily be something that you just write code for, or more than just that?
Well, so generally it's more than just that. But generally the ticket, that's kind of the way we track the work that we do in a lot of organizations, right? And some tickets are like, I'm building a new thing, and those are kind of easier to predict. And some tickets are, this thing isn't behaving right, go figure out where the bug is, and those are a lot harder to predict. But they have these things... Agile has this thing called a velocity graph, where basically you see how many tickets per person get closed over time, and people want to see the slope of that line change because they're giving you AI.
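The velocity graph Carl mentions is just tickets-closed-per-sprint plotted over time, with managers watching the slope of the fitted line. A toy sketch of that bookkeeping, with invented ticket data and function names:

```python
# A toy sketch of velocity tracking: count tickets closed in each sprint,
# then fit the least-squares slope that managers want to see tilt upward
# after buying AI tools. The ticket data below is invented.

closed_in_sprint = [1, 1, 1, 2, 2, 3, 3, 3, 3, 4]  # sprint each ticket closed in

def velocity(tickets, num_sprints):
    """Tickets closed per sprint."""
    counts = [0] * num_sprints
    for sprint in tickets:
        counts[sprint - 1] += 1
    return counts

def slope(ys):
    """Least-squares slope of velocity over sprint index."""
    n = len(ys)
    mean_x = (n - 1) / 2
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(ys))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

counts = velocity(closed_in_sprint, 4)
print(counts)         # [3, 2, 4, 1]
print(slope(counts))  # -0.4: velocity is falling, AI or no AI
```

The catch Carl gets at: a flat or falling slope after an AI rollout is evidence, not an excuse for "you're prompting it wrong."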
And I'm guessing the people telling you to change that don't know what they're talking about.
That seems to be the case.
Great. So, I mean, the good news, in theory, right, I don't know to what extent this is going to happen, but in theory: if they keep telling people the slope of that line should be changing because you have AI now, then over time, if we see the slope of that line not changing, theoretically it will be proof that the AI is not providing the return that people expected. Well, you're not using it right! Well, yes, there's always that: you're not prompting it right.
That is basically... that's one of the many reasons I wanted to have people who actually code on to talk about this stuff, because it's really easy, as a layman myself, and for others, to just be like, oh, but this does replace coding, right? And it doesn't. It sounds like it really doesn't.
Like it can help.
It can be like a force multiplier to an extent, but even past the initial steps, it just isn't there.
Well, I mean, the best analogy I've always found to writing code is actually just writing, right? I mean, you can get ChatGPT to spit out a few paragraphs for you, right? But, you know, you end up with the legal briefs that have the made-up citations, or, you know, things that just aren't connected to reality, or stuff where, when people read it, they can tell the difference. You can tell AI slop, you know, like the stupid insert from the Chicago Sun-Times, yeah, yeah, the Philadelphia Inquirer, you know, the "all the things you can do this summer" list, with the made-up books and all that. I mean, even the articles that weren't making up stuff: you read the "this is what's going to be happening this summer, this is what the weather's going to be like" or whatever, and you're reading it and you're like, there's no insight here, there's no thought here. There's nothing in here where I get to the end of it, I've read the whole thing, I understand the whole thing, but I don't have anything I can walk away with.
Right. And AI agents aren't coming along to replace software engineers. So you're not scared of Devin.
I am not scared of Devin. So... well, actually, I kind of am. I am scared that Devin is going to make a mess of things, and then more things are going to get hacked, and that's going to end up being worse for everybody on the internet.
Right. How would it do that?
I mean, like we were talking about before, right? So when you write code that isn't secure, right, when you write code that uses a library that's got an old version of a thing with a known bug in it, but you don't bother to check to see if there's a fix for that bug. Or you don't use best practices when it comes to writing code, or you don't think about the kinds of maintainability issues you're going to have, and you do things like ship code in an Internet of Things thing, a light bulb, right, or a Wi-Fi router, that cannot be patched over the internet and that has a bug in it, right? Now that thing is going to have a bug in it forever, and you're going to have to find all the ones on Earth and turn them off before somebody takes them, hacks them, and uses them to attack somebody else from there.
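The skipped step Carl describes, shipping with a dependency that has a known bug, is the kind of thing a simple pre-release audit can catch. A hypothetical sketch, where the package names, versions, and advisory IDs are all invented (real projects would pull advisories from a vulnerability database rather than hardcode them):

```python
# A hypothetical pre-ship check: compare the firmware's pinned library
# versions against a list of known-vulnerable ones. Names, versions, and
# advisory IDs below are all made up for illustration.

KNOWN_VULNERABLE = {
    ("tinyhttpd", "1.2.0"): "ADV-2021-001 (remote code execution)",
    ("wifi-stack", "0.9.3"): "ADV-2022-017 (auth bypass)",
}

def audit(pinned_deps):
    """Return an advisory string for every pinned dep with a known bug."""
    return [
        f"{name} {version}: {KNOWN_VULNERABLE[(name, version)]}"
        for name, version in pinned_deps.items()
        if (name, version) in KNOWN_VULNERABLE
    ]

firmware = {"tinyhttpd": "1.2.0", "wifi-stack": "1.0.0"}
print(audit(firmware))  # ['tinyhttpd 1.2.0: ADV-2021-001 (remote code execution)']
```

For a device that can't be patched in the field, a finding like that has to block the release, because there's no fixing it later.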
I mean, IoT is a huge problem. Oh yeah. But the cheap ones have, like, the spyware stuff and crypto mining in them.
Yeah, but the ones that have really nasty vulnerabilities and no way of being updated once they leave the factory, right, as long as they're out there, they're going to be a problem, literally, for everybody on the internet.
Jesus. Well, to wrap us up: what can a new engineer, someone new to software development, what can they learn right now? You've done a video on this, but I think it's a good place to wrap us up. What can they start learning to actually get ahead, to actually prepare for all of this?
That's a really good question. So, these days, you can't really be an engineer, you can't get hired as an engineer, without some ability to talk about being able to do prompts and use, you know, some kind of AI code editor, that kind of thing. It's just an expectation of the job now. Whether it should be or not is a different thing. I mean, like I said before, there are situations where you tell it what you want and it will type faster than you possibly can, so, you know, that's not necessarily bad. You need to understand that. And, I'll get back to the other thing in a second, you need to figure out, basically, how to test the thing, right? How do you make sure that the code that it spits out does what you meant it to do? What I'm expecting is that we're going to spend more time thinking about testing, and more about, you know, trying to find exceptions and that kind of thing, than we have in the past, because the code that's actually being generated is going to be less likely to be quality code than it was in the past. Right? The problem is, it has become the case in the programming industry that the things you need to do to get through the interview and get hired have very little resemblance to the things you actually need to do to do a good job. And that's a whole different... we could probably have a whole other podcast episode just about the interviewing problem.
But the main thing right now... so right now, the whole hiring thing, and this isn't, I don't think, true for just programmers, but it's especially true for programmers, is all, you know, bots that customize your resume and write a custom cover letter and then submit the thing to the bot that's screening the resumes, right? Getting to the point where you can actually talk to a human is a nightmare right now. So the whole hiring system is kind of broken, and actually getting to the point where you can get hired is a nightmare at the moment. But the thing that you can do is figure out what kinds of things the AI is good at. And one of the things AI is pretty good at is things that don't matter as much, right? So, like, you know, AI can pick the layout of a site, potentially, right? You could have it pick two or three of them, and you can basically do what's called an A/B test, where you randomly assign people to them, figure out which one performs better, and throw the rest of them away.
And even then at some point you will probably want the design customized.
Yeah, I mean, but...
I think there will be a lot of things where people can get something that's kind of good enough to get started, right? And I think that, to some extent, this is going to be kind of a boon for the industry in the longer term, where somebody who can't program right now, but who has some idea of what they want, can do, like, a vibe coding thing. They can validate that the market they want to attack exists, right, and that people want to use the kind of thing they built, and then they can bring in somebody to actually build it, right? You know what I mean? And those kinds of things wouldn't necessarily have been able to happen in the complete absence of AI.
So it's not, I don't think, completely useless.
And there are times when, as a developer, there are things we're not good at, like, you know, writing marketing copy and that kind of stuff, that if we're trying to do a project for ourselves, a lot of that we can just outsource to the AI, because it's not the thing that keeps the project from actually breaking or getting hacked, that kind of thing, right? So there's this concept where you need to keep the things that are part of your competitive advantage in-house, and everything else you can kind of outsource to somebody else. The kinds of things you can outsource to somebody else are the kinds of things you could potentially throw an AI at, because they're not...
Even then, it's like, it doesn't seem like that's a ton of things right now, or will be.
Again, so it's basically two things.
It's things where the quality of the thing doesn't really matter, right, and every business has those kinds of things, right? And the kinds of things where you can define a metric that you can test the AI against, and let it try over and over and over again until it gets to the point where it's good enough.
Yeah.
Right. So if your metric is "more people click on this button than the button before," right, then you can have the AI create a whole bunch of different ways to skin that button, right, and then you can say, okay, the one that tested best is the one we're going to keep. That's the thing you can throw an AI at, right, because you've got a well-defined way of checking. No telling how long it's going to take, but you have a well-defined way of checking to see if it's working right or not.
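The button experiment Carl outlines fits in a few lines: bucket each visitor into a variant, tally clicks, keep the variant with the best click-through rate. A minimal sketch, with invented variant names and numbers:

```python
# A toy A/B-test sketch: deterministically bucket visitors into button
# variants, then pick the winner by click-through rate. The variant
# names and the click/impression counts below are invented.

def assign_variant(user_id, variants):
    """Deterministically bucket a numeric user id into a variant."""
    return variants[user_id % len(variants)]

def best_variant(results):
    """results maps variant -> (clicks, impressions); pick the best CTR."""
    return max(results, key=lambda v: results[v][0] / results[v][1])

results = {"A": (30, 1000), "B": (45, 1000), "C": (28, 1000)}
print(assign_variant(7, ["A", "B", "C"]))  # B
print(best_variant(results))               # B (4.5% beats 3.0% and 2.8%)
```

A real experiment would also run a significance test before declaring a winner, but the point stands: a crisp, testable metric is exactly what makes this an AI-friendly task.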
So, yeah. I mean, for years I've had the theory that this industry was a twenty to twenty-five billion dollar total addressable market pretending to be a trillion-dollar one. And everything you're saying really... it's like you're describing things like platform-as-a-service. Like, yeah, things that you use in tandem with very real people and intentional ideas.
Yeah. I don't see a world in which this is a "we replace all the humans" thing, you know, the whole "this is going to displace eighty percent of the white-collar workers in the world." The only people that are really going to be replaced anytime soon are people that either weren't doing a great job to start with, or people whose bosses don't understand what they were doing well enough to know that it mattered. And my guess is that there's going to be regret at that point, and at some point they're going to have to bring those people back.
Well, Carl, this has been such a wonderful conversation. Where can people find you?
I am Internet of Bugs on YouTube, which is probably the easiest place to find me, and there are links on that channel pointing at other things.
And you've been listening to me, Ed Zitron. You've been listening to Better Offline. Thank you everyone for listening, and yeah, we'll catch you next week.
Thank you for listening to Better Offline.
The editor and composer of the Better Offline theme song is Matt Osowski. You can check out more of his music and audio projects at mattosowski dot com, M A T T O S O W S K I dot com. You can email me at ez at betteroffline dot com, or visit betteroffline dot com to find more podcast links and, of course, my newsletter. I also really recommend you go to chat dot wheresyoured dot at to visit the Discord, and go to r slash BetterOffline to check out our Reddit. Thank you so much for listening.
Better Offline is a production of Cool Zone Media. For more from Cool Zone Media, visit our website, coolzonemedia dot com, or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.