TechStuff Tidbits: The Laws of Tech

Published Jan 26, 2022, 7:45 PM

From Moore's Law to the Laws of Robotics, we take a look at some famous "laws" about technology.

Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio, and how the tech are you? Well, it's time for a TechStuff Tidbit. In the world of tech, we often refer to various laws that aren't actually laws at all, either in the legal or the scientific sense. So I thought I would dedicate a Tidbits episode to some of those laws, not all of them, and talk about where they came from and what they mean. Keep in mind, these laws are really more observations of things like trends in technology. They have a tendency to be relevant, but there's no fundamental aspect of the universe that actually forces them to be true. Also keep in mind that we're just looking at some of the observations we refer to as laws in tech. There are a lot more of them out there than the ones I'm going to cover, and some of those get super technical, so we're going to take a pretty high-level look at most of these.

The first one we should start off with is, of course, Moore's law. That's perhaps the most frequently referenced "law," quote unquote, in tech. These days, we generally interpret Moore's law to mean that every eighteen months to two years, the computers we produce double in processing power, meaning a computer produced today has twice the processing capability of a computer produced in 2020, assuming you're actually listening to this episode in 2022, when it originally aired. This is a fairly loose interpretation of the observation Gordon Moore made in his paper "Cramming More Components onto Integrated Circuits" way back in 1965. Moore observed that, due to many different factors, not all of them directly technological, there was a trend for semiconductor fabricators to double the number of transistors on a silicon chip every year in the early days, and that held true for about a decade. Moore would later revise his observation, in 1975, to every two years, and that's mostly where it's sat ever since. In other words, an integrated circuit in 1965 would have twice the number of transistors as one produced in 1964, and an integrated circuit in 1966 would have twice as many as the one from 1965. You could project this out and make predictions, and eventually we hit a point where the pace slowed and it took two years, rather than one, to double the number of components.

Now, Moore's observation took into account not just the advance in technological capabilities required to make this happen (we'd actually have to build out the systems to make these components smaller and then cram them onto a silicon chip, as he would say), but also the economic drivers that would push companies to pursue this trend. There has to be a reason for the push to cram more components onto the circuit, because if there's no reason to do it, companies won't pour the money into making it happen. After all, building out more complex circuits means investing a lot of money, time, and expertise into finding new ways to produce smaller and smaller components, and then designing a chip architecture that takes advantage of those smaller components. Without that economic driver, the expense would be prohibitive. You wouldn't do it.
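As a quick illustration of what that compounding doubling does, here is the arithmetic of the law as a minimal Python sketch. The projection is just math, not data about real chips; the only real figure in it is the roughly 2,300 transistors of Intel's 4004 from 1971, used as a starting point.

```python
# Moore's law as compound doubling: transistor count doubles every
# `period` years. The starting point (Intel 4004, ~2,300 transistors,
# 1971) is real; the projected values are pure arithmetic.

def moores_law(start_count: int, start_year: int, year: int, period: float = 2.0) -> float:
    """Project a transistor count forward, assuming doubling every `period` years."""
    return start_count * 2 ** ((year - start_year) / period)

if __name__ == "__main__":
    for year in (1971, 1981, 1991, 2001, 2011, 2021):
        print(year, f"{moores_law(2300, 1971, year):,.0f}")
```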
Now, lots of folks have predicted Moore's law would end as semiconductor fabrication facilities started hitting increasingly challenging obstacles. One big obstacle is truly a physical one, as in physics itself. Once you get your components down to the nanoscale (a nanometer is a billionth of a meter), you start having to account for quantum effects. These strange effects don't happen at the macro scale, so they seem almost magical to us. They seem like stuff that should be impossible, because in our daily experience we don't encounter anything like them. For example, there's an effect called quantum tunneling. Imagine you've got a subatomic particle like an electron, and a one-way channel the electron can travel down, built just for this purpose, so it can only go from one end to the other. At the end of the channel you have a gate blocking the exit, but it's a very, very thin gate. The electron moves down your channel, gets close to the gate, and then, surprise, at some point it just appears on the other side of the gate and continues on its merry little way. Now, to you, it looks like the electron somehow dug a pathway through the gate and kept on going. But the electron didn't dig a path. It was just on one side of the gate at one point, and then on the other side. The reason is that electrons occupy more of an area than a specific point in space; they can inhabit any point within a certain region at any given time. So there's this small region where the electron could possibly be found. Now, if the gate is so thin that this region of possibility overlaps to the other side of the gate, that means there is a chance, maybe a very small chance, but still a chance, that the electron could be on the other side of the gate without the gate ever having opened or the electron physically passing through it. And if there is a chance, then given enough opportunities, it will happen. That's kind of what chance means. This is a bad thing if you want an integrated circuit, which you can think of as a very, very complicated system of pathways and gates that electrons either pass through or are held back from.
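For a slightly more concrete picture of the tunneling effect described above: the standard textbook estimate says the chance of an electron slipping through a thin barrier falls off exponentially with the barrier's thickness. A toy calculation, with the barrier height and electron energy picked as round illustrative numbers rather than figures for any real transistor:

```python
import math

# Rough transmission probability for an electron tunneling through a thin
# rectangular barrier, using the standard exponential (WKB-style) estimate
# T ~ exp(-2 * kappa * L), with kappa = sqrt(2 * m * (V - E)) / hbar.

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # joules per electronvolt

def tunneling_probability(barrier_ev: float, energy_ev: float, width_nm: float) -> float:
    kappa = math.sqrt(2 * M_E * (barrier_ev - energy_ev) * EV) / HBAR
    return math.exp(-2 * kappa * width_nm * 1e-9)

# The thinner the "gate", the less negligible the leakage becomes:
for width in (5.0, 2.0, 1.0, 0.5):
    print(f"{width} nm barrier -> T ~ {tunneling_probability(1.0, 0.5, width):.3e}")
```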
So for that reason, and for lots of other ones we don't really need to get into, semiconductor manufacturers don't reduce the size of components the way they used to from the sixties and seventies up through fairly recent years, but they have kept the naming convention. That is, you designate the semiconductor node with a metric, like calling something a four-nanometer chip. Not that long ago, the metric actually referred to the size of specific components on the chip itself. These days, it's really more a way of saying this chip performs at a higher level than, say, a ten-nanometer chip would. It tells you about performance, not necessarily the actual size of anything located on the chip.

Now, there are a bunch of other laws in tech that reference or build off of Moore's law. For example, there's Rock's law, which is also sometimes called Moore's second law. This observes that as computational power increases, the cost to perpetuate Moore's law also increases. In other words, it gets progressively more expensive to meet the fabrication requirements needed to build more powerful processors that keep pace with Moore's law. Moore's law is something of a self-fulfilling prophecy: there are companies that push themselves to keep pace with it, even though, again, there's no fundamental law of the universe that requires them to. This touches on something I talked about in a recent episode, about how Taiwan plays such an important part in the semiconductor industry. Taiwan began investing in fabrication facilities in the 1970s and built on that considerably in the 1980s. Meanwhile, companies in the United States were starting to shift focus primarily to chip design rather than chip fabrication, because the fabrication cycle required a huge recurring investment. You would spend millions of dollars to build out all the tools and facilities you would need to make a chip with components of a certain size. Meanwhile, your designers are coming up with the next generation of chips, and those are all going to need their own special equipment and facilities. It's a never-ending cycle of having to reinvest in your process. So you have chip designers creating a chip architecture and then outsourcing the actual manufacturing to companies around the world, with Taiwan taking the lead. This also implies that a lot of chip companies out there are consciously working to keep Moore's law going, even when it might not make much economic sense to keep up that pace. There's almost the weight of expectation upon it, if you're a Luisa out there.

Some observations play off Moore's law in other ways, or seem to evoke it. For example, there's Wirth's law, W-I-R-T-H, named after Niklaus Wirth, who wrote an article titled "A Plea for Lean Software." Wirth's observation was that as computers get faster, software gets slower, and in effect, software gets slower at a rate that's greater than hardware's improvement. Hardware is getting faster, but software is getting slower faster than hardware is getting faster, if you will. So Wirth's law explains why you might go out and buy a brand-new computer and it doesn't necessarily feel that much faster than the one you had a couple of years ago. It's pretty natural to think, while using a computer, oh, if I buy a new one in a couple of years, it's going to leave this one in the dust. It's going to be so much faster. But this ignores the fact that within those same two years, developers are going to make increasingly complex software that gobbles up those precious resources on the new machines. The software two years from now would likely require more than what your current machine could even handle. So Wirth was saying to software developers: hey, chill out and figure out ways to create less demanding software. Don't just pounce on computational capabilities because they're there. Don't treat it like Everest, climbing it just because it's there.
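Wirth's point is qualitative, but you can see the "new machine doesn't feel faster" effect with two competing growth curves. A minimal toy model, where both doubling periods are invented purely for illustration:

```python
# Toy model of Wirth's law: perceived speed = hardware capability / software demand.
# The doubling periods below are made up. The point is only that if software
# demand doubles faster than hardware speeds up, the user experience gets
# slower despite better machines.

def growth(years: float, doubling_period: float) -> float:
    return 2 ** (years / doubling_period)

for years in range(0, 11, 2):
    hardware = growth(years, 2.0)   # hardware doubles every 2 years (Moore-ish)
    software = growth(years, 1.5)   # assumed: software demand doubles every 1.5 years
    print(f"year {years:2d}: perceived speed x{hardware / software:.2f}")
```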
And this always makes me think of AAA video games, which frequently, on their highest settings anyway, have extremely high graphics processing demands and often push even the most powerful GPUs to their limits. Then you get the next entry in the franchise, and it will be even more demanding and will push whatever the current bleeding-edge GPU is to its limits, and so on. It never eases off.

On a sort of similar note, and one that involves not just tech but human nature, is Brooks's law. This comes from an observation by Fred Brooks back in 1975. He said that under some conditions, adding a person to a project, particularly software development, when the project is running behind schedule will push the project even further behind. In other words, if a project is not going to meet its deadline, adding another person will likely make things worse. There are lots of different reasons for that, and I'm sure many of you are anticipating some of them. If you've ever gone through any sort of onboarding process, either with a company or a team, or if, bless your heart, you have been in charge of onboarding someone else, you know it takes time for a new team member to get their bearings, get an understanding of how things are working, and find ways to contribute to the process. Until they get to that point, the new person is more likely to be a drain on resources than an addition to them. And it's not their fault. They don't magically know where the project is in its development cycle, or what is and isn't working, or how to make it better. It's just kind of the way things are. Other issues can also contribute to a slowdown. For example, as you add more people to a team, communication becomes more complicated. And boy, do I feel this one. If you've ever used any sort of project management or communication platform with a team, things like Slack or Basecamp or whatever, you know things can get pretty chaotic as more people join a project. You get crosstalk, you get conversations that maybe should happen within a subgroup rather than the whole group, you get points where various subgroups haven't communicated with one another, and so things get all jumbled. Just a lot of stuff that gets harder as more people join in. That's something we're going to touch on again a little later in this episode, when we talk about processors: adding people or resources to some jobs sometimes makes sense, if those jobs can be divided up into individual tasks, but it makes less sense if the job isn't divisible. We'll come back to that idea in just a moment.
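A common way to put numbers on that communication problem, drawn from Brooks's own analysis in The Mythical Man-Month rather than spelled out in the episode: every pair of teammates is a potential communication channel, so channels grow roughly with the square of headcount. A quick sketch:

```python
# Pairwise communication channels on a team of n people: n * (n - 1) / 2.
# Adding one person to a team of 10 adds 10 brand-new channels to maintain.

def channels(n: int) -> int:
    return n * (n - 1) // 2

for team in (2, 5, 10, 11, 20):
    print(f"{team:2d} people -> {channels(team):3d} channels")
```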
First, before we get into more law stuff, let's take a quick break.

We're back. Eroom's law: here's another one that's a play on Moore's law, because Eroom is actually Moore spelled backwards, E-R-O-O-M. It describes the process of developing new drugs, pharmaceuticals, and observes that developing new drugs gets progressively harder over time, meaning it takes longer and costs more money to develop new drugs as time goes on. Despite the fact that you've got scientists and engineers and chemists who have developed some really super cool high-tech equipment specifically for the purposes of drug development, even though our capabilities grow, the difficulty still grows. Generally, Eroom's law says the cost of developing a new drug doubles roughly every nine years.

Okay, now, way back in 1953, more than a decade before Gordon Moore's article about cramming components onto integrated circuits would come out, a person named Herb Grosch observed that computer performance increases as the square of the cost. In other words, as a computer's price doubles, its processing power should be four times as great. So if you're looking at two computers, and the first one is $500 and the second one is $1,000, the second computer should be four times as powerful as the first. This is one of those laws that at various times wasn't totally accurate, but generally it's saying that as computers get more expensive, the price of computational performance, if you were able to divide it up on a per-unit basis, is actually coming down.

Then there's Koomey's law, which describes the power efficiency of hardware over time. Essentially, Koomey said that every 1.57 years, the amount of battery you would need to handle a fixed computing load would fall by a factor of two, meaning that if you were to perform the exact same computational load on computers made about a year and a half apart, the more recent computer would do it without consuming nearly as much power. After 2000, Koomey adjusted this to say the trend would actually follow every 2.6 years instead of 1.57. And you might wonder, well, why doesn't this mean we see laptop computers with batteries that last many, many more hours than old laptops? After all, if the energy needed for a fixed load halves even every 2.6 years, shouldn't that be the case? Well, then we have to remember Wirth's law and the fact that software bloats at a really fast rate. We're not running the exact same computational loads as time goes on; the computational loads are getting bigger, so it all kind of negates each other.
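As a quick sanity check on the arithmetic in those two laws, here is each in its idealized form (function names are mine, and both laws are trends rather than exact formulas):

```python
# Grosch's law: performance ~ cost^2, so doubling the price should
# quadruple the power. Koomey's law: energy for a fixed workload halves
# every `period` years.

def grosch_relative_performance(cost_a: float, cost_b: float) -> float:
    return (cost_b / cost_a) ** 2

def koomey_energy_fraction(years: float, period: float = 1.57) -> float:
    return 0.5 ** (years / period)

print(grosch_relative_performance(500, 1000))   # -> 4.0 (the $500 vs. $1,000 example)
print(koomey_energy_fraction(1.57))             # -> 0.5 after one halving period
print(koomey_energy_fraction(10, period=2.6))   # post-2000 trend over a decade
```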
All right, then we have Gustafson's law, which gets back to that issue I was talking about with Brooks's law, the one that says adding more people to a project doesn't necessarily speed things up. Gustafson's law covers how much faster a computer with parallel processors will complete a given task compared to a computer with a single-core processor. Now, when it comes to parallel processing, I use this analogy pretty much every single time to describe single-core versus multi-core processors, but let's do it again. Let's say you've got a math class, and there are six students in it. One student, we'll call her Annie, is super good at math. She is a genius. The other five students are really good at math, but they're not at Annie's level. One day, the teacher comes in and proposes a contest. She's going to hand out a pop quiz with five math problems on it. Annie will have to complete all five problems, but the other five students will each only have to solve one of them. So student one gets problem one, student two gets problem two, et cetera. Then the teacher starts the clock, and lo and behold, the group of five students finishes theirs first, collectively. Annie is faster at answering individual questions than her counterparts, so she can answer question one before student one can do the same. But keep in mind, students two through five are still working on those other problems while Annie is finishing up question one, so she is not able to finish her quiz faster than her collective classmates for that particular pop quiz. This is kind of like parallel processing. Parallel processors are great for certain types of computational problems, namely problems that can be solved in parts. The various cores can tackle the different parts of the problem, and together they all arrive at the solution, potentially much faster than a more powerful single-core processor could. However, not all problems are parallel. Some problems require a serial approach, S-E-R-I-A-L, not Cap'n Crunch. A serial approach means you can't just divide the problem up into parts; you have to go from the beginning all the way through to the end. In those cases, the more powerful single-core processor is going to solve the problem faster than the parallel processor, where each individual core is not running at the same high clock speed. So Gustafson's law takes all this into account and gives a mathematical expression to estimate how much faster a parallel processor can solve a given type of problem, if you happen to know how much of that problem is serial rather than parallel. Alternatively, you could use it to estimate how slowly a single-core processor would solve a parallel problem.
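That mathematical expression is usually written S(N) = s + (1 − s)·N, where N is the number of processors and s is the fraction of the work that is inherently serial. A minimal sketch of the formula, with sample values chosen arbitrarily:

```python
# Gustafson's law: scaled speedup on N processors when a fraction s of the
# work is inherently serial: S(N) = s + (1 - s) * N.
# Annie (one fast core) vs. the class (many slower cores): the more of the
# quiz that can be split up, the better the class does.

def gustafson_speedup(n_processors: int, serial_fraction: float) -> float:
    return serial_fraction + (1 - serial_fraction) * n_processors

for s in (0.0, 0.1, 0.5, 0.9):
    print(f"serial fraction {s:.1f}: speedup on 8 cores = {gustafson_speedup(8, s):.2f}x")
```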
All right, well, what if you were to look at Moore's law and say, hey, that's nice and all, but it's way too slow? Well, then you could take a gander at Neven's law. This is named after Hartmut Neven, who works in disciplines like quantum computing and robotics, and his law proposes that quantum computers increase in power at a doubly exponential rate, which is crazy fast. And we can sort of see why this is so. For classical computers, the basic unit of information is the bit, a binary digit, which we designate as either a zero or a one; you can think of it like a switch being turned off or on. Using lots of bits, we can tell machines to do all sorts of stuff based on specific input. This is the basis of computer science. But quantum computing has a different basic unit, called the qubit, or quantum bit. A bit can be either a zero or a one; it has to be one or the other. It is binary. But a qubit, thanks to stuff like superposition, can effectively inhabit both the zero state and the one state simultaneously, as well as, technically, all states in between. So as you build quantum systems that incorporate more qubits, you vastly expand what the computer is capable of doing. Your collection of qubits can, in a sense, perform drastically larger numbers of simultaneous processes to solve a specific subset of computational problems. That sounds like word salad, but it does make sense if you start to break it down. And it doesn't mean a quantum computer is good for every kind of computational problem, just as a parallel processor is not going to be better than a single-core processor at a serial computational problem. A quantum computer is not going to be great at every single type of computational load, but for a subset of them, they could be phenomenal, as long as computer scientists develop effective algorithms to leverage the quantum computer. And these are problems that would take a traditional computer decades, or maybe centuries, or thousands of years to complete, depending upon the complexity. One example of the type of problem quantum computers might be really good at tackling is known as the traveling salesman problem. Here's a version of it. Let's say you've got a salesman whose region includes ten different cities, and the salesman is trying to figure out the most efficient route that will allow him to visit every city at least once with the least amount of travel time. A classical computer would have to go through every single possible variation of every route between every city and record the results. Once it had run every single variation, it could then compare all the results against each other and determine which route would be fastest, and that, depending again on the complexity of the problem, could take hundreds of years. A quantum computer with a sufficient number of qubits, along with the appropriate algorithm, could potentially solve this problem much faster. Also, interestingly, the solutions that quantum computers generate are actually expressed as probabilities, not certainties, so you would get an answer with something like a threshold of certainty saying this is the right answer, which means it might not be, but it probably is. Now, I'm being very fast and loose and very, very high level with this description. It gets so much more complicated and technical than what I'm saying. But this is just to give you an idea of what quantum computers will be used for. They won't be used for everything, but for the applications they can be used for, they can potentially be far more powerful than any classical system.
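To give a sense of why the brute-force search blows up: with n cities there are (n − 1)!/2 distinct round trips, so ten cities already means 181,440 of them. Here's a minimal brute-force sketch for five cities, with coordinates invented purely for illustration:

```python
import itertools
import math

# Brute-force traveling salesman: try every ordering of the cities and keep
# the shortest round trip. Fine for a handful of cities, hopeless at scale,
# which is exactly the point. City coordinates are made up.

cities = {"A": (0, 0), "B": (3, 1), "C": (1, 4), "D": (5, 2), "E": (2, 2)}

def tour_length(order):
    legs = zip(order, order[1:] + order[:1])  # close the loop back to the start
    return sum(math.dist(cities[a], cities[b]) for a, b in legs)

names = list(cities)
best = min(itertools.permutations(names), key=tour_length)
print(best, f"{tour_length(best):.2f}")
```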
All right, we've got a few more laws to cover before we wrap this up, but let's take another quick break.

Okay, let's talk about a few more laws. Robert Metcalfe made an observation we now call Metcalfe's law, and this has to do with the value of a telecommunications network. And by value, we're really talking about the number of possible connections that can exist within a network. The mathematical expression of this law is n times (n minus one), all divided by two, where n represents the number of nodes in the network. All right, let's use actual examples to explain what this means. Say we have the simplest network imaginable: a couple of kids with two tin cans and a string connecting them, so the two kids can talk to one another, but that's it. Only two people can use that network, so our n in this case, the number of nodes, is two. We have two people, two nodes. Take that mathematical expression, n times (n minus one) divided by two: that's two times (two minus one), divided by two. Two minus one is one, so you have two times one, which is two. Divide that by two and you're left with one. One is the number of connections we can make with this network. We have two nodes, and only one connection can happen. What happens if we add a third person? Let's say we've graduated from tin cans and string to an actual telephone system. Well, now our n is three, so it's three times (three minus one), divided by two. Three minus one is two, two times three is six, six divided by two is three. We went from one connection to three potential connections just by adding one person. Now let's say we go up to twenty. We go through that same mathematical expression, which I'm not going to walk you through because you've heard it several times, and it ends up at 190. So again, with two people you have a maximum of one connection; with twenty you have 190 possible different connected pairs. The more nodes you have in the communications network, the higher the value of the network.

On a similar note, David P. Reed, R-E-E-D, observed that the usefulness of large networks, specifically social networks, scales exponentially with the size of the network. So even adding just a few people to a social network creates exponential growth in that network's utility. Reed described this in terms of subgroups: the more people who join a network, the more subgroups can exist within that network. And again, you have a mathematical expression you can use to describe this. In this case, the number of subgroups is two to the power of n, where n is the number of people (or components) in your network, minus n, minus one. So let's say we've got eight people in this social network. Two to the power of eight is 256. We subtract eight from that and get 248. We subtract one from that and get 247. So with eight people, you could potentially have as many as 247 subgroups, and obviously it just gets crazier from there as you start to add people. This points out how social networks can very quickly grow and become more important. Though it's obviously not a guarantee, because those eight people could have potentially as many as 247 subgroups, but that doesn't mean you would actually see every single variation of a group play out in real life. Maybe Annie and her mathematics classmates are all part of those eight people, and Annie refuses to be in any subgroup with her five classmates; well, that would eliminate a ton of options right there. But you get the idea.
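Both formulas are simple enough to check directly. This sketch reproduces the numbers from the walkthrough above (two, three, and twenty nodes for Metcalfe; eight people for Reed):

```python
# Metcalfe's law: possible pairwise connections in an n-node network = n(n-1)/2.
# Reed's law: possible subgroups of two or more people = 2^n - n - 1.

def metcalfe(n: int) -> int:
    return n * (n - 1) // 2

def reed(n: int) -> int:
    return 2 ** n - n - 1

for n in (2, 3, 20):
    print(f"Metcalfe, {n:2d} nodes: {metcalfe(n)} connections")

print(f"Reed, 8 people: {reed(8)} possible subgroups")   # -> 247
```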
And then we have a couple of dark laws in tech, observations that show some of the not-so-pleasant sides of technology. One of those is called Zimmermann's law, which states that the capability of computational devices to track what we're doing doubles every eighteen months. And when you think about stuff like social networks, location tracking, targeted advertising, all that kind of thing, you start to see what Zimmermann was getting at. Some of this is based on the capabilities of the technology, how the technology keeps getting better at doing this stuff, but some of it also has to do with how we choose to interact with tech. Obviously, if we weren't participants in this, or at least weren't eager participants, it wouldn't have quite the same level of effectiveness. So if more and more people were to say, you know what, I'm not going to do social network stuff anymore, maybe not even use a smartphone, it would cut way back on this. But most of us are willing participants to some degree or another in this system, and that just makes it more effective.

And then we have Godwin's law. Anyone who's been part of any online community is likely to be aware of Godwin's law. It's named after Mike Godwin, and this law essentially says that the longer any online discussion goes, the more likely it is that someone will bring up a comparison involving Nazis or Hitler, and that should a conversation go on long enough, that probability becomes a certainty. Godwin observed this way back in the Usenet newsgroup days. Essentially, people would get into conversations, someone would disagree with someone else, things would get heated, and ultimately someone would compare either a person or an idea to something the Nazis would have celebrated or something Hitler would have proposed, and then Godwin's law would be fulfilled: the conversation had reached the point where someone made that comparison. Now, some folks say that if someone invokes a comparison to Nazis or Hitler in a conversation, that person automatically loses whatever the argument was about. Essentially, the idea is that you were unable to defend your position, so you ended up resorting to this emotionally charged comparison; therefore, your position is indefensible and you lose. You get nothing, as Willy Wonka would say.

Now, I do not want to end on that note. Obviously, it's really a bummer of a note. So we're going to throw in a bonus here: the laws of robotics, as originally proposed by science fiction author Isaac Asimov. Originally, there were just three laws of robotics. The first law states that a robot may not cause harm to a human being or, through inaction, allow a human to come to harm. The second law is that a robot must obey any and all orders given to it by a human, except where doing so would violate the first law. So I couldn't tell a robot to grab a specific chair just as you're trying to sit in it, because if the robot did that, you would miss the chair, hit the floor, and possibly hurt yourself. The robot would not be able to do that, even though it is otherwise compelled to obey every command given to it by a human. The third law is that a robot must protect itself, unless doing so would come into conflict with the first or second law. So if the robot protecting itself would allow a human to come to harm, the robot will take whatever action is necessary, including actions that would harm itself. Now, later, Asimov would add the zeroth law, which states that a robot may not harm humanity or, through inaction, allow humanity to come to harm. So not just a human, but humanity in general. This is the law that really helps you get around that science fiction trope in which engineers create a superpowerful artificial intelligence, a superhuman AI, use it as a decision-making engine, and ask it to bring about world peace: this is a problem so big that humans can't solve it, but you are smarter than humans, so make world peace. And the AI ends up saying, well, the only guarantee of world peace is if I wipe out all the humans, because then they can't fight each other, and that's the only guarantee of world peace.
So I guess I'd better get to it and launch the nuclear weapons. Now, the zeroth law would presumably tell the AI: nope, that's off the table, try again. And then the AI would probably dissolve into goo because the problem we gave it was way too hard. Anyway, those are the basic laws of robotics. Obviously, science fiction authors, including Isaac Asimov himself, have played with those in various ways to create scenarios in which robots behave in what you might consider an unpredictable manner, because it was the robot's method of attempting to complete a task while also trying to obey the laws of robotics. As the science fiction stories show, these basic ideas, while they seem to cover all your bases, don't necessarily work out that way when you put them into practice. Which, again, is what good science fiction should do: teach us something, either about ourselves or about our potential to have blinders on when we make certain decisions and fail to foresee the consequences of our actions, so that maybe we spend a little more time considering things before we act on them.

Anyway, I hope you enjoyed this TechStuff Tidbits, which ended up being almost the length of a regular episode, so I'm again really bad at this whole tidbit thing. But yes, that's just a selection of some of the quote-unquote laws in technology. Maybe sometime I'll do a follow-up, or I'll cover some of the more specific laws and talk about how those came to be. They're, I would argue, more obscure outside of specific sectors of the tech industry, and thus you're less likely to come across them. But we might do a follow-up episode in the future. If you have suggestions for topics I should cover in episodes of TechStuff, feel free to reach out to me. The best way to do that is over on Twitter; the handle for the show is TechStuffHSW. And I'll talk to you again really soon.

TechStuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.
