How did IBM define computer graphics standards for the PC? What's the difference between the different standards? And why did the company get out of the graphics game?
Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio, and how the tech are you? I am currently on vacation, which means we're going to have a couple of reruns for the rest of this week, and today's episode is called What Were CGA, EGA and VGA, which originally published in April 2020. Hope you enjoy. For today's episode, we're going to learn about old computer graphics standards. Don't run away. This is actually really interesting. We'll learn how they became standards in the first place, what the company IBM had to do with all of this, and why some early decisions by IBM would lead to the company extricating itself from the personal computer business altogether a couple of decades later. Now, when I was growing up, my family owned a couple of personal computers over the course of my childhood. We were in that rare small percentage of households with a personal computer back in the 1980s, and our first computer was an Apple IIe with a monochromatic screen that could only display Matrix-green-style letters. I seem to recall that we eventually got a Monitor 100, which was Apple's color monitor, and that was compatible with the IIe, assuming you had a IIe with the appropriate interface card installed. But honestly, that memory might be conflated with the second personal computer that my dad would purchase. See, Dad got these computers in order to work on his novels. He wrote his first couple of books on the old Apple IIe. I don't know if he still has them, but for years he had these novels stored on old five-and-a-quarter-inch floppy disks, and those disks could hold about 140 kilobytes' worth of information each. So to be safe, Dad would typically store two to three chapters per disk, since his novels were too long to fit onto just one five-and-a-quarter-inch disk, and the Apple IIe had no hard drive. Anyway, I digress, but I love thinking about those old times. I remember going through sleeves of disks and seeing Dad's old novels on there. The second computer that we owned as a family was a 286. But what does that actually mean? Well, it was a personal computer that relied on the Intel 80286 central processing unit, and it also relied on MS-DOS as the operating system. So this computer fell into what we would call an IBM-compatible computer back in the day. It used components and an operating system that allowed it to run any software designed for those IBM-specific machines. I think of this as an interesting part of personal computer history, and it helps illustrate a sharp contrast between IBM's strategy and Apple's. So let's backtrack a little bit. Before there were personal computers, you needed to work for a special research facility, or be enrolled in an engineering course at a university, or be one of a handful of folks who knew about computers and worked for a big financial company, or maybe you were in the military. Back in those days, computers really didn't have monitors at all. Computer graphics weren't even a thing yet. The computer would typically print out the results of a computational process on some sort of paper or paper tape.
Richard Garriott, who would go on to create the Ultima computer game series before he would become one of seven private citizens to visit the space station, programmed his first games on a computer that would print out each move of his dungeon crawler. So imagine a top-down view of a dungeon crawler, except you're not looking at it on a screen. You actually have to print out each move. So you make a move in the game, the printer would print out a new display of what had happened, and all the figures were represented by the basic characters that the printer could replicate. So it was limited to whatever the printer could print, and that was typically stuff like your standard letters, numbers, and symbols on a keyboard. So making a move would require the whole system to print out a new picture showing the results of that move, and playing the game took a while. Obviously the refresh rate was terrible. But eventually engineers began to create a way for computers to display information on a screen. You might connect the computer to a regular old television set, and you might have a little adapter to do that, or, as would later become the norm, you would build computer monitors specifically for the systems you were creating. Later we would call these displays, but I'm so old I still refer to them as computer monitors, because that's just how it cemented itself in my brain. Obviously, you've got to have some sort of bridge for a computer to be able to send meaningful information to a display, which will then follow the instructions sent by the computer to represent the information to the end user. There's got to be some sort of interface to make this happen on the computer side, as well as a port that allows a user to connect the computer to the display. There has to be some sort of physical connection between the two, and in the early days of personal computers there was no set, standardized way to do this. The technology used in one computer system wasn't compatible with another, so you couldn't mix and match monitors and cables and base systems together. These were the Wild West days of computing, when making a choice as a consumer was complicated because you had no way of knowing if the computer you chose was going to stand the test of time. You could end up purchasing a system at great cost and see the parent company crumble, and all support for that system would wither away. And software developers were affected by this too, in a big way. Developing software can be an arduous process. Back in the early days, it was feasible and even common for a single programmer to produce a piece of software for a system. But developers had to make the same sort of bets that consumers were making. They had to choose which systems they would develop for, and they would hope that they made the right bet, and it often meant dedicating a lot of their time to learning how to program for that particular computer's operating system, since the OS of, say, an Apple computer was different from that of the TRS-80 (the "Trash-80"), which was different from the Commodore 64, et cetera. So in the early days of personal computers, there were many competing systems to choose from, both as a consumer and as a developer. Apple, Commodore, and Texas Instruments were three of the big ones here in the United States, and they weren't alone, but they didn't have to contend with a really big name in computers for a few years, and that was IBM.
And that's because initially IBM chose to concentrate on its traditional enterprise-focused business and not really get into the consumer market. They were making products and services for other companies, not for end users like me and you. Now, that would change in 1981, when IBM introduced the IBM Personal Computer, or the 5150. IBM didn't invent the term personal computer, but the fact that this juggernaut had used the phrase for its own product would shape the terminology for computers in general. We all know that ultimately the two major systems to emerge from those early days were Windows-based PCs and Mac computers from Apple. These would be the two big ones for consumers. There are obviously others out there, Linux systems, for example, but for the majority of people out there, it's the Windows-based PC and Apple's Mac. Well, we call the Windows-based machines PCs because of IBM and its influence. A Mac is a personal computer, too; a Mac is a PC in the sense that it's a personal computer, but you wouldn't typically call it a PC, because to us, PC means a machine built upon IBM's approach. And that leads us into the choices IBM made that would ultimately contribute to the company getting out of the personal computer business further down the road. It all comes down to how they chose to get into it in the first place. You see, when IBM was making the Personal Computer, the company wasn't exactly putting its full support behind that effort. In order to produce the system cheaply, which would mean the company could sell the manufactured systems at a premium and have a really sweet profit margin (you know, you buy cheap and you sell high), IBM engineers built the PC using off-the-shelf components. The company didn't build a custom-made microprocessor or anything. Instead, the original IBM PC used an Intel 8088 chip as the CPU. In a similar fashion, the engineers used other standard components to build out the PC, and they made an arrangement with Microsoft to supply the operating system for this new personal computer. And the story behind all of that operating system stuff gets really super juicy and bonkers. It has betrayal and backstabbing. It's like a Game of Thrones episode. For one thing, Microsoft was not the company to originally develop DOS, but it sure as heck profited from it. But that's another story. The operating system that IBM used was called PC DOS, but IBM did not establish an exclusivity agreement with Microsoft, and so Microsoft would also develop another OS called MS-DOS, which was to all intents and purposes identical to PC DOS, and it would remain so for several versions. Now all the pieces were in place for IBM's eventual decision to get out of the consumer PC market, and it was just at the point when it was getting in. You see, the basic components for the computers were available to anyone, and the operating system was likewise available through licensing with Microsoft. So an enterprising computer company with much lower operating costs than a behemoth like IBM could conceivably swoop in, build a reasonable facsimile of an IBM PC machine using similar components, and include a licensed version of MS-DOS as the operating system. And presto, you have a computer that runs just like an IBM PC, including support for all software designed for the IBM system, and it's at a fraction of the cost. This gave birth to an entire subclass of computers called the IBM clones, or IBM compatibles.
The 286 I mentioned at the top of this episode was just such a machine. It wasn't an official IBM personal computer, but rather a machine with the same sort of guts inside, running MS-DOS. It would take a long time for all of this to actually catch up to IBM. Mind you, it's not like they were shot and sunk as soon as they launched. The company would ultimately pull back from the PC business, but it would stick around long enough to have an enormous influence on computers and programming, and that includes graphics. When the IBM PC debuted, the company offered two options when it came to graphics. Each was a type of circuit board that could be plugged into the motherboard of the computer, into a sort of expansion slot. These types of cards were called add-in boards, or AIBs, and they represented ways to add capabilities to a base computer model. Sometimes those capabilities were fairly simple additional features. Sometimes, like in this case, they were required in order to send images to an external display. So without one of these two cards, you wouldn't have any way of sending information to a computer monitor. The first of the two was called the Monochrome Display Adapter, or MDA. This was a video card installed in the PC that would output monochromatic signals to the monitor. Furthermore, it didn't do so in a pixel-addressable way. So wait, wait, wait, what does that mean? All right, let's remember that the images we see on displays and monitors and screens, like on smartphones, are made up of little points of light. By changing the brightness and color of those points of light, you can create full images. It's not that different from the technique used by the famous painter Georges Seurat in his famous work A Sunday Afternoon on the Island of La Grande Jatte. In that painting, all the images consist of tiny dots of paint, but when you view it from a distance, they form the shapes of people spending a lovely day at a park along the Seine River. It's an example of a style called pointillism, and it's perhaps the most famous version of this of all time. But televisions, computer monitors, and electronic displays like the ones smartphones use rely on a similar technique, except they use points of light rather than points of paint. Now, as I mentioned, the MDA wasn't pixel addressable, and addressability refers to the capacity to separately access individual units of something, so in this case pixels. A pixel-addressable approach allows the computer system to send specific instructions to each and every pixel, which, in turn, lets computers send full images and graphics to a connected monitor. But MDA didn't have that capability, so you couldn't send a black-and-white photo to display on a connected monitor. The MDA was dedicated purely to text mode. The screen consisted not of pixels so much as it did of character cells. So imagine a box that's large enough to hold the largest text character, like an uppercase G or W or something. Now imagine that the entire screen is a grid of those boxes. Each box is exactly the same shape, so it can allow for the largest of characters inside of it, but that's all it can fit inside. Each box is one character. You couldn't create more complex images, only pictures that consisted of those basic characters, just like the old printers I mentioned earlier, the ones Richard Garriott had been playing with. Well, with these displays, you could get really good resolution on those characters, so the images were crisp and clear.
The upshot here is that the text was incredibly clear to read. It was very simple, too. With these displays, you could get really good resolution on those characters; the text was crisp and clear. And that was a big draw, because a lot of these computers were meant to go toward small businesses, where presumably the applications you're running are mostly text based. There were some trade-offs. Because the screen was made up of a grid of equal-sized boxes, and each of those boxes could contain one character, every letter would use up the same amount of space on the screen. So an uppercase W, which is about as wide as it gets, would take up the same amount of space as an uppercase I. Now, I don't mean that the uppercase I would be wide, but rather that it would occupy a spot surrounded by an invisible box the same size as the invisible box that goes around the uppercase W. So you get this weird spacing between letters in the same word. If you're using a collection of wide and narrow letters, it would just look off. It's called a monospace font. It's the same sort of thing you would see with a lot of printers and typewriters, because they were limited to having all of their stamps be the same size, even if the letters were different sizes. In contrast, most fonts we use today are proportional fonts, which means individual characters are given space proportional to their own size, so you don't get these odd gaps between letters that should be right next to each other. But that was just one option for the IBM PC. The other option had direct addressability for pixels. It also had support for colors, so you could have color graphics with this version, and it was called CGA, and we'll talk about it more after the break. So CGA stands for Color Graphics Adapter, and describing this technology will also require us to examine a couple of other sets of standards that affected the graphics displayed on old CGA systems. CGA had big limitations; compared to graphics cards today, it seems absolutely Stone Age. The CGA system could support four different modes officially, but clever programmers figured out ways to boost this, and we'll get into that. There were two text modes and two graphics modes for the CGA card. The first text mode supported 4-bit color and could display up to 40 characters per line, with 25 lines making up the total screen space. So, 25 as in you could stack 25 lines vertically, and 40 characters horizontally across the screen. The pixel aspect ratio was 1 to 1.2. But what does that mean? Well, these pixels were not perfect squares. They were actually taller than they were wide, with that ratio of 1 for width to 1.2 for height. This would mean that the visual resolution of the screen was more like 320 by 240. In actuality, it was 320 by 200. So why the 320 by 240? Well, because the pixels were taller than they were wide, if you were clever with the way you created your computer graphics, it would seem almost like you had stacked more pixels vertically, and you could take advantage of that and make a picture that had that sort of look, as if it had a resolution of 320 by 240. However, if you needed to cut things short and the ratio just wasn't working for you, it would become a detriment, not an asset. And if you do the math, you'll see that this means every character on screen would have eight pixels dedicated to it. And here's how I did that.
You just take the resolution width, that's 320 pixels, and you divide that by the number of characters that could fit on one line. Remember, it's 40 characters across, so 320 divided by 40, you get 8. The same is true vertically. You can have 25 characters stacked from top to bottom on the screen, and the vertical resolution is 200 pixels top to bottom. Two hundred divided by 25 is 8. So each character, and the adapter supported 256 different characters, could use an 8-by-8 block of pixels for display purposes. The 4-bit color part also needs explaining. So a bit is a single unit of computer information, and we represent it as either a zero or a one. That means a bit has one of two possible states at any given time. You can think of it as off or on, zero or one. We have four bits for 4-bit color, so we take two, the possible number of states per bit, raised to the power of four, and that's equal to 16. So 4-bit color could support 16 different colors total. Not all at once, but total; that's the number of colors this display could show. In text mode, programmers could choose a foreground and a background color, choosing from those 16 premade colors. In addition, a bit could be dedicated to making the character blink, so you could have blinking text in the foreground. Alternatively, the blinking bit, the bit responsible for that blinking command, could be repurposed for the background color, where it served as an intensity bit instead. Intensity essentially means how dark or bright that particular color happens to appear. The second text mode was an 80-by-25, 4-bit color mode, so that meant you could fit 80 letters across in a line, with 25 lines per screen. These letters were half as wide as the 40-by-25 versions. Makes sense, right? If you can fit twice as many across the screen, they must be half as wide as in the 40-by-25 mode. The pixel ratio would create a visual representation of a resolution of 640 by 480. Now, in reality, those pixels again were taller than they were wide. In fact, they were notably taller than they were wide. So the real resolution, the true resolution, was 640 by 200, but it looked more like 640 by 480. More programs were written in this mode because you could fit way more text on a screen than you could with the 40-by-25 mode. It was less chunky, and most text-based programs relied on the 80-by-25 approach. If you were using a word processor or something, this was the style you were most likely looking at. That being said, the resolution of text on a CGA machine was lower than what you would have found on the monochromatic MDA computers, so it was a trade-off. You could have a CGA IBM PC running in this 80-by-25 text mode for a specific program and it'd be fine. It just wouldn't be as crisp and clear as on the monochromatic, text-specific MDA machines. On to the graphics modes, however. That's what we're really interested in, right? What actually made the images, not just the text, on these computers? Well, the graphics side of the CGA machine had, like I said, two different modes to it, two different official modes. One was a 320-by-200 resolution, but the pixel ratio was 1 to 1.2, so again it looked more like 320 by 240.
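A quick sketch of the arithmetic just described, using nothing beyond the numbers already mentioned, plus one commonly cited assumption at the end about how the CGA text-mode attribute byte is usually laid out (four foreground bits, three background bits, and a blink bit). Treat that last part as an illustration rather than something spelled out in the episode.

```python
# Recap of the CGA text-mode arithmetic described above.
h_res, v_res = 320, 200          # 40-column mode resolution
cols, rows = 40, 25              # characters per line, lines per screen

print(h_res // cols, v_res // rows)   # 8 8 -> each character cell is 8 by 8 pixels
print(2 ** 4)                         # 16 -> four bits of color means sixteen colors

pixel_aspect = 1.2               # pixels are 1.2 times taller than they are wide
print(round(v_res * pixel_aspect))    # 240 -> why 320x200 "looks like" 320x240

# Commonly cited layout of the text attribute byte (an assumption here):
# bit 7 = blink (or background intensity, if repurposed),
# bits 4-6 = background color, bits 0-3 = foreground color.
def attribute(foreground, background, blink=False):
    return (int(blink) << 7) | ((background & 0b111) << 4) | (foreground & 0b1111)

print(bin(attribute(foreground=14, background=1, blink=True)))  # 0b10011110
```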
This mode could display up to four colors at any one time, using one of two preselected palettes. This is why, if you ever look at old CGA games, they all start to look really similar. They're all using the exact same colors. Four colors. The programmers were working under some really tight restrictions. The first palette of colors included black, green, red, and yellow. This was palette zero. The second palette, a.k.a. palette one, had black, cyan, magenta, and white. Now, as you can imagine, it's pretty tough to create good graphics with this limited color selection. On top of that, programmers could use low intensity, or brightness, or high intensity, so that would add another variation. And I've seen the same screen presented in both palettes at both levels of intensity, and there are differences; you can get a very different effect going from one to the other. So programmers had a little bit of flexibility, but not much. In both palettes, black is color zero, and color zero was actually customizable. You could swap it out. You could choose one of the other fifteen colors that CGA supported and use that as color zero, and black would no longer be used. The flip side of this is that the new color would replace color zero throughout the image. So if you wanted the image to have black in it, that black would get replaced by whatever color you had now designated as color zero. If you wanted to have green included with your white, cyan, and magenta, then it would mean that if you had a scene with a night sky, that night sky is going to be green, because it would normally be black, but you've designated that color slot to go to green instead of black. So yeah, very limited. However, another trick programmers could do was leverage the way CRT screens work. I'm gonna gloss over the details, but in CRT screens there is an electron gun, and it paints the back of the screen with electrons. That causes phosphor to glow as the phosphor absorbs electrons. But the painting is the important part. It happens at the top line of the screen, it goes all the way across horizontally, then it moves down a line and does it again, and it does this really fast. A slow CRT monitor would repaint the entire screen 60 times a second. But this means that if you're programming, you know precisely what parts of an image are going to be painted first, because it's going to go top to bottom. So if you're meticulous, you can swap from one palette set to the other palette set in mid-screen draw. That allows for slightly more colors to display on screen at one time, or at least what we perceive to be at one time, because our perception lags behind this refresh rate. So in any given band of horizontal lines, you would be limited to four colors, because you'd be limited to one palette. However, you could swap from band to band. So you might have a screen with an image in it that has the four colors from palette zero, and then at the bottom you swap out to palette one, and you get a little more variety that way.
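Here's a toy sketch of that mid-screen palette swap. It doesn't touch real CGA hardware or registers; it just simulates the idea that the screen is painted top to bottom, so switching the active palette at some scanline gives the top band and the bottom band different four-color sets. The swap line is made up, and the color names simply match the palettes in the episode.

```python
# Toy simulation of the mid-screen palette swap trick described above.
PALETTE_0 = ["black", "green", "red", "yellow"]     # palette zero
PALETTE_1 = ["black", "cyan", "magenta", "white"]   # palette one

SWAP_LINE = 120   # hypothetical scanline where the program switches palettes

def color_on_screen(scanline, pixel_value):
    """Map a 2-bit pixel value to a color, depending on which band it's in."""
    palette = PALETTE_0 if scanline < SWAP_LINE else PALETTE_1
    return palette[pixel_value & 0b11]

# The same stored pixel value shows as red up top and magenta down below,
# so a single frame ends up with more than four distinct colors on it.
print(color_on_screen(10, 2))    # red
print(color_on_screen(150, 2))   # magenta
```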
The second official graphics mode that the CGA chip supported was a 640-by-200, 1-bit color mode. Now, this was a monochromatic approach, so you had black, the background color, and then whatever the foreground color was, whether that's white or green or amber. With color monitors, you could technically choose any of the 16 colors the CGA chip supported to be the foreground color. And the bonus of this was that it allowed for more fine detail. It was a greater resolution than what you would find in the other mode, but now you were reduced to just one color in addition to the background. This mode was primarily meant for users who had a monochromatic display but who wanted to have graphics support; they didn't want to just get the text-based MDA approach. You could enable this mode on a color display and swap out that foreground color like I said, but you were still limited to that one color on screen at a time. There were a couple of other tricks programmers could use to kind of fool the system and get more colors on screen. One involved using the text mode instead of the graphics mode. The text mode actually supported more colors on screen at once, and if you could just make your game out of text, then you could have much more colorful games. However, games aren't exactly made up of text, so how do you adjust for that? Well, one of the 256 characters you could choose from was a simple shape. It took up half of the character cell, so one half of the cell would be the foreground color and the other half would be the background color. So you have a foreground color and a background color. However, what if you set both the foreground and the background to the exact same color? Well, you would get a solid block of that color, and using those blocks you could create simple graphics. But it's kind of like using the wooden blocks you would have had as a kid, right? You can make stuff out of them, but it's gonna be chunky. You're not gonna get the fine graphic detail you would get down at the pixel level. Now your pixels are much, much bigger than they would have been otherwise, so the resolution was just 160 by 100 in this mode, but you'd be able to use a lot more colors.
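And a sketch of that text-mode block-graphics trick. The episode describes a character that fills half its cell, so the foreground color paints one half and the background color paints the other, turning an 80-column text screen into a 160-wide grid of fat "pixels." The specific character code and the assumption that the blink bit has been repurposed as a background intensity bit (so all 16 colors work in both halves) are details layered on top of the episode's description, so treat them as illustrative.

```python
# Sketch of the 160x100 "graphics out of text mode" trick described above.
HALF_BLOCK = 0xDE   # hypothetical: a character whose right half is filled in

def cell(left_color, right_color):
    """Pack two 4-bit colors into one text cell: (character, attribute byte).

    Assumes blinking has been turned off so the top attribute bit acts as
    background intensity, letting both halves use all sixteen colors.
    The background paints the empty left half; the foreground paints the
    filled right half.
    """
    attribute = ((left_color & 0x0F) << 4) | (right_color & 0x0F)
    return HALF_BLOCK, attribute

print(cell(left_color=1, right_color=4))   # (222, 20): a blue half next to a red half
```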
The last trick programmers could rely upon had to do with the monitors themselves. There were two big categories at this time. The IBM PC had an RGBI monitor, and RGBI stands for red, green, blue, and intensity, which again is the brightness of a color. But you could also use a composite video monitor, like a television set. You could use that as your computer monitor, and you could feed video to it through a composite cable, that's the yellow RCA cable of old. That one cable would carry all the video information to the display. However, composite video monitors had an interesting tendency. Colors would bleed into each other a little bit, and that bleed, that melding of colors, would present other colors that you might not otherwise be able to create in CGA graphics. So you could kind of create brand-new colors through the process of transmission. It's not like it's in the programming; it's literally pairing two different colors that could be represented in CGA next to each other, because you know that when it's shown on the screen, they're going to bleed together a little bit, so you get a richer image from a color perspective. However, there was a drawback to this as well. It would mean that the image is a little more blurry and not as sharp, so it would almost be like you're ending up with a lower-resolution image. However, you would get more colors, so it just depended on what was most important to you when you were putting these things together. But why was there such a limitation on colors in the first place? Like, what was the factor that was making this so primitive? Well, it wasn't because of display technology. Color televisions had been around since the seventies, really earlier technically, but they had definitely been available commercially since the seventies, and there's no reason why a monitor wouldn't be able to handle lots of different colors. The real issue lay with computer memory. See, in the early days, computer memory was a pretty valuable and scarce resource. It was expensive, and it was hard to implement. Most computers had a very limited amount of random access memory, or RAM. Computers pull data into RAM from some other storage source, like a floppy disk or a hard drive, and then the computer responds to input provided by the user or by some program and performs operations on the data in this memory, thus producing output. The more information the computer can hold in RAM, generally speaking, the better, because it brings down loading times and speeds things up quite a bit. But RAM was pretty precious in the early days of computing. The IBM PC shipped standard with just 16 kilobytes of RAM, so rather than eat up that memory by supporting more colorful graphics, IBM chose to give limited support to color representation and reserve that RAM for other stuff, like, you know, actually helping the computer execute programs. Other companies looked at IBM's CGA approach and they reverse engineered it. Soon they could also produce computers that supported CGA graphics. Thus the CGA approach became a standard. Originally you could think of it as proprietary, an IBM proprietary technology, but through reverse engineering it became a standard in computer graphics. And some of these third parties took this approach a step further. There was a company called Hercules Computer Technology that introduced the Hercules Graphics Card in 1982. The card came about as a matter of necessity. The developer needed a way to display Thai characters, from the language of Thailand, which was his native language, at a resolution similar to IBM's MDA. That was the goal, to have these very clear, crisp figures in the Thai language, but the MDA didn't support that alphabet. The Hercules Graphics Card had a resolution of 720 by 350, but unlike the MDA, it was pixel addressable, so it could display both text and graphics at high resolution. It was a monochromatic technology, so you weren't going to get full color this way, but the resolution was superior to the CGA standard. So you could program a game in the CGA 1-bit mode, that monochromatic graphics mode of CGA, but at a much higher resolution than you could get on a CGA computer. Now, that being said, it wasn't standard for developers to cater to a specific add-in board like that, and a lack of BIOS support for this card meant not many programmers actually took advantage of it and developed games specifically for computers with that type of card. But man, those times would change. Other companies would begin producing similar cards, and IBM was hard at work on the next generation of graphics capabilities. We'll talk about how they enhanced graphics in just a second, but first let's take another quick break.
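Since the episode pins CGA's color limits on memory, here's a quick back-of-the-envelope check. It uses the commonly cited figure of 16 kilobytes of video memory on the CGA card, which is an assumption layered on the episode (the episode talks about the PC's 16 kilobytes of system RAM, but the squeeze works out the same way).

```python
# Back-of-the-envelope memory math for the CGA color limits described above.
KB = 1024

def framebuffer_kb(width, height, bits_per_pixel):
    """How much memory one full screen of pixels takes, in kilobytes."""
    return width * height * bits_per_pixel / 8 / KB

print(framebuffer_kb(320, 200, 2))   # ~15.6 KB: four colors (2 bits per pixel) just fits in 16 KB
print(framebuffer_kb(320, 200, 4))   # ~31.3 KB: sixteen colors would not fit
print(framebuffer_kb(640, 200, 1))   # ~15.6 KB: the 1-bit hi-res mode fits too
```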
IBM boosted the graphical capabilities of its line of personal computers by a decent amount, though again, by today's standards, still primitive. The company introduced EGA, or the Enhanced Graphics Adapter. These add-in boards, similar to CGA, included a bunch of chips that showed a marked improvement over the old CGA approach. EGA could support 16 colors at the same time for some resolutions. So think of that: four times the number of colors on screen at once. Wow. And it could pull colors from a palette of 64 total options. No longer were you forced to decide between supporting dark yellow or having brown (CGA chose brown, because it was decided that that was a color that would be far more frequently used than dark yellow). The resolution support for graphics had increased as well. EGA supported resolutions of up to 640 by 350, though there are some caveats I'll get to in a second. The card itself included 16 kilobytes of ROM. ROM is read-only memory, and as the name suggests, read-only memory cannot be written to or changed. Data stored in ROM typically includes sets of instructions that are necessary for doing stuff like booting up a program or running a critical process. In the case of EGA cards, the ROM included basic instructions for graphics applications that took some of the load off the host computer's own memory. In addition to those kilobytes of ROM, the card also had 64 dedicated kilobytes of RAM, or random access memory. This is the short-term memory stuff, you know, the memory where a computer stuffs data in order to access that information rapidly while carrying out operations. The card also allowed for a secondary memory card to boost the capability of EGA by another 64 kilobytes, which is good, because at the base level of 64 kilobytes from the basic EGA card, you would only get four colors on screen at once if you were showing graphics at the full resolution of 640 by 350. The EGA card provided support for both the CGA and MDA modes of IBM's previous graphics adapters, in addition to the new capabilities of the EGA itself. And IBM provided extensive documentation on the EGA, and that documentation came in handy not just for people who wanted to program for systems with an EGA card, but for companies that wanted to produce their own version of the EGA card. It would go on to become one of the most cloned cards in computer history. Not only that, companies were upping the ante by including more RAM on these cloned cards, providing greater graphical support than what IBM was offering out of the gate. So while a basic EGA card would support four colors at full resolution, these clones would allow for all 16 colors simultaneously at that same resolution. Ouch. Just two years after IBM introduced EGA, we saw more than 20 companies offering up clones of that technology. Some iconic games that came out during the EGA era include Ultima V: Warriors of Destiny. I mentioned the Ultima series earlier in this episode. The first several Ultima games came out primarily for the Apple platform and were later ported to other computer systems. Ultima V included EGA support, and I remember this game fondly. In fact, it's my favorite of the Ultima series. Other iconic EGA games included Cosmo's Cosmic Adventure, Commander Keen, the original Duke Nukem platforming game, and many more.
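A follow-up to the memory point above: the same back-of-the-envelope arithmetic, this time with EGA's numbers (a 64-kilobyte base card, an optional extra 64 kilobytes, and a top resolution of 640 by 350). The bits-per-pixel figures are just the straightforward implication of four versus sixteen colors, not something spelled out in the episode.

```python
# The same framebuffer math applied to EGA's numbers from above.
KB = 1024

def framebuffer_kb(width, height, bits_per_pixel):
    return width * height * bits_per_pixel / 8 / KB

print(framebuffer_kb(640, 350, 2))   # ~54.7 KB: four colors squeeze into the base 64 KB
print(framebuffer_kb(640, 350, 4))   # ~109.4 KB: sixteen colors need the extra 64 KB
```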
One of the big advances in graphics that found its way into EGA was the concept of bit mapping. Remember when I said that images on a screen are made up of individual points of light called pixels? Well, in the older approach, you would include information about each pixel. So you might say the pixel in column one, row one is red; the pixel in column two, row one is red; and the pixel in column three, row one is red. That gets pretty tedious. Bit mapping allowed for a different approach. With this approach, you would only include data on a pixel's color if the color was different from the pixel immediately before it. So if pixels one, two, and three are all red, you would only have to define it for pixel one. The system would understand that if you didn't have any new information for pixel two, it would also be red, and the same goes for pixel three. It would only be when you had new information that you would say, all right, now we have a new color, like blue. This made displaying shapes that were all the same color throughout much more efficient. There's more to it than that, but it gets technical and we'd have to talk more about electron guns and stuff, so we'll just leave it off there. But it was a big advance.
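The "only record a color when it changes" idea described above is essentially run-length encoding, so here's a tiny sketch of that concept. This isn't a real EGA data format, just the general idea: a row of mostly same-colored pixels collapses into a few color-and-count pairs.

```python
# The "only record a color when it changes" idea, as run-length encoding.
def encode_row(pixels):
    """Collapse a row of pixel colors into (color, run length) pairs."""
    runs = []
    for color in pixels:
        if runs and runs[-1][0] == color:
            runs[-1][1] += 1
        else:
            runs.append([color, 1])
    return [tuple(run) for run in runs]

def decode_row(runs):
    """Expand (color, run length) pairs back into the original row."""
    return [color for color, count in runs for _ in range(count)]

row = ["red"] * 3 + ["blue"] * 5 + ["red"] * 2
packed = encode_row(row)
print(packed)                      # [('red', 3), ('blue', 5), ('red', 2)]
print(decode_row(packed) == row)   # True
```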
It wouldn't be long before IBM introduced another advance in graphics technology. EGA debuted in 1984, and just three short years later, IBM introduced the Video Graphics Array, or VGA. No longer were we talking about adapters. Nah, this here was an array. So what does that mean? Well, it actually matters in this case. The CGA and EGA adapters were add-in boards that you would slot onto the main circuit board of a computer. So you'd open up the computer case, and there would be these little slots where you could slide in circuit boards. You'd slide the circuit board in, and it would have a port that would poke out the back of the computer case, and you could plug stuff in that way. This was very typical and still is to this day; there are still computers that do this with expansion slots. But VGA was different. VGA was hardwired onto the motherboard itself on the IBM computers. Later, third-party companies would make VGA adapter cards to give computers that did not have VGA installed directly on the motherboard the added capabilities of the new graphics standard. So while IBM took a different approach to this, other companies would replicate what IBM was doing on expansion cards that you could then plug into an existing machine. So what were those capabilities? Well, you could use lots of colors if you were also using lower resolutions. At a resolution of 320 by 200 pixels, the array could support up to 256 colors simultaneously. Wow. But if you wanted better resolution, then you had to reduce the number of colors. The higher-resolution mode of 640 by 480 supported just a modest 16 colors. The palettes could draw from a global collection of more than 260,000 colors. One other big difference between VGA and its predecessors is that VGA would send out data as an analog signal; EGA and CGA used digital signals. So what's the difference there? Well, an analog signal is continuous. It's unbroken, so you can plot it as a smooth wave. It doesn't have to be a smooth, gentle, repeating pattern; it can be all over the place, but it's unbroken. It's a continuous signal, so it can get really squiggly, but it's still one continuous, unbroken signal. So imagine playing a stringed instrument: you strum a string and it's playing a tone, but then you move your finger up the fretboard while the string is vibrating. That increases the frequency of the string's vibration, and thus we perceive that as the pitch of the note going up, and you can bend the note up. So if you've ever heard that kind of sound, you know it's a continuous experience. It's not like I heard it play low and then play high; I heard it shift through all those different frequencies until it reached its ending frequency. It was a very smooth transition. That's kind of like describing an analog signal, this smoothness. Digital signals are done in a series of steps. This is more about taking slices of time and applying a specific value to whatever signal you're sending out in that slice of time. The finer you slice the time, so the smaller or thinner the slices, the smoother you can make the signal, but in turn, it requires way more information to describe that signal. So rather than it being smooth and continuous and unbroken, if you were to zoom in on a digital signal, you would see the little edges of those steps of time as the signal goes up or down, depending on whatever it is you're measuring or indicating. Each step indicates a discrete amount of time and the data associated with that discrete amount of time. If you've got a lot of processing power, you can make those time slices very, very thin. And if you can do that thin enough, then it's almost as if you're listening to an unbroken signal; you get beyond the level of human perception. But there is a point where human perception definitely picks up on this stuff. One downside of analog is that analog cables, if they're not properly shielded, can suffer from interference problems. Digital cables don't; you don't get interference with digital cables. And generally speaking, with an analog cable, the longer the cable, the more prone it is to interference issues, and the shielding, as I said, is a big factor. So if you think of a cable as having several wires inside of it, if the individual wires are not shielded properly, you could get interference between them, and that would result in poor performance. From a graphics perspective, VGA really did set a new standard for computer graphics on the PC side of things, and it would also lead to IBM no longer being the entity that would define those standards. The rise of third-party companies creating IBM clones, by this time we pretty much just called them PCs, would prompt NEC Home Electronics to announce the intention to form a new organization. This organization is called the Video Electronics Standards Association, or VESA, and the purpose of VESA is to come up with technical standards for computer video displays and graphics. The group would build upon the proprietary VGA standard to create what has collectively been referred to as Super VGA. So think of VGA, but with even more capabilities, and no longer dictated by a single company, but rather by a consortium of companies that have decided what the standards should be. Super VGA could expand the resolution up to 800 by 600 pixels. Again, it's not one single standard; it's rather a collection of supersets of the VGA standards, so it's a little tricky to talk about Super VGA. It's not just one thing. IBM would go on to create the Extended Graphics Array, or XGA, but by that time Super VGA had kind of taken on a life of its own as the new model for computer graphics. IBM would no longer be front and center when it came to defining how PCs would display graphics on a monitor.
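One last round of back-of-the-envelope arithmetic, using the VGA and Super VGA numbers above. The one-byte-per-pixel and six-bits-per-color-channel details are the usual description of VGA's 256-color mode and its palette; treat them as assumptions layered on the episode's figures rather than anything stated in it.

```python
# VGA and Super VGA numbers from above, spelled out.
KB = 1024

print(320 * 200 * 8 / 8 / KB)    # 62.5 KB per frame at 320x200 with 256 colors (1 byte/pixel)
print(640 * 480 * 4 / 8 / KB)    # 150.0 KB per frame at 640x480 with 16 colors (4 bits/pixel)
print(2 ** (6 * 3))              # 262144: the "more than 260,000 colors" global palette
print(800 * 600 * 8 / 8 / KB)    # ~469 KB per frame at Super VGA's 800x600 with 256 colors
```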
By this time we're getting into the mid-nineties, and the term IBM clone was pretty much dropped in favor of PC, and that would apply to any computer running MS-DOS or, later (after '95 or so), Windows. IBM's decision to cut down costs by going with off-the-shelf components, coupled with the failure to secure an exclusive license for DOS from Microsoft, meant that IBM set the stage for its own competition in the consumer space. Ultimately, those competitors got big enough to create their own standards organizations, and so it became a group effort to come up with the way computers would continue to work. This, in turn, made it easier for lots of companies to enter the space, offering up competing products at competitive prices. IBM, for its part, would exit the personal computer market completely by the mid-2000s. The company sold off its PC division to Lenovo in a deal that was valued at 1.75 billion dollars, a princely sum. IBM was just finding it impractical to compete in that space and instead would return its full focus to enterprise-level products and services. But if it weren't for IBM, we wouldn't have seen this particular progression with computer graphics. I'm sure we would have arrived at some sort of place similar to where we are now without the IBM PC, but who knows what it would look like. You know, maybe there's a parallel universe out there in which we see a world where IBM never got into the consumer market at all and someone else took on that role, and maybe computer graphics themselves would be very different from the way they are today. But I can't travel to parallel dimensions, so I'll just have to imagine it. That was What Were CGA, EGA and VGA, from a couple of years ago. I will be back with all-new episodes next week, so I look forward to chatting with you then. As always, if you have suggestions for topics for me to cover on TechStuff, or suggestions for people I should have on the show, anything like that, let me know on Twitter. The handle for the show is TechStuffHSW, and I'll talk to you again really soon. TechStuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.