Once upon a time, a text-based Internet navigation system was poised to become the primary way we interact with the net. What happened?
Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio, and how the tech are you? It is time for a TechStuff Classic episode. This episode is called What Was Gopher?, and it originally published on April nineteenth, twenty seventeen. Hope you enjoy. And you guys have been crushing it at sending tons of suggestions my way, and I just want to say I really appreciate it. So remember, if you have a suggestion for a topic I should cover, send me a message. The email is tech stuff at how stuff works dot com. But this week's listener mail episode comes to us from Ryan, who says, please do an episode about the Gopher protocol, such a different approach to utilizing the Internet that no one knows today. And so today we're gonna walk back in Internet history and learn all about the Gopher protocol. But first, it kind of benefits us to remember what exactly the Internet is. And trust me, this is important in the grand scheme of things, but I'll be nice and high level and brief about it. So when you get down to it, the Internet is essentially just a bunch of computing devices that are all connected to each other, and that's it. Really. There's a lot of infrastructure there that allows this to happen, but ultimately it's a bunch of computers that are talking to each other. It's a network of networks using various types of connections and protocols to send and receive communications. But when you break it down to its simplest terms, you're really talking about a bunch of interconnected machines. Now, broadly speaking, we can classify most of these machines as either servers or clients. Now, there are other machines as well, like routers and switches and stuff, but I'm mostly talking about the devices that are directly communicating with each other rather than the ones that facilitate that communication. So a server, well, it does exactly what it sounds like. 
It does. It serves information. Clients request information. So your smartphone or your computer or your tablet that you use to surf the Internet or read email or whatever it is you do online, that is a client. The servers are the machines that contain all the information you're looking at and serve it up to you. And the Internet is older than the World Wide Web. A lot of people, probably not as many today as were around in, like, the late nineties, but a lot of people would equate the World Wide Web with the Internet, like the two are synonymous, and that's not really true. The Web is really just a layer on top of the Internet. It's an interface that makes it easy to access and interact with information, but it's just one part of the Internet. Before it came along, people were facing a bit of a challenge. So imagine you've got this network of networks. Information is on thousands of servers around the world. You want to get access to some specific information, like a very particular file. There's one file that you want to get. But how do you track down that file? How do you know which computer out there in this vast network of networks actually holds the file you want? You can't just scream out, hey, I need this one text file. You've got to know where it is in order to retrieve it. So the way this would work normally is that you'd send a message from your computer across the network to the specific server in question. So ideally you would know what computer hosts your file, and you would use some method of connecting to that server. Maybe you're using Telnet, maybe you're using FTP, the File Transfer Protocol, but you're using something to send a message to that server, and in return the server responds to you, and then you request access to that file. Really, the message you send is just a request for information stored on that server. And if you have access, authorized access, to that information. 
In other words, if it's not protected by, like, a password or something that you don't have, the server would send that information back across the Internet to you. But what happens if you don't know which computer holds the information you want? How do you get access to it? Back in the old days, this would be kind of like picking up a telephone to call somebody. Let's say you're trying to call me, only you don't have my phone number. I never gave it to you. I'm not listed anywhere. So how do you call me? There are millions of phone numbers out there. If you knew my phone number, everything would be just fine, because you could just dial it directly. But since you don't know my number and there's no directory to consult, you're faced with a nearly impossible task. What do you do, just start randomly hitting buttons until the right person picks up? Well, that was the problem facing the Internet. It arose because the network became more complicated. Just as in the old days, you might have very few telephones in a single town. Like, I'm talking the real old days. When telephones first became popular, you didn't need to worry about phone numbers. Maybe three people in town had a phone. These days you've got to worry about it, because millions of people have phones. Well, the same thing was true with the network of networks. As more computers came online, as more systems joined the Internet, it became more difficult to track very specific stuff down. So before the Internet, there was a sort of precursor. Not really the Internet, but a system of networked computers called ARPANET. Now, the ARPANET was a project from the Department of Defense in the United States, and it was something that required a bunch of computer scientists to sit down and work out protocols. It was essentially a big but relatively simple network of computers. Now, I say simple because if you look at the Internet by comparison as a network of networks. 
It is far more complex than what ARPANET was. But I am not casting aspersions on ARPANET. It was still incredibly sophisticated, particularly for its time. It allowed people working on different computers that used different operating systems, different languages, the ability to communicate with each other. And that's phenomenal, because usually, you know, these are, like, computers that if you were to connect directly to one another, they would not be able to communicate with each other. They were fundamentally different. So the designers of the ARPANET had to figure out a set of rules that all these different types of computers could follow in order to exchange information with one another. So that required a lot of work to create these protocols, or sets of rules, to facilitate communication. And I've talked about ARPANET before. If you want to hear a lot more about it, you can track down the classic episode. It originally published in twenty twelve. It was titled Happy Fathers of the Internet Day, and in it Chris Pollette and I talked a lot about the people who designed the various protocols that allow Internet communication to happen. Well, anyway, two of the protocols in question during the days of ARPANET were the Transmission Control Protocol, or TCP, developed in nineteen seventy five, and the Internet Protocol, or IP, in nineteen seventy eight. These are almost always grouped together. Whenever anyone talks about Internet protocols, they'll say TCP/IP. But they are technically two different sets of rules that together end up guiding the computers on how to communicate across this network of networks. These are the rules that gave ARPANET the ability to create global communication across various different computers. The two protocols were officially adopted by ARPANET in nineteen eighty three, so while they were developed in the seventies, they weren't officially adopted until nineteen eighty three. 
Now, they were being used before that, but you know, ARPANET was a government thing, and just like you would suspect, it moves super fast because it's government. Well, the following year, nineteen eighty four, a lot of things happened. We got that really killer Apple Mac commercial, the Orwellian nineteen eighty four one, that was phenomenal. But also, Sun Microsystems unveiled a system called the Network File System, or NFS, that was sort of like a file directory and storage system for networked computers running in a homogeneous Unix environment. So you had to have a bunch of computers running the same Unix implementation, essentially, or the same version, I guess I should say, not implementation, but the same version of Unix across all these different machines. And it was kind of like your basic file management system you'd have on a personal computer. So if you've ever used a computer that had a variation of this, like Windows File Explorer is a good example, you typically have a hierarchy of files. You'll have, like, a top level selection of folders. If you go into one of those, you might see folders, like subfolders, underneath that main heading. Go into that one, you'll see more subfolders, maybe some files, that kind of stuff. Well, it was essentially that, but for an entire network of computers. So you could share information between different computers on the system by saving files to this network drive, something that's very familiar to most of us today, but back then was really new. And then, obviously, you could retrieve files from the appropriate directory on that same system. Also in nineteen eighty four, a team over at Carnegie Mellon University released the Andrew File System, or AFS. Now, originally the AFS provided Carnegie Mellon students a campus wide file directory system, and later on a company called Transarc sort of spun off and turned this into a commercial product. 
Now, these file systems allowed users to access files on other computers, but they also came with a really steep learning curve. If you weren't particularly computer savvy, it was really challenging to navigate through these file directories. It would be kind of like taking someone to a computer that's all line commands when that person had never really worked with a command line interface. They'd only ever worked with graphics. By line command, I mean you have to type in the command and the file names you want in order to access stuff. Well, if that's what you've been doing the whole time, it's not so bad. You start to pick up on it. Even as new systems are brought on, you can kind of adapt pretty quickly, because you already have that basic level of knowledge. But for someone brand new to computers, it was really intimidating. It felt like you needed to have a lexicon of different commands at your disposal all the time, and a lot of people would have to use cheat sheets and take a look at those constantly so that they could figure out just how to navigate a computer. It wasn't very user friendly, and it was really hard to find what you were looking for if you didn't already know where to start from the beginning. The trick with filing systems is that you have to understand the organizational strategy. So not only do you have to understand the language to navigate the system, you also have to understand the person who set it up in the first place. So there's a bit of machine savvy you have to have, but there's also, how well do I know the person who set up this filing system, and how does that person's brain work? Because maybe they'll set things up in a different way than I would set it up, and if I were to try and navigate using my common sense, I might find it frustrating. So here's an example of what I mean. Back in the day, I used to organize all of my DVDs by genre, and then in alphabetical order by title. 
So I had all my action films in one place, I had all my horror movies in another place, all my Tim Curry movies were together, all that kind of thing. You know, Tim Curry is a genre unto himself, a national treasure. But if you didn't know that I did that, and you didn't know where one section began or ended, you could have trouble finding a specific title. Maybe you would have had a totally different approach. You would have organized everything alphabetically from the start, so it didn't matter what genre it is, and you would know, all right, well, the title I'm looking for starts with a D, so I need to find the Ds and it's going to be there. Or maybe you would even have a more crazy organizational strategy. Maybe you would organize everything chronologically by release date, which would confound everyone but the geekiest of film buffs. The point is that these file systems back in the day, with these network drives, they weren't always intuitive. Even if you weren't immediately put off by the interface, you might still not be able to find what you were looking for. Now, later still came Alan Emtage's Archie tool. He was a McGill University student and then employee, you know, he had gone from one to the other, really, when he invented this tool to search for files on the Internet that would be transferred via FTP, the File Transfer Protocol. So instead of requiring the user to know a direct path, you just had to know the name of the file, how the file was named. Now, that still isn't ideal, because I'm sure you've encountered files that had weird or non intuitive names that may have had nothing to do with what the file actually was about. So you had to know precisely how the file was named or else it really didn't do you any good. But at least it didn't require you to also know the full path name of where a file was, and it allowed you to actually do a search across the Internet. 
Now, once the Web came along, and I'll talk a lot more about the World Wide Web later on in this episode, and once programmers began building search engines, this problem became less difficult, right? Search engines started to make things much easier for people. The Web made navigation more intuitive. It made it more accessible to people who weren't experienced with computers or didn't have inside knowledge of directory organization. But in those in between times, before the Web showed up but after the Internet came online, that's when Gopher emerged. Now, there were several developers behind Gopher, really six people in total, and one of the more important ones, I shouldn't even say it that way. One of the leads, I should say, because all six were important. One of the leads was Mark McCahill, and McCahill attended the University of Minnesota. Actually, they were all part of the University of Minnesota, but he had attended as a student, earning a bachelor's degree in chemistry in nineteen seventy nine. But while he earned his degree in chemistry, he would actually go on to join the campus Microcomputer Center as a programmer, and by the late nineteen eighties he was leading a team of programmers, and together they built an email client called POPmail, first for the Macintosh computer and then later on for the PC platform. The POP in POPmail stands for Post Office Protocol. So the way it worked was that you had a server that would kind of act like a virtual post office. Whenever you were checking email or sending email, it would involve a virtual visit to this post office. You could drop off and receive mail there. So if you wanted to send a message through your POPmail client, the client, as in your computer, the workstation you were using, would relay the message through the post office server, which would then direct the message on to its destination. 
When retrieving messages, you would actually be going to this POPmail server to pull the emails off of it in order to read them. So they were the ones who developed this, and it was a very popular way of getting hold of email. The next big project that this team tackled was creating an interface for navigating the university's local network. So the question was, how can we find whatever we're looking for? How do we organize all this information we have in a way that makes sense? Because they could see that the Internet was creating this incredible opportunity to share information, but it was getting increasingly difficult to find stuff and to organize stuff properly, because there were more and more computers coming onto the system, and it just made things even more complicated. So together, McCahill, Bob Alberti, Farhad Anklesaria, Paul Lindner, and Daniel Torrey began to work to develop a solution, and as it turned out, their solution had applications far beyond the campus they worked on. In fact, it would turn out their solution had applications outside of the campus they worked on before it actually was implemented by the campus itself. It would work across the Internet in general. Now, I'm going to jump into what they did in just a second, but before I do that, let's take a quick break to thank our sponsor. So the team got to work in nineteen ninety one. They knew that they wanted a system that could organize information in an intuitive way, giving users an opportunity to navigate to the information they needed quickly. They wanted it to be a client server system. This was really important to them. Perhaps the biggest decision they made, the most important one to them, was to not go with a centralized mainframe organizational strategy. They wanted to get away from mainframe computers. They saw that the future was in desktops. 
But desktop machines, especially in ninety one, were slow. They had relatively limited capabilities, so whatever system they created had to be optimized for these less powerful machines. They also wanted to make this system scalable, with options to have various departments register servers with a top level server. So in other words, you could have different departments having their own networks, but have them linked to the main network so that you can navigate throughout all of these different systems. And that way, if you connected to that top level server, you could navigate to any lower department through a master menu. They also wanted the tool to be efficient and lightweight, meaning they didn't want it to tax those normal PCs and workstations. They didn't want a tool that would move at a snail's pace, and that was one of the reasons that they really focused on text based communications, because it doesn't require a whole lot of data to send text, and it meant that it could be relatively fast and informative. They also decided that the server would not retain state. Now, that means that the server wouldn't need to keep a record of all the different information requests from multiple clients. Once a connection closed, the state was wiped, and it would close the connection after every transaction, essentially. So together they worked on this tool, and they decided to call it Gopher. The name, by the way, serves many purposes. For one, the mascot of the University of Minnesota is the Golden Gopher; Minnesota is the Gopher State. Also, the idea of a metaphorical gopher appealed to them. Gophers tunnel, and in a way, the Gopher protocol kind of tunnels through networks. They also started to refer to the networks that were using Gopher as Gopher space. And finally, gopher is also the term for someone who runs errands and fetches things, a gofer, because the gofer will go for, like, coffee, or, you know, go for snacks, that kind of stuff. 
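That stateless, one-shot transaction is the heart of the protocol the team later wrote up in their request for comments (RFC 1436): the client connects, sends a selector string terminated by a carriage return and line feed, reads everything the server sends back, and the server hangs up without remembering anything. Here's a minimal sketch of that exchange in Python; the host name in the comment is purely illustrative, and this is a simplified reading of the spec, not the team's own code:

```python
import socket

def build_request(selector=""):
    """A Gopher request is just the selector string plus CRLF.
    An empty selector asks the server for its top-level menu."""
    return selector.encode("ascii") + b"\r\n"

def gopher_fetch(host, selector="", port=70, timeout=10):
    """One stateless Gopher transaction: connect, send the selector,
    then read until the server closes the connection. No session,
    no cookies, no state carried over to the next request."""
    chunks = []
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(build_request(selector))
        while True:
            data = sock.recv(4096)
            if not data:  # the server closes after every transaction
                break
            chunks.append(data)
    return b"".join(chunks)

# e.g. gopher_fetch("gopher.example.edu") would return that server's
# top-level menu as raw bytes (hypothetical host, for illustration only)
```

Because every request is self-contained, a server never has to track who asked for what, which is exactly what made it cheap to run on the modest machines of 1991.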
The classic Muppet Show series had Scooter, who was a gofer. There were a lot of jokes about that, about how Scooter was a gofer by job, but he wasn't a gopher by critter, like, he was not the actual animal gopher. Anyway, here's the weird thing. So the team knew they were onto something when they designed Gopher, but it didn't get a lot of traction right out of the gate despite its utility, because the University of Minnesota, they just didn't see how this would be useful. They didn't see the value in it, so they didn't really put any support behind it, and they didn't implement it at all. So the team decided to sidestep all that red tape, and one of the team members uploaded the software to an FTP server and then posted a message on an Internet mailing list saying, hey guys, we developed this kind of cool tool. We think it is a great way to organize information on networks and allow people to find stuff. Give it a try. So people began to download it and put it to use, and not long after that it started to take off. A lot of places were using it, and they decided that it made a whole lot of sense, and eventually the University of Minnesota administration relented and implemented Gopher as well on their campus. Now, let's go ahead and talk about Gopher implementations and what it was actually like. So there were two main ways you could interact with Gopher. One of them was to install software on your computer. And this is looking at interacting with Gopher as a user as opposed to as an administrator. So you want to use Gopher? Well, one thing you could do is install software on your computer, and this was a client side Gopher application, which really is not that different from a web browser like Firefox or Chrome or Safari. It's essentially a similar thing. It's a program that runs on your computer that acts as the interface for this protocol. Those are all client side applications, just like the Gopher one was. But you didn't have to do that. 
What you could do instead is use a Telnet application. Telnet was another old protocol that's still used in some places. Telnet would allow you to virtually log in to another terminal, and you just had to have the correct address and port number and everything in order to Telnet into that remote machine. So you could use Telnet to log onto a remote server and use the server side Gopher client to access directories. And in that case, all the work is being done by the server, and you don't have to actually have a client based application on your side. The style of the Gopher client really depended upon its implementation. The team wanted to make sure that it was customizable, but it was always simple. It was always text based. It could be extremely simple, where all you have are text options under a menu, or it could have some very primitive kind of graphical user interface elements in it. Graphical user interface, also known as GUI, but less, what's the right word, less impressive or extensive than what you would find in the World Wide Web later. And there were two worlds in the Gopherverse: the world the client could see and the world the server worked with. So on the server side, every item within a Gopher directory would have certain elements to it, and that included a client visible name; a selector, which was usually a path name that the server used for location purposes; then there would be a host name, which would tell the top level server which computer actually contained the file in question; and an IP port number. Ports are kind of like telephone lines, so a computer had to know which port it should use when contacting other computers. Now, on the client side, on the user side, only those client visible names would show up. Everything else would be hidden. And to use an analogy, imagine that you go to a party, and it's one of those super hip, geeky tech parties that I assume happen but I never get invited to anyway. 
Everyone gets a little name tag, and the name tag just has the person's name on it. That's it. That's the only information about the person that is visible to the average attendee. So when you walk around the party, you can see each person's name, but that's it. But let's say you're a super duper awesome VIP at this party, and all the super duper awesome VIPs at this particular party get a little swag bag that has a pair of smart glasses in it, and if you put the smart glasses on, you see, projected above the name tag of each person, more information, and it tells you where that person lives, what their phone number is, tells you about their favorite lunch spots. In other words, you know, a few different ways you could reach those people if you needed to. Well, in the Gopherverse, that same sort of approach was applied, but only to files, not to people, so there wasn't anything creepy going on. But McCahill and his group decided that users didn't need to see all the extra information. They don't need to know where a file is on a network. They don't care. They don't care if the file they want is on computer A or computer B. They just want access to that file. So they decided that all of that excess information would be invisible to the user. It would still be there for the servers, because the servers would need to know where everything was in order to direct users to the right location. But to the users themselves, they don't care, so get rid of it, and it ended up being a much more elegant system that way. So let's say you've logged into a college campus Gopher server, because the tool was originally developed for colleges. So what would you actually see if you did this? Well, you'd start off with a menu of options. It's kind of like a table of contents, and that would allow you to explore different areas of the Gopherverse more thoroughly. 
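Those four server-side elements, the client-visible name, the selector, the host name, and the port, travel together as one tab-separated line in a Gopher menu, with a one-character item type glued to the front of the name (per the team's RFC 1436). A small sketch of a parser makes the name-tag-versus-smart-glasses split concrete; the menu line and host name here are made up for illustration:

```python
from typing import NamedTuple

class GopherItem(NamedTuple):
    item_type: str  # '0' = text file, '1' = directory, '7' = search server
    display: str    # the client-visible name -- the only field users see
    selector: str   # path-like string the server uses to locate the item
    host: str       # which computer actually holds the item
    port: int       # which port to contact that computer on

def parse_menu_line(line: str) -> GopherItem:
    """Split one tab-delimited Gopher menu line into its fields.
    The first character of the first field is the item type."""
    first, selector, host, port = line.rstrip("\r\n").split("\t")
    return GopherItem(first[0], first[1:], selector, host, int(port))

# Hypothetical menu line: a client would render only "About Internet Gopher"
item = parse_menu_line("1About Internet Gopher\t/about\tgopher.example.edu\t70")
```

Everything except `display` is the smart-glasses layer: the client keeps it around so it knows where to send the next request, but never shows it to the user.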
A typical top level menu might include, and by the way, I'm taking this largely from a request for comments that McCahill and his group created in order to discuss what Gopher is, what it was meant for, and how it works. The first item might be something like About Internet Gopher, and it would literally just be that little phrase, and if you selected that one, it would open up something that would allow you to learn more about Internet Gopher. It would pull the files that could tell you about the version of Gopher in use and how best to navigate it. It may even tell you about the site administrator. And in fact, in that original request for comments, they suggested that every administrator for a Gopher server include his or her name, email address, and phone number so that you could get in contact with them should something go wrong. Yikes. Obviously, this was in the days when very few people, comparatively speaking, were using an Internet tool, because you can just imagine how much traffic you would get, emails and phone calls, if you put your actual contact info out on a major website today. That would be tough. The next element on this Gopher list might be something like Around the Campus, and that could include more directories that give you links to files about various areas of campus. You might get some menus of different options. For example, at the University of Georgia, if I saw Around the Campus on a Gopher site and I clicked on that, it might take me to a couple of different sub menus, one for North Campus and one for South Campus, because the University of Georgia, at least at the time when I was attending it, was roughly divided into North and South campuses. Another option might say Courses, Schedules, and Calendars, which would obviously take you to information about the classes offered by that college and when they happen, as well as a list of campus events throughout the year, and so on and so forth. 
Those are just the kind of basic table of contents stuff you would see, and again, it's all text based. Typically, the options on any list would have indicators to let you know if you were looking at the name of a file or if you were looking at the name of another directory. So again, in an example given from that RFC paper that was written back in nineteen ninety three, the group showed directories as being followed by an ellipsis, and files were not. So if it said Campus Schedule with no ellipsis, then you knew that choosing that option would give you a file with the schedule in it. But if you were looking at a list that said College Activities dot dot dot, you knew that the dot dot dot, that ellipsis, indicated that it was a link to another directory. So if you clicked on College Activities, you would get another menu that would subdivide college activities. Maybe it would be sports, after school activities, extracurricular student groups, that kind of thing. Now, Gopher's design allowed other servers to interact with a top level server, so it didn't have to just be a single computer running all of this information. In fact, that was beside the point. The whole purpose of this was to allow multiple servers to connect with each other and create an overall strategy for navigation, to allow students or other users to find more information. So you could have, say, the Computer Science department server registered under a top level option for the entire university, and the English department could have its own server, and the Math department could have its own server, and so on and so on. So each departmental server would further point down the chain to specific files relevant to that department. And Gopher also included the ability to stack, meaning that you could actually retrace your steps, which is kind of like hitting the back button on a browser. 
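The ellipsis convention from that RFC example is a client-side rendering choice: the client looks at the item type that came over the wire and decides how to decorate the name. A tiny sketch of that rendering logic, using the file and directory types from the spec and the menu entries from the example above (the entries themselves are hypothetical):

```python
def render_menu(items):
    """Render (item_type, display_name) pairs the way that early RFC
    example did: directories (type '1') get a trailing ellipsis so the
    user knows choosing them opens another menu; files (type '0') don't."""
    lines = []
    for item_type, display in items:
        suffix = "..." if item_type == "1" else ""
        lines.append(f"{display}{suffix}")
    return lines

render_menu([("0", "Campus Schedule"), ("1", "College Activities")])
# -> ["Campus Schedule", "College Activities..."]
```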
So if you went down one path and then decided, hey, I need to see something else that was further back in my browsing, you could back up the pathway until you got to that specific fork and then take the other fork. And that was actually pretty innovative in the early nineties. It wasn't like it was something that they were copying from other people. Unlike today's websites, the information on Gopher pages tended to remain fairly static, which actually was also a huge help for Gopher space. You didn't have to update things quite so frequently. Occasionally you might need to update a file, or you might need to add a file to a server, but that didn't happen at the frequency we see today on the Web. There weren't a lot of dynamic pages out there. It was important for Gopher to be able to adapt to changes and evolving conditions, so the team designed a system that would allow them to update files and path names in pieces, so that you didn't have to do a full update of the whole system regularly. You could just focus on whichever sections had new information and just update those bits. Everything else could remain the same. That sped everything up. Like, if you are only making changes to two percent of your network, then there's no reason to update all one hundred percent. They could save time by just updating that two percent. They also made another very clever choice. They recommended that all top level servers have a clone, and that way, if one server were to fail for whatever reason, another one could take over the job. Also, if you had a really busy network, you could have both servers acting at the same time, sharing the load. So some users are using server A, some users are using server B, but both of them are identical. 
They have complete navigational tools to go through the whole network, and so it doesn't really matter which one you land on as a user. It's going to be the same experience. And since most of the files remain static, you didn't really have to worry about the two top level servers getting out of sync. They would be pretty much ready to go all the time. I mean, occasionally you'd have to update at least parts of the system, but it was pretty manageable, especially in those early days before things got super complicated. Now, the team also suggested that all registered servers in a system have an alias that the Gopher client could use to locate those servers, and this way the client would know each registered server by its alias rather than the server's primary name, which was more or less unchangeable. Now, why would you want to have an alias? Well, the reason was to improve portability. So if, as an administrator, you decided that you needed to move files from one server to another server, that could be a huge headache, because it would mean having to update all these different path names so that your top level machine knows where everything is, right? If you had to say, oh, well, instead of being on this machine, it needs to be on that machine, I've got to update all these different files and these different path names so that people navigating the system will get to where they need to go. Using the alias was actually a very clever idea to get around this. Clients wouldn't require an update or any alteration. They don't need to know that computer one, which used to be known as Bob, has gone offline and now computer two has all of computer one's stuff. Instead, the administrator would just name computer two Bob. They would give computer two the alias of the old computer one, and then the clients would just end up going to computer two, because they're not looking at one or two. They're just looking for Bob, and if computer two is Bob now, that's where they go. 
It was an elegant solution, particularly once networks got really complex. Now, the brilliant thing behind Gopher is that the end user didn't need to know any of this. That didn't matter to them. They just needed a simple interface to interact with. They needed to be able to see the choices, make a choice, and follow that pathway, and it was very similar to a choose your own adventure book. As long as the directories were clear and the organization of the information made sense, it was easy to use. And then there was the killer app: the ability to search. To search on Gopher, you had to set up an actual Gopher search server, whose entire purpose was to index all the different files connected to that particular Gopher space, and these servers could perform full text searches on all the files on all the registered servers in that Gopher space. By default, search queries assumed that if you were to type in multiple words, the spaces between those words were an AND in Boolean logic. In other words, if you typed Spanish dance in the query field, so you just have Spanish, space, dance, the search would assume what you were saying is: give me all the files that have both the words Spanish and dance in them. Files that didn't meet that criteria would not be returned. So if there was a file that had the word Spanish in it but dance never appeared in that file, you wouldn't get it. Same thing if it had dance but Spanish was never there. Now, this structure only worked with servers that were registered to the same system, so these Gopher spaces were kind of like islands in the ocean. You could connect Gopher spaces together; you could register the servers of different Gopher spaces so that they made larger networks, and that would allow users to navigate from one system of servers to another. But that wasn't necessarily the aim of Gopher.
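That implicit-AND behavior is easy to sketch. Here's a toy version in Python; the file names and contents are invented for illustration, and this naive word matching is far simpler than what a real Gopher search server (like Veronica-era indexers) actually did:

```python
def search(query: str, documents: dict) -> list:
    """Implicit-AND full-text search: every word in the query must appear."""
    terms = query.lower().split()
    return sorted(name for name, text in documents.items()
                  if all(term in text.lower().split() for term in terms))

docs = {
    "flamenco.txt": "a traditional spanish dance from andalusia",
    "tapas.txt": "spanish small plates",
    "ballet.txt": "a classical dance form",
}

# Only the file containing BOTH words comes back.
print(search("spanish dance", docs))  # ['flamenco.txt']
```

A file with only Spanish, or only dance, never makes the cut, exactly as described above.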
So if I did a search on a Gopher space for a specific term, say at UGA, and UGA was its own contained system, its own network of servers throughout the campus, but none of those had connections to other campuses, I'm not going to find files that are stored at the University of Minnesota, because the two systems are distinct. I'll only find files that are on the UGA system. So it was a little different from the World Wide Web. It had these kind of islands of information. Well, I've got some more to talk about, including the rise and fall of Gopher itself, but before I get into that section, let's take another quick break to thank our sponsor. All right, we're talking about the nineteen nineties now. The Internet was still pretty darn young back then, especially an Internet that was accessible to people beyond the computer science field, and no one was really sure what it would evolve into in those early nineties. In nineteen ninety two, the Internet Engineering Task Force, or IETF, held a meeting in San Diego, and they invited McCahill and also Anklesaria to come out and speak to the group as a whole about the work they were doing with Gopher. The meeting included a whole bunch of important figures in Internet history, including Tim Berners-Lee, the guy who pioneered the World Wide Web. He built the first web page at CERN, the same organization that runs the Large Hadron Collider. So Berners-Lee was there to talk about his approach to Internet navigation using what he called the web and hypertext, which I'm sure you're familiar with because that's what everybody uses today. He was also very interested in trying to find a way for the Web and Gopher to work together. The folks over at Gopher looked at Berners-Lee's approach to navigation and said, well, this doesn't make any sense; using hypertext on different words to leap to connected but distinct ideas makes organization impossible.
It's like the opposite of what we're doing. We're trying to create hierarchies of information, and the web is a web; it's all this interconnectivity. So they never really got to work things out with Berners-Lee, something that they said in retrospect perhaps was a mistake, but they didn't know that at the time. No one was sure at the time whose approach was going to work out best. So it was a weird experience, right. Besides the web, programmers learned about other means of accessing and navigating the Internet. There were other ones like Prospero and WAIS. B. Clifford Neuman was the pioneer behind Prospero, which was meant to create personalized directories of the Internet in response to user queries, so it could organize information and allow users to search, grouping resources together so that users didn't have to go down several different individual paths and just hope for the best. WAIS stood for Wide Area Information Server. It was a text searching system that could look for specific strings of text in files on a server, retrieving any hits that it might find. It was also very different from the Gopher approach or the Prospero approach; it was a pretty basic strategy for search and retrieval. So at this gathering, McCahill discovered that people already knew a lot about Gopher. He thought he was going out there to tell people about a system that he had developed and launched the year before, but everyone there seemed to really be familiar with it. A lot of them were running their own Gopher servers. They were really curious to hear more about the team behind Gopher. The software had spread rapidly because of its easy implementation and interface, and adoption was incredible. By nineteen ninety three, Gopher's traffic growth rate hit nine hundred ninety seven percent. Now that definitely sounds astronomical, but keep in mind percentages are tricksy.
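Here's what I mean by tricksy, in a couple of lines of Python. The newspaper numbers are just a made-up illustration:

```python
def growth_rate(before: float, after: float) -> float:
    """Percent growth from one period to the next."""
    return (after - before) / before * 100

# One newspaper sold one week, ten the next: a 900% growth rate on tiny numbers.
print(growth_rate(1, 10))        # 900.0

# The same 900% rate from a bigger base means vastly more in absolute terms.
print(growth_rate(1000, 10000))  # also 900.0, but 9,000 more units sold
```

Same percentage, wildly different absolute numbers, which is why a huge growth rate alone doesn't tell you how big something actually is.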
For example, if I sold one newspaper in a week, and then the following week I sold ten newspapers, I'd have a sales growth rate of nine hundred percent. So while nine hundred ninety seven percent growth is spectacular, we're not talking about enormous numbers. By April nineteen ninety four, the number of Gopher servers, or at least the ones that the team was aware of, had hit just under seven thousand. But Gopher did have a big head start on the web. It took the web a bit longer to catch on, because you had to have the development of the clients that people could use to browse the web, and you had to have the development of the actual software people were using to create web servers. Gopher was popular enough for the team to start holding conferences of their own, called GopherCons. They even created t-shirts and merchandise with the Gopher logo on it. Colleges and companies were adopting it. Even the White House launched a Gopher site back in those days. But hot on the heels of Gopher was the World Wide Web concept, which had its own compelling methods of navigating and presenting information. Even though it seemed non-intuitive to the Gopher group, it turned out that people really liked it. Anyone who has spent a lot of time on a site like Wikipedia, dancing from one topic to another, diving down those rabbit holes, knows what I'm talking about. Somewhat ironically, Tim Berners-Lee was actually able to spread the word about the World Wide Web using a Gopher site. That's how people were able to get the files they needed to create web servers and web clients, as well as find more information about the web itself. And the Gopher team was running into problems with the University of Minnesota as well. The administration was always something of a reluctant partner with this group. At first, the college had been a roadblock. You remember I mentioned they refused to implement Gopher in those early days.
They only did it after it had already been implemented at other college campuses. Members of the Gopher development team were looking at the possibility of creating a private entity, like a spinoff company, to offer Gopher as a commercial product. But back in those days, the Internet was still largely the domain of universities and government offices. In fact, a lot of people looked at the possibility of private enterprise getting involved with the Internet as something they did not want. A lot of people compared it to public broadcasting. They said the Internet should remain like PBS, and all these companies, that's like ad-supported TV; we don't want that here. So there was a lot of resistance, even within the group itself. Not everybody was in agreement that they should go the privatization route. And remember, the dot-com era had not yet begun at this point. Then the University of Minnesota instituted a licensing policy for Gopher. They said that any organization out there that was operating for profit would need to pay a license fee to the University of Minnesota in order to use Gopher, and the cost of this license was on a sliding scale; it depended upon the size and type of business in question. And the university was looking to keep all that money for itself, really. Meanwhile, the development team found that the university still wanted them to do their regular desk jobs as programmers for the University of Minnesota, so they were supposed to both develop Gopher and work on their normal jobs, and they weren't really given any more resources to do this. It was still just the six of them, so their morale began to decline.
And then they had the university expecting them to do these two jobs at once, as well as an entire population of users out there who were angry about this licensing policy that the university had put in place, and the team was kind of caught in the middle. At one point, the university even considered outsourcing Gopher to other programmers, essentially taking it away from the team that created it and giving it to someone else. Meanwhile, the web was just getting started. Remember I said Gopher grew by nine hundred ninety seven percent back in nineteen ninety three? Well, the web also grew that year, by three hundred forty one thousand, six hundred thirty four percent. It was also the first year of the Mosaic web browser, and Mosaic proved that you could go commercial on the Internet. So while the Gopher group was reluctant, on the web side things were going in a very different direction, and the web was incorporating stuff that Gopher couldn't really support, like images on web pages. Now, at first an image on a web page was not a huge asset, because it took forever to render a picture on a screen, and it caused some people to say that WWW actually stood for World Wide Wait. But modem speeds were on the rise as well, and that helped reduce some of that loading time. And some of the Gopher team rather sardonically said, we knew the game was up when we realized that people could get naked photos on their computer screens using the World Wide Web, something that could not be done with Gopher, and they were being pretty serious about it at the time. By the spring of nineteen ninety four, web traffic exceeded Gopher traffic. So even though Gopher had a head start, the web had caught up and passed it.
Companies were discovering that there were some possibilities on the web. Web commerce was still a couple of years away from really taking off, but the web offered opportunities beyond just sharing information, which was really the only thing Gopher was good for: sharing information and distributing files. It wasn't meant to do much more than that. So the web had versatility on its side as well. And one of the big things that helped drive the knife home into Gopher had nothing to do with how people used the Internet. It had to do with a scandal unrelated to anyone on the Gopher team. Doctor John Najarian, a transplant surgeon at the University of Minnesota, was accused of fraud, tax evasion, and embezzlement. The charges led the National Institutes of Health, which is a federal government office here in the United States, to withdraw funding from the university, and that was a serious blow to the school, so they went into emergency mode to try and fix the situation. As part of that, they asked the Gopher team to create a new system to track accounts and file paperwork, because a large part of this accusation really dealt with the way paperwork was failing to be filed on time. So the team eventually developed a web based transaction program, which is also kind of ironic when you think about it; it was a web based approach, and it was the world's first web based transaction program. But it also meant that while they were developing this, they could not continue working on Gopher, because they just didn't have the time to do it. So by the time they finished the web based application, Gopher was dead in the water, or I guess dead under the ground, if we want to stick with the metaphor. Anyway, it's not really dead dead. It's not entirely gone. There are still Gopher servers out there if you know where to look.
But as an evolving entity, Gopher was essentially a thing of the past, and at some point, no one's really sure when, someone took the original Gopher server, known as Mother Gopher, offline. The University of Minnesota supported it for several years after Gopher had clearly lost the race to the web, but that is no longer the case. As for McCahill, he would go on to join Duke University, where he still works today, and other members of the team have gone on to do other things. Some of them retired, some of them ended up working for Google, and one of them stayed with the University of Minnesota. Oh, and Doctor Najarian, who was accused of those crimes, was ultimately exonerated of those accusations of fraud and embezzlement. The jury returned a verdict of not guilty, but by then the damage was done in more ways than one. Gopher had faded away due to lack of support, and the university found itself struggling to regain the trust of the NIH. And that was the TechStuff Classics episode What Was Gopher, which originally published April nineteenth, twenty seventeen. I hope you enjoyed that episode. For the older listeners out there, let me know if you used Gopher. I mean, some of the younger ones may have too, it's just less likely, I think. But yeah, let me know if you used Gopher and what your thoughts on it were. I would love to hear from you, and I hope you are all well and I will talk to you again really soon. Tech Stuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.