What Claude Shannon Figured Out

Published Jan 16, 2025, 5:30 AM

 Claude Shannon is a major figure in the history of technology. Known as the father of information theory, Shannon spent decades at Bell Labs and MIT. But what exactly did Claude Shannon figure out, and why is it so important?

To answer that question, Jacob talked with David Tse, a professor of electrical engineering at Stanford who studied under one of Shannon’s students, and who teaches Shannon to his own students today. David used Shannon's work to make a breakthrough in wireless communication that underpins every phone call we make today.

Pushkin.

Hey, Happy New Year. We're very happy to be back, and I have one request before we start the show. I'm asking you a favor, and the favor is this: would you please send us an email to problem at pushkin dot fm and tell us what you like about the show and what you don't like about the show, and specifically what kinds of things you want to hear more of, and perhaps what kinds of things you don't want to hear. Again, it's problem at pushkin dot fm. I'm going to read all the emails, so thank you in advance for sending them. Claude Shannon is this huge figure in the history of technology. He's one of the key people who worked at Bell Labs in the middle of the twentieth century and really came up with the idea that made modern technology possible. But I'm going to be honest with you, I never really understood what Claude Shannon figured out that was such a big deal. But the people who know about technology, who know about the history of ideas, they say Shannon's a giant. Claude Shannon is like the nerds' nerd, the techno-intellectuals' techno-intellectual. And so for today's show, I wanted to understand: what did Claude Shannon figure out, and why is it so important.

For the modern world.

I'm Jacob Goldstein, and this is What's Your Problem. My guest today is David Tse. David is a professor of electrical engineering at Stanford. He has studied Shannon for decades. He teaches Shannon's work to his students, and David used Shannon's work to make a breakthrough in cell phone technology. And that breakthrough, the breakthrough that came to us via Shannon and Tse, it affects every phone call we make. David and I talked about Shannon's key insights and about how David's own work built on Shannon, and we also talked about the big chunk of Shannon's life that was taken up with juggling and riding unicycles and building mechanical toys. But to start, we talked about how, in the middle of the twentieth century, Bell Labs wound up driving so much technological innovation.

Yeah, so Bell Labs was the research lab of AT&T. AT&T at that time was the phone company. Okay, nowadays we have many phone companies. We have Verizon, we have T-Mobile, et cetera. But those days there was only one phone company, and that's a monopoly. So a monopoly needs to justify its existence.

Huh. So it doesn't get broken up by the government.

It doesn't get broken up. Of course, it eventually got broken up, but at that time it was a monopoly. And so one way of justifying its existence is to say, okay, it says to the American people, to the government, that we will always spend a certain percent of our revenue on this research lab called Bell Labs, and whatever Bell Labs comes up with is kind of our contribution, not only to our bottom line, but also to the technology of the country.

So they have this sort of public mission to prevent the government from breaking them up.

Yeah, and so therefore it also allows researchers a very free rein to do research that's not necessarily tied to, say, a particular business unit. Okay, so they can be very creative. And that's the atmosphere. So Bell Labs attracted a bunch of very smart people, because smart people want to work on their own problems, not the problem that the manager gives them. Yeah, okay, that's one characteristic of smart people. And so, yeah, that was the heyday of Bell Labs. Lots of smart people inventing amazing stuff. The laser was invented there, information theory, the transistor was invented there. Almost all the foundations of the information age, whether it's hardware, algorithms, software, in some sense all have their roots at Bell Labs. So that was the contribution to mankind. Actually, I should say, not only to America.

So Shannon gets there at this time, right? He's there, you know, when they're inventing, certainly, the transistor.

What's he do?

Tell me about his work there. When he gets there, what's he working on?

Yeah. So I think Shannon always had his own agenda, right. We know for a fact that he had been interested in the problem of communication, the idea of having a grand theory of communication, even back in nineteen thirty-eight, I think thirty-seven, thirty-eight, because he wrote a letter at that time to a very famous person named Vannevar Bush. Yeah, Vannevar Bush is very famous; he was, I think, president of MIT or dean of MIT, and then he became sort of a scientific advisor to the president. And so Shannon wrote a letter to Vannevar Bush in nineteen thirty-eight and said, hey, you know what, I'm really interested in this question of how to find one theory that unifies all possible communication systems. There are so many different communication systems out there, but I think there's something at the heart of every system, and I'm trying to get to the heart.

And, like, nobody had thought of it in that way, right? It seems like part of what makes Shannon such a big deal is, as I understand it, people understood, like, they were trying to figure out how to make the phone work better, and they were trying to, you know, make movies be clearer or whatever. But there wasn't this idea that you could abstract it until Shannon came along.

And the reason is very simple, actually, because if you have a physical system, then you want to build it, right? What do you see? You say, hey, take the video, for example. I'm seeing you right now, and I'm not seeing you very clearly, I have to say.

Yes, I'm in a closet. A closet.

Right. Then I would say, how do I try to improve the image? Maybe I can try to, you know, fix these pixels or do some filtering of the noise. So I'm very tied to the very specific details of the specific problem. Because, why? I'm the engineer. I need to improve the system, not in ten years, but tomorrow. You know, tomorrow.

You don't need a theory of the system. You just want a clearer picture.

Yeah, yeah, I mean I'm in the weeds, right, I'm in the weeds. And Shannon, because of his training, and also because of the atmosphere of a place like Bell Labs, could afford to step back and just look at the broader forest as opposed to the details of specific trees.

So, okay, Shannon's big idea comes out in this paper he publishes in nineteen forty-eight. The paper is called A Mathematical Theory of Communication. It's, like, his great work. Tell me about that paper.

So that paper is actually a very interesting paper. In fact, when I teach information theory, I teach from the paper itself, because I thought it's an amazing way not only of learning information theory, but of learning how to write a scientific paper properly. Huh, okay. And you know, not everyone does research in information theory, but everybody has to write. Uh huh, okay. Every researcher has to express their ideas to their peers and to the audience. So that paper, very interesting. The first paragraph of the paper, okay, it's already very interesting, because typically when people write a paper nowadays, they tell you, oh, how great my invention is, it's going to change the world. Every paper is going to change the world. But in fact, his first paragraph focused on telling you what his paper is not achieving. Ha ha, I mean, that's a master. That's a master's move, right? I mean, how many papers that you read nowadays tell you in the beginning, hey, you know what, guys, expectation management here, this paper is not about this. Hey, don't get your hopes up. Yeah, exactly, that's exactly what he did. Expectation management, that's what today we would call it, expectation management. In those days, I guess he just calls it honesty. And his whole point was, often people associate information with meaning, okay, and then he said, in this paper we ignore meaning. We ignore meaning. Huh, okay. So that was the first thing he did, which is brilliant, because once you tie information to meaning, then you will never be able to make any progress. It's just too difficult and too broad and too vague a problem.

Everybody gets stuck on this idea of meaning and what is meaning? And he's like, forget about meaning. So we're gonna forget about meaning. What is left?

Yes. Actually, the biggest breakthrough of that paper, I think, is to really focus on the thing that matters and cut away a lot of stuff that really doesn't matter, doesn't matter in terms of solving the communication problem. So then he said, okay, what is the communication problem? The communication problem is the following: there are multiple possibilities of a word, and my goal is to tell the receiver, the destination, which of the multiple possibilities is the correct possibility.

Yeah, and so in language, it's basically a finite set. Language is a finite set. It's very large. But if we're speaking and we both know that we're speaking English, then essentially you are hearing the words and decoding them, and you know that it is a series of words, and you just have to figure out which words I mean, like that, for example. Yes, like that. Okay, so that's the frame he builds then.

Why? Okay, all right. Then once you have this framing, right, then you can ask the question, okay, what is the goal of communication? The goal of communication is to communicate as fast as I can, right. And the natural question is, why is there a limit on how fast I can communicate to you? Because if there's no limit, then, amazing world, right, we can communicate so fast it's...

Like instant telepathy. It's like you instantly beam me every thought in your head.

Yeah, okay, exactly. The natural question, once you set up this finite set, as you mentioned, is, okay, given these finite sets, is there a limit on how fast I can communicate to you? And that was the question at the heart of the paper. So he formulated this notion of a capacity. A communication system is like a pipe. It's like you're pushing water through this pipe, and the size of the pipe limits how fast you can push water through it. And similarly, in communication there's this notion of the size of the pipe, which is called the capacity. And he figured out a way of computing this capacity for different communication mediums. For any communication medium, you can actually compute a capacity, and that limits how fast you can communicate information over that medium, whether that medium is wireless, over the air, or over the wireline. Like, I'm talking to you, I communicate over the air, I talk to my WiFi, the WiFi goes through some copper cable, some optical fiber. Each is a physical medium, but you can compute a capacity for each of these different mediums.
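David doesn't write the formula out on air, but the best-known instance of Shannon's capacity, the Shannon–Hartley result for a bandwidth-limited channel with Gaussian noise, makes the "size of the pipe" concrete:

$$C = B \log_2\!\left(1 + \frac{S}{N}\right)\ \text{bits per second,}$$

where $B$ is the channel's bandwidth in hertz and $S/N$ is the signal-to-noise power ratio. Widening the pipe (more bandwidth) or cleaning it up (more signal relative to noise) raises the ceiling, but for any given medium the ceiling is a fixed number.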

And I know that part of the paper looks at, say, redundancy in various modes of communication, and, on a related note, patterns. Right? There's this whole section of the paper where he looks at the frequency with which letters occur in English and kind of builds an idea around that. Tell me about those pieces.

Of the paper. Yeah, so let's start with the word redundancy.

Yeah, that comes off wrong, right?

No, no, no, no, that's not only not the wrong word, it's actually almost the most important word, I would say. Because you go back to the question, to the thing I was talking about, which is how fast you can communicate, right? So what he discovered was, actually, there's no limit on how fast you can communicate. You can always communicate very fast. But what the other guy hears is gibberish, and he cannot really distinguish what you're trying to say. There's so much noise in the system, okay, that he cannot really figure out what you're...

Saying. Even if you're face to face, right? Even if you're face to face, not just over the phone or whatever. If you talk too fast, the listener won't understand, because you're going to...

And anybody who goes to a crazy professor's lecture would know about this, where the professor just keeps on talking a million miles per hour, and the students just sit there and nobody understands a thing, and the professor calls it a day when it's finished. So basically what he's saying is that, hey, you know what, to make sure that the information goes through reliably, reliably, that's the first word, you need to introduce redundancy, redundancy in your message, okay. And what he figured out is, in some sense, the optimal way of adding redundancy. Because, you know, you can always be stupid in adding redundancy. For example, I can keep on repeating the same word one hundred times to you, and then you probably get it, and then I move on to the next word. But that would make me one hundred times slower. Yes, right. And so that's not a very smart way of adding redundancy. So what he figured out is an optimal way of adding redundancy so that you can communicate reliably and yet at the maximum rate, what he calls the capacity limit. And that was actually a totally amazing formulation of the problem, and highly non-obvious. And I think that is part of the amazing contribution of this guy, Shannon.

Yeah, it's optimization. He optimizes communication across any channel where you're balancing efficiency or speed and reliability. That is the trade-off, and he figures out how to optimize for that trade-off.

Yes, yes, he figured out how to optimize that trade-off. But that trade-off turns out to be very interesting. Uh huh. It's a very interesting trade-off. So typically when we think about a trade-off, we think about, like, a smooth curve, right? As you tune something, you get better performance. But what he showed was that there's kind of like a cliff effect, okay. And the cliff effect is that if you communicate below this number called the capacity, then you can always engineer the system to make the communication as reliable as you want. Huh, so reliable that it's completely clean. Wow. Whereas if you communicate above this number, the capacity, then there's nothing you can do to make the signal clean. It's just completely gibberish. Huh. So it's a very sharp trade-off that he identified. It's not a smooth trade-off.
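As a toy numerical sketch of these two points (my illustration, not anything from the episode or from Shannon's own constructions): over a simple noisy channel, the "dumb" redundancy of repeating each bit many times does buy reliability, but only by driving the rate toward zero, while Shannon's theorem says any rate below the channel's capacity is achievable with essentially perfect reliability, and no rate above it is.

```python
# Compare naive repetition coding against the capacity of a binary symmetric
# channel (each bit flipped independently with probability p). Shannon's
# theorem: any rate below C = 1 - H(p) is achievable with vanishing error;
# repetition gets reliability only by collapsing the rate to 1/n.
import math
from math import comb

def binary_entropy(p: float) -> float:
    """H(p) in bits, with H(0) = H(1) = 0 by convention."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p: float) -> float:
    """Capacity of a binary symmetric channel with crossover probability p."""
    return 1.0 - binary_entropy(p)

def repetition_error(p: float, n: int) -> float:
    """Probability that majority voting over n repeats (n odd) decodes a bit wrongly."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range((n + 1) // 2, n + 1))

p = 0.1
print(f"BSC(p={p}) capacity: {bsc_capacity(p):.3f} bits per channel use")
for n in (1, 3, 11, 101):
    print(f"repeat x{n:>3}: rate = {1/n:.3f}, bit error = {repetition_error(p, n):.2e}")
```

Running it shows the repetition code's error shrinking only as its rate collapses, while the capacity (about 0.53 bits per use at p = 0.1) marks the cliff: good codes can operate near that rate with error as small as you like, and nothing can operate reliably above it.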

And if you're running the phone company, that's exactly what you want to know, right? So then you can tune it all the way to capacity and then not try and tune it anymore after that, because it's not going to get any better.

Correct. And that was the goal of sixty years of engineering, to achieve his vision, his nineteen forty-eight vision. It took people around sixty years to implement his vision.

Well, so you are part of that story, right? Let's walk you into the story now. Tell me about your work and how you built on Shannon's work.

Yeah. So I did my PhD in the nineties. In the nineties, my advisor was a Shannon student, and so I learned information theory from him. Okay. Now, at that time, information theory was almost a dead subject. Okay. When I was a PhD student, the first thing my advisor told me, maybe following Shannon, is, hey, don't work in information theory. Wow. You'll never find a job. You'll never find a job with this stuff. Okay, that's a tough moment.

That must be a tough moment for you.

Pretty tough, yeah, because at that time there's not much progress being made in the theory, and there are no killer applications either. There are no killer applications that need all this sophisticated information theory. Okay. So it's like a dead field.

Was there a while when people used it to, like, whatever, make landline phones work better, like in the fifties or something, where people were like, oh great, now we've got this theory and we can make the phone work better?

Yeah. So the thing is that the solutions that people came up with to achieve these capacity limits are very complicated, okay, and the electronics technology was just not enough to build these complicated circuits. So information theory did not have a very significant impact in the fifties, sixties, or even seventies.

So it's like one of those cases where the theory is just too far ahead of the technology.

To be useful. Yeah, and so people started losing interest in the theory: yes, this is a bunch of math, it's not impacting the real world, and so students are drifting away from the field. But there are still always a few students, okay, who are just so enamored with the theory that they keep on pursuing it. And my advisor is one of the leading professors in this area, and he would have, like, one student every decade, every decade, to do research in it.

You were that student? You were...

And I was not that student. I was not that student, okay. At that time, that slot was already taken by an earlier student who was way smarter than me, who's way smarter than me. He was the student of the decade in information theory, okay. So I was assigned to work on some other problems, okay, completely unrelated. Okay. But anyway, the point, though, is that when I graduated, something happened, okay. And that was the beginning of the wireless revolution. That was the time when only a million people had cell phones, and those cell phones, I don't even remember, they were, like, gigantic bricks, yeah.

Like there's that famous scene from the movie Wall Street, right, that's the one that everybody talks about where it's like bigger than a brick. People say brick, but it's actually bigger than a brick. It's like a big hardback book or something.

Yeah. And actually, in those days, because there were so few of these, it's like a prestige thing. It's prestigious to have this brick. Yeah, okay, yeah.

You couldn't get that brick. You had to be rich to get that brick. Yeah.

Yeah. And so the wireless revolution was happening, because people realized that, hey, you know what, being able to communicate anytime, anywhere is really valuable, and so people are now getting interested. And at that time, what people realized is that, whoa, this wireless physical medium is really tough to communicate over, because the bandwidth is so limited and there is so much noise. Right? The FCC was limiting the bandwidth allocation to these applications a lot.

Aha, the Federal Communications Commission. So the government wasn't letting wireless companies use much bandwidth for...

Cell phones, yeah, because most of the bandwidth was allocated for military purposes, and there was only very little bandwidth allocated at that time for civilians. And so that bandwidth was auctioned to companies at a very high price, and so it became very important to be very efficient in using this very expensive property. Aha. Okay, and then people realize, hey, if we want to be really efficient, then we need a theory which is about efficiency. So people start thinking, okay, all right, information theory was dead, but now it's going to come back to life, because we have this really important problem, really expensive spectrum that was allocated by the FCC, and we want to squeeze as much out of it as possible.

As much communication. We need a sort of mathematical theory of communication, if you will.

And that was the renaissance of information theory, spurred by this amazing technology of wireless, which took us from one million phones to ten billion phones.

Today everybody has one point one phones.

And information theory played a big role in that revolution.

In a minute, how David used Claude Shannon's nineteen forty-eight paper to come up with an idea that we all use every time we make a phone call.

Let's talk for a moment about your role, right? Like, you actually played an important role there.

Yeah, so I was at Bell Labs. Uh huh, just like Claude. So it's like Claude, too. Yeah. Yeah, so I spent one year at Bell Labs as a so-called postdoc, right after my PhD, before I moved to Berkeley to become a professor there. I spent one year there. And that's what people were talking about at that time at Bell Labs. Hey, this new thing, wireless; information theory has come back to life. We can try to use information theory and adapt it and extend it to this wireless communication problem. And so that's when I said, whoa, this information theory I learned from Bob Gallager, finally there's a place to use it. Finally I can actually make a living, make a living out of it. And unlike what my advisor told me, it's not dead, it's come back to life. Yeah, and so that's sort of my start in the field. And, uh, yeah, so I invented a bunch of stuff and actually connected this information theory to the real world. And, uh, every time you use a phone, you're using my algorithm, which is based on the theory of information. Huh.

And so that's a cool thing to be able to say. First of all, that's a very good flex. Your algorithm, it's the proportional fair scheduling algorithm, right? Yes, yes. What is that? What's it do?

All right. So I should tell you a little bit of the story, I think, and then I'll tell you what it does. Okay. So that was the end of nineteen ninety-nine, around ninety-nine. So I was doing all this information theory stuff at Berkeley, writing many papers. But then I always had a thought in the back of my mind, which says, is this stuff going to be useful? And so I decided to go to a company, a wireless company that actually builds these things, and see whether this theory could be used. And the company I went to is called Qualcomm.

Okay, I've heard of Qualcomm.

You've heard of Qualcomm, but at that time it was a small company, it was not very big, okay. And at that time they had this problem they were working on, okay, which is the following. All right. So in wireless communication, there's a concept called a base station, okay, and the base station serves many cell phones in the vicinity of the base station. It's called a cell.

Is it like a tower? Is that what we would call it, a tower?

That's right, it's always on the tower. There's some electronics there. Yeah, and that's how the base station is supposed to beam information to many phones, many phones.

You still see them. You see them, whatever, on top of a big building or when you're driving down the freeway, right? That's what you're talking about.

Yeah, that's right. And sometimes on fake trees.

I love the fake trees in New Jersey.

The fake trees in New Jersey, that's right. New Jersey, fake trees. Yes. So at that time they were looking at this problem, which is, hey, okay, my bandwidth is limited, but I have many users to serve. Yeah, okay, how do I schedule my limited resource among all these users? Right? Because I only have one total bandwidth. And so at that time people think, okay, maybe something simple: I give one nth of the time to the nth user. Right, so if there are five users, I serve this user for a little bit, and I serve the second user for a little bit, and so on.

The idea is, you're switching really fast. You're just, like, switching.

Switching really fast, yeah, exactly. And then when I went there, I said, okay, good, the problem is a good problem. And I said, hey, instead of fixating on this particular scheduling policy, why don't we do a Shannon thing, a...

A Claude Shannon thing. You thought of it, you thought of it? Yeah, okay.

The Claude Shannon thing is to look at the problem from first principles, to not presume a particular solution or even a particular class of solutions, and ask ourselves, what is the capacity of this whole system, and how do I engineer the system to achieve that capacity? Okay? And it turns out that if you look at the problem this way, then the optimal way of scheduling is not the one that they were trying to design. And the reason is because in wireless communication there's a very interesting characteristic which is called fading. Okay, okay. When I talk to you over the air, the channel actually goes up and down, strong and weak, strong and weak, very rapidly. What I mean is, when I send an electromagnetic signal from the base to the phone, that signal gets amplified and attenuated very rapidly. It goes up and down.

Okay, okay. Can we say it gets stronger and weaker? Can we say it gets stronger...

And stronger and weaker, okay? Yes. And so the optimal way, what information theory says to do, is actually not to divide the time into slots blindly, but to really try to schedule a user when their channel is strong.

Ha.

And then from that I designed a scheduling algorithm which is more practical, by sort of leveraging this basic idea from information theory.

And so the base station is basically monitoring the strength of the incoming signals from all the different phones and saying, oh, that one's strong, I'm gonna grab that one. Oh, that one's strong, I'm gonna grab that one.

That's what's happening, correct? Correct?
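To make the scheduling idea concrete, here is a minimal sketch of proportional fair scheduling in its commonly described textbook form; the episode doesn't spell out the rule, so the exact ratio test, the smoothing update, and the time constant below are assumptions. Each slot, the base station serves the user whose instantaneous rate is highest relative to its own smoothed average throughput, which exploits channel peaks without starving users whose channels are persistently weaker.

```python
# A minimal sketch (assumed details) of a proportional fair scheduler:
# serve the user maximizing instantaneous_rate / smoothed_throughput,
# then update each user's smoothed throughput with an exponential average.
import random

def proportional_fair(rates_per_slot, time_constant=100.0):
    """rates_per_slot: list of slots, each a list of feasible rates, one per user."""
    n_users = len(rates_per_slot[0])
    avg = [1e-6] * n_users          # smoothed throughput per user (tiny seed avoids divide-by-zero)
    served = [0] * n_users
    for rates in rates_per_slot:
        # pick the user whose current rate is best relative to what it has been getting
        k = max(range(n_users), key=lambda u: rates[u] / avg[u])
        served[k] += 1
        for u in range(n_users):
            got = rates[u] if u == k else 0.0
            avg[u] = (1 - 1 / time_constant) * avg[u] + got / time_constant
    return served, avg

# Hypothetical fading: two users whose feasible rates fluctuate independently each slot.
random.seed(0)
slots = [[random.uniform(0.1, 1.0), random.uniform(0.1, 1.0)] for _ in range(10_000)]
served, avg = proportional_fair(slots)
print("slots served per user:", served)
print("long-run throughput per user:", [round(a, 3) for a in avg])
```

The design intuition matches what David describes: a pure round-robin ignores the channel, a pure "strongest user wins" rule starves anyone stuck with a weak channel, and the ratio test sits in between, riding each user's own channel peaks.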

And how does that... I mean, I get it in a kind of big, first-principles way, sort of analogously, it follows from Shannon. But is there anything sort of specific in Shannon that leads you to this algorithm?

So remember, Shannon's is a very general theory. Yeah, okay. It basically says that, given any communication medium or any communication setting, yeah, you can try to calculate this notion of a capacity. So it's a very general theory; what I did was to apply it to a very specific context, which is this base station serving multiple users setting. Yeah, and then apply his framework to analyze the capacity of that system.

Huh.

And in the process of analyzing the capacity, you can also figure out what is the optimal way of achieving that capacity. Remember, you mentioned capacity is really an optimization problem, and Shannon was able to solve this optimization problem in general. But now I specialized it, in some sense, to this pretty specific setting, except that the setting is used by everybody. But at that time, it was, like, you know, research is about timing, and I was there at the right place at the right time, because Qualcomm turned out to completely dominate the entire third-generation technology. So I was able to convince them that, hey, your way of doing things is no good, this way suggested by Shannon is actually far better, please use this way. It took me a few months, but I was able to persuade them to implement it, and then it got into the standard through their domination, and then every standard after that uses the same basic algorithm. So it was good, because, as I said, I was at the right place at the right time. You know, when you try to contribute to engineering, it's too late if the system is built, because people don't want to change the whole system to accommodate a new idea. But this was very early in the design phase.

So, okay. So you made this breakthrough in wireless communications using Shannon's work. Were there similar breakthroughs in, you know, other domains?

Any communication medium, right? It could be optical fiber, it could be a DSL modem, underwater communication. Almost all these communication systems are now designed based on his principles. So the impact of this theory is kind of global; it's the entire communication landscape.

There's a story I read about Shannon, that when he is developing information theory, he takes a book off the shelf and he reads a sentence to, it's actually his wife, and it's something like "the lamp was sitting on the..." and she says "table," and he says, "no, I'll give you a clue, the first letter is D," and she says "desk." And when I heard that story, what I thought of was large language models. Like, that sounds exactly like a large language model. And so I'm just fishing, I'm just curious, like, does his work matter for machine learning, large language models, et cetera, or no?

Yeah, so that's a very interesting point. Now, I'm not an expert by any means in AI or large language models. Yeah, I'm not a professional researcher in that area. But I think you can actually see some commonality, right? These models, in some sense, they don't care about meaning either.

Yeah, very good, Very good. Yeah.

Right. Actually, what just came up in our discussion is very interesting, because it's really just patterns. It's just which patterns are more likely than other patterns. Right? The example you gave about "desk" is basically about patterns, and information theory is really analyzing sort of the number of possible patterns, in some sense. So there is definitely a philosophical connection, I believe, from Shannon to these large language models.
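As a toy illustration of that "just patterns" point (my sketch, not anything from Shannon's paper or from the episode), you can play a crude version of the guessing game by counting which word follows which in a scrap of text and always guessing the most frequent continuation. A large language model does something vastly more sophisticated, but it is likewise predicting from patterns rather than from meaning.

```python
# A toy version of Shannon's guessing game: count which word tends to follow
# each word in some text, then guess the most likely continuation.
from collections import Counter, defaultdict

corpus = (
    "the lamp was sitting on the desk . "
    "the book was sitting on the table . "
    "the cat sat on the mat . the lamp is on the desk ."
)

following = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    following[prev][nxt] += 1

def guess_next(word: str) -> str:
    """Return the most frequent word seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(guess_next("the"))      # the model's best guess for what follows "the"
print(following["the"])       # the full distribution it is guessing from
```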

So let me ask you about one other, and this is one that you are professionally involved in: cryptocurrency and blockchain. You have studied it and you started a company.

Right.

Is there a connection between Shannon's work and cryptocurrency?

Yeah. So what attracts me to work in this area, blockchain, is that blockchain actually has one very common philosophical connection to information theory, which is the following. In blockchain, the problem is not communication per se. It's called consensus. Okay, it's a different problem, but it's essentially allowing a bunch of users at different places to come to an agreement on something. Okay, yes. Now, the goal of designing a blockchain is really to be so-called fault tolerant, which means that even if, say, one third of the users are bad guys and send you some gibberish messages, the rest, the two thirds of people, can still come to an agreement. Okay, all right. So if you look at this problem, it's actually not that different from communication, from information theory, because it's kind of combating...

The bad guys are the noise, and the good guys are the signal.

And the good guys are the signal, and they try to introduce redundancy, okay, to help them fight against these bad guys.

And there's an optimization problem where like the more redundancy you have, the sort of slower the system is, the more ponderous.

And so the optimization problem is to try to figure out what is the maximum number of bad guys that you can tolerate while your system still works. That is analogous to the capacity problem. So I find the philosophical connection very appealing, and that's sort of one reason why I got attracted to work in this area.
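The "one third" David mentions corresponds to the classical bound from the Byzantine agreement literature: with $n$ participants, of which up to $f$ may behave arbitrarily badly, consensus protocols of this kind can be made to work only when

$$n \ge 3f + 1,$$

so, roughly, strictly fewer than a third of the participants can be bad guys. Like channel capacity, it is a sharp threshold: past it, no amount of cleverness restores reliable agreement.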

Why do you think more people don't know about Shannon? Like all of the sort of intellectuals in technology say, he's like one of the great thinkers of the twentieth century, but most people have never heard of him. Why do you think that is?

So, Shannon was actually a very shy person, a very shy person. He hated publicity. He hated when people interviewed him. You remember, right? He's basically a very modest person. Remember the first paragraph I told you about? Yeah, he tells you what he is not accomplishing. Yeah. And so he's a very modest, very shy person, not into publicity. And I think that sort of impacted not only himself, but also everybody who works in that field. Uh huh, and they adopted this as kind of like a model, right? That, hey, we should all be modest, because, look at this guy who accomplished so much and he's still so modest. Who are we? Who are we? Right? So, as a result, the field doesn't really sell itself very well. The marketing engine, the marketing DNA, is not there. Yeah, and so people don't know about him.

So I want to talk for a minute about the rest of Shannon's life. He writes this huge paper when he's in his early thirties, eventually goes on to be a professor at MIT, and he seems to spend a lot of his career juggling, riding a unicycle, building mechanical toys, building games. And he never, you know, does sort of great influential work again. And I'm curious, you know, what do you make of that? How do you sort of fit his whole career together?

So there's a single theme that unifies all this in my mind, which is playfulness. Because in his mind, research is really about puzzles. If he doesn't understand something, it's like a puzzle to him, and he's trying to figure out the pieces of the puzzle. Information theory was like that, a puzzle. He sees all these real-world systems, they seem to all share some commonality, but nobody understood it. So there's a puzzle, and he's always thinking about the puzzle. And finally his paper basically solved that puzzle. So everything to him is playfulness. I think it's playing. There's a game, a puzzle, and he needs to solve the puzzle, and that's how, in his mind, that's how his work goes. So although the things he did pre and post information theory seem very different, it's actually, in my mind, quite strongly connected.

We'll be back in a minute with the lightning round. So I read that you recently asked people at your company to give five minute talks. I'm curious why you did that. That's interesting to me. Why'd you do that?

So the shorter the talk, the harder it is to give. If you can't explain an idea in five minutes, then I think your idea is actually not very good.

Ah, that's good.

Most good ideas, you can get the point across in five minutes. Remember, I'm an information theorist by training, so communicating to the limit is what I'm passionate about.

If you had to give a five minute talk, what would it be about?

How about Shannon? I guess he's my hero. He's my hero.

So you talked about the importance of timing in research, of not only finding the right problem, but finding the right problem at the right time, right? Both in terms of Shannon's work and in terms of your work. You know, you're also a professor and, you know, a manager. Like, how do you help other people find the right problem at the right time?

Yeah, finding the right problem at the right time is probably the most difficult thing, because, you know, timing is everything. However, this is hard to teach. What we try to do is to be ready. So one very famous information theorist told me this. He said, you know, everybody will get lucky at some point in their career. However, most people, when they get lucky, they're not ready, so they don't realize that they got lucky, and so they miss the opportunity. They went in a different direction. Luck tells you you should go this way, but you went the other way. You lost it.

That makes me so scared.

And so what I teach my students is to always be ready. It's like your muscles. You have to always be training your muscles, so that when you are lucky, you can capitalize on the luck.

So you talked about Shannon's playful nature, like he was a juggler, he rode a unicycle. Do you do anything like that? You have any weird hobbies?

No, no, the only weird hobby is I love to talk to people like...

You. Fair, you love going on podcasts. That is the juggling of the twenty-first century. Who's your second favorite underrated thinker?

My advisor, Bob Gallager. Gallager. He taught me how to think about research, because he learned from Shannon and I learned from him.

And if you boil down what your advisor learned from Shannon and what you learned from your advisor, what would it be? What did you learn?

Yeah, I learned about taking a very complicated problem and stripping it down to its essentials, and then formulating a problem around that and solving it. That's an art. It's not something you can convert into a mathematical formula and teach students. It's just based on intuition, experience. And that's what Shannon taught my advisor, and that's what my advisor taught me, and that's what I try to teach my students. Really, teaching is not really about giving the formula; it's really just learning by example. I observe what he does, and then my students observe what I do as I interact with them. And hopefully this art will carry on from generation to generation.

Finding the essence of the problem.

Yeah.

David Tse is a professor at Stanford. Today's show was produced by Gabriel Hunter Chang. It was edited by Lydia Jean Kott and engineered by Sarah Bruguier. You can email us at problem at Pushkin dot FM. I'm Jacob Goldstein, and we'll be back next week with another episode of What's Your Problem.

