Claude Shannon created the theory we use to communicate everything in the digital age. James Gleick is trying to quantify all that communication.
It’s a lot of information. More importantly, it’s a lot of information we actually understand.
For that, we owe thanks to Claude Shannon. A graduate student at MIT in the decidedly predigital year of 1938, he went on, a decade later at Bell Labs, to move computing one giant leap forward with information theory, a mathematical method for determining how much data can be sent across a line and still be understood on the other end. It’s the foundation of all those exabytes of internet traffic.
“He understood that information is fundamentally digital,” says Thomas Cover, the Kwoh-Ting Li Professor of Engineering at Stanford University. All information, Shannon recognized, could be conveyed using binary digits, or as Shannon later called them, “bits.” With these bits of information—1 or 0, yes or no, true or false— anything can be represented and transmitted.
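To make that concrete, here is a minimal sketch in Python (the three-letter message is just an illustrative choice) of ordinary text reduced to Shannon’s bits:

```python
# Any message can be written as bits: each character of a short piece of text
# is encoded to a byte and printed as eight binary digits.
message = "yes"
bits = "".join(f"{byte:08b}" for byte in message.encode("utf-8"))
print(bits)  # 011110010110010101110011 -- 24 bits, 8 per character
```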
Before Shannon, muddled telephone calls and distorted radio signals perplexed engineers who used trial and error to improve transmissions. “What Shannon did was say you can always get an absolutely undistorted version, as long as you put in a finite number of bits,” says Cover. Shannon’s theory gave us a way to determine what that finite number is for each line of communication, whether it’s a telephone wire, a radio wave, or a smoke signal.
The same formula is what allows modern computers to know how many megabytes to send per minute over a cable modem or café Wi-Fi without making your Facebook photos fuzzy. It’s a formula that has transformed the world.
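That calculation rests on Shannon’s channel-capacity limit, most often quoted for a noisy analog channel as the Shannon-Hartley formula, C = B log2(1 + S/N) bits per second, where B is the channel’s bandwidth and S/N its signal-to-noise ratio. A minimal sketch in Python, with bandwidth and signal-to-noise figures chosen purely for illustration rather than taken from any real modem or Wi-Fi specification:

```python
import math

def channel_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit in bits per second: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Hypothetical numbers: a 20 MHz channel at a 30 dB signal-to-noise ratio.
bandwidth_hz = 20e6                  # hertz
snr_db = 30.0
snr_linear = 10 ** (snr_db / 10)     # decibels -> linear ratio
capacity = channel_capacity(bandwidth_hz, snr_linear)
print(f"{capacity / 1e6:.1f} Mbit/s")  # about 199.3 Mbit/s
```

Push more bits per second than that limit allows and errors become unavoidable; stay under it, and Shannon showed the errors can in principle be made as rare as you like.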
In The Information: A History, a Theory, a Flood, science writer James Gleick puts it this way: “We speak of compressing data, aware that this is quite different from compressing a gas. We know about streaming information, parsing it, sorting it, matching it, filtering it. Our furniture includes iPods and plasma displays, our skills include texting and Googling, we are endowed, we are expert, we see information in the foreground. But it has always been there.”
GOOD spoke with Gleick about Claude Shannon, the birth of information theory, and our data-saturated future.
GOOD: What’s the difference between data and information?
JAMES GLEICK: In colloquial use, we think of data as the dry, computer thing, and information as the thing we like. It’s not an accident that the Star Trek character is named Data, and he’s supposed to be emotionless. Shannon’s scientific definition of information was the one that really resembles what we like to call data, but there’s another thing we worry about: knowledge. We’ve got a lot of information, and some of it is just noise. T.S. Eliot said this long before the Electronic Age: “Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost in information?”
G: Do you think the science of information will become the theory of everything?
JG: I’m not crazy about that idea, that there’s ever going to be some set of equations that explain the whole universe—some ultimate unified theory. On the other hand, I do believe that our world is made of information, and that information is the thing we need to make sense of human history and of our modern predicament. I believe information is what matters. We used to think that what was important was energy or matter. The world was made of atoms. Now, even physicists are looking at information as the most fundamental quality.
G: What are the implications of trying to quantify everything? Are there things that don’t fit into data sets, like love or skill?
JG: You’ve thrown me a bit with the word “love.” I think what we manage to communicate isn’t infinite, and therefore it’s quantifiable, it’s measurable, it’s not limitless. I’m not trying to say everything can be reduced to bits. Rather, measuring information has enriched our understanding. It’s enabled us to realize and appreciate the many different channels for transmitting information. The telephone was exciting 150 years ago because you could hear the voices of your loved ones, in real time, from a great distance. But it was just a narrow channel, and compared to the presence of your loved ones—where you have not just sound but sight, smell, and body language—the telephone was a drop in an ocean. We can talk about these things at all because mathematicians and engineers have given us the language. It doesn’t mean they were trying to diminish the grand possibilities of human communication and knowledge.
G: George Orwell worried about information control, whereas Aldous Huxley thought it more likely that we’d drown in a sea of irrelevance. Which is riskier?
JG: Having access to thousands of times more information than we did a generation ago hasn’t instantly made us any smarter. It’s empowered us in very real ways. On a good day, it has given us something that feels like omniscience. When there’s an earthquake in Japan, the visual images come to us in real time. When I want to look up the answer to an obscure question that would have taken me a day in the library just a few years ago, the answer’s at my fingertips. I can pull a little device out of my pocket and find the answer, but that doesn’t necessarily make us any smarter. Look, a significant portion of the population believes stupid things. They doubt the place of birth of the president of the United States. Some people are not persuaded that Osama bin Laden is dead. People can be willfully stupid, or they can be stupid for political purposes. Or they can just be confused because a mass of information doesn’t translate into clearer thinking. So we’re back to T.S. Eliot: “Where’s the knowledge we have lost in information?”
G: How does researching in the archives at the British Library compare to reading a PDF at home?
JG: Presumably the British Library has had to be very selective in what they’ve managed to gather and preserve over the centuries. On the other hand, we know that the Library of Congress claims that they’re going to archive all the world’s tweets. A sort of hierarchy is created right away. If you’ve got everything, then most of what you’ve got isn’t important. If you need to be very selective about what you preserve, then it’s likelier that what’s preserved is important. When I started to work on an earlier book, a biography of Isaac Newton, I went to the Morgan Library in New York, which has a tiny notebook, the first one young Isaac Newton ever kept. It’s maybe two by three inches and it’s made of vellum. He wrote on it, in ink, in a hand that is so small you really do need a magnifying glass to read it. I had seen facsimiles, but the facsimiles are blown up by about three times to make them readable, which I didn’t realize until I saw the actual item. Just in terms of the words that Newton wrote, I didn’t need to go to that library. But there’s still value in the physical artifacts, and I believe the biographers of the future will still want to see some of these real items. There’s information that isn’t only in the words.
G: In the digital age, information is so abundant. Is there ever too much? We know that there’s sometimes too much information.
JG: We have initials for it: TMI. In the end, though, I’m optimistic. I think we’ll be able to cope. Now that we recognize that this superabundance is not the solution to all of our problems, we’re learning how to cope. Any given person on any given day can only read an almost infinitesimal fraction of the messages that are tweeted—much less the books that are printed—which is sad. And it’s sobering. But it’s not anything that needs to terrify us. It doesn’t mean that we’re necessarily missing what matters to us.
Introduction by Alex Goldmark. Interview by Peter Smith. Illustration by Matthew Moore.