By Freeman Dyson
The Evolution of Cooperation is the title of a book by Robert Axelrod. It was published by Basic Books in 1984, and became an instant classic. It set the style in which modern scientists think about biological evolution, reducing the complicated and messy drama of the real world to a simple mathematical model that can be run on a computer. The model that Axelrod chose to describe evolution is called “The Prisoner’s Dilemma.” It is a game for two players, Alice and Bob. They are supposed to be interrogated separately by the police after they have committed a crime together. Each independently has the choice, either to remain silent or to say the other did it. The dilemma consists in the fact that each individually does better by testifying against the other, but they would collectively do better if they could both remain silent. When the game is played repeatedly by the same two players, it is called Iterated Prisoner’s Dilemma. In the iterated game, each player does better in the short run by talking, but does better in the long run by remaining silent. The switch from short-term selfishness to long-term altruism is supposed to be a model for the evolution of cooperation in social animals such as ants and humans.
Mathematics is always full of surprises. The Prisoner’s Dilemma appears to be an absurdly simple game, but Axelrod collected an amazing variety of strategies for playing it. He organized a tournament in which each of the strategies plays the iterated game against each of the others. The results of the tournament show that this game has a deep and subtle mathematical structure. There is no optimum strategy. No matter what Bob does, Alice can do better if she has a “Theory of Mind,” reconstructing Bob’s mental processes from her observation of his behavior.
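The iterated game can be sketched in a few lines of code. This is a minimal illustration, not Axelrod's tournament software: the payoff values (3 for mutual cooperation, 1 for mutual defection, 5 and 0 when the players differ) are the standard ones from Axelrod's study, and the two strategies shown — always-defect and Tit for Tat, the famous winner of his tournament — are just examples from the many strategies entered.

```python
# 'C' = remain silent (cooperate with your partner), 'D' = testify (defect).
# Standard Axelrod payoffs: (my points, their points) for each pair of moves.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def always_defect(my_history, their_history):
    return 'D'

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return their_history[-1] if their_history else 'C'

def play(strategy_a, strategy_b, rounds=200):
    """Play the iterated game and return the two total scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Long-run cooperation far outscores mutual defection:
print(play(tit_for_tat, tit_for_tat))      # (600, 600)
print(play(always_defect, always_defect))  # (200, 200)
```

The contrast between the two printed scores is the dilemma in miniature: defection is the better reply in any single round, yet two players locked into mutual defection end up far behind two players who sustain cooperation.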
By Mina Teicher
It is known that mathematicians see beauty in mathematics. Many mathematicians are motivated to find the most beautiful proof, and often they refer to mathematics as a form of art. They are apt to say “What a beautiful theorem,” “Such an elegant proof.” In this article, I will not elaborate on the beauty of mathematics, but rather on the mathematics of beauty, i.e., the mathematics behind beauty, and how mathematical notions can be used to express beauty—the beauty of manmade creations, as well as the beauty of nature.
I will give four examples of beautiful objects and will discuss the mathematics behind them. Can the beautiful object be created as a solution of a mathematical formula or question? Moreover, I shall explore the general question of whether visual experience and beauty can be formulated with mathematical notions.
I will start with a classical example from architecture dating back to the Renaissance, move to mosaic art, then to crystals in nature, then to an example from my line of research on braids, and conclude with the essence of visual experience.
The shape of a perfect room was defined by the architects of the Renaissance to be a rectangular room whose longer and shorter walls have lengths in a certain ratio—they called it the “golden section.” A rectangular room with the golden-section ratio also has the property that the ratio of the sum of the lengths of its two walls (the longer one and the shorter one) to the length of its longer wall is again the golden section, (1 + √5)/2, approximately 1.618. Architects today still believe that the most harmonious rooms have a golden-section ratio. This number appears in many mathematical phenomena and constructions (e.g., the limit of the ratios of consecutive Fibonacci numbers). Leonardo da Vinci observed the golden section in well-proportioned human bodies and faces—
in Western culture and in some other civilizations the golden-section ratio of a well-proportioned human body resides between the upper part (above the navel) and the lower part (below the navel).
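Both properties mentioned above — the wall-length identity and the Fibonacci limit — can be checked numerically. The following is a minimal sketch using only the Python standard library; the wall lengths chosen are arbitrary illustrations.

```python
from math import sqrt, isclose

# The golden section: phi = (1 + sqrt(5)) / 2, approximately 1.618.
phi = (1 + sqrt(5)) / 2

# Wall-length property: if the longer wall a and shorter wall b
# satisfy a / b = phi, then (a + b) / a is also phi.
b = 1.0
a = phi * b
print(isclose((a + b) / a, phi))   # True

# The ratios of consecutive Fibonacci numbers converge to phi.
fib = [1, 1]
for _ in range(30):
    fib.append(fib[-1] + fib[-2])
print(fib[-1] / fib[-2])           # approaches 1.6180339887...
```

The wall-length identity is no accident: phi is defined precisely as the positive solution of (a + b)/a = a/b, which reduces to the quadratic x² = x + 1.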
By John Hopfield
All of us who have watched as a friend or relative has disappeared into the fog of Alzheimer’s arrive at the same truth. Although we recognize people by their visual appearance, what we really are as individual humans is determined by how our brains operate. The brain is certainly the least understood organ in the human body. If you ask a cardiologist how the heart works, she will give an engineering description of a pump based on muscle contraction and valves between chambers. If you ask a neurologist how the brain works, how thinking takes place, well . . . Do you remember Rudyard Kipling’s Just So Stories, full of fantastical evolutionary explanations, such as the one about how the elephant got its trunk? They are remarkably similar to a medical description of how the brain works.
The annual meeting of the Society for Neuroscience attracts over thirty thousand registrants. It is not for lack of effort that we understand so little of how the brain functions. The problem is one of the size, complexity, and individuality of the human brain. Size: the human brain has approximately one hundred billion nerve cells, each connecting to one thousand others. Complexity: there are one hundred different types of nerve cells, each with its own detailed properties. Individuality: all humans are similar, but the operation of each brain is critically dependent on its individual details. Your particular pattern of connections between nerve cells contains your personality, your language skills, your knowledge of family, your college education, and your golf swing.
It has been said that the goals of modern mathematics are reconstruction and development.1 The unifying conjectures between number theory and representation theory that Robert Langlands, Professor Emeritus in the School of Mathematics, articulated in a letter to André Weil in 1967, continue a tradition at the Institute of advancing mathematical knowledge through the identification of problems central to the understanding of active areas or likely to become central in the future.
“Two striking qualities of mathematical concepts regarded as central are that they are simultaneously pregnant with possibilities for their own development and, so far as we can judge from a history of two and a half millennia, of permanent validity,” says Langlands. “In comparison with biology, above all with the theory of evolution, a fusion of biology and history, or with physics and its two enigmas, quantum theory and relativity theory, mathematics contributes only modestly to the intellectual architecture of mankind, but its central contributions have been lasting, one does not supersede another, it enlarges it.”2
In his conjectures, now collectively known as the Langlands program, Langlands drew on the work of Harish-Chandra, Atle Selberg, Goro Shimura, André Weil, and Hermann Weyl, among others with extensive ties to the Institute.
Weyl, whose appointment to the Institute’s Faculty in 1933 followed those of Albert Einstein and Oswald Veblen, was a strong believer in the overall unity of mathematics, across disciplines and generations. Weyl had a major impact on the progress of the entire field of mathematics, as well as physics, where he was equally comfortable. His work spanned topology, differential geometry, Lie groups, representation theory, harmonic analysis, and analytic number theory, and extended into physics, including relativity, electromagnetism, and quantum mechanics. “For [Weyl] the best of the past was not forgotten,” notes Michael Atiyah, a former Institute Professor and Member, “but was subsumed and refined by the mathematics of the present.”3
By Juliette Kennedy
In 1900, David Hilbert published a list of twenty-three open questions in mathematics, ten of which he presented at the International Congress of Mathematicians in Paris that year. Hilbert had a good nose for asking mathematical questions, as the ones on his list went on to lead very interesting mathematical lives. Many have been solved, but some remain open and seem to be quite difficult. In both cases, some very deep mathematics has been developed along the way. The so-called Riemann hypothesis, for example, has withstood the attack of generations of mathematicians ever since Riemann posed it in 1859. But the effort to solve it has led to some beautiful mathematics. Hilbert’s fifth problem turned out to assert something that couldn’t be true, though with fine-tuning the “right” question—that is, the question Hilbert should have asked—was both formulated and solved. There is certainly an art to asking a good question in mathematics.
The problem known as the continuum hypothesis has had perhaps the strangest fate of all. The very first problem on the list, it is simple to state: how many points on a line are there? Strangely enough, this simple question turns out to be deeply intertwined with most of the interesting open problems in set theory, a field of mathematics with a very general focus, so general that all other mathematics can be seen as part of it, a kind of foundation on which the house of mathematics rests. Most objects in mathematics are infinite, and set theory is indeed just a theory of the infinite.
By Matthew Kahle
I sometimes like to think about what it might be like inside a black hole. What does that even mean? Is it really “like” anything inside a black hole? Nature keeps us from ever knowing. (Well, what we know for sure is that nature keeps us from knowing and coming back to tell anyone about it.) But mathematics and physics make some predictions.
John Wheeler suggested in the 1960s that inside a black hole the fabric of spacetime might be reduced to a kind of quantum foam. Kip Thorne described the idea in his book Black Holes & Time Warps as follows (see Figure 1).
“This random, probabilistic froth is the thing of which the singularity is made, and the froth is governed by the laws of quantum gravity. In the froth, space does not have any definite shape (that is, any definite curvature, or even any definite topology). Instead, space has various probabilities for this, that, or another curvature and topology. For example, inside the singularity there might be a 0.1 percent probability for the curvature and topology of space to have the form shown in (a), and a 0.4 percent probability for the form in (b), and a 0.02 percent probability for the form in (c), and so on.”
In other words, perhaps we cannot say exactly what the properties of spacetime are in the immediate vicinity of a singularity, but perhaps we could characterize their distribution. By way of analogy, if we know that we are going to flip a fair coin a thousand times, we have no idea whether any particular flip will turn up heads or tails. But we can say that on average, we should expect about five hundred heads. Moreover, if we did the experiment many times we should expect a bell-curve shape (i.e., a normal distribution), so it is very unlikely, for example, that we would see more than six hundred heads.
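The coin-flip analogy is easy to simulate. The sketch below (an illustration, with an arbitrary seed and trial count) repeats the thousand-flip experiment many times: the head counts cluster around five hundred, and exceeding six hundred heads — a deviation of more than six standard deviations — essentially never happens.

```python
import random

random.seed(0)          # arbitrary seed, for reproducibility
trials = 2_000
flips_per_trial = 1_000

# Head count for each repetition of the thousand-flip experiment.
counts = [sum(random.random() < 0.5 for _ in range(flips_per_trial))
          for _ in range(trials)]

mean = sum(counts) / trials
print(mean)               # close to 500
print(max(counts) > 600)  # False here: a >6-sigma event
```

The standard deviation of a single trial is sqrt(1000 × 0.5 × 0.5) ≈ 15.8 heads, which is why 600 heads sits more than six standard deviations above the mean of 500.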
By Scott Tremaine
The stability of the solar system is one of the oldest problems in theoretical physics, dating back to Isaac Newton. After Newton discovered his famous laws of motion and gravity, he used them to determine the motion of a single planet around the Sun and showed that the planet followed an ellipse with the Sun at one focus. However, the actual solar system contains eight planets, six of which were known to Newton, and each planet exerts small, periodically varying, gravitational forces on all the others.
The puzzle posed by Newton is whether the net effect of these periodic forces on the planetary orbits averages to zero over long times, so that the planets continue to follow orbits similar to the ones they have today, or whether these small mutual interactions gradually degrade the regular arrangement of the orbits in the solar system, leading eventually to a collision between two planets, the ejection of a planet to interstellar space, or perhaps the incineration of a planet by the Sun. The interplanetary gravitational interactions are very small—the force on Earth from Jupiter, the largest planet, is only about ten parts per million of the force from the Sun—but the time available for their effects to accumulate is very long: over four billion years since the solar system was formed, and almost eight billion years until the death of the Sun.
By Edward Witten
In everyday life, a string—such as a shoelace—is usually used to secure something or hold it in place. When we tie a knot, the purpose is to help the string do its job. All too often, we run into a complicated and tangled mess of string, but ordinarily this happens by mistake.
The term “knot” as it is used by mathematicians is abstracted from this experience just a little bit. A knot in the mathematical sense is a possibly tangled loop, freely floating in ordinary space. Thus, mathematicians study the tangle itself. A typical knot in the mathematical sense is shown in Figure 1. Hopefully, this picture reminds us of something we know from everyday life. It can be quite hard to make sense of a tangled piece of string—to decide whether it can be untangled and if so how. It is equally hard to decide if two tangles are equivalent.
Such questions might not sound like mathematics, if one is accustomed to thinking that mathematics is about adding, subtracting, multiplying, and dividing. But actually, in the twentieth century, mathematicians developed a rather deep theory of knots, with surprising ways to answer questions like whether a given tangle can be untangled.
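One of the simplest of those surprising tools — offered here as an illustrative sketch, not as anything the author specifically describes — is tricolorability, a standard invariant from the textbook theory of knots: try to color the arcs of a knot diagram with three colors so that at each crossing the three meeting arcs are either all the same color or all different, with at least two colors used overall. The unknot admits no such coloring, so any diagram that does admit one cannot be untangled. The encoding of the trefoil below (each crossing listed as the three arcs that meet there) is a hypothetical convention chosen for this example.

```python
from itertools import product

def tricolorable(num_arcs, crossings):
    """Brute-force test: can the arcs be colored with 3 colors, using at
    least two of them, so that at every crossing the three meeting arcs
    carry either one color or all three?"""
    for colors in product(range(3), repeat=num_arcs):
        if len(set(colors)) < 2:
            continue  # at least two colors must appear
        if all(len({colors[a], colors[b], colors[c]}) != 2
               for a, b, c in crossings):
            return True
    return False

# Trefoil knot: 3 arcs and 3 crossings, each crossing meeting all three arcs.
trefoil = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]
print(tricolorable(3, trefoil))  # True -> the trefoil cannot be untangled

# Unknot: a single arc with no crossings admits only one-color colorings.
print(tricolorable(1, []))       # False
```

Tricolorability only distinguishes some knots from the unknot; the deeper twentieth-century theory the author alludes to built far sharper invariants on the same basic idea of computing a quantity unchanged by tangling and untangling.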
But why—apart from the fact that the topic is fun—am I writing about this as a physicist? Even though knots are things that can exist in ordinary three-dimensional space, as a physicist I am only interested in them because of something surprising that was discovered in the last three decades.