School of Natural Sciences
By Freeman Dyson
The Evolution of Cooperation is the title of a book by Robert Axelrod. It was published by Basic Books in 1984, and became an instant classic. It set the style in which modern scientists think about biological evolution, reducing the complicated and messy drama of the real world to a simple mathematical model that can be run on a computer. The model that Axelrod chose to describe evolution is called “The Prisoner’s Dilemma.” It is a game for two players, Alice and Bob. They are supposed to be interrogated separately by the police after they have committed a crime together. Each independently has the choice, either to remain silent or to say the other did it. The dilemma consists in the fact that each individually does better by testifying against the other, but they would collectively do better if they could both remain silent. When the game is played repeatedly by the same two players, it is called Iterated Prisoner’s Dilemma. In the iterated game, each player does better in the short run by talking, but does better in the long run by remaining silent. The switch from short-term selfishness to long-term altruism is supposed to be a model for the evolution of cooperation in social animals such as ants and humans.
Mathematics is always full of surprises. The Prisoner’s Dilemma appears to be an absurdly simple game, but Axelrod collected an amazing variety of strategies for playing it. He organized a tournament in which each of the strategies plays the iterated game against each of the others. The results of the tournament show that this game has a deep and subtle mathematical structure. There is no optimum strategy. No matter what Bob does, Alice can do better if she has a “Theory of Mind,” reconstructing Bob’s mental processes from her observation of his behavior.
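The dynamics described above can be sketched in a few lines of code. This is a minimal illustration, not Axelrod's actual tournament code: the payoff values (T=5, R=3, P=1, S=0) are the standard ones used in his tournament, and the two strategies shown, tit-for-tat and always-defect, are only a tiny sample of the entries he collected.

```python
# Minimal iterated Prisoner's Dilemma sketch.
# "C" = remain silent (cooperate), "D" = testify (defect).
# Payoffs: mutual cooperation 3 each, mutual defection 1 each,
# lone defector 5, lone cooperator 0 (the standard tournament values).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_defect(my_hist, their_hist):
    # Short-term selfishness: talk every time.
    return "D"

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then mirror the opponent's last move.
    return their_hist[-1] if their_hist else "C"

def play(strat_a, strat_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# Long-run cooperation (3 per round) beats mutual defection (1 per round):
print(play(tit_for_tat, tit_for_tat))      # (600, 600)
print(play(always_defect, always_defect))  # (200, 200)
```

Running pairs of strategies against each other this way is the essence of the tournament: each round rewards defection, yet over two hundred rounds the mutually cooperative pair ends up far ahead of the mutually defecting one.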
By Freeman Dyson
John Brockman, founder and proprietor of the Edge website, asks a question every New Year and invites the public to answer it. THE EDGE QUESTION 2012 was, “What is your favorite deep, elegant, or beautiful explanation?” He got 150 answers, which are published in a book, This Explains Everything (HarperCollins, 2013). Here is my contribution.
The situation that I am trying to explain is the existence side by side of two apparently incompatible pictures of the universe. One is the classical picture of our world as a collection of things and facts that we can see and feel, dominated by universal gravitation. The other is the quantum picture of atoms and radiation that behave in an unpredictable fashion, dominated by probabilities and uncertainties. Both pictures appear to be true, but the relationship between them is a mystery.
The orthodox view among physicists is that we must find a unified theory that includes both pictures as special cases. The unified theory must include a quantum theory of gravitation, so that particles called gravitons must exist, combining the properties of gravitation with quantum uncertainties.
Following the discovery in July of a Higgs-like boson—the culmination of more than fifty years of experimental work by more than 10,000 scientists and engineers at the Large Hadron Collider—Juan Maldacena and Nima Arkani-Hamed, two Professors in the School of Natural Sciences, gave separate public lectures on the symmetry and simplicity of the laws of physics, and on why the discovery of the Higgs was inevitable.
Peter Higgs, who predicted the existence of the particle, gave one of his first seminars on the topic at the Institute in 1966, at the invitation of Freeman Dyson. “The discovery attests to the enormous importance of fundamental, deep ideas, the substantial length of time these ideas can take to come to fruition, and the enormous impact they have on the world,” said Robbert Dijkgraaf, Director and Leon Levy Professor.
In their lectures “The Symmetry and Simplicity of the Laws of Nature and the Higgs Boson” and “The Inevitability of Physical Laws: Why the Higgs Has to Exist,” Maldacena and Arkani-Hamed described the theoretical ideas that were developed in the 1960s and 70s, leading to our current understanding of the Standard Model of particle physics and the recent discovery of the Higgs-like boson. Arkani-Hamed framed the hunt for the Higgs as a detective story with an inevitable ending. Maldacena compared our understanding of nature to the fairytale Beauty and the Beast.
“What we know already is incredibly rigid. The laws are very rigid within the structure we have, and they are very fragile to monkeying with the structure,” said Arkani-Hamed. “Often in physics and mathematics, people will talk about beauty. Things that are beautiful, ideas that are beautiful, theoretical structures that are beautiful, have this feeling of inevitability, and this flip side of rigidity and fragility about them.”
In early April 1972, Hugh Montgomery, who had been a Member in the School of Mathematics the previous year, stopped by the Institute to share a new result with Atle Selberg, a Professor in the School. The discussion between Montgomery and Selberg involved Montgomery’s work on the zeros of the Riemann zeta function, which is connected to the pattern of the prime numbers in number theory. Generations of mathematicians at the Institute and elsewhere have tried to prove the Riemann Hypothesis, which conjectures that the non-trivial zeros (those that are not easy to find) of the Riemann zeta function lie on the critical line with real part equal to 1⁄2.
Montgomery had found that the statistical distribution of the zeros on the critical line of the Riemann zeta function has a certain property, now called Montgomery’s pair correlation conjecture: neighboring zeros tend to repel one another. At teatime, Montgomery mentioned his result to Freeman Dyson, Professor in the School of Natural Sciences.
In the 1960s, Dyson had worked on random matrix theory, which the physicist Eugene Wigner had proposed in 1951 to describe nuclear physics. The quantum mechanics of a heavy nucleus is complex and poorly understood, and Wigner made the bold conjecture that the statistics of its energy levels could be captured by random matrices. Thanks to Dyson’s work, the statistical distribution of the eigenvalues of these matrices had been understood since the 1960s.
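The phenomenon the two men recognized in each other's work can be illustrated numerically. The sketch below is my own (it assumes NumPy, and is not Montgomery's or Dyson's computation): it samples the eigenvalues of a random Hermitian matrix from the Gaussian Unitary Ensemble and shows Wigner-style level repulsion, in which very small gaps between neighboring eigenvalues are rare, unlike the gaps between independent random points.

```python
import numpy as np

rng = np.random.default_rng(0)

def gue_spacings(n=400):
    # Random Hermitian (GUE-type) matrix; its eigenvalues "repel".
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    h = (a + a.conj().T) / 2
    ev = np.linalg.eigvalsh(h)          # sorted real eigenvalues
    bulk = ev[n // 4: 3 * n // 4]       # central eigenvalues only
    s = np.diff(bulk)
    return s / s.mean()                 # normalize mean spacing to 1

def poisson_spacings(n=400):
    # Independent random points: no repulsion at all.
    pts = np.sort(rng.uniform(size=n))
    s = np.diff(pts)
    return s / s.mean()

# Fraction of very small gaps: near zero for the random matrix,
# but roughly 1 - exp(-0.1) ~ 0.1 for independent points.
print(np.mean(gue_spacings() < 0.1))
print(np.mean(poisson_spacings() < 0.1))
```

The repulsion visible here is the same statistical fingerprint that Montgomery had found in the zeros of the zeta function, which is why the teatime conversation proved so consequential.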
By Robbert Dijkgraaf
I am honored and heartened to have joined the Institute for Advanced Study this summer as its ninth Director. The warmth of the welcome that my family and I have felt has surpassed our highest expectations. The Institute certainly has mastered the art of induction.
The start of my Directorship has been highly fortuitous. On July 4, I popped champagne during a 3 a.m. party to celebrate the LHC’s discovery of a particle that looks very much like the Higgs boson—the final element of the Standard Model, to which Institute Faculty and Members have contributed many of the theoretical foundations. I also became the first Leon Levy Professor at the Institute due to the great generosity of the Leon Levy Foundation, founded by Trustee Shelby White and her late husband Leon Levy, which has endowed the Directorship. Additionally, four of our Professors in the School of Natural Sciences—Nima Arkani-Hamed, Juan Maldacena, Nathan Seiberg, and Edward Witten—were awarded the inaugural Fundamental Physics Prize of the Milner Foundation for their path-breaking contributions to fundamental physics. And that was just the first month.
Nearly a century ago, Abraham Flexner, the founding Director of the Institute, introduced the essay “The Usefulness of Useless Knowledge.” It was a passionate defense of the value of the freely roaming, creative spirit, and a sharp denunciation of American universities at the time, which Flexner considered to have become large-scale education factories that placed too much emphasis on the practical side of knowledge. Columbia University, for example, offered courses on “practical poultry raising.” Flexner was convinced that the less researchers needed to concern themselves with direct applications, the more they could ultimately contribute to the good of society.
By David S. Spiegel
Until a couple of decades ago, the only planets we knew existed were the nine in our Solar System. In the last twenty-five years, we’ve lost one of the local ones (Pluto, now classified as a “dwarf planet”) and gained about three thousand candidate planets around other stars, dubbed exoplanets. The new field of exoplanetary science is perhaps the fastest growing subfield of astrophysics, and will remain a core discipline for the foreseeable future.
The fact that any biology beyond Earth seems likely to live on such a planet is among the many reasons why the study of exoplanets is so compelling. In short, planets are not merely astrophysical objects but also (at least some of them) potential abodes of life.
The highly successful Kepler mission involves a satellite with a sensitive telescope/camera that stares at a patch of sky in the direction of the constellation Cygnus. The goal of the mission is to find what fraction of Sun-like stars have Earth-sized planets with a similar Earth-Sun separation (about 150 million kilometers, or the distance light travels in eight minutes).
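Two back-of-envelope numbers put Kepler's task in perspective. The light-travel-time figure quoted above can be checked directly, and, using standard values for the radii of the Earth and the Sun (which are not given in the text), one can estimate the tiny fractional dimming that an Earth-sized planet produces when it transits a Sun-like star, which is the signal Kepler's photometer must pick out.

```python
# Back-of-envelope numbers behind the Kepler mission described above.
C_KM_S = 299_792.458      # speed of light, km/s
AU_KM = 150e6             # Earth-Sun separation quoted in the text, km
R_SUN_KM = 696_000.0      # standard solar radius (not given in the text)
R_EARTH_KM = 6_371.0      # standard Earth radius (not given in the text)

light_minutes = AU_KM / C_KM_S / 60
depth = (R_EARTH_KM / R_SUN_KM) ** 2  # fractional dimming during a transit

print(f"light travel time: {light_minutes:.1f} min")  # about 8.3 minutes
print(f"transit depth: {depth:.1e}")                  # roughly 8e-5
```

The second number is why the telescope must be so sensitive: an Earth analogue dims its star by less than one part in ten thousand, once per year.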
By Graham Farmelo
On Wednesday, July 4, shortly after 4 a.m., the Institute’s new Director, Robbert Dijkgraaf, was in Bloomberg Hall, cracking open three bottles of vintage champagne to begin a rather unusual party. He was among the scientists who had been in the Hall’s lecture theater since 3 a.m. to watch a presentation from Geneva on the latest results from the CERN laboratory’s Large Hadron Collider. In the closing moments, after CERN’s Director-General Rolf Heuer cautiously claimed the discovery of a new sub-atomic particle—“I think we have it, yes?”—applause broke out in the CERN auditorium and in the Bloomberg Hall lecture theater. Within minutes, the IAS party was underway.
The new particle shows several signs that it is the Higgs boson, the only missing piece of the Standard Model, which gives an excellent account of nature’s electromagnetic, weak, and strong interactions. Although some physicists had come to doubt whether the boson existed, Nima Arkani-Hamed, Professor in the Institute’s School of Natural Sciences, was so confident that in 2007 he bet a year’s salary that it would be detected at the Large Hadron Collider. In the week before the CERN presentation, Arkani-Hamed invited colleagues to the party and organized the catering. Convinced that he had won his bet, he bought three bottles of champagne, including two of Special Cuvée Bollinger.
By John Hopfield
All of us who have watched as a friend or relative has disappeared into the fog of Alzheimer’s arrive at the same truth. Although we recognize people by their visual appearance, what we really are as individual humans is determined by how our brains operate. The brain is certainly the least understood organ in the human body. If you ask a cardiologist how the heart works, she will give an engineering description of a pump based on muscle contraction and valves between chambers. If you ask a neurologist how the brain works, how thinking takes place, well . . . Do you remember Rudyard Kipling’s Just So Stories, full of fantastical evolutionary explanations, such as the one about how the elephant got its trunk? They are remarkably similar to a medical description of how the brain works.
The annual meeting of the Society for Neuroscience attracts over thirty thousand registrants. It is not for lack of effort that we understand so little of how the brain functions. The problem is one of the size, complexity, and individuality of the human brain. Size: the human brain has approximately one hundred billion nerve cells, each connecting to one thousand others. Complexity: there are one hundred different types of nerve cells, each with its own detailed properties. Individuality: all humans are similar, but the operation of each brain is critically dependent on its individual details. Your particular pattern of connections between nerve cells contains your personality, your language skills, your knowledge of family, your college education, and your golf swing.
By David H. Weinberg
Why is the expansion of the universe speeding up, instead of being slowed by the gravitational attraction of galaxies and dark matter? What is the history of the Milky Way galaxy and of the chemical elements in its stars? Why are the planetary systems discovered around other stars so different from our own solar system? These questions are the themes of SDSS-III, a six-year program of four giant astronomical surveys, and the focal point of my research at the Institute during the last year.
In fact, the Sloan Digital Sky Survey (SDSS) has been a running theme through all four of my stays at the Institute, which now span nearly two decades. As a long-term postdoctoral Member in the early 1990s, I joined in the effort to design the survey strategy and software system for the SDSS, a project that was then still in the early stages of fundraising, collaboration building, and hardware development. When I returned as a sabbatical visitor in 2001–02, SDSS observations were—finally—well underway. My concentration during that year was developing theoretical modeling and statistical analysis techniques, which we later applied to SDSS maps of cosmic structure to infer the clustering of invisible dark matter from the observable clustering of galaxies. By the time I returned for a one-term visit in 2006, the project had entered a new phase known as SDSS-II, and I had become the spokesperson of a collaboration that encompassed more than three hundred scientists at twenty-five institutions around the globe. With SDSS-II scheduled to complete its observations in mid-2008, I joined a seven-person committee that spent countless hours on the telephone that fall, sorting through many ideas suggested by the collaboration and putting together the program that became SDSS-III.
The proof of the fundamental lemma by Bao Châu Ngô that was confirmed last fall is based on the work of many mathematicians associated with the Institute for Advanced Study over the past thirty years. The fundamental lemma, a technical device that links automorphic representations of different groups, was formulated by Robert Langlands, Professor Emeritus in the School of Mathematics, and came out of a set of overarching and interconnected conjectures that link number theory and representation theory, collectively known as the Langlands program. The proof of the fundamental lemma, which resisted all attempts for nearly three decades, firmly establishes many theorems that had assumed it and paves the way for progress in understanding underlying mathematical structures and possible connections to physics.
The simplest case of the fundamental lemma counts points with alternating signs at various distances from the center of a certain tree-like structure. As depicted in the above image by former Member Bill Casselman, it counts 1, 1 − 3 = −2, 1 − 3 + 6 = 4, 1 − 3 + 6 − 12 = −8, and so on. But this case is deceptively simple, and Ngô’s final proof required a huge range of sophisticated mathematical tools.
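The counts quoted above (1, 3, 6, 12 points at distances 0, 1, 2, 3) are consistent with a 3-regular tree, which has one vertex at the center and 3 · 2^(d−1) vertices at distance d ≥ 1. The choice of tree here is an illustrative assumption matching those numbers, not a statement of the general construction; a few lines of code reproduce the alternating sums.

```python
# Reproduce the alternating point counts quoted in the text,
# assuming the tree-like structure is a 3-regular tree.
def count_at_distance(d):
    # 1 vertex at the center, 3 * 2**(d-1) at distance d >= 1.
    return 1 if d == 0 else 3 * 2 ** (d - 1)

def alternating_sum(max_d):
    # 1 - 3 + 6 - 12 + ... out to distance max_d.
    return sum((-1) ** d * count_at_distance(d) for d in range(max_d + 1))

print([alternating_sum(d) for d in range(4)])  # [1, -2, 4, -8]
```

The partial sums come out to powers of −2, which hints at the kind of hidden regularity the fundamental lemma captures; the general case, of course, demanded far more than this toy count.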
The story of the fundamental lemma, its proof, and the deep insights it provides into diverse fields from number theory and algebraic geometry to theoretical physics is a striking example of how mathematicians work at the Institute, and it demonstrates a belief in the unity of mathematics that extends back to Hermann Weyl, one of the first Professors at the Institute. This interdisciplinary tradition has changed the course of the subject, leading to profound discoveries in many different mathematical fields, and it forms the basis of the School’s interaction with the School of Natural Sciences, which has led to the use of ideas from physics, such as gauge fields and strings, in solving problems in geometry and topology, and to the use of ideas from algebraic and differential geometry in theoretical physics.