Articles from the Institute Letter

Additional articles from new and past issues of the Institute Letter will continue to be posted as they become available.

By Danielle S. Allen

Scholars of network theory have shown that increased social connectivity through bridging ties, in particular, brings improved social outcomes.

College campuses struggle with how to think and talk about diversity of all kinds, a struggle that has gone on for more than two decades now. Every year, there are stories from around the country about anonymous hate speech and offensive theme parties with equally offensive T-shirts as well as controversies about “political correctness.” Nor has there been a year in my roughly two decades in higher ed when I haven’t read or heard someone wondering, “Why do all the black kids sit together in the cafeteria?”

What are the stakes for how well we deal with diversity on college campuses? There are two answers to this question, one concerning the stakes for the campuses themselves, the other the broader social stakes.
First, for campuses. Social scientists have long distinguished between two kinds of social ties: “bonding ties,” which connect people who share similar backgrounds, and “bridging ties,” which link people from different social spaces. Since the 1970s, scholars have known that bridging ties are especially powerful conduits for knowledge transmission; more recently, scholars have argued convincingly that teams and communities that emphasize bridging ties, and that learn to communicate across their differences, outperform more homogeneous teams and communities in developing and deploying knowledge.
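Network analysts often make the bonding/bridging distinction concrete with the notion of a "local bridge": a tie whose two endpoints share no other contacts, so it is the only short route between the groups it joins. The sketch below is purely illustrative (the toy friendship graph and the no-common-neighbors criterion are our assumptions, not taken from the article):

```python
# Classify ties in a small friendship graph as "bonding" or "bridging."
# A tie is treated as bridging (a "local bridge") when its endpoints
# have no friends in common, so removing it would cut the only short
# path between the two groups it connects.

def tie_types(edges):
    neighbors = {}
    for u, v in edges:
        neighbors.setdefault(u, set()).add(v)
        neighbors.setdefault(v, set()).add(u)
    kinds = {}
    for u, v in edges:
        common = (neighbors[u] - {v}) & (neighbors[v] - {u})
        kinds[(u, v)] = "bonding" if common else "bridging"
    return kinds

# Two tight clusters (triangles) joined by a single inter-group tie.
edges = [("a", "b"), ("b", "c"), ("a", "c"),   # cluster 1
         ("x", "y"), ("y", "z"), ("x", "z"),   # cluster 2
         ("c", "x")]                           # the lone inter-group tie

kinds = tie_types(edges)
print(kinds[("c", "x")])   # -> bridging
print(kinds[("a", "b")])   # -> bonding
```

In this toy graph the one tie between the clusters is exactly the kind of bridging tie the research credits with carrying new information between otherwise closed groups.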


by Steve Awodey and Thierry Coquand

Depicted here is a mathematical torus (a donut-shaped object that cannot be deformed into a sphere in a continuous way without tearing it) made of logical symbols. It represents homotopy type theory, a new branch of mathematics that connects homotopy theory (the study of continuous deformations) and type theory (a branch of mathematical logic).

In 2012–13, the Institute’s School of Mathematics hosted a special year devoted to the topic “Univalent Foundations of Mathematics,” organized by Steve Awodey, Professor at Carnegie Mellon University, Thierry Coquand, Professor at the University of Gothenburg, and Vladimir Voevodsky, Professor in the School of Mathematics. This research program was centered on developing new foundations of mathematics that are well suited to the use of computerized proof assistants as an aid in formalizing mathematics. Such proof systems can be used to verify the correctness of individual mathematical proofs and can also allow a community of mathematicians to build shared, searchable libraries of formalized definitions, theorems, and proofs, facilitating the large-scale formalization of mathematics.

The possibility of such computational tools is based ultimately on the idea of logical foundations of mathematics, a philosophically fascinating development that, since its beginnings in the nineteenth century, has, however, had little practical influence on everyday mathematics. But advances in computer formalizations in the last decade have increased the practical utility of logical foundations of mathematics. Univalent foundations is the next step in this development: a new foundation based on a logical system called type theory that is well suited both to human mathematical practice and to computer formalization. It draws moreover on new insights from homotopy theory—the branch of mathematics devoted to the study of continuous deformations in space. This is a particularly surprising source, since the field is generally seen as far distant from foundations.
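To give a flavor of how type theory connects logic to homotopy, here is a small sketch in the Lean proof assistant (Lean is our choice for illustration; the univalent-foundations program itself has used systems such as Coq). In type theory a proposition is a type and a proof is a term of that type; homotopy type theory reads a proof `p : a = b` as a *path* from `a` to `b`, so symmetry and transitivity of equality become reversal and composition of paths:

```lean
-- Both facts are proved by "path induction": to prove a statement about
-- an arbitrary path p : a = b, it suffices to treat the constant path rfl.

theorem path_symm {A : Type} {a b : A} (p : a = b) : b = a := by
  cases p   -- path induction: reduce to the case p = rfl
  rfl       -- the constant path reversed is itself

theorem path_trans {A : Type} {a b c : A} (p : a = b) (q : b = c) : a = c := by
  cases p   -- path induction on p
  exact q   -- composing with the constant path gives q
```

A proof assistant mechanically checks that each tactic step is valid, which is exactly the kind of verification the special year aimed to make practical at scale.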


by Andrej Bauer

Recommended Viewing: An animated visualization of the GitHub collaborations of over two dozen mathematicians working on the HoTT book over a period of six months is available online.

Since spring, and even before that, I have participated in a great collaborative effort to write a book on homotopy type theory. It is finally finished and ready for public consumption. You can get the book freely online. Mike Shulman has written about the contents of the book, so I am not going to repeat that here. Instead, I would like to comment on the socio-technological aspects of making the book, and in particular on what we learned from the open-source community about collaborative research.

We are a group of two dozen mathematicians who wrote a six-hundred-page book in less than half a year. This is quite amazing, since mathematicians do not normally work together in large groups. A small group can get away with using obsolete technology, such as emailing LaTeX source files back and forth, but with two dozen people even Dropbox or any other file-synchronization system would have failed miserably. Luckily, many of us are computer scientists disguised as mathematicians, so we knew how to tackle the logistics. We used Git and GitHub. In the beginning, it took some convincing and getting used to, although it was not too bad. In the end, the repository served not only as an archive for our files but also as a central hub for planning and discussions. For several months, I checked GitHub more often than email and Facebook.


By Michael Lesnick

A still developing branch of statistics called topological data analysis seeks to extract useful information from big data sets. In the last fifteen years, there have been applications to several areas of science and engineering, including oncology, astronomy, neuroscience, image processing, and biophysics.

The story of the “data explosion” is by now a familiar one: throughout science, engineering, commerce, and government, we are collecting and storing data at an ever-increasing rate. We can hardly read the news or turn on a computer without encountering reminders of the ubiquity of big data sets in the many corners of our modern world and the important implications of this for our lives and society.

Our data often encodes extremely valuable information, but is typically large, noisy, and complex, so that extracting useful information from the data can be a real challenge. I am one of several researchers who worked at the Institute this year in a relatively new and still developing branch of statistics called topological data analysis (TDA), which seeks to address aspects of this challenge.

In the last fifteen years, there has been a surge of interest and activity in TDA, yielding not only practical new tools for studying data, but also some pleasant mathematical surprises. There have been applications of TDA to several areas of science and engineering, including oncology, astronomy, neuroscience, image processing, and biophysics.

The basic goal of TDA is to apply topology, one of the major branches of mathematics, to develop tools for studying geometric features of data. In what follows, I’ll make clear what we mean by “geometric features of data,” explain what topology is, and discuss how we use topology to study geometric features of data. To finish, I’ll describe one application of TDA to oncology, where insight into the geometric features of data offered by TDA led researchers to the discovery of a new subtype of breast cancer.
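To make “geometric features of data” slightly more concrete, here is a minimal sketch of the simplest TDA computation: zero-dimensional persistent homology, which tracks how clusters in a point cloud merge as points are connected at ever larger distance scales. (The toy point cloud and the union-find implementation are our illustrative assumptions; real TDA software also detects higher-dimensional features such as loops.)

```python
# Zero-dimensional persistence: grow balls around each data point and
# record the distance ("death time") at which separate clusters merge.
# Long-lived clusters are robust geometric features; short-lived ones
# are likely noise.
from itertools import combinations
from math import dist

def persistence_0d(points):
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    # Consider candidate merges in order of increasing distance.
    edges = sorted((dist(p, q), i, j)
                   for (i, p), (j, q) in combinations(enumerate(points), 2))
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:              # two clusters merge: one "dies" at scale d
            parent[ri] = rj
            deaths.append(round(d, 6))
    return deaths                 # one surviving cluster lives forever

# Two well-separated pairs of points.
deaths = persistence_0d([(0, 0), (1, 0), (10, 0), (11, 0)])
print(deaths)   # -> [1.0, 1.0, 9.0]
```

The two short-lived deaths at distance 1 reflect points pairing up within clusters, while the long gap before the final merge at distance 9 signals a genuine feature: the data consists of two well-separated clusters.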


By Boaz Katz, Subo Dong, and Doron Kushnir

Remnants of the type Ia supernova first observed by the sixteenth-century astronomer Tycho Brahe

On the evening of November 11, 1572, the twenty-six-year-old astronomer Tycho Brahe was about to make a discovery that would change his life and give a significant boost to the scientific revolution. While casually staring at the night sky, he suddenly noticed a very bright, unfamiliar star in the constellation Cassiopeia. The star, which was as bright as Venus, was located in the inner parts of the famous W-shaped constellation, well known to many common people, let alone astronomers. What Tycho saw looked like the appearance of a new star (nova stella). He was so astonished that he sought the confirmation of others to assure himself that he was not hallucinating.

Unknown to Tycho, such new stars had appeared during the previous centuries (recorded as “guest stars” in Chinese chronicles), with a much brighter one reported in 1006. While these events were very important to astrologers, they had no lasting effect on astronomical thinking at the time. Tycho, however, realized that such an event was revolutionary. By accurately and repeatedly measuring the position of the “nova,” Tycho showed that it was much farther away than the moon. In one night, Tycho managed to scientifically falsify the millennia-old Aristotelian belief that nothing beyond the sphere of the moon can change. This convinced Tycho that the “known” cosmology was wrong and motivated him to devote his life to measuring the stars and planets in order to uncover the “true” cosmology. His hard, lifelong work paid off. His careful measurements of the positions of the planets enabled Johannes Kepler to discover his laws of planetary motion, which in turn led to Isaac Newton’s law of gravity. Kepler would later say that if Tycho’s star did nothing else, it produced a great astronomer. Yet even Tycho and Kepler could not have appreciated that what seemed like a new star was actually an explosion of unimaginable power, and that such explosions are crucial for our existence.
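The force of Tycho's parallax argument is easy to reconstruct with modern numbers (the figures below are standard textbook values, assumed for illustration rather than taken from the article): a nearby object should shift against the background stars as the Earth's rotation carries the observer across a baseline of roughly one Earth radius. The Moon's diurnal parallax of about one degree was known; the nova showed no measurable shift at the few-arcminute precision achievable before the telescope, forcing it far beyond the Moon.

```python
# Diurnal parallax: an object at distance d, observed across a baseline
# of ~1 Earth radius, appears to shift by an angle p, with
# d ~ R_earth / tan(p).  Smaller shift means greater distance.
from math import radians, tan

R_EARTH_KM = 6371.0

def distance_from_parallax(parallax_deg):
    return R_EARTH_KM / tan(radians(parallax_deg))

# The Moon's ~1 degree parallax puts it at ~365,000 km -- about right.
moon_km = distance_from_parallax(1.0)

# No detectable shift at a few arcminutes of precision forces the nova
# beyond several million km, many times the lunar distance.
floor_km = distance_from_parallax(4 / 60)   # 4 arcminutes

print(round(moon_km))           # roughly the Earth-Moon distance
print(floor_km > 10 * moon_km)  # -> True: far beyond the sphere of the moon
```

The calculation shows why a single night of careful measurement sufficed: any object inside the Moon's sphere would have shown a shift far larger than Tycho's instruments could possibly have missed.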