IAS Scholars on Artificial Intelligence

With the launch of chatbots such as ChatGPT and Bard, artificial intelligence (AI) has never been more prominently placed in the popular imagination. At the Institute for Advanced Study, the technology has been the subject of interdisciplinary discussion for some time. In 2019–20, the Schools of Mathematics and Social Science convened two cross-disciplinary workshops and hosted two public events that reflected on the social and ethical challenges of developing machine learning and deploying it "in the wild." Since then, AI has been utilized in numerous fields of research at IAS. For example, machine learning recently enabled a team of scholars, led by Lia Medeiros, AMIAS Member in the School of Natural Sciences, to generate the sharpest-ever image of a black hole.

Panelists at the Steering AI panel discussion speaking in a packed Wolfensohn Hall (photo: Maria O'Leary)

In June 2023, Alondra Nelson, Harold F. Linder Professor in the School of Social Science, also convened a meeting of the IAS AI Policy and Governance Working Group on campus, bringing together experts from a mix of sectors, disciplines, perspectives, and approaches to compile a set of recommendations on sociotechnical mechanisms to ensure accountability in the responsible use of AI tools and systems. Members of the Working Group also engaged in a public panel discussion on the related subject of Steering AI for the Public Good. Following this dialogue, the IAS campus has continued to serve as a hub for debate surrounding these technologies.

The following quotes from Faculty, Members, and Visitors from all four IAS Schools demonstrate the breadth of discussion:

 

"With the explosion in data driven by new digital detectors, AI has become an essential tool for the analysis of astronomical observations. It is used to pick out the interesting and unusual needles in the haystacks of data generated by modern instruments. At the same time, AI is being used in novel ways to find patterns and relationships in data that have not been recognized before. There is a lot of hope that in this way, AI might lead to breakthroughs in long standing puzzles, such as the nature of turbulence in fluids. What I find fascinating is that the same algorithms scientists use to model data are also applied in so many other ways in so many other domains, including, of course, the social sciences and humanities. Rarely does one computational method impact so many aspects of science and society simultaneously."

James Stone
Professor, School of Natural Sciences
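
As a concrete, purely illustrative example of the "needles in the haystack" idea, the sketch below uses scikit-learn's IsolationForest to rank sources in a hypothetical feature catalog by how unusual they are, so that only the strangest few are passed to a human for follow-up. The catalog, features, and thresholds are invented for illustration and are not drawn from any IAS analysis pipeline.

```python
# Minimal sketch: flagging "needles in the haystack" in survey data with an
# isolation forest. The feature table and threshold are hypothetical; real
# pipelines engineer features from light curves, spectra, or images.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical catalog: each row is one source, columns are summary features
# (e.g., brightness, color, variability amplitude). Most sources are ordinary.
ordinary = rng.normal(loc=0.0, scale=1.0, size=(10_000, 3))
unusual = rng.normal(loc=5.0, scale=1.0, size=(10, 3))   # a few rare outliers
catalog = np.vstack([ordinary, unusual])

# Fit an isolation forest and score every source; lower scores = more anomalous.
model = IsolationForest(contamination=0.005, random_state=0).fit(catalog)
scores = model.score_samples(catalog)

# Hand the most anomalous sources to a human for follow-up.
top = np.argsort(scores)[:10]
print("Indices of candidate 'needles':", top)
```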

 

"I am not worried that technology has become too smart. I am worried that it is not smart enough. I don’t fault the technology for its shortcomings; I fault the humans who make wrongheaded decisions because they attribute to software intellectual prowess it does not have. We are led astray by the word 'intelligence' in AI. It is too amorphic, and it causes us to extrapolate from our vague sense of what 'intelligence' is in humans. We then attribute to the software capabilities it does not have and assume it can 'evolve' the way humans learn new skills. Instead of losing ourselves in 'AI' we better name new computational technology for what it is—a tool. It could be extraordinarily helpful, or it could be used for nefarious purposes. The only way to identify its potentials and dangers is to stop treating it as a black box."

Yulia Frumer
Founder's Circle Member (2022–23), School of Historical Studies

 

"The recent developments in artificial intelligence are both incredibly exciting and deeply unsettling. If rapid developments continue, it is not hard to imagine both tremendous benefits and harms to society. Further, some of the biggest impacts may end up being things that we can’t even imagine today. This is a critical moment because the research and policy choices today may shape the future of this technology and how it interacts with society. To rise to this occasion, we need new research and thinking about what is—and is not—possible, what is—and is not—happening, and perhaps most importantly, what we want the future to be."

Matthew Salganik
Infosys Member (2022–23), School of Social Science

 

"We are living through an AI revolution. Previously unthinkable amounts of data and incredibly powerful mathematical, statistical, and physical models are powering this revolution in ways we never imagined. Whilst AI poses profound challenges to us as a species, it also opens up new avenues through which we can transform and revolutionize our lives. Of particular interest is what is referred to as a 'digital twin': data-charged models based on physics that can help us in accurate predictions and decision-making. Digital twins can help in building safer and more efficient engineering structures. They can lead to better medical outcomes for us all. They can also offer us a deeper understanding of the natural world. But this is just the start. With fast-moving improvements in physics-based machine learning models, scalable methods in data simulation and decision making, and high-performance computing, it may at some point be possible to have a digital twin of our entire planet. Where that will lead to is anybody's guess. But one thing is for certain. Our lives will never be the same again."

Zahra Lakdawala
Visitor (2022–23), School of Mathematics
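
The digital twin Lakdawala describes pairs a physics model with streaming data. The minimal sketch below, built on an invented Newtonian-cooling model and a hypothetical noisy sensor, shows the basic loop: the physics model predicts the next state, and each measurement corrects that prediction via a scalar Kalman-style update. Real digital twins use far richer physics and assimilation methods; nothing here reflects the author's own models.

```python
# Minimal sketch of the "digital twin" idea: a physics model advances the state,
# and incoming sensor data continually corrects it (a scalar Kalman-style update).
# The cooling model, noise levels, and "sensor" are all hypothetical.
import numpy as np

dt, k, ambient = 1.0, 0.1, 20.0          # time step, cooling rate, ambient temp
process_var, sensor_var = 0.5, 4.0       # assumed model and sensor noise

def physics_step(temp):
    """Newtonian cooling: the twin's physics-based prediction."""
    return temp + dt * (-k * (temp - ambient))

rng = np.random.default_rng(1)
true_temp, est, est_var = 90.0, 80.0, 25.0   # true system vs. the twin's estimate

for t in range(20):
    # Reality evolves, with disturbances the model does not capture.
    true_temp = physics_step(true_temp) + rng.normal(0, 0.5)
    sensor = true_temp + rng.normal(0, 2.0)         # noisy measurement

    # 1) Predict with physics, 2) correct with data.
    pred, pred_var = physics_step(est), est_var + process_var
    gain = pred_var / (pred_var + sensor_var)
    est, est_var = pred + gain * (sensor - pred), (1 - gain) * pred_var

    print(f"t={t:2d}  true={true_temp:6.2f}  twin estimate={est:6.2f}")
```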

 

"Machine learning has been an important tool in the field of ancient world studies for well over a decade, and recent technological advances have significantly improved the power of that tool. It is a critical responsibility of the digital scholar to make evolving technology serve meaningful ends for human users, and to apply human checks to digitally generated data. At the Krateros Project, we interact with a subset of machine learning called 'computer vision', which is devoted to the analysis of digital images. Our project produces high quality images and 3D objects based upon text-bearing artifacts called 'squeezes'. We have been working with experts for the last several years on the process of training an algorithm to identify the text in our digital files, so that the attention of human experts can be drawn more efficiently to places of potential dissonance between published texts and text-bearing artifacts."

Aaron Hershkowitz
Research Associate (2018–23), School of Historical Studies
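
The computer-vision step described above, drawing expert attention to candidate text regions, can be illustrated with a generic sketch. The code below is not the Krateros Project's pipeline; it uses OpenCV to threshold a hypothetical squeeze image ("squeeze.jpg", an invented filename) and box connected letter-like regions for human review.

```python
# Generic sketch: locate candidate text regions in an image so a human
# epigrapher can review them. Filenames and size thresholds are hypothetical.
import cv2

image = cv2.imread("squeeze.jpg")                       # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Adaptive thresholding copes with uneven lighting across the squeeze surface.
binary = cv2.adaptiveThreshold(
    gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 31, 10
)

# Merge nearby strokes, then treat each connected blob as a candidate region.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
merged = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
contours, _ = cv2.findContours(merged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 50]
for (x, y, w, h) in boxes:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("squeeze_candidates.jpg", image)
print(f"Flagged {len(boxes)} candidate regions for expert review.")
```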

 

"Now that chatbots are capable of acing standardized tests, it is time to put them to work as research assistants. It is in this spirit that I am utilizing AI in my current research. The aim of my work is to study human memory for stories, and to collect experimental data from many participants reading many stories. The challenge with this type of research, which requires quantifying story recall, is that language processing is notoriously difficult to automate at the semantic level—machines don't really understand stories. This has completely changed in recent years, with natural language processing having taken an enormous leap in performance. Thus, the scale at which we can analyze data and develop a quantitative scientific study of human memory for narratives is only possible now thanks to these chatbots being repurposed as a kind of scientific instrument. This is why I’m less excited about self-driving cars, and more interested in seeing AI used as a tool to deepen human knowledge."

Tankut Can
Eric and Wendy Schmidt Member in Biology (2021–24), School of Natural Sciences
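
One way to make this concrete: semantic similarity between a story and a participant's recall can be scored with sentence embeddings. The sketch below is a generic illustration and not the author's actual pipeline; the story, recall text, model choice ("all-MiniLM-L6-v2" via the sentence-transformers library), and threshold are all hypothetical.

```python
# Minimal sketch of scoring story recall at the semantic level with sentence
# embeddings. Stimulus, recall text, model, and cutoff are all hypothetical.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical stimulus story split into events, and a participant's recall.
story_events = [
    "A fisherman rowed out before dawn.",
    "A storm forced him back to the harbor.",
    "He shared his small catch with a neighbor.",
]
recall = [
    "A man went fishing early in the morning.",
    "Bad weather made him return.",
]

# Embed both sets of sentences and compare them semantically.
story_emb = model.encode(story_events, convert_to_tensor=True)
recall_emb = model.encode(recall, convert_to_tensor=True)
similarity = util.cos_sim(story_emb, recall_emb)   # events x recalled sentences

# Count a story event as "recalled" if some recalled sentence is close enough.
threshold = 0.5                                    # hypothetical cutoff
recalled = (similarity.max(dim=1).values > threshold).sum().item()
print(f"Recalled {recalled} of {len(story_events)} story events.")
```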

 

"We are currently leading a collaborative, Mellon-funded global book project, which includes the development of tools for the study of handwritten texts. Using AI and machine learning, our interdisciplinary team of computer scientists (at Notre Dame), Ethiopian Studies scholars, and digital humanists (at the University of Toronto), developed an Optical Character Recognition transcription tool for the automated large-scale processing of Ethiopian manuscripts handwritten in Gə’əz, the liturgical language of the Ethiopian Orthodox Church, still in use today. This tool, which transforms handwriting into searchable transcriptions, will enable research that includes distant reading, provenance research, and computational textual studies in Ethiopia and abroad. Thousands of Ethiopian manuscripts in Gə’əz have been imaged, far more than any one scholar can analyze. In a new article published in Digital Humanities Quarterly, we discuss the methodology for developing this open-source, low-barrier tool which can be run offline and without a Graphics Processing Unit, providing open access to vital resources for sustaining the history and living culture of Ethiopia and its people."

Suzanne Conklin Akbari
Professor, School of Historical Studies

Melissa Moreton
Research Associate (2021–24), School of Historical Studies
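
For readers unfamiliar with how such a transcription tool is used, the sketch below shows a generic handwritten-OCR workflow with a publicly available model. It is emphatically not the project's Gə’əz tool: the "microsoft/trocr-base-handwritten" checkpoint reads Latin-script English, and "page.jpg" is a hypothetical input. Like the tool described above, however, it can run offline on a CPU once the model weights are downloaded.

```python
# Generic illustration of handwritten OCR with a public Hugging Face model.
# This is NOT the project's Gə’əz tool; model and filename are placeholders.
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")

# Load one line-level image (real pipelines first segment the page into lines).
image = Image.open("page.jpg").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values

# Generate a searchable transcription of the handwritten line.
generated_ids = model.generate(pixel_values)
text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(text)
```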

 

"I am truly amazed by the capabilities of ChatGPT; it has far exceeded my expectations. From constructing websites to crafting summaries, its versatility is astonishing. What is even more remarkable, though, is that the underlying technology powering ChatGPT is perhaps not entirely novel. To me, this sheds light on human intelligence and opens up new avenues for understanding the very nature of intelligence itself. While I haven't yet discovered a groundbreaking way to incorporate AI into my research, I can't help but anticipate that, in the near future, ChatGPT might trigger a revolution in how researchers approach their work. A question that likely piques the interest of many is whether AI will eventually supplant human labor. Currently, I am of the belief that AI offers advantages akin to the industrial steam engine of this era. As a result, individuals might be liberated from the burden of repetitive cognitive tasks.”

Pei Wu
Member (2021–23), School of Mathematics

 

"Currently, using ChatGPT for historical writing and research is extremely problematic. If you ask it to put together a bibliography for even the simplest topics, half or more of the references will be fake or slightly off. If you ask it to compile a syllabus—a list of topics, say—or to write a short essay, it will spawn some of the most generic ideas and prose. And still, you have the issue of it generating incorrect information. This makes sense, of course, because it’s predictive text based on a pool of relatively limited information; it’s not actually 'intelligence.' This means responses to queries are not going to be original or compelling. The only 'successful' thing about open AI so far is that it has added extra labor to our teaching. Marking papers or coming up with new assignments (that either engage with open AI or discourage its misuse) are activities that now entail another layer of complication."

Esther Liberman Cuenca
Member (2022–23), School of Historical Studies

Press Contact

Lee Sandberg
lsandberg@ias.edu
609-455-4398