An ELSI for AI: Learning From Genetics to Govern Algorithms

In a recent article for Science, acclaimed sociologist Alondra Nelson, Harold F. Linder Professor in the School of Social Science, proposes a response to artificial intelligence’s unchecked proliferation and escalating harms: a reapplication of the Ethical, Legal, and Social Implications (ELSI) program developed for the Human Genome Project.

The ELSI framework, unlike reactive legislative efforts or lawsuits meant to bridle AI, takes a proactive approach. In 1990, the leadership of the Human Genome Project dedicated a portion of its research budget to examining the social, ethical, and legal questions inherent to genomic work. In other words, the work of predicting and preventing biotechnology’s potential harms was embedded in the creation of the biotechnology itself. Moreover, this oversight was distributed across multiple agencies and research centers.

Nelson argues that ELSI’s “co-design ethos,” in which social scientists, historians, philosophers, legal experts, and community members were treated as partners to genomic scientists, should be centered “upstream” in artificial intelligence research if we want to avoid further AI harms. Crucially, the legacy of ELSI also reveals pitfalls to avoid, including the program’s reliance on a narrow set of elite perspectives.

In Nelson’s view, the summer of 2025 witnessed many avoidable tragedies, a season that need not be repeated. “AI development now stands at an inflection point, but with higher stakes and faster timelines,” Nelson writes. “The question is whether we will act while AI governance frameworks are still forming or will normalize preventable deaths, discrimination, and erosion of public trust as the cost of doing business. The choice is between proactive responsibility and reactive crisis management.”

Read the article in full in Science.