Delving Into Digital (In)Equality

In the nineties and aughts, the term “digital divide” came into widespread use, marking an early and decisive frontier in digital scholarship. It articulated two essential relationships to technology, defined along an axis of access. (Some research, including a landmark report by the National Telecommunications and Information Administration, used the language of “haves” and “have-nots” to refer to these newly delineated social groups, imagining the world as a strict binary: people with computers and the internet, and people without them.)1 The metaphor popularized—and concretized—the notion that digital technologies could shape a person’s opportunities. Like any other resource, the web was a vector of social power.

This theory branched, as theories do, into more specific fields. Scholars studied the “gender gap” and the “age gap,” and defined inequalities in usage, as well: digitally marginalized populations had more access to entertainment and less to education and creative endeavor, for example. Over the course of the next thirty years of digital expansion, however, this framework began coming up short. Digital inequality could no longer be solely imagined as a function of exclusion. Digital inclusion, too, creates systematic inequalities.  

With these challenges in mind, the Institute’s School of Social Science is turning its focus for the 2025–26 academic year to the theme of Digital (In)Equality. The parentheses in the title gesture to what Alondra Nelson, Harold F. Linder Professor in the School and the theme year’s organizer, refers to as the “double-edged-ness” of the contemporary digital ecosystem. Nelson’s convening insight challenges the dominant narratives around technology: both techno-optimism and techno-pessimism miss the point, she contends. Digital technologies are simultaneously creating new forms of equality and new forms of inequality. Understanding this co-constitution, rather than choosing one narrative over the other, is essential to developing the analytical frameworks needed to make sense of contemporary society.

Put differently: As more and more of our encounters, labor, and lives are digitalized, the potential goods and harms of these technologies for individuals and groups also become more pronounced. Both possibilities are occurring rapidly and concurrently: electronic health record (EHR) technology, for example, can help doctors better understand the social determinants of a patient’s health, yet that same data collection also increases the risk of racialized surveillance.2 The acceleration itself represents a threat: scholars of technology and society repeatedly warn that digital technologies are outpacing our ability to research them. Without the time and resources to thoughtfully implement our new tools, it is difficult to manage their consequences.

Scales, by Donghyun Lim

This is a precarious balance. One can envision new technology, like artificial intelligence, accelerating social progress, bringing previously underserved communities into liberatory networks of communication and exchange; or envision it depleting resources, dispossessing workers, and consolidating wealth and power. It does these things already. Alondra Nelson and her collaborators want to face this precarity head-on, moving beyond the “digital divide” to pose questions adequate to today’s landscape. Moreover, they hope to explore further the ways that digital inequality and digital equality are co-constituted, actively shaped by and shaping one another. The Digital (In)Equality theme year is Nelson’s attempt to do what technology policy has consistently failed to do: bring together the scholarly resources needed to think seriously about how digital systems concentrate and distribute power, and to envision genuinely democratic applications and oversight. 

Nelson spoke with The Institute Letter this fall about the perils of technological domination—how dominant groups in society might protect and extend their power via digital channels. She spoke too about the work being done to imagine otherwise, and the singular promise of a year dedicated to focused collaboration.  

Above all, Nelson wants to cut through the assumption that digital technology is inherently “neutral.” This isn’t always intuitive. Digital tools are so integrated into our interactions—with goods, with services, with opportunities, and with one another—that they seem utilitarian, or else a condition of modernity. Though they are not necessarily designed to cause harm, Nelson argued, they emerge from systems that already do. “Even if it feels like it’s ‘just coding,’” she said, “scholars would say that anything that comes into the world as technology is the culmination of all of these flows of power, and materials, and social networks.”  

One salient example is algorithmic bias, which can occur when discriminatory patterns result from embedded design choices or the way an algorithm’s training data is collected, labeled, or sampled. “Existing data sets the fundamental conditions for what we can predict about the future. That means that we are often dragging this bag of historic inequalities into the present,” Nelson explained. To illustrate this idea, Nelson pointed to redlining, a historical practice of racial discrimination in the housing market. In redlined areas, essential services, like insurance and loans, were withheld from entire neighborhoods on the basis of their racial and ethnic makeup. “If that community or zip code has always been understood not to have access to mortgage loans for whatever reason, and then you build an algorithm that says, we want the good predictors in the past to be predictors for the future, then you have a whole swath of people who are being discriminated against.”

“And then we also have something that’s more material,” she continued, “because those are forms of structural inequality where the algorithm becomes part of that infrastructure. The fundamental issue is that we are very much constrained by the world that data allows us to create.”  

In addition to (ostensibly) determining an individual’s likelihood to default on a loan, algorithms are now used to sort résumés for potential employers; decide eligibility for social services; predict crime and recidivism; and diagnose health conditions from X-rays, among other uses.3
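To make the mechanism concrete, consider a minimal sketch. It is hypothetical and not drawn from any study cited here: it trains a simple classifier on synthetic “historical” loan decisions that encode a redlining-style disparity, then shows the model reproducing that disparity for new applicants with identical finances. All data, thresholds, and variable names are invented for illustration.

```python
# Hypothetical illustration only: synthetic data, invented thresholds.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two neighborhoods (0 = historically redlined, 1 = not), with identical
# income distributions in both groups.
neighborhood = rng.integers(0, 2, size=n)
income = rng.normal(50, 15, size=n)  # thousands of dollars

# "Historical" approvals depended on neighborhood as well as income --
# the bag of historic inequalities carried into the training data.
approved = ((neighborhood == 1) & (income > 45)) | ((neighborhood == 0) & (income > 70))

# Train on the historical record: good predictors of past decisions
# become predictors of future ones.
model = LogisticRegression(max_iter=1000).fit(
    np.column_stack([income, neighborhood]), approved
)

# Score two new applicants with the same income but different zip codes.
for nb in (0, 1):
    prob = model.predict_proba([[60.0, nb]])[0, 1]
    print(f"neighborhood={nb}: predicted approval probability = {prob:.2f}")
# The applicant from the formerly redlined neighborhood scores far lower,
# even though nothing about their individual finances differs.
```

Nothing in the sketch “intends” to discriminate; the disparity is simply the historical pattern, recovered and projected forward.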

If the world is increasingly defined and mediated by information, and that information is increasingly a kind of algorithmic echo chamber, society loses its problem-solving abilities—its agency.

Algorithmic bias is just one touchpoint. All of the theme year’s participants seek to articulate the compound, complicated ways in which existing data and its applications have limited—and might limit, in novel ways, in the future—choices and lives, even as they appear to enhance them. At the same time, part of the work of the theme year is to imagine other, more just digital infrastructure. The latter ambition often relies on the former project. What would it mean for technology to genuinely support social mobility or amplify political voice? Do data and the digital have a use in efforts to achieve equality?

Nelson believes they do. “Digital equality looks more like taking users as partners in the work of innovation,” she offered.  

This is just one (as yet, somewhat hypothetical) approach. But the idea has legs, and history. In a recent article for Science,4 Nelson recalls a model of this kind of equality effort: the Ethical, Legal, and Social Implications (ELSI) program developed for the Human Genome Project. In 1990, the Project’s leadership dedicated a portion of its research budget to examining the social, ethical, and legal questions inherent to genomic work. Crucially, predicting and preventing potential harms of biotechnology were embedded in the creation of the biotechnology itself, and the work of this oversight was distributed across multiple agencies and research centers. The paper argues that ELSI’s “co-design ethos”—wherein social scientists, historians, philosophers, legal experts, and community members were treated as partners to genomic scientists—should be centered “upstream” in our approach to research in artificial intelligence, in order to avoid AI harms.

Chess, by Donghyun Lim

The work of Shobita Parthasarathy, Member in the School of Social Science, advances a similar argument, critiquing the notion that AI equity and justice concerns can be solved from the top down (e.g., by policymakers, academics, and the technical community itself). These approaches, such as educating software developers about the impact of algorithmic bias, “may address some harms […] but will always be behind the curve of inequities that emerge as AI makers exercise, and strive to protect, profit-seeking prerogatives.”5

Instead, AI agenda-setting for social good requires incorporating thinking from members of marginalized communities, not for optics, but because those voices are the ones with the most at stake. The advantages are plural: these efforts allow the governance agenda to reflect those whose welfare it seeks to protect, while at the same time fostering democratic engagement in emerging technologies and the decisions made about them.

This idea is not the only path forward for digital equality, nor is it a solution per se. It is, however, an example of the kinds of inquiry that the Digital (In)Equality theme year will move towards: thinking that is cautious but hopeful, resourceful yet grounded in research. The work of the year is, then, to reclaim agency over our digital futures, which requires facing thorny theoretical questions, as well as big existential ones, head-on.

One of the theme year’s participants will research whether and how women’s marginalization from the venture capital sector—and therefore from the rooms in which technological ideas are literally invested—further embeds gender inequality into innovation. Another will focus on the use of media and journalism by communities denied access to media power, examining the relationship between online virality and African American history. Yet another studies the indigenous borderlands of the Pakistani state, examining how lives and livelihoods there are shaped by China’s Belt and Road Initiative. All will tread the theme’s two “edges”: goods and harms, optimisms and pessimisms.

Generative, urgent questions can yield generative, urgent answers—when afforded the right conditions. Nelson’s approach to making the theme year into such a space is less about prescription and more about collective imagination. “I am pretty committed to the synergy of scholars working together and figuring out what the collaboration looks like,” Nelson said. Her refusal to prescribe outcomes is itself a methodological stance: one that prioritizes scholars from different social science disciplines working across fields and perspectives to develop genuine collective insight.

The fact that Members come only for a fixed period of time—outside their usual academic contexts, with rich thinking lives elsewhere—heightens what Nelson calls the “magic” of their encounters with one another’s scholarship, in the Institute’s formal and informal settings. “These eight or ten people have never been in a room together before. Ever. And now they’re going to be in a room together, every fortnight, for a year. Moreover, they’ll be neighbors and they’ll have lunch together,” said Nelson. “What are the conversations, projects, theorizing, writing that could only happen through these people being here together at this time? That was the question: What can we distinctively do here, together?”

The theme year therefore offers both prescient material to be worked through on campus and a handhold for continued collaboration once scholars have departed. The group Nelson gathered for PLATFORM, the 2023–24 theme year, continues to correspond and co-imagine. Nelson considers this a testament to the rare thinking enabled and enriched by the theme. This past June, PLATFORM participants gathered for a weeklong reunion at the Institute, and special issues of the journals Poetics: Journal of Empirical Research on Culture, the Media and the Arts and Limn are forthcoming from the group, as is a book, “Auditing AI” (MIT Press). “It’s still going because people want to do it and it’s work that they’ve created. It’s not anything that I’ve done,” said Nelson. “It’s organic to the experience of being here together.”

She concluded: “It’s just such a rare opportunity to both deepen our thinking for individual projects and to have more impact at scale, in the work either while they’re here or in the years to come.”  


[1]  National Telecommunications and Information Administration. 2001. “Falling through the Net: A Survey of the ‘Have-Nots’ in Rural and Urban America.” In Compaine, B. M. (ed.) The Digital Divide: Facing a Crisis or Creating a Myth? https://doi.org/10.7551/mitpress/2419.001.0001 

[2]  Cruz, T. M. 2023. “Racing the Machine: Data Analytic Technologies and Institutional Inscription of Racialized Health Injustice.” Journal of Health and Social Behavior. https://doi.org/10.1177/00221465231190061 

[3]  Le, V. and Moya, G. 2021. “Algorithmic Bias Explained: How Automated Decision-Making Becomes Automated Discrimination.” The Greenlining Institute.

[4]  Nelson, A. 2025. “An ELSI for AI: Learning from Genetics to Govern Algorithms.” Science. https://doi.org/10.1126/science.aeb0393 

[5]  Parthasarathy, S. and Katzman, J. 2024. “Bringing Communities In, Achieving AI for All.” Issues in Science and Technology. https://doi.org/10.58875/SLRG2529