The State and the Machine: Governing—and Governing With—AI

Government use and regulation of artificial intelligence must balance innovation, accountability, and trust in a fast-moving technological landscape.

  • Governments face a dual role as both AI regulators and users.
  • Public trust depends on transparency, measurement, and accountability.
  • Institutional reform is needed to turn principles for responsible AI use into practice.
  • Government use of AI should focus on practical, low-risk projects to enhance public service delivery and public trust.

Artificial intelligence (AI) has entered a new phase of influence. Beyond the private sector, governments are embedding it in decision-making, social benefit delivery, and public communication. Yet this raises a fundamental question: Can governments harness a largely untested technology like AI for efficiency gains without unintended consequences or a loss of public trust?

A recent expert panel at England’s Jesus College, Oxford, convened by the Institute for Ethics in AI and the AI Policy and Governance Working Group (AIPGWG), argued that success will depend less on new laws than on new capabilities: better measurement, smarter procurement, and a willingness to experiment safely.

Opening the discussion, Dr. Alondra Nelson, co-founder of the AIPGWG, inaugural Accelerator Fellow at the Institute for Ethics in AI, and Harold F. Linder Professor at the Institute for Advanced Study (IAS), asked how, when “trust in democracy and governments is low,” these tools could strengthen, rather than erode, confidence in public institutions. Over the course of the evening, three related challenges emerged: how to measure and govern AI systems, how to build and sustain public trust, and how to turn high-level principles into day-to-day practice.

We cannot govern what we cannot measure

Measurement emerged as one of the clearest fault lines in the discussion. Computer scientist Elham Tabassi, of the Brookings Institution and formerly of the U.S. National Institute of Standards and Technology (NIST), responded to Nelson’s opening question by cautioning governments to look beyond the promise of AI-enabled efficiency gains. “Governments should optimize for and prioritize public trust,” she said.

For Tabassi, the former NIST AI chief, regulating and deploying AI require different expertise. “Regulating AI needs technical expertise to audit external systems,” she said, while responsible deployment “requires procurement reform, operational evaluation, and continuous monitoring.” Without the ability to evaluate models and systems in practice, she warned, “We cannot effectively govern what we cannot measure.”

These measurement challenges sit within a broader institutional dilemma, according to Dr. James Phillips of University College London, a former adviser to the UK government: “In the UK, we have situations like this with healthcare: government provides something at the same time as regulating it.” The answer, he suggested, is to separate the functions and strengthen independent advice. He noted that the UK AI Security Institute (AISI) was established in part because the government’s only source of information about AI was the tech industry. There was a need for advice from people who “weren’t on the payroll of big tech.”

Trust is earned in drops and lost in buckets

Trust was the second major theme. Representing the UK AISI was Andrew Strait, the Institute’s head of Societal Resilience, who argued that governments should start with “low-risk, high-benefit” uses. He gave the example of planning-commission software that automates form-filling, describing it as managing “relatively boring admin tasks that nonetheless saves enormous amounts of money, time, and resources.” When governments get it wrong, he suggested, it really matters, so it is sensible to focus on areas that offer productivity benefits without threatening citizens’ rights.

The thread was picked up by Cassandra Madison of the Center for Civic Futures, who has worked inside and alongside U.S. state government. She warned that “trust is earned in drops and lost in buckets.” When handled well, routine interactions can, over time, shape citizens’ perceptions of competence. But a single significant failure can drain that trust all at once.

Capability, capacity, incentives

Translating lofty AI principles into day-to-day practice is still a struggle for governments, panelists agreed. “There is a persistent gap between AI principles and operation,” said Tabassi. Agencies have adopted the language of “transparency, accuracy, fairness,” but not the mechanisms to put those principles into practice.

Procurement rules, written for static goods and services, are notoriously difficult to adapt to AI systems, which evolve over time. Once a contract is signed, technical knowledge often stays with the vendors, while civil servants lack the skills and budgets to monitor performance.

The solution, Tabassi argued, lies in capacity-building rather than in more ethics charters: “We need the technical capability, capacity, incentives, and the budget to actually operationalize all of them.” The speed of AI development makes this gap “more acute” than for previous technologies, a point that led other panelists to argue for controlled experimentation inside government.

We’re going to have experiments, like it or not

Picking up on the need for capacity and feedback, Strait challenged the notion that safety stifles progress. “If we solve and address issues around safety and security, we build public confidence that this technology should be used,” he said. By understanding risks such as fraud and misinformation, governments can, he said, “build the ability to absorb and prepare for the challenges and shocks AI will bring.”

Governments must be willing to experiment with AI, said Phillips, even if political culture makes them wary of being seen as “experimenting on the public.” “We’re going to have experiments whether we like it or not—it’s whether we’re in control of them,” he added. Structured trials, he argued, are preferable to unplanned experimentation.

Allowing civil servants to experiment safely with AI “can make them better buyers, and more sophisticated about the rules and regulations that should be applied,” Madison said. By reducing the likelihood of highly visible failures, governments can protect those “buckets” of trust.

Technology is built on FOMO

Audience questions covered employee recruitment, national AI sovereignty, and hype, underscoring how institutional capacity and market structure shape the use of AI. Madison said the difficulty of hiring technologists is not just a matter of pay; governments often do not know where to begin looking for tech talent.

On AI sovereignty, Phillips and Strait agreed that Britain’s edge lies not in computer chips or cutting-edge AI models, but in practical applications and open-source ecosystems that reduce vendor lock-in. Tabassi warned that concentration in the AI stack, where “the chip, the hyperscaler, the foundation-model developer” come from the same company, remains a systemic risk.

The discussion ended with the question of hype. “Technology is built on a lot of FOMO—fear of missing out—and belief that this technology is far more capable than it is,” Strait said. Nelson echoed that skepticism: “Are AI systems powerful? Yes. Might they be transformative? Possibly. But the hype is well above that for reasons of commercial interest.”

Governments must see through the hype to find an equitable balance of innovation and accountability. In the end, governing technology, and governing with it, will test not just the machinery of the state, but its capacity for foresight, integrity, and trust.

Follow the work of the AI Policy and Governance Working Group to stay informed on future discussions about ethics, governance, and the state.