A strange thing is happening in the world of artificial intelligence. The very people leading its development are warning of the immense risks of their work. A recent statement released by the nonprofit Center for AI Safety, signed by hundreds of prominent AI executives and researchers, said: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Extinction? Nuclear war? If they're so worried, why don't these scientists just stop?

That's easier said than done. Nuclear scientists didn't stop until they had perfected the bomb. And AI has innumerable benefits, too. But the statement, alongside a chorus of recent calls for government regulation of AI, raises several questions: What should the rules governing the development of AI look like? Who crafts them? Who polices them? How would such norms coexist with society's existing laws? How do we account for differences among cultures and countries?

For answers, FP's Ravi Agrawal spoke with the academic and policy advisor Alondra Nelson, who served in the White House for the first two years of U.S. President Joe Biden's administration. Nelson was the first African American and the first woman of color to lead the Office of Science and Technology Policy, where she oversaw the drafting of the influential Blueprint for an AI Bill of Rights. She is currently a professor at the Institute for Advanced Study, an independent research center in Princeton, New Jersey. FP subscribers can watch the full discussion or read an edited and condensed transcript, exclusive to FP Insiders.