Our experts

Abi Olvera

Senior Fellow

Council on Strategic Risks

Areas of expertise: Biosecurity, Cybersecurity, International competition, National security, US AI policy

Adam Gleave

CEO

FAR AI

Areas of expertise: Adversarial robustness, AI alignment, Finance, Interpretability, Model security

Ajeya Cotra

Senior Program Officer, Potential Risks from Advanced AI

Open Philanthropy

Areas of expertise: AI alignment, Capability advances, Explaining AI risk, Extinction risk scenarios, Reinforcement learning from human feedback

Prof Anton Korinek

Professor

Brookings, University of Virginia

Areas of expertise: AI & the developing world, Automation & inequality, Finance

Austin Carson

Founder & President

SeedAI

Areas of expertise: AI regulation, Cybersecurity, Open source AI, Standards and evaluations, US AI policy

Aviv Ovadya

Research Fellow

newDemocracy

Areas of expertise: AI regulation, Capability advances, Corporate governance, Democracy, Democratic AI, Evaluations, Misinformation and deep fakes, Misuse

Buck Shlegeris

CTO

Redwood Research

Areas of expertise: Evaluations, Interpretability, Reinforcement learning from human feedback

Carl Robichaud

Program Officer

Longview Philanthropy

Areas of expertise: International competition, National security, Nuclear

Chris Painter

Member of Technical Staff

ARC Evals

Areas of expertise: Evaluations, Standards and evaluations, US AI policy

Christine Parthemore

CEO

Council on Strategic Risks

Areas of expertise: Biosecurity, Military, National security, Nuclear

Prof Christoph Winter

Assistant Professor / Director

Harvard University / Instituto Tecnológico Autónomo de México / Legal Priorities Project

Areas of expertise: Authoritarianism, Explaining AI risk, Law

Dan Hendrycks

Executive and Research Director

Center for AI Safety

Areas of expertise: Adversarial robustness, AI alignment, AI regulation, Biosecurity, Corporate governance, Democratic AI, Explaining AI risk, Extinction risk scenarios, International competition, Misuse, ML community, Nuclear, Related risks

Daniel Privitera

Executive Director

KIRA Center for AI Risks & Impacts

Areas of expertise: Democracy, EU AI policy, Explaining AI risk, Misinformation and deep fakes

David Krueger

Assistant Professor in Machine Learning and Computer Vision

University of Cambridge

Areas of expertise: Adversarial robustness, AI alignment, Critiques of AI risk arguments, Explaining AI risk, Interpretability, ML community

Fynn Heide

Research Scholar

Centre for the Governance of AI

Areas of expertise: China AI policy, International competition

Dr Giulio Corsi

Research Associate

University of Cambridge

Areas of expertise: Climate, Democracy, Misinformation and deep fakes

Haydn Belfield

Academic Project Manager & Research Fellow

University of Cambridge, Leverhulme Centre for the Future of Intelligence & Centre for the Study of Existential Risk

Areas of expertise: AI regulation, Antitrust, International AI policy, UK AI policy

Holden Karnofsky

Director of AI Strategy

Open Philanthropy

Areas of expertise: Corporate governance, Extinction risk scenarios, Standards and evaluations, US AI policy

Huw Roberts

PhD Researcher

University of Oxford

Areas of expertise: AI regulation, China AI policy, Fairness, International AI policy, UK AI policy

Prof Jacob Steinhardt

Assistant Professor

University of California, Berkeley

Areas of expertise: Adversarial robustness, AI alignment, Evaluations, Extinction risk scenarios, Forecasts and surveys, Interpretability, ML community

Jan Brauner

PhD Researcher

University of Oxford

Areas of expertise: AI alignment, Evaluations, Explaining AI risk, Extinction risk scenarios, Healthcare

Jeffrey Ding

Assistant Professor

George Washington University

Areas of expertise: China AI policy, National security

Jeffrey Ladish

Executive Director

Palisade Research

Areas of expertise: Adversarial robustness, Cybersecurity, Explaining AI risk, Misinformation and deep fakes, Open source AI

Jonas Schuett

Research Fellow

Centre for the Governance of AI

Areas of expertise: Corporate governance, EU AI policy, Standards and evaluations

Dr Joseph Carlsmith

Senior Research Analyst

Open Philanthropy

Areas of expertise: Critiques of AI risk arguments, Explaining AI risk

Katja Grace

Lead Researcher

AI Impacts

Areas of expertise: Critiques of AI risk arguments, Explaining AI risk, Extinction risk scenarios, Forecasts and surveys

Lee Sharkey

Co-founder, Chief Strategy Officer

Apollo Research

Areas of expertise: Evaluations, Interpretability, Standards and evaluations, UK AI policy

Lennart Heim

Research Fellow

Centre for the Governance of AI

Areas of expertise: Compute governance, Corporate governance, Hardware, Model security

Luke Muehlhauser

Senior Program Officer

Open Philanthropy

Areas of expertise: Compute governance, Standards and evaluations, US AI policy

Marius Hobbhahn

CEO

Apollo Research

Areas of expertise: Evaluations, Interpretability, Standards and evaluations, UK AI policy

Markus Anderljung

Head of Policy

Centre for the Governance of AI

Areas of expertise: AI regulation, Compute governance, Explaining AI risk, UK AI policy, US AI policy

Michael Aird

Acting Co-Director

Institute for AI Policy & Strategy

Areas of expertise: AI regulation, China AI policy, Compute governance, Corporate governance, EU AI policy, Explaining AI risk, Extinction risk scenarios, Forecasts and surveys, Government use of AI, International AI policy, International competition, Misuse, National security, Standards and evaluations, UK AI policy, US AI policy

Niki Iliadis

Director, AI and the Rule of Law

The Future Society

Areas of expertise: AI regulation, Democracy, International AI policy, Law, US AI policy

Dr Robert Trager

Co-Director

Oxford AI Governance Initiative

Areas of expertise: AI & the developing world, Compute governance, Explaining AI risk, International AI policy, International competition, Military, Nuclear, UK AI policy, US AI policy

Saffron Huang

Co-Director

Collective Intelligence Project

Areas of expertise: AI regulation, Democracy, Democratic AI, Evaluations, Fairness, Law, ML community, Standards and evaluations

Dr Seán Ó hÉigeartaigh

Director, AI: Futures and Responsibility

University of Cambridge

Areas of expertise: AI regulation, China AI policy, Critiques of AI risk arguments, EU AI policy, International AI policy, UK AI policy

Timothy Fist

Fellow, Technology & National Security

Center for a New American Security

Areas of expertise: Compute governance, Hardware, National security, US AI policy

Tom Davidson

Senior Research Analyst

Open Philanthropy

Areas of expertise: AI for innovation, Automation & inequality, Capability advances, Forecasts and surveys