Jack Clark
@jackclarkSF
AI SAFETY: @AnthropicAI, ONE AI OECD, co-chair @indexingai, writer @ http://importai.net. Past: @openai, @business, @theregister. Neural nets, distributed systems, weird futures
David Duvenaud
@DavidDuvenaud
AI SAFETY: Machine learning prof @UofT. Former team lead at Anthropic. Working on generative models, inference, & latent structure.
Dwarkesh Patel
@dwarkesh_sp
CREATOR: Host of @dwarkeshpodcast https://www.youtube.com/DwarkeshPatel https://open.spotify.com/show/4JH4tybY1zX6e5hjCwU6gF https://apple.co/3ujLQkZ
Joe Carlsmith
@jkcarlsmith
AI SAFETY: Philosophy, futurism, AI. Working on Claude's values @AnthropicAI. Formerly @coeff_giving. Opinions my own.
augustus odena
@gstsdn
RESEARCHER: Something new. Previously: AI research at TBD Labs / Meta; cofounder at @AdeptAILabs; invented Scratchpad / Chain-of-Thought; Google Brain
Owain Evans
@OwainEvans_UK
AI SAFETY: Runs an AI Safety research group in Berkeley (Truthful AI) + Affiliate at UC Berkeley. Past: Oxford Uni, TruthfulQA, Reversal Curse. Prefer email to DM.
Ajeya Cotra
@ajeya_cotra
AI SAFETY: Helping the world prepare for extremely powerful AI. Risk assessment @METR_evals. Writing at Planned Obsolescence (about AI), Good Bones (about whatever).
Dylan Hadfield-Menell
@dhadfieldmenell
AI SAFETY: Associate Prof @MITEECS working on value (mis)alignment in AI systems; Safety & Alignment Advisor at http://Character.AI; @dhadfieldmenell@bsky.social; he/him
Buck Shlegeris
@bshlgrs
AI SAFETY: CEO at Redwood Research (@redwood_ai), working on technical research to reduce catastrophic risk from AI misalignment. bshlegeris@gmail.com
Stephanie Chan
@scychan_brains
AI SAFETY: Staff Research Scientist at DeepMind. Artificial & biological brains 🤖 🧠 Societal impacts of AI + Science of AI. Views are my own.
⿻ Andrew Trask
@iamtrask
AI SAFETY: I teach AI on X. Building AI with attribution-based control @openminedorg, @GoogleDeepMind, @OxfordUni, @UN, @GovAIOrg, and @CFR_org
rishi
@RishiBommasani
POLICY: Societal/economic impacts of AI; AI policy & governance @StanfordHAI. Stanford CS PhD w/ @percyliang @jurafsky. Cornell CS undergrad w/ @clairecardie
Evan Hubinger
@EvanHub
AI SAFETY: Alignment Stress-Testing lead @AnthropicAI. Opinions my own. Previously: MIRI, OpenAI, Google, Yelp, Ripple. (he/him/his)
William MacAskill
@willmacaskill
AI SAFETY: Consider donating 10% to effective charities: http://www.givingwhatwecan.org/pledge Or a career for impact: http://80000hours.org My research: http://forethought.org
Rob Wiblin
@robertwiblin
CREATOR: Host of the 80,000 Hours Podcast. Exploring the inviolate sphere of ideas one interview at a time: http://80000hours.org/podcast/
Josh Clymer
@joshua_clymer
AI SAFETY: Turtle hatchling trying to make it to the ocean. Preparing for my automation @OpenAI. Contact via email: jclymer@openai.com.
Alex Tamkin
@AlexTamkin
AI SAFETY: Machine learning, science & society @AnthropicAI | recently: Clio, Anthropic Economic Index, Claude Artifacts | prev: PhD @StanfordAILab, @stanfordnlp
Connor Leahy
@NPCollapse
POLICY: US Director @ControlAI. Leave me anonymous feedback: http://bit.ly/3RZbu7x I don't know how to save the world, but dammit I'm gonna try.
Toby Ord
@tobyordoxford
AI SAFETY: Senior Researcher at Oxford University. Author of The Precipice: Existential Risk and the Future of Humanity.
Allan Dafoe
@AllanDafoe
AI SAFETY: AGI governance: navigating the transition to beneficial AGI (Google DeepMind)
Rosie Campbell
@RosieCampbell
AI SAFETY: Forever expanding my nerd/bimbo Pareto frontier. AI welfare 🤝 AI safety. Managing Director @eleosai, ex-OpenAI, 2024 @rootsofprogress fellow
Brian Christian
@brianchristian
AI SAFETY: Researcher at the University of Oxford & UC Berkeley. Author of The Alignment Problem, Algorithms to Live By (w/ Tom Griffiths), and The Most Human Human.
Cas (Stephen Casper)
@StephenLCasper
AI SAFETY: AI safeguards & governance research. PhD student @MIT_CSAIL (minor in Public Policy) and Fellow at @BKCHarvard. Fmr. @AISecurityInst. https://stephencasper.com/
Adrien Ecoffet
@AdrienLE
RESEARCH ENGINEER: Trying to make AGI go well. Researcher at @openai. Views my own.
Daniel Eth (yes, Eth is my actual last name)
@daniel_271828
AI SAFETY: Researching effects of automated AI R&D | pro-America, pro-tech, & pro-AI safety
Matt Clifford
@matthewclifford
FOUNDER: Co-founder @join_ef; Chair @ARIA_Research; Make Britain Rich Again.
Steven Adler
@sjgadler
AI SAFETY: AI safety researcher (ex-OpenAI: danger evals, AGI readiness, etc.), writing at https://clear-eyed.ai
Eli Lifland
@eli_lifland
AI SAFETY: AI forecasting and governance @AI_Futures_. Co-author of AI 2027 and the AI Futures Model. Also @aidigest_, @SamotsvetyF. Prev @oughtinc
Marius Hobbhahn
@MariusHobbhahn
AI SAFETY: CEO at Apollo Research @apolloaievals. Prev: ML PhD with Philipp Hennig & AI forecasting @EpochAIResearch
Peter Wildeford🇺🇸🚀
@peterwildeford
POLICY: Head of Policy @AIPolicyNetwork. Ohioan. World champion forecaster. Former data scientist. Protecting liberty and prosperity in the age of superintelligence.
Toby Shevlane
@tshevl
FOUNDER: @_Mantic_AI cofounder & CEO, on a mission to solve forecasting. Prev: research scientist @GoogleDeepMind, PhD at @UniofOxford.
Benjamin Todd
@ben_j_todd
FOUNDER: Founder @80000Hours. Writing about AI, careers and doing good 🦑 New book with Penguin: http://80000hours.org/book
Samuel Hammond 🦉
@hamandcheese
POLICY: Chief economist + AI Policy Director @joinFAI. Nonresident fellow @NiskanenCenter. Pluralist. 'The world is second best, at best.' | samuel@thefai.org