Jan Leike
@janleike
AI SAFETY
AI research @AnthropicAI. Previously OpenAI & DeepMind. Optimizing for a post-AGI future where humanity flourishes. Opinions aren't my employer's.
Sam Bowman
@sleepinyourhat
AI SAFETY
AI alignment + LLMs at Anthropic. On leave from NYU. Views not employers'. No relation to @s8mb. Into @givingwhatwecan.
David Duvenaud
@DavidDuvenaud
AI SAFETY
Machine learning prof @UofT. Former team lead at Anthropic. Working on generative models, inference, & latent structure.
Matthew Johnson
@SingularMattrix
RESEARCH ENGINEER
Researcher at Google Brain. I work on JAX (https://github.com/google/jax).
Riley Goodside
@goodside
ENGINEER
Screenshots of chatbots since 2022. Formerly: Google DeepMind, Scale.
Leo Gao
@nabla_theta
AI SAFETY
working on AGI alignment. prev: GPT-Neo, the Pile, LM evals, RL overoptimization, scaling SAEs to GPT-4, interp via circuit sparsity. EleutherAI cofounder.
Victoria Krakovna
@vkrakovna
AI SAFETY
Research scientist in AI alignment at Google DeepMind. Co-founder of Future of Life Institute @FLI_org. Views are my own and do not represent GDM or FLI.
Owain Evans
@OwainEvans_UK
AI SAFETY
Runs an AI Safety research group in Berkeley (Truthful AI) + Affiliate at UC Berkeley. Past: Oxford Uni, TruthfulQA, Reversal Curse. Prefer email to DM.
Kevin Roose
@kevinroose
CREATOR
NYT tech columnist, Hard Fork co-host, high-perplexity language model. Writing a book about AGI.
Neel Nanda
@NeelNanda5
AI SAFETY
Mechanistic Interpretability lead at DeepMind. Formerly @AnthropicAI, independent. In this to reduce AI X-risk. Neural networks can be understood, let's go do it!
Dylan Hadfield-Menell
@dhadfieldmenell
AI SAFETY
Associate Prof @MITEECS working on value (mis)alignment in AI systems; Safety & Alignment Advisor at http://Character.AI; @dhadfieldmenell@bsky.social; he/him
Theo Weber
@theophaneweber
RESEARCH ENGINEER
Research scientist @ DeepMind; currently working on thinking/reasoning in Gemini.
Rishabh Agarwal
@agarwl_
RESEARCH ENGINEER
Reinforcement Learner @periodiclabs, Adjunct Prof at McGill. Ex Meta, DeepMind, Brain, @iitbombay. NeurIPS Best Paper, On-Policy Distillation
Christopher Potts
@ChrisGPotts
ACADEMIC
Stanford Professor of Linguistics and, by courtesy, of Computer Science. Member of technical staff @stanfordnlp and @StanfordAILab. Co-founder @ Bigspin AI.
Eric Zelikman
@ericzelikman
FOUNDER
cofounder & ceo @humansand - building ai for humans // was lgtm-ing @xAI, phd-ing @stanford
Stephanie Chan
@scychan_brains
AI SAFETY
Staff Research Scientist at DeepMind. Artificial & biological brains 🤖 🧠 Societal impacts of AI + Science of AI. Views are my own.
Scott Gray
@scottgray76
RESEARCH ENGINEER
GPU Geek at @OpenAI. I have a long-standing interest in neuroscience and its application to machine learning. He/Him.
Andrew Lampinen
@AndrewLampinen
RESEARCHER
Interested in cognition and artificial intelligence. MTS at @AnthropicAI. Previously @DeepMind, cognitive science @StanfordPsych. Tweets are mine.
rishi
@RishiBommasani
POLICY
Economic impacts of AI; AI policy & governance @StanfordHAI. Previous: Stanford CS PhD w/ @percyliang @jurafsky, Cornell CS
Rohan Pandey
@khoomeik
RESEARCH ENGINEER
descending cross-entropy to ascend entropy @PeriodicLabs || prev research @OpenAI @CarnegieMellon '23
dr. jack morris
@jxmnop
RESEARCHER
research // language models, information theory, science of AI // formerly @cornell
Alex Tamkin
@AlexTamkin
AI SAFETY
machine learning, science & society @AnthropicAI | recently: Clio, Anthropic Economic Index, Claude Artifacts | prev: phd @StanfordAILab, @stanfordnlp
Josh Clymer
@joshua_clymer
AI SAFETY
Turtle hatchling trying to make it to the ocean. Preparing for my automation @OpenAI. Contact via email: jclymer@openai.com.
Teortaxes▶️ (DeepSeek 推特🐋铁粉 2023 – ∞)
@teortaxesTex
CREATOR
We're in a race. It's not USA vs China but humans and AGIs vs ape power centralization. @deepseek_ai stan #1, 2023–Deep Time. «C'est la guerre.» ("It's war.") ®1
Kory Mathewson
@korymath
RESEARCHER
@GoogleDeepMind generative AI models + agents | get great tech into the hands of great creative people
Jimmy Apples 🍎/acc
@apples_jimmy
CREATOR
Wagmi. 2025. As featured in Bloomberg. As quoted by Nobel Prize winner Demis Hassabis. As mentioned on the Lex Fridman Podcast💺
Allan Dafoe
@AllanDafoe
AI SAFETY
AGI governance: navigating the transition to beneficial AGI (Google DeepMind)
Alex Mordvintsev
@zzznah
RESEARCHER
Mad Scientist, DeepDream creator. Designing Self-Organising Systems and Programmable Artificial Life. http://youtube.com/zzznah
Alexia Jolicoeur-Martineau
@jm_alexia
RESEARCHER
AI Researcher 🐱💻 2025 ARC Prize Winner. I build generative AI for images, videos, text, tabular data, weights, molecules, and video games.
David Bau
@davidbau
AI SAFETY
Computer Science Professor at Northeastern, Ex-Googler. Believes AI should be transparent. @davidbau@sigmoid.social @davidbau.bsky.social http://baulab.info
Zhiqing Sun
@EdwardSun0909
RESEARCH ENGINEER
Lead Agents @Meta MSL TBD Lab. previously posttraining/agent research @OpenAI. CS PhD @LTIatCMU
Yu Bai
@yubai01
RESEARCH ENGINEER
Training Accelerations @OpenAI. Previously @SFResearch, PhD @Stanford.
Pavel Izmailov
@Pavel_Izmailov
ACADEMIC
Researcher @AnthropicAI 🤖 Assistant Professor @nyuniversity 🏙️ Previously @OpenAI #StopWar 🇺🇦
Alexey Guzey
@alexeyguzey
CREATOR
http://guzey.com, http://newscience.org, @openai; in pursuit of a just, beautiful future.
Adam Gleave
@ARGleave
AI SAFETY
CEO & co-founder @FARAIResearch non-profit | PhD from @berkeley_ai | Alignment & robustness | on bsky as http://gleave.me
Brian Christian
@brianchristian
AI SAFETY
Researcher at the University of Oxford & UC Berkeley. Author of The Alignment Problem, Algorithms to Live By (w. Tom Griffiths), and The Most Human Human.
Ian Hogarth
@soundboy
AI SAFETY
investor at @pluralplatform, chair of the UK AI Security Institute, co-founder @songkick