Sam Bowman
@sleepinyourhat
AI SAFETY: AI alignment + LLMs at Anthropic. On leave from NYU. Views not employers'. No relation to @s8mb. Into @givingwhatwecan.
Yee Whye Teh
@yeewhye
ACADEMIC: Find me @yeewhye@sigmoid.social. Professor at @OxCSML, @oxfordstats and Research Director at @GoogleDeepMind. All opinions are my own.
Zico Kolter
@zicokolter
ACADEMIC: Professor and Head of Machine Learning Department at @CarnegieMellon. Board member @OpenAI and @Qualcomm. Chief Scientist @GraySwanAI.
Riley Goodside
@goodside
ENGINEER: Screenshots of chatbots since 2022. Formerly: Google DeepMind, Scale.
Yarin
@yaringal
ACADEMIC: Professor of Machine Learning, University of Oxford. @OATML_Oxford Group Leader. Expert Advisor to AISI. "One of the top machine-learning people" - Tim Berners-Lee
Brandon Amos
@brandondamos
RESEARCHER: 🧙 RL @Reflection_AI. Past: @MetaAi @GoogleDeepmind @SCSatCMU @Cornell_Tech
Geoffrey Irving
@geoffreyirving
AI SAFETY: Chief Scientist at the UK AI Security Institute (AISI). Previously DeepMind, OpenAI, Google Brain, etc.
kipply
@kipperrii
RESEARCH ENGINEER: "uncanny ability to be mentioned in every slack thread about code that's mysteriously breaking" - claude
Victoria Krakovna
@vkrakovna
AI SAFETY: Research scientist in AI alignment at Google DeepMind. Co-founder of Future of Life Institute @FLI_org. Views are my own and do not represent GDM or FLI.
Owain Evans
@OwainEvans_UK
AI SAFETY: Runs an AI Safety research group in Berkeley (Truthful AI) + Affiliate at UC Berkeley. Past: Oxford Uni, TruthfulQA, Reversal Curse. Prefer email to DM.
Ajeya Cotra
@ajeya_cotra
AI SAFETY: Helping the world prepare for extremely powerful AI. Risk assessment @METR_evals. Writing at Planned Obsolescence (about AI), Good Bones (about whatever).
Dylan Hadfield-Menell
@dhadfieldmenell
AI SAFETY: Associate Prof @MITEECS working on value (mis)alignment in AI systems; Safety & Alignment Advisor at http://Character.AI; @dhadfieldmenell@bsky.social; he/him
Theo Weber
@theophaneweber
RESEARCH ENGINEER: Research scientist @ DeepMind; currently working on thinking/reasoning in Gemini.
Buck Shlegeris
@bshlgrs
AI SAFETY: CEO @ Redwood Research (@redwood_ai), working on technical research to reduce catastrophic risk from AI misalignment. bshlegeris@gmail.com
Stephanie Chan
@scychan_brains
AI SAFETY: Staff Research Scientist at DeepMind. Artificial & biological brains 🤖 🧠 Societal impacts of AI + Science of AI. Views are my own.
rishi
@RishiBommasani
POLICY: Societal/economic impacts of AI; AI policy & governance @StanfordHAI. Stanford CS PhD w/ @percyliang @jurafsky. Cornell CS undergrad w/ @clairecardie.
Katja Grace 🔍
@KatjaGrace
AI SAFETY: Thinking about AI destroying the world at http://aiimpacts.org and everything at http://worldspiritsockpuppet.substack.com. DM or email for media requests.
Alexander Wei
@alexwei_
RESEARCH ENGINEER: Reasoning @OpenAI. Co-built CICERO @MetaAI | @Berkeley_AI PhD '23 | @Harvard '20
Josh Clymer
@joshua_clymer
AI SAFETY: Turtle hatchling trying to make it to the ocean. Preparing for my automation @OpenAI. Contact via email: jclymer@openai.com.
David Bau
@davidbau
AI SAFETY: Computer Science Professor at Northeastern, Ex-Googler. Believes AI should be transparent. @davidbau@sigmoid.social @davidbau.bsky.social http://baulab.info
Adam Gleave
@ARGleave
AI SAFETY: CEO & co-founder @FARAIResearch non-profit | PhD from @berkeley_ai | Alignment & robustness | on bsky as http://gleave.me
Ian Hogarth
@soundboy
AI SAFETY: Investor at @pluralplatform, chair UK AI Security Institute, co-founder @songkick.
Summer Yue
@summeryue0
AI SAFETY: Safety and alignment at Meta Superintelligence. Prev: VP of Research at Scale AI, research at Google DeepMind / Brain (Gemini, LaMDA, RL / TFAgents, AlphaChip).
Adrien Ecoffet
@AdrienLE
RESEARCH ENGINEER: Trying to make AGI go well. Researcher at @openai. Views my own.
Gretchen Krueger
@GretchenMarina
AI SAFETY: Over at Bluesky. Researcher affiliated w/ @BKCHarvard, Volunteer @evitable. Previously @openai @ainowinstitute. Views mine. #justdontbuildagi #talktomenotchatgpt
Daniel Eth (yes, Eth is my actual last name)
@daniel_271828
AI SAFETY: Researching effects of automated AI R&D | pro-America, pro-tech, & pro-AI safety
Matt Clifford
@matthewclifford
FOUNDER: Co-founder @join_ef; Chair @ARIA_Research; Make Britain Rich Again.
Steven Adler
@sjgadler
AI SAFETY: AI safety researcher (ex-OpenAI: danger evals, AGI readiness, etc), writing at https://clear-eyed.ai
Eli Lifland
@eli_lifland
AI SAFETY: AI forecasting and governance @AI_Futures_. Co-author of AI 2027 and the AI Futures Model. Also @aidigest_, @SamotsvetyF. Prev @oughtinc
Marius Hobbhahn
@MariusHobbhahn
AI SAFETY: CEO at Apollo Research @apolloaievals. Prev: ML PhD with Philipp Hennig & AI forecasting @EpochAIResearch
Tristan Harris
@tristanharris
AI SAFETY: Co-founder @HumaneTech_ • Former Google Design Ethicist • Featured in Netflix's @SocialDilemma_ • #TIME100 for AI • Host #YourUndividedAttention podcast
Toby Shevlane
@tshevl
FOUNDER: @_Mantic_AI cofounder & CEO, on a mission to solve forecasting. Prev: research scientist @GoogleDeepMind, PhD at @UniofOxford.
Sayash Kapoor
@sayashk
RESEARCHER: I tweet about AI agents, AI evals, AI for science. AI as Normal Technology: http://bit.ly/ai-nt Book: http://bit.ly/ai-so
Chris Painter
@ChrisPainterYup
POLICY: President @METR_Evals, evals accelerationist, working hard on AGI preparedness.
Jason Phang
@zhansheng
RESEARCHER: Foundations at @OpenAI. PhD @NYUDataScience, @AiEleuther, 🇸🇬. Prev: @Google, @Microsoft
Saffron Huang
@saffronhuang
RESEARCHER: How shall we live together? Societal impacts researcher @AnthropicAI • ex @GoogleDeepMind @AISecurityInst • @collect_intel co-founder • views mine