Sam Bowman
@sleepinyourhat
AI SAFETY: AI alignment + LLMs at Anthropic. On leave from NYU. Views not employers'. No relation to @s8mb. Into @givingwhatwecan.
Amanda Askell
@AmandaAskell
AI SAFETY: Philosopher & ethicist trying to make AI be good @AnthropicAI. Personal account. All opinions come from my training data.
Joshua Achiam
@jachiam0
AI SAFETY: Freedom, flourishing, and abundance. Chief Futurist @openai. Main author of http://spinningup.openai.com
Catherine Olsson
@catherineols
RESEARCH ENGINEER: Hanging out with Claude, improving its behavior, and building tools to support that @AnthropicAI 😁 prev: @open_phil @googlebrain @openai (@microcovid)
Riley Goodside
@goodside
ENGINEER: Screenshots of chatbots since 2022. Formerly: Google DeepMind, Scale.
Irwan Bello
@IrwanBello
RESEARCH ENGINEER: Supercomputers & Friends, AI & Science. Founding team & Research Lead at @reflection_ai. Past: @OpenAI, Founding team @character_ai, Google Brain
Joe Carlsmith
@jkcarlsmith
AI SAFETY: Philosophy, futurism, AI. Working on Claude's values @AnthropicAI. Formerly @coeff_giving. Opinions my own.
Victoria Krakovna
@vkrakovna
AI SAFETY: Research scientist in AI alignment at Google DeepMind. Co-founder of Future of Life Institute @FLI_org. Views are my own and do not represent GDM or FLI.
Owain Evans
@OwainEvans_UK
AI SAFETY: Runs an AI Safety research group in Berkeley (Truthful AI) + Affiliate at UC Berkeley. Past: Oxford Uni, TruthfulQA, Reversal Curse. Prefer email to DM.
Ajeya Cotra
@ajeya_cotra
AI SAFETY: Helping the world prepare for extremely powerful AI. Risk assessment @METR_evals. Writing at Planned Obsolescence (about AI), Good Bones (about whatever).
Helen Toner
@hlntnr
POLICY: AI, national security, China. Part of the founding team at @CSETGeorgetown (opinions my own). Author of Rising Tide on Substack: http://helentoner.substack.com
Buck Shlegeris
@bshlgrs
AI SAFETY: CEO @Redwood Research (@redwood_ai), working on technical research to reduce catastrophic risk from AI misalignment. bshlegeris@gmail.com
⿻ Andrew Trask
@iamtrask
AI SAFETY: i teach AI on X. building AI with attribution-based control @openminedorg, @GoogleDeepMind, @OxfordUni, @UN, @GovAIOrg, and @CFR_org
andy jones
@andy_l_jones
RESEARCH ENGINEER: engineering & research at anthropic. i don't check twitter DMs. email me!
rishi
@RishiBommasani
POLICY: Societal/economic impacts of AI; AI policy & governance @StanfordHAI. Stanford CS PhD w/ @percyliang @jurafsky. Cornell CS undergrad w/ @clairecardie
Katja Grace 🔍
@KatjaGrace
AI SAFETY: Thinking about AI destroying the world at http://aiimpacts.org and everything at http://worldspiritsockpuppet.substack.com. DM or email for media requests.
William MacAskill
@willmacaskill
AI SAFETY: Consider donating 10% to effective charities: http://www.givingwhatwecan.org/pledge Or a career for impact: http://80000hours.org My research: http://forethought.org
Rob Wiblin
@robertwiblin
CREATOR: Host of the 80,000 Hours Podcast. Exploring the inviolate sphere of ideas one interview at a time: http://80000hours.org/podcast/
Alex Tamkin
@AlexTamkin
AI SAFETY: machine learning, science & society @AnthropicAI | recently: Clio, Anthropic Economic Index, Claude Artifacts | prev: phd @StanfordAILab, @stanfordnlp
Toby Ord
@tobyordoxford
AI SAFETY: Senior Researcher at Oxford University. Author of The Precipice: Existential Risk and the Future of Humanity.
Yo Shavit
@yonashav
AI SAFETY: policy for v smart things @openai. Past: CS PhD @HarvardSEAS/@SchmidtFutures/@MIT_CSAIL. Tweets my own; on my head be it.
Allan Dafoe
@AllanDafoe
AI SAFETY: AGI governance: navigating the transition to beneficial AGI (Google DeepMind)
Rosie Campbell
@RosieCampbell
AI SAFETY: Forever expanding my nerd/bimbo Pareto frontier. AI welfare 🤝 AI safety. Managing Director @eleosai, Ex-OpenAI, 2024 @rootsofprogress fellow
Adam Gleave
@ARGleave
AI SAFETY: CEO & co-founder @FARAIResearch non-profit | PhD from @berkeley_ai | Alignment & robustness | on bsky as http://gleave.me
Ian Hogarth
@soundboy
AI SAFETY: investor at @pluralplatform, chair UK AI Security Institute, co-founder @songkick
Gretchen Krueger
@GretchenMarina
AI SAFETY: Over at Bluesky. Researcher affiliated w @BKCHarvard, Volunteer @evitable. Previously @openai @ainowinstitute. Views mine. #justdontbuildagi #talktomenotchatgpt
Daniel Eth (yes, Eth is my actual last name)
@daniel_271828
AI SAFETY: Researching effects of automated AI R&D | pro-America, pro-tech, & pro-AI safety
Irene Solaiman
@IreneSolaiman
POLICY: ai stuff @huggingface 🤗 founded @evaluatingevals. views=mine. former: @OpenAI @Harvard. aspiring ukulele-singer
xuan (ɕɥɛn / sh-yen)
@xuanalogue
AI SAFETY: Assistant Professor at NUS. Scaling cooperation for an increasingly automated future. PhD @ MIT ProbComp / CoCoSci. Pronouns: 祂/伊
Thore Graepel
@ThoreG
RESEARCHER: Post-AGI research @Deepmind. Prev: AI & ML @Altos_Labs. @DeepMind & @MSFTResearch. #AlphaGo & #TrueSkill, family, music, meditation, Patterns of Thought
Iason Gabriel
@IasonGabriel
RESEARCHER: Philosopher & Research Scientist @GoogleDeepMind | AGI & Society Lead | #TIME100AI | All views are my own
Steven Adler
@sjgadler
AI SAFETY: AI safety researcher (ex-OpenAI: danger evals, AGI readiness, etc.), writing at https://clear-eyed.ai