Large language model safety
Ethan Perez is a research scientist at Anthropic, where he leads the adversarial robustness team. His work focuses on large language model safety, alignment, and risk. He is known for co-developing retrieval-augmented generation (RAG) and for his AI safety research publications.