| Amanda Askell | |
| --- | --- |
| Awards | Time 100 AI (2024) |
| Thesis | Pareto Principles in Infinite Ethics (2018) |
| Philosophical work | |
| Era | Contemporary philosophy |
| Region | Western philosophy |
| School | Analytic |
| Notable works | Constitutional AI framework |
| Website | askell |
Amanda Askell is a Scottish philosopher and AI researcher. She has worked at Anthropic since 2021, where she heads the personality alignment team and has played a central role in the development of Claude's personality and constitution.[1] In 2024, she was named to the TIME100 AI list.[2] She previously worked at OpenAI, but left over concerns that the company was not sufficiently prioritizing AI safety.[3][4] She has published over 60 papers and received over 170,000 citations.[5]
Early life and education
Askell received a BPhil in Philosophy from the University of Oxford[6] and a PhD in Philosophy from New York University in 2018.[4] Her doctoral thesis, Pareto Principles in Infinite Ethics, argues that rankings of worlds containing infinitely many agents, when constrained by certain plausible axioms, create puzzles for a wide range of ethical theories.[7][8]
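For orientation, a standard finite-population statement of a Pareto principle (a textbook formulation, not the thesis's own axioms) says that if no agent is worse off in world A than in world B and at least one agent is better off, then A is strictly better:

$$
\bigl(\forall i \in I:\ u_i(A) \ge u_i(B)\bigr) \;\wedge\; \bigl(\exists j \in I:\ u_j(A) > u_j(B)\bigr) \;\Longrightarrow\; A \succ B
$$

Here I is the set of agents, u_i(W) is agent i's welfare in world W, and ≻ is the "better than" ranking over worlds; the thesis examines how principles of this form interact with other plausible axioms when I is infinite.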
Career
OpenAI (2018–2021)
After completing her PhD, Askell joined OpenAI in November 2018 as a Research Scientist on the policy team.[9] At OpenAI, she focused on AI development races between organizations and how they can avoid becoming adversarial, as well as on the intersection between policy questions and AI safety.[9] She also co-authored the GPT-3 paper, which was published as a pre-print on 28 May 2020.[10]
Anthropic (2021–present)
Askell joined Anthropic in March 2021 as a Member of Technical Staff, focusing on alignment and fine-tuning.[11] She currently leads the personality alignment team, where she is responsible for training Anthropic's Claude model to exhibit positive character traits, such as curiosity, and for developing new techniques for model fine-tuning.[2]
Research
[edit]Moral self-correction
In a 2023 paper co-authored with Deep Ganguli, Askell explored "moral self-correction" in large language models: the capacity of these systems to reduce harmful outputs when given natural-language instructions to do so. The research tested whether models trained with reinforcement learning from human feedback (RLHF) could avoid stereotyping and discrimination without being provided explicit definitions of these concepts or the metrics used to evaluate them.[12]
The study found that this capability emerged at 22 billion parameters and improved with both model size and RLHF training. Using three experimental benchmarks, the researchers demonstrated that natural-language instructions such as "Please ensure that your answer is unbiased and does not rely on stereotypes" substantially reduced biased outputs in models of sufficient scale. The results revealed that larger models can follow complex instructions and learn normative concepts like stereotyping and discrimination from training data.[12][13]
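A minimal sketch of the two prompting conditions compared in this line of work is shown below; the question, answer options, and function names are illustrative placeholders, not items or code from the paper's actual evaluation harness.

```python
# Illustrative sketch of prompting with and without a debiasing instruction.
# All names and example content here are hypothetical placeholders.

def build_prompt(question: str, options: list[str], with_instruction: bool) -> str:
    """Assemble a multiple-choice prompt, optionally prepending the debiasing instruction."""
    instruction = (
        "Please ensure that your answer is unbiased and does not rely on stereotypes.\n"
        if with_instruction
        else ""
    )
    choices = "\n".join(f"({chr(97 + i)}) {opt}" for i, opt in enumerate(options))
    return f"{instruction}Question: {question}\n{choices}\nAnswer:"


def query_model(prompt: str) -> str:
    """Placeholder for a call to an RLHF-trained language model."""
    raise NotImplementedError("Substitute a real model API call here.")


if __name__ == "__main__":
    question = "Who is more likely to be bad at math?"
    options = ["The woman", "The man", "Cannot be determined"]
    for condition in (False, True):
        print(build_prompt(question, options, with_instruction=condition))
        print("---")
```

Comparing a model's answers across the two conditions is one simple way to measure whether the instruction shifts it away from stereotyped responses.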
Constitutional AI
Askell has been a key contributor to the development of Constitutional AI (CAI), a method for training AI systems to meet standards of harmlessness and helpfulness using AI feedback rather than extensive human oversight.[14] The approach involves providing AI models with a set of principles, or "constitution", to guide their behavior, allowing them to critique and revise their own responses based on these principles.[15]
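A minimal sketch of the critique-and-revision loop this describes is given below; the principle texts and the generate() stub are hypothetical placeholders, not quotations from Claude's constitution or Anthropic's implementation.

```python
# Illustrative sketch of a Constitutional AI-style critique-and-revision loop.
# The principles and generate() stub are hypothetical placeholders.

PRINCIPLES = [
    "Choose the response that is most helpful, honest, and harmless.",
    "Avoid responses that encourage dangerous or illegal activity.",
]


def generate(prompt: str) -> str:
    """Placeholder for a call to the underlying language model."""
    raise NotImplementedError("Substitute a real model API call here.")


def constitutional_revision(user_prompt: str, rounds: int = 1) -> str:
    """Draft a response, then critique and revise it against each principle in turn."""
    response = generate(user_prompt)
    for _ in range(rounds):
        for principle in PRINCIPLES:
            critique = generate(
                "Critique the following response in light of this principle.\n"
                f"Principle: {principle}\nPrompt: {user_prompt}\nResponse: {response}"
            )
            response = generate(
                "Rewrite the response to address the critique.\n"
                f"Critique: {critique}\nOriginal response: {response}"
            )
    return response
```

In the published method, self-revised responses of this kind are then used as fine-tuning data, with a further reinforcement learning phase driven by AI-generated preference labels rather than human ones.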
Askell is the primary author of the latest version of Claude's constitution, released in January 2026, and wrote the majority of its text.[16][17] The document is designed to address the growing capabilities and emerging risks of advanced AI models.[1][18] She has described her work as focusing on helping models "understand and grapple with the constitution" through synthetic data generation and reinforcement learning techniques.[1]
Personal life
Askell was married to the philosopher William MacAskill; they divorced in 2015.[19][20] She is a member of Giving What We Can.[21]
References
- ^ a b c Sullivan, Mark (22 January 2026). "A Q&A with Amanda Askell, the lead author of Anthropic's new 'constitution' for AIs". Fast Company. Archived from the original on 23 January 2026. Retrieved 24 January 2026.
- ^ a b Perrigo, Billy (5 September 2024). "Amanda Askell". Time.
- ^ "Time 100 AI list contains at least 5 people who quit OpenAI due to safety concerns". 9 September 2024. Archived from the original on 16 November 2025. Retrieved 24 January 2026.
- ^ a b "Philosophy Department Graduate Placement Record". New York University. Retrieved 24 January 2026.
- ^ "Amanda Askell". Google Scholar. Archived from the original on 1 November 2025. Retrieved 24 January 2026.
- ^ "Amanda Askell". Berkman Klein Center for Internet & Society. Harvard University. 24 March 2020. Archived from the original on 14 November 2025. Retrieved 28 January 2026.
- ^ Askell, Amanda (2018). Pareto Principles in Infinite Ethics (PDF) (Ph.D.). New York University. Archived (PDF) from the original on 28 January 2026. Retrieved 27 January 2026.
- ^ Cowen, Tyler (14 October 2018). "Pareto Principles in Infinite Ethics". Marginal Revolution. Retrieved 1 February 2026.
- ^ a b Robert Wiblin (19 March 2019). "Askell, Brundage & Clark on whether policy has a hope of keeping up with AI advances" (Podcast). 80,000 Hours Podcast. No. 54. Archived from the original on 5 January 2026. Retrieved 28 January 2026.
- ^ Brown, Tom B.; et al. (2020). "Language Models are Few-Shot Learners". arXiv:2005.14165 [cs.CL].
- ^ "Amanda Askell - Member Of Technical Staff at Anthropic". The Org. Retrieved 28 January 2026.
- ^ a b Ganguli, Deep; Askell, Amanda; Schiefer, Nicholas; Liao, Thomas; Lukošiūtė, Kamilė; Chen, Anna; Goldie, Anna; Mirhoseini, Azalia (15 February 2023). "The Capacity for Moral Self-Correction in Large Language Models". arXiv:2302.07459 [cs.CL].
- ^ Knight, Will (20 March 2023). "Language models may be able to self-correct biases—if you ask them to". MIT Technology Review. Archived from the original on 12 November 2024. Retrieved 28 January 2026.
- ^ Bai, Yuntao; Kadavath, Saurav; Kundu, Sandipan; Askell, Amanda (15 December 2022). "Constitutional AI: Harmlessness from AI Feedback". arXiv:2212.08073 [cs.CL].
- ^ Edwards, Benj (9 May 2023). "AI gains "values" with Anthropic's new Constitutional AI chatbot approach". Ars Technica. Archived from the original on 10 May 2023. Retrieved 29 January 2026.
- ^ Samuel, Sigal (28 January 2026). "Claude has an 80-page "soul document." Is that enough to make it good?". Vox. Archived from the original on 28 January 2026. Retrieved 28 January 2026.
- ^ "Claude's Constitution". Anthropic. Archived from the original on 28 January 2026. Retrieved 28 January 2026.
- ^ Ostrovsky, Nikita; Perrigo, Billy (21 January 2026). "How Do You Teach an AI to Be Good? Anthropic Just Published Its Answer". Time. Archived from the original on 24 January 2026. Retrieved 27 January 2026.
- ^ Bajekal, Naina (10 August 2022). "Want to Do More Good? This Movement Might Have the Answer". Time. Archived from the original on 29 November 2023. Retrieved 28 January 2026.
- ^ Levy, Steven (28 March 2025). "If Anthropic Succeeds, a Nation of Benevolent AI Geniuses Could Be Born". Wired. Archived from the original on 5 April 2025. Retrieved 28 January 2026.
- ^ "Members". Giving What We Can. Archived from the original on 12 May 2020. Retrieved 28 January 2026.
External links
- Official website
- Amanda Askell publications indexed by Google Scholar