Chenyang An
Applied Scientist @ Amazon AWS
New York City
email: cya.portfolio at gmail dot com
I’m an Applied Scientist in the Amazon AWS Automated Reasoning Group, working on LLM post-training, reasoning, and verification.
I interned at Microsoft Research in Seattle during the summer of 2024, focusing on improving the training efficiency of large language models (LLMs) on reasoning tasks. I also worked part-time at Scale AI as an AI Consultant, contributing to the development of LLM-based web agents and scalable verification systems for reasoning data. In Spring 2025, I joined Amazon AWS Neurosymbolic as an Applied Scientist Intern, where I designed a new reinforcement learning pipeline incorporating a diversity-based reward to encourage the generation of varied chains of thought (CoTs), along with its supporting data preprocessing framework.
If you are interested in any of the topics above, feel free to drop me an email!
news
| Feb 25, 2026 | I’m happy to release an agent pipeline based on Claude Code that helps proofread the LaTeX source of papers and books! Check https://github.com/chenyang-an/proofread for details! |
|---|---|
| Aug 30, 2025 | I’m excited to share that I will join Amazon AWS Automated Reasoning Group as an Applied Scientist! |
| Apr 01, 2025 | I’m excited to share that our paper, “The Price of Format: Diversity Collapse in LLMs”, has been accepted to Empirical Methods in Natural Language Processing (EMNLP) 2025! In this work, we find that structured templates in instruction-tuned LLMs cause diversity collapse, limiting open-ended generation even under high-temperature sampling, and we systematically evaluate this effect across tasks to show the trade-off between alignment, task performance, and output diversity. |
| Apr 01, 2025 | I’m excited to share that I’ll be joining Amazon AWS Neurosymbolic as an Applied Scientist Intern, where I’ll be working on LLM reasoning in both natural and formal language! |
| Jan 22, 2025 | I’m thrilled to see that our ACL 2024 publication, “Learn from failure: Fine-tuning LLMs with Trial-and-Error Data for Intuitionistic Propositional Logic Proving”, has been featured by Neptunes News Agency! News link. |