Chenyang An
Applied Scientist @ Amazon AWS
New York City
email: cya.portfolio at gmail dot com
I’m an Applied Scientist in the Amazon AWS Automated Reasoning Group, working on LLM post-training, reasoning, and verification.
I interned at Microsoft Research in Seattle during the summer of 2024, focusing on improving the training efficiency of large language models (LLMs) on reasoning tasks. I also worked part-time at Scale AI as an AI Consultant, contributing to the development of LLM-based web agents and scalable verification systems for reasoning data. In Spring 2025, I joined the Amazon AWS Neurosymbolic team as an Applied Scientist Intern, where I designed a new reinforcement learning pipeline, along with its supporting data preprocessing framework, that incorporates a diversity-based reward to encourage the generation of varied chains of thought (CoTs).
If you are interested in any of the topics above, feel free to drop me an email!
news
| Aug 30, 2025 | I’m excited to share that I will join the Amazon AWS Automated Reasoning Group as an Applied Scientist! |
|---|---|
| Apr 01, 2025 | I’m excited to share that our paper, “The Price of Format: Diversity Collapse in LLMs”, has been accepted to the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP)! In this work, we find that structured templates in instruction-tuned LLMs cause diversity collapse, limiting open-ended generation even under high-temperature sampling, and we systematically evaluate this effect across tasks to show the trade-off between alignment, task performance, and output diversity. |
| Apr 01, 2025 | I’m excited to share that I’ll be joining the Amazon AWS Neurosymbolic team as an Applied Scientist Intern, where I’ll be working on LLM reasoning in both natural and formal languages! |
| Jan 22, 2025 | I’m thrilled to see that our ACL 2024 paper, “Learn from Failure: Fine-Tuning LLMs with Trial-and-Error Data for Intuitionistic Propositional Logic Proving”, has been featured by Neptunes News Agency! News link. |
| Jan 22, 2025 | I’m thrilled to share that our paper, “Correlation and Navigation in the Vocabulary Key Representation Space of Language Models”, has been accepted to the International Conference on Learning Representations (ICLR)! This work studies spurious correlations in the vocabulary key space of LLMs and proposes a novel in-context learning method, called In-Context Navigation, to sample high-quality results from the key space that cannot otherwise be obtained through standard top-k inference. |