Chenyang An
Ph.D. Student @ University of California, San Diego

HSS Building, UC San Diego
9500 Gilman Dr
La Jolla, California 92092
email: c5an at ucsd dot edu
I am a fifth-year Ph.D. student in the Mathematics Department at UC San Diego, where I am advised by Prof. Sam Buss and co-advised by Prof. Jingbo Shang. Prior to my Ph.D. studies, I completed a B.S. in Applied Mathematics and a B.A. in Economics at UC San Diego.
My current research focuses on Large Language Model (LLM) reasoning and theorem proving, in both natural-language and formalized environments. I believe that mathematics will likely fall well within the capabilities of LLMs in the near future.
My prior research focused on 2D quantum gravity and mathematical physics, studying the interplay between algebra, geometry, and physics.
I am currently employed at Scale AI as an AI Consultant, working on evaluations of large language models for reasoning and planning.
In the summer of 2024, I was a research intern in ML and Generative AI at Microsoft in Redmond, Washington, working on LLM training efficiency for reasoning tasks.
If you are interested in any of the topics above, feel free to drop me an email!
news
Jan 22, 2025 | I’m thrilled to see that our ACL 2024 publication, “Learn from Failure: Fine-Tuning LLMs with Trial-and-Error Data for Intuitionistic Propositional Logic Proving”, has been featured by Neptunes News Agency! News link.
Jan 22, 2025 | I’m thrilled to share that our paper, “Correlation and Navigation in the Vocabulary Key Representation Space of Language Models”, has been accepted to the International Conference on Learning Representations (ICLR)! This work studies spurious correlations in the vocabulary key space of LLMs and proposes a novel in-context learning method, In-Context Navigation, to sample high-quality results from the key space that cannot be obtained through the usual top-k inference.
Oct 01, 2024 | I’m excited to share that I will be joining Scale AI as an AI Consultant, working on fine-tuning LLMs for real-world applications.
Jun 04, 2024 | I’m excited to share that I will be joining Microsoft as a research intern in ML and Generative AI in the summer of 2024 in Redmond, Washington.
Jun 01, 2024 | I’m thrilled to share that our paper, “Learn from Failure: Fine-Tuning LLMs with Trial-and-Error Data for Intuitionistic Propositional Logic Proving”, has been accepted to the main conference of the Association for Computational Linguistics (ACL) 2024! This work studies the usefulness of trial-and-error information by fine-tuning LLMs on it, helping the models reason through logical deduction problems.