news

Aug 30, 2025 I’m excited to share that I will join the Amazon AWS Automated Reasoning Group as an Applied Scientist!
Apr 01, 2025 I’m excited to share that our paper, “The Price of Format: Diversity Collapse in LLMs”, has been accepted to Empirical Methods in Natural Language Processing (EMNLP) 2025! In this work, we find that structured templates in instruction-tuned LLMs cause diversity collapse, limiting open-ended generation even under high-temperature sampling, and we systematically evaluate this effect across tasks to show the trade-off between alignment, task performance, and output diversity.
Apr 01, 2025 I’m excited to share that I’ll be joining the Amazon AWS Neurosymbolic team as an Applied Scientist Intern, where I’ll be working on LLM reasoning in both natural and formal language!
Jan 22, 2025 I’m thrilled to see that our ACL 2024 publication, “Learn from Failure: Fine-Tuning LLMs with Trial-and-Error Data for Intuitionistic Propositional Logic Proving”, was featured by Neptunes News Agency! News link.
Jan 22, 2025 I’m thrilled to share that our paper, “Correlation and Navigation in the Vocabulary Key Representation Space of Language Models”, has been accepted to the International Conference on Learning Representations (ICLR)! This work studies spurious correlations in the vocabulary key space of LLMs and proposes a novel in-context learning method, called In-Context Navigation, to sample high-quality results from the key space that cannot otherwise be obtained through standard top-k inference.
Oct 01, 2024 I’m excited to share that I will be joining Scale AI as an AI Consultant, working on fine-tuning LLMs for real-world applications.
Jun 04, 2024 I’m excited to share that I will be joining Microsoft as a research intern in ML and Generative AI in the summer of 2024 in Redmond, Washington.
Jun 01, 2024 I’m thrilled to share that our paper, “Learn from Failure: Fine-Tuning LLMs with Trial-and-Error Data for Intuitionistic Propositional Logic Proving”, has been accepted to the main conference of the Association for Computational Linguistics (ACL) 2024! This work studies the usefulness of trial-and-error information by fine-tuning LLMs with it to help the models reason in logic deduction problems.