About

Hello! I am a 2nd-year Ph.D. student in Computer Science at Johns Hopkins University, advised by Tianmin Shu. I’m also a part-time student researcher at Meta FAIR, mentored by Jason Weston. My research is funded by the Amazon AI PhD Fellowship.

I received bachelor’s degrees in Honors Computer Science and Mathematics from NYU Courant (2020–2024), where the CS Department recognized me as its Most Promising Student. I was a research intern at MIT (2023), advised by Josh Tenenbaum. I received an Outstanding Paper Award at ACL 2024 for my work on multimodal Theory of Mind.

I’m interested in developing AI systems with advanced social intelligence (ASI). My current areas of focus include:

Reasoning about Human Interaction: Developing AI that can continuously perceive, reason about, and respond to human behavior and cognition.
Learning from Human Interaction: Developing AI that can continuously learn from real-world human interactions.

I love mentoring undergraduates—several of my mentees have been recognized with top research honors. Please fill out this application form and/or send me an email if you’re interested in working with me!

Besides academics, I enjoy playing Go, piano, badminton, swimming, and so much more.

News

Recent Publications

(* denotes equal contribution; † denotes project lead)
For a more complete list, please see my publication page or Google Scholar page.
The Era of Real-World Human Interaction: RL from User Conversations
Chuanyang Jin, Jing Xu, Bo Liu, Leitian Tao, Olga Golovneva, Tianmin Shu, Wenting Zhao, Xian Li, Jason Weston
arXiv Preprint / 🔍 Invited Talk at Google and Meta TBD Lab / ⭐️ Paper of the Week by Huggingface, DAIR.AI, and TuringPost
We posit that to achieve continual model improvement and multifaceted alignment, future models must learn from natural human interaction. We introduce Reinforcement Learning from Human Interaction (RLHI), a paradigm that learns directly from in-the-wild user conversations. RLHI outperforms RLHF at the user level, enabling personalized, contextual, and continually improving AI assistants.
AutoToM: Scaling Model-based Mental Inference via Automated Agent Modeling
Zhining Zhang*, Chuanyang Jin*†, Mung Yao Jia*, Shunchi Zhang*, Tianmin Shu
NeurIPS 2025 (Spotlight)
AutoToM is an automated agent modeling method for scalable, robust, and interpretable mental inference. It achieves SOTA on five benchmarks, produces human-like confidence estimates, and supports embodied decision-making.
OS-Genesis: Automating GUI Agent Trajectory Construction via Reverse Task Synthesis
Qiushi Sun*, Kanzhi Cheng*, Zichen Ding*, Chuanyang Jin*, Yian Wang, Fangzhi Xu, Zhenyu Wu, Chengyou Jia, Liheng Chen, Zhoumianze Liu, Ben Kao, Guohao Li, Junxian He, Yu Qiao, Zhiyong Wu
ACL 2025 / ⭐️ Huggingface Daily Papers Top-1
We introduce OS-Genesis, a data pipeline that synthesizes GUI agent trajectories without manual annotation. It enables agents to actively explore web and mobile environments through stepwise interactions, and then derives meaningful low- and high-level task instructions from the observed interactions and state changes.
MuMA-ToM: Multi-modal Multi-Agent Theory of Mind
Haojun Shi*, Suyu Ye*, Xinyu Fang, Chuanyang Jin, Leyla Isik, Yen-Ling Kuo, Tianmin Shu
AAAI 2025 (Oral)
MuMA-ToM evaluates Theory of Mind reasoning in embodied multi-agent interactions, revealing that current multimodal LLMs significantly lag behind human performance. To bridge this gap, we propose LIMP, a method that combines language models with inverse multi-agent planning to achieve superior results.
MMToM-QA: Multimodal Theory of Mind Question Answering
Chuanyang Jin, Yutong Wu, Jing Cao, Jiannan Xiang, Yen-Ling Kuo, Zhiting Hu, Tomer Ullman, Antonio Torralba, Joshua Tenenbaum, Tianmin Shu
ACL 2024 (Outstanding Paper Award) / 🔍 Invited Talk at University of Washington
Can machines understand people's minds from multimodal inputs? We introduce a comprehensive benchmark, MMToM-QA, and highlight key limitations in current multimodal LLMs. We then propose a novel method that combines the flexibility of LLMs with the robustness of Bayesian inverse planning, achieving promising results.

Feel free to check out my undergrad projects. A mountain of gratitude to those who have kindly mentored and inspired me with their vision and passion!

Selected Honors & Awards

  • Amazon AI PhD Fellowship, 2025
  • Notable Reviewer, ICLR 2025
  • Outstanding Paper Award, ACL 2024
  • Presidential Honors Scholar and Summa cum Laude, New York University, 2024
  • Computer Science Prize for the Most Promising Student, New York University (1 person/year), 2023
  • Dean’s Undergraduate Research Fund, New York University, 2023
  • COMAP International Scholarship Award (Top 0.1%), 2022
  • MAA Award in Mathematical Contest in Modeling (Top 0.1%), 2022
  • Bronze Medal of Shing-Tung Yau Computer Science Award (Top 1%), 2019
  • Finalist of FIRST Robotics Competition World Championship (Top 0.2%), 2019
  • NFLS Outstanding Student Leader Award and Zhou Enlai Scholarship (Top 1%), 2018
  • First Prize of Chinese Mathematical Olympiad (Top 0.1%), 2018
  • Champion of International Regions Mathematics League, 2018
