Yongchao CHEN

Assistant Professor

Large models, robotics, neuro-symbolic AI, and the interdisciplinary field of AI for Science

Education/Work Experience

In 2021, he received his B.S. degree from the School of Engineering Science at the University of Science and Technology of China (USTC), where he was a Guo Moruo Scholarship recipient.

From 2021 to 2026, he pursued his Ph.D. at the School of Engineering and Applied Sciences (SEAS), Harvard University, and the Laboratory for Information and Decision Systems (LIDS), Massachusetts Institute of Technology (MIT), advised by Prof. Chuchu Fan, Prof. Na Li, and Prof. Nicholas Roy.

He has extensive research experience in industry, including positions at Google Research, DeepMind, Microsoft Research, and the MIT-IBM Watson AI Lab.

Research Directions

Assistant Professor Yongchao Chen’s research centers on developing general and scalable neuro-symbolic models for autonomy, with the goal of enhancing intelligent agents’ and robots’ capabilities in understanding, reasoning, planning, and decision-making within complex environments.

His research spans the following main directions:

1. Foundation Models and Neuro-Symbolic AI

This line of research focuses on post-training, reasoning, planning, and tool use for large language models (LLMs) and vision-language models (VLMs). He explores the integration of symbolic structures with learning-based approaches to develop neuro-symbolic models that improve systematic reasoning and generalization capabilities.

2. Robotics and Autonomous Systems

His work investigates task and motion planning (TAMP), robotic foundation models (e.g., Vision-Language-Action models), tactile perception, humanoid robotics, manipulation, navigation, control and optimization, multi-robot systems, and robot mechanics. The overarching goal is to advance intelligent robotic systems capable of long-horizon autonomy and complex interaction in real-world environments.

3. AI for Science

He explores the application of foundation models and robotics to autonomous scientific discovery, including model construction for scientific problems, experimental decision-making, and automated experimentation systems.

Research Highlights

Centered on the goal of building scalable and generalizable neuro-symbolic autonomous systems, Assistant Professor Yongchao Chen has systematically advanced the integration of foundation models with symbolic computation, search and optimization, and robotic task and motion planning (TAMP).

He proposed frameworks such as AutoTAMP, NL2TL, and TravelPlanner, enabling reliable integration between large language models and formal planners. He further introduced the Code-as-Symbolic-Planner paradigm, using code as a unified intermediate representation. Based on this paradigm, he developed models including CodeSteer, R1-Code-Interpreter, and TUMIX, allowing large models to dynamically switch between natural language reasoning, code execution, and tool invocation.

His work also extends search-based methods to multi-robot collaboration, prompt optimization, and AI for Science, forming a systematic body of contributions across robotic planning, agent reasoning, and scientific discovery.

His research has been published in top-tier conferences including ICLR, ICML, EMNLP, NAACL, and ICRA, and has had broad academic and community impact: his open-source models and datasets have been downloaded more than 500,000 times on Hugging Face, and his work has been featured by MIT News (including a Spotlight feature) and MIT Technology Review.

For more information, please visit his personal website:

https://yongchao98.github.io/YongchaoChen/

Email

yongchaochen12@gmail.com

Office

Block F, Zhongguancun Intelligent Manufacturing Street

Homepage

https://yongchao98.github.io/YongchaoChen/