Robotics • Machine Learning • Generalist Robots

Lihan Zha

I am a PhD student at Princeton University, advised by Anirudha Majumdar. I am interested in building generalist robots.

Previously, I graduated from Tsinghua University, where I worked with Jianyu Chen on humanoid robots. I was also a research intern at Stanford, advised by Dorsa Sadigh.

Email: lihanzha [at] princeton [dot] edu

Selected work

Research

PlayWorld: Learning Robot World Models from Autonomous Play

Tenny Yin, Zhiting Mei, Zhonghe Zheng, Miyu Yamane, David Wang, Jade Sceats, Samuel M. Bateman, Lihan Zha, Apurva Badithela, Ola Shorinwa, Anirudha Majumdar

arXiv

TL;DR: We introduce PlayWorld, a system for training video-based robot world models from autonomous self-play rather than human demonstrations. PlayWorld generates high-quality, physically consistent predictions for contact-rich interactions, enabling reinforcement learning in simulation and improving real-world policy success rates by 65%.

LAP: Language-Action Pre-Training Enables Zero-shot Cross-Embodiment Transfer

Lihan Zha, Asher J. Hancock*, Mingtong Zhang*, Tenny Yin, Yixuan Huang, Dhruv Shah, Allen Z. Ren†, Anirudha Majumdar

arXiv

TL;DR: We introduce Language-Action Pre-training (LAP), which represents robot actions as natural language tokens so that vision-language-action models can transfer zero-shot to new robot embodiments. LAP achieves over 50% zero-shot success on novel robots, approximately twice the performance of prior methods.

Video Generation Models in Robotics - Applications, Research Challenges, Future Directions

Zhiting Mei*, Tenny Yin*, Ola Shorinwa*, Apurva Badithela, Zhonghe Zheng, Joseph Bruno, Madison Bland, Lihan Zha, Asher Hancock, Jaime Fernandez Fisac, Philip Dames, Anirudha Majumdar

arXiv

TL;DR: We survey video generation models in robotics, covering current applications, core research challenges, and promising future directions for using generative video models to support robot prediction, planning, and interaction.

Reliable and Scalable Robot Policy Evaluation with Imperfect Simulators

Apurva Badithela, David Snyder*, Lihan Zha*, Joseph Mikhail, Matthew O'Kelly, Anushri Dixit, Anirudha Majumdar

ICRA 2026 (International Conference on Robotics and Automation)

Best Paper Award, CoRL 2025 Eval&Deploy workshop

TL;DR: We present a framework that augments real-world evaluations with simulation evaluations, yielding stronger inferences about real-world policy performance than could otherwise be obtained without scaling up real-world trials.

Actions as Language: Fine-Tuning VLMs into VLAs Without Catastrophic Forgetting

Asher J. Hancock, Xindi Wu, Lihan Zha, Olga Russakovsky, Anirudha Majumdar

ICLR 2026 (International Conference on Learning Representations)

TL;DR: We introduce VLM2VLA, a training paradigm for VLA models that represents low-level robot actions in natural language, better aligning the robot fine-tuning data with the base VLM's representation space. VLM2VLA yields a policy with strong VQA performance and zero-shot generalization to new scenarios.

Recognition

Selected Awards

  • 2024: Outstanding Graduate in Beijing, China.
  • 2023: National Scholarship. Highest honor for undergraduates.
  • 2023: Jiang Nan Xiang Scholarship. Highest honor for undergraduates.