Miao Liu (刘淼)

Assistant Professor at Tsinghua University, College of AI; formerly Research Scientist at Meta GenAI

I'm an incoming Assistant Professor at Tsinghua University, College of Artificial Intelligence. Previously, I was a Research Scientist at Meta Reality Labs and GenAI, primarily focusing on first-person vision and generative AI models (such as Llama3, Llama4, and EMU). I completed my Ph.D. in Robotics at Georgia Tech, advised by Prof. James Rehg. I also work closely with Prof. Yin Li from the University of Wisconsin–Madison. I was fortunate to collaborate with Prof. Siyu Tang and Prof. Michael Black during my visit to ETH Zurich and the Max Planck Institute. I enjoyed a wonderful internship at Facebook Reality Labs, where I worked with Dr. Chao Li, Dr. Lingni Ma, Dr. Kiran Somasundaram, and Prof. Kristen Grauman on egocentric action recognition and localization. I am honored to have received several awards, including Best Paper Candidate at CVPR 2022 and ECCV 2024, and the Best Student Paper Award at BMVC 2022. Before joining Georgia Tech, I earned my Master’s degree from Carnegie Mellon University and my Bachelor’s degree from Beihang University.

As a primary contributor, I have helped construct several widely recognized egocentric video datasets, including Ego4D, Ego-Exo4D, EGTEA Gaze+, and the Behavior Vision Suite, which have been broadly adopted in both academia and industry. I have also proposed multiple algorithms for egocentric action recognition and anticipation, some of which will be deployed in the next generation of smart glasses developed by Meta Reality Labs. During my time at Meta GenAI, I was deeply involved in the training and evaluation of large-scale generative multimodal models, including EMU, Llama3, and Llama4 (multimodal components only).

*This image of Jaime Lannister charging alone at Daenerys and her dragon reveals what it often takes to do science—you must be willing to stand as the lonely warrior.

Our lab is committed to the following research agenda:

Designing AI that sees through your eyes, learns your skills, and understands your intentions.

-- Building next-generation, human-centered intelligent systems that see what you see, learn your skills, and understand your intentions.

Our research is dedicated to Bridging Minds and Machines: we leverage egocentric vision and generative AI to build systems that understand and anticipate human behavior and intentions, and thereby assist people in their daily lives. Our key research directions include:

  • Human Skill Transfer: Facilitating skill transfer between humans and from humans to robots through augmented reality, enabling efficient and natural human-AI collaboration.
  • Personalized AI Systems: Building generative AI models that continuously evolve based on user interaction history and preferences, capable of understanding context and adapting to individual users.
  • AI Agents with Theory of Mind: Developing proactive AI agents that model users’ intentions and cognitive load, leading to more intuitive and seamless human-AI interaction.

My group is always looking for talented students to join us on this journey. For students from Mainland China, please see the note here. For international students, please contact me directly via email.

News

  • Jun. 2025: Received the Egocentric Vision (EgoVis) 2023/2024 Distinguished Paper Award.
  • Feb. 2025: Three papers accepted to CVPR 2025.
  • Oct. 2024: Our LEGO paper has been nominated as one of the 15 award candidates at ECCV 2024.
  • Jul. 2024: Two corresponding-author papers accepted to ECCV 2024 (1 Poster, 1 Oral).
  • Jun. 2024: Received the Egocentric Vision (EgoVis) 2022/2023 Distinguished Paper Award.
  • Feb. 2024: Three papers accepted to CVPR 2024 (1 Poster, 1 Highlight, 1 Oral).
  • Nov. 2023: One paper accepted to IEEE TPAMI.
  • Nov. 2023: One paper accepted to IJCV.
  • Jun. 2023: One paper accepted to ACL 2023 as Findings.
  • Nov. 2022: Our paper on Egocentric Gaze Estimation won the Best Student Paper Prize at BMVC 2022!
  • Sep. 2022: One paper accepted to BMVC 2022 for spotlight presentation!
  • Aug. 2022: I started my new journey at Meta Reality Labs.
  • Jul. 2022: Two papers accepted at ECCV 2022.
  • Jun. 2022: I successfully defended my thesis!
  • Apr. 2022: Technical talk at Meta AI Research.
  • Mar. 2022: Technical talk at Amazon.
  • Mar. 2022: Our Ego4D paper was accepted to CVPR 2022 for oral presentation and was selected as a Best Paper Finalist.
  • Feb. 2022: Technical talk at Apple.
  • Oct. 2021: Our Ego4D project has launched! Check out the arXiv paper.
  • Oct. 2021: One paper accepted to 3DV 2021.
  • Jul. 2021: I passed my thesis proposal.

Publications

Teaching

Students

Contact