I'm currently a 4th-year Ph.D. student at Sun Yat-sen University, advised by Prof. Wei-Shi Zheng. My research interests include video understanding as well as motion understanding and generation. I also have a strong interest in image and video generation.
Previously, I obtained my M.S. degree from Sun Yat-sen University in 2021, advised by Prof. Wei-Shi Zheng. Before that, I obtained my B.E. degree from the University of Electronic Science and Technology of China in 2019.
News
➤ [2025-03] One paper accepted in IEEE Transactions on Circuits and Systems for Video Technology (TCSVT).
➤ [2025-03] Two papers accepted in ICME 2025.
➤ [2025-02] One paper accepted in CVPR 2025.
➤ [2024-12] One paper accepted in AAAI 2025.
➤ [2024-07] One paper accepted in ECCV 2024.
➤ [2024-05] One paper accepted in IEEE Transactions on Circuits and Systems for Video Technology (TCSVT).
➤ [2024-04] One paper accepted in IEEE Transactions on Multimedia (TMM).
➤ [2024-02] One paper accepted in IEEE Transactions on Image Processing (TIP).
➤ [2022-03] One paper accepted in CVPR 2022.
➤ [2020-07] One paper accepted in ACM Multimedia 2020.
Publications
Below are my publications.
(† Equal contribution; * Corresponding authors.)
ACTION-NET: State-of-the-art action quality assessment in long sports videos using hybrid dynamic-static modeling and attention, introducing the Rhythmic Gymnastics dataset.
Continual-AQA: Enabling sequential learning in Action Quality Assessment without forgetting, using rehearsal and graph-based techniques to maintain strong performance across tasks.
GDLT: Quantifying performance grades in long-term action quality assessment for state-of-the-art results.
Efficient Explicit Joint-level Interaction Modeling with Mamba for Text-guided HOI Generation
Guohong Huang†, Ling-An Zeng†, Zexin Zheng, Shengbo Gu, Wei-Shi Zheng*.
IEEE International Conference on Multimedia & Expo (ICME), 2025.
NoiseActor: A Noise-Action Collaborative Framework for Privacy-Preserving Action Recognition without Privacy Labels
Xiao Li, Xiao-Ming Wu, Delong Zhang, Kun-Yu Lin, Yixing Peng, Ling-An Zeng, Wei-Shi Zheng*.
IEEE International Conference on Multimedia & Expo (ICME), 2025.
Progressive Human Motion Generation Based on Text and Few Motion Frames
Ling-An Zeng, Gaojie Wu, Ancong Wu, Jian-Fang Hu, Wei-Shi Zheng*.
IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 2025.
Light-T2M: A Lightweight and Fast Model for Text-to-motion Generation
Ling-An Zeng, Guohong Huang, Gaojie Wu, Wei-Shi Zheng*.
AAAI Conference on Artificial Intelligence (AAAI), 2025.
Privacy-Preserving Action Recognition: A Survey
Xiao Li, Yu-Kun Qiu, Yi-Xing Peng, Ling-An Zeng, Wei-Shi Zheng
Chinese Conference on Pattern Recognition and Computer Vision (PRCV), 2024.
EgoExo-Fitness: Towards Egocentric and Exocentric Full-Body Action Understanding
Yuan-Ming Li†, Wei-Jin Huang†, An-Lan Wang†, Ling-An Zeng, Jing-Ke Meng*, Wei-Shi Zheng*
European Conference on Computer Vision (ECCV), 2024.
Adaptive Weight Generator for Multi-Task Image Recognition by Task Grouping Prompt
Gaojie Wu, Ling-An Zeng, Jing-Ke Meng*, Wei-Shi Zheng
IEEE Transactions on Multimedia (TMM), 2024.
Continual Action Assessment via Task-Consistent Score-Discriminative Feature Distribution Modeling
Yuan-Ming Li, Ling-An Zeng, Jing-Ke Meng*, Wei-Shi Zheng*.
IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 2024.