About me

I am currently a 4th-year Computer Science and Engineering Ph.D. candidate at the University of California, Santa Cruz, fortunately advised by Prof. Yang Liu!

My research interests include robust learning under real-world constraints (label noise in human-generated data, class-imbalanced learning, and group distributional robustness), incentive design for data collection, and generative modeling.

Previously, I received my Master of Science degree in Data Science from Brown University, and my B.S. degree in Honors Science (Mathematics and Applied Mathematics) & Honors Youth (Gifted Young) from Xi’an Jiaotong University.


I am actively looking for a machine learning research internship (Summer 2023). Please feel free to contact me if you are interested in my work or would like to collaborate.

Email: jiahengwei(at)ucsc(dot)edu; WeChat: WJH_Derrick


[2023. 01] One first-author paper accepted to ICLR 2023 (work done at Google Brain).

[2022. 12] Joined CROSS as a Research Fellow.

[2022. 10] Gave an invited talk to the Domain Adaptation Team at the University of Toronto.

[2022. 08] Gave an invited talk at AI-Time.

[2022. 07] Gave an oral presentation at ICML 2022 (Deep Learning: Robustness).

[2022. 07] One first-author paper accepted to ECCV 2022.

[2022. 06] Gave an invited talk at AI-Time.

[2022. 05] One first-author paper accepted to ICML 2022 (Long Presentation, 2.1%).

[2022. 04] Call for participation: 1st Learning and Mining with Noisy Labels Challenge at IJCAI-ECAI 2022 [link].

[2022. 02] Started my journey at Google Brain as a student researcher, fortunately advised by Abhishek Kumar and Ehsan Amid!

[2022. 01] One first-author paper accepted to ICLR 2022.

[2021. 12] Became a Ph.D. candidate (with Honors).

[2021. 11] noisylabels.com is online! We collected and published re-annotated versions of the CIFAR-10 and CIFAR-100 datasets, which contain real-world human annotation errors.

[2021. 11] Gave a short talk at the Weakly Supervised Learning (WSL) workshop at ACML 2021.

[2021. 02] One first-author paper accepted to AISTATS 2021.

[2021. 01] One first-author paper accepted to ICLR 2021.