Yixin Liu /ˈjiː.ʃɪn ljuː/ (刘奕鑫)

Email: yila22 [at] lehigh [dot] edu

I am a 2nd-year Ph.D. student working on Machine Learning at Lehigh University, advised by Prof. Lichao Sun. Previously, I obtained my B.E. in Software Engineering from South China University of Technology with honors in 2022.

News 📢
  • [2024.01] Our MetaCloak is accepted to CVPR 2024! It is a more robust protection framework for safeguarding portraits against AI mimicry.
  • [2024.01] I will join Samsung Research America as a research intern in summer 2024!
Research Interest

My research interests lie in broad aspects of AI Safety and Large-scale Language and Diffusion Models. Currently, I focus on the following directions:

  • AI Safety: i) Safeguarding against Unauthorized Exploitation: studying data-centric approaches to safeguarding users' data from unauthorized exploitation (including deepfakes, portrait manipulation, and dataset copyright infringement) by degrading the trained model's generalization ability; ii) Robustness of Explainable AI: studying the robustness of explainable AI, including improving the stability and faithfulness of explainable AI tools under adversarial manipulation.
  • Large-scale Language and Diffusion Models + X: studying large-scale language and diffusion models, including the efficiency and safety of large-scale foundation models. I am also interested in applying these models to solve real-world problems, including vision, NLP, and multimodal tasks.
Reviewer Service

NeurIPS'23, KDD'23, CVPR'24, ICML'24, ECCV'24.

Publications

Topics: Unauthorized Exploitation / NLP Safety / Explainable AI / Model Compression / Applications (*/†: indicates equal contribution.)

Toward Robust Imperceptible Perturbation against Unauthorized Text-to-image Diffusion-based Synthesis
Yixin Liu, Chenrui Fan, Yutong Dai, Xun Chen, Pan Zhou, Lichao Sun

[CVPR 2024]

Medical Unlearnable Examples: Securing Medical Data from Unauthorized Training via Sparsity-Aware Local Masking
Weixiang Sun, Yixin Liu, Zhiling Yan, Kaidi Xu, Lichao Sun

[In Submission]

Jailbreaking GPT-4V via Self-Adversarial Attacks with System Prompts
Yuanwei Wu, Xiang Li, Yixin Liu, Pan Zhou, Lichao Sun

[In Submission]

Stable Unlearnable Example: Enhancing the Robustness of Unlearnable Examples via Stable Error-Minimizing Noise
Yixin Liu, Kaidi Xu, Xun Chen, Lichao Sun

[AAAI 2024]

Improving Faithfulness for Vision Transformers
Lijie Hu*, Yixin Liu*, Ninghao Liu, Mengdi Huai, Lichao Sun, and Di Wang

[In Submission]

GraphCloak: Safeguarding Graph-structured Data from Unauthorized Exploitation
Yixin Liu, Chenrui Fan, Xun Chen, Pan Zhou, and Lichao Sun

[Preprint]

Watermarking Classification Dataset for Copyright Protection
Yixin Liu*, Hongsheng Hu*, Xuyun Zhang, Lichao Sun

[Preprint]

BadGPT: Exploring Security Vulnerabilities of ChatGPT via Backdoor Attacks to InstructGPT
Jiawen Shi, Yixin Liu, Pan Zhou, and Lichao Sun

[NDSS 2023 Poster]

Securing Biomedical Images from Unauthorized Training with Anti-Learning Perturbation
Yixin Liu, Haohui Ye, Lichao Sun

[NDSS 2023 Poster]

SEAT: Stable and Explainable Attention
Lijie Hu*, Yixin Liu*, Ninghao Liu, Mengdi Huai, Lichao Sun, and Di Wang

[AAAI 2023 Oral]

Conditional Automated Channel Pruning for Deep Neural Networks
Yixin Liu, Yong Guo, Jiaxin Guo, Luoqian Jiang, Jian Chen

[IEEE Signal Processing Letters]

Meta-Pruning with Reinforcement Learning
Yixin Liu (Advisor: Jian Chen)

[Bachelor Thesis]

Priority Prediction of Sighting Report Using Machine Learning Methods
Yixin Liu, Jiaxin Guo, Jieyang Dong, Luoqian Jiang, Haoyuan Ouyang

[IEEE SEAI 2021; Finalist Award in MCM/ICM 2021]