About Me
Hello, I am Tinghao Xie 谢廷浩, a second-year ECE Ph.D. student at Princeton, advised by Prof. Prateek Mittal. Previously, I received my Bachelor's degree in Computer Science and Technology from Zhejiang University.
My Research
I hope to fully explore the breadth and depth of safe, secure, robust, and reliable AI systems. Specifically:
- I am currently working on LLM alignment and safety. Did you know that 🚨fine-tuning an aligned LLM can compromise its safety, even when users do not intend to? Check out our recent work on 🚨LLM Fine-tuning Risks [website] [paper] [code], which was exclusively reported by the 📰New York Times!
- I also have extensive research experience in DNN backdoor attacks and defenses:
- Check out my 📦backdoor-toolbox @ GitHub, which has helped many backdoor researchers!
- To defend against backdoor attacks at inference time, we introduce a novel backdoor input detection method that directly extracts the backdoor functionality into a backdoor expert model. Check out our work 🛡️BaDExpert: Extracting Backdoor Functionality for Accurate Backdoor Input Detection [paper] (preprint) for details!
- We propose a proactive solution for identifying backdoor poison samples in a poisoned training set in our work 🛡️Towards A Proactive ML Approach for Detecting Backdoor Poison Samples [paper] [code] (USENIX Security'23). This is realized via a super interesting method named "Confusion Training", where we prevent an ML model from fitting the normal clean samples through deliberate mislabeling, so that the resulting model can only fit the backdoor poison samples (see the sketch after this list).
- Before that, another work of ours, 🤔Revisiting the Assumption of Latent Separability for Backdoor Defenses [paper][code] (ICLR'23), studies the latent-separability assumption made by state-of-the-art backdoor defenses and designs adaptive attacks against such defenses.
- 😈Subnet Replacement Attack (SRA) [paper][code] (CVPR'22 Oral) is my earlier work, which proposes the first gray-box and physically realizable backdoor weight attack. It was done in collaboration with Xiangyu Qi @ Princeton University, advised by Principal Researcher Jifeng Zhu @ Tencent Zhuque Lab and Prof. Kai Bu @ ZJU.
- When I was an undergraduate, I was fortunate to work as an intern with Prof. Ting Wang @ Pennsylvania State University (now Associate Professor @ Stony Brook University) on backdoor certification [blog] and backdoor restoration [blog], while co-advised by Prof. Shouling Ji @ ZJU NESA Lab.
- Even earlier during my undergrad years (my first research experience actually lol), I worked with Prof. Jianhai Chen to design and implement Enchecap [code], an encrypted (enclave-based) heterogeneous calculation protocol.
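For readers curious about the "Confusion Training" idea mentioned above, here is a minimal, illustrative PyTorch sketch, not the actual algorithm from the USENIX Security'23 paper: the model, data loaders, and hyperparameters are all hypothetical placeholders. The gist: train jointly on the inspected (possibly poisoned) set and a deliberately mislabeled clean base set, so the model cannot fit the normal clean distribution, then flag the samples the confused model still fits as suspected poison.

```python
# Illustrative sketch of "Confusion Training" (hypothetical setup, not the paper's exact algorithm).
import torch
import torch.nn.functional as F

def confusion_training(model, inspected_loader, clean_base_loader,
                       epochs=5, lr=1e-3, device="cpu"):
    """Train on the inspected (possibly poisoned) set together with a
    deliberately mislabeled clean base set, so the model cannot fit the
    normal clean samples but can still fit backdoor poison samples."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.to(device).train()
    for _ in range(epochs):
        for (x_in, y_in), (x_cl, y_cl) in zip(inspected_loader, clean_base_loader):
            # Deliberately shuffle the clean base labels -> a "confusion" batch.
            y_confused = y_cl[torch.randperm(len(y_cl))]
            x = torch.cat([x_in, x_cl]).to(device)
            y = torch.cat([y_in, y_confused]).to(device)
            loss = F.cross_entropy(model(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model

@torch.no_grad()
def flag_suspected_poison(model, inspected_loader, device="cpu"):
    """Samples the confused model still fits (i.e., predicts their given
    label) are flagged as suspected backdoor poison samples."""
    model.to(device).eval()
    flags = []
    for x, y in inspected_loader:
        preds = model(x.to(device)).argmax(dim=1).cpu()
        flags.append(preds == y)
    return torch.cat(flags)  # boolean mask over the inspected set
```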
Publications/Manuscripts
Click here (or the “Publications/Manuscripts” button in the nav bar) for more details!
📖 Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!
Xiangyu Qi*, Yi Zeng*, Tinghao Xie*, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal†, Peter Henderson†
Under Review
📰 This work was exclusively reported by the New York Times and covered by many other media outlets!
📖 BaDExpert: Extracting Backdoor Functionality for Accurate Backdoor Input Detection
Tinghao Xie, Xiangyu Qi, Ping He, Yiming Li, Jiachen T. Wang, Prateek Mittal
Under Review
📖 Towards A Proactive ML Approach for Detecting Backdoor Poison Samples
Xiangyu Qi, Tinghao Xie, Jiachen T. Wang, Tong Wu, Saeed Mahloujifar, Prateek Mittal
USENIX Security 2023
📖 Revisiting the Assumption of Latent Separability for Backdoor Defenses
Xiangyu Qi*, Tinghao Xie*, Yiming Li, Saeed Mahloujifar, Prateek Mittal
ICLR 2023
📖 Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks
Xiangyu Qi*, Tinghao Xie*, Ruizhe Pan, Jifeng Zhu, Yong Yang, Kai Bu
CVPR 2022 (oral)
News & Facts
- 💼 Seeking research internship opportunities in industry!
- [2023/10] Our preprint 🚨Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To! is available. It was exclusively reported by the 📰New York Times and covered by many other media outlets!
- [2023/06] Our paper 📖 Towards A Proactive ML Approach for Detecting Backdoor Poison Samples was accepted to USENIX Security 2023!
- [2023/01] Our paper 📖 Revisiting the Assumption of Latent Separability for Backdoor Defenses was accepted to ICLR 2023!
- [2022/08] 🐯 Now officially a Ph.D. student at Princeton.
- [2022/07] 🎓 Graduated and received my B.E. degree from ZJU!
- [2022/06] 🛡️ Successfully defended my undergraduate thesis, ready for graduation~
- [2022/05] 🏆 Won the championship at the Zhejiang University Bodybuilding Competition (70 kg class)!
- [2022/03] My 🍫-abs (six-pack) are visible!!! To lose fat, a healthy diet is just as important as an appropriate exercise plan. (Update in 2023: losing it due to heavy workload, the wonderful food in Princeton's student dining halls, and lack of exercise😫)
- [2022/03] Our paper 📖 Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks was accepted to CVPR 2022 (oral)!
- 🧗 Rock Climbing, Skiing, Open Water Diving, Basketball, Swimming, Billiards, Bowling…