How Does Quantization Impact Privacy Risk in LLMs for Code?
Large language models for code (LLMs4Code) rely heavily on massive training data, including sensitive data such as project cloud service credentials and personally identifiable information of developers, raising serious privacy concerns. Membership inference (MI) has recently emerged as an effective tool for assessing privacy risk by identifying whether specific data belong to a model's training set. In parallel, model compression techniques, especially quantization, have gained traction for reducing computational costs and enabling the deployment of large models. However, although quantized models retain knowledge learned from the original training data, it remains unclear whether quantization affects their ability to memorize and expose private information. Answering this question is essential for understanding privacy risks in real-world deployments.
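For intuition, a simple loss-based MI signal scores a sample by how well the model predicts it: lower loss suggests the sample is more likely to have been part of the training set. The sketch below is illustrative only; the model name, threshold, and sample are assumptions for exposition, not our experimental configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative choice of a studied model family (Pythia); not our exact setup.
model_name = "EleutherAI/pythia-160m"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def loss_based_mi_score(text: str) -> float:
    """Average next-token loss of the model on `text`.
    Lower loss is treated as evidence of training-set membership."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return out.loss.item()

# A sample is flagged as a member if its loss falls below a threshold
# calibrated on known non-member data (the value here is hypothetical).
THRESHOLD = 2.5
sample = "def add(a, b):\n    return a + b"
is_member = loss_based_mi_score(sample) < THRESHOLD
```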
In this work, we conduct the first empirical study of how quantization influences task performance and privacy risk simultaneously in LLMs4Code. To do so, we apply widely used quantization techniques (static and dynamic) to four representative model families, namely Pythia, CodeGen, GPT-Neo, and starcoder2. Our results demonstrate that quantization significantly reduces privacy risk relative to the original full-precision models. We also uncover a positive correlation between task performance and privacy risk, indicating an underlying trade-off. Moreover, we show that quantizing larger models can strike a better balance between the two than using full-precision smaller models. Finally, we demonstrate that these findings generalize across different architectures, model sizes, and MI methods, offering practical guidance for safeguarding privacy when deploying compressed LLMs4Code.
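As a minimal sketch of the kind of post-training (dynamic) quantization studied here, one could quantize a model's linear layers to int8 with PyTorch and then re-run the same MI scoring on the quantized model; the model name below is illustrative, and our actual quantization configurations may differ.

```python
import torch
from transformers import AutoModelForCausalLM

# Load a full-precision model from one of the studied families (illustrative).
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-160m")
model.eval()

# Dynamic quantization: Linear weights are stored in int8, and activations
# are quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# The quantized model can then be evaluated with the same MI scoring used
# for the full-precision model to compare task performance and privacy risk.
```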