AIware 2025
Wed 19 - Thu 20 November 2025
co-located with ASE 2025
Wed 19 Nov 2025 09:44 - 09:52 at Grand Hall 1 - AIware & Security Chair(s): Weiyi Shang

Large language models for code (LLMs4Code) rely heavily on massive training data, including sensitive data such as projects' cloud service credentials and developers' personally identifiable information, raising serious privacy concerns. Membership inference (MI) has recently emerged as an effective tool for assessing privacy risk by identifying whether specific data belong to a model's training set. In parallel, model compression techniques, especially quantization, have gained traction for reducing computational costs and enabling the deployment of large models. However, while quantized models still retain knowledge learned from the original training data, it remains unclear whether quantization affects their ability to retain and expose private information. Answering this question is of great importance to understanding privacy risks in real-world deployments.
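For intuition, a common family of MI attacks thresholds a per-sample loss or perplexity signal: samples the model fits unusually well are flagged as likely training members. The sketch below is illustrative only and is not the attack evaluated in the paper; the checkpoint name, threshold, and helper function are placeholders.

# Minimal sketch of a loss-threshold membership inference signal
# (illustrative; model name and threshold are placeholder assumptions).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "EleutherAI/pythia-160m"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def sample_loss(text: str) -> float:
    # Per-sample language-modeling loss; unusually low loss suggests the
    # sample may have been seen during training.
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(**inputs, labels=inputs["input_ids"])
    return out.loss.item()

THRESHOLD = 2.0  # hypothetical; would be calibrated on held-out data
snippet = "aws_secret_access_key = 'EXAMPLEKEY123'"
print("predicted member" if sample_loss(snippet) < THRESHOLD else "predicted non-member")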

In this work, we conduct the first empirical study of how quantization simultaneously influences task performance and privacy risk in LLMs4Code. To do so, we apply widely used quantization techniques (static and dynamic) to four representative model families, namely Pythia, CodeGen, GPT-Neo, and StarCoder2. Our results demonstrate that quantization significantly reduces privacy risk relative to the original model. We also uncover a positive correlation between task performance and privacy risk, indicating an underlying trade-off. Moreover, we reveal that quantizing larger models can yield a better balance than using full-precision small models. Finally, we show that these findings generalize across different architectures, model sizes, and MI methods, offering practical guidance for safeguarding privacy when deploying compressed LLMs4Code.
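As a rough illustration of the kind of post-training compression involved, dynamic quantization converts a model's linear-layer weights to 8-bit integers after training. The sketch below assumes PyTorch's quantize_dynamic API and a placeholder StarCoder2 checkpoint; it shows the general technique, not the paper's exact quantization pipeline.

# Sketch of post-training dynamic quantization of a code LLM
# (illustrative; the study's actual quantization setup may differ).
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("bigcode/starcoder2-3b")  # placeholder
quantized = torch.quantization.quantize_dynamic(
    model,              # full-precision model
    {torch.nn.Linear},  # quantize weights of all Linear layers
    dtype=torch.qint8,  # 8-bit integer weights
)
# The quantized model can then be evaluated for both task performance
# (e.g., code completion) and privacy risk (e.g., MI attack success).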

Wed 19 Nov

Displayed time zone: Seoul

09:20 - 10:30
AIware & Security (Main Track) at Grand Hall 1
Chair(s): Weiyi Shang University of Waterloo
09:20
8m
Talk
CHASE: LLM Agents for Dissecting Malicious PyPI Packages
Main Track
Takaaki Toda Waseda University, Tatsuya Mori Waseda University
File Attached
09:28
8m
Talk
CFCEval: Evaluating Security Aspects in Code Generated by Large Language Models
Main Track
Cheng Cheng Concordia University, Jinqiu Yang Concordia University
Pre-print
09:36
8m
Talk
Security in the Wild: An Empirical Analysis of LLM-Powered Applications and Local Inference Frameworks
Main Track
Julia Gomez-Rangel Texas A&M University - Corpus Christi, Young Lee Texas A&M University - San Antonio, Bozhen Liu Texas A&M University - Corpus Christi
Pre-print
09:44
8m
Talk
How Quantization Impacts Privacy Risk on LLMs for Code?
Main Track
Md Nazmul Haque North Carolina State University, Hua Yang North Carolina State University, Zhou Yang University of Alberta and Alberta Machine Intelligence Institute, Bowen Xu North Carolina State University
Pre-print
09:52
8m
Talk
Securing the Multi-Chain Ecosystem: A Unified, Agent-Based Framework for Vulnerability Repair in Solidity and Move
Main Track
Rabimba Karanjai University of Houston, Lei Xu Kent State University, Weidong Shi University of Houston
10:00
8m
Talk
SEALGuard: Safeguarding the Multilingual Conversations in Southeast Asian Languages for AI-Powered Software
Main Track
Wenliang Shan Monash University, Michael Fu The University of Melbourne, Rui Yang Monash University and Transurban, Kla Tantithamthavorn Monash University and Atlassian
Pre-print File Attached
10:10
20m
Live Q&A
Joint Q&A and Discussion #AISecurity
Main Track