AIware 2025
Wed 19 - Thu 20 November 2025
co-located with ASE 2025

Large language models (LLMs) are now widely used to draft and refactor code, but code that works is not necessarily secure. We evaluate secure code generation using Instruct Prime, which eliminates compliance-required prompts and cue contamination, and assess five instruction-tuned code LLMs under a zero-shot baseline and a three-round reflexion prompting approach. Security is measured with the Insecure Code Detector (ICD), and results are reported using Repair, Regression, and NetGain metrics, broken down by programming language and CWE family. Our findings show that insecurity remains common at the zero-shot baseline (t0): roughly 25–33% of generated programs are insecure. Weak-cryptography and config-dependent bugs are the hardest to avoid, while templated ones such as XSS, code injection, and hard-coded secrets are handled more reliably. Python yields the highest rates of secure code; C and C# yield the lowest, with Java, JavaScript, PHP, and C++ in between. Reflexion prompting improves security for all models, raising average accuracy from 70.74% at t0 to 79.43% at t3, with the largest gains in the first round followed by diminishing returns. Trends in the Repair, Regression, and NetGain metrics show that one to two rounds produce most of the benefit. A replication package is available at https://doi.org/10.5281/zenodo.17065846.
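To make the round-over-round metrics concrete, here is a minimal sketch of how Repair, Regression, and NetGain could be computed between two rounds. The definitions below are assumptions, not taken from the paper: Repair is the fraction of programs that flip from insecure to secure, Regression the fraction that flip from secure to insecure, and NetGain their difference; the boolean security labels stand in for ICD verdicts.

```python
# Hedged sketch with hypothetical metric definitions: each round labels
# every program as secure (True) or insecure (False), e.g. via an ICD check.

def round_metrics(before, after):
    """Compare security labels across one reflexion round.

    before, after: equal-length lists of booleans (True = secure).
    Returns (repair, regression, netgain) as fractions of all programs.
    """
    n = len(before)
    repaired = sum(1 for b, a in zip(before, after) if not b and a)
    regressed = sum(1 for b, a in zip(before, after) if b and not a)
    repair = repaired / n
    regression = regressed / n
    return repair, regression, repair - regression

# Toy example: 4 programs; one insecure program is repaired, none regress.
t0 = [True, False, True, False]
t1 = [True, True,  True, False]
print(round_metrics(t0, t1))  # (0.25, 0.0, 0.25)
```

Under these assumed definitions, a positive NetGain means a reflexion round fixed more programs than it broke, which matches the paper's observation that early rounds help most.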

Thu 20 Nov

Displayed time zone: Seoul

16:00 - 16:50
Evaluation Frameworks, and Quantitative Assessment of LLMs (Part 2)
Main Track / Benchmark & Dataset Track at Grand Hall 1
Chair(s): Zhou Yang University of Alberta, Alberta Machine Intelligence Institute
16:00
8m
Talk
PromptExp: Multi-granularity Prompt Explanation of Large Language Models
Main Track
Ximing Dong Centre for Software Excellence at Huawei Canada, Shaowei Wang University of Manitoba, Dayi Lin Centre for Software Excellence, Huawei Canada, Gopi Krishnan Rajbahadur Centre for Software Excellence, Huawei, Canada, Ahmed E. Hassan Queen’s University
16:08
8m
Talk
Beyond Code Explanations: A Ray of Hope for Cross-Language Vulnerability Repair
Main Track
Kevin Lira North Carolina State University, Baldoino Fonseca Universidade Federal de Alagoas, Wesley K.G. Assunção North Carolina State University, Davy Baía Federal University of Alagoas, Márcio Ribeiro Federal University of Alagoas, Brazil
Pre-print
16:16
8m
Talk
Secure Code Generation at Scale with Reflexion
Benchmark & Dataset Track
Arup Datta University of North Texas, Ahmed Aljohani University of North Texas, Hyunsook Do University of North Texas
Pre-print
16:24
5m
Talk
A Tool for Benchmarking Large Language Models' Robustness in Assessing the Realism of Driving Scenarios
Benchmark & Dataset Track
Jiahui Wu Simula Research Laboratory and University of Oslo, Chengjie Lu Simula Research Laboratory and University of Oslo, Aitor Arrieta Mondragon University, Shaukat Ali Simula Research Laboratory and Oslo Metropolitan University
Pre-print
16:29
21m
Live Q&A
Joint Q&A and Discussion #LLMAssessment
Main Track