CGO 2026
Sat 31 January - Wed 4 February 2026 Sydney, Australia
co-located with HPCA, PPoPP, and CC 2026

This program is tentative and subject to change.

Wed 4 Feb 2026 12:30 - 12:50 at Bronte - Optimization session. Chair(s): Teresa Johnson

Large Language Models (LLMs) for compiler optimization have recently emerged as a frontier research direction, with many studies demonstrating their potential to automate and improve low-level code transformations. While various techniques have been proposed to enhance LLMs' ability to optimize LLVM IR or assembly code, ensuring the semantic equivalence of the transformed instructions remains a fundamental prerequisite for safe and effective performance improvement. At the same time, code generated by LLMs is often so far from being correct that it is very difficult to work out how to turn its output into usable optimizations.

In this work, we present LLM-VeriOpt, a novel reinforcement-learning methodology that incorporates feedback from a formal verifier, Alive2, to guide the training of a small-scale model, Qwen-3B. This enables verification-guided reinforcement via Group Relative Policy Optimization (GRPO), using semantic-equivalence signals from the Alive2 formal verification tool as part of the reward function. The model thereby learns to self-correct, observing and subsequently learning from correctness feedback during training, and achieves high code coverage: it successfully transforms large amounts of code while also optimizing it significantly.
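As a rough illustration of the methodology described above, the sketch below shows how a verifier-gated GRPO reward might be wired up. It is our reconstruction, not the authors' code: it assumes Alive2's alive-tv binary is on the PATH (upstream alive-tv prints "Transformation seems to be correct!" for each verified function pair), treats a crude instruction-count reduction as the optimization bonus, and uses made-up weights (the -1.0 penalty and 0.1 bonus) and helper names (verifies, reward, group_relative_advantages).

```python
# Hypothetical sketch of a verifier-guided GRPO reward (not the authors' code).
# Assumes Alive2's `alive-tv` binary is on the PATH.
import subprocess
import tempfile
import statistics

def verifies(src_ir: str, tgt_ir: str, timeout_s: int = 30) -> bool:
    """Check src->tgt semantic equivalence with Alive2's alive-tv."""
    with tempfile.NamedTemporaryFile("w", suffix=".ll") as src, \
         tempfile.NamedTemporaryFile("w", suffix=".ll") as tgt:
        src.write(src_ir); src.flush()
        tgt.write(tgt_ir); tgt.flush()
        try:
            out = subprocess.run(
                ["alive-tv", src.name, tgt.name],
                capture_output=True, text=True, timeout=timeout_s,
            )
        except subprocess.TimeoutExpired:
            return False  # treat verifier timeouts as unverified
        return ("seems to be correct" in out.stdout
                and "doesn't verify" not in out.stdout)

def instruction_count(ir: str) -> int:
    """Crude optimization proxy: count instruction-like lines in the IR body."""
    lines = [l.strip() for l in ir.splitlines()]
    return sum(1 for l in lines
               if l and not l.startswith((";", "declare", "define", "}"))
               and not l.endswith(":"))

def reward(src_ir: str, tgt_ir: str) -> float:
    """Correctness-gated reward: no credit for unverified transformations."""
    if not verifies(src_ir, tgt_ir):
        return -1.0  # assumed penalty for incorrect or ill-formed output
    saved = instruction_count(src_ir) - instruction_count(tgt_ir)
    return 1.0 + 0.1 * max(saved, 0)  # assumed weighting

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """GRPO: advantages are rewards normalized within a sampled group."""
    mu = statistics.fmean(rewards)
    sigma = statistics.pstdev(rewards) or 1.0
    return [(r - mu) / sigma for r in rewards]
```

In a GRPO training loop, these group-normalized advantages would weight the policy-gradient update over each group of candidate rewrites sampled for the same input, with no separate value model required.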

We demonstrate our technique by designing an LLM-based peephole optimizer over LLVM IR. Our method significantly improves the correctness of IR optimizations versus the base Qwen-3B model applied with just a prompt and no fine-tuning, achieving a 5.4× improvement in the amount of code successfully modified. The resulting model produces verifiably correct output 90% of the time, comfortably outperforming larger state-of-the-art LLMs, including Meta's LLM Compiler. This yields speedups of 2.3× over code compiled at -O0, comparable to the handwritten LLVM -instcombine pass, and produces emergent optimizations that outperform it in 20% of cases.
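For context, the comparison with instcombine can be reproduced in spirit using stock LLVM tooling. The sketch below is ours, not the paper's evaluation harness: it pits an illustrative model-style rewrite (multiply-by-8 lowered to a shift) against opt -passes=instcombine, and checks semantic equivalence with alive-tv. The file names and example IR are assumptions.

```python
# Illustrative harness contrasting a model-proposed peephole with LLVM's
# handwritten instcombine pass (not the paper's evaluation code).
# Assumes `opt` (LLVM) and `alive-tv` (Alive2) are on the PATH.
import subprocess

SRC_IR = """\
define i32 @f(i32 %x) {
  %r = mul i32 %x, 8
  ret i32 %r
}
"""

# A rewrite of the kind a trained model might emit: multiply-by-8 as a shift.
MODEL_IR = """\
define i32 @f(i32 %x) {
  %r = shl i32 %x, 3
  ret i32 %r
}
"""

with open("src.ll", "w") as f:
    f.write(SRC_IR)
with open("model.ll", "w") as f:
    f.write(MODEL_IR)

# Baseline: LLVM's handwritten peephole optimizer over the same input.
subprocess.run(["opt", "-S", "-passes=instcombine", "src.ll", "-o", "baseline.ll"],
               check=True)

# Verify the model's rewrite against the source with Alive2.
check = subprocess.run(["alive-tv", "src.ll", "model.ll"],
                       capture_output=True, text=True)
print("verified" if "seems to be correct" in check.stdout else "rejected")
```

alive-tv accepts a pair of .ll files and reports, per matching function name, whether the source-to-target transformation verifies, which is what makes it usable as an automatic correctness oracle here.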

Wed 4 Feb

Displayed time zone: Hobart

11:30 - 12:50
Optimization
Main Conference at Bronte
Chair(s): Teresa Johnson Google
11:30
20m
Talk
A Reinforcement Learning Environment for Automatic Code Optimization in the MLIR Compiler
Main Conference
Mohammed Tirichine New York University Abu Dhabi; École Nationale Supérieure d'Informatique, Nassim Ameur New York University Abu Dhabi; École Nationale Supérieure d'Informatique, Nazim Bendib New York University Abu Dhabi; École Nationale Supérieure d'Informatique, Iheb Nassim Aouadj New York University Abu Dhabi, Djad Bouchama New York University Abu Dhabi; University of Science and Technology Houari Boumediene, Rafik Bouloudene New York University Abu Dhabi; University of Science and Technology Houari Boumediene, Riyadh Baghdadi New York University Abu Dhabi
Pre-print Media Attached
11:50
20m
Talk
Towards Threading the Needle of Debuggable Optimized Binaries
Main Conference
Cristian Assaiante Sapienza University of Rome, Simone Di Biasio Sapienza University of Rome, Snehasish Kumar Google LLC, Giuseppe Antonio Di Luna Sapienza University of Rome, Daniele Cono D'Elia Sapienza University of Rome, Leonardo Querzoni Sapienza University of Rome
Pre-print Media Attached
12:10
20m
Talk
Compiler-Assisted Instruction Fusion
Main Conference
Ravikiran Ravindranath Reddy University of Murcia, Sawan Singh AMD, Arthur Perais CNRS, Alberto Ros University of Murcia, Alexandra Jimborean University of Murcia
Pre-print
12:30
20m
Talk
LLM-VeriOpt: Verification-Guided Reinforcement Learning for LLM-Based Compiler Optimization
Main Conference
Xiangxin Fang Queen Mary University of London; University of Edinburgh, Jiaqin Kang Queen Mary University of London, Rodrigo C. O. Rocha University of Edinburgh, Sam Ainsworth University of Edinburgh, Lev Mukhanov IMEC (Cambridge); Queen Mary University of London
Pre-print Media Attached