CGO 2026
Sat 31 January - Wed 4 February 2026, Sydney, Australia
co-located with HPCA/CGO/PPoPP/CC 2026

This program is tentative and subject to change.

Wed 4 Feb 2026 09:50 - 10:10 at Bronte - Tensor Optimization Chair(s): Bastian Hagedorn

A computational graph in a tensor compiler represents a program as a set of kernels connected by edges that describe how tensors flow between them. Tensors may be dense or sparse, the latter storing only nonzero values. Recognizing sparsity allows the compiler to avoid redundant computation and reduce memory usage. This paper introduces Sparsity Propagation Analysis (SPA), a static analysis that conservatively infers sparsity in tensor slices: sets of tensor elements that share a fixed subset of indices. SPA estimates which slices can be treated as zero via forward, backward, and lateral propagation of sparsity information. SPA applies to tensors of arbitrary dimension and to operators defined over general semirings (e.g., addition and multiplication, MAX and MIN, logical AND and OR). Its runtime grows with the number of slices rather than the number of elements, making it asymptotically faster than executing even a single iteration of the graph. We implement SPA in the Tensor Algebra Compiler (TACO) and evaluate it on benchmarks from the Einsum Collection. Results indicate that multidirectional propagation increases the precision of the sparsity analysis, reduces memory consumption, and enables optimizations that improve runtime performance.
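
To make the slice-level idea concrete, the following is a minimal Python sketch of forward propagation for a single matrix-multiply kernel. It is an illustration only, not the paper's SPA implementation in TACO: the names MatrixSparsity and propagate_matmul are hypothetical, and a "slice" is reduced here to a per-row or per-column all-zero flag. Backward and lateral propagation would follow analogously by pushing facts from outputs to inputs and between sibling kernels.

    # Illustrative sketch (hypothetical names, not the paper's TACO code).
    # Slice-level sparsity facts for C = A @ B over the (+, *) semiring:
    # the additive identity 0 annihilates *, so an all-zero input slice
    # forces the corresponding output slice to be all zero. The same rule
    # holds for any semiring whose additive identity annihilates the
    # multiplicative operator (e.g., +inf in the (MIN, +) semiring).
    from dataclasses import dataclass, field

    @dataclass
    class MatrixSparsity:
        """Per-slice sparsity facts for a 2-D tensor (hypothetical helper)."""
        zero_rows: set[int] = field(default_factory=set)  # rows known all-zero
        zero_cols: set[int] = field(default_factory=set)  # cols known all-zero

    def propagate_matmul(a: MatrixSparsity, b: MatrixSparsity) -> MatrixSparsity:
        """Forward propagation for C = A @ B.

        If row i of A is all zero, then C[i, j] = sum_k A[i, k] * B[k, j] = 0
        for every j, so row i of C is all zero; symmetrically, an all-zero
        column j of B forces column j of C to be all zero. Only slice-level
        facts are touched, never elements, mirroring the abstract's claim
        that cost scales with the number of slices.
        """
        return MatrixSparsity(zero_rows=set(a.zero_rows),
                              zero_cols=set(b.zero_cols))

    if __name__ == "__main__":
        a = MatrixSparsity(zero_rows={0, 3})  # rows 0 and 3 of A are all zero
        b = MatrixSparsity(zero_cols={2})     # column 2 of B is all zero
        c = propagate_matmul(a, b)
        print(c)  # rows {0, 3} and column {2} of C may be skipped entirely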

Wed 4 Feb

Displayed time zone: Hobart

09:50 - 11:10
Tensor Optimization
Main Conference at Bronte
Chair(s): Bastian Hagedorn NVIDIA
09:50
20m
Talk
Multidirectional Propagation of Sparsity Information across Tensor Slices
Main Conference
Kaio Henrique Andrade Ananias Federal University of Minas Gerais, Danila Seliayeu University of Alberta, Jose Nelson Amaral University of Alberta, Fernando Magno Quintão Pereira Federal University of Minas Gerais
Pre-print Media Attached
10:10
20m
Talk
Synthesizing Specialized Sparse Tensor Accelerators for FPGAs via High-Level Functional Abstractions
Main Conference
Hamza Javed McGill University, Christophe Dubach McGill University
Pre-print
10:30
20m
Talk
Progressive Low-Precision Approximation of Tensor Operators on GPUs: Enabling Greater Trade-Offs between Performance and Accuracy
Main Conference
Fan Luo Institute of Computing Technology, Chinese Academy of Sciences, Guangli Li Institute of Computing Technology, Chinese Academy of Sciences, Zhaoyang Hao Institute of Computing Technology, Chinese Academy of Sciences, Xueying Wang Beijing University of Posts and Telecommunications, Xiaobing Feng Institute of Computing Technology, Chinese Academy of Sciences, Huimin Cui Institute of Computing Technology, Chinese Academy of Sciences, Jingling Xue University of New South Wales
Pre-print
10:50
20m
Talk
Tensor Program Superoptimization through Cost-Guided Symbolic Program Synthesis
Main Conference
Alexander Brauckmann University of Edinburgh, Aarsh Chaube University of Edinburgh, José Wesley De Souza Magalhães University of Edinburgh, Elizabeth Polgreen University of Edinburgh, Michael F. P. O'Boyle University of Edinburgh
Pre-print Media Attached