Pure, Low-Level Tensor Program Rewriting via Access Patterns (Representation Pearl)
Tensor kernels in machine learning (ML) often correspond to pure mathematical expressions, making term rewriting an attractive strategy for optimization and mapping to specialized hardware accelerators. However, existing ML intermediate representations (IRs) tend to either be pure but high-level, making low-level rewrites to hardware targets inexpressible, or low-level but impure, hampering the use of term rewriting altogether.
This paper introduces Lakeroad, a pure IR whose core abstraction—the access pattern—enables low-level, layout-aware, hardware-centric program rewrites. We demonstrate how term rewriting in Lakeroad can be used to map program fragments to hardware accelerator invocations and automatically discover classic data layout transformations like im2col. Lakeroad establishes a new foundation for exploring further term rewriting techniques in optimizing low-level tensor programs.
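To make the im2col transformation mentioned above concrete, here is a minimal, illustrative sketch (not taken from the paper's implementation) of how im2col re-lays-out a 2D image so that convolution becomes a single matrix multiply, using NumPy; the function name and shapes are assumptions for illustration only.

```python
import numpy as np

def im2col(image, kh, kw):
    """Flatten each kh x kw sliding window of a 2D image into a row,
    so that 2D convolution reduces to one matrix-vector product."""
    h, w = image.shape
    rows = []
    for i in range(h - kh + 1):
        for j in range(w - kw + 1):
            rows.append(image[i:i + kh, j:j + kw].ravel())
    # Result shape: ((h-kh+1)*(w-kw+1), kh*kw)
    return np.stack(rows)

# Convolution via im2col: one matmul instead of nested loops.
image = np.arange(16.0).reshape(4, 4)
kernel = np.ones((2, 2))
out = im2col(image, 2, 2) @ kernel.ravel()  # shape: (9,)
```

The payoff of this layout change is that the inner computation is now a plain matrix multiply, exactly the kind of operation that hardware accelerators expose as a primitive; discovering such rewrites automatically is what the abstract claims for access-pattern-based term rewriting.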
Mon 21 Jun, 16:45 - 19:15 (Eastern Time, US & Canada)
Gus Henry Smith, Andrew Liu, Steven Lyubomirsky, Scott Davidson, Joseph McMahan, Michael Bedford Taylor, Luis Ceze, and Zachary Tatlock (University of Washington, Seattle)