Pure, Low-Level Tensor Program Rewriting via Access Patterns (Representation Pearl)
Tensor kernels in machine learning (ML) often correspond to pure mathematical expressions, making term rewriting an attractive strategy for optimization and mapping to specialized hardware accelerators. However, existing ML intermediate representations (IRs) tend to either be pure but high-level, making low-level rewrites to hardware targets inexpressible, or low-level but impure, hampering the use of term rewriting altogether.
This paper introduces Glenside, a pure IR whose core abstraction—the access pattern—enables low-level, layout-aware, hardware-centric program rewrites. We demonstrate how term rewriting in Glenside can be used to map program fragments to hardware accelerator invocations and to automatically discover classic data layout transformations like im2col. Glenside establishes a new foundation for exploring further term rewriting techniques in optimizing low-level tensor programs.
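To make the im2col transformation mentioned above concrete, the following is a minimal NumPy sketch (an illustration of the classic layout transformation, not the paper's implementation): each convolution window is copied out as a row of a patch matrix, so the convolution itself becomes a single matrix multiply.

```python
import numpy as np

def im2col(x, kh, kw, stride=1):
    """Lower a 2D convolution input into a patch matrix.

    x: (H, W) input image.
    Returns a (out_h * out_w, kh * kw) matrix whose rows are the
    flattened convolution windows, so `im2col(x, kh, kw) @ k.ravel()`
    computes the convolution of x with a (kh, kw) kernel k.
    """
    H, W = x.shape
    out_h = (H - kh) // stride + 1
    out_w = (W - kw) // stride + 1
    cols = np.empty((out_h * out_w, kh * kw), dtype=x.dtype)
    idx = 0
    for i in range(0, H - kh + 1, stride):
        for j in range(0, W - kw + 1, stride):
            cols[idx] = x[i:i + kh, j:j + kw].ravel()
            idx += 1
    return cols

# Convolution expressed as a dense matrix multiply over the patch matrix.
x = np.arange(16, dtype=np.float64).reshape(4, 4)
k = np.ones((3, 3))
y = im2col(x, 3, 3) @ k.ravel()  # flattened 2x2 convolution output
```

The payoff of this layout change is that the inner computation is now a plain GEMM, which is exactly the kind of kernel that hardware accelerators and BLAS libraries execute efficiently.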
Mon 21 Jun (times in Eastern Time, US & Canada)
16:45 - 19:15

16:45 | 60m Talk | Machine Learning for Autotuning Production Machine Learning Compilers (MAPS)

17:45 | 30m Talk | Pure, Low-Level Tensor Program Rewriting via Access Patterns (Representation Pearl) (MAPS)
Gus Henry Smith, Andrew Liu, Steven Lyubomirsky, Scott Davidson, Joseph McMahan, Michael Bedford Taylor, Luis Ceze, Zachary Tatlock (University of Washington, Seattle)

18:15 | 30m Talk | ControlFlag: A Self-supervised Idiosyncratic Pattern Detection System for Software Control Structures (MAPS)

18:45 | 30m Talk | Predictive Data Locality Optimization for Higher-Order Tensor Computations (MAPS)
Tharindu Patabandi (University of Utah), Anand Venkat, Abhishek Kulkarni (Intel), Pushkar Ratnalikar (Intel Labs), Mary Hall (University of Utah), Justin Gottschlich (Intel Labs / Penn)