- Published on
GPU Kernel Scientist: An LLM-Driven Framework for Iterative Kernel Optimization
- Authors
- Martin Andrews (@mdda123)
This paper was accepted to the Efficient Systems for Foundation Models Workshop (ES-FoMo III) at ICML 2025 in Vancouver, Canada.
Abstract
Optimizing GPU kernels for high performance is a complex task, often demanding deep architectural knowledge, extensive profiling, and iterative experimentation. This challenge is amplified when targeting newer or less-documented GPU architectures where traditional development aids are scarce. This paper introduces an LLM-powered "GPU Kernel Scientist," an automated methodology for iteratively refining accelerator kernels.
Our methodology employs LLMs in a multi-stage, evolutionary process: (a) strategically selecting promising prior code versions as a basis for new iterations; (b) generating hypotheses for optimization experiments based on existing code and knowledge assimilated from the general GPU literature; and (c) autonomously implementing these experiments through code modification and subsequent submission to an external evaluation system, using only observed timing data as performance feedback. We detail how this approach navigates the challenges of the AMD MI300 target architecture and leverages LLMs to compensate for limited domain-specific human expertise.
Since quantitative results from an ongoing performance competition were embargoed as of the paper's submission date, we present the architectural design, operational workflow, and qualitative insights, highlighting the potential of LLM-driven agents to democratize and accelerate GPU kernel optimization, especially in resource-constrained or rapidly evolving hardware environments.
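The three-stage loop described in the abstract is compact enough to sketch directly. Below is a minimal Python sketch of one evolutionary iteration; the helper names (`pick_parent`, `llm`, `evaluate_kernel`) and the prompt strings are hypothetical illustrations of the idea, not the paper's actual implementation.

```python
# Minimal sketch of the three-stage loop: (a) select a parent kernel,
# (b) generate an experiment hypothesis, (c) implement and evaluate it.
# All helper names and prompts are hypothetical stand-ins.
import random

def pick_parent(population):
    # (a) Favour faster prior kernels while keeping some exploration,
    # by sampling with weights proportional to inverse runtime.
    weights = [1.0 / k["time_ms"] for k in population]
    return random.choices(population, weights=weights, k=1)[0]

def evolve_step(population, llm, evaluate_kernel):
    parent = pick_parent(population)
    # (b) Ask the LLM for one optimization experiment, grounded in the
    # parent's source code and its observed timing.
    hypothesis = llm(
        "Propose one experiment to speed up this AMD MI300 kernel "
        f"(last timing: {parent['time_ms']:.3f} ms):\n{parent['src']}"
    )
    # (c) Ask the LLM to apply the experiment as a code modification.
    child_src = llm(
        "Apply this experiment to the kernel and return the full source:\n"
        f"{hypothesis}\n---\n{parent['src']}"
    )
    # Submit to the external evaluation system; the returned wall-clock
    # timing is the only performance feedback the loop receives.
    time_ms = evaluate_kernel(child_src)
    population.append({"src": child_src, "time_ms": time_ms})
    return population[-1]
```

In the paper's setting, each stage is an LLM call with richer context (prior experiment history and assimilated GPU literature), and the evaluator is the competition's external timing harness.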
Explainer Video

Poster Version
TBA...
Link to Paper
https://arxiv.org/abs/2506.20807
BibTeX
Entry for the arXiv version:
@misc{andrews2025gpukernelscientistllmdriven,
  title={GPU Kernel Scientist: An LLM-Driven Framework for Iterative Kernel Optimization},
  author={Martin Andrews and Sam Witteveen},
  year={2025},
  eprint={2506.20807},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2506.20807},
}
Acknowledgements
Support for this research was provided by the Google AI Developer Programs team, including access to the Gemini models and GPUs on Google Cloud Platform.