Efficient And Numerically Stable GPU Implementations of The Fast Walsh-Hadamard Transform
Master's Thesis
Abstract
The Walsh-Hadamard Transform is a fundamental mathematical operation with many applications. Recent findings on its applicability within Convolutional Neural Networks (CNNs) have attracted considerable attention, leading to very efficient GPU implementations of the Fast Walsh-Hadamard Transform (FWHT) algorithm. However, current state-of-the-art FWHT implementations are purpose-built for use within CNNs: the supported input sizes are heavily limited and the available precision is low, which rules out many alternative applications. Additionally, recent algorithmic improvements by Alman and Rao have not been used in these implementations. This thesis tackles all of these problems, presenting GPU implementations of both the Folklore and the Alman & Rao algorithms. The implementations support input sizes of up to 2^63 and the numerical formats FP64, FP32, FP16, and BF16. Both implementations are heavily memory bound, only outperforming Nvidia's implementation of the computationally heavier Fast Fourier Transform on L4/L40S GPUs when using FP64. We find no benefit in using Alman & Rao's algorithm. Additionally, we employ techniques heavily inspired by Kahan and Neumaier summation, empirically reducing relative errors on a wide range of real-world applications. For a given numerical format, the median reduction of the relative error over all experiments is 30% and 75%, respectively, while using Alman & Rao's algorithm leads to a 50% increase in relative error. All implementations are available as free software: https://github.com/erwinia42/FWHT
Keywords
FWHT, WHT, GPU, CUDA, Kahan, Neumaier, Floating Point Errors
