Design and Evaluation of a Software Abstraction Layer for Heterogeneous Neural Network Accelerators

Type
Master's thesis
Program
Embedded electronic system design (MPEES), MSc
Communication Engineering (MPCOM), MSc
Published
2022
Authors
Sreedhar, Aishwarya
Nagarajan, Naga Sarayu
Abstract
Machine learning is becoming increasingly important across a wide range of hardware platforms. Current frameworks rely on vendor-specific operator libraries and cater to a small number of server-class GPUs. To support a variety of hardware accelerators from different suppliers, which may change over time, it is critical to abstract the hardware so that the core neural network algorithms can be deployed across this heterogeneous hardware with minimal effort. Various vendor-specific consortiums and standards exist on the market, but to make the software portable an abstraction layer should be built on top of these proprietary standards. In this thesis, we use a compiler that provides a level of abstraction above CUDA and OpenCL, so that detailed knowledge of CUDA/OpenCL programming is not required. One such compiler is Apache TVM, an open-source machine learning compiler framework for CPUs, GPUs, and other hardware accelerators. We perform a comprehensive comparison between models compiled with the Apache TVM framework and native compilation for two hardware vendors, Nvidia and Qualcomm. Framework models are fed into the deep learning compiler, which produces optimised code for a range of deep learning hardware. TVM exposes graph- and operator-level optimisations that give deep learning workloads performance portability across a variety of hardware back-ends. It tackles deep-learning-specific optimisation problems such as high-level operator fusion, mapping to arbitrary hardware primitives, and memory latency hiding. It also uses an evolutionary, learning-based cost-modelling method for rapid exploration of candidate code, automating the optimisation of low-level programs to hardware features. Experiments show that TVM delivers performance comparable to state-of-the-art, hand-tuned libraries for low-power CPUs, mobile GPUs, and server-class GPUs across hardware back-ends. TVM's ability to target new accelerator back-ends, such as a GPU-based generic deep learning accelerator using CUDA and OpenCL, is also demonstrated.
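As a rough sketch of the workflow described above, the following Python fragment shows how a pre-trained model could be compiled with TVM's Relay API and retargeted between CUDA and OpenCL by changing only the target string. The model file name, input name, and input shape are illustrative assumptions, not taken from the thesis.

# Minimal TVM compilation sketch (illustrative; model file, input name and shape are assumed).
import numpy as np
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Convert a framework model (here ONNX) into TVM's Relay graph representation.
onnx_model = onnx.load("model.onnx")  # hypothetical model file
mod, params = relay.frontend.from_onnx(onnx_model, {"input": (1, 3, 224, 224)})

# The hardware back-end is selected by a target string: "cuda" for Nvidia GPUs,
# "opencl" for GPUs programmed through OpenCL (e.g. a Qualcomm Adreno GPU).
target_name = "cuda"  # or "opencl"
target = tvm.target.Target(target_name)

# Compile with graph- and operator-level optimisations such as operator fusion.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# Execute the compiled module on the chosen device.
dev = tvm.device(target_name, 0)
runtime = graph_executor.GraphModule(lib["default"](dev))
runtime.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
runtime.run()
print(runtime.get_output(0).numpy().shape)

The same script can be pointed at a different back-end by swapping the target string, which is the abstraction-layer property compared against native CUDA/OpenCL compilation in the thesis.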
Subject / keywords
Deep machine learning, Apache TVM, GPU, OpenCL, CUDA, thesis, self-driving cars, Nvidia Jetson, Qualcomm, performance, native programming