Design and Evaluation of a Software Abstraction Layer for Heterogeneous Neural Network Accelerators

dc.contributor.authorSreedhar, Aishwarya
dc.contributor.authorNagarajan, Naga Sarayu
dc.contributor.departmentChalmers tekniska högskola / Institutionen för data och informationsteknik (sv)
dc.contributor.departmentChalmers University of Technology / Department of Computer Science and Engineering (en)
dc.contributor.examinerLarsson-Edefors, Per
dc.contributor.supervisorPericas, Miquel
dc.date.accessioned2022-12-05T09:28:26Z
dc.date.available2022-12-05T09:28:26Z
dc.date.issued2022
dc.date.submitted2022
dc.description.abstractMachine learning is becoming increasingly important across a wide range of hardware platforms. Current frameworks rely on vendor-specific operator libraries and cater to a small number of server-class GPUs. To support a variety of hardware accelerators from different suppliers, which may change over time, it is critical to abstract the hardware so that the core neural network algorithms can be deployed across this heterogeneous hardware with minimal effort. Various vendor-specific consortiums and standards exist, but to make software portable, an abstraction layer should be built on top of these proprietary standards. In this thesis, we use a compiler that provides a level of abstraction above CUDA and OpenCL, so that the details of CUDA/OpenCL programming need not be known. One such compiler is Apache TVM, an open-source machine learning compiler framework for CPUs, GPUs, and other hardware accelerators. We perform a comprehensive comparison between models compiled with the Apache TVM framework and natively compiled models for two hardware vendors, Nvidia and Qualcomm. Framework models are fed into deep learning compilers, which generate optimised code for a range of deep learning hardware. TVM exposes graph-level and operator-level optimisations that give deep learning workloads performance portability across a variety of hardware backends. It tackles deep-learning-specific optimisation problems such as high-level operator fusion, mapping to arbitrary hardware primitives, and memory latency hiding. It also uses an evolutionary, learning-based cost-modelling method for rapid exploration of candidate code, automating the optimisation of low-level programs to hardware features.
Experiments show that TVM delivers performance comparable to state-of-the-art, hand-tuned libraries for low-power CPUs, mobile GPUs, and server-class GPUs across hardware backends. TVM's ability to target new accelerator backends, such as a GPU-based generic deep learning accelerator using CUDA and OpenCL, is also demonstrated.
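The portability idea in the abstract can be illustrated with a minimal, self-contained sketch: the model code stays unchanged, and only a target string selects the vendor backend, much like passing `target="cuda"` versus `target="opencl"` to TVM. All class and function names below (`Backend`, `deploy`, etc.) are hypothetical illustrations, not TVM's actual API.

```python
from abc import ABC, abstractmethod

class Backend(ABC):
    """Hypothetical vendor-neutral backend interface."""
    name: str

    @abstractmethod
    def compile(self, graph: str) -> str:
        """Lower an abstract compute graph to backend-specific code."""

class CudaBackend(Backend):
    name = "cuda"
    def compile(self, graph: str) -> str:
        # A real abstraction layer would emit CUDA kernels here.
        return f"[cuda kernel for {graph}]"

class OpenCLBackend(Backend):
    name = "opencl"
    def compile(self, graph: str) -> str:
        # ... and OpenCL kernels here, behind the same interface.
        return f"[opencl kernel for {graph}]"

# Registry mapping target strings to backend implementations.
BACKENDS = {b.name: b() for b in (CudaBackend, OpenCLBackend)}

def deploy(graph: str, target: str) -> str:
    # The model description is identical for every vendor; the target
    # string alone selects the backend, which is the essence of the
    # abstraction layer evaluated in the thesis.
    return BACKENDS[target].compile(graph)
```

In TVM itself, the analogous step is compiling one imported model with different target strings, so that retargeting from an Nvidia GPU to a Qualcomm GPU changes configuration rather than code.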
dc.identifier.coursecodeDATX05
dc.identifier.urihttps://odr.chalmers.se/handle/20.500.12380/305876
dc.language.isoeng
dc.setspec.uppsokTechnology
dc.subjectDeep machine learning
dc.subjectApache TVM
dc.subjectGPU
dc.subjectOpenCL
dc.subjectCUDA
dc.subjectthesis
dc.subjectself-driving cars
dc.subjectNvidia Jetson
dc.subjectQualcomm
dc.subjectperformance
dc.subjectnative programming
dc.titleDesign and Evaluation of a Software Abstraction Layer for Heterogeneous Neural Network Accelerators
dc.type.degreeExamensarbete för masterexamen (sv)
dc.type.degreeMaster's Thesis (en)
dc.type.uppsokH
local.programmeEmbedded electronic system design (MPEES), MSc
local.programmeCommunication Engineering (MPCOM), MSc
Download
Original bundle (showing 1 - 1 of 1)
Name: CSE 22-129 Nagarajan Sreedhar.pdf
Size: 3.73 MB
Format: Adobe Portable Document Format
License bundle (showing 1 - 1 of 1)
Name: license.txt
Size: 1.64 KB
Format: Item-specific license agreed upon to submission