PLSSVM - Parallel Least Squares Support Vector Machine

Marcel Breyer

Stuttgart, Baden-Württemberg

We propose the PLSSVM library, which efficiently brings SVMs to massively parallel accelerators. It implements the basic functionality of the most widely used SVM library, LIBSVM, and can target different hardware from various vendors by using our backends: OpenMP, CUDA, HIP, OpenCL, and SYCL.

Project status: Published/In Market

HPC, Artificial Intelligence

Intel Technologies
DevCloud, DPC++, Intel Integrated Graphics, Intel CPU


Overview / Usage

We developed our C++17 PLSSVM library, which efficiently brings SVMs to massively parallel accelerators.

We have implemented the basic functionality of the most widely used SVM library, LIBSVM, as a drop-in alternative that is significantly accelerated on GPUs. However, sparse data sets, where all but a few feature entries are zero, are treated as if they were dense, i.e., zeros are stored explicitly where necessary. As of now, our implementation only supports binary classification. Furthermore, we support multi-GPU execution for the linear kernel.
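To make the dense treatment concrete, the following minimal sketch (illustrative only, not PLSSVM's actual parser) expands one line of the sparse LIBSVM file format into a dense row with explicit zeros:

```cpp
#include <cstddef>
#include <sstream>
#include <string>
#include <vector>

// Illustrative only: expand one line in sparse LIBSVM format
// ("label index:value ...", indices are 1-based) into a dense row,
// explicitly storing zeros for all features not present in the line.
std::vector<double> to_dense_row(const std::string &line, std::size_t num_features) {
    std::vector<double> row(num_features, 0.0);  // all features default to zero
    std::istringstream iss{ line };
    std::string token;
    iss >> token;  // skip the leading label
    while (iss >> token) {
        const std::size_t colon = token.find(':');
        const std::size_t index = std::stoul(token.substr(0, colon));  // 1-based feature index
        row[index - 1] = std::stod(token.substr(colon + 1));
    }
    return row;
}
```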

We aim to provide vendor-independent, scalable performance for high-end compute nodes. To the best of our knowledge, our PLSSVM library is the first SVM implementation, let alone LS-SVM implementation, that supports multiple backends, in particular OpenMP, CUDA, HIP, OpenCL, and SYCL (the currently supported SYCL implementations are DPC++ and hipSYCL). This allows us to support a broad spectrum of hardware, e.g., CPUs as well as GPUs from different vendors like Intel, NVIDIA, and AMD. This distinguishes our approach from most previous implementations, which are restricted to NVIDIA GPUs due to their focus on CUDA.
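A hypothetical sketch of what such vendor-independent backend dispatch can look like; the enum, class, and function names below are illustrative and not PLSSVM's actual API:

```cpp
#include <memory>
#include <stdexcept>

// Illustrative backend dispatch: one common interface, one concrete
// implementation per target framework, selected at runtime.
enum class backend { openmp, cuda, hip, opencl, sycl };

struct csvm {  // common interface every backend implements
    virtual ~csvm() = default;
    virtual void fit() = 0;
};

struct openmp_csvm final : csvm { void fit() override { /* CPU solver */ } };
struct cuda_csvm final : csvm { void fit() override { /* NVIDIA GPU solver */ } };
// ... analogous types for HIP, OpenCL, and SYCL

std::unique_ptr<csvm> make_csvm(backend b) {
    switch (b) {
        case backend::openmp: return std::make_unique<openmp_csvm>();
        case backend::cuda:   return std::make_unique<cuda_csvm>();
        default:              throw std::runtime_error{ "backend not compiled in" };
    }
}
```

A factory like this lets the numerically identical solver run on whichever device the user selects, which is what makes a fair cross-vendor comparison possible in the first place.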

With the aid of PLSSVM, we want to compare the individual languages and frameworks with respect to their advantages and differences on various hardware platforms.

Our ultimate goal is a concise overview of which framework is best suited for which hardware and workload. For the results, see our published papers.

Methodology / Approach

We want to compare the performance of our backends on different hardware platforms (GPUs and CPUs) and discuss the reasons for possible differences. To that end, we conduct scaling tests with varying numbers of data points and features.
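As an illustration of such a scaling setup (the function name and sweep sizes are our own, not taken from the paper), a synthetic dense data set with a configurable number of points and features could be generated like this:

```cpp
#include <cstddef>
#include <random>
#include <vector>

// Illustrative generator for the kind of synthetic input used in scaling runs:
// a dense data set with a configurable number of points and features.
std::vector<std::vector<double>> make_scaling_input(std::size_t num_points,
                                                    std::size_t num_features,
                                                    unsigned seed = 42) {
    std::mt19937 gen{ seed };
    std::uniform_real_distribution<double> dist{ -1.0, 1.0 };
    std::vector<std::vector<double>> data(num_points, std::vector<double>(num_features));
    for (auto &point : data) {
        for (auto &feature : point) {
            feature = dist(gen);
        }
    }
    return data;
}
// e.g., sweep num_points at a fixed num_features, then vice versa,
// and time each backend on the same generated input
```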

For the results on four different NVIDIA GPUs, one AMD GPU, two Intel iGPUs, and three different CPUs, see our paper "A Comparison of SYCL, OpenCL, CUDA, and OpenMP for Massively Parallel Support Vector Machine Classification on Multi-Vendor Hardware".

Technologies Used

We use CPUs from different vendors, such as Intel (Core and Xeon) and AMD (Ryzen and EPYC), as well as GPUs from various vendors. We do not restrict our experiments to data center GPUs like the NVIDIA A100 or AMD Radeon Pro VII, but also include consumer GPUs like the NVIDIA RTX 3080 and integrated GPUs like the Intel Iris Xe MAX or UHD Graphics. See our publications for a complete list of currently supported and tested hardware.

On the software side, we support many different backends: OpenMP, CUDA, HIP, OpenCL, and SYCL using DPC++ (Intel's public LLVM fork) and hipSYCL. We explicitly do not use highly optimized vendor-specific libraries like cuBLAS, since we want to compare the differences in the frameworks/languages and not in the vendor-provided libraries.
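For example, instead of calling a vendor BLAS routine, the hot loops are written by hand in each framework. A simplified sketch of such a hand-written kernel, here a dense matrix-vector product (the kind of operation at the heart of a conjugate-gradient-based LS-SVM solve) in the OpenMP backend's style:

```cpp
#include <cstddef>
#include <vector>

// Hand-written dense matrix-vector product y = A * x, parallelized with
// plain OpenMP instead of delegating to a vendor library; the same loop
// structure can be ported to CUDA, HIP, OpenCL, or SYCL kernels.
void matvec(const std::vector<double> &A,  // row-major, n x n
            const std::vector<double> &x,
            std::vector<double> &y,
            std::size_t n) {
#pragma omp parallel for
    for (std::size_t row = 0; row < n; ++row) {
        double sum = 0.0;
        for (std::size_t col = 0; col < n; ++col) {
            sum += A[row * n + col] * x[col];
        }
        y[row] = sum;
    }
}
```

Keeping the kernels hand-written ensures that measured differences reflect the frameworks themselves rather than the maturity of each vendor's tuned library.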

Repository

https://github.com/SC-SGS/PLSSVM
