GPU Shared Memory Usage and Image Processing


In CUDA programming, shared memory is commonly used to speed up data access within a thread block, for example in matrix multiplication. We want to use shared memory on GPU devices through oneAPI to accelerate data access in the same way, and oneAPI's natural parallelism also makes it well suited to image processing.

Project status: Concept

oneAPI, HPC

Intel Technologies
DevCloud, oneAPI, Intel CPU, Other

Overview / Usage

We propose that oneAPI can be used to port kernels from CUDA in a closely analogous form, where data shared within a work-group is kept in shared (local) memory to accelerate data access, matrix multiplication being a typical example. At the same time, we note that image-processing algorithms migrate well to the oneAPI framework thanks to oneAPI's natural parallelism. In this project we want to demonstrate the effectiveness of the oneAPI framework by using it to implement matrix multiplication and image edge detection.
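To illustrate how the CUDA shared-memory pattern could carry over, the following is a minimal sketch of tiled matrix multiplication using a SYCL local_accessor, the oneAPI/SYCL counterpart of CUDA shared memory. It assumes a DPC++/SYCL 2020 compiler, a GPU device, square matrices of dimension N, and a tile size that divides N; the values of N and TILE are placeholders rather than measured choices, and this is not the project's final implementation.

// Minimal sketch: tiled matrix multiplication with SYCL local memory
// (the oneAPI counterpart of CUDA shared memory). Assumes a SYCL 2020 /
// DPC++ compiler, a GPU device, and N divisible by TILE.
#include <sycl/sycl.hpp>
#include <vector>

constexpr size_t N = 512;    // matrix dimension (placeholder)
constexpr size_t TILE = 16;  // work-group tile size (placeholder)

int main() {
  std::vector<float> a(N * N, 1.0f), b(N * N, 2.0f), c(N * N, 0.0f);

  sycl::queue q{sycl::gpu_selector_v};  // requires a GPU device
  {
    sycl::buffer<float, 2> bufA(a.data(), sycl::range<2>(N, N));
    sycl::buffer<float, 2> bufB(b.data(), sycl::range<2>(N, N));
    sycl::buffer<float, 2> bufC(c.data(), sycl::range<2>(N, N));

    q.submit([&](sycl::handler& h) {
      sycl::accessor A(bufA, h, sycl::read_only);
      sycl::accessor B(bufB, h, sycl::read_only);
      sycl::accessor C(bufC, h, sycl::write_only, sycl::no_init);

      // Work-group local memory: data staged here is shared by all
      // work-items in the same work-group, like CUDA shared memory.
      sycl::local_accessor<float, 2> tileA(sycl::range<2>(TILE, TILE), h);
      sycl::local_accessor<float, 2> tileB(sycl::range<2>(TILE, TILE), h);

      h.parallel_for(
          sycl::nd_range<2>(sycl::range<2>(N, N), sycl::range<2>(TILE, TILE)),
          [=](sycl::nd_item<2> it) {
            size_t row = it.get_global_id(0);
            size_t col = it.get_global_id(1);
            size_t li = it.get_local_id(0);
            size_t lj = it.get_local_id(1);

            float sum = 0.0f;
            // Walk over tiles of the shared dimension, staging each tile
            // in local memory so the whole work-group reuses it.
            for (size_t t = 0; t < N / TILE; ++t) {
              tileA[li][lj] = A[row][t * TILE + lj];
              tileB[li][lj] = B[t * TILE + li][col];
              sycl::group_barrier(it.get_group());

              for (size_t k = 0; k < TILE; ++k)
                sum += tileA[li][k] * tileB[k][lj];
              sycl::group_barrier(it.get_group());
            }
            C[row][col] = sum;
          });
    });
  }  // buffers go out of scope here, copying results back into c
  return 0;
}

Each work-group stages one tile of A and one tile of B in local memory, so every work-item in the group reuses the staged data instead of re-reading global memory.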

Methodology / Approach

  • Use shared (local) memory on the GPU device to speed up matrix multiplication.
  • Map image edge-detection filtering operations onto oneAPI's data-parallel (multithreaded) programming model, as sketched below.
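
As a rough illustration of the second point, the filtering can be expressed as one work-item per pixel in a SYCL parallel_for. The sketch below applies a 3x3 Sobel filter to a single-channel grayscale image; the image size W x H, float pixel values, and zeroed border handling are assumptions for illustration rather than part of the project plan.

// Minimal sketch: 3x3 Sobel edge detection with one work-item per pixel.
// Assumes a SYCL 2020 / DPC++ compiler, a GPU device, and a grayscale
// image already loaded into a float array; W and H are placeholders.
#include <sycl/sycl.hpp>
#include <vector>

int main() {
  constexpr int W = 640, H = 480;  // assumed image size
  std::vector<float> in(W * H, 0.0f), out(W * H, 0.0f);

  sycl::queue q{sycl::gpu_selector_v};  // requires a GPU device
  {
    sycl::buffer<float, 2> src(in.data(), sycl::range<2>(H, W));
    sycl::buffer<float, 2> dst(out.data(), sycl::range<2>(H, W));

    q.submit([&](sycl::handler& h) {
      sycl::accessor s(src, h, sycl::read_only);
      sycl::accessor d(dst, h, sycl::write_only, sycl::no_init);

      // One work-item per pixel: the filter maps directly onto parallel_for.
      h.parallel_for(sycl::range<2>(H, W), [=](sycl::id<2> idx) {
        int y = static_cast<int>(idx[0]);
        int x = static_cast<int>(idx[1]);
        // Zero the border so the 3x3 stencil stays in bounds.
        if (y == 0 || y == H - 1 || x == 0 || x == W - 1) {
          d[idx] = 0.0f;
          return;
        }
        // Horizontal and vertical Sobel gradients.
        float gx = -s[y - 1][x - 1] + s[y - 1][x + 1]
                   - 2.0f * s[y][x - 1] + 2.0f * s[y][x + 1]
                   - s[y + 1][x - 1] + s[y + 1][x + 1];
        float gy = -s[y - 1][x - 1] - 2.0f * s[y - 1][x] - s[y - 1][x + 1]
                   + s[y + 1][x - 1] + 2.0f * s[y + 1][x] + s[y + 1][x + 1];
        d[idx] = sycl::sqrt(gx * gx + gy * gy);  // gradient magnitude
      });
    });
  }  // buffer destruction copies the edge map back into out
  return 0;
}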