Automatic Distributed Parallelism with MPI (code2MPI)

We aim to ease distributed parallelization coding by suggesting the places and segments of the code that need synchronization (MPI functions), i.e., send/receive messages between different processes. This is done by creating a designated database and training a suitable language model.

Project status: Under Development

oneAPI, HPC, Artificial Intelligence

Intel Technologies
DevCloud


Overview / Usage

We propose to ease the process of inserting MPI functions by applying NLP techniques.

  • Understanding the variables' dependencies and the source code's flow requires grasping long sequences.
  • The attention mechanism inside transformers outputs contextualized vectors of the input data, making a transformer well suited for the given task.
  • We ease distributed parallelization coding by suggesting the places and segments of the code that need synchronization (MPI functions), i.e., send/receive messages between different processes, thereby addressing domain decomposition problems (see the sketch after this list).
  • The proposed model can be integrated into an IDE and serve as an "on-the-fly" parallelization advisor.
  • Moreover, it can aid debugging by pointing out possibly wrong or missing send/receive functions, flagging unnecessary communication, and even suggesting non-blocking operations.
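As a concrete illustration, consider the kind of suggestion such an advisor could make. The sketch below is hypothetical (it is not output of our model): a 1D domain decomposition in C where each rank owns a slice of an array, with the advisor's proposed communication points marked by SUGGEST comments.

```c
/* Hypothetical advisor output (illustrative only): a 1D domain
 * decomposition where each rank owns one slice of an array.
 * Lines marked SUGGEST are where the advisor would propose MPI calls. */
#include <mpi.h>
#include <stdio.h>

#define N 1024

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int chunk = N / size;
    double local[chunk + 2];              /* two extra halo cells */
    local[0] = local[chunk + 1] = 0.0;
    for (int i = 1; i <= chunk; i++)
        local[i] = rank * chunk + i;      /* fill the local slice */

    /* SUGGEST: exchange halo cells with neighbors before a stencil update */
    if (rank > 0) {
        MPI_Send(&local[1], 1, MPI_DOUBLE, rank - 1, 0, MPI_COMM_WORLD);
        MPI_Recv(&local[0], 1, MPI_DOUBLE, rank - 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }
    if (rank < size - 1) {
        MPI_Recv(&local[chunk + 1], 1, MPI_DOUBLE, rank + 1, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Send(&local[chunk], 1, MPI_DOUBLE, rank + 1, 0, MPI_COMM_WORLD);
    }

    printf("rank %d halos: %f %f\n", rank, local[0], local[chunk + 1]);
    MPI_Finalize();
    return 0;
}
```

In a case like this, the advisor could also recommend replacing the blocking MPI_Send/MPI_Recv pairs with non-blocking MPI_Isend/MPI_Irecv plus MPI_Waitall, overlapping communication with computation.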

Methodology / Approach

  1. Database creation – A database is necessary for training the model. We will collect C and Fortran files using github-clone-all, a script that searches for source code files hosted on GitHub, the most widely used code hosting platform. It allows us to query GitHub for repositories containing the keyword "MPI" in the title, description, or README file, and then extract only the C or Fortran files. We assume that MPI code will be more common than OpenMP code, since MPI is much older, and that we will therefore manage to build a larger corpus than for OpenMP. Since database size correlates directly with model performance, we strive to gather as much data as possible. An important stage of database creation is labeling the files, which will yield heuristics that map MPI code to its serial counterpart, using the abstract syntax tree (AST) for code manipulation (a sketch of such a training pair follows this list).
  2. Model development – The challenging task ahead requires a suitable model. Transformer architectures are a sensible choice thanks to their ability to comprehend context even in long sequences, which is necessary given the length of the scripts. A similar model has been shown to perform well on shorter code snippets in previous research. The model has to understand the variable-to-synchronization connection, i.e., given an array modification, it needs to decide whether or not to send the array (see the decision sketch after this list). MPI functions apply to the code as a whole, unlike OpenMP directives, which target loops only, making the given task even more challenging. Model development will proceed similarly in both approaches, with slight changes.
  3. Performance evaluation – A critical part of model development is its assessment. We will evaluate the performance of our model and of the deterministic tools on several benchmarks, such as the NAS Parallel Benchmarks (NPB). Unlike classification tasks, generation commonly has no single ground truth, which makes measuring performance for the second research question in the second approach more challenging. Hence, further investigation is needed to determine how to measure it (a toy scoring sketch follows this list).
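To make step 1 concrete, below is a minimal sketch of the kind of (MPI, serial) training pair the labeling heuristics could produce. The program and the stripping rules are illustrative assumptions, not the project's actual heuristics: the comments mark what would be removed or rewritten to derive the serial counterpart that serves as model input.

```c
/* Illustrative labeling heuristic (assumed rules): an MPI reduction
 * program; comments mark what the heuristic would strip or rewrite to
 * obtain the serial version used as model input. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank = 0, size = 1, n = 1000;
    double sum = 0.0, total = 0.0;

    MPI_Init(&argc, &argv);                  /* REMOVED in serial pair */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* REMOVED: rank := 0     */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* REMOVED: size := 1     */

    /* local computation is kept verbatim in both versions */
    for (int i = rank; i < n; i += size)
        sum += 1.0 / (i + 1);

    MPI_Reduce(&sum, &total, 1, MPI_DOUBLE,  /* REMOVED: replaced by   */
               MPI_SUM, 0, MPI_COMM_WORLD);  /*   total = sum;         */

    if (rank == 0)
        printf("total = %f\n", total);

    MPI_Finalize();                          /* REMOVED in serial pair */
    return 0;
}
```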
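For step 2, the variable-to-synchronize decision can be illustrated as follows (a hypothetical example; the identifiers are ours): the array `scratch` is modified but only read locally, so no communication is needed, whereas `result` is consumed by rank 0, so the model should suggest a send.

```c
/* Hypothetical illustration of the decision the model must learn:
 * `scratch` stays rank-local, `result` crosses rank boundaries. */
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double scratch[8], result[8];
    for (int i = 0; i < 8; i++) {
        scratch[i] = rank + i;          /* modified, but only read locally: */
        result[i] = scratch[i] * 2.0;   /*   -> no communication needed     */
    }

    /* `result` is consumed by rank 0 -> the model should suggest a send */
    if (rank != 0) {
        MPI_Send(result, 8, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
    } else {
        double remote[8];
        for (int src = 1; src < size; src++)
            MPI_Recv(remote, 8, MPI_DOUBLE, src, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}
```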
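For step 3, one plausible metric (an assumption on our part, not a finalized choice) is site-level precision and recall of suggested MPI call locations against labeled ground truth; the toy arrays below stand in for real predictions and labels.

```c
/* Toy sketch of a site-level metric (assumed, not the project's final
 * evaluation): precision/recall of predicted MPI call sites vs. labels. */
#include <stdio.h>

int main(void) {
    /* 1 = an MPI call is needed at this code site, 0 = not (toy data) */
    int truth[]     = {1, 0, 1, 1, 0, 0, 1, 0};
    int predicted[] = {1, 0, 0, 1, 1, 0, 1, 0};
    int n = 8, tp = 0, fp = 0, fn = 0;

    for (int i = 0; i < n; i++) {
        if (predicted[i] && truth[i]) tp++;
        else if (predicted[i] && !truth[i]) fp++;
        else if (!predicted[i] && truth[i]) fn++;
    }
    printf("precision = %.2f, recall = %.2f\n",
           (double)tp / (tp + fp), (double)tp / (tp + fn));
    return 0;
}
```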

Repository

https://github.com/Scientific-Computing-Lab-NRCN/code2mpi

