FedML: Federated Model Training for Continual Cloud Learning
Dheemanth Joshi
Bengaluru, Karnataka
This project uses federated learning to capture the most recent data trends for continuously training deep learning models hosted on the cloud.
Project status: Under Development
oneAPI, Internet of Things, Artificial Intelligence, Cloud
Intel Technologies
- oneAPI
- Intel Python
- AI DevCloud / Xeon
Overview / Usage
Deep learning (DL) models are usually deployed on cloud systems due to their large size and the need for global accessibility. With demand for LLMs rapidly increasing, DL models cannot keep serving inference with static parameters; heterogeneous, up-to-date data becomes key to keeping model parameters current.
FedML operates on two key aspects of Deep Learning to address the above issues:
- Federated Learning (FL) for obtaining heterogeneous edge data: We collaborate with various edge data generators to capture data trends. Since the generated data stays private on the edge and is not visible to the cloud, applications can use FL to train their DL models locally.
- Continual learning on the cloud: The global DL model running on the cloud is updated with the federated edge models; this way the global server model keeps itself up to date for the latest queries and information demands.
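The aggregation step behind the second point can be sketched in plain Python. This is an illustrative FedAvg-style weighted average, assuming each edge model's parameters arrive as a flat list of floats; the function names are hypothetical, not part of an existing API.

```python
# Minimal sketch of FedAvg-style aggregation: edge models are averaged
# into the global model, weighted by how much local data each edge saw.
# Illustrative only; parameters are represented as flat float lists.

def fedavg(edge_params, edge_sample_counts):
    """Weighted-average edge model parameters by local dataset size."""
    total = sum(edge_sample_counts)
    n_params = len(edge_params[0])
    global_params = [0.0] * n_params
    for params, count in zip(edge_params, edge_sample_counts):
        weight = count / total
        for i, p in enumerate(params):
            global_params[i] += weight * p
    return global_params

# Example: two edge devices with unequal amounts of local data.
# Device 2 holds 3x the data, so its parameters dominate the average.
updated = fedavg([[1.0, 2.0], [3.0, 4.0]], [10, 30])
```

Weighting by sample count keeps edges with more (and presumably more representative) data from being drowned out by small contributors.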
Methodology / Approach
- Enable federated learning by training instances of the global model on edge devices using their private data.
- Update the inference model deployed on the cloud with the aggregated edge models.
- Use Intel Extension for PyTorch to optimize the models and reduce the model sizes running on the cloud and edge respectively.
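The first two steps above can be sketched end to end as a single federated round. This is a simplified pure-Python illustration with a one-parameter linear model (y = w * x) and a basic gradient-descent local update; the learning rate, epoch count, and function names are illustrative assumptions, not the project's actual implementation.

```python
# Illustrative single federated round for a 1-parameter linear model,
# kept in pure Python so the broadcast/train/aggregate flow is visible.
# A simplified sketch of the FedAvg procedure, not a real FL framework.

def local_train(w, data, lr=0.1, epochs=5):
    """Edge step: a few gradient-descent epochs on private local data."""
    for _ in range(epochs):
        # MSE gradient for y_hat = w * x, averaged over the local set.
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_round(global_w, edge_datasets):
    """One round: broadcast w, train locally, aggregate by dataset size."""
    local_ws = [local_train(global_w, d) for d in edge_datasets]
    total = sum(len(d) for d in edge_datasets)
    return sum(w * len(d) / total for w, d in zip(local_ws, edge_datasets))

# Two edges whose private data follows y = 2x; the cloud model converges
# toward w = 2 without the raw samples ever leaving the edges.
w = 0.0
for _ in range(20):
    w = federated_round(w, [[(1, 2), (2, 4)], [(3, 6)]])
```

Only parameters cross the network in each round; the raw (x, y) pairs stay on their devices, which is the privacy property the approach relies on.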
Technologies Used
oneAPI-Enabled Optimization and Inference: To optimize DL systems, we utilize the rich libraries provided by Intel Extension for PyTorch. This enables us to employ the Vector Neural Network Instructions (VNNI) and Advanced Matrix Extensions (AMX) to accelerate training and inference on edge and cloud systems respectively.
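VNNI accelerates low-precision (int8) dot products in hardware, and storing weights in int8 instead of fp32 also cuts model size roughly 4x, which matters for the edge deployments above. The following is a hedged, pure-Python sketch of symmetric per-tensor int8 quantization to illustrate the idea; it is not the Intel Extension for PyTorch quantization API.

```python
# Illustrative symmetric int8 weight quantization: VNNI executes int8
# dot products natively, and int8 storage is ~4x smaller than fp32.
# Pure-Python sketch of the concept, not the IPEX API.

def quantize_int8(weights):
    """Map float weights to int8 range [-127, 127] with one scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for error comparison."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.02, 1.0]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Per-weight quantization error stays within half a step (scale / 2).
```

In practice the extension handles calibration and operator fusion; this sketch only shows why int8 arithmetic is attractive on VNNI-capable hardware.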