DashAI

Joe Rishon Manoj

Bengaluru, Karnataka


DashAI allows its users to create record-breaking, state-of-the-art models with just their datasets, no coding required. DashAI takes care of hyper-parameter tuning, training, and explainability.

Project status: Published/In Market

Artificial Intelligence

Groups
Student Developers for AI


Overview / Usage

FastAI intends to ensure that neural nets don't stay "cool" in the sense of being accessible only to an exclusive few. We found that super inspiring.

However, even with the outstanding work FastAI does, it only makes deep learning accessible to the coders on the planet, who make up about 26.4 million people out of 7.7 billion, roughly 0.3%. This is, of course, a worthy goal, and one deserving of applause, because the number of people who can actually use neural networks right now is even smaller. Still, we would like to help FastAI along with that goal and do our part in increasing that number by allowing people to create state-of-the-art deep learning models without writing any code.

Making a no-code deep learning application not only allows non-coders to access the wonders of deep learning; it also lets coders speed up prototyping, allowing them to bring their programs to production much more quickly.

Finally, explainability is an up-and-coming field that is needed by the multitudes and utilized by the few, and so we wanted to provide easy access to that as well.

Methodology / Approach

Step 1: Choosing the task.

We provide our users with the ability to choose their type of application early in the process. DashAI uses this information in later stages to suggest architectures that have achieved state-of-the-art results for that task. Users can choose one of four tasks: collaborative filtering, tabular, text, and vision.
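
As a rough illustration (the names below are assumptions for the sketch, not DashAI's internals), the task choice can be thought of as selecting a key in a mapping from task to suggested architectures:

    # Hypothetical mapping from task to architectures known to perform well on it.
    SUGGESTED_ARCHITECTURES = {
        "collaborative_filtering": ["EmbeddingDotBias"],
        "tabular": ["TabularModel"],
        "text": ["AWD_LSTM"],
        "vision": ["resnet34", "resnet50", "xresnet50"],
    }

    task = "vision"                      # chosen by the user in the UI
    print(SUGGESTED_ARCHITECTURES[task])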

Step 2: Selecting the dataset.

Users then provide the dataset they intend to use, along with options that tell DashAI how best to utilize it. DashAI asks how the user wants to split the dataset (into training and validation sets), how to label it, and what transforms to apply to it.
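
As a rough illustration of what this step maps to under the hood, here is a minimal fastai DataBlock for a vision task; the dataset path, split ratio, and transforms are assumptions for the sketch, not DashAI's actual defaults:

    from fastai.vision.all import *

    # Hypothetical example: build DataLoaders from an image folder,
    # splitting 80/20, labelling by parent folder, and resizing items.
    path = Path("data/images")                      # assumed dataset location
    dblock = DataBlock(
        blocks=(ImageBlock, CategoryBlock),
        get_items=get_image_files,
        splitter=RandomSplitter(valid_pct=0.2, seed=42),
        get_y=parent_label,
        item_tfms=Resize(224),
        batch_tfms=aug_transforms(),
    )
    dls = dblock.dataloaders(path, bs=64)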

Step 3: Selecting the model.

Users then choose the architecture they want for their model. DashAI provides architectures that have achieved state-of-the-art results in the task defined by the user, but users may also use any model built from PyTorch layers.
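
Continuing the sketch above, picking a suggested architecture or plugging in a custom PyTorch model could look roughly like this with fastai; the architecture, loss function, and the TinyNet class are illustrative assumptions:

    from fastai.vision.all import *
    from torch import nn

    # Option A: a suggested architecture with pretrained weights.
    learn = cnn_learner(dls, resnet34, metrics=accuracy)

    # Option B: any custom model built from PyTorch layers.
    class TinyNet(nn.Module):
        def __init__(self, n_classes):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.head = nn.Linear(16, n_classes)

        def forward(self, x):
            return self.head(self.body(x))

    learn = Learner(dls, TinyNet(dls.c), loss_func=CrossEntropyLossFlat(), metrics=accuracy)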

Step 4: (Optional) Auto ML

At this point, users may choose one of three options:

  • to use DashAI's default hyper-parameters;
  • to input hyper-parameter values of their choosing; or
  • to use DashAI's auto ML component, Verum, to select the best possible hyper-parameter values. In Verum, users may choose which hyper-parameters they would like tuned, the number of experiments they want to run, and whether they would like to have the resulting values automatically applied to the model.
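
A minimal sketch of what such a search could look like with the Ax library, reusing the learner setup from the earlier sketches; the search space, trial count, and objective below are assumptions, not Verum's actual configuration:

    from ax.service.managed_loop import optimize
    from fastai.vision.all import *

    # Hypothetical objective: train briefly with the proposed values and
    # report validation accuracy back to Ax.
    def train_evaluate(params):
        learn = cnn_learner(dls, resnet34, metrics=accuracy)
        learn.fit_one_cycle(1, lr_max=params["lr"], wd=params["wd"])
        return float(learn.validate()[1])            # validation accuracy

    best_params, values, experiment, model = optimize(
        parameters=[
            {"name": "lr", "type": "range", "bounds": [1e-5, 1e-1], "log_scale": True},
            {"name": "wd", "type": "range", "bounds": [1e-6, 1e-2], "log_scale": True},
        ],
        evaluation_function=train_evaluate,
        objective_name="accuracy",
        total_trials=10,
    )
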
Step 5: (Optional) Training the model.

DashAI then provides a simple training interface where users may input the hyper-parameter values required for training (unless they have chosen to have Verum apply its values automatically). Users can also pick between generic training and 1-cycle training.
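
Under the hood, the two modes correspond roughly to fastai's standard and 1-cycle training loops; the epoch count and learning rate below are illustrative:

    # Generic training with a fixed learning rate.
    learn.fit(5, lr=1e-3)

    # 1-cycle training, which schedules learning rate and momentum over each cycle.
    learn.fit_one_cycle(5, lr_max=1e-3)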

Step 6: (Optional) Explainability

Users can then choose to visualize the attributions in the explainability component of DashAI, DashInsights. They may choose from a multitude of attribution-calculation algorithms, depending on their task. The visualizations can provide insight into why a model is predicting what it is predicting.
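
As an illustration, attributions like these can be computed with Captum's Integrated Gradients; the model and batch below are assumed to come from the trained learner in the earlier sketches:

    from captum.attr import IntegratedGradients

    model = learn.model.eval()
    xb, yb = dls.one_batch()                      # a batch of inputs and labels (illustrative)

    ig = IntegratedGradients(model)
    attributions = ig.attribute(xb, target=yb)    # per-input-feature contribution scores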

Step 7: (Optional) Saving the model.

Finally, if users are so inclined, they can save their models as .pth files. We provide instructions on how to use these files in the Wiki of our GitHub repo.
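
For reference, a .pth file can be produced and reloaded with standard PyTorch calls like these; the file name and the TinyNet class (from the earlier sketch) are illustrative, and the Wiki describes the exact workflow DashAI expects:

    import torch

    # Save the trained weights.
    torch.save(learn.model.state_dict(), "dashai_model.pth")

    # Later: rebuild the same architecture and load the weights back.
    model = TinyNet(n_classes=dls.c)
    model.load_state_dict(torch.load("dashai_model.pth"))
    model.eval()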

Technologies Used

We designed our application around JSON for API communication between our front end and back end. Everything either side needs to know is carried in a JSON payload, which let us write each half of the code with confidence about its inputs. It also allowed us to split the team in two, so each half could work without having to wait for updates from the other. In production, users can modify values in the JSON, and the Flask server uses them to generate a model, train it, save it, and handle everything else we do.
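
A minimal sketch of this pattern, assuming a hypothetical /train endpoint and payload; these are not necessarily DashAI's actual route names or schema:

    from flask import Flask, request, jsonify

    app = Flask(__name__)

    @app.route("/train", methods=["POST"])
    def train():
        config = request.get_json()      # e.g. {"task": "vision", "epochs": 5, "lr": 1e-3}
        # ...build the DataLoaders, model, and Learner from `config`, then train...
        return jsonify({"status": "started", "config": config})

    if __name__ == "__main__":
        app.run(debug=True)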

We based our application on FastAI for everything it provides out of the box. We wrote everything else we wanted to add using PyTorch and libraries built on top of it.

For our hyper-parameter tuning component, Verum, we used the Ax library. To allow users to visualize the attributions of their models, we used the Captum library.

Repository

https://github.com/manikyabard/DashAI
