Variable Quantized Ensemble Networks

Aaron Jacob Varghese

Hyderabad, Telangana

A method for applying selective quantization to subgroups of classes in a dataset. With selective quantization, more important classes can be given greater representational power. Additionally, ensemble training methods can be used to further improve the performance of poorly performing models.

Project status: Published/In Market

Artificial Intelligence

Intel Technologies
Other

Overview / Usage

With the proliferation of deep learning into everyday applications, more effort is being focused on efficient and cost-effective techniques for both the training and inference stages of deep learning applications. In high-risk tasks such as autonomous driving, high accuracy is essential, but it is equally necessary that the systems involved be able to make predictions quickly without compromising that accuracy. We present a set of techniques to make semantic segmentation more efficient. We propose a qualitative, value-based ranking scheme for the classes in popular autonomous driving datasets, which lets us segregate the classes into groups of no more than five classes each. Each group is trained on a segmentation network quantized to a level corresponding to the combined value of that group of classes: the highest-ranked group trains the model with the largest number of bits, and so on, down to the lowest-ranked group, which is represented by a binary model. By training variably-quantized networks, we ensure that high-importance classes are represented better than low-importance ones. We also allow a class to appear in multiple groups, giving the whole system an extra boost in representational power for such classes. By aggregating these trained networks into an ensemble, we build a system of efficient, low-power networks that delivers a substantial speedup in inference time while maintaining accuracy comparable to existing state-of-the-art semantic segmentation models such as PSPNet and DeepLabv3+. We ran extensive experiments on Cityscapes and report the results in this paper.
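To make the grouping step concrete, the sketch below assigns hypothetical importance scores to a handful of Cityscapes classes, ranks them, splits the ranked list into groups of at most five, and maps each group's rank to a bit-width. The class names, the scores, and the 8/4/1-bit ladder are illustrative assumptions, not the actual ranking used in the project.

```python
# Hypothetical importance scores for a subset of Cityscapes classes (assumed values).
class_value = {
    "person": 10, "rider": 9, "car": 8, "traffic light": 7, "traffic sign": 7,
    "road": 6, "sidewalk": 5, "bicycle": 5, "pole": 3, "vegetation": 2,
    "sky": 1, "building": 1,
}

# Rank classes by descending value and split the ranked list into groups of at most five.
ranked = sorted(class_value, key=class_value.get, reverse=True)
groups = [ranked[i:i + 5] for i in range(0, len(ranked), 5)]

# Map group rank to a quantization level: the top-ranked group gets the widest
# representation, the lowest-ranked group a binary (1-bit) model (assumed ladder).
bit_ladder = [8, 4, 1]
group_bits = {rank: bit_ladder[min(rank, len(bit_ladder) - 1)]
              for rank in range(len(groups))}

for rank, members in enumerate(groups):
    print(f"group {rank}: {group_bits[rank]}-bit model -> {members}")
```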

Methodology / Approach

We used existing high-performance semantic segmentation networks such as PSPNet and DeepLab-v3 and modified them to work with a variable number of classes. We then divided the datasets into subsets based on the class groups and trained each subset on both models, in full-precision as well as quantized form. The level of quantization was set as an inherent property of each subset.
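Below is a minimal sketch of the per-group quantized training setup, assuming a straight-through estimator for uniform k-bit weight quantization. The FakeQuantize function, the QuantConv2d layer, the toy segmentation head, and the (classes, bits) pairs are placeholders introduced here for illustration; the actual experiments quantized modified PSPNet and DeepLab-v3 models rather than these small layers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FakeQuantize(torch.autograd.Function):
    """Uniform k-bit weight quantization with a straight-through gradient."""
    @staticmethod
    def forward(ctx, w, bits):
        if bits >= 32:                       # treat >= 32 bits as full precision
            return w
        qmax = 2 ** (bits - 1) - 1 if bits > 1 else 1
        scale = w.abs().max().clamp(min=1e-8) / qmax
        return torch.round(w / scale).clamp(-qmax, qmax) * scale

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out, None                # straight-through estimator

class QuantConv2d(nn.Conv2d):
    """Convolution whose weights are fake-quantized to `bits` bits."""
    def __init__(self, *args, bits=8, **kwargs):
        super().__init__(*args, **kwargs)
        self.bits = bits

    def forward(self, x):
        w = FakeQuantize.apply(self.weight, self.bits)
        return F.conv2d(x, w, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

def build_group_head(in_channels, num_classes, bits):
    """Small placeholder segmentation head quantized to the group's bit level."""
    return nn.Sequential(
        QuantConv2d(in_channels, 64, 3, padding=1, bits=bits),
        nn.ReLU(inplace=True),
        QuantConv2d(64, num_classes, 1, bits=bits),
    )

# One head per class group: e.g. 8-bit for the top group, binary for the lowest.
heads = [build_group_head(256, n_cls, b) for n_cls, b in [(5, 8), (5, 4), (5, 1)]]
```

Attaching the bit-width to each group's model at construction time mirrors the idea that the quantization level is an inherent property of each class subset.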

Technologies Used

PyTorch

TensorFlow

Convolutional Neural Networks

Semantic Segmentation Networks

Indian Driving Dataset (released by Intel and IIIT Hyderabad).
