Deep Learning with Radiomics for the Segmentation and Survival Prediction in Brain Cancer
Subhashis Banerjee
Kolkata, West Bengal
A novel deep learning method is proposed for the automatic segmentation of brain tumors from multi-sequence MR images. A deep Radiomic model for predicting overall survival (OS) is designed, based on features extracted from the segmented Volume of Interest. An encoder-decoder type ConvNet model is proposed for pixel-wise segmentation of the tumor along the three anatomical planes. These are then combined, using a consensus fusion strategy, to produce the final volumetric segmentation. Novel concepts such as spatial pooling and unpooling are introduced to preserve the spatial locations of the edge pixels, reducing segmentation error around the boundaries.
Project status: Under Development
Intel Technologies
AI DevCloud / Xeon,
Intel Opt ML/DL Framework
Overview / Usage
Accurate delineation of the tumor region in MRI sequences is of great importance since it allows: i) volumetric measurement of the tumor, ii) monitoring of tumor growth in the patient between multiple MRI scans, and iii) treatment planning with follow-up evaluation, including the prediction of overall survival (OS). Manual segmentation of tumors from MRI is a highly tedious, time-consuming and error-prone task, mainly due to factors such as human fatigue, the overabundance of MRI slices per patient, and an increasing number of patients. Such manual operations often lead to inaccurate delineation, and the need for an automated or semi-automated Computer-Aided Diagnosis system thus becomes apparent. The large spatial and structural variability among brain tumors makes automatic segmentation a challenging problem.
Inspired by the success of ConvNets, I developed a novel spatial ConvNet model which can preserve the edge information during automated segmentation of gliomas from multisequence MRI data. The segmented Volume of Interest (VOI) or tumor volume is then used to extract Radiomics features such as first-order gray level statistics, shape and texture, for predicting the OS of patients.
Methodology / Approach
- The ConvNet architecture, used for slice-wise segmentation along each plane, is an encoder-decoder type of network. The encoder, or contracting path, uses pooling layers to downsample an image into a set of high-level features, followed by a decoder, or expanding part, which uses the feature information to construct a pixel-wise segmentation mask.
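The project summary mentions spatial pooling and unpooling that preserve the spatial locations of edge pixels. The exact formulation is not public; the sketch below illustrates the general index-preserving idea in NumPy, where pooling records where each maximum came from so the decoder can scatter values back to their original positions (window size and layout are illustrative assumptions):

```python
import numpy as np

def pool_with_indices(x, k=2):
    """k-by-k max-pooling that also records the flat index of each
    maximum, so the decoder can later restore values to their exact
    original positions (illustrative, not the paper's formulation)."""
    h, w = x.shape
    pooled = np.zeros((h // k, w // k))
    idx = np.zeros((h // k, w // k), dtype=int)
    for i in range(0, h, k):
        for j in range(0, w, k):
            win = x[i:i + k, j:j + k]
            r, c = np.unravel_index(np.argmax(win), win.shape)
            pooled[i // k, j // k] = win[r, c]
            idx[i // k, j // k] = (i + r) * w + (j + c)
    return pooled, idx

def unpool_with_indices(pooled, idx, shape):
    """Scatter pooled values back to their recorded locations;
    all other positions stay zero, so spatial detail is preserved."""
    out = np.zeros(shape).ravel()
    out[idx.ravel()] = pooled.ravel()
    return out.reshape(shape)

x = np.arange(16, dtype=float).reshape(4, 4)
p, idx = pool_with_indices(x)
y = unpool_with_indices(p, idx, x.shape)
```

Unlike plain upsampling, which spreads each value uniformly over its window, this scheme puts each activation back exactly where it was found, which is what helps around tumor boundaries.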
- Since the dataset is highly imbalanced, training can be dominated by the most prevalent class, so the standard loss functions used in the literature are not suitable for training and optimizing the ConvNet. In such cases most classifiers focus on learning the larger classes, resulting in poor classification accuracy for the smaller classes. Therefore, I propose a new loss function, computed between the soft binary segmentation, i.e. the probability map generated by the network's softmax layer (P), and the corresponding gold standard/ground-truth image (G).
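The proposed loss itself is not reproduced here; a widely used overlap-based loss built from the same two ingredients, P and G, is the soft Dice loss, sketched below to show why such losses cope with class imbalance better than plain pixel-wise error counts:

```python
import numpy as np

def soft_dice_loss(P, G, eps=1e-7):
    """Soft Dice loss between a probability map P and a binary
    ground truth G. Because it measures relative overlap, a small
    tumor class is weighted fairly rather than swamped by the
    background class. This is a standard formulation shown for
    illustration, not the paper's exact proposed loss."""
    P, G = P.ravel().astype(float), G.ravel().astype(float)
    intersection = np.sum(P * G)
    return 1.0 - (2.0 * intersection + eps) / (np.sum(P) + np.sum(G) + eps)

G = np.array([[0, 1], [1, 0]], dtype=float)
perfect = soft_dice_loss(G, G)      # near 0: prediction matches G
miss = soft_dice_loss(1.0 - G, G)   # near 1: no overlap with G
```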
- For the OS prediction task I extracted 296 3D quantitative imaging (radiomics) features from the segmented tumor volume, or VOI, obtained from each of the four MR sequences. The MR imaging features used in this study are sub-divided into four groups: (i) first-order gray level statistics (19 features), (ii) geometrical shape and size (16 features), (iii) gray level co-occurrence matrix (23 features), and (iv) gray level run length matrix (16 features) — 74 features per sequence, giving 74 × 4 = 296 in total.
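As a rough illustration of the first group, the sketch below computes a small subset of first-order gray level statistics over a masked VOI in NumPy (the names follow common radiomics conventions; they are not the study's exact 19 definitions):

```python
import numpy as np

def first_order_features(voi, mask):
    """A few first-order gray level statistics over the segmented
    VOI (illustrative subset; the study's exact definitions are
    not reproduced here)."""
    vals = voi[mask > 0].astype(float)
    centered = vals - vals.mean()
    return {
        "mean": float(vals.mean()),
        "variance": float(vals.var()),
        "skewness": float(np.mean((centered / vals.std()) ** 3)),
        "energy": float(np.sum(vals ** 2)),
        "range": float(vals.max() - vals.min()),
    }

# Toy 2x2x2 "volume" with the whole block segmented as tumor.
voi = np.arange(8, dtype=float).reshape(2, 2, 2)
mask = np.ones_like(voi)
feats = first_order_features(voi, mask)
```

In the real pipeline such a vector would be computed per MR sequence and concatenated across the four sequences.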
- To avoid overfitting, clusters of highly correlated features are collapsed into one representative attribute having the largest inter-subject variability, i.e. the highest dynamic range. Finally, a wrapper method performs feature subset selection using sequential forward selection (SFS), sequential backward elimination (SBS), and bidirectional search (BDS). Two ensemble learning algorithms, Bagging and AdaBoost, with three base classifiers (Decision Tree, Naïve Bayes, and Logistic Regression) are used to select the optimal feature subset giving the best accuracy in classifying patients into the three survival classes (short, mid, long).
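The selection-and-evaluation pipeline above can be sketched with scikit-learn stand-ins. Note the assumptions: `SequentialFeatureSelector` covers forward (SFS) and backward (SBS) search but has no built-in bidirectional mode, the data is synthetic, and only one of the three base classifiers is shown:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Toy stand-in for the 296-feature radiomics matrix: 3 survival classes.
X, y = make_classification(n_samples=120, n_features=20, n_informative=6,
                           n_classes=3, random_state=0)

# Sequential forward selection; direction="backward" would give SBS.
sfs = SequentialFeatureSelector(DecisionTreeClassifier(random_state=0),
                                n_features_to_select=5, direction="forward")
X_sel = sfs.fit_transform(X, y)

# Score the selected subset with the two ensemble learners.
scores = {}
for name, clf in [("Bagging", BaggingClassifier(random_state=0)),
                  ("AdaBoost", AdaBoostClassifier(random_state=0))]:
    scores[name] = cross_val_score(clf, X_sel, y, cv=5).mean()
```

In the study, the subset/classifier combination with the best cross-validated accuracy on the three survival classes would be retained.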
Technologies Used
The ConvNet models were developed in Python using the following libraries:
- Intel® Optimization for TensorFlow
- Intel® Optimization for Keras
The experiments were performed on the Intel AI DevCloud platform, a cluster of Intel Xeon Scalable processors. The ConvNet models were trained for 20 epochs, which required approximately 12 hours on the Intel AI DevCloud platform.