Smart Garbage Segregation
Raison Sabu
Image classification for recycling refers to the use of machine learning to automatically classify images of waste materials into their respective categories. We have made use of Intel's oneAPI and its oneDNN library, which provides highly optimized routines for various deep learning operations.
Project status: Published/In Market
oneAPI, Artificial Intelligence, Cloud
Intel Technologies
- DevCloud
- oneAPI
- Intel Python
Overview / Usage
Image classification for recycling refers to the use of machine learning algorithms to automatically classify images of waste materials, such as plastic, paper, and metal, into their respective categories. This involves training a model on a large dataset of labeled images and then using that model to predict the category of new, unlabeled images. The goal is to improve the efficiency and accuracy of recycling processes by automating the sorting of materials, reducing human error, and increasing the amount of recyclable material that can be recovered.

In this project we have made use of Intel's oneAPI, a comprehensive development platform for building high-performance, cross-architecture applications. It provides a unified programming model, tools, and libraries that allow developers to optimize their applications for Intel CPUs, GPUs, FPGAs, and other hardware. oneAPI includes support for popular programming languages such as C++, Python, and Fortran, as well as frameworks for deep learning, high-performance computing, and data analytics. Specifically, we have used oneAPI's oneDNN library, which provides highly optimized routines for deep learning operations including convolution, pooling, normalization, and activation functions. Using oneDNN typically yields faster execution times and better performance on modern Intel CPUs.
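For illustration, here is a minimal sketch of how oneDNN acceleration is commonly enabled from Python when working through TensorFlow. The TF_ENABLE_ONEDNN_OPTS flag is a standard TensorFlow setting (Intel-optimized builds enable it by default); the exact setup used in this project may differ.

```python
import os

# Ask TensorFlow to route convolution, pooling, normalization, and activation
# kernels through oneDNN. The flag must be set before TensorFlow is imported.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

import tensorflow as tf

# oneDNN-accelerated ops then run transparently on the CPU.
print("TensorFlow", tf.__version__)
```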
Methodology / Approach
These are the steps involved in building this project:
- Importing Libraries
- Data Importing
- Data Exploration
- Data Configuration
- Preparing the Data
- Creating a Generator for the Training Set
- Creating a Generator for the Testing Set
- Writing the labels to a text file, 'Labels.txt'
- Model Creation
- Model Compilation
- Training the Model (batch_size = 32, epochs = 10)
- Testing Predictions
- Saving the model as 'modelnew.h5' (the data and training steps are sketched in the code after this list)
- Deploying the Model as a Web Application using Streamlit (the Streamlit app is sketched after this list)
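A minimal sketch of the data preparation, generators, label writing, model creation, compilation, training, and saving steps, assuming a Keras ImageDataGenerator over a directory with one sub-folder per waste class. The dataset/ path, image size, validation split, and CNN layer sizes are illustrative assumptions; the batch size (32), epoch count (10), 'Labels.txt', and 'modelnew.h5' come from the steps above.

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, Dense, Dropout, Flatten, MaxPooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMG_SIZE = (224, 224)   # assumed input resolution
BATCH_SIZE = 32         # batch size listed in the methodology

# Rescale pixels to [0, 1] and hold out 20% of the images for testing (assumed split).
datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.2)

# "dataset/" is a placeholder path with one sub-folder per class (plastic, paper, metal, ...).
train_gen = datagen.flow_from_directory(
    "dataset/", target_size=IMG_SIZE, batch_size=BATCH_SIZE,
    class_mode="categorical", subset="training")
test_gen = datagen.flow_from_directory(
    "dataset/", target_size=IMG_SIZE, batch_size=BATCH_SIZE,
    class_mode="categorical", subset="validation")

# Write the class labels to Labels.txt in index order, as in the methodology.
labels = sorted(train_gen.class_indices, key=train_gen.class_indices.get)
with open("Labels.txt", "w") as f:
    f.write("\n".join(labels))

# A small CNN; the actual architecture used in the project may differ.
model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(*IMG_SIZE, 3)),
    MaxPooling2D(2, 2),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D(2, 2),
    Flatten(),
    Dense(128, activation="relu"),
    Dropout(0.3),
    Dense(train_gen.num_classes, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Train for 10 epochs (the generators already fix the batch size), then save.
model.fit(train_gen, validation_data=test_gen, epochs=10)
model.save("modelnew.h5")
```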
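A sketch of the Streamlit deployment step, loading 'modelnew.h5' and 'Labels.txt'. The page layout, preprocessing details, and script name are assumptions.

```python
import numpy as np
import streamlit as st
from PIL import Image
from tensorflow.keras.models import load_model

# Cache the loaded model across reruns (st.cache_resource requires a recent Streamlit release).
@st.cache_resource
def get_model():
    return load_model("modelnew.h5")

model = get_model()
labels = open("Labels.txt").read().splitlines()

st.title("Smart Garbage Segregation")
uploaded = st.file_uploader("Upload an image of a waste item", type=["jpg", "jpeg", "png"])

if uploaded is not None:
    image = Image.open(uploaded).convert("RGB").resize((224, 224))
    st.image(image, caption="Uploaded image")

    # Preprocess as during training: rescale to [0, 1] and add a batch axis.
    batch = np.expand_dims(np.asarray(image) / 255.0, axis=0)
    probs = model.predict(batch)[0]

    st.write(f"Prediction: **{labels[int(np.argmax(probs))]}** ({probs.max():.1%} confidence)")
```

With the script saved as, say, app.py, the app runs locally with `streamlit run app.py`.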
Technologies Used
These are the frameworks/libraries used in this project:
- oneAPI / oneDNN
- DevCloud
- Python
- Streamlit