AIROB
NAEEM HADIQ
Thrissur, Kerala
An AI-powered humanoid robot with gesture, image, and voice recognition and interactive communication systems, alongside individual identification and repetitive-task analysis, all completely offline, so that it can work on its own using onboard hardware and pretrained neural modules.
Project status: Under Development
Robotics, RealSense™, Networking, Internet of Things, Artificial Intelligence, Graphics and Media
Groups
Student Developers for AI,
Movidius™ Neural Compute Group,
Internet of Things,
DeepLearning
Intel Technologies
OpenVINO,
AI DevCloud / Xeon,
Intel Opt ML/DL Framework,
Movidius NCS,
Intel CPU
Overview / Usage
When complete, the project will produce an easily replicable, completely offline robot with extended capabilities for processing its environment and interacting accordingly. It could be used as a receptionist, a welcoming assistant, or a home robot, and, if developed further, even a robot that can aid in hazardous operations.
Methodology / Approach
Speech Recognition: Speech-to-text conversion based on the Baidu Deep Speech model, using Intel-optimized TensorFlow and trained on audiobook recordings of at least 5,000 books with their text transcripts, alongside the open datasets available for speech training.
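A minimal sketch of the inference side, assuming the Mozilla DeepSpeech runtime (an open implementation of the Baidu Deep Speech architecture); the model and scorer file names below are placeholders, not the project's released artifacts.

    # Sketch: transcribing a 16 kHz mono WAV with a DeepSpeech-style model.
    import wave
    import numpy as np
    from deepspeech import Model

    MODEL_PATH = "airob_speech.pbmm"     # hypothetical exported acoustic model
    SCORER_PATH = "airob_speech.scorer"  # hypothetical external language model

    model = Model(MODEL_PATH)
    model.enableExternalScorer(SCORER_PATH)

    def transcribe(wav_path):
        """Read a 16 kHz mono WAV file and return the decoded transcript."""
        with wave.open(wav_path, "rb") as wav:
            frames = wav.readframes(wav.getnframes())
        audio = np.frombuffer(frames, dtype=np.int16)
        return model.stt(audio)

    print(transcribe("query.wav"))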
Image Recognition: A derivative of YOLO (Darknet) trained on a dataset of over 5 million images, with added gesture recognition capabilities, using TensorFlow and OpenCV.
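A minimal detection sketch using OpenCV's DNN module to run a Darknet/YOLO network; the cfg, weights, and input size are placeholders standing in for the project's gesture-augmented model.

    # Sketch: running a Darknet/YOLO model through cv2.dnn.
    import cv2
    import numpy as np

    net = cv2.dnn.readNetFromDarknet("airob_yolo.cfg", "airob_yolo.weights")
    out_layers = net.getUnconnectedOutLayersNames()

    def detect(frame, conf_threshold=0.5):
        """Return (class_id, confidence, box) tuples for one BGR frame."""
        h, w = frame.shape[:2]
        blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                     swapRB=True, crop=False)
        net.setInput(blob)
        detections = []
        for output in net.forward(out_layers):
            for det in output:
                scores = det[5:]
                class_id = int(np.argmax(scores))
                conf = float(scores[class_id])
                if conf >= conf_threshold:
                    cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
                    box = (int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh))
                    detections.append((class_id, conf, box))
        return detections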
The project primarily runs on DevCloud to train the large datasets; the trained models are then executed on the robot using an UP Squared board and a Movidius NCS, considerably reducing the compute requirements, device cost, and power requirement of the AI element in the robot.
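A sketch of what on-robot execution could look like, assuming a model already converted to OpenVINO IR with the Model Optimizer and the 2020-era Inference Engine Python API; file names and the chosen device are illustrative.

    # Sketch: loading an IR model onto the Movidius NCS (MYRIAD plugin).
    import numpy as np
    from openvino.inference_engine import IECore

    ie = IECore()
    net = ie.read_network(model="airob_model.xml", weights="airob_model.bin")
    exec_net = ie.load_network(network=net, device_name="MYRIAD")

    input_name = next(iter(net.input_info))
    output_name = next(iter(net.outputs))

    def infer(image_chw):
        """Run one inference on a preprocessed CHW image."""
        result = exec_net.infer(inputs={input_name: image_chw[np.newaxis, ...]})
        return result[output_name]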
Response System: A self-developed and trained model that initiates responses and actions, drawn from self-learned or online responses, to refined and understood queries.
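The trained response model itself is not described in detail, so the following is a purely illustrative stand-in: a simple keyword-based intent lookup that could sit between the speech recognizer and the action modules.

    # Illustrative sketch only: mapping recognized queries to responses.
    INTENT_KEYWORDS = {
        "greet":    ("hello", "hi", "good morning"),
        "identify": ("who are you", "your name"),
        "navigate": ("go to", "move to", "come here"),
    }

    RESPONSES = {
        "greet":    "Hello! How can I help you?",
        "identify": "I am AIROB, an offline humanoid assistant.",
        "navigate": "Starting navigation to the requested location.",
    }

    def respond(query):
        """Map a recognized query to a canned response, with a fallback."""
        text = query.lower()
        for intent, keywords in INTENT_KEYWORDS.items():
            if any(keyword in text for keyword in keywords):
                return RESPONSES[intent]
        return "Sorry, I did not understand that."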
Sensory Modules: The system uses gyroscopic and magnetometer sensing to balance the robot while recording its coordinates and movements, alongside obstacle recognition and 3D mapping of the surrounding space.
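As an illustration of the balance side (not the project's own code), a complementary filter can fuse gyroscope and accelerometer samples into a tilt estimate that a balance controller could act on; the filter constant and axis choice here are assumptions.

    # Sketch: complementary filter for pitch estimation from IMU samples.
    import math

    ALPHA = 0.98  # weight given to the integrated gyro angle

    def update_tilt(prev_angle, gyro_rate_dps, accel_x, accel_z, dt):
        """Return a new pitch estimate in degrees from one IMU sample."""
        gyro_angle = prev_angle + gyro_rate_dps * dt               # integrate gyro
        accel_angle = math.degrees(math.atan2(accel_x, accel_z))   # gravity reference
        return ALPHA * gyro_angle + (1.0 - ALPHA) * accel_angle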
Robot Movement: Each degree of freedom and its movements are coupled and synchronized using twin Arduino Mega boards, externally controlled by the compute server (UP Squared board with Movidius NCS).
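A minimal sketch of the host side of that link, assuming pySerial on the UP Squared board; the port names and the "joint:angle" command format are placeholders for whatever protocol the Arduino firmware actually parses.

    # Sketch: streaming joint targets to the two Arduino Mega boards over USB serial.
    import serial

    left_arduino = serial.Serial("/dev/ttyACM0", 115200, timeout=1)
    right_arduino = serial.Serial("/dev/ttyACM1", 115200, timeout=1)

    def set_joint(board, joint_id, angle_deg):
        """Send one joint target; the Arduino firmware applies it to the servo."""
        command = "{}:{}\n".format(joint_id, int(angle_deg))
        board.write(command.encode("ascii"))

    set_joint(left_arduino, "shoulder_l", 45)
    set_joint(right_arduino, "elbow_r", 90)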
Technologies Used
IoT, Embedded Computing, Movidius Neural Compute Stick, Theano, TensorFlow, OpenCV, DevCloud, Linux, Kinect Sensor, OpenVINO, DeepSpeech
Repository
https://github.com/nhadiq97/AIROBOT