N.I.A
Srikar Samudrala
Hyderabad, Telangana
I have teamed up with my friends Sharan Babu and Matev to create a gaze detection system that lets you control your desktop mouse with your eyes. With the help of OpenVINO we have completed our project.
Project status: Published/In Market
oneAPI, Artificial Intelligence
Intel Technologies
OpenVINO
Overview / Usage
Inspiration
One of our team members has a disabled younger sister. She always struggles with using her computer, because her arm disability does not allow her to use a mouse or a keyboard. He told the rest of our team about it, and we understood that there are millions of great people who are not able to use their electronic devices as efficiently as they potentially could because of circumstances beyond their control. As a result, many potentially brilliant writers, engineers, musicians, and others cannot conveniently follow their passions, create great things for humanity, and enjoy their lives a bit more. We decided that we were willing and capable of taking action to help those people, so we set out to build software that solves this problem for physically disabled people and makes this world a little bit better.
What it does
Our first prototype enables people to navigate their desktop using only their eyes. This includes moving the mouse and pressing the buttons. It also has a fun Naruto eyes mode.
How we built it
We built the base in Python 3 and used PyQt5 to create the GUI. We then took pre-trained models from OpenVINO and the dlib library and applied transfer learning to them in order to estimate the real-time gaze direction. We used OpenCV for detecting blinks, the pyautogui library for controlling mouse movements, and Docker for containerizing our application.
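To make the mouse-control step concrete, here is a minimal sketch of how an estimated gaze direction can be turned into cursor movement with pyautogui. The gaze vector format, the SENSITIVITY constant, and the get_gaze_vector() helper are assumptions for illustration; the project's actual model interface may differ.

```python
import pyautogui

pyautogui.FAILSAFE = True   # moving the cursor to a screen corner aborts the script
SENSITIVITY = 400           # pixels of cursor travel per unit of gaze deflection (assumed value)

def move_cursor(gaze_x, gaze_y):
    """Translate a normalized gaze direction (roughly -1..1 on each axis)
    into a relative mouse movement. The sign flip on y accounts for screen
    coordinates growing downwards."""
    dx = gaze_x * SENSITIVITY
    dy = -gaze_y * SENSITIVITY
    pyautogui.moveRel(dx, dy, duration=0.05)

# Hypothetical usage inside the capture loop:
# gaze_x, gaze_y = get_gaze_vector(frame)   # produced by the gaze-estimation model
# move_cursor(gaze_x, gaze_y)
```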
Challenges we ran into
The first and biggest challenge was optimizing the inference of our deep learning models to make the app run faster. The second was resolving all the version compatibility issues (we ended up using Docker to solve this problem).
Accomplishments that we're proud of
The biggest accomplishment we are proud of is that we were able to bring what we had imagined to life. We are also very proud that even the first version of our software is readily available to help disabled people immediately.
What we learned
We gained more experience working with deep learning applications and learned cutting-edge technologies like Docker. We also felt the huge importance and power of good teamwork.
Methodology / Approach
The base of the application is written in Python 3, with PyQt5 providing the GUI. For gaze estimation we took pre-trained models from OpenVINO and the dlib library and applied transfer learning to them, so that the system estimates the gaze direction in real time. OpenCV is used to detect blinks, the pyautogui library controls the mouse movements, and the whole application is containerized with Docker, which also resolved the version compatibility issues we ran into.
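As an illustration of the blink-detection step, below is a minimal sketch that uses dlib's 68-point facial landmarks and the eye aspect ratio (EAR) heuristic, with OpenCV supplying the webcam frames. The EAR threshold, the frame count, the use of the standard shape_predictor_68_face_landmarks.dat model, and mapping a held blink to pyautogui.click() are assumptions for illustration; the project's actual blink detector may work differently.

```python
import cv2
import dlib
import numpy as np
import pyautogui

EAR_THRESHOLD = 0.2      # eye aspect ratio below this is treated as a closed eye (assumed value)
CONSEC_FRAMES = 3        # frames the eyes must stay closed to count as a deliberate blink (assumed)

detector = dlib.get_frontal_face_detector()
# Standard dlib landmark model, downloaded separately from dlib.net
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eye_aspect_ratio(pts):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|) for the six landmarks of one eye."""
    a = np.linalg.norm(pts[1] - pts[5])
    b = np.linalg.norm(pts[2] - pts[4])
    c = np.linalg.norm(pts[0] - pts[3])
    return (a + b) / (2.0 * c)

cap = cv2.VideoCapture(0)
closed_frames = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray, 0):
        shape = predictor(gray, face)
        coords = np.array([(shape.part(i).x, shape.part(i).y) for i in range(68)])
        ear = (eye_aspect_ratio(coords[36:42]) +      # landmarks 36-41: one eye
               eye_aspect_ratio(coords[42:48])) / 2.0 # landmarks 42-47: the other eye
        if ear < EAR_THRESHOLD:
            closed_frames += 1
        else:
            if closed_frames >= CONSEC_FRAMES:
                pyautogui.click()                     # treat a held blink as a mouse click
            closed_frames = 0
    cv2.imshow("blink detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```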