Object Detector for a Soccer-Playing Humanoid Robot Using Deep Learning
Lucas Steuernagel
Aiming to contribute to the development of the Brazilian robotics team “ITAndroids”, I implemented a deep learning algorithm to detect balls and goalposts for the group’s humanoid soccer team, which consists of five players. Since each robot must be completely autonomous, the algorithm needs to run in real time on each robot’s Intel NUC computer. Thus, I modified the original YOLO (You Only Look Once) algorithm to find the field's goalposts and balls that are at least 50% white against a variety of backgrounds. I developed this project as an undergraduate research project with the support of CNPq and LAB-SCA (part of Instituto Tecnológico de Aeronáutica).
Project status: Under Development
Robotics, Artificial Intelligence
Intel Technologies
AI DevCloud / Xeon, MKL, Intel Opt ML/DL Framework, Movidius NCS
Overview / Usage
In 2017, ITAndroids qualified for the first time as a contestant team at RoboCup in the Humanoid KidSize category. The group therefore needed to develop an effective robot to play in the competition. Since 2015, RoboCup regulations have required the ball to be at least 50% white. The group’s first attempts at heuristic techniques to identify white, rounded objects using the Hough Transform did not work well.
Since detecting a white ball proved difficult with Hough Transform techniques, the group decided to use a deep-learning algorithm so that the robot could reliably detect the ball during a soccer game. The ball detection algorithm is, however, just one part of a larger software stack that makes the robot play soccer and runs on an Intel NUC computer modified to fit inside the robot. After it performed well in soccer games, the neural network was also modified to detect the soccer field's goalposts.
Methodology / Approach
While working on this project and researching object detection methods based on neural networks, I found that the YOLO (You Only Look Once) algorithm is efficient for real-time object detection because it uses a single neural network and can detect multiple objects. Hence, I decided to implement such an algorithm in Python with the help of the TensorFlow and Keras libraries.
However, I made some changes to the original algorithm, since ITAndroids’ robot has limited computational power. First, I adapted the neural network to detect two types of objects in an image. Second, I reduced the number of layers compared to the original network. With those changes, my neural network was able not only to detect a ball in accordance with RoboCup’s rules but also to run in acceptable time on the robot’s internal computer.
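As an illustration of this kind of reduced architecture, the sketch below builds a small YOLO-style convolutional network in Keras. It assumes a 640x480 RGB input and one predicted box per class per cell; the actual layer counts, filter sizes, and output encoding used in the project are not specified here, so everything in the sketch is a hypothetical stand-in.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 2        # ball and goalpost
VALUES_PER_CLASS = 5   # objectness score, x offset, y offset, width, height

def build_small_yolo(input_shape=(480, 640, 3)):
    """Hypothetical reduced YOLO-style network. Five conv/pool stages shrink
    the input by a factor of 32, so a 640x480 image maps to a 20x15 grid of
    32x32-pixel cells, matching the 300-cell grid described in the text."""
    inputs = keras.Input(shape=input_shape)
    x = inputs
    for filters in (16, 32, 64, 128, 256):   # illustrative filter counts only
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D(2)(x)
    # One output slice per class holding (objectness, x, y, w, h) for each cell.
    outputs = layers.Conv2D(NUM_CLASSES * VALUES_PER_CLASS, 1,
                            activation="sigmoid")(x)   # shape (15, 20, 10)
    return keras.Model(inputs, outputs)

model = build_small_yolo()
model.summary()
```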
The implemented algorithm divides an input image into 300 cells of 32x32 pixels and, for each cell, calculates the probability that it contains an object, the location of the object's center, and its dimensions. In the end, the algorithm finds the highest of those probabilities: the desired object is located at the cell with the greatest probability. If that probability is below a certain threshold, the algorithm assumes there is no object in the image.
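A decoding step along these lines could look like the following sketch, which operates on the (15, 20, 10) output grid of the hypothetical network above. The helper name, the per-cell value layout, and the 0.5 threshold are illustrative assumptions rather than the project's actual values.

```python
import numpy as np

CELL_SIZE = 32  # each grid cell covers a 32x32-pixel region of the image

def decode_detection(grid, class_index, threshold=0.5):
    """grid: array of shape (rows, cols, classes * 5), where each class slice
    holds (objectness, x_offset, y_offset, width, height), all in [0, 1]."""
    scores = grid[..., class_index * 5]                 # objectness per cell
    row, col = np.unravel_index(np.argmax(scores), scores.shape)
    if scores[row, col] < threshold:
        return None                                     # no object detected
    _, x_off, y_off, width, height = grid[row, col,
                                          class_index * 5:class_index * 5 + 5]
    # Convert the cell-relative center to pixel coordinates; width and height
    # stay normalized to [0, 1] in this sketch.
    center_x = (col + x_off) * CELL_SIZE
    center_y = (row + y_off) * CELL_SIZE
    return center_x, center_y, width, height, float(scores[row, col])
```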
Technologies Used
To develop the algorithm, I used Google’s open-source library TensorFlow as well as Keras. I first trained the neural network on my computer’s GPU with the help of NVIDIA CUDA, and later used Intel DevCloud to train the newer version of the CNN, which detects both balls and goalposts. Furthermore, to deploy the algorithm on the robot’s computer, I wrote a C++ program using Google’s TensorFlow.
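For context, a training call on a network like the earlier sketch could look as follows. The optimizer, the mean-squared-error placeholder loss (the real YOLO loss combines localization, confidence, and classification terms), the dummy tensor shapes, and the saved file name are all assumptions made only for illustration.

```python
import numpy as np
from tensorflow import keras

# Reuses build_small_yolo() from the earlier sketch.
model = build_small_yolo()
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4), loss="mse")

# Dummy tensors that only illustrate the expected shapes: 640x480 RGB frames
# as inputs and 15x20 ground-truth grids (10 values per cell) as targets.
images = np.zeros((4, 480, 640, 3), dtype=np.float32)
targets = np.zeros((4, 15, 20, 10), dtype=np.float32)
model.fit(images, targets, batch_size=2, epochs=1)

# The saved model can then be loaded from TensorFlow's C++ API on the robot.
model.save("ball_goalpost_detector.h5")
```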
ITAndroids’ humanoid robot has an Intel NUC computer that is supposed to run the neural network for real-time object detection. To compile the code to run properly on that platform, I investigated Intel’s processor libraries, such as Intel MKL. I am also working on attaching the Intel Movidius Neural Compute Stick to the robot's computer so that the neural network can run faster.