There have been two group meetings since the weekend update, which I will summarize in this blog post:
[Photo: Recording the rock'n'roll gesture with a helpful friend.]
- As mentioned in the previous post, we planned a meeting on Tuesday to use the Leap Motion controller to record the hand gestures of some people unacquainted with our work. I invited two friends to come, and we recorded each of them performing ten different gestures ten times. In addition, we asked a female student in the building, who graciously gave us some of her time, to record the gestures as well. This gave us a total of 300 recordings, and I would like to thank all of them for their help.
[Photo: Another friend attempts the scissors gesture.]
However, in spite of our efforts, we found out that our Leap Motion controller is not a terribly accurate device. After analyzing the data, we saw that the controller often confused the user's right and left hands and failed to accurately detect some of the gestures we had intended to use, even relatively simple ones. We nevertheless trained a neural network on the data and, not surprisingly, it did not perform as well as the network we trained last week. There are various possible reasons for this, but the most likely one is simply that recognizing nine or ten gestures is considerably harder than recognizing the five we worked with last week.
After this slight disappointment, we decided to simplify our gesture set a little and work with eight gestures instead of ten. We also decided that in future recording sessions we will use the Leap Motion visualizer to check whether the recording is actually being captured properly. The final version of our software should also include a video output that shows the user how their gesture is being recorded, so that they can cancel the recording if the Leap Motion appears to be misbehaving. Finally, we are going to see whether a different framework, library or machine learning technique works better on our data than PyBrain's artificial neural network; a rough sketch of the kind of PyBrain setup we have been using appears further below.

- The third feedback session happened today, Thursday. We presented the current state of our project and got some feedback from the instructors; they suggested trying a different Leap Motion controller to see whether it works better, and perhaps trying some new machine learning algorithms. We also saw the work of another group that is also using the Leap Motion controller; in case the reader is interested, their blog is at cvml1.wordpress.com. Our work is a little different, though, since they only work with static gestures while we expect the user to perform some motion. One aspect of their work interested me in particular: they are planning to use a weighted k-nearest neighbour classifier, which may not be as naive a method as I first thought; a small sketch of the idea follows below.
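To make the weighted k-nearest neighbour idea concrete, here is a tiny illustration using scikit-learn with random placeholder data standing in for real gesture feature vectors. This is not the other group's code, and the feature length and class count are just assumptions; it only shows the general technique.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Placeholder data: 100 fake recordings of 60 features each, 8 gesture classes.
X = np.random.rand(100, 60)
y = np.random.randint(8, size=100)

# weights='distance' makes nearer neighbours count more than distant ones,
# which turns the plain majority vote into a weighted one.
knn = KNeighborsClassifier(n_neighbors=5, weights='distance')
knn.fit(X, y)
print(knn.predict(X[:3]))
```

The weighting is what makes the method less naive than it might seem: a single distant neighbour from the wrong class has little influence on the prediction.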
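For reference, here is roughly the kind of PyBrain setup we have been experimenting with. This is only a sketch: the feature length, hidden layer size, training parameters and the random placeholder data are all assumptions for illustration, and it presumes each recording has already been flattened into a fixed-length feature vector.

```python
import numpy as np
from pybrain.datasets import ClassificationDataSet
from pybrain.tools.shortcuts import buildNetwork
from pybrain.structure.modules import SoftmaxLayer
from pybrain.supervised.trainers import BackpropTrainer

N_FEATURES = 60   # assumed length of one flattened gesture recording
N_GESTURES = 8    # the reduced gesture set mentioned above

# Placeholder recordings: (feature_vector, gesture_label) pairs.
recordings = [(np.random.rand(N_FEATURES), np.random.randint(N_GESTURES))
              for _ in range(100)]

data = ClassificationDataSet(N_FEATURES, nb_classes=N_GESTURES)
for features, label in recordings:
    data.addSample(features, [label])
data._convertToOneOfMany()  # one output unit per gesture class

# One hidden layer of 40 units; the softmax output gives class probabilities.
net = buildNetwork(data.indim, 40, data.outdim, outclass=SoftmaxLayer)
trainer = BackpropTrainer(net, dataset=data, learningrate=0.01, momentum=0.9)
trainer.trainEpochs(30)

print(net.activate(recordings[0][0]))  # probability for each gesture
```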
To conclude this post: today we are going to redo some of the recordings from Tuesday while making sure that the Leap Motion controller is working properly. Then we will see whether our machine learning algorithm works better with the new data. We will post an update soon and report whether it finally works correctly, after which we can start working on the user interface and the other parts of the final version of the software.