I attended the Recurse Center in the summer of 2018. While there, I had the opportunity to collaborate with Omayeli on face the music, a website that uses computer vision to let you make music with your face, a.k.a. a face theremin.
The ultimate goal is for someone to be able to play a tune with their facial features. It's not there just yet, but it's quite fun regardless.
It works by using clmtrackr to track facial features. It then takes note of the distance between various pairs of points, like the upper and lower lips or the eyebrows and pupils, to determine whether a particular note should be playing. When two points grow a certain amount apart, Tone.js emits a tone in the current key.
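A minimal sketch of that triggering logic, in plain JavaScript: clmtrackr reports tracked features as an array of `[x, y]` points, so deciding whether a note should play reduces to a Euclidean distance and a threshold. The point indices and threshold below are illustrative assumptions, not the project's actual values.

```javascript
// Euclidean distance between two tracked points, each an [x, y] pair.
function distance(a, b) {
  return Math.hypot(a[0] - b[0], a[1] - b[1]);
}

// True when points i and j have grown farther apart than the threshold,
// i.e. when a note should start playing. In the real app this would
// gate a Tone.js call such as synth.triggerAttack(note).
function shouldPlay(points, i, j, threshold) {
  return distance(points[i], points[j]) > threshold;
}

// Illustrative only: pretend indices 0 and 1 are the upper and lower lip.
const points = [[100, 120], [100, 150]];
shouldPlay(points, 0, 1, 20); // mouth opened past 20px, so play
```

The same check works for any feature pair (eyebrow vs. pupil, lip corners, and so on); only the indices and threshold change.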
To determine the key we are in, we take the points on the face and map them onto a Tonnetz. As the face tilts or moves closer or farther away, there is a smooth transition between related chords through a linear algebraic transform.
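One way to picture that mapping, sketched under stated assumptions: a Tonnetz is a lattice where one axis steps by perfect fifths (7 semitones) and the other by major thirds (4 semitones), so adjacent nodes are always closely related chords. The transform matrix and the `(tilt, dist)` face coordinates below are hypothetical placeholders, not the project's actual calibration.

```javascript
const NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
                    "F#", "G", "G#", "A", "A#", "B"];

// Linear map from face coordinates (tilt, dist) to Tonnetz lattice
// coordinates. The matrix M is an illustrative assumption.
function faceToLattice(tilt, dist, M = [[2, 0], [0, 2]]) {
  return [M[0][0] * tilt + M[0][1] * dist,
          M[1][0] * tilt + M[1][1] * dist];
}

// Snap to the nearest lattice node; node (a, b) carries pitch class
// (7a + 4b) mod 12: fifths along one axis, major thirds along the other.
function latticeToPitchClass(u, v) {
  const a = Math.round(u), b = Math.round(v);
  return (((7 * a + 4 * b) % 12) + 12) % 12;
}

function currentKey(tilt, dist) {
  const [u, v] = faceToLattice(tilt, dist);
  return NOTE_NAMES[latticeToPitchClass(u, v)];
}

currentKey(0, 0);   // "C" at the origin
currentKey(0.5, 0); // one fifth over: "G"
```

Because small face movements only move you to neighboring lattice nodes, the key changes by a fifth or a third at a time, which is what makes the chord transitions feel smooth.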
The project ended up being a lot of fun: everyone I showed it to, from ages 3 to 85, enjoyed it. Of course, after a month of hearing the same beats, we got tired of them.
One particular issue with face the music is that it is hard to play a melody: the facial tracking is not very accurate, and the tracked points tend to jump around. We also noticed that many people tried to use their tongue to make noise, but facial recognition libraries do not seem to do tongue tracking.