Designing, Playing, and Performing with a Vision-based Mouth Interface

10/07/2020
by Michael J. Lyons, et al.

The role of the face and mouth in speech production as well as non-verbal communication suggests the use of facial action to control musical sound. Here we document work on the Mouthesizer, a system which uses a head-worn miniature camera and a computer vision algorithm to extract shape parameters from the mouth opening and output these as MIDI control changes. We report our experience with various gesture-to-sound mappings and musical applications, and describe a live performance which used the Mouthesizer interface.
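The final stage described above, turning an extracted mouth-shape parameter into a MIDI control change, can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function name, the assumption that the shape parameter is normalized to [0, 1], and the clamping behavior are all hypothetical.

```python
def shape_to_cc(value, lo=0.0, hi=1.0):
    """Clamp a mouth-shape parameter to [lo, hi] and scale it
    to a 7-bit MIDI control change value (0-127)."""
    value = min(max(value, lo), hi)
    return round((value - lo) / (hi - lo) * 127)

# e.g. a fully closed mouth (0.0) maps to CC value 0,
# and a fully open mouth (1.0) maps to CC value 127.
```

In a real pipeline, a value like this would be sent each video frame on a chosen controller number, so that mouth opening continuously modulates a synthesizer parameter such as filter cutoff.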
