Investigation of Gesture Controlled Articulatory Vocal Synthesizer using a Bio-Mechanical Mapping Layer
Johnty Wang, Nicolas d'Alessandro, Sidney Fels, and Robert Pritchard
Proceedings of the International Conference on New Interfaces for Musical Expression
- Year: 2012
- Location: Ann Arbor, Michigan
- Keywords: Gesture, Mapping, Articulatory, Speech, Singing, Synthesis
- DOI: 10.5281/zenodo.1178447
- PDF: http://www.nime.org/proceedings/2012/nime2012_291.pdf
Abstract:
We have added a dynamic bio-mechanical mapping layer, containing a model of the human vocal tract with tongue muscle activations as input and tract geometry as output, to a real-time gesture-controlled voice synthesizer system used for musical performance and speech research. Using this mapping layer, we conducted user studies comparing control of the model's muscle activations via a 2D set of force sensors against a position-controlled kinematic input space that maps directly to the sound. Preliminary user evaluation suggests that it was more difficult to use force input, but the resultant output sound was more intelligible and natural compared to the kinematic controller. This result shows that force input is potentially feasible for browsing through a vowel space in an articulatory voice synthesis system, although further evaluation is required.
Citation:
Johnty Wang, Nicolas d'Alessandro, Sidney Fels, and Robert Pritchard. 2012. Investigation of Gesture Controlled Articulatory Vocal Synthesizer using a Bio-Mechanical Mapping Layer. Proceedings of the International Conference on New Interfaces for Musical Expression. DOI: 10.5281/zenodo.1178447
BibTeX Entry:
@inproceedings{Wang2012,
  abstract = {We have added a dynamic bio-mechanical mapping layer, containing a model of the human vocal tract with tongue muscle activations as input and tract geometry as output, to a real-time gesture-controlled voice synthesizer system used for musical performance and speech research. Using this mapping layer, we conducted user studies comparing control of the model's muscle activations via a 2D set of force sensors against a position-controlled kinematic input space that maps directly to the sound. Preliminary user evaluation suggests that it was more difficult to use force input, but the resultant output sound was more intelligible and natural compared to the kinematic controller. This result shows that force input is potentially feasible for browsing through a vowel space in an articulatory voice synthesis system, although further evaluation is required.},
  address = {Ann Arbor, Michigan},
  author = {Johnty Wang and Nicolas d'Alessandro and Sidney Fels and Robert Pritchard},
  booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression},
  doi = {10.5281/zenodo.1178447},
  issn = {2220-4806},
  keywords = {Gesture, Mapping, Articulatory, Speech, Singing, Synthesis},
  publisher = {University of Michigan},
  title = {Investigation of Gesture Controlled Articulatory Vocal Synthesizer using a Bio-Mechanical Mapping Layer},
  url = {http://www.nime.org/proceedings/2012/nime2012_291.pdf},
  year = {2012}
}