Shaping and Exploring Interactive Motion-Sound Mappings Using Online Clustering Techniques
Hugo Scurto, Frédéric Bevilacqua, and Jules Françoise
Proceedings of the International Conference on New Interfaces for Musical Expression
- Year: 2017
- Location: Copenhagen, Denmark
- Pages: 410–415
- DOI: 10.5281/zenodo.1176270
- PDF: http://www.nime.org/proceedings/2017/nime2017_paper0077.pdf
Abstract:
Machine learning tools for designing motion-sound relationships often rely on a two-phase iterative process, where users must alternate between designing gestures and performing mappings. We present a first prototype of a user adaptable tool that aims at merging these design and performance steps into one fully interactive experience. It is based on an online learning implementation of a Gaussian Mixture Model supporting real-time adaptation to user movement and generation of sound parameters. To allow both fine-tune modification tasks and open-ended improvisational practices, we designed two interaction modes that either let users shape, or guide interactive motion-sound mappings. Considering an improvisational use case, we propose two example musical applications to illustrate how our tool might support various forms of corporeal engagement with sound, and inspire further perspectives for machine learning-mediated embodied musical expression.
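The abstract describes an online-learning Gaussian Mixture Model that adapts to the performer's movement in real time and generates sound parameters. The sketch below is not the authors' implementation; it is a minimal illustration of the general idea, assuming diagonal-covariance components updated with stochastic EM steps and a hypothetical per-component sound-parameter vector used for responsibility-weighted mapping. All names (OnlineGMM, learning_rate, map_to_sound) are illustrative assumptions.

```python
import numpy as np

class OnlineGMM:
    """Sketch of an online GMM that clusters motion frames and maps them to sound."""

    def __init__(self, n_components, dim_motion, dim_sound, learning_rate=0.05):
        self.k = n_components
        self.lr = learning_rate
        # Randomly initialised means, unit variances, uniform mixture weights.
        self.means = np.random.randn(n_components, dim_motion)
        self.vars = np.ones((n_components, dim_motion))
        self.weights = np.full(n_components, 1.0 / n_components)
        # Hypothetical: each component is associated with a sound-parameter vector.
        self.sound_params = np.random.rand(n_components, dim_sound)

    def _responsibilities(self, x):
        # Diagonal-covariance Gaussian log-likelihoods, normalised over components.
        diff = x - self.means
        log_p = -0.5 * np.sum(diff ** 2 / self.vars + np.log(2 * np.pi * self.vars), axis=1)
        log_p += np.log(self.weights)
        p = np.exp(log_p - log_p.max())
        return p / p.sum()

    def update(self, x):
        # One stochastic EM step: nudge each component toward the new motion
        # frame in proportion to its responsibility (online adaptation).
        r = self._responsibilities(x)
        for j in range(self.k):
            step = self.lr * r[j]
            self.means[j] += step * (x - self.means[j])
            self.vars[j] += step * ((x - self.means[j]) ** 2 - self.vars[j])
            self.vars[j] = np.maximum(self.vars[j], 1e-6)  # keep variances positive
        self.weights = (1 - self.lr) * self.weights + self.lr * r
        return r

    def map_to_sound(self, x):
        # Responsibility-weighted mixture of per-component sound parameters.
        r = self._responsibilities(x)
        return r @ self.sound_params

# Usage: feed one motion frame per control tick, then read out sound parameters.
gmm = OnlineGMM(n_components=3, dim_motion=6, dim_sound=4)
frame = np.random.randn(6)        # e.g. accelerometer + gyroscope features
gmm.update(frame)                 # adapt the mixture to the performer
print(gmm.map_to_sound(frame))    # parameters sent to a synthesiser
```

In this sketch the "shape" versus "guide" interaction modes mentioned in the abstract could correspond, for instance, to freezing or continuing the `update` calls while `map_to_sound` keeps running, but that correspondence is an assumption rather than a description of the paper's design.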
Citation:
Hugo Scurto, Frédéric Bevilacqua, and Jules Françoise. 2017. Shaping and Exploring Interactive Motion-Sound Mappings Using Online Clustering Techniques. Proceedings of the International Conference on New Interfaces for Musical Expression, Copenhagen, Denmark, 410–415. DOI: 10.5281/zenodo.1176270
BibTeX Entry:
@inproceedings{hscurto2017,
  abstract  = {Machine learning tools for designing motion-sound relationships often rely on a two-phase iterative process, where users must alternate between designing gestures and performing mappings. We present a first prototype of a user adaptable tool that aims at merging these design and performance steps into one fully interactive experience. It is based on an online learning implementation of a Gaussian Mixture Model supporting real-time adaptation to user movement and generation of sound parameters. To allow both fine-tune modification tasks and open-ended improvisational practices, we designed two interaction modes that either let users shape, or guide interactive motion-sound mappings. Considering an improvisational use case, we propose two example musical applications to illustrate how our tool might support various forms of corporeal engagement with sound, and inspire further perspectives for machine learning-mediated embodied musical expression.},
  address   = {Copenhagen, Denmark},
  author    = {Hugo Scurto and Frédéric Bevilacqua and Jules Françoise},
  booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression},
  doi       = {10.5281/zenodo.1176270},
  issn      = {2220-4806},
  pages     = {410--415},
  publisher = {Aalborg University Copenhagen},
  title     = {Shaping and Exploring Interactive Motion-Sound Mappings Using Online Clustering Techniques},
  url       = {http://www.nime.org/proceedings/2017/nime2017_paper0077.pdf},
  year      = {2017}
}