Building Sketch-to-Sound Mapping with Unsupervised Feature Extraction and Interactive Machine Learning

Shuoyang Zheng, Bleiz M Del Sette, Charalampos Saitis, Anna Xambó, and Nick Bryan-Kinns

Proceedings of the International Conference on New Interfaces for Musical Expression

Abstract:

In this paper, we explore the interactive construction and exploration of mappings between visual sketches and musical controls. Interactive Machine Learning (IML) allows creators to construct mappings with personalised training examples. However, when it comes to high-dimensional data such as sketches, dimensionality reduction techniques are required to extract features for the IML model. We propose using unsupervised machine learning to encode sketches into lower-dimensional latent representations, which are then used as the source for the IML model to construct sketch-to-sound mappings. We build a proof-of-concept prototype and demonstrate it with two compositions. We reflect on the composing processes to discuss the controllability and explorability of mappings built with this approach and how they contribute to musical expression.
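
To make the two-stage pipeline concrete, here is a minimal illustrative sketch in Python. It is an assumption-laden toy, not the authors' implementation: the encoder is a fixed random projection standing in for a trained unsupervised model (e.g., an autoencoder), and the 28x28 sketch raster, the 8-dimensional latent space, the three sound controls, and the scikit-learn MLPRegressor playing the role of the IML model are all illustrative choices rather than details from the paper.

  # Minimal illustrative sketch (assumptions, not the paper's implementation):
  # stage 1 encodes a sketch into a low-dimensional latent vector with an
  # unsupervised feature extractor; stage 2 trains an IML regressor on
  # user-provided (sketch, sound-control) example pairs.
  import numpy as np
  from sklearn.neural_network import MLPRegressor

  rng = np.random.default_rng(0)
  LATENT_DIM = 8  # assumed latent size

  # Stand-in for a trained unsupervised encoder (e.g., an autoencoder):
  # a fixed random projection from a flattened 28x28 sketch to latent space.
  projection = rng.normal(size=(28 * 28, LATENT_DIM))

  def encode(sketch: np.ndarray) -> np.ndarray:
      """Map a 28x28 sketch raster to a LATENT_DIM-dimensional latent vector."""
      return sketch.reshape(-1) @ projection

  # IML step: the user supplies a handful of personalised training examples
  # pairing sketches with sound-control values (three made-up controls here).
  example_sketches = rng.random((10, 28, 28))  # placeholder user sketches
  example_controls = rng.random((10, 3))       # e.g., pitch, cutoff, gain
  latents = np.stack([encode(s) for s in example_sketches])

  iml_model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
  iml_model.fit(latents, example_controls)

  # Performance time: new sketch -> latent -> sound-control values.
  new_sketch = rng.random((28, 28))
  controls = iml_model.predict(encode(new_sketch).reshape(1, -1))
  print(controls)

The design point the abstract argues for is the split of responsibilities: the unsupervised encoder handles dimensionality reduction once, so the IML model only has to learn a small latent-to-parameter regression from a handful of personalised examples.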

Citation:

Shuoyang Zheng, Bleiz M Del Sette, Charalampos Saitis, Anna Xambó, and Nick Bryan-Kinns. 2024. Building Sketch-to-Sound Mapping with Unsupervised Feature Extraction and Interactive Machine Learning. In Proceedings of the International Conference on New Interfaces for Musical Expression, Utrecht, Netherlands, 591–597. DOI: 10.5281/zenodo.13904959

BibTeX Entry:

@inproceedings{nime2024_86,
 abstract = {In this paper, we explore the interactive construction and exploration of mappings between visual sketches and musical controls. Interactive Machine Learning (IML) allows creators to construct mappings with personalised training examples. However, when it comes to high-dimensional data such as sketches, dimensionality reduction techniques are required to extract features for the IML model. We propose using unsupervised machine learning to encode sketches into lower-dimensional latent representations, which are then used as the source for the IML model to construct sketch-to-sound mappings. We build a proof-of-concept prototype and demonstrate it with two compositions. We reflect on the composing processes to discuss the controllability and explorability of mappings built with this approach and how they contribute to musical expression.},
 address = {Utrecht, Netherlands},
 articleno = {86},
 author = {Shuoyang Zheng and Bleiz M Del Sette and Charalampos Saitis and Anna Xambó and Nick Bryan-Kinns},
 booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression},
 doi = {10.5281/zenodo.13904959},
 editor = {S M Astrid Bin and Courtney N. Reed},
 issn = {2220-4806},
 month = {September},
 numpages = {7},
 pages = {591--597},
 presentation-video = {https://youtu.be/phj5gkDijnc?si=4ltb2Vyyncxj_tnw},
 title = {Building Sketch-to-Sound Mapping with Unsupervised Feature Extraction and Interactive Machine Learning},
 track = {Papers},
 url = {http://nime.org/proceedings/2024/nime2024_86.pdf},
 year = {2024}
}