Impress: A Machine Learning Approach to Soundscape Affect Classification for a Music Performance Environment
Miles Thorogood and Philippe Pasquier
Proceedings of the International Conference on New Interfaces for Musical Expression
- Year: 2013
- Location: Daejeon, Republic of Korea
- Pages: 256–260
- Keywords: soundscape, performance, machine learning, audio features, affect grid
- DOI: 10.5281/zenodo.1178674 (Link to paper and supplementary files)
- PDF: http://www.nime.org/proceedings/2013/nime2013_157.pdf
Abstract
Soundscape composition in improvisation and performance contexts involves many processes that can become overwhelming for a performer, impacting the quality of the composition. One important task is evaluating the mood of a composition for evoking accurate associations and memories of a soundscape. A new system that uses supervised machine learning is presented for the acquisition and real-time feedback of soundscape affect. A model of soundscape mood is created by users entering evaluations of audio environments using a mobile device. The same device then provides feedback to the user of the predicted mood of other audio environments. We used a feature vector of Total Loudness and MFCC extracted from an audio signal to build multiple regression models. The evaluation of the system shows the tool is effective in predicting soundscape affect.
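The approach described in the abstract, mapping a feature vector of Total Loudness and MFCCs to affect-grid coordinates via multiple regression, can be sketched as follows. This is an illustrative reconstruction only, not the paper's implementation: the feature dimensions, label ranges, and training data here are hypothetical, and feature extraction (loudness and MFCC computation) is assumed to happen elsewhere.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 50 audio segments, each described by
# 1 Total Loudness value + 13 MFCC coefficients = 14 features.
X = rng.normal(size=(50, 14))

# User-entered affect-grid evaluations in [-1, 1]:
# columns = (valence, arousal), as on Russell's affect grid.
y = rng.uniform(-1, 1, size=(50, 2))

# Add a bias column and fit one linear regression model per affect
# dimension with ordinary least squares.
X_b = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(X_b, y, rcond=None)  # shape: (15, 2)

def predict_affect(features):
    """Predict a (valence, arousal) pair for one 14-dim feature vector."""
    return np.concatenate([[1.0], features]) @ coef

pred = predict_affect(X[0])  # one (valence, arousal) prediction
```

In the system described by the paper, predictions like `pred` would drive the real-time mood feedback shown to the performer on the mobile device.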
Citation
Miles Thorogood and Philippe Pasquier. 2013. Impress: A Machine Learning Approach to Soundscape Affect Classification for a Music Performance Environment. Proceedings of the International Conference on New Interfaces for Musical Expression. DOI: 10.5281/zenodo.1178674 [PDF]
BibTeX Entry
@inproceedings{Thorogood2013,
abstract = {Soundscape composition in improvisation and performance contexts involves many processes that can become overwhelming for a performer, impacting the quality of the composition. One important task is evaluating the mood of a composition for evoking accurate associations and memories of a soundscape. A new system that uses supervised machine learning is presented for the acquisition and real-time feedback of soundscape affect. A model of soundscape mood is created by users entering evaluations of audio environments using a mobile device. The same device then provides feedback to the user of the predicted mood of other audio environments. We used a feature vector of Total Loudness and MFCC extracted from an audio signal to build multiple regression models. The evaluation of the system shows the tool is effective in predicting soundscape affect.},
address = {Daejeon, Republic of Korea},
author = {Miles Thorogood and Philippe Pasquier},
booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression},
doi = {10.5281/zenodo.1178674},
issn = {2220-4806},
keywords = {soundscape, performance, machine learning, audio features, affect grid},
month = {May},
pages = {256--260},
publisher = {Graduate School of Culture Technology, KAIST},
title = {Impress: A Machine Learning Approach to Soundscape Affect Classification for a Music Performance Environment},
url = {http://www.nime.org/proceedings/2013/nime2013_157.pdf},
year = {2013}
}