The Sound Space as Musical Instrument: Playing Corpus-Based Concatenative Synthesis
Diemo Schwarz
Proceedings of the International Conference on New Interfaces for Musical Expression
- Year: 2012
- Location: Ann Arbor, Michigan
- Keywords: CataRT, corpus-based concatenative synthesis, gesture
- DOI: 10.5281/zenodo.1180593
- PDF: http://www.nime.org/proceedings/2012/nime2012_120.pdf
Abstract:
Corpus-based concatenative synthesis is a fairly recent sound synthesis method, based on descriptor analysis of any number of existing or live-recorded sounds, and synthesis by selection of sound segments from the database matching given sound characteristics. It is well described in the literature, but has been rarely examined for its capacity as a new interface for musical expression. The interesting outcome of such an examination is that the actual instrument is the space of sound characteristics, through which the performer navigates with gestures captured by various input devices. We will take a look at different types of interaction modes and controllers (positional, inertial, audio analysis) and the gestures they afford, and provide a critical assessment of their musical and expressive capabilities, based on several years of musical experience, performing with the CataRT system for real-time CBCS.
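The selection step the abstract describes, choosing the database segment whose descriptors best match a target point in the sound space, can be sketched as a weighted nearest-neighbour lookup. This is an illustrative sketch only, not CataRT's implementation; the descriptor names, corpus values, and `select_unit` function are assumptions made for the example.

```python
import math

# Hypothetical corpus: each unit (sound segment) carries descriptor values.
# Descriptor names and numbers are illustrative, not taken from the paper.
corpus = [
    {"id": "grain_0", "pitch": 220.0, "loudness": -12.0, "brightness": 1500.0},
    {"id": "grain_1", "pitch": 440.0, "loudness": -6.0,  "brightness": 3000.0},
    {"id": "grain_2", "pitch": 330.0, "loudness": -9.0,  "brightness": 2200.0},
]

DESCRIPTORS = ("pitch", "loudness", "brightness")

def select_unit(target, corpus, weights=None):
    """Return the corpus unit closest to the target point in descriptor
    space, using a weighted Euclidean distance."""
    weights = weights or {d: 1.0 for d in DESCRIPTORS}
    def dist(unit):
        return math.sqrt(sum(
            weights[d] * (unit[d] - target[d]) ** 2 for d in DESCRIPTORS))
    return min(corpus, key=dist)

# A performer gesture (e.g. a 2D position from a controller) would be mapped
# to such a target point, through which the sound space is navigated:
target = {"pitch": 300.0, "loudness": -10.0, "brightness": 2000.0}
print(select_unit(target, corpus)["id"])  # grain_2 is nearest here
```

In practice descriptors would be normalised to comparable ranges before computing distances, since raw units (Hz, dB) differ wildly in scale; the sketch omits that step for brevity.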
Citation:
Diemo Schwarz. 2012. The Sound Space as Musical Instrument: Playing Corpus-Based Concatenative Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression. DOI: 10.5281/zenodo.1180593
BibTeX Entry:
@inproceedings{Schwarz2012,
  abstract  = {Corpus-based concatenative synthesis is a fairly recent sound synthesis method, based on descriptor analysis of any number of existing or live-recorded sounds, and synthesis by selection of sound segments from the database matching given sound characteristics. It is well described in the literature, but has been rarely examined for its capacity as a new interface for musical expression. The interesting outcome of such an examination is that the actual instrument is the space of sound characteristics, through which the performer navigates with gestures captured by various input devices. We will take a look at different types of interaction modes and controllers (positional, inertial, audio analysis) and the gestures they afford, and provide a critical assessment of their musical and expressive capabilities, based on several years of musical experience, performing with the CataRT system for real-time CBCS.},
  address   = {Ann Arbor, Michigan},
  author    = {Diemo Schwarz},
  booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression},
  doi       = {10.5281/zenodo.1180593},
  issn      = {2220-4806},
  keywords  = {CataRT, corpus-based concatenative synthesis, gesture},
  publisher = {University of Michigan},
  title     = {The Sound Space as Musical Instrument: Playing Corpus-Based Concatenative Synthesis},
  url       = {http://www.nime.org/proceedings/2012/nime2012_120.pdf},
  year      = {2012}
}