Parameterized Melody Generation with Autoencoders and Temporally-Consistent Noise
Aline Weber, Lucas Nunes Alegre, Jim Torresen, and Bruno C. da Silva
Proceedings of the International Conference on New Interfaces for Musical Expression
- Year: 2019
- Location: Porto Alegre, Brazil
- Pages: 174–179
- DOI: 10.5281/zenodo.3672914
- PDF: http://www.nime.org/proceedings/2019/nime2019_paper035.pdf
Abstract:
We introduce a machine learning technique to autonomously generate novel melodies that are variations of an arbitrary base melody. These are produced by a neural network that ensures that (with high probability) the melodic and rhythmic structure of the new melody is consistent with a given set of sample songs. We train a Variational Autoencoder network to identify a low-dimensional set of variables that allows for the compression and representation of sample songs. By perturbing these variables with Perlin Noise---a temporally-consistent parameterized noise function---it is possible to generate smoothly-changing novel melodies. We show that (1) by regulating the amount of noise, one can specify how much of the base song will be preserved; and (2) there is a direct correlation between the noise signal and the differences between the statistical properties of novel melodies and the original one. Users can interpret the controllable noise as a type of "creativity knob": the higher it is, the more leeway the network has to generate significantly different melodies. We present a physical prototype that allows musicians to use a keyboard to provide base melodies and to adjust the network's "creativity knobs" to regulate in real-time the process that proposes new melody ideas.
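The core idea in the abstract can be sketched in a few lines of Python. The following is an illustrative example only, not the authors' implementation: a minimal 1-D Perlin-style noise function (random gradients at integer lattice points, blended with the standard quintic fade curve) perturbs each dimension of a hypothetical VAE latent vector `z` over time, with `creativity` acting as the amplitude knob. The names `perlin1d` and `perturb_latent` are made up for this sketch.

```python
import math
import random

def perlin1d(x, gradients):
    """1-D Perlin-style noise: interpolate between random gradients at
    integer lattice points using the quintic fade 6t^5 - 15t^4 + 10t^3."""
    x0 = math.floor(x)
    t = x - x0
    fade = t * t * t * (t * (t * 6 - 15) + 10)
    g0 = gradients[x0 % len(gradients)]
    g1 = gradients[(x0 + 1) % len(gradients)]
    d0 = g0 * t          # gradient * distance to left lattice point
    d1 = g1 * (t - 1)    # gradient * distance to right lattice point
    return d0 + (d1 - d0) * fade

def perturb_latent(z, creativity, t, gradient_tables):
    """Add temporally-consistent noise to each latent dimension;
    `creativity` scales the noise amplitude (the 'creativity knob')."""
    return [zi + creativity * perlin1d(t, g)
            for zi, g in zip(z, gradient_tables)]

random.seed(0)
latent_dim = 4
# One independent gradient table per latent dimension.
tables = [[random.uniform(-1, 1) for _ in range(256)]
          for _ in range(latent_dim)]
z = [0.0] * latent_dim  # stand-in for a VAE encoding of the base melody

# A smoothly-changing trajectory of perturbed latent vectors; decoding
# each one with the VAE would yield a smoothly-varying melody.
trajectory = [perturb_latent(z, creativity=0.5, t=k * 0.1,
                             gradient_tables=tables)
              for k in range(50)]
```

Because Perlin noise is continuous in `t`, successive latent vectors differ only slightly, which is what makes the generated variations evolve smoothly rather than jump at random; setting `creativity=0` recovers the base latent vector exactly.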
Citation:
Aline Weber, Lucas Nunes Alegre, Jim Torresen, and Bruno C. da Silva. 2019. Parameterized Melody Generation with Autoencoders and Temporally-Consistent Noise. Proceedings of the International Conference on New Interfaces for Musical Expression. DOI: 10.5281/zenodo.3672914
BibTeX Entry:
@inproceedings{Weber2019,
  abstract = {We introduce a machine learning technique to autonomously generate novel melodies that are variations of an arbitrary base melody. These are produced by a neural network that ensures that (with high probability) the melodic and rhythmic structure of the new melody is consistent with a given set of sample songs. We train a Variational Autoencoder network to identify a low-dimensional set of variables that allows for the compression and representation of sample songs. By perturbing these variables with Perlin Noise---a temporally-consistent parameterized noise function---it is possible to generate smoothly-changing novel melodies. We show that (1) by regulating the amount of noise, one can specify how much of the base song will be preserved; and (2) there is a direct correlation between the noise signal and the differences between the statistical properties of novel melodies and the original one. Users can interpret the controllable noise as a type of "creativity knob": the higher it is, the more leeway the network has to generate significantly different melodies. We present a physical prototype that allows musicians to use a keyboard to provide base melodies and to adjust the network's "creativity knobs" to regulate in real-time the process that proposes new melody ideas.},
  address = {Porto Alegre, Brazil},
  author = {Aline Weber and Lucas Nunes Alegre and Jim Torresen and Bruno C. da Silva},
  booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression},
  doi = {10.5281/zenodo.3672914},
  editor = {Marcelo Queiroz and Anna Xambó Sedó},
  issn = {2220-4806},
  month = {June},
  pages = {174--179},
  publisher = {UFRGS},
  title = {Parameterized Melody Generation with Autoencoders and Temporally-Consistent Noise},
  url = {http://www.nime.org/proceedings/2019/nime2019_paper035.pdf},
  year = {2019}
}