From Mondrian to Modular Synth: Rendering NIME using Generative Adversarial Networks
Akito Van Troyer and Rebecca Kleinberger
Proceedings of the International Conference on New Interfaces for Musical Expression
- Year: 2019
- Location: Porto Alegre, Brazil
- Pages: 272–277
- DOI: 10.5281/zenodo.3672956
- PDF: http://www.nime.org/proceedings/2019/nime2019_paper052.pdf
Abstract:
This paper explores the potential of image-to-image translation techniques to aid the design of new hardware-based musical interfaces such as MIDI keyboards, grid-based controllers, drum machines, and analog modular synthesizers. We collected an extensive image database of such interfaces and implemented image-to-image translation techniques using variants of Generative Adversarial Networks. The created models learn the mapping between input and output images using a training set of either paired or unpaired images. We qualitatively assess the visual outcomes of three image-to-image translation models: reconstructing interfaces from edge maps, and collection style transfers based on two image sets, visuals of mosaic tile patterns and geometric abstract two-dimensional art. This paper aims to demonstrate that synthesizing interface layouts with image-to-image translation techniques can yield insights for researchers, musicians, music technology industrial designers, and the broader NIME community.
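The paired setup the abstract describes (an edge map in, a rendered interface image out) follows the general pix2pix recipe: a generator translates the edge map while a discriminator judges (edge map, image) pairs. The sketch below is a minimal, hypothetical illustration of that recipe, not the authors' implementation; all network sizes, losses, and tensor shapes are assumptions.

```python
# Hypothetical pix2pix-style sketch for paired edge-map -> interface translation.
# Architectures and hyperparameters are illustrative assumptions only.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a 1-channel edge map to a 3-channel interface image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores (edge map, image) pairs as real or generated, patch-wise."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + 3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),  # patch-level logits
        )
    def forward(self, edge, img):
        return self.net(torch.cat([edge, img], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

# One training step on a dummy paired batch (edge map, photo of an interface).
edge = torch.randn(4, 1, 64, 64)
real = torch.randn(4, 3, 64, 64)

# Discriminator step: real pairs -> 1, generated pairs -> 0.
fake = G(edge).detach()
pred_real, pred_fake = D(edge, real), D(edge, fake)
d_loss = bce(pred_real, torch.ones_like(pred_real)) + \
         bce(pred_fake, torch.zeros_like(pred_fake))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator while staying close to the target (L1).
fake = G(edge)
pred = D(edge, fake)
g_loss = bce(pred, torch.ones_like(pred)) + 100.0 * l1(fake, real)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The L1 term keeps the generated layout close to the ground-truth interface while the adversarial term pushes toward realistic texture; the unpaired collection style transfers mentioned in the abstract (mosaic tiles, geometric abstract art) would instead use a CycleGAN-style cycle-consistency objective, since no pixel-aligned targets exist there.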
Citation:
Akito Van Troyer and Rebecca Kleinberger. 2019. From Mondrian to Modular Synth: Rendering NIME using Generative Adversarial Networks. Proceedings of the International Conference on New Interfaces for Musical Expression, Porto Alegre, Brazil, 272–277. DOI: 10.5281/zenodo.3672956
BibTeX Entry:
@inproceedings{VanTroyer2019,
  abstract  = {This paper explores the potential of image-to-image translation techniques in aiding the design of new hardware-based musical interfaces such as MIDI keyboard, grid-based controller, drum machine, and analog modular synthesizers. We collected an extensive image database of such interfaces and implemented image-to-image translation techniques using variants of Generative Adversarial Networks. The created models learn the mapping between input and output images using a training set of either paired or unpaired images. We qualitatively assess the visual outcomes based on three image-to-image translation models: reconstructing interfaces from edge maps, and collection style transfers based on two image sets: visuals of mosaic tile patterns and geometric abstract two-dimensional arts. This paper aims to demonstrate that synthesizing interface layouts based on image-to-image translation techniques can yield insights for researchers, musicians, music technology industrial designers, and the broader NIME community.},
  address   = {Porto Alegre, Brazil},
  author    = {Akito Van Troyer and Rebecca Kleinberger},
  booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression},
  doi       = {10.5281/zenodo.3672956},
  editor    = {Marcelo Queiroz and Anna Xambó Sedó},
  issn      = {2220-4806},
  month     = {June},
  pages     = {272--277},
  publisher = {UFRGS},
  title     = {From Mondrian to Modular Synth: Rendering {NIME} using Generative Adversarial Networks},
  url       = {http://www.nime.org/proceedings/2019/nime2019_paper052.pdf},
  year      = {2019}
}