Proceedings Archive
This page lists all publications from the NIME conferences.
- Peer review: All papers have been peer-reviewed (most often by three international experts); see the list of reviewers. Only papers that were presented at the conferences (as a presentation, poster, or demo) are included.
- Open access: NIME papers are open access (gold), and the copyright remains with the author(s). The NIME archive uses the Creative Commons Attribution 4.0 International License (CC BY 4.0).
- Public domain: The bibliographic information for NIME, including all BibTeX entries and abstracts, is in the public domain. The list below is generated with Jekyll Scholar from a collection of BibTeX files hosted on GitHub (a small parsing sketch follows this list).
- PDFs: Individual papers are linked from each entry below. All PDFs are also archived in Zenodo, together with Zip files for each year. If you just want to download everything quickly, you can find the Zip files here as well.
- ISSN: The proceedings series has ISSN 2220-4806. Each year’s ISBN is included in the BibTeX files and is also listed here.
- Impact factor: Academic work should always be considered on its own merits (cf. the DORA declaration). That said, the NIME proceedings are generally ranked highly in, for example, the Google Scholar ranking.
- Ethics: Please take a look at NIME’s Publication ethics and malpractice statement.
- Contact: If you find any errors in the database, please feel free to fork and fix them on GitHub, or open an issue in the tracker.
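Because the underlying BibTeX files are public domain and freely available from the GitHub repository, they are also easy to process outside of Jekyll Scholar. The following Python sketch is purely illustrative and is not part of the archive's own tooling: it naively extracts the citekey, title, year, and DOI from each @inproceedings entry in a local .bib file, assuming field values contain at most one level of nested braces (which holds for the proceedings files). The filename nime2022.bib is a placeholder.

```python
import re

# Field values in the NIME .bib files contain at most one level of nested braces,
# e.g. title = {... {NIME} ...}, which this pattern handles.
FIELD = r"\b{name}\s*=\s*\{{((?:[^{{}}]|\{{[^{{}}]*\}})*)\}}"

def get_field(name, entry_text):
    """Return the value of a single BibTeX field, or '' if it is absent."""
    match = re.search(FIELD.format(name=name), entry_text)
    return match.group(1) if match else ""

def parse_bib(bib_text):
    """Naively split a BibTeX string into @inproceedings entries and
    extract a few fields from each one."""
    entries = []
    for chunk in bib_text.split("@inproceedings{")[1:]:
        entries.append({
            "key": chunk.split(",", 1)[0].strip(),
            "title": get_field("title", chunk),
            "year": get_field("year", chunk),
            "doi": get_field("doi", chunk),
        })
    return entries

if __name__ == "__main__":
    with open("nime2022.bib", encoding="utf-8") as f:  # placeholder filename
        for entry in parse_bib(f.read()):
            print(entry["year"], entry["doi"], entry["title"])
```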
NIME publications (in reverse chronological order)
2022
-
Andrea Guidi and Andrew McPherson. 2022. Quantitative evaluation of aspects of embodiment in new digital musical instruments. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.79d0b38f
Download PDF DOIThis paper discusses a quantitative method to evaluate whether an expert player is able to execute skilled actions on an unfamiliar interface while keeping the focus of their performance on the musical outcome rather than on the technology itself. In our study, twelve professional electric guitar players used an augmented plectrum to replicate prerecorded timbre variations in a set of musical excerpts. The task was undertaken in two experimental conditions: a reference condition, and a subtle gradual change in the sensitivity of the augmented plectrum which is designed to affect the guitarist’s performance without making them consciously aware of its effect. We propose that players’ subconscious response to the disruption of changing the sensitivity, as well as their overall ability to replicate the stimuli, may indicate the strength of the relationship they developed with the new interface. The case study presented in this paper highlights the strengths and limitations of this method.
@inproceedings{NIME22_1, author = {Guidi, Andrea and McPherson, Andrew}, title = {Quantitative evaluation of aspects of embodiment in new digital musical instruments}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {1}, doi = {10.21428/92fbeb44.79d0b38f}, url = {https://doi.org/10.21428%2F92fbeb44.79d0b38f}, presentation-video = {https://youtu.be/J4981qsq_7c}, pdf = {101.pdf} }
-
Brady Boettcher, Joseph Malloch, Johnty Wang, and Marcelo M. Wanderley. 2022. Mapper4Live: Using Control Structures to Embed Complex Mapping Tools into Ableton Live. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.625fbdbf
Download PDF DOIThis paper presents Mapper4Live, a software plugin made for the popular digital audio workstation software Ableton Live. Mapper4Live exposes Ableton’s synthesis and effect parameters on the distributed libmapper signal mapping network, providing new opportunities for interaction between software and hardware synths, audio effects, and controllers. The plugin’s uses and relevance in research, music production and musical performance settings are explored, detailing the development journey and ideas for future work on the project.
@inproceedings{NIME22_10, author = {Boettcher, Brady and Malloch, Joseph and Wang, Johnty and Wanderley, Marcelo M.}, title = {Mapper4Live: Using Control Structures to Embed Complex Mapping Tools into Ableton Live}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {10}, doi = {10.21428/92fbeb44.625fbdbf}, url = {https://doi.org/10.21428%2F92fbeb44.625fbdbf}, presentation-video = {https://youtu.be/Sv3v3Jmemp0}, pdf = {115.pdf} }
-
Anthony T. Marasco. 2022. Approaching the Norns Shield as a Laptop Alternative for Democratizing Music Technology Ensembles. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.89003700
Download PDF DOIMusic technology ensembles—often consisting of multiple laptops as the performers’ primary instrument— provide collaborative artistic experiences for electronic musicians. In an effort to remove the significant technical and financial barriers that laptops can present to performers looking to start their own group, this paper proposes a solution in the form of the Norns Shield, a computer music instrument (CMI) that requires minimal set-up and promotes immediate music-making to performers of all skill levels. Prior research centered on using alternative CMIs to supplant laptops in ensemble settings is discussed, and the benefits of adopting the Norns Shield in service of democratizing and diversifying the music technology ensemble are demonstrated in a discussion centered on the University of Texas Rio Grande Valley New Music Ensemble’s adoption of the instrument. A description of two software packages developed by the author showcases an extension of the instrument’s abilities to share collaborative control data between internet-enabled CMIs and to remotely manage script launching and parameter configuration across a group of Norns Shields, providing resources for ensembles interested in incorporating the device into their ranks.
@inproceedings{NIME22_11, author = {Marasco, Anthony T.}, title = {Approaching the Norns Shield as a Laptop Alternative for Democratizing Music Technology Ensembles}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {11}, doi = {10.21428/92fbeb44.89003700}, url = {https://doi.org/10.21428%2F92fbeb44.89003700}, presentation-video = {https://www.youtube.com/watch?v=2XixSYrgRuQ}, pdf = {120.pdf} }
-
Juan Ramos, Esteban Calcagno, Ramiro Oscar Vergara, Pablo Riera, and Joaquín Rizza. 2022. Bandoneon 2.0: an interdisciplinary project for research and development of electronic bandoneons in Argentina. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.c38bfb86
Download PDF DOIIn this article we present Bandoneon 2.0, an interdisciplinary project whose main objective is to produce electronic bandoneons in Argentina. The current prices of bandoneons and the scarcity of manufacturers are endangering the possibility of access for the new generations to one of the most emblematic instruments of the culture of this country. Therefore, we aim to create an expressive and accessible electronic bandoneon that can be used in recreational, academic and professional contexts, providing an inclusive response to the current sociocultural demand. The project also involves research on instrument acoustics and the development of specialized software and hardware tools.
@inproceedings{NIME22_12, author = {Ramos, Juan and Calcagno, Esteban and Vergara, Ramiro Oscar and Riera, Pablo and Rizza, Joaqu{\'{\i}}n}, title = {Bandoneon 2.0: an interdisciplinary project for research and development of electronic bandoneons in Argentina}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {12}, doi = {10.21428/92fbeb44.c38bfb86}, url = {https://doi.org/10.21428%2F92fbeb44.c38bfb86}, presentation-video = {https://www.youtube.com/watch?v=5y4BbQWVNGQ}, pdf = {123.pdf} }
-
Andrew R. Brown. 2022. On Board Call: A Gestural Wildlife Imitation Machine. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.71a5a0ba
Download PDF DOIThe On Board Call is a bespoke musical interface designed to engage the general public’s interest in wildlife sounds—such as bird, frog or animal calls—through imitation and interaction. The device is a handheld, battery-operated, microprocessor-based machine that synthesizes sounds using frequency modulation synthesis methods. It includes a small amplifier and loudspeaker for playback and employs an accelerometer and force sensor that register gestural motions that control sound parameters in real time. The device is handmade from off-the-shelf components onto a specially designed PCB and laser cut wooden boards. Development versions of the device have been tested in wildlife listening contexts and in location-based ensemble performance. The device is simple to use, compact and inexpensive to facilitate use in community-based active listening workshops intended to enhance user’s appreciation of the eco acoustic richness of natural environments. Unlike most of the previous work in wildlife call imitation, the Call does not simply play back recorded wildlife sounds, it is designed for performative interaction by a user to bring synthesized sounds to life and imbue them with expression.
@inproceedings{NIME22_13, author = {Brown, Andrew R.}, title = {On Board Call: A Gestural Wildlife Imitation Machine}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {13}, doi = {10.21428/92fbeb44.71a5a0ba}, url = {https://doi.org/10.21428%2F92fbeb44.71a5a0ba}, presentation-video = {https://www.youtube.com/watch?v=iBTBPpaSGi8}, pdf = {125.pdf} }
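The entry above describes a device that synthesizes wildlife-like calls with frequency modulation (FM) synthesis under gestural control. None of the code below comes from the paper; it is only a rough sketch of the general technique, with invented parameter values, in which a rising modulation index stands in for a force-sensor gesture and the result is written to a WAV file.

```python
import wave
import numpy as np

SR = 44100  # sample rate in Hz

def fm_call(duration=1.5, carrier=880.0, ratio=2.0, max_index=6.0):
    """Simple FM tone whose modulation index rises over time, loosely
    imitating a pressure-controlled, bird-like call.  All values are
    invented for illustration and are not taken from the paper."""
    t = np.linspace(0.0, duration, int(SR * duration), endpoint=False)
    index = max_index * t / duration                  # stand-in for a force-sensor sweep
    modulator = np.sin(2.0 * np.pi * carrier * ratio * t)
    phase = 2.0 * np.pi * carrier * t + index * modulator
    envelope = np.exp(-3.0 * t)                       # simple exponential decay
    return envelope * np.sin(phase)

if __name__ == "__main__":
    samples = fm_call()
    pcm = (np.clip(samples, -1.0, 1.0) * 32767).astype(np.int16)
    with wave.open("fm_call.wav", "wb") as w:         # placeholder output file
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SR)
        w.writeframes(pcm.tobytes())
```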
-
Krzysztof Cybulski. 2022. Post-digital sax - a digitally controlled acoustic single-reed woodwind instrument. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.756616d4
Download PDF DOIIn a search for a symbiotic relationship between the digital and physical worlds, I am developing a hybrid, digital-acoustic wind instrument - the Post-Digital Sax. As the name implies, the instrument combines the advantages and flexibility of digital control with a hands-on physical interface and a non-orthodox means of sound production, in which the airflow, supplied by the player’s lungs, is the actual sound source. The pitch, however, is controlled digitally, allowing a wide range of musical material manipulation, bringing the possibilities of a digitally augmented performance into the realm of acoustic sound.
@inproceedings{NIME22_14, author = {Cybulski, Krzysztof}, title = {Post-digital sax - a digitally controlled acoustic single-reed woodwind instrument}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {14}, doi = {10.21428/92fbeb44.756616d4}, url = {https://doi.org/10.21428%2F92fbeb44.756616d4}, presentation-video = {https://youtu.be/RnuEvjMdEj4}, pdf = {126.pdf} }
-
Lonce Wyse and Prashanth Thattai Ravikumar. 2022. Syntex: parametric audio texture datasets for conditional training of instrumental interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.0fe70450
Download PDF DOIAn emerging approach to building new musical instruments is based on training neural networks to generate audio conditioned upon parametric input. We use the term "generative models" rather than "musical instruments" for the trained networks because it reflects the statistical way the instruments are trained to "model" the association between parameters and the distribution of audio data, and because "musical" carries historical baggage as a reference to a restricted domain of sound. Generative models are musical instruments in that they produce a prescribed range of sound playable through the expressive manipulation of an interface. To learn the mapping from interface to audio, generative models require large amounts of parametrically labeled audio data. This paper introduces the Synthetic Audio Textures (SynTex) collection of data set generators. SynTex is a database of parameterized audio textures and a suite of tools for creating and labeling datasets designed for training and testing generative neural networks for parametrically conditioned sound synthesis. While there are many existing labeled speech and traditional musical instrument databases available for training generative models, most datasets of general (e.g. environmental) audio are oriented and labeled for the purpose of classification rather than expressive musical generation. SynTex is designed to provide an open shareable reference set of audio for creating generative sound models including their interfaces. SynTex sound sets are synthetically generated. This facilitates the dense and accurate labeling necessary for training generative networks conditionally dependent on input parameter values. SynTex has several characteristics designed to support a data-centric approach to developing, exploring, training, and testing generative models.
@inproceedings{NIME22_15, author = {Wyse, Lonce and Ravikumar, Prashanth Thattai}, title = {Syntex: parametric audio texture datasets for conditional training of instrumental interfaces.}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {15}, doi = {10.21428/92fbeb44.0fe70450}, url = {https://doi.org/10.21428%2F92fbeb44.0fe70450}, presentation-video = {https://youtu.be/KZHXck9c75s}, pdf = {128.pdf} }
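The entry above concerns datasets of synthetic audio textures labelled with the parameters that generated them. The sketch below is not SynTex; it is a toy example, under invented parameter values and file names, of producing a small set of filtered-noise textures together with the single conditioning parameter (a low-pass cutoff) that generated each one.

```python
import numpy as np

SR = 16000   # sample rate in Hz (example value)
DUR = 2.0    # seconds per texture

def noise_texture(cutoff_hz, seed=0):
    """White noise through a one-pole low-pass filter: a toy texture
    parameterised by its cutoff frequency (for illustration only)."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-1.0, 1.0, int(SR * DUR))
    a = np.exp(-2.0 * np.pi * cutoff_hz / SR)   # one-pole filter coefficient
    out = np.empty_like(noise)
    y = 0.0
    for i, x in enumerate(noise):
        y = (1.0 - a) * x + a * y
        out[i] = y
    return out / (np.max(np.abs(out)) + 1e-9)

if __name__ == "__main__":
    cutoffs = np.linspace(100.0, 2000.0, 8)     # the conditioning parameter
    audio = np.stack([noise_texture(c, seed=i) for i, c in enumerate(cutoffs)])
    # Store the audio next to its parameter labels, ready for conditional training.
    np.savez("toy_textures.npz", audio=audio, cutoff_hz=cutoffs)
```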
-
Jackson Goode and Stefano Fasciani. 2022. A Toolkit for the Analysis of the NIME Proceedings Archive. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.58efca21
Download PDF DOIThis paper describes a toolkit for analyzing the NIME proceedings archive, which facilitates the bibliometric study of the conference papers and the identification of trends and patterns. The toolkit is implemented as a collection of Python methods that aggregate, scrape and retrieve various meta-data from published papers. Extracted data is stored in a large numeric table as well as plain text files. Analytical functions within the toolkit can be easily extended or modified. The text mining script can be highly customized without the need for programming. The toolkit uses only publicly available information organized in standard formats, and is available as open-source software to promote continuous development in step with the NIME archive.
@inproceedings{NIME22_16, author = {Goode, Jackson and Fasciani, Stefano}, title = {A Toolkit for the Analysis of the {NIME} Proceedings Archive}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {16}, doi = {10.21428/92fbeb44.58efca21}, url = {https://doi.org/10.21428%2F92fbeb44.58efca21}, presentation-video = {https://youtu.be/Awp5-oxL-NM}, pdf = {13.pdf} }
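The actual toolkit is distributed by the authors as open-source software; the snippet below only gestures at the flavour of such bibliometric text mining. It counts the most frequent terms in a plain-text file of paper titles or abstracts. The input filename and the stop-word list are placeholders, not the toolkit's configuration.

```python
import re
from collections import Counter

STOP_WORDS = {"a", "an", "and", "for", "in", "of", "on", "the", "to", "with"}

def term_frequencies(text, top_n=20):
    """Lower-case the text, keep alphabetic tokens longer than two letters,
    drop stop words, and return the most common remaining terms."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(t for t in tokens if t not in STOP_WORDS and len(t) > 2)
    return counts.most_common(top_n)

if __name__ == "__main__":
    with open("nime_titles.txt", encoding="utf-8") as f:   # placeholder input file
        for term, count in term_frequencies(f.read()):
            print(f"{count:4d}  {term}")
```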
-
Timothy Tate. 2022. The Concentric Sampler: A musical instrument from a repurposed floppy disk drive. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.324729a3
Download PDF DOIThis paper focuses on the redundancy and physicality of magnetic recording media as a defining factor in the design of a lo-fi audio device, the Concentric Sampler. A modified floppy disk drive (FDD) and additional circuitry enables the FDD to record to and playback audio from a 3.5” floppy disk. The Concentric Sampler is designed as an instrument for live performance and a tool for sonic manipulation, resulting in primitive looping and time-based granular synthesis. This paper explains the motivation and background of the Concentric Sampler, related applications and approaches, its technical realisation, and its musical possibilities. To conclude, the Concentric Sampler’s potential as an instrument and compositional tool is discussed alongside the future possibilities for development.
@inproceedings{NIME22_17, author = {Tate, Timothy}, title = {The Concentric Sampler: A musical instrument from a repurposed floppy disk drive}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {17}, doi = {10.21428/92fbeb44.324729a3}, url = {https://doi.org/10.21428%2F92fbeb44.324729a3}, presentation-video = {https://youtu.be/7Myu1W7tbts}, pdf = {131.pdf} }
-
Takahiro Kamatani, Yoshinao Sato, and Masato Fujino. 2022. Ghost Play - A Violin-Playing Robot using Electromagnetic Linear Actuators. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.754a50b5
Download PDF DOIGhost Play is a violin-playing robot that aims to realize bowing and fingering similar to human players. Existing violin-playing machines have faced various problems concerning performance techniques owing to constraints imposed by their design. Bowing and fingering that require accurate and high-acceleration movement (e.g., a spiccato, tremolo, and glissando) are essential but challenging. To overcome this problem, Ghost Play is equipped with seven electromagnetic linear actuators, three for controlling the bow (i.e., the right hand), and the other four for controlling the pitch on each string (i.e., the left hand). The violin-playing robot is mounted with an unmodified violin bow. A sensor is attached to the bow to measure bow pressure. The control software receives a time series of performance data and manipulates the actuators accordingly. The performance data consists of the bow direction, bow speed, bow pressure, pitch, vibrato interval, vibrato width, and string to be drawn. We also developed an authoring tool for the performance data using a graphic user interface. Finally, we demonstrated Ghost Play performing bowing and fingering techniques such as a spiccato, tremolo, and glissando, as well as a piece of classical music.
@inproceedings{NIME22_18, author = {Kamatani, Takahiro and Sato, Yoshinao and Fujino, Masato}, title = {Ghost Play - A Violin-Playing Robot using Electromagnetic Linear Actuators}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {18}, doi = {10.21428/92fbeb44.754a50b5}, url = {https://doi.org/10.21428%2F92fbeb44.754a50b5}, presentation-video = {https://youtu.be/FOivgYXk1_g}, pdf = {136.pdf} }
-
Thor Magnusson, Chris Kiefer, and Halldor Ulfarsson. 2022. Reflexions upon Feedback. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.aa7de712
Download PDF DOIFeedback is a technique that has been used in musical performance since the advent of electricity. From the early cybernetic explorations of Bebe and Louis Barron, through the screaming sound of Hendrix’s guitar, to the systems design of David Tudor or Nic Collins, we find the origins of feedback in music being technologically and aesthetically diverse. Through interviews with participants in a recent Feedback Musicianship Network symposium, this paper seeks to investigate the contemporary use of this technique and explore how key protagonists discuss the nature of their practice. We see common concepts emerging in these conversations: agency, complexity, coupling, play, design and posthumanism. The paper presents a terminological and ideological framework as manifested at this point in time, and makes a theoretical contribution to the understanding of the rationale and potential of this technological and compositional approach.
@inproceedings{NIME22_19, author = {Magnusson, Thor and Kiefer, Chris and Ulfarsson, Halldor}, title = {Reflexions upon Feedback}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {19}, doi = {10.21428/92fbeb44.aa7de712}, url = {https://doi.org/10.21428%2F92fbeb44.aa7de712}, presentation-video = {https://www.youtube.com/watch?v=ouwIA_aVmEM}, pdf = {151.pdf} }
-
Jiayue Cecilia Wu. 2022. Today and Yesterday: Two Case Studies of China’s NIME Community. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.57e41c54
Download PDF DOIThis article explores two in-depth interviews with distinguished Chinese NIMEers, across generations, from the late 1970s to the present. Tian Jinqin and Meng Qi represent role models in the Chinese NIME community. From the innovative NIME designers’ historical technological innovation of the 1970s’ analog ribbon control string synthesizer Xian Kong Qin to the 2020’s Wing Pinger evolving harmony synthesizer, the author shines a light from different angles on the Chinese NIME community.
@inproceedings{NIME22_2, author = {Wu, Jiayue Cecilia}, title = {Today and Yesterday: Two Case Studies of China{\textquotesingle}s {NIME} Community}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {2}, doi = {10.21428/92fbeb44.57e41c54}, url = {https://doi.org/10.21428%2F92fbeb44.57e41c54}, presentation-video = {https://www.youtube.com/watch?v=4PMmDnUNgRk}, pdf = {102.pdf} }
-
Seth Thorn and Byron Lahey. 2022. Decolonizing the Violin with Active Shoulder Rests (ASRs). Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.91f87875
Download PDF DOIBeginning, amateur, and professional violinists alike make use of a shoulder rest with a typical form factor for ergonomic support. Numerous commercial devices are available. We saturate these inert devices with electronics and actuators to open a new design space for “active shoulder rests” (ASRs), a pathway for violinists to adopt inexpensive and transparent electroacoustic interfaces. We present a dual-mode ASR that features a built-in microphone pickup and parametric control of mixing between sound diffusion and actuation modes for experiments with active acoustics and feedback. We document a modular approach to signal processing allowing quick adaptation and differentiation of control signals, and demonstrate rich sound processing techniques that create lively improvisation environments. By fostering participation and convergence among digital media practices and diverse musical cultures, we envision ASRs broadly rekindling creative practice for the violin, long a tool of improvisation before the triumph of classical works. ASRs decolonize the violin by activating new flows and connectivities, freeing up habitual relations, and refreshing the musical affordances of this otherwise quintessentially western and canonical instrument.
@inproceedings{NIME22_20, author = {Thorn, Seth and Lahey, Byron}, title = {Decolonizing the Violin with Active Shoulder Rests ({ASRs})}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {20}, doi = {10.21428/92fbeb44.91f87875}, url = {https://doi.org/10.21428%2F92fbeb44.91f87875}, presentation-video = {https://youtu.be/7qNTa4QplC4}, pdf = {16.pdf} }
-
Sam Bilbow. 2022. Evaluating polaris~ - An Audiovisual Augmented Reality Experience Built on Open-Source Hardware and Software. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.8abb9ce6
Download PDF DOIAugmented reality (AR) is increasingly being envisaged as a process of perceptual mediation or modulation, not only as a system that combines aligned and interactive virtual objects with a real environment. Within artistic practice, this reconceptualisation has led to a medium that emphasises this multisensory integration of virtual processes, leading to expressive, narrative-driven, and thought-provoking AR experiences. This paper outlines the development and evaluation of the polaris experience. polaris is built using a set of open-source hardware and software components that can be used to create privacy-respecting and cost-effective audiovisual AR experiences. Its wearable component is comprised of the open-source Project North Star AR headset and a pair of bone conduction headphones, providing simultaneous real and virtual visual and auditory elements. These elements are spatially aligned using Unity and PureData to the real space that they appear in and can be gesturally interacted with in a way that fosters artistic and musical expression. In order to evaluate the polaris, 10 participants were recruited, who spent approximately 30 minutes each in the AR scene and were interviewed about their experience. Using grounded theory, the author extracted coded remarks from the transcriptions of these studies, which were then sorted into the categories of Sentiment, Learning, Adoption, Expression, and Immersion. In evaluating polaris it was found that the experience engaged participants fruitfully, with many noting their ability to express themselves audiovisually in creative ways. The experience and the framework the author used to create it are available in a GitHub repository.
@inproceedings{NIME22_21, author = {Bilbow, Sam}, title = {Evaluating polaris{\textasciitilde} - An Audiovisual Augmented Reality Experience Built on Open-Source Hardware and Software}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {21}, doi = {10.21428/92fbeb44.8abb9ce6}, url = {https://doi.org/10.21428%2F92fbeb44.8abb9ce6}, presentation-video = {https://www.youtube.com/watch?v=eCdQku5hFOE}, pdf = {162.pdf} }
-
Felipe Verdugo, Amedeo Ceglia, Christian Frisson, et al. 2022. Feeling the Effort of Classical Musicians - A Pipeline from Electromyography to Smartphone Vibration for Live Music Performance. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.3ce22588
Download PDF DOIThis paper presents the MappEMG pipeline. The goal of this pipeline is to augment the traditional classical concert experience by giving listeners access, through the sense of touch, to an intimate and non-visible dimension of the musicians’ bodily experience while performing. The live-stream pipeline produces vibrations based on muscle activity captured through surface electromyography (EMG). Therefore, MappEMG allows the audience to experience the performer’s muscle effort, an essential component of music performance which is typically unavailable to direct visual observation. The paper is divided in four sections. First, we overview related works on EMG, music performance, and vibrotactile feedback. We then present conceptual and methodological issues of capturing musicians’ muscle effort related to their expressive intentions. We further explain the different components of the live-stream data pipeline: a python software named Biosiglive for data acquisition and processing, a Max/MSP patch for data post-processing and mapping, and a mobile application named hAPPtiks for real-time control of smartphones’ vibration. Finally, we address the application of the pipeline in an actual music performance. Thanks to their modular structure, the tools presented could be used in different creative and biomedical contexts involving gestural control of haptic stimuli.
@inproceedings{NIME22_22, author = {Verdugo, Felipe and Ceglia, Amedeo and Frisson, Christian and Burton, Alexandre and Begon, Mickael and Gibet, Sylvie and Wanderley, Marcelo M.}, title = {Feeling the Effort of Classical Musicians - A Pipeline from Electromyography to Smartphone Vibration for Live Music Performance}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {22}, doi = {10.21428/92fbeb44.3ce22588}, url = {https://doi.org/10.21428%2F92fbeb44.3ce22588}, presentation-video = {https://youtu.be/gKM0lGs9rxw}, pdf = {165.pdf} }
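The pipeline itself consists of Biosiglive, a Max/MSP patch, and the hAPPtiks mobile app; the sketch below is only a schematic stand-in for the kind of signal path the entry describes, turning a raw EMG-like signal into a smoothed effort envelope and then into a vibration intensity between 0 and 1. The synthetic signal, window length, and mapping are all invented for the example.

```python
import numpy as np

FS = 2000.0  # EMG sampling rate in Hz (example value only)

def effort_envelope(emg, window_s=0.2):
    """Full-wave rectify an EMG-like signal and smooth it with a moving
    average to obtain a slowly varying 'effort' envelope."""
    n = max(1, int(window_s * FS))
    window = np.ones(n) / n
    return np.convolve(np.abs(emg), window, mode="same")

def to_vibration(envelope):
    """Map the envelope to a vibration intensity between 0 and 1.
    A real mapping would be calibrated per performer and per muscle."""
    return np.clip(envelope / (np.max(envelope) + 1e-9), 0.0, 1.0)

if __name__ == "__main__":
    t = np.arange(0.0, 2.0, 1.0 / FS)
    # Synthetic 'EMG': noise bursts whose amplitude slowly rises and falls.
    emg = np.random.randn(t.size) * (0.5 + 0.5 * np.sin(2.0 * np.pi * 0.5 * t))
    vibration = to_vibration(effort_envelope(emg))
    print("vibration intensity (10 Hz):", np.round(vibration[:: int(FS / 10)], 2))
```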
-
Christian Frisson, Mathias Kirkegaard, Thomas Pietrzak, and Marcelo M. Wanderley. 2022. ForceHost: an open-source toolchain for generating firmware embedding the authoring and rendering of audio and force-feedback haptics. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.76cfc96e
Download PDF DOIForceHost is an open-source toolchain for generating firmware that hosts authoring and rendering of force-feedback and audio signals and that communicates through I2C with guest motor and sensor boards. With ForceHost, the stability of audio and haptic loops is no longer delegated to and dependent on operating systems and drivers, and devices remain discoverable beyond planned obsolescence. We modified Faust, a high-level language and compiler for real-time audio digital signal processing, to support haptics. Our toolchain compiles audio-haptic firmware applications with Faust and embeds web-based UIs exposing their parameters. We validate our toolchain by example applications and modifications of integrated development environments: script-based programming examples of haptic firmware applications with our haptic1D Faust library, visual programming by mapping input and output signals between audio and haptic devices in Webmapper, and visual programming with physically-inspired mass-interaction models in Synth-a-Modeler Designer. We distribute the documentation and source code of ForceHost and all of its components and forks.
@inproceedings{NIME22_23, author = {Frisson, Christian and Kirkegaard, Mathias and Pietrzak, Thomas and Wanderley, Marcelo M.}, title = {{ForceHost}: an open-source toolchain for generating firmware embedding the authoring and rendering of audio and force-feedback haptics}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {23}, doi = {10.21428/92fbeb44.76cfc96e}, url = {https://doi.org/10.21428%2F92fbeb44.76cfc96e}, presentation-video = {https://youtu.be/smFpkdw-J2w}, pdf = {172.pdf} }
-
Rodney DuPlessis. 2022. A virtual instrument for physics-based musical gesture: CHON. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.18aeca0e
Download PDF DOIPhysical metaphor provides a visceral and universal logical framework for composing musical gestures. Physical simulations can aid composers in creating musical gestures based in complex physical metaphors. CHON (Coupled Harmonic Oscillator Network) is a new cross-platform application for composing musical gestures based in Newtonian physics. It simulates a network of particles connected by springs and sonifies the motion of individual particles. CHON is an interactive instrument that can provide complex yet tangible and physically grounded control data for synthesis, sound processing, and musical score generation. Composers often deploy dozens of independent LFOs to control various parameters in a DAW or synthesizer. By coupling numerous control signals together using physical principles, CHON represents an innovation on the traditional LFO model of musical control. Unlike independent LFOs, CHON’s signals push and pull on each other, creating a tangible causality in the resulting gestures. In this paper, I briefly describe the design of CHON and discuss its use in composition through examples in my own works.
@inproceedings{NIME22_24, author = {DuPlessis, Rodney}, title = {A virtual instrument for physics-based musical gesture: {CHON}}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {24}, doi = {10.21428/92fbeb44.18aeca0e}, url = {https://doi.org/10.21428%2F92fbeb44.18aeca0e}, presentation-video = {https://youtu.be/yXr1m6dW5jo}, pdf = {173.pdf} }
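CHON itself is a standalone application; the following sketch, with invented constants, only illustrates the underlying idea of a coupled mass-spring chain whose per-particle displacements can serve as interrelated, LFO-like control signals.

```python
import numpy as np

N = 5          # particles in the chain
K = 40.0       # spring constant (invented value)
DAMPING = 0.2  # velocity damping
DT = 0.01      # integration step in seconds

def step(pos, vel):
    """One semi-implicit Euler step for a 1-D chain of unit masses joined by
    springs, with fixed anchor points at both ends."""
    padded = np.concatenate(([0.0], pos, [0.0]))   # fixed ends
    force = K * (padded[:-2] - 2.0 * padded[1:-1] + padded[2:]) - DAMPING * vel
    vel = vel + DT * force
    pos = pos + DT * vel
    return pos, vel

if __name__ == "__main__":
    pos, vel = np.zeros(N), np.zeros(N)
    pos[0] = 1.0                                   # 'pluck' the first particle
    for i in range(200):
        pos, vel = step(pos, vel)
        if i % 20 == 0:
            # The middle particle's displacement, usable as a coupled control signal.
            print(f"t={i * DT:4.2f}s  x[{N // 2}]={pos[N // 2]:+.3f}")
```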
-
Cagri Erdem, Benedikte Wallace, and Alexander Refsum Jensenius. 2022. CAVI: A Coadaptive Audiovisual Instrument–Composition. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.803c24dd
Download PDF DOIThis paper describes the development of CAVI, a coadaptive audiovisual instrument for collaborative humanmachine improvisation. We created this agent-based live processing system to explore how a machine can interact musically based on a human performer’s bodily actions. CAVI utilized a generative deep learning model that monitored muscle and motion data streamed from a Myo armband worn on the performer’s forearm. The generated control signals automated layered time-based effects modules and animated a virtual body representing the artificial agent. In the final performance, two expert musicians (a guitarist and a drummer) performed with CAVI. We discuss the outcome of our artistic exploration, present the scientific methods it was based on, and reflect on developing an interactive system that is as much an audiovisual composition as an interactive musical instrument.
@inproceedings{NIME22_25, author = {Erdem, Cagri and Wallace, Benedikte and Refsum Jensenius, Alexander}, title = {{CAVI}: A Coadaptive Audiovisual Instrument{\textendash}Composition}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {25}, doi = {10.21428/92fbeb44.803c24dd}, url = {https://doi.org/10.21428%2F92fbeb44.803c24dd}, presentation-video = {https://youtu.be/WO766vmghcQ}, pdf = {176.pdf} }
-
Linnea Kirby, Paul Buser, and Marcelo M. Wanderley. 2022. Introducing the t-Tree: Using Multiple t-Sticks for Performance and Installation. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.2d00f04f
Download PDF DOIThe authors introduce and document how to build the t-Tree, a digital musical instrument (DMI), interactive music system (IMS), hub, and docking station that embeds several t-Sticks. The t-Tree’s potential for collaborative performance as well as an installation is discussed. Specific design choices and inspiration for the t-Tree are explored. Finally, a prototype is developed and showcased that attempts to meet the authors’ goals of creating a novel musical experience for musicians and non-musicians alike, expanding on the premise of the original t-Stick, and mitigating technical obsolescence of DMIs.
@inproceedings{NIME22_26, author = {Kirby, Linnea and Buser, Paul and Wanderley, Marcelo M.}, title = {Introducing the t-Tree: Using Multiple t-Sticks for Performance and Installation}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {26}, doi = {10.21428/92fbeb44.2d00f04f}, url = {https://doi.org/10.21428%2F92fbeb44.2d00f04f}, presentation-video = {https://youtu.be/gS87Tpg3h_I}, pdf = {179.pdf} }
-
Yichen Wang and Charles Martin. 2022. Cubing Sound: Designing a NIME for Head-mounted Augmented Reality. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.b540aa59
Download PDF DOIWe present an empirical study of designing a NIME for the head-mounted augmented reality (HMAR) environment. In the NIME community, various sonic applications have incorporated augmented reality (AR) for sonic experience and audio production. With this novel digital form, new opportunities for musical expression and interface are presented. Yet few works consider whether and how the design of the NIME will be affected given the technology’s affordances. In this paper, we take an autobiographical design approach to design a NIME in HMAR, exploring what constitutes a genuine application of AR in a NIME and how AR mediates between the performer and sound as a creative expression. Three interface prototypes are created for a frequency modulation synthesis system. We report on their design process and our learning and experiences through self-usage and improvisation. Our designs explore free-hand and embodied interaction in our interfaces, and we reflect on how these unique qualities of HMAR contribute to an expressive medium for sonic creation.
@inproceedings{NIME22_27, author = {Wang, Yichen and Martin, Charles}, title = {Cubing Sound: Designing a {NIME} for Head-mounted Augmented Reality}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {27}, doi = {10.21428/92fbeb44.b540aa59}, url = {https://doi.org/10.21428%2F92fbeb44.b540aa59}, presentation-video = {https://youtu.be/iOuZqwIwinU}, pdf = {183.pdf} }
-
Charlie Roberts, Ian Hattwick, Eric Sheffield, and Gillian Smith. 2022. Rethinking networked collaboration in the live coding environment Gibber. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.38cb7745
Download PDF DOIWe describe a new set of affordances for networked live coding performances in the browser-based environment Gibber, and discuss their implications in the context of three different performances by three different ensembles at three universities. Each ensemble possessed differing levels of programming and musical expertise, leading to different challenges and subsequent extensions to Gibber to address them. We describe these and additional extensions that came about after shared reflection on our experiences. While our chosen design contains computational inefficiencies that pose challenges for larger ensembles, our experiences suggest that this is a reasonable tradeoff for the low barrier-to-entry that browser-based environments provide, and that the design in general supports a variety of educational goals and compositional strategies.
@inproceedings{NIME22_28, author = {Roberts, Charlie and Hattwick, Ian and Sheffield, Eric and Smith, Gillian}, title = {Rethinking networked collaboration in the live coding environment Gibber}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {28}, doi = {10.21428/92fbeb44.38cb7745}, url = {https://doi.org/10.21428%2F92fbeb44.38cb7745}, presentation-video = {https://youtu.be/BKlHkEAqUOo}, pdf = {191.pdf} }
-
Karitta Christina Zellerbach and Charlie Roberts. 2022. A Framework for the Design and Analysis of Mixed Reality Musical Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.b2a44bc9
Download PDF DOIIn the context of immersive sonic interaction, Virtual Reality Musical Instruments have had the relative majority of attention thus far, fueled by the increasing availability of affordable technology. Recent advances in Mixed Reality (MR) experiences have provided the means for a new wave of research that goes beyond Virtual Reality. In this paper, we explore the taxonomy of Extended Reality systems, establishing our own notion of MR. From this, we propose a new classification of Virtual Musical Instrument, known as a Mixed Reality Musical Instrument (MRMI). We define this system as an embodied interface for expressive musical performance, characterized by the relationships between the performer, the virtual, and the physical environment. After a review of existing literature concerning the evaluation of immersive musical instruments and the affordances of MR systems, we offer a new framework based on three dimensions to support the design and analysis of MRMIs. We illustrate its use with application to existing works.
@inproceedings{NIME22_29, author = {Zellerbach, Karitta Christina and Roberts, Charlie}, title = {A Framework for the Design and Analysis of Mixed Reality Musical Instruments}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {29}, doi = {10.21428/92fbeb44.b2a44bc9}, url = {https://doi.org/10.21428%2F92fbeb44.b2a44bc9}, presentation-video = {https://youtu.be/Pb4pAr2v4yU}, pdf = {193.pdf} }
-
Anna Xambó and Visda Goudarzi. 2022. The Mobile Audience as a Digital Musical Persona in Telematic Performance. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.706b549e
Download PDF DOIOne of the consequences of the pandemic has been the potential to embrace hybrid support for different human group activities, including music performance, resulting in accommodating a wider range of situations. We believe that we are barely at the tip of the iceberg and that we can explore further the possibilities of the medium by promoting a more active role of the audience during telematic performance. In this paper, we present personic, a mobile web app designed for distributed audiences to constitute a digital musical instrument. This has the twofold purpose of letting the audience contribute to the performance with a non-intrusive and easy-to-use approach, as well as providing audiovisual feedback that is helpful for both the performers and the audience alike. The challenges and possibilities of this approach are discussed from pilot testing the app using a practice-based approach. We conclude by pointing to new directions of telematic performance, which is a promising direction for network music and digital performance.
@inproceedings{NIME22_3, author = {Xamb{\'{o}}, Anna and Goudarzi, Visda}, title = {The Mobile Audience as a Digital Musical Persona in Telematic Performance}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {3}, doi = {10.21428/92fbeb44.706b549e}, url = {https://doi.org/10.21428%2F92fbeb44.706b549e}, presentation-video = {https://youtu.be/xu5ySfbqYs8}, pdf = {107.pdf} }
-
Laurel Pardue and S. M. Astrid Bin. 2022. The Other Hegemony: Effects of software development culture on music software, and what we can do about it. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.0cc78aeb
Download PDF DOINIME has recently seen critique emerging around colonisation of music technology, and the need to decolonise digital audio workstations and music software. While commercial DAWs tend to sideline musical styles outside of western norms (and even many inside too), viewing this problem through an historical lens of imperialist legacies misses the influence of a more recent - and often invisible - hegemony that bears significant direct responsibility: The culture of technological development. In this paper we focus on the commercial technological development culture that produces these softwares, to better understand the more latent reasons why music production software ends up supporting some music practices while failing others. By using this lens we can more meaningfully separate the influence of historic cultural colonisation and music tech development culture, in order to better advocate for and implement meaningful change. We will discuss why the meaning of the term “decolonisation” should be carefully examined when addressing the limitations of DAWs, because while larger imperialist legacies continue to have significant impact on our understanding of culture, this can direct attention away from the techno-cultural subset of this hegemony that is actively engaged in making the decisions that shape the software we use. We discuss how the conventions of this techno-cultural hegemony shape the affordances of major DAWs (and thereby musical creativity). We also examine specific factors that impact decision making in developing and evolving typical music software alongside latent social structures, such as competing commercial demands, how standards are shaped, and the impact of those standards. Lastly, we suggest that, while we must continue to discuss the impact of imperialist legacies on the way we make music, understanding the techno-cultural subset of the colonial hegemony and its motives can create a space to advocate for conventions in music software that are more widely inclusive.
@inproceedings{NIME22_30, author = {Pardue, Laurel and Bin, S. M. Astrid}, title = {The Other Hegemony: Effects of software development culture on music software, and what we can do about it}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {30}, doi = {10.21428/92fbeb44.0cc78aeb}, url = {https://doi.org/10.21428%2F92fbeb44.0cc78aeb}, presentation-video = {https://www.youtube.com/watch?v=a53vwOUDh0M}, pdf = {201.pdf} }
-
Juan Pablo Martinez Avila, João Tragtenberg, Filipe Calegario, et al. 2022. Being (A)part of NIME: Embracing Latin American Perspectives. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.b7a7ba4f
Download PDF DOILatin American (LATAM) contributions to Music Technology date back to the early 1940’s. However, as evidenced in historical analyses of NIME, the input from LATAM institutions to its proceedings is considerably low, even when the conference was recently held in Porto Alegre, Brazil. Reflecting on this visible disparity and joining efforts as a group of LATAM researchers, we conducted a workshop and distributed a survey with members of the LATAM community with the aim of sounding out their perspectives on NIME-related practices and the prospect of establishing a LATAM NIME Network. Based on our findings we provide a contemporary contextual overview of the activities happening in LATAM and the particular challenges that practitioners face emerging from their socio-political reality. We also offer LATAM perspectives on critical epistemological issues that affect the NIME community as a whole, contributing to a pluriversal view on these matters, and to the embracement of multiple realities and ways of doing things.
@inproceedings{NIME22_31, author = {Martinez Avila, Juan Pablo and Tragtenberg, Jo{\=a}o and Calegario, Filipe and Alarcon, Ximena and Cadavid Hinojosa, Laddy Patricia and Corintha, Isabela and Dannemann, Teodoro and Jaimovich, Javier and Marquez-Borbon, Adnan and Lerner, Martin Matus and Ortiz, Miguel and Ramos, Juan and Sol{\'{\i}}s, Hugo}, title = {Being (A)part of NIME: Embracing Latin American Perspectives}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {31}, doi = {10.21428/92fbeb44.b7a7ba4f}, url = {https://doi.org/10.21428/92fbeb44.b7a7ba4f}, presentation-video = {https://youtu.be/dCxkrqrbM-M}, pdf = {21.pdf} }
-
Zak Argabrite, Jim Murphy, Sally Jane Norman, and Dale Carnegie. 2022. Technology is Land: Strategies towards decolonisation of technology in artmaking. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.68f7c268
Download PDF DOIThis article provides a lens for viewing technology as land, transformed through resource extraction, manufacturing, distribution, disassembly and waste. This lens is applied to processes of artistic creation with technology, exploring ways of fostering personal and informed relationships with that technology. The goal of these explorations will be to inspire a greater awareness of the colonial and capitalist processes that shape the technology we use and the land and people it is in relationship with. Beyond simply identifying the influence of these colonial and capitalist processes, the article will also provide creative responses (alterations to a creative process with technology) which seek to address these colonial processes in a sensitive and critical way. This will be done not to answer the broad question of ‘how do we decolonise art making with technology?’, but to break that question apart into prompts or potential pathways for decolonising.
@inproceedings{NIME22_32, author = {Argabrite, Zak and Murphy, Jim and Norman, Sally Jane and Carnegie, Dale}, title = {Technology is Land: Strategies towards decolonisation of technology in artmaking}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {32}, doi = {10.21428/92fbeb44.68f7c268}, url = {https://doi.org/10.21428%2F92fbeb44.68f7c268}, presentation-video = {https://youtu.be/JZTmiIByYN4}, pdf = {222.pdf} }
-
Ivica Bukvic. 2022. Latency-, Sync-, and Bandwidth-Agnostic Tightly-Timed Telematic and Crowdsourced Musicking Made Possible Using L2Ork Tweeter. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.a0a8d914
Download PDF DOIThe following paper presents L2Ork Tweeter, a new control-data-driven free and open source crowdsourced telematic musicking platform and a new interface for musical expression that deterministically addresses three of the greatest challenges associated with the telematic music medium, those of latency, sync, and bandwidth. Motivated by the COVID-19 pandemic, Tweeter’s introduction in April 2020 has ensured uninterrupted operation of Virginia Tech’s Linux Laptop Orchestra (L2Ork), resulting in 6 international performances over the past 18 months. In addition to enabling tightly-timed sync between clients, it also uniquely supports all stages of NIME-centric telematic musicking, from collaborative instrument design and instruction, to improvisation, composition, rehearsal, and performance, including audience participation. Tweeter is also envisioned as a prototype for the crowdsourced approach to telematic musicking. Below, the paper delves deeper into motivation, constraints, design and implementation, and the observed impact as an applied instance of a proposed paradigm shift in telematic musicking and its newfound identity fueled by the live crowdsourced telematic music genre.
@inproceedings{NIME22_33, author = {Bukvic, Ivica}, title = {Latency-, Sync-, and Bandwidth-Agnostic Tightly-Timed Telematic and Crowdsourced Musicking Made Possible Using L2Ork Tweeter}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {33}, doi = {10.21428/92fbeb44.a0a8d914}, url = {https://doi.org/10.21428%2F92fbeb44.a0a8d914}, presentation-video = {https://youtu.be/5pawphncSmg}, pdf = {26.pdf} }
-
Cagan Arslan, Florent Berthaut, Anthony Beuchey, Paul Cambourian, and Arthur Paté. 2022. Vibrating shapes : Design and evolution of a spatial augmented reality interface for actuated instruments. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.c28dd323
Download PDF DOIIn this paper we propose a Spatial Augmented Reality interface for actuated acoustic instruments with active vibration control. We adopt a performance-led research approach to design augmentations throughout multiple residencies. The resulting system enables two musicians to improvise with four augmented instruments through virtual shapes distributed in their peripheral space: two 12-string guitars and one drum kit actuated with surface speakers, and a trumpet attached to an air compressor. Using ethnographic methods, we document the evolution of the augmentations and conduct a thematic analysis to shine a light on the collaborative and iterative design process. In particular, we provide insights on the opportunities brought by Spatial AR and on the role of improvisation.
@inproceedings{NIME22_34, author = {Arslan, Cagan and Berthaut, Florent and Beuchey, Anthony and Cambourian, Paul and Pat{\'{e}}, Arthur}, title = {Vibrating shapes : Design and evolution of a spatial augmented reality interface for actuated instruments}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {34}, doi = {10.21428/92fbeb44.c28dd323}, url = {https://doi.org/10.21428%2F92fbeb44.c28dd323}, presentation-video = {https://youtu.be/oxMrv3R6jK0}, pdf = {30.pdf} }
-
Florent Berthaut and Luke Dahl. 2022. The Effect of Visualisation Level and Situational Visibility in Co-located Digital Musical Ensembles. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.9d974714
Download PDF DOIDigital Musical Instruments (DMIs) offer new opportunities for collaboration, such as exchanging sounds or sharing controls between musicians. However, in the context of spontaneous and heterogeneous orchestras, such as jam sessions, collective music-making may become challenging due to the diversity and complexity of the DMIs and the musicians’ unfamiliarity with the others’ instruments. In particular, the potential lack of visibility into each musician’s respective contribution to the sound they hear, i.e. who is playing what, might impede their capacity to play together. In this paper, we propose to augment each instrument in a digital orchestra with visual feedback extracted in real-time from the instrument’s activity, in order to increase this awareness. We present the results of a user study in which we investigate the influence of visualisation level and situational visibility during short improvisations by groups of three musicians. Our results suggest that internal visualisations of all instruments displayed close to each musician’s instrument provide the best awareness.
@inproceedings{NIME22_35, author = {Berthaut, Florent and Dahl, Luke}, title = {The Effect of Visualisation Level and Situational Visibility in Co-located Digital Musical Ensembles}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {35}, doi = {10.21428/92fbeb44.9d974714}, url = {https://doi.org/10.21428%2F92fbeb44.9d974714}, presentation-video = {https://www.youtube.com/watch?v=903cs_oFfwo}, pdf = {31.pdf} }
-
Francesco Ardan Dal Rì and Raul Masu. 2022. Exploring Musical Form: Digital Scores to Support Live Coding Practice. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.828b6114
Download PDF DOIThe management of the musical structures and the awareness of the performer’s processes during a performance are two important aspects of live coding improvisations. To support these aspects, we developed and evaluated two systems, Time_X and Time_Z, for visualizing the musical form during live coding. Time_X allows visualizing an entire performance, while Time_Z provides a detailed overview of the last improvised musical events. Following an autobiographical approach, the two systems have been used in five sessions by the first author of this paper, who created a diary about the experience. These diaries have been analyzed to understand the two systems individually and compare them. We finally discuss the main benefits related to the practical use of these systems, and possible use scenarios.
@inproceedings{NIME22_36, author = {R{\`{\i}}, Francesco Ardan Dal and Masu, Raul}, title = {Exploring Musical Form: Digital Scores to Support Live Coding Practice}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {36}, doi = {10.21428/92fbeb44.828b6114}, url = {https://doi.org/10.21428%2F92fbeb44.828b6114}, presentation-video = {https://www.youtube.com/watch?v=r-cxEXjnDzg}, pdf = {32.pdf} }
-
Anil Çamci and John Granzow. 2022. Augmented Touch: A Mounting Adapter for Oculus Touch Controllers that Enables New Hyperreal Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.a26a4014
Download PDF DOIIn this paper, we discuss our ongoing work to leverage virtual reality and digital fabrication to investigate sensory mappings across the visual, auditory, and haptic modalities in VR, and how such mappings can affect musical expression in this medium. Specifically, we introduce a custom adapter for the Oculus Touch controller that allows it to be augmented with physical parts that can be tracked, visualized, and sonified in VR. This way, a VR instrument can be made to have a physical manifestation that facilitates additional forms of tactile feedback besides those offered by the Touch controller, enabling new forms of musical interaction. We then discuss a case study, where we use the adapter to implement a new VR instrument that integrates the repelling force between neodymium magnets into the controllers. This allows us to imbue the virtual instrument, which is inherently devoid of tactility, with haptic feedback—an essential affordance of many musical instruments.
@inproceedings{NIME22_37, author = {{\c{C}}amci, Anil and Granzow, John}, title = {Augmented Touch: A Mounting Adapter for Oculus Touch Controllers that Enables New Hyperreal Instruments}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {37}, doi = {10.21428/92fbeb44.a26a4014}, url = {https://doi.org/10.21428%2F92fbeb44.a26a4014}, presentation-video = {https://youtu.be/fnoQOO4rz4M}, pdf = {33.pdf} }
-
Nick Warren and Anil Çamci. 2022. Latent Drummer: A New Abstraction for Modular Sequencers. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.ed873363
Download PDF DOIAutomated processes in musical instruments can serve to free a performer from the physical and mental constraints of music performance, allowing them to expressively control more aspects of music simultaneously. Modular synthesis has been a prominent platform for exploring automation through the use of sequencers and has therefore fostered a tradition of user interface design utilizing increasingly complex abstraction methods. We investigate the history of sequencer design from this perspective and introduce machine learning as a potential source for a new type of intelligent abstraction. We then offer a case study based on this approach and present Latent Drummer, which is a prototype system dedicated to integrating machine learning-based interface abstractions into the tradition of sequencers for modular synthesis.
@inproceedings{NIME22_38, author = {Warren, Nick and {\c{C}}amci, Anil}, title = {Latent Drummer: A New Abstraction for Modular Sequencers}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {38}, doi = {10.21428/92fbeb44.ed873363}, url = {https://doi.org/10.21428%2F92fbeb44.ed873363}, presentation-video = {https://www.youtube.com/watch?v=Hr6B5dIhMVo}, pdf = {34.pdf} }
-
Daniel Chin and Gus Xia. 2022. A Computer-aided Multimodal Music Learning System with Curriculum: A Pilot Study. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.c6910363
Download PDF DOIWe present an AI-empowered music tutor with a systematic curriculum design. The tutoring system fully utilizes the interactivity space in the auditory, visual, and haptic modalities, supporting seven haptic feedback modes and four visual feedback modes. The combinations of those modes form different cross-modal tasks of varying difficulties, allowing the curriculum to apply the “scaffolding then fading” educational technique to foster active learning and amortize cognitive load. We study the effect of multimodal instructions, guidance, and feedback using a qualitative pilot study with two subjects over 11 hours of training with our tutoring system. The study reveals valuable insights about the music learning process and points towards new features and learning modes for the next prototype.
@inproceedings{NIME22_39, author = {Chin, Daniel and Xia, Gus}, title = {A Computer-aided Multimodal Music Learning System with Curriculum: A Pilot Study}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {39}, doi = {10.21428/92fbeb44.c6910363}, url = {https://doi.org/10.21428%2F92fbeb44.c6910363}, presentation-video = {https://youtu.be/DifOKvH1ErQ}, pdf = {39.pdf} }
-
Harri Renney, Silvin Willemsen, Benedict Gaster, and Tom Mitchell. 2022. HyperModels - A Framework for GPU Accelerated Physical Modelling Sound Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.98a4210a
Download PDF DOIPhysical modelling sound synthesis methods generate vast and intricate sound spaces that are navigated using meaningful parameters. Numerically based physical modelling synthesis methods provide authentic representations of the physics they model. Unfortunately, the application of these physical models is often limited because of their considerable computational requirements. In previous studies, the CPU has been shown to reliably support two-dimensional linear finite-difference models in real-time with resolutions up to 64x64. However, the near-ubiquitous parallel processing units known as GPUs have previously been used to process considerably larger resolutions, as high as 512x512 in real-time. GPU programming requires a low-level understanding of the architecture, which often imposes a barrier to entry for inexperienced practitioners. Therefore, this paper proposes HyperModels, a framework for automating the mapping of linear finite-difference based physical modelling synthesis into an optimised parallel form suitable for the GPU. An implementation of the design is then used to evaluate the objective performance of the framework by comparing the automated solution to manually developed equivalents. For the majority of the extensive performance profiling tests, the auto-generated programs were observed to perform only 60% slower but in the worst-case scenario it was 50% slower. The initial results suggest that, in most circumstances, the automation provided by the framework avoids the low-level expertise required to manually optimise the GPU, with only a small reduction in performance. However, there is still scope to improve the auto-generated optimisations. When comparing the performance of CPU to GPU equivalents, the parallel CPU version supports resolutions of up to 128x128 whilst the GPU continues to support higher resolutions up to 512x512. To conclude the paper, two instruments are developed using HyperModels based on established physical model designs.
@inproceedings{NIME22_4, author = {Renney, Harri and Willemsen, Silvin and Gaster, Benedict and Mitchell, Tom}, title = {{HyperModels} - A Framework for {GPU} Accelerated Physical Modelling Sound Synthesis}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {4}, doi = {10.21428/92fbeb44.98a4210a}, url = {https://doi.org/10.21428%2F92fbeb44.98a4210a}, presentation-video = {https://youtu.be/Pb4pAr2v4yU}, pdf = {109.pdf} }
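To give a concrete sense of the class of models HyperModels targets, a minimal example (not taken from the paper) is the explicit update for the two-dimensional linear wave equation: every grid point applies the same five-point stencil at each time step, which is exactly the kind of computation that parallelises well on a GPU.

```latex
% Illustrative 2D wave-equation stencil (not from the paper).
% u^{n}_{i,j}: displacement at grid point (i,j) and time step n;
% \lambda = c\,\Delta t / \Delta x is the Courant number.
u^{n+1}_{i,j} = 2u^{n}_{i,j} - u^{n-1}_{i,j}
  + \lambda^{2}\bigl(u^{n}_{i+1,j} + u^{n}_{i-1,j} + u^{n}_{i,j+1} + u^{n}_{i,j-1} - 4u^{n}_{i,j}\bigr),
  \qquad \lambda \le \tfrac{1}{\sqrt{2}}\ \text{for stability}.
```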
-
Victor Paredes, Jules Françoise, and Frederic Bevilacqua. 2022. Entangling Practice with Artistic and Educational Aims: Interviews on Technology-based Movement-Sound Interactions. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.5b9ac5ba
Download PDF DOIMovement-sound interactive systems are at the interface of different artistic and educational practices. Within this multiplicity of uses, we examine common denominators in terms of learning, appropriation and relationship to technological systems. While these topics have been previously reported at NIME, we wanted to investigate how practitioners, coming from different perspectives, relate to these questions. We conducted interviews with 6 artists who are engaged in movement-sound interactions: 1 performer, 1 performer/composer, 1 composer, 1 teacher/composer, 1 dancer/teacher, 1 dancer. Through a thematic analysis of the transcripts we identified three main themes related to (1) the mediating role of technological tools, (2) usability and normativity, and (3) learning and practice. These results provide ground for discussion about the design and study of movement-sound interactive systems.
@inproceedings{NIME22_40, author = {Paredes, Victor and Fran{\c{c}}oise, Jules and Bevilacqua, Frederic}, title = {Entangling Practice with Artistic and Educational Aims: Interviews on Technology-based Movement-Sound Interactions}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {40}, doi = {10.21428/92fbeb44.5b9ac5ba}, url = {https://doi.org/10.21428%2F92fbeb44.5b9ac5ba}, presentation-video = {https://youtu.be/n6DZE7TdEeI}, pdf = {42.pdf} }
-
Jean-Philippe Côté. 2022. User-Friendly MIDI in the Web Browser. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.388e4764
Download PDF DOIThe Web MIDI API allows the Web browser to interact with hardware and software MIDI devices detected at the operating system level. This ability for the browser to interface with most electronic instruments made in the past 30 years offers significant opportunities to preserve, enhance or re-discover a rich musical and technical heritage. By including MIDI in the broader Web ecosystem, this API also opens endless possibilities to create music in a networked and socially engaging way. However, the Web MIDI API specification only offers low-level access to MIDI devices and messages. For instance, it does not provide semantics on top of the raw numerical messages exchanged between devices. This is likely to deter novice programmers and significantly slow down experienced programmers. After reviewing the usability of the bare Web MIDI API, the WEBMIDI.js JavaScript library was created to alleviate this situation. By decoding raw MIDI messages, encapsulating complicated processes and providing semantically significant objects, properties, methods and events, the library makes it easier to interface with MIDI devices from compatible browsers. This paper first looks at the context in which the specification was created and then discusses the usability improvements layered on top of the API by the open-source WEBMIDI.js library.
@inproceedings{NIME22_41, author = {C{\^{o}}t{\'{e}}, Jean-Philippe}, title = {User-Friendly {MIDI} in the Web Browser}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {41}, doi = {10.21428/92fbeb44.388e4764}, url = {https://doi.org/10.21428%2F92fbeb44.388e4764}, presentation-video = {https://youtu.be/jMzjpUJO860}, pdf = {43.pdf} }
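To illustrate the low-level character of the bare Web MIDI API that motivates the library, here is a minimal TypeScript sketch using only the standard browser API (not WEBMIDI.js itself): message types and channels must be decoded from raw status bytes by hand.

```typescript
// Minimal sketch of raw Web MIDI handling in a browser with Web MIDI support.
// The numeric bytes must be decoded manually, which is the usability gap that
// libraries such as WEBMIDI.js aim to close.
navigator.requestMIDIAccess().then((access) => {
  access.inputs.forEach((input) => {
    input.onmidimessage = (event) => {
      const data = event.data ?? new Uint8Array();
      const [status = 0, data1 = 0, data2 = 0] = data;
      const command = status & 0xf0; // high nibble: message type
      const channel = status & 0x0f; // low nibble: MIDI channel (0-15)
      if (command === 0x90 && data2 > 0) {
        console.log(`note-on  ch${channel + 1} note=${data1} velocity=${data2}`);
      } else if (command === 0x80 || (command === 0x90 && data2 === 0)) {
        console.log(`note-off ch${channel + 1} note=${data1}`);
      }
    };
  });
});
```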
-
Travis West. 2022. Pitch Fingering Systems and the Search for Perfection. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.d6c9dcae
Download PDF DOIIn the search for better designs, one tool is to specify the design problem such that globally optimal solutions can be found. I present a design process using this approach, its strengths and limitations, and its results in the form of four pitch fingering systems that are ergonomic, simple, and symmetric. In hindsight, I emphasize the subjectivity of the design process, despite its reliance on objective quantitative assessment.
@inproceedings{NIME22_42, author = {West, Travis}, title = {Pitch Fingering Systems and the Search for Perfection}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {42}, doi = {10.21428/92fbeb44.d6c9dcae}, url = {https://doi.org/10.21428%2F92fbeb44.d6c9dcae}, presentation-video = {https://youtu.be/4QB3sNRmK1E}, pdf = {53.pdf} }
-
Travis West and Kalun Leung. 2022. early prototypes and artistic practice with the mubone. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.e56a93c9
Download PDF DOIThe mubone (lowercase “m”) is a family of instruments descended from the trombone family, a conceptual design space for trombone augmentations, and a growing musical practice rooted in this design space and the artistic affordances that emerge from it. We present the design of the mubone and discuss our initial implementations. We then reflect on the beginnings of an artistic practice: playing mubone, as well as exploring how the instrument adapts to diverse creative contexts. We discuss mappings, musical exercises, and the development of Garcia, a sound-and-movement composition for mubone.
@inproceedings{NIME22_43, author = {West, Travis and Leung, Kalun}, title = {early prototypes and artistic practice with the mubone}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {43}, doi = {10.21428/92fbeb44.e56a93c9}, url = {https://doi.org/10.21428%2F92fbeb44.e56a93c9}, presentation-video = {https://youtu.be/B51eofO4f4Y}, pdf = {54.pdf} }
-
Max Graf and Mathieu Barthet. 2022. Mixed Reality Musical Interface: Exploring Ergonomics and Adaptive Hand Pose Recognition for Gestural Control. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.56ba9b93
Download PDF DOIThe study of extended reality musical instruments is a burgeoning topic in the field of new interfaces for musical expression. We developed a mixed reality musical interface (MRMI) as a technology probe to inspire design for experienced musicians. We namely explore (i) the ergonomics of the interface in relation to musical expression and (ii) user-adaptive hand pose recognition as gestural control. The MRMI probe was experienced by 10 musician participants (mean age: 25.6 years [SD=3.0], 6 females, 4 males). We conducted a user evaluation comprising three stages. After an experimentation period, participants were asked to accompany a pre-recorded piece of music. In a post-task stage, participants took part in semi-structured interviews, which were subjected to thematic analysis. Prevalent themes included reducing the size of the interface, issues with the field of view of the device and physical strain from playing. Participants were largely in favour of hand poses as expressive control, although this depended on customisation and temporal dynamics; the use of interactive machine learning (IML) for user-adaptive hand pose recognition was well received by participants.
@inproceedings{NIME22_44, author = {Graf, Max and Barthet, Mathieu}, title = {Mixed Reality Musical Interface: Exploring Ergonomics and Adaptive Hand Pose Recognition for Gestural Control}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {44}, doi = {10.21428/92fbeb44.56ba9b93}, url = {https://doi.org/10.21428%2F92fbeb44.56ba9b93}, presentation-video = {https://youtu.be/qhE5X3rAWgg}, pdf = {59.pdf} }
-
Doga Cavdir. 2022. Touch, Listen, (Re)Act: Co-designing Vibrotactile Wearable Instruments for Deaf and Hard of Hearing. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.b24043e8
Download PDF DOIActive participation of Deaf individuals in the design and performance of artistic practice increases the potential for collaboration between Deaf and hearing individuals. In this research, we present co-design sessions with a Deaf dancer and a hearing musician to explore how they can influence each other’s expressive explorations. We also study vibrotactile wearable interface designs to better support the Deaf dancer’s perception of sound and music. We report our findings and observations on the co-design process over four workshops and one performance and public demonstration session. We detail the design and implementation of the wearable vibrotactile listening garment and participants’ self-reported experiences. This interface provides participants with more embodied listening opportunities and felt experiences of sound and music. All participants reported that the listening experience highlighted their first-person experience, focusing on their bodies, "regardless of an observer". These findings show how we can improve both the listener’s internal experience and the collaboration potential between performers for increased inclusion. Overall, this paper addresses two different modalities of haptic feedback, the participation of Deaf users in wearable haptics design as well as music-movement performance practice, and artistic co-creation beyond technology development.
@inproceedings{NIME22_45, author = {Cavdir, Doga}, title = {Touch, Listen, (Re)Act: Co-designing Vibrotactile Wearable Instruments for Deaf and Hard of Hearing}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {45}, doi = {10.21428/92fbeb44.b24043e8}, url = {https://doi.org/10.21428%2F92fbeb44.b24043e8}, presentation-video = {https://youtu.be/tuSo2Sq7jy4}, pdf = {64.pdf} }
-
Lia Mice and Andrew McPherson. 2022. The M in NIME: Motivic analysis and the case for a musicology of NIME performances. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.8c1c9817
Download PDF DOIWhile the value of new digital musical instruments lies to a large extent in their music-making capacity, analyses of new instruments in the research literature often focus on analyses of gesture or performer experience rather than the content of the music made with the instrument. In this paper we present a motivic analysis of music made with new instruments. In the context of music, a motive is a small, analysable musical fragment or phrase that is important in or characteristic of a composition. We outline our method for identifying and analysing motives in music made with new instruments, and display its use in a case study in which 10 musicians created performances with a new large-scale digital musical instrument that we designed. This research illustrates the value of a musicological approach to NIME research, suggesting the need for a broader conversation about a musicology of NIME performances, as distinct from its instruments.
@inproceedings{NIME22_46, author = {Mice, Lia and McPherson, Andrew}, title = {The M in {NIME}: Motivic analysis and the case for a musicology of {NIME} performances}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {46}, doi = {10.21428/92fbeb44.8c1c9817}, url = {https://doi.org/10.21428%2F92fbeb44.8c1c9817}, presentation-video = {https://youtu.be/nXrRJGt11J4}, pdf = {65.pdf} }
-
Eevee Zayas-Garin and Andrew McPherson. 2022. Dialogic Design of Accessible Digital Musical Instruments: Investigating Performer Experience. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.2b8ce9a4
Download PDF DOIWhile it is accepted that accessible digital musical instruments (ADMIs) should be created with the involvement of targeted communities, participatory design (PD) is an unsettled practice that gets defined variously, loosely or not at all. In this paper, we explore the concept of dialogic design and provide a case study of how it can be used in the design of an ADMI. While a future publication will give detail of the design of this instrument and provide an analysis of the data from this study, in this paper we set out how the conversations between researcher and participant have prepared us to build an instrument that responds to the lived experience of the participant.
@inproceedings{NIME22_47, author = {Zayas-Garin, Eevee and McPherson, Andrew}, title = {Dialogic Design of Accessible Digital Musical Instruments: Investigating Performer Experience}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {47}, doi = {10.21428/92fbeb44.2b8ce9a4}, url = {https://doi.org/10.21428%2F92fbeb44.2b8ce9a4}, presentation-video = {https://www.youtube.com/watch?v=8l1N3G0BdKw}, pdf = {66.pdf} }
-
Nicole Robson, Andrew McPherson, and Nick Bryan-Kinns. 2022. Being With The Waves: An Ultrasonic Art Installation Enabling Rich Interaction Without Sensors. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.376bc758
Download PDF DOITo the naked ear, the installation Being With The Waves appears silent, but a hidden composition of voices, instrumental tones, and maritime sounds is revealed through wearing modified headphones. The installation consists of an array of tweeters emitting a multi-channel ultrasonic composition that sounds physically in the space. Ultrasonic phenomena present at the listener’s ears are captured by microphones embedded on the outside of headphone earcups, shifted into audibility, and output to the headphones. The amplitude demodulation of ultrasonic material results in exaggerated Doppler effects and listeners hear the music bend and shift precisely with their movement. There are no movement sensors, mappings, or feedback loops, yet the installation is perceived as interactive due to the close entanglement of the listener with sound phenomena. The dynamic quality of interaction emerges solely through the listening faculties of the visitor, as an embodied sensory experience determined by their orientation to sounds, physical movement, and perceptual behaviour. This paper describes key influences on the installation, its ultrasonic technology, the design of modified headphones, and the compositional approach.
@inproceedings{NIME22_48, author = {Robson, Nicole and McPherson, Andrew and Bryan-Kinns, Nick}, title = {Being With The Waves: An Ultrasonic Art Installation Enabling Rich Interaction Without Sensors}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {48}, doi = {10.21428/92fbeb44.376bc758}, url = {https://doi.org/10.21428%2F92fbeb44.376bc758}, presentation-video = {https://www.youtube.com/watch?v=3D5S5moUvUA}, pdf = {68.pdf} }
-
Courtney N. Reed, Charlotte Nordmoen, Andrea Martelloni, et al. 2022. Exploring Experiences with New Musical Instruments through Micro-phenomenology. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.b304e4b1
Download PDF DOIThis paper introduces micro-phenomenology, a research discipline for exploring and uncovering the structures of lived experience, as a beneficial methodology for studying and evaluating interactions with digital musical instruments. Compared to other subjective methods, micro-phenomenology evokes and returns one to the moment of experience, allowing access to dimensions and observations which may not be recalled in reflection alone. We present a case study of five micro-phenomenological interviews conducted with musicians about their experiences with existing digital musical instruments. The interviews reveal deep, clear descriptions of different modalities of synchronic moments in interaction, especially in tactile connections and bodily sensations. We highlight the elements of interaction captured in these interviews which would not have been revealed otherwise and the importance of these elements in researching perception, understanding, interaction, and performance with digital musical instruments.
@inproceedings{NIME22_49, author = {Reed, Courtney N. and Nordmoen, Charlotte and Martelloni, Andrea and Lepri, Giacomo and Robson, Nicole and Zayas-Garin, Eevee and Cotton, Kelsey and Mice, Lia and McPherson, Andrew}, title = {Exploring Experiences with New Musical Instruments through Micro-phenomenology}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {49}, doi = {10.21428/92fbeb44.b304e4b1}, url = {https://doi.org/10.21428%2F92fbeb44.b304e4b1}, presentation-video = {https://youtu.be/-Ket6l90S8I}, pdf = {69.pdf} }
-
Enrico Dorigatti and Raul Masu. 2022. Circuit Bending and Environmental Sustainability: Current Situation and Steps Forward. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.18502d1d
Download PDF DOIIn this paper, we propose a set of reflections to actively incorporate environmental sustainability instances in the practice of circuit bending. This proposal combines circuit bending-related concepts with literature from the domain of sustainable Human-Computer Interaction (HCI). We commence by presenting an overview of the critical discourse within the New Interfaces for Musical Expression (NIME) community, and of circuit bending itself—exposing the linkages this practice has with themes directly related to this research, such as environmental sustainability and philosophy. Afterwards, we look at how the topic of environmental sustainability has been discussed, concerning circuit bending, within the NIME literature. We conclude by developing a list of recommendations for a sustainable circuit bending practice.
@inproceedings{NIME22_5, author = {Dorigatti, Enrico and Masu, Raul}, title = {Circuit Bending and Environmental Sustainability: Current Situation and Steps Forward}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {5}, doi = {10.21428/92fbeb44.18502d1d}, url = {https://doi.org/10.21428%2F92fbeb44.18502d1d}, presentation-video = {https://youtu.be/n3GcaaHkats}, pdf = {11.pdf} }
-
Giacomo Lepri, John Bowers, Samantha Topley, et al. 2022. The 10,000 Instruments Workshop - (Im)practical Research for Critical Speculation. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.9e7c9ba3
Download PDF DOIThis paper describes the 10,000 Instruments workshop, a collaborative online event conceived to generate interface ideas and speculate on music technology through open-ended artefacts and playful design explorations. We first present the activity, setting its research and artistic scope. We then report on a selection of outcomes created by workshop attendees, and examine the critical design statements they convey. The paper concludes with reflections on the make-believe, whimsical and troublemaking approach to instrument design adopted in the workshop. In particular, we consider the ways this activity can support individuals’ creativity, unlock shared musical visions and reveal unconventional perspectives on music technology development.
@inproceedings{NIME22_50, author = {Lepri, Giacomo and Bowers, John and Topley, Samantha and Stapleton, Paul and Bennett, Peter and Andersen, Kristina and McPherson, Andrew}, title = {The 10,000 Instruments Workshop - (Im)practical Research for Critical Speculation}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {50}, doi = {10.21428/92fbeb44.9e7c9ba3}, url = {https://doi.org/10.21428%2F92fbeb44.9e7c9ba3}, presentation-video = {https://youtu.be/dif8K23TR1Y}, pdf = {70.pdf} }
-
Sam Trolland, Alon Ilsar, Ciaran Frame, Jon McCormack, and Elliott Wilson. 2022. AirSticks 2.0: Instrument Design for Expressive Gestural Interaction. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.c400bdc2
Download PDF DOIIn this paper we present the development of a new gestural musical instrument, the AirSticks 2.0. The AirSticks 2.0 combines the latest advances in sensor fusion of Inertial Measurement Units (IMU) and low latency wireless data transmission over Bluetooth Low Energy (BLE), to give an expressive wireless instrument capable of triggering and manipulating discrete and continuous sound events in real-time. We outline the design criteria for this new instrument that has evolved from previous prototypes, give a technical overview of the custom hardware and software developed, and present short videos of three distinct mappings that intuitively translate movement into musical sounds.
@inproceedings{NIME22_51, author = {Trolland, Sam and Ilsar, Alon and Frame, Ciaran and McCormack, Jon and Wilson, Elliott}, title = {{AirSticks} 2.0: Instrument Design for Expressive Gestural Interaction}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {51}, doi = {10.21428/92fbeb44.c400bdc2}, url = {https://doi.org/10.21428%2F92fbeb44.c400bdc2}, presentation-video = {https://youtu.be/TnEzwGshr48}, pdf = {77.pdf} }
-
Beat Rossmy, Maximilian Rauh, and Alexander Wiethoff. 2022. Towards User Interface Guidelines for Musical Grid Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.db84ecd0
Download PDF DOIMusical grid interfaces are becoming an industry standard for interfaces that allow interaction with music software, electronics, or instruments. However, there are no clearly defined design standards or guidelines, resulting in a multitude of grid interfaces with competing design approaches and making these already abstract UIs even more challenging. In this paper, we compare the co-existing design approaches of UIs for grid interfaces used by commercial and non-commercial developers and designers, and present the results of three experiments that tested the benefits of co-existing design approaches to mitigate some of the inherent design challenges.
@inproceedings{NIME22_52, author = {Rossmy, Beat and Rauh, Maximilian and Wiethoff, Alexander}, title = {Towards User Interface Guidelines for Musical Grid Interfaces}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {52}, doi = {10.21428/92fbeb44.db84ecd0}, url = {https://doi.org/10.21428%2F92fbeb44.db84ecd0}, presentation-video = {https://www.youtube.com/watch?v=JF514EWYiQ8}, pdf = {86.pdf} }
-
Beat Rossmy. 2022. Buttons, Sliders, and Keys – A Survey on Musical Grid Interface Standards. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.563bfea9
Download PDF DOIApplications for musical grid interfaces are designed without any established guidelines or defined design rules. However, within applications of different manufacturers, musicians, and designers, common patterns and conventions can be observed which might be developing towards unofficial standards. In this survey we analyzed 40 applications, instruments, or controllers and collected 18 types of recurring UI elements, which are clustered, described, and interactively presented in this survey. We further postulate 3 theses which standard UI elements should meet and propose novel UI elements deduced from WIMP standards.
@inproceedings{NIME22_53, author = {Rossmy, Beat}, title = {Buttons, Sliders, and Keys {\textendash} A Survey on Musical Grid Interface Standards}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {53}, doi = {10.21428/92fbeb44.563bfea9}, url = {https://doi.org/10.21428%2F92fbeb44.563bfea9}, presentation-video = {https://www.youtube.com/watch?v=CPHY4_G_LR0}, pdf = {87.pdf} }
-
Jack Armitage, Thor Magnusson, Victor Shepardson, and Halldor Ulfarsson. 2022. The Proto-Langspil: Launching an Icelandic NIME Research Lab with the Help of a Marginalised Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.6178f575
Download PDF DOIHistorically marginalised instruments witness and bear vital stories that can deeply affect identity and galvanise communities when revitalised. We present the proto-langspil as a contemporary interpretation of the langspil, an Icelandic monochord-like folk instrument, and describe its agential and performative contributions to the first Icelandic NIME research lab. This paper describes how the proto-langspil has served as an instrument in establishing the research methodology of our new lab and concretised the research agenda via a series of encounters with music performers and composers, luthiers, anthropologists, musicologists, designers and philosophers. These encounters have informed and challenged our research practices, mapped our surroundings, and embedded us in the local social fabric. We share our proto-langspil for replication, and reflect on encounters as a methodology framing mechanism that eschews the more traditional empirical approaches in HCI. We conclude with a final provocation for NIME researchers to embrace AI research with an open mind.
@inproceedings{NIME22_54, author = {Armitage, Jack and Magnusson, Thor and Shepardson, Victor and Ulfarsson, Halldor}, title = {The Proto-Langspil: Launching an Icelandic {NIME} Research Lab with the Help of a Marginalised Instrument}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {54}, doi = {10.21428/92fbeb44.6178f575}, url = {https://doi.org/10.21428%2F92fbeb44.6178f575}, presentation-video = {https://youtu.be/8tRTF1lB6Hg}, pdf = {88.pdf} }
-
Carla Sophie Tapparo and Victor Zappi. 2022. Bodily Awareness Through NIMEs: Deautomatising Music Making Processes. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.7e04cfc8
Download PDF DOIThe lived body, or soma, is the designation for the phenomenological experience of being a body, rather than simply a corporeal entity. Bodily knowledge, which evolves through bodily awareness, carries the lived body’s reflectivity. In this paper, such considerations are put in the context of previous work at NIME, specifically that revolving around the vocal tract or the voice, due to its singular relation with embodiment. We understand that focusing on somaesthetics allows for novel ways of engaging with technology as well as highlighting biases that might go unnoticed otherwise. We present an inexpensive application of a respiration sensor that emerges from the aforementioned conceptualisations. Lastly, we reflect on how to better frame the role of bodily awareness in NIME.
@inproceedings{NIME22_55, author = {Tapparo, Carla Sophie and Zappi, Victor}, title = {Bodily Awareness Through {NIMEs}: Deautomatising Music Making Processes}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {55}, doi = {10.21428/92fbeb44.7e04cfc8}, url = {https://doi.org/10.21428%2F92fbeb44.7e04cfc8}, presentation-video = {https://youtu.be/GEndgifZmkI}, pdf = {99.pdf} }
-
Laddy Patricia Cadavid Hinojosa. 2022. Kanchay_Yupana\slash \slash: Tangible rhythm sequencer inspired by ancestral Andean technologies. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.61d01269
Download PDF DOIThe Kanchay_Yupana// is an open-source NIME for the generation of rhythms, inspired by the Andean yupana: a tangible board similar to an abacus of different sizes and materials with a system of carved geometric boxes into which seeds or pebbles were disposed to perform arithmetic calculations, used since pre-colonial times. As in the traditional artifact, the interaction of this new electronic yupana is based on the arrangement of seeds on a specially designed board with boxes, holes, and photoresistors. The shadow detected by the seeds’ positioning sends real-time motion data in MIDI messages to Pure Data in a drum machine patch. As a result, percussion samples of Andean instruments fill pulses in a four-quarter beat, generating patterns that can be transformed live into different rhythms. This interface complements the Electronic_Khipu_ (a previous NIME based on an Andean khipu) by producing the rhythmic component. This experience unites ancestral and contemporary technologies in experimental sound performance following the theoretical-practical research on the vindication of the memory in ancestral Andean technological interfaces made invisible by colonization, reusing them from a decolonial perspective in NIMEs.
@inproceedings{NIME22_56, author = {Cadavid Hinojosa, Laddy Patricia}, title = {Kanchay_Yupana{\slash \slash}: Tangible rhythm sequencer inspired by ancestral Andean technologies}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {56}, doi = {10.21428/92fbeb44.61d01269}, url = {https://doi.org/10.21428/92fbeb44.61d01269}, presentation-video = {https://youtu.be/MpMFL6R14kQ}, copyright = {Creative Commons Attribution 4.0 International}, pdf = {49.pdf} }
-
Georgios Diapoulis, Iannis Zannos, Kivanç Tatar, and Palle Dahlstedt. 2022. Bottom-up live coding: Analysis of continuous interactions towards predicting programming behaviours. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.51fecaab
Download PDF DOIThis paper explores a minimalist approach to live coding using a single input parameter to manipulate the graph structure of a finite state machine through a stream of bits. This constitutes an example of bottom-up live coding, which operates on a low level language to generate a high level structure output. Here we examine systematically how to apply mappings of continuous gestural interactions to develop a bottom-up system for predicting programming behaviours. We conducted a statistical analysis based on a controlled data generation procedure. The findings concur with the subjective experience of the behavior of the system when the user modulates the sampling frequency of a variable clock using a knob as an input device. This suggests that a sequential predictive model may be applied towards the development of a tactically predictive system according to Tanimoto’s hierarchy of liveness. The code is provided in a git repository.
@inproceedings{NIME22_6, author = {Diapoulis, Georgios and Zannos, Iannis and Tatar, Kivan{\c{c}} and Dahlstedt, Palle}, title = {Bottom-up live coding: Analysis of continuous interactions towards predicting programming behaviours}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {6}, doi = {10.21428/92fbeb44.51fecaab}, url = {https://doi.org/10.21428%2F92fbeb44.51fecaab}, presentation-video = {https://youtu.be/L_v5P7jGK8Y}, pdf = {110.pdf} }
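As a rough illustration of the bottom-up idea (a low-level bit stream generating a higher-level structure), the following hypothetical sketch lets incoming bits both grow and traverse a tiny state machine whose states are mapped to notes. The data types, growth rule, knob scaling, and note mapping are invented for illustration and are not the authors' implementation.

```typescript
// Hypothetical sketch: a knob sets how fast a clock samples a binary source,
// and the resulting bit stream rewrites and traverses a small state machine.
type State = number;
const transitions = new Map<State, State>([[0, 0]]); // start as a single looping node
let current: State = 0;

function emitNote(midiNote: number): void {
  console.log(`note ${midiNote}`); // stand-in for triggering a synth
}

function feedBit(bit: 0 | 1): void {
  if (bit === 1) {
    // a "1" grows the graph: insert a new state after the current one
    const next = transitions.size;
    transitions.set(current, next);
    transitions.set(next, 0); // close the cycle back to the start
  }
  current = transitions.get(current) ?? 0; // every bit also advances one step
  emitNote(36 + current); // map the state index to a pitch
}

// A knob value in [0, 1] could set the bit clock's sampling interval:
const intervalMs = (knob: number) => 1000 / (1 + knob * 15); // ~1 Hz up to ~16 Hz
```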
-
Jianing Zheng and Nick Bryan-Kinns. 2022. Squeeze, Twist, Stretch: Exploring Deformable Digital Musical Interfaces Design Through Non-Functional Prototypes. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.41da9da5
Download PDF DOIDeformable interfaces are an emerging area of Human-Computer Interaction (HCI) research that offers nuanced and responsive physical interaction with digital technologies. They are well suited to creative and expressive forms of HCI such as Digital Musical Interfaces (DMIs). However, research on the design of deformable DMIs is limited. This paper explores the role that deformable interfaces might play in DMI design. We conducted an online study with 23 DMI designers in which they were invited to create non-functional deformable DMIs together. Our results suggest forms of gestural input and sound mappings that deformable interfaces intuitively lend themselves to for DMI design. From our results, we highlight four styles of DMI that deformable interfaces might be most suited to, and suggest the kinds of experience for which deformable DMIs might be most compelling for musicians and audiences. We discuss how DMI designers explore deformable materials and gestural input, and the role of unexpected affordances in the design process.
@inproceedings{NIME22_7, author = {Zheng, Jianing and Bryan-Kinns, Nick}, title = {Squeeze, Twist, Stretch: Exploring Deformable Digital Musical Interfaces Design Through Non-Functional Prototypes}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {7}, doi = {10.21428/92fbeb44.41da9da5}, url = {https://doi.org/10.21428%2F92fbeb44.41da9da5}, presentation-video = {https://youtu.be/KHqfxL4F7Bg}, pdf = {111.pdf} }
-
Rébecca Kleinberger, Nikhil Singh, Xiao Xiao, and Akito van Troyer. 2022. Voice at NIME: a Taxonomy of New Interfaces for Vocal Musical Expression. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.4308fb94
Download PDF DOIWe present a systematic review of voice-centered NIME publications from the past two decades. Musical expression has been a key driver of innovation in voice-based technologies, from traditional architectures that amplify singing to cutting-edge research in vocal synthesis. The NIME conference has emerged as a prime venue for innovative vocal interfaces. However, there hasn’t been a systematic analysis of all voice-related work or an effort to characterize their features. Analyzing trends in Vocal NIMEs can help the community better understand common interests, identify uncharted territories, and explore directions for future research. We identified a corpus of 98 papers about Vocal NIMEs from 2001 to 2021, which we analyzed in 3 ways. First, we automatically extracted latent themes and possible categories using natural language processing. Taking inspiration from concepts surfaced through this process, we then defined several core dimensions with associated descriptors of Vocal NIMEs and assigned each paper relevant descriptors under each dimension. Finally, we defined a classification system, which we then used to uniquely and more precisely situate each paper on a map, taking into account the overall goals of each work. Based on our analyses, we present trends and challenges, including questions of gender and diversity in our community, and reflect on opportunities for future work.
@inproceedings{NIME22_8, author = {Kleinberger, R{\'{e}}becca and Singh, Nikhil and Xiao, Xiao and Troyer, Akito van}, title = {Voice at {NIME}: a Taxonomy of New Interfaces for Vocal Musical Expression}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {8}, doi = {10.21428/92fbeb44.4308fb94}, url = {https://doi.org/10.21428%2F92fbeb44.4308fb94}, presentation-video = {https://youtu.be/PUlGjAblfPM}, pdf = {112.pdf} }
-
Brady Boettcher, John Sullivan, and Marcelo M. Wanderley. 2022. Slapbox: Redesign of a Digital Musical Instrument Towards Reliable Long-Term Practice. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.78fd89cc
Download PDF DOIDigital musical instruments (DMIs) built to be used in performance settings need to go beyond the prototypical stage of design to become robust, reliable, and responsive devices for extensive usage. This paper presents the Tapbox and the Slapbox, two generations of a standalone DMI built for percussion practice. After summarizing the requirements for performance DMIs from previous surveys, we introduce the Tapbox and comment on its strong and weak points. We then focus on the design process of the Slapbox, an improved version that captures a broader range of percussive gestures. Design tasks are reflected upon, including enclosure design, sensor evaluations, gesture extraction algorithms, and sound synthesis methods and mappings. Practical exploration of the Slapbox by two professional percussionists is performed and their insights summarized, providing directions for future work.
@inproceedings{NIME22_9, author = {Boettcher, Brady and Sullivan, John and Wanderley, Marcelo M.}, title = {Slapbox: Redesign of a Digital Musical Instrument Towards Reliable Long-Term Practice}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2022}, month = jun, address = {The University of Auckland, New Zealand}, issn = {2220-4806}, articleno = {9}, doi = {10.21428/92fbeb44.78fd89cc}, url = {https://doi.org/10.21428%2F92fbeb44.78fd89cc}, presentation-video = {https://youtu.be/NkYGAp4rmj8}, pdf = {114.pdf} }
2021
-
Stefano Fasciani and Jackson Goode. 2021. 20 NIMEs: Twenty Years of New Interfaces for Musical Expression. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.b368bcd5
Download PDF DOIThis paper provides figures and metrics over twenty years of New Interfaces for Musical Expression conferences, which are derived by analyzing the publicly available paper proceedings. Besides presenting statistical information and a bibliometric study, we aim at identifying trends and patterns. The analysis shows the growth and heterogeneity of the NIME demographic, as well as the increase in research output. The data presented in this paper allows the community to reflect on several issues such as diversity and sustainability, and it provides insights to address challenges and set future directions.
@inproceedings{NIME21_1, author = {Fasciani, Stefano and Goode, Jackson}, title = {20 NIMEs: Twenty Years of New Interfaces for Musical Expression}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {1}, doi = {10.21428/92fbeb44.b368bcd5}, url = {https://nime.pubpub.org/pub/20nimes}, presentation-video = {https://youtu.be/44W7dB7lzQg} }
-
Raul Masu, Nuno N. Correia, and Teresa Romao. 2021. NIME Scores: a Systematic Review of How Scores Have Shaped Performance Ecologies in NIME. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.3ffad95a
Download PDF DOIThis paper investigates how the concept of score has been used in the NIME community. To this end, we performed a systematic literature review of the NIME proceedings, analyzing papers in which scores play a central role. We analyzed the score not as an object per se but in relation to the users and the interactive system(s). In other words, we primarily looked at the role that scores play in the performance ecology. For this reason, to analyze the papers, we relied on ARCAA, a recent framework created to investigate artifact ecologies in computer music performances. Using the framework, we created a scheme for each paper and clustered the papers according to similarities. Our analysis produced five main categories that we present and discuss in relation to literature about musical scores.
@inproceedings{NIME21_10, author = {Masu, Raul and Correia, Nuno N. and Romao, Teresa}, title = {NIME Scores: a Systematic Review of How Scores Have Shaped Performance Ecologies in NIME}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {10}, doi = {10.21428/92fbeb44.3ffad95a}, url = {https://nime.pubpub.org/pub/41cj1pyt}, presentation-video = {https://youtu.be/j7XmQvDdUPk} }
-
Christian Frisson, Mathias Bredholt, Joseph Malloch, and Marcelo M. Wanderley. 2021. MapLooper: Live-looping of distributed gesture-to-sound mappings. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.47175201
Download PDF DOIThis paper presents the development of MapLooper: a live-looping system for gesture-to-sound mappings. We first reviewed loop-based Digital Musical Instruments (DMIs). We then developed a connectivity infrastructure for wireless embedded musical instruments with distributed mapping and synchronization. We evaluated our infrastructure in the context of the real-time constraints of music performance. We measured a round-trip latency of 4.81 ms when mapping signals at 100 Hz with embedded libmapper and an average inter-onset delay of 3.03 ms for synchronizing with Ableton Link. On top of this infrastructure, we developed MapLooper: a live-looping tool with 2 example musical applications: a harp synthesizer with SuperCollider and embedded source-filter synthesis with FAUST on ESP32. Our system is based on a novel approach to mapping, extrapolating from using FIR and IIR filters on gestural data to using delay-lines as part of the mapping of DMIs. Our system features rhythmic time quantization and a flexible loop manipulation system for creative musical exploration. We open-source all of our components.
@inproceedings{NIME21_11, author = {Frisson, Christian and Bredholt, Mathias and Malloch, Joseph and Wanderley, Marcelo M.}, title = {MapLooper: Live-looping of distributed gesture-to-sound mappings}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {11}, doi = {10.21428/92fbeb44.47175201}, url = {https://nime.pubpub.org/pub/2pqbusk7}, presentation-video = {https://youtu.be/9r0zDJA8qbs} }
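The idea of treating a delay-line as part of a gesture-to-sound mapping can be illustrated with a short hypothetical sketch: a control signal is overdubbed into a buffer exactly one loop long, so earlier gestures keep re-emerging in the mapped output. The class name, loop length, and feedback amount below are invented for illustration and are not MapLooper's actual API.

```typescript
// Hypothetical delay-line looper over a gestural control signal.
class ControlLooper {
  private buffer: number[];
  private index = 0;

  constructor(loopLengthSamples: number, private feedback = 0.9) {
    this.buffer = new Array(loopLengthSamples).fill(0);
  }

  /** Feed one gesture sample and get back the looped (overdubbed) value. */
  process(gesture: number): number {
    const delayed = this.buffer[this.index];       // what happened one loop ago
    const out = gesture + this.feedback * delayed; // overdub with gentle decay
    this.buffer[this.index] = out;                 // write back into the loop
    this.index = (this.index + 1) % this.buffer.length;
    return out;
  }
}

// e.g. a 2-second loop for a control signal sampled at 100 Hz:
const looper = new ControlLooper(200);
const mapped = looper.process(0.42); // latest gesture sample in, looped value out
```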
-
P. J. Charles Reimer and Marcelo M. Wanderley. 2021. Embracing Less Common Evaluation Strategies for Studying User Experience in NIME. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.807a000f
Download PDF DOIAssessment of user experience (UX) is increasingly important in music interaction evaluation, as witnessed in previous NIME reviews describing varied and idiosyncratic evaluation strategies. This paper focuses on evaluations conducted in the last four years of NIME (2017 to 2020), compares results to previous research, and classifies evaluation types to describe how researchers approach and study UX in NIME. While results of this review confirm patterns such as the prominence of short-term, performer perspective evaluations, and the variety of evaluation strategies used, they also show that UX-focused evaluations are typically exploratory and limited to novice performers. Overall, these patterns indicate that current UX evaluation strategies do not address dynamic factors such as skill development, the evolution of the performer-instrument relationship, and hedonic and cognitive aspects of UX. To address such limitations, we discuss a number of less common tools developed within and outside of NIME that focus on dynamic aspects of UX, potentially leading to more informative and meaningful evaluation insights.
@inproceedings{NIME21_12, author = {Reimer, P. J. Charles and Wanderley, Marcelo M.}, title = {Embracing Less Common Evaluation Strategies for Studying User Experience in NIME}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {12}, doi = {10.21428/92fbeb44.807a000f}, url = {https://nime.pubpub.org/pub/fidgs435}, presentation-video = {https://youtu.be/WTaee8NVtPg} }
-
Takuto Fukuda, Eduardo Meneses, Travis West, and Marcelo M. Wanderley. 2021. The T-Stick Music Creation Project: An approach to building a creative community around a DMI. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.26f33210
Download PDF DOITo tackle digital musical instrument (DMI) longevity and the problem of the second performer, we proposed the T-Stick Music Creation Project, a series of musical commissions along with workshops, mentorship, and technical support, meant to foment composition and performance using the T-Stick and provide an opportunity to improve technical and pedagogical support for the instrument. Based on the project’s outcomes, we describe three main contributions: our approach; the artistic works produced; and analysis of these works demonstrating the T-Stick as actuator, modulator, and data provider.
@inproceedings{NIME21_13, author = {Fukuda, Takuto and Meneses, Eduardo and West, Travis and Wanderley, Marcelo M.}, title = {The T-Stick Music Creation Project: An approach to building a creative community around a DMI}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {13}, doi = {10.21428/92fbeb44.26f33210}, url = {https://nime.pubpub.org/pub/7c4qdj4u}, presentation-video = {https://youtu.be/tfOUMr3p4b4} }
-
Doga Cavdir, Chris Clarke, Patrick Chiu, Laurent Denoue, and Don Kimber. 2021. Reactive Video: Movement Sonification for Learning Physical Activity with Adaptive Video Playback. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.eef53755
Download PDF DOIThis paper presents initial efforts in developing and evaluating a real-time movement sonification framework for physical activity practice and learning. Reactive Video provides interactive, vision-based, adaptive video playback with auditory feedback on users’ performance to better support them when learning and practicing new physical skills. We implement the sonification for auditory feedback design by extending the Web Audio API framework. The current application focuses on Tai Chi performance and provides two main audio cues to users for several Tai Chi exercises. We present our design approach, implementation, and sound generation and mapping, specifically for interactive systems with direct video manipulation. Our observations reveal the relationship between the movement-to-sound mapping and the characteristics of the physical activity.
@inproceedings{NIME21_14, author = {Cavdir, Doga and Clarke, Chris and Chiu, Patrick and Denoue, Laurent and Kimber, Don}, title = {Reactive Video: Movement Sonification for Learning Physical Activity with Adaptive Video Playback}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {14}, doi = {10.21428/92fbeb44.eef53755}, url = {https://nime.pubpub.org/pub/dzlsifz6}, presentation-video = {https://youtu.be/pbvZI80XgEU} }
-
Daniel Chin, Ian Zhang, and Gus Xia. 2021. Hyper-hybrid Flute: Simulating and Augmenting How Breath Affects Octave and Microtone. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.c09d91be
Download PDF DOIWe present the hyper-hybrid flute, a new interface which can be toggled between its electronic mode and its acoustic mode. In its acoustic mode, the interface is identical to the regular six-hole recorder. In its electronic mode, the interface detects the player’s fingering and breath velocity and translates them to MIDI messages. Specifically, it maps higher breath velocity to higher octaves, with the modulo remainder controlling the microtonal pitch bend. This novel mapping reproduces a highly realistic flute-playing experience. Furthermore, changing the parameters easily augments the interface into a hyperinstrument that allows the player to control microtones more expressively via breathing techniques.
@inproceedings{NIME21_15, author = {Chin, Daniel and Zhang, Ian and Xia, Gus}, title = {Hyper-hybrid Flute: Simulating and Augmenting How Breath Affects Octave and Microtone}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {15}, doi = {10.21428/92fbeb44.c09d91be}, url = {https://nime.pubpub.org/pub/eshr}, presentation-video = {https://youtu.be/UIqsYK9F4xo} }
-
Beat Rossmy and Alexander Wiethoff. 2021. Musical Grid Interfaces: Past, Present, and Future Directions. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.6a2451e6
Download PDF DOIThis paper examines grid interfaces, which are currently used in many musical devices and instruments. This type of interface concept has been rooted in the NIME community since the early 2000s. We provide an overview of research projects and commercial products, and we conducted an expert interview as well as an online survey. In summary, this work shares: (1) an overview of grid controller research, (2) a set of three usability issues deduced through a multi-method approach, and (3) an evaluation of user perceptions regarding persistent usability issues and common reasons for the use of grid interfaces.
@inproceedings{NIME21_16, author = {Rossmy, Beat and Wiethoff, Alexander}, title = {Musical Grid Interfaces: Past, Present, and Future Directions}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {16}, doi = {10.21428/92fbeb44.6a2451e6}, url = {https://nime.pubpub.org/pub/grid-past-present-future}, presentation-video = {https://youtu.be/GuPIz2boJwA} }
-
Beat Rossmy, Sebastian Unger, and Alexander Wiethoff. 2021. TouchGrid – Combining Touch Interaction with Musical Grid Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.303223db
Download PDF DOIMusical grid interfaces such as the monome grid have developed into standard interfaces for musical equipment over the last 15 years. However, the types of possible interactions have more or less remained the same, with grid capabilities expanded only by external IO elements. We therefore propose to transfer capacitive touch technology to grid devices to expand their input capabilities by combining tangible and capacitive-touch-based interaction paradigms. This makes it possible to retain the generic nature of grid interfaces, which is a key feature for many users. In this paper we present the TouchGrid concept and share our proof-of-concept implementation as well as an expert evaluation of the general concept of touch interaction used on grid devices. TouchGrid provides swipe and bezel interaction derived from smartphone interfaces to allow navigation between applications and access to menu systems in a familiar way.
@inproceedings{NIME21_17, author = {Rossmy, Beat and Unger, Sebastian and Wiethoff, Alexander}, title = {TouchGrid – Combining Touch Interaction with Musical Grid Interfaces}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {17}, doi = {10.21428/92fbeb44.303223db}, url = {https://nime.pubpub.org/pub/touchgrid}, presentation-video = {https://youtu.be/ti2h_WK5NeU} }
-
Corey Ford, Nick Bryan-Kinns, and Chris Nash. 2021. Creativity in Children’s Digital Music Composition. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.e83deee9
Download PDF DOIComposing is a neglected area of music education. To increase participation, many technologies provide open-ended interfaces to motivate child autodidactic use, drawing influence from Papert’s LOGO philosophy to support children’s learning through play. This paper presents a case study examining which interactions with Codetta, a LOGO-inspired, block-based music platform, support children’s creativity in music composition. Interaction logs were collected from 20 children and correlated against socially-validated creativity scores. To conclude, we recommend that the transition between low-level edits and high-level processes should be carefully scaffolded.
@inproceedings{NIME21_18, author = {Ford, Corey and Bryan-Kinns, Nick and Nash, Chris}, title = {Creativity in Children's Digital Music Composition}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {18}, doi = {10.21428/92fbeb44.e83deee9}, url = {https://nime.pubpub.org/pub/ker5w948}, presentation-video = {https://youtu.be/XpMiDWrxXMU} }
-
Yinmiao Li, Ziyue Piao, and Gus Xia. 2021. A Wearable Haptic Interface for Breath Guidance in Vocal Training. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.6d342615
Download PDF DOIVarious studies have shown that haptic interfaces can enhance the efficiency of music learning, but most existing studies focus on training the motor skills of instrument playing, such as finger motions. In this paper, we present a wearable haptic device to guide diaphragmatic breathing, which can be used in vocal training as well as in learning wind instruments. The device is a wearable strap vest, consisting of a spinal exoskeleton on the back for inhalation and an elastic belt around the waist for exhalation. We first conducted case studies to assess how convenient and comfortable the device is to wear, and then evaluated its effectiveness in guiding rhythm and breath. Results show users’ acceptance of the haptic interface and the potential of haptic guidance in vocal training.
@inproceedings{NIME21_19, author = {Li, Yinmiao and Piao, Ziyue and Xia, Gus}, title = {A Wearable Haptic Interface for Breath Guidance in Vocal Training}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {19}, doi = {10.21428/92fbeb44.6d342615}, url = {https://nime.pubpub.org/pub/cgi7t0ta}, presentation-video = {https://youtu.be/-t-u0V-27ng} }
-
Lior Arbel. 2021. Aeolis: A Virtual Instrument Producing Pitched Tones With Soundscape Timbres. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.64f66047
Download PDF DOIAmbient sounds such as breaking waves or rustling leaves are sometimes used in music recording, composition and performance. However, as these sounds lack a precise pitch, they cannot be used melodically. This work describes Aeolis, a virtual instrument producing pitched tones from a real-time ambient sound input using subtractive synthesis. The produced tones retain the identifiable timbres of the ambient sounds. Tones generated using input sounds from various environments, such as sea waves, rustling leaves and traffic noise, are analyzed. A configuration for a live in-situ performance is described, consisting of live-streaming the produced sounds. In this configuration, the environment itself acts as a ‘performer’ of sorts, alongside the Aeolis player, providing both real-time input signals and complementary visual cues.
@inproceedings{NIME21_2, author = {Arbel, Lior}, title = {Aeolis: A Virtual Instrument Producing Pitched Tones With Soundscape Timbres}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {2}, doi = {10.21428/92fbeb44.64f66047}, url = {https://nime.pubpub.org/pub/c3w33wya}, presentation-video = {https://youtu.be/C0WEeaYy0tQ} }
-
Florent Berthaut. 2021. Musical Exploration of Volumetric Textures in Mixed and Virtual Reality. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.6607d04f
Download PDF DOIThe development of technologies for acquisition and display gives access to a large variety of volumetric (3D) textures, either synthetic or obtained through tomography. They constitute extremely rich data which is usually explored for informative purposes in medical or engineering contexts. We believe that this exploration has strong potential for musical expression. To that end, we propose a design space for the musical exploration of volumetric textures. We describe the challenges for its implementation in Virtual and Mixed Reality, and we present a case study with an instrument called the Volume Sequencer, which we analyse using our design space. Finally, we evaluate the impact on expressive exploration of two dimensions, namely the amount of visual feedback and the selection variability.
@inproceedings{NIME21_20, author = {Berthaut, Florent}, title = {Musical Exploration of Volumetric Textures in Mixed and Virtual Reality}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {20}, doi = {10.21428/92fbeb44.6607d04f}, url = {https://nime.pubpub.org/pub/sqceyucq}, presentation-video = {https://youtu.be/C9EiA3TSUag} }
-
Abby Aresty and Rachel Gibson. 2021. Changing GEAR: The Girls Electronic Arts Retreat’s Teaching Interfaces for Musical Expression. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.25757aca
Download PDF DOIThe Girls Electronic Arts Retreat (GEAR) is a STEAM summer camp for ages 8–11. In this paper, we compare and contrast lessons from the first two iterations of GEAR, including one in-person and one remote session. We introduce our Teaching Interfaces for Musical Expression (TIME) framework and use our analyses to compose a list of best practices in TIME development and implementation.
@inproceedings{NIME21_21, author = {Aresty, Abby and Gibson, Rachel}, title = {Changing GEAR: The Girls Electronic Arts Retreat's Teaching Interfaces for Musical Expression}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {21}, doi = {10.21428/92fbeb44.25757aca}, url = {https://nime.pubpub.org/pub/8lop0zj4}, presentation-video = {https://youtu.be/8qeFjNGaEHc} }
-
Anne Sophie Andersen and Derek Kwan. 2021. Grisey’s ’Talea’: Musical Representation As An Interactive 3D Map. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.27d09832
Download PDF DOIThe praxis of using detailed visual models to illustrate complex ideas is well established in the sciences but less so in music theory. Taking the composer’s notes as a starting point, we have developed a complete interactive 3D model of Grisey’s Talea (1986). Our model presents a novel approach to music education and theory by making the understanding of complex musical structures accessible to students and non-musicians, particularly those who struggle with traditional means of learning or whose mode of learning is predominantly visual. The model builds on the foundations of 1) the historical associations between the visual and musical arts, and those concerning the spectralists in particular, and 2) evidence of recurring cross-modal associations in the general population and consistent associations for individual synesthetes. Research into educational uses of the model is a topic for future exploration.
@inproceedings{NIME21_22, author = {Andersen, Anne Sophie and Kwan, Derek}, title = {Grisey’s 'Talea': Musical Representation As An Interactive 3D Map}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {22}, doi = {10.21428/92fbeb44.27d09832}, url = {https://nime.pubpub.org/pub/oiwz8bb7}, presentation-video = {https://youtu.be/PGYOkFjyrek} }
-
Enrique Tomás, Thomas Gorbach, Hilda Tellioğlu, and Martin Kaltenbrunner. 2021. Embodied Gestures: Sculpting Energy-Motion Models into Musical Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.ce8139a8
Download PDF DOIIn this paper we discuss the beneficial aspects of incorporating energy-motion models as a design pattern in musical interface design. These models can be understood as archetypes of motion trajectories which are commonly applied in the analysis and composition of acousmatic music. With the aim of exploring a possible new paradigm for interface design, our research builds on the parallel investigation of embodied music cognition theory and the praxis of acousmatic music. After running a large study on listeners’ spontaneous rendering of form and movement, we built a number of digital instruments especially designed to emphasise a particular energy-motion profile. The evaluation through composition and performance indicates that this design paradigm can foster musical inventiveness and expression in the processes of composition and performance of gestural electronic music.
@inproceedings{NIME21_23, author = {Tomás, Enrique and Gorbach, Thomas and Tellioğlu, Hilda and Kaltenbrunner, Martin}, title = {Embodied Gestures: Sculpting Energy-Motion Models into Musical Interfaces}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {23}, doi = {10.21428/92fbeb44.ce8139a8}, url = {https://nime.pubpub.org/pub/gsx1wqt5}, presentation-video = {https://youtu.be/QDjCEnGYSC4} }
-
Raul Masu, Adam Pultz Melbye, John Sullivan, and Alexander Refsum Jensenius. 2021. NIME and the Environment: Toward a More Sustainable NIME Practice. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.5725ad8f
Download PDF DOIThis paper addresses environmental issues around NIME research and practice. We discuss the formulation of an environmental statement for the conference as well as the initiation of a NIME Eco Wiki containing information on environmental concerns related to the creation of new musical instruments. We outline a number of these concerns and, by systematically reviewing the proceedings of all previous NIME conferences, identify a general lack of reflection on the environmental impact of the research undertaken. Finally, we propose a framework for addressing the making, testing, using, and disposal of NIMEs in the hope that sustainability may become a central concern to researchers.
@inproceedings{NIME21_24, author = {Masu, Raul and Melbye, Adam Pultz and Sullivan, John and Jensenius, Alexander Refsum}, title = {NIME and the Environment: Toward a More Sustainable NIME Practice}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {24}, doi = {10.21428/92fbeb44.5725ad8f}, url = {https://nime.pubpub.org/pub/4bbl5lod}, presentation-video = {https://youtu.be/JE6YqYsV5Oo} }
-
Randall Harlow, Mattias Petersson, Robert Ek, Federico Visi, and Stefan Östersjö. 2021. Global Hyperorgan: a platform for telematic musicking and research. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.d4146b2d
Download PDF DOIThe Global Hyperorgan is an intercontinental, creative space for acoustic musicking. Existing pipe organs around the world are networked for real-time, geographically-distant performance, with performers utilizing instruments and other input devices to collaborate musically through the voices of the pipes in each location. A pilot study was carried out in January 2021, connecting two large pipe organs in Piteå, Sweden, and Amsterdam, the Netherlands. A quartet of performers tested the Global Hyperorgan’s capacities for telematic musicking through a series of pieces. The concept of modularity is useful when considering the artistic challenges and possibilities of the Global Hyperorgan. We observe how the modular system utilized in the pilot study afforded multiple experiences of shared instrumentality from which new, synthetic voices emerge. As a long-term technological, artistic and social research project, the Global Hyperorgan offers a platform for exploring technology, agency, voice, and intersubjectivity in hyper-acoustic telematic musicking.
@inproceedings{NIME21_25, author = {Harlow, Randall and Petersson, Mattias and Ek, Robert and Visi, Federico and Östersjö, Stefan}, title = {Global Hyperorgan: a platform for telematic musicking and research}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {25}, doi = {10.21428/92fbeb44.d4146b2d}, url = {https://nime.pubpub.org/pub/a626cbqh}, presentation-video = {https://youtu.be/t88aIXdqBWQ} }
-
Luis Zayas-Garin, Jacob Harrison, Robert Jack, and Andrew McPherson. 2021. DMI Apprenticeship: Sharing and Replicating Musical Artefacts. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.87f1d63e
Download PDF DOIThe nature of digital musical instruments (DMIs), often bespoke artefacts devised by individual technologists or small groups, requires thought about how they are shared and archived so that others can replicate or adapt designs. The ability to replicate contributes to an instrument’s longevity and creates opportunities for both DMI designers and researchers. Research papers often omit knowledge necessary for replicating research artefacts, but we argue that mitigating this situation is not just about including design materials and documentation. Our way of approaching this issue is to draw on an age-old method of disseminating knowledge: the apprenticeship. We propose the DMI apprenticeship as a way of exploring the procedural obstacles of replicating DMIs, while highlighting for both apprentice and designer the elements of knowledge that are a challenge to communicate in conventional documentation. Our own engagement with the DMI apprenticeship led to successfully replicating an instrument, Strummi. Framing this process as an apprenticeship highlighted the non-obvious areas of the documentation and manufacturing process that are crucial to the successful replication of a DMI.
@inproceedings{NIME21_26, author = {Zayas-Garin, Luis and Harrison, Jacob and Jack, Robert and McPherson, Andrew}, title = {DMI Apprenticeship: Sharing and Replicating Musical Artefacts}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {26}, doi = {10.21428/92fbeb44.87f1d63e}, url = {https://nime.pubpub.org/pub/dmiapprenticeship}, presentation-video = {https://youtu.be/zTMaubJjlzA} }
-
Kelsey Cotton, Pedro Sanches, Vasiliki Tsaknaki, and Pavel Karpashevich. 2021. The Body Electric: A NIME designed through and with the somatic experience of singing. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.ec9f8fdd
Download PDF DOIThis paper presents the soma design process of creating Body Electric: a novel interface for the capture and use of biofeedback signals and physiological changes generated in the body by breathing during singing. This NIME design is grounded in the performer’s experience of, and relationship to, their body and their voice. We show that NIME design using principles from soma design can offer creative opportunities in developing novel sensing mechanisms, which can in turn inform composition and further elicit curious engagements between performer and artefact, disrupting notions of performer-led control. As contributions, this work 1) offers an example of NIME design for situated living, feeling, performing bodies, and 2) presents the rich potential of soma design as a path for designing in this context.
@inproceedings{NIME21_27, author = {Cotton, Kelsey and Sanches, Pedro and Tsaknaki, Vasiliki and Karpashevich, Pavel}, title = {The Body Electric: A NIME designed through and with the somatic experience of singing}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {27}, doi = {10.21428/92fbeb44.ec9f8fdd}, url = {https://nime.pubpub.org/pub/ntm5kbux}, presentation-video = {https://youtu.be/zwzCgG8MXNA} }
-
Emma Frid and Alon Ilsar. 2021. Reimagining (Accessible) Digital Musical Instruments: A Survey on Electronic Music-Making Tools. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.c37a2370
Download PDF DOIThis paper discusses findings from a survey on interfaces for making electronic music. We invited electronic music makers of varying experience to reflect on their practice and setup and to imagine and describe their ideal interface for music-making. We also asked them to reflect on the state of gestural controllers, machine learning, and artificial intelligence in their practice. A total of 118 people responded to the survey, 40.68% of whom were professional musicians and 10.17% of whom identified as living with a disability or access requirement. Results highlight limitations of music-making setups as perceived by electronic music makers, reflections on how imagined novel interfaces could address such limitations, and positive attitudes towards ML and AI in general.
@inproceedings{NIME21_28, author = {Frid, Emma and Ilsar, Alon}, title = {Reimagining (Accessible) Digital Musical Instruments: A Survey on Electronic Music-Making Tools}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {28}, doi = {10.21428/92fbeb44.c37a2370}, url = {https://nime.pubpub.org/pub/reimaginingadmis}, presentation-video = {https://youtu.be/vX8B7fQki_w} }
-
Jonathan Pitkin. 2021. SoftMRP: a Software Emulation of the Magnetic Resonator Piano. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.9e7da18f
Download PDF DOIThe Magnetic Resonator Piano (MRP) is a relatively well-established DMI which significantly expands the capabilities of the acoustic piano. This paper presents SoftMRP, a Max/MSP patch designed to emulate the physical MRP and thereby to allow rehearsal of MRP repertoire and performance techniques using any MIDI keyboard and expression pedal; it is hoped that the development of such a tool will encourage even more widespread adoption of the original instrument amongst composers and performers. This paper explains SoftMRP’s features and limitations, discussing the challenges of approximating responses which rely upon the MRP’s continuous sensing of key position, and considering ways in which the development of the emulation might feed back into the development of the original instrument, both specifically and more broadly: since it was designed by a composer, based on his experience of writing for the instrument, it offers the MRP’s designers an insight into how the instrument is conceptualised and understood by the musicians who use it.
@inproceedings{NIME21_29, author = {Pitkin, Jonathan}, title = {SoftMRP: a Software Emulation of the Magnetic Resonator Piano}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {29}, doi = {10.21428/92fbeb44.9e7da18f}, url = {https://nime.pubpub.org/pub/m9nhdm0p}, presentation-video = {https://youtu.be/Fw43nHVyGUg} }
-
Andreas Förster and Mathias Komesker. 2021. LoopBlocks: Design and Preliminary Evaluation of an Accessible Tangible Musical Step Sequencer. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.f45e1caf
Download PDF DOIThis paper presents the design and preliminary evaluation of an Accessible Digital Musical Instrument (ADMI) in the form of a tangible wooden step sequencer that uses photoresistors and wooden blocks to trigger musical events. Furthermore, the paper presents a short overview of design criteria for ADMIs based on the literature and first insights from an ongoing qualitative interview study with German Special Educational Needs (SEN) teachers conducted by the first author. The preliminary evaluation is realized through reflection on these criteria. The instrument was designed as a starting point for a participatory design process in music education settings. The software is programmed in Pure Data and runs on a Raspberry Pi computer that fits inside the body of the instrument. While most similar developments focus on professional performance and complex interactions, LoopBlocks focuses on accessibility and Special Educational Needs settings. The main goal is to reduce the cognitive load needed to play music by providing a clear and constrained interaction, thus reducing intellectual and technical barriers to active music making.
@inproceedings{NIME21_3, author = {Förster, Andreas and Komesker, Mathias}, title = {LoopBlocks: Design and Preliminary Evaluation of an Accessible Tangible Musical Step Sequencer}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {3}, doi = {10.21428/92fbeb44.f45e1caf}, url = {https://nime.pubpub.org/pub/bj2w1gdx}, presentation-video = {https://youtu.be/u5o0gmB3MX8} }
-
Kyriakos Tsoukalas and Ivica Bukvic. 2021. Music Computing and Computational Thinking: A Case Study. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.1eeb3ada
Download PDF DOIThe NIME community has proposed a variety of interfaces that connect music making and education. This paper reviews the current literature, proposes a method for developing educational NIMEs, and reflects on a way to manifest computational thinking through music computing. A case study is presented and discussed in which a programmable mechatronic educational NIME, together with a virtual simulation of it offered as a web application, was developed.
@inproceedings{NIME21_30, author = {Tsoukalas, Kyriakos and Bukvic, Ivica}, title = {Music Computing and Computational Thinking: A Case Study}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {30}, doi = {10.21428/92fbeb44.1eeb3ada}, url = {https://nime.pubpub.org/pub/t94aq9rf}, presentation-video = {https://youtu.be/pdsfZX_kJBo} }
-
Travis West, Baptiste Caramiaux, Stéphane Huot, and Marcelo M. Wanderley. 2021. Making Mappings: Design Criteria for Live Performance. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.04f0fc35
Download PDF DOIWe present new results combining data from a previously published study of the mapping design process with a new replication of the same method with a group of participants having different background expertise. Our thematic analysis of participants’ interview responses reveals some design criteria common to both groups of participants: mappings must manage the balance of control between the instrument and the player, and they should be easy to understand for the player and audience. We also consider several criteria that distinguish the two groups’ evaluation strategies. We conclude with a discussion of the mapping designer’s perspective, performance with gestural controllers, and the difficulties of evaluating mapping designs and musical instruments in general.
@inproceedings{NIME21_31, author = {West, Travis and Caramiaux, Baptiste and Huot, Stéphane and Wanderley, Marcelo M.}, title = {Making Mappings: Design Criteria for Live Performance}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {31}, doi = {10.21428/92fbeb44.04f0fc35}, url = {https://nime.pubpub.org/pub/f1ueovwv}, presentation-video = {https://youtu.be/3hM531E_vlg} }
-
Andrea Martelloni, Andrew McPherson, and Mathieu Barthet. 2021. Guitar augmentation for Percussive Fingerstyle: Combining self-reflexive practice and user-centred design. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.2f6db6e6
Download PDF DOIWhat is the relationship between a musician-designer’s auditory imagery for a musical piece, a design idea for an augmented instrument to support the realisation of that piece, and the aspiration to introduce the resulting instrument to a community of like-minded performers? We explore this NIME topic in the context of building the first iteration of an augmented acoustic guitar prototype for percussive fingerstyle guitarists. The first author, himself a percussive fingerstyle player, started the augmented guitar project with expectations and assumptions based on his own playing style, and in particular on the arrangement of one song. This input was complemented by the outcome of an interview study in which percussive guitarists highlighted functional and creative requirements to suit their needs. We ran a pilot study to assess the resulting prototype, involving two other players. We present their feedback on two configurations of the prototype, one equalising the signal of surface sensors and the other based on sample triggering. The equalisation-based setting was better received; however, both participants provided useful suggestions for improving the sample-triggering model following their own auditory imagery.
@inproceedings{NIME21_32, author = {Martelloni, Andrea and McPherson, Andrew and Barthet, Mathieu}, title = {Guitar augmentation for Percussive Fingerstyle: Combining self-reflexive practice and user-centred design}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {32}, doi = {10.21428/92fbeb44.2f6db6e6}, url = {https://nime.pubpub.org/pub/zgj85mzv}, presentation-video = {https://youtu.be/qeX6dUrJURY} }
-
Thomas Nuttall, Behzad Haki, and Sergi Jorda. 2021. Transformer Neural Networks for Automated Rhythm Generation. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.fe9a0d82
Download PDF DOIRecent applications of Transformer neural networks in the field of music have demonstrated their ability to effectively capture and emulate long-term dependencies characteristic of human notions of musicality and creative merit. We propose a novel approach to automated symbolic rhythm generation, where a Transformer-XL model trained on the Magenta Groove MIDI Dataset is used for the tasks of sequence generation and continuation. Hundreds of generations are evaluated using blind-listening tests to determine the extent to which the aspects of rhythm we understand to be valuable are learnt and reproduced. Our model is able to achieve a standard of rhythmic production comparable to human playing across arbitrarily long time periods and multiple playing styles.
@inproceedings{NIME21_33, author = {Nuttall, Thomas and Haki, Behzad and Jorda, Sergi}, title = {Transformer Neural Networks for Automated Rhythm Generation}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {33}, doi = {10.21428/92fbeb44.fe9a0d82}, url = {https://nime.pubpub.org/pub/8947fhly}, presentation-video = {https://youtu.be/Ul9s8qSMUgU} }
-
Derek Holzer, Henrik Frisk, and Andre Holzapfel. 2021. Sounds of Futures Passed: Media Archaeology and Design Fiction as NIME Methodologies. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.2723647f
Download PDF DOIThis paper provides a study of a workshop which invited composers, musicians, and sound designers to explore instruments from the history of electronic sound in Sweden. The workshop participants applied media archaeology methods towards analyzing one particular instrument from the past, the Dataton System 3000. They then applied design fiction methods towards imagining several speculative instruments of the future. Each stage of the workshop revealed very specific utopian ideas surrounding the design of sound instruments. After introducing the background and methods of the workshop, the authors present an overview and thematic analysis of the workshop’s outcomes. The paper concludes with some reflections on the use of this method-in-progress for investigating the ethics and affordances of historical electronic sound instruments. It also suggests the significance of ethics and affordances for the design of contemporary instruments.
@inproceedings{NIME21_34, author = {Holzer, Derek and Frisk, Henrik and Holzapfel, Andre}, title = {Sounds of Futures Passed: Media Archaeology and Design Fiction as NIME Methodologies}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {34}, doi = {10.21428/92fbeb44.2723647f}, url = {https://nime.pubpub.org/pub/200fpd5a}, presentation-video = {https://youtu.be/qBapYX7IOHA} }
-
Juliette Regimbal and Marcelo M. Wanderley. 2021. Interpolating Audio and Haptic Control Spaces. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.1084cb07
Download PDF DOIAudio and haptic sensations have previously been linked in the development of NIMEs and in other domains like human-computer interaction. Most efforts to work with these modalities together tend to either treat haptics as secondary to audio, or conversely, audio as secondary to haptics, and design sensations in each modality separately. In this paper, we investigate the possibility of designing audio and vibrotactile effects simultaneously by interpolating audio-haptic control spaces. An inverse radial basis function method is used to dynamically create a mapping from a two-dimensional space to a many-dimensional control space for multimodal effects based on user-specified control points. Two proofs of concept were developed focusing on modifying the same structure across modalities and parallel structures.
@inproceedings{NIME21_35, author = {Regimbal, Juliette and Wanderley, Marcelo M.}, title = {Interpolating Audio and Haptic Control Spaces}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {35}, doi = {10.21428/92fbeb44.1084cb07}, url = {https://nime.pubpub.org/pub/zd2z1evu}, presentation-video = {https://youtu.be/eH3mn1Ad5BE} }
-
Shelly Knotts. 2021. Algorithmic Power Ballads. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.548cca2b
Download PDF DOIAlgorithmic Power Ballads is a performance for saxophone and autonomous improvisor, with an optional third performer who can use the web interface to hand-write note sequences and adjust synthesis parameters. The performance system explores shifting power dynamics between acoustic, algorithmic and autonomous performers by modifying the amount of control and agency they have over the sound over the duration of the performance. A higher-level algorithm determines how strongly the machine listening algorithms, which analyse the saxophone input, influence the rhythmic and melodic patterns generated by the system. The autonomous improvisor is trained on power ballad melodies prior to the performance and, in lieu of influence from the saxophonist and live coder, strays towards melodic phrases from this musical style. The piece is written in JavaScript with the Web Audio API and uses MMLL, a browser-based machine listening library.
@inproceedings{NIME21_36, author = {Knotts, Shelly}, title = {Algorithmic Power Ballads}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {36}, doi = {10.21428/92fbeb44.548cca2b}, url = {https://nime.pubpub.org/pub/w2ubqkv4} }
-
Myungin Lee. 2021. Entangled: A Multi-Modal, Multi-User Interactive Instrument in Virtual 3D Space Using the Smartphone for Gesture Control. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.eae7c23f
Download PDF DOIIn this paper, Entangled, a multi-modal instrument in virtual 3D space with sound, graphics, and a smartphone-based gestural interface for multiple users, is introduced. Within the same network, players can use their smartphones as controllers by entering a specific URL into their phone’s browser. After joining the network, by actuating the smartphone’s accelerometer, players apply gravitational force to a swarm of particles in the virtual space. Machine learning-based gesture pattern recognition is used in parallel to increase the functionality of the gestural commands. Through this interface, the player can achieve intuitive control of gravitation in virtual reality (VR) space. The gravitation becomes the medium of a system involving physics, graphics, and sonification, composing a multimodal compositional language with cross-modal correspondence. Entangled is built on AlloLib, a cross-platform suite of C++ components for building interactive multimedia tools and applications. Throughout the paper, the reasoning behind each design decision is elaborated, arguing for the importance of cross-modal correspondence in the design procedure.
@inproceedings{NIME21_37, author = {Lee, Myungin}, title = {Entangled: A Multi-Modal, Multi-User Interactive Instrument in Virtual 3D Space Using the Smartphone for Gesture Control}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {37}, doi = {10.21428/92fbeb44.eae7c23f}, url = {https://nime.pubpub.org/pub/4gt8wiy0}, presentation-video = {https://youtu.be/NjpXFYDvuZw} }
-
Notto J. W. Thelle and Philippe Pasquier. 2021. Spire Muse: A Virtual Musical Partner for Creative Brainstorming. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.84c0b364
Download PDF DOIWe present Spire Muse, a co-creative musical agent that engages in different kinds of interactive behaviors. The software utilizes corpora of solo instrumental performances encoded as self-organized maps and outputs slices of the corpora as concatenated, remodeled audio sequences. Transitions between behaviors can be automated, and the interface enables the negotiation of these transitions through feedback buttons that signal approval, force reversions to previous behaviors, or request change. Musical responses are embedded in a pre-trained latent space, emergent in the interaction, and influenced through the weighting of rhythmic, spectral, harmonic, and melodic features. The training and run-time modules utilize a modified version of the MASOM agent architecture. Our model stimulates spontaneous creativity and reduces the need for the user to sustain analytical mind frames, thereby optimizing flow. The agent traverses a system autonomy axis ranging from reactive to proactive, which includes the behaviors of shadowing, mirroring, and coupling. A fourth behavior—negotiation—is emergent from the interface between agent and user. The synergy of corpora, interactive modes, and influences induces musical responses along a musical similarity axis from converging to diverging. We share preliminary observations from experiments with the agent and discuss design challenges and future prospects.
@inproceedings{NIME21_38, author = {Thelle, Notto J. W. and Pasquier, Philippe}, title = {Spire Muse: A Virtual Musical Partner for Creative Brainstorming}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {38}, doi = {10.21428/92fbeb44.84c0b364}, url = {https://nime.pubpub.org/pub/wcj8sjee}, presentation-video = {https://youtu.be/4QMQNyoGfOs} }
-
Hans Leeuw. 2021. Virtuoso mapping for the Electrumpet, a hyperinstrument strategy. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.a8e0cceb
Download PDF DOIThis paper introduces a new Electrumpet control system that affords quick and easy access to all of its electro-acoustic features. The new implementation uses virtuosic gestures learned on the acoustic trumpet for quick electronic control, demonstrating its effectiveness by controlling an innovative interactive harmoniser. Seamless transition from the smooth but rigid, often uncommunicative sound of the harmoniser to a noisier, more open and chaotic sound world required the addition of extra features and scenarios. This prepares the instrument for multiple musical environments, including free improvised settings with large sonic diversity. The system should particularly interest virtuoso improvising electroacoustic musicians and hyperinstrument players/developers who combine many musical styles in their art and look for inspiration to use existing virtuosity for electronic control.
@inproceedings{NIME21_39, author = {Leeuw, Hans}, title = {Virtuoso mapping for the Electrumpet, a hyperinstrument strategy}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {39}, doi = {10.21428/92fbeb44.a8e0cceb}, url = {https://nime.pubpub.org/pub/fxe52ym6}, presentation-video = {https://youtu.be/oHM_WfHOGUo} }
-
Filipe Calegario, João Tragtenberg, Christian Frisson, et al. 2021. Documentation and Replicability in the NIME Community. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.dc50e34d
Download PDF DOIIn this paper, we discuss the importance of replicability in Digital Musical Instrument (DMI) design and the NIME community. Replication enables us to create new artifacts based on existing ones, experiment with DMIs in different contexts and cultures, and validate results obtained from evaluations. We investigate how papers present artifact documentation and source code by analyzing the NIME proceedings from 2018, 2019, and 2020. We argue that the presence and the quality of documentation are good indicators of replicability and can be beneficial for the NIME community. Finally, we discuss the importance of documentation for replication, propose a call to action towards more replicable projects, and present a practical guide informing future steps toward replicability in the NIME community.
@inproceedings{NIME21_4, author = {Calegario, Filipe and Tragtenberg, João and Frisson, Christian and Meneses, Eduardo and Malloch, Joseph and Cusson, Vincent and Wanderley, Marcelo M.}, title = {Documentation and Replicability in the NIME Community}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {4}, doi = {10.21428/92fbeb44.dc50e34d}, url = {https://nime.pubpub.org/pub/czq0nt9i}, presentation-video = {https://youtu.be/ySh5SueLMAA} }
-
Anna Xambó, Gerard Roma, Sam Roig, and Eduard Solaz. 2021. Live Coding with the Cloud and a Virtual Agent. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.64c9f217
Download PDF DOIThe use of crowdsourced sounds in live coding can be seen as an example of asynchronous collaboration. It is not uncommon for crowdsourced databases to return unexpected results to the queries submitted by a user. In such a situation, a live coder is likely to require some degree of additional filtering to adapt the results to her/his musical intentions. We refer to these context-dependent decisions as situated musical actions. Here, we present directions for designing a customisable virtual companion to help live coders in their practice. In particular, we introduce a machine learning (ML) model that, based on a set of examples provided by the live coder, filters the crowdsourced sounds retrieved from the Freesound online database at performance time. We evaluated a first illustrative model using objective and subjective measures. We tested a more generic live coding framework in two performances and two workshops, where several ML models were trained and used. We discuss the promising results for ML in education, live coding practices and the design of future NIMEs.
@inproceedings{NIME21_40, author = {Xambó, Anna and Roma, Gerard and Roig, Sam and Solaz, Eduard}, title = {Live Coding with the Cloud and a Virtual Agent}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {40}, doi = {10.21428/92fbeb44.64c9f217}, url = {https://nime.pubpub.org/pub/zpdgg2fg}, presentation-video = {https://youtu.be/F4UoH1hRMoU} }
-
Yixiao Zhang, Gus Xia, Mark Levy, and Simon Dixon. 2021. COSMIC: A Conversational Interface for Human-AI Music Co-Creation. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.110a7a32
Download PDF DOIIn this paper, we propose COSMIC, a COnverSational Interface for Human-AI MusIc Co-Creation. It is a chatbot with a two-fold design philosophy: to understand human creative intent and to help humans in their creation. The core Natural Language Processing (NLP) module is responsible for three functions: 1) understanding human needs in chat, 2) cross-modal interaction between natural language understanding and music generation models, and 3) mixing and coordinating multiple algorithms to complete the composition.
@inproceedings{NIME21_41, author = {Zhang, Yixiao and Xia, Gus and Levy, Mark and Dixon, Simon}, title = {COSMIC: A Conversational Interface for Human-AI Music Co-Creation}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {41}, doi = {10.21428/92fbeb44.110a7a32}, url = {https://nime.pubpub.org/pub/in6wsc9t}, presentation-video = {https://youtu.be/o5YO0ni7sng} }
-
Gershon Dublon and Xin Liu. 2021. Living Sounds: Live Nature Sound as Online Performance Space. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.b90e0fcb
Download PDF DOIThis paper presents Living Sounds, an internet radio station and online venue hosted by nature. The virtual space is animated by live sound from a restored wetland wildlife sanctuary, spatially mixed from dozens of 24/7 streaming microphones across the landscape. The station’s guests are invited artists and others whose performances are responsive to and contingent upon the ever-changing environmental sound. Subtle, sound-active drawings by different visual designers anchor the one-page website. Using low latency, high fidelity WebRTC, our system allows guests to mix themselves in, remix the raw nature streams, or run our multichannel sources fully through their own processors. Created in early 2020 in response to the locked down conditions of the COVID-19 pandemic, the site became a virtual oasis, with usage data showing long duration visits. In collaboration with several festivals that went online in 2020, programmed live content included music, storytelling, and guided meditation. One festival commissioned a local microphone installation, resulting in a second nature source for the station: 5-channels of sound from a small Maine island. Catalyzed by recent events, when many have been separated from environments of inspiration and restoration, we propose Living Sounds as both a virtual nature space for cohabitation and a new kind of contingent online venue.
@inproceedings{NIME21_42, author = {Dublon, Gershon and Liu, Xin}, title = {Living Sounds: Live Nature Sound as Online Performance Space}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {42}, doi = {10.21428/92fbeb44.b90e0fcb}, url = {https://nime.pubpub.org/pub/46by9xxn}, presentation-video = {https://youtu.be/tE4YMDf-bQE} }
-
Nathan Villicaña-Shaw, Dale A. Carnegie, Jim Murphy, and Mo Zareei. 2021. Speculātor: visual soundscape augmentation of natural environments. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.e521c5a4
Download PDF DOISpeculātor is presented as a fist-sized, battery-powered, environmentally aware soundscape augmentation artifact that listens to the sonic environment and provides real-time illuminated visual feedback in reaction to what it hears. The visual soundscape augmentations these units offer allow for the creation of sonic art installations whose artistic subject is the unaltered in-situ sonic environment. Speculātor is designed to be quickly installed in exposed outdoor environments without power infrastructure, allowing maximum flexibility when selecting exhibition locations. Data from light, temperature, and humidity sensors guide behavior to maximize soundscape augmentation effectiveness and protect the artifacts from operating under dangerous environmental conditions. To highlight the music-like qualities of cicada vocalizations, installations conducted between October 2019 and March 2020, in which multiple Speculātor units were installed in outdoor natural locations, are presented as an initial case study.
@inproceedings{NIME21_43, author = {Villicaña-Shaw, Nathan and Carnegie, Dale A. and Murphy, Jim and Zareei, Mo}, title = {Speculātor: visual soundscape augmentation of natural environments}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {43}, doi = {10.21428/92fbeb44.e521c5a4}, url = {https://nime.pubpub.org/pub/pxr0grnk}, presentation-video = {https://youtu.be/kP3fDzAHXDw} }
-
William Thompson and Edgar Berdahl. 2021. An Infinitely Sustaining Piano Achieved Through a Soundboard-Mounted Shaker . Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.2c4879f5
Download PDF DOIThis paper outlines a demonstration of an acoustic piano augmentation that allows for the infinite sustain of one or many notes. The result is a natural-sounding piano sustain that lasts for an unnaturally long period of time. Using a tactile shaker, a contact microphone and an amplitude-activated FFT-freeze Max patch, this system is easily assembled and creates an infinitely sustaining piano.
@inproceedings{NIME21_44, author = {Thompson, William and Berdahl, Edgar}, title = {An Infinitely Sustaining Piano Achieved Through a Soundboard-Mounted Shaker }, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {44}, doi = {10.21428/92fbeb44.2c4879f5}, url = {https://nime.pubpub.org/pub/cde9r70r}, presentation-video = {https://youtu.be/YRby0VdL8Nk} }
-
Michael Quigley and William Payne. 2021. Toneblocks: Block-based musical programming. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.46c0f6ef
Download PDF DOIBlock-based coding environments enable novices to write code that bypasses the syntactic complexities of text. However, we see a lack of effective block-based tools that balance programming with expressive music making. We introduce Toneblocks, a prototype web application intended to be intuitive and engaging for novice users with interests in computer programming and music. Toneblocks is designed to lower the barrier to entry while raising the ceiling of expression for advanced users. In Toneblocks, users produce musical loops ranging from static sequences to generative systems, and can manipulate their properties live. Pilot usability tests conducted with two participants provide evidence that the current prototype is easy to use and can produce complex musical output. An evaluation offers potential future improvements, including user-defined variables and functions, and rhythmic variability.
@inproceedings{NIME21_45, author = {Quigley, Michael and Payne, William}, title = {Toneblocks: Block-based musical programming}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {45}, doi = {10.21428/92fbeb44.46c0f6ef}, url = {https://nime.pubpub.org/pub/qn6lqnzx}, presentation-video = {https://youtu.be/c64l1hK3QiY} }
-
Yi Wu and Jason Freeman. 2021. Ripples: An Auditory Augmented Reality iOS Application for the Atlanta Botanical Garden. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.b8e82252
Download PDF DOIThis paper introduces “Ripples”, an iOS application for the Atlanta Botanical Garden that uses auditory augmented reality to provide an intuitive music guide by seamlessly integrating information about the garden into the visiting experience. For each point of interest nearby, “Ripples” generates music in real time, representing a location through data collected from users’ smartphones. The music is then overlaid onto the physical environment and binaural spatialization indicates real-world coordinates of their represented places. By taking advantage of the human auditory sense’s innate spatial sound source localization and source separation capabilities, “Ripples” makes navigation intuitive and information easy to understand.
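For readers unfamiliar with how such spatialization is driven, the following is a minimal sketch, not the Ripples implementation, of computing the bearing of a point of interest relative to a listener's GPS position and compass heading; this is the angle a binaural renderer would need. The coordinates and the equirectangular approximation are illustrative assumptions.

```python
# Illustrative only: relative bearing from listener position/heading to a point of interest.
import math

def relative_bearing(user_lat, user_lon, user_heading_deg, poi_lat, poi_lon):
    """Angle of the POI relative to the direction the user faces, in degrees
    (-180..180, negative = to the left)."""
    # An equirectangular approximation is adequate for short garden-scale distances.
    d_lat = math.radians(poi_lat - user_lat)
    d_lon = math.radians(poi_lon - user_lon) * math.cos(math.radians(user_lat))
    bearing = math.degrees(math.atan2(d_lon, d_lat))        # 0 deg = north
    return (bearing - user_heading_deg + 180) % 360 - 180   # wrap to [-180, 180)

# Example: a POI slightly to the east of a north-facing listener appears to the right (+90).
print(relative_bearing(33.790, -84.373, 0.0, 33.790, -84.372))
```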
@inproceedings{NIME21_46, author = {Wu, Yi and Freeman, Jason}, title = {Ripples: An Auditory Augmented Reality iOS Application for the Atlanta Botanical Garden}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {46}, doi = {10.21428/92fbeb44.b8e82252}, url = {https://nime.pubpub.org/pub/n1o19efr}, presentation-video = {https://youtu.be/T7EJVACX3QI} }
-
Thomas LUCAS, Christophe d’Alessandro, and Serge de Laubier. 2021. Mono-Replay : a software tool for digitized sound animation. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.7b843efe
Download PDF DOIThis article describes Mono-Replay, a software environment designed for sound animation. "Sound animation" in this context means musical performance based on various modes of replay and transformation of all kinds of recorded music samples. Sound animation using Mono-Replay is a two-step process, including an off-line analysis phase and an on-line performance or synthesis phase. The analysis phase proceeds with time segmentation and the setting up of anchor points corresponding to temporal musical discourse parameters (notes, pulses, events). This allows, at the performance phase, for control of timing, playback position, playback speed, and a variety of spectral effects, with the help of gesture interfaces. Animation principles and software features of Mono-Replay are described. Two examples of sound animation based on beat tracking and transient detection algorithms are presented (a multi-track recording of Superstition by Stevie Wonder and Jeff Beck, and Accidents/Harmoniques, an electroacoustic piece by Bernard Parmegiani). With the help of these two contrasting examples, the fundamental principles of “sound animation” are reviewed: parameters of musical discourse, audio file segmentation, gestural control and interaction for animation at the performance stage.
@inproceedings{NIME21_47, author = {LUCAS, Thomas and d'Alessandro, Christophe and Laubier, Serge de}, title = {Mono-Replay : a software tool for digitized sound animation}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {47}, doi = {10.21428/92fbeb44.7b843efe}, url = {https://nime.pubpub.org/pub/8lqitvvq}, presentation-video = {https://youtu.be/Ck79wRgqXfU} }
-
Ward J. Slager. 2021. Designing and performing with Pandora’s Box: transforming feedback physically and with algorithms. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.61b13baf
Download PDF DOIThis paper discusses Pandora’s Box, a novel idiosyncratic electroacoustic instrument and performance utilizing feedback as its sound generation principle. The instrument’s signal path consists of a closed loop through custom DSP algorithms and a spring. Pandora’s Box is played by tactile interaction with the spring and a control panel with faders and switches. The design and implementation are described, and the performance rituals are explained with reference to a video recording of a concert.
@inproceedings{NIME21_48, author = {Slager, Ward J.}, title = {Designing and performing with Pandora’s Box: transforming feedback physically and with algorithms}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {48}, doi = {10.21428/92fbeb44.61b13baf}, url = {https://nime.pubpub.org/pub/kx6d0553}, presentation-video = {https://youtu.be/s89Ycd0QkDI} }
-
Chris Chronopoulos. 2021. Quadrant: A Multichannel, Time-of-Flight Based Hand Tracking Interface for Computer Music. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.761367fd
Download PDF DOIQuadrant is a new human-computer interface based on an array of distance sensors. The hardware consists of 4 time-of-flight detectors and is designed to detect the position, velocity, and orientation of the user’s hand in free space. Signal processing is used to recognize gestures and other events, which we map to a variety of musical parameters to demonstrate possible applications. We have developed Quadrant as an open-hardware circuit board, which acts as a USB controller to a host computer.
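A hedged sketch of the kind of estimate such a sensor array affords (not the Quadrant firmware): given four distance readings from time-of-flight sensors at the corners of a square, the hand's height, vertical velocity and tilt can be approximated as below. The sensor spacing and layout names are assumptions made for the illustration.

```python
# Illustrative estimation of hand state from four time-of-flight distance readings.
import math

def hand_state(d_prev, d_curr, dt, spacing=0.06):
    """d_prev/d_curr: (d_nw, d_ne, d_sw, d_se) distances in metres; dt in seconds."""
    height = sum(d_curr) / 4.0                      # mean distance approximates hand height
    velocity = (height - sum(d_prev) / 4.0) / dt    # finite-difference vertical velocity
    d_nw, d_ne, d_sw, d_se = d_curr
    # Tilt: compare the left/right and front/back sensor pairs across the known spacing.
    roll = math.atan2(((d_ne + d_se) - (d_nw + d_sw)) / 2.0, spacing)
    pitch = math.atan2(((d_nw + d_ne) - (d_sw + d_se)) / 2.0, spacing)
    return height, velocity, math.degrees(roll), math.degrees(pitch)

# A flat hand at 20 cm that tilts slightly to one side between two 10 ms frames.
print(hand_state((0.20, 0.20, 0.20, 0.20), (0.19, 0.21, 0.19, 0.21), dt=0.01))
```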
@inproceedings{NIME21_49, author = {Chronopoulos, Chris}, title = {Quadrant: A Multichannel, Time-of-Flight Based Hand Tracking Interface for Computer Music}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {49}, doi = {10.21428/92fbeb44.761367fd}, url = {https://nime.pubpub.org/pub/quadrant}, presentation-video = {https://youtu.be/p8flHKv17Y8} }
-
Marinos Koutsomichalis. 2021. A Yellow Box with a Key Switch and a 1/4" TRS Balanced Audio Output. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.765a94a7
Download PDF DOIThis short article presents a reductionist infra-instrument. It concerns a yellow die-cast aluminium box featuring only a key switch and a 1/4” TRS balanced audio output as its UI. On the turn of the key, the device performs a certain poem in Morse code and via very low frequency acoustic pulses; in this way, it transforms poetry into bursts of intense acoustic energy that may resonate a hosting architecture and any human bodies therein. It is argued that the instrument functions at the very same time as a critical/speculative electronic object, as an ad-hoc performance instrument, and as a piece of (conceptual) art for its own sake.
@inproceedings{NIME21_5, author = {Koutsomichalis, Marinos}, title = {A Yellow Box with a Key Switch and a 1/4" TRS Balanced Audio Output}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {5}, doi = {10.21428/92fbeb44.765a94a7}, url = {https://nime.pubpub.org/pub/n69uznd4}, presentation-video = {https://youtu.be/_IUT0tbtkBI} }
-
Lisa Andersson López, Thelma Svenns, and Andre Holzapfel. 2021. Sensitiv – Designing a Sonic Co-play Tool for Interactive Dance. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.18c3fc2b
Download PDF DOIIn the present study a musician and a dancer explore the co-play between them through sensory technology. The main questions concern the placement and processing of motion sensors, and the choice of sound parameters that a dancer can manipulate. Results indicate that sound parameters of delay and pitch altered dancers’ experience most positively and that placement of sensors on each wrist and ankle with a diagonal mapping of the sound parameters was the most suitable.
@inproceedings{NIME21_50, author = {Andersson López, Lisa and Svenns, Thelma and Holzapfel, Andre}, title = {Sensitiv – Designing a Sonic Co-play Tool for Interactive Dance}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {50}, doi = {10.21428/92fbeb44.18c3fc2b}, url = {https://nime.pubpub.org/pub/y1y5jolp}, presentation-video = {https://youtu.be/Mo8mVJJrqx8} }
-
Geise Santos, Johnty Wang, Carolina Brum, Marcelo M. Wanderley, Tiago Tavares, and Anderson Rocha. 2021. Comparative Latency Analysis of Optical and Inertial Motion Capture Systems for Gestural Analysis and Musical Performance. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.51b1c3a1
Download PDF DOIWireless sensor-based technologies are becoming increasingly accessible and widely explored in interactive musical performance due to their ubiquity and low cost, which brings the necessity of understanding the capabilities and limitations of these sensors. This is usually approached by using a reference system, such as an optical motion capture system, to assess the signals’ properties. However, this process raises the issue of synchronizing the signal and the reference data streams, as each sensor is subject to different latency, time drift, reference clocks and initialization timings. This paper presents an empirical quantification of the latency of the communication stages in a setup consisting of a Qualisys optical motion capture (mocap) system and a wireless microcontroller-based sensor device. We performed event-to-end tests on the critical components of the hybrid setup to determine the synchronization suitability. Overall, further synchronization is viable because the individual average latencies are similar, at around 25 ms for both the mocap system and the wireless sensor interface.
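One common way to estimate the relative latency between two recordings of the same physical events, offered here as an illustrative sketch rather than the authors' pipeline, is to cross-correlate the two resampled streams and read off the lag of the correlation peak:

```python
# Illustrative lag estimation between two recordings of the same event stream.
import numpy as np

def estimate_lag_ms(ref, test, sample_rate_hz):
    """Positive result means `test` lags behind `ref`."""
    ref = (ref - np.mean(ref)) / (np.std(ref) + 1e-12)
    test = (test - np.mean(test)) / (np.std(test) + 1e-12)
    corr = np.correlate(test, ref, mode="full")
    lag_samples = np.argmax(corr) - (len(ref) - 1)
    return 1000.0 * lag_samples / sample_rate_hz

# Synthetic check: a signal delayed by 25 samples at 1 kHz reports about 25 ms.
rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = np.concatenate([np.zeros(25), x[:-25]])
print(estimate_lag_ms(x, y, 1000))
```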
@inproceedings{NIME21_51, author = {Santos, Geise and Wang, Johnty and Brum, Carolina and Wanderley, Marcelo M. and Tavares, Tiago and Rocha, Anderson}, title = {Comparative Latency Analysis of Optical and Inertial Motion Capture Systems for Gestural Analysis and Musical Performance}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {51}, doi = {10.21428/92fbeb44.51b1c3a1}, url = {https://nime.pubpub.org/pub/wmcqkvw1}, presentation-video = {https://youtu.be/a1TVvr9F7hE} }
-
Henrique Portovedo, Paulo Ferreira Lopes, Ricardo Mendes, and Tiago Gala. 2021. HASGS: Five Years of Reduced Augmented Evolution. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.643abd8c
Download PDF DOIThe work presented here is based on the Hybrid Augmented Saxophone of Gestural Symbioses (HASGS) system, with a focus on its evolution over the last five years and an emphasis on its functional structure and repertoire. The HASGS system was intended to retain focus on the performance of the acoustic instrument, keeping gestures centralised within the habitual practice of the instrument, and reducing the use of external devices to control electronic parameters in mixed music. Taking a reduced approach, the technology chosen to prototype HASGS was developed in order to serve the aesthetic intentions of the pieces being written for it. This strategy proved to avoid an overload of solutions that could introduce artefacts and superficial use of the augmentation processes, which sometimes occur in augmented instruments, especially those prototyped for improvisational intentionality. Here, we discuss how the repertoire, hardware, and software of the system can be mutually affected by this approach. We understand this project as an empirically-based study which can both serve as a model for analysis and provide composers and performers with pathways and creative strategies for the development of augmentation processes.
@inproceedings{NIME21_52, author = {Portovedo, Henrique and Lopes, Paulo Ferreira and Mendes, Ricardo and Gala, Tiago}, title = {HASGS: Five Years of Reduced Augmented Evolution}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {52}, doi = {10.21428/92fbeb44.643abd8c}, url = {https://nime.pubpub.org/pub/1293exfw}, presentation-video = {https://youtu.be/wRygkMgx2Oc} }
-
Valérian Fraisse, Catherine Guastavino, and Marcelo M. Wanderley. 2021. A Visualization Tool to Explore Interactive Sound Installations. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.4fd9089c
Download PDF DOIThis paper presents a theoretical framework for describing interactive sound installations, along with an interactive database, on a web application, for visualizing various features of sound installations. A corpus of 195 interactive sound installations was reviewed to derive a taxonomy describing them across three perspectives: Artistic Intention, Interaction and System Design. A web application is provided to dynamically visualize and explore the corpus of sound installations using interactive charts (https://isi-database.herokuapp.com/). Our contribution is two-sided: we provide a theoretical framework to characterize interactive sound installations as well as a tool to inform sound artists and designers about up-to-date practices regarding interactive sound installations design.
@inproceedings{NIME21_53, author = {Fraisse, Valérian and Guastavino, Catherine and Wanderley, Marcelo M.}, title = {A Visualization Tool to Explore Interactive Sound Installations}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {53}, doi = {10.21428/92fbeb44.4fd9089c}, url = {https://nime.pubpub.org/pub/i1rx1t2e}, presentation-video = {https://youtu.be/MtIVB7P3bs4} }
-
Alice Eldridge, Chris Kiefer, Dan Overholt, and Halldor Ulfarsson. 2021. Self-resonating Vibrotactile Feedback Instruments ||: Making, Playing, Conceptualising :||. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.1f29a09e
Download PDF DOISelf-resonating vibrotactile instruments (SRIs) are hybrid feedback instruments, characterised by an electro-mechanical feedback loop that is both the means of sound production and the expressive interface. Through the lens of contemporary SRIs, we reflect on how they are characterised, designed, and played. By considering reports from designers and players of this species of instrument-performance system, we explore the experience of playing them. With a view to supporting future research and practice in the field, we illustrate the value of conceptualising SRIs in Cybernetic and systems theoretic terms and suggest that this offers an intuitive, yet powerful basis for future performance, analysis and making; in doing so we close the loop in the making, playing and conceptualisation of SRIs with the aim of nourishing the evolution of theory, creative and technical practice in this field.
@inproceedings{NIME21_54, author = {Eldridge, Alice and Kiefer, Chris and Overholt, Dan and Ulfarsson, Halldor}, title = {Self-resonating Vibrotactile Feedback Instruments {\textbar}{\textbar}: Making, Playing, Conceptualising :{\textbar}{\textbar}}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {54}, doi = {10.21428/92fbeb44.1f29a09e}, url = {https://nime.pubpub.org/pub/6mhrjiqt}, presentation-video = {https://youtu.be/EP1G4vCVm_E} }
-
Vincent Reynaert, Florent Berthaut, Yosra Rekik, and laurent grisoni. 2021. The Effect of Control-Display Ratio on User Experience in Immersive Virtual Musical Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.c47be986
Download PDF DOIVirtual reality (VR) offers novel possibilities of design choices for Digital Musical Instruments in terms of shapes, sizes, sounds or colours, removing many constraints inherent to physical interfaces. In particular, the size and position of the interface components of Immersive Virtual Musical Instruments (IVMIs) can be freely chosen to elicit large or small hand gestures. In addition, VR allows for the manipulation of what users visually perceive of their actual physical actions, through redirections and changes in Control-Display Ratio (CDR). Visual and gestural amplitudes can therefore be defined separately, potentially affecting the user experience in new ways. In this paper, we investigate the use of CDR to enrich the design with a control over the user perceived fatigue, sense of presence and musical expression. Our findings suggest that the CDR has an impact on the sense of presence, on the perceived difficulty of controlling the sound and on the distance covered by the hand. From these results, we derive a set of insights and guidelines for the design of IVMIs.
@inproceedings{NIME21_55, author = {Reynaert, Vincent and Berthaut, Florent and Rekik, Yosra and grisoni, laurent}, title = {The Effect of Control-Display Ratio on User Experience in Immersive Virtual Musical Instruments}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {55}, doi = {10.21428/92fbeb44.c47be986}, url = {https://nime.pubpub.org/pub/8n8br4cc}, presentation-video = {https://youtu.be/d1DthYt8EUw} }
-
Alex Lucas, Jacob Harrison, Franziska Schroeder, and Miguel Ortiz. 2021. Cross-Pollinating Ecological Perspectives in ADMI Design and Evaluation. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.ff09de34
Download PDF DOIThis paper explores ecological perspectives of human activity in the use of digital musical instruments and assistive technology. While such perspectives are relatively nascent in DMI design and evaluation, ecological frameworks have a long-standing foundation in occupational therapy and the design of assistive technology products and services. Informed by two case studies, the authors critique, compare and marry concepts from each domain to guide future research into accessible music technology. The authors discover that ecological frameworks used by occupational therapists are helpful in describing the nature of individual impairment, disability and situated context. However, such frameworks seemingly flounder when attempting to describe the personal value of music-making.
@inproceedings{NIME21_56, author = {Lucas, Alex and Harrison, Jacob and Schroeder, Franziska and Ortiz, Miguel}, title = {Cross-Pollinating Ecological Perspectives in ADMI Design and Evaluation}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {56}, doi = {10.21428/92fbeb44.ff09de34}, url = {https://nime.pubpub.org/pub/d72sylsq}, presentation-video = {https://youtu.be/Khk05vKMrao} }
-
Matthew Skarha, Vincent Cusson, Christian Frisson, and Marcelo M. Wanderley. 2021. Le Bâton: A Digital Musical Instrument Based on the Chaotic Triple Pendulum. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.09ecc54d
Download PDF DOIThis paper describes Le Bâton, a new digital musical instrument based on the nonlinear dynamics of the triple pendulum. The triple pendulum is a simple physical system constructed by attaching three pendulums vertically such that each joint can swing freely. When subjected to large oscillations, its motion is chaotic and is often described as unexpectedly mesmerizing. Le Bâton uses wireless inertial measurement units (IMUs) embedded in each pendulum arm to send real-time motion data to Max/MSP. Additionally, we implemented a control mechanism, allowing a user to remotely interact with it by setting the initial release angle. Here, we explain the motivation and design of Le Bâton and describe mapping strategies. To conclude, we discuss how its nature of user interaction complicates its status as a digital musical instrument.
@inproceedings{NIME21_57, author = {Skarha, Matthew and Cusson, Vincent and Frisson, Christian and Wanderley, Marcelo M.}, title = {Le Bâton: A Digital Musical Instrument Based on the Chaotic Triple Pendulum}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {57}, doi = {10.21428/92fbeb44.09ecc54d}, url = {https://nime.pubpub.org/pub/uh1zfz1f}, presentation-video = {https://youtu.be/bLx5b9aqwgI} }
-
Claire Pelofi, Michal Goldstein, Dana Bevilacqua, Michael McPhee, Ellie Abrams, and Pablo Ripollés. 2021. CHILLER: a Computer Human Interface for the Live Labeling of Emotional Responses. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.5da1ca0b
Download PDF DOIThe CHILLER (a Computer-Human Interface for the Live Labeling of Emotional Responses) is a prototype of an affordable and easy-to-use wearable sensor for the real-time detection and visualization of one of the most accurate biomarkers of musical emotional processing: the piloerection of the skin (i.e., the goosebumps) that accompany musical chills (also known as musical frissons or shivers down the spine). In controlled laboratory experiments, electrodermal activity (EDA) has been traditionally used to measure fluctuations of musical emotion. EDA is, however, ill-suited for real-world settings (e.g., live concerts) because of its sensitivity to movement, electronic noise and variations in the contact between the skin and the recording electrodes. The CHILLER, based on the Raspberry Pi architecture, overcomes these limitations by using a well-known algorithm capable of detecting goosebumps from a video recording of a patch of skin. The CHILLER has potential applications in both academia and industry and could be used as a tool to broaden participation in STEM, as it brings together concepts from experimental psychology, neuroscience, physiology and computer science in an inexpensive, do-it-yourself device well-suited for educational purposes.
@inproceedings{NIME21_58, author = {Pelofi, Claire and Goldstein, Michal and Bevilacqua, Dana and McPhee, Michael and Abrams, Ellie and Ripollés, Pablo}, title = {CHILLER: a Computer Human Interface for the Live Labeling of Emotional Responses}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {58}, doi = {10.21428/92fbeb44.5da1ca0b}, url = {https://nime.pubpub.org/pub/kdahf9fq}, presentation-video = {https://youtu.be/JujnpqoSdR4} }
-
Jeffrey A. T. Lupker. 2021. Score-Transformer: A Deep Learning Aid for Music Composition. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.21d4fd1f
Download PDF DOICreating an artificially intelligent (AI) aid for music composers requires a practical and modular approach, one that allows the composer to manipulate the technology when needed in the search for new sounds. Many existing approaches fail to capture the interest of composers as they are limited beyond their demonstrative purposes, allow for only minimal interaction from the composer, or require GPU access to generate samples quickly. This paper introduces Score-Transformer (ST), a practical integration of deep learning technology to aid in the creation of new music which works seamlessly alongside any popular notation software (Finale, Sibelius, etc.). Score-Transformer is built upon a variant of the powerful transformer model, currently used in state-of-the-art natural language models. Owing to hierarchical and sequential similarities between music and language, the transformer model can learn to write polyphonic MIDI music based on any styles, genres, or composers it is trained upon. This paper briefly outlines how the model learns and later notates music based upon any prompt given to it by the user. Furthermore, ST can be updated at any time on additional MIDI recordings, minimizing the risk of the software becoming outdated or impractical for continued use.
@inproceedings{NIME21_59, author = {Lupker, Jeffrey A. T.}, title = {Score-Transformer: A Deep Learning Aid for Music Composition}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {59}, doi = {10.21428/92fbeb44.21d4fd1f}, url = {https://nime.pubpub.org/pub/7a6ij1ak}, presentation-video = {https://youtu.be/CZO8nj6YzVI} }
-
Jon Gillick and David Bamman. 2021. What to Play and How to Play it: Guiding Generative Music Models with Multiple Demonstrations. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.06e2d5f4
Download PDF DOIWe propose and evaluate an approach to incorporating multiple user-provided inputs, each demonstrating a complementary set of musical characteristics, to guide the output of a generative model for synthesizing short music performances or loops. We focus on user inputs that describe both “what to play” (via scores in MIDI format) and “how to play it” (via rhythmic inputs to specify expressive timing and dynamics). Through experiments, we demonstrate that our method can facilitate human-AI co-creation of drum loops with diverse and customizable outputs. In the process, we argue for the interaction paradigm of mapping by demonstration as a promising approach to working with deep learning models that are capable of generating complex and realistic musical parts.
@inproceedings{NIME21_6, author = {Gillick, Jon and Bamman, David}, title = {What to Play and How to Play it: Guiding Generative Music Models with Multiple Demonstrations}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {6}, doi = {10.21428/92fbeb44.06e2d5f4}, url = {https://nime.pubpub.org/pub/s3x60926}, presentation-video = {https://youtu.be/Q2M_smiN6oo} }
-
Romain Michon, Catinca Dumitrascu, Sandrine Chudet, Yann Orlarey, Stéphane Letz, and Dominique Fober. 2021. Amstramgrame: Making Scientific Concepts More Tangible Through Music Technology at School. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.a84edd3f
Download PDF DOIAmstramgrame is a music technology STEAM (Science Technology Engineering Arts and Mathematics) project aiming at making abstract scientific concepts more tangible through the programming of a Digital Musical Instrument (DMI): the Gramophone. Various custom tools ranging from online programming environments to the Gramophone itself have been developed as part of this project. An innovative method anchored in the reality of the field as well as a wide range of turnkey pedagogical scenarios are also part of the Amstramgrame toolkit. This article presents the tools and the method of Amstramgrame as well as the results of its pilot phase. Future directions along with some insights on the implementation of this kind of project are provided as well.
@inproceedings{NIME21_60, author = {Michon, Romain and Dumitrascu, Catinca and Chudet, Sandrine and Orlarey, Yann and Letz, Stéphane and Fober, Dominique}, title = {Amstramgrame: Making Scientific Concepts More Tangible Through Music Technology at School}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {60}, doi = {10.21428/92fbeb44.a84edd3f}, url = {https://nime.pubpub.org/pub/3zeala6v}, presentation-video = {https://youtu.be/KTgl4suQ_Ks} }
-
Vivian Reuter and Lorenz Schwarz. 2021. Wireless Sound Modules. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.07c72a46
Download PDF DOIWe study the question of how wireless, self-contained CMOS-synthesizers with built-in speakers can be used to achieve low-threshold operability of multichannel sound fields. We deliberately use low-tech and DIY approaches to build simple sound modules for music interaction and education in order to ensure accessibility of the technology. The modules are operated by wireless power transfer (WPT). A multichannel sound field can be easily generated and modulated by placing several sound objects in proximity to the induction coils. Alterations in sound are caused by repositioning, moving or grouping the sound modules. Although not physically linked to each other, the objects start interacting electro-acoustically when they share the same magnetic field. Because they are equipped with electronic sound generators and transducers, the sound modules can work independently from a sound studio situation.
@inproceedings{NIME21_61, author = {Reuter, Vivian and Schwarz, Lorenz}, title = {Wireless Sound Modules}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {61}, doi = {10.21428/92fbeb44.07c72a46}, url = {https://nime.pubpub.org/pub/muvvx0y5}, presentation-video = {https://youtu.be/08kfv74Z880} }
-
Joshua Ryan Lam and Charalampos Saitis. 2021. The Timbre Explorer: A Synthesizer Interface for Educational Purposes and Perceptual Studies. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.92a95683
Download PDF DOIWhen two sounds are played at the same loudness, pitch, and duration, what sets them apart are their timbres. This study documents the design and implementation of the Timbre Explorer, a synthesizer interface based on efforts to dimensionalize this perceptual concept. The resulting prototype controls four perceptually salient dimensions of timbre in real-time: attack time, brightness, spectral flux, and spectral density. A graphical user interface supports user understanding with live visualizations of the effects of each dimension. The applications of this interface are three-fold; further perceptual timbre studies, usage as a practical shortcut for synthesizers, and educating users about the frequency domain, sound synthesis, and the concept of timbre. The project has since been expanded to a standalone version independent of a computer and a purely online web-audio version.
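As a toy illustration of controlling timbre dimensions directly (this is not the Timbre Explorer's synthesis engine), the sketch below renders a tone whose attack time and brightness are set as parameters; the harmonic roll-off used to stand in for brightness is an assumption made for the example.

```python
# Toy additive tone with explicit attack-time and brightness controls.
import numpy as np

def tone(f0=220.0, attack_s=0.05, brightness=0.5, dur_s=1.0, sr=44100):
    t = np.arange(int(dur_s * sr)) / sr
    # Brightness controls how slowly the harmonic amplitudes roll off.
    rolloff = 1.0 + 7.0 * (1.0 - brightness)
    wave = sum(np.sin(2 * np.pi * f0 * k * t) / k ** rolloff for k in range(1, 12))
    env = np.minimum(t / max(attack_s, 1e-4), 1.0)   # linear attack envelope
    out = wave * env
    return out / np.max(np.abs(out))

dull_slow = tone(attack_s=0.3, brightness=0.1)    # soft onset, dark spectrum
bright_fast = tone(attack_s=0.005, brightness=0.9)  # percussive onset, bright spectrum
```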
@inproceedings{NIME21_62, author = {Lam, Joshua Ryan and Saitis, Charalampos}, title = {The Timbre Explorer: A Synthesizer Interface for Educational Purposes and Perceptual Studies}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {62}, doi = {10.21428/92fbeb44.92a95683}, url = {https://nime.pubpub.org/pub/q5oc20wg}, presentation-video = {https://youtu.be/EJ0ZAhOdBTw} }
-
Maria Svahn, Josefine Hölling, Fanny Curtsson, and Nina Nokelainen. 2021. The Rullen Band. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.e795c9b5
Download PDF DOIMusic education is an important part of the school curriculum; it teaches children to be creative and to collaborate with others. Music gives individuals another medium to communicate through, which is especially important for individuals with cognitive or physical disabilities. Teachers of children with severe disabilities have expressed a lack of musical instruments adapted for these children, which leads to an incomplete music education for this group. This study aims at designing and evaluating a set of collaborative musical instruments for children with cognitive and physical disabilities, and the research is done together with the special education school Rullen in Stockholm, Sweden. The process was divided into three main parts: a pre-study, building and designing, and finally a user study. Based on findings from previous research, together with input received from teachers at Rullen during the pre-study, the resulting design consists of four musical instruments that are connected to a central hub. The results show that the instruments functioned as intended and that the design makes musical learning accessible in a way traditional instruments do not, as well as creates a good basis for a collaborative musical experience. However, fully evaluating the effect of playing together requires more time for the children to get comfortable with the instruments and also for the experiment leaders to test different setups to optimize the conditions for a good interplay.
@inproceedings{NIME21_63, author = {Svahn, Maria and Hölling, Josefine and Curtsson, Fanny and Nokelainen, Nina}, title = {The Rullen Band}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {63}, doi = {10.21428/92fbeb44.e795c9b5}, url = {https://nime.pubpub.org/pub/pvd6davm}, presentation-video = {https://youtu.be/2cD9f493oJM} }
-
Stefan Püst, Lena Gieseke, and Angela Brennecke. 2021. Interaction Taxonomy for Sequencer-Based Music Performances. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.0d5ab18d
Download PDF DOISequencer-based live performances of electronic music require a variety of interactions. These interactions depend strongly on the affordances and constraints of the instrument used, and musicians may perceive the available interactions it offers as limiting. To further the development of instruments for live performance and to expand the interaction possibilities, a systematic overview of interactions in current sequencer-based music performance is needed first. To that end, we propose a taxonomy of interactions in sequencer-based music performances of electronic music. We identify two performance modes, sequencing and sound design, and four interaction classes: creation, modification, selection, and evaluation. Furthermore, we discuss the influence of the different interaction classes on both musicians and the audience, and use the proposed taxonomy to analyze six commercially available hardware devices.
@inproceedings{NIME21_64, author = {Püst, Stefan and Gieseke, Lena and Brennecke, Angela}, title = {Interaction Taxonomy for Sequencer-Based Music Performances}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {64}, doi = {10.21428/92fbeb44.0d5ab18d}, url = {https://nime.pubpub.org/pub/gq2ukghi}, presentation-video = {https://youtu.be/c4MUKWpneg0} }
-
Isabela Corintha and Giordano Cabral. 2021. Improvised Sound-Making within Musical Apprenticeship and Enactivism: An Intersection between the 4E‘s Model and DMIs. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.56a01d33
Download PDF DOIFrom an epistemological perspective, this work presents a discussion of how the paradigm of enactive music cognition is related to improvisation in the context of the skills and needs of 21st-century music learners. Improvisation in music education is addressed within the perspective of an alternative but increasingly influential enactive approach to mind (Varela et al., 1993), followed by the four theories known as the 4Es of cognition - embedded, embodied, enactive and extended - which naturally have characteristics in common that led them to be grouped in this way. I discuss the “autopoietic” (self-maintaining systems that auto-reproduce over time based on their own set of internal rules) nature of the embodied musical mind. To conclude, an overview of the enactivist approach within DMI design is outlined in order to provide a better understanding of the experiences and benefits of using new technologies in musical learning contexts.
@inproceedings{NIME21_65, author = {Corintha, Isabela and Cabral, Giordano}, title = {Improvised Sound-Making within Musical Apprenticeship and Enactivism: An Intersection between the 4E`s Model and DMIs}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {65}, doi = {10.21428/92fbeb44.56a01d33}, url = {https://nime.pubpub.org/pub/e4lsrn6c}, presentation-video = {https://youtu.be/dGb5tl_tA58} }
-
Tim Murray-Browne and Panagiotis Tigas. 2021. Latent Mappings: Generating Open-Ended Expressive Mappings Using Variational Autoencoders. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.9d4bcd4b
Download PDF DOIIn many contexts, creating mappings for gestural interactions can form part of an artistic process. Creators seeking a mapping that is expressive, novel, and affords them a sense of authorship may not know how to program it up in a signal processing patch. Tools like Wekinator [1] and MIMIC [2] allow creators to use supervised machine learning to learn mappings from example input/output pairings. However, a creator may know a good mapping when they encounter it yet start with little sense of what the inputs or outputs should be. We call this an open-ended mapping process. Addressing this need, we introduce the latent mapping, which leverages the latent space of an unsupervised machine learning algorithm such as a Variational Autoencoder trained on a corpus of unlabelled gestural data from the creator. We illustrate it with Sonified Body, a system mapping full-body movement to sound which we explore in a residency with three dancers.
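A minimal sketch of the latent-mapping idea, assuming a small corpus of unlabelled pose vectors and PyTorch; the network sizes, the latent dimensionality, and the final parameter mapping are illustrative assumptions rather than the Sonified Body implementation:

```python
# Illustrative latent mapping: a small VAE on unlabelled pose vectors whose
# latent code is rescaled into a few (hypothetical) synthesis parameters.
import torch
import torch.nn as nn

class PoseVAE(nn.Module):
    def __init__(self, pose_dim=51, latent_dim=4):      # e.g. 17 joints x 3D coordinates
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(pose_dim, 64), nn.ReLU())
        self.mu = nn.Linear(64, latent_dim)
        self.logvar = nn.Linear(64, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, pose_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    rec = nn.functional.mse_loss(recon, x, reduction="mean")
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + 0.001 * kld

def latent_to_synth_params(mu):
    """Map each latent dimension to a hypothetical synth control in [0, 1]."""
    return torch.sigmoid(mu)   # e.g. cutoff, grain rate, pitch spread, gain

model = PoseVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
poses = torch.randn(256, 51)   # stand-in for a recorded, unlabelled gesture corpus
for _ in range(100):
    recon, mu, logvar = model(poses)
    loss = vae_loss(recon, poses, mu, logvar)
    opt.zero_grad(); loss.backward(); opt.step()
print(latent_to_synth_params(model(poses[:1])[1]))
```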
@inproceedings{NIME21_66, author = {Murray-Browne, Tim and Tigas, Panagiotis}, title = {Latent Mappings: Generating Open-Ended Expressive Mappings Using Variational Autoencoders}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {66}, doi = {10.21428/92fbeb44.9d4bcd4b}, url = {https://nime.pubpub.org/pub/latent-mappings}, presentation-video = {https://youtu.be/zBOHWyIGaYc} }
-
Graham Wakefield. 2021. A streamlined workflow from Max/gen~ to modular hardware. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.e32fde90
Download PDF DOIThis paper describes Oopsy, which provides a streamlined process for editing digital signal processing algorithms for precise and sample accurate sound generation, transformation and modulation, and placing them in the context of embedded hardware and modular synthesizers. This pipeline gives digital instrument designers the development flexibility of established software with the deployment benefits of working on hardware. Specifically, algorithm design takes place in the flexible context of gen in Max, and Oopsy automatically and fluently translates this and uploads it onto the open-ended Daisy embedded hardware. The paper locates this work in the context of related software/hardware workflows, and provides detail of its contributions in design, implementation, and use.
@inproceedings{NIME21_67, author = {Wakefield, Graham}, title = {A streamlined workflow from Max/gen{\textasciitilde} to modular hardware}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {67}, doi = {10.21428/92fbeb44.e32fde90}, url = {https://nime.pubpub.org/pub/0u3ruj23}, presentation-video = {https://youtu.be/xJwI9F9Spbo} }
-
Roger B. Dannenberg. 2021. Canons for Conlon: Composing and Performing Multiple Tempi on the Web. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.a41fe2c5
Download PDF DOIIn response to the 2020 pandemic, a new work was composed inspired by the limitations and challenges of performing over the network. Since synchronization is one of the big challenges, or perhaps something to be avoided due to network latency, this work explicitly calls for desynchronization in a controlled way, using metronomes running at different rates to take performers in and out of approximate synchronization. A special editor was developed to visualize the music because conventional editors do not support multiple continuously varying tempi.
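The drift-and-realignment behaviour the piece exploits is easy to quantify with simple arithmetic; the short sketch below (not the composition's editor) computes how quickly two integer-BPM metronomes drift apart and when their clicks coincide again.

```python
# Illustrative arithmetic for two metronomes at different tempi.
from math import gcd

def realignment_period_seconds(bpm_a, bpm_b):
    """For integer tempi, clicks of both metronomes coincide every 60/gcd seconds."""
    return 60.0 / gcd(bpm_a, bpm_b)

def drift_per_beat_ms(bpm_a, bpm_b):
    """Difference between the two beat periods, i.e. how far apart adjacent beats slide."""
    return abs(60.0 / bpm_a - 60.0 / bpm_b) * 1000.0

# Two players at 60 and 63 BPM drift about 48 ms per beat and realign every 20 s.
print(realignment_period_seconds(60, 63), drift_per_beat_ms(60, 63))
```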
@inproceedings{NIME21_68, author = {Dannenberg, Roger B.}, title = {Canons for Conlon: Composing and Performing Multiple Tempi on the Web}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {68}, doi = {10.21428/92fbeb44.a41fe2c5}, url = {https://nime.pubpub.org/pub/jxo0v8r7}, presentation-video = {https://youtu.be/MhcZyE2SCck} }
-
Artemi-Maria Gioti. 2021. A Compositional Exploration of Computational Aesthetic Evaluation and AI Bias. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.de74b046
Download PDF DOIThis paper describes a subversive compositional approach to machine learning, focused on the exploration of AI bias and computational aesthetic evaluation. In Bias, for bass clarinet and Interactive Music System, a computer music system using two Neural Networks trained to develop “aesthetic bias” interacts with the musician by evaluating the sound input based on its “subjective” aesthetic judgments. The composition problematizes the discrepancies between the concepts of error and accuracy, associated with supervised machine learning, and aesthetic judgments as inherently subjective and intangible. The methods used in the compositional process are discussed with respect to the objective of balancing the trade-off between musical authorship and interpretative freedom in interactive musical works.
@inproceedings{NIME21_69, author = {Gioti, Artemi-Maria}, title = {A Compositional Exploration of Computational Aesthetic Evaluation and AI Bias.}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {69}, doi = {10.21428/92fbeb44.de74b046}, url = {https://nime.pubpub.org/pub/zpvgmv74}, presentation-video = {https://youtu.be/9l8NeGmvpDU} }
-
Paul Dunham, Dr. Mo H. Zareei, Prof. Dale Carnegie, and Dr. Dugal McKinnon. 2021. Click::RAND#2. An Indeterminate Sound Sculpture. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.5cc6d157
Download PDF DOICan random digit data be transformed and utilized as a sound installation that provides a referential connection between a book and the electromechanical computer? What happens when the text of A Million Random Digits with 100,000 Normal Deviates is ‘vocalized’ by an electro-mechanical object? Using a media archaeological research approach, Click::RAND#2, an indeterminate sound sculpture utilising relays as sound objects, is an audio-visual reinterpretation and representation of an historical relationship between a book of random digits and the electromechanical relay. Developed by the first author, Click::RAND#2 is the physical re-presentation of random digit data sets as compositional elements to complement the physical presence of the work through spatialized sound patterns framed within the context of Henri Lefebvre’s rhythmanalysis and experienced as synchronous, syncopated or discordant rhythms.
@inproceedings{NIME21_7, author = {Dunham, Paul and Zareei, Dr. Mo H. and Carnegie, Prof. Dale and McKinnon, Dr. Dugal}, title = {Click::RAND#2. An Indeterminate Sound Sculpture}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {7}, doi = {10.21428/92fbeb44.5cc6d157}, url = {https://nime.pubpub.org/pub/lac4s48h}, presentation-video = {https://youtu.be/vJynbs8txuA} }
-
Raghavasimhan Sankaranarayanan and Gil Weinberg. 2021. Design of Hathaani - A Robotic Violinist for Carnatic Music. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.0ad83109
Download PDF DOIWe present a novel robotic violinist that is designed to play Carnatic music - a music system popular in the southern part of India. The robot plays the D string and uses a single-finger mechanism inspired by the Chitravina - a fretless Indian lute. A fingerboard traversal system with a dynamic fingertip apparatus enables the robot to play gamakas - pitch-based embellishments in-between notes, which are at the core of Carnatic music. A double-roller design is used for bowing, which reduces space, produces a tone that resembles the tone of a conventional violin bow, and facilitates superhuman playing techniques such as infinite bowing. The design also enables the user to change the bow hair tightness to help capture a variety of performing techniques in different musical styles. Objective assessments and subjective listening tests were conducted to evaluate our design, indicating that the robot can play gamakas in a realistic manner and thus can perform Carnatic music.
@inproceedings{NIME21_70, author = {Sankaranarayanan, Raghavasimhan and Weinberg, Gil}, title = {Design of Hathaani - A Robotic Violinist for Carnatic Music}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {70}, doi = {10.21428/92fbeb44.0ad83109}, url = {https://nime.pubpub.org/pub/225tmviw}, presentation-video = {https://youtu.be/4vNZm2Zewqs} }
-
Damian Mills, Franziska Schroeder, and John D’Arcy. 2021. GIVME: Guided Interactions in Virtual Musical Environments. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.5443652c
Download PDF DOIThe current generation of commercial hardware and software for virtual reality and immersive environments presents possibilities for a wealth of creative solutions for new musical expression and interaction. This paper explores the affordances of virtual musical environments with the disabled music-making community of Drake Music Project Northern Ireland. Recent collaborations have investigated strategies for Guided Interactions in Virtual Musical Environments (GIVME), a novel concept the authors introduce here. This paper gives some background on disabled music-making with digital musical instruments before sharing recent research projects that facilitate disabled music performance in virtual reality immersive environments. We expand on the premise of GIVME as a potential guideline for musical interaction design for disabled musicians in VR, and take an explorative look at the possibilities and constraints for instrument design for disabled musicians as virtual worlds integrate ever more closely with the real.
@inproceedings{NIME21_71, author = {Mills, Damian and Schroeder, Franziska and D'Arcy, John}, title = {GIVME: Guided Interactions in Virtual Musical Environments: }, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {71}, doi = {10.21428/92fbeb44.5443652c}, url = {https://nime.pubpub.org/pub/h14o4oit}, presentation-video = {https://youtu.be/sI0K9sMYc80} }
-
Anne Hege, Camille Noufi, Elena Georgieva, and Ge Wang. 2021. Instrument Design for The Furies: A LaptOpera. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.dde5029a
Download PDF DOIIn this article, we discuss the creation of The Furies: A LaptOpera, a new opera for laptop orchestra and live vocal soloists that tells the story of the Greek tragedy Electra. We outline the principles that guided our instrument design with the aim of forging direct and visceral connections between the music, the narrative, and the relationship between characters in ways we can simultaneously hear, see, and feel. Through detailed case studies of three instruments—The Rope and BeatPlayer, the tether chorus, and the autonomous speaker orchestra—this paper offers tools and reflections to guide instrument-building in service of narrative-based works through a unified multimedia art form.
@inproceedings{NIME21_72, author = {Hege, Anne and Noufi, Camille and Georgieva, Elena and Wang, Ge}, title = {Instrument Design for The Furies: A LaptOpera}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {72}, doi = {10.21428/92fbeb44.dde5029a}, url = {https://nime.pubpub.org/pub/gx6klqui}, presentation-video = {https://youtu.be/QC_-h4cVVog} }
-
Staas de Jong. 2021. Human noise at the fingertip: Positional (non)control under varying haptic × musical conditions. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.9765f11d
Download PDF DOIAs technologies and interfaces for the instrumental control of musical sound get ever better at tracking aspects of human position and motion in space, a fundamental problem emerges: Unintended or even counter-intentional control may result when humans themselves become a source of positional noise. A clear case of what is meant by this, is the “stillness movement” of a body part, occurring despite the simultaneous explicit intention for that body part to remain still. In this paper, we present the results of a randomized, controlled experiment investigating this phenomenon along a vertical axis relative to the human fingertip. The results include characterizations of both the spatial distribution and frequency distribution of the stillness movement observed. Also included are results indicating a possible role for constant forces and viscosities in reducing stillness movement amplitude, thereby potentially enabling the implementation of more positional control of musical sound within the same available spatial range. Importantly, the above is summarized in a form that is directly interpretable for anyone designing technologies, interactions, or performances that involve fingertip control of musical sound. Also, a complete data set of the experimental results is included in the separate Appendices to this paper, again in a format that is directly interpretable.
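The frequency-distribution analysis described above can be illustrated, in a hedged and simplified form that is not the paper's actual procedure, by computing a Welch power spectrum of a detrended fingertip-position trace; the synthetic trace and sampling rate below are assumptions.

```python
# Illustrative frequency analysis of unintended "stillness movement".
import numpy as np
from scipy.signal import welch

def stillness_spectrum(position_m, sample_rate_hz):
    """Return (frequencies_hz, power) of the detrended fingertip position."""
    detrended = position_m - np.mean(position_m)
    return welch(detrended, fs=sample_rate_hz, nperseg=min(1024, len(detrended)))

# Synthetic trace: ~1 mm of slow drift plus a smaller, faster tremor component at 9 Hz.
fs = 200.0
t = np.arange(0, 20, 1 / fs)
trace = 0.001 * np.sin(2 * np.pi * 0.3 * t) + 0.0002 * np.sin(2 * np.pi * 9 * t)
freqs, power = stillness_spectrum(trace, fs)
print(freqs[np.argmax(power)])   # dominant component near 0.3 Hz
```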
@inproceedings{NIME21_73, author = {de Jong, Staas}, title = {Human noise at the fingertip: Positional (non)control under varying haptic × musical conditions}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {73}, doi = {10.21428/92fbeb44.9765f11d}, url = {https://nime.pubpub.org/pub/bol2r7nr}, presentation-video = {https://youtu.be/L_WhJ3N-v8c} }
-
Christian Faubel. 2021. Emergent Polyrhythmic Patterns with a Neuromorph Electronic Network. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.e66a8542
Download PDF DOIIn this paper I show how it is possible to create polyrhythmic patterns with analogue oscillators by setting up a network of variable resistances that connect these oscillators. The system I present is built with electronic circuits connected to DC motors and allows for a very tangible and playful exploration of the dynamic properties of artificial neural networks. The theoretical underpinnings of this approach stem from observations and models of synchronization in living organisms, where synchronization and phase-locking are not only observable phenomena but can also be seen as markers of the quality of interaction. Realized as a technical system of analogue oscillators, synchronization also appears between oscillators tuned to different basic rhythms, and stable polyrhythmic patterns emerge as a result of the electrical connections.
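A software analogue of the phase-locking that underlies the installation (the paper's system is an analogue circuit, not this code) can be written as two Kuramoto-style phase oscillators: with weak coupling they keep their different rates, and with sufficiently strong coupling they lock. All constants are illustrative.

```python
# Two coupled phase oscillators: a minimal software stand-in for resistively coupled circuits.
import math

def residual_drift(freq_a=2.0, freq_b=3.0, coupling=4.0, dt=0.001, seconds=10.0):
    """Change in phase difference over the final second of the run:
    near zero means the oscillators have phase-locked; large means they drift."""
    pa, pb = 0.0, 0.0
    history = []
    for _ in range(int(seconds / dt)):
        da = 2 * math.pi * freq_a + coupling * math.sin(pb - pa)
        db = 2 * math.pi * freq_b + coupling * math.sin(pa - pb)
        pa, pb = pa + da * dt, pb + db * dt
        history.append(pb - pa)
    return history[-1] - history[-int(1.0 / dt)]

print(residual_drift(coupling=4.0))   # ~0: the 1 Hz detuning is pulled into lock
print(residual_drift(coupling=0.5))   # ~6 rad over the last second: too weak, no lock
```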
@inproceedings{NIME21_74, author = {Faubel, Christian}, title = {Emergent Polyrhythmic Patterns with a Neuromorph Electronic Network}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {74}, doi = {10.21428/92fbeb44.e66a8542}, url = {https://nime.pubpub.org/pub/g04egsqn}, presentation-video = {https://youtu.be/pJlxVJTMRto} }
-
João Tragtenberg, Gabriel Albuquerque, and Filipe Calegario. 2021. Gambiarra and Techno-Vernacular Creativity in NIME Research. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.98354a15
Download PDF DOIOver past editions of the NIME Conference, there has been a growing concern for diversity and inclusion. It is relevant for an international community, the vast majority of whose members are in Europe, the USA, and Canada, to seek richer cultural diversity. To contribute to a decolonial perspective on the inclusion of underrepresented countries and ethnic/racial groups, we discuss the concepts of Gambiarra and Techno-Vernacular Creativity. We believe these concepts may help structure and stimulate individuals from these underrepresented contexts to perform research in the NIME field.
@inproceedings{NIME21_75, author = {Tragtenberg, João and Albuquerque, Gabriel and Calegario, Filipe}, title = {Gambiarra and Techno-Vernacular Creativity in NIME Research}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {75}, doi = {10.21428/92fbeb44.98354a15}, url = {https://nime.pubpub.org/pub/aqm27581}, presentation-video = {https://youtu.be/iJ8g7vBPFYw} }
-
Timothy Roth, Aiyun Huang, and Tyler Cunningham. 2021. On Parallel Performance Practices: Some Observations on Personalizing DMIs as Percussionists. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.c61b9546
Download PDF DOIDigital musical instrument (DMI) design and performance is primarily practiced by those with backgrounds in music technology and human-computer interaction. Research on these topics is rarely led by performers, much less by those without backgrounds in technology. In this study, we explore DMI design and performance from the perspective of a singular community of classically-trained percussionists. We use a practice-based methodology informed by our skillset as percussionists to study how instrumental skills and sensibilities can be incorporated into the personalization of, and performance with, DMIs. We introduced a simple and adaptable digital musical instrument, built using the Arduino Uno, that individuals (percussionists) could personalize and extend in order to improvise, compose and create music (études). Our analysis maps parallel percussion practices emerging from the resultant DMI compositions and performances by examining the functionality of each Arduino instrument through the lens of material-oriented and communication-oriented approaches to interactivity.
@inproceedings{NIME21_76, author = {Roth, Timothy and Huang, Aiyun and Cunningham, Tyler}, title = {On Parallel Performance Practices: Some Observations on Personalizing DMIs as Percussionists}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {76}, doi = {10.21428/92fbeb44.c61b9546}, url = {https://nime.pubpub.org/pub/226jlaug}, presentation-video = {https://youtu.be/kjQDN907FXs} }
-
Sofy Yuditskaya, Sophia Sun, and Margaret Schedel. 2021. Synthetic Erudition Assist Lattice. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.0282a79c
Download PDF DOIThe Seals are a political, feminist, noise, and AI-inspired electronic sorta-surf rock band composed of the talents of Margaret Schedel, Susie Green, Sophia Sun, Ria Rajan, and Sofy Yuditskaya, augmented by the S.E.A.L. (Synthetic Erudition Assist Lattice), as we call the collection of AIs that assist us in creating usable content with which to mold and shape our music and visuals. Our concerts begin by invoking one another through internet conferencing software; during the concert, we play skull augmented theremins while reading GPT2 & GPT3 (Machine Learning language models) generated dialogue over pre-generated songs. As a distributed band we designed our performance to take place over video conferencing systems deliberately incorporating the glitch artifacts that they bring. We use one of the oldest forms of generative operations, throwing dice, as well as the latest in ML technology to create our collaborative music over a distance. In this paper, we illustrate how we leverage the multiple novel interfaces that we use to create our unique sound.
@inproceedings{NIME21_77, author = {Yuditskaya, Sofy and Sun, Sophia and Schedel, Margaret}, title = {Synthetic Erudition Assist Lattice}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {77}, doi = {10.21428/92fbeb44.0282a79c}, url = {https://nime.pubpub.org/pub/5oupvoun}, presentation-video = {https://youtu.be/FmTbEUyePXg} }
-
Michael Blandino and Edgar Berdahl. 2021. Using a Pursuit Tracking Task to Compare Continuous Control of Various NIME Sensors. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.c2b5a672
Download PDF DOIThis study investigates how accurately users can continuously control a variety of one degree of freedom sensors commonly used in electronic music interfaces. Analysis within an information-theoretic model yields channel capacities of maximum information throughput in bits/sec that can support a unified comparison. The results may inform the design of digital musical instruments and the design of systems with similarly demanding control tasks.
@inproceedings{NIME21_78, author = {Blandino, Michael and Berdahl, Edgar}, title = {Using a Pursuit Tracking Task to Compare Continuous Control of Various NIME Sensors}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {78}, doi = {10.21428/92fbeb44.c2b5a672}, url = {https://nime.pubpub.org/pub/using-a-pursuit-tracking-task-to-compare-continuous-control-of-various-nime-sensors}, presentation-video = {https://youtu.be/-p7mp3LFsQg} }
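The abstract above summarises an information-theoretic analysis of pursuit tracking. One common way to estimate such a throughput figure, offered here only as a hedged sketch rather than the authors' exact model, is to compute the magnitude-squared coherence between the target signal and the participant's response and integrate a Gaussian-channel bound over frequency; the tracking log below is fabricated so the example runs on its own.

```python
import numpy as np
from scipy.signal import coherence

# Hypothetical pursuit-tracking log: target position and the response
# produced with the sensor under test, both sampled at fs Hz. The
# response here is just a delayed, noisy copy of the target so the
# example is self-contained.
fs = 100.0
t = np.arange(0, 60, 1 / fs)
target = np.cumsum(np.random.randn(t.size)) / 50.0        # random-walk target
response = np.roll(target, 15) + 0.05 * np.random.randn(t.size)

# Magnitude-squared coherence between target and response.
f, gamma2 = coherence(target, response, fs=fs, nperseg=1024)

# For jointly Gaussian signals, an information-rate estimate (bits/s)
# follows from integrating -0.5 * log2(1 - coherence) over frequency.
gamma2 = np.clip(gamma2, 0.0, 0.999)
throughput = np.trapz(-0.5 * np.log2(1 - gamma2), f)
print(f"estimated throughput: {throughput:.1f} bits/s")
```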
-
Margaret Schedel, Brian Smith, Robert Cosgrove, and Nick Hwang. 2021. RhumbLine: Plectrohyla Exquisita — Spatial Listening of Zoomorphic Musical Robots. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.9e1312b1
Download PDF DOIContending with ecosystem silencing in the Anthropocene, RhumbLine: Plectrohyla Exquisita is an installation-scale instrument featuring an ensemble of zoomorphic musical robots that generate an acoustic soundscape from behind an acousmatic veil, highlighting the spatial attributes of acoustic sound. Originally conceived as a physical installation, the global COVID-19 pandemic catalyzed a reconceptualization of the work that allowed it to function remotely and collaboratively with users seeding robotic frog callers with improvised rhythmic calls via the internet—transforming a physical installation into a web-based performable installation-scale instrument. The performed calls from online visitors evolve using AI as they pass through the frog collective. After performing a rhythm, audiences listen ambisonically from behind a virtual veil and attempt to map the formation of the frogs, based on the spatial information embedded in their calls. After listening, audience members can reveal the frogs and their formation. By reconceiving rhumb lines—navigational tools that create paths of constant bearing to navigate space—as sonic tools to spatially orient listeners, RhumbLine: Plectrohyla Exquisita functions as a new interface for spatial musical expression (NISME) in both its physical and virtual instantiations.
@inproceedings{NIME21_79, author = {Schedel, Margaret and Smith, Brian and Cosgrove, Robert and Hwang, Nick}, title = {RhumbLine: Plectrohyla Exquisita — Spatial Listening of Zoomorphic Musical Robots}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {79}, doi = {10.21428/92fbeb44.9e1312b1}, url = {https://nime.pubpub.org/pub/f5jtuy87}, presentation-video = {https://youtu.be/twzpxObh9jw} }
-
S. M. Astrid Bin. 2021. Discourse is critical: Towards a collaborative NIME history. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.ac5d43e1
Download PDF DOIRecent work in NIME has questioned the political and social implications of work in this field, and has called for direct action on problems in the areas of diversity, representation and political engagement. Though there is motivation to address these problems, there is an open question of how to meaningfully do so. This paper proposes that NIME’s historical record is the best tool for understanding our own output but this record is incomplete, and makes the case for collective action to improve how we document our work. I begin by contrasting NIME’s output with its discourse, and explore the nature of this discourse through NIME history and examine our inherited epistemological complexity. I assert that, if left unexamined, this complexity can undermine our community values of diversity and inclusion. I argue that meaningfully addressing current problems demands critical reflection on our work, and explore how NIME’s historical record is currently used as a means of doing so. I then review what NIME’s historical record contains (and what it does not), and evaluate its fitness for use as a tool of inquiry. Finally I make the case for collective action to establish better documentation practices, and suggest features that may be helpful for the process as well as the result.
@inproceedings{NIME21_8, author = {Bin, S. M. Astrid}, title = {Discourse is critical: Towards a collaborative NIME history}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {8}, doi = {10.21428/92fbeb44.ac5d43e1}, url = {https://nime.pubpub.org/pub/nbrrk8ll}, presentation-video = {https://youtu.be/omnMRlj7miA} }
-
Koray Tahiroğlu, Miranda Kastemaa, and Oskar Koli. 2021. AI-terity 2.0: An Autonomous NIME Featuring GANSpaceSynth Deep Learning Model. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.3d0e9e12
Download PDF DOIIn this paper we present the recent developments in the AI-terity instrument. AI-terity is a deformable, non-rigid musical instrument that comprises a particular artificial intelligence (AI) method for generating audio samples for real-time audio synthesis. As an improvement, we developed the control interface structure with additional sensor hardware. In addition, we implemented a new hybrid deep learning architecture, GANSpaceSynth, in which we applied the GANSpace method on the GANSynth model. Following the deep learning model improvement, we developed new autonomous features for the instrument that aim at keeping the musician in an active and uncertain state of exploration. Through these new features, the instrument enables more accurate control over the GAN latent space. Further, we intend to investigate the current developments through a musical composition that idiomatically reflects the new autonomous features of the AI-terity instrument. We argue that the present technology of AI is suitable for enabling alternative autonomous features in the audio domain for the creative practices of musicians.
@inproceedings{NIME21_80, author = {Tahiroğlu, Koray and Kastemaa, Miranda and Koli, Oskar}, title = {AI-terity 2.0: An Autonomous NIME Featuring GANSpaceSynth Deep Learning Model}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {80}, doi = {10.21428/92fbeb44.3d0e9e12}, url = {https://nime.pubpub.org/pub/9zu49nu5}, presentation-video = {https://youtu.be/WVAIPwI-3P8} }
-
Alex Champagne, Bob Pritchard, Paul Dietz, and Sidney Fels. 2021. Investigation of a Novel Shape Sensor for Musical Expression. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.a72b68dd
Download PDF DOIA novel, high-fidelity, shape-sensing technology, BendShape [1], is investigated as an expressive music controller for sound effects, direct sound manipulation, and voice synthesis. Various approaches are considered for developing mapping strategies that create transparent metaphors to facilitate expression for both the performer and the audience. We explore strategies in the input, intermediate, and output mapping layers using a two-step approach guided by Perry’s Principles [2]. First, we use trial-and-error to establish simple mappings between single input parameter control and effects to identify promising directions for further study. Then, we compose a specific piece that supports different uses of the BendShape mappings in a performance context: this allows us to study a performer trying different types of expressive techniques, enabling us to analyse the role each mapping has in facilitating musical expression. We also investigate the effects these mapping strategies have on performer bandwidth. Our main finding is that the high fidelity of the novel BendShape sensor facilitates creating interpretable input representations to control sound representations, and thereby matching interpretations that provide better expressive mappings, such as with vocal shape to vocal sound and bumpiness control; however, direct mappings of individual, independent sensors to effects do not provide obvious advantages over simpler controls. Furthermore, while the BendShape sensor enables rich explorations for sound, the ability to find expressive interpretable shape-to-sound representations while respecting the performer’s bandwidth limitations (caused by having many coupled input degrees of freedom) remains a challenge and an opportunity.
@inproceedings{NIME21_81, author = {Champagne, Alex and Pritchard, Bob and Dietz, Paul and Fels, Sidney}, title = {Investigation of a Novel Shape Sensor for Musical Expression}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {81}, doi = {10.21428/92fbeb44.a72b68dd}, url = {https://nime.pubpub.org/pub/bu2jb1d6}, presentation-video = {https://youtu.be/CnJmH6fX6XA} }
-
Frederic Anthony Robinson. 2021. Debris: A playful interface for direct manipulation of audio waveforms. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.02005035
Download PDF DOIDebris is a playful interface for direct manipulation of audio waveforms. Audio data is represented as a collection of waveform elements, which provide a low-resolution visualisation of the audio sample. Each element, however, can be individually examined, re-positioned, or broken down into smaller fragments, thereby becoming a tangible representation of a moment in the sample. Debris is built around the idea of looking at a sound not as a linear event to be played from beginning to end, but as a non-linear collection of moments, timbres, and sound fragments which can be explored, closely examined and interacted with. This paper positions the work among conceptually related NIME interfaces, details the various user interactions and their mappings and ends with a discussion around the interface’s constraints.
@inproceedings{NIME21_82, author = {Robinson, Frederic Anthony}, title = {Debris: A playful interface for direct manipulation of audio waveforms}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {82}, doi = {10.21428/92fbeb44.02005035}, url = {https://nime.pubpub.org/pub/xn761337}, presentation-video = {https://youtu.be/H04LgbZqc-c} }
-
Jeff Gregorio and Youngmoo E. Kim. 2021. Evaluation of Timbre-Based Control of a Parametric Synthesizer. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.31419bf9
Download PDF DOIMusical audio synthesis often requires systems-level knowledge and uniquely analytical approaches to music making, thus a number of machine learning systems have been proposed to replace traditional parameter spaces with more intuitive control spaces based on spatial arrangement of sonic qualities. Some prior evaluations of simplified control spaces have shown increased user efficacy via quantitative metrics in sound design tasks, and some indicate that simplification may lower barriers to entry to synthesis. However, the level and nature of the appeal of simplified interfaces to synthesists merits investigation, particularly in relation to the type of task, prior expertise, and aesthetic values. Toward addressing these unknowns, this work investigates user experience in a sample of 20 musicians with varying degrees of synthesis expertise, and uses a one-week, at-home, multi-task evaluation of a novel instrument presenting a simplified mode of control alongside the full parameter space. We find that our participants generally give primacy to parameter space and seek understanding of parameter-sound relationships, yet most do report finding some creative utility in timbre-space control for discovery of sounds, timbral transposition, and expressive modulations of parameters. Although we find some articulations of particular aesthetic values, relationships to user experience remain difficult to characterize generally.
@inproceedings{NIME21_83, author = {Gregorio, Jeff and Kim, Youngmoo E.}, title = {Evaluation of Timbre-Based Control of a Parametric Synthesizer}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {83}, doi = {10.21428/92fbeb44.31419bf9}, url = {https://nime.pubpub.org/pub/adtb2zl5}, presentation-video = {https://youtu.be/m7IqWceQmuk} }
-
Milton Riaño. 2021. Hybridization No. 1: Standing at the Boundary between Physical and Virtual Space. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.d3354ff3
Download PDF DOIHybridization No. 1 is a wireless hand-held rotary instrument that allows the performer to simultaneously interact with physical and virtual spaces. The instrument emits visible laser lights and invisible ultrasonic waves which scan the architecture of a physical space. The instrument is also connected to a virtual 3D model of the same space, which allows the performer to create an immersive audiovisual composition that blurs the limits between physical and virtual space. In this paper I describe the instrument, its operation and its integrated multimedia system.
@inproceedings{NIME21_84, author = {Riaño, Milton}, title = {Hybridization No. 1: Standing at the Boundary between Physical and Virtual Space}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {84}, doi = {10.21428/92fbeb44.d3354ff3}, url = {https://nime.pubpub.org/pub/h1} }
-
Lloyd May and Peter Larsson. 2021. Nerve Sensors in Inclusive Musical Performance. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.82c5626f
Download PDF DOIWe present the methods and findings of a multi-day performance research lab that evaluated the efficacy of a novel nerve sensor in the context of a physically inclusive performance practice. Nerve sensors are a variant of surface electromyography that are optimized to detect signals from nerve firings rather than skeletal muscle movement, allowing performers with altered muscle physiology or control to use the sensors more effectively. Through iterative co-design and musical performance evaluation, we compared the performative affordances and limitations of the nerve sensor to other contemporary sensor-based gestural instruments. The nerve sensor afforded the communication of gestural effort in a manner that other gestural instruments did not, while offering a smaller palette of reliably classifiable gestures.
@inproceedings{NIME21_85, author = {May, Lloyd and Larsson, Peter}, title = {Nerve Sensors in Inclusive Musical Performance}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {85}, doi = {10.21428/92fbeb44.82c5626f}, url = {https://nime.pubpub.org/pub/yxcp36ii}, presentation-video = {https://youtu.be/qsRVcBl2gAo} }
-
Guadalupe Babio Fernandez and Kent Larson. 2021. Tune Field. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.2305755b
Download PDF DOIThis paper introduces Tune Field, a 3-dimensional tangible interface that combines and alters previously existing concepts of topographical, field-sensing and capacitive touch interfaces as a method for musical expression and sound visualization. Users are invited to create experimental sound textures while modifying the topography of antennas. The interface’s touch antennas are randomly located on a box, promoting exploration and discovery of gesture-to-sound relationships. In this way, the interface opens a space for playfully producing sound and triggering visuals, turning Tune Field into a sensorial experience.
@inproceedings{NIME21_86, author = {Fernandez, Guadalupe Babio and Larson, Kent}, title = {Tune Field}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {86}, doi = {10.21428/92fbeb44.2305755b}, url = {https://nime.pubpub.org/pub/eqvxspw3}, presentation-video = {https://youtu.be/2lB8idO_yDs} }
-
Taejun Kim, Yi-Hsuan Yang, and Juhan Nam. 2021. Reverse-Engineering The Transition Regions of Real-World DJ Mixes using Sub-band Analysis with Convex Optimization. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.4b2fc7b9
Download PDF DOIThe basic role of DJs is creating a seamless sequence of music tracks. In order to make the DJ mix a single continuous audio stream, DJs control various audio effects on a DJ mixer system, particularly in the transition region between one track and the next, and modify the audio signals in terms of volume, timbre, tempo, and other musical elements. There have been research efforts to imitate DJ mixing techniques, but they are mainly rule-based approaches built on domain knowledge. In this paper, we propose a method to analyze the DJ mixer control from real-world DJ mixes as a step toward a data-driven approach to imitating the DJ performance. Specifically, we estimate the mixing gain trajectories between the two tracks using sub-band analysis with constrained convex optimization. We evaluate the method by reconstructing the original tracks using the two source tracks and the gain estimate, and show that the proposed method outperforms linear crossfading as a baseline as well as single-band analysis. A listening test with 14 participants also confirms that our proposed method is preferred over these alternatives.
@inproceedings{NIME21_87, author = {Kim, Taejun and Yang, Yi-Hsuan and Nam, Juhan}, title = {Reverse-Engineering The Transition Regions of Real-World DJ Mixes using Sub-band Analysis with Convex Optimization}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {87}, doi = {10.21428/92fbeb44.4b2fc7b9}, url = {https://nime.pubpub.org/pub/g7avj1a7}, presentation-video = {https://youtu.be/ju0P-Zq8Bwo} }
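The gain-estimation step named in the abstract can be posed as a small constrained least-squares problem. The sketch below uses cvxpy and synthetic per-band envelopes; the formulation (bounded gains plus a smoothness penalty) is an assumption for illustration, not necessarily the authors' exact objective.

```python
import numpy as np
import cvxpy as cp

# Synthetic per-band magnitude envelopes for two source tracks (A, B)
# and the recorded mix (M) over a transition region.
bands, frames = 4, 200
rng = np.random.default_rng(0)
A = rng.random((bands, frames)) + 0.1
B = rng.random((bands, frames)) + 0.1
true_gA = np.linspace(1.0, 0.0, frames)          # fade out
true_gB = np.linspace(0.0, 1.0, frames)          # fade in
M = true_gA * A + true_gB * B

# Estimate per-band gain trajectories by constrained least squares,
# with a smoothness penalty on frame-to-frame gain changes.
gA = cp.Variable((bands, frames), nonneg=True)
gB = cp.Variable((bands, frames), nonneg=True)
residual = cp.sum_squares(cp.multiply(gA, A) + cp.multiply(gB, B) - M)
smooth = cp.sum_squares(cp.diff(gA, axis=1)) + cp.sum_squares(cp.diff(gB, axis=1))
problem = cp.Problem(cp.Minimize(residual + 10.0 * smooth), [gA <= 1, gB <= 1])
problem.solve()

print("mean absolute gain error:",
      np.abs(gA.value.mean(axis=0) - true_gA).mean())
```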
-
Benedict Gaster and Ryan Challinor. 2021. Bespoke Anywhere. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.02c348fb
Download PDF DOIThis paper reports on a project aimed at breaking away from the portability concerns of native DSP code between different platforms, thus freeing the instrument designer from the burden of porting new Digital Musical Instruments (DMIs) to different architectures. Bespoke Anywhere is a live modular-style software DMI with an instance of the Audio Anywhere (AA) framework, which enables working with audio plugins that are compiled once and run anywhere. At the heart of Audio Anywhere is an audio engine whose Digital Signal Processing (DSP) components are written in Faust and deployed with Web Assembly (Wasm). We demonstrate Bespoke Anywhere as a hosting application, for live performance, and music production. We focus on an instance of AA using Faust for DSP, statically compiled to portable Wasm, and Graphical User Interfaces (GUIs) described in JSON, both of which are loaded dynamically into our modified version of Bespoke.
@inproceedings{NIME21_88, author = {Gaster, Benedict and Challinor, Ryan}, title = {Bespoke Anywhere}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {88}, doi = {10.21428/92fbeb44.02c348fb}, url = {https://nime.pubpub.org/pub/8jaqbl7m}, presentation-video = {https://youtu.be/ayJzFVRXPMs} }
-
Sang-won Leigh and Jeonghyun (Jonna) Lee. 2021. A Study on Learning Advanced Skills on Co-Playable Robotic Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.002be215
Download PDF DOILearning advanced skills on a musical instrument takes a range of physical and cognitive efforts. For instance, practicing polyrhythm is a complex task that requires the development of both musical and physical skills. This paper explores the use of automation in the context of learning advanced skills on the guitar. Our robotic guitar is capable of physically plucking on the strings along with a musician, providing both haptic and audio guidance to the musician. We hypothesize that a multimodal and first-person experience of “being able to play” could increase learning efficacy. We discuss the novel learning application and a user study, through which we illustrate the implication and potential issues in systems that provide temporary skills and in-situ multimodal guidance for learning.
@inproceedings{NIME21_9, author = {Leigh, Sang-won and Lee, Jeonghyun (Jonna)}, title = {A Study on Learning Advanced Skills on Co-Playable Robotic Instruments}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2021}, month = jun, address = {Shanghai, China}, issn = {2220-4806}, articleno = {9}, doi = {10.21428/92fbeb44.002be215}, url = {https://nime.pubpub.org/pub/h5dqsvpm}, presentation-video = {https://youtu.be/MeXrN95jajU} }
2020
-
Ruolun Weng. 2020. Interactive Mobile Musical Application using faust2smartphone. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 1–4. http://doi.org/10.5281/zenodo.4813164
Download PDF DOIWe introduce faust2smartphone, a tool that generates an edit-ready project for musical mobile applications, connecting the Faust programming language with mobile application development. It is an extended implementation of faust2api. Faust DSP objects can be easily embedded as a high-level API so that developers can access various functions and elements across different mobile platforms. This paper provides several modes and technical details on the structure and implementation of this system as well as some applications and future directions for this tool.
@inproceedings{NIME20_0, author = {Weng, Ruolun}, title = {Interactive Mobile Musical Application using faust2smartphone}, pages = {1--4}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813164}, url = {https://www.nime.org/proceedings/2020/nime2020_paper0.pdf} }
-
John Sullivan, Julian Vanasse, Catherine Guastavino, and Marcelo Wanderley. 2020. Reinventing the Noisebox: Designing Embedded Instruments for Active Musicians. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 5–10. http://doi.org/10.5281/zenodo.4813166
Download PDF DOIThis paper reports on the user-driven redesign of an embedded digital musical instrument that has yielded a trio of new instruments, informed by early user feedback and co-design workshops organized with active musicians. Collectively, they share a stand-alone design, digitally fabricated enclosures, and a common sensor acquisition and sound synthesis architecture, yet each is unique in its playing technique and sonic output. We focus on the technical design of the instruments and provide examples of key design specifications that were derived from user input, while reflecting on the challenges to, and opportunities for, creating instruments that support active practices of performing musicians.
@inproceedings{NIME20_1, author = {Sullivan, John and Vanasse, Julian and Guastavino, Catherine and Wanderley, Marcelo}, title = {Reinventing the Noisebox: Designing Embedded Instruments for Active Musicians}, pages = {5--10}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813166}, url = {https://www.nime.org/proceedings/2020/nime2020_paper1.pdf}, presentation-video = {https://youtu.be/DUMXJw-CTVo} }
-
Darrell J Gibson and Richard Polfreman. 2020. Star Interpolator – A Novel Visualization Paradigm for Graphical Interpolators. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 49–54. http://doi.org/10.5281/zenodo.4813168
Download PDF DOIThis paper presents a new visualization paradigm for graphical interpolation systems, known as Star Interpolation, that has been specifically created for sound design applications. Through the presented investigation of previous visualizations, it becomes apparent that the existing visuals in this class of system generally relate to the interpolation model that determines the weightings of the presets and not to the sonic output. The Star Interpolator looks to resolve this deficiency by providing visual cues that relate to the parameter space. Through comparative exploration it has been found that this visualization provides a number of benefits over the previous systems. It is also shown that hybrid visualizations can be generated that combine benefits of the new visualization with the existing interpolation models. These can then be accessed by using an Interactive Visualization (IV) approach. The results from our exploration of these visualizations are encouraging, and they appear to be advantageous when using the interpolators for sound design tasks. Therefore, it is proposed that formal usability testing be undertaken to measure the potential value of this form of visualization.
@inproceedings{NIME20_10, author = {Gibson, Darrell J and Polfreman, Richard}, title = {Star Interpolator – A Novel Visualization Paradigm for Graphical Interpolators}, pages = {49--54}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813168}, url = {https://www.nime.org/proceedings/2020/nime2020_paper10.pdf}, presentation-video = {https://youtu.be/3ImRZdSsP-M} }
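Graphical interpolators of the kind surveyed above typically derive preset weightings from the cursor's distance to each preset handle. The generic inverse-distance-weighting sketch below illustrates that class of interpolation model; it is not the Star Interpolator itself, and the handle positions and parameter vectors are made up for the example.

```python
import numpy as np

# Generic inverse-distance-weighted (IDW) preset interpolation: presets
# sit at 2-D handle positions, and a cursor position yields a weighted
# blend of their synth parameter vectors.
def interpolate(cursor, handles, presets, power=2.0, eps=1e-9):
    d = np.linalg.norm(handles - cursor, axis=1)
    if np.any(d < eps):                      # cursor exactly on a handle
        return presets[np.argmin(d)]
    w = 1.0 / d ** power
    w /= w.sum()
    return w @ presets                       # weighted sum of parameter vectors

handles = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
presets = np.array([[0.2, 0.9, 0.1],         # e.g. [cutoff, resonance, attack]
                    [0.8, 0.1, 0.5],
                    [0.5, 0.5, 0.9]])
print(interpolate(np.array([0.4, 0.3]), handles, presets))
```

A visualization tied only to the weights says little about how the blended parameter vector actually behaves across the space, which is the gap the Star Interpolation visuals aim to close.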
-
Laurel S Pardue, Miguel Ortiz, Maarten van Walstijn, Paul Stapleton, and Matthew Rodger. 2020. Vodhrán: collaborative design for evolving a physical model and interface into a proto-instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 523–524. http://doi.org/10.5281/zenodo.4813170
Download PDF DOIThis paper reports on the process of development of a virtual-acoustic proto-instrument, Vodhrán, based on a physical model of a plate, within a musical performance-driven ecosystemic environment. Performers explore the plate model via tactile interaction through a Sensel Morph interface, chosen to allow damping and localised striking consistent with playing hand percussion. Through an iteration of prototypes, we have designed an embedded proto-instrument that allows a bodily interaction between the performer and the virtual-acoustic plate in a way that redirects from the perception of the Sensel as a touchpad and reframes it as a percussive surface. Due to the computational effort required to run such a rich physical model and the necessity to provide a natural interaction, the audio processing is implemented on a high powered single board computer. We describe the design challenges and report on the technological solutions we have found in the implementation of Vodhrán which we believe are valuable to the wider NIME community.
@inproceedings{NIME20_100, author = {Pardue, Laurel S and Ortiz, Miguel and van Walstijn, Maarten and Stapleton, Paul and Rodger, Matthew}, title = {Vodhrán: collaborative design for evolving a physical model and interface into a proto-instrument}, pages = {523--524}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813170}, url = {https://www.nime.org/proceedings/2020/nime2020_paper100.pdf} }
-
Satvik Venkatesh, Edward Braund, and Eduardo Miranda. 2020. Designing Brain-computer Interfaces for Sonic Expression. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 525–530. http://doi.org/10.5281/zenodo.4813172
Download PDF DOIBrain-computer interfaces (BCIs) are beneficial for patients who are suffering from motor disabilities because they offer a means of creative expression, which improves mental well-being. BCIs aim to establish a direct communication medium between the brain and the computer. Therefore, unlike conventional musical interfaces, they do not require muscular power. This paper explores the potential of building sound synthesisers with BCIs that are based on steady-state visually evoked potential (SSVEP). It investigates novel ways to enable patients with motor disabilities to express themselves. It presents a new concept called sonic expression: expressing oneself purely through the synthesis of sound. It introduces new layouts and designs for BCI-based sound synthesisers, and the limitations of these interfaces are discussed. An evaluation of different sound synthesis techniques is conducted to find an appropriate one for such systems. Synthesis techniques are evaluated and compared based on a framework governed by sonic expression.
@inproceedings{NIME20_101, author = {Venkatesh, Satvik and Braund, Edward and Miranda, Eduardo}, title = {Designing Brain-computer Interfaces for Sonic Expression}, pages = {525--530}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813172}, url = {https://www.nime.org/proceedings/2020/nime2020_paper101.pdf} }
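SSVEP-based interfaces such as the one described above work by detecting which flickering on-screen target dominates the EEG spectrum. The sketch below shows one common detection approach (Welch power spectral density evaluated at candidate flicker frequencies); the sample rate, frequencies and the fabricated EEG trial are assumptions, and the authors' actual pipeline may differ.

```python
import numpy as np
from scipy.signal import welch

fs = 250.0                                   # EEG sample rate (assumed)
flicker_freqs = [8.0, 10.0, 12.0, 15.0]      # one frequency per on-screen target

# Fabricated 4-second trial in which the 12 Hz target is attended.
t = np.arange(0, 4, 1 / fs)
eeg = 2.0 * np.sin(2 * np.pi * 12.0 * t) + np.random.randn(t.size)

# Pick the flicker frequency with the strongest spectral power and use
# that choice to select a synthesiser control or layout item.
f, psd = welch(eeg, fs=fs, nperseg=512)
powers = [psd[np.argmin(np.abs(f - ff))] for ff in flicker_freqs]
chosen = flicker_freqs[int(np.argmax(powers))]
print(f"selected target flickering at {chosen} Hz")
```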
-
Duncan A.H. Williams, Bruno Fazenda, Victoria J. Williamson, and Gyorgy Fazekas. 2020. Biophysiologically synchronous computer generated music improves performance and reduces perceived effort in trail runners. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 531–536. http://doi.org/10.5281/zenodo.4813174
Download PDF DOIMusic has previously been shown to be beneficial in improving runners’ performance in treadmill-based experiments. This paper evaluates a generative music system, HEARTBEATS, designed to create biosignal-synchronous music in real-time according to an individual athlete’s heart-rate or cadence (steps per minute). The tempo, melody, and timbral features of the generated music are modulated according to biosensor input from each runner using a wearable Bluetooth sensor. We compare the relative performance of athletes listening to heart-rate and cadence synchronous music, across a randomized trial (N=57) on a trail course with 76 ft of elevation. Participants were instructed to continue until perceived effort went beyond 18 on the Borg rating of perceived exertion scale. We found that cadence-synchronous music improved performance and decreased perceived effort in male runners, and improved performance but not perceived effort in female runners, in comparison to heart-rate synchronous music. This work has implications for the future design and implementation of novel portable music systems and in music-assisted coaching.
@inproceedings{NIME20_102, author = {Williams, Duncan A.H. and Fazenda, Bruno and Williamson, Victoria J. and Fazekas, Gyorgy}, title = {Biophysiologically synchronous computer generated music improves performance and reduces perceived effort in trail runners}, pages = {531--536}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813174}, url = {https://www.nime.org/proceedings/2020/nime2020_paper102.pdf} }
-
Gilberto Bernardes. 2020. Interfacing Sounds: Hierarchical Audio-Content Morphologies for Creative Re-purposing in earGram 2.0. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 537–542. http://doi.org/10.5281/zenodo.4813176
Download PDF DOIAudio content-based processing has become a pervasive methodology for techno-fluent musicians. System architectures typically create thumbnail audio descriptions, based on signal processing methods, to visualize, retrieve and transform musical audio efficiently. Towards enhanced usability of these descriptor-based frameworks for the music community, the paper advances a minimal content-based audio description scheme, rooted in primary musical notation attributes at the threefold sound object, meso and macro hierarchies. Multiple perceptually-guided viewpoints from rhythmic, harmonic, timbral and dynamic attributes define a discrete and finite alphabet with minimal formal and subjective assumptions using unsupervised and user-guided methods. The Factor Oracle automaton is then adopted to model and visualize temporal morphology. The generative musical applications enabled by the descriptor-based framework at multiple structural hierarchies are discussed.
@inproceedings{NIME20_103, author = {Bernardes, Gilberto}, title = {Interfacing Sounds: Hierarchical Audio-Content Morphologies for Creative Re-purposing in earGram 2.0}, pages = {537--542}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813176}, url = {https://www.nime.org/proceedings/2020/nime2020_paper103.pdf}, presentation-video = {https://youtu.be/zEg9Cpir8zA} }
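For readers unfamiliar with the Factor Oracle automaton named in the abstract, the standard online construction is compact enough to sketch. The symbol sequence below stands in for whatever descriptor cluster labels the analysis produces; that substitution is purely illustrative.

```python
# Standard online Factor Oracle construction (Allauzen et al.) over a
# symbol sequence. States are 0..n; 'trans' holds forward transitions,
# 'sfx' the suffix links used for navigation and recombination.
def factor_oracle(sequence):
    n = len(sequence)
    trans = [dict() for _ in range(n + 1)]
    sfx = [-1] * (n + 1)
    for i, sym in enumerate(sequence, start=1):
        trans[i - 1][sym] = i                # link previous state to the new one
        k = sfx[i - 1]
        while k > -1 and sym not in trans[k]:
            trans[k][sym] = i                # add factor links along the suffix path
            k = sfx[k]
        sfx[i] = 0 if k == -1 else trans[k][sym]
    return trans, sfx

trans, sfx = factor_oracle("abbcabc")
print(trans[0])   # transitions out of the initial state
print(sfx)        # suffix-link structure
```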
-
Joung Min Han and Yasuaki Kakehi. 2020. ParaSampling: A Musical Instrument with Handheld Tapehead Interfaces for Impromptu Recording and Playing on a Magnetic Tape. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 543–544. http://doi.org/10.5281/zenodo.4813178
Download PDF DOIFor a long time, magnetic tape has been commonly utilized as one of the physical media for recording and playing music. In this research, we propose a novel interactive musical instrument called ParaSampling that utilizes the technology of magnetic sound recording, and an improvisational sound-playing method based on the instrument. While a conventional cassette tape player has a single, rigidly placed tapehead, our instrument utilizes multiple handheld tapehead modules as an interface. Players can hold the interfaces and press them against the rotating magnetic tape at any point to record or reproduce sounds. The player can also easily erase and rewrite the sound recorded on the tape. With this instrument, they can achieve improvised and unique musical expressions through tangible and spatial interactions. In this paper, we describe the system design of ParaSampling, the implementation of the prototype system, and discuss the musical expressions enabled by the system.
@inproceedings{NIME20_104, author = {Han, Joung Min and Kakehi, Yasuaki}, title = {ParaSampling: A Musical Instrument with Handheld Tapehead Interfaces for Impromptu Recording and Playing on a Magnetic Tape}, pages = {543--544}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813178}, url = {https://www.nime.org/proceedings/2020/nime2020_paper104.pdf} }
-
Giorgos Filandrianos, Natalia Kotsani, Edmund G Dervakos, Giorgos Stamou, Vaios Amprazis, and Panagiotis Kiourtzoglou. 2020. Brainwaves-driven Effects Automation in Musical Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 545–546. http://doi.org/10.5281/zenodo.4813180
Download PDF DOIA variety of controllers with multifarious sensors and functions have maximized performers’ real-time control capabilities. The idea behind this project was to create an interface that enables interaction between performers and the effect processor by measuring their brain wave amplitudes (e.g., alpha, beta, theta, delta and gamma), not necessarily with the user’s awareness. We achieved this by using an electroencephalography (EEG) sensor to detect the performer’s different emotional states and, based on these, sending MIDI messages for the automation of digital processing units. The aim is to create a new generation of digital processor units that could be automatically configured in real-time given the emotions or thoughts of the performer or the audience. By introducing emotional-state information into the real-time control of several aspects of artistic expression, we highlight the impact of surprise and uniqueness in the artistic performance.
@inproceedings{NIME20_105, author = {Filandrianos, Giorgos and Kotsani, Natalia and Dervakos, Edmund G and Stamou, Giorgos and Amprazis, Vaios and Kiourtzoglou, Panagiotis}, title = {Brainwaves-driven Effects Automation in Musical Performance}, pages = {545--546}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813180}, url = {https://www.nime.org/proceedings/2020/nime2020_paper105.pdf} }
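The pipeline sketched in the abstract (EEG band amplitudes driving effect automation via MIDI) can be outlined in a few lines. Everything below is a hedged illustration: the band edges, scaling, controller number and output port name are hypothetical, and a real system would read from an EEG device rather than random data.

```python
import numpy as np
from scipy.signal import welch
import mido

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # Hz, assumed

def band_power(eeg, fs, lo, hi):
    # Welch power spectral density, integrated over one frequency band.
    f, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 512))
    mask = (f >= lo) & (f < hi)
    return np.trapz(psd[mask], f[mask])

def alpha_to_cc(eeg, fs, full_scale):
    # Scale alpha-band power into a 0-127 MIDI control-change value.
    p = band_power(eeg, fs, *BANDS["alpha"])
    return int(np.clip(127 * p / full_scale, 0, 127))

fs = 250.0
eeg_window = np.random.randn(int(2 * fs))      # stand-in for live sensor data
value = alpha_to_cc(eeg_window, fs, full_scale=5.0)

out = mido.open_output("EffectsUnit")          # hypothetical MIDI port name
out.send(mido.Message("control_change", control=74, value=value))
```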
-
Graham Wakefield, Michael Palumbo, and Alexander Zonta. 2020. Affordances and Constraints of Modular Synthesis in Virtual Reality. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 547–550. http://doi.org/10.5281/zenodo.4813182
Download PDF DOIThis article focuses on the rich potential of hybrid domain translation of modular synthesis (MS) into virtual reality (VR). It asks: to what extent can what is valued in studio-based MS practice find a natural home or rich new interpretations in the immersive capacities of VR? The article attends particularly to the relative affordances and constraints of each as they inform the design and development of a new system ("Mischmasch") supporting collaborative and performative patching of Max gen patches and operators within a shared room-scale VR space.
@inproceedings{NIME20_106, author = {Wakefield, Graham and Palumbo, Michael and Zonta, Alexander}, title = {Affordances and Constraints of Modular Synthesis in Virtual Reality}, pages = {547--550}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813182}, url = {https://www.nime.org/proceedings/2020/nime2020_paper106.pdf} }
-
emmanouil moraitis. 2020. Symbiosis: a biological taxonomy for modes of interaction in dance-music collaborations. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 551–556. http://doi.org/10.5281/zenodo.4813184
Download PDF DOIFocusing on interactive performance works borne out of dancer-musician collaborations, this paper investigates the relationship between the mediums of sound and movement through a conceptual interpretation of the biological phenomenon of symbiosis. Describing the close and persistent interactions between organisms of different species, symbioses manifest across a spectrum of relationship types, each identified according to the health effect experienced by the engaged organisms. This biological taxonomy is appropriated within a framework which identifies specific modes of interaction between sound and movement according to the collaborating practitioners’ intended outcome, and required provisions, cognition of affect, and system operation. Using the symbiotic framework as an analytical tool, six dancer-musician collaborations from the field of NIME are examined in respect to the employed modes of interaction within each of the four examined areas. The findings reveal the emergence of multiple modes in each work, as well as examples of mutation between different modes over the course of a performance. Furthermore, the symbiotic concept provides a novel understanding of the ways gesture recognition technologies (GRTs) have redefined the relationship dynamics between dancers and musicians, and suggests a more efficient and inclusive approach in communicating the potential and limitations presented by Human-Computer Interaction tools.
@inproceedings{NIME20_107, author = {moraitis, emmanouil}, title = {Symbiosis: a biological taxonomy for modes of interaction in dance-music collaborations}, pages = {551--556}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813184}, url = {https://www.nime.org/proceedings/2020/nime2020_paper107.pdf}, presentation-video = {https://youtu.be/5X6F_nL8SOg} }
-
Antonella Nonnis and Nick Bryan-Kinns. 2020. Όλοι: music making to scaffold social playful activities and self-regulation. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 557–558. http://doi.org/10.5281/zenodo.4813186
Download PDF DOIWe present Olly, a musical textile tangible user interface (TUI) designed around the observations of a group of five children with autism who like music. The intention is to support scaffolding social interactions and sensory regulation during a semi-structured and open-ended playful activity. Olly was tested in the dance studio of a special education needs (SEN) school in North-East London, UK, for a period of 5 weeks, every Thursday afternoon for 30 minutes. Olly uses one Bare touch board in MIDI mode and four stretch analog sensors embedded inside four elastic ribbons. These ribbons top the main body of the installation, which is made from an inflatable gym ball wrapped in felt. Each of the ribbons plays a different instrument and triggers different harmonic chords. Olly allows players to produce pleasant melodies when interacting with it in solo mode and more complex harmonies when playing together with others. Results show great potential for carefully designed musical TUI implementations aimed at scaffolding social play while affording self-regulation in SEN contexts. We present a brief introduction on the background and motivations, design considerations and results.
@inproceedings{NIME20_108, author = {Nonnis, Antonella and Bryan-Kinns, Nick}, title = {Όλοι: music making to scaffold social playful activities and self-regulation}, pages = {557--558}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813186}, url = {https://www.nime.org/proceedings/2020/nime2020_paper108.pdf} }
-
Sara Sithi-Amnuai. 2020. Exploring Identity Through Design: A Focus on the Cultural Body Via Nami. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 559–563. http://doi.org/10.5281/zenodo.4813188
Download PDF DOIIdentity is inextricably linked to culture and sustained through the creation and performance of music and dance, yet the agency and cultural tools informing the design and performance application of gestural controllers are not widely discussed. The purpose of this paper is to discuss the cultural body, its consideration in existing gestural controller design, and how cultural design methods have the potential to extend musical/social identities and/or traditions within a technological context. In an effort to connect and reconnect with the author’s personal Nikkei heritage, this paper will discuss the design of Nami – a custom-built gestural controller – and its applicability to extend the author’s cultural body through a community-centric case study performance.
@inproceedings{NIME20_109, author = {Sithi-Amnuai, Sara}, title = {Exploring Identity Through Design: A Focus on the Cultural Body Via Nami}, pages = {559--563}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813188}, url = {https://www.nime.org/proceedings/2020/nime2020_paper109.pdf}, presentation-video = {https://youtu.be/QCUGtE_z1LE} }
-
Anna Xambó and Gerard Roma. 2020. Performing Audiences: Composition Strategies for Network Music using Mobile Phones. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 55–60. http://doi.org/10.5281/zenodo.4813192
Download PDF DOIWith the development of web audio standards, it has quickly become technically easy to develop and deploy software for inviting audiences to participate in musical performances using their mobile phones. Thus, a new audience-centric musical genre has emerged, which aligns with artistic manifestations where there is an explicit inclusion of the public (e.g. participatory art, cinema or theatre). Previous research has focused on analysing this new genre from historical, social organisation and technical perspectives. This follow-up paper contributes with reflections on technical and aesthetic aspects of composing within this audience-centric approach. We propose a set of 13 composition dimensions that deal with the role of the performer, the role of the audience, the location of sound and the type of feedback, among others. From a reflective approach, four participatory pieces developed by the authors are analysed using the proposed dimensions. Finally, we discuss a set of recommendations and challenges for the composers-developers of this new and promising musical genre. This paper concludes discussing the implications of this research for the NIME community.
@inproceedings{NIME20_11, author = {Xambó, Anna and Roma, Gerard}, title = {Performing Audiences: Composition Strategies for Network Music using Mobile Phones}, pages = {55--60}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813192}, url = {https://www.nime.org/proceedings/2020/nime2020_paper11.pdf} }
-
Joe Wright. 2020. The Appropriation and Utility of Constrained ADMIs. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 564–569. http://doi.org/10.5281/zenodo.4813194
Download PDF DOIThis paper reflects on players’ first responses to a constrained Accessible Digital Musical Instrument (ADMI) in open, child-led sessions with seven children at a special school. Each player’s gestures with the instrument were sketched, categorised and compared with those of others among the group. Additionally, sensor data from the instruments was recorded and analysed to give a secondary indication of playing style, based on note and silence durations. In accord with previous studies, the high degree of constraints led to a diverse range of playing styles, allowing each player to appropriate and explore the instruments within a short inaugural session. The open, undirected sessions also provided insights which could potentially direct future work based on each person’s responses to the instruments. The paper closes with a short discussion of these diverse styles, and the potential role constrained ADMIs could serve as ’ice-breakers’ in musical projects that seek to co-produce or co-design with neurodiverse children and young people.
@inproceedings{NIME20_110, author = {Wright, Joe}, title = {The Appropriation and Utility of Constrained ADMIs}, pages = {564--569}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813194}, url = {https://www.nime.org/proceedings/2020/nime2020_paper110.pdf}, presentation-video = {https://youtu.be/RhaIzCXQ3uo} }
-
Lia Mice and Andrew McPherson. 2020. From miming to NIMEing: the development of idiomatic gestural language on large scale DMIs. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 570–575. http://doi.org/10.5281/zenodo.4813200
Download PDF DOIWhen performing with new instruments, musicians often develop new performative gestures and playing techniques. Music performance studies on new instruments often consider interfaces that feature a spectrum of gestures similar to already existing sound production techniques. This paper considers the choices performers make when creating an idiomatic gestural language for an entirely unfamiliar instrument. We designed a musical interface with a unique large-scale layout to encourage new performers to create fully original instrument-body interactions. We conducted a study where trained musicians were invited to perform one of two versions of the same instrument, each physically identical but with a different tone mapping. The study results reveal insights into how musicians develop novel performance gestures when encountering a new instrument characterised by an unfamiliar shape and size. Our discussion highlights the impact of an instrument’s scale and layout on the emergence of new gestural vocabularies and on the qualities of the music performed.
@inproceedings{NIME20_111, author = {Mice, Lia and McPherson, Andrew}, title = {From miming to NIMEing: the development of idiomatic gestural language on large scale DMIs}, pages = {570--575}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813200}, url = {https://www.nime.org/proceedings/2020/nime2020_paper111.pdf}, presentation-video = {https://youtu.be/mnJN8ELneUU} }
-
William C Payne, Ann Paradiso, and Shaun Kane. 2020. Cyclops: Designing an eye-controlled instrument for accessibility and flexible use. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 576–580. http://doi.org/10.5281/zenodo.4813204
Download PDF DOIThe Cyclops is an eye-gaze controlled instrument designed for live performance and improvisation. It is primarily motivated by a need for expressive musical instruments that are more easily accessible to people who rely on eye trackers for computer access, such as those with amyotrophic lateral sclerosis (ALS). In its current implementation, the Cyclops contains a synthesizer and sequencer, and provides the ability to easily create and automate musical parameters and effects through recording eye-gaze gestures on a two-dimensional canvas. In this paper, we frame our prototype in the context of previous eye-controlled instruments, and we discuss how we designed the Cyclops to make gaze-controlled music making as fun, accessible, and seamless as possible despite notable interaction challenges like latency, inaccuracy, and “Midas Touch.”
@inproceedings{NIME20_112, author = {Payne, William C and Paradiso, Ann and Kane, Shaun}, title = {Cyclops: Designing an eye-controlled instrument for accessibility and flexible use}, pages = {576--580}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813204}, url = {https://www.nime.org/proceedings/2020/nime2020_paper112.pdf}, presentation-video = {https://youtu.be/G6dxngoCx60} }
-
Adnan Marquez-Borbon. 2020. Collaborative Learning with Interactive Music Systems. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 581–586. http://doi.org/10.5281/zenodo.4813206
Download PDF DOIThis paper presents the results of an observational study focusing on the collaborative learning processes of a group of performers with an interactive musical system. The main goal of this study was to implement methods for learning and developing practice with these technological objects in order to generate future pedagogical methods. During the research period of six months, four participants regularly engaged in workshop-type scenarios where learning objectives were proposed and guided by themselves. The principal researcher, working as participant-observer, did not impose or prescribe learning objectives to the other members of the group. Rather, all participants had equal say in what was to be done and how it was to be accomplished. Results show that the group learning environment is rich in opportunities for learning, mutual teaching, and for establishing a communal practice for a given interactive musical system. Key findings suggest that learning by demonstration, observation and modelling are significant for learning in this context. Additionally, it was observed that a dialogue and a continuous flow of information between the members of the community is needed in order to motivate and further their learning.
@inproceedings{NIME20_113, author = {Marquez-Borbon, Adnan}, title = {Collaborative Learning with Interactive Music Systems}, pages = {581--586}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813206}, url = {https://www.nime.org/proceedings/2020/nime2020_paper113.pdf}, presentation-video = {https://youtu.be/1G0bOVlWwyI} }
-
Jens Vetter. 2020. WELLE - a web-based music environment for the blind. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 587–590. http://doi.org/10.5281/zenodo.4813208
Download PDF DOIThis paper presents WELLE, a web-based music environment for blind people, and describes its development, design, notation syntax and first experiences. WELLE is intended to serve as a collaborative, performative and educational tool to quickly create and record musical ideas. It is pattern-oriented, based on textual notation and focuses on accessibility, playful interaction and ease of use. WELLE was developed as part of the research project Tangible Signals and will also serve as a platform for the integration of upcoming new interfaces.
@inproceedings{NIME20_114, author = {Vetter, Jens}, title = {WELLE - a web-based music environment for the blind}, pages = {587--590}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813208}, url = {https://www.nime.org/proceedings/2020/nime2020_paper114.pdf} }
-
Margarida Pessoa, Cláudio Parauta, Pedro Luís, Isabela Corintha, and Gilberto Bernardes. 2020. Examining Temporal Trends and Design Goals of Digital Music Instruments for Education in NIME: A Proposed Taxonomy. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 591–595. http://doi.org/10.5281/zenodo.4813210
Download PDF DOIThis paper presents an overview of the design principles behind Digital Music Instruments (DMIs) for education across all editions of the International Conference on New Interfaces for Musical Expression (NIME). We compiled a comprehensive catalogue of over a hundred DMIs with varying degrees of applicability in educational practice. Each catalogue entry is annotated according to a proposed taxonomy for DMIs for education, rooted in the mechanics of control, mapping and feedback of an interactive music system, along with the required expertise of target user groups and the instrument learning curve. Global statistics unpack underlying trends and design goals across the chronological period of the NIME conference. In recent years, we note a growing number of DMIs targeting non-experts and with reduced requirements in terms of expertise. Stemming from the identified trends, we discuss future challenges in the design of DMIs for education towards enhanced degrees of variation and unpredictability.
@inproceedings{NIME20_115, author = {Pessoa, Margarida and Parauta, Cláudio and Luís, Pedro and Corintha, Isabela and Bernardes, Gilberto}, title = {Examining Temporal Trends and Design Goals of Digital Music Instruments for Education in NIME: A Proposed Taxonomy}, pages = {591--595}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813210}, url = {https://www.nime.org/proceedings/2020/nime2020_paper115.pdf} }
-
Laurel S Pardue, Kuljit Bhamra, Graham England, Phil Eddershaw, and Duncan Menzies. 2020. Demystifying tabla through the development of an electronic drum. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 596–599. http://doi.org/10.5281/zenodo.4813212
Download PDF DOIThe tabla is a traditional pitched two-piece Indian drum set, popular not only within South Asian music, but whose sounds also regularly feature in western music. Yet tabla remains an aural tradition, taught largely through a guru system heavy in custom and mystique. Tablas can also pose problems for school and professional performance environments as they are physically bulky, fragile, and reactive to environmental factors such as damp and heat. As part of a broader project to demystify tabla, we present an electronic tabla that plays nearly identically to an acoustic tabla and was created in order to make the tabla accessible and practical for a wider audience of students, professional musicians and composers. Along with the development of standardised tabla notation and instructional educational aids, the electronic tabla is designed to be compact, robust and easily tuned, and its electronic nature allows for scoring tabla through playing. Further, used as an interface, it allows the use of learned tabla technique to control other percussive sounds. We also discuss the technological approaches used to accurately capture the localized multi-touch rapid-fire strikes and damping that combine to make tabla such a captivating and virtuosic instrument.
@inproceedings{NIME20_116, author = {Pardue, Laurel S and Bhamra, Kuljit and England, Graham and Eddershaw, Phil and Menzies, Duncan}, title = {Demystifying tabla through the development of an electronic drum}, pages = {596--599}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813212}, url = {https://www.nime.org/proceedings/2020/nime2020_paper116.pdf}, presentation-video = {https://youtu.be/PPaHq8fQjB0} }
-
Juan D Sierra. 2020. SpeakerDrum. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 600–604. http://doi.org/10.5281/zenodo.4813216
Download PDF DOISpeakerDrum is an instrument composed of multiple Dual Voice Coil (DVC) speakers, in which two coils drive the same membrane. Here, however, one of the coils is used as a microphone, which the performer uses as an input interface for percussive gestures. Of course, this leads to potential feedback, but with enough control, a compelling exploration of resonance, haptic feedback and sound embodiment is possible.
@inproceedings{NIME20_117, author = {Sierra, Juan D}, title = {SpeakerDrum}, pages = {600--604}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813216}, url = {https://www.nime.org/proceedings/2020/nime2020_paper117.pdf} }
-
Matthew Caren, Romain Michon, and Matthew Wright. 2020. The KeyWI: An Expressive and Accessible Electronic Wind Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 605–608. http://doi.org/10.5281/zenodo.4813218
Download PDF DOIThis paper presents the KeyWI, an electronic wind instrument design based on the melodica that both improves upon limitations in current systems and is general and powerful enough to support a variety of applications. Four opportunities for growth are identified in current electronic wind instrument systems, which are then used as focuses in the development and evaluation of the instrument. The instrument features a breath pressure sensor with a large dynamic range, a keyboard that allows for polyphonic pitch selection, and a completely integrated construction. Sound synthesis is performed with Faust code compiled to the Bela Mini, which offers low-latency audio and a simple yet powerful development workflow. In order to be as accessible and versatile as possible, the hardware and software are entirely open-source, and fabrication requires only common maker tools.
@inproceedings{NIME20_118, author = {Caren, Matthew and Michon, Romain and Wright, Matthew}, title = {The KeyWI: An Expressive and Accessible Electronic Wind Instrument}, pages = {605--608}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813218}, url = {https://www.nime.org/proceedings/2020/nime2020_paper118.pdf} }
-
Pelle Juul Christensen, Dan Overholt, and Stefania Serafin. 2020. The Daïs: A Haptically Enabled New Interface for Musical Expression for Controlling Physical Models for Sound Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 609–612. http://doi.org/10.5281/zenodo.4813220
Download PDF DOIIn this paper we provide a detailed description of the development of a new interface for musical expression, the Daïs, with focus on an iterative development process, control of physical models for sound synthesis, and haptic feedback. The development process, consisting of three iterations, is covered along with a discussion of the tools and methods used. The sound synthesis algorithm for the Daïs, a physical model of a bowed string, is covered and the mapping from the interface parameters to those of the synthesis algorithm is described in detail. Using a qualitative test, the affordances, advantages, and disadvantages of the chosen design, synthesis algorithm, and parameter mapping are highlighted. Lastly, the possibilities for future work are discussed with special focus on alternate sounds and mappings.
@inproceedings{NIME20_119, author = {Christensen, Pelle Juul and Overholt, Dan and Serafin, Stefania}, title = {The Daïs: A Haptically Enabled New Interface for Musical Expression for Controlling Physical Models for Sound Synthesis}, pages = {609--612}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813220}, url = {https://www.nime.org/proceedings/2020/nime2020_paper119.pdf}, presentation-video = {https://youtu.be/XOvnc_AKKX8} }
-
Samuel J Hunt, Tom Mitchell, and Chris Nash. 2020. Composing computer generated music, an observational study using IGME: the Interactive Generative Music Environment. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 61–66. http://doi.org/10.5281/zenodo.4813222
Download PDF DOIComputer composed music remains a novel and challenging problem to solve. Despite an abundance of techniques and systems, little research has explored how these might be useful for end-users looking to compose with generative and algorithmic music techniques. User interfaces for generative music systems are often inaccessible to non-programmers and neglect established composition workflow and design paradigms that are familiar to computer-based music composers. We have developed a system called the Interactive Generative Music Environment (IGME) that attempts to bridge the gap between generative music and music sequencing software, through an easy to use score editing interface. This paper discusses a series of user studies in which users explore generative music composition with IGME. A questionnaire evaluates the user’s perception of interacting with generative music, and from this we provide recommendations for future generative music systems and interfaces.
@inproceedings{NIME20_12, author = {Hunt, Samuel J and Mitchell, Tom and Nash, Chris}, title = {Composing computer generated music, an observational study using IGME: the Interactive Generative Music Environment}, pages = {61--66}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813222}, url = {https://www.nime.org/proceedings/2020/nime2020_paper12.pdf} }
-
Joao Wilbert, Don D Haddad, Hiroshi Ishii, and Joseph Paradiso. 2020. Patch-corde: an expressive patch-cable for the modular synthesizer. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 613–616. http://doi.org/10.5281/zenodo.4813224
Download PDF DOIMany opportunities and challenges exist in both the control and performative aspects of today’s modular synthesizers. The user interface prevailing in the world of synthesizers and music controllers has always revolved around knobs, faders, switches, dials, buttons, or capacitive touchpads, to name a few. This paper presents a novel way of interacting with a modular synthesizer by exploring the affordances of cord-based UIs. A special patch cable was developed using commercially available piezo-resistive rubber cords, and was adapted to fit the 3.5 mm mono audio jack, making it compatible with the Eurorack modular-synth standard. Moreover, a module was developed to condition this stretchable sensor/cable, allowing multiple Patch-cordes to be used in a given patch simultaneously. This paper also presents a vocabulary of interactions, labeled through various physical actions, turning the patch cable into an expressive controller that complements traditional patching techniques.
@inproceedings{NIME20_120, author = {Wilbert, Joao and Haddad, Don D and Ishii, Hiroshi and Paradiso, Joseph}, title = {Patch-corde: an expressive patch-cable for the modular synthesizer.}, pages = {613--616}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813224}, url = {https://www.nime.org/proceedings/2020/nime2020_paper120.pdf}, presentation-video = {https://youtu.be/7gklx8ek8U8} }
-
Jiří Suchánek. 2020. SOIL CHOIR v.1.3 - soil moisture sonification installation. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 617–618. http://doi.org/10.5281/zenodo.4813226
Download PDF DOIArtistic sonification offers a creative method for attaching direct semantic layers to abstract sounds. This paper is dedicated to the sound installation “Soil choir v.1.3”, which sonifies soil moisture at different depths and transforms this non-musical phenomenon into organized sound structures. The sonification of natural soil moisture processes tests the limits of our attention, patience and willingness to still perceive ultra-slow reactions, and examines the mechanisms of our sensory adaptation. Although the musical time of the installation is set to an almost non-human, environmental time scale (changes happen within hours, days, weeks or even months), the system can also be explored and even played as an instrument by placing sensors in different soil areas or pouring liquid into the soil and waiting for changes. The crucial aspect of the work was to design a sonification architecture that deals with extremely slow changes in the input data: the measured values from the moisture sensors. The result is a sound installation consisting of three objects, each with a different type of soil. Every object is a compact, independent unit consisting of three low-cost capacitive soil moisture sensors, a 1 m long perspex tube filled with soil, a full-range loudspeaker and a Bela platform with custom SuperCollider code. I developed this installation during 2019, and this paper gives insight into the aspects and issues connected with creating it.
@inproceedings{NIME20_121, author = {Suchánek, Jiří}, title = {SOIL CHOIR v.1.3 - soil moisture sonification installation}, pages = {617--618}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813226}, url = {https://www.nime.org/proceedings/2020/nime2020_paper121.pdf} }
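As a rough illustration of the mapping problem this abstract describes (extremely slow-moving sensor data driving sound parameters), the sketch below smooths a normalised moisture reading and maps it to a frequency. It is not the installation's actual SuperCollider code; the smoothing constant, value ranges and frequency range are invented purely for the example.

```python
# Illustrative sketch (not the installation's code): smooth a slowly changing
# moisture reading and map it to a synthesis parameter.
def smooth(prev: float, raw: float, alpha: float = 0.001) -> float:
    """Exponential moving average; a very small alpha keeps the output changing
    over minutes or hours rather than seconds."""
    return prev + alpha * (raw - prev)

def moisture_to_freq(moisture: float, lo: float = 80.0, hi: float = 400.0) -> float:
    """Map a normalised moisture value (0 = dry, 1 = saturated) to a frequency in Hz."""
    m = min(max(moisture, 0.0), 1.0)
    return lo + m * (hi - lo)

# Example: feed in hypothetical normalised readings sampled once per second.
state = 0.5
for raw in (0.48, 0.47, 0.47, 0.46):
    state = smooth(state, raw)
    print(f"smoothed={state:.4f}  freq={moisture_to_freq(state):.1f} Hz")
```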
-
Marinos Koutsomichalis. 2020. Rough-hewn Hertzian Multimedia Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 619–624. http://doi.org/10.5281/zenodo.4813228
Download PDF DOIThree DIY electronic instruments that the author has used in real-life multimedia performance contexts are scrutinised herein. The instruments are made intentionally rough-hewn, non-optimal and user-unfriendly in several respects, and are shown to draw upon experimental traits in electronics design and interfaces for music expression. The various ways in which such design traits affect their performance are outlined, as are their overall consequences for the artistic outcome and for individual experiences of it. It is shown that, to a varying extent, they all embody, mediate, and help actualise the specifics their parent projects revolve around. It is eventually suggested that in the context of an exploratory and hybrid artistic practice, bespoke instruments of this sort, their improvised performance, the material traits or processes they implement or pivot on, and the ideas/narratives that emerge thereof, may all intertwine and fuse into one another so that a clear distinction between them is not always possible, or meaningful. In this vein, this paper aims to be an account of such a practice upon which prospective researchers/artists may further build.
@inproceedings{NIME20_122, author = {Koutsomichalis, Marinos}, title = {Rough-hewn Hertzian Multimedia Instruments}, pages = {619--624}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813228}, url = {https://www.nime.org/proceedings/2020/nime2020_paper122.pdf}, presentation-video = {https://youtu.be/DWecR7exl8k} }
-
Taylor J Olsen. 2020. Animation, Sonification, and Fluid-Time: A Visual-Audioizer Prototype. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 625–630. http://doi.org/10.5281/zenodo.4813230
Download PDF DOIThe visual-audioizer is a patch created in Max in which the concept of fluid-time animation techniques, in tandem with basic computer vision tracking methods, can be used as a tool to allow the visual time-based media artist to create music. Visual aspects relating to the animator’s knowledge of motion, animated loops, and auditory synchronization derived from computer vision tracking methods, allow an immediate connection between the generated audio derived from visuals—becoming a new way to experience and create audio-visual media. A conceptual overview, comparisons of past/current audio-visual contributors, and a summary of the Max patch will be discussed. The novelty of practice-based animation methods in the field of musical expression, considerations of utilizing the visual-audioizer, and the future of fluid-time animation techniques as a tool of musical creativity will also be addressed.
@inproceedings{NIME20_123, author = {Olsen, Taylor J}, title = {Animation, Sonification, and Fluid-Time: A Visual-Audioizer Prototype}, pages = {625--630}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813230}, url = {https://www.nime.org/proceedings/2020/nime2020_paper123.pdf} }
-
Virginia de las Pozas. 2020. Semi-Automated Mappings for Object-Manipulating Gestural Control of Electronic Music. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 631–634. http://doi.org/10.5281/zenodo.4813232
Download PDF DOIThis paper describes a system for automating the generation of mapping schemes between human interaction with extramusical objects and electronic dance music. These mappings are determined through the comparison of sensor input to a synthesized matrix of sequenced audio. The goal of the system is to facilitate live performances that feature quotidian objects in the place of traditional musical instruments. The practical and artistic applications of musical control with quotidian objects are discussed. The associated object-manipulating gesture vocabularies are mapped to musical output so that the objects themselves may be perceived as DMIs. This strategy is used in a performance to explore the liveness qualities of the system.
@inproceedings{NIME20_124, author = {de las Pozas, Virginia}, title = {Semi-Automated Mappings for Object-Manipulating Gestural Control of Electronic Music}, pages = {631--634}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813232}, url = {https://www.nime.org/proceedings/2020/nime2020_paper124.pdf} }
-
Christodoulos Benetatos, Joseph VanderStel, and Zhiyao Duan. 2020. BachDuet: A Deep Learning System for Human-Machine Counterpoint Improvisation. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 635–640. http://doi.org/10.5281/zenodo.4813234
Download PDF DOIDuring the Baroque period, improvisation was a key element of music performance and education. Great musicians, such as J.S. Bach, were better known as improvisers than composers. Today, however, there is a lack of improvisation culture in classical music performance and education; classical musicians either are not trained to improvise, or cannot find other people to improvise with. Motivated by this observation, we develop BachDuet, a system that enables real-time counterpoint improvisation between a human and a machine. This system uses a recurrent neural network to process the human musician’s monophonic performance on a MIDI keyboard and generates the machine’s monophonic performance in real time. We develop a GUI to visualize the generated music content and to facilitate this interaction. We conduct user studies with 13 musically trained users and show the feasibility of two-party duet counterpoint improvisation and the effectiveness of BachDuet for this purpose. We also conduct listening tests with 48 participants and show that they cannot tell the difference between duets generated by human-machine improvisation using BachDuet and those generated by human-human improvisation. Objective evaluation is also conducted to assess the degree to which these improvisations adhere to common rules of counterpoint, showing promising results.
@inproceedings{NIME20_125, author = {Benetatos, Christodoulos and VanderStel, Joseph and Duan, Zhiyao}, title = {BachDuet: A Deep Learning System for Human-Machine Counterpoint Improvisation}, pages = {635--640}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813234}, url = {https://www.nime.org/proceedings/2020/nime2020_paper125.pdf}, presentation-video = {https://youtu.be/wFGW0QzuPPk} }
-
Olivier Capra, Florent Berthaut, and Laurent Grisoni. 2020. All You Need Is LOD : Levels of Detail in Visual Augmentations for the Audience. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 67–72. http://doi.org/10.5281/zenodo.4813236
Download PDF DOIBecause they break the physical link between gestures and sound, Digital Musical Instruments offer countless opportunities for musical expression. For the same reason, however, they may hinder the audience experience, making the musician’s contribution and expressiveness difficult to perceive. In order to cope with this issue without altering the instruments, researchers and artists alike have designed techniques to augment their performances with additional information, through audio, haptic or visual modalities. These techniques have however only been designed to offer a fixed level of information, without taking into account the variety of spectators’ expertise and preferences. In this paper, we investigate the design, implementation and effect on audience experience of visual augmentations with a controllable level of detail (LOD). We conduct a controlled experiment with 18 participants, including novices and experts. Our results show contrasts in the impact of LOD on experience and comprehension for experts and novices, and highlight the diversity of usage of visual augmentations by spectators.
@inproceedings{NIME20_13, author = {Capra, Olivier and Berthaut, Florent and Grisoni, Laurent}, title = {All You Need Is LOD : Levels of Detail in Visual Augmentations for the Audience}, pages = {67--72}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813236}, url = {https://www.nime.org/proceedings/2020/nime2020_paper13.pdf}, presentation-video = {https://youtu.be/3hIGu9QDn4o} }
-
Johnty Wang, Eduardo Meneses, and Marcelo Wanderley. 2020. The Scalability of WiFi for Mobile Embedded Sensor Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 73–76. http://doi.org/10.5281/zenodo.4813239
Download PDF DOIIn this work we test the performance of multiple ESP32 microcontrollers used as WiFi sensor interfaces in the context of real-time interactive systems. The number of devices is varied from 1 to 13, and individual sending rates from 50 to 2300 messages per second are tested to provide examples of various network load situations that may resemble a performance configuration. The overall end-to-end latency and bandwidth are measured as the basic performance metrics of interest. The results show that a maximum message rate of 2300 Hz is possible on a 2.4 GHz network for a single embedded device and decreases as more devices are added. During testing it was possible to have up to 7 devices transmitting at 100 Hz while attaining less than 10 ms latency, but performance degrades with increasing sending rates and numbers of devices. Performance can also vary significantly from day to day depending on network usage in a crowded environment.
@inproceedings{NIME20_14, author = {Wang, Johnty and Meneses, Eduardo and Wanderley, Marcelo}, title = {The Scalability of WiFi for Mobile Embedded Sensor Interfaces}, pages = {73--76}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813239}, url = {https://www.nime.org/proceedings/2020/nime2020_paper14.pdf} }
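For readers unfamiliar with this kind of measurement, the sketch below illustrates the general idea of probing message latency at a fixed send rate over a local network. It is not the authors' test harness: the echo endpoint address, the packet format, and the round-trip (rather than one-way) measurement are assumptions made only for illustration.

```python
# Illustrative latency probe (not the paper's setup): send timestamped UDP messages
# at a fixed rate to a hypothetical echo endpoint and measure round-trip time.
import socket
import struct
import time

ECHO_ADDR = ("192.168.1.50", 9000)   # hypothetical echo endpoint on the LAN
SEND_RATE_HZ = 100                   # messages per second, one of the tested rates
DURATION_S = 10

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(0.5)
period = 1.0 / SEND_RATE_HZ
rtts = []
seq = 0

end = time.monotonic() + DURATION_S
while time.monotonic() < end:
    t_send = time.monotonic()
    sock.sendto(struct.pack("!Id", seq, t_send), ECHO_ADDR)
    try:
        payload, _ = sock.recvfrom(64)
        _, t_orig = struct.unpack("!Id", payload)
        rtts.append(time.monotonic() - t_orig)
    except socket.timeout:
        pass  # treat as a dropped message
    seq += 1
    time.sleep(max(0.0, period - (time.monotonic() - t_send)))

if rtts:
    print(f"received {len(rtts)}/{seq} messages, "
          f"mean RTT {1000 * sum(rtts) / len(rtts):.2f} ms, "
          f"max RTT {1000 * max(rtts):.2f} ms")
```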
-
Florent Berthaut and Luke Dahl. 2020. Adapting & Openness: Dynamics of Collaboration Interfaces for Heterogeneous Digital Orchestras. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 77–82. http://doi.org/10.5281/zenodo.4813241
Download PDF DOIAdvanced musical cooperation, such as concurrent control of musical parameters or sharing data between instruments, has previously been investigated using multi-user instruments or orchestras of identical instruments. In the case of heterogeneous digital orchestras, where the instruments, interfaces, and control gestures can be very different, a number of issues may impede such collaboration opportunities. These include the lack of a standard method for sharing data or control, the incompatibility of parameter types, and limited awareness of other musicians’ activity and instrument structure. As a result, most collaborations remain limited to synchronising tempo or applying effects to audio outputs. In this paper we present two interfaces for real-time group collaboration amongst musicians with heterogeneous instruments. We conducted a qualitative study to investigate how these interfaces impact musicians’ experience and their musical output, we performed a thematic analysis of interviews, and we analysed logs of interactions. From these results we derive principles and guidelines for the design of advanced collaboration systems for heterogeneous digital orchestras, namely Adapting (to) the System, Support Development, Default to Openness, and Minimise Friction to Support Expressivity.
@inproceedings{NIME20_15, author = {Berthaut, Florent and Dahl, Luke}, title = {Adapting & Openness: Dynamics of Collaboration Interfaces for Heterogeneous Digital Orchestras}, pages = {77--82}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813241}, url = {https://www.nime.org/proceedings/2020/nime2020_paper15.pdf}, presentation-video = {https://youtu.be/jGpKkbWq_TY} }
-
Andreas Förster, Christina Komesker, and Norbert Schnell. 2020. SnoeSky and SonicDive - Design and Evaluation of Two Accessible Digital Musical Instruments for a SEN School. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 83–88. http://doi.org/10.5281/zenodo.4813243
Download PDF DOIMusic technology can provide persons who experience physical and/or intellectual barriers to using traditional musical instruments with a unique access to active music making. This applies particularly, but not exclusively, to people with physical and/or mental disabilities. This paper presents two Accessible Digital Musical Instruments (ADMIs) that were specifically designed for the students of a Special Educational Needs (SEN) school with a focus on intellectual disabilities. With SnoeSky, we present an ADMI in the form of an interactive starry sky that integrates into the Snoezel-Room. Here, users can ’play’ with ’melodic constellations’ using a flashlight. SonicDive is an interactive installation that enables users to explore a complex water soundscape through their movement inside a ball pool. The underlying goal of both ADMIs was the promotion of self-efficacy experiences while stimulating the users’ relaxation and activation. This paper reports on the design process involving the users and their environment. In addition, it describes some details of the technical implementation of the ADMIs as well as first indications of their effectiveness.
@inproceedings{NIME20_16, author = {Förster, Andreas and Komesker, Christina and Schnell, Norbert}, title = {SnoeSky and SonicDive - Design and Evaluation of Two Accessible Digital Musical Instruments for a SEN School}, pages = {83--88}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813243}, url = {https://www.nime.org/proceedings/2020/nime2020_paper16.pdf} }
-
Robert Pritchard and Ian Lavery. 2020. Inexpensive Colour Tracking to Overcome Performer ID Loss . Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 89–92. http://doi.org/10.5281/zenodo.4813245
Download PDF DOIThe NuiTrack IDE supports writing code for an active infrared camera to track up to six bodies, with up to 25 target points on each person. The system automatically assigns IDs to performers/users as they enter the tracking area, but when occlusion of a performer occurs, or when a user exits and then re-enters the tracking area, the system generates a new tracking ID upon rediscovery of the user. Because of this, any assigned and registered target tracking points for specific users are lost, as are the linked abilities of that performer to control media based on their movements. We describe a single-camera system for overcoming this problem by assigning IDs based on the colours worn by the performers, and then using colour tracking to update and confirm identification when a performer reappears after occlusion or upon re-entry. A video link is supplied showing the system used for an interactive dance work with four dancers controlling individual audio tracks.
@inproceedings{NIME20_17, author = {Pritchard, Robert and Lavery, Ian}, title = {Inexpensive Colour Tracking to Overcome Performer ID Loss }, pages = {89--92}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813245}, url = {https://www.nime.org/proceedings/2020/nime2020_paper17.pdf} }
-
Kiyu Nishida and kazuhiro jo. 2020. Modules for analog synthesizers using Aloe vera biomemristor. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 93–96. http://doi.org/10.5281/zenodo.4813249
Download PDF DOIIn this study, an analog synthesizer module using Aloe vera as a biomemristor was proposed. The recent revival of analog modular synthesizers explores novel possibilities of sound based on unconventional technologies, such as integrating biological forms and structures into traditional circuits. Biosignals have long been used in experimental music as material for composition, and the recent development of a biocomputer using a slime mold biomemristor expands the use of biomemristors in music. Based on prior research, the characteristics of Aloe vera as a biomemristor were electrically measured, and two types of analog synthesizer module were developed: a current-to-voltage converter and a current-spike-to-voltage converter. A live performance was conducted with the current-to-voltage converter module, and its possibilities as a new interface for musical expression were examined.
@inproceedings{NIME20_18, author = {Nishida, Kiyu and jo, kazuhiro}, title = {Modules for analog synthesizers using Aloe vera biomemristor}, pages = {93--96}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813249}, url = {https://www.nime.org/proceedings/2020/nime2020_paper18.pdf}, presentation-video = {https://youtu.be/bZaCd6igKEA} }
-
Giulio Moro and Andrew McPherson. 2020. A platform for low-latency continuous keyboard sensing and sound generation. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 97–102. http://doi.org/10.5281/zenodo.4813253
Download PDF DOIOn several acoustic and electromechanical keyboard instruments, the produced sound does not always depend exclusively on a discrete key velocity parameter, and minute gesture details can affect the final sonic result. By contrast, subtle variations in articulation have a relatively limited effect on the sound generation when the keyboard controller uses the MIDI standard, as in the vast majority of digital keyboards. In this paper we present an embedded platform that can generate sound in response to a controller capable of sensing the continuous position of keys on a keyboard. This platform enables the creation of keyboard-based DMIs which allow for a richer set of interaction gestures than would be possible through a MIDI keyboard, which we demonstrate through two example instruments. First, in a Hammond organ emulator, the sensing device allows the nuances of the interaction with the original instrument to be recreated in a way a velocity-based MIDI controller could not. Second, a nonlinear waveguide flute synthesizer is shown as an example of the expressive capabilities that a continuous-keyboard controller opens up in the creation of new keyboard-based DMIs.
@inproceedings{NIME20_19, author = {Moro, Giulio and McPherson, Andrew}, title = {A platform for low-latency continuous keyboard sensing and sound generation}, pages = {97--102}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813253}, url = {https://www.nime.org/proceedings/2020/nime2020_paper19.pdf}, presentation-video = {https://youtu.be/Y137M9UoKKg} }
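To make the contrast with velocity-only MIDI sensing concrete, the sketch below shows one way continuously sampled key position could be turned into ongoing control signals rather than a single note-on value. It is a hypothetical illustration, not code from the platform described in the paper; the parameter names and scaling constants are assumptions.

```python
# Illustrative sketch (not the platform's code): derive continuous control signals
# from sampled key position instead of a single discrete MIDI velocity.
from dataclasses import dataclass

@dataclass
class KeyFrame:
    position: float   # 0.0 = key at rest, 1.0 = fully depressed
    dt: float         # seconds since the previous sample

def key_to_controls(prev: KeyFrame, curr: KeyFrame) -> dict:
    """Map key depth to amplitude and instantaneous key speed to brightness,
    so articulation during the press keeps shaping the sound."""
    speed = (curr.position - prev.position) / curr.dt if curr.dt > 0 else 0.0
    amplitude = max(0.0, min(1.0, curr.position))
    brightness = max(0.0, min(1.0, abs(speed) / 20.0))  # 20 units/s assumed as full scale
    return {"amplitude": amplitude, "brightness": brightness}

# Example: two successive samples of a key being pressed quickly.
prev = KeyFrame(position=0.20, dt=0.001)
curr = KeyFrame(position=0.35, dt=0.001)
print(key_to_controls(prev, curr))
```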
-
Advait Sarkar and Henry Mattinson. 2020. Excello: exploring spreadsheets for music composition. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 11–16. http://doi.org/10.5281/zenodo.4813256
Download PDF DOIExcello is a spreadsheet-based music composition and programming environment. We co-developed Excello with feedback from 21 musicians at varying levels of musical and computing experience. We asked: can the spreadsheet interface be used for programmatic music creation? Our design process encountered questions such as how time should be represented, whether amplitude and octave should be encoded as properties of individual notes or entire phrases, and how best to leverage standard spreadsheet features, such as formulae and copy-paste. We present the user-centric rationale for our current design, and report a user study suggesting that Excello’s notation retains similar cognitive dimensions to conventional music composition tools, while allowing the user to write substantially complex programmatic music.
@inproceedings{NIME20_2, author = {Sarkar, Advait and Mattinson, Henry}, title = {Excello: exploring spreadsheets for music composition}, pages = {11--16}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813256}, url = {https://www.nime.org/proceedings/2020/nime2020_paper2.pdf} }
-
Andrea Guidi, Fabio Morreale, and Andrew McPherson. 2020. Design for auditory imagery: altering instruments to explore performer fluency. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 103–108. http://doi.org/10.5281/zenodo.4813260
Download PDF DOIIn NIME design, thorough attention has been devoted to feedback modalities, including auditory, visual and haptic feedback. How the performer executes the gestures to achieve a sound on an instrument, by contrast, appears to be less examined. Previous research showed that auditory imagery, or the ability to hear or recreate sounds in the mind even when no audible sound is present, is essential to the sensorimotor control involved in playing an instrument. In this paper, we enquire whether auditory imagery can also help to support skill transfer between musical instruments, with possible implications for new instrument design. To answer this question, we performed two experimental studies on pitch accuracy and fluency in which professional violinists were asked to play a modified violin. Results showed that altered or even possibly irrelevant auditory feedback on a modified violin does not appear to be a significant impediment to performance. However, performers need to have coherent imagery of what they want to do, and the sonic outcome needs to be coupled to the motor program to achieve it. This finding shows that the design lens should be shifted from a direct feedback model of instrumental playing toward a model where imagery guides the playing process. This result is in agreement with recent research on skilled sensorimotor control that highlights the value of feedforward anticipation in embodied musical performance. It is also of primary importance for the design of new instruments: new sounds that cannot easily be imagined and that are not coupled to a motor program are not likely to be easily performed on the instrument.
@inproceedings{NIME20_20, author = {Guidi, Andrea and Morreale, Fabio and McPherson, Andrew}, title = {Design for auditory imagery: altering instruments to explore performer fluency}, pages = {103--108}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813260}, url = {https://www.nime.org/proceedings/2020/nime2020_paper20.pdf}, presentation-video = {https://youtu.be/yK7Tg1kW2No} }
-
Raul Masu, Paulo Bala, Muhammad Ahmad, et al. 2020. VR Open Scores: Scores as Inspiration for VR Scenarios. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 109–114. http://doi.org/10.5281/zenodo.4813262
Download PDF DOIIn this paper, we introduce the concept of VR Open Scores: score-based virtual scenarios in which an aleatoric score is embedded in a virtual environment. This idea builds upon the notions of graphic scores and composed instruments, and applies them in a new context. Our proposal also explores possible parallels between open meaning in interaction design and the aleatoric score, conceptualized as an Open Work by the Italian philosopher Umberto Eco. Our approach has two aims. The first aim is to create an environment where users can immerse themselves in the visual elements of a score while listening to the corresponding music. The second aim is to help users develop a personal relationship with both the system and the score. To achieve those aims, as a practical implementation of our proposed concept, we developed two immersive scenarios: a 360º video and an interactive space. We conclude by presenting how our design aims were accomplished in the two scenarios, and describing positive and negative elements of our implementations.
@inproceedings{NIME20_21, author = {Masu, Raul and Bala, Paulo and Ahmad, Muhammad and Correia, Nuno N. and Nisi, Valentina and Nunes, Nuno and Romão, Teresa}, title = {VR Open Scores: Scores as Inspiration for VR Scenarios}, pages = {109--114}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813262}, url = {https://www.nime.org/proceedings/2020/nime2020_paper21.pdf}, presentation-video = {https://youtu.be/JSM6Rydz7iE} }
-
Amble H C Skuse and Shelly Knotts. 2020. Creating an Online Ensemble for Home Based Disabled Musicians: Disabled Access and Universal Design - why disabled people must be at the heart of developing technology. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 115–120. http://doi.org/10.5281/zenodo.4813266
Download PDF DOIThe project takes a Universal Design approach to exploring the possibility of creating a software platform to facilitate a Networked Ensemble for Disabled musicians. In accordance with the Nothing About Us Without Us (Charlton, 1998) principle, I worked with a group of 15 professional musicians who are also disabled. The group gave interviews about their perspectives and needs around networked music practices, and this data was then analysed to look at how software design could be developed to make it more accessible. We also identified key messages for digital musical instrument makers, performers and event organisers more widely, to improve practice around working with and for disabled musicians.
@inproceedings{NIME20_22, author = {Skuse, Amble H C and Knotts, Shelly}, title = {Creating an Online Ensemble for Home Based Disabled Musicians: Disabled Access and Universal Design - why disabled people must be at the heart of developing technology.}, pages = {115--120}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813266}, url = {https://www.nime.org/proceedings/2020/nime2020_paper22.pdf}, presentation-video = {https://youtu.be/m4D4FBuHpnE} }
-
Anıl Çamcı, Matias Vilaplana, and Ruth Wang. 2020. Exploring the Affordances of VR for Musical Interaction Design with VIMEs. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 121–126. http://doi.org/10.5281/zenodo.4813268
Download PDF DOIAs virtual reality (VR) continues to gain prominence as a medium for artistic expression, a growing number of projects explore the use of VR for musical interaction design. In this paper, we discuss the concept of VIMEs (Virtual Interfaces for Musical Expression) through four case studies that explore different aspects of musical interactions in virtual environments. We then describe a user study designed to evaluate these VIMEs in terms of various usability considerations, such as immersion, perception of control, learnability and physical effort. We offer the results of the study, articulating the relationship between the design of a VIME and the various performance behaviors observed among its users. Finally, we discuss how these results, combined with recent developments in VR technology, can inform the design of new VIMEs.
@inproceedings{NIME20_23, author = {Çamcı, Anıl and Vilaplana, Matias and Wang, Ruth}, title = {Exploring the Affordances of VR for Musical Interaction Design with VIMEs}, pages = {121--126}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813268}, url = {https://www.nime.org/proceedings/2020/nime2020_paper23.pdf} }
-
Anıl Çamcı, Aaron Willette, Nachiketa Gargi, Eugene Kim, Julia Xu, and Tanya Lai. 2020. Cross-platform and Cross-reality Design of Immersive Sonic Environments. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 127–130. http://doi.org/10.5281/zenodo.4813270
Download PDF DOIThe continued growth of modern VR (virtual reality) platforms into mass adoption is fundamentally driven by the work of content creators who offer engaging experiences. It is therefore essential to design accessible creativity support tools that can facilitate the work of a broad range of practitioners in this domain. In this paper, we focus on one facet of VR content creation, namely immersive audio design. We discuss a suite of design tools that enable both novice and expert users to rapidly prototype immersive sonic environments across desktop, virtual reality and augmented reality platforms. We discuss the design considerations adopted for each implementation, and how the individual systems informed one another in terms of interaction design. We then offer a preliminary evaluation of these systems with reports from first-time users. Finally, we discuss our road-map for improving individual and collaborative creative experiences across platforms and realities in the context of immersive audio.
@inproceedings{NIME20_24, author = {Çamcı, Anıl and Willette, Aaron and Gargi, Nachiketa and Kim, Eugene and Xu, Julia and Lai, Tanya}, title = {Cross-platform and Cross-reality Design of Immersive Sonic Environments}, pages = {127--130}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813270}, url = {https://www.nime.org/proceedings/2020/nime2020_paper24.pdf} }
-
Marius Schebella, Gertrud Fischbacher, and Matthew Mosher. 2020. Silver: A Textile Wireframe Interface for the Interactive Sound Installation Idiosynkrasia. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 131–132. http://doi.org/10.5281/zenodo.4813272
Download PDF DOISilver is an artwork that deals with the emotional feeling of contact by exaggerating it acoustically. It originates from an interactive room installation, where several textile sculptures merge with sounds. Silver is made from a wire mesh and its surface is reactive to closeness and touch. This material property forms a hybrid of artwork and parametric controller for the real-time sound generation. The textile quality of the fine steel wire mesh evokes a haptic familiarity inherent to textile materials. This makes it easy for the audience to overcome the initial threshold of getting in touch with the artwork in an exhibition situation. Additionally, the interaction is not dependent on visuals. The characteristics of the surface sensor allow a user to play the instrument without actually touching it.
@inproceedings{NIME20_25, author = {Schebella, Marius and Fischbacher, Gertrud and Mosher, Matthew}, title = {Silver: A Textile Wireframe Interface for the Interactive Sound Installation Idiosynkrasia}, pages = {131--132}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813272}, url = {https://www.nime.org/proceedings/2020/nime2020_paper25.pdf} }
-
Ning Yang, Richard Savery, Raghavasimhan Sankaranarayanan, Lisa Zahray, and Gil Weinberg. 2020. Mechatronics-Driven Musical Expressivity for Robotic Percussionists. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 133–138. http://doi.org/10.5281/zenodo.4813274
Download PDF DOIMusical expressivity is an important aspect of musical performance for humans as well as robotic musicians. We present a novel mechatronics-driven implementation of Brushless Direct Current (BLDC) motors in a robotic marimba player, named ANON, designed to improve speed, dynamic range (loudness), and ultimately perceived musical expressivity in comparison to state-of-the-art robotic percussionist actuators. In an objective test of dynamic range, we find that our implementation provides wider and more consistent dynamic range response in comparison with solenoid-based robotic percussionists. Our implementation also outperforms both solenoid and human marimba players in striking speed. In a subjective listening test measuring musical expressivity, our system performs significantly better than a solenoid-based system and is statistically indistinguishable from human performers.
@inproceedings{NIME20_26, author = {Yang, Ning and Savery, Richard and Sankaranarayanan, Raghavasimhan and Zahray, Lisa and Weinberg, Gil}, title = {Mechatronics-Driven Musical Expressivity for Robotic Percussionists}, pages = {133--138}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813274}, url = {https://www.nime.org/proceedings/2020/nime2020_paper26.pdf}, presentation-video = {https://youtu.be/KsQNlArUv2k} }
-
Paul Dunham. 2020. Click::RAND. A Minimalist Sound Sculpture. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 139–142. http://doi.org/10.5281/zenodo.4813276
Download PDF DOIDiscovering outmoded or obsolete technologies and appropriating them in creative practice can uncover new relationships between those technologies. Using a media archaeological research approach, this paper presents the electromechanical relay and a book of random numbers as related forms of obsolete media. Situated within the context of electromechanical sound art, the work uses a non-deterministic approach to explore the non-linear and unpredictable agency and materiality of the objects in the work. Developed by the first author, Click::RAND is an object-based sound installation. The work has been developed as an audio-visual representation of a genealogy of connections between these two forms of media in the history of computing.
@inproceedings{NIME20_27, author = {Dunham, Paul}, title = {Click::RAND. A Minimalist Sound Sculpture.}, pages = {139--142}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813276}, url = {https://www.nime.org/proceedings/2020/nime2020_paper27.pdf}, presentation-video = {https://youtu.be/vWKw8H0F9cI} }
-
Enrique Tomás. 2020. A Playful Approach to Teaching NIME: Pedagogical Methods from a Practice-Based Perspective. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 143–148. http://doi.org/10.5281/zenodo.4813280
Download PDF DOIThis paper reports on the experience gained after five years of teaching a NIME master course designed specifically for artists. A playful pedagogical approach based on practice-based methods is presented and evaluated. My goal was to introduce the art of NIME design and performance while giving less emphasis to technology. Instead of letting technology determine how we teach and think during the class, I propose first fostering the student’s active construction and understanding of the field by experimenting with physical materials, sound production and bodily movements. To this end, I developed a few classroom exercises which my students had to study and practice. During this period of five years, 95 students attended the course. At the end of the semester course, each student designed, built and performed a new interface for musical expression in front of an audience. Thus, in this paper I describe and discuss the benefits of applying playfulness and practice-based methods for teaching NIME in art universities. I introduce the methods and classroom exercises developed, and finally I present some lessons learned from this pedagogical experience.
@inproceedings{NIME20_28, author = {Tomás, Enrique}, title = {A Playful Approach to Teaching NIME: Pedagogical Methods from a Practice-Based Perspective}, pages = {143--148}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813280}, url = {https://www.nime.org/proceedings/2020/nime2020_paper28.pdf}, presentation-video = {https://youtu.be/94o3J3ozhMs} }
-
Quinn D Jarvis Holland, Crystal Quartez, Francisco Botello, and Nathan Gammill. 2020. EXPANDING ACCESS TO MUSIC TECHNOLOGY- Rapid Prototyping Accessible Instrument Solutions For Musicians With Intellectual Disabilities. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 149–153. http://doi.org/10.5281/zenodo.4813286
Download PDF DOIUsing open-source and creative coding frameworks, a team of artist-engineers from Portland Community College working with artists that experience Intellectual/Developmental disabilities prototyped an ensemble of adapted instruments and synthesizers that facilitate real-time in-key collaboration. The instruments employ a variety of sensors, sending the resulting musical controls to software sound generators via MIDI. Careful consideration was given to the balance between freedom of expression, and curating the possible sonic outcomes as adaptation. Evaluation of adapted instrument design may differ greatly from frameworks for evaluating traditional instruments or products intended for mass-market, though the results of such focused and individualised design have a variety of possible applications.
@inproceedings{NIME20_29, author = {Jarvis Holland, Quinn D and Quartez, Crystal and Botello, Francisco and Gammill, Nathan}, title = {EXPANDING ACCESS TO MUSIC TECHNOLOGY- Rapid Prototyping Accessible Instrument Solutions For Musicians With Intellectual Disabilities}, pages = {149--153}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813286}, url = {https://www.nime.org/proceedings/2020/nime2020_paper29.pdf} }
-
Alberto Boem, Giovanni M Troiano, Giacomo Lepri, and Victor Zappi. 2020. Non-Rigid Musical Interfaces: Exploring Practices, Takes, and Future Perspective. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 17–22. http://doi.org/10.5281/zenodo.4813288
Download PDF DOINon-rigid interfaces allow for exploring new interactive paradigms that rely on deformable input and shape change, and whose possible applications span several branches of human-computer interaction (HCI). While extensively explored as deformable game controllers, bendable smartphones, and shape-changing displays, non-rigid interfaces are rarely framed in a musical context, and their use for composition and performance is rather sparse and unsystematic. With this work, we start a systematic exploration of this relatively uncharted research area, by means of (1) briefly reviewing existing musical interfaces that capitalize on deformable input, and (2) surveying 11 experts and pioneers in the field about their experience with and vision on non-rigid musical interfaces. Based on experts’ input, we suggest possible next steps of musical appropriation with deformable and shape-changing technologies. We conclude by discussing how cross-overs between NIME and HCI research will benefit non-rigid interfaces.
@inproceedings{NIME20_3, author = {Boem, Alberto and Troiano, Giovanni M and Lepri, Giacomo and Zappi, Victor}, title = {Non-Rigid Musical Interfaces: Exploring Practices, Takes, and Future Perspective}, pages = {17--22}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813288}, url = {https://www.nime.org/proceedings/2020/nime2020_paper3.pdf}, presentation-video = {https://youtu.be/o4CuAglHvf4} }
-
Jack Atherton and Ge Wang. 2020. Curating Perspectives: Incorporating Virtual Reality into Laptop Orchestra Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 154–159. http://doi.org/10.5281/zenodo.4813290
Download PDF DOIDespite a history spanning nearly 30 years, best practices for the use of virtual reality (VR) in computer music performance remain exploratory. Here, we present a case study of a laptop orchestra performance entitled Resilience, involving one VR performer and an ensemble of instrumental performers, in order to explore values and design principles for incorporating this emerging technology into computer music performance. We present a brief history at the intersection of VR and the laptop orchestra. We then present the design of the piece and distill it into a set of design principles. Broadly, these design principles address the interplay between the different conflicting perspectives at play: those of the VR performer, the ensemble, and the audience. For example, one principle suggests that the perceptual link between the physical and virtual world may be enhanced for the audience by improving the performers’ sense of embodiment. We argue that these design principles are a form of generalized knowledge about how we might design laptop orchestra pieces involving virtual reality.
@inproceedings{NIME20_30, author = {Atherton, Jack and Wang, Ge}, title = {Curating Perspectives: Incorporating Virtual Reality into Laptop Orchestra Performance}, pages = {154--159}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813290}, url = {https://www.nime.org/proceedings/2020/nime2020_paper30.pdf}, presentation-video = {https://youtu.be/tmeDO5hg56Y} }
-
Fabio Morreale, S. M. Astrid Bin, Andrew McPherson, Paul Stapleton, and Marcelo Wanderley. 2020. A NIME Of The Times: Developing an Outward-Looking Political Agenda For This Community. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 160–165. http://doi.org/10.5281/zenodo.4813294
Download PDF DOISo far, NIME research has been mostly inward-looking, dedicated to divulging and studying our own work and having limited engagement with trends outside our community. Though musical instruments as cultural artefacts are inherently political, we have so far not sufficiently engaged with confronting these themes in our own research. In this paper we argue that we should consider how our work is also political, and begin to develop a clear political agenda that includes social, ethical, and cultural considerations through which to consider not only our own musical instruments, but also those not created by us. Failing to do so would result in an unintentional but tacit acceptance and support of such ideologies. We explore one item to be included in this political agenda: the recent trend in music technology of “democratising music”, which carries implicit political ideologies grounded in techno-solutionism. We conclude with a number of recommendations for stimulating community-wide discussion on these themes in the hope that this leads to the development of an outward-facing perspective that fully engages with political topics.
@inproceedings{NIME20_31, author = {Morreale, Fabio and Bin, S. M. Astrid and McPherson, Andrew and Stapleton, Paul and Wanderley, Marcelo}, title = {A NIME Of The Times: Developing an Outward-Looking Political Agenda For This Community}, pages = {160--165}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813294}, url = {https://www.nime.org/proceedings/2020/nime2020_paper31.pdf}, presentation-video = {https://youtu.be/y2iDN24ZLTg} }
-
Chantelle L Ko and Lora Oehlberg. 2020. Touch Responsive Augmented Violin Interface System II: Integrating Sensors into a 3D Printed Fingerboard. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 166–171. http://doi.org/10.5281/zenodo.4813300
Download PDF DOIWe present TRAVIS II, an augmented acoustic violin with touch sensors integrated into its 3D printed fingerboard that track left-hand finger gestures in real time. The fingerboard has four strips of conductive PLA filament which produce an electric signal when fingers press down on each string. While these sensors are physically robust, they are mechanically assembled and thus easy to replace if damaged. The performer can also trigger presets via four FSRs attached to the body of the violin. The instrument is completely wireless, giving the performer the freedom to move throughout the performance space. While the sensing fingerboard is installed in place of the traditional fingerboard, all other electronics can be removed from the augmented instrument, maintaining the aesthetics of a traditional violin. Our design allows violinists to naturally create music for interactive performance and improvisation without requiring new instrumental techniques. In this paper, we describe the design of the instrument, experiments leading to the sensing fingerboard, and performative applications of the instrument.
@inproceedings{NIME20_32, author = {Ko, Chantelle L and Oehlberg, Lora}, title = {Touch Responsive Augmented Violin Interface System II: Integrating Sensors into a 3D Printed Fingerboard}, pages = {166--171}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813300}, url = {https://www.nime.org/proceedings/2020/nime2020_paper32.pdf}, presentation-video = {https://youtu.be/XIAd_dr9PHE} }
-
Nicolas E Gold, Chongyang Wang, Temitayo Olugbade, Nadia Berthouze, and Amanda Williams. 2020. P(l)aying Attention: Multi-modal, multi-temporal music control. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 172–175. http://doi.org/10.5281/zenodo.4813303
Download PDF DOIThe expressive control of sound and music through body movements is well-studied. For some people, body movement is demanding, and although they would prefer to express themselves freely using gestural control, they are unable to use such interfaces without difficulty. In this paper, we present the P(l)aying Attention framework for manipulating recorded music to support these people, and to help the therapists that work with them. The aim is to facilitate body awareness, exploration, and expressivity by allowing the manipulation of a pre-recorded ‘ensemble’ through an interpretation of body movement, provided by a machine-learning system trained on physiotherapist assessments and movement data from people with chronic pain. The system considers the nature of a person’s movement (e.g. protective) and offers an interpretation in terms of the joint-groups that are playing a major role in the determination at that point in the movement, and to which attention should perhaps be given (or the opposite at the user’s discretion). Using music to convey the interpretation offers informational (through movement sonification) and creative (through manipulating the ensemble by movement) possibilities. The approach offers the opportunity to explore movement and music at multiple timescales and under varying musical aesthetics.
@inproceedings{NIME20_33, author = {Gold, Nicolas E and Wang, Chongyang and Olugbade, Temitayo and Berthouze, Nadia and Williams, Amanda}, title = {P(l)aying Attention: Multi-modal, multi-temporal music control}, pages = {172--175}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813303}, url = {https://www.nime.org/proceedings/2020/nime2020_paper33.pdf} }
-
Doga Cavdir and Ge Wang. 2020. Felt Sound: A Shared Musical Experience for the Deaf and Hard of Hearing. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 176–181. http://doi.org/10.5281/zenodo.4813305
Download PDF DOIWe present a musical interface specifically designed for inclusive performance that offers a shared experience for both individuals who are deaf and hard of hearing as well as those who are not. This interface borrows gestures (with or without overt meaning) from American Sign Language (ASL), rendered using low-frequency sounds that can be felt by everyone in the performance. The Deaf and Hard of Hearing cannot experience the sound in the same way. Instead, they are able to physically experience the vibrations, nuances, contours, as well as its correspondence with the hand gestures. Those who are not hard of hearing can experience the sound, but also feel it just the same, with the knowledge that the same physical vibrations are shared by everyone. The employment of sign language adds another aesthetic dimension to the instrument –a nuanced borrowing of a functional communication medium for an artistic end.
@inproceedings{NIME20_34, author = {Cavdir, Doga and Wang, Ge}, title = {Felt Sound: A Shared Musical Experience for the Deaf and Hard of Hearing}, pages = {176--181}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813305}, url = {https://www.nime.org/proceedings/2020/nime2020_paper34.pdf}, presentation-video = {https://youtu.be/JCvlHu4UaZ0} }
-
Sasha Leitman. 2020. Sound Based Sensors for NIMEs. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 182–187. http://doi.org/10.5281/zenodo.4813309
Download PDF DOIThis paper examines the use of Sound Sensors and audio as input material for New Interfaces for Musical Expression (NIMEs), exploring the unique affordances and character of the interactions and instruments that leverage it. Examples of previous work in the literature that use audio as sensor input data are examined for insights into how the use of Sound Sensors provides unique opportunities within the NIME context. We present the results of a user study comparing sound-based sensors to other sensing modalities within the context of controlling parameters. The study suggests that the use of Sound Sensors can enhance gestural flexibility and nuance but that they also present challenges in accuracy and repeatability.
@inproceedings{NIME20_35, author = {Leitman, Sasha}, title = {Sound Based Sensors for NIMEs}, pages = {182--187}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813309}, url = {https://www.nime.org/proceedings/2020/nime2020_paper35.pdf} }
-
Yuma Ikawa and Akihiro Matsuura. 2020. Playful Audio-Visual Interaction with Spheroids. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 188–189. http://doi.org/10.5281/zenodo.4813311
Download PDF DOIThis paper presents a novel interactive system for creating audio-visual expressions on tabletop display by dynamically manipulating solids of revolution called spheroids. The four types of basic spinning and rolling movements of spheroids are recognized from the physical conditions such as the contact area, the location of the centroid, the (angular) velocity, and the curvature of the locus all obtained from sensor data on the display. They are then used for interactively generating audio-visual effects that match each of the movements. We developed a digital content that integrated these functionalities and enabled composition and live performance through manipulation of spheroids.
@inproceedings{NIME20_36, author = {Ikawa, Yuma and Matsuura, Akihiro}, title = {Playful Audio-Visual Interaction with Spheroids }, pages = {188--189}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813311}, url = {https://www.nime.org/proceedings/2020/nime2020_paper36.pdf} }
-
Sihwa Park. 2020. Collaborative Mobile Instruments in a Shared AR Space: a Case of ARLooper. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 190–195. http://doi.org/10.5281/zenodo.4813313
Download PDF DOIThis paper presents ARLooper, an augmented reality mobile interface that allows multiple users to record sound and perform together in a shared AR space. ARLooper is an attempt to explore the potential of collaborative mobile AR instruments in supporting non-verbal communication for musical performances. With ARLooper, the user can record, manipulate, and play sounds being visualized as 3D waveforms in an AR space. ARLooper provides a shared AR environment wherein multiple users can observe each other’s activities in real time, supporting an increased understanding of collaborative contexts. This paper provides the background of the research and the design and technical implementation of ARLooper, followed by a user study.
@inproceedings{NIME20_37, author = {Park, Sihwa}, title = {Collaborative Mobile Instruments in a Shared AR Space: a Case of ARLooper}, pages = {190--195}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813313}, url = {https://www.nime.org/proceedings/2020/nime2020_paper37.pdf}, presentation-video = {https://youtu.be/Trw4epKeUbM} }
-
Diemo Schwarz, Abby Wanyu Liu, and Frederic Bevilacqua. 2020. A Survey on the Use of 2D Touch Interfaces for Musical Expression. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 196–201. http://doi.org/10.5281/zenodo.4813318
Download PDF DOIExpressive 2D multi-touch interfaces have in recent years moved from research prototypes to industrial products, from repurposed generic computer input devices to controllers specially designed for musical expression. A host of practitioners use this type of device in many different ways, with different gestures and sound synthesis or transformation methods. In order to get an overview of existing and desired usages, we launched an on-line survey that collected 37 answers from practitioners in and outside of academic and design communities. In the survey we inquired about the participants’ devices, their strengths and weaknesses, the layout of control dimensions, the gestures and mappings used, the synthesis software or hardware, and the use of audio descriptors and machine learning. The results can inform the design of future interfaces, gesture analysis and mapping, and give directions for the need and use of machine learning for user adaptation.
@inproceedings{NIME20_38, author = {Schwarz, Diemo and Liu, Abby Wanyu and Bevilacqua, Frederic}, title = {A Survey on the Use of 2D Touch Interfaces for Musical Expression}, pages = {196--201}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813318}, url = {https://www.nime.org/proceedings/2020/nime2020_paper38.pdf}, presentation-video = {https://youtu.be/eE8I3mecaB8} }
-
Harri L Renney, Tom Mitchell, and Benedict Gaster. 2020. There and Back Again: The Practicality of GPU Accelerated Digital Audio. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 202–207. http://doi.org/10.5281/zenodo.4813320
Download PDF DOIGeneral-Purpose GPU computing is becoming an increasingly viable option for acceleration, including in the audio domain. Although it can improve performance, the intrinsic nature of a device like the GPU involves data transfers and execution commands, which require time to complete. Therefore, there is an understandable caution concerning the overhead involved with using the GPU for audio computation. This paper aims to clarify the limitations by presenting a performance benchmarking suite. The benchmarks utilize OpenCL and CUDA across various tests to highlight the considerations and limitations of processing audio in the GPU environment. The benchmarking suite has been used to gather a collection of results across various hardware. Salient results have been reviewed in order to highlight the benefits and limitations of the GPU for digital audio. The results in this work show that the minimal GPU overhead fits into the real-time audio requirements provided the buffer size is selected carefully. The baseline overhead is shown to be roughly 0.1 ms, depending on the GPU. This means buffer sizes of 8 and above are completed within the allocated time frame. Results from more demanding tests, involving physical modelling synthesis, demonstrated that a balance was needed between meeting the sample rate and keeping within limits for latency and jitter. Buffer sizes from 1 to 16 failed to sustain the sample rate whilst buffer sizes 512 to 32768 exceeded either latency or jitter limits. Buffer sizes in between these ranges, such as 256, satisfied the sample rate, latency and jitter requirements chosen for this paper.
@inproceedings{NIME20_39, author = {Renney, Harri L and Mitchell, Tom and Gaster, Benedict}, title = {There and Back Again: The Practicality of GPU Accelerated Digital Audio}, pages = {202--207}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813320}, url = {https://www.nime.org/proceedings/2020/nime2020_paper39.pdf}, presentation-video = {https://youtu.be/xAVEHJZRIx0} }
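The buffer-size reasoning in the abstract above can be checked with simple arithmetic. The sketch below assumes a 44.1 kHz sample rate (not stated in the abstract) and the roughly 0.1 ms baseline overhead it reports; it is an illustrative back-of-the-envelope calculation, not part of the paper's benchmarking suite.

```python
# Compare the time budget per audio buffer against the reported GPU overhead.
# Assumption: 44.1 kHz sample rate; the ~0.1 ms overhead is the figure quoted above.
SAMPLE_RATE_HZ = 44_100
GPU_OVERHEAD_MS = 0.1  # approximate baseline round-trip overhead

def buffer_duration_ms(buffer_size: int, sample_rate_hz: int = SAMPLE_RATE_HZ) -> float:
    """Time available to compute one buffer of audio, in milliseconds."""
    return 1000.0 * buffer_size / sample_rate_hz

for n in (1, 4, 8, 16, 64, 256):
    budget = buffer_duration_ms(n)
    print(f"buffer {n:>4}: budget {budget:7.3f} ms -> overhead fits: {budget > GPU_OVERHEAD_MS}")

# Under these assumptions, a buffer of 8 samples gives ~0.18 ms per callback, already
# above the ~0.1 ms baseline overhead, consistent with the claim that buffer sizes of
# 8 and above stay within the real-time deadline.
```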
-
Tim Shaw and John Bowers. 2020. Ambulation: Exploring Listening Technologies for an Extended Sound Walking Practice. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 23–28. http://doi.org/10.5281/zenodo.4813322
Download PDF DOIAmbulation is a sound walk that uses field recording techniques and listening technologies to create a walking performance using environmental sound. Ambulation engages with the act of recording as an improvised performance in response to the soundscapes it is presented within. In this paper we describe the work and place it in relation to other artists engaged with field recording and extended sound walking practices. We will give technical details of the Ambulation system we developed as part of the creation of the piece, and conclude with a collection of observations that emerged from the project. The research around the development and presentation of Ambulation contributes to the idea of field recording as a live, procedural practice, moving away from the idea of merely transporting documentary material from one place to another. We will show how having an open, improvisational approach to technologically supported sound walking enables rich and unexpected results to occur and how this way of working can contribute to NIME design and thinking.
@inproceedings{NIME20_4, author = {Shaw, Tim and Bowers, John}, title = {Ambulation: Exploring Listening Technologies for an Extended Sound Walking Practice}, pages = {23--28}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813322}, url = {https://www.nime.org/proceedings/2020/nime2020_paper4.pdf}, presentation-video = {https://youtu.be/dDXkNnQXdN4} }
-
Gus Xia, Daniel Chin, Yian Zhang, Tianyu Zhang, and Junbo Zhao. 2020. Interactive Rainbow Score: A Visual-centered Multimodal Flute Tutoring System. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 208–213. http://doi.org/10.5281/zenodo.4813324
Download PDF DOILearning to play an instrument is intrinsically multimodal, and we have seen a trend of applying visual and haptic feedback in music games and computer-aided music tutoring systems. However, most current systems are still designed to master individual pieces of music; it is unclear how well the learned skills can be generalized to new pieces. We aim to explore this question. In this study, we contribute Interactive Rainbow Score, an interactive visual system to boost the learning of sight-playing, the general musical skill to read music and map the visual representations to performance motions. The key design of Interactive Rainbow Score is to associate pitches (and the corresponding motions) with colored notation and further strengthen such association via real-time interactions. Quantitative results show that the interactive feature on average increases the learning efficiency by 31.1%. Further analysis indicates that it is critical to apply the interaction in the early period of learning.
@inproceedings{NIME20_40, author = {Xia, Gus and Chin, Daniel and Zhang, Yian and Zhang, Tianyu and Zhao, Junbo}, title = {Interactive Rainbow Score: A Visual-centered Multimodal Flute Tutoring System}, pages = {208--213}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813324}, url = {https://www.nime.org/proceedings/2020/nime2020_paper40.pdf} }
-
Nicola Davanzo and Federico Avanzini. 2020. A Dimension Space for the Evaluation of Accessible Digital Musical Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 214–220. http://doi.org/10.5281/zenodo.4813326
Download PDF DOIResearch on Accessible Digital Musical Instruments (ADMIs) has received growing attention over the past decades, carving out an increasingly large space in the literature. Despite the recent publication of state-of-the-art review works, there are still few systematic studies on ADMI design analysis. In this paper we propose a formal tool to explore the main design aspects of ADMIs based on Dimension Space Analysis, a well-established methodology in the NIME literature that allows designers to generate an effective visual representation of the design space. We therefore propose a set of relevant dimensions, based both on categories proposed in recent works in the research context and on original contributions. We then proceed to demonstrate its applicability by selecting a set of relevant case studies and analyzing a sample set of ADMIs found in the literature.
@inproceedings{NIME20_41, author = {Davanzo, Nicola and Avanzini, Federico}, title = {A Dimension Space for the Evaluation of Accessible Digital Musical Instruments}, pages = {214--220}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813326}, url = {https://www.nime.org/proceedings/2020/nime2020_paper41.pdf}, presentation-video = {https://youtu.be/pJlB5k8TV9M} }
-
Adam Pultz Melbye and Halldor A Ulfarsson. 2020. Sculpting the behaviour of the Feedback-Actuated Augmented Bass: Design strategies for subtle manipulations of string feedback using simple adaptive algorithms. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 221–226. http://doi.org/10.5281/zenodo.4813328
Download PDF DOIThis paper describes physical and digital design strategies for the Feedback-Actuated Augmented Bass - a self-contained feedback double bass with embedded DSP capabilities. A primary goal of the research project is to create an instrument that responds well to the use of extended playing techniques and can manifest complex harmonic spectra while retaining the feel and sonic fingerprint of an acoustic double bass. While the physical configuration of the instrument builds on similar feedback string instruments being developed in recent years, this project focuses on modifying the feedback behaviour through low-level audio feature extractions coupled to computationally lightweight filtering and amplitude management algorithms. We discuss these adaptive and time-variant processing strategies and how we apply them in sculpting the system’s dynamic and complex behaviour to our liking.
@inproceedings{NIME20_42, author = {Melbye, Adam Pultz and Ulfarsson, Halldor A}, title = {Sculpting the behaviour of the Feedback-Actuated Augmented Bass: Design strategies for subtle manipulations of string feedback using simple adaptive algorithms}, pages = {221--226}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813328}, url = {https://www.nime.org/proceedings/2020/nime2020_paper42.pdf}, presentation-video = {https://youtu.be/jXePge1MS8A} }
-
Gwendal Le Vaillant, Thierry Dutoit, and Rudi Giot. 2020. Analytic vs. holistic approaches for the live search of sound presets using graphical interpolation. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 227–232. http://doi.org/10.5281/zenodo.4813330
Download PDF DOIThe comparative study presented in this paper focuses on two approaches for the search of sound presets using a specific geometric touch app. The first approach is based on independent sliders on screen and is called analytic. The second is based on interpolation between presets represented by polygons on screen and is called holistic. Participants had to listen to, memorize, and search for sound presets characterized by four parameters. Ten different configurations of sound synthesis and processing were presented to each participant, once for each approach. The performance scores of 28 participants (not including early testers) were computed using two measured values: the search duration, and the parametric distance between the reference and answered presets. Compared to the analytic sliders-based interface, the holistic interpolation-based interface demonstrated a significant performance improvement for 60% of sound synthesizers. The other 40% led to equivalent results for the analytic and holistic interfaces. Using sliders, expert users performed nearly as well as they did with interpolation. Beginners and intermediate users struggled more with sliders, while the interpolation allowed them to get quite close to experts’ results.
@inproceedings{NIME20_43, author = {Le Vaillant, Gwendal and Dutoit, Thierry and Giot, Rudi}, title = {Analytic vs. holistic approaches for the live search of sound presets using graphical interpolation}, pages = {227--232}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813330}, url = {https://www.nime.org/proceedings/2020/nime2020_paper43.pdf}, presentation-video = {https://youtu.be/Korw3J_QvQE} }
-
Chase Mitchusson. 2020. Indeterminate Sample Sequencing in Virtual Reality. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 233–236. http://doi.org/10.5281/zenodo.4813332
Download PDF DOIThe purpose of this project is to develop an interface for writing and performing music using sequencers in virtual reality (VR). The VR sequencer deals with chance-based operations to select audio clips for playback and spatial orientation-based rhythm and melody generation, while incorporating three-dimensional (3-D) objects as omnidirectional playheads. Spheres which grow from a variable minimum size to a variable maximum size at a variable speed, constantly looping, represent the passage of time in this VR sequencer. The 3-D assets which represent samples are actually sample containers that come in six common dice shapes. As the dice come into contact with a sphere, their samples are triggered to play. This behavior mimics digital audio workstation (DAW) playheads reading MIDI left-to-right in popular professional and consumer software sequencers. To incorporate height into VR music making, the VR sequencer is capable of generating terrain at the press of a button. Each terrain will gradually change, creating the possibility for the dice to roll on their own. Audio effects are built in to each scene and mapped to terrain parameters, creating another opportunity for chance operations in the music making process. The chance-based sample selection, spatial orientation-defined rhythms, and variable terrain mapped to audio effects lead to indeterminacy in performance and replication of a single piece of music. This project aims to give the gaming community access to experimental music making by means of consumer virtual reality hardware.
@inproceedings{NIME20_44, author = {Mitchusson, Chase}, title = {Indeterminate Sample Sequencing in Virtual Reality}, pages = {233--236}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813332}, url = {https://www.nime.org/proceedings/2020/nime2020_paper44.pdf} }
-
Rebecca Fiebrink and Laetitia Sonami. 2020. Reflections on Eight Years of Instrument Creation with Machine Learning. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 237–242. http://doi.org/10.5281/zenodo.4813334
Download PDF DOIMachine learning (ML) has been used to create mappings for digital musical instruments for over twenty-five years, and numerous ML toolkits have been developed for the NIME community. However, little published work has studied how ML has been used in sustained instrument building and performance practices. This paper examines the experiences of instrument builder and performer Laetitia Sonami, who has been using ML to build and refine her Spring Spyre instrument since 2012. Using Sonami’s current practice as a case study, this paper explores the utility, opportunities, and challenges involved in using ML in practice over many years. This paper also reports the perspective of Rebecca Fiebrink, the creator of the Wekinator ML tool used by Sonami, revealing how her work with Sonami has led to changes to the software and to her teaching. This paper thus contributes a deeper understanding of the value of ML for NIME practitioners, and it can inform design considerations for future ML toolkits as well as NIME pedagogy. Further, it provides new perspectives on familiar NIME conversations about mapping strategies, expressivity, and control, informed by a dedicated practice over many years.
@inproceedings{NIME20_45, author = {Fiebrink, Rebecca and Sonami, Laetitia}, title = {Reflections on Eight Years of Instrument Creation with Machine Learning}, pages = {237--242}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813334}, url = {https://www.nime.org/proceedings/2020/nime2020_paper45.pdf}, presentation-video = {https://youtu.be/EvXZ9NayZhA} }
-
Alex Lucas, Miguel Ortiz, and Franziska Schroeder. 2020. The Longevity of Bespoke, Accessible Music Technology: A Case for Community. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 243–248. http://doi.org/10.5281/zenodo.4813338
Download PDF DOIBased on the experience garnered through a longitudinal ethnographic study, the authors reflect on the practice of designing and fabricating bespoke, accessible music technologies. Of particular focus are the social, technical and environmental factors at play which make the provision of such technology a reality. The authors suggest ways to achieve long-term, sustained use. Seemingly, those involved in its design, fabrication and use could benefit from a concerted effort to share resources, knowledge and skill as a mobilised community of practitioners.
@inproceedings{NIME20_46, author = {Lucas, Alex and Ortiz, Miguel and Schroeder, Franziska}, title = {The Longevity of Bespoke, Accessible Music Technology: A Case for Community}, pages = {243--248}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813338}, url = {https://www.nime.org/proceedings/2020/nime2020_paper46.pdf}, presentation-video = {https://youtu.be/cLguyuZ9weI} }
-
Ivica I Bukvic, Disha Sardana, and Woohun Joo. 2020. New Interfaces for Spatial Musical Expression. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 249–254. http://doi.org/10.5281/zenodo.4813342
Download PDF DOIWith the proliferation of venues equipped with high-density loudspeaker arrays, there is a growing interest in developing new interfaces for spatial musical expression (NISME). Of particular interest are interfaces that focus on the emancipation of the spatial domain as the primary dimension for musical expression. Here we present the Monet NISME, which leverages a multitouch pressure-sensitive surface and the D4 library’s spatial mask, thereby allowing for a unique approach to interactive spatialization. Further, we present a study with 22 participants designed to assess its usefulness and compare it to the Locus, a NISME introduced in 2019 as part of a localization study, which is built on the same design principles of using natural gestural interaction with the spatial content. Lastly, we briefly discuss the utilization of both NISMEs in two artistic performances and propose a set of guidelines for further exploration in the NISME domain.
@inproceedings{NIME20_47, author = {Bukvic, Ivica I and Sardana, Disha and Joo, Woohun}, title = {New Interfaces for Spatial Musical Expression}, pages = {249--254}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813342}, url = {https://www.nime.org/proceedings/2020/nime2020_paper47.pdf}, presentation-video = {https://youtu.be/GQ0552Lc1rw} }
-
Mark Durham. 2020. Inhabiting the Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 255–258. http://doi.org/10.5281/zenodo.4813344
Download PDF DOIThis study presents an ecosystemic approach to music interaction, through the practice-based development of a mixed reality installation artwork. It fuses a generative, immersive audio composition with augmented reality visualisation, within an architectural space as part of a blended experience. Participants are encouraged to explore and interact with this combination of elements through physical engagement, to then develop an understanding of how the blending of real and virtual space occurs as the installation unfolds. The sonic layer forms a link between the two, as a three-dimensional sound composition. Connections in the system allow for multiple streams of data to run between the layers, which are used for the real-time modulation of parameters. These feedback mechanisms form a complete loop between the participant in real space, soundscape, and mixed reality visualisation, providing a participant mediated experience that exists somewhere between creator and observer.
@inproceedings{NIME20_48, author = {Durham, Mark}, title = {Inhabiting the Instrument}, pages = {255--258}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813344}, url = {https://www.nime.org/proceedings/2020/nime2020_paper48.pdf} }
-
Chris Nash. 2020. Crowd-driven Music: Interactive and Generative Approaches using Machine Vision and Manhattan. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 259–264. http://doi.org/10.5281/zenodo.4813346
Download PDF DOIThis paper details technologies and artistic approaches to crowd-driven music, discussed in the context of a live public installation in which activity in a public space (a busy railway platform) is used to drive the automated composition and performance of music. The approach presented uses realtime machine vision applied to a live video feed of a scene, from which detected objects and people are fed into Manhattan (Nash, 2014), a digital music notation that integrates sequencing and programming to support the live creation of complex musical works that combine static, algorithmic, and interactive elements. The paper discusses the technical details of the system and artistic development of specific musical works, introducing novel techniques for mapping chaotic systems to musical expression and exploring issues of agency, aesthetic, accessibility and adaptability relating to composing interactive music for crowds and public spaces. In particular, performances as part of an installation for BBC Music Day 2018 are described. The paper subsequently details a practical workshop, delivered digitally, exploring the development of interactive performances in which the audience or general public actively or passively control live generation of a musical piece. Exercises support discussions on technical, aesthetic, and ontological issues arising from the identification and mapping of structure, order, and meaning in non-musical domains to analogous concepts in musical expression. Materials for the workshop are available freely with the Manhattan software.
@inproceedings{NIME20_49, author = {Nash, Chris}, title = {Crowd-driven Music: Interactive and Generative Approaches using Machine Vision and Manhattan}, pages = {259--264}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813346}, url = {https://www.nime.org/proceedings/2020/nime2020_paper49.pdf}, presentation-video = {https://youtu.be/DHIowP2lOsA} }
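As a generic illustration of the crowd-to-music idea described in the entry above, the sketch below counts people in a live video feed with OpenCV's stock HOG pedestrian detector and maps the count to a note-density value. This is not the authors' pipeline (which feeds detections into Manhattan); the mapping and parameters are invented for the example.

```python
# Illustrative sketch: detected crowd size -> a musical density parameter.
# Not the system described in the paper; the mapping below is arbitrary.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)  # any live video feed of the scene
for _ in range(100):       # process a limited number of frames for the demo
    ok, frame = cap.read()
    if not ok:
        break
    rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    crowd_size = len(rects)
    # Arbitrary illustrative mapping: more people -> denser note stream.
    notes_per_bar = min(16, 1 + 2 * crowd_size)
    print(f"people detected: {crowd_size} -> notes per bar: {notes_per_bar}")
cap.release()
```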
-
Michael J Krzyzaniak. 2020. Words to Music Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 29–34. http://doi.org/10.5281/zenodo.4813350
Download PDF DOIThis paper discusses the design of a musical synthesizer that takes words as input, and attempts to generate music that somehow underscores those words. This is considered as a tool for sound designers who could, for example, enter dialogue from a film script and generate appropriate background music. The synthesizer uses emotional valence and arousal as a common representation between words and music. It draws on previous studies that relate words and musical features to valence and arousal. The synthesizer was evaluated with a user study. Participants listened to music generated by the synthesizer, and described the music with words. The arousal of the words they entered was highly correlated with the intended arousal of the music. The same was, surprisingly, not true for valence. The synthesizer is online, at [redacted URL].
@inproceedings{NIME20_5, author = {Krzyzaniak, Michael J}, title = {Words to Music Synthesis}, pages = {29--34}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813350}, url = {https://www.nime.org/proceedings/2020/nime2020_paper5.pdf} }
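The shared valence/arousal representation described in the abstract above can be sketched as follows: score the input words against a lexicon, average, and map the result to musical features. The tiny lexicon and the tempo/mode mapping here are invented for illustration and are not taken from the paper.

```python
# Hypothetical word -> (valence, arousal) lexicon, both values in [-1, 1].
LEXICON = {
    "storm": (-0.6, 0.8),
    "calm":  (0.5, -0.7),
    "joy":   (0.9, 0.5),
    "loss":  (-0.8, -0.2),
}

def words_to_features(text: str):
    """Average the lexicon scores of known words and map them to tempo and mode."""
    scores = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    if not scores:
        return None
    valence = sum(v for v, _ in scores) / len(scores)
    arousal = sum(a for _, a in scores) / len(scores)
    tempo_bpm = 60 + 60 * (arousal + 1) / 2        # higher arousal -> faster (illustrative)
    mode = "major" if valence >= 0 else "minor"    # positive valence -> major (illustrative)
    return {"valence": valence, "arousal": arousal, "tempo_bpm": tempo_bpm, "mode": mode}

print(words_to_features("calm after the storm"))
```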
-
Alex Mclean. 2020. Algorithmic Pattern. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 265–270. http://doi.org/10.5281/zenodo.4813352
Download PDF DOIThis paper brings together two main perspectives on algorithmic pattern. First, the writing of musical patterns in live coding performance, and second, the weaving of patterns in textiles. In both cases, algorithmic pattern is an interface between the human and the outcome, where small changes have far-reaching impact on the results. By bringing contemporary live coding and ancient textile approaches together, we reach a common view of pattern as algorithmic movement (e.g. looping, shifting, reflecting, interfering) in the making of things. This works beyond the usual definition of pattern used in musical interfaces, of mere repeating sequences. We conclude by considering the place of algorithmic pattern in a wider activity of making.
@inproceedings{NIME20_50, author = {Mclean, Alex}, title = {Algorithmic Pattern}, pages = {265--270}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813352}, url = {https://www.nime.org/proceedings/2020/nime2020_paper50.pdf}, presentation-video = {https://youtu.be/X9AkOAEDV08} }
-
Louis McCallum and Mick S Grierson. 2020. Supporting Interactive Machine Learning Approaches to Building Musical Instruments in the Browser. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 271–272. http://doi.org/10.5281/zenodo.4813357
Download PDF DOIInteractive machine learning (IML) is an approach to building interactive systems, including DMIs, focusing on iterative end-user data provision and direct evaluation. This paper describes the implementation of a Javascript library, encapsulating many of the boilerplate needs of building IML systems for creative tasks with minimal code inclusion and low barrier to entry. Further, we present a set of complementary Audio Worklet-backed instruments to allow for in-browser creation of new musical systems able to run concurrently with various computationally expensive feature extractors and lightweight machine learning models without the interference often seen in interactive Web Audio applications.
@inproceedings{NIME20_51, author = {McCallum, Louis and Grierson, Mick S}, title = {Supporting Interactive Machine Learning Approaches to Building Musical Instruments in the Browser}, pages = {271--272}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813357}, url = {https://www.nime.org/proceedings/2020/nime2020_paper51.pdf} }
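The iterative loop that such IML toolkits encapsulate (the user supplies an input-output example, the model retrains immediately, and the result is evaluated by ear) can be sketched as follows. The example uses Python and scikit-learn purely for illustration; it is not the Javascript library's API, and the gesture and synth-parameter values are hypothetical.

```python
# Minimal interactive machine learning loop: add example, retrain, run.
from sklearn.neighbors import KNeighborsRegressor

examples_x, examples_y = [], []   # e.g. controller gestures -> synth parameters
model = None

def add_example(gesture, synth_params):
    """End-user supplies one training pair; the model is retrained immediately."""
    global model
    examples_x.append(gesture)
    examples_y.append(synth_params)
    model = KNeighborsRegressor(n_neighbors=min(3, len(examples_x)))
    model.fit(examples_x, examples_y)

def run(gesture):
    """Map a live gesture to synth parameters with the current model."""
    return model.predict([gesture])[0] if model else None

add_example([0.1, 0.2], [220.0, 0.3])   # hypothetical 2-D gesture -> frequency, amplitude
add_example([0.8, 0.9], [880.0, 0.9])
print(run([0.5, 0.5]))
```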
-
Mathias S Kirkegaard, Mathias Bredholt, Christian Frisson, and Marcelo Wanderley. 2020. TorqueTuner: A self contained module for designing rotary haptic force feedback for digital musical instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 273–278. http://doi.org/10.5281/zenodo.4813359
Download PDF DOITorqueTuner is an embedded module that allows Digital Musical Instrument (DMI) designers to map sensors to parameters of haptic effects and dynamically modify rotary force feedback in real-time. We embedded inside TorqueTuner a collection of haptic effects (Wall, Magnet, Detents, Spring, Friction, Spin, Free) and a bi-directional interface through libmapper, a software library for making connections between data signals on a shared network. To increase affordability and portability of force-feedback implementations in DMI design, we designed our platform to be wireless, self-contained and built from commercially available components. To provide examples of modularity and portability, we integrated TorqueTuner into a standalone haptic knob and into an existing DMI, the T-Stick. We implemented 3 musical applications (Pitch wheel, Turntable and Exciter), by mapping sensors to sound synthesis in audio programming environment SuperCollider. While the original goal was to simulate the haptic feedback associated with turning a knob, we found that the platform allows for further expanding interaction possibilities in application scenarios where rotary control is familiar.
@inproceedings{NIME20_52, author = {Kirkegaard, Mathias S and Bredholt, Mathias and Frisson, Christian and Wanderley, Marcelo}, title = {TorqueTuner: A self contained module for designing rotary haptic force feedback for digital musical instruments}, pages = {273--278}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813359}, url = {https://www.nime.org/proceedings/2020/nime2020_paper52.pdf}, presentation-video = {https://youtu.be/V8WDMbuX9QA} }
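As an illustration of how a rotary haptic effect such as the "Detents" effect listed above can be expressed, the following sketch computes a restoring torque from knob angle using a common sinusoidal detent model. This is not TorqueTuner's firmware; the formula and constants are generic and chosen only for the example.

```python
# Illustrative torque curve for a rotary "Detents" effect: torque pulls the knob
# toward the nearest of N evenly spaced detents. Constants are arbitrary.
import math

def detent_torque(angle_rad: float, n_detents: int = 12, strength: float = 0.5) -> float:
    """Restoring torque (arbitrary units) toward the nearest detent position."""
    return -strength * math.sin(n_detents * angle_rad)

# Sweep part of one detent period: torque is zero at the detent and pushes back
# on either side of it.
for step in range(9):
    angle = step * (2 * math.pi / 12) / 8
    print(f"angle {angle:5.3f} rad -> torque {detent_torque(angle):+.3f}")
```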
-
Corey J Ford and Chris Nash. 2020. An Iterative Design ‘by proxy’ Method for Developing Educational Music Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 279–284. http://doi.org/10.5281/zenodo.4813361
Download PDF DOIIterative design methods involving children and educators are difficult to conduct, given both the ethical implications and time commitments understandably required. The qualitative design process presented here recruits introductory teacher training students to discover useful design insights relevant to music education technologies “by proxy”. Therefore, some of the barriers present in child-computer interaction research are avoided. As an example, the method is applied to the creation of a block-based music notation system, named Codetta. Building upon successful educational technologies that intersect both music and computer programming, Codetta seeks to enable child composition, whilst aiding generalist educators’ confidence in teaching music.
@inproceedings{NIME20_53, author = {Ford, Corey J and Nash, Chris}, title = {An Iterative Design ‘by proxy’ Method for Developing Educational Music Interfaces}, pages = {279--284}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813361}, url = {https://www.nime.org/proceedings/2020/nime2020_paper53.pdf}, presentation-video = {https://youtu.be/fPbZMQ5LEmk} }
-
Filipe Calegario, Marcelo Wanderley, João Tragtenberg, et al. 2020. Probatio 1.0: collaborative development of a toolkit for functional DMI prototypes. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 285–290. http://doi.org/10.5281/zenodo.4813363
Download PDF DOIProbatio is an open-source toolkit for prototyping new digital musical instruments created in 2016. Based on a morphological chart of postures and controls of musical instruments, it comprises a set of blocks, bases, hubs, and supports that, when combined, allows designers, artists, and musicians to experiment with different input devices for musical interaction in different positions and postures. Several musicians have used the system, and based on these past experiences, we assembled a list of improvements to implement version 1.0 of the toolkit through a unique international partnership between two laboratories in Brazil and Canada. In this paper, we present the original toolkit and its use so far, summarize the main lessons learned from musicians using it, and present the requirements behind, and the final design of, v1.0 of the project. We also detail the work developed in digital fabrication using two different techniques: laser cutting and 3D printing, comparing their pros and cons. We finally discuss the opportunities and challenges of fully sharing the project online and replicating its parts in both countries.
@inproceedings{NIME20_54, author = {Calegario, Filipe and Wanderley, Marcelo and Tragtenberg, João and Meneses, Eduardo and Wang, Johnty and Sullivan, John and Franco, Ivan and Kirkegaard, Mathias S and Bredholt, Mathias and Rohs, Josh}, title = {Probatio 1.0: collaborative development of a toolkit for functional DMI prototypes}, pages = {285--290}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813363}, url = {https://www.nime.org/proceedings/2020/nime2020_paper54.pdf}, presentation-video = {https://youtu.be/jkFnZZUA3xs} }
-
Travis J West, Marcelo Wanderley, and Baptiste Caramiaux. 2020. Making Mappings: Examining the Design Process. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 291–296. http://doi.org/10.5281/zenodo.4813365
Download PDF DOIWe conducted a study which examines mappings from a relatively unexplored perspective: how they are made. Twelve skilled NIME users designed a mapping from a T-Stick to a subtractive synthesizer, and were interviewed about their approach to mapping design. We present a thematic analysis of the interviews, with reference to data recordings captured while the designers worked. Our results suggest that the mapping design process is an iterative process that alternates between two working modes: diffuse exploration and directed experimentation.
@inproceedings{NIME20_55, author = {West, Travis J and Wanderley, Marcelo and Caramiaux, Baptiste}, title = {Making Mappings: Examining the Design Process}, pages = {291--296}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813365}, url = {https://www.nime.org/proceedings/2020/nime2020_paper55.pdf}, presentation-video = {https://youtu.be/aaoResYjqmE} }
-
Michael Sidler, Matthew C Bisson, Jordan Grotz, and Scott Barton. 2020. Parthenope: A Robotic Musical Siren. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 297–300. http://doi.org/10.5281/zenodo.4813367
Download PDF DOIParthenope is a robotic musical siren developed to produce unique timbres and sonic gestures. Parthenope uses perforated spinning disks through which air is directed to produce sound. Computer control of disk speed and air flow, in conjunction with a variety of nozzles, allows pitches to be produced precisely at different volumes. The instrument is controlled via Open Sound Control (OSC) messages sent over an Ethernet connection and can interface with common DAWs and physical controllers. Parthenope is capable of microtonal tuning, portamenti, rapid and precise articulation (and thus complex rhythms), and distinct timbres that result from its aerophonic character. It occupies a unique place among robotic musical instruments.
@inproceedings{NIME20_56, author = {Sidler, Michael and Bisson, Matthew C and Grotz, Jordan and Barton, Scott}, title = {Parthenope: A Robotic Musical Siren}, pages = {297--300}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813367}, url = {https://www.nime.org/proceedings/2020/nime2020_paper56.pdf}, presentation-video = {https://youtu.be/HQuR0aBJ70Y} }
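The abstract above notes that Parthenope is driven by OSC messages sent from common DAWs and controllers. As a rough illustration of that kind of remote control, the Python sketch below sends note messages to a networked instrument; the host address and the OSC address patterns are invented for the example and are not taken from the paper.

```python
# Illustrative sketch only: sends OSC messages of the kind a networked robotic
# instrument might accept. Address patterns and argument layout are hypothetical.
import time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.1.50", 9000)  # assumed host/port of the instrument

def play_note(midi_note: float, velocity: float, duration: float) -> None:
    """Send a note-on/note-off pair; fractional note numbers allow microtonal pitches."""
    client.send_message("/siren/note_on", [midi_note, velocity])
    time.sleep(duration)
    client.send_message("/siren/note_off", [midi_note])

# A short microtonal gesture, stepping upward in quarter tones.
for step in range(5):
    play_note(60 + step * 0.5, 0.8, 0.2)
```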
-
Steven Kemper. 2020. Tremolo-Harp: A Vibration-Motor Actuated Robotic String Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 301–304. http://doi.org/10.5281/zenodo.4813369
Download PDF DOIThe Tremolo-Harp is a twelve-stringed robotic instrument, where each string is actuated with a DC vibration motor to produce a mechatronic “tremolo” effect. It was inspired by instruments and musical styles that employ tremolo as a primary performance technique, including the hammered dulcimer, pipa, banjo, flamenco guitar, and surf rock guitar. Additionally, the Tremolo-Harp is designed to produce long, sustained textures and continuous dynamic variation. These capabilities represent a different approach from the majority of existing robotic string instruments, which tend to focus on actuation speed and rhythmic precision. The composition Tremolo-Harp Study 1 (2019) presents an initial exploration of the Tremolo-Harp’s unique timbre and capability for continuous dynamic variation.
@inproceedings{NIME20_57, author = {Kemper, Steven}, title = {Tremolo-Harp: A Vibration-Motor Actuated Robotic String Instrument}, pages = {301--304}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813369}, url = {https://www.nime.org/proceedings/2020/nime2020_paper57.pdf} }
-
Atsuya Kobayashi, Reo Anzai, and Nao Tokui. 2020. ExSampling: a system for the real-time ensemble performance of field-recorded environmental sounds. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 305–308. http://doi.org/10.5281/zenodo.4813371
Download PDF DOIWe propose ExSampling: an integrated system combining a recording application and a Deep Learning environment for real-time music performance with environmental sounds sampled by field recording. Automated sound mapping to Ableton Live tracks by Deep Learning enables field recordings to be applied to real-time performance and creates interactions among recordists, composers, and performers.
@inproceedings{NIME20_58, author = {Kobayashi, Atsuya and Anzai, Reo and Tokui, Nao}, title = {ExSampling: a system for the real-time ensemble performance of field-recorded environmental sounds}, pages = {305--308}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813371}, url = {https://www.nime.org/proceedings/2020/nime2020_paper58.pdf} }
-
Juan Pablo Yepez Placencia, Jim Murphy, and Dale Carnegie. 2020. Designing an Expressive Pitch Shifting Mechanism for Mechatronic Chordophones. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 309–314. http://doi.org/10.5281/zenodo.4813375
Download PDF DOIThe exploration of musical robots has been an area of interest due to the timbral and mechanical advantages they offer for music generation and performance. However, one of the greatest challenges in mechatronic music is to enable these robots to deliver a nuanced and expressive performance. This depends on their capability to integrate dynamics, articulation, and a variety of ornamental techniques while playing a given musical passage. In this paper we introduce a robot arm pitch shifter for a mechatronic monochord prototype. This is a fast, precise, and mechanically quiet system that enables sliding techniques during musical performance. We discuss the design and construction process, as well as the system’s advantages and restrictions. We also review the quantitative evaluation process used to assess if the instrument meets the design requirements. This process reveals how the pitch shifter outperforms existing configurations, and potential areas of improvement for future work.
@inproceedings{NIME20_59, author = {Yepez Placencia, Juan Pablo and Murphy, Jim and Carnegie, Dale}, title = {Designing an Expressive Pitch Shifting Mechanism for Mechatronic Chordophones}, pages = {309--314}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813375}, url = {https://www.nime.org/proceedings/2020/nime2020_paper59.pdf}, presentation-video = {https://youtu.be/rpX8LTZd-Zs} }
-
Marcel Ehrhardt, Max Neupert, and Clemens Wegener. 2020. Piezoelectric strings as a musical interface. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 35–36. http://doi.org/10.5281/zenodo.4813377
Download PDF DOIFlexible strings with piezoelectric properties have been developed but have not, to date, been evaluated for use as part of a musical instrument. This paper assesses the properties of these new fibers, examining their possibilities for NIME applications.
@inproceedings{NIME20_6, author = {Ehrhardt, Marcel and Neupert, Max and Wegener, Clemens}, title = {Piezoelectric strings as a musical interface}, pages = {35--36}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813377}, url = {https://www.nime.org/proceedings/2020/nime2020_paper6.pdf} }
-
Alon A Ilsar, Matthew Hughes, and Andrew Johnston. 2020. NIME or Mime: A Sound-First Approach to Developing an Audio-Visual Gestural Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 315–320. http://doi.org/10.5281/zenodo.4813383
Download PDF DOIThis paper outlines the development process of an audio-visual gestural instrument—the AirSticks—and elaborates on the role ‘miming’ has played in the formation of new mappings for the instrument. The AirSticks, although fully-functioning, were used as props in live performances in order to evaluate potential mapping strategies that were later implemented for real. This use of mime when designing Digital Musical Instruments (DMIs) can help overcome choice paralysis, break from established habits, and liberate creators to realise more meaningful parameter mappings. Bringing this process into an interactive performance environment acknowledges the audience as stakeholders in the design of these instruments, and also leads us to reflect upon the beliefs and assumptions made by an audience when engaging with the performance of such ‘magical’ devices. This paper establishes two opposing strategies to parameter mapping, ‘movement-first’ mapping, and the less conventional ‘sound-first’ mapping that incorporates mime. We discuss the performance ‘One Five Nine’, its transformation from a partial mime into a fully interactive presentation, and the influence this process has had on the outcome of the performance and the AirSticks as a whole.
@inproceedings{NIME20_60, author = {Ilsar, Alon A and Hughes, Matthew and Johnston, Andrew}, title = {NIME or Mime: A Sound-First Approach to Developing an Audio-Visual Gestural Instrument}, pages = {315--320}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813383}, url = {https://www.nime.org/proceedings/2020/nime2020_paper60.pdf}, presentation-video = {https://youtu.be/ZFQKKI3dFhE} }
-
Matthew Hughes and Andrew Johnston. 2020. URack: Audio-visual Composition and Performance using Unity and VCV Rack. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 321–322. http://doi.org/10.5281/zenodo.4813389
Download PDF DOIThis demonstration presents URack, a custom-built audio-visual composition and performance environment that combines the Unity video-game engine with the VCV Rack software modular synthesiser. In alternative cross-modal solutions, a compromise is likely made in either the sonic or visual output, or the consistency and intuitiveness of the composition environment. By integrating control mechanisms for graphics inside VCV Rack, the music-making metaphors used to build a patch are extended into the visual domain. Users familiar with modular synthesizers are immediately able to start building high-fidelity graphics using the same control voltages regularly used to compose sound. Without needing to interact with two separate development environments, languages or metaphorical domains, users are encouraged to freely, creatively and enjoyably construct their own highly-integrated audio-visual instruments. This demonstration will showcase the construction of an audio-visual patch using URack, focusing on the integration of flexible GPU particle systems present in Unity with the vast library of creative audio composition modules inside VCV.
@inproceedings{NIME20_61, author = {Hughes, Matthew and Johnston, Andrew}, title = {URack: Audio-visual Composition and Performance using Unity and VCV Rack}, pages = {321--322}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813389}, url = {https://www.nime.org/proceedings/2020/nime2020_paper61.pdf} }
-
Irmandy Wicaksono and Joseph Paradiso. 2020. KnittedKeyboard: Digital Knitting of Electronic Textile Musical Controllers. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 323–326. http://doi.org/10.5281/zenodo.4813391
Download PDF DOIIn this work, we have developed a textile-based interactive surface fabricated through digital knitting technology. Our prototype explores intarsia, interlock patterning, and a collection of functional and non-functional fibers to create a piano-pattern textile for expressive and virtuosic sonic interaction. We combined conductive, thermochromic, and composite yarns with high-flex polyester yarns to develop KnittedKeyboard with its soft physical properties and responsive sensing and display capabilities. The individual and combination of each key could simultaneously sense discrete touch, as well as continuous proximity and pressure. The KnittedKeyboard enables performers to experience fabric-based multimodal interaction as they explore the seamless texture and materiality of the electronic textile.
@inproceedings{NIME20_62, author = {Wicaksono, Irmandy and Paradiso, Joseph}, title = {KnittedKeyboard: Digital Knitting of Electronic Textile Musical Controllers}, pages = {323--326}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813391}, url = {https://www.nime.org/proceedings/2020/nime2020_paper62.pdf} }
-
Olivier Capra, Florent Berthaut, and Laurent Grisoni. 2020. A Taxonomy of Spectator Experience Augmentation Techniques. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 327–330. http://doi.org/10.5281/zenodo.4813396
Download PDF DOIIn the context of artistic performances, the complexity and diversity of digital interfaces may impair the spectator experience, in particular hiding the engagement and virtuosity of the performers. Artists and researchers have made attempts at solving this by augmenting performances with additional information provided through visual, haptic or sonic modalities. However, the proposed techniques have not yet been formalized and we believe a clarification of their many aspects is necessary for future research. In this paper, we propose a taxonomy for what we define as Spectator Experience Augmentation Techniques (SEATs). We use it to analyse existing techniques and we demonstrate how it can serve as a basis for the exploration of novel ones.
@inproceedings{NIME20_63, author = {Capra, Olivier and Berthaut, Florent and Grisoni, Laurent}, title = {A Taxonomy of Spectator Experience Augmentation Techniques}, pages = {327--330}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813396}, url = {https://www.nime.org/proceedings/2020/nime2020_paper63.pdf} }
-
Sourya Sen, Koray Tahiroğlu, and Julia Lohmann. 2020. Sounding Brush: A Tablet based Musical Instrument for Drawing and Mark Making. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 331–336. http://doi.org/10.5281/zenodo.4813398
Download PDF DOIExisting applications of mobile music tools are often concerned with the simulation of acoustic or digital musical instruments, extended with graphical representations of keys, pads, etc. Following an intensive review of existing tools and approaches to mobile music making, we implemented a digital drawing tool, employing a time-based graphical/gestural interface for music composition and performance. In this paper, we introduce our Sounding Brush project, through which we explore music making in various forms with the natural gestures of drawing and mark making on a tablet device. Subsequently, we present the design and development of the Sounding Brush application. Through this project, we discuss the act of drawing as an activity that is not separate from the act of playing a musical instrument. Drawing is essentially the act of playing music by means of a continuous process of observation, individualisation and exploration of time and space in a unique way.
@inproceedings{NIME20_64, author = {Sen, Sourya and Tahiroğlu, Koray and Lohmann, Julia}, title = {Sounding Brush: A Tablet based Musical Instrument for Drawing and Mark Making}, pages = {331--336}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813398}, url = {https://www.nime.org/proceedings/2020/nime2020_paper64.pdf}, presentation-video = {https://youtu.be/7RkGbyGM-Ho} }
-
Koray Tahiroğlu, Miranda Kastemaa, and Oskar Koli. 2020. Al-terity: Non-Rigid Musical Instrument with Artificial Intelligence Applied to Real-Time Audio Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 337–342. http://doi.org/10.5281/zenodo.4813402
Download PDF DOIA deformable musical instrument can take numerous distinct shapes with its non-rigid features. Building an audio synthesis module for such interface behaviour can be challenging. In this paper, we present Al-terity, a non-rigid musical instrument that comprises a deep learning model with a generative adversarial network architecture and uses it to generate audio samples for real-time audio synthesis. The particular deep learning model we use for this instrument was trained on an existing data set for purposes of further experimentation. The main benefits of the model used are the ability to produce the realistic range of timbres of the training data set and the ability to generate new audio samples in real time, in the moment of playing, with the characteristics of sounds the performer has never heard before. We argue that these advanced intelligence features on the audio synthesis level could allow us to explore performing music with particular response features that define the instrument’s digital idiomaticity and allow us to reinvent the instrument in the act of music performance.
@inproceedings{NIME20_65, author = {Tahiroğlu, Koray and Kastemaa, Miranda and Koli, Oskar}, title = {Al-terity: Non-Rigid Musical Instrument with Artificial Intelligence Applied to Real-Time Audio Synthesis}, pages = {337--342}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813402}, url = {https://www.nime.org/proceedings/2020/nime2020_paper65.pdf}, presentation-video = {https://youtu.be/giYxFovZAvQ} }
-
Chris Kiefer, Dan Overholt, and Alice Eldridge. 2020. Shaping the behaviour of feedback instruments with complexity-controlled gain dynamics. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 343–348. http://doi.org/10.5281/zenodo.4813406
Download PDF DOIFeedback instruments offer radical new ways of engaging with instrument design and musicianship. They are defined by recurrent circulation of signals through the instrument, which gives the instrument ‘a life of its own’ and a ‘stimulating uncontrollability’. Arguably, the most interesting musical behaviour in these instruments happens when their dynamic complexity is maximised, without falling into saturating feedback. It is often challenging to keep the instrument in this zone; this research looks at algorithmic ways to manage the behaviour of feedback loops in order to make feedback instruments more playable and musical; to expand and maintain the ‘sweet spot’. We propose a solution that manages gain dynamics based on measurement of complexity, using a realtime implementation of the Effort to Compress algorithm. The system was evaluated with four musicians, each of whom has a different variation of a string-based feedback instrument, following an autobiographical design approach. Qualitative feedback was gathered, showing that the system was successful in modifying the behaviour of these instruments to allow easier access to edge transition zones, sometimes at the expense of losing some of the more compelling dynamics of the instruments. The basic efficacy of the system is evidenced by descriptive audio analysis. This paper is accompanied by a dataset of sounds collected during the study, and the open source software that was written to support the research.
@inproceedings{NIME20_66, author = {Kiefer, Chris and Overholt, Dan and Eldridge, Alice}, title = {Shaping the behaviour of feedback instruments with complexity-controlled gain dynamics}, pages = {343--348}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813406}, url = {https://www.nime.org/proceedings/2020/nime2020_paper66.pdf}, presentation-video = {https://youtu.be/sf6FwsUX-84} }
-
Duncan A.H. Williams. 2020. MINDMIX: Mapping of brain activity to congruent audio mixing features. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 349–352. http://doi.org/10.5281/zenodo.4813408
Download PDF DOIBrain-computer interfacing (BCI) offers novel methods to facilitate participation in audio engineering, providing access for individuals who might otherwise be unable to take part (either due to lack of training, or physical disability). This paper describes the development of a BCI system for conscious, or ‘active’, control of parameters on an audio mixer by generation of synchronous MIDI Machine Control messages. The mapping between neurophysiological cues and audio parameters must be intuitive for a neophyte audience (i.e., one without prior training or the physical skills developed by professional audio engineers when working with tactile interfaces). The prototype is dubbed MINDMIX (a portmanteau of ‘mind’ and ‘mixer’), combining discrete and many-to-many mappings of audio mixer parameters and BCI control signals measured via electroencephalography (EEG). In future, specific evaluation of discrete mappings would be useful for iterative system design.
@inproceedings{NIME20_67, author = {Williams, Duncan A.H.}, title = {MINDMIX: Mapping of brain activity to congruent audio mixing features}, pages = {349--352}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813408}, url = {https://www.nime.org/proceedings/2020/nime2020_paper67.pdf} }
-
Marcel O DeSmith, Andrew Piepenbrink, and Ajay Kapur. 2020. SQUISHBOI: A Multidimensional Controller for Complex Musical Interactions using Machine Learning. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 353–356. http://doi.org/10.5281/zenodo.4813412
Download PDF DOIWe present SQUISHBOI, a continuous touch controller for interacting with complex musical systems. An elastic rubber membrane forms the playing surface of the instrument, while machine learning is used for dimensionality reduction and gesture recognition. The membrane is stretched over a hollow shell which permits considerable depth excursion, with an array of distance sensors tracking the surface displacement from underneath. The inherent dynamics of the membrane lead to cross-coupling between nearby sensors; however, we do not see this as a flaw or limitation. Instead, we find this coupling gives structure to the playing techniques and mapping schemes chosen by the user. The instrument is best utilized as a tool for actively designing abstraction and forming a relative control structure within a given system, one which allows for intuitive gestural control beyond what can be accomplished with conventional musical controllers.
@inproceedings{NIME20_68, author = {DeSmith, Marcel O and Piepenbrink, Andrew and Kapur, Ajay}, title = {SQUISHBOI: A Multidimensional Controller for Complex Musical Interactions using Machine Learning}, pages = {353--356}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813412}, url = {https://www.nime.org/proceedings/2020/nime2020_paper68.pdf} }
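SQUISHBOI, as described above, reduces an array of distance-sensor readings to a smaller set of control dimensions using machine learning. The sketch below illustrates the general idea with PCA as a generic stand-in; the sensor count, the number of control dimensions, and the calibration data are assumptions for the example, not details from the paper.

```python
# Illustrative sketch: reduce a frame of membrane distance readings to a few
# control dimensions. PCA is a generic stand-in for the paper's actual model.
import numpy as np
from sklearn.decomposition import PCA

N_SENSORS = 16   # assumed number of distance sensors under the membrane
N_CONTROLS = 3   # number of control dimensions exposed to the mapping layer

# Calibration: record surface displacement while the player explores freely.
calibration = np.random.rand(500, N_SENSORS)  # placeholder for recorded frames
reducer = PCA(n_components=N_CONTROLS).fit(calibration)

def sensors_to_controls(reading: np.ndarray) -> np.ndarray:
    """Map one frame of sensor readings to a low-dimensional control vector."""
    return reducer.transform(reading.reshape(1, -1))[0]
```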
-
Nick Bryan-Kinns, LI ZIJIN, and Xiaohua Sun. 2020. On Digital Platforms and AI for Music in the UK and China. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 357–360. http://doi.org/10.5281/zenodo.4813414
Download PDF DOIDigital technologies play a fundamental role in New Interfaces for Musical Expression as well as music making and consumption more widely. This paper reports on two workshops with music professionals and researchers who undertook an initial exploration of the differences between digital platforms (software and online services) for music in the UK and China. Differences were found in primary target user groups of digital platforms in the UK and China as well as the stages of the culture creation cycle they were developed for. Reasons for the divergence of digital platforms include differences in culture, regulation, and infrastructure, as well as the inherent Western bias of software for music making such as Digital Audio Workstations. Using AI to bridge between Western and Chinese music traditions is suggested as an opportunity to address aspects of the divergent landscape of digital platforms for music inside and outside China.
@inproceedings{NIME20_69, author = {Bryan-Kinns, Nick and ZIJIN, LI and Sun, Xiaohua}, title = {On Digital Platforms and AI for Music in the UK and China}, pages = {357--360}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813414}, url = {https://www.nime.org/proceedings/2020/nime2020_paper69.pdf}, presentation-video = {https://youtu.be/c7nkCBBTnDA} }
-
Jean Chu and Jaewon Choi. 2020. Reinterpretation of Pottery as a Musical Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 37–38. http://doi.org/10.5281/zenodo.4813416
Download PDF DOIDigitally integrating the materiality, form, and tactility in everyday objects (e.g., pottery) provides inspiration for new ways of musical expression and performance. In this project we reinterpret the creative process and aesthetic philosophy of pottery as algorithmic music to help users rediscover the latent story behind pottery through a synesthetic experience. Projects Mobius I and Mobius II illustrate two potential directions toward a musical interface, one focusing on the circular form, and the other, on graphical ornaments of pottery. Six conductive graphics on the pottery function as capacitive sensors while retaining their resemblance to traditional ornamental patterns in pottery. Offering pottery as a musical interface, we invite users to orchestrate algorithmic music by physically touching the different graphics.
@inproceedings{NIME20_7, author = {Chu, Jean and Choi, Jaewon}, title = {Reinterpretation of Pottery as a Musical Interface}, pages = {37--38}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813416}, url = {https://www.nime.org/proceedings/2020/nime2020_paper7.pdf} }
-
Anders Eskildsen and Mads Walther-Hansen. 2020. Force dynamics as a design framework for mid-air musical interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 361–366. http://doi.org/10.5281/zenodo.4813418
Download PDF DOIIn this paper we adopt the theory of force dynamics in human cognition as a fundamental design principle for the development of mid-air musical interfaces. We argue that this principle can provide more intuitive user experiences when the interface does not provide direct haptic feedback – such as interfaces made with various gesture-tracking technologies. Grounded in five concepts from the theoretical literature on force dynamics in musical cognition, the paper presents a set of principles for interaction design focused on five force schemas: Path restraint, Containment restraint, Counter-force, Attraction, and Compulsion. We describe an initial set of examples that implement these principles using a Leap Motion sensor for gesture tracking and SuperCollider for interactive audio design. Finally, the paper presents a pilot experiment that provides initial ratings of intuitiveness in the user experience.
@inproceedings{NIME20_70, author = {Eskildsen, Anders and Walther-Hansen, Mads}, title = {Force dynamics as a design framework for mid-air musical interfaces}, pages = {361--366}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813418}, url = {https://www.nime.org/proceedings/2020/nime2020_paper70.pdf}, presentation-video = {https://youtu.be/REe967aGVN4} }
-
Erik Nyström. 2020. Intra-Actions: Experiments with Velocity and Position in Continuous Controllers. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 367–368. http://doi.org/10.5281/zenodo.4813420
Download PDF DOIContinuous MIDI controllers commonly output their position only, with no influence of the performative energy with which they were set. In this paper, creative uses of time as a parameter in continuous controller mapping are demonstrated: the speed of movement affects the position mapping and control output. A set of SuperCollider classes are presented, developed in the author’s practice in computer music, where they have been used together with commercial MIDI controllers. The creative applications employ various approaches and metaphors for scaling time, but also machine learning for recognising patterns. In the techniques, performer, controller and synthesis ‘intra-act’, to use Karen Barad’s term: because position and velocity are derived from the same data, sound output cannot be predicted without the temporal context of performance.
@inproceedings{NIME20_71, author = {Nyström, Erik}, title = {Intra-Actions: Experiments with Velocity and Position in Continuous Controllers}, pages = {367--368}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813420}, url = {https://www.nime.org/proceedings/2020/nime2020_paper71.pdf} }
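The core idea described above, deriving a velocity signal from successive positions of a continuous controller and letting it colour the mapped output, can be illustrated with a small sketch. The Python below is a minimal approximation of that idea, not the author's SuperCollider classes; the smoothing and scaling constants are arbitrary assumptions.

```python
# Minimal sketch: derive a velocity estimate from successive MIDI CC positions
# and let performative energy shape the mapped output. Constants are arbitrary.
import time

class VelocityAwareControl:
    def __init__(self, smoothing: float = 0.5):
        self.last_value = None
        self.last_time = None
        self.velocity = 0.0
        self.smoothing = smoothing

    def update(self, cc_value: int) -> float:
        """Return an output in 0..1 combining position with movement speed."""
        now = time.monotonic()
        position = cc_value / 127.0
        if self.last_value is not None:
            dt = max(now - self.last_time, 1e-3)
            raw_velocity = abs(position - self.last_value) / dt
            # Exponential smoothing so brief jitter does not dominate.
            self.velocity = (self.smoothing * self.velocity
                             + (1 - self.smoothing) * raw_velocity)
        self.last_value, self.last_time = position, now
        # Fast movements exaggerate the positional change; slow ones leave it as-is.
        return min(1.0, position * (1.0 + self.velocity))
```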
-
James Leonard and Andrea Giomi. 2020. Towards an Interactive Model-Based Sonification of Hand Gesture for Dance Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 369–374. http://doi.org/10.5281/zenodo.4813422
Download PDF DOIThis paper presents ongoing research on the interactive sonification of hand gestures in dance performances. For this purpose, we propose a conceptual framework and a multilayered mapping model derived from an experimental case study. The goal of this research is twofold. On the one hand, we aim to determine action-based perceptual invariants that allow us to establish pertinent relations between gesture qualities and sound features. On the other hand, we are interested in analysing how an interactive model-based sonification can provide useful and effective feedback for dance practitioners. From this point of view, our research explicitly addresses the convergence between the scientific understandings provided by the field of movement sonification and the traditional know-how developed over the years within the digital instrument and interaction design communities. A key component of our study is the combination of physically-based sound synthesis and motion feature analysis. This approach has proven effective in providing interesting insights for devising novel sonification models for artistic and scientific purposes, and for developing a collaborative platform involving the designer, the musician and the performer.
@inproceedings{NIME20_72, author = {Leonard, James and Giomi, Andrea}, title = {Towards an Interactive Model-Based Sonification of Hand Gesture for Dance Performance}, pages = {369--374}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813422}, url = {https://www.nime.org/proceedings/2020/nime2020_paper72.pdf}, presentation-video = {https://youtu.be/HQqIjL-Z8dA} }
-
Romulo A Vieira and Flávio Luiz Schiavoni. 2020. Fliperama: An affordable Arduino based MIDI Controller. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 375–379. http://doi.org/10.5281/zenodo.4813424
Download PDF DOILack of access to technological devices is a common marker of a new form of social exclusion. Coupled with this, there is also the risk of increasing inequality between developed and underdeveloped countries in terms of technology access. Regarding Internet access, the percentage of young Africans who do not have access to this technology is around 60%, while in Europe the figure is 4%. This limitation also extends to musical instruments, whether electronic or not. In light of this worldwide problem, this paper aims to showcase a method for building a MIDI controller, a prominent instrument for musical production and live performance, in an economically viable form that can be accessible to the poorest populations. It is also desirable that the equipment is suitable for teaching various subjects such as Music, Computer Science and Engineering. The outcome of this research is not an amazing controller or a brand-new cool interface, but the experience of building a controller under all the adverse conditions of doing so.
@inproceedings{NIME20_73, author = {Vieira, Romulo A and Schiavoni, Flávio Luiz}, title = {Fliperama: An affordable Arduino based MIDI Controller}, pages = {375--379}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813424}, url = {https://www.nime.org/proceedings/2020/nime2020_paper73.pdf}, presentation-video = {https://youtu.be/X1GE5jk2cgc} }
-
Alex MacLean. 2020. Immersive Dreams: A Shared VR Experience. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 380–381. http://doi.org/10.5281/zenodo.4813426
Download PDF DOIThis paper reports on a project that aimed to break apart the isolation of VR and share an experience between both the wearer of a headset and a room of observers. It presented the user with an acoustically playable virtual environment in which their interactions with objects spawned audio events from the room’s 80 loudspeakers and animations on the room’s 3 display walls. This required the use of several Unity engines running on separate machines and SuperCollider running as the audio engine. The perspectives into what the wearer of the headset was doing allowed the audience to connect their movements to the sounds and images being experienced, effectively allowing them all to participate in the installation simultaneously.
@inproceedings{NIME20_74, author = {MacLean, Alex}, title = {Immersive Dreams: A Shared VR Experience}, pages = {380--381}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813426}, url = {https://www.nime.org/proceedings/2020/nime2020_paper74.pdf} }
-
Nick Bryan-Kinns and LI ZIJIN. 2020. ReImagining: Cross-cultural Co-Creation of a Chinese Traditional Musical Instrument with Digital Technologies. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 382–387. http://doi.org/10.5281/zenodo.4813428
Download PDF DOIThere are many studies of Digital Musical Instrument (DMI) design, but there is little research on the cross-cultural co-creation of DMIs drawing on traditional musical instruments. We present a study of cross-cultural co-creation inspired by the Duxianqin - a traditional Chinese Jing ethnic minority single stringed musical instrument. We report on how we structured the co-creation with European and Chinese participants ranging from DMI designers to composers and performers. We discuss how we identified the ‘essence’ of the Duxianqin and used this to drive co-creation of three Duxianqin reimagined through digital technologies. Music was specially composed for these reimagined Duxianqin and performed in public as the culmination of the design process. We reflect on our co-creation process and how others could use such an approach to identify the essence of traditional instruments and reimagine them in the digital age.
@inproceedings{NIME20_75, author = {Bryan-Kinns, Nick and ZIJIN, LI}, title = {ReImagining: Cross-cultural Co-Creation of a Chinese Traditional Musical Instrument with Digital Technologies}, pages = {382--387}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813428}, url = {https://www.nime.org/proceedings/2020/nime2020_paper75.pdf}, presentation-video = {https://youtu.be/NvHcUQea82I} }
-
Konstantinos n/a Vasilakos, Scott Wilson, Thomas McCauley, Tsun Winston Yeung, Emma Margetson, and Milad Khosravi Mardakheh. 2020. Sonification of High Energy Physics Data Using Live Coding and Web Based Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 388–393. http://doi.org/10.5281/zenodo.4813430
Download PDF DOIThis paper presents a discussion of Dark Matter, a sonification project using live coding and just-in-time programming techniques. The project uses data from proton-proton collisions produced by the Large Hadron Collider (LHC) at CERN, Switzerland, and then detected and reconstructed by the Compact Muon Solenoid (CMS) experiment, and was developed with the support of the art@CMS project. Work for the Dark Matter project included the development of a custom-made environment in the SuperCollider (SC) programming language that lets the performers of the group engage in collective improvisations using dynamic interventions and networked music systems. This paper will also provide information about a spin-off project entitled the Interactive Physics Sonification System (IPSOS), an interactive and standalone online application developed in the JavaScript programming language. It provides a web-based interface that allows users to map particle data to sound on commonly used web browsers, mobile devices, such as smartphones, tablets etc. The project was developed as an educational outreach tool to engage young students and the general public with data derived from LHC collisions.
@inproceedings{NIME20_76, author = {Vasilakos, Konstantinos n/a and Wilson, Scott and McCauley, Thomas and Yeung, Tsun Winston and Margetson, Emma and Khosravi Mardakheh, Milad}, title = {Sonification of High Energy Physics Data Using Live Coding and Web Based Interfaces.}, pages = {388--393}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813430}, url = {https://www.nime.org/proceedings/2020/nime2020_paper76.pdf}, presentation-video = {https://youtu.be/1vS_tFUyz7g} }
-
Haruya Takase and Shun Shiramatsu. 2020. Support System for Improvisational Ensemble Based on Long Short-Term Memory Using Smartphone Sensor. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 394–398. http://doi.org/10.5281/zenodo.4813434
Download PDF DOIOur goal is to develop an improvisational ensemble support system for music beginners who do not have knowledge of chord progressions and do not have enough experience of playing an instrument. We hypothesized that a music beginner cannot determine tonal pitches of melody over a particular chord but can use body movements to specify the pitch contour (i.e., melodic outline) and the attack timings (i.e., rhythm). We aim to realize a performance interface that lets users intuitively express pitch contour and attack timings through body motion while outputting harmonious pitches over the chord progression of the background music. Since the intended users of this system are not limited to people with music experience, we plan to develop a system that uses Android smartphones, which many people have. Our system consists of three modules: a module for specifying attack timing using smartphone sensors, a module for estimating the vertical movement of the smartphone from its sensors, and a module for estimating pitch from the smartphone’s vertical movement and the background chord progression. Each estimation module is developed using long short-term memory (LSTM), which is often used to estimate time series data. We conducted evaluation experiments for each module. As a result, the attack timing estimation had zero misjudgments, and the mean error time of the estimated attack timing was smaller than the sensor-acquisition interval. The accuracy of the vertical motion estimation was 64%, and that of the pitch estimation was 7.6%. The results indicate that the attack timing is accurate enough, but the vertical motion estimation and the pitch estimation need to be improved for actual use.
@inproceedings{NIME20_77, author = {Takase, Haruya and Shiramatsu, Shun}, title = {Support System for Improvisational Ensemble Based on Long Short-Term Memory Using Smartphone Sensor}, pages = {394--398}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813434}, url = {https://www.nime.org/proceedings/2020/nime2020_paper77.pdf}, presentation-video = {https://youtu.be/WhrGhas9Cvc} }
-
Augoustinos Tsiros and Alessandro Palladini. 2020. Towards a Human-Centric Design Framework for AI Assisted Music Production. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 399–404. http://doi.org/10.5281/zenodo.4813436
Download PDF DOIIn this paper, we contribute to the discussion on how to best design human-centric MIR tools for live audio mixing by bridging the gap between research on complex systems, the psychology of automation and the design of tools that support creativity in music production. We present the design of the Channel-AI, an embedded AI system which performs instrument recognition and generates parameter setting suggestions for gain levels, gating, compression and equalization which are specific to the input signal and the instrument type. We discuss what we believe to be the key design principles and perspectives on the making of intelligent tools for creativity and for experts in the loop. We demonstrate how these principles have been applied to inform the design of the interaction of expert live audio mixing engineers with the Channel-AI (i.e., a corpus of AI features embedded in the Midas HD Console). We report the findings from a preliminary evaluation we conducted with three professional mixing engineers and reflect on mixing engineers’ comments about the Channel-AI on social media.
@inproceedings{NIME20_78, author = {Tsiros, Augoustinos and Palladini, Alessandro}, title = {Towards a Human-Centric Design Framework for AI Assisted Music Production}, pages = {399--404}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813436}, url = {https://www.nime.org/proceedings/2020/nime2020_paper78.pdf} }
-
Matthew Rodger, Paul Stapleton, Maarten van Walstijn, Miguel Ortiz, and Laurel S Pardue. 2020. What Makes a Good Musical Instrument? A Matter of Processes, Ecologies and Specificities. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 405–410. http://doi.org/10.5281/zenodo.4813438
Download PDF DOIUnderstanding the question of what makes a good musical instrument raises several conceptual challenges. Researchers have regularly adopted tools from traditional HCI as a framework to address this issue, in which instrumental musical activities are taken to comprise a device and a user, and should be evaluated as such. We argue that this approach is not equipped to fully address the conceptual issues raised by this question. It is worth reflecting on what exactly an instrument is, and how instruments contribute toward meaningful musical experiences. Based on a theoretical framework that incorporates ideas from ecological psychology, enactivism, and phenomenology, we propose an alternative approach to studying musical instruments. According to this approach, instruments are better understood in terms of processes rather than as devices, while musicians are not users, but rather agents in musical ecologies. A consequence of this reframing is that any evaluations of instruments, if warranted, should align with the specificities of the relevant processes and ecologies concerned. We present an outline of this argument and conclude with a description of a current research project to illustrate how our approach can shape the design and performance of a musical instrument in-progress.
@inproceedings{NIME20_79, author = {Rodger, Matthew and Stapleton, Paul and van Walstijn, Maarten and Ortiz, Miguel and Pardue, Laurel S}, title = {What Makes a Good Musical Instrument? A Matter of Processes, Ecologies and Specificities }, pages = {405--410}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813438}, url = {https://www.nime.org/proceedings/2020/nime2020_paper79.pdf}, presentation-video = {https://youtu.be/ADLo-QdSwBc} }
-
Charles Patrick Martin, Zeruo Liu, Yichen Wang, Wennan He, and Henry Gardner. 2020. Sonic Sculpture: Activating Engagement with Head-Mounted Augmented Reality. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 39–42. http://doi.org/10.5281/zenodo.4813445
Download PDF DOIWe describe a sonic artwork, "Listening To Listening", that has been designed to accompany a real-world sculpture with two prototype interaction schemes. Our artwork is created for the HoloLens platform so that users can have an individual experience in a mixed reality context. Personal AR systems have recently become available and practical for integration into public art projects, however research into sonic sculpture works has yet to account for the affordances of current portable and mainstream AR systems. In this work, we take advantage of the HoloLens’ spatial awareness to build sonic spaces that have a precise spatial relationship to a given sculpture and where the sculpture itself is modelled in the augmented scene as an "invisible hologram". We describe the artistic rationale for our artwork, the design of the two interaction schemes, and the technical and usability feedback that we have obtained from demonstrations during iterative development. This work appears to be the first time that head-mounted AR has been used to build an interactive sonic landscape to engage with a public sculpture.
@inproceedings{NIME20_8, author = {Martin, Charles Patrick and Liu, Zeruo and Wang, Yichen and He, Wennan and Gardner, Henry}, title = {Sonic Sculpture: Activating Engagement with Head-Mounted Augmented Reality}, pages = {39--42}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813445}, url = {https://www.nime.org/proceedings/2020/nime2020_paper8.pdf}, presentation-video = {https://youtu.be/RlTWXnFOLN8} }
-
Giovanni Santini. 2020. Augmented Piano in Augmented Reality. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 411–415. http://doi.org/10.5281/zenodo.4813449
Download PDF DOIAugmented instruments have been a widely explored research topic since the late 80s. The possibility of using sensors to provide input for sound processing/synthesis units has let composers and sound artists open up new ways for experimentation. Augmented Reality, by rendering virtual objects in the real world and by making those objects interactive (via some sensor-generated input), provides a new frame for this research field. In fact, the 3D visual feedback, delivering a precise indication of the spatial configuration/function of each virtual interface, can make the instrumental augmentation process more intuitive for the interpreter and more resourceful for a composer/creator: interfaces can change their behavior over time, can be reshaped, activated or deactivated. Each of these modifications can be made obvious to the performer by using strategies of visual feedback. In addition, it is possible to accurately sample space and to map it with differentiated functions. Augmenting interfaces can also be considered a visually expressive tool for the audience and designed accordingly: the performer’s point of view (or another point of view provided by an external camera) can be mirrored to a projector. This article shows some examples of different designs of AR piano augmentation from the composition Studi sulla realtà nuova.
@inproceedings{NIME20_80, author = {Santini, Giovanni}, title = {Augmented Piano in Augmented Reality}, pages = {411--415}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813449}, url = {https://www.nime.org/proceedings/2020/nime2020_paper80.pdf}, presentation-video = {https://youtu.be/3HBWvKj2cqc} }
-
Tom Davis and Laura Reid. 2020. Taking Back Control: Taming the Feral Cello. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 416–421. http://doi.org/10.5281/zenodo.4813453
Download PDF DOIWhilst there is a large body of NIME papers that concentrate on the presentation of new technologies, there are fewer papers that have focused on a longitudinal understanding of NIMEs in practice. This paper embodies the more recent acknowledgement of the importance of practice-based methods of evaluation [1,2,3,4] concerning the use of NIMEs within performance and the recognition that it is only within the situation of practice that the context is available to actually interpret and evaluate the instrument [2]. Within this context, this paper revisits the Feral Cello performance system that was first presented at NIME 2017 [5]. This paper explores what has been learned through the artistic practice of performing and workshopping in this context by drawing heavily on the experiences of the performer/composer who has become an integral part of this project and co-author of this paper. The original philosophical context is also revisited and reflections are made on the tensions between this position and the need to ‘get something to work’. The authors feel the presentation of the semi-structured interview within the paper is the best method of staying truthful to Hayes’ understanding of musical improvisation as an enactive framework ‘in its ability to demonstrate the importance of participatory, relational, emergent, and embodied musical activities and processes’ [4].
@inproceedings{NIME20_81, author = {Davis, Tom and Reid, Laura}, title = {Taking Back Control: Taming the Feral Cello}, pages = {416--421}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813453}, url = {https://www.nime.org/proceedings/2020/nime2020_paper81.pdf}, presentation-video = {https://youtu.be/9npR0T6YGiA} }
-
Thibault Jaccard, Robert Lieck, and Martin Rohrmeier. 2020. AutoScale: Automatic and Dynamic Scale Selection for Live Jazz Improvisation. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 422–427. http://doi.org/10.5281/zenodo.4813457
Download PDF DOIBecoming a practical musician traditionally requires an extensive amount of preparatory work to master the technical and theoretical challenges of the particular instrument and musical style before being able to devote oneself to musical expression. In particular, in jazz improvisation, one of the major barriers is the mastery and appropriate selection of scales from a wide range, according to harmonic context and style. In this paper, we present AutoScale, an interactive software tool for making jazz improvisation more accessible by lifting the burden of scale selection from the musician while still allowing full controllability if desired. This is realized by implementing a MIDI effect that dynamically maps the desired scales onto a standardized layout. Scale selection can be pre-programmed, automated based on algorithmic lead sheet analysis, or interactively adapted. We discuss the music-theoretical foundations underlying our approach and the design choices taken for building an intuitive user interface, and provide implementations as a VST plugin and as web applications for use with a Launchpad or traditional MIDI keyboard. (An illustrative code sketch of this kind of scale-to-layout remapping follows the BibTeX record below.)
@inproceedings{NIME20_82, author = {Jaccard, Thibault and Lieck, Robert and Rohrmeier, Martin}, title = {AutoScale: Automatic and Dynamic Scale Selection for Live Jazz Improvisation}, pages = {422--427}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813457}, url = {https://www.nime.org/proceedings/2020/nime2020_paper82.pdf}, presentation-video = {https://youtu.be/KqGpTTQ9ZrE} }
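As a rough illustration of the dynamic scale-to-layout remapping the AutoScale abstract describes (this is an assumption about the general technique, not the authors' code; the scale tables, the `remap_note` function, and the layout origin are invented for the example), a minimal Python sketch might look like this:

```python
# Minimal sketch, assuming a uniform key layout whose n-th key should always
# sound the n-th degree of whatever scale is currently selected.
SCALE_INTERVALS = {
    "ionian":     [0, 2, 4, 5, 7, 9, 11],
    "dorian":     [0, 2, 3, 5, 7, 9, 10],
    "mixolydian": [0, 2, 4, 5, 7, 9, 10],
}

def remap_note(key_index: int, scale: str, root: int = 60) -> int:
    """Map the n-th key of the layout to the n-th degree of the chosen scale."""
    intervals = SCALE_INTERVALS[scale]
    octave, degree = divmod(key_index, len(intervals))
    return root + 12 * octave + intervals[degree]

if __name__ == "__main__":
    layout_origin = 60  # the key that always plays the current scale's root
    for note in range(60, 68):  # eight consecutive incoming MIDI keys
        key = note - layout_origin
        # The same physical keys yield different pitches as the harmony changes.
        print(note, remap_note(key, "mixolydian", root=67),
              remap_note(key, "dorian", root=62))
```

In AutoScale itself the current scale and root are chosen by pre-programming, lead sheet analysis, or interaction, as the abstract notes; here they are simply passed in by hand.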
-
Lauren Hayes and Adnan Marquez-Borbon. 2020. Nuanced and Interrelated Mediations and Exigencies (NIME): Addressing the Prevailing Political and Epistemological Crises. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 428–433. http://doi.org/10.5281/zenodo.4813459
Download PDF DOINearly two decades after its inception as a workshop at the ACM Conference on Human Factors in Computing Systems, NIME exists as an established international conference significantly distinct from its precursor. While this origin story is often noted, the implications of NIME’s history as emerging from a field predominantly dealing with human-computer interaction have rarely been discussed. In this paper we highlight many of the recent—and some not so recent—challenges that have been brought upon the NIME community as it attempts to maintain and expand its identity as a platform for multidisciplinary research into HCI, interface design, and electronic and computer music. We discuss the relationship between the market demands of the neoliberal university—which have underpinned academia’s drive for innovation—and the quantification and economisation of research performance, which have allowed certain disciplinary and social frictions to emerge within NIME-related research and practice. Drawing on work that engages with feminist theory and cultural studies, we suggest that critical reflection, and moreover mediation, is necessary in order to address burgeoning concerns which have been raised within the NIME discourse in relation to methodological approaches, ‘diversity and inclusion’, ‘accessibility’, and the fostering of rigorous interdisciplinary research.
@inproceedings{NIME20_83, author = {Hayes, Lauren and Marquez-Borbon, Adnan}, title = {Nuanced and Interrelated Mediations and Exigencies (NIME): Addressing the Prevailing Political and Epistemological Crises}, pages = {428--433}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813459}, url = {https://www.nime.org/proceedings/2020/nime2020_paper83.pdf}, presentation-video = {https://youtu.be/4UERHlFUQzo} }
-
Andrew McPherson and Giacomo Lepri. 2020. Beholden to our tools: negotiating with technology while sketching digital instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 434–439. http://doi.org/10.5281/zenodo.4813461
Download PDF DOIDigital musical instrument design is often presented as an open-ended creative process in which technology is adopted and adapted to serve the musical will of the designer. The real-time music programming languages powering many new instruments often provide access to audio manipulation at a low level, theoretically allowing the creation of any sonic structure from primitive operations. As a result, designers may assume that these seemingly omnipotent tools are pliable vehicles for the expression of musical ideas. We present the outcomes of a compositional game in which sound designers were invited to create simple instruments using common sensors and the Pure Data programming language. We report on the patterns and structures that often emerged during the exercise, arguing that designers respond strongly to suggestions offered by the tools they use. We discuss the idea that current music programming languages may be as culturally loaded as the communities of practice that produce and use them. Instrument making is then best viewed as a protracted negotiation between designer and tools.
@inproceedings{NIME20_84, author = {McPherson, Andrew and Lepri, Giacomo}, title = {Beholden to our tools: negotiating with technology while sketching digital instruments}, pages = {434--439}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813461}, url = {https://www.nime.org/proceedings/2020/nime2020_paper84.pdf}, presentation-video = {https://youtu.be/-nRtaucPKx4} }
-
Andrea Martelloni, Andrew McPherson, and Mathieu Barthet. 2020. Percussive Fingerstyle Guitar through the Lens of NIME: an Interview Study. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 440–445. http://doi.org/10.5281/zenodo.4813463
Download PDF DOIPercussive fingerstyle is a playing technique adopted by many contemporary acoustic guitarists, and it has grown substantially in popularity over the last decade. Its foundations lie in the use of the guitar’s body for percussive lines, and in the extended range given by the novel use of altered tunings. There are very few formal accounts of percussive fingerstyle; therefore, we devised an interview study to investigate its approach to composition, performance and musical experimentation. Our aim was to gain insight into the technique from a gesture-based point of view, to observe whether modern fingerstyle shares similarities with approaches in NIME practice, and to investigate possible avenues for guitar augmentations inspired by the percussive technique. We conducted an inductive thematic analysis on the transcribed interviews: our findings highlight the participants’ material-based approach to musical interaction, and we present a three-zone model of the most common percussive gestures on the guitar’s body. Furthermore, we examine current trends in Digital Musical Instruments, especially in guitar augmentation, and we discuss possible future directions in augmented guitars in light of the interviewees’ perspectives.
@inproceedings{NIME20_85, author = {Martelloni, Andrea and McPherson, Andrew and Barthet, Mathieu}, title = {Percussive Fingerstyle Guitar through the Lens of NIME: an Interview Study}, pages = {440--445}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813463}, url = {https://www.nime.org/proceedings/2020/nime2020_paper85.pdf}, presentation-video = {https://youtu.be/ON8ckEBcQ98} }
-
Robert Jack, Jacob Harrison, and Andrew McPherson. 2020. Digital Musical Instruments as Research Products. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 446–451. http://doi.org/10.5281/zenodo.4813465
Download PDF DOIIn the field of human-computer interaction (HCI) the limitations of prototypes as the primary artefact used in research are being realised. Prototypes often remain open in their design, are partially finished, and have a focus on a specific aspect of interaction. Previous authors have proposed ‘research products’ as a specific category of artefact distinct from both research prototypes and commercial products. The characteristics of research products are their holistic completeness as a design artefact, their situatedness in a specific cultural context, and the fact that they are evaluated for what they are, not what they will become. This paper discusses the ways in which many instruments created within the context of New Interfaces for Musical Expression (NIME), including those that are used in performances, often fall into the category of prototype. We discuss why research products might be a useful framing for NIME research, weighing them up against some of the main themes of NIME research: technological innovation, musical expression, and instrumentality. We conclude this paper with a case study of Strummi, a digital musical instrument which we frame as a research product.
@inproceedings{NIME20_86, author = {Jack, Robert and Harrison, Jacob and McPherson, Andrew}, title = {Digital Musical Instruments as Research Products}, pages = {446--451}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813465}, url = {https://www.nime.org/proceedings/2020/nime2020_paper86.pdf}, presentation-video = {https://youtu.be/luJwlZBeBqY} }
-
Amit D Patel and John Richards. 2020. Pop-up for Collaborative Music-making. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 452–457. http://doi.org/10.5281/zenodo.4813473
Download PDF DOIThis paper presents a micro-residency in a pop-up shop and collaborative making amongst a group of researchers and practitioners. The making extends to sound(-making) objects, instruments, workshop, sound installation, performance and discourse on DIY electronic music. Our research builds on creative workshopping and speculative design and is informed by ideas of collective making. The ad hoc and temporary pop-up space is seen as formative in shaping the outcomes of the work. Through the lens of curated research, working together with a provocative brief, we explored handmade objects, craft, non-craft, human error, and the spirit of DIY, DIYness. We used the Studio Bench - a method that brings making, recording and performance together in one space - and viewed workshopping and performance as a holistic event. A range of methodologies were investigated in relation to NIME. These included the Hardware Mash-up, Speculative Sound Circuits and Reverse Design, from product to prototype, resulting in the instrument the Radical Nails. Finally, our work drew on the notion of design as performance and making in public and further developed our understanding of workshop-installation and performance-installation.
@inproceedings{NIME20_87, author = {Patel, Amit D and Richards, John}, title = {Pop-up for Collaborative Music-making}, pages = {452--457}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813473}, url = {https://www.nime.org/proceedings/2020/nime2020_paper87.pdf} }
-
Courtney Reed and Andrew McPherson. 2020. Surface Electromyography for Direct Vocal Control. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 458–463. http://doi.org/10.5281/zenodo.4813475
Download PDF DOIThis paper introduces a new method for direct control using the voice via measurement of vocal muscular activation with surface electromyography (sEMG). Digital musical interfaces based on the voice have typically used indirect control, in which features extracted from audio signals control the parameters of sound generation, for example in audio-to-MIDI controllers. By contrast, focusing on the musculature of the singing voice allows direct muscular control, or alternatively, combined direct and indirect control in an augmented vocal instrument. In this way we aim both to preserve the intimate relationship a vocalist has with their instrument and the key timbral and stylistic characteristics of the voice, and to expand its sonic capabilities. This paper discusses other digital instruments that effectively utilise a combination of indirect and direct control, as well as the history of controllers involving the voice. Subsequently, a new method of direct control from physiological aspects of singing through sEMG is discussed along with its capabilities. Future developments of the system are outlined along with usage in performance studies, interactive live vocal performance, and educational and practice tools. (An illustrative sketch of extracting a control envelope from an sEMG signal follows the BibTeX record below.)
@inproceedings{NIME20_88, author = {Reed, Courtney and McPherson, Andrew}, title = {Surface Electromyography for Direct Vocal Control}, pages = {458--463}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813475}, url = {https://www.nime.org/proceedings/2020/nime2020_paper88.pdf}, presentation-video = {https://youtu.be/1nWLgQGNh0g} }
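The abstract above does not specify how the sEMG signal is conditioned; purely as an illustration of one conventional approach (rectification via squaring plus a moving RMS window, with all names and parameters invented here, not taken from the paper), a sketch might be:

```python
# Minimal sketch: turn a raw sEMG signal into a slowly varying control envelope
# that could then drive a synthesis parameter such as a filter cutoff.
import numpy as np

def emg_envelope(emg: np.ndarray, sr: int, window_ms: float = 50.0) -> np.ndarray:
    """Return a root-mean-square envelope of a raw sEMG signal."""
    win = max(1, int(sr * window_ms / 1000.0))
    squared = emg.astype(float) ** 2
    kernel = np.ones(win) / win
    return np.sqrt(np.convolve(squared, kernel, mode="same"))

if __name__ == "__main__":
    sr = 1000  # Hz, an assumed sEMG sampling rate
    t = np.arange(0, 2.0, 1.0 / sr)
    # Synthetic "muscle burst": noise whose amplitude rises and falls.
    emg = np.random.randn(t.size) * (0.2 + 0.8 * np.sin(np.pi * t / 2.0) ** 2)
    env = emg_envelope(emg, sr)
    # Normalise to 0..1 so the envelope can be mapped to a control parameter.
    control = (env - env.min()) / (env.max() - env.min())
    print(control[::500])
```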
-
Henrik von Coler, Steffen Lepa, and Stefan Weinzierl. 2020. User-Defined Mappings for Spatial Sound Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 464–469. http://doi.org/10.5281/zenodo.4813477
Download PDF DOIThe presented sound synthesis system allows the individual spatialization of spectral components in real-time, using a sinusoidal modeling approach within 3-dimensional sound reproduction systems. A co-developed, dedicated haptic interface is used to jointly control spectral and spatial attributes of the sound. Within a user study, participants were asked to create an individual mapping between control parameters of the interface and rendering parameters of sound synthesis and spatialization, using a visual programming environment. Resulting mappings of all participants are evaluated, indicating the preference of single control parameters for specific tasks. In comparison with mappings intended by the development team, the results validate certain design decisions and indicate new directions.
@inproceedings{NIME20_89, author = {von Coler, Henrik and Lepa, Steffen and Weinzierl, Stefan}, title = {User-Defined Mappings for Spatial Sound Synthesis}, pages = {464--469}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813477}, url = {https://www.nime.org/proceedings/2020/nime2020_paper89.pdf} }
-
Rohan Proctor and Charles Patrick Martin. 2020. A Laptop Ensemble Performance System using Recurrent Neural Networks. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 43–48. http://doi.org/10.5281/zenodo.4813481
Download PDF DOIThe popularity of applying machine learning techniques in musical domains has created an inherent availability of freely accessible pre-trained neural network (NN) models ready for use in creative applications. This work outlines the implementation of one such application in the form of an assistance tool designed for live improvisational performances by laptop ensembles. The primary intention was to leverage off-the-shelf pre-trained NN models as a basis for assisting individual performers, either as musical novices looking to engage with more experienced performers or as a tool to expand musical possibilities through new forms of creative expression. The system expands upon a variety of ideas found in different research areas, including new interfaces for musical expression, generative music and group performance, to produce a networked performance solution served via a web-browser interface. The final implementation of the system offers performers a mixture of high- and low-level controls to influence the shape of sequences of notes output by locally run NN models in real time, also allowing performers to define their level of engagement with the assisting generative models. Two test performances were played, with the system shown to feasibly support four performers over a four-minute piece while producing musically cohesive and engaging music. Iterations on the design of the system exposed technical constraints on the use of a JavaScript environment for generative models in a live music context, largely derived from inescapable processing overheads.
@inproceedings{NIME20_9, author = {Proctor, Rohan and Martin, Charles Patrick}, title = {A Laptop Ensemble Performance System using Recurrent Neural Networks}, pages = {43--48}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813481}, url = {https://www.nime.org/proceedings/2020/nime2020_paper9.pdf} }
-
Tiago Brizolara, Sylvie Gibet, and Caroline Larboulette. 2020. Elemental: a Gesturally Controlled System to Perform Meteorological Sounds. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 470–476. http://doi.org/10.5281/zenodo.4813483
Download PDF DOIIn this paper, we present and evaluate Elemental, a NIME (New Interface for Musical Expression) based on audio synthesis of sounds of meteorological phenomena, namely rain, wind and thunder, intended for application in contemporary music/sound art, performing arts and entertainment. We first describe the system, controlled by the performer’s arms through Inertial Measurement Units and Electromyography sensors. The produced data is analyzed and used, through mapping strategies, as input to the sound synthesis engine. We conducted user studies to refine the sound synthesis engine, the choice of gestures and the mappings between them, and to finally evaluate this proof of concept. Indeed, the users approached the system with their own awareness, ranging from the manipulation of abstract sound to the direct simulation of atmospheric phenomena - in the latter case, even to revive memories or to create novel situations. This suggests that the approach of instrumentalizing sounds of known source may be a fruitful strategy for constructing expressive interactive sonic systems.
@inproceedings{NIME20_90, author = {Brizolara, Tiago and Gibet, Sylvie and Larboulette, Caroline}, title = {Elemental: a Gesturally Controlled System to Perform Meteorological Sounds}, pages = {470--476}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813483}, url = {https://www.nime.org/proceedings/2020/nime2020_paper90.pdf} }
-
Çağrı Erdem and Alexander Refsum Jensenius. 2020. RAW: Exploring Control Structures for Muscle-based Interaction in Collective Improvisation. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 477–482. http://doi.org/10.5281/zenodo.4813485
Download PDF DOIThis paper describes the ongoing process of developing RAW, a collaborative body–machine instrument that relies on ‘sculpting’ the sonification of raw EMG signals. The instrument is built around two Myo armbands located on the forearms of the performer. These are used to investigate muscle contraction, which is again used as the basis for the sonic interaction design. Using a practice-based approach, the aim is to explore the musical aesthetics of naturally occurring bioelectric signals. We are particularly interested in exploring the differences between processing at audio rate versus control rate, and how the level of detail in the signal, and the complexity of the mappings, influence the experience of control in the instrument. This is exemplified through reflections on four concerts in which RAW has been used in different types of collective improvisation.
@inproceedings{NIME20_91, author = {Erdem, Çağrı and Jensenius, Alexander Refsum}, title = {RAW: Exploring Control Structures for Muscle-based Interaction in Collective Improvisation}, pages = {477--482}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813485}, url = {https://www.nime.org/proceedings/2020/nime2020_paper91.pdf}, presentation-video = {https://youtu.be/gX-X1iw7uWE} }
-
Travis C MacDonald, James Hughes, and Barry MacKenzie. 2020. SmartDrone: An Aurally Interactive Harmonic Drone. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 483–488. http://doi.org/10.5281/zenodo.4813488
Download PDF DOIMobile devices provide musicians with the convenience of musical accompaniment wherever they are, granting them new methods for developing their craft. We developed the application SmartDrone to give users the freedom to practice in different harmonic settings with the assistance of their smartphone. This application further explores the area of dynamic accompaniment by implementing functionality so that chords are generated based on the key in which the user is playing. Since this app was designed to be a tool for scale practice, drone-like accompaniment was chosen so that musicians could experiment with combinations of melody and harmony. The details of the application development process are discussed in this paper, with the main focus on scale analysis and harmonic transposition. By using these two components, the application is able to dynamically alter the key to reflect the user’s playing. As well as the design and implementation details, this paper reports and examines feedback from a small user study of undergraduate music students who used the app. (An illustrative sketch of template-based key estimation, one possible approach to such scale analysis, follows the BibTeX record below.)
@inproceedings{NIME20_92, author = {MacDonald, Travis C and Hughes, James and MacKenzie, Barry}, title = {SmartDrone: An Aurally Interactive Harmonic Drone}, pages = {483--488}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813488}, url = {https://www.nime.org/proceedings/2020/nime2020_paper92.pdf} }
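The app's own scale-analysis method is not detailed in the abstract; one common way to estimate a key from recently played notes is pitch-class template matching, sketched below purely as an illustration (the template, note list and function name are assumptions, not the app's code):

```python
# Minimal sketch: score each of the 12 possible major-key roots against a
# pitch-class histogram of recent notes, so a drone could follow the player.
import numpy as np

MAJOR_TEMPLATE = np.array([1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1], dtype=float)
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def estimate_key(notes):
    histogram = np.zeros(12)
    for n in notes:
        histogram[n % 12] += 1
    # Rolling the template by k gives the major scale rooted on pitch class k.
    scores = [np.dot(np.roll(MAJOR_TEMPLATE, k), histogram) for k in range(12)]
    return NOTE_NAMES[int(np.argmax(scores))]

if __name__ == "__main__":
    # A D major scale fragment: the estimated drone root should be D.
    print(estimate_key([62, 64, 66, 67, 69, 71, 73, 74]))
```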
-
Juan P Martinez Avila, Vasiliki Tsaknaki, Pavel Karpashevich, et al. 2020. Soma Design for NIME. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 489–494. http://doi.org/10.5281/zenodo.4813491
Download PDF DOIPrevious research on musical embodiment has reported that expert performers often regard their instruments as an extension of their body. Not every digital musical instrument seeks to create a close relationship between body and instrument, but even for the many that do, the design process often focuses heavily on technical and sonic factors, with relatively less attention to the bodily experience of the performer. In this paper we propose Somaesthetic design as an alternative to explore this space. The Soma method aims to attune the sensibilities of designers, as well as their experience of their body, and make use of these notions as a resource for creative design. We then report on a series of workshops exploring the relationship between the body and the guitar with a Soma design approach. The workshops resulted in a series of guitar-related artefacts and NIMEs that emerged from the somatic exploration of balance and tension during guitar performance. Lastly we present lessons learned from our research that could inform future Soma-based musical instrument design, and how NIME research may also inform Soma design.
@inproceedings{NIME20_93, author = {Martinez Avila, Juan P and Tsaknaki, Vasiliki and Karpashevich, Pavel and Windlin, Charles and Valenti, Niklas and Höök, Kristina and McPherson, Andrew and Benford, Steve}, title = {Soma Design for NIME}, pages = {489--494}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813491}, url = {https://www.nime.org/proceedings/2020/nime2020_paper93.pdf}, presentation-video = {https://youtu.be/i4UN_23A_SE} }
-
Laddy P Cadavid. 2020. Knotting the memory//Encoding the Khipu_: Reuse of an ancient Andean device as a NIME . Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 495–498. http://doi.org/10.5281/zenodo.4813495
Download PDF DOIThe khipu is an information processing and transmission device used mainly by the Inca empire and previous Andean societies. This mnemotechnic interface is one of the first known textile computers, consisting of a central wool or cotton cord to which other strings are attached with knots of different shapes, colors, and sizes, encoding different kinds of values and information. The system was widely used until the Spanish colonization, which banned its use and destroyed a large number of these devices. This paper introduces the creation process of a NIME based on a khipu converted into an electronic instrument for the interaction and generation of live experimental sound by weaving knots with conductive rubber cords, and its implementation in the performance Knotting the memory//Encoding the Khipu_. The performance aims to pay homage to this system from a decolonial perspective, continuing the interrupted legacy of this ancestral practice in a different experience of tangible live coding and computer music, and weaving the past with the present of the indigenous peoples’ resistance in the Andean territory and their sounds.
@inproceedings{NIME20_94, author = {Cadavid, Laddy P}, title = {Knotting the memory//Encoding the Khipu_: Reuse of an ancient Andean device as a NIME }, pages = {495--498}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813495}, url = {https://www.nime.org/proceedings/2020/nime2020_paper94.pdf}, presentation-video = {https://youtu.be/nw5rbc15pT8} }
-
Shelly Knotts and Nick Collins. 2020. A survey on the uptake of Music AI Software. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 499–504. http://doi.org/10.5281/zenodo.4813499
Download PDF DOIThe recent proliferation of commercial software claiming ground in the field of music AI has provided an opportunity to engage with AI in music making without the need to use libraries aimed at those with programming skills. Pre-packaged music AI software has the potential to broaden access to machine learning tools, but it is unclear how widely such software is used by music technologists or how engagement affects attitudes towards AI in music making. To interrogate these questions we undertook a survey in October 2019, gaining 117 responses. The survey collected statistical information on the use of pre-packaged and self-written music AI software. Respondents reported a range of musical outputs including producing recordings, live performance and generative work across many genres of music making. The survey also gauged general attitudes towards AI in music and provided an open field for general comments. The responses to the survey suggested a forward-looking attitude to music AI, with participants often pointing to the future potential of AI tools rather than their present utility. Optimism was partially related to programming skill, with more experienced respondents showing higher skepticism towards the current state and future potential of AI.
@inproceedings{NIME20_95, author = {Knotts, Shelly and Collins, Nick}, title = {A survey on the uptake of Music AI Software}, pages = {499--504}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813499}, url = {https://www.nime.org/proceedings/2020/nime2020_paper95.pdf}, presentation-video = {https://youtu.be/v6hT3ED3N60} }
-
Scott Barton. 2020. Circularity in Rhythmic Representation and Composition. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 505–508. http://doi.org/10.5281/zenodo.4813501
Download PDF DOICycle is a software tool for musical composition and improvisation that represents events along a circular timeline. In doing so, it breaks from the linear representational conventions of European Art music and modern Digital Audio Workstations. A user specifies time points on different layers, each of which corresponds to a particular sound. The layers are superimposed on a single circle, which affords a unique visual perspective on the relationships between musical voices given their geometric positions. Positions in between quantizations are possible, which encourages experimentation with expressive timing and machine rhythms. User-selected transformations affect groups of notes, layers, and the pattern as a whole. Past and future states are also represented, synthesizing linear and cyclical notions of time. This paper contemplates philosophical questions raised by circular rhythmic notation and reflects on the ways in which the representational novelties and editing functions of Cycle have inspired creativity in musical composition. (An illustrative sketch of mapping circular positions to onset times follows the BibTeX record below.)
@inproceedings{NIME20_96, author = {Barton, Scott}, title = {Circularity in Rhythmic Representation and Composition}, pages = {505--508}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813501}, url = {https://www.nime.org/proceedings/2020/nime2020_paper96.pdf}, presentation-video = {https://youtu.be/0CEKbyJUSw4} }
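Purely as an illustration of the circular-timeline idea (not Cycle's implementation; the angle convention and loop length are assumptions made for this example), positions around the circle can be read as onset times within a repeating loop:

```python
# Minimal sketch: angles on a circular timeline become onset times in one loop,
# including "in-between" positions that fall off any quantisation grid.
def angles_to_onsets(angles_deg, loop_seconds=2.0):
    """Map angles (0-360 degrees around the circle) to onset times in one loop."""
    return [loop_seconds * (a % 360.0) / 360.0 for a in sorted(angles_deg)]

if __name__ == "__main__":
    # Four evenly spaced kick positions plus one 'in-between' snare at 100.7 degrees.
    kicks = angles_to_onsets([0, 90, 180, 270])
    snare = angles_to_onsets([100.7])
    print("kick onsets:", kicks)   # [0.0, 0.5, 1.0, 1.5] seconds
    print("snare onset:", snare)   # ~0.559 s, off the sixteenth-note grid
```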
-
Thor Magnusson. 2020. Instrumental Investigations at Emute Lab. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 509–513. http://doi.org/10.5281/zenodo.4813503
Download PDF DOIThis lab report discusses recent projects and activities of the Experimental Music Technologies Lab at the University of Sussex. The lab was founded in 2014 and has contributed to the development of the field of new musical technologies. The report introduces the lab’s agenda, gives examples of its activities through common themes, and gives short descriptions of lab members’ work. The lab environment, funding income and future vision are also presented.
@inproceedings{NIME20_97, author = {Magnusson, Thor}, title = {Instrumental Investigations at Emute Lab}, pages = {509--513}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813503}, url = {https://www.nime.org/proceedings/2020/nime2020_paper97.pdf} }
-
Satvik Venkatesh, Edward Braund, and Eduardo Miranda. 2020. Composing Popular Music with Physarum polycephalum-based Memristors. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 514–519. http://doi.org/10.5281/zenodo.4813507
Download PDF DOICreative systems such as algorithmic composers often use Artificial Intelligence models like Markov chains, Artificial Neural Networks, and Genetic Algorithms in order to model stochastic processes. Unconventional Computing (UC) technologies explore non-digital ways of data storage, processing, input, and output. UC paradigms such as Quantum Computing and Biocomputing delve into domains beyond the binary bit to handle complex non-linear functions. In this paper, we harness Physarum polycephalum as memristors to process and generate creative data for popular music. While there has been research conducted in this area, the literature lacks examples of popular music and how the organism’s non-linear behaviour can be controlled while composing music. This is important because non-linear forms of representation are not as obvious as conventional digital means. This study aims at disseminating this technology to non-experts and musicians so that they can incorporate it in their creative processes. Furthermore, it combines resistors and memristors to have more flexibility while generating music and optimises parameters for faster processing and performance.
@inproceedings{NIME20_98, author = {Venkatesh, Satvik and Braund, Edward and Miranda, Eduardo}, title = {Composing Popular Music with Physarum polycephalum-based Memristors}, pages = {514--519}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813507}, url = {https://www.nime.org/proceedings/2020/nime2020_paper98.pdf}, presentation-video = {https://youtu.be/NBLa-KoMUh8} }
-
Fede Camara Halac and Shadrick Addy. 2020. PathoSonic: Performing Sound In Virtual Reality Feature Space. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 520–522. http://doi.org/10.5281/zenodo.4813510
Download PDF DOIPathoSonic is a VR experience that enables a participant to visualize and perform a sound file based on timbre feature descriptors displayed in space. The name comes from the different paths the participant can create through their sonic explorations. The goal of this research is to leverage affordances of virtual reality technology to visualize sound through different levels of performance-based interactivity that immerses the participant’s body in a spatial virtual environment. Through implementation of a multi-sensory experience, including visual aesthetics, sound, and haptic feedback, we explore inclusive approaches to sound visualization, making it more accessible to a wider audience, including those with hearing and mobility impairments. The online version of the paper can be accessed here: https://fdch.github.io/pathosonic (An illustrative sketch of extracting timbre descriptors and normalising them into spatial coordinates follows the BibTeX record below.)
@inproceedings{NIME20_99, author = {Camara Halac, Fede and Addy, Shadrick}, title = {PathoSonic: Performing Sound In Virtual Reality Feature Space}, pages = {520--522}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, doi = {10.5281/zenodo.4813510}, url = {https://www.nime.org/proceedings/2020/nime2020_paper99.pdf} }
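As a loose illustration of how timbre feature descriptors could be turned into positions in a virtual space (an assumption about the general approach, not the PathoSonic code; the choice of descriptors and the normalisation scheme are invented), using librosa:

```python
# Minimal sketch: per-frame timbre descriptors normalised to 0..1 so each
# analysis frame could be placed as a point (x, y, z) in a virtual space.
import numpy as np
import librosa

def timbre_points(path: str):
    y, sr = librosa.load(path, mono=True)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]
    flatness = librosa.feature.spectral_flatness(y=y)[0]
    rms = librosa.feature.rms(y=y)[0]
    feats = np.vstack([centroid, flatness, rms])
    # Normalise each descriptor independently into the unit cube.
    mins = feats.min(axis=1, keepdims=True)
    maxs = feats.max(axis=1, keepdims=True)
    return ((feats - mins) / (maxs - mins + 1e-9)).T  # one (x, y, z) per frame

if __name__ == "__main__":
    points = timbre_points(librosa.example("trumpet"))
    print(points.shape, points[:3])
```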
2019
-
Enrique Tomas, Thomas Gorbach, Hilda Tellioglu, and Martin Kaltenbrunner. 2019. Material embodiments of electroacoustic music: an experimental workshop study. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 1–6. http://doi.org/10.5281/zenodo.3672842
Download PDF DOIThis paper reports on a workshop where participants produced physical mock-ups of musical interfaces directly after miming control of short electroacoustic music pieces. Our goal was to understand how people envision and materialize their own sound-producing gestures from spontaneous cognitive mappings. During the workshop, 50 participants from different creative backgrounds modeled more than 180 physical artifacts. Participants were filmed and interviewed for the later analysis of their different multimodal associations about music. Our initial hypothesis was that most of the physical mock-ups would be similar to the sound-producing objects that participants would identify in the musical pieces. Although the majority of artifacts clearly showed correlated design trajectories, our results indicate that a relevant number of participants intuitively decided to engineer alternative solutions emphasizing their personal design preferences. Therefore, in this paper we present and discuss the workshop format, its results and the possible applications for designing new musical interfaces.
@inproceedings{Tomas2019, author = {Tomas, Enrique and Gorbach, Thomas and Tellioglu, Hilda and Kaltenbrunner, Martin}, title = {Material embodiments of electroacoustic music: an experimental workshop study}, pages = {1--6}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672842}, url = {http://www.nime.org/proceedings/2019/nime2019_paper001.pdf} }
-
Yupu Lu, Yijie Wu, and Shijie Zhu. 2019. Collaborative Musical Performances with Automatic Harp Based on Image Recognition and Force Sensing Resistors. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 7–8. http://doi.org/10.5281/zenodo.3672846
Download PDF DOIIn this paper, collaborative performance is defined as a performance in which a pianist is accompanied by an automatic harp. The automatic harp can play music from an electronic score and change its speed according to the speed of the performer. We built a 32-channel automatic harp and designed a layered modular framework integrating both hardware and software for experimental real-time control protocols. Considering that a MIDI keyboard lacks information about force (acceleration) and fingering, both of which are important for expression, we designed a force-sensing glove and implemented basic image recognition. These are used to accurately detect speed, force (corresponding to velocity in MIDI) and pitch when a performer plays the piano.
@inproceedings{Lu2019, author = {Lu, Yupu and Wu, Yijie and Zhu, Shijie}, title = {Collaborative Musical Performances with Automatic Harp Based on Image Recognition and Force Sensing Resistors}, pages = {7--8}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672846}, url = {http://www.nime.org/proceedings/2019/nime2019_paper002.pdf} }
-
Lior Arbel, Yoav Y. Schechner, and Noam Amir. 2019. The Symbaline — An Active Wine Glass Instrument with a Liquid Sloshing Vibrato Mechanism. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 9–14. http://doi.org/10.5281/zenodo.3672848
Download PDF DOIThe Symbaline is an active instrument composed of several partly filled wine glasses excited by electromagnetic coils. This work describes an electromechanical system for incorporating frequency and amplitude modulation into the Symbaline’s sound. A pendulum with a magnetic bob is suspended inside the liquid in the wine glass. The pendulum is put into oscillation by driving infrasound signals through the coil. The pendulum’s movement causes the liquid in the glass to slosh back and forth. Simultaneously, wine glass sounds are produced by driving audio-range signals through the coil, inducing vibrations in a small magnet attached to the glass surface and exciting glass vibrations. As the glass vibrates, the sloshing liquid periodically changes the glass’s resonance frequencies and damps the glass, thus modulating both wine glass pitch and sound intensity. (An illustrative sketch of such a combined infrasound-plus-audio coil drive signal follows the BibTeX record below.)
@inproceedings{Arbel2019, author = {Arbel, Lior and Schechner, Yoav Y. and Amir, Noam}, title = {The Symbaline --- An Active Wine Glass Instrument with a Liquid Sloshing Vibrato Mechanism}, pages = {9--14}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672848}, url = {http://www.nime.org/proceedings/2019/nime2019_paper003.pdf} }
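A minimal sketch of the dual-purpose coil drive described above, under assumed frequencies and amplitudes (the authors' actual signal chain is not given in the abstract): an infrasonic component swings the pendulum and sloshes the liquid, while an audio-rate component excites the glass near a resonance.

```python
# Minimal sketch: sum an infrasonic pendulum-drive sine with an audio-rate tone.
import numpy as np

def coil_signal(duration=5.0, sr=48000, slosh_hz=1.5, glass_hz=880.0):
    t = np.arange(int(duration * sr)) / sr
    infrasound = 0.6 * np.sin(2 * np.pi * slosh_hz * t)   # pendulum/slosh drive
    audio = 0.3 * np.sin(2 * np.pi * glass_hz * t)        # glass excitation tone
    return (infrasound + audio).astype(np.float32)

if __name__ == "__main__":
    sig = coil_signal()
    print(sig.shape, sig[:4])
```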
-
Helena de Souza Nunes, Federico Visi, Lydia Helena Wohl Coelho, and Rodrigo Schramm. 2019. SIBILIM: A low-cost customizable wireless musical interface. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 15–20. http://doi.org/10.5281/zenodo.3672850
Download PDF DOIThis paper presents the SIBILIM, a low-cost musical interface composed of a cardboard resonance box containing customised push buttons that interact with a smartphone through its video camera. Each button can be mapped to a set of MIDI notes or control parameters. The sound is generated through synthesis or sample playback and can be amplified with the help of a transducer, which excites the resonance box. An essential contribution of this interface is the possibility of reconfiguring the button layout without having to rewire the system, since it uses only the smartphone’s built-in camera. These features allow for quick instrument customisation for different use cases, such as low-cost projects for schools or instrument-building workshops. Our case study used the Sibilim for music education, where it was designed to develop conscious music perception and to stimulate creativity through exercises in short tonal musical composition. We conducted a study with a group of twelve participants in an experimental workshop to verify its validity. (An illustrative sketch of camera-based button detection follows the BibTeX record below.)
@inproceedings{deSouzaNunes2019, author = {de Souza Nunes, Helena and Visi, Federico and Coelho, Lydia Helena Wohl and Schramm, Rodrigo}, title = {SIBILIM: A low-cost customizable wireless musical interface}, pages = {15--20}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672850}, url = {http://www.nime.org/proceedings/2019/nime2019_paper004.pdf} }
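The abstract does not say how the camera detects button presses; one plausible, deliberately simplified reading (a brightness threshold over a fixed region of interest, with the ROI, threshold and camera index invented here) can be sketched with OpenCV:

```python
# Minimal sketch: call a cardboard button "pressed" when the mean brightness of
# its camera region of interest drops below a threshold.
import cv2

ROI = (100, 100, 40, 40)   # x, y, width, height of one button region (assumed)
THRESHOLD = 60.0           # mean grey level below which we call it "pressed"

def is_pressed(frame) -> bool:
    x, y, w, h = ROI
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return float(grey[y:y + h, x:x + w].mean()) < THRESHOLD

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)   # built-in camera
    ok, frame = cap.read()
    if ok:
        print("button pressed:", is_pressed(frame))
    cap.release()
```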
-
Jonathan Bell. 2019. The Risset Cycle, Recent Use Cases With SmartVox. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 21–24. http://doi.org/10.5281/zenodo.3672852
Download PDF DOIThe combination of graphic/animated scores, acoustic signals (audio-scores) and Head-Mounted Display (HMD) technology offers promising potential in the context of distributed notation, for live performances and concerts involving voices, instruments and electronics. After an explanation of what SmartVox is technically, and how it is used by composers and performers, this paper explains why this form of technology-aided performance might help musicians with synchronization to an electronic tape and with (spectral) tuning. Then, from an exploration of the concepts of distributed notation and networked music performances, it proposes solutions (in conjunction with INScore, BabelScores and the Decibel Score Player) seeking to expand distributed notation practice to a wider community. It finally presents findings relating to the use of SmartVox with HMDs.
@inproceedings{Bell2019, author = {Bell, Jonathan}, title = {The Risset Cycle, Recent Use Cases With SmartVox}, pages = {21--24}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672852}, url = {http://www.nime.org/proceedings/2019/nime2019_paper005.pdf} }
-
Johnty Wang, Axel Mulder, and Marcelo Wanderley. 2019. Practical Considerations for MIDI over Bluetooth Low Energy as a Wireless Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 25–30. http://doi.org/10.5281/zenodo.3672854
Download PDF DOIThis paper documents the key issues of performance and compatibility when working with Musical Instrument Digital Interface (MIDI) via Bluetooth Low Energy (BLE) as a wireless interface for sensor or controller data and inter-module communication in the context of building interactive digital systems. An overview of BLE MIDI is presented along with a comparison of the protocol from the perspective of theoretical limits and interoperability, showing its widespread compatibility across platforms compared with other alternatives. We then perform three complementary tests on BLE MIDI and alternative interfaces using prototype and commercial devices, showing that BLE MIDI has performance comparable to the tested WiFi implementations, with end-to-end (sensor input to audio output) latencies of under 10 ms under certain conditions. Overall, BLE MIDI is an ideal choice for controllers and sensor interfaces that are designed to work on a wide variety of platforms. (An illustrative sketch of a round-trip MIDI latency measurement follows the BibTeX record below.)
@inproceedings{Wang2019, author = {Wang, Johnty and Mulder, Axel and Wanderley, Marcelo}, title = {Practical Considerations for {MIDI} over Bluetooth Low Energy as a Wireless Interface}, pages = {25--30}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672854}, url = {http://www.nime.org/proceedings/2019/nime2019_paper006.pdf} }
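As an illustration of the sort of timing test mentioned above (a sketch under the assumption of a loopback-connected MIDI port, not the authors' measurement rig; port names are placeholders), using the mido library:

```python
# Minimal sketch: send note-on messages to a loopback-connected MIDI port and
# time how long the echoed messages take to come back.
import time
import mido

def round_trip_ms(out_name: str, in_name: str, trials: int = 50) -> float:
    results = []
    with mido.open_output(out_name) as out, mido.open_input(in_name) as inp:
        for _ in range(trials):
            start = time.perf_counter()
            out.send(mido.Message("note_on", note=60, velocity=100))
            inp.receive()   # block until the echoed message arrives
            results.append((time.perf_counter() - start) * 1000.0)
            time.sleep(0.02)
    return sum(results) / len(results)

if __name__ == "__main__":
    print(mido.get_output_names(), mido.get_input_names())
    # e.g. print(round_trip_ms("BLE-MIDI Device", "BLE-MIDI Device"))
```

End-to-end latency as defined in the paper (sensor input to audio output) also includes sensing and audio buffering, so a loopback figure like this would only be a lower bound.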
-
Richard Ramchurn, Juan Pablo Martinez-Avila, Sarah Martindale, Alan Chamberlain, Max L Wilson, and Steve Benford. 2019. Improvising a Live Score to an Interactive Brain-Controlled Film. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 31–36. http://doi.org/10.5281/zenodo.3672856
Download PDF DOIWe report on the design and deployment of systems for the performance of live score accompaniment to an interactive movie by a Networked Musical Ensemble. In this case, the audio-visual content of the movie is selected in real time based on user input to a Brain-Computer Interface (BCI). Our system supports musical improvisation between human performers and automated systems responding to the BCI. We explore the performers’ roles during two performances when these socio-technical systems were implemented, in terms of coordination, problem-solving, managing uncertainty and musical responses to system constraints. This allows us to consider how features of these systems and practices might be incorporated into a general tool, aimed at any musician, which could scale for use in different performance settings involving interactive media.
@inproceedings{Ramchurn2019, author = {Ramchurn, Richard and Martinez-Avila, Juan Pablo and Martindale, Sarah and Chamberlain, Alan and Wilson, Max L and Benford, Steve}, title = {Improvising a Live Score to an Interactive Brain-Controlled Film}, pages = {31--36}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672856}, url = {http://www.nime.org/proceedings/2019/nime2019_paper007.pdf} }
-
Ajin Jiji Tom, Harish Jayanth Venkatesan, Ivan Franco, and Marcelo Wanderley. 2019. Rebuilding and Reinterpreting a Digital Musical Instrument — The Sponge. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 37–42. http://doi.org/10.5281/zenodo.3672858
Download PDF DOIAlthough several Digital Musical Instruments (DMIs) have been presented at NIME, very few of them remain accessible to the community. Rebuilding a DMI is often a necessary step to allow for performance with NIMEs. Rebuilding a DMI exactly like its original, however, might not be possible due to technology obsolescence, lack of documentation or other reasons. It might then be interesting to reinterpret a DMI and build an instrument inspired by the original one, creating novel performance opportunities. This paper presents the challenges and approaches involved in rebuilding and reinterpreting an existing DMI, The Sponge by Martin Marier. The rebuilt versions make use of newer/improved technology and customized design aspects, such as the addition of vibrotactile feedback and the implementation of different mapping strategies. It also discusses the implications of embedding sound synthesis within the DMI by using the Prynth framework, and presents a comparison between this approach and the more traditional ground-up approach. As a result of the evaluation and comparison of the two rebuilt DMIs, we present a third version which combines their benefits, and we discuss performance issues with these devices.
@inproceedings{Tom2019, author = {Tom, Ajin Jiji and Venkatesan, Harish Jayanth and Franco, Ivan and Wanderley, Marcelo}, title = {Rebuilding and Reinterpreting a Digital Musical Instrument --- The Sponge}, pages = {37--42}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672858}, url = {http://www.nime.org/proceedings/2019/nime2019_paper008.pdf} }
-
Kiyu Nishida, Akishige Yuguchi, kazuhiro jo, Paul Modler, and Markus Noisternig. 2019. Border: A Live Performance Based on Web AR and a Gesture-Controlled Virtual Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 43–46. http://doi.org/10.5281/zenodo.3672860
Download PDF DOIRecent technological advances, such as increased CPU/GPU processing speed, along with the miniaturization of devices and sensors, have created new possibilities for integrating immersive technologies in music and performance art. Virtual and Augmented Reality (VR/AR) have become increasingly interesting as mobile device platforms, such as up-to-date smartphones, with necessary CPU resources entered the consumer market. In combination with recent web technologies, any mobile device can simply connect with a browser to a local server to access the latest technology. The web platform also eases the integration of collaborative situated media in participatory artwork. In this paper, we present the interactive music improvisation piece ‘Border,’ premiered in 2018 at the Beyond Festival at the Center for Art and Media Karlsruhe (ZKM). This piece explores the interaction between a performer and the audience using web-based applications – including AR, real-time 3D audio/video streaming, advanced web audio, and gesture-controlled virtual instruments – on smart mobile devices.
@inproceedings{Nishida2019, author = {Nishida, Kiyu and Yuguchi, Akishige and kazuhiro jo and Modler, Paul and Noisternig, Markus}, title = {Border: A Live Performance Based on Web {AR} and a Gesture-Controlled Virtual Instrument}, pages = {43--46}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672860}, url = {http://www.nime.org/proceedings/2019/nime2019_paper009.pdf} }
-
Palle Dahlstedt. 2019. Taming and Tickling the Beast — Multi-Touch Keyboard as Interface for a Physically Modelled Interconnected Resonating Super-Harp. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 47–52. http://doi.org/10.5281/zenodo.3672862
Download PDF DOILibration Perturbed is a performance and an improvisation instrument, originally composed and designed for a multi-speaker dome. The performer controls a bank of 64 virtual inter-connected resonating strings, with individual and direct control of tuning and resonance characteristics through a multitouch-enhanced klavier interface (TouchKeys). It is a hybrid acoustic-electronic instrument, as all string vibrations originate from physical vibrations in the klavier and its casing, captured through contact microphones. In addition, there are gestural strings, called ropes, excited by performed musical gestures. All strings and ropes are connected, and inter-resonate together as a ”super-harp”, internally and through the performance space. With strong resonance, strings may go into chaotic motion or emergent quasi-periodic patterns, but custom adaptive leveling mechanisms keep loudness under the musician’s control at all times. The hybrid digital/acoustic approach and the enhanced keyboard provide for an expressive and very physical interaction, and a strong multi-channel immersive experience. The paper describes the aesthetic choices behind the design of the system, as well as the technical implementation, and – primarily – the interaction design, as it emerges from mapping, sound design, physical modeling and integration of the acoustic, the gestural, and the virtual. The work is evaluated based on the experiences from a series of performances.
@inproceedings{Dahlstedt2019, author = {Dahlstedt, Palle}, title = {Taming and Tickling the Beast --- Multi-Touch Keyboard as Interface for a Physically Modelled Interconnected Resonating Super-Harp}, pages = {47--52}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672862}, url = {http://www.nime.org/proceedings/2019/nime2019_paper010.pdf} }
-
Doga Cavdir, Juan Sierra, and Ge Wang. 2019. Taptop, Armtop, Blowtop: Evolving the Physical Laptop Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 53–58. http://doi.org/10.5281/zenodo.3672864
Download PDF DOIThis research presents an evolution and evaluation of embodied physical laptop instruments. These instruments are physical in that they use bodily interaction and take advantage of the physical affordances of the laptop; they are embodied in the sense that they are played in ways where the sound is embedded close to the instrument itself. Three distinct laptop instruments, Taptop, Armtop, and Blowtop, are introduced in this paper. We discuss how the design process is integrated with composing for laptop instruments and performing with them; in this process, our aim is to blur the boundaries between the composer and designer/engineer roles. We study how physicality is achieved by leveraging musical gestures gained through traditional instrument practice, as well as gestures inspired by the body, and we explore how such interaction methods affect communication between the ensemble and the audience. An aesthetics-first qualitative evaluation of these interfaces is discussed through works and performances crafted specifically for these instruments and presented in the concert setting of the laptop orchestra. In so doing, we reflect on how such physical, embodied instrument design practices can inform a different kind of expressive and performance mindset.
@inproceedings{Cavdir2019, author = {Cavdir, Doga and Sierra, Juan and Wang, Ge}, title = {Taptop, Armtop, Blowtop: Evolving the Physical Laptop Instrument}, pages = {53--58}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672864}, url = {http://www.nime.org/proceedings/2019/nime2019_paper011.pdf} }
-
David Antonio Gómez Jáuregui, Irvin Dongo, and Nadine Couture. 2019. Automatic Recognition of Soundpainting for the Generation of Electronic Music Sounds. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 59–64. http://doi.org/10.5281/zenodo.3672866
Download PDF DOIThis work aims to explore the use of a new gesture-based interaction built on automatic recognition of the Soundpainting structured gestural language. In the proposed approach, a composer (called the Soundpainter) performs Soundpainting gestures facing a Kinect sensor. A gesture recognition system then captures the gestures, which are sent to sound-generating software. The proposed method was used to stage an artistic show in which a Soundpainter had to improvise with 6 different gestures to generate a musical composition from different sounds in real time. The accuracy of the gesture recognition system was evaluated, as well as the Soundpainter’s user experience. In addition, a user study on using our proposed system in a learning context was also conducted. Current results open up perspectives for the design of new artistic expressions based on the use of automatic gestural recognition supported by the Soundpainting language.
@inproceedings{GomezJauregui2019, author = {Jáuregui, David Antonio Gómez and Dongo, Irvin and Couture, Nadine}, title = {Automatic Recognition of Soundpainting for the Generation of Electronic Music Sounds}, pages = {59--64}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672866}, url = {http://www.nime.org/proceedings/2019/nime2019_paper012.pdf} }
-
Fabio Morreale, Andrea Guidi, and Andrew P. McPherson. 2019. Magpick: an Augmented Guitar Pick for Nuanced Control. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 65–70. http://doi.org/10.5281/zenodo.3672868
Download PDF DOIThis paper introduces the Magpick, an augmented pick for electric guitar that uses electromagnetic induction to sense the motion of the pick with respect to the permanent magnets in the guitar pickup. The Magpick provides the guitarist with nuanced control of the sound which coexists with traditional plucking-hand technique. The paper presents three ways that the signal from the pick can modulate the guitar sound, followed by a case study of its use in which 11 guitarists tested the Magpick for five days and composed a piece with it. Reflecting on their comments and experiences, we outline the innovative features of this technology from the point of view of performance practice. In particular, compared to other augmentations, the high temporal resolution, low latency, and large dynamic range of the Magpick support a highly nuanced control over the sound. Our discussion highlights the utility of having the locus of augmentation coincide with the locus of interaction.
@inproceedings{Morreale2019, author = {Morreale, Fabio and Guidi, Andrea and McPherson, Andrew P.}, title = {Magpick: an Augmented Guitar Pick for Nuanced Control}, pages = {65--70}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672868}, url = {http://www.nime.org/proceedings/2019/nime2019_paper013.pdf} }
-
Bertrand Petit and manuel serrano. 2019. Composing and executing Interactive music using the HipHop.js language. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 71–76. http://doi.org/10.5281/zenodo.3672870
Download PDF DOISkini is a platform for composing and producing live performances with audience participation using connected devices (smartphones, tablets, PCs, etc.). The music composer creates beforehand musical elements such as melodic patterns, sound patterns, instruments, groups of instruments, and a dynamic score that governs the way the basic elements behave according to events produced by the audience. During the concert or performance, the audience, by interacting with the system, gives birth to an original music composition. Skini music scores are expressed in terms of constraints that establish relationships between instruments. A constraint may be instantaneous; for instance, one may disable violins while trumpets are playing. A constraint may also be temporal; for instance, the piano cannot play more than 30 consecutive seconds. The Skini platform is implemented in Hop.js and HipHop.js. HipHop.js, a synchronous reactive DSL, is used for implementing the music scores, as its elementary constructs, consisting of high-level operators such as parallel executions, sequences, awaits, and synchronization points, form an ideal core language for implementing Skini constraints. This paper presents the Skini platform, reports on live performances and an educational project, and briefly overviews the use of HipHop.js for representing scores (see the sketch after this entry).
@inproceedings{Petit2019, author = {Petit, Bertrand and manuel serrano}, title = {Composing and executing Interactive music using the HipHop.js language}, pages = {71--76}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672870}, url = {http://www.nime.org/proceedings/2019/nime2019_paper014.pdf} }
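The two example constraints in the abstract above, one instantaneous (no violins while trumpets play) and one temporal (piano limited to 30 consecutive seconds), can be illustrated with a small sketch. This is not the HipHop.js implementation used by Skini; it is a hypothetical plain-Python re-expression, and all names in it are illustrative.

```python
import time

class SkiniLikeScheduler:
    """Toy re-expression (Python, not HipHop.js) of the two example constraints
    above. All identifiers are illustrative, not from the Skini platform."""

    def __init__(self):
        self.active = set()          # instrument groups currently sounding
        self.piano_started = None    # wall-clock time the piano last started

    def request_start(self, group, now=None):
        now = time.monotonic() if now is None else now
        # Instantaneous constraint: violins are disabled while trumpets play.
        if group == "violins" and "trumpets" in self.active:
            return False
        # Temporal constraint bookkeeping: remember when the piano starts.
        if group == "piano":
            self.piano_started = now
        self.active.add(group)
        return True

    def tick(self, now=None):
        """Called periodically (e.g. on each audience event) to enforce
        time-based constraints."""
        now = time.monotonic() if now is None else now
        if "piano" in self.active and now - self.piano_started > 30.0:
            self.active.discard("piano")   # force the piano to stop

# Example: trumpets enter, so a violin request is refused.
scheduler = SkiniLikeScheduler()
scheduler.request_start("trumpets")
assert scheduler.request_start("violins") is False
```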
-
Gabriel Lopes Rocha, João Teixera Araújo, and Flávio Luiz Schiavoni. 2019. Ha Dou Ken Music: Different mappings to play music with joysticks. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 77–78. http://doi.org/10.5281/zenodo.3672872
Download PDF DOIDue to the strong presence of video game controllers in popular culture and their ease of access, even people who are not in the habit of playing electronic games have likely interacted with this kind of interface at some point. Gestures like pressing a sequence of buttons, pressing several buttons simultaneously, or sliding fingers across the controller can therefore be mapped to musical creation. This work elaborates a strategy in which several gestures performed on a joystick controller influence one or several parameters of the sound synthesis, a mapping known as many-to-many (see the sketch after this entry). Button combinations used to perform actions common in fighting games, like Street Fighter, were mapped to a synthesizer to create music. Experiments show that this mapping is capable of influencing the musical expression of a DMI, making it closer to an acoustic instrument.
@inproceedings{Rocha2019, author = {Rocha, Gabriel Lopes and Araújo, João Teixera and Schiavoni, Flávio Luiz}, title = {Ha Dou Ken Music: Different mappings to play music with joysticks}, pages = {77--78}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672872}, url = {http://www.nime.org/proceedings/2019/nime2019_paper015.pdf} }
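As a rough illustration of the many-to-many mapping described above, the sketch below lets one fighting-game combo and one simultaneous button press each influence several synthesis parameters at once. Gesture names, parameter names and value ranges are hypothetical, not taken from the paper.

```python
# Hypothetical many-to-many mapping: a detected combo and the buttons held at
# that moment each influence several synthesis parameters at once.
HADOUKEN = ["down", "down-forward", "forward", "punch"]

def map_gesture(recent_inputs, held_buttons, params):
    """recent_inputs: list of the most recent joystick events (oldest first).
    held_buttons: set of buttons currently pressed.
    params: dict of synth parameters, returned with the gesture applied."""
    if recent_inputs[-len(HADOUKEN):] == HADOUKEN:
        params["cutoff"] = min(1.0, params["cutoff"] + 0.2)   # brighten the filter
        params["pitch"] += 12                                  # jump up an octave
    if {"punch", "kick"} <= held_buttons:                      # simultaneous press
        params["distortion"] = 0.8
        params["pitch"] -= 5
    return params

print(map_gesture(["down", "down-forward", "forward", "punch"],
                  {"punch"}, {"cutoff": 0.5, "pitch": 60, "distortion": 0.0}))
```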
-
Torgrim Rudland Næss and Charles Patrick Martin. 2019. A Physical Intelligent Instrument using Recurrent Neural Networks. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 79–82. http://doi.org/10.5281/zenodo.3672874
Download PDF DOIThis paper describes a new intelligent interactive instrument, based on an embedded computing platform, where deep neural networks are applied to interactive music generation. Even though using neural networks for music composition is not uncommon, many of these models do not support any form of user interaction. We introduce a self-contained intelligent instrument using generative models, with support for real-time interaction where the user can adjust high-level parameters to modify the music generated by the instrument (see the sketch after this entry). We describe the technical details of our generative model and discuss the experience of using the system as part of musical performance.
@inproceedings{Næss2019, author = {Næss, Torgrim Rudland and Martin, Charles Patrick}, title = {A Physical Intelligent Instrument using Recurrent Neural Networks}, pages = {79--82}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672874}, url = {http://www.nime.org/proceedings/2019/nime2019_paper016.pdf} }
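The abstract above mentions user-adjustable high-level parameters without specifying them; a common example of such a control for a generative neural network is a sampling temperature. The sketch below illustrates that idea under this assumption and is not taken from the paper.

```python
import numpy as np

def sample_next_note(logits, temperature=1.0, rng=None):
    """Temperature-controlled sampling from a network's next-note distribution:
    low temperature yields conservative, repetitive output, high temperature
    yields more adventurous output. `logits` would come from the generative model."""
    rng = np.random.default_rng() if rng is None else rng
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-6)
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# Toy usage with made-up logits over a 12-note vocabulary.
print(sample_next_note(np.random.default_rng(0).normal(size=12), temperature=0.5))
```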
-
Angelo Fraietta. 2019. Creating Order and Progress. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 83–88. http://doi.org/10.5281/zenodo.3672876
Download PDF DOIThis paper details the mapping strategy of the work Order and Progress: a sonic segue across A Auriverde, a composition based upon the skyscape represented on the Brazilian flag. This work uses the Stellarium planetarium software as a performance interface, blending the political symbology, scientific data and musical mapping of each star represented on the flag as a multimedia performance. The work is interfaced through the Stellar Command module, a Java-based program that converts the visible field of view from the Stellarium planetarium interface to astronomical data through the VizieR database of astronomical catalogues. This scientific data is then mapped to musical parameters through a Java-based programming environment (see the sketch after this entry). I will discuss the strategies employed to create a work that was not only artistically novel, but also visually engaging and scientifically accurate.
@inproceedings{Fraietta2019, author = {Fraietta, Angelo}, title = {Creating Order and Progress}, pages = {83--88}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672876}, url = {http://www.nime.org/proceedings/2019/nime2019_paper017.pdf} }
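The abstract above does not spell out the musical mapping, and the published system is Java-based and driven by VizieR catalogue data; the sketch below only illustrates, under assumed scalings, how a star's brightness and colour might be turned into note parameters.

```python
def star_to_note(v_magnitude, b_v_index):
    """Hypothetical mapping in the spirit of the work above: brighter stars
    (lower visual magnitude) become louder notes, bluer stars (lower B-V colour
    index) become higher pitches. The scalings are assumptions, not the paper's."""
    # Visual magnitude roughly spans -1.5 (Sirius) to +6.5 (naked-eye limit).
    velocity = int(round(127 * (6.5 - v_magnitude) / 8.0))
    velocity = max(1, min(127, velocity))
    # B-V colour index roughly spans -0.4 (blue) to +2.0 (red); map to MIDI 48-96.
    pitch = int(round(96 - (b_v_index + 0.4) * (48 / 2.4)))
    pitch = max(48, min(96, pitch))
    return pitch, velocity

# Alpha Crucis, one of the stars on the Brazilian flag: bright and blue.
print(star_to_note(v_magnitude=0.76, b_v_index=-0.24))
```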
-
João Nogueira Tragtenberg, Filipe Calegario, Giordano Cabral, and Geber L. Ramalho. 2019. Towards the Concept of Digital Dance and Music Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 89–94. http://doi.org/10.5281/zenodo.3672878
Download PDF DOIThis paper discusses the creation of instruments in which music is intentionally generated by dance. We introduce the conceptual framework of Digital Dance and Music Instruments (DDMI). Several DDMI have already been created, but they have been developed in isolation, and a common process of ideation and development is still lacking. Knowledge about Digital Musical Instruments (DMIs) and Interactive Dance Systems (IDSs) can contribute to the design of DDMI, but the former brings few contributions to the body’s expressiveness, and the latter brings few references to an instrumental relationship with music. Because of those different premises, the integration of the two paradigms can be an arduous task for the designer of a DDMI. The conceptual framework of DDMI can also serve as a bridge between DMIs and IDSs, acting as a lingua franca between both communities and facilitating the exchange of knowledge. The conceptual framework has shown itself to be a promising analytical tool for the design, development, and evaluation of new digital dance and music instruments.
@inproceedings{Tragtenberg2019, author = {Tragtenberg, João Nogueira and Calegario, Filipe and Cabral, Giordano and Ramalho, Geber L.}, title = {Towards the Concept of Digital Dance and Music Instruments}, pages = {89--94}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672878}, url = {http://www.nime.org/proceedings/2019/nime2019_paper018.pdf} }
-
Maros Suran Bomba and Palle Dahlstedt. 2019. Somacoustics: Interactive Body-as-Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 95–100. http://doi.org/10.5281/zenodo.3672880
Download PDF DOIVisitors interact with a blindfolded artist’s body, the motions of which are tracked and translated into synthesized four-channel sound surrounding the participants. Through social-physical and aural interactions, they play his instrument-body, in a mutual dance. Crucial for this work has been the motion-to-sound mapping design, and the investigations of bodily interaction with lay participants and with professional contact-improvisation dancers. The extra layer of social-physical interaction both constrains and inspires the participant-artist relation and the sonic exploration, and through this, his body is transformed into an instrument, and physical space is transformed into a sound-space. The project aims to explore the experience of interaction between human and technology and its impact on one’s bodily perception and embodiment, as well as the relation between body and space, departing from a set of existing theories on embodiment. In the paper, its underlying aesthetics are described and discussed, as well as the sensitive motion research process behind it, and the technical implementation of the work. It is evaluated based on participant behavior and experiences and an analysis of its premiere exhibition in 2018.
@inproceedings{Bomba2019, author = {Bomba, Maros Suran and Dahlstedt, Palle}, title = {Somacoustics: Interactive Body-as-Instrument}, pages = {95--100}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672880}, url = {http://www.nime.org/proceedings/2019/nime2019_paper019.pdf} }
-
Nathan Turczan and Ajay Kapur. 2019. The Scale Navigator: A System for Networked Algorithmic Harmony. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 101–104. http://doi.org/10.5281/zenodo.3672882
Download PDF DOIThe Scale Navigator is a graphical interface implementation of Dmitri Tymoczko’s scale network designed to help generate algorithmic harmony and harmonically synchronize performers in a laptop or electro-acoustic orchestra. The user manipulates the Scale Navigator to direct harmony on a chord-to-chord level and on a scale-to-scale level. In a live performance setting, the interface broadcasts control data, MIDI, and real-time notation to an ensemble of live electronic performers, sight-reading improvisers, and musical generative algorithms.
@inproceedings{Turczan2019, author = {Turczan, Nathan and Kapur, Ajay}, title = {The Scale Navigator: A System for Networked Algorithmic Harmony}, pages = {101--104}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672882}, url = {http://www.nime.org/proceedings/2019/nime2019_paper020.pdf} }
-
Alex Michael Lucas, Miguel Ortiz, and Dr. Franziska Schroeder. 2019. Bespoke Design for Inclusive Music: The Challenges of Evaluation. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 105–109. http://doi.org/10.5281/zenodo.3672884
Download PDF DOIIn this paper, the authors describe the evaluation of a collection of bespoke knob cap designs intended to improve the ease with which a specific musician with dyskinetic cerebral palsy can operate rotary controls in a musical context. The authors highlight the importance of the performer’s perspective when using design as a means of overcoming access barriers to music. Also, while the authors were not able to find an ideal solution for the musician within the confines of this study, several useful observations on the process of evaluating bespoke assistive music technology are described; observations which may prove useful to digital musical instrument designers working within the field of inclusive music.
@inproceedings{Lucas2019, author = {Lucas, Alex Michael and Ortiz, Miguel and Schroeder, Dr. Franziska}, title = {Bespoke Design for Inclusive Music: The Challenges of Evaluation}, pages = {105--109}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672884}, url = {http://www.nime.org/proceedings/2019/nime2019_paper021.pdf} }
-
Xiao Xiao, Grégoire Locqueville, Christophe d’Alessandro, and Boris Doval. 2019. T-Voks: the Singing and Speaking Theremin. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 110–115. http://doi.org/10.5281/zenodo.3672886
Download PDF DOIT-Voks is an augmented theremin that controls Voks, a performative singing synthesizer. Originally developed for control with a graphic tablet interface, Voks allows for real-time pitch and time scaling, vocal effort modification and syllable sequencing for pre-recorded voice utterances. For T-Voks the theremin’s frequency antenna modifies the output pitch of the target utterance while the amplitude antenna controls not only volume as usual but also voice quality and vocal effort. Syllabic sequencing is handled by an additional pressure sensor attached to the player’s volume-control hand. This paper presents the system architecture of T-Voks, the preparation procedure for a song, playing gestures, and practice techniques, along with musical and poetic examples across four different languages and styles.
@inproceedings{Xiao2019, author = {Xiao, Xiao and Locqueville, Grégoire and d'Alessandro, Christophe and Doval, Boris}, title = {T-Voks: the Singing and Speaking Theremin}, pages = {110--115}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672886}, url = {http://www.nime.org/proceedings/2019/nime2019_paper022.pdf} }
-
Hunter Brown and spencer topel. 2019. DRMMR: An Augmented Percussion Implement. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 116–121. http://doi.org/10.5281/zenodo.3672888
Download PDF DOIRecent developments in music technology have enabled novel timbres to be acoustically synthesized using various actuation and excitation methods. Utilizing recent work in nonlinear acoustic synthesis, we propose a transducer-based augmented percussion implement entitled DRMMR. This design enables the user to sustain computer-sequencer-like drum rolls at faster speeds while also achieving nonlinear acoustic synthesis effects. Our acoustic evaluation shows that drum rolls executed by DRMMR easily exhibit greater levels of regularity, speed, and precision than comparable transducer and electromagnetic-based actuation methods. DRMMR’s nonlinear acoustic synthesis functionality also presents possibilities for new kinds of sonic interactions on the surface of drum membranes.
@inproceedings{Brown2019, author = {Brown, Hunter and spencer topel}, title = {{DRMMR}: An Augmented Percussion Implement}, pages = {116--121}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672888}, url = {http://www.nime.org/proceedings/2019/nime2019_paper023.pdf} }
-
Giacomo Lepri and Andrew P. McPherson. 2019. Fictional instruments, real values: discovering musical backgrounds with non-functional prototypes. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 122–127. http://doi.org/10.5281/zenodo.3672890
Download PDF DOIThe emergence of a new technology can be considered the result of social, cultural and technical processes. Instrument designs are particularly influenced by cultural and aesthetic values linked to the specific contexts and communities that produced them. In previous work, we ran a design fiction workshop in which musicians created non-functional instrument mockups. In the current paper, we report on an online survey in which music technologists were asked to speculate on the backgrounds of the musicians who designed particular instruments. Our results showed several cues for the interpretation of the artefacts’ origins, including physical features, body-instrument interactions, use of language and references to established music practices and tools. Tacit musical and cultural values were also identified based on intuitive and holistic judgments. Our discussion highlights the importance of cultural awareness and context-dependent values in the design and use of interactive musical systems.
@inproceedings{Lepri2019, author = {Lepri, Giacomo and McPherson, Andrew P.}, title = {Fictional instruments, real values: discovering musical backgrounds with non-functional prototypes}, pages = {122--127}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672890}, url = {http://www.nime.org/proceedings/2019/nime2019_paper024.pdf} }
-
Christopher Dewey and Jonathan P. Wakefield. 2019. Exploring the Container Metaphor for Equalisation Manipulation. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 128–129. http://doi.org/10.5281/zenodo.3672892
Download PDF DOIThis paper presents the first stage in the design and evaluation of a novel container metaphor interface for equalisation control. The prototype system harnesses the Pepper’s Ghost illusion to project, in mid-air, a holographic data visualisation of an audio track’s long-term average and real-time frequency content as a deformable shape manipulated directly via hand gestures. The system uses HTML 5, JavaScript and the Web Audio API in conjunction with a Leap Motion controller and a bespoke low-budget projection system. During subjective evaluation, users commented that the novel system was simpler and more intuitive to use than commercially established equalisation interface paradigms and most suited to creative, expressive and explorative equalisation tasks.
@inproceedings{Dewey2019, author = {Dewey, Christopher and Wakefield, Jonathan P.}, title = {Exploring the Container Metaphor for Equalisation Manipulation}, pages = {128--129}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672892}, url = {http://www.nime.org/proceedings/2019/nime2019_paper025.pdf} }
-
Alex Hofmann, Vasileios Chatziioannou, Sebastian Schmutzhard, Gökberk Erdogan, and Alexander Mayer. 2019. The Half-Physler: An oscillating real-time interface to a tube resonator model. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 130–133. http://doi.org/10.5281/zenodo.3672896
Download PDF DOIPhysics-based sound synthesis allows the sound to be shaped by modifying parameters that reference real-world properties of acoustic instruments. This paper presents a hybrid physical modeling single-reed instrument, where a virtual tube is coupled to a real mouthpiece with a sensor-equipped clarinet reed. The tube model is provided as an opcode for Csound, which runs on the low-latency embedded audio platform Bela. An actuator is connected to the audio output and the sensor-reed signal is fed back into the input of Bela. The performer can control the coupling between reed and actuator, and is also provided with a 3D-printed slider/knob interface to change parameters of the tube model in real time.
@inproceedings{Hofmann2019, author = {Hofmann, Alex and Chatziioannou, Vasileios and Schmutzhard, Sebastian and Erdogan, Gökberk and Mayer, Alexander}, title = {The Half-Physler: An oscillating real-time interface to a tube resonator model}, pages = {130--133}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672896}, url = {http://www.nime.org/proceedings/2019/nime2019_paper026.pdf} }
-
Peter Bussigel, Stephan Moore, and Scott Smallwood. 2019. Reanimating the Readymade. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 134–139. http://doi.org/10.5281/zenodo.3672898
Download PDF DOIThere is a rich history of using found or “readymade” objects in music performances and sound installations. John Cage’s Water Walk, Carolee Schneemann’s Noise Bodies, and David Tudor’s Rainforest all lean on both the sonic and cultural affordances of found objects. Today, composers and sound artists continue to look at the everyday, combining readymades with microcontrollers and homemade electronics and repurposing known interfaces for their latent sonic potential. This paper gives a historical overview of work at the intersection of music and the readymade and then describes three recent sound installations/performances by the authors that further explore this space. The emphasis is on the processes involved in working with found objects: the complex, practical, and playful explorations into sound and material culture.
@inproceedings{Bussigel2019, author = {Bussigel, Peter and Moore, Stephan and Smallwood, Scott}, title = {Reanimating the Readymade}, pages = {134--139}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672898}, url = {http://www.nime.org/proceedings/2019/nime2019_paper027.pdf} }
-
Yian Zhang, Yinmiao Li, Daniel Chin, and Gus Xia. 2019. Adaptive Multimodal Music Learning via Interactive Haptic Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 140–145. http://doi.org/10.5281/zenodo.3672900
Download PDF DOIHaptic interfaces have tapped into the sense of touch to assist multimodal music learning. We have recently seen various improvements in interface design for tactile feedback and force guidance aiming to make instrument learning more effective. However, most interfaces are still quite static; they cannot yet sense the learning progress and adjust the tutoring strategy accordingly. To solve this problem, we contribute an adaptive haptic interface based on the latest design of a haptic flute. We first adopted a clutch mechanism to enable the interface to turn the haptic control on and off flexibly in real time. The interactive tutor is then able to follow human performances and apply the “teacher force” only when the software instructs it to. Finally, we incorporated the adaptive interface with a step-by-step dynamic learning strategy. Experimental results showed that dynamic learning dramatically outperforms static learning, boosting the learning rate by 45.3% and shrinking the forgetting chance by 86%.
@inproceedings{Zhang2019, author = {Zhang, Yian and Li, Yinmiao and Chin, Daniel and Xia, Gus}, title = {Adaptive Multimodal Music Learning via Interactive Haptic Instrument}, pages = {140--145}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672900}, url = {http://www.nime.org/proceedings/2019/nime2019_paper028.pdf} }
-
Fabián Sguiglia, Pauli Coton, and Fernando Toth. 2019. El mapa no es el territorio: Sensor mapping for audiovisual performances. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 146–149. http://doi.org/10.5281/zenodo.3672902
Download PDF DOIWe present El mapa no es el territorio (MNT), a set of open source tools that facilitate the design of visual and musical mappings for interactive installations and performance pieces. MNT is being developed by a multidisciplinary group that explores gestural control of audio-visual environments and virtual instruments. Along with these tools, this paper presents two projects in which they were used, the interactive installation Memorias Migrantes and the stage performance Recorte de Jorge Cárdenas Cayendo, showing how MNT allows us to develop collaborative artworks that articulate body movement and generative audiovisual systems, and how its current version was influenced by these successive implementations.
@inproceedings{Sguiglia2019, author = {Sguiglia, Fabián and Coton, Pauli and Toth, Fernando}, title = {El mapa no es el territorio: Sensor mapping for audiovisual performances}, pages = {146--149}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672902}, url = {http://www.nime.org/proceedings/2019/nime2019_paper029.pdf} }
-
Vanessa Yaremchuk, Carolina Brum Medeiros, and Marcelo Wanderley. 2019. Small Dynamic Neural Networks for Gesture Classification with The Rulers (a Digital Musical Instrument). Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 150–155. http://doi.org/10.5281/zenodo.3672904
Download PDF DOIThe Rulers is a Digital Musical Instrument with 7 metal beams, each of which is fixed at one end. It uses infrared sensors, Hall sensors, and strain gauges to estimate deflection. These sensors each perform better or worse depending on the class of gesture the user is making, motivating sensor fusion practices. Residuals between Kalman filter predictions and sensor output are calculated and used as input to a recurrent neural network, which outputs a classification that determines which processing parameters and sensor measurements are employed (see the sketch after this entry). Multiple instances (30) of layer-recurrent neural networks with a single hidden layer varying in size from 1 to 10 processing units were trained and tested on previously unseen data. The best-performing neural network has only 3 hidden units and a sufficiently low error rate to be a good candidate for gesture classification. This paper demonstrates that dynamic networks outperform feedforward networks for this type of gesture classification, that a small network can handle a problem of this level of complexity, that recurrent networks of this size are fast enough for real-time applications of this type, and that it is important to train multiple instances of each network architecture and select the best-performing one from within that set.
@inproceedings{Yaremchuk2019, author = {Yaremchuk, Vanessa and Medeiros, Carolina Brum and Wanderley, Marcelo}, title = {Small Dynamic Neural Networks for Gesture Classification with The Rulers (a Digital Musical Instrument)}, pages = {150--155}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672904}, url = {http://www.nime.org/proceedings/2019/nime2019_paper030.pdf} }
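The pipeline described above, Kalman-filter residuals feeding a small recurrent classifier, can be sketched as follows. The filter parameters, network sizes and random weights are placeholders, not the trained system from the paper.

```python
import numpy as np

def kalman_residuals(z, q=1e-4, r=1e-2):
    """One-dimensional constant-position Kalman filter; returns the innovation
    (measurement minus prediction) at every step, one stream per sensor."""
    x, p = z[0], 1.0
    res = np.zeros_like(z)
    for k, zk in enumerate(z):
        p += q                      # predict
        res[k] = zk - x             # innovation, used as the network input
        gain = p / (p + r)          # update
        x += gain * res[k]
        p *= (1.0 - gain)
    return res

class TinyRecurrentClassifier:
    """Minimal Elman-style recurrent layer with an argmax readout, standing in
    for the 3-hidden-unit layer-recurrent network described above. Weights are
    random placeholders rather than trained values."""
    def __init__(self, n_in, n_hidden=3, n_classes=3, seed=0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.normal(0, 0.5, (n_hidden, n_in))
        self.w_rec = rng.normal(0, 0.5, (n_hidden, n_hidden))
        self.w_out = rng.normal(0, 0.5, (n_classes, n_hidden))

    def classify(self, residual_frames):
        h = np.zeros(self.w_rec.shape[0])
        for frame in residual_frames:           # one frame of residuals per step
            h = np.tanh(self.w_in @ frame + self.w_rec @ h)
        return int(np.argmax(self.w_out @ h))

# Toy usage: three sensor channels, 50 time steps of noisy measurements.
z = np.cumsum(np.random.default_rng(1).normal(0, 0.1, (50, 3)), axis=0)
frames = np.stack([kalman_residuals(z[:, i]) for i in range(3)], axis=1)
print(TinyRecurrentClassifier(n_in=3).classify(frames))
```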
-
Palle Dahlstedt and Ami Skånberg Dahlstedt. 2019. OtoKin: Mapping for Sound Space Exploration through Dance Improvisation. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 156–161. http://doi.org/10.5281/zenodo.3672906
Download PDF DOIWe present a work where a space of realtime synthesized sounds is explored through ear (Oto) and movement (Kinesis) by one or two dancers. Movement is tracked and mapped through extensive pre-processing to a high-dimensional acoustic space, using a many-to-many mapping, so that every small body movement matters. Designed for improvised exploration, it works as both performance and installation. Through this re-translation of bodily action, position, and posture into infinite-dimensional sound texture and timbre, the performers are invited to re-think and re-learn position and posture as sound, effort as gesture, and timbre as a bodily construction. The sound space can be shared by two people, with added modes of presence, proximity and interaction. The aesthetic background and technical implementation of the system are described, and the system is evaluated based on a number of performances, workshops and installation exhibits. Finally, the aesthetic and choreographic motivations behind the performance narrative are explained, and discussed in the light of the design of the sonification.
@inproceedings{Dahlstedtb2019, author = {Dahlstedt, Palle and Dahlstedt, Ami Skånberg}, title = {OtoKin: Mapping for Sound Space Exploration through Dance Improvisation}, pages = {156--161}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672906}, url = {http://www.nime.org/proceedings/2019/nime2019_paper031.pdf} }
-
Joe Wright and James Dooley. 2019. On the Inclusivity of Constraint: Creative Appropriation in Instruments for Neurodiverse Children and Young People. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 162–167. http://doi.org/10.5281/zenodo.3672908
Download PDF DOITaking inspiration from research into deliberately constrained musical technologies and the emergence of neurodiverse, child-led musical groups such as the Artism Ensemble, the interplay between design-constraints, inclusivity and appropriation is explored. A small scale review covers systems from two prominent UK-based companies, and two iterations of a new prototype system that were developed in collaboration with a small group of young people on the autistic spectrum. Amongst these technologies, the aspects of musical experience that are made accessible differ with respect to the extent and nature of each system’s constraints. It is argued that the design-constraints of the new prototype system facilitated the diverse playing styles and techniques observed during its development. Based on these observations, we propose that deliberately constrained musical instruments may be one way of providing more opportunities for the emergence of personal practices and preferences in neurodiverse groups of children and young people, and that this is a fitting subject for further research.
@inproceedings{Wright2019, author = {Wright, Joe and Dooley, James}, title = {On the Inclusivity of Constraint: Creative Appropriation in Instruments for Neurodiverse Children and Young People}, pages = {162--167}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672908}, url = {http://www.nime.org/proceedings/2019/nime2019_paper032.pdf} }
-
Isabela Corintha Almeida, Giordano Cabral, and Professor Gilberto Bernardes Almeida. 2019. AMIGO: An Assistive Musical Instrument to Engage, Create and Learn Music. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 168–169. http://doi.org/10.5281/zenodo.3672910
Download PDF DOIWe present AMIGO, a real-time computer music system that assists novice users in the composition process through guided musical improvisation. The system consists of 1) a computational analysis-generation algorithm, which not only formalizes musical principles from examples, but also guides the user in selecting note sequences; 2) a MIDI keyboard controller with an integrated LED strip, which provides visual feedback to the user; and 3) a real-time music notation display, which shows the generated output. Ultimately, AMIGO allows the intuitive creation of new musical structures and the acquisition of Western music formalisms, such as musical notation.
@inproceedings{Almeida2019, author = {Almeida, Isabela Corintha and Cabral, Giordano and Almeida, Professor Gilberto Bernardes}, title = {{AMIGO}: An Assistive Musical Instrument to Engage, Create and Learn Music}, pages = {168--169}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672910}, url = {http://www.nime.org/proceedings/2019/nime2019_paper033.pdf} }
-
Cristiano Figueiró, Guilherme Soares, and Bruno Rohde. 2019. ESMERIL — An interactive audio player and composition system for collaborative experimental music netlabels. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 170–173. http://doi.org/10.5281/zenodo.3672912
Download PDF DOIESMERIL is an application developed for Android with a toolchain based on Puredata and OpenFrameworks (with the Ofelia library). The application enables music creation in a specific expanded format: four separate mono tracks, each able to manipulate up to eight audio samples per channel. It also works as a performance instrument that stimulates collaborative remixing of compositions made of scored interaction gestures called “scenes”. The interface also aims to be a platform for exchanging these sample packs as artistic releases, a format similar to the popular idea of an “album”, but organized as four-channel packs of samples and interaction scores. It uses an adaptive audio slicing mechanism and is based on interaction design for multi-touch screens. A timing sequencer enhances the interaction between pre-set sequences (the “scenes”) and screen manipulation: scratching, expanding and moving graphic sound waves. This paper describes the graphical interface features, the development decisions made so far, and perspectives for its continuation.
@inproceedings{Figueiró2019, author = {Figueiró, Cristiano and Soares, Guilherme and Rohde, Bruno}, title = {{ESMERIL} --- An interactive audio player and composition system for collaborative experimental music netlabels}, pages = {170--173}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672912}, url = {http://www.nime.org/proceedings/2019/nime2019_paper034.pdf} }
-
Aline Weber, Lucas Nunes Alegre, Jim Torresen, and Bruno C. da Silva. 2019. Parameterized Melody Generation with Autoencoders and Temporally-Consistent Noise. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 174–179. http://doi.org/10.5281/zenodo.3672914
Download PDF DOIWe introduce a machine learning technique to autonomously generate novel melodies that are variations of an arbitrary base melody. These are produced by a neural network that ensures that (with high probability) the melodic and rhythmic structure of the new melody is consistent with a given set of sample songs. We train a Variational Autoencoder network to identify a low-dimensional set of variables that allows for the compression and representation of sample songs. By perturbing these variables with Perlin Noise—a temporally-consistent parameterized noise function—it is possible to generate smoothly-changing novel melodies (see the sketch after this entry). We show that (1) by regulating the amount of noise, one can specify how much of the base song will be preserved; and (2) there is a direct correlation between the noise signal and the differences between the statistical properties of novel melodies and the original one. Users can interpret the controllable noise as a type of "creativity knob": the higher it is, the more leeway the network has to generate significantly different melodies. We present a physical prototype that allows musicians to use a keyboard to provide base melodies and to adjust the network’s "creativity knobs" to regulate in real time the process that proposes new melody ideas.
@inproceedings{Weber2019, author = {Weber, Aline and Alegre, Lucas Nunes and Torresen, Jim and da Silva, Bruno C.}, title = {Parameterized Melody Generation with Autoencoders and Temporally-Consistent Noise}, pages = {174--179}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672914}, url = {http://www.nime.org/proceedings/2019/nime2019_paper035.pdf} }
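A minimal sketch of the generation loop described above, assuming a trained VAE is available as encode/decode callables (hypothetical names): the latent code of the base melody is perturbed by temporally-consistent noise whose amplitude acts as the "creativity knob". Interpolated value noise is used here as a simple stand-in for Perlin noise.

```python
import numpy as np

def smooth_noise(n_steps, n_dims, grid=8, seed=0):
    """Value-noise stand-in for Perlin noise: random values on a coarse grid,
    linearly interpolated so successive steps change smoothly over time."""
    rng = np.random.default_rng(seed)
    knots = rng.normal(0, 1, (grid + 1, n_dims))
    t = np.linspace(0, grid, n_steps)
    i = np.minimum(t.astype(int), grid - 1)
    frac = (t - i)[:, None]
    return (1 - frac) * knots[i] + frac * knots[i + 1]

def vary_melody(encode, decode, base_melody, creativity=0.3, n_steps=16):
    """`encode`/`decode` stand for the trained VAE (not shown). The base
    melody's latent vector is drifted by smooth noise scaled by `creativity`;
    a higher value preserves less of the base song."""
    z = encode(base_melody)                    # low-dimensional latent vector
    drift = smooth_noise(n_steps, z.shape[0])
    return [decode(z + creativity * drift[k]) for k in range(n_steps)]
```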
-
Atau Tanaka, Balandino Di Donato, Michael Zbyszynski, and Geert Roks. 2019. Designing Gestures for Continuous Sonic Interaction. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 180–185. http://doi.org/10.5281/zenodo.3672916
Download PDF DOIThis paper presents a system that allows users to quickly try different ways of training neural networks and temporal modeling techniques to associate arm gestures with time-varying sound. We created a software framework for this, designed three interactive sounds, and presented them to participants in a workshop-based study. We build upon previous work in sound-tracing and mapping-by-demonstration, asking the participants to design gestures with which to perform the given sounds using a multimodal device combining inertial measurement (IMU) and muscle sensing (EMG). We presented the user with four techniques for associating sensor input with synthesizer parameter output. Two were classical techniques from the literature, and two proposed different ways to capture dynamic gesture in a neural network. These four techniques were: 1) a Static Position regression training procedure, 2) a Hidden Markov based temporal modeler, 3) Whole Gesture capture to a neural network, and 4) a Windowed method using the position-based procedure on the fly during the performance of a dynamic gesture (see the sketch after this entry). Our results show trade-offs between accurate, predictable reproduction of the source sounds and exploration of the gesture-sound space. Several of the users were attracted to our new windowed method for capturing gesture anchor points on the fly as training data for neural network based regression. This paper will be of interest to musicians interested in going from sound design to gesture design, and offers a workflow for quickly trying different mapping-by-demonstration techniques.
@inproceedings{Tanaka2019, author = {Tanaka, Atau and Di Donato, Balandino and Zbyszynski, Michael and Roks, Geert}, title = {Designing Gestures for Continuous Sonic Interaction}, pages = {180--185}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672916}, url = {http://www.nime.org/proceedings/2019/nime2019_paper036.pdf} }
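The windowed method mentioned above could look roughly like the sketch below: sensor frames captured while a dynamic gesture is performed are summarised per window and paired with the current synthesis parameters as regression training data. The window size, feature summary and regressor are assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

class WindowedGestureTrainer:
    """Accumulates (sensor window summary, synth parameter) pairs on the fly
    during a dynamic gesture, then fits a small regression network."""
    def __init__(self, window=10):
        self.window = window
        self.buffer, self.X, self.y = [], [], []

    def feed(self, sensor_frame, target_params):
        self.buffer.append(sensor_frame)
        if len(self.buffer) == self.window:
            self.X.append(np.mean(self.buffer, axis=0))  # one anchor point per window
            self.y.append(target_params)
            self.buffer = []

    def train(self):
        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000)
        model.fit(np.array(self.X), np.array(self.y))
        return model   # model.predict(features) then drives the synth in performance

# Toy usage: 8 sensor channels (IMU + EMG features) mapped to 2 synth parameters.
rng = np.random.default_rng(0)
trainer = WindowedGestureTrainer()
for t in range(200):
    trainer.feed(rng.normal(size=8), target_params=[np.sin(t / 50), np.cos(t / 50)])
model = trainer.train()
```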
-
Cagri Erdem, Katja Henriksen Schia, and Alexander Refsum Jensenius. 2019. Vrengt: A Shared Body-Machine Instrument for Music-Dance Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 186–191. http://doi.org/10.5281/zenodo.3672918
Download PDF DOIThis paper describes the process of developing a shared instrument for music–dance performance, with a particular focus on exploring the boundaries between standstill and motion, and between silence and sound. The piece Vrengt grew from the idea of enabling a true partnership between a musician and a dancer, developing an instrument that would allow for active co-performance. Using a participatory design approach, we worked with sonification as a tool for systematically exploring the dancer’s bodily expressions. The exploration used a "spatiotemporal matrix", with a particular focus on sonic microinteraction. In the final performance, two Myo armbands were used for capturing muscle activity of the arm and leg of the dancer, together with a wireless headset microphone capturing the sound of breathing. In the paper, we reflect on multi-user instrument paradigms, discuss our approach to creating a shared instrument using sonification as a tool for the sound design, and reflect on the performers’ subjective evaluation of the instrument.
@inproceedings{Erdem2019, author = {Erdem, Cagri and Schia, Katja Henriksen and Jensenius, Alexander Refsum}, title = {Vrengt: A Shared Body-Machine Instrument for Music-Dance Performance}, pages = {186--191}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672918}, url = {http://www.nime.org/proceedings/2019/nime2019_paper037.pdf} }
-
Samuel Thompson Parke-Wolfe, Hugo Scurto, and Rebecca Fiebrink. 2019. Sound Control: Supporting Custom Musical Interface Design for Children with Disabilities. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 192–197. http://doi.org/10.5281/zenodo.3672920
Download PDF DOIWe have built a new software toolkit that enables music therapists and teachers to create custom digital musical interfaces for children with diverse disabilities. It was designed in collaboration with music therapists, teachers, and children. It uses interactive machine learning to create new sensor- and vision-based musical interfaces using demonstrations of actions and sound, making interface building fast and accessible to people without programming or engineering expertise. Interviews with two music therapy and education professionals who have used the software extensively illustrate how richly customised, sensor-based interfaces can be used in music therapy contexts; they also reveal how properties of input devices, music-making approaches, and mapping techniques can support a variety of interaction styles and therapy goals.
@inproceedings{ParkeWolfe2019, author = {Parke-Wolfe, Samuel Thompson and Scurto, Hugo and Fiebrink, Rebecca}, title = {Sound Control: Supporting Custom Musical Interface Design for Children with Disabilities}, pages = {192--197}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672920}, url = {http://www.nime.org/proceedings/2019/nime2019_paper038.pdf} }
-
Oliver Hödl. 2019. ‘Blending Dimensions’ when Composing for DMI and Symphonic Orchestra. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 198–203. http://doi.org/10.5281/zenodo.3672922
Download PDF DOIWith a new digital music instrument (DMI), the interface itself, the sound generation, the composition, and the performance are often closely related and even intrinsically linked with each other. Similarly, the instrument designer, composer, and performer are often the same person. The Academic Festival Overture is a new piece of music for the DMI Trombosonic and symphonic orchestra, written by a composer who had no prior experience with the instrument. The piece underwent the phases of a composition competition, rehearsals, a music video production, and a public live performance. This whole process was evaluated by reflecting on the experience of the three key stakeholders involved: the composer, the conductor, and the instrument designer as performer. ‘Blending dimensions’ of these stakeholders and decoupling the composition from the instrument designer inspired the newly involved composer to completely rethink the DMI’s interaction and sound concept. Thus, deliberately avoiding an early collaboration between a DMI designer and a composer bears the potential for new inspiration, but also brings the challenge of seeking such a collaboration later in order to clarify possible misunderstandings and make improvements.
@inproceedings{Hödl2019, author = {Hödl, Oliver}, title = {'Blending Dimensions' when Composing for {DMI} and Symphonic Orchestra}, pages = {198--203}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672922}, url = {http://www.nime.org/proceedings/2019/nime2019_paper039.pdf} }
-
behzad haki and Sergi Jorda. 2019. A Bassline Generation System Based on Sequence-to-Sequence Learning. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 204–209. http://doi.org/10.5281/zenodo.3672928
Download PDF DOIThis paper presents a detailed explanation of a system generating basslines that are stylistically and rhythmically interlocked with a provided audio drum loop. The proposed system is based on a natural language processing technique: word-based sequence-to-sequence learning using LSTM units. The novelty of the proposed method lies in the fact that the system does not rely on a voice-by-voice transcription of the drums; instead, a drum representation is used as an input sequence from which a translated bassline is obtained at the output (see the sketch after this entry). The drum representation consists of fixed-size sequences of onsets detected from a 2-bar audio drum loop in eight different frequency bands. The basslines generated by this method consist of pitched notes with different durations. The proposed system was trained on two distinct datasets compiled for this project by the authors. Each dataset contains a variety of 2-bar drum loops with annotated basslines from two different styles of dance music: House and Soca. A listening experiment revealed that the proposed system is capable of generating basslines that are interesting and well interlocked rhythmically with the drum loops from which they were generated.
@inproceedings{haki2019, author = {behzad haki and Jorda, Sergi}, title = {A Bassline Generation System Based on Sequence-to-Sequence Learning}, pages = {204--209}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672928}, url = {http://www.nime.org/proceedings/2019/nime2019_paper040.pdf} }
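A minimal PyTorch sketch of the drum-to-bass sequence-to-sequence idea described above. The tokenisation, vocabulary size and hyperparameters are assumptions; only the overall encoder-decoder shape follows the abstract.

```python
import torch
import torch.nn as nn

class DrumToBassSeq2Seq(nn.Module):
    """Encoder reads per-step onset activations in 8 frequency bands; decoder
    emits a sequence of bass tokens (assumed pitch/duration vocabulary)."""
    def __init__(self, n_bass_tokens=256, hidden=128):
        super().__init__()
        self.encoder = nn.LSTM(input_size=8, hidden_size=hidden, batch_first=True)
        self.embed = nn.Embedding(n_bass_tokens, hidden)
        self.decoder = nn.LSTM(input_size=hidden, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_bass_tokens)

    def forward(self, drum_onsets, bass_tokens_in):
        # drum_onsets: (batch, steps, 8) onsets per band; bass_tokens_in: (batch, steps)
        _, state = self.encoder(drum_onsets)          # summarise the 2-bar loop
        dec_out, _ = self.decoder(self.embed(bass_tokens_in), state)
        return self.out(dec_out)                      # logits over the bass vocabulary

# Toy forward pass: one 32-step drum loop with teacher-forced bass tokens.
model = DrumToBassSeq2Seq()
drums = torch.randint(0, 2, (1, 32, 8)).float()
bass_in = torch.randint(0, 256, (1, 32))
logits = model(drums, bass_in)                        # shape (1, 32, 256)
```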
-
Lloyd May and spencer topel. 2019. BLIKSEM: An Acoustic Synthesis Fuzz Pedal. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 210–215. http://doi.org/10.5281/zenodo.3672930
Download PDF DOIThis paper presents a novel physical fuzz pedal effect system named BLIKSEM. Our approach applies previous work in nonlinear acoustic synthesis via a driven cantilever soundboard configuration for the purpose of generating fuzz pedal-like effects as well as a variety of novel audio effects. Following a presentation of our pedal design, we compare the performance of our system with various classic and contemporary fuzz pedals using an electric guitar. Our results show that BLIKSEM is capable of generating signals that approximate the timbre and dynamic behaviors of conventional fuzz pedals, as well as offering new mechanisms for expressive interactions and a range of new effects in different configurations.
@inproceedings{May2019, author = {May, Lloyd and spencer topel}, title = {{BLIKSEM}: An Acoustic Synthesis Fuzz Pedal}, pages = {210--215}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672930}, url = {http://www.nime.org/proceedings/2019/nime2019_paper041.pdf} }
-
Anna Xambó, Sigurd Saue, Alexander Refsum Jensenius, Robin Støckert, and Oeyvind Brandtsegg. 2019. NIME Prototyping in Teams: A Participatory Approach to Teaching Physical Computing. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 216–221. http://doi.org/10.5281/zenodo.3672932
Download PDF DOIIn this paper, we present a workshop of physical computing applied to NIME design based on science, technology, engineering, arts, and mathematics (STEAM) education. The workshop is designed for master students with multidisciplinary backgrounds. They are encouraged to work in teams from two university campuses remotely connected through a portal space. The components of the workshop are prototyping, music improvisation and reflective practice. We report the results of this course, which show a positive impact on the students’ confidence in prototyping and intention to continue in STEM fields. We also present the challenges and lessons learned on how to improve the teaching of hybrid technologies and programming skills in an interdisciplinary context across two locations, with the aim of satisfying both beginners and experts. We conclude with a broader discussion on how these new pedagogical perspectives can improve NIME-related courses.
@inproceedings{Xambó2019, author = {Xambó, Anna and Saue, Sigurd and Jensenius, Alexander Refsum and Støckert, Robin and Brandtsegg, Oeyvind}, title = {{NIME} Prototyping in Teams: A Participatory Approach to Teaching Physical Computing}, pages = {216--221}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672932}, url = {http://www.nime.org/proceedings/2019/nime2019_paper042.pdf} }
-
Eduardo Meneses, Johnty Wang, Sergio Freire, and Marcelo Wanderley. 2019. A Comparison of Open-Source Linux Frameworks for an Augmented Musical Instrument Implementation. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 222–227. http://doi.org/10.5281/zenodo.3672934
Download PDF DOIThe increasing availability of accessible sensor technologies, single-board computers, and prototyping platforms has resulted in a growing number of frameworks explicitly geared towards the design and construction of Digital and Augmented Musical Instruments. Developing such instruments can be facilitated by choosing the most suitable framework for each project. In the process of selecting a framework for implementing an augmented guitar instrument, we have tested three Linux-based open-source platforms that have been designed for real-time sensor interfacing, audio processing, and synthesis. Factors such as acquisition latency, workload measurements, documentation, and software implementation are compared and discussed to determine the suitability of each environment for our particular project.
@inproceedings{Meneses2019, author = {Meneses, Eduardo and Wang, Johnty and Freire, Sergio and Wanderley, Marcelo}, title = {A Comparison of Open-Source Linux Frameworks for an Augmented Musical Instrument Implementation}, pages = {222--227}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672934}, url = {http://www.nime.org/proceedings/2019/nime2019_paper043.pdf} }
-
Martin Matus Lerner. 2019. Latin American NIMEs: Electronic Musical Instruments and Experimental Sound Devices in the Twentieth Century. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 228–233. http://doi.org/10.5281/zenodo.3672936
Download PDF DOIDuring the twentieth century, several Latin American nations (such as Argentina, Brazil, Chile, Cuba and Mexico) originated relevant antecedents in the NIME field. Their innovative authors have interrelated musical composition, lutherie, electronics and computing. This paper provides a panoramic view of their original electronic instruments and experimental sound practices, as well as a perspective on them in relation to other inventions around the world.
@inproceedings{MatusLerner2019, author = {Lerner, Martin Matus}, title = {Latin American {NIME}s: Electronic Musical Instruments and Experimental Sound Devices in the Twentieth Century}, pages = {228--233}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672936}, url = {http://www.nime.org/proceedings/2019/nime2019_paper044.pdf} }
-
Sarah Reid, Ryan Gaston, and Ajay Kapur. 2019. Perspectives on Time: performance practice, mapping strategies, & composition with MIGSI. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 234–239. http://doi.org/10.5281/zenodo.3672940
Download PDF DOIThis paper presents four years of development in performance and compositional practice on an electronically augmented trumpet called MIGSI. Discussion is focused on conceptual and technical approaches to data mapping, sonic interaction, and composition that are inspired by philosophical questions of time: what is now? Is time linear or multi-directional? Can we operate in multiple modes of temporal perception simultaneously? A number of mapping strategies are presented which explore these ideas through the manipulation of temporal separation between user input and sonic output. In addition to presenting technical progress, this paper will introduce a body of original repertoire composed for MIGSI, in order to illustrate how these tools and approaches have been utilized in live performance and how they may find use in other creative applications.
@inproceedings{Reid2019, author = {Reid, Sarah and Gaston, Ryan and Kapur, Ajay}, title = {Perspectives on Time: performance practice, mapping strategies, \& composition with {MIGSI}}, pages = {234--239}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672940}, url = {http://www.nime.org/proceedings/2019/nime2019_paper045.pdf} }
-
Natacha Lamounier, Luiz Naveda, and Adriana Bicalho. 2019. The design of technological interfaces for interactions between music, dance and garment movements. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 240–245. http://doi.org/10.5281/zenodo.3672942
Download PDF DOIThe present work explores the design of multimodal interfaces that capture hand gestures and promote interactions between dance, music and a wearable technological garment. We aim to study the design strategies used to interface music to other domains of the performance, in particular the application of wearable technologies in music performances. The project describes the development of the music and wearable interfaces, which comprise a hand interface and a mechanical actuator attached to the dancer’s dress. The performance resulting from the study is inspired by butoh dance and attempts to add a technological poetics, in the form of music-dance-wearable interactions, to the traditional dialogue between dance and music.
@inproceedings{Lamounier2019, author = {Lamounier, Natacha and Naveda, Luiz and Bicalho, Adriana}, title = {The design of technological interfaces for interactions between music, dance and garment movements}, pages = {240--245}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672942}, url = {http://www.nime.org/proceedings/2019/nime2019_paper046.pdf} }
-
Ximena Alarcon Diaz, Victor Evaristo Gonzalez Sanchez, and Cagri Erdem. 2019. INTIMAL: Walking to Find Place, Breathing to Feel Presence. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 246–249. http://doi.org/10.5281/zenodo.3672944
Download PDF DOIINTIMAL is a physical-virtual embodied system for relational listening that integrates body movement, oral archives, and voice expression through telematic improvisatory performance in migratory contexts. It has been informed by nine Colombian migrant women who express their migratory journeys through free body movement, voice and spoken word improvisation. These improvisations have been recorded using Motion Capture, in order to develop interfaces for co-located and telematic interactions for the sharing of narratives of migration. In this paper, using data from the Motion Capture experiments, we explore two specific aspects of the improvisers’ movement: displacements in space (walking, rotating) and breathing data. Here we envision how correlations between walking and breathing might be further studied to implement interfaces that help people in-between distant locations make connections between place and the feeling of presence.
@inproceedings{AlarconDiaz2019, author = {Diaz, Ximena Alarcon and Sanchez, Victor Evaristo Gonzalez and Erdem, Cagri}, title = {{INTIMAL}: Walking to Find Place, Breathing to Feel Presence}, pages = {246--249}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672944}, url = {http://www.nime.org/proceedings/2019/nime2019_paper047.pdf} }
-
Disha Sardana, Woohun Joo, Ivica Ico Bukvic, and Greg Earle. 2019. Introducing Locus: a NIME for Immersive Exocentric Aural Environments. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 250–255. http://doi.org/10.5281/zenodo.3672946
Download PDF DOILocus is a NIME designed specifically for an interactive, immersive high density loudspeaker array environment. The system is based on a pointing mechanism to interact with a sound scene comprising 128 speakers. Users can point anywhere to interact with the system, and the spatial interaction utilizes motion capture, so it does not require a screen. Instead, it is completely controlled via hand gestures using a glove that is populated with motion-tracking markers. The main purpose of this system is to offer intuitive physical interaction with the perimeter-based spatial sound sources. Further, its goal is to minimize user-worn technology and thereby enhance freedom of motion by utilizing environmental sensing devices, such as motion capture cameras or infrared sensors. The ensuing creativity enabling technology is applicable to a broad array of possible scenarios, from researching limits of human spatial hearing perception to facilitating learning and artistic performances, including dance. In this paper, we describe our NIME design and implementation, its preliminary assessment, and offer a Unity-based toolkit to facilitate its broader deployment and adoption.
@inproceedings{Sardana2019, author = {Sardana, Disha and Joo, Woohun and Bukvic, Ivica Ico and Earle, Greg}, title = {Introducing Locus: a {NIME} for Immersive Exocentric Aural Environments}, pages = {250--255}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672946}, url = {http://www.nime.org/proceedings/2019/nime2019_paper048.pdf} }
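The pointing interaction described above can be pictured with a small geometric sketch: given unit vectors for a hypothetical circular 128-speaker layout, the speaker best aligned with a pointing direction is the one with the largest dot product. The layout and function names are illustrative assumptions, not the Locus implementation.

```python
import numpy as np

# Hypothetical perimeter layout: 128 loudspeakers on a circle around the listener.
N = 128
angles = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
speakers = np.stack([np.cos(angles), np.sin(angles), np.zeros(N)], axis=1)  # unit direction vectors

def nearest_speaker(direction):
    """Index of the speaker best aligned with a (hand-derived) pointing direction."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    return int(np.argmax(speakers @ d))  # largest dot product = smallest angle

print(nearest_speaker([0.9, 0.2, 0.0]))  # pointing slightly to the left of straight ahead
```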
-
Echo Ho, Prof. Dr. Phil. Alberto de Campo, and Hannes Hoelzl. 2019. The SlowQin: An Interdisciplinary Approach to reinventing the Guqin. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 256–259. http://doi.org/10.5281/zenodo.3672948
Download PDF DOIThis paper presents an ongoing process of examining and reinventing the Guqin, to forge a contemporary engagement with this unique traditional Chinese string instrument. The SlowQin is both a hybrid resemblance of the Guqin and a fully functioning wireless interface for interacting with computer software. It has been developed and performed during the last decade. Instead of aiming for virtuosic perfection in playing the instrument, SlowQin emphasizes openness to continuously rethinking and reinventing the Guqin’s possibilities. Through a combination of conceptual work and practical production, Echo Ho’s SlowQin project works as an experimental twist on Historically Informed Performance, with the motivation of conveying artistic gestures that tackle philosophical, ideological, and socio-political subjects embedded in our living environment under globalised conditions. In particular, this paper touches on the history of the Guqin, gives an overview of the technical design concepts of the instrument, and discusses the aesthetic approaches of the SlowQin performances that have been realised so far.
@inproceedings{Ho2019, author = {Ho, Echo and de Campo, Prof. Dr. Phil. Alberto and Hoelzl, Hannes}, title = {The SlowQin: An Interdisciplinary Approach to reinventing the Guqin}, pages = {256--259}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672948}, url = {http://www.nime.org/proceedings/2019/nime2019_paper049.pdf} }
-
Charles Patrick Martin and Jim Torresen. 2019. An Interactive Musical Prediction System with Mixture Density Recurrent Neural Networks. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 260–265. http://doi.org/10.5281/zenodo.3672952
Download PDF DOIThis paper is about creating digital musical instruments where a predictive neural network model is integrated into the interactive system. Rather than predicting symbolic music (e.g., MIDI notes), we suggest that predicting future control data from the user and precise temporal information can lead to new and interesting interactive possibilities. We propose that a mixture density recurrent neural network (MDRNN) is an appropriate model for this task. The predictions can be used to fill-in control data when the user stops performing, or as a kind of filter on the user’s own input. We present an interactive MDRNN prediction server that allows rapid prototyping of new NIMEs featuring predictive musical interaction by recording datasets, training MDRNN models, and experimenting with interaction modes. We illustrate our system with several example NIMEs applying this idea. Our evaluation shows that real-time predictive interaction is viable even on single-board computers and that small models are appropriate for small datasets.
@inproceedings{Martin2019, author = {Martin, Charles Patrick and Torresen, Jim}, title = {An Interactive Musical Prediction System with Mixture Density Recurrent Neural Networks}, pages = {260--265}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672952}, url = {http://www.nime.org/proceedings/2019/nime2019_paper050.pdf} }
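As a hedged illustration of the prediction step such a model performs, the sketch below samples one control value from a one-dimensional Gaussian mixture, the kind of output head an MDRNN produces. The component count, values and temperature handling are assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mdn(pi, mu, sigma, temperature=1.0):
    """Draw one control value from a 1-D Gaussian mixture (the assumed output head of an MDRNN)."""
    w = np.log(np.asarray(pi)) / temperature
    w = np.exp(w - w.max())
    w /= w.sum()                                   # temperature-adjusted mixture weights
    k = rng.choice(len(pi), p=w)                   # choose a mixture component
    return rng.normal(mu[k], sigma[k] * temperature)

# Hypothetical network output for one time step: three components over a normalised control value.
print(sample_mdn(pi=[0.6, 0.3, 0.1], mu=[0.2, 0.5, 0.9], sigma=[0.05, 0.1, 0.02]))
```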
-
Nicolas Bazoge, Ronan Gaugne, Florian Nouviale, Valérie Gouranton, and Bruno Bossis. 2019. Expressive potentials of motion capture in musical performance. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 266–271. http://doi.org/10.5281/zenodo.3672954
Download PDF DOIThe paper presents the electronic music performance project Vis Insita, implementing the design of experimental instrumental interfaces based on optical motion capture technology with passive infrared markers (MoCap), and the analysis of their use in a real scenic presentation context. Because of MoCap’s predisposition to capture the movements of the body, much research and many musical applications in the performing arts concern dance or the sonification of gesture. For our research, we wanted to move away from the capture of the human body to analyse the possibilities of a kinetic object handled by a performer, not only in terms of musical expression, but also in the broader context of a multimodal scenic interpretation.
@inproceedings{Bazoge2019, author = {Bazoge, Nicolas and Gaugne, Ronan and Nouviale, Florian and Gouranton, Valérie and Bossis, Bruno}, title = {Expressive potentials of motion capture in musical performance}, pages = {266--271}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672954}, url = {http://www.nime.org/proceedings/2019/nime2019_paper051.pdf} }
-
Akito Van Troyer and Rebecca Kleinberger. 2019. From Mondrian to Modular Synth: Rendering NIME using Generative Adversarial Networks. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 272–277. http://doi.org/10.5281/zenodo.3672956
Download PDF DOIThis paper explores the potential of image-to-image translation techniques in aiding the design of new hardware-based musical interfaces such as MIDI keyboard, grid-based controller, drum machine, and analog modular synthesizers. We collected an extensive image database of such interfaces and implemented image-to-image translation techniques using variants of Generative Adversarial Networks. The created models learn the mapping between input and output images using a training set of either paired or unpaired images. We qualitatively assess the visual outcomes based on three image-to-image translation models: reconstructing interfaces from edge maps, and collection style transfers based on two image sets: visuals of mosaic tile patterns and geometric abstract two-dimensional arts. This paper aims to demonstrate that synthesizing interface layouts based on image-to-image translation techniques can yield insights for researchers, musicians, music technology industrial designers, and the broader NIME community.
@inproceedings{VanTroyer2019, author = {Troyer, Akito Van and Kleinberger, Rebecca}, title = {From Mondrian to Modular Synth: Rendering {NIME} using Generative Adversarial Networks}, pages = {272--277}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672956}, url = {http://www.nime.org/proceedings/2019/nime2019_paper052.pdf} }
-
Laurel Pardue, Kurijn Buys, Dan Overholt, Andrew P. McPherson, and Michael Edinger. 2019. Separating sound from source: sonic transformation of the violin through electrodynamic pickups and acoustic actuation. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 278–283. http://doi.org/10.5281/zenodo.3672958
Download PDF DOIWhen designing an augmented acoustic instrument, it is often of interest to retain an instrument’s sound quality and nuanced response while leveraging the richness of digital synthesis. Digital audio has traditionally been generated through speakers, separating sound generation from the instrument itself, or by adding an actuator within the instrument’s resonating body, imparting new sounds along with the original. We offer a third option, isolating the playing interface from the actuated resonating body, allowing us to rewrite the relationship between performance action and sound result while retaining the general form and feel of the acoustic instrument. We present a hybrid acoustic-electronic violin based on a stick-body electric violin and an electrodynamic polyphonic pick-up capturing individual string displacements. A conventional violin body acts as the resonator, actuated using digitally altered audio of the string inputs. By attaching the electric violin above the body with acoustic isolation, we retain the physical playing experience of a normal violin along with some of the acoustic filtering and radiation of a traditional build. We propose the use of the hybrid instrument with digitally automated pitch and tone correction to make an easy violin for use as a potential motivational tool for beginning violinists.
@inproceedings{Pardue2019, author = {Pardue, Laurel and Buys, Kurijn and Overholt, Dan and McPherson, Andrew P. and Edinger, Michael}, title = {Separating sound from source: sonic transformation of the violin through electrodynamic pickups and acoustic actuation}, pages = {278--283}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672958}, url = {http://www.nime.org/proceedings/2019/nime2019_paper053.pdf} }
-
Gabriela Bila Advincula, Don Derek Haddad, and Kent Larson. 2019. Grain Prism: Hieroglyphic Interface for Granular Sampling. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 284–285. http://doi.org/10.5281/zenodo.3672960
Download PDF DOIThis paper introduces the Grain Prism, a hybrid of a granular synthesizer and sampler that, through a capacitive sensing interface presented in obscure glyphs, invites users to create experimental sound textures with their own recorded voice. The capacitive sensing system, activated through skin contact on single glyphs or combinations of them, instigates the user to decipher the hidden sonic messages. The mysterious interface opens space for aleatoricism in the act of conjuring sound, and therefore for new discoveries. The users, when forced to abandon preconceived ways of playing a synthesizer, look at themselves in a different light, as their voice is the source material.
@inproceedings{Advincula2019, author = {Advincula, Gabriela Bila and Haddad, Don Derek and Larson, Kent}, title = {Grain Prism: Hieroglyphic Interface for Granular Sampling}, pages = {284--285}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672960}, url = {http://www.nime.org/proceedings/2019/nime2019_paper054.pdf} }
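To make the granular idea concrete, here is a minimal, illustrative granular texture generator in Python/NumPy: short windowed grains are read from a voice buffer and overlap-added at random positions. The grain size, density and synthetic input are assumptions for demonstration, not the Grain Prism's synthesis engine.

```python
import numpy as np

SR = 44100  # sample rate

def granular_texture(voice, n_grains=200, grain_ms=80, stretch=1.5):
    """Scatter short windowed grains of a mono voice buffer into a longer output buffer."""
    g = int(SR * grain_ms / 1000)
    env = np.hanning(g)
    out = np.zeros(int(len(voice) * stretch) + g)
    for _ in range(n_grains):
        src = np.random.randint(0, len(voice) - g)   # where the grain is read from
        dst = np.random.randint(0, len(out) - g)     # where it lands in the texture
        out[dst:dst + g] += voice[src:src + g] * env
    return out / np.max(np.abs(out))

# Stand-in for a recorded voice: one second of noise.
texture = granular_texture(np.random.randn(SR))
```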
-
Oliver Bown, Angelo Fraietta, Sam Ferguson, Lian Loke, and Liam Bray. 2019. Facilitating Creative Exploratory Search with Multiple Networked Audio Devices Using HappyBrackets. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 286–291. http://doi.org/10.5281/zenodo.3672962
Download PDF DOIWe present an audio-focused creative coding toolkit for deploying music programs to remote networked devices. It is designed to support efficient creative exploratory search in the context of the Internet of Things (IoT), where one or more devices must be configured, programmed and interact over a network, with applications in digital musical instruments, networked music performance and other digital experiences. Users can easily monitor and hack what multiple devices are doing on the fly, enhancing their ability to perform “exploratory search” in a creative workflow. We present two creative case studies using the system: the creation of a dance performance and the creation of a distributed musical installation. Analysing different activities within the production process, with a particular focus on the trade-off between more creative exploratory tasks and more standard configuring and problem-solving tasks, we show how the system supports creative exploratory search for multiple networked devices.
@inproceedings{Bown2019, author = {Bown, Oliver and Fraietta, Angelo and Ferguson, Sam and Loke, Lian and Bray, Liam}, title = {Facilitating Creative Exploratory Search with Multiple Networked Audio Devices Using HappyBrackets}, pages = {286--291}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672962}, url = {http://www.nime.org/proceedings/2019/nime2019_paper055.pdf} }
-
Thais Fernandes Santos. 2019. The reciprocity between ancillary gesture and music structure performed by expert musicians. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 292–297. http://doi.org/10.5281/zenodo.3672966
Download PDF DOIDuring musical performance, expert musicians consciously manipulate acoustical parameters to express their interpretative choices. Players also make physical motions, and in many cases these gestures are related to the musicians’ artistic intentions. However, it is not clear whether this sound manipulation is reflected in physical motion. The understanding of the musical structure of the work being performed, at its many levels, may impact the projection of artistic intentions, and performers alter it in micro and macro sections, such as musical motifs, phrases and sections. Therefore, this paper investigates timing manipulation and how such variations may be reflected in physical gestures. The study involved musicians (flute, clarinet, and bassoon players) performing a unison excerpt by G. Rossini. We analyzed the relationship between timing variation (Inter-Onset Interval deviations) and physical motion, based on the traveled distance of the flute under different conditions. The flutists were asked to play the musical excerpt in three experimental conditions: (1) playing solo, and playing in duets with previous recordings by other instrumentalists, namely (2) a clarinetist and (3) a bassoonist. The findings suggest that: 1) the movements, which seem to be related to the sense of pulse, are recurrent and stable, and 2) the timing variability in micro or macro sections is reflected in the amplitude of the gestures performed by the flutists.
@inproceedings{FernandesSantos2019, author = {Santos, Thais Fernandes}, title = {The reciprocity between ancillary gesture and music structure performed by expert musicians}, pages = {292--297}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672966}, url = {http://www.nime.org/proceedings/2019/nime2019_paper056.pdf} }
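The timing analysis can be pictured with a small NumPy sketch that derives inter-onset intervals from onset times and correlates their deviations with a per-note motion distance. All numbers below are invented, and the analysis is far simpler than the study's.

```python
import numpy as np

# Hypothetical data: note onset times (s) and flute travel distance per note transition (m).
onsets = np.array([0.00, 0.52, 1.01, 1.55, 2.02, 2.60])
travel = np.array([0.03, 0.05, 0.02, 0.07, 0.04])

ioi = np.diff(onsets)            # inter-onset intervals
ioi_dev = ioi - ioi.mean()       # deviation from the mean interval, a crude timing profile

# Pearson correlation between timing deviation and traveled distance.
r = np.corrcoef(ioi_dev, travel)[0, 1]
print(f"IOI deviation vs. motion distance: r = {r:.2f}")
```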
-
Razvan Paisa and Dan Overholt. 2019. Enhancing the Expressivity of the Sensel Morph via Audio-rate Sensing. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 298–302. http://doi.org/10.5281/zenodo.3672968
Download PDF DOIThis project describes a novel approach to hybrid electro-acoustic instruments that augments the Sensel Morph with real-time audio sensing capabilities. The actual action-sounds are captured with a piezoelectric transducer and processed in Max 8 to extend the sonic range existing in the acoustical domain alone. The control parameters are captured by the Morph and mapped to audio algorithm properties such as filter cutoff frequency, frequency shift or overdrive. The instrument opens up the possibility of a large selection of different interaction techniques that have a direct impact on the output sound. The instrument is evaluated from a sound designer’s perspective, encouraging exploration in the materials used as well as the techniques. The contributions are two-fold. First, the use of a piezo transducer to augment the Sensel Morph affords an extra dimension of control on top of its existing offerings. Second, the use of acoustic sounds from physical interactions as a source for excitation and manipulation of an audio processing system offers a large variety of new sounds to be discovered. The methodology involved an exploratory process of iterative instrument making, interspersed with observations gathered via improvisatory trials, focusing on the new interactions made possible through the fusion of audio-rate inputs with the Morph’s default interaction methods.
@inproceedings{Paisa2019, author = {Paisa, Razvan and Overholt, Dan}, title = {Enhancing the Expressivity of the Sensel Morph via Audio-rate Sensing}, pages = {298--302}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672968}, url = {http://www.nime.org/proceedings/2019/nime2019_paper057.pdf} }
-
Juan Mariano Ramos. 2019. Eolos: a wireless MIDI wind controller. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 303–306. http://doi.org/10.5281/zenodo.3672972
Download PDF DOIThis paper presents a description of the design and usage of Eolos, a wireless MIDI wind controller. The main goal of Eolos is to provide an interface that facilitates the production of music for any individual, regardless of their playing skills or previous musical knowledge. Its features are: open design, lower cost than commercial alternatives, wireless MIDI operation, rechargeable battery power, a graphical user interface, tactile keys, sensitivity to air pressure, a left-right reversible design and two FSR sensors. Its participation in the 1st Collaborative Concert over the Internet between Argentina and Cuba, "Tradición y Nuevas Sonoridades", is also mentioned.
@inproceedings{Ramos2019, author = {Ramos, Juan Mariano}, title = {Eolos: a wireless {MIDI} wind controller}, pages = {303--306}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672972}, url = {http://www.nime.org/proceedings/2019/nime2019_paper058.pdf} }
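As a rough picture of how a breath sensor can drive MIDI, the sketch below maps a hypothetical air-pressure ADC reading onto a 0-127 breath-controller value (CC 2) and sends it with the mido library. The calibration range and CC choice are assumptions, not Eolos' firmware.

```python
import mido

def pressure_to_cc(raw, lo=8000, hi=26000):
    """Map a hypothetical air-pressure ADC reading onto a 0-127 MIDI breath value."""
    raw = min(max(raw, lo), hi)
    return int(127 * (raw - lo) / (hi - lo))

out = mido.open_output()                          # default MIDI output port
for reading in (8000, 12000, 20000, 26000):       # simulated sensor readings
    out.send(mido.Message("control_change", control=2, value=pressure_to_cc(reading)))
```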
-
Ruihan Yang, Tianyao Chen, Yiyi Zhang, and gus xia. 2019. Inspecting and Interacting with Meaningful Music Representations using VAE. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 307–312. http://doi.org/10.5281/zenodo.3672974
Download PDF DOIThe Variational Autoencoder has already achieved great results on image generation and has recently made promising progress on music sequence generation. However, the model is still quite difficult to control, in the sense that the learned latent representations lack meaningful music semantics. What users really need is to interact with certain music features, such as rhythm and pitch contour, in the creation process, so that they can easily test different composition ideas. In this paper, we propose a disentanglement-by-augmentation method to inspect the pitch and rhythm interpretations of the latent representations. Based on the interpretable representations, an intuitive graphical user interface demo is designed for users to better direct the music creation process by manipulating the pitch contours and rhythmic complexity.
@inproceedings{Yang2019, author = {Yang, Ruihan and Chen, Tianyao and Zhang, Yiyi and gus xia}, title = {Inspecting and Interacting with Meaningful Music Representations using {VAE}}, pages = {307--312}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672974}, url = {http://www.nime.org/proceedings/2019/nime2019_paper059.pdf} }
-
Gerard Roma, Owen Green, and Pierre Alexandre Tremblay. 2019. Adaptive Mapping of Sound Collections for Data-driven Musical Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 313–318. http://doi.org/10.5281/zenodo.3672976
Download PDF DOIDescriptor spaces have become a ubiquitous interaction paradigm for music based on collections of audio samples. However, most systems rely on a small predefined set of descriptors, which the user is often required to understand and choose from. There is no guarantee that the chosen descriptors are relevant for a given collection. In addition, this method does not scale to longer samples that require higher-dimensional descriptions, which biases systems towards the use of short samples. In this paper we propose a novel framework for the automatic creation of interactive sound spaces from sound collections using feature learning and dimensionality reduction. The framework is implemented as a software library using the SuperCollider language. We compare several algorithms and describe some example interfaces for interacting with the resulting spaces. Our experiments signal the potential of unsupervised algorithms for creating data-driven musical interfaces.
@inproceedings{Roma2019, author = {Roma, Gerard and Green, Owen and Tremblay, Pierre Alexandre}, title = {Adaptive Mapping of Sound Collections for Data-driven Musical Interfaces}, pages = {313--318}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672976}, url = {http://www.nime.org/proceedings/2019/nime2019_paper060.pdf} }
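A minimal stand-in for the descriptor-space idea, assuming MFCC means as features and PCA as the dimensionality reduction (the paper's framework is SuperCollider-based and compares several algorithms), might look like this in Python:

```python
import numpy as np
import librosa
from sklearn.decomposition import PCA

SR = 22050

def describe(y):
    """Summarise a sample as the mean of its MFCC frames (a simple stand-in for learned features)."""
    return librosa.feature.mfcc(y=y, sr=SR, n_mfcc=13).mean(axis=1)

# Hypothetical collection: a few synthetic tones instead of real sound files.
collection = [np.sin(2 * np.pi * f * np.arange(SR) / SR) for f in (110, 220, 440, 880)]
features = np.stack([describe(y) for y in collection])

# Reduce to a 2-D layout that could back an interactive sound space.
layout = PCA(n_components=2).fit_transform(features)
print(layout)
```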
-
Vesa Petri Norilo. 2019. Veneer: Visual and Touch-based Programming for Audio. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 319–324. http://doi.org/10.5281/zenodo.3672978
Download PDF DOIThis paper presents Veneer, a visual, touch-ready programming interface for the Kronos programming language. The challenges of representing high-level data flow abstractions, including higher order functions, are described. The tension between abstraction and spontaneity in programming is addressed, and gradual abstraction in live programming is proposed as a potential solution. Several novel user interactions for patching on a touch device are shown. In addition, the paper describes some of the current issues of web audio music applications and offers strategies for integrating a web-based presentation layer with a low-latency native processing backend.
@inproceedings{Norilo2019, author = {Norilo, Vesa Petri}, title = {Veneer: Visual and Touch-based Programming for Audio}, pages = {319--324}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672978}, url = {http://www.nime.org/proceedings/2019/nime2019_paper061.pdf} }
-
Andrei Faitas, Synne Engdahl Baumann, Torgrim Rudland Næss, Jim Torresen, and Charles Patrick Martin. 2019. Generating Convincing Harmony Parts with Simple Long Short-Term Memory Networks. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 325–330. http://doi.org/10.5281/zenodo.3672980
Download PDF DOIGenerating convincing music via deep neural networks is a challenging problem that shows promise for many applications including interactive musical creation. One part of this challenge is the problem of generating convincing accompaniment parts to a given melody, as could be used in an automatic accompaniment system. Despite much progress in this area, systems that can automatically learn to generate interesting sounding, as well as harmonically plausible, accompanying melodies remain somewhat elusive. In this paper we explore the problem of sequence to sequence music generation where a human user provides a sequence of notes, and a neural network model responds with a harmonically suitable sequence of equal length. We consider two sequence-to-sequence models; one featuring standard unidirectional long short-term memory (LSTM) architecture, and the other featuring bidirectional LSTM, both successfully trained to produce a sequence based on the given input. Both of these are fairly dated models, as part of the investigation is to see what can be achieved with such models. These are evaluated and compared via a qualitative study that features 106 respondents listening to eight random samples from our set of generated music, as well as two human samples. From the results we see a preference for the sequences generated by the bidirectional model as well as an indication that these sequences sound more human.
@inproceedings{Faitas2019, author = {Faitas, Andrei and Baumann, Synne Engdahl and Næss, Torgrim Rudland and Torresen, Jim and Martin, Charles Patrick}, title = {Generating Convincing Harmony Parts with Simple Long Short-Term Memory Networks}, pages = {325--330}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672980}, url = {http://www.nime.org/proceedings/2019/nime2019_paper062.pdf} }
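A hedged sketch of the bidirectional variant in Keras follows; the vocabulary size, sequence length and layer widths are illustrative assumptions rather than the architecture evaluated in the paper.

```python
from tensorflow.keras import Sequential, layers

STEPS, VOCAB = 16, 48   # assumed melody length (in tokens) and note vocabulary size

# Melody tokens in, a harmony line of equal length out, via a bidirectional LSTM.
model = Sequential([
    layers.Input(shape=(STEPS,)),
    layers.Embedding(VOCAB, 64),
    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),
    layers.TimeDistributed(layers.Dense(VOCAB, activation="softmax")),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```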
-
Anthony T. Marasco, Edgar Berdahl, and Jesse Allison. 2019. Bendit_I/O: A System for Networked Performance of Circuit-Bent Devices. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 331–334. http://doi.org/10.5281/zenodo.3672982
Download PDF DOIBendit_I/O is a system that allows for wireless, networked performance of circuit-bent devices, giving artists a new outlet for performing with repurposed technology. In a typical setup, a user pre-bends a device using the Bendit_I/O board as an intermediary, replacing physical switches and potentiometers with the board’s reed relays, motor driver, and digital potentiometer signals. Bendit_I/O brings the networking techniques of distributed music performances to the hardware hacking realm, opening the door for creative implementation of multiple circuit-bent devices in audiovisual experiences. Consisting of a Wi-Fi- enabled I/O board and a Node-based server, the system provides performers with a variety of interaction and control possibilities between connected users and hacked devices. Moreover, it is user-friendly, low-cost, and modular, making it a flexible toolset for artists of diverse experience levels.
@inproceedings{Marasco2019, author = {Marasco, Anthony T. and Berdahl, Edgar and Allison, Jesse}, title = {{Bendit\_I/O}: A System for Networked Performance of Circuit-Bent Devices}, pages = {331--334}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672982}, url = {http://www.nime.org/proceedings/2019/nime2019_paper063.pdf} }
-
McLean J Macionis and Ajay Kapur. 2019. Where Is The Quiet: Immersive Experience Design Using the Brain, Mechatronics, and Machine Learning. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 335–338. http://doi.org/10.5281/zenodo.3672984
Download PDF DOI’Where Is The Quiet?’ is a mixed-media installation that utilizes immersive experience design, mechatronics, and machine learning in order to enhance wellness and increase connectivity to the natural world. Individuals interact with the installation by wearing a brainwave interface that measures the strength of the alpha wave signal. The interface then transmits the data to a computer that uses it to determine the individual’s overall state of relaxation. As the individual achieves higher states of relaxation, mechatronic instruments respond and provide feedback. This feedback not only encourages self-awareness but also motivates the individual to relax further. Visitors without the headset experience the installation by watching a film and listening to an original musical score. Through its novel arrangement of technologies and features, ’Where Is The Quiet?’ demonstrates that mediated technological experiences are capable of evoking meditative states of consciousness, facilitating individual and group connectivity, and deepening awareness of the natural world. As such, this installation opens the door to future research regarding the possibility of immersive experiences supporting humanitarian needs.
@inproceedings{Macionis2019, author = {Macionis, McLean J and Kapur, Ajay}, title = {Where Is The Quiet: Immersive Experience Design Using the Brain, Mechatronics, and Machine Learning}, pages = {335--338}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672984}, url = {http://www.nime.org/proceedings/2019/nime2019_paper064.pdf} }
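One simple way to picture the relaxation estimate is an exponentially smoothed ratio of alpha-band power to a resting baseline, as in the sketch below. The baseline, smoothing factor and normalisation are assumptions, not the installation's actual signal chain.

```python
import numpy as np

class RelaxationTracker:
    """Smoothed relaxation estimate from alpha-band power relative to a resting baseline."""

    def __init__(self, baseline, smooth=0.9):
        self.baseline, self.smooth, self.level = baseline, smooth, 0.0

    def update(self, alpha_power):
        ratio = np.clip(alpha_power / self.baseline - 1.0, 0.0, 1.0)
        self.level = self.smooth * self.level + (1 - self.smooth) * ratio
        return self.level            # 0 = at baseline, towards 1 = sustained elevated alpha

tracker = RelaxationTracker(baseline=12.0)
for p in (12.0, 14.0, 18.0, 20.0):   # simulated alpha-power readings
    print(round(tracker.update(p), 3))
```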
-
Tate Carson. 2019. Mesh Garden: A creative-based musical game for participatory musical performance . Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 339–342. http://doi.org/10.5281/zenodo.3672986
Download PDF DOIMesh Garden explores participatory music-making with smartphones using an audio sequencer game made up of a distributed smartphone speaker system. The piece allows a group of people in a relaxed situation to create a piece of ambient music using their smartphones networked through the internet. The players’ interactions with the music are derived from the orientations of their phones. The work also has a gameplay aspect; if two players’ phones match in orientation, one player has the option to take the other player’s note, building up a bank of notes that will be used to form a melody.
@inproceedings{Carson2019, author = {Carson, Tate}, title = {Mesh Garden: A creative-based musical game for participatory musical performance }, pages = {339--342}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672986}, url = {http://www.nime.org/proceedings/2019/nime2019_paper065.pdf} }
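The orientation-matching mechanic can be illustrated with a small quaternion comparison; the tolerance angle and the representation are assumptions for the sketch, not the game's implementation.

```python
import numpy as np

def orientations_match(q1, q2, tol_deg=15.0):
    """True if two device orientations (unit quaternions) differ by less than a tolerance angle."""
    dot = abs(float(np.dot(q1, q2)))                     # |q1 . q2| = cos(theta / 2)
    angle = 2.0 * np.degrees(np.arccos(np.clip(dot, -1.0, 1.0)))
    return angle <= tol_deg

# Two phones roughly ten degrees apart in orientation -> a match, so a note could change hands.
print(orientations_match([1.0, 0.0, 0.0, 0.0], [0.996, 0.087, 0.0, 0.0]))
```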
-
Beat Rossmy and Alexander Wiethoff. 2019. The Modular Backward Evolution — Why to Use Outdated Technologies. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 343–348. http://doi.org/10.5281/zenodo.3672988
Download PDF DOIIn this paper we draw a picture that captures the increasing interest in the format of modular synthesizers today. We therefore provide a historical summary, which includes the origins, the fall and the rediscovery of that technology. Further an empirical analysis is performed based on statements given by artists and manufacturers taken from published interviews. These statements were aggregated, objectified and later reviewed by an expert group consisting of modular synthesizer vendors. Their responses provide the basis for the discussion on how emerging trends in synthesizer interface design reveal challenges and opportunities for the NIME community.
@inproceedings{Rossmy2019, author = {Rossmy, Beat and Wiethoff, Alexander}, title = {The Modular Backward Evolution --- Why to Use Outdated Technologies}, pages = {343--348}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672988}, url = {http://www.nime.org/proceedings/2019/nime2019_paper066.pdf} }
-
Vincent Goudard. 2019. Ephemeral instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 349–354. http://doi.org/10.5281/zenodo.3672990
Download PDF DOIThis article questions the notion of ephemerality of digital musical instruments (DMI). Longevity is generally regarded as a valuable quality that good design criteria should help to achieve. However, the nature of the tools, of the performance conditions and of the music itself may lead one to think of ephemerality as an intrinsic modality of the existence of DMIs. In particular, the conditions of contemporary musical production suggest that contextual adaptations of instrumental devices beyond the monolithic unity of classical instruments should be considered. The first two parts of this article analyse various reasons to reassess the issue of longevity and ephemerality. The last two sections attempt to propose an articulation of these two aspects to inform both the design of DMIs and their learning.
@inproceedings{Goudard2019, author = {Goudard, Vincent}, title = {Ephemeral instruments}, pages = {349--354}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672990}, url = {http://www.nime.org/proceedings/2019/nime2019_paper067.pdf} }
-
Julian Jaramillo and Fernando Iazzetta. 2019. PICO: A portable audio effect box for traditional plucked-string instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 355–360. http://doi.org/10.5281/zenodo.3672992
Download PDF DOIThis paper reports the conception, design, implementation and evaluation processes of PICO, a portable audio effect system created with Pure Data and the Raspberry Pi, which augments traditional plucked-string instruments such as the Brazilian Cavaquinho, the Venezuelan Cuatro, the Colombian Tiple and the Peruvian/Bolivian Charango. A fabric soft case fixed to the instrument’s body holds the PICO modules: the touchscreen, the single-board computer, the sound card, the speaker system and the DC power bank. The device’s audio specifications arose from musicological insights about the social role of performers in their musical contexts and the instruments’ playing techniques. They were taken as design challenges in the creation process of PICO’s first prototype, which was submitted to a short evaluation. Along with the construction of PICO, we reflected on the design of an interactive audio interface as a mode of research. Therefore, the paper will also discuss methodological aspects of audio hardware design.
@inproceedings{Jaramillo2019, author = {Jaramillo, Julian and Iazzetta, Fernando}, title = {{PICO}: A portable audio effect box for traditional plucked-string instruments}, pages = {355--360}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672992}, url = {http://www.nime.org/proceedings/2019/nime2019_paper068.pdf} }
-
Guilherme Bertissolo. 2019. Composing Understandings: music, motion, gesture and embodied cognition. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 361–364. http://doi.org/10.5281/zenodo.3672994
Download PDF DOIThis paper focuses on ongoing research in music composition based on the study of cognitive research into musical meaning. As both method and result, we propose the creation of experiments related to key issues in composition and music cognition, such as music and movement, memory, expectation and metaphor in the creative process. The theoretical framework is linked to embodied cognition, with developments related to cognitive semantics and the enactivist current of the cognitive sciences, among other domains of the contemporary sciences of mind and neuroscience. The experiments involve the relationship between music and movement, based on prior research that uses as a reference a context in which it is not possible to establish a clear distinction between the two: Capoeira. Finally, we propose a discussion of the application of this theoretical approach in two compositions: Boreal IV, for steel drums and real-time electronics, and Converse, a collaborative multimedia piece for piano, real-time audio (Pure Data) and video processing (GEM and live video), and a dancer.
@inproceedings{Bertissolo2019, author = {Bertissolo, Guilherme}, title = {Composing Understandings: music, motion, gesture and embodied cognition}, pages = {361--364}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672994}, url = {http://www.nime.org/proceedings/2019/nime2019_paper069.pdf} }
-
Cristohper Ramos Flores, Jim Murphy, and Michael Norris. 2019. HypeSax: Saxophone acoustic augmentation. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 365–370. http://doi.org/10.5281/zenodo.3672996
Download PDF DOINew interfaces allow performers to access new possibilities of musical expression. Even though interfaces are often designed to be adaptable to different software, most of them rely on external speakers or similar transducers. This often results in disembodiment and acoustic disengagement from the interface and, in the case of augmented instruments, from the instruments themselves. This paper describes a project in which a hybrid system allows an acoustic integration between the sound of the acoustic saxophone and electronics.
@inproceedings{RamosFlores2019, author = {Flores, Cristohper Ramos and Murphy, Jim and Norris, Michael}, title = {HypeSax: Saxophone acoustic augmentation}, pages = {365--370}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672996}, url = {http://www.nime.org/proceedings/2019/nime2019_paper070.pdf} }
-
Patrick Chwalek and Joe Paradiso. 2019. CD-Synth: a Rotating, Untethered, Digital Synthesizer. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 371–374. http://doi.org/10.5281/zenodo.3672998
Download PDF DOIWe describe the design of an untethered digital synthesizer that can be held and manipulated while broadcasting audio data to a receiving off-the-shelf Bluetooth receiver. The synthesizer allows the user to freely rotate and reorient the instrument while exploiting non-contact light sensing for a truly expressive performance. The system consists of a suite of sensors that convert rotation, orientation, touch, and user proximity into various audio filters and effects operated on preset wave tables, while offering a persistence of vision display for input visualization. This paper discusses the design of the system, including the circuit, mechanics, and software layout, as well as how this device may be incorporated into a performance.
@inproceedings{Chwalek2019, author = {Chwalek, Patrick and Paradiso, Joe}, title = {CD-Synth: a Rotating, Untethered, Digital Synthesizer}, pages = {371--374}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672998}, url = {http://www.nime.org/proceedings/2019/nime2019_paper071.pdf} }
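As an illustrative (not actual) mapping of the kind described, the sketch below converts an orientation reading into a wavetable position, a filter cutoff and a brightness value; the ranges and curve shapes are assumptions.

```python
import numpy as np

def orientation_to_params(roll, pitch, yaw):
    """Map an orientation reading (radians) to wavetable position, filter cutoff and brightness."""
    wavetable_pos = (yaw % (2.0 * np.pi)) / (2.0 * np.pi)           # a full rotation scans the table
    cutoff_hz = 200.0 * 2.0 ** (4.0 * (pitch + np.pi / 2) / np.pi)  # tilt sweeps ~200 Hz to ~3.2 kHz
    brightness = float(np.clip((roll + np.pi) / (2.0 * np.pi), 0.0, 1.0))
    return wavetable_pos, cutoff_hz, brightness

print(orientation_to_params(roll=0.1, pitch=0.3, yaw=2.0))
```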
-
Niccolò Granieri and James Dooley. 2019. Reach: a keyboard-based gesture recognition system for live piano sound modulation. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 375–376. http://doi.org/10.5281/zenodo.3673000
Download PDF DOIThis paper presents Reach, a keyboard-based gesture recognition system for live piano sound modulation. Reach is a system built using the Leap Motion Orion SDK, Pure Data and a custom C++ OSC mapper. It provides control over the sound modulation of an acoustic piano using the pianist’s ancillary gestures. The system was developed using an iterative design process, incorporating research findings from two user studies and several case studies. The results that emerged show the potential of recognising and utilising the pianist’s existing technique when designing keyboard-based DMIs, reducing the requirement to learn additional techniques.
@inproceedings{Granieri2019, author = {Granieri, Niccolò and Dooley, James}, title = {Reach: a keyboard-based gesture recognition system for live piano sound modulation}, pages = {375--376}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673000}, url = {http://www.nime.org/proceedings/2019/nime2019_paper072.pdf} }
-
margaret schedel, Jocelyn Ho, and Matthew Blessing. 2019. Women’s Labor: Creating NIMEs from Domestic Tools . Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 377–380. http://doi.org/10.5281/zenodo.3672729
Download PDF DOIThis paper describes the creation of a NIME built from an iron and a wooden ironing board. The ironing board acts as a resonator for the system, which includes sensors embedded in the iron, such as pressure sensors and piezo microphones. The iron has LEDs wired to the sides, and at either end of the board are CCDs; using machine learning we can identify what kind of fabric is being ironed, and the position of the iron along the x and y-axes as well as its rotation and tilt. This instrument is part of a larger project, Women’s Labor, that juxtaposes traditional musical instruments such as spinets and virginals designated for “ladies” with new interfaces for musical expression that repurpose older tools of women’s work. Using embedded technologies, we reimagine domestic tools as musical interfaces, creating expressive instruments from the appliances of women’s chores.
@inproceedings{schedel2019, author = {margaret schedel and Ho, Jocelyn and Blessing, Matthew}, title = {Women's Labor: Creating {NIME}s from Domestic Tools }, pages = {377--380}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672729}, url = {http://www.nime.org/proceedings/2019/nime2019_paper073.pdf} }
-
Andre Rauber Du Bois and Rodrigo Geraldo Ribeiro. 2019. HMusic: A domain specific language for music programming and live coding. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 381–386. http://doi.org/10.5281/zenodo.3673003
Download PDF DOIThis paper presents HMusic, a domain-specific language based on music patterns that can be used to write music and for live coding. The main abstractions provided by the language are patterns and tracks. Code written in HMusic looks like the patterns and multi-tracks available in music sequencers and drum machines. HMusic provides primitives to design and compose patterns, generating new patterns. The basic abstractions provided by the language have an inductive definition, and since HMusic is embedded in the Haskell functional programming language, programmers can design functions to manipulate music on the fly.
@inproceedings{RauberDuBois2019, author = {Bois, Andre Rauber Du and Ribeiro, Rodrigo Geraldo}, title = {HMusic: A domain specific language for music programming and live coding}, pages = {381--386}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673003}, url = {http://www.nime.org/proceedings/2019/nime2019_paper074.pdf} }
-
Angelo Fraietta. 2019. Stellar Command: a planetarium software based cosmic performance interface. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 387–392. http://doi.org/10.5281/zenodo.3673005
Download PDF DOIThis paper presents the use of the Stellarium planetarium software coupled with the VizieR database of astronomical catalogues as an interface mechanism for creating astronomy-based multimedia performances, and as a music composition interface. The celestial display from Stellarium is used to query VizieR, which then obtains scientific astronomical data for the displayed stars, including colour, celestial position, magnitude and distance, and sends it as input data for music composition or performance. Stellarium and VizieR are controlled through Stellar Command, a software library that couples the two systems and can be used both as a standalone command-line utility using Open Sound Control and as a software library.
@inproceedings{Fraiettab2019, author = {Fraietta, Angelo}, title = {Stellar Command: a planetarium software based cosmic performance interface}, pages = {387--392}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673005}, url = {http://www.nime.org/proceedings/2019/nime2019_paper075.pdf} }
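To make the star-data-to-music idea concrete, the sketch below maps a few made-up catalogue records (colour index, magnitude, distance) onto note parameters and prints OSC-style address/value pairs. It is not the Stellar Command library; the data values, mappings, and ranges are illustrative assumptions.

```python
# Illustrative sketch: hypothetical star records mapped to note parameters.
import math

stars = [
    # (name, B-V colour index, apparent magnitude, distance in parsecs) -- made-up values
    ("star_a", 0.65, 4.8, 12.0),
    ("star_b", 1.40, 2.1, 95.0),
    ("star_c", -0.05, 6.3, 210.0),
]

def colour_to_midi(bv: float) -> int:
    """Map B-V colour index (roughly -0.3..2.0) onto a MIDI pitch range."""
    lo, hi = -0.3, 2.0
    t = (min(max(bv, lo), hi) - lo) / (hi - lo)
    return int(round(36 + t * (96 - 36)))

def magnitude_to_amp(mag: float) -> float:
    """Brighter stars (lower magnitude) map to louder notes."""
    return round(min(1.0, 10 ** (-0.4 * (mag - 1.0))), 3)

for name, bv, mag, dist in stars:
    pitch = colour_to_midi(bv)
    amp = magnitude_to_amp(mag)
    pan = math.tanh(dist / 100.0)  # farther stars pushed toward one side
    # The real system sends Open Sound Control; here we just print the message.
    print(f"/star/{name} pitch={pitch} amp={amp} pan={pan:.2f}")
```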
-
Patrick Müller and Johannes Michael Schuett. 2019. Towards a Telematic Dimension Space. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 393–400. http://doi.org/10.5281/zenodo.3673007
Download PDF DOITelematic performances connect two or more locations so that participants are able to interact in real time. Such practices blend a variety of dimensions, insofar as the representation of remote performers on a local stage intrinsically occurs on auditory, as well as visual and scenic, levels. Due to their multimodal nature, the analysis or creation of such performances can quickly descend into a house of mirrors wherein certain intensely interdependent dimensions come to the fore, while others are multiplied, seem hidden or are made invisible. In order to have a better understanding of such performances, Dimension Space Analysis, with its capacity to review multifaceted components of a system, can be applied to telematic performances, understood here as (a bundle of) NIMEs. In the second part of the paper, some telematic works from the practices of the authors are described with the toolset developed.
@inproceedings{Müller2019, author = {Müller, Patrick and Schuett, Johannes Michael}, title = {Towards a Telematic Dimension Space}, pages = {393--400}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673007}, url = {http://www.nime.org/proceedings/2019/nime2019_paper076.pdf} }
-
Pedro Pablo Lucas. 2019. A MIDI Controller Mapper for the Built-in Audio Mixer in the Unity Game Engine. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 401–404. http://doi.org/10.5281/zenodo.3673009
Download PDF DOIUnity is one of the most widely used engines in the game industry, and several extensions have been implemented to expand its features so that multimedia products can be created more effectively and efficiently. From the point of view of audio development, Unity has included an Audio Mixer since version 5, which facilitates the organization of sounds, effects, and the mixing process in general; however, this module can be manipulated only through its graphical interface. This work describes the design and implementation of an extension tool that maps parameters from the Audio Mixer to external MIDI devices, such as controllers with sliders and knobs, so that the developer can easily mix a game with the feel of a physical interface.
@inproceedings{Lucasb2019, author = {Lucas, Pedro Pablo}, title = {A {MIDI} Controller Mapper for the Built-in Audio Mixer in the Unity Game Engine}, pages = {401--404}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673009}, url = {http://www.nime.org/proceedings/2019/nime2019_paper077.pdf} }
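The core of such a mapper is a table from incoming MIDI control-change messages to exposed mixer parameters and their value ranges. The sketch below shows that logic in Python with hypothetical parameter names; the actual tool is implemented against Unity's Audio Mixer API and a MIDI input library.

```python
# Sketch of the mapping logic only; not the Unity extension itself.
from dataclasses import dataclass

@dataclass
class ParameterMap:
    parameter: str   # name of an exposed Audio Mixer parameter (hypothetical)
    low: float       # value at CC = 0
    high: float      # value at CC = 127

# Hypothetical assignments from controller knobs/sliders to mixer parameters.
mappings = {
    (0, 1):  ParameterMap("MusicVolume", -80.0, 0.0),   # channel 0, CC 1, in dB
    (0, 2):  ParameterMap("SfxVolume",   -80.0, 0.0),
    (0, 74): ParameterMap("ReverbWet",     0.0, 1.0),
}

def on_control_change(channel: int, cc: int, value: int) -> None:
    m = mappings.get((channel, cc))
    if m is None:
        return
    scaled = m.low + (value / 127.0) * (m.high - m.low)
    # In the real tool this would set the mixer parameter; here we just log it.
    print(f"set {m.parameter} = {scaled:.2f}")

on_control_change(0, 1, 96)   # prints: set MusicVolume = -19.53
```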
-
Pedro Pablo Lucas. 2019. AuSynthAR: A simple low-cost modular synthesizer based on Augmented Reality. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 405–406. http://doi.org/10.5281/zenodo.3673011
Download PDF DOIAuSynthAR is a digital instrument based on Augmented Reality (AR), which allows sound synthesis modules to be combined into simple sound networks. It only requires a mobile device, a set of tokens, a sound output device and, optionally, a MIDI controller, which makes it an affordable instrument. An application running on the device generates the sounds and the graphical augmentations over the tokens.
@inproceedings{Lucasc2019, author = {Lucas, Pedro Pablo}, title = {AuSynthAR: A simple low-cost modular synthesizer based on Augmented Reality}, pages = {405--406}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673011}, url = {http://www.nime.org/proceedings/2019/nime2019_paper078.pdf} }
-
Don Derek Haddad and Joe Paradiso. 2019. The World Wide Web in an Analog Patchbay. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 407–410. http://doi.org/10.5281/zenodo.3673013
Download PDF DOIThis paper introduces a versatile module for Eurorack synthesizers that allows multiple modular synthesizers to be patched together remotely through the world wide web. The module is configured from a read-eval-print-loop environment running in the web browser, which can be used to send signals to the modular synthesizer from a live coding interface or from various data sources on the internet.
@inproceedings{Haddad2019, author = {Haddad, Don Derek and Paradiso, Joe}, title = {The World Wide Web in an Analog Patchbay}, pages = {407--410}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673013}, url = {http://www.nime.org/proceedings/2019/nime2019_paper079.pdf} }
-
Fou Yoshimura and kazuhiro jo. 2019. A "voice" instrument based on vocal tract models by using soft material for a 3D printer and an electrolarynx. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 411–412. http://doi.org/10.5281/zenodo.3673015
Download PDF DOIIn this paper, we propose a “voice” instrument based on vocal tract models with a soft material for a 3D printer and an electrolarynx. In our practice, we explore the incongruity of the voice instrument through the accompanying music production and performance. With the instrument, we aim to return to the fact that the “Machine speaks out.” With the production of a song “Vocalise (Incomplete),” and performances, we reveal how the instrument could work with audiences and explore the uncultivated field of voices.
@inproceedings{Yoshimura2019, author = {Yoshimura, Fou and kazuhiro jo}, title = {A "voice" instrument based on vocal tract models by using soft material for a 3D printer and an electrolarynx}, pages = {411--412}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673015}, url = {http://www.nime.org/proceedings/2019/nime2019_paper080.pdf} }
-
Juan Pablo Yepez Placencia, Jim Murphy, and Dale Carnegie. 2019. Exploring Dynamic Variations for Expressive Mechatronic Chordophones. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 413–418. http://doi.org/10.5281/zenodo.3673017
Download PDF DOIMechatronic chordophones have become increasingly common in mechatronic music. As expressive instruments, they offer multiple techniques to create and manipulate sounds using their actuation mechanisms. Chordophone designs have taken multiple forms, from frames that play a guitar-like instrument, to machines that integrate strings and actuators as part of their frame. However, few of these instruments have taken advantage of dynamics, which have been largely unexplored. This paper details the design and construction of a new picking mechanism prototype which enables expressive techniques through fast and precise movement and actuation. We have adopted iterative design and rapid prototyping strategies to develop and refine a compact picker capable of creating dynamic variations reliably. Finally, a quantitative evaluation process demonstrates that this system offers the speed and consistency of previously existing picking mechanisms, while providing increased control over musical dynamics and articulations.
@inproceedings{YepezPlacencia2019, author = {Placencia, Juan Pablo Yepez and Murphy, Jim and Carnegie, Dale}, title = {Exploring Dynamic Variations for Expressive Mechatronic Chordophones}, pages = {413--418}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673017}, url = {http://www.nime.org/proceedings/2019/nime2019_paper081.pdf} }
-
Dhruv Chauhan and Peter Bennett. 2019. Searching for the Perfect Instrument: Increased Telepresence through Interactive Evolutionary Instrument Design. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 419–422. http://doi.org/10.5281/zenodo.3673019
Download PDF DOIIn this paper, we introduce and explore a novel Virtual Reality musical interaction system (named REVOLVE) that utilises a user-guided evolutionary algorithm to personalise musical instruments to users’ individual preferences. REVOLVE is designed to be an ‘endlessly entertaining’ experience through the potentially infinite number of sounds that can be produced. Our hypothesis is that using evolutionary algorithms with VR for musical interactions will lead to increased user telepresence. In addition to this, REVOLVE was designed to inform novel research into this unexplored area. Think-aloud trials and thematic analysis revealed five main themes: control, comparison to the real world, immersion, general usability and limitations, in addition to practical improvements. Overall, it was found that this combination of technologies did improve telepresence levels, proving the original hypothesis correct.
@inproceedings{Chauhan2019, author = {Chauhan, Dhruv and Bennett, Peter}, title = {Searching for the Perfect Instrument: Increased Telepresence through Interactive Evolutionary Instrument Design}, pages = {419--422}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673019}, url = {http://www.nime.org/proceedings/2019/nime2019_paper082.pdf} }
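A user-guided evolutionary step of the kind REVOLVE builds on can be sketched as: present a small population of synth parameter sets, let the user pick a favourite by ear, then breed the next generation from mutated copies of that favourite. The parameter names below are placeholders, not REVOLVE's actual sound model.

```python
# Minimal interactive-evolution sketch with placeholder parameters.
import random

PARAM_NAMES = ["brightness", "attack", "detune", "reverb"]

def random_patch():
    return {p: random.random() for p in PARAM_NAMES}

def mutate(patch, amount=0.1):
    """Jitter each parameter around the chosen patch, clamped to 0..1."""
    return {p: min(1.0, max(0.0, v + random.gauss(0, amount))) for p, v in patch.items()}

def next_generation(favourite, size=4):
    """Keep the chosen patch and fill the rest with mutated variants."""
    return [favourite] + [mutate(favourite) for _ in range(size - 1)]

population = [random_patch() for _ in range(4)]
chosen = population[2]                     # in VR the user would pick by ear
population = next_generation(chosen)
print(population[0] == chosen, len(population))
```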
-
Richard J Savery, Benjamin Genchel, Jason Brent Smith, Anthony Caulkins, Molly E Jones, and Anna Savery. 2019. Learning from History: Recreating and Repurposing Harriet Padberg’s Computer Composed Canon and Free Fugue. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 423–428. http://doi.org/10.5281/zenodo.3673021
Download PDF DOIHarriet Padberg wrote Computer-Composed Canon and Free Fugue as part of her 1964 dissertation in Mathematics and Music at Saint Louis University. This program is one of the earliest examples of text-to-music software and algorithmic composition, which are areas of great interest in the present-day field of music technology. This paper aims to analyze the technological innovation, aesthetic design process, and impact of Harriet Padberg’s original 1964 thesis as well as the design of a modern recreation and utilization, in order to gain insight into the nature of revisiting older works. Here, we present our open source recreation of Padberg’s program with a modern interface and, through its use as an artistic tool by three composers, show how historical works can be effectively used for new creative purposes in contemporary contexts. Not Even One by Molly Jones draws on the historical and social significance of Harriet Padberg through using her program in a piece about the lack of representation of women judges in composition competitions. Brevity by Anna Savery utilizes the original software design as a composition tool, and The Padberg Piano by Anthony Caulkins uses the melodic generation of the original to create a software instrument.
@inproceedings{Savery2019, author = {Savery, Richard J and Genchel, Benjamin and Smith, Jason Brent and Caulkins, Anthony and Jones, Molly E and Savery, Anna}, title = {Learning from History: Recreating and Repurposing Harriet Padberg's Computer Composed Canon and Free Fugue}, pages = {423--428}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673021}, url = {http://www.nime.org/proceedings/2019/nime2019_paper083.pdf} }
-
Edgar Berdahl, Austin Franklin, and Eric Sheffield. 2019. A Spatially Distributed Vibrotactile Actuator Array for the Fingertips. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 429–430. http://doi.org/10.5281/zenodo.3673023
Download PDF DOIThe design of a Spatially Distributed Vibrotactile Actuator Array (SDVAA) for the fingertips is presented. It provides high-fidelity vibrotactile stimulation at the audio sampling rate. Prior works are discussed, and the system is demonstrated using two music compositions by the authors.
@inproceedings{Berdahl2019, author = {Berdahl, Edgar and Franklin, Austin and Sheffield, Eric}, title = {A Spatially Distributed Vibrotactile Actuator Array for the Fingertips}, pages = {429--430}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673023}, url = {http://www.nime.org/proceedings/2019/nime2019_paper084.pdf} }
-
Jeff Gregorio and Youngmoo Kim. 2019. Augmenting Parametric Synthesis with Learned Timbral Controllers. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 431–436. http://doi.org/10.5281/zenodo.3673025
Download PDF DOIFeature-based synthesis applies machine learning and signal processing methods to the development of alternative interfaces for controlling parametric synthesis algorithms. One approach, geared toward real-time control, uses low dimensional gestural controllers and learned mappings from control spaces to parameter spaces, making use of an intermediate latent timbre distribution, such that the control space affords a spatially-intuitive arrangement of sonic possibilities. Whereas many existing systems present alternatives to the traditional parametric interfaces, the proposed system explores ways in which feature-based synthesis can augment one-to-one parameter control, made possible by fully invertible mappings between control and parameter spaces.
@inproceedings{Gregorio2019, author = {Gregorio, Jeff and Kim, Youngmoo}, title = {Augmenting Parametric Synthesis with Learned Timbral Controllers}, pages = {431--436}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673025}, url = {http://www.nime.org/proceedings/2019/nime2019_paper085.pdf} }
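The "fully invertible mapping" idea can be illustrated with a toy two-dimensional example: an invertible affine transform between a control-pad position and two synthesis parameters, so that edits on either side can be mapped back to the other. The real system learns its mapping from a latent timbre distribution; the matrix below is arbitrary.

```python
# Toy invertible control-to-parameter mapping: params = A @ control + b.
A = [[0.8, 0.3], [-0.2, 0.9]]
b = [0.1, 0.5]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
A_inv = [[ A[1][1] / det, -A[0][1] / det],
         [-A[1][0] / det,  A[0][0] / det]]

def to_params(cx, cy):
    """Forward map: control-pad position -> two synth parameters."""
    return (A[0][0] * cx + A[0][1] * cy + b[0],
            A[1][0] * cx + A[1][1] * cy + b[1])

def to_control(p0, p1):
    """Inverse map: directly edited parameters -> control-pad position."""
    dx, dy = p0 - b[0], p1 - b[1]
    return (A_inv[0][0] * dx + A_inv[0][1] * dy,
            A_inv[1][0] * dx + A_inv[1][1] * dy)

p = to_params(0.25, 0.75)
print(p)               # parameter values for a control-pad position
print(to_control(*p))  # recovers (0.25, 0.75)
```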
-
Sang-won Leigh, Abhinandan Jain, and Pattie Maes. 2019. Exploring Human-Machine Synergy and Interaction on a Robotic Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 437–442. http://doi.org/10.5281/zenodo.3673027
Download PDF DOIThis paper introduces studies conducted with musicians that aim to understand modes of human-robot interaction, situated between automation and human augmentation. Our robotic guitar system used for the study consists of various sound generating mechanisms, either driven by software or by a musician directly. The control mechanism allows the musician to have a varying degree of agency over the overall musical direction. We present interviews and discussions on open-ended experiments conducted with music students and musicians. The outcome of this research includes new modes of playing the guitar given the robotic capabilities, and an understanding of how automation can be integrated into instrument-playing processes. The results present insights into how a human-machine hybrid system can increase the efficacy of training or exploration, without compromising human engagement with a task.
@inproceedings{Leigh2019, author = {Leigh, Sang-won and Jain, Abhinandan and Maes, Pattie}, title = {Exploring Human-Machine Synergy and Interaction on a Robotic Instrument}, pages = {437--442}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673027}, url = {http://www.nime.org/proceedings/2019/nime2019_paper086.pdf} }
-
Sang Won Lee. 2019. Show Them My Screen: Mirroring a Laptop Screen as an Expressive and Communicative Means in Computer Music. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 443–448. http://doi.org/10.5281/zenodo.3673029
Download PDF DOIModern computer music performances often involve a musical instrument that is primarily digital; software runs on a computer, and the physical form of the instrument is the computer. In such a practice, the performance interface is rendered on a computer screen for the performer. There has been a concern in using a laptop as a musical instrument from the audience’s perspective, in that having “a laptop performer sitting behind the screen” makes it difficult for the audience to understand how the performer is creating music. Mirroring a computer screen on a projection screen has been one way to address the concern and reveal the performer’s instrument. This paper introduces and discusses the author’s computer music practice, in which a performer actively considers screen mirroring as an essential part of the performance, beyond visualization of music. In this case, screen mirroring is not complementary, but inevitable from the inception of the performance. The related works listed within explore various roles of screen mirroring in computer music performance and help us understand empirical and logistical findings in such practices.
@inproceedings{Lee2019, author = {Lee, Sang Won}, title = {Show Them My Screen: Mirroring a Laptop Screen as an Expressive and Communicative Means in Computer Music}, pages = {443--448}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673029}, url = {http://www.nime.org/proceedings/2019/nime2019_paper087.pdf} }
-
Josh Urban Davis. 2019. IllumiWear: A Fiber-Optic eTextile for MultiMedia Interactions. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 449–454. http://doi.org/10.5281/zenodo.3673033
Download PDF DOIWe present IllumiWear, a novel eTextile prototype that uses fiber optics as interactive input and visual output. Fiber optic cables are separated into bundles and then woven like a basket into a bendable glowing fabric. By attaching light-emitting diodes to one side of these bundles and photodiode light intensity sensors to the other, loss of light intensity can be measured when the fabric is bent. The sensing technique of IllumiWear is not only able to discriminate between discrete touches, slight bends, and harsh bends, but also to recover the location of deformation. In this way, our computational fabric prototype uses its intrinsic means of visual output (light) as a tool for interactive input. We provide design and implementation details for our prototype as well as a technical evaluation of its effectiveness and limitations as an interactive computational textile. In addition, we examine the potential of this prototype’s interactive capabilities by extending our eTextile to create a tangible user interface for audio and visual manipulation.
@inproceedings{Davis2019, author = {Davis, Josh Urban}, title = {IllumiWear: A Fiber-Optic eTextile for MultiMedia Interactions}, pages = {449--454}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673033}, url = {http://www.nime.org/proceedings/2019/nime2019_paper088.pdf} }
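The sensing principle lends itself to a compact sketch: each fibre bundle has a baseline light intensity, the drop from that baseline classifies the deformation, and the bundle with the largest drop gives its location. The thresholds below are illustrative guesses, not the calibrated values from the paper.

```python
# Illustrative classification of bend severity and location from intensity loss.
BASELINE = [1.00, 0.98, 1.02, 0.99]   # per-bundle intensity with the fabric at rest

def classify(readings):
    losses = [max(0.0, base - r) for base, r in zip(BASELINE, readings)]
    worst = max(losses)
    where = losses.index(worst)        # bundle with the largest intensity drop
    if worst < 0.05:
        kind = "none"
    elif worst < 0.15:
        kind = "touch"
    elif worst < 0.35:
        kind = "slight bend"
    else:
        kind = "harsh bend"
    return kind, where

print(classify([0.99, 0.97, 1.01, 0.98]))   # ('none', 0)
print(classify([0.98, 0.60, 0.95, 0.97]))   # ('harsh bend', 1)
```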
2018
-
Oeyvind Brandtsegg, Trond Engum, and Bernt Isak Wærstad. 2018. Working methods and instrument design for cross-adaptive sessions. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 1–6. http://doi.org/10.5281/zenodo.1302649
Download PDF DOIThis paper explores working methods and instrument design for musical performance sessions (studio and live) where cross-adaptive techniques for audio processing are utilized. Cross-adaptive processing uses feature extraction methods and digital processing to allow the actions of one acoustic instrument to influence the timbre of another. Even though the physical interface for the musician is the familiar acoustic instrument, the musical dimensions controlled with the actions on the instrument have been expanded radically. For this reason, and when used in live performance, the cross-adaptive methods constitute new interfaces for musical expression. Not only does the musician control his or her own instrumental expression, but the instrumental actions directly influence the timbre of another instrument in the ensemble, while their own instrument’s sound is modified by the actions of other musicians. In the present paper we illustrate and discuss some design issues relating to the configuration and composition of such tools for different musical situations. Such configurations include, among other things, the mapping of modulators and the choice of applied effects and processing methods.
@inproceedings{Brandtsegg2018, author = {Brandtsegg, Oeyvind and Engum, Trond and Wærstad, Bernt Isak}, title = {Working methods and instrument design for cross-adaptive sessions}, pages = {1--6}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302649}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0001.pdf} }
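The cross-adaptive idea, one instrument's extracted features modulating another instrument's processing, can be sketched in a few lines. The example below is offline and block-based with synthetic signals, and uses a simple level-ducking effect; the sessions described above use real-time analysis and a wider range of effects and mappings.

```python
# Minimal cross-adaptive sketch: RMS of signal A controls a gain applied to signal B.
import math

def rms(block):
    return math.sqrt(sum(x * x for x in block) / len(block))

def process(vocal_blocks, guitar_blocks):
    out = []
    for v_blk, g_blk in zip(vocal_blocks, guitar_blocks):
        depth = min(1.0, rms(v_blk) * 4.0)   # louder vocal -> stronger effect
        gain = 1.0 - 0.8 * depth             # e.g. duck the guitar's level
        out.append([s * gain for s in g_blk])
    return out

# Two synthetic "instruments": a swelling vocal and a steady guitar tone.
n = 64
vocal = [[0.05 * b * math.sin(2 * math.pi * i / 16) for i in range(n)] for b in range(4)]
guitar = [[0.5 * math.sin(2 * math.pi * i / 8) for i in range(n)] for b in range(4)]
for b, blk in enumerate(process(vocal, guitar)):
    print(f"block {b}: vocal RMS {rms(vocal[b]):.3f} -> guitar peak {max(blk):.3f}")
```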
-
Eran Egozy and Eun Young Lee. 2018. *12*: Mobile Phone-Based Audience Participation in a Chamber Music Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 7–12. http://doi.org/10.5281/zenodo.1302655
Download PDF DOI*12* is a chamber music work composed with the goal of letting audience members have an engaging, individualized, and influential role in live music performance using their mobile phones as custom tailored musical instruments. The goals of direct music making, meaningful communication, intuitive interfaces, and technical transparency led to a design that purposefully limits the number of participating audience members, balances the tradeoffs between interface simplicity and control, and prioritizes the role of a graphics and animation display system that is both functional and aesthetically integrated. Survey results from the audience and stage musicians show a successful and engaging experience, and also illuminate the path towards future improvements.
@inproceedings{Egozy2018, author = {Egozy, Eran and Lee, Eun Young}, title = {*12*: Mobile Phone-Based Audience Participation in a Chamber Music Performance}, pages = {7--12}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302655}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0002.pdf} }
-
Anders Lind. 2018. Animated Notation in Multiple Parts for Crowd of Non-professional Performers. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 13–18. http://doi.org/10.5281/zenodo.1302657
Download PDF DOIThe Max Maestro, an animated music notation system, was developed to enable the exploration of artistic possibilities for composition and performance practices within the field of contemporary art music; more specifically, to enable a large crowd of non-professional performers, regardless of their musical background, to perform a fixed music composition written in multiple individual parts. Furthermore, the Max Maestro was developed to facilitate concert hall performances where non-professional performers could be synchronised with an electronic music part. This paper presents the background, the content and the artistic ideas behind the Max Maestro system and gives two examples of live concert hall performances where the Max Maestro was used. An artistic research approach with an autoethnographic method was adopted for the study. This paper contributes new knowledge to the field of animated music notation.
@inproceedings{Lind2018, author = {Lind, Anders}, title = {Animated Notation in Multiple Parts for Crowd of Non-professional Performers}, pages = {13--18}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302657}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0003.pdf} }
-
Andrew R. Brown, Matthew Horrigan, Arne Eigenfeldt, Toby Gifford, Daniel Field, and Jon McCormack. 2018. Interacting with Musebots. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 19–24. http://doi.org/10.5281/zenodo.1302659
Download PDF DOIMusebots are autonomous musical agents that interact with other musebots to produce music. Inaugurated in 2015, musebots are now an established practice in the field of musical metacreation, which aims to automate aspects of creative practice. Originally musebot development focused on software-only ensembles of musical agents, coded by a community of developers. More recent experiments have explored humans interfacing with musebot ensembles in various ways: including through electronic interfaces in which parametric control of high-level musebot parameters are used; message-based interfaces which allow human users to communicate with musebots in their own language; and interfaces through which musebots have jammed with human musicians. Here we report on the recent developments of human interaction with musebot ensembles and reflect on some of the implications of these developments for the design of metacreative music systems.
@inproceedings{Brown2018, author = {Brown, Andrew R. and Horrigan, Matthew and Eigenfeldt, Arne and Gifford, Toby and Field, Daniel and McCormack, Jon}, title = {Interacting with Musebots}, pages = {19--24}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302659}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0004.pdf} }
-
Chris Kiefer and Cecile Chevalier. 2018. Towards New Modes of Collective Musical Expression through Audio Augmented Reality. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 25–28. http://doi.org/10.5281/zenodo.1302661
Download PDF DOIWe investigate how audio augmented reality can engender new collective modes of musical expression in the context of a sound art installation, ’Listening Mirrors’, exploring the creation of interactive sound environments for musicians and non-musicians alike. ’Listening Mirrors’ is designed to incorporate physical objects and computational systems for altering the acoustic environment, to enhance collective listening and challenge traditional musician-instrument performance. At a formative stage in exploring audio AR technology, we conducted an audience experience study investigating questions around the potential of audio AR in creating sound installation environments for collective musical expression. We collected interview evidence about the participants’ experience and analysed the data using a grounded theory approach. The results demonstrated that the technology has the potential to create immersive spaces where an audience can feel safe to experiment musically, and showed how AR can intervene in sound perception to instrumentalise an environment. The results also revealed caveats about the use of audio AR, mainly centred on social inhibition and seamlessness of experience, and finding a balance between mediated worlds so that there is space for interplay between the two.
@inproceedings{Kiefer2018, author = {Kiefer, Chris and Chevalier, Cecile}, title = {Towards New Modes of Collective Musical Expression through Audio Augmented Reality}, pages = {25--28}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302661}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0005.pdf} }
-
Tomoya Matsuura and kazuhiro jo. 2018. Aphysical Unmodeling Instrument: Sound Installation that Re-Physicalizes a Meta-Wind-Instrument Physical Model, Whirlwind. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 29–30. http://doi.org/10.5281/zenodo.1302663
Download PDF DOIAphysical Unmodeling Instrument is the title of a sound installation that re-physicalizes the Whirlwind meta-wind-instrument physical model. We re-implemented the Whirlwind by using real-world physical objects to comprise a sound installation. The sound propagation between a speaker and microphone was used as the delay, and a paper cylinder was employed as the resonator. This paper explains the concept and implementation of this work at the 2017 HANARART exhibition. We examine the characteristics of the work, address its limitations, and discuss the possibility of its interpretation by means of a “re-physicalization.”
@inproceedings{Matsuura2018, author = {Matsuura, Tomoya and kazuhiro jo}, title = {Aphysical Unmodeling Instrument: Sound Installation that Re-Physicalizes a Meta-Wind-Instrument Physical Model, Whirlwind}, pages = {29--30}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302663}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0006.pdf} }
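For readers who want a software point of comparison: the structure being re-physicalized here (a delay line feeding back through a resonating body) behaves like the generic feedback-delay loop sketched below. This is not the Whirlwind model itself, only an assumed simplification in which the speaker-to-microphone air path plays the role of the delay buffer.

```python
# Generic feedback-delay resonator sketch (assumption; not the Whirlwind model).
import random

def feedback_delay(excitation, delay_samples, feedback=0.95, n_out=2000):
    """Circulate an excitation through a delay line with damping feedback."""
    buf = [0.0] * delay_samples
    out, idx = [], 0
    for n in range(n_out):
        fed_back = buf[idx] * feedback                     # sample from delay_samples ago
        sample = (excitation[n] if n < len(excitation) else 0.0) + fed_back
        buf[idx] = sample
        idx = (idx + 1) % delay_samples
        out.append(sample)
    return out

burst = [random.uniform(-1, 1) for _ in range(50)]
signal = feedback_delay(burst, delay_samples=100)          # ~441 Hz ring at 44.1 kHz
print(max(abs(s) for s in signal[:200]), max(abs(s) for s in signal[-200:]))
```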
-
Ulf A. S. Holbrook. 2018. An approach to stochastic spatialization — A case of Hot Pocket. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 31–32. http://doi.org/10.5281/zenodo.1302665
Download PDF DOIMany common and popular sound spatialisation techniques and methods rely on listeners being positioned in a "sweet spot", the optimal listening position in a circle of speakers. This paper discusses a stochastic spatialisation method and its first iteration as implemented for the exhibition Hot Pocket at The Museum of Contemporary Art in Oslo in 2017. This method is implemented in Max and offers a matrix-based amplitude panning methodology which can provide a flexible means for the spatialisation of sounds.
@inproceedings{Holbrook2018, author = {Holbrook, Ulf A. S.}, title = {An approach to stochastic spatialization --- A case of Hot Pocket}, pages = {31--32}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302665}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0007.pdf} }
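One plausible reading of a matrix-based stochastic panner is sketched below: each input channel receives a randomised, power-normalised gain row across the speaker array, so no fixed sweet spot is implied. This is an assumption about the general approach, not the Max patch used for Hot Pocket.

```python
# Assumed sketch of a stochastic amplitude-panning matrix.
import random

def stochastic_gains(n_speakers, spread=0.5):
    """Random per-speaker gains; smaller 'spread' biases toward focused placement."""
    raw = [random.random() ** (1.0 / max(spread, 1e-6)) for _ in range(n_speakers)]
    norm = sum(g * g for g in raw) ** 0.5
    return [g / norm for g in raw]

# One row of the routing matrix per input channel, e.g. 3 sounds over 8 speakers.
matrix = [stochastic_gains(8, spread=0.3) for _ in range(3)]
for row in matrix:
    print([round(g, 2) for g in row], "power =", round(sum(g * g for g in row), 2))
```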
-
Cory Champion and Mo H Zareei. 2018. AM MODE: Using AM and FM Synthesis for Acoustic Drum Set Augmentation. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 33–34. http://doi.org/10.5281/zenodo.1302667
Download PDF DOIAM MODE is a custom-designed software interface for electronic augmentation of the acoustic drum set. The software is used in the development of a series of recordings, similarly titled AM MODE. Programmed in Max/MSP, the software uses live audio input from individual instruments within the drum set as control parameters for modulation synthesis. By using a combination of microphones and MIDI triggers, audio signal features such as the velocity of the strike of the drum, or the frequency at which the drum resonates, are tracked, interpolated, and scaled to user specifications. The resulting series of recordings comprises the digitally generated output of the modulation engine, in addition to both raw and modulated signals from the acoustic drum set. In this way, this project explores drum set augmentation not only at the input and from a performative angle, but also at the output, where the acoustic and the synthesized elements are merged into each other, forming a sonic hybrid.
@inproceedings{Champion2018, author = {Champion, Cory and Zareei, Mo H}, title = {AM MODE: Using AM and FM Synthesis for Acoustic Drum Set Augmentation}, pages = {33--34}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302667}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0008.pdf} }
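A minimal version of the feature-to-modulation mapping might look like the sketch below: a tracked hit velocity and resonant frequency (assumed to come from the MIDI triggers and pitch tracking) set the depth and rate of an amplitude-modulated tone. All parameter ranges are illustrative, not those used in the recordings.

```python
# Illustrative only: feature values are assumed inputs, ranges are guesses.
import math

SR = 44100  # sample rate in Hz

def am_tone(carrier_hz: float, mod_hz: float, mod_depth: float, dur: float = 0.25):
    """Classic amplitude modulation: (1 + depth*sin(mod)) * sin(carrier)."""
    n = int(SR * dur)
    return [
        (1.0 + mod_depth * math.sin(2 * math.pi * mod_hz * t / SR))
        * math.sin(2 * math.pi * carrier_hz * t / SR)
        for t in range(n)
    ]

def on_drum_hit(velocity: float, resonant_hz: float):
    """velocity in 0..1 from the trigger; resonant_hz from pitch tracking."""
    depth = 0.2 + 0.8 * velocity      # harder hits modulate more deeply
    mod_rate = resonant_hz / 8.0      # modulation rate follows the drum's tuning
    return am_tone(carrier_hz=2 * resonant_hz, mod_hz=mod_rate, mod_depth=depth)

samples = on_drum_hit(velocity=0.9, resonant_hz=180.0)
print(len(samples), round(max(samples), 3))
```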
-
Don Derek Haddad and Joe Paradiso. 2018. Kinesynth: Patching, Modulating, and Mixing a Hybrid Kinesthetic Synthesizer. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 35–36. http://doi.org/10.5281/zenodo.1302669
Download PDF DOIThis paper introduces the Kinesynth, a hybrid kinesthetic synthesizer that uses the human body as both an analog mixer and a modulator, using a combination of capacitive sensing in "transmit" mode and skin conductance. This is achieved when the body, through the skin, relays signals from control & audio sources to the inputs of the instrument. These signals can be harnessed from the environment, from within the Kinesynth’s internal synthesizer, or from an external instrument, making the Kinesynth a mediator between the body and the environment.
@inproceedings{Haddad2018, author = {Haddad, Don Derek and Paradiso, Joe}, title = {Kinesynth: Patching, Modulating, and Mixing a Hybrid Kinesthetic Synthesizer.}, pages = {35--36}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302669}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0009.pdf} }
-
Riccardo Marogna. 2018. CABOTO: A Graphic-Based Interactive System for Composing and Performing Electronic Music. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 37–42. http://doi.org/10.5281/zenodo.1302671
Download PDF DOICABOTO is an interactive system for live performance and composition. A graphic score sketched on paper is read by a computer vision system. The graphic elements are scanned following a symbolic-raw hybrid approach, that is, they are recognised and classified according to their shapes but also scanned as waveforms and optical signals. All this information is mapped into the synthesis engine, which implements different kinds of synthesis techniques for different shapes. In CABOTO the score is viewed as a cartographic map explored by some navigators. These navigators traverse the score in a semi-autonomous way, scanning the graphic elements found along their paths. The system tries to challenge the boundaries between the concepts of composition, score, performance, and instrument, since the musical result will depend both on the composed score and on the way the navigators traverse it during the live performance.
@inproceedings{Marogna2018, author = {Marogna, Riccardo}, title = {CABOTO: A Graphic-Based Interactive System for Composing and Performing Electronic Music}, pages = {37--42}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302671}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0010.pdf} }
-
Gustavo Oliveira da Silveira. 2018. The XT Synth: A New Controller for String Players. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 43–44. http://doi.org/10.5281/zenodo.1302673
Download PDF DOIThis paper describes the concept, design, and realization of two iterations of a new controller called the XT Synth. The development of the instrument came from the desire to maintain the expressivity and familiarity of string instruments, while adding the flexibility and power usually found in keyboard controllers. There are different examples of instruments that bring the physicality and expressiveness of acoustic instruments into electronic music, from “Do it yourself” (DIY) products to commercially available ones. This paper discusses the process and the challenges faced when creating a DIY musical instrument and then subsequently transforming the instrument into a product suitable for commercialization.
@inproceedings{Oliveira2018, author = {Oliveira da Silveira, Gustavo}, title = {The XT Synth: A New Controller for String Players}, pages = {43--44}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302673}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0011.pdf} }
-
S. M. Astrid Bin, Nick Bryan-Kinns, and Andrew P. McPherson. 2018. Risky business: Disfluency as a design strategy. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 45–50. http://doi.org/10.5281/zenodo.1302675
Download PDF DOIThis paper presents a study examining the effects of disfluent design on audience perception of digital musical instrument (DMI) performance. Disfluency, defined as a barrier to effortless cognitive processing, has been shown to generate better results in some contexts as it engages higher levels of cognition. We were motivated to determine if disfluent design in a DMI would result in a risk state that audiences would be able to perceive, and if this would have any effect on their evaluation of the performance. A DMI was produced that incorporated a disfluent characteristic: It would turn itself off if not constantly moved. Six physically identical instruments were produced, each in one of three versions: Control (no disfluent characteristics), mild disfluency (turned itself off slowly), and heightened disfluency (turned itself off more quickly). 6 percussionists each performed on one instrument for a live audience (N=31), and data was collected in the form of real-time feedback (via a mobile phone app), and post-hoc surveys. Though there was little difference in ratings of enjoyment between the versions of the instrument, the real-time and qualitative data suggest that disfluent behaviour in a DMI may be a way for audiences to perceive and appreciate performer skill.
@inproceedings{Bin2018, author = {Bin, S. M. Astrid and Bryan-Kinns, Nick and McPherson, Andrew P.}, title = {Risky business: Disfluency as a design strategy}, pages = {45--50}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302675}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0012.pdf} }
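The disfluent characteristic described above reduces to a small piece of logic: silence the instrument when movement stays below a threshold for longer than a timeout, with the timeout length distinguishing the mild and heightened versions. The threshold and timing values below are hypothetical.

```python
# Sketch of a motion watchdog for a deliberately disfluent instrument.
import time

class MotionWatchdog:
    def __init__(self, timeout_s: float, threshold: float = 0.15):
        self.timeout_s = timeout_s          # shorter timeout = more disfluent
        self.threshold = threshold          # minimum motion magnitude that counts
        self.last_motion = time.monotonic()

    def update(self, motion_magnitude: float) -> bool:
        """Feed a motion reading; returns True while the instrument stays on."""
        now = time.monotonic()
        if motion_magnitude > self.threshold:
            self.last_motion = now
        return (now - self.last_motion) < self.timeout_s

mild = MotionWatchdog(timeout_s=4.0)        # turns itself off slowly
heightened = MotionWatchdog(timeout_s=1.0)  # turns itself off quickly
print(mild.update(0.02), heightened.update(0.4))
```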
-
Rachel Gibson. 2018. The Theremin Textural Expander. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 51–52. http://doi.org/10.5281/zenodo.1302527
Download PDF DOIThe voice of the theremin is more than just a simple sine wave. Its unique sound is made through two radio frequency oscillators that, when operating at almost identical frequencies, gravitate towards each other. Ultimately, this pull alters the sine wave, creating the signature sound of the theremin. The Theremin Textural Expander (TTE) explores other textures the theremin can produce when its sound is processed and manipulated through a Max/MSP patch and controlled via a MIDI pedalboard. The TTE extends the theremin’s ability, enabling it to produce five distinct new textures beyond the original. It also features a looping system that the performer can use to layer textures created with the traditional theremin sound. Ultimately, this interface introduces a new way to play and experience the theremin; it extends its expressivity, affording a greater range of compositional possibilities and greater flexibility in free improvisation contexts.
@inproceedings{Gibson2018, author = {Gibson, Rachel}, title = {The Theremin Textural Expander}, pages = {51--52}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302527}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0013.pdf} }
-
Mert Toka, Can Ince, and Mehmet Aydin Baytas. 2018. Siren: Interface for Pattern Languages. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 53–58. http://doi.org/10.5281/zenodo.1302677
Download PDF DOIThis paper introduces Siren, a hybrid system for algorithmic composition and live-coding performances. Its hierarchical structure allows small modifications to propagate and aggregate on lower levels for dramatic changes in the musical output. It uses functional programming language TidalCycles as the core pattern creation environment due to its inherent ability to create complex pattern relations with minimal syntax. Borrowing the best from TidalCycles, Siren augments the pattern creation process by introdu