Proceedings Archive
This page contains a list of all publications from the NIME conferences.
- Peer review: All papers have been peer-reviewed (most often by three international experts). See the list of reviewers. Only papers that were presented at the conferences (as a presentation, poster, or demo) are included.
- Open access: NIME papers are open access (gold), and the copyright remains with the author(s). The NIME archive uses the Creative Commons Attribution 4.0 International License (CC BY 4.0).
- Public domain: The bibliographic information for NIME, including all BibTeX information and abstracts, is public domain. The list below is generated from a collection of BibTeX files hosted on GitHub using Jekyll Scholar (a minimal parsing sketch follows this list).
- PDFs: Individual papers are linked for each entry below. All PDFs are archived separately in Zenodo, and there are also Zip files for each year in Zenodo. If you just want to download everything quickly, you can find the Zip files here as well.
- ISSN for the proceedings series: ISSN 2220-4806. Each year’s ISBN is included in the BibTeX files and is also listed here.
- Impact factor: Academic work should always be considered in its own right (cf. the DORA declaration). That said, the NIME proceedings are generally ranked highly in, for example, the Google Scholar ranking.
- Ethics: Please take a look at NIME’s Publication ethics and malpractice statement.
- Contact: If you find any errors in the database, please feel free to fork and modify at GitHub, or add an issue in the tracker.
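If you prefer to work with the database directly rather than through Jekyll Scholar, the entries can be read with any standard BibTeX parser. Below is a minimal sketch in Python, assuming a local copy of the .bib files and the third-party package bibtexparser (v1.x); the file name nime2020.bib is only a placeholder, not the repository’s actual layout.

    # Minimal sketch: print the entries of one NIME BibTeX file in reverse
    # chronological order, mirroring the listing on this page.
    # Assumes a local .bib file and the "bibtexparser" (v1.x) package;
    # the file name below is a placeholder, not the repository's actual layout.
    import bibtexparser

    with open("nime2020.bib", encoding="utf-8") as bibfile:
        database = bibtexparser.load(bibfile)

    # Sort newest first, as on this page.
    entries = sorted(database.entries, key=lambda e: e.get("year", ""), reverse=True)
    for entry in entries:
        print(f'{entry.get("author", "?")} ({entry.get("year", "?")}). '
              f'{entry.get("title", "?")}, pp. {entry.get("pages", "?")}.')
        print(f'  ISSN {entry.get("issn", "n/a")} | PDF: {entry.get("url", "n/a")}')

The field names match those in the BibTeX entries shown below, so the same approach can be used to pull out, for example, each year’s ISBN or the presentation-video links.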
NIME publications (in reverse chronological order)
2020
-
Ruolun Weng. 2020. Interactive Mobile Musical Application using faust2smartphone. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 1–4.
Download PDF
We introduce faust2smartphone, a tool to generate an edit-ready project for musical mobile applications, connecting the Faust programming language with mobile application development. It is an extended implementation of faust2api. Faust DSP objects can be easily embedded as a high-level API so that developers can access various functions and elements across different mobile platforms. This paper provides several modes and technical details on the structures and implementation of this system, as well as some applications and future directions for this tool.
@inproceedings{NIME20_0, author = {Weng, Ruolun}, title = {Interactive Mobile Musical Application using faust2smartphone}, pages = {1--4}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper0.pdf} }
-
John Sullivan, Julian Vanasse, Catherine Guastavino, and Marcelo Wanderley. 2020. Reinventing the Noisebox: Designing Embedded Instruments for Active Musicians. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 5–10.
Download PDF
This paper reports on the user-driven redesign of an embedded digital musical instrument that has yielded a trio of new instruments, informed by early user feedback and co-design workshops organized with active musicians. Collectively, they share a stand-alone design, digitally fabricated enclosures, and a common sensor acquisition and sound synthesis architecture, yet each is unique in its playing technique and sonic output. We focus on the technical design of the instruments and provide examples of key design specifications that were derived from user input, while reflecting on the challenges to, and opportunities for, creating instruments that support active practices of performing musicians.
@inproceedings{NIME20_1, author = {Sullivan, John and Vanasse, Julian and Guastavino, Catherine and Wanderley, Marcelo}, title = {Reinventing the Noisebox: Designing Embedded Instruments for Active Musicians}, pages = {5--10}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper1.pdf}, presentation-video = {https://youtu.be/DUMXJw-CTVo} }
-
Darrell J Gibson and Richard Polfreman. 2020. Star Interpolator – A Novel Visualization Paradigm for Graphical Interpolators. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 49–54.
Download PDF
This paper presents a new visualization paradigm for graphical interpolation systems, known as Star Interpolation, that has been specifically created for sound design applications. Through the presented investigation of previous visualizations, it becomes apparent that the existing visuals in this class of system generally relate to the interpolation model that determines the weightings of the presets and not the sonic output. The Star Interpolator looks to resolve this deficiency by providing visual cues that relate to the parameter space. Through comparative exploration it has been found that this visualization provides a number of benefits over the previous systems. It is also shown that hybrid visualizations can be generated that combine the benefits of the new visualization with the existing interpolation models. These can then be accessed by using an Interactive Visualization (IV) approach. The results from our exploration of these visualizations are encouraging and they appear to be advantageous when using the interpolators for sound design tasks. Therefore, it is proposed that formal usability testing is undertaken to measure the potential value of this form of visualization.
@inproceedings{NIME20_10, author = {Gibson, Darrell J and Polfreman, Richard}, title = {Star Interpolator – A Novel Visualization Paradigm for Graphical Interpolators}, pages = {49--54}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper10.pdf}, presentation-video = {https://youtu.be/3ImRZdSsP-M} }
-
Laurel S Pardue, Miguel Ortiz, Maarten van Walstijn, Paul Stapleton, and Matthew Rodger. 2020. Vodhrán: collaborative design for evolving a physical model and interface into a proto-instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 523–524.
Download PDF
This paper reports on the process of development of a virtual-acoustic proto-instrument, Vodhrán, based on a physical model of a plate, within a musical performance-driven ecosystemic environment. Performers explore the plate model via tactile interaction through a Sensel Morph interface, chosen to allow damping and localised striking consistent with playing hand percussion. Through an iteration of prototypes, we have designed an embedded proto-instrument that allows a bodily interaction between the performer and the virtual-acoustic plate in a way that redirects from the perception of the Sensel as a touchpad and reframes it as a percussive surface. Due to the computational effort required to run such a rich physical model and the necessity to provide a natural interaction, the audio processing is implemented on a high powered single board computer. We describe the design challenges and report on the technological solutions we have found in the implementation of Vodhrán which we believe are valuable to the wider NIME community.
@inproceedings{NIME20_100, author = {Pardue, Laurel S and Ortiz, Miguel and van Walstijn, Maarten and Stapleton, Paul and Rodger, Matthew}, title = {Vodhrán: collaborative design for evolving a physical model and interface into a proto-instrument}, pages = {523--524}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper100.pdf} }
-
Satvik Venkatesh, Edward Braund, and Eduardo Miranda. 2020. Designing Brain-computer Interfaces for Sonic Expression. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 525–530.
Download PDF
Brain-computer interfaces (BCIs) are beneficial for patients who are suffering from motor disabilities because they offer a means of creative expression, which improves mental well-being. BCIs aim to establish a direct communication medium between the brain and the computer. Therefore, unlike conventional musical interfaces, they do not require muscular power. This paper explores the potential of building sound synthesisers with BCIs that are based on steady-state visually evoked potential (SSVEP). It investigates novel ways to enable patients with motor disabilities to express themselves. It presents a new concept called sonic expression: expressing oneself purely through the synthesis of sound. It introduces new layouts and designs for BCI-based sound synthesisers, and the limitations of these interfaces are discussed. An evaluation of different sound synthesis techniques is conducted to find an appropriate one for such systems. Synthesis techniques are evaluated and compared based on a framework governed by sonic expression.
@inproceedings{NIME20_101, author = {Venkatesh, Satvik and Braund, Edward and Miranda, Eduardo}, title = {Designing Brain-computer Interfaces for Sonic Expression}, pages = {525--530}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper101.pdf} }
-
Duncan A.H. Williams, Bruno Fazenda, Victoria J. Williamson, and Gyorgy Fazekas. 2020. Biophysiologically synchronous computer generated music improves performance and reduces perceived effort in trail runners. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 531–536.
Download PDF
Music has previously been shown to be beneficial in improving runners’ performance in treadmill-based experiments. This paper evaluates a generative music system, HEARTBEATS, designed to create biosignal-synchronous music in real-time according to an individual athlete’s heart-rate or cadence (steps per minute). The tempo, melody, and timbral features of the generated music are modulated according to biosensor input from each runner using a wearable Bluetooth sensor. We compare the relative performance of athletes listening to heart-rate and cadence synchronous music, across a randomized trial (N=57) on a trail course with 76 ft of elevation. Participants were instructed to continue until perceived effort exceeded 18 on the Borg rating of perceived exertion scale. We found that cadence-synchronous music improved performance and decreased perceived effort in male runners, and improved performance but not perceived effort in female runners, in comparison to heart-rate synchronous music. This work has implications for the future design and implementation of novel portable music systems and for music-assisted coaching.
@inproceedings{NIME20_102, author = {Williams, Duncan A.H. and Fazenda, Bruno and Williamson, Victoria J. and Fazekas, Gyorgy}, title = {Biophysiologically synchronous computer generated music improves performance and reduces perceived effort in trail runners}, pages = {531--536}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper102.pdf} }
-
Gilberto Bernardes and Gilberto Bernardes. 2020. Interfacing Sounds: Hierarchical Audio-Content Morphologies for Creative Re-purposing in earGram 2.0. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 537–542.
Download PDF
Audio content-based processing has become a pervasive methodology for techno-fluent musicians. System architectures typically create thumbnail audio descriptions, based on signal processing methods, to visualize, retrieve and transform musical audio efficiently. Towards enhanced usability of these descriptor-based frameworks for the music community, the paper advances a minimal content-based audio description scheme, rooted in primary musical notation attributes at the threefold sound object, meso and macro hierarchies. Multiple perceptually-guided viewpoints from rhythmic, harmonic, timbral and dynamic attributes define a discrete and finite alphabet with minimal formal and subjective assumptions using unsupervised and user-guided methods. The Factor Oracle automaton is then adopted to model and visualize temporal morphology. The generative musical applications enabled by the descriptor-based framework at multiple structural hierarchies are discussed.
@inproceedings{NIME20_103, author = {Bernardes, Gilberto and Bernardes, Gilberto}, title = {Interfacing Sounds: Hierarchical Audio-Content Morphologies for Creative Re-purposing in earGram 2.0}, pages = {537--542}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper103.pdf}, presentation-video = {https://youtu.be/zEg9Cpir8zA} }
-
Joung Min Han and Yasuaki Kakehi. 2020. ParaSampling: A Musical Instrument with Handheld Tapehead Interfaces for Impromptu Recording and Playing on a Magnetic Tape. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 543–544.
Download PDF
For a long time, magnetic tape has been commonly utilized as one of the physical media for recording and playing music. In this research, we propose a novel interactive musical instrument called ParaSampling that utilizes the technology of magnetic sound recording, and an improvisational sound-playing method based on the instrument. While a conventional cassette tape player has a single tapehead, which is rigidly placed, our instrument utilizes multiple handheld tapehead modules as an interface. Players can hold the interfaces and press them against the rotating magnetic tape at any point to record or reproduce sounds. The player can also easily erase and rewrite the sound recorded on the tape. With this instrument, they can achieve improvised and unique musical expressions through tangible and spatial interactions. In this paper, we describe the system design of ParaSampling, the implementation of the prototype system, and discuss the musical expressions enabled by the system.
@inproceedings{NIME20_104, author = {Han, Joung Min and Kakehi, Yasuaki}, title = {ParaSampling: A Musical Instrument with Handheld Tapehead Interfaces for Impromptu Recording and Playing on a Magnetic Tape}, pages = {543--544}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper104.pdf} }
-
Giorgos Filandrianos, Natalia Kotsani, Edmund G Dervakos, Giorgos Stamou, Vaios Amprazis, and Panagiotis Kiourtzoglou. 2020. Brainwaves-driven Effects Automation in Musical Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 545–546.
Download PDF
A variety of controllers with multifarious sensors and functions have maximized performers’ real-time control capabilities. The idea behind this project was to create an interface which enables interaction between the performers and the effect processor by measuring their brain wave amplitudes (e.g., alpha, beta, theta, delta and gamma), not necessarily with the user’s awareness. We achieved this by using an electroencephalography (EEG) sensor to detect the performer’s different emotional states and, based on these, sending MIDI messages to automate digital processing units. The aim is to create a new generation of digital processor units that could be automatically configured in real-time given the emotions or thoughts of the performer or the audience. By introducing emotional state information into the real-time control of several aspects of artistic expression, we highlight the impact of surprise and uniqueness in the artistic performance.
@inproceedings{NIME20_105, author = {Filandrianos, Giorgos and Kotsani, Natalia and Dervakos, Edmund G and Stamou, Giorgos and Amprazis, Vaios and Kiourtzoglou, Panagiotis}, title = {Brainwaves-driven Effects Automation in Musical Performance}, pages = {545--546}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper105.pdf} }
-
Graham Wakefield, Michael Palumbo, and Alexander Zonta. 2020. Affordances and Constraints of Modular Synthesis in Virtual Reality. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 547–550.
Download PDF
This article focuses on the rich potential of hybrid domain translation of modular synthesis (MS) into virtual reality (VR). It asks: to what extent can what is valued in studio-based MS practice find a natural home or rich new interpretations in the immersive capacities of VR? The article attends particularly to the relative affordances and constraints of each as they inform the design and development of a new system ("Mischmasch") supporting collaborative and performative patching of Max gen patches and operators within a shared room-scale VR space.
@inproceedings{NIME20_106, author = {Wakefield, Graham and Palumbo, Michael and Zonta, Alexander}, title = {Affordances and Constraints of Modular Synthesis in Virtual Reality}, pages = {547--550}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper106.pdf} }
-
emmanouil moraitis. 2020. Symbiosis: a biological taxonomy for modes of interaction in dance-music collaborations. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 551–556.
Download PDF
Focusing on interactive performance works borne out of dancer-musician collaborations, this paper investigates the relationship between the mediums of sound and movement through a conceptual interpretation of the biological phenomenon of symbiosis. Describing the close and persistent interactions between organisms of different species, symbioses manifest across a spectrum of relationship types, each identified according to the health effect experienced by the engaged organisms. This biological taxonomy is appropriated within a framework which identifies specific modes of interaction between sound and movement according to the collaborating practitioners’ intended outcome, required provisions, cognition of affect, and system operation. Using the symbiotic framework as an analytical tool, six dancer-musician collaborations from the field of NIME are examined with respect to the employed modes of interaction within each of the four examined areas. The findings reveal the emergence of multiple modes in each work, as well as examples of mutation between different modes over the course of a performance. Furthermore, the symbiotic concept provides a novel understanding of the ways gesture recognition technologies (GRTs) have redefined the relationship dynamics between dancers and musicians, and suggests a more efficient and inclusive approach in communicating the potential and limitations presented by Human-Computer Interaction tools.
@inproceedings{NIME20_107, author = {moraitis, emmanouil}, title = {Symbiosis: a biological taxonomy for modes of interaction in dance-music collaborations}, pages = {551--556}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper107.pdf}, presentation-video = {https://youtu.be/5X6F_nL8SOg} }
-
Antonella Nonnis and Nick Bryan-Kinns. 2020. Όλοι: music making to scaffold social playful activities and self-regulation. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 557–558.
Download PDF
We present Olly, a musical textile tangible user interface (TUI) designed around observations of a group of five children with autism who like music. The intention is to support scaffolding of social interactions and sensory regulation during a semi-structured and open-ended playful activity. Olly was tested in the dance studio of a special education needs (SEN) school in North-East London, UK, over a period of 5 weeks, every Thursday afternoon for 30 minutes. Olly uses one Bare Touch Board in MIDI mode and four stretch analog sensors embedded inside four elastic ribbons. These ribbons top the main body of the installation, which is made from an inflatable gym ball wrapped in felt. Each of the ribbons plays a different instrument and triggers different harmonic chords. Olly allows players to produce pleasant melodies when interacting with it solo, and more complex harmonies when playing together with others. Results show great potential for carefully designed musical TUIs aimed at scaffolding social play while affording self-regulation in SEN contexts. We present a brief introduction to the background and motivations, design considerations and results.
@inproceedings{NIME20_108, author = {Nonnis, Antonella and Bryan-Kinns, Nick}, title = {Όλοι: music making to scaffold social playful activities and self-regulation}, pages = {557--558}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper108.pdf} }
-
Sara Sithi-Amnuai. 2020. Exploring Identity Through Design: A Focus on the Cultural Body Via Nami. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 559–563.
Download PDF
Identity is inextricably linked to culture and sustained through the creation and performance of music and dance, yet the agency and cultural tools informing the design and performance application of gestural controllers are not widely discussed. The purpose of this paper is to discuss the cultural body, its consideration in existing gestural controller design, and how cultural design methods have the potential to extend musical/social identities and/or traditions within a technological context. In an effort to connect and reconnect with the author’s personal Nikkei heritage, this paper will discuss the design of Nami – a custom-built gestural controller – and its applicability to extend the author’s cultural body through a community-centric case study performance.
@inproceedings{NIME20_109, author = {Sithi-Amnuai, Sara}, title = {Exploring Identity Through Design: A Focus on the Cultural Body Via Nami}, pages = {559--563}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper109.pdf}, presentation-video = {https://youtu.be/QCUGtE_z1LE} }
-
Anna Xambó and Gerard Roma. 2020. Performing Audiences: Composition Strategies for Network Music using Mobile Phones. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 55–60.
Download PDF
With the development of web audio standards, it has quickly become technically easy to develop and deploy software for inviting audiences to participate in musical performances using their mobile phones. Thus, a new audience-centric musical genre has emerged, which aligns with artistic manifestations where there is an explicit inclusion of the public (e.g. participatory art, cinema or theatre). Previous research has focused on analysing this new genre from historical, social organisation and technical perspectives. This follow-up paper contributes reflections on technical and aesthetic aspects of composing within this audience-centric approach. We propose a set of 13 composition dimensions that deal with the role of the performer, the role of the audience, the location of sound and the type of feedback, among others. From a reflective approach, four participatory pieces developed by the authors are analysed using the proposed dimensions. Finally, we discuss a set of recommendations and challenges for the composers-developers of this new and promising musical genre. The paper concludes by discussing the implications of this research for the NIME community.
@inproceedings{NIME20_11, author = {Xambó, Anna and Roma, Gerard}, title = {Performing Audiences: Composition Strategies for Network Music using Mobile Phones}, pages = {55--60}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper11.pdf} }
-
Joe Wright. 2020. The Appropriation and Utility of Constrained ADMIs. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 564–569.
Download PDF
This paper reflects on players’ first responses to a constrained Accessible Digital Musical Instrument (ADMI) in open, child-led sessions with seven children at a special school. Each player’s gestures with the instrument were sketched, categorised and compared with those of others among the group. Additionally, sensor data from the instruments was recorded and analysed to give a secondary indication of playing style, based on note and silence durations. In accord with previous studies, the high degree of constraints led to a diverse range of playing styles, allowing each player to appropriate and explore the instruments within a short inaugural session. The open, undirected sessions also provided insights which could potentially direct future work based on each person’s responses to the instruments. The paper closes with a short discussion of these diverse styles, and the potential role constrained ADMIs could serve as ’ice-breakers’ in musical projects that seek to co-produce or co-design with neurodiverse children and young people.
@inproceedings{NIME20_110, author = {Wright, Joe}, title = {The Appropriation and Utility of Constrained ADMIs}, pages = {564--569}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper110.pdf}, presentation-video = {https://youtu.be/RhaIzCXQ3uo} }
-
Lia Mice and Andrew McPherson. 2020. From miming to NIMEing: the development of idiomatic gestural language on large scale DMIs. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 570–575.
Download PDF
When performing with new instruments, musicians often develop new performative gestures and playing techniques. Music performance studies on new instruments often consider interfaces that feature a spectrum of gestures similar to already existing sound production techniques. This paper considers the choices performers make when creating an idiomatic gestural language for an entirely unfamiliar instrument. We designed a musical interface with a unique large-scale layout to encourage new performers to create fully original instrument-body interactions. We conducted a study where trained musicians were invited to perform one of two versions of the same instrument, each physically identical but with a different tone mapping. The study results reveal insights into how musicians develop novel performance gestures when encountering a new instrument characterised by an unfamiliar shape and size. Our discussion highlights the impact of an instrument’s scale and layout on the emergence of new gestural vocabularies and on the qualities of the music performed.
@inproceedings{NIME20_111, author = {Mice, Lia and McPherson, Andrew}, title = {From miming to NIMEing: the development of idiomatic gestural language on large scale DMIs}, pages = {570--575}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper111.pdf}, presentation-video = {https://youtu.be/mnJN8ELneUU} }
-
William C Payne, Ann Paradiso, and Shaun Kane. 2020. Cyclops: Designing an eye-controlled instrument for accessibility and flexible use. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 576–580.
Download PDF
The Cyclops is an eye-gaze controlled instrument designed for live performance and improvisation. It is primarily motivated by a need for expressive musical instruments that are more easily accessible to people who rely on eye trackers for computer access, such as those with amyotrophic lateral sclerosis (ALS). In its current implementation, the Cyclops contains a synthesizer and sequencer, and provides the ability to easily create and automate musical parameters and effects through recording eye-gaze gestures on a two-dimensional canvas. In this paper, we frame our prototype in the context of previous eye-controlled instruments, and we discuss how we designed the Cyclops to make gaze-controlled music making as fun, accessible, and seamless as possible despite notable interaction challenges like latency, inaccuracy, and “Midas Touch.”
@inproceedings{NIME20_112, author = {Payne, William C and Paradiso, Ann and Kane, Shaun}, title = {Cyclops: Designing an eye-controlled instrument for accessibility and flexible use}, pages = {576--580}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper112.pdf}, presentation-video = {https://youtu.be/G6dxngoCx60} }
-
Adnan Marquez-Borbon. 2020. Collaborative Learning with Interactive Music Systems. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 581–586.
Download PDF
This paper presents the results of an observational study focusing on the collaborative learning processes of a group of performers with an interactive musical system. The main goal of this study was to implement methods for learning and developing practice with these technological objects in order to generate future pedagogical methods. During the research period of six months, four participants regularly engaged in workshop-type scenarios where learning objectives were proposed and guided by themselves. The principal researcher, working as participant-observer, did not impose or prescribe learning objectives to the other members of the group. Rather, all participants had equal say in what was to be done and how it was to be accomplished. Results show that the group learning environment is rich in opportunities for learning, mutual teaching, and for establishing a communal practice for a given interactive musical system. Key findings suggest that learning by demonstration, observation and modelling are significant for learning in this context. Additionally, it was observed that a dialogue and a continuous flow of information between the members of the community are needed in order to motivate and further their learning.
@inproceedings{NIME20_113, author = {Marquez-Borbon, Adnan}, title = {Collaborative Learning with Interactive Music Systems}, pages = {581--586}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper113.pdf}, presentation-video = {https://youtu.be/1G0bOVlWwyI} }
-
Jens Vetter. 2020. WELLE - a web-based music environment for the blind. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 587–590.
Download PDF
This paper presents WELLE, a web-based music environment for blind people, and describes its development, design, notation syntax and first experiences. WELLE is intended to serve as a collaborative, performative and educational tool to quickly create and record musical ideas. It is pattern-oriented, based on textual notation and focuses on accessibility, playful interaction and ease of use. WELLE was developed as part of the research project Tangible Signals and will also serve as a platform for the integration of upcoming new interfaces.
@inproceedings{NIME20_114, author = {Vetter, Jens}, title = {WELLE - a web-based music environment for the blind}, pages = {587--590}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper114.pdf} }
-
Margarida Pessoa, Cláudio Parauta, Pedro Luís, Isabela Corintha, and Gilberto Bernardes. 2020. Examining Temporal Trends and Design Goals of Digital Music Instruments for Education in NIME: A Proposed Taxonomy. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 591–595.
Download PDF
This paper presents an overview of the design principles behind Digital Music Instruments (DMIs) for education across all editions of the International Conference on New Interfaces for Musical Expression (NIME). We compiled a comprehensive catalogue of over a hundred DMIs with varying degrees of applicability in educational practice. Each catalogue entry is annotated according to a proposed taxonomy for DMIs for education, rooted in the mechanics of control, mapping and feedback of an interactive music system, along with the required expertise of target user groups and the instrument learning curve. Global statistics unpack underlying trends and design goals across the chronological period of the NIME conference. In recent years, we note a growing number of DMIs targeting non-experts and with reduced requirements in terms of expertise. Stemming from the identified trends, we discuss future challenges in the design of DMIs for education towards enhanced degrees of variation and unpredictability.
@inproceedings{NIME20_115, author = {Pessoa, Margarida and Parauta, Cláudio and Luís, Pedro and Corintha, Isabela and Bernardes, Gilberto}, title = {Examining Temporal Trends and Design Goals of Digital Music Instruments for Education in NIME: A Proposed Taxonomy}, pages = {591--595}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper115.pdf} }
-
Laurel S Pardue, Kuljit Bhamra, Graham England, Phil Eddershaw, and Duncan Menzies. 2020. Demystifying tabla through the development of an electronic drum. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 596–599.
Download PDF
The tabla is a traditional pitched two-piece Indian drum set, popular not only within South East Asian music, but whose sounds also regularly feature in western music. Yet tabla remains an aural tradition, taught largely through a guru system heavy in custom and mystique. Tablas can also pose problems for school and professional performance environments as they are physically bulky, fragile, and reactive to environmental factors such as damp and heat. As part of a broader project to demystify tabla, we present an electronic tabla that plays nearly identically to an acoustic tabla and was created in order to make the tabla accessible and practical for a wider audience of students, professional musicians and composers. Along with development of standardised tabla notation and instructional educational aids, the electronic tabla is designed to be compact, robust, easily tuned, and its electronic nature allows for scoring tabla through playing. Further, used as an interface, it allows the use of learned tabla technique to control other percussive sounds. We also discuss the technological approaches used to accurately capture the localized multi-touch rapid-fire strikes and damping that combine to make tabla such a captivating and virtuosic instrument.
@inproceedings{NIME20_116, author = {Pardue, Laurel S and Bhamra, Kuljit and England, Graham and Eddershaw, Phil and Menzies, Duncan}, title = {Demystifying tabla through the development of an electronic drum}, pages = {596--599}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper116.pdf}, presentation-video = {https://youtu.be/PPaHq8fQjB0} }
-
Juan D Sierra. 2020. SpeakerDrum. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 600–604.
Download PDF
SpeakerDrum is an instrument composed of multiple Dual Voice Coil (DVC) speakers, where two coils are used to drive the same membrane. However, in this case, one of them is used as a microphone, which is then used by the performer as an input interface for percussive gestures. Of course, this leads to potential feedback, but with enough control, a compelling exploration of resonance, haptic feedback and sound embodiment is possible.
@inproceedings{NIME20_117, author = {Sierra, Juan D}, title = {SpeakerDrum}, pages = {600--604}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper117.pdf} }
-
Matthew Caren, Romain Michon, and Matthew Wright. 2020. The KeyWI: An Expressive and Accessible Electronic Wind Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 605–608.
Download PDF
This paper presents the KeyWI, an electronic wind instrument design based on the melodica that both improves upon limitations in current systems and is general and powerful enough to support a variety of applications. Four opportunities for growth are identified in current electronic wind instrument systems, which are then used as focuses in the development and evaluation of the instrument. The instrument features a breath pressure sensor with a large dynamic range, a keyboard that allows for polyphonic pitch selection, and a completely integrated construction. Sound synthesis is performed with Faust code compiled to the Bela Mini, which offers low-latency audio and a simple yet powerful development workflow. In order to be as accessible and versatile as possible, the hardware and software are entirely open-source, and fabrication requires only common maker tools.
@inproceedings{NIME20_118, author = {Caren, Matthew and Michon, Romain and Wright, Matthew}, title = {The KeyWI: An Expressive and Accessible Electronic Wind Instrument}, pages = {605--608}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper118.pdf} }
-
Pelle Juul Christensen, Dan Overholt, and Stefania Serafin. 2020. The Daïs: A Haptically Enabled New Interface for Musical Expression for Controlling Physical Models for Sound Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 609–612.
Download PDF
In this paper we provide a detailed description of the development of a new interface for musical expression, the Daïs, with focus on an iterative development process, control of physical models for sound synthesis, and haptic feedback. The development process, consisting of three iterations, is covered along with a discussion of the tools and methods used. The sound synthesis algorithm for the Daïs, a physical model of a bowed string, is covered and the mapping from the interface parameters to those of the synthesis algorithm is described in detail. Using a qualitative test, the affordances, advantages, and disadvantages of the chosen design, synthesis algorithm, and parameter mapping are highlighted. Lastly, the possibilities for future work are discussed with special focus on alternate sounds and mappings.
@inproceedings{NIME20_119, author = {Christensen, Pelle Juul and Overholt, Dan and Serafin, Stefania}, title = {The Daïs: A Haptically Enabled New Interface for Musical Expression for Controlling Physical Models for Sound Synthesis}, pages = {609--612}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper119.pdf}, presentation-video = {https://youtu.be/XOvnc_AKKX8} }
-
Samuel J Hunt, Tom Mitchell, and Chris Nash. 2020. Composing computer generated music, an observational study using IGME: the Interactive Generative Music Environment. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 61–66.
Download PDF
Computer-composed music remains a novel and challenging problem to solve. Despite an abundance of techniques and systems, little research has explored how these might be useful for end-users looking to compose with generative and algorithmic music techniques. User interfaces for generative music systems are often inaccessible to non-programmers and neglect established composition workflow and design paradigms that are familiar to computer-based music composers. We have developed a system called the Interactive Generative Music Environment (IGME) that attempts to bridge the gap between generative music and music sequencing software, through an easy-to-use score editing interface. This paper discusses a series of user studies in which users explore generative music composition with IGME. A questionnaire evaluates the users’ perception of interacting with generative music, and from this we provide recommendations for future generative music systems and interfaces.
@inproceedings{NIME20_12, author = {Hunt, Samuel J and Mitchell, Tom and Nash, Chris}, title = {Composing computer generated music, an observational study using IGME: the Interactive Generative Music Environment}, pages = {61--66}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper12.pdf} }
-
Joao Wilbert, Don D Haddad, Hiroshi Ishii, and Joseph Paradiso. 2020. Patch-corde: an expressive patch-cable for the modular synthesizer. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 613–616.
Download PDF
Many opportunities and challenges exist in both the control and performative aspects of today’s modular synthesizers. The user interface prevailing in the world of synthesizers and music controllers has always revolved around knobs, faders, switches, dials, buttons, or capacitive touchpads, to name a few. This paper presents a novel way of interacting with a modular synthesizer by exploring the affordances of cord-based UIs. A special patch cable was developed using commercially available piezo-resistive rubber cords, and was adapted to fit the 3.5 mm mono audio jack, making it compatible with the Eurorack modular-synth standard. Moreover, a module was developed to condition this stretchable sensor/cable, allowing multiple Patch-cordes to be used in a given patch simultaneously. This paper also presents a vocabulary of interactions, labeled through various physical actions, turning the patch cable into an expressive controller that complements traditional patching techniques.
@inproceedings{NIME20_120, author = {Wilbert, Joao and Haddad, Don D and Ishii, Hiroshi and Paradiso, Joseph}, title = {Patch-corde: an expressive patch-cable for the modular synthesizer.}, pages = {613--616}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper120.pdf}, presentation-video = {https://youtu.be/7gklx8ek8U8} }
-
Jiří Suchánek. 2020. SOIL CHOIR v.1.3 - soil moisture sonification installation. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 617–618.
Download PDF
Artistic sonification offers a creative method for attaching direct semantic layers to abstract sounds. This paper is dedicated to the sound installation “Soil choir v.1.3”, which sonifies soil moisture at different depths and transforms this non-musical phenomenon into organized sound structures. The sonification of natural soil moisture processes tests the limits of our attention, patience and willingness to perceive ultra-slow reactions, and examines the mechanisms of our sense adaptation. Although the musical time of the installation is set to an almost non-human, environmental time scale (changes happen within hours, days, weeks or even months…), the system can be explored and even played as an instrument by putting sensors into different soil areas or pouring liquid into the soil and waiting for changes. The crucial aspect of the work was to design a sonification architecture that deals with extremely slow changes in the input data – measured values from moisture sensors. The result is a sound installation consisting of three objects – each with a different type of soil. Every object is a compact, independent unit consisting of three low-cost capacitive soil moisture sensors, a 1 m long Perspex tube filled with soil, a full-range loudspeaker and a Bela platform with custom SuperCollider code. I developed this installation during 2019, and this paper gives insight into the aspects and issues connected with creating it.
@inproceedings{NIME20_121, author = {Suchánek, Jiří}, title = {SOIL CHOIR v.1.3 - soil moisture sonification installation}, pages = {617--618}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper121.pdf} }
-
Marinos Koutsomichalis. 2020. Rough-hewn Hertzian Multimedia Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 619–624.
Download PDF
Three DIY electronic instruments that the author has used in real-life multimedia performance contexts are scrutinised herein. The instruments are made intentionally rough-hewn, non-optimal and user-unfriendly in several respects, and are shown to draw upon experimental traits in electronics design and interfaces for music expression. The various ways in which such design traits affect their performance are outlined, as are their overall consequences for the artistic outcome and for individual experiences of it. It is shown that, to a varying extent, they all embody, mediate, and aid actualise the specifics their parent projects revolve around. It is eventually suggested that in the context of an exploratory and hybrid artistic practice, bespoke instruments of sorts, their improvised performance, the material traits or processes they implement or pivot on, and the ideas/narratives that emerge thereof, may all intertwine and fuse into one another so that a clear distinction between them is not always possible, or meaningful. In such a vein, this paper aims at being an account of such a practice upon which prospective researchers/artists may further build.
@inproceedings{NIME20_122, author = {Koutsomichalis, Marinos}, title = {Rough-hewn Hertzian Multimedia Instruments}, pages = {619--624}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper122.pdf}, presentation-video = {https://youtu.be/DWecR7exl8k} }
-
Taylor J Olsen. 2020. Animation, Sonification, and Fluid-Time: A Visual-Audioizer Prototype. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 625–630.
Download PDF
The visual-audioizer is a patch created in Max in which the concept of fluid-time animation techniques, in tandem with basic computer vision tracking methods, can be used as a tool to allow the visual time-based media artist to create music. Visual aspects relating to the animator’s knowledge of motion, animated loops, and auditory synchronization derived from computer vision tracking methods allow an immediate connection to the audio generated from visuals—becoming a new way to experience and create audio-visual media. A conceptual overview, comparisons of past/current audio-visual contributors, and a summary of the Max patch will be discussed. The novelty of practice-based animation methods in the field of musical expression, considerations for utilizing the visual-audioizer, and the future of fluid-time animation techniques as a tool of musical creativity will also be addressed.
@inproceedings{NIME20_123, author = {Olsen, Taylor J}, title = {Animation, Sonification, and Fluid-Time: A Visual-Audioizer Prototype}, pages = {625--630}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper123.pdf} }
-
Virginia de las Pozas. 2020. Semi-Automated Mappings for Object-Manipulating Gestural Control of Electronic Music. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 631–634.
Download PDF
This paper describes a system for automating the generation of mapping schemes between human interaction with extramusical objects and electronic dance music. These mappings are determined through the comparison of sensor input to a synthesized matrix of sequenced audio. The goal of the system is to facilitate live performances that feature quotidian objects in the place of traditional musical instruments. The practical and artistic applications of musical control with quotidian objects are discussed. The associated object-manipulating gesture vocabularies are mapped to musical output so that the objects themselves may be perceived as DMIs. This strategy is used in a performance to explore the liveness qualities of the system.
@inproceedings{NIME20_124, author = {de las Pozas, Virginia}, title = {Semi-Automated Mappings for Object-Manipulating Gestural Control of Electronic Music}, pages = {631--634}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper124.pdf} }
-
Christodoulos Benetatos, Joseph VanderStel, and Zhiyao Duan. 2020. BachDuet: A Deep Learning System for Human-Machine Counterpoint Improvisation. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 635–640.
Download PDF
During the Baroque period, improvisation was a key element of music performance and education. Great musicians, such as J.S. Bach, were better known as improvisers than composers. Today, however, there is a lack of improvisation culture in classical music performance and education; classical musicians either are not trained to improvise, or cannot find other people to improvise with. Motivated by this observation, we develop BachDuet, a system that enables real-time counterpoint improvisation between a human and a machine. This system uses a recurrent neural network to process the human musician’s monophonic performance on a MIDI keyboard and generates the machine’s monophonic performance in real time. We develop a GUI to visualize the generated music content and to facilitate this interaction. We conduct user studies with 13 musically trained users and show the feasibility of two-party duet counterpoint improvisation and the effectiveness of BachDuet for this purpose. We also conduct listening tests with 48 participants and show that they cannot tell the difference between duets generated by human-machine improvisation using BachDuet and those generated by human-human improvisation. Objective evaluation is also conducted to assess the degree to which these improvisations adhere to common rules of counterpoint, showing promising results.
@inproceedings{NIME20_125, author = {Benetatos, Christodoulos and VanderStel, Joseph and Duan, Zhiyao}, title = {BachDuet: A Deep Learning System for Human-Machine Counterpoint Improvisation}, pages = {635--640}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper125.pdf}, presentation-video = {https://youtu.be/wFGW0QzuPPk} }
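BachDuet itself uses a recurrent neural network; purely as a hedged illustration of the real-time call-and-response loop described above, the sketch below substitutes a toy first-order Markov table for that network and clocks the exchange in 16th notes. Every name, pitch table, and tempo here is an invented placeholder, not the paper's implementation.

```python
# Hypothetical sketch of a real-time duet loop: at every 16th-note tick the
# machine answers the human's last MIDI pitch. A toy first-order Markov table
# stands in for the paper's recurrent neural network.
import random
import time

TRANSITIONS = {60: [62, 64, 67], 62: [60, 65], 64: [62, 67], 65: [64], 67: [60, 64]}

def machine_response(last_human_pitch: int) -> int:
    """Pick a counterpoint pitch given the human's most recent note."""
    candidates = TRANSITIONS.get(last_human_pitch, [60])
    return random.choice(candidates)

def duet_loop(human_pitches, bpm=90):
    tick = 60.0 / bpm / 4          # duration of a 16th note in seconds
    for pitch in human_pitches:    # stands in for incoming MIDI input
        answer = machine_response(pitch)
        print(f"human {pitch:3d} -> machine {answer:3d}")
        time.sleep(tick)

duet_loop([60, 64, 67, 62, 60])
```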
-
Olivier Capra, Florent Berthaut, and Laurent Grisoni. 2020. All You Need Is LOD : Levels of Detail in Visual Augmentations for the Audience. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 67–72.
Download PDFBecause they break the physical link between gestures and sound, Digital Musical Instruments offer countless opportunities for musical expression. For the same reason, however, they may hinder the audience experience, making the musician’s contribution and expressiveness difficult to perceive. In order to cope with this issue without altering the instruments, researchers and artists alike have designed techniques to augment their performances with additional information, through audio, haptic or visual modalities. These techniques have however only been designed to offer a fixed level of information, without taking into account the variety of spectators’ expertise and preferences. In this paper, we investigate the design, implementation and effect on audience experience of visual augmentations with a controllable level of detail (LOD). We conduct a controlled experiment with 18 participants, including novices and experts. Our results show contrasts in the impact of LOD on experience and comprehension for experts and novices, and highlight the diversity of usage of visual augmentations by spectators.
@inproceedings{NIME20_13, author = {Capra, Olivier and Berthaut, Florent and Grisoni, Laurent}, title = {All You Need Is LOD : Levels of Detail in Visual Augmentations for the Audience}, pages = {67--72}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper13.pdf}, presentation-video = {https://youtu.be/3hIGu9QDn4o} }
-
Johnty Wang, Eduardo Meneses, and Marcelo Wanderley. 2020. The Scalability of WiFi for Mobile Embedded Sensor Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 73–76.
Download PDFIn this work we test the performance of multiple ESP32 microcontrollers used as WiFi sensor interfaces in the context of real-time interactive systems. Device counts from 1 to 13 and individual sending rates from 50 to 2300 messages per second are tested to provide examples of various network load situations that may resemble a performance configuration. The overall end-to-end latency and bandwidth are measured as the basic performance metrics of interest. The results show that a maximum message rate of 2300 Hz is possible on a 2.4 GHz network for a single embedded device and decreases as more devices are added. During testing it was possible to have up to 7 devices transmitting at 100 Hz while attaining less than 10 ms latency, but performance degrades with increasing sending rates and numbers of devices. Performance can also vary significantly from day to day depending on network usage in a crowded environment. (A minimal host-side latency-measurement sketch follows the BibTeX entry below.)
@inproceedings{NIME20_14, author = {Wang, Johnty and Meneses, Eduardo and Wanderley, Marcelo}, title = {The Scalability of WiFi for Mobile Embedded Sensor Interfaces}, pages = {73--76}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper14.pdf} }
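The paper's benchmarking code is not reproduced here; as a minimal, hedged sketch of how end-to-end (round-trip) latency to a WiFi sensor device might be estimated from the host side, the fragment below timestamps UDP packets and assumes the embedded device simply echoes them back. The address, port, packet count, and echo behaviour are assumptions, not details from the paper.

```python
# Minimal sketch of one way to estimate round-trip latency to a WiFi sensor
# device that echoes UDP packets back (assumed behaviour; not the paper's code).
import socket
import statistics
import struct
import time

DEVICE_ADDR = ("192.168.4.1", 9000)   # placeholder address of the embedded device

def measure_rtt(n_packets: int = 100) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(0.5)
    rtts = []
    for i in range(n_packets):
        send_time = time.perf_counter()
        sock.sendto(struct.pack("!Id", i, send_time), DEVICE_ADDR)
        try:
            sock.recvfrom(64)                            # wait for the echo
            rtts.append(time.perf_counter() - send_time)
        except socket.timeout:
            pass                                         # count as a dropped packet
    if rtts:
        print(f"median RTT: {statistics.median(rtts) * 1000:.2f} ms, "
              f"loss: {1 - len(rtts) / n_packets:.1%}")

measure_rtt()
```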
-
Florent Berthaut and Luke Dahl. 2020. Adapting & Openness: Dynamics of Collaboration Interfaces for Heterogeneous Digital Orchestras. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 77–82.
Download PDFAdvanced musical cooperation, such as concurrent control of musical parameters or sharing data between instruments, has previously been investigated using multi-user instruments or orchestras of identical instruments. In the case of heterogeneous digital orchestras, where the instruments, interfaces, and control gestures can be very different, a number of issues may impede such collaboration opportunities. These include the lack of a standard method for sharing data or control, the incompatibility of parameter types, and limited awareness of other musicians’ activity and instrument structure. As a result, most collaborations remain limited to synchronising tempo or applying effects to audio outputs. In this paper we present two interfaces for real-time group collaboration amongst musicians with heterogeneous instruments. We conducted a qualitative study to investigate how these interfaces impact musicians’ experience and their musical output, we performed a thematic analysis of interviews, and we analysed logs of interactions. From these results we derive principles and guidelines for the design of advanced collaboration systems for heterogeneous digital orchestras, namely Adapting (to) the System, Support Development, Default to Openness, and Minimise Friction to Support Expressivity.
@inproceedings{NIME20_15, author = {Berthaut, Florent and Dahl, Luke}, title = {Adapting & Openness: Dynamics of Collaboration Interfaces for Heterogeneous Digital Orchestras}, pages = {77--82}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper15.pdf}, presentation-video = {https://youtu.be/jGpKkbWq_TY} }
-
Andreas Förster, Christina Komesker, and Norbert Schnell. 2020. SnoeSky and SonicDive - Design and Evaluation of Two Accessible Digital Musical Instruments for a SEN School. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 83–88.
Download PDFMusic technology can provide persons who experience physical and/or intellectual barriers using traditional musical instruments with a unique access to active music making. This applies particularly but not exclusively to the so-called group of people with physical and/or mental disabilities. This paper presents two Accessible Digital Musical Instruments (ADMIs) that were specifically designed for the students of a Special Educational Needs (SEN) school with a focus on intellectual disabilities. With SnoeSky, we present an ADMI in the form of an interactive starry sky that integrates into the Snoezel-Room. Here, users can ’play’ with ’melodic constellations’ using a flashlight. SonicDive is an interactive installation that enables users to explore a complex water soundscape through their movement inside a ball pool. The underlying goal of both ADMIs was the promotion of self-efficacy experiences while stimulating the users’ relaxation and activation. This paper reports on the design process involving the users and their environment. In addition, it describes some details of the technical implementation of the ADMIs as well as first indications of their effectiveness.
@inproceedings{NIME20_16, author = {Förster, Andreas and Komesker, Christina and Schnell, Norbert}, title = {SnoeSky and SonicDive - Design and Evaluation of Two Accessible Digital Musical Instruments for a SEN School}, pages = {83--88}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper16.pdf} }
-
Robert Pritchard and Ian Lavery. 2020. Inexpensive Colour Tracking to Overcome Performer ID Loss. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 89–92.
Download PDFThe NuiTrack IDE supports writing code for an active infrared camera to track up to six bodies, with up to 25 target points on each person. The system automatically assigns IDs to performers/users as they enter the tracking area, but when occlusion of a performer occurs, or when a user exits and then re-enters the tracking area, upon rediscovery of the user the system generates a new tracking ID. Because of this, any assigned and registered target tracking points for specific users are lost, as are the linked abilities of that performer to control media based on their movements. We describe a single-camera system for overcoming this problem by assigning IDs based on the colours worn by the performers, and then using the colour tracking for updating and confirming identification when the performer reappears after occlusion or upon re-entry. A video link is supplied showing the system used for an interactive dance work with four dancers controlling individual audio tracks. (A hypothetical colour-matching fragment follows the BibTeX entry below.)
@inproceedings{NIME20_17, author = {Pritchard, Robert and Lavery, Ian}, title = {Inexpensive Colour Tracking to Overcome Performer ID Loss }, pages = {89--92}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper17.pdf} }
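As a hypothetical fragment of how colour-based re-identification might look (not the authors' implementation), the sketch below uses OpenCV to count how many pixels inside a tracked body's bounding box fall within each registered costume colour range and re-links the body to the best-matching ID. The HSV ranges and labels are invented.

```python
# Hypothetical sketch: decide which registered costume colour dominates a
# performer's bounding box, so a re-detected body can be re-linked to its ID.
# HSV ranges are placeholders; this is not the system described in the paper.
import cv2
import numpy as np

COLOUR_RANGES = {                      # performer label -> (lower HSV, upper HSV)
    "dancer_red":  (np.array([0, 120, 80]),   np.array([10, 255, 255])),
    "dancer_blue": (np.array([100, 120, 80]), np.array([130, 255, 255])),
}

def identify_performer(frame_bgr: np.ndarray, bbox) -> str:
    """Return the registered colour label with the most matching pixels."""
    x, y, w, h = bbox
    hsv = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    counts = {label: cv2.countNonZero(cv2.inRange(hsv, lo, hi))
              for label, (lo, hi) in COLOUR_RANGES.items()}
    return max(counts, key=counts.get)

# Example with a synthetic frame that is mostly blue inside the box.
frame = np.full((240, 320, 3), (200, 60, 30), dtype=np.uint8)    # BGR, blue-ish
print(identify_performer(frame, (50, 50, 100, 100)))             # -> dancer_blue
```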
-
Kiyu Nishida and kazuhiro jo. 2020. Modules for analog synthesizers using Aloe vera biomemristor. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 93–96.
Download PDFIn this study, an analog synthesizer module using Aloe vera as a biomemristor was proposed. The recent revival of analog modular synthesizers explores novel possibilities of sounds based on unconventional technologies such as integrating biological forms and structures into traditional circuits. Biosignals have been used in experimental music as material for composition. However, the recent development of a biocomputer using a slime mold biomemristor expands the use of biomemristors in music. Based on prior research, the characteristics of Aloe vera as a biomemristor were electrically measured, and two types of analog synthesizer modules were developed: a current-to-voltage converter and a current-spike-to-voltage converter. For this application, a live performance was conducted with the CVC module and the possibilities as a new interface for musical expression were examined.
@inproceedings{NIME20_18, author = {Nishida, Kiyu and jo, kazuhiro}, title = {Modules for analog synthesizers using Aloe vera biomemristor}, pages = {93--96}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper18.pdf}, presentation-video = {https://youtu.be/bZaCd6igKEA} }
-
Giulio Moro and Andrew McPherson. 2020. A platform for low-latency continuous keyboard sensing and sound generation. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 97–102.
Download PDFOn several acoustic and electromechanical keyboard instruments, the produced sound is not strictly dependent on a discrete key velocity parameter alone, and minute gesture details can affect the final sonic result. By contrast, subtle variations in articulation have a relatively limited effect on the sound generation when the keyboard controller uses the MIDI standard, as in the vast majority of digital keyboards. In this paper we present an embedded platform that can generate sound in response to a controller capable of sensing the continuous position of keys on a keyboard. This platform enables the creation of keyboard-based DMIs which allow for a richer set of interaction gestures than would be possible through a MIDI keyboard, which we demonstrate through two example instruments. First, in a Hammond organ emulator, the sensing device makes it possible to recreate the nuances of the interaction with the original instrument in a way a velocity-based MIDI controller could not. Second, a nonlinear waveguide flute synthesizer is shown as an example of the expressive capabilities that a continuous-keyboard controller opens up in the creation of new keyboard-based DMIs. (An illustrative sketch of continuous-position mapping follows the BibTeX entry below.)
@inproceedings{NIME20_19, author = {Moro, Giulio and McPherson, Andrew}, title = {A platform for low-latency continuous keyboard sensing and sound generation}, pages = {97--102}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper19.pdf}, presentation-video = {https://youtu.be/Y137M9UoKKg} }
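To make the contrast with velocity-only MIDI concrete, here is a hedged, invented sketch of what a continuous key-position stream allows: deriving both a percussive onset (from the speed at which the key reaches the key bed) and a continuous depth control that plain note-on/note-off messages cannot carry. The thresholds, sample rate, and names are assumptions, not the paper's platform.

```python
# Illustrative sketch (not the paper's code): turning a stream of continuous
# key-position samples into (a) a single onset with a velocity, as MIDI would
# give, and (b) a continuous depth signal available throughout the key press.
ONSET_THRESHOLD = 0.9        # normalised key depth treated as "key bottom"
SAMPLE_RATE = 1000           # position samples per second (assumed)

def track_key(positions):
    """Yield ('onset', velocity) once per press and ('depth', value) per sample."""
    prev, triggered = 0.0, False
    for pos in positions:                            # pos in [0.0, 1.0]
        if not triggered and pos >= ONSET_THRESHOLD:
            velocity = (pos - prev) * SAMPLE_RATE    # depth change per second
            triggered = True
            yield ("onset", min(velocity / 50.0, 1.0))
        if triggered and pos < 0.05:
            triggered = False                        # key released
        yield ("depth", pos)
        prev = pos

for event in track_key([0.0, 0.2, 0.55, 0.92, 0.97, 0.6, 0.03]):
    print(event)
```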
-
Advait Sarkar and Henry Mattinson. 2020. Excello: exploring spreadsheets for music composition. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 11–16.
Download PDFExcello is a spreadsheet-based music composition and programming environment. We co-developed Excello with feedback from 21 musicians at varying levels of musical and computing experience. We asked: can the spreadsheet interface be used for programmatic music creation? Our design process encountered questions such as how time should be represented, whether amplitude and octave should be encoded as properties of individual notes or entire phrases, and how best to leverage standard spreadsheet features, such as formulae and copy-paste. We present the user-centric rationale for our current design, and report a user study suggesting that Excello’s notation retains similar cognitive dimensions to conventional music composition tools, while allowing the user to write substantially complex programmatic music.
@inproceedings{NIME20_2, author = {Sarkar, Advait and Mattinson, Henry}, title = {Excello: exploring spreadsheets for music composition}, pages = {11--16}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper2.pdf} }
-
Andrea Guidi, Fabio Morreale, and Andrew McPherson. 2020. Design for auditory imagery: altering instruments to explore performer fluency. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 103–108.
Download PDFIn NIME design, considerable attention has been devoted to feedback modalities, including auditory, visual and haptic feedback. How the performer executes the gestures to achieve a sound on an instrument, by contrast, appears to be less examined. Previous research showed that auditory imagery, or the ability to hear or recreate sounds in the mind even when no audible sound is present, is essential to the sensorimotor control involved in playing an instrument. In this paper, we enquire whether auditory imagery can also help to support skill transfer between musical instruments, with possible implications for new instrument design. To answer this question, we performed two experimental studies on pitch accuracy and fluency where professional violinists were asked to play a modified violin. Results showed that altered or even possibly irrelevant auditory feedback on a modified violin does not appear to be a significant impediment to performance. However, performers need to have coherent imagery of what they want to do, and the sonic outcome needs to be coupled to the motor program to achieve it. This finding shows that the design lens should be shifted from a direct feedback model of instrumental playing toward a model where imagery guides the playing process. This result is in agreement with recent research on skilled sensorimotor control that highlights the value of feedforward anticipation in embodied musical performance. It is also of primary importance for the design of new instruments: new sounds that cannot easily be imagined and that are not coupled to a motor program are not likely to be easily performed on the instrument.
@inproceedings{NIME20_20, author = {Guidi, Andrea and Morreale, Fabio and McPherson, Andrew}, title = {Design for auditory imagery: altering instruments to explore performer fluency}, pages = {103--108}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper20.pdf}, presentation-video = {https://youtu.be/yK7Tg1kW2No} }
-
Raul G.M. Masu, Paulo Bala, Muhammad Ahmad, et al. 2020. VR Open Scores: Scores as Inspiration for VR Scenarios. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 109–114.
Download PDFIn this paper, we introduce the concept of VR Open Scores: score-based virtual scenarios in which an aleatoric score is embedded in a virtual environment. This idea builds upon the notions of graphic scores and composed instruments, and applies them in a new context. Our proposal also explores possible parallels between open meaning in interaction design and the aleatoric score, conceptualized as an Open Work by the Italian philosopher Umberto Eco. Our approach has two aims. The first aim is to create an environment where users can immerse themselves in the visual elements of a score while listening to the corresponding music. The second aim is to help users develop a personal relationship with both the system and the score. To achieve those aims, as a practical implementation of our proposed concept, we developed two immersive scenarios: a 360º video and an interactive space. We conclude by presenting how our design aims were accomplished in the two scenarios, and describing positive and negative elements of our implementations.
@inproceedings{NIME20_21, author = {Masu, Raul G.M. and Bala, Paulo and Ahmad, Muhammad and Correia, Nuno N. and Nisi, Valentina and Nunes, Nuno and Romão, Teresa}, title = {VR Open Scores: Scores as Inspiration for VR Scenarios}, pages = {109--114}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper21.pdf}, presentation-video = {https://youtu.be/JSM6Rydz7iE} }
-
Amble H C Skuse and Shelly Knotts. 2020. Creating an Online Ensemble for Home Based Disabled Musicians: Disabled Access and Universal Design - why disabled people must be at the heart of developing technology. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 115–120.
Download PDFThe project takes a Universal Design approach to exploring the possibility of creating a software platform to facilitate a Networked Ensemble for Disabled musicians. In accordance with the Nothing About Us Without Us (Charlton, 1998) principle, I worked with a group of 15 professional musicians who are also disabled. The group gave interviews about their perspectives and needs around networked music practices, and this data was then analysed to look at how software design could be developed to make it more accessible. We also identified key messages for the wider community of digital musical instrument makers, performers and event organisers to improve practice around working with and for disabled musicians.
@inproceedings{NIME20_22, author = {Skuse, Amble H C and Knotts, Shelly}, title = {Creating an Online Ensemble for Home Based Disabled Musicians: Disabled Access and Universal Design - why disabled people must be at the heart of developing technology.}, pages = {115--120}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper22.pdf}, presentation-video = {https://youtu.be/m4D4FBuHpnE} }
-
Anıl Çamcı, Matias Vilaplana, and Ruth Wang. 2020. Exploring the Affordances of VR for Musical Interaction Design with VIMEs. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 121–126.
Download PDFAs virtual reality (VR) continues to gain prominence as a medium for artistic expression, a growing number of projects explore the use of VR for musical interaction design. In this paper, we discuss the concept of VIMEs (Virtual Interfaces for Musical Expression) through four case studies that explore different aspects of musical interactions in virtual environments. We then describe a user study designed to evaluate these VIMEs in terms of various usability considerations, such as immersion, perception of control, learnability and physical effort. We offer the results of the study, articulating the relationship between the design of a VIME and the various performance behaviors observed among its users. Finally, we discuss how these results, combined with recent developments in VR technology, can inform the design of new VIMEs.
@inproceedings{NIME20_23, author = {Çamcı, Anıl and Vilaplana, Matias and Wang, Ruth}, title = {Exploring the Affordances of VR for Musical Interaction Design with VIMEs}, pages = {121--126}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper23.pdf} }
-
Anıl Çamcı, Aaron Willette, Nachiketa Gargi, Eugene Kim, Julia Xu, and Tanya Lai. 2020. Cross-platform and Cross-reality Design of Immersive Sonic Environments. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 127–130.
Download PDFThe continued growth of modern VR (virtual reality) platforms into mass adoption is fundamentally driven by the work of content creators who offer engaging experiences. It is therefore essential to design accessible creativity support tools that can facilitate the work of a broad range of practitioners in this domain. In this paper, we focus on one facet of VR content creation, namely immersive audio design. We discuss a suite of design tools that enable both novice and expert users to rapidly prototype immersive sonic environments across desktop, virtual reality and augmented reality platforms. We discuss the design considerations adopted for each implementation, and how the individual systems informed one another in terms of interaction design. We then offer a preliminary evaluation of these systems with reports from first-time users. Finally, we discuss our road-map for improving individual and collaborative creative experiences across platforms and realities in the context of immersive audio.
@inproceedings{NIME20_24, author = {Çamcı, Anıl and Willette, Aaron and Gargi, Nachiketa and Kim, Eugene and Xu, Julia and Lai, Tanya}, title = {Cross-platform and Cross-reality Design of Immersive Sonic Environments}, pages = {127--130}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper24.pdf} }
-
Marius Schebella, Gertrud Fischbacher, and Matthew Mosher. 2020. Silver: A Textile Wireframe Interface for the Interactive Sound Installation Idiosynkrasia. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 131–132.
Download PDFSilver is an artwork that deals with the emotional feeling of contact by exaggerating it acoustically. It originates from an interactive room installation, where several textile sculptures merge with sounds. Silver is made from a wire mesh and its surface is reactive to closeness and touch. This material property forms a hybrid of artwork and parametric controller for the real-time sound generation. The textile quality of the fine steel wire mesh evokes a haptic familiarity inherent to textile materials. This makes it easy for the audience to overcome the initial threshold barrier to get in touch with the artwork in an exhibition situation. Additionally, the interaction is not dependent on visuals. The characteristics of the surface sensor allow a user to play the instrument without actually touching it.
@inproceedings{NIME20_25, author = {Schebella, Marius and Fischbacher, Gertrud and Mosher, Matthew}, title = {Silver: A Textile Wireframe Interface for the Interactive Sound Installation Idiosynkrasia}, pages = {131--132}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper25.pdf} }
-
Ning Yang, Richard Savery, Raghavasimhan Sankaranarayanan, Lisa Zahray, and Gil Weinberg. 2020. Mechatronics-Driven Musical Expressivity for Robotic Percussionists. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 133–138.
Download PDFMusical expressivity is an important aspect of musical performance for humans as well as robotic musicians. We present a novel mechatronics-driven implementation of Brushless Direct Current (BLDC) motors in a robotic marimba player, named ANON, designed to improve speed, dynamic range (loudness), and ultimately perceived musical expressivity in comparison to state-of-the-art robotic percussionist actuators. In an objective test of dynamic range, we find that our implementation provides wider and more consistent dynamic range response in comparison with solenoid-based robotic percussionists. Our implementation also outperforms both solenoid and human marimba players in striking speed. In a subjective listening test measuring musical expressivity, our system performs significantly better than a solenoid-based system and is statistically indistinguishable from human performers.
@inproceedings{NIME20_26, author = {Yang, Ning and Savery, Richard and Sankaranarayanan, Raghavasimhan and Zahray, Lisa and Weinberg, Gil}, title = {Mechatronics-Driven Musical Expressivity for Robotic Percussionists}, pages = {133--138}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper26.pdf}, presentation-video = {https://youtu.be/KsQNlArUv2k} }
-
Paul Dunham. 2020. Click::RAND. A Minimalist Sound Sculpture. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 139–142.
Download PDFDiscovering outmoded or obsolete technologies and appropriating them in creative practice can uncover new relationships between those technologies. Using a media archaeological research approach, this paper presents the electromechanical relay and a book of random numbers as related forms of obsolete media. Situated within the context of electromechanical sound art, the work uses a non-deterministic approach to explore the non-linear and unpredictable agency and materiality of the objects in the work. Developed by the first author, Click::RAND is an object-based sound installation. The work has been developed as an audio-visual representation of a genealogy of connections between these two forms of media in the history of computing.
@inproceedings{NIME20_27, author = {Dunham, Paul}, title = {Click::RAND. A Minimalist Sound Sculpture.}, pages = {139--142}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper27.pdf}, presentation-video = {https://youtu.be/vWKw8H0F9cI} }
-
Enrique Tomás. 2020. A Playful Approach to Teaching NIME: Pedagogical Methods from a Practice-Based Perspective. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 143–148.
Download PDFThis paper reports on the experience gained after five years of teaching a NIME master course designed specifically for artists. A playful pedagogical approach based on practice-based methods is presented and evaluated. My goal was to introduce the art of NIME design and performance while giving less emphasis to technology. Instead of letting technology determine how we teach and think during the class, I propose first fostering the students’ active construction and understanding of the field by experimenting with physical materials, sound production and bodily movements. For this purpose I developed a few classroom exercises which my students had to study and practice. During this period of five years, 95 students attended the course. At the end of the semester course, each student designed, built and performed a new interface for musical expression in front of an audience. Thus, in this paper I describe and discuss the benefits of applying playfulness and practice-based methods for teaching NIME in art universities. I introduce the methods and classroom exercises developed and finally present some lessons learned from this pedagogical experience.
@inproceedings{NIME20_28, author = {Tomás, Enrique}, title = {A Playful Approach to Teaching NIME: Pedagogical Methods from a Practice-Based Perspective}, pages = {143--148}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper28.pdf}, presentation-video = {https://youtu.be/94o3J3ozhMs} }
-
Quinn D Jarvis Holland, Crystal Quartez, Francisco Botello, and Nathan Gammill. 2020. EXPANDING ACCESS TO MUSIC TECHNOLOGY- Rapid Prototyping Accessible Instrument Solutions For Musicians With Intellectual Disabilities. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 149–153.
Download PDFUsing open-source and creative coding frameworks, a team of artist-engineers from Portland Community College working with artists who experience Intellectual/Developmental disabilities prototyped an ensemble of adapted instruments and synthesizers that facilitate real-time in-key collaboration. The instruments employ a variety of sensors, sending the resulting musical controls to software sound generators via MIDI. Careful consideration was given to the balance between freedom of expression and curating the possible sonic outcomes as adaptation. Evaluation of adapted instrument design may differ greatly from frameworks for evaluating traditional instruments or products intended for the mass market, though the results of such focused and individualised design have a variety of possible applications.
@inproceedings{NIME20_29, author = {Jarvis Holland, Quinn D and Quartez, Crystal and Botello, Francisco and Gammill, Nathan}, title = {EXPANDING ACCESS TO MUSIC TECHNOLOGY- Rapid Prototyping Accessible Instrument Solutions For Musicians With Intellectual Disabilities}, pages = {149--153}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper29.pdf} }
-
Giovanni M Troiano, Alberto Boem, Giacomo Lepri, and Victor Zappi. 2020. Non-Rigid Musical Interfaces: Exploring Practices, Takes, and Future Perspective. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 17–22.
Download PDFNon-rigid interfaces allow for exploring new interactive paradigms that rely on deformable input and shape change, and whose possible applications span several branches of human-computer interaction (HCI). While extensively explored as deformable game controllers, bendable smartphones, and shape-changing displays, non-rigid interfaces are rarely framed in a musical context, and their use for composition and performance is rather sparse and unsystematic. With this work, we start a systematic exploration of this relatively uncharted research area, by means of (1) briefly reviewing existing musical interfaces that capitalize on deformable input, and (2) surveying 11 experts and pioneers in the field about their experience with and vision on non-rigid musical interfaces. Based on experts’ input, we suggest possible next steps of musical appropriation with deformable and shape-changing technologies. We conclude by discussing how cross-overs between NIME and HCI research will benefit non-rigid interfaces.
@inproceedings{NIME20_3, author = {Troiano, Giovanni M and Boem, Alberto and Lepri, Giacomo and Zappi, Victor}, title = {Non-Rigid Musical Interfaces: Exploring Practices, Takes, and Future Perspective}, pages = {17--22}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper3.pdf}, presentation-video = {https://youtu.be/o4CuAglHvf4} }
-
Jack Atherton and Ge Wang. 2020. Curating Perspectives: Incorporating Virtual Reality into Laptop Orchestra Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 154–159.
Download PDFDespite a history spanning nearly 30 years, best practices for the use of virtual reality (VR) in computer music performance remain exploratory. Here, we present a case study of a laptop orchestra performance entitled Resilience, involving one VR performer and an ensemble of instrumental performers, in order to explore values and design principles for incorporating this emerging technology into computer music performance. We present a brief history of the intersection of VR and the laptop orchestra. We then present the design of the piece and distill it into a set of design principles. Broadly, these design principles address the interplay between the different conflicting perspectives at play: those of the VR performer, the ensemble, and the audience. For example, one principle suggests that the perceptual link between the physical and virtual world may be enhanced for the audience by improving the performers’ sense of embodiment. We argue that these design principles are a form of generalized knowledge about how we might design laptop orchestra pieces involving virtual reality.
@inproceedings{NIME20_30, author = {Atherton, Jack and Wang, Ge}, title = {Curating Perspectives: Incorporating Virtual Reality into Laptop Orchestra Performance}, pages = {154--159}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper30.pdf}, presentation-video = {https://youtu.be/tmeDO5hg56Y} }
-
Fabio Morreale, S. M. Astrid Bin, Andrew McPherson, Paul Stapleton, and Marcelo Wanderley. 2020. A NIME Of The Times: Developing an Outward-Looking Political Agenda For This Community. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 160–165.
Download PDFSo far, NIME research has been mostly inward-looking, dedicated to divulging and studying our own work and having limited engagement with trends outside our community. Though musical instruments as cultural artefacts are inherently political, we have so far not sufficiently engaged with confronting these themes in our own research. In this paper we argue that we should consider how our work is also political, and begin to develop a clear political agenda that includes social, ethical, and cultural considerations through which to consider not only our own musical instruments, but also those not created by us. Failing to do so would result in an unintentional but tacit acceptance and support of such ideologies. We explore one item to be included in this political agenda: the recent trend in music technology of “democratising music”, which carries implicit political ideologies grounded in techno-solutionism. We conclude with a number of recommendations for stimulating community-wide discussion on these themes in the hope that this leads to the development of an outward-facing perspective that fully engages with political topics.
@inproceedings{NIME20_31, author = {Morreale, Fabio and Bin, S. M. Astrid and McPherson, Andrew and Stapleton, Paul and Wanderley, Marcelo}, title = {A NIME Of The Times: Developing an Outward-Looking Political Agenda For This Community}, pages = {160--165}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper31.pdf}, presentation-video = {https://youtu.be/y2iDN24ZLTg} }
-
Chantelle L Ko and Lora Oehlberg. 2020. Touch Responsive Augmented Violin Interface System II: Integrating Sensors into a 3D Printed Fingerboard. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 166–171.
Download PDFWe present TRAVIS II, an augmented acoustic violin with touch sensors integrated into its 3D printed fingerboard that track left-hand finger gestures in real time. The fingerboard has four strips of conductive PLA filament which produce an electric signal when fingers press down on each string. While these sensors are physically robust, they are mechanically assembled and thus easy to replace if damaged. The performer can also trigger presets via four FSRs attached to the body of the violin. The instrument is completely wireless, giving the performer the freedom to move throughout the performance space. While the sensing fingerboard is installed in place of the traditional fingerboard, all other electronics can be removed from the augmented instrument, maintaining the aesthetics of a traditional violin. Our design allows violinists to naturally create music for interactive performance and improvisation without requiring new instrumental techniques. In this paper, we describe the design of the instrument, experiments leading to the sensing fingerboard, and performative applications of the instrument.
@inproceedings{NIME20_32, author = {Ko, Chantelle L and Oehlberg, Lora}, title = {Touch Responsive Augmented Violin Interface System II: Integrating Sensors into a 3D Printed Fingerboard}, pages = {166--171}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper32.pdf}, presentation-video = {https://youtu.be/XIAd_dr9PHE} }
-
Nicolas E Gold, Chongyang Wang, Temitayo Olugbade, Nadia Berthouze, and Amanda Williams. 2020. P(l)aying Attention: Multi-modal, multi-temporal music control. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 172–175.
Download PDFThe expressive control of sound and music through body movements is well-studied. For some people, body movement is demanding, and although they would prefer to express themselves freely using gestural control, they are unable to use such interfaces without difficulty. In this paper, we present the P(l)aying Attention framework for manipulating recorded music to support these people, and to help the therapists that work with them. The aim is to facilitate body awareness, exploration, and expressivity by allowing the manipulation of a pre-recorded ‘ensemble’ through an interpretation of body movement, provided by a machine-learning system trained on physiotherapist assessments and movement data from people with chronic pain. The system considers the nature of a person’s movement (e.g. protective) and offers an interpretation in terms of the joint-groups that are playing a major role in the determination at that point in the movement, and to which attention should perhaps be given (or the opposite at the user’s discretion). Using music to convey the interpretation offers informational (through movement sonification) and creative (through manipulating the ensemble by movement) possibilities. The approach offers the opportunity to explore movement and music at multiple timescales and under varying musical aesthetics.
@inproceedings{NIME20_33, author = {Gold, Nicolas E and Wang, Chongyang and Olugbade, Temitayo and Berthouze, Nadia and Williams, Amanda}, title = {P(l)aying Attention: Multi-modal, multi-temporal music control}, pages = {172--175}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper33.pdf} }
-
Doga Cavdir and Ge Wang. 2020. Felt Sound: A Shared Musical Experience for the Deaf and Hard of Hearing. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 176–181.
Download PDFWe present a musical interface specifically designed for inclusive performance that offers a shared experience both for individuals who are deaf and hard of hearing and for those who are not. This interface borrows gestures (with or without overt meaning) from American Sign Language (ASL), rendered using low-frequency sounds that can be felt by everyone in the performance. The Deaf and Hard of Hearing cannot experience the sound in the same way; instead, they are able to physically experience the vibrations, nuances, and contours, as well as their correspondence with the hand gestures. Those who are not hard of hearing can experience the sound, but also feel it just the same, with the knowledge that the same physical vibrations are shared by everyone. The employment of sign language adds another aesthetic dimension to the instrument: a nuanced borrowing of a functional communication medium for an artistic end.
@inproceedings{NIME20_34, author = {Cavdir, Doga and Wang, Ge}, title = {Felt Sound: A Shared Musical Experience for the Deaf and Hard of Hearing}, pages = {176--181}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper34.pdf}, presentation-video = {https://youtu.be/JCvlHu4UaZ0} }
-
Sasha Leitman. 2020. Sound Based Sensors for NIMEs. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 182–187.
Download PDFThis paper examines the use of Sound Sensors and audio as input material for New Interfaces for Musical Expression (NIMEs), exploring the unique affordances and character of the interactions and instruments that leverage it. Examples of previous work in the literature that use audio as sensor input data are examined for insights into how the use of Sound Sensors provides unique opportunities within the NIME context. We present the results of a user study comparing sound-based sensors to other sensing modalities within the context of controlling parameters. The study suggests that the use of Sound Sensors can enhance gestural flexibility and nuance but that they also present challenges in accuracy and repeatability.
@inproceedings{NIME20_35, author = {Leitman, Sasha}, title = {Sound Based Sensors for NIMEs}, pages = {182--187}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper35.pdf} }
-
Yuma Ikawa and Akihiro Matsuura. 2020. Playful Audio-Visual Interaction with Spheroids . Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 188–189.
Download PDFThis paper presents a novel interactive system for creating audio-visual expressions on a tabletop display by dynamically manipulating solids of revolution called spheroids. The four types of basic spinning and rolling movements of spheroids are recognized from physical quantities such as the contact area, the location of the centroid, the (angular) velocity, and the curvature of the locus, all obtained from sensor data on the display. They are then used for interactively generating audio-visual effects that match each of the movements. We developed digital content that integrates these functionalities and enables composition and live performance through manipulation of spheroids. (A toy feature-threshold classifier illustrating this kind of recognition follows the BibTeX entry below.)
@inproceedings{NIME20_36, author = {Ikawa, Yuma and Matsuura, Akihiro}, title = {Playful Audio-Visual Interaction with Spheroids }, pages = {188--189}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper36.pdf} }
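Purely as a toy illustration of recognizing manipulation types from the physical quantities listed in the abstract, the sketch below thresholds contact area, centroid speed, angular velocity, and locus curvature; the thresholds and class labels are invented, and the paper's actual recognition may work quite differently.

```python
# Toy sketch: classify a spheroid manipulation from coarse physical features
# (contact area, centroid speed, angular velocity, locus curvature).
# Thresholds and labels are invented for illustration only.
def classify_movement(contact_area, centroid_speed, angular_velocity, curvature):
    spinning = angular_velocity > 5.0 and centroid_speed < 0.05
    rolling = centroid_speed >= 0.05
    if spinning and contact_area < 2.0:
        return "spin_on_tip"
    if spinning:
        return "spin_on_side"
    if rolling and curvature > 0.5:
        return "roll_curved"
    if rolling:
        return "roll_straight"
    return "at_rest"

print(classify_movement(1.2, 0.01, 8.0, 0.1))   # -> spin_on_tip
print(classify_movement(4.0, 0.30, 1.0, 0.8))   # -> roll_curved
```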
-
Sihwa Park. 2020. Collaborative Mobile Instruments in a Shared AR Space: a Case of ARLooper. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 190–195.
Download PDFThis paper presents ARLooper, an augmented reality mobile interface that allows multiple users to record sound and perform together in a shared AR space. ARLooper is an attempt to explore the potential of collaborative mobile AR instruments in supporting non-verbal communication for musical performances. With ARLooper, the user can record, manipulate, and play sounds visualized as 3D waveforms in an AR space. ARLooper provides a shared AR environment wherein multiple users can observe each other’s activities in real time, helping to increase the understanding of collaborative contexts. This paper provides the background of the research and the design and technical implementation of ARLooper, followed by a user study.
@inproceedings{NIME20_37, author = {Park, Sihwa}, title = {Collaborative Mobile Instruments in a Shared AR Space: a Case of ARLooper}, pages = {190--195}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper37.pdf}, presentation-video = {https://youtu.be/Trw4epKeUbM} }
-
Diemo Schwarz, Abby Wanyu Liu, and Frederic Bevilacqua. 2020. A Survey on the Use of 2D Touch Interfaces for Musical Expression. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 196–201.
Download PDFExpressive 2D multi-touch interfaces have in recent years moved from research prototypes to industrial products, from repurposed generic computer input devices to controllers specially designed for musical expression. A host of practitioners use this type of device in many different ways, with different gestures and sound synthesis or transformation methods. In order to get an overview of existing and desired usages, we launched an on-line survey that collected 37 answers from practitioners in and outside of academic and design communities. In the survey we inquired about the participants’ devices, their strengths and weaknesses, the layout of control dimensions, the used gestures and mappings, the synthesis software or hardware, and the use of audio descriptors and machine learning. The results can inform the design of future interfaces, gesture analysis and mapping, and give directions for the need and use of machine learning for user adaptation.
@inproceedings{NIME20_38, author = {Schwarz, Diemo and Liu, Abby Wanyu and Bevilacqua, Frederic}, title = {A Survey on the Use of 2D Touch Interfaces for Musical Expression}, pages = {196--201}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper38.pdf}, presentation-video = {https://youtu.be/eE8I3mecaB8} }
-
Harri L Renney, Tom Mitchell, and Benedict Gaster. 2020. There and Back Again: The Practicality of GPU Accelerated Digital Audio. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 202–207.
Download PDFGeneral-Purpose GPU computing is becoming an increasingly viable option for acceleration, including in the audio domain. Although it can improve performance, the intrinsic nature of a device like the GPU involves data transfers and execution commands which require time to complete. Therefore, there is an understandable caution concerning the overhead involved with using the GPU for audio computation. This paper aims to clarify the limitations by presenting a performance benchmarking suite. The benchmarks utilize OpenCL and CUDA across various tests to highlight the considerations and limitations of processing audio in the GPU environment. The benchmarking suite has been used to gather a collection of results across various hardware. Salient results have been reviewed in order to highlight the benefits and limitations of the GPU for digital audio. The results in this work show that the minimal GPU overhead fits into the real-time audio requirements provided the buffer size is selected carefully. The baseline overhead is shown to be roughly 0.1 ms, depending on the GPU. This means buffer sizes of 8 and above are completed within the allocated time frame. Results from more demanding tests, involving physical modelling synthesis, demonstrated that a balance was needed between meeting the sample rate and keeping within limits for latency and jitter. Buffer sizes from 1 to 16 failed to sustain the sample rate, whilst buffer sizes 512 to 32768 exceeded either latency or jitter limits. Buffer sizes in between these ranges, such as 256, satisfied the sample rate, latency and jitter requirements chosen for this paper. (A back-of-the-envelope feasibility check in this spirit follows the BibTeX entry below.)
@inproceedings{NIME20_39, author = {Renney, Harri L and Mitchell, Tom and Gaster, Benedict}, title = {There and Back Again: The Practicality of GPU Accelerated Digital Audio}, pages = {202--207}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper39.pdf}, presentation-video = {https://youtu.be/xAVEHJZRIx0} }
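A quick way to see the shape of these results is to compare each buffer size's real-time deadline (buffer length divided by sample rate) against a fixed per-buffer overhead plus a per-frame processing cost. The 0.1 ms overhead figure is taken from the abstract; the per-frame cost and sample rate below are assumptions for illustration only. Small buffers fail because the fixed overhead alone exceeds their deadline, while large buffers meet the rate but add output latency.

```python
# Back-of-the-envelope check of GPU audio feasibility per buffer size:
# a buffer of N frames at 48 kHz must be produced within N / 48000 seconds,
# and every round trip pays a fixed transfer/launch overhead (~0.1 ms per the paper).
SAMPLE_RATE = 48_000
OVERHEAD_S = 0.1e-3            # per-buffer GPU transfer/launch overhead (from the abstract)
KERNEL_S_PER_FRAME = 1.5e-6    # assumed per-frame processing cost (illustrative)

for frames in (4, 8, 64, 256, 2048):
    deadline = frames / SAMPLE_RATE
    cost = OVERHEAD_S + frames * KERNEL_S_PER_FRAME
    latency_ms = deadline * 1000          # added output latency of this buffer
    ok = cost < deadline
    print(f"{frames:5d} frames: deadline {deadline*1e3:6.3f} ms, "
          f"cost {cost*1e3:6.3f} ms, latency {latency_ms:6.3f} ms, "
          f"{'meets rate' if ok else 'misses rate'}")
```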
-
Tim Shaw and John Bowers. 2020. Ambulation: Exploring Listening Technologies for an Extended Sound Walking Practice. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 23–28.
Download PDFAmbulation is a sound walk that uses field recording techniques and listening technologies to create a walking performance using environmental sound. Ambulation engages with the act of recording as an improvised performance in response to the soundscapes it is presented within. In this paper we describe the work and place it in relationship to other artists engaged with field recording and extended sound walking practices. We give technical details of the Ambulation system we developed as part of the creation of the piece, and conclude with a collection of observations that emerged from the project. The research around the development and presentation of Ambulation contributes to the idea of field recording as a live, procedural practice, moving away from the idea of simply transporting documentary material from one place to another. We show how an open, improvisational approach to technologically supported sound walking enables rich and unexpected results to occur and how this way of working can contribute to NIME design and thinking.
@inproceedings{NIME20_4, author = {Shaw, Tim and Bowers, John}, title = {Ambulation: Exploring Listening Technologies for an Extended Sound Walking Practice}, pages = {23--28}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper4.pdf}, presentation-video = {https://youtu.be/dDXkNnQXdN4} }
-
Gus Xia, Daniel Chin, Yian Zhang, Tianyu Zhang, and Junbo Zhao. 2020. Interactive Rainbow Score: A Visual-centered Multimodal Flute Tutoring System. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 208–213.
Download PDFLearning to play an instrument is intrinsically multimodal, and we have seen a trend of applying visual and haptic feedback in music games and computer-aided music tutoring systems. However, most current systems are still designed to master individual pieces of music; it is unclear how well the learned skills can be generalized to new pieces. We aim to explore this question. In this study, we contribute Interactive Rainbow Score, an interactive visual system to boost the learning of sight-playing, the general musical skill to read music and map the visual representations to performance motions. The key design of Interactive Rainbow Score is to associate pitches (and the corresponding motions) with colored notation and further strengthen such association via real-time interactions. Quantitative results show that the interactive feature on average increases the learning efficiency by 31.1%. Further analysis indicates that it is critical to apply the interaction in the early period of learning.
@inproceedings{NIME20_40, author = {Xia, Gus and Chin, Daniel and Zhang, Yian and Zhang, Tianyu and Zhao, Junbo}, title = {Interactive Rainbow Score: A Visual-centered Multimodal Flute Tutoring System}, pages = {208--213}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper40.pdf} }
-
Nicola Davanzo and Federico Avanzini. 2020. A Dimension Space for the Evaluation of Accessible Digital Musical Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 214–220.
Download PDFResearch on Accessible Digital Musical Instruments (ADMIs) has received growing attention over the past decades, carving out an increasingly large space in the literature. Despite the recent publication of state-of-the-art review works, there are still few systematic studies of ADMI design analysis. In this paper we propose a formal tool to explore the main design aspects of ADMIs based on Dimension Space Analysis, a well-established methodology in the NIME literature that generates an effective visual representation of the design space. We therefore propose a set of relevant dimensions, based both on categories proposed in recent works in the research context and on original contributions. We then demonstrate its applicability by selecting a set of relevant case studies and analyzing a sample set of ADMIs found in the literature.
@inproceedings{NIME20_41, author = {Davanzo, Nicola and Avanzini, Federico}, title = {A Dimension Space for the Evaluation of Accessible Digital Musical Instruments}, pages = {214--220}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper41.pdf}, presentation-video = {https://youtu.be/pJlB5k8TV9M} }
-
Adam Pultz Melbye and Halldor A Ulfarsson. 2020. Sculpting the behaviour of the Feedback-Actuated Augmented Bass: Design strategies for subtle manipulations of string feedback using simple adaptive algorithms. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 221–226.
Download PDFThis paper describes physical and digital design strategies for the Feedback-Actuated Augmented Bass - a self-contained feedback double bass with embedded DSP capabilities. A primary goal of the research project is to create an instrument that responds well to the use of extended playing techniques and can manifest complex harmonic spectra while retaining the feel and sonic fingerprint of an acoustic double bass. While the physical configuration of the instrument builds on similar feedback string instruments being developed in recent years, this project focuses on modifying the feedback behaviour through low-level audio feature extractions coupled to computationally lightweight filtering and amplitude management algorithms. We discuss these adaptive and time-variant processing strategies and how we apply them in sculpting the system’s dynamic and complex behaviour to our liking.
@inproceedings{NIME20_42, author = {Melbye, Adam Pultz and Ulfarsson, Halldor A}, title = {Sculpting the behaviour of the Feedback-Actuated Augmented Bass: Design strategies for subtle manipulations of string feedback using simple adaptive algorithms}, pages = {221--226}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper42.pdf}, presentation-video = {https://youtu.be/jXePge1MS8A} }
-
Gwendal Le Vaillant, Thierry Dutoit, and Rudi Giot. 2020. Analytic vs. holistic approaches for the live search of sound presets using graphical interpolation. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 227–232.
Download PDFThe comparative study presented in this paper focuses on two approaches for the search of sound presets using a specific geometric touch app. The first approach is based on independent sliders on screen and is called analytic. The second is based on interpolation between presets represented by polygons on screen and is called holistic. Participants had to listen to, memorize, and search for sound presets characterized by four parameters. Ten different configurations of sound synthesis and processing were presented to each participant, once for each approach. The performance scores of 28 participants (not including early testers) were computed using two measured values: the search duration, and the parametric distance between the reference and answered presets. Compared to the analytic sliders-based interface, the holistic interpolation-based interface demonstrated a significant performance improvement for 60% of sound synthesizers. The other 40% led to equivalent results for the analytic and holistic interfaces. Using sliders, expert users performed nearly as well as they did with interpolation. Beginners and intermediate users struggled more with sliders, while the interpolation allowed them to get quite close to experts’ results.
@inproceedings{NIME20_43, author = {Le Vaillant, Gwendal and Dutoit, Thierry and Giot, Rudi}, title = {Analytic vs. holistic approaches for the live search of sound presets using graphical interpolation}, pages = {227--232}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper43.pdf}, presentation-video = {https://youtu.be/Korw3J_QvQE} }
-
Chase Mitchusson. 2020. Indeterminate Sample Sequencing in Virtual Reality. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 233–236.
Download PDFThe purpose of this project is to develop an interface for writing and performing music using sequencers in virtual reality (VR). The VR sequencer deals with chance-based operations to select audio clips for playback and spatial orientation-based rhythm and melody generation, while incorporating three-dimensional (3-D) objects as omnidirectional playheads. Spheres which grow from a variable minimum size to a variable maximum size at a variable speed, constantly looping, represent the passage of time in this VR sequencer. The 3-D assets which represent samples are actually sample containers that come in six common dice shapes. As the dice come into contact with a sphere, their samples are triggered to play. This behavior mimics digital audio workstation (DAW) playheads reading MIDI left-to-right in popular professional and consumer software sequencers. To incorporate height into VR music making, the VR sequencer is capable of generating terrain at the press of a button. Each terrain will gradually change, creating the possibility for the dice to roll on their own. Audio effects are built in to each scene and mapped to terrain parameters, creating another opportunity for chance operations in the music making process. The chance-based sample selection, spatial orientation-defined rhythms, and variable terrain mapped to audio effects lead to indeterminacy in performance and replication of a single piece of music. This project aims to give the gaming community access to experimental music making by means of consumer virtual reality hardware.
@inproceedings{NIME20_44, author = {Mitchusson, Chase}, title = {Indeterminate Sample Sequencing in Virtual Reality}, pages = {233--236}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper44.pdf} }
-
Rebecca Fiebrink and Laetitia Sonami. 2020. Reflections on Eight Years of Instrument Creation with Machine Learning. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 237–242.
Download PDFMachine learning (ML) has been used to create mappings for digital musical instruments for over twenty-five years, and numerous ML toolkits have been developed for the NIME community. However, little published work has studied how ML has been used in sustained instrument building and performance practices. This paper examines the experiences of instrument builder and performer Laetitia Sonami, who has been using ML to build and refine her Spring Spyre instrument since 2012. Using Sonami’s current practice as a case study, this paper explores the utility, opportunities, and challenges involved in using ML in practice over many years. This paper also reports the perspective of Rebecca Fiebrink, the creator of the Wekinator ML tool used by Sonami, revealing how her work with Sonami has led to changes to the software and to her teaching. This paper thus contributes a deeper understanding of the value of ML for NIME practitioners, and it can inform design considerations for future ML toolkits as well as NIME pedagogy. Further, it provides new perspectives on familiar NIME conversations about mapping strategies, expressivity, and control, informed by a dedicated practice over many years.
@inproceedings{NIME20_45, author = {Fiebrink, Rebecca and Sonami, Laetitia}, title = {Reflections on Eight Years of Instrument Creation with Machine Learning}, pages = {237--242}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper45.pdf}, presentation-video = {https://youtu.be/EvXZ9NayZhA} }
-
Alex Lucas, Miguel Ortiz, and Franziska Schroeder. 2020. The Longevity of Bespoke, Accessible Music Technology: A Case for Community. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 243–248.
Download PDFBased on the experience garnered through a longitudinal ethnographic study, the authors reflect on the practice of designing and fabricating bespoke, accessible music technologies. Of particular focus are the social, technical and environmental factors at play which make the provision of such technology a reality. The authors suggest ways to achieve long-term, sustained use. Seemingly, those involved in its design, fabrication and use could benefit from a concerted effort to share resources, knowledge and skill as a mobilised community of practitioners.
@inproceedings{NIME20_46, author = {Lucas, Alex and Ortiz, Miguel and Schroeder, Franziska}, title = {The Longevity of Bespoke, Accessible Music Technology: A Case for Community}, pages = {243--248}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper46.pdf}, presentation-video = {https://youtu.be/cLguyuZ9weI} }
-
Ivica I Bukvic, Disha Sardana, and Woohun Joo. 2020. New Interfaces for Spatial Musical Expression. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 249–254.
Download PDFWith the proliferation of venues equipped with high-density loudspeaker arrays, there is a growing interest in developing new interfaces for spatial musical expression (NISME). Of particular interest are interfaces that focus on the emancipation of the spatial domain as the primary dimension for musical expression. Here we present the Monet NISME, which leverages a multitouch pressure-sensitive surface and the D4 library’s spatial mask, thereby allowing for a unique approach to interactive spatialization. Further, we present a study with 22 participants designed to assess its usefulness and compare it to the Locus, a NISME introduced in 2019 as part of a localization study and built on the same design principles of natural gestural interaction with the spatial content. Lastly, we briefly discuss the utilization of both NISMEs in two artistic performances and propose a set of guidelines for further exploration in the NISME domain.
@inproceedings{NIME20_47, author = {Bukvic, Ivica I and Sardana, Disha and Joo, Woohun}, title = {New Interfaces for Spatial Musical Expression}, pages = {249--254}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper47.pdf}, presentation-video = {https://youtu.be/GQ0552Lc1rw} }
-
Mark Durham. 2020. Inhabiting the Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 255–258.
Download PDFThis study presents an ecosystemic approach to music interaction, through the practice-based development of a mixed reality installation artwork. It fuses a generative, immersive audio composition with augmented reality visualisation, within an architectural space as part of a blended experience. Participants are encouraged to explore and interact with this combination of elements through physical engagement, to then develop an understanding of how the blending of real and virtual space occurs as the installation unfolds. The sonic layer forms a link between the two, as a three-dimensional sound composition. Connections in the system allow for multiple streams of data to run between the layers, which are used for the real-time modulation of parameters. These feedback mechanisms form a complete loop between the participant in real space, soundscape, and mixed reality visualisation, providing a participant mediated experience that exists somewhere between creator and observer.
@inproceedings{NIME20_48, author = {Durham, Mark}, title = {Inhabiting the Instrument}, pages = {255--258}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper48.pdf} }
-
Chris Nash. 2020. Crowd-driven Music: Interactive and Generative Approaches using Machine Vision and Manhattan. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 259–264.
Download PDFThis paper details technologies and artistic approaches to crowd-driven music, discussed in the context of a live public installation in which activity in a public space (a busy railway platform) is used to drive the automated composition and performance of music. The approach presented uses realtime machine vision applied to a live video feed of a scene, from which detected objects and people are fed into Manhattan (Nash, 2014), a digital music notation that integrates sequencing and programming to support the live creation of complex musical works that combine static, algorithmic, and interactive elements. The paper discusses the technical details of the system and artistic development of specific musical works, introducing novel techniques for mapping chaotic systems to musical expression and exploring issues of agency, aesthetic, accessibility and adaptability relating to composing interactive music for crowds and public spaces. In particular, performances as part of an installation for BBC Music Day 2018 are described. The paper subsequently details a practical workshop, delivered digitally, exploring the development of interactive performances in which the audience or general public actively or passively control live generation of a musical piece. Exercises support discussions on technical, aesthetic, and ontological issues arising from the identification and mapping of structure, order, and meaning in non-musical domains to analogous concepts in musical expression. Materials for the workshop are available freely with the Manhattan software.
@inproceedings{NIME20_49, author = {Nash, Chris}, title = {Crowd-driven Music: Interactive and Generative Approaches using Machine Vision and Manhattan}, pages = {259--264}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper49.pdf}, presentation-video = {https://youtu.be/DHIowP2lOsA} }
-
Michael J Krzyzaniak. 2020. Words to Music Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 29–34.
Download PDFThis paper discusses the design of a musical synthesizer that takes words as input, and attempts to generate music that somehow underscores those words. This is considered as a tool for sound designers who could, for example, enter dialogue from a film script and generate appropriate background music. The synthesizer uses emotional valence and arousal as a common representation between words and music. It draws on previous studies that relate words and musical features to valence and arousal. The synthesizer was evaluated with a user study. Participants listened to music generated by the synthesizer, and described the music with words. The arousal of the words they entered was highly correlated with the intended arousal of the music. The same was, surprisingly, not true for valence. The synthesizer is online, at [redacted URL].
@inproceedings{NIME20_5, author = {Krzyzaniak, Michael J}, title = {Words to Music Synthesis}, pages = {29--34}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper5.pdf} }
-
Alex Mclean. 2020. Algorithmic Pattern. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 265–270.
Download PDFThis paper brings together two main perspectives on algorithmic pattern. First, the writing of musical patterns in live coding performance, and second, the weaving of patterns in textiles. In both cases, algorithmic pattern is an interface between the human and the outcome, where small changes have far-reaching impact on the results. By bringing contemporary live coding and ancient textile approaches together, we reach a common view of pattern as algorithmic movement (e.g. looping, shifting, reflecting, interfering) in the making of things. This works beyond the usual definition of pattern used in musical interfaces, of mere repeating sequences. We conclude by considering the place of algorithmic pattern in a wider activity of making.
@inproceedings{NIME20_50, author = {Mclean, Alex}, title = {Algorithmic Pattern}, pages = {265--270}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper50.pdf}, presentation-video = {https://youtu.be/X9AkOAEDV08} }
-
Louis McCallum and Mick S Grierson. 2020. Supporting Interactive Machine Learning Approaches to Building Musical Instruments in the Browser. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 271–272.
Download PDFInteractive machine learning (IML) is an approach to building interactive systems, including DMIs, focusing on iterative end-user data provision and direct evaluation. This paper describes the implementation of a JavaScript library, encapsulating many of the boilerplate needs of building IML systems for creative tasks with minimal code inclusion and a low barrier to entry. Further, we present a set of complementary Audio Worklet-backed instruments to allow for in-browser creation of new musical systems able to run concurrently with various computationally expensive feature extractors and lightweight machine learning models without the interference often seen in interactive Web Audio applications.
@inproceedings{NIME20_51, author = {McCallum, Louis and Grierson, Mick S}, title = {Supporting Interactive Machine Learning Approaches to Building Musical Instruments in the Browser}, pages = {271--272}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper51.pdf} }
-
Mathias S Kirkegaard, Mathias Bredholt, Christian Frisson, and Marcelo Wanderley. 2020. TorqueTuner: A self contained module for designing rotary haptic force feedback for digital musical instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 273–278.
Download PDFTorqueTuner is an embedded module that allows Digital Musical Instrument (DMI) designers to map sensors to parameters of haptic effects and dynamically modify rotary force feedback in real-time. We embedded inside TorqueTuner a collection of haptic effects (Wall, Magnet, Detents, Spring, Friction, Spin, Free) and a bi-directional interface through libmapper, a software library for making connections between data signals on a shared network. To increase affordability and portability of force-feedback implementations in DMI design, we designed our platform to be wireless, self-contained and built from commercially available components. To provide examples of modularity and portability, we integrated TorqueTuner into a standalone haptic knob and into an existing DMI, the T-Stick. We implemented 3 musical applications (Pitch wheel, Turntable and Exciter), by mapping sensors to sound synthesis in audio programming environment SuperCollider. While the original goal was to simulate the haptic feedback associated with turning a knob, we found that the platform allows for further expanding interaction possibilities in application scenarios where rotary control is familiar.
@inproceedings{NIME20_52, author = {Kirkegaard, Mathias S and Bredholt, Mathias and Frisson, Christian and Wanderley, Marcelo}, title = {TorqueTuner: A self contained module for designing rotary haptic force feedback for digital musical instruments}, pages = {273--278}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper52.pdf}, presentation-video = {https://youtu.be/V8WDMbuX9QA} }
-
Corey J Ford and Chris Nash. 2020. An Iterative Design ‘by proxy’ Method for Developing Educational Music Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 279–284.
Download PDFIterative design methods involving children and educators are difficult to conduct, given both the ethical implications and time commitments understandably required. The qualitative design process presented here recruits introductory teacher training students, towards discovering useful design insights relevant to music education technologies “by proxy”. Therefore, some of the barriers present in child-computer interaction research are avoided. As an example, the method is applied to the creation of a block-based music notation system, named Codetta. Building upon successful educational technologies that intersect both music and computer programming, Codetta seeks to enable child composition, whilst aiding generalist educators’ confidence in teaching music.
@inproceedings{NIME20_53, author = {Ford, Corey J and Nash, Chris}, title = {An Iterative Design ‘by proxy’ Method for Developing Educational Music Interfaces}, pages = {279--284}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper53.pdf}, presentation-video = {https://youtu.be/fPbZMQ5LEmk} }
-
Filipe Calegario, Marcelo Wanderley, João Tragtenberg, et al. 2020. Probatio 1.0: collaborative development of a toolkit for functional DMI prototypes. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 285–290.
Download PDFProbatio is an open-source toolkit for prototyping new digital musical instruments created in 2016. Based on a morphological chart of postures and controls of musical instruments, it comprises a set of blocks, bases, hubs, and supports that, when combined, allows designers, artists, and musicians to experiment with different input devices for musical interaction in different positions and postures. Several musicians have used the system, and based on these past experiences, we assembled a list of improvements to implement version 1.0 of the toolkit through a unique international partnership between two laboratories in Brazil and Canada. In this paper, we present the original toolkit and its use so far, summarize the main lessons learned from musicians using it, and present the requirements behind, and the final design of, v1.0 of the project. We also detail the work developed in digital fabrication using two different techniques: laser cutting and 3D printing, comparing their pros and cons. We finally discuss the opportunities and challenges of fully sharing the project online and replicating its parts in both countries.
@inproceedings{NIME20_54, author = {Calegario, Filipe and Wanderley, Marcelo and Tragtenberg, João and Meneses, Eduardo and Wang, Johnty and Sullivan, John and Franco, Ivan and Kirkegaard, Mathias S and Bredholt, Mathias and Rohs, Josh}, title = {Probatio 1.0: collaborative development of a toolkit for functional DMI prototypes}, pages = {285--290}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper54.pdf}, presentation-video = {https://youtu.be/jkFnZZUA3xs} }
-
Travis J West, Marcelo Wanderley, and Baptiste Caramiaux. 2020. Making Mappings: Examining the Design Process. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 291–296.
Download PDFWe conducted a study which examines mappings from a relatively unexplored perspective: how they are made. Twelve skilled NIME users designed a mapping from a T-Stick to a subtractive synthesizer, and were interviewed about their approach to mapping design. We present a thematic analysis of the interviews, with reference to data recordings captured while the designers worked. Our results suggest that the mapping design process is an iterative process that alternates between two working modes: diffuse exploration and directed experimentation.
@inproceedings{NIME20_55, author = {West, Travis J and Wanderley, Marcelo and Caramiaux, Baptiste}, title = {Making Mappings: Examining the Design Process}, pages = {291--296}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper55.pdf}, presentation-video = {https://youtu.be/aaoResYjqmE} }
-
Michael Sidler, Matthew C Bisson, Jordan Grotz, and Scott Barton. 2020. Parthenope: A Robotic Musical Siren. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 297–300.
Download PDFParthenope is a robotic musical siren developed to produce unique timbres and sonic gestures. Parthenope uses perforated spinning disks through which air is directed to produce sound. Computer-control of disk speed and air flow in conjunction with a variety of nozzles allow pitches to be precisely produced at different volumes. The instrument is controlled via Open Sound Control (OSC) messages sent over an ethernet connection and can interface with common DAWs and physical controllers. Parthenope is capable of microtonal tuning, portamenti, rapid and precise articulation (and thus complex rhythms) and distinct timbres that result from its aerophonic character. It occupies a unique place among robotic musical instruments.
@inproceedings{NIME20_56, author = {Sidler, Michael and Bisson, Matthew C and Grotz, Jordan and Barton, Scott}, title = {Parthenope: A Robotic Musical Siren}, pages = {297--300}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper56.pdf}, presentation-video = {https://youtu.be/HQuR0aBJ70Y} }
-
Steven Kemper. 2020. Tremolo-Harp: A Vibration-Motor Actuated Robotic String Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 301–304.
Download PDFThe Tremolo-Harp is a twelve-stringed robotic instrument, where each string is actuated with a DC vibration motor to produce a mechatronic “tremolo” effect. It was inspired by instruments and musical styles that employ tremolo as a primary performance technique, including the hammered dulcimer, pipa, banjo, flamenco guitar, and surf rock guitar. Additionally, the Tremolo-Harp is designed to produce long, sustained textures and continuous dynamic variation. These capabilities represent a different approach from the majority of existing robotic string instruments, which tend to focus on actuation speed and rhythmic precision. The composition Tremolo-Harp Study 1 (2019) presents an initial exploration of the Tremolo-Harp’s unique timbre and capability for continuous dynamic variation.
@inproceedings{NIME20_57, author = {Kemper, Steven}, title = {Tremolo-Harp: A Vibration-Motor Actuated Robotic String Instrument}, pages = {301--304}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper57.pdf} }
-
Atsuya Kobayashi, Reo Anzai, and Nao Tokui. 2020. ExSampling: a system for the real-time ensemble performance of field-recorded environmental sounds. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 305–308.
Download PDFWe propose ExSampling: an integrated system combining a recording application and a Deep Learning environment for the real-time musical performance of environmental sounds sampled by field recording. Automated sound mapping to Ableton Live tracks by Deep Learning enables field recording to be applied to real-time performance and creates interactions among the sound recorder, composers and performers.
@inproceedings{NIME20_58, author = {Kobayashi, Atsuya and Anzai, Reo and Tokui, Nao}, title = {ExSampling: a system for the real-time ensemble performance of field-recorded environmental sounds}, pages = {305--308}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper58.pdf} }
-
Juan Pablo Yepez Placencia, Jim Murphy, and Dale Carnegie. 2020. Designing an Expressive Pitch Shifting Mechanism for Mechatronic Chordophones. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 309–314.
Download PDFThe exploration of musical robots has been an area of interest due to the timbral and mechanical advantages they offer for music generation and performance. However, one of the greatest challenges in mechatronic music is to enable these robots to deliver a nuanced and expressive performance. This depends on their capability to integrate dynamics, articulation, and a variety of ornamental techniques while playing a given musical passage. In this paper we introduce a robot arm pitch shifter for a mechatronic monochord prototype. This is a fast, precise, and mechanically quiet system that enables sliding techniques during musical performance. We discuss the design and construction process, as well as the system’s advantages and restrictions. We also review the quantitative evaluation process used to assess if the instrument meets the design requirements. This process reveals how the pitch shifter outperforms existing configurations, and potential areas of improvement for future work.
@inproceedings{NIME20_59, author = {Yepez Placencia, Juan Pablo and Murphy, Jim and Carnegie, Dale}, title = {Designing an Expressive Pitch Shifting Mechanism for Mechatronic Chordophones}, pages = {309--314}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper59.pdf}, presentation-video = {https://youtu.be/rpX8LTZd-Zs} }
-
Marcel Ehrhardt, Max Neupert, and Clemens Wegener. 2020. Piezoelectric strings as a musical interface. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 35–36.
Download PDFFlexible strings with piezoelectric properties have been developed but, to date, not evaluated for use as part of a musical instrument. This paper assesses the properties of these new fibers, looking at their possibilities for NIME applications.
@inproceedings{NIME20_6, author = {Ehrhardt, Marcel and Neupert, Max and Wegener, Clemens}, title = {Piezoelectric strings as a musical interface}, pages = {35--36}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper6.pdf} }
-
Alon A Ilsar, Matthew Hughes, and Andrew Johnston. 2020. NIME or Mime: A Sound-First Approach to Developing an Audio-Visual Gestural Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 315–320.
Download PDFThis paper outlines the development process of an audio-visual gestural instrument—the AirSticks—and elaborates on the role ‘miming’ has played in the formation of new mappings for the instrument. The AirSticks, although fully-functioning, were used as props in live performances in order to evaluate potential mapping strategies that were later implemented for real. This use of mime when designing Digital Musical Instruments (DMIs) can help overcome choice paralysis, break from established habits, and liberate creators to realise more meaningful parameter mappings. Bringing this process into an interactive performance environment acknowledges the audience as stakeholders in the design of these instruments, and also leads us to reflect upon the beliefs and assumptions made by an audience when engaging with the performance of such ‘magical’ devices. This paper establishes two opposing strategies to parameter mapping, ‘movement-first’ mapping, and the less conventional ‘sound-first’ mapping that incorporates mime. We discuss the performance ‘One Five Nine’, its transformation from a partial mime into a fully interactive presentation, and the influence this process has had on the outcome of the performance and the AirSticks as a whole.
@inproceedings{NIME20_60, author = {Ilsar, Alon A and Hughes, Matthew and Johnston, Andrew}, title = {NIME or Mime: A Sound-First Approach to Developing an Audio-Visual Gestural Instrument}, pages = {315--320}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper60.pdf}, presentation-video = {https://youtu.be/ZFQKKI3dFhE} }
-
Matthew Hughes and Andrew Johnston. 2020. URack: Audio-visual Composition and Performance using Unity and VCV Rack. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 321–322.
Download PDFThis demonstration presents URack, a custom-built audio-visual composition and performance environment that combines the Unity video-game engine with the VCV Rack software modular synthesiser. In alternative cross-modal solutions, a compromise is likely made in either the sonic or visual output, or the consistency and intuitiveness of the composition environment. By integrating control mechanisms for graphics inside VCV Rack, the music-making metaphors used to build a patch are extended into the visual domain. Users familiar with modular synthesizers are immediately able to start building high-fidelity graphics using the same control voltages regularly used to compose sound. Without needing to interact with two separate development environments, languages or metaphorical domains, users are encouraged to freely, creatively and enjoyably construct their own highly-integrated audio-visual instruments. This demonstration will showcase the construction of an audio-visual patch using URack, focusing on the integration of flexible GPU particle systems present in Unity with the vast library of creative audio composition modules inside VCV.
@inproceedings{NIME20_61, author = {Hughes, Matthew and Johnston, Andrew}, title = {URack: Audio-visual Composition and Performance using Unity and VCV Rack}, pages = {321--322}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper61.pdf} }
-
Irmandy Wicaksono and Joseph Paradiso. 2020. KnittedKeyboard: Digital Knitting of Electronic Textile Musical Controllers. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 323–326.
Download PDFIn this work, we have developed a textile-based interactive surface fabricated through digital knitting technology. Our prototype explores intarsia, interlock patterning, and a collection of functional and non-functional fibers to create a piano-pattern textile for expressive and virtuosic sonic interaction. We combined conductive, thermochromic, and composite yarns with high-flex polyester yarns to develop KnittedKeyboard with its soft physical properties and responsive sensing and display capabilities. The individual and combination of each key could simultaneously sense discrete touch, as well as continuous proximity and pressure. The KnittedKeyboard enables performers to experience fabric-based multimodal interaction as they explore the seamless texture and materiality of the electronic textile.
@inproceedings{NIME20_62, author = {Wicaksono, Irmandy and Paradiso, Joseph}, title = {KnittedKeyboard: Digital Knitting of Electronic Textile Musical Controllers}, pages = {323--326}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper62.pdf} }
-
Olivier Capra, Florent Berthaut, and Laurent Grisoni. 2020. A Taxonomy of Spectator Experience Augmentation Techniques. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 327–330.
Download PDFIn the context of artistic performances, the complexity and diversity of digital interfaces may impair the spectator experience, in particular hiding the engagement and virtuosity of the performers. Artists and researchers have made attempts at solving this by augmenting performances with additional information provided through visual, haptic or sonic modalities. However, the proposed techniques have not yet been formalized and we believe a clarification of their many aspects is necessary for future research. In this paper, we propose a taxonomy for what we define as Spectator Experience Augmentation Techniques (SEATs). We use it to analyse existing techniques and we demonstrate how it can serve as a basis for the exploration of novel ones.
@inproceedings{NIME20_63, author = {Capra, Olivier and Berthaut, Florent and Grisoni, Laurent}, title = {A Taxonomy of Spectator Experience Augmentation Techniques}, pages = {327--330}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper63.pdf} }
-
Sourya Sen, Koray Tahiroğlu, and Julia Lohmann. 2020. Sounding Brush: A Tablet based Musical Instrument for Drawing and Mark Making. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 331–336.
Download PDFExisting applications of mobile music tools are often concerned with the simulation of acoustic or digital musical instruments, extended with graphical representations of keys, pads, etc. Following an intensive review of existing tools and approaches to mobile music making, we implemented a digital drawing tool, employing a time-based graphical/gestural interface for music composition and performance. In this paper, we introduce our Sounding Brush project, through which we explore music making in various forms with the natural gestures of drawing and mark making on a tablet device. Subsequently, we present the design and development of the Sounding Brush application. Building on this project, we discuss the act of drawing as an activity that is not separate from the act of playing a musical instrument. Drawing is essentially the act of playing music by means of a continuous process of observation, individualisation and exploring time and space in a unique way.
@inproceedings{NIME20_64, author = {Sen, Sourya and Tahiroğlu, Koray and Lohmann, Julia}, title = {Sounding Brush: A Tablet based Musical Instrument for Drawing and Mark Making}, pages = {331--336}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper64.pdf}, presentation-video = {https://youtu.be/7RkGbyGM-Ho} }
-
Koray Tahiroğlu, Miranda Kastemaa, and Oskar Koli. 2020. Al-terity: Non-Rigid Musical Instrument with Artificial Intelligence Applied to Real-Time Audio Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 337–342.
Download PDFA deformable musical instrument can take numerous distinct shapes with its non-rigid features. Building an audio synthesis module for such interface behaviour can be challenging. In this paper, we present the Al-terity, a non-rigid musical instrument that comprises a deep learning model with a generative adversarial network architecture and uses it to generate audio samples for real-time audio synthesis. The particular deep learning model we use for this instrument was trained with an existing data set as input for purposes of further experimentation. The main benefits of the model used are the ability to produce the realistic range of timbres of the trained data set and the ability to generate new audio samples in real time, in the moment of playing, with the characteristics of sounds that the performer has never heard before. We argue that these advanced intelligence features on the audio synthesis level could allow us to explore performing music with particular response features that define the instrument’s digital idiomaticity and allow us to reinvent the instrument in the act of music performance.
@inproceedings{NIME20_65, author = {Tahiroğlu, Koray and Kastemaa, Miranda and Koli, Oskar}, title = {Al-terity: Non-Rigid Musical Instrument with Artificial Intelligence Applied to Real-Time Audio Synthesis}, pages = {337--342}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper65.pdf}, presentation-video = {https://youtu.be/giYxFovZAvQ} }
-
Chris Kiefer, Dan Overholt, and Alice Eldridge. 2020. Shaping the behaviour of feedback instruments with complexity-controlled gain dynamics. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 343–348.
Download PDFFeedback instruments offer radical new ways of engaging with instrument design and musicianship. They are defined by recurrent circulation of signals through the instrument, which give the instrument ‘a life of its own’ and a ’stimulating uncontrollability’. Arguably, the most interesting musical behaviour in these instruments happens when their dynamic complexity is maximised, without falling into saturating feedback. It is often challenging to keep the instrument in this zone; this research looks at algorithmic ways to manage the behaviour of feedback loops in order to make feedback instruments more playable and musical; to expand and maintain the ‘sweet spot’. We propose a solution that manages gain dynamics based on measurement of complexity, using a realtime implementation of the Effort to Compress algorithm. The system was evaluated with four musicians, each of whom have different variations of string-based feedback instruments, following an autobiographical design approach. Qualitative feedback was gathered, showing that the system was successful in modifying the behaviour of these instruments to allow easier access to edge transition zones, sometimes at the expense of losing some of the more compelling dynamics of the instruments. The basic efficacy of the system is evidenced by descriptive audio analysis. This paper is accompanied by a dataset of sounds collected during the study, and the open source software that was written to support the research.
@inproceedings{NIME20_66, author = {Kiefer, Chris and Overholt, Dan and Eldridge, Alice}, title = {Shaping the behaviour of feedback instruments with complexity-controlled gain dynamics}, pages = {343--348}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper66.pdf}, presentation-video = {https://youtu.be/sf6FwsUX-84} }
-
Duncan A.H. Williams. 2020. MINDMIX: Mapping of brain activity to congruent audio mixing features. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 349–352.
Download PDFBrain-computer interfacing (BCI) offers novel methods to facilitate participation in audio engineering, providing access for individuals who might otherwise be unable to take part (either due to lack of training, or physical disability). This paper describes the development of a BCI system for conscious, or ‘active’, control of parameters on an audio mixer by generation of synchronous MIDI Machine Control messages. The mapping between neurophysiological cues and audio parameters must be intuitive for a neophyte audience (i.e., one without prior training or the physical skills developed by professional audio engineers when working with tactile interfaces). The prototype is dubbed MINDMIX (a portmanteau of ‘mind’ and ‘mixer’), combining discrete and many-to-many mappings of audio mixer parameters and BCI control signals measured via electroencephalography (EEG). In future, specific evaluation of discrete mappings would be useful for iterative system design.
@inproceedings{NIME20_67, author = {Williams, Duncan A.H.}, title = {MINDMIX: Mapping of brain activity to congruent audio mixing features}, pages = {349--352}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper67.pdf} }
-
Marcel O DeSmith, Andrew Piepenbrink, and Ajay Kapur. 2020. SQUISHBOI: A Multidimensional Controller for Complex Musical Interactions using Machine Learning. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 353–356.
Download PDFWe present SQUISHBOI, a continuous touch controller for interacting with complex musical systems. An elastic rubber membrane forms the playing surface of the instrument, while machine learning is used for dimensionality reduction and gesture recognition. The membrane is stretched over a hollow shell which permits considerable depth excursion, with an array of distance sensors tracking the surface displacement from underneath. The inherent dynamics of the membrane lead to cross-coupling between nearby sensors, however we do not see this as a flaw or limitation. Instead we find this coupling gives structure to the playing techniques and mapping schemes chosen by the user. The instrument is best utilized as a tool for actively designing abstraction and forming a relative control structure within a given system, one which allows for intuitive gestural control beyond what can be accomplished with conventional musical controllers.
@inproceedings{NIME20_68, author = {DeSmith, Marcel O and Piepenbrink, Andrew and Kapur, Ajay}, title = {SQUISHBOI: A Multidimensional Controller for Complex Musical Interactions using Machine Learning}, pages = {353--356}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper68.pdf} }
-
Nick Bryan-Kinns, LI ZIJIN, and Xiaohua Sun. 2020. On Digital Platforms and AI for Music in the UK and China. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 357–360.
Download PDFDigital technologies play a fundamental role in New Interfaces for Musical Expression as well as music making and consumption more widely. This paper reports on two workshops with music professionals and researchers who undertook an initial exploration of the differences between digital platforms (software and online services) for music in the UK and China. Differences were found in primary target user groups of digital platforms in the UK and China as well as the stages of the culture creation cycle they were developed for. Reasons for the divergence of digital platforms include differences in culture, regulation, and infrastructure, as well as the inherent Western bias of software for music making such as Digital Audio Workstations. Using AI to bridge between Western and Chinese music traditions is suggested as an opportunity to address aspects of the divergent landscape of digital platforms for music inside and outside China.
@inproceedings{NIME20_69, author = {Bryan-Kinns, Nick and ZIJIN, LI and Sun, Xiaohua}, title = {On Digital Platforms and AI for Music in the UK and China}, pages = {357--360}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper69.pdf}, presentation-video = {https://youtu.be/c7nkCBBTnDA} }
-
Jean Chu and Jaewon Choi. 2020. Reinterpretation of Pottery as a Musical Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 37–38.
Download PDFDigitally integrating the materiality, form, and tactility in everyday objects (e.g., pottery) provides inspiration for new ways of musical expression and performance. In this project we reinterpret the creative process and aesthetic philosophy of pottery as algorithmic music to help users rediscover the latent story behind pottery through a synesthetic experience. Projects Mobius I and Mobius II illustrate two potential directions toward a musical interface, one focusing on the circular form, and the other, on graphical ornaments of pottery. Six conductive graphics on the pottery function as capacitive sensors while retaining their resemblance to traditional ornamental patterns in pottery. Offering pottery as a musical interface, we invite users to orchestrate algorithmic music by physically touching the different graphics.
@inproceedings{NIME20_7, author = {Chu, Jean and Choi, Jaewon}, title = {Reinterpretation of Pottery as a Musical Interface}, pages = {37--38}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper7.pdf} }
-
Anders Eskildsen and Mads Walther-Hansen. 2020. Force dynamics as a design framework for mid-air musical interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 361–366.
Download PDFIn this paper we adopt the theory of force dynamics in human cognition as a fundamental design principle for the development of mid-air musical interfaces. We argue that this principle can provide more intuitive user experiences when the interface does not provide direct haptic feedback – such as interfaces made with various gesture-tracking technologies. Grounded in five concepts from the theoretical literature on force dynamics in musical cognition, the paper presents a set of principles for interaction design focused on five force schemas: Path restraint, Containment restraint, Counter-force, Attraction, and Compulsion. We describe an initial set of examples that implement these principles using a Leap Motion sensor for gesture tracking and SuperCollider for interactive audio design. Finally, the paper presents a pilot experiment that provides initial ratings of intuitiveness in the user experience.
@inproceedings{NIME20_70, author = {Eskildsen, Anders and Walther-Hansen, Mads}, title = {Force dynamics as a design framework for mid-air musical interfaces}, pages = {361--366}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper70.pdf}, presentation-video = {https://youtu.be/REe967aGVN4} }
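The authors implement their force schemas with a Leap Motion sensor and SuperCollider; that code is not reproduced here. Purely as an illustration of one schema, the sketch below applies a simple Attraction rule to a normalized one-dimensional hand position before mapping it to a hypothetical filter cutoff, so positions near the attractor feel "sticky" even without haptic feedback. The attractor location, strength, and cutoff range are invented for the example.

```python
def attract(position, attractor=0.5, strength=0.6):
    """Pull a normalized hand position (0..1) toward an attractor point.

    strength = 0 leaves the position unchanged; strength = 1 snaps to the attractor.
    """
    return position + strength * (attractor - position)


def map_to_cutoff(position, low_hz=200.0, high_hz=4000.0):
    """Map the attracted position to a synthesis parameter, e.g. a filter cutoff."""
    return low_hz + attract(position) * (high_hz - low_hz)


if __name__ == "__main__":
    for raw in [0.0, 0.25, 0.5, 0.75, 1.0]:
        print(f"hand at {raw:.2f} -> cutoff {map_to_cutoff(raw):7.1f} Hz")
```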
-
Erik Nyström. 2020. Intra-Actions: Experiments with Velocity and Position in Continuous Controllers. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 367–368.
Download PDFContinuous MIDI controllers commonly output their position only, with no influence of the performative energy with which they were set. In this paper, creative uses of time as a parameter in continuous controller mapping are demonstrated: the speed of movement affects the position mapping and control output. A set of SuperCollider classes are presented, developed in the author’s practice in computer music, where they have been used together with commercial MIDI controllers. The creative applications employ various approaches and metaphors for scaling time, but also machine learning for recognising patterns. In the techniques, performer, controller and synthesis ‘intra-act’, to use Karen Barad’s term: because position and velocity are derived from the same data, sound output cannot be predicted without the temporal context of performance.
@inproceedings{NIME20_71, author = {Nyström, Erik}, title = {Intra-Actions: Experiments with Velocity and Position in Continuous Controllers}, pages = {367--368}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper71.pdf} }
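The paper's SuperCollider classes are not reproduced here, but the core idea, deriving velocity from the same position stream a controller already emits and letting it modulate the mapping, can be illustrated with a minimal Python sketch. The gain parameter and the particular scaling rule are invented for illustration.

```python
import time


class VelocityScaledControl:
    """Derive speed from successive controller positions and let it scale the output.

    Illustrative only: the paper's SuperCollider classes implement richer
    time-scaling strategies and pattern recognition than this sketch.
    """

    def __init__(self, gain=0.02):
        self.gain = gain
        self.last_value = None
        self.last_time = None

    def process(self, value_0_127):
        now = time.monotonic()
        position = value_0_127 / 127.0
        speed = 0.0
        if self.last_value is not None:
            dt = max(now - self.last_time, 1e-3)
            speed = abs(value_0_127 - self.last_value) / dt   # MIDI steps per second
        self.last_value, self.last_time = value_0_127, now
        # Fast gestures push the output beyond the static position mapping.
        return min(1.0, position * (1.0 + self.gain * speed))


if __name__ == "__main__":
    ctl = VelocityScaledControl()
    for v in [10, 40, 80, 81, 82]:
        print(f"cc={v:3d} -> output {ctl.process(v):.3f}")
        time.sleep(0.05)
```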
-
James Leonard and Andrea Giomi. 2020. Towards an Interactive Model-Based Sonification of Hand Gesture for Dance Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 369–374.
Download PDFThis paper presents ongoing research on the interactive sonification of hand gestures in dance performance. For this purpose, a conceptual framework and a multilayered mapping model derived from an experimental case study are proposed. The goal of this research is twofold. On the one hand, we aim to determine action-based perceptual invariants that allow us to establish pertinent relations between gesture qualities and sound features. On the other hand, we are interested in analysing how an interactive model-based sonification can provide useful and effective feedback for dance practitioners. From this point of view, our research explicitly addresses the convergence between the scientific understandings provided by the field of movement sonification and the traditional know-how developed over the years within the digital instrument and interaction design communities. A key component of our study is the combination of physically-based sound synthesis and motion feature analysis. This approach has proven effective in providing interesting insights for devising novel sonification models for artistic and scientific purposes, and for developing a collaborative platform involving the designer, the musician and the performer.
@inproceedings{NIME20_72, author = {Leonard, James and Giomi, Andrea}, title = {Towards an Interactive Model-Based Sonification of Hand Gesture for Dance Performance}, pages = {369--374}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper72.pdf}, presentation-video = {https://youtu.be/HQqIjL-Z8dA} }
-
Romulo A Vieira and Flávio Luiz Schiavoni. 2020. Fliperama: An affordable Arduino based MIDI Controller. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 375–379.
Download PDFLack of access to technological devices is a common exponent of a new form of social exclusion. Coupled with this, there is also the risk of increasing inequality between developed and underdeveloped countries where technology access is concerned. Regarding Internet access, the percentage of young Africans who do not have access to this technology is around 60%, while in Europe the figure is 4%. This limitation also extends to musical instruments, whether electronic or not. In light of this worldwide problem, this paper aims to showcase a method for building a MIDI Controller, a prominent instrument for musical production and live performance, in an economically viable form that can be accessible to the poorest populations. It is also desirable that the equipment is suitable for teaching various subjects such as Music, Computer Science and Engineering. The outcome of this research is not an amazing controller or a brand-new cool interface, but the experience of building a controller under all the adverse conditions of doing so.
@inproceedings{NIME20_73, author = {Vieira, Romulo A and Schiavoni, Flávio Luiz}, title = {Fliperama: An affordable Arduino based MIDI Controller}, pages = {375--379}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper73.pdf}, presentation-video = {https://youtu.be/X1GE5jk2cgc} }
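The controller itself is Arduino-based; the authors' firmware is not reproduced here. The sketch below only illustrates the MIDI side of such a build in Python with the mido library, turning changes in (here hard-coded) button states into note-on/note-off messages. The note assignments and the idea of reading button states from the microcontroller are assumptions for the example, not the authors' design.

```python
import mido

# Illustrative only: button states are hard-coded here; in the paper's build they
# would come from the Arduino's digital inputs.
BUTTON_NOTES = [60, 62, 64, 65, 67, 69, 71, 72]   # one MIDI note per button


def messages_for_change(previous, current):
    """Turn a change in button states into MIDI note-on/note-off messages."""
    msgs = []
    for note, (was, now) in zip(BUTTON_NOTES, zip(previous, current)):
        if now and not was:
            msgs.append(mido.Message("note_on", note=note, velocity=100))
        elif was and not now:
            msgs.append(mido.Message("note_off", note=note, velocity=0))
    return msgs


if __name__ == "__main__":
    prev = [False] * 8
    frames = ([True, False, False, False, False, False, False, False],
              [True, True, False, False, False, False, False, False],
              [False, True, False, False, False, False, False, False])
    for frame in frames:
        for msg in messages_for_change(prev, frame):
            print(msg)          # with hardware attached: mido.open_output().send(msg)
        prev = frame
```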
-
Alex MacLean. 2020. Immersive Dreams: A Shared VR Experience. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 380–381.
Download PDFThis paper reports on a project that aimed to break apart the isolation of VR and share an experience between both the wearer of a headset and a room of observers. It presented the user with an acoustically playable virtual environment in which their interactions with objects spawned audio events from the room’s 80 loudspeakers and animations on the room’s 3 display walls. This required the use of several Unity engines running on separate machines and SuperCollider running as the audio engine. The perspectives into what the wearer of the headset was doing allowed the audience to connect their movements to the sounds and images being experienced, effectively allowing them all to participate in the installation simultaneously.
@inproceedings{NIME20_74, author = {MacLean, Alex}, title = {Immersive Dreams: A Shared VR Experience}, pages = {380--381}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper74.pdf} }
-
Nick Bryan-Kinns and Zijin Li. 2020. ReImagining: Cross-cultural Co-Creation of a Chinese Traditional Musical Instrument with Digital Technologies. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 382–387.
Download PDFThere are many studies of Digital Musical Instrument (DMI) design, but there is little research on the cross-cultural co-creation of DMIs drawing on traditional musical instruments. We present a study of cross-cultural co-creation inspired by the Duxianqin - a traditional Chinese Jing ethnic minority single stringed musical instrument. We report on how we structured the co-creation with European and Chinese participants ranging from DMI designers to composers and performers. We discuss how we identified the ‘essence’ of the Duxianqin and used this to drive co-creation of three Duxianqin reimagined through digital technologies. Music was specially composed for these reimagined Duxianqin and performed in public as the culmination of the design process. We reflect on our co-creation process and how others could use such an approach to identify the essence of traditional instruments and reimagine them in the digital age.
@inproceedings{NIME20_75, author = {Bryan-Kinns, Nick and ZIJIN, LI}, title = {ReImagining: Cross-cultural Co-Creation of a Chinese Traditional Musical Instrument with Digital Technologies}, pages = {382--387}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper75.pdf}, presentation-video = {https://youtu.be/NvHcUQea82I} }
-
Konstantinos Vasilakos, Scott Wilson, Thomas McCauley, Tsun Winston Yeung, Emma Margetson, and Milad Khosravi Mardakheh. 2020. Sonification of High Energy Physics Data Using Live Coding and Web Based Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 388–393.
Download PDFThis paper presents a discussion of Dark Matter, a sonification project using live coding and just-in-time programming techniques. The project uses data from proton-proton collisions produced by the Large Hadron Collider (LHC) at CERN, Switzerland, and then detected and reconstructed by the Compact Muon Solenoid (CMS) experiment, and was developed with the support of the art@CMS project. Work for the Dark Matter project included the development of a custom-made environment in the SuperCollider (SC) programming language that lets the performers of the group engage in collective improvisations using dynamic interventions and networked music systems. This paper will also provide information about a spin-off project entitled the Interactive Physics Sonification System (IPSOS), an interactive and standalone online application developed in the JavaScript programming language. It provides a web-based interface that allows users to map particle data to sound on commonly used web browsers, mobile devices, such as smartphones, tablets etc. The project was developed as an educational outreach tool to engage young students and the general public with data derived from LHC collisions.
@inproceedings{NIME20_76, author = {Vasilakos, Konstantinos n/a and Wilson, Scott and McCauley, Thomas and Yeung, Tsun Winston and Margetson, Emma and Khosravi Mardakheh, Milad}, title = {Sonification of High Energy Physics Data Using Live Coding and Web Based Interfaces.}, pages = {388--393}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper76.pdf}, presentation-video = {https://youtu.be/1vS_tFUyz7g} }
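The Dark Matter environment is written in SuperCollider and IPSOS in JavaScript; neither is reproduced here. As a generic illustration of sonifying per-particle quantities, the sketch below maps made-up transverse momentum and azimuthal angle values to MIDI pitch and stereo pan. The value ranges and the logarithmic mapping are assumptions for the example only, not the project's actual mapping.

```python
import math

# Hypothetical collision records: transverse momentum (GeV) and azimuthal angle (radians).
# The real project streams reconstructed CMS event data; these values are made up.
particles = [
    {"pt": 12.3, "phi": -2.1},
    {"pt": 45.0, "phi": 0.4},
    {"pt": 88.7, "phi": 2.9},
]


def pt_to_midi(pt, pt_min=1.0, pt_max=200.0, note_min=36, note_max=96):
    """Map transverse momentum logarithmically onto a MIDI pitch range."""
    pt = min(max(pt, pt_min), pt_max)
    frac = (math.log(pt) - math.log(pt_min)) / (math.log(pt_max) - math.log(pt_min))
    return round(note_min + frac * (note_max - note_min))


def phi_to_pan(phi):
    """Map azimuthal angle (-pi..pi) to stereo pan (-1..1)."""
    return phi / math.pi


for p in particles:
    print(f"pt={p['pt']:6.1f} GeV, phi={p['phi']:5.2f} -> "
          f"note {pt_to_midi(p['pt'])}, pan {phi_to_pan(p['phi']):+.2f}")
```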
-
Haruya Takase. 2020. Support System for Improvisational Ensemble Based on Long Short-Term Memory Using Smartphone Sensor. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 394–398.
Download PDFOur goal is to develop an improvisational ensemble support system for music beginners who do not have knowledge of chord progressions and do not have enough experience of playing an instrument. We hypothesized that a music beginner cannot determine tonal pitches of melody over a particular chord but can use body movements to specify the pitch contour (i.e., melodic outline) and the attack timings (i.e., rhythm). We aim to realize a performance interface that supports expressing an intuitive pitch contour and attack timings through body motion and outputs harmonious pitches over the chord progression of the background music. Since the intended users of this system are not limited to people with music experience, we plan to develop a system that uses Android smartphones, which many people have. Our system consists of three modules: a module for specifying attack timing using smartphone sensors, a module for estimating the vertical movement of the smartphone using smartphone sensors, and a module for estimating pitch using the smartphone’s vertical movement and the background chord progression. Each estimation module is developed using long short-term memory (LSTM), which is often used to estimate time series data. We conduct evaluation experiments for each module. As a result, the attack timing estimation had zero misjudgments, and the mean error time of the estimated attack timing was smaller than the sensor-acquisition interval. The accuracy of the vertical motion estimation was 64%, and that of the pitch estimation was 7.6%. The results indicate that the attack timing is accurate enough, but the vertical motion estimation and the pitch estimation need to be improved for actual use.
@inproceedings{NIME20_77, author = {Takase, Haruya}, title = {Support System for Improvisational Ensemble Based on Long Short-Term Memory Using Smartphone Sensor}, pages = {394--398}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper77.pdf}, presentation-video = {https://youtu.be/WhrGhas9Cvc} }
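The paper trains three LSTM estimators on smartphone sensor data. The PyTorch sketch below is a hypothetical stand-in for one of them (the binary attack-timing decision) showing only the general shape of such a module: a small LSTM over windows of sensor frames with a classifier head. The feature count, hidden size, and training procedure are assumptions; the paper's actual architectures and data are not reproduced.

```python
import torch
from torch import nn


class AttackTimingLSTM(nn.Module):
    """Hypothetical stand-in for one of the paper's estimators: a small LSTM that
    classifies whether a window of smartphone sensor frames contains an attack."""

    def __init__(self, n_features=6, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, 2)   # classes: no-attack / attack

    def forward(self, x):
        # x: (batch, time, n_features), e.g. accelerometer + gyroscope frames
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])


if __name__ == "__main__":
    model = AttackTimingLSTM()
    window = torch.randn(1, 50, 6)              # one 50-frame sensor window (dummy data)
    logits = model(window)
    print("attack probability:", torch.softmax(logits, dim=-1)[0, 1].item())
```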
-
Augoustinos Tsiros and Alessandro Palladini. 2020. Towards a Human-Centric Design Framework for AI Assisted Music Production. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 399–404.
Download PDFIn this paper, we contribute to the discussion on how best to design human-centric MIR tools for live audio mixing by bridging the gap between research on complex systems, the psychology of automation and the design of tools that support creativity in music production. We present the design of the Channel-AI, an embedded AI system which performs instrument recognition and generates parameter setting suggestions for gain levels, gating, compression and equalization which are specific to the input signal and the instrument type. We discuss what we believe to be the key design principles and perspectives on the making of intelligent tools for creativity and for experts in the loop. We demonstrate how these principles have been applied to inform the design of the interaction between expert live audio mixing engineers and the Channel-AI (i.e. a corpus of AI features embedded in the Midas HD Console). We report the findings from a preliminary evaluation we conducted with three professional mixing engineers and reflect on mixing engineers’ comments about the Channel-AI on social media.
@inproceedings{NIME20_78, author = {Tsiros, Augoustinos and Palladini, Alessandro}, title = {Towards a Human-Centric Design Framework for AI Assisted Music Production}, pages = {399--404}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper78.pdf} }
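Channel-AI is an embedded, proprietary system, so no implementation detail is available here. The sketch below is only a hypothetical illustration of the interaction pattern the abstract describes: a detected instrument label selects starting suggestions for gate, compression and EQ, and the engineer's overrides always take precedence. All instrument labels, parameter names and values are invented for the example.

```python
# Hypothetical illustration of "suggest, don't impose": starting settings per detected
# instrument, which the engineer can accept or override. Values are invented, not Midas'.
SUGGESTIONS = {
    "kick":   {"gate_threshold_db": -35, "comp_ratio": 4.0, "eq_low_shelf_db": +3},
    "vocal":  {"gate_threshold_db": -45, "comp_ratio": 3.0, "eq_low_shelf_db": -2},
    "guitar": {"gate_threshold_db": -50, "comp_ratio": 2.5, "eq_low_shelf_db": 0},
}


def suggest_settings(detected_instrument, overrides=None):
    """Return suggested channel settings for a detected instrument, applying
    any engineer overrides on top so the human stays in the loop."""
    settings = dict(SUGGESTIONS.get(detected_instrument, {}))
    settings.update(overrides or {})
    return settings


if __name__ == "__main__":
    print(suggest_settings("vocal"))
    print(suggest_settings("vocal", overrides={"comp_ratio": 2.0}))
```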
-
Matthew Rodger, Paul Stapleton, Maarten van Walstijn, Miguel Ortiz, and Laurel S Pardue. 2020. What Makes a Good Musical Instrument? A Matter of Processes, Ecologies and Specificities. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 405–410.
Download PDFUnderstanding the question of what makes a good musical instrument raises several conceptual challenges. Researchers have regularly adopted tools from traditional HCI as a framework to address this issue, in which instrumental musical activities are taken to comprise a device and a user, and should be evaluated as such. We argue that this approach is not equipped to fully address the conceptual issues raised by this question. It is worth reflecting on what exactly an instrument is, and how instruments contribute toward meaningful musical experiences. Based on a theoretical framework that incorporates ideas from ecological psychology, enactivism, and phenomenology, we propose an alternative approach to studying musical instruments. According to this approach, instruments are better understood in terms of processes rather than as devices, while musicians are not users, but rather agents in musical ecologies. A consequence of this reframing is that any evaluations of instruments, if warranted, should align with the specificities of the relevant processes and ecologies concerned. We present an outline of this argument and conclude with a description of a current research project to illustrate how our approach can shape the design and performance of a musical instrument in-progress.
@inproceedings{NIME20_79, author = {Rodger, Matthew and Stapleton, Paul and van Walstijn, Maarten and Ortiz, Miguel and Pardue, Laurel S}, title = {What Makes a Good Musical Instrument? A Matter of Processes, Ecologies and Specificities }, pages = {405--410}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper79.pdf}, presentation-video = {https://youtu.be/ADLo-QdSwBc} }
-
Charles Patrick Martin, Zeruo Liu, Yichen Wang, Wennan He, and Henry Gardner. 2020. Sonic Sculpture: Activating Engagement with Head-Mounted Augmented Reality. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 39–42.
Download PDFWe describe a sonic artwork, "Listening To Listening", that has been designed to accompany a real-world sculpture with two prototype interaction schemes. Our artwork is created for the HoloLens platform so that users can have an individual experience in a mixed reality context. Personal AR systems have recently become available and practical for integration into public art projects; however, research into sonic sculpture works has yet to account for the affordances of current portable and mainstream AR systems. In this work, we take advantage of the HoloLens’ spatial awareness to build sonic spaces that have a precise spatial relationship to a given sculpture and where the sculpture itself is modelled in the augmented scene as an "invisible hologram". We describe the artistic rationale for our artwork, the design of the two interaction schemes, and the technical and usability feedback that we have obtained from demonstrations during iterative development. This work appears to be the first time that head-mounted AR has been used to build an interactive sonic landscape to engage with a public sculpture.
@inproceedings{NIME20_8, author = {Martin, Charles Patrick and Liu, Zeruo and Wang, Yichen and He, Wennan and Gardner, Henry}, title = {Sonic Sculpture: Activating Engagement with Head-Mounted Augmented Reality}, pages = {39--42}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper8.pdf}, presentation-video = {https://youtu.be/RlTWXnFOLN8} }
-
Giovanni Santini. 2020. Augmented Piano in Augmented Reality. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 411–415.
Download PDFAugmented instruments have been a widely explored research topic since the late 80s. The possibility to use sensors for providing an input for sound processing/synthesis units let composers and sound artists open up new ways for experimentation. Augmented Reality, by rendering virtual objects in the real world and by making those objects interactive (via some sensor-generated input), provides a new frame for this research field. In fact, the 3D visual feedback, delivering a precise indication of the spatial configuration/function of each virtual interface, can make the instrumental augmentation process more intuitive for the interpreter and more resourceful for a composer/creator: interfaces can change their behavior over time, can be reshaped, activated or deactivated. Each of these modifications can be made obvious to the performer by using strategies of visual feedback. In addition, it is possible to accurately sample space and to map it with differentiated functions. Augmenting interfaces can also be considered a visual expressive tool for the audience and designed accordingly: the performer’s point of view (or another point of view provided by an external camera) can be mirrored to a projector. This article will show some examples of different designs of AR piano augmentation from the composition Studi sulla realtà nuova.
@inproceedings{NIME20_80, author = {Santini, Giovanni}, title = {Augmented Piano in Augmented Reality}, pages = {411--415}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper80.pdf}, presentation-video = {https://youtu.be/3HBWvKj2cqc} }
-
Tom Davis and Laura Reid. 2020. Taking Back Control: Taming the Feral Cello. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 416–421.
Download PDFWhilst there is a large body of NIME papers that concentrate on the presentation of new technologies, there are fewer papers that have focused on a longitudinal understanding of NIMEs in practice. This paper embodies the more recent acknowledgement of the importance of practice-based methods of evaluation [1,2,3,4] concerning the use of NIMEs within performance and the recognition that it is only within the situation of practice that the context is available to actually interpret and evaluate the instrument [2]. Within this context this paper revisits the Feral Cello performance system that was first presented at NIME 2017 [5]. This paper explores what has been learned through the artistic practice of performing and workshopping in this context by drawing heavily on the experiences of the performer/composer who has become an integral part of this project and co-author of this paper. The original philosophical context is also revisited and reflections are made on the tensions between this position and the need to ‘get something to work’. The authors feel the presentation of the semi-structured interview within the paper is the best method of staying truthful to Hayes’ understanding of musical improvisation as an enactive framework ‘in its ability to demonstrate the importance of participatory, relational, emergent, and embodied musical activities and processes’ [4].
@inproceedings{NIME20_81, author = {Davis, Tom and Reid, Laura}, title = {Taking Back Control: Taming the Feral Cello}, pages = {416--421}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper81.pdf}, presentation-video = {https://youtu.be/9npR0T6YGiA} }
-
Thibault Jaccard, Robert Lieck, and Martin Rohrmeier. 2020. AutoScale: Automatic and Dynamic Scale Selection for Live Jazz Improvisation. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 422–427.
Download PDFBecoming a practical musician traditionally requires an extensive amount of preparatory work to master the technical and theoretical challenges of the particular instrument and musical style before being able to devote oneself to musical expression. In particular, in jazz improvisation, one of the major barriers is the mastery and appropriate selection of scales from a wide range, according to harmonic context and style. In this paper, we present AutoScale, an interactive software for making jazz improvisation more accessible by lifting the burden of scale selection from the musician while still allowing full controllability if desired. This is realized by implementing a MIDI effect that dynamically maps the desired scales onto a standardized layout. Scale selection can be pre-programmed, automated based on algorithmic lead sheet analysis, or interactively adapted. We discuss the music-theoretical foundations underlying our approach, the design choices taken for building an intuitive user interface, and provide implementations as VST plugin and web applications for use with a Launchpad or traditional MIDI keyboard.
@inproceedings{NIME20_82, author = {Jaccard, Thibault and Lieck, Robert and Rohrmeier, Martin}, title = {AutoScale: Automatic and Dynamic Scale Selection for Live Jazz Improvisation}, pages = {422--427}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper82.pdf}, presentation-video = {https://youtu.be/KqGpTTQ9ZrE} }
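AutoScale ships as a VST plugin and web application; that code is not reproduced here. The sketch below is only a minimal Python illustration of the core remapping idea the abstract describes: consecutive incoming notes are treated as consecutive degrees of whichever scale is currently selected, so the same physical gesture always lands inside the chosen harmonic context. The scale spellings are standard music theory; the particular mapping convention is an assumption for the example.

```python
# Minimal illustration of dynamic scale mapping: incoming notes are folded onto
# whichever scale is currently selected. Not AutoScale's internal representation.
SCALES = {
    "C major":      [0, 2, 4, 5, 7, 9, 11],
    "C dorian":     [0, 2, 3, 5, 7, 9, 10],
    "C mixolydian": [0, 2, 4, 5, 7, 9, 10],
}


def map_note(input_note, scale_name, root=60):
    """Map an incoming MIDI note to the selected scale.

    Consecutive input notes are treated as consecutive scale degrees, so the same
    physical gesture lands on different pitches when the scale changes.
    """
    degrees = SCALES[scale_name]
    octave, degree = divmod(input_note - root, len(degrees))
    return root + 12 * octave + degrees[degree]


if __name__ == "__main__":
    for scale in ("C major", "C dorian"):
        mapped = [map_note(n, scale) for n in range(60, 68)]
        print(scale, "->", mapped)
```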
-
Lauren Hayes and Adnan Marquez-Borbon. 2020. Nuanced and Interrelated Mediations and Exigencies (NIME): Addressing the Prevailing Political and Epistemological Crises. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 428–433.
Download PDFNearly two decades after its inception as a workshop at the ACM Conference on Human Factors in Computing Systems, NIME exists as an established international conference significantly distinct from its precursor. While this origin story is often noted, the implications of NIME’s history as emerging from a field predominantly dealing with human-computer interaction have rarely been discussed. In this paper we highlight many of the recent—and some not so recent—challenges that have been brought upon the NIME community as it attempts to maintain and expand its identity as a platform for multidisciplinary research into HCI, interface design, and electronic and computer music. We discuss the relationship between the market demands of the neoliberal university—which have underpinned academia’s drive for innovation—and the quantification and economisation of research performance which have facilitated certain disciplinary and social frictions to emerge within NIME-related research and practice. Drawing on work that engages with feminist theory and cultural studies, we suggest that critical reflection and moreover mediation is necessary in order to address burgeoning concerns which have been raised within the NIME discourse in relation to methodological approaches, ’diversity and inclusion’, ’accessibility’, and the fostering of rigorous interdisciplinary research.
@inproceedings{NIME20_83, author = {Hayes, Lauren and Marquez-Borbon, Adnan}, title = {Nuanced and Interrelated Mediations and Exigencies (NIME): Addressing the Prevailing Political and Epistemological Crises}, pages = {428--433}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper83.pdf}, presentation-video = {https://youtu.be/4UERHlFUQzo} }
-
Andrew McPherson and Giacomo Lepri. 2020. Beholden to our tools: negotiating with technology while sketching digital instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 434–439.
Download PDFDigital musical instrument design is often presented as an open-ended creative process in which technology is adopted and adapted to serve the musical will of the designer. The real-time music programming languages powering many new instruments often provide access to audio manipulation at a low level, theoretically allowing the creation of any sonic structure from primitive operations. As a result, designers may assume that these seemingly omnipotent tools are pliable vehicles for the expression of musical ideas. We present the outcomes of a compositional game in which sound designers were invited to create simple instruments using common sensors and the Pure Data programming language. We report on the patterns and structures that often emerged during the exercise, arguing that designers respond strongly to suggestions offered by the tools they use. We discuss the idea that current music programming languages may be as culturally loaded as the communities of practice that produce and use them. Instrument making is then best viewed as a protracted negotiation between designer and tools.
@inproceedings{NIME20_84, author = {McPherson, Andrew and Lepri, Giacomo}, title = {Beholden to our tools: negotiating with technology while sketching digital instruments}, pages = {434--439}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper84.pdf}, presentation-video = {https://youtu.be/-nRtaucPKx4} }
-
Andrea Martelloni, Andrew McPherson, and Mathieu Barthet. 2020. Percussive Fingerstyle Guitar through the Lens of NIME: an Interview Study. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 440–445.
Download PDFPercussive fingerstyle is a playing technique adopted by many contemporary acoustic guitarists, and it has grown substantially in popularity over the last decade. Its foundations lie in the use of the guitar’s body for percussive lines, and in the extended range given by the novel use of altered tunings. There are very few formal accounts of percussive fingerstyle, therefore, we devised an interview study to investigate its approach to composition, performance and musical experimentation. Our aim was to gain insight into the technique from a gesture-based point of view, observe whether modern fingerstyle shares similarities to the approaches in NIME practice and investigate possible avenues for guitar augmentations inspired by the percussive technique. We conducted an inductive thematic analysis on the transcribed interviews: our findings highlight the participants’ material-based approach to musical interaction and we present a three-zone model of the most common percussive gestures on the guitar’s body. Furthermore, we examine current trends in Digital Musical Instruments, especially in guitar augmentation, and we discuss possible future directions in augmented guitars in light of the interviewees’ perspectives.
@inproceedings{NIME20_85, author = {Martelloni, Andrea and McPherson, Andrew and Barthet, Mathieu}, title = {Percussive Fingerstyle Guitar through the Lens of NIME: an Interview Study}, pages = {440--445}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper85.pdf}, presentation-video = {https://youtu.be/ON8ckEBcQ98} }
-
Robert Jack, Jacob Harrison, and Andrew McPherson. 2020. Digital Musical Instruments as Research Products. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 446–451.
Download PDFIn the field of human computer interaction (HCI) the limitations of prototypes as the primary artefact used in research are being realised. Prototypes often remain open in their design, are partially-finished, and have a focus on a specific aspect of interaction. Previous authors have proposed ‘research products’ as a specific category of artefact distinct from both research prototypes and commercial products. The characteristics of research products are their holistic completeness as a design artefact, their situatedness in a specific cultural context, and the fact that they are evaluated for what they are, not what they will become. This paper discusses the ways in which many instruments created within the context of New Interfaces for Musical Expression (NIME), including those that are used in performances, often fall into the category of prototype. We shall discuss why research products might be a useful framing for NIME research. Research products shall be weighed up against some of the main themes of NIME research: technological innovation; musical expression; instrumentality. We conclude this paper with a case study of Strummi, a digital musical instrument which we frame as research product.
@inproceedings{NIME20_86, author = {Jack, Robert and Harrison, Jacob and McPherson, Andrew}, title = {Digital Musical Instruments as Research Products}, pages = {446--451}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper86.pdf}, presentation-video = {https://youtu.be/luJwlZBeBqY} }
-
Amit D Patel and John Richards. 2020. Pop-up for Collaborative Music-making. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 452–457.
Download PDFThis paper presents a micro-residency in a pop-up shop and collaborative making amongst a group of researchers and practitioners. The making extends to sound(-making) objects, instruments, workshop, sound installation, performance and discourse on DIY electronic music. Our research builds on creative workshopping and speculative design and is informed by ideas of collective making. The ad hoc and temporary pop-up space is seen as formative in shaping the outcomes of the work. Through the lens of curated research, working together with a provocative brief, we explored handmade objects, craft, non-craft, human error, and the spirit of DIY, DIYness. We used the Studio Bench - a method that brings making, recording and performance together in one space - and viewed workshopping and performance as a holistic event. A range of methodologies were investigated in relation to NIME. These included the Hardware Mash-up, Speculative Sound Circuits and Reverse Design, from product to prototype, resulting in the instrument the Radical Nails. Finally, our work drew on the notion of design as performance and making in public and further developed our understanding of workshop-installation and performance-installation.
@inproceedings{NIME20_87, author = {Patel, Amit D and Richards, John}, title = {Pop-up for Collaborative Music-making}, pages = {452--457}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper87.pdf} }
-
Courtney Reed and Andrew McPherson. 2020. Surface Electromyography for Direct Vocal Control. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 458–463.
Download PDFThis paper introduces a new method for direct control using the voice via measurement of vocal muscular activation with surface electromyography (sEMG). Digital musical interfaces based on the voice have typically used indirect control, in which features extracted from audio signals control the parameters of sound generation, for example in audio to MIDI controllers. By contrast, focusing on the musculature of the singing voice allows direct muscular control, or alternatively, combined direct and indirect control in an augmented vocal instrument. In this way we aim to both preserve the intimate relationship a vocalist has with their instrument and key timbral and stylistic characteristics of the voice while expanding its sonic capabilities. This paper discusses other digital instruments which effectively utilise a combination of indirect and direct control as well as a history of controllers involving the voice. Subsequently, a new method of direct control from physiological aspects of singing through sEMG and its capabilities are discussed. Future developments of the system are further outlined along with usage in performance studies, interactive live vocal performance, and educational and practice tools.
@inproceedings{NIME20_88, author = {Reed, Courtney and McPherson, Andrew}, title = {Surface Electromyography for Direct Vocal Control}, pages = {458--463}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper88.pdf}, presentation-video = {https://youtu.be/1nWLgQGNh0g} }
-
Henrik von Coler, Steffen Lepa, and Stefan Weinzierl. 2020. User-Defined Mappings for Spatial Sound Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 464–469.
Download PDFThe presented sound synthesis system allows the individual spatialization of spectral components in real-time, using a sinusoidal modeling approach within 3-dimensional sound reproduction systems. A co-developed, dedicated haptic interface is used to jointly control spectral and spatial attributes of the sound. Within a user study, participants were asked to create an individual mapping between control parameters of the interface and rendering parameters of sound synthesis and spatialization, using a visual programming environment. Resulting mappings of all participants are evaluated, indicating the preference of single control parameters for specific tasks. In comparison with mappings intended by the development team, the results validate certain design decisions and indicate new directions.
@inproceedings{NIME20_89, author = {von Coler, Henrik and Lepa, Steffen and Weinzierl, Stefan}, title = {User-Defined Mappings for Spatial Sound Synthesis}, pages = {464--469}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper89.pdf} }
-
Rohan Proctor and Charles Patrick Martin. 2020. A Laptop Ensemble Performance System using Recurrent Neural Networks. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 43–48.
Download PDFThe popularity of applying machine learning techniques in musical domains has created an inherent availability of freely accessible pre-trained neural network (NN) models ready for use in creative applications. This work outlines the implementation of one such application in the form of an assistance tool designed for live improvisational performances by laptop ensembles. The primary intention was to leverage off-the-shelf pre-trained NN models as a basis for assisting individual performers either as musical novices looking to engage with more experienced performers or as a tool to expand musical possibilities through new forms of creative expression. The system expands upon a variety of ideas found in different research areas including new interfaces for musical expression, generative music and group performance to produce a networked performance solution served via a web-browser interface. The final implementation of the system offers performers a mixture of high and low-level controls to influence the shape of sequences of notes output by locally run NN models in real time, also allowing performers to define their level of engagement with the assisting generative models. Two test performances were played, with the system shown to feasibly support four performers over a four minute piece while producing musically cohesive and engaging music. Iterations on the design of the system exposed technical constraints on the use of a JavaScript environment for generative models in a live music context, largely derived from inescapable processing overheads.
@inproceedings{NIME20_9, author = {Proctor, Rohan and Martin, Charles Patrick}, title = {A Laptop Ensemble Performance System using Recurrent Neural Networks}, pages = {43--48}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper9.pdf} }
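The system itself runs pre-trained models in JavaScript in the browser. As a language-neutral illustration of the performer-facing "mixture of high and low-level controls", the sketch below samples the next note from a dummy model output distribution, with a temperature parameter standing in for a control that trades predictability against surprise. The note set and logits are invented; this is not the ensemble system's code.

```python
import math
import random

# Dummy next-note distribution standing in for a pre-trained model's output;
# the actual system runs pre-trained NN models in the browser via JavaScript.
NOTES = [60, 62, 64, 65, 67, 69, 71, 72]
LOGITS = [2.0, 0.5, 1.0, 0.2, 1.5, 0.3, 0.1, 0.8]


def sample_next_note(temperature=1.0):
    """Sample a note; low temperature favours the model's top choices,
    high temperature gives the performer more surprising output."""
    scaled = [l / max(temperature, 1e-3) for l in LOGITS]
    total = sum(math.exp(s) for s in scaled)
    probs = [math.exp(s) / total for s in scaled]
    return random.choices(NOTES, weights=probs, k=1)[0]


if __name__ == "__main__":
    for temp in (0.3, 1.0, 2.0):
        print(f"temperature {temp}:", [sample_next_note(temp) for _ in range(8)])
```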
-
Tiago Brizolara, Sylvie Gibet, and Caroline Larboulette. 2020. Elemental: a Gesturally Controlled System to Perform Meteorological Sounds. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 470–476.
Download PDFIn this paper, we present and evaluate Elemental, a NIME (New Interface for Musical Expression) based on audio synthesis of sounds of meteorological phenomena, namely rain, wind and thunder, intended for application in contemporary music/sound art, performing arts and entertainment. We first describe the system, controlled by the performer’s arms through Inertial Measurement Units and Electromyography sensors. The produced data is analyzed and used, through mapping strategies, as input to the sound synthesis engine. We conducted user studies to refine the sound synthesis engine, the choice of gestures and the mappings between them, and to finally evaluate this proof of concept. Indeed, the users approached the system with their own awareness, ranging from the manipulation of abstract sound to the direct simulation of atmospheric phenomena - in the latter case, even to revive memories or to create novel situations. This suggests that the instrumentalization of sounds of known source may be a fruitful strategy for constructing expressive interactive sonic systems.
@inproceedings{NIME20_90, author = {Brizolara, Tiago and Gibet, Sylvie and Larboulette, Caroline}, title = {Elemental: a Gesturally Controlled System to Perform Meteorological Sounds}, pages = {470--476}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper90.pdf} }
-
Çağrı Erdem and Alexander Refsum Jensenius. 2020. RAW: Exploring Control Structures for Muscle-based Interaction in Collective Improvisation. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 477–482.
Download PDFThis paper describes the ongoing process of developing RAW, a collaborative body–machine instrument that relies on ’sculpting’ the sonification of raw EMG signals. The instrument is built around two Myo armbands located on the forearms of the performer. These are used to investigate muscle contraction, which is again used as the basis for the sonic interaction design. Using a practice-based approach, the aim is to explore the musical aesthetics of naturally occurring bioelectric signals. We are particularly interested in exploring the differences between processing at audio rate versus control rate, and how the level of detail in the signal–and the complexity of the mappings–influence the experience of control in the instrument. This is exemplified through reflections on four concerts in which RAW has been used in different types of collective improvisation.
@inproceedings{NIME20_91, author = {Erdem, Çağrı and Jensenius, Alexander Refsum}, title = {RAW: Exploring Control Structures for Muscle-based Interaction in Collective Improvisation}, pages = {477--482}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper91.pdf}, presentation-video = {https://youtu.be/gX-X1iw7uWE} }
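One distinction the paper explores is between treating EMG at audio rate and reducing it to control-rate data. The NumPy sketch below illustrates that reduction on a synthetic signal rather than Myo data: the full-rate stream on one side, a windowed RMS envelope at control rate on the other. The sample rate and window size are arbitrary choices for the example, not RAW's settings.

```python
import numpy as np

# Synthetic stand-in for a raw EMG stream (the instrument uses two Myo armbands).
SAMPLE_RATE = 1000                      # Hz, illustrative
t = np.arange(0, 2.0, 1.0 / SAMPLE_RATE)
emg = np.random.randn(t.size) * (0.2 + 0.8 * (np.sin(2 * np.pi * 0.5 * t) > 0))


def control_rate_envelope(signal, window=100):
    """Reduce an audio-rate signal to a control-rate RMS envelope
    (one value per non-overlapping window)."""
    trimmed = signal[: signal.size // window * window]
    frames = trimmed.reshape(-1, window)
    return np.sqrt((frames ** 2).mean(axis=1))


if __name__ == "__main__":
    env = control_rate_envelope(emg)
    print(f"audio-rate samples: {emg.size}, control-rate values: {env.size}")
    print("first few envelope values:", np.round(env[:5], 3))
```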
-
Travis C MacDonald, James Hughes, and Barry MacKenzie. 2020. SmartDrone: An Aurally Interactive Harmonic Drone. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 483–488.
Download PDFMobile devices provide musicians with the convenience of musical accompaniment wherever they are, granting them new methods for developing their craft. We developed the application SmartDrone to give users the freedom to practice in different harmonic settings with the assistance of their smartphone. This application further explores the area of dynamic accompaniment by implementing functionality so that chords are generated based on the key in which the user is playing. Since this app was designed to be a tool for scale practice, drone-like accompaniment was chosen so that musicians could experiment with combinations of melody and harmony. The details of the application development process are discussed in this paper, with the main focus on scale analysis and harmonic transposition. By using these two components, the application is able to dynamically alter key to reflect the user’s playing. As well as the design and implementation details, this paper reports and examines feedback from a small user study of undergraduate music students who used the app.
@inproceedings{NIME20_92, author = {MacDonald, Travis C and Hughes, James and MacKenzie, Barry}, title = {SmartDrone: An Aurally Interactive Harmonic Drone}, pages = {483--488}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper92.pdf} }
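The abstract names scale analysis and harmonic transposition as the app's two core components. As a hedged sketch of the first, and not the app's actual algorithm, the snippet below builds a pitch-class histogram from recently played MIDI notes and picks the major key whose scale covers the most weight; an estimate of this kind could then drive the drone's transposition.

```python
from collections import Counter

MAJOR_SCALE = {0, 2, 4, 5, 7, 9, 11}
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]


def estimate_major_key(played_notes):
    """Pick the major key whose scale covers the largest share of the
    recently played pitch classes. Illustrative, not the app's algorithm."""
    histogram = Counter(note % 12 for note in played_notes)
    best_key, best_score = None, -1
    for root in range(12):
        scale = {(root + step) % 12 for step in MAJOR_SCALE}
        score = sum(count for pc, count in histogram.items() if pc in scale)
        if score > best_score:
            best_key, best_score = root, score
    return NOTE_NAMES[best_key]


if __name__ == "__main__":
    # A phrase built from D major scale tones.
    phrase = [62, 64, 66, 67, 69, 71, 73, 74, 69, 66]
    print("estimated key:", estimate_major_key(phrase), "major")
```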
-
Juan P Martinez Avila, Vasiliki Tsaknaki, Pavel Karpashevich, et al. 2020. Soma Design for NIME. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 489–494.
Download PDFPrevious research on musical embodiment has reported that expert performers often regard their instruments as an extension of their body. Not every digital musical instrument seeks to create a close relationship between body and instrument, but even for the many that do, the design process often focuses heavily on technical and sonic factors, with relatively less attention to the bodily experience of the performer. In this paper we propose Somaesthetic design as an alternative to explore this space. The Soma method aims to attune the sensibilities of designers, as well as their experience of their body, and make use of these notions as a resource for creative design. We then report on a series of workshops exploring the relationship between the body and the guitar with a Soma design approach. The workshops resulted in a series of guitar-related artefacts and NIMEs that emerged from the somatic exploration of balance and tension during guitar performance. Lastly we present lessons learned from our research that could inform future Soma-based musical instrument design, and how NIME research may also inform Soma design.
@inproceedings{NIME20_93, author = {Martinez Avila, Juan P and Tsaknaki, Vasiliki and Karpashevich, Pavel and Windlin, Charles and Valenti, Niklas and Höök, Kristina and McPherson, Andrew and Benford, Steve}, title = {Soma Design for NIME}, pages = {489--494}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper93.pdf}, presentation-video = {https://youtu.be/i4UN_23A_SE} }
-
Laddy P Cadavid. 2020. Knotting the memory//Encoding the Khipu_: Reuse of an ancient Andean device as a NIME. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 495–498.
Download PDFThe khipu is an information processing and transmission device used mainly by the Inca empire and previous Andean societies. This mnemotechnic interface is one of the first textile computers known, consisting of a central wool or cotton cord to which other strings are attached with knots of different shapes, colors, and sizes encoding different kinds of values and information. The system was widely used until the Spanish colonization, which banned its use and destroyed a large number of these devices. This paper introduces the creation process of a NIME based on a khipu converted into an electronic instrument for the interaction and generation of live experimental sound by weaving knots with conductive rubber cords, and its implementation in the performance Knotting the memory//Encoding the Khipu_, which aims to pay homage to this system from a decolonial perspective, continuing the interrupted legacy of this ancestral practice in a different experience of tangible live coding and computer music, as well as weaving the past with the present of the indigenous and people’s resistance of the Andean territory with their sounds.
@inproceedings{NIME20_94, author = {Cadavid, Laddy P}, title = {Knotting the memory//Encoding the Khipu_: Reuse of an ancient Andean device as a NIME }, pages = {495--498}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper94.pdf}, presentation-video = {https://youtu.be/nw5rbc15pT8} }
-
Shelly Knotts and Nick Collins. 2020. A survey on the uptake of Music AI Software. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 499–504.
Download PDFThe recent proliferation of commercial software claiming ground in the field of music AI has provided an opportunity to engage with AI in music making without the need to use libraries aimed at those with programming skills. Pre-packaged music AI software has the potential to broaden access to machine learning tools, but it is unclear how widely such software is used by music technologists or how engagement affects attitudes towards AI in music making. To interrogate these questions we undertook a survey in October 2019, gaining 117 responses. The survey collected statistical information on the use of pre-packaged and self-written music AI software. Respondents reported a range of musical outputs including producing recordings, live performance and generative work across many genres of music making. The survey also gauged general attitudes towards AI in music and provided an open field for general comments. The responses to the survey suggested a forward-looking attitude to music AI, with participants often pointing to the future potential of AI tools, rather than present utility. Optimism was partially related to programming skill, with those with more experience showing higher skepticism towards the current state and future potential of AI.
@inproceedings{NIME20_95, author = {Knotts, Shelly and Collins, Nick}, title = {A survey on the uptake of Music AI Software}, pages = {499--504}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper95.pdf}, presentation-video = {https://youtu.be/v6hT3ED3N60} }
-
Scott Barton. 2020. Circularity in Rhythmic Representation and Composition. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 505–508.
Download PDFCycle is a software tool for musical composition and improvisation that represents events along a circular timeline. In doing so, it breaks from the linear representational conventions of European Art music and modern Digital Audio Workstations. A user specifies time points on different layers, each of which corresponds to a particular sound. The layers are superimposed on a single circle, which allows a unique visual perspective on the relationships between musical voices given their geometric positions. Positions in-between quantizations are possible, which encourages experimentation with expressive timing and machine rhythms. User-selected transformations affect groups of notes, layers, and the pattern as a whole. Past and future states are also represented, synthesizing linear and cyclical notions of time. This paper will contemplate philosophical questions raised by circular rhythmic notation and will reflect on the ways in which the representational novelties and editing functions of Cycle have inspired creativity in musical composition.
@inproceedings{NIME20_96, author = {Barton, Scott}, title = {Circularity in Rhythmic Representation and Composition}, pages = {505--508}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper96.pdf}, presentation-video = {https://youtu.be/0CEKbyJUSw4} }
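The circular timeline has a simple arithmetic core: an event's angular position on the circle maps to an onset time within the looping cycle, and positions between quantization points yield expressive micro-timing. The sketch below illustrates that arithmetic only; it is unrelated to Cycle's actual codebase, and the cycle length and example angles are invented.

```python
# Illustrative arithmetic behind a circular timeline: an event's angle on the circle
# determines its onset within the looping cycle. Not Cycle's actual implementation.
CYCLE_SECONDS = 2.0                     # duration of one full revolution


def angle_to_onset(angle_degrees, cycle_seconds=CYCLE_SECONDS):
    """Map an angular position (0-360 deg, 0 = top of the circle) to an onset time."""
    return (angle_degrees % 360.0) / 360.0 * cycle_seconds


if __name__ == "__main__":
    # One layer quantized to quarters, another placed freely between quantizations.
    layers = {"quantized": [0, 90, 180, 270], "expressive": [10, 100, 197, 305]}
    for name, angles in layers.items():
        onsets = [round(angle_to_onset(a), 3) for a in angles]
        print(f"{name:10s} onsets (s): {onsets}")
```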
-
Thor Magnusson. 2020. Instrumental Investigations at Emute Lab. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 509–513.
Download PDFThis lab report discusses recent projects and activities of the Experimental Music Technologies Lab at the University of Sussex. The lab was founded in 2014 and has contributed to the development of the field of new musical technologies. The report introduces the lab’s agenda, gives examples of its activities through common themes, and gives short descriptions of lab members’ work. The lab environment, funding income and future vision are also presented.
@inproceedings{NIME20_97, author = {Magnusson, Thor}, title = {Instrumental Investigations at Emute Lab}, pages = {509--513}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper97.pdf} }
-
Satvik Venkatesh, Edward Braund, and Eduardo Miranda. 2020. Composing Popular Music with Physarum polycephalum-based Memristors. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 514–519.
Download PDFCreative systems such as algorithmic composers often use Artificial Intelligence models like Markov chains, Artificial Neural Networks, and Genetic Algorithms in order to model stochastic processes. Unconventional Computing (UC) technologies explore non-digital ways of data storage, processing, input, and output. UC paradigms such as Quantum Computing and Biocomputing delve into domains beyond the binary bit to handle complex non-linear functions. In this paper, we harness Physarum polycephalum as memristors to process and generate creative data for popular music. While there has been research conducted in this area, the literature lacks examples of popular music and how the organism’s non-linear behaviour can be controlled while composing music. This is important because non-linear forms of representation are not as obvious as conventional digital means. This study aims at disseminating this technology to non-experts and musicians so that they can incorporate it in their creative processes. Furthermore, it combines resistors and memristors to have more flexibility while generating music and optimises parameters for faster processing and performance.
@inproceedings{NIME20_98, author = {Venkatesh, Satvik and Braund, Edward and Miranda, Eduardo}, title = {Composing Popular Music with Physarum polycephalum-based Memristors}, pages = {514--519}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper98.pdf}, presentation-video = {https://youtu.be/NBLa-KoMUh8} }
-
Fede Camara Halac and Shadrick Addy. 2020. PathoSonic: Performing Sound In Virtual Reality Feature Space. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 520–522.
Download PDFPathoSonic is a VR experience that enables a participant to visualize and perform a sound file based on timbre feature descriptors displayed in space. The name comes from the different paths the participant can create through their sonic explorations. The goal of this research is to leverage affordances of virtual reality technology to visualize sound through different levels of performance-based interactivity that immerses the participant’s body in a spatial virtual environment. Through implementation of a multi-sensory experience, including visual aesthetics, sound, and haptic feedback, we explore inclusive approaches to sound visualization, making it more accessible to a wider audience, including those with hearing and mobility impairments. The online version of the paper can be accessed here: https://fdch.github.io/pathosonic
@inproceedings{NIME20_99, author = {Camara Halac, Fede and Addy, Shadrick}, title = {PathoSonic: Performing Sound In Virtual Reality Feature Space}, pages = {520--522}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Michon, Romain and Schroeder, Franziska}, year = {2020}, month = jul, publisher = {Birmingham City University}, address = {Birmingham, UK}, issn = {2220-4806}, url = {https://www.nime.org/proceedings/2020/nime2020_paper99.pdf} }
2019
-
Enrique Tomas, Thomas Gorbach, Hilda Tellioglu, and Martin Kaltenbrunner. 2019. Material embodiments of electroacoustic music: an experimental workshop study. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 1–6. http://doi.org/10.5281/zenodo.3672842
Download PDF DOIThis paper reports on a workshop where participants produced physical mock-ups of musical interfaces directly after miming control of short electroacoustic music pieces. Our goal was understanding how people envision and materialize their own sound-producing gestures from spontaneous cognitive mappings. During the workshop, 50 participants from different creative backgrounds modeled more than 180 physical artifacts. Participants were filmed and interviewed for the later analysis of their different multimodal associations about music. Our initial hypothesis was that most of the physical mock-ups would be similar to the sound-producing objects that participants would identify in the musical pieces. Although the majority of artifacts clearly showed correlated design trajectories, our results indicate that a relevant number of participants intuitively decided to engineer alternative solutions emphasizing their personal design preferences. Therefore, in this paper we present and discuss the workshop format, its results and the possible applications for designing new musical interfaces.
@inproceedings{Tomas2019, author = {Tomas, Enrique and Gorbach, Thomas and Tellioglu, Hilda and Kaltenbrunner, Martin}, title = {Material embodiments of electroacoustic music: an experimental workshop study}, pages = {1--6}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672842}, url = {http://www.nime.org/proceedings/2019/nime2019_paper001.pdf} }
-
Yupu Lu, Yijie Wu, and Shijie Zhu. 2019. Collaborative Musical Performances with Automatic Harp Based on Image Recognition and Force Sensing Resistors. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 7–8. http://doi.org/10.5281/zenodo.3672846
Download PDF DOIIn this paper, collaborative performance is defined as a performer playing the piano accompanied by an automatic harp. The automatic harp can play music based on an electronic score and change its speed according to the speed of the performer. We built a 32-channel automatic harp and designed a layered modular framework integrating both hardware and software, for experimental real-time control protocols. Considering that a MIDI keyboard lacks information about force (acceleration) and fingering, both of which are important for expression, we designed a force-sensing glove and implemented basic image recognition. They are used to accurately detect speed, force (corresponding to velocity in MIDI) and pitch when a performer plays the piano.
@inproceedings{Lu2019, author = {Lu, Yupu and Wu, Yijie and Zhu, Shijie}, title = {Collaborative Musical Performances with Automatic Harp Based on Image Recognition and Force Sensing Resistors}, pages = {7--8}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672846}, url = {http://www.nime.org/proceedings/2019/nime2019_paper002.pdf} }
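The abstract above says the harp changes its speed according to the speed of the performer. One simple way to illustrate that kind of score following is to compare the performer's recent inter-onset intervals with the intervals expected from the electronic score and scale the accompaniment tempo by the ratio. This is a generic sketch under assumed values, not the harp's actual control algorithm.

```python
# Generic speed-following sketch (illustrative values, hypothetical function name).

def speed_ratio(performed_onsets, expected_onsets, window=4):
    """Ratio > 1 means the performer is faster than the score."""
    po = performed_onsets[-window:]
    eo = expected_onsets[-window:]
    performed_span = po[-1] - po[0]
    expected_span = eo[-1] - eo[0]
    if performed_span <= 0:
        return 1.0
    return expected_span / performed_span

expected = [0.0, 0.5, 1.0, 1.5, 2.0]       # score onsets at 120 BPM (seconds)
performed = [0.0, 0.45, 0.92, 1.36, 1.80]  # performer slightly ahead

ratio = speed_ratio(performed, expected)
base_bpm = 120.0
print(f"accompaniment tempo: {base_bpm * ratio:.1f} BPM")
```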
-
Lior Arbel, Yoav Y. Schechner, and Noam Amir. 2019. The Symbaline — An Active Wine Glass Instrument with a Liquid Sloshing Vibrato Mechanism. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 9–14. http://doi.org/10.5281/zenodo.3672848
Download PDF DOIThe Symbaline is an active instrument comprised of several partly-filled wine glasses excited by electromagnetic coils. This work describes an electromechanical system for incorporating frequency and amplitude modulation to the Symbaline’s sound. A pendulum having a magnetic bob is suspended inside the liquid in the wine glass. The pendulum is put into oscillation by driving infra-sound signals through the coil. The pendulum’s movement causes the liquid in the glass to slosh back and forth. Simultaneously, wine glass sounds are produced by driving audio-range signals through the coil, inducing vibrations in a small magnet attached to the glass surface and exciting glass vibrations. As the glass vibrates, the sloshing liquid periodically changes the glass’s resonance frequencies and dampens the glass, thus modulating both wine glass pitch and sound intensity.
@inproceedings{Arbel2019, author = {Arbel, Lior and Schechner, Yoav Y. and Amir, Noam}, title = {The Symbaline --- An Active Wine Glass Instrument with a Liquid Sloshing Vibrato Mechanism}, pages = {9--14}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672848}, url = {http://www.nime.org/proceedings/2019/nime2019_paper003.pdf} }
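The Symbaline abstract describes driving a single coil with both an infra-sound signal (to swing the magnetic pendulum and slosh the liquid) and an audio-range signal (to excite the glass). A minimal sketch of such a combined drive signal is given below; the frequencies, amplitudes and sample rate are illustrative assumptions, not values from the paper, and the amplitude modulation at the end is only a crude stand-in for the acoustic effect of the sloshing liquid.

```python
import numpy as np

SR = 48000          # sample rate (assumed)
DUR = 5.0           # seconds
t = np.arange(int(SR * DUR)) / SR

# Infra-sound component: drives the magnetic pendulum bob so the liquid
# sloshes (frequency chosen below the audible range).
slosh = 0.4 * np.sin(2 * np.pi * 2.0 * t)

# Audio-range component: excites the wine-glass resonance via the small
# magnet attached to the glass surface (resonance frequency assumed).
tone = 0.3 * np.sin(2 * np.pi * 880.0 * t)

# Both components are driven through the same coil, so they are summed.
drive = slosh + tone

# Crude stand-in for the modulation the sloshing liquid imposes on the
# glass sound: slow amplitude modulation locked to the infra-sound rate.
modulated = tone * (1.0 + 0.2 * np.sin(2 * np.pi * 2.0 * t))
print(drive.shape, float(np.max(np.abs(modulated))))
```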
-
Helena de Souza Nunes, Federico Visi, Lydia Helena Wohl Coelho, and Rodrigo Schramm. 2019. SIBILIM: A low-cost customizable wireless musical interface. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 15–20. http://doi.org/10.5281/zenodo.3672850
Download PDF DOIThis paper presents the SIBILIM, a low-cost musical interface composed of a resonance box made of cardboard containing customised push buttons that interact with a smartphone through its video camera. Each button can be mapped to a set of MIDI notes or control parameters. The sound is generated through synthesis or sample playback and can be amplified with the help of a transducer, which excites the resonance box. An essential contribution of this interface is the possibility of reconfiguring the button layout without the need to rewire the system, since it uses only the smartphone’s built-in camera. This feature allows for quick instrument customisation for different use cases, such as low-cost projects for schools or instrument-building workshops. Our case study used the SIBILIM for music education, where it was designed to develop conscious music perception and to stimulate creativity through exercises of short tonal musical compositions. We conducted a study with a group of twelve participants in an experimental workshop to verify its validity.
@inproceedings{de-Souza-Nunes2019, author = {de Souza Nunes, Helena and Visi, Federico and Coelho, Lydia Helena Wohl and Schramm, Rodrigo}, title = {SIBILIM: A low-cost customizable wireless musical interface}, pages = {15--20}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672850}, url = {http://www.nime.org/proceedings/2019/nime2019_paper004.pdf} }
-
Jonathan Bell. 2019. The Risset Cycle, Recent Use Cases With SmartVox. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 21–24. http://doi.org/10.5281/zenodo.3672852
Download PDF DOIThe combination of graphic/animated scores, acoustic signals (audio-scores) and Head-Mounted Display (HMD) technology offers promising potential in the context of distributed notation, for live performances and concerts involving voices, instruments and electronics. After an explanation of what SmartVox is technically, and how it is used by composers and performers, this paper explains why this form of technology-aided performance might help musicians synchronize to an electronic tape and achieve (spectral) tuning. Then, from an exploration of the concepts of distributed notation and networked music performances, it proposes solutions (in conjunction with INScore, BabelScores and the Decibel Score Player) seeking to expand distributed notation practice to a wider community. It finally presents findings relative to the use of SmartVox with HMDs.
@inproceedings{Bell2019, author = {Bell, Jonathan}, title = {The Risset Cycle, Recent Use Cases With SmartVox}, pages = {21--24}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672852}, url = {http://www.nime.org/proceedings/2019/nime2019_paper005.pdf} }
-
Johnty Wang, Axel Mulder, and Marcelo Wanderley. 2019. Practical Considerations for MIDI over Bluetooth Low Energy as a Wireless Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 25–30. http://doi.org/10.5281/zenodo.3672854
Download PDF DOIThis paper documents the key issues of performance and compatibility when working with Musical Instrument Digital Interface (MIDI) via Bluetooth Low Energy (BLE) as a wireless interface for sensor or controller data and inter-module communication in the context of building interactive digital systems. An overview of BLE MIDI is presented along with a comparison of the protocol from the perspective of theoretical limits and interoperability, showing its widespread compatibility across platforms compared with other alternatives. We then perform three complementary tests on BLE MIDI and alternative interfaces using prototype and commercial devices, showing that BLE MIDI has comparable performance with the tested WiFi implementations, with end-to-end (sensor input to audio output) latencies of under 10 ms under certain conditions. Overall, BLE MIDI is an ideal choice for controllers and sensor interfaces that are designed to work on a wide variety of platforms.
@inproceedings{Wang2019, author = {Wang, Johnty and Mulder, Axel and Wanderley, Marcelo}, title = {Practical Considerations for MIDI over Bluetooth Low Energy as a Wireless Interface}, pages = {25--30}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672854}, url = {http://www.nime.org/proceedings/2019/nime2019_paper006.pdf} }
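The latency figures in the abstract above are end-to-end (sensor input to audio output) measurements from the paper's test setups. A much simpler measurement that anyone can reproduce is message round-trip timing over a MIDI port pair; the sketch below uses the mido library and assumes a device or virtual loopback that echoes messages back. The port names are placeholders, and this measures round-trip message time only, not the paper's sensor-to-audio latency.

```python
# Rough round-trip timing sketch using mido (pip install mido python-rtmidi).
# Assumes a loopback: a BLE MIDI peripheral configured to echo, or virtual ports.
import time
import mido

OUT_PORT = "BLE-MIDI Device"   # placeholder port name
IN_PORT = "BLE-MIDI Device"    # placeholder port name

def round_trip_ms(outport, inport, trials=50):
    """Send note_on messages and time how long the echoed message takes to return."""
    results = []
    for _ in range(trials):
        msg = mido.Message("note_on", note=60, velocity=64)
        t0 = time.perf_counter()
        outport.send(msg)
        inport.receive()                     # blocks until the echo arrives
        results.append((time.perf_counter() - t0) * 1000.0)
        time.sleep(0.05)
    return results

if __name__ == "__main__":
    with mido.open_output(OUT_PORT) as o, mido.open_input(IN_PORT) as i:
        times = round_trip_ms(o, i)
        print(f"mean {sum(times)/len(times):.2f} ms, max {max(times):.2f} ms")
```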
-
Richard Ramchurn, Juan Pablo Martinez-Avila, Sarah Martindale, Alan Chamberlain, Max L Wilson, and Steve Benford. 2019. Improvising a Live Score to an Interactive Brain-Controlled Film. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 31–36. http://doi.org/10.5281/zenodo.3672856
Download PDF DOIWe report on the design and deployment of systems for the performance of live score accompaniment to an interactive movie by a Networked Musical Ensemble. In this case, the audio-visual content of the movie is selected in real time based on user input to a Brain-Computer Interface (BCI). Our system supports musical improvisation between human performers and automated systems responding to the BCI. We explore the performers’ roles during two performances when these socio-technical systems were implemented, in terms of coordination, problem-solving, managing uncertainty and musical responses to system constraints. This allows us to consider how features of these systems and practices might be incorporated into a general tool, aimed at any musician, which could scale for use in different performance settings involving interactive media.
@inproceedings{Ramchurn2019, author = {Ramchurn, Richard and Martinez-Avila, Juan Pablo and Martindale, Sarah and Chamberlain, Alan and Wilson, Max L and Benford, Steve}, title = {Improvising a Live Score to an Interactive Brain-Controlled Film}, pages = {31--36}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672856}, url = {http://www.nime.org/proceedings/2019/nime2019_paper007.pdf} }
-
Ajin Jiji Tom, Harish Jayanth Venkatesan, Ivan Franco, and Marcelo Wanderley. 2019. Rebuilding and Reinterpreting a Digital Musical Instrument — The Sponge. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 37–42. http://doi.org/10.5281/zenodo.3672858
Download PDF DOIAlthough several Digital Musical Instruments (DMIs) have been presented at NIME, very few of them remain accessible to the community. Rebuilding a DMI is often a necessary step to allow for performance with NIMEs. Rebuilding a DMI exactly as the original, however, might not be possible due to technology obsolescence, lack of documentation or other reasons. It might then be interesting to re-interpret a DMI and build an instrument inspired by the original one, creating novel performance opportunities. This paper presents the challenges and approaches involved in rebuilding and re-interpreting an existing DMI, The Sponge by Martin Marier. The rebuilt versions make use of newer/improved technology and customized design aspects such as the addition of vibrotactile feedback and the implementation of different mapping strategies. The paper also discusses the implications of embedding sound synthesis within the DMI by using the Prynth framework, and presents a comparison between this approach and the more traditional ground-up approach. As a result of the evaluation and comparison of the two rebuilt DMIs, we present a third version which combines their benefits, and discuss performance issues with these devices.
@inproceedings{Tom2019, author = {Tom, Ajin Jiji and Venkatesan, Harish Jayanth and Franco, Ivan and Wanderley, Marcelo}, title = {Rebuilding and Reinterpreting a Digital Musical Instrument --- The Sponge}, pages = {37--42}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672858}, url = {http://www.nime.org/proceedings/2019/nime2019_paper008.pdf} }
-
Kiyu Nishida, Akishige Yuguchi, kazuhiro jo, Paul Modler, and Markus Noisternig. 2019. Border: A Live Performance Based on Web AR and a Gesture-Controlled Virtual Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 43–46. http://doi.org/10.5281/zenodo.3672860
Download PDF DOIRecent technological advances, such as increased CPU/GPU processing speed, along with the miniaturization of devices and sensors, have created new possibilities for integrating immersive technologies in music and performance art. Virtual and Augmented Reality (VR/AR) have become increasingly interesting as mobile device platforms, such as up-to-date smartphones, with necessary CPU resources entered the consumer market. In combination with recent web technologies, any mobile device can simply connect with a browser to a local server to access the latest technology. The web platform also eases the integration of collaborative situated media in participatory artwork. In this paper, we present the interactive music improvisation piece ‘Border,’ premiered in 2018 at the Beyond Festival at the Center for Art and Media Karlsruhe (ZKM). This piece explores the interaction between a performer and the audience using web-based applications – including AR, real-time 3D audio/video streaming, advanced web audio, and gesture-controlled virtual instruments – on smart mobile devices.
@inproceedings{Nishida2019, author = {Nishida, Kiyu and Yuguchi, Akishige and kazuhiro jo and Modler, Paul and Noisternig, Markus}, title = {Border: A Live Performance Based on Web AR and a Gesture-Controlled Virtual Instrument}, pages = {43--46}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672860}, url = {http://www.nime.org/proceedings/2019/nime2019_paper009.pdf} }
-
Palle Dahlstedt. 2019. Taming and Tickling the Beast — Multi-Touch Keyboard as Interface for a Physically Modelled Interconnected Resonating Super-Harp. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 47–52. http://doi.org/10.5281/zenodo.3672862
Download PDF DOILibration Perturbed is a performance and an improvisation instrument, originally composed and designed for a multi-speaker dome. The performer controls a bank of 64 virtual inter-connected resonating strings, with individual and direct control of tuning and resonance characteristics through a multitouch-enhanced klavier interface (TouchKeys). It is a hybrid acoustic-electronic instrument, as all string vibrations originate from physical vibrations in the klavier and its casing, captured through contact microphones. In addition, there are gestural strings, called ropes, excited by performed musical gestures. All strings and ropes are connected, and inter-resonate together as a ”super-harp”, internally and through the performance space. With strong resonance, strings may go into chaotic motion or emergent quasi-periodic patterns, but custom adaptive leveling mechanisms keep loudness under the musician’s control at all times. The hybrid digital/acoustic approach and the enhanced keyboard provide for an expressive and very physical interaction, and a strong multi-channel immersive experience. The paper describes the aesthetic choices behind the design of the system, as well as the technical implementation, and – primarily – the interaction design, as it emerges from mapping, sound design, physical modeling and integration of the acoustic, the gestural, and the virtual. The work is evaluated based on the experiences from a series of performances.
@inproceedings{Dahlstedt2019, author = {Dahlstedt, Palle}, title = {Taming and Tickling the Beast --- Multi-Touch Keyboard as Interface for a Physically Modelled Interconnected Resonating Super-Harp}, pages = {47--52}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672862}, url = {http://www.nime.org/proceedings/2019/nime2019_paper010.pdf} }
-
Doga Cavdir, Juan Sierra, and Ge Wang. 2019. Taptop, Armtop, Blowtop: Evolving the Physical Laptop Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 53–58. http://doi.org/10.5281/zenodo.3672864
Download PDF DOIThis research represents an evolution and evaluation of the embodied physical laptop instruments. Specifically, these are instruments that are physical in that they use bodily interaction, take advantage of the physical affordances of the laptop. They are embodied in the sense that instruments are played in such ways where the sound is embedded to be close to the instrument. Three distinct laptop instruments, Taptop, Armtop, and Blowtop, are introduced in this paper. We discuss the integrity of the design process with composing for laptop instruments and performing with them. In this process, our aim is to blur the boundaries of the composer and designer/engineer roles. How the physicality is achieved by leveraging musical gestures gained through traditional instrument practice is studied, as well as those inspired by body gestures. We aim to explore how using such interaction methods affects the communication between the ensemble and the audience. An aesthetic-first qualitative evaluation of these interfaces is discussed, through works and performances crafted specifically for these instruments and presented in the concert setting of the laptop orchestra. In so doing, we reflect on how such physical, embodied instrument design practices can inform a different kind of expressive and performance mindset.
@inproceedings{Cavdir2019, author = {Cavdir, Doga and Sierra, Juan and Wang, Ge}, title = {Taptop, Armtop, Blowtop: Evolving the Physical Laptop Instrument}, pages = {53--58}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672864}, url = {http://www.nime.org/proceedings/2019/nime2019_paper011.pdf} }
-
David Antonio Gómez Jáuregui, Irvin Dongo, and Nadine Couture. 2019. Automatic Recognition of Soundpainting for the Generation of Electronic Music Sounds. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 59–64. http://doi.org/10.5281/zenodo.3672866
Download PDF DOIThis work aims to explore the use of a new gesture-based interaction built on automatic recognition of Soundpainting structured gestural language. In the proposed approach, a composer (called Soundpainter) performs Soundpainting gestures facing a Kinect sensor. Then, a gesture recognition system captures gestures that are sent to a sound generator software. The proposed method was used to stage an artistic show in which a Soundpainter had to improvise with 6 different gestures to generate a musical composition from different sounds in real time. The accuracy of the gesture recognition system was evaluated as well as Soundpainter’s user experience. In addition, a user evaluation study for using our proposed system in a learning context was also conducted. Current results open up perspectives for the design of new artistic expressions based on the use of automatic gestural recognition supported by Soundpainting language.
@inproceedings{Gomez-Jauregui2019, author = {Jáuregui, David Antonio Gómez and Dongo, Irvin and Couture, Nadine}, title = {Automatic Recognition of Soundpainting for the Generation of Electronic Music Sounds}, pages = {59--64}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672866}, url = {http://www.nime.org/proceedings/2019/nime2019_paper012.pdf} }
-
Fabio Morreale, Andrea Guidi, and Andrew P. McPherson. 2019. Magpick: an Augmented Guitar Pick for Nuanced Control. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 65–70. http://doi.org/10.5281/zenodo.3672868
Download PDF DOIThis paper introduces the Magpick, an augmented pick for electric guitar that uses electromagnetic induction to sense the motion of the pick with respect to the permanent magnets in the guitar pickup. The Magpick provides the guitarist with nuanced control of the sound which coexists with traditional plucking-hand technique. The paper presents three ways that the signal from the pick can modulate the guitar sound, followed by a case study of its use in which 11 guitarists tested the Magpick for five days and composed a piece with it. Reflecting on their comments and experiences, we outline the innovative features of this technology from the point of view of performance practice. In particular, compared to other augmentations, the high temporal resolution, low latency, and large dynamic range of the Magpick support a highly nuanced control over the sound. Our discussion highlights the utility of having the locus of augmentation coincide with the locus of interaction.
@inproceedings{Morreale2019, author = {Morreale, Fabio and Guidi, Andrea and McPherson, Andrew P.}, title = {Magpick: an Augmented Guitar Pick for Nuanced Control}, pages = {65--70}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672868}, url = {http://www.nime.org/proceedings/2019/nime2019_paper013.pdf} }
-
Bertrand Petit and manuel serrano. 2019. Composing and executing Interactive music using the HipHop.js language. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 71–76. http://doi.org/10.5281/zenodo.3672870
Download PDF DOISkini is a platform for composing and producing live performances with audience participation using connected devices (smartphones, tablets, PCs, etc.). The music composer creates beforehand musical elements such as melodic patterns, sound patterns, instruments, groups of instruments, and a dynamic score that governs the way the basic elements behave according to events produced by the audience. During the concert or performance, the audience, by interacting with the system, gives birth to an original music composition. Skini music scores are expressed in terms of constraints that establish relationships between instruments. A constraint may be instantaneous; for instance, one may disable violins while trumpets are playing. A constraint may also be temporal; for instance, the piano cannot play more than 30 consecutive seconds. The Skini platform is implemented in Hop.js and HipHop.js. HipHop.js, a synchronous reactive DSL, is used for implementing the music scores, as its elementary constructs, consisting of high-level operators such as parallel executions, sequences, awaits, synchronization points, etc., form an ideal core language for implementing Skini constraints. This paper presents the Skini platform. It reports on live performances and an educational project. It briefly overviews the use of HipHop.js for representing scores.
@inproceedings{Petit2019, author = {Petit, Bertrand and manuel serrano}, title = {Composing and executing Interactive music using the HipHop.js language}, pages = {71--76}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672870}, url = {http://www.nime.org/proceedings/2019/nime2019_paper014.pdf} }
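The two example constraints given in the Skini abstract (violins disabled while trumpets play; piano limited to 30 consecutive seconds) can be sketched outside HipHop.js as plain scheduling checks, just to make the constraint logic concrete. This is only an illustration in Python; it is not Skini or HipHop.js code, and the class and method names are invented.

```python
import time

class ConstraintState:
    """Toy scheduler state illustrating Skini-style constraints
    (hypothetical names; not the actual Skini/HipHop.js API)."""
    def __init__(self):
        self.playing = set()          # instruments currently sounding
        self.started_at = {}          # instrument -> start time

    def may_start(self, instrument):
        # Instantaneous constraint: violins are disabled while trumpets play.
        if instrument == "violins" and "trumpets" in self.playing:
            return False
        return True

    def must_stop(self, instrument, now, max_seconds=30.0):
        # Temporal constraint: the piano cannot play more than 30 consecutive seconds.
        if instrument == "piano" and instrument in self.playing:
            return now - self.started_at[instrument] > max_seconds
        return False

    def start(self, instrument, now):
        if self.may_start(instrument):
            self.playing.add(instrument)
            self.started_at[instrument] = now
            return True
        return False

    def stop(self, instrument):
        self.playing.discard(instrument)

state = ConstraintState()
now = time.time()
state.start("trumpets", now)
print(state.start("violins", now))              # False: blocked by trumpets
state.start("piano", now)
print(state.must_stop("piano", now + 31.0))     # True: exceeded 30 s
```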
-
Gabriel Lopes Rocha, João Teixera Araújo, and Flávio Luiz Schiavoni. 2019. Ha Dou Ken Music: Different mappings to play music with joysticks. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 77–78. http://doi.org/10.5281/zenodo.3672872
Download PDF DOIDue to the strong presence of video game controllers in popular culture and their ease of access, even people who are not in the habit of playing electronic games have likely interacted with this kind of interface at least once. Thus, gestures like pressing a sequence of buttons, pressing them simultaneously or sliding fingers across the controller can be mapped for musical creation. This work aims at elaborating a strategy in which several gestures performed on a joystick controller can influence one or several parameters of the sound synthesis, a mapping known as many-to-many. Button combinations used to perform actions common in fighting games, like Street Fighter, were mapped to the synthesizer to create music. Experiments show that this mapping is capable of influencing the musical expression of a DMI, making it closer to an acoustic instrument.
@inproceedings{Rocha2019, author = {Rocha, Gabriel Lopes and Araújo, João Teixera and Schiavoni, Flávio Luiz}, title = {Ha Dou Ken Music: Different mappings to play music with joysticks}, pages = {77--78}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672872}, url = {http://www.nime.org/proceedings/2019/nime2019_paper015.pdf} }
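To make the many-to-many idea in the abstract above concrete, the sketch below maps a fighting-game style input sequence and a simultaneous button chord to several synthesis parameters at once. The gesture names, parameter names and values are illustrative assumptions, not the mapping used in the paper.

```python
# Hypothetical many-to-many mapping from joystick gestures to synthesis parameters.

HADOUKEN = ("down", "down-forward", "forward", "punch")

def match_sequence(history, pattern):
    """Return True if the most recent inputs in `history` match `pattern`."""
    return tuple(history[-len(pattern):]) == pattern

def map_gesture(history, pressed):
    """Map the current input state to a dict of synthesis parameters."""
    params = {}
    if match_sequence(history, HADOUKEN):
        # One gesture influences several parameters (the one-to-many part).
        params.update({"cutoff": 4000.0, "env_attack": 0.005, "pitch_bend": +2})
    if {"punch", "kick"} <= pressed:
        # A simultaneous button chord also contributes (the many-to-one part).
        params["distortion"] = 0.8
        params["cutoff"] = min(params.get("cutoff", 8000.0), 2500.0)
    return params

history = ["idle", "down", "down-forward", "forward", "punch"]
print(map_gesture(history, pressed={"punch"}))
print(map_gesture(history, pressed={"punch", "kick"}))
```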
-
Torgrim Rudland Næss and Charles Patrick Martin. 2019. A Physical Intelligent Instrument using Recurrent Neural Networks. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 79–82. http://doi.org/10.5281/zenodo.3672874
Download PDF DOIThis paper describes a new intelligent interactive instrument, based on an embedded computing platform, where deep neural networks are applied to interactive music generation. Even though using neural networks for music composition is not uncommon, many of these models do not support any form of user interaction. We introduce a self-contained intelligent instrument using generative models, with support for real-time interaction where the user can adjust high-level parameters to modify the music generated by the instrument. We describe the technical details of our generative model and discuss the experience of using the system as part of musical performance.
@inproceedings{Næss2019, author = {Næss, Torgrim Rudland and Martin, Charles Patrick}, title = {A Physical Intelligent Instrument using Recurrent Neural Networks}, pages = {79--82}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672874}, url = {http://www.nime.org/proceedings/2019/nime2019_paper016.pdf} }
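The abstract above mentions high-level parameters that let the user modify the music generated by the instrument. One common control of this kind for generative models is a sampling temperature, sketched below; whether the instrument in the paper exposes exactly this parameter is an assumption, and the logits here are fabricated stand-ins for one step of a recurrent model's output.

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=np.random.default_rng()):
    """Sample the next note index from model logits.

    A low temperature makes the output conservative (near-argmax); a high
    temperature makes it more exploratory.
    """
    z = np.asarray(logits, dtype=float) / max(temperature, 1e-6)
    z -= z.max()                          # numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return int(rng.choice(len(p), p=p))

# Fake logits standing in for one step of a recurrent model's note distribution.
logits = [2.0, 1.2, 0.3, -1.0, -2.5]
print([sample_with_temperature(logits, t) for t in (0.2, 1.0, 2.0)])
```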
-
Angelo Fraietta. 2019. Creating Order and Progress. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 83–88. http://doi.org/10.5281/zenodo.3672876
Download PDF DOIThis paper details the mapping strategy of the work Order and Progress: a sonic segue across A Auriverde, a composition based upon the skyscape represented on the Brazilian flag. This work uses the Stellarium planetarium software as a performance interface, blending the political symbology, scientific data and musical mapping of each star represented on the flag as a multimedia performance. The work is interfaced through the Stellar Command module, a Java based program that converts the visible field of view from the Stellarium planetarium interface to astronomical data through the VizieR database of astronomical catalogues. This scientific data is then mapped to musical parameters through a Java based programming environment. I will discuss the strategies employed to create a work that was not only artistically novel, but also visually engaging and scientifically accurate.
@inproceedings{Fraietta2019, author = {Fraietta, Angelo}, title = {Creating Order and Progress}, pages = {83--88}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672876}, url = {http://www.nime.org/proceedings/2019/nime2019_paper017.pdf} }
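The abstract above describes mapping astronomical catalogue data to musical parameters. As a generic illustration only (it does not reproduce the mappings used in Order and Progress, and the field names and ranges are assumptions about typical catalogue data), the sketch below maps a star's apparent magnitude to note velocity and its B-V colour index to pitch.

```python
def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def star_to_note(magnitude, b_v_color):
    """Map apparent magnitude and B-V colour index to (MIDI pitch, velocity)."""
    # Apparent magnitude: smaller = brighter. Map roughly -1..6 to velocity 127..20.
    velocity = int(round(clamp(127 - (magnitude + 1) / 7.0 * 107, 20, 127)))
    # B-V colour index: roughly -0.3 (blue) to +2.0 (red). Bluer -> higher pitch.
    pitch = int(round(clamp(96 - (b_v_color + 0.3) / 2.3 * 48, 36, 96)))
    return pitch, velocity

# Example stars with approximate catalogue values.
stars = [
    {"name": "Sirius", "mag": -1.46, "b_v": 0.00},
    {"name": "Betelgeuse", "mag": 0.50, "b_v": 1.85},
]
for s in stars:
    print(s["name"], star_to_note(s["mag"], s["b_v"]))
```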
-
João Nogueira Tragtenberg, Filipe Calegario, Giordano Cabral, and Geber L. Ramalho. 2019. Towards the Concept of Digital Dance and Music Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 89–94. http://doi.org/10.5281/zenodo.3672878
Download PDF DOIThis paper discusses the creation of instruments in which music is intentionally generated by dance. We introduce the conceptual framework of Digital Dance and Music Instruments (DDMI). Several DDMI have already been created, but they have been developed in isolation, and there is still a lack of a common process of ideation and development. Knowledge about Digital Musical Instruments (DMIs) and Interactive Dance Systems (IDSs) can contribute to the design of DDMI, but the former brings few contributions to the body’s expressiveness, and the latter brings few references to an instrumental relationship with music. Because of those different premises, the integration between both paradigms can be an arduous task for the designer of DDMI. The conceptual framework of DDMI can also serve as a bridge between DMIs and IDSs, serving as a lingua franca between both communities and facilitating the exchange of knowledge. The conceptual framework has shown to be a promising analytical tool for the design, development, and evaluation of new digital dance and music instruments.
@inproceedings{Tragtenberg2019, author = {Tragtenberg, João Nogueira and Calegario, Filipe and Cabral, Giordano and Ramalho, Geber L.}, title = {Towards the Concept of Digital Dance and Music Instruments}, pages = {89--94}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672878}, url = {http://www.nime.org/proceedings/2019/nime2019_paper018.pdf} }
-
Maros Suran Bomba and Palle Dahlstedt. 2019. Somacoustics: Interactive Body-as-Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 95–100. http://doi.org/10.5281/zenodo.3672880
Download PDF DOIVisitors interact with a blindfolded artist’s body, the motions of which are tracked and translated into synthesized four-channel sound, surrounding the participants. Through social-physical and aural interactions, they play his instrument-body, in a mutual dance. Crucial for this work has been the motion-to-sound mapping design, and the investigations of bodily interaction with normal lay-people and with professional contact-improvisation dancers. The extra layer of social-physical interaction both constrains and inspires the participant-artist relation and the sonic exploration, and through this, his body is transformed into an instrument, and physical space is transformed into a sound-space. The project aims to explore the experience of interaction between human and technology and its impact on one’s bodily perception and embodiment, as well as the relation between body and space, departing from a set of existing theories on embodiment. In the paper, its underlying aesthetics are described and discussed, as well as the sensitive motion research process behind it, and the technical implementation of the work. It is evaluated based on participant behavior and experiences and analysis of its premiere exhibition in 2018.
@inproceedings{Bomba2019, author = {Bomba, Maros Suran and Dahlstedt, Palle}, title = {Somacoustics: Interactive Body-as-Instrument}, pages = {95--100}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672880}, url = {http://www.nime.org/proceedings/2019/nime2019_paper019.pdf} }
-
Nathan Turczan and Ajay Kapur. 2019. The Scale Navigator: A System for Networked Algorithmic Harmony. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 101–104. http://doi.org/10.5281/zenodo.3672882
Download PDF DOIThe Scale Navigator is a graphical interface implementation of Dmitri Tymoczko’s scale network designed to help generate algorithmic harmony and harmonically synchronize performers in a laptop or electro-acoustic orchestra. The user manipulates the Scale Navigator to direct harmony on a chord-to-chord level and on a scale-to-scale level. In a live performance setting, the interface broadcasts control data, MIDI, and real-time notation to an ensemble of live electronic performers, sight-reading improvisers, and musical generative algorithms.
@inproceedings{Turczan2019, author = {Turczan, Nathan and Kapur, Ajay}, title = {The Scale Navigator: A System for Networked Algorithmic Harmony}, pages = {101--104}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672882}, url = {http://www.nime.org/proceedings/2019/nime2019_paper020.pdf} }
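The Scale Navigator abstract describes directing harmony on a scale-to-scale level over Dmitri Tymoczko's scale network. A small sketch of one kind of scale-to-scale step is given below: two scale collections are treated as neighbours if one can be reached from the other by moving a single pitch class by a semitone. This illustrates the general idea only; it is not the Scale Navigator implementation and does not reproduce Tymoczko's full network.

```python
def neighbours(scale):
    """Yield scales reachable by moving one pitch class up or down a semitone."""
    scale = frozenset(scale)
    for pc in scale:
        for step in (-1, +1):
            moved = (pc + step) % 12
            if moved not in scale:
                yield (scale - {pc}) | {moved}

C_MAJOR = {0, 2, 4, 5, 7, 9, 11}

# Labels for a few of the neighbouring collections (pitch classes, C = 0).
names = {
    frozenset({0, 2, 4, 6, 7, 9, 11}): "G major / C lydian",
    frozenset({0, 2, 4, 5, 7, 9, 10}): "F major / C mixolydian",
    frozenset({0, 2, 3, 5, 7, 9, 11}): "C melodic minor",
    frozenset({0, 2, 4, 5, 7, 8, 11}): "C harmonic major",
}
for n in neighbours(C_MAJOR):
    print(sorted(n), names.get(n, "unnamed collection"))
```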
-
Alex Michael Lucas, Miguel Ortiz, and Dr. Franziska Schroeder. 2019. Bespoke Design for Inclusive Music: The Challenges of Evaluation. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 105–109. http://doi.org/10.5281/zenodo.3672884
Download PDF DOIIn this paper, the authors describe the evaluation of a collection of bespoke knob cap designs intended to improve the ease with which a specific musician with dyskinetic cerebral palsy can operate rotary controls in a musical context. The authors highlight the importance of the performer’s perspective when using design as a means of overcoming access barriers to music. Also, while the authors were not able to find an ideal solution for the musician within the confines of this study, several useful observations on the process of evaluating bespoke assistive music technology are described; observations which may prove useful to digital musical instrument designers working within the field of inclusive music.
@inproceedings{Lucas2019, author = {Lucas, Alex Michael and Ortiz, Miguel and Schroeder, Dr. Franziska}, title = {Bespoke Design for Inclusive Music: The Challenges of Evaluation}, pages = {105--109}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672884}, url = {http://www.nime.org/proceedings/2019/nime2019_paper021.pdf} }
-
Xiao Xiao, Grégoire Locqueville, Christophe d’Alessandro, and Boris Doval. 2019. T-Voks: the Singing and Speaking Theremin. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 110–115. http://doi.org/10.5281/zenodo.3672886
Download PDF DOIT-Voks is an augmented theremin that controls Voks, a performative singing synthesizer. Originally developed for control with a graphic tablet interface, Voks allows for real-time pitch and time scaling, vocal effort modification and syllable sequencing for pre-recorded voice utterances. For T-Voks the theremin’s frequency antenna modifies the output pitch of the target utterance while the amplitude antenna controls not only volume as usual but also voice quality and vocal effort. Syllabic sequencing is handled by an additional pressure sensor attached to the player’s volume-control hand. This paper presents the system architecture of T-Voks, the preparation procedure for a song, playing gestures, and practice techniques, along with musical and poetic examples across four different languages and styles.
@inproceedings{Xiao2019, author = {Xiao, Xiao and Locqueville, Grégoire and d'Alessandro, Christophe and Doval, Boris}, title = {T-Voks: the Singing and Speaking Theremin}, pages = {110--115}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672886}, url = {http://www.nime.org/proceedings/2019/nime2019_paper022.pdf} }
-
Hunter Brown and spencer topel. 2019. DRMMR: An Augmented Percussion Implement. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 116–121. http://doi.org/10.5281/zenodo.3672888
Download PDF DOIRecent developments in music technology have enabled novel timbres to be acoustically synthesized using various actuation and excitation methods. Utilizing recent work in nonlinear acoustic synthesis, we propose a transducer based augmented percussion implement entitled DRMMR. This design enables the user to sustain computer sequencer-like drum rolls at faster speeds while also enabling the user to achieve nonlinear acoustic synthesis effects. Our acoustic evaluation shows drum rolls executed by DRMMR easily exhibit greater levels of regularity, speed, and precision than comparable transducer and electromagnetic-based actuation methods. DRMMR’s nonlinear acoustic synthesis functionality also presents possibilities for new kinds of sonic interactions on the surface of drum membranes.
@inproceedings{Brown2019, author = {Brown, Hunter and spencer topel}, title = {DRMMR: An Augmented Percussion Implement}, pages = {116--121}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672888}, url = {http://www.nime.org/proceedings/2019/nime2019_paper023.pdf} }
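The DRMMR abstract emphasizes sustained drum rolls with high regularity, speed and precision. A very reduced sketch of the kind of drive signal an actuated implement could use for a roll is shown below: a train of short decaying bursts at a fixed strike rate. The rates, envelope times and carrier frequency are illustrative assumptions, not DRMMR's actual parameters.

```python
import numpy as np

SR = 48000

def roll_signal(strikes_per_second=30.0, duration=2.0, burst_ms=8.0, carrier_hz=180.0):
    """Generate a pulse train of short decaying bursts at a fixed strike rate."""
    n = int(SR * duration)
    out = np.zeros(n)
    period = int(SR / strikes_per_second)
    burst_len = int(SR * burst_ms / 1000.0)
    burst_t = np.arange(burst_len) / SR
    # Each strike: a short, exponentially decaying sinusoidal burst.
    burst = np.sin(2 * np.pi * carrier_hz * burst_t) * np.exp(-burst_t / 0.002)
    for start in range(0, n - burst_len, period):
        out[start:start + burst_len] += burst
    return out

sig = roll_signal()
print(len(sig), float(np.max(np.abs(sig))))
```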
-
Giacomo Lepri and Andrew P. McPherson. 2019. Fictional instruments, real values: discovering musical backgrounds with non-functional prototypes. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 122–127. http://doi.org/10.5281/zenodo.3672890
Download PDF DOIThe emergence of a new technology can be considered the result of social, cultural and technical processes. Instrument designs are particularly influenced by cultural and aesthetic values linked to the specific contexts and communities that produced them. In previous work, we ran a design fiction workshop in which musicians created non-functional instrument mockups. In the current paper, we report on an online survey in which music technologists were asked to speculate on the background of the musicians who designed particular instruments. Our results showed several cues for the interpretation of the artefacts’ origins, including physical features, body-instrument interactions, use of language and references to established music practices and tools. Tacit musical and cultural values were also identified based on intuitive and holistic judgments. Our discussion highlights the importance of cultural awareness and context-dependent values on the design and use of interactive musical systems.
@inproceedings{Lepri2019, author = {Lepri, Giacomo and McPherson, Andrew P.}, title = {Fictional instruments, real values: discovering musical backgrounds with non-functional prototypes}, pages = {122--127}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672890}, url = {http://www.nime.org/proceedings/2019/nime2019_paper024.pdf} }
-
Christopher Dewey and Jonathan P. Wakefield. 2019. Exploring the Container Metaphor for Equalisation Manipulation. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 128–129. http://doi.org/10.5281/zenodo.3672892
Download PDF DOIThis paper presents the first stage in the design and evaluation of a novel container metaphor interface for equalisation control. The prototype system harnesses the Pepper’s Ghost illusion to project mid-air a holographic data visualisation of an audio track’s long-term average and real-time frequency content as a deformable shape manipulated directly via hand gestures. The system uses HTML 5, JavaScript and the Web Audio API in conjunction with a Leap Motion controller and bespoke low budget projection system. During subjective evaluation users commented that the novel system was simpler and more intuitive to use than commercially established equalisation interface paradigms and most suited to creative, expressive and explorative equalisation tasks.
@inproceedings{Dewey2019, author = {Dewey, Christopher and Wakefield, Jonathan P.}, title = {Exploring the Container Metaphor for Equalisation Manipulation}, pages = {128--129}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672892}, url = {http://www.nime.org/proceedings/2019/nime2019_paper025.pdf} }
-
Alex Hofmann, Vasileios Chatziioannou, Sebastian Schmutzhard, Gökberk Erdogan, and Alexander Mayer. 2019. The Half-Physler: An oscillating real-time interface to a tube resonator model. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 130–133. http://doi.org/10.5281/zenodo.3672896
Download PDF DOIPhysics-based sound synthesis makes it possible to shape the sound by modifying parameters that reference real-world properties of acoustic instruments. This paper presents a hybrid physical modeling single-reed instrument, where a virtual tube is coupled to a real mouthpiece with a sensor-equipped clarinet reed. The tube model is provided as an opcode for Csound which runs on the low-latency embedded audio platform Bela. An actuator is connected to the audio output and the sensor-reed signal is fed back into the input of Bela. The performer can control the coupling between reed and actuator, and is also provided with a 3D-printed slider/knob interface to change parameters of the tube model in real time.
@inproceedings{Hofmann2019, author = {Hofmann, Alex and Chatziioannou, Vasileios and Schmutzhard, Sebastian and Erdogan, Gökberk and Mayer, Alexander}, title = {The Half-Physler: An oscillating real-time interface to a tube resonator model}, pages = {130--133}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672896}, url = {http://www.nime.org/proceedings/2019/nime2019_paper026.pdf} }
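The abstract above couples a sensed reed signal to a virtual tube resonator. As a very reduced, illustrative stand-in for a tube model (the actual Half-Physler opcode models the reed-tube coupling in far more detail), the sketch below runs an excitation signal through a single delay line with a lossy reflection, which yields resonances roughly like a closed-open tube. The tube length, reflection coefficient and loss are assumptions.

```python
import numpy as np

SR = 44100

def tube_response(excitation, tube_delay_samples=100, reflection=-0.95, loss=0.995):
    """Single delay-line tube sketch: inject the input, reflect it with loss."""
    delay = np.zeros(tube_delay_samples)
    out = np.zeros(len(excitation))
    idx = 0
    for n, x in enumerate(excitation):
        y = delay[idx]                              # wave returning from the open end
        out[n] = y
        delay[idx] = loss * (x + reflection * y)    # inject input plus reflected wave
        idx = (idx + 1) % tube_delay_samples
    return out

# Impulse-like excitation standing in for the reed signal.
exc = np.zeros(SR // 10)
exc[0] = 1.0
resp = tube_response(exc)
print(float(np.max(np.abs(resp[1:]))))
```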
-
Peter Bussigel, Stephan Moore, and Scott Smallwood. 2019. Reanimating the Readymade. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 134–139. http://doi.org/10.5281/zenodo.3672898
Download PDF DOIThere is a rich history of using found or “readymade” objects in music performances and sound installations. John Cage’s Water Walk, Carolee Schneemann’s Noise Bodies, and David Tudor’s Rainforest all lean on both the sonic and cultural affordances of found objects. Today, composers and sound artists continue to look at the everyday, combining readymades with microcontrollers and homemade electronics and repurposing known interfaces for their latent sonic potential. This paper gives a historical overview of work at the intersection of music and the readymade and then describes three recent sound installations/performances by the authors that further explore this space. The emphasis is on the processes involved in working with found objects: the complex, practical, and playful explorations into sound and material culture.
@inproceedings{Bussigel2019, author = {Bussigel, Peter and Moore, Stephan and Smallwood, Scott}, title = {Reanimating the Readymade}, pages = {134--139}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672898}, url = {http://www.nime.org/proceedings/2019/nime2019_paper027.pdf} }
-
Yian Zhang, Yinmiao Li, Daniel Chin, and Gus Xia. 2019. Adaptive Multimodal Music Learning via Interactive Haptic Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 140–145. http://doi.org/10.5281/zenodo.3672900
Download PDF DOIHaptic interfaces have tapped into the sense of touch to assist multimodal music learning. We have recently seen various improvements in interface design for tactile feedback and force guidance aiming to make instrument learning more effective. However, most interfaces are still quite static; they cannot yet sense the learning progress and adjust the tutoring strategy accordingly. To solve this problem, we contribute an adaptive haptic interface based on the latest design of a haptic flute. We first adopted a clutch mechanism to enable the interface to turn haptic control on and off flexibly in real time. The interactive tutor is then able to follow human performances and apply the “teacher force” only when the software instructs it to. Finally, we incorporated the adaptive interface into a step-by-step dynamic learning strategy. Experimental results showed that dynamic learning dramatically outperforms static learning, boosting the learning rate by 45.3% and shrinking the chance of forgetting by 86%.
@inproceedings{Zhang2019, author = {Zhang, Yian and Li, Yinmiao and Chin, Daniel and Xia, Gus}, title = {Adaptive Multimodal Music Learning via Interactive Haptic Instrument}, pages = {140--145}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672900}, url = {http://www.nime.org/proceedings/2019/nime2019_paper028.pdf} }
-
Fabián Sguiglia, Pauli Coton, and Fernando Toth. 2019. El mapa no es el territorio: Sensor mapping for audiovisual performances. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 146–149. http://doi.org/10.5281/zenodo.3672902
Download PDF DOIWe present El mapa no es el territorio (MNT), a set of open source tools that facilitate the design of visual and musical mappings for interactive installations and performance pieces. MNT is being developed by a multidisciplinary group that explores gestural control of audio-visual environments and virtual instruments. Along with these tools, this paper presents two projects in which they were used, the interactive installation Memorias Migrantes and the stage performance Recorte de Jorge Cárdenas Cayendo, showing how MNT allows us to develop collaborative artworks that articulate body movement and generative audiovisual systems, and how its current version was influenced by these successive implementations.
@inproceedings{Sguiglia2019, author = {Sguiglia, Fabián and Coton, Pauli and Toth, Fernando}, title = {El mapa no es el territorio: Sensor mapping for audiovisual performances}, pages = {146--149}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672902}, url = {http://www.nime.org/proceedings/2019/nime2019_paper029.pdf} }
-
Vanessa Yaremchuk, Carolina Brum Medeiros, and Marcelo Wanderley. 2019. Small Dynamic Neural Networks for Gesture Classification with The Rulers (a Digital Musical Instrument). Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 150–155. http://doi.org/10.5281/zenodo.3672904
Download PDF DOIThe Rulers is a Digital Musical Instrument with 7 metal beams, each of which is fixed at one end. It uses infrared sensors, Hall sensors, and strain gauges to estimate deflection. These sensors each perform better or worse depending on the class of gesture the user is making, motivating sensor fusion practices. Residuals between Kalman filter predictions and sensor output are calculated and used as input to a recurrent neural network, which outputs a classification that determines which processing parameters and sensor measurements are employed. Multiple instances (30) of layer-recurrent neural networks with a single hidden layer varying in size from 1 to 10 processing units were trained and tested on previously unseen data. The best performing neural network has only 3 hidden units and has a sufficiently low error rate to be a good candidate for gesture classification. This paper demonstrates that dynamic networks outperform feedforward networks for this type of gesture classification, that a small network can handle a problem of this level of complexity, that recurrent networks of this size are fast enough for real-time applications of this type, and that it is important to train multiple instances of each network architecture and select the best performing one from that set.
@inproceedings{Yaremchuk2019, author = {Yaremchuk, Vanessa and Medeiros, Carolina Brum and Wanderley, Marcelo}, title = {Small Dynamic Neural Networks for Gesture Classification with The Rulers (a Digital Musical Instrument)}, pages = {150--155}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672904}, url = {http://www.nime.org/proceedings/2019/nime2019_paper030.pdf} }
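As a rough illustration of the residual-plus-recurrent-network idea summarised in the abstract above, the following Python sketch feeds per-sensor residual sequences to a very small recurrent classifier. The sensor count, class count, and data are hypothetical placeholders; this is not the authors' implementation.

```python
# Hypothetical sketch of residual-based gesture classification: residuals between
# per-sensor Kalman-filter predictions and raw readings are fed to a small
# recurrent network that outputs a gesture class. Dimensions, class count,
# and data are illustrative, not from the paper.
import torch
import torch.nn as nn

N_SENSORS = 3      # e.g. infrared, Hall, strain gauge (per beam)
N_CLASSES = 4      # hypothetical gesture classes
HIDDEN = 3         # the paper reports good results with very small hidden layers

class ResidualGestureClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.RNN(input_size=N_SENSORS, hidden_size=HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, N_CLASSES)

    def forward(self, residuals):            # residuals: (batch, time, N_SENSORS)
        _, h_last = self.rnn(residuals)      # h_last: (1, batch, HIDDEN)
        return self.out(h_last.squeeze(0))   # class logits, (batch, N_CLASSES)

# Toy usage: a batch of 8 residual sequences, 50 time steps each.
model = ResidualGestureClassifier()
fake_residuals = torch.randn(8, 50, N_SENSORS)
logits = model(fake_residuals)
predicted_class = logits.argmax(dim=1)
```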
-
Palle Dahlstedt and Ami Skånberg Dahlstedt. 2019. OtoKin: Mapping for Sound Space Exploration through Dance Improvisation. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 156–161. http://doi.org/10.5281/zenodo.3672906
Download PDF DOIWe present a work where a space of realtime synthesized sounds is explored through ear (Oto) and movement (Kinesis) by one or two dancers. Movement is tracked and mapped through extensive pre-processing to a high-dimensional acoustic space, using a many-to-many mapping, so that every small body movement matters. Designed for improvised exploration, it works as both performance and installation. Through this re-translation of bodily action, position, and posture into infinite-dimensional sound texture and timbre, the performers are invited to re-think and re-learn position and posture as sound, effort as gesture, and timbre as a bodily construction. The sound space can be shared by two people, with added modes of presence, proximity and interaction. The aesthetic background and technical implementation of the system are described, and the system is evaluated based on a number of performances, workshops and installation exhibits. Finally, the aesthetic and choreographic motivations behind the performance narrative are explained, and discussed in the light of the design of the sonification.
@inproceedings{Dahlstedt-b2019, author = {Dahlstedt, Palle and Dahlstedt, Ami Skånberg}, title = {OtoKin: Mapping for Sound Space Exploration through Dance Improvisation}, pages = {156--161}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672906}, url = {http://www.nime.org/proceedings/2019/nime2019_paper031.pdf} }
-
Joe Wright and James Dooley. 2019. On the Inclusivity of Constraint: Creative Appropriation in Instruments for Neurodiverse Children and Young People. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 162–167. http://doi.org/10.5281/zenodo.3672908
Download PDF DOITaking inspiration from research into deliberately constrained musical technologies and the emergence of neurodiverse, child-led musical groups such as the Artism Ensemble, the interplay between design-constraints, inclusivity and appropriation is explored. A small scale review covers systems from two prominent UK-based companies, and two iterations of a new prototype system that were developed in collaboration with a small group of young people on the autistic spectrum. Amongst these technologies, the aspects of musical experience that are made accessible differ with respect to the extent and nature of each system’s constraints. It is argued that the design-constraints of the new prototype system facilitated the diverse playing styles and techniques observed during its development. Based on these observations, we propose that deliberately constrained musical instruments may be one way of providing more opportunities for the emergence of personal practices and preferences in neurodiverse groups of children and young people, and that this is a fitting subject for further research.
@inproceedings{Wright2019, author = {Wright, Joe and Dooley, James}, title = {On the Inclusivity of Constraint: Creative Appropriation in Instruments for Neurodiverse Children and Young People}, pages = {162--167}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672908}, url = {http://www.nime.org/proceedings/2019/nime2019_paper032.pdf} }
-
Isabela Corintha Almeida, Giordano Cabral, and Professor Gilberto Bernardes Almeida. 2019. AMIGO: An Assistive Musical Instrument to Engage, Create and Learn Music. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 168–169. http://doi.org/10.5281/zenodo.3672910
Download PDF DOIWe present AMIGO, a real-time computer music system that assists novice users in the composition process through guided musical improvisation. The system consists of 1) a computational analysis-generation algorithm, which not only formalizes musical principles from examples, but also guides the user in selecting note sequences; 2) a MIDI keyboard controller with an integrated LED strip, which provides visual feedback to the user; and 3) a real-time music notation display, which presents the generated output. Ultimately, AMIGO allows the intuitive creation of new musical structures and the acquisition of Western music formalisms, such as musical notation.
@inproceedings{Almeida2019, author = {Almeida, Isabela Corintha and Cabral, Giordano and Almeida, Professor Gilberto Bernardes}, title = {AMIGO: An Assistive Musical Instrument to Engage, Create and Learn Music}, pages = {168--169}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672910}, url = {http://www.nime.org/proceedings/2019/nime2019_paper033.pdf} }
-
Cristiano Figueiró, Guilherme Soares, and Bruno Rohde. 2019. ESMERIL — An interactive audio player and composition system for collaborative experimental music netlabels. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 170–173. http://doi.org/10.5281/zenodo.3672912
Download PDF DOIESMERIL is an application developed for Android with a toolchain based on Puredata and OpenFrameworks (with the Ofelia library). The application enables music creation in a specific expanded format: four separate mono tracks, each one able to manipulate up to eight audio samples per channel. It also works as a performance instrument that stimulates collaborative remixing of compositions of scored interaction gestures called “scenes”. The interface also aims to be a platform for exchanging these sample packs as artistic releases, a format similar to the popular idea of an “album”, but prepared for four-channel packs of samples and interaction scores. It uses an adaptive audio slicing mechanism and is based on interaction design for multi-touch screen features. A timing sequencer enhances the interaction between pre-set sequences (the “scenes”) and screen manipulation: scratching, expanding and moving graphic sound waves. This paper describes the graphical interface features, the development decisions made so far, and perspectives for its continuation.
@inproceedings{Figueiró2019, author = {Figueiró, Cristiano and Soares, Guilherme and Rohde, Bruno}, title = {ESMERIL --- An interactive audio player and composition system for collaborative experimental music netlabels}, pages = {170--173}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672912}, url = {http://www.nime.org/proceedings/2019/nime2019_paper034.pdf} }
-
Aline Weber, Lucas Nunes Alegre, Jim Torresen, and Bruno C. da Silva. 2019. Parameterized Melody Generation with Autoencoders and Temporally-Consistent Noise. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 174–179. http://doi.org/10.5281/zenodo.3672914
Download PDF DOIWe introduce a machine learning technique to autonomously generate novel melodies that are variations of an arbitrary base melody. These are produced by a neural network that ensures that (with high probability) the melodic and rhythmic structure of the new melody is consistent with a given set of sample songs. We train a Variational Autoencoder network to identify a low-dimensional set of variables that allows for the compression and representation of sample songs. By perturbing these variables with Perlin Noise—a temporally-consistent parameterized noise function—it is possible to generate smoothly-changing novel melodies. We show that (1) by regulating the amount of noise, one can specify how much of the base song will be preserved; and (2) there is a direct correlation between the noise signal and the differences between the statistical properties of novel melodies and the original one. Users can interpret the controllable noise as a type of "creativity knob": the higher it is, the more leeway the network has to generate significantly different melodies. We present a physical prototype that allows musicians to use a keyboard to provide base melodies and to adjust the network’s "creativity knobs" to regulate in real-time the process that proposes new melody ideas.
@inproceedings{Weber2019, author = {Weber, Aline and Alegre, Lucas Nunes and Torresen, Jim and da Silva, Bruno C.}, title = {Parameterized Melody Generation with Autoencoders and Temporally-Consistent Noise}, pages = {174--179}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672914}, url = {http://www.nime.org/proceedings/2019/nime2019_paper035.pdf} }
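To make the mechanism in the entry above more concrete, here is a minimal Python sketch, assuming a trained VAE decoder exists (the `decoder` call is hypothetical): a base melody's latent code is perturbed over time with a smooth, temporally consistent noise signal whose amplitude acts as the "creativity" control. The noise here is a simple value-noise approximation rather than true Perlin noise, and all sizes are illustrative.

```python
# Illustrative sketch (not the paper's code): perturb a VAE latent vector with a
# temporally smooth noise signal so the decoded melody drifts gradually away from
# the base melody. `decoder` stands in for a trained VAE decoder and is hypothetical.
import numpy as np

def smooth_noise(n_steps, n_dims, grid=8, seed=0):
    """Value-noise approximation of a temporally consistent (Perlin-like) signal:
    random control points linearly interpolated over time, one track per latent dim."""
    rng = np.random.default_rng(seed)
    control = rng.uniform(-1.0, 1.0, size=(grid, n_dims))
    xs = np.linspace(0, grid - 1, n_steps)
    out = np.empty((n_steps, n_dims))
    for d in range(n_dims):
        out[:, d] = np.interp(xs, np.arange(grid), control[:, d])
    return out

def vary_melody(z_base, creativity, n_steps=64):
    """Return a sequence of perturbed latents; `creativity` scales how far
    the variations may wander from the base melody's latent code."""
    noise = smooth_noise(n_steps, z_base.shape[0])
    return z_base[None, :] + creativity * noise

z_base = np.zeros(32)                  # latent code of the base melody (hypothetical size)
latents = vary_melody(z_base, creativity=0.5)
# melodies = [decoder(z) for z in latents]   # decode each latent with the trained VAE
```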
-
Atau Tanaka, Balandino Di Donato, Michael Zbyszynski, and Geert Roks. 2019. Designing Gestures for Continuous Sonic Interaction. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 180–185. http://doi.org/10.5281/zenodo.3672916
Download PDF DOIThis paper presents a system that allows users to quickly try different ways of training neural networks and temporal modeling techniques to associate arm gestures with time-varying sound. We created a software framework for this, designed three interactive sounds, and presented them to participants in a workshop-based study. We build upon previous work in sound-tracing and mapping-by-demonstration, asking the participants to design gestures with which to perform the given sounds using a multimodal device combining inertial measurement (IMU) and muscle sensing (EMG). We presented the user with four techniques for associating sensor input to synthesizer parameter output. Two were classical techniques from the literature, and two proposed different ways to capture dynamic gesture in a neural network. These four techniques were: 1) a Static Position regression training procedure, 2) a Hidden Markov based temporal modeler, 3) Whole Gesture capture to a neural network, and 4) a Windowed method using the position-based procedure on the fly during the performance of a dynamic gesture. Our results show trade-offs between accurate, predictable reproduction of the source sounds and exploration of the gesture-sound space. Several of the users were attracted to our new windowed method for capturing gesture anchor points on the fly as training data for neural network based regression. This paper will be of interest to musicians interested in going from sound design to gesture design, and offers a workflow for quickly trying different mapping-by-demonstration techniques.
@inproceedings{Tanaka2019, author = {Tanaka, Atau and Di Donato, Balandino and Zbyszynski, Michael and Roks, Geert}, title = {Designing Gestures for Continuous Sonic Interaction}, pages = {180--185}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672916}, url = {http://www.nime.org/proceedings/2019/nime2019_paper036.pdf} }
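The windowed mapping-by-demonstration idea mentioned above can be approximated in a few lines: as the user performs a gesture over a sound, short snapshots of sensor features are paired with the synthesizer parameters sounding at that moment and used to train a regression model. This is only a hedged sketch of that general workflow, not the authors' framework; scikit-learn's MLPRegressor stands in for the neural network, and the feature and parameter shapes are made up.

```python
# Hypothetical sketch of windowed mapping-by-demonstration: while the user performs
# a gesture over a sound, each short window of IMU/EMG features is paired with the
# synth parameters sounding at that moment, and the pairs train a regression network.
# Feature extraction, window size, and parameter counts are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

WINDOW = 10  # sensor frames per training example

def windowed_pairs(sensor_frames, synth_params):
    """Slice a demonstration into (window-of-sensor-features, synth-parameter) pairs."""
    X, y = [], []
    for t in range(WINDOW, len(sensor_frames)):
        X.append(np.ravel(sensor_frames[t - WINDOW:t]))  # flatten the window
        y.append(synth_params[t])                        # parameters at the window's end
    return np.array(X), np.array(y)

# Toy demonstration data: 200 frames of 8 sensor channels, 3 synth parameters.
frames = np.random.rand(200, 8)
params = np.random.rand(200, 3)
X, y = windowed_pairs(frames, params)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
# At performance time, feed the most recent window to get parameter estimates.
print(model.predict(X[:1]))
```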
-
Cagri Erdem, Katja Henriksen Schia, and Alexander Refsum Jensenius. 2019. Vrengt: A Shared Body-Machine Instrument for Music-Dance Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 186–191. http://doi.org/10.5281/zenodo.3672918
Download PDF DOIThis paper describes the process of developing a shared instrument for music–dance performance, with a particular focus on exploring the boundaries between standstill vs motion, and silence vs sound. The piece Vrengt grew from the idea of enabling a true partnership between a musician and a dancer, developing an instrument that would allow for active co-performance. Using a participatory design approach, we worked with sonification as a tool for systematically exploring the dancer’s bodily expressions. The exploration used a "spatiotemporal matrix", with a particular focus on sonic microinteraction. In the final performance, two Myo armbands were used for capturing muscle activity of the arm and leg of the dancer, together with a wireless headset microphone capturing the sound of breathing. In the paper we reflect on multi-user instrument paradigms, discuss our approach to creating a shared instrument using sonification as a tool for the sound design, and reflect on the performers’ subjective evaluation of the instrument.
@inproceedings{Erdem2019, author = {Erdem, Cagri and Schia, Katja Henriksen and Jensenius, Alexander Refsum}, title = {Vrengt: A Shared Body-Machine Instrument for Music-Dance Performance}, pages = {186--191}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672918}, url = {http://www.nime.org/proceedings/2019/nime2019_paper037.pdf} }
-
Samuel Thompson Parke-Wolfe, Hugo Scurto, and Rebecca Fiebrink. 2019. Sound Control: Supporting Custom Musical Interface Design for Children with Disabilities. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 192–197. http://doi.org/10.5281/zenodo.3672920
Download PDF DOIWe have built a new software toolkit that enables music therapists and teachers to create custom digital musical interfaces for children with diverse disabilities. It was designed in collaboration with music therapists, teachers, and children. It uses interactive machine learning to create new sensor- and vision-based musical interfaces using demonstrations of actions and sound, making interface building fast and accessible to people without programming or engineering expertise. Interviews with two music therapy and education professionals who have used the software extensively illustrate how richly customised, sensor-based interfaces can be used in music therapy contexts; they also reveal how properties of input devices, music-making approaches, and mapping techniques can support a variety of interaction styles and therapy goals.
@inproceedings{Parke-Wolfe2019, author = {Parke-Wolfe, Samuel Thompson and Scurto, Hugo and Fiebrink, Rebecca}, title = {Sound Control: Supporting Custom Musical Interface Design for Children with Disabilities}, pages = {192--197}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672920}, url = {http://www.nime.org/proceedings/2019/nime2019_paper038.pdf} }
-
Oliver Hödl. 2019. ’Blending Dimensions’ when Composing for DMI and Symphonic Orchestra. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 198–203. http://doi.org/10.5281/zenodo.3672922
Download PDF DOIWith a new digital music instrument (DMI), the interface itself, the sound generation, the composition, and the performance are often closely related and even intrinsically linked with each other. Similarly, the instrument designer, composer, and performer are often the same person. The Academic Festival Overture is a new piece of music for the DMI Trombosonic and symphonic orchestra, written by a composer who had no prior experience with the instrument. The piece underwent the phases of a composition competition, rehearsals, a music video production, and a public live performance. This whole process was evaluated by reflecting on the experience of three key stakeholders: the composer, the conductor, and the instrument designer as performer. ‘Blending dimensions’ of these stakeholders and decoupling the composition from the instrument designer inspired the newly involved composer to completely rethink the DMI’s interaction and sound concept. Thus, deliberately avoiding an early collaboration between a DMI designer and a composer holds potential for new inspiration, but it also brings the challenge of seeking such a collaboration later on to clarify possible misunderstandings and make improvements.
@inproceedings{Hödl2019, author = {Hödl, Oliver}, title = {'Blending Dimensions' when Composing for DMI and Symphonic Orchestra}, pages = {198--203}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672922}, url = {http://www.nime.org/proceedings/2019/nime2019_paper039.pdf} }
-
Behzad Haki and Sergi Jorda. 2019. A Bassline Generation System Based on Sequence-to-Sequence Learning. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 204–209. http://doi.org/10.5281/zenodo.3672928
Download PDF DOIThis paper presents a detailed explanation of a system generating basslines that are stylistically and rhythmically interlocked with a provided audio drum loop. The proposed system is based on a natural language processing technique: word-based sequence-to-sequence learning using LSTM units. The novelty of the proposed method lies in the fact that the system is not reliant on a voice-by-voice transcription of drums; instead, a drum representation is used as an input sequence from which a translated bassline is obtained at the output. The drum representation consists of fixed-size sequences of onsets detected from a 2-bar audio drum loop in eight different frequency bands. The basslines generated by this method consist of pitched notes with different durations. The proposed system was trained on two distinct datasets compiled for this project by the authors. Each dataset contains a variety of 2-bar drum loops with annotated basslines from two different styles of dance music: House and Soca. A listening experiment revealed that the proposed system is capable of generating basslines that are interesting and rhythmically well interlocked with the drum loops from which they were generated.
@inproceedings{haki2019, author = {behzad haki and Jorda, Sergi}, title = {A Bassline Generation System Based on Sequence-to-Sequence Learning}, pages = {204--209}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672928}, url = {http://www.nime.org/proceedings/2019/nime2019_paper040.pdf} }
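The following PyTorch sketch shows, under loose assumptions, what a word-based encoder-decoder of the kind described above might look like: drum-loop tokens (one token per time step encoding the eight-band onset pattern) are encoded by one LSTM and decoded into bassline tokens by another. Vocabulary sizes, dimensions, and data are hypothetical, and this is not the authors' trained system.

```python
# Minimal encoder-decoder sketch of the sequence-to-sequence idea (not the authors'
# code): drum-loop tokens (onset patterns over eight frequency bands per time step)
# are encoded with an LSTM, and a second LSTM decodes a token sequence of bass notes.
# Vocabulary sizes, embedding sizes, and data are all hypothetical.
import torch
import torch.nn as nn

DRUM_VOCAB = 256   # e.g. one token per possible 8-band onset pattern
BASS_VOCAB = 130   # e.g. pitch/duration tokens plus rest and end symbols
EMB, HID = 64, 128

class DrumToBass(nn.Module):
    def __init__(self):
        super().__init__()
        self.drum_emb = nn.Embedding(DRUM_VOCAB, EMB)
        self.bass_emb = nn.Embedding(BASS_VOCAB, EMB)
        self.encoder = nn.LSTM(EMB, HID, batch_first=True)
        self.decoder = nn.LSTM(EMB, HID, batch_first=True)
        self.proj = nn.Linear(HID, BASS_VOCAB)

    def forward(self, drum_tokens, bass_tokens):
        _, state = self.encoder(self.drum_emb(drum_tokens))   # summarise the drum loop
        dec_out, _ = self.decoder(self.bass_emb(bass_tokens), state)
        return self.proj(dec_out)                             # logits per output step

model = DrumToBass()
drums = torch.randint(0, DRUM_VOCAB, (4, 32))    # batch of 4 two-bar loops, 32 steps
bass_in = torch.randint(0, BASS_VOCAB, (4, 32))  # teacher-forced decoder input
logits = model(drums, bass_in)                   # (4, 32, BASS_VOCAB)
```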
-
Lloyd May and Spencer Topel. 2019. BLIKSEM: An Acoustic Synthesis Fuzz Pedal. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 210–215. http://doi.org/10.5281/zenodo.3672930
Download PDF DOIThis paper presents a novel physical fuzz pedal effect system named BLIKSEM. Our approach applies previous work in nonlinear acoustic synthesis via a driven cantilever soundboard configuration for the purpose of generating fuzz pedal-like effects as well as a variety of novel audio effects. Following a presentation of our pedal design, we compare the performance of our system with various classic and contemporary fuzz pedals using an electric guitar. Our results show that BLIKSEM is capable of generating signals that approximate the timbre and dynamic behaviors of conventional fuzz pedals, as well as offering new mechanisms for expressive interaction and a range of new effects in different configurations.
@inproceedings{May2019, author = {May, Lloyd and spencer topel}, title = {BLIKSEM: An Acoustic Synthesis Fuzz Pedal}, pages = {210--215}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672930}, url = {http://www.nime.org/proceedings/2019/nime2019_paper041.pdf} }
-
Anna Xambó, Sigurd Saue, Alexander Refsum Jensenius, Robin Støckert, and Oeyvind Brandtsegg. 2019. NIME Prototyping in Teams: A Participatory Approach to Teaching Physical Computing. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 216–221. http://doi.org/10.5281/zenodo.3672932
Download PDF DOIIn this paper, we present a workshop of physical computing applied to NIME design based on science, technology, engineering, arts, and mathematics (STEAM) education. The workshop is designed for master's students with multidisciplinary backgrounds. They are encouraged to work in teams from two university campuses remotely connected through a portal space. The components of the workshop are prototyping, music improvisation and reflective practice. We report the results of this workshop, which show a positive impact on the students’ confidence in prototyping and intention to continue in STEM fields. We also present the challenges and lessons learned on how to improve the teaching of hybrid technologies and programming skills in an interdisciplinary context across two locations, with the aim of satisfying both beginners and experts. We conclude with a broader discussion on how these new pedagogical perspectives can improve NIME-related courses.
@inproceedings{Xambó2019, author = {Xambó, Anna and Saue, Sigurd and Jensenius, Alexander Refsum and Støckert, Robin and Brandtsegg, Oeyvind}, title = {NIME Prototyping in Teams: A Participatory Approach to Teaching Physical Computing}, pages = {216--221}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672932}, url = {http://www.nime.org/proceedings/2019/nime2019_paper042.pdf} }
-
Eduardo Meneses, Johnty Wang, Sergio Freire, and Marcelo Wanderley. 2019. A Comparison of Open-Source Linux Frameworks for an Augmented Musical Instrument Implementation. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 222–227. http://doi.org/10.5281/zenodo.3672934
Download PDF DOIThe increasing availability of accessible sensor technologies, single board computers, and prototyping platforms have resulted in a growing number of frameworks explicitly geared towards the design and construction of Digital and Augmented Musical Instruments. Developing such instruments can be facilitated by choosing the most suitable framework for each project. In the process of selecting a framework for implementing an augmented guitar instrument, we have tested three Linux-based open-source platforms that have been designed for real-time sensor interfacing, audio processing, and synthesis. Factors such as acquisition latency, workload measurements, documentation, and software implementation are compared and discussed to determine the suitability of each environment for our particular project.
@inproceedings{Meneses2019, author = {Meneses, Eduardo and Wang, Johnty and Freire, Sergio and Wanderley, Marcelo}, title = {A Comparison of Open-Source Linux Frameworks for an Augmented Musical Instrument Implementation}, pages = {222--227}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672934}, url = {http://www.nime.org/proceedings/2019/nime2019_paper043.pdf} }
-
Martin Matus Lerner. 2019. Latin American NIMEs: Electronic Musical Instruments and Experimental Sound Devices in the Twentieth Century. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 228–233. http://doi.org/10.5281/zenodo.3672936
Download PDF DOIDuring the twentieth century, several Latin American nations (such as Argentina, Brazil, Chile, Cuba and Mexico) produced significant antecedents of the NIME field. Their innovative authors interrelated musical composition, lutherie, electronics and computing. This paper provides a panoramic view of their original electronic instruments and experimental sound practices, as well as a perspective on them in relation to other inventions around the world.
@inproceedings{Matus-Lerner2019, author = {Lerner, Martin Matus}, title = {Latin American NIMEs: Electronic Musical Instruments and Experimental Sound Devices in the Twentieth Century}, pages = {228--233}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672936}, url = {http://www.nime.org/proceedings/2019/nime2019_paper044.pdf} }
-
Sarah Reid, Ryan Gaston, and Ajay Kapur. 2019. Perspectives on Time: performance practice, mapping strategies, & composition with MIGSI. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 234–239. http://doi.org/10.5281/zenodo.3672940
Download PDF DOIThis paper presents four years of development in performance and compositional practice on an electronically augmented trumpet called MIGSI. Discussion is focused on conceptual and technical approaches to data mapping, sonic interaction, and composition that are inspired by philosophical questions of time: what is now? Is time linear or multi-directional? Can we operate in multiple modes of temporal perception simultaneously? A number of mapping strategies are presented which explore these ideas through the manipulation of temporal separation between user input and sonic output. In addition to presenting technical progress, this paper will introduce a body of original repertoire composed for MIGSI, in order to illustrate how these tools and approaches have been utilized in live performance and how they may find use in other creative applications.
@inproceedings{Reid2019, author = {Reid, Sarah and Gaston, Ryan and Kapur, Ajay}, title = { Perspectives on Time: performance practice, mapping strategies, & composition with MIGSI }, pages = {234--239}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672940}, url = {http://www.nime.org/proceedings/2019/nime2019_paper045.pdf} }
-
Natacha Lamounier, Luiz Naveda, and Adriana Bicalho. 2019. The design of technological interfaces for interactions between music, dance and garment movements. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 240–245. http://doi.org/10.5281/zenodo.3672942
Download PDF DOIThe present work explores the design of multimodal interfaces that capture hand gestures and promote interactions between dance, music and a technological wearable garment. We aim to study the design strategies used to interface music with other domains of the performance, in particular the application of wearable technologies in music performances. The project describes the development of the music and wearable interfaces, which comprise a hand interface and a mechanical actuator attached to the dancer’s dress. The performance resulting from the study is inspired by butoh dance and attempts to add a technological poetics of music-dance-wearable interactions to the traditional dialogue between dance and music.
@inproceedings{Lamounier2019, author = {Lamounier, Natacha and Naveda, Luiz and Bicalho, Adriana}, title = {The design of technological interfaces for interactions between music, dance and garment movements}, pages = {240--245}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672942}, url = {http://www.nime.org/proceedings/2019/nime2019_paper046.pdf} }
-
Ximena Alarcon Diaz, Victor Evaristo Gonzalez Sanchez, and Cagri Erdem. 2019. INTIMAL: Walking to Find Place, Breathing to Feel Presence. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 246–249. http://doi.org/10.5281/zenodo.3672944
Download PDF DOIINTIMAL is a physical-virtual embodied system for relational listening that integrates body movement, oral archives, and voice expression through telematic improvisatory performance in migratory contexts. It has been informed by nine Colombian migrant women who express their migratory journeys through free body movement, voice and spoken word improvisation. These improvisations have been recorded using motion capture in order to develop interfaces for co-located and telematic interactions for the sharing of narratives of migration. In this paper, using data from the motion capture experiments, we explore two specific kinds of movement from the improvisers: displacements in space (walking, rotating) and breathing data. We envision how correlations between walking and breathing might be further studied to implement interfaces that help people in distant locations make connections between place and the feeling of presence.
@inproceedings{Alarcon-Diaz2019, author = {Diaz, Ximena Alarcon and Sanchez, Victor Evaristo Gonzalez and Erdem, Cagri}, title = {INTIMAL: Walking to Find Place, Breathing to Feel Presence}, pages = {246--249}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672944}, url = {http://www.nime.org/proceedings/2019/nime2019_paper047.pdf} }
-
Disha Sardana, Woohun Joo, Ivica Ico Bukvic, and Greg Earle. 2019. Introducing Locus: a NIME for Immersive Exocentric Aural Environments. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 250–255. http://doi.org/10.5281/zenodo.3672946
Download PDF DOILocus is a NIME designed specifically for an interactive, immersive high-density loudspeaker array environment. The system is based on a pointing mechanism to interact with a sound scene comprising 128 speakers. Users can point anywhere to interact with the system, and the spatial interaction utilizes motion capture, so it does not require a screen. Instead, it is completely controlled via hand gestures using a glove that is populated with motion-tracking markers. The main purpose of this system is to offer intuitive physical interaction with the perimeter-based spatial sound sources. Further, its goal is to minimize user-worn technology and thereby enhance freedom of motion by utilizing environmental sensing devices, such as motion capture cameras or infrared sensors. The ensuing creativity-enabling technology is applicable to a broad array of possible scenarios, from researching the limits of human spatial hearing perception to facilitating learning and artistic performances, including dance. In this paper, we describe our NIME design and implementation, its preliminary assessment, and offer a Unity-based toolkit to facilitate its broader deployment and adoption.
@inproceedings{Sardana2019, author = {Sardana, Disha and Joo, Woohun and Bukvic, Ivica Ico and Earle, Greg}, title = {Introducing Locus: a NIME for Immersive Exocentric Aural Environments}, pages = {250--255}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672946}, url = {http://www.nime.org/proceedings/2019/nime2019_paper048.pdf} }
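As a purely illustrative aside, the pointing interaction described above can be reduced to a small geometric computation: intersect a pointing ray from the tracked hand with the loudspeaker perimeter and snap to the nearest speaker. The sketch below assumes a flat circular ring of equally spaced speakers centred at the origin, which is a simplification of the actual high-density array and not the authors' implementation.

```python
# Hypothetical geometry sketch of pointing-based speaker selection: a tracked hand
# position and pointing direction are intersected with a circular loudspeaker
# perimeter and snapped to the nearest of N speakers. The array layout and
# coordinate frame are illustrative only.
import numpy as np

N_SPEAKERS = 128
RADIUS = 5.0  # metres, hypothetical ring radius centred on the origin

def pointed_speaker(hand_pos, direction):
    """Return the index of the speaker closest to where the pointing ray
    (hand_pos + t * direction, t > 0) crosses the loudspeaker ring."""
    p = np.asarray(hand_pos, dtype=float)
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    # Solve |p + t*d|^2 = RADIUS^2 for the forward intersection t > 0.
    b = np.dot(p, d)
    c = np.dot(p, p) - RADIUS**2
    disc = b * b - c
    if disc < 0:
        return None                       # ray never reaches the ring
    t = -b + np.sqrt(disc)                # the forward intersection
    hit = p + t * d
    azimuth = np.arctan2(hit[1], hit[0]) % (2 * np.pi)
    return int(round(azimuth / (2 * np.pi) * N_SPEAKERS)) % N_SPEAKERS

print(pointed_speaker(hand_pos=[0.5, -0.2], direction=[1.0, 0.3]))
```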
-
Echo Ho, Prof. Dr. Phil. Alberto de Campo, and Hannes Hoelzl. 2019. The SlowQin: An Interdisciplinary Approach to reinventing the Guqin. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 256–259. http://doi.org/10.5281/zenodo.3672948
Download PDF DOIThis paper presents an ongoing process of examining and reinventing the Guqin, to forge a contemporary engagement with this unique traditional Chinese string instrument. The SlowQin is both a hybrid resemblance of the Guqin and a fully functioning wireless interface for interacting with computer software. It has been developed and performed during the last decade. Instead of aiming for virtuosic perfection in playing the instrument, SlowQin emphasizes openness to continuously rethinking and reinventing the Guqin’s possibilities. Through a combination of conceptual work and practical production, Echo Ho’s SlowQin project works as an experimental twist on Historically Informed Performance, with the motivation of conveying artistic gestures that tackle philosophical, ideological, and socio-political subjects embedded in our globalised living environment. In particular, this paper touches on the history of the Guqin, gives an overview of the technical design concepts of the instrument, and discusses the aesthetic approaches of the SlowQin performances that have been realised so far.
@inproceedings{Ho2019, author = {Ho, Echo and de Campo, Prof. Dr. Phil. Alberto and Hoelzl, Hannes}, title = {The SlowQin: An Interdisciplinary Approach to reinventing the Guqin}, pages = {256--259}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672948}, url = {http://www.nime.org/proceedings/2019/nime2019_paper049.pdf} }
-
Charles Patrick Martin and Jim Torresen. 2019. An Interactive Musical Prediction System with Mixture Density Recurrent Neural Networks. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 260–265. http://doi.org/10.5281/zenodo.3672952
Download PDF DOIThis paper is about creating digital musical instruments where a predictive neural network model is integrated into the interactive system. Rather than predicting symbolic music (e.g., MIDI notes), we suggest that predicting future control data from the user and precise temporal information can lead to new and interesting interactive possibilities. We propose that a mixture density recurrent neural network (MDRNN) is an appropriate model for this task. The predictions can be used to fill in control data when the user stops performing, or as a kind of filter on the user’s own input. We present an interactive MDRNN prediction server that allows rapid prototyping of new NIMEs featuring predictive musical interaction by recording datasets, training MDRNN models, and experimenting with interaction modes. We illustrate our system with several example NIMEs applying this idea. Our evaluation shows that real-time predictive interaction is viable even on single-board computers and that small models are appropriate for small datasets.
@inproceedings{Martin2019, author = {Martin, Charles Patrick and Torresen, Jim}, title = {An Interactive Musical Prediction System with Mixture Density Recurrent Neural Networks}, pages = {260--265}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672952}, url = {http://www.nime.org/proceedings/2019/nime2019_paper050.pdf} }
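For readers unfamiliar with mixture density recurrent networks, the sketch below shows the core idea under simplifying assumptions: an LSTM reads a history of control events and a mixture density head outputs the weights, means and scales of a Gaussian mixture from which the next event can be sampled. Diagonal covariances, the event layout (time delta plus x/y), and all sizes are illustrative; this is not the prediction server described in the paper.

```python
# Simplified sketch of a mixture density recurrent network (MDRNN) head for
# continuous control data: an LSTM processes past (dt, x, y) control events and a
# mixture density layer outputs the parameters of a Gaussian mixture over the next
# event. Diagonal covariances and all sizes are simplifying assumptions.
import torch
import torch.nn as nn

DIM = 3        # (time delta, x, y) per control event, illustrative
K = 5          # number of mixture components
HID = 64

class ControlMDRNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(DIM, HID, batch_first=True)
        self.pi = nn.Linear(HID, K)               # mixture weights (logits)
        self.mu = nn.Linear(HID, K * DIM)         # component means
        self.log_sigma = nn.Linear(HID, K * DIM)  # component scales (log std dev)

    def forward(self, events):                    # events: (batch, time, DIM)
        h, _ = self.lstm(events)
        last = h[:, -1]                           # predict from the most recent step
        pi = torch.softmax(self.pi(last), dim=-1)
        mu = self.mu(last).view(-1, K, DIM)
        sigma = torch.exp(self.log_sigma(last)).view(-1, K, DIM)
        return pi, mu, sigma

def sample_next(pi, mu, sigma):
    """Draw the next predicted control event from the mixture."""
    comp = torch.multinomial(pi, 1).squeeze(-1)   # pick a component per batch item
    idx = comp.view(-1, 1, 1).expand(-1, 1, DIM)
    m = mu.gather(1, idx).squeeze(1)
    s = sigma.gather(1, idx).squeeze(1)
    return m + s * torch.randn_like(m)

model = ControlMDRNN()
history = torch.randn(1, 20, DIM)     # 20 past control events (toy data)
next_event = sample_next(*model(history))
```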
-
Nicolas Bazoge, Ronan Gaugne, Florian Nouviale, Valérie Gouranton, and Bruno Bossis. 2019. Expressive potentials of motion capture in musical performance. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 266–271. http://doi.org/10.5281/zenodo.3672954
Download PDF DOIThe paper presents the electronic music performance project Vis Insita, implementing the design of experimental instrumental interfaces based on optical motion capture technology with passive infrared markers (MoCap), and the analysis of their use in a real scenic presentation context. Because of MoCap’s predisposition to capture the movements of the body, much research and many musical applications in the performing arts concern dance or the sonification of gesture. For our research, we wanted to move away from the capture of the human body to analyse the possibilities of a kinetic object handled by a performer, both in terms of musical expression and in the broader context of a multimodal scenic interpretation.
@inproceedings{Bazoge2019, author = {Bazoge, Nicolas and Gaugne, Ronan and Nouviale, Florian and Gouranton, Valérie and Bossis, Bruno}, title = {Expressive potentials of motion capture in musical performance}, pages = {266--271}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672954}, url = {http://www.nime.org/proceedings/2019/nime2019_paper051.pdf} }
-
Akito Van Troyer and Rebecca Kleinberger. 2019. From Mondrian to Modular Synth: Rendering NIME using Generative Adversarial Networks. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 272–277. http://doi.org/10.5281/zenodo.3672956
Download PDF DOIThis paper explores the potential of image-to-image translation techniques in aiding the design of new hardware-based musical interfaces such as MIDI keyboards, grid-based controllers, drum machines, and analog modular synthesizers. We collected an extensive image database of such interfaces and implemented image-to-image translation techniques using variants of Generative Adversarial Networks. The created models learn the mapping between input and output images using a training set of either paired or unpaired images. We qualitatively assess the visual outcomes of three image-to-image translation models: reconstructing interfaces from edge maps, and collection style transfer based on two image sets, visuals of mosaic tile patterns and abstract geometric two-dimensional art. This paper aims to demonstrate that synthesizing interface layouts based on image-to-image translation techniques can yield insights for researchers, musicians, music technology industrial designers, and the broader NIME community.
@inproceedings{Van-Troyer2019, author = {Troyer, Akito Van and Kleinberger, Rebecca}, title = {From Mondrian to Modular Synth: Rendering NIME using Generative Adversarial Networks}, pages = {272--277}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672956}, url = {http://www.nime.org/proceedings/2019/nime2019_paper052.pdf} }
-
Laurel Pardue, Kurijn Buys, Dan Overholt, Andrew P. McPherson, and Michael Edinger. 2019. Separating sound from source: sonic transformation of the violin through electrodynamic pickups and acoustic actuation. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 278–283. http://doi.org/10.5281/zenodo.3672958
Download PDF DOIWhen designing an augmented acoustic instrument, it is often of interest to retain an instrument’s sound quality and nuanced response while leveraging the richness of digital synthesis. Digital audio has traditionally been generated through speakers, separating sound generation from the instrument itself, or by adding an actuator within the instrument’s resonating body, imparting new sounds along with the original. We offer a third option, isolating the playing interface from the actuated resonating body, allowing us to rewrite the relationship between performance action and sound result while retaining the general form and feel of the acoustic instrument. We present a hybrid acoustic-electronic violin based on a stick-body electric violin and an electrodynamic polyphonic pick-up capturing individual string displacements. A conventional violin body acts as the resonator, actuated using digitally altered audio of the string inputs. By attaching the electric violin above the body with acoustic isolation, we retain the physical playing experience of a normal violin along with some of the acoustic filtering and radiation of a traditional build. We propose the use of the hybrid instrument with digitally automated pitch and tone correction to make an easy violin for use as a potential motivational tool for beginning violinists.
@inproceedings{Pardue2019, author = {Pardue, Laurel and Buys, Kurijn and Overholt, Dan and McPherson, Andrew P. and Edinger, Michael}, title = {Separating sound from source: sonic transformation of the violin through electrodynamic pickups and acoustic actuation}, pages = {278--283}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672958}, url = {http://www.nime.org/proceedings/2019/nime2019_paper053.pdf} }
-
Gabriela Bila Advincula, Don Derek Haddad, and Kent Larson. 2019. Grain Prism: Hieroglyphic Interface for Granular Sampling. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 284–285. http://doi.org/10.5281/zenodo.3672960
Download PDF DOIThis paper introduces the Grain Prism, a hybrid granular synthesizer and sampler that, through a capacitive sensing interface presented in obscure glyphs, invites users to create experimental sound textures with their own recorded voice. The capacitive sensing system, activated through skin contact over single glyphs or combinations of them, prompts the user to decipher the hidden sonic messages. The mysterious interface opens space for aleatoricism in the act of conjuring sound, and therefore for new discoveries. The users, when forced to abandon preconceived ways of playing a synthesizer, see themselves in a different light, as their voice is the source material.
@inproceedings{Advincula2019, author = {Advincula, Gabriela Bila and Haddad, Don Derek and Larson, Kent}, title = {Grain Prism: Hieroglyphic Interface for Granular Sampling}, pages = {284--285}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672960}, url = {http://www.nime.org/proceedings/2019/nime2019_paper054.pdf} }
-
Oliver Bown, Angelo Fraietta, Sam Ferguson, Lian Loke, and Liam Bray. 2019. Facilitating Creative Exploratory Search with Multiple Networked Audio Devices Using HappyBrackets. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 286–291. http://doi.org/10.5281/zenodo.3672962
Download PDF DOIWe present an audio-focused creative coding toolkit for deploying music programs to remote networked devices. It is designed to support efficient creative exploratory search in the context of the Internet of Things (IoT), where one or more devices must be configured, programmed and interact over a network, with applications in digital musical instruments, networked music performance and other digital experiences. Users can easily monitor and hack what multiple devices are doing on the fly, enhancing their ability to perform “exploratory search” in a creative workflow. We present two creative case studies using the system: the creation of a dance performance and the creation of a distributed musical installation. Analysing different activities within the production process, with a particular focus on the trade-off between more creative exploratory tasks and more standard configuring and problem-solving tasks, we show how the system supports creative exploratory search for multiple networked devices.
@inproceedings{Bown2019, author = {Bown, Oliver and Fraietta, Angelo and Ferguson, Sam and Loke, Lian and Bray, Liam}, title = {Facilitating Creative Exploratory Search with Multiple Networked Audio Devices Using HappyBrackets}, pages = {286--291}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672962}, url = {http://www.nime.org/proceedings/2019/nime2019_paper055.pdf} }
-
Thais Fernandes Santos. 2019. The reciprocity between ancillary gesture and music structure performed by expert musicians. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 292–297. http://doi.org/10.5281/zenodo.3672966
Download PDF DOIDuring musical performance, expert musicians consciously manipulate acoustical parameters to express their interpretative choices. Players also make physical motions, and in many cases these gestures are related to the musicians’ artistic intentions. However, it is not clear whether such sound manipulation is reflected in physical motion. The understanding of the musical structure of the work being performed, at its many levels, may impact the projection of artistic intentions, and performers alter it in micro and macro sections, such as musical motifs, phrases and sections. Therefore, this paper investigates timing manipulation and how such variations may be reflected in physical gestures. The study involved musicians (flute, clarinet, and bassoon players) performing a unison excerpt by G. Rossini. We analyzed the relationship between timing variation (Inter-Onset Interval deviations) and physical motion, based on the traveled distance of the flute, under different conditions. The flutists were asked to play the musical excerpt in three experimental conditions: (1) playing solo, and playing in duet with previous recordings by other instrumentalists, namely (2) a clarinetist and (3) a bassoonist. The findings suggest that: 1) the movements, which seem to be related to the sense of pulse, are recurrent and stable, and 2) the timing variability in micro or macro sections is reflected in the amplitude of the gestures performed by the flutists.
@inproceedings{Fernandes-Santos2019, author = {Santos, Thais Fernandes}, title = {The reciprocity between ancillary gesture and music structure performed by expert musicians}, pages = {292--297}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672966}, url = {http://www.nime.org/proceedings/2019/nime2019_paper056.pdf} }
-
Razvan Paisa and Dan Overholt. 2019. Enhancing the Expressivity of the Sensel Morph via Audio-rate Sensing. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 298–302. http://doi.org/10.5281/zenodo.3672968
Download PDF DOIThis project describes a novel approach to hybrid electro-acoustic instruments, augmenting the Sensel Morph with real-time audio sensing capabilities. The actual action-sounds are captured with a piezoelectric transducer and processed in Max 8 to extend the sonic range existing in the acoustical domain alone. The control parameters are captured by the Morph and mapped to audio algorithm properties like filter cutoff frequency, frequency shift or overdrive. The instrument opens up the possibility for a large selection of different interaction techniques that have a direct impact on the output sound. The instrument is evaluated from a sound designer’s perspective, encouraging exploration of the materials used as well as the techniques. The contributions are twofold. First, the use of a piezo transducer to augment the Sensel Morph affords an extra dimension of control on top of its existing offerings. Second, the use of acoustic sounds from physical interactions as a source for excitation and manipulation of an audio processing system offers a large variety of new sounds to be discovered. The methodology involved an exploratory process of iterative instrument making, interspersed with observations gathered via improvisatory trials, focusing on the new interactions made possible through the fusion of audio-rate inputs with the Morph’s default interaction methods.
@inproceedings{Paisa2019, author = {Paisa, Razvan and Overholt, Dan}, title = {Enhancing the Expressivity of the Sensel Morph via Audio-rate Sensing}, pages = {298--302}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672968}, url = {http://www.nime.org/proceedings/2019/nime2019_paper057.pdf} }
-
Juan Mariano Ramos. 2019. Eolos: a wireless MIDI wind controller. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 303–306. http://doi.org/10.5281/zenodo.3672972
Download PDF DOIThis paper presents a description of the design and usage of Eolos, a wireless MIDI wind controller. The main goal of Eolos is to provide an interface that facilitates the production of music for any individual, regardless of their playing skills or previous musical knowledge. Its features are: open design, lower cost than commercial alternatives, wireless MIDI operation, rechargeable battery power, a graphical user interface, tactile keys, sensitivity to air pressure, a left-right reversible design and two FSR sensors. The paper also describes its participation in the 1st Collaborative Concert over the Internet between Argentina and Cuba, "Tradición y Nuevas Sonoridades".
@inproceedings{Ramos2019, author = {Ramos, Juan Mariano}, title = {Eolos: a wireless MIDI wind controller}, pages = {303--306}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672972}, url = {http://www.nime.org/proceedings/2019/nime2019_paper058.pdf} }
-
Ruihan Yang, Tianyao Chen, Yiyi Zhang, and Gus Xia. 2019. Inspecting and Interacting with Meaningful Music Representations using VAE. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 307–312. http://doi.org/10.5281/zenodo.3672974
Download PDF DOIVariational Autoencoders have already achieved great results on image generation and have recently made promising progress on music sequence generation. However, such models are still quite difficult to control, in the sense that the learned latent representations lack meaningful music semantics. What users really need is to interact with certain music features, such as rhythm and pitch contour, in the creation process, so that they can easily test different composition ideas. In this paper, we propose a disentanglement-by-augmentation method to inspect the pitch and rhythm interpretations of the latent representations. Based on the interpretable representations, an intuitive graphical user interface demo is designed for users to better direct the music creation process by manipulating the pitch contours and rhythmic complexity.
@inproceedings{Yang2019, author = {Yang, Ruihan and Chen, Tianyao and Zhang, Yiyi and gus xia}, title = {Inspecting and Interacting with Meaningful Music Representations using VAE}, pages = {307--312}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672974}, url = {http://www.nime.org/proceedings/2019/nime2019_paper059.pdf} }
-
Gerard Roma, Owen Green, and Pierre Alexandre Tremblay. 2019. Adaptive Mapping of Sound Collections for Data-driven Musical Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 313–318. http://doi.org/10.5281/zenodo.3672976
Download PDF DOIDescriptor spaces have become a ubiquitous interaction paradigm for music based on collections of audio samples. However, most systems rely on a small predefined set of descriptors, which the user is often required to understand and choose from. There is no guarantee that the chosen descriptors are relevant for a given collection. In addition, this method does not scale to longer samples that require higher-dimensional descriptions, which biases systems towards the use of short samples. In this paper we propose a novel framework for the automatic creation of interactive sound spaces from sound collections using feature learning and dimensionality reduction. The framework is implemented as a software library using the SuperCollider language. We compare several algorithms and describe some example interfaces for interacting with the resulting spaces. Our experiments signal the potential of unsupervised algorithms for creating data-driven musical interfaces.
@inproceedings{Roma2019, author = {Roma, Gerard and Green, Owen and Tremblay, Pierre Alexandre}, title = {Adaptive Mapping of Sound Collections for Data-driven Musical Interfaces}, pages = {313--318}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672976}, url = {http://www.nime.org/proceedings/2019/nime2019_paper060.pdf} }
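The pipeline described in the abstract above (audio description, feature learning, dimensionality reduction into a navigable space) can be approximated outside SuperCollider. The following Python sketch, assuming librosa and scikit-learn are installed and using MFCC statistics as a crude stand-in for learned features, lays a folder of samples out on a 2-D plane with PCA; the folder path and feature choices are arbitrary placeholders.

```python
# Minimal sketch of a data-driven map of a sound collection:
# summarise each file with MFCC statistics, then project to 2-D.
# Paths and feature choices are illustrative assumptions.
import glob
import numpy as np
import librosa
from sklearn.decomposition import PCA

def describe(path):
    y, sr = librosa.load(path, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    # Mean and standard deviation over time give one fixed-length vector per sound.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

files = sorted(glob.glob("samples/*.wav"))   # hypothetical sample folder
if not files:
    raise SystemExit("no samples found")

features = np.vstack([describe(f) for f in files])
coords = PCA(n_components=2).fit_transform(features)

for f, (x, y) in zip(files, coords):
    print(f"{f}: x={x:.2f} y={y:.2f}")       # positions for a 2-D browsing interface
```

In the paper's framework the descriptors, learning method, and reduction algorithm are all configurable; this sketch fixes arbitrary choices purely for illustration.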
-
Vesa Petri Norilo. 2019. Veneer: Visual and Touch-based Programming for Audio. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 319–324. http://doi.org/10.5281/zenodo.3672978
Download PDF DOIThis paper presents Veneer, a visual, touch-ready programming interface for the Kronos programming language. The challenges of representing high-level data flow abstractions, including higher order functions, are described. The tension between abstraction and spontaneity in programming is addressed, and gradual abstraction in live programming is proposed as a potential solution. Several novel user interactions for patching on a touch device are shown. In addition, the paper describes some of the current issues of web audio music applications and offers strategies for integrating a web-based presentation layer with a low-latency native processing backend.
@inproceedings{Norilo2019, author = {Norilo, Vesa Petri}, title = {Veneer: Visual and Touch-based Programming for Audio}, pages = {319--324}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672978}, url = {http://www.nime.org/proceedings/2019/nime2019_paper061.pdf} }
-
Andrei Faitas, Synne Engdahl Baumann, Torgrim Rudland Næss, Jim Torresen, and Charles Patrick Martin. 2019. Generating Convincing Harmony Parts with Simple Long Short-Term Memory Networks. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 325–330. http://doi.org/10.5281/zenodo.3672980
Download PDF DOIGenerating convincing music via deep neural networks is a challenging problem that shows promise for many applications including interactive musical creation. One part of this challenge is the problem of generating convincing accompaniment parts to a given melody, as could be used in an automatic accompaniment system. Despite much progress in this area, systems that can automatically learn to generate interesting sounding, as well as harmonically plausible, accompanying melodies remain somewhat elusive. In this paper we explore the problem of sequence-to-sequence music generation where a human user provides a sequence of notes, and a neural network model responds with a harmonically suitable sequence of equal length. We consider two sequence-to-sequence models: one featuring a standard unidirectional long short-term memory (LSTM) architecture, and the other a bidirectional LSTM; both are successfully trained to produce a sequence based on the given input. Both are fairly dated models; part of the investigation is to see what can be achieved with such models. These are evaluated and compared via a qualitative study that features 106 respondents listening to eight random samples from our set of generated music, as well as two human samples. From the results we see a preference for the sequences generated by the bidirectional model as well as an indication that these sequences sound more human.
@inproceedings{Faitas2019, author = {Faitas, Andrei and Baumann, Synne Engdahl and Næss, Torgrim Rudland and Torresen, Jim and Martin, Charles Patrick}, title = {Generating Convincing Harmony Parts with Simple Long Short-Term Memory Networks}, pages = {325--330}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672980}, url = {http://www.nime.org/proceedings/2019/nime2019_paper062.pdf} }
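As a rough, self-contained illustration of the second model family mentioned above, the sketch below defines a small Keras network with a bidirectional LSTM encoder that emits one harmony token per melody step; the vocabulary size, sequence length, layer widths, and training data are arbitrary placeholders, not the authors' architecture or corpus.

```python
# Minimal sketch (not the authors' architecture): a bidirectional LSTM
# layer summarises the melody, and a further LSTM plus a per-step
# softmax emit one harmony token per melody step. Sizes are assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, VOCAB, HIDDEN = 32, 64, 128   # assumed sequence length / pitch vocabulary

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN,)),
    layers.Embedding(VOCAB, 32),                       # melody tokens -> vectors
    layers.Bidirectional(layers.LSTM(HIDDEN, return_sequences=True)),
    layers.LSTM(HIDDEN, return_sequences=True),        # decoder-style layer
    layers.TimeDistributed(layers.Dense(VOCAB, activation="softmax")),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Train on (melody, harmony) pairs of equal length; random data here
# stands in for a real corpus.
melodies = np.random.randint(0, VOCAB, size=(100, SEQ_LEN))
harmonies = np.random.randint(0, VOCAB, size=(100, SEQ_LEN))
model.fit(melodies, harmonies, epochs=1, verbose=0)
print(model.predict(melodies[:1]).shape)   # (1, SEQ_LEN, VOCAB)
```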
-
Anthony T. Marasco, Edgar Berdahl, and Jesse Allison. 2019. Bendit_I/O: A System for Networked Performance of Circuit-Bent Devices. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 331–334. http://doi.org/10.5281/zenodo.3672982
Download PDF DOIBendit_I/O is a system that allows for wireless, networked performance of circuit-bent devices, giving artists a new outlet for performing with repurposed technology. In a typical setup, a user pre-bends a device using the Bendit_I/O board as an intermediary, replacing physical switches and potentiometers with the board’s reed relays, motor driver, and digital potentiometer signals. Bendit_I/O brings the networking techniques of distributed music performances to the hardware hacking realm, opening the door for creative implementation of multiple circuit-bent devices in audiovisual experiences. Consisting of a Wi-Fi-enabled I/O board and a Node-based server, the system provides performers with a variety of interaction and control possibilities between connected users and hacked devices. Moreover, it is user-friendly, low-cost, and modular, making it a flexible toolset for artists of diverse experience levels.
@inproceedings{Marasco2019, author = {Marasco, Anthony T. and Berdahl, Edgar and Allison, Jesse}, title = {Bendit_I/O: A System for Networked Performance of Circuit-Bent Devices}, pages = {331--334}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672982}, url = {http://www.nime.org/proceedings/2019/nime2019_paper063.pdf} }
-
McLean J Macionis and Ajay Kapur. 2019. Where Is The Quiet: Immersive Experience Design Using the Brain, Mechatronics, and Machine Learning. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 335–338. http://doi.org/10.5281/zenodo.3672984
Download PDF DOI‘Where Is The Quiet?’ is a mixed-media installation that utilizes immersive experience design, mechatronics, and machine learning in order to enhance wellness and increase connectivity to the natural world. Individuals interact with the installation by wearing a brainwave interface that measures the strength of the alpha wave signal. The interface then transmits the data to a computer that uses it in order to determine the individual’s overall state of relaxation. As the individual achieves higher states of relaxation, mechatronic instruments respond and provide feedback. This feedback not only encourages self-awareness but also motivates the individual to relax further. Visitors without the headset experience the installation by watching a film and listening to an original musical score. Through the novel arrangement of technologies and features, ‘Where Is The Quiet?’ demonstrates that mediated technological experiences are capable of evoking meditative states of consciousness, facilitating individual and group connectivity, and deepening awareness of the natural world. As such, this installation opens the door to future research regarding the possibility of immersive experiences supporting humanitarian needs.
@inproceedings{Macionis2019, author = {Macionis, McLean J and Kapur, Ajay}, title = {Where Is The Quiet: Immersive Experience Design Using the Brain, Mechatronics, and Machine Learning}, pages = {335--338}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672984}, url = {http://www.nime.org/proceedings/2019/nime2019_paper064.pdf} }
-
Tate Carson. 2019. Mesh Garden: A creative-based musical game for participatory musical performance . Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 339–342. http://doi.org/10.5281/zenodo.3672986
Download PDF DOIMesh Garden explores participatory music-making with smartphones using an audio sequencer game made up of a distributed smartphone speaker system. The piece allows a group of people in a relaxed situation to create a piece of ambient music using their smartphones networked through the internet. The players’ interactions with the music are derived from the orientations of their phones. The work also has a gameplay aspect; if two players’ phones match in orientation, one player has the option to take the other player’s note, building up a bank of notes that will be used to form a melody.
@inproceedings{Carson2019, author = {Carson, Tate}, title = {Mesh Garden: A creative-based musical game for participatory musical performance }, pages = {339--342}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672986}, url = {http://www.nime.org/proceedings/2019/nime2019_paper065.pdf} }
-
Beat Rossmy and Alexander Wiethoff. 2019. The Modular Backward Evolution — Why to Use Outdated Technologies. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 343–348. http://doi.org/10.5281/zenodo.3672988
Download PDF DOIIn this paper we draw a picture that captures the increasing interest in the format of modular synthesizers today. We therefore provide a historical summary, which includes the origins, the fall and the rediscovery of that technology. Further, an empirical analysis is performed based on statements given by artists and manufacturers taken from published interviews. These statements were aggregated, objectified and later reviewed by an expert group consisting of modular synthesizer vendors. Their responses provide the basis for the discussion on how emerging trends in synthesizer interface design reveal challenges and opportunities for the NIME community.
@inproceedings{Rossmy2019, author = {Rossmy, Beat and Wiethoff, Alexander}, title = {The Modular Backward Evolution --- Why to Use Outdated Technologies}, pages = {343--348}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672988}, url = {http://www.nime.org/proceedings/2019/nime2019_paper066.pdf} }
-
Vincent Goudard. 2019. Ephemeral instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 349–354. http://doi.org/10.5281/zenodo.3672990
Download PDF DOIThis article questions the notion of ephemerality of digital musical instruments (DMI). Longevity is generally regarded as a valuable quality that good design criteria should help to achieve. However, the nature of the tools, of the performance conditions and of the music itself may lead one to think of ephemerality as an intrinsic modality of the existence of DMIs. In particular, the conditions of contemporary musical production suggest that contextual adaptations of instrumental devices beyond the monolithic unity of classical instruments should be considered. The first two parts of this article analyse various reasons to reassess the issue of longevity and ephemerality. The last two sections attempt to propose an articulation of these two aspects to inform both the design of DMIs and how they are learned.
@inproceedings{Goudard2019, author = {Goudard, Vincent}, title = {Ephemeral instruments}, pages = {349--354}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672990}, url = {http://www.nime.org/proceedings/2019/nime2019_paper067.pdf} }
-
Julian Jaramillo and Fernando Iazzetta. 2019. PICO: A portable audio effect box for traditional plucked-string instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 355–360. http://doi.org/10.5281/zenodo.3672992
Download PDF DOIThis paper reports the conception, design, implementation and evaluation processes of PICO, a portable audio effect system created with Pure Data and the Raspberry Pi, which augments traditional plucked string instruments such as the Brazilian Cavaquinho, the Venezuelan Cuatro, the Colombian Tiple and the Peruvian/Bolivian Charango. A fabric soft case fixed to the instrument’s body holds the PICO modules: the touchscreen, the single board computer, the sound card, the speaker system and the DC power bank. The device’s audio specifications arose from musicological insights about the social role of performers in their musical contexts and the instruments’ playing techniques. They were taken as design challenges in the creation process of PICO’s first prototype, which was submitted to a short evaluation. Along with the construction of PICO, we reflected on the design of an interactive audio interface as a mode of research. Therefore, the paper will also discuss methodological aspects of audio hardware design.
@inproceedings{Jaramillo2019, author = {Jaramillo, Julian and Iazzetta, Fernando}, title = {PICO: A portable audio effect box for traditional plucked-string instruments}, pages = {355--360}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672992}, url = {http://www.nime.org/proceedings/2019/nime2019_paper068.pdf} }
-
Guilherme Bertissolo. 2019. Composing Understandings: music, motion, gesture and embodied cognition. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 361–364. http://doi.org/10.5281/zenodo.3672994
Download PDF DOIThis paper focuses on ongoing research in music composition based on the study of cognitive research in musical meaning. As a method and result at the same time, we propose the creation of experiments related to key issues in composition and music cognition, such as music and movement, memory, expectation and metaphor in the creative process. The theoretical framework is linked to embodied cognition, with connections to cognitive semantics and the enactivist current of the cognitive sciences, among other domains of the contemporary sciences of mind and neuroscience. The experiments involve the relationship between music and movement, based on prior research that uses as a reference a context in which it is not possible to establish a clear distinction between the two: Capoeira. Finally, we propose a discussion about the application of the theoretical approach in two compositions: Boreal IV, for Steel Drums and real-time electronics, and Converse, a collaborative multimedia piece for piano, real-time audio (Pure Data), video processing (GEM and live video) and a dancer.
@inproceedings{Bertissolo2019, author = {Bertissolo, Guilherme}, title = {Composing Understandings: music, motion, gesture and embodied cognition}, pages = {361--364}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672994}, url = {http://www.nime.org/proceedings/2019/nime2019_paper069.pdf} }
-
Cristohper Ramos Flores, Jim Murphy, and Michael Norris. 2019. HypeSax: Saxophone acoustic augmentation. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 365–370. http://doi.org/10.5281/zenodo.3672996
Download PDF DOINew interfaces allow performers to access new possibilities of musical expression. Even though interfaces are often designed to be adaptable to different software, most of them rely on external speakers or similar transducers. This often results in disembodiment and acoustic disengagement from the interface, and in the case of augmented instruments, from the instruments themselves. This paper describes a project in which a hybrid system allows an acoustic integration between the sound of the acoustic saxophone and electronics.
@inproceedings{Ramos-Flores2019, author = {Flores, Cristohper Ramos and Murphy, Jim and Norris, Michael}, title = {HypeSax: Saxophone acoustic augmentation}, pages = {365--370}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672996}, url = {http://www.nime.org/proceedings/2019/nime2019_paper070.pdf} }
-
Patrick Chwalek and Joe Paradiso. 2019. CD-Synth: a Rotating, Untethered, Digital Synthesizer. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 371–374. http://doi.org/10.5281/zenodo.3672998
Download PDF DOIWe describe the design of an untethered digital synthesizer that can be held and manipulated while broadcasting audio data to an off-the-shelf Bluetooth receiver. The synthesizer allows the user to freely rotate and reorient the instrument while exploiting non-contact light sensing for a truly expressive performance. The system consists of a suite of sensors that convert rotation, orientation, touch, and user proximity into various audio filters and effects operated on preset wave tables, while offering a persistence of vision display for input visualization. This paper discusses the design of the system, including the circuit, mechanics, and software layout, as well as how this device may be incorporated into a performance.
@inproceedings{Chwalek2019, author = {Chwalek, Patrick and Paradiso, Joe}, title = {CD-Synth: a Rotating, Untethered, Digital Synthesizer}, pages = {371--374}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672998}, url = {http://www.nime.org/proceedings/2019/nime2019_paper071.pdf} }
-
Niccolò Granieri and James Dooley. 2019. Reach: a keyboard-based gesture recognition system for live piano sound modulation. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 375–376. http://doi.org/10.5281/zenodo.3673000
Download PDF DOIThis paper presents Reach, a keyboard-based gesture recognition system for live piano sound modulation. Reach is a system built using the Leap Motion Orion SDK, Pure Data and a custom C++ OSC mapper. It provides control over the sound modulation of an acoustic piano using the pianist’s ancillary gestures. The system was developed using an iterative design process, incorporating research findings from two user studies and several case studies. The results that emerged show the potential of recognising and utilising the pianist’s existing technique when designing keyboard-based DMIs, reducing the requirement to learn additional techniques.
@inproceedings{Granieri2019, author = {Granieri, Niccolò and Dooley, James}, title = {Reach: a keyboard-based gesture recognition system for live piano sound modulation}, pages = {375--376}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673000}, url = {http://www.nime.org/proceedings/2019/nime2019_paper072.pdf} }
-
margaret schedel, Jocelyn Ho, and Matthew Blessing. 2019. Women’s Labor: Creating NIMEs from Domestic Tools . Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 377–380. http://doi.org/10.5281/zenodo.3672729
Download PDF DOIThis paper describes the creation of a NIME created from an iron and wooden ironing board. The ironing board acts as a resonator for the system which includes sensors embedded in the iron such as pressure, and piezo microphones. The iron has LEDs wired to the sides and at either end of the board are CCDs; using machine learning we can identify what kind of fabric is being ironed, and the position of the iron along the x and y-axes as well as its rotation and tilt. This instrument is part of a larger project, Women’s Labor, that juxtaposes traditional musical instruments such as spinets and virginals designated for “ladies” with new interfaces for musical expression that repurpose older tools of women’s work. Using embedded technologies, we reimagine domestic tools as musical interfaces, creating expressive instruments from the appliances of women’s chores.
@inproceedings{schedel2019, author = {margaret schedel and Ho, Jocelyn and Blessing, Matthew}, title = {Women's Labor: Creating NIMEs from Domestic Tools }, pages = {377--380}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3672729}, url = {http://www.nime.org/proceedings/2019/nime2019_paper073.pdf} }
-
Andre Rauber Du Bois and Rodrigo Geraldo Ribeiro. 2019. HMusic: A domain specific language for music programming and live coding. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 381–386. http://doi.org/10.5281/zenodo.3673003
Download PDF DOIThis paper presents HMusic, a domain specific language based on music patterns that can be used for writing music and live coding. The main abstractions provided by the language are patterns and tracks. Code written in HMusic looks like the patterns and multi-tracks available in music sequencers and drum machines. HMusic provides primitives to design and compose patterns, generating new patterns. The basic abstractions provided by the language have an inductive definition, and HMusic is embedded in the Haskell functional programming language, so programmers can design functions to manipulate music on the fly.
@inproceedings{Rauber-Du-Bois2019, author = {Bois, Andre Rauber Du and Ribeiro, Rodrigo Geraldo}, title = {HMusic: A domain specific language for music programming and live coding}, pages = {381--386}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673003}, url = {http://www.nime.org/proceedings/2019/nime2019_paper074.pdf} }
-
Angelo Fraietta. 2019. Stellar Command: a planetarium software based cosmic performance interface. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 387–392. http://doi.org/10.5281/zenodo.3673005
Download PDF DOIThis paper presents the use of the Stellarium planetarium software coupled with the VizieR database of astronomical catalogues as an interface mechanism for creating astronomy-based multimedia performances, and as a music composition interface. The celestial display from Stellarium is used to query VizieR, which then obtains scientific astronomical data from the stars displayed (including colour, celestial position, magnitude and distance) and sends it as input data for music composition or performance. Stellarium and VizieR are controlled through Stellar Command, a software library that couples the two systems and can be used both as a standalone command line utility using Open Sound Control and as a software library.
@inproceedings{Fraietta-b2019, author = {Fraietta, Angelo}, title = {Stellar Command: a planetarium software based cosmic performance interface}, pages = {387--392}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673005}, url = {http://www.nime.org/proceedings/2019/nime2019_paper075.pdf} }
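Since Stellar Command communicates over Open Sound Control, a comparable data-to-OSC step can be sketched in Python with the python-osc package; the address pattern, port, and the mapping of magnitude, colour, and distance to musical values below are invented for illustration and are not the library's actual protocol.

```python
# Illustrative sketch: send star parameters to a synthesis environment
# over OSC. Address, port and mappings are assumptions, not the
# Stellar Command protocol.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 57120)   # e.g. a SuperCollider server port

stars = [
    # (visual magnitude, B-V colour index, distance in parsecs) - made-up values
    (0.5, 0.65, 11.4),
    (2.1, 1.20, 96.0),
    (4.8, 0.02, 250.0),
]

for mag, colour, dist in stars:
    pitch = 84 - mag * 6                    # brighter stars -> higher pitch (assumption)
    velocity = max(1, int(127 - dist / 3))  # nearer stars -> louder (assumption)
    client.send_message("/star", [float(pitch), velocity, float(colour)])
```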
-
Patrick Müller and Johannes Michael Schuett. 2019. Towards a Telematic Dimension Space. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 393–400. http://doi.org/10.5281/zenodo.3673007
Download PDF DOITelematic performances connect two or more locations so that participants are able to interact in real time. Such practices blend a variety of dimensions, insofar as the representation of remote performers on a local stage intrinsically occurs on auditory, as well as visual and scenic, levels. Due to their multimodal nature, the analysis or creation of such performances can quickly descend into a house of mirrors wherein certain intensely interdependent dimensions come to the fore, while others are multiplied, seem hidden or are made invisible. In order to have a better understanding of such performances, Dimension Space Analysis, with its capacity to review multifaceted components of a system, can be applied to telematic performances, understood here as (a bundle of) NIMEs. In the second part of the paper, some telematic works from the practices of the authors are described with the toolset developed.
@inproceedings{Müller2019, author = {Müller, Patrick and Schuett, Johannes Michael}, title = {Towards a Telematic Dimension Space}, pages = {393--400}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673007}, url = {http://www.nime.org/proceedings/2019/nime2019_paper076.pdf} }
-
Pedro Pablo Lucas. 2019. A MIDI Controller Mapper for the Built-in Audio Mixer in the Unity Game Engine. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 401–404. http://doi.org/10.5281/zenodo.3673009
Download PDF DOIUnity is one of the most used engines in the game industry and several extensions have been implemented to increase its features in order to create multimedia products in a more effective and efficient way. From the point of view of audio development, Unity has included an Audio Mixer since version 5, which facilitates the organization of sounds, effects, and the mixing process in general; however, this module can be manipulated only through its graphical interface. This work describes the design and implementation of an extension tool to map parameters from the Audio Mixer to external MIDI devices, such as controllers with sliders and knobs, in such a way that the developer can easily mix a game with the feel of a physical interface.
@inproceedings{Lucas-b2019, author = {Lucas, Pedro Pablo}, title = {A MIDI Controller Mapper for the Built-in Audio Mixer in the Unity Game Engine}, pages = {401--404}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673009}, url = {http://www.nime.org/proceedings/2019/nime2019_paper077.pdf} }
-
Pedro Pablo Lucas. 2019. AuSynthAR: A simple low-cost modular synthesizer based on Augmented Reality. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 405–406. http://doi.org/10.5281/zenodo.3673011
Download PDF DOIAuSynthAR is a digital instrument based on Augmented Reality (AR), which allows sound synthesis modules to be combined into simple sound networks. It only requires a mobile device, a set of tokens, a sound output device and, optionally, a MIDI controller, which makes it an affordable instrument. An application running on the device generates the sounds and the graphical augmentations over the tokens.
@inproceedings{Lucas-c2019, author = {Lucas, Pedro Pablo}, title = {AuSynthAR: A simple low-cost modular synthesizer based on Augmented Reality}, pages = {405--406}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673011}, url = {http://www.nime.org/proceedings/2019/nime2019_paper078.pdf} }
-
Don Derek Haddad and Joe Paradiso. 2019. The World Wide Web in an Analog Patchbay. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 407–410. http://doi.org/10.5281/zenodo.3673013
Download PDF DOIThis paper introduces a versatile module for Eurorack synthesizers that allows multiple modular synthesizers to be patched together remotely through the world wide web. The module is configured from a read-eval-print-loop environment running in the web browser, that can be used to send signals to the modular synthesizer from a live coding interface or from various data sources on the internet.
@inproceedings{Haddad2019, author = {Haddad, Don Derek and Paradiso, Joe}, title = {The World Wide Web in an Analog Patchbay}, pages = {407--410}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673013}, url = {http://www.nime.org/proceedings/2019/nime2019_paper079.pdf} }
-
Fou Yoshimura and kazuhiro jo. 2019. A "voice" instrument based on vocal tract models by using soft material for a 3D printer and an electrolarynx. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 411–412. http://doi.org/10.5281/zenodo.3673015
Download PDF DOIIn this paper, we propose a “voice” instrument based on vocal tract models with a soft material for a 3D printer and an electrolarynx. In our practice, we explore the incongruity of the voice instrument through the accompanying music production and performance. With the instrument, we aim to return to the fact that the “Machine speaks out.” With the production of a song “Vocalise (Incomplete),” and performances, we reveal how the instrument could work with the audiences and explore the uncultivated field of voices.
@inproceedings{Yoshimura2019, author = {Yoshimura, Fou and kazuhiro jo}, title = {A "voice" instrument based on vocal tract models by using soft material for a 3D printer and an electrolarynx}, pages = {411--412}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673015}, url = {http://www.nime.org/proceedings/2019/nime2019_paper080.pdf} }
-
Juan Pablo Yepez Placencia, Jim Murphy, and Dale Carnegie. 2019. Exploring Dynamic Variations for Expressive Mechatronic Chordophones. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 413–418. http://doi.org/10.5281/zenodo.3673017
Download PDF DOIMechatronic chordophones have become increasingly common in mechatronic music. As expressive instruments, they offer multiple techniques to create and manipulate sounds using their actuation mechanisms. Chordophone designs have taken multiple forms, from frames that play a guitar-like instrument, to machines that integrate strings and actuators as part of their frame. However, few of these instruments have taken advantage of dynamics, which have been largely unexplored. This paper details the design and construction of a new picking mechanism prototype which enables expressive techniques through fast and precise movement and actuation. We have adopted iterative design and rapid prototyping strategies to develop and refine a compact picker capable of creating dynamic variations reliably. Finally, a quantitative evaluation process demonstrates that this system offers the speed and consistency of previously existing picking mechanisms, while providing increased control over musical dynamics and articulations.
@inproceedings{Yepez-Placencia2019, author = {Placencia, Juan Pablo Yepez and Murphy, Jim and Carnegie, Dale}, title = {Exploring Dynamic Variations for Expressive Mechatronic Chordophones}, pages = {413--418}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673017}, url = {http://www.nime.org/proceedings/2019/nime2019_paper081.pdf} }
-
Dhruv Chauhan and Peter Bennett. 2019. Searching for the Perfect Instrument: Increased Telepresence through Interactive Evolutionary Instrument Design. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 419–422. http://doi.org/10.5281/zenodo.3673019
Download PDF DOIIn this paper, we introduce and explore a novel Virtual Reality musical interaction system (named REVOLVE) that utilises a user-guided evolutionary algorithm to personalise musical instruments to users’ individual preferences. REVOLVE is designed towards being an ‘endlessly entertaining’ experience through the potentially infinite number of sounds that can be produced. Our hypothesis is that using evolutionary algorithms with VR for musical interactions will lead to increased user telepresence. In addition to this, REVOLVE was designed to inform novel research into this unexplored area. Think aloud trials and thematic analysis revealed 5 main themes: control, comparison to the real world, immersion, general usability and limitations, in addition to practical improvements. Overall, it was found that this combination of technologies did improve telepresence levels, proving the original hypothesis correct.
@inproceedings{Chauhan2019, author = {Chauhan, Dhruv and Bennett, Peter}, title = {Searching for the Perfect Instrument: Increased Telepresence through Interactive Evolutionary Instrument Design}, pages = {419--422}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673019}, url = {http://www.nime.org/proceedings/2019/nime2019_paper082.pdf} }
-
Richard J Savery, Benjamin Genchel, Jason Brent Smith, Anthony Caulkins, Molly E Jones, and Anna Savery. 2019. Learning from History: Recreating and Repurposing Harriet Padberg’s Computer Composed Canon and Free Fugue. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 423–428. http://doi.org/10.5281/zenodo.3673021
Download PDF DOIHarriet Padberg wrote Computer-Composed Canon and Free Fugue as part of her 1964 dissertation in Mathematics and Music at Saint Louis University. This program is one of the earliest examples of text-to-music software and algorithmic composition, which are areas of great interest in the present-day field of music technology. This paper aims to analyze the technological innovation, aesthetic design process, and impact of Harriet Padberg’s original 1964 thesis as well as the design of a modern recreation and utilization, in order to gain insight to the nature of revisiting older works. Here, we present our open source recreation of Padberg’s program with a modern interface and, through its use as an artistic tool by three composers, show how historical works can be effectively used for new creative purposes in contemporary contexts. Not Even One by Molly Jones draws on the historical and social significance of Harriet Padberg through using her program in a piece about the lack of representation of women judges in composition competitions. Brevity by Anna Savery utilizes the original software design as a composition tool, and The Padberg Piano by Anthony Caulkins uses the melodic generation of the original to create a software instrument.
@inproceedings{Savery2019, author = {Savery, Richard J and Genchel, Benjamin and Smith, Jason Brent and Caulkins, Anthony and Jones, Molly E and Savery, Anna}, title = {Learning from History: Recreating and Repurposing Harriet Padberg's Computer Composed Canon and Free Fugue}, pages = {423--428}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673021}, url = {http://www.nime.org/proceedings/2019/nime2019_paper083.pdf} }
-
Edgar Berdahl, Austin Franklin, and Eric Sheffield. 2019. A Spatially Distributed Vibrotactile Actuator Array for the Fingertips. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 429–430. http://doi.org/10.5281/zenodo.3673023
Download PDF DOIThe design of a Spatially Distributed Vibrotactile Actuator Array (SDVAA) for the fingertips is presented. It provides high-fidelity vibrotactile stimulation at the audio sampling rate. Prior works are discussed, and the system is demonstrated using two music compositions by the authors.
@inproceedings{Berdahl2019, author = {Berdahl, Edgar and Franklin, Austin and Sheffield, Eric}, title = {A Spatially Distributed Vibrotactile Actuator Array for the Fingertips}, pages = {429--430}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673023}, url = {http://www.nime.org/proceedings/2019/nime2019_paper084.pdf} }
-
Jeff Gregorio and Youngmoo Kim. 2019. Augmenting Parametric Synthesis with Learned Timbral Controllers. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 431–436. http://doi.org/10.5281/zenodo.3673025
Download PDF DOIFeature-based synthesis applies machine learning and signal processing methods to the development of alternative interfaces for controlling parametric synthesis algorithms. One approach, geared toward real-time control, uses low-dimensional gestural controllers and learned mappings from control spaces to parameter spaces, making use of an intermediate latent timbre distribution, such that the control space affords a spatially intuitive arrangement of sonic possibilities. Whereas many existing systems present alternatives to the traditional parametric interfaces, the proposed system explores ways in which feature-based synthesis can augment one-to-one parameter control, made possible by fully invertible mappings between control and parameter spaces.
@inproceedings{Gregorio2019, author = {Gregorio, Jeff and Kim, Youngmoo}, title = {Augmenting Parametric Synthesis with Learned Timbral Controllers}, pages = {431--436}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673025}, url = {http://www.nime.org/proceedings/2019/nime2019_paper085.pdf} }
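One simple way to realise a mapping from a low-dimensional control space to a higher-dimensional synthesis parameter space, in the spirit of (though much simpler than) the learned mappings described above, is to interpolate between stored presets; this numpy sketch uses inverse-distance weighting over hypothetical preset points, and the parameter names are placeholders rather than anything from the paper.

```python
# Sketch of a 2-D control space driving synth parameters by
# interpolating presets with inverse-distance weighting. The presets
# and parameter meanings are hypothetical.
import numpy as np

# Each preset: position in a 2-D control space -> parameter vector
# (e.g. [carrier_freq, mod_index, filter_cutoff]).
positions = np.array([[0.1, 0.1], [0.9, 0.2], [0.5, 0.9]])
params = np.array([[220.0, 1.0, 800.0],
                   [440.0, 4.0, 2500.0],
                   [330.0, 2.5, 1200.0]])

def map_control(xy, power=2.0, eps=1e-6):
    """Interpolate a parameter vector from a 2-D control position."""
    d = np.linalg.norm(positions - np.asarray(xy), axis=1)
    w = 1.0 / (d ** power + eps)      # closer presets weigh more
    w /= w.sum()
    return w @ params

print(map_control([0.5, 0.5]))        # parameters for the middle of the space
```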
-
Sang-won Leigh, Abhinandan Jain, and Pattie Maes. 2019. Exploring Human-Machine Synergy and Interaction on a Robotic Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 437–442. http://doi.org/10.5281/zenodo.3673027
Download PDF DOIThis paper introduces studies conducted with musicians that aim to understand modes of human-robot interaction, situated between automation and human augmentation. Our robotic guitar system used for the study consists of various sound generating mechanisms, either driven by software or by a musician directly. The control mechanism allows the musician to have a varying degree of agency over the overall musical direction. We present interviews and discussions on open-ended experiments conducted with music students and musicians. The outcome of this research includes new modes of playing the guitar given the robotic capabilities, and an understanding of how automation can be integrated into instrument-playing processes. The results present insights into how a human-machine hybrid system can increase the efficacy of training or exploration, without compromising human engagement with a task.
@inproceedings{Leigh2019, author = {Leigh, Sang-won and Jain, Abhinandan and Maes, Pattie}, title = {Exploring Human-Machine Synergy and Interaction on a Robotic Instrument}, pages = {437--442}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673027}, url = {http://www.nime.org/proceedings/2019/nime2019_paper086.pdf} }
-
Sang Won Lee. 2019. Show Them My Screen: Mirroring a Laptop Screen as an Expressive and Communicative Means in Computer Music. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 443–448. http://doi.org/10.5281/zenodo.3673029
Download PDF DOIModern computer music performances often involve a musical instrument that is primarily digital; software runs on a computer, and the physical form of the instrument is the computer. In such a practice, the performance interface is rendered on a computer screen for the performer. There has been a concern in using a laptop as a musical instrument from the audience’s perspective, in that having “a laptop performer sitting behind the screen” makes it difficult for the audience to understand how the performer is creating music. Mirroring a computer screen on a projection screen has been one way to address the concern and reveal the performer’s instrument. This paper introduces and discusses the author’s computer music practice, in which a performer actively considers screen mirroring as an essential part of the performance, beyond visualization of music. In this case, screen mirroring is not complementary, but inevitable from the inception of the performance. The related works listed within explore various roles of screen mirroring in computer music performance and helps us understand empirical and logistical findings in such practices.
@inproceedings{Lee2019, author = {Lee, Sang Won}, title = {Show Them My Screen: Mirroring a Laptop Screen as an Expressive and Communicative Means in Computer Music}, pages = {443--448}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673029}, url = {http://www.nime.org/proceedings/2019/nime2019_paper087.pdf} }
-
Josh Urban Davis. 2019. IllumiWear: A Fiber-Optic eTextile for MultiMedia Interactions. Proceedings of the International Conference on New Interfaces for Musical Expression, UFRGS, pp. 449–454. http://doi.org/10.5281/zenodo.3673033
Download PDF DOIWe present IllumiWear, a novel eTextile prototype that uses fiber optics as interactive input and visual output. Fiber optic cables are separated into bundles and then woven like a basket into a bendable glowing fabric. By equipping light emitting diodes to one side of these bundles and photodiode light intensity sensors to the other, loss of light intensity can be measured when the fabric is bent. The sensing technique of IllumiWear is not only able to discriminate between discrete touches, slight bends, and harsh bends, but can also recover the location of deformation. In this way, our computational fabric prototype uses its intrinsic means of visual output (light) as a tool for interactive input. We provide design and implementation details for our prototype as well as a technical evaluation of its effectiveness and limitations as an interactive computational textile. In addition, we examine the potential of this prototype’s interactive capabilities by extending our eTextile to create a tangible user interface for audio and visual manipulation.
@inproceedings{Davis2019, author = {Davis, Josh Urban}, title = {IllumiWear: A Fiber-Optic eTextile for MultiMedia Interactions}, pages = {449--454}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Queiroz, Marcelo and Sedó, Anna Xambó}, year = {2019}, month = jun, publisher = {UFRGS}, address = {Porto Alegre, Brazil}, issn = {2220-4806}, doi = {10.5281/zenodo.3673033}, url = {http://www.nime.org/proceedings/2019/nime2019_paper088.pdf} }
2018
-
Oeyvind Brandtsegg, Trond Engum, and Bernt Isak Wærstad. 2018. Working methods and instrument design for cross-adaptive sessions. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 1–6. http://doi.org/10.5281/zenodo.1302649
Download PDF DOIThis paper explores working methods and instrument design for musical performance sessions (studio and live) where cross-adaptive techniques for audio processing are utilized. Cross-adaptive processing uses feature extraction methods and digital processing to allow the actions of one acoustic instrument to influence the timbre of another. Even though the physical interface for the musician is the familiar acoustic instrument, the musical dimensions controlled with the actions on the instrument have been expanded radically. For this reason, and when used in live performance, the cross-adaptive methods constitute new interfaces for musical expression. Not only does the musician control his or her own instrumental expression, but their instrumental actions also directly influence the timbre of another instrument in the ensemble, while their own instrument’s sound is modified by the actions of other musicians. In the present paper we illustrate and discuss some design issues relating to the configuration and composition of such tools for different musical situations. Such configurations include, among other things, the mapping of modulators, the choice of applied effects and processing methods.
@inproceedings{Brandtsegg2018, author = {Brandtsegg, Oeyvind and Engum, Trond and Wærstad, Bernt Isak}, title = {Working methods and instrument design for cross-adaptive sessions}, pages = {1--6}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302649}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0001.pdf} }
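The core cross-adaptive idea, a feature extracted from one instrument modulating a process applied to another, can be sketched offline in a few lines of numpy; in the example below an RMS envelope followed on signal A ducks the gain of signal B, with block size, smoothing, and gain law chosen arbitrarily for illustration rather than taken from the paper.

```python
# Sketch of cross-adaptive processing: the louder instrument A plays,
# the more instrument B is attenuated ("ducked"). Block size, smoothing
# and the gain law are illustrative assumptions.
import numpy as np

def block_rms(x, block=512):
    n = len(x) // block
    frames = x[:n * block].reshape(n, block)
    return np.sqrt((frames ** 2).mean(axis=1))

def cross_adaptive_duck(a, b, block=512, depth=0.8, smooth=0.2):
    env = block_rms(a, block)
    env = env / (env.max() + 1e-9)               # normalise the control envelope
    out = b[:len(env) * block].copy()
    g_prev = 1.0
    for i, e in enumerate(env):
        g = 1.0 - depth * e                       # feature -> gain mapping
        g = smooth * g_prev + (1 - smooth) * g    # simple one-pole smoothing
        out[i * block:(i + 1) * block] *= g
        g_prev = g
    return out

sr = 44100
t = np.arange(sr) / sr
a = np.sin(2 * np.pi * 3 * t)      # stand-in "instrument A" control source
b = np.sin(2 * np.pi * 220 * t)    # "instrument B" audio
print(cross_adaptive_duck(a, b).shape)
```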
-
Eran Egozy and Eun Young Lee. 2018. *12*: Mobile Phone-Based Audience Participation in a Chamber Music Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 7–12. http://doi.org/10.5281/zenodo.1302655
Download PDF DOI*12* is a chamber music work composed with the goal of letting audience members have an engaging, individualized, and influential role in live music performance using their mobile phones as custom-tailored musical instruments. The goals of direct music making, meaningful communication, intuitive interfaces, and technical transparency led to a design that purposefully limits the number of participating audience members, balances the tradeoffs between interface simplicity and control, and prioritizes the role of a graphics and animation display system that is both functional and aesthetically integrated. Survey results from the audience and stage musicians show a successful and engaging experience, and also illuminate the path towards future improvements.
@inproceedings{Egozy2018, author = {Egozy, Eran and Lee, Eun Young}, title = {*12*: Mobile Phone-Based Audience Participation in a Chamber Music Performance}, pages = {7--12}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302655}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0002.pdf} }
-
Anders Lind. 2018. Animated Notation in Multiple Parts for Crowd of Non-professional Performers. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 13–18. http://doi.org/10.5281/zenodo.1302657
Download PDF DOIThe Max Maestro, an animated music notation system, was developed to enable the exploration of artistic possibilities for composition and performance practices within the field of contemporary art music; more specifically, to enable a large crowd of non-professional performers, regardless of their musical background, to perform fixed music compositions written in multiple individual parts. Furthermore, the Max Maestro was developed to facilitate concert hall performances where non-professional performers could be synchronised with an electronic music part. This paper presents the background, the content and the artistic ideas behind the Max Maestro system and gives two examples of live concert hall performances where the Max Maestro was used. An artistic research approach with an autoethnographic method was adopted for the study. This paper contributes new knowledge to the field of animated music notation.
@inproceedings{Lind2018, author = {Lind, Anders}, title = {Animated Notation in Multiple Parts for Crowd of Non-professional Performers}, pages = {13--18}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302657}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0003.pdf} }
-
Andrew R. Brown, Matthew Horrigan, Arne Eigenfeldt, Toby Gifford, Daniel Field, and Jon McCormack. 2018. Interacting with Musebots. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 19–24. http://doi.org/10.5281/zenodo.1302659
Download PDF DOIMusebots are autonomous musical agents that interact with other musebots to produce music. Inaugurated in 2015, musebots are now an established practice in the field of musical metacreation, which aims to automate aspects of creative practice. Originally musebot development focused on software-only ensembles of musical agents, coded by a community of developers. More recent experiments have explored humans interfacing with musebot ensembles in various ways: including through electronic interfaces in which parametric control of high-level musebot parameters are used; message-based interfaces which allow human users to communicate with musebots in their own language; and interfaces through which musebots have jammed with human musicians. Here we report on the recent developments of human interaction with musebot ensembles and reflect on some of the implications of these developments for the design of metacreative music systems.
@inproceedings{Brown2018, author = {Brown, Andrew R. and Horrigan, Matthew and Eigenfeldt, Arne and Gifford, Toby and Field, Daniel and McCormack, Jon}, title = {Interacting with Musebots}, pages = {19--24}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302659}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0004.pdf} }
-
Chris Kiefer and Cecile Chevalier. 2018. Towards New Modes of Collective Musical Expression through Audio Augmented Reality. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 25–28. http://doi.org/10.5281/zenodo.1302661
Download PDF DOIWe investigate how audio augmented reality can engender new collective modes of musical expression in the context of a sound art installation, ‘Listening Mirrors’, exploring the creation of interactive sound environments for musicians and non-musicians alike. ‘Listening Mirrors’ is designed to incorporate physical objects and computational systems for altering the acoustic environment, to enhance collective listening and challenge traditional musician-instrument performance. At a formative stage in exploring audio AR technology, we conducted an audience experience study investigating questions around the potential of audio AR in creating sound installation environments for collective musical expression. We collected interview evidence about the participants’ experience and analysed the data using a grounded theory approach. The results demonstrated that the technology has the potential to create immersive spaces where an audience can feel safe to experiment musically, and showed how AR can intervene in sound perception to instrumentalise an environment. The results also revealed caveats about the use of audio AR, mainly centred on social inhibition and seamlessness of experience, and finding a balance between mediated worlds so that there is space for interplay between the two.
@inproceedings{Kiefer2018, author = {Kiefer, Chris and Chevalier, Cecile}, title = {Towards New Modes of Collective Musical Expression through Audio Augmented Reality}, pages = {25--28}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302661}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0005.pdf} }
-
Tomoya Matsuura and kazuhiro jo. 2018. Aphysical Unmodeling Instrument: Sound Installation that Re-Physicalizes a Meta-Wind-Instrument Physical Model, Whirlwind. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 29–30. http://doi.org/10.5281/zenodo.1302663
Download PDF DOIAphysical Unmodeling Instrument is the title of a sound installation that re-physicalizes the Whirlwind meta-wind-instrument physical model. We re-implemented the Whirlwind by using real-world physical objects to comprise a sound installation. The sound propagation between a speaker and microphone was used as the delay, and a paper cylinder was employed as the resonator. This paper explains the concept and implementation of this work at the 2017 HANARART exhibition. We examine the characteristics of the work, address its limitations, and discuss the possibility of its interpretation by means of a “re-physicalization.”
@inproceedings{Matsuura2018, author = {Matsuura, Tomoya and kazuhiro jo}, title = {Aphysical Unmodeling Instrument: Sound Installation that Re-Physicalizes a Meta-Wind-Instrument Physical Model, Whirlwind}, pages = {29--30}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302663}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0006.pdf} }
-
Ulf A. S. Holbrook. 2018. An approach to stochastic spatialization — A case of Hot Pocket. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 31–32. http://doi.org/10.5281/zenodo.1302665
Download PDF DOIMany common and popular sound spatialisation techniques and methods rely on listeners being positioned in a "sweet-spot" for an optimal listening position in a circle of speakers. This paper discusses a stochastic spatialisation method and its first iteration as implemented for the exhibition Hot Pocket at The Museum of Contemporary Art in Oslo in 2017. This method is implemented in Max and offers a matrix-based amplitude panning methodology which can provide a flexible means for the spatialisation of sounds.
@inproceedings{Holbrook2018, author = {Holbrook, Ulf A. S.}, title = {An approach to stochastic spatialization --- A case of Hot Pocket}, pages = {31--32}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302665}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0007.pdf} }
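As a rough illustration of the matrix-based, stochastic amplitude panning idea described above, the numpy sketch below draws a random, roughly constant-power gain vector over a speaker array and applies it to a mono block. The Dirichlet draw and speaker count are assumptions made for illustration; the paper's Max implementation is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng()

def stochastic_gains(n_speakers=8, spread=2.0):
    """Draw a random gain vector over the speaker array; each draw places the
    source at a new stochastic position, independent of any sweet spot."""
    g = rng.dirichlet(np.ones(n_speakers) * spread)   # random power distribution
    return np.sqrt(g)                                 # equal-power style gains (sum of squares = 1)

def pan(mono, gains):
    """Multiply a mono block by the gain vector: result is (samples, speakers)."""
    return np.outer(mono, gains)

block = np.sin(2 * np.pi * 220 * np.arange(4410) / 44100)   # 0.1 s test tone
multichannel = pan(block, stochastic_gains())
```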
-
Cory Champion and Mo H Zareei. 2018. AM MODE: Using AM and FM Synthesis for Acoustic Drum Set Augmentation. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 33–34. http://doi.org/10.5281/zenodo.1302667
Download PDF DOIAM MODE is a custom-designed software interface for electronic augmentation of the acoustic drum set. The software is used in the development of a series of recordings, similarly titled AM MODE. Programmed in Max/MSP, the software uses live audio input from individual instruments within the drum set as control parameters for modulation synthesis. By using a combination of microphones and MIDI triggers, audio signal features such as the velocity of the strike of the drum, or the frequency at which the drum resonates, are tracked, interpolated, and scaled to user specifications. The resulting series of recordings comprises the digitally generated output of the modulation engine, in addition to both raw and modulated signals from the acoustic drum set. In this way, this project explores drum set augmentation not only at the input and from a performative angle, but also at the output, where the acoustic and the synthesized elements are merged into each other, forming a sonic hybrid.
@inproceedings{Champion2018, author = {Champion, Cory and Zareei, Mo H}, title = {AM MODE: Using AM and FM Synthesis for Acoustic Drum Set Augmentation}, pages = {33--34}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302667}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0008.pdf} }
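A minimal sketch of the kind of mapping the entry above describes: an envelope follower tracks a drum signal, and that envelope both amplitude-modulates the output and scales the frequency deviation of a modulated carrier. All rates, depths, and time constants below are assumed values for illustration, not those used in AM MODE.

```python
import numpy as np

SR = 44100

def envelope_follower(x, attack=0.001, release=0.05):
    """Simple one-pole envelope follower over a mono drum signal."""
    a_att = np.exp(-1.0 / (SR * attack))
    a_rel = np.exp(-1.0 / (SR * release))
    env = np.zeros_like(x)
    e = 0.0
    for i, s in enumerate(np.abs(x)):
        coeff = a_att if s > e else a_rel
        e = coeff * e + (1 - coeff) * s
        env[i] = e
    return env

def am_fm(drum, carrier_hz=220.0, mod_hz=3.5, fm_depth=60.0):
    """AM: the drum envelope scales the output; FM: it also scales the
    frequency deviation of the carrier."""
    t = np.arange(len(drum)) / SR
    env = envelope_follower(drum)
    inst_freq = carrier_hz + fm_depth * env * np.sin(2 * np.pi * mod_hz * t)
    phase = 2 * np.pi * np.cumsum(inst_freq) / SR
    return env * np.sin(phase)
```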
-
Don Derek Haddad and Joe Paradiso. 2018. Kinesynth: Patching, Modulating, and Mixing a Hybrid Kinesthetic Synthesizer. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 35–36. http://doi.org/10.5281/zenodo.1302669
Download PDF DOIThis paper introduces the Kinesynth, a hybrid kinesthetic synthesizer that uses the human body as both an analog mixer and a modulator using a combination of capacitive sensing in "transmit" mode and skin conductance. This is achieved when the body, through the skin, relays signals from control & audio sources to the inputs of the instrument. These signals can be harnessed from the environment, from within the Kinesynth’s internal synthesizer, or from external instruments, making the Kinesynth a mediator between the body and the environment.
@inproceedings{Haddad2018, author = {Haddad, Don Derek and Paradiso, Joe}, title = {Kinesynth: Patching, Modulating, and Mixing a Hybrid Kinesthetic Synthesizer.}, pages = {35--36}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302669}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0009.pdf} }
-
Riccardo Marogna. 2018. CABOTO: A Graphic-Based Interactive System for Composing and Performing Electronic Music. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 37–42. http://doi.org/10.5281/zenodo.1302671
Download PDF DOICABOTO is an interactive system for live performance and composition. A graphic score sketched on paper is read by a computer vision system. The graphic elements are scanned following a symbolic-raw hybrid approach, that is, they are recognised and classified according to their shapes but also scanned as waveforms and optical signals. All this information is mapped into the synthesis engine, which implements different kinds of synthesis techniques for different shapes. In CABOTO the score is viewed as a cartographic map explored by some navigators. These navigators traverse the score in a semi-autonomous way, scanning the graphic elements found along their paths. The system tries to challenge the boundaries between the concepts of composition, score, performance, and instrument, since the musical result will depend both on the composed score and on the way the navigators traverse it during the live performance.
@inproceedings{Marogna2018, author = {Marogna, Riccardo}, title = {CABOTO: A Graphic-Based Interactive System for Composing and Performing Electronic Music}, pages = {37--42}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302671}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0010.pdf} }
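The "symbolic-raw hybrid" scanning idea above (graphic elements read both as classified symbols and directly as waveforms) can be sketched as follows: the mean ink height per column of a binary mask is treated as a wavetable. This is a toy illustration under assumed image dimensions, not CABOTO's computer-vision pipeline.

```python
import numpy as np

def shape_to_waveform(mask, length=2048):
    """Scan a binary image of one graphic element column by column and use the
    mean ink height per column as a waveform, resampled to `length` samples."""
    h, w = mask.shape
    ys = []
    for col in range(w):
        rows = np.nonzero(mask[:, col])[0]
        ys.append(rows.mean() if rows.size else h / 2)   # empty column -> centre line
    wave = 1.0 - 2.0 * np.array(ys) / (h - 1)            # map image rows to [-1, 1]
    return np.interp(np.linspace(0, w - 1, length), np.arange(w), wave)

# toy "hand-drawn" stroke: a sine-like squiggle rendered into a 64x256 mask
h, w = 64, 256
mask = np.zeros((h, w), dtype=bool)
rows = (h / 2 + 20 * np.sin(np.linspace(0, 4 * np.pi, w))).astype(int)
mask[rows, np.arange(w)] = True
wavetable = shape_to_waveform(mask)
```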
-
Gustavo Oliveira da Silveira. 2018. The XT Synth: A New Controller for String Players. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 43–44. http://doi.org/10.5281/zenodo.1302673
Download PDF DOIThis paper describes the concept, design, and realization of two iterations of a new controller called the XT Synth. The development of the instrument came from the desire to maintain the expressivity and familiarity of string instruments, while adding the flexibility and power usually found in keyboard controllers. There are different examples of instruments that bring the physicality and expressiveness of acoustic instruments into electronic music, from “Do it yourself” (DIY) products to commercially available ones. This paper discusses the process and the challenges faced when creating a DIY musical instrument and then subsequently transforming the instrument into a product suitable for commercialization.
@inproceedings{Oliveira2018, author = {Oliveira da Silveira, Gustavo}, title = {The XT Synth: A New Controller for String Players}, pages = {43--44}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302673}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0011.pdf} }
-
S. M. Astrid Bin, Nick Bryan-Kinns, and Andrew P. McPherson. 2018. Risky business: Disfluency as a design strategy. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 45–50. http://doi.org/10.5281/zenodo.1302675
Download PDF DOIThis paper presents a study examining the effects of disfluent design on audience perception of digital musical instrument (DMI) performance. Disfluency, defined as a barrier to effortless cognitive processing, has been shown to generate better results in some contexts as it engages higher levels of cognition. We were motivated to determine if disfluent design in a DMI would result in a risk state that audiences would be able to perceive, and if this would have any effect on their evaluation of the performance. A DMI was produced that incorporated a disfluent characteristic: It would turn itself off if not constantly moved. Six physically identical instruments were produced, each in one of three versions: Control (no disfluent characteristics), mild disfluency (turned itself off slowly), and heightened disfluency (turned itself off more quickly). Six percussionists each performed on one instrument for a live audience (N=31), and data was collected in the form of real-time feedback (via a mobile phone app) and post-hoc surveys. Though there was little difference in ratings of enjoyment between the versions of the instrument, the real-time and qualitative data suggest that disfluent behaviour in a DMI may be a way for audiences to perceive and appreciate performer skill.
@inproceedings{Bin2018, author = {Bin, S. M. Astrid and Bryan-Kinns, Nick and McPherson, Andrew P.}, title = {Risky business: Disfluency as a design strategy}, pages = {45--50}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302675}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0012.pdf} }
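A small sketch of the disfluent behaviour described above (an instrument that switches itself off unless it keeps being moved), assuming an accelerometer magnitude as the motion signal. The grace periods and threshold are invented for illustration and do not come from the paper.

```python
import time

class DisfluentGate:
    """Reports the instrument as 'off' when motion stays below a threshold for
    longer than `grace` seconds; moving the instrument re-arms the gate.
    grace=2.0 and grace=0.5 stand in for the mild / heightened versions."""

    def __init__(self, grace=2.0, threshold=0.05):
        self.grace = grace
        self.threshold = threshold
        self._last_motion = time.monotonic()

    def update(self, accel_magnitude):
        now = time.monotonic()
        if abs(accel_magnitude - 1.0) > self.threshold:   # deviation from 1 g = movement
            self._last_motion = now
        return (now - self._last_motion) < self.grace     # True while the instrument stays on
```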
-
Rachel Gibson. 2018. The Theremin Textural Expander. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 51–52. http://doi.org/10.5281/zenodo.1302527
Download PDF DOIThe voice of the theremin is more than just a simple sine wave. Its unique sound is made through two radio frequency oscillators that, when operating at almost identical frequencies, gravitate towards each other. Ultimately, this pull alters the sine wave, creating the signature sound of the theremin. The Theremin Textural Expander (TTE) explores other textures the theremin can produce when its sound is processed and manipulated through a Max/MSP patch and controlled via a MIDI pedalboard. The TTE extends the theremin’s ability, enabling it to produce five distinct new textures beyond the original. It also features a looping system that the performer can use to layer textures created with the traditional theremin sound. Ultimately, this interface introduces a new way to play and experience the theremin; it extends its expressivity, affording a greater range of compositional possibilities and greater flexibility in free improvisation contexts.
@inproceedings{Gibson2018, author = {Gibson, Rachel}, title = {The Theremin Textural Expander}, pages = {51--52}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302527}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0013.pdf} }
-
Mert Toka, Can Ince, and Mehmet Aydin Baytas. 2018. Siren: Interface for Pattern Languages. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 53–58. http://doi.org/10.5281/zenodo.1302677
Download PDF DOIThis paper introduces Siren, a hybrid system for algorithmic composition and live-coding performances. Its hierarchical structure allows small modifications to propagate and aggregate on lower levels for dramatic changes in the musical output. It uses the functional programming language TidalCycles as the core pattern creation environment due to its inherent ability to create complex pattern relations with minimal syntax. Borrowing the best from TidalCycles, Siren augments the pattern creation process by introducing various interface level features: a multi-channel sequencer, local and global parameters, mathematical expressions, and pattern history. It presents new opportunities for recording, refining, and reusing the playback information with the pattern roll component. Subsequently, the paper concludes with a preliminary evaluation of Siren in the context of user interface design principles, which originates from the cognitive dimensions framework for musical notation design.
@inproceedings{Toka2018, author = {Toka, Mert and Ince, Can and Baytas, Mehmet Aydin}, title = {Siren: Interface for Pattern Languages}, pages = {53--58}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302677}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0014.pdf} }
-
Spencer Salazar, Andrew Piepenbrink, and Sarah Reid. 2018. Developing a Performance Practice for Mobile Music Technology. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 59–64. http://doi.org/10.5281/zenodo.1302679
Download PDF DOIThis paper documents an extensive and varied series of performances by the authors over the past year using mobile technology, primarily iPad tablets running the Auraglyph musical sketchpad software. These include both solo and group performances, the latter under the auspices of the Mobile Ensemble of CalArts (MECA), a group created to perform music with mobile technology devices. As a whole, this diverse mobile technology-based performance practice leverages Auraglyph’s versatility to explore a number of topical issues in electronic music performance, including the use of physical and acoustical space, audience participation, and interaction design of musical instruments.
@inproceedings{Salazar2018, author = {Salazar, Spencer and Piepenbrink, Andrew and Reid, Sarah}, title = {Developing a Performance Practice for Mobile Music Technology}, pages = {59--64}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302679}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0015.pdf} }
-
Ali Momeni, Daniel McNamara, and Jesse Stiles. 2018. MOM: an Extensible Platform for Rapid Prototyping and Design of Electroacoustic Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 65–71. http://doi.org/10.5281/zenodo.1302681
Download PDF DOIThis paper provides an overview of the design, prototyping, deployment and evaluation of a multi-agent interactive sound instrument named MOM (Mobile Object for Music). MOM combines a real-time signal processing engine implemented with Pure Data on an embedded Linux platform, with gestural interaction implemented via a variety of analog and digital sensors. Power, sound-input and sound-diffusion subsystems make the instrument autonomous and mobile. This instrument was designed in coordination with the development of an evening-length dance/music performance in which the performing musician is engaged in choreographed movements with the mobile instruments. The design methodology relied on a participatory process that engaged an interdisciplinary team made up of technologists, musicians, composers, choreographers, and dancers. The prototyping process relied on a mix of in-house and out-sourced digital fabrication processes intended to make the open source hardware and software design of the system accessible and affordable for other creators.
@inproceedings{Momeni2018, author = {Momeni, Ali and McNamara, Daniel and Stiles, Jesse}, title = {MOM: an Extensible Platform for Rapid Prototyping and Design of Electroacoustic Instruments}, pages = {65--71}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302681}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0016.pdf} }
-
Ben Luca Robertson and Luke Dahl. 2018. Harmonic Wand: An Instrument for Microtonal Control and Gestural Excitation. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 72–77. http://doi.org/10.5281/zenodo.1302683
Download PDF DOIThe Harmonic Wand is a transducer-based instrument that combines physical excitation, synthesis, and gestural control. Our objective was to design a device that affords exploratory modes of interaction with the performer’s surroundings, as well as precise control over microtonal pitch content and other concomitant parameters. The instrument comprises a hand-held wand containing two piezo-electric transducers affixed to a pair of metal probes. The performer uses the wand to physically excite surfaces in the environment and capture resultant signals. Input materials are then processed using a novel application of Karplus-Strong synthesis, in which these impulses are imbued with discrete resonances. We achieved gestural control over synthesis parameters using a secondary tactile interface, consisting of four force-sensitive resistors (FSRs), a fader, and a momentary switch. As a unique feature of our instrument, we modeled pitch organization and associated parametric controls according to theoretical principles outlined in Harry Partch’s “monophonic fabric” of Just Intonation, specifically his conception of odentities, udentities, and a variable numerary nexus. This system classifies pitch content based upon intervallic structures found in both the overtone and undertone series. Our paper details the procedural challenges in designing the Harmonic Wand.
@inproceedings{Robertson2018, author = {Robertson, Ben Luca and Dahl, Luke}, title = {Harmonic Wand: An Instrument for Microtonal Control and Gestural Excitation}, pages = {72--77}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302683}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0017.pdf} }
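A compact sketch of the synthesis idea named in the entry above: a captured impulse (here a noise burst standing in for a piezo-recorded tap) feeds a Karplus-Strong delay loop, and loop lengths are tuned to just-intonation ratios over an assumed 1/1 fundamental. The specific ratios and fundamental are illustrative, not the instrument's actual tuning tables.

```python
import numpy as np

SR = 44100

def karplus_strong(excitation, freq, duration=1.0, damping=0.996):
    """Feed a captured impulse into a Karplus-Strong delay loop tuned to `freq`."""
    period = int(round(SR / freq))
    n = int(SR * duration)
    buf = np.zeros(period)
    buf[:min(period, len(excitation))] = excitation[:period]
    out = np.zeros(n)
    for i in range(n):
        out[i] = buf[i % period]
        # averaging two adjacent samples acts as the loop's lowpass filter
        buf[i % period] = damping * 0.5 * (buf[i % period] + buf[(i + 1) % period])
    return out

# otonal ratios over an assumed 1/1 fundamental, in the spirit of Partch's odentities
fundamental = 196.0
ratios = [1, 9/8, 5/4, 11/8, 3/2, 7/4]
impulse = np.random.uniform(-1, 1, 512)   # stand-in for a piezo-captured tap
notes = [karplus_strong(impulse, fundamental * r) for r in ratios]
```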
-
McLean J Macionis and Ajay Kapur. 2018. Sansa: A Modified Sansula for Extended Compositional Techniques Using Machine Learning. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 78–81. http://doi.org/10.5281/zenodo.1302685
Download PDF DOISansa is an extended sansula, a hyper-instrument that is similar in design and functionality to a kalimba or thumb piano. At the heart of this interface is a series of sensors that are used to augment the tone and expand the performance capabilities of the instrument. The sensor data is further exploited using the machine learning program Wekinator, which gives users the ability to interact and perform with the instrument using several different modes of operation. In this way, Sansa is capable of both solo acoustic performances as well as complex productions that require interactions between multiple technological mediums. Sansa expands the current community of hyper-instruments by demonstrating the ways that hardware and software can extend an acoustic instrument’s functionality and playability in a live performance or studio setting.
@inproceedings{Macionis2018, author = {Macionis, McLean J and Kapur, Ajay}, title = {Sansa: A Modified Sansula for Extended Compositional Techniques Using Machine Learning}, pages = {78--81}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302685}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0018.pdf} }
-
Luca Turchet and Mathieu Barthet. 2018. Demo of interactions between a performer playing a Smart Mandolin and audience members using Musical Haptic Wearables. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 82–83. http://doi.org/10.5281/zenodo.1302687
Download PDF DOIThis demo will showcase technologically mediated interactions between a performer playing a smart musical instrument (SMI) and audience members using Musical Haptic Wearables (MHWs). Smart Instruments are a family of musical instruments characterized by embedded computational intelligence, wireless connectivity, an embedded sound delivery system, and an onboard system for feedback to the player. They offer direct point-to-point communication between each other and other portable sensor-enabled devices connected to local networks and to the Internet. MHWs are wearable devices for audience members, which encompass haptic stimulation, gesture tracking, and wireless connectivity features. This demo will present an architecture enabling the multidirectional creative communication between a performer playing a Smart Mandolin and audience members using armband-based MHWs.
@inproceedings{Turchet2018, author = {Turchet, Luca and Barthet, Mathieu}, title = {Demo of interactions between a performer playing a Smart Mandolin and audience members using Musical Haptic Wearables}, pages = {82--83}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302687}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0019.pdf} }
-
Steven Kemper and Scott Barton. 2018. Mechatronic Expression: Reconsidering Expressivity in Music for Robotic Instruments . Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 84–87. http://doi.org/10.5281/zenodo.1302689
Download PDF DOIRobotic instrument designers tend to focus on the number of sound control parameters and their resolution when trying to develop expressivity in their instruments. These parameters afford greater sonic nuance related to elements of music that are traditionally associated with expressive human performances including articulation, timbre, dynamics, and phrasing. Equating the capacity for sonic nuance and musical expression stems from the “transitive” perspective that musical expression is an act of emotional communication from performer to listener. However, this perspective is problematic in the case of robotic instruments since we do not typically consider machines to be capable of expressing emotion. Contemporary theories of musical expression focus on an “intransitive” perspective, where musical meaning is generated as an embodied experience. Understanding expressivity from this perspective allows listeners to interpret performances by robotic instruments as possessing their own expressive meaning, even though the performer is a machine. It also enables musicians working with robotic instruments to develop their own unique vocabulary of expressive gestures unique to mechanical instruments. This paper explores these issues of musical expression, introducing the concept of mechatronic expression as a compositional and design strategy that highlights the musical and performative capabilities unique to robotic instruments.
@inproceedings{Kemper2018, author = {Kemper, Steven and Barton, Scott}, title = {Mechatronic Expression: Reconsidering Expressivity in Music for Robotic Instruments }, pages = {84--87}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302689}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0020.pdf} }
-
Courtney Brown. 2018. Interactive Tango Milonga: Designing DMIs for the Social Dance Context . Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 88–91. http://doi.org/10.5281/zenodo.1302693
Download PDF DOIMusical participation has brought individuals together in on-going communities throughout human history, aiding in the kinds of social integration essential for wellbeing. The design of Digital Musical Instruments (DMIs), however, has generally been driven by idiosyncratic artistic concerns, Western art music and dance traditions of expert performance, and short-lived interactive art installations engaging a broader public of musical novices. These DMIs rarely engage with the problems of on-going use in musical communities with existing performance idioms, repertoire, and social codes with participants representing the full learning curve of musical skill, such as social dance. Our project, Interactive Tango Milonga, an interactive Argentine tango dance system for social dance, addresses these challenges in order to innovate connection, the feeling of intense relation between dance partners, music, and the larger tango community.
@inproceedings{Brown-b2018, author = {Brown, Courtney}, title = {Interactive Tango Milonga: Designing DMIs for the Social Dance Context }, pages = {88--91}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302693}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0021.pdf} }
-
Rebecca Kleinberger. 2018. Vocal Musical Expression with a Tactile Resonating Device and its Psychophysiological Effects. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 92–95. http://doi.org/10.5281/zenodo.1302693
Download PDF DOIThis paper presents an experiment to investigate how new types of vocal practices can affect psychophysiological activity. We know that health can influence the voice, but can a certain use of the voice influence health through modification of mental and physical state? This study took place in the setting of the Vocal Vibrations installation. For the experiment, participants engage in a multisensory vocal exercise with a limited set of guidance to obtain a wide spectrum of vocal performances across participants. We compare characteristics of those vocal practices to the participants’ heart rate, breathing rate, electrodermal activity and mental states. We obtained significant results suggesting that we can correlate psychophysiological states with characteristics of the vocal practice if we also take into account biographical information, and in particular measurement of how much people “like” their own voice.
@inproceedings{Kleinberger2018, author = {Kleinberger, Rebecca}, title = {Vocal Musical Expression with a Tactile Resonating Device and its Psychophysiological Effects}, pages = {92--95}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302693}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0022.pdf} }
-
Patrick Palsbröker, Christine Steinmeier, and Dominic Becking. 2018. A Framework for Modular VST-based NIMEs Using EDA and Dependency Injection. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 96–101. http://doi.org/10.5281/zenodo.1302653
Download PDF DOIIn order to facilitate access to playing music spontaneously, the prototype of an instrument which allows a more natural learning approach was developed as part of the research project Drum-Dance-Music-Machine. The result was a modular system consisting of several VST plug-ins, which, on the one hand, provides a drum interface to create sounds and tones and, on the other, generates or manipulates music through dance movement in order to simplify the understanding of more abstract characteristics of music. This paper describes the development of a new software concept for the prototype, which since then has been further developed and evaluated several times. This will improve the maintainability and extensibility of the system and eliminate design weaknesses. To do so, the existing system will first be analyzed and requirements for a new framework, which is based on the concepts of event-driven architecture and dependency injection, will be defined. The components are then transferred to the new system and their performance is assessed. The approach chosen in this case study and the lessons learned are intended to provide a viable solution for solving similar problems in the development of modular VST-based NIMEs.
@inproceedings{Palsbröker2018, author = {Palsbröker, Patrick and Steinmeier, Christine and Becking, Dominic}, title = {A Framework for Modular VST-based NIMEs Using EDA and Dependency Injection}, pages = {96--101}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302653}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0023.pdf} }
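The architectural pattern named above (event-driven architecture with dependency injection) is language-agnostic; the Python sketch below shows a minimal publish/subscribe bus injected into two plug-in-like modules. Module names and event topics are hypothetical and do not reflect the authors' VST framework.

```python
from collections import defaultdict

class EventBus:
    """Minimal publish/subscribe bus: modules never call each other directly,
    they only emit and handle named events."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._handlers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._handlers[topic]:
            handler(payload)

class DrumModule:
    def __init__(self, bus: EventBus):      # the bus is injected, not constructed here
        self.bus = bus

    def on_hit(self, velocity):
        self.bus.publish("drum/hit", {"velocity": velocity})

class DanceModule:
    def __init__(self, bus: EventBus):
        bus.subscribe("drum/hit", self.modulate)

    def modulate(self, event):
        print("modulating synthesis with velocity", event["velocity"])

# composition root: all dependencies are wired in one place
bus = EventBus()
drums = DrumModule(bus)
dance = DanceModule(bus)
drums.on_hit(0.8)
```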
-
Jack Atherton and Ge Wang. 2018. Chunity: Integrated Audiovisual Programming in Unity. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 102–107. http://doi.org/10.5281/zenodo.1302695
Download PDF DOIChunity is a programming environment for the design of interactive audiovisual games, instruments, and experiences. It embodies an audio-driven, sound-first approach that integrates audio programming and graphics programming in the same workflow, taking advantage of strongly-timed audio programming features of the ChucK programming language and the state-of-the-art real-time graphics engine found in Unity. We describe both the system and its intended workflow for the creation of expressive audiovisual works. Chunity was evaluated as the primary software platform in a computer music and design course, where students created a diverse assortment of interactive audiovisual software. We present results from the evaluation and discuss Chunity’s usability, utility, and aesthetics as a way of working. Through these, we argue for Chunity as a unique and useful way to program sound, graphics, and interaction in tandem, giving users the flexibility to use a game engine to do much more than "just" make games.
@inproceedings{Atherton2018, author = {Atherton, Jack and Wang, Ge}, title = {Chunity: Integrated Audiovisual Programming in Unity}, pages = {102--107}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302695}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0024.pdf} }
-
Steffan Carlos Ianigro and Oliver Bown. 2018. Exploring Continuous Time Recurrent Neural Networks through Novelty Search. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 108–113. http://doi.org/10.5281/zenodo.1302697
Download PDF DOIIn this paper we expand on prior research into the use of Continuous Time Recurrent Neural Networks (CTRNNs) as evolvable generators of musical structures such as audio waveforms. This type of neural network has a compact structure and is capable of producing a large range of temporal dynamics. Due to these properties, we believe that CTRNNs combined with evolutionary algorithms (EAs) could offer musicians many creative possibilities for the exploration of sound. In prior work, we have explored the use of interactive and target-based EA designs to tap into the creative possibilities of CTRNNs. Our results have shown promise for the use of CTRNNs in the audio domain. However, we feel that neither EA design allows both open-ended discovery and effective navigation of the CTRNN audio search space by musicians. Within this paper, we explore the possibility of using novelty search as an alternative algorithm that facilitates both open-ended and rapid discovery of the CTRNN creative search space.
@inproceedings{Ianigro2018, author = {Ianigro, Steffan Carlos and Bown, Oliver}, title = {Exploring Continuous Time Recurrent Neural Networks through Novelty Search}, pages = {108--113}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302697}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0025.pdf} }
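For readers unfamiliar with CTRNNs as waveform generators, the sketch below Euler-integrates a small continuous-time recurrent neural network and reads one neuron's output as an audio signal. The weight ranges, time constants, and network size are arbitrary stand-ins for the parameters that an EA or novelty search would evolve.

```python
import numpy as np

rng = np.random.default_rng(1)

def ctrnn_waveform(n_neurons=4, n_samples=8000, dt=1.0 / 8000):
    """Integrate tau_i * dy_i/dt = -y_i + sum_j W_ij * sigmoid(y_j + b_j)
    and return one neuron's activation as a waveform in [-1, 1]."""
    W = rng.uniform(-8, 8, (n_neurons, n_neurons))   # connection weights ("genome")
    bias = rng.uniform(-3, 3, n_neurons)
    tau = rng.uniform(0.002, 0.05, n_neurons)        # time constants in seconds
    y = np.zeros(n_neurons)                          # neuron states
    out = np.zeros(n_samples)
    for i in range(n_samples):
        activation = 1.0 / (1.0 + np.exp(-(y + bias)))   # sigmoid outputs
        y = y + dt * (-y + W @ activation) / tau          # Euler step
        out[i] = 2.0 * activation[0] - 1.0
    return out
```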
-
John Bowers and Owen Green. 2018. All the Noises: Hijacking Listening Machines for Performative Research. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 114–119. http://doi.org/10.5281/zenodo.1302699
Download PDF DOIResearch into machine listening has intensified in recent years creating a variety of techniques for recognising musical features suitable, for example, in musicological analysis or commercial application in song recognition. Within NIME, several projects exist seeking to make these techniques useful in real-time music making. However, we debate whether the functionally-oriented approaches inherited from engineering domains that much machine listening research manifests are fully suited to the exploratory, divergent, boundary-stretching, uncertainty-seeking, playful and irreverent orientations of many artists. To explore this, we engaged in a concerted collaborative design exercise in which many different listening algorithms were implemented and presented with input which challenged their customary range of application and the implicit norms of musicality which research can take for granted. An immersive 3D spatialised multichannel environment was created in which the algorithms could be explored in a hybrid installation/performance/lecture form of research presentation. The paper closes with reflections on the creative value of ‘hijacking’ formal approaches into deviant contexts, the typically undocumented practical know-how required to make algorithms work, the productivity of a playfully irreverent relationship between engineering and artistic approaches to NIME, and a sketch of a sonocybernetic aesthetics for our work.
@inproceedings{Bowers2018, author = {Bowers, John and Green, Owen}, title = {All the Noises: Hijacking Listening Machines for Performative Research}, pages = {114--119}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302699}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0026.pdf} }
-
Rodrigo Schramm, Federico Visi, André Brasil, and Marcelo O Johann. 2018. A polyphonic pitch tracking embedded system for rapid instrument augmentation. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 120–125. http://doi.org/10.5281/zenodo.1302650
Download PDF DOIThis paper presents a system for easily augmenting polyphonic pitched instruments. The entire system is designed to run on a low-cost embedded computer, suitable for live performance and easy to customise for different use cases. The core of the system implements real-time spectrum factorisation, decomposing polyphonic audio input signals into music note activations. New instruments can be easily added to the system with the help of custom spectral template dictionaries. Instrument augmentation is achieved by replacing or mixing the instrument’s original sounds with a large variety of synthetic or sampled sounds, which follow the polyphonic pitch activations.
@inproceedings{Schramm2018, author = {Schramm, Rodrigo and Visi, Federico and Brasil, André and Johann, Marcelo O}, title = {A polyphonic pitch tracking embedded system for rapid instrument augmentation}, pages = {120--125}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302650}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0027.pdf} }
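A minimal sketch of the real-time spectrum factorisation step described above: with a fixed spectral template dictionary W (one column per note), per-frame note activations H are estimated by multiplicative updates so that V ≈ WH. This is the textbook KL-NMF update with W held fixed, shown for illustration; the paper's embedded implementation and template-building procedure are not reproduced here.

```python
import numpy as np

def note_activations(V, W, n_iter=30, eps=1e-9):
    """Estimate note activations H from a magnitude spectrogram V (bins x frames)
    and a fixed note template dictionary W (bins x notes)."""
    H = np.full((W.shape[1], V.shape[1]), 0.1)
    for _ in range(n_iter):
        WH = W @ H + eps
        # multiplicative KL-divergence update, W held fixed
        H *= (W.T @ (V / WH)) / (W.T.sum(axis=1, keepdims=True) + eps)
    return H   # threshold H per row to obtain note on/off decisions
```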
-
Koray Tahiroglu, Michael Gurevich, and R. Benjamin Knapp. 2018. Contextualising Idiomatic Gestures in Musical Interactions with NIMEs. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 126–131. http://doi.org/10.5281/zenodo.1302701
Download PDF DOIThis paper introduces various ways that idiomatic gestures emerge in performance practice with new musical instruments. It demonstrates that idiomatic gestures can play an important role in the development of personalized performance practices that can be the basis for the development of style and expression. Three detailed examples – biocontrollers, accordion-inspired instruments, and a networked intelligent controller – illustrate how a complex suite of factors throughout the design, composition and performance processes can influence the development of idiomatic gestures. We argue that the explicit consideration of idiomatic gestures throughout the life cycle of new instruments can facilitate the emergence of style and give rise to performances that can develop rich layers of meaning.
@inproceedings{Tahiroglu2018, author = {Tahiroglu, Koray and Gurevich, Michael and Knapp, R. Benjamin}, title = {Contextualising Idiomatic Gestures in Musical Interactions with NIMEs}, pages = {126--131}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302701}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0028.pdf} }
-
Lamtharn Hantrakul. 2018. GestureRNN: A neural gesture system for the Roli Lightpad Block. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 132–137. http://doi.org/10.5281/zenodo.1302703
Download PDF DOIMachine learning and deep learning have recently made a large impact in the artistic community. In many of these applications, however, the model is often used to render the high dimensional output directly, e.g. every individual pixel in the final image. Humans arguably operate in much lower dimensional spaces during the creative process, e.g. the broad movements of a brush. In this paper, we design a neural gesture system for music generation based around this concept. Instead of directly generating audio, we train a Long Short Term Memory (LSTM) recurrent neural network to generate instantaneous position and pressure on the Roli Lightpad instrument. These generated coordinates in turn, give rise to the sonic output defined in the synth engine. The system relies on learning these movements from a musician who has already developed a palette of musical gestures idiomatic to the Lightpad. Unlike many deep learning systems that render high dimensional output, our low-dimensional system can be run in real-time, enabling the first real time gestural duet of its kind between a player and a recurrent neural network on the Lightpad instrument.
@inproceedings{Hantrakul2018, author = {Hantrakul, Lamtharn}, title = {GestureRNN: A neural gesture system for the Roli Lightpad Block}, pages = {132--137}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302703}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0029.pdf} }
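A hedged PyTorch sketch of the low-dimensional generation loop described above: an LSTM predicts the next (x, y, pressure) triple from the previous one and is rolled out autoregressively. The network size and untrained weights are placeholders; in the paper the model is trained on a musician's recorded Lightpad gestures, which is not reproduced here.

```python
import torch
import torch.nn as nn

class GestureLSTM(nn.Module):
    """Predicts the next (x, y, pressure) triple from the previous one."""
    def __init__(self, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)

    def forward(self, x, state=None):
        out, state = self.lstm(x, state)
        return torch.sigmoid(self.head(out)), state   # outputs normalised to [0, 1]

@torch.no_grad()
def generate(model, seed, steps=200):
    """Autoregressive rollout: each predicted triple is fed back as the next input."""
    model.eval()
    triple = seed.view(1, 1, 3)          # (batch, time, features)
    state, path = None, []
    for _ in range(steps):
        triple, state = model(triple, state)
        path.append(triple.view(3).tolist())
    return path   # stream of (x, y, pressure) values to drive a synth engine

path = generate(GestureLSTM(), torch.tensor([0.5, 0.5, 0.2]))
```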
-
Balandino Di Donato, Jamie Bullock, and Atau Tanaka. 2018. Myo Mapper: a Myo armband to OSC mapper. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 138–143. http://doi.org/10.5281/zenodo.1302705
Download PDF DOIMyo Mapper is a free and open source cross-platform application to map data from the gestural device Myo armband into Open Sound Control (OSC) messages. It represents a ‘quick and easy’ solution for exploring the Myo’s potential for realising new interfaces for musical expression. Together with details of the software, this paper reports some applications in which Myo Mapper has been successfully used and a qualitative evaluation. We then propose guidelines for using Myo data in interactive artworks based on insight gained from the works described and the evaluation. Findings show that Myo Mapper empowers artists and non-skilled developers to easily take advantage of high-level features of Myo data for realising interactive artistic works. It also facilitates the recognition of poses and gestures beyond those included with the product by using third-party interactive machine learning software.
@inproceedings{DiDonato2018, author = {Di Donato, Balandino and Bullock, Jamie and Tanaka, Atau}, title = {Myo Mapper: a Myo armband to OSC mapper}, pages = {138--143}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302705}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0030.pdf} }
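A small sketch of the general sensor-to-OSC mapping pattern that the entry above describes, using the python-osc library. The port and OSC address namespace below are hypothetical and are not Myo Mapper's own message names.

```python
from pythonosc.udp_client import SimpleUDPClient

# Illustrative only: host, port, and the /armband/* namespace are assumptions.
client = SimpleUDPClient("127.0.0.1", 5432)

def scale(value, lo, hi):
    """Normalise a raw sensor reading to 0..1, as a mapper would before sending."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def send_frame(yaw, pitch, roll, emg):
    """Send one frame of orientation and EMG data as OSC messages."""
    client.send_message("/armband/orientation",
                        [scale(yaw, -180, 180), scale(pitch, -90, 90), scale(roll, -180, 180)])
    client.send_message("/armband/emg", [scale(e, 0, 1024) for e in emg])

send_frame(12.0, -40.0, 5.0, [300, 512, 120, 640, 80, 95, 410, 230])
```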
-
Federico Visi and Luke Dahl. 2018. Real-Time Motion Capture Analysis and Music Interaction with the Modosc Descriptor Library. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 144–147. http://doi.org/10.5281/zenodo.1302707
Download PDF DOIWe present modosc, a set of Max abstractions designed for computing motion descriptors from raw motion capture data in real time. The library contains methods for extracting descriptors useful for expressive movement analysis and sonic interaction design. modosc is designed to address the data handling and synchronization issues that often arise when working with complex marker sets. This is achieved by adopting a multiparadigm approach facilitated by odot and Open Sound Control to overcome some of the limitations of conventional Max programming, and structure incoming and outgoing data streams in a meaningful and easily accessible manner. After describing the contents of the library and how data streams are structured and processed, we report on a sonic interaction design use case involving motion feature extraction and machine learning.
@inproceedings{Visi2018, author = {Visi, Federico and Dahl, Luke}, title = {Real-Time Motion Capture Analysis and Music Interaction with the Modosc Descriptor Library}, pages = {144--147}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302707}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0031.pdf} }
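As an illustration of the kind of descriptors the library computes, the numpy sketch below derives frame-wise speed, acceleration, and jerk magnitudes for a single marker trajectory. It is a generic example under an assumed frame rate, not the modosc abstractions themselves.

```python
import numpy as np

def motion_descriptors(positions, fps=120.0):
    """Frame-wise speed, acceleration, and jerk magnitudes for one marker.
    positions: (frames, 3) array of x, y, z coordinates in metres."""
    dt = 1.0 / fps
    vel = np.gradient(positions, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    jerk = np.gradient(acc, dt, axis=0)
    return {
        "speed": np.linalg.norm(vel, axis=1),
        "acceleration": np.linalg.norm(acc, axis=1),
        "jerk": np.linalg.norm(jerk, axis=1),
    }

# e.g. a hand marker moving along a circle over two seconds at 120 fps
t = np.linspace(0, 2, 240)
marker = np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t), 0 * t], axis=1)
features = motion_descriptors(marker)
```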
-
Cagan Arslan, Florent Berthaut, Jean Martinet, Ioan Marius Bilasco, and Laurent Grisoni. 2018. The Phone with the Flow: Combining Touch + Optical Flow in Mobile Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 148–151. http://doi.org/10.5281/zenodo.1302709
Download PDF DOIMobile devices have been a promising platform for musical performance thanks to the various sensors readily available on board. In particular, mobile cameras can provide rich input as they can capture a wide variety of user gestures or environment dynamics. However, this raw camera input only provides continuous parameters and requires expensive computation. In this paper, we propose to combine motion/gesture input with the touch input, in order to filter movement information both temporally and spatially, thus increasing expressiveness while reducing computation time. We present a design space which demonstrates the diversity of interactions that our technique enables. We also report the results of a user study in which we observe how musicians appropriate the interaction space with an example instrument.
@inproceedings{Arslan2018, author = {Arslan, Cagan and Berthaut, Florent and Martinet, Jean and Bilasco, Ioan Marius and Grisoni, Laurent}, title = {The Phone with the Flow: Combining Touch + Optical Flow in Mobile Instruments}, pages = {148--151}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302709}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0032.pdf} }
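A minimal OpenCV sketch of combining touch with optical flow in the spirit of the entry above: dense flow is computed only in a window around the current touch point, so the touch both selects where motion is read and bounds the computation. The window size and flow parameters are assumptions; this is not the authors' mobile implementation.

```python
import cv2
import numpy as np

def touch_gated_flow(prev_gray, gray, touch_xy, radius=60):
    """Dense optical flow restricted to a window around the touch point.
    prev_gray, gray: consecutive grayscale frames (uint8); touch_xy: (x, y) pixels.
    Returns the mean (dx, dy) motion vector under the finger."""
    h, w = gray.shape
    x, y = touch_xy
    x0, x1 = max(0, x - radius), min(w, x + radius)
    y0, y1 = max(0, y - radius), min(h, y + radius)
    flow = cv2.calcOpticalFlowFarneback(prev_gray[y0:y1, x0:x1], gray[y0:y1, x0:x1],
                                        None, 0.5, 3, 15, 3, 5, 1.2, 0)
    return flow.reshape(-1, 2).mean(axis=0)

# usage idea: map the vector's magnitude and angle to continuous synthesis
# parameters, while the touch position selects which parameters are affected.
```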
-
Lars Engeln, Dietrich Kammer, Leon Brandt, and Rainer Groh. 2018. Multi-Touch Enhanced Visual Audio-Morphing. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 152–155. http://doi.org/10.5281/zenodo.1302711
Download PDF DOIMany digital interfaces for audio effects still resemble racks and cases of their hardware counterparts. For instance, DSP-algorithms are often adjusted via direct value input, sliders, or knobs. While recent research has started to experiment with the capabilities offered by modern interfaces, there are no examples of productive applications such as audio-morphing. Audio-morphing as a special field of DSP has a high complexity for the morph itself and for the parametrization of the transition between two sources. We propose a multi-touch enhanced interface for visual audio-morphing. This interface visualizes the internal processing and allows direct manipulation of the morphing parameters in the visualization. Using multi-touch gestures to manipulate audio-morphing in a visual way, sound design and music production becomes more unrestricted and creative.
@inproceedings{Engeln2018, author = {Engeln, Lars and Kammer, Dietrich and Brandt, Leon and Groh, Rainer}, title = {Multi-Touch Enhanced Visual Audio-Morphing}, pages = {152--155}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302711}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0033.pdf} }
-
Anıl Çamcı. 2018. GrainTrain: A Hand-drawn Multi-touch Interface for Granular Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 156–161. http://doi.org/10.5281/zenodo.1302529
Download PDF DOIWe describe an innovative multi-touch performance tool for real-time granular synthesis based on hand-drawn waveform paths. GrainTrain is a cross-platform web application that can run on both desktop and mobile computers, including tablets and phones. In this paper, we first offer an analysis of existing granular synthesis tools from an interaction standpoint, and outline a taxonomy of common interaction paradigms used in their designs. We then delineate the implementation of GrainTrain, and its unique approach to controlling real-time granular synthesis. We describe practical scenarios in which GrainTrain enables new performance possibilities. Finally, we discuss the results of a user study, and provide reports from expert users who evaluated GrainTrain.
@inproceedings{Çamcı2018, author = {Çamcı, Anıl}, title = {GrainTrain: A Hand-drawn Multi-touch Interface for Granular Synthesis}, pages = {156--161}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302529}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0034.pdf} }
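A rough numpy sketch of granular synthesis driven by a drawn path, in the spirit of the entry above: each grain's read position in the source file follows a normalised path function while grains are laid out left to right in the output. The path, grain size, and grain count are illustrative; GrainTrain's web implementation is not reproduced here.

```python
import numpy as np

SR = 44100

def granulate_along_path(source, path, n_grains=400, grain_ms=60):
    """Scatter Hann-windowed grains whose read positions follow a normalised
    path (0..1 over the source), while output time advances left to right."""
    grain_len = int(SR * grain_ms / 1000)
    window = np.hanning(grain_len)
    out = np.zeros(len(source) + grain_len)
    for i in range(n_grains):
        progress = i / (n_grains - 1)
        pos = path(progress)                              # 0..1 read position in the source
        start = int(pos * (len(source) - grain_len))
        grain = source[start:start + grain_len] * window
        write = int(progress * len(source))               # grains laid out along output time
        out[write:write + grain_len] += grain
    return out / max(1e-9, np.abs(out).max())

source = np.sin(2 * np.pi * 330 * np.arange(SR * 2) / SR)   # 2 s test tone
path = lambda p: 0.5 + 0.5 * np.sin(2 * np.pi * p)           # stand-in for a drawn path
audio = granulate_along_path(source, path)
```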
-
gus xia and Roger B. Dannenberg. 2018. ShIFT: A Semi-haptic Interface for Flute Tutoring. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 162–167. http://doi.org/10.5281/zenodo.1302531
Download PDF DOIThe traditional instrument learning procedure is time-consuming; it begins with learning music notation and necessitates layers of sophistication and abstraction. Haptic interfaces open another door to the music world for the vast majority of talentless beginners when traditional training methods are not effective. However, the existing haptic interfaces can only be used to learn specially designed pieces with great restrictions on duration and pitch range due to the fact that it is only feasible to guide a part of performance motion haptically for most instruments. Our study breaks such restrictions using a semi-haptic guidance method. For the first time, the pitch range of the haptically learned pieces goes beyond an octave (with the fingering motion covering most of the possible choices) and the duration of the learned pieces covers a whole phrase. This significant change leads to a more realistic instrument learning process. Experiments show that the semi-haptic interface is effective as long as learners are not “tone deaf”. Using our prototype device, the learning rate is about 30% faster compared with learning from videos.
@inproceedings{xia2018, author = {gus xia and Dannenberg, Roger B.}, title = {ShIFT: A Semi-haptic Interface for Flute Tutoring}, pages = {162--167}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302531}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0035.pdf} }
-
Fabio Morreale, Andrew P. McPherson, and Marcelo Wanderley. 2018. NIME Identity from the Performer’s Perspective. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 168–173. http://doi.org/10.5281/zenodo.1302533
Download PDF DOIThe term ‘NIME’ — New Interfaces for Musical Expression — has come to signify both technical and cultural characteristics. Not all new musical instruments are NIMEs, and not all NIMEs are defined as such for the sole ephemeral condition of being new. So, what are the typical characteristics of NIMEs and what are their roles in performers’ practice? Is there a typical NIME repertoire? This paper aims to address these questions with a bottom up approach. We reflect on the answers of 78 NIME performers to an online questionnaire discussing their performance experience with NIMEs. The results of our investigation explore the role of NIMEs in the performers’ practice and identify the values that are common among performers. We find that most NIMEs are viewed as exploratory tools created by and for performers, and that they are constantly in development and almost never in a finished state. The findings of our survey also reflect upon virtuosity with NIMEs, whose peculiar performance practice results in learning trajectories that often do not lead to the development of virtuosity as it is commonly understood in traditional performance.
@inproceedings{Morreale2018, author = {Morreale, Fabio and McPherson, Andrew P. and Wanderley, Marcelo}, title = {NIME Identity from the Performer's Perspective}, pages = {168--173}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302533}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0036.pdf} }
-
Anna Xambó. 2018. Who Are the Women Authors in NIME?–Improving Gender Balance in NIME Research. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 174–177. http://doi.org/10.5281/zenodo.1302535
Download PDF DOIIn recent years, there has been an increase in awareness of the underrepresentation of women in the sound and music computing fields. The New Interfaces for Musical Expression (NIME) conference is not an exception, with a number of open questions remaining around the issue. In the present paper, we study the presence and evolution over time of women authors in NIME since the beginning of the conference in 2001 until 2017. We discuss the results of such a gender imbalance and potential solutions by summarizing the actions taken by a number of worldwide initiatives that have put an effort into making women’s work visible in our field, with a particular emphasis on Women in Music Tech (WiMT), a student-led organization that aims to encourage more women to join music technology, as a case study. We conclude with a hope for an improvement in the representation of women in NIME by presenting WiNIME, a public online database that details who are the women authors in NIME.
@inproceedings{Xambó2018, author = {Xambó, Anna}, title = {Who Are the Women Authors in NIME?–Improving Gender Balance in NIME Research}, pages = {174--177}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302535}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0037.pdf} }
-
Sarah Reid, Sara Sithi-Amnuai, and Ajay Kapur. 2018. Women Who Build Things: Gestural Controllers, Augmented Instruments, and Musical Mechatronics. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 178–183. http://doi.org/10.5281/zenodo.1302537
Download PDF DOIThis paper presents a collection of hardware-based technologies for live performance developed by women over the last few decades. The field of music technology and interface design has a significant gender imbalance, with men greatly outnumbering women. The purpose of this paper is to promote the visibility and representation of women in this field, and to encourage discussion on the importance of mentorship and role models for young women and girls in music technology.
@inproceedings{Reid2018, author = {Reid, Sarah and Sithi-Amnuai, Sara and Kapur, Ajay}, title = {Women Who Build Things: Gestural Controllers, Augmented Instruments, and Musical Mechatronics}, pages = {178--183}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302537}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0038.pdf} }
-
Robert H Jack, Jacob Harrison, Fabio Morreale, and Andrew P. McPherson. 2018. Democratising DMIs: the relationship of expertise and control intimacy. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 184–189. http://doi.org/10.5281/zenodo.1302539
Download PDF DOIAn oft-cited aspiration of digital musical instrument (DMI) design is to create instruments, in the words of Wessel and Wright, with a ‘low entry fee and no ceiling on virtuosity’. This is a difficult task to achieve: many new instruments are aimed at either the expert or amateur musician, with few instruments catering for both. There is often a trade-off between learning curve and the nuance of musical control in DMIs. In this paper we present a study conducted with non-musicians and guitarists playing guitar-derivative DMIs with variable levels of control intimacy: how the richness and nuance of a performer’s movement translates into the musical output of an instrument. Findings suggest a significant difference in preference for levels of control intimacy between the guitarists and the non-musicians. In particular, the guitarists unanimously preferred the richer of the two settings, whereas the non-musicians generally preferred the setting with lower richness. This difference is notable because it is often taken as a given that increasing richness is a way to make instruments more enjoyable to play; however, this result only seems to hold for expert players.
@inproceedings{Jack2018, author = {Jack, Robert H and Harrison, Jacob and Morreale, Fabio and McPherson, Andrew P.}, title = {Democratising DMIs: the relationship of expertise and control intimacy}, pages = {184--189}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302539}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0039.pdf} }
-
Adnan Marquez-Borbon and Juan Pablo Martinez-Avila. 2018. The Problem of DMI Adoption and Longevity: Envisioning a NIME Performance Pedagogy. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 190–195. http://doi.org/10.5281/zenodo.1302541
Download PDF DOIThis paper addresses the prevailing longevity problem of digital musical instruments (DMIs) in NIME research and design by proposing a holistic system design approach. Despite recent efforts to examine the main factors contributing to DMIs falling into obsolescence, attempts to remedy this issue largely focus on the artifacts themselves, their design processes and technologies. However, few existing studies have attempted to proactively build a community around technological platforms for DMIs, whilst bearing in mind the social dynamics and activities necessary for a budding community. We observe that such attempts, while important in their undertaking, are limited in scope. In this paper we argue that achieving longevity must be addressed beyond the device itself and must tackle broader ecosystemic factors. We hypothesize that a long-lived DMI design must not only take into account a target community but may also require a non-traditional pedagogical system that sustains artistic practice.
@inproceedings{Marquez-Borbon2018, author = {Marquez-Borbon, Adnan and Martinez-Avila, Juan Pablo}, title = {The Problem of DMI Adoption and Longevity: Envisioning a NIME Performance Pedagogy}, pages = {190--195}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302541}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0040.pdf} }
-
Charles Patrick Martin, Alexander Refsum Jensenius, and Jim Torresen. 2018. Composing an Ensemble Standstill Work for Myo and Bela. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 196–197. http://doi.org/10.5281/zenodo.1302543
Download PDF DOIThis paper describes the process of developing a standstill performance work using the Myo gesture control armband and the Bela embedded computing platform. The combination of Myo and Bela allows a portable and extensible version of the standstill performance concept while introducing muscle tension as an additional control parameter. We describe the technical details of our setup and introduce Myo-to-Bela and Myo-to-OSC software bridges that assist with prototyping compositions using the Myo controller.
@inproceedings{Martin2018, author = {Martin, Charles Patrick and Jensenius, Alexander Refsum and Torresen, Jim}, title = {Composing an Ensemble Standstill Work for Myo and Bela}, pages = {196--197}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302543}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0041.pdf} }
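The Myo-to-OSC bridge mentioned in the entry above forwards armband data to sound environments as OSC messages. Below is a minimal illustrative sketch of that idea in Python using the python-osc package; the Bela address, port, OSC address patterns and the simulated sensor frames are all assumptions for demonstration and do not reproduce the authors' actual bridge.

```python
# Minimal sketch of forwarding Myo-style sensor frames as OSC messages.
# The address patterns, port and simulated data are illustrative assumptions;
# a real bridge would read frames from the Myo SDK instead.
import math
import time

from pythonosc.udp_client import SimpleUDPClient

BELA_IP = "192.168.7.2"   # assumed address of a Bela board over USB networking
OSC_PORT = 8000           # assumed port of the receiving Pure Data patch

client = SimpleUDPClient(BELA_IP, OSC_PORT)

def send_frame(t):
    """Send one simulated accelerometer + EMG frame as two OSC messages."""
    acc = [math.sin(t), math.cos(t), 1.0]                # placeholder 3-axis accelerometer
    emg = [abs(math.sin(t * 3 + i)) for i in range(8)]   # placeholder 8-channel EMG
    client.send_message("/myo/acc", acc)
    client.send_message("/myo/emg", emg)

if __name__ == "__main__":
    start = time.time()
    while True:
        send_frame(time.time() - start)
        time.sleep(0.02)  # ~50 Hz update rate
```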
-
Alex Nieva, Johnty Wang, Joseph Malloch, and Marcelo Wanderley. 2018. The T-Stick: Maintaining a 12 year-old Digital Musical Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 198–199. http://doi.org/10.5281/zenodo.1302545
Download PDF DOIThis paper presents the work to maintain several copies of the digital musical instrument (DMI) called the T-Stick in the hopes of extending their useful lifetime. The T-Sticks were originally conceived in 2006 and 20 copies have been built over the last 12 years. While they all preserve the original design concept, their evolution resulted in variations in the choice of microcontrollers and sensors. We worked with eight copies of the second and fourth generation T-Sticks to overcome issues related to the aging of components, changes in external software, lack of documentation, and, in general, the problem of technical maintenance.
@inproceedings{Nieva2018, author = {Nieva, Alex and Wang, Johnty and Malloch, Joseph and Wanderley, Marcelo}, title = {The T-Stick: Maintaining a 12 year-old Digital Musical Instrument}, pages = {198--199}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302545}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0042.pdf} }
-
Christopher Dewey and Jonathan P. Wakefield. 2018. MIDI Keyboard Defined DJ Performance System. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 200–201. http://doi.org/10.5281/zenodo.1302547
Download PDF DOIThis paper explores the use of the ubiquitous MIDI keyboard to control a DJ performance system. The prototype system uses a two-octave keyboard, with each octave controlling one audio track. Each audio track has four two-bar loops which play in synchronisation and are switchable via the respective octave’s first four black keys. The top key of the keyboard toggles between frequency filter mode and time slicer mode. In frequency filter mode the white keys provide seven bands of latched frequency filtering. In time slicer mode the white keys plus the black B-flat key provide latched on/off control of eight time slices of the loop. The system was informally evaluated by nine subjects. The frequency filter mode combined with loop switching worked well with the MIDI keyboard interface. All subjects agreed that the tools had creative performance potential that could be developed through further practice.
@inproceedings{Dewey2018, author = {Dewey, Christopher and Wakefield, Jonathan P.}, title = {MIDI Keyboard Defined DJ Performance System}, pages = {200--201}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302547}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0043.pdf} }
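The entry above maps a two-octave MIDI keyboard onto two audio tracks, with each octave's first four black keys switching loops and the white keys controlling filter bands or time slices. The following Python sketch illustrates one way such a key-to-action mapping could be written; the specific note range and the action labels are assumptions for illustration, not the authors' implementation.

```python
# Illustrative mapping from MIDI note numbers to DJ actions for a two-octave
# keyboard (C3 = 48 up to B4 = 71 is assumed). Black keys select loops, white
# keys select filter bands / time slices; details are assumptions, not the paper's code.

BLACK_PITCH_CLASSES = [1, 3, 6, 8, 10]          # C#, D#, F#, G#, A#
WHITE_PITCH_CLASSES = [0, 2, 4, 5, 7, 9, 11]    # C, D, E, F, G, A, B
LOW_OCTAVE_START = 48                           # assumed lowest key (C3)

def map_note(note):
    """Return (track, action, index) for a MIDI note in the two-octave range."""
    if not LOW_OCTAVE_START <= note < LOW_OCTAVE_START + 24:
        return None
    track = (note - LOW_OCTAVE_START) // 12      # octave 0 or 1 -> track number
    pitch_class = note % 12
    if pitch_class in BLACK_PITCH_CLASSES:
        loop_index = BLACK_PITCH_CLASSES.index(pitch_class)
        if loop_index < 4:                       # only the first four black keys switch loops
            return (track, "switch_loop", loop_index)
        return None
    band_index = WHITE_PITCH_CLASSES.index(pitch_class)
    return (track, "toggle_band_or_slice", band_index)

if __name__ == "__main__":
    for n in (48, 49, 60, 70):
        print(n, map_note(n))
```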
-
Trond Engum and Otto Jonassen Wittner. 2018. Democratizing Interactive Music Production over the Internet. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 202–203. http://doi.org/10.5281/zenodo.1302549
Download PDF DOIThis paper describes an ongoing research project which addresses challenges and opportunities when collaborating interactively in real time in a "virtual" sound studio with several partners in different locations. "Virtual" in this context refers to an interconnected and inter-domain studio environment consisting of several local production systems connected to public and private networks. This paper reports experiences and challenges related to two different production scenarios conducted in 2017.
@inproceedings{Engum2018, author = {Engum, Trond and Wittner, Otto Jonassen}, title = {Democratizing Interactive Music Production over the Internet}, pages = {202--203}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302549}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0044.pdf} }
-
Jean-Francois Charles, Carlos Cotallo Solares, Carlos Toro Tobon, and Andrew Willette. 2018. Using the Axoloti Embedded Sound Processing Platform to Foster Experimentation and Creativity. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 204–205. http://doi.org/10.5281/zenodo.1302551
Download PDF DOIThis paper describes how the Axoloti platform is well suited to teaching a beginners’ course about new electro-acoustic musical instruments and how it fits the needs of artists who want to work with an embedded sound processing platform and get creative at the crossroads of acoustics and electronics. First, we present the criteria used to choose a platform for the course titled "Creating New Musical Instruments" given at the University of Iowa in the Fall of 2017. Then, we explain why we chose the Axoloti board and development environment.
@inproceedings{Charles2018, author = {Charles, Jean-Francois and Cotallo Solares, Carlos and Toro Tobon, Carlos and Willette, Andrew}, title = {Using the Axoloti Embedded Sound Processing Platform to Foster Experimentation and Creativity}, pages = {204--205}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302551}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0045.pdf} }
-
Kyriakos Tsoukalas and Ivica Ico Bukvic. 2018. Introducing a K-12 Mechatronic NIME Kit. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 206–209. http://doi.org/10.5281/zenodo.1302553
Download PDF DOIThe following paper introduces a new mechatronic NIME kit that uses new additions to the Pd-L2Ork visual programming environment and its K-12 learning module. It is designed to facilitate the creation of simple mechatronic systems for physical sound production in K-12 and production scenarios. The new set of objects builds on the existing support for the Raspberry Pi platform to also include the use of electric actuators via the microcomputer’s GPIO system. Moreover, we discuss implications of the newly introduced kit in creative and K-12 education scenarios by sharing observations from a series of pilot workshops, with particular focus on using mechatronic NIMEs as a catalyst for the development of programming skills.
@inproceedings{Tsoukalas2018, author = {Tsoukalas, Kyriakos and Bukvic, Ivica Ico}, title = {Introducing a K-12 Mechatronic NIME Kit}, pages = {206--209}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302553}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0046.pdf} }
-
Daniel Bennett, Peter Bennett, and Anne Roudaut. 2018. Neurythmic: A Rhythm Creation Tool Based on Central Pattern Generators. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 210–215. http://doi.org/10.5281/zenodo.1302555
Download PDF DOIWe describe the development of Neurythmic: an interactive system for the creation and performance of fluid, expressive musical rhythms using Central Pattern Generators (CPGs). CPGs are neural networks which generate adaptive rhythmic signals. They simulate structures in animals which underlie behaviours such as heartbeat, gut peristalsis and complex motor control. Neurythmic is the first such system to use CPGs for interactive rhythm creation. We discuss how Neurythmic uses the entrainment behaviour of these networks to support the creation of rhythms while avoiding the rigidity of grid quantisation approaches. As well as discussing the development, design and evaluation of Neurythmic, we discuss relevant properties of the CPG networks used (Matsuoka’s Neural Oscillator), and describe methods for their control. Evaluation with expert and professional musicians shows that Neurythmic is a versatile tool, adapting well to a range of quite different musical approaches.
@inproceedings{Bennett2018, author = {Bennett, Daniel and Bennett, Peter and Roudaut, Anne}, title = {Neurythmic: A Rhythm Creation Tool Based on Central Pattern Generators}, pages = {210--215}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302555}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0047.pdf} }
-
James Granger, Mateo Aviles, Joshua Kirby, et al. 2018. Evaluating LED-based interface for Lumanote composition creation tool. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 216–221. http://doi.org/10.5281/zenodo.1302557
Download PDF DOIComposing music typically requires years of music theory experience and knowledge that includes but is not limited to chord progression, melody composition theory, and an understanding of whole-step/half-step passing tones among others. For that reason, certain songwriters such as singers may find it necessary to hire experienced pianists to help compose their music. In order to facilitate the process for beginner and aspiring musicians, we have developed Lumanote, a music composition tool that aids songwriters by presenting real-time suggestions on appropriate melody notes and chord progression. While a preliminary evaluation yielded favorable results for beginners, many commented on the difficulty of having to map the note suggestions displayed on the on-screen interface to the physical keyboard they were playing on. This paper presents the resulting solution: an LED-based feedback system that is designed to be directly attached to any standard MIDI keyboard. This peripheral aims to help map note suggestions directly to the physical keys of a musical keyboard. A study consisting of 22 individuals was conducted to compare the effectiveness of the new LED-based system with the existing computer interface, finding that the vast majority of users preferred the LED system. Three experienced musicians also judged and ranked the compositions, noting significant improvement in song quality when using either system, and citing comparable quality between compositions that used either interface.
@inproceedings{Granger2018, author = {Granger, James and Aviles, Mateo and Kirby, Joshua and Griffin, Austin and Yoon, Johnny and Lara-Garduno, Raniero A. and Hammond, Tracy}, title = {Evaluating LED-based interface for Lumanote composition creation tool}, pages = {216--221}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302557}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0048.pdf} }
-
Eduardo Meneses, Sergio Freire, and Marcelo Wanderley. 2018. GuitarAMI and GuiaRT: two independent yet complementary projects on augmented nylon guitars. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 222–227. http://doi.org/10.5281/zenodo.1302559
Download PDF DOIThis paper describes two augmented nylon-string guitar projects developed in different institutions. GuitarAMI uses sensors to modify the classical guitar’s constraints while GuiaRT uses digital signal processing to create virtual guitarists that interact with the performer in real time. After a bibliographic review of Augmented Musical Instruments (AMIs) based on guitars, we present the details of the two projects and compare them using an adapted dimensional space representation. Highlighting the complementarity and cross-influences between the projects, we propose avenues for future collaborative work.
@inproceedings{Meneses2018, author = {Meneses, Eduardo and Freire, Sergio and Wanderley, Marcelo}, title = {GuitarAMI and GuiaRT: two independent yet complementary projects on augmented nylon guitars}, pages = {222--227}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302559}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0049.pdf} }
-
Ariane de Souza Stolfi, Miguel Ceriani, Luca Turchet, and Mathieu Barthet. 2018. Playsound.space: Inclusive Free Music Improvisations Using Audio Commons. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 228–233. http://doi.org/10.5281/zenodo.1302561
Download PDF DOIPlaysound.space is a web-based tool to search for and play Creative Commons licensed sounds which can be applied to free improvisation, experimental music production and soundscape composition. It provides fast access to about 400k non-musical and musical sounds provided by Freesound, and allows users to play/loop single or multiple sounds retrieved through text-based search. Sound discovery is facilitated by the use of semantic searches and visual representations of sounds (spectrograms). Guided by the motivation to create an intuitive tool to support music practice that could suit both novice and trained musicians, we developed and improved the system in a continuous process, gathering frequent feedback from a range of users with various skills. We assessed the prototype with 18 musician and non-musician participants during free music improvisation sessions. Results indicate that the system was found easy to use and supports creative collaboration and expressiveness irrespective of musical ability. We identified further design challenges linked to creative identification, control and content quality.
@inproceedings{Stolfi2018, author = {de Souza Stolfi, Ariane and Ceriani, Miguel and Turchet, Luca and Barthet, Mathieu}, title = {Playsound.space: Inclusive Free Music Improvisations Using Audio Commons}, pages = {228--233}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302561}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0050.pdf} }
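Playsound.space, described above, retrieves Creative Commons sounds through text-based searches of Freesound. The sketch below shows how a comparable text query could be made against the public Freesound APIv2 search endpoint with the requests library; the API token placeholder and the chosen response fields are assumptions, and the actual Playsound.space backend may differ.

```python
# Illustrative text search against the Freesound APIv2 endpoint, in the spirit
# of Playsound.space's text-based sound search. Requires a Freesound API token;
# the requested fields are an assumption for demonstration purposes.
import requests

FREESOUND_SEARCH_URL = "https://freesound.org/apiv2/search/text/"
API_TOKEN = "YOUR_FREESOUND_API_TOKEN"  # placeholder

def search_sounds(query, page_size=5):
    """Return a list of (id, name, license) tuples for a text query."""
    params = {
        "query": query,
        "fields": "id,name,license,previews",
        "page_size": page_size,
        "token": API_TOKEN,
    }
    response = requests.get(FREESOUND_SEARCH_URL, params=params, timeout=10)
    response.raise_for_status()
    results = response.json().get("results", [])
    return [(r["id"], r["name"], r["license"]) for r in results]

if __name__ == "__main__":
    for sound in search_sounds("rain on window"):
        print(sound)
```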
-
John Harding, Richard Graham, and Edwin Park. 2018. CTRL: A Flexible, Precision Interface for Analog Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 234–237. http://doi.org/10.5281/zenodo.1302563
Download PDF DOIThis paper presents a new interface for the production and distribution of high-resolution analog control signals, particularly aimed at the control of analog modular synthesisers. Control Voltage/Gate interfaces generate Control Voltage (CV) and Gate Voltage (Gate) as a means of controlling note pitch and length respectively, and have been with us since 1986 [2]. The authors provide a custom CV/Gate interface and dedicated communication protocol which leverages standard USB Serial functionality and enables connectivity across a plethora of computing devices, including embedded devices such as the Raspberry Pi and ARM-based devices such as widely available ‘Android TV Boxes’. We provide a general overview of the hardware and communication protocol developments, followed by usage examples for tuning and embedded platforms, leveraging software such as Pure Data (Pd), Max, and Max for Live (M4L).
@inproceedings{Harding2018, author = {Harding, John and Graham, Richard and Park, Edwin}, title = {CTRL: A Flexible, Precision Interface for Analog Synthesis}, pages = {234--237}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302563}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0051.pdf} }
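The CTRL interface described above sends pitch and gate information from a host computer to the hardware over standard USB serial. The paper's actual message protocol is not given in this entry, so the following pyserial sketch uses a purely hypothetical three-byte framing (command, channel, value) to illustrate the general approach; the port name, baud rate and scaling are likewise assumptions.

```python
# Hypothetical sketch of driving a USB-serial CV/Gate interface from Python.
# The three-byte framing (command, channel, value) is invented for illustration
# and is NOT the CTRL protocol; only the use of standard USB serial is taken
# from the paper's description. A real high-resolution interface would send
# more than one byte per CV value.
import serial  # pyserial

PORT = "/dev/ttyUSB0"   # assumed serial device name
BAUD = 115200           # assumed baud rate

CMD_SET_CV = 0x01
CMD_SET_GATE = 0x02

def note_to_cv_byte(midi_note):
    """Map a MIDI note to an 8-bit CV value (a 1 V/octave scaling is assumed)."""
    return max(0, min(255, int((midi_note - 24) * 255 / 96)))

with serial.Serial(PORT, BAUD, timeout=1) as ser:
    ser.write(bytes([CMD_SET_CV, 0, note_to_cv_byte(60)]))   # channel 0, middle C
    ser.write(bytes([CMD_SET_GATE, 0, 1]))                   # gate on
```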
-
Peter Beyls. 2018. Motivated Learning in Human-Machine Improvisation. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 238–243. http://doi.org/10.5281/zenodo.1302565
Download PDF DOIThis paper describes a machine learning approach in the context of non-idiomatic human-machine improvisation. In an attempt to avoid explicit mapping of user actions to machine responses, an experimental machine learning strategy is suggested where rewards are derived from the implied motivation of the human interactor – two motivations are at work: integration (aiming to connect with machine-generated material) and expression (independent activity). By tracking consecutive changes in musical distance (i.e. melodic similarity) between human and machine, such motivations can be inferred. A variation of Q-learning is used featuring a self-optimizing variable length state-action-reward list. The system (called Pock) is tunable into particular behavioral niches by means of a limited number of parameters. Pock is designed as a recursive structure and behaves as a complex dynamical system. When tracking system variables over time, emergent non-trivial patterns reveal experimental evidence of attractors demonstrating successful adaptation.
@inproceedings{Beyls2018, author = {Beyls, Peter}, title = {Motivated Learning in Human-Machine Improvisation}, pages = {238--243}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302565}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0052.pdf} }
-
Deepak Chandran and Ge Wang. 2018. InterFACE: new faces for musical expression. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 244–248. http://doi.org/10.5281/zenodo.1302569
Download PDF DOIInterFACE is an interactive system for musical creation, mediated primarily through the user’s facial expressions and movements. It aims to take advantage of the expressive capabilities of the human face to create music in a way that is both expressive and whimsical. This paper introduces the designs of three virtual instruments in the InterFACE system: namely, FACEdrum (a drum machine), GrannyFACE (a granular synthesis sampler), and FACEorgan (a laptop mouth organ using both face tracking and audio analysis). We present the design behind these instruments and consider what it means to be able to create music with one’s face. Finally, we discuss the usability and aesthetic criteria for evaluating such a system, taking into account our initial design goals as well as the resulting experience for the performer and audience.
@inproceedings{Chandran2018, author = {Chandran, Deepak and Wang, Ge}, title = {InterFACE: new faces for musical expression}, pages = {244--248}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302569}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0053.pdf} }
-
Richard Polfreman. 2018. Hand Posture Recognition: IR, IMU and sEMG. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 249–254. http://doi.org/10.5281/zenodo.1302571
Download PDF DOIHands are important anatomical structures for musical performance, and recent developments in input device technology have allowed rather detailed capture of hand gestures using consumer-level products. While in some musical contexts, detailed hand and finger movements are required, in others it is sufficient to communicate discrete hand postures to indicate selection or other state changes. This research compared three approaches to capturing hand gestures where the shape of the hand, i.e. the relative positions and angles of finger joints, is an important part of the gesture. A number of sensor types can be used to capture information about hand posture, each of which has various practical advantages and disadvantages for music applications. This study compared three approaches, using optical, inertial and muscular information, with three sets of 5 hand postures (i.e. static gestures) and gesture recognition algorithms applied to the device data, aiming to determine which methods are most effective.
@inproceedings{Polfreman2018, author = {Polfreman, Richard}, title = {Hand Posture Recognition: IR, IMU and sEMG}, pages = {249--254}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302571}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0054.pdf} }
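The study above compares optical, inertial and muscular sensing for recognising small sets of static hand postures. The particular recognition algorithms are not specified in this entry, so the sketch below simply illustrates the general recipe with a k-nearest-neighbour classifier from scikit-learn trained on synthetic feature vectors; the feature dimensionality, posture labels and data are assumptions.

```python
# Generic static-posture recognition sketch: train a classifier on per-frame
# feature vectors (e.g. finger joint angles or EMG amplitudes) labelled with
# posture names. The classifier choice, feature size and data are assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
POSTURES = ["fist", "open", "point", "pinch", "thumbs_up"]
N_FEATURES = 16   # assumed feature vector length per frame

# Synthetic training data: one noisy cluster of feature vectors per posture.
centres = rng.normal(size=(len(POSTURES), N_FEATURES))
X_train = np.vstack([c + 0.1 * rng.normal(size=(50, N_FEATURES)) for c in centres])
y_train = np.repeat(POSTURES, 50)

clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(X_train, y_train)

# Classify a new frame (here: a noisy copy of the "point" cluster centre).
new_frame = centres[2] + 0.1 * rng.normal(size=N_FEATURES)
print(clf.predict([new_frame])[0])   # expected to print "point"
```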
-
Joseph Malloch, Marlon Mario Schumacher, Stephen Sinclair, and Marcelo Wanderley. 2018. The Digital Orchestra Toolbox for Max. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 255–258. http://doi.org/10.5281/zenodo.1302573
Download PDF DOIThe Digital Orchestra Toolbox for Max is an open-source collection of small modular software tools for aiding the development of Digital Musical Instruments. Each tool takes the form of an "abstraction" for the visual programming environment Max, meaning it can be opened and understood by users within the Max environment, as well as copied, modified, and appropriated as desired. This paper describes the origins of the Toolbox and our motivations for creating it, broadly outlines the types of tools included, and follows the development of the project over the last twelve years. We also present examples of several digital musical instruments built using the Toolbox.
@inproceedings{Malloch2018, author = {Malloch, Joseph and Schumacher, Marlon Mario and Sinclair, Stephen and Wanderley, Marcelo}, title = {The Digital Orchestra Toolbox for Max}, pages = {255--258}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302573}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0055.pdf} }
-
Bill Manaris, Pangur Brougham-Cook, Dana Hughes, and Andrew R. Brown. 2018. JythonMusic: An Environment for Developing Interactive Music Systems. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 259–262. http://doi.org/10.5281/zenodo.1302575
Download PDF DOIJythonMusic is a software environment for developing interactive musical experiences and systems. It is based on jMusic, a software environment for computer-assisted composition, which was extended within the last decade into a more comprehensive framework providing composers and software developers with libraries for music making, image manipulation, building graphical user interfaces, and interacting with external devices via MIDI and OSC, among others. This environment is free and open source. It is based on Python and therefore provides more economical syntax relative to Java- and C/C++-like languages. JythonMusic rests on top of Java, so it provides access to the complete Java API and external Java-based libraries as needed. Also, it works seamlessly with other software, such as PureData, Max/MSP, and Processing. The paper provides an overview of important JythonMusic libraries related to constructing interactive musical experiences. It demonstrates their scope and utility by summarizing several projects developed using JythonMusic, including interactive sound art installations, new interfaces for sound manipulation and spatialization, as well as various explorations on mapping among motion, gesture and music.
@inproceedings{Manaris2018, author = {Manaris, Bill and Brougham-Cook, Pangur and Hughes, Dana and Brown, Andrew R.}, title = {JythonMusic: An Environment for Developing Interactive Music Systems}, pages = {259--262}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302575}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0056.pdf} }
-
Steven Leib and Anıl Çamcı. 2018. Triplexer: An Expression Pedal with New Degrees of Freedom. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 263–268. http://doi.org/10.5281/zenodo.1302577
Download PDF DOIWe introduce the Triplexer, a novel foot controller that gives the performer 3 degrees of freedom over the control of various effects parameters. With the Triplexer, we aim to expand the performer’s control space by augmenting the capabilities of the common expression pedal that is found in most effects rigs. Using industrial-grade weight-detection sensors and widely-adopted communication protocols, the Triplexer offers a flexible platform that can be integrated into various performance setups and situations. In this paper, we detail the design of the Triplexer by describing its hardware, embedded signal processing, and mapping software implementations. We also offer the results of a user study, which we conducted to evaluate the usability of our controller.
@inproceedings{Leib2018, author = {Leib, Steven and Çamcı, Anıl}, title = {Triplexer: An Expression Pedal with New Degrees of Freedom}, pages = {263--268}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302577}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0057.pdf} }
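The Triplexer described above derives three degrees of freedom from weight-detection sensors under the foot. As a rough illustration of how three load readings might be reduced to continuous control parameters, the sketch below computes total pressure plus left/right and front/back balance for three sensors assumed to sit at the corners of a pedal plate; this layout and mapping are assumptions, not the published signal processing.

```python
# Illustrative reduction of three load-sensor readings (assumed to sit at the
# front-left, front-right and rear corners of a pedal plate) into three control
# parameters: total pressure, left/right balance, and front/back balance.
# The sensor layout and normalisation are assumptions for demonstration only.

def triplexer_like_mapping(front_left, front_right, rear):
    total = front_left + front_right + rear
    if total <= 0:
        return 0.0, 0.5, 0.5          # no load: neutral balances
    left_right = front_right / (front_left + front_right + 1e-9)   # 0 = left, 1 = right
    front_back = (front_left + front_right) / total                # 0 = heel, 1 = toe
    return total, left_right, front_back

if __name__ == "__main__":
    # Example frames: even load, toe-down lean, and leaning to the right.
    for frame in [(1.0, 1.0, 1.0), (1.5, 1.5, 0.2), (0.3, 1.8, 0.9)]:
        print(frame, "->", triplexer_like_mapping(*frame))
```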
-
Halldór Úlfarsson. 2018. The halldorophone: The ongoing innovation of a cello-like drone instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 269–274. http://doi.org/10.5281/zenodo.1302579
Download PDF DOIThis paper reports upon the process of innovation of a new instrument. The author has developed the halldorophone, a new electroacoustic string instrument which makes use of positive feedback as a key element in generating its sound. An important objective of the project has been to encourage its use by practicing musicians. After ten years of use, the halldorophone has a growing repertoire of works by prominent composers and performers. During the development of the instrument, the question has been asked: “why do musicians want to use this instrument?” and answers have been found through ongoing (informal) user studies and feedback. As the project progresses, a picture emerges of what qualities have led to a culture of acceptance and use around this new instrument. This paper describes the halldorophone, presents the rationale for its major design features and ergonomic choices as they relate to the overarching objective of nurturing a culture of use, and connects this to wider trends.
@inproceedings{Úlfarsson2018, author = {Úlfarsson, Halldór}, title = {The halldorophone: The ongoing innovation of a cello-like drone instrument}, pages = {269--274}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302579}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0058.pdf} }
-
Kyriakos Tsoukalas, Joseph Kubalak, and Ivica Ico Bukvic. 2018. L2OrkMote: Reimagining a Low-Cost Wearable Controller for a Live Gesture-Centric Music Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 275–280. http://doi.org/10.5281/zenodo.1302581
Download PDF DOILaptop orchestras create music that, although digitally produced, is performed collaboratively and live, not unlike a traditional orchestra. The recent increase in interest and investment in this style of music creation has paved the way for novel methods for musicians to create and interact with music. To this end, a number of nontraditional instruments have been constructed that enable musicians to control sound production beyond pitch and volume, integrating filtering, musical effects, etc. Wii Remotes (WiiMotes) have seen heavy use in maker communities, including laptop orchestras, for their robust sensor array and low cost. The placement of sensors and the form factor of the device itself are suited for video games, not necessarily live music creation. In this paper, the authors present a new controller design, based on the WiiMote hardware platform, to address usability in gesture-centric music performance. Based on the pilot-study data, the new controller offers unrestricted two-hand gesture production, a smaller footprint, and lower muscle strain.
@inproceedings{Tsoukalas-b2018, author = {Tsoukalas, Kyriakos and Kubalak, Joseph and Bukvic, Ivica Ico}, title = {L2OrkMote: Reimagining a Low-Cost Wearable Controller for a Live Gesture-Centric Music Performance}, pages = {275--280}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302581}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0059.pdf} }
-
Jack Armitage and Andrew P. McPherson. 2018. Crafting Digital Musical Instruments: An Exploratory Workshop Study. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 281–286. http://doi.org/10.5281/zenodo.1302583
Download PDF DOIIn digital musical instrument design, different tools and methods offer a variety of approaches for constraining the exploration of musical gestures and sounds. Toolkits made of modular components usefully constrain exploration towards simple, quick and functional combinations, and methods such as sketching and model-making alternatively allow imagination and narrative to guide exploration. In this work we sought to investigate a context where these approaches to exploration were combined. We designed a craft workshop for 20 musical instrument designers, where groups were given the same partly-finished instrument to craft for one hour with raw materials, and though the task was open-ended, they were prompted to focus on subtle details that might distinguish their instruments. Despite the prompt, the groups diverged dramatically in intent and style, and generated gestural language rapidly and flexibly. By the end, each group had developed a distinctive approach to constraint, exploratory style, collaboration and interpretation of the instrument and workshop materials. We reflect on this outcome to discuss advantages and disadvantages to integrating digital musical instrument design tools and methods, and how to further investigate and extend this approach.
@inproceedings{Armitage2018, author = {Armitage, Jack and McPherson, Andrew P.}, title = {Crafting Digital Musical Instruments: An Exploratory Workshop Study}, pages = {281--286}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302583}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0060.pdf} }
-
Ammar Kalo and Georg Essl. 2018. Individual Fabrication of Cymbals using Incremental Robotic Sheet Forming. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 287–292. http://doi.org/10.5281/zenodo.1302585
Download PDF DOIIncremental robotic sheet forming is used to fabricate a novel cymbal shape based on models of geometric chaos for stadium shaped boundaries. This provides a proof-of-concept that this robotic fabrication technique might be a candidate method for creating novel metallic idiophones that are based on sheet deformations. Given that the technique does not require molding, it is well suited for both rapid and iterative prototyping and the fabrication of individual pieces. With advances in miniaturization, this approach may also be suitable for personal fabrication. In this paper we discuss this technique as well as aspects of the geometry of stadium cymbals and their impact on the resulting instrument.
@inproceedings{Kalo2018, author = {Kalo, Ammar and Essl, Georg}, title = {Individual Fabrication of Cymbals using Incremental Robotic Sheet Forming}, pages = {287--292}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302585}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0061.pdf} }
-
John McDowell. 2018. Haptic-Listening and the Classical Guitar. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 293–298. http://doi.org/10.5281/zenodo.1302587
Download PDF DOIThis paper reports the development of a ‘haptic-listening’ system which presents the listener with a representation of the vibrotactile feedback perceived by a classical guitarist during performance, through the use of haptic feedback technology. The paper describes the design of the haptic-listening system, which comprises two prototypes: the “DIY Haptic Guitar” and a more robust haptic-listening trial prototype using a Reckhorn BS-200 shaker. Through two experiments, the perceptual significance and overall musical contribution of the addition of haptic feedback in a listening context were evaluated. Subjects preferred listening to the classical guitar presentation with haptic feedback added, and the haptic feedback contributed to listeners’ engagement with a performance. The results of the experiments and their implications are discussed in this paper.
@inproceedings{McDowell2018, author = {McDowell, John}, title = {Haptic-Listening and the Classical Guitar}, pages = {293--298}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302587}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0062.pdf} }
-
Jacob Harrison, Robert H Jack, Fabio Morreale, and Andrew P. McPherson. 2018. When is a Guitar not a Guitar? Cultural Form, Input Modality and Expertise. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 299–304. http://doi.org/10.5281/zenodo.1302589
Download PDF DOIThe design of traditional musical instruments is a process of incremental refinement over many centuries of innovation. Conversely, digital musical instruments (DMIs), being unconstrained by requirements of efficient acoustic sound production and ergonomics, can take on forms which are more abstract in their relation to the mechanism of control and sound production. In this paper we consider the case of designing DMIs for use in existing musical cultures, and pose questions around the social and technical acceptability of certain design choices relating to global physical form and input modality (sensing strategy and the input gestures that it affords). We designed four guitar-derivative DMIs intended to be suitable for performing a strummed harmonic accompaniment to a folk tune. Each instrument possessed varying degrees of ‘guitar-likeness’, based either on the form and aesthetics of the guitar or on the specific mode of interaction. We conducted a study where both non-musicians and guitarists played two versions of the instruments and completed musical tasks with each instrument. The results of this study highlight the complex interaction between global form and input modality when designing for existing musical cultures.
@inproceedings{Harrison2018, author = {Harrison, Jacob and Jack, Robert H and Morreale, Fabio and McPherson, Andrew P.}, title = {When is a Guitar not a Guitar? Cultural Form, Input Modality and Expertise}, pages = {299--304}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302589}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0063.pdf} }
-
Jeppe Larsen, Hendrik Knoche, and Dan Overholt. 2018. A Longitudinal Field Trial with a Hemiplegic Guitarist Using The Actuated Guitar. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 305–310. http://doi.org/10.5281/zenodo.1302591
Download PDF DOICommon emotional effects following a stroke include depression, apathy and lack of motivation. We conducted a longitudinal case study to investigate whether enabling a post-stroke former guitarist to re-learn to play guitar would help increase motivation for self-rehabilitation and quality of life after suffering a stroke. The intervention lasted three weeks, during which the participant had at his free disposal a fully functional electric guitar fitted with a strumming device controlled by a foot pedal. The device replaced right-hand strumming of the strings, and the study showed that the participant, who was highly motivated, played 20 sessions despite system latency and reduced musical expression. He incorporated his own literature and equipment into his playing routine and improved greatly as the study progressed. He was able to play alone and keep a steady rhythm in time with backing tracks that went as fast as 120 bpm. During the study he was able to lower his error rate to 33%, while his average flutter also decreased.
@inproceedings{Larsen2018, author = {Larsen, Jeppe and Knoche, Hendrik and Overholt, Dan}, title = {A Longitudinal Field Trial with a Hemiplegic Guitarist Using The Actuated Guitar}, pages = {305--310}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302591}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0064.pdf} }
-
Paul Stapleton, Maarten van Walstijn, and Sandor Mehes. 2018. Co-Tuning Virtual-Acoustic Performance Ecosystems: observations on the development of skill and style in the study of musician-instrument relationships. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 311–314. http://doi.org/10.5281/zenodo.1302593
Download PDF DOIIn this paper we report preliminary observations from an ongoing study into how musicians explore and adapt to the parameter space of a virtual-acoustic string bridge plate instrument. These observations inform (and are informed by) a wider approach to understanding the development of skill and style in interactions between musicians and musical instruments. We discuss a performance-driven ecosystemic approach to studying musical relationships, drawing on arguments from the literature which emphasise the need to go beyond simplistic notions of control and usability when assessing exploratory and performatory musical interactions. Lastly, we focus on processes of perceptual learning and co-tuning between musician and instrument, and how these activities may contribute to the emergence of personal style as a hallmark of skilful music-making.
@inproceedings{Stapleton2018, author = {Stapleton, Paul and van Walstijn, Maarten and Mehes, Sandor}, title = {Co-Tuning Virtual-Acoustic Performance Ecosystems: observations on the development of skill and style in the study of musician-instrument relationships}, pages = {311--314}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302593}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0065.pdf} }
-
Sands A. Fish II and Nicole L’Huillier. 2018. Telemetron: A Musical Instrument for Performance in Zero Gravity. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 315–317. http://doi.org/10.5281/zenodo.1302595
Download PDF DOIThe environment of zero gravity affords a unique medium for new modalities of musical performance, both in the design of instruments, and human interactions with said instruments. To explore this medium, we have created and flown Telemetron, the first musical instrument specifically designed for and tested in the zero gravity environment. The resultant instrument (leveraging gyroscopes and wireless telemetry transmission) and recorded performance represent an initial exploration of compositions that are unique to the physics and dynamics of outer space. We describe the motivations for this instrument, and the unique constraints involved in designing for this environment. This initial design suggests possibilities for further experiments in musical instrument design for outer space.
@inproceedings{Fish2018, author = {Fish II, Sands A. and L'Huillier, Nicole}, title = {Telemetron: A Musical Instrument for Performance in Zero Gravity}, pages = {315--317}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302595}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0066.pdf} }
-
Dan Wilcox. 2018. robotcowboy: 10 Years of Wearable Computer Rock. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 318–323. http://doi.org/10.5281/zenodo.1302597
Download PDF DOIThis paper covers the technical and aesthetic development of robotcowboy, the author’s ongoing human-computer wearable performance project. Conceived as an idiosyncratic manifesto on the embodiment of computational sound, the original robotcowboy system was built in 2006-2007 using a belt-mounted industrial wearable computer running GNU/Linux and Pure Data, external USB audio/MIDI interfaces, HID gamepads, and guitar. Influenced by roadworthy analog gear, chief system requirements were mobility, plug-and-play, reliability, and low cost. From 2007 to 2011, this first iteration "Cabled Madness" melded rock music with realtime algorithmic composition and revolved around cyborg human/system tension, aspects of improvisation, audience feedback, and an inherent capability of failure. The second iteration "Onward to Mars" explored storytelling from 2012-2015 through the one-way journey of the first human on Mars with the computing system adapted into a self-contained spacesuit backpack. Now 10 years on, a new robotcowboy 2.0 system powers a third iteration with only an iPhone and PdParty, the author’s open-source iOS application which runs Pure Data patches and provides full duplex stereo audio, MIDI, HID game controller support, and Open Sound Control communication. The future is bright, do you have room to wiggle?
@inproceedings{Wilcox2018, author = {Wilcox, Dan}, title = {robotcowboy: 10 Years of Wearable Computer Rock}, pages = {318--323}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302597}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0067.pdf} }
-
Victor Evaristo Gonzalez Sanchez, Charles Patrick Martin, Agata Zelechowska, Kari Anne Vadstensvik Bjerkestrand, Victoria Johnson, and Alexander Refsum Jensenius. 2018. Bela-Based Augmented Acoustic Guitars for Sonic Microinteraction. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 324–327. http://doi.org/10.5281/zenodo.1302599
Download PDF DOIThis article describes the design and construction of a collection of digitally-controlled augmented acoustic guitars, and the use of these guitars in the installation Sverm-Resonans. The installation was built around the idea of exploring ‘inverse’ sonic microinteraction, that is, controlling sounds by the micromotion observed when attempting to stand still. It consisted of six acoustic guitars, each equipped with a Bela embedded computer for sound processing (in Pure Data), an infrared distance sensor to detect the presence of users, and an actuator attached to the guitar body to produce sound. With an attached battery pack, the result was a set of completely autonomous instruments that were easy to hang in a gallery space. The installation encouraged explorations on the boundary between the tactile and the kinesthetic, the body and the mind, and between motion and sound. The use of guitars, albeit with an untraditional ‘performance’ technique, made the experience both familiar and unfamiliar at the same time. Many users reported heightened sensations of stillness, sound, and vibration, and that the ‘inverse’ control of the instrument was both challenging and pleasant.
@inproceedings{Gonzalez2018, author = {Gonzalez Sanchez, Victor Evaristo and Martin, Charles Patrick and Zelechowska, Agata and Bjerkestrand, Kari Anne Vadstensvik and Johnson, Victoria and Jensenius, Alexander Refsum}, title = {Bela-Based Augmented Acoustic Guitars for Sonic Microinteraction}, pages = {324--327}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302599}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0068.pdf} }
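Sverm-Resonans, described above, controls sound through the micromotion observed when visitors attempt to stand still. The installation itself runs Pure Data on Bela, but the standalone Python sketch below illustrates one possible ‘inverse’ mapping in which less measured motion yields more sound, estimating micromotion as the standard deviation of recent distance-sensor readings; the window length, scaling and simulated readings are assumptions, not the installation's actual patch.

```python
# Illustrative 'inverse' sonic microinteraction mapping: the less motion is
# measured over a short window of distance-sensor readings, the higher the
# output amplitude. Window size, scaling and the simulated input are assumptions.
from collections import deque
from statistics import pstdev

WINDOW = 50          # assumed number of recent sensor readings to consider
MOTION_SCALE = 20.0  # assumed scaling from standard deviation to [0, 1]

readings = deque(maxlen=WINDOW)

def amplitude_from_reading(distance):
    """Push one IR distance reading and return an amplitude in [0, 1]."""
    readings.append(distance)
    if len(readings) < 2:
        return 0.0
    motion = pstdev(readings)                 # larger value = more micromotion
    return max(0.0, 1.0 - MOTION_SCALE * motion)

if __name__ == "__main__":
    import random
    # A visitor fidgeting at first, then standing (almost) still.
    for step in range(200):
        jitter = 0.05 if step < 100 else 0.002
        amp = amplitude_from_reading(0.8 + random.uniform(-jitter, jitter))
    print("final amplitude (near-still visitor):", round(amp, 2))
```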
-
Giacomo Lepri and Andrew P. McPherson. 2018. Mirroring the past, from typewriting to interactive art: an approach to the re-design of a vintage technology. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 328–333. http://doi.org/10.5281/zenodo.1302601
Download PDF DOIObsolete and old technologies are often used in interactive art and music performance. DIY practices such as hardware hacking and circuit bending provide effective methods for the integration of old machines into new artistic inventions. This paper presents the Cembalo Scrivano .1, an interactive audio-visual installation based on an augmented typewriter. Borrowing concepts from media archaeology studies, tangible interaction design and digital lutherie, we discuss how investigations into the historical and cultural evolution of a technology can suggest directions for the regeneration of obsolete objects. The design approach outlined focuses on the remediation of an old device and aims to evoke cultural and physical properties associated with the source object.
@inproceedings{Lepri2018, author = {Lepri, Giacomo and McPherson, Andrew P.}, title = {Mirroring the past, from typewriting to interactive art: an approach to the re-design of a vintage technology}, pages = {328--333}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302601}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0069.pdf} }
-
Seth Dominicus Thorn. 2018. Alto.Glove: New Techniques for Augmented Violin. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 334–339. http://doi.org/10.5281/zenodo.1302603
Download PDF DOIThis paper describes a performer-centric approach to the design, sensor selection, data interpretation, and mapping schema of a sensor-embedded glove called the “alto.glove” that the author uses to extend his performance abilities on violin. The alto.glove is a response to the limitations—both creative and technical—perceived in feature extraction processes that rely on classification. The hardware answers one problem: how to extend violin playing in a minimal yet powerful way; the software answers another: how to create a rich, evolving response that enhances expression in improvisation. The author approaches this problem from the various roles of violinist, hardware technician, programmer, sound designer, composer, and improviser. Importantly, the alto.glove is designed to be cost-effective and relatively easy to build.
@inproceedings{Thorn2018, author = {Thorn, Seth Dominicus}, title = {Alto.Glove: New Techniques for Augmented Violin}, pages = {334--339}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302603}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0070.pdf} }
-
Thanos Polymeneas Liontiris. 2018. Low Frequency Feedback Drones: A non-invasive augmentation of the double bass. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 340–341. http://doi.org/10.5281/zenodo.1302605
Download PDF DOIThis paper illustrates the development of a Feedback Resonating Double Bass. The instrument is essentially the augmentation of an acoustic double bass using positive feedback. The research aimed to answer the question of how to convert a double bass into a feedback-resonating instrument without resorting to an invasive method. The conversion process illustrated here is applicable and adaptable to double basses of any size, without making irreversible alterations to the instruments.
@inproceedings{Liontiris2018, author = {Liontiris, Thanos Polymeneas}, title = {Low Frequency Feedback Drones: A non-invasive augmentation of the double bass}, pages = {340--341}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302605}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0071.pdf} }
-
Daniel Formo. 2018. The Orchestra of Speech: a speech-based instrument system. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 342–343. http://doi.org/10.5281/zenodo.1302607
Download PDF DOIThe Orchestra of Speech is a performance concept resulting from a recent artistic research project exploring the relationship between music and speech, in particular improvised music and everyday conversation. As a tool in this exploration, a digital musical instrument system has been developed for “orchestrating” musical features of speech into music, in real time. Through artistic practice, this system has evolved into a personal electroacoustic performance concept.
@inproceedings{Formo2018, author = {Formo, Daniel}, title = {The Orchestra of Speech: a speech-based instrument system}, pages = {342--343}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302607}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0072.pdf} }
-
Anna Weisling, Anna Xambó, ireti olowe, and Mathieu Barthet. 2018. Surveying the Compositional and Performance Practices of Audiovisual Practitioners. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 344–345. http://doi.org/10.5281/zenodo.1302609
Download PDF DOIThis paper presents a brief overview of an online survey conducted with the objective of gaining insight into compositional and performance practices of contemporary audiovisual practitioners. The survey gathered information regarding how practitioners relate aural and visual media in their work, and how compositional and performance practices involving multiple modalities might differ from other practices. Discussed here are three themes: compositional approaches, transparency and audience knowledge, and error and risk, which emerged from participants’ responses. We believe these themes contribute to a discussion within the NIME community regarding unique challenges and objectives presented when working with multiple media.
@inproceedings{Weisling2018, author = {Weisling, Anna and Xambó, Anna and ireti olowe and Barthet, Mathieu}, title = {Surveying the Compositional and Performance Practices of Audiovisual Practitioners}, pages = {344--345}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302609}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0073.pdf} }
-
Anthony T. Marasco. 2018. Sound Opinions: Creating a Virtual Tool for Sound Art Installations through Sentiment Analysis of Critical Reviews. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 346–347. http://doi.org/10.5281/zenodo.1302611
Download PDF DOIThe author presents Sound Opinions, a custom software tool that uses sentiment analysis to create sound art installations and music compositions. The software runs inside the NodeRed.js programming environment. It scrapes text from web pages, pre-processes it, performs sentiment analysis via a remote API, and parses the resulting data for use in external digital audio programs. The sentiment analysis itself is handled by IBM’s Watson Tone Analyzer. The author has used this tool to create an interactive multimedia installation, titled Critique. Sources of criticism of a chosen musical work are analyzed, and the negative or positive statements about that composition warp and change it. This allows the audience to hear the work only through the lens of its critics, and not in the original form that its creator intended.
@inproceedings{Marasco2018, author = {Marasco, Anthony T.}, title = {Sound Opinions: Creating a Virtual Tool for Sound Art Installations through Sentiment Analysis of Critical Reviews}, pages = {346--347}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302611}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0074.pdf} }
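The scrape–analyze–map pipeline described above can be sketched as follows. The sentiment scoring here is a crude keyword-counting placeholder standing in for the remote call to IBM's Watson Tone Analyzer (whose API is not reproduced), and the audio parameter names are illustrative, not those of the Critique installation.

```python
def analyze_sentiment(text: str) -> float:
    """Placeholder for a remote sentiment/tone API call.
    Returns a score in [-1.0, 1.0]; the real system queries IBM's Watson Tone Analyzer."""
    negative = sum(text.lower().count(w) for w in ("dull", "tedious", "harsh"))
    positive = sum(text.lower().count(w) for w in ("brilliant", "moving", "elegant"))
    total = negative + positive
    return 0.0 if total == 0 else (positive - negative) / total

def review_to_audio_params(review: str) -> dict:
    """Map a review's sentiment onto warp parameters for an audio engine.
    Parameter names are illustrative, not those used in Critique."""
    score = analyze_sentiment(review)
    return {
        "playback_rate": 1.0 + 0.5 * score,   # positive reviews speed the work up slightly
        "distortion":    max(0.0, -score),    # negative reviews add distortion
    }

print(review_to_audio_params("A tedious, harsh recital of an otherwise elegant score."))
```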
-
Kosmas Kritsis, Aggelos Gkiokas, Carlos Árpád Acosta, et al. 2018. A web-based 3D environment for gestural interaction with virtual music instruments as a STEAM education tool. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 348–349. http://doi.org/10.5281/zenodo.1302613
Download PDF DOIWe present our work in progress on the development of a web-based system for music performance with virtual instruments in a virtual 3D environment, which provides three means of interaction (i.e., physical, gestural and mixed), using tracking data from a Leap Motion sensor. Moreover, our system is integrated as a creative tool within the context of a STEAM education platform that promotes science learning through musical activities. The presented system models string and percussion instruments, with realistic sonic feedback based on Modalys, a physical model-based sound synthesis engine. Our proposal meets the performance requirements of real-time interactive systems and is implemented strictly with web technologies.
@inproceedings{Kritsis2018, author = {Kritsis, Kosmas and Gkiokas, Aggelos and Acosta, Carlos Árpád and Lamerand, Quentin and Piéchaud, Robert and Kaliakatsos-Papakostas, Maximos and Katsouros, Vassilis}, title = {A web-based 3D environment for gestural interaction with virtual music instruments as a STEAM education tool}, pages = {348--349}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302613}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0075.pdf} }
-
Maria C. Mannone, Eri Kitamura, Jiawei Huang, Ryo Sugawara, and Yoshifumi Kitamura. 2018. CubeHarmonic: A New Interface from a Magnetic 3D Motion Tracking System to Music Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 350–351. http://doi.org/10.5281/zenodo.1302615
Download PDF DOIWe developed a new musical interface, CubeHarmonic, with the magnetic tracking system, IM3D, created at Tohoku University. The IM3D system precisely tracks positions of tiny, wireless, battery-less, and identifiable LC coils in real time. The CubeHarmonic is a musical application of the Rubik’s cube, with notes assigned to each piece. Scrambling the cube, we get different chords and chord sequences. Positions of the pieces that contain LC coils are detected through IM3D and transmitted to the computer, which plays the sounds. The central position of the cube is also computed from the LC coils located in the corners of the Rubik’s cube, and, depending on the computed central position, we can manipulate overall loudness and pitch changes, as in theremin playing. This new instrument, whose initial idea comes from the mathematical theory of music, can be used as a teaching tool both for math (group theory) and music (music theory, mathematical music theory), as well as a composition device, a new instrument for avant-garde performances, and a recreational tool.
@inproceedings{Mannone2018, author = {Mannone, Maria C. and Kitamura, Eri and Huang, Jiawei and Sugawara, Ryo and Kitamura, Yoshifumi}, title = {CubeHarmonic: A New Interface from a Magnetic 3D Motion Tracking System to Music Performance}, pages = {350--351}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302615}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0076.pdf} }
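The theremin-like global control mentioned above — deriving the cube's centre from the corner LC coils and mapping it to loudness and pitch — can be illustrated with a small sketch. The coordinate layout, scaling factors, and function names are assumptions for illustration only, not the authors' implementation.

```python
import math

def cube_centre(corner_positions):
    """Estimate the cube's centre as the mean of the LC-coil corner positions (metres)."""
    n = len(corner_positions)
    return tuple(sum(p[i] for p in corner_positions) / n for i in range(3))

def theremin_mapping(centre, origin=(0.0, 0.0, 0.0)):
    """Map the centre's height to loudness and its horizontal distance to a pitch offset.
    Scaling factors are illustrative only."""
    dx, dy, dz = (c - o for c, o in zip(centre, origin))
    loudness = max(0.0, min(1.0, dz / 0.5))     # 0..0.5 m above the origin -> 0..1
    pitch_offset = 12.0 * math.hypot(dx, dy)    # semitones per metre of horizontal travel
    return loudness, pitch_offset

corners = [(0.1, 0.1, 0.3), (0.2, 0.1, 0.3), (0.1, 0.2, 0.3), (0.2, 0.2, 0.4)]
print(theremin_mapping(cube_centre(corners)))
```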
-
Martin M Kristoffersen and Trond Engum. 2018. The Whammy Bar as a Digital Effect Controller. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 352–355. http://doi.org/10.5281/zenodo.1302617
Download PDF DOIIn this paper we present a novel digital effects controller for electric guitar based upon the whammy bar as a user interface. The goal of the project is to give guitarists a way to interact with dynamic effects control that feels familiar to their instrument and playing style. A 3D-printed prototype has been made. It replaces the whammy bar of a traditional Fender vibrato system with a sensor-equipped whammy bar. The functionality of the present prototype includes separate readings of force applied towards and from the guitar body, as well as an end knob for variable control. Further functionality includes a hinged system allowing for digital effect control either with or without the mechanical manipulation of string tension. By incorporating digital sensors into the idiomatic whammy bar interface, we aim to give guitarists a high level of control intimacy with the device, and thus closer interaction with effects.
@inproceedings{Kristoffersen2018, author = {Kristoffersen, Martin M and Engum, Trond}, title = {The Whammy Bar as a Digital Effect Controller}, pages = {352--355}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302617}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0077.pdf} }
-
Robert Pond, Alexander Klassen, and Kirk McNally. 2018. Timbre Tuning: Variation in Cello Sprectrum Across Pitches and Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 356–359. http://doi.org/10.5281/zenodo.1302619
Download PDF DOIThe process of learning to play a string instrument is a notoriously difficult task. A new student to the instrument is faced with mastering multiple, interconnected physical movements in order to become a skillful player. In their development, one measure of a player’s quality is their tone, which is the result of the combination of the physical characteristics of the instrument and their technique in playing it. This paper describes preliminary research into creating an intuitive, real-time device for evaluating the quality of tone generation on the cello: a “timbre-tuner” to help cellists evaluate their tone quality. Data for the study was collected from six post-secondary music students, consisting of recordings of scales covering the entire range of the cello. Comprehensive spectral audio analysis was performed on the data set in order to evaluate features suitable for describing tone quality. An inverse relationship was found between the harmonic centroid and pitch played, which became more pronounced when restricted to the A string. In addition, a model for predicting the harmonic centroid at different pitches on the A string was created. Results from informal listening tests support the use of the harmonic centroid as an appropriate measure for tone quality.
@inproceedings{Pond2018, author = {Pond, Robert and Klassen, Alexander and McNally, Kirk}, title = {Timbre Tuning: Variation in Cello Sprectrum Across Pitches and Instruments}, pages = {356--359}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302619}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0078.pdf} }
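The harmonic centroid used as a tone-quality measure in the paper above is, in essence, the amplitude-weighted mean frequency of a note's harmonic partials. A minimal sketch, assuming the partial amplitudes have already been extracted from a recording:

```python
def harmonic_centroid(f0, partial_amps):
    """Amplitude-weighted mean frequency of the harmonic partials of a tone with
    fundamental f0 (Hz). partial_amps[k] is the linear amplitude of partial k+1."""
    freqs = [f0 * (k + 1) for k in range(len(partial_amps))]
    total = sum(partial_amps)
    if total == 0:
        return 0.0
    return sum(f * a for f, a in zip(freqs, partial_amps)) / total

# A cello-like A3 (220 Hz) with energy concentrated in the lower partials:
print(harmonic_centroid(220.0, [1.0, 0.6, 0.4, 0.25, 0.15, 0.1]))
```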
-
Matthew Mosher, Danielle Wood, and Tony Obr. 2018. Tributaries of Our Lost Palpability. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 360–361. http://doi.org/10.5281/zenodo.1302621
Download PDF DOIThis demonstration paper describes the concepts behind Tributaries of Our Distant Palpability, an interactive sonified sculpture. It takes the form of a swelling sea anemone, while the sounds it produces recall the quagmire of a digital ocean. The sculpture responds to changing light conditions with a dynamic mix of audio tracks, mapping volume to light level. People passing by the sculpture, or directly engaging it by creating light and shadows with their smart phone flashlights, will trigger the audio. At the same time, it automatically adapts to gradual environmental light changes, such as the rise and fall of the sun. The piece was inspired by the searching gestures people make, and the emotions they have, while idly browsing content on their smart devices. It was created through an interdisciplinary collaboration between a musician, an interaction designer, and a ceramicist.
@inproceedings{Mosher2018, author = {Mosher, Matthew and Wood, Danielle and Obr, Tony}, title = {Tributaries of Our Lost Palpability}, pages = {360--361}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302621}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0079.pdf} }
-
Andrew Piepenbrink. 2018. Embedded Digital Shakers: Handheld Physical Modeling Synthesizers. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 362–363. http://doi.org/10.5281/zenodo.1302623
Download PDF DOIWe present a flexible, compact, and affordable embedded physical modeling synthesizer which functions as a digital shaker. The instrument is self-contained, battery-powered, wireless, and synthesizes various shakers, rattles, and other handheld shaken percussion. Beyond modeling existing shakers, the instrument affords new sonic interactions including hand mutes on its loudspeakers and self-sustaining feedback. Both low-cost and high-performance versions of the instrument are discussed.
@inproceedings{Piepenbrink2018, author = {Piepenbrink, Andrew}, title = {Embedded Digital Shakers: Handheld Physical Modeling Synthesizers}, pages = {362--363}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302623}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0080.pdf} }
-
Anna Xambó, Gerard Roma, Alexander Lerch, Mathieu Barthet, and György Fazekas. 2018. Live Repurposing of Sounds: MIR Explorations with Personal and Crowdsourced Databases. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 364–369. http://doi.org/10.5281/zenodo.1302625
Download PDF DOIThe recent increase in the accessibility and size of personal and crowdsourced digital sound collections has brought about a valuable resource for music creation. Finding and retrieving relevant sounds in performance leads to challenges that can be approached using music information retrieval (MIR). In this paper, we explore the use of MIR to retrieve and repurpose sounds in musical live coding. We present a live coding system built on SuperCollider enabling the use of audio content from online Creative Commons (CC) sound databases such as Freesound or personal sound databases. The novelty of our approach lies in exploiting high-level MIR methods (e.g., query by pitch or rhythmic cues) using live coding techniques applied to sounds. We demonstrate its potential through reflection on an illustrative case study and feedback from four expert users. The users tried the system with either a personal database or a crowdsourced database and reported its potential in facilitating the tailoring of the tool to their own creative workflows.
@inproceedings{Xambó-b2018, author = {Xambó, Anna and Roma, Gerard and Lerch, Alexander and Barthet, Mathieu and Fazekas, György}, title = {Live Repurposing of Sounds: MIR Explorations with Personal and Crowdsourced Databases}, pages = {364--369}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302625}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0081.pdf} }
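The query-by-feature retrieval described above (for example, query by pitch) amounts to a nearest-match lookup over precomputed sound descriptors. The sketch below uses a toy in-memory database with made-up file names; the actual system is built in SuperCollider on top of Freesound and personal collections.

```python
# Toy "sound database": each entry has a file name and a precomputed pitch descriptor (Hz).
SOUND_DB = [
    {"file": "bell_a4.wav",  "pitch": 440.0},
    {"file": "pluck_d3.wav", "pitch": 146.8},
    {"file": "pad_g4.wav",   "pitch": 392.0},
    {"file": "kick.wav",     "pitch": 55.0},
]

def query_by_pitch(target_hz, db=SOUND_DB, n=2):
    """Return the n database entries whose analysed pitch is closest to target_hz."""
    return sorted(db, key=lambda entry: abs(entry["pitch"] - target_hz))[:n]

# A live-coding call might then load and loop the retrieved files:
for entry in query_by_pitch(400.0):
    print("play", entry["file"])
```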
-
Avneesh Sarwate, Ryan Taylor Rose, Jason Freeman, and Jack Armitage. 2018. Performance Systems for Live Coders and Non Coders. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 370–373. http://doi.org/10.5281/zenodo.1302627
Download PDF DOIThis paper explores the question of how live coding musicians can perform with musicians who are not using code (such as acoustic instrumentalists or those using graphical and tangible electronic interfaces). It investigates performance systems that facilitate improvisation where the musicians can interact not just by listening to each other and changing their own output, but also by manipulating the data stream of the other performer(s). In the course of performance-led research, four prototypes were built and analyzed using concepts from the NIME and creative collaboration literature. Based on this analysis, it was found that the systems should 1) provide a commonly modifiable visual representation of musical data for both coder and non-coder, and 2) provide some independent means of sound production for each user, giving the non-coder the ability to slow down and make non-realtime decisions for greater performance flexibility.
@inproceedings{Sarwate2018, author = {Sarwate, Avneesh and Rose, Ryan Taylor and Freeman, Jason and Armitage, Jack}, title = {Performance Systems for Live Coders and Non Coders}, pages = {370--373}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302627}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0082.pdf} }
-
Jeff Snyder, Michael R Mulshine, and Rajeev S Erramilli. 2018. The Feedback Trombone: Controlling Feedback in Brass Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 374–379. http://doi.org/10.5281/zenodo.1302629
Download PDF DOIThis paper presents research on control of electronic signal feedback in brass instruments through the development of a new augmented musical instrument, the Feedback Trombone. The Feedback Trombone (FBT) extends the traditional acoustic trombone interface with a speaker, microphone, and custom analog and digital hardware.
@inproceedings{Snyder2018, author = {Snyder, Jeff and Mulshine, Michael R and Erramilli, Rajeev S}, title = {The Feedback Trombone: Controlling Feedback in Brass Instruments}, pages = {374--379}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302629}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0083.pdf} }
-
Eric Sheffield. 2018. Mechanoise: Mechatronic Sound and Interaction in Embedded Acoustic Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 380–381. http://doi.org/10.5281/zenodo.1302631
Download PDF DOIThe use of mechatronic components (e.g. DC motors and solenoids) as both electronic sound source and locus of interaction is explored in a form of embedded acoustic instruments called mechanoise instruments. Micro-controllers and embedded computing devices provide a platform for live control of motor speeds and additional sound processing by a human performer. Digital fabrication and use of salvaged and found materials are emphasized.
@inproceedings{Sheffield2018, author = {Sheffield, Eric}, title = {Mechanoise: Mechatronic Sound and Interaction in Embedded Acoustic Instruments}, pages = {380--381}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302631}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0084.pdf} }
-
Jon Pigrem and Andrew P. McPherson. 2018. Do We Speak Sensor? Cultural Constraints of Embodied Interaction . Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 382–385. http://doi.org/10.5281/zenodo.1302633
Download PDF DOIThis paper explores the role of materiality in Digital Musical Instruments and questions the influence of tacit understandings of sensor technology. Existing research investigates the use of gesture, physical interaction and subsequent parameter mapping. We suggest that a tacit knowledge of the ‘sensor layer’ brings with it definitions, understandings and expectations that forge and guide our approach to interaction. We argue that the influence of technology starts before a sound is made, and comes from not only intuition of material properties, but also received notions of what technology can and should do. On encountering an instrument with obvious sensors, a potential performer will attempt to predict what the sensors do and what the designer intends for them to do, becoming influenced by a machine-centred understanding of interaction and not a solely material-centred one. The paper presents an observational study of interaction using non-functional prototype instruments designed to explore fundamental ideas and understandings of instrumental interaction in the digital realm. We will show that this understanding influences both gestural language and ability to characterise an expected sonic/musical response.
@inproceedings{Pigrem2018, author = {Pigrem, Jon and McPherson, Andrew P.}, title = {Do We Speak Sensor? Cultural Constraints of Embodied Interaction }, pages = {382--385}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302633}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0085.pdf} }
-
Spencer Salazar and Jack Armitage. 2018. Re-engaging the Body and Gesture in Musical Live Coding. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 386–389. http://doi.org/10.5281/zenodo.1302635
Download PDF DOIAt first glance, the practice of musical live coding seems distanced from the gestures and sense of embodiment common in musical performance, electronic or otherwise. This workshop seeks to explore the extent to which this assertion is justified, to re-examine notions of gesture and embodiment in the context of musical live coding performance, to consider historical approaches to synthesizing musical programming and gesture, and to look to the future for new ways of doing so. The workshop will consist firstly of a critical discussion of these issues and related literature. This will be followed by applied practical experiments involving ideas generated during these discussions. The workshop will conclude with a recapitulation and examination of these experiments in the context of previous research and proposed future directions.
@inproceedings{Salazar-b2018, author = {Salazar, Spencer and Armitage, Jack}, title = {Re-engaging the Body and Gesture in Musical Live Coding}, pages = {386--389}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302635}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0086.pdf} }
-
Edgar Berdahl, Eric Sheffield, Andrew Pfalz, and Anthony T. Marasco. 2018. Widening the Razor-Thin Edge of Chaos Into a Musical Highway: Connecting Chaotic Maps to Digital Waveguides. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 390–393. http://doi.org/10.5281/zenodo.1302637
Download PDF DOIFor the purpose of creating new musical instruments, chaotic dynamical systems can be simulated in real time to synthesize complex sounds. This work investigates a series of discrete-time chaotic maps, which have the potential to generate intriguing sounds when they are adjusted to be on the edge of chaos. With these chaotic maps as studied historically, the edge of chaos tends to be razor-thin, which can make it difficult to employ them for making new musical instruments. The authors therefore suggest connecting chaotic maps with digital waveguides, which (1) make it easier to synthesize harmonic tones and (2) make it harder to fall off of the edge of chaos while playing a musical instrument. The authors argue therefore that this technique widens the razor-thin edge of chaos into a musical highway.
@inproceedings{Berdahl2018, author = {Berdahl, Edgar and Sheffield, Eric and Pfalz, Andrew and Marasco, Anthony T.}, title = {Widening the Razor-Thin Edge of Chaos Into a Musical Highway: Connecting Chaotic Maps to Digital Waveguides}, pages = {390--393}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302637}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0087.pdf} }
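As an illustration of the coupling described above, the sketch below feeds a discrete-time chaotic map (the logistic map, one of the classic examples) into a digital waveguide, here reduced to a feedback delay line tuned to a target fundamental. The parameter values and the coupling scheme are illustrative assumptions rather than the configurations studied in the paper.

```python
SAMPLE_RATE = 44100
FREQ = 220.0                      # target fundamental of the waveguide (Hz)
DELAY = int(SAMPLE_RATE / FREQ)   # waveguide loop length in samples
FEEDBACK = 0.98                   # waveguide loop gain, just below 1.0
R = 3.7                           # logistic-map parameter, in the chaotic regime

def render(num_samples=SAMPLE_RATE):
    delay_line = [0.0] * DELAY
    write_idx = 0
    x = 0.5                       # logistic-map state
    out = []
    for _ in range(num_samples):
        x = R * x * (1.0 - x)                                # chaotic map step
        excitation = 0.1 * (x - 0.5)                         # centre and scale the map output
        y = delay_line[write_idx]                            # oldest sample in the loop
        delay_line[write_idx] = excitation + FEEDBACK * y    # feed it back into the loop
        write_idx = (write_idx + 1) % DELAY
        out.append(y)
    return out

samples = render()                # one second of the chaotically excited waveguide
print(min(samples), max(samples))
```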
-
Jeff Snyder, Aatish Bhatia, and Michael R Mulshine. 2018. Neuron-modeled Audio Synthesis: Nonlinear Sound and Control. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 394–397. http://doi.org/10.5281/zenodo.1302639
Download PDF DOIThis paper describes a project to create a software instrument using a biological model of neuron behavior for audio synthesis. The translation of the model to a usable audio synthesis process is described, and a piece for laptop orchestra created using the instrument is discussed.
@inproceedings{Snyder-b2018, author = {Snyder, Jeff and Bhatia, Aatish and Mulshine, Michael R}, title = {Neuron-modeled Audio Synthesis: Nonlinear Sound and Control}, pages = {394--397}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302639}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0088.pdf} }
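The abstract above does not name the specific neuron model used, so the sketch below uses the FitzHugh-Nagumo equations as a stand-in to show the general idea: a biological neuron model integrated at audio rate, with its input current acting as a synthesis control. The step size and output scaling are assumptions.

```python
def fitzhugh_nagumo(num_samples, drive=0.5):
    """Integrate the FitzHugh-Nagumo neuron model with forward Euler and return the
    membrane variable v as an audio-rate signal (values roughly within [-1, 1])."""
    a, b, tau = 0.7, 0.8, 12.5   # standard FitzHugh-Nagumo constants
    dt = 0.05                    # integration step per audio sample; larger values raise the pitch
    v, w = -1.0, 1.0
    out = []
    for _ in range(num_samples):
        dv = v - (v ** 3) / 3.0 - w + drive
        dw = (v + a - b * w) / tau
        v += dt * dv
        w += dt * dw
        out.append(0.5 * v)      # crude scaling of the spiking waveform
    return out

signal = fitzhugh_nagumo(44100)  # one second at 44.1 kHz; `drive` shapes rate and timbre
print(len(signal), round(max(signal), 2))
```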
-
Rodrigo F. Cádiz and Marie Gonzalez-Inostroza. 2018. Fuzzy Logic Control Toolkit 2.0: composing and synthesis by fuzzyfication. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 398–402. http://doi.org/10.5281/zenodo.1302641
Download PDF DOIIn computer or electroacoustic music, it is often the case that the compositional act and the parametric control of the underlying synthesis algorithms or hardware are not separable from each other. In these situations, composition and control of the synthesis parameters are not easy to distinguish. One possible solution is by means of fuzzy logic. This approach provides a simple, intuitive but powerful control of the compositional process usually in interesting non-linear ways. Compositional control in this context is achieved by the fuzzification of the relevant internal synthesis parameters and the parallel computation of common sense fuzzy rules of inference specified by the composer. This approach has been implemented computationally as a software package entitled FLCTK (Fuzzy Logic Control Tool Kit) in the form of external objects for the widely used real-time compositional environments Max/MSP and Pd. In this article, we present an updated version of this tool. As a demonstration of the wide range of situations in which this approach could be used, we provide two examples of parametric fuzzy control: first, the fuzzy control of a water tank simulation and second a particle-based sound synthesis technique by a fuzzy approach.
@inproceedings{Cádiz2018, author = {Cádiz, Rodrigo F. and Gonzalez-Inostroza, Marie}, title = {Fuzzy Logic Control Toolkit 2.0: composing and synthesis by fuzzyfication}, pages = {398--402}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302641}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0089.pdf} }
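Fuzzification of a compositional control, as in the approach described above, can be illustrated with triangular membership functions and two common-sense rules. This is a generic Python sketch, not the FLCTK externals for Max/MSP or Pd; the rule set and consequent values are made up for illustration.

```python
def tri(x, left, centre, right):
    """Triangular fuzzy membership function."""
    if x <= left or x >= right:
        return 0.0
    if x <= centre:
        return (x - left) / (centre - left)
    return (right - x) / (right - centre)

def fuzzy_brightness(density):
    """Map a compositional 'event density' (0..1) to a filter cutoff via two fuzzy rules:
    IF density is low THEN cutoff is dark; IF density is high THEN cutoff is bright."""
    low = tri(density, -0.5, 0.0, 0.6)     # membership in "low density"
    high = tri(density, 0.4, 1.0, 1.5)     # membership in "high density"
    dark_hz, bright_hz = 400.0, 8000.0     # rule consequents (illustrative values)
    total = low + high
    if total == 0:
        return dark_hz
    return (low * dark_hz + high * bright_hz) / total   # weighted-average defuzzification

for d in (0.1, 0.5, 0.9):
    print(d, round(fuzzy_brightness(d)))
```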
-
Sang-won Leigh and Pattie Maes. 2018. Guitar Machine: Robotic Fretting Augmentation for Hybrid Human-Machine Guitar Play. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 403–408. http://doi.org/10.5281/zenodo.1302643
Download PDF DOIPlaying musical instruments involves producing gradually more challenging body movements and transitions, where the kinematic constraints of the body play a crucial role in structuring the resulting music. We seek to make a bridge between currently accessible motor patterns, and musical possibilities beyond those — afforded through the use of a robotic augmentation. Guitar Machine is a robotic device that presses on guitar strings and assists a musician by fretting alongside her on the same guitar. This paper discusses the design of the system, strategies for using the system to create novel musical patterns, and a user study that looks at the effects of the temporary acquisition of enhanced physical ability. Our results indicate that the proposed human-robot interaction would equip users to explore new musical avenues on the guitar, as well as provide an enhanced understanding of the task at hand on the basis of the robotically acquired ability.
@inproceedings{Leigh2018, author = {Leigh, Sang-won and Maes, Pattie}, title = {Guitar Machine: Robotic Fretting Augmentation for Hybrid Human-Machine Guitar Play}, pages = {403--408}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302643}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0090.pdf} }
-
Scott Barton, Karl Sundberg, Andrew Walter, Linda Sara Baker, Tanuj Sane, and Alexander O’Brien. 2018. Robotic Percussive Aerophone. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 409–412. http://doi.org/10.5281/zenodo.1302645
Download PDF DOIPercussive aerophones are configurable, modular, scalable, and can be constructed from commonly found materials. They can produce rich timbres, a wide range of pitches and complex polyphony. Their use by humans, perhaps most famously by the Blue Man Group, inspired us to build an electromechanically-actuated version of the instrument in order to explore expressive possibilities enabled by machines. The Music, Perception, and Robotics Lab at WPI has iteratively designed, built and composed for a robotic percussive aerophone since 2015, which has both taught lessons in actuation and revealed promising musical capabilities of the instrument.
@inproceedings{Barton2018, author = {Barton, Scott and Sundberg, Karl and Walter, Andrew and Baker, Linda Sara and Sane, Tanuj and O'Brien, Alexander}, title = {Robotic Percussive Aerophone}, pages = {409--412}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302645}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0091.pdf} }
-
Nathan Daniel Villicaña-Shaw, Spencer Salazar, and Ajay Kapur. 2018. Mechatronic Performance in Computer Music Compositions. Proceedings of the International Conference on New Interfaces for Musical Expression, Virginia Tech, pp. 413–418. http://doi.org/10.5281/zenodo.1302647
Download PDF DOIThis paper introduces seven mechatronic compositions performed over three years at the xxxxx (xxxx). Each composition is discussed in regard to how it addresses the performative elements of mechatronic music concerts. The compositions are grouped into four classifications according to the types of interactions between human and robotic performers they afford: Non-Interactive, Mechatronic Instruments Played by Humans, Mechatronic Instruments Playing with Humans, and Social Interaction as Performance. The orchestration of each composition is described along with an overview of the piece’s compositional philosophy. Observations on how specific extra-musical compositional techniques can be incorporated into future mechatronic performances by human-robot performance ensembles are addressed.
@inproceedings{Villicaña-Shaw2018, author = {Villicaña-Shaw, Nathan Daniel and Salazar, Spencer and Kapur, Ajay}, title = {Mechatronic Performance in Computer Music Compositions}, pages = {413--418}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, editor = {Luke Dahl, Douglas Bowman, Thomas Martin}, year = {2018}, month = jun, publisher = {Virginia Tech}, address = {Blacksburg, Virginia, USA}, isbn = {978-1-949373-99-8}, issn = {2220-4806}, doi = {10.5281/zenodo.1302647}, url = {http://www.nime.org/proceedings/2018/nime2018_paper0092.pdf} }
2017
-
Robert Van Rooyen, Andrew Schloss, and George Tzanetakis. 2017. Voice Coil Actuators for Percussion Robotics. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 1–6. http://doi.org/10.5281/zenodo.1176149
Download PDF DOIPercussion robots have successfully used a variety of actuator technologies to activate a wide array of striking mechanisms. Popular types of actuators include solenoids and DC motors. However, the use of industrial strength voice coil actuators provides a compelling alternative given a desirable set of heterogeneous features and requirements that span traditional devices. Their characteristics such as high acceleration and accurate positioning enable the exploration of rendering highly accurate and expressive percussion performances.
@inproceedings{rrooyen2017, author = {Rooyen, Robert Van and Schloss, Andrew and Tzanetakis, George}, title = {Voice Coil Actuators for Percussion Robotics}, pages = {1--6}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176149}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0001.pdf} }
-
Maurin Donneaud, Cedric Honnet, and Paul Strohmeier. 2017. Designing a Multi-Touch eTextile for Music Performances. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 7–12. http://doi.org/10.5281/zenodo.1176151
Download PDF DOIWe present a textile pressure sensor matrix, designed to be used as a musical multi-touch input device. An evaluation of our design demonstrated that the sensor’s pressure response profile fits a logarithmic curve (R = 0.98). The input delay of the sensor is 2.1 ms. The average absolute error in one direction of the sensor was measured to be less than 10% of one of the matrix’s strips (M = 1.8 mm, SD = 1.37 mm). We intend this technology to be easy to use and implement by experts and novices alike: we ensure ease of use by providing a host application that tracks touch points and passes these on as OSC or MIDI messages. We make our design easy to implement by providing open source software and hardware and by choosing evaluation methods that use accessible tools, making quantitative comparisons between different branches of the design easy. We chose to work with textile to take advantage of its tactile properties and its malleability of form and to pay tribute to textile’s rich cultural heritage.
@inproceedings{mdonneaud2017, author = {Donneaud, Maurin and Honnet, Cedric and Strohmeier, Paul}, title = {Designing a Multi-Touch eTextile for Music Performances}, pages = {7--12}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176151}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0002.pdf} }
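The host application mentioned above tracks touch points and forwards them as OSC or MIDI; its core step can be approximated by thresholding a scanned pressure frame and computing a pressure-weighted centroid. The sketch below handles a single touch on a made-up 4x4 frame and is a deliberately simplified stand-in for the real multi-touch tracker.

```python
def touch_centroid(matrix, threshold=0.1):
    """Return the pressure-weighted (row, column) centroid of a single touch,
    or None if no cell exceeds the threshold. matrix[r][c] is a pressure in 0..1."""
    total = r_acc = c_acc = 0.0
    for r, row in enumerate(matrix):
        for c, p in enumerate(row):
            if p > threshold:
                total += p
                r_acc += p * r
                c_acc += p * c
    if total == 0:
        return None
    return (r_acc / total, c_acc / total)

frame = [
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.3, 0.6, 0.0],
    [0.0, 0.2, 0.4, 0.0],
    [0.0, 0.0, 0.0, 0.0],
]
print(touch_centroid(frame))   # the position that would be sent on as an OSC/MIDI message
```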
-
Peter Williams and Daniel Overholt. 2017. bEADS Extended Actuated Digital Shaker. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 13–18. http://doi.org/10.5281/zenodo.1176153
Download PDF DOIWhile there are a great variety of digital musical interfaces available to the working musician, few offer the level of immediate, nuanced and instinctive control that one finds in an acoustic shaker. bEADS is a prototype of a digital musical instrument that utilises the gestural vocabulary associated with shaken idiophones and expands on the techniques and sonic possibilities associated with them. By using a bespoke physically informed synthesis engine, in conjunction with accelerometer and pressure sensor data, an actuated handheld instrument has been built that allows for quickly switching between widely differing percussive sound textures. The prototype has been evaluated by three experts with different levels of involvement in professional music making.
@inproceedings{pwilliams2017, author = {Williams, Peter and Overholt, Daniel}, title = {bEADS Extended Actuated Digital Shaker}, pages = {13--18}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176153}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0003.pdf} }
-
Romain Michon, Julius O. Smith, Matthew Wright, Chris Chafe, John Granzow, and Ge Wang. 2017. Passively Augmenting Mobile Devices Towards Hybrid Musical Instrument Design. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 19–24. http://doi.org/10.5281/zenodo.1176155
Download PDF DOIMobile devices constitute a generic platform to make standalone musical instruments for live performance. However, they were not designed for such use and have multiple limitations when compared to other types of instruments. We introduce a framework to quickly design and prototype passive mobile device augmentations that leverage existing features of the device for the end goal of mobile musical instruments. An extended list of examples is provided, along with the results of a workshop organized partly to evaluate our framework.
@inproceedings{rmichon2017, author = {Michon, Romain and Smith, Julius O. and Wright, Matthew and Chafe, Chris and Granzow, John and Wang, Ge}, title = {Passively Augmenting Mobile Devices Towards Hybrid Musical Instrument Design}, pages = {19--24}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176155}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0004.pdf} }
-
Alice Eldridge and Chris Kiefer. 2017. Self-resonating Feedback Cello: Interfacing gestural and generative processes in improvised performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 25–29. http://doi.org/10.5281/zenodo.1176157
Download PDF DOIThe Feedback Cello is a new electroacoustic actuated instrument in which feedback can be induced independently on each string. Built from retro-fitted acoustic cellos, the signals from electromagnetic pickups sitting under each string are passed to a speaker built into the back of the instrument and to transducers clamped in varying places across the instrument body. Placement of acoustic and mechanical actuators on the resonant body of the cello mean that this simple analogue feedback system is capable of a wide range of complex self-resonating behaviours. This paper describes the motivations for building these instruments as both a physical extension to live coding practice and an electroacoustic augmentation of cello. The design and physical construction is outlined, and modes of performance described with reference to the first six months of performances and installations. Future developments and planned investigations are outlined.
@inproceedings{aeldridge2017, author = {Eldridge, Alice and Kiefer, Chris}, title = {Self-resonating Feedback Cello: Interfacing gestural and generative processes in improvised performance}, pages = {25--29}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176157}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0005.pdf} }
-
Don Derek Haddad, Xiao Xiao, Tod Machover, and Joseph Paradiso. 2017. Fragile Instruments: Constructing Destructible Musical Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 30–33. http://doi.org/10.5281/zenodo.1176159
Download PDF DOIWe introduce a family of fragile electronic musical instruments designed to be "played" through the act of destruction. Each Fragile Instrument consists of an analog synthesizing circuit with embedded sensors that detect the destruction of an outer shell, which is destroyed and replaced for each performance. Destruction plays an integral role in both the spectacle and the generated sounds. This paper presents several variations of Fragile Instruments we have created, discussing their circuit design as well as choices of material for the outer shell and tools of destruction. We conclude by considering other approaches to create intentionally destructible electronic musical instruments.
@inproceedings{dhaddad2017, author = {Haddad, Don Derek and Xiao, Xiao and Machover, Tod and Paradiso, Joseph}, title = {Fragile Instruments: Constructing Destructible Musical Interfaces}, pages = {30--33}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176159}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0006.pdf} }
-
Florian Heller, Irene Meying Cheung Ruiz, and Jan Borchers. 2017. An Augmented Flute for Beginners. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 34–37. http://doi.org/10.5281/zenodo.1176161
Download PDF DOILearning to play the transverse flute is not an easy task, at least not for everyone. Since the flute does not have a reed to resonate, the player must provide a steady, focused stream of air that will cause the flute to resonate and thereby produce sound. In order to achieve this, the player has to be aware of the embouchure position to generate an adequate air jet. For a beginner, this can be a difficult task due to the lack of visual cues or indicators of the air jet and lips position. This paper attempts to address this problem by presenting an augmented flute that can make the gestures related to the embouchure visible and measurable. The augmented flute shows information about the area covered by the lower lip, estimates the lip hole shape based on noise analysis, and it shows graphically the air jet direction. Additionally, the augmented flute provides directional and continuous feedback in real time, based on data acquired by experienced flutists.
@inproceedings{fheller2017, author = {Heller, Florian and Ruiz, Irene Meying Cheung and Borchers, Jan}, title = {An Augmented Flute for Beginners}, pages = {34--37}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176161}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0007.pdf} }
-
Gabriella Isaac, Lauren Hayes, and Todd Ingalls. 2017. Cross-Modal Terrains: Navigating Sonic Space through Haptic Feedback. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 38–41. http://doi.org/10.5281/zenodo.1176163
Download PDF DOIThis paper explores the idea of using virtual textural terrains as a means of generating haptic profiles for force-feedback controllers. This approach breaks from the paradigm established within audio-haptic research over the last few decades where physical models within virtual environments are designed to transduce gesture into sonic output. We outline a method for generating multimodal terrains using basis functions, which are rendered into monochromatic visual representations for inspection. This visual terrain is traversed using a haptic controller, the NovInt Falcon, which in turn receives force information based on the grayscale value of its location in this virtual space. As the image is traversed by a performer the levels of resistance vary, and the image is realized as a physical terrain. We discuss the potential of this approach to afford engaging musical experiences for both the performer and the audience as iterated through numerous performances.
@inproceedings{gisaac2017, author = {Isaac, Gabriella and Hayes, Lauren and Ingalls, Todd}, title = {Cross-Modal Terrains: Navigating Sonic Space through Haptic Feedback}, pages = {38--41}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176163}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0008.pdf} }
-
Jiayue Wu, Mark Rau, Yun Zhang, Yijun Zhou, and Matt Wright. 2017. Towards Robust Tracking with an Unreliable Motion Sensor Using Machine Learning. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 42–47. http://doi.org/10.5281/zenodo.1176165
Download PDF DOIThis paper presents solutions to improve reliability and to work around challenges of using a Leap Motion sensor as a gestural control and input device in digital music instrument (DMI) design. We implement supervised learning algorithms (k-nearest neighbors, support vector machine, binary decision tree, and artificial neural network) to estimate hand motion data, which is not typically captured by the sensor. Two problems are addressed: 1) the sensor cannot detect overlapping hands, and 2) the sensor has a limited detection range. Training examples included 7 kinds of overlapping hand gestures as well as hand trajectories where a hand goes out of the sensor’s range. The overlapping gestures were treated as a classification problem and the best-performing model was k-nearest neighbors with 62% accuracy. The out-of-range problem was treated first as a clustering problem to group the training examples into a small number of trajectory types, then as a classification problem to predict trajectory type based on the hand’s motion before going out of range. The best-performing model was k-nearest neighbors with an accuracy of 30%. The prediction models were implemented in an ongoing multimedia electroacoustic vocal performance and an educational project named Embodied Sonic Meditation (ESM).
@inproceedings{jwu2017, author = {Wu, Jiayue and Rau, Mark and Zhang, Yun and Zhou, Yijun and Wright, Matt}, title = {Towards Robust Tracking with an Unreliable Motion Sensor Using Machine Learning}, pages = {42--47}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176165}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0009.pdf} }
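The overlapping-gesture task above is a standard supervised classification setup; a minimal sketch with scikit-learn's k-nearest-neighbours classifier is shown below. The random feature vectors stand in for the hand-motion features the authors extract from Leap Motion frames, so the printed accuracy is meaningless beyond demonstrating the workflow.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Stand-in data: 7 overlapping-hand gesture classes, 40 examples each,
# 12-dimensional feature vectors (the paper extracts features from Leap Motion frames).
num_classes, per_class, dims = 7, 40, 12
X = np.vstack([rng.normal(loc=c, scale=1.5, size=(per_class, dims)) for c in range(num_classes)])
y = np.repeat(np.arange(num_classes), per_class)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```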
-
Álvaro Barbosa and Thomas Tsang. 2017. Sounding Architecture: Inter-Disciplinary Studio at HKU. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 48–51. http://doi.org/10.5281/zenodo.1176167
-
Martín Matus Lerner. 2017. Osiris: a liquid based digital musical instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 52–55. http://doi.org/10.5281/zenodo.1176169
Download PDF DOIThis paper describes the creation of a new digital musical instrument: Osiris. This device is based on the circulation of liquids for the generation of musical notes. Besides the system of liquid distribution, a module that generates MIDI events was designed and built based on the Arduino platform; this module is employed together with a Proteus 2000 sound generator. Both the programming of the control module and the choice of sound-generating module were guided by the main objective that the instrument should provide an ample variety of sonic and musical possibilities, controllable in real time.
@inproceedings{mlerner2017, author = {Matus Lerner, Martín}, title = {Osiris: a liquid based digital musical instrument}, pages = {52--55}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176169}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0011.pdf} }
-
Spyridon Stasis, Jason Hockman, and Ryan Stables. 2017. Navigating Descriptive Sub-Representations of Musical Timbre. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 56–61. http://doi.org/10.5281/zenodo.1176171
Download PDF DOIMusicians, audio engineers and producers often make use of common timbral adjectives to describe musical signals and transformations. However, the subjective nature of these terms, and the variability with respect to musical context often leads to inconsistencies in their definition. In this study, a model is proposed for controlling an equaliser by navigating clusters of datapoints, which represent grouped parameter settings with the same timbral description. The interface allows users to identify the nearest cluster to their current parameter setting and recommends changes based on its relationship to a cluster centroid. To do this, we apply dimensionality reduction to a dataset of equaliser curves described as warm and bright using a stacked autoencoder, then group the entries using an agglomerative clustering algorithm with a coherence based distance criterion. To test the efficacy of the system, we implement listening tests and show that subjects are able to match datapoints to their respective sub-representations with 93.75% mean accuracy.
@inproceedings{sstasis2017, author = {Stasis, Spyridon and Hockman, Jason and Stables, Ryan}, title = {Navigating Descriptive Sub-Representations of Musical Timbre}, pages = {56--61}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176171}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0012.pdf} }
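The clustering and recommendation steps described above can be sketched with scikit-learn's agglomerative clustering over already-reduced equaliser settings. The two-dimensional toy points stand in for the stacked autoencoder's latent space, and the default ward linkage replaces the paper's coherence-based distance criterion.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(1)

# Toy 2-D latent points standing in for autoencoder-reduced "warm"/"bright" EQ curves.
warm = rng.normal(loc=(-2.0, 0.0), scale=0.4, size=(30, 2))
bright = rng.normal(loc=(2.0, 1.0), scale=0.4, size=(30, 2))
points = np.vstack([warm, bright])

clusters = AgglomerativeClustering(n_clusters=2).fit_predict(points)
centroids = np.array([points[clusters == k].mean(axis=0) for k in range(2)])

def recommend(current_setting):
    """Nudge the user's current (reduced) EQ setting towards the nearest cluster centroid."""
    nearest = centroids[np.argmin(np.linalg.norm(centroids - current_setting, axis=1))]
    return current_setting + 0.25 * (nearest - current_setting)   # step size is illustrative

print(recommend(np.array([0.5, 0.2])))
```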
-
Peter Williams and Daniel Overholt. 2017. Pitch Fork: A Novel tactile Digital Musical Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 62–64. http://doi.org/10.5281/zenodo.1176173
Download PDF DOIPitch Fork is a prototype of an alternate, actuated digital musical instrument (DMI). It uses 5 infra-red and 4 piezoelectric sensors to control an additive synthesis engine. Iron bars are used as the physical point of contact in interaction, with the aim of using material computation to control aspects of the digitally produced sound; this material was also chosen for its effect on the player experience. Sensor readings are relayed to a MacBook via an Arduino Mega. Mapping and audio output are carried out in Pure Data Extended.
@inproceedings{pwilliams:2017a, author = {Williams, Peter and Overholt, Daniel}, title = {Pitch Fork: A Novel tactile Digital Musical Instrument}, pages = {62--64}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176173}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0013.pdf} }
-
Cagri Erdem, Anil Camci, and Angus Forbes. 2017. Biostomp: A Biocontrol System for Embodied Performance Using Mechanomyography. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 65–70. http://doi.org/10.5281/zenodo.1176175
Download PDF DOIBiostomp is a new musical interface that relies on the use of mechanomyography (MMG) as a biocontrol mechanism in live performance situations. Designed in the form of a stomp box, Biostomp translates a performer’s muscle movements into control signals. A custom MMG sensor captures the acoustic output of muscle tissue oscillations resulting from contractions. An analog circuit amplifies and filters these signals, and a micro-controller translates the processed signals into pulses. These pulses are used to activate a stepper motor mechanism, which is designed to be mounted on parameter knobs on effects pedals. The primary goal in designing Biostomp is to offer a robust, inexpensive, and easy-to-operate platform for integrating biological signals into both traditional and contemporary music performance practices without requiring intermediary computer software. In this paper, we discuss the design, implementation and evaluation of Biostomp. Following an overview of related work on the use of biological signals in artistic projects, we offer a discussion of our approach to conceptualizing and fabricating a biocontrol mechanism as a new musical interface. We then discuss the results of an evaluation study conducted with 21 professional musicians. A video abstract for Biostomp can be viewed at vimeo.com/biostomp/video.
@inproceedings{cerdem2017, author = {Erdem, Cagri and Camci, Anil and Forbes, Angus}, title = {Biostomp: A Biocontrol System for Embodied Performance Using Mechanomyography}, pages = {65--70}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176175}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0014.pdf} }
-
Esben W. Knudsen, Malte L. Hølledig, Mads Juel Nielsen, et al. 2017. Audio-Visual Feedback for Self-monitoring Posture in Ballet Training. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 71–76. http://doi.org/10.5281/zenodo.1181422
Download PDF DOIAn application for ballet training is presented that monitors the posture position (straightness of the spine and rotation of the pelvis) deviation from the ideal position in real-time. The human skeletal data is acquired through a Microsoft Kinect v2. The movement of the student is mirrored through an abstract skeletal figure and instructions are provided through a virtual teacher. Posture deviation is measured in the following way: Torso misalignment is calculated by comparing hip center joint, shoulder center joint and neck joint position with an ideal posture position retrieved in an initial calibration procedure. Pelvis deviation is expressed as the xz-rotation of the hip-center joint. The posture deviation is sonified via a varying cut-off frequency of a high-pass filter applied to floating water sound. The posture deviation is visualized via a curve and a rigged skeleton in which the misaligned torso parts are color-coded. In an experiment with 9-12 year-old dance students from a ballet school, comparing the audio-visual feedback modality with no feedback leads to an increase in posture accuracy (p < 0.001, Cohen’s d = 1.047). Reaction card feedback and expert interviews indicate that the feedback is considered fun and useful for training independently from the teacher.
@inproceedings{eknudsen2017, author = {Knudsen, Esben W. and Hølledig, Malte L. and Nielsen, Mads Juel and Petersen, Rikke K. and Bach-Nielsen, Sebastian and Zanescu, Bogdan-Constantin and Overholt, Daniel and Purwins, Hendrik and Helweg, Kim}, title = {Audio-Visual Feedback for Self-monitoring Posture in Ballet Training}, pages = {71--76}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1181422}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0015.pdf} }
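As a rough illustration of the sonification mapping in the entry above (posture deviation driving the cut-off frequency of a high-pass filter applied to a water sound), the following sketch maps a deviation value to a cut-off frequency; the deviation range, frequency bounds and exponential curve are assumptions, not values from the paper.

```python
import math

def deviation_to_cutoff(deviation, max_deviation=30.0,
                        f_min=100.0, f_max=4000.0):
    """Map posture deviation (e.g. degrees of torso misalignment) to a
    high-pass cut-off frequency in Hz. Larger deviation -> higher cut-off,
    so the water sound thins out as posture worsens.
    All constants are illustrative assumptions."""
    x = min(max(deviation / max_deviation, 0.0), 1.0)      # normalise to [0, 1]
    return f_min * math.exp(x * math.log(f_max / f_min))   # exponential sweep

# A well-aligned posture keeps most of the water sound;
# a strongly misaligned one filters out most of its energy.
print(deviation_to_cutoff(2.0))    # close to 100 Hz
print(deviation_to_cutoff(25.0))   # several kHz
```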
-
Rikard Lindell and Tomas Kumlin. 2017. Augmented Embodied Performance – Extended Artistic Room, Enacted Teacher, and Humanisation of Technology. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 77–82. http://doi.org/10.5281/zenodo.1176177
Download PDF DOIWe explore the phenomenology of embodiment based on research through design and reflection on the design of artefacts for augmenting embodied performance. We present three designs for highly trained musicians; the designs rely on the musicians’ mastery acquired from years of practice. Through the knowledge of the living body their instruments – saxophone, cello, and flute – are extensions of themselves; thus, we can explore technology with rich nuances and precision in corporeal schemas. With the help of Merleau-Ponty’s phenomenology of embodiment we present three hypotheses for augmented embodied performance: the extended artistic room, the interactively enacted teacher, and the humanisation of technology.
@inproceedings{rlindell2017, author = {Lindell, Rikard and Kumlin, Tomas}, title = {Augmented Embodied Performance – Extended Artistic Room, Enacted Teacher, and Humanisation of Technology}, pages = {77--82}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176177}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0016.pdf} }
-
Jens Vetter and Sarah Leimcke. 2017. Homo Restis — Constructive Control Through Modular String Topologies. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 83–86. http://doi.org/10.5281/zenodo.1176179
Download PDF DOIIn this paper we discuss a modular instrument system for musical expression consisting of multiple devices using string detection, sound synthesis and wireless communication. The design of the system allows for different physical arrangements, which we define as topologies. In particular, we explain our concept and requirements, describe the system architecture, including custom magnetic string sensors and the network communication, and discuss its use in the performance HOMO RESTIS.
@inproceedings{jvetter2017, author = {Vetter, Jens and Leimcke, Sarah}, title = {Homo Restis --- Constructive Control Through Modular String Topologies}, pages = {83--86}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176179}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0017.pdf} }
- Jeronimo Barbosa, Marcelo M. Wanderley, and Stéphane Huot. 2017. Exploring Playfulness in NIME Design: The Case of Live Looping Tools. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 87–92. http://doi.org/10.5281/zenodo.1176181
-
Daniel Manesh and Eran Egozy. 2017. Exquisite Score: A System for Collaborative Musical Composition. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 93–98. http://doi.org/10.5281/zenodo.1176183
Download PDF DOIExquisite Score is a web application which allows users to collaborate on short musical compositions using the paradigm of the parlor game exquisite corpse. Through a MIDI-sequencer interface, composers each contribute a section to a piece of music, only seeing the very end of the preceding section. Exquisite Score is both a fun, accessible compositional game and a system for encouraging highly novel musical compositions. Exquisite Score was tested by a number of students and musicians; several short pieces were created, and a brief discussion and analysis of these pieces is included.
@inproceedings{dmanesh2017, author = {Manesh, Daniel and Egozy, Eran}, title = {Exquisite Score: A System for Collaborative Musical Composition}, pages = {93--98}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176183}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0019.pdf} }
-
Stahl Stenslie, Kjell Tore Innervik, Ivar Frounberg, and Thom Johansen. 2017. Somatic Sound in Performative Contexts. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 99–103. http://doi.org/10.5281/zenodo.1176185
Download PDF DOIThis paper presents a new spherical-shaped capacitive sensor device for creating interactive compositions and embodied user experiences inside of a periphonic, 3D sound space. The Somatic Sound project is here presented as a) a technologically innovative musical instrument, and b) an experiential art installation. One of the main research foci is to explore embodied experiences through moving, interactive and somatic sound. The term somatic is here understood and used as relating to the body in a physical, holistic and immersive manner.
@inproceedings{sstenslie2017, author = {Stenslie, Stahl and Innervik, Kjell Tore and Frounberg, Ivar and Johansen, Thom}, title = {Somatic Sound in Performative Contexts}, pages = {99--103}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176185}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0020.pdf} }
-
Jeppe Veirum Larsen and Hendrik Knoche. 2017. States and Sound: Modelling Interactions with Musical User Interfaces. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 104–109. http://doi.org/10.5281/zenodo.1176187
Download PDF DOIMusical instruments and musical user interfaces provide rich input and feedback through mostly tangible interactions, resulting in complex behavior. However, publications on novel interfaces often lack the required detail, due to the complexity of the interfaces, a focus on specific parts of them, and the absence of a template or structure for describing these interactions. Drawing on and synthesizing models from interaction design and music making, we propose a way of modeling musical interfaces by providing a scheme and visual language to describe, design, analyze, and compare interfaces for music making. To illustrate its capabilities we apply the proposed model to a range of assistive musical instruments, which often draw on multi-modal in- and output, resulting in complex designs and descriptions thereof.
@inproceedings{jlarsen2017, author = {Larsen, Jeppe Veirum and Knoche, Hendrik}, title = {States and Sound: Modelling Interactions with Musical User Interfaces}, pages = {104--109}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176187}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0021.pdf} }
-
Guangyu Xia and Roger Dannenberg. 2017. Improvised Duet Interaction: Learning Improvisation Techniques for Automatic Accompaniment. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 110–114. http://doi.org/10.5281/zenodo.1176189
Download PDF DOIThe interaction between music improvisers is studied in the context of piano duets, where one improviser embellishes a melody, and the other plays a chordal accompaniment with great freedom. We created an automated accompaniment player that learns to play from example performances. Accompaniments are constructed by selecting and concatenating one-measure score units from actual performances. An important innovation is the ability to learn how the improvised accompaniment should respond to variations in the melody performance, using tempo and embellishment complexity as features, resulting in a truly interactive performance within a conventional musical framework. We conducted both objective and subjective evaluations, showing that the learned improviser performs more interactive, musical, and human-like accompaniment compared with the less responsive, rule-based baseline algorithm.
@inproceedings{gxia2017, author = {Xia, Guangyu and Dannenberg, Roger}, title = {Improvised Duet Interaction: Learning Improvisation Techniques for Automatic Accompaniment}, pages = {110--114}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176189}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0022.pdf} }
-
Palle Dahlstedt. 2017. Physical Interactions with Digital Strings — A hybrid approach to a digital keyboard instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 115–120. http://doi.org/10.5281/zenodo.1176191
Download PDF DOIA new hybrid approach to digital keyboard playing is presented, where the actual acoustic sounds from a digital keyboard are captured with contact microphones and applied as excitation signals to a digital model of a prepared piano, i.e., an extended wave-guide model of strings with the possibility of stopping and muting the strings at arbitrary positions. The parameters of the string model are controlled through TouchKeys multitouch sensors on each key, combined with MIDI data and acoustic signals from the digital keyboard frame, using a novel mapping. The instrument is evaluated from a performing musician’s perspective, and emerging playing techniques are discussed. Since the instrument is a hybrid acoustic-digital system with several feedback paths between the domains, it provides for expressive and dynamic playing, with qualities approaching that of an acoustic instrument, yet with new kinds of control. The contributions are two-fold. First, the use of acoustic sounds from a physical keyboard for excitations and resonances results in a novel hybrid keyboard instrument in itself. Second, the digital model of "inside piano" playing, using multitouch keyboard data, allows for performance techniques going far beyond conventional keyboard playing.
@inproceedings{pdahlstedt2017, author = {Dahlstedt, Palle}, title = {Physical Interactions with Digital Strings --- A hybrid approach to a digital keyboard instrument}, pages = {115--120}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176191}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0023.pdf} }
-
Charles Roberts and Graham Wakefield. 2017. gibberwocky: New Live-Coding Instruments for Musical Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 121–126. http://doi.org/10.5281/zenodo.1176193
Download PDF DOIWe describe two new versions of the gibberwocky live-coding system. One integrates with Max/MSP while the second targets MIDI output and runs entirely in the browser. We discuss commonalities and differences between the three environments, and how they fit into the live-coding landscape. We also describe lessons learned while performing with the original version of gibberwocky, both from our perspective and the perspective of others. These lessons informed the addition of animated sparkline visualizations depicting modulations to performers and audiences in all three versions.
@inproceedings{croberts2017, author = {Roberts, Charles and Wakefield, Graham}, title = {gibberwocky: New Live-Coding Instruments for Musical Performance}, pages = {121--126}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176193}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0024.pdf} }
-
Sasha Leitman. 2017. Current Iteration of a Course on Physical Interaction Design for Music. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 127–132. http://doi.org/10.5281/zenodo.1176197
Download PDF DOIThis paper is an overview of the current state of a course on New Interfaces for Musical Expression taught at Stanford University. It gives an overview of the various technologies and methodologies used to teach the interdisciplinary work of new musical interfaces.
@inproceedings{sleitman2017, author = {Leitman, Sasha}, title = {Current Iteration of a Course on Physical Interaction Design for Music}, pages = {127--132}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176197}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0025.pdf} }
-
Alex Hofmann, Bernt Isak Waerstad, Saranya Balasubramanian, and Kristoffer E. Koch. 2017. From interface design to the software instrument — Mapping as an approach to FX-instrument building. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 133–138. http://doi.org/10.5281/zenodo.1176199
Download PDF DOITo build electronic musical instruments, a mapping between the real-time audio processing software and the physical controllers is required. Different strategies of mapping were developed and discussed within the NIME community to improve musical expression in live performances. This paper discusses an interface focussed instrument design approach, which starts from the physical controller and its functionality. From this definition, the required, underlying software instrument is derived. A proof of concept is implemented as a framework for effect instruments. This framework comprises a library of real-time effects for Csound, a proposition for a JSON-based mapping format, and a mapping-to-instrument converter that outputs Csound instrument files. Advantages, limitations and possible future extensions are discussed.
@inproceedings{ahofmann2017, author = {Hofmann, Alex and Waerstad, Bernt Isak and Balasubramanian, Saranya and Koch, Kristoffer E.}, title = {From interface design to the software instrument --- Mapping as an approach to FX-instrument building}, pages = {133--138}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176199}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0026.pdf} }
-
Marco Marchini, François Pachet, and Benoît Carré. 2017. Rethinking Reflexive Looper for structured pop music. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 139–144. http://doi.org/10.5281/zenodo.1176201
Download PDF DOIReflexive Looper (RL) is a live-looping system which allows a solo musician to take on the different roles of a whole rhythm section by looping rhythms, chord progressions, basslines and more. The loop pedal is still the most widely used device for this type of performance, accounting for many of the cover-song performances on YouTube, but it does not suit all kinds of songs. Unlike a common loop pedal, each layer of sound in RL is produced by an intelligent looping agent which adapts to the musician and respects given constraints, using constrained optimization. In its original form, RL worked well for jazz guitar improvisation but was unsuited to structured music such as pop songs. In order to bring the system onto the pop stage, we revisited the system interaction, following the guidelines of professional users who tested it extensively. We describe the revisited system, which can accommodate both pop and jazz. Thanks to intuitive pedal interaction and structure constraints, the new RL deals with pop music and has already been used in several live concert situations.
@inproceedings{mmarchini2017, author = {Marchini, Marco and Pachet, François and Carré, Benoît}, title = {Rethinking Reflexive Looper for structured pop music}, pages = {139--144}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176201}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0027.pdf} }
-
Victor Zappi, Andrew Allen, and Sidney Fels. 2017. Shader-based Physical Modelling for the Design of Massive Digital Musical Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 145–150. http://doi.org/10.5281/zenodo.1176203
Download PDF DOIPhysical modelling is a sophisticated synthesis technique, often used in the design of Digital Musical Instruments (DMIs). Some of the most precise physical simulations of sound propagation are based on Finite-Difference Time-Domain (FDTD) methods, which are stable and highly parameterizable, but characterized by an extremely heavy computational load. This drawback hinders the spread of FDTD from the domain of off-line simulations to that of DMIs. With this paper, we present a novel approach to real-time physical modelling synthesis, which implements a 2D FDTD solver as a shader program running on the GPU directly within the graphics pipeline. The result is a system capable of running fully interactive, massively sized simulation domains, suitable for novel DMI design. With the help of diagrams and code snippets, we provide the implementation details of a first interactive application, a drum head simulator whose source code is available online. Finally, we evaluate the proposed system, showing how this new approach can work as a valuable alternative to classic GPGPU modelling.
@inproceedings{vzappi2017, author = {Zappi, Victor and Allen, Andrew and Fels, Sidney}, title = {Shader-based Physical Modelling for the Design of Massive Digital Musical Instruments}, pages = {145--150}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176203}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0028.pdf} }
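The FDTD scheme at the heart of the Zappi et al. entry above can be sketched in a few lines. The paper's contribution is running this update as a fragment shader on the GPU; the CPU-side NumPy version below is illustrative only, and its grid size, Courant-like coefficient and damping are assumed values.

```python
import numpy as np

# 2D finite-difference time-domain (FDTD) update for the wave equation on a
# square membrane with fixed (zero) boundaries. Illustrative sketch only.
N = 128        # grid points per side (assumed)
c2 = 0.49      # (c * dt / dx)^2, kept <= 0.5 for stability (assumed)
damp = 0.999   # simple uniform loss factor (assumed)

u_prev = np.zeros((N, N))
u_curr = np.zeros((N, N))
u_curr[N // 2, N // 2] = 1.0   # initial excitation ("strike") at the centre

def step(u_prev, u_curr):
    """Advance the simulation by one time step and return (u_curr, u_next)."""
    lap = (u_curr[:-2, 1:-1] + u_curr[2:, 1:-1] +
           u_curr[1:-1, :-2] + u_curr[1:-1, 2:] -
           4.0 * u_curr[1:-1, 1:-1])          # discrete Laplacian (interior)
    u_next = np.zeros_like(u_curr)
    u_next[1:-1, 1:-1] = (2.0 * u_curr[1:-1, 1:-1]
                          - u_prev[1:-1, 1:-1]
                          + c2 * lap) * damp
    return u_curr, u_next

# "Listen" at one grid point; in a DMI this value would feed the audio output.
output = []
for _ in range(2000):
    u_prev, u_curr = step(u_prev, u_curr)
    output.append(u_curr[N // 4, N // 4])
```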
-
David Johnson and George Tzanetakis. 2017. VRMin: Using Mixed Reality to Augment the Theremin for Musical Tutoring. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 151–156. http://doi.org/10.5281/zenodo.1176205
Download PDF DOIThe recent resurgence of Virtual Reality (VR) technologies provides new platforms for augmenting traditional music instruments. Instrument augmentation is a common approach for designing new interfaces for musical expression, as shown through hyperinstrument research. New visual affordances present in VR give designers new methods for augmenting instruments to extend not only their expressivity, but also their capabilities for computer-assisted tutoring. In this work, we present VRMin, a mobile Mixed Reality (MR) application for augmenting a physical theremin with an immersive virtual environment (VE) for real-time computer-assisted tutoring. We augment a physical theremin with 3D visual cues to indicate correct hand positioning for performing given notes and volumes. The physical theremin acts as a domain specific controller for the resulting MR environment. The initial effectiveness of this approach is measured by analyzing a performer’s hand position while training with and without the VRMin. We also evaluate the usability of the interface using heuristic evaluation based on a newly proposed set of guidelines designed for VR musical environments.
@inproceedings{djohnson2017, author = {Johnson, David and Tzanetakis, George}, title = {VRMin: Using Mixed Reality to Augment the Theremin for Musical Tutoring}, pages = {151--156}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176205}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0029.pdf} }
-
Richard Graham, Brian Bridges, Christopher Manzione, and William Brent. 2017. Exploring Pitch and Timbre through 3D Spaces: Embodied Models in Virtual Reality as a Basis for Performance Systems Design. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 157–162. http://doi.org/10.5281/zenodo.1176207
Download PDF DOIOur paper builds on an ongoing collaboration between theorists and practitioners within the computer music community, with a specific focus on three-dimensional environments as an incubator for performance systems design. In particular, we are concerned with how to provide accessible means of controlling spatialization and timbral shaping in an integrated manner through the collection of performance data from various modalities from an electric guitar with a multichannel audio output. This paper will focus specifically on the combination of pitch data treated within tonal models and the detection of physical performance gestures using timbral feature extraction algorithms. We discuss how these tracked gestures may be connected to concepts and dynamic relationships from embodied cognition, expanding on performative models for pitch and timbre spaces. Finally, we explore how these ideas support connections between sonic, formal and performative dimensions. This includes instrumental technique detection scenes and mapping strategies aimed at bridging music performance gestures across physical and conceptual planes.
@inproceedings{rgraham2017, author = {Graham, Richard and Bridges, Brian and Manzione, Christopher and Brent, William}, title = {Exploring Pitch and Timbre through 3D Spaces: Embodied Models in Virtual Reality as a Basis for Performance Systems Design}, pages = {157--162}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176207}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0030.pdf} }
-
Michael Gurevich. 2017. Discovering Instruments in Scores: A Repertoire-Driven Approach to Designing New Interfaces for Musical Expression. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 163–168. http://doi.org/10.5281/zenodo.1176209
Download PDF DOIThis paper situates NIME practice with respect to models of social interaction among human agents. It argues that the conventional model of composer-performer-listener, and the underlying mid-20th century metaphor of music as communication upon which it relies, cannot reflect the richness of interaction and possibility afforded by interactive digital technologies. Building on Paul Lansky’s vision of an expanded and dynamic social network, an alternative, ecological view of music-making is presented, in which meaning emerges not from "messages" communicated between individuals, but instead from the "noise" that arises through the uncertainty in their interactions. However, in our tendency in NIME to collapse the various roles in this network into a single individual, we place the increased potential afforded by digital systems at risk. Using examples from the author’s NIME practices, the paper uses a practice-based methodology to describe approaches to designing instruments that respond to the technologies that form the interfaces of the network, which can include scores and stylistic conventions. In doing so, the paper demonstrates that a repertoire—a seemingly anachronistic concept—and a corresponding repertoire-driven approach to creating NIMEs can in fact be a catalyst for invention and creativity.
@inproceedings{mgurevich2017, author = {Gurevich, Michael}, title = {Discovering Instruments in Scores: A Repertoire-Driven Approach to Designing New Interfaces for Musical Expression}, pages = {163--168}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176209}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0031.pdf} }
-
Joe Cantrell. 2017. Designing Intent: Defining Critical Meaning for NIME Practitioners. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 169–173. http://doi.org/10.5281/zenodo.1176211
Download PDF DOIThe ideation, conception and implementation of new musical interfaces and instruments provide more than the mere construction of digital objects. As physical and digital assemblages, interfaces also act as traces of the authoring entities that created them. Their intentions, likes, dislikes, and ultimate determinations of what is creatively useful all get embedded into the available choices of the interface. In this light, the self-perception of the musical HCI and instrument designer can be seen as occupying a primary importance in the instruments and interfaces that eventually come to be created. The work of a designer who self-identifies as an artist may result in a vastly different outcome than one who considers him or herself to be an entrepreneur, or a scientist, for example. These differing definitions of self as well as their HCI outcomes require their own means of critique, understanding and expectations. All too often, these definitions are unclear, or the considerations of overlapping means of critique remain unexamined.
@inproceedings{jcantrell2017, author = {Cantrell, Joe}, title = {Designing Intent: Defining Critical Meaning for NIME Practitioners}, pages = {169--173}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176211}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0032.pdf} }
-
Juan Vasquez, Koray Tahiroğlu, and Johan Kildal. 2017. Idiomatic Composition Practices for New Musical Instruments: Context, Background and Current Applications. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 174–179. http://doi.org/10.5281/zenodo.1181424
Download PDF DOIOne reason why some musical instruments continue their evolution more successfully and actively take part in the history of music is the existence of compositions made specifically for them, pieces that remain and are still played over a long period of time. As we know, performing these compositions keeps the characteristics of the instruments alive and helps them survive. This paper presents our contribution to this discussion with a context and historical background for idiomatic compositions. Looking beyond the classical era, we discuss how the concept of idiomatic music has influenced research and composition practices in the NIME community, drawing attention to the way current idiomatic composition practices consider specific NIME affordances for sonic, social and spatial interaction. We present particular projects that establish idiomatic writing as part of a new repertoire for new musical instruments. The idiomatic writing approach to composing music for NIME can shift the unique characteristics of new instruments towards a more established musical identity, providing a shared understanding and a common literature to the community.
@inproceedings{jvasquez2017, author = {Vasquez, Juan and Tahiroğlu, Koray and Kildal, Johan}, title = {Idiomatic Composition Practices for New Musical Instruments: Context, Background and Current Applications}, pages = {174--179}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1181424}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0033.pdf} }
-
Florent Berthaut, Cagan Arslan, and Laurent Grisoni. 2017. Revgest: Augmenting Gestural Musical Instruments with Revealed Virtual Objects. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 180–185. http://doi.org/10.5281/zenodo.1176213
Download PDF DOIGestural interfaces, which make use of physiological signals, hand / body postures or movements, have become widespread for musical expression. While they may increase the transparency and expressiveness of instruments, they may also result in limited agency, for musicians as well as for spectators. This problem becomes especially true when the implemented mappings between gesture and music are subtle or complex. These instruments may also restrict the appropriation possibilities of controls, by comparison to physical interfaces. Most existing solutions to these issues are based on distant and/or limited visual feedback (LEDs, small screens). Our approach is to augment the gestures themselves with revealed virtual objects. Our contributions are, first, a novel approach to visual feedback that allows for additional expressiveness; second, a software pipeline for pixel-level feedback and control that ensures tight coupling between sound and visuals; and third, a design space for extending gestural control using revealed interfaces. We also demonstrate and evaluate our approach with the augmentation of three existing gestural musical instruments.
@inproceedings{fberthaut2017, author = {Berthaut, Florent and Arslan, Cagan and Grisoni, Laurent}, title = {Revgest: Augmenting Gestural Musical Instruments with Revealed Virtual Objects}, pages = {180--185}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176213}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0034.pdf} }
-
Akito van Troyer. 2017. MM-RT: A Tabletop Musical Instrument for Musical Wonderers. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 186–191. http://doi.org/10.5281/zenodo.1176215
Download PDF DOIMM-RT (material and magnet — rhythm and timbre) is a tabletop musical instrument equipped with electromagnetic actuators to offer a new paradigm of musical expression and exploration. After expanding on prior work with electromagnetic instrument actuation and tabletop musical interfaces, the paper explains why and how MM-RT, through its physicality and ergonomics, has been designed specifically for musical wonderers: people who want to know more about music in installation, concert, and everyday contexts. Those wonderers aspire to interpret and explore music rather than focussing on a technically correct realization of music. Informed by this vision, we then describe the design and technical implementation of this tabletop musical instrument. The paper concludes with discussions about future works and how to trigger musical wonderers’ sonic curiosity to encounter, explore, invent, and organize sounds for music creation using a musical instrument like MM-RT.
@inproceedings{atroyer2017, author = {van Troyer, Akito}, title = {MM-RT: A Tabletop Musical Instrument for Musical Wonderers}, pages = {186--191}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176215}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0035.pdf} }
-
Fabio Morreale and Andrew McPherson. 2017. Design for Longevity: Ongoing Use of Instruments from NIME 2010-14. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 192–197. http://doi.org/10.5281/zenodo.1176218
Download PDF DOIEvery new edition of NIME brings dozens of new DMIs and the feeling that only a few of them will eventually break through. Previous work tried to address this issue with a deductive approach by formulating design frameworks; we addressed this issue with an inductive approach by elaborating on successes and failures of previous DMIs. We contacted 97 DMI makers that presented a new instrument at five successive editions of NIME (2010-2014); 70 answered. They were asked to indicate the original motivation for designing the DMI and to present information about its uptake. Results confirmed that most of the instruments have difficulties establishing themselves. Also, they were asked to reflect on the specific factors that facilitated and those that hindered instrument longevity. By grounding these reflections on existing research on NIME and HCI, we propose a series of design considerations for future DMIs.
@inproceedings{fmorreale2017, author = {Morreale, Fabio and McPherson, Andrew}, title = {Design for Longevity: Ongoing Use of Instruments from NIME 2010-14}, pages = {192--197}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176218}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0036.pdf} }
-
Samuel Delalez and Christophe d’Alessandro. 2017. Vokinesis: Syllabic Control Points for Performative Singing Synthesis. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 198–203. http://doi.org/10.5281/zenodo.1176220
Download PDF DOIPerformative control of voice is the process of real-time speech synthesis or modification by means of hand or foot gestures. Vokinesis, a system for real-time rhythm and pitch modification and control of singing, is presented. Pitch and vocal effort are controlled by a stylus on a graphic tablet. The concept of Syllabic Control Points (SCP) is introduced for timing and rhythm control. A chain of phonetic syllables has two types of temporal phases: the steady phases, which correspond to the vocalic nuclei, and the transient phases, which correspond to the attacks and/or codas. Thus, syllabic rhythm control methods need transient- and steady-phase control points, corresponding to the ancient concepts of arsis and thesis in prosodic theory. SCP allow for accurate control of articulation, using hands or feet. In the Tap mode, SCP are triggered by pressing and releasing a control button. In the Fader mode, continuous variation of the SCP sequencing rate is controlled with expression pedals. Vokinesis has been tested successfully in musical performances, using both syllabic rhythm control modes. This system opens new musical possibilities, and can be extended to other types of sounds beyond voice.
@inproceedings{sdelalez2017, author = {Delalez, Samuel and d'Alessandro, Christophe}, title = {Vokinesis: Syllabic Control Points for Performative Singing Synthesis}, pages = {198--203}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176220}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0037.pdf} }
-
Gareth Young, Dave Murphy, and Jeffrey Weeter. 2017. A Qualitative Analysis of Haptic Feedback in Music Focused Exercises. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 204–209. http://doi.org/10.5281/zenodo.1176222
Download PDF DOIWe present the findings of a pilot study that analysed the role of haptic feedback in a musical context. To examine the role of haptics in Digital Musical Instrument (DMI) design, an experiment was formulated to measure the users’ perception of device usability across four separate feedback stages: fully haptic (force and tactile combined), constant force only, vibrotactile only, and no feedback. The study was piloted over extended periods with the intention of exploring the application and integration of DMIs in real-world musical contexts. Applying a music-orientated analysis of this type enabled the investigative process not only to take place over a comprehensive period, but also allowed for the exploration of DMI integration in everyday compositional practices. As with any investigation that involves creativity, it was important that the participants did not feel rushed or restricted. That is, they were given sufficient time to explore and assess the different feedback types without constraint. This provided an accurate and representational set of qualitative data for validating the participants’ experience with the different feedback types they were presented with.
@inproceedings{gyoung2017, author = {Young, Gareth and Murphy, Dave and Weeter, Jeffrey}, title = {A Qualitative Analysis of Haptic Feedback in Music Focused Exercises}, pages = {204--209}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176222}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0038.pdf} }
-
Jingyin He, Jim Murphy, Dale A. Carnegie, and Ajay Kapur. 2017. Towards Related-Dedicated Input Devices for Parametrically Rich Mechatronic Musical Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 210–215. http://doi.org/10.5281/zenodo.1176224
Download PDF DOIIn recent years, mechatronic musical instruments (MMI) have become increasingly parametrically rich. Researchers have developed different interaction strategies to negotiate the challenge of interfacing with each of the MMI’s high-resolution parameters in real time. While mapping strategies are an important aspect of the musical interaction paradigm for MMI, attention to dedicated input devices for performing these instruments live should not be neglected. This paper presents the findings of a user study conducted with participants possessing specialized musicianship skills for MMI music performance and composition. Study participants are given three musical tasks to complete using a mechatronic chordophone with high dimensionality of control via different musical input interfaces (one input device at a time). This representative user study reveals the features of related-dedicated input controllers, how they compare against the typical MIDI keyboard/sequencer paradigm in human-MMI interaction, and provides an indication of the musical function that expert users prefer for each input interface.
@inproceedings{jhe2017, author = {He, Jingyin and Murphy, Jim and Carnegie, Dale A. and Kapur, Ajay}, title = {Towards Related-Dedicated Input Devices for Parametrically Rich Mechatronic Musical Instruments}, pages = {210--215}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176224}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0039.pdf} }
-
Asha Blatherwick, Luke Woodbury, and Tom Davis. 2017. Design Considerations for Instruments for Users with Complex Needs in SEN Settings. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 216–221. http://doi.org/10.5281/zenodo.1176226
Download PDF DOIMusic technology can provide unique opportunities to allow access to music making for those with complex needs in special educational needs (SEN) settings. Whilst there is a growing trend of research in this area, technology has been shown to face a variety of issues leading to underuse in this context. This paper reviews issues raised in the literature and in practice for the use of music technology in SEN settings. The paper then reviews existing principles and frameworks for designing digital musical instruments (DMIs). The reviews of the literature and of current frameworks are then used to inform a set of design considerations for instruments for users with complex needs in SEN settings. Eighteen design considerations are presented, with connections to literature and practice. An implementation example, including future work, is presented, and a conclusion is offered.
@inproceedings{ablatherwick2017, author = {Blatherwick, Asha and Woodbury, Luke and Davis, Tom}, title = {Design Considerations for Instruments for Users with Complex Needs in SEN Settings}, pages = {216--221}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176226}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0040.pdf} }
-
Abram Hindle and Daryl Posnett. 2017. Performance with an Electronically Excited Didgeridoo. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 222–226. http://doi.org/10.5281/zenodo.1176228
Download PDF DOIThe didgeridoo is a wind instrument composed of a single large tube, often used as a drone instrument for backing up the mids and lows of an ensemble. A didgeridoo is played by buzzing the lips and blowing air into the didgeridoo. To play a didgeridoo continuously one can employ circular breathing, but the volume of air required poses a real challenge to novice players. In this paper we replace the expense of circular breathing and lip buzzing with electronic excitation, thus creating an electro-acoustic didgeridoo or electronic didgeridoo. We describe the didgeridoo excitation signal, how to replicate it, and the hardware necessary to make an electro-acoustic didgeridoo driven by speakers and controllable from a computer. To properly drive the didgeridoo we rely upon 4th-order ported bandpass speaker boxes to help guide our excitation signals into an attached acoustic didgeridoo. The results somewhat replicate human didgeridoo playing, enabling a new kind of mid-to-low electro-acoustic accompaniment without the need for circular breathing.
@inproceedings{ahindle2017, author = {Hindle, Abram and Posnett, Daryl}, title = {Performance with an Electronically Excited Didgeridoo}, pages = {222--226}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176228}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0041.pdf} }
-
Michael Zbyszyński, Mick Grierson, and Matthew Yee-King. 2017. Rapid Prototyping of New Instruments with CodeCircle. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 227–230. http://doi.org/10.5281/zenodo.1181420
Download PDF DOIOur research examines the use of CodeCircle, an online, collaborative HTML, CSS, and JavaScript editor, as a rapid prototyping environment for musically expressive instruments. In CodeCircle, we use two primary libraries: MaxiLib and RapidLib. MaxiLib is a synthesis and sample processing library, ported from the C++ library Maximillian, which interfaces with the Web Audio API for sound generation in the browser. RapidLib is a product of the Rapid-Mix project, and allows users to implement interactive machine learning, using "programming by demonstration" to design new expressive interactions.
@inproceedings{mzbyszynski2017, author = {Zbyszyński, Michael and Grierson, Mick and Yee-King, Matthew}, title = {Rapid Prototyping of New Instruments with CodeCircle}, pages = {227--230}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1181420}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0042.pdf} }
-
Federico Visi, Baptiste Caramiaux, Michael Mcloughlin, and Eduardo Miranda. 2017. A Knowledge-based, Data-driven Method for Action-sound Mapping. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 231–236. http://doi.org/10.5281/zenodo.1176230
Download PDF DOIThis paper presents a knowledge-based, data-driven method for using data describing action-sound couplings collected from a group of people to generate multiple complex mappings between the performance movements of a musician and sound synthesis. This is done by using a database of multimodal motion data collected from multiple subjects coupled with sound synthesis parameters. A series of sound stimuli is synthesised using the sound engine that will be used in performance. Multimodal motion data is collected by asking each participant to listen to each sound stimulus and move as if they were producing the sound using a musical instrument they are given. Multimodal data is recorded during each performance, and paired with the synthesis parameters used for generating the sound stimulus. The dataset created using this method is then used to build a topological representation of the performance movements of the subjects. This representation is then used to interactively generate training data for machine learning algorithms, and define mappings for real-time performance. To better illustrate each step of the procedure, we describe an implementation involving clarinet, motion capture, wearable sensor armbands, and waveguide synthesis.
@inproceedings{fvisi2017, author = {Visi, Federico and Caramiaux, Baptiste and Mcloughlin, Michael and Miranda, Eduardo}, title = {A Knowledge-based, Data-driven Method for Action-sound Mapping}, pages = {231--236}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176230}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0043.pdf} }
-
Spencer Salazar and Mark Cerqueira. 2017. ChuckPad: Social Coding for Computer Music. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 237–240. http://doi.org/10.5281/zenodo.1176232
Download PDF DOIChuckPad is a network-based platform for sharing code, modules, patches, and even entire musical works written in the ChucK programming language and other music programming platforms. ChuckPad provides a single repository and record of musical code from supported musical programming systems, an interface for organizing, browsing, and searching this body of code, and a readily accessible means of evaluating the musical output of code in the repository. ChuckPad consists of an open-source modular backend service to be run on a network server or cloud infrastructure and a client library to facilitate integrating end-user applications with the platform. While ChuckPad has been initially developed for sharing ChucK source code, its design can accommodate any type of music programming system oriented around small text- or binary-format documents. To this end, ChuckPad has also been extended to the Auraglyph handwriting-based graphical music programming system.
@inproceedings{ssalazar2017, author = {Salazar, Spencer and Cerqueira, Mark}, title = {ChuckPad: Social Coding for Computer Music}, pages = {237--240}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176232}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0044.pdf} }
-
Axel Berndt, Simon Waloschek, Aristotelis Hadjakos, and Alexander Leemhuis. 2017. AmbiDice: An Ambient Music Interface for Tabletop Role-Playing Games. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 241–244. http://doi.org/10.5281/zenodo.1176234
Download PDF DOITabletop role-playing games are a collaborative narrative experience. Throughout gaming sessions, Ambient music and noises are frequently used to enrich and facilitate the narration. With AmbiDice we introduce a tangible interface and music generator specially devised for this application scenario. We detail the technical implementation of the device, the software architecture of the music system (AmbientMusicBox) and the scripting language to compose Ambient music and soundscapes. AmbiDice was presented to experienced players and gained positive feedback and constructive suggestions for further development.
@inproceedings{aberndt2017, author = {Berndt, Axel and Waloschek, Simon and Hadjakos, Aristotelis and Leemhuis, Alexander}, title = {AmbiDice: An Ambient Music Interface for Tabletop Role-Playing Games}, pages = {241--244}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176234}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0045.pdf} }
-
Sam Ferguson, Anthony Rowe, Oliver Bown, Liam Birtles, and Chris Bennewith. 2017. Sound Design for a System of 1000 Distributed Independent Audio-Visual Devices. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 245–250. http://doi.org/10.5281/zenodo.1176236
Download PDF DOIThis paper describes the sound design for Bloom, a light and sound installation made up of 1000 distributed independent audio-visual pixel devices, each with RGB LEDs, Wifi, Accelerometer, GPS sensor, and sound hardware. These types of systems have been explored previously, but only a few systems have exceeded 30-50 devices and very few have included sound capability, and therefore the sound design possibilities for large systems of distributed audio devices are not yet well understood. In this article we describe the hardware and software implementation of sound synthesis for this system, and the implications for design of media for this context.
@inproceedings{sferguson2017, author = {Ferguson, Sam and Rowe, Anthony and Bown, Oliver and Birtles, Liam and Bennewith, Chris}, title = {Sound Design for a System of 1000 Distributed Independent Audio-Visual Devices}, pages = {245--250}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176236}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0046.pdf} }
-
Richard Vogl and Peter Knees. 2017. An Intelligent Drum Machine for Electronic Dance Music Production and Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 251–256. http://doi.org/10.5281/zenodo.1176238
Download PDF DOIAn important part of electronic dance music (EDM) is the so-called beat. It is defined by the drum track of the piece and is a style-defining element. While producing EDM, creating the drum track tends to be delicate yet labor-intensive work. In this work we present a touch-interface-based prototype with the goal of simplifying this task. The prototype aims to support musicians in creating rhythmic patterns in the context of EDM production and live performances. Starting with a seed pattern provided by the user, a list of variations with varying degrees of deviation from the seed pattern is generated. The interface provides simple ways to enter, edit, visualize and browse through the patterns. Variations are generated by means of an artificial neural network which is trained on a database of drum rhythm patterns extracted from a commercial drum loop library. To evaluate the user interface and pattern generation quality, a user study with experts in EDM production was conducted. It was found that participants responded positively to the user interface and the quality of the generated patterns. Furthermore, the experts considered the prototype helpful for both studio production situations and live performances.
@inproceedings{rvogl2017, author = {Vogl, Richard and Knees, Peter}, title = {An Intelligent Drum Machine for Electronic Dance Music Production and Performance}, pages = {251--256}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176238}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0047.pdf} }
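The entry above generates ranked variations of a user-provided seed pattern with a neural network trained on a drum loop library. Purely as a loose, hypothetical illustration of the seed-and-variations workflow (not the authors' model), the sketch below toggles steps of a 16-step pattern using a hand-set probability map standing in for the trained network:

```python
# Minimal sketch of generating ranked variations of a 16-step drum pattern.
# The flip-probability map is a hand-set stand-in for the trained network
# described in the paper; it is purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Seed pattern: rows = instruments (kick, snare, hi-hat), columns = 16 steps.
seed = np.array([
    [1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0],   # kick
    [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0],   # snare
    [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],   # hi-hat
])

# Hypothetical per-cell probability that a step may be toggled in a variation.
flip_prob = np.full(seed.shape, 0.08)

def make_variation(pattern, strength):
    """Toggle cells with probability scaled by the requested deviation."""
    flips = rng.random(pattern.shape) < flip_prob * strength
    return np.abs(pattern - flips.astype(int))

# A list of variations with increasing deviation from the seed,
# mirroring the browsable list described in the abstract.
variations = [make_variation(seed, s) for s in (1, 2, 4, 8)]
for v in variations:
    print(int(np.sum(v != seed)), "cells changed")
```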
-
Martin Snejbjerg Jensen, Ole Adrian Heggli, Patricia Alves Da Mota, and Peter Vuust. 2017. A low-cost MRI compatible keyboard. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 257–260. http://doi.org/10.5281/zenodo.1176240
Download PDF DOINeuroimaging is a powerful tool to explore how and why humans engage in music. Magnetic resonance imaging (MRI) has allowed us to identify brain networks and regions implicated in a range of cognitive tasks including music perception and performance. However, MRI-scanners are noisy and cramped, presenting a challenging environment for playing an instrument. Here, we present an MRI-compatible polyphonic keyboard with a materials cost of 850 USD, designed and tested for safe use in 3T (three Tesla) MRI-scanners. We describe design considerations, and prior work in the field. In addition, we provide recommendations for future designs and comment on the possibility of using the keyboard in magnetoencephalography (MEG) systems. Preliminary results indicate a comfortable playing experience with no disturbance of the imaging process.
@inproceedings{mjensen2017, author = {Jensen, Martin Snejbjerg and Heggli, Ole Adrian and Mota, Patricia Alves Da and Vuust, Peter}, title = {A low-cost MRI compatible keyboard}, pages = {257--260}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176240}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0048.pdf} }
-
Sang Won Lee, Jungho Bang, and Georg Essl. 2017. Live Coding YouTube: Organizing Streaming Media for an Audiovisual Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 261–266. http://doi.org/10.5281/zenodo.1176242
Download PDF DOIMusic listening has changed greatly with the emergence of music streaming services, such as Spotify or YouTube. In this paper, we discuss an artistic practice that organizes streaming videos to perform a real-time improvisation via live coding. A live coder uses any available video from YouTube, a video streaming service, as source material to perform an improvised audiovisual piece. The challenge is to manipulate the emerging media that are streamed from a networked service. The musical gesture can be limited due to the provided functionalities of the YouTube API. However, the potential sonic and visual space that a musician can explore is practically infinite. The practice embraces the juxtaposition of manipulating emerging media in old-fashioned ways, similar to experimental musicians in the ’60s physically manipulating tape loops or scratching vinyl records on a phonograph, while exploring the possibility of doing so by drawing on the gigantic repository of all kinds of videos. In this paper, we discuss the challenges of using streaming videos from the platform as musical materials in computer music and introduce a live coding environment that we developed for real-time improvisation.
@inproceedings{slee2017, author = {Lee, Sang Won and Bang, Jungho and Essl, Georg}, title = {Live Coding YouTube: Organizing Streaming Media for an Audiovisual Performance}, pages = {261--266}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176242}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0049.pdf} }
-
Solen Kiratli, Akshay Cadambi, and Yon Visell. 2017. HIVE: An Interactive Sculpture for Musical Expression. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 267–270. http://doi.org/10.5281/zenodo.1176244
Download PDF DOIIn this paper we present HIVE, a parametrically designed interactive sound sculpture with embedded multi-channel digital audio which explores the intersection of sculptural form and musical instrument design. We examine sculpture as an integral part of music composition and performance, expanding the definition of musical instrument to include the gestalt of loudspeakers, architectural spaces, and material form. After examining some related works, we frame HIVE as an interactive sculpture for musical expression. We then describe our design and production process, which hinges on the relationship between sound, space, and sculptural form. Finally, we discuss the installation and its implications.
@inproceedings{skiratli2017, author = {Kiratli, Solen and Cadambi, Akshay and Visell, Yon}, title = {HIVE: An Interactive Sculpture for Musical Expression}, pages = {267--270}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176244}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0050.pdf} }
-
Matthew Blessing and Edgar Berdahl. 2017. The JoyStyx: A Quartet of Embedded Acoustic Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 271–274. http://doi.org/10.5281/zenodo.1176246
Download PDF DOIThe JoyStyx Quartet is a series of four embedded acoustic instruments. Each of these instruments is a five-voice granular synthesizer which processes a different sound source to give each a unique timbre and range. The performer interacts with these voices individually with five joysticks positioned to lay under the performer’s fingertips. The JoyStyx uses a custom-designed printed circuit board. This board provides the joystick layout and connects them to an Arduino Micro, which serializes the ten analog X/Y position values and the five digital button presses. This data controls the granular and spatial parameters of a Pure Data patch running on a Raspberry Pi 2. The nature of the JoyStyx construction causes the frequency response to be coloured by the materials and their geometry, leading to a unique timbre. This endows the instrument with a more “analog” or “natural” sound, despite relying on computer-based algorithms. In concert, the quartet performance with the JoyStyx may potentially be the first performance ever with a quartet of Embedded Acoustic Instruments.
@inproceedings{mblessing2017, author = {Blessing, Matthew and Berdahl, Edgar}, title = {The JoyStyx: A Quartet of Embedded Acoustic Instruments}, pages = {271--274}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176246}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0051.pdf} }
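The JoyStyx entry above describes an Arduino Micro serializing ten analog X/Y values and five button states that drive a granular Pure Data patch on a Raspberry Pi 2. As a hypothetical sketch of that kind of serial-to-patch bridge (the line format, port name and OSC addresses are assumptions, not taken from the paper, and real hardware is required), a host-side script might parse the stream and forward it as OSC:

```python
# Hypothetical bridge between a joystick microcontroller and a Pd patch.
# Assumes each serial line holds ten analog values and five button states
# as comma-separated integers; port name, baud rate and OSC addresses are
# illustrative only.
import serial                              # pyserial
from pythonosc.udp_client import SimpleUDPClient

ser = serial.Serial("/dev/ttyACM0", 115200, timeout=1)
pd = SimpleUDPClient("127.0.0.1", 5400)    # a Pd patch listening for OSC on this port

while True:
    line = ser.readline().decode("ascii", errors="ignore").strip()
    fields = line.split(",")
    if len(fields) != 15:                  # 5 joysticks x (X, Y) + 5 buttons
        continue
    values = [int(f) for f in fields]
    for i in range(5):
        pd.send_message(f"/joystick/{i}/xy", values[2 * i:2 * i + 2])
        pd.send_message(f"/joystick/{i}/button", values[10 + i])
```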
-
Graham Wakefield and Charles Roberts. 2017. A Virtual Machine for Live Coding Language Design. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 275–278. http://doi.org/10.5281/zenodo.1176248
Download PDF DOIThe growth of the live coding community has been coupled with a rich development of experimentation in new domain-specific languages, sometimes idiosyncratic to the interests of their performers. Nevertheless, programming language design may seem foreboding to many, steeped in computer science that is distant from the expertise of music performance. To broaden access to designing unique languages-as-instruments we developed an online programming environment that offers liveness in the process of language design as well as performance. The editor utilizes the Parsing Expression Grammar formalism for language design, and a virtual machine featuring collaborative multitasking for execution, in order to support a diversity of language concepts and affordances. The editor is coupled with online tutorial documentation aimed at the computer music community, with live examples embedded. This paper documents the design and use of the editor and its underlying virtual machine.
@inproceedings{gwakefield2017, author = {Wakefield, Graham and Roberts, Charles}, title = {A Virtual Machine for Live Coding Language Design}, pages = {275--278}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176248}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0052.pdf} }
-
Tom Davis. 2017. The Feral Cello: A Philosophically Informed Approach to an Actuated Instrument. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 279–282. http://doi.org/10.5281/zenodo.1176250
Download PDF DOIThere have been many NIME papers over the years on augmented or actuated instruments [2][10][19][22]. Many of these papers have focused on the technical description of how these instruments have been produced, or, as in the case of Machover’s ‘Hyperinstruments’ [19], on producing instruments over which performers have ‘absolute control’ and which emphasise ‘learnability, perfectibility and repeatability’ [19]. In contrast to this approach, this paper outlines a philosophical position concerning the relationship between instruments and performers in improvisational contexts that recognises the agency of the instrument within the performance process. It builds on a post-phenomenological understanding of the human/instrument relationship in which the human and the instrument are understood as co-defining entities without fixed boundaries; an approach that actively challenges notions of instrumental mastery and ‘absolute control’. This paper then takes a practice-based approach to outline how such philosophical concerns have fed into the design of an augmented, actuated cello system, The Feral Cello, that has been designed to explicitly explore these concerns through practice.
@inproceedings{tdavis2017, author = {Davis, Tom}, title = {The Feral Cello: A Philosophically Informed Approach to an Actuated Instrument}, pages = {279--282}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176250}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0053.pdf} }
-
Francisco Bernardo, Nicholas Arner, and Paul Batchelor. 2017. O Soli Mio: Exploring Millimeter Wave Radar for Musical Interaction. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 283–286. http://doi.org/10.5281/zenodo.1176252
Download PDF DOIThis paper describes an exploratory study of the potential for musical interaction of Soli, a new radar-based sensing technology developed by Google’s Advanced Technology and Projects Group (ATAP). We report on our hands-on experience and outcomes within the Soli Alpha Developers program. We present early experiments demonstrating the use of Soli for creativity in musical contexts. We discuss the tools, the workflow, the affordances of the prototypes for music making, and the potential for design of future NIME projects that may integrate Soli.
@inproceedings{fbernardo2017, author = {Bernardo, Francisco and Arner, Nicholas and Batchelor, Paul}, title = {O Soli Mio: Exploring Millimeter Wave Radar for Musical Interaction}, pages = {283--286}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176252}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0054.pdf} }
-
Constanza Levican, Andres Aparicio, Vernon Belaunde, and Rodrigo Cadiz. 2017. Insight2OSC: using the brain and the body as a musical instrument with the Emotiv Insight. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 287–290. http://doi.org/10.5281/zenodo.1176254
Download PDF DOIBrain-computer interfaces are being widely adopted for music creation and interpretation, and they are becoming a truly new category of musical instruments. Indeed, Miranda has coined the term Brain-computer Musical Interface (BCMI) to refer to this category. There are no "plug-n-play" solutions for a BCMI; these kinds of tools usually require the setup and implementation of particular software configurations, customized for each EEG device. The Emotiv Insight is a low-cost EEG apparatus that outputs several kinds of data, such as EEG rhythms or facial expressions, from the user’s brain activity. We have developed a BCMI, in the form of a freely available middleware, using the Emotiv Insight for EEG input and signal processing. The obtained data is broadcast over the network via Bluetooth, formatted for the OSC protocol. Using this software, we tested the device’s adequacy as a BCMI by using the provided data to control different sound synthesis algorithms in MaxMSP. We conclude that the Emotiv Insight is an interesting choice for a BCMI due to its low cost and ease of use, but we also question its reliability and robustness.
@inproceedings{clevican2017, author = {Levican, Constanza and Aparicio, Andres and Belaunde, Vernon and Cadiz, Rodrigo}, title = {Insight2OSC: using the brain and the body as a musical instrument with the Emotiv Insight}, pages = {287--290}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176254}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0055.pdf} }
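Since Insight2OSC exposes the headset data as OSC messages over the network, a receiving patch or script only needs an OSC server and a mapping function. Below is a minimal, hypothetical Python receiver; the address pattern /insight/alpha, the port, and the 0.0–1.0 value range are assumptions for illustration, not the middleware's documented output:

```python
# Hypothetical receiver for EEG band-power data broadcast as OSC, in the
# spirit of the middleware described above. The handler simply rescales an
# (assumed) 0.0-1.0 value into a MIDI-style 0-127 control range.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_band_power(address, value):
    control = max(0, min(127, int(value * 127)))
    print(address, "->", control)          # here: hand off to a synthesis parameter

dispatcher = Dispatcher()
dispatcher.map("/insight/alpha", on_band_power)

server = BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher)
server.serve_forever()
```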
-
Benjamin Smith and Neal Anderson. 2017. ArraYnger: New Interface for Interactive 360° Spatialization. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 291–295. http://doi.org/10.5281/zenodo.1176256
Download PDF DOIInteractive real-time spatialization of audio over large immersive speaker arrays poses significant interface and control challenges for live performers. Fluidly moving and mixing numerous sound objects over unique speaker configurations requires specifically designed software interfaces and systems. Currently available software solutions either impose configuration limitations, require extreme degrees of expertise, or extensive configuration time to use. A new system design, focusing on simplicity, ease of use, and live interactive spatialization is described. Automation of array calibration and tuning is included to facilitate rapid deployment and configuration. Comparisons with other solutions show favorability in terms of complexity, depth of control, and required features.
@inproceedings{bsmith2017, author = {Smith, Benjamin and Anderson, Neal}, title = {ArraYnger: New Interface for Interactive 360° Spatialization}, pages = {291--295}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176256}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0056.pdf} }
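The entry above concerns moving sound objects across large, irregular speaker arrays. As a much-simplified stand-in for such a spatialisation engine (not the system's actual algorithm), the sketch below computes cosine-weighted amplitude-panning gains for a source azimuth over an evenly spaced ring of speakers:

```python
# Basic cosine-weighted amplitude panning over a ring of speakers, as a
# simplified illustration of interactive spatialisation; real systems such
# as the one described above use more sophisticated panning laws.
import numpy as np

def ring_gains(source_angle_deg, n_speakers, width_deg=90.0):
    """Return per-speaker gains for a source at the given azimuth."""
    speaker_angles = np.arange(n_speakers) * 360.0 / n_speakers
    # Shortest angular distance from the source to each speaker.
    diff = (speaker_angles - source_angle_deg + 180.0) % 360.0 - 180.0
    gains = np.cos(np.clip(np.abs(diff) / width_deg, 0.0, 1.0) * np.pi / 2.0)
    return gains / np.linalg.norm(gains)   # keep total power roughly constant

print(np.round(ring_gains(45.0, 8), 3))
```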
-
Alexandra Murray-Leslie and Andrew Johnston. 2017. The Liberation of the Feet: Demaking the High Heeled Shoe For Theatrical Audio-Visual Expression. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 296–301. http://doi.org/10.5281/zenodo.1176258
Download PDF DOIThis paper describes a series of fashionable sounding shoe- and foot-based appendages made between 2007 and 2017. The research attempts to demake the physical high-heeled shoe through the iterative design and fabrication of new foot-based musical instruments. This process of demaking also changes the usual purpose of shoes and associated stereotypes of high-heeled shoe wear. Through turning high-heeled shoes into wearable musical instruments for theatrical audio-visual expressivity, we question why so many musical instruments are made for the hands and not the feet. With this creative work we explore ways to redress the imbalance and consider what a genuinely “foot-based” expressivity could be.
@inproceedings{aleslie2017, author = {Murray-Leslie, Alexandra and Johnston, Andrew}, title = {The Liberation of the Feet: Demaking the High Heeled Shoe For Theatrical Audio-Visual Expression}, pages = {296--301}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176258}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0057.pdf} }
-
Christiana Rose. 2017. SALTO: A System for Musical Expression in the Aerial Arts. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 302–306. http://doi.org/10.5281/zenodo.1176260
Download PDF DOIWearable sensor technology and aerial dance movement can be integrated to provide a new performance practice and perspective on interactive kinesonic composition. SALTO (Sonic Aerialist eLecTrOacoustic system) is a system that allows for the creation of collaborative works between electroacoustic composer and aerial choreographer. The system incorporates aerial dance trapeze movement, sensors, digital synthesis, and electroacoustic composition. In SALTO, the Max software programming environment employs parameters and mapping techniques for translating the performer’s movement and internal experience into sound. Splinter (2016), a work for aerial choreographer/performer and the SALTO system, highlights the expressive qualities of the system in a performance setting.
@inproceedings{crose2017, author = {Rose, Christiana}, title = {SALTO: A System for Musical Expression in the Aerial Arts}, pages = {302--306}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176260}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0058.pdf} }
-
Marije Baalman. 2017. Wireless Sensing for Artistic Applications, a Reflection on Sense/Stage to Motivate the Design of the Next Stage. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 307–312. http://doi.org/10.5281/zenodo.1176262
Download PDF DOIAcademic research projects focusing on wireless sensor networks rarely live on after the funded research project has ended. In contrast, the Sense/Stage project has evolved over the past 6 years outside of an academic context and has been used in a multitude of artistic projects. This paper presents how the project has developed, the diversity of the projects that have been made with the technology, feedback from users on the system and an outline for the design of a successor to the current system.
@inproceedings{mbaalman2017, author = {Baalman, Marije}, title = {Wireless Sensing for Artistic Applications, a Reflection on Sense/Stage to Motivate the Design of the Next Stage}, pages = {307--312}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176262}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0059.pdf} }
-
Ivica Bukvic and Spencer Lee. 2017. Glasstra: Exploring the Use of an Inconspicuous Head Mounted Display in a Live Technology-Mediated Music Performance. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 313–318. http://doi.org/10.5281/zenodo.1176264
Download PDF DOIThe following paper explores the Inconspicuous Head-Mounted Display within the context of a live technology-mediated music performance. For this purpose, in 2014 the authors developed Glasstra, an Android/Google Glass networked display designed to project real-time orchestra status to the conductor, with the primary goal of minimizing the on-stage technology footprint and, with it, the audience’s potential distraction by technology. In preparation for its deployment in a real-world performance setting, the team conducted a user study aimed at defining relevant constraints of the Google Glass display. Based on the observed data, a conductor part from an existing laptop orchestra piece was retrofitted, thereby replacing the laptop with a Google Glass running Glasstra and a similarly inconspicuous forearm-mounted Wiimote controller. Below we present findings from the user study that have informed the design of the visual display, as well as multi-perspective observations from a series of real-world performances, including those of the designer, the user, and the audience. We use these findings to offer a new hypothesis, an inverse uncanny valley, or what we refer to as the uncanny mountain, pertaining to the audience’s potential distraction by technology within the context of a live technology-mediated music performance as a function of minimizing the on-stage technological footprint.
@inproceedings{ibukvic2017, author = {Bukvic, Ivica and Lee, Spencer}, title = {Glasstra: Exploring the Use of an Inconspicuous Head Mounted Display in a Live Technology-Mediated Music Performance}, pages = {313--318}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176264}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0060.pdf} }
-
Scott Barton, Ethan Prihar, and Paulo Carvalho. 2017. Cyther: a Human-playable, Self-tuning Robotic Zither. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 319–324. http://doi.org/10.5281/zenodo.1176266
-
Beici Liang, György Fazekas, Andrew McPherson, and Mark Sandler. 2017. Piano Pedaller: A Measurement System for Classification and Visualisation of Piano Pedalling Techniques. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 325–329. http://doi.org/10.5281/zenodo.1176268
Download PDF DOIThis paper presents the results of a study of piano pedalling techniques on the sustain pedal using a newly designed measurement system named Piano Pedaller. The system is comprised of an optical sensor mounted in the piano pedal bearing block and an embedded platform for recording audio and sensor data. This enables recording the pedalling gesture of real players and the piano sound under normal playing conditions. Using the gesture data collected from the system, the task of classifying these data by pedalling technique was undertaken using a Support Vector Machine (SVM). Results can be visualised in an audio based score following application to show pedalling together with the player’s position in the score.
@inproceedings{bliang2017, author = {Liang, Beici and Fazekas, György and McPherson, Andrew and Sandler, Mark}, title = {Piano Pedaller: A Measurement System for Classification and Visualisation of Piano Pedalling Techniques}, pages = {325--329}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176268}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0062.pdf} }
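The Piano Pedaller entry above classifies pedalling gestures with a Support Vector Machine. As a hedged sketch of that classification step only, the snippet below trains an SVM on placeholder gesture features; the feature columns, class labels and random data are illustrative assumptions, not the paper's dataset:

```python
# Minimal sketch of SVM classification of pedalling gestures on placeholder
# features (e.g., pedal-depth statistics per gesture). Feature set and labels
# are assumed for illustration; the paper's actual features differ.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Fake feature matrix: one row per gesture, columns = [mean depth, max depth, duration].
X = rng.random((200, 3))
y = rng.integers(0, 3, 200)   # assumed labels: 0 = quarter, 1 = half, 2 = full pedal

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", C=1.0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```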
-
Jason Long, Jim Murphy, Dale A. Carnegie, and Ajay Kapur. 2017. A Closed-Loop Control System for Robotic Hi-hats. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 330–335. http://doi.org/10.5281/zenodo.1176272
Download PDF DOIWhile most musical robots that are capable of playing the drum kit utilise a relatively simple striking motion, the hi-hat, with the additional degree of motion provided by its pedal, requires more involved control strategies in order to produce expressive performances on the instrument. A robotic hi-hat should be able to control not only the striking timing and velocity to a high degree of precision, but also dynamically control the position of the two cymbals in a way that is consistent, reproducible and intuitive for composers and other musicians to use. This paper describes the creation of a new, multifaceted hi-hat control system that utilises a closed-loop distance sensing and calibration mechanism in conjunction with an embedded musical information retrieval system to continuously calibrate the hi-hat’s action both before and during a musical performance. This is achieved by combining existing musical robotic devices with a newly created linear actuation mechanism, custom amplification, acquisition and DSP hardware, and embedded software algorithms. This new approach allows musicians to create expressive and reproducible musical performances with the instrument using consistent musical parameters, and the self-calibrating nature of the instrument lets users focus on creating music instead of maintaining equipment.
@inproceedings{jlong2017, author = {Long, Jason and Murphy, Jim and Carnegie, Dale A. and Kapur, Ajay}, title = {A Closed-Loop Control System for Robotic Hi-hats}, pages = {330--335}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176272}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0063.pdf} }
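The robotic hi-hat above relies on closed-loop distance sensing to keep the cymbal position calibrated. The sketch below is only a generic proportional control loop run against a trivial simulated plant, assumed purely for illustration; the instrument's real sensing, actuation and calibration routines are not described at this level in the abstract:

```python
# Simplified closed-loop position control in the spirit of the self-calibrating
# hi-hat described above. The "plant" is a toy linear simulation standing in
# for the real distance sensor and linear actuator.
import time

position_mm = 30.0              # simulated gap between the two cymbals

def read_distance_mm():
    return position_mm          # real system: distance sensor reading

def set_actuator(output):
    global position_mm          # real system: drive the linear actuator
    position_mm = 40.0 - 35.0 * output   # assumed linear response, 5-40 mm range

def control_loop(target_mm, kp=0.02, steps=200, period_s=0.005):
    output = 0.0
    for _ in range(steps):
        error = read_distance_mm() - target_mm
        output = min(1.0, max(0.0, output + kp * error))  # proportional correction
        set_actuator(output)
        time.sleep(period_s)
    return read_distance_mm()

print("settled at", round(control_loop(target_mm=12.0), 2), "mm")
```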
-
Stratos Kountouras and Ioannis Zannos. 2017. Gestus: Teaching Soundscape Composition and Performance with a Tangible Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 336–341. http://doi.org/10.5281/zenodo.1176274
Download PDF DOITangible user interfaces empower artists, boost their creative expression and enhance performing art. However, most of them are designed to work with a set of rules, many of which require advanced skills and target users above a certain age. Here we present a comparative and quantitative study of using TUIs as an alternative teaching tool in experimenting with and creating soundscapes with children. We describe an informal interactive workshop involving schoolchildren. We focus on the development of playful uses of technology to help children empirically understand basic audio feature extraction techniques. We promote tangible interaction as an alternative learning method in the creation of synthetic soundscapes based on sounds recorded in a natural outdoor environment as the main sources of sound. We investigate how schoolchildren perceive natural sources of sound and explore practices that reuse prerecorded material through a tangible interactive controller. We discuss the potential benefits of using TUIs as an alternative empirical method for tangible learning and interaction design, and its impact on encouraging and motivating creativity in children. We summarize our findings and review children’s behavioural indicators of engagement and enjoyment in order to provide insight into the design of TUIs based on user experience.
@inproceedings{skountouras2017, author = {Kountouras, Stratos and Zannos, Ioannis}, title = {Gestus: Teaching Soundscape Composition and Performance with a Tangible Interface}, pages = {336--341}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176274}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0064.pdf} }
-
Hazar Emre Tez and Nick Bryan-Kinns. 2017. Exploring the Effect of Interface Constraints on Live Collaborative Music Improvisation. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 342–347. http://doi.org/10.5281/zenodo.1176276
Download PDF DOIThis research investigates how applying interaction constraints to digital music instruments (DMIs) affects the way that experienced music performers collaborate and find creative ways to make live improvised music on stage. The constraints are applied in two forms: i) constraints physically implemented on the instruments themselves, and ii) hidden rules defined on a network between the instruments and triggered depending on the musical actions of the performers. Six experienced musicians were recruited for a user study that involved rehearsal and performance. Performers were given deliberately constrained instruments containing a touch sensor, speaker, battery and an embedded computer. Results of the study show that whilst constraints can lead to more structured improvisation, the resultant music may not fit with performers’ true intentions. It was also found that when external musical material is introduced to guide the performers into a collective convergence, it is likely to be ignored because performers perceive it as being out of context.
@inproceedings{htez2017, author = {Tez, Hazar Emre and Bryan-Kinns, Nick}, title = {Exploring the Effect of Interface Constraints on Live Collaborative Music Improvisation}, pages = {342--347}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176276}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0065.pdf} }
-
Irmandy Wicaksono and Joseph Paradiso. 2017. FabricKeyboard: Multimodal Textile Sensate Media as an Expressive and Deformable Musical Interface. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 348–353. http://doi.org/10.5281/zenodo.1176278
Download PDF DOIThis paper presents FabricKeyboard: a novel deformable keyboard interface based on a multi-modal fabric sensate surface. Multi-layer fabric sensors that detect touch, proximity, electric field, pressure, and stretch are machine-sewn in a keyboard pattern on a stretchable substrate. The result is a fabric-based musical controller that combines both the discrete controls of a keyboard and various continuous controls from the embedded fabric sensors. This enables unique tactile experiences and new interactions both with physical and non-contact gestures: physical by pressing, pulling, stretching, and twisting the keys or the fabric and non-contact by hovering and waving towards/against the keyboard and an electromagnetic source. We have also developed additional fabric-based modular interfaces such as a ribbon-controller and trackpad, allowing performers to add more expressive and continuous controls. This paper will discuss implementation strategies for our system-on-textile, fabric-based sensor developments, as well as sensor-computer interfacing and musical mapping examples of this multi-modal and expressive fabric keyboard.
@inproceedings{iwicaksono2017, author = {Wicaksono, Irmandy and Paradiso, Joseph}, title = {FabricKeyboard: Multimodal Textile Sensate Media as an Expressive and Deformable Musical Interface}, pages = {348--353}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176278}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0066.pdf} }
-
Kristians Konovalovs, Jelizaveta Zovnercuka, Ali Adjorlu, and Daniel Overholt. 2017. A Wearable Foot-mounted / Instrument-mounted Effect Controller: Design and Evaluation. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 354–357. http://doi.org/10.5281/zenodo.1176280
Download PDF DOIThis paper explores a new interaction possibility for increasing performer freedom via a foot-mounted wearable and an instrument-mounted device that maintain stomp-box styles of interactivity, but without the restrictions normally associated with the original design of guitar effect pedals. The classic foot-activated effect pedals that are used to alter the sound of the instrument are stationary, forcing the performer to return to the same location in order to interact with the pedals. This paper presents a new design that enables the performer to interact with the effect pedals anywhere on the stage. By designing a foot- and instrument-mounted effect controller, we kept the strongest part of the classical pedal design, while allowing the activation of the effect at any location on the stage. The usability of the device has been tested on thirty experienced guitar players. Their performance has been recorded and compared, and their opinions have been investigated through a questionnaire and interviews. The results of the experiment showed that, in theory, a foot- and instrument-mounted effect controller can replace standard effect pedals and at the same time provide more mobility on stage.
@inproceedings{kkonovalovs2017, author = {Konovalovs, Kristians and Zovnercuka, Jelizaveta and Adjorlu, Ali and Overholt, Daniel}, title = {A Wearable Foot-mounted / Instrument-mounted Effect Controller: Design and Evaluation}, pages = {354--357}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176280}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0067.pdf} }
-
Herbert Ho-Chun Chang, Lloyd May, and Spencer Topel. 2017. Nonlinear Acoustic Synthesis in Augmented Musical Instruments. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 358–363. http://doi.org/10.5281/zenodo.1176282
Download PDF DOIThis paper discusses nonlinear acoustic synthesis in augmented musical instruments via acoustic transduction. Our work expands previous investigations into acoustic amplitude modulation, offering new prototypes that produce intermodulation in several instrumental contexts. Our results show nonlinear intermodulation distortion can be generated and controlled in electromagnetically driven acoustic interfaces that can be deployed in acoustic instruments through augmentation, thus extending the nonlinear acoustic synthesis to a broader range of sonic applications.
@inproceedings{hchang2017, author = {Chang, Herbert Ho-Chun and May, Lloyd and Topel, Spencer}, title = {Nonlinear Acoustic Synthesis in Augmented Musical Instruments}, pages = {358--363}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176282}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0068.pdf} }
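The nonlinear acoustic synthesis entry above builds on amplitude modulation and intermodulation distortion. As a textbook-style numerical illustration of where intermodulation components come from (not the paper's electromagnetically driven signal chain), the sketch below passes two tones through a weak quadratic nonlinearity and lists the resulting spectral peaks; the frequencies are arbitrary:

```python
# Two tones through a memoryless quadratic nonlinearity produce sum and
# difference (intermodulation) components alongside the harmonics.
import numpy as np

fs = 48000
t = np.arange(fs) / fs                 # one second of samples
f1, f2 = 440.0, 660.0                  # arbitrary tone frequencies

x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
y = x + 0.5 * x**2                     # weak quadratic nonlinearity

spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), 1 / fs)
peaks = freqs[(spectrum > 0.01) & (freqs > 0)]
print(np.round(peaks))                 # expect 220 (f2-f1), 440, 660, 880, 1100 (f1+f2), 1320
```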
-
Georg Hajdu, Benedict Carey, Goran Lazarevic, and Eckhard Weymann. 2017. From Atmosphere to Intervention: The circular dynamic of installations in hospital waiting areas. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 364–369. http://doi.org/10.5281/zenodo.1176284
Download PDF DOIThis paper is a description of a pilot project conducted at the Hamburg University of Music and Drama (HfMT) during the academic year 2015-16. In this project we addressed how interventions via interactive, generative music systems may contribute to the improvement of the atmosphere, and thus to the well-being of patients, in hospital waiting areas. The project was conducted by students of both the music therapy and multimedia composition programs and has thus offered rare insights into the dynamic of such undertakings, covering both the therapeutic underpinnings and the technical means required to achieve a particular result. DJster, the engine we used for the generative processes, is based on Clarence Barlow’s probabilistic algorithms. Equipped with the proper periphery (sensors, sound modules and spatializers), we looked at three different scenarios, each requiring specific musical and technological solutions. The pilot was concluded by a symposium in 2017 and the development of a prototype system. The symposium yielded a diagram detailing the circular dynamic of the factors involved in this particular project, while the prototype was demoed in 2016 at the HfMT facilities. The system will be installed permanently at the University Medical Center Hamburg-Eppendorf (UKE) in June 2017.
@inproceedings{ghajdu2017, author = {Hajdu, Georg and Carey, Benedict and Lazarevic, Goran and Weymann, Eckhard}, title = {From Atmosphere to Intervention: The circular dynamic of installations in hospital waiting areas}, pages = {364--369}, booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression}, year = {2017}, publisher = {Aalborg University Copenhagen}, address = {Copenhagen, Denmark}, doi = {10.5281/zenodo.1176284}, url = {http://www.nime.org/proceedings/2017/nime2017_paper0069.pdf} }
-
Dom Brown, Chris Nash, and Tom Mitchell. 2017. A User Experience Review of Music Interaction Evaluations. Proceedings of the International Conference on New Interfaces for Musical Expression, Aalborg University Copenhagen, pp. 370–375. http://doi.org/10.5281/zenodo.1176286
Download PDF DOIThe need for thorough evaluations is an emerging area of interest and importance in music interaction research. As a large degree of DMI evaluation is concerned with exploring the subjective experience (ergonomics, action-sound mappings and control intimacy), User Experience (UX) methods are increasingly being utilised to analyse an individual’s experience of new musical instruments, from which we can extract meaningful, robust findings and, subsequently, generalised and useful recommendations. However, many music interaction evaluations remain informal. In this paper, we provide a meta-review of 132 papers from the 2014–2016 proceedings of the NIME, SMC and ICMC conferences to collate the aspects of UX resea