dev

Towards Playing in the 'Air' (2020)

In acoustic instruments, sound production relies on the interaction between physical objects. Digital musical instruments, on the other hand, are based on arbitrarily designed action–sound mappings. This paper describes the ongoing exploration of an empirically based approach to simulating guitar playing technique when designing the mappings of 'air instruments'. We present results from an experiment in which 33 electric guitarists performed a set of basic sound-producing actions: impulsive, sustained, and iterative. The dataset consists of bioelectric muscle signals, motion capture, video, and audio recordings. This multimodal dataset was used to train a long short-term memory (LSTM) network with a few hidden layers and a relatively short training duration. We show that the network is able to predict audio energy features of free improvisations on the guitar, relying on a dataset of three distinct motion types.
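
As a concrete illustration, here is a minimal PyTorch sketch of this kind of model: an LSTM regressor mapping frame-wise motion/EMG features to an audio energy envelope. The class name, layer sizes, feature dimensions, and training loop are all placeholder assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class MotionToAudioLSTM(nn.Module):
    def __init__(self, n_features=16, hidden=64, n_layers=2):
        super().__init__()
        # "A few hidden layers": a small stacked LSTM; sizes are guesses.
        self.lstm = nn.LSTM(n_features, hidden, num_layers=n_layers,
                            batch_first=True)
        self.head = nn.Linear(hidden, 1)   # one energy value per frame

    def forward(self, x):                  # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out).squeeze(-1)  # (batch, time)

model = MotionToAudioLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Random tensors standing in for synchronized mocap/EMG frames (x)
# and the target audio energy envelope (y).
x, y = torch.randn(8, 200, 16), torch.randn(8, 200)
for _ in range(20):                        # deliberately short training
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
```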

RAW (2019)

The instrument is built around two Myo armbands located on the forearms of the performer. These are used to capture muscle contraction, which in turn serves as the basis for the sonic interaction design. Using a practice-based approach, the aim is to explore the musical aesthetics of naturally occurring bioelectric signals. We are particularly interested in the differences between processing at audio rate versus control rate, and in how the level of detail in the signal and the complexity of the mappings influence the experience of control in the instrument. This is exemplified through reflections on four concerts in which RAW has been used in different types of collective improvisation.
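
The audio-rate versus control-rate distinction can be sketched as follows (assumptions: Myo EMG arrives at roughly 200 Hz, and the cutoff and mapping targets are placeholders). At control rate, the rectified signal is smoothed into a slow envelope for steering parameters; at audio rate, the raw signal can be treated as sound material directly.

```python
import numpy as np

def envelope(emg, sr=200, cutoff_hz=5.0):
    """Control-rate follower: rectify, then one-pole lowpass."""
    alpha = np.exp(-2.0 * np.pi * cutoff_hz / sr)
    out, y = np.empty(len(emg)), 0.0
    for i, v in enumerate(np.abs(emg)):
        y = alpha * y + (1.0 - alpha) * v
        out[i] = y
    return out

emg = np.random.randn(1000) * np.hanning(1000)   # fake contraction burst
ctrl = envelope(emg)               # slow envelope, e.g. mapped to volume
audio = emg / np.max(np.abs(emg))  # raw signal used directly as sound
```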

Vrengt (2018)

The piece Vrengt grew from the idea of enabling a true partnership between a musician and a dancer, developing an instrument that would allow for active co-performance. Using a participatory design approach, we worked with sonification as a tool for systematically exploring the dancer's bodily expressions. The exploration used a "spatiotemporal matrix," with a particular focus on sonic microinteraction. In the final performance, two Myo armbands captured muscle activity from the dancer's arm and leg, and a wireless headset microphone captured the sound of breathing. In the paper we reflect on multi-user instrument paradigms, describe our approach to creating a shared instrument with sonification as the sound design tool, and present the performers' subjective evaluation of the instrument.
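
As a rough illustration of the many-to-one mapping involved (the parameter names and ranges below are invented for this sketch, not taken from the piece), three normalized envelopes from the dancer's arm, leg, and breathing could jointly steer a single synthesis voice:

```python
def map_to_synth(arm_env, leg_env, breath_env):
    """Map three normalized [0, 1] envelopes onto one shared voice."""
    return {
        "pitch_hz":  110.0 * (1.0 + 3.0 * arm_env),  # arm -> pitch
        "cutoff_hz": 200.0 + 4000.0 * leg_env,       # leg -> brightness
        "amp":       breath_env,                     # breath -> loudness
    }

print(map_to_synth(arm_env=0.2, leg_env=0.5, breath_env=0.8))
```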

Biostomp v2 (2017)

Biostomp is a new musical interface that relies on mechanomyography (MMG) as a biocontrol mechanism in live performance situations. Designed in the form of a stompbox, Biostomp translates a performer's muscle movements into control signals. A custom MMG sensor captures the acoustic signals of muscle tissue oscillations resulting from contractions. An analogue circuit amplifies and filters these signals, and a microcontroller translates the results into pulses. These pulses drive a stepper motor mechanism designed to be mounted on the parameter knobs of effects pedals. The primary goal in designing Biostomp is to offer a robust, inexpensive, and easy-to-operate platform for integrating biological signals into both traditional and contemporary music performance practices without requiring intermediary computer software.
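
The control chain can be simulated in a few lines (a Python sketch of the idea only; the actual device uses an analogue circuit and microcontroller firmware, and the threshold and step count here are placeholders): each upward crossing of the MMG envelope over a threshold yields one pulse, and each pulse advances the stepper motor a fixed number of steps.

```python
import numpy as np

THRESHOLD = 0.5        # placeholder trigger level
STEPS_PER_PULSE = 8    # placeholder stepper increment per pulse

def pulses_from_mmg(env, threshold=THRESHOLD):
    """Indices of upward threshold crossings in an MMG envelope."""
    above = env > threshold
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

env = np.abs(np.sin(np.linspace(0, 6 * np.pi, 300)))  # fake MMG envelope
pulses = pulses_from_mmg(env)
print(f"{len(pulses)} pulses -> {len(pulses) * STEPS_PER_PULSE} steps on the knob")
```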

LDS (2016)

A light-dependent step sequencer. 

Ellix (2016)

A multimodal, modular wearable interface w/ IMU, infrared, blood pressure, and flex sensors + a joystick and several buttons

spectacleX (2015)

A pair of glasses equipped with IMU and infrared sensors

modGuit_ (2015)

An augmented baritone Fender Telecaster w/ IMU, pressure, and infrared sensors & a microcontroller

dat-I (2015)

A wearable biomonitoring system (EMG, GSR, blood pressure, and accelerometer).

NoiseWig~ (2014)

A Mohican-style wig equipped with IMU sensors & a custom MAX/MSP patch

kaybol[ ]a (2014)

A generative data sonification (work in progress) based on Hafıza Merkezi's dataset of "Enforced Disappearance" cases.