EXHIBITIONS

2011: Beilharz, K. and Martin, A. Windtraces site-specific sound installation, Bondi Sculpture by the Sea, Tamarama Beach cliffs, Oct.-Nov.

2010: Beilharz, K., Haeusler, M., Barker, T. and Ferguson, S. Polymedia Pixel in Oliver Schuerer, Gernot Tscherteu, Martin Tomitsch (curators) Media Architecture Biennale 2010, Künstlerhaus Wien, Austria, 7-30 October; catalogue published by the Media Architecture Institute.

2007: Sonic Tai Chi in 'Boom' at Media Infinity: Taiwan-Australia New Media Arts Forum, Taiwan Kuandu Museum of Fine Arts, (Jakovich, J. and Beilharz, K.)

2006: Fluid Velocity in 'veloCITY' exhibition, Tin Sheds Gallery, Sydney, Oct. 12 - Nov. 4, using IRCAM WiSeBox wireless sensors, a bicycle, Max/MSP + Jitter, real-time generative sound and on-screen virtual life.

2006: Fabrication (II): The Cry of Silk (CD), Tin Sheds Gallery, University of Sydney.

2005-2006: Sonic Tai Chi (Jakovich & Beilharz) in BetaSpace at Sydney Powerhouse Museum interactive installation using Max/MSP + Jitter sonification with computer vision, November 24 – January 5.

2005: Sonic Tai Chi: Interludes (Jakovich & Beilharz) interactive performance using Max/MSP + Jitter with computer vision, Cité Internationale des Arts, Paris.

2005: Sonic Kung Fu, Sydney Esquisse Exhibition, interactive multimedia using Max/MSP + Jitter with computer vision.

2002: Collaborations Multimedia Gallery Installation (floor pressure sensors, live video, web-cam, Internet, Max/MSP - original images and music) collaborative live interactive public work, developed with Mark Fell - Creativity and Cognition 4 Conference, University of Loughborough Gallery, U.K.

2001: Floriferous Multimedia digital media gallery installation (original music, video and graphic design by Kirsty Beilharz), duration: 14 minutes, first installed Tin Sheds Gallery 'CrossTalk' Exhibition, University of Sydney Faculty of Architecture (with Densil Cabrera, Nigel Helyer, Mark Jones, Konrad Skirlis, Michaela James) October 19 - November 10.

2001: Floriferous Multimedia for Concert performance (original music, video and graphic design by Kirsty Beilharz), duration: 14 minutes, C21 Contemporary Music Festival - Multimedia & Technology Concert, University of Sydney Department of Music.


PROJECTS

Interactive Polymedia Pixel

Designed by Kirsty Beilharz, M. Hank Haeusler, Sam Ferguson and Tom Barker. Fabrication assisted by Rom Zhirnov and students of the Situated Media Installation Studio. This project investigates Urban Digital Media, a field that inhabits the intersection between architecture, information and culture in the arena of technology and building. It examines the requirements of public space, such as adaptability, new modes of communication, and transformative environments with flexibility for future need. The Polymedia Pixel is a new form of public display and situated media device protocol. It was first presented as 'Interactive Polymedia Pixel and Protocol for Collaborative Creative Content Generation on Urban Digital Media Displays' by M. Hank Haeusler, Kirsty Beilharz and Tom Barker at the International Conference on New Media and Interactivity, 28-30 April 2010, Istanbul.

Windtraces (Sculpture by the Sea 2011) Site-Specific Sound Installation

Windtraces is a multi-channel, site-specific sound installation that was exhibited as part of the Sculpture by the Sea exhibition in Sydney in November 2011. It uses data from meteorological sensors as inputs to algorithmic processes to generate a dynamic soundscape in real time. Sculpture by the Sea is a large-scale art exhibition that takes place each year on the coastal pathway between Bondi and Tamarama beaches in Sydney, Australia. It is a free event and in 2011 attracted more than half a million visitors. Windtraces comprised a set of 14 loudspeakers distributed across a steep rock face, emitting sounds generated by algorithmic processes controlled by sensor data describing the meteorological conditions at the site.

In Windtraces, the movement of each sound across the rock is controlled by a finite state grammar. The loudspeakers are located in crevices in the rock surface; these crevices form natural contours that the spatial movement of sounds is intended to evoke. When a sound is introduced, it is played from a specific loudspeaker and then in linear sequence across a number of adjacent speakers, creating a movement that follows a path around the rock. These paths are probabilistically selected by the finite state grammar: each state corresponds to a particular speaker and is connected to between one and three other states. A discrete probability distribution associated with each state describes the probabilities of subsequent states, and each wind direction is mapped to a set of probability distributions.
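
The grammar described above amounts to a weighted random walk over the speaker array, keyed by wind direction. A minimal Python sketch of the idea, using assumed speaker indices and transition probabilities (the installation's actual tables and implementation are not reproduced here):

    import random

    # Illustrative transition tables (assumed values): each state is a
    # loudspeaker index; for each wind direction, every speaker lists the
    # 1-3 reachable next speakers with their probabilities.
    TRANSITIONS = {
        "NE": {0: [(1, 0.7), (3, 0.3)],
               1: [(2, 0.6), (4, 0.4)],
               2: [(0, 1.0)],
               3: [(4, 0.5), (1, 0.5)],
               4: [(0, 0.8), (2, 0.2)]},
        "SW": {0: [(3, 1.0)],
               1: [(0, 0.5), (2, 0.5)],
               2: [(4, 0.7), (1, 0.3)],
               3: [(1, 0.6), (4, 0.4)],
               4: [(3, 1.0)]},
    }

    def speaker_path(wind_direction, start, length):
        """Walk the finite state grammar, returning the sequence of
        speakers along which one sound travels."""
        table = TRANSITIONS[wind_direction]
        state, path = start, [start]
        for _ in range(length - 1):
            nexts, weights = zip(*table[state])
            state = random.choices(nexts, weights=weights)[0]
            path.append(state)
        return path

    print(speaker_path("NE", start=0, length=6))   # e.g. [0, 1, 2, 0, 3, 4]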

Multimodal data interaction with multi-touch table surface

Interactive sonification using a multi-touch multimodal display. This research develops a visual and sonic interface for interactive data enquiry on a multi-touch table surface. The table facilitates collaborative interaction, as well as comparative and sequential analysis tasks, and is currently oriented towards time-series data interrogation. The video was made by Sam Ferguson. The concept was developed by Prof Kirsty Beilharz, Dr Sam Ferguson and Claudia Calo of the UTS DAB Sense-Aware Lab.

Charisma Ensemble 'Diamond Quills' Performance

Recording from the Sydney Conservatorium of Music, performed by Ros Dunlop, Julia Ryder and David Miller of Charisma Ensemble. They also performed Diamond Quills at the NIME (New Interfaces for Musical Expression) conference in the Eugene Goossens Hall at the ABC Centre, Sydney, on 18 June. Duration: approximately 12 minutes.

New Interfaces for Musical Expression (NIME2010++) Sydney

It was our great pleasure to host the NIME++2010 international conference (http://nime2010.org/) at UTS in Sydney from 15-18 June. The conference consisted of fully peer-reviewed paper tracks, poster and demo sessions, installations, concert performances, club night performances and keynote talks by Stelarc and Nic Collins (Hardware Hacking). The full set of photos can be seen here: http://www.flickr.com/photos/nime2010/

Human DNA genetic data sonifications

In the ICAD (International Conference on Auditory Display) sonification challenge 2009, Sam Ferguson (UTS Sense-Aware Lab) won with his DNA sonification of the human genome, using note length and harmony to represent intron and exon relationships in the gene.
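
As a rough illustration of the general intron/exon idea (not Ferguson's actual winning mapping), a Python sketch that gives exon bases short notes and intron bases longer notes an octave lower; the pitch table is an assumption:

    BASE_PITCH = {"A": 60, "C": 63, "G": 67, "T": 70}   # assumed MIDI pitches

    def sonify(sequence, exon_ranges):
        """Yield (midi_note, duration_s) per base: exons play short and
        high, introns long and an octave lower, making structure audible."""
        exonic = set()
        for start, end in exon_ranges:
            exonic.update(range(start, end))
        for i, base in enumerate(sequence):
            in_exon = i in exonic
            pitch = BASE_PITCH[base] - (0 if in_exon else 12)
            yield pitch, 0.1 if in_exon else 0.4

    notes = list(sonify("ATGCGTAC", exon_ranges=[(0, 4)]))
    print(notes)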

'Sonic Tai Chi' INTERACTIVE INSTALLATION (BETASPACE)

Uses computer vision (video tracking in Cycling '74 Max/MSP) to capture movement data that drives the visualisation and sonification. Generative Cellular Automata rules (the Game of Life) propagate particles and sonic grains in response to users' lateral motion: body motion in one direction propagates and promotes liveliness and generativity, while motion in the opposite direction restricts and eventually stifles activity. The user's interaction also affects the promulgation and panning of the audio synthesis to elucidate the spatial relationship between gesture and display. Sonic Tai Chi by Joanne Jakovich and Kirsty Beilharz (2005-2006), BetaSpace installation at the Sydney Powerhouse Museum of Technology.
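
A minimal Python sketch of this coupling, assuming a small toroidal Game of Life grid in which each live cell would trigger a particle and a sonic grain (grid size and seeding rates are illustrative, not the installation's values):

    import random

    SIZE = 16   # illustrative grid size

    def step(grid):
        """One Game of Life generation on a toroidal grid."""
        nxt = [[0] * SIZE for _ in range(SIZE)]
        for y in range(SIZE):
            for x in range(SIZE):
                n = sum(grid[(y + dy) % SIZE][(x + dx) % SIZE]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                        if (dy, dx) != (0, 0))
                nxt[y][x] = 1 if n == 3 or (grid[y][x] and n == 2) else 0
        return nxt

    def apply_motion(grid, lateral_velocity):
        """Couple body motion to the CA: motion one way seeds live cells
        (liveliness), motion the other way clears them (stifling)."""
        for _ in range(int(abs(lateral_velocity) * 10)):
            x, y = random.randrange(SIZE), random.randrange(SIZE)
            grid[y][x] = 1 if lateral_velocity > 0 else 0
        return grid

    grid = [[0] * SIZE for _ in range(SIZE)]
    grid = apply_motion(grid, lateral_velocity=0.8)   # rightward motion seeds cells
    grid = step(grid)                                 # each live cell -> one grain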

Hyper-Shaku AUGMENTED INSTRUMENT

A gesture-triggered and sound-sensing hyper-instrument: an audio and visual augmentation of live performance in which head motion, noisiness, loudness, pitch-tracking and velocity are used to scale parameters in Granular Synthesis, Neural Oscillator Network (NOSC) and Evolutionary Looming generative processes. Sound, computer vision and motion sensor input detect gestural effects, which are used to send messages and input values to the generator modules. Microphone acoustic input controls Looming (with loudness) and the granular synthesis (with loudness and noisiness measures); these gesture attributes also send messages to the Neural Oscillator Network and visualisation.

The Max/MSP Neural Oscillator Network patch is used as a stabilising influence affected by large camera-tracked gestures. It is modelled on individual neurons: dendrites receive impulses and, when the critical threshold is reached in the cell body (soma), output is sent to other nodes in the network. The 'impulses' in the musical system derive from the granular synthesis pitch output. This example uses a Neural Oscillator Network model with four synapse nodes to disperse sounds, audibly dissipating but rhythmic and energetic. Irregularity is controlled by head motion tracked through the computer vision. Transposition and pitch class arrive via the granular synthesis from pitch analysis of the acoustic shakuhachi, with Looming intensity as a multiplier (transposition upward with greater intensity of gesture).
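
The threshold-firing behaviour described above can be sketched as follows. This is a simplified Python stand-in for the Max/MSP patch, with thresholds, decay factor and network topology assumed for illustration:

    class OscillatorNode:
        """One node of a threshold-firing network: impulses accumulate in
        the 'soma'; crossing the threshold fires the node, which resets and
        passes a weakened impulse to its synapse neighbours."""
        def __init__(self, threshold):
            self.threshold = threshold
            self.level = 0.0
            self.neighbours = []

        def receive(self, impulse, on_fire):
            if impulse < 1e-3:              # ignore vanishing impulses
                return
            self.level += impulse
            if self.level >= self.threshold:
                self.level = 0.0
                on_fire(self)               # e.g. trigger a grain / visual event
                for n in self.neighbours:
                    n.receive(impulse * 0.5, on_fire)   # dissipating energy

    # Four synapse nodes in a ring, driven by an impulse derived (in the
    # piece) from the granular synthesis pitch output:
    nodes = [OscillatorNode(threshold=1.0) for _ in range(4)]
    for i, n in enumerate(nodes):
        n.neighbours = [nodes[(i + 1) % 4]]
    nodes[0].receive(1.2, on_fire=lambda n: print("fire", nodes.index(n)))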

Foldable, flexible display

This project (with Andrew Vande Moere) employs multi-modal interpretations of the folding metaphor, embodying wearable visualisation + sonification as self-expression. In our research, we are evaluating the effectiveness of abstract display and its social motivation. Muscle wire is used to make very subtle movements. Motion, IR and microphone (audio) sensors and micro-processor calculations gather and impart social data about the wearer and her/his context. The project aims to explore the beauty of everyday materiality and folding as a metaphor capable of embedding complex meanings in subtle ways, electronically altering the externally perceived self-expression of its wearer through parallel, real-time sensor readings.

This work queries the interaction between auditory and visual display in this bi-modal scenario, in which wearable visualization relates to 'wearable computing', 'smartwear' and electronic fashion technology (e-fashion), instead of focusing on sensor and signal analysis, real-time context recognition or hardware development and miniaturisation. Wearable visualization is specifically concerned with the visual and auditory communication of information to the wearer, or to any people present in the wearer's physical vicinity. Hence, it has a social computing element of interaction between devices (infrared communication between multiple devices) and provides a representation of social awareness of proximity/sociability.

'Wireless Gamelan' Cyborg gestural interaction

Using RFID tags to control a quadraphonic music performance environment; developed with Sam Ferguson and Jeremiah Nugroho (research assistants).

SensorCow Sonification

SensorCow is a sensor-controlled sonification of motion using La Kitchen Kroonde Gamma wireless radio-frequency transmission + gyroscopic, accelerometer and binary motion sensors + Max/MSP. The contiguous data-flow provided by the calf's walking, head-shakes, eating, etc. is mapped to separate channels for each sensor and to distinctive timbres to differentiate and isolate the effect of particular gestures. It creates an auditory profile of the normally visually observed actions of the animal.
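
A minimal Python sketch of the channel/timbre routing idea (sensor names, channel numbers and pitch ranges are assumptions, not the project's actual values):

    # Each wireless sensor stream gets its own output channel and a
    # distinctive timbre so individual gestures remain audibly separable.
    CHANNEL_MAP = {
        "head_gyro":  {"channel": 1, "timbre": "metallic", "pitch_hz": (200.0, 800.0)},
        "leg_accel":  {"channel": 2, "timbre": "woody",    "pitch_hz": (80.0, 300.0)},
        "jaw_binary": {"channel": 3, "timbre": "click",    "pitch_hz": (1000.0, 1000.0)},
    }

    def route(sensor, value, lo=0.0, hi=1.0):
        """Scale one normalised sensor reading into its channel's pitch range."""
        spec = CHANNEL_MAP[sensor]
        p_lo, p_hi = spec["pitch_hz"]
        norm = (value - lo) / (hi - lo)
        return spec["channel"], spec["timbre"], p_lo + norm * (p_hi - p_lo)

    print(route("head_gyro", 0.5))   # (1, 'metallic', 500.0)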

'Emergent Energies' Sensate Spatial Installation

By Amanda Scott, Kirsty Beilharz and Andrew Vande Moere. Emergent Energies is a socially-aware, responsive Lindenmayer tree visualisation and sonification that displays an embedded history over time, revealing the number of people, their proximity, location, pace/velocity of movement and intensity of interaction in a social space. Motion information is captured using pressure mats under the carpet in a sensate lab. Colour is mapped to auditory timbre, vertices to location and line thickness to duration. The mapping of gesture to visual and auditory display is considered here as a type of aesthetic sonification in which the contiguous data stream comes from the user's rate and scope of movement.
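
For reference, the core of a Lindenmayer tree is a simple string-rewriting loop; the rule below is a classic bracketed example, and the mapping notes are illustrative rather than the installation's exact code:

    # Repeated rewriting grows the branching tree string that is then
    # drawn and sonified.
    RULES = {"F": "F[+F]F[-F]F"}

    def expand(axiom, depth):
        """Rewrite the axiom `depth` times."""
        s = axiom
        for _ in range(depth):
            s = "".join(RULES.get(ch, ch) for ch in s)
        return s

    print(expand("F", 2))
    # In Emergent Energies terms (assumed coupling): visitor activity from
    # the pressure mats would choose rewrite depth and branch angle, while
    # drawing attributes double as audio parameters: colour -> timbre,
    # vertex position -> location, line thickness -> duration.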

'Fluid Velocity' Interactive Installation

A physical bicycle interface with visual projection and stereo audio production in the Tin Sheds Gallery, University of Sydney. It used IRCAM WiSeBox (Flety 2005) WiFi transmission of data from sensors located on the bicycle frame and handlebars to transform the 3D 'creature' on screen and to vary the filtering and panning of the electronic sound. The programming environment was Max/MSP and Jitter (Puckette & Zicarelli, 1990-2005). Pressure on the handlebars, rotation, braking and pedalling velocity affected the angularity, splay, tentacle thickness, number of limbs and waviness of the virtual multipod 3D creature in front of the rider. It used binary, piezo pressure, IR proximity, accelerometer and gyroscopic sensors.
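
The piece's mappings were built in Max/MSP; a rough Python equivalent of the scaling-and-panning step, with sensor names and ranges assumed purely for illustration:

    import math

    def scale(value, in_lo, in_hi, out_lo, out_hi):
        """Linear rescaling of a sensor reading into a synthesis parameter."""
        t = (value - in_lo) / (in_hi - in_lo)
        return out_lo + t * (out_hi - out_lo)

    def equal_power_pan(x):
        """x in [0, 1] -> (left, right) gains with constant perceived power."""
        return math.cos(x * math.pi / 2), math.sin(x * math.pi / 2)

    # Assumed sensor readings and mappings:
    pedal_rpm, steer = 90.0, 0.3
    cutoff = scale(pedal_rpm, 0, 180, 200, 8000)              # pedalling -> filter cutoff
    left, right = equal_power_pan(scale(steer, -1, 1, 0, 1))  # steering -> panning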

'The Music Without' motion sonification

Using Kroonde wireless sensors and Max/MSP to transform the gestural activity of the violinist into real-time collaboration/accompaniment, giving voice to the external physicality of playing music. Most cooperative automated accompaniment programs seek to follow pitch, rhythm or harmonic paradigms of the music, whereas this work highlights the exertion of tone production and the gestural attributes of performance, sometimes revealing surprising features.

+90 degrees

GPS-data-driven composition/sonification using generative structures derived from formal topology, determined by GPS values read into Max/MSP in real time, like 'live ice' dynamic within a determinate form. The CO2-neutral Greenland crossing by my cousin, Linda Beilharz, provided the GPS data for the project, with the points plotted on the map of Greenland in the Max/MSP real-time software. Completed crossings include Antarctica (-90 degrees) and Greenland; the next data-generating polar crossing is the North Pole in February-March 2009, and polar ice-cap crossing data will be used for future projects.
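
A minimal sketch of how GPS values might be read into musical parameters (the actual Max/MSP mapping is not reproduced here; the ranges below are assumptions):

    # Latitude sets register, longitude sets stereo position.
    def gps_to_params(lat_deg, lon_deg):
        midi = 36 + (lat_deg + 90.0) / 180.0 * 48.0   # -90..+90 deg -> MIDI 36..84
        pan = (lon_deg + 180.0) / 360.0               # -180..+180 deg -> 0..1
        return midi, pan

    print(gps_to_params(-75.1, -43.0))   # a point on an Antarctic crossing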

Fabrication II: The Cry of Silk

Composed for the opening of Amanda Robins' exhibition, What Lies Beneath, at the Tin Sheds Gallery in Sydney, March-April 2006. Robins paints and draws highly detailed and realistic interiors of coats and garments, continuing the long tradition of visual arts interpretation through drapery. The art of drapery is concerned with layers, superimposition, veiling, concealing, embodying and appreciation of different textures and folds. These metaphors transgress sensory boundaries and apply equally well to sound design.

The idea of fabrics, textiles, textures and their embedded and embodied meanings motivates the integration of collected dynamic fabrics (recorded leather, feathers, fur, silk, corduroy, zippers, velcro, canvas) interpreted through various filters and processes of computer composition. Fabrication II: The Cry of Silk is a synaesthetic and perceptual exploration of fabric sounds eliciting mental images that are normally seen and felt. The work aims to shift our consciousness to a different level of sensory perception of fabrics. Historically, the mythology of The Cry of Silk is also an interesting and inspiring one, in which the voice of the finest silk being torn is said to have resembled a cry, with its sensual connotations.

We seldom think of 'giving voice' to materials and cloth, yet the textures and diversity of sounds available by brushing, rubbing, tearing, rustling and caressing fabrics provide a rich sound world capable of focusing our ear at a deep level of attention to minutiae and detail. This coincides with an introspective examination of micro and particle sound design, magnified by musical exploration and processing of sounds in close scrutiny. Subtleties and intricacies of tiny sounds are extended, augmented, transposed and amplified to illuminate the beauty and curiosity of their microstructure in a way that we may not ordinarily hear and appreciate.

Generative Composition

Beilharz, K. (2005). Integrating Computational Generative Processes in Acoustic Music Composition, in Edmonds, E., Brown, P. and Burraston, D. (eds), Generative Arts Practice '05: A Creativity and Cognition Symposium, University of Technology Sydney, pp. 5-20.
Beilharz, K. (2006). Interactive Generative Processes for Designing Sound Generative Music Composition: Interactive Generative Installation Design and Responsive Spatialisation (poster), in Gero, J.S. (ed.), Proceedings of the Design Computing and Cognition Conference, Kluwer, in press 17/02/06.
Beilharz, K. (2004). Designing Sounds and Spaces: Interdisciplinary Rules & Proportions in Generative Stochastic Music and Architecture, Journal of Design Research, 4(3): http://jdr.tudelft.nl/

Urban Chimes

(Tubular Body Bells) is an urban-scale virtual chime instrument that can be played by two or more users collaboratively. It is a site-specific interactive sound and visual installation designed to augment the ventilation pipes adjacent to IRCAM on Place Igor Stravinsky, a prominent, identifying architectural feature of Renzo Piano and Richard Rogers' Centre Pompidou and IRCAM. Two internet cameras capture the gestures of two or more visitors to the plaza, which are used to control generative structures of the synthesized audio display. Each pipe represents a different timbre, with pitch mapped along a vertical axis; sounds can be generated using hand or body motions along this axis. The system is implemented using Max/MSP for the synthesized sounds and Jitter with Pelletier's computer vision cv.jit objects for gesture capture, video manipulation and projection.
Jakovich, J. & Beilharz, K. (2006). "Urban Chimes: Tubular Body Bells" Outdoor Audio-Visual Installation Proposal for IRCAM, Centre Pompidou, in Proceedings of New Interfaces for Musical Expression (NIME), IRCAM.
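
A minimal Python sketch of the vertical pitch mapping described above (the pipe list and MIDI ranges are assumptions for illustration):

    # Each pipe owns a timbre; a tracked hand's height within the pipe's
    # image region selects the pitch.
    PIPES = [
        {"timbre": "bell", "low_midi": 48, "high_midi": 72},
        {"timbre": "tube", "low_midi": 36, "high_midi": 60},
    ]

    def pitch_for(pipe_index, y_norm):
        """y_norm: tracked vertical position, 0 = bottom of pipe, 1 = top."""
        pipe = PIPES[pipe_index]
        return round(pipe["low_midi"] + y_norm * (pipe["high_midi"] - pipe["low_midi"]))

    print(pitch_for(0, 0.5))   # MIDI 60 on the 'bell' pipe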

'SYBIL' Information sonification TOOLKIT

Sonification is the process of representing information using sound. This research is concerned with mapping data to appropriate auditory dimensions in real time, using spatialisation to enhance differentiation of information streams. Information sonification links with my other research: interactive sonification, sonification pedagogy, gestural interaction, and sonification of socio-spatial activity in sensate environments. This is part of the larger Key Centre of Design Computing responsive environment project in the Sentient Lab, integrating sonification, visualisation, and curious and intelligent agents. The environment transforms human interaction into an adaptive, responsive space that can understand and learn about its users, combining design for ambient display with cutting-edge sensate, mobile and pervasive computing technologies.
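
As one example of spatialisation for stream differentiation, a sketch that assigns each concurrent data stream its own azimuth; this illustrates the general idea, not the SYBIL toolkit's API:

    def stream_azimuths(n_streams, spread_deg=120.0):
        """Spread n streams evenly across a frontal arc, centre = 0 degrees,
        so concurrent streams stay spatially separable."""
        if n_streams == 1:
            return [0.0]
        step = spread_deg / (n_streams - 1)
        return [-spread_deg / 2 + i * step for i in range(n_streams)]

    print(stream_azimuths(4))   # [-60.0, -20.0, 20.0, 60.0]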

Real-time generative responsiveness

Generative design methods/structures for visualization and sonification that can operate with low latency in real time for installations or performance of digital audio-visual display, e.g. Neural Network oscillators or homeostatic systems, L-Systems, and Cellular Automata, for implementation in evolving artworks triggered by interaction or data input.

Cuttings: Urban Islands

Book chapter: 'Sonic Islands', on sound installation, site-specific audio installation and interactive media (University of Sydney Press, 2006).

Sense-Aware Sonic Interaction Research Lab UTS 2008-2013


Interaction Design Studio Student Work

SmartSlab

Student Installation Tubes