INTERACTIVE POLYMEDIA PIXEL


Media Architecture Biennale Vienna 2010

Designed by Kirsty Beilharz, M. Hank Haeusler, Sam Ferguson and Tom Barker.
Fabrication assisted by Rom Zhirnov; electronics developed with participation by students of the Situated Media Installation Studio (UTS B. Sound and Music Design, B. Photography and Situated Media).

This research is an investigation into Urban Digital Media, a field that inhabits the intersection between architecture, information and culture in the arena of technology and building. It asks how contemporary requirements of public space in our everyday life, such as adaptability, new modes of communication and transformative environments that offer flexibility for future needs and uses, can be addressed by a new form of public display through the use of an interactive polymedia pixel and situated media device protocol.

The prototype design was first reported in the following paper, presented in Turkey:
'Interactive Polymedia Pixel and Protocol for Collaborative Creative Content Generation on Urban Digital Media Displays' by M. Hank Haeusler, Kirsty Beilharz and Tom Barker, at the International Conference on New Media and Interactivity, 28-30 April 2010, Istanbul: http://iletisim.marmara.edu.tr/newmedia/page/11/main-topics-of-the-conference

Polymedia Pixel Vienna

The weakness of many current media façades and building-scale interactive installation environments lies in the dearth of quality creative content and in their unresponsiveness: they ignore potential human factors, the richness of the locative situation and contextual interaction (Sauter, 2004). Media façades have matured from 2D visual displays to 3D voxel arrays that depict static and moving images with a spatial depth dimension (Haeusler, 2009). As the next step in this development, this research investigates a display that reacts empathetically to human interaction and is responsive to its urban context; integrates multiple modalities; saves energy intelligently; and enables community engagement in urban digital media content, i.e. responsive and interactive sensing capability.

Seven attributes of the Polymedia Pixel address the above-mentioned inadequacies of public displays (a sketch of the protocol idea follows the list):
(1) contextual responsiveness - to physical, environmental factors;
(2) interactive responsiveness - to human intervention and activity in the proximity;
(3) intelligence - smart controls that can adapt physical behaviour to suit conditions;
(4) multimodality - ability to communicate through non-visual channels, such as sound;
(5) sensing and communication - in order to sense/detect conditions of the environment and human interaction, and to be accessed by networked mobile devices;
(6) energy efficiency - optimising energy expenditure and capturing self-powering energy sources; and
(7) an open protocol for networked device controllers to receive communication from a wide variety of devices, enabling public access and interactive content localized to physical context.
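As a rough illustration of attribute (7), here is a minimal sketch, in Python, of how a networked device might address one pixel over UDP. The message fields (pixel_id, rgb, tone_hz, gain), the JSON encoding and the port number are our own illustrative assumptions, not the protocol specified in the paper.

    import json
    import socket

    # Hypothetical message format for a networked pixel controller.
    # The field names (pixel_id, rgb, tone_hz, gain) are illustrative
    # assumptions, not the wire format published in the 2010 paper.
    def make_pixel_message(pixel_id, rgb, tone_hz, gain):
        return json.dumps({
            "pixel_id": pixel_id,   # address of one pixel in the array
            "rgb": rgb,             # light output as an (r, g, b) triple
            "tone_hz": tone_hz,     # frequency for the pixel's speaker
            "gain": gain,           # loudness, 0.0-1.0
        }).encode("utf-8")

    # Any networked device (e.g. a mobile phone on the same network)
    # could then address a pixel directly over UDP.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(make_pixel_message(42, (255, 128, 0), 440.0, 0.6),
                ("192.168.0.42", 9000))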

 

Multimodal data interaction with multi-touch table surface

Interactive sonification using a multi-touch multimodal display. The objective of this research is to develop a visual and sonic interface for interactive data enquiry on a multi-touch table surface. The table facilitates collaborative enquiry, as well as comparative and sequential analysis tasks. It is currently oriented towards time-series data interrogation. This video was made by Sam Ferguson. The concept was developed by Prof Kirsty Beilharz, Dr Sam Ferguson and Claudia Calo of the UTS DAB Sense-Aware Lab.

Multitouch multimodal data interaction sonification

 

Charisma Ensemble 'Diamond Quills' Performance

Recording from the Sydney Conservatorium of Music, performed by Ros Dunlop, Julia Ryder and David Miller of the Charisma Ensemble. They also performed Diamond Quills at the NIME (New Interfaces for Musical Expression) conference in the Eugene Goossens Hall at the ABC Centre, Sydney, on 18 June. Duration: approximately 12 minutes.

Diamond Quills Max/MSP patch

 

 

Human DNA genetic data sonifications

In the ICAD (International Conference on Auditory Display) challenge 2009, Sam Ferguson (UTS Sense-Aware Lab) won with his DNA sonification of the human genome, using note-length and harmony to represent intron and exon relationships in the gene.
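A minimal sketch of that mapping idea, assuming MIDI note numbers and invented durations; this illustrates the note-length/harmony principle only, not Ferguson's actual winning code:

    # Exon bases get longer, consonant notes; intron bases shorter,
    # dissonant ones. The pitch table and durations are assumptions.
    BASE_PITCH = {"A": 60, "C": 64, "G": 67, "T": 72}   # MIDI note numbers

    def sonify(sequence, is_exon):
        """sequence: string of bases; is_exon: parallel list of booleans."""
        events = []
        for base, exon in zip(sequence, is_exon):
            pitch = BASE_PITCH[base] if exon else BASE_PITCH[base] + 1  # semitone shift = dissonance
            duration = 0.5 if exon else 0.125                           # note-length marks region type
            events.append((pitch, duration))
        return events

    print(sonify("ACGT", [True, True, False, False]))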

 

'Sonic Tai Chi' INTERACTIVE INSTALLATION (BETASPACE)

Sonic Tai Chi uses computer vision (video tracking in Cycling '74 Max/MSP) to capture movement data that drives the visualisation and sonification. Generative Cellular Automata rules (the Game of Life) propagate particles and sonic grains in response to users' lateral motion: body motion in one direction propagates and promotes liveliness and generativity, while motion in the opposite direction restricts and eventually stifles activity. The user's interaction also affects the propagation and panning of the audio synthesis to elucidate the spatial relationship between gesture and display.
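For readers unfamiliar with the generative engine, the sketch below shows one Game of Life generation on a toroidal grid. This is a minimal Python sketch; the installation itself ran in Max/MSP, and the coupling of live cells to particles and grains is only indicated in comments.

    import numpy as np

    # One Game of Life generation on a toroidal grid; in the installation the
    # live cells would seed visual particles and sonic grains.
    def life_step(grid):
        # Sum the eight neighbours of every cell by rolling the array.
        neighbours = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                         if (dy, dx) != (0, 0))
        # Survival with 2-3 neighbours, birth with exactly 3.
        return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

    grid = np.random.randint(0, 2, (16, 16))
    grid = life_step(grid)   # users' lateral motion would inject or remove cells here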

SonicTaiChi

Sonic Tai Chi by Joanne Jakovich and Kirsty Beilharz (2005-2006), BetaSpace installation at the Powerhouse Museum, Sydney

 

Hyper-Shaku AUGMENTED INSTRUMENT

A gesture-triggered, sound-sensing hyper-instrument: audio and visual augmentation of live performance in which head motion, noisiness and loudness, pitch-tracking, and velocity are used to scale parameters in Granular Synthesis, Neural Oscillator Network (NOSC) and Evolutionary Looming generative processes.

Gestural modification of the generative processes: sound, computer vision and motion sensor input detect gestural effects, which are used to send messages and input values to generator modules. Microphone acoustic input controls Looming (with loudness) and the granular synthesis (with loudness and noisiness measures). These gesture attributes also send messages to the Neural Oscillator Network and visualisation. The Max/MSP Neural Oscillator Network patch is used as a stabilizing influence affected by large camera-tracked gestures. It is modelled on individual neurons: dendrites receive impulses, and when the critical threshold is reached in the cell body (soma), output is sent to other nodes in the network. The 'impulses' in the musical system derive from the granular synthesis pitch output. This example uses a Neural Oscillator Network model with four synapse nodes to disperse sounds, audibly dissipating but rhythmic and energetic. Irregularity is controlled by head motion tracked through the computer vision. Transposition and pitch class arrive via the granular synthesis from pitch analysis of the acoustic shakuhachi, with Looming intensity as a multiplier (transposition upward with greater intensity of gesture).
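A minimal sketch of the node behaviour described above, assuming a simple leaky integrate-and-fire model; the threshold, leak rate and synaptic weight are invented for illustration and are not the values of the Max/MSP patch.

    class Node:
        def __init__(self, threshold=1.0, leak=0.95):
            self.potential = 0.0
            self.threshold = threshold   # critical level of the cell body (soma)
            self.leak = leak             # decay applied every step
            self.targets = []            # downstream nodes receiving output

        def receive(self, impulse):
            # Dendrites accumulate incoming impulses.
            self.potential = self.potential * self.leak + impulse

        def step(self):
            # When the soma crosses threshold, fire to the connected nodes.
            if self.potential >= self.threshold:
                self.potential = 0.0
                for target in self.targets:
                    target.receive(0.5)   # synaptic weight (assumed)
                return True               # a fired node would trigger a sound grain
            return False

    # Four synapse nodes in a ring, as in the Hyper-Shaku example.
    nodes = [Node() for _ in range(4)]
    for a, b in zip(nodes, nodes[1:] + nodes[:1]):
        a.targets.append(b)
    nodes[0].receive(1.2)                 # an impulse from granular-synthesis pitch output
    print([n.step() for n in nodes])      # [True, False, False, False]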

 

Foldable, flexible display

(With Andrew Vande Moere) employs multi-modal interpretations of the folding metaphor, embodying wearable visualisation + sonification as self-expression. In our research, we are evaluating the effectiveness of abstract display and its social motivation. Muscle wire is used to make very subtle movements. Motion, IR and microphone (audio) sensors, together with micro-processor calculations, gather and impart social data about the wearer and her/his context.

The project aims to explore the beauty of everyday materiality and folding as a metaphor capable of embedding complex meanings in subtle ways, electronically altering the externally perceived self-expression of its wearer through parallel, real-time sensor readings. This work queries the interaction between auditory and visual display in a bi-modal scenario in which wearable visualization relates to 'wearable computing', 'smartwear' and electronic fashion technology (e-fashion), instead of focusing on sensor and signal analysis, real-time context recognition, or hardware development + miniaturisation. Wearable visualization is specifically concerned with the visual and auditory communication of information to the wearer, or to any people present in the wearer's physical vicinity. Hence, it has a social computing element of interaction between devices (InfraRed communication between multiple devices) and provides a representation of social awareness of proximity/sociability.

Foldable, flexible display with Andrew Vande Moere; v.1 (above) made with Monika Hoinkis, v.2 (below) with Adrian Lombard (research assistants)

Folding

 

 

Mechanisms to Enable Musical Uses of Complex Sound Sources

Professor Kirsty Beilharz (mentor) with Dr Samuel Ferguson (mentee, Early Career Researcher), Faculty of Arts and Social Sciences Research Development Grant 2009.

GTR

Ukulele

Some interesting sound sources are not predictable enough to be manipulated in the manner necessary for most typical musical performances. This project takes one example of an unpredictable but interesting sound source, the feedback tones produced when a guitar is placed in close proximity to an amplifier, to investigate whether electro-mechanical systems and acoustic analysis can provide a mechanism for controlling interaction with unpredictable or complex sound sources. A novel interface and electro-mechanical mechanism for interaction with complex musical instruments will be produced, to facilitate new musical and cultural outputs.

This project uses Frontier Technologies to develop smart information use and promote an innovation culture and economy, delivering an innovation in new sound-creation methodology and the development of new musical instruments and interfaces: a cultural, creative and technological contribution. It relates directly to the New Interfaces for Musical Expression international conference that we will host in 2010 (Co-chairs Beilharz & Bongers). The methodology and analysis developed in this project can be applied to broader musical and data-mining (web database) contexts and exemplifies the University's strategic goal of "research that is at the cutting edge of creativity and technology".
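As a sketch of the acoustic-analysis side of this idea, the following Python fragment estimates the dominant feedback tone in a block of audio with an FFT peak-pick, so an electro-mechanical controller could respond (e.g. damp a string or reposition the guitar). Block size, sample rate and the response strategy are assumptions, not the project's implementation.

    import numpy as np

    def feedback_frequency(samples, sample_rate=44100):
        # Window the block, take the magnitude spectrum, pick the peak bin.
        windowed = samples * np.hanning(len(samples))
        spectrum = np.abs(np.fft.rfft(windowed))
        peak_bin = int(np.argmax(spectrum[1:])) + 1   # skip the DC bin
        return peak_bin * sample_rate / len(samples)

    # Synthetic test: a 440 Hz "feedback tone" plus noise.
    t = np.arange(4096) / 44100
    block = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(len(t))
    print(round(feedback_frequency(block), 1))   # close to 440 (within one ~10.8 Hz bin)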

 

AeSoniPhone

Smart Mobile Innovation Community of Practice and Learning

Collaboration with Professor Mary-Anne Williams (Innovation and Enterprise Research Laboratory) University of Technology, Sydney; Faculty of I.T., Faculty of Business, Apple and IBM. UTS Learning and Teaching Performance Fund Grant 2009. My contribution, with Sam Ferguson, is developing touch-applications for iPhone using interactive sonification of user-generated data and looking at issues of user-centred information representation.

 

'Wireless Gamelan' Cyborg gestural interaction

Using RFID tags to control quadraphonic music performance environment - developed with Sam Ferguson and Jeremiah Nugroho (research assistants)

WirelessGamelan

 

'Sonic Kung Fu' INTERACTIVE installation

Gallery soundspace during the Sydney Esquisse art festival.

Sonic Kung Fu by Joanne Jakovich and Kirsty Beilharz (2005) uses colour-tracking computer vision (via webcam) and Max/MSP + Jitter with Pelletier's cv.jit objects to recognise gestures of a particular colour. The physical space in the camera view is divided into vertical and horizontal regions that trigger different musical responses, so that the air or space can be 'played' like a musical interface or virtual instrument.
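A minimal sketch of the region-triggering logic, assuming a 3x3 grid and invented sound names; the actual patch ran in Max/MSP + Jitter with cv.jit rather than Python.

    GRID_ROWS, GRID_COLS = 3, 3

    def region_for(x, y, frame_w=640, frame_h=480):
        """Map a tracked colour-blob centroid to a grid cell index."""
        col = min(int(x / frame_w * GRID_COLS), GRID_COLS - 1)
        row = min(int(y / frame_h * GRID_ROWS), GRID_ROWS - 1)
        return row * GRID_COLS + col

    # Each cell triggers a different musical response (sound names assumed).
    RESPONSES = ["gong", "bell", "flute", "drum", "chime",
                 "pad", "pluck", "shaker", "cymbal"]
    print(RESPONSES[region_for(500, 120)])   # a gesture in the upper right -> "flute"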

 

 

SensorCow

SensorCow Sonification

SensorCow is a sensor-controlled sonification of motion using La Kitchen's Kroonde Gamma wireless radio-frequency transmission + gyroscopic, accelerometer and binary motion sensors + Max/MSP. The contiguous data-flow provided by the calf's walking, head-shakes, eating, etc. is mapped to separate channels for each sensor, with distinctive timbres to differentiate and isolate the effect of particular gestures. It creates an auditory profile of the normally visually observed actions of the animal.

 

'Emergent Energies' Sensate Spatial Installation

By Amanda Scott, Kirsty Beilharz and Andrew Vande Moere.

Emergent Energies is a socially-aware, responsive Lindenmayer tree visualisation and sonification that displays an embedded history over time, revealing the number of people, their proximity, location, pace/velocity of movement and intensity of interaction in a social space. Motion information is captured using pressure mats under the carpet in a sensate lab. Colour is mapped to auditory timbre, vertices to location, line-thickness to duration. The mapping of gesture to visual and auditory display is considered here as a type of aesthetic sonification in which the contiguous data stream comes from the user's rate of movement and scope.
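To make the Lindenmayer mechanism concrete, here is a minimal L-system expansion in Python; the rewriting rule and the link from activity level to growth depth are illustrative assumptions, not the installation's actual grammar.

    RULES = {"F": "F[+F]F[-F]"}   # branch left and right at each growth step

    def expand(axiom, rules, generations):
        # Rewrite every symbol that has a rule; leave brackets and turns alone.
        for _ in range(generations):
            axiom = "".join(rules.get(symbol, symbol) for symbol in axiom)
        return axiom

    # More visitors / faster movement could mean deeper growth (assumption):
    activity_level = 2
    print(expand("F", RULES, activity_level))
    # -> F[+F]F[-F][+F[+F]F[-F]]F[+F]F[-F][-F[+F]F[-F]]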

 

'Fluid Velocity' Interactive Installation

A physical bicycle interface, visual projection and stereo audio production in the Tin Sheds Gallery, University of Sydney. It used the IRCAM WiSeBox (Flety 2005) for WiFi transmission of data from sensors located on the bicycle frame and handlebars, transforming the 3D 'creature' on screen and variably filtering and panning the electronic sound. The programming environment was Max/MSP and Jitter (Puckette & Zicarelli, 1990-2005). Pressure on the handlebars, rotation, braking and pedalling velocity affected the angularity, splay, tentacle-thickness, number of limbs and waviness of the virtual multipod 3D creature in front of the rider. It uses binary, piezo pressure, IR proximity, accelerometer and gyroscopic sensors.

FluidVelocity

FluidVelocity

 

'The Music Without' motion sonification

Using Kroonde wireless sensors and Max/MSP to transform the gestural activity of the violinist into real-time collaboration/accompaniment, giving voice to the external physicality of playing music. Most cooperative automated accompaniment programs seek to follow pitch, rhythm or harmonic paradigms of the music, whereas this work highlights the exertion of tone production and the gestural attributes of performance, sometimes revealing surprising features.

MusicWithout

 

IceCaps

+90 degrees

GPS data-driven composition/sonification using generative structures derived from formal topology, determined by GPS values read into Max/MSP, like 'live ice' dynamic within a determinate form. The GPS points are plotted in real time in Max/MSP on a map of Greenland; the CO2-neutral Greenland crossing by my cousin, Linda Beilharz, provided the GPS data for the project. Completed crossings include Antarctica (-90 degrees) and Greenland, and the next data-generating polar crossing is the North Pole in February-March 2009; polar ice-cap crossing data will be used for future projects.

GPS real-data aesthetic sonification

The purpose behind the geeky device is actually to feed GPS data to Max/MSP for sonification, to evaluate our new interactive aesthetic sonification toolkit.
Many auditory display toolkits produce a sequence of scientific but distinctly unmusical-sounding results. We are trying to develop an on-the-fly method of reading, scaling and quantising data that can be aesthetically controlled in dimensions like timbre, modality and tempo, and that makes meaningful interpretations of contiguous linear time-based data such as GPS information. We are using local sounds mapped to significant waypoints to synthesize a geographically unique outcome.
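A minimal sketch of the reading-scaling-quantising step, assuming a Dorian pitch set and a three-octave MIDI range as the aesthetic controls; the real toolkit runs in Max/MSP.

    DORIAN = [0, 2, 3, 5, 7, 9, 10]   # scale degrees in semitones

    def quantise(value, lo, hi, scale=DORIAN, base_note=48, octaves=3):
        """Scale value from [lo, hi] onto `octaves` octaves of `scale`."""
        norm = max(0.0, min(1.0, (value - lo) / (hi - lo)))    # clamp and normalise
        step = int(norm * (len(scale) * octaves - 1))          # index into the pitch set
        octave, degree = divmod(step, len(scale))
        return base_note + 12 * octave + scale[degree]         # MIDI note number

    # Latitude samples from a Greenland crossing would sit roughly in 60-80 N:
    for latitude in (60.0, 67.3, 74.1, 80.0):
        print(latitude, "->", quantise(latitude, 60.0, 80.0))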




 

Fabrication II: The Cry of Silk

Composed for the opening of Amanda Robins' exhibition, What Lies Beneath, at the Tin Sheds Gallery in Sydney, March-April 2006. Robins paints and draws highly detailed and realistic interiors of coats and garments, continuing the long tradition of visual arts interpretation through drapery. The art of drapery is concerned with layers, superimposition, veiling, concealing, embodying and the appreciation of different textures and folds. These metaphors transgress sensory boundaries and apply equally well to sound design.

The idea of fabrics, textiles, textures and their embedded and embodied meanings motivates the integration of collected dynamic fabrics (recorded leather, feathers, fur, silk, corduroy, zippers, velcro, canvas) interpreted through various filters and processes of computer composition. Fabrication II: The Cry of Silk is a synaesthetic and perceptual exploration of fabric sounds eliciting mental images that are normally seen and felt. The work aims to shift our consciousness to a different level of sensory perception of fabrics. Historically, the mythology of The Cry of Silk is also an interesting and inspiring one, in which the voice of the finest silk being torn is said to have resembled a cry, with its sensual connotations.

We seldom think of 'giving voice' to materials and cloth, yet the textures and diversity of sounds available by brushing, rubbing, tearing, rustling and caressing fabrics provide a rich sound world capable of focusing our ear at a deep level of attention to minutiae and detail. This coincides with an introspective examination of micro and particle sound design, magnified by musical exploration and processing of sounds in close scrutiny. Subtleties and intricacies of tiny sounds are extended, augmented, transposed and amplified to illuminate the beauty and curiosity of their microstructure in a way that we may not ordinarily hear and appreciate.

Cry of Silk

 

Tasmanian Wilderness

A soundscape of sampled and processed natural sounds (no machine or urban sounds, 2005).

Paris Metro

A light-hearted quasi-retro urban soundscape and musical flânerie through a sequence of archetypal Parisian photographic images (captured in 2004-2005).

Audio CD: Thread ... Stitch ... Fray

Recorded by Sydney Mandolin Quartet (Jade 070, 1999)

Audio CD: Burning in the Heart of the Void

Performed by Nouvel Ensemble Moderne, Montreal, Quebec (Amberola 7141, 1998)

Bamboo

 

Bamboo Voice

A video-montage setting of a significantly abridged version of The White Face of the Geisha, a musical composition for solo shakuhachi and chamber ensemble, performed by Iwamoto Yoshikazu and Ensemble Recherche Freiburg at the Hannover Biennale in 2000. The images are derived from kimono fabrics, calligraphy and typical design patterns, infused with natural water, urban and environmental conflicts and dualities, confrontations and tranquil meditative qualities.

 

Aleatory

Generative Composition

Beilharz, K. (2005). Integrating Computational Generative Processes in Acoustic Music Composition. In Edmonds, E., Brown, P. and Burraston, D. (Eds), Generative Arts Practice '05: A Creativity and Cognition Symposium, University of Technology Sydney, pp. 5-20.
Beilharz, K. (2006). Interactive Generative Processes for Designing Sound. Generative Music Composition: Interactive Generative Installation Design and Responsive Spatialisation (Poster). In Gero, J.S. (Ed.), Proceedings of the Design Computing and Cognition Conference, Kluwer, in press 17/02/06.
Beilharz, K. (2004). Designing Sounds and Spaces: Interdisciplinary Rules & Proportions in Generative Stochastic Music and Architecture. Journal of Design Research, 4 (3): http://jdr.tudelft.nl/

 

 

Urban Chimes

Urban Chimes

(Tubular Body Bells) is an urban-scale virtual chime instrument that can be played by two or more users collaboratively. It is a site-specific interactive sound and visual installation designed to augment the ventilation pipes adjacent to IRCAM on Place Igor Stravinsky, a prominent, identifying architectural feature of Renzo Piano and Richard Rogers' Centre Pompidou and of IRCAM. Two Internet cameras capture the gestures of two or more visitors to the plaza, which are used to control generative structures of the synthesized audio display. Each pipe represents a different timbre, with pitch mapped along a vertical axis; sounds can be generated using hand or body motions along this axis. The system is implemented using Max/MSP for the synthesized sounds and Jitter with Pelletier's Computer Vision cv.jit objects for the gesture capture, video manipulation and projection.
Jakovich, J. & Beilharz, K. (2006). "Urban Chimes: Tubular Body Bells" Outdoor Audio-Visual Installation Proposal for IRCAM, Centre Pompidou. In Proceedings of New Interfaces for Musical Expression (NIME), IRCAM.

 

Sybil

'SYBIL' Information sonification TOOLKIT

Sonification is the process of representing information using sound. This research is concerned with mapping data to appropriate auditory dimensions in real time, using spatialisation to enhance differentiation of information streams. Information sonification links with my other research: interactive sonification, sonification pedagogy, gestural interaction, and sonification of socio-spatial activity in sensate environments. This is part of the larger Key Centre of Design Computing responsive environment project in the Sentient Lab, integrating sonification, visualisation, and curious and intelligent agents. The environment transforms human interaction into an adaptive, responsive space that can understand and learn about its users, designing for ambient display together with cutting-edge sensate, mobile and pervasive computing technologies.

 

Cuttings Urban Islands

Cuttings: Urban Islands

Book chapter 'Sonic Islands' on sound installation, site-specific audio installation and interactive media (University of Sydney Press, 2006).

 

 

Information Sonification

Information sonification is the process of representing data in an informative way using sound. This research is concerned with mapping data to appropriate auditory dimensions in real time, using spatialisation to enhance differentiation of information streams. Information sonification is related to other research, such as interactive sonification, sonification pedagogy, gestural interaction, sonification of socio-spatial activity in sensate environments and data-driven art.

 

Gesture-augmented hyper-instrument

Augmenting traditional instruments with electronic audio or visual display triggered by sensors, enhanced and multi-modal interfacing.
(ARC funded research project).

 

Research THEMES

interactive music
timbral analysis in Asian, contemporary electroacoustic and electronic music
analytical and conceptual frameworks (contributing to research in listed areas)
computational algorithmic and generative music systems
understanding bi-modality & multimodality
participant engagement in interaction design
physical/gestural interfaces
responsive computational environments
adaptive installation systems
computer vision / computer listening
gestural computer interaction for specific contexts
biometric data sonification (e.g. ECG, EEG, EMG, GSR, 4D motion) in context of wearable technologies for remote health monitoring and preventative care, intelligent garments/wearables, integrated sensing & user-centred, contextual data representation
eco-aware sonification + auditory graphing
aesthetic sonification + real time data representation
enhancing current sonification methods
autonomous visualization/sonification & sensing systems

 

Generative Sound Structures

Used for algorithmic and spectral functions in computer-assisted composition
e.g. scaling, interpolation, transposition, arithmetic functions, randomization, generative algorithms
explore new implementations of generative algorithms or structures (CA, GA, L-systems...) in music
generative structures for particle organization in granular synthesis

 

Real-time generative responsiveness

Generative design methods/structures for visualization and sonification that can operate with low latency in real time for installations or performance of digital audio-visual display, e.g. Neural Network oscillators or homeostatic systems, L-Systems, Cellular Automata for implementation in evolving artworks triggered by interaction or data input

 

Sense-Aware Lab

Sense-Aware

 

Polymedia Pixel

Jeremiah Nugroho & Sense-Aware Lab

 

Fluid Velocity interactive installation

 

Windtraces (Sculpture by the Sea 2011) & 'Interface' in Site-Specific Sound Installation

 


Windtraces is a multi-channel, site-specific sound installation that was exhibited as part of the Sculpture by the Sea exhibition in Sydney in November 2011. It uses data from meteorological sensors as inputs to algorithmic processes, to generate a dynamic soundscape in real time. Sculpture by the Sea is a large-scale art exhibition [1] that takes place each year on the coastal pathway between Bondi Beach and Tamarama Beach in Sydney, Australia. It is a free event and in 2011 attracted more than half a million visitors. Windtraces comprised a set of 14 loudspeakers distributed across a steep rock face, emitting sounds generated by algorithmic processes that were controlled by sensor data relating to meteorological conditions at the site. The first part of this article describes the conceptual, practical and artistic perspectives of the work in relation to large-scale musical interfaces.


Mapping Weather Data to Sound Material

The WGS (the generative control system) and the WSS (the sound synthesis system) were developed using the Max interactive platform [9]. The WSS simply plays back short recorded samples according to instructions from the WGS relating to timing, choice of sample, processing of the sample and the loudspeaker from which it is played in the spatial configuration. Timing and loudspeaker choice are controlled mainly by wind-related parameters. The sample and audio-processing parameters map data related to daily and instantaneous rainfall, instantaneous ultra-violet radiation intensity, local temperature, pressure and relative humidity. In order to draw concrete connections between sounds and weather conditions, perceptually informed quantities are calculated from this data, e.g. the heat index [10], which is more closely related to perceived temperature than a simple temperature measurement. In addition, numerically derived, categorical representations of weather conditions ('hot and sunny', 'windy and cloudy') are used to select between different collections of sound material, ensuring that different conditions result in clearly distinct sonic results.
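As an example of such a perceptually informed quantity, the sketch below computes a heat index using the Rothfusz regression, a widely used approximation of Steadman's index [10]; whether Windtraces used this exact formula is an assumption on our part.

    # Inputs: temperature in degrees Fahrenheit, relative humidity in percent.
    def heat_index(temp_f, rel_humidity):
        t, rh = temp_f, rel_humidity
        return (-42.379 + 2.04901523 * t + 10.14333127 * rh
                - 0.22475541 * t * rh - 6.83783e-3 * t * t
                - 5.481717e-2 * rh * rh + 1.22874e-3 * t * t * rh
                + 8.5282e-4 * t * rh * rh - 1.99e-6 * t * t * rh * rh)

    # 90 F at 70% humidity feels considerably hotter than the thermometer says:
    print(round(heat_index(90.0, 70.0), 1))   # ~105.9 F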

setup

The hardware and software infrastructure for Windtraces.

 

Spatialisation

In Windtraces, the movement of each sound across the rock is controlled by a finite state grammar. The loudspeakers are located in crevices in the rock surface; these crevices form natural contours that the spatial movement of sounds is intended to evoke. When a sound is introduced, it is played from a specific loudspeaker and then in linear sequence across a number of adjacent speakers, creating a movement that follows a path around the rock. These paths are probabilistically selected by the finite state grammar: each state corresponds to a particular speaker and is connected to between one and three other states, and a discrete probability distribution associated with each state describes the probabilities of subsequent states.

spatial layout

Speaker locations on the rock surface, and a finite state machine showing the correspondence between states and speakers, and an example set of probability distributions.

Each wind direction is mapped to a set of probability distributions. For example, when there is a sea breeze (i.e. wind coming from the right-hand side of the picture above), probabilities are configured so that sounds tend to originate from loudspeakers on the right and move leftward. Wind speed is mapped to two control variables: the interval between the time a sound is played from one speaker and the time it is played from the next, and the rate at which new sounds arise. A variety of different types of movement can be evoked using different time intervals and sets of probability distributions, e.g. clear wave-like motions from one side to the other, or complex scenes with many sounds following their own random paths around the network of speakers.
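A minimal sketch of this mechanism: each state is a speaker, and the wind direction selects a table of transition probabilities. The four-speaker topology and the numbers below are invented for illustration, not the installation's configuration.

    import random

    SEA_BREEZE = {                     # speakers numbered left (0) to right (3)
        3: {2: 0.8, 3: 0.2},           # sounds starting on the right...
        2: {1: 0.7, 3: 0.3},           # ...tend to drift leftward
        1: {0: 0.7, 2: 0.3},
        0: {0: 0.5, 1: 0.5},
    }

    def speaker_path(transitions, start, length):
        # Walk the finite state machine, sampling each next speaker from the
        # probability distribution attached to the current state.
        path = [start]
        for _ in range(length - 1):
            options = transitions[path[-1]]
            path.append(random.choices(list(options), weights=options.values())[0])
        return path

    # Wind speed would set both the hop interval and how often new paths begin.
    print(speaker_path(SEA_BREEZE, start=3, length=6))   # e.g. [3, 2, 1, 0, 0, 1]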



[1] Sculpture by the Sea official website. Accessed online at www.sculpturebythesea.com, September 21 (2011).
[2] Handley, D. History: Sculpture by the sea. Accessed online at www.sculpturebythesea.com/about/history.aspx, September 21 (2011).
[3] Stenglin, M. Making art accessible: opening up a whole new world. In Visual Communication, 6:2 (2007), 202-13.
[4] Scarlett, K. From Bondi to Aarhus: Sculpture by the Sea. In Art Monthly Australia, No. 222, Aug (2009), 18-20.
[5] Xenakis, I. La Légende d'Eer. France (music score): Montaigne (ed. 1995).
[6] Leslie, G., Schwarz, D., Warusfel, O. and Bevilacqua, F. Wavefield synthesis for interactive sound installations. In Proceedings of the 127th AES Convention, New York (2009).
[7] Oregon Scientific WMR100N Weatherstation. Accessed online at www.oregonscientific.com.au, September 21 (2011).
[8] WeatherSnoop software application. Accessed online at www.tee-boy.com/weathersnoop, September 21 (2011).
[9] Puckette, M., Zicarelli, D. et al. Max/MSP software application. San Francisco CA, Cycling 74 (1990-2010)
[10] Steadman, R.G. The Assessment of Sultriness. Part I: A Temperature-Humidity Index Based on Human Physiology and Clothing Science. In Journal of Applied Meteorology, 18:7 (1979), 861-873.
[11] Roads, C. Grammars as Representations for Music. Computer Music Journal, 3:1 (1979), 48-55.
[12] Edmonds, E. Art, Interaction and Engagement. In Candy, L. and Edmonds, E. (Ed.s) Interacting: Art, Research and the Creative Practitioner, Libri Publishing, U.K. (2011).
[13] Costello, B. Many Voices, One Project. In Candy, L. and Edmonds, E. (Ed.s) Interacting: Art, Research and the Creative Practitioner, Libri Publishing, U.K. (2011).
[14] Bilda, Z. Designing for Audience Engagement. In Candy, L. and Edmonds, E. (Ed.s) Interacting: Art, Research and the Creative Practitioner, Libri Publishing, U.K. (2011).
[15] Paine, G. and Drummond, J. Developing an Ontology of New Interfaces for Realtime Electronic Music Performance. In Proceedings of the Electroacoustic Music Studies (EMS), Buenos Aires (2009).
[16] Paine, G. and Drummond, J. TIEM Survey Report: Developing a Taxonomy of Realtime Interfaces for Electronic Music Performance. In Proceedings of the Australasian Computer Music Conference (ACMC), QUT Brisbane (2009).
[17] Garth Paine official website. Accessed online at http://www.activatedspace.com/Installation_Works/Reeds/REEDS.html, February 4 (2012).
[18] Bulley and Jones' Variable 4 official website. Accessed online at http://www.variable4.org.uk, February 4 (2012).
[19] Video of Bowen's Tele-Present Water. Accessed online at http://vimeo.com/25781176, February 4 (2012).
[20] Bowen's Tele-Present Water official website. Accessed online at http://www.dwbowen.com/tp_water_series.html, February 4 (2012).

 

Diffuse

Diffuse no.6

 

 

SARC

 

SMIS

Diffuse3

 

Diffuse2

 

 

Diffuse Season 1

 

 

Polymedia Pixel Presentation by Matthias Hank Haeusler at Soirée

At Wednesday's Sense-Aware research soirée Matthias presented the Polymedia Pixel research project in the context of anamorphic, multidimensional, architectural and autonomous pixel objects that can sense, display and compute independently for the purposes of interaction and visualisation/sonification integrated into the structure of architectural spaces.

Polymedia Pixel Video Overview

By Kirsty Beilharz, Matthias Haeusler, Tom Barker & Samuel Ferguson. Responsive, sensing, autonomous architectural modules for media façades and situated media. Each 'pixel' can sense sound, proximity, light and motion; communicates with other pixels; and displays sound and light in a form of massed ambient visualization & sonification. Capable of architectural integration, it is intended to respond to eco-data and information about the inhabitants of a space or building and its energy and climatic attributes. This research aims to embed computing in architecture.

 

Smartwear & Wearable Technologies ThinkTank


Advanced textiles, smart materials, sonification + interaction, health monitoring + user experience. My presentation concerns sonification and interaction in wearable technologies and smart clothing, focused on technology integration and user-centred design.

LilypadHat Lili

 

 

Situated Media Installation Exhibition


When: Thursday, 11 November 2010
Where: UTS Building 6 (DAB) & Building 3 (Bon Marche Studio) 702-730 & 755 Harris Street Ultimo
Time: 11:00 - 13:15 & 14:00 - 16:30

SMIS

 

Polymedia Pixel Exhibition Update

In the Media Architecture Biennale, Vienna (photos by Matthias Hank Haeusler):

Vienna1

Vienna3

 

The finished Polymedia Pixel prototype:

 

 

 

Interactive Polymedia Pixel - Media Architecture Biennale Vienna 2010





Theme 2010: Urban Media Territories; the re-stratification of urban public spaces through digital media.
http://www.mediaarchitecture.org/biennale-2010-exhibition/




 

The Fabrication Process



Icosidodecahedron form: 3D-printed model by Matthias Hank Haeusler. A precision-centred 3D print was made in order to generate the mould.

Hank's book was also at the Vienna exhibition: http://www.mediaarchitecture.org/media-facades-hank-haeusler/
The book introduces the terminology of media architecture and, in its first part, traces the history of media façades through world-famous examples from Times Square in New York to the Centre Pompidou in Paris.

References
Haeusler, M. Hank. Media Facades: History, Technology, Content. avedition, Ludwigsburg, 2009.

Sauter, Joachim, "Das vierte Format; Die Fassade als mediale Haut der Architektur", Fleischmann, Monika; Renhard, Ulrike (Eds), Digitale Transformationen. Medienkunst als Schnittstelle von Kunst, Wissenschaft, Wirtschaft und Gesellschaft, (Heidelberg: whois verlags und vertriebsgesellschaft, 2004).

 

Emotiv EPOC for transmitting brain signals to Max/MSP with OSC



We are using the EPOC headset's live EEG data feed as the source for sonification of brain activity, and to try to distill useful information from the most active and dynamic brain regions and overcome the intrinsic orthogonality of the feeds.
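A minimal sketch of the OSC leg of this pipeline, assuming the python-osc package on the sending side and a [udpreceive 7400] object in the Max/MSP patch; the address pattern and the eeg_frame() placeholder are our own conventions, not part of the Emotiv or MindYourOwnOSC APIs.

    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 7400)   # Max patch listens with [udpreceive 7400]

    def eeg_frame():
        # Placeholder for one frame of per-channel band power from the headset;
        # channel names follow the 10-20 electrode labels the EPOC uses.
        return {"AF3": 0.42, "F7": 0.18, "O1": 0.77}

    # Forward each channel to Max on its own OSC address for mapping to sound.
    for channel, level in eeg_frame().items():
        client.send_message(f"/eeg/{channel}", level)   # e.g. /eeg/AF3 0.42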

Screenshot

Here is a picture of the feed going into Pure Data via 'quantum.Trip' (http://vimeo.com/13047029) and the open-source MindYourOwnOSC software, distributed at http://sourceforge.net/projects/mindyouroscs/. You can see the live tracking in action (photos from the EPOC website).

 

 

NIME 2010 Sydney Overview


It was our great pleasure to host the NIME++ 2010 international conference at UTS this year (http://nime2010.org/), in Sydney from 15-18 June. The conference consisted of fully peer-reviewed paper tracks, poster and demo sessions, installations, concert performances, club night performances and keynote talks by Stelarc and Nic Collins (hardware hacking). The full set of photos can be seen here: http://www.flickr.com/photos/nime2010/

If there is a haute cuisine of hardware hacking, Nic would be its three-star Michelin chef. Dr Nic Collins is a composer, performer and instrument builder, Professor of Music in the Department of Sound at the School of the Art Institute of Chicago, editor-in-chief of the Leonardo Music Journal, former artistic director of STEIM in Amsterdam, recipient of the DAAD scholarship in Berlin, and author of the book Handmade Electronic Music: The Art of Hardware Hacking (Routledge, now in its 2nd edition). www.nicolascollins.com

Interfaces designed to be expressive need to be close to the human skin. Stelarc's work is about getting under the skin (usually his own). In fact he is presently surgically constructing and stem-cell growing an ear on his arm. Stelarc is the pioneer of cyborg art. He is a performer, Chair in Performance Art at the School of Arts, Brunel University, West London, Senior Research Fellow and Visiting Artist at the MARCS Lab at the University of Western Sydney (UWS), and Honorary Professor of Art and Robotics at Carnegie Mellon University, Pittsburgh. He also has an Honorary Doctorate from Monash University in Melbourne.
Over the years Stelarc has explored and applied his body further than skin deep to research the notion of the cyborg, where the interface becomes part of the human body. In fact, for him the body has become obsolete. But rather than a cold, hard, technical cyborg, Stelarc's research through artistic expression shows a deep passion, warmth, (in)sanity, and humour. www.stelarc.va.com.au

Stelarc

 

Student work

 

Interaction Studio HammondCare collaboration with Master of Interaction Design Students of University of NSW Art & Design - designing instruments for creativity and expression for people living with dementia.

 

 

Gesture piano

Using spatial gesture tracked by camera to activate solenoids hitting strings, cross-processed live digitally. Microphones under the removed keyboard region collect real physical sounds from the frame and strings of the instrument.

GesturePiano

 

Camera-tracking FIDUCIAL markers

(Unique visual identifiers) on objects and the reacTIVision interface. Moving objects on the transparent table control a spatial mixer.

Fiducial

 

Breath-controlled music

BreathController

 

Colour & shape camera tracking

(Colour and spots) on a die, plus motion detection by the Bluetooth Wii controller (tilt, yaw, rotation)

Wii

 

Rubik

Rubik's Studio

Sound controller project by Daniel Gallard and Piers Gilbertson, using colour tracking and reacTIVision fiducial marker tracking of individual markers on each surface of the cube to control groups of sounds and individual timbres. This project was developed in the Interactive Sound Studio 2008, University of Sydney.

 

Wii Taiko

Project by Dani Awad, Camilo Castillo and Deon Rowe. A Wiimote-controlled taiko set with rim-shots, skin regions and different sounds achieved by button combinations using two Wii controllers; velocity relates to hardness and tone, with drums of different sizes. This project was developed in the Interactive Sound Studio 2008, University of Sydney.

 

Interactive Internet traffic visualisation

For tangible interaction with data using reacTIVision fiducial marker tracking