MachineChild
Description:
The MachineChild Ensemble investigates issues in coupling art and technology: human-machine interface and interaction, real-time sound and graphics environments, distributed performance in composed spaces, and interactive display of complex simulations. The group has presented cyberarts installations at the Ars Electronica Museum in Linz, Austria; the Georges Pompidou Center/IRCAM in Paris, France; Miller Theatre on Broadway in New York City; and Cyberfest '97 at the University of Illinois. Future exhibits are planned for SIGGRAPH, the Ars Electronica Museum, Columbia University, and other institutions.
The MachineChild website can be accessed at this URL:
http://machinechild.ncsa.uiuc.edu
Background:
During the past decade, the digital community has concentrated on visual representation, improving the quality and speed with which images are delivered. Pictures, however, are only half the story: the aural tradition has been around since the beginning of language. We are now moving past the print-media paradigm, and beyond the VCR and CD, with multi-dimensional hyperlinking and user-definable exploration that blur the roles of authoring, viewing, and sharing.
Sound and video are the key components of what is loosely termed "multimedia": a digital environment on the desktop and over local and global networks. Multimedia refers not only to different media types stored as digital data, but also to highly structured databases that users can access with a high degree of interactivity and content control. Ideally the user controls content, acting as a creator as well as a viewer: by accessing these databases, the user can grab data to create self-authored content or interactive applications.
Sound and video can take a variety of representational forms: animation, live action, talking heads, and so on. In live-action footage or animation, sound and video are often exploratory, hence their potential power to educate and inform. But in the still-dominant, traditional paradigm of sound tracks and television, audio and video are "broadcast" and "consumed" as a one-way flow of information. The exploring eye and ear are necessarily those of the creator, not the viewer. Moreover, the vicarious exploration afforded by television, music, and radio is experienced almost exclusively as a linear narrative.
My Role:
I was one of four members of the group and served as co-producer, human-machine interaction specialist/researcher, designer, and editor. Other responsibilities included film and video shooting, documentation, and editing; GUI/interface design and implementation; and museum partnering and installation.
Cyberfest - Performance March 1997
Project Description:
New research coupling artistic and scientific principles indicates that listening and sensing are valuable paradigms for human-machine interaction. MachineChild is a virtual reality performance in a composed space, in which live musicians interact and "play" with kinetic images and sound computation systems.
MachineChild is also a creative testbed integrating five years
of interdisciplinary interface development for virtual environments.
Prior to the Cyberfest Gala premiere of MachineChild, NCSA
virtual environment researchers hosted a seminar on the underlying
technology and its application in a creative project. Integrated
contributions from UIUC researchers were introduced along
with basic principles of VR.
Ground Truth - Installation at Ars Electronica, Linz, Austria, Aug-Dec 1998
Description:
Ground Truth is a distributed interactive presentation application
that demonstrates a new architecture and a new paradigm for
multi-modal information display. Ground Truth presents both
artistic and technological innovations in the interactive
display of complex simulations, including economics, military
strategy, and an uncertainty model in a particle system. Observers
are given indirect influence over the flow of information
within and between these systems. Dynamic images and sounds
are controlled by simulation data and by observers' actions.
The artistic concept provides "Data Dramatization": simulation-based, data-driven visual and auditory representations. Valuable aspects for industrial decision support include the focus on real-time interaction with large-scale simulations and the use of intuitive, illustrative display paradigms. Features of the system include a robust, multi-user, shared "hands-on" capability and scalability between large-format VR and desktop Java interfaces; a minimal sketch of this kind of data-to-display mapping follows below. The Ground Truth essay, now published in the Ars Electronica Anniversary book by MIT Press, is available in an HTML version, and a QuickTime movie of Ground Truth (20 MB) is also available.
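To illustrate the "Data Dramatization" idea described above, here is a minimal, hypothetical Java sketch: it maps a normalized simulation value onto a pitch and a grayscale intensity. The class and method names are illustrative assumptions, not part of the actual Ground Truth code base, which performs this kind of data-to-display mapping at far larger scale and in real time.

    // Hypothetical sketch of "data dramatization": mapping simulation data
    // onto audio and visual parameters. Names are illustrative only.
    public class DataDramatizationSketch {

        // Map a normalized simulation value (0.0 .. 1.0) onto a pitch in Hz.
        static double toPitchHz(double normalized) {
            double minHz = 110.0;   // A2
            double maxHz = 880.0;   // A5
            return minHz + normalized * (maxHz - minHz);
        }

        // Map the same value onto a grayscale intensity (0 .. 255).
        static int toIntensity(double normalized) {
            return (int) Math.round(normalized * 255.0);
        }

        public static void main(String[] args) {
            // Stand-in for successive samples of one simulation variable,
            // e.g. an uncertainty measure in a particle system.
            double[] simulationSamples = {0.05, 0.30, 0.72, 0.95};

            for (double sample : simulationSamples) {
                System.out.printf("data=%.2f  pitch=%.1f Hz  intensity=%d%n",
                        sample, toPitchHz(sample), toIntensity(sample));
            }
        }
    }

Running the sketch simply prints one pitch/intensity pair per data sample; in an installation such mappings would instead drive sound synthesis and graphics in real time.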
Coney Island - Installation at Georges Pompidou Center/IRCAM,
July 1999
Description:
Coney Island is the fourth in a series of virtual environment performances/installations created by the MachineChild Ensemble. Coney Island immerses us in the linguistic play of machines, the production of language, and the automation of pleasure. Simulated mechanics in Coney Island drive the dynamics of amusement machines, forming an arcade landscape of sounding bodies. "Productions" leave traces of meaning in media, in movement, and in a finite-state grammar; a hypothetical sketch of such a grammar follows below. Visitors participate over a local area network to modify the simulations. At IRCAM, Coney Island will be displayed in a single-screen version of the CAVE.
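As a purely illustrative aside on the finite-state grammar mentioned above, the following minimal Java sketch generates short machine "utterances" by walking a fixed chain of states, emitting one randomly chosen word per state. It is an assumption-laden toy, not the Coney Island implementation.

    // Hypothetical finite-state grammar producing short machine utterances.
    import java.util.List;
    import java.util.Map;
    import java.util.Random;

    public class FiniteStateGrammarSketch {

        // Each state maps to the words it may emit.
        static final Map<String, List<String>> EMISSIONS = Map.of(
                "START", List.of("the"),
                "NOUN",  List.of("machine", "carousel", "arcade"),
                "VERB",  List.of("sings", "turns", "remembers"));

        // Fixed transitions between states; "END" terminates the walk.
        static final Map<String, String> NEXT = Map.of(
                "START", "NOUN",
                "NOUN",  "VERB",
                "VERB",  "END");

        public static void main(String[] args) {
            Random rng = new Random();
            StringBuilder utterance = new StringBuilder();
            String state = "START";
            // Walk the states, emitting one word per state until END.
            while (!state.equals("END")) {
                List<String> words = EMISSIONS.get(state);
                utterance.append(words.get(rng.nextInt(words.size()))).append(' ');
                state = NEXT.get(state);
            }
            System.out.println(utterance.toString().trim());
        }
    }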