

Laptop Players

New virtual music and science institute researches experimental ways of composing and playing

At times, they are misunderstood both by the folks from the arts and by the people of the sciences. They seem rather beyond the fringe, in the sense that their activity is found, in certain cases, beyond the more hallowed lines of study at the universities. Some of them are musicians who like, and make use of, scientific concepts and technological devices to play and to compose, and are frequently viewed with reservations by their peers of a more classical profile. Others are scientists who use their knowledge in areas like physics, mathematics and biology to interact with composers and instrumentalists, and are quite often looked on as oddities by their academic colleagues. However, when these two extremes draw closer without prejudice and the bonds that unite music and science become stronger, marvelous creations can emerge.

Hybrid products, half art, half technology, like the computer program called MAX/MSP, a sort of graphical programming environment for sound, used all over the world by musicians who are researching new forms of composing and presenting their works. MAX/MSP was created by the Institut de Recherche et Coordination Acoustique/Musique, better known by its acronym Ircam, a powerhouse of ideas located in the Georges Pompidou Centre, in Paris, which for 34 years has been stimulating different lines of research whose common thread is the marriage of music and science, under the baton of French composer Pierre Boulez.

It was partly with this vanguard spirit of Ircam in mind that a group of researchers from São Paulo universities – the ones from the first paragraph, beyond the fringe in the best sense – decided to create, with support and finance from FAPESP, a similar institute, but a virtual one, without any physical premises, to coordinate multidisciplinary projects in the areas of music, science and technology. Some of these projects already exist; others will come with time. “Our idea is to join together in a larger program the ventures that today are scattered over several music departments and institutes of science and technology”, says researcher Silvio Ferraz, from the State University of Campinas (Unicamp) and from the Pontifical Catholic University of São Paulo (PUC-SP), the main driving force behind the virtual institute.

“This is going to give weight, prestige and a common bearing to these lines of research, though, logically, the individuality of each one of them will be respected”, is the comment of José Fernando Perez, FAPESP’s scientific director. To start with, four major lines of work will be stimulated: the study of the acoustics of concert halls; the analysis of works with the help of a computer; the promotion of compositions and performances that use, in real time, a PC or other equipment as if they were musical instruments; and the use of computational intelligence in the study of musical creativity and cognition. As the virtual institute takes shape, other themes may be incorporated into the score.

The first activity to be promoted by the institute, which will shortly boast a website to integrate its members and their respective projects, will be the Ircam-Brazil event, between August 8 and 14, in São Paulo. In the course of seven days, ten researchers from the French center will give talks and concerts and will take part in scientific meetings with their Brazilian colleagues. With the exception of the talk on the 8th, which will be at Unicamp’s Arts Institute, during the 9th Brazilian Symposium on Computer Music, the others will be held at the Itaú Cultural Institute, in the city of São Paulo, which has lent its facilities for the event.

Always at night, the musical presentations will be at the Theater of the Alliance Française and the Teatro Cultura Artística, on dates yet to be confirmed. Although conceived especially for researchers in the area, the workshop’s activities will also be open to the general public, subject to prior registration at the Itaú Cultural Institute. “We want to increase the interchange with international institutions that are benchmarks in research into music and science”, Ferraz comments. Events along the lines of Ircam-Brazil, which also enjoys the support of the French consulate in São Paulo and the Franco-Brazilian Center for Technical and Scientific Documentation (Cendotec), should be repeated in the coming years, bringing to Brazil, for example, researchers from the Center for New Music and Audio Technologies, and from the Groupe de Musique Expérimentale de Marseille (National Center of Musical Creation – Gmem).

Brazil does not have much of a tradition of fostering research that joins art and science together, two distinct but not incompatible ways of knowing, feeling and interpreting the world. But this does not mean that the virtual music and technology institute is starting from scratch in its mission of tightening the bonds between these two fields. It is starting precisely from projects that already exist and are proving to be thought-provoking. And what already exists, just in São Paulo, is quite an umbrella of ideas, capable of sheltering the crossbreeding of musical research and creation with areas as different as computer science, physics, biology and mathematics, not to mention flirtations with other disciplines from the terrain of the arts, like dance and theater.

With its 20 years of existence, almost silent for those who do not circulate in the universe of music and technology, the Interdisciplinary Nucleus of Sonorous Communication (Nics) at Unicamp is making a lot of noise in one of its lines of research: the creation of devices – or, in the jargon of the area, interfaces – that carry out the transfer from an abstract model (which can be movement in dance, a mathematical model or a genetic code) to an audio agent, usually a digital musical instrument or a computer. Put like this, it all seems complicated, out of tune. An example helps one to understand the kind of work done by the nucleus, made up of some 30 people, teachers and students alike, from music and from other areas (mathematics, computer science and engineering).

In the first image used to illustrate this article, you can pick out a dancer in dark clothes, lit by a spotlight and entangled in red patterns that seem to come out of her body, in a beautiful visual effect created by the photographer using a set of lights fastened to the performer’s attire. There is also, in the background, a keyboard. Believe it or not, this dancer, Andréia Yonashiro, is “playing” the instrument with her movements, with each passage described by her choreography. No, it isn’t an optical illusion, science fiction, or an apparition. The answer to this false mystery lies in the dark surface touched by Andréia’s feet, which hardly appears in the photo: a special one-square-meter mat, which is connected to the synthesizer.

The mat is fitted with 12 piezoelectric sensors, which record small variations in electric potential when pressed by the dancer’s movements. With the assistance of an analog-to-digital converter, these electrical variations, measured in microvolts, are transformed into events of the Midi (Musical Instrument Digital Interface) protocol, a sort of musical language that uses a table of numbers to represent the pitch of the musical notes (do, re, mi, fa, sol, la, si…) and their intensity.

“These numbers can activate any electronic instrument compatible with the Midi protocol, like a keyboard”, explains musician and mathematician Jônatas Manzolli. He is the coordinator of the nucleus, which developed the mat under the auspices of the Young Researcher program, financed by Fapesp. Besides the floor that makes music, the Nics team created gloves and ballet slippers that also work like audio interfaces. Albeit of an experimental nature, all these devices have already been used in artistic performances conceived by the researchers and presented at events in Brazil and abroad.
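
For the curious, the sketch below gives a rough idea, in Python, of what such a conversion can look like. It is not the code used by Nics: the pitch table, the silence threshold and the voltage scale are invented here purely for illustration. The point is only to show how pressure readings from the mat’s sensors could be turned into note-on messages that any Midi-compatible instrument would accept.

```python
# Illustrative sketch only (not the Nics implementation): mapping readings
# from 12 pressure sensors to Midi note-on messages, as the article describes.

# Hypothetical mapping: each sensor triggers a fixed pitch; the pressure
# level (in microvolts, after A/D conversion) sets the velocity (0-127).
SENSOR_PITCHES = [60, 62, 64, 65, 67, 69, 71, 72, 74, 76, 77, 79]  # one pitch per sensor
THRESHOLD_UV = 50.0      # readings below this are treated as silence (assumed value)
FULL_SCALE_UV = 1000.0   # reading mapped to maximum velocity (assumed value)

def readings_to_midi(readings_uv):
    """Convert one frame of 12 sensor readings (microvolts) into
    raw Midi note-on messages: (status, note, velocity) byte triples."""
    messages = []
    for sensor, value in enumerate(readings_uv):
        if value < THRESHOLD_UV:
            continue
        velocity = min(127, int(127 * value / FULL_SCALE_UV))
        # 0x90 = note-on, channel 1
        messages.append(bytes([0x90, SENSOR_PITCHES[sensor], velocity]))
    return messages

# Example frame: only sensors 0 and 5 are being pressed.
frame = [800.0, 0, 0, 0, 0, 300.0, 0, 0, 0, 0, 0, 0]
for msg in readings_to_midi(frame):
    print(msg.hex())  # these bytes could be sent to any Midi-compatible device
```

Sent in sequence, three-byte messages of this kind are what a synthesizer like the one in the photograph interprets as notes being struck.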

Digital luthier
Nics has also developed a series of computational tools that can be used to produce simple or complex events in sound – provided that the user is skillful and learns to use them correctly. Rabisco (Scribble), for example, is a program that generates music from lines drawn in free style on a blank screen. “We are a sort of digital luthier”, explains Adolfo Maia Jr., a professor from the Applied Mathematics Department at Unicamp and an associate coordinator at Nics. A curious paradox between the past and the future, since the luthier’s craft is the delicate and ancient art of making, by hand, string instruments with a sound box, like violins, cellos and guitars.
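
Just to make the idea concrete, here is a minimal sketch of how a Rabisco-style mapping could work, under the simplest possible convention (the real program’s mapping may be quite different): the horizontal position of each point of the drawn line sets when a note sounds, and the vertical position sets how high it is. The canvas size, pitch range and time scale below are assumptions made for this illustration.

```python
# A minimal sketch in the spirit of Rabisco, as the article describes it;
# the actual program's mapping is not documented here.

SCREEN_HEIGHT = 480          # assumed canvas height, in pixels
LOW_NOTE, HIGH_NOTE = 36, 96 # assumed Midi pitch range covered by the canvas

def stroke_to_notes(points, seconds_per_pixel=0.01):
    """Turn a drawn stroke, a list of (x, y) pixel coordinates,
    into (onset_seconds, midi_pitch) pairs."""
    notes = []
    for x, y in points:
        onset = x * seconds_per_pixel
        # invert y so that drawing higher on the screen means a higher pitch
        pitch = LOW_NOTE + int((SCREEN_HEIGHT - y) / SCREEN_HEIGHT
                               * (HIGH_NOTE - LOW_NOTE))
        notes.append((round(onset, 3), pitch))
    return notes

# A short rising scribble.
print(stroke_to_notes([(0, 400), (40, 350), (80, 290), (120, 220)]))
```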

An extraordinary project of Manzolli’s and Maia Jr.’s team, run in partnership with the Institute of Neuroinformatics at the Zurich Polytechnic, in Switzerland, is a sort of audio installation that uses a robot to control a digital musical instrument. Working on one of Nics’ other lines of study, Flo Menezes, from the Arts Institute of the São Paulo State University (Unesp), is one more researcher to swell the ranks of the virtual music and science institute. This São Paulo composer is devoting himself to exploring the possibilities of so-called electroacoustic music, an experimental branch created at the end of the 1940s in Europe (in France and in Germany), whose main historical figure is Germany’s Karlheinz Stockhausen.

Please do not mistake the passion and subject of work of Menezes, who has been playing the piano since he was 5 years old, for electronic music, the drum and bass sound that rocks the interminable parties of a good part of today’s youth. The artist who embraces electroacoustic music, sometimes called concrete or acousmatic music, composes works that are an elaborate assembly of sounds modified by the modern resources of computing. Most of the sounds are prerecorded and originate from musical instruments or from any other source of audio, like a door that is slammed or a horn that is sounded. The more mischievous would say that electroacoustic music is more electroacoustics than music, but this estrangement is due to the vanguard nature of the movement. “Many of my colleagues, including musicians, see me as the mad scientist of music”, comments Menezes, jokingly.

One of the most marked traits of electroacoustic works is the extreme concern with how the music is spatially diffused before an audience in a theater or concert hall. A good stereo sound, with the traditional two audio outputs, is not enough. Musicians like the researcher from Unesp, who studied in Germany, Italy, Switzerland and France (Ircam), want, at the very least, quadraphony: the possibility of spreading their music from a stage fitted with four independent audio outputs. The ideal is even more than that. For them, the movement of the sound through the loudspeakers, the path of their collages of sound through the acoustic equipment, is an indissoluble part of their works.

So much so that Menezes is anxiously awaiting the arrival of his “loudspeaker orchestra”, to construct his “theater of sounds”, this latter term borrowed from the Italian Renaissance. A project financed by FAPESP, called PUTS (PANaroma/Unesp: Theater of Sounds), will provide an “orchestra” made up, to start with, of 12 loudspeakers of the very highest quality and four subwoofers, a kind of loudspeaker dedicated to reproducing low-pitched sounds. “With this equipment, which can be carried to the place of the presentation, we will be able to give electroacoustic concerts of excellent audio quality”, Menezes explains.

Acoustic simulation
Talking of audio quality, the study of concert hall acoustics, one of the foundations of the nascent virtual music and science institute, has been, since last year, the subject of a thematic project. Coordinated by Fernando Iazzetta, from the music department of the Communications and Arts School of the University of São Paulo (ECA/USP), the venture has as its final objective the development of software, using Brazilian technology, to analyze the dispersion of sound in small auditoriums, places with up to 100 seats. In principle, the idea is to develop an application for modest-sized places and, at a later stage, to adapt it for use in more ample environments. “Today, there are imported products that do this, but they can cost up to US$ 40,000”, says Iazzetta, who counts on the collaboration of researchers from the areas of mathematics, architecture and civil engineering in the project. “Our goal is to create an open-architecture program, which can be copied by anybody.”

How does this kind of application work? The software runs on a laptop equipped with special microphones that capture sounds, at previously determined frequencies, emitted in the environment whose acoustics one wishes to study. Next, the program compares the frequencies registered in the concert hall with those of the original sound and thus delivers a verdict on the place. This is a schematic explanation; obviously, the procedure is not as simple as all that. In fact, the software will not be limited to registering and giving a verdict on the properties of the propagation of sound in concert halls.

In a virtual environment, it will also work as a simulator of the acoustics of any place one wishes to study, provided that it is supplied with the dimensions and other physical characteristics of the place. “This will enable us to make ‘virtual’ alterations in the layout of a concert hall and to foresee what the implications will be for its acoustics”, Iazzetta explains. “The software can be a useful instrument for proposing corrections to the places where presentations are made.”
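
In schematic form, and only as an illustration of the comparison described above, a central step of such a tool could look like the Python sketch below. The frequency bands, test signals and decibel scale here are assumptions, not the specification of Iazzetta’s software: the sketch simply compares the spectrum of the sound recorded in the hall with that of the emitted reference, band by band.

```python
# Illustrative sketch only: compare the spectrum of the signal recorded in a
# hall with the spectrum of the reference signal that was emitted, and report
# how much each frequency band was boosted or attenuated by the room.
import numpy as np

SAMPLE_RATE = 44100
BANDS_HZ = [(125, 250), (250, 500), (500, 1000), (1000, 2000), (2000, 4000)]  # assumed bands

def band_gains(emitted, recorded, sample_rate=SAMPLE_RATE):
    """Return the room's gain, in dB, for each frequency band."""
    freqs = np.fft.rfftfreq(len(emitted), d=1.0 / sample_rate)
    ref = np.abs(np.fft.rfft(emitted))
    rec = np.abs(np.fft.rfft(recorded))
    gains = {}
    for lo, hi in BANDS_HZ:
        mask = (freqs >= lo) & (freqs < hi)
        gains[(lo, hi)] = 20 * np.log10(rec[mask].mean() / ref[mask].mean())
    return gains

# Toy example: a "hall" that simply halves the amplitude of a 440 Hz test tone,
# which should show up as roughly -6 dB in every band.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
emitted = np.sin(2 * np.pi * 440 * t)
recorded = 0.5 * emitted
print(band_gains(emitted, recorded))
```

A real tool would of course go much further, including the simulation step mentioned above, but the band-by-band comparison gives a feel for the kind of verdict the software is meant to deliver.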

Vampires of sound
Before starting the thematic project on acoustics, Iazzetta took part in the Young Researcher program with his colleague Silvio Ferraz, the driving force behind the virtual music and science institute. At that time, they both tried to understand how technology interferes with the process of creation and to explore new ways of composing and playing. At the beginning of their careers as musicians, Iazzetta studied percussion, and Ferraz, the horn. But today they both frequently define themselves as laptop players, because computers nowadays accompany them at almost all their performances, in which they use a lot of prerecorded material processed by the PCs.

“Today, I have a lot of interest in developing software that composes on its own, in programs that perforate sound”, explains Ferraz, who last year went so far as to play, as a musician of a more classical style, in orchestras from the states of São Paulo, Bahia and Paraná. “We are vampires of sounds, to use an expression from my colleague Rogério Moraes Costa (of ECA/USP).” Besides being an adept of performances rocked by the sound of computers, Ferraz, who will be responsible for much of the work of making the virtual institute’s team run the projects of this interdisciplinary venture together and in tune, is also developing his more conceptual, more theoretical side as a researcher.

Availing himself of teachings from areas like semiotics (the study of signs) and cognition, he likes to discuss what music is for people. “A lot of folks associate the idea of music with the existence of a beat and a melody”, he says. “But there are indigenous chants without these elements. When reading a poem, why do some people find it musical, and others not? What transforms a soup of sounds into music?” These and many other queries and quests will be the staff and the score by which the institute’s activities will be conducted.

The projects
1. Gestural Interface Laboratory (95/08479-3); Modality: Young Researcher program; Coordinator: Jônatas Manzolli – Nics/Unicamp; Investment: R$ 44,176.01
2. Project and Acoustic Simulation of Environments for Listening to Music (02/02678-0); Modality: Thematic project; Coordinator: Fernando Iazzetta – ECA/USP; Investment: R$ 123,205.00
3. Application of the Philosophical Idea of a Ritornello in the Design of Digital Interfaces for the Creation and Treatment of Audio in Real Time in Free Improvisations (01/12276-3); Modality: Regular line of grants for research; Coordinator: Silvio Ferraz – PUC/SP; Investment: R$ 47,949.68;
4. PUTS – PANaroma/Unesp – Theater of Sounds (01/12036-2); Modality: Regular line of grants for research; Coordinator: Flo Menezes – Arts Institute/Unesp; Investment: R$ 150,679.63
