Chapter 3

Contemporary Perspectives

The focus of this chapter is on the contemporary musical interface in performance practice. Since current interest in this area largely involves computer technology, I am dating the significant changes to the musical interface to the emergence of computer music in the late 1950s. Although it could be argued that the transformation began much earlier this century with analogue electronic instruments, the 1950s can be considered the start of the period of technological development of interest here. The new technologies have had a considerable impact on the world since that time, and the concept of the musical interface has absorbed many ideas from diverse fields of research, in particular, those of digital, computing, and recording technologies. When considered from a technological perspective, the transfer of certain intellectual and technical discoveries from those fields into the musical domain substantially overshadows music's internal progress. We no longer live in an age where music can be entirely regarded as autonomous.

In the previous chapter it was observed that part of the theremin's attraction was its allusion to a future of performance. Less obvious was the theremin's role in promoting the future of new technologies in music. The difficulty in seeing that at the time may have been due to the theremin's idiosyncrasies. It was more a curiosity than a significant embodiment of the idea of music technology. Examining a number of early electronic instruments in retrospect, we can now appreciate the impact that technology has had on the music of our times. Collectively, these earlier electronic instruments are antecedent to the development we now understand as music technology.

In chapter two, the musical interfaces of three instruments were described as essentially fixed, transparent and taken for granted. For contemporary musical instruments, the interface is increasingly viewed as something to be developed and studied. The opacity of the contemporary interface suggests a new and complex musical environment. If it is considered musically influential, the interface must be engaged as an instrument in itself, one which intervenes in the process of performance.

The technological interface now mediates between the performer and the musical result as an entity that cannot be easily defined, described or anticipated. What descriptions do appear are the prerogative of the technician/composer/performer who fashion the interface to their creative ends. In this respect, these descriptions are often highly technical, specific and personalized.

To facilitate an understanding of some of the essential conditions surrounding the contemporary interface, this chapter begins with an examination of two musical works. Study of these works exposes several critical areas that constitute the expanded notion of the interface and the perspectives on performer/machine interaction.

When seen as an "opening out" of the traditional musical interface, interface expansion has taken place in three important areas, which are unique to the contemporary musical experience and will be examined in this chapter: the instrument, the performer, and the development of channels of communication critical to performer/machine interaction.

3.1 TWO COMPOSITIONS

1. Duet for One Pianist: Eight Sketches for MIDI Piano and Computer - Jean-Claude Risset

Duet for One Pianist is a set of eight sketches that demonstrates and explores live interaction between a pianist and a computer playing the same acoustic piano. The piano used was a Yamaha Disklavier connected to the Max music programming environment running on a Macintosh II computer. The composition was written and realized at M.I.T. in 1989. Risset describes each "Sketch" as "corresponding to a set of linked 'patches'." These can be viewed and rehearsed through the Max program.

This work is as much a composition of technology as it is a composition through technology. One of Duet's important consequences is to quietly liberate a perception of pianism from an entrenched historical position. By liberate, I mean something of an emancipation from historical technology and concepts, almost universally regarded as immutable. Until very recently, the culture of the piano made it difficult to imagine that a time might come when aspects of performance could be modified by or delegated to a machine. Duet is sophisticated and subtle, and reflects a mature approach to composing with, and for, technology. The sound and function of the piano remain unchanged, and to a large extent, so does the role of the performer. The unique nature of the music results from Risset's engagement of the interface.

Although the piano has been functionally enhanced (a factory installation of a "state of the art" digital player piano mechanism) it is not obvious how this alone might lead to the technological/creative position Risset's work articulates. This position is difficult to ascertain from an examination of the piano alone, and the digital player piano technology installed in the instrument appears focused on the expectations of the traditional player piano. We need to step away from the Disklavier itself and appreciate that a reasonably powerful computer is interpolated between the performer and the instrument. Even then, whatever the technological arrangement, creative potential is still only hinted at rather than clearly evident in this configuration. What does the computer contribute in this configuration?

From this external consideration of the instrument, I am attempting to draw attention to the idea that we can no longer assume we understand the function and musical consequences of an instrument. Neither can we be certain about the role of the performer in rendering music from it.

Before considering the eight sketches of Duet for One Pianist, it is worth reviewing some of the difficulties encountered in contemplating this type of composition. Traditionally, the score, and later records, provided the most effective means for analysis. But in Duet, these do not reflect the entire codification of the work. As can be appreciated now, there exists a "score" of the interface part in the form of a computer program or, in this case, Max patches. These patches, in addition to their practical function, graphically inform the reader–if they understand the format–of the dynamics of the interaction. They are not readable in the same sense as music notation but rather specify the algorithm of operation. They define precisely the process in question and the scope of possible results, but can never show what those results actually will be at any instant. Risset has gone to some trouble to notate the computer part where possible, but any serious understanding of its activity must be derived from "patches" or descriptions of the functions in effect during the sketch.

It can now be appreciated that the abstract representations of the instrumental part in this piece are somewhat lacking. This is a fundamental problem with computer music, and most musicians who are familiar with traditional notation find alternative representations often more troublesome to interpret than useful in revealing the music's advanced characteristics.

This discussion of Duet will comprise Risset's comments followed by my observations on those comments, the score and the recording.

Double.

(Duration - 1' 45") The pianist plays alone. Then, on the repeat, the computer adds ornaments. These are pre-recorded: they are called when the pianist plays certain notes; their tempo can be influenced by the tempo of the pianist.

The score shows the performer more active than the computer, which is consistent with the idea of the computer supplying material as ornaments, though these are often more complex than traditional ornaments. The computer score includes points where the performer triggers the pre-recorded computer part. Tempo directions, caesura and pedalling are liberally employed.
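
The triggering scheme can be pictured, very schematically, as a lookup from designated trigger notes to stored ornament sequences whose internal delays are scaled to the pianist's current tempo. The Python sketch below is purely illustrative: the trigger pitches, the ornament contents and the tempo handling are assumptions, not a transcription of Risset's Max patches.

    # Hypothetical sketch of "Double"-style triggering: certain pitches call up
    # pre-recorded ornaments whose playback follows the pianist's tempo.
    # Each ornament is a list of (beat offset, MIDI pitch, velocity) events.
    ORNAMENTS = {
        60: [(0.0, 72, 50), (0.25, 74, 50), (0.5, 76, 50)],
        67: [(0.0, 79, 45), (0.5, 83, 45)],
    }

    def on_pianist_note(pitch, current_tempo, send):
        """If the pianist plays a trigger note, schedule its ornament so that
        the delays between ornament notes track the pianist's current tempo."""
        for beat_offset, orn_pitch, velocity in ORNAMENTS.get(pitch, []):
            send(beat_offset * 60.0 / current_tempo, orn_pitch, velocity)

    events = []
    on_pianist_note(60, 132.0, lambda t, p, v: events.append((round(t, 3), p, v)))
    print(events)   # ornament events as (seconds, pitch, velocity), scaled to tempo 132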

Mirrors.

Each key played by the pianist is echoed by a key stroke, symmetrical with respect to a certain pitch–a process used in Webern's second Variation, Op. 27, quoted at the beginning (and also at the end with time reversal). The symmetry center and the response delays are changed during the piece to vary the effects.

The computer's part is of some interest here in showing the points of symmetry around which the new material is flipped. From the score, it is evident that the two parts should be quite distinct, with contrary motion, inversion and temporal shifts uniquely identifying the players. However, the tempo (quarter note = 120) tends to make the distinction between parts difficult.
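
Reduced to its arithmetic, the mirroring process is a single reflection of each pitch about the symmetry centre; the moving centre and the response delay are what give the sketch its variety. The following Python fragment is a minimal illustration of the reflection only, not a transcription of Risset's patch.

    def mirror(pitch, centre):
        """Reflect a MIDI pitch about a symmetry centre: the echoed key lies as far
        above the centre as the played key lies below it, and vice versa."""
        return 2 * centre - pitch

    # with the centre on E4 (MIDI 64), the pianist's C4 (60) is answered by G#4 (68)
    assert mirror(60, 64) == 68
    assert mirror(68, 64) == 60   # the mapping is its own inverse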

Extensions.

To the arpeggios played by the pianist, the computer adds additional notes transposed in pitch and more or less delayed.

In the computer score, Risset notates, in a brief sketch form, the pitch, the direction and the delay for the additional material. This movement is partitioned, after the second caesura, by a brief section uncharacteristically sparse and quiet. Delays between notes are further clarified by drawing kinked vertical lines between the two parts. The final section has the pianist creating the crescendo with the computer executing the final arpeggio flourish. The shape and content of the computer-generated arpeggios make them quite distinct.

Fractals.

To each note played, the computer adds five notes spaced approximately–but not exactly–one octave apart. If the intervals are stretched octaves, when the pianist jumps one octave higher, one will get the sensation of a semitone descent. Thus the pitch patterns played by the pianist are distorted in strange ways. In places, the texture sounds as if it were not well-tempered.

Here there is almost no score for the computer part while the piano's is typically detailed. I think Risset's comment, "Thus the pitch patterns played by the pianist are distorted in strange ways", explains the lack of notation and implies that the computer generated material remains unknown until performance. Delays and trigger points are marked along with some notes. The movement begins with short notes which are distinctly echoed by the computer's instrument. Here the five notes are performed as a chord with a significant amount of delay (500 milliseconds). This staccato introduction dissolves into a torrent of notes until shortly before the end where the staccato figure returns briefly.
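
The stretched-octave idea can be reduced to a small generator of added notes. In the Python sketch below the spacing of 12.2 semitones is an assumption chosen only to show the effect Risset describes: when the pianist leaps a true octave, part of the added texture appears to fall by a semitone.

    STRETCH = 12.2   # assumed spacing in semitones: slightly wider than a true octave

    def added_notes(pitch, count=5):
        """Return the notes the computer might add above a played pitch, each spaced
        by a stretched octave and rounded to the nearest piano key."""
        return [round(pitch + i * STRETCH) for i in range(1, count + 1)]

    print(added_notes(36))   # [48, 60, 73, 85, 97]
    print(added_notes(48))   # [60, 72, 85, 97, 109]
    # After the pianist's octave leap (36 to 48), the second added layer arrives
    # at 72 where the previous stack had 73: that layer seems to descend a semitone.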

Stretch.

Pitches are added, as in Extensions, but the intervals are not merely transposed: they are stretched by a factor ranging between 1.3 and 2.7. This extends the harmony as well as the melodies played by the pianist.

Risset's comment here about interval stretching by factors ranging from 1.3 to 2.7 suggests that the intervals are being manipulated electronically in some way. Since this is not evident in the recording nor is it possible on the piano as a normal mode of operation, I suspect the factor to be based on semitones. Thus the ranges might be an octave and a minor third to two octaves and a fifth. There is, however, no notated section that provides clear evidence of this hypothesis. The score only indicates the locations where the effect commences.
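
On that reading the arithmetic is straightforward, as the following sketch suggests; the intervals are measured in semitones from a reference pitch, and the figures are offered only in support of the hypothesis above.

    def stretched_interval(semitones, factor):
        """One possible reading of 'Stretch': the interval from a reference pitch,
        measured in semitones, is multiplied by the stretch factor."""
        return semitones * factor

    print(stretched_interval(12, 1.3))   # 15.6: a little over an octave and a minor third
    print(stretched_interval(12, 2.7))   # 32.4: a little over two octaves and a fifth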

As with the previous material, the pianist has a sophisticated part. The listener has to concentrate to hear the computer material which can suddenly reveal itself in dramatic gestures. It is evident that the pianist's disposition dominates the interaction. One may occasionally notice pitch material being "stretched", particularly over the course of a sequential passage, but the pianist's material is often so complex that one doesn't get the opportunity to fully appreciate both it and the computer-generated material.

Resonances.

At the beginning and the end, the computer plays long sustained chords. In the middle section, the pianist plays mute chords: the strings are set in resonance by the sequences played by the computer.

While this is the most obvious of the movements, it is not easy to differentiate the parts. Without prior knowledge, I suspect it would be difficult to determine whether the computer or the performer was playing or sustaining notes.

Up Down.

Quasi-octave arpeggios are triggered by the pianist, whose few notes can thus generate many notes. The tempo of the arpeggios is set first by the tempo of certain patterns played by the pianist; later by the pitch he or she plays; then by the loudness–this is a most unusual type of control.

The "quasi-octave arpeggios" are an interesting device for this interaction, which has the computer part responding through three modes of control: tempo, pitch and intensity. Having the computer follow the direction of pitch material facilitates some differentiation between the performers. However, it is not that easy to distinguish the tempo control points amid the general detail and diversity of material. It is unknown how much material the computer generates from the performer's input, but frequently it sounds like a significant amount.
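
The three modes can be imagined as alternative mappings from a feature of the pianist's playing onto the rate of the computer's arpeggios. The Python sketch below is hypothetical; the scaling constants are invented and serve only to show how tempo, pitch and loudness could each, in turn, take over the same control function.

    # Hypothetical sketch of the three control modes in "Up Down": the rate of the
    # computer's arpeggios is derived from the pianist's tempo, pitch or loudness.
    def arpeggio_rate(mode, tempo=None, pitch=None, velocity=None):
        """Return a rate in notes per second from whichever parameter is in control."""
        if mode == "tempo":
            return tempo / 60.0 * 4             # four arpeggio notes per beat
        if mode == "pitch":
            return 2.0 + (pitch - 21) / 12.0    # higher registers drive faster arpeggios
        if mode == "loudness":
            return 1.0 + velocity / 16.0        # louder playing drives faster arpeggios
        raise ValueError(mode)

    print(arpeggio_rate("tempo", tempo=120))        # 8.0 notes per second
    print(arpeggio_rate("pitch", pitch=93))         # 8.0
    print(arpeggio_rate("loudness", velocity=112))  # 8.0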

Metronomes.

This begins with a short canon: the computer echoes the pianist on transposed pitches and at different tempos. It later plays simultaneously different sequences at different tempos. Then (an allusion to Ligeti's Symphonic Poem for One Hundred Metronomes) it repeats the same pitches, but again at different metronomic tempos, either preset or set by the pianist.

The metronomic quality of this movement is distinctive. It can be imagined that one of the most difficult interactive experiences is with tempo. Playing against the computer is not like playing against a metronome, neither is it like playing with another person.

Risset's score, unlike some of Nancarrow's for example, does not give much indication of the temporal subtlety which the performance reveals. The non-traditional notation and text do, to some extent, inform the reader about the temporal effects taking place. Only when listening to the work, however, are these striking effects fully appreciated.

In summary, the scores for all the "sketches" are intriguing because of the modes of representation. Traditional notation, written text and graphic sketches are all needed to elucidate the often sparsely notated computer part and to provide a perspective on the interaction as a whole. Although the score available for this discussion was a private performance copy, I think it contains the minimum of information needed to grasp the complexity of the music. How complex to make the score–that is, how to make it show significant performance and musical information–is a problem in itself. A reluctance to fill the score with detail is understandable, since attempting to notate each part accurately would be a difficult and largely futile task. If the score is to be used by a performer, the technical detail associated with the computer part may simply be a nuisance in interpreting the work. Risset has achieved a good compromise, providing notation, text and graphics, occasionally giving musical clues as to a patch's function, and informing the observer of his intentions for the composition in terms of interaction. Much of the notation in the computer part serves to sketch the basic effect rather than document specific pitches and durations. Metronomes is a case in point. Even in the pianist's part, notes are frequently circled or arrowed to indicate their importance. In Metronomes the direction "optional" appears in the pianist's part (the material was actually played on my recording), indicating a certain latitude in Risset's notion of this work.

Duet for One Pianist is an elegant work that seems to nestle quietly between contemporary traditional and technological music in a manner which suggests that performer/machine interaction of this type has a future, one which is a composite of both, but in which neither overshadows the importance of the other. What is immediately discernible as a future is the changing role of the performer. In this unique context, the performer can play the same instrument as the computer yet appear completely separate, and at other times, appear to be solely responsible for the sound. There is certainly something unusual about playing the piano in a conventional manner while collaborating with an invisible performer who is monitoring and responding to what is being played. In practice, the human performer will probably come to view the engagement as being with a single, complex instrument.

The aesthetics of the sound of this composition look back to the recent past in contemporary music. We hear a style of music that does not draw our attention to the underlying technology, or to a raw newness, but to a recent period of European contemporary music. I am impressed by the way Duet hovers in the present by balancing aspects of a stylistic past with those of a technological future. This reflects a sensitivity towards the musical context and a desire that the technology contribute to the integrity of the composition as a whole. Often the use of technology simply overwhelms the fundamental premises for the creation of the work.

How this work substantially differs from a two-piano work or a traditional duet, which can be complex and provocative in their own way, is in the contemporary issue of live interaction–the dynamic between computer and performer. Risset's compositional leanings are towards a more traditional position–it is a notated work in an academic contemporary music style rather than an improvisatory, avant-garde, experimental or jazz idiom. I get the feeling that this particular encounter can be viewed as somewhat removed from most stylistic or aesthetic issues, as it is a composition with a unique and interesting technological premise, namely performer/machine interaction.

Leaving aside questions of musical value for the moment, what Risset's composition reveals is an integrity towards the creative act, consistent with his general artistic proclivities. I am unable to confirm that he did approach this work with the idea of composing for the interface but it seems reasonable to assume that the composition, his performance and personality contributed to the subtlety and effectiveness of the work. This reflects upon the uniqueness of the occasion.

The music serves to articulate the effectiveness or quality of the performer/machine interaction. Our attention focuses primarily on this, and we may suspend judgement on other issues in the hope of experiencing something new. Music becomes the fallout of an encounter, a kind of sonic discourse about the relationship between performer and machine. If this indicates the beginning of a new musical genre, then Risset's work could perform an important didactic role by being accessible to other performers. I think that it teaches through concept rather than by instrumental example. Others will develop their own software and interactive systems, and compose inspired by that context. Risset's work does not dwell on the technical aspects of performer/machine interaction. This can never be a musical end. If the use of technology cannot aspire to innovative human expression then its pursuit is questionable. The structuring of such interactive encounters–perhaps as a new form of composition in itself–will be of more significance and utility than any arbitrary display of technological innovation with a vague intention of producing music.

Should new music from new technologies allude to the past to gain some sort of artistic credibility or should the music largely eschew all musical contexts other than what it momentarily creates in and of itself? All computer music seems to entail this paradox. Computer music does not appear to inspire and flourish under the aegis of traditional music practices. Like contemporary popular musics, it uses the recording to promote a sense of style and concept. Computer music seeks to transfer the notion of the recording into the more functional notion of the medium. Just because it exists on a recording medium does not necessarily mean that it has to be viewed as a recording. Risset's work, which simultaneously evokes traditional compositional thinking and a futuristic performance practice, leaves the listener in an ambivalent position, partly because it is still under the rubric of "Computer Music".

My interpretation of this is that it is part of a general dilemma in the artistic world, and some sort of resolution to this dilemma depends upon the artist's personal means to interpret the moment. Music is not alone in having its historical continuum severely challenged by collisions of the past, present and future within a single work. If there is a crisis of focus in the significance of a work like Duet for One Pianist, it is, perhaps, due to a failure on the part of the listener to appreciate the nature of the intersections between the historical and contemporary technologies. Interpreting this work through traditional expectations, because it initially appears to be in a traditional context, can result in bewilderment or disappointment. Only after contemplation does it become more interesting. The section titles in Duet for One Pianist should signal the listener to be prepared for music which is in service of some objective. Fractals, in particular, suggests music influenced by a theory of dynamic existence beyond music itself–if one has encountered this concept in the scientific world, it already has a strong visual component.

Technology can quickly become the focus of a musical work and dominate the musical experience. The theremin clearly suffered from this. After marvelling at the phenomenon of the theremin, one gets the feeling that its contribution to music was not in proportion to the phenomenon. There is disappointment in the sonic experience after the expectations of the visual act. I attribute this to a failure to produce idiomatic music at the time and a lack of appreciation for the implications of performer/instrument interaction. Now many of us understand it to be an important consideration in the future of music technology, even if an aesthetic framework is still elusive. Such foresight is rarely manifest in the composition/instrument relation. In reality, the instrumental engagement inspires study which in turn leads to a refinement of the process and perhaps then a more profound understanding of the music. We generally don't enter into a musical experience for insight into the future of music itself unless we understand it to be strongly indicative of a future.

The compositional methodology in Duet for One Pianist stands on its own. Risset is working with a compositional language that he knows well and feels comfortable with. This is music he identifies with as an instrumentalist. He is composing the interface through traditional means which reflects more on the music than the technology.

The pitch deployment suggests influences of serial technique and seems well suited to the use of the piano in this computer music context. It is often convenient, when working with computers, to think through serial techniques or terminology, particularly if the musical parameters are well defined–such as the discrete pitch entities of the piano.

I think it no coincidence that Duet depends heavily on a numerical perspective to articulate the intersection of performer and machine. This is clearly evident from the score and has some interesting ramifications.

The term most heavily used by Risset in the description of his work is "addition", meaning something which must always be added to the musical interaction. Note that the joint efforts of human and computer performer are fundamentally an act of summation. Reading through Risset's descriptions of each movement, it is clear that addition dominates thinking in this context. It is also useful to think of addition not only in the arithmetic sense but as the capacity to contribute. I see the term echo as a milder form of addition. It is momentarily impressive but not as significant as the amassing of new pitch material with an autonomous temporal identity.

From the perspective of contemporary music, overcoming the apparent limitations of the piano's pitch structure is an interesting challenge. The computer offers the richest source of possibilities for changing the historical human/piano nexus. Its interpolation into the performance scenario, at any of a number of points, introduces a myriad of possible connections to the performance space. The mapping of pitch to action, time to action, dynamic to action, for one or multiple performers, has transformed the questions of autonomy and independence to such an extent that the mechanical/technical domain is now in a state of flux.

Each one of the Eight Sketches is a manifestation of a "secondary consequence" of the performer's actions. These actions no longer have a stable meaning but need to be assessed on a performance by performance basis. I assume that Risset realized that simply to write a monolithic work for this interactive scenario would be to risk having the ramifications misunderstood or ignored by an audience. So by encapsulating one idea in each of several short movements and clustering these around a specific title, the listener can focus on the phenomena instance by instance, and not be left later to unravel the experience. The overall musical context, quite rightly, needs didactic material to pave the way for works of a less accessible nature. Such works are usually dependent on an implicit understanding derived from earlier paradigms.

Risset's instrumental context is unique and those who wish to study the music at a sophisticated level will be restricted to those artifacts of his compositional process that can be readily duplicated and disseminated. Although in theory this amounts to almost everything (pianos, computer, score, programs, patches), in Risset's case, the logistics make such an exercise impractical. It is nonetheless an example with positive prospects. Many computer music works become methodologically obscure on completion because it is not possible to document all the processes involved in the creation of the work. Given the capacity of the computer to "log" progress, management of that task should have become simply a matter of formality rather than the disruptive exercise it is.

2. Wildlife: 5 Movements for Zeta Violin, Boie Radio Drum and Computers - David Jaffe and Andrew Schloss

It just so happens that this duo uses instruments that can be interpreted as the technological transformations of the violin and theremin discussed in chapter two. The Zeta violin is a transformation of the traditional violin and the Boie Radio drum is a descendant of the theremin. As with Risset's composition, the instruments have an ancestry that is quite evident to the audience. The Radio drum might be a little perplexing but less so than the theremin. The Radio drum is not really a drum, but the performer is a percussionist and uses it in a manner that frequently appears consistent with the actions of a percussionist.

The concerns of the duo are more visibly technological and greatly expand on Risset's interactive approach. They employ a more elaborate use of technology, one that requires an appreciation more cognizant of trends in music technology.

Beginning with improvisation rather than notated music, the duo negotiate a complex network of interaction, less constrained than Duet for One Pianist. Wildlife is "a structured improvisation in five movements. All material is generated in response to the performer's actions; there are no pre-recorded sequences or tapes."

The most striking difference between Duet for One Pianist and Wildlife is their respective sound worlds. The Zeta violin and the Boie drum do not have a single sound that characterizes them. Both are controllers; they control synthesis units. The Boie drum makes no sound at all, whereas the Zeta violin can sound like a violin and be used that way. This, however, does not reflect the extent of its potential. The Disklavier falls somewhere between these combinations. It too can be used as a controller but the sound of the piano is always present and there are some technical anomalies which pose problems for complex use as a controller.

Improvisation, as the musical practice of Wildlife, is motivated by the complex use of technology. Jaffe observes, "The full power of the computer in an improvisational context does not show itself until we add a second performer to the ensemble. Now each performer can affect the playing of the other." The permutations of this interaction reflect the potential of technology to influence and change not only the role of a single performer but that of a group of performers. Jaffe continues:

Both performers can be performing the same electronic instrument voice at the same time. One performer can act as a conductor while the other acts as soloist. And these roles can change at a note-by-note rate. In this manner, the barriers that normally separate performers in a conventional instrumental ensemble become instead permeable membranes.
The intention here appears to be that technology should have a significant and noticeable impact on the reception of the music. Here is no subtle engagement in which tradition is balanced against technology; its subtlety is to be appreciated in the nature of the interaction. This is a far more dramatic exercise in exposing the power of technology placed between performer and instrument. For this reason, improvisation is the ideal vehicle for demonstrating the degree of spontaneity and immediacy with which the computer can respond. We also see the computer's role expanded in a poetic sense: "A computer can magnify, transform, invert, contradict, elaborate, comment, imitate or distort the performer's gestures". Risset explored the use of non-pitch parameters such as dynamics and duration within the confines of the computer controlled piano scenario but it is in the Wildlife interaction that these conditions get swept up into a super-category of performance gesture. This becomes particularly rarefied in comparison to the more mundane concept of applying numerical procedures to a given input. In fact, they may be the same but the concept of gesture implies a careful interpretation of whatever time-based parameters can be sent to the computer. Understanding interaction becomes a matter of defining a moment in a stream of gestural information, and less of finding meaning in discrete note values.

The composition Wildlife is aptly named. Jaffe and Schloss do indeed set out to explore the untamed technological jungle of computer interaction over a very broad sound context. The extent of this sound context can be appreciated from a hypothetical scenario in which the computer has been monitoring a particular passage and decides to change the ambience of the sound rather than the sound itself. It might know, for instance, that when a particular synthetic instrument is invoked, it needs to adjust the playback equipment to bring out unique properties of that sound by adjusting equalization, reverb or room simulation. The computer can thus also control mixers, reverb units, special effects boxes and a host of post-processing equipment as well as perform.

From my experiences in listening to Wildlife, live and on tape, I would say that much of the technological processing is transparent. It happens on many levels and one just accepts that it is a complex interaction using sophisticated technology. Its level of sophistication becomes largely theoretical when it cannot be easily observed or discerned.

At this point, I would like to comment on David Jaffe's remarks about three of the five movements of Wildlife. In a manner similar to Risset, the duo are attempting to explicate a structured improvisation through unique movements rather than a sustained single work. Each movement attempts to explore certain modes of interactivity. These modes can be understood as the construction of relationships between the participants. There is also the same desire for the musical context to appear somewhat self-explanatory, although the interactivity tends to get very complex, and perhaps, undermines this objective.

Movement 1

Here is a familiar modus operandi, "...a simple interactive scheme, in order to allow the audience to perceive the causality between a performed action and the resulting synthesizer sound." The movement explores the idea of chord mapping and progresses from what appears to be a simple scenario to one eventually quite ambiguous and convoluted.

The violin, which also conveys its amplified acoustic sound, triggers synthesized piano sounds via one of a number of chord-mapping sets produced and maintained by the computer. A chord-mapping set consists of 12 chord maps: the chords produced by the computer when the performer plays a particular pitch class. The violin is not restricted to this role; it can suspend the chord-mapping mode, play single notes and send information that influences the drum sounds. The percussionist determines which of a number of sets is active. This is only the beginning of what transpires to be a very convoluted movement, made more difficult by the performers' use of the same synthesized sound. The outcome creates an ambiguity about the role of each musician and should result in an appreciation of the concept of a duo-instrument. Two performers become one, and, as opposed to Risset's case, one performer becomes two.
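
The chord-mapping set itself can be pictured as a table of twelve entries, one for each pitch class. The Python sketch below is hypothetical: the chord contents and the pair of sets are invented for illustration and are not taken from Wildlife.

    # Hypothetical chord-mapping sets: one chord per pitch class (0-11).
    CHORD_MAP_SET_A = {pc: [pc + 48, pc + 55, pc + 60, pc + 64] for pc in range(12)}
    CHORD_MAP_SET_B = {pc: [pc + 48, pc + 53, pc + 58, pc + 63] for pc in range(12)}

    ACTIVE_SET = CHORD_MAP_SET_A   # the percussionist selects which set is active

    def violin_note_to_chord(midi_pitch):
        """Map a violin note to the chord assigned to its pitch class."""
        return ACTIVE_SET[midi_pitch % 12]

    print(violin_note_to_chord(67))   # the chord called up by any G the violinist plays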

Movement 2

"Here, the violinist improvises a slow sustained melody, while the percussionist embellishes this melody by playing back the violinist's pitches using his own rhythm and with a variety of timbres". The similarity between the first and second movements lies in the way the technology sits between the performers. In this movement they have more independence and less direct influence on each other. The computer functions as a common memory of the performance. The memory is updated by the violinist and accessed by the percussionist.

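The "common memory" can be imagined as little more than a shared buffer of recent violin pitches, written by one player and read back, at will, by the other. The following is a minimal Python sketch, with the buffer length and the reading strategy assumed for illustration.

    from collections import deque

    class PitchMemory:
        """Shared store of recent violin pitches, read back by the percussionist."""
        def __init__(self, size=32):
            self.pitches = deque(maxlen=size)   # the oldest pitches fall away

        def violin_update(self, pitch):
            self.pitches.append(pitch)

        def percussion_read(self, index=-1):
            """Return a remembered pitch; which one, and when, is the percussionist's choice."""
            return self.pitches[index] if self.pitches else None

    memory = PitchMemory()
    for p in (62, 64, 69):           # the violinist's slow melody accumulates
        memory.violin_update(p)
    print(memory.percussion_read())  # the percussionist sounds the most recent pitch, 69
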
Movement 3

This movement pairs the computer with each of the instruments: violin/computer and percussion/computer. The computer listens to the violin's pitches, generating melodies from the most recent of them. The computer produces these melodies, each with a melodic contour, register, rhythm and timbre, through a "fractal" process (note again Risset's use of Fractals). What is most interesting about this combination is the complexity of the interaction. Jaffe describes it:
This twin responsibility on the part of the performer as both soloist and conductor gives him great power in directing the flow of the music. Yet, the computer is also given great autonomy and at times seems to have a mind of its own. This paradoxical combination makes for a unique blend of the expected and the unexpected, control and surprise, that is particularly exciting in an improvisational context.
The percussionist has a slightly different interactive scheme, one in which the computer selects the pitches using the same "fractal" technique as the violin. The significant difference here is that the computer maps the pitches onto the surface of the drum as a kind of grid or palette which is constantly changing. The percussionist has absolute rhythmic freedom, and uses a set of foot switches to control speed, articulation and dynamics of the computer generated material.
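
The grid or palette can be pictured as a partitioning of the drum surface into cells, each holding a pitch supplied by the generating process; a strike simply looks up the cell beneath the mallet. In the Python sketch below the grid resolution is invented and a random choice stands in for the "fractal" process.

    import random

    SURFACE_W, SURFACE_H = 36.0, 26.0   # cm, after Schloss's description of the surface
    COLS, ROWS = 6, 4                   # an invented grid resolution

    def new_palette(pitch_pool):
        """Fill the grid with pitches; random choice stands in for the real process."""
        return [[random.choice(pitch_pool) for _ in range(COLS)] for _ in range(ROWS)]

    def strike_to_pitch(x, y, palette):
        """Return the pitch of the cell beneath a mallet strike at (x, y) in cm."""
        col = min(int(x / SURFACE_W * COLS), COLS - 1)
        row = min(int(y / SURFACE_H * ROWS), ROWS - 1)
        return palette[row][col]

    palette = new_palette([60, 62, 63, 67, 70])   # refreshed whenever the process updates
    print(strike_to_pitch(10.0, 5.0, palette))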

From the description of only the first three movements, it is clear that the duo are transcending the traditional confines of their instruments by whatever means are available to them. It begins with the addition of foot switches which directly affect the computer's processing of their actions, and extends to the indirect influence they have upon each other within a kind of network.

It is tempting to suggest that Risset's work is a clearer, perhaps more illuminating example of interaction, because the context appears simpler than that of Wildlife. From listening to the recording of Duet for One Pianist, I would have to say that this is an oversimplification of the dynamic of performer/machine interaction. Both works, in their own ways, make it difficult to untangle the web of interaction. Sometimes it is clear that the computer is generating material when it sounds unidiomatic. But this is not always a good indication of differentiation. As in polyphony, there are times when the streams merge and become momentarily fused, creating sound not representative of either of the separate parts. Differentiation frequently occurs at a structural or gestural level and not down at the level of individual notes.

At this time in the evolution of performer/machine interaction, we are intrigued by the roles of the performer and the machine, perhaps more so than by the music. Consequently, we seek to distinguish their respective roles as the first step in developing insight that will lead to an appreciation of the musical experience associated with the genre. Although music composed in this context will not always be concerned with differentiation of the participants, in this early stage, listeners might seek to evaluate their individual musical experiences based on their perception of the sophistication of the dialogue between participants.

3.2 INSTRUMENTAL DEVELOPMENT

The three instruments considered here, the Zeta violin, the Yamaha Disklavier and the Mathews/Boie Radio drum, are interpreted as contemporary technological extensions of the violin, the piano and the theremin.

In the previous chapter, the violin, piano and theremin were discussed with the purpose of gaining some perspective on the "musical interface" of traditional instruments. While specific reference to a universal driving technology–one common to the evolution of these instruments–was impossible due to the time span involved and the nature of the instruments, each instrument did reveal something of a unique technological imperative. This imperative was strongest where the instruments exhibited a significant dependency on some form of external technology, product design, materials, factories or distribution, which was in turn stimulated by a broader social context. Musical instruments are now the products of highly specialized environments. Today, the presence and acceptance of technological and technical imperatives is important to the world of music technology, and to thinking about musical instruments and their use, now and in the future.

Zeta Violin

The Zeta Violin in its most spectacular manifestation is a solid-bodied instrument with individual pickups on each string. The sound of the instrument can be amplified but, more significantly, it can be converted to MIDI information and passed to synthesis equipment via a computer. This information includes the usual MIDI data of channel number, note on, note off, dynamic level and glissandi.

Amplification by itself would not constitute a significant departure from the traditional instrument, and in every other respect it remains a violin. But the addition of the MIDI interface dramatically shifts its historical position. In effect, the violin is no longer just a violin. It has become a controller of a considerable range of sound with a performance protocol based on violin technique. This does not come without a trade-off, but it generates a new enthusiasm for the instrument through the exploration of how it can be mapped to different sounds.

Yamaha Disklavier

The Yamaha Disklavier is a digital player piano, commercially available as an upright or grand. The addition of digital sophistication to the function of the player piano considerably improves playback and record possibilities. Yet a further augmentation to its position as a player piano comes from the ability to function along with external synthesis equipment via MIDI.

There are, however, problems in synchronizing the piano sound to that of external synthesis equipment, and to guarantee that this occurs, the manufacturer imposes a 500 millisecond delay on sending the external MIDI signal. During that time the piano action is in operation and the piano note eventually sounds, in theory, along with the synthesizer note. Although Risset was not using external synthesis equipment, MIDI signals travelled between the piano and the external computer equipment. While the standard delay was not necessary, some compensation needed to be put in place. Risset describes the solution: "Van Duyne wrote a program of compensation, based on statistical measures he made of the delay as a function of loudness." This was necessary if Risset was to have the computer playing the piano at the same time as himself.
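
The compensation described presumably amounts to sending each computer note early, by the mechanical delay predicted from its loudness, so that the string sounds at the intended moment. The Python sketch below illustrates only the principle; the linear delay model is an assumption and not Van Duyne's measured function.

    # Hypothetical loudness-dependent delay compensation for a player-piano action.
    def estimated_action_delay(velocity):
        """Softer notes drive the hammer more slowly, so the mechanical delay is longer."""
        return 0.500 - 0.003 * velocity        # seconds, for MIDI velocities 1-127

    def schedule_piano_note(target_time, pitch, velocity, send):
        """Send the note-on early enough that the string sounds at target_time."""
        send(target_time - estimated_action_delay(velocity), pitch, velocity)

    events = []
    schedule_piano_note(10.0, 60, 40, lambda t, p, v: events.append((round(t, 2), p, v)))
    schedule_piano_note(10.0, 60, 110, lambda t, p, v: events.append((round(t, 2), p, v)))
    print(events)   # the quiet note is sent earlier than the loud one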

Mathews/Boie Radio Drum

The Mathews/Boie radio drum differs from the other instruments in that it is not commercially available and thus has the image of being more experimental. Although not entirely apparent, it differs in the mode of performance. While it is viewed as a drum of some sort it need not be struck, and in fact, intentional sound is not produced by striking its only surface.

David Jaffe describes the radio drum:

The Mathews/Boie Radio Drum consists of a flat surface containing an array of receiving antennas and two mallets with transmitting antennas. The performer moves the mallets above or on the drum surface and the device senses the position of the mallets in three dimensions using capacitive sensing of electromagnetic waves. Typically, the drum is used as a percussive device, but can also be used to continuously control several variables simultaneously, where the meaning of the variables is entirely up to the composer who programs the computer. In fact, the drum itself produces no sound; it depends entirely on the computer to process the information it produces and transforms that information into sound-producing commands for synthesizers.
Andrew Schloss describes it in more detail:
The Radio Drum itself could be called a "gesture sensor" that keeps constant track of the 3-dimensional spatial location of the mallets in the following way: A small amount of wire is wrapped around the end of each of two sticks and acts as a conducting surface electrically driven by a radio frequency voltage source. The drum surface beneath the sticks receives the signal, and the x and y positions of the sticks are derived by determining the first moment of capacitance between the sticks and the surface below. The z position (height) is given by the reciprocal of the first moment. The accuracy in x and y is about .5 cm, over a surface of about 26 by 36 cm, for a total of about 4,000 possible values on the surface alone. The greatest accuracy in z is in the region from about 5 cm above the surface and closer. The response time is between 5 and 10 msecs, but this latency can be lowered to below 5 msecs.
The radio drum is a remarkably simple yet powerful technology which has had much less exposure than the theremin. Although it employs a primitive means of performance–percussion mallets–the underlying implications are quite contemporary.

The Mathews/Boie drum's dependency on digital technology to communicate and interpret its signal categorizes it as a quantizing instrument, but the drum itself is an analogue device. "The drum contains a small microprocessor to do preliminary signal processing and to package its data. The drum can send its information either via an RS232 line, via a Midi line, or as a set of three analogue voltages for each stick." The digital signal is a numerical representation of instances in the behaviour of the system, i.e. the three coordinates of the stick positions. The degree of quantization is variable and depends on what device is receiving and responding to the drum's signal. If the computer tracking the baton is to generate MIDI information for a synthesizer, then the data transmission rate from the baton does not need to be higher than the MIDI transfer rate.
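
Schloss's first moment of capacitance, and the subsequent quantization for MIDI, can be sketched in a few lines. In the Python fragment below the antenna layout, the scaling constants and the restriction to a single axis are all assumptions made for brevity; only the idea of the centroid, and of the reciprocal for height, follows the descriptions above.

    # Hypothetical sketch of position estimation and MIDI quantization for one stick.
    ANTENNA_X = [0.0, 12.0, 24.0, 36.0]   # assumed cm positions of receiving antennas

    def stick_position(capacitances):
        """Estimate x as the first moment of the per-antenna signal; treat height as
        varying roughly with the reciprocal of the total signal."""
        total = sum(capacitances)
        x = sum(c * pos for c, pos in zip(capacitances, ANTENNA_X)) / total
        z = 1.0 / total
        return x, z

    def to_midi_controller(x, z):
        """Quantize the continuous estimates to 7-bit MIDI controller values."""
        cc_x = min(127, max(0, int(x / 36.0 * 127)))
        cc_z = min(127, max(0, int(min(z, 1.0) * 127)))
        return cc_x, cc_z

    x, z = stick_position([0.2, 0.9, 0.6, 0.1])
    print(to_midi_controller(x, z))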

It is interesting to note that Max Mathews' earlier research centered on issues of conducting. That research, on the Conductor Program, seems to be visible in the radio drum, as another manifestation of the concept of control. The implication that the percussionist is now a conductor is an interesting and fascinating transformation of role and musical orientation.

3.3 PERFORMANCE EVOLUTION

Evident in Duet for One Pianist and Wildlife is a change in the status and role of the performer. To pass over this without comment would be to neglect a significant aspect of the musical interface, although one that is difficult to address. We are accustomed to appreciating the technical evolution of musical instruments but not to changes that directly question the authority of the performer. The computer integration into the performance scenario decentralizes the performer and necessitates some re-thinking of the future of performance as it is understood from the historical instrumental paradigm. Furthermore, there is the question of composing the interface prior to performance. Since the computer's position is so flexible, the interface is necessarily considered "composed" at the commencement of each performance. This is the most dramatic augmentation to the idea of performance since the emergence of musical instruments themselves, and as yet, we do not fully comprehend the implications beyond our contemporary concerns.

Traditional instrumental performance can be interpreted as a metaphor for the triumph of the human condition over the physical world. Advancing performance criteria is a highly symbolic tribute to human effort and prized in our culture. Learning an instrument is the experience of assimilating an external object into the domain of our body consciousness. It is generally understood that a musician must feel at one with their instrument. That musical instruments change is not so much due to an intolerance of this condition but to a deepening understanding of the function and inherent extensibility of the instrument itself. It is assumed that humans accommodate changing external conditions and not vice versa. This has permitted instruments to evolve around their own internal logic which was once sympathetic and concomitant with performance evolution, an evolutionary rate much slower than that at which contemporary technology now proceeds.

In form and material, musical instruments have largely been inconsiderate of the irregularities of the human condition, and it is to the credit of individual ingenuity that instrumental technique is so sophisticated and extensive. The fact that instruments were not explicitly designed to meet the differences between humans has to a degree encouraged the development of musical instruments that are. The early efforts were, unfortunately, based on existing instruments, too simplistic for serious use, and frequently aimed at the leisure market rather than the serious musician. Various mechanical instruments and player pianos come to mind. Consequently, today any suggestion that an instrument is relatively easy to play undermines its potential as an instrument worthy of serious consideration. Paradoxically, contemporary synthesizers, promoted for their timbral diversity and sound sophistication, are rarely discussed as difficult to learn, even though there may be design anomalies which limit the facility to exploit their timbral potential. These instruments invariably have potential not conveniently accessible from their primary design. This situation has prompted many approaches to instrument design and assures it of a future in music technology. The notion of the virtuoso is thus enriched as participation with new technology in the performance arena increases. The virtuoso may soon be someone who is considered a master at creatively interacting with the computer during performance or interpreting such works.

When an interactive work is deemed interesting, we understand this to be due to its pre-performance configuration, the concept of the interaction and the instrumentation and technological application, and not entirely on the spontaneity of the performance itself. Since our traditional appreciation of music has been enhanced by historical understanding, we rely on whatever information can be gathered in advance to illuminate the process we are about to witness. This situation is also reinforced by the apparent need to explain the function and presence of technology, and its often vague musical connections. The position of the technology is typically on a superficial level, due to the brevity and instability of its existence. Contemporary technology does not promote the same profound concerns for technique and interpretation found in traditional musical practice but simply the momentary study of functionality in the pursuit of the creative objective. Since this functionality could change with the next revision of the software or hardware, and it may be "buggy" under certain conditions, it simply doesn't warrant an extensive commitment. If the software, perhaps even the hardware, has provision for user-modification, serious study still remains optional. Delving into the code of some music software is not guaranteed to produce better or more significant musical results, but that approach does appear to offer the composer a personalized approach.

From this we can surmise that the surface details of technological implementations, the nature and status of the hardware and software, particularly as they relate to music composition and performance, are not as critical as their counterparts are for the traditional instruments. Or they are critical only in relation to a specific work. The quintessential piano technology, modest though it may be, is not particularly transferable. In computer music an understanding of concepts that can migrate across the perpetual renewal of the technology has become increasingly important. These context-independent experiences, composition strategies, algorithms and intuitions, appear congenial to different technological platforms. In the musical examples cited earlier, such an attitude to technology is evident only when considering the nature of the compositions (notated score and improvisation) and performance contexts. The use of traditional instruments tends to obscure the conceptual diversity employed to explore creatively the technological context. In theory, these concepts are exportable to the next level of technology that emerges.

Contemporary music technology, in keeping with the expectations of technology in general, has stressed facility, flexibility and extensibility as properties of its evolution. This is frequently a reflexive condition. New products are frequently the older versions with fixes rather than truly innovative replacements. However, it is possible that even minor or cosmetic changes can influence attitudes to sound and performance, because the physical appearance of an instrument is now independent of its sound producing functions. But alternatively, and equally frequent, is the appearance of a new piece of musical equipment on the market that fails to become popular precisely because it is so innovative.

I previously observed that the relationship between traditional instrument and performer approximates a kind of intimacy and position similar to other parts of the human body. As a prosthetic extension, its sophistication increases with the maturing mind and the accumulation of experiences, and not only through natural predilection. This position on musical instruments, in one form or another, has been hard to abandon. Music technology has actively sought to maintain this position by accommodating traditional instrument characteristics where possible, but given the potential of technology, this begins to appear somewhat limiting. This potential, promoted by the global impact of technology, has driven a wedge of sophistication between performer and sound which is now widening. The increased distance and lessening of visceral implications between the performer and sound is problematic, and it has become the concern of many composers, scientists and technicians to fill in this gap with various interactive systems. These systems attempt to engage the performer at a level commensurate with an understanding of traditional instrument technique.

The first line of difficulty encountered by these efforts is in the reconciliation between historical and future positions. It is generally appreciated that learning an instrument is an undertaking that can last a lifetime. The implication here is that the instrument will remain relatively unchanged. Mastery of an instrument implies a process of building upon experience and a refinement of knowledge. This cannot be achieved if the instrument is changing significantly at a fundamental level of interaction. The idea of an instrumental tradition has evolved around the fundamental means of producing sound, i.e. striking, blowing and scraping, and remained relatively free from the forces of external technological change. In coopting contemporary technology into the service of sound production, the performer gives up certain positions to technology. The first to be relinquished is the relation of traditional skill to sound. Not only is this because the performer is removed from the vicinity of sound production, but the potential of technology itself demands that the context be made more complex than could be traditionally conceived. The technology is encouraging and inspiring sophistication that is independent of any musical considerations.

Consider both Duet for One Pianist and Wildlife. No matter how much or with what material the computer is pre-configured, the computer's contribution is always quasi-improvisational. The performers are never quite sure what the outcome will be, but accept that the technology must always be granted a certain aesthetic right; this is fundamental to the nature of performer/machine interaction and considered the most challenging direction for the future.

Physical Engagement and Visceral Performance

The Mathews/Boie Radio drum is an indirect technological derivative of the theremin and approaches some of the problems inherent in the earlier instrument with a mature understanding of electronic musical issues and attitude towards audience expectations in concert settings. Whereas the theremin in sound and action presented a spectacle of detachment and ethereality, the Mathews/Boie radio drum returns a sense of primitivism and contact for what also in reality amounts to a detached experience. The radio drum can be struck but it is not that impact that makes the sound. Contemporary society has existed long enough now to have absorbed and normalized the phenomena of invisible communication links but still significantly appreciates the consequences of physical contact, even if as simulacra. The idea of contact is promoted in this description of the instrument by one of its creators. Max Mathews observes:

With the radio drum you define an invisible surface above the drum, and when the radio transmitter, or the tip of the baton, crosses this surface, then the computer senses a beat. You soon learn to feel exactly where this surface is, and you can cross this surface with great delicacy as well as great stealth. Since it is an invisible surface, the baton doesn't make any noise when it penetrates it.
The underlying technology that constitutes the mechanism of the radio drum is quite different from that used in the theremin. An important difference is that the performer is required to hold and direct the transmitting antennas, which are integrated into percussion mallets. Simply waving the hands around will not initiate sound production the way it does with the theremin. The relation between the mallets and the flat receiving surface, with its array of antennas, creates a close and complex two-dimensional space in which interaction takes place. Unlike the theremin, this can be re-configured along with the nature of the sounds that it triggers. As was pointed out in chapter 2, the theremin is its own sound maker and not a controller for other sounds.

Both the theremin and the Radio drum influence the performer through the manner in which they quantize the results of physical actions. The theremin, at one end of the spectrum, doesn't quantize frequency and amplitude and thus puts the responsibility of discrete values of either on the performer. This dominates the performance experience and although, in some respects, it appears incongruous when compared to traditional instrumental practice, it does conform closely with the experience of traditional instrumental engagement.

The Radio drum is also characterized by no physical contact but does quantize the output signal if it is being converted to a digital format. Since the Radio drum is a controller, the performer can hear discrete changes taking place, and is thus probably not overly concerned about the free floating nature of performance gestures. As a controller, it can have its output data reassigned dynamically which encourages a completely different attitude to playing technique.

It is also significant that the radio drum arrangement returns an object to the performer's grasp, not only as a convenient means of control but as a symbolic gesture towards the performer/audience paradigm: the performer, one might say, as percussionist/conductor. It could be interpreted as an acknowledgement of historical musical performance in this age of electronic detachment. This is a paradoxical position for the very visceral act of percussion, and something akin to the incongruity experienced with the theremin. But at least, from the performer's point of view, it is a concession to common practice, even if it introduces a different set of concerns.

The improvisational settings of Duet for One Pianist and Wildlife are not without problems of a technical and aesthetic nature. For example, if the technology is employed to do signal analysis or event monitoring, some delay is necessarily incurred. For a performer accustomed to an immediate response from an acoustic instrument, such delays prior to sound production can be disturbing and debilitating. This is recognized as a problem with both the Zeta violin and the Disklavier. Whether the delays are tolerable depends on the performer; it appears that some can work under these conditions while others cannot. The issue of real-time analysis is interesting because it brings into consideration the concept of translating and transferring performance, as gesture or sound, into another form for the purpose of yet further sound production. This inevitably involves a temporal sequence of processes, although perhaps not intended to be perceived as one, which consumes considerable amounts of time depending on the quality and accuracy of the desired result. For example, with the Zeta violin, the tracking and conversion of pitch is mostly a problem on the G string, where the latency is around 30 ms. This could be viewed as disturbing in context, but through experience and anticipation it might come to have a negligible impact on the performer.
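A rough calculation suggests why a figure of around 30 ms is plausible for the lowest string. A pitch tracker generally needs to observe several complete periods of the waveform before it can settle on an estimate; the assumption of four periods in the sketch below is illustrative and is not a specification of the Zeta hardware.

    # Approximate analysis time needed to observe a given number of periods.
    def min_analysis_time_ms(frequency_hz, periods_needed=4):
        period_ms = 1000.0 / frequency_hz
        return periods_needed * period_ms

    for name, freq in [("open G string (G3)", 196.0), ("open E string (E5)", 659.3)]:
        print(name, round(min_analysis_time_ms(freq), 1), "ms minimum")
    # open G string (G3) 20.4 ms minimum
    # open E string (E5) 6.1 ms minimum

Add the time taken for conversion and transmission and the reported latency on the G string is unsurprising; the same arithmetic explains why the problem is far less severe on the higher strings.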

The potential of automated analysis and decision-making in real-time performance has yet to be fully realized, but its implications are widely appreciated as a positive sign for the future of computer music. The scope of interactive systems at the moment appears to be unbounded, and their musical significance for composition and performance is largely unknown. It is, however, interesting that it has become possible to speculate on the future of musical performance through the expectations placed on technology.

3.4 MUSICAL COMMUNICATION AND NETWORKS

The performances of Duet for One Pianist and Wildlife have in common the transfer of information between performer, computer and sound production equipment as an essential part of the performance, i.e. an essentially unmusical activity. Yet this information is a numerical representation of the performer in action, meant for immediate use by the computer systems or synthesizer units and not for the performers. The information, in its active state in the temporal domain, can only be interpreted through or by the intervening systems. The instrumentalists remain unaware of the details of this communication, which largely emanates from them. Although the data could be converted into a form that might stimulate the performers, e.g. some visual display, performers typically, and more conveniently, engage the consequences of this data through the normal channel of sound.

The idea that certain dimensions of performance (pitch, amplitude, duration and timbre) might be distilled and codified from the act of performance, and sent elsewhere with the intent of influencing the ongoing performance, is as remarkable as it is problematic. The idea and potential of such musical communication would have been incomprehensible to musicians earlier this century, and quite obviously to those of any previous period in music history. The ability to communicate information in a form and at a speed that could previously only be imagined needed time before it was absorbed and utilized as a matter of convention in musical practice. It is an external technology and not exclusively musical. Although Thaddeus Cahill had the idea of generating and transmitting sound through conventional telephone lines at the turn of this century, his work, results and vision had little impact on traditional musical practice and contemporary composition at the time. Almost before any of the canon of twentieth-century music had been created, his efforts had fallen into obscurity along with the several-ton sound generator used to produce the signals.

The transmission of musical signals from generator or trigger to transducer has been a fundamental fact of music technology from its inception. All electronic instruments transmit signals. In analogue equipment, these signals are defined by variation in the fundamental electrical characteristics, such as voltage. This equipment generates and responds to changes in voltage that are considered, in theory at least, to be continuous. There are no discrete values within this range. The point of human control is typically calibrated in quantifiable units which provide usable subtlety within an appreciable and meaningful range. Quantization is at the user's discretion.

The nature of analogue music systems parallels the responsiveness of traditional instruments. The signal travels at the speed of electrical propagation in a circuit, so there is, in effect, no delay between one component and another unless one is artificially created. This responsiveness prevented the analogue signal from being regarded as anything but transient. It can be modified through various circuits, but this too is a real-time process. When stored or recorded, the signal is subject to distortion and deterioration from other signals inherent in the storage medium itself. Once mixed, these signals cannot be easily or effectively separated. The point here is that, in the analogue world, what is regarded as information is the same thing as the physical signal: changes in the electrical characteristics of the signal are, in effect, changes in the information. Analogue circuits are therefore always active when significant voltages are present. Given this, and the general view of analogue circuits as real-time, they were incapable of inspiring a revolutionary type of information distribution and control. Control in the analogue context is a question of changing voltages at the user end; there is no basis for the concept of transmitted instructions or parameters as we now understand them in the digital domain.

With the advent of digital signals, communication became more sophisticated across a wider spectrum of use including information representation, analysis, computation, storage, and transmission systems. Represented as discrete states within the physical means of transmission, information has become independent of the potential fluctuations in the electrical signal.

In the context of this chapter we are interested in the communication of instructions and event parameters between a performer and computer music equipment rather than digital audio signals, although the future of computer music composition and performance is likely to involve more complex information streams such as digital audio.

Prior to the emergence of MIDI, communication between computers in a musical context was a matter of proprietary hardware, protocols and specific uses. The concept of a shared standard was not possible within a community of autonomous systems. Descriptions of such early systems as that used by Bischoff, Gold and Horton are noticeably lacking when it comes to the technical details of the communication between the microcomputers. It can only be assumed that, in the late 1970s, such details were not considered as important as those of the overall set-up and the compositional/performance procedures.

Since 1983, MIDI has become the embodiment of a musical nervous system and the definitive musical interface. Understood as the means to connect a variety of music technologies to the composer/performer, it has been, in many ways, the prime motivation among many musicians for pursuing musical production using computer technology. It promotes and perpetuates a technological involvement by appearing as a kind of communicative glue, coordinating various unique pieces of equipment, irrespective of their function and contemporaneity, through an encoding of human gesture. Guy Garnett describes it as "...more of a communications protocol than a representation of either signals or music."

Contemplation of the MIDI specification reveals a particular interpretation of musical performance and instrumental interaction. MIDI is an instrument without form or specific sound. It is extremely influential in the realization of real-time electronic music. This influence initially comes from the types of devices constructed to generate signals, then from the level of quantization it imposes on this information and finally through the types of systems it encourages or inspires to manage and conceptually frame the codification of performance information. MIDI codifies, in a peculiar manner, a performer's relation to a piece of equipment deemed musically significant.

For all its popularity, MIDI is generally regarded as limiting because of its fundamental bias towards the keyboard. This limitation is clearly a consequence of its immaturity and initial market orientation. There may have been technical considerations which suggested that it was easier to implement a protocol scheme for the keyboard than for some generic electronic instrument, but it is likely that, had this direction not been pursued, MIDI would not have had the impact perceived today. Nevertheless, the scope of MIDI is regarded as extremely narrow, particularly by non-keyboard instrumentalists who routinely confront the limitations of the "note on/note off" paradigm.
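The keyboard bias is visible in the structure of the messages themselves. A channel voice message is a status byte followed by two 7-bit data bytes; for note on, those data bytes are a key number and a key-strike velocity, a description that maps naturally onto a keyboard and much less naturally onto bowing, breath or mallet gestures. The sketch below builds these standard three-byte messages in Python; the helper function names are illustrative only.

    # Standard MIDI note on/note off messages: status byte plus two data bytes.
    def note_on(channel, key, velocity):
        return bytes([0x90 | (channel & 0x0F), key & 0x7F, velocity & 0x7F])

    def note_off(channel, key, velocity=0):
        return bytes([0x80 | (channel & 0x0F), key & 0x7F, velocity & 0x7F])

    # Middle C (key 60) played moderately loudly on channel 1:
    print(note_on(0, 60, 90).hex())    # 903c5a
    print(note_off(0, 60).hex())       # 803c00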

MIDI does, however, make concessions to different data types. In some ways this creates abstractions of questionable usefulness, in that they become synthesizer-oriented controls less applicable to real-time performance. Beyond that, the existing data classifications can be interpreted alternatively. For example, note on and note off values might simply mark the beginning and end of some event, velocity might determine which sound is used, and the pitch or modulation wheel might supply the actual values for that sound. Such a re-interpretation of the data may, of course, be impossible for most commercial synthesis equipment; to get such a scheme to work, an intermediary computer system would have to re-map the data to the appropriate values for the synthesis equipment.
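A sketch of such an intermediary re-mapping, under the re-interpretation just described, might look like the following. Here note on and note off are treated only as event boundaries and the velocity byte is re-read as a choice of sound; the mapping table and the outgoing program-change strategy are invented for illustration, and a real system would translate to whatever its particular synthesis equipment expects.

    # Hypothetical intermediary: velocity selects the sound, note on/off
    # merely mark the beginning and end of an event.
    VELOCITY_TO_PROGRAM = {range(0, 43): 12, range(43, 86): 45, range(86, 128): 81}

    def remap(status, data1, data2):
        if status & 0xF0 == 0x90 and data2 > 0:            # note on
            program = next(p for r, p in VELOCITY_TO_PROGRAM.items() if data2 in r)
            channel = status & 0x0F
            return [bytes([0xC0 | channel, program]),      # select the sound
                    bytes([0x90 | channel, 60, 100])]      # start the event
        if status & 0xF0 == 0x80 or (status & 0xF0 == 0x90 and data2 == 0):
            return [bytes([0x80 | (status & 0x0F), 60, 0])]   # end the event
        return []

    for message in remap(0x90, 72, 95):    # incoming note on, velocity 95
        print(message.hex())               # c051 then 903c64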

MIDI does not cater to particular performance techniques to the extent that some players might wish. The fact that the note on/note off type occupies a principal position in the protocol suggests an extravagant rather than efficient consideration of communication for the extant range of physical controllers.

Consequently, one might ask, "For what instrument is MIDI designed?" The answer can be found at the sound end of the MIDI transmission. The specification attempts to match a selection of performance gestures to synthesis equipment on a commercial scale. The orientation of this agenda can be appreciated by recognizing the direction of information flow: towards the sound-generating equipment and away from the specialities (idiosyncrasies) of performers or composers.

The underlying assumption behind this method of information transfer is that if it is kept simple, the costs will stay down. Transferring simple types of information is convenient and inexpensive, and can be managed efficiently in hardware. When we begin to exercise our imaginations on the subject of information management and use, we quickly reach a point where our expectations extend far beyond our capacity to pay for them, let alone understand the implications of our desires. For example, suppose it were possible to return information to the controlling instrument. At the moment this is regarded as unnecessary in most cases because of the difficulty in conceiving how it might be used and, of course, the cost. But the concept of significant bi-directional information flow is very powerful and prevalent in the larger domain of digital communications. How can the simultaneous transmission and reception of data be employed in computer music applications? The Jaffe/Schloss duo is an example of this kind of networking: the Boie radio drum can be dynamically re-configured whereas the violin cannot. Here we have to distinguish between the synthesis unit, where the sound is produced, and the controller, where the musical act is initiated. Generally, networking and bi-directional information flow would need to be handled by a separate computer system. Not only does the system become more expensive, but the ramifications of the information flow become more difficult to imagine and to use in composition and performance.
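The contrast between the re-configurable drum and the fixed violin can be expressed as a simple two-way exchange: performance data flows out of the controller while configuration data flows back into it. The sketch below is purely conceptual; the class, the gesture names and the message forms are all hypothetical.

    # Conceptual sketch of bi-directional flow: the intermediary computer can
    # reassign what a gesture means while the performance continues.
    class ReconfigurableController:
        def __init__(self):
            self.mapping = {"surface_crossing": "note_on"}

        def performance_data(self, gesture):               # outgoing flow
            return {"event": self.mapping.get(gesture, "ignore"), "gesture": gesture}

        def reconfigure(self, new_mapping):                # incoming flow
            self.mapping.update(new_mapping)

    controller = ReconfigurableController()
    print(controller.performance_data("surface_crossing"))      # note_on
    controller.reconfigure({"surface_crossing": "tempo_pulse"})  # computer talks back
    print(controller.performance_data("surface_crossing"))      # same gesture, new meaning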

Another issue deeply affected by complex communication arrangements is the transfer of timing information as part of the MIDI stream. In the MIDI context, the time component is effectively divided into two categories. The first concerns events in real time and the immediate transfer of data to the relevant equipment. The second involves the idea of "imposed time", e.g. SMPTE encoding, needed to record and synchronize this type of data accurately in non-real-time applications. These categories are never entirely separate, but each has its own implicit temporally based conditions. For instance, if the information travelling around an interactive musical environment can be captured and stored with the correct timing values, the performance can not only be played back with almost complete accuracy but also tweaked and adjusted through the manipulation and editing of the stored information. Under these circumstances the performance moment is not completely lost but reconstructible.
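The capture-and-replay idea can be reduced to storing each incoming message together with a timestamp, so that the stream can later be re-sent with its original spacing or edited as data. The sketch below is a minimal illustration of that principle only; it does not describe SMPTE or the file format of any particular sequencer.

    # Record incoming messages with timestamps, then replay them in time.
    import time

    class EventRecorder:
        def __init__(self):
            self.start = time.monotonic()
            self.events = []                        # (seconds_from_start, message)

        def record(self, message):
            self.events.append((time.monotonic() - self.start, message))

        def replay(self, send):
            start = time.monotonic()
            for when, message in self.events:
                delay = when - (time.monotonic() - start)
                if delay > 0:
                    time.sleep(delay)
                send(message)

    recorder = EventRecorder()
    recorder.record(b"\x90\x3c\x64")    # note on
    time.sleep(0.25)
    recorder.record(b"\x80\x3c\x00")    # note off
    recorder.replay(lambda m: print(m.hex()))   # 903c64, then 803c00 a quarter-second later

Once the events exist as timestamped data, editing the performance becomes a matter of editing the list.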

Irrespective of the technical problems with the MIDI time representation, there is the creative concern that it tends, in general, to oversimplify the concept of time in music. At worst, it encourages thinking about elementary issues of time as somehow new and, because the context is so different and remote from the traditional musical world, the search for correlates between the musical past and present rarely takes place. Time becomes an issue to be dealt with irrespective of the preceding historical discourse. At best, it is an instance where technology demands that an aspect of music be rethought in entirely new terms. While this is a powerful and exciting position, it is subject to technological trivialization and misuse, which does not advance musical knowledge but rather impedes enquiry and understanding.

Beyond the problems of the current incarnation of MIDI, no one can deny its value in opening up the possibilities of digital communications in musical performance and composition. It has promoted the idea of distributed systems and performance networks which, in biological and psychological form, musicians have used throughout music history. If technology-bound musical communication is to develop further, either MIDI will have to be revised or it will have to be abandoned in favor of a more powerful protocol and signal specification. The currently viable alternatives are inevitably more costly. Curtis Roads notes, "LANs [Local Area Networks] are a more complicated and costly communications scheme than MIDI, but they operate many times faster and allow easy n-way communication; any device connected to the LAN can talk to any group of devices on the LAN." Such advances in musical communications would not only continue to serve the needs of those pursuing interactive music but might also encourage new modes of data management for non-real-time composition.
