Chapter 1

The MetaAction Project

The MetaAction project introduces the concept of the musical interface as a dynamic and tangible phenomenon in our contemporary musical world. It is a project based on the development of a portable replacement action for grand pianos which operates exclusively under computer control. The MetaAction was conceived as a technological upgrade to the conventional piano action rather than a further development of the historical player piano. Apart from changes in technology and materials, the MetaAction is a simple replacement for the traditional grand piano action, occupying slightly less space than the original 1. The project has a prehistory, briefly discussed here, which will set up a context for viewing issues surrounding later research. Although the MetaAction has yet to become fully functional, it can be treated as a starting point from which the concept of the musical interface will expand throughout the course of this essay 2. Its idiosyncratic nature is, I feel, typical of virtually all contemporary musical interfaces. This research begins to shed light on a complex subject that extends well beyond the intention of the MetaAction. Moreover, to begin this essay from personal experience reflects my view that the musical interface is, in part, a context-dependent experience ranging over the domain of physical and conceptual human/machine interaction 3. I wish, in this respect, to emphasize that to simply identify a musical interface as something involving only performer interaction–the traditional setting–is to fail to appreciate its formidable complexity and potential.

1.1 THE EARLY COMPUTER CONTROLLED PIANO SYSTEMS

The MetaAction research has its origins in my use of the computer to control acoustic piano performances during the 1980s. Throughout the following review of this earlier research it is worth reflecting upon the cultural positions of both the computer and the piano. As a conjunction of historically disparate technologies such a musical system conveyed, on first impression, an awareness of both past and future. These technologies were united in ways that initially seemed novel or incongruous but on further reflection appeared logical. Here the notion of a musical interface surfaces quite obviously when the implications of substituting the computer for a human performer are realized.

Perhaps the most dramatic realization is in the recognition that this was no mere technological upgrade of the player piano mechanism but for the first time an alternative with a tangible and potential future 4. Whereas in the late nineteenth century, the future had arrived simply with the automation of the piano, connecting a computer foreshadowed not only the return of the human performer but an entirely new performance context. Unlike the historical feat of automation which spoke of some form of closure to the question of human limits, computerization has the potential to reverse the situation, re-opening interest in human/machine interaction from a new perspective. Although there are numerous cultural manifestations of human/machine interaction, it still appears to be in the very early stages of social acceptance and awareness. Mentioning the contribution of computer technology to music composition immediately brings to mind not only a canon of works (from CD anthologies and ongoing collections) and computer enhancements to musical production, but promotes vigorous speculation on the future of music. Such speculation is predicated on the inexorability of technology and encourages prospective rather than retrospective analysis.

The documented history of my early research 5 traces the evolution of three software applications which approached composition and performance on the acoustic instruments in unique ways. It should be noted that for practical purposes the early computer-controlled piano system comprised three physical components: the instrument, the interface hardware and the microcomputer. These were brought together in new and stimulating ways through a variety of composition software and composition projects. They are briefly discussed in order of development.

1. An elementary real-time performance system which simply played a stored score.

The score file was produced on the computer in a text editor and later converted into a form that could be sent to the instruments.
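The conversion step can be pictured with a small sketch in which one plain-text score line is parsed into a binary event record ready to be sent to the instrument. The four-field line format and all identifiers here are illustrative assumptions, not the original file format:

```c
#include <stdio.h>

/* Hypothetical event record for the stored-score system. */
typedef struct {
    double start;     /* seconds from the start of the performance */
    int    pitch;     /* key number on the instrument */
    double duration;  /* seconds the note is held */
    int    dynamic;   /* attack strength */
} Event;

/* Parse one "start pitch duration dynamic" score line into an Event.
   Returns 1 on success, 0 on a malformed line. */
int parse_event(const char *line, Event *ev)
{
    return sscanf(line, "%lf %d %lf %d",
                  &ev->start, &ev->pitch, &ev->duration, &ev->dynamic) == 4;
}
```

A converter of this kind would simply read the text score line by line, collect the parsed events, and stream them to the interface hardware in time order.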

Of the several works that I produced with this system, Atlantic Fears 6, apart from being the first and the longest, exploits the acoustic properties of the piano in an unusual way. In 1983 I modified an existing player piano by removing the keyboard and action. I installed a solenoid action which was capable of some interesting behavior due to its simplicity of operation and sound quality. I anticipated the composition of dense textures and consequently used wooden hammers. These produced very bright and articulate sounds, and allowed the action to operate at a repetition rate of around 30 attacks per second. There was also a simple mechanism for damping groups of notes but not individual notes. Strings could be struck while the damper system was engaged, producing a muted sound; nevertheless, a constant sympathetic resonance remained throughout.

The harpsichord-like sound and inaccurate tuning (it had to be tuned without the keyboard) inspired a contrapuntal work based on a simple melodic fragment which concluded with a dense resonance. Atlantic Fears exploited the idiosyncratic and dynamic nature of the instrument and the idea of computer control.

2. A real-time algorithmic performance system.

From a selection of stored fragments the computer generated an ongoing composition in real-time. The pitch material was altered by the familiar serial techniques and distributed across the range of the instruments at different times. This system could generate extremely long performances, not possible with the previous stored score format.

I produced a series of compositions based on the same initial data, having arrived at this data by serendipity. I decided, after several attempts at producing what I thought was more aesthetically pleasing data, that the original material had the qualities I was after. What I later discovered was that this data was conducive to manipulation and concatenation, resulting in the construction of musical material which was the sum of the individual materials. The data consisted of 20 structures with the following subsections:

Structure {

    Number of pitches in the pitch array
    Array of pitches
    Array of start-times for pitches
    Array of durations for pitches
    Array of dynamics for pitches
}
Example 1.1 Data structure for early computer controlled piano system.

I developed a means of determining which structure would follow which through a modified first-order Markov process. Added to this was a random method that determined the likelihood and number of repetitions a structure might incur. By manipulating these structures on a number of levels, exact repetition was avoided; when the repetition count was high, the impression was of material that gradually decomposed.
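The selection scheme described above can be sketched roughly as follows. The transition weights, the one-in-four repetition chance, and all identifiers are assumptions made for illustration, not the original implementation:

```c
#include <stdlib.h>

#define NUM_STRUCTURES 20

/* transition[i][j] is the weight of moving from structure i to j. */
static int transition[NUM_STRUCTURES][NUM_STRUCTURES];

/* First-order Markov step: choose the next structure index by
   sampling the weighted successor row of the current structure. */
int next_structure(int current)
{
    int total = 0;
    for (int j = 0; j < NUM_STRUCTURES; j++)
        total += transition[current][j];
    int r = rand() % total;
    for (int j = 0; j < NUM_STRUCTURES; j++) {
        r -= transition[current][j];
        if (r < 0)
            return j;
    }
    return NUM_STRUCTURES - 1;  /* not reached when all weights are positive */
}

/* Random repetition count: usually 1, occasionally several repeats,
   which is where a high count yields the "decomposing" impression. */
int repetition_count(void)
{
    return (rand() % 4 == 0) ? 2 + rand() % 6 : 1;
}
```

At performance time the system would repeatedly call `next_structure`, play the chosen structure `repetition_count` times (varying it on each pass), and continue indefinitely.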

The nature of this music was minimalist and ambient. It tended to drift with an overall sameness yet a superficial difference. Although the nature of the performance could swing from violent outbursts of notes to sparse quiet passages, a listener would be constantly frustrated in an attempt to anticipate the course of the music. After listening to 15-20 minutes of a performance, one could almost be convinced that it ended at a meaningful place because the music had stopped there.

For the performance I used two instruments, a normal piano and the modified instrument. They shared each data structure, which was randomly distributed between them, with the effect that whatever distinct qualities the data structure might have would become less familiar after being dispersed in a curious stereophonic manner.

3. A performer/machine interactive system.

By far the oddest application, this software allowed the performer, seated at the computer console, to control a specific type of behavior on the modified instrument only. Using the computer keyboard the performer activated the solenoids which made contact with the bass strings and produced some very unusual sounds.

I recorded a number of experimental and improvisatory sessions in 1987, resulting in a work which I called Black Moon Assails. The range of sounds possible with this technique was quite remarkable. It involved sustained contact between a hammer solenoid and the string, which caused a number of curious phenomena. The initial contact created a new bridge. This immediately changed the pitch of the string and, since the string was not otherwise dampened, it resonated with the initial contact. The solenoid itself, however, was oscillating at 200 Hz, and if contact with the string was maintained, the solenoid was generally able to further energize the string at certain frequencies, causing, in effect, a sustained tone. The frequency of the sustained tone could be varied by changing the pressure exerted on the string by the solenoid. This took place under real-time control by the performer at the computer keyboard.

Performance was, at best, a volatile experience in which the sound could change quite rapidly from a compelling cello-tone to a harsh roar. With some practice, I was able to recognize the transition to an unpleasant state, and possibly avert it. Otherwise it became an unrecoverable state, from which there was little recourse but to end contact. It may have been the type of sounds or the unpredictable nature of the performance that gave me the impression that this system was in self-destruct mode most of the time. I understood it not as a simulacrum of energy like some sounds from an electric guitar, but as a violently physical system.

On reflection, the above compositions were occasionally, and later specifically, creative sessions mediated through an enthusiasm for, and the perceived potential of, the micro-computer. Although each software application was the means to multiple interpretations of, at least, a particular composition, it transpired that rarely were they the means to a large number of compositions. For reasons other than the difficulty of working with the system, a compositional idea tended to gestate through the software development period, and before being specifically undertaken had, to a degree, been worked through. As a result, compositions were conceived through the software, manifested through performance, and captured, as instances, in the final recordings. It was a conceptual package. At various times, the software represented a context for working out musical and technical issues beyond the more obvious forum of the sound itself.

The process of composition in the earlier systems was heavily dependent upon the computer. This in turn influenced the agenda of a work. Compositions tended to reflect technological possibilities, not always explicitly, but somewhere in the make-up of the work. The computer was the means to articulate a musical work as much as any other performer, and was consequently consulted, in a sense, on matters of interpretation for any new and idiosyncratic musical idea. In simply contemplating how the computer might be made to produce certain envisioned musical effects, I was succumbing to a level of abstraction impossible to appreciate from the perspective of traditional instrumental composition. It would appear that once certain physical conditions are removed or substituted by others, a new context emerges that is disturbing not simply in its complexity but in its difference. One comes to it without historical perspective or mediation and has to contemplate the unknown on levels other than purely compositional.

The mid 1980s were a period in which the promises of the micro-computer age were still a compelling force for those interested in technology. I felt that the unknown and the possible, surrounding micro-computer technology, had become attractive compositional issues. A substantial reason behind this was that the early computer controlled piano systems functioned up to expectation; they appeared integrated and consistent from an aesthetic and a technological perspective. One tended to look at the system, and immediately realize how it might be used or extended within its own context rather than recognizing some general shortcomings in the implementation.

This combination of software, hardware and instruments provided a particular musical motivation that deviated from the mainstream of music technology at the time. Within this space, I had begun to enjoy the experience of writing software as a score prior to producing the work. It was, in an archaic sense, notes to the performer. I appreciated the fact that it was dynamic and that the score was modifiable after review, in a manner similar to a traditional instrumental rehearsal. I was experiencing, along with many other people, an entirely new mode of composition and performance of music.

However, at the end of this period of research I felt that the computer control of the piano could move towards a more complex real-time involvement with human performers. This was inspired by a deepening awareness of the physical limitations of an exclusively computer controlled approach. I had begun to realize a sense of closure to the project described earlier.

Exactly what such a transition would entail was unclear in the early design stages, so I returned to consider the traditional performance paradigm. The reasoning here was that real-time computer interaction would demand a greater degree of sophistication across the spectrum of the project, on the part of the performer, the mechanical system and computer control. This provided a basis for further research and so the MetaAction project was born.

1.2 THE COMPUTER MUSIC CONTEXT

At the commencement of the MetaAction project in 1987, the computer music world had clearly divided into two main groups: those concerned with performance using MIDI and those concerned with technical and musical issues based on non-real-time solutions that had existed prior to, or independent of, MIDI's emergence. There were some issues of computer music that spanned both groups, and probably other subgroups as well, which to this time have only just begun to be resolved. One of these was the synthesis and use of traditional instrumental sounds. While synthesis fidelity remained a primary objective across the board, MIDI began to shift the focus of interest to questions of performance. Whatever the fidelity, the imposition of real-time performance changes the appreciation of any sound. Thus MIDI flourished at the time when a particular real-time synthesis technique (Frequency Modulation - FM) emerged and proved both adequate and effective in turning attention away from sound quality to matters of performance and the means of accessing a vast range of sounds. The commercial FM sound became almost definitive of computer music and eventually stimulated growth in the demand for interfaces that would allow access to subtle uses of the technique. Given the volume of FM synthesis keyboard equipment produced through the latter half of the 1980s, one could be forgiven for believing that FM synthesis was primarily created for the keyboard. This is ironic in light of the fact that members of the musical public are not, for the most part, sufficiently proficient keyboard players or technicians to warrant such devotion to a singular musical interface. The bulk of other controllers–MIDI wind instruments, string instruments and analog-to-MIDI converter systems–failed to infect the buying public with similar enthusiasm.

The proliferation of synthesis equipment highlighted a kind of computer/instrument orthodoxy. If one wanted to work with technology to create a new musical instrument, one looked to the past to find a functional paradigm, tried to imitate it, and then found ways to customize it for current musical demands 7. This was clearly the case with sound synthesis and MIDI instruments. Few, if any, appeared without presets. Traditional instruments provided a criterion of proven significance and effect to which technology must aspire before updating. A synthetic or virtual instrument needed to be able to reflect those qualities of sound, history and repertoire that are intrinsic to acoustic instruments, in order to establish initial credibility.

1.3 THE PIANO: EVOLUTION AND INNOVATION

The piano is a product of technical innovation and refinement. It is an exemplary object of the industrial age. Evidence of technical evolution has, for the most part, either been transparent, as in the transition from natural materials to plastics, or masked by an emphasis on style and authoritative endorsement. In the cases where changes are clearly evident and have affected the performer directly (for example, the addition of more keys) the practical change has been upwardly compatible, in that the newer instruments can still play the older works for a smaller keyboard with some aesthetic adjustment. Most people perceive the evolution of the instrument as striving for ever more subtle control over sound production and this expectation has inspired innovation often quite radical in nature.

The piano is still subject to considerable technical research. Today, this research is concerned with the use of contemporary materials that are very stable and hard-wearing, which allow the instrument to remain in use longer without extensive maintenance. One interesting point about contemporary actions is that they have a heavier touch than their historical counterparts. This would appear to be a consequence of the increase in mechanical sophistication intended to provide an action capable of transferring more energy from the player to the strings. This is obviously one of many precedents that reflect the fundamental fact about complex systems–more sophistication means more technology. The piano's relatively slow technical evolution has ensured that most people are unaware of the significant changes that it has undergone, unless, of course, a variety of historical instruments are scrutinized at the same time 8.

Technical changes in construction and materials throughout the piano's history have also been taken for granted. This acceptance is largely seen as necessary in the ongoing struggle to reduce the cost of production and maintenance, thus making pianos available to a large number of people throughout society. While such evolution has changed the sound of the instrument by degrees, it usually falls within the bounds of accepted aesthetic criteria. These criteria seem to have expanded the concept of the piano, and it is interesting now to observe that the perception of the instrument begins to include electronic pianos whose sound so uniformly represents the best acoustic instruments. The commercial reality is that technology has again fulfilled a necessary role, in perpetuating the instrument, even when such synthetic pianos are obviously simulacra, by dispensing with the original sound production method altogether.

The piano is the ideal physical platform for experimenting with mechanical systems that replace the performer. This was understood as far back as the 1880s. It is large, well laid-out and widely available. While the player piano is the prime example of such innovation, it is interesting that automation is seldom associated with the grand piano. This is probably an economic and marketing issue, since more people could afford upright instruments. The history of the piano, on the other hand, remains the primary source of inspiration for what it means to play the instrument. It is doubtful whether the player piano was or could be viewed as an incentive for future instrumental development on anything other than the player piano itself.

While the player piano embodies an historical position, it is noteworthy that the primary impetus for its development was to completely replace human involvement in the production of music 9. An apparent contradiction was that mechanical renditions were expected to possess and convey all the subtlety and nuance of human performance. Although the player piano achieved this to an impressive degree, it had definite limits, particularly in its credibility as a recording medium. There was also rarely a desire to exploit the instrument beyond conventional notions of piano playing, and it was not conceived as an adjunct to conventional performance. Performer/machine interaction as a concept would not have existed during the player piano's Golden Age, even though some external controls allowed for variation in tempo and dynamics. It is interesting that these controls required the operator to sit in front of the instrument during performance. The operator was thus put in a pseudo-performer state, necessitating concentration on the music and response when appropriate but at a technically undemanding level. Today, however, a performer might have sophisticated control over all aspects of a player piano system 10.

1.4 ORIGIN AND DEVELOPMENT OF THE MetaAction

The MetaAction was the result of thinking about post-automatic music and had a long period of gestation, design and construction 11. The earlier computer-controlled piano systems appeared to be characterized by an emphasis on the computer and electronic technologies rather than either the instruments or the music. I imagined the MetaAction to occupy a position somewhere between Nancarrow's Studies and the return of the human performer through newer technology. The computer technology at that time was the weakest part of the project because its potential was least understood. There was also a naive optimism on my part which viewed the musical outcome as dependent on how the computer functioned or could be made to function. In some respects that was true, but more importantly it encouraged further enquiry into the technology. Microcomputers were new, and few people, including myself, understood the intricacies and advantages of either low-level or conceptually sophisticated higher level programming. Musical expectations were clearly tempered by the technical sophistication of the software. It was a new kind of practical and intellectual limitation uniquely associated with music technology.

Most composers are drawn to music technology in the hope that it will realize their musical vision, which they believe cannot be accessed by traditional musical means. Questions inevitably abound concerning the technological manifestations of musical visions. Once the initial financial or intellectual means have tempered the vision, it becomes apparent that the technology is not going to fulfill expectations conveniently and expeditiously. Curiously, however, as much as technology becomes an imposition on the imaginations of those not prepared for it, what focus it does impose on the imagination often causes oversight of the latent potential within the technology itself. It is clear that technology, particularly computer technology, has become a significant layer between the intention of the composer and the musical work. This layer, which we now address through an interface, is quite unlike anything previously encountered between the composer or performer and the music.

In the early 1980s the available computing technology 12 met the expectations of real-time but non-interactive performance. It was easy to run up against the limitations imposed by physical memory and input/output throughput because of the trade-off between programming languages that allowed a high level of conceptualization and those that produced efficient programs. The early piano systems could have worked through many more compositional possibilities had the level of understanding and technical skill been higher. This retrospective consideration is interesting because I suspect that the potential of technology, on first contact, tends to drive creativity with sheer enthusiasm, irrespective of how one views one's competency to undertake a particular task. Technical mastery would appear to be the result of an introspection upon a subject which is eventually manifest externally. This is clearly the case for musical performance. But does a profound understanding of the technology result in more significant musical compositions? Those with a traditional view of composition or performance rather than experience with technology might answer in the affirmative. This is predicated on the assumption that experience is the arbiter of success in the traditional field of instrumental composition.

In experimental computer music advances are generally made as part of the compositional process. The work is the impetus for technical evolution. Where musical concepts are coded in software they can be developed through accretion or deletion of musical ideas. Software can also be consciously discarded in order to accommodate radically new approaches which simply cannot be accommodated into an existing program. Unless developed concepts can be discarded or evolve into newer forms, I suspect that existing concepts often have the effect of blocking or stalling further efforts. If it is accepted that technology houses particular concepts then the adoption of new concepts might require the dismantling of the house–the abandonment of that technology.

MetaAction design and construction was certainly the result of observations of the early systems but was also governed by two views on the interactive potential of computers. The first came from an awareness of what a performer might contribute to a work that could not easily be composed for a computer-only performance. For example, the addition of natural phrasing and expressive qualities to structures not normally conceived in traditional pianistic terms. This is an antithetical position to that adopted by Nancarrow. The difficulty of including nuance and interpretation can be well appreciated by any composer who has tried to get a computer to perform without obvious mechanical qualities. I now regard the challenge of naturalness to be one of the most significant and complex areas of computer music interpretation.

The second was the belief that new sounds could be produced through a re-configuring of the sound production mechanism–the action itself. These views are closely related, the important element being the presence of a human performer. More precisely, I mean human mediation. I had become aware that watching machines work, while fascinating to a degree, was not as interesting as watching humans also engaged in the process. As Norbert Wiener is reputed to have said, "To human beings, human things are all-important."

The idea of new sounds from a hyper-active piano action was not original to this research. But I imagined that a performer together with a computer might accommodate a change in the piano/performer relation, allowing access to new and complex sound structures not previously experienced with a sense of human engagement. The sound of the instrument would remain much the same but the effect of these complex sound structures or aggregates would be unlike previously obtainable piano sounds. To illustrate my claim on this point and to cite the initial source of inspiration, I refer the reader to Conlon Nancarrow's Study #25 and also Study #36. Both studies, but particularly Study #25, demonstrate the piano operating in a realm that is not only unplayable but whose sound effect is inconceivable in notated form until one actually hears it. The score for Study #25 reveals, for example, on page 6 an aggregate of over 80 notes that is played as a rapid flourish in just under a second 13. It is by no means a simple structure and involves the uneven displacement of notes over the entire range of the keyboard–lowest B to highest A. Such aggregates have a dynamic character independent of their external temporal context. This particular study exploits such aggregates, at first in isolation then finally as a dense and frenetic texture.

Both the medium and the musical objectives pursued by Nancarrow in his music excluded the possibility of human intervention during performance. Such works as Studies #25 and #36 would have been completely different had he been able to free up the performances in a way that corresponded to human performance technique and sensibility 14. My intention in the design of MetaAction was to permit precisely those factors and thus it becomes questionable as to whether the MetaAction is a form of player piano. It depends on how it is used.

The MetaAction is not so much a new means for producing dramatically new sounds but rather a new way of thinking about and accessing the traditional piano sound. Compositional interest lies predominantly in how the instrument is played.

The desire to experiment with computer control and explore new sounds on my early systems after the Nancarrow paradigm obscured the potential and ramifications of performer/instrument interaction. The potential to change performance practice existed, but was not fully appreciated as the focus was specifically on the compositional results and less on the idiosyncrasies of performance. What was happening to the idea of the performer was masked by the more demanding position of computer control rather than computer interaction. Only later, after the experience of unmediated computer control, did the idea of the interactive system appear attractive and possible. The position was also promoted through an understanding of new concepts and a change in technology.

The transition came about through the realization that the computer could be delegated certain performance tasks, each dependent on either internal or external data. This is a significantly different undertaking than handing over the entire performance to the computer through the preparation of a performance score or instructions.

While it is possible to view this as the evolution of instrumental technique to the point where it no longer resides solely with the performer, the implications for the future of traditional instrumental music are unclear. If performer/machine interaction becomes widely practiced, might it not put some kind of pressure on those still concerned solely with traditional techniques to review their position? Would the future of contemporary instrumental music look different?

1.5 MetaAction OPERATION

There are two distinct aspects to the operation of the MetaAction. The first concerns the nature of its construction and the means to control it, and the second, the musical intention behind its operation and control.

Mechanics

Consider for a moment the keyboard. All keyboard instruments, from the earliest times to the present, maintain an immediate correspondence and connection between the dampening of a string and the release of the key. The exceptions to this are the later introduction of the sustain and sostenuto pedals 15. Neither of these operations involves the hands and fingers, nor can they be executed freely or with great discretion.

For string players, the necessity of dampening strings is at the center of the concept of mastery of technique. It is unimaginable that players of stringed instruments are not aware of the effect of their hands against the strings. It is a tactile experience. A player can often detect that an impending, as yet unheard, note will sound not as intended because of faulty finger placement. There may or may not be time to do something about it but there is, for a brief moment, an option. Controlling the existence and timbre of a sound is something wrestled with at an early stage of technical development. One of the most spectacular techniques associated with dampening strings is the production of the harmonic (unfortunately not commonly used on the piano). Such delicate contact appears in contradiction to the normal action of firmly pressing the string against the fingerboard, but desired harmonics will only occur at the critical nodal points.

To appreciate the operational significance of the MetaAction, it is necessary to look at the active components. From the perspective of computer control, it becomes apparent that the otherwise integrated conventional piano action can be split into two autonomous systems. In the MetaAction, the critical components–the hammers and the dampers–have been physically isolated and put under individual control. Viewing the contribution of autonomous dampening operations through the technique of a string player–a violinist, a guitarist or a harpist–one sees that different functions are allocated to each hand, but either hand can dampen strings. For the pianist, with years of training and an advanced technical proficiency, the intellectual burden of controlling dampers independently for each note would initially seem an unwelcome degree of sophistication. There is simply no historical provision for thinking that it would contribute anything to the nature of historical repertoires. The introduction of such a technique would imply the study of a new instrument and repertoire.
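The decoupling described above can be made concrete in a short sketch. The message types, note numbering, velocity scale and timing units below are hypothetical illustrations, not the MetaAction's actual control protocol; the point is only that the hammer and the damper of each note become independently addressable events.

```python
from dataclasses import dataclass

# Hypothetical control messages for a split action. In a conventional
# action, one key event implies both a damper lift and a hammer strike;
# here each is addressed independently, per note.

@dataclass(frozen=True)
class HammerStrike:
    note: int        # key number, 1..88 (illustrative numbering)
    velocity: int    # 0..127, MIDI-style scale (assumed)
    time_ms: int     # offset from the start of the gesture

@dataclass(frozen=True)
class DamperMove:
    note: int
    lift: bool       # True = raise damper off the string; False = re-damp
    time_ms: int

def conventional_key(note: int, velocity: int, down_ms: int, up_ms: int):
    """Emulate a traditional key press: damper lift and hammer strike
    bound together on key-down, damper returning on key-up."""
    return [
        DamperMove(note, lift=True, time_ms=down_ms),
        HammerStrike(note, velocity, time_ms=down_ms),
        DamperMove(note, lift=False, time_ms=up_ms),
    ]

def muted_strike(note: int, velocity: int, at_ms: int):
    """A gesture impossible on a conventional action: strike a string
    while its damper stays seated, for a dry, immediately damped attack."""
    return [HammerStrike(note, velocity, time_ms=at_ms)]
```

Here `conventional_key` merely reproduces what the ordinary action does mechanically, while `muted_strike` is one of the new hammer/damper combinations the decoupling makes available.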

The MetaAction encourages a radical transition in keyboard technique. Controlling the dampers independently of the hammers introduces a new potential to the piano but at some cost. In theory, it is a realm of technique devoted to control and coordination of the dampers and the hammers in every possible way. To fully appreciate this would require new keyboard Studies. The list of action operations that may be employed is as follows:

Performance Intentions

What is evident in the above list is an increase in sophistication which depends more on control of the dampers than of the hammers. Coordinating the possibilities of hammer/damper combinations considerably extends the repertoire of piano sounds available during performance. Unfortunately, such control can be the direct responsibility of the performer only under relatively simple conditions. In more complex scenarios, the performer is limited to initiating series of events and conceivably influencing their progress.

The MetaAction alters the relation between the performer–who need not necessarily be a keyboard player–and the instrument. It initiates a series of changes directed at the performer but does not significantly change the method of sound production. The strings are still struck by hammers and the sound is still percussive in origin.

In a dramatic sense, the delegation of control to a computer, particularly in the case of low-level performance functions, positions the performance/instrument relation around a new aesthetic. It brings into existence questions concerning performance, its future, the role of the human performer, and the issue of access to and appreciation of the resulting musical discourse.

Unlike most computer controlled piano systems, including its predecessors, the MetaAction and its implications for human/machine interaction shift the emphasis from the computer to a collective relationship. Performer/machine interaction should be understood as a tightly coupled system in which it is difficult to separate or distinguish the contribution of each. The performer is no longer responsible for every event that occurs, but for how material is selected, articulated and coordinated. Under those circumstances it might be useful to obfuscate the specific contributions of both parties in order to discourage a desire on the part of the listener to try and identify the roles of the performer and the machine.

By far the most ambitious objective and the reason for the construction of the MetaAction was the desire to change the fundamental concept of how the piano is accessed. This is accomplished by changing the one-to-one correspondence between the player's actions and the resulting sounds. To make it possible to trigger aggregates like those in Nancarrow's Study #25, the computer would need to act upon some input from a performer and generate or manipulate pre-existing material. What is then articulated can be predetermined in a variety of ways–for example, either from the previous input or as data stored and composed prior to the performance.
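One way to picture this break with one-to-one correspondence is as a lookup from a single performer event to a pre-composed aggregate. The MIDI note numbers, aggregate contents and function name below are invented for illustration; the source does not specify how such a mapping was, or would be, implemented.

```python
# A hypothetical one-to-many trigger map: a single performer input
# (a MIDI note-on) releases a pre-composed aggregate of events, in the
# spirit of the dense chords of Nancarrow's Study #25. All pitch and
# timing content here is invented for illustration.

AGGREGATES = {
    60: [(48, 100, 0), (55, 100, 0), (64, 110, 10), (71, 110, 10)],
    62: [(50, 90, 0), (57, 95, 5), (66, 100, 10), (74, 105, 15)],
}

def on_note_on(note: int, velocity: int):
    """Map one key press to many output events.

    Each output is (pitch, velocity, delay_ms). The performer's own
    velocity scales the stored dynamics, so even though the material is
    predetermined, the articulation of the trigger still matters."""
    aggregate = AGGREGATES.get(note)
    if aggregate is None:
        return [(note, velocity, 0)]  # unmapped keys pass through unchanged
    scale = velocity / 127.0
    return [(p, max(1, round(v * scale)), d) for p, v, d in aggregate]
```

The same dispatch structure could equally draw on material generated from previous input rather than a fixed table, which is the other option the text mentions.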

1.6 SUMMARY OF THE MODEL

This chapter has introduced discussion on the physical nature of musical interfaces by considering quite radical changes to the acoustic piano. As a practical model, it exposes issues of contemporary performance and composition involving computers by suggesting an entirely new paradigm, a paradigm predicated on the past but with an unexplored yet immediately obvious potential to anyone familiar with the instrument. This model has raised some interesting questions.

Although no suggestion has been made as to how a performer might gain access to the action and consequently the instrument, it is evident that MIDI might initially provide the most convenient and flexible method. One could simply use the traditional keyboard, relying on existing keyboard techniques. Here the performer can play with all the precision, grace and confidence acquired through years of practice and get an entirely new sound. This, of course, might come as a shock and dramatically undermine any advantage established technique might offer. So what is the value of technical proficiency in the traditional musical sense to relatively unknown performer/machine interactive systems? It depends on the intentions of those constructing and composing for the MetaAction. Parallels do exist in contemporary works where the performer is required to play beyond historical technique or aesthetic in order to execute the work. John Cage's prepared piano works come to mind. However, it cannot be assumed that skills, particularly those acquired in pursuit of traditional musical results, are transferable to systems which might exhibit autonomy of action and originality of intention. Even if some technical competency is necessary, there are still questions of interpretation. Clearly, the historical gestures and intentions in the expressive actions will neither correspond nor feel appropriate. But how a composer and performer interpret the past or the future inevitably seems to have a bearing on the treatment of the instrument.

The imposition of computing technology is more than an enhancement to the existing mechanism, for it breaks open the closed loop arrangement long thought immutable between performer and instrument. Yet it also admits only selected influences. Thus, with such augmentation, formerly closed systems become intrinsically open.

The level of keyboard skill throughout the world is now higher than it has ever been before. This in itself is a powerful argument for aligning technological advancements to the competency of traditional instrumentalists 16. To suggest, however, that it is a superior form of interface is to revert to a traditional assessment of musical objectives. The piano keyboard certainly is the way it is because of evolutionary forces within western music. Exactly what these forces are and how and when they came about may be a subject more appropriate to musicological discourse, but technology is challenging the exclusive position of western musical thought in areas of instrumental practice. The MetaAction has a sense of being part of some evolution around the piano but seeks musical influence from instruments from other cultures. The broader collaboration with technology must be worked out against a historical background of immense cultural significance. The integration of computing technology and the acoustic piano is a collision of conceptual paradigms of a primarily incongruous nature. This incongruity is largely associated with historical positions where the computer, on the one hand, is a rich source of new concepts and the piano, on the other, is rich in history and the human desire for cultural interpretation. This is not to say that they cannot be brought together, but there is resistance to their integration on the grounds that it would sever the instrument from its perceived historical continuum. The instrument effectively becomes new and has only a vague past.

That synthesis instruments are generally virtual–in that they can be updated, changed or replaced at any time with no apparent external or physical correspondences–argues for a flexibility in interface design and application. The interface is dependent on a vast range of creative objectives in the music world. While the acoustic piano does not exactly fit the category of a computer instrument in the way a synthesis instrument might, the MetaAction does permit the types of interfaces normally associated with synthesis instruments.

One of the lingering issues about this kind of research is the question of maturity and intrinsic musical value. Assuming for the moment that all engineering and technical details have been resolved and the MetaAction operates as envisioned, there remains the frontier of the musically unknown. Little is actually understood about the potential of the instrument and even less about notions of competency or virtuosity, so it will remain experimental. While this model might exude a potential, it is not until musical works are written, and considered of some aesthetic value, that the instrument can be said to have arrived.

Endnotes to Chapter One

  1. Its physical installation can be seen in the photographs in Appendix A.
  2. For a more detailed account of this research see the conference papers and photographs in Appendices A, B and C.
  3. The term interaction is used here in its broadest sense: that is, the engagement of any instrument or piece of equipment, irrespective of whether the results of the session are in real or non-real time.
  4. The history of the mechanical and player piano indicates that this position was not foreseen. For further reference, see footnote 9 on Marx for what may be taken as a global view of the machine in the nineteenth century.
  5. Alistair Riddell. A Perspective on the Acoustic Piano as a Performance Medium Under Machine Control. MA Thesis, La Trobe University, Melbourne, Australia, 1989. The thesis confines technical details to the appendices while the body of the text addresses the specific software systems and any hardware-related phenomena that they entailed. In brief, there were two instruments: one a conventional piano and the other a modified instrument. The latter had no keyboard, had wooden hammers, and had only a primitive means of dampening an omnipresent resonance. Consequently, there was some scope for timbral variation. It should also be noted that there was specialized electronic hardware that enabled the instruments to function far more efficiently than any similar system at the time.
  6. Atlantic Fears is recorded on the Anthology of Australian Music on Disk, csm:4, 1989.
  7. Technological evolution always proceeds outwards from the knowledge and concerns of the moment, even when future implications are evident. The demand for different control systems for MIDI emanated from the overwhelming production of keyboard instruments and a feeling of marginalization on the part of non-keyboard performers.
  8. This is the opposite position to that of electronic instruments, whose future is predicated on overt replacement.
  9. Karl Marx, in his book Capital (New York: Vintage Books, Volume 1, Chapter 15: Machinery and Large-Scale Industry, pp. 492-639), presents a detailed and fascinating account of the social implications of the machine in the nineteenth century. The point of interest here is that Marx is perhaps summarizing the almost universal and singular view of machines: namely, that they are designed and built to replace human beings and their efforts in total or in part. Any other view of the use of the machine would have been premature and difficult to accept under the circumstances. It is, therefore, easily appreciated that in this age such euphoria should also spill over, among those that could afford them, to mechanical musical instruments. To this day, the use of the machine in the production of music is seen as a separate phenomenon from that of the common performance experience.
  10. Jean-Claude Risset's work with the Yamaha Disklavier is a particularly good example. A discussion of it by Risset can be found in The Computer as Interface: Interlacing Instruments and Computer Sounds; Real-time and Delayed Synthesis; Digital Synthesis and Processing; Composition and Performance. Interface Vol. 21 [9-20], 1992. Note also my discussion of Risset's work in chapter 2.
  11. Design and construction of the MetaAction took place between June 1987 and August 1989 in the Physics Department's machine shop at La Trobe University, Melbourne, Australia. From the beginning the project benefitted from a more sophisticated development and construction environment, a better understanding of what was required, and a technologically more sophisticated age.
  12. I am referring to affordable microcomputer technology in Australia circa 1983. This was a Z-80 based single board computer. The third system's software ran on a newer microcomputer, the Amiga 1000, available around 1986-7. This was a significant technological advance from 1983.
  13. Conlon Nancarrow. Study #25, in Soundings, Vol. 9, 1975. Edited by Peter Garland.
  14. For such a comparison, listen to the performance of Nancarrow's Tango? by Ursula Oppens (American Piano Music of Our Time, Music & Arts CD-604, 1989), which reveals Nancarrow's style but, of course, not quite the same technical concerns.
  15. Admittedly, some harpsichords can explicitly play damped strings for effect. The so-called buff stop functions in this manner by dampening a choir of strings. Indeed, it does alter the relation between key release and damped notes. It is interesting to note that the desire to change the sound of the strings on keyboard instruments went out of fashion with the arrival of the piano. That it has returned to some minor extent in the twentieth century with the emergence of the prepared piano is rather more indicative of yet another complex musical agenda, one that was to flourish with the advent of the synthesizer.
  16. Neil Rolnick, in a letter to the editor (Computer Music Journal 16:3, Fall 1992), reminds us of the essential dilemma confronting new computer music instruments and user interfaces: "When a musician moves to the use of the computers for musical purposes, the familiar interface permits the most ready access to musical thought and execution." Abandoning this for innovative interfaces that "free us from the conceptual limitations of interfaces derived from acoustic models whose limitations are not relevant to electronic instruments" has serious consequences, the most significant of which is to alienate an audience from the perception of the connection between sound and performance that is both familiar and satisfying to them. The performance act is a part of the context of appreciating and understanding the music.
