Perhaps the most dramatic realization was that this was no mere technological upgrade of the player piano mechanism but, for the first time, an alternative with a tangible future 4. Whereas in the late nineteenth century the future had arrived simply with the automation of the piano, connecting a computer foreshadowed not only the return of the human performer but an entirely new performance context. Unlike the historical feat of automation, which spoke of some form of closure to the question of human limits, computerization has the potential to reverse the situation, re-opening interest in human/machine interaction from a new perspective. Although there are numerous cultural manifestations of human/machine interaction, it still appears to be in the very early stages of social acceptance and awareness. Mentioning the contribution of computer technology to music composition immediately brings to mind not only a canon of works (from CD anthologies and ongoing collections) and computer enhancements to musical production, but also promotes vigorous speculation on the future of music. Such speculation is predicated on the inexorability of technology and encourages prospective rather than retrospective analysis.
The documented history of my early research 5 traces the evolution of three software applications which approached composition and performance on the acoustic instruments in unique ways. It should be noted that for practical purposes the early computer-controlled piano system comprised three physical components: the instrument, the interface hardware and the microcomputer. These were brought together in new and stimulating ways through a variety of composition software and composition projects. They are briefly discussed in order of development.
Of the several works that I produced with this system, Atlantic Fears 6, apart from being the first and the longest, exploits the acoustic properties of the piano in an unusual way. In 1983 I modified an existing player piano by removing the keyboard and action. I installed a solenoid action which was capable of some interesting behavior due to its simplicity of operation and sound quality. I anticipated the composition of dense textures and consequently used wooden hammers. These produced very bright and articulate sounds, and allowed the action to operate at a repetition rate of around 30 attacks per second. There was also a simple mechanism for damping groups of notes but not individual notes. Strings could be struck while the damper system was engaged, producing a muted sound; nevertheless there remained throughout a constant sympathetic resonance.
The harpsichord-like sound and inaccurate tuning (it had to be tuned without the keyboard) inspired a contrapuntal work based on a simple melodic fragment which concluded with a dense resonance. Atlantic Fears exploited the idiosyncratic and dynamic nature of the instrument and the idea of computer control.
I produced a series of compositions based on the same initial data, having arrived at this data by serendipity. I decided, after several attempts at producing what I thought was more aesthetically pleasing data, that the original material had the qualities I was after. What I later discovered was that this data was conducive to manipulation and concatenation, resulting in the construction of musical material which was the sum of the individual materials. The data consisted of 20 structures with the following subsections:
Number of pitches in the pitch array
Array of pitches
Array of start-times for pitches
Array of durations for pitches
Array of dynamics for pitches
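The subsections above can be rendered as a simple data structure. The following is a hypothetical sketch only; the original implementation and its field names are not documented here:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class NoteStructure:
    """One of the 20 composition structures; names are illustrative."""
    pitches: List[int]          # pitch array (MIDI-style note numbers assumed)
    start_times: List[float]    # start-time of each pitch, in seconds
    durations: List[float]      # duration of each pitch, in seconds
    dynamics: List[int]         # dynamic level for each pitch

    @property
    def num_pitches(self) -> int:
        # the "number of pitches" subsection is implicit in the array length
        return len(self.pitches)

s = NoteStructure(pitches=[60, 64, 67],
                  start_times=[0.0, 0.5, 1.0],
                  durations=[1.0, 1.0, 2.0],
                  dynamics=[64, 80, 96])
```

Storing the count alongside the arrays, as the original subsections suggest, would be typical of the array-based languages of the period; in a modern sketch the count falls out of the array length.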
The nature of this music was minimalist and ambient. It tended to drift with an overall sameness yet a superficial difference. Although the nature of the performance could swing from violent outbursts of notes to sparse quiet passages, a listener would be constantly frustrated in an attempt to anticipate the course of the music. After listening to 15-20 minutes of a performance, one could almost be convinced that it ended at a meaningful place because the music had stopped there.
For the performance I used two instruments, a normal piano and the modified instrument. They shared each data structure, which was randomly distributed between them, with the effect that whatever distinct qualities the data structure might have, would become less familiar after being dispersed in a curious stereophonic manner.
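The dispersal described above can be sketched as a per-note random assignment. This is a hypothetical reconstruction, assuming a simple even split; the actual distribution scheme used in performance is not documented here:

```python
import random

def distribute(pitches, seed=None):
    """Randomly assign each note of a shared data structure to one
    of the two instruments (normal piano, modified piano).

    Returns two pitch lists: (normal_piano, modified_piano).
    """
    rng = random.Random(seed)
    normal, modified = [], []
    for p in pitches:
        # each note lands on exactly one instrument, chosen at random
        (normal if rng.random() < 0.5 else modified).append(p)
    return normal, modified

a, b = distribute([60, 62, 64, 65, 67], seed=1)
```

Because the split is decided note by note, repeated performances of the same structure scatter its material differently each time, which is consistent with the "less familiar" stereophonic effect described above.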
I recorded a number of experimental and improvisatory sessions in 1987, resulting in a work which I called Black Moon Assails. The range of sounds possible with this technique was quite remarkable. It involved sustained contact between a hammer solenoid and the string, which caused a number of curious phenomena. The initial contact created a new bridge. This immediately changed the pitch of the string and, since the string was not otherwise dampened, it resonated with the initial contact. The solenoid itself, however, was oscillating at 200 Hz, and if contact with the string was maintained, the solenoid was generally able to further energize the string at certain frequencies, causing, in effect, a sustained tone. The frequency of the sustained tone could be varied by changing the pressure exerted on it by the solenoid. This took place under real-time control by the performer at the computer keyboard.
Performance was, at best, a volatile experience in which the sound could change quite rapidly from a compelling cello-tone to a harsh roar. With some practice, I was able to recognize the transition to an unpleasant state, and possibly avert it. Otherwise it became an unrecoverable state, from which there was little recourse but to end contact. It may have been the type of sounds or the unpredictable nature of the performance that gave me the impression that this system was in self-destruct mode most of the time. I understood it not as a simulacrum of energy like some sounds from an electric guitar, but as a violently physical system.
On reflection, the above compositions were, at first occasionally and later specifically, creative sessions mediated through an enthusiasm for, and the perceived potential of, the micro-computer. Although each software application was the means to multiple interpretations of, at least, a particular composition, it transpired that rarely were they the means to a large number of compositions. For reasons other than the difficulty of working with the system, a compositional idea tended to gestate through the software development period, and before being specifically undertaken had, to a degree, been worked through. As a result, compositions were conceived through the software, manifested through performance, and captured, as instances, in the final recordings. It was a conceptual package. At various times, the software represented a context for working out musical and technical issues beyond the more obvious forum of the sound itself.
The process of composition in the earlier systems was heavily dependent upon the computer. This in turn influenced the agenda of a work. Compositions tended to reflect technological possibilities, not always explicitly, but somewhere in the make-up of the work. The computer was the means to articulate a musical work as much as any other performer, and was consequently consulted, in a sense, on matters of interpretation for any new and idiosyncratic musical idea. In simply contemplating how the computer might be made to produce certain envisioned musical effects, I was succumbing to a level of abstraction impossible to appreciate from the perspective of traditional instrumental composition. It would appear that once certain physical conditions are removed or substituted by others, a new context emerges that is disturbing not simply in its complexity but in its difference. One comes to it without historical perspective or mediation and has to contemplate the unknown on levels other than purely compositional.
The mid 1980s were a period in which the promises of the micro-computer age were still a compelling force for those interested in technology. I felt that the unknown and the possible, surrounding micro-computer technology, had become attractive compositional issues. A substantial reason behind this was that the early computer controlled piano systems functioned up to expectation; they appeared integrated and consistent from an aesthetic and a technological perspective. One tended to look at the system, and immediately realize how it might be used or extended within its own context rather than recognizing some general shortcomings in the implementation.
The software, hardware and instruments provided a particular musical motivation that deviated from the mainstream of current music technology. Within this space, I had begun to enjoy the experience of writing software as a score prior to producing the work. It was, in an archaic sense, notes to the performer. I appreciated the fact that it was dynamic and the score was modifiable after review, in a manner similar to a traditional instrumental rehearsal. I was experiencing, along with many other people, an entirely new mode of composition and performance of music.
However, at the end of this period of research I felt that the computer control of the piano could move towards a more complex real-time involvement with human performers. This was inspired by a deepening awareness of the physical limitations of an exclusively computer controlled approach. I had begun to realize a sense of closure to the project described earlier.
Exactly what such a transition would entail was unclear in the early design stages, so I returned to consider the traditional performance paradigm. The reasoning here was that real-time computer interaction would demand a greater degree of sophistication across the spectrum of the project, on the part of the performer, the mechanical system and computer control. This provided a basis for further research and so the MetaAction project was born.
The proliferation of synthesis equipment highlighted a kind of computer/instrument orthodoxy. If one wanted to work with technology to create a new musical instrument, one looked to the past to find a functional paradigm, tried to imitate it, and then found ways to customize it for current musical demands 7. This was clearly the case with sound synthesis and MIDI instruments. Few, if any, appeared without presets. Traditional instruments provided a criterion of proven significance and effect to which technology must aspire before updating. A synthetic or virtual instrument needed to be able to reflect those qualities of sound, history and repertoire that are intrinsic to acoustic instruments, in order to establish initial credibility.
The piano is still subject to considerable technical research. Today, this research is concerned with the use of contemporary materials that are very stable and hard wearing which allow the instrument to remain in use longer without extensive maintenance. One interesting point about contemporary actions is that they have a heavier touch than their historical counterparts. This would appear to be a consequence of the increase in mechanical sophistication intended to provide an action capable of transferring more energy from the player to the strings. This is obviously one of many precedents that reflect the fundamental fact about complex systems–more sophistication means more technology. The piano's relatively slow technical evolution has ensured that most people are unaware of the significant changes that it has undergone. Unless, of course, a variety of historical instruments are scrutinized at the same time 8.
Technical changes in construction and materials throughout the piano's history have also been taken for granted. This acceptance is largely seen as necessary in the ongoing struggle to reduce the cost of production and maintenance, thus making pianos available to a large number of people throughout society. While such evolution has changed the sound of the instrument by degrees, it usually falls within the bounds of accepted aesthetic criteria. These criteria seem to have expanded the concept of the piano, and it is interesting now to observe that the perception of the instrument begins to include electronic pianos whose sound so uniformly represents the best acoustic instruments. The commercial reality is that technology has again fulfilled a necessary role, in perpetuating the instrument, even when such synthetic pianos are obviously simulacra, by dispensing with the original sound production method altogether.
The piano is the ideal physical platform for experimenting with mechanical systems that replace the performer. This was understood as far back as the 1880s. It is large, well laid-out and widely available. While the player piano is the prime example of such innovation, it is interesting that automation is seldom associated with the grand piano. This is probably an economic and marketing issue, since more people could afford upright instruments. The history of the piano, on the other hand, remains the primary source of inspiration for what it means to play the instrument. It is doubtful whether the player piano was or could be viewed as an incentive for future instrumental development on anything other than the player piano itself.
While the player piano embodies an historical position, it is noteworthy that the primary impetus for its development was to completely replace human involvement in production of music 9. An apparent contradiction was that mechanical renditions were expected to possess and convey all the subtlety and nuance of human performance. Although the player piano achieved this to an impressive degree, it had definite limits, particularly in its credibility as a recording medium. There was also rarely a desire to exploit the instrument beyond conventional notions of piano playing, and it was not conceived as an adjunct to conventional performance. Performer/machine interaction as a concept would not have existed during the player piano's Golden Age, even though some external controls allowed for variation in tempo and dynamics. It is interesting that these controls required the operator to sit in front of the instrument during performance. The operator was thus put in a pseudo-performer state, necessitating concentration on the music and response when appropriate but at a technically undemanding level. Today, however, a performer might have sophisticated control over all aspects of a player piano system 10.
Most composers are drawn to music technology in the hope that it will realize their musical vision, which they believe cannot be accessed by traditional musical means. Questions inevitably abound concerning the technological manifestations of musical visions. Given the initial financial or intellectual means which temper the vision, it soon becomes apparent that the technology is not going to fulfill expectations conveniently and expeditiously. Curiously, however, as much as technology becomes an imposition on the imaginations of those not prepared for it, what focus it does impose on the imagination often causes oversight of the latent potential within the technology itself. It is clear that technology, particularly computer technology, has become a significant layer between the intention of the composer and the musical work. This layer, which we now address through an interface, is quite unlike anything previously encountered between the composer or performer and the music.
In the early 1980s available computing technology 12 reached the expectations of real-time but non-interactive performance. Limitations imposed by physical memory and input/output throughput were easily reached because of the trade-off between programming languages that allowed a high level of conceptualization and those that produced efficient programs. The early piano systems could have worked through many more compositional possibilities had the level of understanding and technical skill been higher. This retrospective consideration is interesting because I suspect that the potential of technology, on first contact, tends to drive creativity with sheer enthusiasm, irrespective of how one views one's competency to undertake a particular task. Technical mastery would appear to be the result of introspection upon a subject which is eventually manifest externally. This is clearly the case for musical performance. But does a profound understanding of the technology result in more significant musical compositions? Those with a traditional view of composition or performance rather than experience with technology might answer in the affirmative. This is predicated on the assumption that experience is the arbiter of success in the traditional field of instrumental composition.
In experimental computer music advances are generally made as part of the compositional process. The work is the impetus for technical evolution. Where musical concepts are coded in software they can be developed through accretion or deletion of musical ideas. Software can also be consciously discarded in order to accommodate radically new approaches which simply cannot be accommodated into an existing program. Unless developed concepts can be discarded or evolve into newer forms, I suspect that existing concepts often have the effect of blocking or stalling further efforts. If it is accepted that technology houses particular concepts then the adoption of new concepts might require the dismantling of the house–the abandonment of that technology.
MetaAction design and construction was certainly the result of observations of the early systems but was also governed by two views on the interactive potential of computers. The first came from an awareness of what a performer might contribute to a work that could not easily be composed for a computer-only performance. For example, the addition of natural phrasing and expressive qualities to structures not normally conceived in traditional pianistic terms. This is an antithetical position to that adopted by Nancarrow. The difficulty of including nuance and interpretation can be well appreciated by any composer who has tried to get a computer to perform without obvious mechanical qualities. I now regard the challenge of naturalness to be one of the most significant and complex areas of computer music interpretation.
The second was the belief that new sounds could be produced through a re-configuring of the sound production mechanism–the action itself. These views are closely related, the important element being the presence of a human performer. More precisely, I mean human mediation. I had become aware that watching machines work, while fascinating to a degree, was not as interesting as watching humans also engaged in the process. As Norbert Wiener is reputed to have said, "To human beings, human things are all-important."
The idea of new sounds from a hyper-active piano action was not original to this research. But I imagined that a performer together with a computer might accommodate a change in the piano/performer relation, allowing access to new and complex sound structures not previously experienced with a sense of human engagement. The sound of the instrument would remain much the same but the effect of these complex sound structures or aggregates would be unlike previously obtainable piano sounds. To illustrate my claim on this point and to cite the initial source of inspiration, I refer the reader to Conlon Nancarrow's Study #25 and also Study #36. Both studies, but particularly Study #25, demonstrate the piano operating in a realm that is not only unplayable but the sound effect is not conceivable in notated form until one actually hears it. The score for Study #25 reveals, for example, on page 6 an aggregate of over 80 notes that is played as a rapid flourish in just under a second 13. It is by no means a simple structure and involves the uneven displacement of notes over the entire range of the keyboard–lowest B to highest A. Such aggregates have a dynamic character independent of their external temporal context. This particular study exploits such aggregates, at first in isolation then finally as a dense and frenetic texture.
Both the medium and the musical objectives pursued by Nancarrow in his music excluded the possibility of human intervention during performance. Such works as Studies #25 and #36 would have been completely different had he been able to free up the performances in a way that corresponded to human performance technique and sensibility 14. My intention in the design of MetaAction was to permit precisely those factors and thus it becomes questionable as to whether the MetaAction is a form of player piano. It depends on how it is used.
The MetaAction is not so much a new means for producing dramatically new sounds but rather a new way of thinking about and accessing the traditional piano sound. Compositional interest lies predominantly in how the instrument is played.
The desire to experiment with computer control and explore new sounds on my early systems after the Nancarrow paradigm obscured the potential and ramifications of performer/instrument interaction. The potential to change performance practice existed, but was not fully appreciated as the focus was specifically on the compositional results and less on the idiosyncrasies of performance. What was happening to the idea of the performer was masked by the more demanding position of computer control rather than computer interaction. Only later, after the experience of unmediated computer control, did the idea of the interactive system appear attractive and possible. The position was also promoted through an understanding of new concepts and a change in technology.
The transition came about through the realization that the computer could be delegated certain performance tasks, each dependent on either internal or external data. This is a significantly different undertaking than handing over the entire performance to the computer through the preparation of a performance score or instructions.
While it is possible to view this as the evolution of instrumental technique to the point where it no longer resides solely with the performer, the implications for the future of traditional instrumental music are unclear. If performer/machine interaction becomes widely practiced, might it not put some kind of pressure on those still concerned solely with traditional techniques to review their position? Would the future of contemporary instrumental music look different?
For string players, the necessity of dampening strings is at the center of the concept of mastery of technique. It is unimaginable that players of stringed instruments are not aware of the effect of their hands against the strings. It is a tactile experience. A player can often detect that an impending, as yet unheard, note will sound not as intended because of faulty finger placement. There may or may not be time to do something about it but there is, for a brief moment, an option. Controlling the existence and timbre of a sound is something wrestled with at an early stage of technical development. One of the most spectacular techniques associated with dampening strings is the production of the harmonic (unfortunately not commonly used on the piano). Such delicate contact appears in contradiction to the normal action of firmly pressing the string against the fingerboard, but desired harmonics will only occur at the critical nodal points.
To appreciate the operational significance of the MetaAction, it is necessary to look at the active components. From the perspective of computer control, it becomes apparent that the otherwise integrated conventional piano action could be split into two autonomous systems. In the MetaAction, the critical components–the hammers and the dampers–have been physically isolated and put under individual control. Viewing the contribution of autonomous dampening operations through the technique of a string player–a violinist, a guitarist or a harpist–one sees that different functions are allocated to each hand but either hand can dampen strings. For the pianist, with years of training and an advanced technical proficiency, the intellectual burden of controlling dampers independently for each note would initially seem an unwelcome degree of sophistication. There is simply no historical provision for thinking that it would contribute anything to the nature of historical repertoires. The introduction of such a technique would imply the study of a new instrument and repertoire.
The MetaAction encourages a radical transition in keyboard technique. Controlling the dampers independently of the hammers introduces a new potential to the piano but at some cost. In theory, it is a realm of technique devoted to control and coordination of the dampers and the hammers in every possible way. To fully appreciate this would require new keyboard Studies. The list of action operations that may be employed is as follows:
The MetaAction alters the relation between the performer–who need not necessarily be a keyboard player–and the instrument. It initiates a series of changes towards the performer but does not significantly change the method of sound production. The strings are still struck by hammers and the sound is still percussive in origin.
In a dramatic sense, the delegation of control, particularly in the case of low-level performance function, to a computer positions the performance/instrument relation around a new aesthetic. It brings into existence questions concerning performance, its future, the role of the human performer, and the issue of access and appreciation of the resulting musical discourse.
Unlike most computer controlled piano systems, including its predecessors, the MetaAction and the implication of human/machine interaction shift the emphasis from the computer to a collective relationship. Performer/machine interaction should be understood as a tightly coupled system where it is difficult to separate or distinguish the contribution of each. The performer is no longer responsible for every event that occurs, but for how material is selected, articulated and coordinated. Under those circumstances it might be useful to obfuscate the specific contributions of both parties in order to discourage a desire on the part of the listener to try and identify the roles of the performer and the machine.
By far the most ambitious objective and the reason for the construction of the MetaAction was the desire to change the fundamental concept of how the piano is accessed. This is accomplished by changing the one-to-one correspondence between the player's actions and the resulting sounds. To make it possible to trigger aggregates like those in Nancarrow's Study #25, the computer would need to act upon some input from a performer and generate or manipulate pre-existing material. What is then articulated can be predetermined in a variety of ways–for example, either from the previous input or as data stored and composed prior to the performance.
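One way to break the one-to-one correspondence can be sketched as a simple lookup scheme, in which a single trigger note expands into a pre-composed aggregate. This is a hypothetical illustration, not the actual MetaAction software; the trigger numbers and aggregate contents are invented for the example:

```python
# Map one performer key to a pre-composed aggregate of many notes.
# Each aggregate event is (pitch, onset offset in seconds from the key press).
AGGREGATES = {
    36: [(p, i * 0.01) for i, p in enumerate(range(21, 105, 2))],  # rapid flourish
    38: [(60, 0.0), (64, 0.0), (67, 0.0), (72, 0.05)],             # spread chord
}

def on_note_on(trigger_pitch, onset_time):
    """Expand one key press into the events the action should play.

    Unmapped keys fall through to a plain one-to-one response.
    """
    events = AGGREGATES.get(trigger_pitch, [(trigger_pitch, 0.0)])
    return [(pitch, onset_time + offset) for pitch, offset in events]

# a single key press yields a 42-note flourish spread over 0.41 seconds
flourish = on_note_on(36, 0.0)
```

The same dispatch point could just as easily derive the aggregate from previous input rather than from stored data, corresponding to the two options named above.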
Although no suggestion has been made as to how a performer might gain access to the action and consequently the instrument, it is evident that MIDI might initially provide the most convenient and flexible method. One could simply use the traditional keyboard, relying on existing keyboard techniques. Here the performer can play with all the precision, grace and confidence acquired through years of practice and get an entirely new sound. This, of course, might come as a shock and dramatically undermine any advantage established technique might offer. So what is the value of technical proficiency in the traditional musical sense to relatively unknown performer/machine interactive systems? It depends on the intentions of those constructing and composing for the MetaAction. Parallels do exist in contemporary works where the performer is required to play beyond historical technique or aesthetic in order to execute the work. John Cage's prepared piano works come to mind. However, it cannot be assumed that skills, particularly those acquired in pursuit of traditional musical results, are transferable to systems which might exhibit autonomy of action and originality of intention. Even if some technical competency is necessary, there are still questions of interpretation. Clearly, the historical gestures and intentions in the expressive actions will neither correspond nor feel appropriate. But how a composer and performer interpret the past or the future inevitably seems to have a bearing on the treatment of the instrument.
The imposition of computing technology is more than an enhancement to the existing mechanism, for it breaks open the closed loop arrangement long thought immutable between performer and instrument. Yet it also only admits selected influences. Thus with such augmentation, formerly closed systems become intrinsically open.
The level of keyboard skill throughout the world is now higher than it has ever been before. This in itself is a powerful argument for aligning technological advancements to the competency of traditional instrumentalists 16. To suggest, however, that it is a superior form of interface is to revert to a traditional assessment of musical objectives. The piano keyboard certainly is the way it is because of evolutionary forces within western music. Exactly what these forces are and how and when they came about may be a subject more appropriate to musicological discourse, but technology is challenging the exclusive position of western musical thought in areas of instrumental practice. The MetaAction has a sense of being part of some evolution around the piano but seeks musical influence from instruments from other cultures. The broader collaboration with technology must be worked out against a historical background of immense cultural significance. The integration of computing technology and the acoustic piano is a collision of conceptual paradigms of a primarily incongruous nature. This incongruity is largely associated with historical positions where the computer, on the one hand, is a rich source of new concepts and the piano, on the other, is rich in history and the human desire for cultural interpretation. This is not to say that they cannot be brought together, but there is resistance to their integration on the grounds that it would sever the instrument from its perceived historical continuum. The instrument effectively becomes new and has only a vague past.
That synthesis instruments are generally virtual–in that they can be updated, changed or replaced at any time with no apparent external or physical correspondences–argues for a flexibility in interface design and application. The interface is dependent on a vast range of creative objectives in the music world. While the acoustic piano does not exactly fit the category of a computer instrument in the way a synthesis instrument might, the MetaAction does permit the types of interfaces normally associated with synthesis instruments.
One of the lingering issues about this kind of research is the question of maturity and intrinsic musical value. Assuming for the moment that all engineering and technical details have been resolved and the MetaAction operates as envisioned, there remains the frontier of the musically unknown. Little is actually understood about the potential of the instrument and even less about notions of competency or virtuosity, so it will remain experimental. While this model might exude a potential, it is not until musical works are written, and considered of some aesthetic value, that the instrument can be said to have arrived.
Interaction is used here in its broadest sense. That is, the engagement of any instrument or piece of equipment irrespective of whether the results of the session are in real or non-real-time.
The Computer as Interface: Interlacing Instruments and Computer Sounds; Real-time and Delayed Synthesis; Digital Synthesis and Processing; Composition and Performance. Interface Vol. 21 [9-20]. 1992.
Soundings. Vol. 9, 1975. Edited by Peter Garland.
Tango? by Ursula Oppens (American Piano Music of Our Time. Music & Arts CD-604 1989), which reveals Nancarrow's style but, of course, not quite the same technical concerns.
Buff-stop functions in this manner by dampening a choir of strings. Indeed, it does alter the relation between key release and damped notes. It is interesting to note that the desire to change the sound of the strings on keyboard instruments went out of fashion with the arrival of the piano. That it has returned to some minor extent in the twentieth century with the emergence of the prepared piano is rather more indicative of yet another complex musical agenda, one that was to flourish with the advent of the synthesizer.