Chapter 5


The final system discussed in this thesis is a simple interactive program, PMIS (Performer/Machine Interactive System), developed to investigate certain acoustic phenomena peculiar to the modified instrument. The origins of the modified instrument are given in Chapters 2 and 3, and a sense of its timbral diversity can be gained from the compositions on the cassette. The system utilises the existing interface hardware and instrument without further modification, but the interactive software itself is now written in the C programming language and runs on the Amiga microcomputer, which replaces the Z80-based CP/M system used for the previous systems. PMIS controls the interaction between the lowest 24 consecutive bass strings of the modified piano and their respective solenoids. The solenoids, activated and controlled by the performer at the computer keyboard, excite the strings, producing a variety of unusual sounds and timbres. Because of the nature of this interaction, the system cannot be used on a traditional piano. A listing of the PMIS software can be found in Appendix F, and an improvisation produced using PMIS is the last work on the cassette.

5.1 Music in the Age of Human/Computer Interaction

In retrospect, the 1980s may come to be regarded not only as the decade in which microcomputer technology had an impact on almost every level of public life (e.g. work, education, leisure, the sciences, health care and the arts) but also as a period in which the users of this technology had a powerful reciprocal influence on the technology's development and direction. As the novelty of raw processing power has worn off, the manner in which the technology can be instructed and directed to perform its many tasks has become more important. This was clearly demonstrated by the cry that went up from non-technical users earlier in the decade for 'user friendly' systems.

This universal demand has encouraged diverse study in the field of research known as Human/Computer Interaction (HCI). From primitive and frustrating person-computer interfaces to the heady prospect of symbiotic partnerships, HCI has emerged as a relatively new interdisciplinary study, although it has in fact existed in the background since the advent of computer technology. It has paralleled the sciences of Cybernetics and Artificial Intelligence (AI) but, unlike those fields of research, HCI has not aroused strong emotions, perhaps because it is not concerned with emulating, excelling or replacing people. The only controversial aspect of HCI might be found in the notion of human/machine symbiotic relationships, but that is only one of a number of its long-range goals (such a scenario is fictionalised in the relationship between the astronauts and the computer HAL in the motion picture "2001: A Space Odyssey"). To the benefit of HCI's image, this goal has not been persistently held up as the justification for continued research funding, since the outcome of much of the current work has immediate application in the commercial and scientific world.

HCI research overlaps to some extent with Cybernetics and AI, for example in systems modelling, pattern recognition, knowledge engineering, electrical and mechanical communication and machine dialogues. But rather than placing the expectations of the technology squarely on the machines, i.e. the automation of problem solving or task performance, HCI research acknowledges and attempts to deal with the fact that humans must initially present the problem or task to the machine, periodically interact with it and finally interpret the results. This may take the form of a simple one-off command, or a series of commands and responses, queries and answers forming an ongoing interactive dialogue. The scope of HCI is well captured by this metaphor from Gaines and Shaw's overview of HCI:

Hansen's (1971) comparison of the interaction system designer with a composer of music appears very apt when one examines the wide variety of styles that have been developed for HCI. People and computers are very different systems and there are no universally obvious ways in which they should interact. Different designers have different views, often strongly held, and different users have different preferences, often equally strong. (Gaines and Shaw 1986. p.102)

In recent years the term 'real-time' has come to imply more than simply the immediate performance (or analysis) of musical data on a computer music system. The pursuit of real-time operation has broadened the computer's practical and theoretical involvement with music. Wherever the benefits of an immediate response have been deemed important, considerable effort has been made to achieve it. In various areas of synthesis, the response time and sound quality of dedicated computer music systems are now comparable to those of traditional acoustic instruments. This has given rise to creative applications of HCI, such as that expressed in the term "New Age Performance" (Boulanger 1986), which links these evolving intelligent instruments with "intelligent" performers to create an entirely different musical milieu.

However, the evolution of computer music systems - especially those based around the current generation of microcomputers - has seen practitioners divided into those who want immediate musical interpretation and those who want a similar response but one influenced by the human performance paradigm. For example, M and Jam Factory offer human interaction, but through the constraints of the application's user interface and the inherent properties of the Macintosh computer. Controlling a synthesis system from a (MIDI) keyboard, on the other hand, transfers the performer's physical actions directly to the music.

Music systems that deal with the psychophysiological characteristics of competent performers have had a considerable impact on the course of computer music. Composers juggling a vast array of sonic possibilities, literally at their fingertips, can give their music a credibility that would otherwise be difficult to instil using current compositional formats. Moore explains this attraction:

A fundamental motivation for achieving a real-time performance capability in computer music, then, is to recapture this level of "visceral control" of musical sound - by which I mean a kind of control that includes both intuition and conscious and unconscious thought. Physical capabilities of human performers are simply too magnificent to be ignored altogether in any form of musicmaking. (Moore 1987. pp.256-57)

In discussing performer/machine interaction in his work Repons, Boulez also observed:

The contrast between the familiar and the unfamiliar can thus be readily studied by creating close and distant relations between the scored instrumental passages and their computer-mediated transformations. In addition, since the transformations are done instantaneously, they capture all the spontaneity of public performance (along with its human imperfections). (Boulez and Gerzso 1988. pp.27-28)

Discussions on general real-time performance issues have appeared with increasing frequency in the computer music literature (Appleton 1984; Chadabe 1984; Dannenberg, McAvinney and Rubine 1984; Grossman 1987; Laske 1978; Logemann 1987; Paturzo 1985; Puckette 1986; Roads 1985b, 1986a). With the increasing availability of the powerful computing machinery, numerous specific real-time implementations (Chabot, Dannenberg and Bloch 1986; Tarabella 1987; Barbaud 1986; Greenhough 1986; Truax 1984; Teitelbaum 1984) have emerged. However, no single direction or system has focused as much attention on real-time performance issues as MIDI (Loy 1985; Moore 1987; Yavelow 1986, 1987). The standardisation of a protocol for real-time communication between control devices and synthesis engines, and the concerted efforts of the music technology industry have radically changed the perception and practice of real-time computer music.

5.2 Background to the Development of PMIS

The development of a Performer Machine Interactive System (PMIS) was influenced by a number of factors ranging from activities within the computer music community to personal attitudes towards the previous systems.

It would be unusual not to have noticed the prevailing trend towards the direct involvement of musicians with microcomputer-based systems. Activities in this area date back to the late 1970s, when the microcomputer's impact on society was first being felt. The League of Automatic Composers (Bischoff, Gold and Horton 1978), for example, were early and unique contributors to composer/computer interactive performance; generating music from an interactive network of microcomputers was particularly innovative for the time.

More recently the work of both George Lewis (Roads 1985b) and Michel Waisvisz (Roads 1986a) has instigated new directions in the field of interactive performance. Lewis' interest in real-time computer interaction grew out of his skills as a jazz trombonist. In conversation with Curtis Roads he expressed his position thus: "For me, live electronic performance has less to do with so-called 'interesting' timbres than with the directed quality that you find when a human is playing." (Roads 1985b. p.84). Waisvisz, on the other hand, has developed a system of sensors for interpreting body movement; in this respect he is not drawing on traditional or indoctrinated skills. These examples demonstrate the diversity of, and potential for, performer/machine interaction. In the context of a bipartite interaction, the computer's role is that of a co-constructor. The difficulty for the musician/performer lies in accepting this role and achieving satisfactory results.

A second and more direct influence came from experiences with the previous algorithmic system. Although unsophisticated and limited in scope, it demonstrated that interesting results were possible from such a system in real-time. However, the musical results in general indicated that performances could perhaps be improved by some external aesthetic supervision in the form of real-time intervention. A possible functional augmentation to the previous system along these lines is perhaps worth a brief consideration.

The course of a performance could be influenced by forcing desired structures to occur at certain crucial times. Reflecting on the algorithmic function of the system discussed in Chapter 4, decisions regarding specific musical material for a future structure could be made without seriously disrupting the flow of the performance. Such a decision could take place within the duration of the currently performing structure and at least one structure ahead of the estimated time of performance. If a decision was not made within, for example, 75% of the current structure's execution time, the system would need to take over in order to maintain performance continuity. Under such time constraints, it is suspected that only the higher-level structural features of the work could be successfully influenced.
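The timing constraint just described can be expressed as a small sketch in C, the language of the PMIS software; the function name, millisecond units and threshold constant here are illustrative assumptions, not part of the system discussed in Chapter 4.

```c
#include <assert.h>

/* Illustrative sketch: decide whether the system must take over
 * selection of the next structure.  Times are in milliseconds and
 * the 75% threshold follows the figure suggested in the text. */
#define DECISION_THRESHOLD 0.75

/* Returns 1 if the performer's decision window has closed and the
 * system should choose the next structure itself, 0 otherwise. */
int system_must_decide(long start_ms, long duration_ms, long now_ms)
{
    long deadline = start_ms + (long)(duration_ms * DECISION_THRESHOLD);
    return now_ms >= deadline;
}
```

For a structure of, say, 1000 ms starting at time 0, the performer's window closes at 750 ms; polling this predicate once per scheduling tick would suffice.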

Human intervention in an automated composition and performance system, occurring at a point where the performance appears aesthetically weak, may to some degree deprive the performance of its essential algorithmic character. This leads to questioning the validity of a performance in which the interaction is nothing more than superficial tampering. Furthermore, there exists the potential for excessive interference, which could ruin an otherwise satisfactory algorithmic performance. The difficulty with this approach lies in knowing when to intervene, and with what. Considerable effort may be channelled into recognising a deterioration in the performance, only to then be confronted with the question of what to do about it. The decision to initiate some form of remedial action would need to be made on the basis of experience and/or a collection of alternatives. These alternatives could be computed as part of the ongoing composition process and presented to the performer on demand.

While theoretical solutions may abound, a successful implementation is likely to depend on powerful computing machinery and sophisticated software. To successfully influence the course of an algorithmic composition may be at least as difficult as having a computer influence the course of a human performance. In the former case the performer has to make decisions objectively without the benefit of being part of the composition process. This 'standby' approach could well be very taxing in a creative context, especially if it were constantly time-critical, i.e. where decisions need to be made within a roughly known period of time. In the latter, the computer may make changes to a performance on the fly, some of which the performer may be unaware of. This is likely to be far less demanding on the performer's interaction with the system.

In the long term, however, it may be possible to develop basic mechanical skills and artistic insights which would consistently improve the musical results from algorithmic systems. What influence the current interactive and algorithmic systems, such as M, Jam Factory (Zicarelli 1987) and to some extent HMSL (Polansky, Rosenboom and Burk 1987b) will have on algorithmic composer/performers and the development of new systems, has yet to be fully appreciated.

An initial motivation for developing the microcomputer controlled piano system was to exploit the concept of extended performance. In theory the microcomputer should overcome certain physical barriers, allowing access to sounds and instrumental characteristics that have either never before been explored with such precision or even considered. However, in light of the discussion in section 5.1, there has been a growing awareness that real-time computer music is enhanced by human performer input. Moore again illuminates the point:

But there is another even more fundamental reason why real-time performance control is desirable in computer music. We are acutely sensitive to the expressive aspects of sounds, whereby a performer is able to make a single note sound urgent or relaxed, eager or reluctant, hesitant or self-assured, perhaps happy, sad, elegant, lonely, joyous, regal, questioning, etc. (Moore 1987. p.257)

A direction for the performer/machine interactive system finally crystallized around a specific performance objective found only on the modified instrument. As it transpired, PMIS did not evolve to influence the course of an algorithmic composition system. Rather, it developed primarily to allow experimentation with curious phenomena initially observed on the modified piano during the construction of the prototype. These phenomena are introduced in section 3.4.1 and further explained in section 5.3. The system differs from the others by being specifically for one instrument and requiring a performer to operate it in real-time.

Controlling these acoustic effects was an interesting challenge for a real-time system because it required the performer to manage, remotely, an unstable set of conditions. The microcomputer's role in allowing and controlling real-time performance of this effect thus differed from that in other real-time systems, where the conditions for sound production are generally assumed to be stable. This made developing such a system all the more intriguing.

On a practical level, a new microcomputer system was available which was far more powerful and sophisticated than the system used previously. The Amiga has facilities for sophisticated interactive graphics, multi-tasking and a range of I/O options, and was at least an order of magnitude more powerful than the Z80 CP/M system. The current series of powerful microcomputers is, in general, responsible for the flourish of activity in human/machine interaction. This is now evident not only in the diversity of self-contained systems but also in the plethora of utilities for personal system development.

5.3 A Description of the Acoustic Phenomena

During experimentation with a prototype of the modified piano in late 1983, a curious acoustic effect was found to occur when a solenoid was left in contact with a string, where it operated in the capacity of a driving function. By accident, one of the solenoids in the bass register of the instrument was left on during a test passage. Rather than the string ceasing to vibrate and the sound fading away, an altogether new sound emerged and persisted. The following hypothesis, explaining the string's behaviour, is proposed on the basis of observations of the interaction between the electronic mechanism and the string.

The modified instrument as described in section 3.4.1 is substantially different from the traditional piano. These differences include the facts that it cannot be played in a conventional manner (since it has no keyboard) and that the strings are struck by wooden hammers with no individual dampers accompanying them.

Although the wooden hammers were installed to bring out a greater range of upper harmonics and partials when used in the conventional manner, the effect generated by their long-term contact with the strings was not initially anticipated. Sustaining string vibration with the wooden hammers induces a complex tonal spectrum that is the result of interacting elements, i.e. the oscillating solenoid, the power supply to the solenoid, the string/hammer contact point and the vibration of the string. It is also important to bear in mind that the subtlety of this interaction depends on the nature of the materials that make contact and the physical proximity between the energy source and the string. Being hard, the wooden hammers transfer more of the solenoid's intricate behaviour to the string over shorter periods of contact.

When a wooden hammer makes contact with one of the wound strings it forms an artificial bridge at the point of contact. The significance of this new bridge is apparent almost instantaneously, as the string fundamental rises to a new pitch. Yet while the hammer forms the new bridge, it is ultimately the pressure exerted on the string by the solenoid that determines the eventual pitch. By varying the pressure on the string (this may be done dynamically), some strings can produce as many as eight pitches of uniform timbral quality (the legato passages in the improvisation Black Moon Assails demonstrate this behaviour), with other pitches of a less immediately appealing and far more complex character. There are 32 possible pitches or timbres, corresponding to the 32 dynamic levels decoded by the interface electronics. The outcome is not always stable and may vary from time to time depending on the string and the particular dynamic level. Thus some variation between supposedly similar intervals produced on different strings is likely to occur.

Changing the power to a solenoid changes the string tension, but it is not responsible for the excitation of the string. This is achieved by the duty cycle, or frequency, at which the power is supplied to the solenoid. In order to minimise the power consumption of the solenoids and reduce the possibility of heat damage, the original designers implemented a duty cycle that switches the power to the solenoids on and off at 5 millisecond intervals. This effectively means that the solenoids are oscillating at 200Hz. The power duty cycle causes the movable solenoid core to oscillate and in turn vibrate the string when it makes contact. The strings are thus continually excited at this frequency. Appendix B is an excerpt from the manufacturer's technical manual explaining the operation of the solenoids and the associated electronic circuitry; it should be read with reference to the schematic at the end of the appendix. The schematic, however, only represents in principle the generation of the dynamics (expression) and is not the schematic for the interface boards used in the current system.
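The relationship between the switching period and the 200Hz excitation can be shown with a one-line helper; this is an illustrative sketch only (the function name is hypothetical), assuming the full on/off cycle occupies the 5 millisecond interval.

```c
#include <assert.h>

/* Illustrative only: derive the solenoid drive frequency from the
 * interface's switching period, f = 1 / T.  The 5 ms figure is
 * taken from the text; the helper name is hypothetical. */
double drive_frequency_hz(double cycle_period_ms)
{
    return 1000.0 / cycle_period_ms;   /* period in ms -> frequency in Hz */
}
```

With the 5 ms cycle this yields the 200Hz at which the strings are continually excited.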

The sounds produced are at times similar to those of a bowed string instrument, like a cello or double bass. At other times, the harsh, buzzing sound, while less immediately appealing, is nevertheless interesting in its dynamic timbral complexity. Exploring the circumstances further, it was found that all 24 wound bass strings on the instrument (for the range see Figure 5.3) responded to the one excitation technique and displayed similar characteristic timbral qualities. The unwound steel strings, on the other hand, did not respond with a similar timbral range. It is presumed that they do not have the mass or the length to produce and sustain complex vibrations using that particular excitation technique.

The two fundamental timbres differ in spectral complexity as a direct result of the dynamic interaction between the solenoid and the string. To produce the first timbre, it is supposed that the solenoid is oscillating at the constant 200Hz duty cycle. The frequency of the string when the hammer is in contact is the result of the solenoid's pressure. If the string vibrates at integer multiples of 200Hz then the driving force of the solenoid will simply maintain that particular string oscillation (Figure 5.1a). The more pleasing sound, by western standards, is therefore, the result of a less complex string/solenoid interaction.

The more complex spectra are produced when the string and the hammer have a varying phase relationship, that is, when the string's frequency is not an integer multiple of the 200Hz driving function (Figure 5.1b). The solenoid constantly disturbs the natural vibrating mode of the string and causes it to generate aberrant spectra. The phase variation remains until the power supply to the solenoid is increased or decreased to a point where the actuation of the solenoid is concomitant with the movement of the string.

Fig. 5.1a A graphic representation of a hammer/string interaction when the frequency of the string is some integer multiple of 200Hz.

Fig. 5.1b A graphic representation of a hammer/string interaction when the frequency of the string is not some integer multiple of 200Hz.

Although this is only a simple hypothesis based on observations and experience with the equipment, the nature of the driving function and the oscillation characteristics of the strings suggest that the timbres are the result of a complex and dynamic interaction between the solenoid's essentially stable 200Hz oscillation mode and the oscillation characteristics of the individual strings. A more detailed analysis and scientific explanation might be of value in clarifying this interaction, but it is unfortunately outside the scope of this thesis.
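The hypothesis can be summarised as a small classification sketch in C. The function name and tolerance are hypothetical, and the real string behaviour is of course far less clean-cut than a binary test suggests.

```c
#include <assert.h>

/* Sketch of the hypothesis: a string frequency that is an integer
 * multiple of the 200Hz driving function should settle into the
 * stable, "bowed" timbre (Fig. 5.1a); anything else drifts in phase
 * against the solenoid and yields the complex spectrum (Fig. 5.1b). */
enum timbre { TIMBRE_STABLE, TIMBRE_COMPLEX };

enum timbre classify_timbre(double string_hz, double drive_hz)
{
    double ratio   = string_hz / drive_hz;
    double nearest = (double)(long)(ratio + 0.5);  /* round to nearest integer */
    double error   = ratio - nearest;
    if (error < 0) error = -error;
    return (error < 1e-6) ? TIMBRE_STABLE : TIMBRE_COMPLEX;
}
```

A 400Hz string (an exact multiple of the drive) would classify as stable; a 313Hz string would not.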

Finally, on a practical point, the solenoids were designed to remain active for a reasonable period of time. This was originally determined from normal piano performance and it was not envisaged that they would be individually subject to either radical fluctuations of power or extended periods at a maximum operational level. As a consequence, if maximum power is used extensively, those solenoids may suffer permanent damage through burnt out windings. A pragmatic approach was taken to preventing this. When a solenoid became hot or was used extensively, it was allowed to cool before being reactivated. Attention was thus paid to the amount of use particular solenoids were receiving.

5.4 Communication and Control in an Interactive Musical Environment

Physical control of real-time digital musical equipment has assumed considerable sophistication with the advent of MIDI. Examples of such controllers range from Yamaha's KX88 keyboard and WX7 wind controller to the more exotic Stepp guitar (Denyer 1987) and Big Briar Multiply-touch-sensitive keyboards (Roads 1987). The guitar has long been considered a difficult instrument to interface with the digital world. Capturing those idiomatic characteristics and artifacts, both familiar and important to guitarists, proved to be a stumbling block for many developers. However, with the motivation for developing such an instrument directed from the market place, substantial research has been focused on meeting the guitarist's needs and whims.

The process by which the sounds are generated on the modified acoustic instrument was initially viewed as having similar characteristics to that of the guitar. At the heart of this is the subtle and complex nature of the solenoid/string interaction. This gave rise to interesting questions about how to effectively and efficiently translate human actions to machine responses and finally music. Given the resources, what would be the most suitable and versatile physical interface? What is the greatest response time acceptable for real-time performance? Since there are few historical models to either musically or physically determine a favourable direction in this case, such questions could only be answered by addressing the musical intention of such an interaction. For example, there is no reason why only one type of controller should be used. Where the actual sound generation takes place remotely from the performer, what type of controller to use is simply a matter of skill or inspiration. With MIDI, for example, it would be as easy to interface a keyboard as it would a wind controller (or two).

How successfully performance characteristics are mapped to instrumental behaviour is dependent on the software which interprets and translates data and the protocol which conveys the information from the controller to the instrument. It must be borne in mind that in some cases (here for example) there can be more than one protocol used, i.e. a different communications protocol between the controller and the microcomputer, and the microcomputer and the instrument.

PMIS uses the console keyboard (also known as the QWERTY keyboard) as a controller. This was initially a matter of expediency as it bypassed problems of interfacing different hardware. The behaviour of the strings and solenoids, and how best to control them, had not been extensively researched, and the adoption of a specialised controller was not initially warranted. General MIDI interface software was also not readily available for the Amiga at the time PMIS was developed. Exactly how MIDI controllers, such as a keyboard or wind device, could improve the performance on this system would need to be researched under different circumstances. Reasonable performance skills would be mandatory with these controllers to ensure optimal research conditions.

The standard computer keyboard has been used frequently in elementary computer music systems because of its availability and simplicity of integration in software. Curtis Roads and John Strawn in their overview to 'Synthesizer Hardware and Engineering' make the following observation:

On E. Kobrin's Hybrid IV system, developed in the 1970's at the University of California at San Diego, even an alphanumeric keyboard was adequate (if not inspiring) for controlling four separate channels of sound. (Roads and Strawn 1985a. p.199)

While the console keyboard is generally perceived as unsuitable for control of serious musical activities, it nevertheless provides a means of assessing and understanding control requirements and parameters during an experimental phase. The physical restrictions and demands of the keyboard may not be as severe as those of specialised instruments. In this case, operational characteristics of hand and finger placement could be reviewed and the active keys rearranged to give better response times and control positions. In evaluating alternative means of control, these keyboard experiences would at least provide some foundation for performance comparisons and expectations.

Although it was never intended to be used for musical performance, the console keyboard may evolve into a controller of considerable subtlety simply because it is versatile and at the forefront of human/machine interfaces. To illustrate this point about evolution, although not from a musical application, keyboard re-design has produced some interesting alternatives. The Morita keyboard (Koyabashi 1984) is perhaps one of the more unusual, not only because of its ergonomic layout but also because it is intended for mixed use with alphanumerics and Japanese Katakana characters (the angular form of the Kana syllabary most commonly used in Japanese computer systems). While this tends to make the keyboard more sophisticated, the Morita keyboard, with 30 keys, in fact has 18 fewer than the current standard keyboard's 48. It is claimed that its major improvements over the standard keyboard are a 33% reduction in training time and a 2.5-fold increase in data entry speed.

While piano keyboards and wind controllers currently dominate thinking from a musical performance perspective, devices developed along the lines of the console keyboard may at least begin to influence musical instrument design through their widespread use and the general skill levels associated with them. From a traditional musical viewpoint it seems unlikely that they would usurp the role of specialist real-time instruments and controllers, but if workstation-based computer music systems like M, Jam Factory and HMSL continue to evolve then it is possible that the keyboard may also.

Traditional acoustic musical instruments on the other hand, were never intended to be used for purposes other than making music. With their control mechanisms, i.e. keys, mechanical actions, bows, etc, an integral part of sound production, they will remain on their own evolutionary path whether it is towards music technology or not. This is largely because of the performance tradition. In the future, however, any control device (non-musical) which provides a similar range of functionality, from the elementary mechanical operations to the microgestural (where sounds are the result of slight changes in body position), may qualify as a vehicle for valid musical expression.

5.5 Operational Aspects of PMIS

PMIS enables the performer to control the following fundamental operations:

- Solenoid contact with the 24 consecutive bass strings
- The energy to the solenoids (the 32 dynamic levels used in the previous systems)
- The damper mechanism for the lower bass strings
- System and interface reset
- Multiple solenoid activations (of limited value)

The performer accesses these operations through a predefined set of keys on the computer keyboard. The layout of the keys is shown in Figure 5.5. This particular arrangement was determined on the basis of frequency of use and convenience. Activating the solenoids, for example, obviously required a central group of keys that were familiar from ordinary use and fell conveniently under the fingers. This design feature, taken from normal keyboard use, ensures that the hands do not move around unnecessarily. The configurations detailed in Figures 5.2, 5.4 and 5.5 reveal this predisposition towards the conventional use of the keyboard.

The keys cover all 24 strings (see Figure 5.3 for range) producing the principal sounds. The remaining strings of the instrument were left to resonate in sympathy, once again giving the sound a natural reverberance. Figure 5.2 shows the console keys allocated to activate and deactivate the solenoids.


No.  Key  Pitch      No.  Key  Pitch
 1    A    A         13    H    A
 2    W    A#        14    U    A#
 3    S    B         15    J    B
 4    D    C         16    K    C
 5    E    C#        17    I    C#
 6    F    D         18    L    D
 7    R    D#        19    O    D#
 8    T    E         20    P    E
 9    C    F         21    ,    F
10    V    F#        22    .    F#
11    G    G         23    ;    G
12    Y    G#        24    [    G#

Fig. 5.2 PMIS key to string control mappings. Number reference for Figure 5.5.

Fig. 5.3 Pitch range of the 24 strings used in PMIS.

The keys selected for each operation work on the basis of one keystroke per action (toggle operation). For example, if a solenoid is activated by pressing the appropriate key, it is deactivated by pressing the same key again. This expedited software development and, from a performance perspective, meant that the fingers (and hands) were free to control other parameters of the solenoid/string interaction. To use the traditional piano keyboard would most likely have required a more complex interpretation of keyboard events, perhaps to the extent of implementing scheduling strategies.
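A minimal sketch of this toggle scheme in C follows; the state array and function name are hypothetical illustrations, not taken from the listing in Appendix F.

```c
#include <assert.h>

/* Sketch of the one-keystroke-per-action (toggle) scheme: each
 * keystroke flips the state of the solenoid mapped to that key. */
#define NUM_STRINGS 24

static int solenoid_on[NUM_STRINGS];   /* 0 = off, 1 = on */

/* Flip the state of the given solenoid and return the new state. */
int toggle_solenoid(int string_index)
{
    solenoid_on[string_index] = !solenoid_on[string_index];
    return solenoid_on[string_index];
}
```

Pressing the same key twice thus returns the solenoid to its prior state, leaving no key held down and the hands free for other controls.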

The control of the solenoids and the general system functions is assigned predominantly to text-management keys as follows:

25  Return     Reset
26  Delete     Maximum dynamic (solenoid power)
27  Backspace  Minimum dynamic (solenoid power)
28  'Q'        Exit PMIS
29  '?'        Help

30  Up         Increment energy level
31  Down       Decrement energy level
32  Left       Left hand keys control lower octave
33  Right      Left hand keys control upper octave

34  Space bar  Damper for the strings
35  Tab        Chord option
36  'q'        Exit the chord option
37  'X'        Abort the chord option

Fig. 5.4 PMIS key to system function mappings. Number reference for figure 5.5.

Most of these system function keys, especially those used extensively in performance, are activated by the right hand fingers. These keys form a concise group located at the right of the main key area. The right hand, therefore, is not always available to activate solenoids with the same level of precision and fluidity. In order to make all strings available for selection, the upper 12 strings (those assigned to the right hand) can be mapped onto the left hand keys simply by hitting one key. The left and right arrow keys toggle this configuration. When the right hand is controlling solenoid behaviour or system functions, the left hand can have control over all the solenoids.
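The octave remapping described above amounts to a single offset applied to the left-hand key indices. A minimal sketch, assuming hypothetical function names (the real PMIS source is in Appendix F):

```c
/* 0 = left-hand keys address the lower 12 strings, 1 = the upper 12.
   The Left and Right arrow keys set this flag. */
static int upper_octave = 0;

void select_lower_octave(void) { upper_octave = 0; }   /* Left arrow  */
void select_upper_octave(void) { upper_octave = 1; }   /* Right arrow */

/* Map a left-hand key number (0-11) to a string number (0-23). */
int left_hand_string(int key)
{
    return key + (upper_octave ? 12 : 0);
}
```

A single keystroke thus re-targets all twelve left-hand keys at once, leaving the right hand free for the system function keys.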

The technique of extended control over a solenoid, which produces the variety of pitches and timbres from the string, is a single-solenoid operation. Since a solenoid/string interaction is complex and occasionally unpredictable, there is little or no scope for monitoring the behaviour of several solenoids individually. Nor is it physically possible to drive solenoids at different power levels simultaneously, since the power supply to them is always uniform: two solenoids operating at the same time must both function at the same level.

When the hammer is in contact with the string for extended periods, the pitch and timbral characteristics of the string can be changed by varying the duration of the power supplied to the solenoid. The keys selected for this purpose were the Up/Down cursor control keys. A numerical display representing the energy level currently allocated to the solenoids is shown on the screen. When the Up key is pressed, for example, the level rises by a set amount. The actual amount can be adjusted in increment values ranging from 1 to a maximum of 5. The increment value is changed by pressing the number key corresponding to the value. A solenoid influences the behaviour of a string at any of the power level values from 0 to 31. The Delete and Backspace keys are used to shift rapidly to the extremes of the solenoid's power level values, i.e. 31 or 0. To cause an abrupt halt to activities during an improvisation, the reset function can be employed. This is implemented through the Return key. It guarantees inactivity on the instrument by turning all solenoids off, setting the dynamic level to 0 and deactivating the damper mechanism (if it is active at the time). The performer will then have a known state from which to recommence.
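The energy-level logic above (a 0-31 dynamic value, a 1-5 step size, and jump-to-extreme keys) can be sketched as follows. All identifiers are hypothetical illustrations of the behaviour described, not the PMIS source itself:

```c
#define MAX_LEVEL 31

static int dynamic = 0;   /* current solenoid power level, 0-31 */
static int offset  = 1;   /* step applied by the Up/Down keys, 1-5 */

/* The number keys 1-5 select the increment value. */
void set_offset(int n) { if (n >= 1 && n <= 5) offset = n; }

/* Up/Down arrows raise or lower the level, clamped to the 0..31 range. */
void level_up(void)   { dynamic += offset; if (dynamic > MAX_LEVEL) dynamic = MAX_LEVEL; }
void level_down(void) { dynamic -= offset; if (dynamic < 0) dynamic = 0; }

/* Delete and Backspace jump straight to the extremes. */
void level_max(void) { dynamic = MAX_LEVEL; }
void level_min(void) { dynamic = 0; }

int current_level(void) { return dynamic; }
```

Clamping at both ends matters in performance: the performer can lean on the Up key without the value wrapping or overflowing into an undefined solenoid state.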

The damper mechanism suppresses the resonance of the strings. While the strings are dampened they may also be struck. This produces an alternate range of timbral characteristics. It can be used to soften the sounds by muting the higher string frequencies. The space bar toggles the damper operation. A 'D' is present on the screen when the damper is active (see Figure 5.6).

Normally when a console key is depressed for some time it begins to repeat the character. Although this is of limited application in this instance, it can be used at any time within a performance. The frequency with which the action is repeated is called the 'repeat rate'. This rate is determined outside the PMIS environment, and unless the lower level console operations are brought into PMIS the rate cannot be changed in real-time. This was not warranted since the repeat function is of incidental value and somewhat less important in the system as a whole.

A help facility explaining the basic functions of PMIS is available at the console by pressing the ? key. This displays information about the keys and their corresponding functions.

A means of activating a number of solenoids simultaneously was implemented as a chord function. This group performance technique requires a setup stage and data management options. The maximum number of solenoids that can currently be activated is 15. The chord mode is entered using the Tab key and the chord is performed when 'q' is entered. If 'X' is entered, or the maximum number of notes is exceeded, the mode is aborted. This chord function, however, proved to be less valuable than was initially thought. In such an unstable performance environment, it is difficult to manage a group of active solenoids for much longer than their initial impact time.
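The chord function is essentially a small state machine: Tab opens a note buffer, 'q' fires it, and 'X' (or overflow) abandons it. A minimal sketch under those rules, with hypothetical names:

```c
#include <stdbool.h>

#define MAX_CHORD 15

static int  chord_notes[MAX_CHORD];
static int  chord_count = 0;
static bool chord_mode  = false;

/* Tab enters chord mode with an empty note buffer. */
void chord_enter(void) { chord_mode = true; chord_count = 0; }

/* 'X' (or overflowing the buffer) aborts the mode. */
void chord_abort(void) { chord_mode = false; chord_count = 0; }

/* Add a string number to the pending chord; abort on overflow. */
bool chord_add(int string)
{
    if (!chord_mode) return false;
    if (chord_count >= MAX_CHORD) { chord_abort(); return false; }
    chord_notes[chord_count++] = string;
    return true;
}

/* 'q' performs the chord; returns how many solenoids were activated.
   (The real system would drive the solenoid hardware at this point.) */
int chord_perform(void)
{
    int played = chord_count;
    chord_mode  = false;
    chord_count = 0;
    return played;
}
```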

Although less interesting and perhaps less challenging than the extended control of a solenoid, the simple technique of solenoid activation, which produces a distinctive percussive sound, has a greater potential for polyphonic applications. Certainly this technique is identifiable with the traditional keyboard instrument; however, it does have some interesting and unique qualities in this particular application. For example, varying the force and duration of the attack (how long the hammer remains in contact with the string) changes the timbral characteristics of the sound. If the hammer remains in contact with the string when the first reflected waves are returning from the ends of the string, then the timbre can vary considerably (Podlesak and Lee 1986). This technique was used in the first work written for the instrument in 1983. A further unexpected sound results from the release of the hammer after contact; it was discovered when a struck string showed no perceivable resonance. On retracting a solenoid, the string attempts to move back to its quiescent state and transient overshoot results. The pitch of the string is clearly audible but fades away quickly as the energy is dissipated.

Input to PMIS from the keyboard is also echoed on the console screen (Figure 5.6). From the time the system is activated to the final shutdown sequence, the console registers every keystroke, both valid and invalid (invalid input produces an error message).

The most important display shows the status of the solenoids. The screen displays the 24 strings as a chromatic series (using sharps # only. See Figure 5.3 for range) from left to right in a fixed display area. When a particular solenoid is active, the character on the screen changes colour. In this way it is possible to determine the state of the system at a glance.
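Since the display presents the 24 strings as a chromatic series using sharps only, the note name for any string position follows directly from its index. A small illustrative sketch (the function name is an assumption; the series begins on A, as in Figure 5.2):

```c
#include <string.h>

/* The 24 strings form a chromatic series beginning on A, sharps only. */
static const char *note_names[12] = {
    "A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"
};

/* Return the display name for a string index in the range 0-23. */
const char *string_note(int string)
{
    return note_names[string % 12];
}
```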

Directly below the note information area is the status area, which displays information regarding the current state of the system. This information includes the behaviour of input and output channels (console and parallel devices), and help information for the use of PMIS. Below this again is the command line, where the input system commands are displayed. The next line down is the octave display; it shows which half of the 24 strings the left hand keys are currently activating. The two lines below that show information regarding the energy supply to the solenoids. They are labelled Dynamic : and Offset :. The Dynamic line shows the current value of the energy control setting (a number in the range 0 - 31), and the Offset line displays the value by which the energy control setting is changed when the Up/Down arrows are depressed (a number in the range 1 - 5). Underneath these permanent fields, messages occasionally appear, usually when input errors occur. They disappear when the input value becomes valid.


# # # # # # # # # #



{System status area - help, parallel device behaviour, etc }

Command : Z
Octave : in Lower/Upper octave
Dynamic : 19
Offset : 02
****** Command U N K N O W N ******

Fig. 5.6 The screen display format for PMIS. (Italic entries show possible or alternative displays).

Unlike the previous systems, PMIS is operational as soon as it is powered up. No data is read into the system; it comes up with default values, ready for performance.

5.6 An Interactive Improvisation

The final musical example on the cassette is an improvisation using PMIS. Black Moon Assails is the culmination of numerous rehearsals which investigated certain performance techniques and musical ideas. For the recording, the microphones were placed in the space between the solenoid frame and the lower part of the instrument. With this intimate recording approach, the undamped strings, constantly vibrating in sympathy, gave the improvisation a natural reverberant ambience.

The initial musical explorations revealed a more complex sound source than had previously been anticipated. Not until it was possible to directly control the solenoids was the subtlety with which they could be operated fully appreciated.

The four techniques used in the performance to either generate or control the sounds were:

1. Activation of the solenoid: the hammer strikes the string (normal operation).
2. Controlled and sustained contact between the solenoid and the string.
3. Use of the damper to control resonance and as a string mute for the percussive mode.
4. Removal of the hammer from the string (the string vibrates freely when the solenoid is deactivated).

The fundamental and least complex function is simply activating the solenoid and driving the hammer onto the string (technique no. 1). The nature of the sound produced depends upon the force of the solenoid and how long the hammer remains in contact with the string. As discussed earlier, due to the nature of the electronics, discrete simultaneous control over multiple solenoids is not possible; this technique is applied to one solenoid at a time.

Once the hammer is in contact with the string (technique no. 2) there is a variety of pitches and timbres that can be generated by controlling the power sent to the solenoid. The 32 inherent levels of control provided a functional range for operation and their overall effect was usually divided between the two timbral qualities discussed in section 5.3. This technique features prominently in the improvisation.

The damper acts both as a mute on the strings and as a means of changing their resonant character (technique no. 3). In the improvisation this alternate repertoire of sounds forms a contrast with the dominant reverberant character of the instrument. The damper mechanism can be activated quite rapidly by pressing and holding down the space bar.

Technique no. 4 is subtle and characterised by an absence of attack transient. When the pressure that the solenoid is exerting on the string is withdrawn the string springs back with enough energy to produce audible sounds.

5.6.1 The Character of the Interactive Improvisation

Black Moon Assails, as an improvisation, explores each of the above techniques a number of times. It begins with a motive created from the use of the first, second and fourth techniques.

The motive is stated on one string. It begins with the attack transient and a rising pitch, followed soon after by an abrupt change in the energy level causing another distinct change in pitch. The hammer is then removed and the open string vibrates freely. The sequence of the opening motive is then:

  1. Initial attack sound. Pitch rises.
  2. Changes in energy level. Changes in pitch.
  3. String vibrates freely when hammer is removed.

The composite use of these techniques is followed shortly by the second technique again, manifest this time as a rapid series of attacks. This descending secondary motive occurs throughout the performance and signifies a pending change within the work or a partitioning of musical ideas.

When the hammer and the string separate rapidly, the string flexes back and vibrates freely once again (technique no. 4). This also naturally occurs throughout the improvisation, but is featured more prominently towards the middle as an unusual polyphonic accompaniment. While control is limited, the technique has nevertheless an ethereal presence and provides an effective dynamic contrast with the other sounds. The technique is more noticeable when the solenoid is at maximum power and active on the higher wound strings because they initially emit a higher more complex hammer/string sound than is later produced by the strings alone. On the higher strings the solenoid occasionally does not leave the string cleanly and the resulting interference can produce an interesting timbre.

Black Moon Assails, in its experimental capacity, attempts to reveal and exploit the techniques discussed throughout this chapter. As a consequence of this and other performances, two conclusions were drawn. The first and most obvious was that the QWERTY keyboard functioning as a musical instrument controller was not particularly suited to the nature of the sound production. The behaviour of the instrument was too complex and unpredictable. Under the circumstances, the keyboard could offer only limited control from the single key operations. The activation and behaviour modification of the solenoid should have taken place through the one control point and not through two technically different and isolated operations requiring both hands. The QWERTY keyboard also lacks the traditional instrumental aesthetic appeal.

The second conclusion was that although the controlling mechanism was inadequate for managing the variation possible from the instrument, access to its full potential was restricted by the actual controlling electronics. While having functionally autonomous solenoids would make performance much more difficult, it would nevertheless allow greater subtlety and perhaps reduce problems due to operational conflicts. Redesign of the controlling hardware is a major undertaking, and while desirable, would only be likely to take place as part of a more comprehensive system redesign.

The creative possibilities inherent in PMIS have, of course, by no means been fully realised. Yet PMIS has nevertheless fulfilled an important role towards understanding performer/machine/acoustic instrument interactions. The temptation now exists to research and perhaps develop other controllers and pursue modifications which in general might improve or perhaps reveal new approaches to performer/machine interaction on acoustic instruments.

Endnotes to Chapter Five

  1. This figure is an estimation derived from observations of the previous algorithmic system. In practical terms the percentage figure would need to vary, perhaps being decreased where the performance time of the structure is recognised as being short and correspondingly increased where the execution time may be long.
