The principal component of a microprocessor is the Central Processing Unit (CPU) which in turn comprises an Arithmetic Logic Unit (ALU), a Control Logic Unit (CLU) and registers (temporary storage for data or memory addresses during processor execution). The co-operation of these units provides the correct data and instruction mix necessary to carry out a particular function at any given time. Although they are now commonly packaged as a single integrated circuit, the equivalent range of functions was previously derived from many discrete circuits.
The augmentation of the microprocessor with additional hardware devices, RAM (Random Access Memory), ROM (Read Only Memory) and I/O (Input and Output) circuits, constitutes the definition of the microcomputer (Sippl 1977; Hartley 1979). Although the terms microprocessor and microcomputer are often used synonymously, a more valuable definition correctly acknowledges the microprocessor as a sub-component within a microcomputer system.
Microcomputer evolution has been marked by a constant anticipation, on the part of users, of increases in processing power and associated functionality, such as memory addressing and peripheral component management. The increasingly favourable cost/power ratio, expanding operational parameters and ability to connect with other forms of digital equipment have culminated in their placement in ever more demanding and critical situations:
The low unit cost of microprocessors and microcomputers themselves has one considerable advantage in process control and on-line real-time applications generally. Highly reliable systems incorporating multi-processor configurations are now both practically realisable and economically viable. Thus on-board computer systems for aircraft may expect to have a hardware reliability at least one order of magnitude better than that of the aircraft structure itself. (Hartley 1979. p.7)

For reasons similar to those of the microprocessor, microcomputers are of limited value without the potential to communicate with other external and often directly incompatible devices. Interfacing microcomputers to the real world consequently emerged as an important activity in the late 1970s and has grown steadily ever since.
Interfacing is generally defined as the establishment of a common boundary or line of demarcation between two or more pieces of equipment (Housley 1979). Traditionally it has referred to externally located equipment that requires some form of protocol by which useful communication can take place. In this respect, it differs from the addition of a hardware module, for example a self-contained board which conforms to the system's inherent standards. This is regarded as an enhancement and expansion of the system per se.
Interfacing requires both hardware and software. Once the physical connection is made, equipment at either end of the communications link needs to be aware that a communication channel exists, and in what form and when the communication will occur.
The early technical literature (Aumiaux 1979; Cluley 1983; Libes and Garetz 1981; Lipovski 1980) provided fundamental information about interfacing equipment to microcomputer systems, usually at a primitive operational level and directed towards standard digital equipment. Later, the standards for microcomputer interfacing stabilised with the RS-232 serial and Centronics parallel technologies (yet later the higher performance RS-422 and SCSI standards), providing a well defined basis for communication with a range of devices. These communications standards define the physical or lower (signal) level protocol for data transfer.
The suitability of these standards for musical applications must be considered in relation to specific musical criteria and the type of equipment on which the musical activity will take place. These criteria may be established through the answers to such questions as: Is the musical application real-time? Are there alternatives to the standard communications I/O facilities provided by the manufacturer? Are large amounts of data expected to be transferred for significant durations? Are the input and output devices (synthesizer modules, keyboards and other controllers) located some distance apart? If so, how far? It was through these and similar questions that the communications requirements for the piano project were established.
There are some advantages in developing specialised communication protocols. Cost and performance are the principal reasons but unfortunately they may be mutually exclusive and one may be sacrificed for the other. In general, the inherent difficulties in digital data communications have encouraged the development of standards beyond those of a physical nature as described above and more towards the area of application interface software. This is particularly relevant to the universally adopted musical standard (MIDI) which is discussed at a later stage.
Until recently, personal microcomputers have not had the inherent power or affordable peripheral resources, i.e. large storage disks, Digital to Analog Converters (DACs), Analog to Digital Converters (ADCs) and large internal memory, to usurp the role of the more expensive mainframe or mini-computer systems. Even now a basic microcomputer system provides only the foundation of a workable music system. It remains substantially dependent on additional hardware and software packages to achieve better throughput and turnaround time during a composition/rehearsal/performance cycle.
The commercialisation of some synthesis techniques–notably FM–has seen the advent of low cost equipment with reasonable sound quality. Where this technology cannot be integrated with a microcomputer system, it requires interfacing using a standard protocol. Simplified interfacing between digital equipment has become economically expedient. Commercial activities have further reduced the difficulty of this process to 'Black Box' simplicity, where the computer system controls the remote hardware through a standard interface and common protocol. MIDI - Musical Instrument Digital Interface (International MIDI Association 1983) provides a simple industry standard for commercial stand-alone hardware, thus allowing musicians to connect MIDI compatible equipment quickly and efficiently. Considerable discussion has resulted on the subject of MIDI (Droman 1985; Loy 1985; Moore 1987; Yavelow 1986) and it is now established across a wide base of users.
MIDI, however, does not meet all computer music requirements in the area of the digital transference of a musical performance (as opposed to digital audio sound) and inevitably it will undergo review to accommodate (where possible) demands from the computer music community. If MIDI is fundamentally unsuited to certain musical activities (Moore 1987. pp.256-263) new protocol standards will emerge to meet those requirements.
Nancarrow's principal interest in polytempo and polyrhythmic relations demanded that he should critically examine the technical operations and performance criteria of the player piano. However, this long term investigation neither dominated nor usurped his fundamental musical convictions. His modifications thus appear musically and aesthetically justified.
Although these physical modifications changed the immediate character of the instrument, the most profound change occurred through his music. The modifications alone were not significant enough to change the instrument's image or expressive power; this could only be achieved through composition. In the case of the player piano, change and evolution were slow because there was little opportunity for experimentation and empirical research. To further qualify this, attention must be directed to the fact that the instrument was intended not primarily for serious musical activities but for less demanding domestic entertainment. By contrast, the evolution of traditional instruments has tended to be more vigorous and exacting because of the active and ongoing participation by performers.
A decade and a half before Nancarrow began his work, Stravinsky perceived one of the instrument's inherent qualities and perhaps in this statement from Current Opinion No. 71 of 1925, he quietly entices composers to consider the instrument in a new light:
There is a new polyphonic truth in the player piano. There are new possibilities. It is something more. It is not the same thing as the piano. The player piano resembles the piano, but it also resembles the orchestra. It shares the soul of the automobile. Besides the piano it is practical. It has its utility. Men will write for it. But it will create new matter for itself... (Joseph 1983. p.93)

Nancarrow's music possesses a peculiar instrumental quality. Such mechanical properties as consistent dynamics, precise staccato attacks and uniform timbre, are meticulously and deliberately evident on the surface of his compositions. The effect is intensely mechanistic and it runs contrary to the evolutionary spirit of the player piano manufacturers.
Throughout the history of the player piano, manufacturers attempted to moderate or obscure the obvious mechanical rendition of the music. Their efforts, ranging from colourful and evocative musical names to elaborate and subtle recording and playback techniques, never fully disguised the absence of direct human participation. In Nancarrow's studies, the temporal relations between the canonic voices are critically dependent on exactly those mechanical qualities. They are prerequisites, necessary for the correct interpretation and presentation of his ideas in music.
Although the player piano studies have stimulated interest in tempo, some of its most sublime manifestations remain elusive to most contemporary composers. The difficulty is in obtaining accurate performances or interpretations which clearly articulate the composer's intentions. The perception of complex timing relationships existing between musical entities was, at the time of the development of the microcomputer controlled piano system, believed to be a function to which such technology could effectively contribute. A closer look at the internal workings of the microprocessor reveals the foundation for this belief. The operation of the microprocessor is dependent on a master clock which oscillates several million times a second. At a higher operational level, more meaningful activities, such as instruction execution and data transfer, are timed in tens and hundreds of microseconds. This resolution or quantization of time offers the potential for a comparable degree of control over tempo to that found in Nancarrow's studies. The difference between the transport speed of the paper roll and the oscillating crystal is in the abstraction and flexibility of the relevant temporal unit.
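The contrast in timing grain can be made concrete with some simple arithmetic. The sketch below is illustrative only: the 4 MHz clock rate and 100 microsecond event granularity are assumed figures for the purpose of the example, not measurements of the actual system.

```python
# Illustrative arithmetic only: the 4 MHz master clock and 100 microsecond
# event granularity below are assumed figures, not measurements of this system.

def distinct_onsets_per_beat(tempo_bpm, resolution_s):
    """How many distinct onset times fit within one beat at a given time grain."""
    beat_duration = 60.0 / tempo_bpm
    return round(beat_duration / resolution_s)

clock_period = 1.0 / 4_000_000        # 0.25 microseconds per clock cycle

print(distinct_onsets_per_beat(120, 100e-6))   # 5000 onsets per beat
print(distinct_onsets_per_beat(120, 1 / 32))   # 16 at 1/32-second frames
```

Even at an assumed software granularity far coarser than the clock itself, the available grid of onset times is orders of magnitude finer than that of a mechanically transported paper roll.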
In theory the computer's capacity to respond to and manipulate complex groups of events, within extremely short durations, should at least compare favourably with events punched on the piano roll. The player piano's analogue representation of time has, of course, the advantage in that the quantization of time is almost at the composer's discretion, i.e. where the hole will be punched. Certainly, the player piano has no added overhead in processing event requests since they are effectively passed directly to each hammer. However, since timing is an integral part of the computer's operation, an event request is prioritized along with all the other general events and consequently could incur a delay. The problem of event scheduling is central to the success of any real-time computer music system. In HMSL (Hierarchical Music Specification Language–refer 4.1.1)(Polansky, Burk, Marsanyi and Hays 1987a; Polansky, Rosenboom and Burk 1987b) two scheduling approaches were adopted: durational and epochal. In the first, event timings are regarded less rigorously. If an event time is missed, it is possible to have it performed shortly after. With epochal scheduling, if the event time is missed the event is disregarded completely or priorities are changed in the scheduling queue. These two approaches allow for a more flexible treatment of event timing.
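The distinction between the two policies can be sketched in outline. This is a minimal illustration of the idea described above, not a rendering of HMSL's actual scheduler; the event list, simulated clock readings and tolerance value are all invented.

```python
# Toy contrast between "durational" and "epochal" event scheduling.
# Not HMSL's implementation: events, clock ticks and tolerance are invented.

def run_schedule(events, clock_ticks, policy, tolerance=0.010):
    """events: (due_time, name) pairs; clock_ticks: times the scheduler runs."""
    queue = sorted(events)                 # earliest due event first
    played = []
    for now in clock_ticks:
        while queue and queue[0][0] <= now:
            due, name = queue.pop(0)
            if policy == "durational":
                played.append(name)        # perform late events anyway
            elif now - due <= tolerance:   # epochal: drop events too far past due
                played.append(name)
    return played

events = [(0.00, "C4"), (0.05, "E4"), (0.10, "G4")]
ticks  = [0.00, 0.12]    # the scheduler was busy between 0.00 and 0.12 seconds

print(run_schedule(events, ticks, "durational"))  # ['C4', 'E4', 'G4']
print(run_schedule(events, ticks, "epochal"))     # ['C4']
```

Under the durational policy the two delayed notes still sound, slightly late; under the epochal policy they are discarded because their moment has passed.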
As well as the potential for sophisticated timing of musical events the computer can simultaneously manage performances on several instruments in a variety of contexts. Nancarrow expressed interest (Amirkhanian 1977; Nancarrow 1987) in the possibilities of instrumental simultaneity using the computer because he had attempted complex timing relationships between his instruments in his later works, Studies #40 and #41. The major difficulty was activating the autonomous player pianos at precisely the right moment. This is not easy, particularly when the moment for switching one of the instruments on has to be anticipated against a complex performance taking place in the other.
Apart from the obvious mechanical advantages that the current technology offers, further motivation to develop the microcomputer controlled piano system came from the belief that the technology could provide a greater degree of compositional flexibility and performance control than had previously been possible. Although the digital piano is essentially a technological upgrade on the earlier pneumatic instruments, it was anticipated that its unique contribution would be manifest in two areas:
The first interface between the microcomputer and the piano made use of existing Pianocorder technology and communications protocol (discussed in part in 2.3.1). This proved to be inefficient and too primitive for the realisation of music anywhere near as complex as that found in Nancarrow's studies. This was clearly evident when measuring the amount of useful information that could be sent to the instrument in one second. The resolution between consecutive notes was only one thirty-second of a second. Nevertheless, a series of exploratory works was produced reflecting areas where the system could be improved.
The composition and performance processes discussed in this thesis use the second generation system which came into existence around February 1984. In this system certain fundamental problems, clearly evident in the previous system, were resolved.
Work on the new system focused on three areas:
Such an encoding procedure is extremely inefficient for data storage in either the microcomputer's memory or on disk. There was simply an enormous amount of redundant data. This is evident in Example 2.1 with the lines of identical hexadecimal numbers representing no change to the performance.
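One standard remedy for this kind of redundancy is run-length encoding, in which runs of identical frames collapse into a single frame and a repeat count. The sketch below uses invented frame values; it illustrates the principle rather than the Pianocorder's actual data format.

```python
# Run-length encoding sketch: collapses repeated frames into (frame, count)
# pairs. The frame values are invented for illustration only.

def run_length_encode(frames):
    """Collapse consecutive identical frames into (frame, count) pairs."""
    encoded = []
    for frame in frames:
        if encoded and encoded[-1][0] == frame:
            encoded[-1] = (frame, encoded[-1][1] + 1)   # extend the current run
        else:
            encoded.append((frame, 1))                  # start a new run
    return encoded

# 40 identical "no change" frames surrounding a single note-on frame:
frames = ["00 00 00"] * 30 + ["40 00 00"] + ["00 00 00"] * 10
print(run_length_encode(frames))
# [('00 00 00', 30), ('40 00 00', 1), ('00 00 00', 10)]
```

Forty-one stored frames reduce to three pairs, which suggests how much of the raw encoding carried no performance information at all.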
A dedicated protocol was designed and the interface hardware was built based on earlier experiences and an understanding of current and future requirements. It was also designed in ignorance of any existing or developing general-purpose musical protocols, i.e. MIDI (MIDI is not used in the transfer of data from the microcomputer system to the piano system). The piano protocol is perhaps a little more suited to the intended application than MIDI but it lacks MIDI's broader functionality. The specific interface codes and instrument functions are detailed in appendix A.
The increased speed of data transfer from the computer to the instrument was achieved through the use of a parallel rather than serial communication technology. In effect it means transferring 8 bits simultaneously rather than sequentially transmitting a start bit, 8 data bits and finally the stop bit, making a total of ten bits transmitted. Although more expensive in physical resources (more signal lines), the alternative would have resulted in slower transfer rates, increased data volume and possibly further timing problems for real-time performance. The serial communication facility on the standard microcomputer around 1983 was RS-232, which technically has an upper limit of 19.2K bits per second depending on the manufacturer's implementation. This was considerably slower than the standard parallel port rate which might achieve 50K bytes per second or more from memory to port transfers.
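The comparison can be expressed as a short calculation using the figures cited above.

```python
# Worked figures from the text: RS-232 at 19.2K bits per second frames each
# byte as ten bits (start + 8 data + stop); the parallel port moves a whole
# byte per transfer and was cited at around 50K bytes per second.

serial_bits_per_second = 19_200
bits_per_byte_framed = 10                  # 1 start + 8 data + 1 stop
serial_bytes_per_second = serial_bits_per_second // bits_per_byte_framed
print(serial_bytes_per_second)             # 1920 bytes per second

parallel_bytes_per_second = 50_000
print(parallel_bytes_per_second / serial_bytes_per_second)   # roughly 26x faster
```

Even at its nominal maximum, the serial link delivers under two thousand bytes per second, which explains the choice of the parallel alternative for a real-time performance system.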
One of the important characteristics of the parallel interface is its hardware handshaking for end-to-end acknowledgment of transfer activities. Handshaking is the term used for the exchange of predetermined signals when a connection is established between two devices (Sippl 1981. pp.167-68). In the case of the hardware approach, separate signal lines (wires) are used to control data flow. When using a parallel protocol, data are sent only when the receiving device requests it, thus guaranteeing no data overrun, data loss or failure during performance. There are two alternatives to this method. The first is data driven control where independent data (start and stop bits) are used to request or suspend data transfer activities. There is no additional hardware necessary but more data is transferred and the software must handle the control codes. The second is no flow control, leaving the software responsible for capturing all data sent irrespective of loss or error. Many of the microcomputers of that time had Centronics parallel interfaces which were used principally for data output to a printer. This was the only other alternative with a potentially high output rate that could be used with the pianos.
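The flow-control guarantee can be modelled in outline. The following is a toy software model of the hardware handshake, with invented names and behaviour: the sender strobes a byte onto the data lines only when the receiver signals ready, so nothing is lost however slowly the receiver works.

```python
# Toy model of a hardware handshake (invented names, not the actual circuit):
# the sender waits on a "ready" line before strobing each byte across.

class SlowReceiver:
    """Needs one extra poll cycle to process each byte before accepting more."""
    def __init__(self):
        self.received = []
        self.busy = False

    def ready(self):
        if self.busy:            # still processing the previous byte
            self.busy = False
            return False
        return True

    def strobe(self, byte):      # sender asserts strobe with data on the lines
        self.received.append(byte)
        self.busy = True

def send(data, receiver):
    for byte in data:
        while not receiver.ready():   # hardware flow control: wait, never drop
            pass
        receiver.strobe(byte)

r = SlowReceiver()
send(b"NOTE", r)
print(bytes(r.received))   # b'NOTE' -- nothing lost despite the slow receiver
```

Under data-driven or absent flow control, the same slow receiver would either require extra control bytes in the stream or simply lose data mid-performance.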
Two interface units were produced. One was considered a master interface while the other could be either a master or a slave depending on how the board was configured. The concept of master and slave interfaces arose after consideration of multiple instrument arrangements and their placement in a performance situation.
A significant problem with parallel data transmission, using the Centronics standard, is that the cable length should not exceed 5 metres. In order to have the microcomputer system located to one side during a performance, one cable would need to have been at least twice as long (if separate cables to each instrument were used). Here it is implied that the location of the instruments forms a stereo sound image to the audience. A practical solution to this problem was to have the second instrument daisy-chained off the first instrument's interface hardware (multi-drop line approach). In a multiple instrument configuration, a master interface is a node which permits the data to pass through to the next interface (see Figure 2.1). With a slave interface the data does not pass through and it is considered a terminating interface (there is only one). In effect data are always present on the single communications link.
Each interface can be enabled or disabled, thereby determining which instrument will be selected for performance. With data constantly present on the link, an instrument will only respond if it has received a unique code enabling it to act on the following data. The music therefore can be intricately orchestrated for the instruments. The disadvantage of this approach is that more data must be sent, and intense switching activity could become a data traffic problem.
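The enable/disable scheme can be sketched as follows. The select code and message bytes here are invented for illustration; the real interface codes are those detailed in appendix A.

```python
# Sketch of multi-drop addressing: every interface sees the whole data stream
# but acts only while enabled by its own select code. The code value 0xF0 and
# the message bytes are invented; appendix A defines the actual codes.

SELECT = 0xF0   # hypothetical prefix: the byte that follows names an instrument

def play_stream(stream, instrument_id):
    """Return the data bytes this instrument would act on."""
    enabled, performed, i = False, [], 0
    while i < len(stream):
        byte = stream[i]
        if byte == SELECT:
            enabled = (stream[i + 1] == instrument_id)   # enable or disable
            i += 2
            continue
        if enabled:
            performed.append(byte)
        i += 1
    return performed

stream = [SELECT, 1, 60, 64, SELECT, 2, 67, SELECT, 1, 72]
print(play_stream(stream, 1))   # [60, 64, 72]
print(play_stream(stream, 2))   # [67]
```

Both instruments receive the identical stream; the select codes alone determine the orchestration, at the cost of the extra switching bytes noted above.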
While the ability to connect and operate a number of instruments simultaneously constituted a technical milestone, a more significant artistic prospect could be found in the potential for sophisticated and efficient control. The key to this lay in the protocol. Through a careful use of the interface codes extended and diverse performance practices can be achieved. What was therefore needed to complement this language of control was an equally responsive and subtle instrument.
Design and construction of a second instrument was influenced by Nancarrow's instruments. What particular qualities the instrument should possess were extracted empirically from interviews, articles on his work, and the recordings of the Studies.
The sound of Nancarrow's instruments, with their harder hammers and lack of ambient resonance, is the result of more than just a personal aesthetic. In the dense clusters of notes that emanate from his instruments, Nancarrow seeks to give each note a clear but brief identity. Thus by hardening the hammers, he increased the presence of upper harmonics sufficiently to emphasise those extremely rapid sequential attacks. These momentary note articulations are essential to the effect and comprehension of his complex polytemporal and polyrhythmic works.
Underlying the fundamental differences between these automatic piano applications is the method of control. In this digital piano system, the principal design criterion was that it should function solely under microcomputer control. The dramatic manifestation of this is that the new instrument has no keyboard. More importantly the instrument incorporates a radical change in action design and function.
During the prototyping stage a number of action designs were considered. Ideally, their operation would conform to that of the normal piano with the hammer and damper operation synchronised. But its response characteristics would be closely allied to the performance of the microcomputer. Without significantly changing the basic solenoid arrangement, i.e. one solenoid per string, designs that utilised simultaneous damper control and hammer operation powered from one solenoid could not be accommodated. The problem with this approach was that it involved too many moving parts and consequently became difficult to regulate and operate at high repetition rates (this being one of the design criteria). Introducing two solenoids per string, one for the hammer and one for the damper, was technically too difficult to implement at the time, although it was obviously the best solution.
A workable solution lay in mechanical simplicity, with the only moving parts the solenoid cores themselves. The compromise in this arrangement was fairly substantial. Plate 2.2 shows the action installed in the 'modified' instrument and Figure 2.2 represents a cross section of the action, showing the components and principle of operation. There were to be no dampers on individual strings. This was in complete contrast to Nancarrow's player pianos which are fitted with dampers on all strings. However, a means of stemming the inevitable unchecked resonance was implemented through the use of large damper bars for the two main ranks of strings. Two such damping devices were installed, thus covering all the traditionally damped notes. An added benefit of this damping strategy was independent control and through this the instrument gained an extra timbral feature derived from the possibility of striking damped notes.
The significant limitations of this microcomputer were in memory addressing, memory expansion and processing speed. Retrieving data from disk during a performance was an unacceptable method for accessing performance data. There was usually, at some point, a momentary halt in the performance while the disk file was accessed for more data. Furthermore, there was no specific upgrade path for the system in order to enhance its power or extend its capabilities.
In contrast, the microcomputer used to run the system described in chapter 5 was an Amiga. The full potential of this system has yet to be realised but it is at least an order of magnitude more powerful than the previous system. Since it is not required to synthesize any sounds its considerable processing power could be directed towards real-time composition and performance applications. It can also be easily and relatively cheaply expanded.
The principal mechanism used by these composers has been the Marantz Pianocorder. In the early 1980s it was the only system available which was reasonably easy to interface with a microcomputer system. Currently, however, technology has resurrected the player piano spirit through new and sophisticated configurations. In light of this, it is interesting to observe that the problems of performance and music representation, inadequately resolved by manufacturers during the instrument's heyday, are now largely overcome or can be. The public's interest is, however, no longer focused on the medium to the degree that it once was and few contemporary composers have emerged with any significant works indicating further creative potential.
The thing that interests me is the sonorities that you can get from several pianos together and these pattern things you can do if you delay it and things like that. It should be nice for pattern music, among other kinds of music. (Siddall 1982. p.14)

Later he states:
No second pianist could have played all those notes, because after a while you would need at least eight hands. It's accumulating, so that everything I play stays in, except I can wipe it out a little bit if I play over it a certain way. There are all kinds of things you could do that two pianos couldn't do. (Siddall 1982. p.14)

Teitelbaum's approach centres around a microcomputer interface to the Pianocorder 'Vorsetzer' system. This system is not installed within the piano itself but makes contact with the instrument only at the keys. The advantage of this is that it can be used on any type of piano, in particular, grand pianos. Teitelbaum uses multiple pianos in his performances, as he explains:
As presently configured, it interfaces three Marantz Pianocorder-equipped Grand pianos and three micro-computers to create both a multi-instrumental composing and real-time performance system: Musical material played on one keyboard is picked up by switches under the keys and instantly read into computer memory where it can be processed, stored and/or simultaneously output for playback by two Marantz Pianocorder Vorsetzers attached to the additional grands. (Teitelbaum 1984. p.213)

Teitelbaum's record Blends and the Digital Piano was released in 1986. It presents Teitelbaum's early work with the digital piano system and his musical interest in integrating improvisation and structured composition (Teitelbaum 1986; Byrd 1986). This is made possible through the system's ability to store, retrieve, manipulate and output material played in real-time by the performer.
In 1983, word came of another composer, Peter Zinovieff (Zinovieff's company, EMS, manufactured the Synthi 100 among others in the 1970s), working with the Pianocorder (Zinovieff 1983). He has installed the Pianocorder in a variety of pianos (Ernst Kapps grand and Yamaha upright).
While Zinovieff mentions a number of ways of working with the Pianocorder system interfaced to a microcomputer, perhaps the most unusual method he has experimented with involved the use of a 64 semitone filter bank to give rough transcriptions of real world sounds input in real-time to the piano. At one performance, the computer analysed sounds from a radio and then played a translation of the analysis on the piano.
There has been mention of other composers and musicians working with the Pianocorder or at least using microcomputer technology with the player piano. Unfortunately, accurate details or reports of their work have not surfaced in any of the widely circulated international music journals or trade magazines.
This specialist market has survived with the changes in technology that appear to bring the automatic performance of the piano closer to the human model. The latest trend has been to integrate micro-electronic technology with the mechanical instrument. At one end of the current spectrum can be found the most technically advanced and expensive system yet realised, and at the other, is a renewed commercial interest in a relatively less expensive system which can be connected to the current MIDI family of digital equipment. It seems that the piano is destined to be an experimental vehicle for all the latest technologies.
The system most widely used by the experimental composers is, however, the Marantz Pianocorder. In the mid-1970s, Joseph Tushinsky's fascination and love of player piano music led him to consider whether a digital tape recording system could replace the paper rolls used by the pneumatic reproducing pianos. As president of the Marantz company he was in a position to pursue development and construction of such a digital piano system with an ultimate view to commercial release. Tushinsky's vast collection of piano rolls was made available for transcription (where copyright permitted) into the digital format. These formed a library of digital recordings of piano rolls for the Pianocorder. The mechanism appeared in a variety of models, record/playback and playback only, as well as the Vorsetzer system and a complete piano system with the mechanism already installed. The Pianocorder licence was eventually sold to Yamaha (Moog 1988).
The most impressive commercial digital piano system is based on the 9ft 6-in Imperial Bösendorfer Grand Piano with a computer and player action built in (Roads 1986b). Developed by Kimball International of Indiana and Bösendorfer of Vienna, it functions as a record and playback system of considerable subtlety.
The cost of the system from Kimball International (approximately US$60,000) puts it beyond the means of most individuals. Largely confined therefore to institutions, it will be interesting to learn whether music of relative merit is produced. In a comparison with Nancarrow's system it is ten to twenty times as expensive and several orders of magnitude more sophisticated.
This digital piano system has been used in a conventional manner for a performance of Beethoven's Grosse Fugue op. 134. This four hand arrangement was performed by Stephan Möller in Hamburg (Freizeit Journal No.1. 1988 p.2. see appendix C).
The New Scientist magazine (see appendix C) reported that Yamaha were filing patent applications around Europe for their version of a digital player piano. The article states:
The new Yamaha digital piano has a mechanical action and circuitry which processes the signals from a floppy disc. (New Scientist 1988. p.45)

This instrument is intended to be integrated into a digital environment through the use of MIDI. In order to synchronise the mechanical instrument with electronic instruments a delay has been introduced. The article explains:
The mechanical reaction of a piano is slower, so a piano and other electronic instruments are out of time when triggered by the same pulses... The signals match the MIDI (musical instrument digital interface) industry standard, but they run through a digital memory. This delays the MIDI output signal by around 500 milliseconds–long enough to bring the mechanical piano and electronic instruments connected to it into step. (New Scientist 1988. p.45)

Another report of this instrument (Bulletin/Newsweek 1988. p.100. see appendix C) indicated that it is currently available in Japan as the Yamaha MX-200R. It is now generally known as the 'Disklavier' and its functionality and operation have been discussed more widely in recent times (Moog 1988; Phillips 1988).
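The delay-compensation idea reported in the article can be sketched in outline. The 500 millisecond figure is the one quoted; the event representation and routing names are invented for the sketch and do not describe Yamaha's actual circuitry.

```python
# Sketch of the reported delay compensation: the electronic (MIDI) path is
# held back by the mechanical action's latency so both sound together.
# The 0.5 s figure is quoted in the article; the event format is invented.

MECHANICAL_LATENCY = 0.500   # seconds, as reported for the Yamaha instrument

def align(events):
    """events: (time, destination, message). Delay only the electronic path."""
    aligned = []
    for t, dest, msg in events:
        if dest == "electronic":
            aligned.append((t + MECHANICAL_LATENCY, dest, msg))
        else:
            aligned.append((t, dest, msg))   # piano is triggered immediately
    return sorted(aligned)

events = [(0.0, "piano", "note-on C4"), (0.0, "electronic", "note-on C4")]
print(align(events))
# [(0.0, 'piano', 'note-on C4'), (0.5, 'electronic', 'note-on C4')]
```

The piano receives its trigger first and spends the half second moving its action; the delayed electronic event then sounds at the moment the hammer strikes.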
Yet as a medium for automated music composition and performance, the mechanical piano cannot compete with the new technologies which offer far greater incentives and rewards. Even without considering the physical advantages of such contemporary music technology, the player piano is not part of this Zeitgeist and therefore stands at odds with the ongoing artistic processes that define current styles and epochs. In all likelihood the player piano will cease to evolve (as it has done sporadically throughout this century) as the music technology industry produces a greater diversity of synthesis equipment and, correspondingly, new means of making music.