Chapter 4

The Interface Composed

At this time in the evolution of western music, composition and performance are deviating dramatically from traditional practice. This deviation can be seen in a radically new type of production context, one identifiable through changes which extend from overt differences in instruments and music dissemination to the deepest recesses of composition and performance. Through the myriad forms of a new technology of music, we are beginning to see a profound change in music and its cultural reception.

This final chapter seeks to illuminate a small but important section of this world and provide a central and unique theme for this essay on the interfaces of music technology. By describing the imprint and inflections this technology causes on the surface of traditional compositional practice and thought, an idea of its scope and impact can be readily gained. Clearly, there are forces at work here which are foreign to the traditional realm of music but are becoming increasingly part of a language of sound production. Consideration of such issues will be undertaken through a study of the intrinsic qualities and implications of one facet, one system of many, in computer music. This computer music system is Cmix, one with which I am currently most familiar.

The early parts of the chapter will elucidate the components of this domain of music technology without detailed reference to Cmix, while the final section presents a brief case study of my idiosyncratic approaches to its compositional application.

The focus throughout is essentially towards the internals of the computer music experience and only occasionally will it touch on cultural conditions, later twentieth century aesthetic issues or the influence of the other arts. Computer music is already such a complex and diverse activity that discussions among computer music composers often reveal practices so divergent as to be mutually incomprehensible.


Initially, the ability to manipulate digitized signals or generate sound within a single computer system appears to facilitate composition and musical production. While this is not an inaccurate observation, this impression, nevertheless, tends to obscure many issues that arise precisely because the compositional forum has been encapsulated within a single context, and in the process, radically transformed.

One significant problem in computer music is the emphasis on the logistics of musical resources, as traditionally understood, rather than on developing an appreciation of what amounts to a new set of resources and their implications. The resources once necessary to produce all music are no longer distributed across many environments, nor is their associated expertise the exclusive intellectual property of specialists. But for the composer of computer music, the euphoria over what simply amounts to a kind of convenience is short-lived. After the initial exhilaration, the reality behind issues such as material coordination, management, access, and sustaining creative enthusiasm and direction within such an environment infiltrates the experience with its own importance and urgency. Frequently evident in the approach of newcomers to computer music is a reluctance, or an unawareness of the necessity, to rethink their attitude towards the nature and use of musical resources as transformed by the computer. Any change or re-appraisal of compositional practice comes with experience. There needs to be a period of context familiarization, a time when the composer gains insight into the differences between historical and contemporary practices. The concentration of musical production within one system, within one technological world, ultimately has, on a daily creative basis, little to do with facility, and more to do with the pursuit of a creative experience that lies significantly outside the domain of traditional music practice.

Computer systems provide and encourage a self-contained and personal environment in which a diversity of music can be created and manipulated, but the vacuum left by the absence of the traditional musical context creates a need for a different type of intellectual stimulation and curiosity, something sympathetic and complementary to the demands of the computer and associated technologies. Computer music composition entails a constant defining, adjusting and evolving within this virtual space. Beyond the mechanical dimension, there are no historical positions that comfortably define the role or world that the computer music composer may adopt and inhabit. This has to be determined through personal experience, after deciding what sort of music one intends to make given this technology. Furthermore, the composer now wrestles with issues once the exclusive responsibility of the performer. The composer cannot rely on skills developed from an early age and honed over a lifetime. There is no historical background nor meaningful repertoire. The composer must work out for herself methods of organization, management, process and aesthetic for virtually every composition.

Typically, the novice computer music composer is attracted to existing systems and methods (software, hardware and musical) which offer guidance in the direction of style and aesthetics. In due course, such attachments may become creatively restrictive depending on whether the technology matures with the user or demands modification or alternatives. What has effectively taken place is an expansion of the composer's engagement in the music production continuum.

There is a certain solitariness that engulfs the activity of computer music composition. What inspires this solitariness is a unique dialogue that is difficult to follow for anyone not actually part of the interaction. This dialogue demands the construction of an intimate context which is the technological interface of the composition. This may also be true, in part, for traditional composition; that historical activity, however, has many moments that require external comment and consultation. A significant difference is in the logic of the environments and one's ability to intuit certain activities. The computer tends to impose a formal and highly rational protocol as part of its code of interaction. Composers coming from traditional music generally find this environment uncomfortable and unnatural, seeing it as inflexible and demanding in situations remote from their musical concerns.

The process of computer composition frequently becomes so convoluted that attempting to explain the important points or even peruse the evidence of activity can thwart even the most persistent enthusiast. Traditionally, musicians have pored over a score, commenting on matters that attract their attention, confident in the knowledge that their fellow musicians are generally capable of appreciating the same points about the nature of the music as they themselves. This is generally not the case in computer music. From the outset, there is little in the description of the process that reveals much about the music itself. Furthermore, it is quite likely that an observer won't be in a position to appreciate the organic sounds in question, the way a traditional musician identifies with the sounds of common practice instruments.

Computer music is almost universally understood as currently having the greatest creative potential. It is encapsulated within a very convenient working environment, but one in which the musical world has only partial sovereignty. An impression of unbounded creative potential is associated with the dynamic of the underlying technology, a technology constantly being improved and conceptually enhanced. Rarely is the intellectual dynamic associated with technology inspired by music; it is now more likely to be the technology that inspires music. This is exemplified by the current influential trend in object-oriented languages, which are understood to facilitate the construction of more complex applications, including those in computer music. Consequently, musical experimentation and enquiry are being explicitly encouraged from beyond their boundaries, from fields that have no traditional links with music. This has led to the fear of music production being inexorably detached from traditional musical thinking. Given the energy and enthusiasm pervading computer music, I think this detachment is inevitable and necessary for accessing a wealth of sonic potential as yet barely tapped. Eventually, the complex world surrounding computer music will, perhaps, be thought of as an important dimension of future musical experiences.

Within a computer music system, interfaces reach their most complex and bewildering forms. Here are found, in concept and practice, a radical and sustained departure from traditional notions of musical creation, interpretation and participation. While music was once a complex communal activity, computer music has collapsed the context, so that now one person is the instrument builder, composer, performer, conductor and audience.

The purpose of this chapter is to show how the more esoteric forms of contemporary technology have expanded the notion of the musical interface, and how this functional approach to the musical interface is configured in relation to the traditional concept of musical engagement. The most significant changes permit it to be used as a moveable window on an entirely different concept of sound and musical expression.

The principal dilemma facing the computer music composer is the engagement of a creative environment. This environment is not static, and the development and use of software that harnesses both the power and concept of the computer to creative visions is in itself a kind of composition of the interface.


The earlier references to musical instruments within this essay assumed a traditional paradigm. Briefly, this paradigm entails some type or token of a physical sound producer, controlled by a skilled player, and operating in real-time. In recent years this paradigm has become considerably more sophisticated through the addition of technology, and enthusiasm for it shows no sign of abating. Robert Rowe describes this trend:

Instrument paradigm systems are concerned with constructing an extended musical instrument: performance gestures from a human player are analyzed by the computer and guide an elaborated output exceeding normal instrumental response. Imagining such a system being played by a single performer, the musical result would be thought of as a solo.

Later Rowe quotes Tod Machover describing the Hyperinstrument project: "Our approach emphasizes the concept of 'instrument', and pays close attention to the learnability, perfectibility, and repeatability of redefined playing technique, as well as the conceptual simplicity of performing models in an attempt to optimize the learning curve for professional musicians."

In reference to the Jaffe/Schloss Wildlife duo, Rowe observes:

Wildlife is a performance-driven interactive system. A hybrid generation method including transformative and generative algorithms is ingeniously distributed between the two human players and two computers. The system seems close to the instrument paradigm: the computer programs become an elaborate instrument played jointly by the violin and percussion.

While this contemporary technological position reflects the traditional paradigmatic term instrument in evolution or flux, it should be pointed out that from an ethnomusicological perspective, the term instrument has always been a challenge to classify. Margaret Kartomi explains:

Not all cultures have classifications of instruments. If we define the term musical instrument in the usual (though admittedly limited) way as "implements used to produce music, especially as distinguished from the human voice" (Webster's Third New International Dictionary 1976 ed.), where music is defined according to its specific meaning in the relevant cultural context, then a few cultures may be isolated as having no musical instruments at all. These cultures have the materials and technological skill to make instruments but for various reasons choose not to.

Defining musical instruments is neither simple nor straightforward, and in all likelihood will remain problematic for anyone wishing to trace the evolution of instrumental logic. The addition of the category Computer Music Instrument to any classification system will inevitably enrich and complicate the taxonomy. This has come about because there is now an alternative world for instruments, one in which they exist without physical form or substance. They are conceptual, existing only as specifications or descriptions.

From the external involvement of computer technology in the production of music and augmentation of musical performance, as found in Jaffe and Rowe, it is a small step to the inner musical world of the Virtual Instrument. Indeed, this step appears a natural progression in a musical context destined to exploit technology. In this self-contained milieu, the composer sets about, as a matter of normal procedure, creating instruments that function according to degrees of private purpose and aesthetic, unthought of in the acoustic age.

It is difficult to conceive of a virtual instrument from past physical models. In this world, free from such constraints, physical limitations are generally observed at the conversion stage to our auditory world, digital signal to sound, most noticeably in the transduction of electrical energy to acoustic energy. These are post-factum to the process of composition and while important, are generally accepted as part of the presentation of a composition not always directly under the composer's control.

For the remainder of this essay, the type of computer music instruments I will be concerned with will be termed non-real-time and exist as software. With this in mind, it becomes possible to investigate the musical instrument as a concept.

Substance, Description and Representation

Our first thoughts about musical instruments concern their sound, but quickly we begin to think of context and physical properties. The sound is usually an abstraction of a known body of music, but it may also be a conflation of experiences rather than a unique instance. We can evoke the sound of the flute without being conscious of specific flute works. We also keep in mind the physical being of the instrument because the visual experience has been a component of our appreciation of music. An instrument traditionally has a repertoire which connects it to history, context and culture. There is so much violin music, and our experiences of violin performance are so rich and complex, that mentioning the word violin invokes a memory of "violiness" which may be either auditory, visual or affective.

A software instrument is all but invisible. It is defined not by its appearance but by its fundamental effect, either for the sound it produces as synthesis or as a result of processing an existing sound, and by its application. How an instrument is used is critical to its definition. Stretching strings across a plank of wood and a hollow box doesn't quite make a guitar or violin, although the principles employed in their use may be the same. We understand that there is further sophistication to traditional instruments and ultimately become concerned with their nature and history.

Similarly, a software instrument may have a distinct quality whenever used, yet for it to survive, and indeed flourish, this superficial attribute must be capable of being extended beyond its primary effect. The most striking example of this is the development around FM synthesis and its proliferation in mainstream music. This technique was well suited to development in the market because it was inexpensive and it offered a vast number of sound possibilities without physical regard for how they should be accessed. Consequently, the technique has been re-packaged at regular intervals with new products offering innovative approaches to accessing the sounds by presenting various and alternative physical interfaces.

That a software instrument can be viewed as a description of sound production indicates in a dramatic manner how far it is removed from historical definitions of instrument. The described instrument exists in a vast number of possible states. These in themselves are instruments, or instances of an instrument subclass, which may yield significantly different results with a simple software modification. Although the function of a software instrument is subject to considerable revision and alteration, composers can still initially conceptualize a sound generator or processor that may be conducive to their creative requirements. In software, the generator or processor has a certain ephemerality, and only through constant and specific use can its nature be appreciated. Software instruments have an unusual, and possibly closer, affinity to a computer music composition than traditional instruments have to their music because the software instrument can be developed exclusively for a composition. Thus, it is primarily aligned with a compositional rather than a performance experience. As software, these instruments tend to function out of real-time, and thereby present the computer music composer with a unique composing challenge which is as problematic as it is exhilarating.

It should be noted that the two functional components of an instrument, the controller and the sound producer, are still present in the software description. It is possible that their interaction can become quite convoluted and, as components of an instrument, mutually dependent. How the sound is produced is at the heart of the software's operations, although it could be delegated to hardware running dedicated autonomous software for producing sound the way the MusicKit produces sound through the DSP on the NeXT machine. Wrapped around this is the control mechanism which can be quite sophisticated.

The computer music instrument has no physical properties. To appreciate its character, it must be invoked, not once but a number of times, to create an impression of its scope and dimensions. A software instrument can have a very striking sonic imprint, a quality of sound that is readily identifiable if it is used in a straightforward way. Certain signal processing techniques, such as comb filters, high- and low-pass filters, phase vocoding, and linear predictive coding, leave a very distinctive trace upon an input signal. When listening to a processed sound, we can forget the source and hear primarily the result of the processing as the musical experience, even though we are not able to specify the nature of the processing. The use of signal processing techniques as instruments depends on prior experience with recorded music. One needs to be aware of the impact of recording technology on sound to appreciate the impact of processing on recording. We have yet to give cultural significance to the ability to take a recording and process it in ways that might reflect on the original sound, on the technique itself, or on both. Compare this with the orchestra, the marching band, or the string quartet, and we see that, as a medium for cultural statement, processing sound has yet to come into its own.
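To make the notion of a processing "trace" concrete, the following is a minimal sketch in Python (illustrative only, not Cmix code) of a feedback comb filter, one of the techniques mentioned above; the delay and feedback values are arbitrary choices for demonstration.

```python
def comb_filter(signal, delay, feedback):
    """Feedback comb filter: y[n] = x[n] + feedback * y[n - delay]."""
    out = list(signal)
    for n in range(delay, len(out)):
        out[n] = signal[n] + feedback * out[n - delay]
    return out

# Passing an impulse through the filter reveals its signature:
# echoes at multiples of the delay, each scaled by the feedback factor.
impulse = [1.0] + [0.0] * 15
print(comb_filter(impulse, delay=4, feedback=0.5))
```

The point of the sketch is that however the input varies, the filter stamps the same recognizable pattern of delayed, decaying copies onto it, which is precisely the kind of identifiable imprint described above.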

This conceptual instrument is then an algorithm or collection of algorithms that produces or processes sound. The quality of its output depends on the programmer's initial ability to define both the instrument and any data it might require for operation. The user can engage this software in a variety of forms, ranging from a command followed by a series of delimiters, modifying switches and arguments, to graphic icons which open windows onto locations for the modifying parameters. The exterior and initialization points for conceptual instruments are destined to change through the interpretations of different programmers, denying the instrument a common appearance. However, we can perceive software instruments as falling into two categories: static and dynamic.

An instrument that requires, as its initial configuration, everything needed to run it in a constant state until its function is complete can be considered to have a static behavior. This method is common in non-real-time instruments, where operation tends not to be interrupted arbitrarily, except for abnormal termination. Thus the common usage of software instruments is to set them up and run them without interference until complete. A single operation yielding a single result becomes the focus of the composer's attention.

The second category, dynamic behavior, entails changing parameters and input signals at various stages during execution of the software, either with the same instrument or for a number of them. This can be controlled precisely by initially defining a set of time points where changes will take place or by assigning these time points randomly. This method introduces the world of algorithmic composition which will be considered later in this chapter.
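The distinction between the two categories can be sketched as follows (a hypothetical Python illustration, not Cmix syntax; the function names are my own): a static instrument receives its entire configuration before running, while a dynamic one has parameters reassigned at predefined time points during execution.

```python
def static_run(signal, gain):
    """Static behavior: one configuration, applied uniformly to completion."""
    return [s * gain for s in signal]

def dynamic_run(signal, schedule):
    """Dynamic behavior: schedule maps a sample index to a new gain value,
    so the parameter in force changes as execution crosses each time point."""
    out, gain = [], 1.0
    for n, s in enumerate(signal):
        gain = schedule.get(n, gain)
        out.append(s * gain)
    return out

sig = [1.0] * 8
print(static_run(sig, 0.5))                 # uniform throughout the run
print(dynamic_run(sig, {0: 0.5, 4: 2.0}))   # parameter change mid-run
```

The same sound producer sits inside both; only the control regime differs, which is why the categories describe behavior rather than separate kinds of instrument.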

In both cases, the external configuration information may only be part of the instrument if an input signal is also required (the signal could be generated internally). The importance of the input signal to the result can be interpreted from two positions. First, there is the nature of the signal itself. Experimentation is required with the input signal and the parameter settings to the instrument, in order to determine the effect. Second, there is a question of how much of the input signal is to be combined with the output signal. The effect of mixing the processed material with the original signal occasionally creates a kind of curious mapping between the two signals. The aggregate can have points of emphasis at locations not noted in the original sound as having any significance. Composition that seeks a new focus for listening in pre-existing sound is a significant pursuit in this genre of computer music.
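The second position, the combination of input with output, can be sketched as a simple weighted mix (an illustrative Python fragment with assumed names, not drawn from any particular system):

```python
def mix(dry, wet, wet_amount):
    """Weighted sum of the original (dry) and processed (wet) signals.
    wet_amount of 0.0 yields the untouched input; 1.0 yields only the
    processed material."""
    return [(1 - wet_amount) * d + wet_amount * w for d, w in zip(dry, wet)]

dry = [1.0, 0.0, 1.0, 0.0]
wet = [0.0, 1.0, 0.0, 1.0]   # some processed version of the input
print(mix(dry, wet, 0.25))
```

Even in this trivial case the aggregate acquires emphasis at points, here the formerly silent samples, that had no significance in the original, which is the "curious mapping" between the two signals noted above.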

Design Logic and Architecture

Since computer music systems are the product of human intellect, they invariably possess a logic consistent with the strengths and weaknesses of their developer's intentions. The design of a computer music system, given that it is relatively free from technical problems, has a level of subjective bias that can be overwhelming to someone from a traditional instrumental context. One need only consider a few of the many computer music systems to realize that, irrespective of how broad the scope of a system, it is bound to be biased by some predilection of the designer. Frequently, these compositional views can be thought of as personal statements about the production of computer music. Therefore, when a composer undertakes to use a particular computer music system not of his own designing, he is subscribing to an inherent and idiosyncratic compositional logic, one not always immediately apparent. Until he is in a position to determine the extent to which the compositional environment meets his creative inclinations, it will influence, to varying degrees, aspects of his compositions.

The significance of the logic of a system may appear inconsequential because it is internal. This is not unique to music composition, but within the framework of the computer it takes on a more problematic interpretation. To design a music system that has an initially attractive format and structure one must also attempt to match that with computer functionality, and direct both towards a creative position in music. In general, we are now sceptical of attempts to systematize creativity or the creative context because such efforts frequently result in unaccountable limitations.

Primary Function and Control

The development of a software instrument in a software environment like Cmix usually consumes considerable amounts of time and effort. The instrument, as has been discussed previously, is typically designed to function according to an algorithm, the core processing function. This will affect an input signal in a predictable manner each time it is used. Output is altered by changing the parameters of the processing function, by changing the input signal, or both.

The two components of a non-real-time instrument, signal production and control, must be present if an instrument is to be of any practical use. The first component, signal production, has two operational modes: the signal processor and the signal generator. The first mode affects a digitized input signal passing through the instrument, while in the second mode, a signal is simply generated internally. It is possible to have an instrument that requires both modes; this may be the case where the input signal is used for analysis and the output signal is the result of synthesis based on the results of that analysis. The analysis may attempt to extract properties of the input sound to build sophisticated parameters for the synthesis engine. The second component controls, through degrees of complexity, the first component. Control can also be inherent in the first component, but the instrument will be more versatile if control is imposed from outside the signal processing environment itself. In Cmix the second component receives its control information from a scorefile or a program. The data instructs the fundamental operation of the instrument during execution; the data can therefore be thought of as part of the performance. A simple description of control would be a basic sequence of operations that specify the input data, the function to invoke and the output file. These instructions say, in name only, what will be done to the data. When the complexity of the instruction sequence is increased, a situation arises where the resulting sound may be significantly different from the output of a simple sequence. This simple sequence may be an elementary set of operations which access the sound production component.
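Such a basic sequence of operations, specifying input, function and output, might be sketched as follows (a hypothetical Python illustration; the names and structure are my assumptions, not Cmix's actual scorefile syntax):

```python
def run_sequence(steps, functions, files):
    """Execute control steps in order. Each step names only what is done:
    (input_name, function_name, output_name). Nothing in the step reveals
    how the result will sound."""
    for inp, func, outp in steps:
        files[outp] = functions[func](files[inp])

# A trivial "instrument": the sound production component.
functions = {"halve": lambda sig: [s * 0.5 for s in sig]}

# Stand-ins for sound files.
files = {"in.snd": [1.0, 2.0, 4.0]}

run_sequence([("in.snd", "halve", "out.snd")], functions, files)
print(files["out.snd"])
```

The sketch makes the essay's point literal: the control sequence names the data and the function to invoke, but the instruction itself says nothing about the resulting sound.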

The control component is often considered as a kind of score. This is partially true, but in the case where the output sound was derived from processing an input sound it would be an instruction score and not one destined for musical analysis. The procedure says nothing about the music itself. It is impossible to know how the output might sound unless one has heard it in previous incarnations. A completely different result can be generated simply by changing the input sound and, of course, altering the score part to reflect the new sound file. The new output sound will contain the effect of the instrument but dispersed in a way dependent on the new input file. Even knowing the input sound is of marginal value in attempting to predict the sound. So the idea of hearing the result of processing in advance rests on an expectation matching the nature of the sound with a particular instrument.

Certainly, there is an ambivalence to the control component. I have tended to see it as part of the instrument, but it also could be interpreted as a performer or score. It could be argued that the signal production component is the essential instrument because it is the algorithm that directly affects the signal or produces sound. But I think our most developed understanding of an instrument necessarily includes the manner in which it is controlled.

One final point I wish to consider in this context is that we are dealing with non-real-time instruments, and as such they can be used in ways that are difficult to understand if one is thinking in traditional instrumental terms. For example, an instrument can be used, then with some temporal deviation re-used over the same output material. Consequently, the result rapidly evolves into something other than the effect of passing the sound directly through the original instrument. This raises issues around the idea of temporality in instrument usage. Non-real-time computer instruments can invoke two approaches to dealing with time. The first is what I call linear time processing, where the sound is processed in one pass, over one segment, and the result is also one correlating segment. The instrument functions uniformly through the input data, and produces sound which is the combined effect of the input signal and a single processing function. This approach ranges from the simple mixing of two sound files through to the phase vocoder, where the duration of the output is either an augmentation or diminution of the input. The temporal structure of the input sound also has some bearing on the nature of the output sound. The second I think of as atemporal processing, which concerns instruments under algorithmic control or used in such a way that linearity is disrupted. Here the principal difference is that the instrument is no longer viewed as functioning monolithically but in a multifarious capacity, controlled by external compositional intention. This entails a coupling of the composition of control with the composition of sound.
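The temporally offset re-use described above can be sketched in Python (an illustrative fragment; the names are hypothetical): an instrument's output is accumulated into the same buffer at two different offsets, and the aggregate is no longer the effect of a single pass through the instrument.

```python
def apply_at(output, segment, offset):
    """Accumulate a processed segment into the output buffer at a
    given sample offset, mixing with whatever is already there."""
    for n, s in enumerate(segment):
        output[offset + n] += s

# A trivial stand-in for a processing instrument.
instrument = lambda sig: [s * 0.5 for s in sig]

source = [1.0, 1.0, 1.0, 1.0]
output = [0.0] * 6

apply_at(output, instrument(source), 0)   # first pass
apply_at(output, instrument(source), 2)   # re-use, displaced in time
print(output)
```

Where the two passes overlap, the result is doubled; where they do not, the single pass stands alone. Even with this toy processor, the output is already something other than the source passed once through the instrument.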

Thus in the non-real-time environment, the composer has the ability to use an instrument in ways that can only be appreciated through experimentation. This potential to alter control over a pre-configured instrument, the dynamic category mentioned earlier, has up to this time in computer music been little studied or exploited. The reason is the necessity of further description beyond the application of the processing function to the input signal. But without the development of the controlling function as a description of use, an instrument tends to develop internally but without a sophisticated character for discourse. A secondary discomfort with the idea of yet more software is that this added layer begins to neutralize the impetus of the faster-CPU dream in computer music. It seems impossible to deny, in any case, that as processors become more powerful the expectations of composers will also become more sophisticated; a significant gain in processing speed through a faster CPU therefore remains a fantasy, or at best an illusion.

Instrument Maker

Until this century, there has been a very clear understanding of the position of the instrument maker. Consistent with the historical division of labor, the instrument maker essentially made instruments. Rarely were they composers of distinction or performers of celebrity. They were specialist craftsmen, understood in the context of the artisan, trained through apprenticeship or indenture to a recognized master. The worlds of the patron, performer, composer, collector, admirer, and instrument maker had distinct boundaries; each activity was cocooned by an enthusiasm and commitment to a singular dimension of what was a shared global objective. The computer music composer can now bring his influence to bear actively in all these previously disparate spheres.

In the computer music world, every composer is potentially a new instrument maker, or historically, a luthier nouvelle. Through familiarity with such systems as Cmix, the composer might feel inclined to build their own instruments, concentrating on either of the two components previously discussed. Traditionally, instrument building has been associated with the low-level development of a sound generator or signal processor. This follows the tradition of the luthier or general instrument maker, but the world of the computer instrument is now somewhat more complex.

The development of an instrument in Cmix takes into account certain design procedures. These amount to input/output format, parameter handling and internal data format and representation. Beyond these the composer is free to develop any particular algorithm suited to the generation or processing of digital signals. Since software is so flexible, an instance of an instrument can become a genus within a short period of time, as the composer makes changes to its behavior.

The composer/instrument builder has the power to create, neglect, destroy and rebuild software instruments in the pursuit of a sound or compositional goal. Some instruments, particularly those that have yielded successful results, may be kept for re-use or modification at a later time. If the core function of the instrument is not changed, it will nevertheless require parameter changes, which may be considered the beginnings of instrument redesign. Consider the following Cmix example score file for the invocation of the Slider instrument (Ex. 4.1):

envelope(time,amp,time,amp, .... time,amp)
wave(partial_0_amp,partial_1_amp, ... partial_n_amp)
glissform(time,offset,time,offset ... time,offset)
float start,dur,pitch1,pitch2



slider(start=0, dur=5, pitch1=8.00, pitch2=8.003)
slider(start=0, dur=5, pitch1=8.00, pitch2=8.000)

Example 4.1 Slider instrument from Cmix.

Although this script is intended as an introductory prototype for new users, a user may need to edit it immediately. Apart from the number and sequence of events, and the parameters to the Slider function itself, which would need to be considered at a more compositionally advanced stage, the path to the output sound file needs consideration, if not alteration. This is the first step towards a private use which in all likelihood will eventually reflect a highly individual interpretation. Beyond this, the composer may wish to alter the fundamental operation of the Slider instrument itself. This is a more complex operation in instrument construction and would eventually entail redesign of the processing algorithm and reconsideration of the controlling mode, in order to investigate the instrument's complete operation satisfactorily.
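Although the Slider source itself is not reproduced here, the character of such a processing algorithm can be suggested in outline. The following sketch is written in Python rather than the C in which Cmix instruments are actually implemented, and every name, the envelope shape and the use of frequencies rather than Cmix's octave.pitch-class notation are inventions for illustration only:

```python
import math

def slider_sketch(start, dur, freq1, freq2, amp=1.0, sr=44100):
    """Hypothetical sketch of a Slider-like generator: a sine wave
    whose frequency moves linearly from freq1 to freq2 over dur
    seconds, shaped by a crude triangular amplitude envelope."""
    n = int(dur * sr)
    out = []
    phase = 0.0
    for i in range(n):
        t = i / n                            # normalized time, 0..1
        freq = freq1 + (freq2 - freq1) * t   # linear glissando
        phase += 2 * math.pi * freq / sr     # advance oscillator phase
        env = min(t, 1.0 - t) * 2            # triangular envelope, peak at t=0.5
        out.append(amp * env * math.sin(phase))
    return out
```

Redesigning the instrument, in the sense described above, would mean altering this inner loop: substituting a different waveform, envelope or interpolation curve, or changing which aspects of the process are exposed as score-file parameters.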

A remarkable phenomenon of the luthier nouvelle in computer music is that the practice of instrument construction can become an end in itself. The composer/instrument builder can create instruments, compose fragments with them, and derive a sense of compositional satisfaction from the experience. Working on a composition can seem less fulfilling than the construction and study of the nature of computer music instruments. Promoting this position in recent years has been the pedagogical aspect of computer music and the increase in powerful software development systems with musical implications, such as NeXTSTEP. Considerable computer music software now exists as teaching tools for the concepts of computer music and the promotion of its artistic diversity. Such computer music systems as Cmix, Csound, Cmusic, the MusicKit, and Max, among others, are in themselves what could be called contexts for instruments, and they have proliferated in recent years along with the facilities to develop them. The development of such systems attracts considerable effort and intellectual commitment, and some find it more interesting to develop these environments than to compose with them. It is likely that the full potential of these systems will never be exploited.

Computer music instruments may also be designed around non-linear behavior, which results in sound with complex and natural characteristics. The extent to which this is employed varies greatly. It could be used simply in the production of a sound or, on a higher level, to control the instrument, or both. The sounds can in principle be created with nuances similar to those one might find in the acoustic world, thus giving the sound a familiarity. This approach generally relieves the composer of the task of specifying the sound and then trying to make it interesting over the course of a composition. The interest in non-linear systems has been largely promoted by mathematical studies and the popularity of fractal geometry.
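A minimal illustration of non-linearity at the level of sound production is waveshaping, in which samples are passed through a non-linear transfer function. The sketch below is not a Cmix instrument, merely an illustration of the principle; the choice of tanh as the transfer function and the "drive" parameter are assumptions for the example:

```python
import math

def waveshape(samples, drive=3.0):
    """Pass samples through a non-linear transfer function (tanh).
    Unlike a linear gain, this adds harmonics whose strength depends
    on the input level, lending the sound a more complex, 'natural'
    character. Output is normalized so an input of 1.0 maps to 1.0."""
    return [math.tanh(drive * s) / math.tanh(drive) for s in samples]
```

Because the added harmonic content varies with input level, even a plain sine tone acquires a dynamic spectrum once its amplitude envelope interacts with the non-linearity.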

Reproduction, Reconstruction, Deletion and Context

The concepts of reproduction, reconstruction and deletion are unique to the virtual instrument world, and are phenomena related to an accelerated evolutionary process unencumbered by the demands of the physical world. Reproduction and reconstruction indicate an environment where instruments and data can be rapidly amassed creating a context of great diversity from a small core of material. Multiplicity is the logical extension of reproduction.

The idea of Deletion is important to anything virtual because an object can be removed from a context quickly, efficiently and permanently. It represents the antithesis of creation in this context. Deletion can be equally effective on the history of an object. It is both Death and Extinction in virtual worlds.

Context determines the extent and rate of deletion and reproduction. The transience of the virtual instrument tends to suggest that it is for and of the moment. An instrument has a character and nuance that is directed towards the context of its use. It must be configured for the input material and output compositional framework.

Instruments are often closely associated with a particular composition and can have such idiosyncratic characteristics that their re-use or reconstruction without modification would be undesirable unless one was creating a suite of pieces. The type of information dealt with in this context is typically very specific. To reconstruct an instrument necessitates altering the information, effectively dismantling the previous concept, in order to construct new information about the next instrument. It could also mean redesigning the algorithm, but this is done less frequently.

Imperfections, Anomalies, and Serendipity

All constructions are imperfect. The history of music, as of all Art, is marked by examples of imperfections that may eventually be elevated to the highest levels of aesthetic consideration and value. The majority of these imperfections are derivations from an existing technique, which has to be considered the initial criterion. These are the foundation of style and mannerism. Unexpected behavior in the processing function of an instrument can be used compositionally, particularly if it can be offset against the conventional function.

The potential for serendipitous circumstances is far greater in computer music simply because this dimension of the compositional experience has yet to be documented and widely discussed. This is particularly, though not exclusively, something that occurs at the higher compositional levels. Brad Garton relates one such experience:

As I worked with Elthar to create the piece There's no place like Home, an interaction developed that I had not foreseen when designing the program. The interaction was the result of the conjunction of Elthar's script-learning capability with the analogy mechanism. After installing the script functions into Elthar, I began to use scripts to test the program. I created several "debugging" scripts that called upon all of Elthar's signal processing commands. I purposefully left the scripts "open"; I didn't specify many of the parameters for the signal processing algorithms. This ensured that a large portion of Elthar's knowledge-base would be consulted (one of the purposes of the testing).

The sounds that resulted from these test scripts really caught my ear. They weren't at all what I had planned to do, but at the same time they had a fascinating beauty all their own.

Such experiences, particularly at the testing stages, are common in complex systems where the composer has delegated compositional determinacy to the computer.

A similar situation happened to me at the time of writing the scripts found in the appendices. I was under the mistaken impression that each time the program updated a particular section, it re-initialized a particular array. In fact it didn't, and the array just grew bigger (dynamic memory allocation for arrays is a normal operation in the Perl scripting language). After I discovered and fixed the bug, I was disappointed on re-hearing the results, so I changed it back to the previous condition.
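The shape of that bug can be sketched abstractly. The fragment below is not the original Perl script but a hypothetical reconstruction in Python, contrasting the behavior I intended with the behavior I actually had, and ultimately kept:

```python
def make_updater(reinitialize):
    """Return an update function for a working array. With
    reinitialize=True the array starts fresh on every call (the
    intended behavior); with reinitialize=False it silently grows
    across calls (the serendipitous 'bug' described above)."""
    data = []
    def update(new_values):
        nonlocal data
        if reinitialize:
            data = []            # intended: discard previous contents
        data.extend(new_values)  # otherwise: contents accumulate
        return list(data)
    return update
```

The accumulated version produces ever denser material as the piece proceeds, which is precisely the quality I found preferable on re-hearing.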

Such situations might easily come about in virtually any engagement with computers. In computer music, when it works, it is something akin to jazz improvisation. There is a spontaneity about the moment that is truly invigorating.


Analysis of human language rarely produces definitive answers because even everyday verbal communication is complex. Our thoughts about language lead us into ever more complex aspects of human existence that invariably return us to our original position of contemplation about language. But according to Julia Kristeva we are surrounded by instances of language systems in many non-verbal forms:

Whoever says language says demarcation, signification, and communication. In this sense, all human practices are kinds of language because they have as their function to demarcate, to signify, to communicate. To exchange goods and women in the social network, to produce objects of art or explanatory discourses such as religions or myths, etc., is to form a sort of secondary linguistic system with respect to language, and on the basis of this system to install a communications circuit with subjects, meaning and signification. To know these systems (these subjects, these meanings, these significations), to study their particularities as types of language, is the second characteristic of modern thinking, which uses linguistics as the basis for the study of man.

In the dialogue between ourselves and computers, we recognize the importance of language, and lament the primitiveness of available computer languages. We again come up against our inability to fully grasp our own means of communication to the extent that we can effectively transfer that ability to a machine.

In computer music, concern for language as a vehicle for communication is paradoxical. At the center of the dilemma is the position musicians already have with the notion of musical communication. Musicians often think music to be a special type of language with unique conditions and requirements. Thus the available language environment of the computer is perceived as musically weak and inadequate from several positions. First, there is our perception of historical notation systems as intermediary symbolic descriptions of musical activity. Second, we engage in formal and informal discussions about music which inevitably take place around analytical and performance issues. Finally, there is the music itself. The formal languages employed by computers are of considerable sophistication, but appear fundamentally unsuitable and incongruously positioned for musical transactions. I suspect that musicians have a natural aversion to formal and apparently rigid specifications, and look towards the power of natural language as a paradigm for "subject", "meaning" and "signification". Music seeks to be an equivalent and unique parallel to our natural languages.

Computer music is partly about a creative communion between the composer and the computer. The mode of communication is unlike anything previously possible in the history of music, and as such, possesses certain problematic conditions which composers confront at various times. Problems which result from the union of traditional musical practices and technology are often frustrating in the incongruity between their conceptual simplicity and technological implementation. Such conditions are unlikely to be left undisturbed because technology makes us intolerant of situations where there is a possibility of a technological solution. Technology has set up a context of expectancy in which the destiny of music is now inexorably linked.

Like traditional music practice, computer music uses various forms of symbolic notation and interaction to facilitate the composer's tasks and instruct the computer in the generation of sound. The notation might be a set of shell commands in such environments as DOS or UNIX, or the manipulation of icons now familiar in window-based systems, thereby representing the diversity of the computer's formal-language approaches to the possible specification of music. Nevertheless, general computer environments are characteristically resistant to the creative evolution of language.

Unlike the traditional music context, the composer cannot step outside what I will call the formal discourse of the process to another common language in order to further elucidate matters of ambivalence or ambiguity that arise. The composer cannot explain the composition beyond the steps taken in the production of the work; there is no alternative interpretation to the work's primary description. For example, consulting another computer will not resolve compositional questions. Transferring the entire composition in progress to another machine might have some beneficial side effects, but it is by no means certain. The crucial difference for the composer is the absence of the performer as interpreter/mediator. The performer's role is complex because, among other things, it permits the work to have a re-birth, even redemption, at various times. In computer music, by contrast, the resulting sound is the composer's direct responsibility and immediate point of assessment. Once the composition is presented (a recording played back), it is unlikely to receive further performance interpretation unless aspects of the sound are modified during a presentation, such as equalization or the application of room simulation or sound diffusion in the performance space.

Anyone who uses a computer would ideally like the sophistication of communication that exists between humans. With computers, such complex bilateral dialogue, without significant compromise towards the computer, is currently impossible. Computers essentially function in context-specific situations and are unable to resolve ambiguity or divine the subtleties humans routinely use in conversation. Many attempts have been made at human/machine language, but these tend to reflect the author/programmer's particular interpretation of the problem, and the idea of a successful solution resides within their specifications and scope of research. The many singular efforts, valuable as insights into specific parts of the problem, seem antithetical to the modes of interaction and manner in which normal human communication is conducted. The phenomena of our behavior cannot be isolated points on the continuum of our existence.

Computer languages are generally regarded as the communication medium between a programmer and a computer. But between the general computer user and the computer the idea of a language is replaced by the program and, currently, the application, which are specific codifications of a task. Programming languages, as encapsulated specifications of action, thus remain the most powerful and efficient means of communicating and manifesting ideas and concepts within the computer. Unlike ordinary language, computer languages are not normally invoked as a spontaneous means of communication; rather they are used to define the means of doing certain, typically linear operations. Although a computer language can be used to control operations within the computer at various levels of sophistication, attaining such sophistication can prove to be a daunting task. The vast majority of users find it too inconvenient and too complex to construct semantically and syntactically correct command sequences, the way they might in their natural language. Much of the fear of such language sophistication lies in our discomfort with computer interaction at a linguistic level, and the evolution of the technology has not brought us to a more comfortable interactive position. We are well aware that the computer is not tolerant or receptive to the nuances of our modes of thought and action.

From the simple program, we see the rise of the sophisticated application which attempts to provide a context for computer use that is largely self-contained and provides considerable task abstraction. This is attractive for a number of reasons but limiting. The application deals specifically with a task context, like text editing, document preparation or music composition. The extent of abstraction generally demarcates a specific problem space with which we interact. We cannot articulate our problems nor attempt to solve them from outside the application. While a computer language has been used to produce the application, it is not necessarily accessible or apparent within it. The application is a specification and set of algorithms that can resolve matters only within a certain class of problems and generally only within itself.

By all accounts, music is just one of the many tasks a computer can undertake, and of the numerous applications that exist to provide this function, we see at various points a similarity of operation with those of text editing, image processing and system management. In other words, after computerization, tasks once thought to have virtually nothing in common with music begin to share traits with music composition and production. There are a number of conditions that contribute to this but the most significant one centers around the methods of communication between the user and the task. On one level, an exterior level, there is the interface between the user and the computer. On another, there is the interface between the software and the task abstraction. The software is ultimately a compromise between the computer hardware and the power of the language/symbolic interaction used to define the task.

If computer languages are sufficiently powerful to allow the construction of programs that produce music, why can't these languages be used to produce music directly? This would obviate the need for specialized environments and directly tap the power of the language without restriction. Most computer languages, certainly those used to create applications, can be used to develop the foundation components for a musically creative environment, that is, the atomic operators and functions used constantly: disk input/output routines, management tools and resource handlers in general. The problem arises at the creative level. Computer languages are typically regarded as cumbersome, general purpose, low level and conceptually too neutral for a specific task like music composition or performance. The composer would have to spend considerable time defining low-level aspects of the composition in a language that may be functionally efficient for that purpose but nevertheless remote from any musical aspirations. Broadly speaking, the use of general computer languages for musical composition is possible but not desirable, although the definition and context of such languages need some qualification.

A large percentage of interaction with computers, for what might be called domestic discourse, necessitates the use of a language. These are commonly known as command languages. Such languages define, through alpha-characters and graphic tokens, quite sophisticated operations but are generally less powerful than their fully developed relations. The intention of these languages is directed towards general management and navigation through the computer rather than general programming or musically creative tasks. Section 4.7 illustrates a use of this type of language for music composition.

Public Language

Computer languages are typically general, and therefore public, since the structure of the language is readily accessible to all users. Use of a language is determined by context. Computer languages have traditionally been task oriented; for example, such languages as COBOL, FORTRAN and LISP were designed for particular interest groups. The languages of Business, Science and Artificial Intelligence are nevertheless public. The corresponding computer languages seek to emulate this specialist position of language in the community through the use of unique terms specific to the needs of that community. COBOL was oriented towards business concerns and functioned well until new applications and languages emerged with conceptual dispositions more finely tuned to contemporary business practice. LISP, on the other hand, is certainly more flexible and extensible, but its grammar and syntax suggest a language for the traversal of a less well defined or self-created problem space. It is a language for the description and solution of abstract problems.

Personal Language

Language used by an individual tends to become an expression of that individual. Beyond its communicative function, it serves to identify the user. The situation is similar for the composer who seeks a means of composition that will not only facilitate personal musical production but imprint his identity and character on a work.

Taking into account all the formal constraints inherent in computer languages, there still exists remarkable scope for individual expression. Considerable latitude is possible in the purpose, design and structure of a program, revealing as much about the programmer as their handwriting or accent. A programmer can find an elegant and succinct solution to a particular problem, and imprint a unique style of coding in various ways throughout the program. Programs or applications need not be devoid of human character. The development and proliferation of music programs go some way towards addressing one of the central problems of computer music, which is the dominating atmosphere of the computer in the compositional process. Unfortunately, the external diversity of computer music applications is generally not apparent from within any one application, and the ability to develop a private language of interaction is thus frequently confined to each application.

Given that the computer is a self-contained environment, it is curious that the many methods to produce music are generally isolated from each other with no obvious system of communication between them. Such a means of communication has been partially answered by the proliferation of MIDI, but MIDI is essentially an external protocol, useful between autonomous pieces of equipment. An internal MIDI that works between different applications has yet to be developed. This aspect of communication has not been addressed in a way that makes clear its fundamentally important position within computer music technology.

Commands, Arguments and Parameters

The computer is essentially a tool that needs to be controlled. This is evident in the general format of primary communication, the command set. For all practical purposes, interaction with the computer, ranging from the mundane to the sophisticated, seeks to be brief and precise. Commands are repeated often but at unpredictable moments, which promotes brevity and simplicity in the interaction. There is no place or time for the redundancy that, in our natural language, can enrich our world. Commands have a stable function and seek, in name, to be explicit about their operation. They frequently contain explanations of their use and the format for user-supplied information.

Command languages, with their modifying arguments, switches and parameters, can inhibit the spontaneity and intimacy necessary for constructing a creative environment. Commands are not for thinking with but for doing. Everything submitted to the computer is essentially a command. What we, the users, do is increase the semantic content through a change of context. The approach to take with systems that use directives is to design modes of interaction that function at a higher level and, through some other interpretation of the problem, call the command primitives transparently, that is, without the user's explicit knowledge. We still issue commands, but they can be subtler in their signification and surrounded by a conditional environment that leaves the user feeling that the operation approximates a nuance of real life.
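Transparent invocation of command primitives can be sketched very simply. In the following Python fragment the user expresses a musical intention while an explicit command line is assembled underneath; the "synth" command, its flags and the dynamic-to-amplitude mapping are all invented for the example:

```python
def play(pitch, loudness="mf", dur=1.0):
    """Hypothetical higher-level wrapper: translate a musical request
    into the explicit command primitive it stands for. The user deals
    in pitch names and dynamic markings; the argument/switch syntax
    of the underlying (invented) 'synth' command stays hidden."""
    amps = {"pp": 0.1, "mf": 0.5, "ff": 1.0}   # dynamic marking -> amplitude
    return f"synth -p {pitch} -a {amps[loudness]} -d {dur}"
```

The command is still issued, but its signification has been shifted from the mechanics of switches and parameters towards the vocabulary of the musical task.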


The absence of the performer as a specialist interpreter and sound producer brings the computer music composer into a more direct relation with sound, both conceptually and practically. The composition, complete or in progress, is heard unmediated by another human being, and frequently at the site of composition. This immediacy and directness sharply defines the quality and success of the interaction between composer and technology. Every sound the composer ventures to make is a critical and essential statement about the communication between themselves and the instrument, even before it becomes compositional material. What the composer finds satisfying in the course of production may well be the result of a mature dialogue. For the inexperienced composer, results can well be arbitrary and not indicative of whether they are approaching the appropriate symbiotic relation. On a prolonged basis, the composer ultimately comes to understand the methods for sound production most likely to manifest her intentions. Here, at once, is both the starting point and evidence for the idea that the interface is composed and that the composition is a specification of the nature of the interaction.

As I have previously outlined, the computer music composer is in a unique and possibly perplexing situation when it comes to understanding the sound of his instruments. Here there is a categorically different situation from the traditional one. Foundational experiences of common practice music begin with the musical instrument. It is rare that an appreciation of music does not contain some knowledge of the canon of musical instruments. For many students, practical performance is an entry into the conceptual world of music. So there is always a distinction between physical production and intellectual appreciation. Computer music, however, is fundamentally intellectual and conceptual; instrumental appreciation is initially predicated on the symbolic description of the processing algorithm, and the sound is itself an auditory code. Representation becomes the means to correlate instrument with sound: the algorithm is the simulacrum of the instrument and the digitized signal the simulacrum of the sound. Even if one cannot understand the workings of the algorithm, its character can later be appreciated in sound. But to know the sound of computer instruments is also to understand the function and implications of their principal algorithms. For example, the effect of comb filters, band-pass filters or resonators can be initially understood, or gleaned, through their names and later through more complex descriptions, finally leading to the mathematical intricacies of an algorithm. Here it can be studied in the abstract, with interest focused on the numerical consequences.

When applied to existing sounds, the algorithm is likely to be heard in a different way from how one might hear traditional instrumental music. One immediate reason for the difference is that the sound context, to which the processing has been applied, will not resemble that of traditional music. But let us consider the processing for a moment. Recognition of the general effect of processing can be quite obvious, and as its use becomes more sophisticated, identifying its presence in many situations becomes a matter of experience. However, an unmediated use of signal processors amounts to a use of effect rather than the manifestation of the instrument. The instrument is capable of making a further statement about the sound without overtly drawing itself in as part of that statement. How effects become instruments is, as previously discussed, a compositional matter.

Furthermore, the kinds of relations and structures one is accustomed to hearing in acoustic instrumental music have a historical significance beyond the immediacy of performance. Such historical relations and structures can be used in computer composition, but it is inevitable that this music will create its own. For example, in certain computer music, relations and structures may be contained in the input signal. Disregarding the possibility that the input signal can be white noise, it could well be sounds which already have significant meaning for the composer (the composer could choose to process a recording of orchestral music).

Taking this even further, a processed sound can not only be evaluated for its explicit quality and effect but also for how it works when put back with the initial signal. The input signal could be perceived as a nuancing of the underlying processing function. This is contrary to what one might expect from processing. In signal processing composition, the composer has thus to make decisions about the mix of processed and unprocessed sounds. This is a position on sound not readily understood by traditional composers. Nicola Bernardini further describes the sound world of the computer music composer:

To put it simply, what this means is that sound itself (a specific instance of a musical performance) becomes infinitely duplicable and that, thus, the relationship between the listener and the work itself changes radically; the listener is now confronted with something that resembles a photograph more than a painting. In fact, virtuality itself allows complete control of connotation attributes; it is now possible to hint, to lie, to fake, and even to create far-fetched metaphors with music. It is furthermore possible to lie in between these modes of communication, moving continuously from one to the other, just as we do when we speak (which is so hard to do when we write). In fact, this is the main issue here–the combination of reproduction and virtuality transforms music in a double-layered grid language. The likeness between its inner structure and functionality and that of spoken natural languages is now evident. Music, however, continues to preserve the auto-referential possibility of being "understood" just by listening to it.

This text is somewhat more optimistic than credible, particularly in the "likeness between its [computer music's] inner structure and functionality and that of spoken natural languages." But evident in the text's progress towards the sublime is a kind of excitement about the possibilities. Put in a more realistic context, the important point for the composer is the view that computer music is potentially a powerful medium for reflection upon humanity.

For Cmix signal processing instruments, the composer typically evaluates sound from three positions: recording, generation, and composition. The order is of some significance here as it relates to the natural progression of the composition. In the following sections I will briefly consider some salient points.


Recording is the most significant and influential music technology to emerge since the development of musical instruments and printing. But in the space of the fifty or so years since recording as we now understand it emerged (the tape recorder), the ability to capture sounds of any sort and have them played back at a later time is no longer seen as the full extent of the process. Recording has become something of a still point, but from it has sprung the beginning of yet another significant period of musical development.

When recordings are retrievable without loss of integrity, as in the digital format, the recording itself has the potential to become something other than just a record, a captured moment to be simply replayed. The recording becomes something that can be reinterpreted, reused for the purpose of further and, perhaps, radically different recordings, which might include the nuances Bernardini suggests. This potential elevates the fundamental importance of recording by expanding the notion of fidelity. Fidelity is no longer a matter of noise-free playback but of quality processable material.

Digitization of sound has brought a significant change to the meaning of recording, and it is dependent on computer technology, because this technology provided the means to alter the primary significance of the recording. The technology existed before digital sound, and for many years it was understood that sound could be transferred into a computer and dealt with in a digital format. For one thing, it has permitted manipulation of the signal within a relatively stable environment which protects it from the contamination and corruption of media noise, so prevalent in analogue recording methods. The recording is now independent of the physical media on which it resides but dependent on the integrity of the digital representation.

The ability to manipulate the content of the recording, the digitized signal, is a remarkable step beyond the actuality of the recorded material. Yet it appears that the evolution of recording technology has progressed oblivious to the parallel efforts of other technologies to influence this content through various means, including unintended uses of the recording device itself. Ironically, this at once undermines the integrity of the recording while promoting the autonomy and importance of the content as artifact. We are thus inclined to value recordings, knowing that their life, like our own, is not simply a question of a slow degradation of materials but can be radically altered at any time.

Beyond the technical dimension of recording are metaphysical concerns about content and representation. It is necessary to appreciate that the recording is generally regarded as having an inherent truth content which empowers it with an authority we have come to accept. Traditional instruments are also in a sense empowered with a truth content, but it is imparted through the performer. As I see it, the truth content of the recording is critically dependent on the recordist and the technology, because both can vary the quality of the recording and consequently its reception and credibility. One desires to hear in the content of the recording details that minimize ambiguity and indecision, thereby convincing us of the truth of the recorded event.

Analysis and Processing

The next step in a compositional scenario is the ability to analyse, create or transform digital signals through various numerical processes. This is the point at which the composer affects the sound in a practical way. This stage further develops the notion of a disembodiment of sound from the instrument or reality. It is worth observing that during this time the composer or musician is still in record mode. The material is constantly being replicated, examined, renamed, moved, edited and deleted. The environment alone constitutes a processing function. Therefore, if we think of a single context in which these transformations might take place, traditional musical instruments do not come to mind because of their singularity of function and historical identity.

No traditional instrument is capable of analyzing its own output. While the musical value of this may be questionable, it has become a critical computer technique for thinking about sound as music. The composer can initially reflect on the quantifiable properties of a sound which ultimately might influence, inspire and prepare her for aesthetic or semantic issues. The various analytical tools essentially tackle the domains of amplitude, frequency and time, occasionally bringing all together as a snapshot of a moment in the sound. The idea of using spectral analysis, in either two- or three-dimensional representation, as a means of considering both large and small scale structure in sections of a recording has some merit, and, if the resources are available, could provide some insights into the sound not previously considered.
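The kind of snapshot described above can be sketched in a few lines. The following fragment (a Python illustration of the general technique, not any of the analysis tools discussed here) windows a single frame of a signal and computes its magnitude spectrum with a naive DFT:

```python
import cmath
import math

def spectral_snapshot(frame, sample_rate):
    """Magnitude spectrum of one analysis frame, computed with a naive DFT.
    Returns (frequency_hz, magnitude) pairs up to the Nyquist frequency."""
    n = len(frame)
    # Hann window to reduce spectral leakage at the frame edges.
    windowed = [x * (0.5 - 0.5 * math.cos(2 * math.pi * i / n))
                for i, x in enumerate(frame)]
    pairs = []
    for k in range(n // 2 + 1):
        acc = sum(windowed[i] * cmath.exp(-2j * math.pi * k * i / n)
                  for i in range(n))
        pairs.append((k * sample_rate / n, abs(acc)))
    return pairs

# One 256-sample frame of a 1 kHz sine sampled at 8 kHz.
sr, n = 8000, 256
sine = [math.sin(2 * math.pi * 1000.0 * i / sr) for i in range(n)]
snapshot = spectral_snapshot(sine, sr)
peak_freq = max(snapshot, key=lambda p: p[1])[0]
```

In practice one would use an FFT library and slide the frame through the recording; the point is only that amplitude, frequency and time are all present in such a snapshot.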

There are other less aesthetically abstract procedures that are intended to maximize the quality of the sound and avoid significant flaws in the manipulation of the digitized signals. Recognizing and removing DC bias or frequencies below 30 Hz can improve playback and avoid speaker damage. These in themselves do not provide great analytical insight but cause the composer to consider the physical consequences of the sound at the time of playback.
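DC bias, at least, is easy to characterize: it is simply a nonzero mean sample value. A minimal Python sketch (illustrative only; detecting energy below 30 Hz would additionally require a filter or spectral measurement, which is omitted here):

```python
import math

def dc_offset(signal):
    """DC bias is the mean sample value; a nonzero mean holds the speaker
    cone off-centre and wastes headroom on playback."""
    return sum(signal) / len(signal)

def remove_dc(signal):
    """Subtract the mean: the simplest possible DC-blocking step."""
    mean = dc_offset(signal)
    return [x - mean for x in signal]

# A clean 440 Hz sine, then the same signal shifted by a constant 0.25.
sr = 8000
sine = [math.sin(2 * math.pi * 440.0 * i / sr) for i in range(sr)]
biased = [x + 0.25 for x in sine]
repaired = remove_dc(biased)
```

Real DC-blocking stages are usually implemented as high-pass filters so they can run over a stream rather than a complete file, but the diagnostic idea is the same.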

Thus the computer immediately comes to mind when we think of sound processing, representation and recording. It is not an instrument in anything like the traditional sense but a Meta-Instrument, a very large collection of sound producing and manipulating functions. While individual numerical processing techniques might suggest an instrument, the computer itself encapsulates a special class of instruments. The idea of an orchestra in a box hardly seems to approach the creative potential with which the technology seems imbued.


Composition can either hide the nature of the previous points or emphasize them in creative ways. The tendency has been for compositions to reflect the potential of technology through techniques variously and curiously applied to an unpredictable collection of sounds. The art seems to be in finding an existing sound context in which the processed material can be effectively located.

Composition can be viewed as a personal statement concerning the creative use of music technology. It provides significant evidence of the sophistication and potential of a particular interface or collection of interfaces. During the course of composition, the interface may have changed several times. The composition, as it evolves, dictates the use of certain programs that may or may not be compatible with current programs and hence can or cannot be used simultaneously. This engenders the effect of layering or merging interfaces. This appears to be what Bernardini is seeking when he states that, "The main problem, then, is a problem of description. As we are talking about computers, what this boils down to is user interfaces. We do not need endless power, we need endless interfaces."


As computer music diversifies and evolves under the influence of practitioners and technology, composers and musicians will inevitably seek diversity in their musical identity. That appears as a significant motivational force for anyone to engage the genre and begin to move beyond the fundamentals.

But the ability to define one's own compositional environment and procedures for the purpose of a more individual position is not without some drawbacks. These relate mainly to the dissemination of experience and ideas, and the development and membership of an evolving pragmatic and aesthetic community. By the very nature of computer technology, the composer is likely to build an environment over some years that exhibits a highly idiosyncratic working style, often unsuitable and unexplainable to others. Yet some highly idiosyncratic systems flourish, possibly due to the musical situation of the developer and their ability to inspire and teach. A system may also have inherent properties attractive to potential composers. But equally, systems that have much to their credit may never reach the critical point of self-generation because too few people have the time and ability for the intellectual effort necessary to understand and propagate them.

I see three points on which the context of computer music departs from tradition and thereby invokes an image that possibly alienates not only a potential listening public, but even one composer from another. These three points I have put under the headings of Implementation, External Factors and Evaluation.


Technology creates its own world of nomenclature, subtlety and nuance. As with all musical contexts, appreciation entails some understanding of the details that distinguish this musical context from the traditional one.

With the ability to create musical instruments specifically for one's own compositions, questions as to the instrument's functional correctness or completeness need be addressed only by the composer. If a composer is satisfied with the results then the instrument is functional. Used by other composers, this same instrument may fail under certain conditions and require modification. In the world of non-real-time composition, failure can be reinterpreted as a valuable dimension of compositional investigation. The success of an instrument also depends on how it is configured. Used within a certain range, an instrument may function consistently and with good results but used with extreme or unusual parameters, it begins to produce less satisfying results.
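The dependence on configuration can be illustrated with a deliberately trivial software instrument (a Python sketch with an invented name, not a Cmix instrument): within a sensible range of its single parameter it behaves linearly, but extreme values run into the hard limits of the digital representation.

```python
def gain_instrument(samples, gain):
    """A minimal software 'instrument': scale amplitude, hard-limiting the
    result to the representable range [-1.0, 1.0] as a digital system must."""
    return [max(-1.0, min(1.0, x * gain)) for x in samples]

material = [0.1, -0.4, 0.6]
moderate = gain_instrument(material, 1.5)   # clean scaling within range
extreme = gain_instrument(material, 10.0)   # every sample clips: distortion
```

Whether the clipped result counts as a failure or as a discovered distortion effect is, as suggested above, for the composer to decide.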

Implementation in the acoustic instrumental world means the instrument itself. Its physical presence implies a degree of stability and persistence. In software, the idea of a stable instrument extends only to the moment of use because it can be altered at the whim of the computer/luthier. Implementation thus implies that instruments have a transience and ephemerality.

External Factors

The production of computer music is inevitably influenced by external factors which lie beyond the composer's control. These factors can be considered in degrees of remoteness from the compositional process. They can be briefly described as: digital technologies (computer, recording, storage, processing), general environments (operating systems, studios), and specific computer music systems (software, hardware).

Consequently computer music composition is an art tied to the progress of a technology not of its own making. There is a bewildering array of technology that seems to offer potential but always with a few compromises and conditions. One need only consider the various computer systems available to realize the difficulty of appreciating each for such a task as music. Different computer systems are almost like different cultures. It is evident that different brands of computers compel the development of different solutions to the problems of computer music system development.

As discussed in chapter 3, the computer instrument, whether software or hardware, is a device connected to various points in our musical experience, through a variety of physical and intellectual means. Since neither composer nor musician can be inside the computer, the experience is passed in and out through ways that either have historical significance (traditional musical controllers) or that can be learned and assimilated relatively easily (various software applications). It would appear that we are perpetually trying to counter the internal nature of the computer and bring it out into our world.

The type of computer music I am concerned with in this chapter is predicated on a representation of sound made possible only through current technology, but when presented to a listener this technological dependency becomes transparent. This is a technological characteristic of recording systems, radio and television. In effect, the representation attempts to move towards that which it represents rather than that through which it is represented. When we listen to recorded music, we want to be convinced about the reality of the recorded event and not the reality of the playback technology. Paradoxically, this can only take place through technology. The quality of the representation is dependent on technology and is consequently bound to it rather than the other way around. Unlike acoustic musical production, technology remains outside each statement of representation; we would prefer that it not overtly interfere with the authenticity of the recording. Technology is the unheard agent of the simulacra, transparent in the sonic instance. Computer music is an agent within the music technology framework and can either synthesize sounds or process natural acoustic sounds before they are presented as simulacra.

When a computer instrument processes recorded sound, it is far from being innocuous and transparent. Furthermore, it is a reflexive condition in that the quality of the result can be greatly influenced by the quality of the initial recording. The result also depends not only on the processing technique itself but on the composer's perception of the relation of the output to the input sounds. Sounds with significant background noise, in the form of hiss or hum, can cause results which amplify the presence of this incidental sound to the detriment of the desired result. Thus the question of recording technique, as the composer gathers the sounds at the initial stages of composition, introduces the idea of a preliminary compositional practice. Unwanted sound can sometimes be removed without gross deterioration of the important signal, but there will inevitably be a trace of that processing in the result. Improving bad recordings is extremely difficult without simply putting another type of technological imprint on them.


Computer music is an art that can obscure composing strategies or performance techniques as points of reference for the listener. Exactly how the computer composer produces a composition often remains inexplicable and thus that avenue of appreciation is closed. The listener, and that may even include the composer, must draw upon their broader contemporary experiences. Common practice music has, on the other hand, the tangible historical referents of instrument, repertoire and concert hall which allow the listener to belong momentarily to a communal experience outside their ordinary existence. We are still coming to terms with the phenomenon of recording as displaced sound, in space and time, as it applies to either traditional music or sound in general. This further complicates the listening experience when we consider a recording that is not actually of something from the real-world or, even if it once was, has since been processed in a way that now disturbs the credibility and truth-content of the original. Our historical understanding of music has not adequately prepared us for sound that has conflicting signification.

Technology provides a vast potential for conceptualization, methods of control and access to musical information. No computer music composer has yet thought it worth rigorously documenting the throes of the compositional act, even though the computer is capable of logging the compositional process with a high degree of detail and accuracy. The problem is in trying to decide how to manage the information, assuming that whatever is produced will be collected. Any attempt to capture information will itself have a bias towards some aspect of the data collection process. What may get saved are compositional data that most resemble scores or sketches that reveal, without too much difficulty, the basic idea of the work.

The nature of the documentation possible to amass during the course of composing, apart from being overwhelming, may be of little value in providing a general understanding of how computer music is created. I suspect that any mechanical approaches to gathering compositional information which are unmediated, either through human intervention or the use of automated filters, will gather too much data to be immediately useful.

The primary representation of recorded sound as information has the disadvantage of being cumbersome and obfuscating. In the numerical format, there is too much detail with no convenient means of redescribing subjective musical material. Exactly what constitutes music in the computer music environment and how it can be practically represented are significant problems. There are several analytical processes, previously discussed, that can extract data about the physical properties of the sound. These may well be the forerunners of a type of representation that begins to address the idea of a new musical aesthetic in sound.

Evaluating computer music systems or methodologies appears to be predicated on the composer's compositional inclinations, peer group involvement, technical support and the contemporaneity of the system. It is not a simple process, and opinions can be significantly altered by the revelation of information not previously available. For instance, a composition with traditional sounding instruments may appear remarkably virtuosic, and if the listener is thinking about conventional performance the effect could be impressive. But upon learning that the composition was in fact produced in non-real-time, the awe which the work might inspire will be diluted and compromised.

Our most common means of evaluating computer composition and performance systems are through the resulting compositions and descriptions from peers whose judgement we respect. Composers typically embark on approaches or systems because they have heard music by other composers using that system or method, or are initially excited about the potential of the system. Computer music composers can, in their works, allude to the creative potential of a system or technological approach. This is often more encouraging than an explicit account of a methodology. So if composition is the most significant point of evaluation, we have returned to a kind of aural tradition and a particular musical community in which it is understood. This is evident in both the importance of distribution channels for this music and the environments in which it flourishes.

The acceptance of music as an expression from within a distinct technological experience is problematic when set against the historical position and social role of music. Computer music does not encourage the same type of culturally influential structures as those of the orchestra or opera, but ones defined and maintained by technology and the individual–consequently, it has a cultural and social history that must be defined through a conjunction of diverse structures. There is an uncomfortable feeling, indeed a wariness, on the part of many listeners towards computer music because it appears as one of a number of computer activities. What seems important is the engagement of the computer itself. Furthermore, the map for the traversal of music, historical in nature, now appears to require excursions into the field of technology, an area where many listeners are reluctant to venture. This could be because the preeminence of technology and method in computer music blocks appreciation of the music for those who do not wish to consider the role or position of technology in the musical result.

A final important point about music technology is its ability to inspire a perspective on traditional music as much as it might provide an alternative. Technological evolution necessarily needs to be reflective and have the capacity to compare its position with that of the past. The sophistication of human creativity is invariably dependent on past and coeval paradigms. Computer music is not without references to the greater musical world.


The preceding discussion has sought to create a framework for an appreciation of what I call a topology of music technology, and convey some insight into its density and location beside or within traditional music. By topology, I not only mean what is at the site of music technology itself but possible relations to the broader musical environment. One must be aware of this as much as the locus of technology itself, in order to fulfill certain musical expectations either as composers or audience.

Throughout history, the processes and methods of music making have evolved by refinement of the basic physical materials (instruments), their interaction with the culture of the physical world (performance) and the subject matter dealt with as part of the communicative power of music. In retrospect, we can envision music as an uncharted landscape, since there seems to have been a series of frontier crossings before the arrival of our musical times. As part of this vision we also see the passage of musical evolution characterized by movement towards rarity and extremes. It progresses through a series of sites, mediated by the desires and expectations of various societies and by the will of musicians, to arrive finally at a cul-de-sac of refinement. This is a place of cryogenic creativity where musical production is a reflection upon the quiet canonic pool of a musical past. Then at certain times a creative heat flares and a musical impulse emerges from this immobile state to pursue an evolutionary course to yet another still point. It is as if there are scattered polarities to which new forces in music must migrate in order to fulfill their destinies. Each momentary point or stage is itself a polarity when seen against its recent past. It appears that this phenomenon is taking place at present, particularly in contemporary computer music.

In the world of music technology, we see and experience profound shifts in the concept and conventions of music making. This dynamic is disorientating to the traditional musician, who has matured in a relatively stable and historically aware environment. Composers new to computer music typically find some aspect of it disturbing. Initially, they are aware of the absence of performers but cannot readily grasp what replaces that important role and seem to search initially for a substitute. It is, I think, a significant oversight to conclude that the performer has simply exited the scene, no longer of any particular importance in the proceedings. The role of the traditional performer has always been more than simply a sound producer. So the spirit of the performer inevitably lingers in many composers' minds as a lost companion.

The absence of a human performer is the most obvious of many, often subtle, differences between the computer music environment and common practice music. These differences eventually amount to a critical mass of some importance which progressively influences the composer. Many of these subtleties are software specific or related to methods of material organization in and around the composition. The kinds of involvement these nuances induce, as part of the creative experience, are often so intense that they become fused with the emotional state of the composition. From there, the differences become transformed into a substance associated with the virtual genre. For example, a composition typically exists as a collection of sound files. Each of these can be listened to in turn or in any order the composer wishes. Certain applications allow for constant adjustment during assembly of the sound files into the composition as a whole. The experience of dealing with these files in isolation can create the impression that the composition is actually constructed from many compositions because the material is often so self-contained.

In stating that the compositional domain has now a different and expanded topology through technology, I would like to supplement the concept of the topology by suggesting that the essential characteristics one might readily identify with this landscape are the numerous efforts of individuals in either artistic or technical domains. The sophistication of locations within the topology is dependent on individuals. I see this as a central tenet of computer music but generally only appreciable from the perspective of an individual's experience. The many musical constructions which dot the landscape are also characteristic of the essentially private nature of the experience. In this section, I therefore explore the idea that whichever way one chooses to interact with the computer, one inevitably seeks ways to develop a private or personal topology.

Critical consideration of the differences between computer and traditional musics eventually gives way to a contemplation of the internal dynamics of computer music itself. For example, Paul Lansky, in a recent paper on Cmix, notes one of the design principles behind its development. This I regard as not only an important nuance of interaction but also a perspective on traditional music practice:

The first [principle] is that its architecture encourages the working methods of the user to emulate the protocols of rehearsal rather than of a performance. The guiding principle of a good rehearsal is not to waste valuable time rehearsing things which don't need work, and to spend time on individual parts, finally combining the whole ensemble only when all the component parts are in shape, i.e. mixing.

This notion of rehearsal is an important indication that the performer has been replaced by a new process directed specifically towards the composer at work, who repeatedly listens to material that may well be in a complete form. It amounts to a process of accepting what will eventually be the final instance of the work, whole or in part. Such repeated listening may convince the composer that there is no such thing as the final mix but that the work should exist as many versions. This is currently something of a radical idea, since distribution favours a single definitive version for circulation.
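At its simplest, the mix that concludes Lansky's rehearsal is a sample-by-sample sum of independently prepared parts. A Python sketch, with sound files reduced to lists of samples purely for illustration:

```python
def mix(parts):
    """Combine independently rehearsed parts by summing them sample by
    sample, treating the missing tail of a shorter part as silence."""
    length = max(len(part) for part in parts)
    out = [0.0] * length
    for part in parts:
        for i, x in enumerate(part):
            out[i] += x
    return out

melody = [0.2, 0.3, 0.2, 0.1]
drone = [0.1, 0.1]               # shorter part: ends in silence
whole = mix([melody, drone])
```

Because each part persists as its own file, nothing prevents mixing the same parts again at different balances, which is one reason the idea of a work existing in many versions arises so naturally.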

Technical Prelude

Up to this point I have treated technical matters with some circumspection, and avoided detailing characteristics of any particular system. There are, however, three systems involved in the remaining part of this text which require some consideration. These systems are UNIX, Cmix and perl.

The UNIX environment has always been particularly fertile for developing individual approaches to everyday programming and administration problems. Although routinely criticized for its obscure syntax and lack of user-friendliness, it offers considerable flexibility and an open-ended forum for many types of computing problems. In pursuing computer music, one faces precisely these types of administration problems. I have often felt that, to manage musical resources with a degree of power and flexibility, it was necessary to look outside the creative environment in which I was primarily working.

There is a general class of tools available in the UNIX world which are not, and will likely never be, specifically for music; these exist for problems associated with the management of data. In this section, I want to describe how I came to use a general purpose administration script language (perl) to coordinate musical activities and develop an awareness of how to exploit the UNIX environment for musical purposes.

Cmix can be thought of as a collection of functions that can generate, process and analyse digital signals. Some of the functions, particularly those with a connection to composition through sound generation or processing capabilities, are called instruments. The important point about these instruments is that they function autonomously from each other. The nature of composition and sound production has generally overshadowed thoughts on multi-instrument applications which work on common material for a specific composition; in other words, on using a collection of Cmix instruments to encapsulate the compositional idea. What typically happens is that instrument A is executed, followed by B then C. Each instrument produces output that may or may not be necessary for the subsequent instruments. But if the output sound files were to be used in sequence then clearly the instruments have a tangible relation. The A, B and C instruments can have a much closer relation, one not simply dependent on their output. In this matter, at least, the script language perl was immediately attractive because of its ability to co-ordinate numerous Cmix sessions that until then had to be treated in isolation.
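The coordination idea can be reduced to a small sketch. Here the "instruments" are stand-in Python functions rather than Cmix programs, and all the names are invented; the point is only that a script can chain A, B and C while keeping every intermediate result available:

```python
def run_chain(samples, instruments):
    """Pass material through a sequence of processing stages, retaining all
    intermediate outputs, as a scripted chain of Cmix sessions would."""
    stages = [samples]
    for instrument in instruments:
        stages.append(instrument(stages[-1]))
    return stages

# Hypothetical stand-ins for three instruments A, B and C.
attenuate = lambda s: [x * 0.5 for x in s]   # A: scale amplitude
reverse = lambda s: list(reversed(s))        # B: reverse in time
repeat = lambda s: s + s                     # C: repeat the material

stages = run_chain([1.0, 2.0, 3.0], [attenuate, reverse, repeat])
final = stages[-1]
```

Keeping every stage, rather than only the final output, is what lets the composer audition and reuse intermediate material, which is the tangible relation between the instruments described above.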

Cmix depends on and functions under UNIX, which perl administers. The relation in this trio inspires a compositional approach that seems more useful than could be possible within one system alone. This is possible because each component has an open design.

If one understands the C programming language, perl is little trouble to program in. A full description exceeds the bounds of this section, but the examples given later will be explained.

Cmix from the Outside

Discussions about the development of Cmix instruments generally focus on the idea of a stand-alone compiled program that takes an input sound file and produces an output sound file. As the earlier example 4.1 demonstrated, it is frequently not necessary to have an input sound file because the program itself can synthesize material. Nevertheless, this internal view of a Cmix instrument presents the notion of a central function which accepts a range of parameters. These have an effect on the sound according to the prescribed operation of the central algorithm. The core algorithm essentially defines the instrument. In the case where a Cmix instrument takes an input sound file, predicting the output can be a near impossible task.

My experiences with the Cmix signal processing toolkit led me to conclude that it was conducive to augmentation from programming resources generally available in the public domain. The consequence is that these programs and utilities tend to affect or enhance the operation of a Cmix instrument from the outside.

My approach and philosophy in using UNIX, perl and Cmix differ from, for example, those of Brad Garton in his Elthar program. But I think there is some agreement with Nicola Bernardini, who voiced a need for a plurality of computer music systems. I mention Elthar because it shares with my work an external approach to Cmix. This is achieved through the generation of Cmix score files from a program/language functioning at a higher level. That is, like my own scripts in perl, Elthar produces output that is input to Cmix. Although our underlying approaches and contexts are quite similar–for instance, we both use Cmix as the processing/synthesis toolkit, and languages (Lisp in Elthar's case) to control the operation of Cmix from external positions–the significant differences lie at a higher creative level.

Elthar possesses a considerable degree of self-reference and autonomy. It is a program in which the composer engages a sophisticated level of interaction directed towards a natural dialogue in music composition rather than the way the computer generally imposes its linguistic imperative/interrogative forum in the process of composition.

Elthar's model for the Artificial Intelligence approach to producing sound is the recording studio; such an environment encourages a particular type of dialogue. This environment and the nature of the language used in it, although familiar, are not conducive to my compositional thinking. I prefer a private, specific context in which I construct a momentary dialogue that will pass as the composition is made manifest. The methods for producing the composition are constructed around the compositional idea, which in this case has more to do with a larger sound world than traditional music. But what I find both fascinating and encouraging is that such a private position can be constructed, largely through the open-endedness of Cmix. Brad Garton also essentially composed the interface by constructing a complex scenario on a foundation he knew would provide stimulating results and, in the process, corroborate and fulfill his private expectations.

In his article "Should Musical Instruments be Dreams?", Nicola Bernardini seems to be in some kind of ideological bind, because he expects or implies that the solution to the problem of providing "all interaction modalities available, possibly in the very same programs" will come from computer music programmers. Bernardini states that, "We need program designers who are aware that, in music, no single human interface is good for all applications, and are able to pick that or those which will best fit the specific purpose." I propose in the remainder of this chapter that it is now possible to start thinking and acting in this way, on the basis of existing software and computer systems (UNIX, Cmix and perl as examples), and that this position on interfaces is something that computer designers have long accepted. Although reluctant to widely promote options, many companies offer more than one operating system and more than one interactive modality. Highly pluralistic environments that have something of an open connection architecture are unlikely to be the product of one programmer/musician. This task has to be undertaken by a team or even a community of users.

It is becoming clear now that computer music composition will involve numerous programs and applications. There may be some central program for dealing with signal processing while another handles control of the processing functions. In the later stages of composition, other applications assemble and unify the material.

My particular view is that the composer should find a way of working that delegates to the computer those tasks which he or she finds unfulfilling, leaving the composer to savour those tasks which are quintessentially compositional in nature, even when they too could be delegated to the computer. Whether the composer uses an iconic/graphic or a command line system, employs algorithmic methods or a cut/copy/paste approach, all have a place in computer music.

Such approaches can be problematic for those not willing to engage the esoteric world of UNIX or multi-unit systems. At the moment any effort to fulfill Bernardini's vision would be so complex that it would be impractical without the resources of an institution. But on a less ambitious and more pragmatic level it is immediately possible.

Cmix and perl

After working with MINC for some time I began to appreciate the convenience of an interpreted script language that could be summoned without compiling and run immediately from the terminal command line.

With the publication of the book Programming perl, it was evident that, as a script language, perl was ideal for the UNIX/Cmix environment. It has reasonably powerful operators and commands, and a syntax similar to C, the language generic to UNIX; since its primary purpose is management of the UNIX environment, it is well integrated there. The user of a UNIX-based workstation invariably spends a considerable amount of time working on the machine in a non-compositional capacity and develops a degree of competence in managing that activity. The user as composer is at an advantage if she can understand the compositional environment beyond the act of composition itself. The ability to connect Cmix functions, together with the possibility of invoking UNIX functions, initially excited my interest: here was a unified programming environment that spanned UNIX and Cmix. Composition could take place in an explicitly richer space than is typical for computer music production. Thus perl functions in the Cmix context primarily as a coordinator between UNIX and Cmix functions. Although it has nothing to do with sound processing or synthesis, as there are no perl functions for that purpose, it has considerable potential for compositional constructs, even meta-constructs, as will be described in the next section.
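A minimal sketch of this coordinating role (the stereo call, its parameter values and the loop are illustrative only, not taken from any actual script): perl computes the varying parameters and assembles scorefile text that would ordinarily be piped to a Cmix instrument.

```perl
# perl as coordinator: compute parameters, then assemble Cmix scorefile
# commands; in practice the text would be piped to an instrument process.
$score = "";
$dur   = 8;                       # duration of each note (assumed value)
$start = 0;
foreach $amp (0.25, 0.5, 1.0) {   # three notes of rising amplitude
    $score .= "stereo($start, 0, $dur, $amp, 0.5)\n";
    $start += $dur;               # place each note after the last
}
print $score;                     # or: print TOCMIX $score;
```

The point is only that ordinary perl control structures stand in for what a scorefile would otherwise state literally.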


Using my composition Z says... as an example, the following discussion examines the idea of a composed interface. Before I begin, however, I would like briefly to discuss this composition and its relation to this research.

As with many musical works, it is often difficult to determine whether the musical idea brought on the technique for its composition or whether it emerged from experimenting with some compositional technique. In this case, I see it as somewhere in between. I had the idea and the source recordings before the technology was in place, but transactions with the technology ultimately affected the work. If the technology had not been appropriate, its development would not have gone ahead and an alternative method would have been found.

In advance of constructing the means to generate this composition, I had recorded my seventeen-month-old daughter's voice. I had no real idea of what to do with the recordings, and let them sit for more than six months before renewing my interest in them. During that time I had been contemplating a more advanced use of perl, and had begun to solve certain low-level technical problems which were preventing consideration of more ambitious undertakings. With these out of the way, I returned to contemplating the content of the voice recordings, and edited the most interesting passages into separate sound files. As I listened to her voice, it crossed my mind that she was making many more vocal sounds than actual words, which was normal; but what was she thinking to stimulate these sounds? Were these sounds articulations about her activities or about something else?

These questions provided me with the idea of composing a work which made prominent her subconscious thoughts as a kind of simple music. I could create this music from the recordings of her voice so that, in theory, there would be a correlation between the processed material and the original recordings. The composition thus consists of predominantly processed voice sounds as the externalized subconscious, and the original voice recordings, these appearing occasionally as they would during her normal play sessions.

perl inspired and facilitated the composition of the subconscious part. It is divided into three six-minute sections, each with a complex sonic surface set in a simple but varying form. This seemed to me an approximate parallel of her cognitive state at that time. Interpolated into this are the original voice recordings used to make up the subconscious part. These recordings initially occur infrequently and always quietly, but eventually build up to a play group scenario just before the end.

Z says... was the first occasion on which I used perl to integrate a number of Cmix instruments that prior to this could not be used together in one script. The modular nature of Cmix instruments made it possible to employ another system to invoke and coordinate them, thus creating an open compositional setting. This application of perl evolved from previous experience on the composition Third Hand. That experience led to the idea of unifying the potential of Cmix instruments in a convenient and relatively painless way. In the following discussion, it principally takes the form of an expansion of the scorefile side of Cmix, which I felt it necessary to implement during the course of my experiences with Cmix.

The implementation is almost as simple as the concept. Within one perl script, I call a number of Cmix instruments, connect up the appropriate input and output sound files, and coordinate external data files and internal variables. By dedicating instruments to separate perl subroutines, I was able to call them on demand and supply the appropriate parameters to the template Cmix instrument.
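The pattern might be sketched as follows (the subroutine name, parameter names and envelope values here are placeholders, not those of the working scripts): one subroutine owns one instrument's template, and the caller supplies only the parameters that vary.

```perl
# One subroutine per instrument; the fixed template lives inside,
# the varying parameters arrive as arguments.
sub stereo_note {
    local($start, $dur, $amp, $pan) = @_;     # on-demand parameters
    $score  = "reset(2000)\n";                # fixed template part
    $score .= "setline(0,0,5,1,35,1,40,0)\n";
    $score .= "stereo($start, 0, $dur, $amp, $pan)\n";
    return $score;               # in practice: print TOCOMND $score;
}

$note = &stereo_note(0, 4, 0.8, 0.5);
```

Each call produces one complete parcel of scorefile text for the instrument it is dedicated to.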

There is always scope for increased clarity and simplicity. In structuring the scripts, I wanted to direct all stdout and stderr (UNIX standard out and standard error streams) through the main script and send them to a specific file (example 4.2). The only complication was ensuring that the buffers for stdout and stderr were flushed immediately in order to monitor script behavior.

sub openstds {
    open(STDOUT, ">$se");      # $se is the name of the output file
    open(STDERR, ">&STDOUT");  # merge stderr into stdout
    select(STDERR); $| = 1;    # buffers will be flushed
    select(STDOUT); $| = 1;    # immediately for stdout and stderr
}
Example 4.2 Initialization of stdout and stderr merging.

Now, by default, the output normally observed from a Cmix instrument will appear in the file whose name has been previously assigned to the variable $se. Monitoring could then take place from another terminal window using the UNIX command,

tail -f syserrors_ver??

where syserrors_ver?? is the particular collection file for that script's output activity. The ability of the script to redirect the data flow from the various Cmix operations permits process activity to be monitored and allows the script to respond to abnormal conditions in a variety of ways. If operations were beginning to deviate from a projected plan of execution, it would be possible to get the script either to attempt some kind of correction or to shut itself down, with suitable event reporting, thereby terminating the compositional process. In general, however, my compositional activities have not warranted this level of sophistication.
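Were such a safeguard wanted, a minimal version (the subroutine and its reporting are my own illustration, not part of the Z says... scripts) need only inspect the exit status of each external call:

```perl
# Check the exit status of each external call; on failure, report and
# let the caller decide whether to correct or shut the run down.
sub run_checked {
    local($cmd) = @_;
    system($cmd);                 # run the instrument or UNIX command
    $status = $? >> 8;            # exit status of the child process
    if ($status != 0) {
        print STDERR "ABORT: '$cmd' exited with status $status\n";
        return 0;
    }
    return 1;
}

$ok = &run_checked("true");       # 'true' stands in for a Cmix invocation
```

The reporting would go to the same merged stdout/stderr file, so the monitoring terminal sees it immediately.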

Once the output from Cmix functions was under control, consideration was then given to the efficiency of the calls to run the Cmix instruments. The setup for an external function call is fairly straightforward. What was initially somewhat confusing was the stdout/stderr management. Example 4.3 is the setup for a Cmix stereo call. This code comes from a working script (work12a in the appendices) that was used as part of the composition Z says... and as such shows considerable idiosyncrasies.

sub mix {
    $cmd = "mix";
    open(TOCOMND, "|$cmd") || die "ERROR with $cmd: $!\n";
    $fname = "$res2out$lmix$snd";
    $filepath = "$input$fname$close";
    print TOCOMND "$output$mixout$close\n";
    print TOCOMND "reset(2000)\n";
    print TOCOMND "setline(0,0,5,1,35,1,40,0)\n";
    print TOCOMND "stereo(0,$moskip,0,$amp,$pan)\n";
    close TOCOMND;
}

output("...")
reset(2000)
setline(0,0,5,1,35,1,40,0)
stereo(0, moskip, 0, amp, pan)

Example 4.3 An edited example of a call to the Cmix stereo instrument.

Below the script is a translation into normal Cmix scorefile format which shows the basic operation. The subroutine mix could be called from within a loop as in the following example or anywhere within the script.

for ($i = 0; $i < $x; $i++) {
    # can do other things here...

    &mix;    # call to subroutine mix
    # and here.
}

Example 4.4 Subroutine mix called within a loop.

The significant advantage of using subroutines is that specific functions can be clarified by being isolated and removed from the body of the script. In perl, as in other languages, a subroutine can be relegated to an external file, referenced in the main script, and accessed as if it were a normal internal subroutine.
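A sketch of the mechanism (the file name mixsubs.pl and the subroutine extmix are hypothetical, and the file is written here only so the example is self-contained): the external file is pulled in with require, after which its subroutine is called exactly like an internal one.

```perl
# Write the external subroutine file (normally it would already exist
# and be shared between scripts), then require it into the main script.
open(LIB, ">mixsubs.pl") || die "can't write mixsubs.pl: $!\n";
print LIB 'sub extmix { return "extmix called with @_\n"; } 1;', "\n";
close(LIB);

require "mixsubs.pl";       # reference the external file
$result = &extmix(3);       # accessed like an internal subroutine
unlink("mixsubs.pl");       # remove the demonstration file
```

A library of instrument subroutines kept this way can be shared among many compositional scripts.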

It should be noted that the previous examples were deliberately selected to emphasize the external position that perl has to Cmix. I feel that the implications of this relation between UNIX and Cmix have yet to be fully appreciated. I can envision perl being a sophisticated coordinator of multiple Cmix instruments, thus creating a distributed, multi-instrument processing context, motivated by immediate compositional demands. In effect, a composition is not only the end production of this network of instruments but the dynamic structure of the network and its various controlling mechanisms.

There is no doubt that, on first considering perl as a musical language, it reflects an anachronistic perspective of computer music. It is a primitive language (no objects, few high-level data structures and no built-in musical functions) and reflects a view of programming that seems fundamentally too simplistic to be of much use to a composer; certainly it is unmusical. But given the sophistication of many scripts that have appeared on the Net or in the perl book, concerns about its primitiveness are unfounded. Its lack of musical significance is also questionable, because the nature of computer music has not been historically defined. It would be premature to exclude anything that can function musically, especially when it continues to evolve.

A central concern, from a musical perspective, is perl's lack of specific compositional constructs. Typically, this would require perl to possess some inherent musical functionality, but since it does not promote musical production explicitly, it is difficult to see the correlation between the language and composition. It should be noted that the use of perl in this case is directed towards a particular type of composition. There is a parallel here with traditional instruments, for one might argue that nothing about the violin indicates the compositions or musical style inherent in the Bach violin partitas. While perl is not a musical instrument, it can be put to that use because of its relation to certain musical functions within the computer.

But criticism of perl as a musically insignificant language, at a time when significance is important in the production of music, is central to its evaluation. Many composers could not approach using it, on the grounds that it does not explicitly contain or refer to any musical orthodoxy they may happen to be aware of. In other words, they would look at perl hoping it might be musically significant, only to find its potential musical function too transparent. I took that to be an enticement to use it.

However, as I began using perl in a compositional context, I soon realized that there was going to be a limit to the sophistication a single script could contain before I could no longer understand or maintain it. So for practical purposes, the composition would not be one script but many. Since this was taking place in non-real-time, performance was not an issue. Using different script files does not necessarily mean completely different contents. A series of script files may differ by a line or a variable, and the scripts may become developmental (as most in appendix D are). They may be developed with the intention of being used, kept intact, copied and changed to become the script used for the actual composition. It is as if there is a workshop full of diverse instruments, some of which have no immediate or perhaps ultimate use.

On the question of script sophistication and composition, it is also possible, though not always obvious, to have compositional structure defined by things external to the algorithmic function of the script. For example, the script will likely depend on external files. These may contain numerical values, file names, or possibly the names of other script files to run, defined as index numbers into a list of routines. The point is that the order and structure of both the contents and quantity of these files is a compositional determinant outside the operation of the script itself. Almost all the scripts contained in the appendices depend on directories of sound files. The script builds a table of file names which it uses to access the sound files when needed. The order of these files can be determined either at the directory level or from within the script.
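As an illustration of such an external determinant (the file name routines.data and the routine names are hypothetical, and the file is written here only to keep the sketch self-contained): an external file of index numbers can fix the order in which the script's routines run, so that reordering the file recomposes the run without touching the algorithm.

```perl
@routines = ("quiet", "dense", "solo");   # treatments the script offers

# Stand-in for a hand-edited external file of index numbers.
open(DATA, ">routines.data") || die "can't write routines.data: $!\n";
print DATA "2\n0\n1\n";
close(DATA);

# The script reads the indices and builds the running order from them;
# the file's contents, not the algorithm, determine the structure.
open(DATA, "routines.data") || die "can't open routines.data: $!\n";
while (<DATA>) {
    chop;
    push(@order, $routines[$_]);   # index numbers into the list of routines
}
close(DATA);
unlink("routines.data");
```

Editing routines.data between runs changes the composition while the script remains static.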

The following subroutines access the data for the script. The data which can be seen in the appendices contributes to the structure of the composition. I will briefly discuss two subroutines. The subroutine opensffiles (Example 4.5) builds a list of sorted sound file names complete with path extensions. It begins by reading in a list of directory names. The actual directories contain numbered sound files (1.snd, 2.snd, 3.snd, etc.). I determined the order and significance of the numbered sound files while editing them. Some I felt should come before others, but essentially they followed the order in which they were extracted from the initial recording. The rest of the code concatenates the correct path to the names. So numbered directories containing numbered files can be externally adjusted to influence the script. The algorithm within the script can remain static while the output changes through variable input.

sub opensffiles { # get sound files and sort them
    open(FILEDIR, $dirsfile) || die "can't open $dirsfile: $!\n";
    while (<FILEDIR>) { chop; $sfdirs[$dirtotal++] = $_; }

    for ($i = 0; $i < $dirtotal; $i++) {
        chop($dir); # get rid of trailing space
        system("ls $listdir | sort -n > $sfiles");

        open(FILE, $sfiles) || die "can't open $sfiles: $!\n";
        while (<FILE>) {

Example 4.5 Subroutine opensffiles.

getfdata (Example 4.6) is called whenever a new sound file is needed. It simply assigns the name string to the variable $sfin. If the end of the list is reached, the index is reset to the beginning.

sub getfdata {
    if ($sfindex < $sftotal) {          # get sf first
        $indur = &sfinfo($sfin, $info); # get the dur of file
    }
    else {                              # reached the list end

Example 4.6 Subroutine getfdata.

In a multi-function script, each function will require particular data for its operations. This data can be kept in an external file. The first example of this was the sound file data in example 4.5. In example 4.7, opensrdata opens and reads in data for the sample-rate conversion program used to transpose sounds. The basic function of examples 4.7 and 4.8 is identical to that of all the subroutines which read in data.

sub opensrdata { # get sampling rate factors
    open(COMP, $transfile) || die "can't open $transfile: $!\n";
    while (<COMP>)

Example 4.7 Subroutine opensrdata.

sub getsrdata {
    if ($srindex < $srdtotal) { # get sr

Example 4.8 Subroutine getsrdata.

The point about these simple scripts is that they represent facets of an interface to an external composition. The sophistication of the composition specification becomes dependent on exterior conditions as well as the interior algorithm. It follows that computer music might embrace a larger network of systems in more creative ways than the immediate computer context has so far suggested. In the above case, there are a number of interfaces, on several levels, that the composer engages in the process of composing the work. Although the interfaces appear hierarchical, due to the structure and operation of the script, I see them as localities on a compositional surface. These localities are accessed at unpredictable intervals during the course of operation and in the body of an algorithmic composition are as transparent as the workings of the central algorithm itself.

I have just begun exploring the potential of perl as a meta-language for Cmix, and it may well be useful in a variety of computer music systems. But essentially, it served a more important purpose in allowing contemplation of the compositional interface from beyond any one system and ideology, as frequently happens in the course of musical experience.

Throughout this chapter I have sought to reflect my consternation and curiosity about the idea of the musical interface and technology, knowing that for most people it is a simpler concept than I envisage. My views have arisen from an attitude that seeks to extend the concept beyond the obvious technical/mechanical relation between humans and machines towards a more creative relation with the domain of computer music.

Extending the idea of the musical interface in a creative space renders convoluted paths that criss-cross the subject of computer music, even though I have focused on a non-real-time system.

The title of this chapter has suggested that the interface can be composed and what unfolds is the construction of a scenario in which such conditions might be seen as possible. Since the context that I am concerned with is creative, much of the engagement of both the subject and the computer is treated that way. The themes/subjects of each section: Interface, Technology, Instrument, Language, Sound, Creativity, Composer, Composition, although internally referencing tangible issues, together form something of a compositional statement which addresses the tendency of the concept of the musical interface to become enlarged.
