Computer music can be defined as music that is generated by, or composed and produced by means of, a computer. The idea that computers might have a role to play in the production of music actually goes back a lot further than one might think.
As early as 1843, Lady Ada Lovelace suggested in a published article that Babbage's 'Analytical Engine' might even be used to compose complex music, if only the correct data could be properly processed. Today, computers are an all-pervasive part of the music-production process and functions that were traditionally the preserve of hardware are now increasingly accomplished in the software domain.
The role of computers in contemporary music production
It is rare to come across a piece of music that has not, at some stage, benefited from involvement of a computer system in its composition, performance, recording or distribution. Composers and producers of music use computers throughout every stage of the process and the various tasks that a computer music system performs can be broken down into a number of discrete, yet interrelated, areas.
Composers have long been fascinated by the idea of music generated independently by systems over which they can exert varying degrees of control. As early as 1787, Wolfgang Amadeus Mozart (1756–91) used a system known as Musikalisches Würfelspiel ('musical dice game') to randomly select sections of music to be played.
Algorithmic and aleatoric composition was much beloved by the avant-garde composers of the 1950s and 1960s, including John Cage (1912–92). In computer-generated music, the computer produces musical material within parameters determined by the composer. One of the first computer composers, Iannis Xenakis (1922–2001), wrote a computer program, in the FORTRAN programming language, to produce musical scores that could be played by live musicians.
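The chance procedure behind the Musikalisches Würfelspiel – and, in spirit, much later algorithmic composition – can be sketched in a few lines of Python. The bar labels and tables below are invented purely for illustration; the historical game used dice rolls against printed tables of interchangeable bars.

```python
import random

# Invented bar labels for illustration; the historical game drew
# each position's bar from a printed table using dice rolls.
bar_tables = [
    ["A1", "A2", "A3"],  # interchangeable bars for position 1
    ["B1", "B2", "B3"],  # interchangeable bars for position 2
    ["C1", "C2", "C3"],  # interchangeable bars for position 3
]

def roll_a_piece(tables, seed=None):
    """Assemble one chance-determined piece: pick one bar per position."""
    rng = random.Random(seed)
    return [rng.choice(table) for table in tables]

piece = roll_a_piece(bar_tables, seed=1)
print(piece)  # one of 3 ** 3 = 27 possible sequences
```

The composer fixes the tables (the parameters); the system makes the moment-to-moment choices – exactly the division of labour described above.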
More recent examples include the program M – originally produced in the 1980s, now revived and distributed by Cycling '74. M is capable of generating endless variations of cyclically looping material and triggers sound via MIDI control of synthesizers or samplers. Musician and producer Brian Eno used SSEYO's Koan Pro software to produce his Generative Music album of 1996. This algorithmic-composition package also plays a role in his 2005 release, Another Day on Earth.
Musicians, composers and publishers use sophisticated music-notation software – such as Sibelius or Finale – to produce musical scores. Effectively musical desktop-publishing systems, these programs enable the user to input music onto staff notation using a combination of MIDI keyboard, QWERTY keyboard and mouse. In this way, passages of music can be edited and laid out on the printed page in much the same way as a word processor handles language.
The software uses MIDI sound modules or built-in software instruments to play back the score and allow the composer to hear what the music actually sounds like before it is ever printed and put before live musicians. Becoming increasingly sophisticated, software like Sibelius can even interpret written instructions such as pizz., and automatically switch playback to an appropriate sound, as well as introduce elements of dynamic and rhythmic expression into its simulated performances. The software is also able to instantly generate parts for all the individual musicians from a full score.
The widespread use of such programs, coupled with increased use of the internet, has given rise to a new phenomenon – internet music publishing. It is now possible to view, listen to and purchase musical scores online, and many websites now offer composers a virtual shop window from which to sell their scores. The Pat Metheny Group made the score to the first section of their 2005 release, The Way Up, freely available on the internet as a Sibelius file, which required no more than the installation of the Scorch plug-in to view and listen to.
Programs such as Max/MSP enable the user to create computer-music performance environments that can generate musical events, process sounds and even interact with other live performers – all in real time. In the world of rock and pop music, computers are used to replay backing tracks to support live performances, add effects and even control video screens and lighting rigs.
Computer programs known as 'sequencers' enable the musician to use MIDI to record, edit and play back musical ideas. Arrangements and compositions can be built by layering sounds on different tracks, looping material and copying and pasting sections of music. Sequencers offer the user a number of different visual representations of the musical material. A graphical overview of the whole piece enables the musician to move around whole blocks or sections of music – such as 'verse' or 'bass line' – while other editing screens allow for fine tuning of detail.
Lists of numerical values give the musician precise control over every nuance of a performance. Music can be presented as a 'piano-roll' display, each note represented by a graphical block, the position of which indicates the pitch of the note and the length of which indicates its duration – analogous to the holes cut into the paper tape of player-piano systems. For those who read music, material can be presented as traditional staff notation. Well-known and widely used sequencers include Apple's Logic and Steinberg's Cubase.
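The piano-roll idea is easy to sketch in code: each note is a block whose vertical position is its pitch and whose width is its duration. The following Python is purely illustrative – the Note fields follow common MIDI conventions (note numbers, times in ticks), not any particular sequencer's file format.

```python
from dataclasses import dataclass

@dataclass
class Note:
    pitch: int   # MIDI note number: vertical position of the block
    start: int   # onset time in ticks: horizontal position of the block
    length: int  # duration in ticks: width of the block

def piano_roll(notes, width=16):
    """Render notes as a crude text piano roll, one row per pitch."""
    pitches = sorted({n.pitch for n in notes}, reverse=True)
    rows = []
    for p in pitches:
        row = ["."] * width
        for n in notes:
            if n.pitch == p:
                for t in range(n.start, min(n.start + n.length, width)):
                    row[t] = "#"
        rows.append(f"{p:3d} |{''.join(row)}|")
    return "\n".join(rows)

melody = [Note(64, 0, 4), Note(62, 4, 4), Note(60, 8, 8)]
print(piano_roll(melody))
```

Each '#' run is one note-block; a graphical sequencer draws the same data as coloured rectangles that the user can drag, stretch and copy.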
The computer has, to a large extent, replaced tape-based media in the recording studio, with most multitrack recordings now being made directly to hard disc. Digital Audio Workstations – or DAWs as they are commonly known – bring to the recording of live audio all the flexible editing versatility of the MIDI sequencer. Non-destructive editing – in which edits leave the original recording untouched and can always be undone – allows for creative experimentation in a way that editing tape with a razor blade never did.
Today, most MIDI-sequencing software also includes the ability to record and manipulate audio signals, and many sound-recording systems also offer MIDI functionality – blurring the distinction between 'sequencer' and Digital Audio Workstation. DAWs include Steinberg's Nuendo and the widely adopted industry standard, Digidesign's Pro Tools.
As with so many tasks that were once handled by dedicated hardware units, the effects-processing of sound is increasingly being undertaken by software – usually by means of 'plug-ins' (small pieces of third-party software that can be installed within the DAW environment). Such software offers many imaginative ways to enhance and transform recorded material through equalization, control of dynamics or the addition of effects such as delay and reverberation.
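At heart, a plug-in effect is a function from input samples to output samples. As a purely illustrative example – not the internals of any real plug-in – a basic feedback delay (echo) effect can be written as a short circular-buffer loop:

```python
def delay(samples, delay_samples, feedback=0.4, mix=0.5):
    """Simple feedback delay line: each echo returns quieter than the last.
    Illustrative sketch only, operating on a plain list of float samples."""
    out = []
    buf = [0.0] * delay_samples  # circular buffer holding the delayed signal
    i = 0
    for x in samples:
        wet = buf[i]                      # signal delayed by delay_samples
        buf[i] = x + wet * feedback       # feed input plus echo back in
        i = (i + 1) % delay_samples
        out.append(x * (1 - mix) + wet * mix)  # blend dry and wet signals
    return out
```

A real plug-in does the same kind of per-sample arithmetic, but inside a host-defined interface (VST, AU, AAX) with parameters exposed to the DAW's controls.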
Many software effects now emulate their classic hardware counterparts and sophisticated reverberation programs – such as Audio Ease's Altiverb – can accurately simulate the natural sound properties of real acoustic spaces, making it possible for, say, a recording made in a London studio to sound as though the music was performed in a concert hall in Vienna.
Synthesis and sampling
The advent of ever-faster, more powerful microprocessors has meant that it is now possible for computers to perform sound synthesis, triggered in real time from connected MIDI keyboards. Software now gives musicians access to a vast range of synthesis techniques. Some companies, like Native Instruments, specialize in producing software that emulates vintage synthesizers such as the Sequential Circuits Prophet 5 or the Yamaha DX7 – even down to the way the on-screen interfaces resemble the visual detail of the original instruments' control panels. Propellerhead's Reason effectively gives the user the software equivalent of a whole rack of synthesis, sampling, sound-processing and sequencing devices – even connected together by virtual patch cords!
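Under the hood, the simplest form of software synthesis is just computing samples of a waveform at the audio sample rate. A bare-bones sketch follows – a single sine oscillator driven by a MIDI note number; real soft-synths add envelopes, filters, modulation and far more.

```python
import math

def midi_to_hz(note):
    """Convert a MIDI note number to frequency (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

def sine_voice(note, seconds, sample_rate=44100):
    """Generate the samples of a pure sine tone for one MIDI note."""
    freq = midi_to_hz(note)
    n = int(seconds * sample_rate)
    return [math.sin(2 * math.pi * freq * i / sample_rate) for i in range(n)]

samples = sine_voice(69, 0.5)  # half a second of A4 at 440 Hz
```

When a key is pressed on a connected MIDI keyboard, a soft-synth runs exactly this kind of calculation in real time, fast enough to fill the audio buffer before it is heard.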
Software is also used to perform sampling duties. In fact, recent years have seen a decline in the market for hardware samplers, unable to compete with the relatively low cost of high-speed computer processors, Random Access Memory (RAM) and hard-disc capacity. A significant industry has grown up around sampling, creating and supplying extensive libraries of sampled musical phrases, drum loops and recordings of every conceivable instrument.
Most DAWs include extensive mixing facilities, enabling the sound engineer to accomplish the task of mixing – balance of relative level, stereo (or surround) position, dynamic control and addition of effects – entirely within software, without the need for a hardware mixing desk. Such systems afford great flexibility, including the ability to instantly recall all the settings of mixes from a previous session and full automation of any changes that might be made throughout a piece, such as fade-ins or panning.
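Mix automation of this kind reduces to simple gain arithmetic applied per sample or per block. As a hedged sketch of two common automation moves – a linear fade-in and equal-power panning – the function names below are invented for illustration and do not correspond to any DAW's API:

```python
import math

def fade_in_gain(t, fade_len):
    """Linear fade-in: gain ramps from 0 to 1 over fade_len seconds."""
    if fade_len <= 0:
        return 1.0
    return min(max(t / fade_len, 0.0), 1.0)

def pan_gains(pan):
    """Equal-power pan law: pan in [-1 (left), +1 (right)] -> (left, right) gains.
    Total power (left**2 + right**2) stays constant as the source moves."""
    angle = (pan + 1) * math.pi / 4  # map [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)
```

Because the whole mix is just stored numbers and functions like these, a DAW can recall every fader and pan position from a previous session exactly, and replay any automated change identically every time.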
However, many musicians find that operating such a complex system with the computer QWERTY keyboard and mouse is unsatisfactory and turn to hardware control surfaces to regain a sense of physical interaction. These controllers are collections of dials and faders, resembling a traditional mixing desk, which, when connected to a DAW, control the software and, in turn, reflect the status of the on-screen virtual controls. Examples of such controllers include the Mackie Control Universal and the Digidesign ICON.
Mastering is the final stage of music post-production that takes place prior to the manufacture and distribution of the chosen medium (CD, vinyl, DVD). At the mastering studio, an engineer will often use a specialist software DAW to make final adjustments to the overall levels and equalization of the sound, before arranging the musical tracks into the desired order. At this stage, the gap between songs can be established and any fades accomplished. If the material is destined for CD, the mastering engineer will also 'tag' each piece of music with track IDs to enable CD players to find each track.