2011-12-24

Has The Soft-synth Killed Synthesis?

1960: In The Beginning, Bob Created The Filter

If you don't know who Bob Moog is, you don't deserve to own a synth (virtual or otherwise) and should leave the internet, find a Strat and a monkey with a drumkit, and stick to doing covers of Elvis or The Jam – I hear there's good money to be made on the cabaret circuit :)

Even though Bob had some musical background, his first circuits were never intended to be used as musical instruments, but when a friend heard the first tones and warbles and commented that it sounded like something from the movie 'Forbidden Planet', he changed his mind, figured out how to add an old organ keyboard, and the rest is history. Musical academics soon queued up for one, then the musique concrète crowd, then the prog rockers started calling. Walter Carlos' synthetic orchestrations for 'A Clockwork Orange' made the Moog a household name, and Keith Emerson showed what it could do for rock, which led Bob to take a few key circuits from his wall-filling Series 900 boxes and create the Minimoog.

1970: Synthetic Cows And The Polybeast - Thanks For The Memories

Seeing the Minimoog being used as a successful live instrument enticed others to build rivals, such as the ARP Odyssey, the EMS Synthi and Tom Oberheim's SEM – a synth-in-a-box meant to be plugged into a Minimoog but which ended up with a keyboard of its own. The synth escaped the realm of prog-rock and was soon found in classical performance, jazz and pop music, as well as spawning its own horrendous sub-genre of novelty albums full of cover-tunes played on synths, inevitably using "Moog" in the title even when the synths were from other makers. Thanks to pulp-musicians like Hugo Montenegro, we now know that plastic cows say "moog".

I know. Sad, isn't it.

Despite this recognition, synths were like a saxophone in that they could only make one note or sound at a time, and there was no way to save a particular arrangement of knobs & switches except human memory. Some people used patch-sheets to draw knob positions, some took Polaroid photos of the front panel, and Rick Wakeman would glue the knobs of his Minimoog into place once he got a sound he liked and pull another Minimoog out of his collection. Moog tried to solve the first problem with their Polymoog, but it was closer to an organ than a synth, which is why Yamaha's CS50, released a year later in 1976, is considered the first true polyphonic synth. Yamaha's attempt to solve the second problem of remembering sounds was odd – on the CS50's big brother, the massive GX1, there were four boxes on the top, each holding what was essentially a miniaturised but complete copy of the front panel.

The big breakthrough that solved both problems came in 1978 with Sequential Circuits' Prophet 5 – digitally controlled analogue oscillators and filters that could handle a five-finger chord, coupled with a small CPU and battery-backed RAM that could save every control's setting in forty (and, in later revisions, one hundred and twenty) patch memories.
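
In modern terms, a patch memory is nothing more exotic than a table of control values – here's a minimal Python sketch of the idea (the parameter names, slot count and 0-127 value range are illustrative, not Sequential's actual format):

    # Conceptual sketch of a Prophet-style patch memory: a patch is just a
    # snapshot of every panel control, held in battery-backed RAM.
    # Parameter names, slot count and 0-127 range are all illustrative.
    PARAMETERS = ("osc_a_freq", "osc_b_freq", "filter_cutoff",
                  "filter_resonance", "env_attack", "env_release")

    class PatchMemory:
        def __init__(self, slots=40):
            self.slots = [dict.fromkeys(PARAMETERS, 0) for _ in range(slots)]

        def store(self, slot, panel):
            # snapshot the current panel settings into a slot
            self.slots[slot] = {name: panel[name] & 0x7F for name in PARAMETERS}

        def recall(self, slot):
            # hand the stored values back to the voice circuitry
            return dict(self.slots[slot])

    memory = PatchMemory()
    memory.store(0, {"osc_a_freq": 64, "osc_b_freq": 67, "filter_cutoff": 90,
                     "filter_resonance": 30, "env_attack": 5, "env_release": 80})
    brass = memory.recall(0)   # instant total recall, no Polaroids required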

1980: The First Rise & Fall Of Originality

As the '80s progressed, the focus of electronic synthesis shifted away from rich prog-rockers and academics to English punk bands, which led to wide acceptance by pop & rock bands once tracks like "Fade To Grey" by Visage and New Order's "Blue Monday" became top-ten hits. Aspiring bands the world over wanted synth-players as well as drummers and bassists and guitarists, and music shops soon had American and Japanese Prophets and Odysseys and Jupiters next to the Fender guitars and the Zildjian cymbals. One corner of the shop would have staff yelling at guitarists to stop playing "Stairway To Heaven", the other had staff yelling at keyboardists to stop playing "Jump" and "Oxygene".

As more and more broadcast music had synthetic content in some form, synth-nerds would often play synth-spotting – instruments from particular makers often had a recognisable character, which was pretty much the only clue we had, because every synth-player made their own sounds. Many synths of the day had no patch memory, or very little, and on those that had it the factory sounds were usually all overwritten. Watching music videos, many of the synth-players had one hand on the keyboard and the other flying around the controls, twirling knobs and flicking switches as they played. Brian Eno in his Roxy Music days was a good example, as was Manfred Mann in the clip for "Blinded By The Light". At the other end of the spectrum, successful synthesists would be seen with the $30,000 Fairlight (Herbie Hancock's "Rockit") or the $100,000 Synclavier (Alan Clark in Dire Straits' "Money For Nothing").

Then the music died.

On November 22nd 1983, the market's first affordable all-digital synthesiser was launched – the Yamaha DX-7.

Unlike every other synth before it, the DX-7 did away with analogue subtractive synthesis in favour of FM (frequency modulation) synthesis, which made for incredibly accurate bell, chime, brass, percussion and piano sounds. It also had 32 slots of patch memory and, in another first, a cartridge slot where more patches could be stored. Compared to the twenty or so knobs and faders analogue synths usually had, the DX-7 had over 180 separate parameters that combined to make up a single patch or sound ... but they were all buried behind a 2x16-character display and one – count 'em, one – fader. The range of sounds the DX could produce was huge, and it could mimic real instruments far better than anything analogue ever could. It also had, compared to its competition, a ridiculously low price. The DX-7 sold like hotcakes. The airwaves filled with bands that sounded a lot like each other, and in almost every one of them the DX-7's 'E.Piano2' or 'Marimba' or 'Bell1' factory patches could be heard.
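
For the curious, the core of FM is small enough to sketch in a few lines. A minimal two-operator example in Python (using numpy; this illustrates the principle only – the DX-7 used six operators in various routings):

    import numpy as np

    def fm_tone(carrier_hz=440.0, ratio=2.0, index=5.0, seconds=1.0, rate=44100):
        """Two-operator FM: a modulator wobbles the carrier's phase.
        Decaying the modulation index over time sweeps the brightness,
        which is what gives FM its bell- and brass-like attacks."""
        t = np.arange(int(seconds * rate)) / rate
        decay = np.exp(-3.0 * t)                 # simple envelope on the index
        modulator = np.sin(2 * np.pi * carrier_hz * ratio * t)
        return np.sin(2 * np.pi * carrier_hz * t + index * decay * modulator)

    tone = fm_tone()   # one second of a bell-ish tone at 44.1 kHz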

The other two big Japanese synth-makers, Korg and Roland, quickly realised that adopting digital techniques made the actual hardware of a synth dramatically cheaper to build. Roland introduced Linear Arithmetic synthesis with over four hundred parameters per patch, Korg went for a mix of tiny sampled waveforms from real instruments and processor control of analogue filter-chips (with up to 200 parameters per patch, depending on model), whilst newcomer Kawai brought cheap additive synthesis – once the domain of supersynths like the Synclavier – to the market. These synths all shared the same overall characteristics: a bland front panel, an LCD display, never more than four physical knobs or faders, a depressingly small keypad, a card or cartridge slot, and MIDI jacks. Thanks to MIDI, they could even dispense with the keyboard, and the rack-mount synth module was born. The big Americans – Sequential, Moog and Oberheim – stuck mostly with their analogue synths and tried to increase flexibility whilst keeping as many knobs as possible, but the cost of all those physical controls meant they couldn't compete with the Japanese. Even after adopting the same approach, machines like the Oberheim Matrix-1000, the Prophet VS and the Moog Source could not save the companies.

This made creating new sounds a major chore for most musicians – gone was the ability to reach out and tweak a knob for an instant change in the sound. Instead you had to know your synthesis and know your synth, press a few buttons to reach the parameter you wanted, press a few more buttons or move the data-entry control to the new value, and then play the keyboard. None of the early digital synths let you change the sound in real time: you couldn't hold a key down, change a parameter's value and hear the difference without letting go and pressing the key again.

A new market sprang up almost overnight – pre-programmed cartridges full of patches made by other people. Personal computers of the day – Apple IIs, Commodore 64s and Ataris – grew MIDI interfaces and patch-librarian programs. Soon you could buy a thousand new sounds on a single floppy disk. Hardly anyone bought a synth to make their own unique sounds any more, unless they were one of the very few anal-retentives who made sounds to sell.
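
Under the hood, a patch librarian mostly just caught and replayed MIDI System Exclusive dumps. A minimal sketch of the receiving half using the Python mido library (the port name is an illustrative stand-in):

    import mido

    # Capture a synth's bulk patch dump (sent as MIDI System Exclusive)
    # and save it to disk - essentially all an '80s patch librarian did,
    # with a floppy drive instead of a hard disk.
    dump = []
    with mido.open_input("MIDI Interface") as port:  # port name is illustrative
        for msg in port:                             # blocks, yielding messages
            if msg.type == "sysex":
                dump.append(msg)
                break   # assume a single-message dump; real dumps may be multi-part

    mido.write_syx_file("my_sounds.syx", dump)       # .syx is the de-facto format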

Sampling – recording a sound and then altering its playback speed to change pitch – had been around as long as the personal computer had, but for a stand-alone musical instrument with good audio quality you had to spend big bucks on a Fairlight or Synclavier.
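
The speed-change trick is simple enough to show directly. A minimal varispeed resampler in Python with numpy (linear interpolation here; real samplers used better filters):

    import numpy as np

    def varispeed(samples, semitones):
        """Repitch a recording the classic sampler way: read the same data
        faster or slower. Pitch and duration change together, which is why
        sample libraries needed a fresh recording every few semitones."""
        step = 2.0 ** (semitones / 12.0)          # playback-rate multiplier
        positions = np.arange(0, len(samples) - 1, step)
        return np.interp(positions, np.arange(len(samples)), samples)

    # Example: a 440 Hz sine played back 7 semitones up becomes ~659 Hz.
    rate = 44100
    t = np.arange(rate) / rate
    note = np.sin(2 * np.pi * 440.0 * t)
    fifth_up = varispeed(note, 7)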

An offshoot of cheap digitally controlled synthesis was the introduction of affordable hardware samplers, such as E-mu Systems' Emulator, the Ensoniq Mirage and the Akai S1000. The patch-library companies were soon adding disks full of samples to their catalogues, but it was Akai's decision to add a CD-ROM drive peripheral that made the S1000 the studio choice and cemented the Akai sample format as an industry standard. The Aussies battled on, but the sample-driven Fairlight could not compete against machines one-twentieth of the price. The Synclavier hung on for a few more years as a high-end machine, thanks to some huge hard-drive-only sampler additions and the Denny Jaeger string library, and only because it was cheaper to hire a Synclavier than to get the London Symphony Orchestra in to do your film score.

If you couldn't compete on price, you soon died. Fairlight got out of the music game in 1989, and New England Digital folded completely in 1993. Sequential Circuits was bought up by Yamaha, Oberheim was bought by Gibson and stopped making synths, Voyetra and many other smaller makers died completely, Kawai faded into obscurity, and Bob Moog hung on by the skin of his teeth making expensive theremins. By 1992, eighty percent of all synths sold bore a badge reading Roland, Yamaha or Korg, and every device had that telltale single display, single control-fader and a grillion parameters to edit.

1990: A Virtual Birth

In 1995, a small Swedish company took a couple of the world's fastest DSP chips, a handful of cheap knob-workalikes borrowed from computer mice and some very clever software that could mathematically replicate analogue circuitry, put it all into a box, added a keyboard, and painted it red. Clavia's Nord Lead was the first "virtual analogue" synth, with the mission of recreating not only the warmth and character of the analogue synths of yesteryear but the visceral hands-on joy of real-time knob tweakery. The Nord became the most sought-after synth for both reasons, and prompted other companies to release similarly knob-infested synths such as the Alesis A6 Andromeda, Viscount's Oberheim OB12 and the Waldorf Q. A German newcomer, Access, had a huge hit with their knobbly Virus, and the Japanese makers soon hopped on board, with Roland's JP-8000, Yamaha's AN1x and Korg's MS2000 all vying for the gigging synthesist's attention (and money).

By 2002, it was hard to buy a synth that was not covered in knobs – interactive sound creation was once again in the hands of the musician, and the patch-sellers and sample-sellers started to feel the pinch … for a while.

2000: Komputerwelt

It was around this time that two things happened which revolutionised the music business: a massive jump in the processing power of affordable personal computers (Apple's Power Macintosh G4 was famously classified as a supercomputer under US export regulations!) and massive global adoption and use of the internet.

Whilst it's true that the use of computers in music-making has a long history – the Australian computer CSIRAC was the first to play music directly, in 1951 – computer-generated audio synthesis had long been a specialty area, and there were no real standards. If you weren't using a dedicated synth-chip on a sound card, you needed some impressive CPU grunt to math out the waveforms in something resembling realtime. Early soft-synth programs such as ReBirth, Koblo's synths and MetaSynth could easily max out a state-of-the-art PC or Mac if you tried to do a lot at once, and you could forget trying to run more than one synth at a time.
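
Some rough arithmetic shows why. A back-of-the-envelope in Python (the per-sample operation count is a made-up illustration):

    # Why a mid-90s CPU choked: every audio buffer has a hard deadline.
    RATE = 44100            # samples per second
    BUFFER = 256            # samples per callback
    deadline_ms = BUFFER / RATE * 1000          # ~5.8 ms to fill each buffer

    # Assume (purely for illustration) ~200 floating-point ops per sample
    # per voice for oscillators, filters and envelopes:
    OPS_PER_SAMPLE = 200
    voices = 8
    ops_per_second = RATE * OPS_PER_SAMPLE * voices   # ~70 million ops/sec

    print(f"{deadline_ms:.1f} ms per buffer, ~{ops_per_second / 1e6:.0f} MFLOPS "
          "just for synthesis - before effects, mixing or the OS")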

Enter Digidesign and Steinberg, two companies that had been in the audio industry since 1984, with almost identical competing products: Pro Tools and Cubase. Both started off as MIDI-only sequencers at almost the same time, both added realtime audio multitracking at around the same time, and both came up with the concept of a modular plug-in architecture at around the same time. Pro Tools was more often than not the professional studio's choice, because it relied heavily on Digidesign's own high-end hardware. Cubase, on the other hand, could make use of audio hardware from many different makers, thanks to the ASIO standard Steinberg developed, and was more often found in semi-pro and hobbyist studios.

Digidesign's concept for plug-ins was quite clever – they were written in such a way that they could be run either directly on the host computer's CPU or offloaded to a PCI card full of DSP chips. This was great for pro studios, which could afford to add four or more 'DSP farms' to a PC or Mac and run a dozen convolution reverbs, twenty mastering limiters and a few synths and other effects on a 100 MHz PowerPC or Pentium. Not so great if you'd spent all your money on the fastest computer in the shop and were still limited to three synths and three effects plug-ins before the audio output started to stutter.

Steinberg decided to make their plug-ins CPU-dependent, but opened the spec, knowing that computing power was on the rise and eventually there would be cheap computers powerful enough to run dozens of the most complex bits of software ever devised without breathing hard.

2005: The Second Rise & Fall Of Originality

Steinberg's double whammy of the ASIO and VST standards soon meant that to recreate a full electronic music studio, all you needed was a decent computer and a decent sound card. However, because everything was happening on a computer screen, everything was controlled by proxy. For some users, having to edit each control one at a time with a mouse was about as enjoyable as creating a new DX-7 patch from scratch. Having to constantly shift from right-brain creative mode to left-brain logical operate-and-control mode was, they said, the fastest way to kill the creative process and turn any recording session into one long act of frustration.

Music-making is, after all, an intimately interactive experience, and anything which adds time between the idea of a note forming in your brain, your fingers triggering it, and that note reaching your ears is a bad thing. In fact, no matter the task at hand, the thought-action-reaction loop needs to be kept as short as possible, preferably under 200 milliseconds. Interactivity is key, and the interface between you, your thoughts and the desired outcome of the device you are manipulating must be as intuitive as possible. Apple know this, and it's why the iPhone became the most sought-after and most emulated smartphone less than six months after launch.
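
To put rough numbers on that 200-millisecond budget, here's an illustrative latency tally for a typical soft-synth rig (every figure below is a ballpark assumption, not a measurement):

    # Rough thought-to-ear latency budget for a software synth rig.
    # Every figure is a ballpark assumption, for illustration only.
    budget_ms = {
        "MIDI transmission (3 bytes @ 31.25 kbaud)": 1.0,
        "USB / driver handling":                      2.0,
        "audio buffer (256 samples @ 44.1 kHz)":      5.8,
        "D/A conversion and output stage":            1.5,
        "speaker-to-ear (1 m of air)":                2.9,
    }
    total = sum(budget_ms.values())
    print(f"total ~= {total:.1f} ms")   # ~13 ms - fine; but a 2048-sample
                                        # buffer alone adds ~46 ms and the
                                        # instrument starts to feel sluggish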

This is also why having lots of physical controls directly under your fingertips is so vital to a smooth flow of creative juices, and why we have a healthy industry making control surfaces that talk MIDI and/or USB and carry all manner and number of knobs, faders, buttons, pads, sticks and other controls that can be mapped to a virtual synth or effect's parameters and thus be recorded. Many computer musicians run a dozen or more virtual instruments and effects in a single session, and it is frustrating to have to remember to change a control surface's internal program to match the front-most plug-in. Smart controllers like Mackie's Universal Control system or Novation's ReMote SL range reduce some of that frustration by watching the screen for you and switching their settings to whichever VST has focus.
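
Strip away the branding and every one of those mappings boils down to "CC number in, parameter change out". A minimal sketch with the Python mido library (the CC number, port name and set_parameter stub are all illustrative):

    import mido

    def set_parameter(name, value):
        """Stand-in for whatever parameter API your plug-in host exposes."""
        print(f"{name} -> {value:.2f}")

    # Listen to one hardware knob (CC 74, conventionally 'brightness')
    # and drive a soft-synth parameter with it.
    with mido.open_input("ReMote SL") as port:      # port name is illustrative
        for msg in port:
            if msg.type == "control_change" and msg.control == 74:
                set_parameter("filter_cutoff", msg.value / 127.0)  # 0-127 -> 0.0-1.0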

For some, even this is not enough. Still too slow. Still not enough feedback.

Alas, this is extra hardware which requires extra money, and a lot of budding bedroom musicians have little or none to spend on such things – in fact, a large proportion of computer musicians never even paid for the software they use, thanks to the pervasive internet and the ease with which illegitimate copies can be downloaded and used without worry. Because of the frustration factor described above, most users never bother to get minutely involved with controlling their plug-ins and once again rely solely on presets.

2010: Plug In, Browse On, Vague Out

So, to the original question – has the soft-synth killed synthesis? – the present-day answer is no. Not directly, anyway. So what did kill the art of synthesis? Partly tight-arsed penny-pinching (including outright theft), partly intrinsic human laziness, but mostly the reduction in interactivity – once again each aspect of a sound is removed from direct, instant-feedback interaction.

Once again we are seeing preset collections being offered for sale and/or download. We are also seeing a massive increase in the number and size of sample libraries. Unfortunately, neither of these helps creativity; in fact they hinder it to a very large extent – we become buried in a morass of patches, programs, plug-ins and samples to choose from, and flicking through thousands of sounds for that one special thing makes us forget what we were looking for. Jean-Michel Jarre sums it up rather eloquently:

"Digital technology took electronic music down a blind alley. Musicians were compelled to work in an increasingly cerebral and abstract fashion on samples of existing sound rather than creating original sounds as they had with the first wave of synthesisers.

"The excitement of being able to work on sounds in a tactile, manual, almost sensual way is what drew me to electronic music in the first place. The lack of limitations is very dangerous. It is like the difference for a painter of getting four tubes with four main colours or being in front of a computer with two million colours. You have to scan the two million colours and when you arrive to the last one you have obviously forgotten the first one. In the Eighties we became archivists and everything became rather cold as a result."


Noted composer and synthesist nonpareil Evangelos Papathanassiou – better known as Vangelis – has almost identical thoughts:

"The way music technology has been designed today, it takes you away from the spontaneity and from the human touch, because you have to follow the machines, and program. I'm not doing that. Everything I do is not pre-programmed, it is done on the spot. One take. Everything is done live, never programmed.

"Comparing the technology I have today with what I had when I did Chariots Of Fire, there is nothing. I have a wider choice of sound, and the sound quality is better. But the system I use is exactly the same, I never change the approach, it is always live. All I've changed is the source of sound, that's all.

"The playability of modern synthesisers is a big problem, you have to use computers, you have to program, you have to choose, and these kinds of things ... I can't do it that way, so I use my own system to access the sounds, to bypass this difficulty, so that it is instant, immediate.

"As you create sound textures and qualities, so you create the composition."


Less Is More

So what's the solution?

First thing to tick off the checklist: Get a good controller with plenty of knobs and/or faders, plus a MIDI keyboard. If they are both in the same box, even better – there is plenty to choose from, but you can't go wrong with the Novation ReMote SL25. Until you do, you're not making music. If you're determined to stick with a computer keyboard and a mouse and nothing else, you're not a musician. You're a bit-shuffler, a tinkerer; you don't even rate as an audio sculptor. Chimpanzees can make better rhythms slapping the ground bare-handed than you could by clicking beat-grids in FL, and a whale's fart has a better melodic hook than your quantised mouse-drags on a Cubase piano-roll.

Second: Stop being an archivist. Stop wasting time downloading every VST, patch-set and sample library you see a link to. He who dies with the biggest collection of plugs & samples is the biggest loser of them all. Cure yourself of versionitis, that compulsive need to have the uber-latest version of everything – the world isn't going to end just because you're on version 1.02 when everyone else is using version 4. Newer is not automatically better.

Third: Reduce your palette. One DAW package (you can't go wrong with Live, but Tracktion and REAPER also have good interfaces). No more than six soft-synths, one sampler. One delay, one or two reverbs, one compressor or comp/limiter, a gater if you're into dance music. Up to four "weird" sound-manglers. Once you've settled on your plug-ins (any more than ten in total defeats the purpose), delete the rest. That's right, delete them, get rid of them – or at the very least dump them to DVD or a spare hard drive and leave the backups with a friend or family.

Finally, learn your VSTs backwards. Experiment with every control, every value; learn where the sonic texture boundaries lie. Set up your control-surface so that controls for similar functions across VSTs sit in the same location on the surface. The aim is to be able to reach out and tweak a control to move closer to the sound in your head without having to think about it – it needs to be intuitive, something you can do almost by instinct. Once you think you're getting a good grip on what a VST can and can't do, delete all the presets it came with. Once you have mastered your less-than-a-dozen instruments and effects, then and only then can you think about adding another synth or effect to the collection.

Claude Monet painted 'Still-Life with Anemones' with eight individual pigments. Leonardo used fourteen on the 'Mona Lisa'. Like a good painter, you need to devote time to mixing your palette of available colours and textures, learning what works and what doesn't. You'll find that with less, you can do more than you thought possible. A lot more.

iPad, You Pad, We All Pad - Let Your Fingers Do The Tweaking

Over the decades, academic and experimental musicians such as David Vorhaus, Tod Machover and Don Buchla have focused on different methods of interacting with and controlling sound in performance, and occasionally something clicked and actually made it to market – things like the ribbon controller, Yamaha's and Akai's breath controllers, and the Chapman Stick. Toshio Iwai created the odd multi-button "Tenori-On" in 2005, which Yamaha brought to market in 2007, around the same time as the Monome appeared – a low-cost, easily modifiable button-grid controller that could be put to all sorts of uses.

The last few years have seen rapid development in the use of touch-screen technology; the two most notable examples to have reached the market are Haken Audio's "Continuum" fingerboard and JazzMutant's "Lemur" multi-touch interactive display (JazzMutant later became Stantum). A dedicated computer in its own right, the Lemur can be programmed to display any graphic imaginable and map regions of its touch-screen surface to send control messages (MIDI, DMX and some others) out to another device such as a hardware (or software) synth, a lighting controller or even an industrial robot. French duo Daft Punk are enthusiastic fans of the Lemur, using up to six of them in their live performances.
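
Conceptually, the Lemur's job is simply turning touch geometry into streams of control messages. A sketch of the idea in Python with mido (the CC assignments and port name are invented for illustration):

    import mido

    # Lemur-style XY pad: one finger position drives two parameters at once.
    # Coordinates are normalised 0.0-1.0; the CC numbers are invented.
    out = mido.open_output("To Synth")              # port name is illustrative

    def touch_moved(x, y):
        out.send(mido.Message("control_change", control=74, value=int(x * 127)))  # cutoff
        out.send(mido.Message("control_change", control=71, value=int(y * 127)))  # resonance

    touch_moved(0.5, 0.8)   # one gesture, two parameters, no menu-diving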

The Lemur is an excellent example of instant visual feedback – you see where and how your fingertips are interacting with a control, and your ears and brain register the changes almost instantly. As good as real physical controls you can pinch between thumb and finger to twist or flick … almost. Until someone develops technology to make screens extrude graspable bumps under computer control, the Lemur is about as good as it gets with display-centric control surfaces. Despite being available since 2005, limited production kept the price of a Lemur high ($2,000 or so), which is one reason Stantum decided to close the Lemur division completely in late 2010.

Apple's iPhone and iPod Touch were the first to send shivers up Stantum's spine; despite their tiny screen size compared to the Lemur, keen programmers were soon coding and releasing apps with a decided musical leaning, including a couple which turned the iPhone into an interactive controller for soft-synths in Logic. Once Apple's $600 iPad came out, the pace really picked up: several big names (MOTU and Ableton, for example) released control-surface apps, and several more from independents became available. The iPad became the "poor muso's Lemur", and Stantum rapidly lost sales.

Don't like Apple? Don't worry. iPad knockoffs running Windows, Android and Linux are appearing on the market – some have multitouch, some don't. It means that programs to turn them into Lemur-alikes will appear on them too, sooner rather than later. The Lemur was the best thing to happen to electronic music in the last decade, but it took Apple to make the hardware affordable and more accessible to programmers, and once Apple had shown the market was ripe for something every geek said would fail miserably, everyone else decided to hop on the bandwagon. Send in the clones …