The active crossover in 1952

In the archive of magazines mentioned earlier, I decided to try to find the earliest reference to active crossovers. By sheer good luck, the first magazine I clicked on at random contained an article on triamplification (not yet named “active crossover”) from 1968.

[Image: six amplifiers]

It lists the following advantages of active crossovers:

  1. Improved damping
  2. Lower intermodulation distortion
  3. Improved frequency handling by drivers
  4. Higher power handling
  5. Smoother response
  6. Adjustable crossover frequencies and slopes

It mentions that there were several biamplification products in the late fifties, but that when stereo came along the concept was forgotten.

This article then led me to one on biamplification from 1956, and finally to possibly the earliest article on active hi-fi crossovers, from 1952.

[Image: title of the 1952 biamplification article]

[Image: excerpt from the 1952 biamplification article]

In this article, they design and build their own low-level crossover.

[Image: the 1952 crossover circuit]

Switching back and forth produced a subtle but distinct difference in listening pleasure. The low frequencies seemed a little more pure and less obscured, the middles and highs cleaner. The overall effect was that we had moved one step forward toward exact reproduction of the music as inscribed on the phonograph disk. There was a definite improvement in sound over a considerably better than average single amplifier system with a carefully designed dividing network and well balanced speakers.

They find that other compelling reasons to use the system are the freedom it gives to mix and match drivers without having to worry about their relative sensitivities, and the ability to adjust crossover frequencies easily and quickly.
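The principle they were implementing is easy to sketch in software today. Here is a minimal Python illustration of mine (nothing to do with the 1952 circuit) of a two-way, line-level split: a first-order low-pass and its complementary high-pass, whose outputs always sum back exactly to the input. In the analogue original the same job was done ahead of two power amplifiers; the point is that the split happens at low level, before amplification.

```python
import math

def crossover(samples, fc, fs):
    """Split a signal into low and high bands with complementary
    first-order filters at crossover frequency fc (Hz), sample rate fs."""
    a = math.exp(-2.0 * math.pi * fc / fs)  # one-pole low-pass coefficient
    low, high = [], []
    y = 0.0
    for x in samples:
        y = (1.0 - a) * x + a * y   # low-pass state update
        low.append(y)
        high.append(x - y)          # complementary high-pass
    return low, high

# A 1 kHz tone sampled at 48 kHz, split at 300 Hz: it lands mostly in the
# high band, and the two bands always sum exactly back to the input.
fs = 48000
sig = [math.sin(2 * math.pi * 1000 * n / fs) for n in range(480)]
lo, hi = crossover(sig, 300, fs)
assert all(abs((l + h) - x) < 1e-12 for l, h, x in zip(lo, hi, sig))
```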


Hi-fi manufacturers and customers alike are still struggling with passive crossovers despite the problem having been solved 65 years ago! This has as much to do with the ‘culture’ of audio as with any technical or economic reasons.


My mid-80s video framestore

I was looking through some old photos the other day and was reminded of a thing I built back in the mid-80s. I had become obsessed by the idea of building a device to capture and display photographic images at a time when no normal computer could do it. A standard home computer like the BBC Micro, for example, could display only a small number of colours, and couldn’t even display a smoothly-graduated monochrome image. Later, more sophisticated computers like the Commodore Amiga couldn’t do it without strange restrictions on which colours could be adjacent to others.

I was fully aware of the basic idea of digitising waveforms and storing the results in RAM, having played around with audio sampling prior to this. I found that it was possible to buy chips that could generate frame and line sync pulses from a composite video stream i.e. the output of a standard video recorder or camcorder, and also to split composite video into R, G and B analogue components suitable for sampling. A standard computer monitor could take in separate sync & RGB signals so it wasn’t then necessary to do the reverse and generate composite video again.

Putting it all together, I could build a device that would enable me to grab a single video frame and store it in RAM. I could then replay the frame over and over, reconstituting it via three DACs, to be fed to a standard RGB monitor. I could also stop this process, and allow a computer (a BBC Micro) to read the contents of the RAM for storage on disk. The computer could also upload stored images into the RAM for display – and this would also allow for the possibility of ‘Photoshopping’ images or synthesising them in software.

The pièce de résistance was what fell out of this arrangement naturally: a live digitised image on the monitor that could be frozen by pressing a button.

As I recall, the main technical hurdles were:

  1. High speed ADCs and DACs were expensive and/or outside my comfort zone. In the end, I used three 6-bit ‘flash’ ADCs, and my own home-made R-2R DACs. Consequently, I could capture and display 262,144 colours, which doesn’t sound like much compared to today’s standard 16 million, but was adequate. In monochrome I could display a 64-level grey scale image, which was sufficient to be called ‘photographic’.
  2. How to lock my pixel clock to the incoming video stream. As a stopgap while I thought of something better, I made a super-simple analogue oscillator out of CMOS Schmitt triggers that could be started (as opposed to its output being gated) by setting an input logic level.
  3. RAM was pretty expensive – except for dynamic RAM, and I thought this was too complicated to contemplate. In the end I used a bunch of static RAM chips to give me a resolution of 256×256 pixels. Again it doesn’t sound like much, but with the relatively fine colour graduations, it was not too bad at the time.
  4. A standard UK PAL video frame has 625 lines (although only 576 lines are visible) comprising two interlaced fields of half that number of lines. If I was aiming for a resolution of 256 pixels, I clearly could not digitise the whole frame. In the end I think I sampled and displayed just one of the fields, cropping the middle 256 lines out of the 288 visible lines by starting to digitise once a certain line count was reached after the top of the field. When displaying a sampled image, the same image was in effect displayed in each field.
  5. I needed to make double-sided PCBs for at least part of this device in order to simplify its construction. This involved arduous work with acetate sheets, self-adhesive tape and transfer symbols, and a scalpel.
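The arithmetic behind point 1 is easy to check: three 6-bit converters give 2^18 = 262,144 colours, and a single channel alone gives 64 grey levels. A trivial sketch (hypothetical code, just to show the quantisation the flash ADCs were performing):

```python
# Quantise 8-bit RGB values down to the framestore's 6 bits per channel,
# as the three flash ADCs effectively did.
def quantise6(r, g, b):
    return (r >> 2, g >> 2, b >> 2)   # keep the top 6 of 8 bits

levels_per_channel = 64               # 6 bits per converter
total_colours = levels_per_channel ** 3
print(total_colours)                  # 262144
```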

The uppermost of a stack of three identical PCBs incorporating memory, computer I/O, ADC and DAC for the red, green or blue component.

I eventually made it work pretty well. I started with a single channel of monochrome, and I remember that the first time I ‘froze’ a perfect monochrome image was one of those moments that I probably live for.

I didn’t progress beyond the simple analogue pixel clock – which effectively meant that I set the image’s horizontal width with a potentiometer. It seemed to work perfectly well.

Of course, as so often happens, once the initial thrill was over I didn’t use it much after that, eventually putting the thing to the back of a cupboard and never touching it again – it is still there!

Here are a few of the images I grabbed, mainly from broadcast TV or pointing a camera at magazines. As you can see, it actually worked.


(I seem to remember transferring the images from the BBC Micro via serial cable to a PC in 1998 – the datestamp on the images – and, knowing me, I probably substituted the raw image data into a 256×256 image created in Photoshop or similar so that I never had to actually understand the image header. I would then have resized the images from their non-square pixels to what looked right on the PC monitor using Photoshop. There would have been no more image manipulation after that, so these are effectively the raw images.)
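That aspect-ratio correction is straightforward to reproduce. If 256 samples spanned the full 4:3 picture width but the 256 stored lines covered only 256 of the 288 visible field lines, each stored pixel covers more width than height, so a raw grab looks horizontally squashed on square pixels; stretching the width by about 1.5× puts it right. A pure-Python nearest-neighbour sketch (the exact factor is my own estimate from the numbers above, not a recorded fact):

```python
def stretch_horizontal(rows, factor):
    """Nearest-neighbour horizontal resize of a list-of-rows image."""
    out = []
    for row in rows:
        new_w = int(len(row) * factor)
        out.append([row[int(x / factor)] for x in range(new_w)])
    return out

# A 256x256 grab becomes 384x256: roughly square pixels, assuming 256
# samples spanned the 4:3 picture width and 256 of 288 visible lines.
image = [[0] * 256 for _ in range(256)]
fixed = stretch_horizontal(image, 1.5)
print(len(fixed[0]), len(fixed))  # 384 256
```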

Pop and click remover, old electronics magazines

Just saw a short article about a new product that aims to remove the pops and clicks from vinyl records. It…

…digitizes the signal at 192/24 bit resolution and then uses a “non-destructive” real time program that removes pops and clicks without, the company claims, damaging the music.

…In addition to real-time, non-destructive click & pop Removal the SC-1 features user controllable click & pop removal “strength”, a pushbutton audiophile-grade “bypass” that lets you hear non-digitized versus digitized signal (for when you don’t need pop and click removal), iOS and Android mobile app control and 192/24 bit hi-res digital processing.

Of course it is highly ironic that a vinyl enthusiast should need the services of the digital world to improve the sound of his recordings. And it is obvious (surely) that the digital stream could be stored for later replay without needing to further degrade the original vinyl or wear out the multi-thousand dollar stylus that is no doubt being used. (Omitting to mention the most obvious idea of just listening to a digital recording…)

The aim of the product reminded me of a certain project in an old electronics magazine, a huge number of which I still have in a set of bookshelves that I haven’t touched since 1990 – the date of the last magazine I seem to have bought. Sifting through them, it is amazing how familiar the front covers still are –  a measure of the intensity of youthful hobbies.


From Electronics Today International in April 1979, the project I remembered was a ‘Click Eliminator’ for vinyl records based on an analogue CCD delay line. The idea was to insert a few milliseconds of silence in place of the offensive click. Here’s how it worked:
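The magazine's circuit isn't reproduced here, but the principle is simple to model in software: a delay line gives the detector time to spot the click before the delayed audio reaches the output, so a few milliseconds around it can be muted. A rough Python analogue of the idea (my own sketch, not the ETI design; threshold and window lengths are arbitrary):

```python
def eliminate_clicks(samples, threshold=0.5, mute_len=16, delay=4):
    """Mute a short window of samples around any sample whose jump from
    its predecessor exceeds `threshold` -- a crude software model of the
    CCD delay-line click eliminator."""
    out = list(samples)
    prev = samples[0]
    for i, x in enumerate(samples[1:], start=1):
        if abs(x - prev) > threshold:            # click detected
            start = max(0, i - delay)            # delay lets us mute early
            for j in range(start, min(len(out), start + mute_len)):
                out[j] = 0.0                     # insert silence
        prev = x
    return out

# A smooth ramp with one big spike: the spike (and a little either side
# of it) is replaced with silence; the rest passes through untouched.
sig = [n * 0.01 for n in range(50)]
sig[25] = 5.0
clean = eliminate_clicks(sig)
assert clean[25] == 0.0 and clean[10] == sig[10]
```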


Electronics Today International was the magazine I would go to WH Smiths for on a Saturday, being terribly disappointed if the latest issue wasn’t in. I would say more than 50% of issues featured an audio or hi-fi project: from 1982 an active speaker project for example, or from 1986 “Can Valves make a comeback?” with an accompanying valve amp project. There were any number of MOSFET amps, phono pre-amps, tape noise reduction units. Electronic music featured prominently with projects for effects pedals and synthesisers galore. I devoured this stuff.

Other magazines included: Practical Electronics, Wireless World, Everyday Electronics, Elektor, Electronics and Music Maker, and one I didn’t recall: Hobby Electronics. I also bought any number of computer magazines. I have never thrown any away, so I have hundreds of them gathering dust.

An audio breakthrough

It would appear that there is a particular audiophile DAC with a cult following that gets rave reviews and costs over $2000, and is based on a non-audio DAC chip.

Why would they do that? Well, I think it is so they can run it “NOS” (not New Old Stock, but “non-oversampled”) and add their own “proprietary” filtering – plus it’s different from what the hoi polloi uses so it must be better. But, it would appear that someone has found a glitch, literally.

I am no expert, but I think that because this chip is a non-audio DAC, the output comes directly from an R-2R ladder, or similar. Small capacitive charges are transferred whenever the ladder switches operate, and sometimes the switches don’t all operate at the same speed. This means there is a glitch at the output whenever the DAC value changes, and it is worst when all the switches operate simultaneously, i.e. when the most significant bit changes – around the mid range in other words (hmm…). Presumably there are other significant glitches at multiples of 1/4 full scale and 1/8 full scale too.
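The intuition that mid-scale is the worst case is easy to verify by counting how many ladder switches change state between adjacent codes, on the (rough) assumption that glitch energy scales with the number of simultaneously operating switches:

```python
def switching_bits(code_a, code_b):
    """Number of DAC ladder switches that change state between two codes."""
    return bin(code_a ^ code_b).count("1")

BITS = 16
mid = 1 << (BITS - 1)          # mid-scale transition: 0x7FFF -> 0x8000
quarter = 1 << (BITS - 2)      # quarter-scale major transition

print(switching_bits(mid - 1, mid))          # 16: every switch flips at once
print(switching_bits(quarter - 1, quarter))  # 15
print(switching_bits(100, 101))              # 1: a minor transition
```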

Low pass filtering the output can reduce the amplitude of the glitch at the expense of increasing the settling time. There are better techniques using a further piece of circuitry (a sample-and-hold) but, apparently, for the designers this was regarded as unacceptable for some reason (why?), and at audio frequencies still wouldn’t be as good as a typical $1 audio DAC in a mobile phone.

The evidence is all in the DAC chip’s data sheet:


I don’t know whether the glitch energy scales with the VREF (i.e. the full scale signal amplitude), but this glitch is huge compared to the smallest signals that we might generate with the DAC.

An owner of this product now thinks he is hearing a certain harshness in the sound, and seems to have found that when reproducing a sine wave at -90dBFS, the output of the $2000 DAC contains significant glitches at the zero crossings. It would be interesting to know if there are detectable glitches at 1/4 and 1/8 full scale, too. This could be the phenomenon shown in the data sheet, or a by-product of whatever mechanism is being used, unsuccessfully, to suppress the glitches – they are rumoured to be using a combination of two DAC chips. Scrutiny of other reviews and measurements of the device seems to reveal distortion and noise figures that suggest something strange is going on – apparently.

An aspect of integrated circuit DACs is that because they are very small and constructed on a single chip, they have fantastic performance relative to themselves i.e. they remain monotonic and linear at all times. However, their absolute gain and offset may drift slightly with temperature. These temperature coefficients vary from chip to chip and can even be positive for one chip and negative for another (this appears to be the case for this particular DAC chip according to the data sheet). This means that any attempt to blend the outputs of two DAC chips externally using a combination of scaling, offsetting, inverting, mixing and interleaving would be most unlikely to succeed down at the lowest levels.

If these suppositions are correct, then this product is a great example of where basic engineering appears to have been sacrificed in the interests of just making something ‘different’ and supposedly ‘simpler’ – although, as usual, it ends up being more complicated.

[Last edited 04/05/16]

Understanding vs. Knowledge vs. Expertise

Reading Archimago’s latest article, I am struck by two things:

(a) he’s probably a much nicer person than me

(b) the high end audio business may not be so much a scam as a <word that begins with “cluster”>.

What I’m thinking is that millions of words (not Archimago’s I hasten to add) and thousands of product developments can all stem from an incomplete understanding of something, plus the obscuring of central ideas with mere facts and ‘expertise’. Case in point is the subject of digital cables, discussed in the article. Archimago has endless patience and ‘plays along’ with the manufacturers and their disciples, giving them the benefit of the doubt by testing their products, even though he knows that they will always give more-or-less identical results. I am more inclined towards the idea that we should just ignore them. Unfortunately, neither approach can rid us of the cloud of confusion that surrounds the subject, and which must ultimately divert attention away from real progress (and with the Kii Three and Beolab 90, for example, we at last have some real progress to evaluate, and hopefully to make cheaper).

Which of the following propositions are false?

1. Digital audio is degraded by jitter.

2. Cables carrying digital audio signals impart jitter onto those signals.

3. High quality cables can be designed to reduce jitter compared to others.

4. Digital audio can be improved by high quality cables.

I would say that none of them is completely false in all circumstances, and herein lies the problem. An entire industry is sustained on these propositions; manufacturers talk knowledgeably (and who knows, maybe honestly) about them. But while the propositions may all be true individually and with qualifications, they are an incomplete representation of the role of cables in digital audio. An understanding of digital audio at a system level would show that many implementations are certain to be immune to cable quality (and the remainder of implementations are highly insensitive to it). Those bi-directional implementations that deliver the audio data in packets on demand, slaved to the DAC’s sample rate, are immune to cable-induced jitter by design. Case closed. We could simply move on.
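A toy model makes the point: if the DAC pulls samples from a buffer on its own clock, the arrival timing of the packets, however jittery, simply never appears in the output. A hedged Python sketch (an illustration of the principle, not of any particular interface):

```python
import random

def transport(samples, packet_size=8):
    """Deliver samples in packets with random arrival jitter."""
    buffer = []
    for i in range(0, len(samples), packet_size):
        # The transport adds jitter to *when* packets arrive, but that
        # timing is deliberately never stored: only the data is buffered.
        arrival_jitter_ms = random.uniform(0.0, 5.0)
        buffer.extend(samples[i:i + packet_size])
    return buffer

def dac_output_times(n_samples, fs=48000):
    """Output instants come solely from the DAC's local clock."""
    return [n / fs for n in range(n_samples)]

samples = list(range(64))
buffered = transport(samples)
times = dac_output_times(len(buffered))
# The samples and their output timing are identical no matter how
# jittery the delivery was: cable-induced jitter cannot get through.
assert buffered == samples
assert abs((times[-1] - times[-2]) - (times[1] - times[0])) < 1e-12
```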

But if you have the misfortune to enter into a discussion about the subject on a forum, you will quickly encounter many vociferous people with detailed factual knowledge and expertise regarding cables, and that is all they want to talk about; it won’t be long before someone brings up “eye patterns”, transmission line theory, the “skin effect”, “six nines” copper, monocrystalline metal and so on. Or they want to talk about listening tests and statistical ‘p-values’. This lower level knowledge, even if correct within its own little sub-field, is not really relevant, but unfortunately forms a smokescreen that obscures all understanding.

Light entertainment

Here’s a little controversy from the archives of Stereophile magazine.

Stereophile has an interesting policy whereby an equipment reviewer writes up his subjective experience of testing a device, and only then is it measured for distortion, frequency response and so on. It seems that the magazine has the integrity to publish the two reports whatever the outcome.

Have you ever seen a more polarised review than this one from 2005?

The reviewer says:

The CyberLights represent one of the greatest technological breakthroughs in high-performance audio that I have experienced in my audiophile lifetime….

…for the first time in your life you’ll hear no cables whatsoever. When you switch back to any brand of metal conductors, you’ll know you’re hearing cables—because what’s transmitted via CyberLight will be the most gloriously open, coherent, delicate, extended, transparent, pristine sound you’ve ever heard from your system…

The measurements person says:

If this review were of a conventional product, I would dismiss it as being broken. …I really don’t see how the CyberLight P2A and Wave cables can be recommended. I am puzzled that Harmonic Technology, which makes good-sounding, reasonably priced conventional cables, would risk their reputation with something as technically flawed as the CyberLight.

You’ll have to read the full review for yourself, because the contrast between the two opinions is almost comical. The measurements are quite something to behold.

You see, I sometimes worry that perhaps I just don’t ‘get’ this hi-fi business. £80,000 analogue systems don’t sound anything special to me. Vinyl doesn’t sound as good as digital to my ears but everyone else says it is much better. Designing and building my own system was really quite straightforward, yet the internet is full of intense discussion about how difficult it is; people spend their entire lives building their own speakers and are never happy with them, yet almost three years in and counting, I haven’t felt motivated to modify mine. Are the experts hearing something I am not? Perhaps this review sheds some light on the answer.

Analogue enthusiasts often claim that the signal-modifying effects of whatever product they are listening to actually improve the sound. The usual line is that the indefinable magic of valves and vinyl is down to what those devices add: they are serendipitously restoring something that is supposedly missing from the recording. ‘Poor’ measurements are simply an indication of a harmonious combination of factors that enable the leap from clinical, neutral signal to real music. There is no argument possible against this assertion.

However, in the above review, the writer cannot make that claim. Clearly he has mistaken high levels of distortion and noise, plus extreme frequency response variations, for an absence of colouration. For him, replacing metal cables with “light” was all about removing “grunge” and other “well-known problems”. Because of his extreme analogo-philia, I don’t think he actually knew what ‘neutral’ sounded like. When he heard something that was different from anything he had heard before, he automatically assumed that it must be because cables really are the sonic quagmire he thought they were and that the product was doing what he assumed it was designed to do. For once, it actually was a “night and day” difference but his understanding of what he was hearing was 180 degrees wrong. In the scheme of things, it doesn’t really matter, but it reassures me that 99% of the ‘expert’ opinion based on listening is very dubious indeed – I do think there are people out there who would find much to like in a pair of yoghurt pots linked with string as long as they cost enough.

Stereophile, it appears, doesn’t normally measure cables when they are reviewed. I think we can guess why: there is nothing to measure. Each and every review would feature the same distortion and noise measurements at the very lowest depths of the test equipment’s range, plus a ruler-flat frequency response when using the cable in normal circumstances. It wouldn’t matter if the cable cost £1 or £10,000 – which, absurdly, they sometimes do. To arrange anything different would actually be quite difficult. It is this complete, boring neutrality that Michael Fremer and other cable mythologisers are convinced is plagued with “grunge” and other problems. The justification for the Cyberlight product, so appealing to Fremer, is that it replaces a short section of metal with light and fibre optics, and is analogue – you still connect to the input and output with those awful grungy wires. It is no different from becoming excited about the audio quality of headphones that use an analogue wireless link rather than a cable. Just as with those headphones, there is a little “background hiss” but this is a small price to pay, apparently. And just like those headphones, the signal goes through a link of dubious quality. Very dubious. At least there is a valid justification for wireless headphones, though.

If you gave me about £20 to buy a few parts, I could build you this device in an afternoon, probably. But if I did, I would try to make it work properly. I would certainly try to convince you that the whole product was unnecessary and was corrupting the signal, and that if we really had to use fibre optics we should digitise the signal and send it as pulses. I might also point out that the commercial product is a mess: various “wall warts”, $400 battery packs and “pigtails” that could, depending on what equipment you’re using, destroy your speakers.

And don’t ever unplug or plug in the power to the cables with the amplifier turned on or you’ll send a horrendous THUMP through your system.

For people who might dismiss active speakers and DSP as too complex, there are no limits to the Heath Robinson-esqueness that they can tolerate in the name of ‘analogue’.

Digital Audio’s PR Problem

If you’d never heard of digital audio, but were told that it was now possible to store and play back audio signals on a computer, I don’t think you would raise an eyebrow. After all, how difficult can it be? An audio signal is no different from any other ‘wiggly line’ that computers seem to manipulate with ease: graphics, high quality fonts, CAD drawings, maps etc. for all intents and purposes at infinite resolution.

But somehow, digital audio is seen as a special case, where no one quite believes that it works. Looking at various forum discussions it is apparent that, in fact, it wouldn’t matter how many bits, how many MHz of sample rate, or how few femtoseconds of jitter were specified: audiophiles would still be convinced they could hear the ‘1’s and ‘0’s, jitter, quantisation distortion and so on. The noise and distortion inherent in tape and vinyl, many orders of magnitude greater, get a free pass; the noise in digital audio, no matter how minuscule, must always be portrayed as ear-bleedingly offensive. Why?

I think there are several reasons:

  1. Digital audio is mathematically-based. Long after real world signals have become buried in noise and distortion due to unavoidable physics, the theoretical numbers associated with the maths remain pristine and, quite unambiguously, show errors! Clearly we need better numbers. And so it goes on. In other words, no matter how high the resolution, you can always zoom in and see a theoretical error that looks just as big and clear on the screen or page.
  2. From the outset, the theory behind digital audio was discussed openly, but very few people actually understood it fully (including me). Thirty-odd years later, the misunderstandings persist. These vary from assuming that digital audio cannot know, or fill in, what is in “the gaps”, to failing to understand the significance of dither.
  3. Digital audio provided a complete mathematical solution, in many ways superior to other computer-based wiggly lines. The system is so elegant and simple that people just don’t believe it can work the way it does. [03.03.16 just saw an article that says exactly that: “The intriguing aspect is that those who do understand refuse to believe”]
  4. Digital audio must always be chasing its tail, because as soon as a new performance level is achieved, it becomes possible for every Tom, Dick and Harry to buy the hardware for a small number of pounds, and even to start measuring signals at that level. Suddenly we’re all experts for whom -110dB is an average spec and must therefore be highly audible – although no one has ever heard a signal that quiet. No matter how good, digital audio will always seem mundane.
  5. Digital audio hardware is too complex to build using discrete circuitry. Integrated circuits are cheap. Audiophiles need to know they are buying better stuff than the hoi polloi, but digital audio doesn’t play the game. It remains persistently cheap enough for the masses to buy exactly the same measured performance as the most expensive fancily-boxed version of the same chip. (We are talking £30 versus £30,000, say). In the audiophile mind, this proves that measurements mean nothing and that “bits are not bits”, whereas in reality it shows the opposite.
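On the significance of dither (point 2), the effect is easy to demonstrate numerically: quantising a low-level sine without dither produces an error that is correlated with the signal, i.e. distortion, whereas TPDF dither turns the error into benign, signal-independent noise. A minimal sketch (my own toy demonstration, with arbitrary signal level and length):

```python
import math, random

random.seed(1)

def quantise(x, dither=False):
    """Round to integer LSB steps, optionally adding TPDF dither
    (the sum of two uniform variables, spanning +/-1 LSB)."""
    d = (random.random() - random.random()) if dither else 0.0
    return round(x + d)

N = 20000
sig = [1.2 * math.sin(2 * math.pi * n / 100) for n in range(N)]  # ~1 LSB sine

def error_correlation(dither):
    """Average correlation between quantisation error and signal."""
    err = [quantise(x, dither) - x for x in sig]
    return abs(sum(e * x for e, x in zip(err, sig)) / N)

print(error_correlation(False))  # large: the error tracks the signal
print(error_correlation(True))   # small: the error is just noise
```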

The Rise and Fall of Sony

An article about Sony by the always-interesting Stephen Bayley.

I am trying to think what the last Sony item I bought was. I think it was the AV amplifier I am using for my active crossover system, and this really does seem like a good product – although it is just one of many similar products in a crowded marketplace.

I certainly used to regard Sony as a benchmark for quality design, but I recall a fairly up-market MP3 player that was just plain flaky. An amplifier of theirs distorted audibly when playing a 1 kHz sine wave. Another amplifier had been designed so that live uninsulated wires carrying mains were directly below a ventilation grille. An FM tuner needed its backup battery replacing after a couple of years and I think they wanted £70 to replace it – which of course I didn’t pay. A desktop PC looked very nice but ran hot and was incredibly noisy.

Yes, I think the article may be right.

UPDATE 01.03.16

How could I have forgotten this desperate, cynical ploy?


Handmade electronics

Seen on a forum elsewhere: someone promoting a new ‘DAC’ based on multiple, now-obsolete, consumer-grade integrated circuits cobbled together in series and parallel (I think) using a large circuit board that strongly resembles the sort of thing I used to make in the 1980s. Double-sided rather than using power planes, you can see the multiple power and ground busses running around the board as relatively puny tracks. To any experienced printed circuit board designer, its appearance is literally offensive: presumably designed using a computer, but looking as though created with self-adhesive tape and transfers. The integrated circuits are in sockets – which is what people do when they’re not quite sure if they might need to replace blown-up chips, or are worried about their soldering skills – resulting in multiple cheap contacts in the signal path, which is kind of ridiculous when the purchasers are then going to be using multi-thousand dollar ‘audiophile interconnects’.

You may think I am being very unkind, but get this: the manufacturer wants in excess of £50,000 for it!

I always find it interesting when the producers of these devices provide close-up photographs of their efforts. This again takes me back to my teenage years, where I used to be very proud of my own early electronic assemblies and would photograph them in great detail. It means that sceptical people like me can pore over the photos thinking “Oh yes, I used those terminal blocks in that burglar alarm I once made because they were so cheap” and “Look how he has spliced two wires together and covered it with heatshrink sleeving. And what is that extra wire for?” It also brings back memories of the times when, in my ignorance, I ran into trouble with this kind of construction and would probe around with ground wires attempting to reduce hum loops or noise caused by cross-contamination between the digital and analogue sections. Occasionally I found a connection that would reduce the noise a bit. When this happened I would solder the wire in place!

The asking price is, according to contributors to the forum, justified because not many of this particular product will be made. This is one of my pet hates: assuming that because something is “handmade” it must therefore be better than something churned out by the thousands. I am not even sure it is true for things like furniture or musical instruments, but it is most assuredly untrue for electronics where instead of “hand made” we should be thinking “prototype” or “cobbled together”. On whether the electronic design itself is sound… I couldn’t possibly comment. All I know is that my teenage ‘wannabe’ designs were pretty atrocious and they looked remarkably similar to these photographs. It is very easy to knock something together that ‘works’, but how immune is it to radio frequency interference? Would electrostatic discharge (ESD) damage it, or make it go haywire? Does it produce an almighty ‘thump’ when powering on or a horrible squeal when powering down?  What happens if one of the flimsy wires breaks off? What if there’s a mains ‘brown-out’ – will it blow up the speakers?

And that price. It looks kind of typical in the context of certain audio forums, but just consider what it means. If I were considering having an extension built onto my house, it would probably be of that order of cost. It would involve professional architects, planners, builders. Lots of equipment would be needed, and a lot of materials. And a heck of a lot of labour. Or, I could splurge the cash on some cruddy circuit boards of sub-hobby level quality. Please tell me that no one would ever dream of doing that.

You can hear it in the street, see it in the dragging feet

Gosh, but I’m cynical. I was just reading a review of a very audiophile-ish amplifier where the remote control can only “trim” the volume – any major volume change involves getting up and turning the knob on the amp itself. This apparently results in a simpler signal path. What..? I was trying to think how and why this might be. Apparently the amp uses electromechanical relays to control the volume.

I have become so cynical about this whole business that I actually came up with this thought: maybe the manufacturers were having trouble with the remote control receiver and were worried that it might go haywire and go to full volume so they limited it to “trimming”. Then I came up with the rather more charitable view that it was simply a question of ‘bits’. Maybe for some reason their remote receiver only has a few bits, so they have to choose between coarse control over a wide range or fine control over a narrow range; they chose the latter. In the end, my final guess was that maybe the remote control applies to a separate attenuator in series with the one controlled by the knob rather than being ‘OR’-ed into the control of a single relay-based attenuator. This means that a quiet volume setting from a wide-ranging remote control would restrict the range achievable by the knob, and vice versa, hence the necessity to limit one of the controllers to trim-only duties.

You see, that’s what I have become. I read about a perfectly innocent and highly-desirable audiophile design feature such as a “trim-only” remote control, and my mind wanders off, coming up with these sceptical thoughts. This is not an isolated example of the way I regard the ‘high end’ audio industry…

I, too, am an electronics designer. The way my mind works, I would have identified the control of volume as absolutely the central function of my pre-amplifier and noted that the interaction between remote control and front panel knob was important. If one of my marketing ploys (careful of that cynicism…) was to have no nasty, dirty software in the box then it would certainly be a design headache; less so if I was ‘permitted’ to use a microcontroller (we are talking audiophile prejudices here and some people would hate to have a microcontroller in the same box as the audio signal). But I would have completely failed to realise that a trim-only remote is a perfectly viable product in high end audioland, and can even be marketed as a feature!

Other aspects to consider would have been volume resolution (how many steps do I have available) and logarithmic vs. linear response. If my gimmick (damn, it’s that cynicism again) was going to be relay attenuation then I would have to consider finite switching time and contact bounce. A clever arrangement of relays and resistors giving me high resolution with few relays, but where more than one relay changed state at the same moment, might result in some pretty ugly volume changes. That would need thinking about carefully.
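To make that glitch concrete, here is a minimal sketch (my own illustration, not any particular product's circuit) of a binary-weighted relay attenuator: six relays give 64 levels in 1 dB steps, but the worst-case step flips every relay at once.

```python
PAD_DB = [1, 2, 4, 8, 16, 32]  # dB of pad switched in by each of six relays

def relay_state(level_db):
    """Relay pattern (True = pad switched in) for 0-63 dB of attenuation."""
    return [bool(level_db >> k & 1) for k in range(6)]

def attenuation_db(state):
    return sum(pad for pad, on in zip(PAD_DB, state) if on)

# Stepping from 31 dB (011111) to 32 dB (100000) changes every relay at once:
before, after = relay_state(31), relay_state(32)
print(sum(b != a for b, a in zip(before, after)))  # 6
```

If those six relays don't change state at exactly the same instant, the attenuation passes transiently through values anywhere between 0 dB and 63 dB; that is the ugly volume change, and it is why such designs need the relay timing sequenced or the output muted around transitions.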

The relay attenuator is one of those ideas that really appeals to the audio ‘high end’. It has everything: the ability to choose expensive resistors, the steampunk-ness of relays and the fact they make a mechanical noise when operated! They are necessarily large, physically, so in a long chain of them the signal is forced over quite a long, convoluted path with many solder joints. I imagine that no one worries too much about shielding – do they put the attenuator in a tin can? How would the punter see the lovely expensive resistors if they did?

And another thing that seems less than desirable: people used to go to great lengths to avoid putting switch contacts in series with the signal. If we are talking about micro-diodes being a noteworthy factor in cables, then here is a case where a slightly oxidised, damaged or contaminated contact really could have an effect (and even if the relay is just shunting to ground, it is still in the signal path). Relays for small voltages and currents need to have gold contacts, because otherwise oxidation builds up and is not burned off by arcing (yes, controlled arcing is desirable in power relays); even gold-plated contacts are not immune, since atmospheric damage can work its way under the gold. An alternative that might have worked quite well, the mercury-wetted relay, is banned in the EU. Ideally the contacts should be in airtight chambers to prevent the ingress of dust, moisture and, that great enemy of switch contacts, silicone, which can “creep” over long distances and reduce the effectiveness of electrical contacts. Such arrangements may be found in reed relays and hermetically sealed conventional relays.

So which has the more signal-degrading effect: an expensive relay attenuator or a £2 interconnect cable? If I had to bet…

In the rational world we now have electronic attenuators such as the ones that live in the multi-channel amplifier that I use for my system. These are a natural result of the need for digitally-controllable volume, and have many advantages (high resolution, high precision, high reliability, low distortion, low noise, small size, low cost). Their disadvantage is their high end non-marketability: they are hidden away in integrated circuits and therefore immune to being infused with musicality by skilled designers and artisans. Nor are they well-suited for low depth-of-field photographs in brochures.

The fact that the ‘high end’ unquestioningly prefers the steampunk version with all its obvious faults to the ‘perfect’ modern alternative is today’s hi-fi industry in a nutshell.


An example of the difficulties of designing a relay attenuator:

I just looked at a beautifully-presented DIY project on the web. If we delve into the data sheet for the miniature relays specified by the designer, we find the following stipulation:

“Min. permissible load: 10uA at 10mV”

with an added note:

“This value was measured at a switching frequency of 120 operations/min and the criterion of contact resistance is 50 Ω. This value may vary depending on the switching frequency and operating environment. Always double-check relay suitability under actual operating conditions.”

To me, selling this as a commercial product would look like a bit of a minefield. The contacts in these relays are not in an airtight chamber and so are exposed to the atmosphere. While my prototype might work OK (as far as I could hear, but measurements might reveal a different story), what if I left it for several months without operating the contacts?
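For scale, here is a rough calculation of what a contact at that 50 Ω failure criterion would do to the level. Only the 50 Ω figure comes from the datasheet; the 10 kΩ divider values are my own assumption for illustration.

```python
import math

def level_error_db(r_series, r_shunt, r_contact):
    """Level shift caused by contact resistance in the series arm of a
    resistive divider (r_series over r_shunt)."""
    clean = r_shunt / (r_series + r_shunt)
    dirty = r_shunt / (r_series + r_contact + r_shunt)
    return 20 * math.log10(dirty / clean)

print(round(level_error_db(10_000, 10_000, 50), 3))  # -0.022
```

A steady 50 Ω would be an inaudible 0.02 dB level error; the real worry is that a contaminated contact's resistance is neither steady nor linear, so it tends to add noise and distortion rather than a fixed attenuation.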

It would be fascinating to know which relays are used in commercial products. I wouldn’t bet much on them being radically different. The ones in a photograph of the interior of a certain high end amplifier certainly look very similar…