The First Lossy Codec

(probably).

Nowadays we are used to the concept of the lossy codec that can reduce the bit rate of CD-quality audio by a factor of, say, 5 without much audible degradation. We are also accustomed to lossless compression which can halve the bit rate without any degradation at all.

But many people may not realise that they were listening to digital audio and a form of lossy compression in the 1970s and 80s!

Early BBC PCM

As described here, the BBC were experimenting with digital audio as early as the 1960s, and in the early 70s they wired up much of the UK FM transmitter network with PCM links in order to eliminate the hum, noise, distortion and frequency response errors that were inevitable with the previous analogue links.

So listeners were already hearing 13-bit audio at a sample rate of 32 kHz when they tuned into FM radio in the 1970s. I was completely unaware of this at the time, and it is ironic that many audiophiles still think that FM radio sounds good but wouldn’t touch digital audio with a bargepole.

13 bits was pretty high quality in terms of signal-to-noise ratio, and the 32 kHz sample rate gave something approaching 15 kHz audio bandwidth which, for many people’s hearing, would be more than adequate. The quality was, however, objectively inferior to that of the Compact Disc that came later.

Reducing to 10 bits

In the later 70s, in order to multiplex more stations into a lower bandwidth, the BBC wanted to compress higher quality 14-bit audio down to 10 bits.

As you may be aware, reducing the bit depth leads to a higher level of background noise due to the reduced resolution and the mandatory addition of dither noise. For 10 bits with dither, the best that could be achieved would be a signal-to-noise ratio of 54 dB (I think I am right in saying), although the modern technique of noise shaping the dither can reduce the audibility of the quantisation noise.
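
As a rough sanity check on that figure (this is my own back-of-the-envelope arithmetic, not a BBC figure, and the exact number depends on the type of dither and on how full scale and headroom are defined), the textbook rule of thumb puts a 10-bit quantiser in the same general region:

    # Back-of-the-envelope SNR for a 10-bit quantiser (illustrative only).
    # Rule of thumb: SNR ≈ 6.02*N + 1.76 dB; TPDF dither adds roughly
    # 4.8 dB of noise power, pulling the dithered figure into the 50s dB.
    N = 10
    undithered_db = 6.02 * N + 1.76        # ≈ 62.0 dB
    dithered_db = undithered_db - 4.77     # ≈ 57.2 dB
    print(round(undithered_db, 1), round(dithered_db, 1))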

This would not have been acceptable audible quality for the BBC.

Companding Noise Reduction

Compression-expansion is a noise reduction technique that was already used with analogue tape recorders e.g. the dbx noise reduction system. Here, the signal’s dynamic range is squashed during recording i.e. the quiet sections are boosted in level, following a specific ‘law’. Upon replay, the inverse ‘law’ is followed in order to restore the original dynamic range. In doing so, any noise which has been added during recording is pushed down in level, reducing its audibility.

With such a system, the recorded signal itself carries the information necessary to control the expander, so compressor and expander need to track each other accurately in terms of the relationships between gain, level and time. Different time constants may be used for ‘attack’ and ‘release’ and these are a compromise between rapid noise reduction and audible side effects such as ‘pumping’ and ‘breathing’. The noise itself is being modulated in level, and this can be audible against certain signals more than others. Frequency selective pre- and de-emphasis can also help to tailor the audible quality of the result.
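
As a minimal sketch of the idea (mine, not dbx’s actual circuit), here is the 2:1-in-decibels law such systems typically follow, with the time constants and pre-/de-emphasis described above left out:

    # dB-domain companding, assuming a dbx-style 2:1 level law.
    # Real systems apply this with attack/release time constants and
    # pre-/de-emphasis; the numbers here are purely illustrative.
    def compress(level_db):
        return level_db / 2.0   # quiet passages boosted, loud ones pulled down

    def expand(level_db):
        return level_db * 2.0   # inverse law restores the original dynamics

    # A quiet passage at -60 dB goes onto tape at -30 dB...
    on_tape = compress(-60.0)                 # -30 dB
    # ...and on replay the expander applies -30 dB of gain to restore it,
    # pushing the tape's own noise floor (say -60 dB) down towards -90 dB.
    replay_gain = expand(on_tape) - on_tape   # -30 dB
    print(expand(on_tape), -60.0 + replay_gain)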

The BBC investigated conventional analogue companding before they turned to the pure digital equivalent.

N.I.C.A.M.

The BBC called their digital equivalent of analogue companding ‘NICAM’ (Near Instantaneously Companded Audio Multiplex). It is much, much simpler, and more precise and effective than the analogue version.

It is as simple as this:

  • Sample the signal at full resolution (14 bits for the BBC);
  • Divide the digitised stream into time-based chunks (1ms was the duration they decided upon);
  • For each chunk, find the maximum absolute level within it;
  • For all samples in that chunk, do a binary shift sufficient to bring all the samples down to within the target bit depth (e.g. 10 bits);
  • Transmit the shifted samples, plus a single value indicating by how much they have been shifted;
  • At the other end, restore the full range by shifting samples in the opposite direction by the appropriate number of bits for each chunk.

Using this system, all ‘quiet chunks’ i.e. those already below the 10 bit maximum value are sent unchanged. Chunks containing values that are higher in level than 10 bits lose their least significant bits, but this loss of resolution is masked by the louder signal level. Compared to modern lossy codecs, this method requires minimal DSP and could be performed without software using dedicated circuits based on logic gates, shift registers and memory chips.
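
To make the chunk-by-chunk shifting concrete, here is a minimal sketch in Python (my own illustration, not the BBC’s hardware and not the demo program mentioned below), assuming 14-bit input, 10-bit transmission and 32-sample (1 ms) chunks at 32 kHz:

    # NICAM-style near-instantaneous companding, sketched with NumPy.
    # Assumes 14-bit signed input and 10-bit signed transmission words.
    import numpy as np

    CHUNK = 32        # samples per 1 ms chunk at 32 kHz
    TARGET_BITS = 10  # transmitted word length

    def compand_chunk(samples):
        """Shift a chunk of 14-bit samples down until it fits in 10 bits."""
        peak = int(np.max(np.abs(samples)))
        shift = 0
        while (peak >> shift) > 2 ** (TARGET_BITS - 1) - 1:
            shift += 1
        return samples >> shift, shift    # transmit these plus the shift value

    def expand_chunk(samples, shift):
        """Receiver side: restore the original scale (the lost LSBs stay lost)."""
        return samples << shift

    # A loud chunk loses its bottom bits; a quiet chunk passes through unchanged.
    rng = np.random.default_rng(0)
    loud = rng.integers(-8192, 8192, CHUNK)   # full 14-bit range
    tx, shift = compand_chunk(loud)
    restored = expand_chunk(tx, shift)        # equals loud apart from the lost LSBs

A chunk whose samples all sit within the 10-bit range gets a shift of zero and is sent bit-exact, which is exactly the ‘quiet chunks’ behaviour described above.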

You may be surprised at how effective it is. I have written a program to demonstrate it, and in order to really emphasise how good it is, I have compressed the original signal into 8 bits, not the 10 that the BBC used.

In the following clip, a CD-quality recording has been converted as follows:

  • 0-10s is the raw full-resolution data
  • 10-20s is the sound of the signal reduced to 8 bits with dither – notice the noise!
  • 20-40s is the signal compressed NICAM-style into 8 bits and restored at the other end.

I think it is much better than we might have expected…

(I wanted to start with high quality, so I got the music extract from here:

http://www.2l.no/hires/index.html

This is the web site of a label providing extracts of their own high quality recordings in various formats for evaluation purposes. I hope they don’t mind me using one of their excellent recorded extracts as the source for my experiment).

Two hobbies

An acoustic event occurs; a representative portion of the sound pressure variations produced is stored and then replayed via a loudspeaker. The human hearing system picks it up and, using the experience of a lifetime, works out a likely candidate for what might have produced that sequence of sound pressure variations. It is like finding a solution from simultaneous equations. Maybe there is more than enough information there, or maybe the human brain has to interpolate over some gaps. The addition of a room doesn’t matter, because its contribution still allows the brain to work back to that original event.

If this has any truth in it, I would guess that an unambiguous solution would be the most satisfying for the human brain on all levels. On the other hand, no solution at all would lead to a different perception: the reproduction system itself being heard, not what it is reproducing – and people could still enjoy that for what it is, like an old radiogram.

In between, an ambiguous or varying solution might be in an ‘uncanny valley’ where the brain can’t lock onto a fixed solution but nor can it entirely switch off and enjoy the sound at the level of the old radiogram.

I think a big question is: what are the chances that a deviation from neutrality in the reproduction system will result in an improvement in the ability of the brain to work out an unambiguous solution to the simultaneous equations? The answer has got to be: zero. Adding noise, phase shifts, glitches or distortion cannot possibly lead to more ‘realism’; the equations don’t work any more.

But here’s a thought: what if most ‘audiophile’ systems out there are in the ‘uncanny valley’? Speakers in particular doing strange things to the sound with their passive crossovers; ‘high end’ ones being low in nonlinear distortion, but high in linear distortion.

What if some non-neutral technologies ‘work’ by pushing the system out of the uncanny valley and into the realm of the clearly artificial? That is certainly the impression I get from some systems at the few audio shows I go to. People ooh-ing and aah-ing at sounds that, to me, are being generated by the audio system and not through it. I suspect that different ‘audiophiles’ may think they are all talking about the same things, but that in fact there are effectively two separate hobbies: one that seeks to hear through an audio system, and one that enjoys the warm, reassuring sound of the audio system itself.

What more do we want?

As I sit here listening to some big symphonic music playing on my ‘KEF’ DSP-based active crossover stereo system, I am struck by the thought: how could it be any better?

I sometimes read columns where people wonder about the future of audio, as though continuous progress is natural and inevitable – and as though we are accustomed to such progress. But it does occur to me that there is no reason why we cannot have reached the point of practical perfection already.

I think the desire for exotic improvements over what we have now has to be seen within the context of most people having not yet heard a good stereo system. They imagine that if the system they heard was expensive, it must therefore represent the state of the art, but in audio I think they could well be wrong. Some time ago, the audio industry and enthusiasts may even have subconsciously sensed that they were reaching a plateau and begun to stall or reverse progress just to make life more interesting for themselves.

At the science fiction level, people dream of systems that reproduce live events exactly, including the acoustics of the performance venue. Even if this were possible, would it be worth it without the corresponding visuals? (and smells, temperature, humidity, etc.?)

Something like it could probably be achieved using the techniques of the computer games industry: synthesis of the acoustics from first principles, headphones with head tracking, or maybe even some system of printed transducer array wall coverings that could create the necessary sound fields in mid-air if there was no furniture in the room (and knowing the audio industry, it would also supplement the system with some conventional subwoofers). My prediction is that you would try it a couple of times, find it a rather contrived, unnatural experience, and next time revert to your stereo system with two speakers.

On a more practical level, the increasing use of conventional DSP is predicted. We are now seeing the introduction of systems that aim to reduce the (supposedly) unwanted stereo crosstalk that occurs from stereo speakers. The idea is to send out a slightly attenuated antiphase impulse from one speaker for every impulse from the other speaker, that will cancel out the crosstalk at the ‘wrong ear’. It then needs to send out an anti-antiphase impulse from the other speaker to cancel out that impulse as it reaches the other ear, and so on. My gut instinct is that this will only work perfectly at one precise location, and at all other locations there will be ‘residue’ possibly worse than the crosstalk. In fact we don’t seem bothered by the crosstalk from ordinary stereo – I am not convinced we hear it as “colouration”. Maybe it results in a narrowing of the width of the ‘scene’, but with the benefit of increasing its stability. (Hand-waving justification of the status quo, maybe, but I have tried ambiophonic demonstrations, and I was eventually happy to go back to ordinary stereo).
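
That ‘and so on’ amounts to a geometric series of ever-smaller correction impulses bouncing between the two channels. The sketch below is my own illustration of the idea, with made-up gain and delay figures, not any particular product’s algorithm:

    # Recursive crosstalk cancellation, sketched for a single impulse sent to
    # the left speaker. Assume the 'wrong ear' hears the opposite speaker
    # attenuated by g and delayed by d samples; each correction impulse itself
    # leaks across to the other ear and needs a further, smaller correction.
    import numpy as np

    def cancellation_trains(n_taps, g, d):
        """Impulse responses added to the left and right feeds."""
        left = np.zeros(n_taps * d + 1)
        right = np.zeros(n_taps * d + 1)
        sign, level = -1.0, g
        for k in range(1, n_taps + 1):
            target = right if k % 2 == 1 else left   # corrections alternate sides
            target[k * d] += sign * level
            sign, level = -sign, level * g           # each pass is further attenuated
        return left, right

    # e.g. crosstalk at 70% of the direct level (g = 0.7), 3-sample interaural delay
    left_fix, right_fix = cancellation_trains(n_taps=6, g=0.7, d=3)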

Other predictions include the increasing use of automatic room correction, ultra-sophisticated tone controls and loudness profiles that allow the user to tailor every recording to their own preferences.

Tiny speakers will generate huge SPLs flat down to 20 Hz – the Devialet Phantom is the first example of this, along with the not-so-futuristic drawback of needing to burn huge amounts of energy to do it. Complete multi-channel surround envelopment will come from hidden speakers.

At the hardware fetish end, no doubt some people imagine that even higher resolution sample rates and bit depths must result in better audible quality. Some people probably think that miniaturised valves will transform the listening experience. High resolution vinyl is on the horizon. Who knows what metallurgical miracles await in the science of audio interconnects?

For the IT-oriented audiophile, what is left to do? Multi-room audio, streaming from the cloud, complete control from handheld devices are all here, to a level of sophistication and ease of use limited only by the ‘cognitive gap’ between computer people and normal human users that sometimes results in clunky user interfaces. The technology is not a limiting factor. Do you want the album artwork to dissolve as one track fades out and the new artwork to spiral in and a CGI gatefold sleeve to open as the new track fades in? The ability to talk to your device and search on artist, genre, label, composer, producer, key signature? Swipe with hand gestures like Minority Report? Trivial. There really is no limit to this sort of thing already.

In fact, for the real music lover, I don’t think there is anything left to do. Truth be told, we were most of the way there in 1968.

The basic test is: how much better do you want the experience of summoning invisible musicians to your living room to be? I can’t imagine many worthwhile improvements over what we have now. The sound achievable from a current neutral stereo system is already at ‘hologram’ level; the solidity of the phantom image is total – the speakers disappear. It isn’t a literal hologram that reproduces the acoustics in absolute terms, allowing you to walk around it, of course, but it is a plausible ‘hologram’ from any static listening position, allowing you to ‘walk around it’ in your mind, and it stays plausible as you turn your head.

It isn’t complete surround envelopment, but there is reverberation from your own room all around you, and it seems natural to sit down and face the music. You will hear fully-formed, discrete, musical parts emerging from an open, three dimensional space, with acoustics that may not be related to the space you are listening in. You have been transported to a different venue – if that is what the recording contains. In terms of volume and dynamics, a modern system can give you the same visceral dynamics as the real performance.

And all this is happening in your living room, but without any visuals of the performance – it is music that you are wanting to listen to after all. If the requirement is to experience a literal night at the opera, then short of a synthesised Star Trek type ‘holodeck’ experience you will be out of luck.

You could always watch a high resolution DVD of some performance or the BBC’s Proms programmes, for example, and such visuals may give you a different experience. They will, however, destroy the pure recreation of the acoustic space in front of you because, by necessity, the visuals jump around from location to location, scene to scene in order to maintain the interest level, and your attention will be split between the sound and the imagery. Anyway, a huge TV will cost you about £200 from Tescos these days so that aspect is pretty well covered, too.

The natural partner to a huge TV is multi-channel surround sound. Quadraphonic sound seemed like the next big thing in the 1970s, but didn’t take off at the time. We now have five or seven channel surround sound. Does this improve the musical experience? Some people say so, but that could just be the gimmick factor, or an inferior stereo system being jazzed up a bit. While the correlation between two good speakers produces an unambiguous ‘solution’ to the equations, multiple sources referring to the same ‘impulse’ could result in no clear ‘solution’ – that is, a fuzzy and indistinct ‘hologram’ that our ears struggle to make sense of. Mr. Linkwitz surmises something similar in the case of the centre speaker, plus he finds it visually distracting; with just two speakers, the space between them becomes a virtual blank space in which it is easier to imagine the audio scene. Most recordings are stereo and are likely to remain that way with a large proportion of listeners using headphones. For these reasons, I am happy that stereo is the best way to carry on listening to music.

Can DSP improve the listening experience further? Hardly at all I would say. So-called ‘room correction’ cannot transform a terrible room into a great one, and it doesn’t even transform a so-so one into a slightly better one. It starts from a faulty assumption: that human hearing is just a frequency response analyser for which real acoustics (the room) are an error, rather than human hearing having a powerful acoustics interpreter at the front end. If you attempt to ‘fix’ the acoustics by changing the source you just end up with a strange-sounding source. At a pinch, the listener could listen in the near(er) field to get rid of the room, anyway.

I am convinced that the audiophile obsession with tailoring recordings to the listener’s exact requirements is a red herring: the listener doesn’t want total predictability, and a top notch system shouldn’t be messed about with. As a reviewer of the Kii Three said:

…the traditional kind of subjective analysis we speaker reviewers default to — describing the tonal balance and making a judgement about the competence of a monitor’s basic frequency response — is somehow rendered a little pointless with the Kii Three. It sounds so transparent and creates such fundamentally believable audio that thoughts of ‘dull’ or ‘bright’ seem somehow superfluous.

The user doesn’t have access to the individual elements of the recording. What can be done in terms of, say, reducing the volume of the hi-hats (or whatever) is crude and unnatural and bleeds over every other element of the recording. The only chance of reproducing a natural sound, maintaining the separation between fully-formed elements and reproducing a three dimensional ‘scene’, is for the system to be neutral. When this happens, the level of the hi-hats likely just becomes part of the performance. Audiophiles who, without any caveat, say they want DSP tone controls in order to fiddle about with recordings have already given up on that natural sound.

In summary, I see the way music was ‘consumed’ 40 or even 50 years ago as already pretty much at the pinnacle: two large speakers at one side or end of a comfortably-furnished living room, filling the space with beautiful sound – at once combining compatibility with domestic living and the ability to summon musicians to perform in the space in a comprehensible form that one or several people can enjoy without having to don special apparatus or sit in a super-critical location. And the fitted carpets of those times were great for the acoustics!

All that has happened in the meantime is just the ‘mopping up’ of the remaining niggles. We (can) now have better performance with respect to distortion, frequency response, dynamic range, and a more solid, holographic audio ‘scene’; no scratches and pops; instant selection of our choice of the world’s total music library. The incentives for the music lover to want anything more than this are surely extremely limited.

The active crossover in 1952

In the archive of magazines mentioned earlier, I decided to try to find the earliest reference to active crossovers. By sheer good luck, the first magazine I clicked on at random contained an article on triamplification (not yet named “active crossover”) from 1968.

It lists the following advantages of active crossovers:

  1. Improved damping
  2. Lower intermodulation distortion
  3. Improved frequency handling by drivers
  4. Higher power handling
  5. Smoother response
  6. Adjustable crossover frequencies and slopes

It mentions that there were several biamplification products in the late fifties, but that when stereo came along the concept was forgotten.

This article then led me to one on biamplification from 1956, and finally to possibly the earliest article on active hi-fi crossovers, from 1952.

In this article, they design and build their own low level crossover.

Switching back and forth produced a subtle but distinct difference in listening pleasure. The low frequencies seemed a little more pure and less obscured, the middles and highs cleaner. The overall effect was that we had moved one step forward toward exact reproduction of the music as inscribed on the phonograph disk. There was a definite improvement in sound over a considerably better than average single amplifier system with a carefully designed dividing network and well balanced speakers.

They find that other compelling reasons to use the system are the freedom it gives to mix and match drivers without having to worry about their relative sensitivities, and the ability to adjust crossover frequencies easily and quickly.
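
For readers who have never seen one: a low-level (active) crossover is just a pair of complementary filters applied before the power amplifiers, one per driver. As a minimal modern sketch of the same idea (done digitally with SciPy, so nothing like the 1952 circuitry; the Linkwitz-Riley alignment and 3 kHz crossover point are my own choices):

    # A two-way 'low level' crossover: split the signal before amplification.
    # 4th-order Linkwitz-Riley = two cascaded 2nd-order Butterworth filters.
    import numpy as np
    from scipy.signal import butter, sosfilt

    FS = 48000        # sample rate, Hz
    FC = 3000         # crossover frequency, Hz

    lp = butter(2, FC, btype='low', fs=FS, output='sos')
    hp = butter(2, FC, btype='high', fs=FS, output='sos')

    def crossover(signal):
        """Split one full-range signal into woofer and tweeter feeds."""
        low = sosfilt(lp, sosfilt(lp, signal))    # cascade twice: LR4 low-pass
        high = sosfilt(hp, sosfilt(hp, signal))   # cascade twice: LR4 high-pass
        return low, high                          # each feed gets its own amplifier

    woofer_feed, tweeter_feed = crossover(np.random.randn(FS))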

Conclusion

Hi-fi manufacturers and customers alike are still struggling with passive crossovers despite the problem having been solved 65 years ago! This is as much to do with the ‘culture’ of audio as any technical or economic reasons.

Vinyl worship at the extreme

Hats off to the people who thought of this wheeze:

…a £6,300 lacquer of Sarah Vaughan that only survives one play

Yes, it’s a recording on a lacquer-coated aluminium disc, such as is used in the manufacture of LPs. It’s soft, and if it is played it is destroyed in the process. You can buy one of a limited edition of thirty for £6,300, to be played just once. And if you like Sarah Vaughan that would be a bonus.

Presumably the idea is that it gets you one step closer to the original musical event.

But not so fast. This one is derived from a digital transfer. And not just a straight transfer. They digitise the original live recording tapes and then do a bunch of signal processing, explicitly removing some of the original event in the process.

Once the signal is digitised, it’s treated using processing algorithms to try and reduce residual noise – a process that isn’t always easy. While the tapes were in good condition, the Peterson performance proved the most difficult. The tapes hadn’t been opened since 1962, and had much more analog noise than the others.

D’Oria-Nicolas also told us how, in the Evans’ recording, “the drums were too close to the piano and some frequencies did make some drum skins vibrate… We successfully managed to delete that.”

Obviously, the closest you can get to the original event is by playing the analogue tapes, and a straight digital transfer of these will be indistinguishable from the tape. Noise, drop-outs and all.

‘Photoshopping’ is the next stage, and you can actually download the photoshopped version and listen to it. Digital cleaning-up of scratched, dusty images can be a very positive thing, and the audio equivalent may be too. This version may, or may not, need some further manipulation in order to cut the lacquer master on a lathe, plus it needs filtering for RIAA equalisation.

As I understand it, in the LP process (which I view with affection, rather like any other ‘heritage’ industry such as keeping steam trains going), the lacquer is then coated in metal and the two layers separated to produce a metal negative of the lacquer disc. This is then coated in metal and the two layers separated to produce a metal positive copy of the lacquer. This is then coated in metal and the two layers separated to produce a negative: the stamper. Multiple stampers are produced – stampers wear out. The stampers are then used to press blobs of hot vinyl to produce the final LPs! It is amazing to me that it works so well.

You can then play the vinyl record using a tiny stylus, a cantilever, and a coil/magnet arrangement to produce a tiny voltage. This is amplified and filtered with the reverse RIAA curve before sending it via the volume control to the power amp and speakers.

A vinyl record is quite a long way from the original event!

In this case, the earliest point in the chain that we have access to is the processed digital file. This is regarded by audiophiles as the poor man’s version of the recording. We pay extra (a lot extra) to listen to the output of the next stage – the self-destructing lacquer. Or, for somewhat less, we can buy the result at the end of the chain: the standard vinyl LP.

Obviously, the people behind this scheme understand exactly what they are doing, and have a good sense of humour. But it does highlight a particular audiophile belief, I think: that music – even the devil’s own digital music – can be purified and cleansed if it is passed through ‘heritage’ technology built by craftsmen and artisans.

The rational person might assume that the earlier in the chain you go should give you the best quality, but audiophiles will pay more – much more – to hear the music passed through extra layers of sanctified materials, such as wood, oil, cellulose, varnish, bakelite, animal glue, silver wire, diamond, waxed paper and plastic vinyl.

The First CD Player

There’s an amazing online archive of vintage magazines that I have only just begun rummaging through. I was pleased to see this 1982 review of the Sony CDP-101, the first commercial CD player. The reviewer gets hold of a unit even before they go on sale commercially, saying:

I feel as though I am a witness to the birth of a new audio era.

This was the first time that the public had encountered disc loading drawers, instant track selection, digital readouts and digital fast forward and rewind, so he goes into great detail on how these work.

And at that time, the mechanics of the disc playing mechanism seemed inextricably linked with the nature of digital audio itself, so, after reading the more technical sections of the article, the reader’s mind would be awhirl with microscopic dots, collimators and laser focusing servos – possibly not really grasping the fundamentals of what is going on.

Audio measurements are shown, though, and of course these are at levels of performance hitherto unknown. (He is not able to make his own measurements this time, but a month later he has received the necessary test disc and is able to do so).

As I write these numbers, I find it difficult to remember that I am talking about a disc player!

Towards the end, the reviewer finally listens to some music. He is impressed:

I was fortunate enough to get my hands on seven different compact digital disc albums. Some of the selections on these albums were obviously dubbed from analog master tapes, but even these were so free of any kind of background noise that they could, for the first time, be thoroughly enjoyed as music. There’s a cut of the beginning of Also Sprach Zarathustra by Richard Strauss, with the Boston Symphony conducted by Ozawa, that delivers the gut-massaging opening bass note with a depth and clarity that I never thought possible for any music reproduction system. But never mind the specific notes or passages. Listening to the complete soundtrack recording of “Chariots of Fire,” the images and scenes of that marvelous film were re-created in my mind with an intensity that would just not have been possible if the music had been heard behind a veil of surface noise and compressed dynamic range.

He talks about

…the sheer magnificence of the sound delivered by Compact Discs

and concludes:

…after my experiences with this first digital audio disc player and the few sample discs that were loaned to me, I am convinced that, sooner or later, the analog LP will have to go the way of the 78 shellac record. I can’t tell you how long the transition will take, but it will happen!

A couple of months later he reviews a Technics player:

Voices and orchestral sounds were so utterly clean and lifelike that every once in a while we just had to pause, look up, and confirm that this heavenly music was, indeed, pouring forth from a pair of loudspeaker systems. As many times as I’ve heard this noise-free, wide dynamic-range sound, it’s still thrilling to hear new music reproduced this way…

…the cleanest, most inspiring sound you have ever heard in your home

So here we are at the very start of the CD era, with an experienced reviewer finding absolutely no problems with the measurements or sound.

In audiophile folklore, however, we are now led to believe that he was deluded. It is very common for audiophiles to sneer about the advertising slogan “Perfect Sound Forever”.

Stereophile in 1995:

When some unknown copywriter coined that immortal phrase to promote the worldwide launch of Compact Disc in late 1982, little did he or she foresee how quickly it would become a term of ridicule.

But in an earlier article from 1983 they had reviewed the Sony player saying that with one particular recording it gave:

…the most realistic reproduction of an orchestra I have heard in my home in 20-odd years of audio listening!

…on the basis of that Decca disc alone, I am now fairly confident about giving the Sony player a clean bill of health, and declaring it the best thing that has happened to music in the home since The Coming of Stereo.

For sure, there were/are many bad CDs and recordings, but it is now commonly held that early CD was fundamentally bad. I don’t believe it was. I would bet that virtually no one could tell the difference between an early CD player and modern ‘high res’.

Both magazines seemed aware that their own livelihoods could be in jeopardy if ‘all CD players sound the same’, but I think that CD’s main problem was the impossibility of divorcing the perceived sound from the physical form of the players. 1980s audio equipment looked absolutely terrible – as a browse through the magazines of the time will attest.

Within a couple of years, CD players turned from being expensive, heavy and solid, to cheap, flimsy and with the cheesiest appearance of any audio equipment. They all measured pretty much the same, however, regardless of cost or appearance. Digital audio was revealed to be what it is: information technology that is affordable by everyone.

This, of course, killed it in the eyes and ears of many audiophiles.

Beolab 50, Home HiFi Show 2017

For the first time in a while I have been to a hi-fi show, this time in Harrogate, North Yorkshire. It was arranged by the forum HiFi Wigwam, and there were both commercial and amateur exhibitors. It was fairly low key: not all that many exhibitors and not too many visitors on the day I was there (Saturday). I liked the venue, The Old Swan Hotel.

[Photo: the Beolab 50 demonstration room]

My main reason for going was to hear the Bang and Olufsen Beolab 90s, but those weren’t there. Instead the Beolab 50s were being demonstrated in a very large room as shown above. There was a technical problem: they couldn’t change the settings for the speakers because of wi-fi issues – it can only be done from a phone app (I think) and it needs to find the speakers on the network. So they were stuck on a fairly omni-directional setting and I could really hear this: I desperately needed them to be more focused. But anyway, very generously, the sales guy allowed me to play tracks off a memory stick I had brought, and gave me control of the volume.

My impression was of a beautifully clean, effortless sound, and incredible bass, but the setup was just ‘not right’ and everything sounded too diffuse and distant. Nevertheless, I enjoyed playing some good demo tracks and it was easy to hear that these speakers are not troubled by high volume levels – although I didn’t get anywhere near the volume they normally run demos at!

I must try to get a demo when they are set up properly – I expect great things from them. (I feel a bit of a fraud though, because at over £20,000 I won’t be buying them!).

My friend was very impressed by the looks of these speakers: solid-looking aluminium, fine wooden grilles, and a tweeter ‘pod’ that disappears when the speakers are inactive. In fact, he was very taken by the whole B&O ‘ethos’. Even the remote control for the system was a work of art, being made from a single piece of aluminium. And B&O do the best brochures of any hi-fi company, I think!

In the rest of the show, we heard some enormous horn speakers (I am not a fan), KEF LS50 Wireless, some BBC-style LS5/9s, some early Harbeths, some Focal speakers, some tiny actives based on balanced mode radiators, and quite a few others. There were various vintage components from the 1970s onwards. My friend was quite taken with the sound of some very classic-looking Tannoys with concentric tweeters, and anything that sounded good on a lower budget – I don’t have the brochure to hand, but may fill in some more details later.

Valves were on show of course, a few turntables, some outrageously inefficient Class A solid state amplifiers, and some active crossovers. In some setups, vinyl sounded OK, but because of the pops and clicks I often found myself wishing for digital sources!

There were the usual vinyl stalls, cables and accessories at eye-watering prices, an interesting exhibition of photos of pop stars from the 60s and a great jukebox.

Acoustics-wise, the standard rooms were quite good, I thought, having higher ceilings than some other places.

Sgt. Pepper’s Musical Revolution

Did you see Howard Goodall’s BBC programme about Sgt. Pepper? I thought it was a fine tribute, emphasising how fortunate we are for the existence of the Beatles.

Howard did his usual thing of analysing the finer points of the music and how it relates to classical and other forms, playing the piano and singing to illustrate his points. He showed that twelve of the tracks on Sgt. Pepper contain “modulations”, where the songs shift from one key to another – revealing very advanced compositional skills needless to say. But I don’t think that the Beatles ever really knew or cared that music is ‘supposed’ to be composed in one key and one time signature – they were just instinctive and brilliant. To me, it suggested that formal training might have stifled their creativity, in fact.

He supplemented his survey of the tracks with Strawberry Fields and Penny Lane which, although not on the album, were the first tracks produced from the Sgt. Pepper recording sessions.

The technical stuff about studio trickery and how George Martin and his team worked around the limitations of four track tape was interesting (as always), and we listened in on some of the chat in the studio in-between takes.


Obviously, I checked out what versions of the album are available on Spotify, and found that there’s the 2009 remaster and, I think, the new 50th anniversary remixed version..! (Isn’t streaming great?)

Clearly the remixed version has moved some of the previous hard-panned left and right towards the middle, and the sound has more ‘body’ – but I am sure there is a lot more to it than that. The orchestral crescendos and final chord in A Day in the Life are particularly striking.

At the end of the day, however, I actually prefer a couple of more stripped back versions of tracks that appeared on the Beatles Anthology CDs from 1995. These, to me, sound even cleaner and fresher.


But what is this? Archimago has recently analysed some of the new remix and found that it has been processed into heavy clipping i.e. just like any typical modern recording that wants to sound ‘loud’. Archimago also shows that the 1987 CD version doesn’t have any such clipping in it; I won’t be throwing away my original Beatles CDs just yet…