Sgt. Pepper’s Musical Revolution

Did you see Howard Goodall’s BBC programme about Sgt. Pepper? I thought it was a fine tribute, emphasising how fortunate we are for the existence of the Beatles.

Howard did his usual thing of analysing the finer points of the music and how it relates to classical and other forms, playing the piano and singing to illustrate his points. He showed that twelve of the tracks on Sgt. Pepper contain “modulations”, where the songs shift from one key to another – evidence, needless to say, of very advanced compositional skill. But I don’t think that the Beatles ever really knew or cared that music is ‘supposed’ to be composed in one key and one time signature – they were just instinctive and brilliant. In fact, to me it suggested that formal training might have stifled their creativity.

He supplemented his survey of the tracks with Strawberry Fields and Penny Lane which, although not on the album, were the first tracks produced in the Sgt. Pepper recording sessions.

The technical stuff about studio trickery and how George Martin and his team worked around the limitations of four track tape was interesting (as always), and we listened in on some of the chat in the studio in-between takes.

Obviously, I checked out which versions of the album are available on Spotify, and found that there’s the 2009 remaster and, I think, the new 50th anniversary remixed version! (Isn’t streaming great?)

Clearly the remixed version has moved some of the previously hard-panned left and right content towards the middle, and the sound has more ‘body’ – but I am sure there is a lot more to it than that. The orchestral crescendos and the final chord of A Day in the Life are particularly striking.

At the end of the day, however, I actually prefer a couple of more stripped back versions of tracks that appeared on the Beatles Anthology CDs from 1995. These, to me, sound even cleaner and fresher.

But what is this? Archimago has recently analysed some of the new remix and found that it has been processed into heavy clipping i.e. just like any typical modern recording that wants to sound ‘loud’. Archimago also shows that the 1987 CD version doesn’t have any such clipping in it; I won’t be throwing away my original Beatles CDs just yet…
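Archimago’s kind of analysis is easy to try at home. The sketch below is not his method – just a hypothetical illustration of how hard clipping can be flagged, by counting runs of consecutive samples stuck at full scale:

```python
import numpy as np

def clipped_fraction(samples, threshold=0.999, min_run=3):
    """Fraction of samples sitting in runs of at least min_run
    consecutive near-full-scale values -- a crude flag for the
    flattened waveform tops that hard clipping leaves behind."""
    hot = np.abs(samples) >= threshold
    total = run = 0
    for h in hot:
        if h:
            run += 1
        else:
            if run >= min_run:
                total += run
            run = 0
    if run >= min_run:
        total += run
    return total / len(samples)

# Synthetic demo: a clean tone versus one driven 6 dB into clipping.
fs = 44100
t = np.arange(fs) / fs
clean = 0.7 * np.sin(2 * np.pi * 440 * t)
loud = np.clip(2.0 * np.sin(2 * np.pi * 440 * t), -1.0, 1.0)
print(clipped_fraction(clean))  # 0.0
print(clipped_fraction(loud))   # well over half the samples
```

Run something like this over a track scaled to ±1.0 and a cleanly mastered recording should report essentially zero.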

The Sound of a Symphony Orchestra

Last night I went to a symphony concert: Shostakovich’s 10th, preceded by Prokofiev’s Piano Concerto No. 2 at the West Road Concert Hall, Cambridge.

We were sitting in the second row from the front – so quite close to the piano. I wish I had taken a photograph, but I was so paranoid about my phone ringing mid-performance that I left it turned off! The image above shows the empty venue.

We really enjoyed the concert. Chiyan Wong is an amazing piano soloist, and the CCSO were spectacular. The sound from the large orchestra was formidable, and we got to hear the fairly new Steinway grand in great detail – the piano was removed during the interval for the Shostakovich that followed.

Now, I do often listen to this sort of music with my system, but this was the first time I had been to a concert to hear this specific Russian ‘genre’. Of course I couldn’t help but make a mental comparison of the sound of the real thing versus the hi-fi facsimile that I am used to, as I was listening. And you know what? I have to say that a good hi-fi gives a pretty good rendition of the real sound.

The real thing was very loud, but also very rich – I have observed that ‘painfully loud’ is more a function of quality than volume; you need good bass to balance the rest of the spectrum. So this was very loud, but at no time painful. Bass from the orchestra was wonderful, but didn’t take me by surprise – I sometimes hear such bass from my system. (It did take me by surprise the first time I heard it from a hi-fi system, however!).

Some people cite piano as being the most difficult thing for a hi-fi system to reproduce. I don’t know where they get that from: I loved the sound of the piano, and I think a good system can reproduce it fairly easily.

I was struck by the homogeneity within the different sections of the orchestra. Listening to a recording of just a piano, or just the violins, would not tell you very much about an audio system. It is only when you hear a combination of the piano, the violins and the brass, say, that any ‘formant’ (i.e. fixed frequency response signature) within your system would show up.

As discussed previously, ‘imaging’ of the orchestra was not as pin sharp as you get in some recordings, but many purist recordings portray the true effect quite accurately. The width of the ‘soundstage’ of a stereo system is more-or-less right, and the room you are listening in enhances the recording’s ‘ambience’ around and behind you.

Of course the concert is a very special experience. The stereo version isn’t always as deep, open and spacious, nor is the envelopment as complete but, all in all, I think if you sit down in the right frame of mind to listen to a fine orchestral recording using a good hi-fi system, you are getting a very reasonable impression of the sound, excitement and visceral quality of the real thing. And that really is quite an amazing idea.

Room correction. What are we trying to achieve?

The short version…

The recent availability of DSP is leading some people to assume that speakers are, and have always been, ‘wrong’ unless EQ’ed to invert the room’s acoustics.

In fact, our audio ancestors didn’t get it wrong. Only a neutral speaker is ‘right’, and the acoustics of an average room are an enhancement to the sound. If we don’t like the sound of the room, we must change the room – not the sound from the speaker.

DSP gives us the tools to build a more neutral speaker than ever before.

There are endless discussions about room correction, and many different commercial products and methods. Some people seem to like certain results while others find them a little strange-sounding.

I am not actually sure what it is that people are trying to achieve. I can’t help but think that if someone feels the need for room correction, they have yet to hear a system that sounds so good that they wouldn’t dream of messing it up with another layer of their own ‘EQ’.

Another possibility is that they are making an unwarranted assumption based on the fact that there are large objective differences between the recorded waveform and what reaches the listener’s ears in a real room. That must mean that no matter how good it sounds, there’s an error. It could sound even better, right?


A reviewer of the Kii Three found that this particularly neutral speaker sounded perfect straight out of the box.

“…the traditional kind of subjective analysis we speaker reviewers default to — describing the tonal balance and making a judgement about the competence of a monitor’s basic frequency response — is somehow rendered a little pointless with the Kii Three. It sounds so transparent and creates such fundamentally believable audio that thoughts of ‘dull’ or ‘bright’ seem somehow superfluous.”

The Kii Three does, however, offer a number of preset “contour” EQ options. As I shall describe later, I think that a variation on this is all that is required to refine the sound of any well-designed neutral speaker in most rooms.

A distinction is often made between correction of the bass and higher frequencies. If the room is large, and furnished copiously, there may be no problem to solve in either case, and this is the ideal situation. But some bass manipulation may be needed in many rooms. At a minimum, the person with sealed woofers needs the roll-off at the bottom end to start at about the right frequency for the room. This, in itself, is a form of ‘room correction’.
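The sealed-woofer roll-off mentioned here is just a second-order high-pass at the box resonance. A small sketch (illustrative corner frequency and Q, not any particular speaker) shows the shape that has to be matched to the room:

```python
import numpy as np

# A sealed (closed-box) woofer rolls off as a 2nd-order high-pass
# below its system resonance fc. Choosing fc (by box volume, or a
# DSP shelf) to suit the room's gain is a mild form of 'room
# correction'. Q of 0.707 gives the maximally flat (Butterworth) case.
def sealed_response_db(f, fc, q=0.707):
    s = 1j * f / fc
    h = s**2 / (s**2 + s / q + 1)
    return 20 * np.log10(abs(h))

for f in (20, 40, 80):
    print(f, round(sealed_response_db(f, fc=40), 1))
```

At one octave below fc the level is already about 12 dB down, which is why getting the corner frequency roughly right for the room matters.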

The controversial aspect is the question of whether we need ‘correction’ higher up. Should it be applied routinely (some people think so), as sparingly as possible, or not at all? And if people do hear an improvement, is that because the system is inherently correcting less-than-ideal speakers rather than the room?

Here are some ways of looking at the issue.

  1. Single room reflections give us echoes, while multiple reflections (of reflections) give us reverberation. Performing a frequency response measurement with a neutral transducer and analysing the result may show a non-flat FR at the listening position even when smoothed fairly heavily. This is just an aspect of statistics, and of the geometry and absorptivity of the various surfaces in the room. Some reflections will result in some frequencies summing in phase, to some extent, and others not.
  2. Experience tells us that we “hear through” the room to any acoustic source. Our hearing appears not to be just a frequency response analyser, but can separate direct sound from reflections. This is not a fanciful idea: adaptive software can learn to do the same thing.
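Point 1 can be illustrated numerically: even a single reflection produces frequency-dependent summing and cancellation (a comb filter). The delay and gain below are illustrative, not from any measurement:

```python
import numpy as np

# One reflection arriving tau seconds after the direct sound, at
# relative gain g (tau of 2 ms is roughly 0.7 m of extra path).
tau, g = 0.002, 0.5
f = np.linspace(20, 20000, 2000)
mag = np.abs(1 + g * np.exp(-2j * np.pi * f * tau))

peak_db = 20 * np.log10(mag.max())   # where the reflection sums in phase
null_db = 20 * np.log10(mag.min())   # where it partially cancels
print(round(peak_db, 1), round(null_db, 1))  # about +3.5 and -6.0
```

A multitude of such reflections at different delays and gains is what produces the statistically non-flat in-room FR described above.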

The idea is also supported by some of the great and the good in audio.

Floyd Toole:

“…we humans manage to compensate for many of the temporal and timbral variations contributed by rooms and hear “through” them to appreciate certain essential qualities of sound sources within these spaces.”

Or Meridian’s Bob Stuart:

“Our brains are able to separate direct sound from the reverberation…”

  3. If we EQ the FR of the speaker to obtain a flat in-room measured response, including the reflections in the measurement, it seems that we will subsequently “hear through” the reflections to a strangely-EQ’ed direct sound. It will, nevertheless, measure ‘perfectly’.
  4. Audio orthodoxy maintains that humans are supremely insensitive to phase distortion, and this is often compounded with the argument that room reflections completely swamp phase information so it is not worth worrying about. This denies the possibility that we “hear through” the room. Listening tests in the past that purportedly demonstrated our inability to hear the effects of phase have often been based on mono only, and didn’t compare distorted with undistorted phase examples – merely distorted versus differently distorted, played on the then-available equipment.
  5. Contradicting (4), audiophiles traditionally fear crossovers because the phase shifts inherent in (non-DSP) crossovers are, they say, always audible. DSP, on the other hand, allows us to create crossovers without any phase shift, i.e. they are ‘transparent’.
  6. At a minimum, speaker drivers on their baffles should not ‘fight’ each other through the crossover – their phases should be aligned. The appropriate delays then ensure that they are not ‘fighting’ at the listener’s position. The next level in performance is to ensure that their phases are flat at all frequencies, i.e. linear phase. The result of this is the recorded waveform preserved in both frequency and time.
  7. Intuitively, genuine stereo imaging is likely to be a function of phase and timing, so preserving that phase and timing should logically be something we try to do. We could ‘second guess’ how it works using traditional rules of thumb, deciding not to preserve the phase and timing, but if it is effectively cost-free to do it, why not do it anyway?
  8. A ‘perfect’ response from many speaker/room combinations can be guaranteed using DSP (deconvolution with the impulse response at that point, not just playing with a graphic equaliser). Unfortunately, it will only be valid for a single point in space, and moving 1 mm from there will produce errors and unquantifiable sonic effects. Additionally, ‘perfect’ refers to the ‘anechoic chamber’ version of the recording, which may not be what most people are trying to achieve, even if the measurements they think they seek mean precisely that.
  9. Room effects such as (moderate) reverberation are a major difference between listening with speakers versus headphones, and are actually desirable. ‘Room correction’ would be a bad thing if it literally removed the room from the sound. If that is the case, what exactly do we think ‘room correction’ is for?
  10. Even if the drivers are neutral (in an anechoic situation) and crossed over perfectly on axis, they are of finite size and mounted in a box or on a baffle that has a physical size and shape. This produces certain frequency-dependent dispersion characteristics which give different measured, and subjective, results in different rooms. Some questions are:
    • is this dispersion characteristic a ‘room effect’ or a ‘speaker effect’? Or both?
    • is there a simple objective measurement that says one result is better than any other?
    • is there just one ‘right’ result and all others are ‘wrong’?
  11. Should room correction attempt to correct the speaker as well? Or should we, in fact, only correct the speaker? Or just the room? If so, how would we separate room from speaker in our measurements? Can they, in fact, be separated?
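The ‘transparent’ DSP crossover referred to above can be sketched directly: a symmetric FIR lowpass is linear phase, and subtracting it from a pure delay leaves a complementary highpass, so the two sections sum back to the (delayed) input exactly. The tap count and crossover frequency below are arbitrary illustrative choices:

```python
import numpy as np

fs = 48000
n = 511                        # odd tap count -> symmetric, linear-phase FIR
fc = 2400                      # crossover frequency, Hz (illustrative)
m = np.arange(n) - (n - 1) / 2

# Windowed-sinc lowpass section, normalised to unity gain at DC.
lp = np.sinc(2 * fc / fs * m) * (2 * fc / fs) * np.hamming(n)
lp /= lp.sum()

# Complementary highpass: a pure delay minus the lowpass.
delta = np.zeros(n)
delta[n // 2] = 1.0
hp = delta - lp

print(np.allclose(lp + hp, delta))   # sections sum to a pure delay
print(abs(hp.sum()) < 1e-9)          # highpass nulls DC
print(np.allclose(lp, lp[::-1]))     # symmetric taps -> linear phase
```

The price of this ‘free’ phase behaviour is latency of (n − 1)/2 samples, which is why such crossovers only became practical with DSP.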

I think there is a formula that gives good results. It says:

  • Don’t rely on feedback from in-room measurements, but do ‘neutralise’ the speaker at the most elemental levels first. At every stage, go for the most neutral (and locally correctable) option e.g. sealed woofers, DSP-based linear phase crossovers with time alignment delays.
  • Simply avoid configurations that are going to give inherently weird results: two-way speakers, bass reflex, many types of passive crossover etc. These may not even be partially correctable in any meaningful way.
  • Phase and time alignment are sacrosanct. This is the secret ingredient. You can play with minor changes to the ‘tone colour’ separately, but your direct sound must always maintain the recording’s phase and time alignment. This implies that FIR filters must be used, thus allowing frequency response to be modified independently of phase.
  • By all means do all the good stuff regarding speaker placement, room treatments (the room is always ‘valid’), and avoiding objects and asymmetry around the speakers themselves.
  • Notionally, I propose that we wish to correct the speaker, not the room. However, we are faced with a room and a non-neutral speaker that are intertwined, because the speaker has multiple drivers of finite size and a physical presence (as opposed to being a point source with uniform directivity at all frequencies). The artefacts resulting from this are room-dependent and can never really be ‘corrected’ unambiguously. Luckily, a smooth EQ curve can make the sound subjectively near enough to transparent. To obtain this curve, predict the baffle step correction for each driver using modelling or a standard formula, with some trial-and-error regarding the depth required (4, 5, 6 dB?); this is a very smooth EQ curve. Or, possibly (I haven’t done this myself), make many FR measurements around the listening area, smooth and average them together, and partially invert the result, again without altering phase and time alignment.
  • You are hearing the direct sound, plus separately-perceived ‘room ambience’. If you don’t like the sound of the ambience, you must change the room, not the direct sound.
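A sketch of the baffle-step idea above: a linear-phase FIR can impose the smooth shelf while leaving phase untouched apart from a constant delay. The 500 Hz step frequency and 5 dB depth here are placeholder values for the trial-and-error described above:

```python
import numpy as np

fs, n = 48000, 1023
f = np.fft.rfftfreq(n, 1 / fs)
f_step, depth_db = 500, 5.0                    # assumed values; tune by ear
gain_db = depth_db / (1 + (f / f_step) ** 2)   # smooth low shelf
target = 10 ** (gain_db / 20)

# Zero-phase impulse response from the real target magnitude,
# centred and windowed: a symmetric (linear-phase) FIR.
h = np.roll(np.fft.irfft(target, n), n // 2) * np.hamming(n)

# Verify: smooth boost below the step, none above, symmetric taps.
H = np.fft.rfft(h, 8 * n)
ff = np.fft.rfftfreq(8 * n, 1 / fs)
db = lambda x: 20 * np.log10(np.abs(x))
lo = db(H[np.argmin(np.abs(ff - 100))])
hi = db(H[np.argmin(np.abs(ff - 10000))])
print(round(lo - hi, 1))         # close to the chosen shelf depth
print(np.allclose(h, h[::-1]))   # True: symmetric taps, linear phase
```

Because the taps stay symmetric, only the ‘colour’ changes; the phase and time alignment of the direct sound are untouched.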

Is there any scientific evidence for these assertions? No more nor less than any other ‘room correction’ technique – just logical deduction based on subjective experience. Really, it is just a case of thinking about what we hear as we move around and between rooms, compared to what the simple in-room FR measurements show. Why do real musicians not need ‘correction’ when they play in different venues? Do we really want ‘headphone sound’ when listening in rooms? (If so, just wear headphones or sit closer to smaller speakers).

This does not say that neutral drivers alone are sufficient to guarantee good sound – I have observed that this is not the case. A simple baffle step correction applied to frequency response (but leaving phase and timing intact) can greatly improve the sound of a real loudspeaker in a room without affecting how sharply-imaged and dynamic it sounds. I surmise that frequency response can be regarded as ‘colour’ (or “chrominance” in old school video speak), independent of the ‘detail’ (or “luminance”) of phase and timing. We can work towards a frequency response that compensates for the combination of room and speaker dispersion effects to give the right subjective ‘colour’ as long as we maintain accurate phase and timing of the direct sound.

We are not (necessarily) trying to flatten the in-room FR as measured at the listener’s position – the EQ we apply is very smooth and shallow – but the result will still be perceived as a flat FR. Many (most?) existing speakers inherently have this EQ built in whether their creators applied it deliberately, or via the ‘voicing’ they did when setting the speaker up for use in an average room.

In summary:

  • Humans “hear through” the room to the direct sound; the room is perceived as a separate ‘ambience’. Because of this, ‘no correction’ really is the correct strategy.
  • Simply flattening the FR at the listening position via EQ of the speaker output is likely to result in ‘peculiar’ perceived sound, even if the in-room measurements purport to say otherwise.
  • Speakers have to be as rigorously neutral as possible by design, rather than attempting to correct them by ‘global feedback’ in the room.
  • Final refinement is a speaker/room-dependent, smooth, shallow EQ curve that doesn’t touch phase and timing – only FIR filters can do this.

[Last updated 05/04/17]

The Secret Science of Pop


In The Secret Science of Pop, evolutionary biologist Professor Armand Leroi tells us that he sees pop music as a direct analogy for natural selection. And he salivates at the prospect of a huge, complete, historical data set that can be analysed in order to test his theories.

He starts off by bringing in experts in data analysis from some prestigious universities, and has them crunch the numbers on the past 50 years of chart music, analysing the audio data for numerous characteristics including “rhythmic intensity” and “aggressiveness”. He plots a line on a giant computer monitor showing the rate of musical change based on an aggregate of these values. The line shows that the 60s were a time of revolution – although he claims that the Beatles were pretty average and “sat out” the revolution. Disco, and to a lesser extent punk, made the 70s a time of revolution, but the 80s were not.

He is convinced that he is going to be able to use his findings to influence the production of new pop music. The results are not encouraging: no matter how he formulates his data he finds he cannot predict a song’s chart success with much better than random accuracy. The best correlation seems to be that a song’s closeness to a particular period’s “average” predicts high chart success. It is, he says, “statistically significant”.

Armed with this insight he takes on the role of producer and attempts to make a song (a ballad) being recorded at Trevor Horn’s studio as average as possible by, amongst other things, adjusting its tempo and adding some rap. It doesn’t really work, and when he measures the results with his computer, he finds that he has manoeuvred the song away from average with this manual intervention.

He then shifts his attention to trying to find the stars of tomorrow by picking out the most average song from 1200 tracks that have been sent into BBC Radio 1 Introducing. The computer picks out a particular band who seem to have a very danceable track, and in the world’s least scientific experiment ever, he demonstrates that a BBC Radio 1 producer thinks it’s OK, too.

His final conclusion: “We failed spectacularly this time, but I am sure the answer is somewhere in the data if we can just find it”.

My immediate thoughts on this programme:

-An entertaining, interesting programme.

-The rule still holds: science is not valid in the field of aesthetic judgement.

-If your system cannot predict the future stars of the past, it is very unlikely to be able to predict the stars of the future.

-The choice of which aspects of songs to measure is purely subjective, based on the scientist’s own assumptions about what humans like about music. The chances of the scientist not tweaking the algorithms in order to reflect their own intuitions are very remote. To claim that “The computer picked the song with no human intervention” is stretching it! (This applies to any ‘science’ whose main output is based on computer modelling).

-The lure of data is irresistible to scientists but, as anyone who has ever experimented with anything but the simplest, most controlled, pattern recognition will tell you, there is always too much, and at the same time never enough, training data. It slowly dawns on you that although theoretically there may be multidimensional functions that really could spot what you are looking for, you are never going to present the training data in such a way that you find a function with 100%, or at least ‘human’ levels of, reliability.

-Add to that the myriad paradoxes of human consciousness, and of humans modifying their tastes temporarily in response to novelty and fashion – even to the data itself (the charts) – and the reality is that it is a wild goose chase.

(very relevant to a post from a few months ago)

The Secret Life of the Signal

Some people actually think of stereo imaging as a “parlour trick” that is very low on the list of desirable attributes that an audio system should have. They ‘rationalise’ this by saying that in the majority of recordings, any stereo image is an artificial illusion, created by the recording engineer either deliberately or by accident; it does not accurately represent the live event – because there may not even have been a single live event. So how can it matter if it is reproduced by the playback system or not? Perhaps it is even best to suppress it: muddle it up with some inter-channel crosstalk like vinyl does, or even listen in mono.

At the top of the list of desirable attributes for a hi-fi system, most audiophiles would put “timbre”, “tonality”, low distortion, clean reproduction at high volumes, dynamics and deep bass. All of these qualities can be experienced with a mono signal and a single speaker – in fact, in the Harman Corporation’s training for listening, monophonic reproduction is recommended when performing listening tests.

Because their effects are not so obvious in mono, phase and timing are regarded by many as supremely unimportant. I quote one industry luminary:

“Time domain does not enter my vocabulary…”

Sound is colour?

We know that our eyes respond to detail and colour in different ways. In the early days of (analogue) colour TV, it was found that the signal could be broadcast within practical bandwidths because the colour (chrominance) information could be sent at lower resolution than the detail (luminance).

There is, perhaps, a parallel in hearing, too: that humans have separate mechanisms for responding to sound in the frequency and time domains. But the conventional hi-fi industry’s implicit view is that we only hear in the frequency domain: all the main measurements are in the frequency domain, and steady state signals are regarded as equivalent to real music. A speaker’s overall response to phase and timing is ignored almost totally or, at best, regarded as a secondary issue.

I think that this is symptomatic of an idea that pervades hi-fi: that the signal is ‘colour’. Sure, it varies as the music is playing, but the exact nature of that variation is treated as almost incidental – secondary to the accurate reproduction of colour – and, in testing, all that matters is whether a uniform colour is accurately reproduced.

There has, nevertheless, been some belated lip service paid to the importance of timing, with the hype around MQA (still usually being played over speakers with huge timing errors!), and a number of passive speakers with sloping front baffles for time alignment. Taken to its logical conclusion, we have these:


Their creator says, though:

“It’s nice if you have phase coherence, but it is not necessary”

So they still fall short of the “straight wire with gain” ideal. The attitude still says that the signal is something we can take liberties with; that we need not aspire to absolute accuracy in the detail as long as we get a good neutral white and a deep black, with all uniform (‘steady state’) colours reproduced with the correct shading. It says that we understand the signal and that it is trivial. Time alignment by moving the drivers backwards and forwards is an easy gimmick, however, so we can go that far.

Another Dimension

I think that with DSP-corrected drivers and crossovers, we are beginning to find that there is another dimension to the common or garden stereo signal; one that has been viewed as a secondary effect until now. Whether created accidentally or not, the majority of recordings contain ‘imaging’ that is so clear that it gives us access to the music in a way we were not aware of. It allows us to ‘walk around’ the scene in which the recording was made. If it is a composite, multitrack recording, it may not be a real scene that ever existed, but the individual elements are each small scenes in themselves, and they become clearly delineated. It is ‘compelling’.

I can do no better than quote a brand new review of the Kii Three written by a professional audio engineer, that echoes something I was saying a couple of weeks ago: imaging is not just a ‘trick’, but improves the separation of the acoustic sources in a way that goes beyond the traditional attributes of low distortion & colouration.

I think he also echoes something I said about believable imaging giving the speaker a ‘free pass’ in terms of measurements. As in my DIY post, he says that the speaker sounds so transparent and believable that there is no point in going any further in criticising the sound. A suggestion, perhaps, that conventional ‘in-room’ measurements and ‘room correction’, are shown up as the red herrings they are if a system sets out to be genuinely neutral by design, at source.

Firstly, the traditional kind of subjective analysis we speaker reviewers default to — describing the tonal balance and making a judgement about the competence of a monitor’s basic frequency response — is somehow rendered a little pointless with the Kii Three. It sounds so transparent and creates such fundamentally believable audio that thoughts of ‘dull’ or ‘bright’ seem somehow superfluous.

… it is dominated by such a sense of realistic clarity, imaging, dynamics and detail that you begin almost to forget that there’s a speaker between you and the music.

…I’ve never heard anything anywhere near as adept at separating the elements of a mix and revealing exactly what is going on. I found myself endlessly fascinated, in particular, by the way the Kii Three presents vocals within a mix and ruthlessly reveals how good the performance was and how the voice was subsequently treated (or mistreated). Performance idiosyncrasies, microphone character, room sound, compression effects, reverb and delay techniques and pitch-correction artifacts that I’d never noticed before became blindingly obvious — it was addictive.

…One of the joys of auditioning new audio gear, especially speakers, is that I occasionally get to rediscover CDs or mixes that I thought I knew intimately. I can honestly say that with the Kii Three, every time I played some old familiar material I heard something significant in the way it performs…

…Low-latency mode …switch[es] off the system phase correction. It makes for a fascinating listening experience. …the change of phase response is clearly audible. The monitor loses a little of its imaging ability and overall precision in low-latency mode so that things sound a little less ‘together’.

“The Kii Three is one of the finest speakers I’ve ever heard and undoubtedly the best I’ve ever had the privilege and pleasure of using in my own home.”

Image is Everything

I have a couple of audiophile friends for whom ‘imaging’ is very much a secondary hi-fi goal, but I wonder if this is because they’ve never really heard it from their audio systems.

What do we mean by the term anyway? My definition would be the (illusion of) precise placement of acoustic sources in three dimensions in front of the listener – including the acoustics of the recording venue(s). It isn’t a fragile effect that only appears at one infinitesimal position in space or collapses at the merest turn of the head, either.

It is something that I am finding is trivially easy for DSP-based active speakers. Why? Well I think that it just falls out naturally from accurate matching between the channels and phase & time-corrected drivers. Logically, good imaging will only occur when everything in a system is working more-or-less correctly.

I can imagine all kinds of mismatches and errors that might occur with passive crossovers, exacerbated by the compromises that are forced on the designer such as having to use fewer drivers than ideal, or running the drivers outside their ideal frequency ranges.

Imaging is affected by the speaker’s interaction with the room, of course. The ultimate imaging accuracy may occur when we eliminate the room’s contribution completely, and sit in a very tight ‘sweet spot’, but this is not the most practical or pleasant listening situation. The room’s contribution may also enhance an illusion of a palpable image, so it is not desirable to eliminate it completely. Ultimately, we are striking a balance between direct sound and ambient reflections through speaker directivity and positioning relative to walls.

A real audiophile scientist would no doubt be interested in how exactly stereo imaging works, and whether listening tests could be devised to show the relative contributions of poor damping, phase errors, Doppler distortion, timing misalignment etc. Maybe we could design a better passive speaker as a result. But I would say: why bother? The DSP active version is objectively more correct, and now that we have finally progressed to such technology and can actually listen to it, it clearly doesn’t need to do anything but reproduce left and right correctly – no need for any other tricks or the forlorn hope of some accidental magic from natural, organic, passive technology.

An ‘excuse’ for poor imaging is that in many real musical situations, imaging is not nearly as sharp as can be obtained from a good audio system. This is true: if you go to a classical concert and consciously listen for where a solo brass instrument (for example) is coming from, you often can’t really tell. I presume this is because you are generally seated far from the stage with a lot of people in the way and much ‘ambience’ thrown in. I presume that the conductor is hearing much stronger ‘imaging’ than we are – and many recordings are made with the mics much closer than a typical person sitting in the auditorium; the sharper imaging in the recording may well be largely artificial.

However, to cite this as a reason for deliberately blurring the image in some arbitrary way is surely a red herring. The image heard by the audience member is still ‘coherent’ even if it is not sharp. And the ‘artificially imaged’ recording contains extra information that is allowing us to separate the various acoustic sources by a different mechanism than the one that might allow us to tease out the various sources in a mono recording, say. It reduces effort and vastly increases the clarity of the audio ‘scene’.

I think that good imaging due to superior time alignment and phase is going to be much more important than going to the Nth degree to obtain ultra-low harmonic distortion.

If we mess up the coherence between the channels we are getting the worst of all worlds: something that arbitrarily munges the various acoustic sources and their surroundings in response to signal content. An observation that is sometimes made is that the music “sticks to the speakers” rather than appearing in between. What are our brains to make of it? It must increase the effort of listening and blur the detail of what we are hearing.

Not only this, but good imaging is compelling. Solid voices and instruments that float in mid air grab the attention. The listener immediately understands that there is a lot more information trapped in a stereo recording than they ever knew.

The Beatles re-mastered

Since they finally made it onto Spotify and other streaming services, I have begun listening to the Beatles again, following a gap of a few years. The reason for the gap was that it was often too tempting to explore Spotify rather than getting up to place CDs in the drive or getting around to “ripping” them. Also, my Beatles CDs are fairly old, so not in the ‘re-mastered’ category, and this knowledge would no doubt have spoiled the experience of listening to them while not being a strong enough reason to buy new ones.

The experience of listening to the re-mastered Beatles on my über-system has been “interesting” rather than the unalloyed pleasure I was expecting. In years gone by, I very much enjoyed my Beatles CDs on lesser systems, listening to the music without worrying too much about ‘quality’ – although I always marvelled at the freshness of the recordings that had made it across the decades intact. I had built up such expectations of the re-mastered versions playing on a real hi-fi system that I was bound to be disappointed, I suppose.

What I am finding is that, for the first time, I am hearing how the tracks were put together, and I can ‘hear through’ to the space behind them. With the latest re-masters on my system, you can clearly hear the individual tracks cleanly separated, and the various studio techniques being employed – you can’t mistake them for ‘live’ recordings – and they are rather ‘dry’.

With the Beatles I think that we are hearing music and recordings that were brilliantly, painstakingly created in the studio to an exceptional level of quality, that still sounded great when ‘munged’ through the typical record players, TV, radio and hi-fi equipment of the day – mainly in mono. It is now fascinating to hear the individual ingredients so cleanly separated, but I wonder whether the records wouldn’t have been produced slightly differently with modern high quality playback equipment in mind; after all, we are probably hearing the recordings more cleanly than was even possible in the studio at the time. Maybe it really is the case that The Beatles sound best on the equipment they were first heard on. Other musical groups of the time weren’t produced with such a high level of studio creativity and in such quality and so, with their recordings already ‘pre-munged’ to some extent, are not laid bare to the same degree on a modern system.

For the first time, perhaps I am beginning to see the reason for the re-release of the mono versions. They are a way of producing a more ‘cohesive’ mix without resorting to artificial distortion and effects that were not on the original recordings.