Audio – Literature Analogy

An audio recording is a bit like a book: created through artistic or intellectual endeavour, then ‘fixed’ as a collection of pure information and distributed to customers for them to ‘consume’ in their own environments. In the case of digital audio, a recording is literally the same kind of object as a book, being stored as numbers in a file; you could store a book as a WAV, or an audio recording as an MS Word file if you wanted.

In rendering the content to be read, there are things the printer could do that would detract from the experience of a book:

  • printed too big/too small
  • lighting too dim/too bright
  • inappropriate use of colour
  • blotchy printout
  • typeface varies with content, or randomly
  • corrupted: missing/duplicated/erroneous characters
  • peculiar paper
  • non-neutral typeface – difficult to read or inappropriate e.g. science fiction font for a Jane Austen novel
  • in the case of some ‘boutique’ printing, an appropriate analogy might be a book that spontaneously becomes too hot to touch, or occasionally ruins valuable furniture.

The emotional or intellectual force of the book would actually be reduced because of these problems. In other words, it is not true to say that the quality of reproduction doesn’t matter.

However, there is a finite envelope of neutral, even ‘mundane’, reproduction which achieves an optimal result for the reader – after reading the book they can’t tell you anything about the quality of the printing; all they remember is the content, and the content was thrilling.

Maybe the author specifies the typeface. Some books may include fine illustrations or intricate frontispieces which are intrinsic to the book. In these cases, the reproduction needs to be particularly accurate in order to do justice to what the author has created.

Beyond this, is there anything that the printer can do to enhance the appeal of the book? Well, they can create a fancy binding that the reader notices before they start reading; they can use particularly high quality paper; they can print the characters with micron precision. But only a book collector or printing technology enthusiast would care about these refinements – they have no effect on the actual experience of reading the content, and could easily detract from it.

The manufacturers of the ink and the mains cable that powers the printing press could read lots of books in their spare time, attend evening classes in English Literature, study the physiology of the eye, get diplomas in grammar, and tell us in interviews with speciality magazines about how it all informs their craft. But clearly the results would do nothing whatsoever to change the reading experience.

The printer might decide to dabble in science for the first time since they left printing college. They could do scientific trials in aspects of book reproduction where lucky participants get to read snippets of text or passages from ‘typical’ books, responding with their perceptions of differences, preferences, or even ‘emotional stimulation level’ in aspects such as:

  • typeface
  • ink
  • reading light
  • paper texture and weight
  • reading room shape/dimensions/finishes

But the results would be rather obvious and predictable, with anything slightly interesting being clearly the result of fashion, novelty and human fickleness rather than being a universal law.

The only way to actually enhance the book would be to change its content. An algorithm that replaces certain words? Re-writes sections to make them longer or shorter? Clearly in the case of literature, such a thing would be meaningless and idiotic. It is not so different in the case of audio. There is nothing but the recording: there is no technology, effect or algorithm that can meaningfully enhance it.

Conclusion

Domestic hi-fi is no more than the equivalent of rendering the printed content of a book: it can be done adequately or badly, and beyond that there is no meaningful way of improving on it. People become deluded by the idea that the rendering technology can enhance the content – which is obviously ridiculous in the case of books, but less obvious with audio.

But this is not to say that hi-fi is, in itself, boring: achieving ‘adequate’ is not trivial.

Many people are simply not used to hearing adequate reproduction regardless of how much money they spend, so they are not aware that the experience vs. quality graph has a horizontal flat top. And needless to say, the audiophile quality vs. cost graph is more-or-less random, which makes it even more confusing.

The audio enthusiast would be much happier and richer if they got a sense of proportion of what matters, then put all their creativity (and money if they’ve got nothing else to spend it on) into building the equivalent of a pleasant reading room, comfy chair and attractive bookcases rather than a solid gold and diamond reading light.

[Last edited 30/05/17]

Reverberation of a point source, compared with a ‘distributed’ loudspeaker

Here’s a fascinating speaker:

CBT36

It uses many transducers arranged in a specific curve, driven in parallel and with ‘shading’ i.e. graduated volume settings along the curve, to reduce vertical dispersion but maintain wide dispersion in the horizontal. I can see how this might appear quite appealing for use in a non-ideal room with low ceilings or whatever.

It is a variation on the phased array concept, where the outputs of many transducers combine to produce a directional beam. It is effectively relying on differing path lengths from the different transducers producing phase cancellation or reinforcement in the air at different angles as you move off axis. All the individual wavefronts sum correctly at the listener’s ear to reproduce the signal accurately.
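As a toy sketch of why this works, here is a small calculation (element count, spacing and frequency are all made-up assumptions, not the CBT36’s real parameters) that sums the wavefronts from a line of transducers as phasors, showing how off-axis path-length differences cause cancellation:

```python
import numpy as np

# Toy phased line array: N identical transducers driven in parallel.
# All values below are illustrative assumptions.
N = 16          # number of transducers
spacing = 0.05  # metres between adjacent transducers
f = 4000.0      # frequency being reproduced, Hz
c = 343.0       # speed of sound in air, m/s

def array_response(theta_deg):
    """Relative far-field pressure at an angle off the array's axis."""
    k = 2 * np.pi * f / c                 # wavenumber
    n = np.arange(N)                      # element indices
    # Path-length difference of element n, seen from angle theta:
    phase = k * n * spacing * np.sin(np.radians(theta_deg))
    # Sum the individual wavefronts as complex phasors and normalise:
    return abs(np.sum(np.exp(1j * phase))) / N

print(array_response(0))    # on axis: all wavefronts sum fully -> 1.0
print(array_response(30))   # off axis: partial phase cancellation
```

‘Shading’ would simply replace the uniform weights in that phasor sum with graduated ones along the array.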

At a smaller scale, a single transducer of finite size can be thought of as many small transducers being driven simultaneously. At high frequencies (as the wavelengths being reproduced become short compared to the diameter of the transducer) differing path lengths from various parts of the transducer combine in the air to cause phase cancellation as you move off axis. This is known as beaming and is usually controlled in speaker design by using drivers of the appropriate size for the frequencies they are reproducing. Changes in directivity with frequency are regarded as undesirable in speaker design, because although the on-axis measurements can be perfect, the ‘room sound’ (reverberation) has the ‘wrong’ frequency response.
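The rule of thumb can be put into rough numbers. The snippet below uses the approximate (and assumed) relation that beaming sets in when the reproduced wavelength shrinks to about the driver’s diameter:

```python
# Rough rule of thumb (an assumption, not a precise law): a driver
# starts to beam when the reproduced wavelength becomes comparable
# to its diameter, i.e. at roughly f = c / d.
c = 343.0  # speed of sound in air, m/s

def beaming_onset_hz(diameter_m):
    """Approximate frequency above which a driver of this size beams."""
    return c / diameter_m

for name, d in [("25 mm tweeter", 0.025),
                ("130 mm midrange", 0.13),
                ("300 mm woofer", 0.30)]:
    print(f"{name}: beams above roughly {beaming_onset_hz(d):.0f} Hz")
```

This is why multi-way designs hand over to a smaller driver before its larger neighbour starts to beam.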

A large panel speaker suffers from beaming in the extreme, but with Quad electrostatics Peter Walker introduced a clever trick, where phase is shifted selectively using concentric circular electrodes as you move outwards from the centre of the panel. At the listener’s position, this simulates the effect of a point source emanating from some distance behind the panel, increasing the size of the ‘sweet spot’ and effectively reducing the high frequency beaming.

There are other ways of harnessing the power of phase cancellation and summation. Dipole speakers’ lower frequencies cancel out at the sides (and top and bottom) as the antiphase rear pressure waves meet those from the front. This is supposed to be useful acoustically, cutting down on unwanted reflections from floor, walls and ceiling. A dipole speaker may be realised by mounting a single driver on a panel of wood with a hole in it, but it behaves effectively as two transducers, one of which is in anti-phase to the other. Some people say they prefer the sound of such speakers over conventional box speakers.

This all works well in terms of the direct sound reaching the listener and, as in the CBT speaker above, may provide a very uniform dispersion with frequency compared to conventional speakers. But beyond the measurements of the direct sound, does the reverberation sound quite ‘right’? What if the overall level of reverberation doesn’t approximate the ‘liveness’ of the room that the listeners notice as they talk or shuffle their feet? If the vertical reflections are reduced but not the horizontal, does this sound unnatural?

Characterising a room from its sound

The interaction of a room and an acoustic source could be thought of as a collection of simultaneous equations – acoustics can be modelled and simulated for computer games, and it is possible for a computer to do the reverse and work out the size and shape of the room from the sound. If the acoustic source is, in fact, multiple sources separated by certain distances, the computer can work that out, too.
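As a toy illustration of the ‘forward’ problem, here is a minimal first-order image-source calculation (all positions and room dimensions are made up) computing the arrival times that a computer could, in principle, invert to recover the room’s geometry:

```python
import math

# Toy 'forward' acoustic model: direct sound plus first reflections
# via the image-source method. All numbers are made-up assumptions.
c = 343.0               # speed of sound, m/s
room = (5.0, 4.0, 2.5)  # room dimensions in metres (assumed)
src = (1.0, 2.0, 1.2)   # source position (assumed)
mic = (3.5, 2.0, 1.2)   # listener position (assumed)

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

arrivals = [dist(src, mic) / c]       # direct path
for axis in range(3):                 # mirror the source in each wall pair
    for wall in (0.0, room[axis]):
        img = list(src)
        img[axis] = 2 * wall - src[axis]  # image source behind the wall
        arrivals.append(dist(img, mic) / c)

for t in sorted(arrivals):
    print(f"{t * 1000:.2f} ms")
```

Each reflection’s delay encodes a wall distance, which is why the pattern of arrivals is, in effect, a fingerprint of the room.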

Does the human hearing system do something similar? I would say “probably”. A human can work quite a lot out about a room from just its sound – you would certainly know whether you were in an anechoic chamber, a normal room or a cathedral. Even in a strange environment, a human rarely mistakes the direction and distance from which sound is coming. Head movements may play a part.

And this is where listening to a ‘distributed speaker’ in a room becomes a bit strange.

Stereo speakers can be regarded as a ‘distributed speaker’ when playing a centrally-placed sound; this is unavoidable if we are using stereo as our system. Beyond that, what is the effect of spreading each speaker itself out, or of deliberately creating phased ‘beams’ of sound?

Even though the combination of direct sounds adds up to the familiar sound at the listener’s position as though emanating from its original source, there is information within the reflections that is telling the listener that the acoustic source is really a radically different shape. Reverberation levels and directions may be ‘asymmetric’ with the apparent direct sound.

In effect, the direct sound says we are listening to this:

[image: Zoe Wanamaker as Cassandra]

but the reverberation says it is something different.

[image: Zoe Wanamaker as Cassandra]

Might there be audible side effects from this? In the case of the dipole speaker, for example, the rear (antiphase) signal reflects off the back wall and some of it does make its way forwards to the listener. In my experience, this comes through as a certain ‘phasiness’ but it doesn’t seem to bother other people.

From a normal listening distance, most musical sources are small and appear close to being a ‘point source’. If we are going to add some more reverberation, should it not appear to be emanating as much as possible from a point source?

It is easy to say that reverberation is so complex that it is just a wash of ‘ambience’ and nothing more; all we need to do is give it the right ‘colour’ i.e. frequency response. And one of the reasons for using a ‘distributed speaker’ may be to reduce the amount of reverberation anyway. But I don’t think we should overdo it: we surely want to listen in real rooms because of the reverberation, not despite it. What is the most side-effect-free way to introduce this reverberation?

Clearly, some rooms are not ideal and offer too much of the wrong sort of reverberation. Maybe a ‘distributed speaker’ offers a solution, but is it as good as a conventional speaker in a suitable room? And is it really necessary, anyway? I think some people may be misguidedly attempting to achieve ‘perfect’ measurements by, effectively, eliminating the room from the sound even though their room is perfectly fine. How many people are intrigued by the CBT speaker above simply because it offers ‘better’ conventional in-room measurements, regardless of whether it is necessary?

Conclusion

‘Distributed speakers’ that use large, or multiple, transducers may achieve what they set out to do superficially, but are they free of side-effects?

I don’t have scientific proof, but I remain convinced that the ‘Rolls Royce’ of listening remains ‘point source’ monopole speakers in a large, carpeted, furnished room with a high ceiling. Box speakers with multiple drivers of different sizes are small and can be regarded as being very close to a single transducer, but are not so omnidirectional that they create too much reverberation. The acoustic ‘throw’ they produce is fairly ‘natural’. In other words, for stereo perfection, I think there is still a good chance that the types of rooms and speakers people were listening to in the 1970s remain optimal.

[Last edited 17.30 BST 09/05/17]

The Sound of a Symphony Orchestra

Last night I went to a symphony concert: Shostakovich’s 10th, preceded by Prokofiev’s Piano Concerto No. 2 at the West Road Concert Hall, Cambridge.

[image: West Road Concert Hall]

We were sitting in the second row from the front – so quite close to the piano. I wish I had taken a photograph, but I was so paranoid about my phone ringing mid-performance that I left it turned off! The image above shows the empty venue.

We really enjoyed the concert. Chiyan Wong is an amazing piano soloist, and CCSO were spectacular. The sound from a large orchestra was formidable, and we got to hear the fairly new Steinway grand in great detail – the piano was removed during the interval, before the Shostakovich that followed.

Now, I do often listen to this sort of music with my system, but this was the first time I had been to a concert to hear this specific Russian ‘genre’. Of course I couldn’t help but make a mental comparison of the sound of the real thing versus the hi-fi facsimile that I am used to, as I was listening. And you know what? I have to say that a good hi-fi gives a pretty good rendition of the real sound.

The real thing was very loud, but also very rich – I have observed that ‘painfully loud’ is more a function of quality than volume; you need good bass to balance the rest of the spectrum. So this was very loud, but at no time painful. Bass from the orchestra was wonderful, but didn’t take me by surprise – I sometimes hear such bass from my system. (It did take me by surprise the first time I heard it from a hi-fi system, however!).

Some people cite piano as being the most difficult thing for a hi-fi system to reproduce. I don’t know where they get that from: I loved the sound of the piano, and I think a good system can reproduce it fairly easily.

I was struck by the homogeneity within the different sections of the orchestra. Listening to a recording of just a piano, or just the violins, would not tell you very much about an audio system. It is only when you hear a combination of the piano, the violins and the brass, say, that any ‘formant’ (i.e. fixed frequency response signature) within your system would show up.

As discussed previously, ‘imaging’ of the orchestra was not as pin sharp as you get in some recordings, but many purist recordings portray the true effect quite accurately. The width of the ‘soundstage’ of a stereo system is more-or-less right, and the room you are listening in enhances the recording’s ‘ambience’ around and behind you.

Of course the concert is a very special experience. The stereo version isn’t always as deep, open and spacious, nor is the envelopment as complete but, all in all, I think if you sit down in the right frame of mind to listen to a fine orchestral recording using a good hi-fi system, you are getting a very reasonable impression of the sound, excitement and visceral quality of the real thing. And that really is quite an amazing idea.

Room correction. What are we trying to achieve?

The short version…

The recent availability of DSP is leading some people to assume that speakers are, and have always been, ‘wrong’ unless EQ’ed to invert the room’s acoustics.

In fact, our audio ancestors didn’t get it wrong. Only a neutral speaker is ‘right’, and the acoustics of an average room are an enhancement to the sound. If we don’t like the sound of the room, we must change the room – not the sound from the speaker.

DSP gives us the tools to build a more neutral speaker than ever before.


There are endless discussions about room correction, and many different commercial products and methods. Some people seem to like certain results while others find them a little strange-sounding.

I am not actually sure what it is that people are trying to achieve. I can’t help but think that if someone feels the need for room correction, they have yet to hear a system that sounds so good that they wouldn’t dream of messing it up with another layer of their own ‘EQ’.

Another possibility is that they are making an unwarranted assumption based on the fact that there are large objective differences between the recorded waveform and what reaches the listener’s ears in a real room. That must mean that no matter how good it sounds, there’s an error. It could sound even better, right?

No.

A reviewer of the Kii Three found that this particularly neutral speaker sounded perfect straight out of the box.

“…the traditional kind of subjective analysis we speaker reviewers default to — describing the tonal balance and making a judgement about the competence of a monitor’s basic frequency response — is somehow rendered a little pointless with the Kii Three. It sounds so transparent and creates such fundamentally believable audio that thoughts of ‘dull’ or ‘bright’ seem somehow superfluous.”

The Kii Three does, however, offer a number of preset “contour” EQ options. As I shall describe later, I think that a variation on this is all that is required to refine the sound of any well-designed neutral speaker in most rooms.

A distinction is often made between correction of the bass and higher frequencies. If the room is large, and furnished copiously, there may be no problem to solve in either case, and this is the ideal situation. But some bass manipulation may be needed in many rooms. At a minimum, the person with sealed woofers needs the roll-off at the bottom end to start at about the right frequency for the room. This, in itself, is a form of ‘room correction’.
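For illustration, here is a minimal sketch of that sealed-box roll-off, modelled as the classic second-order high-pass (the resonance frequency and Q below are assumptions, chosen for a Butterworth alignment):

```python
import math

# Sketch of a sealed (closed-box) woofer's low-end response, which
# behaves like a second-order high-pass. fc and Qtc are assumptions.
fc = 35.0    # system resonance in Hz (assumed)
qtc = 0.707  # total Q of the sealed system (assumed Butterworth)

def sealed_box_db(f):
    """Relative level in dB of a 2nd-order high-pass at frequency f."""
    w = f / fc
    mag2 = w**4 / ((1 - w**2)**2 + (w / qtc)**2)
    return 10 * math.log10(mag2)

for f in (20, 35, 70, 140):
    print(f"{f:>4} Hz: {sealed_box_db(f):+.1f} dB")
```

The design task is then to pick fc so that this gentle 12 dB/octave slope starts where the room’s own low-frequency gain begins to take over.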

The controversial aspect is the question of whether we need ‘correction’ higher up. Should it be applied routinely (some people think so), as sparingly as possible, or not at all? And if people do hear an improvement, is that because the system is inherently correcting less-than-ideal speakers rather than the room?

Here are some ways of looking at the issue.

  1. Single room reflections give us echoes, while multiple reflections (of reflections) give us reverberation. Performing a frequency response measurement with a neutral transducer and analysing the result may show a non-flat FR at the listening position even when smoothed fairly heavily. This is just an aspect of statistics, and of the geometry and absorptivity of the various surfaces in the room. Some reflections will result in some frequencies summing in phase, to some extent, and others not.
  2. Experience tells us that we “hear through” the room to any acoustic source. Our hearing appears not to be just a frequency response analyser, but can separate direct sound from reflections. This is not a fanciful idea: adaptive software can learn to do the same thing.

The idea is also supported by some of the great and the good in audio.

Floyd Toole:

“…we humans manage to compensate for many of the temporal and timbral variations contributed by rooms and hear “through” them to appreciate certain essential qualities of sound sources within these spaces.”

Or Meridian’s Bob Stuart:

“Our brains are able to separate direct sound from the reverberation…”

  3. If we EQ the FR of the speaker to obtain a flat in-room measured response, including the reflections in the measurement, it seems that we will subsequently “hear through” the reflections to a strangely-EQ’ed direct sound. It will, nevertheless, measure ‘perfectly’.
  4. Audio orthodoxy maintains that humans are supremely insensitive to phase distortion, and this is often compounded with the argument that room reflections completely swamp phase information so it is not worth worrying about. This denies the possibility that we “hear through” the room. Listening tests in the past that purportedly demonstrated our inability to hear the effects of phase have often been based on mono only, and didn’t compare distorted with undistorted phase examples – merely distorted versus differently distorted, played on the then available equipment.
  5. Contradicting (4), audiophiles traditionally fear crossovers because the phase shifts inherent in (non-DSP) crossovers are, they say, always audible. DSP, on the other hand, allows us to create crossovers without any phase shift, i.e. ones that are ‘transparent’.
  6. At a minimum, speaker drivers on their baffles should not ‘fight’ each other through the crossover – their phases should be aligned. The appropriate delays then ensure that they are not ‘fighting’ at the listener’s position. The next level in performance is to ensure that their phases are flat at all frequencies, i.e. linear phase. The result of this is the recorded waveform preserved in both frequency and time.
  7. Intuitively, genuine stereo imaging is likely to be a function of phase and timing, so preserving that phase and timing should logically be something we try to do. We could ‘second guess’ how it works using traditional rules of thumb, deciding not to preserve the phase and timing, but if it is effectively cost-free to do it, why not do it anyway?
  8. A ‘perfect’ response from many speaker/room combinations can be guaranteed using DSP (deconvolution with the impulse response at that point, not just playing with a graphic equaliser). Unfortunately, it will only be valid for a single point in space, and moving 1 mm from there will produce errors and unquantifiable sonic effects. Additionally, ‘perfect’ refers to the ‘anechoic chamber’ version of the recording, which may not be what most people are trying to achieve even if the measurements they think they seek mean precisely that.
  9. Room effects such as (moderate) reverberation are a major difference between listening with speakers versus headphones, and are actually desirable. ‘Room correction’ would be a bad thing if it literally removed the room from the sound. If that is the case, what exactly do we think ‘room correction’ is for?
  10. Even if the drivers are neutral (in an anechoic situation) and crossed over perfectly on axis, they are of finite size and mounted in a box or on a baffle that has a physical size and shape. This produces certain frequency-dependent dispersion characteristics which give different measured, and subjective, results in different rooms. Some questions are:
    • is this dispersion characteristic a ‘room effect’ or a ‘speaker effect’, or both?
    • is there a simple objective measurement that says one result is better than any other?
    • is there just one ‘right’ result and all others are ‘wrong’?
  11. Should room correction attempt to correct the speaker as well? Or should we, in fact, only correct the speaker? Or just the room? If so, how would we separate room from speaker in our measurements? Can they, in fact, be separated?
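As a minimal sketch of a DSP crossover with no phase shift, the following builds a linear-phase windowed-sinc low-pass and its exact complement as the high-pass; the two branches sum back to a pure delayed impulse, i.e. the waveform is preserved in both frequency and time. Filter length and crossover frequency are illustrative assumptions:

```python
import numpy as np

# Linear-phase FIR crossover sketch. The high-pass is formed as
# (delayed impulse - low-pass), so the pair is exactly complementary.
# Sample rate, crossover frequency and tap count are assumptions.
fs = 48000          # sample rate, Hz
fc = 2000           # crossover frequency, Hz
taps = 255          # odd length -> exact linear phase, integer delay
mid = taps // 2     # group delay in samples

n = np.arange(taps) - mid
lp = np.sinc(2 * fc / fs * n) * (2 * fc / fs)  # truncated ideal low-pass
lp *= np.hamming(taps)                         # window to tame ripple
hp = -lp
hp[mid] += 1.0                                 # delta - low-pass = high-pass

# Summing the two branches reconstructs a single delayed impulse:
recombined = lp + hp
print(np.argmax(recombined), recombined[mid])  # peak at 'mid', value 1.0
```

Because both filters share the same flat, linear phase, neither branch ‘fights’ the other anywhere near the crossover point.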

I think there is a formula that gives good results. It says:

  • Don’t rely on feedback from in-room measurements, but do ‘neutralise’ the speaker at the most elemental levels first. At every stage, go for the most neutral (and locally correctable) option e.g. sealed woofers, DSP-based linear phase crossovers with time alignment delays.
  • Simply avoid configurations that are going to give inherently weird results: two-way speakers, bass reflex, many types of passive crossover etc. These may not even be partially correctable in any meaningful way.
  • Phase and time alignment are sacrosanct. This is the secret ingredient. You can play with minor changes to the ‘tone colour’ separately, but your direct sound must always maintain the recording’s phase and time alignment. This implies that FIR filters must be used, thus allowing frequency response to be modified independently of phase.
  • By all means do all the good stuff regarding speaker placement, room treatments (the room is always ‘valid’), and avoiding objects and asymmetry around the speakers themselves.
  • Notionally, I propose that we wish to correct the speaker, not the room. However, we are faced with a room and a non-neutral speaker that are intertwined, because the speaker has multiple drivers of finite size and a physical presence (as opposed to being a point source with uniform directivity at all frequencies). The artefacts resulting from this are room-dependent and can never really be ‘corrected’ unambiguously. Luckily, a smooth EQ curve can make the sound subjectively near enough to transparent. To obtain this curve, predict the baffle step correction for each driver using modelling or a standard formula, with some trial-and-error regarding the depth required (4, 5, 6 dB?); this is a very smooth EQ curve. Or, possibly (I haven’t done this myself), make many FR measurements around the listening area, smooth and average them together, and partially invert this, again without altering phase and time alignment.
  • You are hearing the direct sound, plus separately-perceived ‘room ambience’. If you don’t like the sound of the ambience, you must change the room, not the direct sound.
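To make the baffle step concrete, here is a rough sketch of the smooth shelf involved (the baffle width and step depth are assumptions, and the simple interpolation formula is mine, not a precise diffraction model):

```python
# Classic baffle-step estimate: output rises by roughly 4-6 dB as the
# wavelength shrinks below the baffle width. The centre frequency is
# often approximated as f3 = 115 / width(m). All values are assumed.
baffle_width = 0.25            # metres (assumed)
f3 = 115.0 / baffle_width      # shelf centre frequency, Hz
depth_db = 6.0                 # full step, free-standing placement

def baffle_step_db(f):
    """Approximate on-axis gain (dB) contributed by the baffle step."""
    # Simple smooth shelf: 0 dB well below f3, +depth_db well above.
    x = (f / f3) ** 2
    return depth_db * x / (1.0 + x)

for f in (100, f3, 5000):
    print(f"{f:>6.0f} Hz: +{baffle_step_db(f):.1f} dB")
```

The correction filter is just the inverse of this very smooth curve, which is why it can be applied without touching phase or time alignment.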

Is there any scientific evidence for these assertions? No more nor less than any other ‘room correction’ technique – just logical deduction based on subjective experience. Really, it is just a case of thinking about what we hear as we move around and between rooms, compared to what the simple in-room FR measurements show. Why do real musicians not need ‘correction’ when they play in different venues? Do we really want ‘headphone sound’ when listening in rooms? (If so, just wear headphones or sit closer to smaller speakers).

This does not say that neutral drivers alone are sufficient to guarantee good sound – I have observed that this is not the case. A simple baffle step correction applied to frequency response (but leaving phase and timing intact) can greatly improve the sound of a real loudspeaker in a room without affecting how sharply-imaged and dynamic it sounds. I surmise that frequency response can be regarded as ‘colour’ (or “chrominance” in old school video speak), independent of the ‘detail’ (or “luminance”) of phase and timing. We can work towards a frequency response that compensates for the combination of room and speaker dispersion effects to give the right subjective ‘colour’ as long as we maintain accurate phase and timing of the direct sound.

We are not (necessarily) trying to flatten the in-room FR as measured at the listener’s position – the EQ we apply is very smooth and shallow – but the result will still be perceived as a flat FR. Many (most?) existing speakers inherently have this EQ built in whether their creators applied it deliberately, or via the ‘voicing’ they did when setting the speaker up for use in an average room.

In conclusion, the summary is this:

  • Humans “hear through” the room to the direct sound; the room is perceived as a separate ‘ambience’. Because of this, ‘no correction’ really is the correct strategy.
  • Simply flattening the FR at the listening position via EQ of the speaker output is likely to result in ‘peculiar’ perceived sound, even if the in-room measurements purport to say otherwise.
  • Speakers have to be as rigorously neutral as possible by design, rather than attempting to correct them by ‘global feedback’ in the room.
  • Final refinement is a speaker/room-dependent, smooth, shallow EQ curve that doesn’t touch phase and timing – only FIR filters can do this.

[Last updated 05/04/17]

The Secret Life of the Signal

Some people actually think of stereo imaging as a “parlour trick” that is very low on the list of desirable attributes that an audio system should have. They ‘rationalise’ this by saying that in the majority of recordings, any stereo image is an artificial illusion, created by the recording engineer either deliberately or by accident; it does not accurately represent the live event – because there may not even have been a single live event. So how can it matter if it is reproduced by the playback system or not? Perhaps it is even best to suppress it: muddle it up with some inter-channel crosstalk like vinyl does, or even listen in mono.

At the top of the list of desirable attributes for a hi-fi system, most audiophiles would put “timbre”, “tonality”, low distortion, clean reproduction at high volumes, dynamics, deep bass. All of these qualities can be experienced with a mono signal and a single speaker – in fact, the Harman Corporation’s training for listeners recommends monophonic reproduction when performing listening tests.

Because their effects are not so obvious in mono, phase and timing are regarded by many as supremely unimportant. I quote one industry luminary:

Time domain does not enter my vocabulary…

Sound is colour?

We know that our eyes respond to detail and colour in different ways. In the early days of (analogue) colour TV, it was found that the signal could be broadcast within practical bandwidths because the colour (chrominance) information could be sent at lower resolution than the detail (luminance).

There is, perhaps, a parallel in hearing, too: that humans have separate mechanisms for responding to sound in the frequency and time domains. But the conventional hi-fi industry’s implicit view is that we only hear in the frequency domain: all the main measurements are in the frequency domain, and steady state signals are regarded as equivalent to real music. A speaker’s overall response to phase and timing is ignored almost totally or, at best, regarded as a secondary issue.

I think that this is symptomatic of an idea that pervades hi-fi: that the signal is ‘colour’. Sure, it varies as the music is playing, but the exact nature of that variation is almost incidental; secondary in comparison to the importance of the accurate reproduction of colour, and that in testing, all that matters is whether a uniform colour is accurately reproduced.

There has, nevertheless, been some belated lip service paid to the importance of timing, with the hype around MQA (still usually being played over speakers with huge timing errors!), and a number of passive speakers with sloping front baffles for time alignment. Taken to its logical conclusion, we have these:

[image: Wilson WAMM Master Chronosonic final prototype]

Their creator says, though:

It’s nice if you have phase coherence, but it is not necessary

So they still fall short of the “straight wire with gain” ideal. It still says that the signal is something we can take liberties with, not aspiring to absolute accuracy in the detail as long as we get a good neutral white and a deep black, and all uniform (‘steady state’) colours reproduced with the correct shading. It says that we understand the signal and it is trivial. Time alignment by moving the drivers backwards and forwards is an easy gimmick, though, so we can go that far.

Another Dimension

I think that with DSP-corrected drivers and crossovers, we are beginning to find that there is another dimension to the common or garden stereo signal; one that has been viewed as a secondary effect until now. Whether created accidentally or not, the majority of recordings contain ‘imaging’ that is so clear that it gives us access to the music in a way we were not aware of. It allows us to ‘walk around’ the scene in which the recording was made. If it is a composite, multitrack recording, it may not be a real scene that ever existed, but the individual elements are each small scenes in themselves, and they become clearly delineated. It is ‘compelling’.

I can do no better than quote a brand new review of the Kii Three written by a professional audio engineer, which echoes something I was saying a couple of weeks ago: imaging is not just a ‘trick’, but improves the separation of the acoustic sources in a way that goes beyond the traditional attributes of low distortion and colouration.

I think he also echoes something I said about believable imaging giving the speaker a ‘free pass’ in terms of measurements. As in my DIY post, he says that the speaker sounds so transparent and believable that there is no point in going any further in criticising the sound. A suggestion, perhaps, that conventional ‘in-room’ measurements and ‘room correction’ are shown up as the red herrings they are if a system sets out to be genuinely neutral by design, at source.

Firstly, the traditional kind of subjective analysis we speaker reviewers default to — describing the tonal balance and making a judgement about the competence of a monitor’s basic frequency response — is somehow rendered a little pointless with the Kii Three. It sounds so transparent and creates such fundamentally believable audio that thoughts of ‘dull’ or ‘bright’ seem somehow superfluous.

… it is dominated by such a sense of realistic clarity, imaging, dynamics and detail that you begin almost to forget that there’s a speaker between you and the music.

…I’ve never heard anything anywhere near as adept at separating the elements of a mix and revealing exactly what is going on. I found myself endlessly fascinated, in particular, by the way the Kii Three presents vocals within a mix and ruthlessly reveals how good the performance was and how the voice was subsequently treated (or mistreated). Performance idiosyncrasies, microphone character, room sound, compression effects, reverb and delay techniques and pitch-correction artifacts that I’d never noticed before became blindingly obvious — it was addictive.

…One of the joys of auditioning new audio gear, especially speakers, is that I occasionally get to rediscover CDs or mixes that I thought I knew intimately. I can honestly say that with the Kii Three, every time I played some old familiar material I heard something significant in the way it performs…

…Low-latency mode …switch[es] off the system phase correction. It makes for a fascinating listening experience. …the change of phase response is clearly audible. The monitor loses a little of its imaging ability and overall precision in low-latency mode so that things sound a little less ‘together’.

“The Kii Three is one of the finest speakers I’ve ever heard and undoubtedly the best I’ve ever had the privilege and pleasure of using in my own home.”

Image is Everything

I have a couple of audiophile friends for whom ‘imaging’ is very much a secondary hi-fi goal, but I wonder if this is because they’ve never really heard it from their audio systems.

What do we mean by the term anyway? My definition would be the (illusion of) precise placement of acoustic sources in three dimensions in front of the listener – including the acoustics of the recording venue(s). It isn’t a fragile effect that only appears at one infinitesimal position in space or collapses at the merest turn of the head, either.

It is something that I am finding is trivially easy for DSP-based active speakers. Why? Well, I think that it just falls out naturally from accurate matching between the channels and phase and time-corrected drivers. Logically, good imaging will only occur when everything in a system is working more-or-less correctly.

I can imagine all kinds of mismatches and errors that might occur with passive crossovers, exacerbated by the compromises that are forced on the designer such as having to use fewer drivers than ideal, or running the drivers outside their ideal frequency ranges.

Imaging is affected by the speaker’s interaction with the room, of course. The ultimate imaging accuracy may occur when we eliminate the room’s contribution completely, and sit in a very tight ‘sweet spot’, but this is not the most practical or pleasant listening situation. The room’s contribution may also enhance an illusion of a palpable image, so it is not desirable to eliminate it completely. Ultimately, we are striking a balance between direct sound and ambient reflections through speaker directivity and positioning relative to walls.

A real audiophile scientist would no doubt be interested in how exactly stereo imaging works, and whether listening tests could be devised to show the relative contributions of poor damping, phase errors, Doppler distortion, timing misalignment etc. Maybe we could design a better passive speaker as a result. But I would say: why bother? The DSP active version is objectively more correct, and now that we have finally progressed to such technology and can actually listen to it, it clearly doesn’t need to do anything but reproduce left and right correctly – no need for any other tricks or the forlorn hope of some accidental magic from natural, organic, passive technology.

An ‘excuse’ for poor imaging is that in many real musical situations, imaging is not nearly as sharp as can be obtained from a good audio system. This is true: if you go to a classical concert and consciously listen for where a solo brass instrument (for example) is coming from, you often can’t really tell. I presume this is because you are generally seated far from the stage with a lot of people in the way and much ‘ambience’ thrown in. I presume that the conductor is hearing much stronger ‘imaging’ than we are – and many recordings are made with the mics much closer than a typical person sitting in the auditorium; the sharper imaging in the recording may well be largely artificial.

However, to cite this as a reason for deliberately blurring the image in some arbitrary way is surely a red herring. The image heard by the audience member is still ‘coherent’ even if it is not sharp. And the ‘artificially imaged’ recording contains extra information that is allowing us to separate the various acoustic sources by a different mechanism than the one that might allow us to tease out the various sources in a mono recording, say. It reduces effort and vastly increases the clarity of the audio ‘scene’.

I think that good imaging due to superior time alignment and phase is going to be much more important than going to the Nth degree to obtain ultra-low harmonic distortion.

If we mess up the coherence between the channels we are getting the worst of all worlds: something that arbitrarily munges the various acoustic sources and their surroundings in response to signal content. An observation that is sometimes made is that the music “sticks to the speakers” rather than appearing in between. What are our brains to make of it? It must increase the effort of listening and blur the detail of what we are hearing.

Not only this, but good imaging is compelling. Solid voices and instruments that float in mid air grab the attention. The listener immediately understands that there is a lot more information trapped in a stereo recording than they ever knew.

Neural Adaptation

Just an interesting snippet regarding a characteristic of human hearing (and all our senses). It is called neural adaptation.

Neural adaptation or sensory adaptation is a change over time in the responsiveness of the sensory system to a constant stimulus. It is usually experienced as a change in the stimulus. For example, if one rests one’s hand on a table, one immediately feels the table’s surface on one’s skin. Within a few seconds, however, one ceases to feel the table’s surface. The sensory neurons stimulated by the table’s surface respond immediately, but then respond less and less until they may not respond at all; this is an example of neural adaptation. Neural adaptation is also thought to happen at a more central level such as the cortex.

Fast and slow adaptation
One has to distinguish fast adaptation from slow adaptation. Fast adaptation occurs immediately after stimulus presentation, i.e. within hundreds of milliseconds. Slow adaptive processes take minutes, hours or even days. The two classes of neural adaptation may rely on very different physiological mechanisms.

Auditory adaptation, as perceptual adaptation with other senses, is the process by which individuals adapt to sounds and noises. As research has shown, as time progresses, individuals tend to adapt to sounds and tend to distinguish them less frequently after a while. Sensory adaptation tends to blend sounds into one, variable sound, rather than having several separate sounds as a series. Moreover, after repeated perception, individuals tend to adapt to sounds to the point where they no longer consciously perceive it, or rather, “block it out”.

What this says to me is that perceived sound characteristics are variable depending on how long the person has been listening, and to what sequence of ‘stimuli’. Our senses, to some extent, are change detectors, not ‘direct coupled’.

Something of a conundrum for listening-based audio equipment testing..? Our hearing begins to change the moment we start listening. It becomes desensitised to repeated exposure to a sound – one of the cornerstones of many types of listening-based testing.

Auditory Scene Analysis

There is a field of study called Auditory Scene Analysis (ASA) that postulates that humans interpret “scenes” using sound just as they do using vision. I am not sure that it necessarily has any particular bearing on the way that audio hardware should be designed: basically the scene is all the clearer if the reproduction of the audio is clean in terms of noise, channel separation, distortion, frequency response and (seemingly controversial to hi-fi folk) the time domain.

However, the seminal work in this field includes the following analogy for hearing:

Your friend digs two narrow channels up from the side of a lake. Each is a few feet long and a few inches wide and they are spaced a few feet apart. Halfway up each one, your friend stretches a handkerchief and fastens it to the sides of the channel. As the waves reach the side of the lake they travel up the channels and cause the two handkerchiefs to go into motion. You are allowed to look only at the handkerchiefs and from their motions to answer a series of questions: How many boats are there on the lake and where are they? Which is the most powerful one? Which one is closer? Is the wind blowing? Has any large object been dropped suddenly into the lake?

Of course, when we listen to reproduced music with an audio system we are, in effect, duplicating the motion of the handkerchiefs using two paddles in another lake (our listening room) and watching the motion of a new pair of handkerchiefs. Amazingly, it works! But the key to this is that the two lakes are well-defined linear systems. Our brains can ‘work back’ to the original sounds using a process akin to ‘blind deconvolution’.
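The ‘working back’ idea can be illustrated with a toy sketch (non-blind deconvolution, which is much simpler than what the brain is presumed to do, but it shows why linearity is the key). The two ‘lakes’ are modelled as short made-up FIR filters; if their combined response is known and well behaved, a frequency-domain division recovers the source exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

# A short 'source' signal (the boats on the lake).
source = rng.standard_normal(256)

# Two cascaded linear systems: the recording room ('first lake') and the
# listening room ('second lake'), modelled as arbitrary short FIRs.
lake1 = np.array([1.0, 0.6, 0.3, 0.1])
lake2 = np.array([1.0, -0.4, 0.2])

# What reaches the 'handkerchief': the source convolved through both lakes.
received = np.convolve(np.convolve(source, lake1), lake2)

# Non-blind deconvolution: knowing the combined response (and it being
# minimum-phase, so its inverse is stable), we can 'work back' to the source.
combined = np.convolve(lake1, lake2)
n = len(received)
recovered = np.fft.irfft(np.fft.rfft(received, n) / np.fft.rfft(combined, n), n)

print(np.max(np.abs(recovered[:len(source)] - source)))  # tiny numerical noise
```

The recovery only works because convolution through a linear system is invertible; if either ‘lake’ were nonlinear, no such inverse would exist, which is the sense in which the two lakes being well-defined linear systems is the key.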

If we want to, we can eliminate the ‘second lake’ by using headphones, or we can almost eliminate it by using an anechoic chamber. We could theoretically eliminate it at a single point in space by deconvolving the reproduced signal with the measured impulse response of the room at that point. Listening with headphones works OK, but listening to speakers in a dead acoustic sounds terrible – probably to do with ‘head related transfer function’ (HRTF) telling us that we are listening to a ‘real’ acoustic but with an absence of the expected acoustic cues when we move our heads. By adding the ‘second lake’ we create enough ‘real acoustic’ to overcome that.

But here is why ‘room correction’ is flawed. The logical conclusion of room correction is to simulate headphones, but this cannot be achieved – and is not what most listeners want anyway, even if they don’t know it. Instead, an incomplete ‘correction’ is implemented based on the idea of trying to make the motion of the two sets of ‘handkerchiefs’ closer to each other than they (in naive measurements) appear to be. If the idea of the brain ‘working back’ to the original sound is correct, it will ‘work back’ to a seemingly arbitrarily modified recording. Modifying the physical acoustics of the room is valid whereas modifying the signal is not.

I think the problem stems ultimately from an engineering tool (frequency domain measurement) proliferating due to cheap computing power. There is a huge difference in levels of understanding between the author of the ASA book and the audiophiles and manufacturers who think that the sound is improved by tweaking graphic equalisers in an attempt to compensate for delays that the brain has compensated for already.

Does hi-fi end here?

[Image: Kii Three]

Reports are coming in that hi-fi may, after a century of development, have actually reached its logical conclusion. It is beginning to look as though the Kii Three may be the technology beyond which it simply wouldn’t be worth going, for the vast majority of people. If so, this is quite a significant moment.

Everything up to this point has been a flawed, intermediate step.

It all started in the 19th century with the stunningly simple observation that sound is nothing more than variations of air pressure and that these can be picked up by a diaphragm and reproduced by another diaphragm. The hi-fi story has been one of how best to store the information encoded within the vibrations, and how to get the vibrations back out into the world at some time later.

First, we had purely mechanical systems which had to contend with the imbalance between the tiny amount of energy that can be picked up when making a recording versus the large amount of energy that is needed to play the recording back.

Then, with the introduction of electronics into the equation, the path towards the truly linear system was opened up. We had recording on magnetic tape, distributed to the listeners via vinyl LPs. Amplification with valves, then transistors, Class A, AB and now Class D. Horn speakers, multi-way speakers, direct radiators, acoustic suspension, and detours into panel speakers, electrostatics and even plasma. Interestingly, active crossovers are not new: they were used in cinemas in the 1930s, and there was at least one well-heeled enthusiast using them in a domestic system in the 1950s.

A major disruption occurred with the development of digital audio in the 1980s which, at a stroke, propelled performance in terms of noise, distortion and linearity to the point of practical perfection and slashed the size, weight and price of audio storage and playback equipment.

(At this point, ‘high end’ audio as a hobby left the rails and, for many, became an exercise in masochism, superstition and nostalgia).

The next part of the puzzle was solved when computing power became available. Using a computer it is possible to perform digital signal processing (DSP), allowing precise tailoring of crossovers and EQ, and for the characteristics of mechanical transducers (the speaker drivers in their boxes) to be modified.

The linear system

Now, all the pieces were in place to build a linear reproduction system using the following building blocks:

  • Digital storage of stereo or multichannel recording
  • DSP to process the signal for crossover, time alignment between drivers, driver amplitude and phase correction, EQ, woofer distortion correction using voice coil current or motion feedback
  • One DAC per driver
  • One solid state amplifier per driver
  • Loudspeaker comprising several dynamic drivers, each allocated to a narrow frequency range, including a sealed woofer whose bass can, if necessary, be extended using DSP EQ.
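The DSP crossover at the heart of such a chain can be sketched minimally as linear-phase FIR band-splitting (the sample rate, crossover points and filter length below are illustrative, not any real product's values). The point of the sketch is that the three driver feeds sum back to a pure delay of the input, which is the sense in which the crossover is transparent:

```python
import numpy as np
from scipy.signal import firwin, fftconvolve

fs = 48_000          # sample rate, Hz
taps = 1023          # odd length gives an exactly linear-phase (Type I) FIR
f_xo = (500, 3100)   # illustrative woofer/mid and mid/tweeter crossover points

# Complementary linear-phase bands: low-pass, band-pass, high-pass.
low = firwin(taps, f_xo[0], fs=fs)
high = firwin(taps, f_xo[1], fs=fs, pass_zero=False)
mid = -(low + high)
mid[taps // 2] += 1.0            # band-pass = (pure delay) - low - high

signal = np.random.default_rng(1).standard_normal(fs // 10)
bands = [fftconvolve(signal, h) for h in (low, mid, high)]

# The three driver feeds sum back to the input, delayed by the FIR's
# group delay: a perfectly reconstructing crossover.
total = sum(bands)
delay = taps // 2
print(np.max(np.abs(total[delay:delay + len(signal)] - signal)))
```

In a real system each band would additionally carry per-driver phase correction, EQ and gain, but those are all just further linear filters applied to the same feeds.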

This is all perfectly realisable at low cost using physically small electronics. The advent of Class D amplification makes it even smaller and cheaper. Such a system is virtually noiseless, has extremely low levels of distortion and covers the entire human hearing frequency range.

The final part of the puzzle

There has been a lag in the acceptance of such systems even though they are spectacularly good. The recent development of a system to tackle directly the issue of the speaker’s interaction with the room at bass frequencies may be the final part of the puzzle that means these systems take off. I think the Kii Three is the first speaker to do this using DSP, followed closely by the huge and expensive Beolab 90.

There is some confusion over why DSP-based ‘room correction’ is needed, and what it is capable of. Although the room appears to mangle the signal terribly in terms of frequency response and phase when measured, the listener hears the direct sound from the speaker first, and an average room just adds agreeable ‘ambience’ that blends the immediate surroundings with the recording and helps to cement a convincing illusion of ‘being there’. Trying to ‘correct’ the effects of the room will make the system sound worse.

The one area where genuine problems may occur, however, is in the bass, and people attempt to solve this with DSP (not very successfully), and with room treatments (not particularly effective for the bass). The Kii Three and Beolab 90 both take the approach of using extra drivers driven by DSP to make the speaker more directional at low frequencies by cancelling out some of the almost omnidirectional bass that comes from the main driver, at the sides and rear. This effectively provides the same directionality as a huge baffle, but from a compact speaker.
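The cancellation principle can be sketched with two ideal monopoles: a front source, plus a rear source that the DSP inverts and delays by the acoustic travel time between them, so that the two waves cancel behind the speaker. (The spacing and frequency are illustrative; a real cardioid speaker applies filtered, frequency-dependent versions of this across several drivers.)

```python
import numpy as np

c, d, f = 343.0, 0.20, 60.0   # speed of sound (m/s), driver spacing (m), frequency (Hz)
k = 2 * np.pi * f / c         # wavenumber

theta = np.linspace(0.0, np.pi, 181)   # 0 = straight ahead, pi = directly behind

# Far-field pressure: front monopole, plus a rear monopole inverted and
# delayed by d/c so its wavefront cancels the front wave behind the speaker.
front = np.exp(1j * k * (d / 2) * np.cos(theta))
rear = -np.exp(-1j * k * d) * np.exp(-1j * k * (d / 2) * np.cos(theta))
mag = np.abs(front + rear)

# Strong output forward, a deep null to the rear: a cardioid-like pattern.
print(mag[0], mag[-1])
```

This is how a compact box mimics the directionality of a huge baffle: the rearward bass is removed at source rather than absorbed after the fact.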

Intuitively, it seems obvious that in a highly reflective, echoey room, this technique would improve the clarity of what was heard. It would also tackle problems of speaker placement near walls and corners. The amount of bass bouncing around the room is being reduced at source, rather than trying to catch it afterwards with bass traps etc. The result, apparently, is spectacularly good.

By all accounts, the Kii Three is a compact, good looking speaker with a moderate (OK, not outrageous) price, that simply disappears acoustically, leaving the music as a solid 3D image. It is loud enough and goes deep enough to satisfy the vast majority of people. No other equipment is needed other than a digital source, which could be a PC, streamer or network.

The search is, apparently, over. While it would be possible to build a bigger system, with bigger drivers, higher powered amps and so on, this would just be scaling the same fundamental design. This has already been done in the form of the Beolab 90. The system could be further scaled to provide more channels than just stereo, and more precise control of dispersion in the vertical as well as the horizontal – if anyone thought it necessary.

Conclusion

In the end, it turned out that the ‘objectivists’ were basically right: you really do just need perfect linearity to build the perfect hi-fi system (but you also have to have accuracy in the time domain, which most audio objectivists ignore).

According to reviews, and based on my own experience of not completely dissimilar DIY systems, the Kii Three is the only hi-fi system anyone will ever need. Valves, vinyl and passive crossovers seem positively quaint in comparison; ‘high tech’ passive speaker systems seem almost perverse. No doubt the Kii Three will be copied, and cheaper versions will appear, but there is no need to fundamentally change the design from now on. It should be game over for other forms of hi-fi. (It won’t be, of course!)

Thoughts on creating stuff

[Photo: the converted KEF Concord III speakers]

The mysterious driver at the bottom is the original tweeter left in place to avoid having to plug the hole

I just spent an enjoyable evening tuning my converted KEF Concord III speakers. Faced with three drivers in a box, I was able to do the following:

  • Make impulse response measurements of the drivers – near and far field as appropriate to the size and frequency ranges of the drivers (although it’s not a great room for making the far field measurements in)
  • Apply linear phase crossovers at 500Hz/3100Hz with a 4th order slope. Much scope for changing these later.
  • Correct the drivers’ phase based on the measurements.
  • Apply baffle step compensation using a formula based on baffle width.
  • Trim the gain of each driver.
  • Adjust delays by ear to get the ‘fullest’ pink noise sound over several positions around the listening position.
  • ‘Overwrite’ the woofer’s natural response to obtain a new corner frequency at 40 Hz with 12dB per octave roll off.
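For the baffle step item above, a widely quoted rule of thumb (possibly not the exact formula I used) puts the step frequency at roughly f3 = 115 / baffle width in metres, with the compensation being a first-order shelf of about 6 dB:

```python
import numpy as np

def baffle_step_comp(freqs, width_m):
    """Magnitude of a first-order shelf approximating baffle step
    compensation: flat in the bass, about -6 dB in the treble, centred
    on f3 = 115 / width (a common rule of thumb)."""
    f3 = 115.0 / width_m
    jf = 1j * np.asarray(freqs, dtype=float)
    # Zero at f3*sqrt(2), pole at f3/sqrt(2): a -6 dB high shelf,
    # equivalent to boosting the bass relative to the treble.
    return np.abs((jf / (f3 * np.sqrt(2)) + 1) / (jf / (f3 / np.sqrt(2)) + 1))

width = 0.28                                   # illustrative baffle width, m
freqs = np.array([20.0, 115.0 / width, 20_000.0])
gain_db = 20 * np.log10(baffle_step_comp(freqs, width))
print(gain_db)   # near 0 dB in the bass, about -3 dB at f3, about -6 dB in the treble
```

Whether the shelf is implemented as a treble cut (as here) or a bass boost is just an overall gain choice; the relative tilt is the same.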

The KEFs are now sounding beautiful although I didn’t do any room measurements as such – maybe later. Instead, I have been using more of a ‘feedforward’ technique i.e. trust the polypropylene drivers to behave over the narrow frequency ranges we’re using, and don’t mess about with them too much.

The benefits of good imaging

There is lovely deep bass, and the imaging is spectacular – even better than my bigger system. There really is no way to tell that a voice from the middle of the ‘soundstage’ is coming from anywhere but straight ahead and not from the two speakers at the sides. As a result, not only are the individual acoustic sources well separated, but the acoustic surroundings are also reproduced better. These aspects, I think, may be responsible for more than just the enjoyment of hearing voices and instruments coming from different places: I think that imaging, when done well, may trump other aspects of the system. Poorly implemented stereo is probably more confusing to the ear/brain than mono, leaving the listener in no doubt that they are listening to an artificial system. With good stereo, it becomes possible to simply listen to music without thinking about anything else.

Build a four way?

In conjunction with the standard expectation bias warning, I would say the overall sound of the KEFs (so far) is subtly different from my big system, and I suspect the baffle widths will have something to do with this – as well as the obvious fact that the 8 inch woofers have half the area of the 12 inch drivers, and the enclosures are one third the volume.

A truly terrible thought is taking shape, however: what would it sound like if I combined these speakers with the 12 inch woofers and enclosures from my large system, to make a huge four way system..? No, I must put the thought out of my head…

The passive alternative

How could all this be done with passive crossovers? How many iterations of the settings did it take me to get to here? Fifty maybe? Surely it would be impossible to do anything like this with soldering irons and bits of wire and passive components. I suppose some people would say that with a comprehensive set of measurements, it would be possible to push a button on a computer and get it to calculate the optimum configuration of resistors, capacitors and inductors to match the target response. Possibly, but (a) it can never work as well as an active system (literally, it can’t – no point in pretending that the two systems are equivalent), and (b) you have to know what your target response is in the first place. It must surely always be a bit of an art, with multiple iterations needed to home in on a really good ‘envelope’ of settings – I am not saying that there is some unique golden combination that is best in every way.

In developing a passive system, every iteration would take between minutes and hours to complete, and I don’t think you would get anywhere near the accuracy of matching of responses between adjacent drivers and so on. I wouldn’t even attempt such a thing without first building a computerised box of relays and passive components that could automatically implement the crossover from a SPICE model or whatever output my software produced – it would be quite a big box, I think. (A new product idea?)

Something real

With these KEFs, I feel that I have achieved something real which, I think, contrasts strongly with the preoccupations of many technically-oriented audio enthusiasts. In forums I see threads lasting tens or even hundreds of pages concerning the efficacy of USB “re-clockers” or similar. Theory says they don’t do anything; measurements show they don’t do anything (or even make things worse with added ground noise); enthusiasts claim they make a night and day improvement to the sound -> let’s have a listening test; it shows there is no improvement; there must have been something wrong with the test -> let’s do it again.

Or investigations of which lossless file format sounds best. Or which type of ethernet cable is the most musical.

Then there’s MQA and the idea that we must use higher sample rates and ‘de-blurring’ because timing is critical. Then the result is played through passive speakers with massive timing errors between the drivers.

All of these people have far more expertise than me in everything to do with audio, yet they spend their precious time on stuff that produces, literally, nothing.