Active crossover with Raspberry Pi?

I was a bit bored this afternoon and finally managed to put myself into the frame of mind to try transplanting my active crossover software onto a Raspberry Pi.

It turns out it works, but it’s a bit delicate: although CPU usage seems to be about 30% on average, extra activity on the RPi can cause glitches in the audio. But I have established in principle that the RPi can do it, and that the software can simply be transplanted from a PC to the RPi – quite an improbable result, I think!

A future-proof DSP box?

What I’d like to do is build a box that implements my DSP ‘formula’, isn’t connected to the internet, takes in stereo S/PDIF, and gives out six channels of analogue.

Is this the way to get a future-proof DSP box that the Powers-That-Be can’t continually ‘upgrade’ into obsolescence? In other words, I would always be able to connect the latest PCs, streamers or Chromecast to it, without relying on the box itself being the source of the stereo audio (which currently means that every time it is booted up it could stop working because of some trivial – or major – change that breaks the system). Witness only this week, when Spotify ‘upgraded’ its system and consigned many dedicated smart speakers’ streaming capability to oblivion. The only way to keep up with such changes is to be an IT-support person, staying current with updates and potentially making changes to code.

To avoid this, surely there will always have to be cheap boxes that connect to the internet and give out S/PDIF or TOSLink, maintained by genuine IT-support people, rather than me having to do it. (Maybe not… it’s possible that if fitment of MQA-capable chips becomes universal in all future consumer audio hardware, they could eventually decide it is viable to enable full data encryption and/or restrict access to unencrypted data to secure, licensed hardware only.)

It’s unfortunate, because it automatically means an extra layer of resampling in the system (because the DAC’s clock is not the same as the source’s clock), but I can persuade myself that it’s transparent. If the worst comes to the very worst in future, the box could also have analogue inputs, but I hope it doesn’t come to that.

This afternoon’s exercise was really just to see if it could be done with an even cheaper box than a fanless PC and, amazingly, it can! I don’t know if anyone else out there is like me, but while I understand the guts of something like DSP, it’s the peripheral stuff I am very hazy on. To me, to be able to take a system that runs on an Intel-based PC and make it run on a completely different processor and chipset without major changes is so unlikely that I find the whole thing quite pleasing.

[UPDATE 18/02/18] This may not be as straightforward as I thought. I have bought one of these for its S/PDIF input (TOSLink, actually). It works (driven by a 30-year-old CD player for testing), but it has focused my mind on the problem of sample clock drift:

My own resampling algorithm?

S/PDIF runs at the sender’s own rate, and my DAC will run at a slightly different rate. It is a very specialised thing to be able to reconcile the two, and I am no longer convinced that Linux/Alsa has a ready-made solution. I am feeling my way towards implementing my own resampling algorithm…!

At the moment, I regulate the sample rate of a dummy loopback driver that draws data from any music player app running on the Linux PC. Instead of this, I will need to read data in at the S/PDIF sample rate and store it in the circular buffer I currently use. The same mechanism that regulates the rate of the loopback driver will now control the rate at which data is drawn from this circular buffer for processing, and values falling between the stored samples will need to be interpolated by convolution with a windowed sinc kernel. It’s an horrendous amount of calculation for the CPU to do for each and every output sample – probably way beyond the capabilities of the Raspberry Pi, I’m afraid. Some sound cards solve this problem with dedicated resampling hardware, but if I want to make a general-purpose solution I will need to bite the bullet and try to do it in software. Hopefully my Intel Atom-based PC will be up to the job. It’s a good job that I know that high res doesn’t sound any different to 16/44.1, otherwise I could be setting myself up for needing a supercomputer.
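To make the plan concrete, here is a rough sketch in C of the buffer arrangement (illustrative names and sizes, not my actual code; the linear interpolation is a placeholder for the windowed sinc sweep described below):

    #include <stddef.h>

    #define BUF_LEN 8192            /* power of two, so wrap is a cheap mask */

    static float  buf[BUF_LEN];     /* circular buffer, one channel */
    static size_t wr_pos = 0;       /* integer write index */
    static double rd_pos = 0.0;     /* fractional read position */

    /* Called from the S/PDIF receive side, at the sender's rate. */
    void buffer_write(float sample)
    {
        buf[wr_pos] = sample;
        wr_pos = (wr_pos + 1) & (BUF_LEN - 1);
    }

    /* Called from the processing side. 'ratio' is close to 1.0 and is
       nudged up or down by the same feedback mechanism that currently
       regulates the loopback driver, keeping the buffer half full. */
    float buffer_read(double ratio)
    {
        size_t i    = (size_t)rd_pos & (BUF_LEN - 1);
        double frac = rd_pos - (double)(size_t)rd_pos;

        /* Placeholder: linear interpolation between adjacent samples.
           The real job is to replace this with the windowed sinc
           convolution discussed below. */
        float y = buf[i] + (float)frac * (buf[(i + 1) & (BUF_LEN - 1)] - buf[i]);

        rd_pos += ratio;
        if (rd_pos >= (double)BUF_LEN)
            rd_pos -= (double)BUF_LEN;
        return y;
    }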

[UPDATE 20/02/18] I couldn’t resist doing some tests and trials with my own resampling code.

Resampling Experiments

First, to get a feel for the problem and how much computing power it will take, I tried running some basic multiplies and adds on a Windows laptop, programmed in ‘C’. Using a small filter kernel size of 51, and assuming two sweeps with two pre-calculated kernels per output sample (followed by a trivial interpolation between the two results), it could only just keep up with stereo CD in real time. Disappointing, and a problem if the PC is having to do other stuff. But then I realised that the compiler had all optimisations turned off. Optimising for maximum speed, it was blistering: at least 20x real time.
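For the record, the timing test was essentially of this shape (a rough sketch with illustrative sizes and names of my choosing; the kernel contents don’t matter for timing purposes). Compiling with -O0 and then -O2 shows the difference I describe:

    #include <stdio.h>
    #include <time.h>

    #define KERNEL_LEN 51
    #define N_SAMPLES  (44100 * 10)     /* ten seconds of one channel */

    int main(void)
    {
        static float signal[N_SAMPLES + KERNEL_LEN];
        static float kernel_a[KERNEL_LEN], kernel_b[KERNEL_LEN];
        volatile float sink = 0.0f;     /* stops the compiler deleting the work */

        clock_t t0 = clock();
        for (int n = 0; n < N_SAMPLES; n++) {
            float acc_a = 0.0f, acc_b = 0.0f;
            for (int k = 0; k < KERNEL_LEN; k++) {
                acc_a += signal[n + k] * kernel_a[k];   /* sweep 1 */
                acc_b += signal[n + k] * kernel_b[k];   /* sweep 2 */
            }
            sink = acc_a + 0.5f * (acc_b - acc_a);      /* trivial interpolation */
        }
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
        printf("%.1fx real time (one channel)\n", 10.0 / secs);
        (void)sink;
        return 0;
    }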

I tried the same thing on a Raspberry Pi. Even it could keep up, at 3x real time.

There may be other tricks to try as well, including processor-specific optimisations and programming for ‘SIMD’ (where the CPU performs identical calculations on vectors, i.e. arrays of values, simultaneously), or kicking off threads to work on parts of the calculation so that the operating system can share the tasks optimally across the processor cores. Or maybe that’s what the optimisation is doing anyway.
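For what it’s worth, a hand-written SIMD version of one kernel sweep might look something like this on an Intel machine with SSE (an untested sketch of mine – on the Raspberry Pi the equivalent would use NEON intrinsics, and the auto-vectoriser may already be generating something similar at full optimisation):

    #include <xmmintrin.h>   /* SSE intrinsics */

    float sweep_sse(const float *sig, const float *kernel, int len)
    {
        __m128 acc = _mm_setzero_ps();
        int k = 0;
        for (; k + 4 <= len; k += 4) {      /* four multiply-adds at a time */
            __m128 s = _mm_loadu_ps(sig + k);
            __m128 h = _mm_loadu_ps(kernel + k);
            acc = _mm_add_ps(acc, _mm_mul_ps(s, h));
        }
        float tmp[4];
        _mm_storeu_ps(tmp, acc);
        float sum = tmp[0] + tmp[1] + tmp[2] + tmp[3];
        for (; k < len; k++)                /* mop up the odd taps at the end */
            sum += sig[k] * kernel[k];
        return sum;
    }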

There is also the possibility that for a larger (higher quality) kernel (say >256 values), an FFT might be a more economical way of doing the convolution.

Either way, it seems very promising.

Lanczos Kernel

I then wrote a basic system for testing the actual resampling in non-real time. It is based on the idea of, effectively, performing the job of a DAC reconstruction filter in software, and then being able to pick the reconstructed value at any non-integer sample time. To do this ‘properly’ it is necessary to sweep the samples on either side of the desired sample time with a sinc kernel, i.e. convolve them with it. Here’s where it gets interesting: the kernel’s element values can be computed as though the kernel were centred on the exact non-integer sample time desired, even though the elements themselves are aligned with, and applied at, the integer sample times.

It would be possible to calculate on-the-fly a new, exact kernel for every new sample, but this would be very processor-intensive, involving many calculations. Instead, it is possible to pre-calculate a range of kernels representing a few fractional positions between adjacent samples. In operation, the two kernels on either side of the desired non-integer sample time are swept and accumulated, and then linear interpolation between these two results is used to find the value representing the exact sample time.
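A sketch of that lookup-and-interpolate step (my own illustrative names and table sizes; the table holds one extra kernel so that ‘p + 1’ is always valid, the last entry being the first shifted by a whole sample):

    #define KERNEL_LEN 51
    #define N_PHASES   4096

    /* kernel_table[p] holds the kernel for fractional offset p / N_PHASES */
    extern float kernel_table[N_PHASES + 1][KERNEL_LEN];

    /* 'sig' points at the first of the KERNEL_LEN stored samples straddling
       the desired output time; 'frac' is the fractional part of that time,
       0 <= frac < 1. */
    float resample_at(const float *sig, double frac)
    {
        double pos = frac * N_PHASES;
        int    p   = (int)pos;              /* kernel just below the target */
        double t   = pos - p;               /* how far towards the next one */

        float a = 0.0f, b = 0.0f;
        for (int k = 0; k < KERNEL_LEN; k++) {      /* the two sweeps */
            a += sig[k] * kernel_table[p][k];
            b += sig[k] * kernel_table[p + 1][k];
        }
        return a + (float)t * (b - a);      /* linear interpolation */
    }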

You may be horrified at the thought of linear interpolation until you realise that several thousand kernels could be pre-calculated and stored in memory, so that the error of the linear interpolation would be extremely small indeed.

Of course a true sinc function would extend to plus and minus infinity, so for practical filtering it needs to be windowed, i.e. shortened and tapered to zero at the edges. Apparently – and I am no mathematician – the best window is a widened duplicate of the sinc function’s central lobe, and this is known as the Lanczos Kernel.
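Generating one such kernel for a given fractional offset might then look like this (again a sketch, not my exact code; the final normalisation pins the DC gain to exactly one, tidying up the small error introduced by windowing):

    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define KERNEL_LEN 51

    static double sinc(double x)
    {
        if (x == 0.0)
            return 1.0;
        return sin(M_PI * x) / (M_PI * x);
    }

    /* Fill 'kernel' with a Lanczos-windowed sinc centred 'frac' of a
       sample to the right of the middle tap (0 <= frac < 1). */
    void make_lanczos_kernel(float *kernel, double frac)
    {
        const int    half = KERNEL_LEN / 2;
        const double a    = (double)half;   /* window half-width in samples */
        double sum = 0.0;

        for (int k = 0; k < KERNEL_LEN; k++) {
            double x = (double)(k - half) - frac;           /* distance from centre */
            double w = (fabs(x) < a) ? sinc(x / a) : 0.0;   /* Lanczos window */
            kernel[k] = (float)(sinc(x) * w);
            sum += kernel[k];
        }
        for (int k = 0; k < KERNEL_LEN; k++)    /* normalise DC gain to 1 */
            kernel[k] = (float)(kernel[k] / sum);
    }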

Using this arrangement I have been resampling some floating-point sine waves at different pitches and examining the results in Audacity. The plotted spectra appear to be flawless.

The exact width (and therefore quality) of the kernel, and how many kernels to pre-calculate, are yet to be determined.

[Another update] I have put the resampling code into the active crossover program running on an Intel Atom fanless PC. It has no trouble performing the resampling in real time – much to my amazement – so I now have a fully functional system that can take in TOSLink (from a CD player at the moment) and generate six analogue output channels for the two KEF-derived three-way speakers. Not as truly ‘perfect’ as the previous system that controls the rate at which data arrives, but not far off.

[Update 01/03/18] Everything has worked out OK, including the re-sampling described above. I actually had it working before I managed to grasp fully in my head how it worked! But the necessary mental adjustments have now been made.

However, I am finding that the number of platforms that provide S/PDIF or TOSLink outputs ‘out-of-the-box’ without problems is very small.

I would simply have bought a Chromecast Audio as the source, but apparently its lossy Ogg Vorbis bit rate is limited to 256 kbps with Spotify as the source (which is what I am planning to use for these tests), as opposed to the 320 kbps it uses with a PC.

So I thought I could just use a cheap USB sound card with a PC, but found that with Linux it did a very stupid thing: turned off the TOSLink output when no data was being written to it – which is, of course, a nightmare for the receiver software to deal with, especially if it is planning to base its resampling ratio on the received sample rate.

I then began messing around with old desktop machines and PCI sound cards. The Asus Xonar DS did the same ridiculous muting thing in Linux. The Creative X-Fi looked as though it was going to work, but then sent out 48 kHz when idling, and switched to the desired 44.1 kHz when sending music. Again, impossible for the receiver to deal with, and I could find no solution.

Only one permutation is working: a Creative X-Fi PCI card in a Windows 7 machine with a freeware driver and app, because Creative seemingly couldn’t be bothered to support anything after XP. The free driver and app is called ‘PAX’ and looks like an original Creative app – my thanks to Robert McClelland. Using it, it is possible to ensure bit-perfect output, and in the Windows Control Panel it is possible to force the output to 16-bit 44.1 kHz, which is exactly what I need.

[Update 03/03/18] The general situation with TOSLink, PCs and consumer grade sound cards is dire, as far as I can tell. I bought one of these ubiquitous devices thinking that Ubuntu/Linux/Alsa would, of course, just work with it and TOSLink.

USB 6 Channel 5.1 External SPDIF Optical Digital Sound Card Audio Adapter for PC

It is reputedly based on the CM6206. At least the TOSLink output stays on all the time with this card, but it doesn’t work properly at 44.1 kHz even though Alsa seems happy at both ends: if you listen to a 1 kHz sine wave played over this thing, it has a cyclic discontinuity somewhere – as though it is doing nearest-neighbour resampling from 48 to 44.1, or something like that…? As a receiver it seems to work fine.

With Windows, it automatically installs drivers, but Control Panel->Manage Audio Devices->Properties indicates that it will only do 48 kHz sample rate. Windows probably does its own resampling so that Spotify happily works with it, and if I run my application expecting a 48 kHz sample rate, it all works – but I don’t want that extra layer of resampling.

As mentioned earlier I also bought one of these from Maplin (now about to go out of business). It, too, is supposedly based on the CM6206:

Under Linux/Alsa I can make it work as TOSLink receiver, but cannot make its output turn on except for a brief flash when plugging it in.

In Windows you have to install the driver (and large ‘app’ unfortunately) from the supplied CD. This then gives you the option to select various sample rates, etc. including the desired 44.1 kHz. Running Spotify, everything works except… when you pause, the TOSLink output turns off after a few seconds. Aaaaaghhh!

This really does seem very poor to me. The default should be that TOSLink stays on all the time, at a fixed, selected sample rate. Anything else is just a huge mess. Why are they turning it off? Some pathetic ‘environmental’ gesture? I may have to look into whether S/PDIF from other types of sound card runs constantly, in which case a USB-S/PDIF sound card feeding a super-simple hardware-based S/PDIF-to-TOSLink converter would be a reliable solution – or I could simply use S/PDIF throughout, but I quite like the electrical isolation that TOSLink provides.

It’s not that I need this in order to listen to music, you understand – the original ‘bit perfect’ solution still works for now, and maybe always will – but I am just trying to make SPDIF/TOSLink work in principle so that I have a more general purpose, future-proof, system.

The problem with IT…

…is that you can never rely on things staying the same. Here’s what happened to me last night.

By default I start Spotify when my Linux audio PC boots up. I often leave it running for days. Last night I was listening to something on Spotify (but I suspect it wouldn’t have mattered if it had been a CD or other source). I got a few glitches in the audio – something that never happens. This threatened to spoil my evening – I thought everything was perfect.

I immediately plugged in a keyboard and mouse to begin to investigate and it was at that moment that I noticed that the Intel Atom-based PC was red hot.

Using the Ubuntu system monitor app I could see that the processor cores were running close to flat out. Spotify was running, and on the default opening page was a snazzy animated advert referring to some artist I have no interest in. The basic appearance was a sparkly oscilloscope type display pulsing in time with the music. I had not seen anything like that on Spotify before. I had an inkling that this might be the problem and so I clicked to a more pedestrian page with my playlists on it. The CPU load went down drastically.

Yes, Spotify had decided they needed to jazz up their front page with animation, and this had sent my CPU cores into meltdown. Now, my PC has the same chipset as loads of tablets out there. Maybe Ubuntu’s version of Flash (or whatever ‘technology’ the animation was based on) is really inefficient, but it looks to me as though there is a strong possibility that this Spotify ‘innovation’ might have suddenly resulted in millions of tablets getting hot and their batteries flattening in minutes.

The animation is now gone from their front page. Will it return? I can’t now check whether any changes I make to Spotify’s opening behaviour (opening up minimised?) will prevent the issue.

This is the problem with modern computer-based stuff that is connected to the internet. It’s brilliant, but they can never stop meddling with things that work perfectly as they are.

[06/01/18] Of course it can get worse. Much worse. Since then, we now know that practically every computer in the world will need to be slowed down in order to patch over a security issue that has been designed into the processors at hardware level. At worst it could be a 50% slowdown. Will my audio PC cope? Will it now run permanently hot? I installed an update yesterday and it didn’t seem to cause a problem. Was the patch in it, or is the worst yet to come?

[04/02/18] I defaulted to Spotify opening up minimised when the PC is switched on. Everything still working, and the PC running cool.

But I would like to get to the point where I have a box that always works. I would like to be able to give my code to other people without needing to be an IT support person – believe me, I don’t know enough about that sort of thing.

It now seems to me that the only way to guarantee that a box will always be future-proof, without constant updates and the need for IT support, is to bite the bullet and accept that the system cannot be bit-perfect. Once that psychological hurdle is overcome, it becomes easy: send the data via S/PDIF, resample the data in software (Linux will do this automatically if you let it), and Bob’s your uncle: a box that isn’t even attached to the internet, that takes in S/PDIF and gives you six analogue outputs or variations thereof; a box with a video monitor output and USB sockets, allowing you to change settings, import WAV files to define filters, etc., then disconnect the keyboard and mouse. Or a box that is accessible over a standard network in a web browser – or does that render it not future-proof? Presumably a very simple web interface will always be valid. I think this is the direction I am going to head in…

Vinyl sales overtake digital

It seems that a milestone was passed last week when UK vinyl sales hit £2.5m versus digital’s £2.1m. Vinyl has enjoyed eight straight years of growth.

It’s no skin off my nose, except where new recordings begin to be produced primarily with the vinyl release in mind. This is where dynamics are reduced, bass and treble attenuated, and stereo effects restricted while the recording is being made, rather than a special post-processed master being made for vinyl. We digital listeners are then forced to listen to the less dynamic version as well.

I just had a quick look to see if I could find an actual ‘Top Tips for Mastering Vinyl’ example for the above. The first site I looked at contained this:

Mastering for Vinyl

…For minimalist recordings, you want to try and minimize large phase differences between channels… This means that spaced omnis are really not such a good idea if you can avoid them.

If you can’t avoid them, try and put loud bass sources in the center of the soundstage, as close to the center mic as possible. Even if you are using coincident miking, this is a good idea.

In other words, once vinyl becomes a major consideration, actual recording techniques are dictated by the medium. In the example above, it is not crazy studio effects that are being limited, but the microphone placement used in minimalist recordings that you might have thought were not a problem.

The Man in the White Suit

There’s a brilliant film from the 1950s called The Man in the White Suit. It’s a satire on capitalism, the power of the unions, and the story of how the two sides find themselves working together to oppose a new invention that threatens to make several industries redundant.

I wonder if there’s a tenuous resemblance between the film’s new wonder-fabric and the invention of digital audio? I hesitate to say that it’s exactly the same, because someone will point out that in the end, the wonder-fabric isn’t all it seems and falls apart, but I think they do have these similarities:

  1. The new invention is, for all practical purposes, ‘perfect’, and is immediately superior to everything that has gone before.
  2. It is cheap – very cheap – and can be mass-produced in large quantities.
  3. It has the properties of infinite lifespan, zero maintenance and non-obsolescence.
  4. It threatens the profits not only of the industry that invented it, but other related industries.

In the film it all turns a bit dark, with mobs on the streets and violence imminent. Only the invention’s catastrophic failure saves the day.

In the smaller worlds of audio and music, things are a little different. Digital audio shows no signs of failing, and it has taken quite a few years for someone to finally come up with a comprehensive, feasible strategy for monopolising the invention while also shutting the Pandora’s box that was opened when it was initially released without restrictions.

The new strategy is this:

  1. Spread rumours that the original invention was flawed.
  2. Re-package the invention as something brand new, with a vagueness that allows people to believe whatever they want about it.
  3. Deviate from the rigid mathematical conditions of the original invention, opening up possibilities for future innovations in filtering and “de-blurring”. The audiophile imagination is a potent force, so this may not be the last time you can persuade them to re-purchase their record collections, after all.
  4. Offer to protect the other, affected industries – for a fee.
  5. Appear to maintain compatibility with the original invention – for now – while substituting a more inconvenient version with inferior quality for unlicensed users.
  6. Through positive enticements, nudge users into voluntarily phasing out the original invention over several years.
  7. Introduce stronger protection once the window has been closed.

It’s a very clever strategy, I think. Point (2) is the master stroke.

Digital cables: a lack of understanding

There’s a new article about digital cables which is directly relevant to my post a couple of weeks back – it even features the “eye diagram” I referred to.

It purports to ‘debunk’ the idea that “bits are bits”, but in reality it does nothing of the sort. It starts off with a question that it fails to answer and doesn’t come back to again:

Asynchronous USB audio to the rescue?

The remainder of the article is about isochronous USB and seeks to suggest that it is not just about getting all the data through intact, but that there are issues with timing, jitter and noise. Judging by the comments that follow, the readership is entirely unaware that it is a non sequitur they have been reading.

There is a strange little “Note” placed halfway down, in tiny lettering, that says:

Do not confuse ‘asynchronous USB’ with ‘Isochronous,’ an asynchronous USB system still uses Isochronous mode to transfer audio.

I am not quite sure what that is supposed to mean, but again it seems to be an appeal to the false idea that it is beyond the wit of man to deliver digital audio data without it being corrupted, or affected by timing. As I have said before, if this were true, then systems such as TIDAL could not work: the data has come through thousands of miles of cheap cable not even made of silver or raised from the floor on little ceramic pots, through hundreds of utilitarian digital switches, and eventually into your home via the cheapest connector that your telecoms company could provide. And yet it works perfectly; all the electrical noise of the world’s internet, and all the jitter that comes from being re-routed dynamically hundreds of times every second, is completely absent.

As I said in my earlier post, we need to understand the system at a higher level. Asynchronous USB and other on-demand packet-based systems are immune to cable quality once it is beyond some minimum standard, and electrical isolation removes the possibility of noise being injected into your audio system – even enabling you to listen to music off the WWW without all the electrical noise of the global internet spoiling your listening pleasure. Really, getting data from a file/CD/stream into your DAC and reading it out at a precise rate is a simple engineering problem, rendered trivial and cheap for the consumer by some very clever people who actually understand the problem, and have solved it.

The Beatles re-mastered

Since they finally made it onto Spotify and other streaming services, I have begun listening to the Beatles again, following a gap of a few years. The reason for the gap was that it was often too tempting to explore Spotify rather than getting up to place CDs in the drive or getting around to “ripping” them. Also, my Beatles CDs are fairly old, so not in the ‘re-mastered’ category, and this knowledge would no doubt have spoiled the experience of listening to them while not being a strong enough reason to buy new ones.

The experience of listening to the re-mastered Beatles on my über-system has been “interesting” rather than the unalloyed pleasure I was expecting. In years gone by, I very much enjoyed my Beatles CDs on lesser systems, listening to the music without worrying too much about ‘quality’ – although I always marvelled at the freshness of the recordings that had made it across the decades intact. I had built up such expectations of the re-mastered versions playing on a real hi-fi system that I was bound to be disappointed, I suppose.

What I am finding is that, for the first time, I am hearing how the tracks were put together, and I can ‘hear through’ to the space behind them. With the latest re-masters on my system, you can clearly hear the individual tracks cleanly separated, and the various studio techniques being employed – you can’t mistake them for ‘live’ recordings – and they are rather ‘dry’.

With the Beatles I think that we are hearing music and recordings that were brilliantly, painstakingly created in the studio to an exceptional level of quality, that still sounded great when ‘munged’ through the typical record players, TV, radio and hi-fi equipment of the day – mainly in mono. It is now fascinating to hear the individual ingredients so cleanly separated, but I wonder whether the records wouldn’t have been produced slightly differently with modern high quality playback equipment in mind; after all, we are probably hearing the recordings more cleanly than was even possible in the studio at the time. Maybe it really is the case that The Beatles sound best on the equipment they were first heard on. Other musical groups of the time weren’t produced with such a high level of studio creativity and in such quality and so, with their recordings already ‘pre-munged’ to some extent, are not laid bare to the same degree on a modern system.

For the first time, perhaps I am beginning to see the reason for the re-release of the mono versions. They are a way of producing a more ‘cohesive’ mix without resorting to artificial distortion and effects that were not on the original recordings.

Radiohead’s Bond Theme That Never Was

Always interesting to hear a new Bond theme. Radiohead produced one for the film Spectre, apparently, but for whatever reason it wasn’t used.

I know Bond themes are always designed to tick certain boxes, but is a certain something now being over-used? I’m thinking of the orchestral chords such as the one at the end of the Radiohead track at 3.05, having been used throughout much of the track. If I were a musical expert I’d be able to tell you exactly what type of chord it is (a particular inversion?), but it seems to me that they’ve hacked a core element out of John Barry’s compositions, changed the orchestration to the most simplistic, heavy-handed shorthand, added some ‘swell’, and are now using it like a ‘Bond preset’ in inappropriate ways. John Barry’s music was usually restrained or ironic in some way, while these chords are now being splashed about in irony-free grandiose fashion on many of the most recent Bond themes.

Beatles on Spotify!

As of today the Beatles are available on Spotify. It could be seen as a huge affirmation of the whole streaming thing, I suppose – a blow against Taylor Swift, Thom Yorke, Adele et al.

Or is it just a label bowing to the inevitable? The Beatles had a tremendous run, remaining premium-price best sellers on LP, then CD, then LP again, for over 50 years. An EU directive designed to retrospectively extend recording copyright from 50 to 70 years will protect all their albums, with the exception of Please Please Me, for another 20 years. But by holding out against the latest ‘delivery platforms’ for as long as they did, the ‘brand’ began to fade from the zeitgeist. We oldies had already bought the CDs, and it would have been difficult to persuade many of us to buy them yet again on a higher-resolution format or vinyl. Maybe sales were dropping anyway, and without streaming it was clear that they would be cutting the ‘brand’ off forever from the upcoming generations.

I must stress that I don’t see the Beatles as just a ‘brand’, but I don’t think today’s youngsters view them as the peerless phenomenon that we (or at least I) did, and still do.

roon

It seems that there is a new smart interface for your music collection, mentioned here and here.

I’ll bet it is good if you like that sort of thing – but worth $119 a year? You decide.

Many’s the time with Spotify that I have wished it could simply display a full-screen image of the album art while playing – not much to ask, but seemingly too difficult to arrange. Not to mention being able to sort search results, a useful facility that disappeared with an update some time ago and is bitterly regretted by users – yet bizarrely lives on in the Linux version (I have been trying to work out the story behind why they thought it was a good idea to remove it, but can’t!). Clearly it must be possible to do something better in the non-Spotify world, and I have every confidence in roon.

But something caught my eye in the various mentions around the web: people are enquiring about roon’s sound quality, and no one knows, or wants to give them a straight answer.

Well let me do it: the sound quality will be exactly what you can get / are getting right now. There is no mystery. Digital audio is not mysterious. It is just numbers. A new user interface is not going to change the numbers. And unless something is very wrong, it is not going to change how the numbers are sent to your DAC. OK?

The Power of Ideas… to enrage audiophiles

I just had a somewhat abrasive online encounter. I had the temerity to comment on some blog articles about digital cables and found that what I was saying seemed to send people apoplectic with rage.

Basically, I asked: if digital cables are responsible for changing the sound in any way, then how could high quality online streaming services work at all? The signals are sent over long distances of dubious cable, non-audiophile optical fibres and satellites not even made of silver, and yet, supposedly (and actually), the signals emerge in real time, utterly perfect – except, that is, for the deleterious effects of that pesky final cable…

As always with this sort of idea or thought-experiment, I was told I was “missing the point” – the articles were primarily about cables’ effects on noise injection, hum loops, jitter and so on, so such “philosophical” arguments were irrelevant, they blustered.

But if we accept the notion that high quality online streaming is possible (audiophiles have no trouble accepting that TIDAL is “CD quality”), and that it is independent of the types of cables in the global internet (changing dynamically from moment to moment), and indeed is indistinguishable at the DAC from a network-based CD drive (for example) on our local network, then the implication of the ‘noise’ agenda must be that all the ‘noise’ from the thousands of miles of bog-standard cable and bog-standard digital gubbins along the way can be removed before it even reaches our network. So why can the DAC not incorporate this 100% noise blocking function itself? If it can (and it can) then the final cable in the chain takes on only the same significance as any other of the myriad cheap, long cables in the chain i.e. demonstrably none whatsoever. And, indeed, that was the whole idea of digital audio in the first place – an idea that has somehow become forgotten along the way.

Of course I accept that some digital audio implementations are poorly designed and could, indeed, be susceptible to noise injection, hum loops and so on. Some may even suffer from power supply noise related to the digital signal and “how hard the chips have to work”. But if so, then messing around with cables is a red herring: trying to fix a fundamental problem with a sticking plaster. And even this overstates the case: at least a sticking plaster is designed by clever people who understand the problem. It is effective at what it does, and doesn’t pretend to work simply because it is made of a certain material or incorporates an ancient Celtic weave. A common mistake is to believe that audiophile hardware is ‘higher quality’ than standard, but that rationalists don’t believe it is worth using. No. The truth is that rationalists don’t even believe that it is ‘higher quality’. Probably the opposite. It may be more expensive. It may use materials that are sacred to audiophiles. But it is just a manifestation of ignorance and magical thinking, and is worthless or worse. A more expensive cable cannot and will not fix the problems with your defective DAC. The only way to fix the problem for real is to design the electronics competently, and to design the last digital node to block ‘noise’ in the same way as earlier nodes can, apparently, block all the noise of the entire internet.

The unpalatable truth is that if you hear differences in your system when you change your digital cables you are either:

  1. imagining it, or
  2. the owner of hardware that is defective (by design).

I would bet that (1) is the more common. Given a suitable measurement setup whose resolution equalled or exceeded the resolution of your audio DAC, you could verify your system’s immunity by feeding in known digital test waveforms and checking that what came out of the DAC was always the same regardless of cable and any other upstream hardware. This idea, too, was met with seething fury!

Maybe I have realised something: many people simply cannot process “philosophical” ideas. The only way they can get a handle on them is for someone first to ‘downsample’ them into a form they are comfortable with:

  • brand names and products
  • industry gossip and hero-worship
  • low level engineering trivialities as a substitute and/or smokescreen for genuine ideas
  • shop-floor nuts and bolts

But this is a lossy process. It is impossible to ‘upsample’ the low level tittle tattle back into the world of ideas. Any attempt to do so, or a refusal to ‘downsample’ in the first place, causes panic, abuse, then meltdown.