My mid-80s video framestore

I was looking through some old photos the other day and was reminded of a thing I built back in the mid-80s. I had become obsessed with the idea of building a device to capture and display photographic images at a time when no normal computer could do it. A standard home computer such as the BBC Micro could only display a small number of colours, and couldn’t even display a smoothly-graduated monochrome image. Later, more sophisticated computers like the Commodore Amiga couldn’t do it without strange restrictions on which colours could be adjacent to others.

I was fully aware of the basic idea of digitising waveforms and storing the results in RAM, having played around with audio sampling before this. I found that it was possible to buy chips that could generate frame and line sync pulses from a composite video stream, i.e. the output of a standard video recorder or camcorder, and also to split composite video into R, G and B analogue components suitable for sampling. A standard computer monitor could take in separate sync and RGB signals, so it wasn’t necessary to do the reverse and generate composite video again.

Putting it all together, I could build a device that would enable me to grab a single video frame and store it in RAM. I could then replay the frame over and over, reconstituting it via three DACs, to be fed to a standard RGB monitor. I could also stop this process, and allow a computer (a BBC Micro) to read the contents of the RAM for storage on disk. The computer could also upload stored images into the RAM for display – and this would also allow for the possibility of ‘Photoshopping’ images or synthesising them in software.

The pièce de résistance was that this arrangement naturally produced a live digitised image on the monitor, which could be frozen at the press of a button.

As I recall, the main technical hurdles were:

  1. High-speed ADCs and DACs were expensive and/or outside my comfort zone. In the end, I used three 6-bit ‘flash’ ADCs and my own home-made R-2R DACs. Consequently, I could capture and display 262,144 colours, which doesn’t sound like much compared with today’s standard 16 million but was adequate. In monochrome I could display a 64-level greyscale image, which was sufficient to be called ‘photographic’.
  2. How to lock my pixel clock to the incoming video stream. As a stopgap while I thought of something better, I made a super-simple analogue oscillator out of CMOS Schmitt triggers that could be started (as opposed to its output being gated) by setting an input logic level.
  3. RAM was pretty expensive – except for dynamic RAM, and I thought this was too complicated to contemplate. In the end I used a bunch of static RAM chips to give me a resolution of 256×256 pixels. Again it doesn’t sound like much, but with the relatively fine colour graduations, it was not too bad at the time.
  4. A standard UK PAL video frame has 625 lines (although only 576 are visible), comprising two interlaced fields of half that number of lines. If I was aiming for a vertical resolution of 256 pixels, I clearly could not digitise the whole frame. In the end I think I sampled and displayed just one of the fields, cropping the middle 256 lines out of the 288 visible lines by starting to digitise once a certain line count was reached after the top of the field. When displaying a sampled image, the same image was in effect displayed in each field.
  5. I needed to make double-sided PCBs for at least part of this device in order to simplify its construction. This involved arduous work with acetate sheets, self-adhesive tape and transfer symbols, and a scalpel.

The uppermost of a stack of three identical PCBs incorporating memory, computer I/O, ADC and DAC for the red, green or blue component.
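Out of interest, the numbers quoted above all hang together. A quick sketch (the figures come from the text; the arithmetic is mine):

```python
# Sanity-check the framestore's numbers (figures from the text above).

BITS_PER_CHANNEL = 6      # three 6-bit 'flash' ADCs, one per colour
WIDTH = HEIGHT = 256      # resolution the static RAM budget allowed

# Colour depth: 6 bits each of R, G and B -> 2^18 colours
colours = (2 ** BITS_PER_CHANNEL) ** 3
assert colours == 262_144

# Static RAM needed per colour channel, i.e. per PCB in the stack
kib_per_channel = WIDTH * HEIGHT * BITS_PER_CHANNEL / 8 / 1024
print(kib_per_channel)    # 48.0 KiB per board, 144 KiB in total

# Vertical crop: the middle 256 of a field's 288 visible lines
skip_at_top = (288 - HEIGHT) // 2
print(skip_at_top)        # start digitising 16 lines into the field
```

So each of the three identical boards held about 48 KiB of static RAM – a substantial amount for a home project at mid-80s prices.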

I eventually made it work pretty well. I started with a single channel of monochrome, and I remember that the first time I ‘froze’ a perfect monochrome image was one of those moments that I probably live for.

I didn’t progress beyond the simple analogue pixel clock – which effectively meant that I set the image’s horizontal width with a potentiometer. It seemed to work perfectly well.

Of course, as so often happens, once the initial thrill was over I didn’t use it much after that, eventually putting the thing to the back of a cupboard and never touching it again – it is still there!

Here are a few of the images I grabbed, mainly from broadcast TV or pointing a camera at magazines. As you can see, it actually worked.

 

(I seem to remember transferring the images from the BBC Micro via serial cable to a PC in 1998 – the datestamp on the images – and, knowing me, I probably substituted the raw image data into a 256×256 image created in Photoshop or similar so that I never had to actually understand the image header. I would then have resized the images from their non-square pixels to what looked right on the PC monitor using Photoshop. There was no more image manipulation after that, so these are effectively the raw images.)


Audiophile Demo Music

 

In shops that sell televisions, they often play some sort of ‘showreel’ of spectacular scenes; you know the type of thing: ultra-detailed night time cityscapes, ultra-saturated lizards, ultra-contrasty arctic wildlife, and so on. You realise that it is impossible to see any real difference between the televisions with these scenes. They are ‘impressive’, but only at the most superficial level of what a television can display. Basically, any modern television can display them, with the only differentiators being size and absolute brightness. It always seems to me that the only way I can tell if a TV is any good is to watch a local news programme or something like that – not zero ‘production values’, but something that is relatable to everyday life.

Does something similar happen with audio?

When writing this post, I vowed to myself to search for a report of an audio show demo track, and to use the first track I came across as my example – of course I would have quietly forgotten that vow if it hadn’t illustrated my point fairly well, but as it happens, I think it does. The track is by Malia, and is called I Feel It Like You.

Absolutely no criticism is implied of the track, nor its production which is exemplary for this kind of music. But as an audio demo track?

Listening to it on my laptop, it seems to me to be an ‘in-your-face’ studio recording, built from a fairly sparse assemblage of pristine layers, each of which has been processed, compressed and equalised. The vocals are crystal clear and close up, mixed with a carefully-balanced amount of ‘Large Hall’ reverberation. The backing features plenty of detail, with lots of staccato, sampled(?) percussion rhythms and bass.

I think that this track would sound superficially impressive on any system – it even sounds good on my laptop.

What it is missing, if you ask me, is any connection with the organic, natural acoustics we encounter every day. It is like those uber-detailed images used for TV demos; the sound is highly-detailed and everything is at peak contrast and saturation. Such tracks are very common in audio demonstrations.

An alternative staple of audiophile demonstrations is ‘jazz’… I’m not sure what its appeal is (as a demonstration). I suspect it is because it often seems like an antidote to over-production – although jazz can still be over-produced. But again, as the potential customer, I don’t think it tells me very much about the system’s capabilities. Old recordings of jazz are like grainy monochrome pictures, and even modern recordings still show a ‘scene’ that is ‘smoky’ and sepia-toned (which I am sure is the intention). The style of music and the instrumental line-up (e.g. continuous brushed snare…?) mean that I am often not hearing clear delineation between the instruments, nor much in the way of transients and dynamics. (Or maybe I just don’t particularly like jazz and cannot engage with it, in which case ignore my objections…).

Just looking through some of the tracks that I might ‘demo’ my system with, one thing strikes me: they usually feature a bit of ‘messiness’. They may, or may not, have been put together in a studio using overdubbing, but the individual layers are a bit raw, organic, and recorded from a bit of a distance, so the room’s natural acoustics are audible. This possibly masks a bit of the pristine detail, but there’s enough there to verify that the system can do detail, anyway. When a short sound stops, and the reverberation remains, the contrast between the two can be particularly revealing. In photographic terms, the image covers all shades of grey and there’s still detail in the shadows; it’s not pushed into excessive contrast, nor selected or processed to be super-detailed. I am not even advocating massive ‘dynamics’ most of the time, which some people cite as proof of a system’s chops. As I will mention later, there are some specific classical tracks that might be played in order to put the system’s dynamic capabilities beyond doubt, anyway!

My favoured demo tracks are not just a single mic recording of a school concert, of course! They have been put together with some high ‘production values’.

It is worth perhaps listing the aspects of the system we might want to show off, or listen to if we are thinking of buying it.

  • frequency response: it is good if the track covers a wide spectrum of frequencies with equal weighting – not just bass and treble. A problem with many a system would be fixed bumps and dips in the frequency response, and these are almost impossible to hear against a recording that also has fixed bumps and dips of its own: a solo voice, a string quartet or a piano, for example, are all generated by resonant systems characterised by a formant, or a group of similar formants. Some studio recordings are also augmented with fairly aggressive parametric equalisation of the individual layers in order to make them sound even more detailed. It is only when we hear many different natural musical sources playing in varying combinations that we assemble enough ‘simultaneous equations’ to work out whether the system is neutral or not.
  • bass: of course we want to demonstrate this! Deep organ notes, kick drums, symphony orchestras in natural acoustics are going to show this off well. The best bass does not have ‘one note’ quality; it engages somewhat with ‘room gain’ in order to extend all the way down to below audibility; it starts and stops quickly, hitting you in the chest (the kick drum will show this). In other words, sealed not bass reflex…
  • distortion: a sine wave would show up harmonic distortion, and several musical sources all playing at the same time would show up the resulting intermodulation distortion. A single voice will not really show it, nor percussive sounds. A choir would probably be a pretty good demonstration of low distortion, as would a symphony orchestra playing a varied selection. Less good would be girl-and-guitar, a string quartet, or a ‘world music’ drumming ensemble.
  • imaging: the really great demo, in my opinion, is when the stereo speakers produce a complete 3D audio ‘scene’. It may be an “illusion” as some people are very keen to point out, and not a perfect holographic reproduction especially if the recording was created with multiple mics and overdubs in a studio, but it is very compelling. Some classical recordings are made in purist fashion and do create a very convincing sense of 3D space – not just left-right imaging, but also a sense of distance. Imaging depends at least on low distortion and accurate correlation between left and right speakers, implying (I would say) a requirement for accurate reproduction of phase and timing. Some people would claim that absolute reproduction of phase isn’t important as long as both channels are well matched. I think this is special pleading based on the performance of traditional systems; I sometimes think that the people who are very keen to ‘dis’ imaging probably have very expensive systems based on valves, vinyl and passive crossovers…
  • power: achieving high volume isn’t usually a problem, but we want the system to behave uniformly well at all volumes. I suggest that this is made most obvious when a musical performer or ensemble plays continuously and naturally between quiet and loud – with minimal dynamic compression applied. This is different from demonstrating a system playing a less dynamic recording with the volume control low and then high. As the Fletcher-Munson curves show, there is only one volume at which we perceive a sound with the correct frequency response: its natural volume. If the system does something peculiar as the volume increases, it will be much more obvious if we are listening at a fixed volume that is closer to the ‘real’ volume at which the recording was made.
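The intermodulation point above is easy to demonstrate numerically. A minimal sketch (entirely my own illustration): pass two pure tones through a slightly nonlinear system, and new spectral components appear at the difference and sum frequencies – components that no single tone played alone would ever reveal.

```python
# Two pure tones through a mildly nonlinear system generate
# intermodulation products at the difference and sum frequencies.
import numpy as np

fs = 48_000                        # sample rate, Hz
t = np.arange(fs) / fs             # exactly one second of signal
f1, f2 = 1_000, 1_300              # the two test tones, Hz

clean = 0.4 * np.sin(2 * np.pi * f1 * t) + 0.4 * np.sin(2 * np.pi * f2 * t)
dirty = clean + 0.1 * clean ** 2   # a touch of second-order nonlinearity

# With a one-second window, FFT bin k corresponds to exactly k Hz
spectrum = np.abs(np.fft.rfft(dirty)) / len(t)
for f in (f2 - f1, f1, f2, f1 + f2):
    print(f, "Hz:", round(float(spectrum[f]), 3))
```

The 300 Hz and 2,300 Hz components come out of nothing: neither tone contains them, yet the nonlinearity puts them squarely into the audible band – which is why several sources playing at once make a sterner distortion test than any one of them alone.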

Of course, recommending tracks is a bit pointless, because a track’s ‘demo’ qualities are entangled with musical taste – and I think you need to like the music in order to engage fully with what you are hearing and to know how it’s going to sound with ‘your’ music. Nevertheless, here are a few tracks out of hundreds that I tentatively suggest would reveal a system’s attributes (no accounting for YouTube’s sound quality) and are the sort of thing I would want to listen to in order to get some idea of whether a system was any good.

Sufjan Stevens, Jacksonville – not a familiar act to me, but this track is ‘big’, has great bass and enough rawness to hear that the system sounds ‘natural’.

Elton John, Rocket Man – a beautiful, rounded studio recording with a great sense of space (so to speak).

Neil Young, Double E – very simple rock track that doesn’t sound over-produced.

Khachaturian Symphony Number 3 – a *massive* symphonic recording with huge pipe organ and 15 trumpets (apparently). If you play this loud, the end is very loud!

Arvo Pärt, Credo, for Piano Solo, Mixed Choir and Orchestra – possibly some of the most dynamic, contrasting classical music you will encounter.

(Maybe these classical performances are a bit too dynamic for everyday listening, but if you really want the demo to show what the system is capable of..!)

A less intense classical recording with some great imaging, space and some revealing bass is this one:

It’s An American in Paris by Gershwin, performed by the LA Philharmonic under Zubin Mehta – not sure if the YouTube version is the same as the CD version I listen to.

Sgt. Pepper’s Musical Revolution


Did you see Howard Goodall’s BBC programme about Sgt. Pepper? I thought it was a fine tribute, emphasising how fortunate we are for the existence of the Beatles.

Howard did his usual thing of analysing the finer points of the music and how it relates to classical and other forms, playing the piano and singing to illustrate his points. He showed that twelve of the tracks on Sgt. Pepper contain “modulations”, where the songs shift from one key to another – revealing, needless to say, very advanced compositional skills. But I don’t think that the Beatles ever really knew or cared that music is ‘supposed’ to be composed in one key and one time signature – they were just instinctive and brilliant. To me, it suggested that formal training might in fact have stifled their creativity.

He supplemented his survey of the tracks with Strawberry Fields and Penny Lane, which, although not on the album, were the first tracks produced during the Sgt. Pepper recording sessions.

The technical stuff about studio trickery and how George Martin and his team worked around the limitations of four track tape was interesting (as always), and we listened in on some of the chat in the studio in-between takes.


Obviously, I checked out what versions of the album are available on Spotify, and found that there’s the 2009 remaster and, I think, the new 50th anniversary remixed version..! (Isn’t streaming great?)

Clearly the remixed version has moved some of the previously hard-panned elements from the extreme left and right towards the middle, and the sound has more ‘body’ – but I am sure there is a lot more to it than that. The orchestral crescendos and final chord in A Day in the Life are particularly striking.

At the end of the day, however, I actually prefer a couple of more stripped back versions of tracks that appeared on the Beatles Anthology CDs from 1995. These, to me, sound even cleaner and fresher.


But what is this? Archimago has recently analysed some of the new remix and found that it has been processed into heavy clipping i.e. just like any typical modern recording that wants to sound ‘loud’. Archimago also shows that the 1987 CD version doesn’t have any such clipping in it; I won’t be throwing away my original Beatles CDs just yet…

The Secret Science of Pop


In The Secret Science of Pop, evolutionary biologist Professor Armand Leroi tells us that he sees pop music as a direct analogy for natural selection. And he salivates at the prospect of a huge, complete, historical data set that can be analysed in order to test his theories.

He starts off by bringing in experts in data analysis from some prestigious universities, and has them crunch the numbers on the past 50 years of chart music, analysing the audio data for numerous characteristics including “rhythmic intensity” and “aggressiveness”. He plots a line on a giant computer monitor showing the rate of musical change based on an aggregate of these values. The line shows that the 60s were a time of revolution – although he claims that the Beatles were pretty average and “sat out” the revolution. Disco, and to a lesser extent punk, made the 70s a time of revolution too, but the 80s were not.

He is convinced that he is going to be able to use his findings to influence the production of new pop music. The results are not encouraging: no matter how he formulates his data he finds he cannot predict a song’s chart success with much better than random accuracy. The best correlation seems to be that a song’s closeness to a particular period’s “average” predicts high chart success. It is, he says, “statistically significant”.

Armed with this insight he takes on the role of producer and attempts to make a song (a ballad) being recorded at Trevor Horn’s studio as average as possible by, amongst other things, adjusting its tempo and adding some rap. It doesn’t really work, and when he measures the results with his computer, he finds that he has manoeuvred the song away from average with this manual intervention.

He then shifts his attention to trying to find the stars of tomorrow by picking out the most average song from 1200 tracks that have been sent into BBC Radio 1 Introducing. The computer picks out a particular band who seem to have a very danceable track, and in the world’s least scientific experiment ever, he demonstrates that a BBC Radio 1 producer thinks it’s OK, too.

His final conclusion: “We failed spectacularly this time, but I am sure the answer is somewhere in the data if we can just find it”.

My immediate thoughts on this programme:

-An entertaining, interesting programme.

-The rule still holds: science is not valid in the field of aesthetic judgement.

-If your system cannot predict the future stars of the past, it is very unlikely to be able to predict the stars of the future.

-The choice of which aspects of songs to measure is purely subjective, based on the scientist’s own assumptions about what humans like about music. The chances of the scientist not tweaking the algorithms in order to reflect their own intuitions are very remote. To claim that “The computer picked the song with no human intervention” is stretching it! (This applies to any ‘science’ whose main output is based on computer modelling).

-The lure of data is irresistible to scientists but, as anyone who has ever experimented with anything but the simplest, most controlled pattern recognition will tell you, there is always too much – and at the same time never enough – training data. It slowly dawns on you that although in theory there may be multidimensional functions that really could spot what you are looking for, you are never going to present the training data in such a way that you find a function with 100%, or at least ‘human’, levels of reliability.

-Add to that the myriad paradoxes of human consciousness, and of humans modifying their tastes temporarily in response to novelty and fashion – even to the data itself (the charts) – and the reality is that it is a wild goose chase.

(very relevant to a post from a few months ago)

The Secret Life of the Signal

Some people actually think of stereo imaging as a “parlour trick” that is very low on the list of desirable attributes that an audio system should have. They ‘rationalise’ this by saying that in the majority of recordings, any stereo image is an artificial illusion, created by the recording engineer either deliberately or by accident; it does not accurately represent the live event – because there may not even have been a single live event. So how can it matter if it is reproduced by the playback system or not? Perhaps it is even best to suppress it: muddle it up with some inter-channel crosstalk like vinyl does, or even listen in mono.

At the top of the list of desirable attributes for a hi-fi system, most audiophiles would put “timbre”, “tonality”, low distortion, clean reproduction at high volumes, dynamics and deep bass. All of these qualities can be experienced with a mono signal and a single speaker – in fact, in the Harman Corporation’s training for listening, monophonic reproduction is recommended when performing listening tests.

Because their effects are not so obvious in mono, phase and timing are regarded by many as supremely unimportant. I quote one industry luminary:

Time domain does not enter my vocabulary…

Sound is colour?

We know that our eyes respond to detail and colour in different ways. In the early days of analogue colour TV, it was found that the signal could be broadcast within practical bandwidths because the colour (chrominance) information could be sent at lower resolution than the detail (luminance).
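The same trick survives today in digital video as 4:2:0 chroma subsampling. A small sketch of the principle (my own illustration; the Rec. 601-style luma weights are standard, but the test image is synthetic):

```python
# Luma at full resolution, chroma at a quarter of the samples: the
# bandwidth trick of analogue colour TV (and of 4:2:0 digital video).
import numpy as np

# A smooth synthetic 64x64 RGB image standing in for a photograph
x = np.linspace(0.0, 1.0, 64)
rgb = np.stack([np.outer(x, np.ones(64)),   # red ramps down the rows
                np.outer(np.ones(64), x),   # green ramps across columns
                np.full((64, 64), 0.5)],    # flat blue
               axis=-1)

# Rough luma/chroma split (Rec. 601-style luma weights)
luma = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
chroma = rgb - luma[..., None]              # colour-difference signals

# 'Broadcast' the chroma at half resolution in each direction...
low = chroma[::2, ::2]
# ...and let the 'receiver' stretch it back out and recombine
chroma_up = np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)
rebuilt = luma[..., None] + chroma_up

err = np.abs(rebuilt - rgb).mean()
print(low.size / chroma.size)               # 0.25: 75% of chroma discarded
print(err < 0.02)                           # True: the loss is tiny
```

Three quarters of the colour information is thrown away, yet the reconstruction barely differs from the original – because, like our eyes, the scheme keeps the detail where it matters.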

There is, perhaps, a parallel in hearing, too: that humans have separate mechanisms for responding to sound in the frequency and time domains. But the conventional hi-fi industry’s implicit view is that we only hear in the frequency domain: all the main measurements are in the frequency domain, and steady state signals are regarded as equivalent to real music. A speaker’s overall response to phase and timing is ignored almost totally or, at best, regarded as a secondary issue.

I think that this is symptomatic of an idea that pervades hi-fi: that the signal is ‘colour’. Sure, it varies as the music is playing, but the exact nature of that variation is almost incidental; secondary in comparison to the importance of the accurate reproduction of colour, and that in testing, all that matters is whether a uniform colour is accurately reproduced.

There has, nevertheless, been some belated lip service paid to the importance of timing, with the hype around MQA (still usually being played over speakers with huge timing errors!), and a number of passive speakers with sloping front baffles for time alignment. Taken to its logical conclusion, we have these:

The Wilson WAMM Master Chronosonic (final prototype).

Their creator says, though:

It’s nice if you have phase coherence, but it is not necessary

So they still fall short of the “straight wire with gain” ideal. It still says that the signal is something we can take liberties with; that we needn’t aspire to absolute accuracy in the detail as long as we get a good neutral white, a deep black, and all uniform (‘steady state’) colours reproduced with the correct shading. It says that we understand the signal and it is trivial. Time alignment by moving the drivers backwards and forwards is an easy gimmick, however, so we can at least go that far.

Another Dimension

I think that with DSP-corrected drivers and crossovers, we are beginning to find that there is another dimension to the common or garden stereo signal; one that has been viewed as a secondary effect until now. Whether created accidentally or not, the majority of recordings contain ‘imaging’ that is so clear that it gives us access to the music in a way we were not aware of. It allows us to ‘walk around’ the scene in which the recording was made. If it is a composite, multitrack recording, it may not be a real scene that ever existed, but the individual elements are each small scenes in themselves, and they become clearly delineated. It is ‘compelling’.

I can do no better than quote a brand-new review of the Kii Three, written by a professional audio engineer, which echoes something I was saying a couple of weeks ago: imaging is not just a ‘trick’; it improves the separation of the acoustic sources in a way that goes beyond the traditional attributes of low distortion and colouration.

I think he also echoes something I said about believable imaging giving the speaker a ‘free pass’ in terms of measurements. As in my DIY post, he says that the speaker sounds so transparent and believable that there is no point in going any further in criticising the sound. A suggestion, perhaps, that conventional ‘in-room’ measurements and ‘room correction’ are shown up as the red herrings they are if a system sets out to be genuinely neutral by design, at source.

Firstly, the traditional kind of subjective analysis we speaker reviewers default to — describing the tonal balance and making a judgement about the competence of a monitor’s basic frequency response — is somehow rendered a little pointless with the Kii Three. It sounds so transparent and creates such fundamentally believable audio that thoughts of ‘dull’ or ‘bright’ seem somehow superfluous.

… it is dominated by such a sense of realistic clarity, imaging, dynamics and detail that you begin almost to forget that there’s a speaker between you and the music.

…I’ve never heard anything anywhere near as adept at separating the elements of a mix and revealing exactly what is going on. I found myself endlessly fascinated, in particular, by the way the Kii Three presents vocals within a mix and ruthlessly reveals how good the performance was and how the voice was subsequently treated (or mistreated). Performance idiosyncrasies, microphone character, room sound, compression effects, reverb and delay techniques and pitch-correction artifacts that I’d never noticed before became blindingly obvious — it was addictive.

…One of the joys of auditioning new audio gear, especially speakers, is that I occasionally get to rediscover CDs or mixes that I thought I knew intimately. I can honestly say that with the Kii Three, every time I played some old familiar material I heard something significant in the way it performs…

…Low-latency mode …switch[es] off the system phase correction. It makes for a fascinating listening experience. …the change of phase response is clearly audible. The monitor loses a little of its imaging ability and overall precision in low-latency mode so that things sound a little less ‘together’.

“The Kii Three is one of the finest speakers I’ve ever heard and undoubtedly the best I’ve ever had the privilege and pleasure of using in my own home.”

Television’s first night


There was an interesting BBC programme last week which celebrated the 80th anniversary of the launch night of BBC television. It aimed to re-create the original event as closely as possible, even to the extent of building replicas of some of the technology in use at the time.

For those who don’t know the story, the BBC launched television in 1936 running two types of technology in parallel: the Logie Baird mechanical system and EMI’s vacuum tube-based electronic system. Baird’s system was used first, and then the whole thing was repeated using the electronic system. The original television receivers, of which only 300 had been sold by the launch, had a switch to select Baird or EMI mode – I hadn’t realised that, even on launch day, receivers were using electronic picture tubes even when the camera system feeding them was not electronic.

The Baird mechanical system was incredible: for truly live images it had to use a “flying spot” camera, where the scene (the face of a presenter sitting in a pitch-black booth) was raster-scanned with a high-intensity dot of light and the resulting reflected light picked up by a photo-sensor. In order to achieve 240 lines of resolution, two rotating discs were used: one a metre in diameter, spinning so fast that its edge was almost supersonic, and a synchronised slower disc with a spiral mask which selected one of several sets of dots on the main disc.
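The ‘almost supersonic’ claim is plausible. A quick back-of-envelope check, assuming a spin rate of 6,000 rpm (my assumption – the programme didn’t state it):

```python
# How fast is the edge of a 1 m diameter disc at 6,000 rpm?
import math

diameter_m = 1.0         # from the programme
rpm = 6_000              # assumed spin rate (not stated in the text)
speed_of_sound = 343.0   # m/s, in air at about 20 C

edge_speed = math.pi * diameter_m * rpm / 60   # circumference x revs/sec
print(round(edge_speed))                       # 314 m/s
print(round(edge_speed / speed_of_sound, 2))   # 0.92 - roughly Mach 0.9
```

At that sort of speed a steel disc makes sense, and the old engineer’s question about aluminium (quoted below) was a very fair one.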

More general scenes of groups of performers and so on were recorded live to film, which was developed in a portable ‘lab’ mounted beneath the camera, ready to be scanned by a flying spot scanner some 54 seconds later – this was effectively the first ever telecine machine. The transition from live to telecine sections required logistical coordination around the 54-second delay, meaning that the performers had to start 54 seconds before the live announcer stopped talking, and the announcer had to wait in silence after the performance ended before someone jabbed him in the ribs through a hole in the side of the booth and he could start talking again. (I found this whole thing baffling: why was it important that any of it was truly ‘live’? Why not just do it all delayed by 54 seconds? Perhaps, as was implied in the programme, the telecine images were not quite as crisp as the live ones…?)

Anyway, the writing was on the wall for the mechanical system, and the six month competition was terminated after only three months. My question is: why did it take so long? Why did people go to such heroic lengths to pursue a solution that was so obviously doomed? Perhaps men’s fascination with spinning discs in preference to electronic solutions is universal. I have no doubt that there were some diehards who thought that the mechanical system somehow captured a better picture than a soulless glass tube.

The Engineering Department of Cambridge University had the fun of developing the replica flying spot camera (although with only 60 lines of resolution as opposed to the original 240). Things got a bit fraught in the build up to the ‘launch’ however: a persistent mechanical howl from the disc mechanism threatened to ruin everything. It seemed to take several hours of effort and anguish before someone had the bright idea of applying a drop of oil…

None of the original presenters, performers or staff present at the launch night are still with us, but the BBC did manage to track down a 104-year-old engineer who worked for Baird. The launch of television seems like so long ago, and yet this man was already 24 when it happened. He is still as sharp as a pin, and when Hugh Hunt of Cambridge University told him he was building a replica flying spot machine using an aluminium disc instead of the original steel, his brow furrowed immediately and he asked “Are you sure aluminium will be strong enough to withstand the centrifugal force?”.

I enjoyed seeing the old abandoned studios in Alexandra Palace, and Paul Marshall‘s barn full of old TV equipment, including some of the earliest camera tubes in existence. He has built a working camera based on a genuine Iconoscope tube, using modern electronics to drive it, giving us a close re-creation of pre-war electronic TV pictures. I somehow find old TV equipment quite moving; TV was an important part of my childhood and I can’t help but think of the snippets of the golden past that might have been captured through those lenses.

Hi-Fi Sci-Fi


Last night I watched a BBC TV play from 1972 called The Stone Tape. An electronics company installs its R&D department in an old mansion, with the aim of developing “a new recording medium”. Tape is, apparently, “too delicate and it loses its memory”. They stumble upon a possible ready-made solution in a room in the oldest part of the house, which seems to have a ‘ghost’ – a Victorian maid frozen in time just before she fell to her death. What if it’s not a ghost, but a ‘recording’ of an event that has somehow become embedded in the stone itself? Maybe this could be “the big one” they have been looking for…

What I particularly liked about it, was the idea that – hard to believe – there once was a time before the world went digital, and when everything was still up for grabs. Digital computers do play a role in the story, but only as a way of “correlating” the experimental results in order to spot possible connections that a human might miss.

It’s also a well-observed portrayal of life in a certain kind of company – some of it seemed very familiar.

Later… series 48

Jools Holland’s programme is back on BBC2 (series 48!).

This week I was very impressed with Laura Mvula’s song Overcome which featured a beautiful descending harmony from the three backing singers.

(I saw Ms Mvula presenting a programme on Nina Simone recently. In the programme she showed that she is a ‘proper’ musician who can play classical music on the piano. I am always impressed by that).

I also enjoyed The Coral’s Miss Fortune (very Echo and the Bunnymen-esque vocal..?)

I’m Not in Love: the story of 10cc

Another excellent in-depth portrait of a band on BBC4. It took us from the individual members’ musical origins in the 1960s, through their self-built studio in Stockport, worldwide fame and global hits, management woes and eventual break-up over ‘musical differences’. The band of four members comprised two distinct duos: Gouldman & Stewart and Godley & Creme; their competitiveness in the studio pushing them to ever-greater heights of creativity and experimentation. Clearly all four members are geniuses who couldn’t help but be innovative and experimental. They also seem to have had quite a nice time doing it, and weren’t in it for the money and the fame.

They are thoroughly nice chaps who have aged well and seem to bear no ill will against each other. Godley and Creme went on to achieve the greater prominence and success after the break-up, moving into video production for other bands in the 1980s (virtually inventing the ‘look’ of the pop video genre) and having several chart hits of their own.

For the geeks among us, the programme went into some depth describing the studio trickery involved in creating the ultimate exercise in overdubbing, I’m Not in Love. We also got to see the inner workings of G&C’s musical instrument invention the Gizmo.

Having said all that, I, personally, have never really been drawn to their music. I was always aware of it, and maybe I admired it for its cleverness, but even after this documentary I won’t be rushing over to Spotify to listen to any of it. I am glad that other people like it, though.

Awkward glossing-over moment of the week: how Jonathan King dreamed up the band’s name.