Let’s begin by cranking up our time machine knobs and going back to the dawn of the digital era. The early ’70s are still considered by many the “golden age” of audio, but major studios are growing more and more frustrated with magnetic reel-to-reel tapes and start looking for alternatives.
The year is 1971: using NHK’s experimental PCM recording system, Dr. Takeaki Anazawa, an engineer at Denon, makes the world’s first commercial digital recording. Only a year later, Denon unveils the first 8-channel PCM encoder, the DN-023R, which uses 47.25 kHz / 13-bit PCM resolution and a 4-head open-reel broadcast video tape recorder.
The momentum was there, and the unstoppable wheel of Moore’s-law-driven innovation started turning. The industry soon realized that digital audio was coming and would be the “next big thing” on which to capitalize. Of course, this transition would only happen if digital playback systems became affordable to consumers. This led to a lot of scientific research in the field of digital-to-analog converters.
There are numerous approaches to building a high-performance D/A converter, but at that time circuit design had settled around the R-2R ladder network to obtain the required precision. The accuracy of such a converter is determined by the matching of the R-2R resistors. Unfortunately, laser trimming or other expensive and time-consuming methods are needed to achieve that, so people started looking for alternatives.
One of those people was Rudy van de Plassche, a young R&D engineer working at Philips Research Laboratories. At the annual IEEE conference of 1976 he presents a paper named “Dynamic Element Matching for High-Accuracy Monolithic D/A Converters”, where he proposes a new, simple, accurate and reliable design for D/A converters named “Dynamic Element Matching”, or DEM for short. Moreover, this method doesn’t need any trimming and can be mass-produced using standard bipolar technology. The paper shows a prototype chip capable of 12-bit resolution. A couple of years down the road he presents a new paper, named “A Monolithic 14-Bit DA Converter”, where he shows an early prototype of the TDA1540, now capable of a whopping (at the time) 14-bit resolution. And so the legend is born.
These two papers, together with “A Monolithic Dual 16-Bit DA Converter”, have all you need to know about the TDA1541(A) chips. I recommend everyone who’s interested in building DACs around these ICs to take a deep dive there. This is an exceptional opportunity to extract every last bit of performance out of these parts. One can only dream of something similar for modern-day DAC chips.
Let’s just do a primer on the main DEM idea here. It’s essential to grasp it for our later discussion, and (as the paper states) it’s really simple. The problem sounds like this: how do we generate exact binary-weighted (descending in 1/2 fashion) currents for all our bits of data? The obvious answer is: by dividing every next current exactly in half. OK, now how do we do that, knowing that every real active or passive element will introduce some error? And the (maybe not so obvious) answer is: by using time-averaging.
Here is how it works (conceptual schematic above). We have a reference current 4·I and we divide it, using non-ideal elements, into four coarse parts, each having an error ΔI. Then we connect every two of these currents together for an equal amount of time t. Remember, we derived these currents by dividing by four. This means that the sum of these currents is still exactly 4·I, even if the individual errors ΔI are not equal. So if we connect each current to all other three with equal time periods, all errors average to zero ( Δ1I+Δ2I+Δ3I+Δ4I = 0 ) and at the output of each switch we get an exact average current of one I (dashed line in the time graph above).
All that is left now is to connect two switches together, and we have our most-significant-bit (MSB) current of 2 mA and the next bit current of 1 mA. The last 1 mA current from a switch goes into the next divider network, and the division is repeated (0.5 mA, 0.25 mA…). This is how exact binary-weighted currents are generated using DEM.
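To make the averaging concrete, here is a tiny numerical sketch in Python (my own illustration, not from the papers; the mismatch values are arbitrary). The four branch currents carry errors that sum to zero, since the divider merely splits a fixed 4·I reference, and each output is connected to each branch for an equal time slot:

```python
I = 1.0  # ideal unit current in mA (illustrative value)
# Assumed mismatch of the four divider branches; the errors sum to zero
# because the divider only splits a fixed 4*I total current.
errors = [0.03, -0.01, -0.015, -0.005]
branches = [I + e for e in errors]

def averaged_output(k, slots=4):
    """Average current at output k when, in time slot t, it is
    connected to branch (k + t) % 4, i.e. to every branch once."""
    return sum(branches[(k + t) % 4] for t in range(slots)) / slots

# Despite the mismatch, every output averages to exactly 1.0 mA:
outputs = [averaged_output(k) for k in range(4)]

# Two outputs paralleled give the 2 mA MSB current, one gives the
# 1 mA next-bit current, and the remaining 1 mA feeds the next divider:
msb_current = outputs[0] + outputs[1]   # 2.0 mA
bit2_current = outputs[2]               # 1.0 mA
```

Changing the `errors` values (as long as they sum to zero) leaves the averaged outputs untouched, which is exactly the point of DEM.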
I should add that everything above works with just two currents in the divider, but for practical reasons four were chosen. Otherwise we would need twice the voltage headroom for cascading those dividers, meaning −30 V for the TDA1541, which is highly impractical, even for the ’70s.
Here we must stop and draw our first and most important conclusion from all that was said above. Bit-current precision, and the resulting DAC non-linearity, depend only on the initial divider error ΔI and on timing accuracy. In other words:
“DEM frequency doesn’t affect DAC linearity, only frequency stability does!”
This is one of the biggest misconceptions about the TDA1541 DEM circuit. I believe it originated in the DIY community when this post by the late Henk ten Pierick was made in one of the audio newsgroups (remember those? yeah, I’m that old). There he states: “The DEM frequency should be made such that within one sampling period ALL DEM states are used”. Which people took literally and out of context. And so the urban legend that “DEM current averaging doesn’t work if its frequency is lower than the sample rate” was born. When in fact, all he was saying was that you should synchronize the DEM frequency to the sample rate in order to make it stable (not dependent on the digital data signal) and still keep it high enough (4fs–8fs) for the standard DEM filtering caps to be effective. This way, just as the author states, all interference from the DEM circuit is folded back to DC and there is no leakage into the audio band.
Otherwise there will be substantial bleed-through. Above is a spectrum captured for various free-running DEM frequencies that are multiples of the 48 kHz sample rate. Spectral peaks as low as fDEM/4 can be observed (a 4-bit shift register is running the DEM switches). This in itself does not mean any increase in non-linearity per se! Although at this level, the DEM signal will be audible as a distorted tone at half of its frequency.
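The fold-back arithmetic behind this can be sketched numerically (my own illustration with assumed numbers, not a measurement). With a 4-bit shift register the DEM residue sits at multiples of fDEM/4; folding each component against the sample rate shows why a DEM clock hard-locked at 4fs lands everything at DC, while a free-running clock only slightly off leaks audible tones:

```python
FS = 48_000  # sample rate, Hz

def fold(f, fs=FS):
    """Fold frequency f back into the 0..fs/2 baseband."""
    f = f % fs
    return min(f, fs - f)

def dem_residue(f_dem):
    """Baseband-folded frequencies of the DEM residue components
    (4-bit shift register -> components at k * f_dem / 4)."""
    return [fold(k * f_dem / 4) for k in range(1, 5)]

# DEM locked to exactly 4*fs: every residue component folds to DC.
locked = dem_residue(4 * FS)       # [0.0, 0.0, 0.0, 0.0]

# Free-running DEM just 700 Hz off 4*fs: audible low-frequency tones.
free = dem_residue(4 * FS + 700)   # [175.0, 350.0, 525.0, 700.0]
```

The exact hazard depends on the DEM filtering caps and the analog stage, of course; this only shows where the residue lands, not how strong it is.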
What instead can and will increase distortion is the fact that this free-running DEM clock drifts in and out of lock with our sample-rate frequency. It’s clearly visible in the scope shot above. This is an exaggerated view of the problem, as the two frequencies are purposely made close to each other. But this locking happens, to one degree or another, with every free-running non-sample-rate-integer DEM oscillator, in every TDA154x CD player ever produced.
The spectrum above was captured to illustrate the effect of DEM oscillator instability on DAC linearity. Increased THD is not the worst part here. What I (and many others) believe sounds much nastier are those “hairy” sidebands around the 1 kHz fundamental and its harmonics. This is what data-dependent jitter folded back into analog audio looks like. It’s very difficult to capture in a spectrum, as it’s pseudo-random in nature and its energy is distributed over a wide frequency range. So it has no pronounced peaks, but believe me when I say that it sounds just dreadful. Hence hard-locking the DEM clock to 4–8fs was and is really good advice. You can see it actually implemented in the Grundig CD-9009 player, and maybe in a few other players too.
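As a toy illustration of why such jitter produces a raised “skirt” rather than discrete peaks, here is a small simulation (all numbers are my own assumptions, with the jitter grossly exaggerated; real data-dependent jitter is not white either): a 1 kHz tone sampled with pseudo-random timing error turns that error into broadband noise instead of clean harmonics:

```python
import numpy as np

fs, f0, n_samp = 48_000, 1_000, 8192
t = np.arange(n_samp) / fs

clean = np.sin(2 * np.pi * f0 * t)

# Assumed pseudo-random sampling-instant error, 50 ns RMS (illustrative).
rng = np.random.default_rng(0)
jitter = rng.normal(0.0, 50e-9, n_samp)
dirty = np.sin(2 * np.pi * f0 * (t + jitter))

# The error signal is low-level broadband noise, not discrete tones:
err = dirty - clean
spectrum = np.abs(np.fft.rfft(err * np.hanning(n_samp)))
```

Plotting `spectrum` shows a noise floor smeared across the whole band; halving the jitter RMS halves the floor, which is why this artifact hides so well in ordinary THD measurements.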
But there is always a catch. By synchronizing DEM to our system clock, we have made sure that the whole internal oscillator circuit, together with all the bit-current dividers and bit switches, activates at the same time the actual sample conversion happens!
Looking at the TDA1541 die allocation, it becomes quite obvious that this amounts to something like 80% of the die area worth of transistors, all switching at the same time. This poses an undeniable risk of introducing a lot of sample-conversion timing uncertainty, no matter how clean an incoming digital triggering signal we provide from the outside. Induced switching noise and ground bounce will make sure of that. So forget about all the re-clocking and low-phase-noise super-clocks; we have just introduced a serious bottleneck into the system.
What we want to do instead is make use of the TDA1541’s ability to run in simultaneous data mode, stop the bit clock and commence sample conversion in total digital silence. But how do we achieve that? Even if the DEM oscillator is running at a stable non-sample-rate-multiple frequency, those events will still coincide very often. This is a “damned if you do, damned if you don’t” kind of situation. If the solution to this conundrum were to just synchronize the two clocks, this would have already been done by Mr. Rudy van de Plassche himself, straight on the die. Instead, a free-running oscillator was chosen as an engineering compromise.
But what if the DEM oscillator was slowed down to a frequency low enough that it coincides with sample conversion only very rarely? This way we kill two birds with one stone: we solve the switching-all-at-once problem, and now, even if the DEM clock tries to lock to the sample rate, the resulting timing error is very small. Hurray!
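A back-of-the-envelope sketch of the “very rarely” part (all numbers here are my own assumptions, not from the datasheet): if a DEM edge disturbs conversion only when it falls inside some small hazard window around a conversion instant, the fraction of affected samples scales directly with the DEM frequency:

```python
FS = 44_100        # sample conversions per second
HAZARD = 100e-9    # assumed hazard window around each conversion, 100 ns

def affected_fraction(f_dem, fs=FS, window=HAZARD):
    """Approximate fraction of samples whose conversion coincides with
    a DEM switching edge. A randomly-phased DEM edge lands inside one
    of the fs hazard windows with probability fs*window; with f_dem
    edges per second that is f_dem*fs*window affected samples/sec,
    i.e. a fraction of f_dem*window."""
    return min(1.0, f_dem * window)

fast = affected_fraction(4 * FS)   # DEM at 4*fs: ~1.8% of samples hit
slow = affected_fraction(150)      # DEM at 150 Hz: ~0.0015% of samples
```

Under these assumed numbers, slowing the DEM clock from 4fs down to 150 Hz cuts the coincidence rate by three orders of magnitude, which is the whole argument in one line.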
But wait, why hasn’t this been put into the datasheet? Those Philips engineers were surely smart enough to figure this out back then? I believe this wasn’t an option back in the ’70s–’80s for the following reasons:
But can we pull it off now? Let’s find out in the next part of the article.