**In the first part we took a look at the theoretical principles of DEM operation and found out some limitations of its practical implementation. We saw how sensitive the internal oscillator circuitry is to incoming digital signals and measured the resulting linearity degradation. An alternative approach of lowering the DEM frequency was suggested, which will be the topic of this second part.**

We have made a good theoretical hypothesis on how and why a low-frequency DEM clock makes sense. To find out whether it is even a viable solution that can be implemented with real-world components, some further research and testing are needed. If the data analysis looks promising, in-situ experiments with a TDA1541A chip will be performed to validate the presumption, just as the best practice of the scientific method demands.

Let’s begin by looking at the TDA1541 on-chip analog output circuit (above) and see what could go wrong. Here I_{bit} is our bit-current, represented by a current source to which a C_{ext} filtering capacitor is connected. We know that a current source connected to a load resistance adjusts the voltage in such a way as to keep the current constant. Also, an ideal capacitor’s voltage (charge) can’t change without current flowing. Putting the two together, we see that this is the place where the “real” current averaging of the DEM circuit takes place. Here we make our first conclusion:

“DEM capacitor voltage is proportional to the averaged bit-current”

It also means that if we connected an ideal capacitor of infinite capacitance and let it reach the averaged voltage, the DEM circuit could be stopped entirely! Unfortunately, charging an infinite capacitor means waiting for the universe to end, so let’s get back to reality. Real capacitors (amongst other flaws) have a leakage current I_{leak}. This current “steals” part of our accurate bit-current and sends it to ground before it reaches the output. If substantial enough, it could even upset the cascode stages. This is unlikely though, as the load resistance is quite low (4-7 kΩ).

To get a better picture of all the influences, let’s look at the full conceptual schematic. Here we see that 7 filtering capacitors are used: one for each of the 6 most significant bits and one for the shared current feeding the 10-bit passive divider. Now consider what happens if all leakage currents are equal. In that case, all 6 MSB currents, together with the shared one, are offset by the same amount. In theory this introduces no non-linearity, just a static offset error (not important in audio). Let’s make a note that:

“DEM capacitor leakage current doesn’t degrade DAC linearity”

However, in a real circuit the passive 10-bit divider will lose some of its accuracy, as it now has a smaller (62.5 μA − I_{leak}) current to work with. It is also very unlikely that real capacitors would have exactly equal leakage currents. Let’s call the difference between any two leakage currents ΔI_{leak} and say that it is now the main source of non-linearity. But how much ΔI_{leak} is still acceptable?

The TDA1541A datasheet states that the guaranteed integral and differential linearity error is 1 LSB maximum. Great. But what does that mean? To answer this question, we have to define the LSB and what differential and integral non-linearity errors are. Let’s quickly dive into this tech-term rabbit hole.

To put it simply:

- LSB – or Least Significant Bit – is the minimum one-step value that a DAC can reproduce.
- Differential non-linearity (DNL) error is the maximum deviation between an actual step height and the ideal value of 1 LSB.
- Integral non-linearity (INL) error is the maximum deviation between the ideal transfer function and any real point on it.

Both are usually specified in relation to 1 LSB. The difference between the two is subtle: think of it as looking either at the minimal one-step DAC resolution error, or at how those step errors can accumulate over the whole output range.
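To make the definitions concrete, here is a small sketch (my own illustration, not from any datasheet) that computes end-point DNL and INL from the measured transfer function of a hypothetical 3-bit DAC:

```python
import numpy as np

def dnl_inl(measured, lsb=1.0):
    """DNL and INL (in LSB) from a measured DAC transfer function.

    measured : analog output for each input code, in ascending code order
    lsb      : ideal step size, in the same units as `measured`
    """
    measured = np.asarray(measured, dtype=float)
    dnl = np.diff(measured) / lsb - 1.0            # step-height error per code
    # end-point INL: deviation from the line through the first and last codes
    ideal = np.linspace(measured[0], measured[-1], len(measured))
    inl = (measured - ideal) / lsb
    return dnl, inl

# hypothetical 3-bit DAC where one step is 0.4 LSB too tall
dnl, inl = dnl_inl([0.0, 1.0, 2.0, 3.4, 4.4, 5.4, 6.4, 7.4])
print(dnl.max())        # ~0.4  -> worst-case DNL
print(abs(inl).max())   # ~0.23 -> worst-case INL
```

Note how a single oversized step produces a large DNL but a smaller INL, because the end-point line absorbs part of the error.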

It should also be noted that the relationship between these errors and the harmonic distortion of a DAC is not simple. Using appropriate math it can be shown that:

“SNR of a converter is directly related to DNL, and THD is directly related to INL.”

So an increase in the noise floor means increased DNL, and an increase in THD means higher INL. This will come in handy later when evaluating results by capturing a large-FFT-size spectrum. Otherwise, an automated measurement system generating sub-sets of input codes would be needed to evaluate each bit error with metrological-grade precision.

Intuitively, this relationship can be seen by looking at a real transfer function. If it zigzags around the ideal axis with lots of small DNL errors, the output signal will contain lots of non-harmonically related distortion products, and they will add to the system noise floor. A smoothly deviating transfer function, however, will have distortion products with a harmonic relationship. For instance, a bow-shaped INL curve will produce 2nd-harmonic distortion, with an S-shaped one the 3rd harmonic will dominate, and so on.
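This intuition is easy to verify numerically. The sketch below (my own toy model, not a TDA1541A simulation) pushes a 1 kHz sine through transfer functions with a bow-shaped error, an S-shaped error and a random per-code zigzag error, then inspects the resulting harmonics:

```python
import numpy as np

fs, f0, n = 48_000, 1_000, 4_800      # 100 full cycles -> harmonics land on exact bins
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t)

# bow-shaped INL: quadratic error added to an otherwise ideal transfer function
bow = x + 0.01 * (1 - x ** 2)
# S-shaped INL: cubic error term
s_shape = x + 0.01 * x ** 3
# zigzag DNL: small random error per 8-bit code (no harmonic structure,
# just a raised noise floor)
rng = np.random.default_rng(0)
per_code_err = rng.uniform(-0.01, 0.01, 256)
zigzag = x + per_code_err[np.round((x + 1) * 127.5).astype(int)]

def harmonic_db(y, k):
    """Level of the k-th harmonic relative to the fundamental, in dB."""
    spec = np.abs(np.fft.rfft(y * np.hanning(n)))
    return 20 * np.log10(spec[k * 100] / spec[100])   # bin 100 = 1 kHz

print(harmonic_db(bow, 2))       # strong 2nd harmonic, around -46 dB
print(harmonic_db(s_shape, 3))   # strong 3rd harmonic, around -52 dB
```

The bow shape lands almost entirely in the 2nd harmonic and the S shape in the 3rd, exactly as argued above, while the zigzag version spreads its error energy across the whole spectrum.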

Now let’s suppose that an average TDA1541A has 0.75 LSB DNL and INL errors. By introducing an additional 0.25 LSB error, we would still meet the 1 LSB datasheet specification for DNL. The same is true for INL, but only if this additional 0.25 LSB error has a truly random distribution.

Summarizing all the above, it is safe to assume that an acceptable value for ΔI_{leak} should be less than 0.25 LSB. In absolute terms, that means a current of merely 15 nA or so.

ΔI_{leak} < 0.25 LSB (15.3 nA)
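That number can be double-checked with quick arithmetic from the figures already mentioned (62.5 μA feeding the 10-bit passive divider, the 6 MSBs being handled by the active DEM sources):

```python
# TDA1541A current weights: the 10-bit passive divider is fed with 62.5 uA,
# so one LSB of the 16-bit converter corresponds to 62.5 uA / 2**10.
i_divider = 62.5e-6               # A, shared divider current quoted earlier
lsb_current = i_divider / 2**10   # ~61 nA per LSB
leak_budget = 0.25 * lsb_current  # allowed leakage-current mismatch
print(f"1 LSB    = {lsb_current * 1e9:.1f} nA")
print(f"0.25 LSB = {leak_budget * 1e9:.1f} nA")
```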

This will be our target for DEM capacitor selection. Let’s quickly remind ourselves that we are talking about the *delta* between two leakage currents here, not the *total* current. For film or (small) MLCC capacitors this is not a challenge, as their total leakage current is usually somewhere in that range. However, we are looking to shift the DEM frequency at least three orders of magnitude down, from 100 kHz to 100 Hz or lower. This implies increasing the filtering capacitance from the usual 1 μF to at least 100 μF or more. While 100 μF MLCC capacitors exist, their leakage currents are substantial: for example, a 100 μF 10 V TDK capacitor in a 1206 package has only 1 MΩ insulation resistance, which means 7.5 μA at the 7.5 V MSB filtering voltage. Film capacitors traditionally have almost non-existent leakage, but the package size for this much capacitance presents a challenge. Designing a PCB around that would be a non-trivial task, and even the cheapest 14 film capacitors of that size would burn a €300 hole (or deeper) in the budget.

This leaves electrolytic capacitors as the only viable option here (at least at the time of writing). At first glance this is a *terrible* idea: historically, electrolytic capacitors are notorious for their leakage current and are best avoided in leakage-sensitive applications. But times are changing and substantial progress has been made in this field. In particular, the special low-leakage KL series of capacitors made by Nichicon looked very promising and prompted further investigation.

I_{leak} = f(V, t, T)

A quick primer on electrolytic capacitor leakage: it is a function of applied voltage (V), time (t) and temperature (T).

As evident from the graphs above, it decreases with time and increases with applied voltage and temperature. The typical steady-state leakage I_{leak,op} is specified in datasheets as the product of capacitance, rated voltage and a self-discharge constant:

I_{leak,op} = const. × C × V

For the Nichicon KL series it is specified as 0.002CV or 0.2 µA, whichever is greater, after 2 min of applied voltage at a room temperature of 20 °C. This means 5 µA of leakage for a 100 μF 25 V capacitor. That’s still a lot! And why choose a 25 V capacitor with higher leakage if the maximum voltage on the DEM filtering is 7.5 V? Because leakage current drops significantly when the applied voltage is below nominal. Typically, when the applied voltage is only around 30 % of the rated voltage, the leakage current drops to a small fraction of I_{leak,op}. In theory it can then end up at around 50 nA for a 100 μF 25 V capacitor at 7.5 V. This is totally acceptable, so a batch of 25 V 100 μF and 680 μF KL-series capacitors was bought for further evaluation.
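The datasheet arithmetic is easy to reproduce. The helper below is a sketch of my own, using only the 0.002CV / 0.2 µA figures quoted from the KL spec above:

```python
def kl_leakage_spec(cap_farads, rated_volts):
    """Nichicon KL datasheet leakage: 0.002*C*V or 0.2 uA, whichever is
    greater, after 2 min at rated voltage and 20 C (C in uF, V in volts)."""
    i_microamps = 0.002 * (cap_farads * 1e6) * rated_volts
    return max(i_microamps, 0.2) * 1e-6   # result in amps

print(kl_leakage_spec(100e-6, 25))   # 5 uA for the 100 uF / 25 V part
print(kl_leakage_spec(680e-6, 25))   # 34 uA for the 680 uF / 25 V part
```

Keep in mind these are worst-case figures at the full rated voltage; the strong derating at 7.5 V discussed above is not captured by the spec formula.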

When electrolytic capacitors are stored without externally applied voltage, the oxide layer starts to deteriorate, depending on storage time, temperature and relative humidity. Once voltage is applied again, the process of “reforming” the oxide layer begins, and during this period the leakage current is higher than normal. Even then, the Nichicon KL capacitors are within the datasheet specs of 5 and 8 µA after just 1 minute of applied voltage. Over the next 60 minutes this current gradually falls below 0.1 µA. Then a temperature sweep with a step size of 5 °C was performed: the leakage current stays below 2 µA for both capacitor sizes over the whole operating temperature range up to the rated +85 °C.

After aging the capacitors at room temperature and nominal voltage for 24 hours (5 hours at 80 °C has the same effect), the leakage current drops even further. Measuring it at room temperature would require lab-grade equipment with pA resolution, which borders on registering single-electron events and is something I’m naturally not equipped to do. Instead, a voltage sweep was made at 80 °C, where the leakage current is still measurable. The step size was 5 V and each measurement was taken after a 20 min settling time because of a large hysteresis: after each voltage increase there was a substantial jump in leakage current, which then stabilized after about 20 minutes at constant voltage. The left graph above shows that the voltage dependence is almost linear for these capacitors, instead of the more logarithmic one of standard series. The temperature sweep was made at the nominal voltage of 25 V, and values below 10 nA are a 5th-order polynomial approximation toward zero.

Summarizing all the measurement results, we can make the following statements:

- After a 24-hour aging period, both capacitors show extremely low leakage current, below 10 nA, in the usual operating temperature range (<50 °C) at the nominal voltage of 25 V.
- When voltage is re-applied after 24 hours of unbiased storage, the same results can be measured after just a 20 min settling time.
- Assuming a worst-case operating temperature of 60 °C and extrapolating from the linear voltage-vs-leakage dependence measured at 80 °C, the maximum operating leakage current at the 7.5 V DEM voltage is 9 nA and 3 nA for the two capacitor values respectively.
- With a very high degree of certainty we can state that the difference of leakage currents ΔI_{leak} can’t exceed the 15.3 nA boundary in a temperature range up to 60 °C.
- The only way to get ΔI_{leak} > 15.3 nA would be a defective capacitor, or operating the capacitors above 60 °C with a 30 °C temperature difference between them (60 °C and 90 °C).

Next, a Marantz CD-40 board containing the TDA1541A DAC chip was used for in-situ experiments. The SAA7220 digital filter chip was desoldered, the TDA1541A was reconfigured for I2S operation, and digital signals were supplied using a USB-to-I2S converter. 100 μF DEM capacitors were soldered directly to the TDA pins and the oscillator capacitor was changed to a 1 μF film type. This led to very unstable oscillation. Further experiments showed that the internal emitter-coupled multivibrator needs an additional asymmetric pull-down to −15 V to reach stable oscillation. It should be trimmed precisely using an oscilloscope, and only once the TDA1541A has reached operating temperature!

After 24 hours of capacitor aging, the measurements above were taken. The green trace was captured playing back −90 dB white noise and shows the worst-case bleed-through of the 96 Hz DEM clock and its 4 multiples. The largest spike is 24 Hz at −105 dB; the 50 Hz spikes are just mains pickup, the capacitors acting like antennas. The black trace is the resulting intermodulation of a 1 kHz tone, with 24 Hz-spaced sidebands at −108 dB and below. This is *absolutely* below the hearing threshold by even the furthest stretch of imagination. Despite that, extensive listening tests with real music signals were performed to make absolutely sure.

For those of you who want to hear what low-frequency modulation sounds like, I made an example. The most audible is 4 Hz modulation, and everyone I know stops hearing it at around −60 dB. A 24 Hz modulation is audible as a separate tone together with the fundamental. I say audible, yet 24 Hz is more “feel-able” with your guts or eardrums, so it’s difficult to discern the two signals even when they are at a 1:1 ratio. Implying that’s possible at a level of −108 dB is just ludicrous.

And the final “proof of the pudding”: standard distortion measurements comparing 100 kHz and 100 Hz DEM clock frequencies. With a 0 dBFS fundamental, all harmonic distortion products (except the 2nd) decreased using the 100 Hz clock and electrolytic capacitors. However, when evaluating low-level linearity by reproducing a −40 dB 1 kHz tone, the harmonic distortion increases and decreases even out. I should also add that at these levels the TDA1541A temperature has a major influence, and all measurements can be invalidated just by blowing air over the chip. To evaluate the noise floor, a second −40 dB measurement was taken without averaging. A decrease of 2-4 dB across the whole spectrum is clearly visible with the 100 Hz DEM, even though there are more bleed-through spikes. All of the above suggests that:

“There is no degradation in DNL or INL performance for a real-world 100 Hz DEM clock implementation.”

Needless to say, all these measurements don’t tell us a thing about what impact a slower DEM clock has on the sound. But the impact is there, and it’s very audible, although only when doing blind A/B tests between exactly the same “test mules” running different DEM frequencies. It’s very hard to put into words, just like describing the influence of a really clean master clock. It’s simply “a little bit more of everything”: air, tempo, details, you name it. Just like focusing a photo lens; that’s probably the best analogy I can offer. Take it or leave it.