A lot has been written over the years about these wonderful milestones of mixed-domain engineering. I’m talking about the TDA154x family of DAC chips from Philips here. And probably the most common implementation of current-to-voltage conversion you will find for these DACs is a simple resistor followed by a tube gain stage.
And there is a good reason for that – it’s dead simple and sounds really good. “Sounding good” is always relative, of course, but extrapolating from my own “tubization” experience with various CD players over the years, I can definitely say that it is always a day and night difference compared to those RF-garbage-choked 4558/5532 op-amps commonly used there for I/V conversion and filtering.
As always in any kind of engineering, there is a price to pay for this kind of beguiling simplicity. For the TDA154x chips, the name on the check is output compliance. The datasheet strictly states that it should be at most ±25mV. To better understand what this means, let’s look at the internal output structure of this DAC.
These are simplified schematics for the first two bits out of 16. The TDA1541 uses the same diode-transistor switch circuit for all 6 upper bits. The lower bits use a more sophisticated compensated diode-transistor switch to further reduce glitch currents, and the last 6 bits are switched with base-compensated diff-pairs. The more curious reader can dive deeper into this subject by reading the whole IEEE paper. The circuit above consists of switches T1,2 and diodes D1,2. The bases of T1,2 are held close to +5V, but when the bit is selected by the data register, the base voltage goes to GND and current can flow through diodes D1,2 to the darlington pairs T6,5 or T5,3 and into the active current dividers, which are connected to -15V. Here R1,2 are internal resistors, C1,2 are external filtering caps, and V1 is the analog ground. I should also add that at digital zero a constant current of -2mA flows from Iout, producing a constant voltage offset across the output resistance. This corresponds to the middle of the output signal, which swings between 0 and -4mA at full scale.
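To make those numbers concrete, here is a quick back-of-the-envelope sketch (my own arithmetic, not from the datasheet) of the DC operating point that the -2mA idle current produces at a grounded I/V resistor:

```python
# DC operating point of the TDA154x Iout node with a grounded I/V resistor.
# Values from the text: 0 to -4 mA full scale, -2 mA at digital zero.

def iv_node_voltages(r_iv_ohms: float):
    """Return (offset, v_min, v_max) in volts at the Iout node."""
    i_zero = -2e-3             # idle current at digital zero (A)
    i_min, i_max = -4e-3, 0.0  # full-scale output current range (A)
    offset = i_zero * r_iv_ohms
    return offset, i_min * r_iv_ohms, i_max * r_iv_ohms

# With the ~12.5 ohm value implied by the datasheet's +/-25 mV compliance:
offset, v_min, v_max = iv_node_voltages(12.5)
print(offset, v_min, v_max)  # -25 mV offset, swing from -50 mV to 0 V
```

So without any counteracting bias, the node sits at a negative DC offset and the signal swings symmetrically around it.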
Now let’s consider what happens when there is a voltage swing on Iout. First of all, even if all bits were turned off, there are 16 reverse-biased PN diode junctions connected to the output, and they all have non-linear leakage capacitance. This introduces a modulation current and increases THD on its own. Then, as previously mentioned, the whole switching circuit is much more sophisticated, with a clever cascading/compensation mechanism that expects to see a constant voltage close to ground. Otherwise it doesn’t perform as designed and bit errors are produced. These are the primary distortion-generation mechanisms that I can see here. I’m sure there are many more that Philips’ semiconductor-level engineers were well aware of. Unfortunately, I’m not one of them.
OK, but how much does this distortion actually depend on the output voltage swing? Is ±25mV an absolute brick-wall limit after which distortion increases exponentially, or is it just a precaution in the datasheet to meet THD specs? Let’s find out.
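Before looking at the measurements, it helps to translate resistor values into voltage swing. Assuming the ±2mA signal current around the -2mA midpoint described above, the mapping is simple Ohm’s-law arithmetic (my own sketch, not a datasheet formula):

```python
# Map an I/V resistor value to the voltage swing it produces at Iout,
# using the +/-2 mA signal current around the -2 mA midscale current.

I_SIGNAL_PK = 2e-3  # peak signal current deviation from midscale (A)

def swing_for_resistor(r_ohms: float) -> float:
    """Peak voltage excursion (V) around the DC midpoint for a given I/V resistor."""
    return I_SIGNAL_PK * r_ohms

def resistor_for_compliance(v_compliance: float) -> float:
    """Largest I/V resistor (ohms) keeping the swing within +/-v_compliance."""
    return v_compliance / I_SIGNAL_PK

print(resistor_for_compliance(25e-3))  # 12.5 ohms for the datasheet's +/-25 mV
print(swing_for_resistor(30))          # 0.06 V, i.e. +/-60 mV (-120 mV full swing)
```

This is where the ~12Ω figure for the datasheet limit comes from, and why a 30Ω resistor corresponds to a -120mV full-scale swing.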
The data above was gathered from various experiments with Marantz CD-40 players; the exception was the TDA1540, measured in a Philips CD350. In all cases the AOL pin was lifted and the I/V resistor value was stepped in 5Ω increments. No counteracting bias current to +5V was used (so the output has a DC offset). The voltage was amplified x20 with the same low-THD op-amp and fed to my USB measurement interface, the MV-1. As can be seen from the graph, there is virtually no increase in THD up to 30Ω in all cases. This translates to an output compliance of -120mV and suggests that the datasheet value is very conservative. At least when we are talking about THD. But that’s just one spec, albeit the most marketed one.
Much more information can be extracted by looking into the spectral content of those measurements. Clearly visible above is an increase in the noise floor and other non-harmonically-related spectral impurities when going above the datasheet-recommended ~12Ω (±25mV) I/V resistance. This gives us another parameter for analysis.
This is basically Spurious-Free Dynamic Range (SFDR) with harmonics excluded. Here the older TDA1540 behaves very differently from the bunch. Already having a high noise floor of -95dB, it doesn’t care about the I/V value up to 40Ω. On the other hand, the latest A revision of the TDA1541 degrades linearly with every increase in output voltage swing. How audible is this raised noise floor? All I can say is that with a tube stage and a reasonable resistor value between 30-40Ω, it doesn’t present any problems sonically. At least to my ears.
And finally, here is the harmonic distribution of all the DACs over the I/V resistor value. It’s not normalized to the fundamental, but the trends are all clearly visible. Up to 30-40Ω there is a fight between the 2nd and 3rd harmonics, but after that the distortion profile is dominated by the 2nd, with all the others dropping almost monotonically. If I were to speculate on why so many people to this day love these DACs for their “natural” and “analogue-like” sound, this “tube-like” distortion profile (achieved without using any tubes) would be my first suspect.
So, answering the question in the title of this article – what is the optimal I/V resistor value? Well, the data above suggests that you shouldn’t go beyond 30-40Ω. If you’re feeling adventurous and aiming for a more colored sound, up to 50-60Ω is fine. If you use a 2mA bias to +5V and center the output voltage at 0V, distortion drops to half of what is shown above and these resistor values can be doubled. Personally, I wouldn’t go higher than 80Ω. After that, things start to degrade faster. But hey, you know what? The best value is the one that sounds good to your ears in your system. And don’t let anyone tell you otherwise!
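To illustrate why the bias trick lets the resistor value double: with a 2mA pull-up to +5V the output node idles at 0V, and the same signal now swings ±2mA around zero instead of 0 to -4mA, halving the worst-case voltage excursion for a given resistor. A hedged sketch of that arithmetic (my own, for illustration):

```python
# Worst-case voltage excursion at Iout, with and without a 2 mA
# counteracting bias current to +5V that centers the node at 0 V.

def peak_node_voltage(r_ohms: float, biased: bool) -> float:
    """Worst-case |voltage| (V) at the Iout node for a given I/V resistor."""
    # Biased: signal swings +/-2 mA around 0 V.
    # Unbiased: node swings the full 0 to -4 mA range.
    i_peak = 2e-3 if biased else 4e-3
    return i_peak * r_ohms

print(peak_node_voltage(40, biased=False))  # 0.16 V with 40 ohms, no bias
print(peak_node_voltage(80, biased=True))   # 0.16 V with 80 ohms and bias
```

The same worst-case excursion at double the resistor value, which is exactly why the 30-40Ω guideline above can stretch to 60-80Ω once the output is centered.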