Phono stylus inspection pt. 1
Have you ever woken up in the middle of the night wondering, “What is the condition of my stylus? Is it worn out? Is it damaging my records?” If not, then you’re probably not a true vinyl aficionado! Or at least, that’s what people who pretend to be one might say. At this point, I’m probably one of those people. Over the years, I’ve gathered quite an expensive vinyl collection, and ensuring its integrity has become one of my everyday tasks.
And here lies the main question: How do I check the wear on my cartridge stylus? And where do I begin learning about all of this? If you’ve ever had the same questions, then you’re in the right place! So, let’s begin.
Let’s see under the microscope
I must admit, optics was and probably still is not my forte. That’s one of those subjects I vaguely remember from my high school physics class. Over the years, I gathered bits of information here and there by experimenting with astrophotography and selecting a microscope. In other words, I knew too little when I first embarked on this adventure.
Luckily for me, I already had a stereo microscope in my lab for SMD work. It’s nothing special – just a generic OEM version of an AmScope with 10X eyepieces and a 10X base magnification, allowing for a maximum of 100X magnification. It was obvious at first glance that 100X would not cut it, as I couldn’t see any details. So I went shopping for 20X and 30X eyepieces. When they arrived, I dove into this marvel and began inspecting my stylus. And soon enough, I realized that I saw… well, still nothing!
Above, you can see the actual view captured by my phone camera through the microscope eyepiece. At 100x total magnification, we can see the entire cantilever and confirm that the stylus is present, but not much beyond that. At 200x magnification, we begin to discern the rough shape of the stylus, and by 300x magnification, we can identify the stylus type with some confidence. But what about the actual contact surfaces? It’s still very difficult to determine where they are. So, I started searching for information and stumbled upon an instruction manual from Shure Brothers that many of you are probably familiar with: the Stylus Evaluation Kit SEK-2.
Above is my ‘artist’s impression’ of how the Shure setup works. We have two light sources from the sides. They project light onto the contact surfaces, and the light reflects back to our eyes. This works very well because the contact surfaces sit at 45° to the eyes and light sources, acting as mirrors. This is due to the fact that the stylus wears down in the 90° V-groove of an LP. When we view them from above – we see two “blobs” of bright light. The bigger they are – the more wear there is.
What the Shure manual doesn’t mention is that you can do the opposite! By tilting the stylus 45°, the light will no longer reflect off the contact surface, creating a dark patch instead, as in the illustration above. This offers an alternative method for inspection.
And finally, we can use a microscope ring light, through which we view the stylus. In this setup, the light travels downward and bounces back from the reflective surface as if from a mirror, so we see our microscope objective surrounded by a ring of light. This produces a dark patch surrounded by a bright contour at the edges of the contact surface. If we rotate the stylus away from 45°, we can make the whole patch bright again – as it will then reflect light from our ring source.
So there are several approaches to this, and the most effective one will depend on the specific circumstances. In my case, the best setup was the ring light from above. It was a quick method, as I could clearly tell whether the contact surface was dark or bright depending on how I rotated the stylus in my hands.
With careful positioning, I was finally able to take a picture of the contact surface of my stylus. Above you can see a new AT440MLb cartridge with a micro-ridge stylus. It doesn’t look bad at first glance! However, if you try to zoom in, you quickly realize that’s all there is – you can see a shiny patch, but that’s about it. Determining its exact width is very difficult because the image lacks sufficient detail.
If you think there’s much more visual information when viewing directly through the eyepieces, you’d be wrong. I see almost the same thing, just in better definition: a small patch of light. Is it ‘good’ or ‘bad’? Too wide or too narrow? I mean, I’m looking at a new stylus, so it must be good, right? Right? Exactly.
I was back to square one. Unless I always have a new stylus of that particular geometry on hand for cross-checking, or manage to burn that ‘reference’ into my visual memory – I have no idea what I’m looking at. So, a new plan of attack was born: take a high-resolution photo of a stylus and measure the contact surfaces reliably.
The wonderful world of optics
The famous words of Tyrone Biggums say it all – we need more resolution! And this is where things get complicated (hence the meme above to keep you entertained). There is no problem in buying a new camera – you can get a decent 4K DSLR these days for as low as 450€ – but what you get is just more pixels. So the problem here is twofold:
“A camera’s image quality is limited by both the sensor resolution and the optical resolution of the lens.”
We need both: a good camera with a decent pixel count and optics that can project an image onto those pixels with similar resolution. At this point, it was clear I was in for some more shopping in the microscope lens department. But before any impulse purchases and accompanying regrets could take place – we need to understand how microscopes work.
The wonderful world of microscopes
In the picture above, you can see the two optical systems of the simplest monocular microscope. Such systems (like all other microscopes) come in two flavors: finite and infinity-corrected. They differ by the type of objective lens they use.
In a finite system, the objective lens has a fixed distance between its output and the image focus plane, called the ‘tube length.’ Infinity-corrected objectives, on the other hand, produce a collimated beam of parallel rays, known as the ‘infinity space’. Cool name, isn’t it? Sounds like something from Star Trek. We can add more optical components in this space without losing any quality, but we still need to focus the beam back with a tube lens.
Since we don’t need any additional optical components for our purposes, a finite system would be preferable. I should also mention that it’s much easier (and cheaper) to integrate beam focusing into a finite objective than to make a separate high-quality tube lens. Unfortunately, over the past 15 years, the industry has shifted toward infinity-corrected systems, so almost all modern high-quality objectives are designed that way.
For our plan to work, we don’t need eyepieces – we can simply project the image onto the camera sensor. To do this, we need some tubes to isolate the beam from outside light and a tube lens to refocus the image. OK, but how do we select the objective? And how do we match it to the camera sensor? Since we are no longer using an eyepiece, what magnification should we choose?
The wonderful world of objectives
I won’t expand here on all the parameters of objective lenses; if you want to read more, there are wonderful resources like microscopyu.com or edmundoptics.com that go into much more detail. For our purposes, the most relevant parameters are magnification, numerical aperture and field number. The last one is almost never listed on the objective casing, and you have to dig it out of the datasheet.
Also, since our object is not a flat surface like a typical slide with a specimen but rather a 3D stylus cone, the working distance could also come into play. This is especially true at higher magnifications, where the working distance becomes very small, and we might not be able to fit the object into that space.
Since I had already determined that 300X magnification through the eyepieces is only just ‘good enough,’ I thought, ‘Let’s start with a 100X objective, right?’ Well, no… It’s much more complicated than that! Sigh… Why do I only write about complicated things here? Well, because the real world is like that.
To determine the real magnification on the camera sensor, we must know the sensor size and the field of view (FOV) size at the image focal plane. I checked my Panasonic G7, and it appears to have a Micro Four Thirds (MFT) sensor with dimensions of 17.3 x 13 mm (~21.6mm diagonal).
Now this is where the field number (FN) comes into play – it is the diameter of the observable area in the intermediate image plane where the optical specifications of the objective are met. Since we will use the objective without an eyepiece, the intermediate plane is effectively our sensor plane. So our FN is equal to the FOV at the image plane.
Ideally, we want the FN to be close to the sensor diagonal to achieve good sensor coverage. I should note that having a smaller FN than the sensor diagonal doesn’t mean there will be no image in the corners. The real light cone diameter is usually much larger than the FN! It’s the job of the eyepiece to limit it to the FN. But since we are projecting onto the sensor, we will start getting some vignetting and a loss of resolution in the corners.
My online research showed that the usual FN for objective lenses ranges from 18–26mm. So, as luck would have it, with an MFT sensor I have pretty good coverage all around. Since the FN can be expressed as FN = FOV at the object plane × objective magnification:
For an FN of 22mm and a 20X lens, we find that the FOV at the object side will be 22/20 = 1.1mm. That is to say, an object of 1.2mm will be projected onto a full 24mm image plane, but the 21.6mm sensor diagonal will crop it, so the real magnification will be:
With some napkin math we find that Real Magnification = 24/21.6 ≈ 1.11, and 1.11 × 20 ≈ 22.2X. Now let’s say we have a 24″ monitor and open the image captured by our sensor at full width. That’s a 1.2mm object on a 610mm monitor, so now the Real Magnification is 610/1.2 ≈ 508X, and so on.
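If you prefer to see this as code, here is a minimal Python sketch of the same napkin math; the 22mm FN, 20X objective, 1.2mm example object, MFT sensor diagonal and 610mm monitor width are just the example values from above.

```python
# Napkin math from above: field of view, sensor crop and "real" magnification
# for a 20X objective projecting directly onto a Micro Four Thirds sensor.

fn_mm = 22.0              # field number of the objective (image-side FOV diameter)
magnification = 20.0      # objective magnification
sensor_diag_mm = 21.6     # MFT sensor diagonal (17.3 x 13 mm)
monitor_width_mm = 610.0  # ~24-inch monitor viewed at full width

fov_object_mm = fn_mm / magnification            # 1.1 mm of object fills the field number
object_mm = 1.2                                  # example object slightly larger than the FOV
projected_mm = object_mm * magnification         # 24 mm image, larger than the sensor diagonal

crop_factor = projected_mm / sensor_diag_mm      # ~1.11, the sensor crops the image
real_mag_sensor = crop_factor * magnification    # ~22.2X at the sensor
real_mag_monitor = monitor_width_mm / object_mm  # ~508X when viewed full-width on the monitor

print(f"FOV at the object plane:      {fov_object_mm:.2f} mm")
print(f"Magnification at the sensor:  {real_mag_sensor:.1f}X")
print(f"Magnification on the monitor: {real_mag_monitor:.1f}X")
```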
You see now? Asking what magnification we need is pretty useless unless we also specify where we are measuring it. It also becomes plainly obvious that even a 20X objective will be sufficient to explore details that were already visible at 300X eyepiece magnification. Instead, we should focus on what resolution we can achieve with that magnification.
Resolution of the optical system
I’m going to oversimplify things here grossly, but otherwise, we will end up in a realm of quantum physics, and the SEO of this page will fall below absolute zero.
So, the optical resolution of an objective lens is really not a difficult concept to grasp. When a point of light is imaged through a lens, it will appear not as a single bright point, but as a diffraction pattern called the Airy pattern. This pattern has a bright peak in the center, called the Airy disk, surrounded by rings of alternating minima and maxima.
Here we are only interested in the central peak and the radius of the first minimum, because when the peak of one point’s pattern coincides with the first minimum of another’s, we are just able to distinguish the two points.
“In imaging, the resolution is defined as the shortest distance between two points on a specimen that can still be distinguished.”
When that happens, we say we are at the Rayleigh resolution limit, and the distance between the peak centers of the two light points is the minimal resolvable distance. It is also the radius of the Airy disk, and it can be defined as: r(R) = 0.61·λ / NA.
Objective lens resolution, when considered in isolation, depends only on its NA – numerical aperture – and λ – the wavelength of the light. So it’s not a clear-cut limit, but rather a ‘soft’ one that depends on what colors we are imaging and how closely we allow the Airy disks to overlap. Again, napkin math says that for 0.4 NA (most common for 20X lenses) in the middle of the visible light spectrum (green, 550nm), the Rayleigh resolution is r(R) = 0.61 × 550nm / 0.4 = 838nm, or ~0.84 μm.
You might have noticed that up until now we didn’t use magnification in our calculations. That’s because, by default, this Rayleigh resolution is calculated at the object plane. To find the real size of the projected distance between the Airy disks at the image plane, we need to multiply it by the lens magnification. So for a 20X lens that is: 0.84 μm × 20 = 16.8 μm. Great! Now we must find out whether we can resolve this distance with our sensor.
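For anyone who wants to plug in their own objective, here is a small Python sketch of the Rayleigh calculation; the 0.4 NA, 550nm wavelength and 20X magnification are simply the example values used above.

```python
# Rayleigh resolution at the object plane and its projected size at the image plane.

def rayleigh_resolution_um(wavelength_nm: float, na: float) -> float:
    """Minimal resolvable distance r(R) = 0.61 * lambda / NA, in micrometers."""
    return 0.61 * wavelength_nm / na / 1000.0

na = 0.4                # numerical aperture of the example 20X objective
wavelength_nm = 550.0   # green light, middle of the visible spectrum
magnification = 20.0

r_object = rayleigh_resolution_um(wavelength_nm, na)   # ~0.84 um at the object plane
r_image = r_object * magnification                     # ~16.8 um projected onto the sensor

print(f"Rayleigh resolution at the object plane: {r_object:.2f} um")
print(f"Projected distance at the image plane:   {r_image:.1f} um")
```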
First, we need to determine our sensor’s pixel pitch. For a Micro Four Thirds (MFT) sensor with dimensions of 17.3 x 13 mm and a resolution of 4592 x 3448, the pixel pitch is: 17.3 / 4592 = 0.00376 mm or 3.76 μm.
Next, we need to remember that this is digital sampling, and the Nyquist criterion is just as valid as it is in audio. To ensure that Airy disks at Rayleigh distance from each other can be resolved in a digital image without aliasing, we need at least two pixels per this distance.
However, real-world assessments show that if we want some degree of fidelity and a gradient between these Airy disk peaks, we should strive for at least 2.5 pixels and ideally 3 pixels. So that’s 3 x 3.76 μm = 11.28 μm. This means that if two light points are projected onto the sensor 11.28 μm apart and they satisfy the Rayleigh criterion, we will be able to distinguish between them in the final image. We have already found that our 20X 0.4NA lens is limited to 16.8 μm, so:
Finally, we see that in this particular example, with a 20X 0.4 NA objective and an MFT sensor, we are limited by the optical resolution of our lens. We can fit almost 4.5 pixels within the distance between our Airy disks, exceeding the Nyquist criterion. This system is ‘over-sampled’ and ‘diffraction-limited’, which might sound bad but actually isn’t. It just means we have “too good” of a camera for our lens.
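Putting the sensor side together, a quick sketch of this sampling check might look like the following; the 3-pixels-per-distance target is the rule of thumb mentioned above.

```python
# Sampling check: how many sensor pixels fit into the Rayleigh distance
# projected onto the image plane by a 20X / 0.4 NA objective?

sensor_width_mm = 17.3
sensor_pixels_wide = 4592
pixel_pitch_um = sensor_width_mm * 1000.0 / sensor_pixels_wide   # ~3.76 um

rayleigh_object_um = 0.61 * 0.550 / 0.4      # ~0.84 um at the object plane (550 nm, 0.4 NA)
rayleigh_image_um = rayleigh_object_um * 20.0  # ~16.8 um projected onto the sensor

pixels_per_distance = rayleigh_image_um / pixel_pitch_um   # ~4.5 pixels
target = 3.0                                               # rule of thumb from above

if pixels_per_distance >= target:
    print(f"{pixels_per_distance:.1f} px per Rayleigh distance -> over-sampled, diffraction limited")
else:
    print(f"{pixels_per_distance:.1f} px per Rayleigh distance -> under-sampled, sensor limited")
```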
Are we there yet?
I’m sorry, but we also have to talk about Depth-of-field. This is the ‘axial resolution’ along the optical axis. I will use DOF as a short form from here on, but there is also Depth-of-focus, and these two are often mixed up and misused, so let’s put an end to that.
Again, this is quite an easy concept to visualize, and above you can see my attempt at that. Depth-of-field, in the most basic terms, is the distance that you can move your object and still have it in focus with sufficiently sharp details (80% of perfect sharpness).
The full expression is DOF = λ·n/NA² + n·e/(M·NA). Here n is the refractive index of the medium and e is the resolution (or pixel size) of the imaging system. So the first term accounts for the diffraction limit and the second one for the sensor limit.
In the same manner, Depth-of-focus is the distance that you can move your camera sensor and still have enough image sharpness.
What is often overlooked when talking about these subjects is the influence of the lens NA on both of them, and it’s an inverse one. The higher the NA, the less object ‘depth’ you will capture, but the more room there is to move the image plane (i.e. adjust the tube length) and still be in focus, and vice versa. This is why, when using low NA objectives of 0.25 and below, it’s better to get your tube length right.
“Depth-of-field is a distance in the object space and Depth-of-focus is a distance in the image space.”
Now, when trying to calculate these, we soon find ourselves in a pickle. If you had any kind of education in physics, you probably remember the most awkward part of it – wave-particle duality. The teacher says: light can behave both as particles and as a wave simultaneously, and everybody in class is like, “Right… let’s hope nobody asks why.”
So if we treat light as particles traveling along rays, we must use classical geometric optics. This means accounting for the whole optical path AND the sensor pixel size. So the expression becomes: DOF(classical) = c / (NA·M).
Here c is the circle of confusion (no more than 3 pixels) and M is the magnification of the lens. For our lens and sensor that is: (3 × 3.76 μm) / (0.4 × 20) ≈ 1.41 μm.
If we instead treat light as a wave, we get the expression shown earlier: DOF(wave) = λ·n/NA² + n·e/(M·NA). You see that now we have λ – the light wavelength – in our expression, so we are dealing with waves. This expression comes from this paper by I.T. Young et al., and it’s much more precise than the others you can find online, especially for high NA lenses. For the middle of the visible light spectrum at 550 nm or 0.55 μm:
DOF(wave) = 0.55/0.4² + 3.76/(20 × 0.4) ≈ 3.44 + 0.47 ≈ 3.9 μm (taking n = 1 for air and e as our 3.76 μm pixel pitch). So a DOF of roughly 3.9 μm – more than double the classical estimate.
The central lobe of the Airy pattern extends along the optical axis, which means that light from points slightly out of focus still has significant intensity at the image plane. This results in a broader range where objects appear acceptably sharp, thus increasing the DOF compared to the sharp cut-off predicted by geometric optics.
I hope from all of the above it’s obvious that, with the same lens, the classical depth of field (DOF) is limited more by the sensor pixel size and the resulting circle of confusion (CoC), while the wave-optics DOF provides a more realistic focus distance. From that follows (see the sketch after this list):
- DOF(wave) > DOF(classical) – we are diffraction limited
- DOF(wave) < DOF(classical) – we are sensor limited
- DOF(wave) ~ DOF(classical) – we are near the system optimum
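To make these rules of thumb concrete, here is a minimal sketch of both DOF estimates; it assumes air as the medium (n = 1), the 3-pixel circle of confusion for the classical case, and a single pixel pitch as e in the wave-optics formula, matching the example numbers above.

```python
# Classical (geometric) vs wave-optics depth of field for the 20X / 0.4 NA + MFT example.

wavelength_um = 0.55          # green light
na = 0.4
magnification = 20.0
n_medium = 1.0                # refractive index of air
pixel_pitch_um = 3.76
coc_um = 3 * pixel_pitch_um   # circle of confusion of ~3 pixels

dof_classical = coc_um / (na * magnification)                    # ~1.4 um
dof_wave = (wavelength_um * n_medium / na**2
            + n_medium * pixel_pitch_um / (magnification * na))  # ~3.9 um

print(f"Classical DOF:   {dof_classical:.2f} um")
print(f"Wave-optics DOF: {dof_wave:.2f} um")

if dof_wave > dof_classical:
    print("DOF(wave) > DOF(classical): diffraction limited")
elif dof_wave < dof_classical:
    print("DOF(wave) < DOF(classical): sensor limited")
else:
    print("DOF(wave) ~ DOF(classical): near the system optimum")
```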
It’s all a lie!
I know, it’s painful, but bear with me – we have covered all the theoretical limits of our supposed optical system, and it’s time to face reality. Unfortunately, every time we do so, it manages to slap us in the face.
All of what was said above assumes we are dealing with ideal lenses made from a perfect medium (ether) by god-like creatures. These lenses have no flaws and are assembled by aliens who transcend even the concept of precision. Well, you guessed it – it’s a lie!
Real microscope objectives are developed by engineers and manufactured and assembled by technicians. Where humans are involved, there are inevitably design compromises, tolerances, and errors. All these imperfections create aberrations – deviations from ideal optical behavior that occur when real light rays passing through actual lenses fail to converge to perfect focal points, resulting in image distortions and blurring.
In order to correct all these aberrations, manufacturers go to great lengths with complex lens designs and mark their objectives accordingly. Below is a quick summary of what this nomenclature actually means.
All of the above shows that an objective’s NA is actually more of a theoretical resolution limit that can be degraded by a multitude of imperfections. Correcting for them is not cheap, and the better the correction, the more expensive the lens becomes.
It’s also worth noting that a clever trick to eliminate chromatic aberrations is to use monochromatic light. That is, if our sample doesn’t require colors to be comprehensible, we can use a light source with a very narrow color spectrum, resulting in only sharp spots of that color in our image.
Finally, let’s remind ourselves that the Rayleigh limit is (0.61⋅λ)/NA. So for the same NA, making λ smaller makes the limit smaller and increases the resolution! This is another ‘cheat’ we can use – by illuminating our object with near-UV light, we can effectively improve the lens resolution limit.
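As a quick illustration of that ‘cheat’, here is the same Rayleigh formula evaluated at green versus near-UV illumination; the 405nm value is just a typical violet/near-UV LED wavelength chosen as an example.

```python
# Rayleigh limit for the same 0.4 NA objective under green vs near-UV illumination.

def rayleigh_nm(wavelength_nm: float, na: float) -> float:
    return 0.61 * wavelength_nm / na

na = 0.4
for wavelength_nm in (550.0, 405.0):   # green vs a typical violet/near-UV LED
    print(f"{wavelength_nm:.0f} nm illumination -> Rayleigh limit {rayleigh_nm(wavelength_nm, na):.0f} nm")
# 550 nm gives ~839 nm, 405 nm gives ~618 nm: shorter wavelength, finer resolvable detail.
```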