Principles of Operation: Optical Profiler

A brief tutorial on how optical profilers use light interference to image surfaces.

Optical Profilometry



The concepts of wave coherence and wave interference are the keys to understanding how an interferometer works.  We’ll start by defining these terms.

Coherence

Light is an electromagnetic wave, so it has both an electric and a magnetic component. For our purposes it is sufficient to look at what the electric component is doing in space and in time. Consider a highly localized, highly coherent source of light, such as a laser. By ‘highly localized’ we mean that the light propagates along a tight path, like a beam, which is what light from a typical laser pointer does. What does ‘highly coherent’ mean? The explanation will take a few steps and a little imagination.

  • Imagine that we could take a snapshot of the electric field E along a section of the laser beam – freezing what it is doing at an instant in time – and then make a graph of what it is doing at that instant. We would see something like what is shown in Figure 1. The electric field changes sinusoidally in space. At position (1), for example, the electric field points up along the positive y direction, and at position (2) it points down along the negative y direction. Going from position (1) to (2) the electric field gradually decreases in strength in the positive direction, reaches zero, and then gradually increases in strength in the negative direction.

Figure 1  Snapshot of the electric field along a section of a coherent laser beam at one instant in time.


  • In case you are wondering what the electric field is, it can be thought of as a force field for charges. If you placed a positive charge, e.g. a proton, at position (1) in the light beam, right at this particular instant in time, the electric field at position (1) would push the charge upward because the field is pointing upward. Similarly, a positive charge placed at position (2) would be pushed downward by the downward-pointing electric field.

  • The wavelength of light is the length of one cycle of the sinusoidally varying electric field. It is usually denoted by the Greek letter lambda, λ, as indicated in Figure 1. On the human scale the wavelength of visible light is microscopic; red laser light, for example, typically has a wavelength around 630 nm. The small size of the wavelength of light is what makes it a good probe for measuring surface topography on the micron scale.

 

Returning now to the concept of light coherence, let’s examine the relationship between what the electric field is doing at two different points along the light path, points separated by integral numbers of wavelengths. If the light pattern repeats consistently between two such points – finishing at exactly the same phase position in the wave cycle at the ending point as at the starting point – then the light is considered to be coherent over that distance. So in Figure 1, if you compare what the light is doing at the two ends of the short distance L1 = 3λ, the waveform is at its maximum negative electric field at both places, so the light is coherent over the distance L1. Using the same reasoning over the longer distance L2 = 14λ, we come to the conclusion that the light is coherent over this distance as well.

The coherence length of the light source is the distance over which this relationship holds. For a typical laser the coherence length is very long, hundreds to millions of wavelengths.

As a counterexample, let’s examine in the same way a light source with a very short coherence length, such as the light from a yellow incandescent light bulb. Refer to Figure 2. At first glance the pattern in the electric field may appear perfectly sinusoidal, as in Figure 1. Closer examination reveals otherwise. The average wavelength of the yellow light emitted by this bulb is some distance λ. Over the short distance L1 = 3λ the ending state of the electric field is only slightly out of sync with its starting state, so we could say the light is ‘fairly coherent’ over this distance, but over the longer distance L2 = 14λ the synchronization between the electric field at the starting and ending points is obviously lost, and the light is not coherent over this distance. The coherence length of this particular light source extends over several wavelengths, with the degree of coherence gradually fading from ‘good’ over a few cycles to ‘nil’ after ten or so cycles.

Figure 2  Snapshot of the electric field from a low coherence light source, such as a yellow incandescent bulb.


It is important to remember that the waveform in Figure 2 is a snapshot of the electric field at one particular instant in time. If another snapshot is taken at some other instant the waveform will be arbitrarily shifted in some way. Still, the beginning and ending states over any short distance L1 will be nearly the same, because the coherence is good over the short distance. Over the long distance L2, however, the ending state will be totally unrelated to the beginning state, because once the distance is reached where coherence is lost the electric field fluctuates more or less randomly with respect to the starting point.
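The distinction can be sketched numerically. Below is a toy model, not taken from the text: a perfectly coherent field is a pure sinusoid, while a low coherence field accumulates a small random phase drift each cycle, so its value many wavelengths away loses its relationship to the starting point. The drift rate and random seed are arbitrary choices for illustration.

```python
import math
import random

WAVELENGTH = 1.0  # work in units of one wavelength

def coherent_field(x):
    """Ideal laser: a pure sinusoid, so the phase never drifts."""
    return math.sin(2 * math.pi * x / WAVELENGTH)

def low_coherence_field(x, drift_per_cycle=0.05, seed=0):
    """Toy incandescent source: the phase takes a small random walk,
    so correlation with the starting point fades after ~10 cycles."""
    rng = random.Random(seed)
    phase = 0.0
    # accumulate a random phase kick for each full cycle up to position x
    for _ in range(int(x / WAVELENGTH)):
        phase += rng.gauss(0.0, 2 * math.pi * drift_per_cycle)
    return math.sin(2 * math.pi * x / WAVELENGTH + phase)

# The coherent field repeats exactly over integral numbers of wavelengths
# (compare a point with points 3 and 14 wavelengths away):
print(round(coherent_field(0.25), 6),
      round(coherent_field(0.25 + 3), 6),
      round(coherent_field(0.25 + 14), 6))
```

Running the comparison at 3λ and 14λ for the coherent field gives identical values, while the drifting field agrees only over the first few cycles, mirroring Figures 1 and 2.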

Interference

When considering the concept of interference it helps to think in terms of the way water waves combine on the surface of a pond. Where two small-amplitude ripples cross paths on the water’s surface, the total amplitude of the water’s motion equals the sum of the individual ripple amplitudes. Two little ripples combine to give a bigger ripple. Similarly with light: where two light waves cross paths and overlap, the total amplitude of the electric field at some point in space is equal to the sum of the electric fields from the individual light sources.

It helps to introduce a concrete model of a way to combine two light waves in a controlled fashion.  An easy way to create two light waves from one light source, and recombine them in a controlled way, is by using a set of beam splitters and mirrors as shown in Figure 3.  At the left beam splitter mirror BS1, 50% of the light is transmitted through the mirror and 50% is reflected upward.  The portion which is transmitted travels a distance D1 before it reaches the right beam splitter BS2.  The portion which is reflected travels a longer distance D2  to reach BS2 because it must reflect off of mirrors M1 and M2 to get there.  At BS2 the two waves are recombined, and they then propagate together to the right toward the screen.  The white screen allows us to see how the two beams interact with one another when they are recombined.

Figure 3  Beam splitters BS1, BS2 and mirrors M1, M2 used to split one light beam into two, recombine the beams, and project them onto a screen.

We will introduce one control variable in the setup of Figure 3 by allowing mirrors M1 and M2 to move vertically in unison.  By raising and lowering these mirrors in unison we can vary the distance D2 traveled by the upper light beam. 

Terminology:  The difference between distances D1 and D2 is called the optical path difference (abbreviated OPD) for the two sources of light.  By moving M1 and M2  the OPD can be varied.

Suppose we adjust D2 so that it is exactly the same as D1. The optical path difference would be zero. In this situation the waves traveling paths D1 and D2 will always combine constructively: their maximum and minimum electric field values will align, so that as time goes by they go up and down in unison as they reach the surface of the screen. Graphs of the individual electric fields and the total electric field hitting the center of the screen as a function of time are shown in Figure 4A. Constructive interference produces a large amplitude for the total electric field. Looking at the screen when the OPD has been adjusted to zero, we would see a bright spot of laser light at the center of the screen.

Figure 4  Electric fields at the center of the screen: (A) constructive interference when the OPD is zero; (B) destructive interference when the OPD is ½ λ.

Now suppose D2 is adjusted so that it is exactly ½ λ longer than D1.    In this case the two light sources combine destructively:  at all instants in time the value of E1 cancels the value of E2 to produce zero electric field at the center of the screen, as shown in Figure 4B.  Looking at the screen when the OPD has been adjusted to  ½ λ we would see a dark spot (i.e. no visible light from the laser).

It is apparent that as the optical path difference is gradually varied by moving M1 and M2, the intensity of the spot of light will gradually alternate between the states of being very bright (constructive interference) and dark (destructive interference).  The intensity varies sinusoidally with changes to the optical path difference, as indicated in Figure 5A.  Looking at the screen we would see the intensity vary cyclically from dark to light and back to dark every λ change in the optical path difference.

Figure 5  Intensity at the center of the screen vs. optical path difference for (A) a long coherence laser source and (B) a short coherence incandescent source.
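The sinusoidal dependence of the spot intensity on OPD can be written down directly. The sketch below assumes two equal-amplitude beams, for which the combined intensity follows I = I₀ · cos²(π · OPD / λ); the 630 nm value is simply the red laser wavelength quoted earlier.

```python
import math

def screen_intensity(opd_nm, wavelength_nm=630.0, i0=1.0):
    # Two equal-amplitude coherent beams: I = I0 * cos^2(pi * OPD / lambda)
    return i0 * math.cos(math.pi * opd_nm / wavelength_nm) ** 2

print(screen_intensity(0.0))    # OPD = 0: fully constructive (bright spot)
print(screen_intensity(315.0))  # OPD = lambda/2: fully destructive (dark)
print(screen_intensity(630.0))  # OPD = lambda: bright again
```

Sweeping `opd_nm` continuously reproduces the cyclic bright-dark-bright pattern of Figure 5A, with one full cycle per λ of path difference.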

And now this discussion about light coherence and light interference comes to its conclusion. The laser light source in Figure 3, with its long coherence length, will produce clearly visible interference maxima and minima over a very long range of optical path differences (Figure 5A). But if we replace the high coherence laser source with the low coherence yellow incandescent bulb of Figure 2, the interference maxima and minima will be visible over only a short range. With the incandescent light, when the OPD is adjusted to be only a few wavelengths the coherence will be good, and the interference will be clear. But the interference minima and maxima will gradually fade as the OPD is increased (Figure 5B). Once the OPD goes beyond the coherence length of the yellow light source, the electric fields from the two beams reaching the screen will add together randomly rather than in sync. The interference will rapidly fluctuate between various levels of constructive and destructive interference, far too fast for the eye to follow. What the human eye will observe on the screen is a medium level of brightness, halfway between the very bright and very dark levels observed when the OPD is small.
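A common toy model for a finite-coherence source multiplies the fringe term by a visibility envelope that decays with OPD. The sketch below assumes a Gaussian envelope, with an illustrative 580 nm wavelength and 3 µm coherence length; these numbers are not from the text.

```python
import math

def fringe_intensity(opd_nm, wavelength_nm=580.0, coherence_length_nm=3000.0):
    """Two-beam interference with a finite-coherence source (toy model):
    fringe visibility falls off as a Gaussian function of the OPD."""
    visibility = math.exp(-(opd_nm / coherence_length_nm) ** 2)
    return 0.5 * (1.0 + visibility * math.cos(2 * math.pi * opd_nm / wavelength_nm))

print(fringe_intensity(0.0))       # bright fringe at zero OPD
print(fringe_intensity(290.0))     # nearly dark fringe at OPD = lambda/2
print(fringe_intensity(50_000.0))  # far beyond the coherence length: ~0.5
```

Near zero OPD the intensity swings between nearly 0 and 1; far beyond the coherence length it settles at the mid-level 0.5, the "medium brightness" the eye sees on the screen.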

Interferometer Design

The building blocks of a typical  interferometer  are shown schematically in Figure 6.  Its operation can be understood by following the light path from the light source to the CCD camera in a step-by-step fashion: 

  1. Light from the light source is collected by the condenser lens  and directed toward a beam splitter.  At the beam splitter 50% of the light is reflected into the objective lens of the interferometer. (The other 50% is not used).  Within the objective the light reaches a second beam splitter.  At this point the light is split into two components– a reflected component and a transmitted component– which will follow two different optical paths to the CCD camera. 

  2. Light component 1 reflects off of the beam splitter, reflects off of the small reference mirror at the center of the Mirau objective lens, and then returns to the beam splitter.  This constitutes optical path D1.  Light component 2 is transmitted by the beam splitter, then reflects off of the sample surface to travel back to the beam splitter plate.  This constitutes optical path D2. 

  3. Light components 1 and 2 are recombined when they return to the beam splitter in the Mirau objective. From there they travel the same path upward through the objective lens, the focusing lens, and finally onto the CCD camera surface.

  4. The CCD camera records how the two light components recombine. When the distance between the beam splitter and the sample exactly matches the fixed distance between the beam splitter and the reference mirror, the optical path traveled by both light components will be exactly the same (D1 = D2). The light waves will interfere constructively and the camera will record a bright intensity. As the sample distance is changed slightly from this position there will be a difference in the optical paths traveled by the two light components (D1 ≠ D2). Small changes in the optical path difference cause the interference pattern observed at the CCD camera to vary between constructive and destructive interference – bright and dark intensity.

Terminology:  The working distance of an objective is determined by the distance between the internal beam splitter and reference mirror.   When the separation Z between the front of the objective (i.e. the internal beam splitter) and the sample surface is very close to the working distance interference patterns will appear in the light reaching the CCD camera.

It follows that the interference pattern observed in the CCD camera image gives information about the height of the sample surface.  There are various techniques for converting the interference information in the camera image into the  height profile of the sample surface.  The two methods employed here, phase shift interferometry (PSI) and vertical scanning interferometry (VSI), are outlined in the next sections.

 

 

PSI:  Phase Shift Interferometry

Imagine placing a perfectly flat sample under the interferometer objective, with the sample perfectly level along the y-axis but tilted slightly along the x-axis, as shown in Figure 7. Consider the surface point directly below the center of the objective. If the interferometer is positioned so that the distance between the center of the interferometer and the sample is exactly equal to the distance between the beam splitter and the internal reference mirror, the OPD will be zero at that point, and the corresponding light intensity recorded at the center of the CCD camera will be a bright spot. Because the sample has no tilt along the y-axis, the same OPD condition will hold for every point along the y-axis of the camera, so the camera image will show a bright band down the middle. Now consider surface points further down (or up) the sloped surface. The separation between the objective and the surface gradually increases (decreases) for points further down (up) the slope. The interference for these sample positions gradually alternates between constructive and destructive. This produces alternating bright and dark bands in the CCD camera view, as shown in Figure 7.

Figure 7  A flat sample tilted along the x-axis under the interferometer objective, and the resulting alternating bright and dark bands in the CCD camera view.

Note that the bright bands in the camera view repeat as the separation between the beam splitter and the sample surface increases by integral numbers of half the light wavelength, λ/2. Recall that the light must traverse this gap twice, once going down toward the sample and again going upward toward the beam splitter. So the change in the total optical path is equal to two times the change in separation – in other words, 1λ, 2λ, 3λ, etc.
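The factor of two can be made explicit in a one-line helper; this is a trivial sketch, and the 550 nm value is just an example wavelength.

```python
def opd_change(gap_change_nm):
    # The beam crosses the gap twice (down to the sample and back up),
    # so the optical path difference changes by twice the gap change.
    return 2.0 * gap_change_nm

wavelength = 550.0  # example wavelength in nm
# A gap change of lambda/2 shifts the OPD by one full wavelength,
# returning the fringe to the same bright/dark state:
print(opd_change(wavelength / 2))  # 550.0
```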

PSI imaging is based on recording an interference pattern across the sample surface and converting this directly into height information.

Terminology:  The bright and dark bands of light in the camera image are referred to as interference fringes.  The pattern of interference fringes recorded in a single CCD camera picture is called an ‘interferogram’.

The algorithm in the software is said to “integrate the phase changes” in the interferogram. So for our imaginary tilted flat surface there are 6 cycles of intensity change from the left to the right edge of the camera image (Figure 8). Each cycle of intensity corresponds to a 360º change in phase of the sinusoidal pattern.
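One way to see what “integrating the phase changes” means is a minimal one-dimensional phase-unwrapping sketch. This is the standard unwrap algorithm, not necessarily the exact algorithm used by the instrument software: wrapped phase samples are walked left to right, and a 2π offset is accumulated whenever the phase appears to jump by more than π.

```python
import math

def unwrap(wrapped):
    """Integrate wrapped phase samples (radians in (-pi, pi]) into a
    continuous phase ramp by removing the artificial 2*pi jumps."""
    out = [wrapped[0]]
    offset = 0.0
    for prev, cur in zip(wrapped, wrapped[1:]):
        jump = cur - prev
        if jump > math.pi:        # apparent upward jump: a downward wrap
            offset -= 2 * math.pi
        elif jump < -math.pi:     # apparent downward jump: an upward wrap
            offset += 2 * math.pi
        out.append(cur + offset)
    return out

# A steady phase ramp (like the tilted flat surface), then wrapped into (-pi, pi]:
true_phase = [0.5 * i for i in range(80)]  # rises to 39.5 rad, about 6.3 cycles
wrapped = [math.atan2(math.sin(p), math.cos(p)) for p in true_phase]
unwrapped = unwrap(wrapped)
print(round(unwrapped[-1], 6))  # recovers the final true phase, 39.5
```

Converting the unwrapped phase to height is then just the λ/2-per-360º scaling described next.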

Figure 8  Interferogram of the imaginary tilted flat surface, showing 6 cycles of intensity change across the camera image.

 Each 360º change in phase also corresponds to a λ/2 change in the surface height. This ratio provides a conversion factor from the measured phase change across the surface to a vertical distance in nanometers. For example, if the light source has a wavelength of 550 nm then the conversion factor from phase to surface height would be

(550 nm / 2) / 360º = 0.764 nm/º

 The height of the ramp in Figure 8 would then be  6 x 360º x 0.764 nm/º =  1.65 µm.
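The arithmetic above is easy to check in a few lines; 550 nm and the 6-cycle fringe count are the values from the text.

```python
wavelength_nm = 550.0

# Each 360 degrees of fringe phase corresponds to lambda/2 of surface height:
nm_per_degree = (wavelength_nm / 2) / 360.0
print(round(nm_per_degree, 3))      # 0.764 nm per degree

# Six full fringe cycles across the tilted surface of Figure 8:
height_nm = 6 * 360.0 * nm_per_degree
print(round(height_nm / 1000.0, 2))  # 1.65 (micrometres)
```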

Below is a real example of an interferogram, recorded from a ball bearing surface. Notice how the fringes are closer together where the surface slope is steeper. This is also reflected in the adjacent graph, which shows the integrated phase for a cross-section through the center of the image. As expected for a ball bearing surface, the phase graph has the form of a circular arc.

Figure 9  Interferogram of a ball bearing surface, with the integrated phase for a cross-section through the center of the image.

 Thus far in this discussion of PSI imaging the coherence length of the light source has not been considered. It is very important, however. Returning to the imaginary scenario of Figure 8, if a light source with a coherence length of only a few wavelengths is used, the interference pattern observed at the CCD camera changes to something like Figure 10. Notice how the interference pattern is distinct at the center of the camera image but gradually becomes ‘washed out’ toward the left and right edges, where the OPD is longer and approaches the coherence length of the light source. It is apparent that with this light source the interference pattern could be used to determine the surface profile of only very flat surfaces – surfaces with features no more than about two wavelengths high. Taller features would produce little or no modulation in the light intensity, and the PSI method could not be applied in those regions. An important point to remember is that the coherence length of the light source determines the maximum height range over which PSI imaging can be applied.

Figure 10  Interferogram of the tilted flat surface using a short coherence light source; the fringes wash out toward the left and right edges.

 

 

VSI: Vertical Scanning Interferometry

The best type of light source for PSI imaging would be one with a very long coherence length, allowing the interference fringes to be observed over a wide range of surface feature heights. On the other hand, the characteristics of a very short coherence light source open up the possibility of a completely different approach to interferometric imaging, called vertical scanning interferometry (VSI), which in general is more versatile than PSI.

To understand the VSI imaging method, consider just the center point of the imaginary tilted sample surface of Figure 7. The light reflecting from this single point will travel through the interferometer optics and be focused onto the center pixel of the CCD camera. If a long series of rapid camera images is recorded as the interferometer is slowly lowered toward the sample surface, the intensity of the light at the center pixel as a function of the camera frame number will vary as shown in Figure 11. When the distance Z between the front of the Mirau interferometer and the center of the sample surface is much larger than the working distance WD of the interferometer, no interference will be observed in the light at this pixel (Figure 11A). The monochrome camera will record a mid-level gray shade for the light intensity. When the difference between Z and WD moves within the coherence length of the light, interference will be observed (B). The oscillations in the light intensity will increase to a maximum when Z is exactly equal to WD, then fade to zero as Z moves beyond the coherence length of the light. Continuing to move the interferometer downward, once Z is well below WD interference will no longer be observed, and the camera will again record just a mid-level gray light intensity (C).


Figure 11 Intensity at the central camera pixel when the distance between the Mirau interferometer and the sample is (A) far above the working distance, (B) near the working distance, and (C) far below the working distance. The light intensity scale runs from black (0) to white (255). The intensity variations are captured in 325 snapshots of the camera view.


Terminology:  A graph of the light intensity at one pixel of the camera view as a function of the position of the interferometer objective is referred to as a correlogram.

Notice how the peak intensity in the graph of Figure 11  occurs at the camera frame where the OPD for this pixel is zero.  If instead of doing this measurement at the center pixel of the camera view it is done at the far right side, where the surface is higher, the oscillating pattern in the intensity would be essentially the same, but the peak would occur in an earlier frame because the OPD=0 condition would be reached earlier in the frame sequence.  Similarly, a measurement of the pixel intensity at the far left side of the camera view would peak in a later camera frame.   This is shown graphically below.


Figure 12 Intensity vs. Z position of the Mirau objective for three different pixels in the CCD camera field of view. The red arrow marks the location of the peak intensity for each pixel. The height difference between pixels can be determined from the camera frame number where the peak occurs.

The camera frame sampling rate in a correlogram is held at a fixed value, typically of order one camera frame for every 100 nm step along the z-axis.  So there is a simple conversion factor to translate the interference patterns recorded by the CCD camera into a map of surface height.  For example, if the peak in the center pixel occurs at camera frame 175, and the frame rate is 100 nm/frame, then the surface height represented by the central pixel in the camera can be defined to be Z = 17.50 µm.  If the peak in the far right pixel occurs at camera frame 165 then the surface height represented by that pixel would be at Z = 16.50 µm.  Therefore the relative height difference between these two pixels in the sample image would be 1.00 µm. 
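The conversion from peak frame to height is a single multiplication. The sketch below uses the 100 nm/frame step and the frame numbers from the example above.

```python
STEP_NM_PER_FRAME = 100.0  # fixed z step between successive camera frames

def height_um(peak_frame):
    # The frame where the correlogram peaks maps linearly to surface height.
    return peak_frame * STEP_NM_PER_FRAME / 1000.0  # convert nm to um

center_px = height_um(175)   # peak at frame 175 -> 17.5 um
right_px = height_um(165)    # peak at frame 165 -> 16.5 um
print(center_px - right_px)  # relative height difference: 1.0 um
```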

This is the basis of VSI imaging. The interferometer objective is slowly scanned over a programmable vertical distance of anywhere from tens to hundreds of microns, recording images of the surface about every 100 nm, and the correlogram of each pixel is analyzed to determine the height of each pixel in the surface image.
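The whole VSI procedure for one pixel can be sketched end to end with a toy visibility model: synthesize a correlogram for a known surface height, then recover that height from the frame of peak intensity. The wavelength, coherence length, and scan parameters below are illustrative assumptions, not values from the text.

```python
import math

def correlogram(z_positions, surface_height_nm, wavelength_nm=580.0,
                coherence_length_nm=1500.0):
    """Toy correlogram: fringes under a Gaussian coherence envelope,
    centred where the scan position matches the surface height."""
    out = []
    for z in z_positions:
        opd = 2.0 * (z - surface_height_nm)  # the gap is traversed twice
        visibility = math.exp(-(opd / coherence_length_nm) ** 2)
        out.append(0.5 * (1.0 + visibility *
                          math.cos(2 * math.pi * opd / wavelength_nm)))
    return out

step = 100.0  # nm of z travel per camera frame
frames = [i * step for i in range(400)]  # a 0 to ~40 um vertical scan
signal = correlogram(frames, surface_height_nm=17_500.0)

# VSI height estimate for this pixel: the frame with the peak intensity
peak_frame = max(range(len(signal)), key=signal.__getitem__)
print(peak_frame * step)  # recovered height in nm: 17500.0
```

Repeating this peak search for every pixel of the camera view yields the full height map of the surface.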

The width of the group of interference oscillations in the correlogram (segment B of Figure 11) is determined by the coherence length of the light source. Light sources with shorter coherence lengths produce fewer oscillations in the correlogram. This sharpens the peak in the correlogram, which gives better vertical resolution in VSI images. Unlike in PSI imaging, shorter coherence length light sources work better here.