Part I of this article considered the technology of measuring temperature through the infrared radiation of a heated metal mass, along with some of the misconceptions and inaccuracies that may be present in or about the process. This article explores ways to improve accuracy through proper calibration and to measure the temperature of highly reflective bodies.


Measuring Radiosity

IR temperature sensors are very common, but their misuse is even more common. Many users have the mistaken impression that the laser somehow makes the temperature measurement. It does not. The laser’s only purpose is to show roughly where the sensor is aimed; what the infrared sensor “sees” is not the small red dot.

With a thermocouple probe, we can easily see the size of the thermocouple element, but the measurement area of an IR sensor is harder to visualize. This is especially significant when trying to measure the temperature of small targets. Consider an IR temperature sensor with an 8:1 field of view (distance-to-spot ratio). This means that at a distance of 8 inches the sensor observes a 1-inch-diameter spot, and as you move farther away, the spot diameter grows at the same 8:1 ratio. The point of this exercise is to demonstrate that “infrared temperature sensors” do not actually measure temperature; they measure radiosity. Radiosity is the total radiation (emitted plus reflected) leaving a surface.
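
To make the geometry concrete, here is a minimal Python sketch of both ideas; the distances, emissivity and background values are illustrative assumptions, not the specifications of any particular sensor:

    # Spot size for a sensor with an 8:1 distance-to-spot ratio, and the
    # radiosity (emitted plus reflected) that the sensor actually measures.
    # Illustrative sketch; all numeric values are assumptions.
    SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

    def spot_diameter(distance, ds_ratio=8.0):
        """Diameter of the measured spot at a given distance (same units)."""
        return distance / ds_ratio

    def radiosity(eps, T_surface_K, background_irradiance):
        """Total radiation leaving an opaque surface: emitted plus reflected."""
        return eps * SIGMA * T_surface_K**4 + (1.0 - eps) * background_irradiance

    for d in (8, 16, 40):
        print(f"{d} in. away -> {spot_diameter(d):.1f}-in. spot")
    # At 40 in. the spot is 5 in. across, so a small target no longer fills
    # the field of view and the reading blends target and background.

    print(f"radiosity: {radiosity(0.30, 1553.0, 5000.0):.0f} W/m^2")
    # Only the emitted term carries the target's temperature information;
    # the reflected term is contamination from the surroundings.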


Improving Accuracy with IR Sensors

To get accurate temperature measurements with infrared temperature sensors, we need to understand the sources of uncertainty in those measurements.

In forging operations, steel billets are heated to temperatures generally ranging from 700-1300°C (1292-2372°F). Accurate temperature measurement is essential in order to form the parts and obtain the desired metallurgical properties. Let’s consider a specific example where the desired billet temperature is 1280°C (2336°F).

Many forging shops will install two-color IR temperature sensors to monitor the temperature of the billets as they emerge from the induction coil. These two-color sensors make measurements at two slightly different wavelengths, both very near 1 µm. The principle assumes that the emissivity is the same at both wavelengths, so it drops out of the ratio, making the measurement independent of emissivity.
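
To see why the emissivity cancels, consider a small sketch under the Wien approximation of Planck’s law. The two wavelengths (0.95 and 1.05 µm, both near 1 µm) and the 0.30 emissivity are illustrative assumptions, not the specifications of any particular instrument:

    import math

    C2 = 14388.0  # second radiation constant, µm·K

    def wien_radiance(lam_um, T_K, eps):
        """Spectral radiance (arbitrary scale) under the Wien approximation."""
        return eps * lam_um**-5 * math.exp(-C2 / (lam_um * T_K))

    def two_color_temp(L1, L2, lam1, lam2):
        """Temperature from the radiance ratio, assuming eps1 == eps2."""
        r = math.log(L1 / L2) - 5.0 * math.log(lam2 / lam1)
        return C2 * (1.0 / lam2 - 1.0 / lam1) / r

    lam1, lam2 = 0.95, 1.05   # illustrative wavelengths near 1 µm
    T_true = 1280.0 + 273.15  # billet temperature, K
    L1 = wien_radiance(lam1, T_true, eps=0.30)
    L2 = wien_radiance(lam2, T_true, eps=0.30)
    print(f"recovered: {two_color_temp(L1, L2, lam1, lam2) - 273.15:.0f} C")
    # The 0.30 emissivity cancels in the ratio, so ~1280 C comes back;
    # if the emissivities at the two wavelengths differ, the result shifts.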

This assumption is generally good for carbon steel but becomes less valid for other metals with lower emissivity values. A special characteristic of this principle is that the sensor is very good at measuring the hottest element it sees. This is a benefit in forging, since billets often develop a scale that makes direct sighting of the hot billet surface difficult in real time on a running induction line. At the same time, it adds an element of uncertainty, because billets sometimes have a small, sharp edge where they were cut. The induction coil can heat this sharp edge well above the temperature of the billet’s mass, producing a falsely high temperature reading. Real-life processes are not so simple.

With a single-color sensor, you start with the emissivity set at 1.0, the value for an ideal blackbody. When the target is not a blackbody, you reduce the emissivity setting to compensate; reducing the setting raises the apparent temperature reading.
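
The direction of that adjustment can be checked with a short sketch using the same Wien approximation; the true emissivity of 0.8 and the 1 µm wavelength below are assumed for illustration:

    import math

    C2 = 14388.0  # second radiation constant, µm·K

    def indicated_temp(T_true_K, eps_true, eps_set, lam_um):
        """Single-color reading (Wien approximation) when the target's true
        emissivity is eps_true but the instrument is set to eps_set."""
        inv_T = 1.0 / T_true_K - (lam_um / C2) * math.log(eps_true / eps_set)
        return 1.0 / inv_T

    T_true = 1280.0 + 273.15  # K; emissivity values below are illustrative
    for eps_set in (1.0, 0.8, 0.6):
        T_C = indicated_temp(T_true, eps_true=0.8, eps_set=eps_set, lam_um=1.0) - 273.15
        print(f"setting {eps_set:.1f} -> reading {T_C:.0f} C")
    # Readings: ~1244, 1280 and ~1330 C. Lowering the setting raises the
    # reading; it matches the true 1280 C when the setting equals the true 0.8.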

When using two-color sensors there is no emissivity adjustment. Instead, there is a ratio (or slope) adjustment, because the target emissivity will not be identical at both wavelengths for all metals. Unlike the single-color emissivity setting, this ratio adjustment can be either increased or decreased, meaning the temperature reading can move either up or down from the 1.0 ideal blackbody setting. So, while two-color sensors reduce some sources of uncertainty, they introduce others.
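
How manufacturers implement the ratio adjustment varies, but a slope factor multiplying the radiance ratio captures the idea. The following extends the earlier two-color sketch and is an illustrative model only, with assumed emissivities of 0.35 and 0.30 at the two wavelengths:

    import math

    C2 = 14388.0  # second radiation constant, µm·K

    def two_color_temp(L1, L2, lam1, lam2, slope=1.0):
        """Two-color reading; the slope factor absorbs eps1/eps2."""
        r = math.log(slope * L1 / L2) - 5.0 * math.log(lam2 / lam1)
        return C2 * (1.0 / lam2 - 1.0 / lam1) / r

    lam1, lam2 = 0.95, 1.05
    T_true = 1280.0 + 273.15
    # A target whose emissivity differs slightly at the two wavelengths:
    L1 = 0.35 * lam1**-5 * math.exp(-C2 / (lam1 * T_true))
    L2 = 0.30 * lam2**-5 * math.exp(-C2 / (lam2 * T_true))
    for s in (1.0, 0.30 / 0.35):
        print(f"slope {s:.3f} -> {two_color_temp(L1, L2, lam1, lam2, s) - 273.15:.0f} C")
    # At slope 1.0 the reading is far too high; setting the slope to the
    # actual eps2/eps1 pulls it back down to the true 1280 C.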


Sensor Calibration

In both cases, we assume we are working with sensors that are properly calibrated. The question then becomes: “How do you calibrate the temperature readings of the product or process?”

We begin this process by selecting the shortest-wavelength single-color instrument available that still covers the critical temperature. In this example, the critical temperature is 1280°C, so I have selected a special 0.55 µm short-wavelength IR sensor. Another property of the physics of IR sensors is that the error due to an emissivity error decreases as you move from longer to shorter wavelengths.

With a 1 µm sensor, the error due to a 1% error in emissivity is about 1.6°C, so if the actual emissivity is 0.3 and we use a setting of 0.4, we incur an error of about 16°C. By choosing the 0.55 µm sensor instead, the temperature error for each 1% error in emissivity is about 0.8°C, and the same 0.3-versus-0.4 mismatch costs about 8°C. The error is cut roughly in half.
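
These sensitivities can be roughly checked from the Wien approximation, where the reading error scales with wavelength as error ≈ (λ × T² / c2) × (Δε/ε); the figures this produces are in line with those above:

    C2 = 14388.0  # second radiation constant, µm·K

    def error_per_1pct_emissivity(lam_um, T_K):
        """Approximate reading error (K) per 1% relative emissivity error."""
        return lam_um * T_K**2 / C2 * 0.01

    T = 1280.0 + 273.15  # critical billet temperature, K
    for lam in (1.0, 0.55):
        print(f"{lam} um: ~{error_per_1pct_emissivity(lam, T):.1f} K per 1% error")
    # ~1.7 K at 1 µm versus ~0.9 K at 0.55 µm: nearly halving the wavelength
    # roughly halves the emissivity-driven error, matching the figures above.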

By choosing the single-color 0.55 µm instrument and setting the emissivity at 1.0, the true temperature can’t be lower than the measured value. Remember, with the emissivity set at 1.0, reducing the emissivity setting can only increase the temperature reading. The actual temperature, however, could be higher; the measured value is therefore a lower bound.


Special Case for Highly Reflective Surfaces

Next, we use a special fiber-optic gold-cup infrared sensor. This sensor is designed specifically for highly reflective metal surfaces. The gold cup acts like a blackbody cavity, producing an effective emissivity of close to 1 and blocking out all stray background radiosity. Having established a lower boundary with the 0.55 µm sensor, we now use the gold-cup instrument to quantify the upper boundary. This process allows the product or process temperatures to be accurately quantified. With these parameters defined, the ratio adjustment on the two-color sensors can be calibrated to the actual product or process temperature.
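
A simple multiple-reflection model illustrates why the gold cup behaves almost like a blackbody: radiation the target fails to emit is reflected back to it by the cup, round after round, forming a geometric series. The 0.30 emissivity and 0.98 cup reflectance below are assumed values, not specifications of the actual instrument:

    # First-order model of the cup's emissivity enhancement; the numeric
    # values are illustrative assumptions.
    def effective_emissivity(eps_target, cup_reflectance):
        """Geometric-series result of repeated reflections under the cup."""
        return eps_target / (1.0 - cup_reflectance * (1.0 - eps_target))

    print(f"{effective_emissivity(0.30, 0.98):.3f}")
    # A surface of emissivity 0.30 behaves like one of ~0.955 under a 98%-
    # reflective cup, and the cup also shields stray background radiosity.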

The 0.55 µm instrument produces readings of approximately 2305°F ±15°F, and the gold-cup instrument produces readings of approximately 2323°F ±15°F. The spread reflects variation among the hot billets: as each billet emerges from the induction coil, it immediately begins to oxidize and form surface scale, which complicates the temperature measurements. With the 0.55 µm instrument, you scan over the surface and capture the peak temperature observation, sampling enough billets that the average is statistically valid. We then collect a parallel data set with the gold-cup instrument on the same sample group of billets; its readings vary for the same reason, the developing scale. Data from the gold-cup instrument validates the measurement and substantially reduces the uncertainty, so we can be highly confident that this data closely approximates the actual billet temperatures.
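
Putting the two data sets together brackets the true billet temperature between the single-color lower bound and the gold-cup upper bound. The midpoint-and-half-width summary below is an illustrative way to state the result, not a procedure taken from the calibration itself:

    # Values from the text; the midpoint summary is illustrative only.
    low, high = 2305.0, 2323.0  # deg F: 0.55 µm lower bound, gold-cup upper bound
    estimate = (low + high) / 2.0
    half_width = (high - low) / 2.0
    print(f"billet temperature ~ {estimate:.0f} F +/- {half_width:.0f} F")
    # ~2314 F +/- 9 F from the bracket, before the +/-15 F billet-to-billet
    # scatter caused by the developing scale is considered.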


Conclusion

For a given set of specific conditions, infrared temperature sensors work reliably and produce repeatable temperature measurements – but not necessarily accurate ones. It is not enough to simply ensure that IR temperature sensors are calibrated to national standards; accuracy comes from properly calibrating them to the specific materials and applications. When properly applied in this way, infrared temperature sensors provide fast, repeatable and accurate temperature measurements. In forging, the desired metallurgical properties depend on the billets being at the right temperature when they are forged.


Author L. Terry Clausing is President of Drysdale & Associates, Inc., Cincinnati, Ohio. He serves on the ASTM E07 Committee on Nondestructive Testing, chairs E07.10 on Specialized NDT Methods, and serves on ASME BPV Section V on Personnel Qualification and Certification. He may be reached at 513-739-2317 or terry@drysdaleassoc.com.