
Mark Howard, general manager at Zettlex, explains some of the terminology and common misconceptions surrounding the selection factors for position transducers.

The terminology and fairly esoteric technical concepts applied to instrumentation can be confusing.

Nevertheless, they are crucial in selecting the right measuring instruments for an application – especially for position and speed transducers.

Get the selection wrong and you could end up paying way over the odds for over-specified transducers.

Conversely, your product or control system may lack critical performance if the position or speed sensor does not meet the specification.

First, a few definitions: an instrument’s accuracy is a measure of how truthful its output is; its resolution is the smallest increment or decrement in position that it can measure; its precision is its degree of reproducibility; and its linearity is a measure of the deviation between the transducer’s output and the actual displacement being measured.

Most engineers get confused about the differences between precision and accuracy.

Using the analogy of an arrow fired at a target, accuracy describes the closeness of an arrow to the bullseye.

If many arrows are fired, precision equates to the size of the arrow cluster.

If all arrows are grouped together, the cluster is considered precise.
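The distinction can be put in numbers. In this minimal sketch, a hypothetical gauge reads a known 10.000mm reference repeatedly: the readings cluster tightly (high precision) but sit well off the true value (poor accuracy):

```python
import statistics

# Hypothetical readings (mm) from a gauge measuring a known 10.000 mm reference.
readings = [10.212, 10.208, 10.215, 10.210, 10.211]
true_value = 10.000

mean_reading = statistics.mean(readings)

# Accuracy: closeness of the readings to the bullseye (the true value).
accuracy_error = mean_reading - true_value

# Precision: tightness of the arrow cluster, regardless of the bullseye.
precision = statistics.stdev(readings)

print(f"mean error: {accuracy_error:+.4f} mm")  # large offset -> poor accuracy
print(f"spread:     {precision:.4f} mm")        # small spread -> high precision
```

A precise-but-inaccurate instrument like this one can often still be useful, because a constant offset is easy to calibrate out; the reverse case is much harder to fix.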

A perfectly linear measuring device is also perfectly accurate.

So, specify very accurate, very precise measuring instruments every time and you’ll be OK.

Unfortunately, there are some problems with such an approach.

First, high-accuracy, high-precision instrumentation is always expensive.

Second, high-accuracy, high-precision instrumentation may require careful installation, which may not be possible because of vibration or thermal expansion and contraction.

Third, certain types of high-accuracy, high-precision instrumentation are also delicate and will suffer malfunction or failure if there are changes in environmental conditions – most notably temperature, dirt, humidity and condensation.

The optimal strategy is to specify what is required – nothing more, nothing less.

In a displacement transducer in an industrial flow meter, for example, linearity will not be a key requirement because it is likely that the fluid’s flow characteristics will be non-linear.

More likely, repeatability and stability over varying environmental conditions are the key requirements.

In a CNC machine tool, it is likely that accuracy and precision will be key requirements.

Therefore, the key requirement is a displacement measuring instrument that offers high accuracy (linearity), resolution and repeatability, even in dirty, wet environments over long periods without maintenance.

A good tip is always to read the small print of any measuring instrument’s specification – especially about how the claimed accuracy and precision varies with environmental effects, age or installation tolerances.

Another useful tip is to find out exactly how an instrument’s linearity varies.

If this variation is monotonic or slowly varying, the non-linearity could be easily calibrated out using a few reference points.

For example, for a gap measuring device this could be achieved using some slip gauges.

In the example below, a fairly non-linear transducer is calibrated into a highly linear (accurate) device with a relatively small number of reference points.

However, in this second example, a rapidly varying device is calibrated with 10 points and its linearity hardly changes.

It might take more than 1,000 points for such a rapidly varying measurement characteristic to be linearised.

Such a process is unlikely to be practical with slip gauges but it might be practical to compare the readings in a lookup table against a higher performance reference device such as a laser interferometer.
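Whether the reference comes from slip gauges or a laser interferometer, the correction amounts to a lookup table with linear interpolation between reference points. The sketch below assumes a hypothetical gap sensor with a smooth, monotonic non-linearity; the raw readings and slip-gauge sizes are illustrative:

```python
def calibrate(raw, ref_raw, ref_true):
    """Correct a raw reading using a lookup table of reference points,
    interpolating linearly between the two points that bracket it."""
    for i in range(len(ref_raw) - 1):
        if ref_raw[i] <= raw <= ref_raw[i + 1]:
            t = (raw - ref_raw[i]) / (ref_raw[i + 1] - ref_raw[i])
            return ref_true[i] + t * (ref_true[i + 1] - ref_true[i])
    raise ValueError("raw reading outside calibrated range")

# Hypothetical calibration: raw sensor output recorded at gaps set
# with slip gauges of known size (all values in mm).
ref_raw  = [0.00, 0.90, 1.75, 2.55, 3.30]   # what the sensor reported
ref_true = [0.00, 1.00, 2.00, 3.00, 4.00]   # the slip-gauge sizes

print(calibrate(1.30, ref_raw, ref_true))   # corrected gap reading
```

This works only because the non-linearity is slowly varying between reference points; for a rapidly varying characteristic, as noted above, the table would need orders of magnitude more entries.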

Optical encoders work by shining a light source onto or through an optical element – usually a glass disk.

The light is either blocked or passes through the disk’s gratings and a signal, analogous to position, is generated.

The glass disks have tiny features that allow manufacturers to claim high precision.

What is often not explicit is what happens if these tiny features are obscured by dust, dirt and grease.

In reality, even very small amounts of foreign matter can cause mis-reads.

There is seldom any warning of failure – the device simply stops working altogether.

What is less well known is the issue of accuracy in optical encoders and optical encoder kits.

Consider an optical device using a 1in nominal disk with a resolution of 18 bits (256k points).

Typically, the claimed accuracy for such a device might be +/-10 arc-seconds.

However, what should be in big bold print is that the stated accuracy assumes that the disk rotates perfectly relative to the read head and that temperature is constant.

If we consider a more realistic example, the disk is mounted slightly eccentrically by 0.001in (0.025mm).

Eccentricity comes from several sources, including the following: concentricity of the glass disk on its hub; concentricity of the hub’s through bore relative to the optical disk; perpendicularity of the hub relative to the plane of the optical disk; parallelism of the optical disk face with the plane of the read head; concentricity of the shaft on which the hub is mounted; clearances in the bearings and bearing mounts, which support the main shaft; imperfect alignment of the bearings; roundness of the shaft and roundness of the hub’s through bore; locating method (typically a grub-screw will pull the hub to one side); displacements due to stresses or strain from forces on the shaft’s bearings; and thermal effects.

A perfectly mounted optical disk requires such fine engineering that cost becomes prohibitive.

In reality, there is a measurement error because the optical disk is not where the read head thinks it is.

If we consider a mounting error of 0.001in, then the measurement error is equivalent to the angle subtended by 0.001in at the optical track radius.

Let us assume that the tracks are at a radius of 0.5in, which equates to an error of 2 milliradians, or about 412 arc-seconds.

In other words, a device with a specified accuracy of 10 arc-seconds is more than 40 times less accurate than its data sheet claims.

If you get an optical disk to position accurately to within 0.001in you are doing really well.

Realistically, you’re more likely to be in the range 2-10 thousandths of an inch, so the actual accuracy will be 80-400 times worse than you might have originally calculated.
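The arithmetic above is easy to check: the measurement error is the angle subtended by the eccentricity at the track radius (small-angle approximation). The figures below are those from the example:

```python
import math

eccentricity_in = 0.001   # mounting eccentricity: 0.001 in (0.025 mm)
track_radius_in = 0.5     # optical tracks at 0.5 in radius

# Angle subtended by the eccentricity at the track radius.
error_rad = eccentricity_in / track_radius_in
error_arcsec = math.degrees(error_rad) * 3600

# For comparison: one count of an 18-bit (262,144-point) encoder.
lsb_arcsec = 360 * 3600 / 2**18

print(f"{error_rad * 1000:.1f} mrad = {error_arcsec:.0f} arc-sec")
print(f"one 18-bit count = {lsb_arcsec:.2f} arc-sec")
```

The eccentricity error dwarfs both the claimed +/-10 arc-second accuracy and the roughly 5 arc-second resolution of a single 18-bit count.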

The measurement principle of a resolver or a new generation inductive device is completely different.

Measurement is based on the mutual inductance between the rotor (the disk) and the stator (reader).

Rather than calculating position from readings taken at a point, measurements are generated over the full face of the stator and rotor.

Consequently, discrepancies caused by non-concentricity in one part of the device are negated by opposing effects at the opposite part of the device.
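This cancellation can be illustrated numerically. Eccentricity produces a once-per-revolution sinusoidal error at any single read point; averaging readings taken around the full circumference drives it towards zero. The sinusoidal error model and its amplitude are illustrative assumptions, not a model of any specific device:

```python
import math

def point_error(theta, amplitude_mrad=2.0):
    """Assumed once-per-revolution error (mrad) seen at a single
    read point located at angle theta around the device."""
    return amplitude_mrad * math.sin(theta)

n = 360  # sample points spread evenly around the full face
errors = [point_error(2 * math.pi * k / n) for k in range(n)]

single_point = max(abs(e) for e in errors)  # worst case for one read head
full_face = abs(sum(errors) / n)            # average over the whole face

print(f"single point: {single_point:.3f} mrad")
print(f"full face:    {full_face:.2e} mrad")
```

The error at the worst single point is the full 2 mrad, while the full-face average is negligible: the positive error on one side of the device is cancelled by the negative error on the opposite side.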

The headline figures of resolution and accuracy are often not as impressive as those for optical encoders.

However, what is important here is that this measurement performance is maintained across a range of non-ideal conditions.

The quoted measurement performance of some new-generation inductive devices is not based on perfect alignment of rotor and stator; rather, realistically achievable tolerances (typically +/-0.2mm) are accounted for in any quoted resolutions, repeatabilities and accuracies.

Furthermore, the stated performance of inductive devices is not subject to variation due to foreign matter, humidity, lifetime, bearing wear or vibration.
