A method for assessing the magnitude uncertainties associated with network analyser measurements
Without correction, errors present within an automatic network analyser (ANA) will significantly distort the measured response, resulting in a misleading representation of the parameter being measured. By calibration (i.e. applying error correction), these effects can be greatly reduced, giving a large overall improvement in accuracy. Calibration uses known impedance standards that are defined electrically or physically; these standards are measured by the ANA and their responses applied to the appropriate error correction model. After calibration the device under test (DUT) is measured. The error model of the ANA, as defined by the standards, mathematically relates the raw measurements of the DUT to produce a corrected measurement. The accuracy to which the standards are known dictates how well systematic errors can be eliminated, so some residual systematic errors will always remain. The author discusses the effects of these residual errors upon the measurement and demonstrates how well they can be determined.
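To illustrate the calibration workflow the abstract describes (measure known standards, derive the error terms, then correct the raw DUT data), the following is a minimal sketch of the standard three-term one-port error model (directivity, source match, reflection tracking) solved from an open/short/load calibration. It is not the author's specific method; the function names and the NumPy implementation are illustrative assumptions.

```python
import numpy as np

def solve_one_port_error_terms(gamma_std, gamma_meas):
    """Illustrative sketch: solve the three-term one-port error model
    (directivity e_d, source match e_s, reflection tracking e_r) from
    three standards with known reflection coefficients.

    gamma_std  : known (defined) reflection coefficients of the standards
    gamma_meas : raw, uncorrected ANA measurements of the same standards
    """
    gamma_std = np.asarray(gamma_std, dtype=complex)
    gamma_meas = np.asarray(gamma_meas, dtype=complex)

    # Raw measurement model: M = e_d + e_r*G / (1 - e_s*G)
    # Linearised form:       M = e_d + (M*G)*e_s - G*(e_d*e_s - e_r)
    A = np.column_stack([
        np.ones(3, dtype=complex),   # coefficient of e_d
        gamma_meas * gamma_std,      # coefficient of e_s
        -gamma_std,                  # coefficient of (e_d*e_s - e_r)
    ])
    e_d, e_s, delta = np.linalg.solve(A, gamma_meas)
    e_r = e_d * e_s - delta          # recover reflection tracking
    return e_d, e_s, e_r

def correct_reflection(gamma_meas_dut, e_d, e_s, e_r):
    """Apply the error terms to a raw DUT measurement to obtain the
    corrected reflection coefficient."""
    return (gamma_meas_dut - e_d) / (e_r + e_s * (gamma_meas_dut - e_d))
```

If the standards are imperfectly known, the derived error terms are slightly wrong and a residual systematic error remains after correction; propagating the stated uncertainties of the standards through `solve_one_port_error_terms` and `correct_reflection` is one way to bound that residual contribution to the measurement uncertainty.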