Table of contents

Volume 13

Number 4, April 2002

PAPERS

421

A computational technique is described that can perform the simultaneous evaluation of a measurement function and the associated combined standard uncertainty. The uncertainty equation is automatically derived from the measurement function and need not be provided separately. The technique is based on automatic differentiation. It is simple, efficient, accurate and easy to implement in software.
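The core idea can be sketched with forward-mode automatic differentiation using dual numbers. This is a minimal hypothetical illustration, not the authors' implementation: the `Dual` class, the power measurement function P = V·I and its input uncertainties are all assumptions.

```python
import math

class Dual:
    """Minimal dual number (value, first derivative) for forward-mode AD."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = self._lift(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def combined_uncertainty(f, values, uncertainties):
    """Evaluate f and its GUM combined standard uncertainty
    u_c = sqrt(sum_i (df/dx_i)^2 u_i^2), where each sensitivity
    coefficient df/dx_i comes from one forward-mode AD pass."""
    y = f(*values)
    var = 0.0
    for i, u in enumerate(uncertainties):
        args = [Dual(v, 1.0 if j == i else 0.0) for j, v in enumerate(values)]
        var += (f(*args).dot * u) ** 2
    return y, math.sqrt(var)

# Hypothetical measurement function: electrical power P = V * I
P, u_c = combined_uncertainty(lambda V, I: V * I, [10.0, 2.0], [0.1, 0.02])
# u_c = sqrt((I*u_V)^2 + (V*u_I)^2) = sqrt(0.04 + 0.04)
```

The point the abstract makes is visible here: the uncertainty equation is never written out by hand — it emerges from the derivative pass applied to the measurement function itself.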

428

An apparatus is described that facilitates the first detailed photographic study of the vortex ring structure created by a bursting bubble resting on a pool of liquid. The discovery of a novel method of bubble positioning was essential to success. A shallow liquid pool is formed by the convex meniscus on the flat top of a vertically oriented cylinder, on which a single bubble will centre itself. An aerosol is used to inflate the bubble and act as a flow-visualization medium. A spark ruptures the bubble and triggers the photographic process, generating a time-delayed sequence of photographs to study the retraction of the bubble film and the evolution and speed of the resulting vortex ring structure. The in situ surface tension of the centred bubble film can be calculated using Laplace's equation and the measured bubble pressure.

438

In this paper we present the visualization of the out-of-plane displacement fields produced by Rayleigh and Lamb waves on aluminium surfaces, using a double-pulse TV holography technique.

This method has several attractive characteristics as a means of making whole-field remote measurements of instantaneous acoustic displacements, with good immunity to environmental perturbations.

We also show examples where different surface and subsurface flaws have been detected, demonstrating the great potential of this technique for non-destructive testing in industrial applications.

445

Optical particle counters are used to monitor particulate contamination in gases. The counting and sizing of particles are affected by various kinds of interference encountered during measurement, two sources being background light in the measurement chamber and electrical noise. Such interference causes measurement errors that stem from the measurement equipment itself rather than from the measured object or the sampling. To evaluate the measurement error of the particle counter itself, the interference in a typical optical particle counter is identified and its effects on the measured results are discussed qualitatively. Reducing the measurement error requires quantitative determination of the interference; only then can one judge which kind of interference has the greater influence on the measurement error. A method has been developed that makes it possible to determine the individual components of interference. This paper presents an experimental method that allows the background light to be separated into components arising from scattering by the gas molecules and by the chamber surfaces. The results of these measurements can be used to optimize existing particle counters as well as to guide new equipment design.

451

Accurate structural models are key to the optimization of the vibro-acoustic behaviour of panel-like structures. However, at the frequencies of relevance to the acoustic problem, the structural modes are very complex, requiring high-spatial-resolution measurements. The present paper discusses a vibration testing system based on pulsed-laser holographic electronic speckle pattern interferometry (ESPI) measurements. It is a characteristic of the method that time-triggered (and not time-averaged) vibration images are obtained. Its integration into a practicable modal testing and analysis procedure is reviewed. The accumulation of results at multiple excitation frequencies allows one to build up frequency response functions. A novel parameter extraction approach using spline-based data reduction and maximum-likelihood parameter estimation was developed. Specific extensions have been added in view of the industrial application of the approach. These include the integration of geometry and response information, the integration of multiple views into one single model, the integration with finite-element model data and the prior identification of the critical panels and critical modes. A global procedure was hence established. The approach has been applied to several industrial case studies, including car panels, the firewall of a monovolume car, a full vehicle, panels of a light truck and a household product. The research was conducted in the context of the EUREKA project HOLOMODAL and the Brite-Euram project SALOME.

464

Degenerate four-wave mixing (DFWM) was successfully used to monitor a wide range of smoke concentrations (0.1-10 mg m⁻³) in sample cells. To the authors' knowledge, this is the first measurement of smoke by DFWM. The DFWM method is very sensitive, measuring down to ~0.1 ppb soot volume fraction. To verify the visible laser DFWM system, NO2 concentrations from 2 to 200 ppm were measured with a sensitivity of 2 ppm of NO2. Both smoke and NO2 species were resonantly pumped at 500 nm by an excimer-pumped dye laser (~2 mJ/pulse). The DFWM signal from both NO2 and smoke showed a squared dependence on concentration. Single-pulse measurements of NO2 and smoke in sample cells indicate that temporally and spatially resolved in situ DFWM measurements are feasible for engine exhausts.

471

We have developed and demonstrated a fibre Bragg grating strain gauge system for high strain applications using spatial and wavelength multiplexing. The system takes advantage of a priori knowledge about the strain levels and relative phases between the different sensor signals to accommodate many sensors per fibre in an application where we expect to measure several thousand microstrains. In combination with scanning Fabry-Perot filter interrogation, this principle has been utilized in the design of a system for health monitoring of a naval surface effect ship and the system has been used in long-term load monitoring on the vessel.

477

An optical fibre reflective sensor was used to analyse the vibrations of a turbo wheel rotating at up to 20 400 rpm. The measured signal required correction because of the natural unevenness of the turbo wheel and because of the variable deflection. Since the rotation distorted the signal, a special method was used to extract the frequencies of the vibrations from the power spectra. The analysis showed increased intensity of the first three natural frequencies with increasing speed of rotation. The experimental results agree very well with those obtained by numerical computation.

483

Two kinds of threshold for the acoustic pressure gradient of a schlieren system for the observation of acoustic waves in transparent media are studied experimentally in this paper. The first kind of threshold is related to a short-duration ultrasonic pulse. In this case the acoustic pressure gradient must be large enough to make the deflected light pass the diaphragm (optical filter) and reach the screen. The second kind of threshold is related to the physical limitations of diffraction theory (where the continuous acoustic field is considered as an optical phase grating) and the diffracted light of every non-zero order has a related pressure gradient threshold. The existence of the two kinds of threshold is examined using well-designed experiments and absolute measurements of the acoustic pressure.

488

An InGaN laser diode operating at 398 nm, with its output facet coated with a multi-layer anti-reflection coating, has been operated within Littrow external cavities. A maximum manual tuning range of 6.3 nm has been achieved, together with a continuous tuning range of 0.8 GHz under single-piezo control. Stable single-longitudinal-mode operation has been obtained with a linewidth of less than 11 MHz. The effects of the orientation of the diode output facet within the Littrow cavity on the total tuning range are discussed.

494

This paper describes an instrument for measuring curvatures by a `null' method. It was originally designed to measure the momentum of tracks in a cloud or bubble chamber operating in a magnetic field. The curvature introduced into an image by projection through a prism is used to straighten the track image, straightness being judged by sighting along the image at a near grazing angle. The curvature required to achieve this straightening provides a measure of the curvature of the image.

The original concept was introduced by Blackett in about 1936 and was widely regarded as a very elegant technique, capable of providing measurements at least as accurate as those obtained by more traditional approaches but in a much shorter time. This paper briefly reviews the work of Blackett (which received only the most cursory description at the time). It then continues to describe a development of Blackett's idea intended to stretch its capabilities, particularly towards higher curvatures. This work was carried out in about 1961 but was not described in the literature at the time.

At present the work is perhaps of mainly historic interest, but it does seem likely that other uses for the technique may exist now or in the future.

503

A method is proposed for measuring the complex permittivity of thin materials. The method allows us to simultaneously measure the permittivity and thickness of the material, allowing us in particular to obtain a meaningful estimate of the average thickness of natural materials. The measuring system consists of a section of rectangular waveguide, operating in the X band and modified to accommodate the material under measurement, and two standard impedance loadings (short-circuit and matched load). Two reflection coefficient values, measured by a vector network analyser at the input of the waveguide sample-holder, allow the determination of the parameters of interest by means of a numerical procedure employing a genetic algorithm. The method has been tested on certified dielectric materials and preliminary results are presented concerning paper and thin cardboard.
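A genetic-algorithm inversion of this kind can be sketched in miniature. The objective function below is a hypothetical stand-in for the mismatch between measured and modelled reflection coefficients for a candidate (permittivity, thickness) pair; the population size, operators and bounds are illustrative assumptions, not the authors' settings.

```python
import random

random.seed(0)

# Hypothetical stand-in objective with its minimum at eps_r = 2.2, d = 0.5;
# in the paper this would compare measured and modelled reflection
# coefficients for the waveguide sample-holder.
def mismatch(eps_r, d):
    return (eps_r - 2.2) ** 2 + (d - 0.5) ** 2

def genetic_search(obj, bounds, pop=40, gens=80, mut=0.05):
    """Minimal real-coded GA: tournament selection, midpoint crossover,
    Gaussian mutation and elitist survival."""
    def tournament(P):
        return min(random.sample(P, 3), key=lambda x: obj(*x))
    P = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        children = []
        for _ in range(pop):
            a, b = tournament(P), tournament(P)
            c = [(x + y) / 2 + random.gauss(0, mut * (hi - lo))
                 for x, y, (lo, hi) in zip(a, b, bounds)]
            children.append([min(max(v, lo), hi) for v, (lo, hi) in zip(c, bounds)])
        P = sorted(P + children, key=lambda x: obj(*x))[:pop]  # keep the best
    return P[0]

best = genetic_search(mismatch, [(1.0, 10.0), (0.01, 2.0)])
```

A derivative-free search of this sort is attractive here because the forward model (waveguide reflection as a function of permittivity and thickness) is easy to evaluate but awkward to differentiate.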

510

When a thin elliptically shaped coil is placed above a flat plate with a coat of metal, the magnetic field in the vicinity of the coil is altered by eddy currents in the plate and the coat. The thickness of the coat influences the magnetic field and can be determined by measuring the coil impedance. An electromagnetic model utilizing an elliptic cylinder coordinate system, accounting for the coil impedance at different values of the numerical eccentricity and the coating thickness, is described. The model is based on a potential formulation of the problem, from which the magnetic vector potential, and hence the impedance, is evaluated. The derivation utilizes a proper choice of the transversal field, giving a scalar Helmholtz equation in which the solution to the boundary value problem is separated. The resulting integral equation is expressed in closed form in terms of Mathieu functions. Numerical calculations and experimental measurements show how the model can be used to model a steel surface with a coat of copper, to find the expected impedance as a function of the coating thickness.

520

The measurement of the temperature behaviour of initial magnetic susceptibility is a powerful method for the thermomagnetic analysis of ferromagnetic materials. However, its application to nanostructured materials with technical relevance, particularly in the case of metastable systems, is made difficult by several conflicting conditions: the necessity to employ low magnetic fields, the required high sensitivity and the need for rapid scans in the high-temperature range. The vibrating wire susceptometer, an instrument belonging to the class of alternating gradient force magnetometers, has, in theory, the right characteristics to make such measurements. However, management of the instrument when carrying out rapid scans is intrinsically complex and requires a special electronic controller described here in detail. A combination of two phase-locked loop blocks is needed to provide the correct phase shift to ensure the locking of the resonance frequency while the instrument is working. A new measurement procedure that keeps the oscillation amplitude constant has also been implemented and it has proved to be very useful for rapid overview of the sample magnetic properties. The limitations of the controller performance due to the presence of noise are discussed. Extensive test measurements were carried out and analysed.

529

Based on steepest descent theory (Edmonson W, Srinivasan K, Wang C and Principe J 1997 IEEE Trans. Circuits Syst. 45 379-84; Neil E and Mian O N 1992 IEEE Trans. Neural Netw. 3 308-14) and the original response curve of a current probe used for power system electromagnetic compatibility measurement, an accurate analogue model has been built for the probe, with a maximum error of 5.2%. Before application the analogue model needs to be discretized in the z domain; however, the digital model gives a steeper frequency response than the analogue one. To compensate for this effect, another digital band-pass filter was attached to the above model, and its parameters were obtained by computer simulation. The compensated digital model presented a maximum error of 5.8% compared with the probe's response curve. The digital modelling and compensating strategy can feasibly expand the frequency band over which the probe is useful for measurement applications.
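The steepening effect of z-domain discretization can be reproduced with a toy first-order analogue model discretized by the bilinear transform. The corner and sampling frequencies below are illustrative assumptions, not the probe's actual parameters.

```python
import cmath, math

fs = 1000.0   # sampling rate in Hz (illustrative)
fc = 100.0    # analogue corner frequency in Hz (illustrative)
wc = 2 * math.pi * fc

def H_analog(f):
    """First-order low-pass H(s) = 1/(1 + s/wc), evaluated at s = j*2*pi*f."""
    return 1 / (1 + 1j * 2 * math.pi * f / wc)

def H_digital(f):
    """Same model after the bilinear transform s -> (2/T)(z-1)/(z+1)."""
    T = 1 / fs
    z = cmath.exp(1j * 2 * math.pi * f * T)
    s = (2 / T) * (z - 1) / (z + 1)
    return 1 / (1 + s / wc)

f_test = 400.0  # near the Nyquist frequency, where frequency warping is strongest
gain_a = abs(H_analog(f_test))
gain_d = abs(H_digital(f_test))
# gain_d < gain_a: the z-domain model rolls off faster than the analogue one,
# which is why a compensating digital band-pass stage is attached in the paper.
```

The gap between the two gains grows toward the Nyquist frequency, matching the abstract's observation that the digital model is "steeper" and needs compensation there.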

533

A bulk interferometric configuration for remote sensing of electrical current in high voltage environments, based on the Faraday effect, is described. The combination of the reciprocal properties of a Sagnac loop with a Mach-Zehnder processing interferometer in the same sensing head is used to implement serrodyne processing for current sensing. A theoretical analysis based on the Jones matrix formalism is presented. Experimental results that validate the concept are obtained, showing linearity up to 1800 A rms and waveform reproduction at 50 Hz. The possibility of using the proposed interferometric concept to simultaneously fulfil the requirements associated with metering and relaying applications is also addressed.

539

Ultrafast plasma closing switches rely on sub-nanosecond electrical breakdown of the insulating gas. Until recently, little information was available on gas breakdown occurring within this timescale, because of the difficulties in designing an experimental system for such a study. Recently published papers have reported on the results of studies carried out using two devices designed specifically for the investigation of fast (sub-nanosecond) electrical breakdown processes. The devices are essentially modified transmission line plasma closing switches, and in this paper we describe their structure and operation. Because electromagnetic wave behaviour plays a significant role in sub-nanosecond switching, especially reflections from impedance mismatches, the design of the devices is based on transmission line concepts, rather than those of lumped parameters. One of the switches has a conical transmission line topology and is designed for the study of fast switch closure at insulating gas pressures less than 0.6 MPa. The second has a hybrid radial transmission line/conical transmission line topology and is designed for the study of fast switch closure at pressures up to 10 MPa. The paper also includes details of the D-dot monitors used to investigate sub-nanosecond processes in the two transmission line plasma devices.

547

In this paper, we explain how we have improved the performance of a smart eddy current sensor, based on the induction balance principle, by using blind source separation (BSS) methods. The initial sensor was dedicated to the recognition of metal tags buried in the ground and has been described in a previous article (Belloir F, Huez R and Billat A 2000 Meas. Sci. Technol. 11 367-74). The problem with this kind of sensor is that it is also sensitive to conductive perturbations located close to the target to be recognized, so the sensor response can be disturbed. The aim of this paper is to show that, starting from some hardware modifications made jointly with the use of BSS algorithms, one can restore the tag response perfectly. We first describe the hardware modifications that we had to make to the sensor in order to use the BSS methods. Then we check theoretically and experimentally whether the application conditions required by BSS are fulfilled. We prove that with these improvements we have realized a new device which permits one to circumvent all kinds of perturbations. We end by presenting various impressive results obtained using the modified sensor.

556

Surface topography plays a significant role in functional performance situations such as friction, lubrication and wear. Surfaces can be characterized by two-dimensional parameters, but researchers are now investigating areal parameters, which are better descriptors of the real nature of surfaces in contact. This paper proposes new spacing parameters that describe closed-peak contact regions and closed-valley lubrication regions. The parameters are shown to have the same order of statistical stability as the amplitude roughness parameters, and preliminary results on correlation with functional performance are illustrated.

565

The objective of this paper is to elaborate new elements of metrological analysis in the field of testing, such as measurement uncertainty and traceability. Until now, the European standard EN 45001 did not explicitly require uncertainty specifications in the area of testing to the same extent as the newly implemented standard ISO/IEC 17025. Therefore several additional steps should be performed in specifying measurement and testing results, especially concerning the performance of testing and other conformity assessment activities.

The paper focuses on the uncertainty analysis of a test procedure for the electrical safety of household appliances according to the European standard EN 60335-1, and the traceability of the measurements is presented. The example was chosen as a useful and very illustrative case study in which many dilemmas can be highlighted. This particular case is relatively straightforward to evaluate because traceability of electrical and thermal quantities is comparatively easy to establish. It is important that the measurement result and its uncertainty are correctly evaluated so that, on that basis, the right conclusion of conformity or nonconformity with specifications is drawn. Therefore, knowledge and awareness of all facts about calibration, testing requirements and traceability are essential.

573

The performance of a two-frame particle tracking velocimetry (PTV) system was enhanced with an adaptive hybrid scheme. The original two-frame PTV method, based on the match probability concept, employs global match parameters for the entire flow field. This does not fully consider the detailed local velocity changes of the flow, and reduces the recovery rate of the velocity vectors, while increasing the number of erroneous vectors in regions of high velocity gradients. In the new hybrid PTV method, the preliminary particle image velocimetry (PIV) results are used to determine the local match parameters that are required for a two-frame particle-tracking algorithm. Both computer simulations and real flow measurements were performed to check the performance of the adaptive hybrid PTV. Compared with the original method, the new technique enhances the PTV performance by increasing the velocity vector recovery rate, while greatly reducing the number of erroneous vectors. In addition, the adaptive hybrid method provides better resolution near solid boundaries compared with the conventional cross-correlation PIV method.

583

In this paper we report on our results for the applications of fibre Bragg gratings (FBGs) in structural health monitoring. Multiplexed FBG-based strain sensors were fixed onto the reinforced bars (rebars) in concrete structures to determine the strain changes at different locations within the structures during loading and unloading tests. A similar set of FBG-based strain sensor arrays was also mounted onto the surface of the structures for the purpose of comparison. At the same time an FBG-based sensor array optimized for temperature measurement was also distributed alongside the strain sensors to obtain the temperature information, as well as to compensate for the temperature-induced wavelength shifts on those FBG strain sensors. The results obtained followed the same trend as that expected from concrete structures subjected to loading tests.

590

The NEutrino Mediterranean Observatory (NEMO) collaboration is involved in research and development work intended to lead to the construction of an underwater km³-scale Cherenkov neutrino detector. This huge detector will be divided into thousands of tree-interconnected modules, hereafter called optical modules or Benthos spheres. The optical modules will be composed of stand-alone underwater spheres connected upwards by a coaxial cable, along which digital data will be transferred and received and through which the optical module will be powered. In this paper, an electronic board prototype for the optical modules is described in detail, as regards signal synchronization, filtering, data compression and packing. In other words, we present the control electronics for the optical modules. The digital blocks have been developed by means of a VHDL-based FPGA design.

598

The applicability of instruments based on ultrasonic technology for determining variations in the level of liquids contained in tanks of arbitrary shape may be limited by the coherent interference of multiple reflections from the tank walls. This effect, which depends on the geometry, frequency and beamwidth, degrades the reliability of this kind of equipment, which usually requires expensive transducers and/or sophisticated reconstruction techniques to yield acceptable performance. Moreover, the behaviour of such instruments is too sensitive to temperature and other external parameters to maintain an accurate response in every operating environment. Here a method to maximize accuracy and minimize sensitivity to perturbations is described. It is based on the use of the phase information contained in the main pulse of a complex echo pattern resulting from the sound reflecting from multiple objects. Its performance and the results of its application to a real case are studied.

603

A simple device is described that is capable of measuring reliably and reproducibly low levels (0.1 ppbv with a total analysis time of approximately 40 min) of volatile organic compounds (VOCs) in urban air and in the headspace of groundwater. The VOCs were pre-concentrated onto a small bed of Tenax TA®, and subsequently released by temperature-programmed thermal desorption (TPD). TPD profiles were recorded by a solid-state sensor array. The success of the method rested on: a well defined TPD peak shape when the adsorbent bed was sufficiently thin; linearity and additivity of appropriately transformed sensor signals; the use of a very stable sensor array utilizing chromium-titanium oxide as the sensor material, with devices of different electrode gaps; and the use of dry air as the carrier gas. Successful analysis of groundwater headspace was achieved by diluting the sample stream 1:4 with dry air (which prevented saturation of the adsorbent surface by water) and by introducing a dry air purge of the bed at room temperature before desorption (which removed excess water from the bed and hence stabilized the sensor baseline). Principal components analysis and spectral decomposition (SD) methods were used successfully for the identification of VOCs and their quantification in simple mixtures. The SD method expressed the measured desorption trace on the elements of the sensor array as the sum of a small number of sensor array spectra. The individual desorption traces for each characteristic sensor spectrum thus derived could be further resolved by a peak fitting procedure, because the TPD trace had a well defined form in which both peak temperature and peak width correlated with the boiling temperature of the VOCs, and peak width varied according to functional type: phenols > aliphatics > alkyl-substituted benzenes.
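The spectral decomposition step amounts to a linear least-squares fit of the measured array response to a small set of characteristic sensor spectra. A minimal two-component sketch is shown below; the spectra and mixture coefficients are invented purely for illustration.

```python
# Hypothetical sensor-array spectra: responses of four sensors to two pure VOCs.
s1 = [1.0, 0.5, 0.2, 0.1]
s2 = [0.2, 0.4, 0.9, 0.6]

# Measured array response at one desorption time step: an unknown mixture
# (here constructed as 0.7*s1 + 1.3*s2 so the answer is known).
measured = [0.7 * a + 1.3 * b for a, b in zip(s1, s2)]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# Least-squares coefficients from the 2x2 normal equations:
#   [s1.s1  s1.s2] [c1]   [s1.m]
#   [s1.s2  s2.s2] [c2] = [s2.m]
a11, a12, a22 = dot(s1, s1), dot(s1, s2), dot(s2, s2)
b1, b2 = dot(s1, measured), dot(s2, measured)
det = a11 * a22 - a12 * a12
c1 = (b1 * a22 - b2 * a12) / det
c2 = (a11 * b2 - a12 * b1) / det
```

In this noiseless construction the mixture coefficients are recovered exactly; with real sensor data the same normal equations give the least-squares estimate of each component's desorption trace.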

613

In this paper we present an optical technique based on the shadow moiré method that allows the measurement and digitization of three-dimensional surfaces. The technique was tested experimentally and the results were compared with those obtained by a coordinate measuring machine. Starting from the conventional shadow moiré method, new features were implemented that enabled us to overcome its main shortcomings: the need to assign the fringe order, the inability to discern concavity from convexity, the poor resolution and the complexity of the signal processing. All these problems were solved by adding to the conventional shadow moiré set-up an element that generates a carrier fringe pattern, and by processing the obtained signal using the Fourier transform method. The proposed technique was applied to the measurement of the external surfaces of sheet metal stamped parts. The experimental results show the effectiveness of this technique.

623

We develop a kinetic formalism to determine the charge spatial distribution and the self-consistent electrostatic potential in the grid-collector region of a retarding field analyser system applied to vacuum arc devices. This work is the natural extension of a previous work where a uniform spatial distribution of charge was assumed to explain certain anomalies arising in measurements of vacuum arc ion energy distributions, if charge effects were not included. We compare different approaches (no charge effects, spatially uniform charge density and kinetic treatment) which can be used to find the ion energy distribution in vacuum arcs. We show that under conditions usually met in these devices it is necessary to use the kinetic approach for the correct interpretation of measurements employing retarding field analysers.

631

A0-mode Lamb waves are generated in a non-piezoelectric plate by mode conversion, using P(VDF-TrFE) transducers upon which single-phase comb electrodes are etched. Determining the propagation equations of the waves as functions of the device parameters (input voltage, number of comb fingers, plate thickness, ...) allows the influence of the density of a liquid in contact with the plate on the parameters of the propagated waves to be modelled. The model was experimentally validated, followed by a study of the behaviour of the device versus temperature.

638

The image reconstruction algorithm plays an important role in electrical capacitance tomography systems. The linear back-projection method is simple and fast, but the reconstructed image is blurred. A method based on revised regularization is introduced in this paper. Preliminary experimental results show the method to have advantages as regards both the quality of the reconstructed image and the reconstruction rate.

DESIGN NOTES

N31

A diffractive optical element (DOE) based sensor was applied to investigate the optical surface quality of two different commercial laser print papers before and after printing with red, green and blue ink. The DOE sensor simultaneously provides information on both reflected and transmitted light, whereas a spectrophotometer, which was applied as a corroborative method, yields non-simultaneous information about the total reflection and transmission of the samples. The DOE sensor images were analysed and information concerning the local anisotropy of the paper was obtained. The border between colour print and non-print regions was also investigated using the DOE sensor and a microdensitometer. It is proposed that the DOE sensor resolves the border better than the microdensitometer.

N38

The standard methods are reviewed and an alternative, simple approach is proposed for a common problem in the measurement of luminous or radiant flux: that of deriving a nonlinearity correction from double-aperture measurements on the photodetector. The new algorithm is useful for measurements spanning several decades in the magnitude of the flux. It enables full use to be made of all available flux-doubling data including measurements taken at much closer intervals than a binary sequence of flux multiples and measurement sets which contain several overlapping binary sequences.
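A first-order sketch of the flux-doubling principle follows. The quadratic detector model and the flux levels are illustrative assumptions; the note's actual algorithm additionally exploits closely spaced flux multiples and overlapping binary sequences.

```python
# Hypothetical detector with a small quadratic nonlinearity:
# reading V(phi) = phi * (1 + a*phi); 'a' is what we want to recover.
a_true = 0.02

def detector(phi):
    return phi * (1 + a_true * phi)

# Double-aperture data: readings for aperture A, aperture B (equal fluxes
# here) and both apertures open, over several decades of flux.
levels = [0.01 * 2 ** k for k in range(10)]
readings = [(detector(p), detector(p), detector(2 * p)) for p in levels]

# For V = phi + a*phi^2 the additivity defect obeys
#   V_AB - V_A - V_B = 2 * a * phi_A * phi_B,
# and with phi ~ V to first order each triple yields an estimate of a.
estimates = [(vab - va - vb) / (2 * va * vb) for va, vb, vab in readings]
a_hat = sum(estimates) / len(estimates)
# a_hat is close to a_true; the reading is then corrected as V -> V - a_hat*V^2.
```

With a perfectly linear detector the additivity defect vanishes, so any systematic departure of `V_AB` from `V_A + V_B` directly measures the nonlinearity, which is the idea the note generalizes.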

N42

Based on the fact that the electrical impedance of biological tissues is very sensitive to temperature, we have proposed a method to monitor local temperature changes inside the tissues. Using an analytic model and a finite element method model, we have analysed the effect of the local temperature change on the phase image obtained by the magnetic resonance current density imaging technique. We show preliminary experimental results of the temperature change monitoring performed with a 0.3 T magnetic resonance imaging system. We expect that the proposed method can be utilized for the development of non-invasive temperature imaging techniques.

N47

Closed-loop control of piezoelectric actuators compensates for nonlinearities, such as hysteresis and creep, in high-resolution positioning applications. To this end an accurate measurement of the piezo extension is required. We compare strain gauges and optical sensors with regard to the achievable closed-loop bandwidth. With strain gauges, the attainable control speed can be limited by non-minimum-phase behaviour. In our case the limitation due to the strain gauge is at one-eighth of the bandwidth achievable when an optical sensor is used.

CORRESPONDENCE

641

Comment on `Electric potential probes - new directions in the remote sensing of the human body'

Dear Editor

The authors of `Electric potential probes - new directions in the remote sensing of the human body', which appeared in your February issue (2002 Meas. Sci. Technol. 13 163-9), are to be congratulated on their demonstration of non-contact ECG monitoring. Although they probably overstate the problem of signal distortion when using conventional electrodes, a monitoring modality that removes the necessity for electrode jelly and skin preparation is certainly advantageous.

However, with reference to their figure 5, the remote off-body ECG trace, I am interested in the authors' assertion that the signal displayed is due to cardiac activity and not movement. Assuming the heart rate of the subject was in the normal range of 60 to 80 bpm, the oscillatory signal clearly visible between beats can be estimated at between 6 and 8 Hz. To my knowledge a signal component of this frequency cannot be obtained from any combination of surface ECG electrodes. Because of the front-to-back measurement used we might expect the non-contact signal to resemble a conventional ECG taken between the equivalent areas of the body, which it doesn't.

Given that the stated filtering for this signal is 1 to 30 Hz, it is unlikely that this oscillation is a function of the filter (in contrast to their figure 6(b) where the 5 to 15 Hz bandpass filter is beginning to selectively pick out the centre band sinusoidal component).

This phenomenon is of interest to me because in recent, unpublished work involving measurement of the cardioballistogram (the physical movement of the torso at each heartbeat in reaction to the ejection of blood from the left ventricle into the aorta) I specifically recorded damped low-frequency oscillations, very similar to those in figure 6, which proved to be a sympathetic mechanical oscillation of the viscera in response to the cardiac ejection stimulus.

My question to the authors, therefore, would be what experiments did they perform to determine that their signal was, in fact, electrical rather than cardioballistic in origin?

John Brydon Consulting Engineer, PO Box 155, Artarmon, New South Wales 1570, Australia

Reply from the authors

Dear Editor

We would like to thank Dr Brydon for his interest in our paper and we appreciate his congratulations on our demonstration of non-contact ECG monitoring.

With regard to his opinion concerning signal distortion, our statements in the paper were based on the observation that our sensors consistently detect larger amplitude surface potentials than conventional gel electrodes. This indicates to us that electrical loading of the body occurs with conventional electrode systems.

Dealing with Dr Brydon's second point, we would make the following observations.

  • Because of the capacitive nature of the coupling between our sensor and the body, one would expect any movement artefacts to be most pronounced at small distances from the surface of the body and to decrease with increasing distance. This is not the case.

  • As regards the remote, non-contact signals measured from the front to the back, we do not agree that these should resemble conventional ECG measurements. First, of course, such remote, off-body detection is new - it has not been carried out before. There is therefore no other data available with which to compare our results. Second, the heart is a multi-polar source with distributions of fields and potentials which must vary strongly as a function of distance from the body. One would therefore not expect the ECG at (say) 1 metre away from the body to be the same in detail as that sensed at the body surface. It is clear that at such large distances the signal detected by our sensors is a spatially averaged composite of the electrical activity across the chest.

C J Harland, T D Clark and R J Prance School of Engineering and Information Technology, University of Sussex, Brighton BN1 9QT, UK

BOOK REVIEWS

643

As an update, or a collection of updates to be more precise, the intention of the book is comfortably fulfilled. The last decade has seen enormous strides in micromachining and microfabrication technology. Coupled with the increasing interest in biological and chemical measurements, it is not surprising that this volume is heavily biased toward microsensors, and in particular biochemical microsensors. However, no book of this size can hope to cover all the recent advances and there are consequently a number of obvious holes. Nevertheless, when read in conjunction with the more specific Sensor volumes from the same publisher's series, a more coherent picture rapidly emerges.

The book is split into three sections, `Technology', `Applications' and `Market', though the last merely consists of a single contribution that doesn't appear to `fit' properly into the general theme of the book. That said, I must admit to finding the brief potted history of automobile electrics very readable. Despite what is discussed in the preface, very little is mentioned concerning computer-compatible sensor outputs (with the exception of one of the applications-related submissions). However, this will be no real disappointment to those who specialize in new sensor materials and technology, at whom this book is really aimed.

Certainly not a general sensors and transducers book, this text will be useful to scientists and engineers working in research, and to some extent those involved with development, with less interest to production engineers or undergraduate students. It should have particular appeal to those working in interdisciplinary fields where physics and engineering overlap, which is of course what sensors are all about.

Gareth Monkman

643

Engineering Mathematics is a comprehensive textbook for technological and vocational mathematics courses, and is also well designed for self-study objectives. John Bird's approach, based on numerous worked examples and problems, is intended for students of a wide range of abilities and interests. The emphasis is on problem-solving skills, keeping the theory to a minimum, for a thoroughly practical introduction to core mathematics needed for engineering studies and practice. The third edition has been reorganized to present a more logical introduction of the topics, including recent course specifications in the latest National and Vocational Certificate of Education syllabuses.

In Engineering Mathematics, third edition, nine topic areas are covered: number and algebra, mensuration, trigonometry, graphs, vectors, complex numbers, statistics, differential calculus and integral calculus. Each of these areas has multiple chapters, each one starting with a simple outline of the essential definitions, formulae, laws and procedures. The theory is kept to a minimum, as problem solving is extensively used to establish and demonstrate the theory. The readers are thereby guided to solving similar problems by themselves. The book includes some 850 worked examples, 1500 problems (with answers provided), 100 multiple choice questions and 18 assessment papers. The style and content of this textbook are ideal for self-study in preparation for several types of examinations requiring mathematical problem-solving skills.

The author also has other closely related textbooks from the same publisher: Basic Engineering Mathematics, 2nd edition, and Higher Engineering Mathematics, 3rd edition, as well as Engineering Mathematics Interactive, a set of three interactive CD-ROMs. Interested students and instructors would therefore be well advised to examine the contents of these books carefully before deciding on one.

J A Rod Blais

643

Peter Congdon's Bayesian Statistical Modelling is not a teaching textbook or introduction to Bayesian statistical modelling. Although the basics of Bayesian theory and Markov Chain Monte Carlo (MCMC) methods are briefly reviewed in the book, I think that one should already be familiar with those topics before using the book. Given that, the book can be very helpful to an applied statistician, as it is an excellent reference source for Bayesian models and literature.

Using nearly 200 worked examples with data examples and computer code available via the World Wide Web, the book reviews a large number of models, e.g., for standard distributions, classification, regression, hierarchical pooling of information, missing data, correlated data, multivariate data, time series, spatial data, longitudinal data, measurement error, life table and survival analysis.

Each chapter starts with an introduction to the model family and then continues by describing variations on the basic models, with advice on model identification, prior selection, interpretation of findings, and computing choices and strategies. The last chapter also briefly reviews Bayesian model assessment. With 500 pages in the book, there are about 2.5 pages per example, and consequently I believe that in most cases it would also be necessary to read some of the references in order to benefit fully from the models described.

Although the data examples are mainly from medical science, public health and the social sciences, the book should be interesting to any applied statistician seeking new possibilities in data analysis.

Aki Vehtari

643

The main purpose of the book as stated by the author is to examine the recent applications of Lyapunov's method for distributed parameter systems to the control of vibration and noise. This book is divided into six chapters with a reasonable bibliography at the end of the chapters. One of the unique features of this book is the fact that the author uses three to four systems such as the gantry crane string, cantilevered beam and a flexible link robot arm throughout the book to illustrate passive (non-model-based), model-based and adaptive control concepts.

Chapter 1 presents a fairly short introduction to the applications of distributed system control and an overview of what is to be presented in the rest of the book. Chapter 2 focuses on the development of partial differential equations to model simple distributed systems. Boundary control forces are introduced and eigenvalues are obtained to examine the stability of the system. Galerkin's method is used when there are significant gyroscopic effects or complex parameter non-uniformity.

Chapter 3 is a review of the mathematics required for the understanding of the later chapters in the book, and there is a good discussion of semi-group theory. Chapter 4 uses the tools developed in the third chapter to discuss passive control of a boundary-controlled string. Results are shown to agree with the experiments conducted. Other examples dealt with are boundary damping of a cantilever beam, and control of a beam with distributed damping. Chapter 5 deals with exact model knowledge (EMK) controllers, which can compensate for actuator dynamics and nonlinearities. Control of noise in an acoustic duct is one of the examples chosen for illustration, and once again the Lyapunov function is based on the acoustic energy. In the final chapter the EMK controllers are redesigned as adaptive controllers to account for parameter uncertainties. The time derivative of the Lyapunov function is shown to be negative definite for adaptive control. A number of experimental studies are also reported to validate the adaptive control strategy.

One limitation of the Lyapunov function method used in the book is that it is not always apparent what the function should be, although in most cases potential or kinetic energy, or a combination of the two, is used; in particular, when the Hilbert-space inner product is defined, it is not clear how the weight functions are to be chosen. The book also has a bibliography section with over 130 references for the reader interested in further research in the area.

Mouli Padmanabhan

644

The application of thin film materials in advanced technologies is increasing as new generations of complex electronic, optical and magnetic devices are developed. The dynamics of the growth of thin and ultra-thin films and their evolving, dimensionally dependent, properties are of crucial importance in understanding their performance in particular devices. Anyone who has been concerned with the fabrication of thin films and their associated properties will understand the need to have knowledge of their many physical properties and how these are related to the growth mechanisms and environment in which films are deposited. This book provides a reasonably comprehensive overview of several in situ characterization techniques that, in some cases, are capable of following the developing properties of thin films and surfaces in real time. Several techniques are described including those involving ion- and electron-beam scattering, photoemission electron microscopy, a variety of photometric and ellipsometric optical methods, x-ray reflectivity and curvature-based methods of real-time stress monitoring.

The various chapters in this book have been written by a variety of authors and therefore it is inevitable that each topic reflects the style and particular interests of individuals. Nevertheless, for newcomers to the field of thin film technology, such as graduate scientists and postdoctoral researchers, the contents provide a means of rapidly assimilating the basics of some of the in situ methods that have been used in the past for characterizing thin films. From this point of view the book is successful, and many references are given for further reading for those who wish to pursue a particular topic in greater detail.

As is often the case, specific examples illustrating the techniques tend to be limited and are often chosen to demonstrate the effectiveness of the methods. Whilst this is understandable, a potential user will often wish to explore a system or film-thickness regime that responds poorly to such probe methodologies, and a greater discussion of the limitations of the techniques would have been useful, particularly for the beginner. In the chapter on ellipsometry, for example, only thick films are dealt with and the examples are confined to systems that work particularly well (e.g. SiO2 on Si). Likewise, various effective medium models are discussed in relation to non-planar (rough) surfaces. However, the limitations of such models are not dealt with, nor are those of ellipsometry for more problematic systems such as ultra-thin films and multilayers. A disappointing aspect of the book is that no specific chapter is devoted to magnetic materials and their properties, currently of considerable interest to the information storage industry.

Despite these minor drawbacks the book is very commendable, generally well written and a worthy addition to any library on thin film growth and characterization. It is particularly valuable because of the very real need to understand the dynamically developing properties of thin and ultra-thin films and surfaces, and because it provides readable descriptions of some of the more important instrumental techniques that are used to study the evolution of thin-film characteristics in real time.

R Atkinson

644

According to widespread expectations, we will shortly witness a new era in experimental gravitation. More accurate measurements of classical, weak field effects will soon be performed. Examples of these classical effects are the bending of light rays by the solar mass (or its modern counterparts, based on radio propagation effects), and the relativistic perturbations of the orbit of Mercury. In addition, we will shortly probe a new domain, where predicted but not yet measured effects (e.g. the Lense-Thirring effect, gravitational waves) will provide significant new tests of general relativity and its foundations.

Particularly promising are the relativistic tests in space. Some of these experiments require the use of dedicated missions (e.g. GPB, STEP, LISA), while others are part of complex missions dedicated to astronomy or space exploration in general (e.g. Cassini, BepiColombo, GAIA).

The aim of this book is to provide a detailed review of the subject of experimental gravitation in space. These are the proceedings of an international advanced school which took place in 1999 in Bad Honnef. A positive aspect of this book is that both experimental techniques and theoretical background are presented side by side. Particular emphasis is given to tests of the Lense-Thirring effect and the equivalence principle, and to applications of spaceborne atomic clocks. In such a rapidly developing field, it is almost inevitable that some experimental techniques are left out, and the editors quite rightly decided to focus on only some of the topics in the field of experimental gravitation.

The book starts with an excellent review by Nordtvedt on solar system tests of general relativity. The second section is dedicated to the Lense-Thirring effect, offering a very good introduction for readers unfamiliar with gravitomagnetism. On the experimental side, this part is dominated by the review by Everitt et al on the status of GPB. Also well covered is the equivalence principle (EP). We mention Haugan and Lämmerzahl's review of the role of the EP in gravitational field theories. From the experimental side, Nordtvedt gives an update on the status of the LLR experiment, now able to test the validity of the EP to an accuracy of the order of 10^-13, along with a confirmation of the de Sitter precession and an upper limit on the time variation of Newton's constant. In the future, experiments like STEP will be able to test the EP to an accuracy several orders of magnitude better than that currently available from LLR. Lockerbie et al give an updated review of the status of STEP, which gives a clear idea of the very challenging technical progress required in order to meet these expectations.

I found the section on gravitational waves somewhat disappointing. It includes theoretical papers by Blanchet et al on the generation of gravitational radiation, and by Grishchuk on the cosmological background. However, the experimental part is very poorly covered, with a single presentation by Rudiger et al on the GEO600 ground-based laser interferometer (thus, not even a space experiment). Completely missing are contributions on space experiments based on laser interferometry (e.g. LISA) or on the Doppler technique (e.g. Cassini). The book would have certainly gained from the inclusion of a detailed discussion on the status and plans of such detectors.

Finally, like other titles in Springer-Verlag's Lecture Notes in Physics series, this book is very well produced, although it would certainly have benefited from careful editing before going to press. Several typos remain (the most evident being the misspelling of Thirring in the title of Part II). The cited references are extensive and very useful, although, again, they could have been set in a single, more consistent format throughout the book.

In summary, I strongly recommend this book, in particular to professionals and graduate students interested in learning some theoretical aspects of modern experimental gravitation.

Giacomo Giampieri

645

The object of this book is to present the principles, instrument designs and applications of available magnetic transducers. To accomplish this task, the author begins with a fundamental chapter on phenomenological magnetism, units and sensor specifications. The book continues by dedicating a full chapter to each magnetic sensor family: induction, fluxgate, magnetoresistive, Hall-effect, magneto-optical, resonance, SQUIDs and other principles. It ends with three chapters on applications, testing and calibration, and magnetic sensors for non-magnetic variables.

Various authors have contributed to some of the chapters. In spite of this, the content, presentation, opinions and notation are consistent and uniform throughout the book. Furthermore, each chapter can be read individually without loss of context.

The author(s) have focused on devices that have been developed, or are being prototyped, by commercial or public institutions. The book's objectives are also to give an insight into sensor design properties for a specific application and an understanding of the limitations and/or suitability of a specific sensor. Each chapter is therefore accompanied by an extensive list of scientific and technical material that provides a good reference for those interested in further reading.

There are a number of books treating magnetic materials and their applications. However, often only a fundamental point of view is given. Magnetic Sensors and Magnetometers is a comprehensive book on the practice of magnetic transducers and their bases with many contributions from different experts in this field. Indeed, many professionals and researchers have (or will have) the need at some point for a magnetic sensor or transducer, and therefore a book of this nature is a very good reference for building and designing the most suitable solution for a specific application. It also provides design hints for connecting magnetic sensors to electronic devices, such as amplifier noise matching, etc. The book may also be of interest to teachers, students and researchers at universities, to instrumentation and application designers and users and the like.

It is appropriate to list and comment on the various chapters for the reader to know what can be found in them:

  1. Basics (by Hauser and Ripka with 25 references): magnetic material types and properties and sensor specification.

  2. Induction Sensors (by Ripka with 29 refs) describes the air coils and their limitations, coils with ferromagnetic cores, amplifier noise matching, and other induction-based techniques such as rotating, moving, extracting and vibrating coils.

  3. Fluxgate Sensors (by Ripka with 159 refs) presents the principle of the transducer with different sensor geometries. Several aspects of this widely used type of sensor are discussed in more detail: demagnetization, core materials, second-harmonic analogue magnetometer, nonselective detection, short-circuited or current-output, noise and offset stability. Also, different design applications are described.

  4. Magnetoresistors (by Hauser and Tondra with 32 refs) illustrates the sensors and applications of the anisotropic magnetoresistance effect utilized in thin films and the giant magnetoresistance phenomenon.

  5. Hall-effect Magnetic Sensors (by Popovic et al with 51 refs) describes the basic sensor and thin-film Hall elements. Furthermore, integrated and multi-axes Hall sensors are presented.

  6. Magneto-optical Sensors (by Didosyan and Hauser with 33 refs) with the Faraday and Kerr effects and a description of the magneto-optical current transformer.

  7. Resonance Magnetometers (by Primdahl with 52 refs) describes the proton precession and the Overhauser variant effects and the optically pumped magnetometers.

  8. SQUIDs (by Fagaly with 38 refs) illustrates the sensors and operations with regard to noise and cancellation, input circuits, refrigeration and gradiometry.

  9. Other Principles (by Ripka and Kraus with 39 refs) describes, among others, magnetoimpedance, magnetoelastic and magnetostrictive sensors and biological applications.

  10. Application Magnetic Sensors (by Ripka and Acuña with 72 refs) in navigation, automotive, military, testing and planetary magnetic fields.

  11. Testing and Calibration Instruments (by Sasada et al with 38 refs) describes the application of magnetic coils and shielding.

  12. Magnetic Sensors for Nonmagnetic Variables (by Ripka et al with 40 refs) is an interesting chapter on how to use magnetic properties to measure other physical variables such as position, proximity, force, pressure, torque, current, etc.

  • Appendix. Magnetic Sensors, Magnetometers and Calibration Equipment Manufacturers. It gives a fairly comprehensive list of manufacturers in the field.

Overall, I recommend this book to professionals working in magnetism, magnetic instrumentation and related areas. It is highly relevant and contains an extensive and valuable amount of reference material.

Jose M G Merayo

645

This is an interesting but rather brief monograph which is beautifully produced and amusingly illustrated. In it, Alter and Yamamoto provide a series of analyses of specific models of quantum measurements, and sequences of quantum measurements, on single quantum systems. They use these analyses to comment on a number of controversial issues. Their analyses are mainly clear and careful, although a significant amount of work is required to follow the details. There are also some errors, particularly in the penultimate chapter. (For example, in the sentence that includes equation (7.9), it would appear, if equation (7.2) is invoked, that the uncertainty principle is violated.) I also find somewhat problematic Alter and Yamamoto's tendency to generalize from specific models to universal statements. Some of these statements might give a misleading impression of the book as a general treatment of quantum measurement; it would be more appropriate to think of it as a short and well-organized discussion of some novel and important examples, together with useful pointers to a wider literature. Finally, it does not seem to me that they actually justify their oft-repeated central philosophical claim that `the quantum wavefunction cannot be ascribed physical reality'. My problem here is not that I disagree directly with Alter and Yamamoto: I do not believe that there are any situations in which, with no prior knowledge, it is possible to determine the wavefunction of a single system by a series of measurements. Rather, in my opinion, making the case that wavefunctions do not have physical reality requires at least some explanation of what else might.

Matthew J Donald

646

This is a comprehensive book discussing several methods for the identification of nonlinear systems. Identification is extremely relevant in applications, and only recently has research begun to address in earnest the pressing problem of identifying systems with nonlinearities. In this respect, the book is timely, as it is a collection of results from many different areas in applied science, ranging from linear optimization techniques to fuzzy logic and nonlinear adaptive control.

The declared aim is `to provide engineers and scientists in academia and industry with a thorough understanding of the underlying principles of nonlinear system identification'. At the same time, the author wishes to enable users to apply the methods illustrated in the book.

The book is well structured and divided into four distinct parts.

The first part is entirely devoted to an overview of the main optimization techniques for nonlinear problems. Least squares methods and other classical strategies such as general gradient-based algorithms are discussed. While the presentation is clear, it is too wordy at times, making it difficult to appreciate the key issues involved. A set of diagrams and summarizing tables is included, though, to improve the overall clarity and highlight similarities and differences.

The second part is mostly devoted to static models such as linear, polynomial and look-up table models. The main emphasis is on neural networks and fuzzy logic. The results are clearly expounded but the aim of giving a general overview of too many different approaches in some cases hampers the clarity of the exposition.

Neuro-fuzzy models are presented in chapter 12 and further detailed in chapters 13 and 14, where local linear neuro-fuzzy models are discussed. In particular, chapter 13 focuses on methods proposed by the author. Despite their usefulness, I found that dedicating two entire chapters to these methods causes a slight imbalance in the presentation. Up to chapter 13 the discussion is quite well balanced, and the different methods are given the space needed to expound the main results; local neuro-fuzzy approaches, by contrast, are in my view treated in far too much detail, beyond the scope of a book whose aim is to give a balanced general overview of the available results. A summary of the second part is given in chapter 15, where the author reinforces the view that local neuro-fuzzy methods should be more widely applied to static modelling problems.

Dynamic models are the subject of the third main section of the book. Linear dynamic system identification is discussed in chapter 16, where time series models are presented together with multivariable methods and other linear approaches. Nonlinear dynamic systems are considered in chapter 17, followed by classical polynomial approaches in chapter 18. Neural and fuzzy dynamic models are treated together with local neuro-fuzzy dynamic systems in the remaining chapters of this third part. Again, particular emphasis is given to local neuro-fuzzy systems, which have been the subject of research and development by the author. Unfortunately, this part does not include a chapter summarizing the main results expounded. It must be noted, though, that many diagrams and schematics do help in highlighting the main results; nevertheless, an extensive summary such as the one included at the end of the second part would have been useful.

As I have indicated, Nelles has certainly described an extensive number of results in the book. On the other hand, more recent methods based on novel developments of Nonlinear Dynamics such as nonlinear time series analysis, which have been successfully used to identify nonlinear systems, have not been included in the book. I hope they will be incorporated in later editions, as they have the potential to play an important role in the identification of complex models.

Applications are discussed in the fourth and last part of the book. The problems presented are interesting but again it becomes apparent that local linear neuro-fuzzy methods are somehow the author's preferred method. This bias, which might well be motivated by the author's experience, should in my view be counterbalanced by applications showing the use of other methods. Some are indeed included in the final chapters of the book but I would have liked to see a few more problems.

Two appendices recall some useful results from linear algebra, vector calculus and statistics and are well suited to a general readership. An impressive reference list of more than 400 items completes the book, representing an invaluable starting point for further research and details.

As mentioned in the Preface, throughout the book Nelles tries to keep the mathematical description to a basic level. This indeed makes the textbook accessible to a wider audience. Unfortunately, it also results at times in lengthy, wordy descriptions of the most intricate approaches. As a consequence, users who wish to apply some of the methods discussed to problems that interest them will often find that they need to look up further details from other sources. In this respect, the extensive reference list at the end of the book will certainly be helpful. Despite this disadvantage, the book is certainly an invaluable archive of available strategies for nonlinear system identification, which will undoubtedly help readers with the choice of the particular method to use.

In conclusion, as I have indicated, I found the book a well-packaged overview of the main results concerned with nonlinear system identification. But I believe that the description is wordy at times and not rigorous enough. Contrary to what is stated in the Preface, the book is not self-contained: readers will undoubtedly need to look up further references to be able to make use of the methods illustrated. On the other hand, the book should be a useful reference for students. It certainly deserves to be included in the reading list of any course on nonlinear system identification and optimization.

Mario di Bernado