Article

Design and Test of a Spatial Nanopositioner for Evaluating the Out-of-Focus-Plane Performance of Micro-Vision

Ruizhou Wang and Heng Wu
1 State Key Laboratory of Precision Electronic Manufacturing Technology and Equipment, Guangdong University of Technology, Guangzhou 510006, China
2 Guangdong Provincial Key Laboratory of Cyber-Physical System, Guangdong University of Technology, Guangzhou 510006, China
* Author to whom correspondence should be addressed.
Micromachines 2023, 14(3), 513; https://doi.org/10.3390/mi14030513
Submission received: 3 February 2023 / Revised: 16 February 2023 / Accepted: 21 February 2023 / Published: 22 February 2023

Abstract

Micro-vision possesses high in-focus-plane motion tracking accuracy. Unfortunately, out-of-focus-plane displacements cannot be avoided, decreasing the in-focus-plane tracking accuracy of micro-vision. In this paper, a spatial nanopositioner is proposed to evaluate the out-of-focus-plane performance of a micro-vision system. A piezoelectric-actuated spatial multi-degree-of-freedom (multi-DOF) nanopositioner is introduced. Three in-plane Revolute-Revolute-Revolute-Revolute (RRRR) compliant parallel branched chains produce in-focus-plane motions. Three out-of-plane RRRR chains generate out-of-focus-plane motions. A typical micro-vision motion tracking algorithm is presented. A general grayscale template matching (GTM) approach is combined with the region of interest (ROI) method. The in-focus-plane motion tracking accuracy of the micro-vision system is tested. Different out-of-focus-plane displacements are generated using the proposed nanopositioner. The accuracy degradation of the in-focus-plane motion tracking is evaluated. The experimental results verify the evaluation ability of the proposed nanopositioner.

1. Introduction

Micro-vision, consisting of a microscope and a camera, offers the advantages of a non-contact measurement method with visualization capabilities [1,2,3,4,5,6,7]. The larger the magnification, the smaller the depth of field. Because of this small depth of field, micro-vision is generally employed to measure micrometer-scale or sub-micrometer-scale displacements in the focus plane. Several factors affect the in-focus-plane measurement accuracy, such as defocus blur, motion blur, and Gaussian blur. Unfortunately, out-of-focus-plane displacements are unavoidable: the relative distance between the microscope lens and the measured object always changes, which introduces varying degrees of defocus blur. Compared with macro-vision, the resulting accuracy degradation of micro-vision is more prominent [6,7,8,9,10,11]. Out-of-focus-plane displacements of moving targets are more serious than those of stationary objects; the defocus effect is therefore worse for in-focus-plane motion tracking.
Spatial multi-degree-of-freedom (multi-DOF) nanopositioners play important roles in precision motion generation, measurement, machining, and manipulation [12,13,14,15,16,17,18]. Piezoelectric actuators (PEAs), compliant mechanisms (CMs), and parallel mechanisms (PMs) are widely used building blocks for nanopositioners [12,13,14,15,19,20]. PEAs generate sub-nanometer-scale displacements [12,13,14,15,18,19,20,21,22,23,24,25]. CMs transfer displacements or forces without clearance or friction [12,13,14,15,19,20,22,24,26,27,28,29]. PMs give the end-effector higher motion generation precision and payload capacity [12,13,14,15,19,20,26,27,28]. By combining PEAs with compliant parallel mechanisms (CPMs), spatial nanopositioners can generate motion with nanometer-scale accuracy. The in-plane output displacements of the end-effector act as the in-focus-plane tracking target of micro-vision, and the out-of-plane output displacements are used to evaluate its out-of-focus-plane performance.
This paper contributes a spatial nanopositioner and an evaluation approach for the motion tracking accuracy degradation characteristics of micro-vision. Firstly, the mechanical design approach of the spatial nanopositioner using six PEAs and a six-branched-chain CPM is proposed in Section 2. Secondly, the micro-vision system, utilizing the typical GTM and ROI methods, is presented to track in-focus-plane motion in Section 3. Thirdly, prototype tests measuring the in-focus-plane motion tracking accuracy degradation of the micro-vision system under different out-of-focus-plane displacements are conducted in Section 4. Finally, a brief conclusion is presented in Section 5.

2. Mechanical Design of the Spatial Nanopositioner

A spatial nanopositioner is proposed in which a 6-Revolute-Revolute-Revolute-Revolute (6-RRRR) CPM acts as the mechanical unit. The 6-RRRR CPM consists of six parallel branched chains, each composed of four rotating pairs realized with notch flexure hinges. The first rotating pair of each chain, the equivalent active pair, is denoted R; the other three rotating pairs, the passive pairs, are denoted RRR. Every branched chain is therefore labeled RRRR. The 6-RRRR CPM has a two-layer structural configuration: the upper layer is an in-plane 3-RRRR CPM, and the lower layer is an out-of-plane 3-RRRR CPM. The two layers are connected by a metal plate, and the end-effector of the nanopositioner is connected directly to all six RRRR branches. Six PEAs drive the six RRRR branches separately and act as the actuating unit of the nanopositioner.

2.1. In-Plane Motion Generation and Measurement

The upper layer, namely, the in-plane 3-RRRR CPM, is composed of three RRRR branches located in the same plane. These branches generate the in-plane three-degree-of-freedom (3-DOF) motion with nanoscale accuracy, and the end-effector acts as the tracking target of the micro-vision system. Three capacitive displacement sensors (CDSs) measure the 3-DOF output displacements of the end-effector. The three PEAs (marked in blue) and three CDSs (marked in red) are shown in Figure 1.

2.2. Out-of-Plane Motion Generation and Measurement

The lower layer, namely, the out-of-plane 3-RRRR CPM, is composed of three RRRR branches located in three different planes, each perpendicular to the common plane of the three in-plane RRRR branches. The out-of-plane motion is produced and superimposed on the in-plane trajectory of the end-effector. One CDS measures the out-of-plane movement of the end-effector. The three PEAs (marked in blue) and one CDS (marked in red) are shown in Figure 2.

3. In-Focus-Plane Motion Tracking of Micro-Vision

In order to represent as many application cases as possible, typical calculation methods are selected for the micro-vision system. The focus plane of micro-vision is the reference from which out-of-focus-plane displacements are calculated. External measurement and image-feature evaluation are two common approaches to locating the focus plane. Sharpness evaluation methods based on image features are mature, low-cost, and easy to implement.

3.1. Determination of the Focus Plane

The variance method is an algorithm that characterizes differences in image sharpness. The spread of grayscale values in a sharp image is larger than that in a blurred image. For an image of M × N pixels, the sharpness value F is expressed as follows:
F = \sum_{i=1}^{M} \sum_{j=1}^{N} \left[ I(i,j) - \bar{\mu} \right]^{2}
where I(i,j) denotes the grayscale value at point (i,j), and μ̄ represents the average grayscale value, calculated as follows:
\bar{\mu} = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} I(i,j)
The variance evaluation function is unimodal and robust to noise. Based on this image sharpness function, the position of the sharpest image is searched to determine the focus plane.
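For illustration, the sharpness measure and the focus search can be sketched as follows. This is a minimal NumPy sketch, not the authors' implementation; the function names and the image-stack interface are assumptions.

```python
import numpy as np

def variance_sharpness(image: np.ndarray) -> float:
    """Variance sharpness value F of an M x N grayscale image:
    F = sum over (i, j) of (I(i, j) - mean)^2, which is larger for sharp images."""
    gray = image.astype(np.float64)
    mu = gray.mean()                       # average grayscale value
    return float(np.sum((gray - mu) ** 2))

def find_focus_index(image_stack) -> int:
    """Index of the sharpest frame in a through-focus image stack.
    The frame with the maximum variance sharpness defines the focus plane."""
    scores = [variance_sharpness(frame) for frame in image_stack]
    return int(np.argmax(scores))
```

In the prototype of Section 4, the lifting slide stage provides the through-focus scan over which such a search is performed.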

3.2. Grayscale Template Matching (GTM) Method

Typical template matching algorithms use the sum of squared differences (SSD) or normalized cross-correlation (NCC) to calculate the similarity. Let S(x,y) represent an image of size M × N, and T(x,y) denote a template image of size m × n. The similarity formula D(i,j) using the SSD algorithm is as follows:
D(i,j) = \sum_{s=1}^{m} \sum_{t=1}^{n} \left[ S(i+s-1,\, j+t-1) - T(s,t) \right]^{2}
where (i,j) denotes the upper-left corner of the candidate subimage. A subimage of size m × n located at (i,j) is compared with the template, and the position with the minimum D(i,j) gives the best match.
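A direct (unoptimized) NumPy sketch of this SSD matching step is given below; the function name is illustrative and the code is a sketch rather than the authors' implementation.

```python
import numpy as np

def ssd_match(search: np.ndarray, template: np.ndarray) -> tuple[int, int]:
    """Exhaustive SSD template matching.

    Returns the upper-left corner (i, j) of the m x n subimage of `search`
    that minimizes D(i, j) = sum_{s, t} [S(i + s, j + t) - T(s, t)]^2.
    """
    S = search.astype(np.float64)
    T = template.astype(np.float64)
    m, n = T.shape
    D = np.empty((S.shape[0] - m + 1, S.shape[1] - n + 1))
    for i in range(D.shape[0]):
        for j in range(D.shape[1]):
            diff = S[i:i + m, j:j + n] - T
            D[i, j] = np.sum(diff * diff)   # SSD similarity at (i, j)
    i_best, j_best = np.unravel_index(np.argmin(D), D.shape)
    return int(i_best), int(j_best)
```

In practice, OpenCV's cv2.matchTemplate with the cv2.TM_SQDIFF method computes the same similarity map far more efficiently than the explicit double loop.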

3.3. Region of Interest (ROI) Method

The typical ROI method is also used. Before motion tracking begins, an initial frame is collected for template matching. The point (u_0, v_0) represents the center of the original ROI area of size m × n. The original ROI area R_0 in frame 0 is defined as follows:
R_{0} = I\left( u_{0}-\frac{m}{2} : u_{0}+\frac{m}{2},\; v_{0}-\frac{n}{2} : v_{0}+\frac{n}{2} \right)
Each new frame i is matched within the ROI of the previous frame i − 1. The point (Ru_i, Rv_i) represents the relative position of the match inside that ROI, and the point (u_i, v_i) represents the absolute position in frame i, calculated as follows:
(u_{i}, v_{i}) = \left( u_{i-1}-\frac{m}{2}+Ru_{i},\; v_{i-1}-\frac{n}{2}+Rv_{i} \right)
The updated ROI area Ri of the new frame i is defined as follows:
R_{i} = I\left( u_{i}-\frac{m}{2} : u_{i}+\frac{m}{2},\; v_{i}-\frac{n}{2} : v_{i}+\frac{n}{2} \right)
The point (u_i, v_i) of every frame is then used to calculate the in-focus-plane displacements.
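Putting the GTM and ROI steps together, a tracking loop along the lines of the equations above might look like the following sketch. It reuses the ssd_match function from the previous sketch, assumes the ROI stays inside the frame, and all other names and parameters are illustrative rather than taken from the reported system.

```python
import numpy as np

def crop_roi(frame: np.ndarray, u: int, v: int, m: int, n: int) -> np.ndarray:
    """ROI R = I(u - m/2 : u + m/2, v - n/2 : v + n/2); assumed to lie inside the frame."""
    return frame[u - m // 2 : u + m // 2, v - n // 2 : v + n // 2]

def track(frames, template: np.ndarray, u0: int, v0: int, m: int, n: int,
          um_per_pixel: float):
    """Track the template frame by frame and return positions in micrometers.

    `frames` is an iterable of grayscale images, (u0, v0) is the ROI center in
    frame 0, m x n is the ROI size (at least as large as the template), and
    `um_per_pixel` is the calibrated conversion factor (0.0311 um/pixel in
    Section 4).  ssd_match is the SSD matcher sketched in Section 3.2.
    """
    u, v = u0, v0
    positions_um = []
    for frame in frames:
        roi = crop_roi(frame, u, v, m, n)   # ROI taken around the previous frame's position
        ru, rv = ssd_match(roi, template)   # relative position (Ru_i, Rv_i) inside the ROI
        # Absolute position: (u_i, v_i) = (u_{i-1} - m/2 + Ru_i, v_{i-1} - n/2 + Rv_i)
        u = u - m // 2 + ru
        v = v - n // 2 + rv
        positions_um.append((u * um_per_pixel, v * um_per_pixel))
    return positions_um
```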

4. Prototype Test of the Nanopositioner and Out-of-Focus-Plane Evaluation

Aluminum alloy 7075-T651 was selected as the material for the prototype of the presented spatial 6-RRRR CPM. Wire electrical discharge machining (WEDM) and computer numerical control (CNC) machining were used to fabricate the two 3-RRRR CPMs separately. The in-plane 3-RRRR CPM was equipped with three packaged PEAs (P-841.3B, Physik Instrumente GmbH, Karlsruhe, Germany). The packaged PEAs possessed an embedded strain gauge sensor (SGS), a closed-loop elongation of 45.0 μm, and a compact size of Φ12 × 68 mm3. The out-of-plane 3-RRRR CPM was equipped with three naked PEAs (NAC2015-H28, Piezomechanik GmbH, Munich, Germany). The naked PEAs possessed a compact size of 10 × 10 × 28 mm3 and an elongation of 42.3 μm. The four CDSs comprised three pillar-type probes and one flake-type probe (D-E20.200 and D-E30.200, Physik Instrumente GmbH). The nominal stroke of the four CDSs was 200 μm, and their resolution was 6 nm. The positioning controller of the end-effector was built using a compact prototyping unit (MicroLabBox, dSPACE GmbH, Paderborn, Germany).
The micro-vision system consisted of a microscope and a camera. The selected microscope had a magnification of 112.5 (Mitutoyo 50× objective, Navitar Inc., Rochester, NY, USA). The sensor of the selected camera had a resolution of 2448 × 2048 pixels at 75 fps and a pixel pitch of 3.45 μm (Sony IMX250 CMOS, FLIR Systems Inc., Wilsonville, OR, USA). The microscope and camera were driven by a lifting slide stage (KA050Z, Zolix Instruments Co., Ltd., Beijing, China) with a resolution of 1 μm and a positioning precision better than ±3 μm. This stage was used to search for the focus plane of the micro-vision system. The pixel-to-displacement conversion relationship was calibrated using a negative combined resolution and distortion test target (R1L1S1N, Thorlabs Inc., Newton, NJ, USA); the calibrated conversion factor was 0.0311 μm/pixel. The whole experimental setup is shown in Figure 3.
As shown in Figure 3, four types of controllers were employed during the prototype test. The third controller (MicroLabBox) was the overall controller of the whole experimental system. The first controller connected the four capacitive sensors of the nanopositioner and the third controller. The second controller connected the six PEAs of the nanopositioner and the third controller. The fourth controller connected the lifting platform of the micro-vision system and the third controller.

4.1. Test Results of the In-Focus-Plane Motion Generation Ability

The in-plane workspace of the nanopositioner was tested. Then, an in-plane circular trajectory was generated using a PID controller. The results are shown in Figure 4.
As shown in Figure 4, the in-plane workspace of the nanopositioner is 140 × 170 μm2. For a circle with a diameter of 25 μm, the positioning error along the x-axis using the 3δ (δ: standard deviation) principle is 0.038 μm, and the 3δ error along the y-axis is 0.054 μm. The in-plane trajectory provides a standard in-focus-plane tracking target for the micro-vision system.
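These 3δ figures can be reproduced from logged trajectory data in a few lines. The sketch below is illustrative: it assumes the measured and commanded coordinates are sampled at the same instants, and the array names are hypothetical.

```python
import numpy as np

def three_delta_error(measured: np.ndarray, commanded: np.ndarray) -> float:
    """3-delta positioning error along one axis: three times the standard
    deviation of the tracking error, in the same units as the inputs."""
    error = np.asarray(measured, dtype=float) - np.asarray(commanded, dtype=float)
    return 3.0 * float(np.std(error))

# Hypothetical usage with logged data of the 25 um circular trajectory:
# three_delta_error(x_measured, x_commanded)   # reported value: 0.038 um
# three_delta_error(y_measured, y_commanded)   # reported value: 0.054 um
```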
The field of view of the proposed micro-vision system is 76.1 × 63.7 μm2, and the calibrated pixel-displacement conversion factor is 0.0311 μm/pixel. The in-plane reachable workspace of the proposed nanopositioner is more than four times larger than the field of view of the micro-vision system, and the 3δ trajectory tracking precision of the nanopositioner is close to the equivalent displacement of one pixel.
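As a quick consistency check using only the values reported above (the 2448 × 2048 pixel sensor, the 0.0311 μm/pixel conversion factor, and the 140 × 170 μm2 workspace):

2448 \times 0.0311\ \mu\text{m} \approx 76.1\ \mu\text{m}, \qquad 2048 \times 0.0311\ \mu\text{m} \approx 63.7\ \mu\text{m}

\frac{140 \times 170}{76.1 \times 63.7} \approx \frac{23800}{4848} \approx 4.9

so the in-plane workspace covers the field of view roughly 4.9 times over, consistent with the statement above.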

4.2. Test Results of the Out-of-Focus-Plane Motion Generation Ability

The out-of-plane stroke of the nanopositioner was tested. The results are shown in Figure 5.
As shown in Figure 5, the out-of-plane stroke of the nanopositioner is 90.4 μm. For every point within the 76.1 × 63.7 μm2 field of view, the corresponding out-of-plane stroke of the nanopositioner is more than ten times larger than the depth of focus of the micro-vision system. The out-of-plane motion of the nanopositioner is therefore sufficient for out-of-focus-plane excitation of the micro-vision system.

4.3. Test Results of the Out-of-Focus-Plane Performance

The field of view of the selected micro-vision system is 76.1 × 63.7 μm2, and the sampling rate is 15 Hz. An in-focus-plane circular trajectory was generated, and different out-of-focus-plane harmonic displacements were added to it. The motion tracking results for the circular diameter are shown in Table 1.
As shown in Table 1, out-of-focus-plane displacements changed the in-focus-plane measurement results of the micro-vision system. When the out-of-focus-plane displacement reached a threshold of 7.737 ± 2.512 μm, the micro-vision system could no longer track the target.
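For reference, the accuracy degradation values in Table 1 are the differences between each tracking result and the result at (nominally) zero out-of-focus-plane displacement, 28.474 μm; for example, for the largest excitation that could still be tracked:

29.027\ \mu\text{m} - 28.474\ \mu\text{m} = 0.553\ \mu\text{m}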

4.4. Performance Comparison of Spatial Nanopositioners

The proposed spatial nanopositioner possesses a compact structure of Φ200 × 56 mm3, an in-plane workspace of 140 × 170 μm2, and an out-of-plane stroke of 90.4 μm. Compared with other nanopositioners [19,20,29], the presented nanopositioner has the ability to evaluate the out-of-focus-plane performance of micro-vision systems and can be easily embedded into such systems. The nanopositioner proposed in [20], for instance, was developed to expand the practical application of optical alignment elements in projection lenses for 193 nm immersion lithography.
Additionally, the 3δ positioning accuracy of the proposed nanopositioner is satisfactory, being close to the identified displacement of one pixel of the micro-vision system. A comparison of the key performance indexes of the selected nanopositioners is shown in Table 2.

5. Conclusions

A spatial nanopositioner is proposed in this paper. The end-effector acts as the in-focus-plane measurement target of the micro-vision system. A 3-RRRR CPM is employed to generate the in-plane motion, and another 3-RRRR CPM is used to generate different out-of-plane displacements to evaluate the out-of-focus-plane performance of the micro-vision system. The micro-vision system uses the typical GTM and ROI methods. The experimental results verify the accuracy degradation of the in-focus-plane motion tracking of the micro-vision system under different out-of-focus-plane displacements. The proposed nanopositioner thus possesses sufficient motion generation ability to evaluate the out-of-focus-plane performance of micro-vision systems.
Future research will focus on the accuracy deterioration caused by high-frequency out-of-focus-plane displacements and the diffraction effect, and the real-time compensation or correction of the micro-vision system at the software level.

Author Contributions

Conceptualization, R.W. and H.W.; methodology, R.W. and H.W.; validation, R.W. and H.W.; formal analysis, R.W. and H.W.; investigation, R.W.; data curation, R.W. and H.W.; writing—original draft preparation, R.W.; writing—review and editing, R.W. and H.W.; project administration, R.W. and H.W.; funding acquisition, R.W. and H.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant Nos. 51905106, 62173098), and the Guangzhou Basic and Applied Basic Research Foundation (Grant No. 202201010398).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Research data are available from the authors.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Wu, H.; Zhang, X.; Gan, J.; Li, H.; Ge, P. Displacement measurement system for inverters using computer micro-vision. Opt. Lasers Eng. 2016, 81, 113–118.
2. Li, H.; Zhu, B.; Chen, Z.; Zhang, X. Realtime in-plane displacements tracking of the precision positioning stage based on computer micro-vision. Mech. Syst. Signal Process 2019, 124, 111–123.
3. Xu, J.; He, X.; Ji, W. Mechanical system and template-matching-based position-measuring method for automatic spool positioning and loading in welding wire winding. Appl. Sci. 2020, 10, 3762.
4. Wang, Y.; Liu, W.; Li, F.; Li, H.; Zha, W.; He, J.; Ma, G.; Duan, Y. A fast template matching method based on improved ring projection transformation and local dynamic time warping. Optik 2020, 216, 164954.
5. Feemster, M.; Piepmeier, J.A.; Biggs, H.; Yee, S.; ElBidweihy, H.; Firebaugh, S.L. Autonomous microrobotic manipulation using visual servo control. Micromachines 2020, 11, 132.
6. Li, H.; Zhu, B.; Zhang, X.; Wei, J.; Fatikow, S. Pose sensing and servo control of the compliant nanopositioners based on microscopic vision. IEEE Trans. Ind. Electron. 2020, 68, 3324–3335.
7. Xu, Z.; Han, G.; Du, H.; Wang, X.; Wang, Y.; Liu, J.; Yang, Y. A generic algorithm for position-orientation estimation with microscopic vision. IEEE Trans. Instrum. Meas. 2022, 71, 5013010.
8. Sha, X.; Li, W.; Lv, X.; Lv, J.; Li, Z. Research on auto-focusing technology for micro vision system. Optik 2017, 142, 226–233.
9. Li, H.; Zhang, X.; Wu, H.; Gan, J. Line-based calibration of a micro-vision motion measurement system. Opt. Lasers Eng. 2017, 93, 40–46.
10. Li, D.; Wang, S.; Fu, Y. Quality detection system and method of micro-accessory based on microscopic vision. Mod. Phys. Lett. B 2017, 31, 1750270.
11. Yao, S.; Li, H.; Pang, S.; Yu, L.; Fatikow, S.; Zhang, X. Motion measurement system of compliant mechanisms using computer micro-vision. Opt. Express 2021, 29, 5006–5017.
12. Wang, R.; Zhang, X. A planar 3-DOF nanopositioning platform with large magnification. Precis. Eng. 2016, 46, 221–231.
13. Wang, R.; Zhang, X. Optimal design of a planar parallel 3-DOF nanopositioner with multi-objective. Mech. Mach. Theory 2017, 112, 61–83.
14. Wang, R.; Zhang, X. Parameters optimization and experiment of a planar parallel 3-DOF nanopositioning system. IEEE Trans. Ind. Electron. 2017, 65, 2388–2397.
15. Zhu, Z.; To, S.; Zhu, W.-L.; Li, Y.; Huang, P. Optimum design of a piezo-actuated triaxial compliant mechanism for nanocutting. IEEE Trans. Ind. Electron. 2017, 65, 6362–6371.
16. Liu, Y.; Deng, J.; Su, Q. Review on multi-degree-of-freedom piezoelectric motion stage. IEEE Access 2018, 6, 59986–60004.
17. Mohith, S.; Upadhya, A.R.; Navin, K.P.; Kulkarni, S.M.; Rao, M. Recent trends in piezoelectric actuators for precision motion and their applications: A review. Smart Mater. Struct. 2020, 30, 13002.
18. Yang, C.; Youcef-Toumi, K. Principle, implementation, and applications of charge control for piezo-actuated nanopositioners: A comprehensive review. Mech. Syst. Signal Process 2022, 171, 108885.
19. Varadarajan, K.M.; Culpepper, M.L. A dual-purpose positioner-fixture for precision six-axis positioning and precision fixturing: Part II. Characterization and calibration. Precis. Eng. 2007, 31, 287–292.
20. Zhang, D.; Li, P.; Zhang, J.; Chen, H.; Guo, K.; Ni, M. Design and assessment of a 6-DOF micro/nanopositioning system. IEEE/ASME Trans. Mechatronics 2019, 24, 2097–2107.
21. Gu, G.-Y.; Zhu, L.-M.; Su, C.-Y.; Ding, H.; Fatikow, S. Modeling and control of piezo-actuated nanopositioning stages: A survey. IEEE Trans. Autom. Sci. Eng. 2014, 13, 313–332.
22. Wu, Z.; Xu, Q. Survey on recent designs of compliant micro-/nano-positioning stages. Actuators 2018, 7, 5.
23. Wang, S.; Rong, W.; Wang, L.; Xie, H.; Sun, L.; Mills, J.K. A survey of piezoelectric actuators with long working stroke in recent years: Classifications, principles, connections and distinctions. Mech. Syst. Signal Process 2019, 123, 591–605.
24. Yi, C.; Baoxing, W.; Gang, M.; Miao, L.; Hong, Z. Design analysis and optimization of large range spatial translational compliant micro-positioning stage. J. Mech. Eng. 2020, 56, 71.
25. Lyu, Z.; Xu, Q. Recent design and development of piezoelectric-actuated compliant microgrippers: A review. Sens. Actuators A Phys. 2021, 331, 113002.
26. Kang, S.; Lee, M.G.; Choi, Y.-M. Six degrees-of-freedom direct-driven nanopositioning stage using crab-leg flexures. IEEE/ASME Trans. Mechatronics 2020, 25, 513–525.
27. Xu, H.; Zhou, H.; Tan, S.; Duan, J.-A.; Hou, F. A six-degree-of-freedom compliant parallel platform for optoelectronic packaging. IEEE Trans. Ind. Electron. 2021, 68, 11178–11187.
28. Pham, M.T.; Yeo, S.H.; Teo, T.J.; Wang, P.; Nai, M.L.S. A decoupled 6-DOF compliant parallel mechanism with optimized dynamic characteristics using cellular structure. Machines 2021, 9, 5.
29. Cai, K.; Tian, Y.; Liu, X.; Fatikow, S.; Wang, F.; Cui, L.; Zhang, D.; Shirinzadeh, B. Modeling and controller design of a 6-DOF precision positioning system. Mech. Syst. Signal Process 2018, 104, 536–555.
Figure 1. In-plane actuators and sensors of the proposed spatial nanopositioner. (a) 3D view; (b) top view.
Figure 2. Out-of-plane actuators and sensors of the proposed spatial nanopositioner. (a) 3D view; (b) top view.
Figure 3. Prototype setup of the proposed nanopositioner, micro-vision system, and controllers.
Figure 4. In-focus-plane motion generation ability using the proposed spatial nanopositioner. (a) planar workspace; (b) positioning error, x axis; (c) positioning error, y axis.
Figure 5. Out-of-focus-plane motion generation ability using the proposed spatial nanopositioner. (a) workspace, z-o-x plane; (b) workspace, z-o-y plane.
Table 1. In-focus-plane tracking results excited by out-of-focus-plane displacements.

| Out-of-Focus-Plane Displacements/μm | In-Focus-Plane Tracking Results/μm | In-Focus-Plane Accuracy Degradation/μm |
|---|---|---|
| −0.234 ± 2.518 | 28.474 ± 0.233 | 0.000 |
| 1.895 ± 2.515 | 28.606 ± 0.180 | 0.132 |
| 3.061 ± 2.515 | 28.638 ± 0.148 | 0.164 |
| 5.579 ± 2.513 | 28.892 ± 0.330 | 0.418 |
| 7.279 ± 2.513 | 29.027 ± 0.349 | 0.553 |
| 7.737 ± 2.512 | cannot work | cannot work |
Table 2. Comparison of key performance indexes of selected spatial nanopositioners.

| Nanopositioner | Proposed | V. [19] | Z. [20] | C. [29] |
|---|---|---|---|---|
| Dimension/mm3 | Φ200 × 56 | 250 × 250 × 80 | Φ264 × 148 | Φ150 × 143 |
| In-plane workspace/μm2 | 140 × 170 | 40 × 40 | 80 × 80 | 8.2 × 10.5 |
| Out-of-plane stroke/μm | 90.4 | 80 | 60 | 13.2 |
| x-axis, 3δ accuracy/μm | 0.038 | 0.033 | 0.030 | 0.093 * |
| y-axis, 3δ accuracy/μm | 0.054 | 0.033 | 0.030 | 0.081 * |
Note: * = the maximum tracking error from the experimental result of bi-axial circular trajectories (3δ accuracy is not provided in this reference).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
