Image stitching algorithm for super-resolution localization microscopy combined with fluorescence noise prior

Super-resolution panoramic pathological imaging provides a powerful tool for biologists to observe the ultrastructure of samples. Localization data can preserve the essential ultrastructural information of biological samples in a small storage space, and thus provides a new opportunity for stitching super-resolution images. However, existing image stitching methods based on localization data cannot accurately calculate the registration offset in sample regions with no or few structural points, which leads to registration errors. Here, we propose a stitching framework called PNanoStitcher. The framework fully utilizes the distribution characteristics of the background fluorescence noise in the stitching region and resolves stitching failures in sample regions with no or few structural points. We verified our method on both simulated and experimental datasets, and compared it with existing stitching methods. PNanoStitcher achieved superior stitching results on biological samples containing regions with no or few structural points. This study provides an important driving force for the development of super-resolution digital pathology.


Introduction
Super-resolution localization microscopy (SRLM) overcomes the diffraction-limited resolution barrier of optical microscopy and achieves resolution on the order of 10-20 nm [1], which provides a new opportunity for biologists to study molecular interactions at the subcellular level [2]. However, in biomedical research, pathologists not only need to observe the local structure of biological samples, but also hope to observe the whole structure to count and capture the overall information of biological phenomena [3,4]. For example, the tumor microenvironment contains epithelial tissue, stromal tissue, and various components such as tumor cells and immune cells between these tissues [5]. The spatial distribution of and interactions between these components are of great value for cancer diagnosis and prognosis [6,7]. Super-resolution panoramic pathological imaging combines the advantages of a large field of view (FOV) and super-resolution, and thus not only provides nanoscale resolution to visualize the cytoskeleton and small organelles, but also achieves millimeter-scale imaging to observe cell networks [8]. This fully satisfies the need of pathologists to observe the entire sample at nanoscale resolution.
A super-resolution panoramic image is typically stitched from a large number of overlapping super-resolution images. Currently, there are three main stitching approaches: stitching based on hardware, stitching based on grayscale images, and stitching based on localization data. The first method is hardware-based. During SRLM, a large number of super-resolution images with small FOV are typically acquired sequentially through an automatic XY stage, and these super-resolution images are superimposed with a fixed offset (which is controlled by the XY stage and is used to obtain the overlapping area) to obtain a super-resolution panoramic image. However, it is difficult to control the fixed offset at very high precision (nanometer or better). Also, the XY stage may be too expensive (50-60 k USD) for some biomedical laboratories. More importantly, the multi-FOV mosaic imaging process can cause cumulative errors in the final panoramic image due to nanoscale offset errors. Therefore, the development of an automated super-resolution stitching method has become an urgent need for panoramic pathology.
The second method, stitching based on grayscale images, utilizes the sequential information of grayscale images to measure the similarity of adjacent stitching regions and thereby complete the panoramic image stitching. The BigStitcher plugin, proposed by Hörl et al., can achieve fast and accurate alignment of three-dimensional high-resolution images, but it requires manual setting of stitching parameters and often relies heavily on user experience [9]. Another plugin, MosaicExplorerJ, does not require prior information about the actual positions of the images, but it requires manually adjusting the initial positions of the images and the input parameters [10]. The MIST plugin can automatically and quickly stitch a given image sequence, but it produces poor stitching results for high-resolution images [11]. Therefore, for super-resolution images, grayscale image-based stitching methods not only show serious misalignment problems at the seams of the super-resolution images, but also suffer from image overlapping issues. More importantly, these stitching methods can only render images at a fixed resolution. Note that if the rendering resolution is not high enough, detailed information may be lost. Additionally, if grayscale image technology is used, a super-resolution panoramic pathological image can be very large, which causes storage pressure or memory overflow. For example, suppose we want to obtain a panoramic image with a FOV of 0.5 mm × 0.5 mm containing 115,183,663 localization points. When we rendered the grayscale images with a pixel size of 20 nm, the final file size of the SR image was 21.43 GB, while the file size of the corresponding localization data was only 5.14 GB (each localization point includes 12 parameters, and each parameter occupies 4 bytes).
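As a quick sanity check of the storage figures above, the localization-table size follows directly from the point count and the per-point record layout (12 parameters of 4 bytes each). A minimal Python calculation, assuming binary units (1 GB = 1024^3 bytes):

```python
def localization_table_size_gb(n_points, n_params=12, bytes_per_param=4):
    """Size of a localization table in GB (binary, 1 GB = 1024**3 bytes)."""
    return n_points * n_params * bytes_per_param / 1024**3

# Point count from the 0.5 mm x 0.5 mm panoramic example above
size_gb = localization_table_size_gb(115_183_663)
print(f"{size_gb:.2f} GB")  # ~5.15 GB, close to the 5.14 GB quoted in the text
```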
The third method, stitching based on localization data, utilizes the original localization table (obtained from the super-resolution imaging process) for stitching. This table typically preserves the peak intensity, localization accuracy, signal-to-noise ratio, and other information of the sample during image acquisition. Moreover, the file used to store a localization table occupies much less memory than the files storing the corresponding grayscale images, which makes the stitching much more efficient. NanoStitcher [12], a super-resolution image stitching framework based on localization data, was first proposed by Du et al. in 2021. NanoStitcher uses a Gaussian Mixture Probability Model (GMPM) to calculate the registration offset between adjacent localization points [13], which effectively addresses the issues in grayscale image stitching, such as large file memory occupation, loss of detailed information, and different levels of rendering. The model performs probabilistic calculations on structural points at positions with cell structures to obtain the image stitching parameters. However, in practice, there are many biological gaps between cells or between cells and tissues, where no structural points exist (see the green dotted box in Fig. 1). That is to say, NanoStitcher cannot calculate the registration offset at positions with no or few structural points in the intercellular spaces, or the calculated results may deviate greatly. This situation causes serious misalignment in both horizontal and vertical stitching (see the blue dotted boxes in Fig. 1) and results in cumulative errors, which affect the observation and analysis of biological information in subsequent super-resolution panoramic images. Therefore, it is necessary to develop new stitching schemes to address the remaining issue in NanoStitcher, namely the stitching misalignment in biological samples with no or few structural points.
By analyzing the process of super-resolution localization imaging, we found that fluorescent dyes, which are randomly distributed in the background region of the sample, usually generate fluorescent noise spots, and the distribution of these noise spots in adjacent images is roughly the same. That is, although there are no structural points between some cells or tissues in super-resolution localization images, background fluorescent noise points are likely to appear in adjacent overlapping regions. However, fluorescent noise points excited during the first capture may not be excited during the second capture. In other words, at the seam of adjacent super-resolution localization images, a noise point either has no corresponding matching noise point, or forms a well-matched fluorescence noise pair.
Based on the above analysis, we propose a stitching framework called PNanoStitcher, which fully utilizes the distribution characteristics of fluorescent noise points at the seams during super-resolution localization imaging. We aimed to address the issue of stitching failure in regions with no or few structural points. PNanoStitcher performs probabilistic calculations based on structural points (in regions with structural points), applies a noise registration algorithm based on fluorescent noise points (in regions with no or few structural points) to obtain the stitching parameters, and then completes panoramic image stitching based on the localization table. The framework includes five steps: path planning, downsampling at the seams, image registration, deduplication at the seams, and panoramic image fusion. We validated the stitching performance of PNanoStitcher against other representative stitching frameworks using both simulated and experimental datasets. The results show that PNanoStitcher achieved better stitching performance for super-resolution panoramic images.

Framework of panoramic super-resolution image stitching
PNanoStitcher (shown in Fig. 2) is based on localization table data and includes the following steps. Firstly, localization tables for multiple regions of interest (ROIs) are input, and the optimal stitching path is selected to reduce cumulative errors. Secondly, the overlapping regions between two stitched images are downsampled to improve computational efficiency. Next, the point density is calculated and used to classify each stitching region as either a structural point region (registration based on structural points) or a fluorescent noise point region (registration based on fluorescent noise points). Finally, a series of processing steps (including coordinate unification, deduplication, and panoramic fusion) are performed to obtain the final panoramic image. For downsampling at the stitching region, deduplication, and panoramic fusion, PNanoStitcher adopts the voxel-grid filtering algorithm [14], the same as NanoStitcher. In the following sections, we focus on the path planning and pairwise image registration algorithms.

Stitching path planning
During the stitching process, due to the influence of multiple factors (such as noise, structural gaps, and data quality in overlapping regions), different stitching paths accumulate different registration errors, which ultimately affect the stitching performance of the panoramic image. Therefore, rational stitching path planning is an important step in super-resolution panoramic image stitching.
The reported NanoStitcher method quantifies the overlap degree between two adjacent regions by calculating the Euclidean distance between adjacent sub-images [15], and then constructs a minimum spanning tree [16] for path planning. However, NanoStitcher only considers the offset between images and does not fully utilize the offset information from the XY displacement stage, which causes an imaging benchmark error. Therefore, in PNanoStitcher, we combine the offset calculated between adjacent sub-images with the known XY stage displacement values to enable high-precision registration. The calculation is shown in Eq. (1):

D = sqrt((dx - I_x)^2 + (dy - I_y)^2)    (1)

Here, dx and dy represent the horizontal and vertical offsets calculated by the registration algorithm, and I_x and I_y are the known stage displacement values.
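A minimal sketch of how the edge weight combining the registered offset with the known stage displacement could be evaluated; the function name `registration_distance` and the exact Euclidean form are illustrative assumptions based on the symbol definitions above:

```python
import math

def registration_distance(dx, dy, Ix, Iy):
    """Edge weight for path planning: deviation between the offset (dx, dy)
    computed by the registration algorithm and the known stage displacement
    (Ix, Iy), expressed as a Euclidean distance."""
    return math.hypot(dx - Ix, dy - Iy)

# Hypothetical values: registration found (963, 2) px, the stage reports (960, 0) px
print(registration_distance(963.0, 2.0, 960.0, 0.0))  # sqrt(3^2 + 2^2) ~ 3.61
```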
We used the following procedure to compare the path planning performance of NanoStitcher and PNanoStitcher. Firstly, we cut an image into 2 × 2 sub-images, and calculated the distances between adjacent sub-images using NanoStitcher and PNanoStitcher, respectively. Secondly, we constructed a minimum spanning tree to obtain the two stitching paths. Finally, we compared the stitching results with the Groundtruth using Mean Squared Error (MSE) and Structural Similarity Index (SSIM) values. The experimental results obtained using NanoStitcher and PNanoStitcher, respectively, are shown in Fig. 3. It is worth noting that after registration, we did not carry out any deduplication or fusion. In this way, we avoided random removal of localization points, which may affect the evaluation. The experimental results show that PNanoStitcher provides a higher SSIM value and a smaller MSE value than NanoStitcher (Fig. 3). From the enlarged images (the white arrows in Fig. 3), we can observe that NanoStitcher produces discontinuities and stitching misalignment, whereas the stitching result obtained by PNanoStitcher is closer to the Groundtruth. Therefore, the path planning strategy based on known displacement values results in less misalignment and better stitching quality.
Below is a detailed description of the steps in path planning, where image stitching of 5 × 5 ROIs (sub-images) is used as an example.
Step 1: Take an initial sub-image as the starting vertex, and calculate the distance between each pair of sub-images using the known displacement value in Eq. (1).
Step 2: Use the distances obtained from Step 1 as the weights of the edges, and calculate a weighted graph, as shown in Fig. 4.
Step 3: Use the Kruskal algorithm to construct the minimum spanning tree, and then obtain the shortest path of the graph, which is the optimal path for stitching a panoramic image.
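Steps 1-3 can be sketched as follows; the grid size, edge weights, and function names are illustrative, and in practice the weights would come from Eq. (1):

```python
def kruskal_mst(n_vertices, edges):
    """Kruskal's algorithm with a union-find structure.
    edges: list of (weight, u, v). Returns MST edges as (u, v, weight)."""
    parent = list(range(n_vertices))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    mst = []
    for w, u, v in sorted(edges):          # examine edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                       # adding this edge creates no cycle
            parent[ru] = rv
            mst.append((u, v, w))
    return mst

# 2 x 2 grid of sub-images (vertices 0..3) with hypothetical registration distances
edges = [(1.2, 0, 1), (0.8, 0, 2), (2.5, 1, 3), (0.6, 2, 3), (3.0, 1, 2)]
print(kruskal_mst(4, edges))  # 3 edges connect the 4 sub-images
```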

Pairwise image registration
NanoStitcher utilizes a Gaussian Mixture Probability Model (GMPM) to perform probability calculations on regions with structural points and then complete registration. In practice, however, the seams of biological samples often contain no or few structural points. In these regions with sparse structural points, it is difficult for the GMPM to perform probability calculations, which leads to significant errors in the registration offset. In this paper, we use the distribution characteristics of fluorescent noise to obtain the registration offset for sample regions with no or few structural points. The equations for matching fluorescent noise pairs at the seams are as follows:

|(x_i - x_j) - d| ≤ ε    (2)

|arctan((y_i - y_j) / (x_i - x_j))| ≤ θ    (3)

Here, x_i and x_j are the horizontal coordinates of a noise point in the first and second sub-images, and y_i and y_j are the vertical coordinates corresponding to x_i and x_j. d is the displacement value of the XY stage during the imaging process. ε and θ represent the allowed errors of relative displacement (horizontal coordinate direction) and relative offset angle (vertical coordinate direction) of the two noise points (i and j). Equation (2) constrains the distance for matching noise points at the seam, and Eq. (3) constrains the angle. Using Eqs. (2) and (3), we can screen out the matching noise points in the stitching region that meet both the distance and angle criteria.
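The matching procedure can be sketched as follows; the exact threshold forms are our reading of Eqs. (2) and (3) from the symbol definitions, and all function names and sample coordinates are illustrative:

```python
import math

def match_noise_pairs(points_a, points_b, d, eps, theta):
    """Pair noise points between two sub-images along a horizontal seam.
    points_a, points_b: lists of (x, y) noise coordinates.
    d: known XY-stage displacement (pixels). A pair (i, j) matches when its
    displacement error stays within eps and its offset angle within theta."""
    pairs = []
    for i, (xi, yi) in enumerate(points_a):
        for j, (xj, yj) in enumerate(points_b):
            dist_err = abs((xi - xj) - d)                 # distance criterion
            angle_err = abs(math.atan2(yi - yj, xi - xj)) # angle criterion
            if dist_err <= eps and angle_err <= theta:
                pairs.append((i, j))
    return pairs

# Hypothetical 5 x 5 ROI setting: d = 960 px, eps = 10 px, theta = eps / d
a = [(965.0, 100.0), (970.0, 300.0)]
b = [(3.0, 101.0), (500.0, 250.0)]
print(match_noise_pairs(a, b, d=960.0, eps=10.0, theta=10.0 / 960.0))
```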
PNanoStitcher uses the GMPM to perform probability calculations in regions with structural points, and uses the average Euclidean distance of matched noise points as the registration offset in regions with fluorescent noise points. In this way, we effectively address the issue of stitching failure in regions of biological samples with limited structural points.
The flowchart of the registration process in PNanoStitcher is shown in Fig. 5, and the specific steps are shown below.
Step 1: Divide the seam of two sub-images to be registered into 10 equal regions, and calculate the point density in each region.
Step 2: Determine whether a region is a structural point region or a noise point region according to its point density. If the point density of a region is greater than a threshold, it is considered a structural point region (see the blue box in the left part of Fig. 5); otherwise, it is considered a noise point region (see the yellow box in the left part of Fig. 5). The threshold in this paper was set to 400, which was verified on our experimental datasets.
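Step 2 amounts to a simple density thresholding over the 10 seam regions; a minimal sketch (function name and sample densities are illustrative):

```python
def classify_regions(densities, threshold=400):
    """Label each seam region by its localization-point density: above the
    threshold (400 in this paper) -> structural, otherwise -> noise."""
    return ["structural" if d > threshold else "noise" for d in densities]

# Hypothetical densities for the 10 equal regions along one seam
print(classify_regions([1250, 30, 15, 980, 2, 610, 5, 12, 700, 450]))
```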
Step 3: Find matching noise points at the seams. Two noise points are identified as matching points when they satisfy Eqs. (2) and (3). Here, the value of the XY displacement d is known from the image acquisition process; that is, d is 960 pixels for a 5 × 5 ROIs experiment and 870 pixels for a 3 × 3 ROIs experiment. The initial value of the relative displacement ε is set to 10 pixels and 5 pixels for 5 × 5 ROIs and 3 × 3 ROIs images, respectively, and the angle θ = ε/d.
Step 4: Use the noise registration algorithm to register the noise point regions. Accumulate the calculation results over the noise point regions and divide them by the number of noise point regions to obtain the displacement result for the noise point regions. Here, the average Euclidean distance of the matched noise points is used as the registration offset for regions with noise points.
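Step 4 can be sketched as follows, assuming horizontal stitching with known stage displacement d; the function name, averaging form, and coordinates are illustrative:

```python
def noise_region_offset(matched_a, matched_b, d):
    """Mean residual offset of matched noise pairs, used as the registration
    displacement for a noise-point region. matched_a[i] and matched_b[i] are
    the paired (x, y) noise points from the first and second sub-images."""
    n = len(matched_a)
    dx = sum((xa - xb) - d for (xa, _), (xb, _) in zip(matched_a, matched_b)) / n
    dy = sum(ya - yb for (_, ya), (_, yb) in zip(matched_a, matched_b)) / n
    return dx, dy

# Two hypothetical matched pairs, stage displacement d = 960 px
print(noise_region_offset([(965.0, 100.0), (970.0, 205.0)],
                          [(3.0, 101.0), (8.0, 204.0)], d=960.0))  # (2.0, 0.0)
```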
Step 5: Use the GMPM to register the structural point regions. Accumulate the calculation results over the structural point regions and divide them by the number of structural point regions to obtain the displacement result for the structural point regions. Note that the GMPM is also the pairwise registration algorithm in NanoStitcher.
Step 6: Assign weights to the two relative displacement values according to the proportions of structural point regions and noise point regions at the seam, and add the weighted values to obtain the final relative displacement, thus completing the pairwise image registration. For example, if there are 7 structural point regions and 3 noise point regions along the seam of two adjacent sub-images, the final displacement is the displacement of the structural point regions multiplied by 0.7 plus the displacement of the noise point regions multiplied by 0.3.
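The weighting in Step 6 can be sketched as follows (function name and sample values are illustrative):

```python
def combine_displacements(struct_disp, noise_disp, n_struct, n_noise):
    """Weight the structural-region and noise-region displacement estimates
    by their region counts along the seam and sum them.
    struct_disp, noise_disp: (dx, dy) tuples; n_struct, n_noise: region counts."""
    total = n_struct + n_noise
    ws, wn = n_struct / total, n_noise / total
    return (ws * struct_disp[0] + wn * noise_disp[0],
            ws * struct_disp[1] + wn * noise_disp[1])

# The worked example from the text: 7 structural and 3 noise regions
print(combine_displacements((4.0, 2.0), (6.0, 1.0), n_struct=7, n_noise=3))
# (0.7 * 4 + 0.3 * 6, 0.7 * 2 + 0.3 * 1) ~ (4.6, 1.7)
```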

Peak intensity
Since the Groundtruth is not acquired at the same time as the sub-images, the fluorescent molecules undergo secondary excitation. This not only weakens or even eliminates the signal, but also changes the locations of the fluorescence spots due to the displacement of the XY stage. Therefore, it is not possible to compare the absolute coordinate positions with the Groundtruth. However, the fluorescent spots within the sample region are roughly fixed, and the total peak intensity of the molecules can represent the number of spots in the super-resolution localization table [17]. Therefore, the total peak intensity of the spots is used as an evaluation metric for the stitching performance. When the stitching result is consistent with the Groundtruth, the fluorescent noise points and structural points at the seam are the same, and the total peak intensity within the seam is also the same. The closer the total peak intensity within the same region is to that of the Groundtruth, the better the stitching performance. When the images represented by two localization tables are identical, the peak intensity ratio equals 1. The peak intensity ratio is calculated as:

P = Σ_{i=1}^{n} I(x_i, y_i) / Σ_{j=1}^{m} I(x_j, y_j)

Here, P represents the peak intensity ratio between the experimental result and the Groundtruth; the closer P is to 1, the better the stitching performance. n and m represent the numbers of points within the seam for the experimental result and the Groundtruth, respectively, and I(x_i, y_i) and I(x_j, y_j) represent the peak intensities of the points (x_i, y_i) and (x_j, y_j), respectively.
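The ratio can be computed directly from the two lists of seam peak intensities; a minimal sketch with illustrative values:

```python
def peak_intensity_ratio(result_intensities, groundtruth_intensities):
    """Ratio of the total peak intensity of seam points in the stitched
    result to that of the Groundtruth; values near 1 indicate a good stitch."""
    return sum(result_intensities) / sum(groundtruth_intensities)

# Hypothetical seam intensities: totals match, so P = 1.0 despite point-wise differences
print(peak_intensity_ratio([120.0, 80.0, 200.0], [130.0, 75.0, 195.0]))  # 1.0
```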

Structural similarity index (SSIM)
The Structural Similarity Index (SSIM) [18] is a metric for measuring the structural similarity between two images, and is calculated from three components: luminance, contrast, and structure. The range of SSIM is from -1 to 1; when two images are identical, SSIM equals 1. We used SSIM to quantify the performance of image stitching. The mean value is used to estimate luminance, the standard deviation is used to estimate contrast, and the covariance is used to measure structural similarity. SSIM is calculated as:

SSIM(x, y) = [(2 μ_x μ_y + c_1)(2 σ_xy + c_2)] / [(μ_x^2 + μ_y^2 + c_1)(σ_x^2 + σ_y^2 + c_2)]

Here, μ_x and μ_y represent the mean values of x and y, respectively; σ_x^2 and σ_y^2 are the variances of x and y, respectively; and σ_xy is the covariance. Two constants maintain the stability of SSIM: c_1 = (k_1 L)^2 and c_2 = (k_2 L)^2, where k_1 is set to 0.01, k_2 is set to 0.03, and L is the dynamic range of the pixel values.
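A direct, global implementation of the SSIM formula above on flat lists of pixel values (production implementations usually compute SSIM over local windows; names and sample values are illustrative):

```python
def ssim(x, y, L=255, k1=0.01, k2=0.03):
    """Global SSIM between two equally sized pixel-value sequences."""
    n = len(x)
    mu_x, mu_y = sum(x) / n, sum(y) / n
    var_x = sum((v - mu_x) ** 2 for v in x) / n          # sigma_x^2
    var_y = sum((v - mu_y) ** 2 for v in y) / n          # sigma_y^2
    cov = sum((a - mu_x) * (b - mu_y) for a, b in zip(x, y)) / n  # sigma_xy
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2                # stability constants
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

img = [10.0, 50.0, 90.0, 130.0]
print(ssim(img, img))  # identical images give SSIM = 1.0
```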

Mean squared error (MSE)
Mean Squared Error (MSE) [19] measures the average squared difference between the estimated values and the true values of parameters. In image processing, MSE is the mean of the squared differences in pixel values between the processed image and the original image:

MSE = (1 / (M N)) Σ_{i=1}^{M} Σ_{j=1}^{N} [f(i, j) - f'(i, j)]^2

Here, f(i, j) represents the original image, f'(i, j) represents the image to be evaluated, and M and N represent the length and width of the image, respectively. A smaller MSE value indicates better similarity between the two images.
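A direct implementation of the MSE definition above for two equally sized grayscale images stored as 2D lists (illustrative sketch):

```python
def mse(f, f_prime):
    """Mean squared pixel difference between image f and image f_prime,
    both given as M x N nested lists."""
    M, N = len(f), len(f[0])
    return sum((f[i][j] - f_prime[i][j]) ** 2
               for i in range(M) for j in range(N)) / (M * N)

a = [[0.0, 10.0], [20.0, 30.0]]
b = [[2.0, 10.0], [20.0, 26.0]]
print(mse(a, b))  # (4 + 0 + 0 + 16) / 4 = 5.0
```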

Experimental results
We conducted three experiments on simulated and experimental datasets to verify the feasibility and effectiveness of PNanoStitcher. Firstly, we verified the registration performance of PNanoStitcher through pairwise registration experiments with different proportions of structural points. We wanted to prove that PNanoStitcher can deal with situations where the registration offsets cannot be calculated due to the absence of structural points between cells or tissues. Subsequently, we validated the performance of PNanoStitcher on simulated and experimental microtubule and mitochondria datasets.

Experimental datasets
The experimental data were obtained from our laboratory at Hainan University. The microscopy system mainly consists of an Olympus IX73 microscope, an Olympus 60x/NA1.42 objective lens, a 405 nm activation laser, a 640 nm excitation laser, and a Hamamatsu Flash 4.0 V3 camera. The experimental samples were U2OS cells, and the dye used was Dylight-633. We used simulated voltage signals to control the intensity of the 405 nm laser, and the laser intensity was automatically adjusted to maintain a constant fluorescence signal density. The exposure time for a single raw image was 10 ms, and the imaging field of view (FOV) was 100 µm × 100 µm. The focal plane within each FOV was stabilized using a home-built IRFocus-LocPara system [20], with a laser scanning Z-axis range of ± 480 nm. The focal plane was stabilized every 200 frames, and the maximum number of frames captured per FOV was 5000.

Pairwise registration with varying proportions of structure and noise
We compared the performance of PNanoStitcher (considering both noise and structural points) with NanoStitcher (considering only structural points) and NoiseStitcher (considering only noise points) in experiments with varying proportions of structure and noise. In this experiment, we selected and cropped ROIs from super-resolution images generated by our home-built imaging system. We simulated super-resolution images with different proportions of structural points: 0% (no structural points), 5%, 10%, 15%, 20%, 25%, 30%, 40%, 60%, 80%, and 100% (full structural points). We calculated MSE, SSIM, and peak intensity as metrics (as shown in Fig. 6 and Fig. 8).
When the proportion of structural points is 0%, there are no structural points at the seams, and only fluorescent noise points exist. PNanoStitcher then utilizes the fluorescent noise points for registration. At this point, the performance of PNanoStitcher is consistent with that of NoiseStitcher, as indicated by the SSIM value (close to 1) and the MSE value (close to 10). In contrast, the stitching results of NanoStitcher differ significantly from the Groundtruth. When the proportion of structural points is 5%, there are a few structural points at the seams. The performance of NanoStitcher is slightly improved, but there is still severe misalignment at the seam, with an SSIM value of 0.7379 and an MSE value of 384.7431. As the proportion of structural points increases, the performance of PNanoStitcher remains better than that of NanoStitcher. When the proportion of structural points reaches 40%, both NanoStitcher and NoiseStitcher perform well, with SSIM values approaching 0.84 and MSE values approaching 105. When the structural points in the seams become denser, the performance of NanoStitcher gradually surpasses that of NoiseStitcher, but PNanoStitcher always shows better performance. When the proportion of structural points is 60%, although NanoStitcher still lags behind PNanoStitcher, its stitching performance shows an upward trend as the proportion of structural points increases. When the proportion of structural points is 100%, owing to the difficulty of distinguishing fluorescent noise from structural points, the SSIM value of NoiseStitcher decreases to 0.4795 and the MSE value increases to 398.2, indicating significant stitching errors and severe misalignment. Meanwhile, NanoStitcher and PNanoStitcher have comparable stitching performance and both perform well. These findings verify that our proposed PNanoStitcher can obtain good stitching performance with both sparse and dense structural points.
Figure 7 shows the dependence of MSE and SSIM on the proportion of structural points. As shown in Fig. 7, the MSE values of PNanoStitcher are consistently lower than those of NanoStitcher, while the SSIM values follow the opposite trend. When the proportion of structural points is 100%, the sudden increase in the stitching error of NoiseStitcher indicates that NoiseStitcher is not suitable for scenarios with a high proportion of structural points. The experiments show that our proposed PNanoStitcher exhibits stable and excellent performance under different proportions of structural points. That is to say, PNanoStitcher can effectively address the registration failure caused by the absence of structural points at the seams of super-resolution images.
Then, we calculated the average peak intensity ratio P of the different algorithms at the seams with different proportions of structural points (see the region within the yellow lines in Fig. 6). As shown in Fig. 8, when the proportion of structural points is 0%, the peak intensity ratio of NanoStitcher is 7.89. To allow readers to observe the numerical differences more intuitively, we set the upper limit of the vertical axis to 2. We can clearly observe that NanoStitcher performs poorly in regions with sparse structural points, which is consistent with the results in Fig. 6, whereas the performance of PNanoStitcher is almost the same as that of NoiseStitcher. As the proportion of structural points increases, PNanoStitcher remains stable (around 1) compared with the other algorithms. When the proportion of structural points reaches 100%, PNanoStitcher and NanoStitcher exhibit outstanding performance, while NoiseStitcher performs poorly at this density. These findings indicate that, although PNanoStitcher does not surpass NanoStitcher in densely structured regions, it solves the problem that NanoStitcher cannot accurately calculate the registration offset in regions with no or few structural points.

Panoramic image stitching on simulated microtubule datasets
To validate the stitching performance of PNanoStitcher in super-resolution panoramic imaging, we compared the stitching results of our proposed method (PNanoStitcher) with NanoStitcher (considering only structural points) and NoiseStitcher (considering only noise points) using simulated microtubule datasets. Since the maximum imaging area of our home-built microscopy system is 1000 × 1000 pixels, we set it as the Groundtruth for 2 × 2 ROIs (500 × 500 pixels for each ROI, as shown in Fig. 9).
The process of generating the simulated datasets is as follows. Firstly, the FOV of the camera was adjusted to 108 µm × 108 µm at the sample plane, and images of 1000 × 1000 pixels were captured as the Groundtruth. Then, the FOV of the camera was adjusted again to capture four ROIs with small FOV (500 × 500 pixels for each ROI). The actual sample size corresponding to each pixel is 108 nm, and the average size of a small ROI is 54 µm × 54 µm. The overlap region of each ROI is approximately 15%, and the noise point region at the seam of the sample is larger than the structural point region. The stitching results show that PNanoStitcher performs best, NanoStitcher is slightly inferior, and NoiseStitcher is the worst. This verifies that the proposed PNanoStitcher is feasible and effective.

Panoramic image stitching on experimental microtubule datasets
To verify the performance of our method on experimental datasets, we used an experimental microtubule dataset with 5 × 5 ROIs and without Groundtruth. We compared the stitching performance of PNanoStitcher and NanoStitcher in two scenarios, with and without path planning. The average ROI size of the experimental dataset is 1000 × 1000 pixels, and each pixel corresponds to 108 nm. In this case, the average size of a single ROI is 108 µm × 108 µm, and the overlapping region is approximately 15%.
The image processing is as follows. Firstly, the 25 localization tables generated by our SRLM system were used as input (as shown in Fig. 11, the localization tables were rendered into images for display). Then, a minimum spanning tree based on Euclidean distance was constructed for path planning. Subsequently, the sub-images were registered according to the planned path. Following this step, the relative displacements of the 25 localization tables after registration were calculated, and an updated panoramic localization table was generated. Finally, the panoramic localization table was de-duplicated and fused to obtain the final stitching result. We first analyzed the experimental results of the two stitching algorithms (PNanoStitcher and NanoStitcher) without path planning. We can clearly observe that NanoStitcher produces stitching misalignment, with breaks in the filamentous structures within the cells (indicated by the white arrows in Fig. 12(a)), and there is serious misalignment in the stitching of the last row of the localization tables. Moreover, its registration algorithm cannot compute the registration offset in sample regions that lack structural points, leading to discontinuities and stitching misalignment (the blue box and red box regions in Fig. 12(a)). PNanoStitcher performs better: the filamentous structures within the cells are connected, and the stitching performance in the region with few structural points in the red box is better than that of NanoStitcher (indicated by the white arrows in the yellow box and red box regions in Fig. 12). Note that the black areas in the figure represent the actual stitching of regions without structural points. The experimental results indicate that, without path planning, PNanoStitcher outperforms NanoStitcher and can effectively address the misalignment caused by gaps between cells and tissues.
Then we analyzed the stitching performance of PNanoStitcher and NanoStitcher with path planning. The stitching framework selects the path with the smallest stitching error to reduce stitching mistakes and cumulative errors. Again, the black areas in the figure represent the actual stitching of regions without structural points. Figure 13(a) shows the stitching results of NanoStitcher with path planning. We can clearly observe that the result is better than that without path planning (Fig. 12(a)), but there are still significant misalignments (indicated by white arrows in Fig. 13(a)). The experimental results of the panoramic image stitched from 5 × 5 ROIs (see Fig. 11) show that the Gaussian Mixture Probability Model without path planning cannot solve the issues caused by gaps in the cellular structure of biological samples, and leads to severe breaks in the stitching results (see Fig. 12(a)). Although the Gaussian Mixture Model with path planning shows some improvement, it still suffers from stitching misalignment (see Fig. 13(a)). The experimental results of PNanoStitcher are clearly superior to those of NanoStitcher (see Fig. 12 and Fig. 13). Furthermore, the introduction of path planning not only effectively improves the accuracy of super-resolution image stitching, but also addresses the failure issue in gaps between cells and tissues.

Panoramic image stitching on mitochondrial datasets
To verify the generalization performance of PNanoStitcher for super-resolution panoramic image stitching, we conducted a more extensive validation using irregularly structured data (mitochondrial datasets). We compared the stitching performance of our proposed PNanoStitcher with other methods using both simulated and experimental mitochondrial datasets.
We first compared the stitching performance of the various algorithms using simulated mitochondrial datasets with 3 × 3 ROIs. We can clearly observe from Fig. 14 that NanoStitcher has the most severe stitching misalignment at the seams (white arrow in Fig. 14(c)), while PNanoStitcher shows only slight misalignment. This is because mitochondrial data are relatively sparse (with few structural points). The MSE and SSIM values of PNanoStitcher are 313.50 and 0.8971, respectively; those of NanoStitcher are 652.51 and 0.8267, respectively. NoiseStitcher performs relatively well, with an MSE of 423.81 and an SSIM of 0.8794. These results demonstrate that PNanoStitcher achieves better stitching performance on mitochondrial datasets with irregular structures, especially when the structural points are sparse.
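The MSE and SSIM figures quoted above compare each stitched rendering to the Groundtruth. As a minimal sketch of both metrics, the following implements MSE and a single-window (global) SSIM; the standard SSIM averages the same formula over local sliding windows, but the global form suffices to illustrate it:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two rendered images."""
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def global_ssim(a, b, data_range=255.0):
    """Global (single-window) SSIM with the conventional stabilizers
    C1 = (0.01 L)^2 and C2 = (0.03 L)^2 for dynamic range L."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    a, b = a.astype(float), b.astype(float)
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    num = (2 * mu_a * mu_b + c1) * (2 * cov + c2)
    den = (mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2)
    return num / den

img = np.random.default_rng(1).integers(0, 256, (64, 64))
print(mse(img, img))          # → 0.0
print(global_ssim(img, img))  # ≈ 1.0 for identical images
```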
Then we validated the stitching performance of our algorithm on irregular structures, that is, experimental mitochondrial datasets with 3 × 3 ROIs. From Fig. 15, we can clearly observe that NanoStitcher exhibits significant misalignment (indicated by the white arrows) at the seams, while PNanoStitcher presents better stitching results. We note that the experimental mitochondrial datasets have sparser structural points, that is, there are a large number of regions with few structural points. Therefore, the experimental results from regular microtubule data and irregular mitochondrial data illustrate that the proposed PNanoStitcher algorithm produces better stitching results (especially in the case of no or few structural points) and has good generalization ability. Significantly, PNanoStitcher, which incorporates a fluorescence noise prior, is mainly applicable to localization data generated by SRLM. These localization data can be regularly structured (such as microtubules) or irregularly structured (such as mitochondria). The proposed PNanoStitcher method is mainly used to stitch regions with sparse structural points at the seams, where other reported stitching methods such as NanoStitcher tend to fail. However, PNanoStitcher uses fluorescence noise data to assist the stitching of super-resolution images; if the noise points are removed from the localization data, the algorithm will be unable to find matching noise points, resulting in stitching failure. Additionally, because the registration offsets for the structured regions and the noise point regions must be calculated separately, PNanoStitcher runs relatively slowly. In the future, we aim to improve the speed of panoramic image stitching, which will contribute to clinical medical applications.

Conclusions
Utilizing the distribution characteristics of fluorescent noise points in adjacent overlapping regions, we proposed a stitching framework for super-resolution localization microscopy. This framework, called PNanoStitcher, is based on localization data. PNanoStitcher addresses the misalignment issue in previous stitching algorithms, which are unable to calculate registration offsets in sample regions with no or few structural points. In the path planning step, PNanoStitcher constructs a minimum spanning tree by combining the offsets between adjacent sub-images with the stage displacement. In this way, PNanoStitcher can effectively reduce stitching errors and cumulative errors, and thus delivers high-precision registration. In the pairwise registration step, PNanoStitcher decides whether to use structural points or fluorescent noise for registration according to the point density. This solves the problem that registration offsets cannot be accurately calculated in regions with no or few structural points. We used both simulated and experimental datasets to verify that PNanoStitcher exhibits better stitching results than other reported frameworks (NanoStitcher and NoiseStitcher). We also showed that PNanoStitcher has good generalization ability, as evidenced by extensive experiments on regular microtubule data and irregular mitochondrial data. This study effectively addresses the registration failure caused by the absence of structural points at the seams of super-resolution images, and provides a new way to obtain panoramic super-resolution images of large biological samples. In the future, we will focus on the development of multi-dimensional and multi-color super-resolution stitching algorithms for SRLM, which will greatly enhance the development of super-resolution digital pathology.
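The density-based decision in the pairwise registration step can be sketched as a simple threshold rule; the function name and the threshold value below are hypothetical placeholders, not the ones used in the paper:

```python
def pick_registration_mode(n_structural_points, overlap_area_um2,
                           density_threshold=50.0):
    """Choose how to register an overlap region: if structural points are
    dense enough (points per square micrometer), register on the structure;
    otherwise fall back to the background fluorescence noise points.
    The threshold here is purely illustrative."""
    density = n_structural_points / overlap_area_um2
    return "structure" if density >= density_threshold else "noise"

print(pick_registration_mode(5000, 25.0))   # dense region → 'structure'
print(pick_registration_mode(10, 25.0))     # sparse region → 'noise'
```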

Fig. 1. Stitching errors in NanoStitcher. The green dotted box represents structural gaps, where no or few structural points exist. The blue dotted boxes indicate stitching misalignment of NanoStitcher in the horizontal and vertical directions.

Fig. 2. The framework of PNanoStitcher. The framework includes localization table input, pre-processing, pairwise image registration, post-processing, and panoramic image output. Pre-processing includes path planning and downsampling. Post-processing includes deduplication and fusion.

Figure 3(a) shows the Groundtruth of the 2 × 2 sub-image stitching, which was obtained by cropping real experimental data of 100 µm × 100 µm. Figure 3(b) and Fig. 3(c) are the stitching results of NanoStitcher and PNanoStitcher, respectively. It is worth noting that after registration, we did not carry out any deduplication or fusion; in this way, we could avoid the random removal of localization points, which might affect the evaluation. The experimental results show that PNanoStitcher provides a higher SSIM value and a smaller MSE value than NanoStitcher (Fig. 3). From the enlarged images (the white arrows in Fig. 3), we can observe that NanoStitcher has discontinuities and stitching misalignment, and that the stitching result obtained by PNanoStitcher is closer to the Groundtruth. Therefore, the path planning strategy based on known displacement values produces less misalignment and better stitching quality. Below is a detailed description of the steps in path planning, using image stitching of 5 × 5 ROIs (sub-images) as an example. Step 1: Take an initial sub-image as the starting vertex, and calculate the distance between each pair of sub-images using the known displacement values in Eq. (1). Step 2: Use the distances obtained in Step 1 as the weights of the edges, and build a weighted graph, as shown in Fig. 4. Step 3: Use the Kruskal algorithm to construct the minimum spanning tree, and then obtain the shortest path of the graph, which is the optimal path for stitching a panoramic image.
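Step 3 relies on Kruskal's algorithm. A minimal, self-contained sketch over a toy 2 × 2 grid of ROIs follows; the edge weights are placeholder distances, not values computed from Eq. (1):

```python
def kruskal_mst(n, edges):
    """Kruskal's algorithm with union-find. `edges` holds (weight, u, v)
    tuples over nodes 0..n-1; returns the list of MST edges as (u, v, w)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):          # consider edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                       # keep the edge only if it joins two trees
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

# 2 x 2 grid of ROIs; weights stand in for pairwise registration distances.
edges = [(1.0, 0, 1), (2.5, 0, 2), (1.2, 1, 3), (3.0, 2, 3)]
mst = kruskal_mst(4, edges)
print(mst)   # → [(0, 1, 1.0), (1, 3, 1.2), (0, 2, 2.5)]
```

The n − 1 edges of the resulting tree then define the order in which sub-image pairs are registered, so every ROI is reached through the lowest-cost chain of pairwise offsets.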

Fig. 4. Selection of path planning based on 5 × 5 ROI stitching. The red circles represent ROIs, which are the nodes in the minimum spanning tree. The thin red lines indicate the adjacency relationships between the ROIs, while the thick green lines represent the paths generated through the minimum spanning tree.

Fig. 5. Flowchart of the registration process in PNanoStitcher. The left part represents the pairwise image registration process, and the right part shows a panoramic image obtained after repeated pairwise stitching.

Fig. 6. Comparison of the registration performance of the three algorithms under different proportions of structural points. The regions between the yellow dashed lines represent the stitching region (the x-axis coordinates range from 180 to 226). The values on the vertical axis represent the proportion of structural points.

Fig. 7. The dependence of the evaluation indicators on the proportion of structural points. The horizontal axis represents different proportions of structural points, and the vertical axes represent the values of MSE and SSIM, respectively. The red lines represent the values of NanoStitcher, the blue lines those of NoiseStitcher, and the yellow lines those of PNanoStitcher.

Fig. 8. The average peak density in seam regions with different proportions of structural points. The horizontal axis represents the different proportions of structural points. The vertical axis represents the peak intensity ratio relative to the Groundtruth. The black line represents the baseline ratio of the Groundtruth.

Fig. 9. Groundtruth of the 2 × 2 simulated microtubule datasets. (a)-(d) show the details of the four ROIs with small FOV, respectively. The overlap region in each ROI is approximately 15%. The experimental results are shown in Fig. 10. The areas surrounded by yellow dashed lines in each image represent the horizontal and vertical stitching seams. The stitching details along the vertical and horizontal seams are shown on the right and bottom of each image, respectively. The MSE and SSIM values of PNanoStitcher are 369.99 and 0.99, respectively; those of NanoStitcher are 403.76 and 0.90, respectively; and those of NoiseStitcher are 426.29 and 0.90, respectively. These experimental results show that the stitching result of PNanoStitcher is closest to the Groundtruth and has the best performance, NanoStitcher is slightly inferior, and NoiseStitcher is the worst. This verifies that the proposed PNanoStitcher is feasible and effective.

Fig. 10. The stitching performance of the different algorithms for 2 × 2 ROIs of the simulated microtubule datasets. (a) Groundtruth. (b) PNanoStitcher. (c) NanoStitcher. (d) NoiseStitcher. The right and bottom images of each figure are vertical and horizontal enlarged images of the seams, respectively.

Fig. 11. The original experimental microtubule datasets with 5 × 5 ROIs. We rendered the localization data into images for display.

Fig. 12. Stitching results of NanoStitcher and PNanoStitcher without path planning. (a) NanoStitcher. (b) PNanoStitcher. The green boxes represent the horizontal stitching details. The yellow boxes represent the vertical stitching details. The red boxes and the blue boxes represent stitching details at few structural points. The white arrows indicate the misalignment regions in stitching.

Figure 13(b) shows the stitching result of PNanoStitcher with path planning. PNanoStitcher exhibits excellent performance and links the filamentous structures better than in the case without path planning (see the white arrows in Fig. 12(b)); in particular, the filamentous structures with few structural points at the seams are better connected. The introduction of path planning in PNanoStitcher can effectively improve the accuracy of super-resolution image stitching.

Fig. 13. Stitching results of the two methods with path planning. (a) NanoStitcher. (b) PNanoStitcher. The green box, yellow box, and red box display the details of horizontal stitching, vertical stitching, and stitching at few structural points, respectively. The white arrows indicate the stitching misalignment regions.

Fig. 14. The stitching performance of the three algorithms on the simulated mitochondrial datasets. (a) Groundtruth. (b) PNanoStitcher. (c) NanoStitcher. (d) NoiseStitcher. The green and blue boxes are enlarged images. The white arrows indicate stitching details.

Fig. 15. The stitching results of PNanoStitcher and NanoStitcher on the experimental mitochondrial datasets. (a) PNanoStitcher. (b) NanoStitcher. The blue and yellow boxes represent enlarged images. The white arrows indicate stitching details. Note that there is no Groundtruth image for the experimental datasets.

Funding.
National Natural Science Foundation of China (82160345); Major Science and Technology Plan of Hainan (ZDKJ2021016).