
A Convergent Iterative Support Shrinking Algorithm for Non-Lipschitz Multi-phase Image Labeling Model

Published: Journal of Scientific Computing

Abstract

The non-Lipschitz piecewise constant Mumford–Shah model has been shown to be effective for image labeling and segmentation problems [33], where the non-Lipschitz isotropic \(\ell _p\) (\(0<p<1\)) regularization term has a strong ability to preserve sharp edges. However, the Alternating Direction Method of Multipliers (ADMM)-based algorithm used in [33] lacks a convergence guarantee. In this work, we propose an iterative support shrinking algorithm with proximal linearization for multi-phase image labeling problems, which is theoretically proven to be globally convergent. A key step is proving a lower bound theory for the nonzero entries of the gradient of the iterative sequence when both a box constraint and a simplex constraint appear in the target energy minimization problem. To the best of our knowledge, this is the first theoretical convergence analysis for the non-Lipschitz piecewise constant Mumford–Shah model. Numerical experiments on both two-phase and multi-phase labeling problems demonstrate the efficiency and effectiveness of the proposed algorithm.
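As a toy numerical illustration of this edge-preserving effect (our own sketch, not an experiment from the paper): under \(\phi (t)=|t|^{p}\) with \(0<p<1\), one sharp jump incurs a strictly smaller penalty than the same rise spread over several small steps, whereas the convex total variation penalty (\(p=1\)) cannot distinguish the two.

```python
# Our own toy illustration (not from the paper): the nonconvex l_p penalty
# (0 < p < 1) charges less for one sharp jump than for a smeared ramp of the
# same total rise, while l_1 (TV, p = 1) is indifferent between them.

def lp_penalty(u, p):
    """Sum of |u[j+1] - u[j]|**p over a 1-D signal (discrete l_p 'TV')."""
    return sum(abs(a - b) ** p for a, b in zip(u[1:], u[:-1]))

sharp = [0.0, 0.0, 1.0, 1.0]       # one jump of height 1
ramp = [0.0, 1 / 3, 2 / 3, 1.0]    # same rise spread over three steps

print(lp_penalty(sharp, 1.0), lp_penalty(ramp, 1.0))  # TV: both ~1.0
print(lp_penalty(sharp, 0.5), lp_penalty(ramp, 0.5))  # l_p: 1.0 vs ~1.732
```

Because the ramp is strictly more expensive when \(0<p<1\), minimizers tend to concentrate transitions into single sharp edges, which is the behavior the abstract attributes to the \(\ell _p\) term.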


Data Availability

The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.

References

  1. Alpert, S., Galun, M., Brandt, A., Basri, R.: Image segmentation by probabilistic bottom-up aggregation and cue integration. IEEE Trans. Pattern Anal. Mach. Intell. 34(2), 315–327 (2011)

  2. Attouch, H., Bolte, J., Redont, P., Soubeyran, A.: Proximal alternating minimization and projection methods for nonconvex problems: an approach based on the Kurdyka–Łojasiewicz inequality. Math. Oper. Res. 35(2), 438–457 (2010)

  3. Attouch, H., Bolte, J., Svaiter, B.F.: Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward–backward splitting, and regularized Gauss–Seidel methods. Math. Program. 137(1–2), 91–129 (2013)

  4. Bae, E., Yuan, J., Tai, X.: Global minimization for continuous multiphase partitioning problems using a dual approach. Int. J. Comput. Vis. 92(1), 112–129 (2011)

  5. Bergmann, R., Fitschen, J.H., Persch, J., Steidl, G.: Iterative multiplicative filters for data labeling. Int. J. Comput. Vis. 123(3), 435–453 (2017)

  6. Bian, W., Chen, X.: Worst-case complexity of smoothing quadratic regularization methods for non-Lipschitzian optimization. SIAM J. Optim. 23(3), 1718–1741 (2013)

  7. Bian, W., Chen, X.: Linearly constrained non-Lipschitz optimization for image restoration. SIAM J. Imaging Sci. 8(4), 2294–2322 (2015)

  8. Bian, W., Chen, X., Ye, Y.: Complexity analysis of interior point algorithms for non-Lipschitz and nonconvex minimization. Math. Program. 149(1), 301–327 (2015)

  9. Cai, X., Chan, R., Nikolova, M., Zeng, T.: A three-stage approach for segmenting degraded color images: smoothing, lifting and thresholding (SLaT). J. Sci. Comput. 72, 1313–1332 (2017)

  10. Cai, X., Chan, R., Zeng, T.: A two-stage image segmentation method using a convex variant of the Mumford–Shah model and thresholding. SIAM J. Imaging Sci. 6(1), 368–390 (2013)

  11. Candès, E.J., Romberg, J., Tao, T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52(2), 489–509 (2006)

  12. Chan, T.F., Esedoglu, S., Nikolova, M.: Algorithms for finding global minimizers of image segmentation and denoising models. SIAM J. Appl. Math. 66(5), 1632–1648 (2006)

  13. Chan, T.F., Vese, L.A.: Active contours without edges. IEEE Trans. Image Process. 10(2), 266–277 (2001)

  14. Chen, X., Guo, L., Lu, Z., Ye, J.J.: An augmented Lagrangian method for non-Lipschitz nonconvex programming. SIAM J. Numer. Anal. 55(1), 168–193 (2017)

  15. Chen, X., Ng, M.K., Zhang, C.: Non-Lipschitz \(\ell _p\)-regularization and box constrained model for image restoration. IEEE Trans. Image Process. 21(12), 4709–4721 (2012)

  16. Chen, X., Niu, L., Yuan, Y.: Optimality conditions and a smoothing trust region Newton method for non-Lipschitz optimization. SIAM J. Optim. 23(3), 1528–1552 (2013)

  17. Chen, X., Xu, F., Ye, Y.: Lower bound theory of nonzero entries in solutions of \(\ell _2-\ell _p\) minimization. SIAM J. Sci. Comput. 32(5), 2832–2852 (2010)

  18. Chen, X., Zhou, W.: Smoothing nonlinear conjugate gradient method for image restoration using nonsmooth nonconvex minimization. SIAM J. Imaging Sci. 3(4), 765–790 (2010)

  19. ChunlinWu, Y.G.: A general non-Lipschitz joint regularized model for multi-channel/modality image reconstruction. CSIAM Trans. Appl. Math. 2(3), 395–430 (2021)

  20. Coll, B., Duran, J., Sbert, C.: Half-linear regularization for nonconvex image restoration models. Inverse Probl. Imaging 9(2), 337 (2015)

  21. El-Zehiry, N.Y., Grady, L.: Combinatorial optimization of the discretized multiphase Mumford–Shah functional. Int. J. Comput. Vis. 104(3), 270–285 (2013)

  22. Gu, Y., Wang, L., Tai, X.: A direct approach toward global minimization for multiphase labeling and segmentation problems. IEEE Trans. Image Process. 21(5), 2399–2411 (2012)

  23. Guo, L., Chen, X.: Mathematical programs with complementarity constraints and a non-Lipschitz objective: optimality and approximation. Math. Program. 185(1), 455–485 (2021)

  24. Guo, X., Xue, Y., Wu, C.: Effective two-stage image segmentation: a new non-Lipschitz decomposition approach with convergent algorithm. J. Math. Imaging Vis. 63(3), 356–379 (2021)

  25. Han, J., Song, K.S., Kim, J., Kang, M.G.: Permuted coordinate-wise optimizations applied to \(l_{p}\)-regularized image deconvolution. IEEE Trans. Image Process. 27(7), 3556–3570 (2018)

  26. Hintermüller, M., Wu, T.: Nonconvex \(\text{TV}^{q}\)-models in image restoration: analysis and a trust-region regularization-based superlinearly convergent solver. SIAM J. Imaging Sci. 6(3), 1385–1415 (2013)

  27. Huang, Y., Liu, H.: Smoothing projected Barzilai–Borwein method for constrained non-Lipschitz optimization. Comput. Optim. Appl. 65(3), 671–698 (2016)

  28. Kappes, J.H., Andres, B., Hamprecht, F.A., Schnörr, C., Nowozin, S., Batra, D., Kim, S., Kausler, B.X., Kröger, T., Lellmann, J., et al.: A comparative study of modern inference techniques for structured discrete energy minimization problems. Int. J. Comput. Vis. 115(2), 155–184 (2015)

  29. Lanza, A., Morigi, S., Sgallari, F.: Constrained \(\text{TV}_p-\ell _2\) model for image restoration. J. Sci. Comput. 68(1), 64–91 (2016)

  30. Li, C., Chen, X.: Isotropic non-Lipschitz regularization for sparse representations of random fields on the sphere. Math. Comput. 91(333), 219–243 (2022)

  31. Li, W., Bian, W.: Smoothing neural network for \(L_0\) regularized optimization problem with general convex constraints. Neural Netw. 143, 678–689 (2021)

  32. Li, W., Bian, W., Xue, X.: Projected neural network for a class of non-Lipschitz optimization problems with linear constraints. IEEE Trans. Neural Netw. Learn. Syst. 31(9), 3361–3373 (2019)

  33. Li, Y., Wu, C., Duan, Y.: The \(\text{TV}_{p}\) regularized Mumford–Shah model for image labeling and segmentation. IEEE Trans. Image Process. 29, 7061–7075 (2020)

  34. Ma, J., Wang, D., Wang, X.P., Yang, X.: A characteristic function-based algorithm for geodesic active contours. SIAM J. Imaging Sci. 14(3), 1184–1205 (2021)

  35. Mumford, D., Shah, J.: Optimal approximations by piecewise smooth functions and associated variational problems. Commun. Pure Appl. Math. 42(5), 577–685 (1989)

  36. Nikolova, M.: Analysis of the recovery of edges in images and signals by minimizing nonconvex regularized least-squares. Multiscale Model. Simul. 4(3), 960–991 (2005)

  37. Nikolova, M., Ng, M.K., Tam, C.P.: Fast nonconvex nonsmooth minimization methods for image restoration and reconstruction. IEEE Trans. Image Process. 19(12), 3073–3088 (2010)

  38. Nikolova, M., Ng, M.K., Zhang, S., Ching, W.K.: Efficient reconstruction of piecewise constant images using nonsmooth nonconvex minimization. SIAM J. Imaging Sci. 1(1), 2–25 (2008)

  39. Niu, L., Zhou, R., Tian, Y., Qi, Z., Zhang, P.: Nonsmooth penalized clustering via \(l_{p}\) regularized sparse regression. IEEE Trans. Cybern. 47(6), 1423–1433 (2016)

  40. Ren, Y., Tang, L.: A nonconvex and nonsmooth anisotropic total variation model for image noise and blur removal. Multimed. Tools Appl. 79(1), 1445–1473 (2020)

  41. Roberts, M., Spencer, J.: Chan–Vese reformulation for selective image segmentation. J. Math. Imaging Vis. 61(8), 1173–1196 (2019)

  42. Sun, T., Jiang, H., Cheng, L.: Global convergence of proximal iteratively reweighted algorithm. J. Glob. Optim. 68(4), 815–826 (2017)

  43. Vese, L.A., Chan, T.F.: A multiphase level set framework for image segmentation using the Mumford and Shah model. Int. J. Comput. Vis. 50(3), 271–293 (2002)

  44. Wang, C., Yan, M., Rahimi, Y., Lou, Y.: Accelerated schemes for the \(l_1/l_2\) minimization. IEEE Trans. Signal Process. 68, 2660–2669 (2020)

  45. Wang, D., Li, H., Wei, X., Wang, X.P.: An efficient iterative thresholding method for image segmentation. J. Comput. Phys. 350, 657–667 (2017)

  46. Wang, W., Chen, Y.: An accelerated smoothing gradient method for nonconvex nonsmooth minimization in image processing. J. Sci. Comput. 90(1), 1–28 (2022)

  47. Wang, W., Tian, N., Wu, C.: Two-phase image segmentation by nonconvex nonsmooth models with convergent alternating minimization algorithms. J. Comput. Math., in press (2022)

  48. Wang, W., Wu, C., Tai, X.C.: A globally convergent algorithm for a constrained non-Lipschitz image restoration model. J. Sci. Comput. 83(1), 1–29 (2020)

  49. Wu, T., Ng, M.K., Zhao, X.L.: Sparsity reconstruction using nonconvex TGpV-shearlet regularization and constrained projection. Appl. Math. Comput. 410, 126170 (2021)

  50. Xiao, J., Ng, M.K.P., Yang, Y.F.: On the convergence of nonconvex minimization methods for image recovery. IEEE Trans. Image Process. 24(5), 1587–1598 (2015)

  51. Xu, Z., Chang, X., Xu, F., Zhang, H.: \({L}_{1/2}\) regularization: a thresholding representation theory and a fast solver. IEEE Trans. Neural Netw. Learn. Syst. 23(7), 1013–1027 (2012)

  52. Yan, S., Liu, J., Huang, H., Tai, X.C.: A dual EM algorithm for TV regularized Gaussian mixture model in image segmentation. Inverse Probl. Imaging 13(3), 653–677 (2019)

  53. You, J., Jiao, Y., Lu, X., Zeng, T.: A nonconvex model with minimax concave penalty for image restoration. J. Sci. Comput. 78(2), 1063–1086 (2019)

  54. Zeng, C., Jia, R., Wu, C.: An iterative support shrinking algorithm for non-Lipschitz optimization in image restoration. J. Math. Imaging Vis. 61(1), 122–139 (2019)

  55. Zeng, C., Wu, C.: On the edge recovery property of nonconvex nonsmooth regularization in image restoration. SIAM J. Numer. Anal. 56(2), 1168–1182 (2018)

  56. Zeng, C., Wu, C.: On the discontinuity of images recovered by nonconvex nonsmooth regularized isotropic models with box constraints. Adv. Comput. Math. 45(2), 589–610 (2019)

  57. Zhang, C., Chen, X.: A smoothing active set method for linearly constrained non-Lipschitz nonconvex optimization. SIAM J. Optim. 30(1), 1–30 (2020)

  58. Zhang, H., Qian, J., Zhang, B., Yang, J., Gong, C., Wei, Y.: Low-rank matrix recovery via modified Schatten-\(p\) norm minimization with convergence guarantees. IEEE Trans. Image Process. 29, 3132–3142 (2019)

  59. Zheng, Z., Ng, M., Wu, C.: A globally convergent algorithm for a class of gradient compounded non-Lipschitz models applied to non-additive noise removal. Inverse Probl. 36(12), 125017 (2020)


Acknowledgements

The work was supported by the National Natural Science Foundation of China (NSFC 12071345).

Author information

Corresponding author

Correspondence to Yuping Duan.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A

Proof of Lemma 2

Proof

Let \(\varvec{z}^* = (\varvec{u}_{ I^{1,*}}^{*},\varvec{u}_{ I^{2,*}}^{*},\dots ,\varvec{u}_{ I^{l,*}}^{*})\), so that \(R(\varvec{z}^*) = F(\varvec{u}^*)\). We prove that \(\varvec{z}^*\) is a local minimizer of (6).

Since \(\varvec{u}^*\) is a local minimizer of the model (3), there exists \(\epsilon ^* > 0\) such that \(F(\varvec{u}) > F(\varvec{u}^*)\) for any \(\varvec{u} \in \mathcal S\) with \(0<\Vert \varvec{u} - \varvec{u}^*\Vert < \epsilon ^*\). If \(\varvec{z}^*\) is not a local minimizer of (6), then we can find a \(\widetilde{\varvec{z}} \in \{\widetilde{\varvec{z}}:0< {\widetilde{z}}^{i}_{j} <1\}\) satisfying \(\Vert \widetilde{\varvec{z}} - \varvec{z}^*\Vert < \epsilon ^*\), \((D_j)_{I^{i,*}} {\widetilde{z}}^{i} = 0\) and \(\sum \limits _{i=1}^{l}{\widetilde{z}}_{j}^{i}=1,~\forall j \in { I}_{0}^{i,*}(=\varvec{\Omega }_{0}^{*} \cap I^{i,*})\), such that

$$\begin{aligned} R(\widetilde{\varvec{z}}) < R(\varvec{z}^*). \end{aligned}$$
(25)

Let \(\widetilde{\varvec{u}} = (\widetilde{\varvec{z}};\varvec{u}_{B^{1,*}}^*,\varvec{u}_{B^{2,*}}^*,\dots ,\varvec{u}_{B^{l,*}}^*)\). Then \(\widetilde{\varvec{u}} \in \mathcal S\) and \(\Vert \widetilde{\varvec{u}} - \varvec{u}^*\Vert < \epsilon ^*\), and thus \(F(\widetilde{\varvec{u}}) > F(\varvec{u}^*)\). We now relate \(R(\widetilde{\varvec{z}})\) to \(F(\widetilde{\varvec{u}})\). Since \(t_{j}^{i,x,*} = t_{j}^{i,y,*} = 0\) for all \(j \in \varvec{\Omega }_{0}^{*}\) and \(i \in [1,l]\), we have

$$\begin{aligned} F(\widetilde{\varvec{u}})&= R(\widetilde{\varvec{z}}) + \sum \limits _{j \in \varvec{\Omega }_{0}^{*}}\phi \Bigg (\sqrt{\sum _{i=1}^{l}\Big (\big ((D_{j}^{x})_{I^{i,*}} {\widetilde{z}}^{i}+t_{j}^{i,x,*}\big )^{2}+\big ((D_{j}^{y})_{I^{i,*}} {\widetilde{z}}^{i}+t_{j}^{i,y,*}\big )^{2}\Big )}\Bigg )\\&=R(\widetilde{\varvec{z}}) +\sum \limits _{j \in \varvec{\Omega }_{0}^{*}}\phi \Bigg (\sqrt{\sum _{i=1}^{l}\Big (\big ((D_{j}^{x})_{I^{i,*}} {\widetilde{z}}^{i}\big )^{2}+\big ((D_{j}^{y})_{I^{i,*}} {\widetilde{z}}^{i}\big )^{2}\Big )}\Bigg ). \end{aligned}$$

The first equality holds by the definitions of \(F\) and \(R\), and the second uses \(t_{j}^{i,x,*} = t_{j}^{i,y,*} = 0\) for all \(j \in \varvec{\Omega }_{0}^{*}\) and \(i \in [1,l]\). By Lemma 1 (e), \((D_j^x)_{I^{i,*}} {\widetilde{z}}^{i} = 0\) and \((D_j^y)_{I^{i,*}} {\widetilde{z}}^{i} = 0\) for all \(j \in \varvec{\Omega }_{0}^{*}\); together with \((D_j)_{I^{i,*}} {\widetilde{z}}^{i} = 0\), every term of the remaining sum equals \(\phi (0)=0\), and we have

$$\begin{aligned} F(\widetilde{\varvec{u}})=R(\widetilde{\varvec{z}}). \end{aligned}$$

We observe that \(R(\widetilde{\varvec{z}}) = F(\widetilde{\varvec{u}}) > F({\varvec{u}^*}) = R(\varvec{z}^*)\), which contradicts (25). Thus, \(\varvec{z}^*\) is a local minimizer of the optimization problem (6). \(\square \)

Appendix B

Proof of Theorem 1

Proof

By the first-order necessary condition of (6), for \(i\in [1,l]\), we have

$$\begin{aligned} \langle \partial {R}(\varvec{z}^{*}),\hat{\varvec{z}}\rangle = 0,~~\forall \hat{\varvec{z}}\in \mathcal {K}({{I}^{1,*}_{0}}\times {{I}^{2,*}_{0}}\times \dots \times {{I}^{l,*}_{0}}), \end{aligned}$$

where \(\mathcal {K}({{I}^{1,*}_{0}}\times {{I}^{2,*}_{0}}\times \dots \times {{I}^{l,*}_{0}}) = \big \{\varvec{z} \in \mathbb {R}^{|I^{1,*}\times I^{2,*}\times \dots \times I^{l,*}|}:(D_j)_{I^{i,*}} {z}^{i} = 0\) and \(\sum \limits _{i=1}^{l}{z}_{j}^{i}=1,~\forall j \in I_{0}^{i,*}\big \}\). By computing

$$\begin{aligned} \begin{aligned} \Big \langle \partial R(\varvec{z}^{*}),\hat{\varvec{z}}\Big \rangle =&\sum _{i=1}^{l}\sum _{j\in {\varvec{\Omega }^{*}_{1}}}\phi '(\parallel {D_j \varvec{u}^{*}}\parallel )\Big \langle \frac{D_j u^{i,*}}{\Vert D_j \varvec{u}^{*}\Vert },(D_j)_{I^{i,*}}{\hat{z}}^i\Big \rangle \\&+\lambda \sum _{i=1}^{l}\Big \langle (g^i)_{I^{i,*}},~~{\hat{z}}^{i}\Big \rangle , \end{aligned} \end{aligned}$$

we have

$$\begin{aligned} \begin{aligned} \sum _{i=1}^{l}\sum _{j\in {\varvec{\Omega }^{*}_{1}}}\phi '(\parallel {D_j \varvec{u}^{*}}\parallel )\Big \langle \frac{D_j u^{i,*}}{\Vert D_j \varvec{u}^{*}\Vert },(D_j)_{I^{i,*}}{\hat{z}}^i\Big \rangle \le \sum _{i=1}^{l}{\hat{\delta }}_i\Vert {\hat{z}}^{i}\Vert , \end{aligned} \end{aligned}$$

where \({\hat{\delta }}_i=|\lambda |\Vert (g^i)_{I^{i,*}}\Vert > 0 \) is a constant independent of \(\varvec{u}^*\). Next, we prove that \(\varvec{\Omega }_1^{*}=\varvec{\Omega }_1(\varvec{u}^{*})\subseteq \varvec{\Omega }_1(\hat{\varvec{u}})\).

We know that \(\Vert D_j \varvec{u}^{*}\Vert >\frac{1}{m}\) for all \(j \in \varvec{L}^{*}\) by Definition 7.5 in [48], and since \(\varvec{u}^*\) is sufficiently close to \(\hat{\varvec{u}}\), we can find a constant \(v>0\) such that \(\Vert D_j \hat{\varvec{u}}\Vert >v\) for all \(j \in \varvec{L}^{*}\). Clearly, \(\varvec{L}^{*}\subseteq \varvec{\Omega }_1(\hat{\varvec{u}})\), so we only need to show \({\varvec{\Omega }}^{*}_{1}\backslash \varvec{L}^{*} \subseteq \varvec{\Omega }_1(\hat{\varvec{u}})\).

For all \(j \in {\varvec{\Omega }}^{*}_{1}\backslash \varvec{L}^{*}\), we sort the distinct values of \(\Vert D_j \varvec{u}^{*}\Vert \) in decreasing order as

$$\begin{aligned} \Theta ^{*} = \{{\varpi ^{*}_{1}},{\varpi ^{*}_{2}},\cdots ,{\varpi ^{*}_{r}}\}, \end{aligned}$$

where \({\varpi ^{*}_{1}}> {\varpi ^{*}_{2}}> \cdots >{\varpi ^{*}_{r}} \) and \(r \le \sharp ({\varvec{\Omega }}^{*}_{1}\backslash \varvec{L}^{*}) < m \). Let \({\varvec{E}}^{*}_{h} = \{j \in {\varvec{\Omega }}^{*}_{1}\backslash \varvec{L}^{*}:\Vert D_j \varvec{u}^{*}\Vert = {\varpi ^{*}_{h}}\}\), where \(h \in [1,r]\).
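In code, this grouping of the gradient magnitudes into the distinct sorted levels \(\varpi ^{*}_{h}\) and level sets \({\varvec{E}}^{*}_{h}\) can be sketched as follows (a hypothetical Python illustration; the list `mags` stands in for the values \(\Vert D_j \varvec{u}^{*}\Vert \) over \(j \in {\varvec{\Omega }}^{*}_{1}\backslash \varvec{L}^{*}\)):

```python
# Hypothetical illustration of the construction above: collect the distinct
# magnitudes in decreasing order (Theta*) and record which indices j attain
# each level (the sets E_h).

def level_sets(mags):
    """Return (distinct values, descending; {value: set of attaining indices})."""
    distinct = sorted(set(mags), reverse=True)
    groups = {v: {j for j, m in enumerate(mags) if m == v} for v in distinct}
    return distinct, groups

mags = [0.4, 0.9, 0.4, 0.1]   # stand-in values of ||D_j u*||
theta, E = level_sets(mags)
print(theta)    # [0.9, 0.4, 0.1]
print(E[0.4])   # {0, 2}
```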

First, we select a \(\bar{\varvec{z}}\) such that

$$\begin{aligned} \begin{aligned}&(D_j)_{I^{i,*}}{\bar{z}}^{i} = \frac{D_j u^{i,*}}{{\varpi ^{*}_{1}}},\\&\forall j \in ({{I}^{i,*}_{0}}\cup {\varvec{\Omega }^{*}_{1}})\backslash \varvec{L}^{*},~\forall i \in [1,l],~\sum _{i=1}^{l}{\bar{z}}^{i}_{j}=1. \end{aligned} \end{aligned}$$
(26)

Since \(\frac{\left| D_{j}^{x} u^{i,*}\right| }{\varpi ^{*}_{1}} \le 1\), \(\frac{\left| D_{j}^{y} u^{i,*}\right| }{\varpi ^{*}_{1}} \le 1\) and \(0\le u_j^i \le 1\), we can obtain \(\left| {\bar{z}}^{i}\right| <1+m\) and \(\left\| {\bar{z}}^{i}\right\| ^{2}<\frac{(m+1)(m+2)(2 m+3)}{6}\), similarly to Lemma 7.4 in [54]. Moreover, \(\left| \left( D_{j}^{x}\right) _{I^{i,*}} {\bar{z}}^{i}\right| <2m\) and \(\left| \left( D_{j}^{y}\right) _{I^{i,*}} {\bar{z}}^{i}\right| <2m\) for all \(j \in J^{i}\). Thus, \(\left\| \left( D_{j}\right) _{I^{i,*}} {\bar{z}}^{i}\right\| ^{2}= \left| \left( D_{j}^{x}\right) _{I^{i,*}} {\bar{z}}^{i}\right| ^{2}+\left| \left( D_{j}^{y}\right) _{I^{i,*}} {\bar{z}}^{i}\right| ^{2}<8m^{2},~\forall j \in J^{i}.\) By (26), it yields

$$\begin{aligned} \langle D_j u^{i,*},(D_j)_{I^{i,*}}{\bar{z}}^{i}\rangle = \frac{\Vert D_j u^{i,*}\Vert ^2}{{\varpi ^{*}_{1}}},~~\forall j \in {\varvec{\Omega }^{*}_{1}}\backslash \varvec{L}^{*}. \end{aligned}$$

Choosing \(\hat{\varvec{z}} = \bar{\varvec{z}}\) in the inequality above, we obtain

$$\begin{aligned} l{\hat{\delta }}\sqrt{\Gamma }&> \sum \limits _{i=1}^{l}{\hat{\delta }}_i\Vert {\bar{z}}^{i}\Vert \\&\ge \sum \limits _{i=1}^{l}\sum \limits _{j\in {\varvec{\Omega }^{*}_{1}} \backslash \varvec{L}^{*} }\phi '(\parallel {D_j \varvec{u}^{*}}\parallel )\Big \langle \frac{D_j u^{i,*}}{\Vert D_j \varvec{u}^{*}\Vert },(D_j)_{I^{i,*}}{\bar{z}}^i\Big \rangle \\&\quad +\sum \limits _{i=1}^{l} \sum \limits _{j\in {\varvec{\Omega }^{*}_{1}} \cap \varvec{L}^{*}}\phi '(\parallel {D_j \varvec{u}^{*}}\parallel )\Big \langle \frac{D_j u^{i,*}}{\Vert D_j \varvec{u}^{*}\Vert },(D_j)_{I^{i,*}}{\bar{z}}^i\Big \rangle \\&\ge \sum \limits _{j\in {{\varvec{E}}_{1}^{*}}}\phi '(\parallel {D_j \varvec{u}^{*}}\parallel )\frac{\Vert D_j \varvec{u}^{*}\Vert }{{\varpi _{1}^{*}}} -\sum \limits _{i=1}^{l}\sum \limits _{j\in {\varvec{\Omega }_{1}^{*}} \cap \varvec{L}^{*}}\phi '(\parallel {D_j \varvec{u}^{*}}\parallel )\frac{\Vert D_j u^{i,*}\Vert }{\Vert D_j \varvec{u}^{*}\Vert }\Vert (D_j)_{I^{i,*}}{\bar{z}}^i\Vert \\&> \sum \limits _{j\in {{\varvec{E}}_{1}^{*}}}\phi '(\parallel {D_j \varvec{u}^{*}}\parallel )- 8m^2l\sum \limits _{j\in {\varvec{\Omega }_{1}^{*}} \cap \varvec{L}^{*}}\phi '(\parallel {D_j \varvec{u}^{*}}\parallel ), \end{aligned}$$
(27)

where \(\Gamma := \frac{(m+1)(m+2)(2m+3)}{6}\) and \({\hat{\delta }} = \max \limits _{1\le i\le l}{\hat{\delta }}_i\). Then, we have

$$\begin{aligned} \begin{aligned} \sum _{j\in {{\varvec{E}}_{1}^{*}}}\phi '(\parallel {D_j \varvec{u}^{*}}\parallel )&< l{\hat{\delta }}\sqrt{\Gamma } + 8m^2l\sum _{j\in {{\varvec{\Omega }}_{1}^{*}} \cap \varvec{L}^{*}}\phi '(\parallel {D_j \varvec{u}^{*}}\parallel )\\&< l{\hat{\delta }}\sqrt{\Gamma } + 8m^2l\sum _{j\in {{\varvec{\Omega }}_{1}^{*}} \cap \varvec{L}^{*}}\phi '(\frac{1}{m}). \end{aligned} \end{aligned}$$
(28)

The rest of the proof follows a procedure similar to that of Theorem 1 in [54]. \(\square \)
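A hedged numeric companion to this lower-bound argument (our own, under the representative choice \(\phi (t)=t^{p}\), \(0<p<1\), so \(\phi '(t)=pt^{p-1}\) is strictly decreasing): an upper bound \(\phi '(\theta )\le C\) on a nonzero gradient magnitude \(\theta \) immediately yields the explicit lower bound \(\theta \ge (p/C)^{1/(1-p)}\), which is the mechanism behind bounds of the form (28).

```python
# Our own numeric check, assuming phi(t) = t**p with 0 < p < 1: since
# phi'(t) = p * t**(p - 1) is strictly decreasing, phi'(theta) <= C forces
# theta >= (p / C) ** (1 / (1 - p)), a positive lower bound on theta.

p, C = 0.5, 10.0
lower = (p / C) ** (1 / (1 - p))     # here: (0.05)**2 = 0.0025

# any magnitude strictly below the bound would violate phi'(theta) <= C
for t in [0.5 * lower, 0.9 * lower]:
    assert p * t ** (p - 1) > C
# while the bound itself satisfies it (with equality up to rounding)
assert p * lower ** (p - 1) <= C + 1e-9
print(lower)
```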

Appendix C

Proof of Lemma 3

Proof

According to Assumption 1(b), we have

$$\begin{aligned} \begin{array}{rl} \phi \left( \left\| D_{j}\varvec{u}\right\| \right) \le \phi \left( \left\| D_{j} \varvec{u}^{k}\right\| \right) +\phi ^{\prime }\left( \left\| D_{j} \varvec{u}^{k}\right\| \right) \left( \left\| D_{j} \varvec{u}\right\| -\left\| D_{j} \varvec{u}^{k}\right\| \right) , \forall j \in {\varvec{\Omega }^{k}_{1}}. \end{array} \end{aligned}$$
(29)
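A quick numeric sanity check of (29) (our own, with the representative concave choice \(\phi (t)=t^{p}\), \(0<p<1\)): the linearization at any point \(s_k>0\) upper-bounds \(\phi \).

```python
# Our own check, with the representative concave choice phi(t) = t**0.5:
# phi(s) <= phi(s_k) + phi'(s_k) * (s - s_k) for all s, s_k > 0, which is
# exactly the shape of inequality (29).

def phi(t, p=0.5):
    return t ** p

def dphi(t, p=0.5):
    return p * t ** (p - 1)

s_k = 0.7
for s in [0.1, 0.5, 0.7, 1.3, 2.0]:
    tangent = phi(s_k) + dphi(s_k) * (s - s_k)
    assert phi(s) <= tangent + 1e-12
print("tangent bound holds at all test points")
```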

Then, we deduce

$$\begin{aligned} \begin{aligned} F_{k}(\varvec{u})+\frac{\rho }{2}\left\| \varvec{u}-\varvec{u}^{k}\right\| ^{2}&=\lambda \sum _{i=1}^{l}\langle g^i,u^i\rangle +\sum _{j\in {\varvec{\Omega }^{k}_{1}}}\phi (\parallel {D_j \varvec{u}}\parallel )+\frac{\rho }{2}\left\| \varvec{u}-\varvec{u}^{k}\right\| ^{2} \\&\quad \le \lambda \sum _{i=1}^{l}\langle g^i,u^i\rangle +\frac{\rho }{2}\left\| \varvec{u}-\varvec{u}^{k}\right\| ^{2}\\ {}&\qquad +\sum _{j\in {\varvec{\Omega }^{k}_{1}}}\bigg [\phi \left( \left\| D_{j} \varvec{u}^{k}\right\| \right) +\phi ^{\prime }\left( \left\| D_{j} \varvec{u}^{k}\right\| \right) \left( \left\| D_{j} \varvec{u}\right\| -\left\| D_{j} \varvec{u}^{k}\right\| \right) \bigg ] \\&=H_{k}(\varvec{u})+\sum _{j\in {\varvec{\Omega }^{k}_{1}}}\bigg [\phi \left( \left\| D_{j} \varvec{u}^{k}\right\| \right) -\phi ^{\prime }\left( \left\| D_{j} \varvec{u}^{k}\right\| \right) \left\| D_{j} \varvec{u}^{k}\right\| \bigg ]. \end{aligned} \end{aligned}$$
(30)

Therefore, we have

$$\begin{aligned}&F(\varvec{u}^{k+1}) +\frac{\rho }{2}\left\| \varvec{u}^{k+1}-\varvec{u}^{k}\right\| ^{2} =F_{k}(\varvec{u}^{k+1})+\frac{\rho }{2}\left\| \varvec{u}^{k+1}-\varvec{u}^{k}\right\| ^{2}\\&~~~~\le H_{k}(\varvec{u}^{k+1})+\sum _{j\in {\varvec{\Omega }^{k}_{1}}}\bigg [\phi \left( \left\| D_{j} \varvec{u}^{k}\right\| \right) -\phi ^{\prime }\left( \left\| D_{j} \varvec{u}^{k}\right\| \right) \left\| D_{j} \varvec{u}^{k}\right\| \bigg ] \\&~~~~\le H_{k}(\varvec{u}^k)+\sum _{j\in {\varvec{\Omega }^{k}_{1}}}\bigg [\phi \left( \left\| D_{j} \varvec{u}^{k}\right\| \right) -\phi ^{\prime }\left( \left\| D_{j} \varvec{u}^{k}\right\| \right) \left\| D_{j} \varvec{u}^{k}\right\| \bigg ] \\&~~~~=\lambda \sum _{i=1}^{l}\langle g^i,u^{i,k}\rangle + \sum _{j\in {\varvec{\Omega }^{k}_{1}}} \phi ^{\prime }\left( \left\| D_{j} \varvec{u}^{k}\right\| \right) \left\| D_{j} \varvec{u}^{k}\right\| \\ {}&~~~~~~+ \sum _{j\in {\varvec{\Omega }^{k}_{1}}}\bigg [\phi \left( \left\| D_{j} \varvec{u}^{k}\right\| \right) -\phi ^{\prime }\left( \left\| D_{j} \varvec{u}^{k}\right\| \right) \left\| D_{j} \varvec{u}^{k}\right\| \bigg ]\\&~~~~=\lambda \sum _{i=1}^{l}\langle g^i,u^{i,k}\rangle + \sum _{j\in {\varvec{\Omega }^{k}_{1}}}\phi (\parallel {D_j \varvec{u}^{k}}\parallel )\\ {}&~~~~=\lambda \sum _{i=1}^{l}\langle g^i,u^{i,k}\rangle +\sum _{j\in \varvec{J}}\phi (\parallel {D_j \varvec{u}^{k}}\parallel )= F(\varvec{u}^k).\\ \end{aligned}$$

By (12), (13) and (15), we have

$$\begin{aligned} {\overline{F}}^{\omega }\left( \varvec{u}_\omega ^{k}\right) -{\overline{F}}^{\omega }\left( \varvec{u}_\omega ^{k+1}\right) \ge \frac{\rho }{2}\left\| \varvec{u}^{k+1}-\varvec{u}^{k}\right\| ^{2} \ge \frac{\rho }{2}\left\| \varvec{u}_{\omega }^{k+1}-\varvec{u}_{\omega }^{k}\right\| ^{2}. \end{aligned}$$

From (15), the boundedness from below and the convergence of \(\{F(\varvec{u}^k)\}\) give \(\lim \limits _{k\rightarrow \infty } \Vert \varvec{u}^{k+1}-\varvec{u}^{k}\Vert = 0\). \(\square \)
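The sufficient-decrease argument above can be mimicked numerically (our own toy sketch, not the paper's algorithm): once \(F(\varvec{u}^{k})-F(\varvec{u}^{k+1})\ge \frac{\rho }{2}\Vert \varvec{u}^{k+1}-\varvec{u}^{k}\Vert ^{2}\) holds and \(F\) is bounded below, summing over \(k\) forces the successive differences to vanish.

```python
# Our own toy demonstration of the sufficient-decrease mechanism, using plain
# gradient descent on F(x) = x**2 (bounded below by 0) in place of the paper's
# proximal-linearized step: the per-step decrease dominates (rho/2)*step**2,
# so the telescoping sum forces |x_{k+1} - x_k| -> 0.

def F(x):
    return x * x

x, rho, step = 5.0, 1.0, 0.4
diffs = []
for _ in range(200):
    x_new = x - step * 2 * x   # gradient step on F
    assert F(x) - F(x_new) >= (rho / 2) * (x_new - x) ** 2 - 1e-12
    diffs.append(abs(x_new - x))
    x = x_new
print(diffs[-1] < 1e-8)   # successive differences have vanished
```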

Appendix D

Proof of Theorem 2

Proof

The reasoning follows a procedure similar to that of Theorem 4.3 in [48]. Suppose that \(\varvec{u}^{k+1}\) is a solution of (14). The index sets in Definitions 1 and 3 for \(\varvec{u}^{k+1}\) are abbreviated as \({\varvec{\Omega }^{K}_{1}}\), \({\varvec{\Omega }^{K}_{0}}\), \(I^{i,k+1}\), \(B^{i,k+1}\), \(\varvec{J}_{\omega }^{k+1}\), \( I_{\omega }^{i,k+1}\), \(B_{\omega }^{i,k+1}\). We also represent \(u^{i,k+1}\) as \(\Big (u_{I^{i,k+1}}^{i,k+1};u_{B^{i,k+1}}^{i,k+1}\Big )\), and \(u_{\omega }^{i,k+1}\) as \(\Big ((u_{\omega }^{i,k+1})_{I_{\omega }^{i,k+1}};(u_{\omega }^{i,k+1})_{B_{\omega }^{i,k+1}}\Big )\). Because each row of \(E_{\omega }^{i}\) has only one nonzero element, equal to 1, and \(I^{i,k+1} \cap B^{i,k+1} = \emptyset \), we can take

$$\begin{aligned} \begin{aligned} u_{I^{i,k+1}}^{k+1}&=(E_{\omega }^{i})_{I^{i,k+1}}\left( u_{\omega }^{i,k+1}\right) _{I_{\omega }^{i,k+1}}, \\ u_{B^{i,k+1}}^{k+1}&=\left( E_{\omega }^{i}\right) _{B^{i,k+1}}\left( u_{\omega }^{i,k+1}\right) _{B_{\omega }^{i,k+1}}. \end{aligned} \end{aligned}$$

We define \(D_j^x u^{i,k+1}\) and \(D_j^y u^{i,k+1}\) as

$$\begin{aligned} \begin{array}{l} D_{j}^{x} u^{i,k+1}=\left( D_{j}^{x}\right) _{I^{i,k+1}}\left( \left( E_{\omega }^{i}\right) _{I^{i,k+1}}\left( u_{\omega }^{i,k+1}\right) _{I_{\omega }^{i,k+1}}\right) \\ ~~~~~~~~~~~~~~\qquad +\left( D_{j}^{x}\right) _{B^{i,k+1}}\left( \left( E_{\omega }^{i}\right) _{B^{i,k+1}}\left( u_{\omega }^{i,k+1}\right) _{B_{\omega }^{i,k+1}}\right) ; \\ D_{j}^{y} u^{i,k+1}=\left( D_{j}^{y}\right) _{I^{i,k+1}}\left( \left( E_{\omega }^{i}\right) _{I^{i,k+1}}\left( u_{\omega }^{i,k+1}\right) _{I_{\omega }^{i,k+1}}\right) \\ ~~~~~~~~~~~~~~\qquad +\left( D_{j}^{y}\right) _{B^{i,k+1}}\left( \left( E_{\omega }^{i}\right) _{B^{i,k+1}}\left( u_{\omega }^{i,k+1}\right) _{B_{\omega }^{i,k+1}}\right) . \end{array} \end{aligned}$$

Similar to (5), let

$$\begin{aligned} t_{j}^{i,x, k+1}=\left( D_{j}^{x}\right) _{B^{i,k+1}} u_{B^{i,k+1}}^{i,k+1},~~\forall j \in J^{i}, \end{aligned}$$

and

$$\begin{aligned} t_{j}^{i,y, k+1}=\left( D_{j}^{y}\right) _{B^{i,k+1}} u_{B^{i,k+1}}^{i,k+1},~~\forall j \in J^{i}. \end{aligned}$$

Then \(t_{j}^{i,x, k+1} = t_{j}^{i,y, k+1} = 0\), \(\forall j \in {\varvec{\Omega }^{K}_{0}}\). To analyze \((\overline{\mathcal {H}}_k^{\omega })\) in (14), we construct the following optimization problem:

$$\begin{aligned} \min \limits _{\varvec{z}_\omega }~~{\overline{R}}^{\omega }_k( \varvec{z}_\omega ):&=\lambda \sum \limits _{i=1}^{l}\langle (g^i)_{I^{i,k+1}},(E_{\omega }^{i})_{I^{i,k+1}} z_{\omega }^{i}\rangle \nonumber +\lambda \sum \limits _{i=1}^{l}\langle (g^i)_{B^{i,k+1}},u^{i,k+1}_{B^{i,k+1}}\rangle \\&\quad +\sum \limits _{i=1}^{l}\chi _{(0,1)}\Big (( E_{\omega }^{i})_{I^{i,k+1}} z_{\omega }^{i} + u_{B^{i,k+1}}^{i,k+1}\Big )\nonumber \\&\quad +\sum \limits _{j\in {\varvec{\Omega }^{k}_{1}}} \phi \left( \Vert (D_j)_{I^{k+1}}\bar{\varvec{z}}_\omega +(D_j)_{B^{k+1}}\varvec{u}_{B^{k+1}}^{k+1}\Vert \right) \nonumber \\&\quad +\frac{\rho }{2}\Vert (\bar{\varvec{z}}_\omega +\varvec{u}_{B^{k+1}}^{k+1}) -\varvec{u}^{k}\Vert ^2+ \chi _{\mathcal S}\Big (\bar{\varvec{z}}_\omega +\varvec{u}_{B^{k+1}}^{k+1}\Big )\\&\quad \text{ s.t. }\quad (D_j)_{I^{i,k+1}} (E_{\omega }^{i})_{I^{i,k+1}}z_{\omega }^{i} = 0,~~\forall j\in {\varvec{\Omega }^{k}_{0}}\cap I^{i,k+1}\nonumber , \end{aligned}$$
(31)

where \(\bar{\varvec{z}}_\omega = (( E_{\omega }^{1})_{I^{1,k+1}} z_{\omega }^{1},\dots ,( E_{\omega }^{l})_{I^{l,k+1}} z_{\omega }^{l})\), \(\Vert (D_j)_{I^{k+1}} \bar{\varvec{z}}_\omega +(D_j)_{ B^{k+1}}\varvec{u}_{B^{k+1}}^{k+1}\Vert = \sqrt{\sum \limits _{i=1}^{l} \Big (\big (D_{j}^{x}\big )_{I^{i,k+1}} (E_{\omega }^{i})_{I^{i,k+1}} z_{\omega }^{i} +t_{j}^{i,x, k+1}\Big )^{2}+ \sum \limits _{i=1}^{l}\Big (\big (D_{j}^{y}\big )_{I^{i,k+1}} (E_{\omega }^{i})_{I^{i,k+1}} z_{\omega }^{i}+t_{j}^{i,y, k+1}\Big )^{2}}\) and \(\bar{\varvec{z}}_\omega + \varvec{u}_{B^{k+1}}^{k+1} =\Big ((E_{\omega }^{1})_{I^{1,k+1}}z_{\omega }^{1}+u_{B^{1,k+1}}^{k+1}, \cdots , (E_{\omega }^{l})_{I^{l,k+1}}z_{\omega }^{l}+u_{B^{l,k+1}}^{k+1}\Big )\). By the first-order necessary condition of (31), the rest of the proof follows a procedure similar to that of Theorem 1. \(\square \)

Appendix E

Proof of Lemma 4

Proof

When \(k \ge {\widetilde{K}}\), \(\varvec{u}_\omega ^{k+1}\) is a solution of \(({\overline{H}}_k^\omega )\). By the first-order optimality condition, we have

$$\begin{aligned} \begin{array}{rl} \varvec{0} \in &{}\partial {\overline{H}}_{k}^{\omega }\left( \varvec{u}_{\omega }^{k+1}\right) \\ =&{}\bigg (\sum \limits _{j\in {\varvec{\Omega }_{1}^{K}}}\phi '(\parallel {D_j \varvec{u}^{k}}\parallel )\frac{(D_j E_{\omega }^{1})^\top D_j u^{1,k+1}}{\Vert D_j \varvec{u}^{k+1}\Vert } +(E_{\omega }^{1})^\top \lambda g^1\\ &{}+ \rho (E_{\omega }^{1})^\top ( u^{1,k+1}- u^{1,k})+ \partial \chi _{[0,1]}(u^{1,k+1}) +\partial \chi _{\mathcal S}(\varvec{u}),\\ &{}\dots , \\ &{}\sum \limits _{j\in {\varvec{\Omega }_{1}^{K}}}\phi '(\parallel {D_j \varvec{u}^{k}}\parallel )\frac{(D_j E_{\omega }^{l})^\top D_j u^{l,k+1}}{\Vert D_j \varvec{u}^{k+1}\Vert } +(E_{\omega }^{l})^\top \lambda g^l\\ &{}+ \rho (E_{\omega }^{l})^\top ( u^{l,k+1}- u^{l,k})+ \partial \chi _{[0,1]}(u^{l,k+1}) +\partial \chi _{\mathcal S}(\varvec{u})\bigg ), \end{array} \end{aligned}$$

where \(\partial \chi _{[0,1]}(u^{i,k+1})\) is the subdifferential of \(\chi _{[0,1]}\) at \(u^{i,k+1}\) and \(\partial \chi _{\mathcal S}(\varvec{u})\) is the subdifferential of \(\chi _{\mathcal S}\) at \(\varvec{u}^{k+1}\). Hence there exist \(\xi ^{i,k+1} \in \partial \chi _{[0,1]}(u^{i,k+1})\) and \(\varsigma ^{i,k+1} \in \partial \chi _{\mathcal S}(\varvec{u})\) such that

$$\begin{aligned} \begin{array}{rl} 0 =&{}\sum \limits _{j\in {\varvec{\Omega }_{1}^{K}}}\phi '(\parallel {D_j \varvec{u}^{k}}\parallel )\frac{(D_j E_{\omega }^{i})^\top D_j u^{i,k+1}}{\Vert D_j \varvec{u}^{k+1}\Vert }+(E_{\omega }^{i})^\top \lambda g^i\\ &{} + \rho ( E_{\omega }^{i})^\top (u^{i,k+1}- u^{i,k})+\xi ^{i,k+1} + \varsigma ^{i,k+1}. \end{array} \end{aligned}$$
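To make the regularization term in (32) concrete, the following toy sketch evaluates \(\sum _j \phi '(\Vert D_j \varvec{u}^{k}\Vert )\,D_j^\top D_j u^{k+1}/\Vert D_j \varvec{u}^{k+1}\Vert \) for a 1-D signal with forward differences. It assumes, for illustration only, the \(\ell _p\) potential \(\phi (t)=t^p\); the sampling operator \(E_\omega ^i\) is taken as the identity, and the signals, \(p\), and tolerance are hypothetical:

```python
import numpy as np

p, eps = 0.5, 1e-12

def phi_prime(t):
    # Derivative of the illustrative potential phi(t) = t^p, 0 < p < 1.
    return p * t ** (p - 1)

def linearized_grad_term(u_prev, u_curr):
    """sum_j phi'(|D_j u_prev|) * D_j^T D_j u_curr / |D_j u_curr| for 1-D
    forward differences D_j u = u[j+1] - u[j]; indices with a vanishing
    difference are skipped, mimicking the support set Omega_1^K."""
    g = np.zeros_like(u_curr)
    for j in range(len(u_curr) - 1):
        d_prev = abs(u_prev[j + 1] - u_prev[j])
        d_curr = u_curr[j + 1] - u_curr[j]
        if d_prev > eps and abs(d_curr) > eps:
            w = phi_prime(d_prev) * d_curr / abs(d_curr)
            g[j] -= w          # D_j^T scatters the weight back to pixels j, j+1
            g[j + 1] += w
    return g

u_prev = np.array([0.0, 0.0, 0.6, 1.0])   # previous iterate u^k
u_curr = np.array([0.0, 0.1, 0.7, 1.0])   # current iterate u^{k+1}
print(linearized_grad_term(u_prev, u_curr))
```

Note that each active index contributes a pair of entries that cancel in the sum, so the resulting vector has zero mean, as expected for a gradient of a difference-based penalty.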
(32)

Since \(\Vert D_j \varvec{u}^{k+1}\Vert \ne 0\) for all \(j \in {\varvec{\Omega }_{1}^{K}} \) and \(k \ge {\widetilde{K}}\), the subdifferential of \({\overline{F}}^{\omega }\) at \(u_{\omega }^{i,k+1}\) is

$$\begin{aligned} \begin{array}{rl} \partial {\overline{F}}^{\omega }(u_{\omega }^{i,k+1})=&{}(E_{\omega }^{i})^\top \lambda g^i+\sum \limits _{j\in {\varvec{\Omega }_{1}^{K}}}\phi '(\parallel {D_j \varvec{u}^{k+1}}\parallel )\frac{(D_j E_{\omega }^{i})^\top D_j u^{i,k+1}}{\Vert D_j \varvec{u}^{k+1}\Vert }\\ &{}\quad + \partial \chi _{[0,1]}(u^{i,k+1}) + \partial \chi _{\mathcal S}(\varvec{u}). \end{array} \end{aligned}$$

In particular, there exists \(\tau ^{i,k+1} \in \partial {\overline{F}}^{\omega }(u_{\omega }^{i,k+1})\), such that

$$\begin{aligned} \begin{aligned} \tau ^{i,k+1}&= (E_{\omega }^{i})^\top \lambda g^i + \xi ^{i,k+1} + \varsigma ^{i,k+1} \\&\quad + \sum _{j\in {\varvec{\Omega }_{1}^{K}}}\phi '(\parallel {D_j \varvec{u}^{k+1}}\parallel )\frac{(D_j E_{\omega }^{i})^\top D_j u^{i,k+1}}{\Vert D_j \varvec{u}^{k+1}\Vert }. \end{aligned} \end{aligned}$$
(33)

Combining (32) and (33), we obtain

$$\begin{aligned} \begin{aligned} \tau ^{i,k+1}&=\rho ( E_{\omega }^{i})^\top (u^{i,k}-u^{i,k+1})\\&\quad + \sum _{j\in {\varvec{\Omega }_{1}^{K}}}\left( \phi '(\parallel {D_j \varvec{u}^{k+1}}\parallel )-\phi '(\parallel {D_j \varvec{u}^{k}}\parallel )\right) \frac{(D_j E_{\omega }^{i})^\top D_j u^{i,k+1}}{\Vert D_j \varvec{u}^{k+1}\Vert }. \\ \end{aligned} \end{aligned}$$

Therefore, we can deduce

$$\begin{aligned} \begin{aligned} \Vert \tau ^{i,k+1}\Vert&\le \rho \Vert E_{\omega }^{i}\Vert \Vert u^{i,k+1}-u^{i,k}\Vert \\&\qquad +\sum _{j\in {\varvec{\Omega }_{1}^{K}}}\left| \phi '(\parallel {D_j \varvec{u}^{k+1}}\parallel )-\phi '(\parallel {D_j \varvec{u}^{k}}\parallel )\right| \frac{\Vert D_j u^{i,k+1}\Vert }{\Vert D_j \varvec{u}^{k+1}\Vert }\Vert D_j E_{\omega }^{i}\Vert \\&\le \sum _{j\in {\varvec{\Omega }_{1}^{K}}}L_{\eta }\Vert D_j\Vert ^2\Vert u^{i,k+1}-u^{i,k}\Vert \Vert E_{\omega }^{i}\Vert +\rho \Vert E_{\omega }^{i}\Vert \Vert u^{i,k+1}-u^{i,k}\Vert . \end{aligned} \end{aligned}$$

Summing over \(i\), we obtain

$$\begin{aligned} \begin{array}{rl} \Vert \tau ^{k+1}\Vert &{}\le \sum \limits _{i=1}^{l}\Vert \tau ^{i,k+1}\Vert \\ &{}\le \sum \limits _{i=1}^{l}\left( \sum \limits _{j\in {\varvec{\Omega }_{1}^{K}}}L_{\eta }\Vert D_j\Vert ^2\Vert u^{i,k+1}-u^{i,k}\Vert \Vert E_{\omega }^{i}\Vert + \rho \Vert E_{\omega }^{i}\Vert \Vert u^{i,k+1}-u^{i,k}\Vert \right) \\ &{}\le \sum \limits _{i=1}^{l}\left( \sum \limits _{j\in {\varvec{\Omega }_{1}^{K}}}L_{\eta }\Vert D_j\Vert ^2\Vert E_{\omega }^{i}\Vert ^{2} + \rho \Vert E_{\omega }^{i}\Vert ^{2}\right) \Vert \varvec{u}_{\omega }^{k+1}-\varvec{u}_{\omega }^{k}\Vert \\ &{}= (\sum \limits _{i=1}^{l}\Vert E_{\omega }^{i}\Vert ^{2})\Big (\sum \limits _{j\in {\varvec{\Omega }_{1}^{K}}}L_{\eta }\Vert D_j\Vert ^2 + \rho \Big )\Vert \varvec{u}_{\omega }^{k+1}-\varvec{u}_{\omega }^{k}\Vert , \end{array} \end{aligned}$$

which completes the proof. \(\square \)
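The second inequality in the chain above relies on \(\phi '\) being Lipschitz with constant \(L_\eta \) on the active differences, which stay bounded away from zero by the support-shrinking lower bound. A quick numerical check of that Lipschitz estimate, assuming for illustration the \(\ell _p\) potential \(\phi (t)=t^p\) with hypothetical values \(p=0.5\) and lower bound \(\eta =0.2\):

```python
import itertools

p, eta = 0.5, 0.2   # hypothetical exponent and lower bound on active |D_j u|

def phi_prime(t):
    return p * t ** (p - 1)          # derivative of phi(t) = t^p, 0 < p < 1

# |phi''(t)| = p(1 - p) t^(p - 2) is decreasing on (0, inf), so its
# supremum over [eta, inf) is attained at t = eta; hence phi' is
# Lipschitz on [eta, inf) with constant L_eta = p(1 - p) eta^(p - 2).
L_eta = p * (1 - p) * eta ** (p - 2)

samples = [eta + 0.02 * k for k in range(100)]
for s, t in itertools.combinations(samples, 2):
    assert abs(phi_prime(s) - phi_prime(t)) <= L_eta * abs(s - t) + 1e-12
print("Lipschitz bound verified on sampled pairs; L_eta =", L_eta)
```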

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


Cite this article

Yang, Y., Li, Y., Wu, C. et al. A Convergent Iterative Support Shrinking Algorithm for Non-Lipschitz Multi-phase Image Labeling Model. J Sci Comput 96, 47 (2023). https://doi.org/10.1007/s10915-023-02268-5
