Abstract
The non-Lipschitz piecewise constant Mumford–Shah model has been shown to be effective for image labeling and segmentation problems [33], where the non-Lipschitz isotropic \(\ell _p\) (\(0<p<1\)) regularization term has a strong ability to maintain sharp edges. However, the Alternating Direction Method of Multipliers (ADMM)-based algorithm used in [33] lacks a convergence guarantee. In this work, we propose an iterative support shrinking algorithm with proximal linearization for multi-phase image labeling problems, which is theoretically proven to be globally convergent. A key step is a lower bound theory for the nonzero entries of the gradient of the iterative sequence when both box and simplex constraints are involved in the target energy minimization problem. To the best of our knowledge, this is the first theoretical convergence result for the non-Lipschitz piecewise constant Mumford–Shah model. Numerical experiments on both two-phase and multi-phase labeling problems demonstrate the efficiency and effectiveness of the proposed algorithm.
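To give a rough feeling for the mechanism, the following is a minimal 1D Python sketch of an iterative support-shrinking scheme on a two-phase toy problem. It uses an iteratively reweighted quadratic surrogate for the \(\ell_p\) (\(p<1\)) gradient term as a simple stand-in for the paper's proximal linearization; all names and parameter values (`lam`, `p`, `eps`, `iters`) are illustrative assumptions, and the paper's 2D operators and simplex constraint are not reproduced here.

```python
def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system; sub[i] multiplies x[i-1], sup[i] multiplies x[i+1]."""
    n = len(rhs)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def energy(u, f, lam, p):
    """Discrete 1D piecewise constant Mumford-Shah-type energy with an l_p gradient term."""
    data = sum((ui - fi) ** 2 for ui, fi in zip(u, f))
    reg = lam * sum(abs(u[j + 1] - u[j]) ** p for j in range(len(u) - 1))
    return data + reg

def support_shrink_1d(f, lam=0.1, p=0.5, iters=30, eps=1e-8):
    """Toy iteratively reweighted scheme: small differences get huge weights and are
    driven toward zero (the support of the discrete gradient shrinks), while large
    jumps keep small weights and survive as sharp edges."""
    n = len(f)
    u = list(f)
    for _ in range(iters):
        d = [u[j + 1] - u[j] for j in range(n - 1)]
        # quadratic majorizer weights for |d|^p at the current iterate (floored by eps)
        a = [(p / 2.0) * max(abs(dj), eps) ** (p - 2.0) for dj in d]
        sub, diag, sup = [0.0] * n, [0.0] * n, [0.0] * n
        for i in range(n):
            left = a[i - 1] if i > 0 else 0.0
            right = a[i] if i < n - 1 else 0.0
            diag[i] = 1.0 + lam * (left + right)
            sub[i] = -lam * left
            sup[i] = -lam * right
        u = thomas(sub, diag, sup, list(f))  # exact solve of the quadratic surrogate
    return u
```

On a noisy two-level signal, thresholding the output at \(0.5\) recovers the two phases; the energy decreases, and in this toy the iterates stay in \([0,1]\) automatically by a discrete maximum principle, so the box constraint is inactive.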
Data Availability
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
References
Alpert, S., Galun, M., Brandt, A., Basri, R.: Image segmentation by probabilistic bottom-up aggregation and cue integration. IEEE Trans. Pattern Anal. Mach. Intell. 34(2), 315–327 (2011)
Attouch, H., Bolte, J., Redont, P., Soubeyran, A.: Proximal alternating minimization and projection methods for nonconvex problems: an approach based on the Kurdyka-Łojasiewicz inequality. Math. Oper. Res. 35(2), 438–457 (2010)
Attouch, H., Bolte, J., Svaiter, B.F.: Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods. Math. Program. 137(1–2), 91–129 (2013)
Bae, E., Yuan, J., Tai, X.: Global minimization for continuous multiphase partitioning problems using a dual approach. Int. J. Comput. Vis. 92(1), 112–129 (2011)
Bergmann, R., Fitschen, J.H., Persch, J., Steidl, G.: Iterative multiplicative filters for data labeling. Int. J. Comput. Vis. 123(3), 435–453 (2017)
Bian, W., Chen, X.: Worst-case complexity of smoothing quadratic regularization methods for non-Lipschitzian optimization. SIAM J. Optim. 23(3), 1718–1741 (2013)
Bian, W., Chen, X.: Linearly constrained non-Lipschitz optimization for image restoration. SIAM J. Imaging Sci. 8(4), 2294–2322 (2015)
Bian, W., Chen, X., Ye, Y.: Complexity analysis of interior point algorithms for non-Lipschitz and nonconvex minimization. Math. Program. 149(1), 301–327 (2015)
Cai, X., Chan, R., Nikolova, M., Zeng, T.: A three-stage approach for segmenting degraded color images: smoothing, lifting and thresholding (SLaT). J. Sci. Comput. 72, 1313–1332 (2017)
Cai, X., Chan, R., Zeng, T.: A two-stage image segmentation method using a convex variant of the Mumford–Shah model and thresholding. SIAM J. Imaging Sci. 6(1), 368–390 (2013)
Candès, E.J., Romberg, J., Tao, T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52(2), 489–509 (2006)
Chan, T.F., Esedoglu, S., Nikolova, M.: Algorithms for finding global minimizers of image segmentation and denoising models. SIAM J. Appl. Math. 66(5), 1632–1648 (2006)
Chan, T.F., Vese, L.A.: Active contours without edges. IEEE Trans. Image Process. 10(2), 266–277 (2001)
Chen, X., Guo, L., Lu, Z., Ye, J.J.: An augmented Lagrangian method for non-Lipschitz nonconvex programming. SIAM J. Numer. Anal. 55(1), 168–193 (2017)
Chen, X., Ng, M.K., Zhang, C.: Non-Lipschitz \(\ell _p\)-regularization and box constrained model for image restoration. IEEE Trans. Image Process. 21(12), 4709–4721 (2012)
Chen, X., Niu, L., Yuan, Y.: Optimality conditions and a smoothing trust region Newton method for non-Lipschitz optimization. SIAM J. Optim. 23(3), 1528–1552 (2013)
Chen, X., Xu, F., Ye, Y.: Lower bound theory of nonzero entries in solutions of \(\ell _2-\ell _p\) minimization. SIAM J. Sci. Comput. 32(5), 2832–2852 (2010)
Chen, X., Zhou, W.: Smoothing nonlinear conjugate gradient method for image restoration using nonsmooth nonconvex minimization. SIAM J. Imaging Sci. 3(4), 765–790 (2010)
Wu, C., Gao, Y.: A general non-Lipschitz joint regularized model for multi-channel/modality image reconstruction. CSIAM Trans. Appl. Math. 2(3), 395–430 (2021)
Coll, B., Duran, J., Sbert, C.: Half-linear regularization for nonconvex image restoration models. Inverse Probl. Imaging 9(2), 337 (2015)
El-Zehiry, N.Y., Grady, L.: Combinatorial optimization of the discretized multiphase Mumford–Shah functional. Int. J. Comput. Vis. 104(3), 270–285 (2013)
Gu, Y., Wang, L., Tai, X.: A direct approach toward global minimization for multiphase labeling and segmentation problems. IEEE Trans. Image Process. 21(5), 2399–2411 (2012)
Guo, L., Chen, X.: Mathematical programs with complementarity constraints and a non-Lipschitz objective: optimality and approximation. Math. Program. 185(1), 455–485 (2021)
Guo, X., Xue, Y., Wu, C.: Effective two-stage image segmentation: a new non-Lipschitz decomposition approach with convergent algorithm. J. Math. Imaging Vis. 63(3), 356–379 (2021)
Han, J., Song, K.S., Kim, J., Kang, M.G.: Permuted coordinate-wise optimizations applied to \(l_{p}\)-regularized image deconvolution. IEEE Trans. Image Process. 27(7), 3556–3570 (2018)
Hintermüller, M., Wu, T.: Nonconvex \(\text{TV}^{q}\)-models in image restoration: analysis and a trust-region regularization-based superlinearly convergent solver. SIAM J. Imaging Sci. 6(3), 1385–1415 (2013)
Huang, Y., Liu, H.: Smoothing projected Barzilai–Borwein method for constrained non-Lipschitz optimization. Comput. Optim. Appl. 65(3), 671–698 (2016)
Kappes, J.H., Andres, B., Hamprecht, F.A., Schnörr, C., Nowozin, S., Batra, D., Kim, S., Kausler, B.X., Kröger, T., Lellmann, J., et al.: A comparative study of modern inference techniques for structured discrete energy minimization problems. Int. J. Comput. Vis. 115(2), 155–184 (2015)
Lanza, A., Morigi, S., Sgallari, F.: Constrained \(\text{TV}_p\)-\(\ell _2\) model for image restoration. J. Sci. Comput. 68(1), 64–91 (2016)
Li, C., Chen, X.: Isotropic non-Lipschitz regularization for sparse representations of random fields on the sphere. Math. Comput. 91(333), 219–243 (2022)
Li, W., Bian, W.: Smoothing neural network for L0 regularized optimization problem with general convex constraints. Neural Netw. 143, 678–689 (2021)
Li, W., Bian, W., Xue, X.: Projected neural network for a class of non-Lipschitz optimization problems with linear constraints. IEEE Trans. Neural Netw. Learn. Syst. 31(9), 3361–3373 (2019)
Li, Y., Wu, C., Duan, Y.: The \(\text{TV}_{p}\) regularized Mumford–Shah model for image labeling and segmentation. IEEE Trans. Image Process. 29, 7061–7075 (2020)
Ma, J., Wang, D., Wang, X.P., Yang, X.: A characteristic function-based algorithm for geodesic active contours. SIAM J. Imaging Sci. 14(3), 1184–1205 (2021)
Mumford, D., Shah, J.: Optimal approximations by piecewise smooth functions and associated variational problems. Commun. Pure Appl. Math. 42(5), 577–685 (1989)
Nikolova, M.: Analysis of the recovery of edges in images and signals by minimizing nonconvex regularized least-squares. Multiscale Model. Simul. 4(3), 960–991 (2005)
Nikolova, M., Ng, M.K., Tam, C.P.: Fast nonconvex nonsmooth minimization methods for image restoration and reconstruction. IEEE Trans. Image Process. 19(12), 3073–3088 (2010)
Nikolova, M., Ng, M.K., Zhang, S., Ching, W.K.: Efficient reconstruction of piecewise constant images using nonsmooth nonconvex minimization. SIAM J. Imaging Sci. 1(1), 2–25 (2008)
Niu, L., Zhou, R., Tian, Y., Qi, Z., Zhang, P.: Nonsmooth penalized clustering via \(l_{p}\) regularized sparse regression. IEEE Trans. Cybern. 47(6), 1423–1433 (2016)
Ren, Y., Tang, L.: A nonconvex and nonsmooth anisotropic total variation model for image noise and blur removal. Multimed. Tools Appl. 79(1), 1445–1473 (2020)
Roberts, M., Spencer, J.: Chan-Vese reformulation for selective image segmentation. J. Math. Imaging Vis. 61(8), 1173–1196 (2019)
Sun, T., Jiang, H., Cheng, L.: Global convergence of proximal iteratively reweighted algorithm. J. Glob. Optim. 68(4), 815–826 (2017)
Vese, L.A., Chan, T.F.: A multiphase level set framework for image segmentation using the Mumford and Shah model. Int. J. Comput. Vis. 50(3), 271–293 (2002)
Wang, C., Yan, M., Rahimi, Y., Lou, Y.: Accelerated schemes for the \(\ell _1/\ell _2\) minimization. IEEE Trans. Signal Process. 68, 2660–2669 (2020)
Wang, D., Li, H., Wei, X., Wang, X.P.: An efficient iterative thresholding method for image segmentation. J. Comput. Phys. 350, 657–667 (2017)
Wang, W., Chen, Y.: An accelerated smoothing gradient method for nonconvex nonsmooth minimization in image processing. J. Sci. Comput. 90(1), 1–28 (2022)
Wang, W., Tian, N., Wu, C.: Two-phase image segmentation by nonconvex nonsmooth models with convergent alternating minimization algorithms. J. Comput. Math., in press (2022)
Wang, W., Wu, C., Tai, X.C.: A globally convergent algorithm for a constrained non-Lipschitz image restoration model. J. Sci. Comput. 83(1), 1–29 (2020)
Wu, T., Ng, M.K., Zhao, X.L.: Sparsity reconstruction using nonconvex TGpV-shearlet regularization and constrained projection. Appl. Math. Comput. 410, 126170 (2021)
Xiao, J., Ng, M.K.P., Yang, Y.F.: On the convergence of nonconvex minimization methods for image recovery. IEEE Trans. Image Process. 24(5), 1587–1598 (2015)
Xu, Z., Chang, X., Xu, F., Zhang, H.: \({L}_{1/2}\) regularization: a thresholding representation theory and a fast solver. IEEE Trans. Neural Netw. Learn. Syst. 23(7), 1013–1027 (2012)
Yan, S., Liu, J., Huang, H., Tai, X.C.: A dual EM algorithm for TV regularized Gaussian mixture model in image segmentation. Inverse Probl. Imaging 13(3), 653–677 (2019)
You, J., Jiao, Y., Lu, X., Zeng, T.: A nonconvex model with minimax concave penalty for image restoration. J. Sci. Comput. 78(2), 1063–1086 (2019)
Zeng, C., Jia, R., Wu, C.: An iterative support shrinking algorithm for non-Lipschitz optimization in image restoration. J. Math. Imaging Vis. 61(1), 122–139 (2019)
Zeng, C., Wu, C.: On the edge recovery property of nonconvex nonsmooth regularization in image restoration. SIAM J. Numer. Anal. 56(2), 1168–1182 (2018)
Zeng, C., Wu, C.: On the discontinuity of images recovered by nonconvex nonsmooth regularized isotropic models with box constraints. Adv. Comput. Math. 45(2), 589–610 (2019)
Zhang, C., Chen, X.: A smoothing active set method for linearly constrained non-Lipschitz nonconvex optimization. SIAM J. Optim. 30(1), 1–30 (2020)
Zhang, H., Qian, J., Zhang, B., Yang, J., Gong, C., Wei, Y.: Low-rank matrix recovery via modified Schatten-\(p\) norm minimization with convergence guarantees. IEEE Trans. Image Process. 29, 3132–3142 (2019)
Zheng, Z., Ng, M., Wu, C.: A globally convergent algorithm for a class of gradient compounded non-Lipschitz models applied to non-additive noise removal. Inverse Probl. 36(12), 125017 (2020)
Acknowledgements
The work was supported by the National Natural Science Foundation of China (NSFC 12071345).
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Appendix A
Proof of Lemma 2
Proof
Let \(\varvec{z}^* = (\varvec{u}_{ I^{1,*}}^{*},\varvec{u}_{ I^{2,*}}^{*},\dots ,\varvec{u}_{ I^{l,*}}^{*})\); then \(R(\varvec{z}^*) = F(\varvec{u}^*)\). We prove that \(\varvec{z}^*\) is a local minimizer of (6).
Since \(\varvec{u}^*\) is a local minimizer of the model (3), there exists \(\epsilon ^* > 0\) such that \(F(\varvec{u}) > F(\varvec{u}^*)\) for any \(\varvec{u} \in \mathcal S\) with \(\Vert \varvec{u} - \varvec{u}^*\Vert < \epsilon ^*\). If \(\varvec{z}^* = (\varvec{u}_{ I^{1,*}}^{*},\varvec{u}_{ I^{2,*}}^{*},\dots ,\varvec{u}_{ I^{l,*}}^{*})\) is not a local minimizer, then we can find a \(\widetilde{\varvec{z}} \in \{\widetilde{\varvec{z}}:0< {\widetilde{z}}^{i}_{j} <1\}\) satisfying \(\Vert \widetilde{\varvec{z}} - \varvec{z}^*\Vert < \epsilon ^*\), \((D_j)_{I^{i,*}} {\widetilde{z}}^{i} = 0\) and \(\sum \limits _{i=1}^{l}{\widetilde{z}}_{j}^{i}=1,~\forall j \in { I}_{0}^{i,*}(=\varvec{\Omega }_{0}^{*} \cap I^{i,*})\) such that
Let \(\widetilde{\varvec{u}} = (\widetilde{\varvec{z}};\varvec{u}_{B^{1,*}}^*,\varvec{u}_{B^{2,*}}^*,\dots ,\varvec{u}_{B^{l,*}}^*)\). We have \(\widetilde{\varvec{u}} \in \mathcal S\) and \(\Vert \widetilde{\varvec{u}} - \varvec{u}^*\Vert < \epsilon ^*\), and thus \(F(\widetilde{\varvec{u}}) > F(\varvec{u}^*)\). Now, we examine the relationship between \(R(\widetilde{\varvec{z}})\) and \(F(\widetilde{\varvec{u}})\). Since \(t_{j}^{i,x,*} = t_{j}^{i,y,*} = 0\), \(\forall j \in \varvec{\Omega }_{0}^{*}\) and \(i \in [1,l]\), we have
The second equation follows from \(t_{j}^{i,x,*} = t_{j}^{i,y,*} = 0,~\forall j \in \varvec{\Omega }_{0}^{*},~i \in [1,l]\). The third equation follows from Lemma 1 (e), which gives \((D_j^x)_{I^{i,*}} {\widetilde{z}}^{i} = 0\) and \((D_j^y)_{I^{i,*}} {\widetilde{z}}^{i} = 0\); it has the same form as the second equation, but with the index set \(j \in \varvec{\Omega }_{0}^{*}\) reduced. Together with \((D_j)_{I^{i,*}} {\widetilde{z}}^{i} = 0\), we have
We observe that \(R(\widetilde{\varvec{z}}) = F(\widetilde{\varvec{u}}) > F({\varvec{u}^*}) = R(\varvec{z}^*)\), which contradicts (25). Thus, \(\varvec{z}^*\) is a local minimizer of the optimization problem (6). \(\square \)
Appendix B
Proof of Theorem 1
Proof
By the first-order necessary condition of (6), for \(i\in [1,l]\), we have
where \(\mathcal {K}({{I}^{1,*}_{0}}\times {{I}^{2,*}_{0}}\times \dots \times {{I}^{l,*}_{0}}) = \big \{\varvec{z} \in \mathbb {R}^{|I^{1,*}\times I^{2,*}\times \dots \times I^{l,*}|}:(D_j)_{I^{i,*}} {z}^{i} = 0\) and \(\sum \limits _{i=1}^{l}{z}_{j}^{i}=1,~\forall j \in I_{0}^{i,*}\big \}\). By computing
we have
where \({\hat{\delta }}_i=|\lambda ||(g^i)_{I^{i,*}}| > 0 \) is a constant independent of \(\varvec{u}^*\). Next, we prove \(\varvec{\Omega }_1^{*}=\varvec{\Omega }_1(\varvec{u}^{*})\subseteq \varvec{\Omega }_1(\hat{\varvec{u}})\).
We know that \(\Vert D_j \varvec{u}^{*}\Vert >\frac{1}{m}\) for all \(j \in \varvec{L}^{*}\) by Definition 7.5 in [48], and since \(\varvec{u}^*\) is sufficiently close to \(\hat{\varvec{u}}\), we can find a constant \(v>0\) such that \(\Vert D_j \hat{\varvec{u}}\Vert >v\) for all \(j \in \varvec{L}^{*}\). Clearly, \(\varvec{L}^{*}\subseteq \varvec{\Omega }_1(\hat{\varvec{u}})\), so it remains to show \({\varvec{\Omega }}^{*}_{1}\backslash \varvec{L}^{*} \subseteq \varvec{\Omega }_1(\hat{\varvec{u}})\).
For all \(j \in {\varvec{\Omega }}^{*}_{1}\backslash \varvec{L}^{*}\), we sort the values of \(\Vert D_j \varvec{u}^{*}\Vert \) as
where \({\varpi ^{*}_{1}}> {\varpi ^{*}_{2}}> \cdots >{\varpi ^{*}_{r}} \) and \(r \le \sharp ({\varvec{\Omega }}^{*}_{1}\backslash \varvec{L}^{*}) < m \). Let \({\varvec{E}}^{*}_{h} = \{j \in {\varvec{\Omega }}^{*}_{1}\backslash \varvec{L}^{*}:\Vert D_j \varvec{u}^{*}\Vert = {\varpi ^{*}_{h}}\}\), where \(h \in [1,r]\).
First, we select a \(\bar{\varvec{z}}\) such that
Since \(\frac{\left| D_{j}^{x} u^{i,*}\right| }{\varpi ^{*}_{1}} \le 1\), \(\frac{\left| D_{j}^{y} u^{i,*}\right| }{\varpi ^{*}_{1}} \le 1\) and \(0\le u_j^i \le 1\), we can obtain \(\left| {\bar{z}}^{i}\right| <1+m\) and \(\left\| {\bar{z}}^{i}\right\| ^{2}<\frac{(m+1)(m+2)(2 m+3)}{6}\) similarly to Lemma 7.4 in [54]. Moreover, \(\left| \left( D_{j}^{x}\right) _{I^{i,*}} {\bar{z}}^{i}\right| <2m\) and \(\left| \left( D_{j}^{y}\right) _{I^{i,*}} {\bar{z}}^{i}\right| <2m\) for all \(j \in J^{i}\). Thus, \(\left\| \left( D_{j}\right) _{I^{i,*}} {\bar{z}}^{i}\right\| ^{2}= \left| \left( D_{j}^{x}\right) _{I^{i,*}} {\bar{z}}^{i}\right| ^{2}+\left| \left( D_{j}^{y}\right) _{I^{i,*}} {\bar{z}}^{i}\right| ^{2}<8m^{2},~\forall j \in J^{i}\). By (26), it yields
When choosing \(\hat{\varvec{z}} = \bar{\varvec{z}}\) in (25), we obtain
where \(\Gamma := \frac{(m+1)(m+2)(2m+3)}{6}\) and \({\hat{\delta }} = \max \limits _{i=1}^{l}{\hat{\delta }}_i\). Then, we have
The rest of the proof follows a procedure similar to Theorem 1 in [54]. \(\square \)
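A scalar analogue of this lower bound property, in the spirit of the \(\ell _2\)-\(\ell _p\) theory of [23], can be checked numerically: for \(\min _{x \ge 0} (x-a)^2 + \lambda x^p\), the second-order condition \(2 + \lambda p(p-1)x^{p-2} \ge 0\) forces any nonzero local minimizer to satisfy \(x \ge (\lambda p(1-p)/2)^{1/(2-p)}\). The sketch below (brute-force grid search, illustrative parameters) confirms that global minimizers are either exactly zero or bounded away from zero.

```python
def global_min_scalar(a, lam=1.0, p=0.5, xmax=4.0, step=1e-3):
    """Brute-force global minimizer of (x - a)^2 + lam * x**p over a grid x >= 0."""
    best_x, best_e = 0.0, a * a  # energy at x = 0 (0**p = 0)
    k = 1
    while k * step <= xmax:
        x = k * step
        e = (x - a) ** 2 + lam * x ** p
        if e < best_e:
            best_x, best_e = x, e
        k += 1
    return best_x

# theoretical lower bound for nonzero local minimizers, lam = 1, p = 0.5
bound = (1.0 * 0.5 * (1 - 0.5) / 2.0) ** (1.0 / (2.0 - 0.5))  # approx 0.25
minimizers = [global_min_scalar(a / 20.0) for a in range(0, 61)]  # a in [0, 3]
nonzero = [x for x in minimizers if x > 0.0]
```

Every entry of `nonzero` lies above `bound` (up to grid resolution): as `a` grows, the minimizer stays at zero until a threshold and then jumps directly to a value well above the lower bound, with no minimizers in the "forbidden" band in between.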
Appendix C
Proof of Lemma 3
Proof
According to Assumption 1(b), we have
Then, we deduce
Therefore, we have
By (12), (13) and (15), we have
From (15), the boundedness and convergence of \(\{F(\varvec{u}^k)\}\) give \(\lim \limits _{k\rightarrow \infty } \Vert \varvec{u}^{k+1}-\varvec{u}^{k}\Vert = 0\). \(\square \)
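The step behind this conclusion is a standard telescoping argument. Assuming the iteration guarantees a sufficient decrease of the form \(F(\varvec{u}^{k}) - F(\varvec{u}^{k+1}) \ge \frac{\rho }{2}\Vert \varvec{u}^{k+1}-\varvec{u}^{k}\Vert ^{2}\) for some \(\rho > 0\) (the exact constant in the paper may differ; this form is an assumption), summing over \(k\) gives:

```latex
\frac{\rho }{2}\sum _{k=0}^{K}\Vert \varvec{u}^{k+1}-\varvec{u}^{k}\Vert ^{2}
  \le \sum _{k=0}^{K}\bigl (F(\varvec{u}^{k})-F(\varvec{u}^{k+1})\bigr )
  = F(\varvec{u}^{0})-F(\varvec{u}^{K+1})
  \le F(\varvec{u}^{0})-\inf F < \infty .
```

Letting \(K \rightarrow \infty \), the series \(\sum _k \Vert \varvec{u}^{k+1}-\varvec{u}^{k}\Vert ^{2}\) converges, hence \(\Vert \varvec{u}^{k+1}-\varvec{u}^{k}\Vert \rightarrow 0\).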
Appendix D
Proof of Theorem 2
Proof
The reasoning follows a procedure similar to Theorem 4.3 in [48]. Suppose that \(\varvec{u}^{k+1}\) is a solution of (14). The index sets in Definitions 1 and 3 for \(\varvec{u}^{k+1}\) are simplified as \({\varvec{\Omega }^{K}_{1}}\), \({\varvec{\Omega }^{K}_{0}}\), \(I^{i,k+1}\), \(B^{i,k+1}\), \(\varvec{J}_{\omega }^{k+1}\), \( I_{\omega }^{i,k+1}\), \(B_{\omega }^{i,k+1}\). We also represent \(u^{i,k+1}\) as \(\Big (u_{I^{i,k+1}}^{i,k+1};u_{B^{i,k+1}}^{i,k+1}\Big )\), and \(u_{\omega }^{i,k+1}\) as \(\Big ((u_{\omega }^{i,k+1})_{I_{\omega }^{i,k+1}};(u_{\omega }^{i,k+1})_{B_{\omega }^{i,k+1}}\Big )\). Because each row of \(E_{\omega }^{i}\) has only one nonzero element 1 and \(I^{i,k+1} \cap B^{i,k+1} = \emptyset \), we can take
We define \(D_j^x u^{i,k+1}\) and \(D_j^y u^{i,k+1}\) as
Similar to (5), let
and
Then \(t_{j}^{i,x, k+1} = t_{j}^{i,y, k+1} = 0\), \(\forall j \in {\varvec{\Omega }^{K}_{0}}\). To analyze \((\overline{\mathcal {H}}_k^{\omega })\) in (14), we construct the following optimization problem
where \(\bar{\varvec{z}}_\omega = (( E_{\omega }^{1})_{I^{1,k+1}} z_{\omega }^{1},\dots ,( E_{\omega }^{l})_{I^{l,k+1}} z_{\omega }^{l})\), \(\Vert (D_j)_{I^{k+1}} \bar{\varvec{z}}_\omega +(D_j)_{ B^{k+1}}\varvec{u}_{B^{k+1}}^{k+1}\Vert ^{2} = \sum \limits _{i=1}^{l} \Big (\big (D_{j}^{x}\big )_{I^{i,k+1}} (E_{\omega }^{i})_{I^{i,k+1}} z_{\omega }^{i} +t_{j}^{i,x, k+1}\Big )^{2}+ \sum \limits _{i=1}^{l}\Big (\big (D_{j}^{y}\big )_{I^{i,k+1}} (E_{\omega }^{i})_{I^{i,k+1}} z_{\omega }^{i}+t_{j}^{i,y, k+1}\Big )^{2}\) and \(\bar{\varvec{z}}_\omega + \varvec{u}_{B^{k+1}}^{k+1} =\Big ((E_{\omega }^{1})_{I^{1,k+1}}z_{\omega }^{1}+u_{B^{1,k+1}}^{k+1}, \cdots , (E_{\omega }^{l})_{I^{l,k+1}}z_{\omega }^{l}+u_{B^{l,k+1}}^{k+1}\Big )\). By the first-order necessary condition of (31), we can follow a procedure similar to that in Theorem 1 for the rest of the proof. \(\square \)
Appendix E
Proof of the Lemma 4
Proof
When \(k \ge {\widetilde{K}}\), \(\varvec{u}_\omega ^{k+1}\) is a solution of \((\overline{\mathcal {H}}_k^{\omega })\). By the first-order optimality condition, we have
where \(\partial \chi _{[0,1]}(u^{i,k+1})\) is the subdifferential of \(\chi _{[0,1]}(v)\) at \(u^{i,k+1}\) and \(\partial \chi _{\mathcal S}(\varvec{u}^{k+1})\) is the subdifferential of \(\chi _{\mathcal S}(\varvec{u})\) at \(\varvec{u}^{k+1}\). Thus, there exist \(\xi ^{i,k+1} \in \partial \chi _{[0,1]}(u^{i,k+1})\) and \(\varsigma ^{i,k+1} \in \partial \chi _{\mathcal S}(\varvec{u}^{k+1})\), such that
Since \(\Vert D_j \varvec{u}^{k+1}\Vert \ne 0\) for all \(j \in {\varvec{\Omega }_{1}^{K}} \) and \(k \ge {\widetilde{K}}\), the subdifferential of \({\overline{F}}^{\omega }(u_{\omega }^{i,k+1})\) is
In particular, there exists \(\tau ^{i,k+1} \in \partial {\overline{F}}^{\omega }(u_{\omega }^{i,k+1})\), such that
Combining (32) and (33), we obtain
Therefore, we can deduce
This gives
which completes the proof. \(\square \)
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Yang, Y., Li, Y., Wu, C. et al. A Convergent Iterative Support Shrinking Algorithm for Non-Lipschitz Multi-phase Image Labeling Model. J Sci Comput 96, 47 (2023). https://doi.org/10.1007/s10915-023-02268-5
Keywords
- Non-Lipschitz optimization
- Lower bound theory
- Kurdyka–Łojasiewicz property
- Simplex constraint
- Image labeling