Search Results: Showing 1-19 of 19
-
arXiv:2506.19545 (Published 2025-06-24)
Fast convergence of a primal-dual dynamical system with implicit Hessian damping and Tikhonov regularization
Comments: 29 pages, 9 figures. Categories: math.OC
This paper proposes two primal-dual dynamical systems for solving linear equality constrained convex optimization problems: one with implicit Hessian damping only, and the other further incorporating Tikhonov regularization. We analyze the fast convergence properties of both dynamical systems and show that they achieve the same convergence rates. To the best of our knowledge, this work provides the first theoretical analysis establishing a convergence rate $o(\frac{1}{t^2})$ for the primal-dual gap and a convergence rate $o(\frac{1}{t})$ for the velocity $\dot{x}(t)$, without imposing additional assumptions on the objective function beyond convexity and L-smoothness. Moreover, we show that the trajectory generated by the dynamical system with Tikhonov regularization converges strongly to the minimum-norm solution of the underlying problem. Finally, numerical experiments are conducted to validate the theoretical findings. Interestingly, the trajectories exhibit smooth behavior even when the objective function is only continuously differentiable.
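To make the minimum-norm selection concrete, the sketch below integrates a drastically simplified second-order primal-dual template for $\min f(x)$ subject to $a^\top x = 1$ with $f \equiv 0$, evaluating the coupling at the extrapolated point $x + \beta\dot{x}$ (the mechanism usually called implicit Hessian damping, since $\nabla f(x + \beta\dot{x}) \approx \nabla f(x) + \beta\nabla^2 f(x)\dot{x}$). The template, the parameters $\alpha$, $\beta$, and the choice $\epsilon(t) = 1/t$ are illustrative assumptions, not the paper's exact system.

```python
import numpy as np

# Simplified second-order primal-dual template with implicit Hessian
# damping and a Tikhonov term (an illustrative stand-in, not the paper's
# exact dynamics):
#   x''   = -(alpha/t) x'   - lam * a - eps(t) x        (grad f = 0 here)
#   lam'' = -(alpha/t) lam' + a.(x + beta x') - 1
# Problem: min 0  s.t.  a^T x = 1; the solution set is a hyperplane
# whose minimum-norm element is a / ||a||^2 = (0.5, 0.5).
a, alpha, beta = np.array([1.0, 1.0]), 5.0, 0.5
x, v = np.array([2.0, -1.0]), np.zeros(2)
lam, mu = 0.0, 0.0            # dual variable and its velocity
h, t = 2e-3, 1.0
while t < 300.0:
    eps = 1.0 / t                                  # vanishing Tikhonov parameter
    xe = x + beta * v                              # extrapolated point x + beta x'
    acc_x = -(alpha / t) * v - lam * a - eps * x
    acc_l = -(alpha / t) * mu + (a.dot(xe) - 1.0)
    v, mu = v + h * acc_x, mu + h * acc_l          # semi-implicit Euler step
    x, lam = x + h * v, lam + h * mu
    t += h
print(x)   # approaches the minimum-norm solution (0.5, 0.5)
```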
-
arXiv:2412.05931 (Published 2024-12-08)
Inertial primal-dual dynamics with Hessian-driven damping and Tikhonov regularization for convex-concave bilinear saddle point problems
Categories: math.OC
This paper deals with a second-order primal-dual dynamical system with Hessian-driven damping and Tikhonov regularization terms in connection with a convex-concave bilinear saddle point problem. We first obtain a fast convergence rate of the primal-dual gap along the trajectory generated by the dynamical system, and provide some integral estimates. Then, based on the setting of the parameters involved, we demonstrate that both the convergence rate of the primal-dual gap and the strong convergence of the trajectory can be achieved simultaneously. Furthermore, we evaluate the performance of the proposed system using two numerical examples.
-
arXiv:2411.17656 (Published 2024-11-26)
Asymptotic behavior of the Arrow-Hurwicz differential system with Tikhonov regularization
Categories: math.OC
In a real Hilbert space setting, we investigate the asymptotic behavior of the solutions of the classical Arrow-Hurwicz differential system combined with Tikhonov regularizing terms. Under some newly proposed conditions on the Tikhonov terms involved, we show that the solutions of the regularized Arrow-Hurwicz differential system strongly converge toward the element of least norm within its set of zeros. Moreover, we provide fast asymptotic decay rate estimates for the so-called "primal-dual gap function" and the norm of the solutions' velocity. If, in addition, the Tikhonov regularizing terms are decreasing, we provide some refined estimates in the sense of an exponentially weighted moving average. Under the additional assumption that the governing operator of the Arrow-Hurwicz differential system satisfies a reverse Lipschitz condition, we further provide a fast rate of strong convergence of the solutions toward the unique zero. We conclude our study by deriving the corresponding decay rate estimates with respect to the so-called "viscosity curve". Numerical experiments illustrate our theoretical findings.
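As a minimal illustration (a toy choice, not the paper's general setting), take the bilinear Lagrangian $L(x, y) = y(x - 1)$: the plain Arrow-Hurwicz flow $\dot{x} = -\partial_x L$, $\dot{y} = \partial_y L$ merely orbits the saddle point $(1, 0)$, while adding vanishing Tikhonov terms with a non-integrable $\epsilon(t)$ restores strong convergence.

```python
import numpy as np

# Toy bilinear Lagrangian L(x, y) = y * (x - 1): convex-concave with
# unique saddle point (1, 0).  The plain Arrow-Hurwicz flow orbits it;
# the Tikhonov terms -eps(t)*x and -eps(t)*y restore strong convergence.
def run(tikhonov, T=100.0, h=1e-3):
    x, y, t = 0.0, 1.0, 0.0
    while t < T:
        eps = (1.0 + t) ** -0.5 if tikhonov else 0.0   # eps -> 0, integral = inf
        dx = -(y + eps * x)         # -dL/dx - eps*x
        dy = (x - 1.0) - eps * y    # +dL/dy - eps*y
        x, y, t = x + h * dx, y + h * dy, t + h
    return np.hypot(x - 1.0, y)     # distance to the saddle point

print(run(False))  # stays O(1): the trajectory keeps circling
print(run(True))   # small, and shrinking as T grows: strong convergence
```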
-
arXiv:2411.17329 (Published 2024-11-26)
Strong convergence and fast rates for systems with Tikhonov regularization
We introduce and investigate the asymptotic behaviour of the trajectories of a second-order dynamical system with Tikhonov regularization for solving a monotone equation with a single-valued, monotone and continuous operator acting on a real Hilbert space. We consider a vanishing damping which is correlated with the Tikhonov parameter, in line with recent developments in the literature on this topic. A correction term involving the time derivative of the operator along the trajectory also enters the system and makes the link with Newton and Levenberg-Marquardt type methods. We obtain strong convergence of the trajectory to the minimal norm solution and fast convergence rates for the velocity and for a quantity involving the operator along the trajectory. The rates are very close to the known fast convergence results for systems without Tikhonov regularization; the novelty is that we additionally obtain strong convergence of the trajectory to the minimal norm solution. As an application, we introduce a primal-dual dynamical system for solving linearly constrained convex optimization problems, where the strong convergence of the trajectories is highlighted together with fast convergence rates for the feasibility measure and function values.
-
arXiv:2406.00852 (Published 2024-06-02)
Tikhonov regularization of monotone operator flows not only ensures strong convergence of the trajectories but also speeds up the vanishing of the residuals
Categories: math.OC
In the framework of real Hilbert spaces, we investigate first-order dynamical systems governed by monotone and continuous operators. It has been established that for these systems, only the ergodic trajectory converges to a zero of the operator. A notable example is the counterclockwise $\pi/2$-rotation operator on $\mathbb{R}^2$, which illustrates that general trajectory convergence cannot be expected. However, trajectory convergence is assured for operators with the stronger property of cocoercivity. For this class of operators, the trajectory's velocity and the operator values along the trajectory converge in norm to zero at a rate of $o(\frac{1}{\sqrt{t}})$ as $t \rightarrow +\infty$. In this paper, we demonstrate that when the monotone operator flow is augmented with a Tikhonov regularization term, the resulting trajectory converges strongly to the element of the set of zeros with minimal norm. In addition, rates of convergence in norm for the trajectory's velocity and the operator along the trajectory can be derived in terms of the regularization function. In some particular cases, these rates of convergence can outperform the ones of the cocoercive operator flows and can be as fast as $O(\frac{1}{t})$ as $t \rightarrow +\infty$. In this way, we emphasize a surprising acceleration feature of the Tikhonov regularization. Additionally, we explore these properties for monotone operator flows that incorporate time rescaling and an anchor point. For a specific choice of the Tikhonov regularization function, these flows are closely linked to second-order dynamical systems with a vanishing damping term. The convergence and convergence rate results we achieve for these systems complement recent findings for the Fast Optimistic Gradient Descent Ascent (OGDA) dynamics, leading to surprising outcomes.
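The rotation example from the abstract is easy to reproduce numerically. Below is a minimal sketch (step size, horizon, and $\epsilon(t) = (1+t)^{-1/2}$ are illustrative choices): without regularization the norm is conserved along $\dot{x} = -Ax$, whereas the Tikhonov term drives the trajectory strongly to the minimal norm zero, here the origin.

```python
import numpy as np

# A = counterclockwise pi/2-rotation on R^2: monotone with zer(A) = {0},
# but along x' = -A x the norm ||x(t)|| is conserved, so the trajectory
# circles the origin forever.
A = np.array([[0.0, -1.0], [1.0, 0.0]])

def simulate(tikhonov, T=50.0, h=1e-3):
    x, t = np.array([1.0, 0.0]), 0.0
    while t < T:
        eps = (1.0 + t) ** -0.5 if tikhonov else 0.0   # vanishing eps(t)
        x = x + h * (-(A @ x) - eps * x)   # Euler step on x' = -Ax - eps*x
        t += h
    return np.linalg.norm(x)

print(simulate(False))  # ~1 (up to Euler drift): no convergence
print(simulate(True))   # ~1e-6: strong convergence to the min-norm zero
```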
-
arXiv:2403.06708 (Published 2024-03-11)
Tikhonov Regularization for Stochastic Non-Smooth Convex Optimization in Hilbert Spaces
Comments: 34 pages, 2 tables. arXiv admin note: text overlap with arXiv:2207.02750. Categories: math.OC
To solve non-smooth convex optimization problems with a noisy gradient input, we analyze the global behavior of subgradient-like flows under stochastic errors. The objective function is composite, being equal to the sum of two convex functions, one differentiable and the other potentially non-smooth. We then use stochastic differential inclusions where the drift term is minus the subgradient of the objective function, and the diffusion term is either bounded or square-integrable. In this context, under Lipschitz continuity of the differentiable term and a growth condition on the non-smooth term, our first main result shows almost sure weak convergence of the trajectory process towards a minimizer of the objective function. Then, using Tikhonov regularization with a properly tuned vanishing parameter, we obtain almost sure strong convergence of the trajectory towards the minimum norm solution. We find an explicit tuning of this parameter when our objective function satisfies a local error-bound inequality. We also provide a comprehensive complexity analysis by establishing several new pointwise and ergodic convergence rates in expectation for the convex and strongly convex cases.
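A discrete-time caricature of the regularized flow (the tuning of $\gamma_k$ and $\epsilon_k$ below is an illustrative assumption, not the paper's explicit one): noisy subgradient steps on a non-smooth objective with a flat solution set drift toward the minimum norm solution once a vanishing Tikhonov term is added.

```python
import numpy as np

# Discrete caricature of the regularized stochastic subgradient flow:
#   x_{k+1} = x_k - gamma_k * (subgrad f(x_k) + noise_k + eps_k * x_k),
# for the non-smooth f(x) = |x1 + x2 - 2|, whose argmin is the whole
# line x1 + x2 = 2 with minimum norm element (1, 1).
rng = np.random.default_rng(1)
a, b = np.array([1.0, 1.0]), 2.0
x = np.array([4.0, -3.0])
for k in range(1, 100_001):
    g = np.sign(a.dot(x) - b) * a + 0.1 * rng.standard_normal(2)  # noisy subgradient
    eps_k = k ** -0.25            # vanishing Tikhonov parameter (illustrative)
    gamma_k = 0.5 * k ** -0.75    # steps: sum gamma_k = inf, sum gamma_k^2 < inf
    x = x - gamma_k * (g + eps_k * x)
print(x)   # drifts toward the minimum norm solution (1, 1)
```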
-
arXiv:2311.13096 (Published 2023-11-22)
R-Continuity with Applications to Convergence Analysis of Tikhonov Regularization and DC Programming
Categories: math.OC
In this paper, we study the convergence analysis of Tikhonov regularization in finding a zero of a maximal monotone operator using the notion of R-continuity. Applications to convex minimization and DC programming are provided.
-
arXiv:2309.13200 (Published 2023-09-22)
Combining Strong Convergence, Values Fast Convergence and Vanishing of Gradients for a Proximal Point Algorithm Using Tikhonov Regularization in a Hilbert Space
Categories: math.OC
In a real Hilbert space $\mathcal{H}$, given any convex differentiable function $f$ whose solution set $\argmin_{\mathcal{H}}\,f$ is nonempty, we consider the proximal algorithm $x_{k+1}=\text{prox}_{\beta_k f}(d x_k)$, where $0<d<1$ and $(\beta_k)$ is a nondecreasing sequence. Under some assumptions on $(\beta_k)$, we show that the objective function values along the generated sequence converge at the rate $\mathcal{O} \left( \frac{1}{ \beta _k} \right)$ to the global minimum of the objective function, and that the generated sequence converges strongly to the minimum norm element of $\argmin_{\mathcal{H}}\,f$; we also obtain a convergence rate of the gradient toward zero. Afterward, we extend these results to non-smooth convex functions with extended real values.
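Since the prox of a simple quadratic is available in closed form, the iteration is easy to run. The sketch below is an illustrative instance (my choice of $f$, not the paper's): $f(x) = \frac12(a^\top x - b)^2$ has a whole hyperplane of minimizers, and the iterates approach its minimum norm element.

```python
import numpy as np

# f(x) = 0.5 * (a^T x - b)^2 has the flat argmin {x : a^T x = b}, whose
# minimum norm element is x* = b * a / ||a||^2 = (1, 1), and a closed-form
# prox:  prox_{beta f}(y) = y - beta * (a^T y - b) / (1 + beta ||a||^2) * a.
a, b = np.array([1.0, 1.0]), 2.0

def prox(beta, y):
    return y - beta * (a.dot(y) - b) / (1.0 + beta * a.dot(a)) * a

d = 0.95                          # 0 < d < 1
x = np.array([3.0, -1.0])
for k in range(1, 2001):
    beta_k = float(k)             # nondecreasing, beta_k -> infinity
    x = prox(beta_k, d * x)       # x_{k+1} = prox_{beta_k f}(d x_k)
print(x)   # approaches the minimum norm minimizer (1, 1)
```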
-
arXiv:2210.01413 (Published 2022-10-04)
Tikhonov Regularization is Optimal Transport Robust under Martingale Constraints
Comments: Accepted by NeurIPS 2022
Distributionally robust optimization has been shown to offer a principled way to regularize learning models. In this paper, we find that Tikhonov regularization is distributionally robust in an optimal transport sense (i.e., if an adversary chooses distributions in a suitable optimal transport neighborhood of the empirical measure), provided that suitable martingale constraints are also imposed. Further, we introduce a relaxation of the martingale constraints which not only provides a unified viewpoint to a class of existing robust methods but also leads to new regularization tools. To realize these novel tools, tractable computational algorithms are proposed. As a byproduct, the strong duality theorem proved in this paper can be potentially applied to other problems of independent interest.
-
arXiv:2207.12330 (Published 2022-07-25)
Tikhonov Regularization of Sphere-Valued Signals
It is common to have to process signals whose values are points on the 3-D sphere. We consider a Tikhonov-type regularization model to smoothen or interpolate sphere-valued signals defined on arbitrary graphs. We propose a convex relaxation of this nonconvex problem as a semidefinite program, which is easy to solve numerically and is efficient in practice.
-
arXiv:2108.02602 (Published 2021-08-05)
Tikhonov Regularization of Circle-Valued Signals
It is common to have to process signals or images whose values are cyclic and can be represented as points on the complex circle, like wrapped phases, angles, orientations, or color hues. We consider a Tikhonov-type regularization model to smoothen or interpolate circle-valued signals defined on arbitrary graphs. We propose a convex relaxation of this nonconvex problem as a semidefinite program, and an efficient algorithm to solve it.
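The relaxation can be prototyped in a few lines with cvxpy. The sketch below uses one standard Shor-type lifting on a path graph with a quadratic fidelity term; the lifting $v = (w, 1)$, the weight $\lambda$, and the recovery step are assumptions for illustration and may differ from the paper's exact model.

```python
import numpy as np
import cvxpy as cp

# Circle-valued denoising on a path graph via a Shor-type SDP lifting:
# observations z_i on the unit circle, unknowns w_i with |w_i| = 1,
#   min sum_i |w_i - z_i|^2 + lam * sum_i |w_{i+1} - w_i|^2.
# Lift v = (w, 1) in C^{n+1}, X = v v^H, and relax rank(X) = 1 to X >= 0
# with unit diagonal; both terms then become linear in X.
rng = np.random.default_rng(0)
n = 30
theta = np.linspace(0, 4 * np.pi, n) + 0.3 * rng.standard_normal(n)
z = np.exp(1j * theta)               # noisy circle-valued signal
lam = 5.0                            # smoothing weight (illustrative)

X = cp.Variable((n + 1, n + 1), hermitian=True)
fidelity = -2 * cp.sum(cp.real(cp.multiply(np.conj(z), X[:n, n])))
smooth = -2 * lam * cp.sum(cp.real(cp.diag(X[1:n, : n - 1])))  # Re X[i+1, i]
prob = cp.Problem(cp.Minimize(fidelity + smooth),
                  [cp.diag(X) == 1, X >> 0])
prob.solve()

w = X.value[:n, n]
w_hat = w / np.abs(w)                # project the relaxed solution back
print(np.abs(w_hat - z).max())
```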
-
arXiv:1905.10177 (Published 2019-05-24)
The Kurdyka-Łojasiewicz inequality as regularity condition
We show that a Kurdyka-Łojasiewicz (KL) inequality can be used as a regularity condition for Tikhonov regularization with linear operators in Banach spaces. In fact, we prove the equivalence of a KL inequality and various known regularity conditions (variational inequalities, rate conditions, and others) that are utilized for postulating smoothness conditions to obtain convergence rates. Case examples of rate estimates for Tikhonov regularization with source conditions or with a conditional stability estimate illustrate the theoretical result.
-
arXiv:1905.05422 (Published 2019-05-14)
Critical cones for sufficient second order conditions in PDE constrained optimization
Comments: 21 pages. Categories: math.OC
In this paper, we analyze optimal control problems governed by semilinear parabolic equations. Box constraints for the controls are imposed and the cost functional involves the state and possibly a sparsity-promoting term, but not a Tikhonov regularization term. Unlike finite dimensional optimization or control problems involving Tikhonov regularization, second order sufficient optimality conditions for the control problems we deal with must be imposed in a cone larger than the one used to obtain necessary conditions. Different extensions of this cone have been proposed in the literature for different kinds of minima: strong or weak minimizers for optimal control problems. After a discussion on these extensions, we propose a new extended cone smaller than those considered until now. We prove that a second order condition based on this new cone is sufficient for a strong local minimum.
-
arXiv:1904.05718 (Published 2019-04-11)
Tikhonov regularization of dynamical systems associated with nonexpansive operators defined in closed and convex sets
Comments: 11 pages. Categories: math.OC
In this paper, we propose a Tikhonov-like regularization for dynamical systems associated with non-expansive operators defined in closed and convex sets of a Hilbert space. We prove the well-posedness and the strong convergence of the proposed dynamical systems to a fixed point of the non-expansive operator. We apply the obtained result to the dynamical system associated with the problem of finding the zeros of the sum of a cocoercive operator and the subdifferential of a convex function.
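A minimal sketch of the idea, assuming the semi-flow $\dot{x} = T(x) - x$ with an added Tikhonov term (which may differ from the paper's exact system): with $T$ the projection onto a convex set $C$, the unregularized flow stops at the projection of the initial point, while the regularized one selects the minimum norm fixed point.

```python
import numpy as np

# Semi-flow x' = T(x) - x - eps(t) x with T the projection onto the
# closed convex set C = {x : x1 + x2 = 2} (nonexpansive, Fix T = C).
a, b = np.array([1.0, 1.0]), 2.0

def T(x):                         # projection onto the hyperplane a^T x = b
    return x - (a.dot(x) - b) / a.dot(a) * a

def run(tikhonov, horizon=400.0, h=1e-2):
    x, t = np.array([4.0, 0.0]), 0.0
    while t < horizon:
        eps = (1.0 + t) ** -0.5 if tikhonov else 0.0   # vanishing eps(t)
        x = x + h * (T(x) - x - eps * x)
        t += h
    return x

print(run(False))  # stops at T(x0) = (3, -1): depends on the start point
print(run(True))   # approaches the minimum norm fixed point (1, 1)
```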
-
arXiv:1901.10382 (Published 2019-01-29)
Tikhonov Regularization Within Ensemble Kalman Inversion
Comments: 39 pages
Ensemble Kalman inversion is a parallelizable methodology for solving inverse or parameter estimation problems. Although it is based on ideas from Kalman filtering, it may be viewed as a derivative-free optimization method. In its most basic form it regularizes ill-posed inverse problems through the subspace property: the solution found is in the linear span of the initial ensemble employed. In this work we demonstrate how further regularization can be imposed, incorporating prior information about the underlying unknown. In particular we study how to impose Tikhonov-like Sobolev penalties. As well as introducing this modified ensemble Kalman inversion methodology, we also study its continuous-time limit, proving ensemble collapse; in the language of multi-agent optimization this may be viewed as reaching consensus. We also conduct a suite of numerical experiments to highlight the benefits of Tikhonov regularization in the ensemble inversion context.
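One way to sketch the construction is to run vanilla (perturbed-observation) ensemble Kalman inversion on the augmented problem $F(u) = (G(u), u)$ with data $(y, 0)$, which corresponds to adding a Tikhonov penalty to the data misfit. The linear toy problem, the weight $\lambda$, and the iteration count below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Vanilla (perturbed-observation) EKI applied to the augmented problem
# F(u) = (G u, u) with data (y, 0): the extra block acts like a Tikhonov
# penalty lam * ||u||^2 on the least-squares misfit.  Linear toy setup.
rng = np.random.default_rng(2)
d, m, J = 10, 5, 50                    # unknown dim, data dim, ensemble size
G = rng.standard_normal((m, d))        # underdetermined forward map (m < d)
u_true = rng.standard_normal(d)
y = G @ u_true + 0.01 * rng.standard_normal(m)

lam = 0.1                              # Tikhonov weight (illustrative)
z = np.concatenate([y, np.zeros(d)])   # augmented data (y, 0)
Sigma = np.diag(np.concatenate([1e-4 * np.ones(m), np.ones(d) / lam]))

U = rng.standard_normal((J, d))        # initial ensemble, one member per row
for _ in range(50):
    Fu = np.hstack([U @ G.T, U])       # F(u_j) = (G u_j, u_j) row-wise
    Cuf = (U - U.mean(0)).T @ (Fu - Fu.mean(0)) / J    # cross-covariance
    Cff = (Fu - Fu.mean(0)).T @ (Fu - Fu.mean(0)) / J  # output covariance
    K = Cuf @ np.linalg.inv(Cff + Sigma)               # Kalman-type gain
    noise = rng.multivariate_normal(np.zeros(m + d), Sigma, size=J)
    U = U + (z + noise - Fu) @ K.T     # ensemble update
print(np.linalg.norm(U.mean(0) - u_true))   # error of the regularized estimate
```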
-
arXiv:1711.11241 (Published 2017-11-30)
Asymptotic for a second order evolution equation with vanishing damping term and Tikhonov regularization
Comments: 14 pages. Categories: math.OC
We investigate the asymptotic behavior of solutions to a second order differential equation with vanishing damping term, convex potential and regularizing Tikhonov term.
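The equation is straightforward to integrate numerically. The sketch below applies semi-implicit Euler to $\ddot{x} + \frac{\alpha}{t}\dot{x} + \nabla f(x) + \epsilon(t)x = 0$ with an illustrative convex potential whose argmin is a line, $\alpha = 3$, and $\epsilon(t) = 1/t$; these choices are assumptions for the demo, not the paper's conditions.

```python
import numpy as np

# Semi-implicit Euler on  x'' + (alpha/t) x' + grad f(x) + eps(t) x = 0
# with f(x) = 0.5 * (x1 + x2 - 2)^2: the argmin is a line, and its
# minimum norm element is (1, 1).  Here eps(t) = 1/t (illustrative).
a, b, alpha = np.array([1.0, 1.0]), 2.0, 3.0
x, v = np.array([5.0, -4.0]), np.zeros(2)
h, t = 1e-2, 1.0
while t < 500.0:
    grad = (a.dot(x) - b) * a
    acc = -(alpha / t) * v - grad - x / t    # eps(t) = 1/t, vanishing
    v = v + h * acc
    x = x + h * v
    t += h
print(x)   # close to the minimum norm minimizer (1, 1)
```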
-
arXiv:1705.01427 (Published 2017-05-03)
Tikhonov regularization of optimal control problems governed by semi-linear partial differential equations
Comments: 25 pages, 3 pictures. Categories: math.OC
In this article, we consider the Tikhonov regularization of an optimal control problem of semilinear partial differential equations with box constraints on the control. We derive a-priori regularization error estimates for the control under suitable conditions. These conditions comprise second-order sufficient optimality conditions as well as regularity conditions on the control, which consist of a source condition and a condition on the active sets. In addition, we show that these conditions are necessary for convergence rates under certain conditions. We also consider sparse optimal control problems and derive regularization error estimates for them. Numerical experiments underline the theoretical findings.
-
arXiv:1704.05797 (Published 2017-04-19)
Tikhonov regularization of control-constrained optimal control problems
Comments: 22 pages. Categories: math.OC
We consider Tikhonov regularization of control-constrained optimal control problems. We present new a-priori estimates for the regularization error assuming measure and source-measure conditions. In the special case of bang-bang solutions, we introduce another assumption to obtain the same convergence rates. This new condition turns out to be useful in the derivation of error estimates for the discretized problem. The necessity of the just mentioned assumptions to obtain certain convergence rates is analyzed. Finally, numerical examples confirm the analytical findings.
-
The Residual Method for Regularizing Ill-Posed Problems
Comments: 29 pages, one figure. Journal: Appl. Math. Comput. 218(6), pp. 2693-2710 (2011). Keywords: residual method, regularizing ill-posed problems, Tikhonov regularization, convergence rates, linear operator equations. Tags: journal article
Although the "residual method", or "constrained regularization", is frequently used in applications, a detailed study of its properties is still missing. This sharply contrasts the progress of the theory of Tikhonov regularization, where a series of new results for regularization in Banach spaces has been published in recent years. The present paper intends to bridge the gap between the existing theories as far as possible. We develop a stability and convergence theory for the residual method in general topological spaces. In addition, we prove convergence rates in terms of (generalized) Bregman distances, which can also be applied to non-convex regularization functionals. We provide three examples that show the applicability of our theory. The first example is the regularized solution of linear operator equations on $L^p$-spaces, where we show that the results of Tikhonov regularization generalize unchanged to the residual method. As a second example, we consider the problem of density estimation from a finite number of sampling points, using the Wasserstein distance as a fidelity term and an entropy measure as regularization term. It is shown that the densities obtained in this way depend continuously on the location of the sampled points and that the underlying density can be recovered as the number of sampling points tends to infinity. Finally, we apply our theory to compressed sensing. Here, we show the well-posedness of the method and derive convergence rates both for convex and non-convex regularization under rather weak conditions.
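The two formulations are easy to compare on a compressed sensing toy with the convex choice $R(x) = \|x\|_1$; the cvxpy sketch below (toy data; $\delta$ and $\alpha$ are illustrative) solves the residual method as a constrained program and Tikhonov regularization as a penalized one.

```python
import numpy as np
import cvxpy as cp

# Compressed-sensing toy comparison for A x = y with R(x) = ||x||_1:
#   residual method:  min R(x)   s.t.  ||A x - y||_2 <= delta
#   Tikhonov:         min ||A x - y||_2^2 + alpha * R(x)
rng = np.random.default_rng(3)
m, n = 20, 50
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[3, 17, 31]] = [1.0, -2.0, 1.5]          # sparse ground truth
y = A @ x_true + 0.01 * rng.standard_normal(m)

delta, alpha = 0.1, 0.1                          # illustrative choices

x1 = cp.Variable(n)                              # residual method
cp.Problem(cp.Minimize(cp.norm(x1, 1)),
           [cp.norm(A @ x1 - y, 2) <= delta]).solve()

x2 = cp.Variable(n)                              # Tikhonov-type penalty
cp.Problem(cp.Minimize(cp.sum_squares(A @ x2 - y)
                       + alpha * cp.norm(x2, 1))).solve()

print(np.linalg.norm(x1.value - x_true),
      np.linalg.norm(x2.value - x_true))         # reconstruction errors
```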