arXiv Analytics

arXiv:1202.4006 [math.PR]

Maximum principle for optimal control of stochastic partial differential equations

AbdulRahman Al-Hussein

Published 2012-02-17, Version 1

We shall consider a stochastic maximum principle of optimal control for a control problem associated with a stochastic partial differential equation of the following type: $$dx(t) = \big( A(t)\, x(t) + a(t, u(t))\, x(t) + b(t, u(t)) \big)\, dt + \big[ \langle \sigma(t, u(t)), x(t) \rangle_K + g(t, u(t)) \big]\, dM(t), \quad x(0) = x_0 \in K,$$ with some given predictable mappings $a, b, \sigma, g$ and a continuous martingale $M$ taking its values in a Hilbert space $K$, while $u(\cdot)$ represents a control. The equation is also driven by a random unbounded linear operator $A(t, \omega)$, $t \in [0, T]$, on $K$. We shall derive necessary conditions of optimality for this control problem without a convexity assumption on the control domain, where $u(\cdot)$ lives, and also when the control variable is allowed to enter the martingale part of the equation.
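To make the structure of the state equation concrete, the following is a minimal Euler–Maruyama sketch of a finite-dimensional analogue: the Hilbert space $K$ is replaced by $\mathbb{R}^n$, the continuous martingale $M$ by a scalar Brownian motion, and $A$, $a$, $b$, $\sigma$, $g$, and the control $u$ by user-supplied callables. These substitutions are illustrative assumptions, not the paper's infinite-dimensional setting.

```python
import numpy as np

def simulate_state(A, a, b, sigma, g, u, x0, T, n_steps, rng):
    """Euler-Maruyama discretization of a finite-dimensional analogue of
        dx(t) = (A(t) x + a(t,u) x + b(t,u)) dt
                + (<sigma(t,u), x> + g(t,u)) dW(t),  x(0) = x0,
    where W is a scalar Brownian motion standing in for the martingale M.

    A(t) -> (n, n) matrix; a(t, u) -> scalar; b(t, u), sigma(t, u),
    g(t, u) -> length-n vectors; u(t) -> control value.
    """
    dt = T / n_steps
    x = np.array(x0, dtype=float)
    t = 0.0
    for _ in range(n_steps):
        ut = u(t)
        # Drift: unbounded-operator term A(t)x plus controlled bilinear/affine terms.
        drift = A(t) @ x + a(t, ut) * x + b(t, ut)
        # Diffusion: scalar inner product <sigma, x> broadcast against the vector g.
        diffusion = np.dot(sigma(t, ut), x) + g(t, ut)
        dW = rng.normal(0.0, np.sqrt(dt))  # Brownian increment over one step
        x = x + drift * dt + diffusion * dW
        t += dt
    return x
```

With all coefficients set to zero the scheme leaves the initial state unchanged, which is a quick sanity check that the update rule is assembled correctly.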

Related articles:
arXiv:1409.4746 [math.PR] (Published 2014-09-16)
Stochastic maximum principle for optimal control of SPDEs driven by white noise
arXiv:0807.3096 [math.PR] (Published 2008-07-19, updated 2011-02-22)
Stochastic Maximum Principle for a PDEs with noise and control on the boundary
arXiv:2305.03676 [math.PR] (Published 2023-05-05)
Stochastic maximum principle for sub-diffusions and its applications