arXiv:1202.4006 [math.PR]
Maximum principle for optimal control of stochastic partial differential equations
Published 2012-02-17, Version 1
We shall consider a stochastic maximum principle of optimal control for a control problem associated with a stochastic partial differential equation of the following type: $$ dx(t) = \big( A(t)\, x(t) + a(t, u(t))\, x(t) + b(t, u(t)) \big)\, dt + \big[ \langle \sigma(t, u(t)), x(t) \rangle_K + g(t, u(t)) \big]\, dM(t), \qquad x(0) = x_0 \in K, $$ with some given predictable mappings $a, b, \sigma, g$ and a continuous martingale $M$ taking its values in a Hilbert space $K$, while $u(\cdot)$ represents a control. The equation is also driven by a random unbounded linear operator $A(t, \omega)$, $t \in [0, T]$, on $K$. We shall derive necessary conditions of optimality for this control problem without a convexity assumption on the control domain in which $u(\cdot)$ takes its values, and also when the control variable is allowed to enter the martingale part of the equation.
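For orientation, here is a minimal sketch of the shape such necessary conditions usually take, assuming a Bolza-type cost functional (the functional $J$, running and terminal costs $\ell$ and $h$, Hamiltonian $H$, and adjoint processes $(p(\cdot), q(\cdot))$ below are illustrative notation, not taken from the paper): one minimizes $$ J(u(\cdot)) = \mathbb{E}\Big[ \int_0^T \ell\big(t, x(t), u(t)\big)\, dt + h\big(x(T)\big) \Big] $$ over admissible controls, and a maximum principle asserts the existence of adjoint processes $(p(\cdot), q(\cdot))$, solving an associated backward SPDE, such that an optimal pair $(x^*(\cdot), u^*(\cdot))$ satisfies the pointwise maximum condition $$ H\big(t, x^*(t), u^*(t), p(t), q(t)\big) = \max_{v \in U} H\big(t, x^*(t), v, p(t), q(t)\big) \quad \text{for a.e. } t \in [0, T], \ \mathbb{P}\text{-a.s.}, $$ where $U$ is the control domain. When $U$ is not convex and the control enters the martingale term, Peng-type second-order adjoint processes typically augment the Hamiltonian; that refinement is omitted from this sketch.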