arXiv:1704.04895 [math.OC]

Markov-Dubins Path via Optimal Control Theory

C. Yalçın Kaya

Published 2017-04-17 (Version 1)

The Markov-Dubins path is the shortest planar curve joining two points with prescribed tangents, subject to a specified bound on its curvature. Its structure, as proved by Dubins in 1957, nearly 70 years after Markov posed the problem of finding it, is elegantly simple: it is a concatenation of at most three arcs, each of which is either a circular arc of maximum (prescribed) curvature or a straight line segment. The Markov-Dubins problem and its variants have since been extensively studied in practical and theoretical settings. A reformulation of the Markov-Dubins problem as an optimal control problem was subsequently studied by various researchers using the Pontryagin maximum principle and additional techniques, to reproduce Dubins' result. In the present paper, we study the same reformulation and apply the maximum principle, with new insights, to derive Dubins' result again. We prove that abnormal control solutions, which were ruled out in previous studies, do exist. We characterize them as concatenations of at most two circular arcs and show that they are also solutions of the normal problem. More importantly, we prove that any feasible path of the types mentioned above is a stationary solution, i.e., it satisfies the Pontryagin maximum principle. We propose a numerical method for computing Markov-Dubins paths. We illustrate the theory and the numerical approach with three qualitatively different examples.
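To make the arc-line-arc structure concrete, the sketch below computes the length of one of Dubins' candidate concatenations, the left-straight-left (LSL) path, using the classical closed-form tangent construction. This is only an illustration of the path structure described above, not the optimal-control-based numerical method proposed in the paper; the function name `lsl_candidate` and its argument conventions (start pose, end pose, minimum turning radius `rho`) are chosen here for illustration.

```python
import math

def mod2pi(angle):
    """Wrap an angle into [0, 2*pi)."""
    return angle % (2.0 * math.pi)

def lsl_candidate(x0, y0, th0, x1, y1, th1, rho):
    """Length of the LSL (left turn, straight, left turn) candidate path.

    Classical tangent construction in coordinates normalised by the minimum
    turning radius rho; start pose (x0, y0, th0), end pose (x1, y1, th1).
    """
    dx, dy = x1 - x0, y1 - y0
    d = math.hypot(dx, dy) / rho      # normalised distance between endpoints
    psi = math.atan2(dy, dx)          # direction of the chord joining the endpoints
    alpha = mod2pi(th0 - psi)         # initial heading measured from the chord
    beta = mod2pi(th1 - psi)          # final heading measured from the chord

    # Heading of the straight segment = direction between the two left-turn circle centres.
    theta = math.atan2(math.cos(beta) - math.cos(alpha),
                       d + math.sin(alpha) - math.sin(beta))
    # Squared distance between the circle centres = squared length of the straight segment.
    p_sq = (2.0 + d * d - 2.0 * math.cos(alpha - beta)
            + 2.0 * d * (math.sin(alpha) - math.sin(beta)))

    t = mod2pi(-alpha + theta)        # angle swept by the first left turn
    p = math.sqrt(max(p_sq, 0.0))     # straight-segment length (guard against round-off)
    q = mod2pi(beta - theta)          # angle swept by the second left turn
    return rho * (t + p + q)          # total path length in original units
```

In this candidate-enumeration approach, the shortest Markov-Dubins path is obtained by evaluating all six families (LSL, RSR, LSR, RSL, RLR, LRL) in the same way and keeping the minimum; the LSR/RSL and RLR/LRL constructions may be infeasible for some endpoint configurations and are then skipped.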
