arXiv:2006.11005 [physics.flu-dyn]
Robust flow control and optimal sensor placement using deep reinforcement learning
Romain Paris, Samir Beneddine, Julien Dandois
Published 2020-06-19 (version 1)
This paper focuses on a drag-reducing control strategy for a simulated two-dimensional laminar flow past a cylinder. Deep reinforcement learning algorithms are used to discover efficient control schemes, with two synthetic jets located at the cylinder's poles as actuators and pressure sensors in the wake of the cylinder as feedback observations. The present work assesses the efficiency and robustness of the identified control strategy and introduces a novel algorithm (S-PPO-CMA) to optimise the sensor layout. An energy-efficient control strategy reducing drag by 18.4% at Reynolds number 120 is obtained. This control policy is shown to be robust both to the Reynolds number, over the range [100, 216], and to measurement noise, withstanding signal-to-noise ratios as low as 0.2 with negligible impact on performance. Along with a systematic study of sensor number and location, the proposed sparsity-seeking algorithm successfully reduces the layout to five sensors while maintaining state-of-the-art performance. These results highlight the potential of reinforcement learning for active flow control and pave the way to efficient, robust and practical implementations of these control techniques in experimental or industrial systems.
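To make the control setup concrete, the sketch below shows how such a problem is typically posed as a reinforcement learning environment: wake pressure probes form the observation, two jet mass-flow rates form the action, and the reward penalises a drag proxy plus an actuation cost. This is a minimal illustration only, not the authors' implementation: the environment (ToyCylinderWakeEnv), its surrogate dynamics, the drag proxy, and the cost weights are all hypothetical stand-ins, and the real study couples the agent to a CFD solver rather than the toy oscillator used here. The five-probe observation merely echoes the reduced sensor layout reported in the abstract.

```python
# Minimal sketch (not the authors' code): a gymnasium environment standing in for the
# 2D cylinder-flow simulation, trained with an off-the-shelf PPO from stable-baselines3.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class ToyCylinderWakeEnv(gym.Env):
    """Hypothetical stand-in for the flow: N_SENSORS pressure probes (observation),
    two synthetic-jet mass-flow rates at the cylinder poles (action), and a reward
    penalising a drag proxy plus an actuation-energy cost."""

    N_SENSORS = 5  # echoes the reduced layout reported in the abstract

    def __init__(self):
        super().__init__()
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(self.N_SENSORS,), dtype=np.float32)
        self.action_space = spaces.Box(-1.0, 1.0, shape=(2,), dtype=np.float32)  # two jets
        self.rng = np.random.default_rng(0)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.amp = float(self.rng.normal())  # surrogate vortex-shedding amplitude
        self.t = 0
        return self._observe(), {}

    def step(self, action):
        # Toy dynamics: jet blowing/suction damps the surrogate shedding amplitude.
        self.amp = 0.99 * self.amp - 0.05 * float(np.asarray(action).sum())
        self.t += 1
        drag_proxy = self.amp ** 2                              # stands in for excess drag
        actuation_cost = 0.01 * float(np.square(action).sum())  # energy-efficiency term
        reward = -(drag_proxy + actuation_cost)
        return self._observe(), reward, False, self.t >= 200, {}

    def _observe(self):
        # Noisy "pressure" readings, a crude proxy for wake probes.
        obs = self.amp * np.ones(self.N_SENSORS) + 0.1 * self.rng.normal(size=self.N_SENSORS)
        return obs.astype(np.float32)


if __name__ == "__main__":
    env = ToyCylinderWakeEnv()
    model = PPO("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=10_000)  # far fewer steps than a real CFD-coupled run
```

In the actual study, the step function would advance the Navier-Stokes solver over one control interval and return the measured pressures and drag; the reward shaping above (drag proxy plus actuation penalty) is only one plausible way to encode the energy-efficiency objective mentioned in the abstract.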