Non-stationary Anderson acceleration with optimized damping

02/10/2022
by   Kewang Chen, et al.

Anderson acceleration (AA) has a long history of use and has attracted strong recent interest due to its ability to dramatically improve the linear convergence of fixed-point iterations. Most authors use and analyze the stationary version of Anderson acceleration (sAA), either with a constant damping factor or without damping, and little attention has been paid to non-stationary algorithms. However, damping can be useful, and is sometimes crucial, for simulations in which the underlying fixed-point operator is not globally contractive, and the role of the damping factor is not yet fully understood. In the present work, we consider a non-stationary Anderson acceleration algorithm in which the damping factor is optimized in each iteration (AAoptD) to further speed up linear and nonlinear iterations, at the cost of one extra inexpensive optimization per step. We analyze this procedure and develop an efficient and inexpensive implementation scheme. We also show that, compared with stationary Anderson acceleration with fixed window size sAA(m), optimizing the damping factors amounts to dynamically combining sAA(m) and sAA(1) in each iteration (alternating the window size m is another way to produce a non-stationary AA method). Moreover, we show through extensive numerical experiments that the proposed non-stationary Anderson acceleration with optimized damping often converges much faster than stationary AA with constant damping or without damping.
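For context, the sketch below shows a minimal windowed Anderson acceleration AA(m) for a fixed-point iteration x = g(x) with a constant damping factor beta. It is not the paper's AAoptD scheme (which optimizes beta in every iteration) and the function and parameter names are illustrative assumptions, but it makes the role of the damping factor in the update explicit.

```python
import numpy as np

def anderson_acceleration(g, x0, m=5, beta=1.0, max_iter=100, tol=1e-10):
    """Stationary AA(m) for x = g(x) with constant damping beta (beta = 1 means undamped).
    Illustrative sketch only; the paper's AAoptD instead chooses beta anew each iteration."""
    x = np.asarray(x0, dtype=float)
    X_hist, F_hist = [], []              # stored iterates x_k and residuals f_k = g(x_k) - x_k
    for k in range(max_iter):
        gx = g(x)
        f = gx - x                       # fixed-point residual
        if np.linalg.norm(f) < tol:
            return x, k
        X_hist.append(x.copy())
        F_hist.append(f.copy())
        if len(X_hist) > m + 1:          # keep at most m previous differences
            X_hist.pop(0); F_hist.pop(0)
        if len(F_hist) == 1:
            x = x + beta * f             # first step: damped Picard iteration
            continue
        # Least-squares problem on differences of residuals
        dF = np.column_stack([F_hist[i + 1] - F_hist[i] for i in range(len(F_hist) - 1)])
        dX = np.column_stack([X_hist[i + 1] - X_hist[i] for i in range(len(X_hist) - 1)])
        theta, *_ = np.linalg.lstsq(dF, f, rcond=None)
        # Damped Anderson mixing step: beta weights the g(x) part against the x part
        x = x + beta * f - (dX + beta * dF) @ theta
    return x, max_iter

# Example use: the contractive fixed-point problem x = cos(x) in 1D
x_star, iters = anderson_acceleration(lambda x: np.cos(x), np.array([1.0]), m=3, beta=0.8)
print(x_star, iters)
```

In this sketch the update reduces to the undamped AA step when beta = 1 and to a pure averaging of past iterates when beta = 0; a non-stationary variant would replace the fixed beta with a value chosen by a small optimization in each iteration, which is the direction the paper pursues.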
