A Q-learning Control Method for a Soft Robotic Arm Utilizing Training Data from a Rough Simulator

09/13/2021
by Peijin Li, et al.

Controlling a soft robot is challenging, and reinforcement learning methods have been applied to the problem with promising results. However, due to their poor sample efficiency, reinforcement learning methods require large amounts of training data, which limits their applications. In this paper, we propose a Q-learning controller for a physical soft robot in which models pre-trained on data from a rough simulator are used to improve the controller's performance. We implement the method on our soft robot, the Honeycomb Pneumatic Network (HPN) arm. Experiments show that using pre-trained models not only reduces the amount of real-world training data required, but also greatly improves the controller's accuracy and convergence rate.
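The core idea can be illustrated with a minimal tabular sketch (this is an illustration of the general technique, not the paper's implementation, which operates on a physical HPN arm): a Q-table is either initialized to zeros or seeded with values learned in a rough simulator, and the agent then continues Q-learning on the "real" system. The toy 1-D reaching environment, the biased prior standing in for the simulator-trained model, and all names below are assumptions for the sketch.

```python
import numpy as np

N_STATES, N_ACTIONS = 11, 2   # discretized positions 0..10; actions: left / right
TARGET = 7                    # goal position for the toy reaching task

def step(state, action):
    """Toy stand-in for the real robot: action 0 moves left, 1 moves right."""
    next_state = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == TARGET else -0.1
    return next_state, reward, next_state == TARGET

def q_learning(q, episodes=20, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Epsilon-greedy Q-learning starting from the given Q-table.

    Returns the updated table and the number of steps taken per episode,
    so runs seeded from a pre-trained table can be compared to runs from
    scratch.
    """
    rng = np.random.default_rng(seed)
    steps_per_episode = []
    for _ in range(episodes):
        state, done, steps = 0, False, 0
        while not done and steps < 100:          # cap episode length
            if rng.random() < eps:
                action = int(rng.integers(N_ACTIONS))
            else:
                action = int(np.argmax(q[state]))
            next_state, reward, done = step(state, action)
            # Standard Q-learning temporal-difference update.
            q[state, action] += alpha * (
                reward + gamma * np.max(q[next_state]) - q[state, action]
            )
            state, steps = next_state, steps + 1
        steps_per_episode.append(steps)
    return q, steps_per_episode

# "Pre-training on a rough simulator" is mimicked here by a prior that
# already prefers moving toward the target (an assumed stand-in).
q_scratch = np.zeros((N_STATES, N_ACTIONS))
q_pretrained = np.zeros((N_STATES, N_ACTIONS))
q_pretrained[:TARGET, 1] = 0.5   # prior: prefer "right" below the target

_, steps_scratch = q_learning(q_scratch)
_, steps_pre = q_learning(q_pretrained)
```

Comparing `steps_pre` against `steps_scratch` shows the qualitative effect the paper reports: seeding the learner with (even crude) simulator knowledge shortens early episodes and speeds convergence on the target system.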
