Phaniteja S*1 Parijat Dewangan*1 Pooja Guhan1 K. Madhava Krishna1 Abhishek Sarkar1
Performing dual arm coordinated (reachability) tasks in humanoid robots requires complex planning strategies, and this complexity increases further in the case of humanoids with an articulated torso. Such complex strategies may not be suitable for online motion planning. This paper proposes a faster way to accomplish dual arm coordinated tasks using a methodology based on Reinforcement Learning (RL). The contribution of this paper is twofold. First, we propose DiGrad (Differential Gradients), a new RL framework for multi-task learning in manipulators. Second, we show how this framework can be adapted to learn dual arm coordination in a 27 degrees of freedom (DOF) humanoid robot with an articulated spine. The proposed framework and methodology are evaluated in various environments, and simulation results are presented. A comparative study of DiGrad and its parent algorithm in different settings is also presented.