Complex & Intelligent Systems (Feb 2025)

Control strategy of robotic manipulator based on multi-task reinforcement learning

  • Tao Wang,
  • Ziming Ruan,
  • Yuyan Wang,
  • Chong Chen

DOI
https://doi.org/10.1007/s40747-025-01816-w
Journal volume & issue
Vol. 11, no. 3
pp. 1 – 14

Abstract

Multi-task learning is important in reinforcement learning, where training across several tasks simultaneously leverages the information shared among them and typically yields better performance than single-task learning. Although joint training permits parameter sharing between tasks, optimization becomes the crucial challenge: identifying which parameters should be reused and managing the gradient conflicts that arise between different tasks. To tackle this issue, instead of uniform parameter sharing we propose a decision reconstruction network model, which we integrate into the Soft Actor-Critic (SAC) algorithm to address the optimization problems caused by parameter sharing in multi-task reinforcement learning. The decision reconstruction network model achieves cross-layer information exchange by dynamically adjusting and reconfiguring the network hierarchy, overcoming the inherent limitations of traditional network architectures in multi-task scenarios. The SAC algorithm built on this model can train on multiple tasks simultaneously, effectively learning and integrating the relevant knowledge of each task. Finally, the proposed algorithm is evaluated on Meta-World, a multi-task reinforcement learning benchmark of robotic manipulation tasks, and on the multi-task MuJoCo environment.
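The core idea the abstract describes, replacing uniform parameter sharing with task-dependent routing across network layers, can be illustrated with a toy sketch. This is a hypothetical, pure-Python illustration and not the authors' implementation: each layer holds several parallel modules, and a per-task routing vector (here just learnable logits, named `route` for illustration) softly mixes their outputs, so different tasks reuse different parameter subsets.

```python
import math
import random

random.seed(0)


def linear(x, w, b):
    # y = W x + b for a small dense layer (w: list of rows, b: bias vector)
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(w, b)]


def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]


class ReconfigurableNet:
    """Toy sketch of task-conditioned routing: each layer has several
    parallel modules, and a per-task routing vector softly mixes their
    outputs, so tasks share parameters selectively rather than uniformly."""

    def __init__(self, dim, n_modules, n_tasks, n_layers=2):
        rnd = lambda: random.uniform(-0.5, 0.5)
        # per-layer list of parallel module parameters (weight matrix, bias)
        self.modules = [
            [([[rnd() for _ in range(dim)] for _ in range(dim)],
              [rnd() for _ in range(dim)]) for _ in range(n_modules)]
            for _ in range(n_layers)
        ]
        # per-task, per-layer routing logits over the modules
        self.route = [[[rnd() for _ in range(n_modules)]
                       for _ in range(n_layers)]
                      for _ in range(n_tasks)]

    def forward(self, x, task_id):
        for layer_idx, layer in enumerate(self.modules):
            probs = softmax(self.route[task_id][layer_idx])
            outs = [linear(x, w, b) for w, b in layer]
            # task-specific soft mixture of the parallel module outputs
            x = [sum(p * o[i] for p, o in zip(probs, outs))
                 for i in range(len(x))]
        return x


net = ReconfigurableNet(dim=3, n_modules=2, n_tasks=4)
y0 = net.forward([1.0, 0.0, -1.0], task_id=0)
y1 = net.forward([1.0, 0.0, -1.0], task_id=1)
```

Because the routing logits differ per task, the same input produces different features for different tasks while all tasks draw on the same pool of module parameters; in the paper's setting such a network would serve as the policy/value backbone inside SAC.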

Keywords