University of Worcester Worcester Research and Publications
 

Reward shaping using directed graph convolution neural networks for reinforcement learning and games

Sang, J., Khan, Zaki Ahmad, Yin, H. and Wang, Y. (2023) Reward shaping using directed graph convolution neural networks for reinforcement learning and games. Frontiers in Physics: Section of Social Physics, 11. pp. 1-9. ISSN 2296-424X

Full text: Reward shaping using directed graph convolution neural networks for reinforcement learning and games.pdf - Published Version (3MB). Available under a Creative Commons Attribution licence.

Abstract

Game theory can employ reinforcement learning algorithms to identify the optimal policy or equilibrium solution. Potential-based reward shaping (PBRS) methods are widely used to accelerate reinforcement learning while ensuring the optimal policy remains unchanged. Existing PBRS research performs message passing based on graph convolution neural networks (GCNs) to propagate information from rewarding states. However, in an irreversible time-series reinforcement learning problem, undirected graphs not only mislead message-passing schemes but also discard the graph's distinctive directional structure. In this paper, a novel approach, directed graph convolution neural networks for reward shaping (φDCN), is proposed to tackle this problem. The key innovation of φDCN is the extension of spectral-based undirected graph convolution to directed graphs. Messages can be efficiently propagated by leveraging a directed graph Laplacian as a substitute for the state transition matrix, and potential-based reward shaping can then be implemented using the propagated messages. The incorporation of temporal dependencies between states makes φDCN more suitable for real-world scenarios than existing potential-based reward shaping methods based on undirected graph convolutional networks. Preliminary experiments demonstrate that the proposed φDCN exhibits a substantial improvement over competing algorithms on both Atari and MuJoCo benchmarks.
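The shaping scheme the abstract describes can be illustrated with a minimal sketch. The snippet below is an assumption-laden toy, not the authors' φDCN: it uses a simple row-normalized directed adjacency matrix as a stand-in for the paper's directed graph Laplacian, propagates reward information along directed edges to build a potential function Φ, and applies the standard PBRS shaping term F(s, s') = γΦ(s') − Φ(s), which is known to preserve the optimal policy. All function names and the toy MDP are illustrative.

```python
import numpy as np

def propagate_potentials(adj, reward, gamma=0.9, steps=10):
    """Spread reward information along directed edges to build potentials.

    A truncated power iteration over the row-normalized directed adjacency
    matrix; a simplified stand-in for graph-convolutional message passing.
    """
    out_deg = adj.sum(axis=1, keepdims=True)
    P = np.divide(adj, out_deg, out=np.zeros_like(adj, dtype=float),
                  where=out_deg > 0)  # row-normalized directed transitions
    phi = reward.astype(float).copy()
    for _ in range(steps):
        phi = reward + gamma * P @ phi
    return phi

def shaped_reward(r, s, s_next, phi, gamma=0.9):
    """PBRS: add F(s, s') = gamma * phi(s') - phi(s) to the raw reward."""
    return r + gamma * phi[s_next] - phi[s]

# Toy 4-state directed chain 0 -> 1 -> 2 -> 3, reward only at state 3.
adj = np.zeros((4, 4))
for i in range(3):
    adj[i, i + 1] = 1.0
reward = np.array([0.0, 0.0, 0.0, 1.0])
phi = propagate_potentials(adj, reward)
print(phi)                              # potentials grow toward the goal
print(shaped_reward(0.0, 0, 1, phi))    # shaping term along the chain
```

Because the edges are directed, the potential at each state reflects only states reachable from it, which is the temporal-dependency property the abstract argues undirected GCN-based shaping loses.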

Item Type: Article
Additional Information:

© 2023 Sang, Ahmad Khan, Yin and Wang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

Uncontrolled Keywords: Markov decision process, reinforcement learning, directed graph convolutional network, reward shaping, game
Divisions: College of Business, Psychology and Sport > Worcester Business School
Copyright Info: Open Access Article
Depositing User: Katherine Small
Date Deposited: 22 Jan 2024 10:20
Last Modified: 22 Jan 2024 10:59
URI: https://worc-9.eprints-hosting.org/id/eprint/13513
