Publication Details
- Keywords:
- collaborative robotics
- learning
Abstract
Using reinforcement learning (RL) to enable unmanned aerial vehicles (UAVs) to carry out missions in unknown environments, where a mathematical model of the vehicle may not be available, is an active research topic. However, implementing RL in real-world applications remains challenging. This paper provides a framework for using RL to allow a UAV to navigate successfully in such environments. A performance comparison of three Q-learning methods is presented: classical Q-learning (QL), Fixed Sparse Representation-based Q-learning (FSR-QL), and Radial Basis Function-based Q-learning (RBF-QL). We conducted simulations to show how the UAVs can successfully learn to navigate through an unknown environment. In the simulation comparison of these three learning methods (QL, FSR-QL, RBF-QL), RBF-QL outperforms the others in terms of space reduction and learning speed (convergence time).
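As a rough illustration of the difference between the tabular and function-approximation updates compared in the paper, the sketch below contrasts a classical Q-learning update with an RBF feature-based update. The grid size, learning rate, discount factor, and RBF centers are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

# Illustrative parameters (assumptions, not values from the paper)
n_states, n_actions = 25, 4      # e.g. a 5x5 grid world with 4 moves
alpha, gamma = 0.1, 0.9          # learning rate and discount factor

# --- Classical (tabular) Q-learning: one entry per state-action pair ---
Q_table = np.zeros((n_states, n_actions))

def tabular_update(s, a, r, s_next):
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    td_target = r + gamma * np.max(Q_table[s_next])
    Q_table[s, a] += alpha * (td_target - Q_table[s, a])

# --- RBF-based Q-learning: Q(s,a) ~ w_a . phi(s), far fewer parameters ---
centers = np.linspace(0, n_states - 1, 5)   # 5 RBF centers (assumed)
sigma = 3.0

def phi(s):
    # Gaussian radial basis features of the (scalar) state index
    return np.exp(-((s - centers) ** 2) / (2 * sigma ** 2))

W = np.zeros((n_actions, len(centers)))     # one weight vector per action

def rbf_update(s, a, r, s_next):
    q_next = np.max(W @ phi(s_next))
    td_error = r + gamma * q_next - W[a] @ phi(s)
    W[a] += alpha * td_error * phi(s)        # gradient step on the weights
```

The point of the comparison is the storage: the tabular version keeps one value per state-action pair, while the RBF version keeps only one weight per (action, basis function) pair, which is where the space reduction reported in the abstract comes from.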
Author Details
Name: Huy Pham
Status: Inactive
Name: Hung La
Status: Active
Name: David Feil-Seifer
Email: dave@cse.unr.edu
Website: http://cse.unr.edu/~dave
Phone: (775) 784-6469
Status: Active
Name: Luan Nguyen
Status: Inactive
BibTeX Reference
@inproceedings{pham2018performance,
  title={Performance Comparison of Function Approximation-Based Q Learning Algorithms for Autonomous UAV Navigation},
  author={Huy X. Pham and Hung La and David Feil-Seifer and Luan Nguyen},
  year={2018},
  month={June},
  url={https://arxiv.org/abs/1801.05086},
  address={Hawaii, US},
  note={arXiv preprint arXiv:1801.05086},
  booktitle={Proceedings of the International Conference on Ubiquitous Robots (UR)},
}
Support
CHS: Small: Collaborative Research: Spatio-Temporal Situational Awareness in Large-Scale Disasters Using Low-Cost Unmanned Aerial Vehicles, National Science Foundation. PI: David Feil-Seifer. Amount: $166,666. Jan. 1, 2016 - Dec. 31, 2017.