TY - GEN
T1 - Learning in Restless Multi-Armed Bandits using Adaptive Arm Sequencing Rules
AU - Gafni, Tomer
AU - Cohen, Kobi
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/8/15
Y1 - 2018/8/15
AB - We consider a class of restless multi-armed bandit (RMAB) problems with unknown arm dynamics. At each time, a player chooses one of N arms to play, referred to as the active arm, and receives a random reward drawn from a finite set of reward states. The reward state of the active arm transitions according to unknown Markovian dynamics, while the reward states of the passive arms (those not chosen to play at time t) evolve according to an arbitrary unknown random process. The objective is to find an arm-selection policy that minimizes the regret, defined as the reward loss with respect to a player who always plays the most rewarding arm. This class of RMAB problems has recently been studied in the context of communication networks and financial investment applications. We develop a strategy, referred to as the Adaptive Sequencing Rules (ASR) algorithm, that plays arms consecutively, with the selection sequencing rules adaptively updated based on the current sample reward means. By judiciously designing the adaptive sequencing rules for the chosen arms, we show that the ASR algorithm achieves a logarithmic regret order with time, and we establish a finite-sample bound on the regret. Although existing methods also attain a logarithmic regret order with time in this RMAB setting, the theoretical analysis shows that ASR significantly improves the regret scaling with respect to the system parameters. Extensive simulation results support the theoretical study and demonstrate the strong performance of the algorithm compared to existing methods.
UR - http://www.scopus.com/inward/record.url?scp=85052477394&partnerID=8YFLogxK
U2 - 10.1109/ISIT.2018.8437583
DO - 10.1109/ISIT.2018.8437583
M3 - Conference contribution
AN - SCOPUS:85052477394
SN - 9781538647806
T3 - IEEE International Symposium on Information Theory - Proceedings
SP - 1206
EP - 1210
BT - 2018 IEEE International Symposium on Information Theory, ISIT 2018
PB - Institute of Electrical and Electronics Engineers
T2 - 2018 IEEE International Symposium on Information Theory, ISIT 2018
Y2 - 17 June 2018 through 22 June 2018
ER -