Efficient Reinforcement Learning Through Uncertainties

Speaker:
Mr. ZHOU Dongruo

Abstract:

Reinforcement learning (RL) has achieved great empirical success in many real-world problems in the last few years. However, many RL algorithms are inefficient due to their data-hungry nature. Whether there exists a universal way to improve the efficiency of existing RL algorithms remains an open question.

In this talk, I will give a selective overview of my research, which suggests that efficient (and optimal) RL can be built through the lens of uncertainties. I will show that uncertainties not only guide RL to make decisions efficiently, but also accelerate learning of the optimal policy from a finite number of data samples collected from the unknown environment. Using the proposed uncertainty-based framework, I design computationally efficient and statistically optimal RL algorithms under various settings, which improve on existing baseline algorithms both theoretically and empirically. At the end of the talk, I will briefly discuss several additional works and my future research plan for designing next-generation decision-making algorithms.
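
For readers unfamiliar with uncertainty-guided decision making, the following is a minimal, illustrative sketch in the common UCB (upper confidence bound) style: actions are chosen optimistically by adding an uncertainty bonus to empirical value estimates. This is a generic toy example, not the speaker's algorithm; the function ucb_action and its bonus_scale parameter are assumptions made for this sketch.

    # Minimal sketch (illustrative only): UCB-style uncertainty bonus for action selection.
    import numpy as np

    def ucb_action(mean_rewards, counts, t, bonus_scale=1.0):
        """Pick the action with the highest optimistic value estimate.

        mean_rewards: empirical mean reward of each action so far
        counts:       number of times each action has been tried
        t:            current round (1-indexed)
        bonus_scale:  hypothetical tuning parameter for the exploration bonus
        """
        counts = np.maximum(counts, 1)  # avoid division by zero for untried actions
        uncertainty = bonus_scale * np.sqrt(np.log(t + 1) / counts)
        return int(np.argmax(mean_rewards + uncertainty))

    # Toy usage on a 3-armed bandit with unknown reward means.
    rng = np.random.default_rng(0)
    true_means = np.array([0.2, 0.5, 0.8])
    means, counts = np.zeros(3), np.zeros(3)
    for t in range(1, 501):
        a = ucb_action(means, counts, t)
        r = rng.normal(true_means[a], 0.1)
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]  # running mean update
    print(counts)  # most pulls should concentrate on the best arm (index 2)

The uncertainty term shrinks as an action is tried more often, so the rule explores under-sampled actions early and exploits the best-looking action later, which is the intuition behind "efficiency through uncertainties" described in the abstract.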

Biography:
ZHOU Dongruo is a final-year PhD student in the Department of Computer Science at UCLA, advised by Prof. Quanquan Gu. His research focuses broadly on the foundations of machine learning, with a particular emphasis on reinforcement learning and stochastic optimization. He aims to provide a theoretical understanding of machine learning methods and to develop new machine learning algorithms with better performance. He is a recipient of the UCLA Dissertation Year Fellowship.

Join Zoom Meeting:
https://cuhk.zoom.us/j/93549469461?pwd=R0FOaFdxOG5LS0s2Q1RmaFdNVm4zZz09
Meeting ID: 935 4946 9461
Passcode: 202300

Enquiries: Mr Jeff Liu at Tel. 3943 0624

Date: Mar 07, 2023

Time: 10:00 am - 11:00 am

Location: Zoom
