Unless otherwise noted, all seminars will take place in the 6th floor conference room of Donald Bren Hall (DBH 6011). Refreshments will be served at 10:50am, and the seminar talks will run from 11:00am until noon.
For additional information, please contact CS Seminar Administrative Coordinator, Mare Stasik, at firstname.lastname@example.org or (949) 824-7651.
November 30, 2018
11:00am - 12:00pm
Donald Bren Hall 6011
A recurring problem in the efficient control of networked systems is the need to operate under uncertainty about critical system dynamics. It is usually necessary to develop mechanisms that optimize the target performance measures while also allocating part of their available resources to learning the uncertainties. Optimizing this learning (exploration) versus earning (exploitation) tradeoff has formed the core of many interesting approaches over the last decade that build on the multi-armed bandit (MAB) framework.
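For readers unfamiliar with the MAB framework, the exploration/exploitation tradeoff described above can be illustrated with the classic UCB1 index policy. This is a generic textbook sketch, not the speaker's method; the Bernoulli arm means and horizon below are invented for illustration:

```python
import math
import random

def ucb1(means, horizon, seed=0):
    """Simulate UCB1 on Bernoulli arms with the given true means.

    Returns the number of times each arm was pulled."""
    rng = random.Random(seed)
    n_arms = len(means)
    counts = [0] * n_arms   # pulls per arm
    sums = [0.0] * n_arms   # total observed reward per arm

    # Pull each arm once to initialize its empirical estimate.
    for a in range(n_arms):
        counts[a] += 1
        sums[a] += 1.0 if rng.random() < means[a] else 0.0

    for t in range(n_arms, horizon):
        # Index = empirical mean + exploration bonus; the bonus shrinks
        # as an arm accumulates observations, shifting effort toward
        # exploitation of the empirically best arm.
        ucb = [sums[a] / counts[a] + math.sqrt(2 * math.log(t) / counts[a])
               for a in range(n_arms)]
        a = max(range(n_arms), key=lambda i: ucb[i])
        counts[a] += 1
        sums[a] += 1.0 if rng.random() < means[a] else 0.0
    return counts

counts = ucb1([0.2, 0.5, 0.8], horizon=2000)
```

Over a long enough horizon, the policy concentrates most of its pulls on the arm with the highest true mean while still sampling the others often enough to keep its estimates reliable.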
An important challenge in this research space concerns the modeling and utilization of the side-information that is typically available about other arms when an arm is pulled. Numerous applications, ranging from social to communication networks, possess different forms of side-information structure, which call for new learning and optimization mechanisms that exploit them for provably low-regret performance guarantees. In this talk, I will present our research findings from multiple domains, including advertising in social networks, delay-constrained multi-channel communication, and multi-armed bandits for renewal processes, which reveal the gains and means of utilizing various forms of side-information in the optimal learning and control of networks under uncertainty.