Seminar, Hefei Branch of the National Center for Mathematics and Interdisciplinary Sciences [Niao He (何袅)]

Published by: System Administrator    Date: 2016-07-18


Title: Simple Optimization, Bigger Models, and Faster Learning

Speaker: Niao He (何袅), University of Illinois at Urbana-Champaign

Time: July 22, 2016, 4:00–5:00 PM

Venue: Room 1318, School of Mathematical Sciences, Management and Research Building, East Campus

Abstract:
The era of Big Data introduces major challenges for machine learning: methods must accommodate rapidly growing data and increasingly complex models. Many traditional learning methods fail due to a lack of scalability or theoretical guarantees. In this talk, I will show how simple optimization algorithms such as stochastic gradient descent, when combined with newly developed techniques, can make a substantial difference. I will discuss some of our recent work that advances three important sub-domains of machine learning: kernel machines, Bayesian inference, and reinforcement learning. In each case, we developed simple new algorithms that allow training bigger models, learning better and faster, come with provable guarantees, and improve significantly over the previous state of the art. These advances have proven useful in a wide range of machine learning applications on large-scale real-world datasets.
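For readers unfamiliar with the method named in the abstract, here is a minimal background sketch (not taken from the talk) of plain stochastic gradient descent: instead of computing the gradient over the full dataset, each update uses one randomly drawn sample. The model, learning rate, and constants below are all illustrative choices.

```python
import random

def sgd_linear_regression(data, lr=0.1, steps=10000, seed=0):
    """Fit y ≈ w*x + b by stochastic gradient descent.

    Each step samples one (x, y) pair and takes a gradient step on the
    squared error for that single sample, rather than on the full dataset.
    """
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    for _ in range(steps):
        x, y = rng.choice(data)        # one random sample per update
        err = (w * x + b) - y          # prediction error on that sample
        w -= lr * err * x              # gradient of 0.5*err**2 w.r.t. w
        b -= lr * err                  # gradient of 0.5*err**2 w.r.t. b
    return w, b

# Noiseless synthetic data from y = 2x + 1, so the target is (w, b) = (2, 1).
data = [(x / 10.0, 2 * (x / 10.0) + 1) for x in range(-20, 21)]
w, b = sgd_linear_regression(data)
```

The per-step cost is independent of the dataset size, which is the scalability property the abstract appeals to; the trade-off is noisier updates, which is where the "newly developed techniques" the talk discusses come in.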

About the Speaker:
Niao He is an Assistant Professor in the Department of Industrial and Enterprise Systems Engineering at the University of Illinois at Urbana-Champaign. She completed her PhD in Operations Research in 2015 under the supervision of Professor Arkadi Nemirovski in the Department of Mathematics at the Georgia Institute of Technology, earning a Master's degree in Computational Science and Engineering along the way. Her research focuses on fast algorithms and theoretical analysis for large-scale convex, stochastic, robust, and distributed optimization, with applications to finance, machine learning, and decision-making under uncertainty. She has a particular interest in the integration of optimization, machine learning, and statistics.