"龙马统数·见微知著" Lecture Series, Lecture 81: Mini-batch Gradient Descent with Buffer
Published: 2024-11-05   Edited by: School of Statistics and Mathematics

Academic Talk: Mini-batch Gradient Descent with Buffer

Time: Thursday, November 7, 10:00-11:00 a.m.

Venue: Conference Room 102, Academic Building No. 1, Shahe Campus

Speaker: Haobo Qi, faculty-track postdoctoral fellow, Beijing Normal University

Abstract: In this paper, we study a buffered mini-batch gradient descent (BMGD) algorithm for training complex models on massive datasets. The algorithm is designed for fast training on a GPU-CPU system and consists of two steps: a buffering step and a computation step. In the buffering step, a large batch of data (i.e., a buffer) is loaded from the hard drive into the graphical memory of the GPU. In the computation step, a standard mini-batch gradient descent (MGD) algorithm is applied to the buffered data. Compared with the traditional MGD algorithm, the proposed BMGD algorithm can be more efficient for two reasons. First, the BMGD algorithm uses the buffered data for multiple rounds of gradient updates, which reduces the expensive communication cost from the hard drive to GPU memory. Second, the buffering step can be executed in parallel, so the GPU does not have to stay idle while new data are loaded. We first investigate the theoretical properties of the BMGD algorithm under a linear regression setting; the analysis is then extended to the Polyak-Lojasiewicz loss function class. The theoretical claims about the BMGD algorithm are numerically verified by simulation studies, and the practical usefulness of the proposed method is demonstrated by three image-related real-data analyses.
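The two-step structure described in the abstract can be illustrated with a minimal Python/PyTorch sketch. The function name bmgd_train, the buffer and batch sizes, the background prefetch thread, and the squared-error loss below are illustrative assumptions, not the speaker's actual implementation.

# A minimal sketch of the buffered mini-batch gradient descent (BMGD) idea:
# prefetch large buffers of data to GPU memory in a background thread, then
# run several rounds of mini-batch gradient descent on each buffer.
import threading
import queue
import torch

def bmgd_train(dataset, model, lr=0.01, buffer_size=10_000,
               batch_size=256, rounds_per_buffer=5, device="cuda"):
    # dataset is assumed to be a pair (X, y) of tensors living on the host.
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    buffers = queue.Queue(maxsize=1)  # at most one buffer prefetched ahead

    def prefetch():
        # Buffering step: move large chunks from host memory/storage to the
        # GPU while the GPU is busy computing on the previous buffer.
        n = dataset[0].shape[0]
        for start in range(0, n, buffer_size):
            xb = dataset[0][start:start + buffer_size].to(device)
            yb = dataset[1][start:start + buffer_size].to(device)
            buffers.put((xb, yb))
        buffers.put(None)  # sentinel: no more buffers

    threading.Thread(target=prefetch, daemon=True).start()

    while True:
        item = buffers.get()
        if item is None:
            break
        xb, yb = item
        # Computation step: reuse the buffered data for several MGD rounds,
        # amortizing the transfer cost from storage to GPU memory.
        for _ in range(rounds_per_buffer):
            perm = torch.randperm(xb.shape[0], device=device)
            for i in range(0, xb.shape[0], batch_size):
                idx = perm[i:i + batch_size]
                optimizer.zero_grad()
                loss = loss_fn(model(xb[idx]), yb[idx])
                loss.backward()
                optimizer.step()
    return model

As the abstract notes, the key design choice is that each buffer is reused for multiple rounds of gradient updates, trading extra passes over the same data for far fewer transfers from the hard drive to GPU memory, while the prefetch thread keeps the GPU from idling between buffers.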

About the speaker: Haobo Qi received his Ph.D. in Economics from the Guanghua School of Management, Peking University, in 2023. He is currently a faculty-track postdoctoral fellow in the Department of Mathematical Statistics, School of Statistics, Beijing Normal University. His research interests include statistical optimization, statistical analysis of large-scale data, network data analysis, and statistical theory in deep learning. He has published papers in journals such as the Journal of Computational and Graphical Statistics, Neurocomputing, and Computational Statistics & Data Analysis.

Written by: Liu Jie

Reviewed by: Deng Lu

