WANT@NeurIPS'23 best paper award
We would like to thank all contributing authors for submitting their papers! There were many great papers, and the selection process was not easy for the program committee 😅
We are happy to announce that the WANT@NeurIPS'23 best paper awards go to the following papers:
- Sparse Backpropagation for MoE Training (by Liyuan Liu, Jianfeng Gao, Weizhu Chen)
- Efficient Parallelization Layouts for Large-Scale Distributed Model Training (by Johannes Hagemann, Samuel Weinbach, Konstantin Dobler, Maximilian Schall, Gerard de Melo)
Congratulations to the best paper award winners!