👉 WANT poll - Tell us your insights and thoughts about efficient training of neural networks! Your vote does matter! (Poll results)

📜 WANT page at OpenReview - Accepted papers (Orals & Posters) are here!

📅 WANT page at Whova - Add to your NeurIPS agenda!

🎥 WANT page at NeurIPS.cc - Streaming and virtual chat are here! (now open to everyone)

🏰 Gather Town - Online poster sessions and networking

NeurIPS'23 Workshop on Advancing Neural Network Training (WANT): Computational Efficiency, Scalability, and Resource Optimization

The Workshop on Advancing Neural Network Training (WANT): Computational Efficiency, Scalability, and Resource Optimization will give all researchers the tools necessary to train neural networks at scale. It will provide an interactive platform for researchers and practitioners to delve into the latest advancements in neural network training. The workshop focuses on practical approaches to the challenges of computational efficiency, scalability, and resource optimization.

The unprecedented availability of data, computation, and algorithms has enabled a new AI revolution, as seen in Transformers, large language models (LLMs), and diffusion models, resulting in groundbreaking applications such as ChatGPT, generative AI, and AI for science. However, all of these applications share an ever-growing scale, which makes training such models increasingly difficult. This can be a bottleneck for the advancement of science, both at industry scale and for smaller research teams that may not have access to the same training infrastructure.

By optimizing the training process, we can accelerate innovation, drive impactful applications in various domains, and enable progress in areas such as AI for good and AI for science. Our WANT@NeurIPS 2023 workshop will cover topics such as optimization of computations (re-materialization, tensorized layers) and parallelization across devices (offloading, different types of parallelism, pipelining). However, there is often a gap between these latest techniques and their application in research and production code, a gap we aim to close with this workshop.
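As a taste of one such technique, here is a minimal sketch of re-materialization (also known as activation or gradient checkpointing) using PyTorch's `torch.utils.checkpoint`; the module and tensor shapes are illustrative assumptions, not workshop material.

```python
import torch
from torch.utils.checkpoint import checkpoint

# A hypothetical deep block; sizes are illustrative only.
block = torch.nn.Sequential(
    torch.nn.Linear(1024, 1024),
    torch.nn.ReLU(),
    torch.nn.Linear(1024, 1024),
)

x = torch.randn(32, 1024, requires_grad=True)

# Re-materialization: activations inside `block` are not stored during the
# forward pass; they are recomputed during backward, trading extra compute
# for a smaller memory footprint.
y = checkpoint(block, x, use_reentrant=False)
y.sum().backward()
```

Applied to every layer of an n-layer network, this kind of checkpointing can reduce activation memory roughly from O(n) to O(sqrt(n)) at the cost of one extra forward pass, which is exactly the efficiency/resource trade-off the workshop examines.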

Through panel sessions, poster presentations, lightning talks, and open Q&A sessions, we will show how to scale model training and tackle challenges such as managing increased computational complexity, handling data dependencies, optimizing algorithmic design, and striking the right balance between efficiency and model expressiveness. This workshop will bring together diverse communities, including industry and academia, theoretical researchers and practical application developers, as well as experts from the domains of high-performance computing and artificial intelligence. We hope this workshop will foster collaboration and knowledge exchange among these communities and drive interdisciplinary advancements in neural network training.