πŸ‘₯ Discord community on AI & HPC - Join to connect with everyone excited about efficient neural network training, including WANT participants and organizers!

πŸ‘‰ WANT poll - Tell us your insights and thoughts about efficient training of neural networks! Your vote does matter! (Poll results from the previous WANT@NeurIPS’23 iteration)

πŸ“œ WANT page at OpenReview - Accepted papers (Orals & Posters) are here!

πŸ“… WANT page at Whova - Add to your ICML agenda!

πŸŽ₯ WANT page at ICML.cc - Streaming and virtual chat are here! (now open to everyone)

🏰 Gather Town - Online poster sessions and networking

ICML'24 Workshop on Advancing Neural Network Training (WANT): Computational Efficiency, Scalability, and Resource Optimization

The Workshop on Advancing Neural Network Training (WANT): Computational Efficiency, Scalability, and Resource Optimization will give all researchers the tools necessary to train neural networks at scale. It will provide an interactive platform for researchers and practitioners to delve into the latest advancements in neural network training. Our workshop focuses on practical approaches to the challenges of computational efficiency, scalability, and resource optimization.

The unprecedented availability of data, computation, and algorithms has enabled a new AI revolution, as seen in Transformers, LLMs, and diffusion models, resulting in groundbreaking applications such as ChatGPT, generative AI, and AI for science. However, all of these applications share an ever-growing scale, which makes training such models increasingly difficult. This can be a bottleneck for the advancement of science, both at industry scale and for smaller research teams that may not have access to the same training infrastructure. By optimizing the training process, we can accelerate innovation, drive impactful applications in various domains, and enable progress in areas such as AI for good and AI for science.

WANT@ICML 2024 aims to address the increasing scale and complexity of AI training. It builds on the success of the previous iteration to expand discussions on efficiency in neural network training, targeting the AI, HPC, and science communities to foster collaboration and advance techniques for real-world applications. Compared to its predecessor, this iteration delves deeper into advanced arithmetic, computation operations, scheduling techniques, and resource optimization for both homogeneous and heterogeneous resources. It also broadens the discussion to encompass diverse science applications beyond AI, including healthcare, earth science, and manufacturing.

Through panel sessions, poster presentations, lightning talks, and open Q&A sessions, we will show how to scale model training and tackle challenges such as managing increased computational complexity, handling data dependencies, optimizing algorithmic design, and striking the right balance between efficiency and model expressiveness. This workshop will bring together diverse communities, including industry and academia, theoretical researchers and practical application developers, and experts from the domains of high-performance computing and artificial intelligence. We hope this workshop will foster collaboration and knowledge exchange among these communities and drive interdisciplinary advancements in neural network training.