We invite researchers and practitioners to submit their work to WANT@ICML 2024, which aims to explore cutting-edge advancements in neural network training and to address the challenges of training models at scale as well as under limited resources.

  • Full paper submission deadline (all authors must have an OpenReview profile when submitting): June 2 (23:59 AOE), 2024, extended from May 28 (23:59 AOE), 2024

  • Author notification: June 17 (AOE), 2024

  • Camera-ready, poster, and (optional) video submission: to be announced

  • Submission link: OpenReview (double-blind review process)

  • Submission format: up to 8 pages, plus unlimited pages for references and appendix. The submitted .pdf file should conform to the formatting templates (.tex, .sty)

  • Submissions to the workshop are non-archival (i.e., double submission is allowed; accepted papers will be posted on the workshop website)

We welcome submissions on the following topics, among others:

  • Training for large-scale models
  • Efficient training for different applications (NLP/CV/Climate/Medicine/Finance/etc.)
  • Model/tensor/data and other types of parallelism
  • Pipelining
  • Communication optimization
  • Re-materialization (activation checkpointing)
  • Offloading
  • Efficient computations: tensorized layers, low-precision computations, etc.
  • Energy-efficient training
  • Efficient data loading and preprocessing
  • Network-aware resource allocation
  • Architecture-aware resource allocation
  • Scheduling for AI