Bruno Raffin
INRIA
Bruno Raffin is a Senior Scientist (Director of Research) at INRIA, Grenoble, France, and leader of the DataMove team, a joint team of INRIA, Univ. Grenoble Alpes, and CNRS. He holds a PhD from the University of Orléans on parallel programming language design (1997) and spent two years at Iowa State University as a postdoctoral fellow. He has investigated various topics, including large-scale data-flow-oriented parallel processing, cache-efficient parallel data structures, task-based multi-CPU and multi-GPU programming, and in situ data processing for single and ensemble simulation runs. His recent research focuses on scientific machine learning (SciML) and how to combine traditional parallel solvers with deep learning.
Zachary Mueller
Hugging Face
Zach Mueller is the Technical Lead for the Accelerate project at Hugging Face. He is a graduate of the University of West Florida and has considerable experience in both the Hugging Face and fastai communities.
Urmish Thakker
SambaNova Systems
Urmish Thakker leads the LLM team at SambaNova Systems, which focuses on adapting LLMs to enterprise use cases and on hardware-software co-design of LLMs to enable efficient training and inference. Before SambaNova, Urmish held various engineering and research roles at Arm, AMD, and Texas Instruments. He also helped drive the TinyML Performance Working Group in MLPerf, contributing to the development of key benchmarks for IoT ML. He has 35+ publications and patents focusing on efficient deep learning and LLMs, with papers published at top ML and hardware conferences such as NeurIPS, ICLR, EMNLP, ISCA, and MICRO. He completed his master's degree at the University of Wisconsin-Madison and his bachelor's at the Birla Institute of Technology and Science.
Beidi Chen
Carnegie Mellon University & Meta
Beidi Chen is an Assistant Professor at Carnegie Mellon University and a Research Scientist at FAIR. Before that, she was a postdoctoral scholar at Stanford University. She received her Ph.D. from Rice University. Her research focuses on efficient AI: she designs and optimizes algorithms for current hardware to accelerate large machine learning systems. Her work won the best paper runner-up award at ICML 2022, and she was selected as a Rising Star in EECS by MIT and UIUC.
Adam DeConinck
NVIDIA
Adam DeConinck is a senior manager on the NVIDIA Applied Systems Engineering team, where he supports a team of system architects who design next-generation AI supercomputers. His past experience includes building high-performance compute and storage systems at scale at Los Alamos National Laboratory and Facebook, as well as for NVIDIA’s HPC and AI customers.
Max Ryabinin
Together AI
Max Ryabinin is a Distinguished Research Scientist at Together AI, working on large-scale and efficient deep learning. Previously, he was a Senior Research Scientist at Yandex, studying a variety of topics in NLP and machine learning systems. In 2021-2022, Max served as a working group chair for the BigScience Research Workshop, helping build BLOOM, the largest multilingual language model at the time. He received his PhD on decentralized deep learning from HSE University: in a series of publications, he proposed methods for training large neural networks over slow and unstable networks.