Ana Gainaru
Oak Ridge National Laboratory
Ana Gainaru is a computer scientist in the CSM division at Oak Ridge National Laboratory, working on data management and performance optimization for large-scale scientific workflows, with a focus on codes that couple traditional HPC with AI. She received her PhD from the University of Illinois at Urbana-Champaign, where she worked on fault tolerance and scheduling for large-scale systems. In her current position, she works with application developers in fusion, neutron scattering, and materials sciences to deploy digital twins and large models and to improve their performance at scale.
Publications
Website
Lionel Eyraud-Dubois
INRIA
Lionel Eyraud-Dubois received his PhD in computer science from the Université de Grenoble. He is currently a full-time researcher at Inria Bordeaux Sud-Ouest in the Topal team. His main research interests encompass combinatorial optimization and operations research techniques for scheduling and resource allocation problems in high-performance computing systems, including optimizing the training and inference of deep neural networks.
Publications
Website
Yang You
HPC-AI Tech, National University of Singapore, Colossal-AI
Yang You is a Presidential Young Professor at the National University of Singapore. He received his Ph.D. in Computer Science from UC Berkeley under Prof. James Demmel. Yang's research interests include parallel/distributed algorithms, high performance computing, and machine learning. He is a winner of the IPDPS 2015 Best Paper Award (0.8%), the ICPP 2018 Best Paper Award (0.3%), and the ACM/IEEE George Michael HPC Fellowship. Yang is also a Siebel Scholar and a winner of the Lotfi A. Zadeh Prize. He made the Forbes 30 Under 30 Asia list (2021) for young leaders and received the IEEE-CS TCHPC Early Career Award.
Publications
Website
Tunji Ruwase
Microsoft DeepSpeed
Olatunji (Tunji) Ruwase is a co-founder and Principal Research Sciences Manager of the DeepSpeed project at Microsoft. His broad industry and research background spans compilers, operating systems, and hardware accelerators. He is currently interested in building systems, convergence optimizations, and frameworks for distributed training and inference of deep learning models. His research results on DL training, inference, and hyperparameter search are used in multiple Microsoft systems and products, such as Bing, Ads, HyperDrive, and Catapult.
Publications
Website
Natalia Vassilieva
Cerebras Systems
Natalia Vassilieva is a Sr. Director of Product at Cerebras Systems, a computer systems company dedicated to accelerating deep learning. She leads the vision and strategy for Cerebras products, as well as market, application, and algorithm analysis for machine learning use cases. Her focus is machine learning and artificial intelligence, analytics, and application-driven software-hardware optimization and co-design. Prior to joining Cerebras, Natalia was a Sr. Research Manager at Hewlett Packard Labs, where she led the Software and AI group and served as the head of HP Labs Russia from 2011 until 2015. Before Hewlett Packard, she was an Associate Professor at St. Petersburg State University in Russia and worked as a software engineer for several IT companies. Natalia holds a Ph.D. in computer science from St. Petersburg State University.
Publications
Website
Jonathan Frankle
Databricks, ex-MosaicML
Jonathan Frankle is Chief Scientist (Neural Networks) at Databricks, where he leads the research team toward the goal of developing more efficient algorithms for training neural networks. He arrived via Databricks’ $1.3B acquisition of MosaicML, where he was part of the founding team. He recently completed his PhD at MIT, where he empirically studied deep learning with Prof. Michael Carbin, specifically the properties of sparse networks that allow them to train effectively (his "Lottery Ticket Hypothesis" won the ICLR 2019 Best Paper Award). In addition to his technical work, he is actively involved in policymaking around artificial intelligence. He earned his BSE and MSE in computer science at Princeton and previously spent time at Google Brain and Facebook AI Research as an intern, and at Georgetown Law as an Adjunct Professor of Law.
Publications
Website
Mohammad Shoeybi
NVIDIA
Dr. Mohammad Shoeybi is the Director of Applied Research at NVIDIA. His team focuses on building large foundation models and adapting them to downstream applications. His team built Megatron-LM, a framework for efficiently training LLMs, and used it to train several large-scale models such as Megatron-Turing NLG, with 530 billion parameters. He received his PhD from Stanford University in 2010. Prior to NVIDIA, he worked at DeepMind and Baidu USA, leading efforts on bringing deep learning and reinforcement learning to applications.
Publications