Dr. Milind Chabbi
Uber Technologies, Inc.

13 July 2021

Milind Chabbi conducts research in the areas of high-performance parallel computing, shared-memory synchronization algorithms, performance analysis tools, and compiler optimizations. He is currently a senior researcher at Uber Technologies in Palo Alto, USA, and is also the president of his independent research company, Scalable Machine Research. Previously, Milind worked at Baidu Research, Hewlett Packard Labs, and Microsoft. He obtained his doctoral degree in computer science from Rice University, working on software tools and algorithms for high-performance parallel computing. Milind has published over 30 conference and journal papers, received numerous best paper awards, and holds eight USPTO patents.


Research papers covered during the seminar:

Dr. Tan Nguyen
Lawrence Berkeley National Laboratory

21 June 2021

Tan Nguyen is a research scientist at Lawrence Berkeley National Laboratory. His recent research focuses on performance analysis and code optimization for various processor architectures, including multi- and many-core CPUs, GPUs, FPGAs, and CGRAs. He is also interested in compiler analysis and code generation, programming models, and runtime systems for scientific applications. Nguyen received his Ph.D. in Computer Science from the University of California, San Diego, in 2014.


Research papers covered during the seminar:

Dr. Mohamed Wahib
AIST/TokyoTech Open Innovation Laboratory

25 May 2021

Mohamed Wahib is a senior scientist at the AIST/TokyoTech Open Innovation Laboratory, Tokyo, Japan. Prior to that, he worked as a researcher at the RIKEN Center for Computational Science (RIKEN-CCS). He received his Ph.D. in Computer Science in 2012 from Hokkaido University, Japan. Before his graduate studies, he worked for four years as a researcher at the Texas Instruments (TI) R&D labs in Dallas, TX. His research interests revolve around the central topic of “Performance-centric Software Development” in the context of HPC. He is actively working on several projects, including high-level frameworks for programming traditional scientific applications, as well as high-performance AI and data analytics.


Research papers covered during the seminar:

  • ParDNN: An Oracle for Characterizing and Guiding Large-Scale Training of Deep Neural Networks. HPDC’21
  • Scaling Distributed Deep Learning Workloads beyond the Memory Capacity with KARMA. SC’20
  • A Study of Single and Multi-device Synchronization Methods in Nvidia GPUs. IPDPS’20