HPC User Assistance and Outreach Group, Oak Ridge National Laboratory
Fernanda Foertter is a member of the User Assistance Team at the National Center for Computational Sciences (NCCS) located at Oak Ridge National Laboratory (ORNL). This team is responsible for assisting all users at the Oak Ridge Leadership Computing Facility (OLCF). Fernanda is responsible for the training program at the center and represents OLCF at both the OpenACC and OpenMP organizations.
Director of Science, Oak Ridge Leadership Computing Facility
Jack Wells is the Director of Science for the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science national user facility, and the Titan supercomputer, located at Oak Ridge National Laboratory (ORNL). Wells is responsible for the scientific outcomes of the OLCF's user programs. Jack previously led both ORNL's Computational Materials Sciences group in the Computer Science and Mathematics Division and the Nanomaterials Theory Institute in the Center for Nanophase Materials Sciences. Prior to joining ORNL as a Wigner Fellow in 1997, Wells was a postdoctoral fellow within the Institute for Theoretical Atomic and Molecular Physics at the Harvard-Smithsonian Center for Astrophysics. Jack has a Ph.D. in physics from Vanderbilt University, and has authored or co-authored over 80 scientific papers and edited one book, spanning nanoscience, materials science and engineering, nuclear and atomic physics, computational science, applied mathematics, and text-based data analytics.
Research Scientist in Deep Learning, Oak Ridge National Laboratory
Steven Young is a researcher at Oak Ridge National Laboratory working in the Computational Data Analytics Group. His research focuses on applying deep learning to challenging datasets using HPC to enable faster training and quicker discovery. He has a Ph.D. in computer engineering from the University of Tennessee, where he studied machine learning in the Machine Intelligence Lab.
Principal Research Physicist, Princeton University
William Tang of Princeton University is a principal research physicist at the Princeton Plasma Physics Laboratory, for which he served as chief scientist (1997-2009). He is currently a lecturer with the rank and title of professor in astrophysical sciences, and a member of the executive board for the Princeton Institute for Computational Science and Engineering, which he helped establish and for which he served as associate director (2003-2009). William is internationally recognized for expertise in the mathematical formalism and associated computational applications dealing with electromagnetic kinetic plasma behavior in complex geometries, with over 200 publications, more than 150 of them peer-reviewed papers, and an h-index of 44 on the Web of Science, including well over 7,000 total citations. William has taught for over 30 years and has supervised numerous Ph.D. students, including recipients of the Presidential Early Career Award for Scientists and Engineers in 2000 and 2005. He is also head of the Intel Parallel Computing Center at the Princeton Institute for Computational Science & Engineering at Princeton University.
Scientist, University of Illinois at Urbana-Champaign, National Center for Supercomputing Applications
Daniel George is a Ph.D. student in astronomy, pursuing the computational science and engineering concentration, at the University of Illinois at Urbana-Champaign. He obtained his bachelor's degree in engineering physics from IIT Bombay. He is currently a research assistant in the Gravity Group at the National Center for Supercomputing Applications and a member of the LIGO collaboration working at the interface of deep learning, high performance computing, and gravitational wave and multimessenger astrophysics. His long-term interests lie in applying cutting-edge computer science and technology, especially machine learning and artificial intelligence, to accelerate discoveries in the fundamental sciences.
Deep learning has become a popular tool for gaining insight into problems where deterministic models don't yet exist. The recent development of GPU-accelerated deep learning frameworks has enabled the application of deep learning to problems that demand fast solutions. The scientific community has traditionally sought to develop deterministic models to describe physical phenomena, using highly scalable systems to simulate problems with ever-increasing fidelity. While many science domains have developed robust predictive methods, there are still problems lacking models that can describe observed phenomena. In many of these cases, the problem may contain unknown variables, or be fundamentally hard to solve, so that simulation cannot fully predict observations. Such areas include biological systems, chaotic systems, and medical research. There are also fields where a priori models do exist, but surveying the parameter space by simulating large datasets would require prohibitively long times to solution. These areas include instrument data analysis and materials by design. We'll explore how the scientific community is using deep learning to conduct leading-edge research outside of traditional modeling techniques. We'll also explore opportunities and obstacles to scaling deep learning workloads on high performance computing systems.
Tags: HPC and Supercomputing; Deep Learning and AI
Industry Segments: Higher Education / Research