Washington, D.C., October 26-27, 2016

Schedule Planner

SPECIAL EVENT

Presentation
Details

DCE16108 - Exhibits

Level: All
Type: Special Event
Tags: Exhibits

Day: Wednesday, 10/26
Time: 07:30 - 08:30
Location: Amphitheater Foyer & Meridian Foyer

DCE16123 - Registration

Level: All
Type: Special Event
Tags: Registration

Day: Wednesday, 10/26
Time:
Location: Amphitheater Foyer

DCE16124 - Virtual Reality Experience

Level: All
Type: Special Event
Tags: Virtual Reality Experience

Day: Wednesday, 10/26
Time:
Location: Meridian CDE & Amphitheater Foyer


KEYNOTE


DCS16158 - Keynote Address

Bill Dally Chief Scientist, NVIDIA
Bill Dally joined NVIDIA in January 2009 as chief scientist, after spending 12 years at Stanford University, where he was chairman of the computer science department. Dally and his Stanford team developed the system architecture, network architecture, signaling, routing and synchronization technology that is found in most large parallel computers today. Dally was previously at the Massachusetts Institute of Technology from 1986 to 1997, where he and his team built the J-Machine and the M-Machine, experimental parallel computer systems that pioneered the separation of mechanism from programming models and demonstrated very low overhead synchronization and communication mechanisms. From 1983 to 1986, he was at the California Institute of Technology (Caltech), where he designed the MOSSIM Simulation Engine and the Torus Routing Chip, which pioneered "wormhole" routing and virtual-channel flow control. He is a member of the National Academy of Engineering, a Fellow of the American Academy of Arts & Sciences, a Fellow of the IEEE and the ACM, and has received the IEEE Seymour Cray Award and the ACM Maurice Wilkes Award. He has published over 200 papers, holds over 50 issued patents, and is an author of two textbooks. Dally received a bachelor's degree in Electrical Engineering from Virginia Tech, a master's in Electrical Engineering from Stanford University, and a Ph.D. in Computer Science from Caltech. He is a cofounder of Velio Communications and Stream Processors.

Opening Keynote Speech

Level: All
Type: Keynote
Tags: IoT

Day: Wednesday, 10/26
Time: 09:00 - 09:50
Location: Amphitheater

DCS16164 - Advancing the Frontiers of Science

France Córdova Director, National Science Foundation
France A. Córdova was sworn in as Director of the National Science Foundation (NSF) on March 31, 2014. Nominated by President Barack Obama to head the $7.5-billion independent federal agency, and confirmed by the U.S. Senate, Dr. Córdova leads the only government science agency charged with advancing all fields of scientific discovery, technological innovation, and science, technology, engineering and mathematics (STEM) education. NSF's programs and initiatives keep the United States at the forefront of science and engineering, empower future generations of scientists and engineers, and foster U.S. prosperity and global leadership.

The National Science Foundation (NSF) is an independent federal agency that supports fundamental research and education across all fields of science and engineering. With an annual budget of $7.5 billion, NSF awards grants to nearly 2,000 colleges, universities and other institutions in all 50 states. Hear how NSF is advancing discovery and technological innovation in all fields, including artificial intelligence, to keep the United States at the forefront of global science and engineering leadership.

Level: All
Type: Keynote
Tags: Federal

Day: Wednesday, 10/26
Time: 10:00 - 10:30
Location: Amphitheater

DCS16182 - The Economic Implications of Artificial Intelligence

Jason Furman Chairman, White House Council of Economic Advisers, Office of the President of the United States
Jason Furman was confirmed by the Senate on August 1, 2013 as the 28th Chairman of the Council of Economic Advisers. In this role, he serves as President Obama’s chief economist and a Member of the Cabinet. Furman has served the President since the beginning of the Administration, previously holding the position of Principal Deputy Director of the National Economic Council and Assistant to the President. Immediately prior to the Administration, Furman was Economic Policy Director for the President’s campaign in 2008 and a member of the Presidential Transition Team. Furman held a variety of posts in public policy and research before his work with President Obama. In public policy, Furman worked at both the Council of Economic Advisers and National Economic Council during the Clinton administration and also at the World Bank. In research, Furman was a Senior Fellow at the Brookings Institution and the Center on Budget and Policy Priorities and also has served in visiting positions at various universities, including NYU’s Wagner Graduate School of Public Policy. Furman has conducted research in a wide range of areas, such as fiscal policy, tax policy, health economics, Social Security, and domestic and international macroeconomics. In addition to numerous articles in scholarly journals and periodicals, Furman is the editor of two books on economic policy. Furman holds a Ph.D. in economics from Harvard University.

TBA

Level: All
Type: Keynote
Tags: IoT

Day: Wednesday, 10/26
Time: 10:30 - 10:45
Location: Amphitheater


SPECIAL EVENT


DCE16110 - Exhibits

Level: All
Type: Special Event
Tags: Exhibits

Day: Wednesday, 10/26
Time: 11:00 - 13:30
Location: Amphitheater Foyer & Meridian Foyer

DCE16111 - Lunch

Level: All
Type: Special Event
Tags: Lunch

Day: Wednesday, 10/26
Time: 11:00 - 13:30
Location: Atrium Ballroom

DCE16117 - Self-Paced Labs

Level: All
Type: Special Event
Tags: Self-Paced Labs

Day: Wednesday, 10/26
Time: 11:00 - 12:30
Location: Polaris Foyer


HANDS-ON LAB


DCL16103 - Deep Learning for Image Segmentation

Jonathan Bentz Solutions Architect, NVIDIA
Jonathan Bentz is a Solutions Architect with NVIDIA, focusing on Higher Education and Research customers. In this role, he works as a technical resource for customers to support and enable their adoption of GPU computing. He delivers GPU training such as programming workshops to train users and help raise awareness of GPU computing. He also works with ISV and customer applications to assist in optimization for GPUs through the use of benchmarking and targeted code development efforts. Prior to NVIDIA Jonathan worked for Cray as a software engineer where he developed and optimized high-performance scientific libraries such as BLAS, LAPACK, and FFT specifically for the Cray platform. Jonathan obtained his PhD in physical chemistry and his MS in computer science from Iowa State University.

There are a variety of important applications that need to go beyond detecting individual objects within an image and instead segment the image into spatial regions of interest. For example, in medical imagery analysis, it is often important to separate the pixels corresponding to different types of tissue, blood or abnormal cells so that we can isolate a particular organ. In this lab, we will use the TensorFlow deep learning framework to train and evaluate an image segmentation network using a medical imagery dataset.

Level: Beginner
Type: Hands-on Lab
Tags: Science and Research

Day: Wednesday, 10/26
Time: 11:00 - 12:30
Location: Hemisphere A

DCL16104 - Getting Started with Deep Learning (End-to-end Series Part 1)

Abel Brown Solution Architect, NVIDIA
Abel holds degrees in Mathematics and Physics as well as a PhD in the field of Geodesy & Geophysics from The Ohio State University. For the past eight years, Abel has been developing distributed software frameworks and administering high performance computing clusters. He has deployed and managed many sensor networks around the world in Antarctica, South America, and Greenland. Abel is dually appointed on the Magnetospheric Multiscale (MMS) Ground System and Conjunction Assessment development teams and manages numerous research projects at a.i. solutions on GPU computing, image analytics, and advanced satellite perturbation techniques. As co-author, Abel's recent work contributed to the PNAS publication "Greenland Rising", which was featured in WIRED Magazine's "Best Scientific Figures of 2012".

Deep learning is giving machines near human levels of visual recognition capabilities and disrupting many applications by replacing hand-coded software with predictive models learned directly from data. This lab introduces the machine learning workflow and provides hands-on experience with using deep neural networks (DNN) to solve a real-world image classification problem. You will walk through the process of data preparation, model definition, model training and troubleshooting, validation testing, and strategies for improving model performance. You'll also see the benefits of GPU acceleration in the model training process. On completion of this lab, you will have the knowledge to use NVIDIA DIGITS to train a DNN on your own image classification dataset.

Level: Beginner
Type: Hands-on Lab
Tags: Science and Research

Day: Wednesday, 10/26
Time: 11:00 - 12:30
Location: Horizon

DCL16105 - Deep Learning for Object Detection (End-to-end Series Part 2)

Ryan Olson Solutions Architect, NVIDIA
Ryan Olson is a Solutions Architect at NVIDIA. Prior to this, Ryan was a Member of the Performance Engineering Team at Cray. Prior to Cray, Ryan was a Postdoctoral Research Associate at the University of Minnesota, and he completed graduate work at the Ames Laboratory. Ryan holds a PhD in Physical Chemistry from Iowa State University, and a BA in Chemistry & Mathematics from Saint John's University.

Building upon the foundational understanding of how deep learning is applied to image classification, this lab explores different approaches to the more challenging problem of detecting if an object of interest is present within an image and recognizing its precise location within the image. Numerous approaches have been proposed for training deep neural networks for this task, each having pros and cons in relation to model training time, model accuracy and speed of detection during deployment. On completion of this lab, you will understand each approach and their relative merits. You'll receive hands-on training applying cutting edge object detection networks trained using NVIDIA DIGITS on a challenging real-world dataset.

Level: Intermediate
Type: Hands-on Lab
Tags: Science and Research

Day: Wednesday, 10/26
Time: 11:00 - 12:30
Location: Hemisphere B


PANEL


DCS16190 - Artificial Intelligence and America's Future

Matthew Schruers Vice President, Law & Policy, Computer & Communications Industry Association (CCIA)
Matthew Schruers is Vice President for Law & Policy at the Computer & Communications Industry Association (CCIA), where he represents and advises the association on domestic and international policy issues including intellectual property, competition, and trade. He is also an adjunct professor at the Georgetown University Law Center and the Georgetown Graduate School Program on Communication, Culture, and Technology (CCT), where he teaches courses on intellectual property. Mr. Schruers joined CCIA from Morrison & Foerster LLP in 2005, where he practiced intellectual property, antitrust, and administrative law. Mr. Schruers received his J.D. from the University of Virginia School of Law, where he served on the editorial board of the Virginia Law Review, and received his B.A. from Duke University.
Bill Dally Chief Scientist and Senior Vice President of Research, NVIDIA
Bill is an internationally acclaimed computer scientist whose pioneering work in parallel processing, along with his research leadership at NVIDIA and Stanford, has helped enable the AI revolution. Before joining NVIDIA in 2009, Dally served as chairman of Stanford's Computer Science department, where he has taught since 1997. The IEEE Computer Society conferred its highest award on Dally in 2010, calling him a "visionary" for advancing the state of computing with parallel processors.
Lynne Parker Division Director for Information and Intelligent Systems, National Science Foundation
Lynne directs NSF research in the areas of AI, machine learning, robotics, sensor networks, brain and cognitive science, human-robot cooperation, and embedded systems, overseeing a $200 million annual research budget. She has a Ph.D. from MIT.
Victor Bennett Senior Economist, Council of Economic Advisers, The White House
Victor provides policy input to the White House Council of Economic Advisers in the areas of innovation, technology, and industrial organization. He has a BA from Stanford University and a Ph.D. in Business and Public Policy from the University of California, Berkeley. Prior to graduate school he worked at Google. He is on leave from Duke University, where he is a Professor in the Fuqua School of Business.
Alan Davidson Director of Digital Economy , U.S. Department of Commerce
Alan is the first Director of Digital Economy at the U.S. Department of Commerce and Senior Advisor to the Secretary of Commerce. Previously he was director of the Open Technology Institute at the New America Foundation in Washington, D.C. He also was a Research Affiliate at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), where he was a co-founder of the new MIT Information Policy Project and a Fellow at the Sloan School’s Center for Digital Business. Until 2012, Alan was the Director of Public Policy for Google.

It's not just America's business community that is embracing Artificial Intelligence, but the Federal Government as well. In early October, the White House issued a report entitled "Preparing for the Future of Artificial Intelligence," after conducting a series of public workshops with prominent universities around the country. Join us for a conversation with leading thinkers on how the government and the private sector are preparing for that future, including discussion of the economic impact of AI, the role of the government in funding critical AI research, and how AI can help address current public policy challenges.

Level: All
Type: Panel
Tags: Federal

Day: Wednesday, 10/26
Time:
Location: Amphitheater


TALK


DCS16142 - Deep Learning for Accelerating Remote Sensing Feature Extraction

Will Rorrer Program Manager, Harris
Will Rorrer has worked with the Harris Corporation for over 15 years providing management and guidance to key business units. These areas include the NightVision operations team, the JagWire program for streaming, cataloguing and analyzing Full Motion Video, and leading research and development on deep learning tools and applications. Throughout his career he has been honored to support the National Geospatial Intelligence Agency and other parts of the Department of Defense in using high tech capabilities for solving global security problems.

This presentation will walk through several use cases Harris has developed for implementing deep learning/AI/machine learning concepts with NVIDIA hardware and discuss the implications of this research in the wider remote sensing community. Federal use cases with large data archives that need to reduce time for manual analyst search and reduce processing time will be highlighted.

Level: All
Type: Talk
Tags: Federal; HPC

Day: Wednesday, 10/26
Time: 12:30 - 12:55
Location: Polaris

DCS16149 - Artificial Intelligence is Accelerating the Race to Self Driving Cars

Danny Shapiro Sr. Director, Automotive, NVIDIA
Danny Shapiro is Senior Director of NVIDIA's Automotive Business Unit, focusing on solutions that enable faster and better design of automobiles, as well as in-vehicle solutions for self-driving cars, infotainment systems, and digital instrument clusters. Danny holds a BSE in Electrical Engineering and Computer Science from Princeton University and an MBA from the Haas School of Business at UC Berkeley. Danny serves on the Advisory Boards for the LA Auto Show, the Connected Car Council, and the NVIDIA Foundation, which focuses on computational solutions for cancer research.

An overview of artificial intelligence in self-driving cars.

Level: All
Type: Talk
Tags: Autonomous Vehicles

Day: Wednesday, 10/26
Time: 12:30 - 12:55
Location: Amphitheater

DCS16150 - Learning Building Extraction in Aerial Scenes with Convolutional Networks

Jiangye Yuan Research Scientist, Oak Ridge National Laboratory
Dr. Jiangye Yuan is a Research Scientist in the Geographic Information Science and Technology group at ORNL. His current research focuses on developing computational methods for interpreting and understanding large volumes of geospatial images. He received the M.S. degree in computer science and engineering and the Ph.D. degree in geodetic science from The Ohio State University.

Extracting buildings from aerial scene images is an important yet challenging task. We take a unique approach that leverages the convolutional network framework combined with massive labeled data obtained from geographic information systems (GIS). We design a convolutional network with a simple structure that integrates activation from multiple layers for pixel-wise prediction, and introduce the signed distance function of building boundaries as the output representation, which has an enhanced representation power. To train networks, we use building footprint data in GIS maps to generate large amounts of labeled data. We have trained models that achieve superior performance on city-scale and country-scale imagery.

Level: All
Type: Talk
Tags: HPC

Day: Wednesday, 10/26
Time: 12:30 - 12:55
Location: Rotunda

DCS16166 - Artificial Intelligence for Computational Pathology

Andy Beck Co-Founder at PathAI and an Associate Professor (Part-time) at Harvard Medical School, PathAI
Dr. Andy Beck earned his M.D. from Brown Medical School and completed residency and fellowship training in Anatomic Pathology and Molecular Genetic Pathology at Stanford University. He completed a Ph.D. in Biomedical Informatics at Stanford University, where he developed one of the first machine-learning-based systems for cancer pathology. He is board certified by the American Board of Pathology in Anatomic Pathology and Molecular Genetic Pathology. He joined the faculty of Harvard Medical School in 2011, where he is now an Associate Professor (part-time). He has published over 90 publications in the fields of cancer biology, cancer pathology, and biomedical informatics. In 2016, Dr. Beck co-founded PathAI, a company that develops artificial intelligence technology for pathology.

GPU-accelerated computing is driving major advances in AI, offering great promise for the development of AI-powered diagnostics that are more accurate, standardized, and predictive than conventional approaches. In this talk, I will present our work to use AI to build applications for pathology. I will discuss recent results that demonstrate the power of integrating AI-based systems with human experts to achieve super-human performance in diagnostic accuracy. This work aims to unlock the full potential of pathology data to reduce medical error and improve patient outcomes.

Level: All
Type: Talk
Tags: Healthcare

Day: Wednesday, 10/26
Time: 12:30 - 13:20
Location: Oceanic

DCS16126 - Deep Learning Techniques for Overhead Imagery Analysis Applications

Terrell Nathan Mundhenk Computational Engineering Division, Lawrence Livermore National Laboratory
TBA
Wesam Adel Sakla Computational Engineering Division, Lawrence Livermore National Laboratory
Wesam Sakla is a computer vision scientist at LLNL. He obtained his B.Sc. in Computer Engineering from the University of South Alabama, his M.Sc. in Electrical Engineering from the University of South Alabama, and his Ph.D. in Electrical Engineering from Texas A&M University. His current research interests include the use of deep convolutional neural networks for automated detection, localization, recognition, and classification applications on multi-modal overhead imagery.

The use of NVIDIA Titan X GPUs has facilitated recent successes in the use of deep learning techniques for overhead imagery analysis applications at Lawrence Livermore National Laboratory. We have introduced a novel "one-look" method for counting cars in imagery using deep convolutional neural networks. The novelty of our method is that we have trained the deep network to learn to count directly, without a preliminary stage of detection/localization or density estimation. In particular, we have customized a deep network architecture, which we call ResCeption, that combines residual learning layers with Inception-style layers to yield accurate results on a large, diverse set of overhead imagery containing vehicles in various challenging operating conditions.

Level: Intermediate
Type: Talk
Tags: HPC; Federal

Day: Wednesday, 10/26
Time: 13:00 - 13:25
Location: Rotunda

DCS16128 - Deep Neural Networks in Super-Resolution of Satellite Imagery

Patrick Hagerty Director of Research, CosmiQ Works, an In-Q-Tel Lab
Patrick Hagerty is the Director of Research of CosmiQ Works, an In-Q-Tel lab focusing on Space 3.0 startups. Prior to working at In-Q-Tel, Patrick was an Applied Research Mathematician for the Department of Defense working on high performance computing and emerging technologies. He received his Ph.D. in Mathematics from the University of Michigan in the area of Geometric Mechanics.

Deep convolutional neural networks can be trained to enhance lower resolution imagery by learning features within the images. Performance, measured by Peak Signal-to-Noise Ratio (PSNR), has been shown to improve by several dB using deep learning. For very large satellite imagery, the distribution of dB gain determines the value of image enhancement. We architect and train a convolutional neural network with parameters that measure each layer's importance to the super-resolution process. The architectural technique that we present assists network compression and is applicable to previous super-resolution results as well as other deep learning applications.

Level: All
Type: Talk
Tags: Federal; HPC

Day: Wednesday, 10/26
Time: 13:00 - 13:25
Location: Polaris


SPECIAL EVENT


DCE16118 - Self-Paced Labs

Level: All
Type: Special Event
Tags: Self-Paced Labs

Day: Wednesday, 10/26
Time: 13:30 - 17:30
Location: Polaris Foyer


HANDS-ON LAB


DCL16107 - Getting Started with Deep Learning (End-to-end Series Part 1)

Abel Brown Solution Architect, NVIDIA
Abel Brown holds degrees in Mathematics and Physics as well as a PhD in the field of Geodesy & Geophysics from The Ohio State University. For the past eight years, Abel has been developing distributed software frameworks and administering high performance computing clusters. He has deployed and managed many sensor networks around the world in Antarctica, South America, and Greenland. Abel is dually appointed on the Magnetospheric Multiscale (MMS) Ground System and Conjunction Assessment development teams and manages numerous research projects at a.i. solutions on GPU computing, image analytics, and advanced satellite perturbation techniques. As co-author, Abel's recent work contributed to the PNAS publication "Greenland Rising", which was featured in WIRED Magazine's "Best Scientific Figures of 2012".

Deep learning is giving machines near human levels of visual recognition capabilities and disrupting many applications by replacing hand-coded software with predictive models learned directly from data. This lab introduces the machine learning workflow and provides hands-on experience with using deep neural networks (DNN) to solve a real-world image classification problem. You will walk through the process of data preparation, model definition, model training and troubleshooting, validation testing, and strategies for improving model performance. You'll also see the benefits of GPU acceleration in the model training process. On completion of this lab, you will have the knowledge to use NVIDIA DIGITS to train a DNN on your own image classification dataset.

Level: Beginner
Type: Hands-on Lab
Tags: Science and Research

Day: Wednesday, 10/26
Time: 13:30 - 15:00
Location: Hemisphere A

DCL16112 - Deep Learning for Object Detection (End-to-end Series Part 2)

Ryan Olson Solutions Architect, NVIDIA
Ryan Olson is a Solutions Architect at NVIDIA. Prior to this, Ryan was a Member of the Performance Engineering Team at Cray. Prior to Cray, Ryan was a Postdoctoral Research Associate at the University of Minnesota, and he completed graduate work at the Ames Laboratory. Ryan holds a PhD in Physical Chemistry from Iowa State University, and a BA in Chemistry & Mathematics from Saint John's University.

Building upon the foundational understanding of how deep learning is applied to image classification, this lab explores different approaches to the more challenging problem of detecting if an object of interest is present within an image and recognizing its precise location within the image. Numerous approaches have been proposed for training deep neural networks for this task, each having pros and cons in relation to model training time, model accuracy and speed of detection during deployment. On completion of this lab, you will understand each approach and their relative merits. You'll receive hands-on training applying cutting edge object detection networks trained using NVIDIA DIGITS on a challenging real-world dataset.

Level: Intermediate
Type: Hands-on Lab
Tags: Science and Research

Day: Wednesday, 10/26
Time: 13:30 - 15:00
Location: Horizon

DCL16116 - Deep Learning for Image Segmentation

Jonathan Bentz Solutions Architect, NVIDIA
Jonathan Bentz is a Solutions Architect with NVIDIA, focusing on Higher Education and Research customers. In this role, he works as a technical resource for customers to support and enable their adoption of GPU computing. He delivers GPU training such as programming workshops to train users and help raise awareness of GPU computing. He also works with ISV and customer applications to assist in optimization for GPUs through the use of benchmarking and targeted code development efforts. Prior to NVIDIA Jonathan worked for Cray as a software engineer where he developed and optimized high-performance scientific libraries such as BLAS, LAPACK, and FFT specifically for the Cray platform. Jonathan obtained his PhD in physical chemistry and his MS in computer science from Iowa State University.

There are a variety of important applications that need to go beyond detecting individual objects within an image and instead segment the image into spatial regions of interest. For example, in medical imagery analysis, it is often important to separate the pixels corresponding to different types of tissue, blood or abnormal cells so that we can isolate a particular organ. In this lab, we will use the TensorFlow deep learning framework to train and evaluate an image segmentation network using a medical imagery dataset.

Level: Beginner
Type: Hands-on Lab
Tags: Science and Research

Day: Wednesday, 10/26
Time: 13:30 - 15:00
Location: Hemisphere B


TALK


DCS16115 - Deep Patient: Predict the Medical Future of Patients with Deep Learning

Joel Dudley Associate Professor, Icahn School of Medicine at Mount Sinai, New York
Joel Dudley is a recognized leader in applying biomedical big data to healthcare and drug discovery. He currently holds positions as Associate Professor of Genetics and Genomic Sciences and Director of Biomedical Informatics at the Icahn School of Medicine at Mount Sinai. He also directs the newly formed Institute for Next Generation Healthcare at Mount Sinai. Prior to Mount Sinai, he held positions as Co-founder and Director of Informatics at NuMedii, Inc., one of the first companies to apply big data to drug discovery, and Consulting Professor of Systems Medicine in the Department of Pediatrics at Stanford University School of Medicine. His work is focused on developing and applying advanced computational methods to integrate the digital universe of information to build better predictive models of disease and drug response. He and his team are also developing pioneering methods to bring about a next generation of medicine that leverages advances in diagnostics, wearables, and digital health to enable new approaches to precision medicine and scientific wellness. He has authored and co-authored more than 80 publications, and his research has been featured in the Wall Street Journal, Scientific American, Forbes, and other popular media outlets. His recent work using a big data approach to identify subtypes of Type 2 diabetes was highlighted by NIH Director Francis Collins on the NIH Director's Blog as a significant advance in precision medicine. He was named in 2014 as one of the 100 most creative people in business by Fast Company magazine. He is co-author of the book Exploring Personal Genomics from Oxford University Press, which is used as a text in personalized and precision medicine courses at universities worldwide. He holds an M.Sc. and Ph.D. in Biomedical Informatics from Stanford University School of Medicine. Joel serves on the scientific advisory boards of numerous startups and companies in biotech and health tech.

This talk focuses on advances in deep learning applied to precision medicine and, especially, on "deep patient", a general-purpose patient representation derived from the electronic health records (EHRs) that facilitates clinical predictive modeling. Precision medicine raises big challenges in dealing with large and massive data from heterogeneous sources, such as EHRs, genomics, and wearables. Deep learning provides a unique opportunity to retrieve information from these complex and heterogeneous sources. Here, in particular, we show how a deep architecture was able to process aggregated EHRs from the Mount Sinai Health System data warehouse to derive domain-free patient representations that can improve automatic medical predictions given the patient clinical status.

Level: Intermediate
Type: Talk
Tags: Healthcare; HPC

Day: Wednesday, 10/26
Time: 13:30 - 13:55
Location: Oceanic

DCS16140 - Computing to Cure Cancer: Building Exascale Deep Learning Tools for Effective Cancer Surveillance

Arvind Ramanathan Staff Scientist, Oak Ridge National Laboratory
Arvind Ramanathan is a staff scientist in the Computational Science and Engineering Division and the Health Data Sciences Institute. He obtained his Ph.D. in computational biology from Carnegie Mellon University and his Masters in computer science from Stony Brook University. His research interests lie at the intersection of computational biology, machine learning and high performance computing systems. In particular, he is interested in developing data analytic tools relevant for applications in drug-discovery and public health dynamics.

The nation has recently embarked on an "all government" approach to the problem of cancer. As part of this initiative, the Department of Energy (DOE) has entered into a partnership with the National Cancer Institute (NCI) of the National Institutes of Health (NIH). This partnership has identified three key challenges that the combined resources of DOE and NCI can accelerate. An interagency agreement has been developed that provides a framework for this joint work. Four DOE national laboratories are collaborating with the NCI. At Oak Ridge National Lab, as part of the population health pilot that we lead, we have been developing deep learning approaches using supercomputing resources to automatically annotate cancer pathology reports and we present initial results based on our experience.

Level: All
Type: Talk
Tags: HPC; Healthcare

Day: Wednesday, 10/26
Time: 13:30 - 13:55
Location: Rotunda

DCS16181 - The Future of Autonomous Vehicles in a Nation of Autos

Bruce Daley Principal Analyst, Tractica
Bruce Daley is a principal analyst contributing to Tractica's Artificial Intelligence practice, with a focus on artificial intelligence for enterprise applications. Daley is based in Denver and has extensive experience as an industry analyst, writer, and publisher focused on the global IT market. Prior to his work with Tractica, Daley was vice president and principal analyst with Constellation Research, where his coverage areas included business research themes related to customer relationship management, mobility, and infrastructure. He was previously founder of Great Divide, co-founder of Rabbit Ears Capital Advisors, founder of Test Common Inc., founder of the Enterprise Software Summit, and founder of The Siebel Observer, the largest publication devoted to Siebel Systems. Daley's background also includes consulting and management roles at Oracle and Bain & Company. Daley has been widely quoted as an industry expert in major publications including The Wall Street Journal, The New York Times, The Financial Times, The International Herald Tribune, IEEE Spectrum, The San Jose Mercury News, and many more. He is also the author of a recently published book on data storage, Where Data is Wealth. Daley holds a BA from Tufts University.

Looking beyond the current work being done in the field, this presentation examines how autonomous vehicles are most likely to change the future. U.S. culture is built around the automobile. Songs are sung about it. Driving is an important part of parenting. Getting a license is a significant rite of passage. How will customs change when cars drive themselves? What will be the demand for consumer and commercial vehicles in the years ahead? Will the role of government change as a consequence? Is the human-driven car destined to suffer the same fate as the horse? All these questions and more will be examined during this presentation.

Level: All
Type: Talk
Tags: Autonomous Vehicles

Day: Wednesday, 10/26
Time: 13:30 - 13:55
Location: Amphitheater

DCS16191 - GEOINT Revolution and the Associated Enabling Technologies

Keith Masback President, The United States Geospatial Intelligence Foundation (USGIF)
Keith Masback is the President of the United States Geospatial Intelligence Foundation (USGIF). He is responsible for carrying out the Foundation's mission of promoting the geospatial intelligence tradecraft and developing a broader, more capable GEOINT Community among government, industry, academic, professional organizations and individuals whose mission focus is the development and application of geospatial intelligence to address national and international security objectives.

TBA

Level: All
Type: Talk
Tags: Federal

Day: Wednesday, 10/26
Time: 13:30 - 13:55
Location: Polaris

DCS16106 - Accelerating Phylogenetic Inference Using GPUs: The BEAGLE Library

Michael Cummings Professor, University of Maryland
Michael Cummings is a professor in the Center for Bioinformatics and Computational Biology at the University of Maryland where he leads the Laboratory of Molecular Evolution, and holds appointments in the Institute for Advanced Computer Studies, Department of Biology, and Department of Computer Science. He received his Ph.D. from Harvard University, and did his postdoctoral research at the University of California, Berkeley as an Alfred P. Sloan Foundation Postdoctoral Fellow in Molecular Studies of Evolution, and at the University of California, Riverside. He has published broadly in molecular evolution, phylogenetics, computational biology and bioinformatics, computer science, and bioinformatics education. Because of his varied expertise he has served the scientific community through participation in numerous national and international committees, panels, symposia, workshops, and advisory boards.

This session provides an example of a scientific analysis problem greatly accelerated by using GPUs. Phylogenetic inference, estimating the evolutionary history of organisms, is a critical step in analyses involving biological sequence data. Modern phylogenetic analyses take DNA sequence data from a set of organisms and use model-based methods to infer a binary tree representing the evolutionary history of the organisms going back to their most recent common ancestor. These phylogenetic relationships are very important in understanding the evolutionary dynamics, timing, and spread of many disease-causing organisms, such as the viruses causing AIDS, flu, and Ebola, among others. We have developed an open source library, BEAGLE, which greatly accelerates phylogenetic analyses using GPUs.
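The likelihood computation that BEAGLE accelerates can be sketched in miniature. This pure-Python example (a toy three-leaf tree, the Jukes-Cantor substitution model, and made-up branch lengths, all illustrative rather than BEAGLE's actual API) computes the likelihood of a single DNA site with Felsenstein's pruning algorithm; the per-node, per-state loops are the kind of work a library like BEAGLE parallelizes on the GPU:

```python
import math

def jc_prob(i, j, t):
    """Jukes-Cantor probability of nucleotide i changing to j over branch length t."""
    e = math.exp(-4.0 * t / 3.0)
    return 0.25 + 0.75 * e if i == j else 0.25 - 0.25 * e

def partial(node, branch_lengths, leaves):
    """Felsenstein pruning: partial likelihoods for the 4 states (A,C,G,T) at `node`."""
    if node in leaves:
        # Leaf: likelihood 1.0 for the observed base, 0.0 otherwise.
        return [1.0 if s == leaves[node] else 0.0 for s in range(4)]
    left, right = node  # internal nodes are (left_child, right_child) tuples
    pl = partial(left, branch_lengths, leaves)
    pr = partial(right, branch_lengths, leaves)
    out = []
    for s in range(4):
        l = sum(jc_prob(s, x, branch_lengths[left]) * pl[x] for x in range(4))
        r = sum(jc_prob(s, x, branch_lengths[right]) * pr[x] for x in range(4))
        out.append(l * r)
    return out

# Toy tree ((x,y),z) with observed bases A, C, G (encoded A=0, C=1, G=2, T=3).
leaves = {"x": 0, "y": 1, "z": 2}
bl = {"x": 0.1, "y": 0.1, ("x", "y"): 0.2, "z": 0.3}
root = (("x", "y"), "z")

# Site likelihood: average the root partials over uniform base frequencies.
lik = sum(0.25 * p for p in partial(root, bl, leaves))
print(f"site likelihood: {lik:.6f}")
```

A real analysis repeats this over thousands of sites and millions of candidate trees, which is why the embarrassingly parallel inner loops map so well to GPUs.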

Level: All
Type: Talk
Tags: Healthcare; HPC

Day: Wednesday, 10/26
Time: 14:00 - 14:25
Location: Oceanic

DCS16138 - Computer Vision Applications in Agriculture Weeding and Phenotyping

Lee Redden CTO, Blue River Technology
As a Nebraska native, Lee Redden understood agriculture from an early age. As Co-founder and CTO of Blue River Technology, Redden specializes in building breakthrough technology that merges computer vision, machine learning, and robotics to deliver real-time, precise care for every plant, instead of the blanket approach most farmers are forced to use today. The result is higher yields from the same acre of land, with dramatically lower chemical use. Redden has also spent time at some of the top robotics research labs in the country, including Johns Hopkins Applied Physics Lab, Stanford and NASA's Johnson Space Center.

Blue River Technology builds "See & Spray" robots for agricultural applications. Our current product sees, detects, optimizes, and acts on 10% of the lettuce produced in the U.S. and is capable of plant-by-plant care. We'll walk through the development and deployment of computer vision systems in a market where high reliability is expected, data is biased, compute platforms need to be rugged, and the system needs to run in real time.

Level: Beginner
Type: Talk
Tags: Robotics; Federal

Day: Wednesday, 10/26
Time: 14:00 - 14:25
Location: Amphitheater

DCS16143 - Inside Pascal: New Features of NVIDIA's Latest Computing Architecture

Mark Harris Chief Technologist, GPU Computing Software, NVIDIA
Highly-Rated Speaker
Mark Harris is Chief Technologist for GPU Computing at NVIDIA, where he works as a developer advocate and helps drive NVIDIA's GPU computing software strategy. His research interests include parallel computing, general-purpose computation on GPUs, physically based simulation, and real-time rendering. Mark founded www.GPGPU.org while he was earning his Ph.D. in computer science from the University of North Carolina at Chapel Hill. Mark lives completely off grid out the back of Byron Bay, Australia with his family, some domestic animals, and a variety of wildlife.

The revolutionary NVIDIA® Pascal™ architecture is purpose-built to be the engine of computers that learn, see, and simulate our world—a world with an infinite appetite for computing. Pascal incorporates ground-breaking technologies to deliver the highest absolute performance for HPC, technical computing, deep learning, and many computationally intensive datacenter workloads. In this talk you'll see how the new Pascal architecture provides extreme performance and scaling using the NVLink high-speed GPU interconnect, HBM2 stacked memory for massive bandwidth, and high computational throughput for artificial intelligence with new 16-bit floating point instructions. You'll also learn how Unified Memory in CUDA benefits from Pascal's new Page Migration Engine.

Level: All
Type: Talk
Tags: Federal; Healthcare; HPC

Day: Wednesday, 10/26
Time: 14:00 - 14:50
Location: Atrium Ballroom

DCS16170 - GPU Power for End Users - Rapidly Operationalizing Technology through a Customer Driven Partnership

Leonel Garciga J6 Chief / CTO, Joint Improvised-Threat Defeat Organization (JIDO)
Leonel Garciga is the Joint Improvised Threat Defeat Agency (JIDA) J6 Chief and Chief Information Officer (CIO). In these roles, he provides technical leadership, oversight and direction for mission and enterprise information technology planning, research, development, experimentation, validation, acquisition, accreditation and application of future information technology programs, architectures and capabilities within JIDA. His expertise enables the rapid integration of cutting-edge technology for counter-IED and counter threat network operations and intelligence support to a myriad of missions for the Department of Defense (DoD) and U.S. national security.

JIDO Mission IT directly supports the warfighter and decision maker when and where they need it, with advanced analytical tools that improve their situational understanding of complex operations and environments. Our users' Latest Time of Value is our performance metric: an 80% solution on time is better than a 100% solution late. We leverage advanced technology and make it readily available to users to increase accuracy, efficiency, and discovery. JIDO rapidly pushes technology and innovation to the user through our methodology of operational experimentation and advanced research. JIDO works directly with the user, namely the warfighter, from the start to understand their requirements for new technology, tailoring it to their urgent needs. We actively involve the warfighter in what we call operational experimentation to achieve quick results and ensure our users' success. JIDO also maintains awareness of technological advances and works collaboratively with innovators across the community to apply them to existing problems. Leveraging GPUs for video analysis and threat network "sensemaking" to improve advanced analytics that exploit large data sets can pay dividends to the user by providing recommendations that make sense of interwoven linkages amidst convoluted data. As a community of innovators, we must understand user requirements and work closely with the user to rapidly meet their needs. We do not always need a purpose-built solution to help our users solve an intractable problem set or hard challenge. At JIDO we understand what is important to the analysts and operators. We know the difficult questions that need to be answered, and we then seek technology that we can tailor to their needs.

Level: All
Type: Talk
Tags: Federal

Day: Wednesday, 10/26
Time: 14:00 - 14:25
Location: Polaris

DCS16172 - Accelerating Applications with CPU-GPU NVLink

Drew Vandeth Senior Intelligence Advisor, IBM
Drew Vandeth is an IBM Distinguished Researcher and Senior Intelligence Advisor at IBM Research. He is a leading technology researcher, strategist, and scholar in the research and development of high performance computing capabilities for intelligence and national security applications. Over the last fifteen years, Dr. Vandeth has held positions at IBM Research, the Tutte Institute for Mathematics and Computing (which he founded and served as Deputy Director), the Canadian Department of National Defence, and the University of Ottawa. He holds a B.Math and an M.Math in number theory from the University of Waterloo and a Ph.D. in number theory from Macquarie University.

Bringing the GPU closer to the CPU enables a new level of acceleration. Learn how CPU-GPU NVLink enables three new things: higher performance, better programmability, and the ability to GPU-accelerate a larger portion of the application. Learn how new developments are changing how you program, opening the application aperture, and ushering in the next phase of GPU computing. GPU programming has always been constrained by the challenge of manual data management and the limits of the PCIe bus. New features, including the Page Migration Engine, are making data management easier. Now, systems with NVLink from CPU to GPU (POWER8 with NVLink) are breaking down the barriers between CPU and GPU even further. These changes have profound impacts on you as a developer, end user, or administrator of GPU-accelerated applications, and on how systems are architected. We'll explore how.

Level: All
Type: Talk
Tags: HPC

Day: Wednesday, 10/26
Time: 14:00 - 14:25
Location: Rotunda

DCS16107 - Medical Image Deep Learning with a Supercharged Machine Learning System DGX-1

Synho Do Assistant Medical Director, Massachusetts General Hospital and Harvard Medical School
Synho Do, Ph.D., is director of the Laboratory of Medical Imaging and Computation (LMIC), director of the Rapid Collaboration Hub (RCH), and assistant medical director for Advanced Health Technology Engineering, Research, and Development at the Massachusetts General Physicians Organization, as well as assistant professor in the Department of Radiology at Harvard Medical School and Massachusetts General Hospital. Synho is a member of the IEEE Signal Processing Society in Bio-Imaging and Signal Processing (BISP). He received his Ph.D. in biomedical engineering from the University of Southern California, and he is currently the principal investigator at MGH for the NVIDIA CUDA Research Center (CRC). His current research interests include statistical signal and image processing, estimation, detection, and medical signal and image processing, such as computed tomography and machine learning.

We'll introduce the first implementation of a clinical medical image machine learning platform on DGX-1. Attendees will learn how to develop and implement their own medical image machine learning platforms in this session. DGX-1 is a supercharged machine learning system that can process huge hospital data sets using high-bandwidth NVLink connecting eight GPUs. We'll explain our data extraction strategy, preprocessing methods, and implementation processes step by step. We'll also describe how we installed the system in our hospital (Massachusetts General Hospital) and outline the concerns around deploying the developed algorithms in the clinical workflow using DGX-1 and other GPU systems. We'll include a few real clinical examples recently developed on the NVIDIA platform.

Level: All
Type: Talk
Tags: Healthcare; HPC

Day: Wednesday, 10/26
Time: 14:30 - 14:55
Location: Oceanic

DCS16121 - DeepSAT: A Deep Learning Framework for Satellite Image Classification

Sangram Ganguly Senior Research Scientist, NASA Ames Research Center
Sangram Ganguly is a senior research scientist at the Biosphere Science Branch at NASA Ames Research Center, Moffett Field, California, and at the Bay Area Environmental Research Institute. His work leverages expertise across a range of disciplines, including cloud computing solutions for big data science and analytics, machine learning, advanced satellite remote sensing and image analytics, and climate sciences. Sangram did his Ph.D. at Boston University (USA). Prior to that, he graduated with an integrated masters (B.Sc. and M.Sc.) degree in geosciences from the Indian Institute of Technology (IIT), Kharagpur, India, in 2004. He is an active panelist for the NSF and NASA carbon and ecosystem programs and a science team member for the NASA Carbon Monitoring System Program. His research has been highlighted in mainstream news media, and he is the recipient of six NASA achievement awards that were recognized in the fields of ecosystem forecasting, climate science, and remote sensing. Sangram is also a cofounding member of the NASA Earth Exchange Collaborative and Supercomputing Facility at NASA Ames and a founding member and developer of the OpenNEX Platform.

High resolution land cover classification maps are needed to increase the accuracy of land ecosystem and climate model-based outputs. Few studies demonstrate the state of the art in deriving very high resolution (VHR) land cover products. In addition, most methods rely heavily on commercial software that is difficult to scale given the region of study (e.g., continental to global). Complexities in present approaches relate to (a) scalability of the algorithm, (b) large image data processing (compute and memory intensive), (c) computational cost, (d) massively parallel architecture, and (e) machine learning automation. We'll discuss these issues and our initial exercise, and show you how we have implemented a novel semi-automated approach enhanced by deep learning.

Level: All
Type: Talk
Tags: HPC

Day: Wednesday, 10/26
Time: 14:30 - 14:55
Location: Rotunda

DCS16122 - Fighting Malware with Machine Learning

Edward Raff Lead Scientist, Booz Allen Hamilton
Edward Raff is a Computer Scientist at Booz Allen Hamilton, specializing in machine learning problems and solutions. As the author of the JSAT library, Edward has extensive experience implementing all manner of algorithms. In particular, he has worked on problems involving bioinformatics, signal classification, sentiment analysis, real time object tracking, and change detection. He currently works at the Laboratory for Physical Sciences researching new methods of applying deep learning to cyber security, and in particular malware classification and analysis. Edward holds a Bachelor's and Master's degree from Purdue University, and is working on a Ph.D. at the University of Maryland, Baltimore County.
Jared Sylvester Senior Consultant , Booz Allen Hamilton
Jared Sylvester joined Booz Allen Hamilton in 2014 as a member of the Strategic Innovation Group, where he has been doing machine learning research focusing on cybersecurity applications at the Laboratory for Physical Sciences. Prior to that he got his doctorate in AI at the University of Maryland, working in both the Computer Science Department doing neural network cognitive modeling, and the Marketing department doing social network analytics. He lives in Rockville, Maryland with his wife, infant son, and terrier, and enjoys animation, calligraphy, bread baking and archery.

We'll talk about some of our initial work applying machine learning to the task of malware classification using neural networks. This is a particularly challenging problem, with data labeling issues and a data representation far different from the current successes in deep learning. We'll talk about our current results tackling a subset of the problem and how a neural network improved upon a classical tree-based approach while retaining many of its benefits. Using an attention-based LSTM, we show agreement between what the network learned and the tree-based approach. We'll discuss some of our future plans for deep learning in this space as we attempt to process a binary to determine its maliciousness.

Level: Intermediate
Type: Talk
Tags: Federal; HPC

Day: Wednesday, 10/26
Time: 14:30 - 14:55
Location: Polaris

DCS16177 - Autonomous Vehicles – Volvo DriveMe Project

Anders Eugensson Director Government Affairs, Volvo Cars
Anders Eugensson received his master's degree in civil engineering from Chalmers University of Technology, Gothenburg, Sweden, and Imperial College, London, England, in 1978. After working as a designer on various structural design projects, he joined Volvo in 1984. Between 1984 and 1987 he was part of the team that worked on the structural crashworthiness design of the Volvo 850. For a number of years he then managed the legal requirements department, and he joined the Volvo Safety Center in 1998, working on strategic issues as well as interacting with governments and policymakers. Since the beginning of 2003 he has been the Director of Governmental Affairs within Volvo Car Corporation. In this role he is part of the cross-functional team responsible for defining the long-term Volvo Cars safety strategies. In 2013 he received the National Highway Traffic Safety Administration's US Government Special Award of Appreciation.

Transportation is the backbone of modern society. However, with transportation comes a number of challenges. Congestion, lack of space, air pollution, and traffic casualties are all global issues that have to be addressed. New technologies linked to autonomous vehicles have the potential to change the future of mobility and will offer many opportunities. Urban citizens will be able to save time and stay connected while mobile, and there are opportunities to save fuel and reshape cities while creating a road transportation system with no crashes and no casualties. Volvo has been working since 2014 on its DriveMe project, preparing the launch of self-driving autonomous vehicles to be sold to customers in the early 2020s.

Level: All
Type: Talk
Tags: Autonomous Vehicles

Day: Wednesday, 10/26
Time: 14:30 - 14:55
Location: Amphitheater

DCS16103 - Deep Neural Networks in Medical Imaging and Radiology: Preventative and Precision Medicine Perspectives

Le Lu Staff Scientist, National Institutes of Health
Le Lu has been a staff scientist in the Department of Radiology and Imaging Sciences, National Institutes of Health (NIH) Clinical Center, Bethesda, Maryland, since 2013. His research is focused on medical image understanding and semantic parsing to fit into "revolutionary" clinical workflow practices, especially in the area of preventive cancer early detection and diagnosis via large scale imaging protocols and statistical (deep) learning principles. He worked on various core R&D problems in colonic polyp and lung nodule CADx systems and vessel and bone imaging at Siemens Corporate Research and Siemens Healthcare from Oct. 2006 until Jan. 2013, where his last post was senior staff scientist. He is the (co-)inventor of 16 U.S./international patents and 30 inventions, and has authored or coauthored more than 80 peer-reviewed papers (many appearing in tier-one journals and conferences). He received his Ph.D. in computer science from Johns Hopkins University in May 2007. He won the Mentor of the Year award (staff scientist/staff clinician category) at NIH in 2015, and the best summer intern mentor award from NIH-CC in 2013.

This talk focuses on employing deep learning (DL), especially deep neural networks, for high performance radiological image computing. We'll present the motivation, method details, and quantitative results for three core problems: 1) improving computer-aided detection (CAD) using convolutional neural networks and decompositional image representations; 2) integrated bottom-up deep convolutional networks for robust automated organ segmentation; and 3) text/image deep mining on a large-scale radiology image database for automated interpretation. We validate some very promising observations: using DL both significantly improves upon the CAD tasks in (1) and enables exciting new research directions in (2) and (3). We'll discuss their positive impacts on both preventative and precision medicine.

Level: Intermediate
Type: Talk
Tags: Healthcare; HPC

Day: Wednesday, 10/26
Time: 15:00 - 15:50
Location: Oceanic

DCS16119 - Build and Train Your First TensorFlow Graph from the Ground Up with Deep Learning Analytics

Aaron Schumacher Senior Data Scientist and Software Engineer, Deep Learning Analytics
Aaron Schumacher is a data scientist and software engineer for Deep Learning Analytics. He has taught with Python and R for General Assembly and the Metis data science bootcamp. Aaron has also worked with data at Booz Allen Hamilton, New York University, and the New York City Department of Education. Aaron's career-best breakdancing result was advancing to the semi-finals of the R16 Korea 2009 individual footwork battle. He is honored to now be the least significant contributor to TensorFlow 0.9.

TensorFlow is a powerful software framework from Google used more and more for deep learning research and applications. It seamlessly executes computation on GPUs and has a convenient Python API, but aspects of its design can be unfamiliar to newcomers. We'll explore the data flow graph that defines TensorFlow computations, how to train models with gradient descent using TensorFlow, and how TensorBoard can visualize work with TensorFlow. Participants will leave with a thorough and practical understanding of the fundamentals that make TensorFlow such an attractive option and how to start using the framework with the Python API.
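The gradient-descent training loop the session covers can be illustrated framework-free. This pure-Python sketch, with made-up data, fits y = w*x by minimizing mean squared error; it is the same loop TensorFlow expresses as a data flow graph and executes for you (with gradients derived automatically rather than by hand):

```python
# Toy data generated by w = 2, so the loop should recover w ≈ 2.0.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

w = 0.0    # model parameter, initialized at zero
lr = 0.01  # learning rate

for step in range(500):
    # Hand-derived gradient of mean((w*x - y)^2) with respect to w:
    # d/dw = mean(2 * (w*x - y) * x)
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # gradient-descent update

print(round(w, 3))  # 2.0
```

In TensorFlow the same computation is built as graph nodes (placeholders for data, a variable for w, an op for the loss), and an optimizer node computes `grad` by automatic differentiation instead of the hand-derived formula above.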

Level: Beginner
Type: Talk
Tags: Federal; HPC

Day: Wednesday, 10/26
Time: 15:00 - 15:25
Location: Polaris

DCS16161 - Thor's Hammer

Jim McHugh Vice President and General Manager, NVIDIA
Jim McHugh, Vice President and General Manager at NVIDIA, has over 25 years of experience as a marketing and business executive with start-up, mid-sized, and high-profile companies. He leads DGX-1, the world's first AI supercomputer in a box. Jim focuses on building a vision of organizational success and executing strategies to deliver computing solutions that benefit from GPUs in the data center. He has a deep knowledge and understanding of business drivers, market/customer dynamics, and technology-centered products.

We are entering a new computing paradigm, an era where software will write software. This is the biggest and fastest transition since the advent of the Internet. Big data and analytics brought us information and insight; AI and Deep Learning turn that insight into super human knowledge and real-time action. It's unleashing new business models, new ways to build, dream and experience the world, new geniuses to advance humanity, faster than ever before. Companies and industries are transforming our everyday experiences and services we depend upon. This includes everything from fraud detection, product recommendations, insurance pricing, athletic performance, and weather prediction to more advanced capabilities in agriculture, medicine, and investing.

Level: All
Type: Talk
Tags: HPC; IoT

Day: Wednesday, 10/26
Time: 15:00 - 15:50
Location: Rotunda

DCS16175 - How Data and AI will Transform the Nation's Roads

Chris Gerdes Chief Innovation Officer, United States Department of Transport
Chris Gerdes is the first chief innovation officer at the United States Department of Transportation (U.S. DOT). In this role, he works with the Secretary of Transportation to foster the culture of innovation across U.S. DOT and find ways to support transportation innovation taking place both inside and outside of government. He serves as an internal champion for innovation and idea generation and as a departmental resource for problem-solving approaches, advanced research, automation, and connected vehicles. Gerdes is serving at U.S. DOT while on leave from Stanford University, where he is a professor of mechanical engineering. His laboratory studies how cars move, how humans drive cars, and how to design future cars that work cooperatively with the driver or drive themselves. Vehicles in the lab include X1, an entirely student-built test vehicle; Shelley, an automated Audi TT-S that can lap a racetrack as quickly as an expert driver; and MARTY, an electrified DeLorean capable of controlled drifts.

Automated vehicles offer an unparalleled opportunity to eliminate the 94% of vehicle crashes attributable to human choice or error and dramatically reduce the 35,092 fatalities that occur annually in the United States. Replacing human drivers with automation, however, is no simple task and requires real-world testing to ensure that the automated vehicles can handle the range of conditions that human drivers navigate routinely. Furthermore, the public rightfully expects that such testing must itself be safe. To enable the safe testing and deployment of automated vehicles, the United States Department of Transportation recently released guidance for developers, including a 15 Point Safety Assessment that should be performed prior to testing. The guidance is not prescriptive and enables developers to take a variety of approaches to address areas such as the vehicle's operational design domain, fall-back behavior, human-machine interface and ethical considerations. Thus approaches that hard-code specific behaviors and those that learn from data are both possible under the guidance. Data-driven approaches in particular are appealing because of their ability to leverage the large amount of data that can be easily generated by an automated vehicle. But they raise other questions of performance guarantees in the event that a situation the vehicle encounters is different from those used in the training set. Such issues are not unique to automated vehicles but rather represent a broader issue at the center of AI and regulation, as highlighted in the recent report "Preparing for the Future of Artificial Intelligence" by the National Science and Technology Council. This talk discusses some of the benefits and challenges of data-driven approaches and how some level of data sharing across the automated vehicle ecosystem can advance development, public acceptance, and safety.
The talk concludes with a look at the government's role in this rapidly developing area and the opportunities for developers to weigh in on the guidance both now and as it develops in the future.

Level: All
Type: Talk
Tags: Autonomous Vehicles

Day: Wednesday, 10/26
Time: 15:00 - 15:50
Location: Amphitheater

DCS16188 - Understanding the Behavior of American Homeowners Using Deep Neural Networks

Kapil K Jain Lecturer, Stanford University
TBA

This presentation will outline how deep neural networks can be used to understand the behavior of American homeowners, broadly in the categories of pricing/purchasing behaviors, mortgage prepayment behaviors, and credit/delinquency behaviors. We also present a number of issues for policymakers and regulators to consider regarding data, privacy, and modeling, as deep neural network approaches have a deeper impact on the housing markets overall. This is joint work with Dr. Kay Giesecke and the Stanford CFRA, and was supported by NVIDIA Corporation and Stanford University.

Level: All
Type: Talk
Tags: Federal; HPC

Day: Wednesday, 10/26
Time: 15:00 - 15:25
Location: Atrium Ballroom

Talk

HANDS-ON LAB

Presentation
Details

DCL16106 - Deep Learning Network Deployment (End-to-end Series Part 3)

Ryan Olson Solutions Architect, NVIDIA
Ryan Olson is a Solutions Architect at NVIDIA. Prior to this, Ryan was a Member of the Performance Engineering Team at Cray. Prior to Cray, Ryan was a Postdoctoral Research Associate at the University of Minnesota, and he completed graduate work at the Ames Laboratory. Ryan holds a PhD in Physical Chemistry from Iowa State University, and a BA in Chemistry & Mathematics from Saint John's University.

In this lab you will test three different approaches to deploying a trained DNN for inference. The first approach is to directly use the inference functionality within a deep learning framework, in this case DIGITS and Caffe. The second approach is to integrate inference within a custom application by using a deep learning framework API, again using Caffe but this time through its Python API. The final approach is to use the NVIDIA GPU Inference Engine (GIE), which automatically creates an optimized inference runtime from a trained Caffe model and network description file. You will learn about the role of batch size in inference performance as well as various optimizations that can be made in the inference process. You'll also explore inference for a variety of different DNN architectures.
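The role of batch size mentioned above largely comes down to amortizing per-call overhead across many inputs. This framework-free sketch (hypothetical names, with a trivial stand-in function in place of a real Caffe or GIE network) shows the mechanics: batch size determines how many forward passes are needed to serve a set of inputs, and each pass carries fixed launch and transfer costs on a GPU:

```python
def batched_inference(inputs, batch_size, forward):
    """Run `forward` on consecutive slices of `inputs`, batch_size at a time.

    Returns the concatenated outputs and the number of forward passes made;
    each pass is where a real deployment pays per-call GPU overhead.
    """
    outputs = []
    calls = 0
    for i in range(0, len(inputs), batch_size):
        outputs.extend(forward(inputs[i:i + batch_size]))
        calls += 1
    return outputs, calls

def double(batch):
    """Stand-in 'model': doubles each input value."""
    return [2 * x for x in batch]

preds, n_calls = batched_inference(list(range(10)), 4, double)
print(preds)    # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
print(n_calls)  # 3 forward passes for 10 inputs at batch size 4
```

Larger batches mean fewer passes and higher throughput, at the cost of latency for the inputs that wait to fill a batch, which is the trade-off the lab measures on real networks.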

Level: Intermediate
Type: Hands-on Lab
Tags: Science and Research

Day: Wednesday, 10/26
Time: 15:30 - 17:00
Location: Hemisphere A

DCL16110 - Deep Learning for Object Detection (End-to-end Series Part 2)

Abel Brown Solution Architect, NVIDIA
Abel holds degrees in Mathematics and Physics as well as a PhD in the field of Geodesy & Geophysics from The Ohio State University. For the past eight years, Abel has been developing distributed software frameworks and administering high performance computing clusters. He has deployed and managed many sensor networks around the world in Antarctica, South America, and Greenland. Abel is dually appointed on the Magnetospheric Multiscale (MMS) Ground System and Conjunction Assessment development teams and manages numerous research projects at a.i. solutions on GPU computing, image analytics, and advanced satellite perturbation techniques. As a co-author, Abel contributed to the PNAS publication "Greenland Rising", which was featured in WIRED Magazine's "Best Scientific Figures of 2012".

Building upon the foundational understanding of how deep learning is applied to image classification, this lab explores different approaches to the more challenging problem of detecting if an object of interest is present within an image and recognizing its precise location within the image. Numerous approaches have been proposed for training deep neural networks for this task, each having pros and cons in relation to model training time, model accuracy and speed of detection during deployment. On completion of this lab, you will understand each approach and their relative merits. You'll receive hands-on training applying cutting edge object detection networks trained using NVIDIA DIGITS on a challenging real-world dataset.

Level: Intermediate
Type: Hands-on Lab
Tags: Science and Research

Day: Wednesday, 10/26
Time: 15:30 - 17:00
Location: Hemisphere B

DCL16115 - Deep Learning for Image Segmentation

Jonathan Bentz Solutions Architect, NVIDIA
Jonathan Bentz is a Solutions Architect with NVIDIA, focusing on Higher Education and Research customers. In this role, he works as a technical resource for customers to support and enable their adoption of GPU computing. He delivers GPU training such as programming workshops to train users and help raise awareness of GPU computing. He also works with ISV and customer applications to assist in optimization for GPUs through the use of benchmarking and targeted code development efforts. Prior to NVIDIA Jonathan worked for Cray as a software engineer where he developed and optimized high-performance scientific libraries such as BLAS, LAPACK, and FFT specifically for the Cray platform. Jonathan obtained his PhD in physical chemistry and his MS in computer science from Iowa State University.

There are a variety of important applications that need to go beyond detecting individual objects within an image and instead segment the image into spatial regions of interest. For example, in medical imagery analysis, it is often important to separate the pixels corresponding to different types of tissue, blood or abnormal cells so that we can isolate a particular organ. In this lab, we will use the TensorFlow deep learning framework to train and evaluate an image segmentation network using a medical imagery dataset.
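The core output of a segmentation network, per-pixel class scores turned into a label mask, can be sketched in a few lines. This is illustrative numpy only (hypothetical sizes, random scores standing in for a trained network), not the lab's TensorFlow code:

```python
# A segmentation network emits one score per class per pixel; the predicted
# mask is the argmax over classes, and overlap with a reference mask is
# commonly measured with the Dice coefficient in medical imaging.
import numpy as np

H, W, NUM_CLASSES = 64, 64, 2  # e.g. background vs. tissue of interest
rng = np.random.default_rng(1)

# Stand-in for the network's final layer output.
scores = rng.standard_normal((H, W, NUM_CLASSES))

# Pick the highest-scoring class at every pixel.
mask = scores.argmax(axis=-1)  # shape (H, W), values in {0, 1}


def dice(pred, ref):
    """Dice coefficient: 2|A∩B| / (|A|+|B|), a standard overlap metric."""
    inter = np.logical_and(pred == 1, ref == 1).sum()
    return 2.0 * inter / ((pred == 1).sum() + (ref == 1).sum())


ref = np.zeros((H, W), dtype=int)
ref[16:48, 16:48] = 1  # toy "ground truth" region
print("mask shape:", mask.shape, "dice:", round(dice(mask, ref), 3))
```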

Level: Beginner
Type: Hands-on Lab
Tags: Science and Research

Day: Wednesday, 10/26
Time: 15:30 - 17:00
Location: Horizon

Hands-on Lab

TALK

Presentation
Details

DCS16133 - Image Retrieval: Joint Representations of Images and Text

Karl Ni Senior Data Scientist, IQT Lab41
Dr. Karl Ni obtained his B.Sc. from UC Berkeley and Doctorate from the University of California, San Diego in 2008. Subsequently, he joined the MIT Lincoln Laboratory (MIT/LL) research staff from 2008 until 2013. There, he served as algorithms co-lead and program manager on projects in signal & image processing, text analytics, and computer vision. He was the secretary of the IEEE Boston Chapter, lecturing for classes on image processing. In 2013, he left MIT/LL to join Lawrence Livermore National Laboratory (LLNL) as a staff scientist researching large scale data analytics. Currently, Karl is a Senior Data Scientist at In-Q-Tel's Lab41 in Menlo Park, CA. His interests include the application of Deep Learning to specific U.S. Intelligence Community problems.
Vishal Sandesara Senior Manager, IQT Lab41
Vishal Sandesara is a Senior Manager at In-Q-Tel's Lab41. He spends his time thinking about how Lab41 can solve some of the toughest big data problems faced within the US Intelligence Community. He is interested in seeing how machine learning and deep learning techniques can automate and improve workflows within the analyst workforce. He enjoys running and crispy tofu.

We present an approach to tagging and retrieving open-source images using a combination of GPUs and CPUs to jointly train image and text neural networks. The method addresses issues of inconsistently tagged and noisy labels at scale, problems common to work on user-generated content. We use the recently proposed "deep and wide" neural network architecture promoted by the TensorFlow API. The deep portion of the training is GPU-based, focusing on images, while the joint training with text is CPU-based. The resulting implementation has advantages that include faster training speed, the ability to keep a large vocabulary in memory, and robustness to noisy content, all at scale. The pipeline enables search in both image and text modalities over multiple corpora from various sources.
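The split described above, a wide linear model over sparse text features combined with a deep model over dense image features, can be sketched conceptually. This is illustrative numpy with made-up sizes, not the authors' TensorFlow pipeline:

```python
# "Wide and deep" joint scoring: a sparse linear ("wide") score over text
# tags is summed with a deep MLP score over image features, and a single
# sigmoid yields the joint image/text relevance.
import numpy as np

rng = np.random.default_rng(2)
VOCAB, IMG_DIM, HIDDEN = 1000, 128, 32  # hypothetical sizes

# Wide part: one weight per vocabulary term (large vocab can stay in CPU RAM).
w_wide = rng.standard_normal(VOCAB) * 0.01
# Deep part: a tiny two-layer ReLU MLP over image features (the GPU side).
W1 = rng.standard_normal((IMG_DIM, HIDDEN)) * 0.1
W2 = rng.standard_normal(HIDDEN) * 0.1


def score(tag_ids, img_feat):
    """Joint relevance score in (0, 1) from sparse tags + dense features."""
    wide = w_wide[tag_ids].sum()                  # sparse lookup + sum
    deep = np.maximum(img_feat @ W1, 0.0) @ W2    # ReLU MLP
    return 1.0 / (1.0 + np.exp(-(wide + deep)))   # sigmoid


tags = np.array([3, 17, 256])        # tokenized (possibly noisy) tags
img = rng.standard_normal(IMG_DIM)   # stand-in image feature vector
s = score(tags, img)
print("joint image/text score:", round(float(s), 3))
```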

Level: Intermediate
Type: Talk
Tags: Federal; HPC

Day: Wednesday, 10/26
Time: 15:30 - 15:55
Location: Polaris

DCS16151 - NVIDIA DGX: Integrating the Power of Deep Learning and Accelerated Analytics

Charlie Boyle Senior Director, DGX products, NVIDIA
Charlie Boyle leads the product efforts to drive the world wide adoption of NVIDIA's DGX products. Charlie brings a wealth of knowledge in the IT and service provider industry, having run marketing, engineering and data center operations for some of the world's largest telcos and service providers. He is a frequent speaker at industry and analyst events.

Customers are looking to extend the benefits of big data with the power of deep learning and accelerate the insights they can get from their data. The NVIDIA® DGX-1™ is the platform of AI pioneers, integrating deep learning and accelerated analytics in a single hardware and software system. This session will cover lessons learned and successes from real-world customer deployments of deep learning and accelerated analytics.

Level: All
Type: Talk
Tags: Deep Learning & Artificial Intelligence; IoT

Day: Wednesday, 10/26
Time: 15:30 - 15:55
Location: Rotunda

DCS16184 - Deep Learning Demystified

William Ramey Director, Developer Marketing, NVIDIA
Will Ramey is NVIDIA's director of developer marketing. Prior to joining NVIDIA in 2003, he managed an independent game studio and developed advanced technology for the entertainment industry as a product manager and software engineer. He holds a BA in computer science from Willamette University and completed the Japan Studies Program at Tokyo International University. Outside of work, Will learns something new every day, usually from his two kids. He enjoys hiking, camping, open water swimming, and playing The Game.

What is Deep Learning? In what fields is it useful, and how does it relate to Artificial Intelligence? Join this session to get a working understanding of deep learning and why this powerful new technology is getting so much attention. Learn how deep neural networks are trained to perform tasks with super-human accuracy, and the challenges organizations face in adopting this new approach. We'll also cover the software, hardware and training resources that many organizations are using to overcome the challenges and deliver breakthrough results.

Level: All
Type: Talk
Tags: Deep Learning & Artificial Intelligence; IoT

Day: Wednesday, 10/26
Time: 15:30 - 15:55
Location: Atrium Ballroom

DCS16136 - Defending the Planet with Machine Learning

James Parr Co-Director, NASA Frontier Development Lab
James Parr is Co-Director of NASA's Frontier Development Lab (www.frontierdevelopmentlab.org) and Co-founder of Trillium Technologies, a technology contractor focused on developing solutions in Space, Oceans and Climate Change. James is also founder of the Open Space Agency (OSA) which most recently has developed an ultra-low cost robotic observatory for Asteroid Hunting and a zero-gravity glass designed to work in space.

It's not often that NASA asks you to come to Silicon Valley to save the world using AI, but that's exactly what happened this summer to 12 young researchers. NASA's Frontier Development Lab (FDL) is an applied research accelerator established to close 'knowledge gaps'. The goal is to demonstrate how breakthroughs can be industrialized over an accelerated timeframe, in a way that is useful for America's Space Program - and, in the process, help defend the planet - specifically, 'PHAs' or Potentially Hazardous Asteroids. In this talk, the FDL team will show how machine learning was applied to three key knowledge gaps in planetary defense, informing both data gathering and strategic policy.

Level: All
Type: Talk
Tags: HPC; Federal; Drones

Day: Wednesday, 10/26
Time: 16:00 - 16:25
Location: Rotunda

DCS16139 - Deep Computational Phenotyping: Learning Data-Driven Representations of Health

Dave Kale PhD Student, USC Information Sciences Institute
Dave Kale is a Ph.D. candidate in Computer Science and an Alfred E. Mann Innovation in Engineering Fellow at the University of Southern California. He is advised by Prof. Greg Ver Steeg at the USC Information Sciences Institute, a member of Aram Galstyan's lab at ISI, and an affiliate of Nigam Shah's lab at the Stanford Center for Biomedical Informatics Research. Dave co-founded the Machine Learning for Healthcare Conference (MLHC), the preeminent venue for research on machine learning applied to health. Dave holds a B.Sc. and M.Sc. from Stanford University.

One of the most anticipated frontiers in deep learning is healthcare, where electronic health records have generated an explosion in digital data. We'll explore the challenges of applying deep learning to health data in the context of computational phenotyping, in which we build models to answer questions like, "Does this patient have diabetes?" Deep learning is a natural choice for learning data-driven phenotypes but faces substantial obstacles, including the lack of training labels. We'll discuss how to overcome this problem with distant supervision and transfer learning and present results from experiments using 1 million records from the Stanford hospital system. This is a joint project with Greg Ver Steeg of USC Information Sciences Institute and Nigam Shah of Stanford.

Level: All
Type: Talk
Tags: Healthcare; HPC

Day: Wednesday, 10/26
Time: 16:00 - 16:25
Location: Oceanic

DCS16168 - Deep Learning Systems at Scale

Soumith Chintala AI Researcher, Facebook
Soumith Chintala is a Researcher at Facebook AI Research, where he works on deep learning, reinforcement learning, generative image models, agents for video games and large-scale high-performance deep learning.

Deep learning is an emerging subfield of machine learning, often involving compute-intensive but embarrassingly parallel problems. We'll give a very brief background on deep learning, discuss the typical computational workloads from a systems perspective, and finally give an overview of building deep learning systems that scale over multiple GPUs, machines, and clusters. We'll also discuss the current frameworks and tools used in the deep learning space, such as Torch, Theano, TensorFlow, Caffe, and MXNet.

Level: All
Type: Talk
Tags: Deep Learning & Artificial Intelligence; IoT

Day: Wednesday, 10/26
Time: 16:00 - 16:25
Location: Atrium Ballroom

DCS16180 - NVIDIA's VRWorks SDK: Accelerating and Enhancing VR Experiences

David Weinstein Director Pro VR, NVIDIA
David Weinstein is the Director for Professional Virtual Reality at NVIDIA. As Director of Pro VR, he is responsible for NVIDIA's Professional VR Products, Projects, and SDKs. Prior to joining NVIDIA, Dave founded and ran three tech start-up companies.

NVIDIA has created a Virtual Reality SDK, called VRWorks, for VR software and hardware developers. VRWorks improves performance, reduces latency, improves compatibility, enables immersive environments, and accelerates 360 video broadcast. Available as a free download from NVIDIA's developer site, the VRWorks SDK is being used by VR companies across the globe to accelerate and enhance VR applications.

Level: Intermediate
Type: Talk
Tags: IoT

Day: Wednesday, 10/26
Time: 16:00 - 16:25
Location: Amphitheater

DCS16193 - Optimized Deep Learning Deployment with TensorRT

Robert Keating Solution Architect, NVIDIA
TBA

NVIDIA TensorRT™ is a high-performance neural network inference engine for production deployment of deep learning applications. TensorRT can be used to rapidly optimize, validate, and deploy trained neural networks for inference to hyperscale data centers, embedded platforms, or automotive product platforms. Developers can use TensorRT to deliver fast inference using INT8- or FP16-optimized precision, significantly reducing latency as demanded by real-time services such as streaming video categorization in the cloud or object detection and segmentation on embedded and automotive platforms. With TensorRT, developers can focus on developing novel AI-powered applications rather than performance tuning for inference deployment. The TensorRT runtime ensures optimal inference performance that can meet even the most demanding throughput requirements.
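Part of the FP16 win comes simply from halving the size of every weight and activation, which halves memory traffic. A plain-numpy illustration of that storage trade-off (TensorRT does this inside its optimized runtime; this is not TensorRT code):

```python
# Reduced precision in miniature: casting weights to float16 halves their
# memory footprint while introducing only a small rounding error.
import numpy as np

rng = np.random.default_rng(3)
w32 = rng.standard_normal(1 << 20).astype(np.float32)  # 1M "weights"
w16 = w32.astype(np.float16)

print("fp32 bytes:", w32.nbytes)   # 4 MiB
print("fp16 bytes:", w16.nbytes)   # 2 MiB

# Round-trip error is tiny relative to typical weight magnitudes.
err = np.abs(w32 - w16.astype(np.float32)).max()
print("max rounding error:", err)
```

INT8 pushes the same idea further but needs calibration to choose quantization ranges, which is why it is handled by the inference engine rather than a bare cast.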

Level: All
Type: Talk
Tags: Federal

Day: Wednesday, 10/26
Time: 16:00 - 16:25
Location: Polaris

DCS16144 - GPU-Accelerated Big Graph Analytics

Howie Huang Associate Professor, The George Washington University
Dr. Huang is a recipient of the prestigious National Science Foundation CAREER Award, NVIDIA Academic Partnership Award, Comcast Technology Research and Development Fund Award, IBM Real Time Innovation Faculty Award, and Outstanding Young Researcher Award of School of Engineering and Applied Science. His research won the ACM Undergraduate Student Research Competition at SC'12, a Best Student Paper Award Finalist at SC'11, the Best Poster Award at PACT'11, and a High-Performance Storage Challenge Finalist at SC'09. He received a PhD in Computer Science from the University of Virginia.

Future high-performance computing systems must enable fast processing of large data sets, as highlighted by President Obama's Executive Order on the National Strategic Computing Initiative. Of significant interest is the need to analyze big graphs arising in a variety of areas, from social networks and biology to national security. This talk will present our ongoing efforts at the George Washington University to accelerate big graph analytics on GPUs. We have developed a GPU-based graph analytics system that delivers exceptional performance through efficient scheduling of a large number of GPU threads and effective utilization of the GPU memory hierarchy.
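A core pattern such GPU graph systems accelerate is level-synchronous breadth-first search: an entire frontier of vertices is expanded at once, so each frontier edge can map to its own GPU thread. A sequential pure-Python sketch of the same logic on a toy graph (illustrative only, not the system described in the talk):

```python
# Level-synchronous BFS: process the whole frontier each step. On a GPU,
# every vertex (or edge) in the frontier is handled by its own thread and
# the next frontier is assembled with atomic operations.
from collections import defaultdict

edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]  # tiny undirected graph
adj = defaultdict(list)
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)


def bfs_levels(source):
    """Return hop distance from source for every reachable vertex."""
    level = {source: 0}
    frontier = [source]
    while frontier:
        nxt = []
        for u in frontier:            # all of these are independent work
            for v in adj[u]:
                if v not in level:
                    level[v] = level[u] + 1
                    nxt.append(v)
        frontier = nxt
    return level


levels = bfs_levels(0)
print(levels)  # vertex -> hop distance from vertex 0
```

The GPU challenge is that real frontiers are wildly irregular in size, which is where the scheduling and memory-hierarchy work mentioned above comes in.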

Level: All
Type: Talk
Tags: HPC; Federal

Day: Wednesday, 10/26
Time: 16:30 - 16:55
Location: Rotunda

DCS16152 - GPU Enabled Machine Learning for Atmospheric Cloud Detection

David Hughes Image Scientist, Oak Ridge National Laboratory
David Hughes, Ph.D. is a subject matter expert in the areas of image science, remote sensing, and imaging spectroscopy. He currently works at Oak Ridge National Laboratory as an image scientist. Previously, David served as a lead remote sensing scientist for Harris Corporation/ITT/Exelis/Kodak/RSI for 15 years servicing commercial and government customers.

Learn how to detect clouds in remotely sensed imagery using a deep learning (DL) approach enhanced by parallelism across multiple CPUs and GPUs. Our cloud-detection work, improving upon auto-tiepoint creation, is a tool in the Oak Ridge National Laboratory PRIMUS suite for photogrammetric registration. Initial work used the open-source Cloudless model as a proof of concept in developing our own convolutional neural network. Our model takes red, green, and blue image values as input and produces a cloud confidence heat map. We parallelize the computationally expensive cloud-search task across multiple NVIDIA K80 GPUs by assigning an OpenMP thread to each GPU. We'll discuss data pre-processing, test data sampling and application to our DL model, and cloud detection results.
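The overall shape of that pipeline, RGB tiles in, a confidence heat map out, can be sketched with a stand-in scoring function. This is illustrative numpy only (a hand-written heuristic where the talk uses a CNN, and a sequential loop where the talk farms tiles out one OpenMP thread per GPU), not the PRIMUS code:

```python
# Tile an RGB image, score each tile for "cloudiness", assemble a heat map.
# Each tile is independent work, which is what makes the multi-GPU split easy.
import numpy as np

rng = np.random.default_rng(4)
H, W, TILE = 64, 64, 16
rgb = rng.random((H, W, 3))  # stand-in image, channel values in [0, 1]


def cloud_confidence(tile):
    """Stand-in for the CNN: bright, low-saturation tiles look cloud-like."""
    brightness = tile.mean()
    saturation = tile.max(axis=-1).mean() - tile.min(axis=-1).mean()
    return float(np.clip(brightness - saturation, 0.0, 1.0))


heat = np.zeros((H // TILE, W // TILE))
for i in range(H // TILE):
    for j in range(W // TILE):
        heat[i, j] = cloud_confidence(
            rgb[i * TILE:(i + 1) * TILE, j * TILE:(j + 1) * TILE])

print("heat map shape:", heat.shape)
```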

Level: Intermediate
Type: Talk
Tags: Federal; Autonomous Vehicles; Drones

Day: Wednesday, 10/26
Time: 16:30 - 16:55
Location: Polaris

DCS16169 - DGX-1 Deep Learning and Cloud Service Solutions

Phil Rogers Chief Architect for Compute Server, NVIDIA
Phil Rogers is the chief architect for compute server at NVIDIA. Phil works on software architecture, GPU computing, multi-GPU scalability and heterogeneous memory systems. Phil has been working on graphics software, GPU computing, and making those things run fast, for as long as there have been GPUs.

We'll describe the software for DGX-1, including the system software, optimized deep learning frameworks, and cloud services. We'll show how DGX-1 can be operated through its cloud services to provide high-performance compute resources for an individual, team, or department, complete with scheduling, monitoring, notifications, and dashboard UIs for both users and administrators. We'll also cover the advantages of application delivery through NVIDIA Docker containers and show how the customer's investment grows in value over time as we add new containers and improve performance through system software updates.

Level: All
Type: Talk
Tags: Deep Learning & Artificial Intelligence; IoT

Day: Wednesday, 10/26
Time: 16:30 - 17:20
Location: Amphitheater

DCS16176 - Accelerating Machine Learning Applications with the HPE Cognitive Computing Toolkit

Ben Chandler Senior Research Scientist, HPE
Ben Chandler is a Senior Research Scientist in the High Performance Computing business unit at Hewlett Packard Enterprise. He holds a PhD from Boston University in Cognitive and Neural Systems and a BS in Cognitive Science from Carnegie Mellon.

Deep convolutional neural networks and other machine learning algorithms are both performance-critical and difficult to optimize. The Cognitive Computing Toolkit (CCT) from Hewlett Packard Enterprise provides a domain-specific embedded language (DSL) tuned for machine learning and data analysis applications, an optimizing compiler, and a runtime. The CCT DSL, hosted on the Scala language, simplifies the process of writing accelerated applications, while preserving the information the compiler requires to emit efficient accelerator code. Our compiler performs kernel fusion and common subexpression elimination, among other optimizations. Our runtime provides a simple control interface and reuses buffers in order to reduce the application's GPU global memory footprint.

Level: All
Type: Talk
Tags: Healthcare; IoT

Day: Wednesday, 10/26
Time: 16:30 - 16:55
Location: Oceanic

Talk

PANEL

Presentation
Details

DCS16179 - Is Artificial Intelligence Ready to Join the Cancer Fight?

Jerry Lee Deputy Director for Cancer Research and Technology, Office of the Vice President
Jerry S.H. Lee, PhD is the Deputy Director for Cancer Research and Technology for the Cancer Moonshot Task Force. Prior to this role, he has spent the last decade in the National Cancer Institute (NCI) Office of the Director developing and implementing over a dozen large-scale advanced technology initiatives as the Deputy Director of NCI's Center for Strategic Scientific Initiatives. Prior to joining the NCI, his research involved elucidating mechanisms of age-related diseases and has co-authored over twenty papers, five book chapters, and one book. He continues his research as an adjunct associate professor at Johns Hopkins University, where he also earned his bachelor's degree in biomedical engineering and Ph.D. in chemical and biomolecular engineering. Dr. Lee also holds an appointment at the Washington D.C. Veterans Affairs Medical Center and collaborates with clinicians on next generation patient-centered outcomes research. He is a member of the Innovation Policy Forum of the National Academies Board on Science, Technology, and Economic Policy, the Foundation for the NIH's Biomarkers' Consortium Cancer Steering Committee, the Health and Environmental Sciences Institute's Board of Trustees, and the editorial board of the Convergence Science Physical Oncology journal.
Joel Dudley Associate Professor, Icahn School of Medicine at Mount Sinai, New York
Joel Dudley is a recognized leader in applying biomedical big data to healthcare and drug discovery. He currently holds positions as Associate Professor of Genetics and Genomic Sciences and Director of Biomedical Informatics at the Icahn School of Medicine at Mount Sinai. He also directs the newly formed Institute for Next Generation Healthcare at Mount Sinai. Prior to Mount Sinai, he held positions as Co-founder and Director of Informatics at NuMedii, Inc., one of the first companies to apply big data to drug discovery, and Consulting Professor of Systems Medicine in the Department of Pediatrics at Stanford University School of Medicine. His work is focused on developing and applying advanced computational methods to integrate the digital universe of information to build better predictive models of disease and drug response. He and his team are also developing pioneering methods to bring about a next generation of medicine that leverages advances in diagnostics, wearables, and digital health to enable new approaches to precision medicine and scientific wellness. He has authored and co-authored more than 80 publications and his research has been featured in the Wall Street Journal, Scientific American, Forbes, and other popular media outlets. His recent work using a big data approach to identify sub-types of Type 2 diabetes was highlighted by NIH director Francis Collins on the NIH Director's Blog as a significant advance in precision medicine. He was named in 2014 as one of the 100 most creative people in business by Fast Company magazine. He is co-author of the book Exploring Personal Genomics from Oxford University Press, which is used as a text in personalized and precision medicine courses at universities worldwide. He holds an MSc. and Ph.D. in Biomedical Informatics from Stanford University School of Medicine. Joel serves on the scientific advisory boards of numerous startups and companies in biotech and health tech.
Erik Lindahl Professor, Stockholm University
Erik Lindahl is Professor of Theoretical Chemistry at Stockholm University, Sweden. His research is focused on understanding structure and function of membranes and membrane proteins (the key workhorse molecules in all our cells), in particular by using molecular simulation in combination with experimental techniques to understand how atomic motions and interactions explain complex biological phenomena. His team develops GROMACS, which is one of the most widely used molecular dynamics simulation codes in the world, and they are also heavily engaged in using GPU technology to accelerate research discovery in fields such as cryo-electron microscopy.
Gurvaneet Randhawa Medical Officer, NCI
Gurvaneet Randhawa, M.D., M.P.H., is a Medical Officer in the Health Systems and Interventions Research Branch (HSIRB). Before joining NCI, he worked at the AHRQ for 13 years where he was a Medical Officer and a Senior Advisor on Clinical Genomics and Personalized Medicine. Prior to joining AHRQ, he completed his Preventive Medicine residency at Johns Hopkins University in 2002, which included a stint at NIAID. He completed an Internal Medicine internship at University of Pennsylvania in 2000. Prior to that, he trained for nine years in biomedical research at Johns Hopkins and at M.D. Anderson Cancer Center. His research interests at that time were in molecular biology and genomics with a focus on chronic myelogenous leukemia. He obtained his medical degree from Medical College, Amritsar, India.
Ronald Summers Senior Investigator, NIH Radiology
Ronald M. Summers received the B.A. in physics and the M.D. and Ph.D. in Medicine/Anatomy & Cell Biology from the University of Pennsylvania. He completed a medical internship at the Presbyterian-University of Pennsylvania Hospital, Philadelphia, PA, a radiology residency at the University of Michigan, Ann Arbor, MI, and an MRI fellowship at Duke University, Durham, NC. In 1994, he joined the Diagnostic Radiology Department at the NIH Clinical Center in Bethesda, MD where he is now a tenured Senior Investigator and Staff Radiologist. He is currently Chief of the Clinical Image Processing Service and directs the Imaging Biomarkers and Computer-Aided Diagnosis (CAD) Laboratory. In 2000, Ronald received the Presidential Early Career Award for Scientists and Engineers, presented by Dr. Neal Lane, President Clinton's science advisor. In 2012, he received the NIH Director's Award, presented by NIH Director Dr. Francis Collins. His research interests include deep learning, virtual colonoscopy, CAD and development of large radiologic image databases. His clinical areas of specialty are thoracic and abdominal radiology and body cross-sectional imaging. He is a member of the editorial boards of the journals Radiology, Journal of Medical Imaging and Academic Radiology. He is a program committee member of the Computer-aided Diagnosis section of the annual SPIE Medical Imaging conference. He has co-authored over 400 journal, review and conference proceedings articles and is a co-inventor on 14 patents.

There are 14 million new cancer cases and 8.2 million cancer-related deaths worldwide per year. Innovation in the fight against cancer requires a multi-faceted approach. As patients and as stakeholders, healthcare ecosystem experts in genomics, proteomics, imaging, medicine, and data sciences are cooperating in new ways. GPU computing, integrated data, and novel algorithms enable the use of deep learning and artificial intelligence to transform cancer research and care. Dr. Jerry S.H. Lee, White House Cancer Moonshot Deputy Director for Research and Technology, will facilitate a thought-provoking panel discussion on leveraging artificial intelligence to fight cancer.

Level: All
Type: Panel
Tags: Healthcare

Day: Wednesday, 10/26
Time: 16:30 - 17:20
Location: Atrium Ballroom

Panel

TALK

Presentation
Details

DCS16124 - SpaceNet - An Open Corpus for Geospatial Deep Learning

Todd Bacastow Director, Strategic Alliances - Services, DigitalGlobe
Todd M. Bacastow leads Strategic Alliances for DigitalGlobe's services business. Todd works with DigitalGlobe's ecosystem partners to incubate, launch, and grow products that utilize geospatial data and location analytics to help answer key analytic questions. Todd joined DigitalGlobe in January 2013 through the combination with GeoEye and served in Product Management prior to his current role. Prior to GeoEye, Todd served as Manager of Strategic Initiatives at SPADAC, an innovative startup in predictive mapping technology, where he worked closely with the company founders. Todd was previously an Associate at FirstMark Capital, a New York-based venture capital firm. Todd is a graduate of The Pennsylvania State University and the Schreyer Honors College, earning a B.Sc. in Information Sciences and Technology (IST).
Todd Stavish CTO, CosmiQ Works
Todd Stavish is a cofounder and CTO of CosmiQ Works, a division of In-Q-Tel Labs. CosmiQ Works' mission is to help the Intelligence Community leverage new and emerging commercial space capabilities against mission problems. At CosmiQ Works, Todd leads the SpaceNet Challenge, a corpus of commercial satellite imagery and associated algorithm design competitions. The goal of SpaceNet is to foster innovation in the development of computer vision to automatically extract information from remote sensing data. Before working at CosmiQ, Todd was the technical lead on In-Q-Tel's big data, geospatial, and commercial space investments. Todd spent his early career working in Silicon Valley start-ups.

The commercialization of the geospatial industry has led to an explosive amount of data being collected to characterize our changing planet. One area for innovation is the application of computer vision and deep learning to extract information from satellite imagery at scale. DigitalGlobe, NVIDIA, CosmiQ Works, and AWS have partnered to release the SpaceNet data set to the public to enable developers and data scientists. Today, map features such as roads, building footprints, and points of interest are primarily created through manual or semi-automated techniques. Solving this challenge will enable more advanced use cases for GPU-accelerated AI such as change detection, wide area search, and automated tipping, as well as downstream uses of map data including autonomous vehicles.

Level: All
Type: Talk
Tags: Federal; HPC

Day: Wednesday, 10/26
Time: 17:00 - 17:25
Location: Polaris

DCS16167 - Accurate Prediction of Protein Kinase Inhibitors with Deep Convolutional Neural Networks

Olexandr Isayev Research Assistant Professor, University of North Carolina at Chapel Hill
Olexandr Isayev is a scientist at UNC Eshelman School of Pharmacy, University of North Carolina at Chapel Hill. His research interests focus on making sense of chemical data with molecular modeling and machine learning. Before joining UNC in 2013, he was a post-doctoral research fellow at the Case Western Reserve University and scientist at a government research lab. In 2008, he received his Ph.D. in computational chemistry. He received the "Emerging Technology Award" from the American Chemical Society (ACS) and the GPU computing award from NVIDIA in 2014.

The human genome encodes 518 protein kinases that are collectively referred to as the human kinome. Kinases are among the most important targets for drug discovery and development in the pharmaceutical industry. A large number of protein kinase inhibitors are either in clinical development or have been approved to treat a wide variety of diseases including cancer, inflammation, diabetes, immunodeficiency, and neurological disorders. Traditionally, Quantitative Structure-Activity Relationship (QSAR) models were developed for every individual target separately. Accurate prediction of the full kinome profile for a molecule is therefore a great challenge for computational drug discovery. Here, we address this challenge using an approach based on recent advances in machine learning.

Level: Intermediate
Type: Talk
Tags: Healthcare

Day: Wednesday, 10/26
Time: 17:00 - 17:25
Location: Oceanic

DCS16189 - Need for Speed: Accelerated Deep Learning on Power

Michael Gschwind Chief Engineer, Machine Learning and Deep Learning, IBM
Michael Gschwind is Chief Engineer for Machine Learning and Deep Learning for IBM Systems where he leads the development of hardware/software integrated products for cognitive computing. During his career, Dr. Gschwind has been a technical leader and manager for IBM's key transformational initiatives, leading the development of the OpenPOWER Hardware Architecture as well as the software interfaces of the OpenPOWER Software Ecosystem. In previous assignments, he was a chief architect for Blue Gene, Power8, Power7, and Cell BE. Dr. Gschwind is a Fellow of the IEEE, an IBM Master Inventor and a Member of the IBM Academy of Technology.

With its high-performance NVLink connection, the new-generation S822LC for HPC server offers a sweet spot of scalability, performance, and efficiency for deep learning applications. The next-generation S822LC systems include P100 GPUs optimized for deep learning workloads, NVLink for enhanced peer-to-peer GPU multiprocessing, and CPU-GPU NVLink for enhanced performance and programmability. At the same time, they remain upwardly compatible with earlier systems, accelerating existing deep learning frameworks built on the familiar CUDA and cuDNN libraries. As part of the focus on cognitive application enablement at IBM, the new server will be accompanied by a rich, pre-optimized, pre-built deep learning software distribution to simplify and accelerate deployment.

Level: All
Type: Talk
Tags: HPC

Day: Wednesday, 10/26
Time: 17:00 - 17:25
Location: Rotunda

Talk

SPECIAL EVENT

Presentation
Details

DCE16112 - Reception & Exhibits

Level: All
Type: Special Event
Tags: Reception & Exhibits

Day: Wednesday, 10/26
Time: 17:30 - 19:30
Location: Amphitheater Foyer & Meridian Foyer

DCE16109 - Exhibits

Level: All
Type: Special Event
Tags: Exhibits

Day: Thursday, 10/27
Time: 08:00 - 09:00
Location: Meridian Foyer

DCE16125 - Registration

Level: All
Type: Special Event
Tags: Registration

Day: Thursday, 10/27
Time:
Location: Amphitheater Foyer

DCE16126 - Virtual Reality Experience

Level: All
Type: Special Event
Tags: Virtual Reality Experience

Day: Thursday, 10/27
Time:
Location: Meridian CDE & Amphitheater Foyer

Special Event

KEYNOTE

Presentation
Details

DCS16165 - Cancer Research and Technology Cancer Moonshot Project

Jerry Lee Deputy Director for Cancer Research and Technology, Office of the Vice President
Jerry S.H. Lee, PhD, is the Deputy Director for Cancer Research and Technology for the Cancer Moonshot Task Force. Prior to this role, he spent the last decade in the National Cancer Institute (NCI) Office of the Director, developing and implementing over a dozen large-scale advanced technology initiatives as the Deputy Director of NCI's Center for Strategic Scientific Initiatives. Prior to joining the NCI, his research involved elucidating mechanisms of age-related diseases, and he has co-authored over twenty papers, five book chapters, and one book. He continues his research as an adjunct associate professor at Johns Hopkins University, where he also earned his bachelor's degree in biomedical engineering and Ph.D. in chemical and biomolecular engineering. Dr. Lee also holds an appointment at the Washington D.C. Veterans Affairs Medical Center and collaborates with clinicians on next generation patient-centered outcomes research. He is a member of the Innovation Policy Forum of the National Academies Board on Science, Technology, and Economic Policy, the Foundation for the NIH's Biomarkers Consortium's Cancer Steering Committee, the Health and Environmental Sciences Institute's Board of Trustees, and the editorial board of the Convergence Science Physical Oncology journal.

In this keynote, we'll show how the Cancer Moonshot Task Force under Vice President Biden is unleashing the power of data to help end cancer as we know it. We'll discuss global efforts inspired by the Cancer Moonshot that will empower A.I. and deep learning for oncology with larger and more accessible datasets.

Level: All
Type: Keynote
Tags: Healthcare

Day: Thursday, 10/27
Time: 09:00 - 10:00
Location: Amphitheater

Keynote

SPECIAL EVENT

Presentation
Details

DCE16119 - Self-Paced Labs

Level: All
Type: Special Event
Tags: Self-Paced Labs

Day: Thursday, 10/27
Time: 10:00 - 12:00
Location: Polaris Foyer

Special Event

HANDS-ON LAB

Presentation
Details

DCL16109 - Deep Learning Network Deployment (End-to-end Series Part 3)

Jonathan Bentz Solutions Architect, NVIDIA
Jonathan Bentz is a Solutions Architect with NVIDIA, focusing on Higher Education and Research customers. In this role, he works as a technical resource for customers to support and enable their adoption of GPU computing. He delivers GPU training such as programming workshops to train users and help raise awareness of GPU computing. He also works with ISV and customer applications to assist in optimization for GPUs through the use of benchmarking and targeted code development efforts. Prior to NVIDIA Jonathan worked for Cray as a software engineer where he developed and optimized high-performance scientific libraries such as BLAS, LAPACK, and FFT specifically for the Cray platform. Jonathan obtained his PhD in physical chemistry and his MS in computer science from Iowa State University.

In this lab, you will test three different approaches to deploying a trained DNN for inference. The first approach is to directly use inference functionality within a deep learning framework, in this case DIGITS and Caffe. The second approach is to integrate inference within a custom application by using a deep learning framework API, again using Caffe but this time through its Python API. The final approach is to use NVIDIA TensorRT™, which will automatically create an optimized inference run-time from a trained Caffe model and network description file. You will learn about the role of batch size in inference performance, as well as various optimizations that can be made in the inference process. You'll also explore inference for a variety of different DNN architectures.
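
The batch-size effect the lab examines can be sketched framework-agnostically. The snippet below is a toy stand-in, not the lab's DIGITS/Caffe/TensorRT code: it times a single dense layer with softmax (a minimal "network") at several batch sizes, illustrating why larger batches typically raise inference throughput.

```python
import time
import numpy as np

def infer(batch, weights):
    """Stand-in for one dense layer of a trained DNN: softmax(x @ W)."""
    logits = batch @ weights
    e = np.exp(logits - logits.max(axis=1, keepdims=True))  # numerically stable softmax
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
weights = rng.standard_normal((1024, 10))    # pretend "trained" weights
samples = rng.standard_normal((256, 1024))   # a queue of inputs awaiting inference

for batch_size in (1, 32, 256):
    start = time.perf_counter()
    probs = np.concatenate([infer(samples[i:i + batch_size], weights)
                            for i in range(0, len(samples), batch_size)])
    elapsed = time.perf_counter() - start
    print(f"batch={batch_size:4d}  {len(samples)/elapsed:12.0f} samples/sec")
```

Larger batches amortize per-call overhead and keep the hardware busier, which is the trade-off (throughput vs. latency) the lab explores with real frameworks.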

Level: Intermediate
Type: Hands-on Lab
Tags: Science and Research

Day: Thursday, 10/27
Time: 10:00 - 11:30
Location: Hemisphere A

DCL16120 - Getting Started with Deep Learning (End-to-end Series Part 1)

Larry Brown Sr. Solution Architect, NVIDIA
Larry is a Sr. Solution Architect with NVIDIA, where he helps customers design and deploy GPU-accelerated workflows in high-performance computing and data analytics. He has a Ph.D. from the Johns Hopkins University in the area of Vision Science, and a graduate certificate in Software Engineering from the University of Colorado. Larry has over 15 years of experience designing, implementing and supporting a variety of advanced software and hardware systems for defense and national security applications. He has designed electro-optical systems for head-mounted displays and training simulators, developed GIS applications for multi-touch displays, and adapted computer vision code in UGVs for the GPU. Currently, Larry enjoys learning about data analytics and machine learning. Larry has spent much of his career working for technology start-up companies but was most recently with Booz Allen Hamilton before joining NVIDIA.

Deep learning is giving machines near human levels of visual recognition capabilities and disrupting many applications by replacing hand-coded software with predictive models learned directly from data. This lab introduces the machine learning workflow and provides hands-on experience with using deep neural networks (DNN) to solve a real-world image classification problem. You will walk through the process of data preparation, model definition, model training and troubleshooting, validation testing and strategies for improving model performance. You'll also see the benefits of GPU acceleration in the model training process. On completion of this lab you will have the knowledge to use NVIDIA DIGITS to train a DNN on your own image classification dataset.
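
As a rough preview of the workflow this lab walks through (data preparation, model definition, training, validation), here is a minimal NumPy sketch using logistic regression as a stand-in for a DNN; the dataset and hyperparameters are illustrative only, not part of the lab material.

```python
import numpy as np

rng = np.random.default_rng(42)

# 1. Data preparation: a toy two-class dataset, split into train and validation sets.
X = rng.standard_normal((200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # linearly separable labels
X_train, X_val = X[:160], X[160:]
y_train, y_val = y[:160], y[160:]

# 2. Model definition: logistic regression, the simplest "one-layer network".
w = np.zeros(2)
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 3. Training: plain gradient descent on the cross-entropy loss.
for epoch in range(200):
    p = sigmoid(X_train @ w + b)
    w -= 0.5 * (X_train.T @ (p - y_train)) / len(y_train)
    b -= 0.5 * (p - y_train).mean()

# 4. Validation: accuracy on held-out data.
val_acc = ((sigmoid(X_val @ w + b) > 0.5) == y_val).mean()
print(f"validation accuracy: {val_acc:.2f}")
```

The lab follows the same four steps with DIGITS and real image data, where GPU acceleration matters because each training step is vastly more expensive than this toy loop.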

Level: Beginner
Type: Hands-on Lab
Tags: Science and Research

Day: Thursday, 10/27
Time: 10:00 - 11:30
Location: Atrium Ballroom

Hands-on Lab

TALK

Presentation
Details

DCS16141 - Enabling 10x Acceleration of Research in Molecular Life Sciences Using CUDA

Erik Lindahl Professor, Stockholm University
Erik Lindahl is Professor of Theoretical Chemistry at Stockholm University, Sweden. His research is focused on understanding the structure and function of membranes and membrane proteins (the key workhorse molecules in all our cells), in particular by using molecular simulation in combination with experimental techniques to understand how atomic motions and interactions explain complex biological phenomena. His team develops GROMACS, which is one of the most widely used molecular dynamics simulation codes in the world, and they are also heavily engaged in using GPU technology to accelerate research discovery in fields such as cryo-electron microscopy.

Join this session to learn how to use GPUs and CUDA programming to achieve order-of-magnitude speedups even for large codes that are more complex than tutorial examples. We'll cover our multi-year effort on heterogeneous CPU-GPU acceleration of the GROMACS package for molecular dynamics simulations on a wide range of architectures. We'll introduce new results where CUDA has made it possible to accelerate the costly 3D image reconstruction used in single-particle cryo-electron microscopy (cryo-EM) by 20-200X. You'll learn how you can use these tools in your application work, and what strategies to pursue to accelerate difficult codes where neither libraries nor directives are useful, and even moving computational kernels to CUDA seems to fail.

Level: Intermediate
Type: Talk
Tags: Healthcare; HPC

Day: Thursday, 10/27
Time: 10:00 - 10:50
Location: Oceanic

DCS16159 - Confluence on the Path to Exascale

Galen Shipman Computer Scientist, Los Alamos National Laboratory (LANL)
Galen Shipman is a research computer scientist at Los Alamos National Laboratory (LANL). His research interests include programming models, scalable runtime systems, and I/O. He currently leads multiple interdisciplinary efforts including the advancement and integration of next-generation programming models within LANL's next-generation code project, advancing data analysis programming models as part of the Exascale Computing Project's Data Analytics at the Exascale for Free Electron Lasers application, and co-design of new programming models and runtime services as part of the Center for Exascale Simulation of Combustion in Turbulence and Exascale Co-Design Center for Materials in Extreme Environments projects. Prior to this position, Mr. Shipman was a research staff member in the Computer Science and Mathematics Division and Director of the Compute and Data Environment for Science at Oak Ridge National Laboratory (ORNL). His work includes addressing some of the computational and data challenges of major scientific facilities such as those of the Spallation Neutron Source, the Center for Nanophase Materials Science (Basic Energy Sciences) and major data centers focusing on climate science.

Exascale computing holds the promise of addressing key challenges in basic research and national security, but will require advances across the entire high performance computing ecosystem. Co-design of applications, software technologies, and underlying hardware will be critical to this endeavor. We'll highlight a few of the applications targeted to make this transition to exascale, and our efforts to co-design key components of the software stack to address fundamental challenges in software productivity and performance portability. In addition to efforts in traditional scientific simulation, we'll highlight the increasing importance of data-intensive computing across these applications, alongside our efforts on core components of a converged HPC ecosystem to enable them.

Level: Intermediate
Type: Talk
Tags: HPC

Day: Thursday, 10/27
Time: 10:00 - 10:25
Location: Rotunda

DCS16178 - AI for IoT

Jesse Clayton Sr. Mgr. Product Management | Intelligent Machines, NVIDIA
Jesse Clayton is the Senior Manager of Product Management for Intelligent Machines at NVIDIA. He has more than 20 years of experience in technology spanning software, GPU computing, embedded systems, and aeronautics. His current focus is bringing advanced computer vision and deep learning solutions to autonomous machines and intelligent devices. He holds a B.S. in Electrical and Computer Engineering from the University of Colorado, Boulder.

Artificial Intelligence is affecting almost every industry and is transforming the way businesses operate. The combination of new algorithms, big data, and GPUs has made it possible to address problems that were not practically solvable until now. With NVIDIA Jetson it's possible to deploy Artificial Intelligence to small, mobile platforms. During this session we'll provide an overview of the different AI and deep learning applications for IoT, including factory robotics, warehouse management, aerial inspection, search and rescue, and agriculture, and explain how these applications can be easily deployed via Jetson.

Level: All
Type: Talk
Tags: IoT; Drones; Robotics

Day: Thursday, 10/27
Time: 10:00 - 10:25
Location: Amphitheater

DCS16185 - Recent Advances in GPU Accelerated Deep Learning for Defense

Abel Brown Solution Architect, NVIDIA
Abel holds degrees in Mathematics and Physics as well as a PhD in the field of Geodesy & Geophysics from The Ohio State University. For the past eight years, Abel has been developing distributed software frameworks and administering high-performance computing clusters. He has deployed and managed many sensor networks around the world in Antarctica, South America, and Greenland. Abel is dually appointed on the Magnetospheric Multiscale (MMS) Ground System and Conjunction Assessment development teams and manages numerous research projects at a.i. solutions on GPU computing, image analytics, and advanced satellite perturbation techniques. As co-author, Abel's recent work contributed to the PNAS publication which was featured in WIRED Magazine's "Best Scientific Figures of 2012" titled "Greenland Rising".

Deep learning technology development continues to accelerate in many areas relevant to defense and national security missions. In this talk, we'll provide a brief introduction to the technology of deep learning, then explore what's at the forefront of research and development in defense. We'll also cover advances in deep learning theory, software, and GPU-acceleration hardware. The three key takeaways are: 1. The rate of deep learning technology development continues to accelerate and demonstrate applicability to a growing set of defense and national security missions. 2. The deep learning software ecosystem, including the NVIDIA SDK, makes deep learning easily accessible in many application areas today. 3. NVIDIA® Tesla® GPUs are the world's fastest deep learning accelerators.

Level: Intermediate
Type: Talk
Tags: Federal

Day: Thursday, 10/27
Time: 10:00 - 10:25
Location: Polaris

DCS16110 - GPUs for Sustained Computational and Analytics Research on the Blue Waters Extreme Scale Systems

William Kramer Blue Waters Director and Principal Investigator; Computer Science Research Professor, National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign
William T.C. Kramer is the Principal Investigator and Project Director for the Blue Waters Leadership Computing Project at the National Center for Supercomputing Applications. Blue Waters is a National Science Foundation-funded project to deploy the first general purpose, open science, sustained-petaflops supercomputer as a powerful resource for the nation's researchers. Blue Waters is the largest system Cray Inc. has ever built, and is almost 50% larger than the next largest Cray system. Blue Waters is one of the world's most balanced leadership systems, with almost 30,000 nodes with x86 and GPU processors, 1.7 PB of main memory, and the fastest I/O subsystem in the open research community. Kramer's accomplishments include deploying and operating extreme-scale computational systems, data systems, and best-of-class facilities, and leading intense, high-visibility projects. He combines broad and significant technical contributions with leadership and management experience in high performance, interactive and real-time computing, data-focused analysis, cyber infrastructure, applications and software development. He has substantial and sustained expertise in managing world-class, trend-setting organizations, a commitment to excellence, a record of fostering the education and development of the next generation of researchers and leaders, and a track record for building sustained collaborations and relationships. Kramer is a full Research Professor in the Computer Science Department pursuing research in system performance evaluation, large-scale resiliency and reliability, and system resource management.

The "sustained petascale" Blue Waters supercomputer is the most powerful and productive supercomputer serving the entire academic and open science communities, and the largest system Cray has ever created. Blue Waters is enabling "grand challenge" solutions for problems ranging from the HIV and Ebola viruses, to earthquake analysis, to severe weather, to the search for gravitational waves, to the economics of climate change policy. The talk will discuss the architectural decisions that led Blue Waters to include GPUs, cover the experiences of advanced research teams using GPUs, highlight some of the efforts underway to expand the use of GPUs, and then draw observations for future generation systems.

Level: All
Type: Talk
Tags: HPC

Day: Thursday, 10/27
Time: 10:30 - 10:55
Location: Rotunda

DCS16171 - Embedded Deep Learning with NVIDIA Jetson

Dustin Franklin Jetson Developer Evangelist , NVIDIA
Highly-Rated Speaker
A renowned GPGPU developer and systems architect, Dustin Franklin works to deploy CUDA-accelerated embedded applications and solve challenging sensor processing problems on Jetson, NVIDIA's low-power, high-performance embedded platform. Dustin and his team utilize the parallel compute horsepower of NVIDIA GPUs for real-time computer vision, autonomous navigation, and other toolsets, to enable new product categories and industries.

Start developing applications with advanced AI and computer vision today using NVIDIA's deep learning tools, including TensorRT and DIGITS.

Level: All
Type: Talk
Tags: Robotics

Day: Thursday, 10/27
Time: 10:30 - 10:55
Location: Amphitheater

DCS16108 - Accelerating Drug Discovery with Free Energy Calculations on GPUs

Robert Abel VP Scientific Development, Schrödinger Inc.
Robert Abel, Vice President of Scientific Development, is responsible for leading efforts to further improve induced-fit docking, free energy perturbation theory, physics-based affinity scoring, and protein structure prediction; as well as supervising the Schrödinger R&D portfolio. Robert obtained his Ph.D. from Columbia University for his work with Professor Richard Friesner on methods to quantify the role of the solvent in protein-ligand binding. He has also been awarded NSF and DHS research fellowships; published on a variety of topics including protein-ligand binding, protein dynamics, protein structure prediction, energy function development, and fluid thermodynamics; served as a referee for several prestigious journals; and co-authored multiple patent applications. Since joining Schrödinger in 2009, Robert has advanced through a number of roles to his current position.

Over the last decade, free energy calculation methods have matured tremendously, which has in turn better positioned these methods to positively impact preclinical drug discovery. We'll report recent progress toward adapting these methods to accurately describe a number of different end-points relevant to drug discovery, including ligand binding potency, binding selectivity, and small-molecule solubility, as well as a number of prospective applications of the methods to advance discovery projects. We'll further highlight how optimization of these methods to take advantage of GPU computing resources now allows these techniques to fully enable drug discovery projects in an industrial setting.

Level: All
Type: Talk
Tags: Healthcare; HPC

Day: Thursday, 10/27
Time: 11:00 - 11:25
Location: Oceanic

DCS16112 - NOAA's Software Engineering for Novel Architectures (SENA) Effort

Leslie Hart Senior HPC Software Engineer, USDOC/NOAA/HPCC
Leslie Hart is the Senior HPC Software Engineer for NOAA's HPC Program. He leads the Software Engineering for Novel Architectures (SENA) effort. He has over 30 years of experience with a variety of HPC architectures.

NOAA's Software Engineering for Novel Architectures (SENA) project is an effort to ensure NOAA's model suite is ready for future landscape changes in HPC. In the short term, SENA efforts include supporting standards activities, porting codes to fine-grain architectures (including NVIDIA products), and examining programming methods. SENA will also address alternative algorithms and general approaches to solving environmental modeling problems.

Level: All
Type: Talk
Tags: HPC; Federal

Day: Thursday, 10/27
Time: 11:00 - 11:25
Location: Rotunda

DCS16116 - Intelligent Cities: How GE is Taking Video Analytics from the Research Lab to the Street and into the Cloud

Peter Tu Senior Researcher, GE Global Research
Peter Tu received his bachelor of science degree in Systems Design Engineering from the University of Waterloo, Canada, in 1990. He then earned his doctorate in 1995 from Oxford University's Engineering Science department. In 1997, Peter joined General Electric's image understanding group. He started his tenure at GE by developing a number of latent fingerprint matching algorithms for the FBI's Automatic Fingerprint Identification System. He became the principal investigator for the FBI ReFace program, which has focused on the construction of a system capable of estimating 3D models of the human face given their skeletal remains. Dr. Tu has also developed a number of optical metrology algorithms enabling the precise measurement of various mechanical parts. In 2001, Dr. Tu was one of the founders of GE's intelligent video initiative. He has helped build technologies focused on object detection, object tracking, scene understanding, behavior recognition, affective analysis and biometric-at-a-distance capture systems. Application domains of interest have included homeland protection, retail, rail, aerial analysis, healthcare, digital signage and IED detection. He has been a Principal Investigator for DARPA, the FBI, the Department of Homeland Security and the National Institute of Justice. He holds 25 issued patents and has over 50 peer-reviewed publications.

We'll explore how deep learning is transforming the types and depth of data that cities can leverage to solve impactful problems and open up new services for citizens and opportunities for businesses - from tackling traffic and parking, new types of data service business models, to solving key law enforcement problems.

Level: All
Type: Talk
Tags: IoT; Federal

Day: Thursday, 10/27
Time: 11:00 - 11:25
Location: Amphitheater

DCS16130 - Multi-Scale Object Detection in Satellite Imagery

Adam Van Etten Research Scientist, In-Q-Tel
Researcher at CosmiQ Works, an In-Q-Tel Lab

The promise of detecting objects of interest over large areas is one of the primary drivers of interest in satellite imagery analytics, and of keen interest to the intelligence community. A number of integrated pipelines using convolutional neural nets have proven very successful for detecting objects such as people, cars, or bicycles in cell phone pictures. These methods, however, are not optimized to detect small objects in large images, and they perform poorly when applied to such problems. We discuss efforts to adapt state-of-the-art deep learning frameworks to the task of detecting objects of vastly different scales in satellite imagery.
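
A common workaround for running fixed-input detectors over large scenes, not necessarily the speaker's exact pipeline, is to chip the image into overlapping tiles and map detections back to full-scene coordinates. A minimal sketch, with tile and overlap sizes chosen arbitrarily for illustration:

```python
import numpy as np

def chip_image(scene, tile=416, overlap=64):
    """Split a large scene into overlapping tiles a fixed-input detector can consume.

    Returns a list of (row, col, tile_array) tuples so that detections in each
    tile can be translated back into full-scene coordinates.
    """
    step = tile - overlap
    h, w = scene.shape[:2]
    chips = []
    for r in range(0, max(h - overlap, 1), step):
        for c in range(0, max(w - overlap, 1), step):
            chips.append((r, c, scene[r:r + tile, c:c + tile]))
    return chips

scene = np.zeros((1000, 1500), dtype=np.uint8)   # stand-in for a satellite image
chips = chip_image(scene)
print(len(chips), "tiles")
```

The overlap ensures objects straddling a tile boundary appear whole in at least one tile; duplicate detections in the overlap regions are typically merged afterward with non-maximum suppression.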

Level: Intermediate
Type: Talk
Tags: Federal; HPC

Day: Thursday, 10/27
Time: 11:00 - 11:25
Location: Polaris

Talk

SPECIAL EVENT

Presentation
Details

DCE16115 - Exhibits

Level: All
Type: Special Event
Tags: Exhibits

Day: Thursday, 10/27
Time: 11:30 - 13:30
Location: Meridian Foyer

Special Event

HANDS-ON LAB

Presentation
Details

DCL16121 - Deep Learning Network Deployment (End-to-end Series Part 3)

Abel Brown Solution Architect, NVIDIA
Abel holds degrees in Mathematics and Physics as well as a PhD in the field of Geodesy & Geophysics from The Ohio State University. For the past eight years, Abel has been developing distributed software frameworks and administering high-performance computing clusters. He has deployed and managed many sensor networks around the world in Antarctica, South America, and Greenland. Abel is dually appointed on the Magnetospheric Multiscale (MMS) Ground System and Conjunction Assessment development teams and manages numerous research projects at a.i. solutions on GPU computing, image analytics, and advanced satellite perturbation techniques. As co-author, Abel's recent work contributed to the PNAS publication which was featured in WIRED Magazine's "Best Scientific Figures of 2012" titled "Greenland Rising".

Deep learning software frameworks leverage GPU acceleration to train deep neural networks (DNNs). But what do you do with a DNN once you have trained it? The process of applying a trained DNN to new test data is often referred to as 'inference' or 'deployment'. In this lab, you will test three different approaches to deploying a trained DNN for inference. The first approach is to directly use inference functionality within a deep learning framework, in this case, DIGITS and Caffe. The second approach is to integrate inference within a custom application by using a deep learning framework API, again using Caffe but this time through its Python API. The final approach is to use NVIDIA TensorRT™, which will automatically create an optimized inference run-time from a trained Caffe model and network description file.

Level: Intermediate
Type: Hands-on Lab
Tags: Science and Research

Day: Thursday, 10/27
Time: 11:30 - 13:00
Location: Hemisphere B

Hands-on Lab

TALK

Presentation
Details

DCS16111 - The Impact of Deep Learning on Radiology

Ronald Summers Senior Investigator, NIH Radiology
Ronald M. Summers received the B.A. in physics and the M.D. and Ph.D. in Medicine/Anatomy & Cell Biology from the University of Pennsylvania. He completed a medical internship at the Presbyterian-University of Pennsylvania Hospital, Philadelphia, PA, a radiology residency at the University of Michigan, Ann Arbor, MI, and an MRI fellowship at Duke University, Durham, NC. In 1994, he joined the Diagnostic Radiology Department at the NIH Clinical Center in Bethesda, MD where he is now a tenured Senior Investigator and Staff Radiologist. He is currently Chief of the Clinical Image Processing Service and directs the Imaging Biomarkers and Computer-Aided Diagnosis (CAD) Laboratory. In 2000, Ronald received the Presidential Early Career Award for Scientists and Engineers, presented by Dr. Neal Lane, President Clinton's science advisor. In 2012, he received the NIH Director's Award, presented by NIH Director Dr. Francis Collins. His research interests include deep learning, virtual colonoscopy, CAD and development of large radiologic image databases. His clinical areas of specialty are thoracic and abdominal radiology and body cross-sectional imaging. He is a member of the editorial boards of the journals Radiology, Journal of Medical Imaging and Academic Radiology. He is a program committee member of the Computer-aided Diagnosis section of the annual SPIE Medical Imaging conference. He has co-authored over 400 journal, review and conference proceedings articles and is a co-inventor on 14 patents.

Major advances in computer science are beginning to have an impact on radiology. The rapid achievements in performance for object detection in natural images have enabled these impacts. There has been an explosion of research interest and number of publications regarding the use of deep learning in radiology. In this presentation, we'll show examples of how deep learning has led to major performance improvements in radiology image analysis, including image segmentation and computer aided diagnosis.

Level: All
Type: Talk
Tags: Healthcare; HPC

Day: Thursday, 10/27
Time: 11:30 - 11:55
Location: Oceanic

DCS16114 - Application Readiness for Exascale

Jacqueline Chen DMTS, Sandia National Laboratories
Jacqueline H. Chen is a Distinguished Member of Technical Staff at the Combustion Research Facility at Sandia National Laboratories. She has contributed broadly to research in petascale direct numerical simulations (DNS) of turbulent combustion focusing on fundamental turbulence-chemistry interactions. These benchmark simulations provide fundamental insight into combustion processes and are used by the combustion modeling community to develop and validate turbulent combustion models for engineering CFD simulations. In collaboration with computer scientists and applied mathematicians she is the founding Director of the Center for Exascale Simulation of Combustion in Turbulence (ExaCT). She leads an interdisciplinary team to co-design DNS algorithms, domain-specific programming environments, scientific data management and in situ uncertainty quantification and analytics, and architectural simulation and modeling with combustion proxy applications. She received the DOE INCITE Award in 2005, 2007, 2008-2016, the DOE ALCC Award in 2012, the 34th International Combustion Symposium Distinguished Paper Award 2012, and the Asian American Engineer of the Year Award in 2009. She is a member of the DOE Advanced Scientific Computing Research Advisory Committee (ASCAC) and Subcommittees on Exascale Computing, and Big Data and Exascale. She was the editor of Flow, Turbulence and Combustion, the co-editor of the Proceedings of the Combustion Institute, volumes 29 and 30, the Co-Chair of the Local Organizing Committee for the 35th International Combustion Symposium, and a member of the Board of Directors of the Combustion Institute.

One of the primary challenges to achieving exascale computing is designing new architectures that will work under the enormous power and cost constraints. The mission of co-design is to absorb the sweeping changes necessary for exascale computing into software and to ensure that the hardware will meet the requirements of extreme-scale applications. This session focuses on the multi-disciplinary research required to co-design all aspects of simulation, including numerical algorithms for PDEs, domain-specific programming languages, asynchronous programming environments, and scientific data management and analytics for in situ uncertainty quantification (UQ).

Level: All
Type: Talk
Tags: HPC

Day: Thursday, 10/27
Time: 11:30 - 11:55
Location: Rotunda

DCS16123 - Designing a Wearable Personal Assistant for the Blind: The Power of Embedded GPUs

Saverio Murgia CEO, Horus Technology
Founder and CEO of Horus Technology, Saverio Murgia is passionate about machine learning, computer vision, and robotics. Both an engineer and an entrepreneur, in 2015 he obtained a double MSc/MEng in Advanced Robotics from the Ecole Centrale de Nantes and the University of Genoa. He also holds a degree in management from ISICT and a BSc in Biomedical Engineering from the University of Genoa. Before founding Horus Technology, Saverio was a visiting researcher at EPFL and the Italian Institute of Technology.

With the introduction of embedded platforms featuring GPUs with advanced GPGPU capability, it is now possible to design systems and products that extract and process, in real time, an amount of information not imaginable in the past. One example of what mobile GPGPU computing makes possible is a wearable device that uses deep learning and other computationally heavy techniques from computer vision and machine learning to describe the world to blind and visually impaired people. Horus is a wearable personal assistant for blind and visually impaired people that, thanks to its stereo camera and sensor suite, can detect obstacles, describe pictures and scenes, identify objects and people, and read text. All the processing is done locally in real time.

Level: All
Type: Talk
Tags: IoT; Healthcare; Federal

Day: Thursday, 10/27
Time: 11:30 - 11:55
Location: Amphitheater

DCS16132 - Real-Time In-Situ Intelligent Video Analytics for Mobile Platforms

Christiaan Gribble Principal Research Scientist, SURVICE Engineering Company
Christiaan Gribble is a Principal Research Scientist and the Team Lead for High-Performance Computing in the Applied Technology Operation of SURVICE Engineering Company. His work explores the synthesis of interactive visualization and HPC, focusing on algorithms, architectures, and systems for computationally intense problems in physics-based simulation and computer vision. Gribble also leads SURVICE’s officially certified NVIDIA GPU Research Center, as well as several other state-of-the-art R&D contracts for the US Government. Gribble brings more than 10 years of practical experience in R&D of efficient software systems for computationally intense problems. Prior to joining SURVICE in 2012, Gribble held the position of Associate Professor in the Department of Computer Science at Grove City College. He has also served as an Assistant Professor of Computer Science at Grove City, as a post-doctoral research fellow and research assistant at the Scientific Computing and Imaging (SCI) Institute at the University of Utah, and as a research assistant at the Pittsburgh Supercomputing Center. Gribble received the BS degree in mathematics from Grove City College in 2000, the MS degree in information networking from Carnegie Mellon University in 2002, and the PhD degree in computer science from the University of Utah in 2006.

We highlight Sentinel, a system for real-time in-situ intelligent video analytics on mobile computing platforms. Sentinel combines state-of-the-art techniques in HPC with Dynamic Mode Decomposition (DMD), a proven method for data reduction and analysis. By leveraging CUDA, our early system prototype achieves significantly better-than-real-time performance for DMD-based background/foreground separation on high-definition video streams, thereby establishing the efficacy of DMD as the foundation on which to build higher level real-time computer vision techniques. In this talk, we present an overview of the Sentinel system, including the application of DMD to background/foreground separation in video streams, and outline our current efforts to enhance and extend the prototype system.
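The DMD-based background/foreground separation the abstract refers to can be sketched on the CPU with NumPy. This is an illustrative sketch of standard exact DMD, not Sentinel's CUDA implementation; the function name, the `rank` truncation, and the near-zero-frequency threshold for "background" modes are assumptions for the example.

```python
import numpy as np

def dmd_background(frames, rank=10):
    """Separate background/foreground via Dynamic Mode Decomposition.

    frames: 2-D array with one flattened video frame per column.
    Returns (background, foreground) arrays of the same shape as frames.
    """
    # Snapshot pairs: X2 is X1 advanced one frame in time.
    X1, X2 = frames[:, :-1], frames[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    # Low-rank approximation of the linear operator mapping X1 -> X2.
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    Phi = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W   # exact DMD modes
    omega = np.log(eigvals)                          # per-frame log-eigenvalues
    # Background = modes with near-zero frequency (quasi-static content).
    bg = np.abs(omega) < 1e-2
    b = np.linalg.lstsq(Phi, frames[:, 0], rcond=None)[0]
    t = np.arange(frames.shape[1])
    dynamics = b[bg, None] * np.exp(np.outer(omega[bg], t))
    background = np.real(Phi[:, bg] @ dynamics)
    foreground = frames - background
    return background, foreground
```

On exactly low-rank data with a truly static background this recovers the background component exactly; on real video, the foreground residual is what higher-level vision stages would consume.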

Level: All
Type: Talk
Tags: Federal; HPC

Day: Thursday, 10/27
Time: 11:30 - 11:55
Location: Polaris

Talk

SPECIAL EVENT

Presentation
Details

DCE16116 - Lunch

Level: All
Type: Special Event
Tags: Lunch

Day: Thursday, 10/27
Time: 12:00 - 14:00
Location: Atrium Ballroom

DCE16120 - Self-Paced Labs

Level: All
Type: Special Event
Tags: Self-Paced Labs

Day: Thursday, 10/27
Time: 14:00 - 17:30
Location: Polaris Foyer

Special Event

HANDS-ON LAB

Presentation
Details

DCL16111 - Deep Learning for Image Segmentation

Jonathan Bentz Solutions Architect, NVIDIA
Jonathan Bentz is a Solutions Architect with NVIDIA, focusing on Higher Education and Research customers. In this role, he works as a technical resource for customers to support and enable their adoption of GPU computing. He delivers GPU training such as programming workshops to train users and help raise awareness of GPU computing. He also works with ISV and customer applications to assist in optimization for GPUs through the use of benchmarking and targeted code development efforts. Prior to NVIDIA Jonathan worked for Cray as a software engineer where he developed and optimized high-performance scientific libraries such as BLAS, LAPACK, and FFT specifically for the Cray platform. Jonathan obtained his Ph.D. in physical chemistry and his MS in computer science from Iowa State University.

There are a variety of important applications that need to go beyond detecting individual objects within an image and instead segment the image into spatial regions of interest. For example, in medical imagery analysis, it is often important to separate the pixels corresponding to different types of tissue, blood or abnormal cells so that we can isolate a particular organ. In this lab, we will use the TensorFlow deep learning framework to train and evaluate an image segmentation network using a medical imagery dataset.
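Although the lab itself uses TensorFlow, the core idea it teaches, treating segmentation as classification at every pixel, can be sketched framework-free. The function below is a generic per-pixel softmax cross-entropy in NumPy, an illustration only and not the lab's actual code; the function name and shapes are assumptions.

```python
import numpy as np

def pixelwise_cross_entropy(logits, labels):
    """Mean per-pixel cross-entropy, the usual segmentation training loss.

    logits: (H, W, C) unnormalized class scores for each pixel.
    labels: (H, W) integer class index for each pixel.
    """
    # Numerically stable log-softmax over the class axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    h, w = labels.shape
    # Pick each pixel's log-probability for its true class.
    picked = log_probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return -picked.mean()
```

With uniform logits over C classes the loss is log(C); as the network grows confident in the correct class at each pixel, the loss approaches zero.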

Level: Beginner
Type: Hands-on Lab
Tags: Science and Research

Day: Thursday, 10/27
Time: 14:00 - 15:30
Location: Hemisphere A

Hands-on Lab

TALK

Presentation
Details

DCS16109 - Improving Medicine, Saving Lives: Developing Visual Computing Technologies for Health Care

Amitabh Varshney Professor and Director, University of Maryland
Amitabh Varshney is the Director of the Institute for Advanced Computer Studies (UMIACS), Professor of Computer Science at the University of Maryland at College Park, and Co-Director of the Center for Health-related Informatics and Bioimaging. Varshney's research focus is on exploring the applications of high-performance computing and visualization in engineering, science, and medicine. He has worked on a number of research areas including visual saliency, summarization of large visual datasets, and visual computing for big data. He is currently exploring general-purpose high-performance parallel computing using clusters of CPUs and graphics processing units (GPUs). He has served in various roles in the IEEE Visualization and Graphics Technical Committee, including as its Chair, 2008–2012. He received the IEEE Visualization Technical Achievement Award in 2004. He is a Fellow of IEEE.

Come and learn about how we're using the GPUs to enable advances in a wide-variety of healthcare technologies ranging from stem cell classification for regenerative medicine, to modeling of the optical forces applied by lasers onto micro particles for assembly of nano component devices, to using GPUs to understand traumatic brain injury via study of patterns in brain imaging data, using diffusion kurtosis imaging (DKI) data. I will conclude with some of our ongoing research in the use of GPUs to create high-precision next-generation virtual and augmented reality environments for surgery, medical training, and telemedicine.

Level: All
Type: Talk
Tags: Healthcare; HPC

Day: Thursday, 10/27
Time: 14:00 - 14:25
Location: Oceanic

DCS16125 - Brains For Unmanned Systems: Artificial Intelligence for Embedded Perception, Navigation, Obstacle Avoidance

Massimiliano Versace CEO, Neurala
Max Versace is the co-founder and CEO of Neurala Inc., a DIUx company, and the founding Director of the Boston University Neuromorphics Lab. After his pioneering research in brain-inspired computing and deep networks, he continues to inspire and lead the world of autonomous robotics. He has spoken at dozens of events and venues, including TedX, NASA, The Pentagon, DIUx, Los Alamos National Labs, Air Force Research Labs, HP, iRobot, Samsung, LG, Qualcomm, Ericsson, BAE Systems, Mitsubishi, ABB and Accenture, among others. His work was featured in IEEE Spectrum, New Scientist, Geek Magazine, CNN, MSNBC and many other media. Max is a Fulbright scholar, has authored dozens of academic publications, holds several patents and two Ph.Ds: Experimental Psychology, University of Trieste, Italy; Cognitive and Neural Systems, Boston University, USA.

Today's unmanned systems need advanced, coordinated capabilities in perception and mobility to be effectively put to work in complex environments. To date, the best implementations of these capabilities come from biology. Max Versace, CEO of Neurala and Director of the Boston University Neuromorphics Lab, will explain how a variety of unmanned systems can use embedded GPUs coupled with relatively inexpensive sensors to sense and navigate their environments intelligently and safely. The talk will illustrate an NVIDIA-compatible working "mini-brain" that can drive UxS.

Level: All
Type: Talk
Tags: Robotics; Drones; Federal

Day: Thursday, 10/27
Time: 14:00 - 14:25
Location: Amphitheater

DCS16131 - cuSTINGER: Supporting Dynamic Graph Algorithms for GPUs

Oded Green Research Scientist, Georgia Institute of Technology
Oded Green is a research scientist at Georgia Tech in the School of Computational Science and Engineering, where he received his Ph.D. Oded received his M.Sc. in electrical engineering and his B.Sc. in computer engineering, both from the Technion. Prior to his return to Georgia Tech, Oded worked as the Chief Operating Officer of ArrayFire, where he was responsible for managing the company's daily activities. Oded's research primarily focuses on improving the performance and scalability of large-scale data analytics on a wide range of high-performance computing platforms. This includes designing algorithms and data structures for dynamic graph problems.

cuSTINGER, a new graph data structure targeting NVIDIA GPUs, is designed for streaming graphs that evolve over time. cuSTINGER gives algorithm designers greater productivity and efficiency when implementing GPU-based analytics, relieving programmers of managing memory and data placement. In comparison with static graph data structures, which may require transferring the entire graph back and forth between the device and host memories for each update, or require reconstruction on the device, cuSTINGER only requires transferring the updates themselves, reducing the total amount of data transferred. cuSTINGER gives users the flexibility, based on application needs, to update the graph one edge at a time or through batch updates, and supports extremely high update rates, over 1 million updates per second.
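The update model described above, shipping only edge insertions and deletions rather than the whole graph, can be illustrated with a toy host-side sketch. This is Python pseudocode of the idea only; the real cuSTINGER library is CUDA/C++, and the class and method names below are invented for illustration.

```python
class DynamicGraph:
    """Toy model of a cuSTINGER-style updatable adjacency structure.

    Illustrative only: in the real system the adjacency data lives in
    GPU memory and only the update batches cross the host/device bus.
    """

    def __init__(self, num_vertices):
        self.adj = [set() for _ in range(num_vertices)]

    def batch_update(self, insertions=(), deletions=()):
        # Apply a batch of edge updates in place; nothing else is
        # rebuilt or re-transferred.
        for u, v in deletions:
            self.adj[u].discard(v)
        for u, v in insertions:
            self.adj[u].add(v)

    def degree(self, u):
        return len(self.adj[u])
```

The point of the sketch is the cost model: a batch of k edge updates moves O(k) data, whereas rebuilding a static structure moves data proportional to the whole graph.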

Level: All
Type: Talk
Tags: Federal; HPC

Day: Thursday, 10/27
Time: 14:00 - 14:25
Location: Polaris

Talk

PANEL

Presentation
Details

DCS16173 - HPC: The Catalyst for the Next Wave of Scientific Innovation

Jack Wells Director of Science, Oak Ridge Leadership Computing Facility, Oak Ridge National Laboratory
Jack Wells is the Director of Science for the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science national user facility, and the Titan supercomputer, located at Oak Ridge National Laboratory (ORNL). Wells is responsible for the scientific outcomes of the OLCF's user programs. Jack previously led both ORNL's Computational Materials Sciences group in the Computer Science and Mathematics Division and the Nanomaterials Theory Institute in the Center for Nanophase Materials Sciences. Prior to joining ORNL as a Wigner Fellow in 1997, Wells was a postdoctoral fellow at the Institute for Theoretical Atomic and Molecular Physics at the Harvard-Smithsonian Center for Astrophysics. Jack has a Ph.D. in physics from Vanderbilt University and has authored or co-authored over 80 scientific papers and edited one book, spanning nanoscience, materials science and engineering, nuclear and atomic physics, computational science, applied mathematics, and text-based data analytics.
Arvind Ramanathan Staff Scientist, Oak Ridge National Laboratory
Arvind Ramanathan is a staff scientist in the Computational Science and Engineering Division and the Health Data Sciences Institute. He obtained his Ph.D. in computational biology from Carnegie Mellon University and his Masters in computer science from Stony Brook University. His research interests lie at the intersection of computational biology, machine learning and high performance computing systems. In particular, he is interested in developing data analytic tools relevant for applications in drug-discovery and public health dynamics.
William Kramer Blue Waters Director and Principle Investigator; Computer Science Research Professor, National Center for Supercomputing Applications at the University of Illinois at Urbana Champaign
William T.C. Kramer is the Principal Investigator and Project Director for the Blue Waters Leadership Computing Project at the National Center for Supercomputing Applications. Blue Waters is a National Science Foundation-funded project to deploy the first general-purpose, open-science, sustained-petaflops supercomputer as a powerful resource for the nation's researchers. Blue Waters is the largest system Cray Inc. has ever built, almost 50% larger than the next-largest Cray system, and one of the world's most balanced leadership systems, with almost 30,000 nodes with x86 and GPU processors, 1.7 PB of main memory, and the fastest I/O subsystem in the open research community. Kramer's accomplishments include deploying and operating extreme-scale computational systems, data systems, and best-of-class facilities, and leading intense, high-visibility projects. He combines broad and significant technical contributions with leadership and management experience in high-performance, interactive, and real-time computing, data-focused analysis, cyberinfrastructure, and applications and software development. He has substantial and sustained expertise in managing world-class, trend-setting organizations, a commitment to excellence, a record of fostering the education and development of the next generation of researchers and leaders, and a track record of building sustained collaborations and relationships. Kramer is a full Research Professor in the Computer Science Department, pursuing research in system performance evaluation, large-scale resiliency and reliability, and system resource management.
Jacqueline Chen DMTS, Sandia National Laboratories
Jacqueline H. Chen is a Distinguished Member of Technical Staff at the Combustion Research Facility at Sandia National Laboratories. She has contributed broadly to research in petascale direct numerical simulations (DNS) of turbulent combustion focusing on fundamental turbulence-chemistry interactions. These benchmark simulations provide fundamental insight into combustion processes and are used by the combustion modeling community to develop and validate turbulent combustion models for engineering CFD simulations. In collaboration with computer scientists and applied mathematicians she is the founding Director of the Center for Exascale Simulation of Combustion in Turbulence (ExaCT). She leads an interdisciplinary team to co-design DNS algorithms, domain-specific programming environments, scientific data management and in situ uncertainty quantification and analytics, and architectural simulation and modeling with combustion proxy applications. She received the DOE INCITE Award in 2005, 2007, 2008-2016, the DOE ALCC Award in 2012, the 34th International Combustion Symposium Distinguished Paper Award 2012, and the Asian American Engineer of the Year Award in 2009. She is a member of the DOE Advanced Scientific Computing Research Advisory Committee (ASCAC) and Subcommittees on Exascale Computing, and Big Data and Exascale. She was the editor of Flow, Turbulence and Combustion, the co-editor of the Proceedings of the Combustion Institute, volumes 29 and 30, the Co-Chair of the Local Organizing Committee for the 35th International Combustion Symposium, and a member of the Board of Directors of the Combustion Institute.
Thomas Jordan University Professor and Director, Southern California Earthquake Center, University of Southern California
Dr. Thomas H. Jordan is a University Professor and the W. M. Keck Foundation Professor of Earth Sciences at the University of Southern California. His current research is focused on system-level models of earthquake processes, earthquake forecasting, continental structure and dynamics, and full-3D waveform tomography. As the director of the Southern California Earthquake Center (SCEC), he coordinates an international research program in earthquake system science that involves over 1000 scientists at more than 70 universities and research organizations. He is an author of more than 230 scientific publications, including two popular textbooks.

High-performance computing (HPC) has been a cornerstone of scientific discovery, from Dr. Ken Wilson's Nobel Prize in Physics in 1982 to the breaking of the capsid code for the HIV virus this past year. Exascale computing, which follows the progression from terascale and petascale, marks the next milestone in the evolution of HPC and will allow the scientific community to continue its dynamic pace of discovery and innovation.

Level: All
Type: Panel
Tags: HPC

Day: Thursday, 10/27
Time: 14:00 - 14:50
Location: Rotunda

Panel

TALK

Presentation
Details

DCS16135 - Miniaturization of HPC for Cyber Security: GPUs for Better Threat Detection

Keith Kraus Data Engineer, Accenture Labs
Keith Kraus is an associate principal engineer for the Accenture Security Lab in the Washington, DC, area. Over the past year, Keith has done extensive data engineering, systems engineering, and data visualization work in the cybersecurity domain. His main focus is on building a GPU-accelerated big data solution for advanced threat detection and cyber-hunting capabilities. Prior to working for the Accenture Security Lab, Keith was a member of a research team that built a tool designed to optimally place automated defibrillators in urban environments. Keith graduated from Stevens Institute of Technology with a BEng in computer engineering and an MEng in networked information systems.
Louis DiValentin Data Scientist, Accenture Cyber Security Labs
Louis DiValentin is a Data Scientist with a focus on tackling analytical problems in cybersecurity. He currently works to develop novel cyber security analytic solutions for Accenture Cyber Security Labs. He has developed multiple models for identifying and detecting anomalous behavior of users within organizations as well as models that minimize risk in security postures. His recent primary topic of research has been around Graph Analytics in cyber security enhanced using GPU computation.

Cyber security has a unique, complex data problem: 250M to 2B events daily is common. In addition, data is scattered across numerous protection/detection systems and data silos. Rethinking cybersecurity as a data-centric problem, the Accenture Labs Cyber Security team uses emerging big-data tools, graph databases and analysis, and GPUs to exploit the connected nature of the data. Pairing GPUs with traditional big-data technology created a best-of-breed evolving system, ASGARD, that allows users to hunt for new, unknown threats and risks at speeds much faster than pure CPU systems. Learn how we're visualizing orders of magnitude more data with Graphistry, a GPU-powered visualization engine, and accelerating complex analytics on GPUs to level the playing field against new cyber threats.

Level: Intermediate
Type: Talk
Tags: Federal; HPC; IoT

Day: Thursday, 10/27
Time: 14:30 - 14:55
Location: Polaris

DCS16154 - Deep Learning Pipelines for Drug Discovery

Qingsong Zhu COO, InSilico Medicine, Inc.
Dr. Qingsong Zhu is the Chief Operating Officer of Insilico Medicine, Inc. and is responsible for Insilico Medicine operations and drug development. He obtained his Ph.D. in biochemistry from Kansas State University. Dr. Zhu received his postdoctoral training in breast cancer research at the Sidney Kimmel Comprehensive Cancer Center at Johns Hopkins University under the supervision of Dr. Nancy Davidson. He has over 12 years of experience in genomics research and drug development. His latest entrepreneurial endeavors focus on drug development and repurposing using newly developed pathway analysis algorithms and deep learning technologies for cancer and other age-related diseases.

Deep neural networks (DNNs) have demonstrated spectacular performance in many applications and have been rapidly adopted in many areas of science and technology. In pharmaceutical drug discovery, however, progress has been comparatively slow. Recently we demonstrated the application of DNNs to predicting the pharmacological properties of molecules using transcriptional response data, and integrated these predictors into comprehensive drug-discovery pipelines that also incorporate structural chemistry data. Here we will present the pipeline structure and describe the methodology for using transcriptional response data from experiments, in which multiple human cell lines were incubated with multiple small molecules, to predict the pharmacological class of a molecule.

Level: Beginner
Type: Talk
Tags: Healthcare

Day: Thursday, 10/27
Time: 14:30 - 14:55
Location: Oceanic

DCS16156 - Developing Computer Vision Applications with VisionWorks™

Elif Albuz Vision Software Manager, NVIDIA
Elif Albuz is the technical manager of the VisionWorks Toolkit at NVIDIA, driving features and optimizations with CUDA acceleration on Tegra GPUs. Elif has been involved in face detection, video stabilization, panoramic stitching, pedestrian detection, car detection and tracking, and various other computer vision applications and demos. Before the Computer Vision Group, she led the CUDA FFT library; prior to that, she worked on designing new algorithms for motion estimation, super-resolution, and frame-rate up-conversion and accelerating them on NVIDIA GPUs; designing architecture for error concealment and adaptive quantization hardware for a video encoder; and implementing low-level code for H.264 and MPEG-2 codecs. Before joining NVIDIA, she worked at Sony Electronics, leading the DVD decoder firmware stack used in DVD players and the PlayStation 2, implementing a real-time OS for multiprocessor systems, and working on H.264 encoder acceleration in the Multimedia Research Labs. Elif holds dual degrees in Electrical Engineering and Computer Science, with a focus on Artificial Intelligence and Robotics, and a Master's degree in Electrical Engineering, where she did research on content-based image retrieval and parallel architectures and algorithms.

We'll introduce the NVIDIA VisionWorks™ toolkit, a software development package for computer vision (CV) and image processing. VisionWorks originated with the Khronos OpenVX standard and extends beyond it. The VisionWorks library is optimized for CUDA-capable GPUs and SoCs, enabling computer vision applications on a scalable and flexible platform. VisionWorks implements a thread-safe API and framework for seamlessly adding user-defined primitives. The talk will give an overview of the VisionWorks toolkit, its API and framework, and computer vision pipeline samples exercising its API.

Level: All
Type: Talk
Tags: Autonomous Vehicles; Drones; Robotics

Day: Thursday, 10/27
Time: 14:30 - 14:55
Location: Amphitheater

DCS16113 - Earthquake Simulations at Extreme Scale

Thomas Jordan University Professor and Director, Southern California Earthquake Center, University of Southern California
Dr. Thomas H. Jordan is a University Professor and the W. M. Keck Foundation Professor of Earth Sciences at the University of Southern California. His current research is focused on system-level models of earthquake processes, earthquake forecasting, continental structure and dynamics, and full-3D waveform tomography. As the director of the Southern California Earthquake Center (SCEC), he coordinates an international research program in earthquake system science that involves over 1000 scientists at more than 70 universities and research organizations. He is an author of more than 230 scientific publications, including two popular textbooks.

The highly nonlinear, multiscale dynamics of large earthquakes is a difficult physics problem that challenges HPC systems at extreme computational scales. This presentation will summarize how earthquake simulations at increasing levels of scale and sophistication have contributed to our understanding of seismic phenomena, focusing on the practical use of simulations to reduce seismic risk and enhance community resilience. Full realization of the projected gains in forecasting probability will require enhanced computational capabilities, but it could have a broad impact on risk-reduction strategies, especially for critical facilities such as large dams, nuclear power plants, and energy transportation networks.

Level: Intermediate
Type: Talk
Tags: HPC; Federal

Day: Thursday, 10/27
Time: 15:00 - 15:25
Location: Rotunda

DCS16148 - Deep Learning for Quantitative Ultrasound Image Analysis

Michal Sofka Team Leader, 4Catalyzer
Michal Sofka completed his undergraduate work at the Czech Technical University. He received an M.Sc. in Electrical Engineering from Union College in 2001, and the M.Sc. and Ph.D. in Computer Science from Rensselaer Polytechnic Institute (RPI) in 2006 and 2008, respectively. In 2004, he was a technical employee at Siemens Corporate Research. He joined the same company as a full-time Research Scientist in 2008 and became a Project Manager in 2011. During this time, he managed and directly contributed to research and development projects for various Siemens business units and external customers. In 2013, Michal joined Cisco Systems, where he developed new algorithms for large-scale traffic analysis for threat defense. In 2016, he joined 4Catalyzer to revolutionize healthcare by designing deep neural networks for analyzing data from new types of sensors. He has published more than 30 technical articles in leading journals and conferences and filed more than 22 patent applications.

This talk outlines the challenges of training deep neural networks to segment, classify, and detect structures in medical images and proposes practical solutions for ultrasound image analysis. Unlike training neural networks on natural images, the difficulty in the medical domain lies in accurately labeling the structures of interest in sufficient quantities, data variation caused by medical conditions, privacy and compliance issues, and accurately defining the requirements based on domain knowledge. We will present how to train convolutional deep learning network architectures for accurate segmentation of structures in ultrasound images. We will also show how to use the segmented structures, such as heart chambers, as reliable biomarkers to evaluate health outcomes.

Level: All
Type: Talk
Tags: Healthcare

Day: Thursday, 10/27
Time: 15:00 - 15:25
Location: Oceanic

DCS16160 - Improving Deep Learning Accessibility with Docker

Pedro Rodriguez Senior Research Scientist, JHU Applied Physics Laboratory
Dr. Pedro A. Rodriguez is the Senior Technical Leader of multiple Deep Learning projects at the Johns Hopkins University Applied Physics Laboratory (JHU/APL). His work includes developing and deploying Deep Learning algorithms for automatic target recognition (ATR) on a variety of sensor modalities such as: Electro-Optical (EO), Infrared (IR), Synthetic Aperture Radar (SAR) and Full Motion Video (FMV). His current research focuses on implementing Deep Learning ATR algorithms on low SWaP systems, e.g., NVIDIA Jetson TX1. Dr. Rodriguez holds a M.S. in Applied Biomedical Engineering from Johns Hopkins University and a Ph.D in Electrical Engineering from the University of Maryland, Baltimore County. He has more than 12 years of experience developing novel image detection, tracking, classification and fusion algorithms for a variety of Information, Surveillance and Reconnaissance (ISR) projects.

We will discuss a new deep learning architecture developed at the Johns Hopkins University Applied Physics Laboratory (JHU/APL). APL's architecture is based on Docker, which allows users to easily train and deploy DL applications in diverse environments where computation is constrained by power, memory, internet connectivity, and system security requirements.

Level: All
Type: Talk
Tags: HPC; Federal

Day: Thursday, 10/27
Time: 15:00 - 15:25
Location: Polaris

DCS16187 - VR: Not Just for Games

Simon Jones Director, Unreal Engine Enterprise, Epic Games
Simon Jones has amassed over two decades of experience within the video games industry, and has spent the last four years focusing on visualization solutions within the automotive, industrial and aviation sectors. As a seasoned games industry professional he joined Epic Games to head up their newly created Enterprise division at the end of 2015, and is currently building the team that will enable Unreal Engine to be the real-time visualization solution of choice within the enterprise sector.

Epic's Unreal Engine is not just for games. Simon Jones will be detailing how Epic's Enterprise division enables major players across verticals such as automotive, aerospace, data visualization, AEC, medical, media & entertainment and virtual reality to design and deliver engaging user experiences that change the way they do business.

Level: All
Type: Talk
Tags: IoT

Day: Thursday, 10/27
Time: 15:00 - 15:25
Location: Amphitheater

Talk

HANDS-ON LAB

Presentation
Details

DCL16118 - Deep Learning Network Deployment (End-to-end Series Part 3)

Abel Brown Solution Architect, NVIDIA
Abel holds degrees in Mathematics and Physics as well as a Ph.D. in Geodesy & Geophysics from The Ohio State University. For the past eight years, Abel has been developing distributed software frameworks and administering high-performance computing clusters. He has deployed and managed many sensor networks around the world in Antarctica, South America, and Greenland. Abel is dually appointed to the Magnetospheric Multiscale (MMS) Ground System and Conjunction Assessment development teams and manages numerous research projects at a.i. solutions on GPU computing, image analytics, and advanced satellite perturbation techniques. Abel co-authored the PNAS publication "Greenland Rising," which was featured in WIRED Magazine's "Best Scientific Figures of 2012".

In this lab you will test three different approaches to deploying a trained DNN for inference. The first is to directly use the inference functionality within a deep learning framework, in this case DIGITS and Caffe. The second is to integrate inference within a custom application using a deep learning framework API, again using Caffe, but this time through its Python API. The final approach is to use the NVIDIA GPU Inference Engine (GIE), which automatically creates an optimized inference runtime from a trained Caffe model and network description file. You will learn about the role of batch size in inference performance as well as various optimizations that can be made in the inference process. You'll also explore inference for a variety of different DNN architectures.
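The role of batch size in inference performance can be illustrated independently of any framework: batching replaces many matrix-vector products with a single matrix-matrix product, which hardware (especially GPUs) executes far more efficiently. The NumPy sketch below is an illustration of that idea under a one-dense-layer simplification; it is not the lab's DIGITS/Caffe/GIE code, and the function names are invented for the example.

```python
import numpy as np

def infer_one(x, W):
    """Single-sample 'inference': one dense layer with ReLU activation."""
    return np.maximum(W @ x, 0.0)

def infer_batch(X, W):
    """Batched inference: the same layer applied to a whole batch of
    samples (one per column of X) with a single matrix-matrix multiply
    instead of many matrix-vector multiplies."""
    return np.maximum(W @ X, 0.0)
```

Both paths compute identical outputs per sample; the batched form simply exposes more parallel work per launch, which is why throughput rises with batch size (at the cost of per-request latency).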

Level: Intermediate
Type: Hands-on Lab
Tags: Science and Research

Day: Thursday, 10/27
Time: 15:30 - 17:00
Location: Hemisphere A

Hands-on Lab

TALK

Presentation
Details

DCS16118 - The Accelerated Climate Model for Energy (ACME) on Exascale Computers

Philip Jones ACME Performance Group Lead, Los Alamos National Laboratory
Phil Jones is the Performance Co-lead for the multi-laboratory Accelerated Climate Model for Energy (ACME) project and a staff scientist at Los Alamos National Laboratory in the Theoretical Fluid Dynamics and Solid Mechanics group. He has been developing climate models for high-performance computers for 25 years, starting with the vector-to-parallel architecture transitions in the '90s. He was the primary software architect for the Parallel Ocean Program (POP) and now participates in the development of the ACME ocean and ice components based on the Model for Prediction Across Scales (MPAS) framework. He has a Ph.D. in Astrophysical, Planetary and Atmospheric Sciences from the University of Colorado, Boulder.

Numerical models of the Earth's climate system are used to understand and predict future climate change and its impacts. The Accelerated Climate Model for Energy (ACME) is a US Department of Energy (DOE) project tasked with developing Earth System Models for simulating the feedbacks between the climate system and our energy use. Earth System Models require computational capabilities at the exascale and beyond and the ACME project is specifically targeted at utilizing exascale systems being deployed by the DOE, including GPU-accelerated systems. We'll describe the ACME model and the challenges of developing these complex models for exascale machines. We'll present early results on target architectures as well as future algorithmic and software strategies for exascale architectures.

Level: All
Type: Talk
Tags: HPC; Federal

Day: Thursday, 10/27
Time: 15:30 - 15:55
Location: Rotunda

DCS16134 - Medical Deep Learning: Lessons Learned

Diogo Almeida Senior Data Scientist, Enlitic
Diogo Moitinho de Almeida is a data scientist, software engineer, and hacker. He has previously been a medalist at the International Math Olympiad ending a 13-year losing streak for the Philippines, received the top prize in the Interdisciplinary Contest in Modeling achieving the highest distinction of any team from the Western Hemisphere, and won a Kaggle competition setting a new state-of-the-art for black box identification of causality. Now, he is at Enlitic where he works to radically improve the quality of medical diagnosis using deep learning, advance the state-of-the-art in modeling, and build novel ways to interact with neural networks.

One might think that solving hard medical problems is as simple as running an existing Deep Learning architecture on a new dataset, but that is unfortunately insufficient. In this talk, we will discuss how generalizable some of the current best practices are, problems unique to the medical domain, and our solutions to those problems - both current solutions and ones for the near future.

Level: Intermediate
Type: Talk
Tags: Healthcare; HPC

Day: Thursday, 10/27
Time: 15:30 - 15:55
Location: Oceanic

DCS16137 - Leveraging Azure for Deep Learning and Visualization

Tejas Karmarkar Principal Program Manager Azure Big Compute, Microsoft
Tejas Karmarkar is a Principal Program Manager for Azure High Performance Computing at Microsoft headquarters in Redmond, WA. He has worked at Microsoft for 10 years, starting in the data center business before moving to Private Cloud and then to High Performance Computing. Tejas is currently responsible for engineering and business strategy for Big Compute and High Performance Computing solutions running on Microsoft Azure Cloud for the simulation industry. Prior to joining Microsoft, Tejas worked at Altair Engineering Inc. for more than 8 years and has extensive experience in the automotive industry. Altogether, he has more than 17 years of experience in the manufacturing industry. Tejas holds bachelor's and master's degrees in Mechanical Engineering as well as an MBA from Michigan State University.
Jaideep Bangal Sr. Application Engineer, solidThinking Inspire
Jaideep Bangal is passionate about sharing knowledge with industry professionals to help them make design decisions. He works with design and mechanical engineers to show the benefits of solidThinking Inspire, an Altair company. He has more than 10 years of experience in CFD and optimization, and has enjoyed the journey from analyst to advocate for upfront design engineering tools in CFD and optimization that multitasking engineers can use in the product development process.

Level: All
Type: Talk
Tags: Federal; HPC

Day: Thursday, 10/27
Time: 15:30 - 15:55
Location: Polaris

DCS16186 - Changing the Landscape with Deep Learning and Accelerated Analytics

Keith Kraus Data Engineer, Accenture Security Labs
Keith Kraus is an associate principal engineer for the Accenture Security Lab in the Washington, DC, area. Over the past year, Keith has done extensive data engineering, systems engineering, and data visualization work in the cybersecurity domain. His main focus is on building a GPU-accelerated big data solution for advanced threat detection and cyber-hunting capabilities. Prior to working for the Accenture Security Lab, Keith was a member of a research team that built a tool designed to optimally place automated defibrillators in urban environments. Keith graduated from Stevens Institute of Technology with a BEng in computer engineering and an MEng in networked information systems.
Jim McHugh Vice President & General Manager, NVIDIA
Jim McHugh is vice president and general manager at NVIDIA with over 25 years of experience as a marketing and business executive with startup, mid-sized, and high-profile companies. He currently leads DGX-1, the world's first AI supercomputer in a box. Jim focuses on building a vision of organizational success and executing strategies to deliver computing solutions that benefit from GPUs in the data center. He has a deep knowledge and understanding of business drivers, market/customer dynamics, technology-centered products, and accelerated solutions.
Jonathan Symonds VP of Marketing, MapD
Jonathan Symonds is responsible for the messaging, demand generation and overall corporate awareness for MapD. He has over 16 years of enterprise software and analytics marketing experience, most recently at machine intelligence pioneer Ayasdi. Previously, Jonathan held senior marketing roles at Ace Metrix, 2Wire and Tandberg. He holds an MBA from Cornell and a BA from Washington and Lee University.
Geoff Lunsford CIO, Kinetica
Geoff Lunsford leads customer-facing interaction at Kinetica, focused on educating organizations on how they can utilize its GPU-accelerated in-memory database for better performance in real-time application environments.

Customers are looking to extend the benefits of big data with the power of the deep learning and accelerated analytics ecosystems. The NVIDIA® DGX-1™, the platform of AI pioneers, integrates deep learning and accelerated analytics in a single hardware and software system. This session will cover lessons learned and successes from real-world customer deployments of accelerated analytics, show how customers are leveraging deep learning and accelerated analytics to turn insights into AI-accelerated knowledge, and survey the growing ecosystem of solutions and technologies delivering on this promise.

Level: All
Type: Talk
Tags: Deep Learning & Artificial Intelligence; IoT

Day: Thursday, 10/27
Time: 15:30 - 16:20
Location: Amphitheater

DCS16104 - Evolution of Machine Learning from Expert Systems to Adaptive Exploratory Systems

Trung Tran DARPA PM, US Government
Trung Tran joined DARPA as a program manager in the Microsystems Technology Office in October 2015. Trung earned a B.Sc. in electrical engineering from the US Air Force Academy and an MBA from The Wharton School of the University of Pennsylvania. While in the Air Force, he was stationed at Hanscom Air Force Base and Fort Meade, working for the Air Intelligence Agency. In that role, he developed cryptographic chips and command and control networks, which focused on reducing the amount of time between the acquisition of sensor data and the use of that data by shooters or, more generally, weapons systems. He received four medals in recognition of his work in these areas. For 15 years prior to joining DARPA, Trung worked in Silicon Valley developing products, among them 100G top-of-rack switches, 1U server blades, and semiconductors including field-programmable gate arrays (FPGAs), memory, physical layer devices (PHYs), and framers. Tran is a former Vice Chairman of the Board of Directors of JEDEC, a microelectronics standards development organization, where he worked on DDR3 and FBDIMM (types of memory chips) specifications. His interests include machine learning, data analytics, and non-conventional computer architecture.

We'll explore the evolution of machine learning from expert systems to shallow learners to deep learners, looking at the types of algorithms and problems addressed by each of these areas. We'll examine what is meant by artificial intelligence and make a case for a new type of machine learning algorithm embodied in adaptive exploratory systems. The talk will then address how those systems would work in the future, including training sets and adaptation to events in real time, and will discuss the need for uncertainty quantification and for the better models and estimation techniques required to do truly predictive analytics.

Level: All
Type: Talk
Tags: Federal; IoT; Robotics

Day: Thursday, 10/27
Time: 16:00 - 16:25
Location: Polaris

DCS16117 - An Optimized GPU Implementation of a Multicolor Point-Implicit Linear Solver

Mohammad Zubair Professor, Old Dominion University
Mohammad Zubair has more than twenty-five years of research experience in experimental computer science and engineering, in both academia and industry. He is a professor of computer science at Old Dominion University. His primary interest is the application of high performance computing to scientific computing and big data analytics, and he works closely with researchers at NASA Langley and Jefferson Laboratory on high performance scientific computing. He is collaborating with researchers at NASA Langley on two projects: the design and implementation of computation kernels for the large-scale unstructured-grid fluid dynamics simulation code (FUN3D) on a cluster with GPU accelerators, and real-time probabilistic structural health monitoring using high performance computing. He is working with scientists at Jefferson Laboratory to implement a large-scale Lattice QCD simulation on the Intel Xeon Phi architecture, and to implement high-fidelity simulation of collective effects in electron beams on emerging parallel architectures such as GPUs. He has also examined how parallel and distributed frameworks such as Hadoop and Spark can be utilized efficiently for big data problems such as detecting communities in a social network and large-scale financial analytics. Mohammad Zubair also has industry experience; his major industrial assignment was three years at the IBM T.J. Watson Research Center, where his research focused on high performance computing and some of his work was integrated into IBM products: the Engineering Scientific Subroutine Library (ESSL) and Parallel ESSL.

NASA Langley Research Center's FUN3D computational fluid dynamics (CFD) software is used to solve the Navier-Stokes (NS) equations for a broad range of aerodynamics applications across the speed range. Accurate and efficient simulations of complex aerodynamic flows are challenging and require significant computational resources. We'll describe our experiences in porting the point-implicit algorithm to GPUs. The main computation involves solving a large, tightly coupled system of block-sparse linear equations. The solver was initially reformulated to leverage two CUDA library functions. Numerical experiments showed that the performance of these functions was suboptimal for matrices representative of those encountered in FUN3D simulations.
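For readers unfamiliar with the algorithm named in the title, a multicolor point-implicit sweep can be sketched in a few lines. The following is a minimal NumPy illustration, not FUN3D's implementation: grid points are grouped into colors so that points of the same color share no couplings, all points of one color can then be updated independently (the source of GPU parallelism), and each update solves only the small dense diagonal block at that point. All names and data layouts here are assumptions made for illustration.

```python
import numpy as np

def multicolor_point_implicit_sweep(A_diag, A_off, neighbors, colors, b, x, n_sweeps=1):
    """Multicolor point-implicit relaxation for a block-sparse system A x = b.

    A_diag:    (n, k, k) dense k x k diagonal block for each of n points
    A_off:     dict mapping (i, j) -> (k, k) off-diagonal block coupling i to j
    neighbors: list of neighbor index lists, one per point
    colors:    (n,) color id per point; same-colored points share no couplings
    b, x:      (n, k) right-hand side and current iterate (x updated in place)
    """
    for _ in range(n_sweeps):
        for c in np.unique(colors):
            # Points of one color are mutually independent, so on a GPU this
            # inner loop becomes one parallel kernel launch per color.
            for i in np.where(colors == c)[0]:
                r = b[i].copy()
                for j in neighbors[i]:
                    r -= A_off[(i, j)] @ x[j]   # subtract neighbor coupling
                # "Point-implicit": solve only the small dense block at point i
                x[i] = np.linalg.solve(A_diag[i], r)
    return x
```

The multicolor ordering is what makes the method GPU-friendly: within a color, every small block solve is independent, so the work maps onto thousands of threads without race conditions.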

Level: Intermediate
Type: Talk
Tags: HPC

Day: Thursday, 10/27
Time: 16:00 - 16:25
Location: Rotunda

DCS16162 - Deep Learning on Metastasis Detection of Breast Cancer using DGX-1

Quanzheng Li Associate Professor, Massachusetts General Hospital
Quanzheng Li is an Associate Professor of Radiology at Massachusetts General Hospital, Harvard Medical School. He received his M.S. degree from Tsinghua University in 2000, and his Ph.D. degree in Electrical Engineering from the University of Southern California (USC) in 2005. He did his postdoctoral training at USC from 2006 to 2007, and was a Research Assistant Professor there from 2008 to 2010. In 2011, he joined the Radiology Department at Massachusetts General Hospital in Boston, where he is currently the director of the image reconstruction and artificial intelligence program in the Gordon Center and a principal investigator at the Center for Clinical Data Science. Dr. Li is the recipient of the 2015 IEEE Nuclear and Plasma Sciences Society (NPSS) Early Achievement Award. He is an associate editor of IEEE Transactions on Image Processing and an editorial board member of Theranostics. His research interests include image reconstruction methods in PET, SPECT, CT, and MRI, as well as data science in health and medicine.

In this talk we will describe our work applying deep learning to detect metastases in pathological images of lymph nodes in breast cancer. We will demonstrate the whole processing pipeline of our pathological image detection, including pre-processing, deep learning using a convolutional neural network, and post-processing. We implemented the pipeline on a workstation using a P40 GPU, a workstation using a P100 GPU, and the latest dedicated deep learning machine, the DGX-1. We will present our detection results and the training and testing runtimes of the different systems, and share our experience applying the DGX-1 to deep learning in medical imaging.

Level: Intermediate
Type: Talk
Tags: Healthcare

Day: Thursday, 10/27
Time: 16:00 - 16:25
Location: Oceanic

DCS16145 - Extending the Reach of Medical Imaging with VR/AR

Sergio Aguirre CTO , EchoPixel, Inc.
Sergio Aguirre is Founder and CTO of EchoPixel, Inc. and is a pioneer in stereoscopic 3D systems with over 15 years of experience in visualization systems. Prior to forming EchoPixel, Sergio developed one of the first stereoscopic 3D video systems. He is responsible for leading technological development at EchoPixel as they continue to partner with leading luminary sites. Sergio completed a B.Sc. and M.Sc. EE from ITESM.

When a physician examines CT or MRI images, they piece together multiple 2D perspectives, and sometimes 3D renderings displayed on a 2D screen, to imagine the patient's 3D anatomy. That mental leap forces them to make assumptions about what the patient's anatomy truly looks like, which can slow down workflow and open the door to overlooking critical clinical information. We present the functionality, features, and benefits of a first embodiment of an interactive virtual reality software solution for diagnostic and surgical planning applications using CT and MRI images, which enables a clinician to visualize and interact with medical images of patient-specific organs and tissue as if they were real, physical objects.

Level: All
Type: Talk
Tags: Healthcare

Day: Thursday, 10/27
Time: 16:30 - 16:55
Location: Oceanic

DCS16163 - Deep Learning Applications in Neutrino Physics with the NOvA Experiment

Evan Niner Research Associate, Fermi National Accelerator Laboratory
Evan Niner received a Ph.D. in physics from Indiana University in 2015. He works now as a Research Associate at Fermi National Accelerator Laboratory in the Neutrino Physics Department. His primary research focus is on the NOvA neutrino oscillation experiment and R&D in liquid argon technologies.

The observation of neutrino oscillation provides evidence of physics beyond the standard model, and the precise measurement of those oscillations remains an important goal for the field of particle physics. NOvA is a long-baseline neutrino oscillation experiment at Fermi National Accelerator Laboratory designed to measure unknown parameters in neutrino physics, which requires the accurate characterization of neutrino interactions. This presentation will introduce CVN, a deep convolutional neural network for neutrino identification: an innovative and powerful new approach that applies deep learning techniques to event classification. In NOvA, this technique has shown a 30% performance improvement over traditional selection techniques.

Level: All
Type: Talk
Tags: HPC

Day: Thursday, 10/27
Time: 16:30 - 16:55
Location: Rotunda

DCS16192 - The New Face of Biometrics: How Deep Learning Helps Solve Facial Identification in the Field

Marios Savvides CEO, CyLab Biometrics Center
Marios Savvides is the Founder and Director of the CyLab Biometrics Center and a Research Professor in the Electrical and Computer Engineering Department at Carnegie Mellon University (CMU). His research focuses on developing algorithms for robust face and iris biometrics using advanced pattern recognition and deep learning. He has authored or co-authored over 170 journal and conference publications, including several book chapters in the area of biometrics, and served as Area Editor of Springer's Encyclopedia of Biometrics. His work has been featured on 60 Minutes, CNN, and NOVA, and in Popular Mechanics, to name a few. He has filed over 20 patent applications in the area of biometrics and is a recipient of CMU's 2009 Carnegie Institute of Technology Outstanding Faculty Research Award and the 2015 Gold Edison Award.

A large problem in deploying biometric identification in the field is scenarios where occluded and/or masked faces are common. We will show how deep learning can overcome these challenges even with severe occlusion. We will discuss our leading performance on the WIDER face challenge and the efficiency of our algorithms, which allows real-time analysis on a few GPUs. We will also cover ongoing research in facial identification and biometrics in general.

Level: All
Type: Talk
Tags: Federal

Day: Thursday, 10/27
Time: 16:30 - 16:55
Location: Polaris

DCS16105 - Designing with the Crowd: Heart Disease Diagnosis in the Data Science Bowl with Deep Learning

Aaron Sander Lead Scientist, Booz Allen Hamilton
Aaron Sander focuses on deep learning, machine learning, and algorithm development. He has worked on solutions in a variety of domains, including pharmaceutical business analytics, image analysis for computed tomography, Next Generation Sequencing based forensic bioinformatics, cybersecurity strategy, and cloud based applications of deep learning. Aaron has led projects across multiple sectors including pharmaceutical, nonprofit, and government. He has authored numerous publications in physics and astrophysics journals and holds a patent in the area of forensic bioinformatics. Aaron holds a B.Sc. in Physics from the University of Minnesota and a Ph.D. in Physics from Ohio State University where he developed an analysis to search for dark matter in the Milky Way with the Fermi Space Telescope.

Utilizing the Kaggle platform, we crowdsourced the design of deep learning solutions to the problem of heart disease diagnosis in The 2nd annual Data Science Bowl. We challenged competitors to develop a computer model for measuring the ejection fraction of the heart. The Data Science Bowl also demonstrated the ability of domain-area novices and non-experts in radiology to develop well-performing deep learning models for segmenting the left ventricle of the heart. While the results of the competition were impressive, the ultimate aim is to develop systems that can be used by medical professionals. This requires detailed statistical analyses of model performance and implementation support.
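For context, the quantity competitors were asked to measure has a simple definition: the ejection fraction is the percentage of blood expelled from the left ventricle on each beat, computed from the end-diastolic and end-systolic volumes. A minimal sketch (the function name is an illustrative assumption, not the competition's code; in the Data Science Bowl setting the two volumes would come from left-ventricle segmentations of cardiac MRI):

```python
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """Ejection fraction (%) from end-diastolic (EDV) and end-systolic (ESV)
    left-ventricle volumes in milliliters: EF = 100 * (EDV - ESV) / EDV."""
    if edv_ml <= 0:
        raise ValueError("end-diastolic volume must be positive")
    return 100.0 * (edv_ml - esv_ml) / edv_ml
```

A model that segments the left ventricle on every slice and time frame yields the volume curve over the cardiac cycle; its maximum and minimum are the EDV and ESV fed to this formula.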

Level: All
Type: Talk
Tags: Healthcare; HPC

Day: Thursday, 10/27
Time: 17:00 - 17:25
Location: Oceanic

DCS16129 - Image Manifold Translation in Synthetic Aperture Radar Imaging

John Kaufhold Data Scientist, Deep Learning Analytics
John Kaufhold is a data scientist at Deep Learning Analytics.
Jennifer Sleeman Senior Research Scientist, Deep Learning Analytics
Jennifer Sleeman is a Senior Research Scientist at Deep Learning Analytics.

Deep learning dominates the state of the art for recognizing objects in photographic images where large training data sets are available. While deep learning has also revolutionized machine translation in NLP with encoder-decoder networks, images have not enjoyed the same machine translation benefits. In synthetic aperture radar (SAR) imaging, where real, labeled, collected SAR training data is sparse, investigators have traditionally attempted to substitute SAR model data. We show that (1) this proxy SAR model data falls on different manifolds than the real SAR data, (2) real and model SAR data paint out approximately 1D submanifolds tessellating acquisition geometry, and (3) there is a learnable function that can translate model SAR submanifolds into their real SAR submanifold counterparts.

Level: Advanced
Type: Talk
Tags: Federal; HPC

Day: Thursday, 10/27
Time: 17:00 - 17:25
Location: Polaris

DCS16157 - Envrmnt: Building a GPU-Based End-to-End VR Platform for Verizon

Raheel Khalid Chief Engineer - VR, Verizon
Raheel Khalid is a game industry and mobile operating systems veteran who serves as the Chief Engineer of Verizon Labs' Virtual Reality division. Drawing on years of experience building large-scale game engines and graphics platforms, he architected and set the technical vision for Verizon's future in VR streaming services.

Level: All
Type: Talk
Tags: IoT; HPC

Day: Thursday, 10/27
Time: 17:00 - 17:25
Location: Amphitheater

Talk

HANDS-ON LAB

Presentation
Details

DCL16114 - Deep Learning for Object Detection (End-to-end Series Part 2)

Ryan Olson Solution Architect, NVIDIA

Building upon the foundational understanding of how deep learning is applied to image classification, this lab explores different approaches to the more challenging problem of detecting whether an object of interest is present within an image and recognizing its precise location within the image. Numerous approaches have been proposed for training deep neural networks for this task, each having pros and cons in relation to model training time, model accuracy, and speed of detection during deployment. On completion of this lab, you will understand each approach and its relative merits. You'll receive hands-on training applying cutting-edge object detection networks trained using NVIDIA DIGITS on a challenging real-world dataset.

Level: Intermediate
Type: Hands-on Lab
Tags:

Day: Thursday, 10/27
Time: 11:30 - 13:00
Location: Hemisphere A

Hands-on Lab

SPECIAL EVENT

Presentation
Details

DCE16113 - Reception & Exhibits

Level: All
Type: Special Event
Tags:

Day: Wednesday, 10/26
Time: 17:30 - 20:30
Location: Meridian Foyer