
Second Computational and Data Science school for HEP (CoDaS-HEP 2018)

407 Jadwin Hall

Princeton University

The second school on tools, techniques, and methods for Computational and Data Science for High Energy Physics (CoDaS-HEP 2018) will take place on 23-27 July 2018 at Princeton University.

Advanced software is a critical ingredient to scientific research. Training young researchers in the latest tools and techniques is an essential part of developing the skills required for a successful career both in research and in industry.

The CoDaS-HEP school aims to provide a broad introduction to these critical skills as well as an overview of their applications in High Energy Physics. Specific topics to be covered at the school include:

Parallel Programming

Big Data Tools and Techniques

Machine Learning

Practical skills like performance evaluation, use of git, etc.

The school offers a limited number of young researchers an opportunity to learn these skills from experienced scientists and instructors. Successful applicants will receive travel and lodging support to attend the school.


Fundamentally, a Version Control System (VCS) is a system that records changes to a file or set of files over time, so that you can recall specific versions later.

Git is a modern VCS that is fast and flexible to use thanks to its lightweight branch creation. Git's popularity is due in part to the availability of cloud hosting services like GitHub, Bitbucket, and GitLab. Hosting a Git repository on a remote service like GitHub greatly facilitates working collaboratively, and also allows you to frequently back up your work on a remote host.

We will start this talk by introducing the fundamental concepts of Git. The second part of the talk will show how to publish to a remote repository on GitHub.

No prior knowledge of Git or version control will be necessary, but some familiarity with the Linux command line will be expected.
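The workflow covered in the talk can be sketched in a few commands. This is an illustrative sequence, not material from the talk itself; the repository name, identity settings, and GitHub URL are placeholders.

```shell
# Create a local repository and record a first snapshot.
mkdir demo-repo && cd demo-repo
git init                                   # start a new repository
git config user.name "Student"             # placeholder identity for commits
git config user.email "student@example.com"

echo "# Demo" > README.md
git add README.md                          # stage the file
git commit -m "Initial commit"             # record a snapshot

git checkout -b feature                    # lightweight branch creation

# Publishing to GitHub (second part of the talk) requires an existing
# remote repository and credentials, so it is only shown here:
# git remote add origin https://github.com/<user>/demo-repo.git
# git push -u origin main
```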


Princeton Center For Theoretical Science (PCTS)

We start with a discussion of the historical roots of parallel computing and how they appear in a modern context. We'll then use OpenMP and a series of hands-on exercises to explore the fundamental concepts behind parallel programming.


We will explore through hands-on exercises the common core of OpenMP; that is, the features of the API that most OpenMP programmers use in all their parallel programs. This will provide a foundation of understanding you can build on as you explore the more advanced features of OpenMP.


We'll explore more complex OpenMP problems and get a feel for how to work with OpenMP with real applications.

Speaker:
Tim Mattson
(Intel)

10:30 → 10:40
Group Photo - Jadwin Hall plaza (10m)

10:40 → 11:00
Coffee Break (20m)
407 Jadwin Hall

11:00 → 12:30
Parallel Programming - The world beyond OpenMP (1h 30m)
407 Jadwin Hall

Parallel programming is hard. There is no way to avoid that reality. We can mitigate these difficulties by focusing on the fundamental design patterns from which most parallel algorithms are constructed. Once mastered, these patterns make it much easier to understand how your problems map onto other parallel programming models. Hence for our last session on parallel programming, we'll review these essential design patterns as seen in OpenMP, and then show how they appear in cluster computing (with MPI) and GPGPU computing (with OpenCL and a bit of CUDA).

Speaker:
Tim Mattson
(Intel)

12:30 → 13:30
Lunch (1h)
407 Jadwin Hall

13:30 → 15:00
Machine Learning (1h 30m)
407 Jadwin Hall

Machine learning (ML) is a thriving field with many active research topics. It has found numerous practical applications in natural language processing, speech and image understanding, as well as the fundamental sciences. ML approaches are capable of replicating, and often surpassing, the accuracy of hypothesis-driven first-principles simulations, and can provide new insights into a research problem.

Here we provide an overview of the content of the Machine Learning tutorials.
Although the theory and practice sessions are described separately, they will be taught in alternation over the four lectures. In this way, as soon as new concepts have been introduced, we can immediately use them in a tailored exercise, which helps absorb the material covered.

Theory

We'll start with a gentle introduction to the ML field, introducing the three learning paradigms: supervised, unsupervised, and reinforcement learning. We'll then delve into the two supervised sub-categories, regression and classification, using neural nets' forward and backward propagation.
We'll encounter overfitting and fight it with regularisation.
We'll soon see that smart choices can be made to exploit the nature of the data we're dealing with, and introduce convolutional, spectral, recurrent, and graph neural nets.
We'll move on to unsupervised learning and familiarise ourselves with generative models, such as variational autoencoders and generative adversarial networks.

Practice

We will introduce machine learning technology with a focus on the open-source software stack, namely the PyTorch and Keras frameworks.
We will give a brief introduction to PyTorch's architecture, primitives, and automatic differentiation; implement multi-layer perceptron and convolutional layers; and take a deep dive into recurrent neural networks for sequence learning tasks, followed by an introduction to Keras.
You will learn to debug machine learning applications and to visualize the training and validation process with pytorchviz or TensorBoard, and we will discuss ways to train multi-GPU and distributed models on a cluster with the Horovod package.
All exercises will use PyTorch or Keras. Python programming experience is desirable, but previous experience with PyTorch and Keras is not required.

11:00 → 12:30
The Scientific Python Ecosystem (1h 30m)
407 Jadwin Hall

In recent years, Python has become a glue language for scientific computing. Although code written in pure Python is generally slow, Python has a good C API and NumPy as a common data abstraction, and most data processing, statistical, and machine learning software packages provide a Python interface as a matter of course.

This tutorial will introduce you to the core Python packages for science (NumPy, Pandas, SciPy, Numba, Dask) as well as HEP-specific tools (uproot, histbook, NumPythia, pyjet), and show how to connect them in analysis code.


All modern CPUs boost their performance through vector processing units (VPUs). Typically this gain is achieved not by the programmer, but by the compiler through automatic vectorization of simple loops in the source code. Compilers generate SIMD instructions that operate on multiple numbers simultaneously by loading them together into extra-wide registers. Intel's latest processors feature a plethora of vector registers, as well as 1 or 2 VPUs per core that operate on 16 floats or 8 doubles in every cycle. Vectorization is an important component of parallel performance on CPUs, and to maximize performance, it is vital to consider how well one's code is being vectorized by the compiler.

In the first part of our presentation, we look at simple code examples that illustrate how vectorization works and the crucial role of memory bandwidth in limiting the vector processing rate. What does it really take to reach the processor's nominal peak of floating-point performance? What can we learn from things like roofline analysis and compiler optimization reports?

In the second part, we consider how a physics application may be restructured to take better advantage of vectorization. In particular, we focus on the Matriplex concept that is used to implement parallel Kalman filtering in our group's particle tracking R&D project. Drastic changes to data structures and loops were required to help the compiler find the SIMD opportunities in the algorithm. In certain places, vector operations were even enforced through calls to intrinsic functions. We examine a suite of test codes that helped to isolate the performance impact of the Matriplex class on the basic Kalman filter operations.


Improving the performance of scientific code is something that is often considered to be some combination of difficult, mysterious, and time consuming, but it doesn't have to be. Performance tuning and optimization tools can greatly aid in the evaluation and understanding of the performance of scientific code. In this talk we will discuss how to approach performance tuning and introduce some measurement tools to evaluate the performance of compiled-language (C/C++/Fortran) code. Powerful profiling tools, such as Intel VTune and Advisor, will be introduced as well as demonstrated in practical applications. A hands-on example will allow students to gain some familiarity using VTune in a simple, yet realistic setting. Some of the more advanced features of VTune, including the ability to access the performance hardware counters on modern CPUs, will be introduced.

13:30 → 15:00
Low-level Python (1h 30m)
407 Jadwin Hall

Python is a high-level language that usually hides "bare metal" details from the user. This is desirable in organizing a complex workflow, but it can get in the way of performance or interfacing with C/C++ code.

This tutorial will demonstrate how to "jailbreak" your Python for low-level computing. It will include Numpy tricks, memory mapped files, mixing C++ and Python through Cython, GPU programming through PyCUDA, and accessing ROOT functions from Python without loss of performance.

The role of machine learning in extracting the secrets of the Higgs (Guest Lecture) (45m)
407 Jadwin Hall

Machine learning has transformed how many analyses are performed at the LHC. I will demonstrate this by showing how machine learning has been, and continues to be, used in studying selected properties of the Higgs boson. I will discuss selected example analyses and highlight the sensitivity and other improvements obtained from machine learning. I will conclude by discussing limitations and future perspectives.