Supercomputing (B.Sc.)

Format

The seminar meets on Mondays from 12:00 PM to 2:00 PM. All meetings take place face-to-face in room 3220, EAP2.

The seminar has two parts. In the first part we read selected chapters of the book Introduction to High Performance Computing for Scientists and Engineers; earlier chapters are presented by the teaching staff, while later chapters may be selected as student seminar topics. The second part discusses recent supercomputing research papers. Students may choose any of the papers listed below as their seminar topic.

The general format of the seminar is similar to that of a reading group: all participants read the book chapter(s) or paper before attending the respective session. One person, either a student or a member of the teaching staff, acts as the expert on the topic. This person presents the topic in 30 minutes and leads the discussion afterwards.

Student Papers

All participants write a scientific paper about their chosen seminar topic. The paper must be submitted via email within four weeks after the respective topic was discussed in the seminar. Use the ACM proceedings template with the sigconf option. The paper should be 4-6 pages long (excluding references) and may be written in either English or German.
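For orientation, a minimal skeleton using the acmart class with the sigconf option might look as follows. This is only a sketch; the title, author details, and the bibliography file name are placeholders:

    \documentclass[sigconf]{acmart}

    \title{Your Seminar Topic}
    \author{Jane Doe}
    \affiliation{%
      \institution{Your University}
      \country{Germany}}
    \email{jane.doe@example.org}

    \begin{document}

    % In acmart, the abstract is set before \maketitle.
    \begin{abstract}
    One-paragraph summary of the paper.
    \end{abstract}

    \maketitle

    \section{Introduction}
    Your text goes here.

    % ACM-Reference-Format is the bibliography style shipped with acmart.
    \bibliographystyle{ACM-Reference-Format}
    \bibliography{references}

    \end{document}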

Supervision

Preparing presentations and writing scientific papers are hard tasks. You may ask for advice at any time! Start early and keep in touch with your advisor!

Two meetings with your advisor are mandatory:
  • The first meeting should be at least one week before your presentation.

  • The second meeting should be at least one week before your paper submission deadline.

Schedule

Date   What?
04/03  Kickoff
04/10  Deadline for choosing a topic
04/17  Modern Processors (Ch. 1)
04/24  Basic Optimization Techniques for Serial Code (Ch. 2)
05/08  Data Access Optimization (Ch. 3)
05/15  Parallel Computers (Ch. 4)
05/22  Basics of Parallelization (Ch. 5)
06/05  OpenMP (Ch. 6)
06/12  STRONGHOLD: Fast and Affordable Billion-Scale Deep Learning Model Training
06/19  Locality and NUMA (Ch. 8)
06/26  Message Passing Interface (Ch. 9 and 10)
06/27  Get Together (06:30 PM, Daheme im Garten)

Topics

Select any of the following chapters/papers as your seminar topic, or suggest your own topic/paper. Topics are assigned on a first-come, first-served basis.

  • Introduction to High Performance Computing for Scientists and Engineers (book):

    • Data Access Optimization (Ch. 3)

    • Parallel Computers (Ch. 4)

    • Basics of Parallelization (Ch. 5)

    • OpenMP (Ch. 6 and 7)

    • Locality and NUMA (Ch. 8)

    • Message Passing Interface (Ch. 9 and 10)

  • SC22:

    • HammingMesh: A Network Topology for Large-Scale Deep Learning (paper, AI)

    • CA3DMM: A New Algorithm Based on a Unified View of Parallel Matrix Multiplication (paper)

    • DeepSpeed-Inference: Enabling Efficient Inference of Transformer Models at Unprecedented Scale (paper, AI)

    • SpDISTAL: Compiling Distributed Sparse Tensor Computations (paper, AI)

    • STRONGHOLD: Fast and Affordable Billion-Scale Deep Learning Model Training (paper, AI)

    • Lessons Learned on MPI+Threads Communication (paper)

  • IPDPS23:

    • Accelerating CNN inference on long vector architectures via co-design (preprint, AI)

    • Exploiting Sparsity in Pruned Neural Networks to Optimize Large Model Training (preprint, AI)

  • MLSYS23:

    • Reducing Activation Recomputation in Large Transformer Models (preprint, AI)

    • Efficiently Scaling Transformer Inference (preprint, AI)

    • On Optimizing the Communication of Model Parallelism (preprint, AI)

AI Summer School 2023

The AI Summer School 2023 will host poster sessions where all attendees present cutting-edge research on artificial intelligence. Preparing a poster presentation for the summer school may count as a seminar presentation. Only AI-related topics may be presented at the summer school.