At the end of 2023, the contract was signed to build JUPITER, a new European supercomputer and the first to reach 1 ExaFLOP/s HPL performance.
In November of 2022, I created a table comparing GPU programming models and their support on GPUs of the three vendors (AMD, Intel, NVIDIA) for a talk. The audience liked it, so I beefed it up a little and posted it in this very blog.
The Supercomputing Conference 2023 took place in Denver, Colorado, from November 12 to 17. For the Women in HPC workshop, we submitted a paper focusing on benchmarking different accelerators for AI. The paper was accepted, and I was invited to give a lightning talk presenting the work, which was spun off from our OpenGPT-X project.
Poster in institute repository: http://dx.doi.org/10.34734/FZJ-2023-04519
Outline: Environment Setup · Enabling UCC in OpenMPI · Enabling NCCL in UCC (Team Layer Selection) · All the Variables · Results (1. Plain OpenMPI, 2. OpenMPI with UCC, 3. OpenMPI with UCC+NCCL) · Scaling Plots (Average Latency, Bus Bandwidth) · Comparing MPI, UCC, UCC+NCCL · Comparing UCC+NCCL, NCCL · Summary · Technical Details. This post …
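The UCC/NCCL setup outlined above can be sketched roughly like this. This is a hedged sketch, assuming an OpenMPI build configured with UCC support; the exact variables and values used in the benchmarks are detailed in the post, and the benchmark binary shown here is only an example:

```shell
# Enable the UCC collectives component in OpenMPI (assumes OpenMPI was
# built with --with-ucc) and raise its priority so it is actually selected.
export OMPI_MCA_coll_ucc_enable=1
export OMPI_MCA_coll_ucc_priority=100

# Restrict UCC's "basic" collective layer to the NCCL team layer,
# so GPU collectives are dispatched to NCCL.
export UCC_CL_BASIC_TLS=nccl

# Example launch (hypothetical binary): an allreduce benchmark on 4 GPUs.
mpirun -n 4 ./osu_allreduce -d cuda
```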
Together with our colleagues from Helmholtz-Zentrum Dresden-Rossendorf (HZDR) and in collaboration with HIDA and OpenHackathons, we hosted the Helmholtz GPU Hackathon 2023 in Jülich in May. I’ve blogged about the event for the Zweikommazwei blog of Forschungszentrum Jülich.
Outline: MSA Concept · MSA Software Building Blocks · Workshop Exercises (1. Hello World!, 2. GPU Hello World!, 3. CPU-GPU Ping Pong) · Slides. On May 29, we held a workshop about using the Modular Supercomputing Architecture (MSA) together with project partners from ParTec.
Poster publication: http://hdl.handle.net/2128/34532
For a recent talk at DKRZ in the scope of the natESM project, I created a table summarizing the current state of using a certain programming model on a GPU of a certain vendor, for C++ and Fortran. Since it led to quite a discussion in the session, I made a standalone version with some updates and elaborations here and there. I present: the GPU Vendor/Programming Model Compatibility Table!