
JSC Accelerating Devices Lab

Various notes from the Accelerating Devices Lab (X-Dev) of Jülich Supercomputing Centre

TL;DR: I gave an HPC intro talk; the slides are below. In MAELSTROM, we connect three areas of science: 🌍 weather and climate simulation with 🤖 Machine Learning methods and workflows using 📈 HPC techniques and resources. Halfway into the project, we held a boot camp at JSC a few days ago to teach this Venn diagram to a group of students. Some were ML experts but had never used an HPC system.


**Poster publication:** http://hdl.handle.net/2128/32006 The 14th JLESC workshop (JLESC: Joint Laboratory for Extreme-Scale Computing) was hosted by the National Center for Supercomputing Applications (NCSA) in Urbana, Illinois, from 28 to 30 September. We had the opportunity to present the OpenGPT-X project in the form of a poster.


This blog is an experiment. We want to share bits and pieces of our work: the reports we write, the presentations we give, the little discoveries we make, even some first, water-testing investigations; and all the rest. It’s a documentation of what we do. Little bits of science, collected in the open, and sometimes not that little at all.


A few months ago, we extended the JURECA Evaluation Platform at JSC by two nodes with AMD Instinct MI250 GPUs (four GPUs each). The nodes are Gigabyte G262-ZO0 servers, each with a dual-socket AMD EPYC 7443 processor (24 cores per socket, SMT-2) and four MI250 GPUs (128 GB of memory each). The post covers the OSU bandwidth micro-benchmark (with an A100 comparison), GPU STREAM (variants, data sizes, and a scan over threads and data sizes), a conclusion, and technical details.


A few days ago, OPTIMA announced the release of deliverable 3.5, to which I contributed. The deliverable is one of a set of five under work package 3. But first, let’s talk about OPTIMA. OPTIMA is an EU-funded project whose goal is to prove that several HPC applications can take advantage of future, highly heterogeneous, FPGA-populated HPC systems.


This blog post is based on a presentation I gave at the “New Trends in Computational Science in Engineering and Industrial Mathematics” workshop in Magdeburg on 01/07/2022. My goal is to give a brief introduction to the state of current large language models, the OpenGPT-X project, and the transformer neural network architecture for readers unfamiliar with the subject.


On June 21 and 22, we held a workshop together with NVIDIA looking back on the last ten years of our lab, the NVIDIA Application Lab at Jülich (or NVLab, as I sometimes abbreviate it). The material can be found in the agenda on Indico. We invited application owners with whom we worked during that time to present past developments, recent challenges, and future plans.