Wednesday, August 27th

Integrating HPC with Quantum Accelerators: Challenges and Opportunities

Martin Schulz, Full Professor and Chair of Computer Architecture and Parallel Systems, Technical University of Munich (TUM), member of the Board of Directors at the Leibniz Supercomputing Centre

Quantum computing is maturing, and the first installations are now available in HPC centers. Integrating such a novel and radically different technology leads to substantial challenges for facilities, hardware, and software. As part of the Munich Quantum Valley, we have installed several quantum systems at the Leibniz Supercomputing Centre, and we are designing the software stack to tightly couple them to HPC systems. In this talk, I will present the challenges we faced and how we overcame them, and provide an overview of the current state of development and the opportunities for HPC-QC integration going forward.

Thursday, August 28th

Data-Centric Parallelism: A Journey from the Past to the Future

Domenico Talia, Full Professor of Computer Engineering at the University of Calabria, Italy | Honorary Professor at Noida University, India | Co-founder of the start-up DtoK Lab

Instructions (time) and data (memory) are the two most fundamental elements of computation. Every algorithm takes some time to run and requires some memory space to store the data needed to compute its results. Nowadays, with the availability of massive data sets, data have taken on a fundamental role as a key artifact for scalable algorithms and applications. These data, commonly referred to as “big data”, are challenging current storage, processing, and analysis systems and capabilities. For this reason, new algorithms, models, tools, and systems are being studied, designed, developed, and deployed in the area of parallel computing, aiming to effectively exploit the benefits of big data and of the learning processes based on them. This scenario calls for attention to data-aware parallel processing, where parallelism exploitation is based on scalable data processing strategies and mechanisms. In this talk, I will first discuss the importance of using HPC systems for big data processing and how data awareness in parallel computing can lead to scalable solutions, also noting how parallel data processing is a key element of AI solutions. Then, I will describe a series of research activities carried out to exploit parallelism in big data and machine learning applications. I will particularly focus on the data-centric strategies and frameworks we designed, which are promising approaches for future parallel applications.

Friday, August 29th

Performance, Portability, and Productivity for Sustainable Scientific Computing in the AI Era

Florina M. Ciorba, Associate Professor and head of the High Performance Computing (HPC) Lab at the University of Basel

This keynote addresses the evolving interplay of performance, portability, and productivity for sustainable parallel and distributed computing in the era of AI. Drawing on the latest developments in state-of-the-art simulation frameworks and large-scale simulations on LUMI and Alps, we explore strategies for porting across heterogeneous architectures, leveraging AI for code generation, and optimizing energy usage via adaptive scheduling. These insights point toward a future in which HPC software and systems must not only port, perform, and scale, but also adapt intelligently and operate sustainably across an increasingly diverse and AI-augmented landscape.

Silver Sponsors

DDN | IBM

Bronze Sponsors

MEGWARE
TU Dresden | ZIH
Springer | LNCS