UNIverse - Public Research Portal

Prof. Dr. Florina M. Ciorba

Department of Mathematics and Computer Science

Projects & Collaborations


Swiss Participation in the Square Kilometre Array Observatory (SKACH)

Research Project  | 3 Project Members



The Square Kilometre Array Observatory (SKAO) is a next-generation radio astronomy facility, involving partners around the globe, that will lead to groundbreaking new insights in astrophysics and cosmology. Established on March 12, 2019, the SKAO is the second inter-governmental organisation dedicated to astronomy in the world. It will be operated over three sites: the Global Headquarters in the UK, the mid-frequency array in South Africa (SKA-Mid), and the low-frequency array in Australia (SKA-Low).


The two telescopes under construction, SKA-Mid and SKA-Low, will combine the signals received from thousands of small antennae spread over a distance of several thousand kilometres to simulate a single giant radio telescope capable of extremely high sensitivity and angular resolution, using a technique called aperture synthesis. Some of the sub-arrays of the SKA will also have a very large field-of-view (FOV), making it possible to survey very large areas of the sky at once.
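To give a flavour of how aperture synthesis works, the toy sketch below cross-correlates the voltage streams of two antennas and recovers the geometric delay of a source along one baseline; the correlations over many baselines sample the Fourier transform of the sky image, which is then inverted to form a picture. Everything here is synthetic and illustrative, not SKA pipeline code.

```python
import numpy as np

# Toy aperture-synthesis building block: one antenna pair (a baseline)
# cross-correlates its voltage streams; the fringe peak reveals the
# geometric delay of the source along this baseline's direction.

rng = np.random.default_rng(42)
n = 4096
sky = rng.standard_normal(n)                     # broadband source signal

def antenna(signal, delay, noise=0.2):
    """Voltage stream of one antenna: delayed source plus receiver noise."""
    return np.roll(signal, delay) + noise * rng.standard_normal(n)

v1 = antenna(sky, delay=0)
v2 = antenna(sky, delay=7)                       # longer path to antenna 2

# Correlator: cross-correlate via FFT, then locate the fringe peak.
xcorr = np.fft.ifft(np.fft.fft(v1).conj() * np.fft.fft(v2)).real
measured_delay = int(np.argmax(xcorr))
print(f"recovered geometric delay: {measured_delay} samples")  # -> 7
```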


Switzerland has become the eighth country to join the intergovernmental partnership that will build the Square Kilometre Array Observatory (SKAO) in Australia and South Africa. Swiss involvement is organized through a strong consortium of research institutions, called SKACH, which includes Fachhochschule Nordwestschweiz (FHNW), Universität Zürich (UZH), Eidgenössische Technische Hochschule Zürich (ETHZ), École Polytechnique Fédérale de Lausanne (EPFL), Zürcher Hochschule für Angewandte Wissenschaften (ZHAW), Universität Basel (UniBas), Université de Genève (UniGE), Haute École spécialisée de Suisse Occidentale (HES-SO), and Centro Svizzero di Calcolo Scientifico (CSCS).


As part of SKACH, our group aims to extend the SPH-EXA simulation framework with proper cosmological physics and to reach trillion-particle simulations on hybrid Tier-0 computing architectures. To this end, we aim to couple relevant physics modules with our SPH framework, enabling us to address both long-standing and cutting-edge problems via beyond-state-of-the-art simulations at extreme scales in cosmology and astrophysics. Such simulations include the formation, growth, and mergers of supermassive black holes in the early universe, which would greatly impact the scientific community (for instance, the 2020 Nobel Prize in Physics was awarded for pioneering research on supermassive black holes). Moreover, the ability to simulate planet formation with high-resolution models will play an important role in consolidating Switzerland's position as a leader in experimental physics and observational astronomy. Additional targets relate to explosive scenarios such as core-collapse and Type Ia supernovae, in which Switzerland has also maintained a long record of international renown. These simulations would be possible with a Tier-0-ready SPH code and would have a large impact on projects such as the current NCCR PlanetS funded by the SNF.
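As a purely hypothetical sketch of what "coupling physics modules" to a particle time step could look like, the snippet below wires two toy modules into one update loop. All class and method names are invented for illustration; SPH-EXA's actual C++ interfaces differ.

```python
# Hypothetical plugin-style coupling of physics modules to a particle step.
# The modules here are trivial stand-ins, not real gravity or SPH solvers.

class PhysicsModule:
    """Interface each physics module implements."""
    def accelerations(self, pos, vel):
        raise NotImplementedError

class ConstantGravity(PhysicsModule):
    """Toy stand-in for a gravity solver: uniform downward field."""
    def accelerations(self, pos, vel):
        return [(0.0, 0.0, -9.81) for _ in pos]

class LinearDrag(PhysicsModule):
    """Toy stand-in for ancillary physics: velocity-proportional drag."""
    def __init__(self, k=0.1):
        self.k = k
    def accelerations(self, pos, vel):
        return [(-self.k * vx, -self.k * vy, -self.k * vz) for vx, vy, vz in vel]

def step(pos, vel, modules, dt):
    """One explicit Euler step: sum contributions from all enabled modules."""
    acc = [(0.0, 0.0, 0.0)] * len(pos)
    for m in modules:
        acc = [tuple(a + b for a, b in zip(t, u))
               for t, u in zip(acc, m.accelerations(pos, vel))]
    vel = [tuple(v + a * dt for v, a in zip(vs, accs))
           for vs, accs in zip(vel, acc)]
    pos = [tuple(p + v * dt for p, v in zip(ps, vs))
           for ps, vs in zip(pos, vel)]
    return pos, vel

pos, vel = [(0.0, 0.0, 10.0)], [(1.0, 0.0, 0.0)]
for _ in range(100):
    pos, vel = step(pos, vel, [ConstantGravity(), LinearDrag()], dt=0.01)
print(pos[0])
```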


Swiss Participation in the Square Kilometre Array Observatory

Research Project  | 3 Project Members

The Square Kilometre Array Observatory (SKAO) is a next-generation radio astronomy facility, involving partners around the globe, that will lead to groundbreaking new insights in astrophysics and cosmology. Established on March 12, 2019, the SKAO is the second inter-governmental organisation dedicated to astronomy in the world. It will be operated over three sites: the Global Headquarters in the UK, the mid-frequency array in South Africa (SKA-Mid), and the low-frequency array in Australia (SKA-Low). The two telescopes under construction, SKA-Mid and SKA-Low, will combine the signals received from thousands of small antennae spread over a distance of several thousand kilometres to simulate a single giant radio telescope capable of extremely high sensitivity and angular resolution, using a technique called aperture synthesis. Some of the sub-arrays of the SKA will also have a very large field-of-view (FOV), making it possible to survey very large areas of the sky at once.


Switzerland has become the eighth country to join the intergovernmental partnership that will build the SKAO in Australia and South Africa. Swiss involvement is organized through a strong consortium of research institutions, called SKACH, which includes Fachhochschule Nordwestschweiz (FHNW), Universität Zürich (UZH), Eidgenössische Technische Hochschule Zürich (ETHZ), École Polytechnique Fédérale de Lausanne (EPFL), Zürcher Hochschule für Angewandte Wissenschaften (ZHAW), Universität Basel (UniBas), Université de Genève (UniGE), Haute École spécialisée de Suisse Occidentale (HES-SO), and Centro Svizzero di Calcolo Scientifico (CSCS).


The SKA telescopes will look at the history of the Universe as far back as the Cosmic Dawn, when the very first stars and galaxies formed. These key facilities will help Swiss scientists answer burning questions across several key topics in astrophysics, including dark energy, cosmic reionization, dark matter, galaxy evolution, cosmic magnetic fields, tests of gravity, and solar physics. During its operation, the SKAO will collect unprecedented amounts of data, requiring the world's fastest supercomputers to process this data in near real time. Swiss data scientists are working on complex Big Data algorithms, enhanced by high-performance computing and machine learning techniques, to handle these large data streams.


As part of SKACH, our group aims to extend the SPH-EXA simulation framework with proper cosmological physics and to reach trillion-particle simulations on hybrid Tier-0 computing architectures. To this end, we aim to couple relevant physics modules with our SPH framework, enabling us to address both long-standing and cutting-edge problems via beyond-state-of-the-art simulations at extreme scales in cosmology and astrophysics. Such simulations include the formation, growth, and mergers of supermassive black holes in the early universe, which would greatly impact the scientific community (for instance, the 2020 Nobel Prize in Physics was awarded for pioneering research on supermassive black holes). Moreover, the ability to simulate planet formation with high-resolution models will play an important role in consolidating Switzerland's position as a leader in experimental physics and observational astronomy. Additional targets relate to explosive scenarios such as core-collapse and Type Ia supernovae, in which Switzerland has also maintained a long record of international renown. These simulations would be possible with a Tier-0-ready SPH code and would have a large impact on projects such as the current NCCR PlanetS funded by the SNF.
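The sketch below hints at what near-real-time processing of a large stream means in practice: a single-pass, constant-memory computation. It uses Welford's online algorithm for a running mean and variance over a synthetic stream; actual SKA pipelines are, of course, vastly more involved.

```python
import random

# Single-pass streaming statistics (Welford's online algorithm): mean and
# variance of an unbounded stream in O(1) memory, without storing samples.
# The stream here is synthetic, purely for illustration.

def running_stats(stream):
    """Yield (count, mean, variance) after each sample."""
    count, mean, m2 = 0, 0.0, 0.0
    for x in stream:
        count += 1
        delta = x - mean
        mean += delta / count
        m2 += delta * (x - mean)
        yield count, mean, (m2 / count if count > 1 else 0.0)

random.seed(1)
stream = (random.gauss(100.0, 15.0) for _ in range(100_000))
for count, mean, var in running_stats(stream):
    pass  # consume the stream; only the latest snapshot is kept
print(f"n={count}, mean={mean:.2f}, var={var:.1f}")
```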


SPH-EXA2: Smoothed Particle Hydrodynamics at Exascale

Research Project  | 4 Project Members

The goal of the SPH-EXA2 project is to scale the Smoothed Particle Hydrodynamics (SPH) method implemented in SPH-EXA1 to enable Tier-0 and Exascale simulations. To reach this goal we define four concrete and interrelated objectives: physics, performance, correctness, and portability & reproducibility. We aim to couple relevant physics modules with our SPH framework, enabling us to address both long-standing and cutting-edge problems via beyond-state-of-the-art simulations at extreme scales in cosmology and astrophysics. Such simulations include the formation, growth, and mergers of supermassive black holes in the early universe, which would greatly impact the scientific community (for instance, the 2020 Nobel Prize in Physics was awarded for pioneering research on supermassive black holes). Moreover, the ability to simulate planet formation with high-resolution models will play an important role in consolidating Switzerland's position as a leader in experimental physics and observational astronomy. Additional targets relate to explosive scenarios such as core-collapse and Type Ia supernovae, in which Switzerland has also maintained a long record of international renown. These simulations would be possible with a Tier-0-ready SPH code and would have a large impact on projects such as the current NCCR PlanetS funded by the SNF.


The long-term and ambitious vision of the SPH-EXA consortium is to study fluid and solid mechanics in a wide range of research fields that are currently unfeasible with today's models, codes, and architectures. To this end, in SPH-EXA2 we build on SPH-EXA1 and develop a scalable bare-bones SPH simulation framework, referred to as SPH-EXA. In Switzerland, within the framework of the PASC SPH-EXA (2017-2021) project, we developed the SPH-EXA mini-app as a scalable SPH code that employs state-of-the-art parallel programming models and software engineering techniques to exploit current HPC architectures, including accelerators. The current SPH-EXA mini-app performs pure hydrodynamical simulations with up to 1 trillion SPH particles using only CPUs on 4,096 nodes of Piz Daint at CSCS. With relatively limited memory per GPU, the mini-app can still scale up to 250 billion SPH particles.


In terms of performance, the use of accelerators is necessary to meet the above SPH-EXA2 goal and objectives. Offloading computationally intensive steps, such as self-gravity evaluation and ancillary physics, to hardware accelerators will enable SPH-EXA to simulate increasingly complex cosmological & astrophysical scenarios. We envision that various types of hardware accelerators will be deployed on the supercomputers we will use in this project, such as NVIDIA GPUs (in Piz Daint) or AMD GPUs (in LUMI). Portability across GPUs will be ensured by using OpenACC and OpenMP target offloading, which are supported by different GPU vendors. Scheduling & load balancing and fault tolerance are major challenges on the way to Exascale. We will address these challenges in SPH-EXA2 by employing locality-aware data decomposition, dynamic & adaptive scheduling and load balancing, and advanced fault tolerance techniques. Specifically, we will schedule & load balance the computational load across heterogeneous CPUs, various NUMA domains (e.g., multiple sockets or memory controllers, multi-channel DRAM, and NV-RAM), and between CPUs and GPUs.
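For readers unfamiliar with the method, the sketch below shows the core SPH operation being scaled up: estimating the density at each particle as a kernel-weighted sum over its neighbours. It is a brute-force O(N²) illustration with a uniform smoothing length; a production code such as SPH-EXA instead relies on tree-based neighbour search, MPI domain decomposition, and GPU kernels.

```python
import numpy as np

# Minimal SPH density estimate: rho_i = sum_j m_j W(|r_i - r_j|, h),
# using the standard 3D cubic spline kernel (support radius 2h).

def cubic_spline_w(r, h):
    """Cubic spline SPH kernel in 3D, normalized so its integral is 1."""
    q = r / h
    sigma = 1.0 / (np.pi * h**3)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def sph_density(pos, mass, h):
    """Brute-force pairwise density sum, O(N^2) for clarity only."""
    diff = pos[:, None, :] - pos[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (mass[None, :] * cubic_spline_w(r, h)).sum(axis=1)

rng = np.random.default_rng(7)
n = 1_000
pos = rng.uniform(0.0, 1.0, size=(n, 3))   # particles in a unit box
mass = np.full(n, 1.0 / n)                 # total mass 1 => mean density ~1
rho = sph_density(pos, mass, h=0.1)
print(f"mean density: {rho.mean():.2f} (expect ~1 in a unit box)")
```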
To achieve correctness, we will examine and verify the effectiveness of the fault tolerance support in the new MPI 4.0 standard and beyond, in addition to selective particle replication (SPR) and optimal checkpointing (to NV-RAM or SSD). To ensure performance portability & reproducibility, we will benchmark SPH-EXA1's performance on a wide variety of platforms, as well as build off-the-shelf SPH-EXA containers that can easily be deployed with no additional setup required. This will also enlarge the SPH-EXA code user base.


The primary advantage of the SPH-EXA2 project is its scientific interdisciplinarity. The project involves computer scientists, computer engineers, astrophysicists, and cosmologists. This is complemented by a holistic co-design, which involves applications (cosmology, astrophysics, CFD), algorithms (SPH, domain decomposition, load balancing, scheduling, fault tolerance, etc.), and architectures (CPUs, GPUs, etc.), as opposed to the traditional binary software-hardware co-design.
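As a back-of-the-envelope illustration of the checkpointing trade-off mentioned above, Young's classic first-order approximation relates the checkpoint interval to the checkpoint cost and the system's mean time between failures. The numbers below are invented for illustration; the project's actual fault-tolerance analysis goes well beyond this formula.

```python
import math

# Young's approximation for the optimal checkpoint interval:
# t_opt = sqrt(2 * C * MTBF), where C is the time to write a checkpoint.

def young_interval(checkpoint_cost_s: float, mtbf_s: float) -> float:
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

c = 120.0            # assumed seconds to write one checkpoint (NV-RAM/SSD)
mtbf = 24 * 3600.0   # assumed one failure per day at full system scale
t = young_interval(c, mtbf)
print(f"checkpoint roughly every {t / 60:.0f} minutes")  # ~76 minutes
```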


DAPHNE: Integrated Data Analysis Pipelines for Large-Scale Data Management, HPC, and Machine Learning

Research Project  | 2 Project Members

The DAPHNE project aims to define and build an open and extensible system infrastructure for integrated data analysis pipelines, including data management and processing, high-performance computing (HPC), and machine learning (ML) training and scoring. Key observations are that (1) systems in these areas share many compilation and runtime techniques, (2) there is a trend towards complex data analysis pipelines that combine these systems, and (3) the underlying, increasingly heterogeneous, hardware infrastructure is converging as well. Yet, the programming paradigms, cluster resource management, and data formats and representations differ substantially. Therefore, this project aims, with a joint consortium of experts from the data management, ML systems, and HPC communities, to systematically investigate the system infrastructure, language abstractions, compilation and runtime techniques, and tools necessary to increase productivity when building such data analysis pipelines, and to eliminate unnecessary performance bottlenecks.
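To make the notion of an integrated pipeline concrete, the toy sketch below expresses data preparation, ML training, and scoring as one program over one in-memory representation, in plain Python/NumPy. It is only a stand-in for the idea; DAPHNE builds its own language abstractions and runtime for such pipelines.

```python
import numpy as np

# One program spanning the three stages an integrated pipeline combines:
# (1) data management/processing, (2) ML training, (3) scoring.

rng = np.random.default_rng(0)

# (1) Data processing: synthesize, then normalize a feature table.
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.normal(size=500) > 0).astype(float)
X = (X - X.mean(axis=0)) / X.std(axis=0)

# (2) ML training: gradient descent for logistic regression.
w = np.zeros(3)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - y) / len(y)

# (3) Scoring: apply the trained model back onto the managed data.
accuracy = np.mean(((1.0 / (1.0 + np.exp(-(X @ w)))) > 0.5) == y)
print(f"training accuracy: {accuracy:.2%}")
```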


MLS: Multilevel Scheduling in Large Scale High Performance Computers (extension)

Research Project  | 2 Project Members

This project proposes to investigate and develop multilevel scheduling (MLS), a multilevel approach for achieving scalable scheduling in large-scale high-performance computing systems across multiple levels of parallelism, with a focus on software parallelism. By integrating multiple levels of parallelism, MLS differs from hierarchical scheduling, traditionally employed to achieve scalability within a single level of parallelism. MLS is based on extending and bridging the most successful (batch, application, and thread) scheduling models beyond one or two levels of parallelism (scaling across) and beyond their current scale (scaling out). The proposed MLS approach aims to leverage all available parallelism and address hardware heterogeneity in large-scale high-performance computers such that execution times are reduced, performance targets are achieved, and acceptable efficiency is maintained. The methodology for reaching the multilevel scheduling aims involves theoretical research studies, simulation, and experiments. The expected outcome is an answer to the following research question: Given massive parallelism, at multiple levels, and of diverse forms and granularities, how can it be exposed, expressed, and exploited such that execution times are reduced, performance targets (e.g., robustness against perturbations) are achieved, and acceptable efficiency (e.g., a trade-off between maximizing parallelism and minimizing cost) is maintained? The project leverages the most efficient existing scheduling solutions to extend them beyond one or two levels and to scale them out within single levels of parallelism. It addresses four tightly coupled problems: scalable scheduling, adaptive and dynamic scheduling, heterogeneous scheduling, and bridging schedulers designed for competitive execution (e.g., batch and operating system schedulers) with those designed for cooperative execution (e.g., application-level schedulers). Overall, the project aims to make a fundamental advance toward simpler-to-use large-scale high-performance computing systems, with impact not only on the computer science community but also on all computational science domains.
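As a concrete taste of one of the thread-level scheduling models the project builds on, the sketch below implements a guided self-scheduling variant: workers repeatedly grab shrinking chunks of loop iterations, so faster workers naturally take on more work. Worker count, chunk rule, and workload are illustrative assumptions.

```python
import math
import threading

# Dynamic self-scheduling of a parallel loop: a shared index protected by a
# lock, from which workers grab chunks sized ~ remaining/(2*workers).

N_TASKS, N_WORKERS = 10_000, 4
next_index = 0
lock = threading.Lock()
results = [0.0] * N_TASKS

def grab_chunk():
    """Return the next (start, stop) chunk, or None when the loop is done."""
    global next_index
    with lock:
        remaining = N_TASKS - next_index
        if remaining == 0:
            return None
        size = max(1, math.ceil(remaining / (2 * N_WORKERS)))
        start, next_index = next_index, next_index + size
        return start, start + size

def worker():
    while (chunk := grab_chunk()) is not None:
        start, stop = chunk
        for i in range(start, stop):
            results[i] = math.sqrt(i)   # stand-in for a variable-cost iteration

threads = [threading.Thread(target=worker) for _ in range(N_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"processed {N_TASKS} iterations across {N_WORKERS} workers")
```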


MODA at sciCORE: Monitoring and Operational Data Analytics at sciCORE

Research Project  | 3 Project Members

The goal of this project is to improve HPC operations and research regarding system performance, resilience, and efficiency. The performance optimization aspect targets optimal resource allocation and job scheduling. The resilience aspect strives to ensure orderly operations when facing anomalies or misuse; this includes security mechanisms against malicious applications. The efficiency aspect concerns resource management and the energy efficiency of HPC systems. To this end, appropriate techniques are employed to (a) monitor the system and collect data, such as sensor data, system logs, and job resource usage, (b) analyze system data through statistical and machine learning methods, and (c) make control and tuning decisions to optimize the system and avoid waste and misuse of computing power. The operational ideals that this project follows are (a) to gain a data-driven understanding of the system instead of operating it like a black box, (b) to continuously monitor all system states and application behavior, (c) to holistically consider the interaction between system states and application behavior, and (d) to develop solutions that can detect and resolve performance issues autonomously.
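A minimal sketch of the "analyze and decide" step described above: flagging anomalous sensor readings with a z-score test over a sliding window. The window size, threshold, and temperature values are illustrative assumptions, not sciCORE's actual configuration.

```python
import statistics

# Sliding-window z-score anomaly detection over a stream of sensor readings.

WINDOW, THRESHOLD = 60, 3.0

def detect_anomalies(readings):
    """Yield (index, value) for readings far outside the recent window."""
    history = []
    for i, value in enumerate(readings):
        if len(history) >= WINDOW:
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history) or 1e-9
            if abs(value - mean) / stdev > THRESHOLD:
                yield i, value
        history.append(value)
        history = history[-WINDOW:]   # keep only the most recent window

node_temps = [55.0 + 0.1 * (i % 7) for i in range(300)]  # synthetic sensor
node_temps[200] = 90.0                                   # injected thermal spike
for idx, temp in detect_anomalies(node_temps):
    print(f"anomaly at sample {idx}: {temp:.1f} °C")
```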


MODA: Monitoring and Operational Data Analytics for HPC Systems

Research Project  | 2 Project Members

The goal of this project is to improve HPC operations and research regarding system performance, resilience, and efficiency. The performance optimization aspect targets optimal resource allocation and job scheduling. The resilience aspect strives to ensure orderly operations when facing anomalies or misuse; this includes security mechanisms against malicious applications. The efficiency aspect concerns resource management and the energy efficiency of HPC systems. To this end, appropriate techniques are employed to (a) monitor the system and collect data, such as sensor data, system logs, and job resource usage, (b) analyze system data through statistical and machine learning methods, and (c) make control and tuning decisions to optimize the system and avoid waste and misuse of computing power. The operational ideals that this project follows are (a) to gain a data-driven understanding of the system instead of operating it like a black box, (b) to continuously monitor all system states and application behavior, (c) to holistically consider the interaction between system states and application behavior, and (d) to develop solutions that can detect and resolve performance issues autonomously.
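Complementing the analysis example above, here is a minimal sketch of the "monitor and collect" step: periodically sampling host-level metrics into an append-only log for later analysis. The metric choice, interval, and file layout are illustrative assumptions; it uses the third-party psutil library for host metrics.

```python
import json
import time

import psutil  # common third-party library for host-level metrics

# Periodically sample a few node metrics and append them as JSON lines.

def sample():
    return {
        "ts": time.time(),
        "cpu_percent": psutil.cpu_percent(interval=1),  # blocks 1s to measure
        "mem_percent": psutil.virtual_memory().percent,
        "load_avg_1m": psutil.getloadavg()[0],
    }

with open("node_metrics.jsonl", "a") as log:
    for _ in range(5):                    # short demo loop; a daemon would run forever
        log.write(json.dumps(sample()) + "\n")
```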