Data challenges in radio astronomy
Modern radio telescopes generate data at unprecedented scales, pushing current processing systems to their limits. Instruments like LOFAR and the SKA require substantial computational power to turn raw observations into scientifically useful products, and the demands keep growing. This project addresses these challenges by improving the computational efficiency of radio astronomy data pipelines, particularly post-correlation processing steps such as calibration and imaging.
Accelerated processing with GPUs
To address both speed and energy constraints, the project will employ GPUs, whose massively parallel architecture suits the data-parallel operations that dominate post-correlation processing and can deliver higher throughput per watt than general-purpose CPUs. Alongside this hardware acceleration, we are investigating modern programming languages such as Rust and Julia, which offer potential advantages in performance and portability. This dual focus lets us accelerate today's processing while evaluating software approaches that can scale with future demands.
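To make the GPU approach concrete, the sketch below shows how a typical post-correlation step can be offloaded with NumPy-style code. It is a minimal illustration, assuming CuPy as the GPU library; the array shapes, synthetic data, and the gain-application step are hypothetical examples, not the project's actual pipeline code. Applying per-antenna complex gains to visibilities is purely element-wise, so it maps directly onto GPU kernels with only the host-device transfers added.

```python
# Minimal sketch: applying per-antenna complex gains to visibilities on a GPU
# with CuPy. All names and data here are illustrative, not project code.
import numpy as np
import cupy as cp

rng = np.random.default_rng(42)
n_ant, n_chan = 64, 256
n_bl = n_ant * (n_ant - 1) // 2  # number of baselines for n_ant antennas

# Antenna-pair indices (i < j) for each baseline.
ant1, ant2 = np.triu_indices(n_ant, k=1)

# Synthetic visibilities and per-antenna gains (hypothetical test data).
vis = rng.standard_normal((n_bl, n_chan)) + 1j * rng.standard_normal((n_bl, n_chan))
gains = rng.standard_normal(n_ant) + 1j * rng.standard_normal(n_ant)

# Transfer to GPU memory; subsequent arithmetic runs as CUDA kernels.
vis_gpu = cp.asarray(vis)
g = cp.asarray(gains)
a1, a2 = cp.asarray(ant1), cp.asarray(ant2)

# Apply the gains on the GPU: V_ij <- g_i * V_ij * conj(g_j),
# broadcast over all channels in one element-wise operation.
applied = g[a1, None] * vis_gpu * cp.conj(g[a2, None])

# Copy the result back to the host for further CPU-side processing.
result = cp.asnumpy(applied)
print(result.shape)  # (2016, 256)
```

Because the computation stays in array form, the same code structure also works on the CPU with NumPy, which is one reason array-based GPU libraries are attractive for porting existing pipelines.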
Efficient and scalable pipelines
The project focuses on optimizing software for both single-node efficiency and multi-node scalability. By leveraging distributed-computing frameworks such as Apache Spark and Dask, we aim to reduce data-movement overheads and scale pipelines across many nodes. These improvements in efficiency and scalability could establish new best practices and be integrated into production systems, benefiting both radio astronomy and other data-intensive scientific disciplines.
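As an illustration of the scaling model, here is a minimal sketch using Dask; the array sizes, the flagging threshold, and the "flag then average" step are illustrative assumptions, not the project's pipeline. The large array is split into chunks, every operation is recorded lazily in a task graph, and compute() schedules the chunks across workers, so the same script runs on a laptop's thread pool or a multi-node cluster.

```python
# Minimal sketch: chunked, lazily evaluated processing with Dask.
# Sizes, threshold, and processing steps are illustrative only.
import numpy as np
import dask.array as da

# Lazily define a large synthetic array of visibility amplitudes,
# chunked so each piece fits in a worker's memory.
vis_amp = da.random.random((100_000, 1_024), chunks=(10_000, 1_024))

# Toy pipeline step: mask strong outliers (RFI-like samples), then
# average the remaining samples per channel. Nothing executes yet;
# Dask only records a task graph.
flagged = da.where(vis_amp > 0.999, np.nan, vis_amp)
per_channel_mean = da.nanmean(flagged, axis=0)

# compute() triggers execution; swapping the scheduler (threads,
# processes, or a distributed cluster) changes where chunks run,
# not the pipeline code itself.
print(per_channel_mean.compute().shape)  # (1024,)
```

The key design point is that chunking decouples the pipeline logic from the execution environment, which is what allows the same optimized code to serve both single-node and multi-node deployments.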