Imaging SKA-scale data in three different computing environments
2016
Abstract: We present the results of our investigation into computing-platform options for the imaging pipeline of the CHILES project, an ultra-deep HI pathfinder for the era of the Square Kilometre Array (SKA). CHILES pushes current computing infrastructure to its limits, and understanding how to deliver its images is clarifying the Science Data Processing requirements for the SKA. We have tested three platforms: a moderately sized cluster, a massive High Performance Computing (HPC) system, and the Amazon Web Services (AWS) cloud computing platform. We used well-established tools for data reduction and performance measurement to investigate how these platforms handle the complicated access patterns of real-life Radio Astronomy data reduction. Each platform has strengths and weaknesses, and the system tools allow us to identify and evaluate them quantitatively. With the insights from these tests we are able to complete the imaging pipeline processing on both the HPC platform and the cloud computing platform, which paves the way for Radio Astronomy to meet the big-data challenges of the SKA era. We discuss the performance and cost implications that all similar projects will have to consider, and make recommendations for the planning of Radio Astronomy imaging workflows.