High performance computing & Monte Carlo

2004 
High performance computing (HPC), used for the most demanding computational problems, has evolved from single-processor custom systems in the 1960s and 1970s, to vector processors in the 1980s, to parallel processors in the 1990s, to clusters of commodity processors in the 2000s. Performance/price has increased by a factor of more than 1 million over that time, so that today's desktop PC is more powerful than yesterday's supercomputer. With the introduction of inexpensive Linux clusters and the standardization of parallel software through MPI and OpenMP, parallel computing is now widespread and available to everyone. Monte Carlo codes for particle transport are especially well positioned to take advantage of accessible parallel computing, due to the inherently parallel nature of the computational algorithm. We review Monte Carlo particle-transport parallelism, including the basic algorithm, load balancing, fault tolerance, and scaling, using MCNP5 as an example. Due to memory limitations, especially on single nodes of Linux clusters, domain decomposition has been tried, with partial success. We conclude with a new scheme, data decomposition, which holds promise for very large problems.
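The "inherently parallel nature" referred to above comes from the statistical independence of particle histories: each history can be simulated on any processor with its own random-number stream, and only the tallies need to be combined. The sketch below is a minimal, hypothetical illustration of that history-level scheme using MPI, which the abstract names; the simulate_history physics, the seed handling, and the history count are placeholders for illustration only, not MCNP5's actual implementation.

```c
/*
 * Minimal sketch of history-level parallel Monte Carlo with MPI.
 * Each rank simulates an independent share of particle histories using
 * its own random-number stream, then tallies are summed on rank 0.
 * The "physics" here is a trivial stand-in, not MCNP5 code.
 */
#include <mpi.h>
#include <stdio.h>

/* Tiny linear congruential generator: a stand-in for a real RN stream. */
static double lcg_uniform(unsigned long long *state)
{
    *state = *state * 6364136223846793005ULL + 1442695040888963407ULL;
    return (double)(*state >> 11) / (double)(1ULL << 53);
}

/* Hypothetical stand-in for one particle history: returns a tally score. */
static double simulate_history(unsigned long long *state)
{
    return lcg_uniform(state);   /* placeholder "path length" score */
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const long total_histories = 1000000;        /* assumed problem size  */
    long my_histories = total_histories / size;  /* static, even split    */
    if (rank < total_histories % size)           /* distribute remainder  */
        my_histories++;

    /* Simplistic per-rank seeding; production codes use proper stream splitting. */
    unsigned long long state = 0x9E3779B97F4A7C15ULL * (unsigned long long)(rank + 1);

    double local_tally = 0.0;
    for (long i = 0; i < my_histories; i++)
        local_tally += simulate_history(&state);

    /* Combining tallies is the only communication in the basic algorithm. */
    double global_tally = 0.0;
    MPI_Reduce(&local_tally, &global_tally, 1, MPI_DOUBLE,
               MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("mean score = %f\n", global_tally / total_histories);

    MPI_Finalize();
    return 0;
}
```

Because communication occurs only when tallies are combined, this pattern scales well until per-node memory (every rank holds the full geometry and cross-section data) becomes the limit, which is what motivates the domain- and data-decomposition schemes mentioned above.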