Modern, scalable, reliable modeling of turbulent combustion

2011 
"Turbulence is the most important unsolved problem of classical physics". That was Richard Feynman decades ago, referring to a century old problem. Today, the situation is no different. Turbulent combustion, which deals with a fluid mixture reacting and mixing under turbulent conditions (as found in rockets, jet engines, power generators, car engines, furnaces,...), is harder still. While a solution that would satisfy a physicist is yet to be found, engineers all over the world are tackling the problem with computational modeling and simulation. There are a plethora of models for turbulence and combustion with a whole wide range of competing characteristics of applicability, accuracy, reliability and computational cost. Nowadays, reliability is the key feature required of such modeling (but, most often than not, sacrificed or oversight) for the design of environment friendly and efficient machines. There exists an unproven (but undeniable) direct correlation between reliability and computational cost. However, the era of sacrificing the former because one cannot overcome and afford the latter for a full scale engineering application is over, thanks to TeraGrid and other resources for open research coupled with relentless efforts of countless developers to provide software that runs faster and better. This project is one sampling of how these resources are utilized to overcome an important research problem. We take on the Filtered Density Function (FDF) for large eddy simulation (LES) of turbulent reacting flow, which is a novel and robust methodology that can provide very accurate predictions for a wide range flow conditions. FDF involves an expensive particle/mesh algorithm where stiff chemical reaction computations cause quite interesting, problem specific, and in most cases extremely imbalanced (a couple of orders of magnitude) computational loads. The authors present the Irregularly Portioned Lagrangian Monte Carlo [?], an advanced implementation of FDF based on a simple and smart parallelization strategy that is implemented via optimized solvers and high-level public domain parallelization libraries (eg. Zoltan[?]). The methodology and a discussion of the implementation is presented along with results and benchmarks on the TeraGrid (NICS/Kraken and PSC/Bigben). Scaling and speed up comparisons demonstrate that a conventional parallelization is unable to scale beyond a 100 processors, whereas the new implementation can efficiently utilize 1000 processors for the same size problem, and has enabled the FDF methodology to tackle ever larger problems.