Parallel Algorithms and Software for Nuclear, Energy, and Environmental Applications. Part II: Multiphysics Software
Derek Gaston, Luanjing Guo, Glen Hansen, Hai Huang, Richard W. Johnson, D. A. Knoll, Chris Newman, Hyeong Kae Park, Robert Podgorney, Michael Tonks, R. L. Williamson
Abstract: This paper is the second part of a two-part sequence on multiphysics algorithms and software. The first part [1] focused on the algorithms; this part treats the multiphysics software framework and the applications built on it. Tight coupling is typically designed into an analysis application at inception, as such an application is strongly tied to a composite nonlinear solver that arrives at the final solution by treating all equations simultaneously. The application must also take care to minimize both temporal and spatial error between the physics, particularly if more than one mesh representation is needed in the solution process. This paper presents an application framework that was specifically designed to support tightly coupled multiphysics analysis. The Multiphysics Object Oriented Simulation Environment (MOOSE) is based on the Jacobian-free Newton-Krylov (JFNK) method combined with physics-based preconditioning, which provides the underlying mathematical structure for applications. The paper concludes with a host of nuclear, energy, and environmental applications that demonstrate the efficacy of the approach and the utility of a well-designed multiphysics framework.

Keywords: Multiphysics, Solver
Abstract: We present a hybrid method for motion editing that combines motion blending and Jacobian-based inverse kinematics (IK). When the original constraints are changed, a blending-based IK solver is first employed to find an adequate joint configuration coarsely. Using linear motion blending, this search corresponds to a gradient-based minimization in the weight space. The solution found is then improved by a Jacobian-based IK solver, which further minimizes the distance between the end effectors and the constraints. To accelerate the search in the weight space, we introduce a weight map that pre-computes good starting positions for the gradient-based minimization. The advantages of our approach are threefold. First, more realistic motions can be generated by utilizing motion-blending techniques, compared with pure Jacobian-based IK; the blended results also increase the rate of convergence of the Jacobian-based IK solver. Second, the Jacobian-based IK solver modifies poses in the pose configuration space, so the computational cost does not scale with the number of examples. Third, it is possible to extrapolate the given example motions with a Jacobian-based IK solver, which is generally difficult with pure blending-based techniques. Copyright © 2014 John Wiley & Sons, Ltd.
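The Jacobian-based IK refinement described above can be sketched generically. A common formulation is a damped least-squares update (an assumption here; the paper may use a different update rule), shown on a toy planar two-link arm:

```python
import numpy as np

def fk(theta, lengths=(1.0, 1.0)):
    """End-effector position of a planar two-link arm."""
    a, b = lengths
    t1, t2 = theta
    return np.array([a * np.cos(t1) + b * np.cos(t1 + t2),
                     a * np.sin(t1) + b * np.sin(t1 + t2)])

def jacobian(theta, lengths=(1.0, 1.0)):
    """Analytic Jacobian of fk with respect to the joint angles."""
    a, b = lengths
    t1, t2 = theta
    return np.array([[-a * np.sin(t1) - b * np.sin(t1 + t2), -b * np.sin(t1 + t2)],
                     [ a * np.cos(t1) + b * np.cos(t1 + t2),  b * np.cos(t1 + t2)]])

def ik_solve(theta0, target, iters=100, damping=1e-3):
    """Damped least-squares Jacobian IK: iteratively shrink the
    end-effector-to-constraint distance in pose configuration space."""
    theta = np.asarray(theta0, float).copy()
    for _ in range(iters):
        e = target - fk(theta)
        if np.linalg.norm(e) < 1e-9:
            break
        J = jacobian(theta)
        # Solve (J^T J + lambda I) dtheta = J^T e  (damped normal equations)
        dtheta = np.linalg.solve(J.T @ J + damping * np.eye(2), J.T @ e)
        theta += dtheta
    return theta

theta = ik_solve([0.3, 0.5], np.array([1.2, 0.8]))
```

Because the iteration acts on the pose directly, its cost is independent of how many example motions fed the blending stage, which is the scaling property the abstract highlights.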
In this article, a large-scale electromagnetic-thermal-mechanical co-simulation solver is implemented to simulate complex radio-frequency components using a high-performance computing framework. The proposed solver integrates frequency-domain electric-field simulation with time-domain thermal and thermally induced stress simulations via a multiphysics coupling iterative process. To speed up the co-simulation, a Krylov subspace method with a domain decomposition method (DDM) preconditioner is used. First, the developed multiphysics solver is verified against the commercial software COMSOL Multiphysics. Then, the parallel performance of the co-simulation solver, with different ghost-mesh thicknesses, is tested on a supercomputer. Finally, multiphysics results for filters and a real-world system-in-package (SiP) are obtained with the proposed solver.
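A preconditioned Krylov solve of the kind described can be sketched with a block-Jacobi preconditioner, the simplest additive DDM-style choice (a stand-in only; the paper's actual DDM preconditioner with ghost meshes is considerably more elaborate): each subdomain's diagonal block is factored once and the inverses are applied independently.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres, splu

# 1-D Laplacian as a stand-in for the discretized system matrix.
n = 200
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Block-Jacobi preconditioner: LU-factor each subdomain block.
nblocks = 4
size = n // nblocks
lus = [splu(A[i*size:(i+1)*size, i*size:(i+1)*size]) for i in range(nblocks)]

def apply_M(r):
    """Apply the subdomain solves independently (additive DDM flavor)."""
    z = np.empty_like(r)
    for i, lu in enumerate(lus):
        z[i*size:(i+1)*size] = lu.solve(r[i*size:(i+1)*size])
    return z

M = LinearOperator((n, n), matvec=apply_M, dtype=float)
x, info = gmres(A, b, M=M)   # info == 0 means converged
```

In a parallel DDM code each block lives on its own rank and the ghost layers carry the inter-subdomain coupling; here the blocks simply live in one process.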
The focus of the current research is to develop a numerical framework on Graphics Processing Units (GPUs) capable of modeling chemically reacting flow. The framework incorporates a high-order finite-volume method coupled with an implicit solver for the chemical kinetics. Both the fluid solver and the kinetics solver are designed to take advantage of the GPU architecture to achieve high performance. The structure of the numerical framework is shown, detailing the optimizations implemented in the solver. The mathematical formulation of the core algorithms is presented along with a series of standard test cases, including both nonreactive and reactive flows, to validate the capability of the numerical solver. The performance results obtained with the framework show the parallelization efficiency of the solver and emphasize the capability of the GPU for scientific calculations.
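The reason chemical kinetics needs an implicit solver is stiffness: reaction time scales are far shorter than the flow time step. A minimal CPU-side sketch (the paper's GPU implementation and chemistry are far richer) is backward Euler with an inner Newton iteration:

```python
import numpy as np

def backward_euler_step(y, dt, rhs, jac, newton_tol=1e-12, max_iter=20):
    """One implicit (backward Euler) step solved with Newton's method:
    find y_new such that  y_new - y - dt*rhs(y_new) = 0."""
    y_new = y.copy()                      # initial Newton guess
    I = np.eye(y.size)
    for _ in range(max_iter):
        g = y_new - y - dt * rhs(y_new)
        if np.linalg.norm(g) < newton_tol:
            break
        dy = np.linalg.solve(I - dt * jac(y_new), -g)
        y_new += dy
    return y_new

# Stiff toy kinetics: species A -> B with a fast rate constant.
k = 1.0e4
rhs = lambda y: np.array([-k * y[0], k * y[0]])
jac = lambda y: np.array([[-k, 0.0], [k, 0.0]])

y = np.array([1.0, 0.0])                  # all mass starts as A
dt = 1.0e-3                               # far beyond the explicit limit ~2/k
for _ in range(10):
    y = backward_euler_step(y, dt, rhs, jac)
```

An explicit integrator would need dt on the order of 1/k here; the implicit step remains stable at a time step four orders of magnitude larger, which is what makes GPU-batched implicit kinetics solves worthwhile.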
Electromagnetic (EM) computations are the cornerstone of the design process for several real-world applications, such as radar systems, satellites, and cell phones. Unfortunately, these computations are mainly based on numerical techniques that require solving millions of linear equations simultaneously. Software-based solvers do not scale well as the number of equations to solve increases. FPGA solver implementations have been used to speed up the process; however, emulation technology is more appealing, as emulators overcome FPGA memory and area constraints. In this paper, we present a scalable design to accelerate the finite element solver of an EM simulator on a hardware emulation platform. Experimental results show that our optimized solver achieves a 101.05x speed-up over the same pure-software implementation in MATLAB and a 35.29x speed-up over the best iterative software solver from the ALGLIB C++ package when solving 2,002,000 equations.
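For reference, the kind of iterative solve being accelerated can be written in a few lines. Conjugate gradient is the standard choice for symmetric positive-definite FEM systems (an assumption for illustration; the paper does not state which iterative method was benchmarked):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Conjugate gradient for a symmetric positive-definite system
    A x = b; A may also be a callable returning the product A @ v."""
    matvec = A if callable(A) else (lambda v: A @ v)
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    max_iter = max_iter or 10 * b.size
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# SPD stand-in for an FEM stiffness matrix: dense 1-D Laplacian.
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
x = conjugate_gradient(A, np.ones(n))
```

The per-iteration work is one matrix-vector product plus a few vector updates, which is exactly the kernel that hardware accelerators and emulators target.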
In this paper, the application of a GPU-based particle method to a three-dimensional sloshing problem is presented. The moving particle semi-implicit (MPS) method is a Lagrangian method that can simulate nonlinear flow effectively, but one of its drawbacks is the high computational cost as the particle number increases. Based on a modified MPS method, the MPS-GPU-SJTU solver is developed to simulate a large number of particles by using the GPU, which supports large-scale scientific computation. In addition, an optimization strategy is applied to reduce the storage and computational cost of the pressure Poisson equation (PPE). A convergence validation is carried out to verify the accuracy of the present solver, and the accuracy and performance of the GPU-based solver are investigated by comparing its results with those obtained on the CPU. In summary, the GPU-based solver shows good agreement with the CPU solver (MLParticle-SJTU), and the computational efficiency of the GPU is much higher than that of the CPU.
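The pressure Poisson equation (PPE) at the core of MPS-style projection methods can be illustrated on a structured grid (a grid-based analog only; MPS assembles the Laplacian from particle-neighborhood weights rather than a fixed stencil):

```python
import numpy as np

def solve_ppe(source, h, iters=2000):
    """Jacobi iteration for a 2-D Poisson problem lap(p) = source
    with Dirichlet p = 0 on the boundary -- the structured-grid
    analog of the pressure Poisson equation in projection methods."""
    p = np.zeros_like(source)
    for _ in range(iters):
        p[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1]
                                + p[1:-1, 2:] + p[1:-1, :-2]
                                - h * h * source[1:-1, 1:-1])
    return p

# Point-like source (e.g. local velocity divergence) mid-grid.
src = np.zeros((33, 33))
src[16, 16] = 1.0
p = solve_ppe(src, h=1.0 / 32)
```

Each sweep touches every unknown once, so storage and per-iteration cost scale linearly with the number of cells (or particles), which is why the paper's PPE optimization matters at large particle counts.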
We have previously suggested a minimally invasive approach to include hardware accelerators in an existing large-scale parallel finite element PDE solver toolkit, and implemented it in our software FEAST. Our concept has the important advantage that applications built on top of FEAST benefit from the acceleration immediately, without changes to application code. In this paper we explore the limitations of our approach by accelerating a Navier-Stokes solver. This nonlinear saddle point problem is much more involved than our previous tests and does not exhibit an equally favourable acceleration potential: not all computational work is concentrated inside the linear solver. Nonetheless, we are able to achieve speedups of more than a factor of two on a small GPU-enhanced cluster. We conclude with a discussion of how our concept can be altered to further improve acceleration.
The temperature field is simulated in the OpenFOAM package using standard and custom-built solvers. The objects of research are the numerical solvers of the OpenFOAM package and the auxiliary utilities for calculating the temperature field. The influence of the OpenFOAM package's solvers on the calculation time of the temperature field is examined, which makes it possible to create a solver with a shorter calculation time. The analysis of heat exchange was carried out by solving the problem of thermal conductivity. The custom solver is written against the OpenFOAM open-source code in the C++ programming language in the Visual Studio Code environment. For the solver to run, a calculation grid is generated on the Salome platform. The created solver for calculating temperatures in the OpenFOAM package requires less calculation time.
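The underlying heat-conduction calculation can be sketched in a few lines (a 1-D finite-difference analog for illustration; OpenFOAM solvers such as laplacianFoam operate on unstructured finite-volume meshes and typically use implicit time stepping):

```python
import numpy as np

def heat_conduction_1d(T0, alpha, dx, dt, steps):
    """Explicit update for 1-D heat conduction dT/dt = alpha * d2T/dx2
    with fixed-temperature (Dirichlet) ends."""
    r = alpha * dt / dx**2
    assert r <= 0.5, "explicit stability limit violated"
    T = T0.astype(float).copy()
    for _ in range(steps):
        T[1:-1] += r * (T[2:] - 2 * T[1:-1] + T[:-2])  # interior cells only
    return T

# 1 m rod, initially at 0, left end held at 100 and right end at 0.
T0 = np.zeros(51)
T0[0] = 100.0
T = heat_conduction_1d(T0, alpha=1e-4, dx=0.02, dt=1.0, steps=20000)
```

After a couple of diffusion times the profile relaxes to the linear steady state between the two boundary temperatures, which is the standard sanity check for a thermal-conductivity solver.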
In this paper we introduce a novel full-wave electromagnetic solver based on the finite-difference time-domain (FDTD) method, which is extremely efficient in terms of CPU performance and scalability. These features of the HIPERCONE solver are attained through asynchronous mesh updates, localization of data in fast memory, and parallelism at all levels, including vectorization. The algorithms in the solver achieve performance up to one to two orders of magnitude higher than traditional approaches. Unlike traditional memory-bound electromagnetic solvers, the maximal performance rate of HIPERCONE FDTD, in terms of mesh-cell updates per second, is reached for large meshes occupying or even exceeding the total available CPU RAM. The HIPERCONE solver is therefore especially advantageous for large-scale problems. In this work we describe the algorithmic background of the simulation method and give an example of a typical large application that benefits from the solver's performance.
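The mesh-cell update that such solvers optimize is itself simple. A 1-D leapfrog (Yee) sketch in normalized units (illustrative only, unrelated to HIPERCONE's actual data layout or update scheduling): E and H live on staggered grids and are updated alternately.

```python
import numpy as np

def fdtd_1d(steps=300, n=200):
    """1-D FDTD in normalized units at Courant number 1: staggered
    E/H leapfrog with perfectly conducting (E = 0) ends and a soft
    Gaussian source injected mid-grid."""
    E = np.zeros(n)
    H = np.zeros(n - 1)
    for t in range(steps):
        H += E[1:] - E[:-1]            # H update: curl of E
        E[1:-1] += H[1:] - H[:-1]      # E update: curl of H
        E[n // 2] += np.exp(-((t - 30) / 10.0) ** 2)  # soft source
    return E, H

E, H = fdtd_1d()
```

Each cell update reads only its immediate neighbors, so performance is dominated by memory traffic — precisely why data localization and vectorization, as in HIPERCONE, pay off.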
YALES2BIO is a massively parallel multiphysics solver based on the YALES2 solver developed at CORIA. YALES2BIO is dedicated to the simulation of blood flows at the macroscopic and microscopic scales. This chapter describes some achievements and current modeling efforts based on the YALES2BIO solver. An interesting use of a flow solver is the generation of reference data as part of a modeling effort, and the chapter provides examples of the use of YALES2BIO simulations to support modeling. Since YALES2BIO relies on fully unstructured and parallelized numerical methods, it has the potential to tackle complex industrial configurations. Most of the current challenges for YALES2BIO involve multiphysics and/or multiscale situations. The chapter provides three examples: modeling developments for the prediction of thrombotic events, modeling of velocity measurement by 4D-flow magnetic resonance imaging, and the extension of YALES2BIO to dense suspensions of red blood cells.