Efficient parallel computing using digital filtering algorithms
1998
This chapter introduces a technique that enables the integration of unstable systems by means of block filtering. When a problem is computed with many blocks, interface filtering is also required to obtain a convergent and stable solution. The chapter illustrates a filtering technique that identifies and suppresses the unstable spatial frequencies, allowing the system to be integrated along the stable ones. The parallel version of the NPARC code with an explicit Runge–Kutta time-stepping algorithm is used to demonstrate the technique. Domain decomposition divides the domain into blocks and interfaces, and the Courant number inside the blocks is increased beyond the stability limit. With a combination of block and interface filtering, high speed-up and efficiency can be achieved.

For the purposes of parallel computation, domain decomposition divides the problem to be solved into subdomains called blocks. The inter-block boundaries are called interfaces and can be of matching or non-matching, overlapping or non-overlapping type. The algorithm that calculates the flow field at each grid point in the NPARC code is the “Block Solver,” and communication across the interfaces is handled by the “Interface Solver.” This approach is well suited to parallel computing because each block may represent a separate process and can be assigned to any available processor.

The block filtering procedure assumes an average Mach number inside the block when constructing the filter. If the Mach number varies too strongly inside a block, the procedure produces poor results. Hence, blocks containing shocks are integrated with a stable Courant number, while neighboring blocks without shocks or strong Mach number variation can be integrated with higher Courant numbers.
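The chapter's filter construction depends on the block-averaged Mach number and on the NPARC block and interface solvers, none of which are reproduced here. As a minimal sketch of the underlying idea only, assuming scalar linear advection discretized with second-order central differences and classical four-stage Runge–Kutta time stepping on a single periodic block, the script below evaluates the Runge–Kutta amplification factor of each spatial Fourier mode at the chosen Courant number, zeroes the modes whose amplification exceeds one, and thereby keeps a run at a Courant number beyond the explicit stability limit bounded. All function names and parameter values are illustrative and are not taken from the chapter.

```python
import numpy as np

def rk4_amplification(z):
    """Amplification factor of classical 4-stage Runge-Kutta for dz/dt = lambda*z."""
    return 1.0 + z + z**2 / 2.0 + z**3 / 6.0 + z**4 / 24.0

def build_spectral_mask(n, dx, a, dt):
    """Mark the Fourier modes whose RK4 amplification exceeds 1 for the
    semi-discrete central-difference advection operator (the unstable
    spatial frequencies to be suppressed)."""
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)      # angular wavenumbers
    lam = -1j * a * np.sin(k * dx) / dx            # modified-wavenumber eigenvalues
    growth = np.abs(rk4_amplification(lam * dt))
    return (growth <= 1.0 + 1e-12).astype(float)   # 1 = keep, 0 = suppress

def spatial_filter(u, mask):
    """Remove the unstable spatial frequencies from the block solution."""
    return np.real(np.fft.ifft(np.fft.fft(u) * mask))

def central_rhs(u, a, dx):
    """Second-order central difference for du/dt = -a du/dx on a periodic block."""
    return -a * (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)

def rk4_step(u, a, dx, dt):
    k1 = central_rhs(u, a, dx)
    k2 = central_rhs(u + 0.5 * dt * k1, a, dx)
    k3 = central_rhs(u + 0.5 * dt * k2, a, dx)
    k4 = central_rhs(u + dt * k3, a, dx)
    return u + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

if __name__ == "__main__":
    n, a = 256, 1.0
    dx = 1.0 / n
    cfl = 4.0                         # beyond the ~2.83 RK4/central stability limit
    dt = cfl * dx / a
    x = np.arange(n) * dx
    u = np.exp(-200.0 * (x - 0.5)**2)  # smooth initial pulse

    mask = build_spectral_mask(n, dx, a, dt)
    for step in range(200):
        u = rk4_step(u, a, dx, dt)
        u = spatial_filter(u, mask)    # filtering keeps the over-CFL run bounded
    print("max |u| after 200 steps at CFL = 4:", np.max(np.abs(u)))
```

In the chapter's setting the same suppression principle is applied per block, with the filter built from the block-averaged Mach number, and an analogous filtering is applied at the interfaces between blocks; blocks containing shocks or strong Mach number variation retain a stable Courant number instead.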