    VME Bus-Based Memory Channel Architecture for High Performance Computing
    Citations: 0 | References: 10

    Related Papers (10)
    The MPI (Message-Passing Interface) Standard has been successful in its goal of promoting portable parallel programming for both application writers and library implementors. MPI-1 confined itself to the well-known and well-understood message-passing model, in which a fixed number of processes with separate address spaces communicate only through cooperative operations such as send/receive or collective operations such as broadcast and reduce. In a second round of activity, the MPI Forum has recently concluded work on the MPI-2 Standard, which extends MPI beyond the message-passing programming model in a number of ways, including dynamic process management, one-sided operations, and some shared-memory operations. The message-passing model has also been used in MPI-2 as a model for parallel I/O. This paper describes the salient features of the MPI-2 Standard, with special emphasis on the programming model that results from these extensions to the message-passing model (a sketch of one-sided communication follows this entry).
    Message Passing Interface
    SPMD
    Interface (matter)
    Citations (4)
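    To make the one-sided extension concrete, the following is a minimal sketch (not taken from the paper) of MPI-2 one-sided communication in C++: each rank exposes a buffer through an MPI window, and rank 0 writes into rank 1's memory with MPI_Put, with no matching receive. The buffer size and the value written are illustrative assumptions.

        // Minimal MPI-2 one-sided communication sketch (illustrative; not from
        // the paper). Run with at least two ranks, e.g. mpirun -np 2 ./a.out
        #include <mpi.h>
        #include <cstdio>

        int main(int argc, char** argv) {
            MPI_Init(&argc, &argv);
            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            int local = -1;  // memory each rank exposes for remote access
            MPI_Win win;
            MPI_Win_create(&local, sizeof(int), sizeof(int),
                           MPI_INFO_NULL, MPI_COMM_WORLD, &win);

            MPI_Win_fence(0, win);          // open an access epoch on all ranks
            if (rank == 0) {
                int value = 42;             // assumed payload for illustration
                // Write 'value' directly into rank 1's window; no receive needed.
                MPI_Put(&value, 1, MPI_INT, /*target=*/1, /*disp=*/0,
                        1, MPI_INT, win);
            }
            MPI_Win_fence(0, win);          // close the epoch; the put is now visible

            if (rank == 1) std::printf("rank 1 sees %d\n", local);
            MPI_Win_free(&win);
            MPI_Finalize();
            return 0;
        }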
    Because there is no shared memory in a distributed parallel computer system, message passing is used for processors to exchange data with each other. The Message Passing Interface standard and several parallel programming environments are introduced. The characteristics of network parallel programming environments based on message passing, and the problems in developing them, are also discussed (a minimal send/receive sketch follows this entry).
    Message Passing Interface
    Interface (matter)
    Remote procedure call
    Citations (0)
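    As a concrete illustration of the message-passing exchange the abstract describes, here is a minimal C++/MPI sketch in which rank 0 sends an integer to rank 1 with a cooperative send/receive pair; the tag and payload values are illustrative assumptions.

        // Minimal MPI send/receive sketch (illustrative). Run with at least
        // two ranks, e.g. mpirun -np 2 ./a.out
        #include <mpi.h>
        #include <cstdio>

        int main(int argc, char** argv) {
            MPI_Init(&argc, &argv);
            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            const int tag = 0;              // assumed message tag
            if (rank == 0) {
                int payload = 7;            // assumed data to exchange
                MPI_Send(&payload, 1, MPI_INT, /*dest=*/1, tag, MPI_COMM_WORLD);
            } else if (rank == 1) {
                int payload;
                MPI_Recv(&payload, 1, MPI_INT, /*source=*/0, tag,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                std::printf("rank 1 received %d\n", payload);
            }
            MPI_Finalize();
            return 0;
        }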
    Kokkos provides advanced in-memory data structures, concurrency, and algorithms to support performance-portable C++ parallel programming across CPUs and GPUs. The Message Passing Interface (MPI) provides the most widely used message-passing model for inter-node communication. Many programmers use Kokkos and MPI together. In this paper, Kokkos is integrated within an MPI implementation for ease of use in applications that use both Kokkos and MPI, without sacrificing performance. For instance, this model allows passing first-class Kokkos objects directly to extended C++-based MPI APIs (the conventional pattern this replaces is sketched after this entry).
    Message Passing Interface
    Interface (matter)
    Citations (1)
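    The paper's extended MPI APIs are not reproduced here; the sketch below instead shows the conventional pattern the integration aims to simplify, in which the programmer extracts a raw pointer from a host-accessible Kokkos::View and passes it to a standard MPI call. The View name and extent are illustrative assumptions.

        // Conventional Kokkos + MPI usage (illustrative): today the programmer
        // passes view.data() to MPI; the paper's extended APIs would accept
        // the View itself.
        #include <mpi.h>
        #include <Kokkos_Core.hpp>
        #include <cstdio>

        int main(int argc, char** argv) {
            MPI_Init(&argc, &argv);
            Kokkos::initialize(argc, argv);
            {
                int rank;
                MPI_Comm_rank(MPI_COMM_WORLD, &rank);

                const int n = 8;  // assumed buffer length
                Kokkos::View<double*, Kokkos::HostSpace> buf("buf", n);

                if (rank == 0) {
                    Kokkos::deep_copy(buf, 1.5);  // fill with an assumed value
                    MPI_Send(buf.data(), n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
                } else if (rank == 1) {
                    MPI_Recv(buf.data(), n, MPI_DOUBLE, 0, 0,
                             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                    std::printf("rank 1 got buf(0) = %f\n", buf(0));
                }
            }   // Views must be destroyed before Kokkos::finalize()
            Kokkos::finalize();
            MPI_Finalize();
            return 0;
        }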
    MPI (Message-Passing Interface) [1] is an ideal parallel programming model whose value has been confirmed on distributed-storage systems. Because MPI is based on message passing and uses message passing to realize communication between nodes, the performance of an MPI parallel program depends heavily on the efficiency of that communication. This paper puts forward a general method for optimizing MPI parallel programs, based on experiments in which the MPI parallel program of DNS was improved twice and its performance advanced (one common optimization of this kind is sketched after this entry).
    Message Passing Interface
    Interface (matter)
    Citations (0)
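    The abstract does not detail the paper's specific optimization of the DNS code; as one common example of the kind of communication optimization it refers to, the sketch below overlaps a halo exchange with computation using nonblocking MPI_Isend/MPI_Irecv instead of blocking calls. The ring neighbor layout and buffer sizes are illustrative assumptions.

        // One common MPI optimization (illustrative): overlap communication
        // with computation via nonblocking calls instead of blocking ones.
        #include <mpi.h>
        #include <vector>

        int main(int argc, char** argv) {
            MPI_Init(&argc, &argv);
            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            // Assumed 1-D ring of ranks exchanging halo buffers.
            int left  = (rank - 1 + size) % size;
            int right = (rank + 1) % size;
            std::vector<double> send_halo(64, rank), recv_halo(64);

            MPI_Request reqs[2];
            MPI_Irecv(recv_halo.data(), 64, MPI_DOUBLE, left,  0,
                      MPI_COMM_WORLD, &reqs[0]);
            MPI_Isend(send_halo.data(), 64, MPI_DOUBLE, right, 0,
                      MPI_COMM_WORLD, &reqs[1]);

            // ... compute on interior points here, hiding communication latency ...

            MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
            // ... now compute on boundary points that need recv_halo ...

            MPI_Finalize();
            return 0;
        }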
    Abstract. The sections in this article are: Introduction; Parallel Programming Model; Message Passing; Message Passing Research and Experimental Systems; The Message Passing Interface Standard - MPI; Lessons Learned.
    Message Passing Interface
    Interface (matter)
    Citations (1)
    In this study, MPI (Message Passing Interface) and a hybrid parallel technique were used for the parallel solution of the pressure equation on distributed-memory systems. Both models employ domain decomposition, and the hybrid technique was applied to the two decompositions that showed good performance. To compare the performance of the two parallel techniques, speedup was measured for various problem sizes using up to 96 threads. Parallel performance was found to be affected by the problem size relative to the cache memory and by the overhead of MPI communication and OpenMP directives. For small problems, parallel performance is poor because the relative share of MPI communication and OpenMP directive overhead grows as the number of threads increases; since the OpenMP directive overhead exceeds the MPI communication overhead there, the pure MPI technique performs better. For large problems, cache memory is used effectively and the relative overhead of MPI communication and OpenMP directives is low, so parallel performance is good; the MPI communication overhead then dominates the OpenMP directive overhead, so the hybrid technique outperforms pure MPI (a hybrid MPI+OpenMP sketch follows this entry).
    Message Passing Interface
    Interface (matter)
    Distributed memory
    SPMD
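    As a hedged illustration of the hybrid technique the abstract compares, the sketch below combines MPI domain decomposition across processes with OpenMP threading inside each process; the loop body and problem sizes are assumptions, not the paper's pressure-equation solver.

        // Hybrid MPI + OpenMP sketch (illustrative): MPI decomposes the domain
        // across processes, OpenMP parallelizes the loop within each process.
        // Compile with e.g. mpicxx -fopenmp hybrid.cpp
        #include <mpi.h>
        #include <omp.h>
        #include <vector>
        #include <cstdio>

        int main(int argc, char** argv) {
            int provided;
            MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            const int local_n = 1 << 20;         // assumed points per process
            std::vector<double> u(local_n, 1.0); // assumed local subdomain data

            double local_sum = 0.0;
            #pragma omp parallel for reduction(+ : local_sum)
            for (int i = 0; i < local_n; ++i) {
                local_sum += u[i] * u[i];        // stand-in for solver work
            }

            double global_sum = 0.0;             // combine across MPI ranks
            MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                       0, MPI_COMM_WORLD);
            if (rank == 0) std::printf("global sum = %f\n", global_sum);

            MPI_Finalize();
            return 0;
        }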
    Many scientific problems can be distributed over a large number of processes to take advantage of low-cost workstations. In a parallel system, a failure on any processor can halt the computation and require restarting all applications. Checkpointing is a simple technique for recovering the failed execution. Message Passing Interface (MPI) is a standard proposed for writing portable message-passing parallel programs. In this paper, we present a checkpointing implementation for MPI programs which is transparent and requires no changes to the application programs. Our implementation combines coordinated, uncoordinated, and message-logging techniques (a simplified checkpoint sketch follows this entry).
    Message Passing Interface
    Workstation
    Interface (matter)
    Citations (19)
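    The paper's system is transparent to the application; that machinery is not reproduced here. The sketch below instead shows a much simpler application-level coordinated checkpoint, where all ranks synchronize and each persists its state, just to make the checkpoint/restart idea concrete. The file-naming scheme, interval, and state variable are assumptions.

        // Simplified application-level coordinated checkpoint (illustrative;
        // the paper's implementation is transparent and far more involved).
        #include <mpi.h>
        #include <cstdio>
        #include <string>

        static void checkpoint(int rank, int step, double state) {
            // Assumed per-rank checkpoint file naming scheme.
            std::string path = "ckpt_rank" + std::to_string(rank) + ".dat";
            if (FILE* f = std::fopen(path.c_str(), "wb")) {
                std::fwrite(&step, sizeof step, 1, f);
                std::fwrite(&state, sizeof state, 1, f);
                std::fclose(f);
            }
        }

        int main(int argc, char** argv) {
            MPI_Init(&argc, &argv);
            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            double state = 0.0;                  // stand-in for application state
            for (int step = 1; step <= 100; ++step) {
                state += rank + step;            // stand-in for computation
                if (step % 25 == 0) {
                    MPI_Barrier(MPI_COMM_WORLD); // coordinate a consistent cut
                    checkpoint(rank, step, state);
                }
            }
            MPI_Finalize();
            return 0;
        }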
    Abstract. The parallel system in this research was built in the form of a cluster. A cluster is a group of computers connected to each other through a fast local area network that can be used as an integrated computing resource. Parallel processing for parallel computing was successfully implemented on the cluster. The programming is based on MPI (Message Passing Interface) and runs on the Linux operating system. MPI is a de facto standard for message-passing programming on parallel computers (a minimal cluster example follows this entry).
    Message Passing Interface
    Computer cluster
    Parallel processing
    Parallel programming model
    Citations (0)
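    To show what a minimal MPI program on such a Linux cluster looks like, here is a short sketch (not from the paper) in which each process reports its rank and the node it runs on; the launch command and hostfile name are illustrative assumptions.

        // Minimal cluster MPI program (illustrative): each process reports
        // which node it landed on. Launch with e.g.
        //   mpirun -np 4 -hostfile nodes ./a.out
        #include <mpi.h>
        #include <cstdio>

        int main(int argc, char** argv) {
            MPI_Init(&argc, &argv);
            int rank, size, len;
            char node[MPI_MAX_PROCESSOR_NAME];
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);
            MPI_Get_processor_name(node, &len);
            std::printf("rank %d of %d on node %s\n", rank, size, node);
            MPI_Finalize();
            return 0;
        }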
    We describe a number of early efforts to make use of the Message-Passing Interface (MPI) standard in applications, based on an informal survey conducted in May-June, 1994. Rather than a definitive statement of all MPI developmental work, this paper addresses the initial successes, progress, and impressions that application developers have had with MPI, according to the responses received. We summarize the important aspects of each survey response, and draw conclusions about the spread of MPI into applications. An understanding of message passing and access to the MPI standard are prerequisites for appreciating this paper. Some background material is provided to ease this requirement.
    Message Passing Interface
    Interface (matter)
    Statement (logic)
    Problem statement
    Citations (10)