The challenging task of analyzing on-chip power (ground) distribution networks with multimillion-node complexity and beyond is key to today's large chip designs. For the first time, we show how to exploit recent massively parallel single-instruction multiple-thread (SIMT)-based graphics processing unit (GPU) platforms to tackle large-scale power grid analysis with promising performance. Several key enablers, including GPU-specific algorithm design, circuit topology transformation, workload partitioning, and performance tuning, are embodied in our GPU-accelerated hybrid multigrid (HMD) algorithm (GpuHMD) and its implementation. We also demonstrate that, using the HMD solver as a preconditioner, the conjugate gradient solver converges much faster to the true solution with good robustness. Extensive experiments on industrial and synthetic benchmarks show that, for DC power grid analysis on one GPU, the proposed simulation engine achieves up to 100× runtime speedup over a state-of-the-art direct solver and more than 50× speedup over a CPU-based multigrid implementation, while on a four-core-four-GPU system a grid with eight million nodes can be solved within about 1 s. The proposed approach scales favorably with circuit complexity, at a rate of about 1 s per two million nodes on a single GPU card.
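To illustrate the "multigrid as preconditioner for conjugate gradient" idea mentioned above, the sketch below runs a generic preconditioned CG loop on a toy resistive-chain conductance system. The `precondition` callback is a hypothetical stand-in for the paper's GPU-accelerated HMD solver (here a plain Jacobi step), so the example stays self-contained; it is not the authors' implementation.

```python
# Minimal sketch of preconditioned conjugate gradient (PCG) for an SPD
# power-grid conductance system G*v = i.  The `precondition` callback is an
# assumed placeholder for the HMD multigrid preconditioner; here it is a
# simple Jacobi (diagonal) step.
import numpy as np

def pcg(G, b, precondition, tol=1e-8, max_iter=1000):
    x = np.zeros_like(b)
    r = b - G @ x
    z = precondition(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Gp = G @ p
        alpha = rz / (p @ Gp)
        x += alpha * p
        r -= alpha * Gp
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = precondition(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy 1D resistive-chain conductance matrix (SPD) standing in for a power grid.
n = 200
G = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
G[0, 0] += 1.0                      # tie to the supply pad keeps G nonsingular
b = np.ones(n)                      # unit current injections

jacobi = lambda r: r / np.diag(G)   # placeholder for the HMD preconditioner
v = pcg(G, b, jacobi)
```

A stronger preconditioner (such as the paper's multigrid cycle) would cut the iteration count far below what the Jacobi placeholder achieves, which is the source of the reported speedups.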
Modern graph neural networks (GNNs) can be sensitive to changes in the input graph structure and node features, potentially resulting in unpredictable behavior and degraded performance. In this work, we introduce SAGMAN, a spectral framework for examining the stability of GNNs. The framework assesses the distance distortions that arise from the nonlinear mappings of GNNs between the input and output manifolds: when two nearby nodes on the input manifold are mapped (through a GNN model) to two distant ones on the output manifold, this implies a large distance distortion and thus poor GNN stability. We propose a distance-preserving graph dimension reduction (GDR) approach that utilizes spectral graph embedding and probabilistic graphical models (PGMs) to create low-dimensional input/output graph-based manifolds for meaningful stability analysis. Our empirical evaluations show that SAGMAN effectively assesses the stability of each node under various edge or feature perturbations, offering a scalable approach for evaluating the stability of GNNs that extends to applications within recommendation systems. Furthermore, we illustrate its utility in downstream tasks, notably in enhancing GNN stability and facilitating adversarial targeted attacks.
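A minimal sketch of the distance-distortion idea follows: embed the input graph with a few Laplacian eigenvectors (a simple spectral embedding), use a toy one-layer GCN-style propagation as a stand-in for a trained GNN, and compute the per-edge ratio of output to input distances. The graph, features, and "GNN" here are illustrative assumptions, not the SAGMAN pipeline itself.

```python
# Sketch of per-node-pair distance distortion between an input spectral
# embedding and a toy GNN-style output embedding.  Large distortion flags
# potentially unstable node pairs.
import numpy as np

rng = np.random.default_rng(0)
n, k = 60, 4
A = (rng.random((n, n)) < 0.08).astype(float)
A = np.triu(A, 1); A = A + A.T                     # random undirected graph
L = np.diag(A.sum(1)) - A                          # graph Laplacian

# Input-manifold coordinates: k nontrivial Laplacian eigenvectors.
evals, evecs = np.linalg.eigh(L)
X_in = evecs[:, 1:k + 1]

# Toy "GNN" output: normalized adjacency times random features and weights.
feat = rng.standard_normal((n, 8))
A_hat = A + np.eye(n)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(1)))
Y_out = np.tanh(D_inv_sqrt @ A_hat @ D_inv_sqrt @ feat @ rng.standard_normal((8, k)))

# Per-edge distortion ||y_u - y_v|| / ||x_u - x_v||.
edges = np.argwhere(np.triu(A, 1) > 0)
din = np.linalg.norm(X_in[edges[:, 0]] - X_in[edges[:, 1]], axis=1)
dout = np.linalg.norm(Y_out[edges[:, 0]] - Y_out[edges[:, 1]], axis=1)
distortion = dout / np.maximum(din, 1e-12)
print("worst distorted edges:", edges[np.argsort(-distortion)[:5]])
```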
Mutation breeding induced by irradiation with highly energetic photons and ion beams is an important method for improving plant varieties, but the mutagenic effects and molecular mechanisms are often not entirely clear. Traditional research has focused on phenotype screening, chromosome aberration tests, and genetic variation analysis of specific genes. Whole-genome sequencing provides a new way to comprehensively identify mutations caused by irradiation with different linear energy transfer (LET). In this study, ten Arabidopsis thaliana M3 lines induced by carbon-ion beams (CIB) and ten M3 lines induced by gamma-rays were re-sequenced on the Illumina HiSeq platform, and the single-base substitutions (SBSs) and small insertions or deletions (indels) were analysed comparatively. The ratio of SBSs to small indels was 2.57:1 for the CIB-induced M3 lines and 1.78:1 for the gamma-ray-induced lines. The ratios of deletions to insertions for carbon ions and gamma-rays were 4.8:1 and 2.8:1, respectively. Single-base indels were more prevalent than indels of 2 bp or longer in both the CIB- and gamma-ray-induced M3 lines. Among the detected SBSs, the ratio of transitions to transversions was 1.01 for carbon-ion irradiation and 1.42 for gamma-rays; both values differ greatly from the 2.41 reported for spontaneous substitutions. This study provides novel data on the molecular characteristics of CIB- and gamma-ray-induced mutations at the genome-wide scale. It also provides valuable clues for explaining the potential mechanisms of plant mutation breeding by irradiation with different LETs.
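For readers unfamiliar with how such ratios are tallied, the sketch below classifies a simplified list of (REF, ALT) variant records and computes the SBS-to-indel and transition-to-transversion ratios. The records are made-up examples; the study itself derived these statistics from Illumina resequencing data.

```python
# Sketch of classifying variants and computing the ratios reported above.
TRANSITIONS = {("A", "G"), ("G", "A"), ("C", "T"), ("T", "C")}

def classify(ref, alt):
    if len(ref) == 1 and len(alt) == 1:
        return "transition" if (ref, alt) in TRANSITIONS else "transversion"
    return "insertion" if len(alt) > len(ref) else "deletion"

# Hypothetical (REF, ALT) records standing in for a real variant call set.
variants = [("A", "G"), ("C", "A"), ("T", "C"), ("AT", "A"), ("G", "GTC")]
counts = {}
for ref, alt in variants:
    kind = classify(ref, alt)
    counts[kind] = counts.get(kind, 0) + 1

sbs = counts.get("transition", 0) + counts.get("transversion", 0)
indels = counts.get("insertion", 0) + counts.get("deletion", 0)
print("SBS:indel =", sbs, ":", indels)
print("Ti/Tv =", counts.get("transition", 0) / max(counts.get("transversion", 0), 1))
```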
While traditional worst-case corner analysis is often too pessimistic for nanometer designs, full-blown statistical circuit analysis requires significant modelling infrastructure. In this study, a design-dependent statistical interconnect corner extraction (SICE) methodology is proposed. SICE achieves a good trade-off between complexity and pessimism by extracting more than one process corner in a statistical, design-dependent sense. Our new approach removes the pessimism incurred in prior work while remaining computationally efficient. The efficiency of SICE comes from the use of parameter dimension reduction techniques. The statistical corners are further compacted by an iterative output clustering method. Numerical results show that SICE achieves up to 260× speedup over the Monte Carlo method.
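The sketch below illustrates the general idea of design-dependent statistical corners: reduce the process-parameter space to the direction that most affects a circuit output and take ±3σ points along it as corners. The linear delay model and parameter counts are assumptions for illustration only, not the SICE algorithm from the paper.

```python
# Generic illustration of extracting design-dependent statistical corners
# via parameter dimension reduction.
import numpy as np

rng = np.random.default_rng(1)
n_params, n_samples = 20, 2000

# Hypothetical delay model: output = sensitivity . parameters + noise.
sensitivity = rng.standard_normal(n_params)
params = rng.standard_normal((n_samples, n_params))   # normalized process variations
delay = params @ sensitivity + 0.05 * rng.standard_normal(n_samples)

# Dimension reduction: keep the single direction most correlated with delay.
direction = params.T @ (delay - delay.mean())
direction /= np.linalg.norm(direction)

# Design-dependent statistical corners at +/-3 sigma along that direction.
sigma = np.std(params @ direction)
corner_fast = -3.0 * sigma * direction
corner_slow = +3.0 * sigma * direction
print("slow-corner delay estimate:", corner_slow @ sensitivity)
```

In the paper, several such corners are extracted and then compacted by iterative output clustering; the single-direction version above only conveys the flavor of the dimension reduction step.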
Graph embedding techniques have been increasingly deployed in a multitude of different applications that involve learning on non-Euclidean data. However, existing graph embedding models either fail to incorporate node attribute information during training or suffer from node attribute noise, which compromises accuracy. Moreover, very few of them scale to large graphs due to their high computational complexity and memory usage. In this paper, we propose GraphZoom, a multi-level framework for improving both the accuracy and scalability of unsupervised graph embedding algorithms. GraphZoom first performs graph fusion to generate a new graph that effectively encodes the topology of the original graph and the node attribute information. This fused graph is then repeatedly coarsened into much smaller graphs by merging nodes with high spectral similarities. GraphZoom allows any existing embedding method to be applied to the coarsened graph, and then progressively refines the embeddings obtained at the coarsest level onto increasingly finer graphs. We have evaluated our approach on a number of popular graph datasets for both transductive and inductive tasks. Our experiments show that GraphZoom can substantially increase the classification accuracy and significantly accelerate the entire graph embedding process by up to 40.8×, compared to state-of-the-art unsupervised embedding methods.
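The coarsen-embed-refine pipeline can be sketched as follows: coarsen a graph with a simple greedy edge matching, embed the coarse graph spectrally, then project the embedding back to the original nodes and smooth it with a few propagation steps. GraphZoom's graph fusion and spectral-similarity coarsening are more involved; this is only an illustrative skeleton under those simplifying assumptions.

```python
# Skeleton of a multi-level embed-then-refine pipeline.
import numpy as np

def greedy_matching(A):
    """Merge endpoints of a maximal set of disjoint edges; return cluster ids."""
    n = A.shape[0]
    cluster = -np.ones(n, dtype=int)
    next_id = 0
    for u in range(n):
        if cluster[u] != -1:
            continue
        cluster[u] = next_id
        for v in np.nonzero(A[u])[0]:
            if cluster[v] == -1:
                cluster[v] = next_id
                break
        next_id += 1
    return cluster

rng = np.random.default_rng(0)
n = 80
A = (rng.random((n, n)) < 0.06).astype(float)
A = np.triu(A, 1); A = A + A.T                          # random undirected graph

cluster = greedy_matching(A)
nc = cluster.max() + 1
P = np.zeros((n, nc)); P[np.arange(n), cluster] = 1.0   # interpolation matrix
A_coarse = P.T @ A @ P                                  # coarse-level graph

# Embed the coarse graph with a few Laplacian eigenvectors.
Lc = np.diag(A_coarse.sum(1)) - A_coarse
_, evecs = np.linalg.eigh(Lc)
E_coarse = evecs[:, 1:9]

# Refinement: map back to fine nodes and smooth along fine-level edges.
E = P @ E_coarse
D_inv = 1.0 / np.maximum(A.sum(1), 1.0)
for _ in range(5):
    E = 0.5 * E + 0.5 * (D_inv[:, None] * (A @ E))
print("fine-level embedding shape:", E.shape)
```

Any off-the-shelf embedding method could replace the spectral step on the coarse graph, which is where the framework's speedup comes from: the expensive embedding runs only on the much smaller coarsened graph.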
Vectorless integrity verification is becoming increasingly critical to the robust design of nanoscale integrated circuits. This article introduces a general vectorless integrity verification framework that allows computing the worst-case voltage drops or temperature (gradient) distributions across the entire chip under a set of local and global workload (power density) constraints. To address the computational challenges introduced by large power grids and three-dimensional mesh-structured thermal grids, we propose a novel spectral approach for highly scalable vectorless verification of large chip designs by leveraging a hierarchy of almost linear-sized spectral sparsifiers of the input grids that well preserve effective resistances between nodes. As a result, the vectorless integrity verification solution obtained on coarse-level problems can effectively help compute the solution of the original problem. Our approach builds on emerging spectral graph theory and graph signal processing techniques and consists of a graph topology sparsification and graph coarsening phase, an edge weight scaling phase, and a solution refinement procedure. Extensive experimental results show that the proposed vectorless verification framework can efficiently and accurately obtain worst-case scenarios even in very large designs.
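A minimal sketch of the underlying vectorless formulation: the voltage drop at node k is adjᵀi with adj = G⁻¹eₖ, and maximizing it under per-node current bounds plus a single global budget is a fractional-knapsack linear program that can be solved greedily. The toy grid size, bounds, and single global constraint below are illustrative assumptions; the paper's framework handles general constraint sets and applies the spectral sparsification hierarchy on top of this.

```python
# Sketch of worst-case voltage-drop verification on a toy power grid.
import numpy as np

n = 200                                                  # toy resistive chain
G = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
G[0, 0] += 1.0                                           # tie to the supply pad

k = n // 2
adjoint = np.linalg.solve(G, np.eye(n)[k])               # sensitivity of v_k to each injection

local_bound = np.full(n, 1e-3)                           # per-node current limits (A)
global_budget = 0.05                                     # total current limit (A)

# Greedy LP solution: spend the budget on the most sensitive nodes first.
order = np.argsort(-adjoint)
i_worst = np.zeros(n)
remaining = global_budget
for node in order:
    take = min(local_bound[node], remaining)
    i_worst[node] = take
    remaining -= take
    if remaining <= 0:
        break

print("worst-case voltage drop at node", k, "=", adjoint @ i_worst)
```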
Because complex, dangerous, and confined underwater areas urgently need to be explored, underwater machines that can substitute for humans in underwater detection tasks are in great demand. Autonomous underwater vehicles (AUVs) have been developed to accomplish resource-exploration tasks in the sea, but the accelerated development of AUVs raises much higher requirements on mobility and maneuverability. Recently, vectorial thrusters have been applied to ships and offshore platforms, where they increase propulsion efficiency and flexibility. In this paper, a novel small-scale AUV with vectorial thrusters is designed, and its hydrodynamic performance, pressure distribution, and resistance under different angles of attack (AOAs) in vertical-plane motion are calculated by computational fluid dynamics (CFD). The numerical simulation suggests the best underwater attitude of the new-type AUV and, beyond that, provides a basic reference for further research on its motion control.