The authors induced hyperlipidemia in normal rats with a diet supplemented with 1.5% cholesterol, 0.25% cholic acid, and 25% beef tallow, then administered KH-305 orally for 8 weeks and measured serum cholesterol, HDL-C, and LDL-C to examine its effect on lipid metabolism. In addition, maximal intracavernosal pressure and endothelium-related NOS expression were measured to examine its effect on promoting and sustaining penile erection. All KH-305-treated groups showed lower total cholesterol and LDL-C and higher HDL-C than the high-fat-diet group. For maximal intracavernosal pressure, the KH-305-treated groups reached the maximum more quickly and attained higher peak values than the high-fat-diet group, with the 300 mg/kg dose being the most effective. Moreover, all KH-305-treated groups showed markedly increased eNOS and nNOS expression compared with the high-fat-diet group. Therefore, KH-305, which lowers serum cholesterol, increases NOS expression, and raises maximal intracavernosal pressure, may be an effective drug for erectile dysfunction induced by hyperlipidemia.
In this paper, we present a scalable, numerically stable, high-performance tridiagonal solver. The solver is based on the SPIKE algorithm for partitioning a large matrix into small independent matrices, which can be solved in parallel. For each small matrix, our solver applies a general 1-by-1 or 2-by-2 diagonal pivoting algorithm, which is also known to be numerically stable. Our paper makes two major contributions. First, our solver is the first numerically stable tridiagonal solver for GPUs. It delivers stable solutions of quality comparable to Intel MKL and Matlab, at speeds comparable to the GPU tridiagonal solvers in existing packages such as CUSPARSE, and it scales to multiple GPUs and CPUs. Second, we present and analyze two key optimization strategies for our solver: a high-throughput data layout transformation for memory efficiency, and a dynamic tiling approach for reducing the memory access footprint caused by branch divergence.
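To make the partitioning idea concrete, the following is a minimal serial NumPy/SciPy sketch of a SPIKE-style tridiagonal solve: the matrix is split into independent diagonal blocks, each block is solved by a stable banded solver (used here as a stand-in for the 1-by-1/2-by-2 diagonal pivoting kernel described in the paper), and a small reduced system couples the block boundaries. Function and parameter names are illustrative; this is not the authors' GPU implementation.

```python
import numpy as np
from scipy.linalg import solve_banded

def spike_tridiag_solve(dl, d, du, f, p=4):
    """Solve T x = f for a tridiagonal T given by its sub-, main-, and
    super-diagonals (dl, d, du) of lengths n-1, n, n-1, using p partitions."""
    n = len(d)
    m = n // p                       # assume n divisible by p for brevity
    x = np.empty(n)
    g, v, w = [], [], []             # per-block D^{-1} f and spike columns

    def block_solve(j, rhs):
        # Stable solve with the j-th diagonal block (stand-in for the
        # 1-by-1 / 2-by-2 diagonal pivoting kernel).
        s = j * m
        ab = np.zeros((3, m))
        ab[0, 1:] = du[s:s + m - 1]   # superdiagonal
        ab[1, :] = d[s:s + m]         # main diagonal
        ab[2, :-1] = dl[s:s + m - 1]  # subdiagonal
        return solve_banded((1, 1), ab, rhs)

    for j in range(p):
        s = j * m
        e_first = np.zeros(m); e_first[0] = 1.0
        e_last = np.zeros(m); e_last[-1] = 1.0
        g.append(block_solve(j, f[s:s + m]))
        # Right spike: coupling to the first unknown of block j+1.
        v.append(block_solve(j, e_last) * (du[s + m - 1] if j < p - 1 else 0.0))
        # Left spike: coupling to the last unknown of block j-1.
        w.append(block_solve(j, e_first) * (dl[s - 1] if j > 0 else 0.0))

    # Reduced system in the 2p boundary unknowns (first/last entry of each block).
    R, rg = np.eye(2 * p), np.empty(2 * p)
    for j in range(p):
        rg[2 * j], rg[2 * j + 1] = g[j][0], g[j][-1]
        if j > 0:
            R[2 * j, 2 * j - 1], R[2 * j + 1, 2 * j - 1] = w[j][0], w[j][-1]
        if j < p - 1:
            R[2 * j, 2 * j + 2], R[2 * j + 1, 2 * j + 2] = v[j][0], v[j][-1]
    y = np.linalg.solve(R, rg)

    # Back-substitute the spikes to recover the interior unknowns of each block.
    for j in range(p):
        xj = g[j].copy()
        if j > 0:
            xj -= w[j] * y[2 * j - 1]    # last unknown of block j-1
        if j < p - 1:
            xj -= v[j] * y[2 * j + 2]    # first unknown of block j+1
        x[j * m:(j + 1) * m] = xj
    return x

# Quick check against a dense solve of a small diagonally dominant system.
n = 16
rng = np.random.default_rng(0)
dl, du = rng.random(n - 1), rng.random(n - 1)
d, f = rng.random(n) + 4.0, rng.random(n)
T = np.diag(d) + np.diag(dl, -1) + np.diag(du, 1)
assert np.allclose(spike_tridiag_solve(dl, d, du, f), np.linalg.solve(T, f))
```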
To assess the health impacts of radon exposure over a lifetime, in the present study, the annual effective dose (AED) and cumulative excess lifetime cancer risk (ELCR-C) were evaluated by considering various indoor microenvironmental exposures based on age-specific time–activity patterns using Monte Carlo simulations. Significant regional variations in indoor radon concentrations across the Republic of Korea were observed, with the highest levels found in schools and single detached houses. Based on the standard annual total of 8760 h spent indoors and outdoors, the AED varied by age group and dwelling type, with the ELCR-C for single detached houses being approximately 1.36 times higher than that for apartments on average. The present study highlights the importance of comprehensive health risk assessments that consider differences across indoor environments and age groups, indicating that limited evaluations of specific sites or areas may distort actual exposure levels.
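For readers unfamiliar with the dose arithmetic behind such assessments, the following is a small Monte Carlo sketch in the spirit of the AED/ELCR calculation, using the commonly cited UNSCEAR equilibrium factor and dose conversion factor and the ICRP 103 risk coefficient. All radon concentrations, occupancy fractions, and parameter values below are illustrative assumptions, not the values or model used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000                          # Monte Carlo iterations

# Hypothetical lognormal indoor/outdoor radon concentrations (Bq/m^3).
radon = {
    "single_detached_house": rng.lognormal(np.log(80), 0.6, N),
    "apartment":             rng.lognormal(np.log(40), 0.5, N),
    "school":                rng.lognormal(np.log(90), 0.6, N),
    "outdoors":              rng.lognormal(np.log(15), 0.3, N),
}
# Hypothetical age-specific time-activity pattern: fraction of the standard
# 8760 h/year spent in each microenvironment (fractions sum to 1).
occupancy = {"single_detached_house": 0.55, "apartment": 0.0,
             "school": 0.25, "outdoors": 0.20}

F = 0.4        # equilibrium factor (a single value is used here for simplicity)
DCF = 9e-6     # dose conversion factor, mSv per (Bq h m^-3), UNSCEAR
HOURS = 8760.0 # hours per year
LIFE = 70.0    # assumed exposure duration, years
RISK = 0.055   # ICRP 103 detriment-adjusted risk coefficient, per Sv

# Annual effective dose (mSv/y) summed over microenvironments, then lifetime risk.
aed = sum(radon[env] * F * occupancy[env] * HOURS * DCF for env in occupancy)
elcr = aed * 1e-3 * LIFE * RISK      # mSv -> Sv, times years, times risk per Sv

print(f"AED mean = {aed.mean():.2f} mSv/y (95th pct = {np.percentile(aed, 95):.2f})")
print(f"ELCR mean = {elcr.mean():.2e}")
```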
Abstract We established a hypothetical acrylic acid leak accident scenario, conducted a health risk assessment for local residents, and compared an actual accident case with the hypothetical scenario. The exposed subjects were divided into four age groups, and a non-carcinogenic risk assessment was conducted for inhalation and soil ingestion. In the hypothetical scenario, 40 tons of acrylic acid leaked in Ulsan for 1 h from 12:00 am on January 1, 2017, and in the actual accident case, 3 L of acrylic acid leaked in Hwaseong, Gyeonggi Province, for 1 h from 11:00 am on March 5, 2020. The environmental concentration of acrylic acid was calculated using the dynamic multimedia environmental model. The non-carcinogenic assessment of the hypothetical scenario showed that the hazard index exceeded 1 across all age groups, suggesting that a health risk is likely to occur from inhalation exposure to acrylic acid released in a chemical accident. Under the hypothetical scenario, the acute hazard value also exceeded 1 until 2 h after the accident, again indicating the likelihood of a health risk. Thus, we propose a methodology that can assess the changing concentrations of a hazardous chemical leaked in a chemical accident with respect to time, place, and the chemical's behavior in different environmental media, as well as the health risk posed to local residents in the affected area by exposure to the chemical.
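As background for the non-carcinogenic assessment mentioned above, the sketch below shows the usual hazard-quotient/hazard-index arithmetic (HQ = exposure divided by a reference value; HI = sum of HQs over pathways). The exposure factors, concentrations, and reference values are illustrative placeholders, not the study's inputs.

```python
# Illustrative hazard-index calculation for a hypothetical child receptor.
# All numbers below are placeholders chosen only to demonstrate the formulas.

def hq_inhalation(c_air_mg_m3, et_h_day, ef_d_yr, ed_yr, rfc_mg_m3, at_h):
    """Hazard quotient for inhalation: exposure concentration / reference concentration."""
    ec = (c_air_mg_m3 * et_h_day * ef_d_yr * ed_yr) / at_h
    return ec / rfc_mg_m3

def hq_soil_ingestion(c_soil_mg_kg, ingr_mg_day, ef_d_yr, ed_yr,
                      bw_kg, at_d, rfd_mg_kg_day):
    """Hazard quotient for incidental soil ingestion: average daily dose / RfD."""
    add = (c_soil_mg_kg * ingr_mg_day * 1e-6 * ef_d_yr * ed_yr) / (bw_kg * at_d)
    return add / rfd_mg_kg_day

hq_inh = hq_inhalation(c_air_mg_m3=0.002, et_h_day=24, ef_d_yr=350, ed_yr=6,
                       rfc_mg_m3=0.001, at_h=6 * 365 * 24)
hq_soil = hq_soil_ingestion(c_soil_mg_kg=3.0, ingr_mg_day=200, ef_d_yr=350,
                            ed_yr=6, bw_kg=15, at_d=6 * 365,
                            rfd_mg_kg_day=0.5)
hi = hq_inh + hq_soil
print(f"HQ(inhalation)={hq_inh:.2f}, HQ(soil)={hq_soil:.5f}, HI={hi:.2f}",
      "-> potential concern" if hi > 1 else "-> below level of concern")
```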
The rising pressure to simultaneously improve performance and reduce power is driving more diversity into all aspects of computing devices. An algorithm that is well-matched to the target hardware can run multiple times faster and more energy-efficiently than one that is not. The problem is complicated by the fact that a program's input also affects the appropriate choice of algorithm. As a result, software developers have been faced with the challenge of determining the appropriate algorithm for each potential combination of target device and data. This paper presents DySel, a novel runtime system that automates this determination for kernel-based data-parallel programming models such as OpenCL, CUDA, OpenACC, and C++AMP. These programming models cover many applications that demand high performance in mobile, cloud, and high-performance computing. DySel systematically deploys candidate kernels on a small portion of the actual data to determine which achieves the best performance for the hardware-data combination. This test deployment, referred to as micro-profiling, contributes to the final execution result and incurs less than 8% overhead in the worst observed case compared to an oracle. We show four major use cases in which DySel provides significantly more consistent performance without tedious effort from the developer.
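The micro-profiling idea can be illustrated with a short sketch: each candidate implementation is test-deployed on a small disjoint slice of the real input, its output is kept as part of the final result, and the fastest candidate then processes the remainder. The Python code below is a conceptual illustration only, not DySel's actual API or scheduling logic.

```python
import time
import numpy as np

def select_and_run(candidates, data, probe_fraction=0.01):
    """Micro-profile each candidate on a small slice of the data, then run the
    fastest candidate on the rest; the profiling output is reused in the result."""
    probe = max(1, int(len(data) * probe_fraction))
    results, timings = [], []
    for i, kernel in enumerate(candidates):
        chunk = data[i * probe:(i + 1) * probe]      # disjoint probe chunk
        t0 = time.perf_counter()
        results.append(kernel(chunk))
        timings.append((time.perf_counter() - t0) / max(len(chunk), 1))
    best = candidates[int(np.argmin(timings))]       # best per-element time wins
    results.append(best(data[len(candidates) * probe:]))
    return np.concatenate(results)

# Two interchangeable candidate kernels computing the same result.
candidates = [
    lambda x: np.sqrt(x) * 2.0,                        # vectorized variant
    lambda x: np.array([2.0 * v ** 0.5 for v in x]),   # scalar-loop variant
]
out = select_and_run(candidates, np.random.rand(1_000_000))
```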
Objectives This study aimed to examine the effect of an ecological empathy program on the formation of an ecological perspective. Methods A total of 209 elementary school students in the 3rd to 5th grades participated. The experimental group consisted of one 3rd-grade class and was compared with control groups drawn from the 3rd to 5th grades. The program developed in this study comprised eight sessions and was structured around forming empathy through scientific knowledge, experience, and moral judgment. Pre- and post-test responses were classified into ecological, species-oriented, and family-oriented perspectives, and the McNemar test was conducted to assess the educational effect. Results Students in the control classes did not shift naturally to an ecological perspective through maturation alone, regardless of grade. Among the experimental-class students who completed the program, statistically significant changes were observed in empathy for situations in which wolves harm villagers and in understanding why pests such as mosquitoes and poisonous snakes must exist in the ecosystem. However, students had difficulty forming an ecological perspective when asked to empathize in situations involving harm to their own families. In addition, it was confirmed that elementary school students find it difficult to develop an ecological perspective from the ecosystem-related unit of the 5th-grade science curriculum alone. Conclusions Based on these findings, it is necessary to develop ecological empathy programs that give elementary school students opportunities to form and deepen an ecological perspective.
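Since the analysis relies on the McNemar test for paired pre/post categorical responses, a small worked example may help; the 2×2 table below is made-up data, not the study's results.

```python
# McNemar test on a paired pre/post 2x2 table (rows: pre-test, columns: post-test;
# categories: ecological vs. non-ecological perspective). Example data only.
from scipy.stats import chi2

table = [[6, 2],      # [both ecological, pre eco / post non-eco]
         [14, 7]]     # [pre non-eco / post eco, both non-ecological]

b, c = table[0][1], table[1][0]           # the two discordant cells
stat = (abs(b - c) - 1) ** 2 / (b + c)    # McNemar chi-square with continuity correction
p_value = chi2.sf(stat, df=1)
print(f"McNemar chi2 = {stat:.2f}, p = {p_value:.4f}")
# A small p-value indicates the pre-to-post shift toward the ecological
# perspective is unlikely to be due to chance.
```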
With heterogeneous computing on the rise, executing programs efficiently on different devices from a single source code has become increasingly important. OpenCL, with its bulk-synchronous programming model, has been proposed as a framework for writing such performance-portable programs. The execution order of work-items in a program is unconstrained except at barrier synchronization events, giving an implementation some freedom when scheduling work-items between synchronization points. Many OpenCL (and CUDA) compilers have been designed to target multicore CPU architectures. However, work-item scheduling in prior work has focused primarily on correctness and vectorization. To the best of our knowledge, no existing implementation considers the impact of work-item scheduling on data locality. We propose an OpenCL compiler that performs data-locality-centric work-item scheduling. By analyzing the memory addresses accessed in loops within a kernel, our technique can make better decisions on how to schedule work-items to construct better memory access patterns, thereby improving performance. Our approach achieves geomean speedups of 3.32× over AMD's and 1.71× over Intel's implementations on the Parboil and Rodinia benchmarks.
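The effect of work-item scheduling on locality can be illustrated with a toy simulation: each simulated work-item walks its own contiguous row of an array, and the combined address streams of two schedules are fed through a tiny LRU cache model. The numbers and cache model below are illustrative only and do not reflect the compiler's actual analysis; they merely show why the same kernel can produce very different memory behavior under different schedules.

```python
from collections import OrderedDict

N, ITERS = 4096, 64          # work-items in a group, loop trip count per work-item
LINE, LINES = 64, 512        # 64-byte cache lines, 32 KiB cache (illustrative)

def misses(addresses):
    """Count misses of an address stream in a small fully associative LRU cache."""
    cache, miss = OrderedDict(), 0
    for addr in addresses:
        line = addr // LINE
        if line in cache:
            cache.move_to_end(line)
        else:
            miss += 1
            cache[line] = True
            if len(cache) > LINES:
                cache.popitem(last=False)   # evict the least recently used line
    return miss

# Each work-item i touches A[i*ITERS + j] (4-byte elements) for j = 0..ITERS-1,
# i.e., it walks its own contiguous row of the array.
def depth_first():    # run each work-item to completion before starting the next
    return [4 * (i * ITERS + j) for i in range(N) for j in range(ITERS)]

def breadth_first():  # interleave all work-items one loop iteration at a time
    return [4 * (i * ITERS + j) for j in range(ITERS) for i in range(N)]

print("depth-first misses:  ", misses(depth_first()))    # sequential stream, few misses
print("breadth-first misses:", misses(breadth_first()))  # strided stream, mostly misses
```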
An increasing portion of the top supercomputers in the world, including Blue Waters, have heterogeneous CPU-GPU computational units. As we move towards exascale, we can expect even more pervasive deployment of heterogeneous computational units. While a handful of science teams can already use heterogeneous computational units in their production applications, there is still significant room for growth in their use. This paper presents the current state and projected path for transitioning software into this new paradigm. We first summarize the currently practical languages such as OpenCL, OpenACC, and C++AMP, in increasing levels of productivity, highlighting their recent advancements in supporting performance portability and maintainability. We then give a brief overview of emerging programming systems such as TANGRAM and Triolet that are designed to further enhance developer productivity for heterogeneous computing.