Coronary artery calcium score (CACS) is a reliable predictor of future cardiovascular disease risk. Although deep learning studies using computed tomography (CT) images to predict CACS have been reported, no study has assessed the feasibility of machine learning (ML) algorithms for predicting CACS from clinical variables in a healthy general population. Therefore, we aimed to assess whether ML algorithms other than binary logistic regression (BLR) could predict high CACS in a healthy population using general health examination data. This retrospective observational study included participants who had regular health screening including coronary CT angiography. High CACS was defined as an Agatston score ≥ 100. Univariable and multivariable BLR were performed to assess predictors of high CACS in the entire dataset. For ML prediction of high CACS, the dataset was randomly divided into training and test datasets at a 7:3 ratio. BLR, CatBoost, and XGBoost algorithms with 5-fold cross-validation and a grid search technique were used to find the best-performing classifier. The performance of each ML algorithm was compared using the area under the receiver operating characteristic curve (AUROC). A total of 2133 participants were included in the final analysis. Mean age was 55.4 ± 11.3 years, and 1483 participants (69.5%) were male. In multivariable BLR analysis, age (odds ratio [OR], 1.12; 95% confidence interval [CI], 1.10-1.15, p < 0.001), male sex (OR, 2.91; 95% CI, 1.57-5.38, p < 0.001), systolic blood pressure (OR, 1.02; 95% CI, 1.00-1.03, p = 0.019), and low-density lipoprotein cholesterol (OR, 1.00; 95% CI, 0.99-1.00, p = 0.047) were significant predictors of high CACS. XGBoost achieved the best performance in predicting high CACS with an AUROC of 0.823, followed by CatBoost (0.750) and BLR (0.585). The difference in AUROC between XGBoost and BLR was statistically significant (p for AUROC comparison < 0.001). The XGBoost algorithm was a more reliable predictor of high CACS in healthy participants than BLR. ML algorithms may be useful for predicting CACS from laboratory data alone in healthy participants.
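To make the evaluation pipeline concrete, here is a minimal Python sketch of the workflow the abstract describes: a 7:3 train/test split, 5-fold cross-validated grid search, and AUROC comparison on the held-out set. The synthetic data, feature count, and hyperparameter grids are illustrative stand-ins; the study's actual clinical variables and search grids are not reproduced here.

```python
# Minimal sketch of the abstract's pipeline (7:3 split, 5-fold CV grid
# search, test-set AUROC). Synthetic data stands in for the clinical
# dataset; grids and seeds are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2133, n_features=10, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

models = {
    "BLR": GridSearchCV(LogisticRegression(max_iter=1000),
                        {"C": [0.01, 0.1, 1, 10]}, cv=5, scoring="roc_auc"),
    "XGBoost": GridSearchCV(XGBClassifier(eval_metric="logloss"),
                            {"max_depth": [3, 5], "n_estimators": [100, 300]},
                            cv=5, scoring="roc_auc"),
}
for name, gs in models.items():
    gs.fit(X_tr, y_tr)
    auroc = roc_auc_score(y_te, gs.predict_proba(X_te)[:, 1])
    print(f"{name}: test AUROC = {auroc:.3f}")
```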
Supply chain management (SCM) practitioners at inventory sites are often required to predict future product sales in order to meet customer demand and reduce inventory costs simultaneously. Although a variety of forecasting methods have been developed, many of them are not used in practice for various reasons, such as insufficient available sales information and overly sophisticated methods. In this paper, we provide a new forecasting scheme for evaluating long-term prediction performance in SCM. Three well-known forecasting methods for time series data are considered: moving average (MA), autoregressive integrated MA (ARIMA), and smoothing spline. We also focus on two representative sales patterns, one with and one without a growth pattern. By applying the proposed scheme to various simulated and real datasets, this research aims to provide SCM practitioners with a general guideline for time series sales forecasting, so that they can easily decide which prediction performance measures and which forecasting method to consider.
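As an illustration of the three methods being compared, the following Python sketch fits MA, ARIMA, and smoothing-spline forecasts to a simulated sales series with a growth pattern and scores the long-term forecasts by RMSE. The window size, ARIMA order, smoothing factor, and error measure are illustrative choices, not the paper's settings.

```python
# Sketch of the three forecasting methods on a simulated growth-pattern
# sales series; hyperparameters are illustrative assumptions.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
t = np.arange(120)
sales = 50 + 0.8 * t + rng.normal(0, 5, t.size)   # linear trend + noise
train, test = sales[:100], sales[100:]

# Moving average: forecast with the mean of the last w observations.
w = 12
ma_fcst = np.full(test.size, train[-w:].mean())

# ARIMA(1, 1, 1): the differencing term handles the growth pattern.
arima_fcst = ARIMA(train, order=(1, 1, 1)).fit().forecast(test.size)

# Smoothing spline fit on the training index, extrapolated forward.
spline = UnivariateSpline(np.arange(train.size), train, s=len(train) * 25)
spl_fcst = spline(np.arange(train.size, sales.size))

for name, f in [("MA", ma_fcst), ("ARIMA", arima_fcst), ("Spline", spl_fcst)]:
    rmse = np.sqrt(np.mean((test - f) ** 2))
    print(f"{name}: long-term RMSE = {rmse:.2f}")
```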
Fashion-related products account for a large share of online shopping categories, but as the number of items available online increases explosively, it becomes ever more difficult for users to search for and find products matching their taste and needs. Personalized item recommendation both reduces users' search effort and expands sales opportunities for sellers. However, experimental studies and research on fashion item recommendation for online shopping users are scarce. In this paper, we propose a novel recommendation framework suited to online apparel items. To overcome the rating sparsity of online apparel datasets, we derive implicit ratings from user log data and generate predicted ratings for item clusters by user-based collaborative filtering. These ratings are then combined with a network constructed from item click trends, over which a random walk produces personalized recommendations. An empirical evaluation on a large-scale real-world dataset obtained from an apparel retailer demonstrates the effectiveness of our method.
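A compact Python sketch of the two ingredients named above: user-based collaborative filtering over an implicit rating matrix, and a random walk with restart over an item click-transition graph seeded by the CF scores. The toy matrices and the restart probability are illustrative; the paper's item-clustering step and exact rating derivation are omitted.

```python
# Toy sketch: user-based CF + random walk with restart over an item
# click graph. All matrices and parameters are illustrative assumptions.
import numpy as np

# Implicit rating matrix R (users x items), e.g. derived from click logs.
R = np.array([[3., 0., 1., 0.],
              [2., 1., 0., 1.],
              [0., 2., 4., 0.]])

# User-based CF: score items via cosine-similar users.
norms = np.linalg.norm(R, axis=1, keepdims=True)
sim = (R / norms) @ (R / norms).T                     # user-user similarity
cf_scores = sim @ R / sim.sum(axis=1, keepdims=True)  # predicted ratings

# Item click-trend graph as a column-stochastic transition matrix.
A = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 1.],
              [1., 1., 0., 0.],
              [0., 1., 0., 0.]])
P = A / A.sum(axis=0, keepdims=True)

def random_walk_with_restart(P, restart, alpha=0.15, iters=50):
    """Personalized PageRank seeded by a user's CF preference vector."""
    r = restart / restart.sum()
    pi = r.copy()
    for _ in range(iters):
        pi = (1 - alpha) * P @ pi + alpha * r
    return pi

user = 0
scores = random_walk_with_restart(P, cf_scores[user])
print("recommendation order:", np.argsort(-scores))
```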
This study analyzes suspended-load concentration by measuring the vertical sediment distribution during rainfall with the ASM (Argus Surface Meter)-IV at upstream and downstream channel reaches of a small river. The experimental watershed is a small river basin in the drainage area at Walha in Yunkee-Gun, Chungnam Province. The measured suspended-load concentration data consist of two groups, 2,145 records collected over 1 h 11 min 30 s and 1,216 records collected over 40 min 32 s, both sampled at 2-second intervals in the study reaches. To analyze the vertical concentration distribution, 16 measuring-time data sets were selected at random from these data. The measured and calculated Rouse numbers averaged 0.01129 and 0.00436, respectively, in the upstream reach, and 0.06521 and 0.05795, respectively, in the downstream reach. These differences indicate that, compared with the downstream reach, the measured Rouse numbers show smaller errors in the upstream reach, whereas the discrepancy between measured and calculated Rouse numbers is larger in the upstream reach. Part of this discrepancy appears to stem from estimation errors introduced when the calculated Rouse number was computed from the fall velocity at high water temperature using the empirical kinematic viscosity equation derived in this study.
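For reference, a minimal Python sketch of the calculation chain the abstract describes: a temperature-dependent empirical kinematic viscosity, a Stokes-law fall velocity, and the resulting Rouse number. The viscosity fit is one common empirical formula (not necessarily the one derived in the study), and the grain size, shear velocity, and water temperature are illustrative values, not the study's data.

```python
# Sketch of the Rouse-number chain: empirical viscosity -> Stokes fall
# velocity -> Rouse number. All input values are illustrative assumptions.
KAPPA = 0.41          # von Karman constant
G = 9.81              # gravitational acceleration (m/s^2)
SG = 2.65             # specific gravity of quartz sediment

def kinematic_viscosity(temp_c):
    """One common empirical fit for water viscosity (m^2/s) vs. temp (deg C)."""
    return 1.79e-6 / (1.0 + 0.03368 * temp_c + 0.000221 * temp_c ** 2)

def stokes_fall_velocity(d, temp_c):
    """Stokes' law fall velocity; valid for fine grains (Re < ~1)."""
    nu = kinematic_viscosity(temp_c)
    return (SG - 1.0) * G * d ** 2 / (18.0 * nu)

def rouse_number(w_s, u_star, beta=1.0):
    """Rouse number P = w_s / (beta * kappa * u_*)."""
    return w_s / (beta * KAPPA * u_star)

d = 30e-6                                     # 30-micron silt (illustrative)
w_s = stokes_fall_velocity(d, temp_c=25.0)    # high water temperature
print(f"w_s = {w_s:.6f} m/s, P = {rouse_number(w_s, u_star=0.05):.5f}")
```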
In this study, to characterize the behavior and dispersion of pollutants entering a river, pollutant dispersion was measured in an actual river using an RI (Radioactive Isotope) tracer, and the results were compared with numerical model predictions. The study reach extended about 2 km downstream from the measuring points of Namdae Stream near the confluence at Yongdam Dam in the upper Geum River. RMA-2 (Resource Modeling Associates-2) and RMA-4 were used to model the river flow and pollutant dispersion, respectively. The field experiment with the RI tracer was conducted in the same reach as the modeling; measuring stations were spaced about 1 km apart, with small adjustments for site conditions, and concentration data were recorded at 1-second intervals with NaI detectors. The measurements were compared with the numerical results in terms of dispersion range and the concentration distributions obtained under varying dispersion coefficients, which strongly influence dispersion in natural rivers.
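For context, RMA-4 computes depth-averaged pollutant transport; a hedged sketch of the general form of the advection-dispersion equation it solves is given below. The notation is standard rather than quoted from the study, and source/decay terms may differ in the actual model setup.

```latex
% Depth-averaged advection-dispersion equation, general form (sketch):
% c: depth-averaged concentration; u, v: depth-averaged velocities
% (here supplied by RMA-2); h: flow depth; D_x, D_y: dispersion
% coefficients; k: first-order decay rate; sigma: source term.
\[
\frac{\partial c}{\partial t}
+ u \frac{\partial c}{\partial x}
+ v \frac{\partial c}{\partial y}
= \frac{1}{h}\frac{\partial}{\partial x}\!\left( h D_x \frac{\partial c}{\partial x} \right)
+ \frac{1}{h}\frac{\partial}{\partial y}\!\left( h D_y \frac{\partial c}{\partial y} \right)
- k c + \sigma
\]
```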
Stringent global regulations aim to reduce nitrogen dioxide (NO2) emissions from maritime shipping. However, the lack of a global monitoring system makes compliance verification challenging. To address this issue, we propose a systematic approach to monitor shipping emissions using unsupervised clustering techniques on spatio-temporal georeferenced data, specifically NO2 measurements obtained from the TROPOspheric Monitoring Instrument (TROPOMI) on board the Copernicus Sentinel-5 Precursor satellite. Our method involves partitioning spatio-temporally resolved measurements based on the similarity of NO2 column levels. We demonstrate the reproducibility of our approach through rigorous testing and validation using data collected from multiple regions and time periods. Our approach improves the spatial correlation coefficients between NO2 column clusters and shipping traffic frequency. Additionally, we identify a temporal correlation between NO2 column levels along shipping routes and the global container throughput index. We expect that our approach may serve as a prototype for a tool to identify anthropogenic maritime emissions, distinguishing them from background sources.
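A minimal Python sketch of the core idea, unsupervised clustering of spatio-temporally resolved NO2 columns, is given below. A random grid with an elevated synthetic "shipping lane" stands in for TROPOMI data; the feature set, scaling, and number of clusters are illustrative choices, not the paper's configuration.

```python
# Sketch: cluster spatio-temporal NO2 samples by column level and
# location. Synthetic data stands in for TROPOMI retrievals.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 5000
lon = rng.uniform(0, 10, n)                  # degrees east (illustrative)
lat = rng.uniform(30, 40, n)                 # degrees north
day = rng.integers(0, 90, n)                 # day within observation window
# Synthetic NO2 column with an elevated band along lat = 35 (a "lane").
no2 = 40 + 30 * np.exp(-((lat - 35.0) ** 2) / 0.5) + rng.normal(0, 5, n)

X = StandardScaler().fit_transform(np.column_stack([lon, lat, day, no2]))
labels = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(X)

# Inspect mean NO2 per cluster; an elevated cluster should trace the lane.
for k in range(4):
    mask = labels == k
    print(f"cluster {k}: mean NO2 = {no2[mask].mean():.1f} (n={mask.sum()})")
```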
The authors have made a valuable contribution to the hydraulics of alluvial rivers by reconsidering a former approach with an updated database. Nevertheless, their results as stated in the paper are dimensionally incorrect. Relations are given among the parameters river width W, average flow depth h, mean velocity V, discharge Q, channel slope S, median bed-material grain size d50, and the Shields parameter. The authors' Table 2 compares the results of Lee and Julien (L-J) with those of Julien and Wargadalam (J-W). It is notable that some parameters have practically no effect in these relations, such as d50 relative to W, h, and V in the L-J relations, whereas the d50 effect appears not to be negligible in the J-W formulation. Will the authors discuss this fact? Building on the L-J formulation, it is of course interesting to consider an average flow velocity formula. Using the authors' Eqs. (2a) and (3a) and eliminating the discharge Q gives

$V = 7.70\, S^{0.277}\, h^{0.589}\, d_{50}^{0.0217}$ (1)
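A quick numerical evaluation of the reconstructed Eq. (1) illustrates the discussion's point that the d50 exponent of 0.0217 makes grain size practically irrelevant in the L-J relation. The input values below are illustrative, and units follow the authors' relations, which are not restated here.

```python
# Vary d50 tenfold in Eq. (1); V changes by only 10**0.0217 ~ 5%.
S, h = 0.001, 2.0                      # illustrative slope and depth
for d50 in (0.2e-3, 2.0e-3):           # tenfold range of grain size
    V = 7.70 * S ** 0.277 * h ** 0.589 * d50 ** 0.0217
    print(f"d50 = {d50:.4f}: V = {V:.3f}")
```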
This paper introduces practical guidelines for function point analysis (FPA) in the Republic of Korea. FPA, a method proposed by an international organization, has been adopted by the Korean government as the standard method for measuring the functional size of software. We present the procedure for counting function points during the software implementation phase. The functional size and development cost of software can then be estimated from the resulting function point count.
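To illustrate the counting step, here is a minimal Python sketch of an unadjusted function point count using the standard IFPUG component types with average-complexity weights. Whether the Korean practical guidelines use these exact weights is an assumption here, and the component counts are illustrative.

```python
# Sketch of an unadjusted function point (UFP) count. Weights are the
# IFPUG average-complexity values; the component counts are made up.
WEIGHTS = {
    "EI": 4,    # external inputs
    "EO": 5,    # external outputs
    "EQ": 4,    # external inquiries
    "ILF": 10,  # internal logical files
    "EIF": 7,   # external interface files
}

def unadjusted_fp(counts):
    """Sum each component count multiplied by its complexity weight."""
    return sum(WEIGHTS[t] * n for t, n in counts.items())

counts = {"EI": 12, "EO": 8, "EQ": 5, "ILF": 6, "EIF": 2}
print(f"Unadjusted FP = {unadjusted_fp(counts)}")
# Development cost can then be estimated as FP times a cost-per-FP rate.
```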