This paper presents a study of the major risk factors that lead to preterm delivery in women. Preterm birth is the leading cause of perinatal morbidity and mortality worldwide. For the prediction of preterm delivery, inputs such as maternal height, gravida (number of pregnancies) and para (number of pregnancies carried past the minimum gestational age) are used. The prediction model is trained using soft computing techniques, namely softmax regression implemented with neural networks and a gradient descent optimizer. The prediction success rate obtained is 89.99%, with an average stepwise cost of 0.52. Hence, this model proves to be a reliable predictor for identifying women at high risk of preterm delivery, providing sufficient time to plan the required antenatal and clinical interventions during pregnancy.
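As a minimal sketch of the technique named above, softmax regression trained by gradient descent, the following pure-Python example fits a two-class model on hypothetical maternal records (height in metres, gravida, para). The data, learning rate and epoch count are illustrative assumptions, not the paper's dataset or settings.

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]  # shift by max for numerical stability
    s = sum(e)
    return [v / s for v in e]

def train(X, y, n_classes, lr=0.1, epochs=500):
    """Softmax regression fitted by stochastic gradient descent on cross-entropy."""
    n_feat = len(X[0])
    W = [[0.0] * n_feat for _ in range(n_classes)]
    b = [0.0] * n_classes
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = softmax([sum(w * f for w, f in zip(W[k], xi)) + b[k]
                         for k in range(n_classes)])
            for k in range(n_classes):
                g = p[k] - (1.0 if k == yi else 0.0)  # dL/dz_k for cross-entropy
                b[k] -= lr * g
                for j in range(n_feat):
                    W[k][j] -= lr * g * xi[j]
    return W, b

def predict(W, b, xi):
    scores = [sum(w * f for w, f in zip(W[k], xi)) + b[k] for k in range(len(W))]
    return max(range(len(scores)), key=scores.__getitem__)

# Hypothetical records: [maternal height (m), gravida, para]; label 1 = preterm.
X = [[1.50, 3, 1], [1.48, 4, 2], [1.52, 3, 2],
     [1.70, 1, 1], [1.72, 2, 2], [1.68, 1, 0]]
y = [1, 1, 1, 0, 0, 0]
```

The per-sample update `p[k] - 1{k = y}` is the exact gradient of the cross-entropy loss with respect to the class-k logit, which is what makes plain gradient descent sufficient here.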
When a vehicle under warranty comes in for service, the technician identifies the potential fault and carries out the corresponding service. From the raw customer complaint information, the complaint codes are identified; allocating a qualified technician involves considerable man-hours, and the technician's service carries a cost per hour. Around 1059 vehicles under warranty were studied, from the customer complaint through to the warranty cost borne by the manufacturer. In this paper, we first present the results of classifying the complaint code master into several classes using K-means cluster analysis; subsequently, cluster analysis for a specific component, namely the water pump assembly, was carried out. An analysis of the cost of warranty claims to the automobile manufacturer is also presented.
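The K-means step above can be sketched as follows. The feature encoding of a complaint record (here labour hours and repair cost, in thousands) is a made-up illustration, not the paper's complaint code master; initialisation from the first k points keeps the sketch deterministic.

```python
def dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=50):
    """Lloyd's K-means with deterministic initialisation from the first k points."""
    centers = [tuple(p) for p in points[:k]]
    assign = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest centre for each point
        assign = [min(range(k), key=lambda c: dist2(p, centers[c])) for p in points]
        # update step: each centre becomes the mean of its members
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centers[c] = tuple(sum(col) / len(members) for col in zip(*members))
    return centers, assign

# Hypothetical complaint records as (labour hours, repair cost in thousands).
claims = [(1, 1), (1, 2), (2, 1), (10, 10), (10, 11), (11, 10)]
```

With two well-separated groups of records, the two recovered centres land near the group means, so cluster labels can then be read off per complaint code.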
In this paper we analyze a large volume of warranty data to segregate fraudulent warranty claims using pattern recognition and clustering methodology. A recent survey of the automotive industry shows that up to 10% of warranty costs are related to warranty claims fraud, costing manufacturers several billion dollars. Most automotive companies suspect and are aware of warranty fraud, but are unsure of its extent and of ways to eliminate it. The existing methods to detect warranty fraud are complex and expensive because they deal with inaccurate and vague data, causing manufacturers to bear excessive costs. We propose a model that finds anomalies in warranty data, together with component failure data and patterns, based on historic warranty claims data for a particular region and a specific component, since the data are of high volume. We isolate the factors that indicate a claim with a high probability of fraudulence, such as the failure date and claim date, the mode of failure, and so on. In addition, we discover suspect claims that have the greatest adjustment potential for further review in the claims process. We integrate the analysis with claims processing, reports and business rules, along with the reported mode of failure, while minimizing changes to existing systems, since the analysis is carried out by identifying patterns. Since we work with factual data, there is more room to identify the actual cost involved in a warranty claim.
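One simple way to flag suspect claims from the failure-date/claim-date pattern mentioned above is a z-score rule on the failure-to-claim gap. This is an illustrative assumption, not the paper's actual model: the rule, the threshold and the toy claim records are all hypothetical.

```python
from datetime import date
from statistics import mean, pstdev

def flag_suspect_claims(claims, z_thresh=2.0):
    """claims: list of (claim_id, failure_date, claim_date).
    Flags claims filed before the failure date, and claims whose
    failure-to-claim gap is a statistical outlier (hypothetical rule)."""
    gaps = [(c_dt - f_dt).days for _, f_dt, c_dt in claims]
    mu, sigma = mean(gaps), pstdev(gaps)
    flagged = []
    for (cid, _, _), gap in zip(claims, gaps):
        if gap < 0:                                   # claim predates failure
            flagged.append(cid)
        elif sigma and abs(gap - mu) / sigma > z_thresh:
            flagged.append(cid)                       # anomalous delay
    return flagged

# Hypothetical claims: four normal, one filed before failure, one long-delayed.
claims = [
    ("C1", date(2023, 1, 1), date(2023, 1, 6)),
    ("C2", date(2023, 1, 1), date(2023, 1, 7)),
    ("C3", date(2023, 1, 1), date(2023, 1, 8)),
    ("C4", date(2023, 1, 1), date(2023, 1, 9)),
    ("C5", date(2023, 1, 10), date(2023, 1, 7)),
    ("C6", date(2023, 1, 1), date(2023, 7, 20)),
]
```

A production pipeline would combine such per-feature flags with the clustering and pattern-matching described above rather than rely on one threshold.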
This paper proposes a hybrid model combining artificial neural network (ANN) and simple exponential smoothing (SES) forecasting models, termed the ANNSES model. The proposed model attempts to incorporate the linear characteristics of SES and the nonlinear patterns captured by the ANN for predicting the scores of suppliers in the e-procurement system of an automobile industry. The MAPE and RMSE errors obtained indicate that predictions up to a month ahead were more accurate using the hybrid model than those obtained using the ANN and SES forecasting models individually.
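The hybrid idea above can be sketched as follows: SES captures the linear level, and a residual model corrects it. As a stand-in for the paper's ANN, a minimal one-hidden-unit network is trained on the SES residuals; the series, smoothing constant and training settings are illustrative assumptions.

```python
import math

def ses(series, alpha=0.3):
    """One-step-ahead simple exponential smoothing.
    Returns in-sample forecasts and the forecast for the next unseen point."""
    level = series[0]
    preds = []
    for y in series:
        preds.append(level)                  # forecast made before observing y
        level = alpha * y + (1 - alpha) * level
    return preds, level

def train_residual_net(res, lr=0.05, epochs=2000):
    """Tiny one-hidden-unit tanh network (a stand-in for the ANN component)
    learning to predict the next SES residual from the current one."""
    w1, b1, w2, b2 = 0.5, 0.0, 0.5, 0.0
    for _ in range(epochs):
        for x, t in zip(res[:-1], res[1:]):
            h = math.tanh(w1 * x + b1)
            out = w2 * h + b2
            err = out - t
            dz = err * w2 * (1 - h * h)      # backprop through tanh
            w2 -= lr * err * h; b2 -= lr * err
            w1 -= lr * dz * x;  b1 -= lr * dz
    return lambda x: w2 * math.tanh(w1 * x + b1) + b2

def hybrid_forecast(series, alpha=0.3):
    preds, ses_next = ses(series, alpha)
    res = [y - p for y, p in zip(series, preds)]
    net = train_residual_net(res)
    return ses_next + net(res[-1]), ses_next
```

On a trending series, SES lags systematically, so its residuals carry signal that the residual model can add back; that is the intuition behind combining the two forecasters.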
In healthcare, the persistent challenge of arrhythmias, a leading cause of global mortality, has sparked extensive research into the automation of detection using machine learning (ML) algorithms. However, traditional ML and AutoML approaches have revealed their limitations, notably regarding feature generalization and automation efficiency. This glaring research gap has motivated the development of AutoRhythmAI, an innovative solution that integrates both machine and deep learning to revolutionize the diagnosis of arrhythmias. Our approach encompasses two distinct pipelines tailored for binary-class and multi-class arrhythmia detection, effectively bridging the gap between data preprocessing and model selection. To validate our system, we have rigorously tested AutoRhythmAI on a multimodal dataset, surpassing the accuracy achieved using a single dataset and underscoring the robustness of our methodology. In the first pipeline, we employ signal filtering and ML algorithms for preprocessing, followed by data balancing and splitting for training. The second pipeline is dedicated to feature extraction and classification, using deep learning models. Notably, we introduce the 'RRI-convoluted transformer model' as a novel addition for binary-class arrhythmias. An ensemble-based approach then amalgamates all models, considering their respective weights, resulting in an optimal model pipeline. In our study, the VGGRes Model achieved impressive results in multi-class arrhythmia detection, with an accuracy of 97.39% and firm performance in precision (82.13%), recall (31.91%) and F1-score (82.61%). In the binary-class task, the proposed model achieved an outstanding accuracy of 96.60%. These results highlight the effectiveness of our approach in improving arrhythmia detection, with notably high accuracy and well-balanced performance metrics.
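Since the binary-class pipeline above builds on RR intervals (RRI), here is a toy sketch of the kind of preprocessing such a model consumes: a naive threshold peak detector followed by RR-interval computation. The detector rule, the sampling rate and the idealised trace are illustrative assumptions; real R-peak detection (e.g. Pan-Tompkins) is considerably more involved.

```python
def detect_r_peaks(signal, threshold):
    """Naive R-peak detector: local maxima above a fixed amplitude threshold."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i] > threshold
            and signal[i] >= signal[i - 1]
            and signal[i] > signal[i + 1]]

def rr_intervals(peaks, fs):
    """RR intervals in milliseconds from R-peak sample indices at fs Hz."""
    return [(b - a) * 1000.0 / fs for a, b in zip(peaks, peaks[1:])]

# Toy trace at 360 Hz with three idealised R spikes exactly one second apart.
fs = 360
sig = [0.0] * 1000
for i in (100, 460, 820):
    sig[i] = 1.0
```

The resulting RR-interval sequence is what an RRI-based classifier would take as input, with irregular interval patterns being the signature of many arrhythmias.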
The prevalence of cardiovascular diseases (CVDs) means that they account for a large percentage of all deaths worldwide. With the development of AI, various automatic classifiers of cardiac arrhythmias have recently been applied successfully in numerous models. However, during training most models treat the intrinsic properties of each lead in a 12-lead ECG separately, leaving them short of the inter-lead features needed to automate the classification of normal rhythm and 26 cardiac diseases. In this paper, we present a systematic approach that combines Auto-CardiacML (Auto-ML) classification with ResNet-50 feature extraction (Auto-Resnet). The model uses a 12-lead ECG that was digitally reconstructed from the original recording. Auto-ML is used to classify cardiac arrhythmias, and the ResNet model extracts both the intra-lead and inter-lead features of an ECG simultaneously. Experimental results on the CPSC 2018 test set show that our model achieves an average accuracy of 0.82 in distinguishing normal rhythm from cardiac arrhythmias. In future work, the results will be used to build a structure that uses both PCG and ECG signals.
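The abstract mentions a digitally reconstructed 12-lead ECG. One standard piece of such a reconstruction (not necessarily the authors' exact procedure) is deriving the four dependent limb leads from leads I and II via the Einthoven and Goldberger relations, sketched below.

```python
def derive_limb_leads(I, II):
    """Derive leads III, aVR, aVL, aVF sample-wise from limb leads I and II,
    using the standard Einthoven/Goldberger relations."""
    III = [b - a for a, b in zip(I, II)]          # III = II - I
    aVR = [-(a + b) / 2 for a, b in zip(I, II)]   # aVR = -(I + II)/2
    aVL = [a - b / 2 for a, b in zip(I, II)]      # aVL = I - II/2
    aVF = [b - a / 2 for a, b in zip(I, II)]      # aVF = II - I/2
    return III, aVR, aVL, aVF
```

Because these four leads are linear combinations of I and II, only eight of the twelve leads are independent, which is why inter-lead features carry information that per-lead training misses.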
In this paper, we propose to transform the global matching mechanism in an electronic exchange between producers and consumers in an SCM system for perishable commodities over large-scale data sets. The satisfaction of consumers and producers is mathematically modeled through preferential evaluations based on the bidding requests and the requirements data, which are supplied as a matrix to the Gale-Shapley matching algorithm. The matching follows a transparent approach in an e-trading environment over large-scale data. Since big data is involved, global SCM becomes much clearer, and allocation of perishable commodities becomes easier. These matching outcomes are compared with the matchings and profit ranges obtained using the simple English auction method, which yields Pareto-optimal matches. We observe that the proposed method produces a stable matching that is preference-strategy-proof with incentive compatibility for both consumers and producers. Our design addresses the preference revelation (elicitation) problem and the preference aggregation problem. The preference revelation problem involves eliciting truthful information from the agents about their types, which is used to compute incentive-compatible results. We use a Bayesian incentive-compatible mechanism design in our match-making setting, where the agents' preference types are multidimensional. This preserves profitability up to an additive loss that can be made arbitrarily small in time polynomial in the number of agents and the size of the agents' type spaces.
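The deferred-acceptance (Gale-Shapley) algorithm the mechanism above relies on can be sketched as follows, with producers proposing to consumers. The preference lists are hypothetical placeholders for the preferential evaluations derived from bids and requirements.

```python
def gale_shapley(proposer_prefs, acceptor_prefs):
    """Deferred acceptance: proposers propose in preference order;
    acceptors provisionally hold the best proposal seen so far."""
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    free = list(proposer_prefs)              # all proposers start unmatched
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}                             # acceptor -> current proposer
    while free:
        p = free.pop()
        a = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if a not in engaged:
            engaged[a] = p
        elif rank[a][p] < rank[a][engaged[a]]:   # acceptor prefers new proposer
            free.append(engaged[a])
            engaged[a] = p
        else:
            free.append(p)                       # rejected; tries next choice
    return {p: a for a, p in engaged.items()}

# Hypothetical preference lists (producers propose to consumers).
producers = {"P1": ["C1", "C2"], "P2": ["C1", "C2"]}
consumers = {"C1": ["P2", "P1"], "C2": ["P1", "P2"]}
```

The resulting matching is stable (no producer-consumer pair prefers each other over their assigned partners) and proposer-optimal, which is the property the truthful-bidding argument above builds on.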