Abstract The name PM10 refers to small particles with a diameter of less than 10 microns. The present research analyses different models capable of predicting PM10 concentration using the previous values of PM10, SO2, NO, NO2, CO and O3 as input variables. The models were trained on data from January 2010 to December 2017. The models considered were the autoregressive integrated moving average (ARIMA), the vector autoregressive moving average (VARMA), multilayer perceptron neural networks (MLP), support vector machines used as regressors (SVMR) and multivariate adaptive regression splines (MARS). Predictions were performed from 1 to 6 months in advance. The performance of the different models was measured in terms of the root mean squared error (RMSE). For forecasting 1 month ahead, the best results were obtained with an SVMR model with six input variables, which gave an RMSE of 4.2649; the MLP results were very close, with an RMSE of 4.3402. For forecasts 6 months in advance, the best result corresponds to an MLP model with six variables, with an RMSE of 6.0873, followed by an SVMR, also with six variables, with an RMSE of 6.1010. For forecasts both 1 and 6 months ahead, ARIMA outperformed the VARMA models.
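As a rough illustration of the forecasting setup described above, the sketch below trains a support vector regressor on the previous month's values of the six pollutants to predict PM10 one month ahead and reports the RMSE. The synthetic series, the scikit-learn pipeline and the hyperparameters (C, epsilon) are illustrative assumptions, not the study's actual data or configuration.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n_months = 96                                   # e.g. Jan 2010 - Dec 2017
pollutants = ["PM10", "SO2", "NO", "NO2", "CO", "O3"]
data = rng.normal(size=(n_months, len(pollutants)))   # synthetic stand-in series

horizon = 1                                     # forecast 1 month ahead (up to 6 in the study)
X = data[:-horizon]                             # previous values of all six pollutants
y = data[horizon:, 0]                           # future PM10 concentration

split = int(0.8 * len(X))                       # simple chronological train/test split
model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1))
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
rmse = np.sqrt(mean_squared_error(y[split:], pred))
print(f"{horizon}-month-ahead RMSE: {rmse:.4f}")
```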
Abstract: In this paper we present two novel algorithms belonging to the extended family of particle swarm optimization (PSO) algorithms: PP-GPSO and RR-GPSO. These algorithms correspond, respectively, to progressive and regressive discretizations in acceleration and velocity. PP-GPSO has the same velocity update as GPSO, but the velocities used to update the trajectories are delayed by one iteration; thus, PP-GPSO acts as a Jacobi system, updating positions and velocities at the same time. RR-GPSO is similar to a GPSO with a stochastic constriction factor. Both versions behave very differently from GPSO and from the other family members introduced previously: CC-GPSO and CP-GPSO. The numerical comparison of all the family members has shown that RR-GPSO has the greatest convergence rate and that its good parameter sets can be calculated analytically, since they lie along a straight line located in the first-order stability region. Conversely, PP-GPSO is a more explorative version.
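The following minimal sketch illustrates the Jacobi-style idea behind PP-GPSO: the new velocity is computed as usual, but the trajectory is advanced with the velocity of the previous iteration, so positions and velocities are updated simultaneously. It uses a plain inertia-weight PSO on an assumed sphere benchmark rather than the authors' exact GPSO discretization, and all parameter values are illustrative.

```python
import numpy as np

def sphere(x):
    """Sphere benchmark objective (an assumed test function, not from the paper)."""
    return np.sum(x**2, axis=-1)

def jacobi_pso(f, dim=10, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Simplified PSO with a Jacobi-style update: positions are advanced with the
    velocity of the previous iteration, loosely mirroring the delayed-velocity
    idea behind PP-GPSO (not the authors' exact GPSO formulation)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), f(x)
    gbest = pbest[np.argmin(pbest_val)].copy()

    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Standard velocity update computed from the current state...
        v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        # ...but the trajectory is advanced with the *previous* velocity (Jacobi step).
        x = x + v
        v = v_new
        vals = f(x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

best_x, best_val = jacobi_pso(sphere)
print(f"best objective value found: {best_val:.3e}")
```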
In this study, carried out in the context of Industry 4.0, a random forest regression (RFR) machine learning technique was used to predict the critical temperature (Tc) of superconductors from features derived from the material's physico-chemical properties, including atomic mass, electron affinity, atomic radius, valence, and thermal conductivity. The same experimental data were also fitted with multilayer perceptron (MLP) artificial neural networks (ANN), an M5 model tree and a multivariate linear regression (MLR) model for comparison. The findings of the present investigation show that the proposed RFR-based model can successfully forecast the critical temperature of a superconductor. When the observed dataset was used to test this technique, the Tc estimates were obtained with a correlation coefficient of 0.9565 and a coefficient of determination of 0.9146. Moreover, the outcomes from the MLP, M5, and MLR models are clearly worse than those from the RFR-based model. This investigation is noteworthy for a fuller understanding of superconductivity. In terms of forecasting effectiveness and feature reduction rate, the RFR approach has clear advantages and generalizability, and it is also well suited to forecasting the Tc of high-temperature superconductors. In fact, it offers a practical and affordable approach to data-driven superconductor investigation.
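A minimal sketch of the RFR workflow described above is given below, assuming scikit-learn's RandomForestRegressor, synthetic stand-in data and the listed physico-chemical properties as feature names; it is not the study's actual dataset or tuned model, but it shows how the reported correlation coefficient and coefficient of determination would be computed on a held-out test set.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
features = ["atomic_mass", "electron_affinity", "atomic_radius",
            "valence", "thermal_conductivity"]

# Synthetic stand-in data (the real study uses a curated superconductor dataset).
X = rng.normal(size=(500, len(features)))
Tc = 20 + 10 * X[:, 0] - 5 * X[:, 2] + rng.normal(scale=2.0, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, Tc, test_size=0.2, random_state=0)
rfr = RandomForestRegressor(n_estimators=200, random_state=0)
rfr.fit(X_train, y_train)

pred = rfr.predict(X_test)
r = np.corrcoef(y_test, pred)[0, 1]      # correlation coefficient
r2 = r2_score(y_test, pred)              # coefficient of determination
print(f"R = {r:.4f}, R^2 = {r2:.4f}")
```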