    Abstract:
    Calculation Of Susceptibility through Multiple Orientation Sampling (COSMOS) is assessed by comparing the optimal scheme, a clinically feasible scheme, and multiple-orientation schemes. The optimal COSMOS estimation is used as a gold standard and is compared against the other schemes using the structural similarity index (SSIM), mean absolute error (MAE), and Pearson's correlation coefficient (PC). Further comparisons include Thresholded K-space Division (TKD) quantitative susceptibility mapping. For selected white-matter regions, linear regression is used to assess the similarity between the different estimations.
    Keywords:
    Quantitative Susceptibility Mapping
    Similarity (geometry)
    COSMOS
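    The comparison metrics named in the abstract are straightforward to compute; a minimal sketch with synthetic susceptibility maps (NumPy assumed; a full SSIM implementation needs windowed statistics, so only MAE and Pearson's coefficient are shown):

    ```python
    import numpy as np

    def mae(a, b):
        # Mean absolute error between two susceptibility maps
        return np.mean(np.abs(a - b))

    def pearson(a, b):
        # Pearson's correlation coefficient between the flattened maps
        return np.corrcoef(a.ravel(), b.ravel())[0, 1]

    rng = np.random.default_rng(0)
    # Synthetic stand-ins: a "gold standard" optimal COSMOS map and a
    # noisier estimate from a hypothetical clinically feasible scheme
    gold = rng.normal(0.0, 0.05, size=(32, 32))
    feasible = gold + rng.normal(0.0, 0.01, size=(32, 32))

    print(round(mae(gold, feasible), 4))
    print(round(pearson(gold, feasible), 3))
    ```

    The maps and noise levels here are made up for illustration; in the paper these metrics are computed between reconstructed susceptibility volumes.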
    Linear regression analysis is ubiquitous in many areas of scholarly inquiry, including substance abuse research. In linear regression it is common practice to test whether the squared multiple correlation coefficient, R2, differs significantly from zero. However, this test is misleading because the expected value of R2 is not zero under the null hypothesis. In this brief methodological note I discuss the implications of this realization for calculating and interpreting R2. In addition, I discuss and offer freely available software that calculates the expected value of R2 under the null hypothesis that ρ (the population value of the multiple correlation coefficient) equals zero; an adjusted R2 value and effect size measure that both take into account the expected value of R2; and an F statistic that tests the significance of the difference between the obtained R2 and the expected value of R2 under the null hypothesis that ρ = 0.
    Null hypothesis
    Value (mathematics)
    Statistic
    Citations (4)
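    The quantities this note describes follow standard formulas; a sketch, not the author's software (whose adjusted R2 and effect size measure may be defined differently): for n observations and p predictors, the expected value of R2 under ρ = 0 is p/(n − 1), and the usual F statistic for H0: ρ = 0 has (p, n − p − 1) degrees of freedom.

    ```python
    def expected_r2_null(n, p):
        # Expected value of R^2 under the null hypothesis rho = 0
        return p / (n - 1)

    def adjusted_r2(r2, n, p):
        # Conventional adjusted R^2, which shrinks R^2 toward its
        # null expectation
        return 1 - (1 - r2) * (n - 1) / (n - p - 1)

    def f_stat(r2, n, p):
        # F statistic for H0: rho = 0, with (p, n - p - 1) df
        return (r2 / p) / ((1 - r2) / (n - p - 1))

    n, p, r2 = 100, 3, 0.12          # made-up sample for illustration
    print(round(expected_r2_null(n, p), 4))   # nonzero even under the null
    print(round(adjusted_r2(r2, n, p), 4))
    print(round(f_stat(r2, n, p), 2))
    ```

    The first line makes the note's central point concrete: even with ρ = 0, a sample R2 of about 0.03 is expected here just from fitting three predictors to 100 cases.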
    In studies examining associations between dietary factors and biomedical risk factors, the relations, if they exist, are frequently attenuated by measurement error. Measurement error may be due to a large intraindividual variation and an inadequate number of measurements or to an inaccurate measuring instrument. This paper evaluates the impact of measurement error on partial correlation and multiple linear regression analyses. Quantitative methods are derived to estimate the potential attenuation of associations. The results indicate that when the controlled variables do not have measurement error, but the correlated variables do, the attenuation of the partial correlation coefficient (or multiple regression coefficient) is greater than that of the simple correlation (or regression) coefficient. When both the correlated variables and the controlled variables have measurement error, the partial correlation (or the regression) coefficients can be either increased or decreased.
    Partial correlation
    Correction for attenuation
    Standardized coefficient
    Multiple correlation
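    The classical attenuation effect for a simple correlation can be demonstrated by simulation; a sketch assuming independent additive measurement error on one variable (the paper's analytic results for partial and multiple regression coefficients are not reproduced here):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    true_r = 0.6

    # Latent "true" diet and risk-factor values with correlation true_r
    x = rng.normal(size=n)
    y = true_r * x + np.sqrt(1 - true_r**2) * rng.normal(size=n)

    # Add independent measurement error to x; with unit-variance truth,
    # reliability = var(true) / var(observed) = 1 / (1 + sigma^2)
    sigma = 1.0
    x_obs = x + sigma * rng.normal(size=n)
    reliability = 1 / (1 + sigma**2)

    r_obs = np.corrcoef(x_obs, y)[0, 1]
    # Classical attenuation formula: r_obs ~ true_r * sqrt(reliability)
    print(round(r_obs, 3), round(true_r * np.sqrt(reliability), 3))
    ```

    With reliability 0.5, the observed correlation drops from 0.6 to roughly 0.42, matching the attenuation formula.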
    In method comparison studies, measurement error is present in both methods. The classical regression approach (ordinary linear regression) cannot be used for the analysis, because it may yield biased and inefficient estimates; Deming regression is therefore preferred over classical regression. The focus of this work is to assess the impact of censored data on the traditional Deming regression, which deletes the censored observations, compared with an adapted version of the Deming regression that takes the censored data into account. The study is based on simulations, with NLMIXED used as the tool to analyse the data. Eight simulation studies were run, each comprising 100 datasets of 300 observations. The simulations suggest that the traditional Deming regression, which deletes censored observations, gives biased estimates and low coverage, whereas the adapted Deming regression that accounts for censoring gives estimates close to the true value, making them unbiased, and gives high coverage. When the analytical error ratio is misspecified, the estimates are likewise biased and unreliable.
    Censoring (clinical trials)
    Censored regression model
    Regression diagnostic
    Robust regression
    Citations (0)
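    Deming regression with a known analytical error ratio has a closed-form slope; a minimal uncensored sketch (the paper's NLMIXED-based treatment of censoring is not reproduced here):

    ```python
    import numpy as np

    def deming(x, y, delta=1.0):
        # Deming regression for errors in both variables;
        # delta = var(error in y) / var(error in x), the analytical
        # error ratio the abstract refers to
        mx, my = x.mean(), y.mean()
        sxx = np.mean((x - mx) ** 2)
        syy = np.mean((y - my) ** 2)
        sxy = np.mean((x - mx) * (y - my))
        slope = (syy - delta * sxx +
                 np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)
                 ) / (2 * sxy)
        return slope, my - slope * mx

    rng = np.random.default_rng(2)
    t = rng.uniform(0, 10, 300)                   # latent true values
    x = t + rng.normal(0, 0.5, 300)               # method 1, with error
    y = 2.0 * t + 1.0 + rng.normal(0, 0.5, 300)   # method 2: slope 2, intercept 1

    slope, intercept = deming(x, y, delta=1.0)
    print(round(slope, 2), round(intercept, 2))
    ```

    Ordinary least squares on the same data would attenuate the slope toward zero because of the error in x; Deming regression recovers values near the true slope 2 and intercept 1.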
    Correlation and linear regression are the most commonly used techniques for quantifying the association between two numeric variables. Correlation quantifies the strength of the linear relationship between paired variables, expressing this as a correlation coefficient. If both variables x and y are normally distributed, we calculate Pearson's correlation coefficient (r). If the normality assumption is not met for one or both variables, a rank correlation coefficient, such as Spearman's rho (ρ), may be calculated instead. A hypothesis test of correlation tests whether the linear relationship between the two variables holds in the underlying population, returning P < 0.05 when it does. A 95% confidence interval of the correlation coefficient can also be calculated to give an idea of the correlation in the population. The value r2, called the coefficient of determination, denotes the proportion of the variability of the dependent variable y that can be attributed to its linear relation with the independent variable x. Linear regression is a technique that links two correlated variables x and y in the form of a mathematical equation (y = a + bx), such that given the value of one variable the other may be predicted. In general, the method of least squares is applied to obtain the equation of the regression line. Correlation and linear regression analysis are based on certain assumptions pertaining to the data sets. If these assumptions are not met, misleading conclusions may be drawn. The first assumption is that of a linear relationship between the two variables; a scatter plot is essential before embarking on any correlation-regression analysis to show that this is indeed the case. Outliers or clustering within data sets can distort the correlation coefficient value. Finally, it is vital to remember that though strong correlation can be a pointer toward causation, the two are not synonymous.
    Linear predictor function
    Variables
    Fisher transformation
    Partial correlation
    Multiple correlation
    Correlation ratio
    Standardized coefficient
    Distance correlation
    Citations (196)
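    The quantities described above (r, the least-squares line y = a + bx, and r2) can be computed directly; a small worked sketch with made-up data:

    ```python
    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.2])

    # Pearson's correlation coefficient
    r = np.corrcoef(x, y)[0, 1]

    # Least-squares slope b and intercept a for y = a + b*x
    b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    a = y.mean() - b * x.mean()

    # r^2 as the proportion of var(y) explained by the fitted line
    y_hat = a + b * x
    r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

    print(round(r, 4), round(b, 3), round(a, 3), round(r2, 4))
    ```

    For simple linear regression, r squared equals the coefficient of determination computed from the residuals, which the last line makes easy to check.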
    asreg can fit three types of regression models: (1) a model of depvar on indepvars using linear regression in a user-defined rolling window or recursive window; (2) cross-sectional regressions or regressions by a grouping variable; (3) the Fama and MacBeth (1973) two-step procedure. asreg is an order of magnitude faster than estimating rolling-window regressions through conventional methods such as Stata loops or Stata's official rolling command, and has the same speed efficiency as asrol. All rolling-window calculations, estimation of regression parameters, and writing of results to Stata variables are done in the Mata language. asreg reports the most commonly used regression statistics: number of observations, r-squared, adjusted r-squared, constant, slope coefficients, standard errors of the coefficients, fitted values, and regression residuals.
    Variables
    Citations (0)
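    asreg itself is a Stata/Mata command; the rolling-window idea it implements can be sketched in Python (an illustration of the technique, not the package's code):

    ```python
    import numpy as np

    def rolling_ols(y, x, window):
        # Re-estimate the OLS intercept and slope of y on x in each
        # rolling window, as asreg does for a user-defined window
        out = []
        for i in range(window, len(y) + 1):
            xs, ys = x[i - window:i], y[i - window:i]
            b = (np.sum((xs - xs.mean()) * (ys - ys.mean()))
                 / np.sum((xs - xs.mean()) ** 2))
            a = ys.mean() - b * xs.mean()
            out.append((a, b))
        return out

    rng = np.random.default_rng(3)
    x = np.arange(100, dtype=float)
    y = 0.5 * x + rng.normal(0, 1, 100)   # true slope 0.5 plus noise

    coefs = rolling_ols(y, x, window=20)
    print(len(coefs), round(coefs[-1][1], 2))
    ```

    A recursive window differs only in that the window start stays fixed while the end advances; asreg additionally writes these results back to Stata variables.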
    This paper presents an efficient image recognition method based on a hybrid similarity measure combining a correlation coefficient and a distance coefficient. The correlation coefficient measures statistical similarity via Pearson's coefficient, and the distance coefficient measures spatial similarity via the city-block distance. The total similarity between images is computed by extending the similarity between their feature vectors, which are extracted with PCA and ICA, respectively. The proposed method was applied to recognizing 960 facial images (30 persons * 4 expressions * 2 lighting conditions * 4 poses) of 40*50 pixels. The experimental results show that the ICA-based hybrid measure achieves a higher recognition rate than the PCA-based one and is more robust to environmental influences such as lighting.
    Similarity (geometry)
    Similarity measure
    Feature extraction
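    A minimal sketch of such a hybrid measure on feature vectors (the weighting between the two coefficients is an assumption here; the abstract does not specify how they are combined, nor the PCA/ICA feature extraction step):

    ```python
    import numpy as np

    def pearson_sim(a, b):
        # Statistical similarity via Pearson's correlation coefficient
        return np.corrcoef(a, b)[0, 1]

    def cityblock_sim(a, b):
        # Spatial similarity from the city-block (L1) distance,
        # mapped into (0, 1] so larger means more similar
        return 1.0 / (1.0 + np.sum(np.abs(a - b)))

    def hybrid_sim(a, b, w=0.5):
        # Hypothetical equal-weight combination of the two measures
        return w * pearson_sim(a, b) + (1 - w) * cityblock_sim(a, b)

    # Toy feature vectors standing in for PCA/ICA features
    f1 = np.array([0.2, 0.5, 0.1, 0.9])
    f2 = np.array([0.25, 0.45, 0.15, 0.85])
    print(round(hybrid_sim(f1, f2), 3))
    ```

    Recognition then amounts to assigning a probe image to the gallery image whose feature vector maximizes this hybrid similarity.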
    Least-squares regression has been applied as a tool to understand traffic growth patterns and to predict future growth. Specifically, given a set of historical annual average daily traffic (AADT) values for a location, regression can be used to summarize traffic growth patterns and to predict growth. However, this technique is vulnerable to outliers, because standard linear regression can produce arbitrarily large errors when points are badly placed. The situation is made worse when thousands of traffic sites are analyzed at once, since it is infeasible to examine each set of regression results individually. In this paper, two outlier detection and removal techniques and one robust regression technique are compared with simple least-squares regression for accuracy in traffic growth prediction, using both linear and log-linear models of traffic growth fit to historical AADT values for several thousand sites in the state of New York. Each method was evaluated by computing the median absolute prediction error 1 year, 4 years, and 8 years beyond the modeled values, and by computing the mean percent error with each site given equal weight. When all sites were equally weighted, the robust regression technique produced significantly better results than either plain regression or the outlier detection techniques. By median absolute error, however, none of the robust techniques produced significantly more accurate results than ordinary regression.
    Robust regression
    Ordinary least squares
    Regression diagnostic
    Citations (10)
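    The outlier sensitivity of least squares on an AADT series can be illustrated with Theil-Sen regression, a classic robust estimator used here as a stand-in (the paper's specific robust and outlier-removal techniques are not named in the abstract):

    ```python
    from itertools import combinations

    import numpy as np

    def ols_slope(x, y):
        # Ordinary least-squares slope
        return (np.sum((x - x.mean()) * (y - y.mean()))
                / np.sum((x - x.mean()) ** 2))

    def theil_sen_slope(x, y):
        # Median of all pairwise slopes: robust to a few bad points
        slopes = [(y[j] - y[i]) / (x[j] - x[i])
                  for i, j in combinations(range(len(x)), 2)]
        return float(np.median(slopes))

    # Synthetic AADT series growing 500 vehicles/year, with one bad point
    years = np.arange(2000, 2010, dtype=float)
    aadt = 10_000 + 500 * (years - 2000)
    aadt[4] = 30_000   # a hypothetical miscoded count

    print(round(ols_slope(years, aadt), 1),
          round(theil_sen_slope(years, aadt), 1))
    ```

    The single bad observation drags the least-squares growth estimate well below the true 500 vehicles/year, while the robust estimate is unaffected, which is the behavior the paper's equal-weight comparison rewards.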