3-D Volume of Interest Based Image Classification
2 Citations · 28 References · 10 Related Papers
Keywords: Contextual image classification, Data set
In this paper, we develop a metric designed to assess and rank uncertainty measures for the task of brain tumour sub-tissue segmentation in the BraTS 2019 sub-challenge on uncertainty quantification. The metric is designed to (1) reward uncertainty measures that assign high confidence to correct assertions and low confidence to incorrect assertions, and (2) penalize measures with higher percentages of under-confident correct assertions. The workings of the metric's components are explored using a number of popular uncertainty measures evaluated on the BraTS 2019 dataset.
Keywords: Rank (graph theory) · Citations: 4
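The abstract states the metric's design goals without its formula. As a rough illustration of that reward/penalty structure (a sketch of one plausible reading, not the challenge's actual scoring rule), one can sweep a confidence threshold, score only the voxels whose confidence survives the cut, and separately track how many correct voxels were thrown away:

```python
import numpy as np

def confidence_sweep(pred, truth, confidence, thresholds=np.linspace(0.0, 1.0, 11)):
    """Toy reward/penalty curves for ranking uncertainty measures.

    pred, truth: boolean voxel arrays (tissue / not tissue)
    confidence:  array in [0, 1], higher = more confident
    Per threshold, report accuracy over retained voxels (the reward:
    confident assertions should be correct) and the fraction of correct
    voxels discarded (the penalty: under-confident correct assertions).
    """
    correct = pred == truth
    rows = []
    for t in thresholds:
        kept = confidence >= t
        acc = correct[kept].mean() if kept.any() else 1.0
        lost = (correct & ~kept).sum() / max(correct.sum(), 1)
        rows.append((float(t), float(acc), float(lost)))
    return rows
```

A measure that keeps accuracy high as the threshold rises while discarding few correct voxels would rank well under both stated criteria; aggregating the two curves (for instance, by the area under each) yields a single comparable score.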
A new analytically approximated interior metric for a stationary compact perfect fluid with equation of state μ + (1 − n)p = μ₀ is presented. It is also concluded that an object of this kind can be a source of Wahlquist's metric.
Keywords: Perfect fluid, Circular symmetry · Citations: 1
Keywords: Black-body radiation · Citations: 17
There have been significant improvements in recent years in the image quality metrics used in the NVESD model suite. The introduction of the Targeting Task Performance (TTP) metric to replace the Johnson criteria yielded significantly more accurate predictions, particularly for under-sampled imaging systems. However, in certain cases the TTP metric produces overly optimistic performance predictions. In this paper, a new metric for predicting the performance of imaging systems is described. This new weighted contrast metric is a hybrid of the TTP metric and the Johnson criteria. Results from a number of historical perception studies are presented to compare the TTP metric and the Johnson criteria against the newly proposed metric.
Keywords: Performance metric · Citations: 5
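The abstract contrasts the two metrics without reproducing their definitions. As a structural sketch only ("Johnson-style" and "TTP-style" simplifications with an invented contrast threshold function, not the NVESD formulations), the Johnson criteria reduce performance to the single limiting frequency where target contrast meets the system threshold, whereas the TTP metric accumulates excess contrast over the whole usable band:

```python
import numpy as np

def johnson_frequency(c_target, ctf, freqs):
    """Johnson-style figure of merit: the highest spatial frequency at
    which target contrast still exceeds the system threshold (CTF)."""
    usable = freqs[c_target > ctf(freqs)]
    return float(usable.max()) if usable.size else 0.0

def ttp_value(c_target, ctf, freqs):
    """TTP-style figure of merit: accumulate sqrt(contrast / threshold)
    over every frequency where the target sits above threshold."""
    thr = ctf(freqs)
    excess = np.where(c_target > thr, np.sqrt(c_target / thr), 0.0)
    return float(np.sum(excess) * (freqs[1] - freqs[0]))  # crude Riemann sum

freqs = np.linspace(0.1, 10.0, 200)       # spatial frequency, illustrative units
ctf = lambda f: 0.02 * np.exp(0.4 * f)    # invented threshold curve
print(johnson_frequency(0.3, ctf, freqs), ttp_value(0.3, ctf, freqs))
```

Because the integral rewards every frequency with excess contrast, an integral-style measure can score a system highly even when the extra contrast does not translate into task performance, which is one way such a metric could turn out optimistic; a hybrid weighted contrast metric would sit between the two behaviors.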
Super-resolution results are usually evaluated with full-reference image quality metrics or human rating scores. However, these are general-purpose image quality measurements that do not account for the nature of the super-resolution problem. In this work, we analyze the evaluation problem in light of the one-to-many mapping at the heart of super-resolution and propose a novel distribution-based metric for it. Starting from a distribution distance, we derive the proposed metric so that it is accessible and easy to compute. Through a human subject study on super-resolution, we show that the proposed metric correlates highly with human perceptual quality, better than most existing metrics. Moreover, the proposed metric correlates with the fidelity measure more strongly than perception-based metrics do. To understand the properties of the proposed metric, we conduct an extensive evaluation of its design choices and show that the metric is robust to them. Finally, we show that the metric can be used to train super-resolution networks for better perceptual quality.
Citations: 0
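The derivation of the metric is not given in the abstract. As a generic stand-in for the idea of scoring distributions rather than image pairs (the feature extractor and the Fréchet-style Gaussian distance below are assumptions for illustration, not the paper's construction), one can fit Gaussians to feature vectors of super-resolved and reference images and measure the distance between them:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_sr, feats_hr):
    """Fréchet distance between Gaussians fitted to two feature samples.

    feats_sr, feats_hr: 2-D arrays, rows = images, columns = features
    (e.g., embeddings from any pretrained network).
    """
    mu1, mu2 = feats_sr.mean(axis=0), feats_hr.mean(axis=0)
    s1 = np.cov(feats_sr, rowvar=False)
    s2 = np.cov(feats_hr, rowvar=False)
    covmean = sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):   # sqrtm can leave tiny imaginary residue
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(s1 + s2 - 2.0 * covmean))
```

Scoring a set of outputs against a set of references, instead of each output against its own reference, is what lets a distribution-based metric tolerate the one-to-many nature of super-resolution: different plausible reconstructions of the same input can all sit close to the reference distribution.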
As part of the PLOS Collection on Universal Health Coverage, Stephen Lim and colleagues review the concept of effective coverage and discuss the ways in which current health information systems can support generating estimates of effective coverage.
Keywords: Universal Coverage · Citations: 186
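The review itself is conceptual, but the arithmetic behind effective coverage is simple enough to illustrate. In the usual formulation this literature discusses, effective coverage discounts crude coverage by the quality of care actually delivered: if, say, 90% of people who need hypertension treatment receive it, but treatment as delivered yields only 50% of the achievable health gain, effective coverage is 0.9 × 0.5 = 45%, well below what a utilization figure alone would suggest. (These numbers are illustrative, not from the article.)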
Metric validation in Grammatical Error Correction (GEC) is currently done by observing the correlation between human and metric-induced rankings. However, such correlation studies are costly, methodologically troublesome, and suffer from low inter-rater agreement. We propose MAEGE, an automatic methodology for GEC metric validation that overcomes many of the difficulties of existing practices. Experiments with MAEGE shed new light on metric quality, showing, for example, that the standard M² metric fares poorly on corpus-level ranking. Moreover, we use MAEGE to perform a detailed analysis of metric behavior, showing that correcting some types of errors is consistently penalized by existing metrics.
Keywords: Metric system · Citations: 0
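The abstract describes the methodology only at a high level. A simplified sketch of the core idea (the real method builds rankings from annotated correction lattices; `metric` and the correction callables here are placeholders, assumed order-independent): apply random subsets of the gold corrections to a source sentence, rank the variants by how many errors remain, and check how the metric's scores track that induced ranking:

```python
import random
from scipy.stats import spearmanr

def induced_ranking_correlation(source, corrections, metric, n_samples=50):
    """MAEGE-style validation sketch: no human raters needed.

    corrections: list of callables, each applying one gold correction
    metric:      callable(str) -> float, higher = judged better
    """
    errors_left, scores = [], []
    for _ in range(n_samples):
        subset = random.sample(corrections, random.randint(0, len(corrections)))
        sentence = source
        for fix in subset:
            sentence = fix(sentence)
        errors_left.append(len(corrections) - len(subset))
        scores.append(metric(sentence))
    rho, _ = spearmanr(errors_left, scores)
    return rho   # strongly negative rho: the metric tracks the induced ranking
```

Because the ranking is induced from annotations rather than elicited from judges, the procedure is cheap to run at scale and sidesteps the inter-rater agreement problem the abstract points to.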
Citations: 360
Keywords: Rail network, Metrics · Citations: 3
We introduce a new performance metric, the Load Balancing Factor (LBF), to help programmers evaluate different tuning alternatives. The LBF metric differs from traditional performance metrics in that it measures the performance implications of a specific tuning alternative rather than quantifying where time is spent in the current version of the program. A second unique aspect is that it provides guidance about moving work within a distributed or parallel program rather than about reducing that work. A variation of the LBF metric can also be used to predict the performance impact of changing the underlying network. The LBF metric can be computed incrementally and online while the program being tuned executes. We also present a case study showing that the metric accurately predicts actual performance gains for a test suite of six programs.
Keywords: Performance metric, Test suite, Software metric, Factor (programming language) · Citations: 5
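The summary above does not reproduce the LBF formula. As a rough intuition for measuring the implications of a tuning alternative rather than where time is currently spent (the additive cost model and all names below are illustrative assumptions; the real metric must also account for communication and is computed online), one can ask a what-if question about moving work between processes:

```python
def load_balancing_gain(proc_times, task_cost, src, dst):
    """Toy what-if in the spirit of LBF: predicted speedup if `task_cost`
    seconds of work move from process `src` to process `dst`.

    proc_times: per-process busy time in the current program version
    Assumes completion time is the maximum per-process load and ignores
    communication costs, which a real model would have to include.
    """
    before = max(proc_times)
    after_times = list(proc_times)
    after_times[src] -= task_cost
    after_times[dst] += task_cost
    return before / max(after_times)   # > 1.0 means the move should help

# Example: load_balancing_gain([12.0, 7.0, 5.0], 3.0, src=0, dst=2)
# -> 12.0 / 9.0 ≈ 1.33, so shifting the work is predicted to pay off.
```

Evaluating the alternative from measurements of the current run, without actually editing the program, is what distinguishes this style of metric from profile-style "where is the time going" measurements.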