    Cross-entropy minimization estimation for two-phase sampling and non-response
    Citations: 0 · References: 17 · Related Papers: 10
    Tomographic data are inevitably corrupted by noise, and the number of projections available is often small. Such data cannot define an image uniquely, but are consistent with a whole range of "feasible images". Recognising that our choice of a single image is not unique, the Maximum Entropy Method chooses the feasible image which has the greatest configurational entropy $S = -\sum_i p_i \ln(p_i / m_i)$, where $p_i$ is the proportion of intensity originating from pixel $i$ and $m_i$ is the corresponding measure or initial estimate.
    Citations (0)
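    The configurational entropy above is straightforward to evaluate directly. Below is a minimal sketch, assuming a flattened image and an initial estimate that are both normalized to unit total intensity; the array names and the flat toy prior are illustrative, not taken from the paper.

```python
import numpy as np

def configurational_entropy(p, m):
    """Configurational entropy S = -sum_i p_i * ln(p_i / m_i),
    where p is the normalized image and m the initial estimate (measure)."""
    p = np.asarray(p, dtype=float)
    m = np.asarray(m, dtype=float)
    p = p / p.sum()          # proportions of intensity per pixel
    m = m / m.sum()          # normalized prior measure
    mask = p > 0             # treat 0 * ln(0) as 0
    return -np.sum(p[mask] * np.log(p[mask] / m[mask]))

# Toy 2x2 "image": relative to a flat prior, a flat image attains the maximum entropy of 0
flat = [1, 1, 1, 1]
peaked = [10, 1, 1, 1]
prior = [1, 1, 1, 1]
print(configurational_entropy(flat, prior))    # 0.0
print(configurational_entropy(peaked, prior))  # negative: less entropic than the prior
```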
    The maximum entropy principle advocates evaluating event probabilities using the distribution that maximizes entropy among all distributions satisfying given expectation constraints. This principle can be generalized to arbitrary decision problems, where it corresponds to minimax approaches. This paper establishes a framework for supervised classification based on the generalized maximum entropy principle that leads to minimax risk classifiers (MRCs). We develop learning techniques that determine MRCs for general entropy functions and provide performance guarantees by means of convex optimization. In addition, we describe the relationship of the presented techniques with existing classification methods, and quantify MRC performance in comparison with the proposed bounds and with conventional methods.
    Citations (10)
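    As a small illustration of the underlying principle (not of the MRC learning algorithm itself), the sketch below numerically finds the maximum entropy distribution on a six-sided die subject to a mean constraint. The use of SciPy and the die example are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Maximum entropy distribution on {1,...,6} constrained to have mean 4.5.
values = np.arange(1, 7)
target_mean = 4.5

def neg_entropy(p):
    p = np.clip(p, 1e-12, None)      # avoid log(0)
    return np.sum(p * np.log(p))     # minimizing this maximizes entropy

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},            # probabilities sum to 1
    {"type": "eq", "fun": lambda p: p @ values - target_mean},  # expectation constraint
]
p0 = np.full(6, 1 / 6)
res = minimize(neg_entropy, p0, bounds=[(0, 1)] * 6, constraints=constraints)
print(np.round(res.x, 4))  # tilted toward larger faces, of the exponential-family form p_i ∝ exp(lambda * i)
```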
    Quality of research is determined by many factors, and one critical factor is sample size. Failure to use the correct sample size in a study can lead to fallacious results, in the form of rejection of true findings or acceptance of false ones. Too large a sample size wastes resources, while too small a sample size may fail to answer the research question, yield imprecise results, and call the validity of the study into question. Despite being such a paramount aspect of research, knowledge about sample size calculation is sparse among researchers. Why it is important to calculate sample size, when to calculate it, how to calculate it, and what details about the calculation should be reported in research protocols or articles are basics unfamiliar to the majority of researchers. The present review addresses these fundamentals. Sample size should be calculated during the initial planning phase of a study. Several components are required for the calculation, such as effect size, type-1 error, type-2 error, and variance. Researchers must be aware that different formulas apply to different types of study designs, and details of the sample size calculation should be included in the methodology section, so that it can be justified and so that it adds to the transparency of the study. The literature on sample size calculation for different study designs is scattered across many textbooks and journals, and a careful literature search was conducted to gather the pertinent material for this review. This paper presents the sample size calculation formulas in a single review, in a simplified manner and with relevant examples, so that researchers may use them appropriately in their research.
    Citations (29)
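    For one common case covered by such reviews, comparing two independent means, the usual normal-approximation formula is $n = 2(z_{1-\alpha/2} + z_{1-\beta})^2 \sigma^2 / \Delta^2$ per group. A minimal sketch, where the effect size, standard deviation, and error rates are illustrative inputs rather than values from the paper:

```python
import math
from scipy.stats import norm

def n_per_group_two_means(delta, sigma, alpha=0.05, power=0.80):
    """Approximate sample size per group for comparing two independent means:
    n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * sigma^2 / delta^2"""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    n = 2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2
    return math.ceil(n)

# Example: detect a 5-unit difference, SD 10, alpha 0.05, power 80%
print(n_per_group_two_means(delta=5, sigma=10))  # about 63 per group
```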
    Overview: Sample size determination is an important part of planning for clinical trials, and sample size estimation is one of the key aspects of the protocol. The goal is to ensure that a trial is large enough to reliably detect the smallest difference in the primary outcome that is considered clinically worthwhile. Studies can be underpowered, failing to detect even large treatment effects because of inadequate sample size. Sample size must therefore be planned carefully to ensure that the resources invested, including patient participation, are not wasted. It may be considered unethical to recruit patients into a study that does not have a large enough sample size to deliver meaningful information. Elements of sample size calculation: The minimum information required to calculate the sample size for a randomized controlled trial includes: ... Power: The power of a study is its ability to detect a true difference in outcome between the control arm and the intervention arm. Sample size increases as the required power increases; the higher the power, the lower the chance of missing a real effect of treatment, and the type II error falls as the sample size grows.
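    A small sketch of the power side of that relationship, using the normal approximation for a two-arm comparison of means; the specific effect size and standard deviation are made-up inputs.

```python
import numpy as np
from scipy.stats import norm

def power_two_means(n_per_group, delta, sigma, alpha=0.05):
    """Approximate power of a two-arm trial comparing means (normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    signal = delta / (sigma * np.sqrt(2.0 / n_per_group))  # standardized detectable signal
    return norm.cdf(signal - z_alpha)

for n in (20, 40, 63, 100):
    print(n, round(power_two_means(n, delta=5, sigma=10), 3))
# power rises with n; the type II error (1 - power) falls as the sample grows
```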
    Feasible images in tomographic image reconstruction are defined as those images compatible with the data by consideration of the statistical process that governs the physics of the problem. The first part of this paper reviews the concept of image feasibility, discusses its theoretical problems and practical advantages, and presents an assumption justifying the method and some preliminary results supporting it. In the second part of the paper two different algorithms for tomographic image reconstruction are developed. The first is a Maximum Entropy algorithm and the second is a full Bayesian algorithm. Both algorithms are tested for feasibility of the resulting images and we show that the Bayesian method yields feasible reconstructions in Positron Emission Tomography.
    Citations (6)
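    As a toy illustration of the feasibility idea (not of the paper's PET algorithms, which rest on Poisson counting statistics), the sketch below declares a candidate image feasible when the chi-squared misfit between its forward projection and the noisy data is consistent with an assumed Gaussian noise level. The system matrix and noise level are invented for the example.

```python
import numpy as np
from scipy.stats import chi2

def is_feasible(image, system_matrix, data, sigma, alpha=0.05):
    """Toy feasibility test: an image is declared feasible if the chi-squared
    misfit between its forward projection A @ x and the measured data is
    consistent with Gaussian noise of standard deviation sigma."""
    residual = system_matrix @ image - data
    chi_sq = np.sum((residual / sigma) ** 2)
    dof = data.size
    # Accept the image if the misfit is not in the upper tail of chi^2(dof)
    return chi_sq <= chi2.ppf(1 - alpha, dof), chi_sq

# Tiny example: 2-pixel "image", 3 projection measurements
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
true_x = np.array([3.0, 5.0])
rng = np.random.default_rng(0)
y = A @ true_x + rng.normal(0, 0.5, size=3)
print(is_feasible(true_x, A, y, sigma=0.5))          # usually feasible
print(is_feasible(np.array([10.0, 0.0]), A, y, 0.5)) # far from the data: infeasible
```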
    Determination of the sample size of a clinical study is one of the essential design factors. This paper focuses on the problem of sample size determination for a clinical trial in the clinical development phase. Sample size calculation can be performed using a formula or simulation once a set of conditions is given pertaining to the treatment effect, the variability of observations, the significance level, and the power. In contrast to calculation, determination of sample size is not an easy task; formal application of a formula may, for example, result in a very large sample size. In designing a study, however, sample size calculation is constrained by various design factors, the amount of information at hand, and the feasibility of the study. I first present a principle for sample size calculation and stress the importance of assessing the risks of a given sample size. Next, I propose two conservative approaches to sample size calculation, the double confidence limits method and the effect size confidence limit method, and compare their properties with those of an ordinary method for a two-treatment parallel-group study. A few actual examples of a conservative approach are presented and discussed. Further issues in, and points to consider for, sample size determination are also discussed, along with several examples of approaches to sample size calculation in exploratory studies and dose-response studies.
    Citations (3)
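    One way to read a confidence-limit-based conservative approach (this is a generic sketch, not necessarily the method defined in the paper) is to plan the study around the lower confidence limit of a pilot effect estimate rather than its point estimate, which yields a larger, more cautious sample size. All inputs below are illustrative.

```python
import math
from scipy.stats import norm

def conservative_n(pilot_delta, pilot_se, sigma, alpha=0.05, power=0.80, conf=0.95):
    """Hedged sketch: plan with the lower confidence limit of a pilot effect
    estimate instead of the point estimate, yielding a larger (conservative) n."""
    z_conf = norm.ppf(1 - (1 - conf) / 2)
    delta_lcl = pilot_delta - z_conf * pilot_se     # lower confidence limit of the effect
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    n_point = 2 * (z_a + z_b) ** 2 * sigma ** 2 / pilot_delta ** 2
    n_conserv = 2 * (z_a + z_b) ** 2 * sigma ** 2 / delta_lcl ** 2
    return math.ceil(n_point), math.ceil(n_conserv)

print(conservative_n(pilot_delta=5.0, pilot_se=1.0, sigma=10.0))
# (63, 170): planning with the lower limit (about 3.04) roughly triples n
```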
    Calculating the sample size helps a medical researcher assess the cost, time, and feasibility of a project, in addition to its scientific justification and validity. Although frequently reported in journals, the details or elements of the sample size calculation are not consistently provided by authors, and in many studies the reported calculation cannot be reproduced from the stated inputs. Most trials with negative results do not have a large enough sample size, so reporting of sample size and power needs to be improved. The sample size calculation can be guided by previous literature, pilot studies, and past clinical experience, and requires a collaborative effort between the researcher and the statistician. The estimated sample size is our best guess. Issues such as anticipated loss to follow-up, large subgroup analyses, and complicated study designs demand a larger sample size to ensure power throughout the trial. The present article will help the reader, first, understand the importance of a pilot study in sample size estimation; second, understand the relationship between the primary objective and the sample size of a study; third, understand the essential components of a sample size estimation; and fourth, calculate sample sizes for real-life examples using online software.
    Citations (4)
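    One of the issues listed above, anticipated loss to follow-up, has a simple standard adjustment: inflate the computed sample size by 1/(1 - expected dropout rate). A minimal sketch, where the 15% dropout figure is illustrative:

```python
import math

def adjust_for_dropout(n_required, dropout_rate):
    """Inflate the estimated sample size so that, after the anticipated loss to
    follow-up, the number of evaluable participants still meets the target."""
    if not 0 <= dropout_rate < 1:
        raise ValueError("dropout_rate must be in [0, 1)")
    return math.ceil(n_required / (1 - dropout_rate))

# Example: 63 per group needed for 80% power, expecting 15% loss to follow-up
print(adjust_for_dropout(63, 0.15))  # 75 per group to randomize
```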
    Deng entropy has been proposed to measure the uncertainty of a basic probability assignment in evidence theory. In this paper, the condition for the maximum of Deng entropy is discussed. From the proposed theorem of the maximum Deng entropy, we obtain the analytic solution of the maximum Deng entropy, which shows that the maximum information volume of Deng entropy is larger than that of previous belief entropy functions. Some numerical examples illustrate the basic probability assignment that attains the maximum Deng entropy.
    Citations (81)
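    A short sketch of the quantities involved, assuming the usual definition $E_d(m) = -\sum_A m(A)\,\log_2\!\big(m(A) / (2^{|A|} - 1)\big)$ and the analytic maximizer $m(A) \propto 2^{|A|} - 1$ over non-empty subsets of the frame of discernment; the three-element frame is an illustrative choice, not one from the paper.

```python
import numpy as np
from itertools import combinations

def deng_entropy(bpa):
    """Deng entropy E_d(m) = -sum_A m(A) * log2( m(A) / (2^|A| - 1) ),
    where |A| is the cardinality of focal element A."""
    total = 0.0
    for focal, mass in bpa.items():
        if mass > 0:
            total -= mass * np.log2(mass / (2 ** len(focal) - 1))
    return total

def max_deng_bpa(frame):
    """BPA attaining the maximum Deng entropy: m(A) proportional to 2^|A| - 1
    over all non-empty subsets A of the frame of discernment."""
    subsets = [frozenset(c) for r in range(1, len(frame) + 1)
               for c in combinations(frame, r)]
    weights = {A: 2 ** len(A) - 1 for A in subsets}
    z = sum(weights.values())
    return {A: w / z for A, w in weights.items()}

frame = {"a", "b", "c"}
m_star = max_deng_bpa(frame)
print(round(deng_entropy(m_star), 4))                  # maximum value, log2(19) for a 3-element frame
print(round(deng_entropy({frozenset({"a"}): 1.0}), 4)) # a Bayesian BPA: entropy 0
```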
    Sample size calculation is a complex and crucial area of attention in the research process. An appropriate sample size acts as a strong foundation for evidence-based practice: too small a sample may fail to detect the effect, while too large a sample wastes resources. Researchers have to ensure that the sample size is estimated to give the study the desired power, so that its findings can be generalized to the population. That is difficult unless the researcher is aware of how each component of the sample size estimate influences the sample size. This article briefly reviews the relationship between the components of the sample size estimate and the resulting sample size.
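    To make that relationship concrete, the sketch below varies each component (effect size, power, significance level, variability) around an illustrative baseline and reports the resulting per-group sample size for a two-mean comparison; all numbers are made up for illustration.

```python
import math
from scipy.stats import norm

def n_two_means(delta, sigma, alpha, power):
    """Per-group n for comparing two means: 2 * (z_{1-alpha/2} + z_{1-beta})^2 * sigma^2 / delta^2."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * z ** 2 * sigma ** 2 / delta ** 2)

base = dict(delta=5, sigma=10, alpha=0.05, power=0.80)
print("baseline:              ", n_two_means(**base))                     # 63
print("halve the effect size: ", n_two_means(**{**base, "delta": 2.5}))   # ~4x larger
print("raise power to 90%:    ", n_two_means(**{**base, "power": 0.90}))  # larger
print("tighten alpha to 0.01: ", n_two_means(**{**base, "alpha": 0.01}))  # larger
print("double the SD:         ", n_two_means(**{**base, "sigma": 20}))    # ~4x larger
```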
    A technique based on the entropy minimization principle is developed for fusing netted radar data. It is proved that the entropy minimization principle and the SNR maximization principle are consistent for a point radar target. The entropy minimization principle, however, is better suited to a complex radar target, where its weights can be calculated iteratively while the weights of the SNR maximization principle cannot be calculated easily.
    Citations (0)
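    For the point-target case in which the two principles coincide, the standard SNR-maximizing (inverse-variance) weighting is easy to sketch. This is only a baseline illustration of that weighting, not the paper's iterative entropy-minimization procedure, and the measurements and noise variances are made-up numbers.

```python
import numpy as np

def snr_weighted_fusion(measurements, noise_vars):
    """Fuse several radar measurements of the same quantity with weights
    proportional to 1/variance (the SNR-maximizing, minimum-variance choice
    for a point target); weights are normalized to sum to 1."""
    measurements = np.asarray(measurements, dtype=float)
    noise_vars = np.asarray(noise_vars, dtype=float)
    w = 1.0 / noise_vars
    w /= w.sum()
    fused = w @ measurements
    fused_var = 1.0 / np.sum(1.0 / noise_vars)
    return fused, w, fused_var

# Three netted radars observing the same range, with different noise levels
fused, weights, var = snr_weighted_fusion([101.0, 99.5, 100.8], [1.0, 4.0, 0.25])
print(weights)      # the cleanest radar dominates
print(fused, var)   # fused estimate with variance below any single radar's
```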