In this article, we study the problem of predicting future records and order statistics (two-sample prediction) based on progressively type-II censored data with random removals, where the number of units removed at each failure time follows a discrete binomial distribution. We use the Bayes procedure to derive both point predictors and prediction intervals. Bayesian point prediction under symmetric and asymmetric loss functions is discussed. Maximum likelihood (ML) prediction intervals for future records and order statistics, based on a “plug-in” procedure, are also derived. An example is discussed to illustrate the application of the results under this censoring scheme.
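The binomial removal scheme described above can be sketched in a short simulation. This is an illustrative sketch only, not the paper's algorithm: the Exp(λ) lifetime model, the parameter values, and the helper name `progressive_type2_binomial` are assumptions made here for concreteness (the memoryless property of the exponential lets each inter-failure time be drawn directly).

```python
import numpy as np

rng = np.random.default_rng(42)

def progressive_type2_binomial(n, m, p, lam, rng):
    """Simulate a progressive Type-II censored sample of size m from an
    Exp(lam) lifetime population of n units, with binomial random removals.

    After the i-th failure (i = 1..m-1), R_i ~ Binomial(remaining slack, p)
    surviving units are withdrawn; all survivors are removed at the m-th failure.
    """
    removals = []
    slack = n - m                      # units available for removal overall
    alive = n
    times = []
    t = 0.0
    for i in range(m):
        # memoryless exponential: next failure among `alive` units
        t += rng.exponential(1.0 / (lam * alive))
        times.append(t)
        alive -= 1
        if i < m - 1:
            r = rng.binomial(slack, p)
            slack -= r
        else:
            r = slack                  # withdraw everyone left at the last failure
            slack = 0
        removals.append(r)
        alive -= r
    return np.array(times), np.array(removals)

times, removals = progressive_type2_binomial(n=30, m=10, p=0.3, lam=0.5, rng=rng)
print(times)
print(removals)  # the removals always sum to n - m
```

Note that the total number of removed units is always n − m by construction, so the scheme is well defined regardless of the binomial draws.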
We consider the problem of constructing a predictive interval for the range of future observations from an exponential distribution. Two cases are considered: (1) fixed sample size (FSS) and (2) random sample size (RSS). We derive the predictive functions for both FSS and RSS in closed form. Random sample sizes appear in many life-testing applications, and the fixed sample size is a special case of the random sample size setting. Illustrative examples are given, and factors of the predictive distribution are tabulated. A comparison in savings is made with the above method. To show the applications of our results, we present some simulation experiments. Finally, we apply our results to some real data sets in life testing.
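For the fixed-sample-size case, a simple plug-in Monte Carlo approximation illustrates the idea: estimate the exponential mean from the observed sample, then simulate the range of future samples from the fitted model. This is a sketch under assumed data and parameter values, not the closed-form predictive function derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# observed lifetimes (illustrative data), assumed Exp(theta)
x = rng.exponential(scale=2.0, size=25)
theta_hat = x.mean()            # MLE of the exponential mean

# plug-in predictive distribution of the range of k future observations,
# approximated by Monte Carlo simulation
k = 10
sims = rng.exponential(scale=theta_hat, size=(100_000, k))
ranges = sims.max(axis=1) - sims.min(axis=1)

lower, upper = np.quantile(ranges, [0.025, 0.975])
print(f"95% plug-in prediction interval for the range: ({lower:.3f}, {upper:.3f})")
```

The closed-form results in the paper avoid this simulation step; the sketch is only meant to make the quantity being predicted concrete.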
This paper deals with Bayesian inference and prediction problems for the Burr type XII distribution based on progressive first-failure censored data. We consider Bayesian inference under a squared error loss function. We propose to apply a Gibbs sampling procedure to draw Markov Chain Monte Carlo (MCMC) samples, which have, in turn, been used to compute the Bayes estimates with the help of the importance sampling technique. We have performed a simulation study in order to compare the proposed Bayes estimators with the maximum likelihood estimators. We further consider two-sample Bayes prediction for predicting future order statistics and upper record values from the Burr type XII distribution based on progressive first-failure censored data. The predictive densities are obtained and used to determine prediction intervals for unobserved order statistics and upper record values. A real-life data set is used to illustrate the results derived.
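A minimal MCMC sketch for the Burr XII posterior can make the approach concrete. Note the simplifications relative to the paper: this uses a plain random-walk Metropolis sampler rather than the Gibbs-plus-importance-sampling scheme, works with a complete (uncensored) sample rather than progressive first-failure censored data, and assumes independent Gamma(1, 1) priors purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def burr12_loglik(x, c, k):
    # log-likelihood of a complete Burr XII sample:
    # f(x) = c*k*x^(c-1) * (1 + x^c)^(-(k+1)), x > 0
    return np.sum(np.log(c) + np.log(k) + (c - 1) * np.log(x)
                  - (k + 1) * np.log1p(x ** c))

def log_post(x, c, k, a=1.0, b=1.0):
    # independent Gamma(a, b) priors on c and k (illustrative choice)
    if c <= 0 or k <= 0:
        return -np.inf
    log_prior = (a - 1) * np.log(c) - b * c + (a - 1) * np.log(k) - b * k
    return burr12_loglik(x, c, k) + log_prior

# illustrative data from Burr XII(c=2, k=3) via the inverse CDF:
# F^{-1}(u) = ((1-u)^(-1/k) - 1)^(1/c)
u = rng.uniform(size=200)
x = ((1 - u) ** (-1 / 3.0) - 1) ** (1 / 2.0)

# random-walk Metropolis on (log c, log k)
theta = np.array([0.0, 0.0])                 # start at c = k = 1
lp = log_post(x, *np.exp(theta))
draws = []
for it in range(20_000):
    prop = theta + rng.normal(scale=0.1, size=2)
    lp_prop = log_post(x, *np.exp(prop))
    # include the log-Jacobian of the log transform in the acceptance ratio
    if np.log(rng.uniform()) < lp_prop - lp + prop.sum() - theta.sum():
        theta, lp = prop, lp_prop
    if it >= 5_000:                          # discard burn-in
        draws.append(np.exp(theta))

draws = np.asarray(draws)
c_hat, k_hat = draws.mean(axis=0)            # Bayes estimates under squared error loss
print(c_hat, k_hat)
```

Under squared error loss the Bayes estimate is the posterior mean, which is why the MCMC draws are simply averaged after burn-in.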
Introduction
Polycystic ovary syndrome (PCOS) is the most common endocrinopathy in reproductive-aged women in the United States, affecting around 7% of women. Although the specific cause of PCOS is unknown, it is assumed to be caused by a complex interplay of hereditary and environmental factors. Changes in luteinizing hormone (LH) action, insulin resistance, and a probable propensity to hyperandrogenism have all been related to the pathophysiology of PCOS (Dafopoulos et al., 2009). The importance of ovarian stimulation in the success of in vitro fertilization and embryo transfer (IVF-ET) treatment has long been recognised. As a result, since the 1980s, a gonadotropin-releasing hormone (GnRH) agonist protocol has been created and used in the context of IVF-ET treatment. By desensitising pituitary receptors, the GnRH agonist regimen aims to restrict the release of pituitary follicle-stimulating hormone (FSH) and luteinizing hormone (LH) (Huirne et al., 2007). The introduction of a GnRH antagonist regimen, which blocks pituitary receptors, has more recently provided another option for ovarian stimulation. The use of a GnRH antagonist strategy has been shown to minimize the length of the ovulatory stimulus and the occurrence of ovarian hyperstimulation syndrome. The shorter duration of analogue medication, the shorter duration of FSH stimulation, and the lower chance of developing ovarian hyperstimulation syndrome (OHSS) are all advantages of antagonists (Al-Inany et al., 2016). Because the GnRH antagonist protocol is straightforward, convenient, and flexible, and because it does not cause functional ovarian cysts or "menopausal" symptoms like the agonist protocol, many doctors and patients prefer it. However, results from randomised clinical trials show that the antagonist protocol retrieves fewer oocytes and yields lower pregnancy rates than the long agonist treatment (Kim et al., 2011).
Surprising observations may occur in survey sampling. The arithmetic mean estimator is sensitive to extremely large and/or small observations whenever they are selected in a sample. It can then give biased results, and one may be tempted to delete such observations from the selected sample. These extremely large and/or small observations, when known, can instead be retained in the sample and utilized as supplementary information to increase the precision of estimates. A supplementary variable is likewise a consistent source of improvement in the precision of estimates, and a suitable transformation can be utilized to obtain even more precise estimates. In the current study, we propose a robust class of separate-type quantile regression estimators of the population mean under a stratified random sampling design. The proposed class is based on extremely large and/or small observations and robust regression tools, under the framework of Särndal. The class is first defined for the situation in which the study variable is nonsensitive, meaning that it deals with subjects that do not cause embarrassment when respondents are directly questioned about them. The class is then extended to the situation in which the study variable has a sensitive nature or theme. Sensitive and stigmatizing themes are hard to explore using standard data collection procedures, since respondents are commonly hesitant to release information concerning their personal sphere. Population studies related to these themes (for example, homeless and non-regular workers, heavy drinkers, victims of assault and rape, and drug users) contain estimation errors attributable to nonresponse as well as untruthful reporting. These issues may be reduced by improving respondent cooperation through scrambled-response devices/techniques that conceal the true value of the sensitive variable.
Thus, three such techniques (namely the additive, mixed, and Bar‐Lev models) are incorporated for the purposes of the article. The efficiency of the proposed class is also assessed using a real-life dataset. Lastly, a simulation study is conducted to evaluate the performance of the estimators.
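To make the separate-type, quantile-regression idea concrete, a hypothetical member of such a class can be sketched in a Särndal-style difference form: within each stratum, a median (pinball-loss) regression slope replaces the ordinary least-squares slope, so a gross outlier in the study variable cannot distort the slope. Everything here (the data, the weights, the helper names `qreg_slope` and `separate_qreg_mean`) is an assumption for illustration, not the estimator class proposed in the article.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

def qreg_slope(x, y, tau=0.5):
    """Slope of a simple quantile (pinball-loss) regression of y on x."""
    def pinball(beta):
        r = y - beta[0] - beta[1] * x
        return np.sum(np.where(r >= 0, tau * r, (tau - 1) * r))
    res = minimize(pinball, x0=[np.median(y), 0.0], method="Nelder-Mead")
    return res.x[1]

def separate_qreg_mean(samples, X_bar, W):
    """Separate-type estimator of the population mean:
    sum_h W_h * ( ybar_h + b_h * (Xbar_h - xbar_h) ),
    where b_h is a robust median-regression slope in stratum h."""
    est = 0.0
    for (x_h, y_h), Xb_h, W_h in zip(samples, X_bar, W):
        b_h = qreg_slope(x_h, y_h)
        est += W_h * (y_h.mean() + b_h * (Xb_h - x_h.mean()))
    return est

# two illustrative strata where y is roughly linear in x, plus one gross outlier
x1 = rng.uniform(10, 20, 40); y1 = 2 * x1 + rng.normal(0, 1, 40)
x2 = rng.uniform(20, 30, 40); y2 = 3 * x2 + rng.normal(0, 1, 40)
y1[0] = 500.0                         # extreme observation the median slope resists
est = separate_qreg_mean([(x1, y1), (x2, y2)], X_bar=[15.0, 25.0], W=[0.5, 0.5])
print(est)
```

The design choice being illustrated is that the robust slope keeps the auxiliary-variable adjustment stable even when an extreme observation is retained in the sample rather than deleted.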
Censoring is very common in life-testing experiments and reliability studies. Progressive first-failure censoring and adaptive progressive Type-II censoring schemes are good choices in this situation. Record values and associated statistics are also of great importance in several real-life problems; there are a number of situations in which an observation is retained only if it is a record value. In this book, we propose different methods to estimate the parameters of the Burr-XII distribution using different censoring schemes and record values. We use the maximum likelihood estimator and different parametric bootstrap methods, and we provide a Bayesian method to estimate these parameters as well as the coefficient of variation, the stress-strength reliability model, and hazard functions. In the Bayesian method we propose two approaches to approximate the posterior: Lindley's approximation and Markov chain Monte Carlo (MCMC) methods. Statistical Bayesian predictions are also treated: Bayesian prediction intervals based on a progressive first-failure-censored Burr-XII informative sample are obtained and discussed.
Abstract: The coefficient of variation (CV) is extensively used in many areas of applied statistics, including quality control and sampling. It is regarded as a measure of stability or uncertainty, and can indicate the relative dispersion of data in the population around the population mean. In this article, based on progressive first-failure-censored data, we study the behavior of the CV of a random variable that follows a Burr-XII distribution. Specifically, we compute the maximum likelihood estimates and the confidence intervals of the CV based on the observed Fisher information matrix, using the asymptotic distribution of the maximum likelihood estimator and also the bootstrapping technique. In addition, we propose to apply Markov Chain Monte Carlo techniques to tackle this problem, which allows us to construct credible intervals. A numerical example based on real data is presented to illustrate the implementation of the proposed procedure. Finally, Monte Carlo simulations are performed to observe the behavior of the proposed methods. Keywords: Burr-type XII distribution; coefficient of variation; Markov Chain Monte Carlo; Gibbs sampling; progressive first-failure-censored sample; bootstrap. Acknowledgements: The authors thank the referees for their helpful remarks and suggestions that improved the original manuscript.
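The CV-with-bootstrap idea can be sketched for the complete-sample case using SciPy's built-in Burr XII distribution. This is a simplified illustration, not the paper's method: it fits a complete (uncensored) sample with `scipy.stats.burr12` rather than maximizing the progressive first-failure-censored likelihood, and the sample itself is simulated.

```python
import numpy as np
from scipy.stats import burr12

rng = np.random.default_rng(3)

def burr12_cv(c, d):
    """Coefficient of variation of Burr XII(c, d) from its first two moments."""
    mean, var = burr12.stats(c, d, moments="mv")
    return float(np.sqrt(var) / mean)

# illustrative complete sample from Burr XII(c=2, d=4)
data = burr12.rvs(2.0, 4.0, size=300, random_state=rng)

# ML estimate of the CV (location and scale fixed at 0 and 1)
c_hat, d_hat, _, _ = burr12.fit(data, floc=0, fscale=1)
cv_hat = burr12_cv(c_hat, d_hat)

# percentile bootstrap confidence interval for the CV
boots = []
for _ in range(200):
    resamp = rng.choice(data, size=data.size, replace=True)
    c_b, d_b, _, _ = burr12.fit(resamp, floc=0, fscale=1)
    boots.append(burr12_cv(c_b, d_b))
lo, hi = np.quantile(boots, [0.025, 0.975])
print(f"CV MLE = {cv_hat:.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```

The CV is well defined here because the first two moments of Burr XII(c, d) exist whenever c·d > 2, which holds for the fitted parameters in this range.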