Objective: Liver cancer is the third leading cause of cancer mortality in China. This study assesses the cost-effectiveness of sorafenib, lenvatinib, and FOLFOX4 in the treatment of advanced hepatocellular carcinoma (HCC) to inform clinical decision-making. Material and Methods: We used a Markov model to simulate the progression of HCC and calculate quality-adjusted life years (QALYs) and incremental cost-effectiveness ratios (ICERs) under two scenarios. Costs were obtained from the Yaozhi Network, while transition probabilities and utilities were derived from the REFLECT, EACH, and CELESTIAL clinical trials. One-way sensitivity analysis and probabilistic sensitivity analysis were conducted to evaluate model robustness and parameter uncertainty. Results: In Scenario A, using market-listed prices, sorafenib and lenvatinib were found to be more cost-effective than FOLFOX4, with ICERs of $11,635.28 and $1,499.93 per QALY, respectively, both below the cost-effectiveness threshold. In Scenario B, with centralized procurement prices, sorafenib had a negative ICER of -$7,351.26 per QALY, indicating cost savings with improved outcomes, while lenvatinib had an ICER of $2,685.99 per QALY. Sensitivity analysis revealed that drug costs, the utilities of disease progression, and discount rates were key determinants of the ICER values. Conclusion: Sorafenib and lenvatinib are significantly more cost-effective than FOLFOX4, particularly under centralized procurement pricing. These results support the inclusion of these treatments in public health policy to enhance healthcare outcomes and optimize resource allocation, thereby improving economic and quality-of-life outcomes for patients with HCC.
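To make the Markov-model calculation described in the Methods concrete, the sketch below runs a three-state cohort model (stable, progressed, dead) and computes discounted QALYs, costs, and an ICER for two comparators. All transition probabilities, utilities, costs, cycle length, and the discount rate are illustrative placeholders, not the study's inputs.

```python
# Minimal Markov cohort sketch for QALY/ICER calculation (hypothetical parameters).
import numpy as np

def run_markov(p_transition, utilities, cycle_cost, n_cycles=120, discount=0.03):
    """Simulate a cohort through (stable, progressed, dead) states and return
    discounted QALYs and costs per patient, using monthly cycles."""
    dist = np.array([1.0, 0.0, 0.0])   # whole cohort starts in the stable state
    qalys, costs = 0.0, 0.0
    for cycle in range(n_cycles):
        disc = 1.0 / (1.0 + discount) ** (cycle / 12)   # annual discount rate
        qalys += disc * np.dot(dist, utilities) / 12     # utilities are per year
        costs += disc * np.dot(dist, cycle_cost)
        dist = dist @ p_transition                       # advance the cohort one cycle
    return qalys, costs

# Hypothetical inputs for two comparators (rows: stable, progressed, dead).
p_a = np.array([[0.90, 0.07, 0.03], [0.0, 0.92, 0.08], [0.0, 0.0, 1.0]])
p_b = np.array([[0.86, 0.10, 0.04], [0.0, 0.90, 0.10], [0.0, 0.0, 1.0]])
utilities = np.array([0.76, 0.68, 0.0])
cost_a, cost_b = np.array([2000.0, 1200.0, 0.0]), np.array([900.0, 1200.0, 0.0])

q_a, c_a = run_markov(p_a, utilities, cost_a)
q_b, c_b = run_markov(p_b, utilities, cost_b)
icer = (c_a - c_b) / (q_a - q_b)   # incremental cost per QALY gained
print(f"ICER: {icer:,.0f} per QALY")
```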
In the blast furnace (BF) ironmaking process, the silicon content of hot metal is an important index: it is not only a significant quality variable but also reflects the thermal state of the BF. A novel genetic algorithm is proposed for forecasting [Si]. The time lag of each pivotal variable is calculated first, and, based on this time-lag analysis, the genetic algorithm searches for the equation that best describes the relationship between [Si] and the variables. With the resulting equation, the forecasting accuracy reaches 88%. The data used in this paper were collected from the No. 1 BF at Laiwu Iron and Steel Group Co.
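As a rough illustration of the time-lag plus genetic-algorithm idea, the sketch below shifts synthetic input series by assumed lags and evolves the coefficients of a linear [Si] equation with a simple real-coded GA. The variables, lag values, data, and linear functional form are all assumptions made for illustration, not the paper's configuration.

```python
# Time-lag alignment plus a simple genetic algorithm fitting a linear [Si] equation.
import numpy as np

rng = np.random.default_rng(0)
n = 500
blast_temp = rng.normal(1150, 30, n)       # hypothetical hot-blast temperature series
coal_rate  = rng.normal(140, 10, n)        # hypothetical pulverized-coal injection rate
lags = {"blast_temp": 2, "coal_rate": 4}   # assumed time lags (in taps)

# Build a lag-aligned design matrix and a synthetic [Si] target.
X = np.column_stack([np.roll(blast_temp, lags["blast_temp"]),
                     np.roll(coal_rate, lags["coal_rate"]),
                     np.ones(n)])[max(lags.values()):]
si = 0.0008 * X[:, 0] - 0.002 * X[:, 1] + 0.1 + rng.normal(0, 0.02, len(X))

def fitness(pop):
    # Negative mean squared error of each candidate coefficient vector.
    return -np.mean((X @ pop.T - si[:, None]) ** 2, axis=0)

pop = rng.normal(0, 0.01, (60, 3))          # population of coefficient vectors
for _ in range(300):
    fit = fitness(pop)
    parents = pop[np.argsort(fit)[-30:]]    # keep the fitter half
    children = parents[rng.integers(0, 30, 30)] + rng.normal(0, 0.001, (30, 3))  # mutate
    pop = np.vstack([parents, children])

best = pop[np.argmax(fitness(pop))]
print("fitted coefficients:", best)
```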
Object detection is a hot topic in the field of visual detection. Deep learning can largely compensate for the defect of traditional methods, which sacrifice real-time performance to improve accuracy. This paper mainly introduces the main networks and methods of two-stage and single-stage deep learning algorithms in the field of object detection. The advantages and disadvantages, usage scenarios, and development of each network are described in detail. Finally, the future development of this field is discussed.
The detection of contaminants in daily food and drinking water is crucial for global public health. For the detection of the heavy metals mercury (Hg) and arsenic (As), our group has proposed a novel paper-based microfluidic device integrated with a mobile phone and an image analysis pipeline to capture and analyze the sensor images on-site. Still, the detection of lower contamination levels remains challenging due to the small number of available data samples and the large intra-class variance of our application. To overcome this challenge, we explore traditional data augmentation and GAN-based augmentation techniques for synthesizing realistic colorimetric images, and we propose a CNN classifier for five-level contamination classification. Our proposed system is trained and evaluated on a limited dataset of 126 phone-captured images spanning five contamination levels. Our system yields 88.1% classification accuracy and 91.92% precision, demonstrating the feasibility of this approach. We believe that this approach of training deep learning models on limited detection image datasets presents a clear path toward phone-based contamination-level detection.
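A minimal sketch of the classification part is shown below: standard augmentation transforms feed a compact CNN that maps a sensor image to five contamination levels. The architecture, input size, and augmentation choices are illustrative assumptions rather than the paper's exact configuration, and the GAN-based augmentation is not reproduced here.

```python
# Small CNN with traditional augmentation for five-class colorimetric image classification.
import torch
import torch.nn as nn
from torchvision import transforms

# Traditional augmentation applied to scarce phone-captured images.
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.ToTensor(),
])

class SmallCNN(nn.Module):
    """Compact classifier mapping a 3x128x128 image to five contamination levels."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SmallCNN()
logits = model(torch.randn(4, 3, 128, 128))   # dummy batch in place of real sensor images
print(logits.shape)                            # torch.Size([4, 5])
```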
Properly exploiting image properties is crucial for boosting hyperspectral unmixing performance. Recent advanced image processing methods use deep architectures to learn image priors. However, these deep priors take effect in an implicit manner, and it is nontrivial to characterize their properties. Introducing extra regularization terms is an explicit way of encoding image priors, and the plug-and-play technique makes it possible to construct priors from data through denoisers. In this work, we propose a new unmixing framework that combines deep image priors (DIP) and plug-and-play (PnP) priors to further enhance unmixing performance. The alternating direction method of multipliers (ADMM) framework is used to separate the optimization problem into two subproblems. The first is solved with a U-net training step to obtain the DIP, and a proximal denoising step then solves the second subproblem to incorporate the denoiser prior. Experimental results demonstrate the effectiveness of the proposed method.
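The ADMM splitting can be illustrated with a toy plug-and-play unmixing loop. In this simplified sketch the first subproblem is solved as a closed-form least-squares update (standing in for the paper's U-net training step that yields the deep image prior), and a Gaussian filter stands in for a learned denoiser; endmembers and data are synthetic.

```python
# Toy PnP-ADMM loop for abundance estimation under a linear mixing model Y = M A.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(0)
bands, n_end, n_pix = 50, 3, 400
M = rng.random((bands, n_end))                      # synthetic endmember signatures
A_true = rng.dirichlet(np.ones(n_end), n_pix).T     # true abundances (n_end x n_pix)
Y = M @ A_true + 0.01 * rng.normal(size=(bands, n_pix))

rho = 1.0
A = np.zeros((n_end, n_pix)); Z = A.copy(); U = A.copy()
lhs = M.T @ M + rho * np.eye(n_end)                 # fixed system matrix for the A-update

for it in range(50):
    # Subproblem 1: data-fidelity update (stand-in for the U-net / DIP step).
    A = np.linalg.solve(lhs, M.T @ Y + rho * (Z - U))
    # Subproblem 2: proximal denoising step (plug-and-play prior), smoothing across pixels.
    Z = gaussian_filter1d(A + U, sigma=1.0, axis=1)
    # Dual variable update.
    U = U + A - Z

print("abundance RMSE:", np.sqrt(np.mean((A - A_true) ** 2)))
```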
An algorithm is implemented that constructs the Winterbottom morphology for any given crystal structure based on its preferred growth planes, expressed in Miller indices, and their corresponding surface energies and interfacial energy. By varying these parameters and using {100} and {111} crystal facets as substrates, this work generated a diverse range of Winterbottom morphologies. In addition, we have developed a model based on random forest regression to obtain the interfacial energy and facet-dependent surface energy from experimentally determined equilibrium shapes. Polynomial regression analysis is used to develop predictive models for surface energy. By comparing the experimental and simulation results, the accuracy and reliability of the simulation method are validated. The model's predictive capability and stability are verified through cross-validation and error analysis. Our findings indicate distinct surface energy differences between Winterbottom morphologies on different crystal facets, with a positive correlation observed between surface area and surface energy.
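The inverse-mapping idea, predicting an energy from shape descriptors of an equilibrium morphology and checking the model by cross-validation, can be sketched as follows. The descriptors (facet area fractions and a truncation ratio) and the synthetic data-generation rule are illustrative assumptions, not the features or data used in the study.

```python
# Random forest regression from hypothetical shape descriptors to an interfacial energy,
# with k-fold cross-validation as a stability check.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300
area_100 = rng.uniform(0.1, 0.6, n)     # hypothetical {100} facet area fraction
area_111 = rng.uniform(0.1, 0.6, n)     # hypothetical {111} facet area fraction
truncation = rng.uniform(0.0, 1.0, n)   # hypothetical truncation ratio at the substrate
X = np.column_stack([area_100, area_111, truncation])
# Synthetic target standing in for the interfacial energy extracted from each shape.
y = 1.5 * truncation - 0.8 * area_100 + 0.4 * area_111 + rng.normal(0, 0.05, n)

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")   # 5-fold cross-validation
print("mean CV R^2:", scores.mean())
```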
Hyperspectral recovery from RGB images has recently attracted considerable attention in many imaging and computer vision applications because it offers a low-cost way to acquire the spectral signatures of natural scenes. Current methods of recovering hyperspectral information from RGB measurements may fail for objects that share similar RGB features. In this paper, we introduce a novel framework with a U-net-based architecture, named C2H-Net, to reconstruct high-quality hyperspectral images from their RGB measurements. C2H-Net also exploits prior information, comprising the category and coordinate information of specific objects of interest, to address this limitation of existing methods. C2H-Net is highly accurate and outputs the "true" spectral information of objects and scenes. In addition, a new hyperspectral dataset, named C2H-Data (available on GitHub), is developed in this work and used for additional extensive evaluation of the proposed framework. C2H-Data contains a variety of objects with a large number of images and category information, which should be useful to the research community. We conduct experiments on three different datasets to show the effectiveness of C2H-Net. The experimental results show that our proposed method outperforms several existing methods.
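The idea of conditioning an RGB-to-hyperspectral network on prior information can be sketched minimally: category and coordinate maps are concatenated with the RGB input and fed to a small encoder-decoder that outputs the spectral bands. The channel layout, network depth, and the 31 output bands are assumptions for illustration, not the C2H-Net architecture.

```python
# Tiny encoder-decoder fusing RGB with prior maps to predict hyperspectral bands.
import torch
import torch.nn as nn

class TinyRGB2HSI(nn.Module):
    def __init__(self, prior_channels=2, out_bands=31):
        super().__init__()
        in_ch = 3 + prior_channels              # RGB plus prior maps
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_bands, 3, padding=1),
        )

    def forward(self, rgb, priors):
        x = torch.cat([rgb, priors], dim=1)      # fuse image and prior information
        return self.decoder(self.encoder(x))

model = TinyRGB2HSI()
rgb = torch.randn(1, 3, 64, 64)
priors = torch.randn(1, 2, 64, 64)               # e.g., a category mask and a coordinate map
print(model(rgb, priors).shape)                  # torch.Size([1, 31, 64, 64])
```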