Histopathological examination of biopsy tissue is still used to diagnose and classify brain cancers today. This approach is inconvenient, time-consuming, and prone to human error. These disadvantages emphasize the importance of establishing a fully automated deep learning-based system for classifying brain tumors. In this paper, we propose an approach that improves the classification of four types of brain tumors by providing the classifier with segmentation masks as semantic features. 1,452 multi-modal magnetic resonance images from the Siberian Brain Tumor Dataset (SBT) are used for training, validation, and testing. Training and validation are carried out with our experimental simple convolutional neural network and a pre-trained VGG16. The best-performing models are selected and tested on both SBT and the Brain Tumor Segmentation Challenge 2020 dataset (BraTS). The models with segmentation outperform all models without segmentation on the same dataset. We also found that, compared to a general-purpose network such as VGG16, a simple convolutional neural network trained on a specific task generalizes better when tested on a public dataset.
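A minimal PyTorch sketch of the general idea of passing a segmentation mask to the classifier as an additional semantic feature channel is given below; the channel counts, input size, simple architecture and four-class output are illustrative assumptions rather than the exact configuration used in the paper.

```python
# Sketch: concatenate a segmentation mask with the MRI sequences as an
# extra input channel before classification (assumed layout, not the
# paper's exact network).
import torch
import torch.nn as nn

class SimpleTumorClassifier(nn.Module):
    def __init__(self, mri_channels=4, num_classes=4):
        super().__init__()
        # +1 input channel for the (predicted or ground-truth) segmentation mask
        self.features = nn.Sequential(
            nn.Conv2d(mri_channels + 1, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, mri, seg_mask):
        # Stack MRI sequences and the segmentation mask along the channel axis
        x = torch.cat([mri, seg_mask], dim=1)
        x = self.features(x).flatten(1)
        return self.classifier(x)

# Example usage with random tensors standing in for a batch of slices
model = SimpleTumorClassifier()
mri = torch.randn(2, 4, 224, 224)   # four MRI sequences
seg = torch.randn(2, 1, 224, 224)   # one-channel segmentation map
logits = model(mri, seg)            # shape: (2, 4)
```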
Brain tumor segmentation is an important and time-consuming part of the usual clinical diagnosis process. Multi-class segmentation of different tumor types is a challenging task due to differences in shape, size, location and scanner parameters. Many 2D and 3D convolutional neural network architectures have been proposed to address this problem, achieving significant success. The 2D approach is generally faster and more popular for most such problems. However, 3D models can improve segmentation quality: accounting for context along the sagittal plane allows the network to learn three-dimensional features, at the cost of computationally expensive 3D operations, which in turn increases training time and decreases inference speed. In this paper, we compare the 2D and 3D approaches on two MRI datasets: the one from the BraTS 2020 competition and the private Siberian Brain Tumor dataset. In each dataset, every scan is represented by four sequences (T1, T1C, T2 and T2-FLAIR) annotated by two certified neuroradiologists. The datasets differ from each other in size, grade set and tumor types. Numerical comparison was performed using the Dice score. We provide a case-by-case analysis of the samples that caused the most difficulties for the models. The results obtained in our work demonstrate that 3D methods significantly outperform 2D ones while remaining robust with respect to data source and tumor type, which brings us a little closer to AI-assisted diagnosis.
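For reference, a minimal NumPy sketch of the Dice score used for the numerical comparison is shown below; the toy masks and the smoothing constant are illustrative assumptions.

```python
# Sketch: Dice coefficient for binary segmentation masks,
# Dice = 2*|A ∩ B| / (|A| + |B|).
import numpy as np

def dice_score(pred, target, eps=1e-7):
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example: two toy 3D masks standing in for one tumor sub-region of a volume
pred = np.zeros((4, 4, 4), dtype=np.uint8)
target = np.zeros((4, 4, 4), dtype=np.uint8)
pred[1:3, 1:3, 1:3] = 1
target[1:4, 1:3, 1:3] = 1
print(f"Dice: {dice_score(pred, target):.3f}")  # 0.800
```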
The study of brain tumor structure and its type-dependent variations is one of the most important research areas in which medical imaging techniques are used. The structural and statistical analysis of these lesions raises various related problems, such as the detection of neuro-oncological diseases, the delineation and segmentation of specific sub-regions (i.e., the necrotic core, the (non-)enhancing tumor and edema), the classification of tumor occurrence and the subsequent treatment prognosis. Almost all of these problems are solved numerically, with a clear tendency toward methods related to artificial intelligence (AI), often including deep learning (DL) networks. One of the most complicated, least researched and most challenging tasks in this field is the classification of tumor types. This difficulty can be explained by several reasons, the most important of which is the severe scarcity of open-source datasets that contain clinically confirmed tumor type designations based on radiological examination protocols. Magnetic resonance imaging (MRI) is the most common method for screening, primary detection and non-invasive diagnosis of brain diseases, as well as a source of recommendations for further treatment and observation. In this paper, we extend previous work on robust multi-sequence segmentation and classification methods that consider all available information from MRI scans by composing the T1, T1C, T2 and T2-FLAIR sequences. The approach is based on a clinical radiology hypothesis and presents an efficient way of combining and matching 3D methods to search for areas comprising the GD-enhancing tumor, in order to significantly improve the model's performance on the particular applied numerical problems of brain tumor classification and metastasis segmentation. All investigations performed and results presented are based on the private Siberian Brain Tumor dataset, which includes labeled volumetric MRI scans covering a wide variety of tumors and the associated clinically relevant ground truth (GT) information.
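A minimal NumPy sketch of composing co-registered T1, T1C, T2 and T2-FLAIR volumes into a single multi-channel input, as a 3D network would consume it, is shown below; the shapes and per-sequence z-score normalization are illustrative assumptions, not the exact preprocessing used in the paper.

```python
# Sketch: stack the four MRI sequences into a (4, D, H, W) volume with
# per-sequence normalization over non-background voxels (assumed scheme).
import numpy as np

def compose_sequences(t1, t1c, t2, flair):
    volumes = []
    for vol in (t1, t1c, t2, flair):
        vol = vol.astype(np.float32)
        mask = vol > 0                          # ignore background voxels
        mean, std = vol[mask].mean(), vol[mask].std() + 1e-7
        volumes.append((vol - mean) / std)
    return np.stack(volumes, axis=0)

# Example with random volumes standing in for co-registered scans
d, h, w = 32, 64, 64
scans = [np.random.rand(d, h, w) for _ in range(4)]
composite = compose_sequences(*scans)
print(composite.shape)  # (4, 32, 64, 64)
```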
The main goal of any industry is to increase productivity, which in the oil and gas field means increasing the value of reservoir oil assets by producing oil in an effective and economically efficient manner. The objective of this study is to develop a water flood model for oil production enhancement using artificial neural networks and to provide a model that maximizes oil production for a given water injection, which in turn extends the life of mature fields and decreases operational costs. The data comprise daily water injection rates, oil production rates, water production, and gas production from 2004 to 2016 for 577 injection wells, 1,344 production wells, and 36 events that occurred during this period. A comparative analysis of deep neural models, namely the Multi-Layer Perceptron, Convolutional Neural Networks, Long Short-Term Memory networks, and Gated Recurrent Neural Networks, was performed, and the Gated Recurrent Neural Network outperformed the others. To minimize the loss and improve the performance of the water flood model, tabular data mixup was applied to all of the models above; the results showed that the mixup-trained Gated Recurrent Neural Network outperformed all the other models. To maximize oil production, the Nelder-Mead optimization method was adopted to find appropriate water injection rates. A simple two-layer multi-layer perceptron was used to model the nonlinear relationship between water injection and oil production in order to avoid excessive function complexity.
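A minimal NumPy sketch of mixup applied to tabular training batches is shown below; the alpha value, array shapes and helper name `tabular_mixup` are illustrative assumptions rather than the exact procedure used in the study.

```python
# Sketch: tabular mixup blends pairs of training examples and their
# targets with a Beta-distributed weight.
import numpy as np

def tabular_mixup(x, y, alpha=0.2, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))
    x_mixed = lam * x + (1.0 - lam) * x[perm]
    y_mixed = lam * y + (1.0 - lam) * y[perm]
    return x_mixed, y_mixed

# Example: a batch of 8 wells with 30 daily injection-rate features each
rng = np.random.default_rng(0)
x = rng.random((8, 30))          # injection-rate features
y = rng.random((8, 1))           # oil-production targets
x_mix, y_mix = tabular_mixup(x, y, rng=rng)
print(x_mix.shape, y_mix.shape)  # (8, 30) (8, 1)
```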
In speech synthesis and speech enhancement systems, mel-spectrograms need to be precise acoustic representations. However, the generated spectrograms are often over-smoothed and cannot produce high-quality synthesized speech. Inspired by image-to-image translation, we address this problem with a learning-based post-filter that combines Pix2PixHD and ResUNet to reconstruct the mel-spectrograms together with super-resolution. From the resulting super-resolution spectrogram network, we can generate enhanced spectrograms that produce high-quality synthesized speech. Our proposed model achieves improved mean opinion scores (MOS) of 3.71 and 4.01 over baseline results of 3.29 and 3.84, using the Griffin-Lim and WaveNet vocoders, respectively.
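A minimal PyTorch sketch of a learned residual post-filter that refines an over-smoothed mel-spectrogram, treating it as an image-to-image translation problem, is shown below; the tiny CNN stands in for the Pix2PixHD/ResUNet combination, and the mel dimensions are illustrative assumptions.

```python
# Sketch: a small residual CNN that predicts a correction to an
# over-smoothed mel-spectrogram (stand-in for the full post-filter).
import torch
import torch.nn as nn

class MelPostFilter(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, mel):
        # Predict a residual correction and add it to the smooth input
        return mel + self.net(mel)

# Example: a batch of 80-bin mel-spectrograms with 200 frames
mel = torch.randn(2, 1, 80, 200)
refined = MelPostFilter()(mel)
print(refined.shape)  # torch.Size([2, 1, 80, 200])
```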
The article sets the task of creating and implementing models of schoolchildren's development based on the core ideas of the cultural-historical approach proposed by L.S. Vygotsky's scientific school. The authors believe that such models can help overcome the limitations of explanatory models built on stimulus-reactive algorithmic strategies, which negatively affect human development. Using material from the project "Schoolchildren as Scientific Volunteers", the authors show how this mediation model can be arranged using digital tools and a system with elements of artificial intelligence. The project sets the task of forming a research vision (a new functional organ) in schoolchildren with the help of digital mediating tools. Using a cultural-historical approach to AI as a means of developing thinking and research skills while working with information, we propose to consider ethical AI frames as part of an educational environment that promotes the adaptation of risky technologies. Critical analysis of risk-generating technologies has been developed in bioethics. As ethical guidelines, we use the principles of precaution and proactive response. This article is the first part, describing the preparatory phase of the study. The second part will show how the project proceeded, what the first results were and what difficulties were encountered during the implementation of the tasks.