Today's buzzwords, namely artificial intelligence (AI), machine learning (ML), and deep learning (DL), have been slowly entering the medical industry over the past decade, bringing technologies and solutions that are changing the structure of the medical field. These technologies are connected, yet each offers something different, changing how medical professionals treat patients. DL provides the power to transform and deliver a richer layer of medical technology solutions. It is progressively available through innovative technologies that have broad applications in the real-world medical field. DL plays a vital role in providing insight to medical professionals, helping to identify diseases at an early stage and thus deliver better tailored and more effective patient care. DL has become a well-known initiative that everyone has some idea about. The AI-DL industry has been developing quickly, offering significant development opportunities for the medical industry to bring about substantial change. According to Gartner, almost 37% of all sectors surveyed use DL in their profession. It has been foreseen that by 2022, around 80% of modern developments will use AI and DL. It has been observed that the year 2021 would bring the most powerful DL and AI trends that might reshape the country's economic, social, and medical domains. This chapter describes DL, the history of DL in the medical field, the barriers to DL, and DL opportunities in the medical industry. It also focuses on the various methods or algorithms of DL, such as the convolutional neural network, deep autoencoder, deep Boltzmann machine, and deep belief network, in biological systems, medical imaging, and health record and report management. It also discusses the various applications of DL in healthcare and how DL is used in medical image analysis.
The standard language is assessed, and the feelings conveyed by the individual are brought out. The purpose of sentiment analysis is to determine the polarity of a person's textual opinion. Most people use social media to market themselves and share knowledge, thoughts, ideas, and experiences with the rest of the world. Sentiment analysis has been studied by several internet companies that offer material based on human emotions expressed through social media. Many people use social media to post and share their thoughts and feelings. The sentiment analysis standard classifies a sentence into forms of emotion, each of which has an intensity. This study includes a sentiment analysis of diverse perspectives on social topics that are trending on social media, along with a recommendation system. This method recommends related content to viewers based on their current statement or issue. The algorithm also suggests customized video sets to users based on their social media activity, such as diverse perspectives on a particular social problem about which they actively express their thoughts. NLP algorithms are utilized for both the sentiment analysis and the recommendation system.
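The polarity scoring described above can be sketched minimally as follows; the word lists and the averaging rule are illustrative assumptions, not the chapter's actual NLP model:

```python
# A minimal lexicon-based polarity sketch (the lexicon and scoring rule
# here are illustrative assumptions, not the study's actual NLP method).

POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "poor", "sad"}

def polarity(text: str) -> float:
    """Return a score in [-1, 1]: positive, negative, or neutral (0)."""
    tokens = text.lower().split()
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(polarity("I love this great idea"))    # 1.0
print(polarity("terrible and sad outcome"))  # -1.0
print(polarity("the sky is blue"))           # 0.0 (neutral)
```

A real system would replace the hand-picked word sets with a trained model, but the interface, text in and a signed intensity out, is the same.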
The primary aim of this work was to compare the performance of ML and DL algorithms on the CHD dataset. The dataset holds 4,240 records with 16 features. The entire process was carried out step by step, starting with visualizing the dataset and removing the missing values, followed by selecting the best performing features using the Boruta algorithm, which identified AGE, BP, and BMI as the best. Since the number of selected features was too low, the top eight features were taken. The next step was to balance the classes in the dataset; as the classes were found to be imbalanced, the SMOTE method was applied to balance them, and finally the standard scaler was applied to normalize the dataset. Once all these data preparation and pre-processing steps were done, ML and DL methods, namely logistic regression, decision tree, k-nearest neighbours, random forest, naïve Bayes, support vector machine, and a feed-forward neural network, were deployed to evaluate the performance of the models. Several metrics were calculated and compared, from which it was identified that the random forest and neural network methods gave the same results. Since the working methodologies and the assumptions made by the algorithms differ, it could not be concluded from these results that one particular classification method is the best. One major drawback is that the available dataset does not hold all the features that are mandatory to confirm the disease. The dataset was not approved by any doctors in the field, and hence the model developed could not be used for real-time implementation.
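Two of the pre-processing steps above, class balancing and standard-scaler normalization, can be sketched as follows; naive random oversampling stands in for SMOTE here as a simplifying assumption, and the toy feature values are made up for illustration:

```python
import numpy as np

# Simplified sketch of two pre-processing steps from the pipeline:
# class balancing (naive random oversampling stands in for SMOTE here)
# and standard-scaler normalization. Feature values are illustrative.

def oversample(X, y, seed=0):
    """Duplicate minority-class rows until both classes are equal in size."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    minority = classes[np.argmin(counts)]
    deficit = counts.max() - counts.min()
    idx = rng.choice(np.flatnonzero(y == minority), size=deficit, replace=True)
    return np.vstack([X, X[idx]]), np.concatenate([y, y[idx]])

def standard_scale(X):
    """Zero-mean, unit-variance scaling per feature column."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

# Hypothetical BP/BMI rows with an imbalanced class label
X = np.array([[120., 25.], [140., 30.], [160., 35.], [130., 28.]])
y = np.array([0, 0, 0, 1])
Xb, yb = oversample(X, y)
Xs = standard_scale(Xb)
print(yb.tolist())  # classes are now balanced
```

SMOTE itself interpolates synthetic minority samples between nearest neighbours rather than duplicating rows, which is why it was preferred in the study.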
Rainfall prediction is considered an esteemed research area that impacts the day-to-day life of Indians. The predominant income source of most of the Indian population is agriculture, and rainfall prediction helps farmers make appropriate decisions pertaining to cultivation and irrigation. The primary objective of this investigation is to develop a technique for rainfall prediction utilising the MapReduce framework and the convolutional long short-term memory (ConvLSTM) method to circumvent the limitations of higher computational requirements and the inability to process a large number of data points. In this work, an adaptive salp-stochastic-gradient-descent-based ConvLSTM (adaptive S-SGD-based ConvLSTM) system has been developed to predict rainfall accurately, process long time-series data, and eliminate the vanishing gradient problem. To optimize the hyperparameters of the ConvLSTM model, the proposed S-SGD methodology combines SGD with the salp swarm algorithm (SSA). The adaptive S-SGD-based ConvLSTM has been developed by integrating an adaptive concept into S-SGD. It tunes the weights of the ConvLSTM optimally to achieve better prediction accuracy. Assessment measures, such as the percentage root mean square difference (PRD) and mean square error (MSE), were employed to compare the suggested method with previous approaches. The developed system demonstrates high prediction accuracy, achieving minimal values for MSE (0.0042) and PRD (0.8450).
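The general idea of blending a gradient step with a swarm-style attraction toward the best candidate can be illustrated on a toy 1-D quadratic loss; the update rule and coefficients below are illustrative assumptions, not the chapter's exact adaptive S-SGD rule:

```python
import numpy as np

# Toy sketch of a hybrid "gradient + swarm" search on the 1-D loss
# f(w) = (w - 3)^2. Blending an SGD step with a pull toward the best
# agent is an illustrative assumption, not the actual adaptive S-SGD
# update; lr and pull coefficients are arbitrary.

def hybrid_minimize(n_agents=5, steps=200, lr=0.1, pull=0.05, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.uniform(-10, 10, n_agents)         # swarm of candidate weights
    for _ in range(steps):
        grad = 2 * (w - 3)                     # gradient of (w - 3)^2
        best = w[np.argmin((w - 3) ** 2)]      # best agent ("food source")
        w = w - lr * grad + pull * (best - w)  # SGD step + swarm attraction
    return w

w = hybrid_minimize()
print(np.round(w, 3))  # all agents converge near the optimum w = 3
```

In the actual system this hybrid search tunes ConvLSTM weights and hyperparameters rather than a scalar, but the interplay of the two update terms is the same.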
In the modern era, several opportunities are provided to transfer data through graph models, in which digital transformation plays a vital role. Maintaining the data across several devices causes processing time delays. Data collection is an important task in every data processing unit, as are storing this type of information and securing it through a database. To improve this process, data retrieval is done using a graph data model. The proposed method finds the best way to store each record in a graph database rather than in another cloud or distributed database. Various techniques that provide a better solution for data processing are applied to schema-free graph databases. To provide a good solution without any time delay, graph analytics algorithms help in making decisions for better results. In this method, many applications are taken as case studies for finding the best relationships in a given graph database. The collected data are converted into graph format, which offers an easy way of finding duplication. The data model generated on each vertex is converted into low- and high-dimensional data forms. This chapter goes over a number of real-time Neo4j applications that are used to find optimal relationships in various datasets in an efficient manner.
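The duplicate-finding idea, converting records into vertices and linking those with identical properties, can be sketched in a few lines; the record fields and the signature rule below are hypothetical examples, not an actual Neo4j schema from the chapter:

```python
from collections import defaultdict

# Minimal sketch of storing records as graph vertices (adjacency dict)
# and flagging duplicates by a shared property signature. The fields
# and data are made-up examples, not a schema from the chapter.

records = [
    {"id": 1, "name": "Asha", "city": "Chennai"},
    {"id": 2, "name": "Ravi", "city": "Mumbai"},
    {"id": 3, "name": "Asha", "city": "Chennai"},  # duplicate of id 1
]

# Each record becomes a vertex; an edge links vertices whose signatures
# (all fields except the id) are identical, exposing likely duplicates.
edges = defaultdict(list)
seen = {}
for rec in records:
    sig = (rec["name"], rec["city"])
    if sig in seen:
        edges[seen[sig]].append(rec["id"])  # duplicate edge
    else:
        seen[sig] = rec["id"]

print(dict(edges))  # {1: [3]}
```

In a graph database such as Neo4j the same check becomes a relationship query over nodes, so duplication shows up as structure rather than requiring a full table scan.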
Water is the elixir of life, so rainfall becomes an inevitable part of every nation, deciding the prosperity and economic scenario of a country. In this fast-moving world, estimation of rainfall has become a necessity, especially when global heat levels are soaring. The approach proposed here uses digital cloud images to predict rainfall. Considering cost factors and security issues, it is better to predict rainfall from digital cloud images rather than satellite images. The status of the sky is found using wavelets, the status of the cloud is found using the Cloud Mask Algorithm, and the type of cloud is determined using the k-means clustering technique. As per previous research, Nimbostratus and Cumulonimbus are the rainfall clouds, while other clouds, such as Cumulus, produce rain only on rare occasions. The type of rainfall cloud is predicted by analyzing the color and density of the cloud images, which are stored as JPEG files in the file system. Analysis was done over several images. The result predicts the type of cloud along with information such as its classification, appearance, and altitude, and provides the status of the rainfall. The proposed approach can be utilized by common people, who can simply take a photograph of a cloud and come to a conclusion about the status of the rainfall and obtain the desired details.
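The k-means clustering step can be illustrated on synthetic 1-D pixel intensities; the two-cluster choice, the initialization, and the pixel values below are assumptions standing in for the chapter's actual cloud-image segmentation:

```python
import numpy as np

# Toy 1-D k-means separating dark "sky" pixels from bright "cloud"
# pixels. The synthetic intensities and two-cluster setup are
# illustrative assumptions, not the chapter's actual image pipeline.

def kmeans_1d(x, k=2, iters=10):
    """Cluster scalar values by alternating assign/update steps."""
    centers = np.linspace(x.min(), x.max(), k)  # deterministic init
    for _ in range(iters):
        # assign each value to its nearest center
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        # move each center to the mean of its assigned values
        centers = np.array([x[labels == j].mean() for j in range(k)])
    return labels, centers

# Synthetic gray levels: dark sky pixels (~40) and bright cloud (~200)
pixels = np.array([35., 40., 45., 38., 195., 200., 205., 198.])
labels, centers = kmeans_1d(pixels)
print(np.round(centers, 1))  # [ 39.5 199.5]
```

On real JPEG frames the same routine runs over color/intensity features per pixel, and the resulting clusters feed the cloud-type analysis.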
A physically disabled person nourishing himself was considered a great deed in the 19th century. But this age has become an era where talents are what matter, in spite of physical weakness. Presented here is a newfangled technology through which a robot imitates human handwriting and acts as a proxy during strenuous circumstances. The robot acts according to the voice commands given to it, thereby compensating for the physical ailment of the needy. Writing is achieved by feeding a particular style in the form of images, which is stored as fonts in the robot's memory. This not only assists disabled persons but also supports people who need a proxy for their handwriting. When a particular font is fed, the robot writes what is dictated, thereby acting as a human hand. The methodology proposed here is highly practical, and the design shows improved efficiency.