As a consequence of technical improvements in social media platforms, people increasingly express their views and opinions through visual images with text captions rather than in plain text alone. With the advent of visual media such as images, videos, and animated GIFs (graphics interchange format), sentiment analysis research has expanded to encompass the study of social interaction and opinion prediction from visuals. Individual studies have achieved important advances in text sentiment analysis and image sentiment analysis separately, but the combination of image sentiment analysis with text captions remains underexplored and requires further investigation. This study presents an intermodal analysis technique, deep learning-based intermodal (DLBI) analysis, which models the link between words and pictures in a variety of scenarios. Opinion information is gathered in numerical vector form using a VGG network, and the information is then transformed through a mapping procedure. Future opinions are predicted from the resulting information vectors using active deep learning. A series of simulation tests is conducted to evaluate the proposed method. The findings indicate that the model outperforms the alternative model, delivering higher accuracy and precision together with reduced latency and error rate.
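The abstract does not detail the mapping procedure between the image and text modalities. As a minimal sketch, assuming each image and its caption have already been reduced to fixed-length opinion vectors (e.g. pooled VGG features for the image and text-encoder features for the caption; the vectors below are illustrative, not real model outputs), the intermodal link can be scored with cosine similarity:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 4-dimensional opinion vectors for an image and its caption.
image_vec = [0.9, 0.1, 0.4, 0.2]    # e.g. pooled VGG features (illustrative)
caption_vec = [0.8, 0.2, 0.5, 0.1]  # e.g. text-encoder features (illustrative)

# A score near 1 indicates the two modalities express an aligned opinion.
score = cosine_similarity(image_vec, caption_vec)
```

In practice the mapping would be learned end to end rather than fixed, but a similarity measure of this kind is the usual basis for relating image and text embeddings in a shared vector space.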
In recent years, intelligent emotion recognition has been an active research area in computer vision, aimed at understanding dynamic communication between machines and humans. Automatic emotion recognition allows a machine to assess and acquire the human emotional state and to predict intent from facial expressions. Researchers have mainly focused on speech features and body motion; identifying affect from facial expressions remains a less explored topic. Hence, this paper proposes a novel approach to intelligent facial emotion recognition using optimal geometrical features derived from facial landmarks with a VGG-19 fully connected neural network (FCNN). A Haar cascade is used to detect the subject's face, from which distance and angle measurements are determined. Facial expressions are then classified from the relevant features extracted with normalized angle and distance measures. The experimental analysis shows high accuracy of 94.22% on the MUG dataset and 86.45% on the GEMEP dataset.
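The paper's exact feature set is not reproduced here. As a minimal sketch, assuming landmark coordinates are available from a face detector (the points below, the feature choices, and normalization by inter-ocular distance are all illustrative assumptions), normalized distance and angle features can be computed as:

```python
import math

def distance(p, q):
    """Euclidean distance between two 2-D landmark points."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def angle(p, vertex, q):
    """Interior angle (degrees) at `vertex` formed by points p and q."""
    a = (p[0] - vertex[0], p[1] - vertex[1])
    b = (q[0] - vertex[0], q[1] - vertex[1])
    cos_t = (a[0] * b[0] + a[1] * b[1]) / (math.hypot(*a) * math.hypot(*b))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

# Hypothetical landmark coordinates in pixels: eye corners and a mouth point.
left_eye, right_eye, mouth = (120, 150), (200, 150), (160, 230)

inter_ocular = distance(left_eye, right_eye)                # reference scale
norm_mouth_drop = distance(left_eye, mouth) / inter_ocular  # scale-invariant
mouth_angle = angle(left_eye, mouth, right_eye)             # angle at the mouth point
```

Dividing each distance by the inter-ocular distance makes the features invariant to face size in the frame, which is the usual motivation for the normalization step the abstract mentions.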
This paper deals with the mathematical modeling of the second wave of the COVID-19 pandemic in India; we also discuss the uniform boundedness of the system, equilibrium analysis, and the basic reproduction number R0. We calculated analytic solutions by the Homotopy Perturbation Method (HPM) and used Mathematica 12 software for numerical analysis up to the 8th-order approximation. We checked the error of the approximation, including the residual error, the absolute error, and the h-curve of the initial derivative of the square error, up to the 8th-order approximation. The basic reproduction number ranges between 0.8454 and 2.0317 in the numerical simulation, which helps to identify the fluctuations of the whole system. Finally, our proposed model is validated against real-life data for the five most affected states.
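The model's compartments and fitted parameters are not reproduced here. As a minimal sketch using a classical SIR model (the parameter values below are illustrative, not fitted to the Indian second-wave data), the basic reproduction number and a simple numerical solution can be obtained as:

```python
def simulate_sir(beta, gamma, s0, i0, rec0, days, dt=0.1):
    """Forward-Euler integration of the SIR equations
       dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I
    with S, I, R expressed as fractions of a normalized population."""
    s, i, r = s0, i0, rec0
    for _ in range(int(days / dt)):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        dr = gamma * i
        s, i, r = s + dt * ds, i + dt * di, r + dt * dr
    return s, i, r

beta, gamma = 0.30, 0.15   # illustrative transmission and recovery rates
R0 = beta / gamma          # basic reproduction number, here 2.0
s, i, r = simulate_sir(beta, gamma, s0=0.99, i0=0.01, rec0=0.0, days=160)
```

With R0 above 1 the infected fraction first grows and then declines as susceptibles are depleted; an R0 range such as the reported 0.8454 to 2.0317 therefore spans both dying-out and epidemic regimes. The paper itself solves its system semi-analytically by HPM rather than by the Euler stepping used in this sketch.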
A flood is defined as a surplus of water or sludge on dry soil, originating from the runoff of water within watercourses fed by various sources such as canals. Intense rainfall, deforestation, urbanization, poor water and sewerage management, and a lack of attention to the hydrological environment are the causes of urban flooding. In addition, flood assessment is hindered by delays in getting flood data from the affected area to the control room. To diminish the impact of flooding, captured data must be moved immediately from the affected region to the observation room, without waiting for a fully fledged technique, over wireless links carrying data from the Internet of Things (IoT). The Internet of Everything (IoE) is a concept that extends the Internet of Things. Because wireless nodes operate in a changeable environment, information distribution becomes unstable and uncertain. There is therefore a requirement for data from flood-prone regions, which may be degraded between the source and the control room. In the past, many techniques have been set up and put into practice for monitoring flood spots. However, one of the biggest challenges is to share data between source and destination nodes without delay or loss. In addition, the quality of the received video must be considered at the same time, as it is a difficult task to determine and preplan flood events fully from a natural disaster; this makes the scientific problem more complicated than simply receiving information in a wireless ad-hoc environment using IoT-based sensors.
Considering all the reasons above, the proposed work comprises three goals: the design of a mobile ad-hoc flooding environment, the development of an urban flood high-definition video surveillance system using IoT-based sensors, and experimental work in simulation.