In this paper, a semantic-probabilistic network for event recognition is proposed. The approach uses a pre-defined domain ontology to describe events and scenarios in the scene as a hierarchical decomposition of simple concepts and variables, and then performs an automated conversion of the ontology into a Bayesian network. A novel approach to calculating the weights of Bayesian network nodes is used, based on the weighted relations between concepts of the ontology. We then evaluate the performance of our approach on gesture recognition in a human gesture recognition system.
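The conversion of weighted ontology relations into node probabilities can be illustrated with a minimal sketch. The concept names, the relation weights, and the noisy-OR-style normalization below are illustrative assumptions, not the paper's exact formulation:

```python
from itertools import product

# Hypothetical sketch: derive a child node's conditional probability
# table (CPT) from weighted ontology relations between the child concept
# and its parent concepts. Weights and concept names are assumed.

def cpt_from_weights(parent_weights):
    """Normalize relation weights into P(child=true | parent states).

    For each combination of binary parent states, the probability of the
    child concept being true is the normalized sum of the weights of the
    parents that are active (a noisy-OR-like simplification)."""
    total = sum(parent_weights.values())
    parents = sorted(parent_weights)
    cpt = {}
    for states in product([0, 1], repeat=len(parents)):
        active = sum(parent_weights[p] for p, s in zip(parents, states) if s)
        cpt[states] = active / total
    return cpt

# Assumed evidence concepts for a "gesture" node with relation weights.
weights = {"hand_raised": 0.7, "arm_moving": 0.3}
table = cpt_from_weights(weights)
print(table[(1, 1)])  # both evidence concepts active
```

In a full system, one such table would be generated per ontology concept, and standard Bayesian inference would then score events against observed evidence.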
This paper focuses on the development of a methodology for compressing neural networks that is based on pruning hidden-layer neurons. The neural networks in question are created to process the data generated by the numerous sensors of a transducer network deployed in a smart building. The proposed methodology implements a single approach for compressing both Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) used for classification and regression tasks. The main principle behind this method is the dropout mechanism, which is normally employed for regularizing neural networks. The idea of the proposed method is to select the optimal exclusion probability of a hidden-layer neuron based on the redundancy of that neuron. The novelty of this method is the use of a custom compression network based on an RNN, which makes it possible to determine the redundancy parameter not just in a single hidden layer but across several layers. An additional novel aspect is the iterative optimization of the optimizer network, which continuously improves the redundancy-parameter calculator for the input network. For the experimental evaluation of the proposed methodology, the task of image recognition with a low-resolution camera was chosen, and the CIFAR10 dataset was used to emulate this scenario. The VGGNet convolutional neural network, which contains convolutional and fully connected layers, was used as the network under test. Two state-of-the-art methods were taken as baselines: MagBase, which is based on the sparsification principle, and a method based on sparse representation that employs the SFAC sparse coding approach. The results of the experiment demonstrated that the number of parameters in the compressed model is only 2.56% of the original input model.
This allowed us to reduce inference time by 93.7% and energy consumption by 94.8%. The proposed method makes it possible to use deep neural networks effectively in transducer networks that employ an edge computing architecture. This, in turn, allows the system to process data in real time, reduce energy consumption and inference time, and lower the memory and storage requirements of real-world applications.
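The redundancy-driven pruning idea can be sketched as follows. The redundancy measure used here (inverse mean absolute outgoing weight) and the keep ratio are assumptions; the paper estimates redundancy across layers with an RNN-based compression network rather than this single-layer heuristic:

```python
import numpy as np

# Minimal sketch: neurons whose outgoing weights contribute little are
# assigned a high exclusion probability and pruned, loosely following
# the dropout-probability idea described in the abstract.

def prune_layer(W_out, keep_ratio=0.5):
    """W_out: (hidden, out) weights leaving the layer being pruned.
    Returns the sorted indices of neurons to keep."""
    # Low outgoing weight magnitude -> high redundancy score.
    redundancy = 1.0 / (1e-8 + np.abs(W_out).mean(axis=1))
    p_drop = redundancy / redundancy.max()       # exclusion probability in [0, 1]
    n_keep = max(1, int(keep_ratio * W_out.shape[0]))
    keep = np.argsort(p_drop)[:n_keep]           # lowest drop probability survives
    return np.sort(keep)

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))
W[2] *= 0.01                       # make neuron 2 nearly redundant
kept = prune_layer(W, keep_ratio=0.5)
print(kept)                        # neuron 2 should not survive
```

After pruning, the corresponding rows and columns of adjacent weight matrices are removed, which is what yields the reported reductions in parameters, inference time, and energy.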
This paper aims to develop a method for depth map generation based on object localization in images obtained from a stereo pair. The proposed solution describes objects by the following informative elements: contours, interest points (points of greatest curvature of the contour), and the center of mass of the object. Moreover, to describe the contour of the image, it is proposed to use methods with adjustable detail based on the wavelet transform, which has frequency-selective properties. The novelty of this method is the possibility of obtaining an approximate depth map by simplifying the calculation of stereo image disparity values, which are traditionally used to generate a depth map. Software was developed based on the proposed solutions, and modeling confirmed the effectiveness of the approach. The proposed method significantly reduces the number of computational operations and, consequently, improves depth map generation performance, making it suitable for mobile navigation systems operating under limited computing and energy resources. The method provides object detection and spatial positioning and makes it possible to obtain reliable information about the distance to objects for other subsystems that rely on technical vision, for example, navigation systems for visually impaired people, robotic devices, etc.
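The per-object depth idea can be illustrated with a small sketch: once an object's center of mass has been located in both images of a rectified stereo pair, its depth follows from the classic relation Z = f·B/d, where d is the horizontal disparity in pixels. The focal length, baseline, and point coordinates below are assumed values, and the contour/wavelet description step from the paper is omitted:

```python
# Illustrative sketch, not the paper's full algorithm: approximate depth
# from the disparity of matched object centers of mass.

def center_of_mass(points):
    xs, ys = zip(*points)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def depth_from_centers(left_pts, right_pts, focal_px=700.0, baseline_m=0.12):
    """Depth of one object from its contour points in both images."""
    (xl, _), (xr, _) = center_of_mass(left_pts), center_of_mass(right_pts)
    disparity = xl - xr                      # horizontal shift in pixels
    if disparity <= 0:
        raise ValueError("object must shift left-to-right in a rectified pair")
    return focal_px * baseline_m / disparity  # Z = f * B / d

# Assumed contour points of one object in the left and right images.
left = [(320, 200), (340, 220), (330, 240)]
right = [(300, 200), (320, 220), (310, 240)]
print(round(depth_from_centers(left, right), 2))  # 20 px disparity -> 4.2 m
```

Computing one disparity per object instead of per pixel is what makes the approximate depth map cheap enough for resource-constrained navigation systems.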
The paper focuses on building content-based image retrieval systems. The main challenges in constructing such systems are considered, their components are reviewed, and a brief overview is given of the main methods and techniques used in this area to implement the main components of image search systems. As one option for solving this problem, an image retrieval methodology based on binary space partitioning and perceptual hashing is proposed. Binary space partitioning trees are data structures obtained as follows: the space is partitioned by a hyperplane into two half-spaces, and then each half-space is recursively partitioned until each node contains only a trivial part of the input features. Perceptual hashing algorithms make it possible to represent an image as a 64-bit hash value, with similar images represented by similar hash values. As a metric for the distance between hash values, the Hamming distance is used, which counts the number of differing bits. To organize the database of hash values, a vp-tree is used, which is an implementation of the binary space partitioning structure. For the experimental study of the methodology, the Caltech-256 dataset was used, which contains 30607 images divided into 256 categories; the Difference Hash, P-Hash, and Wavelet Hash algorithms were used as perceptual hashing algorithms, and the study was carried out in the Google Colab environment. As part of the experimental study, the robustness of the hashing algorithms to modification, compression, blurring, noise, and image rotation was examined. In addition, the process of building a vp-tree and the process of searching for images in the tree were studied. The experiments showed that each of the hashing algorithms has its own advantages and disadvantages.
The hashing algorithm based on the difference of adjacent pixel values in the image turned out to be the fastest, but it proved not very robust to modification and image rotation. The P-Hash algorithm, based on the discrete cosine transform, showed better resistance to image blurring but turned out to be sensitive to image compression. The W-Hash algorithm, based on the Haar wavelet transform, made it possible to construct the most efficient tree structure and proved resistant to image modification and compression. The proposed technique is not recommended for general-purpose image retrieval systems; however, it can be useful for searching images in specialized databases. Possible ways to improve the methodology include refining the vp-tree structure and finding a more efficient method of image representation than perceptual hashing.
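The difference-hash and Hamming-distance steps can be sketched in a few lines. Real systems resize the image to 9×8 grayscale first; here the tiny synthetic arrays are assumed to be already downscaled, so this is only a shape-level illustration of the 64-bit hashes discussed above:

```python
# Sketch of difference hashing (dHash) and Hamming distance on raw
# grayscale arrays (assumed pre-resized to 8 rows x 9 columns).

def dhash(pixels):
    """Pack 64 left-vs-right brightness comparisons into one integer."""
    h = 0
    for row in pixels:                       # 8 rows
        for left, right in zip(row, row[1:]):  # 8 comparisons per row
            h = (h << 1) | (1 if left > right else 0)
    return h

def hamming(a, b):
    """Number of differing bits between two hash values."""
    return bin(a ^ b).count("1")

img = [[(r * 9 + c) % 256 for c in range(9)] for r in range(8)]
noisy = [row[:] for row in img]
noisy[0][0] += 50                 # small local change flips one comparison
print(hamming(dhash(img), dhash(noisy)))
```

A vp-tree over such hashes then answers nearest-neighbor queries under the Hamming metric without scanning the whole database.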
Conventional group recommender systems fail to take into account the impact of group dynamics on group recommendations, such as the process of reconciling individual preferences during collective decision-making. This scenario has been previously examined in the context of group decision making, specifically in relation to consensus reaching processes. In such processes, experts negotiate their preferences and ultimately pick a mutually agreed-upon option. The objective of the consensus process is to prevent dissatisfaction among group members with the suggestion. Prior studies have tried to accomplish this in group recommendation by using the minimum operator when aggregating recommendations. Nevertheless, this operator ensures only a minimal degree of consensus on the proposal and does not provide a satisfactory level of agreement among group members over the group recommendation. This paper focuses on analyzing consensus reaching processes in the context of group recommendation for group decision making. The goal of the study is to use consensus reaching processes to provide group recommendations that satisfy all members of the group. Additionally, the study aims to enhance group recommender systems by ensuring an acceptable level of agreement among users regarding the group recommendation. Therefore, group recommender systems are extended with consensus reaching mechanisms to facilitate group decision making. In group decision making, a collective resolution is reached by a group of persons, who may be specialists, from a pool of options or potential solutions to the issue at hand. To do this, each specialist expresses their preferences over the options. Conventional selection techniques for group decision-making problems fail to account for the possibility of dissent among experts over the chosen option.
This issue is alleviated by consensus-building techniques, in which a substantial degree of agreement is attained before the final decision is picked. To bring experts' preferences into alignment, the experts repeatedly modify them to move closer together. Before making collective choices, it is sometimes necessary to establish a certain degree of consensus. Thus, this paper presents a group recommendation architecture that utilizes automated consensus reaching models to produce accepted suggestions. More specifically, we consider the minimum cost consensus model and the feedback-based automated consensus support system model. The minimum cost consensus model calculates the collective suggestion of a group by adjusting individual preferences according to a cost function; this is achieved via linear programming. The feedback-based automated consensus support system model mimics the interaction between group members and a moderator: the moderator offers adjustments to individual suggestions in order to bring them closer together and achieve a high degree of agreement before generating the group recommendation. Both models are assessed and contrasted with baseline procedures in the experiments.
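The linear-programming formulation of minimum-cost consensus can be sketched as follows. The expert ratings, adjustment costs, and rating scale are illustrative assumptions; the model minimizes the weighted cost of moving each expert's opinion o_i toward a collective value x, linearized with auxiliary deviation variables d_i ≥ |o_i − x|:

```python
from scipy.optimize import linprog

# Hedged sketch of a minimum-cost consensus (MCC) model solved as an LP.
# Decision variables: [x, d_1, ..., d_n], minimizing sum_i c_i * d_i.

opinions = [2.0, 4.0, 9.0]   # assumed expert ratings on a 0-10 scale
costs = [1.0, 1.0, 3.0]      # assumed per-unit adjustment costs
n = len(opinions)

c = [0.0] + costs            # x itself has no cost; pay for deviations d_i
A_ub, b_ub = [], []
for i, o in enumerate(opinions):
    # o_i - x <= d_i   rewritten as   -x - d_i <= -o_i
    row = [-1.0] + [0.0] * n; row[1 + i] = -1.0
    A_ub.append(row); b_ub.append(-o)
    # x - o_i <= d_i   rewritten as    x - d_i <=  o_i
    row = [1.0] + [0.0] * n; row[1 + i] = -1.0
    A_ub.append(row); b_ub.append(o)

bounds = [(0, 10)] + [(0, None)] * n
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(round(res.x[0], 2))    # collective opinion pulled toward the costly expert
```

With these weights the optimum is the weighted median of the opinions, so the collective value settles at the rating of the expert who is most expensive to move.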
The rapid growth of data volumes has led to information overload, which impedes informed decision-making. To address this problem, recommender systems have emerged that analyze user preferences and proactively offer relevant items. One type of recommender system is the group recommender system, which is designed to facilitate collaborative decision-making, increase user engagement, and promote diversity and inclusion. However, these systems face challenges such as accommodating diverse group preferences and maintaining transparency in recommendation processes. In this study, we propose a method for aggregating preferences in group recommender systems that retains as much information as possible from group members and improves the accuracy of recommendations. The proposed method provides recommendations to groups of users by avoiding the aggregation process in the first steps of recommendation, which preserves information throughout the group recommendation process and delays the aggregation step to provide accurate and diverse recommendations. When the object of a collaborative filtering-based recommender system is not a single user but a group of users, the strategy for calculating similarity between individual users should be adapted so that the preferences of group members are not aggregated in the first step. In the proposed model, the nearest neighbors of a group of users are searched for, so the method of finding neighbors is adapted to compare individual users with the group profile. An experimental study has shown that the proposed method achieves a satisfactory balance between accuracy and diversity. This makes it well suited for providing recommendations to large groups in situations where accuracy and diversity matter to comparable degrees.
These results support the assumption that retaining all information from group members without using aggregation techniques can improve the performance of group recommender systems, taking into account various features.
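The late-aggregation idea above can be sketched as follows. The rating vectors, the choice of cosine similarity, and the use of the minimum as the final combination step are all assumptions for illustration; the point is that each member's view of a candidate neighbor is kept until the very last step:

```python
import math

# Sketch: rank candidate neighbors against a group without collapsing
# the group into one averaged profile first.

def cosine(u, v):
    """Cosine similarity between two sparse rating dicts."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[i] * v[i] for i in common)
    return num / (math.sqrt(sum(x * x for x in u.values())) *
                  math.sqrt(sum(x * x for x in v.values())))

def group_similarity(group, candidate):
    """Late aggregation: combine per-member similarities only at the end.
    The minimum keeps every member's opinion visible in the score."""
    return min(cosine(member, candidate) for member in group)

group = [{"a": 5, "b": 1}, {"a": 4, "b": 2}]              # member ratings
neighbors = {"u1": {"a": 5, "b": 1}, "u2": {"a": 1, "b": 5}}
ranked = sorted(neighbors,
                key=lambda u: group_similarity(group, neighbors[u]),
                reverse=True)
print(ranked[0])  # the candidate who agrees with every member ranks first
```

Averaging the group's ratings before computing similarity would hide the disagreement between members; delaying the combination step is what preserves that information.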
The purpose of this study is to provide a comprehensive overview of the latest developments in the field of recommender systems. To present the current state of the art in this area and highlight recent developments, the research papers available in this field were analyzed. The place of recommender systems in the modern world is defined, and their relevance and role in people's daily lives in the modern information environment are highlighted. The advantages of recommender systems and their main properties are considered. To define recommender systems formally, a general scheme of recommender systems is provided and a formal task is formulated. A review of different types of recommender systems is carried out. It is determined that personalized recommender systems can be divided into content filtering-based systems, collaborative filtering-based systems, and hybrid recommender systems. For each type, the author gives a definition and reviews the latest relevant research papers on that type of recommender system. The challenges faced by modern recommender systems are considered separately. These include the robustness of recommender systems (the ability of the system to withstand various attacks), data bias (a set of data-related factors that reduce the effectiveness of the recommender system), and fairness, which is related to discrimination against users of recommender systems. Overall, this study not only provides a comprehensive explanation of recommender systems but also offers useful information to the many researchers interested in them. This goal was achieved by analyzing a wide range of technologies and trends in the service sector, an area where recommender systems are widely used.
This work is devoted to the development of a distributed deep learning framework for processing data from the various sensors of transducer networks used in smart buildings. The proposed framework makes it possible to process data coming from sensors of various types to solve classification and regression problems. The framework architecture consists of several subnets: dedicated convolutional nets that handle input from sensors of the same type, and a single convolutional fusion net that processes the combined outputs of the dedicated convolutional nets. The result of the convolutional fusion net is then fed to the input of a recurrent net, which extracts meaningful features from time sequences. The output of the recurrent net is fed to the output layer, which generates the framework's output depending on the type of problem being solved. For the experimental evaluation of the developed framework, two tasks were chosen: recognizing human actions and identifying a person by movement. The dataset contained data from two sensors (an accelerometer and a gyroscope) collected from 9 users who performed 6 actions. A mobile device was used as the hardware platform, as well as the Edison Compute Module hardware device. To compare the results, variations of the proposed framework with different architectures were used, as well as third-party approaches based on various machine learning methods, including support vector machines, random forests, restricted Boltzmann machines, and others. As a result, the proposed framework on average surpassed the other algorithms by about 8% on three metrics in the human action recognition task and proved about 13% more efficient in the task of identifying a person by movement. We also measured the power consumption and operating time of the proposed framework and its analogues.
It was found that the proposed framework consumes a moderate amount of energy, and its operating time can be considered acceptable.
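The data flow through the subnets can be sketched at the shape level. All layer sizes, kernel widths, and random weights below are assumptions for illustration; the sketch only mirrors the structure described above (per-sensor convolutions, a fusion stage, a recurrent summary, and an output layer):

```python
import numpy as np

# Structural sketch of the multi-sensor fusion pipeline (untrained,
# random weights; shapes are the point, not the predictions).

rng = np.random.default_rng(1)

def conv1d(x, w):
    """Valid 1-D convolution. x: (T, C) signal, w: (k, C, F) filters."""
    k = w.shape[0]
    return np.stack([np.tensordot(x[t:t + k], w, axes=([0, 1], [0, 1]))
                     for t in range(x.shape[0] - k + 1)])

T, F = 32, 8
accel = rng.normal(size=(T, 3))                  # accelerometer stream
gyro = rng.normal(size=(T, 3))                   # gyroscope stream
w_a = rng.normal(size=(3, 3, F))                 # per-sensor subnet weights
w_g = rng.normal(size=(3, 3, F))

fa, fg = conv1d(accel, w_a), conv1d(gyro, w_g)   # dedicated convolutional nets
fused = np.concatenate([fa, fg], axis=1)         # fusion stage input, (30, 16)

# Minimal recurrent pass over the fused feature sequence.
h = np.zeros(F)
w_x, w_h = rng.normal(size=(2 * F, F)), rng.normal(size=(F, F))
for step in fused:
    h = np.tanh(step @ w_x + h @ w_h)

logits = h @ rng.normal(size=(F, 6))             # output layer: 6 action classes
print(int(np.argmax(logits)))
```

Keeping one small subnet per sensor type lets each modality be preprocessed independently before fusion, which matches the framework's goal of handling heterogeneous transducer inputs on constrained edge hardware.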