We briefly introduce the development of association rules, analyse the classic mining algorithms, and discuss the basic framework for mining association rules. Finally, the development of association rules is summarized.
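To make the mining framework concrete, here is a minimal sketch of the support and confidence measures at the core of classic association-rule mining (Apriori-style); the transaction data and the rule below are hypothetical illustrations, not taken from the survey.

```python
# Support and confidence for a rule X -> Y over a toy transaction database.
# Transactions and items here are hypothetical.
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"butter", "milk"},
    {"bread", "butter"},
]

def support(itemset, transactions):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """support(X | Y) / support(X) for a rule X -> Y."""
    return support(antecedent | consequent, transactions) / support(antecedent, transactions)

# Rule {bread} -> {milk}: milk appears in 2 of the 3 transactions with bread.
print(support({"bread", "milk"}, transactions))       # 0.5
print(confidence({"bread"}, {"milk"}, transactions))  # ~0.667
```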
We present NeFF, a 3D neural scene representation estimated from captured images. Neural radiance fields (NeRF) have demonstrated excellent performance for image-based photo-realistic free-viewpoint rendering. However, one limitation of current NeRF-based methods is the shape-radiance ambiguity: without any regularization, there may be an incorrect shape that explains the training views very well but generalizes poorly to novel views. This degeneration becomes particularly evident when fewer input views are provided. We propose an explicit regularization that avoids the ambiguity by introducing Neural Feature Fields, which map spatial locations to view-independent features. We synthesize feature maps by projecting the feature fields into images with the same volume rendering techniques as NeRF and obtain an auxiliary loss that encourages the correct view-independent geometry. Experimental results demonstrate that our method is more robust when dealing with sparse input views.
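As an illustration of the projection step, the following sketch composites a view-independent feature along a single ray using standard NeRF-style volume rendering weights and forms a simple L2 auxiliary loss; the tensors, feature dimension, and loss form are placeholder assumptions, not the exact NeFF formulation.

```python
# Volume-render a view-independent feature along one camera ray, then form
# an auxiliary feature loss. All arrays below are random stand-ins for
# network outputs and image-derived targets.
import numpy as np

n_samples, feat_dim = 64, 16
sigma = np.random.rand(n_samples)            # densities at ray samples
feats = np.random.rand(n_samples, feat_dim)  # view-independent features
delta = np.full(n_samples, 0.02)             # distances between samples

alpha = 1.0 - np.exp(-sigma * delta)                           # per-sample opacity
trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha]))[:-1]  # transmittance T_i
weights = trans * alpha                                        # compositing weights

rendered_feat = (weights[:, None] * feats).sum(axis=0)  # projected feature

target_feat = np.random.rand(feat_dim)  # e.g. a feature extracted from the image
aux_loss = np.mean((rendered_feat - target_feat) ** 2)  # encourages consistent geometry
```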
One important aspect of creating game bots is adversarial motion planning: identifying how to move to counter possible actions made by the adversary. In this paper, we examine the problem of opponent interception, in which the goal of the bot is to reliably apprehend the opponent. We present an algorithm for motion planning that couples planning and prediction to intercept an enemy on a partially-occluded Unreal Tournament map. Human players can exhibit considerable variability in their movement preferences and do not uniformly prefer the same routes. To model this variability, we use inverse reinforcement learning to learn a player-specific motion model from sets of example traces. Opponent motion prediction is performed using a particle filter to track candidate hypotheses of the opponent's location over multiple time horizons. Our results indicate that the learned motion model has a higher tracking accuracy and yields better interception outcomes than other motion models and prediction methods.
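The following is a minimal sketch of the prediction side of such a tracker: a particle filter whose transition step would normally use the learned, player-specific motion model (replaced here by a uniform random walk) and whose update step discards particles contradicted by visibility; the grid map, observation interface, and parameters are all hypothetical.

```python
# Particle-filter tracking of an opponent on a grid map. The random walk
# below is a stand-in for the learned, player-specific motion model.
import random

MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]

def predict(particles, motion_model):
    """Advance each particle one step using the (learned) motion model."""
    return [motion_model(p) for p in particles]

def update(particles, observation, visible):
    """Reweight by the observation: keep particles consistent with what we saw."""
    if observation is not None:                      # opponent directly seen
        return [observation] * len(particles)
    survivors = [p for p in particles if p not in visible] or particles
    return [random.choice(survivors) for _ in range(len(particles))]

def random_walk(pos):
    dx, dy = random.choice(MOVES)
    return (pos[0] + dx, pos[1] + dy)

particles = [(0, 0)] * 200                           # last known opponent cell
for _ in range(10):                                  # roll forward in time
    particles = predict(particles, random_walk)
    particles = update(particles, observation=None, visible={(1, 0)})
```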
One scenario that commonly arises in computer games and military training simulations is predator-prey pursuit in which the goal of the non-player character agent is to successfully intercept a fleeing player. In this paper, we focus on a variant of the problem in which the agent does not have perfect information about the player’s location but has prior experience in combating the player. Effectively addressing this problem requires a combination of learning the opponent’s tactics while planning an interception strategy. Although for small maps, solving the problem with standard POMDP (Partially Observable Markov Decision Process) solvers is feasible, increasing the search area renders many standard techniques intractable due to the increase in the belief state size and required plan length. Here we introduce a new approach for solving the problem on large maps that exploits key events, high reward regions in the belief state discovered at the higher level of abstraction, to plan efficiently over the low-level map. We demonstrate that our hierarchical key-events planner can learn intercept policies from traces of previous pursuits significantly faster than a standard point-based POMDP solver, particularly as the maps scale in size.
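A schematic sketch of the two-level idea follows: rank coarse regions by belief mass to find candidate key events, then hand the best region's center to a low-level planner; the region size, belief representation, and scoring rule are simplifications, not the paper's exact construction.

```python
# Two-level planning sketch: coarse key regions over a fine grid map.
# REGION and the particle belief below are hypothetical choices.
from collections import Counter

REGION = 8  # coarse cells are REGION x REGION blocks of fine cells

def key_regions(belief_particles, top_k=3):
    """Rank coarse regions by how much belief mass (particle count) they hold."""
    mass = Counter((x // REGION, y // REGION) for x, y in belief_particles)
    return [region for region, _ in mass.most_common(top_k)]

def plan_intercept(agent_pos, belief_particles):
    """Pick the highest-mass key region, then head for its center on the fine map."""
    rx, ry = key_regions(belief_particles)[0]
    goal = (rx * REGION + REGION // 2, ry * REGION + REGION // 2)
    return goal  # hand off to any low-level path planner, e.g. A*

goal = plan_intercept((0, 0), [(17, 22), (18, 21), (40, 3)])
print(goal)  # (20, 20): center of the region holding the most belief mass
```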
Intense earthquakes can cause extensive damage to residential buildings, resulting in casualties and significantly impeding the economic development of affected regions. This system combines YOLO object detection with thermal imaging to locate survivors, and incorporates gas sensors to detect toxic gases diffusing at disaster sites. By increasing the success rate of rescue operations, the system also minimizes the risk of injury to rescue personnel [1]. YOLOv7 [2] is a state-of-the-art object detection model that has been widely applied across various domains in recent years. Its key feature is the ability to perform three tasks simultaneously: object detection, instance segmentation, and keypoint detection. Compared to previous versions, YOLOv7 excels at object detection, providing faster and more accurate identification of objects such as pedestrians, buildings, and traffic signals. Moreover, YOLOv7 offers enhanced precision by using color-coded masks to delineate specific objects. Additionally, it can locate keypoints of the human body in an image, such as the head, elbows, and knees, and connect these keypoints into a skeletal, stick-figure-like structure. This capability enables action and pose recognition.
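A minimal sketch of the RGB-thermal fusion logic is shown below: a person detection counts as a likely survivor only when a thermal hot spot falls inside its bounding box. The detector and hot-spot functions are stubs standing in for a real YOLOv7 model and thermal camera; the box format and temperature threshold are assumptions.

```python
# Fuse person detections with thermal hot spots to flag likely survivors.
# Both sensing functions are stubs; a real system would run YOLOv7 here.
def detect_people(rgb_image):
    """Stub for a YOLOv7-style detector: returns (x1, y1, x2, y2) person boxes."""
    return [(100, 50, 180, 220)]

def thermal_hotspots(thermal_image, threshold_c=30.0):
    """Stub: returns (x, y) pixels warmer than the threshold."""
    return [(140, 120)]

def locate_survivors(rgb_image, thermal_image):
    boxes = detect_people(rgb_image)
    hots = thermal_hotspots(thermal_image)
    return [b for b in boxes
            if any(b[0] <= x <= b[2] and b[1] <= y <= b[3] for x, y in hots)]

print(locate_survivors(rgb_image=None, thermal_image=None))  # [(100, 50, 180, 220)]
```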
This paper focuses on developing a free-viewpoint rendering system that combines multi-plane images and neural radiance fields. The primary challenge is ensuring the ability to generalize to different scenes while maintaining real-time rendering. To tackle this, we disentangle position-specific and direction-specific features, reducing the spatial complexity of storing a discretized neural radiance field. This enables the system to use a multi-plane-image structure for real-time rendering while still expressing view-dependent appearance effects such as specular highlights. Furthermore, by introducing feature extraction over viewing directions in the scene, we generalize the model to multiple scenes, avoiding retraining for each new scene and enhancing the practicality and adaptability of the system. Finally, we validate the effectiveness of the system on data captured from real-world scenes.
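To illustrate the real-time rendering path, here is a minimal sketch of back-to-front alpha compositing over RGBA planes, the core of multi-plane-image rendering; how the disentangled direction-specific features modulate plane colors in the actual system is not shown, and all shapes and contents are placeholders.

```python
# Back-to-front "over" compositing of multi-plane images (MPI).
# Plane contents are random placeholders.
import numpy as np

n_planes, h, w = 32, 4, 4
rgb = np.random.rand(n_planes, h, w, 3)    # per-plane color
alpha = np.random.rand(n_planes, h, w, 1)  # per-plane opacity

def composite(rgb, alpha):
    """Over-operator from the farthest plane (index 0) to the nearest."""
    out = np.zeros_like(rgb[0])
    for c, a in zip(rgb, alpha):
        out = c * a + out * (1.0 - a)
    return out

image = composite(rgb, alpha)  # (h, w, 3) rendered view
```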
Estimating optical flow from facial videos is an essential preprocessing step for many applications. However, it is a challenging task, as facial videos contain rich expressions, large displacements, and complex occlusions. Obtaining ground-truth optical flow for facial videos is very difficult, which hinders supervised learning of optical flow from monocular in-the-wild facial videos. In this paper, we provide an effective and accurate method for the unsupervised learning of optical flow from facial videos. An occlusion-aware global-local matching model is introduced for the joint reasoning of optical flow and occlusions. We propose a novel occlusion estimation paradigm to detect occlusions caused by facial expressions and pose variations. Experiments demonstrate that our method compares favorably against state-of-the-art methods in facial optical flow estimation.
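As a sketch of the unsupervised objective such methods typically build on, the code below warps the second frame back to the first with the predicted flow and computes a photometric loss masked by an occlusion estimate; the flow, frames, and mask are stubs, and the actual model's loss terms may differ.

```python
# Occlusion-masked photometric loss for unsupervised optical flow.
# Inputs are random stand-ins for the model's predictions.
import torch
import torch.nn.functional as F

def warp(img, flow):
    """Backward-warp img by flow (B, 2, H, W) using bilinear sampling."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack([xs, ys], dim=0).float() + flow  # sampling coordinates
    grid_x = 2.0 * grid[:, 0] / (w - 1) - 1.0           # normalize to [-1, 1]
    grid_y = 2.0 * grid[:, 1] / (h - 1) - 1.0
    return F.grid_sample(img, torch.stack([grid_x, grid_y], dim=-1),
                         align_corners=True)

frame1 = torch.rand(1, 3, 64, 64)
frame2 = torch.rand(1, 3, 64, 64)
flow = torch.zeros(1, 2, 64, 64)          # predicted flow (stub)
not_occluded = torch.ones(1, 1, 64, 64)   # estimated non-occlusion mask (stub)

photometric = (warp(frame2, flow) - frame1).abs() * not_occluded
loss = photometric.sum() / not_occluded.sum().clamp(min=1.0)
```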