The non-destructive study of soil micromorphology via computed tomography (CT) imaging has yielded significant insights into the three-dimensional configuration of soil pores. Precise pore analysis is contingent on the accurate transformation of CT images into binary image representations. Notably, segmentation of 2D CT images frequently introduces inaccuracies. This paper introduces a novel three-dimensional pore segmentation method, BDULSTM, which integrates U-Net with convolutional long short-term memory (CLSTM) networks to harness sequence data from CT images and enhance the precision of pore segmentation. The BDULSTM method employs an encoder–decoder framework to holistically extract image features, utilizing skip connections to further refine the segmentation accuracy of soil structure. Specifically, the CLSTM component, critical for analyzing sequential information in soil CT images, is strategically positioned at the juncture of the encoder and decoder within the U-shaped network architecture. The validation of our method confirms its efficacy in advancing the accuracy of soil pore segmentation beyond that of previous deep learning techniques, such as U-Net and CLSTM applied independently. Indeed, BDULSTM exhibits superior segmentation capabilities across a diverse array of soil conditions. In summary, BDULSTM represents a state-of-the-art artificial intelligence technology for the 3D segmentation of soil pores and offers a promising tool for analyzing pore structure and soil quality.
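The abstract places the CLSTM at the bottleneck of a U-shaped encoder–decoder with skip connections. The PyTorch sketch below is a minimal illustration of that layout, not the authors' implementation: the single-level encoder/decoder, channel widths, and the bidirectional ConvLSTM cell are assumptions made to keep the example short.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """A single convolutional LSTM cell operating on 2D feature maps."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, h, c):
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class UNetBiCLSTM(nn.Module):
    """Toy U-shaped network: an encoder compresses each CT slice, a
    bidirectional ConvLSTM links the bottleneck features of neighbouring
    slices, and a decoder with a skip connection predicts pore logits."""
    def __init__(self, ch=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(nn.MaxPool2d(2),
                                  nn.Conv2d(ch, 2 * ch, 3, padding=1), nn.ReLU())
        self.fwd = ConvLSTMCell(2 * ch, 2 * ch)
        self.bwd = ConvLSTMCell(2 * ch, 2 * ch)
        self.up = nn.ConvTranspose2d(4 * ch, ch, 2, stride=2)
        self.head = nn.Conv2d(2 * ch, 1, 1)      # decoder output + skip -> mask logits

    def forward(self, slices):                   # slices: (B, T, 1, H, W)
        skips = [self.enc(slices[:, t]) for t in range(slices.size(1))]
        feats = [self.down(s) for s in skips]    # bottleneck features per slice
        hf = cf = hb = cb = torch.zeros_like(feats[0])
        fwd_h, bwd_h = [], [None] * len(feats)
        for t in range(len(feats)):              # forward and backward recurrences
            hf, cf = self.fwd(feats[t], hf, cf)
            fwd_h.append(hf)
            hb, cb = self.bwd(feats[-1 - t], hb, cb)
            bwd_h[-1 - t] = hb
        outs = []
        for t in range(len(feats)):              # decode each slice with its skip connection
            d = self.up(torch.cat([fwd_h[t], bwd_h[t]], dim=1))
            outs.append(self.head(torch.cat([d, skips[t]], dim=1)))
        return torch.stack(outs, dim=1)          # (B, T, 1, H, W) pore logits

# logits = UNetBiCLSTM()(torch.randn(2, 4, 1, 64, 64))   # 4 consecutive CT slices
```

The key structural point mirrored from the abstract is only the placement: the recurrence sits between the encoder and the decoder, so each slice's segmentation can draw on context from adjacent slices in both directions.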
Self-attention has proven to be a powerful yet computationally intensive method for scene semantic segmentation. Although many efforts have explored more effective and resource-efficient ways to apply self-attention, there is still room to reduce its computational cost. Moreover, since self-attention excels at fusing information, it is a natural fit for multi-scale feature fusion, which remains barely researched: the information exchange paths between features at different resolutions are mostly simple addition and concatenation. This work investigates a partition scheme that decreases the computational complexity of self-attention and, at the same time, presents a multi-scale-feature-attention (MFA) module that fuses low-resolution features carrying semantic information with high-resolution features carrying detailed information. Specifically, the proposed multi-scale-partition-attention (MPA) module and the MFA module are inserted into the backbone in sequence, fusing information among all the pixels of a highly abstracted feature map and between pixels of feature maps at different resolutions, respectively. Extensive experiments on semantic segmentation benchmarks, including PASCAL-Context and Cityscapes, demonstrate that the two modules improve the backbone's performance on scene semantic segmentation tasks containing many classes and objects of both large and small size.
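To make the two ideas concrete, the PyTorch sketch below shows (i) self-attention restricted to non-overlapping spatial partitions, which lowers the quadratic cost over all pixels, and (ii) an attention-based fusion of a low-resolution feature into a high-resolution one. The names (`window_attention`, `MFAFusion`), window size, and projection layout are illustrative assumptions, not the paper's actual MPA/MFA definitions.

```python
import torch
import torch.nn as nn

def window_attention(x, win=8):
    """Self-attention restricted to non-overlapping win x win partitions,
    reducing complexity from O((HW)^2) to O(HW * win^2)."""
    B, C, H, W = x.shape
    # (B, C, H, W) -> (B * num_windows, win*win, C)
    t = x.unfold(2, win, win).unfold(3, win, win)         # B, C, H/win, W/win, win, win
    t = t.permute(0, 2, 3, 4, 5, 1).reshape(-1, win * win, C)
    attn = torch.softmax(t @ t.transpose(1, 2) / C ** 0.5, dim=-1)
    t = attn @ t
    # fold windows back to (B, C, H, W)
    t = t.reshape(B, H // win, W // win, win, win, C).permute(0, 5, 1, 3, 2, 4)
    return t.reshape(B, C, H, W)

class MFAFusion(nn.Module):
    """Cross-attention-style fusion: high-resolution pixels query the
    semantically richer low-resolution feature (keys/values)."""
    def __init__(self, ch):
        super().__init__()
        self.q = nn.Conv2d(ch, ch, 1)
        self.kv = nn.Conv2d(ch, 2 * ch, 1)

    def forward(self, high, low):
        B, C, H, W = high.shape
        q = self.q(high).flatten(2).transpose(1, 2)       # (B, HW, C)
        k, v = self.kv(low).flatten(2).chunk(2, dim=1)    # (B, C, hw) each
        attn = torch.softmax(q @ k / C ** 0.5, dim=-1)    # (B, HW, hw)
        fused = (attn @ v.transpose(1, 2)).transpose(1, 2).reshape(B, C, H, W)
        return high + fused                               # residual fusion

# high, low = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 8, 8)
# out = MFAFusion(64)(window_attention(high), low)
```

The partitioned attention trades global interaction for locality within each window, while the fusion module lets every high-resolution pixel attend over the (much smaller) low-resolution map, so the extra cost of cross-scale attention stays modest.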
We propose a deep-learning-to-hash method named CycHash to improve image retrieval. Its network adopts an encoder–decoder architecture. Combined with fully connected layers, the encoder transforms an image into a hash code, while the decoder restores high-dimensional data from the encoded feature maps. We define a restoration loss between the data restored by the decoder and the original input, and sum the restoration loss and a central similarity loss to guide the learning of the hash function. Image retrieval experiments demonstrate the effectiveness of CycHash, which outperforms state-of-the-art methods.
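A minimal sketch of this pipeline is given below, assuming a tiny encoder, tanh-relaxed hash codes, an MSE restoration loss against the original input, and a CSQ-style central similarity term computed against predefined per-class hash centres; all layer sizes, the loss form, and the weighting are illustrative assumptions rather than CycHash's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CycHashNet(nn.Module):
    """Encoder produces feature maps; fully connected layers map them to a
    continuous hash code; the decoder tries to restore the original input
    from the encoded feature maps."""
    def __init__(self, bits=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.hash_fc = nn.Sequential(nn.Flatten(), nn.Linear(64 * 8 * 8, bits), nn.Tanh())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1))

    def forward(self, x):                  # x: (B, 3, 32, 32)
        feat = self.encoder(x)             # encoded feature maps
        code = self.hash_fc(feat)          # relaxed hash code in (-1, 1)
        restored = self.decoder(feat)      # restoration of the input
        return code, restored

def cychash_loss(code, restored, images, centers, labels, lam=0.1):
    """Sum of a central-similarity term (code pulled toward its class's
    hash centre) and a restoration term (decoder output vs. input)."""
    target = (centers[labels] + 1) / 2                    # hash centre bits in {0, 1}
    prob = ((code + 1) / 2).clamp(1e-6, 1 - 1e-6)         # relaxed code bits in (0, 1)
    central = F.binary_cross_entropy(prob, target)
    restoration = F.mse_loss(restored, images)
    return central + lam * restoration

# net = CycHashNet(bits=64)
# imgs = torch.randn(8, 3, 32, 32)
# centers = torch.sign(torch.randn(10, 64))               # one +/-1 centre per class
# code, restored = net(imgs)
# loss = cychash_loss(code, restored, imgs, centers, torch.randint(0, 10, (8,)))
```

At retrieval time the continuous codes would be binarized (e.g. by taking the sign), and the restoration branch is only used during training to regularize the encoded features.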
The training of deep neural networks (DNNs) is usually memory-hungry, while the device memory capacity of DNN accelerators is limited. Characterizing the memory behaviors of DNN training is therefore critical for relieving device memory pressure. In this work, we pinpoint the behavior of each device memory block on the GPU during training by instrumenting the memory allocators of the runtime system. Our results show that the access patterns of device memory blocks are stable and follow an iterative fashion. These observations are useful for future optimizations of memory-efficient training from the perspective of raw memory access patterns.
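The work instruments the runtime system's allocators directly; as a rough, publicly reproducible analogue, the sketch below samples PyTorch's caching-allocator state via torch.cuda.memory_snapshot() at the same point in every training iteration and checks whether the per-block layout repeats. The toy model and the notion of a "block signature" are assumptions for illustration, not the authors' instrumentation.

```python
import torch
import torch.nn as nn

def block_signature():
    """Summarize the caching allocator's device memory blocks as a sorted
    list of (size, state) pairs over all segments."""
    sig = []
    for seg in torch.cuda.memory_snapshot():
        for blk in seg["blocks"]:
            sig.append((blk["size"], blk["state"]))
    return sorted(sig)

model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10)).cuda()
opt = torch.optim.SGD(model.parameters(), lr=0.01)

signatures = []
for step in range(5):                        # a few training iterations
    x = torch.randn(64, 1024, device="cuda")
    loss = model(x).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
    torch.cuda.synchronize()
    signatures.append(block_signature())     # sample blocks at the same point each iteration

# After warm-up, consecutive iterations should yield identical block layouts,
# consistent with the stable, iterative pattern described above.
print([signatures[i] == signatures[i + 1] for i in range(len(signatures) - 1)])
```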
While the Transformer architecture has been widely used in natural language processing and computer vision tasks, its application to medical visual question answering is still limited. Most current methods rely on an image extractor to obtain visual features and a text extractor to capture semantic features, followed by a fusion module that merges the information from the two modalities to predict the final result. In contrast, this paper proposes a novel Transformer-based medical visual question answering model, called MQAT, in which an improved Transformer structure is used for both feature extraction and modality fusion to achieve better performance. Experimental results demonstrate that our Transformer structure not only ensures stable model performance but also accelerates convergence, and that MQAT outperforms existing state-of-the-art methods.
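The abstract does not detail MQAT's improved Transformer structure; the sketch below shows only the generic pattern it builds on, namely projecting image and question features into a shared token sequence and fusing them with a standard Transformer encoder. All dimensions, the CLS-token readout, and the classifier head are assumptions for illustration.

```python
import torch
import torch.nn as nn

class TransformerVQA(nn.Module):
    """Generic Transformer fusion for visual question answering: image
    features and question token embeddings are projected to a shared
    dimension, concatenated, fused by a Transformer encoder, and a
    classifier over candidate answers reads the first (CLS) token."""
    def __init__(self, vocab=5000, answers=500, dim=256, img_dim=2048, layers=4):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, dim)        # e.g. CNN region/patch features
        self.txt_emb = nn.Embedding(vocab, dim)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.fusion = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.head = nn.Linear(dim, answers)

    def forward(self, img_feats, question_ids):
        # img_feats: (B, N, img_dim), question_ids: (B, L)
        tokens = torch.cat(
            [self.cls.expand(img_feats.size(0), -1, -1),
             self.img_proj(img_feats),
             self.txt_emb(question_ids)], dim=1)
        fused = self.fusion(tokens)                    # joint self-attention over both modalities
        return self.head(fused[:, 0])                  # answer logits from the CLS token

# logits = TransformerVQA()(torch.randn(2, 36, 2048), torch.randint(0, 5000, (2, 12)))
```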