We previously trained the compression network by optimizing bit rate and distortion (feature-domain MSE) [1]. In this paper, we propose a feature map compression method for video coding for machines (VCM) based on a deep learning-based compression network that is jointly trained to optimize both the compressed bit rate and machine vision task performance. We use the bmshj2018-hyperprior model from CompressAI [2] as the compression network, and compress the feature map output by the stem layer of the Faster R-CNN X101-FPN network in Detectron2 [3]. We evaluated the proposed method with the MPEG VCM evaluation framework. The proposed method achieves better results than the VVC anchor of MPEG VCM.
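The joint objective sketched above can be illustrated as a weighted sum of rate, feature-domain distortion, and task loss. This is a minimal sketch under assumed weights and toy likelihoods; the actual loss weights and likelihood model in the paper are not given here.

```python
import math

# Hypothetical sketch of the joint objective: L = R + lambda_d * D_feature
# + lambda_t * L_task. The lambda weights and the toy symbol likelihoods
# below are illustrative assumptions, not values from the paper.

def rate_bits(likelihoods):
    """Estimated rate: negative log2-likelihood of the quantized latents."""
    return -sum(math.log2(p) for p in likelihoods)

def joint_loss(likelihoods, feature_mse, task_loss,
               lambda_d=0.01, lambda_t=1.0):
    """Rate + weighted feature-domain distortion + weighted task loss."""
    return rate_bits(likelihoods) + lambda_d * feature_mse + lambda_t * task_loss

# Example: four latent symbols, each with probability 0.5 -> 4 bits of rate.
loss = joint_loss([0.5, 0.5, 0.5, 0.5], feature_mse=100.0, task_loss=0.3)
print(round(loss, 3))  # 4 + 0.01*100 + 1.0*0.3 = 5.3
```

In an actual training loop, the likelihoods would come from the hyperprior entropy model and the task loss from the detection head, with gradients flowing end to end.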
Performance management has a long tradition in China, but questions remain as to its effectiveness. This article presents the results of a comparative time-series analysis that assessed the impact of performance management reform in the province of Guangdong, China, on two outcomes of interest—improved citizen satisfaction with government performance and improved government financial performance. The study used survey data on citizen satisfaction from the Government Performance Evaluation Center in China and financial data from the statistical yearbook and the fiscal yearbook from 2010 to 2014. This period spans two stages in the development of performance management in China: (1) before 2012, when efforts focused on economic development, followed by an emphasis on citizen-oriented and sustainable government services; and (2) between 2012 and 2014, when pilot programs of performance management systems were implemented. A difference-in-differences test comparing the outcome variables before and after the pilot project in Guangdong suggests that overall performance management reform had a positive impact on citizen satisfaction but mixed results on financial performance in the county-level jurisdictions that implemented the pilot program.
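The difference-in-differences estimator used in the study can be sketched in a few lines. The satisfaction scores below are hypothetical illustration values, not data from the article.

```python
# Minimal sketch of the difference-in-differences (DiD) estimator: the
# treatment effect is the change in the pilot (treated) group minus the
# change in the comparison group over the same period.

def diff_in_diff(treat_pre, treat_post, control_pre, control_post):
    """DiD estimate: (treated change) - (control change)."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical mean citizen-satisfaction scores before/after the pilot.
effect = diff_in_diff(treat_pre=62.0, treat_post=70.0,
                      control_pre=61.0, control_post=64.0)
print(effect)  # (70-62) - (64-61) = 5.0
```

Subtracting the control group's change removes period-wide trends that affected all jurisdictions, isolating the effect attributable to the pilot program.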
Objectives This study explored the process of digital competency change and growth among teachers at J Kindergarten, a pilot kindergarten for the future curriculum selected by the Ministry of Education. Methods In-depth interviews were conducted twice with nine J Kindergarten teachers in Incheon Metropolitan City who were participating in the operation of the future curriculum. Data were analyzed by summarizing and coding, adjusting subcategories and categories, grouping and hierarchizing them, and deriving upper-level categories according to the research questions, followed by expert review and insider verification. Results First, in the early days of operating the future curriculum, teachers recognized digital media education as a curriculum necessary for preparing children for the future society in line with the trends and demands of the times. Second, teachers went through trial and error while operating the future curriculum to support children's play and learning, and children used digital media to approach, think about, and expand the depth and breadth of their learning. Third, teachers overcame the digital competency gap through communication and solidarity with fellow teachers: less experienced teachers who were familiar with digital media contributed on the side of technology use, while highly experienced teachers collaborated in a horizontal structure on applying and supporting play, and through this process both groups experienced change and growth in their digital competencies. Conclusions J Kindergarten teachers improved their digital competencies through the operation of the future curriculum and continued to challenge themselves as competent future teachers equipped with digital capabilities.
In the Semantic Web, intelligent information retrieval and automated web services become possible by defining the concept of an information resource and representing semantic relations between resources with metadata and ontologies. Managing semantic data such as ontologies and metadata efficiently is essential for implementing the core functions of the Semantic Web. We therefore propose an index structure that supports more accurate search results and efficient query processing by considering the semantic and structural features of semantic data. In particular, we use a graph data model to express these semantic and structural features and process various types of queries using graph-model-based path expressions. The proposed index distinguishes our approach from earlier studies and covers the Semantic Web in its entirety by querying both structural path information extracted directly and path information extracted secondarily through semantic inference with ontologies. Our experiments show that the proposed approach is more accurate and efficient than previous approaches and is applicable to various Semantic Web applications.
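The core idea of indexing semantic data as a graph and answering path-expression queries can be sketched as follows. The triples, predicate names, and query are hypothetical illustrations; the abstract does not specify a concrete dataset or query syntax.

```python
# Toy sketch: index RDF-style triples as a graph and answer a path-expression
# query (a sequence of predicates) by graph traversal.

from collections import defaultdict

triples = [
    ("alice", "knows", "bob"),
    ("bob", "worksFor", "acme"),
    ("acme", "locatedIn", "seoul"),
]

# Adjacency index: (subject, predicate) -> set of objects.
index = defaultdict(set)
for s, p, o in triples:
    index[(s, p)].add(o)

def eval_path(start, predicates):
    """Follow a predicate path from `start`; return the set of reachable nodes."""
    frontier = {start}
    for pred in predicates:
        frontier = {o for node in frontier for o in index[(node, pred)]}
    return frontier

# Query: in which city does the employer of Alice's acquaintance reside?
print(eval_path("alice", ["knows", "worksFor", "locatedIn"]))  # {'seoul'}
```

A real system would additionally materialize paths inferred through the ontology (e.g., subproperty or subclass relations) into the same index, so inferred results are reachable by the same traversal.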
After the success of H.264/AVC, a next-generation codec for Ultra-HD resolution video is being developed under the name High Efficiency Video Coding (HEVC). This paper presents an optimization method for HEVC that performs early termination to prevent further CU splitting. In our experiments, the early termination method reduces computational complexity without noticeable coding loss.
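The early-termination decision can be sketched as a simple guard before the recursive split search. The cost function and threshold below are hypothetical stand-ins, not the paper's actual criterion.

```python
# Hedged sketch of CU-split early termination: if the rate-distortion (RD)
# cost of coding a CU without splitting is already below a threshold, skip
# evaluating the four sub-CU candidates entirely.

def decide_split(rd_cost_no_split, rd_cost_split, early_term_threshold):
    """Return (should_split, evaluated_split_candidates)."""
    if rd_cost_no_split < early_term_threshold:
        # Early termination: accept the unsplit CU without testing sub-CUs.
        return False, False
    # Otherwise fall back to the full RD comparison.
    return rd_cost_split < rd_cost_no_split, True

# Cheap CU: early-terminated, the split search is skipped entirely.
print(decide_split(10.0, 8.0, early_term_threshold=50.0))    # (False, False)
# Expensive CU: full comparison performed, split is chosen.
print(decide_split(120.0, 90.0, early_term_threshold=50.0))  # (True, True)
```

The complexity saving comes from the second return value being False on easy CUs: the encoder never recurses into the four child CUs, pruning the quadtree search.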
The proliferation of deep learning-based machine vision applications has given rise to a new type of compression, so-called video coding for machines (VCM). VCM differs from traditional video coding in that it is optimized for machine vision performance instead of human visual quality. In the feature compression track of MPEG-VCM, multi-scale features extracted from images are subject to compression. Recent feature compression works have demonstrated that the versatile video coding (VVC) standard-based approach can achieve a BD-rate reduction of up to 96% against the MPEG-VCM feature anchor. However, it is still sub-optimal, as VVC was designed not for extracted features but for natural images. Moreover, the high encoding complexity of VVC makes it difficult to design a lightweight encoder without sacrificing performance. To address these challenges, we propose a novel multi-scale feature compression method that enables both end-to-end optimization on the extracted features and the design of lightweight encoders. The proposed model combines a learnable compressor with a multi-scale feature fusion network so that the redundancy in the multi-scale features is effectively removed. Instead of simply cascading the fusion network and the compression network, we integrate the fusion and encoding processes in an interleaved way. Our model first encodes a larger-scale feature to obtain a latent representation and then fuses the latent with a smaller-scale feature. This process is performed successively until the smallest-scale feature is fused, and the encoded latent at the final stage is then entropy-coded for transmission. The results show that our model outperforms previous approaches by at least 52% BD-rate reduction and requires $\times 5$ to $\times 27$ less encoding time for object detection. It is noteworthy that our model can attain near-lossless task performance with only 0.002-0.003% of the uncompressed feature data size.
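The interleaved fusion/encoding loop described above can be sketched schematically. The encode and fuse operations below are hypothetical toy stand-ins for the learned networks in the paper, shown only to make the control flow concrete.

```python
# Illustrative sketch of the interleaved pipeline: encode the largest-scale
# feature to a latent, fuse that latent with the next smaller-scale feature,
# and repeat until the smallest scale; the final latent is entropy-coded.

def encode(feature):
    """Toy 'encoder': halve the feature's length (stand-in for a learned net)."""
    return feature[: max(1, len(feature) // 2)]

def fuse(latent, feature):
    """Toy 'fusion': concatenate the latent with the next-scale feature."""
    return latent + feature

def interleaved_compress(features_large_to_small):
    latent = encode(features_large_to_small[0])
    for feat in features_large_to_small[1:]:
        latent = encode(fuse(latent, feat))
    return latent  # entropy-coded for transmission in the real model

# Three scales, largest to smallest (toy lengths 8, 4, 2).
multi_scale = [[0] * 8, [1] * 4, [2] * 2]
print(len(interleaved_compress(multi_scale)))  # 3
```

The key design point, as the abstract notes, is that fusion happens on already-encoded latents rather than on raw features, so cross-scale redundancy is removed inside the encoding loop instead of by a separate cascaded fusion stage.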