Region of interest (ROI) image and video compression techniques have been widely used in visual communication applications in an effort to deliver good quality images and videos at limited bandwidths. Most image quality metrics have been developed for uniform resolution images. These metrics are not appropriate for the assessment of ROI coded images, where space-variant resolution is necessary. The spatial resolution of the human visual system (HVS) is highest around the point of fixation and decreases rapidly with increasing eccentricity. Since the ROIs are usually the regions "fixated" by human eyes, the foveation property of the HVS supplies a natural approach for guiding the design of ROI image quality measurement algorithms. We have developed an objective quality metric for ROI coded images in the wavelet transform domain. This metric can serve to mediate the compression and enhancement of ROI coded images and videos. We show its effectiveness by applying it to an embedded foveated image coding system.
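To illustrate the idea behind a foveation-weighted quality measure, the sketch below computes a pixel-domain, eccentricity-weighted PSNR. It is a minimal sketch only: the published metric operates on wavelet coefficients, and the falloff model, constants, and function names here are illustrative assumptions rather than the paper's formulation.

```python
import numpy as np

def foveation_weights(shape, fixation, view_dist_px, half_res_ecc=2.3):
    """Per-pixel importance weights from a simplified sensitivity model:
    weight is highest at the fixation point and decays with retinal
    eccentricity (degrees). half_res_ecc is an illustrative constant."""
    rows, cols = np.indices(shape)
    dist_px = np.hypot(rows - fixation[0], cols - fixation[1])
    ecc_deg = np.degrees(np.arctan(dist_px / view_dist_px))
    return half_res_ecc / (half_res_ecc + ecc_deg)

def foveated_psnr(ref, dist, fixation, view_dist_px, peak=255.0):
    """Foveation-weighted PSNR: squared errors are weighted by the
    normalized foveation weights before averaging."""
    w = foveation_weights(ref.shape, fixation, view_dist_px)
    err = ref.astype(float) - dist.astype(float)
    wmse = np.sum(w * err ** 2) / np.sum(w)
    return 10.0 * np.log10(peak ** 2 / wmse)
```

With this weighting, errors far from the fixation point contribute little to the score, mirroring how ROI coding deliberately spends fewer bits on peripheral regions.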
Inspired by the high performance of High Efficiency Video Coding (HEVC), this paper reports our work on applying the ideas of HEVC intra coding to the compression of high bit-depth images such as 32 bits per pixel (b/p) seismic data. Compared to a licensed commercial wavelet-based codec that is currently used for seismic image compression and performs on par with JPEG-XR, our new image codec significantly improves the PSNR vs. compression ratio performance. The codec's subjective performance is rated by geologists as highly satisfactory.
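As a rough illustration of HEVC-style intra coding applied to high bit-depth samples, the sketch below forms a DC prediction for a block from its previously reconstructed neighbors and quantizes the residual. The block size, mode choice, and data layout are assumptions for illustration; they are not the codec described above.

```python
import numpy as np

def dc_intra_predict(recon, y, x, size):
    """DC intra prediction in the spirit of HEVC: predict the whole block
    as the mean of its reconstructed top-row and left-column neighbors.
    Works unchanged for 32-bit sample depths."""
    top = recon[y - 1, x:x + size] if y > 0 else np.array([])
    left = recon[y:y + size, x - 1] if x > 0 else np.array([])
    neighbors = np.concatenate([top, left])
    dc = neighbors.mean() if neighbors.size else 0.0
    return np.full((size, size), dc, dtype=np.float64)

def encode_block(image, recon, y, x, size, quant_step):
    """Predict, form the residual, and scalar-quantize it; a real codec
    would transform the residual and entropy-code the result."""
    pred = dc_intra_predict(recon, y, x, size)
    residual = image[y:y + size, x:x + size].astype(np.float64) - pred
    q = np.round(residual / quant_step)
    recon[y:y + size, x:x + size] = pred + q * quant_step  # decoder-side reconstruction
    return q
```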
In this paper we present a compression scheme for the Digital Cinema application. Specifically, we developed a rate allocation algorithm for applying JPEG-2000 to the coding of Digital Cinema movies to achieve near-lossless and smooth visual picture quality over the decoded movies at the required coding rate. First, a rate-distortion model is established based on the estimated source characteristics of the pictures, and a rate allocation algorithm derived from the model determines a target rate for each picture so that 1) the overall distortion is minimized and 2) each picture has the same distortion at the required average bit rate. Then JPEG-2000 is employed to encode the high-resolution motion picture according to the target rate allocation to achieve efficient compression performance and smooth picture quality over the picture sequence. Test results on Digital Cinema movie clips show that our scheme achieves near-lossless coding performance and very smooth picture quality, both visually and in PSNR.
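A minimal sketch of one way such an allocation can be computed, assuming a simple exponential rate-distortion model D_i(R_i) = a_i * 2^(-2*R_i) per picture (the model and the closed form below are illustrative assumptions, not necessarily the paper's exact model): imposing equal distortion across pictures together with an average-rate constraint gives each picture's target rate in closed form, R_i = R_avg + 0.5*(log2(a_i) - mean(log2(a))).

```python
import numpy as np

def equal_distortion_rates(a, avg_rate):
    """Per-picture target rates under D_i(R_i) = a_i * 2**(-2*R_i).
    Equal distortion across pictures plus an average-rate budget gives
    R_i = avg_rate + 0.5*(log2(a_i) - mean(log2(a)))."""
    log_a = np.log2(np.asarray(a, dtype=float))
    rates = avg_rate + 0.5 * (log_a - log_a.mean())
    # Negative rates are clipped; a full allocator would redistribute the
    # freed budget to keep the average exactly on target.
    return np.clip(rates, 0.0, None)

# Example: four pictures of differing complexity (a_i estimated from the source)
print(equal_distortion_rates([1200.0, 400.0, 900.0, 2500.0], avg_rate=2.0))
```

Complex pictures (large a_i) receive rates above the average and simple ones below it, which is what keeps the per-picture distortion, and hence the perceived quality, flat across the sequence.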
Low power dissipation and fast processing time are crucial requirements for embedded multimedia devices. This paper presents a video coding technique to decrease the power consumption of a standard video decoder. Coupled with a small dedicated video internal memory cache on the decoder, the technique can substantially decrease the amount of data traffic to the decoder's external memory. A decrease in data traffic to the external memory at the decoder yields multiple benefits: faster real-time processing and power savings. The encoder, given prior knowledge of the decoder's dedicated video internal memory cache management scheme, regulates its choice of motion-compensated predictors to reduce the decoder's external memory accesses. This technique can be used in any standard or proprietary encoder to generate a compliant output bit stream decodable by standard CPU-based and dedicated hardware-based decoders, yielding power savings with the best quality-power cost trade-off. Our simulation results show that with a relatively small dedicated video internal memory cache, the technique can decrease the traffic between the CPU and external memory by over 50%.
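A minimal sketch of the encoder-side idea, under the assumption that the encoder mirrors the decoder's cache as a simple LRU set of reference-picture tiles (the cache geometry, cost weighting, and function names are illustrative, not the paper's scheme): among candidate motion-compensated predictors, prefer one whose reference data is already resident in the modeled decoder cache.

```python
from collections import OrderedDict

class DecoderCacheModel:
    """Encoder-side mirror of the decoder's reference cache, modeled as an
    LRU set of fixed-size reference-picture tiles."""
    def __init__(self, capacity_tiles, tile=16):
        self.capacity = capacity_tiles
        self.tile = tile
        self.lru = OrderedDict()

    def tiles_for(self, ref_idx, x, y, w, h):
        t = self.tile
        return {(ref_idx, tx, ty)
                for tx in range(x // t, (x + w - 1) // t + 1)
                for ty in range(y // t, (y + h - 1) // t + 1)}

    def miss_count(self, tiles):
        return sum(1 for key in tiles if key not in self.lru)

    def touch(self, tiles):
        for key in tiles:
            self.lru.pop(key, None)
            self.lru[key] = True
            if len(self.lru) > self.capacity:
                self.lru.popitem(last=False)  # evict least recently used

def choose_predictor(candidates, cache, lam=4.0):
    """Pick the predictor minimizing distortion plus a penalty proportional
    to the external-memory (cache-miss) traffic it would cause at the
    decoder. candidates: non-empty list of (distortion, ref_idx, x, y, w, h)."""
    best, best_cost, best_tiles = None, float("inf"), None
    for dist, ref_idx, x, y, w, h in candidates:
        tiles = cache.tiles_for(ref_idx, x, y, w, h)
        cost = dist + lam * cache.miss_count(tiles)
        if cost < best_cost:
            best, best_cost, best_tiles = (ref_idx, x, y), cost, tiles
    cache.touch(best_tiles)  # update the model as the decoder would
    return best
```

The weight lam trades compression quality against memory traffic: a larger value steers the encoder more strongly toward cache-resident predictors at a small rate-distortion cost.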
The IBM Blue Gene®/Q platform presents scientists and engineers with a rich set of hardware features such as 16 cores per chip sharing a Level 2 cache, a wide SIMD (single-instruction, multiple-data) unit, a five-dimensional torus network, and hardware support for collective operations. An especially important feature is that the cores have four “hardware threads,” which makes it possible to hide latencies and obtain a high fraction of the peak issue rate from each core. All of these hardware resources present unique performance-tuning opportunities on Blue Gene/Q. We provide an overview of several important applications and solvers and study them on Blue Gene/Q using performance counters and Message Passing Interface profiles. We discuss how Blue Gene/Q tools help us understand the interaction of the application with the hardware and software layers and provide guidance for optimization. On the basis of our analysis, we discuss code improvement strategies targeting Blue Gene/Q. Information about how these algorithms map to the Blue Gene® architecture is expected to have an impact on future system design as we move to the exascale era.
Blue Gene/Q (BG/Q) is an early representative of the increasing scale and thread count that will characterize future HPC systems: large counts of nodes, cores, and threads, and a rich programming environment with many degrees of freedom in parallel computing optimization. It is therefore both a challenge and an opportunity to use it to accelerate seismic imaging applications to unprecedented levels that will significantly advance these technologies for the oil and gas industry. In this work we aim to address two important questions: how HPC systems with high levels of scale and thread count will perform in real applications, and how systems with many degrees of freedom in parallel programming can be calibrated to achieve optimal performance. Based on BG/Q's architectural features and the characteristics of the Reverse Time Migration (RTM) workload, we developed multi-level parallelism strategies spanning massive domain partitioning, MPI, and SIMD optimizations. Our detailed analyses of various aspects of the optimization also provide valuable experience and insights into how such systems can be utilized to facilitate the advance of seismic imaging technologies. Our BG/Q RTM solution achieved a 14.93x speedup over the BG/P implementation. Our multi-level parallelism strategies for RTM seismic imaging on BG/Q provide an example of how HPC systems like BG/Q can accelerate applications to a new level.
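As a rough illustration of the domain-partitioning level of such a scheme (the partition shape, halo width, grid sizes, and MPI calls below are illustrative assumptions, not the paper's implementation), the sketch splits a 3D wavefield volume across a Cartesian grid of MPI ranks and performs a halo exchange for the stencil update.

```python
import numpy as np
from mpi4py import MPI

def partition_extent(n, parts, idx):
    """Contiguous 1-D split of n grid points into `parts` pieces; returns
    the [start, stop) range owned by piece `idx`."""
    base, rem = divmod(n, parts)
    start = idx * base + min(idx, rem)
    return start, start + base + (1 if idx < rem else 0)

comm = MPI.COMM_WORLD
# 3-D Cartesian decomposition of the imaging volume across ranks.
dims = MPI.Compute_dims(comm.Get_size(), 3)
cart = comm.Create_cart(dims, periods=[False] * 3)
coords = cart.Get_coords(cart.Get_rank())

nz, ny, nx = 1024, 1024, 1024          # global volume (illustrative)
halo = 4                               # stencil half-width
extents = [partition_extent(n, d, c) for n, d, c in zip((nz, ny, nx), dims, coords)]
local_shape = tuple(hi - lo + 2 * halo for lo, hi in extents)
wavefield = np.zeros(local_shape, dtype=np.float32)

# Halo exchange along the x axis (repeated per axis every time step).
left, right = cart.Shift(direction=2, disp=1)
send = np.ascontiguousarray(wavefield[:, :, -2 * halo:-halo])
recv = np.empty_like(wavefield[:, :, :halo])
cart.Sendrecv(send, dest=right, recvbuf=recv, source=left)
wavefield[:, :, :halo] = recv
```

On a real system the same decomposition would be combined with per-node threading and SIMD over the innermost stencil loop, which is where the remaining levels of parallelism come in.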
In recent work, a data-driven sweet-spotting technique for shale plays previously explored with vertical wells has been proposed. Here, we extend this technique to multiple formations and formalize a general data-driven workflow to facilitate feature extraction from vertical well logs and predictive modeling of horizontal well production. We also develop an experimental framework that facilitates model selection and validation in a realistic drilling scenario. We present experimental results using this methodology in a field with 90 vertical wells and 98 horizontal wells, showing that it achieves better predictive ability than kriging of known production values.
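A minimal sketch of this kind of workflow, under illustrative assumptions (the feature names, the regressor, the grouping variable, and the synthetic data below are placeholders, not the paper's pipeline): per-formation features aggregated from vertical well logs feed a regression model of horizontal well production, validated with grouped cross-validation so that related wells never straddle the train/test split.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold, cross_val_score

# Hypothetical table: one row per horizontal well, with features aggregated
# from nearby vertical-well logs over two formations.
rng = np.random.default_rng(0)
wells = pd.DataFrame({
    "gamma_ray_fm1": rng.random(98),
    "porosity_fm1": rng.random(98),
    "gamma_ray_fm2": rng.random(98),
    "porosity_fm2": rng.random(98),
    "production_12mo": rng.random(98),
    "pad_id": rng.integers(0, 20, 98),  # wells on the same pad stay together
})

X = wells.drop(columns=["production_12mo", "pad_id"])
y = wells["production_12mo"]

# Grouped cross-validation mimics a realistic drilling scenario: wells from
# the same pad never appear in both the training and the validation folds.
model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=GroupKFold(n_splits=5),
                         groups=wells["pad_id"], scoring="r2")
print("cross-validated R^2:", scores.mean())
```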
The successful implementation of the MPEG (Moving Picture Experts Group) video compression standard has made digital video decoders a cost-effective reality for consumer electronic applications. Development of these applications necessitated the design of a test system to ensure the functionality, reliability, and quality of the video decoder. One area of consumer electronics that utilizes MPEG video decoding is the DBS (Digital Broadcast Satellite) system receiver. In this paper we present an MPEG video decoder test system for a DBS system receiver. The test system is capable
In this paper we present a novel algorithm to speed up the inter mode decision process in H.264/AVC encoding. The proposed scheme determines the best coding mode between P16×16 and P8×8 using learning-theoretic classification algorithms that discern between mode classes by evaluating a simple set of features extracted from a motion-compensated macroblock. We show that the proposed method can reduce the number of macroblocks requiring P8×8 mode testing by 80% on average, at the cost of only a small loss of 0.1 dB in compression performance.
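A minimal sketch of this style of mode decision, with illustrative assumptions (the feature set, the classifier, and the synthetic training data are placeholders, not the paper's trained model): features computed from the motion-compensated residual of a 16×16 macroblock feed a classifier that decides whether the costlier P8×8 evaluation is worth running.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def residual_features(residual_16x16):
    """Simple features from a 16x16 motion-compensated residual: overall
    SAD, residual variance, and the spread of per-8x8 sub-block SADs
    (a large spread hints that sub-partitioning may pay off)."""
    r = residual_16x16.astype(np.float64)
    sub_sads = [np.abs(r[i:i + 8, j:j + 8]).sum()
                for i in (0, 8) for j in (0, 8)]
    return np.array([np.abs(r).sum(), r.var(), np.std(sub_sads)])

# Placeholder training data: in practice the labels come from exhaustive
# rate-distortion search on training sequences (P16x16 = 0, P8x8 = 1).
rng = np.random.default_rng(0)
training_blocks = [rng.integers(-64, 64, size=(16, 16)) for _ in range(200)]
training_labels = rng.integers(0, 2, size=200)

X_train = np.vstack([residual_features(b) for b in training_blocks])
clf = DecisionTreeClassifier(max_depth=4).fit(X_train, training_labels)

def needs_p8x8_check(residual_16x16):
    """Skip the expensive P8x8 evaluation whenever the classifier
    predicts that P16x16 will win."""
    return bool(clf.predict([residual_features(residual_16x16)])[0])
```

Because the features are already available after P16×16 motion compensation, the classification step adds almost no cost, which is what makes skipping most P8×8 evaluations a net speedup.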