    Robust and Low-Rank Representation for Fast Face Identification With Occlusions
Citations: 45 | References: 54 | Related Papers: 10
    Abstract:
In this paper, we propose an iterative method to address the face identification problem with block occlusions. Our approach utilizes a robust representation based on two characteristics in order to model contiguous errors (e.g., block occlusions) effectively. The first fits the errors with a distribution described by a tailored loss function. The second describes the error image as having a specific structure (low rank in comparison to the image size). We show that this joint characterization is effective for describing errors with spatial continuity. Our approach is computationally efficient due to the utilization of the Alternating Direction Method of Multipliers (ADMM). A special case of our fast iterative algorithm reduces to the robust representation method normally used to handle non-contiguous errors (e.g., pixel corruption). Extensive results on representative face databases (in constrained and unconstrained environments) document the effectiveness of our method over existing robust representation methods with respect to both identification rates and computational time. A plausible form of the underlying objective is sketched below, after the keywords. Code is available on GitHub, where you can find implementations of F-LR-IRNNLS and F-IRNNLS (a fast version of RRC): https://github.com/miliadis/FIRC
    Keywords:
    Representation
    Robustness
    Rank (graph theory)
    Identification
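The abstract does not spell out the optimization problem, so the following is only a plausible formulation consistent with its description: a tailored robust loss φ on the error combined with a nuclear norm (the usual convex surrogate for low rank) on the error reshaped as an image. The dictionary D, coefficients a, error image E, and weight λ are all notation assumed here:

```latex
\min_{a,\,E}\;\; \phi\big(\operatorname{vec}(E)\big) \;+\; \lambda\,\lVert E\rVert_{*}
\quad\text{s.t.}\quad y = Da + \operatorname{vec}(E)
```

ADMM fits such a problem naturally because the linear constraint decouples it into a φ-subproblem and a nuclear-norm subproblem (singular-value thresholding), solved alternately.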
This appendix explains how to group the pixels of a given 2D array into super-pixels, each containing multiple pixels, while maintaining the proper ordering of pixels. Creating super-pixels should group adjacent pixels in both the horizontal and vertical dimensions. If we grouped pixels into super-pixels only along one axis, row by row, this would be analogous to having a detector with elements that are long horizontally and thin vertically. Our goal is to group the pixels properly for square detector elements, as in the sketch below.
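A minimal sketch of such square grouping (assuming the array dimensions are divisible by the super-pixel size; the function name `to_superpixels` and the use of summation as the reduction are choices made here, not taken from the appendix):

```python
import numpy as np

def to_superpixels(img, k):
    """Group a 2D array into k-by-k super-pixels by combining adjacent
    pixels along both the row and column axes, preserving the row-major
    ordering of the resulting super-pixel grid."""
    h, w = img.shape
    assert h % k == 0 and w % k == 0, "dimensions must be divisible by k"
    # Split each axis into (block index, offset within block) and reduce
    # over the two offset axes: this groups square k-by-k neighborhoods.
    return img.reshape(h // k, k, w // k, k).sum(axis=(1, 3))
```

By contrast, `img.reshape(h, w // k, k).sum(axis=2)` would group only along rows, producing exactly the horizontally long, vertically thin detector elements the text warns against.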
    Citations (0)
José Manuel López-Alonso, Javier Alda. University Complutense of Madrid, School of Optics, Department of Optics, Av. Arcos de Jalón s/n, 28037 Madrid, Spain. E-mail: jmlopez@opt.ucm.es
Abstract: Bad pixels are defined as those pixels showing a temporal evolution of the signal different from that of the rest of the pixels of a given array. Principal component analysis helps us understand the definition of a statistical distance associated with each pixel, and using this distance it is possible to identify those pixels labeled as bad pixels. The spatiality of a pixel is also calculated. An assumption about the normality of the distribution of the pixel distances is revised. Although the influence on the robustness of the identification algorithm is negligible, the definition of a parameter related to this nonnormality helps to identify those principal components and eigenimages responsible for the departure from a multinormal distribution. The method for identifying the bad pixels is successfully applied to sets of frames obtained from a visible CCD camera and a focal plane array (FPA) IR camera. A sketch of such a distance computation is given below.
    Robustness
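The abstract does not give the exact distance, so the following is a minimal sketch under a common interpretation: project each pixel's temporal signal onto the leading principal components and measure a Mahalanobis-type distance there (the function name, component count, and the 5-sigma flagging rule are all choices made here):

```python
import numpy as np

def bad_pixel_distance(frames, n_components=10):
    """Mahalanobis-type distance of every pixel's temporal signal,
    measured in the space of the leading principal components.
    frames: (T, H, W) stack; returns an (H, W) distance map."""
    T, H, W = frames.shape
    X = frames.reshape(T, H * W).T.astype(float)  # one time series per pixel
    Xc = X - X.mean(axis=0)                       # center across pixels
    U, s, _ = np.linalg.svd(Xc, full_matrices=False)
    k = min(n_components, int((s > 1e-12).sum())) # keep informative PCs only
    scores = U[:, :k] * s[:k]                     # pixel coordinates in PC space
    var = s[:k] ** 2 / (X.shape[0] - 1)           # variance of each component
    d = np.sqrt((scores ** 2 / var).sum(axis=1))  # Mahalanobis distance
    return d.reshape(H, W)

# Usage: flag pixels whose distance is an extreme outlier, e.g.
# d = bad_pixel_distance(frames); bad = d > d.mean() + 5 * d.std()
```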
    Citations (45)
In this paper, an analytical model for evaluating the charge induced on the pixels of a pixellated CdZnTe (CZT) detector is proposed. According to this model, the cross-talk between pixels increases as the pixel size decreases. We also propose a method for determining the position of charges more accurately by exploiting the charge sharing between neighboring pixels. Conventionally, the pixel size is decreased to increase the resolution of the image, which results in an increase in the amount of cross-talk. With our proposed method, the resolution of the image is effectively increased without the need to decrease the pixel size. A sketch of one such position estimator is given below.
    Charge sharing
    Position (finance)
    Sub-pixel resolution
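The abstract does not specify the estimator; a common way to exploit charge sharing for sub-pixel positioning is a charge-weighted centroid over the neighborhood that collected the shared charge (a sketch; the function name and the small-neighborhood assumption are mine):

```python
import numpy as np

def subpixel_position(charges, pitch=1.0):
    """Estimate the interaction position from the charge shared among a
    small pixel neighborhood (e.g., 3x3) as a charge-weighted centroid.
    charges: 2D array of induced charge per pixel; pitch: pixel pitch.
    Returns (x, y) in the same units as `pitch`."""
    h, w = charges.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    total = charges.sum()
    x = (charges * xs).sum() / total * pitch
    y = (charges * ys).sum() / total * pitch
    return x, y
```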
K-pass pixel value ordering (PVO) is an effective reversible data hiding (RDH) technique. In k-pass PVO, the complexity measurement may lead to a weak estimation because the unaltered pixels in a block are excluded when estimating block complexity. In addition, the prediction error is computed without considering the location relationship between the second largest and largest pixels, or between the second smallest and smallest pixels. To this end, an improved RDH technique is proposed in this paper to enhance embedding performance. The improvement lies mainly in two aspects. First, some pixels in a block, which are excluded from data hiding in some existing RDH methods, are exploited together with the neighborhood surrounding the block to increase the estimation accuracy of local complexity. Second, the remaining pixels in a block, i.e., the three largest and three smallest pixels, are involved in data embedding. Taking the three largest pixels as an example, when the difference between the largest and third largest pixels is relatively large (e.g., > 1), we improve k-pass PVO by considering the location relationship between the second largest and largest pixels. The advantage of doing this is that a difference of 3 between the maximum and the second largest pixel, which is merely shifted in k-pass PVO, is able to carry 1 bit of data in our method. In other words, more pixels can carry data bits in our scheme than in k-pass PVO. Extensive experimental results reveal that the proposed method achieves better embedding performance than the previous work, especially when a larger payload is required. A minimal sketch of the basic PVO embedding step is given below.
    Value (mathematics)
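For context, the classic single-maximum PVO embedding step that k-pass PVO generalizes works as follows (a sketch of the standard scheme, not of the improved method above; overflow handling and the k-pass and three-largest-pixel extensions are omitted):

```python
import numpy as np

def pvo_embed_max(block, bit):
    """Classic PVO embedding on a block's maximum: sort the pixels, take
    the prediction error as (largest - second largest), embed one bit
    when the error equals 1, and shift larger errors by 1 so the
    embedding stays reversible. Blocks with error 0 carry no data."""
    flat = block.flatten().astype(int)
    order = np.argsort(flat, kind="stable")  # stable sort breaks ties by location
    i_max, i_2nd = order[-1], order[-2]
    e = flat[i_max] - flat[i_2nd]
    if e == 1:
        flat[i_max] += int(bit)              # embed: error becomes 1 or 2
    elif e > 1:
        flat[i_max] += 1                     # shift: carries no payload
    return flat.reshape(block.shape)
```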
    Citations (29)
In this note, we give necessary and sufficient conditions for two block matrices to be similar. First, suppose that the two square matrices A and B satisfy A² = 0 and B² = 0. We show that the block matrices [A C; 0 B] and [A 0; 0 B] are similar if and only if rank [A C; 0 B] = rank(A) + rank(B) and AC + CB = 0. Second, suppose that A and B satisfy A² = A and B² = B. We show that [A C; 0 B] and [A 0; 0 B] are similar if and only if AC + CB = C. The reduction behind both criteria is sketched below.
    Rank (graph theory)
    Square (algebra)
    Similarity (geometry)
    Matrix (chemical analysis)
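Both criteria rest on a standard reduction (Roth's similarity theorem): conjugating by a block unitriangular matrix changes only the off-diagonal block,

```latex
\begin{pmatrix} I & X \\ 0 & I \end{pmatrix}^{-1}
\begin{pmatrix} A & C \\ 0 & B \end{pmatrix}
\begin{pmatrix} I & X \\ 0 & I \end{pmatrix}
=
\begin{pmatrix} A & C + AX - XB \\ 0 & B \end{pmatrix},
```

so [A C; 0 B] is similar to [A 0; 0 B] exactly when AX − XB = −C has a solution X. For instance, in the idempotent case, left-multiplying AC + CB = C by A gives ACB = 0, and then X = CB − AC satisfies AX − XB = 2ACB − (AC + CB) = −C.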
    Citations (0)
Bad pixel correction on pixelated solid-state detectors typically uses the average of the direct neighboring pixels (AVG) to derive the value of a bad pixel. However, the AVG approach is suboptimal for high-resolution imaging. Therefore, we developed a least gradient approach (LGA) in this work. In the LGA approach, the gradients of the image in a 5 × 5 box centered at the bad pixel are calculated along the two orthogonal and two diagonal directions. The value of the bad pixel is derived from the average of the two neighboring pixels along the direction in which the gradient is least. For 18 cardiac SPECT studies, we added randomly generated bad pixels and bad pixels in a specially designed pattern to the data, then corrected the bad pixels using the AVG approach. Images reconstructed from the bad-pixel-free data and the bad-pixel-corrected data were compared. For high-resolution imaging, we used line and bar phantom studies to evaluate the AVG and LGA approaches on a pixelated solid-state gamma camera. Patient studies showed no visible qualitative or significant quantitative difference between the images reconstructed from the bad-pixel-free and bad-pixel-corrected data. The maximum segment change ranged from 0% to 7.4%, with an average of 3.6%, for data with randomly generated bad pixels. Blind reading of the images by an expert nuclear cardiologist showed no diagnostic difference for any of the patients. The line phantom studies showed two bad pixels that were not corrected by the AVG approach but were corrected by the LGA approach. Bar phantom studies showed ten bad pixels not corrected by the AVG approach, of which 9 were corrected by the LGA approach. The commonly used averaging approach (AVG) was effective for cardiac SPECT imaging, but the least gradient approach (LGA) developed in this work was more effective for high-resolution imaging. A sketch of the LGA correction is given below.
    Dot pitch
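A minimal sketch of the LGA correction (the exact gradient measure inside the 5 × 5 box is not given in the abstract, so the sum of absolute differences along each line is an assumption; border pixels and clusters of bad pixels are not handled):

```python
import numpy as np

# The two direct neighbors of the bad pixel along each of the four
# directions: horizontal, vertical, diagonal, anti-diagonal.
DIRS = [((0, -1), (0, 1)),
        ((-1, 0), (1, 0)),
        ((-1, -1), (1, 1)),
        ((-1, 1), (1, -1))]

def lga_correct(img, r, c):
    """Least gradient approach: measure the image gradient along four
    directions within a 5x5 box centered at the bad pixel (r, c), then
    replace the pixel by the average of its two direct neighbors along
    the least-gradient direction. Requires a 2-pixel margin."""
    out = img.astype(float)
    grads = []
    for (dr1, dc1), (dr2, dc2) in DIRS:
        a = out[r + 2 * dr1, c + 2 * dc1]   # outer sample, side 1
        b = out[r + dr1, c + dc1]           # direct neighbor, side 1
        d = out[r + dr2, c + dc2]           # direct neighbor, side 2
        e = out[r + 2 * dr2, c + 2 * dc2]   # outer sample, side 2
        # Variation along the line, skipping the bad pixel itself.
        grads.append(abs(a - b) + abs(b - d) + abs(d - e))
    (dr1, dc1), (dr2, dc2) = DIRS[int(np.argmin(grads))]
    out[r, c] = 0.5 * (out[r + dr1, c + dc1] + out[r + dr2, c + dc2])
    return out
```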
    Citations (0)
Sub-pixel mapping of remotely sensed imagery is often performed by assuming that land cover is spatially dependent both within and between image pixels. Intra- and inter-pixel dependence are the two widely used approaches for representing land-cover spatial dependence at present. However, using intra- or inter-pixel dependence alone often fails to fully describe land-cover spatial dependence, making current sub-pixel mapping models defective. A more reasonable objective for sub-pixel mapping is to maximize both intra- and inter-pixel dependence simultaneously instead of using only one of them. In this article, the differences between intra- and inter-pixel dependence are discussed theoretically, and a novel sub-pixel mapping model aiming to maximize hybrid intra- and inter-pixel dependence is proposed. In the proposed model, spatial dependence is formulated as a weighted sum of intra-pixel dependence and inter-pixel dependence, so that both are satisfied. By application to artificial and synthetic images, the proposed model was evaluated both visually and quantitatively against three representative sub-pixel mapping algorithms: the pixel swapping algorithm, the sub-pixel/pixel attraction algorithm, and the pixel swapping algorithm initialized with sub-pixel/pixel attraction. The results showed increased accuracy of the proposed algorithm compared with these traditional sub-pixel mapping algorithms. A sketch of the weighted-sum objective is given below.
    Land Cover
    Spatial Dependence
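The abstract only states that the objective is a weighted sum, so the concrete dependence measures below (neighbor-agreement for the intra term, neighbor-fraction agreement for the inter term) and all names are illustrative assumptions:

```python
import numpy as np

def hybrid_dependence(labels, fractions, pixel, scale, w=0.5):
    """Score the sub-pixel labeling of one coarse pixel as a weighted sum
    of intra- and inter-pixel dependence.
    labels:    (H*scale, W*scale) integer sub-pixel class map
    fractions: (H, W, n_classes) coarse class-fraction image
    pixel:     (i, j) coarse pixel to score"""
    i, j = pixel
    s = scale
    block = labels[i * s:(i + 1) * s, j * s:(j + 1) * s]
    # Intra-pixel dependence: how often adjacent sub-pixels inside the
    # coarse pixel share a class (horizontal and vertical pairs).
    intra = 0.5 * (np.mean(block[:, 1:] == block[:, :-1]) +
                   np.mean(block[1:, :] == block[:-1, :]))
    # Inter-pixel dependence: agreement between each sub-pixel's class
    # and the class fractions of the 4-connected neighboring pixels.
    H, W, _ = fractions.shape
    neigh = [(i + di, j + dj) for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
             if 0 <= i + di < H and 0 <= j + dj < W]
    inter = np.mean([fractions[ni, nj, block].mean() for ni, nj in neigh])
    return w * intra + (1.0 - w) * inter
```

A mapping algorithm would then rearrange sub-pixel labels (e.g., by swapping) to increase this score while keeping each coarse pixel's class counts consistent with its fractions.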
In this paper, a new sub-pixel mapping algorithm is proposed based on a sub-pixel/sub-pixel spatial attraction model (SSSAM). Unlike the original sub-pixel/pixel spatial attraction model (SPSAM), the SSSAM considers the spatial distribution of each sub-pixel within the neighboring pixels when calculating the spatial attractions for the sub-pixels within the centre pixel. The attractions are then used to determine the class values of these sub-pixels. Two experiments, on three artificial images and one real remote sensing image, were carried out. Both results show that, compared with the traditional SPSAM, the proposed method can produce sub-pixel mapping results with higher accuracy. A sketch of the attraction computation is given below.
    Random walker algorithm
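A sketch of the sub-pixel/sub-pixel attraction idea (the inverse-distance weighting and the uniform spreading of each neighbor's class fraction over its sub-pixels are assumptions; SPSAM would instead use a single term per neighboring pixel, measured from the pixel centre):

```python
import numpy as np

def sssam_attraction(fractions, pixel, scale, cls):
    """Attraction of every sub-pixel inside the centre pixel toward class
    `cls`, accumulated from the sub-pixel locations of the 8 neighboring
    pixels, weighted by class fraction and inverse distance.
    fractions: (H, W, n_classes); pixel: (i, j); returns (scale, scale)."""
    i, j = pixel
    s = scale
    H, W, _ = fractions.shape
    cy, cx = np.mgrid[0:s, 0:s]          # centre pixel's sub-pixel coords
    att = np.zeros((s, s))
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            ni, nj = i + di, j + dj
            if (di, dj) == (0, 0) or not (0 <= ni < H and 0 <= nj < W):
                continue
            # Spread the neighbor's class fraction uniformly over its
            # own s*s sub-pixel locations.
            f = fractions[ni, nj, cls] / (s * s)
            ny, nx = np.mgrid[0:s, 0:s]
            ny, nx = ny + di * s, nx + dj * s
            for y, x in zip(ny.ravel(), nx.ravel()):
                att += f / np.hypot(y - cy, x - cx)
    return att

# Per sub-pixel, the class with the largest attraction (subject to the
# coarse pixel's class fractions) would then be assigned.
```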
    Citations (12)
Although the number of pixels in image sensors is increasing exponentially, production techniques have only been able to linearly reduce the probability that a pixel will be defective. The result is a rapidly increasing probability that a sensor will contain one or more defective pixels. Defect pixel detection and defect pixel correction operate separately, but the former must be employed before the latter can be used. The traditional detection scheme, which finds defective pixels during manufacturing, is unable to discover defective pixels that emerge years later. Consequently, a lifetime, robust defect pixel detection technique, which identifies faulty pixels while the camera is in use, is more practical, and such a technique is developed here. This paper presents a two-stage dead pixel detection technique without complicated mathematical computations, so that embedded devices can easily implement it. Six dead pixel types are tested, and the experimental results indicate that the detection time can be accelerated by more than four times. A hypothetical sketch of such a lightweight in-use detector is given below.
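The abstract does not describe the two stages, so the following is purely illustrative of a lightweight in-use detector in the same spirit (the median comparison, thresholds, and persistence rule are all assumptions):

```python
import numpy as np
from scipy.ndimage import median_filter

def detect_dead_pixels(frames, dev_thresh=30, persist=0.9):
    """Hypothetical two-stage detector. Stage 1: within each frame, flag
    pixels deviating strongly from their 3x3 local median. Stage 2: keep
    only pixels flagged in most frames, since scene detail moves between
    frames while dead pixels stay fixed."""
    hits = np.zeros(frames[0].shape, dtype=int)
    for f in frames:
        med = median_filter(f.astype(float), size=3)
        hits += np.abs(f.astype(float) - med) > dev_thresh
    return hits >= persist * len(frames)
```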