We report a new vertical alignment fringe in-plane switching (VA-FIS) liquid crystal device with a single alignment layer on the top substrate. The design exhibits submillisecond response time, reasonably high transmittance (~70%), and a relatively low operating voltage (15 V), making it a strong contender for next-generation field-sequential-color displays.
Knowledge distillation (KD) is a widely used technique that leverages large networks to improve the performance of compact models. Previous KD approaches usually guide the student to mimic the teacher's behavior completely in the representation space. However, such one-to-one correspondence constraints may lead to inflexible knowledge transfer from the teacher to the student, especially for students with low model capacity. Inspired by the ultimate goal of KD methods, we propose a novel Evaluation-oriented KD method (EKD) for deep face recognition that directly reduces the performance gap between the teacher and student models during training. Specifically, we adopt the evaluation metrics commonly used in face recognition, i.e., the False Positive Rate (FPR) and True Positive Rate (TPR), as performance indicators. According to the evaluation protocol, the critical pair relations that cause the TPR and FPR differences between the teacher and student models are selected. Then, the critical relations in the student are constrained to approximate the corresponding ones in the teacher by a novel rank-based loss function, giving more flexibility to a student with low capacity. Extensive experimental results on popular benchmarks demonstrate the superiority of our EKD over state-of-the-art competitors.
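To make the idea concrete, below is a minimal, hypothetical sketch of how critical pairs could be selected from a target FPR and pulled toward the teacher's scores. It assumes cosine-similarity scores for labeled positive/negative verification pairs, and it substitutes a simple regression term for the paper's rank-based loss; the function name `ekd_style_loss` and all arguments are illustrative, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def ekd_style_loss(t_pos, t_neg, s_pos, s_neg, fpr=1e-3):
    """Illustrative evaluation-oriented KD loss (simplified sketch).

    t_pos / t_neg: teacher cosine similarities of positive / negative pairs.
    s_pos / s_neg: the same pairs scored by the student.
    The threshold is taken from the teacher's negative-pair distribution at
    the target FPR; pairs that fall on the wrong side of it for the student
    but not for the teacher are treated as critical and pulled toward the
    teacher's scores (the paper uses a rank-based loss instead).
    """
    # Threshold achieving roughly the target FPR on the teacher's negatives.
    k = max(1, int(fpr * t_neg.numel()))
    thr = torch.topk(t_neg, k).values.min()

    # Critical positives: accepted by the teacher, rejected by the student.
    crit_pos = (t_pos >= thr) & (s_pos < thr)
    # Critical negatives: rejected by the teacher, accepted by the student.
    crit_neg = (t_neg < thr) & (s_neg >= thr)

    loss = torch.tensor(0.0, device=t_pos.device)
    if crit_pos.any():
        loss = loss + F.smooth_l1_loss(s_pos[crit_pos], t_pos[crit_pos])
    if crit_neg.any():
        loss = loss + F.smooth_l1_loss(s_neg[crit_neg], t_neg[crit_neg])
    return loss
```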
As an emerging topic in face recognition, margin-based loss functions increase the feature margin between different classes for enhanced discriminability. More recently, mining-based strategies have been adopted to emphasize misclassified samples, achieving promising results. However, over the entire training process, prior methods either do not explicitly emphasize each sample according to its importance, leaving hard samples not fully exploited, or explicitly emphasize the effects of semi-hard/hard samples even in the early training stage, which may lead to convergence issues. In this work, we propose a novel Adaptive Curriculum Learning loss (CurricularFace) that embeds the idea of curriculum learning into the loss function to achieve a new training strategy for deep face recognition, which mainly addresses easy samples in the early training stage and hard ones in the later stage. Specifically, our CurricularFace adaptively adjusts the relative importance of easy and hard samples during different training stages. In each stage, different samples are assigned different importance according to their corresponding difficulty. Extensive experimental results on popular benchmarks demonstrate the superiority of our CurricularFace over state-of-the-art competitors.
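The following is a simplified sketch of a CurricularFace-style classification head reflecting the adaptive strategy described above: hard negative logits (those exceeding the margin-adjusted target cosine) are modulated by (t + cos θ), where t is an exponential moving average of the positive cosines, so hard samples gain weight as training progresses. The class name and hyperparameter values are illustrative, not the official implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CurricularMarginProduct(nn.Module):
    """Sketch of a CurricularFace-style head (simplified, easy-margin
    handling and initialization details omitted)."""

    def __init__(self, in_features, num_classes, s=64.0, m=0.5, alpha=0.01):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, in_features) * 0.01)
        self.s, self.m, self.alpha = s, m, alpha
        self.register_buffer("t", torch.zeros(1))  # curriculum parameter

    def forward(self, embeddings, labels):
        # Cosine similarities between normalized features and class weights.
        cos = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        cos_y = cos.gather(1, labels.view(-1, 1)).clamp(-1 + 1e-7, 1 - 1e-7)
        # Margin-adjusted target logit: cos(theta_y + m).
        target = torch.cos(torch.acos(cos_y) + self.m)

        # Update t with an EMA of the positive cosines (grows during training).
        with torch.no_grad():
            self.t = self.alpha * cos_y.mean() + (1 - self.alpha) * self.t

        # Hard negatives: cosine larger than the margin-adjusted target logit.
        hard = cos > target
        logits = torch.where(hard, cos * (self.t + cos), cos)
        logits = logits.scatter(1, labels.view(-1, 1), target)
        return self.s * logits  # feed into nn.CrossEntropyLoss
```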
Liquid crystal on silicon (LCoS) backplanes with fine pixel size and high resolution have been developed by various companies and research groups since 1973. The development of LCoS benefits not only full-high-definition displays but also spatial light modulation. High-quality, well-calibrated panels can faithfully project computer-generated hologram (CGH) designs for phase-only holography, which can be widely utilized in 2D/3D holographic video projectors and in components for optical telecommunications. Accordingly, we start by summarizing the current status of high-resolution panels and then address issues related to the driving frequency (i.e., liquid crystal response time and hardware interface). LCoS panel quality is evaluated on four characteristics: phase linearity control, phase precision, phase stability, and phase accuracy.
Large facial variations are the main challenge in face recognition. To address this, previous variation-specific methods make full use of task-related priors to design specialized network losses, which are typically not general across different tasks and scenarios. In contrast, existing generic methods focus on improving feature discriminability to minimize the intra-class distance while maximizing the inter-class distance; they perform well on easy samples but fail on hard samples. To improve the performance on hard samples for general tasks, we propose a novel Distribution Distillation Loss to narrow the performance gap between easy and hard samples, which is simple, effective, and generic for various types of facial variations. Specifically, we first adopt state-of-the-art classifiers such as ArcFace to construct two similarity distributions: a teacher distribution from easy samples and a student distribution from hard samples. Then, we propose a novel distribution-driven loss to constrain the student distribution to approximate the teacher distribution, which leads to a smaller overlap between the positive and negative pairs in the student distribution. We have conducted extensive experiments on both generic large-scale face benchmarks and benchmarks with diverse variations in race, resolution, and pose. The quantitative results demonstrate the superiority of our method over strong baselines, e.g., ArcFace and CosFace.
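As a rough illustration of the distribution-driven idea, the sketch below builds differentiable histograms of pair similarities for easy (teacher) and hard (student) samples and penalizes their divergence, plus a simple order term; the specific loss terms, bin count, kernel width, and weights here are assumptions and differ from the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def soft_histogram(sims, bins=20, sigma=0.1):
    """Differentiable (kernel-smoothed) histogram of cosine similarities in [-1, 1]."""
    centers = torch.linspace(-1.0, 1.0, bins, device=sims.device)
    weights = torch.exp(-0.5 * ((sims.view(-1, 1) - centers) / sigma) ** 2)
    hist = weights.sum(dim=0) + 1e-6  # small offset avoids empty bins
    return hist / hist.sum()

def distribution_distillation_loss(easy_pos, easy_neg, hard_pos, hard_neg, lam=1.0):
    """Illustrative distribution-distillation-style loss (simplified sketch).

    The teacher distributions come from easy-sample similarity scores, the
    student distributions from hard-sample scores; KL terms pull the hard
    distributions toward the easy ones, and an order term keeps the hard
    positive/negative margin from shrinking below the easy one.
    """
    kl_pos = F.kl_div(soft_histogram(hard_pos).log(), soft_histogram(easy_pos),
                      reduction="sum")
    kl_neg = F.kl_div(soft_histogram(hard_neg).log(), soft_histogram(easy_neg),
                      reduction="sum")
    order = F.relu((easy_pos.mean() - easy_neg.mean())
                   - (hard_pos.mean() - hard_neg.mean()))
    return kl_pos + kl_neg + lam * order
```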
We propose a multitask convolutional neural network (CNN) for general no-reference image quality assessment (NR-IQA). We decompose the task of rating image quality into two subtasks, namely distortion identification and distortion-level estimation, and then combine the results of the two subtasks to obtain a final image quality score. Unlike conventional multitask convolutional networks, wherein only the early layers are shared and the subsequent layers are different for each subtask, our model shares almost all the layers by integrating a dictionary into the CNN. Moreover, it is trained in an end-to-end manner, and all the parameters, including the weights of the convolutional layers and the codewords of the dictionary, are simultaneously learned from the loss function. We test our method on widely used image quality databases and show that its performance is comparable with those of state-of-the-art general-purpose NR-IQA algorithms.
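A minimal sketch of the two-subtask design is given below: a shared backbone feeds a distortion-identification head and a distortion-level head, and the final score is the probability-weighted combination of the per-type level estimates. The dictionary component that allows the paper's model to share almost all layers is omitted, and the architecture and class name shown are purely illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiTaskIQA(nn.Module):
    """Two-head NR-IQA sketch: distortion identification + level estimation."""

    def __init__(self, num_distortions=5):
        super().__init__()
        # Shared backbone (illustrative; the paper shares layers via a dictionary).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.type_head = nn.Linear(64, num_distortions)   # distortion identification
        self.level_head = nn.Linear(64, num_distortions)  # per-type level estimation

    def forward(self, x):
        feat = self.backbone(x)
        p_type = self.type_head(feat).softmax(dim=1)
        levels = self.level_head(feat)
        # Combine the two subtasks: expectation of per-type levels.
        score = (p_type * levels).sum(dim=1)
        return score, p_type, levels

# Usage sketch: score, type_probs, levels = MultiTaskIQA()(torch.randn(4, 3, 224, 224))
```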