Handwritten signature recognition is a biometric modality that is beginning to be deployed. It is therefore necessary to analyze the robustness of the recognition process against presentation attacks in order to uncover its vulnerabilities. Building on the results of previous work, the vulnerabilities were identified and two presentation attack detection techniques were implemented. With these implementations, a new evaluation was performed, showing a clear improvement in performance: error rates dropped from about 20% to below 3% under operational conditions.
Biometric recognition already plays a major role in how we interact with our phones and with access control systems, thanks to its convenience, speed, and security. In border control, it eases the tasks of person identification and black-list checking. Although verification and identification error rates have dropped over the last decades, protection against vulnerabilities is still under heavy development. This paper focuses on the detection of presentation attacks in fingerprint biometrics, i.e., attacks performed at the sensor level, from a hardware perspective. Most research on presentation attacks has addressed software techniques because of their lower cost, as hardware solutions generally require additional subsystems. In this work, two low-cost handheld microscopes with special lighting conditions were used to capture real and fake fingerprints, yielding a total of 7704 images from 17 subjects. After several analyses of wavelengths and classification, it was concluded that a single wavelength is enough to obtain a very low error rate compared with other solutions: an attack presentation classification error rate (APCER) of 1.78% and a bona fide presentation classification error rate (BPCER) of 1.33%, even with non-conformant fingerprints included in the database. On one specific wavelength, a BPCER of 0% was achieved (over 1926 samples). The solution is therefore both low cost and effective. The evaluation and reporting follow ISO/IEC 30107-3.
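The two reported metrics are defined in ISO/IEC 30107-3: APCER is the proportion of attack presentations wrongly classified as bona fide, and BPCER is the proportion of bona fide presentations wrongly classified as attacks. A minimal sketch of the computation, using made-up decision lists rather than the paper's data:

```python
def apcer(attack_decisions):
    # APCER: fraction of attack presentations misclassified as bona fide
    errors = sum(1 for d in attack_decisions if d == "bona_fide")
    return errors / len(attack_decisions)

def bpcer(bona_fide_decisions):
    # BPCER: fraction of bona fide presentations misclassified as attacks
    errors = sum(1 for d in bona_fide_decisions if d == "attack")
    return errors / len(bona_fide_decisions)

# toy example: 1000 attack samples (18 missed), 1000 bona fide samples (13 rejected)
attacks = ["bona_fide"] * 18 + ["attack"] * 982
bona_fide = ["attack"] * 13 + ["bona_fide"] * 987
print(f"APCER = {apcer(attacks):.2%}")     # APCER = 1.80%
print(f"BPCER = {bpcer(bona_fide):.2%}")   # BPCER = 1.30%
```

Both metrics are reported per presentation attack instrument species in the standard; the sketch above collapses them into a single pool for brevity.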
The Mobile Pass project focused its efforts on developing technologically advanced mobile equipment for land border crossing points. During the project, a mobile device supporting biometric recognition (face and fingerprint) and passport checking was designed and developed. To test this new equipment in an operational environment, a usability evaluation was carried out at the Sculeni border crossing (between Romania and Moldova), with Romanian police officers as operators and real volunteer travelers as data subjects. The equipment was used for two days under different conditions in real scenarios, and the usability evaluation followed the ISO 9241-11:1998 standard. A total of 93 participants completed the new border check and filled in a satisfaction survey at the end of the experiment. This paper describes the evaluation and reports preliminary results on the interaction between users and the Automatic Border Control system, together with the final conclusions of the project, providing a valuable guide for designing user interaction in security areas.
With the growing number of voice biometrics applications (e.g., banking), it has become necessary to assess and compare how well different systems detect presentation attacks, as attackers could gain access to sensitive data. To this end, a common ground is needed to perform comparable security evaluations. Based on our experience performing such evaluations, this paper unifies several methodologies, such as Common Criteria and ISO/IEC 30107-3, to evaluate the security of voice biometric systems.
Biometric recognition is increasingly used in our daily lives due to its advantages in convenience, speed, and security. Nevertheless, biometric systems can be attacked at many points, presentation attacks being a common issue. Thus, accurate presentation attack detection methods need to be studied and evaluated to overcome these vulnerabilities. In this paper, a narrow-band camera with 10 nm increments in wavelength is used to observe real fingerprints and artefacts (Play-Doh, latex, and transparent nail polish) and to study how classification accuracy depends on the wavelength. A total of 9,646 images were captured across 36 wavelengths, and an APCER of 1.98% was obtained with a 50% training / 50% testing split. All results are calculated in accordance with the requirements of ISO/IEC 30107-3.
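The per-wavelength analysis described above amounts to evaluating the same bona fide/attack classification task once per band and comparing the scores. A minimal sketch of that sweep, with hypothetical labels and predictions standing in for the actual image data:

```python
def accuracy(labels, predictions):
    # fraction of presentations classified correctly in one band
    correct = sum(1 for y, p in zip(labels, predictions) if y == p)
    return correct / len(labels)

# hypothetical results: wavelength (nm) -> (true labels, classifier outputs)
per_wavelength = {
    500: (["bona_fide", "attack", "attack"], ["bona_fide", "bona_fide", "attack"]),
    510: (["bona_fide", "attack", "attack"], ["bona_fide", "attack", "attack"]),
    520: (["bona_fide", "attack", "attack"], ["attack", "attack", "attack"]),
}

scores = {wl: accuracy(*data) for wl, data in per_wavelength.items()}
best_wl = max(scores, key=scores.get)
print(best_wl, scores[best_wl])  # 510 1.0
```

In practice each band would be scored with APCER/BPCER on thousands of samples, but the selection logic is the same: train and test per wavelength, then keep the band (or bands) with the lowest error.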
Biometric systems on mobile devices are an increasingly ubiquitous method of identity verification. The majority of contemporary devices have an embedded fingerprint sensor that may be used for a variety of transactions, including unlocking the device or authorizing a payment. In this study we explore how easy it is to successfully attack a fingerprint system using a fake finger manufactured from commonly available materials. Importantly, our attackers were novices at producing the fingers and were also constrained by time. Our study shows the relative ease with which modern devices can be attacked and the material combinations that lead to successful attacks.
With the growing number of people who own a smartphone with a fingerprint sensor, it is necessary to assess and compare different smartphones' ability to reject fake fingerprints, as attackers could gain access to sensitive data (bank accounts, pictures, documents). To this end, a common ground is needed to perform comparable security evaluations. This paper unifies several methodologies to evaluate the security of fingerprint biometric systems embedded in mobile devices. The methodology is then applied to 5 different smartphones in a security evaluation, and their ability to reject fake fingerprints is compared.
Deep neural networks were first developed decades ago, but they only recently started being used extensively, owing to their computing power requirements. Since then, they have been applied to an increasing number of fields and have undergone far-reaching advancements. More importantly, they are being used for critical matters, such as decision-making in healthcare procedures or autonomous driving, where risk management is crucial. Any mistake in diagnosis or decision-making in these fields could lead to grave accidents, and even death. This is concerning, because it has been repeatedly reported that such models are straightforward to attack. These attacks must therefore be studied to assess their risk, and defenses need to be developed to make models more robust. In this work, the most widely known attack, the adversarial attack, was selected, and several defenses were implemented against it (i.e., adversarial training, dimensionality reduction, and prediction similarity). The resulting defenses make the model more robust while keeping a similar accuracy. The idea was developed using a breast cancer dataset and a VGG16 and a dense neural network model, but the solutions could be applied to datasets from other areas and to different convolutional and dense deep neural network models.
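As an illustration of the attack family studied, the fast gradient sign method (FGSM) perturbs each input feature one small step along the sign of the loss gradient, which typically lowers the model's confidence in the true class. The sketch below applies it to a tiny hand-written logistic model; all weights and inputs are made up for illustration, and the paper's VGG16 setup is of course far larger:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sign(x):
    return (x > 0) - (x < 0)

def fgsm_perturb(x, grad, epsilon=0.05):
    # FGSM: move each feature one epsilon-step along the sign of the
    # loss gradient, clipping to keep features in the valid [0, 1] range
    return [min(max(xi + epsilon * sign(gi), 0.0), 1.0)
            for xi, gi in zip(x, grad)]

# tiny logistic "model": p = sigmoid(w.x + b), true label y = 1
w = [2.0, -1.5, 0.5]
b = 0.1
x = [0.6, 0.2, 0.8]

p = sigmoid(dot(w, x) + b)              # confidence before the attack
grad_x = [(p - 1.0) * wi for wi in w]   # d(cross-entropy)/dx for y = 1
x_adv = fgsm_perturb(x, grad_x)
p_adv = sigmoid(dot(w, x_adv) + b)      # confidence after the attack
print(p > p_adv)  # True: the perturbation lowers the model's confidence
```

Adversarial training, one of the defenses listed above, consists of generating such perturbed inputs during training and including them (with their correct labels) in the training set, so the model learns to classify them correctly.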