FRAUD2 - Face Replay Attack UQ Dataset, Version 2


Location

http://www.itee.uq.edu.au/sas/datasets/

FRAUD2 - Description

The FRAUD2 dataset contains the following items:
  1. Videos of objects illuminated by white light displayed on a computer tablet screen. The objects consist of 20 different soft toys, five printed faces on pieces of paper, and five live subjects (the same faces as the paper faces).

    The illumination from the computer screen is randomly turned on and off to form a digital watermark in the video capture sequence. This watermark is used to determine that the video was captured at a particular point in time (the time when that sequence of light/dark was displayed from the computer screen).

    Each object is recorded five times on each of three cameras. The cameras consist of the in-built Microsoft LifeCam on the Microsoft Surface Pro 2 tablet, a Logitech Pro 5000 webcam, and a Logitech Pro 9000 webcam.

    Objects are also recorded in each of six different environments, as outlined in the Download section. This results in 300 soft toy videos, 75 paper face videos, and 75 live face videos for each environment.

    Each video consists of a sequence of frames of the object, captured under a random screen illumination sequence such as the example below.

    This is an example of the random illumination sequence that would be observed during the video capture process (shown for a live face, where Dark = no illumination and Light = full illumination):

    Dark Dark Dark Dark Light Dark Dark Light Dark Light Light Light Dark Dark Light Light Dark Light Light Dark

    Images are captured using an application developed on Ubuntu Linux 13.10 with OpenCV 2.4.9, running in a VMware Player 6.0.4 virtual machine. The VM is hosted on a Microsoft Surface Pro 2 tablet running Windows 8.1 Pro. Individual frames are saved using the DIVX codec at a resolution of 640x480 and a nominal 30 fps (the actual frame rate varies, as the webcam driver adjusts it based upon current lighting conditions; however, this does not affect the results).

  2. Ground truth data for each video, detailing the illumination sequence that was used to illuminate the object. Included in this file is an integer representation of the illumination sequence, to assist simple comparison of results. This integer is calculated as the decimal value of the binary representation of the illumination sequence (Dark = 0, Light = 1).

    For example, for the illumination sequence "Dark Light Light Light Dark Dark Light", the binary representation is 0111001, producing a decimal result of 57. Note that the actual values will be a maximum of 2^31. A minimal code sketch of this encoding is given after this list.

  3. The results obtained using the techniques developed by the authors for classifying the reflections. These are provided for comparison purposes only.
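The following Python sketch illustrates the integer encoding of an illumination sequence described in item 2 above. It is an illustrative example only, not part of the dataset tooling; the function name sequence_to_int is our own.

    # Illustrative sketch: encode an illumination sequence as the decimal value
    # of its binary representation (Dark = 0, Light = 1), as described above.
    def sequence_to_int(sequence):
        """Convert a list of 'Dark'/'Light' labels to its integer watermark value."""
        bits = "".join("1" if state == "Light" else "0" for state in sequence)
        return int(bits, 2)

    # Example from the description: "Dark Light Light Light Dark Dark Light"
    # has binary representation 0111001, i.e. decimal 57.
    example = ["Dark", "Light", "Light", "Light", "Dark", "Dark", "Light"]
    print(sequence_to_int(example))  # prints 57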

Licence

The FRAUD2 (Face Replay Attack UQ Dataset, Version 2) and associated data ('Licensed Material') are made available to the scientific community for non-commercial research purposes such as academic research, teaching, scientific publications or personal experimentation. Permission is granted to you (the 'Licensee') to use, copy and distribute the Licensed Material in accordance with the following terms and conditions:

Conference Paper Title

Binary Watermarks: A Practical Method to Address Face Recognition Replay Attacks on Consumer Mobile Devices

Abstract

Mobile devices (laptops, tablets, and smart phones) are ideal for the wide deployment of biometric authentication, such as face recognition. However, their uncontrolled use and distributed management increases the risk of remote compromise of the device by intruders or malicious programs. Such compromises may result in the device being used to capture the user's face image and replay it to gain unauthorized access to their online accounts, possibly from a different device. Replay attacks can be highly automated and are cheap to launch worldwide, as opposed to spoofing attacks which are relatively expensive as they must be tailored to each individual victim. In this paper, we propose a technique to address replay attacks for a face recognition system by embedding a binary watermark into the captured video. Our monochrome watermark provides high contrast between the signal states, resulting in a robust signal that is practical in a wide variety of environmental conditions. It is also robust to different cameras and tolerates relative movements well. In this paper, the proposed technique is validated on different subjects using several cameras in a variety of lighting conditions. In addition, we explore the limitations of current devices and environments that can negatively impact on performance, and propose solutions to reduce the impact of these limitations.
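The abstract above describes matching the illumination pattern observed in a captured video against the binary watermark that was displayed. As a rough illustration of that idea only (this is not the authors' published classification technique; the function matches_watermark and its error-tolerance parameter are our own assumptions), a detected bit sequence could be compared to the expected watermark as follows:

    # Illustrative sketch (not the authors' method): compare a detected
    # light/dark bit string against the expected watermark, allowing a
    # limited number of bit errors.
    def matches_watermark(detected_bits, expected_bits, max_mismatches=0):
        if len(detected_bits) != len(expected_bits):
            return False
        mismatches = sum(d != e for d, e in zip(detected_bits, expected_bits))
        return mismatches <= max_mismatches

    # Example: expected watermark 0111001 (decimal 57), detected with one bit error.
    print(matches_watermark("0111101", "0111001", max_mismatches=1))  # True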

FRAUD2 Reflection Dataset Download

The current version of the FRAUD2 Dataset was uploaded on 06-Feb-2015. There are six .ZIP files, one for each of the following environments:

Environment  Download Size  Filename             Description
Dark         212 Mbytes     FRAUD2-Dark.zip      Darkened room
Office       168 Mbytes     FRAUD2-Office.zip    Indoor office with typical fluorescent lighting
Natural      203 Mbytes     FRAUD2-Natural.zip   Indoor office with lighting turned off, illuminated only by natural sunlight from windows
Cloud        356 Mbytes     FRAUD2-Cloud.zip     Outdoor environment under 100% cloud cover
Shade        581 Mbytes     FRAUD2-Shade.zip     Outdoor environment (no clouds), under 100% shade, but open to the sky
Sunlight     538 Mbytes     FRAUD2-Sunlight.zip  Outdoor environment fully illuminated by 100% direct sunlight

Ethical Clearance

The experiments involving human participants were performed in accordance with the School of ITEE Ethics Approval Number EC201303SMI.A1.

Contact

Further enquiries can be made to:
{danny döt smith ät uq döt net döt au},
{a döt wiliem ät uq döt edu döt au}, or
{lovell ät itee döt uq döt edu döt au}.

Last updated: 06-Feb-2015