FRAUD1 - Face Replay Attack UQ Dataset, Version 1


Location

http://www.itee.uq.edu.au/sas/datasets/

FRAUD1 - Description

The FRAUD1 dataset contains the following items:

  1. Videos of objects illuminated by a computer tablet screen displaying different colours. The objects are a blank piece of white paper, a printed face on a piece of paper, and ten live subjects. The colours used to illuminate the objects are chosen at random from the following:

    Colour Number   Colour Name   Colour Specification (RGB)
    1               Black         (0, 0, 0)
    2               Red           (255, 0, 0)
    3               Green         (0, 255, 0)
    4               Blue          (0, 0, 255)
    5               Magenta       (255, 0, 255)
    6               Cyan          (0, 255, 255)
    7               Yellow        (255, 255, 0)

    The white paper and paper face objects have 150 videos, and each of the ten live subjects has 15 videos. Each video consists of the following:

    The actual number of frames of illumination and no illumination will vary slightly due to different responses from the camera device.

    Each illuminated object video set is captured twice: first using the built-in Microsoft LifeCam on a Microsoft Surface Pro 2 tablet, and second using an external Logitech Pro 9000 webcam connected to the same tablet.

    Images are captured using an application developed on Ubuntu Linux 13.10 with OpenCV 2.4.8, running in a VMware Player 6.0.2 virtual machine. The VM is hosted on a Microsoft Surface Pro 2 tablet running Windows 8.1 Pro. Captured frames are saved as video using the DIVX codec, at a resolution of 640x480 and 30 frames per second.

  2. Ground truth data for each video, detailing the colours that were used to illuminate the object. This file also includes an integer representation of the colour sequence, to simplify comparison of results. The integer is calculated from the colour index values above using the following formula:
    Result = ( ( ( Colour1 * 7) + Colour2) * 7 + Colour3) * 7 + Colour4
    

    For example, for the colour sequence "Red Magenta Green Green", the result is:

    955 = ( ( ( 2 * 7) + 5) * 7 + 3) * 7 + 3
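    This encoding effectively accumulates the four colour indices in base-7 style. A minimal sketch in Python (the function name is illustrative, not part of the dataset tools):

    ```python
    # Colour indices from the table above (1 = Black ... 7 = Yellow).
    COLOUR_INDEX = {
        "Black": 1, "Red": 2, "Green": 3, "Blue": 4,
        "Magenta": 5, "Cyan": 6, "Yellow": 7,
    }

    def sequence_to_int(colours):
        """Encode a sequence of colour names as the ground-truth integer."""
        result = 0
        for name in colours:
            result = result * 7 + COLOUR_INDEX[name]
        return result

    print(sequence_to_int(["Red", "Magenta", "Green", "Green"]))  # 955
    ```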
    

  3. The training data used to train the Support Vector Machine. This training data was captured by reflecting colours from a blank white sheet of paper and capturing the images with the Logitech Pro 9000 camera. The training data was manually labelled.

    The training data is structured as follows:

    <data-line> ::= <colour><SPACE>
                    1:<red-percent><SPACE>
                    2:<yellow-percent><SPACE>
                    3:<green-percent><SPACE>
                    4:<cyan-percent><SPACE>
                    5:<blue-percent><SPACE>
                    6:<magenta-percent><SPACE>
                    7:<black-percent><EOL>
    
    <colour> ::= "1" | "2" | "3" | "4" | "5" | "6" | "7"
    
    <SPACE> ::= " "
    
    <black-percent> ::= <integer>
    <red-percent> ::= <float>
    <yellow-percent> ::= <float>
    <green-percent> ::= <float>
    <cyan-percent> ::= <float>
    <blue-percent> ::= <float>
    <magenta-percent> ::= <float>
    
    <integer> ::= <digits>
    <float> ::= <digits>.<digits>
    <digits> ::= <digit> | <digits><digit>
    <digit> ::= "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9"
    

    <colour> represents the ground truth classified colour. This is defined as:

      1:  Black
      2:  Red
      3:  Green
      4:  Blue
      5:  Magenta
      6:  Cyan
      7:  Yellow
    

    The <black-percent> is calculated as the percentage of pixels whose value in the V channel of the HSV colour space lies in the range 0-2.

    Each colour percent is calculated as a percentage of pixels that have the following properties:

    The percentage is calculated by dividing the number of pixels counted for each colour channel by the total number of pixels counted across all six colour channels. That is, the sum of <red-percent> through <magenta-percent> will always equal 100%.
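    The two feature calculations described above can be sketched as follows; the function names are illustrative, and they operate on plain sequences of pixel values and counts rather than on actual video frames:

    ```python
    def black_percent(v_channel):
        """Percentage of pixels whose HSV V value lies in the range 0-2.
        v_channel is a flat sequence of V values (0-255)."""
        values = list(v_channel)
        dark = sum(1 for v in values if v <= 2)
        return 100.0 * dark / len(values)

    def colour_percents(channel_counts):
        """Normalise six per-colour pixel counts (red, yellow, green,
        cyan, blue, magenta) into percentages summing to 100."""
        total = sum(channel_counts)
        return [100.0 * c / total for c in channel_counts]

    print(black_percent([0, 2, 128, 255]))        # 50.0
    print(colour_percents([10, 0, 0, 0, 0, 10]))  # [50.0, 0.0, 0.0, 0.0, 0.0, 50.0]
    ```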

    An example line of training data is:

    6 1:0.879917 2:0.0690131 3:4.00276 4:80.9179 5:13.9924 6:0.138026 7:1
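    Each line follows an SVM-light/LIBSVM-style "label index:value" layout, so it can be parsed directly. A minimal sketch in Python (the function name is illustrative):

    ```python
    def parse_training_line(line):
        """Split one training line into its ground-truth colour label
        and a dict mapping feature index (1-7) to its percentage value."""
        parts = line.split()
        label = int(parts[0])
        features = {}
        for pair in parts[1:]:
            index, value = pair.split(":")
            features[int(index)] = float(value)
        return label, features

    label, feats = parse_training_line(
        "6 1:0.879917 2:0.0690131 3:4.00276 4:80.9179 5:13.9924 6:0.138026 7:1")
    # label is 6 (Cyan); feats[4] is the cyan-percent, 80.9179
    ```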
    
  4. The results obtained using the techniques developed by the authors for classifying the colour reflections. These are provided for comparison purposes only.

This table shows example frames taken from captured videos.

[Image table: example frames of the White Paper and Paper Face objects, each shown with no illumination (Black) and under Red, Yellow, Green, Cyan, Blue, and Magenta illumination.]

Licence

The FRAUD1 (Face Replay Attack UQ Dataset, Version 1) and associated data ('Licensed Material') are made available to the scientific community for non-commercial research purposes such as academic research, teaching, scientific publications or personal experimentation. Permission is granted to you (the 'Licensee') to use, copy and distribute the Licensed Material in accordance with the following terms and conditions:

Journal Article Title

Face Recognition on Consumer Devices: Reflections on Replay Attacks

Abstract

Widespread deployment of biometric systems supporting consumer transactions is yet to occur. Smart consumer devices, such as tablets and phones, have the potential to act as biometric readers authenticating user transactions. However, the use of these devices in uncontrolled environments is highly susceptible to replay attacks where these biometric data are captured and replayed at a later time. Current approaches to counter replay attacks in this context are inadequate. In order to show this, we demonstrate a simple replay attack that is 100% effective against a recent state-of-the-art face recognition system; this system was specifically designed to robustly distinguish between live people and spoofing attempts such as photographs. This paper proposes an approach to counter replay attacks for face recognition on smart consumer devices using a noninvasive challenge and response technique. The image on the screen creates the challenge, and the dynamic reflection from the person's face as they look at the screen forms the response. The sequence of screen images and their associated reflections digitally watermarks the video. By extracting the features from the reflection region, it is possible to determine if the reflection matches the sequence of images that were displayed on the screen. Experiments indicate that the face reflection sequences can be classified under ideal conditions with a high degree of confidence. These encouraging results may pave the way for further studies in the use of video analysis for defeating biometric replay attacks on consumer devices.

Colour Reflection Dataset Download

The current version of the FRAUD1 Dataset was uploaded on 06-Feb-2015.
http://dropbox.eait.uq.edu.au/e3blovel/FRAUD1_Dataset.zip (207 Megabytes)

Ethical Clearance

The experiment data involving humans was performed in accordance with the School of ITEE Ethics Approval Number: EC201303SMI.A1.

Contact

Further enquiries can be made to:
{danny döt smith ät uq döt net döt au},
{a döt wiliem ät uq döt edu döt au}, or
{lovell ät itee döt uq döt edu döt au}.

Last updated: 06-Feb-2015