PROTECT MULTIMODAL DATASET

The collection of biometric data representative of real border settings is an important part of the PROTECT project. The first collection session for the PROTECT Multimodal DB took place on the premises of the University of Reading from 26th to 29th June.

Several partners from UREAD, IRM, ITTI, WAT, EURECOM, PLUS, and VERIDOS took part in the event, whether by collecting biometric data with their own sensors, providing their biometric data as volunteers, or helping to organize the event.

The collection involved video, voice recording, optical sensing, and depth sensing to record the subjects' biometrics, including voice, 2D and 3D face, thermal face, iris/periocular, finger and hand veins, and anthropometrics/gait.

Biometric data were recorded from a total of 47 subjects, spanning a wide range of ages and both genders: ages ranged from 21 to 76, and the male/female distribution was 57%/43%.

HOW TO OBTAIN THE DATASET

A subset of the PROTECT Multimodal DB is released free of charge to academia and industry upon request.

A request can be made by signing the License Agreement and filling in the Google form available here: https://goo.gl/forms/FrFvsOB5ZiMb1jfw2

Once the submitted license agreement has been validated, the requester will receive a link to download the dataset.

The dataset comprises the biometric data of 20 subjects. All data is anonymized.

The biometric traits included are:

  • 2D face videos
  • 3D face light-field images
  • Mobile iris images
  • Thermal face images
  • Finger vein NIR images
  • Hand vein NIR grayscale data and videos
  • Voice audio files
  • Raw skeleton data in time-series form

Metadata, including age and gender, will also be provided for the selected subjects.
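
For orientation, the short Python sketch below shows one way a requester might enumerate the downloaded data and join it with the age/gender metadata. The folder layout, file names, and metadata columns used here (metadata.csv with subject_id, age, gender; one subject_* folder per person) are assumptions for illustration only; the structure of the actual release may differ.

    import csv
    from pathlib import Path

    # Assumed layout (illustrative only -- the real archive may differ):
    #   PROTECT_Multimodal_DB/
    #       metadata.csv          columns: subject_id, age, gender (assumed)
    #       subject_001/          one folder per anonymized subject
    #           face_2d/ face_3d_lf/ thermal/ iris/ finger_veins/
    #           hand_veins/ voice/ skeleton/
    DATASET_ROOT = Path("PROTECT_Multimodal_DB")  # assumed folder name

    def load_metadata(root):
        """Read per-subject metadata (age, gender) keyed by subject ID."""
        with open(root / "metadata.csv", newline="") as f:
            return {row["subject_id"]: row for row in csv.DictReader(f)}

    def iter_subjects(root):
        """Yield (subject_id, directory) for each anonymized subject folder."""
        for subject_dir in sorted(root.glob("subject_*")):
            yield subject_dir.name, subject_dir

    if __name__ == "__main__":
        if not DATASET_ROOT.exists():
            raise SystemExit(f"Dataset not found at {DATASET_ROOT}")
        meta = load_metadata(DATASET_ROOT)
        for subject_id, subject_dir in iter_subjects(DATASET_ROOT):
            info = meta.get(subject_id, {})
            traits = [p.name for p in subject_dir.iterdir() if p.is_dir()]
            print(subject_id, info.get("age"), info.get("gender"), traits)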

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement no. 700259.
© PROTECT consortium 2017. 
