    Datasets

    Here we give a short overview of the databases that have been used for benchmarking. If you would like us to include another database that can be used for benchmarking, please contact Dr.-Ing. Hazım Ekenel.

    Bosphorus Database

    The Bosphorus Database [ Paper ] is intended for research on 3D and 2D human face processing tasks including expression recognition, facial action unit detection, facial action unit intensity estimation, face recognition under adverse conditions, deformable face modeling, and 3D face reconstruction. There are 105 subjects and 4666 faces in the database. This database is unique in three aspects:

  • Rich repertoire of expressions:
      • Up to 35 expressions per subject
      • FACS scoring (includes intensity and asymmetry codes for each AU; see the sketch after this list)
      • One third of the subjects are professional actors/actresses
  • Systematic head poses (13 yaw and pitch rotations)
  • Varieties of face occlusions (beard & moustache, hair, hand, eyeglasses)
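
    Since each scan combines an identity with an expression, a pose, an occlusion, and FACS codes, a small index structure makes it easy to pull out subsets for the tasks listed above. The following Python sketch is purely illustrative: the per-scan record and its field names are assumptions for illustration, not the official Bosphorus file format.

        from dataclasses import dataclass, field
        from typing import Dict, List, Optional

        @dataclass
        class BosphorusScan:
            # One face scan; the metadata layout here is hypothetical.
            subject_id: int
            expression: Optional[str] = None      # one of up to 35 expression labels
            pose: Optional[str] = None            # one of the 13 yaw/pitch rotations
            occlusion: Optional[str] = None       # e.g. "hand", "eyeglasses", "hair"
            action_units: Dict[str, float] = field(default_factory=dict)  # AU -> intensity code
            asymmetric_aus: List[str] = field(default_factory=list)       # AUs coded as asymmetric

        def scans_with_au(scans: List[BosphorusScan], au: str, min_intensity: float = 0.0):
            # Select scans in which a given action unit is coded at least at min_intensity,
            # e.g. as training data for AU intensity estimation.
            return [s for s in scans if s.action_units.get(au, 0.0) >= min_intensity]
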
    Extended Cohn-Kanade (CK+) [ Paper ]

    FDDB [ Paper ]

    Feret [ Paper 1 , Paper 2 ]

    Face and Gesture Recognition Research Network (FG-NET) Aging Database

    The Face and Gesture Recognition Research Network (FG-NET) aging database [ Paper ] contains, for each of its 82 subjects, on average 12 pictures taken at varying ages between 0 and 69. Altogether there is a mixture of 1002 color and greyscale images, taken in completely uncontrolled environments. Each image was manually annotated with 68 landmark points. In addition, there is a data file for every image containing the type, quality, and size of the image as well as information about the subject such as age, gender, spectacles, hat, mustache, beard, and pose. One particular problem with this dataset is that the images are not equally distributed over age, so only a few images of persons older than 40 are available.
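
    The 68 landmark points are typically shipped as small plain-text annotation files. The sketch below assumes the common .pts-style layout (a short header with the point count, followed by x/y pairs enclosed in braces) and should be checked against the files actually distributed with FG-NET.

        from pathlib import Path
        from typing import List, Tuple

        def read_pts(path: str) -> List[Tuple[float, float]]:
            # Read landmark points from a plain-text .pts-style file.
            # Assumed layout: a "version:" line, an "n_points:" line,
            # then one "x y" pair per line between "{" and "}".
            lines = [l.strip() for l in Path(path).read_text().splitlines()]
            n_points = int(next(l for l in lines if l.lower().startswith("n_points")).split(":")[1])
            start = lines.index("{") + 1
            points = []
            for line in lines[start:start + n_points]:
                x, y = map(float, line.split()[:2])
                points.append((x, y))
            return points

        # landmarks = read_pts("001A02.pts")   # hypothetical file name
        # assert len(landmarks) == 68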

    FRGC [ Paper ]

    GEMEP-FERA [ Paper ]

    Labeled Faces in the Wild (LFW)

    The Labeled Faces in the Wild (LFW) [ Paper ] dataset contains faces of 5749 individuals (4263 male, 1486 female) collected from the web using a Viola-Jones face detector. Of these, 1680 people have more than one image available. In total there are 10256 male images and 2977 female images. The color images have a resolution of 250x250 pixels.
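
    For quick experiments, LFW can be fetched through scikit-learn's built-in loader (assuming scikit-learn is installed and the machine can download the data). The min_faces_per_person filter below keeps exactly the identities with more than one image mentioned above; note that the loader crops and rescales the 250x250 frames by default.

        from sklearn.datasets import fetch_lfw_people

        # Keep only the ~1680 identities that have at least two images.
        lfw = fetch_lfw_people(min_faces_per_person=2, color=True, resize=0.5)

        print(lfw.images.shape)       # (n_images, height, width, 3) after cropping/rescaling
        print(len(lfw.target_names))  # number of distinct identities kept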

    MORPH-II

    The MORPH-II database [ Paper ] is composed of 55608 color images of 13673 subjects aged between 16 and 99 years; 47057 of the images show male persons and 8551 female persons. 42897 of the images depict black faces, 10736 white, 1753 hispanic, 160 asian, 57 indian, and 5 faces of other ethnicities. The images have varying resolutions of either 200x240 or 400x480 pixels. The dataset is highly imbalanced towards black male subjects and contains no images of persons below the age of 16, which adds an additional challenge and can reveal how well machine learning approaches generalize.
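
    The imbalance described above is easy to quantify once the per-image metadata is at hand. The sketch below assumes the metadata has been exported to a CSV with hypothetical column names ("age", "gender", "race") and a hypothetical file name; adapt it to the files actually delivered with MORPH-II.

        import csv
        from collections import Counter

        ages, genders, races = [], Counter(), Counter()
        with open("morph2_metadata.csv", newline="") as f:     # hypothetical file name
            for row in csv.DictReader(f):
                ages.append(int(row["age"]))                   # hypothetical column names
                genders[row["gender"]] += 1
                races[row["race"]] += 1

        print("age range:", min(ages), "-", max(ages))
        print("gender counts:", dict(genders))
        print("ethnicity counts:", dict(races))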

    CMU Multi-PIE [ Paper ]

    McMaster-UNBC Pain Archive [ Paper ]

    YouTube Faces DB [ Paper ]

    References

    Tayfun Akgul

    Can an Algorithm Recognize Montage Portraits as Human Faces?

    IEEE Signal Processing Magazine, Vol. 28, No. 1, pp. 158-160, Jan. 2011.

    R. Gross, I. Matthews, J. F. Cohn, T. Kanade, and S. Baker

    Multi-PIE

    Image and Vision Computing, Vol. 28, No. 5, pp. 807-813, 2010.

    G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller

    Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments

    University of Massachusetts, Amherst, Tech. Rep., Oct. 2007.

    Vidit Jain and Erik Learned-Miller

    FDDB: A Benchmark for Face Detection in Unconstrained Settings

    Technical Report UM-CS-2010-009, Dept. of Computer Science, University of Massachusetts, Amherst, 2010.

    P. Lucey, J.F. Cohn, T. Kanade, J. Saragih, and Z. Ambadar

    The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression

    In Proceedings of the 3rd IEEE Workshop on CVPR for Human Communicative Behavior Analysis (CVPR4HB), CVPR 2010, 2010.

    P. J. Phillips, P. J. Flynn, T. Scruggs, K. W. Bowyer, J. Chang, K. Hoffman, J. Marques, J. Min, and W. Worek

    Overview of the Face Recognition Grand Challenge

    In Proc. 2005 IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, vol. 1, Los Alamitos, CA, USA, 2005, pp. 947–954.

    P.J. Phillips, H. Moon, S.A. Rizvi, P.J. Rauss

    The FERET Evaluation Methodology for Face Recognition Algorithms

    IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 22, pp. 1090-1104, 2000.

    P.J. Phillips, H. Wechsler, J. Huang, P. Rauss

    The FERET database and evaluation procedure for face recognition algorithms

    Image and Vision Computing, Vol. 16, No. 5, pp. 295-306, 1998.

    K. M. Prkachin and P. E. Solomon

    The structure, reliability and validity of pain expression: Evidence from patients with shoulder pain

    Pain, 139, 267-274, 2008.

    K. Ricanek Jr. and T. Tesafaye

    MORPH: A Longitudinal Image Database of Normal Adult Age-Progression

    In IEEE 7th International Conference on Automatic Face and Gesture Recognition (FGR’06), Southampton, UK, Apr. 2006, pp. 341–345.

    A. Savran, B. Sankur, M. T. Bilge

    Facial action unit detection: 3D versus 2D modality

    IEEE CVPR'10 Workshop on Human Communicative Behavior Analysis, San Francisco, California, USA, June 2010.

    Michel F. Valstar, Bihan Jiang, Marc Méhu, Maja Pantic, and Klaus Scherer

    The First Facial Expression Recognition and Analysis Challenge

    In Proc. IEEE Int’l Conf. Automatic Face and Gesture Recognition, 2011.

    Lior Wolf, Tal Hassner and Itay Maoz

    Face Recognition in Unconstrained Videos with Matched Background Similarity

    IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2011.