For age and gender estimation, create the AgeGenderEstimator object by calling the FacerecService.createAgeGenderEstimator method and passing the configuration file. Currently, two configuration files are available:
age_gender_estimator.xml – the first implementation of the AgeGenderEstimator interface
age_gender_estimator_v2.xml – an improved version of the AgeGenderEstimator interface that provides higher accuracy of age and gender estimation, provided that you follow the Guidelines for Cameras
With AgeGenderEstimator you can estimate the age and gender of a captured face using
AgeGenderEstimator.estimateAgeGender. The result is the
AgeGenderEstimator.AgeGender struct containing the age (in years), age group (
AgeGenderEstimator.Age) and gender (
AgeGenderEstimator.Gender). See the example of using the
AgeGenderEstimator in demo.cpp.
Learn how to estimate Age & Gender in an image in our tutorial Estimating Age, Gender, and Emotions.
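As a quick illustration, here is a minimal C++ sketch in the style of demo.cpp. It assumes the FacerecService has already been created and a face has already been captured (e.g. by a Capturer); the header names and struct fields follow the SDK samples, so check the API reference if they differ in your version.

```cpp
#include <iostream>

#include <facerec/import.h>
#include <facerec/libfacerec.h>

// Sketch: `service` is an initialized FacerecService, `sample` is a face
// captured earlier (e.g. by a Capturer), as in demo.cpp.
void printAgeGender(const pbio::FacerecService::Ptr service,
                    const pbio::RawSample& sample)
{
    const pbio::AgeGenderEstimator::Ptr estimator =
        service->createAgeGenderEstimator("age_gender_estimator_v2.xml");

    const pbio::AgeGenderEstimator::AgeGender result =
        estimator->estimateAgeGender(sample);

    std::cout << "age: " << result.age_years << " years\n"
              << "gender: "
              << (result.gender == pbio::AgeGenderEstimator::GENDER_MALE
                      ? "male" : "female")
              << std::endl;
}
```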
At the moment, there are two quality estimation interfaces:
QualityEstimator provides discrete grades of quality for flare, lighting, noise, and sharpness.
FaceQualityEstimator provides quality as a single real value that aggregates sample usability for face recognition (i.e. pose, occlusion, noise, blur, and lighting), which is very useful for comparing samples of one person from video tracking.
To create the
QualityEstimator object, call the
FacerecService.createQualityEstimator method by passing the configuration file. Currently, two configuration files are available:
quality_estimator.xml – the first implementation of the QualityEstimator quality estimation interface
quality_estimator_iso.xml (recommended) – an improved version of the QualityEstimator quality estimation interface that provides higher accuracy of quality estimation
With QualityEstimator you can estimate the quality of a captured face using
QualityEstimator.estimateQuality. The result is the
QualityEstimator.Quality structure that contains estimated flare, lighting, noise, and sharpness level.
See the example of using the
QualityEstimator in demo.cpp.
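A minimal C++ sketch of the same flow (assuming `service` and `sample` were created as in the age/gender example above; the Quality field names follow the SDK samples):

```cpp
#include <iostream>

#include <facerec/import.h>
#include <facerec/libfacerec.h>

// Sketch: discrete quality grades for a captured sample.
void printQuality(const pbio::FacerecService::Ptr service,
                  const pbio::RawSample& sample)
{
    const pbio::QualityEstimator::Ptr estimator =
        service->createQualityEstimator("quality_estimator_iso.xml");

    const pbio::QualityEstimator::Quality quality =
        estimator->estimateQuality(sample);

    std::cout << "flare: "     << quality.flare     << "\n"
              << "lighting: "  << quality.lighting  << "\n"
              << "noise: "     << quality.noise     << "\n"
              << "sharpness: " << quality.sharpness << std::endl;
}
```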
To create the
FaceQualityEstimator object, call the
FacerecService.createFaceQualityEstimator method by passing the configuration file. Currently, there is only one configuration file available, which is face_quality_estimator.xml. With
FaceQualityEstimator you can estimate the quality of a captured face using
FaceQualityEstimator.estimateQuality. This results in a real number (the greater it is, the higher the quality), which aggregates sample usability for face recognition. See the example of using the
FaceQualityEstimator in demo.cpp.
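For example (a sketch under the same assumptions as above):

```cpp
#include <iostream>

#include <facerec/import.h>
#include <facerec/libfacerec.h>

// Sketch: a single aggregated quality value for a captured sample.
void printFaceQuality(const pbio::FacerecService::Ptr service,
                      const pbio::RawSample& sample)
{
    const pbio::FaceQualityEstimator::Ptr estimator =
        service->createFaceQualityEstimator("face_quality_estimator.xml");

    // The greater the value, the higher the sample quality.
    const float quality = estimator->estimateQuality(sample);
    std::cout << "face quality: " << quality << std::endl;
}
```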
The main purpose of liveness estimation is to prevent spoofing attacks (using a photo of a person instead of a real face). Currently, you can estimate liveness in one of three ways: by processing a depth map, an IR image, or an RGB image from your camera.
Learn how to estimate liveness of a face in our tutorial Liveness Detection.
To estimate liveness with a depth map, you should create the DepthLivenessEstimator object using the FacerecService.createDepthLivenessEstimator method.
The following configuration files are available:
- depth_liveness_estimator.xml – the first implementation (not recommended; used only for backward compatibility);
- depth_liveness_estimator_cnn.xml – an implementation based on neural networks (recommended).
To use this algorithm, you need to obtain synchronized and registered frames (a color image + a depth map), use the color image for face tracking/detection, and pass the corresponding depth map to the DepthLivenessEstimator.
To get an estimated result, you can call the
DepthLivenessEstimator.estimateLiveness method. You will get one of the following results:
DepthLivenessEstimator.NOT_ENOUGH_DATA – too many missing depth values on the depth map.
DepthLivenessEstimator.REAL – the observed face belongs to a living person.
DepthLivenessEstimator.FAKE – the observed face is taken from a photo.
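A minimal C++ sketch of this flow (construction of the pbio::DepthMapRaw from your camera frames is omitted, and the exact estimateLiveness signature should be checked against the API reference):

```cpp
#include <iostream>

#include <facerec/import.h>
#include <facerec/libfacerec.h>

// Sketch: `sample` was detected on the color frame, `depth_map` is the
// registered depth map for the same moment in time.
void checkDepthLiveness(const pbio::FacerecService::Ptr service,
                        pbio::RawSample& sample,
                        const pbio::DepthMapRaw& depth_map)
{
    const pbio::DepthLivenessEstimator::Ptr estimator =
        service->createDepthLivenessEstimator("depth_liveness_estimator_cnn.xml");

    switch (estimator->estimateLiveness(sample, depth_map))
    {
        case pbio::DepthLivenessEstimator::REAL:
            std::cout << "real face\n"; break;
        case pbio::DepthLivenessEstimator::FAKE:
            std::cout << "spoof attempt\n"; break;
        case pbio::DepthLivenessEstimator::NOT_ENOUGH_DATA:
            std::cout << "not enough depth data\n"; break;
    }
}
```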
To estimate liveness using an infrared image from a camera, you should create the
IRLivenessEstimator object using the
FacerecService.createIRLivenessEstimator method. Currently, only one configuration file is available – ir_liveness_estimator_cnn.xml (implementation based on neural networks). To use this algorithm, you have to get color frames from the camera in addition to the IR frames.
To get an estimated result, you can call the
IRLivenessEstimator.estimateLiveness method. You will get one of the following results:
IRLivenessEstimator.Liveness.NOT_ENOUGH_DATA – too many missing values in the IR image.
IRLivenessEstimator.Liveness.REAL – the observed face belongs to a living person.
IRLivenessEstimator.Liveness.FAKE – the observed face is taken from a photo.
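The flow mirrors the depth-based case (a sketch; the pbio::IRFrameRaw construction is omitted and the exact signature should be checked against the API reference):

```cpp
#include <iostream>

#include <facerec/import.h>
#include <facerec/libfacerec.h>

// Sketch: `sample` was detected on the color frame, `ir_frame` is the
// corresponding IR frame.
void checkIRLiveness(const pbio::FacerecService::Ptr service,
                     pbio::RawSample& sample,
                     const pbio::IRFrameRaw& ir_frame)
{
    const pbio::IRLivenessEstimator::Ptr estimator =
        service->createIRLivenessEstimator("ir_liveness_estimator_cnn.xml");

    const pbio::IRLivenessEstimator::Liveness verdict =
        estimator->estimateLiveness(sample, ir_frame);

    if (verdict == pbio::IRLivenessEstimator::Liveness::REAL)
        std::cout << "real face" << std::endl;
}
```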
To estimate liveness with an RGB image, you should create the
Liveness2DEstimator object using the
FacerecService.createLiveness2DEstimator method. Currently, two configuration files are available:
liveness_2d_estimator.xml – the first implementation (not recommended; used only for backward compatibility)
liveness_2d_estimator_v2.xml (recommended) – an accelerated and improved version of the module
Two methods can be used to obtain the estimation result:
Liveness2DEstimator.estimateLiveness. This method returns a
Liveness2DEstimator.Liveness object. The result will be one of the following:
Liveness2DEstimator.Liveness.NOT_ENOUGH_DATA – not enough data to make a decision
Liveness2DEstimator.Liveness.REAL – the observed face belongs to a living person
Liveness2DEstimator.Liveness.FAKE – the observed face is taken from a photo
Liveness2DEstimator.estimate. This method returns a
Liveness2DEstimator.LivenessAndScore object that contains the following fields:
liveness – an object of the Liveness2DEstimator.Liveness class/structure (see above)
score – the probability that the face belongs to a living person (for liveness_2d_estimator.xml this field is not available; a value of 0 or 1 is returned depending on the value of the liveness field)
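Both variants in a minimal C++ sketch (same assumptions as in the earlier examples):

```cpp
#include <iostream>

#include <facerec/import.h>
#include <facerec/libfacerec.h>

// Sketch: 2D (RGB-only) liveness for a captured sample.
void check2DLiveness(const pbio::FacerecService::Ptr service,
                     pbio::RawSample& sample)
{
    const pbio::Liveness2DEstimator::Ptr estimator =
        service->createLiveness2DEstimator("liveness_2d_estimator_v2.xml");

    // Variant 1: verdict only.
    const pbio::Liveness2DEstimator::Liveness verdict =
        estimator->estimateLiveness(sample);
    if (verdict == pbio::Liveness2DEstimator::Liveness::FAKE)
        std::cout << "spoof attempt\n";

    // Variant 2: verdict plus a confidence score.
    const pbio::Liveness2DEstimator::LivenessAndScore result =
        estimator->estimate(sample);
    if (result.liveness == pbio::Liveness2DEstimator::Liveness::REAL)
        std::cout << "real face, score: " << result.score << std::endl;
}
```

Both variants carry the same decision; use estimate when you also need the score.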
The LivenessEstimator object in the Face SDK C++/C#/Java API is deprecated.
|Version|Core i7 4.5 GHz (Single-Core)|Google Pixel 3|
|---|---|---|
|liveness_2d_estimator.xml|250|126 (GPU) / 550 (CPU)|
|Dataset|Accuracy|
|---|---|
|CASIA Face Anti-spoofing|0.99|
To estimate emotions, create the
EmotionsEstimator object using
FacerecService.createEmotionsEstimator and passing the configuration file. Currently, there is only one configuration file, which is emotions_estimator.xml. With the
EmotionsEstimator object you can estimate the emotion of a captured face using the
EmotionsEstimator.estimateEmotions method. The result is a vector of
EmotionsEstimator.EmotionConfidence elements, each containing an emotion and its confidence value. See the example of using the
EmotionsEstimator object in demo.cpp.
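A minimal sketch (same assumptions as above; the EmotionConfidence field names follow the SDK samples):

```cpp
#include <iostream>
#include <vector>

#include <facerec/import.h>
#include <facerec/libfacerec.h>

// Sketch: per-emotion confidences for a captured sample.
void printEmotions(const pbio::FacerecService::Ptr service,
                   const pbio::RawSample& sample)
{
    const pbio::EmotionsEstimator::Ptr estimator =
        service->createEmotionsEstimator("emotions_estimator.xml");

    const std::vector<pbio::EmotionsEstimator::EmotionConfidence> emotions =
        estimator->estimateEmotions(sample);

    // Each element pairs an emotion id with a confidence value.
    for (size_t i = 0; i < emotions.size(); ++i)
        std::cout << "emotion " << emotions[i].emotion
                  << ": " << emotions[i].confidence << "\n";
}
```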
To check the presence of a mask on a face, the
FaceAttributesEstimator module and the
face_mask_estimator.xml configuration file are available. For estimation, call the
FaceAttributesEstimator.estimate(RawSample) method. The result of the estimation is an Attribute object that contains the following fields:
score – the probability that there's a mask on the face, a value from 0 to 1
verdict – whether there's a mask on the face, a boolean value (true or false)
mask_attribute – an object of the FaceAttributesEstimator.FaceAttributes.Attribute class/structure, which takes one of the following values:
NOT_COMPUTED – the attribute was not estimated
NO_MASK – there's no mask on the face
HAS_MASK – there's a mask on the face
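A minimal sketch of the mask check. The createFaceAttributesEstimator factory name is assumed by analogy with the other create*Estimator methods and should be checked against the API reference:

```cpp
#include <iostream>

#include <facerec/import.h>
#include <facerec/libfacerec.h>

// Sketch: mask presence check for a captured sample. The factory method
// name is assumed by analogy with the other create*Estimator methods.
void checkMask(const pbio::FacerecService::Ptr service,
               pbio::RawSample& sample)
{
    const pbio::FaceAttributesEstimator::Ptr estimator =
        service->createFaceAttributesEstimator("face_mask_estimator.xml");

    const pbio::FaceAttributesEstimator::Attribute attribute =
        estimator->estimate(sample);

    std::cout << "mask score: " << attribute.score << "\n"
              << "has mask: " << (attribute.verdict ? "yes" : "no")
              << std::endl;
}
```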