- Age & Gender
- Liveness (2D and 3D)
- Presence of a Mask on the Face
- State of the eyes (open/closed)
For age and gender estimation, create the AgeGenderEstimator object by calling the FacerecService.createAgeGenderEstimator method and passing the configuration file.
Currently, two configuration files are available:
- age_gender_estimator.xml – The first implementation of the AgeGenderEstimator interface.
- age_gender_estimator_v2.xml – An improved version of the AgeGenderEstimator interface that provides higher accuracy of age and gender estimation, provided that you follow the Guidelines for Cameras.
With AgeGenderEstimator you can estimate the age and gender of a captured face using AgeGenderEstimator.estimateAgeGender. The result is the AgeGenderEstimator.AgeGender structure, which contains the age (in years), the age group (AgeGenderEstimator.Age), and the gender (AgeGenderEstimator.Gender). See the example of using AgeGenderEstimator in demo.cpp.
You can learn how to estimate Age & Gender in an image in our tutorial Estimating Age, Gender, and Emotions.
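For illustration, here is a minimal C++ sketch of this flow, in the spirit of demo.cpp. It assumes an already created FacerecService instance (`service`) and a face captured into a RawSample (`sample`); the header path and the `age_years`/`GENDER_MALE` names follow the general C++ API pattern and may differ between SDK versions.

```cpp
#include <iostream>
#include <facerec/libfacerec.h>

// Sketch only: `service` and `sample` are assumed to be created
// beforehand, as shown in demo.cpp.
void printAgeGender(const pbio::FacerecService::Ptr& service,
                    const pbio::RawSample& sample)
{
    const pbio::AgeGenderEstimator::Ptr estimator =
        service->createAgeGenderEstimator("age_gender_estimator_v2.xml");

    const pbio::AgeGenderEstimator::AgeGender result =
        estimator->estimateAgeGender(sample);

    std::cout << "age (years): " << result.age_years << "\n"
              << "male: "
              << (result.gender == pbio::AgeGenderEstimator::GENDER_MALE)
              << std::endl;
}
```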
At the moment, there are two quality estimation interfaces:
- QualityEstimator provides a discrete grade of quality for flare, lighting, noise, and sharpness.
- FaceQualityEstimator provides quality as a single real value that aggregates sample usability for face recognition (i.e. pose, occlusion, noise, blur, and lighting), which is very useful for comparing samples of one person from video tracking.
To create the QualityEstimator object, call the FacerecService.createQualityEstimator method, passing the configuration file. Currently, two configuration files are available:
- quality_estimator.xml – The first implementation of the QualityEstimator interface.
- quality_estimator_iso.xml (recommended) – An improved version of the QualityEstimator interface that provides higher accuracy of quality estimation.
With QualityEstimator you can estimate the quality of a captured face using QualityEstimator.estimateQuality. The result is the QualityEstimator.Quality structure, which contains the estimated flare, lighting, noise, and sharpness levels. See the example of using QualityEstimator in demo.cpp.
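A hedged sketch of the same call sequence (same assumptions as in the age and gender example above; the Quality field names mirror the list in this section but may vary by SDK version):

```cpp
#include <iostream>
#include <facerec/libfacerec.h>

// Sketch only: `service` and `sample` are assumed to exist (see demo.cpp).
void printQuality(const pbio::FacerecService::Ptr& service,
                  const pbio::RawSample& sample)
{
    const pbio::QualityEstimator::Ptr estimator =
        service->createQualityEstimator("quality_estimator_iso.xml");

    const pbio::QualityEstimator::Quality quality =
        estimator->estimateQuality(sample);

    // Each field holds a discrete quality grade.
    std::cout << "lighting: "  << quality.lighting  << "\n"
              << "noise: "     << quality.noise     << "\n"
              << "sharpness: " << quality.sharpness << "\n"
              << "flare: "     << quality.flare     << std::endl;
}
```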
To create the FaceQualityEstimator object, call the FacerecService.createFaceQualityEstimator method, passing the configuration file. Currently, only one configuration file is available: face_quality_estimator.xml. With FaceQualityEstimator you can estimate the quality of a captured face using FaceQualityEstimator.estimateQuality. The result is a real number (the greater the number, the higher the quality) that aggregates sample usability for face recognition. See the example of using FaceQualityEstimator in demo.cpp.
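A minimal sketch of the call (same assumptions as above):

```cpp
#include <iostream>
#include <facerec/libfacerec.h>

// Sketch only: `service` and `sample` are assumed to exist (see demo.cpp).
void printFaceQuality(const pbio::FacerecService::Ptr& service,
                      const pbio::RawSample& sample)
{
    const pbio::FaceQualityEstimator::Ptr estimator =
        service->createFaceQualityEstimator("face_quality_estimator.xml");

    // A single real value: the greater the number, the higher the quality.
    const float quality = estimator->estimateQuality(sample);
    std::cout << "face quality: " << quality << std::endl;
}
```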
The main purpose of liveness estimation is to prevent spoofing attacks (using a photo of a person instead of a real face). Currently, you can estimate liveness in three ways: by processing a depth map, an IR image, or an RGB image from your camera. You can also use Active Liveness, which requires the user to perform a sequence of certain actions.
You can learn how to estimate liveness of a face in our tutorial Liveness Detection.
To estimate liveness with a depth map, create the DepthLivenessEstimator object using the FacerecService.createDepthLivenessEstimator method. The following configuration files are available:
- depth_liveness_estimator.xml – The first implementation (not recommended; kept only for backward compatibility).
- depth_liveness_estimator_cnn.xml – An implementation based on neural networks (recommended; used in VideoWorker).

To use this algorithm, you need synchronized and registered frames (a color image and a depth map): use the color image for face tracking/detection, and pass the corresponding depth map to the DepthLivenessEstimator.
To get an estimation result, call the DepthLivenessEstimator.estimateLiveness method. You'll get one of the following results:
- DepthLivenessEstimator.Liveness.NOT_ENOUGH_DATA – Too many missing depth values on the depth map.
- DepthLivenessEstimator.Liveness.REAL – The observed face belongs to a living person.
- DepthLivenessEstimator.Liveness.FAKE – The observed face is taken from a photo.
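A hedged sketch of this check; the pbio::DepthMapRaw parameter type and the enum scoping are assumptions based on the SDK's naming pattern:

```cpp
#include <iostream>
#include <facerec/libfacerec.h>

// Sketch only: `sample` is captured from the color image, and `depth_map`
// is the registered depth frame corresponding to it.
void checkDepthLiveness(const pbio::FacerecService::Ptr& service,
                        const pbio::RawSample& sample,
                        const pbio::DepthMapRaw& depth_map)
{
    const pbio::DepthLivenessEstimator::Ptr estimator =
        service->createDepthLivenessEstimator("depth_liveness_estimator_cnn.xml");

    const pbio::DepthLivenessEstimator::Liveness result =
        estimator->estimateLiveness(sample, depth_map);

    if (result == pbio::DepthLivenessEstimator::Liveness::REAL)
        std::cout << "real face" << std::endl;
    else if (result == pbio::DepthLivenessEstimator::Liveness::FAKE)
        std::cout << "spoofing attempt" << std::endl;
    else
        std::cout << "not enough depth data" << std::endl;
}
```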
To estimate liveness using an infrared image from a camera, create the IRLivenessEstimator object using the FacerecService.createIRLivenessEstimator method. Currently, only one configuration file is available: ir_liveness_estimator_cnn.xml (an implementation based on neural networks). To use this algorithm, get color frames from the camera in addition to the IR frames.
To get an estimation result, call the IRLivenessEstimator.estimateLiveness method. You'll get one of the following results:
- IRLivenessEstimator.Liveness.NOT_ENOUGH_DATA – Too many missing values in the IR image.
- IRLivenessEstimator.Liveness.REAL – The observed face belongs to a living person.
- IRLivenessEstimator.Liveness.FAKE – The observed face is taken from a photo.
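A similar hedged sketch for the IR case; the pbio::IRFrameRaw parameter type is an assumption based on the SDK's naming pattern:

```cpp
#include <iostream>
#include <facerec/libfacerec.h>

// Sketch only: `sample` is captured from the color image, and `ir_frame`
// is the IR frame corresponding to it.
void checkIRLiveness(const pbio::FacerecService::Ptr& service,
                     const pbio::RawSample& sample,
                     const pbio::IRFrameRaw& ir_frame)
{
    const pbio::IRLivenessEstimator::Ptr estimator =
        service->createIRLivenessEstimator("ir_liveness_estimator_cnn.xml");

    const pbio::IRLivenessEstimator::Liveness result =
        estimator->estimateLiveness(sample, ir_frame);

    std::cout << (result == pbio::IRLivenessEstimator::Liveness::REAL
                      ? "real face"
                      : "fake or not enough data")
              << std::endl;
}
```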
To estimate liveness with an RGB image, create the Liveness2DEstimator object using the FacerecService.createLiveness2DEstimator method. Currently, two configuration files are available:
- liveness_2d_estimator.xml – The first implementation (not recommended; kept only for backward compatibility).
- liveness_2d_estimator_v2.xml – An accelerated and improved version of the module (recommended).
Two methods can be used to obtain the estimation result:
- Liveness2DEstimator.estimateLiveness. This method returns a Liveness2DEstimator.Liveness object. The result will be one of the following:
  - Liveness2DEstimator.Liveness.NOT_ENOUGH_DATA – Not enough data to make a decision.
  - Liveness2DEstimator.Liveness.REAL – The observed face belongs to a living person.
  - Liveness2DEstimator.Liveness.FAKE – The observed face is taken from a photo.
- Liveness2DEstimator.estimate. This method returns a Liveness2DEstimator.LivenessAndScore object that contains the following fields (see the sketch after this list):
  - liveness – An object of the Liveness2DEstimator.Liveness class/structure (see above).
  - score – The probability that the face belongs to a living person (for liveness_2d_estimator.xml this field is not available: a value of 0 or 1 is returned depending on the value of the liveness field).
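A hedged sketch of the second method (estimate), under the same assumptions as the earlier sketches:

```cpp
#include <iostream>
#include <facerec/libfacerec.h>

// Sketch only: `service` and `sample` are assumed to exist (see demo.cpp).
void check2DLiveness(const pbio::FacerecService::Ptr& service,
                     const pbio::RawSample& sample)
{
    const pbio::Liveness2DEstimator::Ptr estimator =
        service->createLiveness2DEstimator("liveness_2d_estimator_v2.xml");

    // estimate() returns both the verdict and the score.
    const pbio::Liveness2DEstimator::LivenessAndScore result =
        estimator->estimate(sample);

    std::cout << "score: " << result.score << "\n"
              << "real: "
              << (result.liveness == pbio::Liveness2DEstimator::Liveness::REAL)
              << std::endl;
}
```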
The LivenessEstimator object in the Face SDK C++/C#/Java API is deprecated.
Timing characteristics (ms):

| Version | Core i7 4.5 GHz (single-core) | Google Pixel 3 |
|---------|-------------------------------|----------------|
| liveness_2d_estimator.xml | 250 | 126 (GPU) / 550 (CPU) |

Accuracy:

| Test dataset | Accuracy |
|--------------|----------|
| CASIA Face Anti-spoofing | 0.99 |
This type of liveness estimation requires a user to perform certain actions, for example, turn the head, blink, etc. Estimation is performed with the VideoWorker object based on the video stream. See the detailed description in Video Stream Processing.
To estimate emotions, create the EmotionsEstimator object using FacerecService.createEmotionsEstimator and pass the configuration file. Currently, there is only one configuration file: emotions_estimator.xml. With the EmotionsEstimator object you can estimate the emotions of a captured face using the EmotionsEstimator.estimateEmotions method. The result is a vector of EmotionsEstimator.EmotionConfidence elements, each containing an emotion with a confidence value. See the example of using the EmotionsEstimator object in demo.cpp.
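A minimal sketch (same assumptions as above; the emotion and confidence field names follow the EmotionConfidence description and may differ per version):

```cpp
#include <iostream>
#include <vector>
#include <facerec/libfacerec.h>

// Sketch only: `service` and `sample` are assumed to exist (see demo.cpp).
void printEmotions(const pbio::FacerecService::Ptr& service,
                   const pbio::RawSample& sample)
{
    const pbio::EmotionsEstimator::Ptr estimator =
        service->createEmotionsEstimator("emotions_estimator.xml");

    const std::vector<pbio::EmotionsEstimator::EmotionConfidence> emotions =
        estimator->estimateEmotions(sample);

    // Each element pairs an emotion with its confidence value.
    for (const auto& ec : emotions)
        std::cout << "emotion #" << ec.emotion
                  << " confidence: " << ec.confidence << "\n";
}
```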
FaceAttributesEstimator is a universal module for estimating face attributes. To get a score, call the FaceAttributesEstimator.estimate(RawSample) method. The estimation result is an Attribute object that contains the following fields:
- score – The probability that a person has the required attribute, a value from 0 to 1 (if the value is set to -1, this field is not available for the specified type of estimation).
- verdict – Whether a person has the required attribute, a boolean value (true/false).
- mask – An object of the FaceAttributesEstimator.FaceAttributes.Attribute class/structure, which contains the following values:
  - NOT_COMPUTED – No estimation was made.
  - NO_MASK – A face without a mask.
  - HAS_MASK – A face with a mask.
- left_eye_state, right_eye_state – Objects of the FaceAttributesEstimator.FaceAttributes.EyeStateScore class/structure, which contain the score attribute and the EyeState structure with the following values:
  - NOT_COMPUTED – No estimation was made.
  - CLOSED – The eye is closed.
  - OPENED – The eye is open.
To check the presence of a mask on the face, use the FaceAttributesEstimator with the face_mask_estimator.xml configuration file. This returns the mask attribute in the resulting Attribute object.
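A hedged sketch of the mask check (same assumptions as above; the createFaceAttributesEstimator creation call follows the service's naming pattern, and the score and verdict fields follow the Attribute description earlier in this section):

```cpp
#include <iostream>
#include <facerec/libfacerec.h>

// Sketch only: `service` and `sample` are assumed to exist (see demo.cpp).
void checkMask(const pbio::FacerecService::Ptr& service,
               const pbio::RawSample& sample)
{
    const pbio::FaceAttributesEstimator::Ptr estimator =
        service->createFaceAttributesEstimator("face_mask_estimator.xml");

    const pbio::FaceAttributesEstimator::Attribute attr =
        estimator->estimate(sample);

    // `verdict` is the boolean decision, `score` the probability.
    std::cout << "has mask: " << attr.verdict
              << " (score: " << attr.score << ")" << std::endl;
}
```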
To check the state of the eyes (open/closed), use the FaceAttributesEstimator with the eyes_openness_estimator.xml configuration file. This returns the left_eye_state and right_eye_state attributes in the resulting Attribute object.
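A hedged sketch of the eye-state check; the nested eye_state field name and the enum scoping are assumptions based on the structure described above:

```cpp
#include <iostream>
#include <facerec/libfacerec.h>

// Sketch only: `service` and `sample` are assumed to exist (see demo.cpp).
void checkEyes(const pbio::FacerecService::Ptr& service,
               const pbio::RawSample& sample)
{
    const pbio::FaceAttributesEstimator::Ptr estimator =
        service->createFaceAttributesEstimator("eyes_openness_estimator.xml");

    const pbio::FaceAttributesEstimator::Attribute attr =
        estimator->estimate(sample);

    // `eye_state` is assumed to hold the EyeState value described above.
    const bool right_open =
        attr.right_eye_state.eye_state ==
        pbio::FaceAttributesEstimator::FaceAttributes::EyeStateScore::OPENED;

    std::cout << "right eye open: " << right_open
              << " (score: " << attr.right_eye_state.score << ")"
              << std::endl;
}
```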