In this section, you'll learn how to integrate the Liveness Estimator into your C++ project.
1.1. To create a Liveness Estimator, follow steps 1-3 described in Creating a Processing Block and specify the value "LIVENESS_ESTIMATOR" for the "unit_type" key. When creating a Liveness Estimator, you can leave out the value for the "model_path" key or pass an empty string.
1.2. Create a Liveness Estimator Processing Block:
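As a rough sketch, steps 1.1-1.2 could look like the following. The `service` instance and the method names `createContext` and `createProcessingBlock` are assumptions based on the Face SDK Processing Block API; verify them against your SDK version.

```cpp
// Sketch: creating a Liveness Estimator Processing Block.
// Assumes a FacerecService instance `service` has already been created
// as described in the Face SDK documentation (hypothetical setup).
auto configCtx = service->createContext();
configCtx["unit_type"] = "LIVENESS_ESTIMATOR";
// "model_path" can be omitted or passed as an empty string for this unit type.
pbio::ProcessingBlock livenessEstimator = service->createProcessingBlock(configCtx);
```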
2.1. Create a Context container ioData for input-output data using the createContext() method.
2.2. Create a Context container imgCtx with an RGB image, following the steps described in Creating a Context container with RGB-image.
2.3. Put the input image into the input-output data container:
2.4. Call the livenessEstimator and pass it the Context container with the source image.
Accurate estimation requires exactly one face in the frame, looking at the camera; otherwise, the "MULTIPLE_FACE_FRAMED" status will be returned. If multiple faces are captured, only one of them (order is not guaranteed) will be processed.
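A minimal sketch of steps 2.1-2.4, assuming `service` and `livenessEstimator` were created as in step 1 and `imgCtx` already holds the RGB image. The "image" key name is an assumption drawn from the Processing Block convention; check it against your SDK version.

```cpp
// Sketch: preparing input-output data and running the estimator.
auto ioData = service->createContext();   // 2.1: input-output Context container
// 2.2: imgCtx is assumed to be created per "Creating a Context container
// with RGB-image" (not shown here).
ioData["image"] = imgCtx;                 // 2.3: put the input image into ioData
livenessEstimator(ioData);                // 2.4: run the Liveness Estimator
```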
The result of calling livenessEstimator() will be appended to the ioData container.
The output data is presented as a list of objects available by the "objects" key. Each object in the list has the "class" key with the "face" value.
The "liveness" key contains a Context with 2 elements:
"value" key contains a value of type string that matches one of the pbio::Liveness2DEstimator::Liveness states
"confidence" key contains a number of type double in the range [0,1]
Liveness Estimator doesn't support GPU acceleration.