# Flutter
The Face SDK provides a Flutter plugin that allows you to implement the following features:
- Face detection in photos
- Face tracking in video
- Active Liveness checking
- Face verification
The plugin is developed for iOS and Android devices.
note
A Flutter sample using the Face SDK plugin is available in the examples/flutter/demo directory of the Face SDK distribution.
## Connecting the Face SDK plugin to a Flutter project
### Requirements
- Flutter >=2.2 and <=2.8.1
- Android Studio for Android or Xcode for iOS
### Plugin connection
To connect the Face SDK to a Flutter project, install the "flutter" component using the Face SDK installer or the maintenancetool utility:
- If the Face SDK is not installed, follow the installation instructions in Getting Started and select the "flutter" component in the "Selection Components" section.
- If the Face SDK is installed without the "flutter" component (the flutter directory is not present in the Face SDK root directory), run the maintenancetool utility and install the "flutter" component by selecting it in the "Selection Components" section.
1. Add the plugins to the project dependencies in the file <project_dir>/pubspec.yaml:
- face_sdk_3divi, specifying the path to the plugin directory in the path field
- path_provider version 2.0.0 or higher
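A sketch of the resulting dependencies section, assuming the plugin sources live in the flutter/face_sdk_3divi directory of the Face SDK distribution (adjust the path to your layout):

```yaml
dependencies:
  flutter:
    sdk: flutter
  face_sdk_3divi:
    path: /path/to/face_sdk/flutter/face_sdk_3divi  # path to the plugin directory
  path_provider: ^2.0.0
```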
2. Add the libfacerec.so library to the dependencies:
2.a. For an Android device:
- specify the path to the directory containing the libfacerec.so library in the sourceSets block of the build.gradle file (<project_dir>/android/app/build.gradle)
- add loading of the native library inside MainActivity.java (<project_dir>/android/app/src/main/java/<android_app_name>/MainActivity.java), as shown in the sketch below
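A sketch of both changes; the relative library path is an assumption, point it at the directory of the Face SDK distribution that contains libfacerec.so for your ABI:

```gradle
// <project_dir>/android/app/build.gradle
android {
    sourceSets {
        main {
            // directory with the libfacerec.so library (path is illustrative)
            jniLibs.srcDirs += ['/path/to/face_sdk/lib']
        }
    }
}
```

```java
// <project_dir>/android/app/src/main/java/<android_app_name>/MainActivity.java
package com.example.app; // use your application's package

import io.flutter.embedding.android.FlutterActivity;

public class MainActivity extends FlutterActivity {
    static {
        // load the Face SDK native library on startup
        System.loadLibrary("facerec");
    }
}
```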
2.b. For an iOS device:
- open ios/Runner.xcworkspace in Xcode
- in the Target Navigator select "Runner", go to the "General" tab, find the "Frameworks, Libraries, and Embedded Content" section and click "+". In the opened window, choose "Add Other..." > "Add Files" and select facerec.framework in Finder
- remove facerec.framework from the "Link Binary With Libraries" section of the "Build Phases" tab
3. Add directories and files from the Face SDK distribution to the application assets:
- Create the directory <project_dir>/assets (if not present)
- Copy the lib directory from the flutter directory to <project_dir>/assets
- Copy the required files from the conf and share directories to <project_dir>/assets/conf and <project_dir>/assets/share
- Create the directory <project_dir>/assets/license
- Copy the license file 3divi_face_sdk.lic to the directory <project_dir>/assets/license
4. Specify the list of directories and files in <project_dir>/pubspec.yaml, for example:
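A sketch of the flutter/assets section; the exact entries depend on which conf and share files you copied:

```yaml
flutter:
  assets:
    - assets/lib/
    - assets/conf/
    - assets/share/
    - assets/license/
    # list every nested directory that contains files (see the note below)
```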
note
Flutter does not copy directories recursively, so each directory that contains files must be specified separately.
5. Add a function that copies the assets to the internal memory of the application (this is required for the Face SDK to work correctly). dataDir below is the directory to which the conf, share and license folders from the Face SDK distribution are copied.
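A minimal sketch of such a copying function, assuming all bundled files live under assets/; the helper name copyAssetsToDataDir is ours, and the demo application may implement this differently:

```dart
import 'dart:convert';
import 'dart:io';

import 'package:flutter/services.dart' show rootBundle;
import 'package:path_provider/path_provider.dart';

// Copies every bundled asset under assets/ into the application's internal
// storage and returns the resulting dataDir path.
Future<String> copyAssetsToDataDir() async {
  final docDir = await getApplicationDocumentsDirectory();
  final manifest = json.decode(
          await rootBundle.loadString('AssetManifest.json'))
      as Map<String, dynamic>;
  for (final asset in manifest.keys.where((p) => p.startsWith('assets/'))) {
    final data = await rootBundle.load(asset);
    // strip the leading "assets/" so conf, share and license land in dataDir
    final file = File('${docDir.path}/${asset.substring('assets/'.length)}');
    await file.create(recursive: true);
    await file.writeAsBytes(
        data.buffer.asUint8List(data.offsetInBytes, data.lengthInBytes));
  }
  return docDir.path;
}
```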
6. Add the import of the face_sdk_3divi module, as well as the necessary additional modules, to the application, for example:
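A sketch of typical imports; the exact set depends on your application, and the camera import assumes the camera plugin is used for the video examples below:

```dart
import 'package:camera/camera.dart';
import 'package:face_sdk_3divi/face_sdk_3divi.dart';
import 'package:path_provider/path_provider.dart';
```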
## Working with the plugin
Working with the plugin begins with initializing the FacerecService, which is then used to create the other Face SDK primitives for face detection, tracking and comparison.
An example of initializing the FacerecService object in the main() function:
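A minimal sketch; the factory name FaceSdkPlugin.createFacerecService and its argument list are assumptions, so check the plugin sources for the exact call:

```dart
import 'package:flutter/material.dart';
import 'package:face_sdk_3divi/face_sdk_3divi.dart';

late FacerecService _facerecService;

Future<void> main() async {
  WidgetsFlutterBinding.ensureInitialized();
  // copy conf, share and license into internal memory (see the step above)
  final dataDir = await copyAssetsToDataDir();
  // factory name and arguments are assumptions for illustration
  _facerecService = await FaceSdkPlugin.createFacerecService(
      "$dataDir/conf/facerec", "$dataDir/license");
  runApp(const MyApp()); // MyApp is your application's root widget
}
```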
## Basic primitives
### Config
Working with the Face SDK primitives is based on configuration files. For example, the detector configuration file is common_capturer_uld_fda.xml, and the face tracker configuration file is video_worker_fdatracker_uld_fda.xml.
The Config class is initialized with the name of a configuration file and allows you to override its parameters (for example, the minimum score of detected faces).
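A sketch based on the description above; the overrideParameter method name mirrors the other Face SDK APIs, and the parameter name is illustrative:

```dart
// create a detector config and override one of its parameters
final config = Config("common_capturer_uld_fda.xml");
config.overrideParameter("min_score", 0.5); // parameter name is illustrative
```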
### RawSample
This primitive contains information about a detected face.
## Detecting faces in images
To detect faces in photos, the Face SDK uses the Capturer component. To create this component, call the FacerecService.createCapturer method on the initialized FacerecService object and pass a Config object as an argument:
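For example (a sketch using the service created above):

```dart
// create a face detector from the ULD capturer configuration
final Capturer capturer = _facerecService.createCapturer(
    Config("common_capturer_uld_fda.xml"));
```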
To get detections, use the Capturer.capture method, which takes an encoded image:
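A sketch, assuming capture accepts the raw bytes of an encoded image (JPEG or PNG):

```dart
import 'dart:io';
import 'dart:typed_data';

// read an encoded image from disk and detect the faces in it
Future<List<RawSample>> detectFaces(String imagePath) async {
  final Uint8List imageBytes = await File(imagePath).readAsBytes();
  return capturer.capture(imageBytes);
}
```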
As a result, a list of RawSample objects is returned, with each element describing a separate detected face. If no faces are found in the image, the list is empty.
To get detections from the device camera, use the CameraController.takePicture method, which saves an image to the device memory. The image therefore has to be loaded first (the saved image can be deleted later):
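A sketch of this flow with the camera plugin:

```dart
// take a picture, load it back from storage, detect faces, then clean up
Future<List<RawSample>> detectFromCamera(
    CameraController cameraController) async {
  final XFile photo = await cameraController.takePicture();
  final Uint8List bytes = await photo.readAsBytes();
  final samples = capturer.capture(bytes);
  await File(photo.path).delete(); // the saved image is no longer needed
  return samples;
}
```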
More information about the CameraController, and about using it in Flutter, can be found in the camera plugin documentation.
To cut a face from an image, cutFaceFromImageBytes can be used:
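A sketch; the exact arguments of cutFaceFromImageBytes (here the encoded image bytes and the detection's bounding rectangle) are assumptions:

```dart
// crop the first detected face out of the encoded image
final faceCrop = cutFaceFromImageBytes(bytes, samples[0].getRectangle());
```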
note
An example of a widget that uses a Capturer object to detect faces through the device camera can be found in examples/flutter/demo/lib/photo.dart.
## Face tracking on a video sequence and Active Liveness
To track faces and perform an Active Liveness check on a video sequence, the VideoWorker object is used.
Procedure for using the VideoWorker object:
1. Create a VideoWorker object.
2. Get frames from the camera (for example, through cameraController.startImageStream), then pass them directly to the VideoWorker via the VideoWorker.addVideoFrame method, or save the frames to a variable and call VideoWorker.addVideoFrame later (for example, wrapped in a looped StreamBuilder function).
3. Get the processing results from the VideoWorker by calling the VideoWorker.poolTrackResults function.
### 1. Creating a VideoWorker object
Use a VideoWorker object to track faces on a video sequence and perform an Active Liveness check.
To create a VideoWorker, call the FacerecService.createVideoWorker method, which takes a VideoWorkerParams structure containing the initialization parameters:
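A sketch; the VideoWorkerParams field names follow the parameter names used in this document, and the check-type enum name is an assumption:

```dart
// configure tracking plus an Active Liveness scenario (enum name assumed)
final params = VideoWorkerParams()
  ..video_worker_config = Config("video_worker_fdatracker_uld_fda.xml")
  ..active_liveness_checks_order = [
    ActiveLivenessCheckType.SMILE,
    ActiveLivenessCheckType.TURN_LEFT,
    ActiveLivenessCheckType.TURN_RIGHT,
  ];
final VideoWorker videoWorker = _facerecService.createVideoWorker(params);
```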
The set of Active Liveness checks is defined when the active_liveness_checks_order property is initialized: it is passed a list of actions, i.e. the scenario of the check (an example is given above).
Available Active Liveness checks:
- TURN_LEFT
- SMILE
- TURN_DOWN
- TURN_RIGHT
- TURN_UP
- BLINK
When using a video sequence from a camera, take into account the base angle of rotation of the camera image (for example, CameraController.description.sensorOrientation). It is not necessary to rotate the images for the VideoWorker, but the base_angle parameter must be set in accordance with the rotation of the camera: for sensorOrientation == 90, set the baseAngle parameter to 1; for sensorOrientation == 270, set it to 2.
note
The front camera image on iOS devices is mirrored horizontally; in this case, set the value "1" for the active_liveness.apply_horizontal_flip parameter.
Example of selecting the base angle:
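A sketch implementing the mapping described above (orientations other than 90 and 270 are assumed to need no rotation):

```dart
// map the camera sensor orientation to the VideoWorker base angle
int baseAngle;
switch (cameraController.description.sensorOrientation) {
  case 90:
    baseAngle = 1;
    break;
  case 270:
    baseAngle = 2;
    break;
  default:
    baseAngle = 0;
}
```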
### 2. Video frame processing in VideoWorker
To process a video sequence, frames have to be transferred to the VideoWorker using the VideoWorker.addVideoFrame method. The VideoWorker accepts frames as a RawImageF pixel array.
Supported color models: RGB, BGR, YUV.
Frames can be retrieved through the ImageStream callback:
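For example, with the camera plugin:

```dart
// receive frames continuously from the camera
cameraController.startImageStream((CameraImage image) {
  // convert the frame and pass it to the VideoWorker (see below)
});
```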
Images can be converted for transfer to `VideoWorker` using the integrated functions `convertRAW` and `convertBGRA8888`.
Example of calling addVideoFrame:
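A sketch; the signatures of convertRAW and addVideoFrame (frame plus timestamp) are assumptions:

```dart
// direct transfer: convert each CameraImage and pass it to the VideoWorker
cameraController.startImageStream((CameraImage image) {
  final RawImageF frame = convertRAW(image.planes, image.width, image.height);
  videoWorker.addVideoFrame(frame, DateTime.now().microsecondsSinceEpoch);
});
```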
To let the ImageStream and the VideoWorker work independently (the call to addVideoFrame should not block the video stream), a StreamBuilder can be used to call the addVideoFrame function asynchronously.
Example of calling addVideoFrame with StreamBuilder
Image stream callback (saving the image and timestamp to global variables):
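A sketch (the global-variable approach follows the description above):

```dart
// latest frame from the image stream and its timestamp
CameraImage? _lastImage;
int _lastTimestampUs = 0;

void onFrame(CameraImage image) {
  _lastImage = image;
  _lastTimestampUs = DateTime.now().microsecondsSinceEpoch;
}
```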
Asynchronous function for transferring frames to the VideoWorker:
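A sketch; the convertRAW and addVideoFrame signatures are assumed as above:

```dart
// forward the most recently saved frame to the VideoWorker
Future<bool> addVF() async {
  final image = _lastImage;
  if (image != null) {
    final frame = convertRAW(image.planes, image.width, image.height);
    videoWorker.addVideoFrame(frame, _lastTimestampUs);
  }
  return true;
}
```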
Widget (can be combined with any other):
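A sketch of a widget that keeps calling addVF without blocking the image stream (the polling period is arbitrary):

```dart
// invisible widget that pumps saved frames into the VideoWorker
StreamBuilder(
  stream: Stream.periodic(const Duration(milliseconds: 50))
      .asyncMap((_) => addVF()),
  builder: (context, snapshot) => const SizedBox.shrink(),
)
```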
### 3. Retrieving tracking results
The VideoWorker.poolTrackResults method is used to get the results of the VideoWorker processing. This method returns a structure with data on the currently tracked persons.
The Active Liveness status is contained in TrackingData.tracking_callback_data:
Example of implementing Active Liveness checks
Definition of Active Liveness status:
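A sketch; the samples_active_liveness_status field and its verdict/check_type members mirror the Face SDK APIs on other platforms and are assumptions for the Flutter plugin:

```dart
// derive a user-facing Active Liveness status from the callback data
String activeLivenessStatus(TrackingCallbackData data) {
  if (data.samples_active_liveness_status.isEmpty) return "no face";
  final status = data.samples_active_liveness_status[0];
  if (status.verdict == ActiveLiveness.ALL_CHECKS_PASSED) return "passed";
  if (status.verdict == ActiveLiveness.CHECK_FAIL) return "failed";
  return "perform: ${status.check_type}"; // current check in progress
}
```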
Retrieving tracking results:
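A sketch of polling the results (poolTrackResults may be asynchronous in the plugin):

```dart
// poll the VideoWorker and derive the current Active Liveness status
Future<String> pollResults() async {
  final TrackingData data = videoWorker.poolTrackResults();
  return activeLivenessStatus(data.tracking_callback_data);
}
```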
Widget (can be combined with any other):
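As above, a looped StreamBuilder can drive the polling:

```dart
// widget that polls tracking results periodically and shows the status
StreamBuilder(
  stream: Stream.periodic(const Duration(milliseconds: 100))
      .asyncMap((_) => pollResults()),
  builder: (context, snapshot) => Text(snapshot.data ?? ""),
)
```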
### 4. Retrieving the best shot after completing Active Liveness
To retrieve the best shot of the face, call the VideoWorker.resetTrackerOnStream method after the Active Liveness checks have been passed successfully. This method resets the tracker state and activates LostTrackingData in the VideoWorker. The LostTrackingData callback returns the best face shot, which can be used to create a face template (Template).
The resulting face_template_vw can then be compared with other templates to get a similarity score.
Example of obtaining a face template for its subsequent comparison with the NID
After calling the videoWorker.poolTrackResults function (an example is given above), the best_quality_sample field will be set. You can use it to get a face template:
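A sketch; the Recognizer.processing method name mirrors the other Face SDK APIs and is an assumption here (the Recognizer creation is shown in the verification section below):

```dart
// build a face template from the best-quality sample
// (the location of the best_quality_sample field is assumed)
final Template face_template_vw = recognizer.processing(data.best_quality_sample);
```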
To get a face photo, you also need to save the best CameraImage, updating it whenever a higher-quality shot is obtained:
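A sketch of keeping the best frame (the source of the quality score is whatever your tracking results provide; it is not named in this document):

```dart
// remember the CameraImage that corresponds to the best sample so far
CameraImage? _bestImage;
double _bestQuality = -1;

void updateBestShot(CameraImage image, double quality) {
  if (quality > _bestQuality) {
    _bestQuality = quality;
    _bestImage = image;
  }
}
```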
The cutFaceFromCameraImage method can be used to cut the face from the image:
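A sketch; the exact arguments of cutFaceFromCameraImage (here the saved CameraImage and the sample's bounding rectangle) are assumptions:

```dart
// crop the face from the saved best frame (argument list assumed)
final facePhoto = cutFaceFromCameraImage(
    _bestImage!, data.best_quality_sample.getRectangle());
```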
note
An example of a widget that uses the VideoWorker object and checks Active Liveness through the front camera can be found in examples/flutter/demo/lib/video.dart.
## Verification of faces
The Recognizer object is used to build and compare face templates. This object is created by calling the FacerecService.createRecognizer method, which takes the name of the recognizer configuration file as an argument:
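For example (the configuration file name is illustrative; use the recognizer config shipped with your Face SDK version):

```dart
// create a Recognizer for building and matching face templates
final Recognizer recognizer =
    _facerecService.createRecognizer("method10v30_recognizer.xml");
```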
The order of operations when comparing faces:
- face detection
- building a face template
- comparison of the face template with other templates
An example of the implementation of comparing two faces (it is assumed that all the necessary Face SDK objects have been created and each image has one face):
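A sketch; Recognizer.processing and the score field of the match result mirror the other Face SDK APIs and are assumptions for the Flutter plugin:

```dart
// detect one face in each encoded image and compare their templates
final template1 = recognizer.processing(capturer.capture(imageBytes1)[0]);
final template2 = recognizer.processing(capturer.capture(imageBytes2)[0]);
final match = recognizer.verifyMatch(template1, template2);
print("Similarity score: ${match.score}"); // result field name assumed
```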
## Comparison of the face in the document and the person who passed the Active Liveness check
To compare the face in the document with the person who passed the Active Liveness check, you need to build templates of both faces:
1. Detect the face in the document and build the face_template_idcard template:
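A sketch (one face is assumed in the document image):

```dart
// build a template from the face detected in the document photo
final docSamples = capturer.capture(documentImageBytes);
final Template face_template_idcard = recognizer.processing(docSamples[0]);
```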
2. Get the face template face_template_vw from the VideoWorker object after passing the Active Liveness check (an example is given above).
3. Compare the face_template_idcard and face_template_vw templates using the Recognizer.verifyMatch method:
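A sketch of the final comparison (the score field name is assumed):

```dart
// compare the document template with the live-person template
final match = recognizer.verifyMatch(face_template_idcard, face_template_vw);
print("Similarity: ${match.score}");
```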