Version: 3.16.1 (latest)


Face SDK provides a Flutter plugin that allows you to implement the following features:

  • Face detection in photos
  • Face tracking in video
  • Active Liveness checking
  • Face verification

The plugin supports iOS and Android devices.


A Flutter sample with the Face SDK plugin is available in the examples/flutter/demo directory of the Face SDK distribution.

Connecting the Face SDK plugin to a Flutter project

Requirements:

  • Flutter >=2.2 and <=2.8.1
  • Android Studio for Android or Xcode for iOS

Plugin connection

  1. To connect Face SDK to a Flutter project, install the "flutter" component using the Face SDK installer or the maintenancetool utility:

    • If Face SDK is not installed, follow the installation instructions in Getting Started. The "flutter" component must be selected in the "Selection Components" section.

    • If Face SDK is installed without the "flutter" component (the flutter directory is not present in the Face SDK root directory), run the maintenancetool utility and install the "flutter" component by selecting it in the "Selection Components" section.

  2. Add the plugins to the project dependencies by specifying them in the file <project_dir>/pubspec.yaml:

    • face_sdk_3divi, specifying the path to the plugin directory in the path field

    • path_provider version 2.0.0 or higher

      dependencies:
        flutter:
          sdk: flutter
        face_sdk_3divi:
          path: ../flutter/face_sdk_3divi
        path_provider: "^2.0.0"
  3. Add the native library to the project dependencies.

    3.a. For Android devices:

    • specify the path to the directory with the library in the sourceSets block of the build.gradle file (<project_dir>/android/app/build.gradle):

      android {
          sourceSets {
              main {
                  jniLibs.srcDirs = ["${projectDir}/../../assets/lib"]
              }
          }
      }

    • add loading of the native library in <project_dir>/android/app/src/main/java/<android_app_name>/MainActivity.java:

      public class MainActivity extends FlutterActivity {
          static { System.loadLibrary("facerec"); }
      }
    3.b. For iOS devices:

    • open ios/Runner.xcworkspace in Xcode
    • in the Target Navigator select "Runner", go to the "General" tab, find the "Frameworks, Libraries, and Embedded Content" section and click "+". In the opened window choose "Add Other..." -> "Add Files" and select facerec.framework in Finder
    • remove facerec.framework in the "Build Phases" tab, "Link Binary With Libraries" section
  4. Add the directories and files from the Face SDK distribution to the application assets:
    • create the directory <project_dir>/assets (if not present)
    • copy the lib directory from the flutter directory to <project_dir>/assets
    • copy the required files from the conf and share directories to <project_dir>/assets/conf and <project_dir>/assets/share
    • create the directory <project_dir>/assets/license
    • copy the license file 3divi_face_sdk.lic to the directory <project_dir>/assets/license
  5. Specify the list of asset directories and files in <project_dir>/pubspec.yaml, for example:

      flutter:
        assets:
          - assets/conf/facerec/
          - assets/license/3divi_face_sdk.lic
          - assets/share/face_quality/
          - assets/share/faceanalysis/
          - assets/share/facedetectors/blf/
          - assets/share/facedetectors/uld/
          - assets/share/facedetectors/config_lbf/
          - assets/share/facedetectors/config_lbf_noise/
          - assets/share/faceattributes/
          - assets/share/fda/
          - assets/share/facerec/recognizers/method10v30/

    Flutter does not copy directories recursively, so each directory containing files must be specified explicitly.

  6. Add a function that copies the assets to the internal memory of the application (this is required for Face SDK to work correctly):

      late String dataDir;

      Future<String> loadAsset() async {
        final manifestContent = await rootBundle.loadString('AssetManifest.json');
        final Map<String, dynamic> manifestMap = jsonDecode(manifestContent);
        Directory doc_directory = await getApplicationDocumentsDirectory();
        for (String key in manifestMap.keys) {
          var dbPath = doc_directory.path + '/' + key;
          if (FileSystemEntity.typeSync(dbPath) == FileSystemEntityType.notFound) {
            ByteData data = await rootBundle.load(key);
            List<int> bytes =
                data.buffer.asUint8List(data.offsetInBytes, data.lengthInBytes);
            File file = File(dbPath);
            file.createSync(recursive: true);
            await file.writeAsBytes(bytes);
          }
        }
        return doc_directory.path + '/assets';
      }

    dataDir is the directory to which the conf, share and license folders from the Face SDK distribution were copied.
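The copy routine is typically awaited once at application startup, before any Face SDK call. A minimal sketch, assuming the imports listed in the next step; MyApp is a placeholder for your root widget:

```dart
void main() async {
  // rootBundle access before runApp() requires an initialized binding
  WidgetsFlutterBinding.ensureInitialized();
  dataDir = await loadAsset(); // copy assets and remember the target directory
  runApp(MyApp());
}
```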

  7. Add the import of the face_sdk_3divi module to the application, as well as the necessary additional modules:

    import 'package:face_sdk_3divi/face_sdk_3divi.dart';
    import 'package:path_provider/path_provider.dart';
    import 'package:flutter/services.dart' show rootBundle;
    import 'dart:io';
    import 'dart:convert';
    import "dart:typed_data";

Working with the plugin

Working with the plugin begins with initializing the FacerecService, which allows you to create the other Face SDK primitives for face detection, tracking and comparison.

An example of initializing the FacerecService object in the main() function:

void main() async {
  FacerecService facerecService = FaceSdkPlugin.createFacerecService(
      dataDir + "/conf/facerec",
      dataDir + "/license");
  // ...
}

Basic primitives


Working with Face SDK primitives is based on configuration files. For example, the detector configuration file is common_capturer_uld_fda.xml and the face tracker configuration file is video_worker_fdatracker_uld_fda.xml.

The Config class is initialized with the name of a configuration file and allows you to override its parameters (for example, the minimum score of detected faces).
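For instance, a Config can be constructed and a parameter overridden through overrideParameter. The parameter name used below is purely illustrative; check the configuration file itself for the actual parameter names:

```dart
// Hypothetical override: "score_threshold" stands in for a real
// parameter name from the chosen configuration file.
Config config = Config("common_capturer4_fda_singleface.xml")
    .overrideParameter("score_threshold", 0.4);
```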


The RawSample primitive contains information about a detected face.

Detecting faces in images

To detect faces in photos, Face SDK uses the Capturer component. To create this component, call the FacerecService.createCapturer method on the initialized FacerecService object and pass the Config object as an argument:

Capturer capturer = facerecService.createCapturer(Config("common_capturer4_fda_singleface.xml"));

To get detections, use the Capturer.capture method, which takes an encoded image:

Uint8List img_bytes = File(filePath).readAsBytesSync(); // reading a file from storage
List<RawSample> detections = capturer.capture(img_bytes); // get detections

As a result, a list of RawSample objects is returned, with each element describing a separate detected face. If no faces are found in the image, the list is empty.
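For example, the returned samples can be inspected and their native resources released. This is a sketch that assumes the Rectangle returned by getRectangle exposes x and y fields:

```dart
for (final RawSample sample in detections) {
  final rect = sample.getRectangle(); // bounding rectangle of the face
  print("Face detected at (${rect.x}, ${rect.y})");
  sample.dispose(); // free the native resources held by the sample
}
```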

To receive detections from the device's camera, use the CameraController.takePicture method, which saves an image to the device memory. The image must therefore be loaded from the file first (the saved image can be deleted afterwards):

XFile file = await cameraController.takePicture(); // take photo
Uint8List img_bytes = File(file.path).readAsBytesSync(); // load photo
List<RawSample> detections = _capturer.capture(img_bytes); // get detections
File(file.path).delete(); // delete file

More information about using CameraController in Flutter can be found in the camera plugin documentation.

To crop a face from an image, the cutFaceFromImageBytes function can be used:

final rect = detections[0].getRectangle();
Image _cropImg = await cutFaceFromImageBytes(img_bytes, rect);

An example of a widget that uses a Capturer object to detect faces through a device camera can be found in examples/flutter/demo/lib/photo.dart.

Face tracking in a video sequence and Active Liveness

To track faces and perform an Active Liveness check on a video sequence, the VideoWorker object is used.

Procedure for using the VideoWorker object:

  1. create a VideoWorker object
  2. get frames from the camera (for example, via cameraController.startImageStream) and pass them to the VideoWorker either directly via the VideoWorker.addVideoFrame method, or by saving each frame to a variable and calling VideoWorker.addVideoFrame asynchronously (for example, wrapped in a looped StreamBuilder function)
  3. get the processing results from the VideoWorker by calling the VideoWorker.poolTrackResults method

1. Creating a VideoWorker object

Use a VideoWorker object to track faces on a video sequence and perform an Active Liveness check.

To create a VideoWorker, call the FacerecService.createVideoWorker method, which takes a VideoWorkerParams structure containing the initialization parameters (the list of checks below is an example scenario):

List<ActiveLivenessCheckType> checks = [
  ActiveLivenessCheckType.TURN_LEFT,
  ActiveLivenessCheckType.SMILE,
  ActiveLivenessCheckType.BLINK
];
VideoWorker videoWorker = facerecService.createVideoWorker(VideoWorkerParams()
    .video_worker_config(Config("video_worker_fdatracker_uld_fda.xml")
        .overrideParameter("base_angle", 0)
        .overrideParameter("enable_active_liveness", 1)
        .overrideParameter("active_liveness.apply_horizontal_flip", 0))
    .active_liveness_checks_order(checks));

The set of Active Liveness checks is defined by the active_liveness_checks_order property, to which a list of actions (the checking scenario) is passed (example given above).

Available Active Liveness checks (ActiveLivenessCheckType values):

  • SMILE
  • BLINK
  • TURN_UP
  • TURN_DOWN
  • TURN_LEFT
  • TURN_RIGHT
When using a video sequence from a camera, take into account the base rotation angle of the camera image (for example, CameraController.description.sensorOrientation). It is not necessary to rotate the images for VideoWorker, but the base_angle parameter must be set according to the camera rotation.

For sensorOrientation == 90, set the baseAngle parameter to 1; for sensorOrientation == 270, set it to 2.


The front camera image of iOS devices is mirrored horizontally; in this case, set the active_liveness.apply_horizontal_flip parameter to 1.

Example of selecting the base angle:

int baseAngle = 0;
if (controller.description.sensorOrientation == 90)
  baseAngle = 1;
else if (controller.description.sensorOrientation == 270)
  baseAngle = 2;

double apply_horizontal_flip = 0;
if (Platform.isIOS) {
  if (controller.description.lensDirection == CameraLensDirection.front)
    apply_horizontal_flip = 1;
  baseAngle = 0;
}

2. Video frame processing in VideoWorker

To process a video sequence, pass its frames to the VideoWorker using the VideoWorker.addVideoFrame method. The VideoWorker accepts frames as a RawImageF pixel array. Supported color models: RGB, BGR, YUV.

Frames can be retrieved through the ImageStream callback:

Example of calling addVideoFrame:

NativeDataStruct _data = new NativeDataStruct();

RawImageF convertCameraImg(CameraImage img) {
  Format format = Format.FORMAT_RGB;
  if (img.format.group == ImageFormatGroup.yuv420) {
    format = Format.FORMAT_YUV_NV21;
    convertRAW(img.planes, _data);
  } else if (img.format.group == ImageFormatGroup.bgra8888) {
    format = Format.FORMAT_BGR;
    convertBGRA8888(img.planes, _data);
  } else {
    print("Unsupported image format");
    convertRAW(img.planes, _data);
  }
  final rawImg = RawImageF(img.width, img.height, format, _data.pointer!.cast());
  return rawImg;
}

cameraController.startImageStream((CameraImage img) async {
  int time = DateTime.now().millisecondsSinceEpoch;
  final rawImg = convertCameraImg(img);
  videoWorker.addVideoFrame(rawImg, time);
});

Images can be converted for transfer to VideoWorker using the built-in functions convertRAW and convertBGRA8888.

For independent operation of the ImageStream and the VideoWorker (the call to addVideoFrame should not block the video stream), a StreamBuilder can be used to call the addVideoFrame function asynchronously.

Example of calling addVideoFrame with StreamBuilder

Image stream callback (saving the image and timestamp to global variables):

int _lastImgTimestamp = 0;
CameraImage? _lastImg;

cameraController.startImageStream((CameraImage img) async {
  int startTime = DateTime.now().millisecondsSinceEpoch;
  setState(() {
    _lastImgTimestamp = startTime;
    _lastImg = img;
  });
});
Asynchronous function for transferring frames in VideoWorker:

Stream<List<dynamic>> addVF(int prev_time) async* {
  final time = _lastImgTimestamp;
  var img = _lastImg;
  if (!mounted || img == null) {
    await Future.delayed(const Duration(milliseconds: 50));
    yield* addVF(time);
    return;
  }
  final rawImg = convertCameraImg(img);
  videoWorker.addVideoFrame(rawImg, time);
  await Future.delayed(const Duration(milliseconds: 50));
  yield* addVF(time);
}

Widget (can be combined with any other):

StreamBuilder(
  stream: addVF(0),
  builder: (context, snapshot) { return Text(""); },
)

3. Retrieving tracking results

The VideoWorker.poolTrackResults method is used to get the results of the VideoWorker processing. This method returns a structure with data on the currently tracked faces.

final callbackData = videoWorker.poolTrackResults();
List<RawSample> rawSamples = callbackData.tracking_callback_data.samples;

Active Liveness status is contained in TrackingData.tracking_callback_data:

List<ActiveLivenessStatus> activeLiveness = callbackData.tracking_callback_data.samples_active_liveness_status;
Example of implementing Active Liveness checks

Definition of Active Liveness status:

bool livenessFailed = false;
bool livenessPassed = false;

String activeLivenessStatusParse(ActiveLivenessStatus status, Angles angles) {
  String alAction = '';
  if (status.verdict == ActiveLiveness.WAITING_FACE_ALIGN) {
    alAction = 'Please, look at the camera';
    if (angles.yaw > 10)
      alAction += ' (turn face →)';
    else if (angles.yaw < -10)
      alAction += ' (turn face ←)';
    else if (angles.pitch > 10)
      alAction += ' (turn face ↓)';
    else if (angles.pitch < -10)
      alAction += ' (turn face ↑)';
  } else if (status.verdict == ActiveLiveness.CHECK_FAIL) {
    alAction = 'Active liveness check FAILED';
    livenessFailed = true;
  } else if (status.verdict == ActiveLiveness.ALL_CHECKS_PASSED) {
    alAction = 'Active liveness check PASSED';
    livenessPassed = true;
    _videoWorker.resetTrackerOnStream(); // to get the best shot of the face
  } else if (status.verdict == ActiveLiveness.IN_PROGRESS) {
    if (status.check_type == ActiveLivenessCheckType.BLINK)
      alAction = 'Blink';
    else if (status.check_type == ActiveLivenessCheckType.SMILE)
      alAction = 'Smile';
    else if (status.check_type == ActiveLivenessCheckType.TURN_DOWN)
      alAction = 'Turn face down';
    else if (status.check_type == ActiveLivenessCheckType.TURN_LEFT) {
      // the front camera image is mirrored on iOS
      if (Platform.isIOS)
        alAction = 'Turn face right';
      else
        alAction = 'Turn face left';
    } else if (status.check_type == ActiveLivenessCheckType.TURN_RIGHT) {
      if (Platform.isIOS)
        alAction = 'Turn face left';
      else
        alAction = 'Turn face right';
    } else if (status.check_type == ActiveLivenessCheckType.TURN_UP)
      alAction = 'Turn face up';
  } else if (status.verdict == ActiveLiveness.NOT_COMPUTED) {
    alAction = 'Active liveness disabled';
  }
  return alAction;
}

Retrieving tracking results:

String activeLivenessAction = '';
int livenessProgress = 0;

Stream<String> pool() async* {
  if (!mounted) {
    await Future.delayed(const Duration(milliseconds: 50));
    yield* pool();
    return;
  }
  final callbackData = _videoWorker.poolTrackResults();
  final rawSamples = callbackData.tracking_callback_data.samples;
  int progress = livenessProgress;
  if (!livenessFailed && !livenessPassed) {
    if (callbackData.tracking_callback_data.samples.length == 1) {
      ActiveLivenessStatus status =
          callbackData.tracking_callback_data.samples_active_liveness_status[0];
      Angles angles = rawSamples[0].getAngles();
      activeLivenessAction = activeLivenessStatusParse(status, angles);
      progress = (status.progress_level * 100).toInt();
    } else if (callbackData.tracking_callback_data.samples.length > 1) {
      progress = 0;
      activeLivenessAction = "Leave one face in the frame";
    } else {
      progress = 0;
      activeLivenessAction = "";
    }
  }
  rawSamples.forEach((element) => element.dispose());
  setState(() {
    livenessProgress = progress;
  });
  await Future.delayed(const Duration(milliseconds: 50));
  yield* pool();
}

Widget (can be combined with any other):

StreamBuilder(
  stream: pool(),
  builder: (context, snapshot) {
    return Transform.translate(
        offset: Offset(0, 100),
        child: Text(activeLivenessAction,
            style: new TextStyle(fontSize: 20, backgroundColor: Colors.white)));
  },
)

4. Retrieving the best shot after completing Active Liveness

To retrieve the best shot of the face, call the VideoWorker.resetTrackerOnStream method after the Active Liveness checks have been passed successfully. This method resets the tracker state and triggers the LostTrackingData callback in the VideoWorker. The LostTrackingData callback returns the best face shot, which can be used to create a face template (Template).

final callbackData = videoWorker.poolTrackResults();
if (callbackData.tracking_lost_callback_data.best_quality_sample != null) {
  final best_shot = callbackData.tracking_lost_callback_data.best_quality_sample;
  final face_template_vw = recognizer.processing(best_shot!);
}

Further, face_template_vw can be used to compare with other templates and get a similarity score.

Example of obtaining a face template for its subsequent comparison with the NID

After the videoWorker.poolTrackResults function is called (example given above), the best_quality_sample field will be set. You can use it to get the face template:

Template? face_template_vw;

Stream<String> pool() async* {
  // .....
  // pooling results (example given above)
  final best_quality_sample =
      callbackData.tracking_lost_callback_data.best_quality_sample;
  if (face_template_vw == null && livenessPassed && best_quality_sample != null) {
    face_template_vw = recognizer.processing(best_quality_sample!);
  }
  setState(() {
    if (livenessFailed) {
      // liveness failed
    } else if (livenessPassed && face_template_vw != null) {
      // liveness passed, face_template_vw can be used for face comparison
    }
  });
}

To get a face photo, save the best CameraImage and update it whenever a sample of higher quality is obtained:

double best_quality = -1e-10;
CameraImage? bestImage;
Rectangle? bestRect;

Stream<String> pool() async* {
  // ... pool and process tracking results ...
  if (callbackData.tracking_callback_data.samples.length == 1) {
    final sample = callbackData.tracking_callback_data.samples[0];
    if (best_quality < callbackData.tracking_callback_data.samples_quality[0]) {
      best_quality = callbackData.tracking_callback_data.samples_quality[0];
      bestImage = _lastImg;
      bestRect = sample.getRectangle();
    }
  }
  // ....
}

The cutFaceFromCameraImage method can be used to crop the face from the image:

Image cut_face_img = cutFaceFromCameraImage(bestImage!, bestRect!);


An example of a widget that uses the VideoWorker object and checks Active Liveness with the front camera can be found in examples/flutter/demo/lib/video.dart.

Face verification

The Recognizer object is used to build and compare face templates. It is created by calling the FacerecService.createRecognizer method, to which the name of the recognizer configuration file must be passed as an argument:

Recognizer recognizer = facerecService.createRecognizer("method10v30_recognizer.xml");

The order of performing operations when comparing faces:

  • face detection
  • building a face template
  • comparison of the face template with other templates

An example implementation of comparing two faces (it is assumed that all the necessary Face SDK objects have been created and that each image contains one face):

// Getting the template for the first face
Uint8List imgB1 = File(filePath1).readAsBytesSync();
List<RawSample> rawSamples1 = capturer.capture(imgB1);
Template templ1 = recognizer.processing(rawSamples1[0]);
// Getting the template for the second face
Uint8List imgB2 = File(filePath2).readAsBytesSync();
List<RawSample> rawSamples2 = capturer.capture(imgB2);
Template templ2 = recognizer.processing(rawSamples2[0]);
// Compare the faces
MatchResult match = recognizer.verifyMatch(templ1, templ2);
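The resulting similarity score can then be compared against an application-specific threshold. A minimal, hypothetical helper; the 0.9 value is an illustrative assumption and should be tuned for the chosen recognizer method:

```dart
// Hypothetical acceptance check; the threshold is an assumption,
// not a value taken from the Face SDK documentation.
bool isSamePerson(MatchResult match, {double threshold = 0.9}) {
  return match.score >= threshold;
}
```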

Comparing the face from a document with the person who passed the Active Liveness check

To compare the face from a document with the face of the person who passed the Active Liveness check, you need to build templates of both faces.

  • Face detection on the document and construction of the face_template_idcard template:
XFile file = await cameraController.takePicture(); // take photo
Uint8List img_bytes = File(file.path).readAsBytesSync(); // load photo
List<RawSample> detections = capturer.capture(img_bytes); // get detections
File(file.path).delete(); // delete file
Template face_template_idcard = recognizer.processing(detections[0]); // Only one face is expected on the photo
  • Getting the face template face_template_vw from the object VideoWorker after passing the Active Liveness check (example given above)

  • Comparison of templates face_template_idcard and face_template_vw using the method Recognizer.verifyMatch:

MatchResult match = recognizer.verifyMatch(face_template_idcard, face_template_vw);
double similarity_score = match.score;