Version: 3.13.0

Face SDK VideoEngine JS plugin

The Face SDK VideoEngine JS plugin implements face detection, face tracking, and Active Liveness estimation. This section covers installing, initializing, and running the plugin, as well as the available startup options and public methods.

Install the plugin#

Run one of the following commands in the console, depending on the package manager used (npm or Yarn):

  • npm install @3divi/face-sdk-video-engine-js
  • yarn add @3divi/face-sdk-video-engine-js

Initialize the plugin#

  1. Import the VideoEngine library from Face SDK:
    import { VideoEngine } from '@3divi/face-sdk-video-engine-js';
  2. Create an instance of the VideoEngine class (the optional options argument is described in the section Plugin startup options):
    const videoEngine = new VideoEngine(options);

Run the plugin#

  1. Call the load method to load the module: videoEngine.load();
  2. To run the plugin, call the videoEngine.start(input, callback) method, where:
  • input — an HTMLVideoElement obtained from the <video></video> tag and passed as input to the method. A video stream must be attached to the element (see the section Initialize the camera).
  • callback — a callback function that can be used to get the prediction object (see the description of this object in the section Public methods) and process data in real time while the video stream is processed: a face is detected and Liveness is estimated. For example, you can use it to get the bounding box data and display it on the screen.

Note: Loading the module is an asynchronous process.

Example of plugin initialization:

const initVideoEngine = async () => {
  try {
    const videoEngine = new VideoEngine();
    await videoEngine.load();
    // The plugin is now ready to start processing
  } catch (error) {
    console.error(error);
  }
};
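After loading, processing can be started with a video element and a callback. A minimal sketch is shown below; the function name runVideoEngine and the logged field are illustrative, and the videoEngine instance and video element are assumed to be prepared as described in the sections above and below.

```javascript
// Hypothetical sketch: wires an already-created engine to a <video> element.
// The function is only defined here, not invoked, since start() needs a
// browser environment with a live camera stream.
async function runVideoEngine(videoEngine, video) {
  await videoEngine.load(); // loading the module is asynchronous
  await videoEngine.start(video, (prediction) => {
    // prediction holds pose, face, and eyes data (see Public methods)
    console.log(prediction.pose.poseInRequiredRange);
  });
}
```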

Initialize the camera#

To initialize the camera, follow the steps below:

  1. Add the <video></video> tag to HTML.
  2. Get the video element:
    const video = document.querySelector('video');
  3. Initialize the stream. See the full example: demo/src/utils.js.
try {
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: false,
    video: {
      facingMode: 'user',
      width: VIDEO_SIZE,
      height: VIDEO_SIZE,
    },
  });
  video.srcObject = stream;
  return new Promise((resolve) => {
    video.onloadedmetadata = () => {
      resolve(video);
    };
  });
} catch (error) {
  throw new Error(error);
}

Plugin startup options#

The VideoEngine constructor accepts an options object that contains the following fields:

  • backend — Hardware used for processing. Available values: webgl — GPU, cpu — CPU. Default value: webgl.
  • pose — An object that stores the allowed rotation of the head in degrees. It is used to check Active Liveness and limit the maximum allowed head rotation (see Camera Positioning and Shooting). Contains the maxRotation field. Type: Number (degrees of rotation). Default value: 15.
  • eyes — An object that stores the information about the position and state of the eyes. Accepts the following fields:
    • minDistance — Minimum distance between pupils in pixels. Type: Number. Default value: 60. You can also import the following constants:
      • REGISTRATION_EYES_MIN_DISTANCE stores the value of 60. A person is added to the database if this minimum value is reached.
      • IDENTIFICATION_EYES_MIN_DISTANCE stores the value of 40. A person can be identified if this minimum value is reached.
    • closeLowThreshold — Lower threshold for the "eyes open" state. If the value is less than closeLowThreshold, the result is the "eyes closed" status. Type: Number (from 0 to 1). Default value: 0.25.
    • closeHighThreshold — Upper threshold for the "eyes open" state. If the value is greater than or equal to closeHighThreshold, the result is the "eyes open" status; otherwise, "eyes closed". Type: Number (from 0 to 1). Default value: 0.3. If closeHighThreshold is not specified, it is calculated automatically based on closeLowThreshold.
    • maxDurationClose — The value used to detect the "blink" status. If the period between the "eyes open" and "eyes closed" states is less than maxDurationClose, the result is the "there was a blink" status. Type: Number (time in ms). Default value: 500.
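As a sketch, an options object with all documented fields set to their default values could look like this (passing it to the constructor is shown commented out):

```javascript
// Sketch of a full options object using the documented default values.
const options = {
  backend: 'webgl',            // 'webgl' (GPU) or 'cpu'
  pose: { maxRotation: 15 },   // max allowed head rotation, degrees
  eyes: {
    minDistance: 60,           // min distance between pupils, px
    closeLowThreshold: 0.25,   // below this: "eyes closed"
    closeHighThreshold: 0.3,   // at or above this: "eyes open"
    maxDurationClose: 500,     // max "blink" duration, ms
  },
};
// const videoEngine = new VideoEngine(options);
```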

Public methods#

  • load — Asynchronous library initialization that accepts the backend field (see Plugin startup options) and returns a Promise. On completion, a Status object is returned:
    {
      type: 'success',
      message: 'ok',
    }
  • reset — removes the information about the last processing.
  • start — starts the video stream processing. It returns a Promise that contains the Status object (see above). When the start method is executed for the first time, the backend object and the model are initialized. The following parameters are accepted:
    • input — an HTML element HTMLVideoElement (see Run the plugin).
    • callback — a callback function that can be used to process the received data in real time. The argument of this function is the prediction object.
      • prediction:
        • pose — an object that stores the information about the current position of the head:
          • axes — an object of the roll/pitch/yaw coordinate system. Type of fields: Number.
          • poseInRequiredRange — whether the face position is within the allowed range of this coordinate system (the angles do not exceed the maxRotation value). Type: Boolean.
        • headWasTurned — whether the head was turned during processing. Type: Boolean.
        • imageBase64 — an image of the current prediction object. Type: String. Data format: Base64.
        • face — an object that stores the information about the eyes and face points:
          • boundingBox — a bounding box:
            • topLeft — an array of 2D coordinates [x:number, y:number] (upper left corner).
            • bottomRight — an array of 2D coordinates [x:number, y:number] (lower right corner).
          • faceInViewConfidence — the likelihood of a face being in the frame. Type: Number (from 0 to 1).
          • mesh — an array with nested arrays of 3D coordinates of face points [...[x:number, y:number, z:number]].
          • scaledMesh — a normalized array (the coordinates of the points are adjusted to the face size in the video stream) with nested arrays of 3D coordinates of face points.
          • annotations — semantic groupings of the scaledMesh coordinates.
          • eyes — an object that stores information about the eyes:
            • isClosed — whether the eyes are closed at the current moment. Type: Boolean.
            • isBlinked — whether there was a blink at the current moment. Type: Boolean.
            • wasBlinked — whether there was a blink during the entire processing period. Type: Boolean.
            • eyesDistanceInRequiredRange — whether the user is at the correct distance from the camera, based on the minDistance parameter (the minimum distance between the pupils). Type: Boolean.
            • levelOpeningEyes — an object that stores the current degree of opening of each eye:
              • left — the left eye. Type: Number.
              • right — the right eye. Type: Number.
  • Note: the first initialization after calling the start method can take a long time, so you may want to display a message (for example, "Initialization in progress") while it runs.
  • stop — stops the processing but saves the current values. If the start method is called again, processing continues. The best prediction object is returned.

An example of an interface returned in the prediction object:

interface OutputData {
  pose: {
    axes: Axes;
    poseInRequiredRange: boolean;
  };
  face: {
    boundingBox: BoundingBox;
    mesh: Coord3D[];
    scaledMesh: Coord3D[];
    annotations?: { [key: string]: Coord3D };
    faceInViewConfidence: number;
    eyes: {
      isClosed: boolean;
      isBlinked: boolean;
      wasBlinked: boolean;
      eyesDistanceInRequiredRange: boolean;
      levelOpeningEyes: {
        left: number;
        right: number;
      };
    };
  };
  headWasTurned: boolean;
  imageBase64?: string;
}
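For illustration, a small helper that reads a few of these fields from a prediction-shaped object; the helper name summarizePrediction and the derived width/height values are assumptions for this sketch, not part of the plugin API:

```javascript
// Hypothetical helper: derives the bounding-box size and a simple liveness
// summary from an object matching the OutputData shape above.
function summarizePrediction(prediction) {
  const { topLeft, bottomRight } = prediction.face.boundingBox;
  return {
    width: bottomRight[0] - topLeft[0],
    height: bottomRight[1] - topLeft[1],
    blinked: prediction.face.eyes.wasBlinked,
    headTurned: prediction.headWasTurned,
  };
}

// Usage with a minimal mock object:
const mock = {
  headWasTurned: true,
  face: {
    boundingBox: { topLeft: [10, 20], bottomRight: [110, 170] },
    eyes: { wasBlinked: false },
  },
};
// summarizePrediction(mock) -> { width: 100, height: 150, blinked: false, headTurned: true }
```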

Public fields#

  • backend — hardware used for processing (see Plugin startup options). Returns data of the String type.
  • inProgress — whether processing was started. Returns a Boolean value.
  • isInitialized — whether the model was initialized. The model is initialized during the first execution of the start method. Returns a Boolean value.
  • isLoaded — whether the model was loaded. Returns a Boolean value.
  • bestPrediction — returns the best prediction object.
  • bestShot — returns the best image of the best prediction object.
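As an illustration, these fields can be combined into a simple readiness check. The helper below is an assumption for this sketch and is shown with a stub object rather than a real engine instance:

```javascript
// Hypothetical helper: summarizes the public fields listed above.
function engineStatus(engine) {
  return {
    ready: engine.isLoaded && engine.isInitialized,
    busy: engine.inProgress,
    backend: engine.backend,
  };
}

// Stub with the documented field names (a freshly loaded engine before
// the first start call, which is what initializes the model):
const stub = { isLoaded: true, isInitialized: false, inProgress: false, backend: 'webgl' };
// engineStatus(stub).ready is false until the first start call initializes the model
```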