In this tutorial, you'll learn how to detect and track faces in a video stream from your camera using the
VideoWorker object from Face SDK API. Tracked faces are highlighted with a green rectangle.
Besides Face SDK and Qt, you'll need a camera connected to your PC (for example, a webcam). You can build and run this project either on Windows or on Ubuntu (v16.04 or higher).
You can find the tutorial project in Face SDK: examples/tutorials/detection_and_tracking_with_video_worker
- Run Qt and create a new project: File > New File or Project > Application > Qt Widgets Application > Choose...
- Name it, for example, 1_detection_and_tracking_with_video_worker and choose the path. Click Next and choose the necessary platform for your project in the Kit Selection section, for example, Desktop. Click Details and select the Release build configuration (we don't need Debug in this project).
- In the Class Information window, leave settings as default and click Next. Then, leave settings as default in the Project Management window and click Finish.
- Let's title the main window of our app: in the project tree, double-click the file Forms > mainwindow.ui. Specify the name of the window in the Properties tab (the right part of the editor): windowTitle > Face SDK Tracking.
- To lay out widgets in a grid, drag-and-drop the Grid Layout object to the MainWindow widget. Open the context menu of MainWindow by right-clicking and select Layout > Lay Out in a Grid. The Grid Layout object will be stretched to the size of the MainWindow widget. Rename the Layout: layoutName > viewLayout.
- To run the project, click Run (Ctrl+R). You'll see an empty window with the title Face SDK Tracking.
- In order to use a camera in our project, we have to add Qt multimedia widgets. To do this, add the following line to the .pro file:
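The exact line was omitted here; assuming Qt 5, the camera and video surface classes come from the multimedia modules, so the .pro addition is typically:

```
QT += multimedia multimediawidgets
```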
- Create a new class QCameraCapture to receive the image from a camera: Add New > C++ > C++ Class > Choose… > Class name – QCameraCapture > Base class – QObject > Next > Project Management (default settings) > Finish. Create a new class CameraSurface in the file qcameracapture.h, which will provide the frames from the camera via the QAbstractVideoSurface interface.
- Describe the implementation of this class in the file qcameracapture.cpp. Define the CameraSurface::CameraSurface constructor and the CameraSurface::supportedPixelFormats method; list all the image formats supported by Face SDK (RGB24, BGR24, NV12, NV21). With some cameras, the image is received in the RGB32 format, so we add this format to the list as well. This format isn't supported by Face SDK, so we'll convert such images from RGB32 to RGB24.
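In the project itself the conversion is a single QImage::convertToFormat(QImage::Format_RGB888) call; purely as an illustration of what that entails, here is a sketch of the byte shuffling. The little-endian B,G,R,A in-memory layout of Format_RGB32 is an assumption about the platform:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Convert a packed RGB32 buffer (4 bytes per pixel, 0xAARRGGBB stored
// little-endian as B,G,R,A in memory) to tightly packed RGB24 (R,G,B).
// Illustrative stand-in for QImage::convertToFormat(QImage::Format_RGB888).
std::vector<uint8_t> rgb32_to_rgb24(const std::vector<uint8_t>& rgb32) {
    std::vector<uint8_t> rgb24;
    rgb24.reserve(rgb32.size() / 4 * 3);
    for (std::size_t i = 0; i + 3 < rgb32.size(); i += 4) {
        rgb24.push_back(rgb32[i + 2]); // R
        rgb24.push_back(rgb32[i + 1]); // G
        rgb24.push_back(rgb32[i]);     // B
    }
    return rgb24;
}
```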
- In the CameraSurface::start method, check the image format. Start the camera if the format is supported; otherwise, handle the exception.
- In the CameraSurface::present method, process a new frame. If the frame is successfully verified, emit the frameUpdatedSignal signal to update the frame. Next, we'll connect this signal to the frameUpdatedSlot slot, where the frame will be processed.
- The QCameraCapture constructor takes the pointer to a parent widget (parent), the camera id, and the image resolution (width and height), which will be stored in the relevant class fields.
- Add the camera objects to the QCameraCapture class.
- Include the stdexcept header in qcameracapture.cpp to throw exceptions. Save the pointer to the parent widget, the camera id, and the image resolution in the initializer list of the QCameraCapture::QCameraCapture constructor. In the constructor body, get the list of available cameras. The list should contain at least one camera; otherwise, a runtime_error exception is thrown. Check that the camera with the requested id is in the list. Create a camera and connect the camera signals to the slots that process them. When the camera status changes, the camera emits the statusChanged signal. Create the CameraSurface object to display the frames from the camera. Connect the CameraSurface::frameUpdatedSignal signal to the QCameraCapture::frameUpdatedSlot slot.
- Stop the camera in the destructor.
- Add the QCameraCapture::frameUpdatedSlot method, which processes the CameraSurface::frameUpdatedSignal signal. In this method, we convert the frame to a QImage and send a signal that a new frame is available. Create a pointer to the image, FramePtr. If the image is received in the RGB32 format, convert it to RGB888.
- Add the methods to start and stop the camera to QCameraCapture.
- In the QCameraCapture::onStatusChanged method, process the change of the camera status to LoadedStatus. Check if the camera supports the requested resolution. Set the requested resolution if it's supported by the camera; otherwise, set the default resolution (640 x 480) specified by the static fields.
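The selection logic above can be sketched as a small helper. The names here are illustrative, not the tutorial's: in the real code the supported list comes from QCamera::supportedViewfinderResolutions() and the default from static fields of QCameraCapture:

```cpp
#include <utility>
#include <vector>

using Resolution = std::pair<int, int>; // width, height

// Return the requested resolution if the camera reports it as supported,
// otherwise fall back to the 640x480 default.
Resolution pickResolution(const std::vector<Resolution>& supported,
                          const Resolution& requested) {
    const Resolution kDefault{640, 480};
    for (const Resolution& r : supported)
        if (r == requested)
            return requested;
    return kDefault;
}
```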
- In the cameraError method, display the camera error messages if they occur.
- Create a new class Worker: Add New > C++ > C++ Class > Choose… > Class name – Worker > Next > Finish. The Worker class will save the last frame from the camera and pass it on request.
- Frames will be displayed in the ViewWindow class. Create a ViewWindow widget: Add New > Qt > Designer Form Class > Choose... > Template > Widget (default settings) > Next > Name – ViewWindow > Project Management (default settings) > Finish.
- In the editor (Design), drag-and-drop the Grid Layout object to the widget. Then open the context menu of ViewWindow by right-clicking and select Layout > Lay Out in a Grid. The Grid Layout object allows you to place widgets in a grid and is stretched to the size of the ViewWindow widget. Then, add the Label object to gridLayout and name it: QObject > objectName > frame.
- Delete the default text in QLabel > text.
- Add the QCameraCapture camera object to the ViewWindow class and initialize it in the constructor. Using the static fields camera_image_width and camera_image_height, set the required image resolution to 1280x720. The _running flag stores the status of the camera: true means that the camera is running, false means that it's stopped.
- Add the Worker object to the ViewWindow class and initialize it in the constructor.
- Frames will be passed to Worker from QCameraCapture. Modify the QCameraCapture class accordingly. The QCameraCapture::newFrameAvailable signal is processed in the ViewWindow::draw slot, which displays the camera image on the frame widget.
- Start the camera in the runProcessing method and stop it in stopProcessing.
- Stop the camera in the destructor.
- Connect the camera widget to the main application window: create a view window and start processing in the MainWindow constructor. Stop the processing in the destructor.
- Modify the main function to catch possible exceptions.
- Run the project. You should see a window with the image from your camera.
Note: On Windows, the image from some cameras can be flipped or mirrored, which happens due to some peculiarities of the image processing by Qt. In this case, you'll need to process the image, for example, using QImage::mirrored().
- Download and extract the Face SDK distribution as described in the section Getting Started. The root folder of the distribution should contain the bin and lib folders, depending on your platform.
- To detect and track faces in the image from your camera, you have to integrate Face SDK into your project. In the .pro file, specify the path to the Face SDK root folder in the FACE_SDK_PATH variable, and also specify the path to the include folder (from Face SDK) that contains the necessary headers. If the paths are not specified, the exception "Empty path to Face SDK" is thrown.
Note: When specifying the path to Face SDK, please use a slash ("/").
- [Linux only] To build the project with Face SDK, add the following option to the .pro file:
- Besides, we have to specify the path to the facerec library and the configuration files. Create the FaceSdkParameters class, which will store the configuration (Add New > C++ > C++ Header File > FaceSdkParameters), and use it in MainWindow.
- Integrate Face SDK: add the necessary headers to mainwindow.h and an initFaceSdkService method to initialize the Face SDK services. Create a FacerecService object, a component used to create the Face SDK modules, by calling the FacerecService::createService static method. Pass the path to the library and the path to the folder with the configuration files in a try-catch block in order to catch possible exceptions. If the initialization was successful, the initFaceSdkService function returns true; otherwise, it returns false and you'll see a window with an exception.
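The initialization pattern can be sketched as follows. This is a stand-in, not the tutorial's code: the body where FacerecService::createService would be called is reduced to a comment, and the Qt message box is replaced by a stderr message:

```cpp
#include <iostream>
#include <stdexcept>
#include <string>

// Sketch of the service-initialization pattern: return true on success,
// report the error and return false on failure.
bool initFaceSdkService(const std::string& dll_path,
                        const std::string& conf_dir_path) {
    try {
        if (dll_path.empty() || conf_dir_path.empty())
            throw std::runtime_error("Empty path to Face SDK");
        // Real code creates the service here, roughly:
        //   _service = pbio::FacerecService::createService(
        //       dll_path, conf_dir_path);
    } catch (const std::exception& e) {
        // The tutorial shows the exception text in a window instead.
        std::cerr << "Can't init Face SDK service: " << e.what() << '\n';
        return false;
    }
    return true;
}
```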
- In the MainWindow::MainWindow constructor, add a service initialization call; in case of an error, throw an exception. Pass the FacerecService object and the Face SDK parameters to the ViewWindow constructor, where they'll be used to create the VideoWorker tracking module. Save the service and parameters to the class fields.
- Modify the Worker class for interaction with Face SDK. The Worker class takes the FacerecService pointer and the name of the configuration file of the tracking module. The Worker class creates the VideoWorker component from Face SDK, which is responsible for face tracking; it passes the frames to this component and processes the callbacks that contain the tracking results. Implement the constructor: create the VideoWorker object, specifying the configuration file, the recognizer method (empty in our case, because we don't recognize faces in this project), and the number of video streams (1 in our case, because we use only one camera).
Note: In addition to face detection and tracking, VideoWorker can be used for face recognition on several video streams. In this case, you have to specify the recognizer method and the number of streams.
- Subscribe to the callbacks from the VideoWorker: TrackingCallback (a face is detected and tracked) and TrackingLostCallback (a face was lost). Delete them in the destructor.
- Include the cassert header to handle exceptions. In TrackingCallback, the result is received in the form of the TrackingCallbackData structure, which stores data about all the faces being tracked. The preview output is synchronized with the result output: we cannot immediately display the frame that is passed to VideoWorker, because it'll be processed a little later. Therefore, frames are stored in a queue. When we get a result, we can find the frame that matches this result. Some frames may be skipped by VideoWorker under heavy load, which means that not every frame has a matching result. In the algorithm below, the image corresponding to the last received frame is extracted from the queue. Save the detected faces for each frame so that we can use them later for visualization. To synchronize the changes of shared data, we use a mutex. Also implement TrackingLostCallback, in which we mark that the tracked face left the frame.
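The queue-matching idea can be sketched like this. The class and method names are ours, not the tutorial's, and a std::string stands in for the image; the point is the id-matching walk over the queue and the mutex guarding it, since the callbacks arrive on a different thread:

```cpp
#include <mutex>
#include <queue>
#include <string>
#include <utility>

// Sketch: addFrame() pushes (frame_id, image) pairs; when a tracking result
// for result_id arrives, frameForResult() pops everything up to and including
// that id and returns the matching image. Frames skipped by VideoWorker under
// load are discarded along the way.
class FrameQueue {
public:
    void addFrame(int frame_id, std::string image) {
        std::lock_guard<std::mutex> lock(_mutex);
        _frames.push({frame_id, std::move(image)});
    }

    // Returns the image whose id matches result_id ("" if it was dropped).
    std::string frameForResult(int result_id) {
        std::lock_guard<std::mutex> lock(_mutex);
        std::string match;
        while (!_frames.empty() && _frames.front().first <= result_id) {
            if (_frames.front().first == result_id)
                match = _frames.front().second;
            _frames.pop();
        }
        return match;
    }

private:
    std::mutex _mutex;
    std::queue<std::pair<int, std::string>> _frames;
};
```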
- VideoWorker receives the frames via the pbio::IRawImage interface. Create the VideoFrame header file: Add New > C++ > C++ Header File > VideoFrame. In the file videoframe.h, implement the pbio::IRawImage interface for the camera frames. The pbio::IRawImage interface allows you to get the pointer to the image data, its format, width, and height.
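As a rough illustration of what implementing such an interface involves, here is a stand-in sketch. The real pbio::IRawImage method names, format constants, and signatures may differ, and in the tutorial the adapter wraps a QImage rather than a raw byte buffer:

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Stand-in for an IRawImage-style interface: expose a data pointer plus
// dimensions of the frame.
class IRawImageLike {
public:
    virtual ~IRawImageLike() = default;
    virtual const uint8_t* data() const = 0;
    virtual int32_t width() const = 0;
    virtual int32_t height() const = 0;
};

// Adapter over a plain RGB24 byte buffer (the tutorial's adapter would
// return QImage::bits() and the QImage dimensions instead).
class BufferRawImage : public IRawImageLike {
public:
    BufferRawImage(std::vector<uint8_t> pixels, int32_t w, int32_t h)
        : _pixels(std::move(pixels)), _width(w), _height(h) {}
    const uint8_t* data() const override { return _pixels.data(); }
    int32_t width() const override { return _width; }
    int32_t height() const override { return _height; }

private:
    std::vector<uint8_t> _pixels;
    int32_t _width;
    int32_t _height;
};
```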
- In the addFrame method, pass the frames to VideoWorker. If any exceptions are thrown during the callback processing, they're rethrown in the checkExceptions method. Create the _frames queue to store the frames. This queue will contain the frame id and the corresponding image, so that we can find the frame that matches the processing result in TrackingCallback. To synchronize the changes in shared data, we use a mutex.
- Modify the getDataToDraw method: we won't draw the faces for which TrackingLostCallback was called.
- Modify the QCameraCapture class to catch the exceptions that may be thrown in Worker::addFrame.
- Create the DrawFunction class, which will contain a method to draw the tracking results on the image: Add New > C++ > C++ Class > Choose… > Class name – DrawFunction.
- In the ViewWindow constructor, pass the FacerecService pointer and the name of the configuration file of the tracking module when creating the Worker object. In the ViewWindow::draw method, draw the tracking result on the image by calling DrawFunction::Draw.
- Run the project. Now you should see that faces in the image are detected and tracked (they're highlighted with a green rectangle). You can find more info about using the VideoWorker object in the section Video Stream Processing.