Face Detection and Tracking in a Video Stream
In this tutorial, you'll learn how to detect and track faces in a video stream from your camera using the VideoWorker object from the Face SDK API. Tracked faces are highlighted with a green rectangle.
Besides Face SDK and Qt, you'll need a camera connected to your PC (for example, a webcam). You can build and run this project either on Windows or Ubuntu (v16.04 or higher).
Find the tutorial project in Face SDK: examples/tutorials/detection_and_tracking_with_video_worker
Create a Qt Project
- Run Qt and create a new project: File > New File or Project > Application > Qt Widgets Application > Choose...
- Name it, for example, 1_detection_and_tracking_with_video_worker and choose the path. Click Next and choose the necessary platform for your project in the Kit Selection section, for example, Desktop. Click Details and select the Release build configuration (Debug is not required in this project).
- Leave settings as default in the Class Information window and click Next. Then leave settings as default in the Project Management window and click Finish.
- Title the main window of our application: double-click the file Forms > mainwindow.ui in the project tree. Specify the window name in the Properties tab (the right part of the editor): windowTitle > Face SDK Tracking.
- To lay out widgets in a grid, drag-and-drop the Grid Layout object onto the MainWindow widget. Open the context menu of MainWindow by right-clicking and select Layout > Lay Out in a Grid. The Grid Layout object will be stretched to the size of the MainWindow widget. Rename the layout: layoutName > viewLayout.
- To run the project, click Run (Ctrl+R). You'll see an empty window with the title Face SDK Tracking.
Display the Image from Camera
- To use a camera in our project, add the Qt multimedia widgets. To do this, add the following line to the .pro file:
detection_and_tracking_with_video_worker.pro
- To receive the image from a camera, create a new class QCameraCapture: Add New > C++ > C++ Class > Choose… > Class name – QCameraCapture > Base class – QObject > Next > Project Management (default settings) > Finish. Create a new class CameraSurface in qcameracapture.h, which will provide the frames from the camera via the present callback.
qcameracapture.h
- Describe the implementation of this class in the qcameracapture.cpp file. Define the CameraSurface::CameraSurface constructor and the supportedPixelFormats method. All the image formats in the CameraSurface::supportedPixelFormats list are supported by Face SDK (RGB24, BGR24, NV12, NV21). With some cameras the image is received in the RGB32 format, so we add this format to the list. This format is not supported by Face SDK, so convert the image from RGB32 to RGB24.
qcameracapture.cpp
- Check the image format in the CameraSurface::start method. If the format is supported, start the camera. Otherwise, handle the exception.
qcameracapture.cpp
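The shape of this check can be sketched in plain C++ (PixelFormat and checkFormat are illustrative stand-ins, not the actual Qt or Face SDK names):

```cpp
#include <algorithm>
#include <stdexcept>
#include <vector>

// Illustrative stand-ins for the Qt pixel-format enum values.
enum class PixelFormat { RGB24, BGR24, NV12, NV21, RGB32, Jpeg };

// Formats the surface accepts: the Face SDK-supported ones plus RGB32,
// which is converted to RGB24 before being passed on.
const std::vector<PixelFormat> kSupported = {
    PixelFormat::RGB24, PixelFormat::BGR24,
    PixelFormat::NV12,  PixelFormat::NV21, PixelFormat::RGB32};

// Throws if the camera offers a format we cannot handle.
void checkFormat(PixelFormat f) {
    if (std::find(kSupported.begin(), kSupported.end(), f) == kSupported.end())
        throw std::runtime_error("Unsupported pixel format");
}
```

In the real CameraSurface::start, the same decision is made against the format reported by QVideoSurfaceFormat.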
- Process a new frame in the CameraSurface::present method. If the frame is successfully verified, emit the frameUpdatedSignal signal to update the frame. Next, connect this signal to the frameUpdatedSlot slot, where the frame will be processed.
qcameracapture.cpp
- The QCameraCapture constructor takes the pointer to a parent widget (parent), the camera id, and the image resolution (width and height), which will be stored in the relevant class fields.
qcameracapture.h
- Add the m_camera and m_surface camera objects to the QCameraCapture class.
qcameracapture.h
- To throw exceptions, include the stdexcept header in qcameracapture.cpp. Save the pointer to the parent widget, the camera id, and the image resolution in the initializer list of the QCameraCapture::QCameraCapture constructor. Get the list of available cameras in the constructor body. The list should contain at least one camera; otherwise, a runtime_error exception is thrown. Make sure that the list contains a camera with the requested id. Create a camera and connect the camera signals to the slots processing the object. When the camera status changes, the camera emits the statusChanged signal. Create the CameraSurface object to display the frames from the camera. Connect the CameraSurface::frameUpdatedSignal signal to the QCameraCapture::frameUpdatedSlot slot.
qcameracapture.cpp
- Stop the camera in the QCameraCapture destructor.
qcameracapture.h
qcameracapture.cpp
- Add the QCameraCapture::frameUpdatedSlot method, which processes the CameraSurface::frameUpdatedSignal signal. In this method, convert the QVideoFrame object to QImage and send a signal that a new frame is available. Create a pointer to the FramePtr image. If the image is received in the RGB32 format, convert it to RGB888.
qcameracapture.h
qcameracapture.cpp
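The RGB32-to-RGB24 step amounts to dropping the alpha byte of each pixel. A stand-alone sketch of that conversion (in the tutorial code itself, QImage::convertToFormat with QImage::Format_RGB888 performs it):

```cpp
#include <cstdint>
#include <vector>

// RGB32 packs each pixel as 0xffRRGGBB in a 32-bit word; RGB24 stores
// three bytes per pixel (R, G, B) with no alpha.
std::vector<uint8_t> rgb32ToRgb24(const std::vector<uint32_t>& src) {
    std::vector<uint8_t> dst;
    dst.reserve(src.size() * 3);
    for (uint32_t px : src) {
        dst.push_back(static_cast<uint8_t>((px >> 16) & 0xff));  // R
        dst.push_back(static_cast<uint8_t>((px >> 8) & 0xff));   // G
        dst.push_back(static_cast<uint8_t>(px & 0xff));          // B
    }
    return dst;
}
```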
- Add the methods to start and stop the camera to QCameraCapture.
qcameracapture.h
qcameracapture.cpp
- In the QCameraCapture::onStatusChanged method, process the change of the camera status to LoadedStatus. Check if the camera supports the requested resolution. If it does, set the requested resolution; otherwise, set the default resolution (640 x 480) specified by the default_res_width and default_res_height static fields.
qcameracapture.h
qcameracapture.cpp
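The fallback logic can be sketched as follows (pickResolution and the Resolution alias are illustrative; the real method queries the camera through Qt):

```cpp
#include <algorithm>
#include <utility>
#include <vector>

using Resolution = std::pair<int, int>;  // width, height

// Defaults used when the camera rejects the requested resolution.
const int default_res_width = 640;
const int default_res_height = 480;

// Picks the requested resolution if the camera supports it,
// otherwise falls back to 640x480.
Resolution pickResolution(const std::vector<Resolution>& supported,
                          Resolution requested) {
    if (std::find(supported.begin(), supported.end(), requested) !=
        supported.end())
        return requested;
    return {default_res_width, default_res_height};
}
```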
- Display the camera error messages, if any occur, in the cameraError method.
qcameracapture.h
qcameracapture.cpp
- Create a new Worker class: Add New > C++ > C++ Class > Choose… > Class name - Worker > Next > Finish. Through the addFrame method, the Worker class will save the last frame from the camera, and it will pass this frame on through the getDataToDraw method.
worker.h
worker.cpp
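A minimal sketch of this hand-off, with Frame standing in for the real image type:

```cpp
#include <memory>
#include <mutex>
#include <utility>

// Placeholder for the real frame/image type used in the project.
struct Frame { int id = -1; };
using FramePtr = std::shared_ptr<Frame>;

// addFrame() stores the latest camera frame; getDataToDraw() hands it
// to the drawing code. A mutex guards the shared pointer, since the two
// methods are called from different threads.
class Worker {
public:
    void addFrame(FramePtr frame) {
        const std::lock_guard<std::mutex> lock(_mutex);
        _lastFrame = std::move(frame);
    }
    // Returns the most recent frame (null before the first frame arrives).
    FramePtr getDataToDraw() {
        const std::lock_guard<std::mutex> lock(_mutex);
        return _lastFrame;
    }
private:
    std::mutex _mutex;
    FramePtr _lastFrame;
};
```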
- Frames will be displayed in the ViewWindow class. Create a ViewWindow widget: Add > New > Qt > Designer Form Class > Choose... > Template > Widget (default settings) > Next > Name – ViewWindow > Project Management (default settings) > Finish.
- In the editor (Design), drag-and-drop the Grid Layout object onto the widget. To do this, open the ViewWindow context menu by right-clicking and select Layout > Lay Out in a Grid. The Grid Layout object lets you place widgets in a grid and is stretched to the size of the ViewWindow widget. Then add the Label object to gridLayout and name it frame: QObject > objectName > frame.
- Delete the default text in QLabel > text.
- Add the _qCamera camera to the ViewWindow class and initialize it in the constructor. Using the camera_image_width and camera_image_height static fields, set the required image resolution to 1280x720. The _running flag stores the camera status: true means that the camera is running, false means that the camera is stopped.
viewwindow.h
viewwindow.cpp
- Add the Worker object to the ViewWindow class and initialize it in the constructor.
viewwindow.h
viewwindow.cpp
- Frames will be passed to Worker from QCameraCapture. Modify the QCameraCapture and ViewWindow classes accordingly.
qcameracapture.h
qcameracapture.cpp
viewwindow.cpp
- The QCameraCapture::newFrameAvailable signal is processed in the ViewWindow::draw slot, which displays the camera image on the frame widget.
viewwindow.h
viewwindow.cpp
- Start the camera in the runProcessing method and stop it in stopProcessing.
viewwindow.h
viewwindow.cpp
- Stop the camera in the ~ViewWindow destructor.
viewwindow.cpp
- Connect the camera widget to the main application window: create a view window and start processing in the MainWindow constructor. Stop the processing in the ~MainWindow destructor.
mainwindow.h
mainwindow.cpp
- Modify the main function to catch possible exceptions.
main.cpp
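The guarded main can be sketched like this, with runApp standing in for the creation of QApplication and MainWindow:

```cpp
#include <exception>
#include <functional>
#include <iostream>

// Runs the application body and converts any escaping exception into an
// error message and a non-zero exit code, mirroring the try-catch in main.
int guardedMain(const std::function<int()>& runApp) {
    try {
        return runApp();
    } catch (const std::exception& e) {
        std::cerr << "Error: " << e.what() << std::endl;
        return 1;
    } catch (...) {
        std::cerr << "Unknown error" << std::endl;
        return 1;
    }
}
```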
- Run the project. You'll see a window with the image from your camera.
Note: On Windows, the image from some cameras can be flipped or mirrored due to peculiarities of image processing in Qt. In this case, process the image, for example, using QImage::mirrored().
Detect and Track Faces in Video Stream
- Download and extract the Face SDK distribution as described in the Getting Started section. The distribution root folder contains the bin and lib folders, depending on your platform.
- To detect and track faces in the image from your camera, integrate Face SDK into your project. In the .pro file, specify the path to the Face SDK root folder in the FACE_SDK_PATH variable, which includes the necessary headers. Also, specify the path to the include folder (from Face SDK). If the paths are not specified, the exception “Empty path to Face SDK” will be thrown.
detection_and_tracking_with_video_worker.pro
Note: When you specify the path to Face SDK, use a forward slash ("/").
- [Linux only] To build the project with Face SDK, add the following option to the .pro file:
detection_and_tracking_with_video_worker.pro
- In addition, specify the path to the facerec library and configuration files. Create the FaceSdkParameters class, which will store the configuration (Add New > C++ > C++ Header File > FaceSdkParameters), and use it in MainWindow.
facesdkparameters.h
mainwindow.h
- Integrate Face SDK: add the necessary headers to mainwindow.h and the initFaceSdkService method to initialize the Face SDK services. Create a FacerecService object, a component used to create Face SDK modules, by calling the FacerecService::createService static method. Pass the path to the library and the path to the folder with the configuration files in a try-catch block to catch possible exceptions. If the initialization is successful, the initFaceSdkService function returns true. Otherwise, it returns false and you'll see a window with an exception message.
mainwindow.h
mainwindow.cpp
- Add a service initialization call in the MainWindow::MainWindow constructor. In case of an error, throw a std::runtime_error exception.
mainwindow.cpp
- Pass FacerecService and the Face SDK parameters to the ViewWindow constructor, where they will be used to create the VideoWorker tracking module. Save the service and the parameters to the class fields.
mainwindow.cpp
viewwindow.h
viewwindow.cpp
- Modify the Worker class for interaction with Face SDK. The Worker class takes the FacerecService pointer and the name of the configuration file of the tracking module. The Worker class creates the VideoWorker component from Face SDK, which is responsible for face tracking; it passes the frames to this component and processes the callbacks containing the tracking results. Implement the constructor: create the VideoWorker object, specifying the configuration file, the recognizer method (empty in this case, as faces are not recognized in this project), and the number of video streams (1 in this case, as we use only one camera).
worker.h
worker.cpp
Note: In addition to face detection and tracking, VideoWorker can be used for face recognition on several video streams. In this case, specify the recognizer method and the processing_threads_count and matching_threads_count parameters.
- Subscribe to the callbacks from the VideoWorker class: TrackingCallback (a face is detected and tracked) and TrackingLostCallback (a face is lost). Remove the subscriptions in the destructor.
worker.h
worker.cpp
- Include the cassert header to handle exceptions. The result in TrackingCallback is received in the form of the TrackingCallbackData structure, which stores data about all the faces being tracked. The preview output is synchronized with the result output: we cannot immediately display the frame passed to VideoWorker, as it will be processed a bit later, so frames are stored in a queue. When the result is obtained, find the frame that matches this result. Some frames may be skipped by VideoWorker under heavy load, which means that sometimes there is no matching result for a frame. In the algorithm below, the image corresponding to the last received frame is extracted from the queue. Save the detected faces for each frame to be able to use them later for visualization. To synchronize the shared data changes in TrackingCallback and TrackingLostCallback, use std::mutex.
worker.h
worker.cpp
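The queue-matching step can be sketched in plain C++ (FrameQueue and popUpTo are illustrative names; std::string stands in for the image type):

```cpp
#include <mutex>
#include <queue>
#include <string>
#include <utility>

// Frames are queued with their ids; when a tracking result for frame
// `resultId` arrives, everything up to and including that frame is
// dropped and the matching image is returned.
class FrameQueue {
public:
    void push(int id, std::string image) {
        const std::lock_guard<std::mutex> lock(_mutex);
        _frames.push({id, std::move(image)});
    }
    // Discards skipped frames and returns the image matching resultId
    // (empty if that id was never queued).
    std::string popUpTo(int resultId) {
        const std::lock_guard<std::mutex> lock(_mutex);
        std::string match;
        while (!_frames.empty() && _frames.front().first <= resultId) {
            if (_frames.front().first == resultId)
                match = _frames.front().second;
            _frames.pop();
        }
        return match;
    }
private:
    std::mutex _mutex;  // shared between the camera thread and callbacks
    std::queue<std::pair<int, std::string>> _frames;
};
```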
- Implement TrackingLostCallback, in which we mark that the tracked face has left the frame.
worker.cpp
- VideoWorker receives the frames via the pbio::IRawImage interface. Create the VideoFrame header file: Add New > C++ > C++ Header File > VideoFrame. Include it in the videoframe.h file and implement the pbio::IRawImage interface for the QImage class. The pbio::IRawImage interface allows you to get the pointer to the image data, its format, width, and height.
videoframe.h
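A sketch of such an adapter, with IRawImage reduced to an illustrative local interface (the real pbio::IRawImage declaration lives in the Face SDK headers and should be used instead):

```cpp
#include <cstdint>
#include <vector>

// Simplified stand-ins for the Face SDK declarations.
enum class Format { FORMAT_RGB };

struct IRawImage {
    virtual ~IRawImage() = default;
    virtual const uint8_t* data() const = 0;
    virtual int32_t width() const = 0;
    virtual int32_t height() const = 0;
    virtual Format format() const = 0;
};

// Wraps an RGB888 pixel buffer (as QImage would provide) without copying.
class VideoFrame : public IRawImage {
public:
    VideoFrame(const std::vector<uint8_t>& pixels, int32_t w, int32_t h)
        : _pixels(pixels), _w(w), _h(h) {}
    const uint8_t* data() const override { return _pixels.data(); }
    int32_t width() const override { return _w; }
    int32_t height() const override { return _h; }
    Format format() const override { return Format::FORMAT_RGB; }
private:
    const std::vector<uint8_t>& _pixels;  // not owned; caller keeps it alive
    int32_t _w, _h;
};
```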
- In the addFrame method, pass the frames to VideoWorker. Any exceptions that occur during the callback processing are rethrown in the checkExceptions method. Create the _frames queue to store the frames. This queue contains the frame id and the corresponding image, so that the frame matching the processing result can be found in TrackingCallback. To synchronize the changes in shared data, use std::mutex.
worker.h
worker.cpp
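The rethrow pattern can be sketched as follows (CallbackErrors and its method names are illustrative): callbacks run on SDK threads, so an exception raised there is captured as a std::exception_ptr and rethrown later on the thread that calls checkExceptions.

```cpp
#include <exception>
#include <mutex>
#include <utility>

class CallbackErrors {
public:
    // Call inside a callback's catch(...) block to remember the error.
    void captureCurrent() {
        const std::lock_guard<std::mutex> lock(_mutex);
        _error = std::current_exception();
    }
    // Call from addFrame on the camera thread; rethrows a captured error
    // once, then clears it.
    void checkExceptions() {
        std::exception_ptr error;
        {
            const std::lock_guard<std::mutex> lock(_mutex);
            std::swap(error, _error);
        }
        if (error) std::rethrow_exception(error);
    }
private:
    std::mutex _mutex;
    std::exception_ptr _error;
};
```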
- Modify the getDataToDraw method: do not draw the faces for which TrackingLostCallback was called.
worker.cpp
- Modify the QCameraCapture class to catch the exceptions that can be thrown in Worker::addFrame.
qcameracapture.cpp
- Create the DrawFunction class, which will contain a method for drawing the tracking results in the image: Add New > C++ > C++ Class > Choose… > Class name – DrawFunction.
drawfunction.h
drawfunction.cpp
- In the ViewWindow constructor, pass the FacerecService pointer and the name of the configuration file of the tracking module when creating Worker. In the Draw method, draw the tracking result in the image by calling DrawFunction::Draw.
viewwindow.cpp
- Run the project. Now you can see that the faces in the image are detected and tracked (they are highlighted with a green rectangle). You can find more information about using the VideoWorker object in the Video Stream Processing section.