Version: Eyedid Beta

API Docs (C++)

Overview

| Class | Description |
| --- | --- |
| GazeTracker | Class that generates gaze tracking data |
| ITrackingCallback | Gaze tracking metrics data listener |
| ICalibrationCallback | Calibration process listener |
| CoordConverterV2 | Matrix-based coordinate converter |
| EyedidBoolean | Substitute for the bool type |
| EyedidTrackerOptions | Struct that sets the options used when creating a GazeTracker |
| EyedidCalibrationPointNum | Enum with the number of points used for calibration |
| EyedidCalibrationAccuracy | Enum with options that manage the calibration process and accuracy |
| EyedidTrackingState | Enum with tracking state types used in EyedidGazeData |
| EyedidEyeMovementState | Enum with eye movement state types used in EyedidGazeData |
| EyedidGazeData | Struct with data related to gaze |
| EyedidFaceData | Struct with data related to face recognition |
| EyedidBlinkData | Struct with data related to blink detection |
| EyedidUserStatusData | Struct with data related to user status |

global_init

void global_init(const char* file = "eyedid_core.dll")

This function sets up the program environment that the Eyedid SDK needs.
It must be called at least once before any other Eyedid SDK function is called.
Internally it is a DLL loader; on platforms other than Windows it is a no-op.

Exceptions

Failed to load a dll

The Eyedid SDK requires several DLL files at runtime (included in the SDK package), such as eyedid_core.dll and a number of third-party library DLLs.
See eyedid/bin or the error logs for the full list. eyedid::global_init() will throw if one or more of these files are missing.
Make sure they are in the same directory as the executable, or use add_custom_command in CMakeLists.txt to copy them automatically after the build.
If add_custom_command does not work, copy the DLL files into the executable's directory manually.

Failed to find XXX from dll

The DLL files in the Eyedid SDK are broken, or the wrong version is being used.
Re-download the SDK from Eyedid Console and try again.
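
Example

eyedid::global_init() throws when a required DLL cannot be loaded, so a typical program calls it once at startup and reports the failure. The sketch below is illustrative; catching std::exception is an assumption, since the exact exception type thrown is not specified here.

#include <exception>
#include <iostream>

int main() {
  try {
    eyedid::global_init();  // loads eyedid_core.dll and its dependent DLLs on Windows
  } catch (const std::exception& e) {  // assumed exception base type
    std::cerr << "Failed to initialize the Eyedid SDK: " << e.what() << std::endl;
    return 1;
  }

  // ... create and use a GazeTracker ...
  return 0;
}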

getVersionStr

std::string getVersionStr()

Return the Eyedid SDK version information as a string.

| Return | Type | Description |
| --- | --- | --- |
| version_str | std::string | Eyedid SDK version in std::string |

Example

const std::string version = eyedid::getVersionStr();

getVersionInt

int32_t getVersionInt()

Return the Eyedid SDK version as a 9-digit integer.
The version has the form XXYYZZZZZ, where XX is the major version, YY the minor version, and ZZZZZ the patch version.

| Return | Type | Description |
| --- | --- | --- |
| version_int | int32_t | Eyedid SDK version as a 9-digit integer |

Example

int32_t version = eyedid::getVersionInt();
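
Given the XXYYZZZZZ layout described above, the individual components can be split out arithmetically; continuing from the example above:

const int32_t major = version / 10000000;        // XX
const int32_t minor = (version / 100000) % 100;  // YY
const int32_t patch = version % 100000;          // ZZZZZ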

GazeTracker

class GazeTracker

GazeTracker is the class that generates gaze tracking data.

Summary

Constructor and Destructor
GazeTracker(const converter_type&)
~GazeTracker()
Functions
initialize
deinitialize
isTrackerInitialized
setFaceDistance
getFaceDistance
setCameraFOV
getCameraFOV
setTrackingCallback
setCalibrationCallback
removeTrackingCallback
removeCalibrationCallback
setTrackingFps
addFrame
setTargetBoundRegion
startCalibration
stopCalibration
startCollectSamples
setCalibrationData
setAttentionRegion
getAttentionRegion
removeAttentionRegion
converter

GazeTracker

explicit GazeTracker(const converter_type& coord_converter = makeNoOpConverter<float>())
| Parameters | Type | Description |
| --- | --- | --- |
| coord_converter | converter_type | Coordinate system converter. See Custom Coordinate |

Example

// Coordinate system is camera-centered millimeter coordinates (default)
GazeTracker gaze_tracker;

// Coordinate system is display pixel coordinates
const auto camera2display = eyedid::makeDefaultCameraToDisplayConverter<float>(...);
GazeTracker gaze_tracker_custom(camera2display);

~GazeTracker

~GazeTracker()

Effectively calls GazeTracker::deinitialize() and releases the internal object.

initialize

int GazeTracker::initialize(
    const std::string& licenseKey,
    const EyedidTrackerOptions& options
)

Constructing a usable GazeTracker requires authentication with a license key. The EyedidTrackerOptions argument configures the GazeTracker, as shown in the example below. After initialization, the authentication result is returned (and logged) as a status code.

| Parameters | Type | Description |
| --- | --- | --- |
| licenseKey | const std::string& | Key generated from the Eyedid SDK Console |
| options | const EyedidTrackerOptions& | Options for configuring the GazeTracker |

| Return Type | Description |
| --- | --- |
| int | Authentication code. See Authentication for more details. |

Example

EyedidTrackerOptions option = {};
option.use_user_status = kEyedidTrue;
option.use_blink = kEyedidTrue;
const int errCode = gaze_tracker.initialize("licenseKey", option);
if (errCode == 0) {
  // Initialization succeeded
} else {
  // Initialization failed
}

deinitialize

void GazeTracker::deinitialize()

Deinitialize the internal object. Calling other member functions (except for initialize) after calling deinitialize() results in undefined behavior.

isTrackerInitialized

bool GazeTracker::isTrackerInitialized() const;
| Return Type | Description |
| --- | --- |
| bool | Returns true if initialized, false otherwise. |
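
Example

A typical guard before calling other member functions:

if (!gaze_tracker.isTrackerInitialized()) {
  // initialize() has not succeeded yet; call it before using the tracker
}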

setFaceDistance

void GazeTracker::setFaceDistance(int cm)

Set the distance between the user's face and the camera, in centimeters. For better gaze tracking performance, an accurate face distance should be set with this function. The default value is 60 cm.

| Parameters | Type | Description |
| --- | --- | --- |
| cm | int | Distance between the user's face and the camera, in cm |
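
Example

For example, if the user sits roughly 55 cm from the camera:

gaze_tracker.setFaceDistance(55);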

getFaceDistance

int getFaceDistance() const
| Return Type | Description |
| --- | --- |
| int | Distance between user face and camera in cm |

setCameraFOV

void GazeTracker::setCameraFOV(float fov);

If you know the horizontal field of view (FOV) of the webcam, you can set it using this function. This can help improve accuracy compared to using the setFaceDistance function.

| Parameters | Type | Description |
| --- | --- | --- |
| fov | float | The camera's horizontal field of view (FOV), as a floating-point value in radians |

Example

gaze_tracker.setCameraFOV(M_PI / 3.f); // 60 degrees

getCameraFOV

float GazeTracker::getCameraFOV();

Returns the current horizontal field of view (FOV) value of the camera.

| Return Type | Description |
| --- | --- |
| float | Horizontal field of view (FOV) of the camera, in radians |

Example

auto fov = gaze_tracker.getCameraFOV(); 
std::cout << "Current FOV: " << fov << std::endl;

setTrackingCallback

void GazeTracker::setTrackingCallback(eyedid::ITrackingCallback *listener)

This function sets the callback for gaze tracking metrics. Note that GazeTracker does not own the instance.

| Parameters | Type | Description |
| --- | --- | --- |
| listener | ITrackingCallback* | Address of the gaze tracking metrics listener instance |

Example

MyTrackingCallback listener;  // a user-defined class derived from eyedid::ITrackingCallback (see ITrackingCallback below)
gaze_tracker.setTrackingCallback(&listener);

setCalibrationCallback

void GazeTracker::setCalibrationCallback(eyedid::ICalibrationCallback* listener);

Set the calibration callback for gaze tracking.
Note that GazeTracker does not own the instance.

| Parameters | Type | Description |
| --- | --- | --- |
| listener | ICalibrationCallback* | Address of the calibration listener instance |

Example

MyCalibrationCallback calibration_callback;  // a user-defined class derived from eyedid::ICalibrationCallback (see ICalibrationCallback below)
gaze_tracker.setCalibrationCallback(&calibration_callback);

removeTrackingCallback

void GazeTracker::removeTrackingCallback()

Remove tracking callback interface for GazeTracking.

removeCalibrationCallback

void GazeTracker::removeCalibrationCallback()

Remove calibration callback interface for GazeTracking.

setTrackingFps

void GazeTracker::setTrackingFps(int fps)

Set the maximum input frames per second (FPS) for gaze tracking. The actual FPS can be lower than the set value due to device limitations. Frames above this rate are skipped internally. The default value is 30 FPS.

| Parameters | Type | Description |
| --- | --- | --- |
| fps | int | Maximum FPS (frames per second) for gaze tracking |

Example

gaze_tracker.setTrackingFps(15);

addFrame

bool GazeTracker::addFrame(int64_t time_stamp, uint8_t *buffer, int width, int height)

Add an image frame to the GazeTracker for gaze tracking. addFrame should be called continuously with image frames from the camera stream. The image should be in RGB color format. Note that some frames can be ignored inside the Eyedid SDK, and therefore not calculated:

  • If frames are input very often (don't worry; the Eyedid SDK will run at 30+ FPS in real time on most recent hardware).
  • Even if the hardware specs are good, some frames can be ignored to guarantee that ITrackingCallback::OnMetrics() timestamps are in increasing order. This happens rarely.

For each calculated input image, ITrackingCallback::OnMetrics() is called with the input timestamp and the calculated values. Image data is copied inside the function, so it is safe to destroy the image object after calling addFrame.
| Parameters | Type | Description |
| --- | --- | --- |
| time_stamp | int64_t | Timestamp of the given frame |
| buffer | uint8_t* | Image data buffer in RGB format. It is safe to release the buffer after passing it to GazeTracker, since the buffer is deep-copied. |
| width | int | Width of the image |
| height | int | Height of the image |

| Return Type | Description |
| --- | --- |
| bool | Returns false if the input frame is omitted, true otherwise. |

Example

cv::Mat frame, input;
while (true) {
  video >> frame; // read frame
  if (frame.empty()) break;

  cv::cvtColor(frame, input, cv::COLOR_BGR2RGB); // convert color format
  gaze_tracker.addFrame(current_time(), input.data, input.cols, input.rows);
}
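
current_time() in the loop above is a user-supplied helper, not part of the SDK. A minimal sketch using std::chrono, assuming millisecond timestamps are acceptable (the expected unit is not stated here):

#include <chrono>
#include <cstdint>

int64_t current_time() {
  using namespace std::chrono;
  // A monotonic clock keeps the timestamps strictly increasing between frames
  return duration_cast<milliseconds>(steady_clock::now().time_since_epoch()).count();
}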

setTargetBoundRegion

void GazeTracker::setTargetBoundRegion(float left, float top, float right, float bottom)

This function sets the gaze tracking area to a specific region. By default, the entire screen of the main monitor is used.

| Parameters | Type | Description |
| --- | --- | --- |
| left | float | The left boundary of the tracking area |
| top | float | The top boundary of the tracking area |
| right | float | The right boundary of the tracking area |
| bottom | float | The bottom boundary of the tracking area |

Example

gaze_tracker.setTargetBoundRegion(100, 200, 2000, 1000); // Set the tracking area: left=100px, top=200px, right=2000px, bottom=1000px

startCalibration

void GazeTracker::startCalibration(
    EyedidCalibrationPointNum point_num,
    EyedidCalibrationAccuracy criteria,
    float left, float top, float right, float bottom);

Start the calibration process. point_num is the number of calibration points; criteria is the option that manages the calibration process and accuracy. left, top, right, and bottom define the calibration region of interest (ROI).

| Parameters | Type | Description |
| --- | --- | --- |
| point_num | EyedidCalibrationPointNum | Number of points used for calibration |
| criteria | EyedidCalibrationAccuracy | Accuracy option of the calibration (Note: currently, only the default implementation is available.) |
| left | float | Distance to the left edge of the calibration ROI |
| top | float | Distance to the top edge of the calibration ROI |
| right | float | Distance to the right edge of the calibration ROI |
| bottom | float | Distance to the bottom edge of the calibration ROI |

Example

gaze_tracker.startCalibration(
    kEyedidCalibrationPointFive,
    kEyedidCalibrationAccuracyDefault,
    0.0f, 0.0f, 1280.0f, 720.0f);

stopCalibration

void GazeTracker::stopCalibration()

Stop the current calibration process.

startCollectSamples

void GazeTracker::startCollectSamples()

Start collecting data for the next calibration point.

This should be called in OnCalibrationNextPoint when data for a new calibration point needs to be collected.

Example

if (isCalibrating) {
  gaze_tracker.startCollectSamples();
}

setCalibrationData

void GazeTracker::setCalibrationData(const std::vector<float>& calibrationData)

Set previously stored calibration data, so the user does not have to run the calibration process again.

| Parameters | Type | Description |
| --- | --- | --- |
| calibrationData | const std::vector<float>& | Stored calibration data |

Example

  gaze_tracker.setCalibrationData(calibrationData);
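
A common pattern is to persist the vector received from the calibration finish callback and restore it on the next run, so the user does not have to recalibrate. A sketch; the binary file format below is illustrative, not an SDK feature:

#include <fstream>
#include <vector>

// Save (e.g., right after the calibration finish callback delivers calibrationData)
std::ofstream out("calibration.dat", std::ios::binary);
out.write(reinterpret_cast<const char*>(calibrationData.data()),
          calibrationData.size() * sizeof(float));
out.close();

// Later: load the stored data and apply it without running calibration again
std::ifstream in("calibration.dat", std::ios::binary | std::ios::ate);
const std::streamoff bytes = in.tellg();
std::vector<float> stored(bytes / sizeof(float));
in.seekg(0);
in.read(reinterpret_cast<char*>(stored.data()), stored.size() * sizeof(float));
gaze_tracker.setCalibrationData(stored);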

setAttentionRegion

void GazeTracker::setAttentionRegion(float left, float top, float right, float bottom);

Set the user-status (attention) ROI.

| Parameters | Type | Description |
| --- | --- | --- |
| left | float | Distance to the left edge of the ROI rectangle |
| top | float | Distance to the top edge of the ROI rectangle |
| right | float | Distance to the right edge of the ROI rectangle |
| bottom | float | Distance to the bottom edge of the ROI rectangle |

getAttentionRegion

std::vector<float> GazeTracker::getAttentionRegion() const;

Get the user-status (attention) ROI.

| Return Type | Description |
| --- | --- |
| std::vector<float> | Contains the value from setAttentionRegion if set; an empty std::vector otherwise. |
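
Example

A short sketch combining setAttentionRegion and getAttentionRegion; the pixel values are illustrative:

// Watch a 600 x 400 region whose top-left corner is at (100, 100)
gaze_tracker.setAttentionRegion(100.f, 100.f, 700.f, 500.f);

const std::vector<float> roi = gaze_tracker.getAttentionRegion();
if (!roi.empty()) {
  // roi holds the values previously passed to setAttentionRegion
}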

removeAttentionRegion

void GazeTracker::removeAttentionRegion();

Remove the user-status (attention) ROI.

converter

converter_type& GazeTracker::converter()
const converter_type& GazeTracker::converter() const

Access or modify GazeTracker's coordinate converter. See Custom Coordinate.

ITrackingCallback

class ITrackingCallback {
 public:
  ITrackingCallback() = default;
  virtual ~ITrackingCallback() = default;

  virtual void OnMetrics(/* See below */) = 0;
};

Summary

Abstract Functions
OnMetrics

OnMetrics

virtual void ITrackingCallback::OnMetrics(
    std::uint64_t timestamp,
    EyedidGazeData gaze_data,
    EyedidFaceData face_data,
    EyedidBlinkData blink_data,
    EyedidUserStatusData user_status_data)

Tracking metrics callback.

| Parameters | Type | Description |
| --- | --- | --- |
| timestamp | uint64_t | Timestamp of the image from GazeTracker::addFrame |
| gaze_data | EyedidGazeData | Gaze information for the image with the specified timestamp from GazeTracker::addFrame |
| face_data | EyedidFaceData | Face information for the image with the specified timestamp from GazeTracker::addFrame |
| blink_data | EyedidBlinkData | Blink information for the image with the specified timestamp from GazeTracker::addFrame |
| user_status_data | EyedidUserStatusData | User status information for the image with the specified timestamp from GazeTracker::addFrame |
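
Example

Because ITrackingCallback is abstract, it has to be subclassed. A minimal sketch; the class name MyTrackingCallback and the way the metrics are used are illustrative, not part of the SDK:

#include <cstdint>
#include <iostream>

class MyTrackingCallback : public eyedid::ITrackingCallback {
 public:
  void OnMetrics(std::uint64_t timestamp,
                 EyedidGazeData gaze_data,
                 EyedidFaceData face_data,
                 EyedidBlinkData blink_data,
                 EyedidUserStatusData user_status_data) override {
    if (gaze_data.tracking_state == kEyedidTrackingSuccess) {
      // Use the gaze point in the tracker's coordinate system
      std::cout << timestamp << ": (" << gaze_data.x << ", " << gaze_data.y << ")\n";
    }
  }
};

MyTrackingCallback listener;
gaze_tracker.setTrackingCallback(&listener);  // GazeTracker does not take ownership of the listener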

ICalibrationCallback


class ICalibrationCallback {
 public:
  ICalibrationCallback() = default;
  virtual ~ICalibrationCallback() = default;

  virtual void OnCalibrationProgress(float progress) = 0;
  virtual void OnCalibrationNextPoint(float x, float y) = 0;
  virtual void OnCalibrationFinish(const std::vector<float>& data) = 0;
};

Summary

Abstract Functions
OnCalibrationNextPoint
OnCalibrationProgress
OnCalibrationFinished

OnCalibrationNextPoint

void ICalibrationCallback::OnCalibrationNextPoint(float x, float y)

Coordinates of the next calibration point.

x and y are the coordinates of the current calibration point.

The point (x, y) should be focused on during the calibration process.

startCollectSamples should be called to collect data for the next calibration point.

| Parameters | Type | Description |
| --- | --- | --- |
| x | float | x coordinate of the current calibration point |
| y | float | y coordinate of the current calibration point |

Example

void OnCalibrationNextPoint(float x, float y) {
  this->render_point_x = x;
  this->render_point_y = y;

  gaze_tracker.startCollectSamples();
}

OnCalibrationProgress

void ICalibrationCallback::OnCalibrationProgress(float progress)

Calibration progress callback.

progress is the calibration progress of the current calibration point (0.0 ~ 1.0).

The next calibration point is guided when the progress reaches 1.0.

The progress is 0.0 when collection for the current point starts and 1.0 when it finishes.

| Parameters | Type | Description |
| --- | --- | --- |
| progress | float | Calibration progress of the current calibration point |

Example

void OnCalibrationProgress(float progress) {
  if (progress >= 1.0) {
    // Collect next calibration point data
  } else {
    // Collect current calibration point data
  }
}

OnCalibrationFinished

void ICalibrationCallback::OnCalibrationFinished(const std::vector<float>& calibrationData)

Calibration finish callback.

calibrationData is the resulting calibration data; it can be stored and later passed to GazeTracker::setCalibrationData.

| Parameters | Type | Description |
| --- | --- | --- |
| calibrationData | const std::vector<float>& | Resulting calibration data |
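
Example

Putting the three callbacks together, a minimal listener might look like the following sketch. The class name, members, and the way the data is handled are illustrative; the finish callback is written with the name used in the class declaration above (OnCalibrationFinish), while this section's heading spells it OnCalibrationFinished, so follow whichever name your SDK header actually declares:

#include <vector>

class MyCalibrationCallback : public eyedid::ICalibrationCallback {
 public:
  void OnCalibrationNextPoint(float x, float y) override {
    point_x = x;   // draw the calibration target at (x, y)
    point_y = y;
    gaze_tracker.startCollectSamples();  // assumes a reachable GazeTracker instance, as in the earlier example
  }

  void OnCalibrationProgress(float progress) override {
    current_progress = progress;  // update a progress indicator (0.0 ~ 1.0)
  }

  void OnCalibrationFinish(const std::vector<float>& calibrationData) override {
    stored_data = calibrationData;  // keep the result, e.g. to pass to setCalibrationData later
  }

  float point_x = 0.f;
  float point_y = 0.f;
  float current_progress = 0.f;
  std::vector<float> stored_data;
};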

CoordConverterV2


template <typename T = double>
class CoordConverterV2 {
 public:
  using value_type = T;
  using transform_type = Matrix<value_type, 2, 2>;
  using translate_type = Matrix<value_type, 2, 1>;
  using coordinate_type = Matrix<value_type, 2, 1>;

  /** default constructor: R=I, T=Zeros */
  constexpr CoordConverterV2() = default;
  CoordConverterV2(const transform_type& r, const translate_type& t);

  CoordConverterV2& transform(const transform_type& t);
  transform_type& transform();
  const transform_type& transform() const { return transform_; }

  CoordConverterV2& translate(const translate_type& t);
  translate_type& translate();
  const translate_type& translate() const;

  coordinate_type convert(const coordinate_type& m) const;
  coordinate_type revert(const coordinate_type& m) const;
};

Helper functions

CoordConverterV2<T> eyedid::makeCameraToDisplayConverter(const Point<float>& display_tl, const Point<float>& display_pixels, const Point<float>& display_mm)
CoordConverterV2<T> eyedid::makeDefaultCameraToDisplayConverter(const Point<float>& display_pixels, const Point<float>& display_mm)
CoordConverterV2<T> eyedid::makeDefaultCameraToDisplayConverter(float px_width, float px_height, float mm_width, float mm_height)
CoordConverterV2<T> eyedid::makeNoOpConverter()

See the header file comments for details.
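
Example

For example, to receive gaze output in display pixel coordinates for a monitor whose resolution and physical size are known, one of the helpers can be passed to the GazeTracker constructor. A sketch; the 1920 x 1080 px and 345 x 194 mm values are illustrative:

// Display: 1920 x 1080 pixels, roughly 345 mm x 194 mm
const auto camera2display =
    eyedid::makeDefaultCameraToDisplayConverter<float>(1920.f, 1080.f, 345.f, 194.f);
GazeTracker gaze_tracker(camera2display);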

EyedidBoolean

typedef enum EyedidBoolean {
  kEyedidFalse = 0,
  kEyedidTrue = 1,
} EyedidBoolean;

A substitute for the bool type.

EyedidTrackerOptions

typedef struct EyedidTrackerOptions {
  EyedidBoolean use_blink = kEyedidFalse;
  EyedidBoolean use_user_status = kEyedidFalse;
  EyedidBoolean use_gaze_filter = kEyedidTrue;
  EyedidBoolean stream_mode = kEyedidTrue;
  float camera_fov = static_cast<float>(M_PI) / 4.f;
  int max_concurrency = 0;
} EyedidTrackerOptions;

This structure defines the module options related to the use of GazeTracker.

| Name | Type | Description |
| --- | --- | --- |
| use_blink | EyedidBoolean | Enables or disables blink detection. Default is kEyedidFalse. |
| use_user_status | EyedidBoolean | Enables or disables user status detection. Default is kEyedidFalse. |
| use_gaze_filter | EyedidBoolean | Enables or disables gaze data filtering. Default is kEyedidTrue. |
| stream_mode | EyedidBoolean | Enables or disables stream mode for gaze tracking. In stream mode, some input frames may be dropped to maintain real-time performance. Default is kEyedidTrue. |
| camera_fov | float | The horizontal field of view (FOV) of the camera, in radians. Default is π/4 radians (45 degrees). |
| max_concurrency | int | Maximum number of concurrent threads used for processing. Default is 0, which automatically determines the optimal number of threads based on the system's hardware capabilities. |

EyedidCalibrationPointNum

typedef enum EyedidCalibrationPointNum {
  kEyedidCalibrationPointOne = 1,
  kEyedidCalibrationPointFive = 5,
} EyedidCalibrationPointNum;

The number of points for calibration.

| Name | Description |
| --- | --- |
| kEyedidCalibrationPointOne | One-point calibration mode |
| kEyedidCalibrationPointFive | Five-point calibration mode |

EyedidCalibrationAccuracy

typedef enum EyedidCalibrationAccuracy {
  kEyedidCalibrationAccuracyDefault = 0,
  //kEyedidCalibrationAccuracyLow,
  //kEyedidCalibrationAccuracyHigh,
} EyedidCalibrationAccuracy;

The enum containing options that manage calibration process and accuracy.

| Name | Description |
| --- | --- |
| kEyedidCalibrationAccuracyDefault | Starts default calibration |
| kEyedidCalibrationAccuracyLow | Starts calibration with a lower threshold for each step (may result in lower accuracy). Currently commented out in the enum above. |
| kEyedidCalibrationAccuracyHigh | Starts calibration with a higher threshold for each step (likely to result in higher accuracy). Currently commented out in the enum above. |

EyedidGazeData

typedef struct EyedidGazeData {
  float x;
  float y;
  float fixation_x;
  float fixation_y;
  EyedidTrackingState tracking_state;
  EyedidEyeMovementState movement_state;
} EyedidGazeData;

This struct encapsulates information related to gaze.

| Name | Type | Description |
| --- | --- | --- |
| x | float | x coordinate of the gaze point |
| y | float | y coordinate of the gaze point |
| fixation_x | float | x coordinate of the last fixation point |
| fixation_y | float | y coordinate of the last fixation point |
| tracking_state | EyedidTrackingState | Tracking state value |
| movement_state | EyedidEyeMovementState | Eye movement state value |

EyedidFaceData

typedef struct EyedidFaceData {
  float score;
  float left;
  float top;
  float right;
  float bottom;
  float yaw;
  float pitch;
  float roll;
  float center_x;
  float center_y;
  float center_z;
} EyedidFaceData;

This struct encapsulates information related to face.

| Name | Type | Description |
| --- | --- | --- |
| score | float | Facial recognition confidence value (0.0 ~ 1.0) |
| left | float | Left boundary of the face bounding box |
| top | float | Top boundary of the face bounding box |
| right | float | Right boundary of the face bounding box |
| bottom | float | Bottom boundary of the face bounding box |
| yaw | float | Yaw is the rotation around the Y-axis of an object. It is also known as the heading angle. In aviation, it is often referred to as the bearing or compass angle. |
| pitch | float | Pitch is the rotation around the X-axis of an object. It is also known as the attitude angle. In aviation, it is often referred to as the nose-up or nose-down angle. |
| roll | float | Roll is the rotation around the Z-axis of an object. It is also known as the bank angle. In aviation, it is often referred to as the wing-up or wing-down angle. |
| center_x | float | The x distance of the center of the face from the camera, in mm |
| center_y | float | The y distance of the center of the face from the camera, in mm |
| center_z | float | The z distance of the center of the face from the camera, in mm |

EyedidBlinkData

typedef struct EyedidBlinkData {
  EyedidBoolean is_blink;
  EyedidBoolean is_blink_left;
  EyedidBoolean is_blink_right;
  float left_openness;
  float right_openness;
} EyedidBlinkData;

This struct encapsulates information related to blink.

| Name | Type | Description |
| --- | --- | --- |
| is_blink | EyedidBoolean | Blink flag |
| is_blink_left | EyedidBoolean | Left eye blink flag |
| is_blink_right | EyedidBoolean | Right eye blink flag |
| left_openness | float | Openness ratio of the left eye |
| right_openness | float | Openness ratio of the right eye |

EyedidUserStatusData

typedef struct EyedidUserStatusData {
  EyedidBoolean is_drowsy;
  float drowsiness_intensity;
  float attention_score;
} EyedidUserStatusData;

This struct encapsulates information related to the user's status.

| Name | Type | Description |
| --- | --- | --- |
| is_drowsy | EyedidBoolean | User drowsiness flag |
| drowsiness_intensity | float | Level of drowsiness intensity (0 to 1) |
| attention_score | float | User attention rate score |

EyedidTrackingState

typedef enum EyedidTrackingState {
  kEyedidTrackingSuccess = 0,
  kEyedidTrackingFaceMissing,
  kEyedidTrackingGazeNotFound,
} EyedidTrackingState;

The enum containing tracking state types used in EyedidGazeData.

| Name | Description |
| --- | --- |
| kEyedidTrackingSuccess | Face alignment is in the best position; tracking is successful with valid accuracy |
| kEyedidTrackingFaceMissing | Face is missing (gaze tracking has failed) |
| kEyedidTrackingGazeNotFound | Gaze tracking is not possible because the eyes are closed |

EyedidEyeMovementState

typedef enum EyedidEyeMovementState {
  kEyedidEyeMovementFixation = 0,
  kEyedidEyeMovementSaccade = 2,
  kEyedidEyeMovementUnknown = 3,
} EyedidEyeMovementState;

The enum containing eye movement state types used in EyedidGazeData.

| Name | Description |
| --- | --- |
| kEyedidEyeMovementFixation | The eye movement is a fixation |
| kEyedidEyeMovementSaccade | The eye movement is a saccade |
| kEyedidEyeMovementUnknown | The eye movement state is unknown |