API Descriptions (C++)

This section introduces the basic concepts and usage of the Camport C++ API. For details on how to set camera features using the APIs, please refer to the downloadable PDF documentation.

Library Loading and Unloading

Before using the Camport SDK to control a camera and capture images, load the Camport SDK library into your application. Similarly, unload the library before the application exits.

inline TY_STATUS    TYInitLib                 (void);
TY_CAPI             TYDeinitLib               (void);

Get the version information of Camport SDK.

TY_CAPI             TYLibVersion              (TY_VERSION_INFO* version);

Interface Control

Before discovering devices, update the status of the host computer's USB, Ethernet, and wireless network interfaces to obtain interface handles. Release each interface handle before the application exits.

Update interface status.

TY_CAPI             TYUpdateInterfaceList     ();

Get the number of interfaces.

TY_CAPI             TYGetInterfaceNumber      (uint32_t* pNumIfaces);

Get the interface list.

TY_CAPI             TYGetInterfaceList        (TY_INTERFACE_INFO* pIfaceInfos, uint32_t bufferCount, uint32_t* filledCount);

Check if the interface is valid.

TY_CAPI             TYHasInterface            (const char* ifaceID, bool* value);

Open the interface.

TY_CAPI             TYOpenInterface           (const char* ifaceID, TY_INTERFACE_HANDLE* outHandle);

Release the interface handle.

TY_CAPI             TYCloseInterface          (TY_INTERFACE_HANDLE ifaceHandle);

Device Control

TYUpdateDeviceList updates the list of devices mounted on the specified interface.

TY_CAPI             TYUpdateDeviceList        (TY_INTERFACE_HANDLE ifaceHandle);

TYGetDeviceNumber gets the number of devices mounted on the specified interface.

TY_CAPI             TYGetDeviceNumber         (TY_INTERFACE_HANDLE ifaceHandle, uint32_t* deviceNumber);

TYGetDeviceList gets the list of devices mounted on the specified interface. bufferCount is the size of the deviceInfos array, which should be set according to the number of mounted devices.

TY_CAPI             TYGetDeviceList           (TY_INTERFACE_HANDLE ifaceHandle, TY_DEVICE_BASE_INFO* deviceInfos, uint32_t bufferCount, uint32_t* filledDeviceCount);

TYHasDevice queries the status of a device. The input parameters are the interface handle and device ID, and the output parameter is the status information of the specified device.

TY_CAPI             TYHasDevice               (TY_INTERFACE_HANDLE ifaceHandle, const char* deviceID, bool* value);

TYOpenDevice opens a device. The input parameters are the interface handle and device ID, and the output parameter is the device handle of the opened camera.

TY_CAPI             TYOpenDevice              (TY_INTERFACE_HANDLE ifaceHandle, const char* deviceID, TY_DEV_HANDLE* outDeviceHandle, TY_FW_ERRORCODE* outFwErrorcode);

TYOpenDeviceWithIP can be used to open a network camera with a specified IP address. Input the interface handle and IP address to obtain the handle for the opened network camera.

TY_CAPI             TYOpenDeviceWithIP        (TY_INTERFACE_HANDLE ifaceHandle, const char* IP, TY_DEV_HANDLE* deviceHandle);

TYGetDeviceInterface takes the handle of a known device and queries the handle of the interface on which that device is mounted.

TY_CAPI             TYGetDeviceInterface      (TY_DEV_HANDLE hDevice, TY_INTERFACE_HANDLE* pIface);

TYForceDeviceIP forcibly sets the IP address of a network camera. When the camera's MAC address is known, this interface can be used to temporarily force the camera to use the specified IP address, netmask, and gateway. This IP configuration is lost after the device reboots.

TY_CAPI             TYForceDeviceIP           (TY_INTERFACE_HANDLE ifaceHandle, const char* MAC, const char* newIP, const char* newNetMask, const char* newGateway);

TYCloseDevice closes the specified device.

TY_CAPI             TYCloseDevice             (TY_DEV_HANDLE hDevice, bool reboot);

TYGetDeviceInfo queries device information. The input parameter is the device handle, and the output parameter is the device information, such as the interface, version, and manufacturer.

TY_CAPI             TYGetDeviceInfo           (TY_DEV_HANDLE hDevice, TY_DEVICE_BASE_INFO* info);

Device information includes the following data:

typedef struct TY_DEVICE_BASE_INFO
{
    TY_INTERFACE_INFO   iface;
    char                id[32];///<device serial number
    char                vendorName[32];
    char                userDefinedName[32];
    char                modelName[32];///<device model name
    TY_VERSION_INFO     hardwareVersion; ///<deprecated
    TY_VERSION_INFO     firmwareVersion;///<deprecated
    union {
      TY_DEVICE_NET_INFO netInfo;
      TY_DEVICE_USB_INFO usbInfo;
    };
    char                buildHash[256];
    char                configVersion[256];
    char                reserved[256];
}TY_DEVICE_BASE_INFO;

The general operation of opening the device is as follows:

LOGD("Init lib");
ASSERT_OK( TYInitLib() );
TY_VERSION_INFO ver;
ASSERT_OK( TYLibVersion(&ver) );
LOGD("     - lib version: %d.%d.%d", ver.major, ver.minor, ver.patch);

std::vector<TY_DEVICE_BASE_INFO> selected;
ASSERT_OK( selectDevice(TY_INTERFACE_ALL, ID, IP, 1, selected) );
ASSERT(selected.size() > 0);
TY_DEVICE_BASE_INFO& selectedDev = selected[0];

ASSERT_OK( TYOpenInterface(selectedDev.iface.id, &hIface) );
ASSERT_OK( TYOpenDevice(hIface, selectedDev.id, &handle) );

The selectDevice function used above is implemented as follows:

static inline TY_STATUS selectDevice(TY_INTERFACE_TYPE iface
    , const std::string& ID, const std::string& IP
    , uint32_t deviceNum, std::vector<TY_DEVICE_BASE_INFO>& out)
{
    LOGD("Update interface list");
    ASSERT_OK( TYUpdateInterfaceList() );

    uint32_t n = 0;
    ASSERT_OK( TYGetInterfaceNumber(&n) );
    LOGD("Got %u interface list", n);
    if(n == 0){
      LOGE("interface number incorrect");
      return TY_STATUS_ERROR;
    }

    std::vector<TY_INTERFACE_INFO> ifaces(n);
    ASSERT_OK( TYGetInterfaceList(&ifaces[0], n, &n) );
    ASSERT( n == ifaces.size() );
    for(uint32_t i = 0; i < n; i++){
      LOGI("Found interface %u:", i);
      LOGI("  name: %s", ifaces[i].name);
      LOGI("  id:   %s", ifaces[i].id);
      LOGI("  type: 0x%x", ifaces[i].type);
      if(TYIsNetworkInterface(ifaces[i].type)){
        LOGI("    MAC: %s", ifaces[i].netInfo.mac);
        LOGI("    ip: %s", ifaces[i].netInfo.ip);
        LOGI("    netmask: %s", ifaces[i].netInfo.netmask);
        LOGI("    gateway: %s", ifaces[i].netInfo.gateway);
        LOGI("    broadcast: %s", ifaces[i].netInfo.broadcast);
      }
    }

    out.clear();
    std::vector<TY_INTERFACE_TYPE> ifaceTypeList;
    ifaceTypeList.push_back(TY_INTERFACE_USB);
    ifaceTypeList.push_back(TY_INTERFACE_ETHERNET);
    ifaceTypeList.push_back(TY_INTERFACE_IEEE80211);
    for(size_t t = 0; t < ifaceTypeList.size(); t++){
      for(uint32_t i = 0; i < ifaces.size(); i++){
        if(ifaces[i].type == ifaceTypeList[t] && (ifaces[i].type & iface) && deviceNum > out.size()){
          TY_INTERFACE_HANDLE hIface;
          ASSERT_OK( TYOpenInterface(ifaces[i].id, &hIface) );
          ASSERT_OK( TYUpdateDeviceList(hIface) );
          uint32_t n = 0;
          TYGetDeviceNumber(hIface, &n);
          if(n > 0){
            std::vector<TY_DEVICE_BASE_INFO> devs(n);
            TYGetDeviceList(hIface, &devs[0], n, &n);
            for(uint32_t j = 0; j < n; j++){
              if(deviceNum > out.size() && ((ID.empty() && IP.empty())
                  || (!ID.empty() && devs[j].id == ID)
                  || (!IP.empty() && IP == devs[j].netInfo.ip)))
              {
                if (devs[j].iface.type == TY_INTERFACE_ETHERNET || devs[j].iface.type == TY_INTERFACE_IEEE80211) {
                  LOGI("*** Select %s on %s, ip %s", devs[j].id, ifaces[i].id, devs[j].netInfo.ip);
                } else {
                  LOGI("*** Select %s on %s", devs[j].id, ifaces[i].id);
                }
                out.push_back(devs[j]);
              }
            }
          }
          TYCloseInterface(hIface);
        }
      }
    }

    if(out.size() == 0){
      LOGE("not found any device");
      return TY_STATUS_ERROR;
    }

    return TY_STATUS_OK;
}

The general operation of closing the device is as follows:

ASSERT_OK( TYCloseDevice(hDevice));
ASSERT_OK( TYCloseInterface(hIface) );

Component Control

TYGetComponentIDs queries the components supported by the device.

TY_CAPI             TYGetComponentIDs         (TY_DEV_HANDLE hDevice, int32_t* componentIDs);

TYGetEnabledComponents queries the enabled components.

TY_CAPI             TYGetEnabledComponents    (TY_DEV_HANDLE hDevice, int32_t* componentIDs);

TYEnableComponents enables the specified component.

TY_CAPI             TYEnableComponents        (TY_DEV_HANDLE hDevice, int32_t componentIDs);

TYDisableComponents disables the specified component.

TY_CAPI             TYDisableComponents       (TY_DEV_HANDLE hDevice, int32_t componentIDs);

Sample Code: Query and enable the left and right mono sensors and the color image sensor.

int32_t allComps;
ASSERT_OK( TYGetComponentIDs(hDevice, &allComps) );
if(allComps & TY_COMPONENT_RGB_CAM  && color) {
    LOGD("Has RGB camera, open RGB cam");
    ASSERT_OK( TYEnableComponents(hDevice, TY_COMPONENT_RGB_CAM) );
}

if (allComps & TY_COMPONENT_IR_CAM_LEFT && ir) {
    LOGD("Has IR left camera, open IR left cam");
    ASSERT_OK(TYEnableComponents(hDevice, TY_COMPONENT_IR_CAM_LEFT));
}

if (allComps & TY_COMPONENT_IR_CAM_RIGHT && ir) {
    LOGD("Has IR right camera, open IR right cam");
    ASSERT_OK(TYEnableComponents(hDevice, TY_COMPONENT_IR_CAM_RIGHT));
}

Framebuffer Management

TYGetFrameBufferSize gets the framebuffer size required for the current device configuration. The framebuffer size depends on the enabled components, and the format and resolution of image data.

TY_CAPI             TYGetFrameBufferSize      (TY_DEV_HANDLE hDevice, uint32_t* bufferSize);

TYEnqueueBuffer pushes the allocated framebuffer into the buffer queue.

TY_CAPI             TYEnqueueBuffer           (TY_DEV_HANDLE hDevice, void* buffer, uint32_t bufferSize);

TYClearBufferQueue clears the SDK's internal framebuffer queue. If the set of enabled components is adjusted dynamically while the system is running, the internal buffer queue must be cleared, and the framebuffers must be reallocated and enqueued again.

TY_CAPI             TYClearBufferQueue        (TY_DEV_HANDLE hDevice);

Sample Code: Query the size of the framebuffer, allocate 2 framebuffers, and push them into the buffer queue.

uint32_t frameSize;
ASSERT_OK( TYGetFrameBufferSize(hDevice, &frameSize) );

LOGD("     - Allocate & enqueue buffers");
char* frameBuffer[2];
frameBuffer[0] = new char[frameSize];
frameBuffer[1] = new char[frameSize];
LOGD("     - Enqueue buffer (%p, %d)", frameBuffer[0], frameSize);
ASSERT_OK( TYEnqueueBuffer(hDevice, frameBuffer[0], frameSize) );
LOGD("     - Enqueue buffer (%p, %d)", frameBuffer[1], frameSize);
ASSERT_OK( TYEnqueueBuffer(hDevice, frameBuffer[1], frameSize) );

Time Synchronization Settings

TY_ENUM_TIME_SYNC_TYPE = 0x0211 | TY_FEATURE_ENUM,

TY_BOOL_TIME_SYNC_READY = 0x0212 | TY_FEATURE_BOOL,

TY_ENUM_TIME_SYNC_TYPE sets the time synchronization type of the depth camera; it is an enum feature. TY_BOOL_TIME_SYNC_READY reports whether time synchronization has completed; it is a Boolean feature. The depth camera supports time synchronization with the HOST, an NTP server, a PTP server, or CAN.

Tip

After setting up NTP time synchronization, you need to configure the NTP server. Please refer to Configure Network for the configuration method.

It is defined as follows:

typedef enum TY_TIME_SYNC_TYPE_LIST
{
    TY_TIME_SYNC_TYPE_NONE = 0,
    TY_TIME_SYNC_TYPE_HOST = 1,
    TY_TIME_SYNC_TYPE_NTP = 2,
    TY_TIME_SYNC_TYPE_PTP = 3,
    TY_TIME_SYNC_TYPE_CAN = 4,
    TY_TIME_SYNC_TYPE_PTP_MASTER = 5,
}TY_TIME_SYNC_TYPE_LIST;
typedef int32_t TY_TIME_SYNC_TYPE;

Operation: Set the time synchronization type through the TYSetEnum() interface, then confirm whether time synchronization has completed by reading TY_BOOL_TIME_SYNC_READY.

Sample Code: Set the time synchronization type to HOST. After setting this synchronization type, the host will automatically send the current time, and then synchronize the time every 6 seconds.

LOGD("Set type of time sync mechanism");
ASSERT_OK(TYSetEnum(hDevice, TY_COMPONENT_DEVICE, TY_ENUM_TIME_SYNC_TYPE, TY_TIME_SYNC_TYPE_HOST));
LOGD("Wait for time sync ready");
while (1) {
    bool sync_ready;
    ASSERT_OK(TYGetBool(hDevice, TY_COMPONENT_DEVICE, TY_BOOL_TIME_SYNC_READY, &sync_ready));
    if (sync_ready) {
        break;
    }
    MSLEEP(10);
}

Log Management and Log Output

TYSetLogLevel is used to set the log output level.

TY_CAPI TYSetLogLevel             (TY_LOG_LEVEL lvl);

Definition:

typedef enum TY_LOG_LEVEL_LIST
{
    TY_LOG_LEVEL_VERBOSE  = 1,
    TY_LOG_LEVEL_DEBUG    = 2,
    TY_LOG_LEVEL_INFO     = 3,
    TY_LOG_LEVEL_WARNING  = 4,
    TY_LOG_LEVEL_ERROR    = 5,
    TY_LOG_LEVEL_NEVER    = 9,
}TY_LOG_LEVEL_LIST;
typedef int32_t TY_LOG_LEVEL;

TYSetLogPrefix adds a customizable prefix to the logs to easily distinguish logs from different sources.

TY_CAPI TYSetLogPrefix            (const char* prefix);

TYAppendLogToFile outputs logs of the specified level and below to the designated file.

TY_CAPI TYAppendLogToFile         (const char* filePath, TY_LOG_LEVEL lvl);

TYAppendLogToServer sends logs of the specified level and below to the designated server via the TCP protocol.

TY_CAPI TYAppendLogToServer       (const char* protocol, const char* ip, uint16_t port, TY_LOG_LEVEL lvl);

TYRemoveLogFile closes the log file output and releases related resources.

TY_CAPI TYRemoveLogFile           (const char* filePath);

TYRemoveLogServer disconnects the TCP connection with the log server and stops log transmission.

TY_CAPI TYRemoveLogServer         (const char* protocol, const char* ip, uint16_t port);

Work Mode Settings

TY_STRUCT_TRIGGER_PARAM = 0x0523 | TY_FEATURE_STRUCT,

Set the work mode of the depth camera, a structure type feature. It is defined as follows:

typedef enum TY_TRIGGER_MODE_LIST
{
    TY_TRIGGER_MODE_OFF         = 0,
    TY_TRIGGER_MODE_SLAVE       = 1,
    TY_TRIGGER_MODE_M_SIG       = 2,
    TY_TRIGGER_MODE_M_PER       = 3,
    TY_TRIGGER_MODE_SIG_PASS    = 18,
    TY_TRIGGER_MODE_PER_PASS    = 19,
    TY_TRIGGER_MODE_TIMER_LIST  = 20,
    TY_TRIGGER_MODE_TIMER_PERIOD= 21,
}TY_TRIGGER_MODE_LIST;

typedef int16_t TY_TRIGGER_MODE;
typedef struct TY_TRIGGER_PARAM
{
    TY_TRIGGER_MODE   mode;
    int8_t    fps;
    int8_t    rsvd;
}TY_TRIGGER_PARAM;

//@see sample SimpleView_TriggerMode, only for TY_TRIGGER_MODE_SIG_PASS/TY_TRIGGER_MODE_PER_PASS
typedef struct TY_TRIGGER_PARAM_EX
{
    TY_TRIGGER_MODE   mode;
    union
    {
        struct
        {
            int8_t    fps;
            int8_t    duty;
            int32_t   laser_stream;
            int32_t   led_stream;
            int32_t   led_expo;
            int32_t   led_gain;
        };
        struct
        {
            int32_t   ir_gain[2];
        };
        int32_t   rsvd[32];
    };
}TY_TRIGGER_PARAM_EX;

  • TY_TRIGGER_MODE_OFF sets the depth camera to work in mode 0, where the camera continuously captures images and outputs image data at the highest frame rate.

    LOGD("=== Disable trigger mode");
    TY_TRIGGER_PARAM trigger;
    trigger.mode = TY_TRIGGER_MODE_OFF;
    ASSERT_OK(TYSetStruct(hDevice, TY_COMPONENT_DEVICE, TY_STRUCT_TRIGGER_PARAM, &trigger, sizeof(trigger)));
    
  • TY_TRIGGER_MODE_SLAVE sets the depth camera to work in mode 1, where the camera captures images and outputs image data upon receiving a software trigger command or a hardware trigger signal.

    LOGD("=== Set trigger to slave mode");
    TY_TRIGGER_PARAM trigger;
    trigger.mode = TY_TRIGGER_MODE_SLAVE;
    ASSERT_OK(TYSetStruct(hDevice, TY_COMPONENT_DEVICE, TY_STRUCT_TRIGGER_PARAM, &trigger, sizeof(trigger)));
    
  • TY_TRIGGER_MODE_M_SIG sets the master device (camera) to work in mode 2, while multiple slave devices (cameras) work in mode 1 to achieve cascade triggering of multiple depth cameras and simultaneous image acquisition.

    After receiving the software trigger signal sent by the host computer, the master device outputs the trigger signal through the hardware TRIG_OUT interface, and at the same time triggers its own acquisition and outputs the depth map. After receiving the hardware trigger signal from the master device, the slave device acquires and outputs the depth map.

    LOGD("=== Set trigger mode");
    if (((list.size() > 0) && (strcmp(selected[i].id, list[0]) == 0))
            || ((count == 0) && (list.size() == 0))) {
        LOGD("=== set master device, id: %s", cams[count].sn);
        cams[count].tag = std::string(cams[count].sn) + "_master";
        TY_TRIGGER_PARAM param;
        param.mode = TY_TRIGGER_MODE_M_SIG;
        ASSERT_OK(TYSetStruct(cams[count].hDev, TY_COMPONENT_DEVICE, TY_STRUCT_TRIGGER_PARAM, (void*)&param, sizeof(param)));
    } else {
        cams[count].tag = std::string(cams[count].sn) + "_slave";
        TY_TRIGGER_PARAM param;
        param.mode = TY_TRIGGER_MODE_SLAVE;
        ASSERT_OK(TYSetStruct(cams[count].hDev, TY_COMPONENT_DEVICE, TY_STRUCT_TRIGGER_PARAM, (void*)&param, sizeof(param)));
    }
    
  • TY_TRIGGER_MODE_M_PER sets the master device (camera) to work in mode 3, while multiple slave devices (cameras) work in mode 1, to achieve cascade triggering of multiple depth cameras according to the set frame rate, while simultaneously capturing images.

    The master device outputs a trigger signal through the hardware TRIG_OUT interface according to the set frame rate, and triggers its own acquisition and output of the depth map. After receiving the hardware trigger signal from the master device, the slave device acquires and outputs the depth map.

    Note

    1. The set frame rate cannot exceed the camera’s processing capability, which is the output frame rate of the camera in work mode 0 and when TY_BOOL_CMOS_SYNC=false.

    2. In work mode 3 (without connecting to a slave device), the master device can smoothly output images at the set frame rate, which is suitable for platforms that require a specific frame rate for receiving images or have limited image data processing capabilities.

    LOGD("=== Set trigger mode");
    
    if (((list.size() > 0) && (strcmp(selected[i].id, list[0]) == 0))
        || ((count == 0) && (list.size() == 0))) {
        LOGD("=== set master device");
        cams[count].tag = std::string(cams[count].sn) + "_master";
        TY_TRIGGER_PARAM param;
        param.mode = TY_TRIGGER_MODE_M_PER;
        param.fps = 5;
        ASSERT_OK(TYSetStruct(cams[count].hDev, TY_COMPONENT_DEVICE, TY_STRUCT_TRIGGER_PARAM, (void*)&param, sizeof(param)));
    }
    else {
        cams[count].tag = std::string(cams[count].sn) + "_slave";
        TY_TRIGGER_PARAM param;
        param.mode = TY_TRIGGER_MODE_SLAVE;
        ASSERT_OK(TYSetStruct(cams[count].hDev, TY_COMPONENT_DEVICE, TY_STRUCT_TRIGGER_PARAM, (void*)&param, sizeof(param)));
    }
    

Start-Stop Management

TYStartCapture

After configuring the device components and their feature settings, call this interface to start the device, initiating image capture and computed data output.

TYStopCapture

Once the operation is completed, call TYStopCapture to stop the image capture process.

TY_CAPI             TYStartCapture            (TY_DEV_HANDLE hDevice);
TY_CAPI             TYStopCapture             (TY_DEV_HANDLE hDevice);

Software Trigger

TYSendSoftTrigger

When the camera works in mode 1 and mode 2 (please refer to Work Mode Settings ), the software trigger interface function TYSendSoftTrigger() can be used to send a trigger command to the depth camera via USB or Ethernet interface. After receiving this command, the camera will capture images once and output the corresponding image data.

TY_CAPI             TYSendSoftTrigger         (TY_DEV_HANDLE hDevice);

Report Status

TYRegisterEventCallback

When the device is offline or the device’s License status is abnormal, this callback function can receive TY_EVENT_DEVICE_OFFLINE or TY_EVENT_LICENSE_ERROR event notifications.

TY_CAPI             TYRegisterEventCallback   (TY_DEV_HANDLE hDevice, TY_EVENT_CALLBACK callback, void* userdata);

Data Reception

The depth camera outputs depth data through a USB interface or an Ethernet interface. The host computer retrieves depth map data using the SDK's TYFetchFrame API.

The depth camera's data output is buffered through a framebuffer queue for communication with the host computer. When all framebuffers in the queue are occupied, the depth camera stops sending data. To prevent the image data stream from being blocked, the host computer should promptly return each framebuffer by calling TYEnqueueBuffer after retrieving its image data.

If the host computer's capacity for receiving and processing data is lower than the depth camera's image output capacity, it is recommended to use software or hardware triggering, which lowers the image computation load and output frame rate and also reduces the camera's power consumption. The SDK sample programs SimpleView_Callback and SimpleView_FetchFrame provide two framework examples for image application processing: one in a separate application thread, and the other directly in the image acquisition thread.

TYFetchFrame is the data reception function. Pass in the device handle and wait for a valid data frame within the specified timeout. If no data frame is received within that time, the function returns with an error status.

TY_CAPI             TYFetchFrame              (TY_DEV_HANDLE hDevice, TY_FRAME_DATA* frame, int32_t timeout);
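The reception pattern described above can be sketched as follows. The TY* functions are stubbed with trivial success bodies so the sketch compiles standalone; in a real application they come from the SDK headers, and the frame's image data would be processed before re-enqueueing:

```cpp
#include <cstdint>

// Stand-in declarations and stubs so the sketch is self-contained; the real
// definitions come from the Camport SDK headers.
typedef int   TY_STATUS;
typedef void* TY_DEV_HANDLE;
static const TY_STATUS TY_STATUS_OK = 0;
struct TY_FRAME_DATA { void* userBuffer; uint32_t bufferSize; };

static int g_requeued = 0;  // instrumentation for the stub only
static TY_STATUS TYFetchFrame(TY_DEV_HANDLE, TY_FRAME_DATA* frame, int32_t /*timeout_ms*/)
{
    static char dummy[16];
    frame->userBuffer = dummy;
    frame->bufferSize = sizeof(dummy);
    return TY_STATUS_OK;
}
static TY_STATUS TYEnqueueBuffer(TY_DEV_HANDLE, void*, uint32_t)
{
    ++g_requeued;
    return TY_STATUS_OK;
}

// Typical reception loop: wait up to 2 s per frame, process the frame, then
// promptly return its framebuffer so the camera is never starved of buffers.
static int captureFrames(TY_DEV_HANDLE hDevice, int wanted)
{
    int received = 0;
    while (received < wanted) {
        TY_FRAME_DATA frame;
        if (TYFetchFrame(hDevice, &frame, 2000) != TY_STATUS_OK) {
            continue;  // timeout or transient error: retry or bail out as needed
        }
        // ... process the image data carried by `frame` here ...
        TYEnqueueBuffer(hDevice, frame.userBuffer, frame.bufferSize);
        ++received;
    }
    return received;
}
```

The key point of the pattern is that every fetched framebuffer is returned with TYEnqueueBuffer as soon as its data has been consumed (or copied out for processing in another thread).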

Feature Settings

Feature Availability Query

TYHasFeature queries whether the specified feature is available. The input parameters are the device handle and component ID.

TY_CAPI             TYHasFeature              (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, bool* value);

Feature Information Query

TYGetFeatureInfo queries the information of the specified feature. The input parameters are the device handle and component ID.

TY_CAPI             TYGetFeatureInfo          (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, TY_FEATURE_INFO* featureInfo);

Feature information includes: the access mode (TY_ACCESS_MODE: read and/or write), whether the feature can be written at runtime, the ID of the owning component, and the component and feature that the current feature is bound to.

typedef struct TY_FEATURE_INFO
{
    bool            isValid;            ///< true if feature exists, false otherwise
    TY_ACCESS_MODE  accessMode;         ///< feature access privilege
    bool            writableAtRun;      ///< feature can be written while capturing
    char            reserved0[1];
    TY_COMPONENT_ID componentID;        ///< owner of this feature
    TY_FEATURE_ID   featureID;          ///< feature unique id
    char            name[32];           ///< describe string
    int32_t         bindComponentID;    ///< component ID current feature bind to
    int32_t         bindFeatureID;      ///< feature ID current feature bind to
    char            reserved[252];
}TY_FEATURE_INFO;

Feature Classification Operation Interface

Feature operation APIs typically take input parameters such as the camera handle hDevice, the componentID that the feature belongs to, the featureID to operate on, and the data parameters to be read or written. The operation API is selected according to the feature's data type, following the hierarchy of camera, component, feature, and feature type.

There are seven data types of component features, and the SDK uses the same API to operate on different features of the same type.

typedef enum TY_FEATURE_TYPE_LIST
{
    TY_FEATURE_INT              = 0x1000,
    TY_FEATURE_FLOAT            = 0x2000,
    TY_FEATURE_ENUM             = 0x3000,
    TY_FEATURE_BOOL             = 0x4000,
    TY_FEATURE_STRING           = 0x5000,
    TY_FEATURE_BYTEARRAY        = 0x6000,
    TY_FEATURE_STRUCT           = 0x7000,
}TY_FEATURE_TYPE_LIST;
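Each feature ID is composed of a low-order index OR'ed with its type constant, as seen in definitions such as TY_ENUM_TIME_SYNC_TYPE = 0x0211 | TY_FEATURE_ENUM. A small sketch of recovering the type from an ID; note that the 0xF000 mask is an assumption inferred from the constant values listed above, not a documented SDK macro:

```cpp
#include <cstdint>

// Type constants reproduced from the TY_FEATURE_TYPE_LIST enum above.
enum {
    TY_FEATURE_INT    = 0x1000,
    TY_FEATURE_ENUM   = 0x3000,
    TY_FEATURE_BOOL   = 0x4000,
    TY_FEATURE_STRUCT = 0x7000,
};

// Recover a feature's data type from its ID. The 0xF000 mask is inferred
// from the constant values shown above (an assumption, not an SDK macro).
static uint32_t featureType(uint32_t featureID) { return featureID & 0xF000u; }

// e.g. featureType(0x0211 | TY_FEATURE_ENUM)   yields TY_FEATURE_ENUM,
//      featureType(0x0523 | TY_FEATURE_STRUCT) yields TY_FEATURE_STRUCT.
```

This is why the right accessor family (TYGetEnum/TYSetEnum, TYGetStruct/TYSetStruct, ...) can always be determined from the feature ID itself.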

The interfaces for integer features:

TY_CAPI             TYGetIntRange             (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, TY_INT_RANGE* intRange);
TY_CAPI             TYGetInt                  (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, int32_t* value);
TY_CAPI             TYSetInt                  (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, int32_t value);

The interfaces for floating point features:

TY_CAPI             TYGetFloatRange           (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, TY_FLOAT_RANGE* floatRange);
TY_CAPI             TYGetFloat                (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, float* value);
TY_CAPI             TYSetFloat                (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, float value);

The interfaces for enumeration features:

TY_CAPI             TYGetEnumEntryCount       (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, uint32_t* entryCount);
TY_CAPI             TYGetEnumEntryInfo        (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, TY_ENUM_ENTRY* entries, uint32_t entryCount, uint32_t* filledEntryCount);
TY_CAPI             TYGetEnum                 (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, int32_t* value);
TY_CAPI             TYSetEnum                 (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, int32_t value);

The interfaces for boolean features:

TY_CAPI             TYGetBool                 (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, bool* value);
TY_CAPI             TYSetBool                 (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, bool value);

The interfaces for string features:

TY_CAPI             TYGetStringLength         (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, uint32_t* length);
TY_CAPI             TYGetString               (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, char* buffer, uint32_t bufferSize);
TY_CAPI             TYSetString               (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, const char* buffer);

The interfaces for structure features:

TY_CAPI             TYGetStruct               (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, void* pStruct, uint32_t structSize);
TY_CAPI             TYSetStruct               (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, void* pStruct, uint32_t structSize);

Feature Descriptions

Different models of Percipio depth cameras have different feature configurations. By traversing the components and feature lists of the depth camera, you can get accurate product configuration information. It is recommended to use the SDK sample program DumpAllFeatures to traverse the supported features of the device.

Component Optical Parameters

  • TY_STRUCT_CAM_INTRINSIC = 0x0000 | TY_FEATURE_STRUCT,

    The intrinsic parameter of the infrared image sensor component, color image sensor component, and depth map sensor component. It is a structure feature and the data structure is a 3x3 floating-point array.

    Definition:

    ///  a 3x3 matrix
    /// |.|.|.|
    /// | --|---|---|
    /// | fx|  0| cx|
    /// |  0| fy| cy|
    /// |  0|  0|  1|
    typedef struct TY_CAMERA_INTRINSIC
    {
        float data[3*3];
    }TY_CAMERA_INTRINSIC;
    
  • TY_STRUCT_EXTRINSIC_TO_LEFT_IR = 0x0001 | TY_FEATURE_STRUCT,

    The extrinsic parameters of the component: the pose of the right-side infrared image sensor component or the color image sensor component relative to the left-side infrared image sensor component. It is a structure feature and the data structure is a 4x4 floating-point array.

    Definition:

    /// a 4x4 matrix
    ///  |.|.|.|.|
    ///  |---|----|----|---|
    ///  |r11| r12| r13| t1|
    ///  |r21| r22| r23| t2|
    ///  |r31| r32| r33| t3|
    ///  | 0 |   0|   0|  1|
    // This parameter is used to transform coordinate points from the initial coordinate system to a new coordinate system.
    // r11, r12, r13, and t1 represent the rotation coefficients and offset for the x-axis in the new coordinate system.
    // r21, r22, r23, and t2 represent the rotation coefficients and offset for the y-axis in the new coordinate system.
    // r31, r32, r33, and t3 represent the rotation coefficients and offset for the z-axis in the new coordinate system.
    typedef struct TY_CAMERA_EXTRINSIC
    {
        float data[4*4];
    }TY_CAMERA_EXTRINSIC;
    
    [r11, r12, r13, t1,
     r21, r22, r23, t2,
     r31, r32, r33, t3,
     0,   0,   0,  1]
    
  • TY_STRUCT_CAM_DISTORTION = 0x0006 | TY_FEATURE_STRUCT,

    The optical distortion parameter of infrared image sensors or color image sensor components. It is a structure feature and the data structure is a floating-point array of 12 elements.

    Definition:

    // k1 k2 k3 k4 k5 k6 are radial distortion coefficients based on ideal modeling
    // p1 p2 are tangential distortion coefficients
    // s1 s2 s3 s4 are the thin prism distortion coefficients
    ///camera distortion parameters
    typedef struct TY_CAMERA_DISTORTION
    {
        float data[12];///<Definition is compatible with opencv3.0+ :k1,k2,p1,p2,k3,k4,k5,k6,s1,s2,s3,s4
    }TY_CAMERA_DISTORTION;
    
  • TY_STRUCT_CAM_RECTIFIED_INTRI = 0x0008 | TY_FEATURE_STRUCT,

    Get the intrinsic parameter data of the mono sensor component after distortion correction.

    Sample Code:

    TY_COMPONENT_ID componentID;
    TY_FEATURE_ID featureID;
    componentID = TY_COMPONENT_IR_CAM_LEFT;
    featureID = TY_STRUCT_CAM_RECTIFIED_INTRI;
    TY_CAMERA_INTRINSIC intri;
    ASSERT_OK(TYGetStruct(hDevice, componentID, featureID, &intri, sizeof(TY_CAMERA_INTRINSIC)));
    LOGD("===%23s%f %f %f", "", intri.data[0], intri.data[1], intri.data[2]);
    LOGD("===%23s%f %f %f", "", intri.data[3], intri.data[4], intri.data[5]);
    LOGD("===%23s%f %f %f", "", intri.data[6], intri.data[7], intri.data[8]);
    
  • TY_STRUCT_CAM_CALIB_DATA = 0x0007 | TY_FEATURE_STRUCT,

    The calibration parameter combination of infrared image sensor components and color image sensor components. It can retrieve intrinsic data, extrinsic data, and distortion data. The structure contains the calibration size, intrinsic parameters, extrinsic parameters, and distortion parameters calibrated before the camera leaves the factory. These calibration parameters are calculated at a specific resolution and have a conversion relationship with the image resolution. The related APIs provided by the SDK will automatically adjust the output parameters according to the actual image resolution.

    Sample Code: Retrieve calibration parameters, for more details please refer to the SDK sample program SimpleView_Registration.

    struct CallbackData {
      int             index;
      TY_ISP_HANDLE   IspHandle;
      TY_DEV_HANDLE   hDevice;
      DepthRender*    render;
      DepthViewer*    depthViewer;
      bool            needUndistort;
    
      float           scale_unit;
    
      TY_CAMERA_CALIB_INFO depth_calib;
      TY_CAMERA_CALIB_INFO color_calib;
    };
    
    CallbackData cb_data;
    
    LOGD("=== Read depth calib info");
    ASSERT_OK( TYGetStruct(hDevice, TY_COMPONENT_DEPTH_CAM, TY_STRUCT_CAM_CALIB_DATA
          , &cb_data.depth_calib, sizeof(cb_data.depth_calib)) );
    
    LOGD("=== Read color calib info");
    ASSERT_OK( TYGetStruct(hDevice, TY_COMPONENT_RGB_CAM, TY_STRUCT_CAM_CALIB_DATA
          , &cb_data.color_calib, sizeof(cb_data.color_calib)) );
    

TY_STRUCT_CAM_RECTIFIED_ROTATION = 0x0003 | TY_FEATURE_STRUCT,

Retrieves the rotation parameter from the IR calibration results of the V-series cameras.

Sample Code:

TY_CAMERA_ROTATION rotation;
ASSERT_OK(TYGetStruct(hDevice, TY_COMPONENT_IR_CAM_LEFT, TY_STRUCT_CAM_RECTIFIED_ROTATION, &rotation, sizeof(rotation)));
LOGD("===%23s%f %f %f", "", rotation.data[0], rotation.data[1], rotation.data[2]);
LOGD("===%23s%f %f %f", "", rotation.data[3], rotation.data[4], rotation.data[5]);
LOGD("===%23s%f %f %f", "", rotation.data[6], rotation.data[7], rotation.data[8]);

Trigger Settings

  • TY_INT_FRAME_PER_TRIGGER = 0x0202 | TY_FEATURE_INT,

    The number of frames output by the depth camera after receiving a software or hardware trigger signal. The default output is 1 frame.

  • TY_INT_TRIGGER_DELAY_US = 0x0206 | TY_FEATURE_INT,

    The trigger delay. When the depth camera receives a hardware trigger signal, it starts image acquisition after the set delay time, in microseconds. The maximum delay time is 1300000 microseconds (1.3 s).

    LOGD("=== Set trigger to slave mode");
    TY_TRIGGER_PARAM trigger;
    trigger.mode = TY_TRIGGER_MODE_SLAVE;
    ASSERT_OK(TYSetStruct(hDevice, TY_COMPONENT_DEVICE, TY_STRUCT_TRIGGER_PARAM, &trigger, sizeof(trigger)));
    
    //Note: trigger delay is only enabled in trigger slave mode and only works for hardware trigger.
    //      The delay time unit is microseconds; the maximum value is 1.3 s.
    int32_t time = 1000;
    ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_DEVICE, TY_INT_TRIGGER_DELAY_US, time));
    
  • TY_BOOL_TRIGGER_OUT_IO = 0x0207 | TY_FEATURE_BOOL,

    Trigger IO output control. When the depth camera is working in mode 1, this feature can be used to control the trigger output interface to operate in general IO mode.

    • When the camera trigger output defaults to a high level, setting this feature to false keeps the output signal at a high level, whereas setting it to true switches the output signal to a low level.

    • When the camera trigger output defaults to a low level, setting this feature to false keeps the output signal at a low level, whereas setting it to true switches the output signal to a high level.

    This feature cannot be used when the depth camera is working in mode 2 or mode 3.

Note

  1. Please refer to Hardware Trigger Connection Methods and Work Mode Settings for the hardware connection methods of various trigger modes.

  2. Please refer to the SDK sample program SimpleView_TriggerMode1 for software implementation.

Exposure Settings

  • TY_BOOL_AUTO_AWB = 0x0304 | TY_FEATURE_BOOL,

    The automatic white balance control, which adjusts the R, G, B digital gain to achieve color space balance. Before setting the R, G, B digital gain, you need to turn off the automatic white balance function, otherwise the setting will not take effect.

  • TY_BOOL_AUTO_EXPOSURE = 0x0300 | TY_FEATURE_BOOL,

    The automatic exposure switch for image sensor components. The optical components of some depth camera models support automatic image exposure.

  • TY_INT_AE_TARGET_Y = 0x0527 | TY_FEATURE_INT,

    The target brightness for auto-exposure. Adjusting the target brightness is crucial when optimizing the color image quality in high contrast scenes. If the region of interest is relatively darker than other areas, increase the target brightness until details in that area become visible. Conversely, if the region of interest is relatively brighter, decrease the target brightness to reveal the details within that area.

  • TY_INT_EXPOSURE_TIME = 0x0301 | TY_FEATURE_INT,

    Exposure Time. The adjustable range of exposure time varies with different image sensors and frame rate configurations. You can use the TYGetIntRange interface to query the maximum and minimum values of the exposure time. Before setting the exposure time, please disable auto-exposure if it is supported.

    //shutdown the Auto Exposure time function of the rgb image
    ASSERT_OK(TYSetBool(hDevice, TY_COMPONENT_RGB_CAM, TY_BOOL_AUTO_EXPOSURE, false));
    
    //Adjust the Exposure time of the rgb image
    TY_INT_RANGE range;
    ASSERT_OK(TYGetIntRange(hDevice, TY_COMPONENT_RGB_CAM, TY_INT_EXPOSURE_TIME, &range));
    int32_t tmp;
    ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_RGB_CAM, TY_INT_EXPOSURE_TIME, (range.min + range.max) / 2));
    ASSERT_OK(TYGetInt(hDevice, TY_COMPONENT_RGB_CAM, TY_INT_EXPOSURE_TIME, &tmp));
    if (tmp != (range.min + range.max) / 2)
    {
        LOGD("set rgb image exposure time failed");
    }
    
  • TY_FLOAT_EXPOSURE_TIME_US = 0x0308 | TY_FEATURE_FLOAT,

    The absolute true value of exposure time, in μs. This refers to the actual effective exposure time of the image sensor under a specific work mode. To set it, you must first disable auto-exposure and query the valid range via the corresponding API.

    ASSERT_OK(TYSetFloat(hDevice, TY_COMPONENT_IR_CAM_LEFT, TY_FLOAT_EXPOSURE_TIME_US, 23000.0));
    
  • TY_INT_ANALOG_GAIN = 0x0524 | TY_FEATURE_INT,

    The analog gain. The adjustable range of analog gain varies depending on different image sensors. Before setting the analog gain, please disable auto-exposure if supported.

    ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_IR_CAM_LEFT, TY_INT_ANALOG_GAIN, 4));
    
    //shutdown the Auto Gain function of the RGB image
    ASSERT_OK(TYSetBool(hDevice, TY_COMPONENT_RGB_CAM, TY_BOOL_AUTO_GAIN, false));
    ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_RGB_CAM, TY_INT_ANALOG_GAIN, 4));
    
  • TY_INT_GAIN = 0x0303 | TY_FEATURE_INT,

    The digital gain of the mono sensors. The adjustable range of the digital gain varies depending on different image sensors.

    int32_t value;
    TY_INT_RANGE range;
    // get the range of digital gain
    ASSERT_OK(TYGetIntRange(hDevice, TY_COMPONENT_IR_CAM_LEFT, TY_INT_GAIN, &range));
    // set the max digital gain
    ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_IR_CAM_LEFT, TY_INT_GAIN, range.max));
    
  • TY_INT_R_GAIN = 0x0520 | TY_FEATURE_INT,

  • TY_INT_G_GAIN = 0x0521 | TY_FEATURE_INT,

  • TY_INT_B_GAIN = 0x0522 | TY_FEATURE_INT,

    The digital gain of color sensors. The digital gain of the color image sensor component needs to be independently set for the R, G, and B channels. Before setting the digital gain, please disable the AWB function if supported.

    //shutdown the AWB function of the RGB image
    ASSERT_OK(TYSetBool(hDevice, TY_COMPONENT_RGB_CAM, TY_BOOL_AUTO_AWB, false));
    
    ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_RGB_CAM, TY_INT_R_GAIN, 2));
    ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_RGB_CAM, TY_INT_G_GAIN, 2));
    ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_RGB_CAM, TY_INT_B_GAIN, 2));
    

Auto-Exposure ROI

TY_STRUCT_AEC_ROI = 0x0305 | TY_FEATURE_STRUCT,

Sets the statistical range of the AEC configuration of the image sensor, and the image sensor will automatically adjust the exposure time and gain based on the image data characteristics within this range to achieve better image effects.

The following code sets the 100*100 area in the upper left corner of the image as the region of interest (ROI), and AEC is effective for this region only:

TY_AEC_ROI_PARAM aec_roi_param;
aec_roi_param.x = 0;
aec_roi_param.y = 0;
aec_roi_param.w = 100;
aec_roi_param.h = 100;
TYSetStruct(hDevice, TY_COMPONENT_RGB_CAM, TY_STRUCT_AEC_ROI, &aec_roi_param, sizeof(TY_AEC_ROI_PARAM));

Image Acquisition Time

TY_INT_CAPTURE_TIME_US = 0x0210 | TY_FEATURE_INT,

Retrieves the actual image acquisition time of the camera in trigger mode, in microseconds.

After receiving the trigger signal, the camera enters the exposure phase, which lasts for the time of CAPTURE_TIME_US. At this time, the objects in the camera’s field of view should remain stationary, otherwise the image quality will be impacted. After the exposure phase, the camera starts data processing and at this time, the objects within the camera’s field of view can move and it won’t impact the quality of the captured image.

Therefore, you can read the camera's actual image acquisition time first and schedule the movement of objects (e.g., a robotic arm) accordingly, so that they do not enter the camera's field of view during the exposure phase and degrade the imaging.

Sample Code for retrieving the CAPTURE_TIME_US:

int32_t capture_time;
TYGetInt(hDevice, TY_COMPONENT_DEVICE, TY_INT_CAPTURE_TIME_US, &capture_time);
printf("get capture time %d\n", capture_time);

Image Synchronization

TY_BOOL_CMOS_SYNC = 0x0205 | TY_FEATURE_BOOL,

When the left and right images used for stereo depth calculation are exposed completely synchronously, the best depth map data can be obtained. Even when the exposure of the left and right infrared images is not completely synchronous and there is a small time difference, depth map data can still be output, although with slightly lower accuracy. When the left and right images are completely synchronized, the frame rate of the image output is lower than when they are not completely synchronized. Depending on the needs of the actual use scenario for depth map quality and frame rate, the camera’s working configuration can be modified using this API.

bool tmp;
ASSERT_OK(TYSetBool(hDevice, TY_COMPONENT_DEVICE, TY_BOOL_CMOS_SYNC, false));
ASSERT_OK(TYGetBool(hDevice, TY_COMPONENT_DEVICE, TY_BOOL_CMOS_SYNC, &tmp));
if (tmp != false) {
   LOGD("==set TY_BOOL_CMOS_SYNC failed !");
}

Asynchronous Data Stream

TY_ENUM_STREAM_ASYNC = 0x0209 | TY_FEATURE_ENUM,

When the grayscale, color, and depth map data of a depth camera are output, it supports the immediate output of the acquired image data. For instance, with the -IX series and P series cameras, the acquisition time for grayscale and color images is relatively short, whereas the calculation time for depth data is longer. Therefore, the grayscale and color images can be output first for the host computer to perform necessary calculations.

typedef enum TY_STREAM_ASYNC_MODE_LIST
{
    TY_STREAM_ASYNC_OFF         = 0,
    TY_STREAM_ASYNC_DEPTH       = 1,
    TY_STREAM_ASYNC_RGB         = 2,
    TY_STREAM_ASYNC_DEPTH_RGB   = 3,
    TY_STREAM_ASYNC_ALL         = 0xff,
}TY_STREAM_ASYNC_MODE_LIST;

ASSERT_OK(TYSetEnum(hDevice, TY_COMPONENT_DEVICE, TY_ENUM_STREAM_ASYNC, TY_STREAM_ASYNC_ALL));

Image Format Settings and Processing

The format and resolution of the image are ENUMERATION types. The SDK header file enumerates the various image formats and resolutions supported by the camera. Different cameras support different specific formats, which can be queried and set using the API.

  • TY_ENUM_IMAGE_MODE = 0x0109 | TY_FEATURE_ENUM,

    Sample Code: Get the image formats supported by the color image.

    uint32_t n;
    ASSERT_OK(TYGetEnumEntryCount(hDevice, TY_COMPONENT_RGB_CAM, TY_ENUM_IMAGE_MODE, &n));
    LOGD("===         %14s: entry count %d", "", n);
    if (n > 0) {
        std::vector<TY_ENUM_ENTRY> entry(n);
        ASSERT_OK(TYGetEnumEntryInfo(hDevice, TY_COMPONENT_RGB_CAM, TY_ENUM_IMAGE_MODE, &entry[0], n, &n));
        for (uint32_t i = 0; i < n; i++) {
            LOGD("===         %14s:     value(%d), desc(%s)", "", entry[i].value, entry[i].description);
        }
    }
    

    Sample Code: Set the color image format to YUYV, with a resolution of 1280*960.

    TY_IMAGE_MODE image_mode;
    image_mode = TYImageMode2(TY_PIXEL_FORMAT_YUYV, 1280, 960);
    ASSERT_OK(TYSetEnum(hDevice, TY_COMPONENT_RGB_CAM , TY_ENUM_IMAGE_MODE, image_mode));
    

    Sample Code: Obtain the supported image formats for color images and set the resolution to 1280*960.

    uint32_t n;
    ASSERT_OK(TYGetEnumEntryCount(hDevice, TY_COMPONENT_RGB_CAM, TY_ENUM_IMAGE_MODE, &n));
    std::vector<TY_ENUM_ENTRY> image_mode_list(n);
    ASSERT_OK(TYGetEnumEntryInfo(hDevice, TY_COMPONENT_RGB_CAM, TY_ENUM_IMAGE_MODE, &image_mode_list[0], n, &n));
    for (int idx = 0; idx < image_mode_list.size(); idx++)
    {
    
      //try to select a 1280x960 resolution
      if (TYImageWidth(image_mode_list[idx].value) == 1280 && TYImageHeight(image_mode_list[idx].value) == 960)
      {
          LOGD("Select Color Image Mode: %s", image_mode_list[idx].description);
          int err = TYSetEnum(hDevice, TY_COMPONENT_RGB_CAM, TY_ENUM_IMAGE_MODE, image_mode_list[idx].value);
          ASSERT(err == TY_STATUS_OK || err == TY_STATUS_NOT_PERMITTED);
          break;
      }
    }
    
  • TY_FLOAT_SCALE_UNIT = 0x010a | TY_FEATURE_FLOAT,

    The calculation coefficient for depth measurement values. The default value is 1.0, meaning the data value of each pixel in the depth map represents the actual measured distance (in millimeters). When the set value is not 1.0, the product of the data value of each pixel in the depth map and this coefficient gives the actual measured distance (in millimeters).

    float value;
    ASSERT_OK(TYGetFloat(hDevice, TY_COMPONENT_DEPTH_CAM, TY_FLOAT_SCALE_UNIT, &value));
    

    This value affects the final depth result. If the value is too small, it may lead to errors in the depth calculation. If the value is too large, it may reduce the accuracy of the depth calculation.

  • TY_BOOL_UNDISTORTION = 0x0510 | TY_FEATURE_BOOL, ///< Output undistorted image

    The switch for undistortion. The default value is false, which means the infrared image sensor component outputs image data without undistortion by default.

    bool hasUndistortSwitch;
    ASSERT_OK( TYHasFeature(hDevice, TY_COMPONENT_IR_CAM_LEFT, TY_BOOL_UNDISTORTION, &hasUndistortSwitch) );
    if (hasUndistortSwitch) {
        ASSERT_OK( TYSetBool(hDevice, TY_COMPONENT_IR_CAM_LEFT, TY_BOOL_UNDISTORTION, true) );
    }
    
  • TY_BOOL_BRIGHTNESS_HISTOGRAM = 0x0511 | TY_FEATURE_BOOL,

    The enable switch for the brightness histogram component. The default value is false. After enabling the brightness histogram component (TY_COMPONENT_BRIGHT_HISTO), the device will output the brightness histogram of the left and right grayscale images.

    Sample Code is as follows. For details, please refer to the SDK sample program SimpleView_FetchHisto.

    int32_t allComps, componentIDs = 0;
    ASSERT_OK( TYGetComponentIDs(hDevice, &allComps) );
    if(allComps & TY_COMPONENT_BRIGHT_HISTO) {
        LOGD("=== Has bright histo component");
        componentIDs |= TY_COMPONENT_BRIGHT_HISTO;
    }
    
    LOGD("=== Configure components, open ir cam");
    componentIDs |= TY_COMPONENT_IR_CAM_RIGHT| TY_COMPONENT_IR_CAM_LEFT;
    ASSERT_OK( TYEnableComponents(hDevice, componentIDs) );
    

Laser Settings

  • TY_BOOL_LASER_AUTO_CTRL = 0x0501 | TY_FEATURE_BOOL,

    The switch of laser auto-adjustment function, a boolean feature and the default value is true, which means the depth camera automatically turns on/off the infrared laser based on the needs of depth calculation. The specific rules are as follows:

    • TY_BOOL_LASER_AUTO_CTRL= true

      • When there is depth map output, the laser will be turned on, and the laser brightness is set with TY_INT_LASER_POWER;

      • When there is no depth map output, the laser will be turned off.

    • TY_BOOL_LASER_AUTO_CTRL= false

      • When any image is output, the laser is turned on, and the laser brightness is set with TY_INT_LASER_POWER.

        Note

        Some camera models, like FM851-E2, behave differently from the above description that the laser will be turned off when TY_BOOL_LASER_AUTO_CTRL= false, TY_BOOL_CMOS_SYNC = false and there is no depth map output.

  • TY_INT_LASER_POWER = 0x0500 | TY_FEATURE_INT,

    The laser power setting, an integer feature and the setting range is 0~100. The default value is 100. This feature can be used to set the laser power, thereby adjusting the brightness of the infrared sensor imaging. When this feature is set to 0, the laser is turned off.

    Sample Code: Turn off the laser auto-adjustment function and set the laser power to 90.

    ASSERT_OK(TYSetBool(hDevice, TY_COMPONENT_LASER, TY_BOOL_LASER_AUTO_CTRL, false));
    ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_LASER, TY_INT_LASER_POWER, 90));
    

Floodlight Settings

  • TY_BOOL_FLASHLIGHT = 0x0213 | TY_FEATURE_BOOL,

    The IR floodlight function switch, a boolean feature and the default value is false. This feature can be used to turn on the IR floodlight to facilitate camera calibration.

    • TY_BOOL_FLASHLIGHT = true

      • When there is depth map output, the floodlight is turned off;

      • When there is no depth map output, the floodlight is turned on, and the floodlight brightness is set with TY_INT_IR_FLASHLIGHT_INTENSITY.

    • TY_BOOL_FLASHLIGHT = false

      • The IR floodlight is turned off.

  • TY_INT_IR_FLASHLIGHT_INTENSITY = 0x0214 | TY_FEATURE_INT,

    The IR floodlight intensity setting, an integer feature. After turning on the IR floodlight, you can set the IR floodlight intensity according to the actual calibration scene using this feature.

  • TY_BOOL_RGB_FLASHLIGHT = 0x0221 | TY_FEATURE_BOOL,

    The RGB floodlight function switch, a boolean feature. This feature can be used to turn on the RGB floodlight to facilitate camera calibration.

  • TY_INT_RGB_FLASHLIGHT_INTENSITY = 0x0222 | TY_FEATURE_INT,

    The RGB floodlight intensity setting, an integer feature. After turning on the RGB floodlight, you can set the RGB floodlight intensity according to the actual calibration scene using this feature.

    Note

    The floodlights have a built-in overheating protection function. When the temperature is too high, the floodlights will be automatically turned off.

    At any given time, only one of the following can be powered on: IR floodlight, RGB floodlight, or laser.

Sample Code: Turn on the floodlight function and set the floodlight intensity to 63.

ASSERT_OK(TYSetBool(hDevice, TY_COMPONENT_DEVICE, TY_BOOL_FLASHLIGHT, true));
ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_DEVICE, TY_INT_IR_FLASHLIGHT_INTENSITY, 63));

Preset Mode Settings

TY_ENUM_CONFIG_MODE = 0x0221 | TY_FEATURE_ENUM,

Sets the camera preset mode. It is an enumeration feature and defined as follows:

typedef enum TY_CONFIG_MODE_LIST :uint32_t
{
    TY_CONFIG_MODE_PRESET0 = 0,
    TY_CONFIG_MODE_PRESET1, //1
    TY_CONFIG_MODE_PRESET2, //2

    TY_CONFIG_MODE_USERSET0 = (1<<16),
    TY_CONFIG_MODE_USERSET1, //0x10001
    TY_CONFIG_MODE_USERSET2, //0x10002
}TY_CONFIG_MODE_LIST;
typedef uint32_t TY_CONFIG_MODE;

Sample Code: Set the preset mode of the V-series camera to PRESET0

ASSERT_OK(TYSetEnum(hDevice, TY_COMPONENT_DEVICE, TY_ENUM_CONFIG_MODE, TY_CONFIG_MODE_PRESET0));

Note

Different preset modes correspond to different measurement performance modes for the cameras. Please select the most suitable preset mode according to actual needs, in order to achieve the optimal measuring performance for different application scenarios. The specific correspondence needs to be checked in the camera’s fetch_config.xml file.

To explain with a snippet from the fetch_config.xml file of a certain V series camera:

<feature id="0x3221" addr="0x400038" name="PreSetMode" rw="3" hide="0">
         <entry name="Quality" value="0" desc="Quality"></entry>
         <entry name="Standard" value="1" desc="Standard"></entry>

For this camera:

  • When CONFIG_MODE is set to PRESET0, its measurement performance is in Quality mode.

  • When CONFIG_MODE is set to PRESET1, its measurement performance is in Standard mode.

  • When CONFIG_MODE is set to PRESET2, its measurement performance is in Fast mode.

Temperature Sensor Data Reading

These features read real-time temperature data from specific camera components, such as the left IR sensor, the right IR sensor, the color sensor, the CPU, and the motherboard.

  • TY_ENUM_TEMPERATURE_ID = 0x0223 | TY_FEATURE_ENUM,

    Selects the temperature sensor to be read. The number of temperature sensors inside the camera can be retrieved as follows:

    uint32_t n = 0;
    ASSERT_OK(TYGetEnumEntryCount(hDevice, TY_COMPONENT_DEVICE, TY_ENUM_TEMPERATURE_ID, &n));
    if (n == 0) {
        LOGD("No temperature sensors available.\n");
        return;
    }
    

    Retrieve the detailed information of each temperature sensor:

    std::vector<TY_ENUM_ENTRY> feature_info(n);
    ASSERT_OK(TYGetEnumEntryInfo(hDevice, TY_COMPONENT_DEVICE, TY_ENUM_TEMPERATURE_ID, &feature_info[0], n, &n));
    
  • TY_STRUCT_TEMPERATURE = 0x0224 | TY_FEATURE_STRUCT,

    Read the temperature values for each temperature sensor one by one:

    for (uint32_t i = 0; i < n; i++) {
        // Set the ID of the temperature sensor to be read
        int ret = TYSetEnum(hDevice, TY_COMPONENT_DEVICE, TY_ENUM_TEMPERATURE_ID, feature_info[i].value);
        if (ret < 0) {
            LOGD("Set temperature id[%d](%s) failed %d(%s)\n", feature_info[i].value, feature_info[i].description, ret, TYErrorString(ret));
            break;
        }

        // Retrieve temperature data
        TY_TEMP_DATA temp;
        memset(&temp, 0, sizeof(temp));
        ret = TYGetStruct(hDevice, TY_COMPONENT_DEVICE, TY_STRUCT_TEMPERATURE, &temp, sizeof(temp));
        if (ret < 0) {
            LOGD("Get temperature [%d](%s) failed %d(%s)\n", feature_info[i].value, feature_info[i].description, ret, TYErrorString(ret));
            break;
        }
        LOGD("Get temperature [%d](%s) temp %s\n", feature_info[i].value, feature_info[i].description, temp.temp);
    }
    
    

SGBM Features

Measuring Range, Accuracy, and Frame Rate Related Features

  • TY_FLOAT_SCALE_UNIT = 0x010a | TY_FEATURE_FLOAT

    For its descriptions, refer to TY_FLOAT_SCALE_UNIT.

  • TY_INT_SGBM_DISPARITY_NUM = 0x0611 | TY_FEATURE_INT

    Sets the disparity search range of the matching window during the search process, an integer feature.

    The larger the set value, the greater the measurement range in the Z direction of the camera, but the more computing power is required. It is recommended to set the value to a multiple of 16.

  • TY_INT_SGBM_DISPARITY_OFFSET = 0x0612 | TY_FEATURE_INT

    Sets the starting value for the disparity search, an integer feature.

    The smaller the set value, the greater the maximum measurement value in the Z direction (Z max), meaning the measurement range is farther. However, the lower limit of the set value is affected by the depth of field.

  • TY_INT_SGBM_IMAGE_NUM = 0x0610 | TY_FEATURE_INT

    Sets the number of grayscale images for depth calculation, an integer feature.

    The larger the set value, the better the quality of the output depth map, but the frame rate will decrease. The upper limit of the set value is influenced by the camera’s computing power.

  • TY_INT_SGBM_MATCH_WIN_HEIGHT = 0x0613 | TY_FEATURE_INT

    Sets the height of the disparity matching window, an integer feature. The value must be odd.

  • TY_INT_SGBM_MATCH_WIN_WIDTH = 0x061A | TY_FEATURE_INT

    Sets the width of the disparity matching window, an integer feature. The value must be odd.

    The larger the disparity matching window ( match window height * match window width ), the smoother the image, but the precision will decrease. The smaller the disparity matching window, the more details the depth map will show, but the probability of incorrect matches will increase.

Note

Due to the limited computing power of the camera, there is a constraint between image number and match window height. The constraints differ for different camera models. For details, refer to fetch_config.xml.

Edge Smoothing Features

  • TY_INT_SGBM_SEMI_PARAM_P1 = 0x0614 | TY_FEATURE_INT

    Sets the penalty parameter P1 for disparity changes between neighboring pixels (+/-1).

    The larger the set value, the smoother the depth map.

    Adjusting this parameter helps prevent discontinuities or unreasonable depth values, effectively suppressing noise and discontinuities.

  • TY_INT_SGBM_SEMI_PARAM_P2 = 0x0615 | TY_FEATURE_INT

    Sets the penalty parameter P2 for disparity changes between surrounding pixels.

    The larger the value, the smoother the depth map. P2 > P1.

    This parameter effectively handles texture-rich areas, reducing the number of mismatches.

  • TY_INT_SGBM_SEMI_PARAM_P1_SCALE = 0x061F | TY_FEATURE_INT

    Sets the scaling factor for the penalty parameter P1 for disparity changes between neighboring pixels (+/-1).

    The smaller the value, the smoother the depth map.

  • TY_BOOL_SGBM_HFILTER_HALF_WIN = 0x0619 | TY_FEATURE_BOOL

    Search filter switch.

    Further optimizes the depth map, removing noise and discontinuities, and making the point cloud at object edges more accurate.

  • TY_INT_SGBM_TEXTURE_THRESH = 0x0630 | TY_FEATURE_INT

  • TY_INT_SGBM_TEXTURE_OFFSET = 0x062F | TY_FEATURE_INT

    Sets the threshold for the texture feature.

    It is used to optimize edge detection in depth maps, decrease noise within the point cloud, and minimize measurement errors.

Mismatch Related Features

  • TY_INT_SGBM_UNIQUE_FACTOR = 0x0616 | TY_FEATURE_INT

    One of the uniqueness check parameters, which is the percentage of the best and second-best match points.

    The larger the set value, the stricter the uniqueness requirement on the matching cost, and the more mismatched points are filtered out.

  • TY_INT_SGBM_UNIQUE_ABSDIFF = 0x0617 | TY_FEATURE_INT

    One of the uniqueness check parameters, which is the absolute difference between the best and second-best match points.

    The larger the set value, the stricter the uniqueness requirement on the matching cost, and the more mismatched points are filtered out.

  • TY_BOOL_SGBM_LRC = 0x061C | TY_FEATURE_BOOL

    Left and right consistency check switch.

    TY_INT_SGBM_LRC_DIFF = 0x061D | TY_FEATURE_INT

    Left and right consistency check parameters.

    When performing stereo matching, for pixels on the same object surface, the disparity of matching from the left image to the right image is LR, and the disparity of matching from the right image to the left image is RL. If ABS(LR-RL) < max LRC diff, then the point is considered a reliable match point.

    The smaller the parameter setting value for left and right consistency check, the more reliable the matching.

Median filtering

  • TY_BOOL_SGBM_MEDFILTER = 0x061B | TY_FEATURE_BOOL

    Median filter switch.

    Used to eliminate isolated noise points while preserving the edge information of the image as much as possible.

    TY_INT_SGBM_MEDFILTER_THRESH = 0x061E | TY_FEATURE_INT

    Median filter threshold.

    The larger the set value, the more noise will be filtered out, but it may also result in the loss of detailed information in the depth map.

ToF Camera Features

Depth Map Quality

TY_ENUM_DEPTH_QUALITY = 0x0900 | TY_FEATURE_ENUM,

Sets the quality of the depth map output by the camera to meet the needs of different applications. An enumeration feature, defined as follows:

typedef enum TY_DEPTH_QUALITY_LIST
{
    TY_DEPTH_QUALITY_BASIC  = 1,
    TY_DEPTH_QUALITY_MEDIUM = 2,
    TY_DEPTH_QUALITY_HIGH   = 4,
}TY_DEPTH_QUALITY_LIST;

  • When the depth map quality is set to BASIC, the jitter of depth value is large and the output frame rate is high.

  • When the depth map quality is set to MEDIUM, the jitter of depth value is medium and the output frame rate is medium.

  • When the depth map quality is set to HIGH, the jitter of depth value is small and the output frame rate is low.

Outlier Filter

TY_INT_FILTER_THRESHOLD = 0x0901 | TY_FEATURE_INT,

Sets the outlier filtering threshold of the ToF depth camera, an integer feature, ranging [0,100]. Default value is 0, which means no filtering. The smaller the filtering threshold, the more outliers will be filtered out.

Note

If the filter threshold is set too low, a large amount of valid depth information may be filtered out.

Sample Code: Set the outlier filter threshold of the ToF depth camera to 43.

ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_DEPTH_CAM, TY_INT_FILTER_THRESHOLD, 43));

Modulation Channel

TY_INT_TOF_CHANNEL = 0x0902 | TY_FEATURE_INT,

Sets the modulation channel of the ToF depth camera, an integer feature. The modulation frequency varies for different modulation channels, and the channels do not interfere with each other. If multiple ToF depth cameras need to run in the same scene, ensure that cameras of the same series use different modulation channels.

Sample Code: Set the modulation channel of the ToF depth camera to 2.

ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_DEPTH_CAM, TY_INT_TOF_CHANNEL, 2));

Modulated Laser Light Intensity

TY_INT_TOF_MODULATION_THRESHOLD = 0x0903 | TY_FEATURE_INT,

Sets the threshold for the laser modulation intensity received by the ToF depth camera, an integer feature. Pixels with intensity below this threshold will not participate in depth calculation, and their depth values will be set to 0.

Sample Code: Set the threshold of laser modulation light intensity received by the ToF depth camera to 300.

ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_DEPTH_CAM, TY_INT_TOF_MODULATION_THRESHOLD, 300));

Jitter Filter

TY_INT_TOF_JITTER_THRESHOLD = 0x0307 | TY_FEATURE_INT,

Sets the jitter filtering threshold of the ToF depth camera, an integer feature. The larger the threshold value, the less the depth data at the edges of the depth map will be filtered.

Sample Code: Set the jitter filter threshold of the ToF depth camera to 5.

ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_DEPTH_CAM, TY_INT_TOF_JITTER_THRESHOLD, 5));

HDR Ratio (High Dynamic Range Ratio)

TY_INT_TOF_HDR_RATIO = 0x0306 | TY_FEATURE_INT,

Sets the high dynamic range ratio threshold, an integer feature. Before setting this threshold, the depth map quality needs to be set to HIGH.

Sample Code: Set the high dynamic range ratio threshold of the TL460-S1-E1 ToF depth camera to 50.

ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_DEPTH_CAM, TY_INT_TOF_HDR_RATIO, 50));

Anti-Sunlight Index

TY_INT_TOF_ANTI_SUNLIGHT_INDEX = 0x0906 | TY_FEATURE_INT,

Sets the anti-sunlight index, an integer feature. Value range: [0,2]. It is used to optimize the depth imaging effect of the ToF camera under sunlight.

In indoor scenes or under weak sunlight, it is recommended to set the index to 0; in outdoor scenes or under strong sunlight, set it to 1 or 2.

Sample Code: Set the anti-sunlight index of the ToF depth camera to 0.

ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_DEPTH_CAM, TY_INT_TOF_ANTI_SUNLIGHT_INDEX, 0));

Speckle Filter

  • TY_INT_MAX_SPECKLE_DIFF = 0x0908 | TY_FEATURE_INT,

    Sets the speckle filter clustering threshold, an integer feature. Value range: [100,500]. Unit: mm. If the depth difference between adjacent pixels is less than the Max speckle diff clustering threshold, the adjacent pixels are considered to belong to the same speckle cluster.

  • TY_INT_MAX_SPECKLE_SIZE = 0x0907 | TY_FEATURE_INT,

    Sets the speckle filter size threshold, an integer feature. Value range:[0,200]. Unit: px. Speckle clusters with an area smaller than the Max speckle size threshold will be filtered out.

Sample Code: Set the TY_INT_MAX_SPECKLE_DIFF to 123, and TY_INT_MAX_SPECKLE_SIZE to 148.

ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_DEPTH_CAM, TY_INT_MAX_SPECKLE_DIFF, 123));
ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_DEPTH_CAM, TY_INT_MAX_SPECKLE_SIZE, 148));

Anti-Multi-Camera Interference

TY_BOOL_TOF_ANTI_INTERFERENCE = 0x0905 | TY_FEATURE_BOOL,

The enable switch for anti-multi-camera interference, a boolean feature. The default value is false.

When multiple cameras are working in the same scene and using the same modulation channel (ToF Channel), their signals may interfere with each other, causing abnormal depth values. Enabling the anti-multi-camera interference at this time can effectively filter out abnormal depth data caused by such interference.

Multi-Camera Interference

Sample Code: Set TY_BOOL_TOF_ANTI_INTERFERENCE of the ToF camera to true.

ASSERT_OK(TYSetBool(hDevice, TY_COMPONENT_DEPTH_CAM, TY_BOOL_TOF_ANTI_INTERFERENCE, true));

Effect after Enabling Anti-Multi-Camera Interference

Depth Effective Distance Range

  • TY_INT_DEPTH_MIN_MM = 0x062D | TY_FEATURE_INT,

    The threshold for the minimum depth effective distance, an integer feature. Unit: mm. Pixels with values lower than this threshold will be set to 0 and excluded from subsequent calculations.

  • TY_INT_DEPTH_MAX_MM = 0x062E | TY_FEATURE_INT,

    The threshold for the maximum depth effective distance, an integer feature. Unit: mm. Pixels with values larger than this threshold will be set to 0 and excluded from subsequent calculations.

Sample Code: Set the depth effective distance range for a ToF camera to [500, 1500].

ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_DEPTH_CAM, TY_INT_DEPTH_MIN_MM, 500));
ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_DEPTH_CAM, TY_INT_DEPTH_MAX_MM, 1500));

Coordinate Transformation

Point cloud data structure:

typedef struct TY_VECT_3F
{
    float   x;
    float   y;
    float   z;
}TY_VECT_3F;

Data structure of points on a depth map:

typedef struct TY_PIXEL_DESC
{
  int16_t x;      // x coordinate in pixels
  int16_t y;      // y coordinate in pixels
  uint16_t depth; // depth value
  uint16_t rsvd;
}TY_PIXEL_DESC;

TYInvertExtrinsic inverts an extrinsic matrix. It takes a 4x4 extrinsic matrix as input and outputs its inverse.

TY_CAPI   TYInvertExtrinsic      (const TY_CAMERA_EXTRINSIC* orgExtrinsic,
                                 TY_CAMERA_EXTRINSIC* invExtrinsic);

TYMapDepthToPoint3d converts points on a depth map to point cloud data. It takes input parameters such as calibration data of the depth camera, the width and height of the depth map, depth data points, and point count. The output is the point cloud data.

TY_CAPI   TYMapDepthToPoint3d    (const TY_CAMERA_CALIB_INFO* src_calib,
                                 uint32_t depthW, uint32_t depthH,
                                 const TY_PIXEL_DESC* depthPixels, uint32_t count,
                                 TY_VECT_3F* point3d);

TYMapPoint3dToDepth converts point cloud data to depth map data. It takes input parameters such as the calibration data of the depth camera, point cloud data, point count, and the width and height of the target depth map. It outputs point data on the depth map. It is the inverse operation of TYMapDepthToPoint3d.

TY_CAPI   TYMapPoint3dToDepth    (const TY_CAMERA_CALIB_INFO* dst_calib,
                                 const TY_VECT_3F* point3d, uint32_t count,
                                 uint32_t depthW, uint32_t depthH,
                                 TY_PIXEL_DESC* depth);

TYMapDepthImageToPoint3d converts a depth image to a point cloud. It takes input parameters such as the calibration data of the depth camera, the width and height of the depth map, and the depth map itself. It outputs the point cloud. Depth pixels with a value of 0 are mapped to (NAN, NAN, NAN), indicating invalid depth.

TY_CAPI   TYMapDepthImageToPoint3d  (const TY_CAMERA_CALIB_INFO* src_calib,
                                     int32_t imageW, int32_t imageH,
                                     const uint16_t* depth,
                                     TY_VECT_3F* point3d,
                                     float f_scale_unit = 1.0f);

Sample Code: Get point cloud data. For more details, please refer to the SDK sample program SimpleView_Point3D.

struct CallbackData {
    int             index;
    TY_DEV_HANDLE   hDevice;
    TY_ISP_HANDLE   isp_handle;
    TY_CAMERA_CALIB_INFO depth_calib;
    TY_CAMERA_CALIB_INFO color_calib;

};

static void handleFrame(TY_FRAME_DATA* frame, void* userdata) {
    // We only use OpenCV Mat as the data container here;
    // you can allocate memory by yourself.
    CallbackData* pData = (CallbackData*) userdata;
    LOGD("=== Get frame %d", ++pData->index);

    cv::Mat depth, color;
    parseFrame(*frame, &depth, NULL, NULL, &color, pData->isp_handle);
    if(!depth.empty()){
        std::vector<TY_VECT_3F> p3d;
        p3d.resize(depth.size().area());

        ASSERT_OK(TYMapDepthImageToPoint3d(&pData->depth_calib, depth.cols, depth.rows
            , (uint16_t*)depth.data, &p3d[0]));
    }
}

TYMapPoint3dToDepthImage converts a point cloud to a depth map. It takes input parameters such as the calibration data of the depth camera, the point cloud, the point count, and the width and height of the depth map. It outputs the converted depth map. Invalid points (NAN, NAN, NAN) are mapped to an invalid depth of 0.

TY_CAPI   TYMapPoint3dToDepthImage  (const TY_CAMERA_CALIB_INFO* dst_calib,
                                 const TY_VECT_3F* point3d, uint32_t count,
                                 uint32_t depthW, uint32_t depthH, uint16_t* depth);

TYMapPoint3dToPoint3d performs point cloud coordinate transformation. It takes input parameters such as the extrinsic parameter matrix, point cloud data, and point count. It outputs the transformed point cloud data.

TY_CAPI   TYMapPoint3dToPoint3d     (const TY_CAMERA_EXTRINSIC* extrinsic,
                                 const TY_VECT_3F* point3d, int32_t count,
                                 TY_VECT_3F* point3dTo);
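
Per point, the conversion is p′ = R·p + t. A standalone sketch of that operation, using a plain row-major 4x4 array in place of TY_CAMERA_EXTRINSIC and Vec3f in place of TY_VECT_3F (both assumptions for illustration):

```cpp
struct Vec3f { float x, y, z; };  // stand-in for TY_VECT_3F

// Apply a 4x4 rigid transform [R|t] to one point: p' = R*p + t.
// This is the per-point operation behind TYMapPoint3dToPoint3d.
Vec3f transformPoint(const float e[16], const Vec3f& p) {
    return {
        e[0]*p.x + e[1]*p.y + e[2]*p.z  + e[3],
        e[4]*p.x + e[5]*p.y + e[6]*p.z  + e[7],
        e[8]*p.x + e[9]*p.y + e[10]*p.z + e[11]
    };
}
```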

TYMapDepthToColorCoordinate maps depth data points to color image coordinates. It takes input parameters such as the calibration data of the depth map, the width and height of the depth map, the depth map points data, the point count, the calibration data of the color image, the width and height of the color image. It outputs the mapped depth points.

static inline TY_STATUS TYMapDepthToColorCoordinate(
              const TY_CAMERA_CALIB_INFO* depth_calib,
              uint32_t depthW, uint32_t depthH,
              const TY_PIXEL_DESC* depth, uint32_t count,
              const TY_CAMERA_CALIB_INFO* color_calib,
              uint32_t mappedW, uint32_t mappedH,
              TY_PIXEL_DESC* mappedDepth,
                  float f_scale_unit = 1.0f);

TYCreateDepthToColorCoordinateLookupTable creates a coordinate lookup table from a depth map to a color image. It takes input parameters such as the calibration data of the depth map, the width and height of the depth map, the depth map data, the calibration data of the color image, the width and height of the mapped depth map data. It outputs a table of mapped depth point data.

static inline TY_STATUS TYCreateDepthToColorCoordinateLookupTable(
              const TY_CAMERA_CALIB_INFO* depth_calib,
              uint32_t depthW, uint32_t depthH, const uint16_t* depth,
              const TY_CAMERA_CALIB_INFO* color_calib,
              uint32_t mappedW, uint32_t mappedH,
              TY_PIXEL_DESC* lut,
                  float f_scale_unit = 1.0f);

TYMapDepthImageToColorCoordinate maps a depth map to the color image coordinates. It takes input parameters such as the calibration data of the depth map, the width and height of the depth map, the depth map, the calibration data of the color image, and the width and height of the color image. It outputs the mapped depth map.

static inline TY_STATUS TYMapDepthImageToColorCoordinate(
              const TY_CAMERA_CALIB_INFO* depth_calib,
              uint32_t depthW, uint32_t depthH, const uint16_t* depth,
              const TY_CAMERA_CALIB_INFO* color_calib,
              uint32_t mappedW, uint32_t mappedH, uint16_t* mappedDepth,
                  float f_scale_unit = 1.0f);

TYMapRGBImageToDepthCoordinate maps a color image to the depth map coordinates. It takes input parameters such as the calibration data of the depth map, the width and height of the depth map, the depth map, the calibration data of the color image, and the width and height of the color image. It outputs the mapped color image data.

static inline TY_STATUS TYMapRGBImageToDepthCoordinate(
              const TY_CAMERA_CALIB_INFO* depth_calib,
              uint32_t depthW, uint32_t depthH, const uint16_t* depth,
              const TY_CAMERA_CALIB_INFO* color_calib,
              uint32_t rgbW, uint32_t rgbH, const uint8_t* inRgb,
              uint8_t* mappedRgb,
                  float f_scale_unit = 1.0f);

Sample Code: Coordinate mapping between the depth map and the color image. For more details, please refer to the SDK sample program SimpleView_Registration.

static void doRegister(const TY_CAMERA_CALIB_INFO& depth_calib
                      , const TY_CAMERA_CALIB_INFO& color_calib
                      , const cv::Mat& depth
                      , const float f_scale_unit
                      , const cv::Mat& color
                      , bool needUndistort
                      , cv::Mat& undistort_color
                      , cv::Mat& out
                      , bool map_depth_to_color
                      )
{
  // do undistortion
  if (needUndistort) {
    TY_IMAGE_DATA src;
    src.width = color.cols;
    src.height = color.rows;
    src.size = color.size().area() * 3;
    src.pixelFormat = TY_PIXEL_FORMAT_RGB;
    src.buffer = color.data;

    undistort_color = cv::Mat(color.size(), CV_8UC3);
    TY_IMAGE_DATA dst;
    dst.width = color.cols;
    dst.height = color.rows;
    dst.size = undistort_color.size().area() * 3;
    dst.buffer = undistort_color.data;
    dst.pixelFormat = TY_PIXEL_FORMAT_RGB;
    ASSERT_OK(TYUndistortImage(&color_calib, &src, NULL, &dst));
  }
  else {
    undistort_color = color;
  }

  // do register
  if (map_depth_to_color) {
    out = cv::Mat::zeros(undistort_color.size(), CV_16U);
    ASSERT_OK(
      TYMapDepthImageToColorCoordinate(
        &depth_calib,
        depth.cols, depth.rows, depth.ptr<uint16_t>(),
        &color_calib,
        out.cols, out.rows, out.ptr<uint16_t>(), f_scale_unit
      )
    );
    cv::Mat temp;
    //you may want to use median filter to fill holes in projected depth map
    //or do something else here
    cv::medianBlur(out, temp, 5);
    out = temp;
  }
  else {
    out = cv::Mat::zeros(depth.size(), CV_8UC3);
    ASSERT_OK(
      TYMapRGBImageToDepthCoordinate(
        &depth_calib,
        depth.cols, depth.rows, depth.ptr<uint16_t>(),
        &color_calib,
        undistort_color.cols, undistort_color.rows, undistort_color.ptr<uint8_t>(),
        out.ptr<uint8_t>(), f_scale_unit
      )
    );
  }
}

TYMapMono8ImageToDepthCoordinate maps a MONO8 color image to the depth map coordinates. It takes input parameters such as the calibration data of the depth map, the width and height of the depth map, the depth map, the calibration data of the MONO8 color image, and the width and height of the color image. It outputs the mapped MONO8 color image data.

static inline TY_STATUS TYMapMono8ImageToDepthCoordinate(
              const TY_CAMERA_CALIB_INFO* depth_calib,
              uint32_t depthW, uint32_t depthH, const uint16_t* depth,
              const TY_CAMERA_CALIB_INFO* color_calib,
              uint32_t monoW, uint32_t monoH, const uint8_t* inMono,
              uint8_t* mappedMono,
                  float f_scale_unit = 1.0f);

Image Processing

TYUndistortImage is used for undistortion of images output by the image sensor components. Supported data formats include TY_PIXEL_FORMAT_MONO, TY_PIXEL_FORMAT_RGB, and TY_PIXEL_FORMAT_BGR. It takes input parameters such as the sensor calibration data, the distorted original image, and the desired image intrinsics (if NULL is input, the sensor’s original intrinsics will be used). It outputs the image data after undistortion.

TY_CAPI TYUndistortImage (const TY_CAMERA_CALIB_INFO *srcCalibInfo
    , const TY_IMAGE_DATA *srcImage
    , const TY_CAMERA_INTRINSIC *cameraNewIntrinsic
    , TY_IMAGE_DATA *dstImage
    );

TYDepthSpeckleFilter fills in invalid points and reduces noise in discrete points in depth maps. It takes input parameters such as the depth map and the filtering parameters, and outputs the processed depth map.

struct DepthSpeckleFilterParameters {
    int max_speckle_size; // blob size smaller than this will be removed
    int max_speckle_diff; // Maximum difference between neighbor disparity pixels
};
#define DepthSpeckleFilterParameters_Initializer {150, 64}

TY_CAPI TYDepthSpeckleFilter (TY_IMAGE_DATA* depthImage
        , const DepthSpeckleFilterParameters* param
        );
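
The idea behind the filter can be sketched without the SDK: group connected pixels whose depth difference is below max_speckle_diff, then zero out clusters smaller than max_speckle_size. The following is an illustrative reimplementation of that idea, not the SDK's code:

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Remove small depth speckles: 4-connected pixels whose depth difference
// is below max_diff form a cluster; clusters with fewer than max_size
// pixels are set to 0 (invalid).
void speckleFilter(std::vector<uint16_t>& depth, int w, int h,
                   int max_size, int max_diff) {
    std::vector<int> label(w * h, -1);
    std::vector<int> stack;
    int next = 0;
    for (int i = 0; i < w * h; ++i) {
        if (depth[i] == 0 || label[i] >= 0) continue;
        // flood-fill one cluster starting at pixel i
        std::vector<int> members;
        stack.push_back(i);
        label[i] = next;
        while (!stack.empty()) {
            int p = stack.back(); stack.pop_back();
            members.push_back(p);
            int x = p % w, y = p / w;
            const int nx[4] = {x - 1, x + 1, x, x};
            const int ny[4] = {y, y, y - 1, y + 1};
            for (int k = 0; k < 4; ++k) {
                if (nx[k] < 0 || nx[k] >= w || ny[k] < 0 || ny[k] >= h) continue;
                int q = ny[k] * w + nx[k];
                if (depth[q] == 0 || label[q] >= 0) continue;
                if (std::abs((int)depth[p] - (int)depth[q]) < max_diff) {
                    label[q] = next;
                    stack.push_back(q);
                }
            }
        }
        if ((int)members.size() < max_size)
            for (int m : members) depth[m] = 0;  // drop small speckle
        ++next;
    }
}
```

This mirrors the roles of max_speckle_size and max_speckle_diff in DepthSpeckleFilterParameters; the SDK's actual algorithm may differ in detail.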

TYDepthEnhenceFilter is used for image filtering. It takes input parameters such as the depth map, the number of images, the reference image, and the filter coefficients. It outputs the filtered depth data.

TY_CAPI TYDepthEnhenceFilter (const TY_IMAGE_DATA* depthImages
        , int imageNum
        , TY_IMAGE_DATA *guide
        , TY_IMAGE_DATA *output
        , const DepthEnhenceParameters* param
        );


struct DepthEnhenceParameters{
    float sigma_s;          // filter param on space
    float sigma_r;          // filter param on range
    int   outlier_win_sz;   // outlier filter windows size
    float outlier_rate;
};
#define DepthEnhenceParameters_Initializer {10, 20, 10, 0.1f}

sigma_s is the spatial filtering coefficient, sigma_r is the depth filtering coefficient, outlier_win_sz is the filtering window in pixels, and outlier_rate is the noise filtering coefficient.
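
sigma_s and sigma_r play the same roles as in a bilateral filter: sigma_s weights neighbors by pixel distance, sigma_r by depth difference. The following standalone sketch illustrates that weighting on a raw depth buffer (an illustration of the parameter semantics, not the SDK's implementation):

```cpp
#include <cstdint>
#include <cmath>
#include <vector>

// Bilateral-style smoothing of a depth map: each output pixel is a
// weighted average of its neighborhood, where sigma_s controls the
// spatial falloff and sigma_r the depth-difference falloff.
// Invalid pixels (value 0) are skipped and stay invalid.
std::vector<uint16_t> bilateralDepth(const std::vector<uint16_t>& in,
                                     int w, int h, int radius,
                                     float sigma_s, float sigma_r) {
    std::vector<uint16_t> out(in.size(), 0);
    for (int y = 0; y < h; ++y)
    for (int x = 0; x < w; ++x) {
        uint16_t c = in[y * w + x];
        if (c == 0) continue;                       // keep invalid invalid
        float sum = 0.f, wsum = 0.f;
        for (int dy = -radius; dy <= radius; ++dy)
        for (int dx = -radius; dx <= radius; ++dx) {
            int nx = x + dx, ny = y + dy;
            if (nx < 0 || nx >= w || ny < 0 || ny >= h) continue;
            uint16_t n = in[ny * w + nx];
            if (n == 0) continue;
            float ws = std::exp(-(dx*dx + dy*dy) / (2.f * sigma_s * sigma_s));
            float dr = float(n) - float(c);
            float wr = std::exp(-dr * dr / (2.f * sigma_r * sigma_r));
            sum  += ws * wr * n;
            wsum += ws * wr;
        }
        out[y * w + x] = (uint16_t)(sum / wsum + 0.5f);
    }
    return out;
}
```

A larger sigma_s smooths over a wider area; a larger sigma_r lets pixels with bigger depth differences influence each other, so edges blur more.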