API description

Library loading and unloading

Before using the Camport SDK to control a camera and capture images, the Camport SDK library must be loaded into your application. Likewise, the library should be unloaded before the application exits.

inline TY_STATUS    TYInitLib                 (void);
TY_CAPI             TYDeinitLib               (void);

Get the version information of Camport SDK.

TY_CAPI             TYLibVersion              (TY_VERSION_INFO* version);

Interface control

Before discovering devices, the status of the host's USB, Ethernet, and wireless network interfaces must be updated in order to obtain interface handles, and each interface handle should be released before the application exits.

Update interface status.

TY_CAPI             TYUpdateInterfaceList     ();

Get the number of interfaces.

TY_CAPI             TYGetInterfaceNumber      (uint32_t* pNumIfaces);

Get the interface list.

TY_CAPI             TYGetInterfaceList        (TY_INTERFACE_INFO* pIfaceInfos, uint32_t bufferCount, uint32_t* filledCount);

Check if the interface is valid.

TY_CAPI             TYHasInterface            (const char* ifaceID, bool* value);

Open the interface.

TY_CAPI             TYOpenInterface           (const char* ifaceID, TY_INTERFACE_HANDLE* outHandle);

Release the interface handle.

TY_CAPI             TYCloseInterface          (TY_INTERFACE_HANDLE ifaceHandle);

Device control

TYUpdateDeviceList updates the list of devices mounted on the specified interface.

TY_CAPI             TYUpdateDeviceList        (TY_INTERFACE_HANDLE ifaceHandle);

TYGetDeviceNumber gets the number of devices mounted on the specified interface.

TY_CAPI             TYGetDeviceNumber         (TY_INTERFACE_HANDLE ifaceHandle, uint32_t* deviceNumber);

TYGetDeviceList gets the list of devices mounted on the specified interface. bufferCount is the size of the deviceInfos array, which should be allocated according to the number of mounted devices.

TY_CAPI             TYGetDeviceList           (TY_INTERFACE_HANDLE ifaceHandle, TY_DEVICE_BASE_INFO* deviceInfos, uint32_t bufferCount, uint32_t* filledDeviceCount);

TYHasDevice queries whether a device exists. The input parameters are the interface handle and device ID, and the output parameter indicates whether the specified device is present on that interface.

TY_CAPI             TYHasDevice               (TY_INTERFACE_HANDLE ifaceHandle, const char* deviceID, bool* value);

TYOpenDevice opens a device. The input parameters are the interface handle and device ID, and the output parameter is the device handle of the opened camera.

TY_CAPI             TYOpenDevice              (TY_INTERFACE_HANDLE ifaceHandle, const char* deviceID, TY_DEV_HANDLE* outDeviceHandle, TY_FW_ERRORCODE* outFwErrorcode);

TYOpenDeviceWithIP can be used to open a network camera with a specified IP address. Input the interface handle and IP address to obtain the handle for the opened network camera.

TY_CAPI             TYOpenDeviceWithIP        (TY_INTERFACE_HANDLE ifaceHandle, const char* IP, TY_DEV_HANDLE* deviceHandle);

TYGetDeviceInterface uses the handle of an opened device to query the handle of the interface on which that device is mounted.

TY_CAPI             TYGetDeviceInterface      (TY_DEV_HANDLE hDevice, TY_INTERFACE_HANDLE* pIface);

TYForceDeviceIP forcibly sets the IP address of a network camera. When the camera's MAC address is known, this interface can be used to temporarily force the camera to use the specified IP address, netmask, and gateway. This IP configuration is lost after the device reboots.

TY_CAPI             TYForceDeviceIP           (TY_INTERFACE_HANDLE ifaceHandle, const char* MAC, const char* newIP, const char* newNetMask, const char* newGateway);

TYCloseDevice closes the specified device.

TY_CAPI             TYCloseDevice             (TY_DEV_HANDLE hDevice, bool reboot);

TYGetDeviceInfo queries device information. The input parameter is the device handle, and the output parameter is the device information, such as interface, version, and manufacturer.

TY_CAPI             TYGetDeviceInfo           (TY_DEV_HANDLE hDevice, TY_DEVICE_BASE_INFO* info);

Device information includes the following data:

typedef struct TY_DEVICE_BASE_INFO
{
    TY_INTERFACE_INFO   iface;
    char                id[32];///<device serial number
    char                vendorName[32];
    char                userDefinedName[32];
    char                modelName[32];///<device model name
    TY_VERSION_INFO     hardwareVersion; ///<deprecated
    TY_VERSION_INFO     firmwareVersion;///<deprecated
    union {
      TY_DEVICE_NET_INFO netInfo;
      TY_DEVICE_USB_INFO usbInfo;
    };
    char                buildHash[256];
    char                configVersion[256];
    char                reserved[256];
}TY_DEVICE_BASE_INFO;

The general operation of opening the device is as follows:

LOGD("Init lib");
ASSERT_OK( TYInitLib() );
TY_VERSION_INFO ver;
ASSERT_OK( TYLibVersion(&ver) );
LOGD("     - lib version: %d.%d.%d", ver.major, ver.minor, ver.patch);

std::vector<TY_DEVICE_BASE_INFO> selected;
ASSERT_OK( selectDevice(TY_INTERFACE_ALL, ID, IP, 1, selected) );
ASSERT(selected.size() > 0);
TY_DEVICE_BASE_INFO& selectedDev = selected[0];

ASSERT_OK( TYOpenInterface(selectedDev.iface.id, &hIface) );
ASSERT_OK( TYOpenDevice(hIface, selectedDev.id, &handle) );

The selectDevice function is encapsulated as follows:

static inline TY_STATUS selectDevice(TY_INTERFACE_TYPE iface
    , const std::string& ID, const std::string& IP
    , uint32_t deviceNum, std::vector<TY_DEVICE_BASE_INFO>& out)
{
    LOGD("Update interface list");
    ASSERT_OK( TYUpdateInterfaceList() );

    uint32_t n = 0;
    ASSERT_OK( TYGetInterfaceNumber(&n) );
    LOGD("Got %u interface list", n);
    if(n == 0){
      LOGE("interface number incorrect");
      return TY_STATUS_ERROR;
    }

    std::vector<TY_INTERFACE_INFO> ifaces(n);
    ASSERT_OK( TYGetInterfaceList(&ifaces[0], n, &n) );
    ASSERT( n == ifaces.size() );
    for(uint32_t i = 0; i < n; i++){
      LOGI("Found interface %u:", i);
      LOGI("  name: %s", ifaces[i].name);
      LOGI("  id:   %s", ifaces[i].id);
      LOGI("  type: 0x%x", ifaces[i].type);
      if(TYIsNetworkInterface(ifaces[i].type)){
        LOGI("    MAC: %s", ifaces[i].netInfo.mac);
        LOGI("    ip: %s", ifaces[i].netInfo.ip);
        LOGI("    netmask: %s", ifaces[i].netInfo.netmask);
        LOGI("    gateway: %s", ifaces[i].netInfo.gateway);
        LOGI("    broadcast: %s", ifaces[i].netInfo.broadcast);
      }
    }

    out.clear();
    std::vector<TY_INTERFACE_TYPE> ifaceTypeList;
    ifaceTypeList.push_back(TY_INTERFACE_USB);
    ifaceTypeList.push_back(TY_INTERFACE_ETHERNET);
    ifaceTypeList.push_back(TY_INTERFACE_IEEE80211);
    for(size_t t = 0; t < ifaceTypeList.size(); t++){
      for(uint32_t i = 0; i < ifaces.size(); i++){
        if(ifaces[i].type == ifaceTypeList[t] && (ifaces[i].type & iface) && deviceNum > out.size()){
          TY_INTERFACE_HANDLE hIface;
          ASSERT_OK( TYOpenInterface(ifaces[i].id, &hIface) );
          ASSERT_OK( TYUpdateDeviceList(hIface) );
          uint32_t n = 0;
          TYGetDeviceNumber(hIface, &n);
          if(n > 0){
            std::vector<TY_DEVICE_BASE_INFO> devs(n);
            TYGetDeviceList(hIface, &devs[0], n, &n);
            for(uint32_t j = 0; j < n; j++){
              if(deviceNum > out.size() && ((ID.empty() && IP.empty())
                  || (!ID.empty() && devs[j].id == ID)
                  || (!IP.empty() && IP == devs[j].netInfo.ip)))
              {
                if (devs[j].iface.type == TY_INTERFACE_ETHERNET || devs[j].iface.type == TY_INTERFACE_IEEE80211) {
                  LOGI("*** Select %s on %s, ip %s", devs[j].id, ifaces[i].id, devs[j].netInfo.ip);
                } else {
                  LOGI("*** Select %s on %s", devs[j].id, ifaces[i].id);
                }
                out.push_back(devs[j]);
              }
            }
          }
          TYCloseInterface(hIface);
        }
      }
    }

    if(out.size() == 0){
      LOGE("not found any device");
      return TY_STATUS_ERROR;
    }

    return TY_STATUS_OK;
}

The general operation of closing the device is as follows:

ASSERT_OK( TYCloseDevice(hDevice) );
ASSERT_OK( TYCloseInterface(hIface) );

Component control

TYGetComponentIDs queries the components supported by the device.

TY_CAPI             TYGetComponentIDs         (TY_DEV_HANDLE hDevice, int32_t* componentIDs);

TYGetEnabledComponents queries the enabled components.

TY_CAPI             TYGetEnabledComponents    (TY_DEV_HANDLE hDevice, int32_t* componentIDs);

TYEnableComponents enables the specified component.

TY_CAPI             TYEnableComponents        (TY_DEV_HANDLE hDevice, int32_t componentIDs);

TYDisableComponents disables the specified component.

TY_CAPI             TYDisableComponents       (TY_DEV_HANDLE hDevice, int32_t componentIDs);

Sample code: Query and enable the left and right infrared image sensors and the color image sensor.

int32_t allComps;
ASSERT_OK( TYGetComponentIDs(hDevice, &allComps) );
if(allComps & TY_COMPONENT_RGB_CAM  && color) {
    LOGD("Has RGB camera, open RGB cam");
    ASSERT_OK( TYEnableComponents(hDevice, TY_COMPONENT_RGB_CAM) );
}

if (allComps & TY_COMPONENT_IR_CAM_LEFT && ir) {
    LOGD("Has IR left camera, open IR left cam");
    ASSERT_OK(TYEnableComponents(hDevice, TY_COMPONENT_IR_CAM_LEFT));
}

if (allComps & TY_COMPONENT_IR_CAM_RIGHT && ir) {
    LOGD("Has IR right camera, open IR right cam");
    ASSERT_OK(TYEnableComponents(hDevice, TY_COMPONENT_IR_CAM_RIGHT));
}

Framebuffer management

TYGetFrameBufferSize gets the framebuffer size required for the current device configuration. The framebuffer size depends on the enabled components, and the format and resolution of image data.

TY_CAPI             TYGetFrameBufferSize      (TY_DEV_HANDLE hDevice, uint32_t* bufferSize);

TYEnqueueBuffer pushes the allocated framebuffer into the buffer queue.

TY_CAPI             TYEnqueueBuffer           (TY_DEV_HANDLE hDevice, void* buffer, uint32_t bufferSize);

TYClearBufferQueue clears the SDK's internal framebuffer queue. When the set of enabled components is adjusted at runtime, the required framebuffer size may change, so the buffer queue inside the SDK must be cleared and the framebuffers reallocated and enqueued again.

TY_CAPI             TYClearBufferQueue        (TY_DEV_HANDLE hDevice);

Sample code: Query the size of the framebuffer, allocate 2 framebuffers, and push them into the buffer queue.

uint32_t frameSize;
ASSERT_OK( TYGetFrameBufferSize(hDevice, &frameSize) );

LOGD("     - Allocate & enqueue buffers");
char* frameBuffer[2];
frameBuffer[0] = new char[frameSize];
frameBuffer[1] = new char[frameSize];
LOGD("     - Enqueue buffer (%p, %d)", frameBuffer[0], frameSize);
ASSERT_OK( TYEnqueueBuffer(hDevice, frameBuffer[0], frameSize) );
LOGD("     - Enqueue buffer (%p, %d)", frameBuffer[1], frameSize);
ASSERT_OK( TYEnqueueBuffer(hDevice, frameBuffer[1], frameSize) );

Time synchronization settings

TY_ENUM_TIME_SYNC_TYPE = 0x0211 | TY_FEATURE_ENUM,

TY_BOOL_TIME_SYNC_READY = 0x0212 | TY_FEATURE_BOOL,

TY_ENUM_TIME_SYNC_TYPE sets the time synchronization type of the depth camera; it is an enumeration feature. TY_BOOL_TIME_SYNC_READY indicates whether time synchronization has completed; it is a boolean feature. The depth camera supports time synchronization with the HOST, an NTP server, a PTP server, or CAN.

Tip

After setting up NTP time synchronization, you need to configure the NTP server. Please refer to Network configuration for the configuration method.

Definition:

typedef enum TY_TIME_SYNC_TYPE_LIST
{
    TY_TIME_SYNC_TYPE_NONE = 0,
    TY_TIME_SYNC_TYPE_HOST = 1,
    TY_TIME_SYNC_TYPE_NTP = 2,
    TY_TIME_SYNC_TYPE_PTP = 3,
    TY_TIME_SYNC_TYPE_CAN = 4,
    TY_TIME_SYNC_TYPE_PTP_MASTER = 5,
}TY_TIME_SYNC_TYPE_LIST;
typedef int32_t TY_TIME_SYNC_TYPE;

Operation: Set the time synchronization type with TYSetEnum(), then confirm that time synchronization has completed by reading TY_BOOL_TIME_SYNC_READY.

Sample code: Set time synchronization with the HOST. After this synchronization type is set, the host automatically sends its current time and then re-synchronizes the time every 6 seconds.

LOGD("Set type of time sync mechanism");
ASSERT_OK(TYSetEnum(hDevice, TY_COMPONENT_DEVICE, TY_ENUM_TIME_SYNC_TYPE, TY_TIME_SYNC_TYPE_HOST));
LOGD("Wait for time sync ready");
while (1) {
    bool sync_ready;
    ASSERT_OK(TYGetBool(hDevice, TY_COMPONENT_DEVICE, TY_BOOL_TIME_SYNC_READY, &sync_ready));
    if (sync_ready) {
        break;
    }
    MSLEEP(10);
}

Work mode settings

TY_STRUCT_TRIGGER_PARAM = 0x0523 | TY_FEATURE_STRUCT,

TY_STRUCT_TRIGGER_PARAM sets the work mode of the depth camera; it is a structure feature. Defined as follows:

typedef enum TY_TRIGGER_MODE_LIST
{
    TY_TRIGGER_MODE_OFF         = 0,
    TY_TRIGGER_MODE_SLAVE       = 1,
    TY_TRIGGER_MODE_M_SIG       = 2,
    TY_TRIGGER_MODE_M_PER       = 3,
    TY_TRIGGER_MODE_SIG_PASS    = 18,
    TY_TRIGGER_MODE_PER_PASS    = 19,
    TY_TRIGGER_MODE_TIMER_LIST  = 20,
    TY_TRIGGER_MODE_TIMER_PERIOD= 21,
}TY_TRIGGER_MODE_LIST;

typedef int16_t TY_TRIGGER_MODE;
typedef struct TY_TRIGGER_PARAM
{
    TY_TRIGGER_MODE   mode;
    int8_t    fps;
    int8_t    rsvd;
}TY_TRIGGER_PARAM;

//@see sample SimpleView_TriggerMode, only for TY_TRIGGER_MODE_SIG_PASS/TY_TRIGGER_MODE_PER_PASS
typedef struct TY_TRIGGER_PARAM_EX
{
    TY_TRIGGER_MODE   mode;
    union
    {
        struct
        {
            int8_t    fps;
            int8_t    duty;
            int32_t   laser_stream;
            int32_t   led_stream;
            int32_t   led_expo;
            int32_t   led_gain;
        };
        struct
        {
            int32_t   ir_gain[2];
        };
        int32_t   rsvd[32];
    };
}TY_TRIGGER_PARAM_EX;

  • TY_TRIGGER_MODE_OFF sets the depth camera to work in mode 0, where the camera continuously captures images and outputs image data at the highest frame rate.

    LOGD("=== Disable trigger mode");
    TY_TRIGGER_PARAM trigger;
    trigger.mode = TY_TRIGGER_MODE_OFF;
    ASSERT_OK(TYSetStruct(hDevice, TY_COMPONENT_DEVICE, TY_STRUCT_TRIGGER_PARAM, &trigger, sizeof(trigger)));
    
  • TY_TRIGGER_MODE_SLAVE sets the depth camera to work in mode 1, where the camera captures images and outputs image data upon receiving a software trigger command or a hardware trigger signal.

    LOGD("=== Set trigger to slave mode");
    TY_TRIGGER_PARAM trigger;
    trigger.mode = TY_TRIGGER_MODE_SLAVE;
    ASSERT_OK(TYSetStruct(hDevice, TY_COMPONENT_DEVICE, TY_STRUCT_TRIGGER_PARAM, &trigger, sizeof(trigger)));
    
  • TY_TRIGGER_MODE_M_SIG sets the master device (camera) to work in mode 2, while multiple slave devices (cameras) work in mode 1 to achieve cascade triggering of multiple depth cameras and simultaneous image acquisition.

    After receiving the software trigger signal sent by the host computer, the master device outputs the trigger signal through the hardware TRIG_OUT interface, and at the same time triggers its own acquisition and outputs the depth image. After receiving the hardware trigger signal from the master device, the slave device acquires and outputs the depth image.

    LOGD("=== Set trigger mode");
    if (((list.size() > 0) && (strcmp(selected[i].id, list[0]) == 0))
            || ((count == 0) && (list.size() == 0))) {
        LOGD("=== set master device, id: %s", cams[count].sn);
        cams[count].tag = std::string(cams[count].sn) + "_master";
        TY_TRIGGER_PARAM param;
        param.mode = TY_TRIGGER_MODE_M_SIG;
        ASSERT_OK(TYSetStruct(cams[count].hDev, TY_COMPONENT_DEVICE, TY_STRUCT_TRIGGER_PARAM, (void*)&param, sizeof(param)));
    } else {
        cams[count].tag = std::string(cams[count].sn) + "_slave";
        TY_TRIGGER_PARAM param;
        param.mode = TY_TRIGGER_MODE_SLAVE;
        ASSERT_OK(TYSetStruct(cams[count].hDev, TY_COMPONENT_DEVICE, TY_STRUCT_TRIGGER_PARAM, (void*)&param, sizeof(param)));
    }
    
  • TY_TRIGGER_MODE_M_PER sets the master device (camera) to work in mode 3, while multiple slave devices (cameras) work in mode 1, to achieve cascade triggering of multiple depth cameras according to the set frame rate, while simultaneously capturing images.

    The master device outputs a trigger signal through the hardware TRIG_OUT interface according to the set frame rate, and triggers its own acquisition and output of the depth image. After receiving the hardware trigger signal from the master device, the slave device acquires and outputs the depth image.

    Note

    1. The set frame rate cannot exceed the camera’s processing capability, i.e., the camera’s output frame rate in work mode 0 with TY_BOOL_CMOS_SYNC=false.

    2. In work mode 3 (without connecting to a slave device), the master device can smoothly output images at the set frame rate, which is suitable for platforms that require a specific frame rate for receiving images or have limited image data processing capabilities.

    LOGD("=== Set trigger mode");
    
    if (((list.size() > 0) && (strcmp(selected[i].id, list[0]) == 0))
        || ((count == 0) && (list.size() == 0))) {
        LOGD("=== set master device");
        cams[count].tag = std::string(cams[count].sn) + "_master";
        TY_TRIGGER_PARAM param;
        param.mode = TY_TRIGGER_MODE_M_PER;
        param.fps = 5;
        ASSERT_OK(TYSetStruct(cams[count].hDev, TY_COMPONENT_DEVICE, TY_STRUCT_TRIGGER_PARAM, (void*)&param, sizeof(param)));
    }
    else {
        cams[count].tag = std::string(cams[count].sn) + "_slave";
        TY_TRIGGER_PARAM param;
        param.mode = TY_TRIGGER_MODE_SLAVE;
        ASSERT_OK(TYSetStruct(cams[count].hDev, TY_COMPONENT_DEVICE, TY_STRUCT_TRIGGER_PARAM, (void*)&param, sizeof(param)));
    }
    
  • TY_TRIGGER_MODE_SIG_PASS sets the depth camera to work in mode 18. After receiving a software trigger command or a hardware trigger signal, the camera captures one round of images and outputs image data at the set frame rate in a 1+duty pattern (1: one floodlight exposure; duty: duty laser exposures).

    Note

    The set frame rate cannot exceed the camera’s processing capability, i.e., the camera’s output frame rate in work mode 0 with TY_BOOL_CMOS_SYNC=false.

    LOGD("=== Set trigger to trig mode 18");
    TY_TRIGGER_PARAM_EX trigger;
    trigger.mode = TY_TRIGGER_MODE_SIG_PASS;
    trigger.fps = 10;           // [1, 15]
    trigger.duty = duty;
    trigger.laser_stream = TY_COMPONENT_DEPTH_CAM | TY_COMPONENT_RGB_CAM;
    trigger.led_stream = TY_COMPONENT_IR_CAM_LEFT | TY_COMPONENT_RGB_CAM;
    trigger.led_expo = 1088;    // [3, 1088]
    trigger.led_gain = 32;      // [0, 255]
    ASSERT_OK(TYSetStruct(hDevice, TY_COMPONENT_DEVICE, TY_STRUCT_TRIGGER_PARAM_EX, &trigger, sizeof(trigger)));
    
  • TY_TRIGGER_MODE_PER_PASS sets the depth camera to work in mode 19. After receiving a software trigger command or a hardware trigger signal, the camera continuously captures images and outputs image data at the set frame rate in a 1+duty pattern (1: one floodlight exposure; duty: duty laser exposures).

    Note

    The set frame rate cannot exceed the camera’s processing capability, i.e., the camera’s output frame rate in work mode 0 with TY_BOOL_CMOS_SYNC=false.

    LOGD("=== Set trigger to trig mode 19");
    TY_TRIGGER_PARAM_EX trigger;
    trigger.mode = TY_TRIGGER_MODE_PER_PASS;
    trigger.fps = 10;           // [1, 15]
    trigger.duty = duty;
    trigger.laser_stream = TY_COMPONENT_DEPTH_CAM | TY_COMPONENT_RGB_CAM;
    trigger.led_stream = TY_COMPONENT_IR_CAM_LEFT | TY_COMPONENT_RGB_CAM;
    trigger.led_expo = 1088;    // [3,1088]
    trigger.led_gain = 32;      // [0, 255]
    ASSERT_OK(TYSetStruct(hDevice, TY_COMPONENT_DEVICE, TY_STRUCT_TRIGGER_PARAM_EX, &trigger, sizeof(trigger)));
    
  • TY_TRIGGER_MODE_TIMER_LIST configures the depth camera to work in mode 20 (list timer trigger mode). Based on the set trigger start time (start_time_us), the number of timed intervals (offset_us_count), and the array of intervals between consecutive frames (offset_us_list[]), the camera captures (1 + offset_us_count) images and outputs the image data.

    This work mode requires the camera to first start PTP synchronization. For more details, please refer to Time synchronization settings.

    Definition:

    typedef struct TY_TRIGGER_TIMER_LIST
    {
        uint64_t  start_time_us; // 0 for disable
        uint32_t  offset_us_count; // length of offset_us_list
        uint32_t  offset_us_list[50]; // used in TY_TRIGGER_MODE_TIMER_LIST mode
    }TY_TRIGGER_TIMER_LIST;
    

    Operation:

    1. Set the depth camera to work in mode 20 using TY_TRIGGER_MODE_TIMER_LIST.

      LOGD("=== Set trigger to trig mode 20");
      TY_TRIGGER_PARAM trigger;
      trigger.mode = TY_TRIGGER_MODE_TIMER_LIST;
      ASSERT_OK(TYSetStruct(hDevice, TY_COMPONENT_DEVICE, TY_STRUCT_TRIGGER_PARAM, &trigger, sizeof(trigger)));
      
    2. Set a list trigger timer using TY_TRIGGER_TIMER_LIST, where offset_us_count ≤ 50.

      LOGD("=== Set trigger timer list");
      TY_TRIGGER_TIMER_LIST list_timer;
      list_timer.start_time_us = (getSystemTime() + 3000) * 1000;
      list_timer.offset_us_count = 4;
      list_timer.offset_us_list[0] = 1000000;
      list_timer.offset_us_list[1] = 1000000;
      list_timer.offset_us_list[2] = 1000000;
      list_timer.offset_us_list[3] = 1000000;
      ASSERT_OK(TYSetStruct(hDevice, TY_COMPONENT_DEVICE, TY_STRUCT_TRIGGER_TIMER_LIST, &list_timer, sizeof(list_timer)));
      
    3. If you need to disable the list trigger timer, set start_time_us to 0.

      TY_TRIGGER_TIMER_LIST list_timer;
      list_timer.start_time_us = 0;
      ASSERT_OK(TYSetStruct(hDevice, TY_COMPONENT_DEVICE, TY_STRUCT_TRIGGER_TIMER_LIST, &list_timer, sizeof(list_timer)));
      
  • TY_TRIGGER_MODE_TIMER_PERIOD sets the depth camera to work in mode 21 (periodic timer trigger mode). Based on the set trigger start time (start_time_us), trigger count (trigger_count), and trigger interval (period_us), the camera captures one frame every period_us microseconds, for a total of trigger_count images, and outputs the image data.

    This work mode requires the camera to start PTP synchronization first. For details, please refer to Time synchronization settings.

    Definition:

    typedef struct TY_TRIGGER_TIMER_PERIOD
    {
        uint64_t  start_time_us; // 0 for disable
        uint32_t  trigger_count;
        uint32_t  period_us; // used in TY_TRIGGER_MODE_TIMER_PERIOD mode
    }TY_TRIGGER_TIMER_PERIOD;
    

    Operation:

    1. Set the depth camera to work in mode 21 using TY_TRIGGER_MODE_TIMER_PERIOD.

      LOGD("=== Set trigger to trig mode 21");
      TY_TRIGGER_PARAM trigger;
      trigger.mode = TY_TRIGGER_MODE_TIMER_PERIOD;
      ASSERT_OK(TYSetStruct(hDevice, TY_COMPONENT_DEVICE, TY_STRUCT_TRIGGER_PARAM, &trigger, sizeof(trigger)));
      
    2. Set a periodic trigger timer using TY_TRIGGER_TIMER_PERIOD.

      TY_TRIGGER_TIMER_PERIOD period_timer;
      period_timer.start_time_us = (getSystemTime() + 3000) * 1000;
      period_timer.trigger_count = 10;
      period_timer.period_us = 1000000;
      ASSERT_OK(TYSetStruct(hDevice, TY_COMPONENT_DEVICE, TY_STRUCT_TRIGGER_TIMER_PERIOD, &period_timer, sizeof(period_timer)));
      
    3. If you need to disable the periodic trigger timer, set start_time_us to 0.

      TY_TRIGGER_TIMER_PERIOD period_timer;
      period_timer.start_time_us = 0;
      ASSERT_OK(TYSetStruct(hDevice, TY_COMPONENT_DEVICE, TY_STRUCT_TRIGGER_TIMER_PERIOD, &period_timer, sizeof(period_timer)));
      

Start-stop management

After configuring components and their features, call the TYStartCapture interface to start image acquisition. When the operation is finished, call the TYStopCapture interface to stop image acquisition.

TY_CAPI             TYStartCapture            (TY_DEV_HANDLE hDevice);
TY_CAPI             TYStopCapture             (TY_DEV_HANDLE hDevice);

Software trigger

TYSendSoftTrigger

When the camera works in mode 1 or mode 2 (please refer to Work mode settings), the software trigger interface TYSendSoftTrigger() can be used to send a trigger command to the depth camera via the USB or Ethernet interface. After receiving this command, the camera captures images once and outputs the corresponding image data.

TY_CAPI             TYSendSoftTrigger         (TY_DEV_HANDLE hDevice);

Status report

TYRegisterEventCallback registers an event callback. When the device goes offline or the device’s license status is abnormal, the callback function receives TY_EVENT_DEVICE_OFFLINE or TY_EVENT_LICENSE_ERROR event notifications.

TY_CAPI             TYRegisterEventCallback   (TY_DEV_HANDLE hDevice, TY_EVENT_CALLBACK callback, void* userdata);

Data reception

The depth camera outputs depth data through a USB interface or an Ethernet interface. The host computer retrieves depth image data using the SDK’s TYFetchFrame API.

The depth camera’s data output is buffered through a framebuffer queue for communication with the host computer. When all the framebuffers in the queue are occupied, the depth camera stops sending data. To prevent the image data stream from being blocked, the host computer should call TYEnqueueBuffer to return each framebuffer promptly after retrieving its image data.

If the host computer’s ability to receive and process data is lower than the depth camera’s image output capability, it is recommended to use a software- or hardware-triggered work mode, which lowers the image computation load and output frame rate and also reduces the camera’s power consumption. The SDK sample programs SimpleView_Callback and SimpleView_FetchFrame provide two framework examples for image application processing: one in a separate application thread, and the other directly in the image acquisition thread.

TYFetchFrame is the data reception function. It takes the device handle as input and waits for a valid data frame within the specified time. If no data frame is received within that time, the function returns and reports an error status.

TY_CAPI             TYFetchFrame              (TY_DEV_HANDLE hDevice, TY_FRAME_DATA* frame, int32_t timeout);

Feature settings

Feature existence query

TYHasFeature queries whether the specified feature is available. The input parameters are the device handle, component ID, and feature ID.

TY_CAPI             TYHasFeature              (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, bool* value);

Feature information query

TYGetFeatureInfo queries the information of the specified feature. The input parameters are the device handle, component ID, and feature ID.

TY_CAPI             TYGetFeatureInfo          (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, TY_FEATURE_INFO* featureInfo);

Feature information includes: the access mode (TY_ACCESS_MODE, read/write), whether the feature can be written at runtime, and the IDs of the component and feature that the current feature is bound to.

typedef struct TY_FEATURE_INFO
{
    bool            isValid;            ///< true if feature exists, false otherwise
    TY_ACCESS_MODE  accessMode;         ///< feature access privilege
    bool            writableAtRun;      ///< feature can be written while capturing
    char            reserved0[1];
    TY_COMPONENT_ID componentID;        ///< owner of this feature
    TY_FEATURE_ID   featureID;          ///< feature unique id
    char            name[32];           ///< describe string
    int32_t         bindComponentID;    ///< component ID current feature bind to
    int32_t         bindFeatureID;      ///< feature ID current feature bind to
    char            reserved[252];
}TY_FEATURE_INFO;

Feature operation interfaces

Feature operation APIs typically take input parameters such as the camera handle hDevice, the ID of the component that owns the feature (componentID), the ID of the feature to operate on (featureID), and the data to be read or written. The operation API is chosen according to the feature’s data type, following the hierarchy of camera, component, feature, and feature type.

There are seven data types of component features, and the SDK uses the same API to operate on different features of the same type.

typedef enum TY_FEATURE_TYPE_LIST
{
    TY_FEATURE_INT              = 0x1000,
    TY_FEATURE_FLOAT            = 0x2000,
    TY_FEATURE_ENUM             = 0x3000,
    TY_FEATURE_BOOL             = 0x4000,
    TY_FEATURE_STRING           = 0x5000,
    TY_FEATURE_BYTEARRAY        = 0x6000,
    TY_FEATURE_STRUCT           = 0x7000,
}TY_FEATURE_TYPE_LIST;

The interfaces for integer features:

TY_CAPI             TYGetIntRange             (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, TY_INT_RANGE* intRange);
TY_CAPI             TYGetInt                  (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, int32_t* value);
TY_CAPI             TYSetInt                  (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, int32_t value);

The interfaces for floating point features:

TY_CAPI             TYGetFloatRange           (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, TY_FLOAT_RANGE* floatRange);
TY_CAPI             TYGetFloat                (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, float* value);
TY_CAPI             TYSetFloat                (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, float value);

The interfaces for enumeration features:

TY_CAPI             TYGetEnumEntryCount       (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, uint32_t* entryCount);
TY_CAPI             TYGetEnumEntryInfo        (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, TY_ENUM_ENTRY* entries, uint32_t entryCount, uint32_t* filledEntryCount);
TY_CAPI             TYGetEnum                 (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, int32_t* value);
TY_CAPI             TYSetEnum                 (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, int32_t value);

The interfaces for boolean features:

TY_CAPI             TYGetBool                 (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, bool* value);
TY_CAPI             TYSetBool                 (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, bool value);

The interfaces for string features:

TY_CAPI             TYGetStringLength         (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, uint32_t* length);
TY_CAPI             TYGetString               (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, char* buffer, uint32_t bufferSize);
TY_CAPI             TYSetString               (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, const char* buffer);

The interfaces for structure features:

TY_CAPI             TYGetStruct               (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, void* pStruct, uint32_t structSize);
TY_CAPI             TYSetStruct               (TY_DEV_HANDLE hDevice, TY_COMPONENT_ID componentID, TY_FEATURE_ID featureID, void* pStruct, uint32_t structSize);

Feature Description

Different models of Percipio depth cameras have different feature configurations. By traversing the components and feature lists of the depth camera, you can get accurate product configuration information. It is recommended to use the SDK sample program “DumpAllFeatures” to traverse the supported features of the device.

Component optical parameters

  • TY_STRUCT_CAM_INTRINSIC = 0x0000 | TY_FEATURE_STRUCT,

    The intrinsic parameters of the infrared, color, and depth image sensor components. This is a structure feature whose data is a 3x3 floating-point matrix. Defined as follows:

    ///  a 3x3 matrix
    /// |.|.|.|
    /// | --|---|---|
    /// | fx|  0| cx|
    /// |  0| fy| cy|
    /// |  0|  0|  1|
    typedef struct TY_CAMERA_INTRINSIC
    {
        float data[3*3];
    }TY_CAMERA_INTRINSIC;
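    A minimal sketch of how such an intrinsic matrix is typically used (projectToPixel is a hypothetical helper for illustration, not an SDK function): projecting a point from the camera coordinate system to pixel coordinates.

```cpp
#include <array>

struct Pixel { float u, v; };

// Project a 3D point (x, y, z) in the camera coordinate system to pixel
// coordinates, using a 3x3 intrinsic matrix stored row-major as in
// TY_CAMERA_INTRINSIC: u = fx*x/z + cx, v = fy*y/z + cy.
inline Pixel projectToPixel(const std::array<float, 9>& K, float x, float y, float z)
{
    const float fx = K[0], cx = K[2];
    const float fy = K[4], cy = K[5];
    return { fx * x / z + cx, fy * y / z + cy };
}
```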
    
  • TY_STRUCT_EXTRINSIC_TO_LEFT_IR = 0x0001 | TY_FEATURE_STRUCT,

    The extrinsic parameters of the right-side infrared image sensor component or the color image sensor component, relative to the left-side infrared image sensor component. This is a structure feature whose data is a 4x4 floating-point matrix. Defined as follows:

    /// a 4x4 matrix
    ///  |.|.|.|.|
    ///  |---|----|----|---|
    ///  |r11| r12| r13| t1|
    ///  |r21| r22| r23| t2|
    ///  |r31| r32| r33| t3|
    ///  | 0 |   0|   0|  1|
    
    typedef struct TY_CAMERA_EXTRINSIC
    {
        float data[4*4];
    }TY_CAMERA_EXTRINSIC;
    
    [r11, r12, r13, t1,
     r21, r22, r23, t2,
     r31, r32, r33, t3,
     0,   0,   0,  1]
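    A minimal sketch of applying such an extrinsic matrix (transformPoint is a hypothetical helper for illustration, not an SDK function): the point is transformed as p' = R·p + t.

```cpp
#include <array>

struct Vec3 { float x, y, z; };

// Apply a 4x4 extrinsic matrix stored row-major, as in TY_CAMERA_EXTRINSIC,
// to a 3D point: p' = R * p + t (the bottom row [0 0 0 1] is implicit here).
inline Vec3 transformPoint(const std::array<float, 16>& m, const Vec3& p)
{
    return {
        m[0] * p.x + m[1] * p.y + m[2]  * p.z + m[3],
        m[4] * p.x + m[5] * p.y + m[6]  * p.z + m[7],
        m[8] * p.x + m[9] * p.y + m[10] * p.z + m[11],
    };
}
```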
    
  • TY_STRUCT_CAM_DISTORTION = 0x0006 | TY_FEATURE_STRUCT,

    The optical distortion parameters of the infrared or color image sensor components. This is a structure feature whose data is a floating-point array of 12 elements. Defined as follows:

    // k1 k2 k3 k4 k5 k6 are radial distortion coefficients based on ideal modeling
    // p1 p2 are tangential distortion coefficients
    // s1 s2 s3 s4 are the thin prism distortion coefficients
    ///camera distortion parameters
    typedef struct TY_CAMERA_DISTORTION
    {
        float data[12];///<Definition is compatible with opencv3.0+ :k1,k2,p1,p2,k3,k4,k5,k6,s1,s2,s3,s4
    }TY_CAMERA_DISTORTION;
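    A hedged sketch of how the leading coefficients of this OpenCV-compatible model are applied (distortNormalized is a hypothetical helper, not an SDK function): only k1, k2, p1, p2 are shown; the full model also uses k3..k6 and s1..s4.

```cpp
// Illustrative only: apply the first four OpenCV-compatible distortion
// coefficients (k1, k2 radial; p1, p2 tangential) to normalized image
// coordinates (x, y), writing the distorted coordinates to (xd, yd).
inline void distortNormalized(float k1, float k2, float p1, float p2,
                              float x, float y, float* xd, float* yd)
{
    const float r2 = x * x + y * y;
    const float radial = 1.0f + k1 * r2 + k2 * r2 * r2;
    *xd = x * radial + 2.0f * p1 * x * y + p2 * (r2 + 2.0f * x * x);
    *yd = y * radial + p1 * (r2 + 2.0f * y * y) + 2.0f * p2 * x * y;
}
```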
    
  • TY_STRUCT_CAM_CALIB_DATA = 0x0007 | TY_FEATURE_STRUCT,

    The calibration parameter combination of infrared image sensor components and color image sensor components. It can retrieve intrinsic data, extrinsic data, and distortion data. The structure contains the calibration size, intrinsic parameters, extrinsic parameters, and distortion parameters calibrated before the camera leaves the factory. These calibration parameters are calculated at a specific resolution and have a conversion relationship with the image resolution. The related APIs provided by the SDK will automatically adjust the output parameters according to the actual image resolution.

    Sample code: Retrieve calibration parameters. For more details, please refer to the SDK sample program SimpleView_Registration.

    struct CallbackData {
      int             index;
      TY_ISP_HANDLE   IspHandle;
      TY_DEV_HANDLE   hDevice;
      DepthRender*    render;
      DepthViewer*    depthViewer;
      bool            needUndistort;
    
      float           scale_unit;
    
      TY_CAMERA_CALIB_INFO depth_calib;
      TY_CAMERA_CALIB_INFO color_calib;
    };
    
    CallbackData cb_data;
    
    LOGD("=== Read depth calib info");
    ASSERT_OK( TYGetStruct(hDevice, TY_COMPONENT_DEPTH_CAM, TY_STRUCT_CAM_CALIB_DATA
          , &cb_data.depth_calib, sizeof(cb_data.depth_calib)) );
    
    LOGD("=== Read color calib info");
    ASSERT_OK( TYGetStruct(hDevice, TY_COMPONENT_RGB_CAM, TY_STRUCT_CAM_CALIB_DATA
          , &cb_data.color_calib, sizeof(cb_data.color_calib)) );
    

Trigger settings

  • TY_INT_FRAME_PER_TRIGGER = 0x0202 | TY_FEATURE_INT,

    The number of frames output by the depth camera after receiving a software or hardware trigger signal. The default output is 1 frame.

  • TY_INT_TRIGGER_DELAY_US = 0x0206 | TY_FEATURE_INT,

    Trigger delay. When the depth camera receives a hardware trigger signal, it starts image acquisition after the configured delay time, in microseconds. The maximum delay is 1300000 microseconds (1.3 s).

    LOGD("=== Set trigger to slave mode");
    TY_TRIGGER_PARAM trigger;
    trigger.mode = TY_TRIGGER_MODE_SLAVE;
    ASSERT_OK(TYSetStruct(hDevice, TY_COMPONENT_DEVICE, TY_STRUCT_TRIGGER_PARAM, &trigger, sizeof(trigger)));
    
    //Note: trigger delay takes effect only in trigger slave mode, and only for hardware triggers.
    //      The delay unit is microseconds; the maximum value is 1.3 s.
    int32_t time = 1000;
    ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_DEVICE, TY_INT_TRIGGER_DELAY_US, time));
    
  • TY_BOOL_TRIGGER_OUT_IO = 0x0207 | TY_FEATURE_BOOL,

    The trigger output control. When the depth camera is working in mode 1, this feature can be used to control the level of the trigger output interface.

    • When the camera trigger output defaults to a high level, setting this feature to false keeps the output signal at a high level, whereas setting it to true switches the output signal to a low level.

    • When the camera trigger output defaults to a low level, setting this feature to false keeps the output signal at a low level, whereas setting it to true switches the output signal to a high level.

    This feature cannot be used when the depth camera is working in mode 2 or mode 3.

Note

  1. Please refer to Connecting to external trigger signal and Work mode settings sections for the hardware connection methods of various trigger modes.

  2. Please refer to the SDK sample program SimpleView_TriggerMode1 for software implementation.

Exposure settings

  • TY_BOOL_AUTO_AWB = 0x0304 | TY_FEATURE_BOOL,

    The automatic white balance control, which adjusts the R, G, B digital gain to achieve color space balance. Before setting the R, G, B digital gain, you need to turn off the automatic white balance function, otherwise the setting will not take effect.

  • TY_BOOL_AUTO_EXPOSURE = 0x0300 | TY_FEATURE_BOOL,

    The automatic exposure switch for image sensor components. The optical components of some depth cameras support automatic image exposure.

  • TY_INT_EXPOSURE_TIME = 0x0301 | TY_FEATURE_INT,

    The exposure time. The exposure time setting for infrared or color image sensor components. The maximum and minimum values of the exposure time can be queried using the TYGetIntRange interface. Before setting the exposure time, please disable the automatic exposure time setting function if it is supported.

    //shutdown the Auto Exposure time function of the rgb image
    ASSERT_OK(TYSetBool(hDevice, TY_COMPONENT_RGB_CAM, TY_BOOL_AUTO_EXPOSURE, false));
    
    //Adjust the Exposure time of the rgb image
    TY_INT_RANGE range;
    ASSERT_OK(TYGetIntRange(hDevice, TY_COMPONENT_RGB_CAM, TY_INT_EXPOSURE_TIME, &range));
    int32_t tmp;
    ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_RGB_CAM, TY_INT_EXPOSURE_TIME, (range.min + range.max) / 2));
    ASSERT_OK(TYGetInt(hDevice, TY_COMPONENT_RGB_CAM, TY_INT_EXPOSURE_TIME, &tmp));
    if (tmp != (range.min + range.max) / 2)
    {
        LOGD("set rgb image exposure time failed");
    }
    
  • TY_BOOL_AUTO_GAIN = 0x0302 | TY_FEATURE_BOOL,

    The automatic gain adjustment switch for color image sensor components. The optical components of some depth cameras support automatic gain adjustment.

  • TY_INT_ANALOG_GAIN = 0x0524 | TY_FEATURE_INT,

    The analog gain. Analog gain settings for infrared or color image sensor components. The adjustable range of gain varies depending on different optical sensors and frame rate configurations. Before setting the analog gain, please disable the automatic gain adjustment function.

  • TY_INT_GAIN = 0x0303 | TY_FEATURE_INT,

    The digital gain of infrared sensors. Digital gain setting for infrared image sensor components. The adjustable range of gain varies depending on different optical sensors and frame rate configurations.

    int32_t value;
    TY_INT_RANGE range;
    // get the range of digital gain
    ASSERT_OK(TYGetIntRange(hDevice, TY_COMPONENT_IR_CAM_LEFT, TY_INT_GAIN, &range));
    // set the max digital gain
    ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_IR_CAM_LEFT, TY_INT_GAIN, range.max));
    
  • TY_INT_R_GAIN = 0x0520 | TY_FEATURE_INT,

  • TY_INT_G_GAIN = 0x0521 | TY_FEATURE_INT,

  • TY_INT_B_GAIN = 0x0522 | TY_FEATURE_INT,

    The digital gain of color sensors. The digital gain of the color image sensor module needs to be independently set for the R, G, and B channels. Before setting the digital gain, please disable the AWB function if it is supported.

    //shutdown the AWB function of the RGB image
    ASSERT_OK(TYSetBool(hDevice, TY_COMPONENT_RGB_CAM, TY_BOOL_AUTO_AWB, false));
    
    ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_RGB_CAM, TY_INT_R_GAIN, 2));
    ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_RGB_CAM, TY_INT_G_GAIN, 2));
    ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_RGB_CAM, TY_INT_B_GAIN, 2));
    

Automatic exposure ROI

TY_STRUCT_AEC_ROI

Sets the statistical range of the AEC/AGC configuration of the image sensor, and the image sensor will automatically adjust the exposure time and gain based on the image data characteristics within this range to achieve better image effects.

The following code sets the 100*100 area in the upper left corner of the image as the region of interest (ROI), and AEC is effective for this region only:

TY_AEC_ROI_PARAM aec_roi_param;
aec_roi_param.x = 0;
aec_roi_param.y = 0;
aec_roi_param.w = 100;
aec_roi_param.h = 100;
TYSetStruct(hDevice, TY_COMPONENT_RGB_CAM, TY_STRUCT_AEC_ROI, &aec_roi_param, sizeof(TY_AEC_ROI_PARAM));

Image integration time

TY_INT_CAPTURE_TIME_US

Gets the actual image integration time of the camera in trigger mode, in microseconds.

After receiving the trigger signal, the camera enters the exposure phase, which lasts for CAPTURE_TIME_US. During this phase, the objects in the camera’s field of view should remain stationary; otherwise the image quality will be degraded. After the exposure phase, the camera starts data processing; at this point, objects within the camera’s field of view can move without affecting the quality of the captured image.

Therefore, you can first read the actual image integration time of the camera, so as to determine the time when the object (e.g., robotic hand) moves, and avoid entering the camera field of view during the exposure period, which will affect the camera imaging.

Sample code for retrieving the CAPTURE_TIME_US:

int32_t capture_time;
TYGetInt(hDevice, TY_COMPONENT_DEVICE, TY_INT_CAPTURE_TIME_US, &capture_time);
printf("get capture time %d\n", capture_time);

Image synchronization

TY_BOOL_CMOS_SYNC = 0x0205 | TY_FEATURE_BOOL,

When the left and right images used for stereo depth calculation are exposed fully synchronously, the best depth image data can be obtained. Even when the exposure of the left and right infrared images is not fully synchronous and there is a small time difference, depth image data can still be output, although with slightly lower accuracy. When the left and right images are fully synchronized, the output frame rate is lower than when they are not. Depending on the requirements of the actual use scenario for depth image quality and frame rate, the camera’s working configuration can be modified using this boolean feature. The depth camera is set to fully synchronous exposure by default.

bool tmp;
ASSERT_OK(TYSetBool(hDevice, TY_COMPONENT_DEVICE, TY_BOOL_CMOS_SYNC, false));
ASSERT_OK(TYGetBool(hDevice, TY_COMPONENT_DEVICE, TY_BOOL_CMOS_SYNC, &tmp));
if (tmp != false) {
   LOGD("==set TY_BOOL_CMOS_SYNC failed !");
}

Asynchronous data stream

TY_ENUM_STREAM_ASYNC

During image acquisition and output, acquired image data can be output as soon as it is ready. For example, on the -IX series and PS series products, the infrared and color images are captured earlier while depth data calculation takes longer, so the infrared and color data can be output first for the host computer to process.

typedef enum TY_STREAM_ASYNC_MODE_LIST
{
    TY_STREAM_ASYNC_OFF         = 0,
    TY_STREAM_ASYNC_DEPTH       = 1,
    TY_STREAM_ASYNC_RGB         = 2,
    TY_STREAM_ASYNC_DEPTH_RGB   = 3,
    TY_STREAM_ASYNC_ALL         = 0xff,
}TY_STREAM_ASYNC_MODE_LIST;

ASSERT_OK(TYSetEnum(hDevice, TY_COMPONENT_DEVICE, TY_ENUM_STREAM_ASYNC, TY_STREAM_ASYNC_ALL));

Image Format Settings and Processing

The format and resolution of the image are ENUM types. The SDK header file enumerates the various image formats and resolutions supported by the camera. Different cameras support different specific formats, which can be queried and set using the API.

  • TY_ENUM_IMAGE_MODE = 0x0109 | TY_FEATURE_ENUM,

    Sample code: Get the image formats supported by the color image.

    uint32_t n;
    ASSERT_OK(TYGetEnumEntryCount(hDevice, TY_COMPONENT_RGB_CAM, TY_ENUM_IMAGE_MODE, &n));
    LOGD("===         %14s: entry count %d", "", n);
    if (n > 0) {
        std::vector<TY_ENUM_ENTRY> entry(n);
        ASSERT_OK(TYGetEnumEntryInfo(hDevice, TY_COMPONENT_RGB_CAM, TY_ENUM_IMAGE_MODE, &entry[0], n, &n));
        for (uint32_t i = 0; i < n; i++) {
            LOGD("===         %14s:     value(%d), desc(%s)", "", entry[i].value, entry[i].description);
        }
    }
    

    Sample code: Set the color image format to BAYER8GB, with a resolution of 1280*960.

    LOGD("=== Configure feature, set resolution to 1280x960.");
    ASSERT_OK(TYSetEnum(hDevice, TY_COMPONENT_RGB_CAM, TY_ENUM_IMAGE_MODE, TY_PIXEL_FORMAT_BAYER8GB|TY_RESOLUTION_MODE_1280x960));
    

    Or use the combined image mode definition TY_IMAGE_MODE_format_resolution to configure the output image format of the color image sensor:

    LOGD("=== Configure feature, set resolution to 1280x960.");
    ASSERT_OK(TYSetEnum(hDevice, TY_COMPONENT_RGB_CAM, TY_ENUM_IMAGE_MODE, TY_IMAGE_MODE_BAYER8GB_1280x960));
    

    Sample code: Set the depth image to 1280x960 format.

    uint32_t n = 0;
    ASSERT_OK(TYGetEnumEntryCount(hDevice, TY_COMPONENT_DEPTH_CAM, TY_ENUM_IMAGE_MODE, &n));
    std::vector<TY_ENUM_ENTRY> image_mode_list(n);
    ASSERT_OK(TYGetEnumEntryInfo(hDevice, TY_COMPONENT_DEPTH_CAM, TY_ENUM_IMAGE_MODE, &image_mode_list[0], n, &n));
    for (size_t idx = 0; idx < image_mode_list.size(); idx++)
    {
    
        //try to select a 1280x960 resolution
        if (TYImageWidth(image_mode_list[idx].value) == 1280 && TYImageHeight(image_mode_list[idx].value) == 960)
        {
            LOGD("Select Depth Image Mode: %s", image_mode_list[idx].description);
            int err = TYSetEnum(hDevice, TY_COMPONENT_DEPTH_CAM, TY_ENUM_IMAGE_MODE, image_mode_list[idx].value);
            ASSERT(err == TY_STATUS_OK || err == TY_STATUS_NOT_PERMITTED);
            break;
        }
    }
    
  • TY_FLOAT_SCALE_UNIT = 0x010a | TY_FEATURE_FLOAT,

    The coefficient for depth measurements. The default value is 1.0, which means that the data value of each pixel in the depth image is the actual measured distance (in millimeters); when the value is not 1.0, multiplying the data value of each pixel in the depth image by this coefficient yields the actual measured distance (in millimeters).

    float value;
    ASSERT_OK(TYGetFloat(hDevice, TY_COMPONENT_DEPTH_CAM, TY_FLOAT_SCALE_UNIT, &value));
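    The conversion described above can be sketched as follows (depthToMillimeters is a hypothetical helper for illustration, not an SDK function):

```cpp
#include <cstdint>

// Convert a raw 16-bit depth sample to millimeters using the scale unit
// read via TY_FLOAT_SCALE_UNIT.
inline float depthToMillimeters(uint16_t raw, float scale_unit)
{
    return raw * scale_unit;
}
```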
    
  • TY_BOOL_UNDISTORTION = 0x0510 | TY_FEATURE_BOOL, ///< Output undistorted image

    The switch for undistortion. The default value is false, which means the infrared image sensor component outputs image data without undistortion by default.

    bool hasUndistortSwitch;
    ASSERT_OK( TYHasFeature(hDevice, TY_COMPONENT_IR_CAM_LEFT, TY_BOOL_UNDISTORTION, &hasUndistortSwitch) );
    if (hasUndistortSwitch) {
        ASSERT_OK( TYSetBool(hDevice, TY_COMPONENT_IR_CAM_LEFT, TY_BOOL_UNDISTORTION, true) );
    }
    
  • TY_BOOL_BRIGHTNESS_HISTOGRAM = 0x0511 | TY_FEATURE_BOOL,

    The enable switch for the infrared brightness histogram component. The default value is false. After enabling the infrared brightness histogram component (TY_COMPONENT_BRIGHT_HISTO), the device outputs the brightness histogram of the left and right infrared images.

    Sample code is as follows. For details, please refer to the SDK sample program SimpleView_FetchHisto.

    int32_t allComps, componentIDs = 0;
    ASSERT_OK( TYGetComponentIDs(hDevice, &allComps) );
    if(allComps & TY_COMPONENT_BRIGHT_HISTO) {
        LOGD("=== Has bright histo component");
        componentIDs |= TY_COMPONENT_BRIGHT_HISTO;
    }
    
    LOGD("=== Configure components, open ir cam");
    componentIDs |= TY_COMPONENT_IR_CAM_RIGHT| TY_COMPONENT_IR_CAM_LEFT;
    ASSERT_OK( TYEnableComponents(hDevice, componentIDs) );
    
  • TY_BOOL_DEPTH_POSTPROC = 0x0512 | TY_FEATURE_BOOL,

    The depth image post-processing switch. Some depth cameras support the smoothing filter function of depth images.

Laser setup

  • TY_BOOL_LASER_AUTO_CTRL = 0x0501 | TY_FEATURE_BOOL,

    The switch of the laser auto-adjustment function, a boolean feature with a default value of true, which means the depth camera automatically turns the infrared laser on/off based on the needs of depth calculation. The specific rules are as follows:

    • TY_BOOL_LASER_AUTO_CTRL= true

      • When there is depth image output, the laser is turned on, and the laser brightness is set with TY_INT_LASER_POWER;

      • When there is no depth image output, the laser is turned off.

    • TY_BOOL_LASER_AUTO_CTRL= false

      • The laser is turned on whenever any image is output, and the laser brightness is set with TY_INT_LASER_POWER.

        Note

        Note that some models, such as the FM851-E2, behave differently from the above description: the laser is off when TY_BOOL_LASER_AUTO_CTRL = false, TY_BOOL_CMOS_SYNC = false, and there is no depth image output.

  • TY_INT_LASER_POWER = 0x0500 | TY_FEATURE_INT,

    The laser power setting, an integer feature with a range of 0~100 and a default value of 100. This feature can be used to set the laser power, thereby adjusting the brightness of the infrared sensor imaging. When this feature is set to 0, the laser is turned off.

    Sample code: Turn off the automatic adjustment function of the laser emitter and set the laser power to 90.

    ASSERT_OK(TYSetBool(hDevice, TY_COMPONENT_LASER, TY_BOOL_LASER_AUTO_CTRL, false));
    ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_LASER, TY_INT_LASER_POWER, 90));
    

Floodlight settings

  • TY_BOOL_FLASHLIGHT = 0x0213 | TY_FEATURE_BOOL,

    The floodlight function switch, a boolean feature with a default value of false. This feature can be used to enable the floodlight function and perform camera calibration using infrared floodlight. Currently, only the PS series supports the floodlight function. The specific rules are as follows:

    • TY_BOOL_FLASHLIGHT= true

      • When there is depth image output, the floodlight is turned off;

      • When there is no depth image output, the floodlight is turned on, and the floodlight brightness is set with TY_INT_FLASHLIGHT_INTENSITY.

    • TY_BOOL_FLASHLIGHT= false

      • The floodlight is turned off.

  • TY_INT_FLASHLIGHT_INTENSITY = 0x0214 | TY_FEATURE_INT,

    The floodlight intensity setting, an integer feature with a range of 0 to 63 and a default value of 19. After the floodlight function is enabled, the floodlight intensity can be set with this feature according to the actual calibration scene.

    Note

    The floodlight has a built-in overheating protection function. When the temperature is too high, the floodlight will automatically turn off.

Sample code: Turn on the floodlight function and set the floodlight intensity to 63.

ASSERT_OK(TYSetBool(hDevice, TY_COMPONENT_DEVICE, TY_BOOL_FLASHLIGHT, true));
ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_DEVICE, TY_INT_FLASHLIGHT_INTENSITY, 63));

SGBM feature

Related to measuring range, accuracy, and frame rate

  • TY_FLOAT_SCALE_UNIT = 0x010a | TY_FEATURE_FLOAT

    For its description, refer to TY_FLOAT_SCALE_UNIT.

  • TY_INT_SGBM_DISPARITY_NUM = 0x0611 | TY_FEATURE_INT

    Sets the disparity search range of the matching window during the search process, an integer feature.

    The larger the set value, the larger the measurement range in the Z direction of the camera, but the higher the computational load. It is recommended to set it to a multiple of 16.

  • TY_INT_SGBM_DISPARITY_OFFSET = 0x0612 | TY_FEATURE_INT

    Sets the disparity value at which the search starts, an integer feature.

    The smaller the set value, the larger the maximum measurement value in the Z direction (Zmax), indicating a longer measurement range. However, the lower limit of the set value is affected by depth of field.
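    In a standard rectified stereo model, depth relates to disparity as Z = f·B/d; assuming that relation holds here, the configured disparity range bounds the measurable Z range. A hedged sketch (zRangeFromDisparity, focal_px, and baseline_mm are illustrative names, not SDK parameters; it assumes offset > 0):

```cpp
struct ZRange { float z_min, z_max; };

// Under the standard stereo relation Z = focal_px * baseline / disparity,
// the searched disparity range [offset, offset + num - 1] bounds the
// measurable depth range: the smallest disparity yields the farthest Z.
inline ZRange zRangeFromDisparity(float focal_px, float baseline_mm,
                                  int offset, int num)
{
    const float d_min = static_cast<float>(offset);
    const float d_max = static_cast<float>(offset + num - 1);
    return { focal_px * baseline_mm / d_max,
             focal_px * baseline_mm / d_min };
}
```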

  • TY_INT_SGBM_IMAGE_NUM = 0x0610 | TY_FEATURE_INT

    Sets the number of IR images for depth calculation, an integer feature.

    The larger the set value, the better the quality of the output depth image, but the lower the frame rate. The upper limit of the set value is influenced by the camera’s computing power.

  • TY_INT_SGBM_MATCH_WIN_HEIGHT = 0x0613 | TY_FEATURE_INT

    Sets the height of the disparity matching window, an integer feature. The value must be odd.

  • TY_INT_SGBM_MATCH_WIN_WIDTH = 0x061A | TY_FEATURE_INT

    Sets the width of the disparity matching window, an integer feature. The value must be odd.

    The larger the disparity matching window (MATCH_WIN_HEIGHT * MATCH_WIN_WIDTH), the smoother the depth image, but the accuracy will decrease. The smaller the disparity matching window, the more details are displayed in the depth image, but the probability of incorrect matching increases.

Note

Due to the influence of camera computing power, there is a constraint between IMAGE_NUM and MATCH_WIN_HEIGHT. You can check the constraint relationship in configuration information of the camera. For the ways of obtaining the file, please refer to Note.

Related to smoothing the edge pixels in the image

  • TY_INT_SGBM_SEMI_PARAM_P1 = 0x0614 | TY_FEATURE_INT

    Adjacent pixel (+/-1) constraint penalty parameter P1.

    The larger the set value, the smoother the depth image.

    It can prevent the occurrence of discontinuous or unreasonable depth values and effectively suppress noise and discontinuity.

  • TY_INT_SGBM_SEMI_PARAM_P2 = 0x0615 | TY_FEATURE_INT

    Surrounding pixel constraint penalty parameter P2.

    The larger the set value, the smoother the depth image. P2 > P1.

    This parameter can effectively handle texture-rich areas and reduce the number of mismatches.

  • TY_INT_SGBM_SEMI_PARAM_P1_SCALE = 0x061F | TY_FEATURE_INT

    Adjacent pixel (+/-1) constraint penalty parameter P1_scale; the smaller the value, the smoother the depth image.

  • TY_BOOL_SGBM_HFILTER_HALF_WIN = 0x0619 | TY_FEATURE_BOOL

    Semi-search filter switch.

    Used to further optimize the depth image, remove noise and discontinuities, and be more friendly to object edge point clouds.

Related to mismatch

  • TY_INT_SGBM_UNIQUE_FACTOR = 0x0616 | TY_FEATURE_INT

    Uniqueness check parameter 1, the percentage ratio between the best and second-best match points.

    The larger the set value, the stricter the uniqueness requirement for the matching cost and the more mismatched points are filtered out.

  • TY_INT_SGBM_UNIQUE_ABSDIFF = 0x0617 | TY_FEATURE_INT

    Uniqueness check parameter 2, which is the difference between the best and second-best match points.

    The larger the set value, the stricter the uniqueness requirement for the matching cost and the more mismatched points are filtered out.

  • TY_BOOL_SGBM_LRC = 0x061C | TY_FEATURE_BOOL

    Left and right consistency check switch.

  • TY_INT_SGBM_LRC_DIFF = 0x061D | TY_FEATURE_INT

    Left and right consistency check parameters.

    When performing stereo matching, for pixels on the same object surface, the disparity of matching from the left image to the right image is LR, and the disparity of matching from the right image to the left image is RL. If ABS(LR-RL) < max LRC diff, then the point is considered a reliable match point.

    The smaller the parameter setting value for left and right consistency check, the more reliable the matching.
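    The rule above can be sketched as follows (isReliableMatch is a hypothetical helper for illustration, not the camera's internal implementation):

```cpp
#include <cstdlib>

// Keep a match only if the left->right and right->left disparities agree
// within max_lrc_diff, i.e. ABS(LR - RL) < max LRC diff.
inline bool isReliableMatch(int disparity_lr, int disparity_rl, int max_lrc_diff)
{
    return std::abs(disparity_lr - disparity_rl) < max_lrc_diff;
}
```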

Median filtering

  • TY_BOOL_SGBM_MEDFILTER = 0x061B | TY_FEATURE_BOOL

    Median filter switch.

    Used to eliminate isolated noise points while preserving the edge information of the image as much as possible.

  • TY_INT_SGBM_MEDFILTER_THRESH = 0x061E | TY_FEATURE_INT

    Median filtering threshold.

    The larger the set value, the more noise will be filtered out, but it may also result in the loss of detailed information in the depth image.
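    The principle can be sketched as follows; this is illustrative only, not the camera's internal implementation, and the thresholding rule (replace a sample only when it deviates from the window median by more than the threshold) is an assumption:

```cpp
#include <algorithm>
#include <array>
#include <cstdint>

// Median-filter one depth sample over a 3x3 window (center at index 4):
// replace the center with the window median only if it deviates from the
// median by more than thresh.
inline uint16_t medianFilterSample(std::array<uint16_t, 9> window, uint16_t thresh)
{
    const uint16_t center = window[4];
    std::nth_element(window.begin(), window.begin() + 4, window.end());
    const uint16_t median = window[4];
    const uint16_t diff = median > center ? median - center : center - median;
    return diff > thresh ? median : center;
}
```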

ToF camera features

Image quality settings

TY_ENUM_DEPTH_QUALITY = 0x0900 | TY_FEATURE_ENUM,

Sets the depth image quality of the ToF depth camera output. An enumeration feature, defined as follows:

typedef enum TY_DEPTH_QUALITY_LIST
{
    TY_DEPTH_QUALITY_BASIC  = 1,
    TY_DEPTH_QUALITY_MEDIUM = 2,
    TY_DEPTH_QUALITY_HIGH   = 4,
}TY_DEPTH_QUALITY_LIST;

  • When the depth image quality is set to BASIC, the jitter of depth value is large and the output frame rate is high.

  • When the depth image quality is set to MEDIUM, the jitter of depth value is medium and the output frame rate is medium.

  • When the depth image quality is set to HIGH, the jitter of depth value is small and the output frame rate is low.

Outlier filtering settings

TY_INT_FILTER_THRESHOLD = 0x0901 | TY_FEATURE_INT,

The outlier filtering threshold of the ToF depth camera, an integer feature ranging over [0,100]. The default value is 0, which means no filtering. For nonzero values, the smaller the filtering threshold, the more outliers are filtered out.

Note

If the filtering threshold is set too low, a large amount of valid depth information may be filtered out.

Sample code: Set the outlier filtering threshold of the ToF depth camera to 43.

ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_DEPTH_CAM, TY_INT_FILTER_THRESHOLD, 43));

Modulation channel settings

TY_INT_TOF_CHANNEL = 0x0902 | TY_FEATURE_INT,

The modulation channel of the ToF depth camera, an integer feature. The modulation frequency varies for different modulation channels and does not interfere with each other. If multiple ToF depth cameras need to run in the same scenario, it is necessary to ensure that the modulation channels of the cameras in the same series are different.

Sample code: Set the modulation channel of the ToF depth camera to 2.

ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_DEPTH_CAM, TY_INT_TOF_CHANNEL, 2));

The settings of laser modulation light intensity

TY_INT_TOF_MODULATION_THRESHOLD = 0x0903 | TY_FEATURE_INT,

The threshold of laser modulation light intensity the ToF depth camera receives, an integer feature. Pixels with values below this threshold are not involved in depth calculation, meaning the depth value of these pixels is assigned to 0.

Sample code: Set the threshold of laser modulation light intensity the ToF depth camera receives to 300.

ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_DEPTH_CAM, TY_INT_TOF_MODULATION_THRESHOLD, 300));

Jitter filtering settings

TY_INT_TOF_JITTER_THRESHOLD = 0x0307 | TY_FEATURE_INT,

The jitter filtering threshold of the ToF depth camera, an integer feature. The larger the threshold, the less jitter-affected depth data at the edges of the depth image is filtered out.

Sample code: Set the jitter filtering threshold of the ToF depth camera to 5.

ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_DEPTH_CAM, TY_INT_TOF_JITTER_THRESHOLD, 5));

High dynamic range ratio settings

TY_INT_TOF_HDR_RATIO = 0x0306 | TY_FEATURE_INT,

High dynamic range ratio threshold, an integer feature. Before setting this threshold, the camera work mode needs to be set to trigger mode and the depth image quality needs to be set to HIGH. Currently, only the TL460-S1-E1 ToF depth camera supports the high dynamic range ratio feature.

Sample code: Set the high dynamic range ratio threshold of the TL460-S1-E1 ToF depth camera to 50.

ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_DEPTH_CAM, TY_INT_TOF_HDR_RATIO, 50));

Anti-sunlight index settings

TY_INT_TOF_ANTI_SUNLIGHT_INDEX = 0x0906 | TY_FEATURE_INT,

Anti-sunlight index, an integer feature. Value range: [0,2]. The anti-sunlight index is used to optimize the depth imaging effect of the TM260-E2 ToF depth camera under sunlight.

If in indoor scenes or weak sunlight, it is recommended to set the index to 0; in semi-outdoor scenes or with some sunlight, set it to 1; in outdoor scenes or strong sunlight, set it to 2.

Sample code: Set the anti-sunlight index of TM260-E2 ToF depth camera to 0.

ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_DEPTH_CAM, TY_INT_TOF_ANTI_SUNLIGHT_INDEX, 0));

Speckle filter settings

  • TY_INT_MAX_SPECKLE_DIFF = 0x0908 | TY_FEATURE_INT,

    The maximum difference between neighboring disparity pixels for them to be grouped into the same speckle. An integer feature. Value range: [100,500]. Unit: mm.

  • TY_INT_MAX_SPECKLE_SIZE = 0x0907 | TY_FEATURE_INT,

    The maximum blob size for it to be considered a speckle. An integer feature. Value range: [0,200]. Unit: px. Speckles smaller than this parameter are filtered out.

Currently, only the TM260-E2 ToF camera supports speckle filter settings.

Sample code: Set the TY_INT_MAX_SPECKLE_DIFF to 123, and TY_INT_MAX_SPECKLE_SIZE to 148.

ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_DEPTH_CAM, TY_INT_MAX_SPECKLE_DIFF, 123));
ASSERT_OK(TYSetInt(hDevice, TY_COMPONENT_DEPTH_CAM, TY_INT_MAX_SPECKLE_SIZE, 148));

Coordinate inversion

Point cloud data structure:

typedef struct TY_VECT_3F
{
    float   x;
    float   y;
    float   z;
}TY_VECT_3F;

Data structure of points on a depth image:

typedef struct TY_PIXEL_DESC
{
  int16_t x;      // x coordinate in pixels
  int16_t y;      // y coordinate in pixels
  uint16_t depth; // depth value
  uint16_t rsvd;
}TY_PIXEL_DESC;

TYInvertExtrinsic inverts an extrinsic matrix. Input a 4x4 extrinsic matrix; the output is its inverse.

TY_CAPI   TYInvertExtrinsic      (const TY_CAMERA_EXTRINSIC* orgExtrinsic,
                                 TY_CAMERA_EXTRINSIC* invExtrinsic);

TYMapDepthToPoint3d converts points on a depth image to point cloud data. It takes input parameters such as calibration data of the depth camera, width and height of the depth image, depth data points, and the number of points. The output is point cloud data.

TY_CAPI   TYMapDepthToPoint3d    (const TY_CAMERA_CALIB_INFO* src_calib,
                                 uint32_t depthW, uint32_t depthH,
                                 const TY_PIXEL_DESC* depthPixels, uint32_t count,
                                 TY_VECT_3F* point3d);
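Ignoring lens distortion, the mapping performed here amounts to the pinhole back-projection below (depthPixelToPoint3d is a hypothetical helper shown for illustration, not the SDK implementation):

```cpp
struct Point3f { float x, y, z; };

// Back-project pixel (u, v) with depth z (in mm) through a pinhole model
// with intrinsics fx, fy, cx, cy:
//   X = (u - cx) * z / fx,  Y = (v - cy) * z / fy,  Z = z.
inline Point3f depthPixelToPoint3d(float u, float v, float z,
                                   float fx, float fy, float cx, float cy)
{
    return { (u - cx) * z / fx, (v - cy) * z / fy, z };
}
```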

TYMapPoint3dToDepth converts point cloud data to depth image data. It takes input parameters such as the calibration data of the depth camera, point cloud data, the number of points, and the width and height of the target depth image. It outputs point data on the depth image. It is the inverse operation of TYMapDepthToPoint3d.

TY_CAPI   TYMapPoint3dToDepth    (const TY_CAMERA_CALIB_INFO* dst_calib,
                                 const TY_VECT_3F* point3d, uint32_t count,
                                 uint32_t depthW, uint32_t depthH,
                                 TY_PIXEL_DESC* depth);

TYMapDepthImageToPoint3d converts depth data images into point clouds. It takes input parameters such as the calibration data of the depth camera, the width and height of the depth image, and the depth image itself. It outputs a point cloud. Depth points with a value of 0 are mapped to (NAN, NAN, NAN), indicating invalid depth.

TY_CAPI   TYMapDepthImageToPoint3d  (const TY_CAMERA_CALIB_INFO* src_calib,
                                     int32_t imageW, int32_t imageH,
                                     const uint16_t* depth,
                                     TY_VECT_3F* point3d,
                                     float f_scale_unit = 1.0f);

Sample code: To obtain point cloud data, please refer to the SDK sample program SimpleView_Point3D.

struct CallbackData {
    int             index;
    TY_DEV_HANDLE   hDevice;
    TY_ISP_HANDLE   isp_handle;
    TY_CAMERA_CALIB_INFO depth_calib;
    TY_CAMERA_CALIB_INFO color_calib;
};

static void handleFrame(TY_FRAME_DATA* frame, void* userdata) {
    // We only use an OpenCV Mat as the data container here;
    // you can allocate the memory yourself instead.
    CallbackData* pData = (CallbackData*) userdata;
    LOGD("=== Get frame %d", ++pData->index);

    cv::Mat depth, color;
    parseFrame(*frame, &depth, NULL, NULL, &color, pData->isp_handle);
    if(!depth.empty()){
        std::vector<TY_VECT_3F> p3d;
        p3d.resize(depth.size().area());

        ASSERT_OK(TYMapDepthImageToPoint3d(&pData->depth_calib, depth.cols, depth.rows
            , (uint16_t*)depth.data, &p3d[0]));
    }
}

TYMapPoint3dToDepthImage converts a point cloud to a depth image. It takes input parameters such as the calibration data of the depth camera, the point cloud, the number of points, and the width and height of the depth image, and outputs the converted depth image. Invalid points (NAN, NAN, NAN) are mapped to an invalid depth of 0.

TY_CAPI   TYMapPoint3dToDepthImage  (const TY_CAMERA_CALIB_INFO* dst_calib,
                                 const TY_VECT_3F* point3d, uint32_t count,
                                 uint32_t depthW, uint32_t depthH, uint16_t* depth);

TYMapPoint3dToPoint3d performs a point cloud coordinate transformation. It takes the extrinsic parameter matrix, the point cloud data, and the number of points as input, and outputs the transformed point cloud data.

TY_CAPI   TYMapPoint3dToPoint3d     (const TY_CAMERA_EXTRINSIC* extrinsic,
                                 const TY_VECT_3F* point3d, int32_t count,
                                 TY_VECT_3F* point3dTo);
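Applying the extrinsic matrix amounts to a rigid transform (rotation plus translation) of each point. A standalone sketch, assuming a 4x4 row-major matrix layout for the extrinsic and using a hypothetical Vec3f stand-in for TY_VECT_3F:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical stand-in for TY_VECT_3F.
struct Vec3f { float x, y, z; };

// Apply a 4x4 row-major rigid transform to each point: the top-left 3x3
// block rotates, the last column translates. The bottom row is assumed
// to be (0, 0, 0, 1) and is ignored.
void transformPoints(const float m[16],
                     const Vec3f* in, int32_t count, Vec3f* out) {
    for (int32_t i = 0; i < count; ++i) {
        const Vec3f& p = in[i];
        out[i].x = m[0]*p.x + m[1]*p.y + m[2]*p.z  + m[3];
        out[i].y = m[4]*p.x + m[5]*p.y + m[6]*p.z  + m[7];
        out[i].z = m[8]*p.x + m[9]*p.y + m[10]*p.z + m[11];
    }
}
```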

TYMapDepthToColorCoordinate maps depth data points to color image coordinates. Input the calibration data of the depth image, the width and height of the depth image, the depth data points, the number of points, the calibration data of the color image, and the width and height of the color image. The output is the mapped depth points.

static inline TY_STATUS TYMapDepthToColorCoordinate(
              const TY_CAMERA_CALIB_INFO* depth_calib,
              uint32_t depthW, uint32_t depthH,
              const TY_PIXEL_DESC* depth, uint32_t count,
              const TY_CAMERA_CALIB_INFO* color_calib,
              uint32_t mappedW, uint32_t mappedH,
              TY_PIXEL_DESC* mappedDepth,
                  float f_scale_unit = 1.0f);

TYCreateDepthToColorCoordinateLookupTable creates a coordinate lookup table from a depth image to a color image. It takes input parameters such as the calibration data of the depth image, the width and height of the depth image, the depth image, the calibration data of the color image, the width and height of the mapped depth image, and outputs a table of mapped depth point data.

static inline TY_STATUS TYCreateDepthToColorCoordinateLookupTable(
              const TY_CAMERA_CALIB_INFO* depth_calib,
              uint32_t depthW, uint32_t depthH, const uint16_t* depth,
              const TY_CAMERA_CALIB_INFO* color_calib,
              uint32_t mappedW, uint32_t mappedH,
              TY_PIXEL_DESC* lut,
                  float f_scale_unit = 1.0f);

TYMapDepthImageToColorCoordinate maps a depth image to the color image coordinates. Input the calibration data, width and height of the depth image, the depth image, as well as the calibration data, width and height of the color image. The output is the mapped depth image.

static inline TY_STATUS TYMapDepthImageToColorCoordinate(
              const TY_CAMERA_CALIB_INFO* depth_calib,
              uint32_t depthW, uint32_t depthH, const uint16_t* depth,
              const TY_CAMERA_CALIB_INFO* color_calib,
              uint32_t mappedW, uint32_t mappedH, uint16_t* mappedDepth,
                  float f_scale_unit = 1.0f);

TYMapRGBImageToDepthCoordinate maps a color image to the depth image coordinates. Input the calibration data, width, and height of the depth image and the depth image itself, as well as the calibration data, width, and height of the color image and the color image itself. The output is the mapped color image data.

static inline TY_STATUS TYMapRGBImageToDepthCoordinate(
              const TY_CAMERA_CALIB_INFO* depth_calib,
              uint32_t depthW, uint32_t depthH, const uint16_t* depth,
              const TY_CAMERA_CALIB_INFO* color_calib,
              uint32_t rgbW, uint32_t rgbH, const uint8_t* inRgb,
              uint8_t* mappedRgb,
                  float f_scale_unit = 1.0f);

Sample code: For the coordinate mapping between the depth image and the color image, please refer to the SDK sample program SimpleView_Registration.

static void doRegister(const TY_CAMERA_CALIB_INFO& depth_calib
                      , const TY_CAMERA_CALIB_INFO& color_calib
                      , const cv::Mat& depth
                      , const float f_scale_unit
                      , const cv::Mat& color
                      , bool needUndistort
                      , cv::Mat& undistort_color
                      , cv::Mat& out
                      , bool map_depth_to_color
                      )
{
  // do undistortion
  if (needUndistort) {
    TY_IMAGE_DATA src;
    src.width = color.cols;
    src.height = color.rows;
    src.size = color.size().area() * 3;
    src.pixelFormat = TY_PIXEL_FORMAT_RGB;
    src.buffer = color.data;

    undistort_color = cv::Mat(color.size(), CV_8UC3);
    TY_IMAGE_DATA dst;
    dst.width = color.cols;
    dst.height = color.rows;
    dst.size = undistort_color.size().area() * 3;
    dst.buffer = undistort_color.data;
    dst.pixelFormat = TY_PIXEL_FORMAT_RGB;
    ASSERT_OK(TYUndistortImage(&color_calib, &src, NULL, &dst));
  }
  else {
    undistort_color = color;
  }

  // do register
  if (map_depth_to_color) {
    out = cv::Mat::zeros(undistort_color.size(), CV_16U);
    ASSERT_OK(
      TYMapDepthImageToColorCoordinate(
        &depth_calib,
        depth.cols, depth.rows, depth.ptr<uint16_t>(),
        &color_calib,
        out.cols, out.rows, out.ptr<uint16_t>(), f_scale_unit
      )
    );
    cv::Mat temp;
    //you may want to use median filter to fill holes in projected depth image
    //or do something else here
    cv::medianBlur(out, temp, 5);
    out = temp;
  }
  else {
    out = cv::Mat::zeros(depth.size(), CV_8UC3);
    ASSERT_OK(
      TYMapRGBImageToDepthCoordinate(
        &depth_calib,
        depth.cols, depth.rows, depth.ptr<uint16_t>(),
        &color_calib,
        undistort_color.cols, undistort_color.rows, undistort_color.ptr<uint8_t>(),
        out.ptr<uint8_t>(), f_scale_unit
      )
    );
  }
}

TYMapMono8ImageToDepthCoordinate maps a MONO8 color image to the depth image coordinates. Input the calibration data, width, and height of the depth image and the depth image itself, as well as the calibration data, width, and height of the MONO8 color image and the MONO8 image itself. The output is the mapped MONO8 color image data.

static inline TY_STATUS TYMapMono8ImageToDepthCoordinate(
              const TY_CAMERA_CALIB_INFO* depth_calib,
              uint32_t depthW, uint32_t depthH, const uint16_t* depth,
              const TY_CAMERA_CALIB_INFO* color_calib,
              uint32_t monoW, uint32_t monoH, const uint8_t* inMono,
              uint8_t* mappedMono,
                  float f_scale_unit = 1.0f);

Image processing

TYUndistortImage removes lens distortion from images output by sensor components. Supported pixel formats are TY_PIXEL_FORMAT_MONO, TY_PIXEL_FORMAT_RGB, and TY_PIXEL_FORMAT_BGR. The input parameters are the sensor calibration data, the distorted source image, and the desired camera intrinsics (if NULL is passed, the sensor's original intrinsics are used). The function outputs the undistorted image data.

TY_CAPI TYUndistortImage (const TY_CAMERA_CALIB_INFO *srcCalibInfo
    , const TY_IMAGE_DATA *srcImage
    , const TY_CAMERA_INTRINSIC *cameraNewIntrinsic
    , TY_IMAGE_DATA *dstImage
    );

TYDepthSpeckleFilter removes speckle noise (small isolated blobs of depth values) from depth images. It takes a depth image and filtering parameters as input, and outputs the processed depth image in place.

struct DepthSpeckleFilterParameters {
    int max_speckle_size; // blobs smaller than this size will be removed
    int max_speckle_diff; // maximum difference between neighboring disparity pixels
};
#define DepthSpeckleFilterParameters_Initializer {150, 64}

TY_CAPI TYDepthSpeckleFilter (TY_IMAGE_DATA* depthImage
        , const DepthSpeckleFilterParameters* param
        );
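One common way to implement such a speckle filter is connected-component labeling: pixels are grouped into a blob while neighboring depths differ by at most max_speckle_diff, and blobs smaller than max_speckle_size are invalidated. The standalone sketch below illustrates how the two parameters interact; it is not the SDK's implementation.

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>
#include <queue>
#include <vector>

// Remove small "speckle" blobs from a w x h depth image stored row-major.
// Neighboring pixels belong to the same blob when their depths differ by
// at most maxDiff; blobs with fewer than maxSize pixels are set to 0.
void speckleFilter(std::vector<uint16_t>& depth, int w, int h,
                   int maxSize, int maxDiff) {
    std::vector<int> label(w * h, -1);
    std::vector<std::vector<int>> blobs;
    for (int start = 0; start < w * h; ++start) {
        if (label[start] >= 0 || depth[start] == 0) continue;
        // BFS over 4-connected neighbors with similar depth.
        std::vector<int> blob;
        std::queue<int> q;
        q.push(start);
        label[start] = (int)blobs.size();
        while (!q.empty()) {
            int idx = q.front(); q.pop();
            blob.push_back(idx);
            int x = idx % w, y = idx / w;
            const int nx[4] = {x - 1, x + 1, x, x};
            const int ny[4] = {y, y, y - 1, y + 1};
            for (int k = 0; k < 4; ++k) {
                if (nx[k] < 0 || nx[k] >= w || ny[k] < 0 || ny[k] >= h) continue;
                int n = ny[k] * w + nx[k];
                if (label[n] >= 0 || depth[n] == 0) continue;
                if (std::abs((int)depth[n] - (int)depth[idx]) > maxDiff) continue;
                label[n] = label[start];
                q.push(n);
            }
        }
        blobs.push_back(blob);
    }
    for (const auto& blob : blobs)
        if ((int)blob.size() < maxSize)
            for (int idx : blob) depth[idx] = 0;   // invalidate small blobs
}
```

With the default parameters {150, 64}, a lone pixel whose depth jumps by more than 64 from its neighbors forms a single-pixel blob and is cleared.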

TYDepthEnhenceFilter is an image filtering algorithm that takes as input an array of depth images, the number of images, a guide image, and the filter parameters, and outputs the filtered depth data.

TY_CAPI TYDepthEnhenceFilter (const TY_IMAGE_DATA* depthImages
        , int imageNum
        , TY_IMAGE_DATA *guide
        , TY_IMAGE_DATA *output
        , const DepthEnhenceParameters* param
        );


struct DepthEnhenceParameters{
    float sigma_s;          // filter param on space
    float sigma_r;          // filter param on range
    int   outlier_win_sz;   // outlier filter windows size
    float outlier_rate;
};
#define DepthEnhenceParameters_Initializer {10, 20, 10, 0.1f}

sigma_s is the spatial filtering coefficient, sigma_r is the depth (range) filtering coefficient, outlier_win_sz is the outlier filtering window size in pixels, and outlier_rate is the noise filtering coefficient.
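The roles of sigma_s and sigma_r can be illustrated with a plain bilateral filter, which weights each neighbor by both its spatial distance (sigma_s) and its depth difference (sigma_r), smoothing flat regions while preserving depth edges. A standalone 1-D sketch, not the SDK's implementation:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// 1-D bilateral filter: each output sample is a weighted average of its
// neighbors, where weights fall off with spatial distance (sigma_s) and
// with depth difference (sigma_r).
std::vector<float> bilateral1d(const std::vector<float>& in,
                               int radius, float sigma_s, float sigma_r) {
    std::vector<float> out(in.size());
    for (size_t i = 0; i < in.size(); ++i) {
        float sum = 0.f, wsum = 0.f;
        for (int d = -radius; d <= radius; ++d) {
            int j = (int)i + d;
            if (j < 0 || j >= (int)in.size()) continue;
            float ws = std::exp(-(d * d) / (2.f * sigma_s * sigma_s));    // spatial weight
            float dr = in[j] - in[i];
            float wr = std::exp(-(dr * dr) / (2.f * sigma_r * sigma_r));  // range weight
            sum  += ws * wr * in[j];
            wsum += ws * wr;
        }
        out[i] = sum / wsum;
    }
    return out;
}
```

With a small sigma_r, samples across a large depth step get near-zero range weight, so the step edge survives the smoothing.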