SDK Application Reference
Sample Program Descriptions
The sample programs in sample_v2
introduce user-friendly camera control interfaces and make the OpenCV dependency optional, so users can include it only when needed.
Sample_v1
ListDevices
: demonstrates how to list all the depth cameras that are connected to the host system.
DeviceStorage
: demonstrates how to read from and write to the camera’s Custom Block Storage area (64KB) and the ISP Block Storage area (64KB).
DumpAllFeatures
: demonstrates how to list the components and features supported by the depth camera, along with the read and write operations available for each feature.
ForceDeviceIP
: demonstrates how to manually set the IP address for the depth camera.
LoopDetect
: demonstrates how to address data connection anomalies caused by unstable environmental factors.
NetStatistic
: demonstrates how to calculate the packet loss rate of the network connection for the depth camera.
SimpleView_FetchFrame
: demonstrates how to continuously capture and output image data when the depth camera is in continuous capture mode.
SimpleView_Callback
: demonstrates how to capture image data in continuous capture mode and render it using OpenCV in a separate processing thread to prevent blocking.
SimpleView_FetchHisto
: demonstrates how to retrieve the image brightness distribution histogram.
SimpleView_MultiDevice
: demonstrates how to use multiple depth cameras to continuously capture and output image data.
SimpleView_Point3D
: demonstrates how to acquire 3D point cloud data.
SimpleView_Registration
: demonstrates how to obtain the intrinsic parameters, extrinsic parameters, depth map, and color image of the depth camera, and align the depth map with the color image.
SimpleView_TriggerDelay
: demonstrates how to control the camera’s trigger delay.
SimpleView_TriggerMode0
: demonstrates how to set the depth camera to trigger mode 0, allowing it to continuously capture and output images at the highest frame rate.
SimpleView_TriggerMode1
: demonstrates how to set the depth camera to trigger mode 1, allowing it to acquire and output images upon receiving a trigger signal.
SimpleView_TriggerMode_M2S1
: demonstrates how to set the master device (camera) to work in trigger mode 2, and multiple slave devices (cameras) to work in trigger mode 1, in order to achieve cascaded triggering among multiple depth cameras while capturing images simultaneously. Upon receiving a software trigger signal sent by the host computer, the master device outputs a trigger signal via the hardware TRIG_OUT interface, while simultaneously triggering itself to capture and output a depth map. After receiving the hardware trigger signal from the master device, the slave devices capture and output depth maps.
SimpleView_TriggerMode_M3S1
: demonstrates how to set the master device (camera) to work in trigger mode 3, and multiple slave devices (cameras) to work in trigger mode 1, in order to achieve cascaded triggering of multiple depth cameras at the set frame rate while capturing images simultaneously. The master device outputs a trigger signal through the hardware TRIG_OUT interface at the set frame rate, while simultaneously triggering itself to capture and output a depth map. After receiving the hardware trigger signal from the master device, the slave devices capture and output depth maps.
SimpleView_SaveLoadConfig
: demonstrates how to write a local JSON file into the camera’s Storage component and load the JSON file that has been written into the Storage component. Additionally, the program supports exporting the JSON file to the local system for checking and configuring parameters.
SimpleView_XYZ48
: demonstrates how to parse and display depth maps in XYZ format.
SimpleView_Point3D_XYZ48
: demonstrates how to acquire 3D point cloud data represented in xyz48 format.
Sample_v2
ListDevices_v2
: demonstrates how to list all depth cameras that are connected to the host computer.
DepthStream_v2
: demonstrates how to acquire the depth map from the depth camera.
ExposureTimeSetting_v2
: demonstrates how to set the exposure time of the camera’s color image.
ForceDeviceIP_v2
: demonstrates how to forcibly set the IP address of the depth camera.
GetCalibData_v2
: demonstrates how to get the raw calibration parameters of the depth camera.
TofDepthStream_v2
: demonstrates how to perform distortion correction on a ToF depth map.
IREnhance_v2
: demonstrates how to perform image enhancement on the IR images of ToF depth cameras.
NetStatistic_v2
: demonstrates how to calculate the network packet loss rate of the depth camera.
OfflineReconnection_v2
: demonstrates how to automatically reconnect the camera after a disconnection.
OpenWithInterface_v2
: demonstrates how to access the camera through a specified network interface.
OpenWithIP_v2
: demonstrates how to access the camera through a specified IP address.
PointCloud_v2
: demonstrates how to acquire and save 3D point cloud data in PLY format.
Registration_v2
: demonstrates how to register the depth and color images.
ResolutionSetting_v2
: demonstrates how to set the image resolution through user interaction or by directly specifying the image mode.
SaveLoadConfig_v2
: demonstrates how to save camera parameters to custom_block.bin (the camera’s internal memory) and export the camera parameters locally from it.
SoftTrigger_v2
: demonstrates how to acquire and output images upon receiving a trigger signal.
StreamAsync_v2
: demonstrates how to configure the asynchronous output of image data streaming from the camera.
framefetch.py
: demonstrates how to capture depth maps and color images in continuous capture mode.
frame_fetchIR.py
: demonstrates how to capture IR images in continuous capture mode.
frame_isp.py
: demonstrates how to process RAW BAYER images with color casts into color images in a normal color space.
frame_registration.py
: demonstrates how to register the depth and color images.
frame_trigger.py
: demonstrates how to configure the camera to operate in soft trigger mode and capture a depth map.
multidevice_fetch.py
: demonstrates how to configure multiple cameras for image acquisition.
point3d_fetch.py
: demonstrates how to capture 3D point cloud data and log the number of points and the (X, Y, Z) coordinates of the center point.
parameter_settings.py
: demonstrates how to set the exposure time and resolution for the color image.
temp_read.py
: demonstrates how to read the temperature values from the temperature sensors.
fetch_frame.cs
: demonstrates how to capture depth maps and color images in continuous capture mode.
fetch_IR.cs
: demonstrates how to capture IR images in continuous capture mode.
fetch_isp.cs
: demonstrates how to process RAW BAYER images with color casts into color images in a normal color space.
fetch_registration.cs
: demonstrates how to capture and align RGB-D images.
fetch_trigger.cs
: demonstrates how to configure the camera to operate in soft trigger mode and capture a depth map.
fetch_point3d.cs
: demonstrates how to capture 3D point cloud data and log the number of points and the (X, Y, Z) coordinates of the center point.
offline_reconnection.cs
: demonstrates how to automatically reconnect the camera after a disconnection.
parameter_settings.cs
: demonstrates how to set camera parameters.
Image Acquisition Process
The configuration and image acquisition process for the depth camera is illustrated below. The C++ SDK sample program SimpleView_FetchFrame
is used to walk through the image acquisition process step by step.

Initialize APIs
TYInitLib initializes internal data structures such as device objects.
// Load the library
LOGD("Init lib");
ASSERT_OK( TYInitLib() );
// Retrieve SDK version information
TY_VERSION_INFO ver;
ASSERT_OK( TYLibVersion(&ver) );
LOGD(" - lib version: %d.%d.%d", ver.major, ver.minor, ver.patch);
Open Device
Acquire Device List
When retrieving device information for the first time, use selectDevice() to query the number of connected devices and obtain a list of all connected devices.
std::vector<TY_DEVICE_BASE_INFO> selected;
ASSERT_OK( selectDevice(TY_INTERFACE_ALL, ID, IP, 1, selected) );
ASSERT(selected.size() > 0);
TY_DEVICE_BASE_INFO& selectedDev = selected[0];
Open Interface
ASSERT_OK( TYOpenInterface(selectedDev.iface.id, &hIface) );
Open Device
ASSERT_OK( TYOpenDevice(hIface, selectedDev.id, &hDevice) );
Configure Components
Query Component Status
// Query components that are supported by the camera
TY_COMPONENT_ID allComps;
ASSERT_OK( TYGetComponentIDs(hDevice, &allComps) );
Configure Components and Set Features
After the device is opened, only the virtual component TY_COMPONENT_DEVICE is enabled by default.
// Enable the RGB component and configure RGB component features
if (allComps & TY_COMPONENT_RGB_CAM && color) {
    LOGD("Has RGB camera, open RGB cam");
    ASSERT_OK( TYEnableComponents(hDevice, TY_COMPONENT_RGB_CAM) );
    // Create an ISP handle to convert raw images (color Bayer format) to RGB images
    ASSERT_OK( TYISPCreate(&hColorIspHandle) );
    // Init code can be modified in common.hpp
    // NOTE: the RGB image format & size should be set before initializing the ISP
    ASSERT_OK( ColorIspInitSetting(hColorIspHandle, hDevice) );
    // You can call the following function to show the features supported by the color ISP
}

// Enable the left IR component
if (allComps & TY_COMPONENT_IR_CAM_LEFT && ir) {
    LOGD("Has IR left camera, open IR left cam");
    ASSERT_OK( TYEnableComponents(hDevice, TY_COMPONENT_IR_CAM_LEFT) );
}

// Enable the right IR component
if (allComps & TY_COMPONENT_IR_CAM_RIGHT && ir) {
    LOGD("Has IR right camera, open IR right cam");
    ASSERT_OK( TYEnableComponents(hDevice, TY_COMPONENT_IR_CAM_RIGHT) );
}

// Enable the depth component and configure depth component features
LOGD("Configure components, open depth cam");
DepthViewer depthViewer("Depth");
if (allComps & TY_COMPONENT_DEPTH_CAM && depth) {
    // Configure depth component features (depth map resolution)
    TY_IMAGE_MODE image_mode;
    ASSERT_OK( get_default_image_mode(hDevice, TY_COMPONENT_DEPTH_CAM, image_mode) );
    LOGD("Select depth map Mode: %dx%d", TYImageWidth(image_mode), TYImageHeight(image_mode));
    ASSERT_OK( TYSetEnum(hDevice, TY_COMPONENT_DEPTH_CAM, TY_ENUM_IMAGE_MODE, image_mode) );
    // Enable the depth component
    ASSERT_OK( TYEnableComponents(hDevice, TY_COMPONENT_DEPTH_CAM) );
    // Configure depth component features (scale unit)
    // The depth map pixel format is uint16_t, whose default unit is 1 mm;
    // the actual depth (mm) = PixelValue * ScaleUnit
    float scale_unit = 1.;
    TYGetFloat(hDevice, TY_COMPONENT_DEPTH_CAM, TY_FLOAT_SCALE_UNIT, &scale_unit);
    depthViewer.depth_scale_unit = scale_unit;
}
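The scale-unit relationship noted in the comments above (actual depth in mm = pixel value × scale unit) can be illustrated without the SDK. A minimal Python sketch with made-up pixel values; the real scale unit comes from TY_FLOAT_SCALE_UNIT at runtime:

```python
# Convert raw uint16 depth pixel values to millimeters using the scale unit.
# The pixel values and the 0.125 scale unit below are illustrative only.

def depth_to_mm(raw_pixels, scale_unit):
    """Apply actual_depth_mm = pixel_value * scale_unit to each pixel."""
    return [p * scale_unit for p in raw_pixels]

raw = [0, 800, 12000]           # raw uint16 depth values
print(depth_to_mm(raw, 1.0))    # default unit: 1 mm per count
print(depth_to_mm(raw, 0.125))  # sub-millimeter mode: 0.125 mm per count
```

A camera reporting a scale unit of 0.125 thus encodes 100 mm as the raw value 800.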
Manage Frame Buffers
Note
Before performing frame buffer management, ensure that the required components have been enabled via the TYEnableComponents() interface, and the correct image format and resolution have been set via the TYSetEnum() interface. This is because the size of the frame buffer depends on these settings; otherwise, you may encounter issues with insufficient frame buffer space.
// Query frame buffer size for current camera component configurations
LOGD("Prepare image buffer");
uint32_t frameSize;
ASSERT_OK( TYGetFrameBufferSize(hDevice, &frameSize) );
LOGD(" - Get size of framebuffer, %d", frameSize);
// Allocate frame buffers
LOGD(" - Allocate & enqueue buffers");
char* frameBuffer[2];
frameBuffer[0] = new char[frameSize];
frameBuffer[1] = new char[frameSize];
// Enqueue frame buffers
LOGD(" - Enqueue buffer (%p, %d)", frameBuffer[0], frameSize);
ASSERT_OK( TYEnqueueBuffer(hDevice, frameBuffer[0], frameSize) );
LOGD(" - Enqueue buffer (%p, %d)", frameBuffer[1], frameSize);
ASSERT_OK( TYEnqueueBuffer(hDevice, frameBuffer[1], frameSize) );
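The two buffers above rotate through a queue: the SDK fills an enqueued buffer, the application fetches and processes it, then hands it back with TYEnqueueBuffer. A toy Python simulation of this rotation (buffer names and the fixed frame count are illustrative, not SDK behavior):

```python
from collections import deque

# Simulate the frame buffer rotation: the driver takes the oldest enqueued
# buffer to fill with a frame; the application fetches it, processes it,
# and re-enqueues it. Two buffers are enough to overlap fill and process.
free_buffers = deque(["buf0", "buf1"])   # buffers handed to the SDK
frames_processed = []

for frame_no in range(4):
    filled = free_buffers.popleft()      # driver fills the oldest buffer
    frames_processed.append((frame_no, filled))  # application processes it
    free_buffers.append(filled)          # like TYEnqueueBuffer: hand it back

print(frames_processed)   # buffers alternate: buf0, buf1, buf0, buf1, ...
```

If a buffer is never re-enqueued, the queue drains and frame delivery stalls, which is why the fetch loop below always re-enqueues frame.userBuffer after processing.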
Register Callback Function
TYRegisterEventCallback
Register an event callback function. When an exception occurs, the system calls the function registered via TYRegisterEventCallback. The following example demonstrates a callback function that includes exception handling for reconnection scenarios.
static bool offline = false;
void eventCallback(TY_EVENT_INFO *event_info, void *userdata)
{
    if (event_info->eventId == TY_EVENT_DEVICE_OFFLINE) {
        LOGD("=== Event Callback: Device Offline!");
        // Note:
        // Please set the TY_BOOL_KEEP_ALIVE_ONOFF feature to false if you need to debug with breakpoints!
        offline = true;
    }
}
int main(int argc, char* argv[])
{
    LOGD("Register event callback");
    ASSERT_OK( TYRegisterEventCallback(hDevice, eventCallback, NULL) );
    bool exit_main = false;
    while (!exit_main && !offline) {
        // Fetch and process frame data
    }
    if (offline) {
        // Release resources
        TYStopCapture(hDevice);
        TYCloseDevice(hDevice);
        // You can try to re-open and start the device to capture images again,
        // or just close the interface and exit
    }
    return 0;
}
Set Work Mode
Configure the depth camera’s work mode according to actual needs. For steps on how to set the camera to other work modes, refer to Work Mode Settings.
// Check if the camera supports setting the work mode
bool hasTrigger;
ASSERT_OK( TYHasFeature(hDevice, TY_COMPONENT_DEVICE, TY_STRUCT_TRIGGER_PARAM_EX, &hasTrigger) );
if (hasTrigger) {
    // Set the depth camera to work mode 0
    LOGD("Disable trigger mode");
    TY_TRIGGER_PARAM_EX trigger;
    trigger.mode = TY_TRIGGER_MODE_OFF;
    ASSERT_OK( TYSetStruct(hDevice, TY_COMPONENT_DEVICE, TY_STRUCT_TRIGGER_PARAM_EX, &trigger, sizeof(trigger)) );
}
Start Image Capture
LOGD("Start capture");
ASSERT_OK( TYStartCapture(hDevice) );
Fetch Frame Data
LOGD("While loop to fetch frame");
bool exit_main = false;
TY_FRAME_DATA frame;
int index = 0;
while (!exit_main) {
    int err = TYFetchFrame(hDevice, &frame, -1);
    if (err == TY_STATUS_OK) {
        LOGD("Get frame %d", ++index);

        int fps = get_fps();
        if (fps > 0) {
            LOGI("fps: %d", fps);
        }

        cv::Mat depth, irl, irr, color;
        parseFrame(frame, &depth, &irl, &irr, &color, hColorIspHandle);
        if (!depth.empty()) { depthViewer.show(depth); }
        if (!irl.empty())   { cv::imshow("LeftIR", irl); }
        if (!irr.empty())   { cv::imshow("RightIR", irr); }
        if (!color.empty()) { cv::imshow("Color", color); }

        int key = cv::waitKey(1);
        switch (key & 0xff) {
        case 0xff:
            break;
        case 'q':
            exit_main = true;
            break;
        default:
            LOGD("Unmapped key %d", key);
        }

        TYISPUpdateDevice(hColorIspHandle);
        LOGD("Re-enqueue buffer(%p, %d)", frame.userBuffer, frame.bufferSize);
        ASSERT_OK( TYEnqueueBuffer(hDevice, frame.userBuffer, frame.bufferSize) );
    }
}
Stop Capture
ASSERT_OK( TYStopCapture(hDevice) );
Close Device
// Close Device
ASSERT_OK( TYCloseDevice(hDevice));
// Release interface handle
ASSERT_OK( TYCloseInterface(hIface) );
ASSERT_OK(TYISPRelease(&hColorIspHandle));
Release API
// Unload the library
ASSERT_OK( TYDeinitLib() );
// Free allocated memory resources
delete[] frameBuffer[0];
delete[] frameBuffer[1];
Application Example: Set Camera IP
This section introduces how to use the compiled ForceDeviceIP
example to set the camera’s IP address and provides common examples of IP address configurations.
Note
Description of IP Address Types:
Temporary IP Address: A manually configured IP address temporarily assigned to the device.
Static IP Address: A manually configured IP address permanently assigned to the device.
Dynamic IP Address: An IP address automatically assigned by the DHCP (Dynamic Host Configuration Protocol) server in the network.
Commands
Set a temporary IP address: after executing the command, the camera’s IP address changes to the one specified in the command and takes effect immediately; after the camera is powered off and rebooted, the original configuration is restored.
Command:
ForceDeviceIP.exe -force <MAC> <newIP> <newNetmask> <newGateway>
Sample code:
ForceDeviceIP.exe -force 68:f7:56:36:90:a3 192.168.1.160 255.255.255.0 192.168.1.1
Set a static IP address: after executing the command, the camera’s IP address changes to the one specified in the command and takes effect immediately; the configured address persists after the camera is powered off and rebooted.
Command:
ForceDeviceIP.exe -static <MAC> <newIP> <newNetmask> <newGateway>
Sample code:
ForceDeviceIP.exe -static 68:f7:56:36:90:a3 192.168.1.160 255.255.255.0 192.168.1.1
Set a dynamic IP address: after executing the command, the camera’s IP address changes to the one specified in the command; after the camera is powered off and rebooted, the camera requests an IP address via DHCP.
Command:
ForceDeviceIP.exe -dynamic <MAC> <newIP> <newNetmask> <newGateway>
Sample code:
ForceDeviceIP.exe -dynamic 68:f7:56:36:90:a3 192.168.1.160 255.255.255.0 192.168.1.1
<MAC>
: The camera’s MAC address, in the format xx:xx:xx:xx:xx:xx. It can be obtained from the device label.
<newIP>
: The IP address to be set.
<newNetmask>
: The subnet mask corresponding to the IP address to be set.
<newGateway>
: The default gateway corresponding to the IP address to be set.
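As a quick sanity check on the argument formats above, here is a Python sketch that validates a MAC address (xx:xx:xx:xx:xx:xx) and an IPv4 address before building the command line. The helper names are our own, not part of the SDK:

```python
import ipaddress
import re

# Matches the xx:xx:xx:xx:xx:xx format printed on the device label
MAC_RE = re.compile(r"^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$")

def is_valid_mac(mac):
    """True if mac matches the xx:xx:xx:xx:xx:xx format."""
    return bool(MAC_RE.match(mac))

def is_valid_ipv4(addr):
    """True if addr parses as an IPv4 address."""
    try:
        ipaddress.IPv4Address(addr)
        return True
    except ValueError:
        return False

# Example values taken from the sample commands above
print(is_valid_mac("68:f7:56:36:90:a3"))   # True
print(is_valid_ipv4("192.168.1.160"))      # True
print(is_valid_mac("68-f7-56-36-90-a3"))   # False: wrong separator
```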
Use Cases
Use Case 1
Set a static C-class IP address (192.168.5.12) for the Percipio network camera.
Steps on Windows 10:
Check the current IP address configuration of the host computer.
Press WIN+R, type cmd, then enter
ipconfig
and press Enter.
Check if the host computer’s IP address is within the target subnet. If not (e.g., the target subnet is 192.168.5.XX, but the current IP address is in the 192.168.6.XX subnet), then it is necessary to modify the host computer’s IP address.
Open the Control Panel on your computer, navigate to Network and Internet > Network and Sharing Center > Change adapter settings > Ethernet > Internet Protocol Version 4 (TCP/IPv4). In the Internet Protocol Version 4 (TCP/IPv4) Properties dialog box that appears, select Use the following IP address and configure the IP address, subnet mask, and gateway.
Open the
lib\win\hostapp\x64
folder in the SDK. Open Windows PowerShell in this directory and execute the following command:
ForceDeviceIP.exe -static 06:29:39:05:DA:D1 192.168.5.12 255.255.255.0 192.168.5.1
06:29:39:05:DA:D1 is the MAC address of the camera; 192.168.5.12 is the newly assigned IP address; 255.255.255.0 is the subnet mask corresponding to the new IP address, and 192.168.5.1 is the default gateway corresponding to the new IP address.
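The subnet check in the steps above can also be done programmatically. A minimal Python sketch using the standard ipaddress module, with the example addresses from this use case (a 255.255.255.0 mask is assumed):

```python
import ipaddress

def same_subnet(host_ip, target_ip, netmask="255.255.255.0"):
    """True if both addresses fall in the same subnet for the given mask."""
    host_net = ipaddress.ip_network(f"{host_ip}/{netmask}", strict=False)
    return ipaddress.ip_address(target_ip) in host_net

# A host in 192.168.6.XX cannot reach a camera at 192.168.5.12 directly:
print(same_subnet("192.168.6.20", "192.168.5.12"))   # False
# After moving the host into 192.168.5.XX, the check passes:
print(same_subnet("192.168.5.100", "192.168.5.12"))  # True
```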
Use Case 2
Set a Dynamic IP Address for a Percipio Network Camera.
Steps on Windows 10 (assuming the host computer’s IP configuration is set to DHCP mode):
Open the
lib\win\hostapp\x64
folder in the SDK. Open Windows PowerShell in this directory and run the following command:
ForceDeviceIP.exe -dynamic 06:29:39:05:DA:D1
06:29:39:05:DA:D1 is the MAC address of the camera.