After obtaining this document, you can download the model_matching.zip 3D model matching tutorial data from the RVS developer community. After decompression, its content should include:

image-20231225151333115

What needs to be used in this case are:

  • create_model.xml: Generates a model pointcloud from the model file, then preprocesses and saves the pointcloud.

  • RVSCommonGroup: Commonly used Groups; load them as needed.

  • model.pcd: The pointcloud file saved after running create_model.xml.

  • match_model.xml: Used for offline model matching. When using it, update the offline data path in the ReadDirectoryNames node and the model pointcloud file in the LoadModelCloud node. First click the start button in the dashboard to start the project, then click the iterate button to iterate over the offline data in test_data.

  • test_data: Offline data for model matching; make sure the path is correct when using it.

  • workpiece.obj: The 3D model file loaded by the Load node (type PolyData) in create_model.xml. (The model file formats supported by RVS are *.ply, *.stl, *.obj, *.gltf, and *.glb. If the current model file is in another format, convert it in advance.)

All files used in this case are saved in the model_matching directory. Please run them from the projects folder under the RVS installation directory. For example:

Note: On the Windows version, run them from the RVS installation directory.

Note: If you do not want to adjust the file paths in the XML, unstacking_runtime can be renamed to runtime.

image-20231225151915907

Note: The RVS software version used in this document is 1.5.236. If you run into any problems or version incompatibilities during use, please give us timely feedback by emailing rvs-support@percipio.xyz!

Model pointcloud generation

In the model matching process, a model pointcloud is required. Starting from a 3D model file, we need to generate a model pointcloud, then preprocess, adjust, and save it. This chapter introduces how to generate the model pointcloud, including the following steps:

  1. Grid sampling: Create a pointcloud based on the given 3D model.

  2. Pointcloud preprocessing: Divide suitable work areas in the pointcloud.

  3. Adjust pointcloud: Adjust the model pointcloud object face and coordinate system.

Grid sampling

Load the 3D model file and convert it to a pointcloud using the MeshSampling node.

Operation steps:

Note: You can directly use create_model.xml under the model_matching/ path to generate the model pointcloud, or you can build the XML yourself by following the steps below.

  1. Create a new project create_model.xml.

  2. Add Trigger, Load, and MeshSampling nodes to the node graph.

  3. Set node parameters.

    • Set Load node parameters:

      • type → PolyData

      • filename → icon_more→ workpiece.obj (If you are directly loading an existing create_model.xml, pay attention to the file path)

      • surface → icon_visOn

    • Set MeshSampling node parameters:

      • sample_points → 10000

      • cloud → icon_visOnicon_color 90

  4. Connect the nodes.

    image-20231225153040080

  5. Click the RVS Run button and trigger the Trigger node.

Running result

The results are shown in the figure below. In the 3D view, the model is displayed in grey-white and the converted pointcloud in green.

image-20231225153009450
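The conversion performed by the MeshSampling node can be sketched in a few lines of Python. This is a hypothetical `mesh_sample` helper, not RVS's actual implementation: assuming area-weighted sampling, triangles are picked with probability proportional to their area and a point is placed with uniform barycentric coordinates, yielding `sample_points` points overall.

```python
import numpy as np

def mesh_sample(vertices, faces, n_points, seed=0):
    """Sample points uniformly from a triangle mesh (area-weighted),
    a rough sketch of what a mesh-sampling step does."""
    rng = np.random.default_rng(seed)
    tri = vertices[faces]                      # (F, 3, 3) triangle corners
    # triangle areas via the cross product of two edges
    areas = 0.5 * np.linalg.norm(
        np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0]), axis=1)
    idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    # random barycentric coordinates (square-root trick keeps them uniform)
    u = np.sqrt(rng.random(n_points))
    v = rng.random(n_points)
    a, b, c = tri[idx, 0], tri[idx, 1], tri[idx, 2]
    return (1 - u)[:, None] * a + (u * (1 - v))[:, None] * b + (u * v)[:, None] * c

# toy mesh: a unit square made of two triangles
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], float)
faces = np.array([[0, 1, 2], [0, 2, 3]])
cloud = mesh_sample(verts, faces, 10000)
print(cloud.shape)  # (10000, 3)
```

With sample_points → 10000, the node produces a cloud of 10000 points spread over the model surface, as in the green result above.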

Pointcloud preprocessing

After transforming the 3D model into a pointcloud, the pointcloud is preprocessed. According to the job requirements of model matching, the pointcloud should be cut so that only the upper surface of the model pointcloud is retained as the working area.

Operation steps:

  1. Add Emit and CloudSegment nodes to the node graph.

  2. Set the node parameters.

    • Set Emit node parameters:

      • type → Cube

      • pose → 0.101468 0.048794 -0.072353 1.253140 1.543760 1.253250

      • width → 0.3

      • height → 0.3

      • depth → 0.2

      • cube → icon_visOn

    • Set the CloudSegment node parameters:

      • type → CropboxSegment

      • mode → intersection

      • cloud → icon_visOnicon_color 0

    Note: The CloudSegment node is used to cut out the desired pointcloud based on the given cube.

  3. Connect the nodes.

    image-20231225153226078

  4. Click the RVS Run button and trigger the Trigger node.

Running result

The results below show the pointcloud displayed in the 3D view before and after cutting. The figure on the left shows the pointcloud visualization result of the MeshSampling node. The middle image is a visualization of the MeshSampling and Emit nodes. The figure on the right shows the pointcloud visualization result of the CloudSegment node.

image-20231225153308970

Adjust pointcloud

The model matching node (ICP node) requires that the input model pointcloud is parallel to the Oxy plane of the actual incoming object pointcloud, and the model pointcloud center is at the origin of the 3D world coordinate system of RVS.

Because the actual incoming object pointcloud is located in the robot coordinate system, and the robot coordinate system coincides with the 3D world coordinate system in RVS, we spatially transform the model pointcloud so that the Oxy plane of its pose is parallel to the Oxy plane of the RVS 3D world coordinate system and the pointcloud center is at the origin of the RVS 3D world coordinate system. This meets the input requirements of the ICP node.

Operation steps:

Step 1: Obtain the model pointcloud's minimum bounding cube and its center point pose

  1. Add MinimumBoundingBox and Emit nodes to the node graph.

  2. Set the node parameters:

    • To set the MinimumBoundingBox node parameter:

      • type → ApproxMVBB

      • point_samples → 500

      • box → icon_visOn

      • box_pose → icon_visOn

    Note: The MinimumBoundingBox node is used to obtain the center point of the pointcloud.

    • Set Emit node parameters:

      • node_name → EmitPose

      • type → Pose

      • pose → 0 0 0 0 0 0

  3. Connect the nodes.

    image-20240117180407972

    Note: The pose data generated by the EmitPose node is connected to the ref_pose port on the left side of the MinimumBoundingBox node. It is used to align the Height-Width-Depth of the minimum bounding cube computed from the input cloud with the X-Y-Z of ref_pose.

  4. Click the RVS Run button and trigger the Trigger node.

Running result

As shown in the figure below, the minimum bounding cube and its center point pose are obtained. The center point pose of the cube is regarded as the center point pose of the model pointcloud.

image-20231225153956285

Note: At this point, the center point of the pose can be used as the center point of the model pointcloud, but the Oxy plane of the pose is not parallel to the Oxy plane of the actual pose of the incoming object, so the current pose must be rotated by -90° around its Y axis.

Step 2: Rotate the central point pose of the model pointcloud so that its Oxy plane is parallel to the Oxy plane of the actual incoming object pose

  1. Add Emit and Transform nodes to the node graph.

  2. Set the node parameters.

    • Set Emit node parameters:

      • node_name → RotY90

      • type → Pose

      • pose → 0 0 0 0 -1.5708 0

      • pose → icon_visOn

    • Set the Transform node parameters:

      • type → Pose

      • transformed → icon_visOn

  3. Connect the nodes.

    image-20231225154311671

  4. Click the RVS Run button and trigger the Trigger node.

Running result

The following figure shows the conversion of the model pose's Oxy plane. The left image is the original pose, the middle image shows the original pose together with the generated pose, and the right image is the rotated pose.

image-20231225154342575

Note: The Oxy plane of the current model pose is now parallel to the Oxy plane of the actual incoming object pose. Next, the center of the model pointcloud must be moved to the origin of the 3D world coordinate system in RVS.
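The -90° rotation applied by the RotY90 Emit node composes with the box pose, so it acts around the pose's own Y axis. A small sketch with a hypothetical `rot_y` helper (assuming, for simplicity, that the box orientation is the identity):

```python
import numpy as np

def rot_y(angle):
    """Rotation matrix about the Y axis."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

# box_pose rotation (identity here) composed with the RotY90 pose
box_R = np.eye(3)
adjusted_R = box_R @ rot_y(-np.pi / 2)   # rotate around the pose's own Y axis
# the pose's z axis (third column) now points along the old -x axis
print(np.round(adjusted_R[:, 2], 6))
```

In the project this composition is done by the Transform node (type Pose), with RotY90's output on the transform port.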

Step 3: Move the center of the model pointcloud to the origin of the 3D world coordinate system

  1. Add AdjustPose and Transform nodes to the node graph.

  2. Set the node parameters.

    • Set the AdjustPose node parameters: type → InvertPose

    • Set the Transform node parameters:

      • node_name → TransformPointCloud

      • type → PointCloud

      • cloud → icon_visOnicon_color 60

  3. Connect the nodes.

    image-20231225154845199

  4. Click the RVS Run button and trigger the Trigger node.

Running result

The following picture shows the comparison before and after the pointcloud transformation. The yellow pointcloud in the 3D view is the pointcloud transformed by the TransformPointCloud node. The center of the converted pointcloud is at the origin of the 3D world coordinate system in RVS.

Note: You can run an EmitPose (0, 0, 0, 0, 0, 0) to verify that the converted pointcloud center is at the origin of the 3D world coordinate system. You can see that the center of the cloud has moved to the origin of the 3D world coordinate system in RVS.

image-20231225155140023
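Step 3 can be sketched as follows: invert the center pose (as the AdjustPose node with type InvertPose does) and apply the inverse to the cloud, so the old center lands at the origin. `pose_matrix` and `transform_cloud` are hypothetical helpers, not RVS node names.

```python
import numpy as np

def pose_matrix(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def transform_cloud(T, cloud):
    """Apply a 4x4 transform to an (N, 3) cloud."""
    return cloud @ T[:3, :3].T + T[:3, 3]

# center pose of the cloud (translation only, for simplicity)
center = pose_matrix(np.eye(3), [0.10, 0.05, -0.07])
cloud = np.array([[0.10, 0.05, -0.07],   # the center point itself
                  [0.12, 0.05, -0.07]])
moved = transform_cloud(np.linalg.inv(center), cloud)
print(moved[0])   # the old center is now at the origin
```

Applying the inverted center pose to every point is exactly what the TransformPointCloud node does with AdjustPose's output on its pose port.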

Save pointcloud

After the pointcloud adjustment flow, we have obtained a model pointcloud whose center is at the origin of the RVS 3D world coordinate system. Now save the model pointcloud so that it can be used in model matching.

Operation steps:

  1. Add the Save node to the node graph.

  2. Set the node parameters. As shown in the following picture.

    • Set the Save node parameters:

      • type → PointCloud

      • filename → model.pcd

    image-20231225160147006

  3. Connect the nodes.

    image-20231225160226900

  4. Click the RVS Run button and trigger the Trigger node.

Running result

At this point, you can see the saved “model.pcd” in the runtime directory.

image-20231225160436031

Model matching

After the model pointcloud is created, we can perform model matching. In this case, the ICP node is used for model matching. The ICP node requires a source_cloud, a target_cloud, and an initial_pose for the transformation between the two pointclouds.

We need to make sure that the initial_pose is basically accurate; the ICP node then fine-tunes the initial_pose to obtain the final result_pose. After the model pointcloud is spatially transformed according to result_pose, it coincides with the real pointcloud.

This section describes how to perform model matching, including the following steps:

  1. Prepare the model pointcloud: Load the model pointcloud and downsample it to improve node execution efficiency.

  2. Target pointcloud preparation: In this case, multiple groups of offline test data are obtained. The name of the target pointcloud is read using the Foreach-String node and the corresponding target pointcloud is loaded using the load node.

  3. Target pointcloud preprocessing: The background pointcloud (such as ground) in the target pointcloud is removed, and the pointcloud is downsampled to improve the execution efficiency of subsequent nodes.

  4. Initial guess preparation: Given multiple sets of initial guesses, the optimal solution is obtained.

  5. Model matching: The ICP node obtains result_pose, and the model pointcloud is spatially transformed according to the pose to coincide with the target pointcloud.

Prepare a model pointcloud

Load the model pointcloud. The pointcloud is sparsely processed to improve the node execution efficiency.

Operation steps:

Note: You can directly use match_model.xml in the model_matching/ path for model matching, or you can build the XML yourself by following these steps.

  1. Create a new project match_model.xml. (You can directly import LoadModelCloud.group under model_matching/RVSCommonGroup; note the file path.)

  2. Create a new Group in the node graph and add Trigger, Load, Emit, and DownSampling nodes.

  3. Set the node parameters.

    • Set Group parameters: node_name → LoadModelCloud

    • Set Trigger node parameters: type → InitTrigger

    Note: Used to automatically trigger the LoadModelCloud node and the Emit node.

    • Set the Load node parameters:

      • node_name → LoadModelCloud

      • type → PointCloud

      • filename → model.pcd (path of the saved model.pcd file; note the file path)

      • cloud → icon_visOnicon_color 90

    • Set Emit node parameters:

      • type → Pose

      • pose → 0 0 0 0 0 0

      • pose → icon_visOn

      Note: Generates a pose (0, 0, 0, 0, 0, 0) at the RVS 3D world coordinate origin.

    • Set DownSampling node parameters:

      • node_name → DownSampling_model

      • type → DownSample

      • leaf_x → 0.002

      • leaf_y → 0.002

      • leaf_z → 0.002

      Note: Indicates that only one point is kept within each 0.002 m × 0.002 m × 0.002 m spatial cell.

  4. Connect the nodes.

    image-20231225160918027

  5. Click the RVS Run button.

Running result

As shown in the following figure, the loaded model pointcloud and the generated origin pose are displayed in the 3D view, and the origin pose is also the central point of the pointcloud.

image-20231225160955308
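The voxel downsampling performed by the DownSampling node can be sketched as follows. `voxel_downsample` is a hypothetical helper; assuming the node keeps one representative point per leaf cell (here the first point seen; some implementations use the cell centroid instead):

```python
import numpy as np

def voxel_downsample(cloud, leaf=0.002):
    """Keep one point per leaf x leaf x leaf voxel -- a sketch of the
    DownSampling node with leaf_x = leaf_y = leaf_z = leaf."""
    keys = np.floor(cloud / leaf).astype(np.int64)
    # the first point seen in each voxel wins
    _, first = np.unique(keys, axis=0, return_index=True)
    return cloud[np.sort(first)]

cloud = np.array([[0.0000, 0.0, 0.0],
                  [0.0005, 0.0, 0.0],   # same 2 mm voxel as the first point
                  [0.0030, 0.0, 0.0]])  # a different voxel
print(len(voxel_downsample(cloud)))  # 2
```

Thinning the model cloud this way reduces the number of points ICP has to match, which is why it improves node execution efficiency.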

Target pointcloud preparation

Load test data. In this case, there are multiple groups of offline data. Therefore, you need to use the Foreach-String node to load the target pointcloud file.

Operation steps:

  1. Add 2 Trigger nodes.

    • Set Trigger node parameters:

      • node_name → start

      • Trigger → light

    • Set Trigger_1 node parameters:

      • node_name → iterate

      • Trigger → light

  2. Create a new Group in the node graph. (You can directly import LoadTargetCloud.group under model_matching/RVSCommonGroup; note the file path.)

  3. Add DirectoryOperation, Foreach, Emit (2), and Load (2) nodes.

    • Set Group parameters: node_name → LoadTargetCloud

    • Set the DirectoryOperation node parameters:

      • type → ReadDirectory

      • directory → test_data (If you are directly loading an existing match_model.xml, note the file path)

      Note: This step is used to load the offline data folder.

    • Set Foreach node parameters: type → String

    • Set Emit node parameters:

      • node_name → image_name

      • type → string

      • string → /rgb.png

    • Set the Load node parameters:

      • node_name → LoadImage

      • type → Image

      • image → icon_visOn

    • Set Emit_1 node parameters:

      • node_name → cloud_name

      • type → string

      • string → /cloud.pcd

    • Set Load_1 node parameters:

      • node_name → LoadPointCloud

      • type → PointCloud

      • cloud → icon_visOn

  4. Add 2 input tools (the "Button" control) to the interaction panel. Double-click to rename them start and iterate. Click each button with the middle mouse button and bind it to the exposed property of the same name.

    image-20231225161021518

  5. Connect the nodes.

    image-20231225161131973

  6. Click the RVS Run button. Clicking the start button on the panel triggers the subsequent nodes; clicking the iterate button on the panel iterates over the offline data in the test_data folder.

Running result

As shown below, the currently traversed image is displayed in 2D view and the currently traversed pointcloud is displayed in 3D view.

image-20231225161244121

Target pointcloud preprocessing

After loading the test pointcloud data, the background pointcloud needs to be removed to isolate the work area. The pointcloud is then downsampled to improve node execution efficiency.

Operation steps:

Step 1: Crop the pointcloud and filter out the work area

  1. Add Emit and CloudSegment nodes to the node graph.

  2. Set the node parameters.

    • Set Emit node parameters:

      • node_name → EmitCube

      • type → Cube

      • pose → -0.126006 -0.085611 0.621984 -3.141590 -3.117050 3.035830

      • width → 0.5

      • height → 0.5

      • depth → 0.2

      • cube → icon_visOn

    • Set the CloudSegment node parameters:

      • type → CropboxSegment

      • cloud → icon_visOnicon_color 90

      Note: The CloudSegment node is used to cut out the desired pointcloud based on the given cube.

  3. Connect the nodes.

    image-20231225161326068

  4. Click the RVS Run button. Click the start button on the panel.

Running result

The following figure shows the pointcloud before and after cropping. The left image shows the pointcloud before cutting, the middle image shows the generated Cube, and the right image shows the pointcloud after cutting (green part).

image-20231225161627276

Step 2: Pointcloud downsampling

  1. Add the DownSampling node to the node graph.

  2. Set the node parameters.

  • Set DownSampling node parameters:

    • node_name → DownSampling_cloud

    • type → DownSample

    • leaf_x/leaf_y/leaf_z → 0.002

    • cloud → icon_visOnicon_color 90

      Note: leaf_x, leaf_y, and leaf_z of the DownSampling node specify the resampling interval of the pointcloud along the x, y, and z axes. Each is set to 0.002 m, indicating that only one point is kept within each 0.002 m × 0.002 m × 0.002 m spatial cell.

  3. Connect the nodes.

    image-20231225161728829

  4. Click the RVS Run button and click the start button on the panel.

Running result

The following figure shows the comparison of pointclouds before and after downsampling. The left picture is the pointcloud before downsampling, and the right picture is the pointcloud after downsampling.

image-20231225161812765

Initial estimate preparation

Model matching computes the transformation pose from the model pointcloud to the target pointcloud. Before calculating this transform, an initial estimate must be given; the closer the estimate is to the true value, the easier it is to obtain a stable matching result. The MinimumBoundingBox node is used to obtain the center pose of the input target pointcloud as the initial estimate. However, since the long and short sides of the actual target pointcloud lie along the X axis or Y axis of the pose, the long and short sides of the transformed model pointcloud should also be parallel to the long and short sides of the actual pointcloud.

The target pointcloud's Oxy plane and the model pointcloud's Oxy plane are parallel, but the model pointcloud's X (Y) axis may map to the target pointcloud's Y (X) axis. Therefore, starting from the pose output by the MinimumBoundingBox node, a total of 4 poses are obtained by rotating it 90 degrees around its z axis in turn, and all of them are used as initial guesses. This ensures that one of the 4 poses is close to the true value.
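The four-guess generation above can be sketched as follows. `yaw_guesses` is a hypothetical helper; the actual GenerateSamplePose node (type BoxGridSample) samples yaw between min_position and max_position in step_yaw steps, which with the parameters below yields the same set of four 90°-spaced poses.

```python
import numpy as np

def yaw_guesses(R0, t0):
    """Generate 4 initial guesses by rotating the bounding-box pose
    0/90/180/270 degrees about its own z axis -- a sketch of
    GenerateSamplePose with 4 yaw samples."""
    guesses = []
    for yaw in (0.0, np.pi / 2, np.pi, -np.pi / 2):
        c, s = np.cos(yaw), np.sin(yaw)
        Rz = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
        guesses.append((R0 @ Rz, t0))     # rotate about the pose's z axis
    return guesses

poses = yaw_guesses(np.eye(3), np.zeros(3))
print(len(poses))  # 4
```

Feeding all four poses to ICP and keeping the best-scoring result removes the X/Y axis ambiguity of the bounding box.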

Operation steps:

  1. Add MinimumBoundingBox, Emit, GenerateSamplePose nodes to the node graph.

  2. Set the node parameters.

    • To set the MinimumBoundingBox node parameter:

      • type → SimpleMVBB

      • box → icon_visOn (Shows the calculated minimum cube surrounding box)

      • box_pose → icon_visOn (Shows the center point of the calculated bounding box)

    • Set Emit node parameters:

      • node_name → EmitPose

      • type → Pose

      • pose →0 0 0 0 0 0

    • Set the GenerateSamplePose node parameters:

      • type → BoxGridSample

      • min_position → 0 0 0 0 0 -3.141593

      • max_position → 0 0 0 0 0 1.5708

      • step_yaw → 4

      • pose_list → icon_visOn

  3. Connect the nodes.

    image-20231225161935711

  4. Click the RVS Run button and click the start button on the panel.

Running result

The results of the MinimumBoundingBox and GenerateSamplePose nodes in the 3D view are shown below. The left figure shows the result of the MinimumBoundingBox node; the right figure shows the result of the GenerateSamplePose node.

image-20231225161956847

Model matching

After the above four steps, we have prepared the model pointcloud (source_cloud), the target pointcloud (target_cloud), and four groups of initial values (initial_pose). Now the ICP node is used to obtain the optimal transform result_pose. The model pointcloud is spatially transformed according to this pose and then overlaps the target pointcloud.

Operation steps:

  1. Add ICP and Transform nodes to the node graph.

  2. Set node parameters

    • Set the attribute parameters of the ICP node:

      • max_correspondence_distance → 0.005

      Note: This value represents the maximum spatial distance between two matched points when the target pointcloud is matched with the model pointcloud. Beyond this distance, a match is considered invalid; too many invalid matches means the current match is poor.

      • ransac_outlier_rejection_threshold → 0.15

      Note: This value indicates that clutter farther than 0.15 m from the target pointcloud is removed.

      • max_iterations → 2000

      Note: The ICP node optimizes from each of the initial estimates separately. The maximum number of optimization iterations per initial estimate is the value of max_iterations.

      • transformation_epsilon → 0.00000001

      Note: This value represents the maximum adjustment of the translation transformation matrix between two adjacent iterations.

      • rot_epsilon → 0.00000001

      Note: This value represents the maximum adjustment of the rotation transformation matrix between two adjacent iterations. (Usually transformation_epsilon has the same value as rot_epsilon.)

      • score_threshold → -1

      • result_pose → icon_visOn

    • Set the Transform node parameters:

    • type → PointCloud

    • cloud → icon_visOnicon_color 180

  3. Connection node.

    image-20231225162105983

  4. Click the RVS Run button and click the start button on the panel to start matching. When you click the iterate button on the panel, the target pointclouds in the offline data are iterated over for model matching.

Running result

As shown in the following picture, in the 3D view the original-color pointcloud is the target pointcloud, red is the model pointcloud before matching, and blue is the model pointcloud after matching.

image-20231225162134427
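The core loop inside an ICP node can be sketched as a minimal point-to-point ICP. This hypothetical `icp` helper uses brute-force nearest neighbours and SVD (Kabsch) alignment; the real node additionally applies max_correspondence_distance, outlier rejection, and the epsilon/score convergence thresholds listed above.

```python
import numpy as np

def icp(source, target, T0=np.eye(4), iters=20):
    """Minimal point-to-point ICP sketch: nearest neighbours + SVD
    alignment, refined from an initial guess T0."""
    T = T0.copy()
    for _ in range(iters):
        src = source @ T[:3, :3].T + T[:3, 3]
        # brute-force nearest neighbour in the target for every source point
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        nn = target[np.argmin(d2, axis=1)]
        # best rigid transform src -> nn via the Kabsch / SVD method
        mu_s, mu_t = src.mean(0), nn.mean(0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (nn - mu_t))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, mu_t - R @ mu_s
        T = step @ T                      # accumulate the refinement
    return T

# a small grid "model" and a target shifted by a known offset
g = np.arange(0.0, 0.5, 0.1)
model = np.stack(np.meshgrid(g, g, g), axis=-1).reshape(-1, 3)
offset = np.array([0.03, -0.02, 0.01])
T = icp(model, model + offset)
print(np.round(T[:3, 3], 4))  # recovers the offset
```

Running the loop once per initial guess and keeping the best-scoring result_pose mirrors how the node exploits the four initial poses prepared earlier.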

At this point, you have completed all the content of this document. Thank you for your patience. We believe you now have some understanding of the RVS software and can create your own unique projects. If any problem arises during use, please give us timely feedback by emailing rvs-support@percipio.xyz!