I am doing my master's thesis on 3D reconstruction of an apple plantation, and I am using an Intel RealSense D435 camera for the reconstruction. I read that this type of camera can work outdoors, but when I use it to capture a point cloud of the trees it gives me weird results rather than what is shown in realsense-viewer. I tried adjusting the depth_units in my Python code, but nothing improved.
My project requires the point cloud in order to apply the ICP (Iterative Closest Point) algorithm and register all the point clouds saved from the camera.
Here is the Python code I use to save the .ply files, which I later feed into the registration. It gives excellent results indoors, but outdoors the results are really bad.
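For context, the registration step I have in mind is plain point-to-point ICP. A minimal sketch of that algorithm with NumPy on synthetic data (my own toy implementation with brute-force nearest neighbours, not a library call, and not tuned for real sensor clouds):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (Kabsch/SVD method); src and dst are paired (N, 3) point arrays."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    # Guard against a reflection in the SVD solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def icp(src, dst, iters=30):
    """Point-to-point ICP: repeatedly match each source point to its
    nearest destination point, then solve for the rigid transform."""
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # Brute-force nearest neighbours (fine for small demo clouds)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        matches = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
        # Compose the incremental transform into the accumulated one
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.uniform(-1, 1, size=(200, 3))
    # Known ground-truth motion: small rotation about z plus a translation
    a = 0.05
    R_gt = np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])
    t_gt = np.array([0.02, -0.01, 0.01])
    moved = cloud @ R_gt.T + t_gt
    R, t = icp(cloud, moved)
    print(np.allclose(R, R_gt, atol=1e-4), np.allclose(t, t_gt, atol=1e-4))
```

For real clouds from the camera I would swap the O(N²) matching for a KD-tree (e.g. SciPy or Open3D) and add an outlier-rejection distance threshold, but the update step is the same.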
# First import the libraries
import pyrealsense2 as rs
from pynput import keyboard

counter = 0

def on_press(key):
    global counter
    if key == keyboard.Key.space:
        # Grab a fresh frame pair for every save; capturing frames only
        # once before the listener starts would export the same (first)
        # point cloud every time
        frames = pipe.wait_for_frames()
        aligned_frames = align.process(frames)
        aligned_depth_frame = aligned_frames.get_depth_frame()
        color_frame = aligned_frames.get_color_frame()
        pc.map_to(color_frame)
        points = pc.calculate(aligned_depth_frame)
        print("Saving pcd_{}.ply".format(counter))
        points.export_to_ply("/home/agilex/thesis_ws/src/reconstruction/img/point_cloud/pcd_{}.ply".format(counter), color_frame)
        print("Done")
        counter += 1  # only count actual saves, not every key release

def on_release(key):
    if key == keyboard.Key.esc:
        # Stop listener
        return False

if __name__ == '__main__':
    pipe = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.depth)
    config.enable_stream(rs.stream.color)
    align = rs.align(rs.stream.color)
    profile = pipe.start(config)
    device = profile.get_device()
    depth_sensor = device.first_depth_sensor()
    if depth_sensor.supports(rs.option.depth_units):
        # realsense-viewer displays this value in micrometres;
        # the SDK option expects metres (755 um = 0.000755 m)
        depth_sensor.set_option(rs.option.depth_units, 0.000755)
    pc = rs.pointcloud()
    # Collect events until Esc is pressed
    with keyboard.Listener(on_press=on_press, on_release=on_release) as listener:
        listener.join()
    pipe.stop()
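To make sure I understand depth_units correctly: as far as I know it is just the size of one raw 16-bit depth tick in metres, so changing it rescales the reported distances but cannot fix noisy outdoor depth. A quick NumPy illustration of what the scale does (a synthetic raw frame, not actual SDK output):

```python
import numpy as np

# Fake 16-bit depth frame like the one behind depth_frame.get_data();
# values are raw ticks, 0 means "no depth measured"
raw_depth = np.array([[1000, 2000],
                      [4000,    0]], dtype=np.uint16)

depth_units = 0.000755  # metres per tick, as set via rs.option.depth_units

# Convert ticks to metres and mask out invalid pixels
depth_m = raw_depth.astype(np.float32) * depth_units
valid = raw_depth > 0

print(depth_m[0, 0])   # 1000 ticks at 755 um/tick -> 0.755 m
```

So with depth_units = 0.000755 a raw value of 1000 means 0.755 m; the default 0.001 would read the same pixel as 1.0 m.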
Here are the RGB and depth images: RGB image Depth image
When I export the .ply file from realsense-viewer I get this: realsense_viewer exported ply. But when I use my Python code I get this:
My objective is to capture the depth and color streams, save several point clouds of the scene from the generated point cloud, and then use ICP to build the full point cloud of the scene.
Are there any suggestions to make this project work, either with ROS or with the SDK?
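One thing I am already considering, since the D435's stereo depth degrades quickly outdoors (sunlight washes out the IR projector and error grows with distance), is clipping each cloud to a trusted depth band before running ICP. A simple NumPy pre-filter sketch (the 0.3 m and 4.0 m thresholds are my own guesses to tune, not official values):

```python
import numpy as np

def clip_cloud(points, z_min=0.3, z_max=4.0):
    """Keep only points whose depth (z, in metres) lies in a trusted band;
    points is an (N, 3) array of x, y, z coordinates."""
    z = points[:, 2]
    return points[(z > z_min) & (z < z_max)]

cloud = np.array([[0.1, 0.0, 0.1],   # too close -> dropped
                  [0.5, 0.2, 1.5],   # kept
                  [1.0, 0.4, 3.9],   # kept
                  [2.0, 0.1, 9.0]])  # too far (noisy outdoors) -> dropped
print(clip_cloud(cloud).shape)  # (2, 3)
```

Would this kind of range filtering be enough, or is there a better outdoor preset/post-processing pipeline in the SDK or the ROS wrapper?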