
I have a .las file and I performed the following operations:

  1. Convert the point cloud to an RGB image
  2. Convert the point cloud to a ground-truth matrix
  3. Crop the images and corresponding ground-truth matrices to a fixed size of 256x256
  4. Train a UNet on the image/ground-truth pairs
  5. Run inference to get a prediction matrix in which each pixel holds a label

So I have a predicted matrix, but I don't know how to map it back to the point cloud to see what the 3D predicted classification looks like. I'm using Julia.

h612
  • I suggest that you assign the image pixel colour value to each 3D point falling into the corresponding X/Y grid cell. This would transform the 2D semantic classification to 3D space, assuming you want the same classification in the Z-dimension. – HyperCube Sep 27 '22 at 20:03
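
A minimal Julia sketch of that suggestion, assuming the point coordinates are available as plain vectors and that `xmin`, `ymin`, and `cell` (all hypothetical names, not from the question) describe the grid that was used to rasterise the cloud, crop offsets included:

    # Hypothetical helper: map each point to its grid cell and read the label.
    # pred is the 256x256 prediction matrix; (xmin, ymin, cell) must match the
    # grid that produced the training images.
    function labels_for_points(x::AbstractVector, y::AbstractVector,
                               pred::AbstractMatrix,
                               xmin::Real, ymin::Real, cell::Real)
        nrows, ncols = size(pred)
        labels = Vector{Int}(undef, length(x))
        for i in eachindex(x, y)
            # 1-based row/column of the cell this point falls into
            col = clamp(floor(Int, (x[i] - xmin) / cell) + 1, 1, ncols)
            row = clamp(floor(Int, (y[i] - ymin) / cell) + 1, 1, nrows)
            # Every point in a cell inherits the cell's predicted label,
            # whatever its Z value.
            labels[i] = pred[row, col]
        end
        return labels
    end

The returned vector lines up with the original points, so a 3D scatter coloured by these labels (e.g. with Makie) shows the classification in 3D.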

2 Answers


Unfortunately for your goal, you pre-processed your 3D data by converting it to 2D and then cropping the 2D image further. You can plot the 2D data with a colour per label to show the 2D results, but that alone is unlikely to get you back to a true 3D scene you can move a viewpoint through in a 3D viewer. If you can, modify your preprocessing so that you train your network on the 3D data directly.
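
A minimal sketch of that 2D visualisation, assuming Plots.jl is installed; `pred` and the five-class range are placeholder assumptions standing in for the real prediction matrix:

    using Plots

    pred = rand(0:4, 256, 256)       # stand-in for the UNet's prediction matrix
    heatmap(pred;
            c = :tab10,              # one distinct colour per class label
            aspect_ratio = :equal,
            title = "Predicted labels (2D)")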

Bill

I've used binning to project from 3D to 2D; the same binning can be used to map the 2D predictions back to 3D.
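
A sketch of that round trip under the same hypothetical grid parameters as above, with the twist that bin membership is recorded during the 3D-to-2D projection, so the 2D prediction can be pushed back onto exactly the points that formed each pixel:

    # Record which point indices fall into each cell while projecting 3D -> 2D.
    function bin_point_indices(x, y, xmin, ymin, cell, nrows, ncols)
        bins = [Int[] for _ in 1:nrows, _ in 1:ncols]   # one index list per cell
        for i in eachindex(x, y)
            col = clamp(floor(Int, (x[i] - xmin) / cell) + 1, 1, ncols)
            row = clamp(floor(Int, (y[i] - ymin) / cell) + 1, 1, nrows)
            push!(bins[row, col], i)
        end
        return bins
    end

    # Reuse the recorded bins to assign each point its cell's predicted label.
    function backproject_labels(bins, pred)
        labels = Vector{Int}(undef, sum(length, bins))
        for idx in CartesianIndices(pred), i in bins[idx]
            labels[i] = pred[idx]
        end
        return labels
    end

Storing the indices at binning time avoids recomputing cells at inference time and guarantees the forward and backward mappings agree, including the crop offsets from step 3.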

h612