
3D LIDAR Point-cloud Segmentation

One of the big challenges in 3D LIDAR point-cloud segmentation is detailed ground extraction, especially in highly vegetated areas. Some applications require extracting the ground points from the LIDAR data while preserving as much detail as possible; however, the details and the noise are often coupled, and it is difficult to remove the noise while preserving the ground details. Imagine a LIDAR point cloud over a creek covered by multilayer canopies, including ground flora, from which you would like to extract the creek while preserving the ground details as much as you can. This would be a very labor-intensive task for a human, so a better choice might be to develop an automatic process so that a computer can complete the task for us. Even for a computer, this can be a very demanding task because the number of points in the area is extremely high.


In 2004, my former adviser, Dr. Kenneth C. Slatton, and I developed a multiscale, information-theoretic algorithm for ground segmentation [2]. The method works well in real-world applications and has been used in several publications. The MATLAB toolbox is available here. The brief manual can be found here.

I would like to thank my colleagues at the National Center for Airborne Laser Mapping (NCALM), the Adaptive Signal Processing Laboratory (ASPL), and the Geosensing group at the University of Florida, who have used the algorithm in their work and given tons of useful suggestions for improving it; Dr. Jhon Caceres for the very nice GUI; and Dr. Sowmya Selvarajan for the first-ever manual for this toolbox. Last but not least, I would like to thank Dr. Kenneth Clint Slatton for wonderful ideas and guidance; we still have an unpublished journal paper to finish [1].

[1] K. Kampa and K. C. Slatton, "Information-Theoretic Hierarchical Segmentation of Airborne Laser Swath Mapping Data," IEEE Transactions on Geoscience and Remote Sensing, (in preparation).

[2] K. Kampa and K. C. Slatton, "An Adaptive Multiscale Filter for Segmenting Vegetation in ALSM Data," Proc. IEEE International Geoscience and Remote Sensing Symposium (IGARSS), vol. 6, Sep. 2004, pp. 3837-3840.

Brief slides can be found here.

  1. stryker
    February 10, 2011 at 12:10 am

    Yo Bot,

    You mention you use a Parzen window and also that you use the EM algorithm for a mixture of Gaussians. This confuses me because my understanding is that Parzen is a discrete density whereas EM for a GMM provides a closed-form density. Could you explain better where/why you used Parzen and where/why you used EM GMM? Also, what kernel did you select for Parzen and how did you determine your bandwidth? Maybe I'm confused here, but it seems that you would typically use one or the other.

    thanks bro , keep on shredding….

  2. admin
    February 15, 2011 at 2:06 am

    Yo Mike, I feel so honored that you still remember this work of mine after all this time...you are amazing. In fact, there are 2 parts. The first part roughly separates the whole area into 2 types: 1) lightly vegetated and 2) highly vegetated. This first part is supervised learning, and relies on non-parametric pdf estimation (i.e., Parzen windowing). It is convenient to use a Gaussian kernel for the pdf estimation, and to pick a good kernel size I use cross validation to estimate that "magic" number ^_^. The second part uses a GMM to find the distributions of ground and non-ground objects in order to determine optimal thresholds for vegetation filtering. These two parts are two autonomous processes. Hope this helps, Mike my bro! How is your super Gaussian process or Bayesian non-parametrics going?
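    The two parts described above can be sketched as follows. This is only an illustration of the general techniques (Parzen windowing with a Gaussian kernel, leave-one-out cross validation for the bandwidth, and a two-component GMM fitted by EM), not the actual toolbox code; the synthetic "return height" data, the candidate bandwidths, and the simple mid-point threshold are all my own assumptions here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # --- Part 1: Parzen-window (KDE) density with a Gaussian kernel ---
    def parzen(x, samples, h):
        """Non-parametric density estimate: average of Gaussian bumps of width h."""
        d = (np.atleast_1d(x)[:, None] - samples[None, :]) / h
        return np.exp(-0.5 * d**2).sum(axis=1) / (samples.size * h * np.sqrt(2 * np.pi))

    def loo_score(samples, h):
        """Leave-one-out log-likelihood: the cross-validation criterion for h."""
        return sum(np.log(parzen(s, np.delete(samples, i), h)[0] + 1e-300)
                   for i, s in enumerate(samples))

    heights = rng.normal(2.0, 0.5, 150)            # synthetic return-height feature
    best_h = max([0.05, 0.1, 0.2, 0.4, 0.8], key=lambda h: loo_score(heights, h))

    # --- Part 2: 1-D two-component GMM fitted by EM, threshold between the modes ---
    def em_gmm2(x, iters=200):
        """Plain EM for a two-component 1-D Gaussian mixture."""
        mu = np.array([x.min(), x.max()], float)   # spread the initial means apart
        var = np.array([x.var(), x.var()])
        w = np.array([0.5, 0.5])
        for _ in range(iters):
            # E-step: responsibilities of each component for each point
            pdf = w * np.exp(-0.5 * (x[:, None] - mu)**2 / var) / np.sqrt(2 * np.pi * var)
            r = pdf / pdf.sum(axis=1, keepdims=True)
            # M-step: re-estimate weights, means, variances
            n = r.sum(axis=0)
            w, mu = n / x.size, (r * x[:, None]).sum(axis=0) / n
            var = (r * (x[:, None] - mu)**2).sum(axis=0) / n
        return w, mu, var

    ground = rng.normal(0.0, 0.3, 300)             # synthetic ground returns
    canopy = rng.normal(3.0, 0.8, 300)             # synthetic vegetation returns
    w, mu, var = em_gmm2(np.concatenate([ground, canopy]))
    threshold = mu.mean()                          # simple split between the two means
    ```

    In the real filter the threshold would be chosen from the fitted mixture rather than as a plain mid-point, but the structure (non-parametric pdf for the supervised split, parametric GMM for the thresholds) is the same.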

  3. mike
    February 15, 2011 at 10:09 am

    Thanks Bot for the great explanation! I'm looking into Gaussian processes, but they are still not sure what their final goals will be, so in the meantime I'm working on 3D segmentation methods for terrestrial point clouds. The goal here is to compute volume changes along a stream bank for differing soil types, from the lowest level at the stream bottom upwards to the top soil. This segmentation is based on RGB values co-aligned to the lidar point cloud.

  4. admin
    February 15, 2011 at 11:58 am

    Mike, your work is extremely interesting. It's nice to know that one can use both RGB and terrestrial lidar to do soil-type classification, which I had thought would only be possible with direct soil measurement. I also believe that the lidar + color-space combination you are using is a very good way to go for point-cloud segmentation. I'm not sure whether the RGB color space provides a good discriminative feature in this situation. I have seen people in image segmentation convert RGB to the L*a*b color space, since its Euclidean distance between a pair of colors is similar to the difference perceived by humans. I'm not sure whether this will help with soil type, though. You rock, bro!
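    For reference, the RGB-to-L*a*b* conversion mentioned above can be sketched like this. This is a standard sRGB (D65 white point) conversion written from the published formulas, not code from either project; the two "soil" colors at the bottom are made-up values just to show the Delta E (Euclidean) distance in L*a*b* space.

    ```python
    import numpy as np

    def srgb_to_lab(rgb):
        """Convert sRGB values in [0, 1] to CIE L*a*b* (D65 white point)."""
        rgb = np.asarray(rgb, float)
        # undo the sRGB gamma (companding)
        lin = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
        # linear RGB -> XYZ using the standard sRGB/D65 matrix
        M = np.array([[0.4124564, 0.3575761, 0.1804375],
                      [0.2126729, 0.7151522, 0.0721750],
                      [0.0193339, 0.1191920, 0.9503041]])
        xyz = lin @ M.T
        # normalise by the D65 reference white
        xyz /= np.array([0.95047, 1.0, 1.08883])
        # piecewise cube-root nonlinearity from the CIELAB definition
        f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
        L = 116 * f[..., 1] - 16
        a = 500 * (f[..., 0] - f[..., 1])
        b = 200 * (f[..., 1] - f[..., 2])
        return np.stack([L, a, b], axis=-1)

    # Euclidean distance in L*a*b* (Delta E 76) approximates perceived difference
    soil_dark = srgb_to_lab([0.35, 0.25, 0.15])    # hypothetical dark soil color
    soil_light = srgb_to_lab([0.55, 0.45, 0.30])   # hypothetical light soil color
    delta_e = np.linalg.norm(soil_dark - soil_light)
    ```

    The point of the conversion is exactly the property mentioned above: in L*a*b*, plain Euclidean distance between two colors is a reasonable proxy for how different they look, which is not true in raw RGB.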

