Tuesday, 16 December 2014

eCognition Tutorial: Finding trees and buildings from LiDAR with limited information

I worked a lot with LiDAR during my MSc, but in my current job I work mostly with optical images, and I kind of miss LiDAR. So when a friend came to me with a LiDAR-related problem, I was very happy and decided to show off my eCognition skills :-).

LiDAR data typically comes with many attributes: number of returns, intensity, first-pulse elevation, last-pulse elevation, and so on. Some data providers even deliver a preliminary separation into ground and non-ground points. In this case, however, all we have is first-pulse (FP) and last-pulse (LP) data, and the desired output is a discrimination between trees and buildings. The key observation: over vegetation the first pulse returns from the canopy top while the last pulse often penetrates close to the ground, so FP − LP is large; over solid roofs both pulses return from the same surface, so FP − LP is near zero.

The data comes from the eCognition community. If you have some time to kill, head over there, grab the data, roll up your sleeves, and let's extract information from LiDAR with eCognition, shall we?

My workflow:
  1. Create a difference image (FP − LP) and classify trees
    1. Assign pixels with a difference > 2 m to class high using multi-threshold segmentation
    2. Assign small objects (< 10 pixels) surrounded by high to high
    3. Perform opening and closing with a ball-shaped structuring element (SE). The opening step is required to remove the footprint effect of LiDAR along building edges.
    4. Assign objects with area > 20 pixels to class tree
  2. Find buildings in the last-pulse image
    1. Perform chessboard segmentation with size 1 on unclassified objects
    2. Perform a multi-resolution segmentation (MRS)
    3. Create a "Mean Difference to unclassified" feature within a neighborhood of 20 pixels; a customized feature was built for this
    4. Assign unclassified objects with Mean Difference to unclassified (20) > 4 m to class building
    5. Merge building objects and reassign small objects (area < 100 pixels) classified as building to tree
    6. Assign unclassified objects surrounded by buildings to building
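The rule set itself lives in eCognition's CNL, but the raster logic of step 1 can be sketched in plain Python with NumPy/SciPy. This is a loose approximation, not the actual rule set: the SE radius and the inputs `fp` and `lp` (co-registered first- and last-pulse elevation rasters) are my assumptions, and step 1.2 (reassigning small objects surrounded by high) is omitted.

```python
import numpy as np
from scipy import ndimage

def classify_trees(fp, lp, height_thresh=2.0, min_area=20, se_radius=2):
    """Rough raster sketch of step 1: threshold the FP - LP difference,
    clean up with opening/closing, and keep objects above a minimum area."""
    diff = fp - lp                    # first-pulse minus last-pulse elevation
    high = diff > height_thresh      # candidate tree pixels (> 2 m)

    # Ball-shaped (disk) structuring element; radius is an assumption
    y, x = np.ogrid[-se_radius:se_radius + 1, -se_radius:se_radius + 1]
    se = x**2 + y**2 <= se_radius**2

    # Opening removes the LiDAR footprint effect along building edges,
    # closing fills small gaps inside canopies
    high = ndimage.binary_opening(high, structure=se)
    high = ndimage.binary_closing(high, structure=se)

    # Keep only connected components larger than min_area pixels
    labels, n = ndimage.label(high)
    sizes = ndimage.sum(high, labels, index=np.arange(1, n + 1))
    keep = np.zeros(n + 1, dtype=bool)
    keep[1:] = sizes >= min_area
    return keep[labels]              # boolean tree mask
```

In eCognition the same steps run on image objects rather than raw pixels, but the thresholds (2 m, 20 pixels) map over directly.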
Developing the rule set took 20 minutes of my lunch break. The process runs in 11 seconds for a 360 m × 360 m area, and the result is reasonably good. With a little more effort it could obviously be improved further. Nevertheless, I gave myself a pat on the back.
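For illustration, the customized "Mean Difference to unclassified" feature from step 2.3 could be approximated in plain raster terms as below. This is my own loose reconstruction, not eCognition's implementation: it assumes a last-pulse raster `lp`, a binary mask of candidate objects, and approximates the 20-pixel neighborhood with iterated dilation.

```python
import numpy as np
from scipy import ndimage

def mean_diff_to_unclassified(lp, candidates, radius=20):
    """For each candidate object, mean last-pulse height minus the mean
    height of nearby unclassified (assumed ground) pixels."""
    labels, n = ndimage.label(candidates)
    unclassified = ~candidates
    diffs = np.full(n + 1, np.nan)
    for i in range(1, n + 1):
        obj = labels == i
        # unclassified pixels within `radius` steps of the object
        # (iterated cross dilation gives a diamond neighborhood)
        near = ndimage.binary_dilation(obj, iterations=radius) & unclassified
        if near.any():
            diffs[i] = lp[obj].mean() - lp[near].mean()
    return labels, diffs  # objects with diffs > 4 m would become buildings
```

A roof standing well above the surrounding ground in the last-pulse image gets a large positive value, which is exactly what the > 4 m rule in step 2.4 exploits.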

When I have more time in the near future, I will compare this result against a different methodology based on LAStools.

Rule-set in action

First Pulse
Difference image
Classification ( Yellow: Buildings, Green: Trees)

Last Pulse

Well, this morning my friend called and told me the building classification is great, but asked: can we get straight lines for the building edges rather than zig-zag lines? Let's see what she is talking about. Yes, building edges are straight most of the time, but due to the segmentation, our classified building edges are zig-zag. The problem can be tackled with the native vector-handling capability of eCognition; the relevant algorithms were introduced in eCognition 9.0, released a couple of months ago.

The zig-zag edge problem of buildings
Approach 1

  • Convert the building objects into a shapefile
  • Use the building orthogonalization algorithm (chessboard: 7 pixels, merge threshold: 0.5)

As you can see, the result is far from perfect.

Approach 2
  • Apply morphological closing (SE: 7 × 7 box) to the building objects
  • Apply morphological opening (SE: 7 × 7 box) to the building objects
  • Convert the building objects into a shapefile
  • Use the building orthogonalization algorithm (chessboard: 7 pixels, merge threshold: 0.5)

The result is much better than with the first approach. Note that the close-then-open sequence must be followed: loosely connected building objects may break into separate objects if an open-then-close sequence is used instead.

Boundary orthogonalization without mathematical morphology (Yellow: building objects, Red: new boundary)
Boundary orthogonalization with mathematical morphology (Yellow: building objects, Red: new boundary)