QGIS, remote sensing, Matlab, ENVI, Python, eCognition

Happy New Year 2016 (2015-12-31)<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="MsoNormal">
<span lang="EN-US">The year
2015 is ending shortly. <o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US">I wish
every reader of my blog an exciting new year ahead, full of happiness
and prosperity.<o:p></o:p></span><br />
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US">From my
blog statistics, most of my visitors come from the US and Europe. There is
negligible traffic from developing countries, which can be correlated with
the fact that the exploitation of geo-spatial data in developing countries is still
in its infancy. On average, I get around 600 visits on my blog, which is not a lot, but it is good to see that someone actually bothers to read the posts I write. When I get an email from a visitor mentioning that a post helped them in their professional life, I am over the moon that day :). <o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3VbZKcbsUjqNrbrBS5pNinrl6NsGaxKeheHslVz8vSskHmlnnsKbO3BYJcdx2hcYw2vZ_ZToUqARcqP3jz1565o2L7VuibmghSK1p6vdzZCvC49tGfEpNNZ33_8L15sZ50xO4lFATeqE/s1600/Dec.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="311" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3VbZKcbsUjqNrbrBS5pNinrl6NsGaxKeheHslVz8vSskHmlnnsKbO3BYJcdx2hcYw2vZ_ZToUqARcqP3jz1565o2L7VuibmghSK1p6vdzZCvC49tGfEpNNZ33_8L15sZ50xO4lFATeqE/s640/Dec.PNG" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgKVNdYC3-Z2pDaKo792ktmco7WCYdffwgEvvEBpV2TLX0vtnHvuTNe_dS6aKU7eRgbSQVHvY8pu-roxheFZGfoVo7wreL-ID331NQUaD6W6z_lOA64tpNvtDbx77eETcpMqcm8t3NICdM/s1600/nov.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="316" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgKVNdYC3-Z2pDaKo792ktmco7WCYdffwgEvvEBpV2TLX0vtnHvuTNe_dS6aKU7eRgbSQVHvY8pu-roxheFZGfoVo7wreL-ID331NQUaD6W6z_lOA64tpNvtDbx77eETcpMqcm8t3NICdM/s640/nov.PNG" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi9n_fb2pB3LEGnWpkf-NwkE4RuPnBrNyokGAL2dAN7OveJ12Fi_2kbrBKVXWiXWrPC2I6ImWPxpcYuPITF_jp7Q4v6MF7TeSu51txHJ6J5-iDX7gLuoiR_AUPtLE2BcDSPmGd60lvSlyM/s1600/oct.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="288" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi9n_fb2pB3LEGnWpkf-NwkE4RuPnBrNyokGAL2dAN7OveJ12Fi_2kbrBKVXWiXWrPC2I6ImWPxpcYuPITF_jp7Q4v6MF7TeSu51txHJ6J5-iDX7gLuoiR_AUPtLE2BcDSPmGd60lvSlyM/s640/oct.PNG" width="640" /></a></div>
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US">These are
the all-time top 10 posts on my blog, the ones that seem to attract the most
readers. Some of the posts I wrote in 2010-11 are still very popular
among visitors, especially those related to ArcGIS. Posts related to GIS get more
hits than those related to remote sensing. I tend to write fewer GIS-related
posts as I don&#8217;t work with GIS day in and day out; I will write more about remote sensing in
the coming days. The top ten posts break down as MATLAB: 5, ArcGIS: 4 and eCognition: 1. I could not get
post-specific statistics for 2014 from Blogger, but lately my eCognition-related
posts have been liked by many visitors. I have also made a few friends through my
blog, which is awesome.</span></div>
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal">
</div>
<ol style="text-align: left;">
<li><a href="http://shreshai.blogspot.com/2010/12/kml-creation-using-matlab.html" target="_blank">KML creation using MATLAB</a></li>
<li><a href="http://shreshai.blogspot.com/2011/12/opening-multispectral-or-hyperspectral.html" target="_blank">Opening multispectral or hyperspectral ENVI files in MATLAB</a></li>
<li><a href="http://shreshai.blogspot.com/2011/01/converting-raster-dataset-to-xyz-in.html" target="_blank">Converting raster dataset to XYZ in ARCGIS !!</a></li>
<li><a href="http://shreshai.blogspot.com/2011/07/utilizing-numpy-to-perform-complex-gis.html" target="_blank">Utilizing Numpy to perform complex GIS operation in ARCGIS 10</a></li>
<li><a href="http://shreshai.blogspot.com/2011/01/data-driven-map-book-in-arcgis-10.html" target="_blank">Data Driven Map Book in ArcGIS 10</a></li>
<li><a href="http://shreshai.blogspot.com/2011/01/arcpy-python-scripting-in-arcgis-10.html" target="_blank">ArcPy: Python scripting in ArcGIS 10</a></li>
<li><a href="http://shreshai.blogspot.com/2011/02/matlab-gui-for-3d-point-generation-from.html" target="_blank">MATLAB GUI for 3D point generation from SR 4000 images</a></li>
<li><a href="http://shreshai.blogspot.com/2015/02/matlab-tutorial-dividing-image-into.html" target="_blank">MATLAB tutorial: Dividing image into blocks and applying a function</a></li>
<li><a href="http://shreshai.blogspot.com/2015/01/matlab-tutorial-finding-center-pivot.html" target="_blank">MATLAB Tutorial: Finding center pivot irrigation fields in a high resolution image</a></li>
<li><a href="http://shreshai.blogspot.com/2014/12/ecognition-tutorial-finding-vegetation.html" target="_blank">eCognition Tutorial: Finding trees and buildings from LiDAR with limited information</a></li>
</ol>
<br />
<div class="MsoNormal">
</div>
<div class="MsoNormal">
<br /></div>
</div>
eCognition Tutorial: How to find segments with a lower mean value than their neighbouring segments, with an additional class condition? (2015-12-14)<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
<div class="MsoNormal" style="line-height: normal; margin: 0in 0in 0pt;">
<span lang="EN" style="font-family: "times new roman" , "serif"; font-size: 12pt; mso-ansi-language: EN; mso-fareast-font-family: "Times New Roman"; mso-fareast-language: EN-GB;"><em><strong>I have segmented data, classified into two classes: 1, 2. I would like
to find segments into the 1 class which are adjacent to the 2 class and have
lower mean value. The one condition should be: Existence of 2 > 0, but how
to combine it with information about lower mean value of segment?<o:p></o:p></strong></em></span></div>
<br />
<span lang="EN" style="font-family: "times new roman" , "serif"; font-size: 12pt; line-height: 107%; mso-ansi-language: EN;">This is a problem
posted in the eCognition community by one of the user. One of the core strength
of OBIA is to incorporate contextual information and class related information
in the process which is difficult with pixel-based approaches. Here the class
of interest has to satisfy two contextual class related information:<o:p></o:p></span><br />
<br />
<div class="MsoListParagraphCxSpFirst" style="margin: 0in 0in 0pt 0.75in; mso-add-space: auto; mso-list: l0 level1 lfo1; text-indent: -0.25in;">
<!--[if !supportLists]--><span lang="EN" style="font-family: "times new roman" , "serif"; font-size: 12pt; line-height: 107%; mso-ansi-language: EN; mso-fareast-font-family: "Times New Roman";"><span style="mso-list: Ignore;">1)<span style="font-size-adjust: none; font-stretch: normal; font: 7pt/normal "Times New Roman";">
</span></span></span><!--[endif]--><span lang="EN" style="font-family: &quot;times new roman&quot; , &quot;serif&quot;; font-size: 12pt; line-height: 107%; mso-ansi-language: EN;">It
must border class 2<o:p></o:p></span></div>
<br />
<div class="MsoListParagraphCxSpLast" style="margin: 0in 0in 8pt 0.75in; mso-add-space: auto; mso-list: l0 level1 lfo1; text-indent: -0.25in;">
<!--[if !supportLists]--><span lang="EN" style="font-family: "times new roman" , "serif"; font-size: 12pt; line-height: 107%; mso-ansi-language: EN; mso-fareast-font-family: "Times New Roman";"><span style="mso-list: Ignore;">2)<span style="font-size-adjust: none; font-stretch: normal; font: 7pt/normal "Times New Roman";">
</span></span></span><!--[endif]--><span lang="EN" style="font-family: &quot;times new roman&quot; , &quot;serif&quot;; font-size: 12pt; line-height: 107%; mso-ansi-language: EN;">It
must belong to class 1 and have a lower mean value<o:p></o:p></span></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgebxaPcQtfMzDOx3Mz2aKCtIRikWp5U0fbMmXMsSreTRC2zou4ZP91hD0VvDa7VoBTNkoJ-AR7auc5sHX4iPelfF6fcH0vdYwkDT-NnBeZqkPlI_kscdaEeiUI76tYIq5Xka34E_gdSpA/s1600/1.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="332" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgebxaPcQtfMzDOx3Mz2aKCtIRikWp5U0fbMmXMsSreTRC2zou4ZP91hD0VvDa7VoBTNkoJ-AR7auc5sHX4iPelfF6fcH0vdYwkDT-NnBeZqkPlI_kscdaEeiUI76tYIq5Xka34E_gdSpA/s400/1.PNG" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The Problem</td></tr>
</tbody></table>
<br />
<div class="MsoNormal" style="margin: 0in 0in 8pt;">
<span lang="EN" style="font-family: &quot;times new roman&quot; , &quot;serif&quot;; font-size: 12pt; line-height: 107%; mso-ansi-language: EN;">For this problem,
we have to create a class-related feature ( Class-Related features &gt;<span style="mso-spacerun: yes;"> </span>Relation to neighbor objects &gt; Mean diff.
to<span style="mso-spacerun: yes;"> </span>) that is based on a layer of interest
and a class. For demonstration purposes I will be using the NIR layer, so the
feature that is created is &#8220;Mean diff to nir, class 2&#8221;. In the following figure,
the feature is shown on the right side. Objects that do not border class 2 have
an undefined value (red); objects that border class 2 and have a lower mean
than their class 2 neighbours have smaller (darker) values; and objects that
border class 2 and have a higher mean have larger (brighter) values.<o:p></o:p></span></div>
<br />
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjkXg9mc3fAg2ErQxG1FSH_hyphenhyphen8eP0BjEcPylR_9i8nOYE5aHBTzYGJbeoptz8mEVtLlJBIlOf5eOAnRPBsWM4ItbmSW7FfwUpcdwMk0yB9iHwO0tzy_jUN1s-PNphx_bzAA5SWIrHsFa6g/s1600/2.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="307" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjkXg9mc3fAg2ErQxG1FSH_hyphenhyphen8eP0BjEcPylR_9i8nOYE5aHBTzYGJbeoptz8mEVtLlJBIlOf5eOAnRPBsWM4ItbmSW7FfwUpcdwMk0yB9iHwO0tzy_jUN1s-PNphx_bzAA5SWIrHsFa6g/s640/2.PNG" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The custom feature </td></tr>
</tbody></table>
<br />
<div class="MsoNormal" style="margin: 0in 0in 8pt;">
<span lang="EN" style="font-family: "times new roman" , "serif"; font-size: 12pt; line-height: 107%; mso-ansi-language: EN;">For better illustration
I have attached some figures that also show values for “Mean diff to nir, class
2” feature.<o:p></o:p></span></div>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh8HF2rDPS6SpaG5blupCSFaXiAEfD4Sg5rOf-AnDwjMoNkQds0W0na3u8F0oCCCpgym8laGZ1SJEd5evBJjhOWIVncxi0DLCnye7SrYtdnHS9RUDlkTi7xClPlxLRo6ZDRZAEBtvkzIf8/s1600/3C.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="215" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh8HF2rDPS6SpaG5blupCSFaXiAEfD4Sg5rOf-AnDwjMoNkQds0W0na3u8F0oCCCpgym8laGZ1SJEd5evBJjhOWIVncxi0DLCnye7SrYtdnHS9RUDlkTi7xClPlxLRo6ZDRZAEBtvkzIf8/s400/3C.png" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Class 2 object</td></tr>
</tbody></table>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh3NELzXXOKEH9bd3n3QeeTs75Q1u1HjmyUKdVLszjUqo0haZ8YoCPW62zFelqY8o_CH3VH4Dg8A34-7S9_J55-U5tEVGEHfNPFPz0qq_8d7bTk3LAsIa23zqnuQRtsjPsZoL96bP-lg8g/s1600/4C.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="215" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh3NELzXXOKEH9bd3n3QeeTs75Q1u1HjmyUKdVLszjUqo0haZ8YoCPW62zFelqY8o_CH3VH4Dg8A34-7S9_J55-U5tEVGEHfNPFPz0qq_8d7bTk3LAsIa23zqnuQRtsjPsZoL96bP-lg8g/s400/4C.PNG" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Class 1 object not bordering class 2</td></tr>
</tbody></table>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgeI5JJYMoBePBt4jPVjkT-clBhNc-FFnjzceyBlOlq49iMXcG6I0bxYnE7dGNEZTOfTi2f9Zq7F3Oed5T0buVoyiJdxVzskt109Y1S0fAx-tKsaHC9c7hwZTIlnxFrCeTEn93KrSdt-7U/s1600/5C.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="215" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgeI5JJYMoBePBt4jPVjkT-clBhNc-FFnjzceyBlOlq49iMXcG6I0bxYnE7dGNEZTOfTi2f9Zq7F3Oed5T0buVoyiJdxVzskt109Y1S0fAx-tKsaHC9c7hwZTIlnxFrCeTEn93KrSdt-7U/s400/5C.PNG" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Class 2 objects bordering Class 1 </td></tr>
</tbody></table>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEio7HDzLnLJnlZrYDHl1LDO-5rL7uZINzwSPaBS7o3gxNrRJml6vZbCBwcWaPBqD6hcAxmAhlvVI1om3-GW-H-7-YJ-7SHN-XzfN4US3JSXa01w0cewk2LnbDgvOoetksDgqL4wHXClBY4/s1600/6C.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="217" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEio7HDzLnLJnlZrYDHl1LDO-5rL7uZINzwSPaBS7o3gxNrRJml6vZbCBwcWaPBqD6hcAxmAhlvVI1om3-GW-H-7-YJ-7SHN-XzfN4US3JSXa01w0cewk2LnbDgvOoetksDgqL4wHXClBY4/s400/6C.PNG" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Unclassified object bordering Class 1</td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: left;">
<span lang="EN" style="font-family: &quot;times new roman&quot; , &quot;serif&quot;; font-size: 12pt; line-height: 107%; mso-ansi-language: EN;">Afterwards, the
extraction of the objects of interest is straightforward: we use the assign class
algorithm for that purpose. </span></div>
<div class="separator" style="clear: both; text-align: left;">
<span lang="EN" style="font-family: "times new roman" , "serif"; font-size: 12pt; line-height: 107%; mso-ansi-language: EN;"></span> </div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiiSx0Xjs4FG59ny-83QT0rn0QKf7xWW6Ac6GWJGfexzJho_E9vxdJZs-nCurpwbYIeOs04wIauGjEHkbrkM_1ARxqKd-crqZAqP-DCfY4MRA8TeLFYNdoF6dYX4TPUKZnVn7PnM9NiY2w/s1600/7.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="262" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiiSx0Xjs4FG59ny-83QT0rn0QKf7xWW6Ac6GWJGfexzJho_E9vxdJZs-nCurpwbYIeOs04wIauGjEHkbrkM_1ARxqKd-crqZAqP-DCfY4MRA8TeLFYNdoF6dYX4TPUKZnVn7PnM9NiY2w/s400/7.PNG" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Assign class</td></tr>
</tbody></table>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZAOKm08WvSLFJxgAPM8oLRAOaULmSynDfCnRUhW1dl8ajGx-YUuEgSdAuYpJHv2robY8r5S-o4jQy2-emF7Ph0xefKqCz267fjOMmEDBC0lN_nWXKnp-eQrgiOw_ZaO4rYbYaVXmBqUA/s1600/8.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="284" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZAOKm08WvSLFJxgAPM8oLRAOaULmSynDfCnRUhW1dl8ajGx-YUuEgSdAuYpJHv2robY8r5S-o4jQy2-emF7Ph0xefKqCz267fjOMmEDBC0lN_nWXKnp-eQrgiOw_ZaO4rYbYaVXmBqUA/s640/8.PNG" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The solution. Pink color represents the objects we are attempting to extract.</td></tr>
</tbody></table>
<br />
<div class="separator" style="clear: both; text-align: left;">
<span lang="EN" style="font-family: "times new roman" , "serif"; font-size: 12pt; line-height: 107%; mso-ansi-language: EN;"><o:p></o:p></span> </div>
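Outside eCognition, the same two-condition selection can be sketched in plain Python over a toy segment graph. This is only an illustration under stated assumptions: the segment ids, classes, means and adjacency lists are hypothetical inputs, and eCognition's "Mean diff. to" is border-length weighted, whereas this sketch uses a simple unweighted neighbour mean.

```python
def select_segments(seg_class, seg_mean, adjacency):
    """Return ids of class-1 segments that (1) border at least one class-2
    segment and (2) have a lower mean than their class-2 neighbours, i.e.
    an unweighted 'Mean diff to class 2' below zero."""
    selected = []
    for sid, cls in seg_class.items():
        if cls != 1:
            continue  # the segment itself must belong to class 1
        class2_neighbours = [n for n in adjacency[sid] if seg_class[n] == 2]
        if not class2_neighbours:
            continue  # condition 1: it must border class 2
        neighbour_mean = sum(seg_mean[n] for n in class2_neighbours) / len(class2_neighbours)
        if seg_mean[sid] - neighbour_mean < 0:
            selected.append(sid)  # condition 2: lower mean value
    return selected
```

In eCognition the same selection is done declaratively with the "Mean diff to nir, class 2 &lt; 0" threshold inside the assign class algorithm shown below; the sketch just makes the underlying logic explicit.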
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
</div>
Nepal, Mr. Modi and Petrol stations (2015-11-06)<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="MsoNormal" style="text-align: justify;">
I know that I haven&#8217;t posted for
a very long time now. Please accept my sincere apology. There have been
lots of things going on in my life. In February 2015 I moved back to my home country,
Nepal, after almost 10 years in Europe. Unfortunately, a big earthquake hit Nepal on
25<sup>th</sup> April 2015 that severely affected many people in Nepal.
Around 10,000 people died, and many
people lost their relatives and belongings. Everybody suffered the wrath of the mighty earthquake. </div>
<div class="MsoNormal" style="text-align: justify;">
<br /></div>
<div class="MsoNormal" style="text-align: justify;">
Things started to become normal after
several months, and then suddenly Mr. Modi, the prime minister of India, Modi-fied the
lives of the Nepalese people, who were attempting to enjoy their normal day-to-day
lives and were enthralled by the newly formulated constitution. Nepal was slapped with an unofficial blockade
by India, limiting the supply of essential commodities and fuel in Nepal. Being a
landlocked country, Nepal relies solely on India for its fuel needs. Being a sovereign
country, Nepal has every right to decide what is right for its people and
to promulgate a constitution that was ratified by over 90% of constituent assembly
members. India did not like some parts of the constitution and started the so-called
&#8220;unofficial&#8221; blockade at the beginning of September 2015. The interference of India in the internal affairs
of Nepal is totally wrong from any possible angle, but surprisingly India thinks
it has every right to bully Nepal. Now, due to the blockade, the life of a
common Nepalese person is very miserable. There is a scarcity of cooking gas and no
fuel (petrol and diesel) to run vehicles. Miles-long queues of vehicles are a
normal scene at every nook of the capital, Kathmandu. If someone has 20 liters
of petrol or diesel or 2-3 cylinders of cooking gas, then he/she is part of the minority of people in Kathmandu who happily celebrated the ongoing festive season
of Dashain and Tihar. For everyone else, days are spent counting the number of
fuel-carrying vehicles entering Nepal and queuing at a fuel station in the scorching
heat. Things are so bad in Kathmandu and in the majority of places in Nepal that if
you would like to irritate someone, just say &#8216;Did I tell you something? You look like
Mr. Modi with that beard&#8217;. LOL.</div>
<div class="MsoNormal" style="text-align: justify;">
<br /></div>
<div class="MsoNormal" style="text-align: justify;">
Rambling aside, I was looking for
a map of fuel stations in Kathmandu and, no surprise, I could not find one. Not
even on the official website of <a href="http://www.nepaloil.com.np/" target="_blank">Nepal Oil Corporation (NOC)</a>, the sole body that
is responsible for the import and distribution of fuel within Nepal. I did not even
find a simple table listing them. Shame, NOC. So I decided to create one. For the map,
I used <a href="https://www.openstreetmap.org/" target="_blank">OpenStreetMap (OSM)</a> data. The fuel station list may be incomplete, as the
data was gathered through a volunteered geographic information (VGI) approach. For querying the data, the <a href="http://wiki.openstreetmap.org/wiki/Overpass_API" target="_blank">Overpass API</a>
was used. The Overpass API provides a query language that is used to grab data of
interest from OSM. For displaying the map, the power of a GitHub Gist was
used. If you would like the KML file of the map, click <a href="https://www.dropbox.com/s/t0a6ruf6z31w9dq/FUEL_KTM.kml?dl=0" target="_blank">here</a>. I will write a detailed post about the Overpass API sometime in the future, when I
am tired of scolding Mr. Modi for his wrong deeds. For now it is everyone&#8217;s
favorite thing to do here in Nepal. Even ladies love it more than watching Indian drama serials.</div>
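For readers curious what such an Overpass query looks like, here is a minimal Python sketch. The public endpoint URL, the bounding box roughly covering the Kathmandu valley, and the function names are my own illustrative assumptions, not the exact query behind the map above:

```python
import json
import urllib.parse
import urllib.request

# Overpass QL: every node and way tagged amenity=fuel inside a bounding
# box (south, west, north, east) roughly covering the Kathmandu valley.
OVERPASS_URL = "https://overpass-api.de/api/interpreter"
FUEL_QUERY = """
[out:json][timeout:25];
(
  node["amenity"="fuel"](27.60,85.20,27.80,85.55);
  way["amenity"="fuel"](27.60,85.20,27.80,85.55);
);
out center;
"""

def parse_stations(elements):
    """Turn raw Overpass 'elements' into (lat, lon, name) tuples.
    Ways carry their coordinates under 'center' when 'out center' is used."""
    stations = []
    for el in elements:
        lat = el.get("lat", el.get("center", {}).get("lat"))
        lon = el.get("lon", el.get("center", {}).get("lon"))
        name = el.get("tags", {}).get("name", "unnamed")
        stations.append((lat, lon, name))
    return stations

def fetch_fuel_stations(query=FUEL_QUERY, url=OVERPASS_URL):
    """POST the query to an Overpass endpoint and return parsed stations."""
    body = urllib.parse.urlencode({"data": query}).encode()
    with urllib.request.urlopen(urllib.request.Request(url, data=body)) as resp:
        return parse_stations(json.load(resp).get("elements", []))
```

The returned tuples can then be written to KML or plotted directly; the actual map embedded below lives in a GitHub Gist.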
<div class="MsoNormal" style="text-align: justify;">
<br /></div>
<br />
<div class="MsoNormal" style="text-align: justify;">
<script src="https://gist.github.com/sukuchha/c1d43974196f098d468e.js"></script></div>
</div>
MATLAB Tutorial: Finding trees and buildings from LiDAR with limited information using Mathematical Morphology (2015-03-26)<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
<br />
<br />
<span style="font-size: large;">Coming soon:</span><br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgORYcOg0bwswxXJElvgazNpcECjogIJbRmwhxivQ03vxJAHGXWlm8UUV-Mk9zcl_L8j8wjgmisK7l5_q8ao_rTeFMRtb02qGkNyL2PPwavSjnQ3L-QFBkgMaSX3P_kJSF-ok9zu7d8PQs/s1600/LP.bmp" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgORYcOg0bwswxXJElvgazNpcECjogIJbRmwhxivQ03vxJAHGXWlm8UUV-Mk9zcl_L8j8wjgmisK7l5_q8ao_rTeFMRtb02qGkNyL2PPwavSjnQ3L-QFBkgMaSX3P_kJSF-ok9zu7d8PQs/s1600/LP.bmp" height="320" width="282" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Last Pulse</td></tr>
</tbody></table>
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8dwAbmzGNL5zwab8qww7VPF7WUhw0hLvsFFCXxsjfXzddeAfg9beSofe18xjDhE6Jg86ARqA9mBwPphfU29tXGYpehyphenhyphen6eOLmp7SEsHQmngE7Ju8OZNykgG6JbRRk2xexWkB_HxPOGUMY/s1600/FP.bmp" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8dwAbmzGNL5zwab8qww7VPF7WUhw0hLvsFFCXxsjfXzddeAfg9beSofe18xjDhE6Jg86ARqA9mBwPphfU29tXGYpehyphenhyphen6eOLmp7SEsHQmngE7Ju8OZNykgG6JbRRk2xexWkB_HxPOGUMY/s1600/FP.bmp" height="320" width="282" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">First Pulse</td></tr>
</tbody></table>
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgr1oC8O4tL42vUPEg6-O6bJkBR3cBRgvT1hkffMiYKb4yCoKxA8TC5YRCQLCxJI9e78sj0UmElGjrttVJL3xfJmE9tgVejFKrmMLZ4khBPddfKQyVfki7cXVkaEz48jNgkQ43GZIqnsGg/s1600/downarrow.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgr1oC8O4tL42vUPEg6-O6bJkBR3cBRgvT1hkffMiYKb4yCoKxA8TC5YRCQLCxJI9e78sj0UmElGjrttVJL3xfJmE9tgVejFKrmMLZ4khBPddfKQyVfki7cXVkaEz48jNgkQ43GZIqnsGg/s1600/downarrow.png" height="195" width="200" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHfpU6fZ-RnyuL8qFDIZmx90RY7tCBrv9e7GxebDxLRQSt4tnHLEt0NuhBp7mHrkjTmXnRFV066jMSwPOs-p5fMsnTnWwZBpaJAzvlonbYdkdb_PR0oR5HpYRsGtY1WAKAXVYYVEJ3HnU/s1600/matlab_classification.bmp" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHfpU6fZ-RnyuL8qFDIZmx90RY7tCBrv9e7GxebDxLRQSt4tnHLEt0NuhBp7mHrkjTmXnRFV066jMSwPOs-p5fMsnTnWwZBpaJAzvlonbYdkdb_PR0oR5HpYRsGtY1WAKAXVYYVEJ3HnU/s1600/matlab_classification.bmp" height="320" width="278" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Classification of buildings and tree</td></tr>
</tbody></table>
<br />
<b><span style="font-size: x-large;">Processing TIME: < 1 sec</span></b><br />
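Since the full post is still to come, here is only a hedged Python sketch of the general idea behind separating buildings from trees with mathematical morphology: an opening (erosion then dilation) on a binary above-ground mask keeps large compact blobs (buildings) and removes small or thin responses (trees). The binary-mask input, the 3&#215;3 square structuring element, and the function names are assumptions of mine, not the actual MATLAB workflow on the first/last-pulse rasters shown above.

```python
def erode(mask, size=3):
    """Binary erosion with a size x size square structuring element."""
    h, w, k = len(mask), len(mask[0]), size // 2
    return [[int(all(0 <= r + dr < h and 0 <= c + dc < w and mask[r + dr][c + dc]
                     for dr in range(-k, k + 1) for dc in range(-k, k + 1)))
             for c in range(w)] for r in range(h)]

def dilate(mask, size=3):
    """Binary dilation with a size x size square structuring element."""
    h, w, k = len(mask), len(mask[0]), size // 2
    return [[int(any(0 <= r + dr < h and 0 <= c + dc < w and mask[r + dr][c + dc]
                     for dr in range(-k, k + 1) for dc in range(-k, k + 1)))
             for c in range(w)] for r in range(h)]

def split_buildings_trees(above_ground, size=3):
    """Opening keeps large smooth blobs (buildings); whatever the opening
    removes (small/thin above-ground responses) is labelled as trees."""
    opened = dilate(erode(above_ground, size), size)
    trees = [[int(a and not o) for a, o in zip(row_a, row_o)]
             for row_a, row_o in zip(above_ground, opened)]
    return opened, trees
```

On real data the above-ground mask would come from thresholding a normalized DSM, and the first-minus-last-pulse difference helps flag vegetation; both steps are omitted here.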
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<br /></div>
MATLAB tutorial: Dividing image into blocks and applying a function (2015-02-14)<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="MsoNormal" style="text-align: justify;">
Often, due to memory limitations, we want to divide an
image into mxn blocks and process those blocks. If your block processing outputs
an image, you can use the <i>blockproc</i> function in MATLAB. But if the processing
function outputs points, one can't use <i>blockproc</i>. For that, one has
to rely on image indexing to divide the image into blocks. So here I will show you
how you can divide an image into blocks and process those individual blocks with
any particular function using indexing. I am going to use block processing for
the center pivot irrigation field detection that was featured in this <a href="http://shreshai.blogspot.com/2015/01/matlab-tutorial-finding-center-pivot.html" target="_blank">post</a>. The
trick is to apply matrix indexing to get a chunk of block data and process the chunks
in a sequential manner. <o:p></o:p></div>
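The same indexing trick can be sketched in Python (the MATLAB gists below hold the actual code; the function names here are my own). Each block is cut out with index ranges, processed, and any returned points are shifted back into full-image coordinates:

```python
def block_ranges(height, width, m, n):
    """Yield (r0, r1, c0, c1) index ranges splitting a height x width image
    into m x n blocks; remainders are spread so every pixel is covered once."""
    rows = [round(i * height / m) for i in range(m + 1)]
    cols = [round(j * width / n) for j in range(n + 1)]
    for i in range(m):
        for j in range(n):
            yield rows[i], rows[i + 1], cols[j], cols[j + 1]

def process_blocks(image, m, n, func):
    """Apply func to each block sequentially; func returns (row, col) points
    local to the block, which are shifted back to full-image coordinates."""
    points = []
    for r0, r1, c0, c1 in block_ranges(len(image), len(image[0]), m, n):
        block = [row[c0:c1] for row in image[r0:r1]]
        points.extend((r0 + r, c0 + c) for r, c in func(block))
    return points
```

Here `func` would be the circle detector from the center pivot post; any function that returns per-block point coordinates fits the same pattern.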
<div class="MsoNormal" style="text-align: justify;">
<br /></div>
<div class="MsoNormal" style="text-align: justify;">
Here is the original image. We are going to divide it into <i>mxn </i>blocks. With the code you can specify any <i>m</i> or <i>n</i> value.</div>
<div class="MsoNormal" style="text-align: justify;">
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi7BdXO9HQuSPVOIoxbWwKagPiIqv5H7aoGVF-9ZCkrKitIcD1CTgMRsRteCwPHzshdKGFJtmtHVOwT4zzlqMTt6goHlhuSMp2girAWTyLkwfORHCiYmfC3rDEPOtSD6dHCl951clQTzec/s1600/1.bmp" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi7BdXO9HQuSPVOIoxbWwKagPiIqv5H7aoGVF-9ZCkrKitIcD1CTgMRsRteCwPHzshdKGFJtmtHVOwT4zzlqMTt6goHlhuSMp2girAWTyLkwfORHCiYmfC3rDEPOtSD6dHCl951clQTzec/s1600/1.bmp" height="476" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Original image</td></tr>
</tbody></table>
<div>
<span style="color: #274e13;"><i><script src="https://gist.github.com/sukuchha/e35359ac6d3843f896c8.js"></script></i></span></div>
<div>
<i><span style="color: #274e13;"><br /></span></i></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjkcqek5UDCRBq3-a_vF-ACRPPVZTgCPTi3Zkj5dnp0pyTn9w9UNUGEjDWmHtPLUFUGvfkZuUxitdcE5RUI_giZiyMCIJJTIyzEKOoVqGv1GopyecNRCiA8tjiO8DF2e8mFupFNvP7eWn4/s1600/blocks.bmp" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjkcqek5UDCRBq3-a_vF-ACRPPVZTgCPTi3Zkj5dnp0pyTn9w9UNUGEjDWmHtPLUFUGvfkZuUxitdcE5RUI_giZiyMCIJJTIyzEKOoVqGv1GopyecNRCiA8tjiO8DF2e8mFupFNvP7eWn4/s1600/blocks.bmp" height="327" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Block processing sequential numbering</td></tr>
</tbody></table>
<div class="MsoNormal">
With the following code, the block processing of the individual
blocks is performed. Feel free to copy the code and adapt it to your liking. Everything in the code is self-explanatory, so just go through it line by line and you will understand what's going on.</div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<span style="color: #274e13;"><i><script src="https://gist.github.com/sukuchha/7508af3bc313e6918cc8.js"></script></i></span></div>
<div class="MsoNormal">
The output of block processing for original image is as follows:</div>
<div class="MsoNormal">
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjaEhhZxqFuPVqtPz5raTEsSuae9-14vA98eV5W2OeAWER94eBHaDqZ73s5omrr3zGLnlnXBfR8c7L2nJq7dm4f5ppL6tya7Um63eJ-dB1TtQBIytq2lN43oAQDbC73QXRq_3oMdKm1pFA/s1600/4.bmp" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjaEhhZxqFuPVqtPz5raTEsSuae9-14vA98eV5W2OeAWER94eBHaDqZ73s5omrr3zGLnlnXBfR8c7L2nJq7dm4f5ppL6tya7Um63eJ-dB1TtQBIytq2lN43oAQDbC73QXRq_3oMdKm1pFA/s1600/4.bmp" height="497" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Block processing for detecting circles</td></tr>
</tbody></table>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<i><span style="color: #274e13;"><br /></span></i></div>
</div>
Anonymoushttp://www.blogger.com/profile/06633342383500839837noreply@blogger.com0tag:blogger.com,1999:blog-226544055325165100.post-42825376939675449522015-01-14T17:53:00.000+01:002015-01-16T00:29:07.958+01:00MATLAB Tutorial: Finding center pivot irrigation fields in a high resolution image<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="MsoNormal" style="text-align: justify;">
<span lang="EN-US">In this
post, we will be talking about finding circular irrigation fields. The image
was downloaded from <a href="https://www.planet.com/gallery/pinal-county-irrigation/" target="_blank">here</a>. If you want to try it, download it and play around. The approach
I am using is to detect edges and then find the edges that form circles.
If you have heard of the Hough transform for detecting straight lines,
this method is just an extension of it to detect circles. For the Hough transform,
please go through <a href="http://en.wikipedia.org/wiki/Hough_transform" target="_blank">here</a>. If you have this excellent<a href="http://www.mathworks.com/support/books/book49039.html" target="_blank"> book</a>, it has an elaborate
explanation of the Hough transform for detecting straight lines. People have
used the Hough transform for many different purposes. Many use it for detecting straight building edges that can be used to reconstruct 3-D
buildings, generate CityGML models, etc. <o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal" style="text-align: justify;">
<span lang="EN-US">For the Hough
transform for straight line detection, you can use either the Python-based
<a href="http://scikit-image.org/" target="_blank">scikit-image</a> or MATLAB. The circular Hough transform is also available in both scikit-image and OpenCV. </span><span lang="EN-US">The algorithm (<i>imfindcircles</i>) has been available in MATLAB since the R2013b release with the Image Processing Toolbox.</span> I could not find the algorithm in any remote sensing
software so far. My personal view is that remote sensing software is far behind in incorporating state-of-the-art algorithms. So, knowing some coding in Python, MATLAB or R will take you a long way in your professional path.<br />
<br />
A circle is represented mathematically as:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://opencv-python-tutroals.readthedocs.org/en/latest/_images/math/a59e83d016322a7b1e888b67eb77a7a3112493c2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="(x-x_{center})^2 + (y - y_{center})^2 = r^2" border="0" class="math" src="http://opencv-python-tutroals.readthedocs.org/en/latest/_images/math/a59e83d016322a7b1e888b67eb77a7a3112493c2.png" style="background-color: #fcfcfc; border: 0px; box-sizing: border-box; color: #404040; font-family: Lato, proxima-nova, 'Helvetica Neue', Arial, sans-serif; font-size: 16px; height: auto !important; line-height: 24px; max-width: 100%; text-align: center; vertical-align: middle;" /></a></div>
<br />
where <i>x<sub>center</sub></i> and <i>y<sub>center</sub></i> are the coordinates of the circle's center and <i>r</i> is its radius. As you can see, there are three parameters to be fitted in the circular Hough transform.<br />
<br />
Read the MATLAB documentation to get an idea of the <i>imfindcircles</i> function parameters. Here, basically, we are going
to use the <i>imfindcircles</i> function to
detect center pivot irrigation fields in the image. In this image, there are
only dark circular pivot fields surrounded by bright objects, so we will be
using only the ‘dark’ mode of <i>imfindcircles</i>. The minimum and maximum circle radii are image dependent, so you
need to determine them with a little bit of data exploration.</div>
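<div class="MsoNormal">
To see what the circular Hough transform actually does, the voting step can be sketched in NumPy for a single, known radius. This is a simplified toy (real implementations such as <i>imfindcircles</i> or scikit-image's hough_circle search a whole radius range and weight votes by edge gradients; all names below are invented for illustration):</div>

```python
import numpy as np

def hough_circle_votes(edge_mask, radius):
    """Minimal circular Hough transform for one known radius: every edge
    pixel votes for all candidate centres lying `radius` away from it;
    peaks in the accumulator mark likely circle centres."""
    rows, cols = edge_mask.shape
    acc = np.zeros((rows, cols), dtype=int)
    thetas = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
    for y, x in zip(*np.nonzero(edge_mask)):
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < rows) & (cx >= 0) & (cx < cols)
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc

# Synthetic edge image: one circle of radius 5 centred at (15, 15)
t = np.linspace(0.0, 2.0 * np.pi, 200)
mask = np.zeros((31, 31), dtype=bool)
mask[np.round(15 + 5 * np.sin(t)).astype(int),
     np.round(15 + 5 * np.cos(t)).astype(int)] = True
acc = hough_circle_votes(mask, 5)
cy, cx = np.unravel_index(np.argmax(acc), acc.shape)
```

The accumulator peak lands at (or next to) the true centre, which is exactly why three parameters (x, y, r) must be searched when the radius is unknown.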
<div class="MsoNormal" style="text-align: justify;">
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal" style="text-align: justify;">
<span lang="EN-US">Here is a code in MATLAB.</span><br />
<span lang="EN-US"><br /></span>
<script src="https://gist.github.com/sukuchha/75de77daae22f8b04c70.js"></script></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgXNaKN9rbyPX4DWYQ7rN-pBhJSa5FwkTbamce_Niz3O_KxfggUb9dK2fyqNHvxDVND8GKwr62n7BRmxora-yBU-pE8YTlCQkkB1u2Dqbtkr3ZrLunK2h2KqBsdeVo8HulmKngNH4pbAw4/s1600/1.bmp" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgXNaKN9rbyPX4DWYQ7rN-pBhJSa5FwkTbamce_Niz3O_KxfggUb9dK2fyqNHvxDVND8GKwr62n7BRmxora-yBU-pE8YTlCQkkB1u2Dqbtkr3ZrLunK2h2KqBsdeVo8HulmKngNH4pbAw4/s1600/1.bmp" height="476" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Original RGB image</td></tr>
</tbody></table>
<div class="MsoNormal" style="text-align: justify;">
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7wrNzGVsqLfCZOhk8fgBh-kuaYhAo3hiZFyfWo79ZLbaQF8afflufgi7gZjJ1UT2FqDBxQBmYy-n5rLmj1wnhQIiaV1sWhEKUXMSFphIQHGkcb5ooMxNZl2kolkeCJL6a2EGCgfHyQ2M/s1600/2.bmp" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7wrNzGVsqLfCZOhk8fgBh-kuaYhAo3hiZFyfWo79ZLbaQF8afflufgi7gZjJ1UT2FqDBxQBmYy-n5rLmj1wnhQIiaV1sWhEKUXMSFphIQHGkcb5ooMxNZl2kolkeCJL6a2EGCgfHyQ2M/s1600/2.bmp" height="497" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">RGB image with detected pivot irrigation fields</td></tr>
</tbody></table>
<div class="MsoNormal" style="text-align: justify;">
<span lang="EN-US">As you can see, 7 out of 9 fields were correctly detected. The two undetected fields (middle-top of the image) are also darker circles, but due to their low contrast with the surroundings they were not detected. Even with a higher value for the '<i>sensitivity</i>' parameter and a lower value for the '<i>edgeThreshold</i>' parameter, those two fields remained undetected. You can increase sensitivity and lower edgeThreshold further to find those undetected circles, but then you risk finding many false alarms as well.</span></div>
<div class="MsoNormal" style="text-align: justify;">
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal" style="text-align: justify;">
<span lang="EN-US">I have a gut feeling that with OBIA in eCognition, finding center pivot irrigation fields would not be as straightforward as using the function </span><i>imfindcircles</i>, and the process would be much more complex. Nevertheless, I will try it with eCognition in the near future and report back.</div>
<div class="MsoNormal" style="text-align: justify;">
<br /></div>
<div class="MsoNormal" style="text-align: justify;">
Pixel-based classification using any machine learning classifier will fail miserably in this case, as center pivot irrigation fields are spectrally similar to the vegetation in other rectangular plots.</div>
<div class="MsoNormal" style="text-align: justify;">
<br /></div>
<div class="MsoNormal">
<span style="color: red;"><u>UPDATE</u></span><br />
<span style="color: red;"><br /></span>
<br />
<div style="text-align: justify;">
I spent some time in eCognition exploring a newer algorithm, "template matching". The technique itself is not new, but it was incorporated in the latest release of eCognition. The concept: given a template, the template moves over the image (a single layer) in a sliding window, and the normalized cross-correlation similarity between the template and the pixels within the window is calculated. The result is a cross-correlation image. Subsequently, a threshold value is used to find the positions of pixels with high cross-correlation values. <br />
<br />
The concept of normalized cross-correlation is illustrated below, taken from a good <a href="http://www.cse.psu.edu/~rcollins/CSE486/lecture07.pdf" target="_blank">presentation</a>. Study the presentation in detail if you want. Notice the border effect in the cross-correlation image below. This can be avoided if the image is padded with replicated pixels (commonly done in MATLAB) for any convolution procedure.</div>
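<div class="MsoNormal">
The sliding-window normalized cross-correlation described above can be written out explicitly in NumPy. This is a naive, unoptimized sketch (not eCognition's implementation; production code uses faster FFT-based formulations, as in MATLAB's <i>normxcorr2</i>):</div>

```python
import numpy as np

def ncc_map(image, template):
    """Normalized cross-correlation of `template` over `image`
    ('valid' positions only, no padding - hence the border effect)."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    rows = image.shape[0] - th + 1
    cols = image.shape[1] - tw + 1
    out = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = t_norm * np.sqrt((wz ** 2).sum())
            out[r, c] = (wz * t).sum() / denom if denom > 0 else 0.0
    return out

# The template scores exactly 1.0 at the position it was cut from
img = np.random.default_rng(0).random((20, 20))
tpl = img[5:10, 8:13]
score = ncc_map(img, tpl)
peak = np.unravel_index(np.argmax(score), score.shape)
```

Thresholding `score` is the same step eCognition performs on its cross-correlation image to locate candidate matches.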
<br />
<div style="text-align: justify;">
A detailed explanation of performing template matching in eCognition will follow sometime in the future.</div>
<div style="text-align: justify;">
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZCNchJNhtEYpheNk7aIZLJRAHQ6aA8gh3YxmGfTj-T_GN8xHPQ86dKPpgzlz5gukz7wpheihuTnNbhh9KHOqM3wrTf4uxs8XM0sVjQJ3rmr5xMt0QeHL1IzHfv8N0pVC3CBpxd1fMozQ/s1600/cross-correlation+concept.bmp" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZCNchJNhtEYpheNk7aIZLJRAHQ6aA8gh3YxmGfTj-T_GN8xHPQ86dKPpgzlz5gukz7wpheihuTnNbhh9KHOqM3wrTf4uxs8XM0sVjQJ3rmr5xMt0QeHL1IzHfv8N0pVC3CBpxd1fMozQ/s1600/cross-correlation+concept.bmp" height="227" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Cross-correlation concept</td></tr>
</tbody></table>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjE6geDZTS4znZ3DmoVtikvIRRb3odKr4tQ5DLjhNe0G82vwUEBmTOV-yGXCKooIOUW0jKUUzGIMv7nZVupkFahN2OqlePUM6rYpAMWFqhhifxYb89HbYnoBK7BaFFpKIHvzkN5opfZrMo/s1600/template.bmp" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjE6geDZTS4znZ3DmoVtikvIRRb3odKr4tQ5DLjhNe0G82vwUEBmTOV-yGXCKooIOUW0jKUUzGIMv7nZVupkFahN2OqlePUM6rYpAMWFqhhifxYb89HbYnoBK7BaFFpKIHvzkN5opfZrMo/s1600/template.bmp" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">A template generated from many samples.</td></tr>
</tbody></table>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjSOxudl9dm-G4ZR_ctVIiWOO7O7zdeInw68qNwwaCGQvHIAngQ7SRL4_EyjTC3p6D-YTaQDgOshRq5lOaW3cMl4_HD30WGcQS6B7IYrhnKyspbBOLwIUKf4pmEoQ9lhx3Axro2Kr65JOA/s1600/ecog2.bmp" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjSOxudl9dm-G4ZR_ctVIiWOO7O7zdeInw68qNwwaCGQvHIAngQ7SRL4_EyjTC3p6D-YTaQDgOshRq5lOaW3cMl4_HD30WGcQS6B7IYrhnKyspbBOLwIUKf4pmEoQ9lhx3Axro2Kr65JOA/s1600/ecog2.bmp" height="292" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Cross-Correlation image</td></tr>
</tbody></table>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgW4kPjFynfzJcvzb_UIffmyMgmqdOX0fq-ds67QQoIGjsHdluesMCSfDBFegB8qNIUVnoS05FKlYL4TcQW1n6kuoqo5OVuDrlplEbQYTTUsnpc0G4EfQPewjihP20eMjdotE5wlOyd6Mw/s1600/ecogFull.bmp" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgW4kPjFynfzJcvzb_UIffmyMgmqdOX0fq-ds67QQoIGjsHdluesMCSfDBFegB8qNIUVnoS05FKlYL4TcQW1n6kuoqo5OVuDrlplEbQYTTUsnpc0G4EfQPewjihP20eMjdotE5wlOyd6Mw/s1600/ecogFull.bmp" height="322" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Result of template matching in eCognition</td></tr>
</tbody></table>
<br />
<br />
<br /></div>
</div>
Anonymoushttp://www.blogger.com/profile/06633342383500839837noreply@blogger.com4tag:blogger.com,1999:blog-226544055325165100.post-4602945583002977772014-12-16T16:03:00.000+01:002014-12-18T19:35:43.241+01:00eCognition Tutorial: Finding trees and buildings from LiDAR with limited information<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="MsoNormal">
<span lang="EN-US">I have worked a lot with LiDAR
during my MSc, but in my current work I am dealing more with optical
images. I kind of miss LiDAR. So when one of my friends came to me with a LiDAR-related problem, I was very happy and decided to show off my eCognition skills to her :-).<o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US">LiDAR data
typically comes with many attributes such as number of returns, intensity,
First Pulse Elevation, Last Pulse Elevation, etc. Some data providers even deliver a preliminary
discrimination into ground and non-ground points. But in this case, all we have is First
Pulse data and Last Pulse data, and the desired output is a discrimination
between trees and buildings.<o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US">The data
was borrowed from the eCognition <a href="http://community.ecognition.com/home/LiDAR%20Webinar_%20Ground%20-%20Non%20Ground%20classification.zip/view" target="_blank">community</a>. If you have some time to kill, head over
there, get the data, roll up your sleeves and let’s do information
extraction from LiDAR with eCognition, shall we?<o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US">My
workflow:</span></div>
<div class="MsoNormal" style="margin-left: .25in;">
</div>
<ol style="text-align: left;">
<li>Create a difference image (FP - LP) and classify trees</li>
<ol>
<li>Assign pixels > 2 m as <i>high</i> with multi-threshold segmentation</li>
<li>Assign small objects < 10 pixels surrounded by <i>high</i> as <i>high</i></li>
<li>Perform opening and closing with ball-shaped structuring elements (SE). The opening step is required to remove the LiDAR footprint effect along building edges.</li>
<li>Assign objects with area > 20 pixels as <i>tree</i></li>
</ol>
<li>Find buildings in the Last Pulse image</li>
<ol>
<li>Perform chessboard segmentation with size 1 on unclassified objects</li>
<li>Perform a Multi-Resolution Segmentation (MRS)</li>
<li>Create a "<i>Mean Difference to unclassified</i>" feature within a neighborhood of 20 pixels. For that, a customized feature was created.</li>
<li>Assign <i>unclassified</i> objects with <i>Mean Difference to unclassified (20)</i> > 4 m as <i>buildings</i></li>
<li>Merge <i>building</i> objects and reassign small objects (area < 100 pixels) classified as buildings to trees.</li>
<li>Assign unclassified objects surrounded by <i>buildings</i> as <i>buildings</i></li>
</ol>
</ol>
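<div class="MsoNormal">
Steps 1.1 to 1.4 of the tree detection can be imitated outside eCognition with NumPy and SciPy. This is a toy sketch on synthetic first/last pulse surfaces; the 2 m threshold and 20-pixel area mirror the rule set above, but the data and all names are made up:</div>

```python
import numpy as np
from scipy import ndimage

# Toy first/last pulse surfaces (metres): flat ground at 100 m with a
# "tree" where first and last pulse differ, and a "building" where the
# two pulses are both elevated but equal.
fp = np.full((20, 20), 100.0)
lp = np.full((20, 20), 100.0)
fp[3:9, 3:9] += 8.0                  # canopy: FP high, LP reaches ground
fp[12:18, 12:18] += 6.0              # building roof: FP and LP both high
lp[12:18, 12:18] += 6.0

diff = fp - lp                       # step 1: difference image (FP - LP)
high = diff > 2.0                    # step 1.1: threshold at 2 m
ball = ndimage.generate_binary_structure(2, 1)
# step 1.3: opening then closing to clean up footprint effects
cleaned = ndimage.binary_closing(ndimage.binary_opening(high, ball), ball)

# step 1.4: keep only blobs with area > 20 pixels as trees
labels, n = ndimage.label(cleaned)
sizes = ndimage.sum(cleaned, labels, range(1, n + 1))
trees = np.isin(labels, 1 + np.flatnonzero(sizes > 20))
```

The building does not appear in the difference image at all, which is why step 2 of the workflow works on the Last Pulse layer instead.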
The rule-set
development took 20 minutes of my lunch time. The process takes 11 seconds for
an area of 360 m x 360 m, and the result obtained is reasonably good. With a little more effort, the result can obviously be enhanced. Nevertheless, I gave myself a pat on the back.<br />
<div class="MsoNormal">
<span lang="EN-US"><br /></span>When I have more time in the near future, I will compare the result with a different methodology using LAStools.<br />
<span lang="EN-US"><br /></span></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_aZs6Rb6dvn5hgcwR7kABV5ZYIdVkpBs2ZR1SNuW-uoIj_PTrPKFOCQxcaVX-2V58v-RmxlrGHV9qzmCYBKg9_FIHOVzHpZkbTIsEipHmdaNbXEGAtGE2ZSixCX9oBy4aGV6t90q0Lqg/s1600/snap.bmp" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_aZs6Rb6dvn5hgcwR7kABV5ZYIdVkpBs2ZR1SNuW-uoIj_PTrPKFOCQxcaVX-2V58v-RmxlrGHV9qzmCYBKg9_FIHOVzHpZkbTIsEipHmdaNbXEGAtGE2ZSixCX9oBy4aGV6t90q0Lqg/s1600/snap.bmp" height="372" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Rule-set in action</td></tr>
</tbody></table>
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMvM298zSLGb3aI46UZajfQrB86_ysYbobzO0uViQiUNC0P32V3ayEu7X39TAiuhLKr5tDsS09QfLElQUQS0o0z8-x99gYI0XHQNDiZ_657c_yaoHcxTAVpbZhF8ws2SeqfKPaTsjL7ko/s1600/FP.bmp" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMvM298zSLGb3aI46UZajfQrB86_ysYbobzO0uViQiUNC0P32V3ayEu7X39TAiuhLKr5tDsS09QfLElQUQS0o0z8-x99gYI0XHQNDiZ_657c_yaoHcxTAVpbZhF8ws2SeqfKPaTsjL7ko/s1600/FP.bmp" height="320" width="282" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">First Pulse</td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhNL0n889Avazd4PQXlQh-7oO4HNPg4-0Kkf_qQohLc2EFG2kqBUfvR7evJYmxS6ZFE3yaeBa2GUqcnNuz1Ccugi11oEchreLhjvO6XW_ucV6PlbH7yss7J7-laVjeRbkXB3X96OI_FZwQ/s1600/diff.bmp" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="" border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhNL0n889Avazd4PQXlQh-7oO4HNPg4-0Kkf_qQohLc2EFG2kqBUfvR7evJYmxS6ZFE3yaeBa2GUqcnNuz1Ccugi11oEchreLhjvO6XW_ucV6PlbH7yss7J7-laVjeRbkXB3X96OI_FZwQ/s1600/diff.bmp" height="320" title="Difference image" width="283" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Difference image</td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQ43DtlyLIm0JRSD6grLue6zaQ-PNosRAhBmeF8EDMzOf1aEMmT2MHFY0vYy0KGSjObSBESUp13Wa2X9YrvASCK5DEhctLvYodO0L-r396FsJfXBlxCGTZZTK5NoyS0yO4iJxRkwua2NE/s1600/classification.bmp" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQ43DtlyLIm0JRSD6grLue6zaQ-PNosRAhBmeF8EDMzOf1aEMmT2MHFY0vYy0KGSjObSBESUp13Wa2X9YrvASCK5DEhctLvYodO0L-r396FsJfXBlxCGTZZTK5NoyS0yO4iJxRkwua2NE/s1600/classification.bmp" height="320" width="283" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Classification ( Yellow: Buildings, Green: Trees)</td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjK_OGf1MvpI8QAzMEEyX3ALpKJUFzYwYjI15x63mrGTLTOfGIPWpmMtBXe493GrOz4m_DmUPaxidXLr0uSy7hy1NPebRVvfkt4bskEV4x4-BFOTUPxp6ZHk9Y52OCGJkWCCU5_ULJDZRQ/s1600/LP.bmp" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjK_OGf1MvpI8QAzMEEyX3ALpKJUFzYwYjI15x63mrGTLTOfGIPWpmMtBXe493GrOz4m_DmUPaxidXLr0uSy7hy1NPebRVvfkt4bskEV4x4-BFOTUPxp6ZHk9Y52OCGJkWCCU5_ULJDZRQ/s1600/LP.bmp" height="320" width="282" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Last Pulse</td></tr>
</tbody></table>
<b><u>UPDATE</u></b><br />
<br />
Well, this morning my friend called me and told me that the building classification is great, but can we get straight lines for building edges rather than zig-zag lines? Let's see what she is talking about. Yes, building edges are straight most of the time, but due to the effect of segmentation, our classified building edges are zig-zag. The problem can be tackled with the native vector handling capability of eCognition. These algorithms were introduced in eCognition 9.0, released a couple of months ago.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiWN41WdIp1AW5y56muk-WC4N8IzWNOeSZRMtZmNTle4ovltj4jIfisdxVmdTUhOtsK15B_uPbsJfhizhgcJwqdWLg9Au_WL-Oi9cbyzIx_MAy4LO1h4BVgC5Y71NfyHZAUSknqmGBUxq0/s1600/buiding_problem.bmp" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiWN41WdIp1AW5y56muk-WC4N8IzWNOeSZRMtZmNTle4ovltj4jIfisdxVmdTUhOtsK15B_uPbsJfhizhgcJwqdWLg9Au_WL-Oi9cbyzIx_MAy4LO1h4BVgC5Y71NfyHZAUSknqmGBUxq0/s1600/buiding_problem.bmp" height="320" width="315" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Zig-Zag edges problem of buildings</td></tr>
</tbody></table>
<u>Approach 1</u><br />
<br />
<ul style="text-align: left;">
<li>Convert building objects into a shp file</li>
<li>Use the building orthogonalization algorithm (Chessboard: 7 pixels and Merge Threshold: 0.5)</li>
</ul>
<br />
As you can see, the result is far from perfect.<br />
<br />
<u>Approach 2</u><br />
<ul>
<li>Apply mathematical morphology closing (SE: box 7x7 pixels) on building objects</li>
<li>Apply mathematical morphology opening (SE: box 7x7 pixels) on building objects</li>
<li>Convert building objects into a shp file</li>
<li>Use the building orthogonalization algorithm (Chessboard: 7 pixels and Merge Threshold: 0.5)</li>
</ul>
<br />
The result is much better than with the first approach. Here, the close-open sequence must be followed, since building objects that are loosely connected may break into separate objects if an open-close sequence is adopted.<br />
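<div class="MsoNormal">
The asymmetry between the two sequences can be demonstrated with SciPy's binary morphology. This toy example uses a 3x3 box structuring element instead of eCognition's 7x7, and an invented binary object with two thin bridges:</div>

```python
import numpy as np
from scipy import ndimage

# Two 5x5 blocks joined by two thin, 1-pixel bridges: a "loosely
# connected" building object like the one discussed above.
obj = np.zeros((7, 15), dtype=bool)
obj[1:6, 1:6] = True                 # left block
obj[1:6, 9:14] = True                # right block
obj[2, 6:9] = True                   # thin bridge 1
obj[4, 6:9] = True                   # thin bridge 2

se = np.ones((3, 3), dtype=bool)

# close-then-open first thickens the neck, so opening cannot sever it
close_open = ndimage.binary_opening(ndimage.binary_closing(obj, se), se)
# open-then-close erodes the thin bridges away before closing can help
open_close = ndimage.binary_closing(ndimage.binary_opening(obj, se), se)

_, n_close_open = ndimage.label(close_open)
_, n_open_close = ndimage.label(open_close)
```

Opening first removes the thin bridges, and the remaining gap is too wide for the subsequent closing to rejoin, so the object breaks into two pieces; closing first fuses the bridges into a neck thick enough to survive the opening.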
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgj0l6aVRz1x_xJQBESqJrXPIx82soyqLqzG-HQJF_yoC8upJAm3t51yIqYOAdVzJ9U_-lkMriY3-0P2__LLqM9EzvzQf79Zkk8RoTz21cDZIgmiHkW0KuLrf1ByEkwsQX814iHuPaBF3c/s1600/buiding_problem+ortho.bmp" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgj0l6aVRz1x_xJQBESqJrXPIx82soyqLqzG-HQJF_yoC8upJAm3t51yIqYOAdVzJ9U_-lkMriY3-0P2__LLqM9EzvzQf79Zkk8RoTz21cDZIgmiHkW0KuLrf1ByEkwsQX814iHuPaBF3c/s1600/buiding_problem+ortho.bmp" height="320" width="318" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Boundary orthogonalization without Mathematical Morphology (Yellow: building objects, Red: new boundary)</td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhtKn21zyyFUnXtqBQwRJXfiGNdNyeErBCJzsI5S_xTis2UYLBJ1fowuOU0wgSk93i4z5qvCT9XlMRuTY3pIzMYjMK3zfLRRibkFcWVaTv6CTWbmA69cB1kT727TqF_AneY9T40trL17pg/s1600/buiding_problem+MM+ortho.bmp" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhtKn21zyyFUnXtqBQwRJXfiGNdNyeErBCJzsI5S_xTis2UYLBJ1fowuOU0wgSk93i4z5qvCT9XlMRuTY3pIzMYjMK3zfLRRibkFcWVaTv6CTWbmA69cB1kT727TqF_AneY9T40trL17pg/s1600/buiding_problem+MM+ortho.bmp" height="320" width="287" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: 13.3333339691162px;">Boundary orthogonalization with Mathematical Morphology (Yellow: building objects, Red: new boundary)</span></td></tr>
</tbody></table>
<br />
<br />
<br />
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
</div>
Anonymoushttp://www.blogger.com/profile/06633342383500839837noreply@blogger.com3tag:blogger.com,1999:blog-226544055325165100.post-7944856547749980572014-11-18T20:36:00.001+01:002014-11-18T21:19:33.398+01:00eCognition Tutorial: Find the closest classified object<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="MsoNormal">
<a href="https://www.blogger.com/null" name="OLE_LINK1" style="font-family: inherit;"><span style="background: white; font-size: 9pt; line-height: 115%;"><span style="color: red;">How can I find the classified object closest to any other object I pick?</span></span></a></div>
<div class="MsoNormal" style="text-align: left;">
<span style="font-family: inherit;"><a href="https://www.blogger.com/null" name="OLE_LINK1"><span style="background: white; color: #4d4d4d; font-size: 9pt; line-height: 115%;"><br /></span></a></span></div>
<div class="MsoNormal" style="text-align: left;">
<span style="background: white; color: #4d4d4d; font-size: 9pt; line-height: 115%;"><span style="font-family: inherit;">Solution:<o:p></o:p></span></span></div>
<div class="MsoNormal" style="text-align: left;">
<span style="background: white; color: #4d4d4d; font-size: 9pt; line-height: 115%;"><span style="font-family: inherit;"><br /></span></span></div>
<div style="text-align: left;">
<span style="font-family: inherit;">
</span></div>
<div class="MsoNormal" style="text-align: left;">
<span style="background: white;"><span style="color: #4d4d4d; font-family: inherit;"><span style="font-size: 9pt; line-height: 115%;">There are
different ways to tackle this problem. First, one should be aware that there are
two different methods of computing the distance between image objects: Center of Gravity
and Smallest Enclosing Rectangle. Scour the help documentation to find out what those two methods
are and choose the one you want. You can set this up at the beginning of your rule set using the rule set options algorithm. You obviously need to first create a 'Distance to' feature for the class you are interested in. Afterwards, we are going to use the "find domain extrema" algorithm to find the closest object.</span></span></span><o:p></o:p></div>
<div class="MsoNormal" style="text-align: left;">
<span style="background: white;"><span style="color: #4d4d4d;"><span style="font-family: inherit;"><span style="font-size: 9pt; line-height: 115%;"><br /></span></span></span></span></div>
<div class="MsoNormal" style="text-align: left;">
<span style="color: #4d4d4d;"><span style="background-color: white; font-size: 11.8181819915771px; line-height: 13.8000011444092px;">For this problem, I am going to use the same image that I used in the last post. We are going to use different concepts such as multi-threshold segmentation, variables, and PPO. For every blob, we will find the nearest blob in the image. Subsequently, we will export the results as images so that you can see what is happening.</span></span><br />
<span style="color: #4d4d4d;"><span style="background-color: white; font-size: 11.8181819915771px; line-height: 13.8000011444092px;"><br /></span></span></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghv3IEWOHGNJZFOv_jA76oi069VrysJZkNtkEkQeYmTyre2mcP8q4JiGvLdCYih-DkXAN87HiamKXbmwMW_Jy_cCB3ACLJWYN3C7odYHMFJSRMdD_9aFnYyL9mfi4tu5joHSLpRvayZ-4/s1600/1.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghv3IEWOHGNJZFOv_jA76oi069VrysJZkNtkEkQeYmTyre2mcP8q4JiGvLdCYih-DkXAN87HiamKXbmwMW_Jy_cCB3ACLJWYN3C7odYHMFJSRMdD_9aFnYyL9mfi4tu5joHSLpRvayZ-4/s1600/1.PNG" height="275" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">find domain extrema</td></tr>
</tbody></table>
<div class="MsoNormal" style="text-align: left;">
<span style="background-color: white; color: #4d4d4d; font-size: 11.8181819915771px; line-height: 13.8000011444092px;">Steps:</span></div>
<div class="MsoNormal" style="text-align: left;">
<br />
<ul style="text-align: left;">
<li><span style="background-color: white; color: #4d4d4d; font-size: 11.8181819915771px; line-height: 13.8000011444092px;">multi-threshold segmentation to get all blobs and classify them as <i>class1</i></span></li>
<li><span style="background-color: white; color: #4d4d4d; font-size: 11.8181819915771px; line-height: 13.8000011444092px;">create a variable to count the loop (it will later be used to export images with distinct names)</span></li>
<li><span style="background-color: white; color: #4d4d4d; font-size: 11.8181819915771px; line-height: 13.8000011444092px;">Use PPO to loop through <i>class1</i></span></li>
<ul>
<li><span style="background-color: white; color: #4d4d4d; font-size: 11.8181819915771px; line-height: 13.8000011444092px;">assign the current object as <i>curr_class</i></span></li>
<li><span style="background-color: white; color: #4d4d4d; font-size: 11.8181819915771px; line-height: 13.8000011444092px;">update the variable</span></li>
<li><span style="background-color: white; color: #4d4d4d; font-size: 11.8181819915771px; line-height: 13.8000011444092px;">use find domain extrema to find the nearest blob that belongs to <i>class1</i></span></li>
<li><span style="background-color: white; color: #4d4d4d; font-size: 11.8181819915771px; line-height: 13.8000011444092px;">export the image</span></li>
</ul>
</ul>
</div>
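<div class="MsoNormal">
Outside eCognition, the "Center of Gravity" flavour of this nearest-object search can be sketched with SciPy by labelling the blobs and comparing centroid distances. This is a toy stand-in for the multi-threshold segmentation plus find-domain-extrema loop above; the image and all names are invented:</div>

```python
import numpy as np
from scipy import ndimage

# Toy binary image with three blobs; "Center of Gravity" distance
# corresponds to the distance between blob centroids.
img = np.zeros((20, 20), dtype=bool)
img[2:5, 2:5] = True      # blob 1
img[3:6, 10:13] = True    # blob 2
img[14:17, 4:7] = True    # blob 3

labels, n = ndimage.label(img)
centroids = np.array(ndimage.center_of_mass(img, labels, range(1, n + 1)))

# Pairwise centroid distances; a blob is never its own nearest neighbour
d = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
nearest = d.argmin(axis=1) + 1    # nearest blob label for each blob
```

The "Smallest Enclosing Rectangle" method mentioned above would use rectangle-to-rectangle distances instead of centroids, so the two settings can return different nearest neighbours.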
<div class="MsoNormal" style="text-align: left;">
<span style="color: #4d4d4d;"><span style="background-color: white; font-size: 11.8181819915771px; line-height: 13.8000011444092px;"><br /></span></span></div>
<div class="MsoNormal" style="text-align: left;">
<span style="color: #4d4d4d;"><span style="background-color: white; font-size: 11.8181819915771px; line-height: 13.8000011444092px;"><br /></span></span></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXR01e-F1bTBSzUqy5K8jDX43-dfLYDqdHyNaV2rM5VBp3nK25xVv7LQOkte2e4UGTdPo1ZimmNAjgq001PghaA1_1Yv0MPd4zoKFaSKDUNeRI0M5OuqGIjA9vnmg0LNeUdnaH1tmFIio/s1600/2.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXR01e-F1bTBSzUqy5K8jDX43-dfLYDqdHyNaV2rM5VBp3nK25xVv7LQOkte2e4UGTdPo1ZimmNAjgq001PghaA1_1Yv0MPd4zoKFaSKDUNeRI0M5OuqGIjA9vnmg0LNeUdnaH1tmFIio/s1600/2.PNG" height="482" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Project to find the nearest object</td></tr>
</tbody></table>
<div class="MsoNormal" style="text-align: left;">
<br /></div>
<div class="MsoNormal">
<span style="color: #4d4d4d;"><span style="background-color: white; font-size: 11.8181819915771px; line-height: 13.8000011444092px;">Red: <i>class1</i></span></span></div>
<div class="MsoNormal">
<span style="color: #4d4d4d;"><span style="background-color: white; font-size: 11.8181819915771px; line-height: 13.8000011444092px;">Green: curr_class (the object currently being processed)</span></span></div>
<div class="MsoNormal">
<span style="color: #4d4d4d;"><span style="background-color: white; font-size: 11.8181819915771px; line-height: 13.8000011444092px;">Magenta: near (the blob closest to curr_class)</span></span></div>
<div>
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgL-kW6Yl_GjG70pv_yj2L6D7xQnDSukR9TxN6vnFUSVytk1z453ZnshKDJdevmZF3nfDZ8gTi1kGrSjbUhhRxz0KLCiJJ10yQ2SOGS3qAZZgm6LuZ0by1xH-rVSTm0tszHKNlLxIbO-tk/s1600/output_7TTb0O+(1).gif" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgL-kW6Yl_GjG70pv_yj2L6D7xQnDSukR9TxN6vnFUSVytk1z453ZnshKDJdevmZF3nfDZ8gTi1kGrSjbUhhRxz0KLCiJJ10yQ2SOGS3qAZZgm6LuZ0by1xH-rVSTm0tszHKNlLxIbO-tk/s1600/output_7TTb0O+(1).gif" height="320" width="298" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">GIF demonstrating the nearest-object search</td></tr>
</tbody></table>
<div>
<br /></div>
<div class="MsoNormal" style="text-align: left;">
<br /></div>
</div>
Anonymoushttp://www.blogger.com/profile/06633342383500839837noreply@blogger.com0tag:blogger.com,1999:blog-226544055325165100.post-91511423749538560522014-11-12T19:25:00.003+01:002014-11-15T01:34:05.017+01:00Tutorial eCognition : Finding area of classified objects within eCognition<div dir="ltr" style="text-align: left;" trbidi="on">
<span style="font-family: 'Times New Roman', serif; font-size: 10pt; line-height: 115%;">Problem:</span><br />
<div class="MsoNormal">
<span lang="EN-US" style="font-family: "Times New Roman","serif"; font-size: 10.0pt; line-height: 115%; mso-ansi-language: EN-US;"><br /></span></div>
<div class="MsoNormal">
<span style="background: white; color: red; font-family: "Times New Roman","serif"; line-height: 115%;"><i>I have done some classification with many classes. Now I want to know the area of each individual class. Please tell me a way to find it and store it in a text file.<span class="apple-converted-space"> </span></i></span><span style="font-family: "Times New Roman","serif"; font-size: 10.0pt; line-height: 115%;"><br />
<!--[if !supportLineBreakNewLine]--><br />
<!--[endif]--></span><span lang="EN-US" style="font-family: "Times New Roman","serif"; font-size: 10.0pt; line-height: 115%; mso-ansi-language: EN-US;"><o:p></o:p></span></div>
<div class="MsoNormal">
<span style="font-family: "Times New Roman","serif"; font-size: 10.0pt; line-height: 115%;">Solution</span></div>
<div class="MsoNormal">
<span style="font-family: "Times New Roman","serif"; font-size: 10.0pt; line-height: 115%;"><br /></span></div>
<div class="MsoNormal">
</div>
<div class="MsoNormal">
<span lang="EN-US" style="font-family: "Times New Roman","serif"; font-size: 10.0pt; line-height: 115%; mso-ansi-language: EN-US;">For this kind of problem, many people would resort to ArcGIS, working on exported shapefiles that carry the classification. In ArcGIS, one has to perform a series of operations to get the desired output (add an area column, populate it with the area of every feature, then sum up the area for each class). These operations are not complex, but if you are not comfortable with ArcGIS, doing all that is a tough nut to crack.</span><br />
<span lang="EN-US" style="font-family: "Times New Roman","serif"; font-size: 10.0pt; line-height: 115%; mso-ansi-language: EN-US;"><br /></span>
<span lang="EN-US" style="font-family: "Times New Roman","serif"; font-size: 10.0pt; line-height: 115%; mso-ansi-language: EN-US;">The good thing is that eCognition has a feature, 'area of classified objects', that can be used for this purpose. But you have to create that feature for every class in your project. Imagine you have more than 10 classes: creating that feature for ten different classes is boring, isn't it? At least for me, it is cumbersome. On top of that, imagine you have just finished the project you are working on and now have to do the same thing in another project with 15 classes with different names. Gosh, you have to create another 15 features. Not fun, right? So what's the solution? You guessed it: we can use the array functionality within eCognition to loop over all the classes and create that feature for each of them in one go. The rule set we will create will work in every project, regardless of the number of classes. Sounds interesting? Keep on reading.<o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-US" style="font-family: "Times New Roman","serif"; font-size: 10.0pt; line-height: 115%; mso-ansi-language: EN-US;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US" style="font-family: "Times New Roman","serif"; font-size: 10.0pt; line-height: 115%; mso-ansi-language: EN-US;">eCognition introduced the concept of variables and arrays around version 8 (I am not sure of the exact version). With the array functionality, you can gather all the classes present in your project in one go and create a particular feature for each class. I have looked at many eCognition projects developed by other people over the last three years, and not many people use these newer features. They are very handy in many cases. So here is a rule set that does the following; you can even turn it into a customized algorithm and apply it at the end of every classification project.</span></div>
<div class="MsoListParagraphCxSpFirst" style="margin-left: 72pt; text-align: left; text-indent: -18pt;">
</div>
<ol style="text-align: left;">
<li><span style="font-family: 'Times New Roman', serif; font-size: 10pt; line-height: 115%;">create an array to store all your classes</span></li>
<li><span style="font-family: 'Times New Roman', serif; font-size: 10pt; line-height: 115%;">create a temp class</span></li>
<li><span style="font-family: 'Times New Roman', serif; font-size: 10pt; line-height: 115%;">create the feature 'area of classified objects' (you can specify the unit you want) based on the temp class</span></li>
<li><span style="font-family: 'Times New Roman', serif; font-size: 10pt; line-height: 115%;">loop over each class from step 1, storing it in the temp class; read the area from the feature in step 3 and store it in an array</span></li>
<li><span style="font-family: 'Times New Roman', serif; font-size: 10pt; line-height: 115%;">write the class array and the area array to a CSV with 'export project statistics'</span></li>
</ol>
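Outside eCognition, the bookkeeping in the steps above looks roughly like the following Python sketch (a minimal illustration; `class_areas` and `write_area_csv` are names I made up, and the flat label list stands in for the classified objects):

```python
import csv
from collections import Counter

def class_areas(labels, pixel_area, class_names):
    """Sum up the area of every class in a classified raster.
    labels: flat iterable of per-pixel class IDs,
    pixel_area: area of one pixel in the unit you want,
    class_names: maps class ID -> class name (unlisted IDs are skipped)."""
    counts = Counter(labels)
    return {class_names[cid]: n * pixel_area
            for cid, n in counts.items() if cid in class_names}

def write_area_csv(areas, path):
    """Write the per-class areas to a CSV file, one class per row."""
    with open(path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["class", "area"])
        for name, area in sorted(areas.items()):
            writer.writerow([name, area])
```

The eCognition rule set does the same accumulation with a class array and an area array, then writes both with 'export project statistics'.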
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgsbZbX9k3ihFclhyzXfQDTeRp1myXhbAtPbVrY1nb2_DDc-9eUKVMY6kBX5m97zdtHNhFV8gNXih6xum090y283yepwyGbHgn0rFosbf5z2ptkAnVI56WkWIrAUu5m6PgnLXjMZQE8mjg/s1600/4.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="51" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgsbZbX9k3ihFclhyzXfQDTeRp1myXhbAtPbVrY1nb2_DDc-9eUKVMY6kBX5m97zdtHNhFV8gNXih6xum090y283yepwyGbHgn0rFosbf5z2ptkAnVI56WkWIrAUu5m6PgnLXjMZQE8mjg/s320/4.PNG" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Area of classified objects for each class in a csv file.</td></tr>
</tbody></table>
<div style="text-indent: -24px;">
<span style="font-family: Times New Roman, serif; font-size: x-small;"><span style="line-height: 15.3333330154419px;"></span></span><br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjTIjq0ojDIf3twWn1Hf9cGgSjhhckLk38ltD0dEIEmFBcOF61ffxvy3QCFNc9JwvuvUzmi7arYxH_kF6mDYBe1oVhCknb7UIva4YYYjPXDxwoETU4cRJ4w8nIAkX6iBpJMI6ZQT1zUs2k/s1600/5.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="352" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjTIjq0ojDIf3twWn1Hf9cGgSjhhckLk38ltD0dEIEmFBcOF61ffxvy3QCFNc9JwvuvUzmi7arYxH_kF6mDYBe1oVhCknb7UIva4YYYjPXDxwoETU4cRJ4w8nIAkX6iBpJMI6ZQT1zUs2k/s640/5.PNG" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Rule set in action</td></tr>
</tbody></table>
<br />
<span style="font-family: Times New Roman, serif; font-size: x-small;"><span style="line-height: 15.3333330154419px;"></span></span></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjXZzy1DLRwEfIVOGsgbhkPXOyPuTsCNG7fFmZ5FhUttjFIv8AS2-jt1lhh4dYj3E8NT0NMAtSnyR342njvsaOKUqLclm_Uo01k8HMV3_RmTSxv7LytCCqer8CdtpvIQUmKyE33Zq2_0e8/s1600/1.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="245" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjXZzy1DLRwEfIVOGsgbhkPXOyPuTsCNG7fFmZ5FhUttjFIv8AS2-jt1lhh4dYj3E8NT0NMAtSnyR342njvsaOKUqLclm_Uo01k8HMV3_RmTSxv7LytCCqer8CdtpvIQUmKyE33Zq2_0e8/s400/1.PNG" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Loop over all classes one at a time</td></tr>
</tbody></table>
<br />
<div class="MsoNormal">
<span lang="EN-US" style="background-color: white; font-family: 'Times New Roman', serif; font-size: 10pt; line-height: 115%;">
</span></div>
<div style="text-align: left;">
</div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1YByWvHWepOLyI8KFUT92rgOOhWFQdI0nmeB189pJvpBlwncP4uL4JhbzNMhwYmujmWef8tdbK7v6wzDcXetOf4FoUWNuOwQGtjysdufhLC3JqX_7n0AQFWPoYs26ooRQIAYqIZmdcvo/s1600/2.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="245" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1YByWvHWepOLyI8KFUT92rgOOhWFQdI0nmeB189pJvpBlwncP4uL4JhbzNMhwYmujmWef8tdbK7v6wzDcXetOf4FoUWNuOwQGtjysdufhLC3JqX_7n0AQFWPoYs26ooRQIAYqIZmdcvo/s400/2.PNG" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Store areas of current class in an array</td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgc3KVeByBM5m4RlmqOG4U6YJ0rcgqGqUzO-rZCB83l4clwZNx2mUKPLe14esXEgImDh16Qs2MWDsiS4BzzjVftZGDwD-0jopUKdBRm5L4d6PhJnHD32_Vj84vbP5j0zzAzN6diKOZlTX4/s1600/3.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="241" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgc3KVeByBM5m4RlmqOG4U6YJ0rcgqGqUzO-rZCB83l4clwZNx2mUKPLe14esXEgImDh16Qs2MWDsiS4BzzjVftZGDwD-0jopUKdBRm5L4d6PhJnHD32_Vj84vbP5j0zzAzN6diKOZlTX4/s400/3.PNG" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Export the area information stored in the array to a CSV file<br />
<br />
<div style="text-align: left;">
I have described another use of the array functionality, exporting a particular feature for every image band in your project, in this <a href="http://shreshai.blogspot.com/2014/10/exporting-ecognition-features-as-images.html" target="_blank">blog post</a>. </div>
</td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
</div>
Anonymoushttp://www.blogger.com/profile/06633342383500839837noreply@blogger.com1tag:blogger.com,1999:blog-226544055325165100.post-77485787473430780252014-11-05T08:40:00.001+01:002014-11-15T09:50:57.353+01:00Python tutorial: Converting a raster dataset to XYZ in Python <div dir="ltr" style="text-align: left;" trbidi="on">
<div class="MsoNormal">
<span lang="EN-US">One of the most popular posts on my blog is about converting a raster image into an XYZ text file. Converting a raster image to an XYZ file may be necessary because machine learning algorithms (outside proprietary software) usually require the input to be a table. I wrote a <a href="http://shreshai.blogspot.com/2011/01/converting-raster-dataset-to-xyz-in.html" target="_blank">post</a> about converting a raster file into XYZ using ArcGIS some three years ago, which still seems to attract many visitors to my blog.<o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-US"><br /></span>
<span lang="EN-US">Here I will show a way to do it using Python. For reading the geo-referenced raster file, we will use the rasterio package, a wrapper around GDAL that provides clean and fast I/O for geospatial raster images. The package is written in Cython, so it is very fast, and reading a raster file with rasterio is a one-liner. You can download a binary of rasterio <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/#rasterio" target="_blank">here</a>; installing it is just a matter of running the installer. After reading the raster file, we get the bounding box of the image, compute the XY coordinates of each pixel, and write them to a CSV file together with the pixel values. The code works for any number of bands in the image.<o:p></o:p></span></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<span lang="EN-US">I hope this piece of code is useful for you. In the near future I will show you how to do exactly the same thing within QGIS. Stay tuned.<o:p></o:p></span></div>
<div class="MsoNormal">
<br />
<script src="https://gist.github.com/sukuchha/47a0122b998b0110fd71.js"></script></div>
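In outline, the approach boils down to something like the sketch below (my own minimal version, not the gist itself; it assumes rasterio is installed, is written against the current rasterio `transform` API, and `pixel_centre` / `raster_to_xyz` are helper names I chose):

```python
import csv

def pixel_centre(gt, row, col):
    """Map a pixel (row, col) to the X, Y of its centre.
    gt holds the affine coefficients (a, b, c, d, e, f) in rasterio order:
        x = a*col + b*row + c,   y = d*col + e*row + f
    The +0.5 offsets move from the pixel corner to the pixel centre."""
    a, b, c, d, e, f = gt
    x = a * (col + 0.5) + b * (row + 0.5) + c
    y = d * (col + 0.5) + e * (row + 0.5) + f
    return x, y

def raster_to_xyz(src_path, csv_path):
    """Read every band of a raster and write one X, Y, band1..bandN row per pixel."""
    import rasterio  # third-party; see the download link above
    with rasterio.open(src_path) as src:
        data = src.read()  # array of shape (bands, rows, cols)
        t = src.transform
        gt = (t.a, t.b, t.c, t.d, t.e, t.f)
        with open(csv_path, "w", newline="") as fh:
            writer = csv.writer(fh)
            writer.writerow(["X", "Y"] + ["band%d" % (i + 1) for i in range(src.count)])
            for row in range(src.height):
                for col in range(src.width):
                    x, y = pixel_centre(gt, row, col)
                    writer.writerow([x, y] + list(data[:, row, col]))
```

Because the band loop is driven by `src.count`, the same function handles single-band and multi-band images alike.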
<div class="MsoNormal">
<i></i></div>
<div class="MsoNormal">
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh4GU9M7Bk__5gZNtxtVj0uSKbPdL8gOppdnYPtgYJKJ3VyZGrN8OMZezyFcPYQujrFIosSAvmOjGSD9NrqlzPHf-3hcTaKXfjjm_byetJex7XZzwqk1jX6Dfpxj1Rav5viKFnsYfYJidU/s1600/Capture.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh4GU9M7Bk__5gZNtxtVj0uSKbPdL8gOppdnYPtgYJKJ3VyZGrN8OMZezyFcPYQujrFIosSAvmOjGSD9NrqlzPHf-3hcTaKXfjjm_byetJex7XZzwqk1jX6Dfpxj1Rav5viKFnsYfYJidU/s1600/Capture.PNG" height="243" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Output XYZ file with band values</td></tr>
</tbody></table>
<div class="MsoNormal">
<i><br /></i></div>
<div class="MsoNormal">
In addition, the above code can be combined with the <a href="http://shreshai.blogspot.com/2014/07/data-exchange-between-matlab-and-python.html" target="_blank">code </a>(Data exchange between MATLAB and Python: Reading and writing .mat files with Python) so that you can easily export multispectral and hyperspectral data as a .mat file for MATLAB. Many people seem to have problems reading remote sensing images with the <a href="http://shreshai.blogspot.com/2011/12/opening-multispectral-or-hyperspectral.html" target="_blank">multiband read function</a> in MATLAB. If you are one of them, use the codes above to bypass the multiband read function and get your remote sensing images straight into MATLAB.</div>
</div>
Anonymoushttp://www.blogger.com/profile/06633342383500839837noreply@blogger.com3tag:blogger.com,1999:blog-226544055325165100.post-90253913703140871912014-10-28T10:43:00.002+01:002014-11-15T09:51:21.472+01:00eCognition tutorial: Almost connected components for clumps identification in eCognition<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="MsoNormal">
<span lang="EN-US">This post is inspired by a <a href="http://blogs.mathworks.com/steve/2010/09/07/almost-connected-component-labeling/" target="_blank">post </a>by <a href="http://blogs.mathworks.com/steve/" target="_blank">Steve Eddins</a>, who works at MathWorks, the company that builds MATLAB. He is a software development manager there and a co-author of the book "<a href="http://www.amazon.com/Digital-Image-Processing-Using-MATLAB/dp/0130085197" target="_blank">Digital Image Processing Using MATLAB</a>".</span> I use both MATLAB and eCognition, so I wondered whether the same thing could be done in eCognition. eCognition has basic morphological operators like dilation and erosion. Advanced morphological operators, such as opening by reconstruction, connected-component labeling, or skeletonization, are not available in eCognition.</div>
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal">
<u><span lang="EN-US">The
problem of almost connected components:<o:p></o:p></span></u></div>
<div class="MsoNormal">
<u><span lang="EN-US"><br /></span></u></div>
<div class="MsoNormal">
<span lang="EN-US">Here is a simple synthetic image containing a number of circular blobs.</span> How can we label and measure the three clumps instead of the individual smaller circles? Two circles are "almost connected" if they are within 25 pixel units of each other.<o:p></o:p></div>
<div class="MsoNormal">
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghILwn0jFk9DEG9p_fnDlG30ez3AGdyoXs9rC0qjju3WlTEG4KEq6ENnCWyJbByVgQe_9ajiBZmsAZvnQ1QOfZnOhORB7rl0w7oM5bt7KbLSy3LRVTopuLVgCo2ouOOwUwz5ub_FZX_cE/s1600/1.1.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghILwn0jFk9DEG9p_fnDlG30ez3AGdyoXs9rC0qjju3WlTEG4KEq6ENnCWyJbByVgQe_9ajiBZmsAZvnQ1QOfZnOhORB7rl0w7oM5bt7KbLSy3LRVTopuLVgCo2ouOOwUwz5ub_FZX_cE/s1600/1.1.PNG" height="320" width="294" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Binary circles</td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLqcRmZK2Gjl4RbSwRj_yeyI7hNgaWSw0LbL9dZn3I1Qd2y_q5UdVSzQmS_WeBHJXOMn2JIVZAva-6whyphenhyphenCnkLYkDQT_0JLxpRboDnkB08M0Jsgaf6X0A_LcH2XG-P8eKHiaokRcN9s2G0/s1600/1.2.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLqcRmZK2Gjl4RbSwRj_yeyI7hNgaWSw0LbL9dZn3I1Qd2y_q5UdVSzQmS_WeBHJXOMn2JIVZAva-6whyphenhyphenCnkLYkDQT_0JLxpRboDnkB08M0Jsgaf6X0A_LcH2XG-P8eKHiaokRcN9s2G0/s1600/1.2.PNG" height="320" width="295" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Connected components labeling</td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgVGnEmqc2UD7Jb0q69qMaz2amFn9uI4pTVeOQyomnTTwzkbc2scdYGQGUYpCPQbtg-sgP-Du-wwmAwOgRrRJl1HfCs6XKUNBjoRyVQ-AuOYyDL49ucHowLmQuvakqttA3VbkwJ0Kuftzg/s1600/1.3.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgVGnEmqc2UD7Jb0q69qMaz2amFn9uI4pTVeOQyomnTTwzkbc2scdYGQGUYpCPQbtg-sgP-Du-wwmAwOgRrRJl1HfCs6XKUNBjoRyVQ-AuOYyDL49ucHowLmQuvakqttA3VbkwJ0Kuftzg/s1600/1.3.PNG" height="320" width="300" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Almost connected components labeling</td></tr>
</tbody></table>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
Of course it can be solved in eCognition. But for this, you have to be familiar with several eCognition concepts: PPO, object variables, multi-level representation, and temporary layers. My workflow for the solution is as follows:</div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
</div>
<ul style="text-align: left;">
<li>Use multi-threshold segmentation to get the circles</li>
<li>Use the distance map algorithm to get a binary distance map</li>
<li>Use chessboard segmentation to get pixel-level unclassified objects</li>
<li>Use multi-threshold segmentation based on the distance map to get the clumps</li>
<li>Copy the level above</li>
<li>Use the object variable concept to assign each clump a unique ID</li>
<li>Convert to sub-objects to get the original circles at the upper level</li>
</ul>
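To make the "almost connected" idea concrete outside eCognition, here is a small pure-Python sketch (my own illustration, not the eCognition rule set): foreground pixels are merged with union-find whenever a chain of pixels links them with every hop no longer than a gap threshold.

```python
def almost_connected_labels(pixels, gap):
    """Label clumps of 'almost connected' pixels: two pixels end up in the
    same clump when a chain of pixels joins them with every hop no longer
    than gap (a gap of 1.5 reproduces ordinary 8-connectivity)."""
    pixels = list(pixels)
    parent = list(range(len(pixels)))

    def find(i):  # union-find root lookup with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # brute-force pairwise check; fine for small synthetic images
    for i in range(len(pixels)):
        for j in range(i + 1, len(pixels)):
            (r1, c1), (r2, c2) = pixels[i], pixels[j]
            if (r1 - r2) ** 2 + (c1 - c2) ** 2 <= gap * gap:
                parent[find(i)] = find(j)  # merge the two clumps

    roots, labels = {}, {}
    for i, p in enumerate(pixels):
        labels[p] = roots.setdefault(find(i), len(roots) + 1)
    return labels
```

In the blog's example the gap would be 25 pixel units. On real images you would instead dilate the binary mask by half the gap and run ordinary connected-component labeling, which is exactly what the distance-map-plus-thresholding steps above achieve in eCognition.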
<br />
<div class="MsoNormal">
<span lang="EN-US">I will post the rule set after some time. I have given you enough hints on how to proceed. Get your hands dirty!</span></div>
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj-I5RuuciZsHFV6da6jYUDygPkmIx-_YLk3KtAkx470w-kKjKAWewAvHm0pr0NcxizrszQVH8M5Yz7aoW_FOfKJ53eImtv5-vZ7JgEy8YhK0BLUGqWltcwG2efNCgWMeksQDDFxNPm6Ss/s1600/2.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj-I5RuuciZsHFV6da6jYUDygPkmIx-_YLk3KtAkx470w-kKjKAWewAvHm0pr0NcxizrszQVH8M5Yz7aoW_FOfKJ53eImtv5-vZ7JgEy8YhK0BLUGqWltcwG2efNCgWMeksQDDFxNPm6Ss/s1600/2.PNG" height="640" width="632" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: 13.3333339691162px;">Almost connected components labelling within eCognition</span></td></tr>
</tbody></table>
<div class="MsoNormal">
<span lang="EN-US">The concept of "almost connected components" is applicable in remote sensing, for example for clustering buildings detected in images when analyzing the micro-climate of urban areas. There can be various other applications. Can you think of any?<o:p></o:p></span></div>
</div>
Anonymoushttp://www.blogger.com/profile/06633342383500839837noreply@blogger.com2tag:blogger.com,1999:blog-226544055325165100.post-69053679859020503372014-10-27T21:29:00.000+01:002014-11-15T09:52:05.982+01:00eCognition tutorial: Image object fusion in eCognition: Example of water bodies classification<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="separator" style="clear: both; text-align: center;">
</div>
eCognition is powerful software for the analysis of remote sensing images. Many people have the false impression that eCognition is all about segmentation. That's not true. Segmentation is just one part of a bigger chain that may involve segmentation, temporary classification, fusion, exploitation of contextual information, and so on. Many people believe that segmentation should be perfect the first time, which is a fallacy. You can modify your segments as more information becomes available during the analysis. Typically, in any of my projects, I use segmentation at least 10 times. Yes, at least 10 times. One of the underutilized features of eCognition is the "<a href="https://www.blogger.com/null" name="OLE_LINK6"></a><a href="https://www.blogger.com/null" name="OLE_LINK5"></a><a href="https://www.blogger.com/null" name="OLE_LINK4">Image Object Fusion</a>" algorithm. The algorithm is an essential part of the "iterative segmentation and classification" approach of GEOBIA. In this post, I will show an example of image object fusion for the classification of water bodies in a small remote sensing image. The steps involved are:<br />
<div class="MsoNormal">
</div>
<ol style="text-align: left;">
<li>Segmentation</li>
<li>Initial classification of water</li>
<ol>
<li><span style="text-indent: -0.25in;">Establish a customized feature </span><span style="text-indent: -0.25in;">RatioNIR</span></li>
<ul>
<li><span style="text-indent: -0.25in;">RatioNIR = (meanNIR / (meanR + meanG + meanB + meanNIR)) * 100</span></li>
</ul>
<li><span style="text-indent: -0.25in;">Classify objects that satisfy the properties:</span></li>
<ul>
<li><span style="text-indent: -144px;">RatioNIR less than 15</span></li>
<li><span style="text-indent: -144px;">Area greater than 10 pixels (to avoid small shadows)</span></li>
</ul>
</ol>
<li><span style="text-indent: -24px;">Image object fusion using PPO (parent process object) to get the whole water body</span></li>
<ol>
<li><span style="text-indent: -24px;">Starting from the water objects classified in step 2, check the neighboring objects and merge a neighbor into the water object if the difference in RatioNIR between them is less than 5. Run this in a loop.</span></li>
<li><span style="text-indent: -24px;">Process all water objects.</span></li>
</ol>
</ol>
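The thresholds in steps 2 and 3 are easy to express in code. The Python sketch below only illustrates the decision logic (the dictionary fields and function names are mine; inside eCognition this is done with a customized feature and the Image Object Fusion fitting function):

```python
def ratio_nir(mean_r, mean_g, mean_b, mean_nir):
    """Customized feature from step 2.1: the NIR share of total brightness, in percent."""
    return 100.0 * mean_nir / (mean_r + mean_g + mean_b + mean_nir)

def is_water(obj):
    """Step 2.2: initial water candidates (low NIR share, not a tiny shadow)."""
    r = ratio_nir(obj["meanR"], obj["meanG"], obj["meanB"], obj["meanNIR"])
    return r < 15 and obj["area"] > 10

def should_fuse(water_obj, neighbour):
    """Step 3: fuse a neighbour into a water object when their RatioNIR
    values differ by less than 5 (the fitting function threshold)."""
    r1 = ratio_nir(water_obj["meanR"], water_obj["meanG"], water_obj["meanB"], water_obj["meanNIR"])
    r2 = ratio_nir(neighbour["meanR"], neighbour["meanG"], neighbour["meanB"], neighbour["meanNIR"])
    return abs(r1 - r2) < 5
```

Water reflects very little near-infrared light, which is why a low RatioNIR flags water candidates, while the small difference threshold lets the fusion loop grow the initial seeds into the whole water body.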
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmqmUxfhxWuGYz-pCMlTwvGKfbgPwitsk5Le_jjdktF2GtJ2bJJU0x-2bC_GKfGnkubQOu51DOS02h8VzBGzVWbLnSnf7aQ7TcsHTjChMxAPcYt9WyVOuseThR6_1rCxgzENg_G4Gtp5E/s1600/2.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmqmUxfhxWuGYz-pCMlTwvGKfbgPwitsk5Le_jjdktF2GtJ2bJJU0x-2bC_GKfGnkubQOu51DOS02h8VzBGzVWbLnSnf7aQ7TcsHTjChMxAPcYt9WyVOuseThR6_1rCxgzENg_G4Gtp5E/s1600/2.PNG" height="224" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Original image</td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj-aVEl-5f2MOqGeiGs_fNxh1G0-nAKajXZsySXn5fvr5zzNs4OzEsNVL8drt2TzqfboOySFy-LwfbN0R6I0bOgkYxEuHDoKy558ugqg-yiKJYcnSZL4E0qb2jasY9NHFr_jMHR2CoujfY/s1600/3.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj-aVEl-5f2MOqGeiGs_fNxh1G0-nAKajXZsySXn5fvr5zzNs4OzEsNVL8drt2TzqfboOySFy-LwfbN0R6I0bOgkYxEuHDoKy558ugqg-yiKJYcnSZL4E0qb2jasY9NHFr_jMHR2CoujfY/s1600/3.PNG" height="221" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Initial segmentation</td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjEozP3Qi8NbNbbA6zvWObPif169are0vHXVoUMpOXBU-PU3WRrkHgBdz0zs9sv92hB7ovHE59hYvg9U1G20nhy-mBeKbkwlSr7D61tYG1bN4rlO7q_rYEuxCe0a9Ny_sW3rUVfhgxwwhU/s1600/4.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjEozP3Qi8NbNbbA6zvWObPif169are0vHXVoUMpOXBU-PU3WRrkHgBdz0zs9sv92hB7ovHE59hYvg9U1G20nhy-mBeKbkwlSr7D61tYG1bN4rlO7q_rYEuxCe0a9Ny_sW3rUVfhgxwwhU/s1600/4.PNG" height="226" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Initial water classification</td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhYbs7UWQYr4VHOTS2XnHukjUbj8DWKwoj8xun3kVGO7mt_9cryFlJHX46mSiM-rj9RrwL1Lfiyq2QHyeEjbaJ1ATjpRf3KfHrCQAqD-H1B4_0RUejZqxnvg6adIRtx2eOAMStOj5SQ34Y/s1600/7.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhYbs7UWQYr4VHOTS2XnHukjUbj8DWKwoj8xun3kVGO7mt_9cryFlJHX46mSiM-rj9RrwL1Lfiyq2QHyeEjbaJ1ATjpRf3KfHrCQAqD-H1B4_0RUejZqxnvg6adIRtx2eOAMStOj5SQ34Y/s1600/7.PNG" height="225" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Final segmentation after image object fusion</td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQGEvCl3F_kHv1qLRjUfPUzJYUZlxTrnJOFOqnanMLOm7qN_f2qxIFPPdRlAf1768cCjtspW8uTzL7PVFZR7ZrbWmxn5LTHtT7IFB2WZg2wyiOS1utHSDYb8L4NoyxjQINHtMEdEZMj6s/s1600/5.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQGEvCl3F_kHv1qLRjUfPUzJYUZlxTrnJOFOqnanMLOm7qN_f2qxIFPPdRlAf1768cCjtspW8uTzL7PVFZR7ZrbWmxn5LTHtT7IFB2WZg2wyiOS1utHSDYb8L4NoyxjQINHtMEdEZMj6s/s1600/5.PNG" height="219" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Final water classification using image object fusion</td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiK7adB4r2s6FFNlU2oKYYzxmKwkEiO39NNV1fT6u6EHsLe-NezeKKtoRRPhCWyXsa4PuBIhxwDKw_7GV-rJQzXNZRJXciHxM32j4Tl5SK-EgkJTd8cSkQhnWG2ASQ-XtxzQJwl7N0fp8k/s1600/6.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiK7adB4r2s6FFNlU2oKYYzxmKwkEiO39NNV1fT6u6EHsLe-NezeKKtoRRPhCWyXsa4PuBIhxwDKw_7GV-rJQzXNZRJXciHxM32j4Tl5SK-EgkJTd8cSkQhnWG2ASQ-XtxzQJwl7N0fp8k/s1600/6.PNG" height="271" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Very powerful image object fusion algorithm<br />
<br />
<div style="text-align: left;">
<span style="font-size: small; text-indent: -24px;">Step 3 is a one-liner algorithm using "Image Object Fusion". Notice the various parameters: class filter, candidate classes, fitting function threshold, use absolute fitting value, and weighted sum. To help you understand, I have made a GIF to illustrate what is going on.</span></div>
<div style="text-align: left;">
<span style="font-size: small; text-indent: -24px;"><br /></span></div>
<div style="text-align: left;">
<span style="font-size: small; text-indent: -24px;">Blue: Initial water</span></div>
<div style="text-align: left;">
<span style="font-size: small; text-indent: -24px;">Yellow: Active object</span></div>
<div style="text-align: left;">
<span style="font-size: small; text-indent: -24px;">Red: Fused water after image object fusion</span></div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
<div class="MsoNormal">
<a href="https://www.blogger.com/null" name="OLE_LINK2"></a><a href="https://www.blogger.com/null" name="OLE_LINK1"><code><b><span style="background: white; color: red; font-size: 10.0pt; line-height: 115%; mso-fareast-font-family: Calibri; mso-fareast-theme-font: minor-latin;"><img alt="Gif" src="http://s14.postimg.org/jg3qjqmu9/dj0x5.gif" /></span></b></code></a><o:p></o:p></div>
<div class="MsoNormal">
<a href="https://www.blogger.com/null" name="OLE_LINK1"><code><b><span style="background: white; color: red; font-size: 10.0pt; line-height: 115%; mso-fareast-font-family: Calibri; mso-fareast-theme-font: minor-latin;"><br /></span></b></code></a></div>
<div class="MsoNormal">
<a href="https://www.blogger.com/null" name="OLE_LINK1"><code><b><span style="background: white; color: red; font-size: 10.0pt; line-height: 115%; mso-fareast-font-family: Calibri; mso-fareast-theme-font: minor-latin;"><br /></span></b></code></a></div>
<div class="MsoNormal">
<span style="font-size: small; text-indent: -24px;">Here is the whole Rule set for water classification with image object fusion.</span></div>
<div class="MsoNormal">
<span style="font-size: small; text-indent: -24px;"><br /></span></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiq4Or2W_AB2-uqwsrw_NXTx3T8mIAjiWQ_IeUOTMnhoJ9tQYkvADPcweq3c4Uo5tkTAKsUsb6_BlbL3f3iB7F3XWY14cfJP3ZDlZT_3ncVa7lIzkuR3uedvXahUZcsvtc2Kn7yiKdJSjI/s1600/1.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiq4Or2W_AB2-uqwsrw_NXTx3T8mIAjiWQ_IeUOTMnhoJ9tQYkvADPcweq3c4Uo5tkTAKsUsb6_BlbL3f3iB7F3XWY14cfJP3ZDlZT_3ncVa7lIzkuR3uedvXahUZcsvtc2Kn7yiKdJSjI/s1600/1.PNG" height="216" width="400" /></a></div>
</div>
</td></tr>
</tbody></table>
</div>
Anonymoushttp://www.blogger.com/profile/06633342383500839837noreply@blogger.com1tag:blogger.com,1999:blog-226544055325165100.post-80664507881267324692014-10-10T14:14:00.002+02:002014-11-15T09:51:47.106+01:00QGIS tutorial: Display with color symbology from data defined properties of shape files<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="separator" style="clear: both; text-align: center;">
</div>
How many of you have this problem: you classify your images in eCognition (or any other software) and export the classified image as a shapefile. Now you want to view the shapefile in a GIS environment with the same colour for each class that was used in eCognition. Manually matching RGB colour values from eCognition to shape colours for each class is a tedious task, especially when you have many classes.<br />
<br />
This problem can be solved with a little trick.<br />
<ul><br />
<li>Do your classification in eCognition. Assign class colors as you wish.</li>
<br />
<li>Export the classification result as a shapefile with class attributes.</li>
<br />
<li>Export the classification result as a thematic raster. By doing so, you will also get a CSV file containing the RGB colour codes for each class.</li>
<br />
<li>Load your shapefile in a GIS (I use QGIS, which is free. I am a poor guy, therefore I always love free stuff.)</li>
<br />
<li>Load the CSV file in QGIS</li>
<br />
<li>Perform an attribute join of the shapefile and the CSV file, using the class as the key attribute. With this you will get the RGB values attached to the attribute table of the shapefile.</li>
<br />
<li>Double-click the colour symbology of the shapefile and use the data-defined properties of its colour attributes.</li>
<br />
<li>Build an expression for the colour, selecting the respective colour attribute columns from the shapefile.</li>
</ul>
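The join in the steps above is just a class-name-to-colour lookup. Here is a minimal Python sketch of the same idea; the CSV layout, column names, and class values below are assumptions for illustration, not the exact eCognition export format:

```python
import csv
import io

# Hypothetical eCognition class-colour CSV (normally written next to the
# exported thematic raster); the columns here are invented for illustration.
csv_text = """Class,Red,Green,Blue
Water,0,0,255
Forest,0,128,0
Urban,255,0,0
"""

# Build a lookup table: class name -> (R, G, B)
colour_by_class = {
    row["Class"]: (int(row["Red"]), int(row["Green"]), int(row["Blue"]))
    for row in csv.DictReader(io.StringIO(csv_text))
}

# Attribute table of the exported shapefile (one record per object).
shape_records = [
    {"id": 1, "Class": "Water"},
    {"id": 2, "Class": "Urban"},
]

# The "join": attach the RGB triple to every shapefile record by class name,
# which is what the QGIS attribute join does for you.
for rec in shape_records:
    rec["R"], rec["G"], rec["B"] = colour_by_class[rec["Class"]]

print(shape_records[0])  # {'id': 1, 'Class': 'Water', 'R': 0, 'G': 0, 'B': 255}
```

In QGIS itself, the data-defined colour expression is then typically built from the joined columns, for example with the color_rgb() expression function.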
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjWhO7WXTPaanWffNPWq2UiyBr2-eghnsoTcJBi_rlcJQbH56XPXo3hdMn7lZX4Way4KancxVt1lRkMtWc8qOWiw-R7mHrnz680Dz8ZNorkIb43q47EeLAeii3wgwaGMvJ-h7b3AaqSvVE/s1600/Capture_7.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjWhO7WXTPaanWffNPWq2UiyBr2-eghnsoTcJBi_rlcJQbH56XPXo3hdMn7lZX4Way4KancxVt1lRkMtWc8qOWiw-R7mHrnz680Dz8ZNorkIb43q47EeLAeii3wgwaGMvJ-h7b3AaqSvVE/s1600/Capture_7.PNG" height="303" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: small; text-align: left;">import CSV file into QGIS</span></td></tr>
</tbody></table>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8Nrii0XNnQqfnDoyo2wSP7ZkGVqAyXUnZ6gGBwyJ47N10OpDli1O8sQVHwrbj5cVRxOkw7gPQSObfdhkEzxZgW8YXMQovpv5BYdpglLUJTRHWHUa2IYPKyvMbKtMvZt3cTZD1mu7YUJY/s1600/Capture_6.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8Nrii0XNnQqfnDoyo2wSP7ZkGVqAyXUnZ6gGBwyJ47N10OpDli1O8sQVHwrbj5cVRxOkw7gPQSObfdhkEzxZgW8YXMQovpv5BYdpglLUJTRHWHUa2IYPKyvMbKtMvZt3cTZD1mu7YUJY/s1600/Capture_6.PNG" height="353" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: small; text-align: left;">Attribute of shape file after performing join</span></td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgesR4WzEgBxDsZGlV9hdDC7JgUOGic0-_qU6cN_KerRfYWZV8LSAPsea4nu9ZLtaPvU3zPt7IiOdM3_ppno-pCtL_EpXv-yN9u5ge8VFtw0pqjL-lB9ksWd4uZM7-jDmQpsGq9mW9H7tw/s1600/Capture_5.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgesR4WzEgBxDsZGlV9hdDC7JgUOGic0-_qU6cN_KerRfYWZV8LSAPsea4nu9ZLtaPvU3zPt7IiOdM3_ppno-pCtL_EpXv-yN9u5ge8VFtw0pqjL-lB9ksWd4uZM7-jDmQpsGq9mW9H7tw/s1600/Capture_5.PNG" height="411" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: small; text-align: left;">Data defined properties</span></td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh47dSDRTDsrxWo_c70hxxqVEju_7dtpQ-INPw-kW_0Gmgvkkoa3jlJpAyPLGZxy2_Pwhh2jr8OoHvB4LF9KZ_1B90E_qH1rpbqmjeBBUwPDsSotPyj5wr0s7tlvRNydNLGFBkL8WSpSWU/s1600/Capture._2.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh47dSDRTDsrxWo_c70hxxqVEju_7dtpQ-INPw-kW_0Gmgvkkoa3jlJpAyPLGZxy2_Pwhh2jr8OoHvB4LF9KZ_1B90E_qH1rpbqmjeBBUwPDsSotPyj5wr0s7tlvRNydNLGFBkL8WSpSWU/s1600/Capture._2.PNG" height="321" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td class="tr-caption" style="font-size: 13px;"><span style="font-size: small; text-align: left;">data defined properties for color</span><br />
<div>
<span style="font-size: small; text-align: left;"><br /></span></div>
</td></tr>
</tbody></table>
</td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiFGUAP9yvlDZDKetIKHCtshMDJmmk6VQZ9FdNaE3gS1T5apZHtadfD32ji7dpYHJp9lgX6phvbZf_iOEkJ21ZZtBxkbI63Mua_A5KEHlGvaMB6jgbBLJKHhO95EWqbYLXvo6JlGpprYyM/s1600/Capture._1PNG.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiFGUAP9yvlDZDKetIKHCtshMDJmmk6VQZ9FdNaE3gS1T5apZHtadfD32ji7dpYHJp9lgX6phvbZf_iOEkJ21ZZtBxkbI63Mua_A5KEHlGvaMB6jgbBLJKHhO95EWqbYLXvo6JlGpprYyM/s1600/Capture._1PNG.PNG" height="400" width="362" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td class="tr-caption" style="font-size: 13px;"><span style="font-size: small; text-align: left;">data defined properties for color</span></td></tr>
</tbody></table>
</td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgw3Q_P7gmxPpy0fygkTuW6ZqY3VlC6XcCP8rhVEg8hIwkFT8DBHpAhb3pP_A6F0387gT6twfi5WnBMgvuxyv9S5SZ1sgGi3uS8wMzrA2I2vBLkzc-JaGwCs0b7Rw5uEdcNuebFTRwC0ho/s1600/before.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgw3Q_P7gmxPpy0fygkTuW6ZqY3VlC6XcCP8rhVEg8hIwkFT8DBHpAhb3pP_A6F0387gT6twfi5WnBMgvuxyv9S5SZ1sgGi3uS8wMzrA2I2vBLkzc-JaGwCs0b7Rw5uEdcNuebFTRwC0ho/s1600/before.PNG" height="470" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: small; text-align: left;">Normal display without color symbology</span></td></tr>
</tbody></table>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjyhZqrvgFqFLDp4yCbRs9NGN7fUnWW_qWxiutn1vWEchp8tUBYdpuIE6SESGhudrDbmOVu7rJEmhyphenhyphen90tarfU0vTTbKkWD2njjqNejNcz8yXmkF4lm7lPXG0lOLtuaKnG1-BPDWIaoKZQ0/s1600/After.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto; text-align: center;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjyhZqrvgFqFLDp4yCbRs9NGN7fUnWW_qWxiutn1vWEchp8tUBYdpuIE6SESGhudrDbmOVu7rJEmhyphenhyphen90tarfU0vTTbKkWD2njjqNejNcz8yXmkF4lm7lPXG0lOLtuaKnG1-BPDWIaoKZQ0/s1600/After.PNG" height="474" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Display with color symbology</td></tr>
</tbody></table>
<br />
<br />
<br />
<br />
<br /></div>
<br />
</div>
Anonymoushttp://www.blogger.com/profile/06633342383500839837noreply@blogger.com0tag:blogger.com,1999:blog-226544055325165100.post-17847830805577001282014-10-08T19:24:00.000+02:002014-11-15T09:52:25.863+01:00eCognition tutorial: Exporting eCognition features as images with array functionalities<div dir="ltr" style="text-align: left;" trbidi="on">
There are instances when one would like to export different features from eCognition as images in order to perform some tasks outside eCognition. eCognition doesn't provide a way to export many object features as images directly; it is only possible to export a thematic raster in which each object has a unique ID, with the feature values stored in a separate CSV file. To convert that into images, one has to map the ID raster TIFF file to the data from the CSV file.<br />
<br />
If you want to get eCognition features as images in an automatic way that can export any number of features in one go, here is one. For this purpose, we are going to utilize the array-handling capabilities of eCognition and export each feature as a separate TIFF file (My_1.tiff, My_2.tiff, and so on). The rule set can be downloaded <a href="https://sites.google.com/site/shreshai/file-cabinet/RuleSet_export_image_1.dcp?">here.</a> The rule set is flexible in the sense that you just have to update an array with your list of features of interest; the number of features can be anything (10, 20 or even 100). We will merge the exported images afterwards in the open-source QGIS.<br />
<ul style="text-align: left;"><br />
<li>Perform a segmentation</li>
<li>Create an array and store the features you want to export as images</li>
<li>Loop over the array</li>
</ul>
<ol style="text-align: left;"><ol>
<li>For each feature, create a temporary image file</li>
<li>Export the temporary file under a unique name</li>
<li>Repeat until all features in the array are executed</li>
</ol>
</ol>
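The loop in steps 1-3 is eCognition rule-set logic, but the same pattern is easy to see in Python. The sketch below fakes the feature images with random arrays and writes .npy files instead of GeoTIFFs; the feature names and values are invented for illustration:

```python
import tempfile
from pathlib import Path

import numpy as np

# Hypothetical per-object feature images (in eCognition these would come from
# features such as mean layer values, NDVI, brightness, ...).
features = {
    "mean_layer1": np.random.rand(4, 4),
    "ndvi": np.random.rand(4, 4),
    "brightness": np.random.rand(4, 4),
}

out_dir = Path(tempfile.mkdtemp())
written = []

# Loop over the "array" of features and export each one under a unique,
# numbered name (My_1, My_2, ...), mirroring the eCognition rule set.
for i, (name, img) in enumerate(features.items(), start=1):
    path = out_dir / f"My_{i}.npy"  # a GIS workflow would write GeoTIFFs instead
    np.save(path, img)
    written.append(path)

print([p.name for p in written])  # ['My_1.npy', 'My_2.npy', 'My_3.npy']
```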
<ul style="text-align: left;">
<li>Now, in QGIS, we use GDAL to stack the individual images into one single image, using GDAL's merge function (Raster > Miscellaneous > Merge).<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgj5AG5uRrqe4vf0UmPYZe0A2ai-vOEYYBTybcgiz-zzC6pHsBlX3-Of_wbFKkdu81VjNJoAyYTJ1B67c_D99HuaVrTqUtRdyf3rgMYp98GoDyYi5lti8k9Y5rV5SgOq7OS5DVNwfvCnTk/s1600/3.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgj5AG5uRrqe4vf0UmPYZe0A2ai-vOEYYBTybcgiz-zzC6pHsBlX3-Of_wbFKkdu81VjNJoAyYTJ1B67c_D99HuaVrTqUtRdyf3rgMYp98GoDyYi5lti8k9Y5rV5SgOq7OS5DVNwfvCnTk/s1600/3.PNG" height="442" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Complete ruleset within eCognition</td></tr>
</tbody></table>
</li>
</ul>
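What GDAL merge produces is just a band stack. If the per-feature images are already in memory as arrays (read with GDAL, rasterio, or similar), the equivalent stacking for later classification can be sketched with numpy; the band values below are made up:

```python
import numpy as np

# Three single-band feature images of the same size (stand-ins for the
# My_1.tiff ... My_3.tiff files exported above).
bands = [np.full((5, 5), v, dtype=float) for v in (1.0, 2.0, 3.0)]

# Stack them into one (rows, cols, n_features) cube -- the in-memory
# analogue of what GDAL merge writes to a single multiband file.
stacked = np.dstack(bands)
print(stacked.shape)  # (5, 5, 3)

# Reshaping to (n_pixels, n_features) gives the sample matrix that
# scikit-learn classifiers expect.
X = stacked.reshape(-1, stacked.shape[-1])
print(X.shape)        # (25, 3)
```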
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgeoUInsW3eLdN7D1EEPH6LJwA2dF4qJWggzQh923sgS24KrD4WyC9hkqaNeMPlT5s9IW-5-Jkt3uw28Mz3lWPnAMZQ3vH11Ty5eeZQIHAXgAJSRmfLQ6hcOr27RN2EDZ33ztyr7IPrwbg/s1600/4.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgeoUInsW3eLdN7D1EEPH6LJwA2dF4qJWggzQh923sgS24KrD4WyC9hkqaNeMPlT5s9IW-5-Jkt3uw28Mz3lWPnAMZQ3vH11Ty5eeZQIHAXgAJSRmfLQ6hcOr27RN2EDZ33ztyr7IPrwbg/s1600/4.PNG" height="262" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Ruleset for exporting features as images </td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjyvM34sTG5oH76XDvlXJOkkf3eu5BHKQuV4_CEFUCb5XTk4vyFOzkcC8PIX3ElfU10YaJz40wrk2IwoUma6QJNm9ZSH7SPi6MPn8UqYPW56N__S2Goy-fLJbNH5htgTYWeK31P9n8VpGs/s1600/2.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjyvM34sTG5oH76XDvlXJOkkf3eu5BHKQuV4_CEFUCb5XTk4vyFOzkcC8PIX3ElfU10YaJz40wrk2IwoUma6QJNm9ZSH7SPi6MPn8UqYPW56N__S2Goy-fLJbNH5htgTYWeK31P9n8VpGs/s1600/2.PNG" height="400" width="252" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: small; text-align: left;">GDALmerge function within QGIS</span></td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh4a7H7UqYiEE3R2MDmhDOQDkZBIRvtY6K66TvdtCbgUUB-DYCDf-UihMzITN8iak3QpmF8XEE-kIMA7J-PhAiNXD7JdIcwu_yxv0YEi4Runzkz1Depoxt-wD-1efmv7j_O2lw44Prro0s/s1600/1.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh4a7H7UqYiEE3R2MDmhDOQDkZBIRvtY6K66TvdtCbgUUB-DYCDf-UihMzITN8iak3QpmF8XEE-kIMA7J-PhAiNXD7JdIcwu_yxv0YEi4Runzkz1Depoxt-wD-1efmv7j_O2lw44Prro0s/s1600/1.PNG" height="480" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: small; text-align: left;">Color composite of three merge features</span></td></tr>
</tbody></table>
The merged feature image produced this way can now be used for classification in ENVI or ERDAS IMAGINE, or with a custom script in Python or MATLAB. Personally, I use such images within Python with the <a href="http://scikit-learn.org/stable/">scikit-learn</a> library.</div>
Anonymoushttp://www.blogger.com/profile/06633342383500839837noreply@blogger.com1tag:blogger.com,1999:blog-226544055325165100.post-88853914261401069202014-08-15T23:41:00.000+02:002014-11-15T09:52:36.544+01:00eCognition tutorial: Exporting eCognition classification file to ENVI<div dir="ltr" style="text-align: left;" trbidi="on">
The problem:<br />
<br />
<em>I was wondering if anyone could help me export a usable file from eCognition Developer for use in ENVI 5.0?</em><em> I've classified an image using Multiresolution Segmentation followed by the Classification algorithm using selected samples from the Standard Nearest Neighbour, and everything worked so far. But when I try to export it as an ENVI-supported file type (e.g. *.tif), the raster seems to have "lost" all the classification, leaving a useless grayscale image! I have been able to open a *.jpg file, but as it is an image it has lost any previous classification from eCognition.</em><br />
<br />
Solution:<br />
<br />
This is not really a problem, but for people who are just starting with ENVI it is a big HEADACHE. It's just a matter of symbology. If you have been using ENVI for a while, you should know that ENVI uses two files for any image: one binary file and one .hdr file, where various pieces of information about the binary image are stored. One can simply open .hdr files with a notepad. Normal raster images have "<em>file type = ENVI Standard</em>", whereas classification images have "<em>file type = ENVI Classification</em>" plus some other information, such as the number of classes, class names and class colours. The classified TIFF file that one exports from eCognition does not have that information, hence ENVI opens it as a normal file. There are values differentiating the classes, but the colour information is lost. This is a problem if you want to do some other operation in ENVI that requires a classification image as input.<br />
<br />
So we need to convert it to the <em>ENVI Classification</em> type somehow. Here is a way to do it.<br />
<ol style="text-align: left;"><br />
<li>Do your classification in eCognition.</li>
<li>Export it using the “export thematic raster files” algorithm in eCognition, with the appropriate <em>Export Type</em>. Don’t forget to select your classes in the <em>Class Filter</em> parameters.</li>
<li>Once you export it, you will have two files: one *.tiff file and one *.csv file. Open the *.csv file; there you will find the class names and the RGB colour for each class.</li>
<li>Open *.tif file in ENVI.</li>
<li>Then File > Save File As > ENVI Standard > Import File > pick the recently opened TIFF file. Give it a name and save it. Now you have changed the TIFF file to an ENVI file, but remember, it is still a normal ENVI Standard file.</li>
<li>Open the file created in step 5. Then File > Edit Header Info > select the file type ENVI Classification. It will prompt you for the number of classes, class names and colour information. Provide this information using the *.csv file that you opened in step 3.</li>
<li>Now you have an ENVI classification image with the same class names and colours as in eCognition. Now smile and go for a coffee.</li>
</ol>
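Editing the header by hand (step 6) can also be scripted. The sketch below only builds the classification-specific header lines from an invented class list; the field names follow the documented ENVI header convention, and you would still append these lines to the existing .hdr file that ENVI wrote:

```python
# Build the classification-specific lines of an ENVI .hdr file from the
# class/colour information in the eCognition CSV (steps 3 and 6 above).
# The class list here is a made-up example.
classes = [
    ("Water", (0, 0, 255)),
    ("Forest", (0, 128, 0)),
]

names = ", ".join(name for name, _ in classes)
lookup = ", ".join(str(c) for _, rgb in classes for c in rgb)

hdr_lines = [
    "file type = ENVI Classification",
    f"classes = {len(classes)}",
    f"class names = {{{names}}}",
    f"class lookup = {{{lookup}}}",
]
print("\n".join(hdr_lines))
```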
<div>
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiX6ih1zUxjEM7Um3XxGD_81r8EoPDhELJzy-eQFVoWl61HWreGMDwO2ZIrYo2AxAqI_DTvfnzHsaWRpcEtKdLEijCmAdLAb9ingJ80cwTupSakvScOim7jZuY5NWYt0jiO0QUADzg4gqA/s1600/1.1.JPG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiX6ih1zUxjEM7Um3XxGD_81r8EoPDhELJzy-eQFVoWl61HWreGMDwO2ZIrYo2AxAqI_DTvfnzHsaWRpcEtKdLEijCmAdLAb9ingJ80cwTupSakvScOim7jZuY5NWYt0jiO0QUADzg4gqA/s1600/1.1.JPG" height="184" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: small; text-align: left;">Example of a CSV file</span></td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhzomeKxi6HjwfT6-MxitwdUTaQm1JurzpBykTSiJBKtlESw5SlS7X_Ma7btL53DEzptcXBuV1SGWrSDl7ZNIItZC-kuLAtuhyphenhyphenpTscwHX-tZVn8jlEIQt_zL01DOPp-j7CG5nHU13mPJNI/s1600/1.JPG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhzomeKxi6HjwfT6-MxitwdUTaQm1JurzpBykTSiJBKtlESw5SlS7X_Ma7btL53DEzptcXBuV1SGWrSDl7ZNIItZC-kuLAtuhyphenhyphenpTscwHX-tZVn8jlEIQt_zL01DOPp-j7CG5nHU13mPJNI/s1600/1.JPG" height="320" width="225" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: small; text-align: left;">Converting tiff file to ENVI file</span></td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhRSl9ouEv8iMtG_C-XWa_qvKvpGvfG_saOhNij6Xork8eZ28OjDDOlS2Re_wkOICypesylZyOSn72qYke3PbYWCEcKYGRCqRB0EGQ6ie28mZQQgNrWnODKzsdJEJEiYvQNQwf0PohVg-w/s1600/2.JPG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhRSl9ouEv8iMtG_C-XWa_qvKvpGvfG_saOhNij6Xork8eZ28OjDDOlS2Re_wkOICypesylZyOSn72qYke3PbYWCEcKYGRCqRB0EGQ6ie28mZQQgNrWnODKzsdJEJEiYvQNQwf0PohVg-w/s1600/2.JPG" height="310" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: small; text-align: left;">Modifying header files</span></td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEisdFwIkdEv6TvQIja0lW9aWd505d3FY-Yr78DtT52Hbwyu9SINQ9JSQ0gbSgm0UwGvQosTTMdZ1F7-DUtptgo3MJCLLkWbZ7fSKIkAuNVwSBMcu51N7fGqJ-x30xOqaDYSXWQZy_7ZOpw/s1600/3.JPG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEisdFwIkdEv6TvQIja0lW9aWd505d3FY-Yr78DtT52Hbwyu9SINQ9JSQ0gbSgm0UwGvQosTTMdZ1F7-DUtptgo3MJCLLkWbZ7fSKIkAuNVwSBMcu51N7fGqJ-x30xOqaDYSXWQZy_7ZOpw/s1600/3.JPG" height="320" width="225" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: small; text-align: left;">Editing class names and color information</span></td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgtJA2j2KqP3A6rY-m7KOFZ10p1vUUsdMvPkxxsksgXhzaAixQNlf6refiDrNSzvCP4tIAFh7qtqGnBV1U8f378YEOtuc3w2I42h-_Zgbfg9-vMKNp12MQiQWcdZq1XSIE-eBoaXOrmSz8/s1600/4.JPG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgtJA2j2KqP3A6rY-m7KOFZ10p1vUUsdMvPkxxsksgXhzaAixQNlf6refiDrNSzvCP4tIAFh7qtqGnBV1U8f378YEOtuc3w2I42h-_Zgbfg9-vMKNp12MQiQWcdZq1XSIE-eBoaXOrmSz8/s1600/4.JPG" height="221" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: small; text-align: left;">Initial and Final image</span></td></tr>
</tbody></table>
</div>
Anonymoushttp://www.blogger.com/profile/06633342383500839837noreply@blogger.com10tag:blogger.com,1999:blog-226544055325165100.post-27301044249924950132014-07-11T01:10:00.000+02:002014-10-10T01:41:23.110+02:00Data exchange between MATLAB and Python: Reading and writing .mat fileswith Python<div dir="ltr" style="text-align: left;" trbidi="on">
I am a MATLAB guy. I love MATLAB. I started with Fortran during my Bachelor's degree and came across MATLAB only in 2010, as part of my MSc studies. Since then I have been using it continuously for various tasks. Just recently I started using Python and I LOVE IT, the biggest reason being that Python is open source and there are lots of Python libraries that can be used. But at times I have to shuffle data between MATLAB and Python, because for some tasks I prefer MATLAB as I am accustomed to it. So if you are in the same boat, here is a simple way to transfer data between MATLAB and Python: use the scipy.io module in SciPy. If you have some old data, or data obtained online, saved in MATLAB's .mat file format, you can simply import it as:<br />
<br />
<i><span style="color: #38761d;">import numpy as np</span></i><br />
<i><span style="color: #38761d;">import scipy.io as sio</span></i><br />
<i><span style="color: #38761d;">mydata = sio.loadmat('mydata.mat')</span></i><br />
<br />
Now mydata contains a dictionary with keys corresponding to the variable names saved in the original mydata.mat. Saving a Python variable to a .mat file is also straightforward:<br />
<br />
<span style="color: #6aa84f;"><i># write one variable</i></span><br />
<span style="color: #6aa84f;"><i>x = np.arange(1,10,1)</i></span><br />
<span style="color: #6aa84f;"><i># file name of the file</i></span><br />
<span style="color: #6aa84f;"><i>fname ='export_from_python.mat'</i></span><br />
<span style="color: #6aa84f;"><i>sio.savemat(fname, {'x':x})</i></span><br />
<br />
When you read export_from_python.mat in MATLAB, you will get the variable 'x'. If you want to write more than one variable:<br />
<br />
<span style="color: #6aa84f;"><i># write two variables</i></span><br />
<span style="color: #6aa84f;"><i>x = np.arange(1,10,1)</i></span><br />
<span style="color: #6aa84f;"><i>y = np.ones((5,5))</i></span><br />
<span style="color: #6aa84f;"><i>fname ='export_from_python_1.mat'</i></span><br />
<span style="color: #6aa84f;"><i>sio.savemat(fname, {'x':x, 'y':y})</i></span><br />
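One subtlety worth knowing on the way back: loadmat returns extra metadata keys alongside your variables, and 1-D vectors come back 2-D because MATLAB arrays are at least two-dimensional. A small round-trip sketch (the file path is just a temporary file for illustration):

```python
import os
import tempfile

import numpy as np
import scipy.io as sio

# Round-trip check: savemat writes a 1-D vector as a MATLAB row vector,
# so loadmat returns it with shape (1, 9).
x = np.arange(1, 10, 1)
fname = os.path.join(tempfile.mkdtemp(), "roundtrip.mat")
sio.savemat(fname, {"x": x})

back = sio.loadmat(fname)         # dict also holds '__header__' etc. metadata keys
print(back["x"].shape)            # (1, 9)
x_restored = back["x"].squeeze()  # squeeze back to a 1-D vector
print(np.array_equal(x, x_restored))  # True
```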
<br />
I do a lot of work with classification of remote sensing images using many different types of machine learning algorithms. Proprietary software like ENVI and ERDAS is not flexible enough for me, as it does not provide an efficient way to tune the algorithm-specific hyper-parameters, and I don't like training models with default parameters. So I do my model training and classification in Python. Here is the typical workflow.<br />
<ol style="text-align: left;">
<li>Import the image and training data into Python using GDAL</li>
<li>Train a machine learning model in Python using scikit-learn</li>
<li>For accuracy assessment I stay within Python, but for visualizing it I export the target and output vectors from Python to MATLAB</li>
<li>Use the 'plotconfusion' function from the Neural Network Toolbox for a visualization of the confusion matrix. Here is the confusion matrix that I got from MATLAB. It's nice, isn't it?</li>
</ol>
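The confusion matrix behind steps 3-4 is cheap to compute in Python before exporting to MATLAB. A pure-Python sketch with invented class vectors (scikit-learn's confusion_matrix does the same in one call; MATLAB's plotconfusion then only adds the visualization):

```python
from collections import Counter

# Hypothetical target (reference) and output (predicted) class vectors, as
# they would come out of a classifier before export to MATLAB.
targets = ["water", "water", "forest", "urban", "forest", "urban"]
outputs = ["water", "forest", "forest", "urban", "forest", "water"]

# Count (target, output) pairs: rows = reference class, cols = predicted class.
pairs = Counter(zip(targets, outputs))
labels = sorted(set(targets))
matrix = [[pairs[(t, o)] for o in labels] for t in labels]

# Overall accuracy = diagonal sum / total samples.
overall_accuracy = sum(pairs[(c, c)] for c in labels) / len(targets)
print(labels)            # ['forest', 'urban', 'water']
print(matrix)            # [[2, 0, 0], [0, 1, 1], [1, 0, 1]]
print(overall_accuracy)  # 0.6666666666666666
```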
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://geotipsandtricks.files.wordpress.com/2014/07/picture1.png" style="margin-left: auto; margin-right: auto;"><img alt="Picture1" class="aligncenter wp-image-325 size-medium" src="http://geotipsandtricks.files.wordpress.com/2014/07/picture1.png?w=293" height="300" width="293" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Confusion matrix</td></tr>
</tbody></table>
</div>
Anonymoushttp://www.blogger.com/profile/06633342383500839837noreply@blogger.com1tag:blogger.com,1999:blog-226544055325165100.post-69954817423469454922014-02-07T12:46:00.000+01:002014-10-10T01:42:56.687+02:00Example of extracting grid nodes in MATLAB<div dir="ltr" style="text-align: left;" trbidi="on">
In the last few posts, I have been writing about morphological image analysis in Python with the open source package scikit-image. In this post I want to show the benefit of morphological image analysis in a real application. I have been working on road extraction from satellite images for the past few weeks, and one of the common tasks is to extract the corner nodes of the road network. So let's assume the road grid has already been extracted and we want to extract the intersections of the grid network. Here I show three approaches, based on synthetic data. Obviously, with real data the task is more complex and might require some modifications.<br />
<br />
Here is the grid. You can generate such a grid with the checkerboard function in MATLAB plus some minor processing (edge detection and hole filling). An intersection of grid lines has the property that there are 4 white pixels in its 4-connected neighborhood. We can exploit this property to extract it. I am going to show two ways in which this can be done and, in addition, a third method based on corner detection.<br />
<br />
<i><span style="color: #6aa84f;">%make a checkerboard</span></i><br />
<i><span style="color: #6aa84f;">I=checkerboard(20, 10,10)>0.5;</span></i><br />
<i><span style="color: #6aa84f;">imshow(I)</span></i><br />
<i><span style="color: #6aa84f;">% detect edge with canny</span></i><br />
<i><span style="color: #6aa84f;">BW = edge(I,'canny');</span></i><br />
<i><span style="color: #6aa84f;">figure, imshow(BW)</span></i><br />
<i><span style="color: #6aa84f;">%make square structural element to fill holes with closing</span></i><br />
<i><span style="color: #6aa84f;">SE = strel('square', 3);</span></i><br />
<i><span style="color: #6aa84f;">BW1 = imclose(BW, SE);</span></i><br />
<i><span style="color: #6aa84f;">figure, imshow(BW1)</span></i><br />
<i><span style="color: #6aa84f;">% skeletonize so that the lines are 1 pixel thick</span></i><br />
<i><span style="color: #6aa84f;">BW2 = bwmorph(BW1,'skel',Inf);</span></i><br />
<i><span style="color: #6aa84f;">figure, imshow(BW2)</span></i><br />
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><img alt="1" class="alignnone wp-image-293" src="http://geotipsandtricks.files.wordpress.com/2014/02/1.png?w=300" height="155" style="line-height: 1.5em; margin-left: auto; margin-right: auto;" width="180" /></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><br /></td></tr>
</tbody></table>
<span style="line-height: 1.5em;"> <a href="http://geotipsandtricks.files.wordpress.com/2014/02/2.png"><img alt="2" class="alignnone wp-image-294" src="http://geotipsandtricks.files.wordpress.com/2014/02/2.png?w=300" height="155" width="180" /></a><a href="http://geotipsandtricks.files.wordpress.com/2014/02/6.png"><img alt="6" class="alignnone wp-image-298" src="http://geotipsandtricks.files.wordpress.com/2014/02/6.png?w=300" height="155" width="180" /></a></span><br />
<br />
<br />
In mathematics and, in particular, functional analysis, convolution is an operation on two functions f and g that produces a third function, typically viewed as a modified version of one of the originals. In simple words, convolving an image means scanning it from left to right and top to bottom and performing a calculation within a local neighborhood defined by the user. Here we use it simply to sum the number of white pixels within a 3x3 cross-shaped neighborhood. As the figure above shows, if a point is a grid intersection then the sum at that point is 5; any white pixel with a sum less than 5 is not an intersection.<br />
<br />
<i><span style="color: #6aa84f;">% with block processing and inline function with convolution</span></i><br />
<i><span style="color: #6aa84f;">tic;</span></i><br />
<i><span style="color: #6aa84f;">kernel = [0 1 0; ... %# Convolution kernel</span></i><br />
<i><span style="color: #6aa84f;">1 1 1; ...</span></i><br />
<i><span style="color: #6aa84f;">0 1 0];</span></i><br />
<i><span style="color: #6aa84f;">sumX = conv2(double(BW2),kernel,'same');</span></i><br />
<i><span style="color: #6aa84f;">result=sumX;</span></i><br />
<i><span style="color: #6aa84f;">% only consider pixels which are in grid image</span></i><br />
<i><span style="color: #6aa84f;">result(BW2==0) = 0;</span></i><br />
<i><span style="color: #6aa84f;">result(sumX<5)=0;</span></i><br />
<i><span style="color: #6aa84f;">[r,c] = find(result>0);</span></i><br />
<i><span style="color: #6aa84f;">toc;</span></i><br />
<i><span style="color: #6aa84f;">% note: find returns [row, col], but plot expects (x, y) = (col, row)</span></i><br />
<i><span style="color: #6aa84f;">figure, imshow(BW2), hold on, plot(c,r,'r*'); title('with Convolution')</span></i><br />
<i><span style="color: #6aa84f;">hold off</span></i><br />
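The same convolution trick ports directly to Python. Here is a sketch using plain NumPy shifts instead of conv2; a tiny synthetic cross stands in for the checkerboard grid:

```python
import numpy as np

# synthetic stand-in for the skeletonized grid: one crossing at (4, 4)
g = np.zeros((9, 9), dtype=np.uint8)
g[4, :] = 1
g[:, 4] = 1

# sum each pixel's cross-shaped neighbourhood (centre + 4 neighbours)
# by adding shifted copies -- equivalent to convolving with the
# cross-shaped kernel from the MATLAB snippet
s = g.astype(int).copy()          # centre
s[1:, :] += g[:-1, :]             # neighbour above
s[:-1, :] += g[1:, :]             # neighbour below
s[:, 1:] += g[:, :-1]             # neighbour to the left
s[:, :-1] += g[:, 1:]             # neighbour to the right

# an intersection is a white pixel whose cross-sum reaches 5
rows, cols = np.where((g == 1) & (s == 5))
print([(int(r), int(c)) for r, c in zip(rows, cols)])  # [(4, 4)]
```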
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://geotipsandtricks.files.wordpress.com/2014/02/3.png" style="margin-left: auto; margin-right: auto;"><img alt="3" src="http://geotipsandtricks.files.wordpress.com/2014/02/3.png?w=300" height="258" width="300" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><em style="font-size: medium; text-align: left;">Way One: With Convolution</em></td></tr>
</tbody></table>
If you work in the field of geo-information and you are still asking what the heck this 'MM' is, then you should seriously get acquainted with this branch of image processing. You can use MM for classification, edge detection, filtering, segmentation, building detection and many other things. Grab a cup of coffee and Google MM in remote sensing; there are tons of things to read. Semester-long courses on MM are taught at various universities, which gives you an idea of how vast the subject is. If you seriously want to learn MM, there is no better book than 'Morphological Image Analysis' by P. Soille. Here we use the erosion operator of MM. Time: 0.011184 seconds.<br />
<br />
<i><span style="color: #6aa84f;">% method two with morphological analysis</span></i><br />
<i><span style="color: #6aa84f;">tic;</span></i><br />
<i><span style="color: #6aa84f;">SE = strel ('disk', 1);</span></i><br />
<i><span style="color: #6aa84f;">result2 = imerode(BW2, SE);</span></i><br />
<i><span style="color: #6aa84f;">[r,c] = find(result2>0);</span></i><br />
<i><span style="color: #6aa84f;">toc;</span></i><br />
<i><span style="color: #6aa84f;">figure, imshow(BW2), hold on, plot(c,r,'b*'), title('With MM Erosion')</span></i><br />
<i><span style="color: #6aa84f;">hold off</span></i><br />
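For completeness, the erosion version can also be sketched in pure NumPy, with no toolbox at all: eroding the skeleton with the 3x3 cross (MATLAB's strel('disk', 1)) keeps exactly the pixels whose four neighbours are all white, i.e. the intersections:

```python
import numpy as np

def erode_cross(img):
    # binary erosion with the 3x3 cross (MATLAB's strel('disk', 1)):
    # a pixel survives only if it and its 4 neighbours are all set
    p = np.pad(img, 1)
    return (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
            & p[1:-1, :-2] & p[1:-1, 2:]).astype(img.dtype)

# synthetic stand-in for the skeletonized grid: one crossing at (4, 4)
g = np.zeros((9, 9), dtype=np.uint8)
g[4, :] = 1
g[:, 4] = 1

e = erode_cross(g)
print(np.argwhere(e).tolist())  # [[4, 4]]
```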
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://geotipsandtricks.files.wordpress.com/2014/02/4.png" style="margin-left: auto; margin-right: auto;"><img alt="4" src="http://geotipsandtricks.files.wordpress.com/2014/02/4.png?w=300" height="258" width="300" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><em style="font-size: medium; text-align: left;">Way Two: </em><em style="font-size: medium; text-align: left;">Mathematical Morphology (MM)</em></td></tr>
</tbody></table>
There are many corner detection techniques, such as Moravec, Harris and SIFT. Here I use the Harris corner detector to find the grid intersection points. Time: 0.094848 seconds.<br />
<br />
<i><span style="color: #6aa84f;">% detect corner by harris corner detector</span></i><br />
<i><span style="color: #6aa84f;">tic;</span></i><br />
<i><span style="color: #6aa84f;">C = corner(BW2,500 );</span></i><br />
<i><span style="color: #6aa84f;">toc;</span></i><br />
<i><span style="color: #6aa84f;">figure,imshow(BW2);</span></i><br />
<i><span style="color: #6aa84f;">hold on</span></i><br />
<i><span style="color: #6aa84f;">plot(C(:,1), C(:,2), 'g*'); title('with harris corner detector')</span></i><br />
<i><span style="color: #6aa84f;">hold off</span></i><br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://geotipsandtricks.files.wordpress.com/2014/02/5.png" style="margin-left: auto; margin-right: auto;"><img alt="5" class="alignnone size-medium wp-image-297" src="http://geotipsandtricks.files.wordpress.com/2014/02/5.png?w=300" height="258" width="300" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><em style="font-size: medium; text-align: left;">Way Three: With Harris Corner Detector</em></td></tr>
</tbody></table>
So you have seen that there are a number of ways to solve a particular problem. All of the above methods detected the intersection points successfully. Among the three, convolution required the longest processing time and MM the shortest; the gain in computational cost between MM and corner detection is nine-fold. I did all this in MATLAB, but it can easily be coded in Python with scikit-image.</div>
Anonymoushttp://www.blogger.com/profile/06633342383500839837noreply@blogger.com0tag:blogger.com,1999:blog-226544055325165100.post-62926289141073048812014-01-23T20:04:00.000+01:002014-10-10T01:43:45.700+02:00Example of Opening by reconstruction in scikit-image<div dir="ltr" style="text-align: left;" trbidi="on">
This blog post is a continuation of the last one. The aim is to show the potential of mathematical morphology (MM) using scikit-image, and its applications. MM is a branch of image processing with wide application in many diverse fields. Erosion and dilation are the basic operators of MM, and they can be combined in different ways to form many other powerful MM operators. MM was devised for binary images, but nowadays it has been extended to grayscale images as well.<br />
<br />
Here I will show you an application of erosion and opening by reconstruction. For the definitions of erosion and opening by reconstruction, please consult any image processing book or look on the web; there are many online tutorials.<br />
<br />
So we have an image. Let us assume that, out of the different blobs we have, we want to keep only blobs with a minimum length of 200 pixels in the horizontal direction. Such expert knowledge is handy in many image processing domains for improving the analysis: a building cannot be less than a certain width, a road should be of a certain width, a bridge should be of a certain width, etc. In this case let's assume the blobs shown are the result of some building detection algorithm. Now we want to improve the result by incorporating expert knowledge using MM.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://geotipsandtricks.files.wordpress.com/2013/11/figure_4.png" style="margin-left: auto; margin-right: auto;"><img alt="figure_4" src="http://geotipsandtricks.files.wordpress.com/2013/11/figure_4.png?w=300" height="298" width="300" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><br /></td></tr>
</tbody></table>
So first we erode the image with a linear structuring element of length 200 pixels, and then, for whatever remains, we recover the original shape of the blobs that was distorted by the erosion. For that purpose we use opening by reconstruction. Here is the code; enjoy, and don't forget to play around with Python.<br />
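The erode-then-reconstruct idea is easiest to see in one dimension. In this NumPy sketch (my own toy version, not the scikit-image implementation), a row contains runs of length 3 and 6; erosion with a flat line of length 5 wipes out the short run, and reconstruction regrows the surviving residue back to the long run's original extent:

```python
import numpy as np

def erode_line(row, L):
    # 1-D erosion with a flat line of length L (origin at the left end):
    # position i survives only if the L pixels starting at i are all set
    w = np.lib.stride_tricks.sliding_window_view(row, L)
    out = np.zeros_like(row)
    out[:w.shape[0]] = w.min(axis=1)
    return out

def reconstruct(marker, mask):
    # morphological reconstruction: dilate the marker repeatedly,
    # clipping by the mask, until nothing changes
    prev = np.zeros_like(marker)
    cur = marker.copy()
    while not np.array_equal(prev, cur):
        prev = cur
        grow = cur.copy()
        grow[1:] |= cur[:-1]
        grow[:-1] |= cur[1:]
        cur = grow & mask
    return cur

mask = np.array([0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0], dtype=np.uint8)
marker = erode_line(mask, 5)        # only the run of 6 leaves a residue
opened = reconstruct(marker, mask)  # ...which regrows to the full run
print(opened.tolist())  # [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0]
```

The run of length 3 is gone, while the run of length 6 is recovered exactly: this is how the 200-pixel constraint on the blobs works in two dimensions.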
<br />
<a href="http://geotipsandtricks.files.wordpress.com/2013/11/figure_5.png"><img alt="figure_5" src="http://geotipsandtricks.files.wordpress.com/2013/11/figure_5.png?w=300" height="223" width="300" /></a><a href="http://geotipsandtricks.files.wordpress.com/2013/11/figure_6.png"><img alt="figure_6" src="http://geotipsandtricks.files.wordpress.com/2013/11/figure_6.png?w=300" height="223" width="300" /></a><br />
<br />
<br />
<i><span style="color: #6aa84f;"># -*- coding: utf-8 -*-</span></i><br />
<i><span style="color: #6aa84f;">"""</span></i><br />
<i><span style="color: #6aa84f;">Created on Mon Nov 18 15:42:21 2013</span></i><br />
<i><span style="color: #6aa84f;">@author: shailesh</span></i><br />
<i><span style="color: #6aa84f;">"""</span></i><br />
<i><span style="color: #6aa84f;">#%%</span></i><br />
<i><span style="color: #6aa84f;"># call necessary libraries</span></i><br />
<i><span style="color: #6aa84f;">import matplotlib.pyplot as plt</span></i><br />
<i><span style="color: #6aa84f;">import numpy as np</span></i><br />
<i><span style="color: #6aa84f;">from skimage.draw import ellipse</span></i><br />
<i><span style="color: #6aa84f;">from skimage.draw import polygon</span></i><br />
<i><span style="color: #6aa84f;">from skimage.draw import circle</span></i><br />
<i><span style="color: #6aa84f;">from skimage.morphology import label</span></i><br />
<i><span style="color: #6aa84f;">from skimage.measure import regionprops</span></i><br />
<i><span style="color: #6aa84f;">import skimage.morphology as MM</span></i><br />
<br />
<i><span style="color: #6aa84f;"># draw some arbitrary shapes</span></i><br />
<i><span style="color: #6aa84f;">image = np.zeros((1000, 1000), dtype=np.uint8)</span></i><br />
<br />
<i><span style="color: #6aa84f;"># create an ellipse centred at (300, 750) with semi-minor axis 100 and semi-major axis 220</span></i><br />
<i><span style="color: #6aa84f;">rr, cc = ellipse(300, 750, 100, 220)</span></i><br />
<i><span style="color: #6aa84f;">image[rr, cc] = 1</span></i><br />
<br />
<i><span style="color: #6aa84f;"># create a polygon</span></i><br />
<i><span style="color: #6aa84f;">x = np.array([1, 70, 40, 1])</span></i><br />
<i><span style="color: #6aa84f;">y = np.array([1, 20, 80, 1])</span></i><br />
<i><span style="color: #6aa84f;">rr, cc = polygon(y, x)</span></i><br />
<i><span style="color: #6aa84f;">image[rr, cc] = 1</span></i><br />
<br />
<i><span style="color: #6aa84f;"># create a circle</span></i><br />
<i><span style="color: #6aa84f;">rr, cc = circle(200, 200, 50)</span></i><br />
<i><span style="color: #6aa84f;">image[rr, cc] = 1</span></i><br />
<br />
<i><span style="color: #6aa84f;"># a second, smaller ellipse</span></i><br />
<i><span style="color: #6aa84f;">rr, cc = ellipse(500, 800, 30, 40)</span></i><br />
<i><span style="color: #6aa84f;">image[rr, cc] = 1</span></i><br />
<br />
<i><span style="color: #6aa84f;"># label connected regions</span></i><br />
<i><span style="color: #6aa84f;">label_img = label(image)</span></i><br />
<i><span style="color: #6aa84f;"># find properties of connected regions</span></i><br />
<i><span style="color: #6aa84f;">regions = regionprops(label_img)</span></i><br />
<br />
<i><span style="color: #6aa84f;"># loop through connected regions and print the centroid of each blob</span></i><br />
<i><span style="color: #6aa84f;">for props in regions:</span></i><br />
<i><span style="color: #6aa84f;">&nbsp;&nbsp;&nbsp;&nbsp;y0, x0 = props.centroid</span></i><br />
<i><span style="color: #6aa84f;">&nbsp;&nbsp;&nbsp;&nbsp;print(x0, y0)</span></i><br />
<br />
<i><span style="color: #6aa84f;"># show figure</span></i><br />
<i><span style="color: #6aa84f;">fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(6, 6))</span></i><br />
<i><span style="color: #6aa84f;">ax.imshow(image)</span></i><br />
<i><span style="color: #6aa84f;">plt.gray()</span></i><br />
<i><span style="color: #6aa84f;">plt.axis((0, 1000, 1000, 0))</span></i><br />
<i><span style="color: #6aa84f;">plt.title('original image')</span></i><br />
<i><span style="color: #6aa84f;">plt.show()</span></i><br />
<br />
<i><span style="color: #6aa84f;"># perform erosion with a linear structuring element of length 200</span></i><br />
<i><span style="color: #6aa84f;">SE = np.ones((1, 200))</span></i><br />
<i><span style="color: #6aa84f;">imageE = MM.binary_erosion(image, SE)</span></i><br />
<i><span style="color: #6aa84f;">plt.figure()</span></i><br />
<i><span style="color: #6aa84f;">plt.imshow(imageE)</span></i><br />
<i><span style="color: #6aa84f;">plt.title('Eroded image')</span></i><br />
<i><span style="color: #6aa84f;">plt.show()</span></i><br />
<br />
<i><span style="color: #6aa84f;"># perform opening by reconstruction to recover the blobs that survived erosion</span></i><br />
<i><span style="color: #6aa84f;">imageRECON = MM.reconstruction(imageE, image)</span></i><br />
<i><span style="color: #6aa84f;">plt.figure()</span></i><br />
<i><span style="color: #6aa84f;">plt.imshow(imageRECON)</span></i><br />
<i><span style="color: #6aa84f;">plt.title('Reconstructed image')</span></i><br />
<i><span style="color: #6aa84f;">plt.show()</span></i><br />
<br />
Morphological reconstruction is a very powerful method. People have used it for many different applications such as watershed delineation, filtering, change detection of buildings after earthquakes, building detection, bridge detection, etc. In my own work, I have used the technique for unsupervised change detection with high resolution images and for change detection of high-rise buildings.</div>
Anonymoushttp://www.blogger.com/profile/06633342383500839837noreply@blogger.com0tag:blogger.com,1999:blog-226544055325165100.post-46754674979179486982013-12-05T01:13:00.000+01:002014-10-10T00:21:42.363+02:00A little teaser for using scikit learn for classification of remotesensing images<div dir="ltr" style="text-align: left;" trbidi="on">
I work a lot with classification of remote sensing images, both supervised and unsupervised. Unsupervised classification does not need any external input, whereas supervised classification needs samples or training areas for an algorithm to learn from. For processing remote sensing images there is plenty of proprietary software: ENVI, ERDAS, PCI Geomatica, Global Mapper, eCognition and many more. Yet even after paying thousands of Euros, the classification algorithms available in commercial off-the-shelf (COTS) software are far from satisfactory.<br />
<br /><table border="1" cellpadding="0" cellspacing="0"><tbody>
<tr><td valign="top" width="111"><ul>
<li>Software</li>
</ul>
</td><td valign="top" width="503"><ul>
<li>Available Algorithms</li>
</ul>
</td></tr>
<tr><td valign="top" width="111"><ul>
<li><strong>eCognition</strong> 8.7</li>
<br />
<li>(cost >1000 Euros/year)</li>
</ul>
<br /></td><td valign="top" width="503"><ul>
<li>KNN</li>
<br />
<li>Decision Tree DT</li>
<br />
<li>SVM (no gamma and C parameters available for tuning, no way to perform a grid search for optimal gamma and C; only linear and RBF kernels available)</li>
<br />
<li>Random Forest RF (only available in 8.8)</li>
</ul>
</td></tr>
<tr><td valign="top" width="111"><ul>
<li><strong>ENVI</strong></li>
<br />
<li>(cost >1000 Euros/year)</li>
</ul>
<br /></td><td valign="top" width="503"><ul>
<li>Maximum Likelihood (ML)</li>
<br />
<li>SVM (no way to perform a grid search for optimal parameters, unless you get your hands dirty with IDL programming)</li>
<br />
<li>Neural Network NN (number of hidden nodes cannot be assigned)</li>
<br />
<li>Some other algorithms more suitable for hyperspectral imagery:</li>
<br />
<li>SAM, SID, Spectral Unmixing.</li>
</ul>
</td></tr>
<tr><td valign="top" width="111"><ul>
<li><strong>Scikit learn</strong></li>
<br />
<li>Free</li>
<br />
<li>Open source</li>
</ul>
<br /></td><td valign="top" width="503"><ul>
<li>Nearest Neighbour (NN)</li>
<br />
<li>Decision Tree (DT)</li>
<br />
<li>SVM (grid search and cross-validation flexibility)</li>
<br />
<li>Random Forest RF</li>
<br />
<li>AdaBoost</li>
<br />
<li>Naive Bayes</li>
<br />
<li>Linear Discriminant Analysis (LDA)</li>
<br />
<li>Quadratic Discriminant Analysis (QDA)</li>
</ul>
<br /></td></tr>
</tbody></table>
<br />
Here is a little teaser of the classification accuracies of many of the algorithms available in scikit-learn, applied to a remote sensing image. In the near future I will blog with more illustration and with code. Till then, go and get your hands dirty with Python and scikit-learn. Make that your new year's resolution and, trust me, you will thank me for it.<br />
<br />
<a href="http://geotipsandtricks.files.wordpress.com/2013/12/accuracy1.png"><img alt="Accuracy1" class="size-medium wp-image-275 aligncenter" src="http://geotipsandtricks.files.wordpress.com/2013/12/accuracy1.png?w=300" height="475" width="640" /></a><br />
<br />
Here, the algorithms' hyperparameters were not optimally tuned; hence an otherwise superior machine learning algorithm like SVM shows very low accuracy on test samples not seen by the trained model.<br />
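To make the tuning point concrete, here is a minimal sketch of the kind of hyperparameter grid search scikit-learn makes trivial and COTS packages make painful. The two well-separated classes are synthetic stand-ins for pixel band values, not the imagery behind the figure:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# synthetic two-class "pixels": 4 bands, 50 samples per class
rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)

# cross-validated grid search over the SVM's C and gamma
param_grid = {'C': [0.1, 1, 10], 'gamma': [0.01, 0.1, 1]}
search = GridSearchCV(SVC(kernel='rbf'), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```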
<br />
</div>
Anonymoushttp://www.blogger.com/profile/06633342383500839837noreply@blogger.com0tag:blogger.com,1999:blog-226544055325165100.post-22905457312947844632013-11-18T19:06:00.000+01:002014-10-10T01:44:36.712+02:00Image processing in python with scikit-learn and pymorph<div dir="ltr" style="text-align: left;" trbidi="on">
I have been using MATLAB quite a bit for some time. But being proprietary, it has some drawbacks: one has to fork out a lot of money to buy it and renew its yearly license, and on top of that one has to buy many toolboxes. Check <a href="https://sites.google.com/site/pythonforscientists/python-vs-matlab" title="this">this</a> blog for the various advantages of switching from MATLAB to Python. You will find very helpful insights into the key differences between MATLAB and Python/NumPy <a href="http://wiki.scipy.org/NumPy_for_Matlab_Users">here</a>.<br />
<br />
For a couple of weeks now, I have been experimenting with Python, an open source language, for some of my weekend tasks, mainly classification and segmentation of remote sensing images. So after sacrificing two weekends, which I would otherwise spend on leisure, to scouring Python and its various libraries, I am now pretty comfortable with the Python language for image processing tasks as well. Sweet :-) .<br />
<br />
In this post I will show some morphological image analysis with Python. I can see this becoming a series of Python posts; I plan to cover classification, clustering and edge detection in subsequent posts.<br />
<br />
For installing Python, I have used <a href="https://code.google.com/p/pythonxy/" title="python xy">pythonXY</a>, which comes with many libraries needed for scientific data analysis, such as scipy, numpy, matplotlib and many others. PythonXY also comes with an integrated development environment (IDE) for Python called Spyder. With Spyder, you don't have to run Python from the command prompt. Other benefits include integrated help, a variable explorer, a history console, etc. In addition, it is all FREE.<br />
<br />
Here is the interface how it looks.<br />
<div style="text-align: center;">
<a href="http://geotipsandtricks.files.wordpress.com/2013/11/unbenannt.png"><img alt="Spyder iterface" class="size-medium wp-image-246 aligncenter" src="http://geotipsandtricks.files.wordpress.com/2013/11/unbenannt.png?w=300" height="341" width="640" /></a></div>
<br />
I will be using a library called "<a href="http://scikit-image.org/">scikit-image</a>", which comes bundled with pythonXY, and in addition a library called <a href="http://pythonhosted.org/pymorph/">"PyMorph"</a> for morphological image analysis. Scikit-image also has a morphology module, but there you will only find basic morphological operators like opening, closing, erosion and dilation. PyMorph has many advanced morphological operators such as opening by reconstruction, closing by reconstruction, the alternating sequential filter (ASF), ASF by reconstruction, etc. So after installing pythonXY, download the PyMorph library and install it.<br />
<br />
So, in this post I will show you how to use scikit-image to create some basic shapes, calculate the area of each shape or blob and write the areas into the figure, and find and plot the bounding box of each blob. After finding the areas, I will show how to remove blobs whose area is less than a specified threshold.<br />
<br />
<i><span style="color: #93c47d;"># -*- coding: utf-8 -*-</span></i><br />
<i><span style="color: #93c47d;">"""</span></i><br />
<i><span style="color: #93c47d;">Created on Mon Nov 18 15:42:21 2013</span></i><br />
<i><span style="color: #93c47d;">@author: shailesh</span></i><br />
<i><span style="color: #93c47d;">"""</span></i><br />
<i><span style="color: #93c47d;">#%%</span></i><br />
<i><span style="color: #93c47d;"># call necessary libraries</span></i><br />
<i><span style="color: #93c47d;">import matplotlib.pyplot as plt</span></i><br />
<i><span style="color: #93c47d;">import numpy as np</span></i><br />
<i><span style="color: #93c47d;">from skimage.draw import ellipse</span></i><br />
<i><span style="color: #93c47d;">from skimage.draw import polygon</span></i><br />
<i><span style="color: #93c47d;">from skimage.draw import circle</span></i><br />
<i><span style="color: #93c47d;">from skimage.morphology import label</span></i><br />
<i><span style="color: #93c47d;">from skimage.measure import regionprops</span></i><br />
<i><span style="color: #93c47d;">import matplotlib.patches as mpatches</span></i><br />
<i><span style="color: #93c47d;">import pymorph as MM</span></i><br />
<i><span style="color: #93c47d;"><br /></span></i>
<i><span style="color: #93c47d;">#%%</span></i><br />
<i><span style="color: #93c47d;"># draw some arbitrary shapes</span></i><br />
<i><span style="color: #93c47d;">image = np.zeros((1000, 1000), dtype=np.uint8)</span></i><br />
<i><span style="color: #93c47d;"># create an ellipse centred at (300, 350) with semi-minor axis 100 and semi-major axis 220</span></i><br />
<i><span style="color: #93c47d;">rr, cc = ellipse(300, 350, 100, 220)</span></i><br />
<i><span style="color: #93c47d;">image[rr, cc] = 1</span></i><br />
<i><span style="color: #93c47d;"><br /></span></i>
<i><span style="color: #93c47d;"># create a polygon</span></i><br />
<i><span style="color: #93c47d;">x = np.array([1, 70, 40, 1])</span></i><br />
<i><span style="color: #93c47d;">y = np.array([1, 20, 80, 1])</span></i><br />
<i><span style="color: #93c47d;">rr, cc = polygon(y, x)</span></i><br />
<i><span style="color: #93c47d;">image[rr, cc] = 1</span></i><br />
<i><span style="color: #93c47d;"><br /></span></i>
<i><span style="color: #93c47d;"># create a circle</span></i><br />
<i><span style="color: #93c47d;">rr, cc = circle(200, 200, 50)</span></i><br />
<i><span style="color: #93c47d;">image[rr, cc] = 1</span></i><br />
<i><span style="color: #93c47d;">fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(6, 6))</span></i><br />
<i><span style="color: #93c47d;">ax.imshow(image)</span></i><br />
<i><span style="color: #93c47d;"><br /></span></i>
<i><span style="color: #93c47d;"># label connected regions</span></i><br />
<i><span style="color: #93c47d;">label_img = label(image)</span></i><br />
<i><span style="color: #93c47d;"># find properties of connected regions</span></i><br />
<i><span style="color: #93c47d;">regions = regionprops(label_img)</span></i><br />
<i><span style="color: #93c47d;"><br /></span></i>
<i><span style="color: #93c47d;"># loop through connected regions and draw</span></i><br />
<i><span style="color: #93c47d;"># the centroid, bounding box and area of each blob</span></i><br />
<i><span style="color: #93c47d;">for props in regions:</span></i><br />
<i><span style="color: #93c47d;">&nbsp;&nbsp;&nbsp;&nbsp;y0, x0 = props.centroid</span></i><br />
<i><span style="color: #93c47d;">&nbsp;&nbsp;&nbsp;&nbsp;minr, minc, maxr, maxc = props.bbox</span></i><br />
<i><span style="color: #93c47d;">&nbsp;&nbsp;&nbsp;&nbsp;area = props.area</span></i><br />
<i><span style="color: #93c47d;">&nbsp;&nbsp;&nbsp;&nbsp;rect = mpatches.Rectangle((minc, minr), maxc - minc, maxr - minr,</span></i><br />
<i><span style="color: #93c47d;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;fill=False, edgecolor='red', linewidth=2)</span></i><br />
<i><span style="color: #93c47d;">&nbsp;&nbsp;&nbsp;&nbsp;ax.add_patch(rect)</span></i><br />
<i><span style="color: #93c47d;">&nbsp;&nbsp;&nbsp;&nbsp;ax.text(x0, y0, str(area), fontsize=10, color="blue")</span></i><br />
<i><span style="color: #93c47d;"># show figure</span></i><br />
<i><span style="color: #93c47d;">plt.gray()</span></i><br />
<i><span style="color: #93c47d;">plt.axis((0, 1000, 1000, 0))</span></i><br />
<i><span style="color: #93c47d;">plt.show()</span></i><br />
<i><span style="color: #93c47d;"><br /></span></i>
<i><span style="color: #93c47d;">#%% remove small blobs</span></i><br />
<i><span style="color: #93c47d;"># remove blobs with area less than 10000 pixels</span></i><br />
<i><span style="color: #93c47d;"># MM.areaopen comes from PyMorph</span></i><br />
<i><span style="color: #93c47d;">b = MM.areaopen(image, 10000)</span></i><br />
<i><span style="color: #93c47d;">fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(6, 6))</span></i><br />
<i><span style="color: #93c47d;">ax.imshow(b)</span></i><br />
<i><span style="color: #93c47d;">ax.set_title('blobs with areas greater than 10000')</span></i><br />
<i><span style="color: #93c47d;">plt.gray()</span></i><br />
<i><span style="color: #93c47d;">plt.axis((0, 1000, 1000, 0))</span></i><br />
<i><span style="color: #93c47d;">plt.show()</span></i><br />
<i><span style="color: #93c47d;"># close all open figures</span></i><br />
<i><span style="color: #93c47d;">plt.close("all")</span></i><br />
<br />
Here are the figures produced by the above code.<br />
<br />
Initial figure after some blobs were made; the area and bounding box of each blob are plotted as well.<br />
<a href="http://geotipsandtricks.files.wordpress.com/2013/11/figure_1.png"><img alt="figure_1" class="alignnone size-medium wp-image-244" src="http://geotipsandtricks.files.wordpress.com/2013/11/figure_1.png?w=300" height="298" width="300" /></a><br />
<br />
Figure where blobs with an area of less than 10000 pixels are removed with PyMorph.<br />
<a href="http://geotipsandtricks.files.wordpress.com/2013/11/figure_2.png"><img alt="figure_2" class="alignnone size-medium wp-image-245" src="http://geotipsandtricks.files.wordpress.com/2013/11/figure_2.png?w=300" height="298" width="300" /></a><br />
<br />
So this is a simple, hypothetical illustration, but there are many real applications of this kind of analysis in the remote sensing field. For example, after detecting buildings you can easily remove those smaller than the minimum mapping unit (say 100 m2). Similarly, it can be used for removing small artifacts that you are not interested in. And the list goes on.<br />
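If you cannot get PyMorph installed these days, the same area-opening idea can be sketched with SciPy's ndimage module instead (a sketch under the assumption that SciPy is available; the function and variable names below are my own, not PyMorph's):

```python
import numpy as np
from scipy import ndimage

def remove_small_blobs(binary, min_area):
    """Keep only connected components with at least min_area pixels."""
    labels, n = ndimage.label(binary)
    # pixel count of each labelled blob (label 0 is the background)
    areas = ndimage.sum(binary, labels, index=np.arange(1, n + 1))
    keep = np.zeros(n + 1, dtype=bool)
    keep[1:] = areas >= min_area
    return keep[labels]

# toy image: one 3x3 blob (area 9) and one 20x20 blob (area 400)
img = np.zeros((100, 100), dtype=np.uint8)
img[5:8, 5:8] = 1
img[30:50, 30:50] = 1
cleaned = remove_small_blobs(img, min_area=100)  # only the 20x20 blob survives
```

The idea is exactly the same as areaopen: label the connected components, measure each one's area, and keep only the labels above the threshold.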
<br />
In the next few blogs, I will be writing more about other morphological image processing operators, as well as classifiers such as Support Vector Machines (SVM) and Random Forests (RF).</div>
Anonymoushttp://www.blogger.com/profile/06633342383500839837noreply@blogger.com1tag:blogger.com,1999:blog-226544055325165100.post-49756576381717391642013-09-13T10:42:00.000+02:002014-10-10T00:34:26.915+02:00MATLAB linking figure for simultaneous zooming and panning<div dir="ltr" style="text-align: left;" trbidi="on">
Some time back I wrote a <a href="http://geotipsandtricks.wordpress.com/2011/03/09/gui-for-linking-figures-in-matlab/" title="GUI for Linking figures in MATLAB">Matlab GUI</a> which allows you to select variables from your workspace and plot them in linked figures, so that when you pan and zoom in one figure, the other figures follow automatically. Lately I have received a lot of requests to share that GUI, but unfortunately I am unable to do so, as I developed it for some research work and I am not allowed to distribute it. However, I am going to share a function which does that linking.<br />
<br />
To use the function, download it from <a href="https://sites.google.com/site/shreshai/file-cabinet/linkaxes_shailesh.p?attredirects=0&d=1" title="link_axes">here</a>. There is a <a href="https://sites.google.com/site/shreshai/file-cabinet/test.m?attredirects=0&d=1" title="test link-axes">test.m</a> for testing purposes as well.<br />
<br />
%first image<br />
Im1 = imread('cameraman.tif');<br />
% show first figure<br />
imshow(Im1)<br />
% threshold image<br />
Im2 = Im1>50;<br />
%show second image in another figure<br />
figure,<br />
imshow(Im2);<br />
% then call linkaxes_shailesh ()<br />
% remember that the matrix sizes must be the same</code><br />
<br />
Put it in a folder and add that folder to the Matlab path so that the function is available in every Matlab session (File > Set Path). Open two or more figures you want to link, but bear in mind that the variables used in the figures must be of the same size, i.e. they should have the same number of rows and columns. After that, call the function by typing <em><strong>linkaxes_shailesh</strong></em> in your command line, or call it within one of your m-files or functions with <em><strong>linkaxes_shailesh()</strong></em>, and voila, your figures will be linked. It's as simple as that. Don't believe it? Try it out yourself; as someone rightly said, 'Seeing is believing'. I hope you find it useful.<br />
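By the way, if you work in Python rather than Matlab, matplotlib gives you similar linked zoom/pan behaviour out of the box through shared axes. A minimal sketch (the variable names are made up for illustration):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch runs anywhere
import matplotlib.pyplot as plt
import numpy as np

im1 = np.random.rand(256, 256)   # stand-in for the first image
im2 = im1 > 0.5                  # thresholded version, same size

# sharex/sharey links the two axes: pan or zoom one view and the other follows
fig, (ax1, ax2) = plt.subplots(1, 2, sharex=True, sharey=True)
ax1.imshow(im1, cmap="gray")
ax2.imshow(im2, cmap="gray")

ax1.set_xlim(50, 150)  # "zoom" the first view; the second follows
```

Because the x and y limits are shared, interactively zooming either subplot moves the other one too, which is the same effect the linking function gives you in Matlab.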
<br />
<iframe width="560" height="315" src="//www.youtube.com/embed/QhqnGdk2yQU?list=UUHQMpdmbYQPJxqsnKuz9DUw" frameborder="0" allowfullscreen></iframe><br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<br />
<br />
<br /></div>
Anonymoushttp://www.blogger.com/profile/06633342383500839837noreply@blogger.com1tag:blogger.com,1999:blog-226544055325165100.post-71240234135287469832013-08-30T16:01:00.000+02:002013-09-02T11:17:21.939+02:00Neighborhood analysis in QGIS.<div dir="ltr" style="text-align: left;" trbidi="on">
<div>
<br /></div>
<div class="MsoNormal">
<span lang="EN-GB">QGIS is open source software which anyone can download for free. The community is growing each day, and it is fully capable of common basic GIS tasks as well as more advanced work. In this tutorial, I will show you how to do some neighbourhood analysis in QGIS with the help of the SEXTANTE graphical modeler. If you are more of an ArcGIS person, you might be familiar with the ArcGIS ModelBuilder; QGIS provides a similar graphical interface through which all the algorithms available in QGIS are accessible. On top of that, you can combine algorithms from other providers such as GDAL, GRASS, Orfeo Toolbox, R and SAGA GIS, so it can be a really powerful tool for automating your daily GIS tasks. Go check it out, and scour the QGIS help files; you will find plenty of reading material.<o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-GB"><br /></span></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgDev_LVcnBBwFeIw3eQilObhtfwSH2x1njpjnItC34DMN9aUCfEgd1FlbEcOCOUg7UFJpw_fswy4Y6pXMuWGLioG6I0mFg30yJiEPczZZhHiubYiHYqfizGnpdbWNupoWyHhkfkYwvQcQ/s1600/1.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgDev_LVcnBBwFeIw3eQilObhtfwSH2x1njpjnItC34DMN9aUCfEgd1FlbEcOCOUg7UFJpw_fswy4Y6pXMuWGLioG6I0mFg30yJiEPczZZhHiubYiHYqfizGnpdbWNupoWyHhkfkYwvQcQ/s1600/1.png" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Different programs that are available with QGIS</td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="MsoNormal">
<span lang="EN-GB"></span></div>
<div class="MsoNormal">
<span lang="EN-GB"><b>So the task is:</b> there is a polygon shapefile of my country, Nepal, shown in green, in which each polygon represents a district. There is also a point shapefile representing points of interest (assume they are major infrastructure such as big hospitals, airports or universities), each with a unique ID. The task is to assign each district to the closest point of interest, based on how far the district is from it.<o:p></o:p></span></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiRmGwg3yjOvqhUciNyW5bQ-Q1GpZ4lf5RyR9VizwAosAUTK2Et6QPnOE-UwwtpTmOkm-MaGg5dxRmYDYcOjX5K2P8Rl9OzHiS97Rh5uNJzUyf9K7KRZSD5VDea5jCsuecTv34e_4VWJxM/s1600/2.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="257" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiRmGwg3yjOvqhUciNyW5bQ-Q1GpZ4lf5RyR9VizwAosAUTK2Et6QPnOE-UwwtpTmOkm-MaGg5dxRmYDYcOjX5K2P8Rl9OzHiS97Rh5uNJzUyf9K7KRZSD5VDea5jCsuecTv34e_4VWJxM/s400/2.png" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Input shpfiles: </td></tr>
</tbody></table>
<div class="MsoNormal">
<span lang="EN-GB"></span></div>
<div class="MsoNormal">
<b><span lang="EN-GB">The graphical model in QGIS.<o:p></o:p></span></b></div>
<div class="MsoNormal">
<b><span lang="EN-GB"><br /></span></b></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcSeRo29PFUcG5nOTw67plZTOrpUaGHtK0kBGIa_DWTUAuei0dnHunfe5Ourr5m3-OymRItKLeF34Rf20iraUvwKidwIAplg9zol_T74rPnmmFwEox0QLqnK5cCNPPUtg7FYBiq75JTxM/s1600/3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcSeRo29PFUcG5nOTw67plZTOrpUaGHtK0kBGIa_DWTUAuei0dnHunfe5Ourr5m3-OymRItKLeF34Rf20iraUvwKidwIAplg9zol_T74rPnmmFwEox0QLqnK5cCNPPUtg7FYBiq75JTxM/s640/3.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="MsoNormal">
<span lang="EN-GB">The model basically performs the following main tasks.<o:p></o:p></span></div>
<div class="MsoListParagraphCxSpFirst" style="mso-list: l0 level1 lfo1; text-indent: -18.0pt;">
</div>
<ul style="text-align: left;">
<li><span lang="EN-GB">Convert the district polygons to their centroids.</span></li>
<li><span lang="EN-GB">Find the nearest point of interest to each district centroid.</span></li>
</ul>
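To make clear what the model computes, the assignment logic can be sketched in a few lines of Python (the district names, POI IDs and coordinates below are made up for illustration):

```python
import math

# hypothetical district centroids and points of interest (x, y in degrees)
centroids = {"District_A": (85.3, 27.7), "District_B": (83.9, 28.2)}
pois = {"POI_1": (85.0, 27.5), "POI_2": (84.0, 28.0)}

def nearest_poi(point):
    """ID of the point of interest closest to `point` (planar distance)."""
    return min(pois, key=lambda pid: math.hypot(point[0] - pois[pid][0],
                                                point[1] - pois[pid][1]))

# assign every district (via its centroid) to its nearest point of interest
assignment = {name: nearest_poi(c) for name, c in centroids.items()}
```

The modeler does the same thing for you, only on real geometries and with the distance written into the attribute table.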
<br />
<div class="MsoNormal">
<span lang="EN-GB">After creating the model, you can just hit the run button and you are good to go. The model can also be saved and reused later. As you can see in the modeller, the first algorithm comes from SAGA GIS whereas the second comes from core QGIS, which is a good illustration of how algorithms from different sources can be combined.<o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-GB"><br /></span></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="MsoNormal">
<span lang="EN-GB">After running the model, you will get a point file. Let's examine its attribute table. As you can see, each district is associated with a @HubName, which in our case is a point of interest, and the distance to it. I forgot to convert the geographic projection EPSG:4326 to a projected one, so the distance is in degrees of latitude and longitude. If you want the distance in m or km, use a projected EPSG code and you will be fine.<o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-GB"><br /></span></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEisOxrJ8__4nqpvCq3iRoUlyUdIS_ILx12-ctxNqc7QY14UwSCw1dkjpGKylG7OIXtgjP2jCTXaGD4FwN2rDdZ4KXqeM8olpTNRRFt0t5ReR4EMKnOmv47nW2L-W-dvXHA62gNie_GuR9k/s1600/4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="170" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEisOxrJ8__4nqpvCq3iRoUlyUdIS_ILx12-ctxNqc7QY14UwSCw1dkjpGKylG7OIXtgjP2jCTXaGD4FwN2rDdZ4KXqeM8olpTNRRFt0t5ReR4EMKnOmv47nW2L-W-dvXHA62gNie_GuR9k/s320/4.png" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="MsoNormal">
<span lang="EN-GB"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-GB">Now we will join this information to the original district shapefile based on the column @Name. To do that, do the following:<o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-GB"><br /></span></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="MsoNormal">
<span lang="EN-GB">Right-click the polygon shapefile and click Properties. Click the Joins tab and the following dialog pops up. Click the green + button to define a join, as shown in the next figure. The join column is @Name, which is common to both the polygon shapefile and the point shapefile we calculated with the graphical modeler.</span></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjmceK0wy-2Zvqsd8cc08HzqoXJRcPVjuMowNH8RVkfpdigyDfDM5JePa0oojEja6tDiEpymS7s6yiPMa-Vj65ZtpfOw7wrVZzvtHtxPjDZRVpvOotkrBNqCXwJmUBG992rC7c7q02Zq3M/s1600/5.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="298" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjmceK0wy-2Zvqsd8cc08HzqoXJRcPVjuMowNH8RVkfpdigyDfDM5JePa0oojEja6tDiEpymS7s6yiPMa-Vj65ZtpfOw7wrVZzvtHtxPjDZRVpvOotkrBNqCXwJmUBG992rC7c7q02Zq3M/s320/5.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Join attribute interface</td></tr>
</tbody></table>
<div class="MsoNormal">
<span lang="EN-GB">.<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjlFCV874hiLAWRuEXOqxFljgLjPLXeZ4OINoyBHQ2PmkAmgFkHtGqsNpPjIKK2fGS5zz_3Ad0wOesE06xkyMPPI_xGCXpuZKvpJ0c7kKyqWP2DWwyqAL05bCAWrtxDqMx6MnhA0Tc24Xo/s1600/6.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="146" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjlFCV874hiLAWRuEXOqxFljgLjPLXeZ4OINoyBHQ2PmkAmgFkHtGqsNpPjIKK2fGS5zz_3Ad0wOesE06xkyMPPI_xGCXpuZKvpJ0c7kKyqWP2DWwyqAL05bCAWrtxDqMx6MnhA0Tc24Xo/s320/6.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Defining join layer and columns</td></tr>
</tbody></table>
<o:p></o:p></span></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
<span lang="EN-GB" style="font-family: "Calibri","sans-serif"; font-size: 11.0pt; line-height: 115%; mso-ansi-language: EN-GB; mso-ascii-theme-font: minor-latin; mso-bidi-font-family: "Times New Roman"; mso-bidi-language: AR-SA; mso-bidi-theme-font: minor-bidi; mso-fareast-font-family: Calibri; mso-fareast-language: EN-US; mso-fareast-theme-font: minor-latin; mso-hansi-theme-font: minor-latin;">After the join, the attribute table of the district polygons has additional information. Now it is a matter of applying colour symbology to show which districts are closest to which point of interest, based on the fourth column.</span><span lang="EN-GB" style="font-family: "Calibri","sans-serif"; font-size: 11.0pt; line-height: 115%; mso-ansi-language: EN-US; mso-ascii-theme-font: minor-latin; mso-bidi-font-family: "Times New Roman"; mso-bidi-language: AR-SA; mso-bidi-theme-font: minor-bidi; mso-fareast-font-family: Calibri; mso-fareast-language: PL; mso-fareast-theme-font: minor-latin; mso-hansi-theme-font: minor-latin; mso-no-proof: yes;"> </span>
<div class="MsoNormal">
<b><span lang="EN-GB"><br /></span></b></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh5udYE7o2u8WbRPeB1l4HkbfITOtsB1At_FDXASr5L3kc8i_BMBVkLi6qDdRR6AZE8spppVlhs1u0JLBL2GfwR-92QoA5cEuHWP0TDh092G2KA6dl8cPLxivS5GqU9UaorCdigdkG3Gy4/s1600/7.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="290" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh5udYE7o2u8WbRPeB1l4HkbfITOtsB1At_FDXASr5L3kc8i_BMBVkLi6qDdRR6AZE8spppVlhs1u0JLBL2GfwR-92QoA5cEuHWP0TDh092G2KA6dl8cPLxivS5GqU9UaorCdigdkG3Gy4/s320/7.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Attribute table after join</td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="MsoNormal">
<b><span lang="EN-GB">The final result is :<o:p></o:p></span></b></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj31OenvFusRwRNjpEntUJwl4M37tqKOUQ9aLQcZxHly1FGVsL3d5Kv-zvrAc1tka1lKKvFUVvUx6kDc97qEk-Xyx_8Mwh_3xZXainM1B2A-suFmcUCKSOP_OZAURqAW_hREr1sp9KIiW8/s1600/8.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="194" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj31OenvFusRwRNjpEntUJwl4M37tqKOUQ9aLQcZxHly1FGVsL3d5Kv-zvrAc1tka1lKKvFUVvUx6kDc97qEk-Xyx_8Mwh_3xZXainM1B2A-suFmcUCKSOP_OZAURqAW_hREr1sp9KIiW8/s320/8.png" width="320" /></a></div>
<div class="MsoNormal">
<b><span lang="EN-GB"><br /></span></b></div>
<div class="MsoNormal">
<span lang="EN-GB">I have not shown each and every step in this tutorial. My main purpose is to show how you can use the powerful QGIS and SEXTANTE modeler to perform complex GIS tasks such as neighbourhood analysis without ArcGIS, which costs a lot, and to stimulate you to try QGIS. If you have any questions, just ask and I will reply, of course, as long as I am not busy with my own stuff :).<o:p></o:p></span></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="MsoNormal">
<b><span lang="EN-GB"><br /></span></b></div>
</div>
Anonymoushttp://www.blogger.com/profile/06633342383500839837noreply@blogger.com0tag:blogger.com,1999:blog-226544055325165100.post-31893567938775132232013-06-10T11:46:00.001+02:002013-06-17T11:06:34.653+02:00Maximum reflectance in a spectra of Multispectral or Hyperspectral image in MATLAB<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="MsoNormal">
<span lang="EN-US">This is a very short post, related to the processing of remote sensing images; the image can be either a multispectral or a hyperspectral image. One of my colleagues asked me for this simple bit of help in MATLAB, after trying to do it in ENVI with band math and IDL to no avail.</span><br />
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US"><i>I wanted to
produce 2 images from my hyper-spectral image: 1) showing the maximum
reflectance value across the bands and 2) the band number where the maximum
came from.</i><o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-US"><i><br /></i></span></div>
<div class="MsoNormal">
<span lang="EN-US">The first step is, of course, to read the image into MATLAB :). That's a no-brainer: use the multibandread function. I have written a post about it earlier; search the site. Now you have your image loaded in MATLAB.</span></div>
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US">Here is the code that I wrote to perform what my friend asked. I hope it is useful for someone else too.</span></div>
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal" style="text-align: left;">
<i><span style="font-family: Courier New, Courier, monospace;">%find the number of rows and columns</span></i></div>
<div class="MsoNormal" style="text-align: left;">
<i><span style="font-family: Courier New, Courier, monospace;">[rows,cols,bands]= size(imt1);</span></i></div>
<div class="MsoNormal" style="text-align: left;">
<i><span style="font-family: Courier New, Courier, monospace;">%reshape it </span></i></div>
<div class="MsoNormal" style="text-align: left;">
<i><span style="font-family: Courier New, Courier, monospace;">imt1= reshape(imt1,[],bands)';</span></i></div>
<div class="MsoNormal" style="text-align: left;">
<i><span style="font-family: Courier New, Courier, monospace;">%find max_value and band which has maximum value</span></i></div>
<div class="MsoNormal" style="text-align: left;">
<i><span style="font-family: Courier New, Courier, monospace;">[max_value,idx]= max(imt1,[],1);</span></i></div>
<div class="MsoNormal" style="text-align: left;">
<i><span style="font-family: Courier New, Courier, monospace;">%reconstruct the image</span></i></div>
<div class="MsoNormal" style="text-align: left;">
<i><span style="font-family: Courier New, Courier, monospace;">band_image= reshape(idx,rows,cols);</span></i></div>
<div class="MsoNormal" style="text-align: left;">
<i><span style="font-family: Courier New, Courier, monospace;">max_image= reshape(max_value,rows,cols);</span></i></div>
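For Python users, the same two outputs fall out of NumPy directly. A small sketch with a toy 2 x 2 x 3 cube, assuming the image is stored rows x cols x bands:

```python
import numpy as np

# toy 2 x 2 x 3 "hyperspectral" cube: rows x cols x bands
cube = np.array([[[0.1, 0.5, 0.2], [0.9, 0.1, 0.3]],
                 [[0.2, 0.2, 0.8], [0.4, 0.6, 0.5]]])

max_image = cube.max(axis=2)          # maximum reflectance per pixel
band_image = cube.argmax(axis=2) + 1  # 1-based band number of that maximum
```

The `+ 1` just shifts NumPy's 0-based band indices to the 1-based numbering used in ENVI and MATLAB.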
</div>
Anonymoushttp://www.blogger.com/profile/06633342383500839837noreply@blogger.com0tag:blogger.com,1999:blog-226544055325165100.post-17884040440291231922013-06-07T12:46:00.001+02:002014-12-04T16:48:30.565+01:00eCognition Tutorial: Customized algorithm for performing majority vote in eCognition<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;">
Today, I present a customized rule set which lets you assign a super-object's class by evaluating all of its sub-objects, based on which classification makes up the largest proportion of the area. This is one of the wish-list items in eCognition Ideas and has also been asked about frequently in the eCognition community.<br />
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<span lang="EN-US"></span>A majority-statistic customized algorithm lets you assign a super-object to the class covering the majority of the pixels within it. This is useful for converting existing pixel-based classifications into an object-based format where additional object-based edits can be made.</div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<span lang="EN-US">The only parameter in this customized algorithm is a level variable: the level which contains the super-objects. The customized algorithm will look one level below this level and perform all the necessary calculations.</span></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEieenaLzPsGb7PxTSJ3uW80-t64DiCVieZSHIEY8XrgQs6RlS3Ny9JNdn5-4ByTY0oDhBvNyRQm5fvZU2DG21TFU1Bhuj3he86wP7w-ZXlSM2flaKJLw82ANZ5vWRPCTUCLmjJ_9ZHp_g0/s1600/suobject.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEieenaLzPsGb7PxTSJ3uW80-t64DiCVieZSHIEY8XrgQs6RlS3Ny9JNdn5-4ByTY0oDhBvNyRQm5fvZU2DG21TFU1Bhuj3he86wP7w-ZXlSM2flaKJLw82ANZ5vWRPCTUCLmjJ_9ZHp_g0/s1600/suobject.PNG" height="161" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: small;">Sub-objects level with classification</span></td></tr>
</tbody></table>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEieHanrbHnn_tq8lgY8Ck-imDwtpAi41KVk9jw5qlpDRMeB4e4Z1QFXHw9zH4eqGRXlfzm-84HoeDazZplt9cXCVb33LbdiJXdmLrwBod7ggqNKD4uBbBN0Q3zYJxHjJv_972gWs-FzYro/s1600/superobject_initial.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEieHanrbHnn_tq8lgY8Ck-imDwtpAi41KVk9jw5qlpDRMeB4e4Z1QFXHw9zH4eqGRXlfzm-84HoeDazZplt9cXCVb33LbdiJXdmLrwBod7ggqNKD4uBbBN0Q3zYJxHjJv_972gWs-FzYro/s1600/superobject_initial.PNG" height="147" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-family: 'Lucida Sans Unicode', sans-serif; font-size: small;">Super-objects level with no classification</span></td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhQ6x6pNc5YjQJ_75b2fg4E0p9JpzDFUR24GXhtFMuHGP1FLXf5d5R5fmNEyhE_sk7eTwp5VpdwhyjF9xf26eXYaPfQs0XEpe5xKuwjnB0_iPDcFxnGN_I1h77nA25U58Cq_3RyuRiEB_c/s1600/superobject_final.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhQ6x6pNc5YjQJ_75b2fg4E0p9JpzDFUR24GXhtFMuHGP1FLXf5d5R5fmNEyhE_sk7eTwp5VpdwhyjF9xf26eXYaPfQs0XEpe5xKuwjnB0_iPDcFxnGN_I1h77nA25U58Cq_3RyuRiEB_c/s1600/superobject_final.PNG" height="165" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: small;">Super-object level with classification from customized algorithm</span></td></tr>
</tbody></table>
<div class="MsoNormal">
<span lang="EN-US">The customized algorithm does assignment of super-object with following steps.</span></div>
<div class="MsoNormal">
<br />
1) store all the classes in an array (array_class)<br />
2) loop through the objects on the super-object level<br />
3) loop through the class array<br />
4) store the relative area of each class in an array (array_occur)<br />
5) find the maximum in array_occur<br />
6) assign the super-object to the class with the maximum occurrence in array_occur.<br />
<br /></div>
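Outside eCognition, the same majority vote is easy to sketch in plain Python (the class names and areas below are made up for illustration):

```python
from collections import defaultdict

# hypothetical sub-objects of one super-object: (class, area in pixels)
sub_objects = [("forest", 120), ("water", 40), ("forest", 60), ("urban", 90)]

def majority_class(subs):
    """Class covering the largest total area among the sub-objects."""
    area_by_class = defaultdict(float)
    for cls, area in subs:
        area_by_class[cls] += area
    return max(area_by_class, key=area_by_class.get)

label = majority_class(sub_objects)
```

The customized algorithm does exactly this per super-object, using the relative areas of the sub-object classes.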
<div class="MsoNormal">
</div>
<div class="MsoNormal">
<span lang="EN-US">All of these steps are performed behind the scenes in the customized algorithm, so the user does not need to worry about how to carry them out.</span><br />
<span lang="EN-US"><br /></span>I have uploaded a <a href="https://sites.google.com/site/shreshai/file-cabinet/majority%20Vote.rar?attredirects=0&d=1">zip file</a> containing a project which shows the usage of the customized algorithm, along with the customized rule set itself. Load the customized rule set in your project, and after that you will find an algorithm called MajorityVote in the list of available algorithms.<br />
<br />
The algorithm was developed with eCognition 8.8 and will not work with versions below that.<br />
<br />
<span lang="EN-US"><br /></span><span lang="EN-US"><br /></span></div>
</div>
</div>
Anonymoushttp://www.blogger.com/profile/06633342383500839837noreply@blogger.com3