We can use remotely sensed images to extract useful information about the land surface using image classification.  The first step is usually segmentation, which groups pixels with common properties together.  This post is a quick look at what segmentation is and how to do it.

Ah, a nice, relaxing paint-by-numbers landscape for a Friday.  While I’d love to while away a few hours doing one of these myself, our subject today is image segmentation.  I’ve started with a paint-by-numbers image because of that unique aesthetic quality we often see in these works; we’ll come back to that later.

We’ve been looking at remotely sensed images recently and using things like NDVI to glean some useful information from the raw data.  In this post we’re delving into the process of image analysis, or more specifically, image classification.  This is the process of converting the data in an image into useful data layers, such as land cover or impervious surfaces.  A really good example of an output from image classification is the Land Cover Database, a polygon layer with 33 different classes of land cover for the whole country (including the Chathams) at a scale of 1:50,000.  Here’s an image from the lyttle town of Lyttelton and surrounding area to give you a sense of some of the land cover classes and scale:

This layer began life as a set of satellite images.  Using image classification, pixels were reclassified to a specific land cover on a national scale and then converted to polygons.  Sounds pretty straightforward, but there’s as much art as science that goes into the process.

One key first step in a process like this is image segmentation.  With segmentation, pixels with similar characteristics are grouped together into segments, which is really nothing more than putting a pixel into a bin, or class, with other pixels that have similar values.  When you fill out a census form, for example, you’re sort of segmenting yourself based on your personal details: age, income, level of education, whether you’re a Jedi or not, etc.  Each detail goes into a bin and later we can make some judgments about what those bins tell us about you (in my case, I think it means I’m in the rubbish bin…). With an image, we go right back to the intensity values for each pixel in each band (what we might also call its spectral values), so the more bands you have, the better job you can do of segmenting an image.  At the end of the segmentation process we won’t have our land cover classes but it will make the steps that follow a lot easier.
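To make the binning idea a bit more concrete, here’s a toy sketch in Python.  The pixel coordinates and band values are made up, and this is nothing like Pro’s actual segmentation algorithm: it just quantises each pixel’s band values into coarse bins, so pixels whose values land in the same bin across every band end up grouped together.

```python
# Toy sketch of the "binning" idea behind segmentation (not Pro's algorithm).
# Each pixel is keyed by its (row, col) coordinate, with hypothetical
# intensity values in three bands.
pixels = {
    (0, 0): (12, 195, 40),   # (blue, green, red) values - made up
    (0, 1): (14, 198, 42),   # similar to the pixel above
    (1, 0): (122, 60, 35),   # quite different - a different land cover?
    (1, 1): (128, 62, 33),
}

def bin_key(values, width=20):
    # Quantise each band into coarse bins of the given width, so
    # pixels with similar spectral values share the same key.
    return tuple(v // width for v in values)

# Group pixel coordinates by their bin key.
segments = {}
for coord, values in pixels.items():
    segments.setdefault(bin_key(values), []).append(coord)
```

With these values the four pixels fall into two groups, which is the essence of what segmentation does, just with far more sophistication (and spatial awareness) than a fixed-width bin.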

How to Segment

We’ll go back to our Mt Grand image to illustrate segmentation, here added to a map in Pro:

As we’ve seen before, this is a four-band image with a band each for Blue, Green, Red and Infrared.  To do land cover mapping that’s the bare minimum.  You can do it with an RGB image but the result won’t be nearly as reliable – the infrared band adds a lot of value.  As shown above, I’m working from the Imagery tab and will use the Segmentation tool under Classification Tools:

We’ll use the other two tools in a later post.

After clicking on Segmentation a new pane should open at the right – we’ll need to set some parameters:

Spectral Detail sets the level of importance given to differences between features you want to distinguish.  Values range between 1 and 20.  The higher the value, the more you can distinguish between features.  Lower values result in fewer segments (and, oddly enough, longer processing times).

Spatial Detail also ranges from 1 to 20.  The higher the number, the more importance is placed on keeping features that are close together distinct, so higher values suit scenes where the features of interest are small and clustered; lower values give a spatially smoother, more generalised result.

The Minimum segment size in pixels allows you to set the minimum size of any segment (though not its shape).  Segments smaller than this size are merged into the best-fitting neighbouring segment.  Of course, how large this area will be in real life depends on the resolution of the image.
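That minimum-size rule can be sketched in a few lines of Python.  The segment labels, sizes and "best-fitting neighbour" lookup below are all hypothetical (and this is not Pro’s actual implementation): the point is just that any segment under the threshold gets folded into a neighbour.

```python
# Toy sketch of the minimum-segment-size rule (not Pro's actual algorithm).
min_size = 3  # hypothetical threshold, in pixels

# Hypothetical segment sizes, and for each undersized segment, the
# neighbouring segment whose values are the closest match.
segments = {"A": 10, "B": 2, "C": 5}
best_neighbour = {"B": "C"}  # B is too small; its best fit nearby is C

for seg, size in list(segments.items()):
    if size < min_size:
        # Fold the undersized segment's pixels into its best neighbour.
        segments[best_neighbour[seg]] += segments.pop(seg)
```

After the merge, segment B is gone and its two pixels belong to C.  In practice Pro also has to work out which neighbour is the best fit from the spectral values, which is where the real work happens.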

There’s usually a fair bit of trial and error in getting the segmentation right, as these settings depend directly on how much detail you want to map.  For this post, we’ll just stick with the defaults.  Here’s the result, zoomed in to an area so you can see the effect – I’ve put the original image and the segmented one side by side so you can more easily see the output:

They look quite similar, I hear you say.  I’ll zoom in some more to tease out the differences:

On the left we can see all the detail from the original image, but it’s more generalised and smoothed out on the right.  This is where the paint-by-number picture comes back.  Looking at segmented images, I’m often reminded of that slightly abstracted effect, like someone has painstakingly painted within the lines with just a few colours.  We can visually interpret what the land covers are on the right (including shadows) and hopefully you can see how groups of similar pixels are now together.  These are the segments.  We could go back and tweak the settings to end up with fewer or more segments but the key thing we’ve accomplished here is to simplify the original image using the details from the individual bands.  Now instead of dealing with individual pixels, we’re dealing with zones of pixels with similar values.
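That paint-by-numbers effect comes from every pixel in a segment being displayed with a single representative value for the whole segment.  Here’s a minimal sketch with made-up numbers, using the per-segment mean for one band:

```python
# Why a segmented image looks "paint-by-numbers": each segment's pixels
# are replaced by one representative value (here, the segment mean).
# Segment ids and intensities are hypothetical.
segment_pixels = {
    1: [100, 104, 96],       # segment id -> pixel intensities in one band
    2: [30, 34, 32, 36],
}

# One "paint colour" per segment.
segment_mean = {seg: sum(vals) / len(vals)
                for seg, vals in segment_pixels.items()}
```

Every pixel in segment 1 would now be drawn at 100, and every pixel in segment 2 at 33 – just a few flat colours, painted within the lines.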

You might be tempted to think our job is done here, but this is really the first step towards land cover mapping.  While the pixels have been grouped together we still don’t know what they represent.  Well, at least Pro doesn’t.  Our wetware can quickly do the visual interpretation and have a pretty good idea about what’s what.  But for this to be a useful, permanent layer for analysis and mapping, we’ve got to go to the next step of image classification.  And we’re going to want to do the whole image at one time and not just smaller sections.  It’s a fairly involved process so stay tuned – we’ll cover it soon.

In the meantime, grab your favourite paint-by-numbers kit and start relaxing.  Here’s mine: