In remote sensing analysis, the higher the image resolution, the more valuable the asset. In this post we cover pansharpening: the process of using high-resolution panchromatic imagery to improve the resolution of coarser multispectral imagery.
I’m not a big fan of all those CSI-type shows, especially when it comes to their treatment of images. All they need to do is utter “Enhance!” and the killer’s reflection in the victim’s eye suddenly becomes visible – cue the chase and lots of gunfire. Here’s a good compilation of such instances and here’s a welcome antidote.
The idea that the resolution of an image can somehow be magically increased to improve detail is mostly a fantasy – though there is one thing that comes close in remote sensing: pansharpening. With this technique, we can basically use a high resolution black and white image to increase the resolution of a coarser multispectral image. There’s a lot to unpack in that, and we started the conversation about imagery in a previous post. There we saw that imagery captures reflected energy at various wavelengths and resolutions. Looking at Landsat-8 satellite data may be a bit helpful here.
This satellite captures multispectral imagery across 11 bands (this link provides a great band-by-band discussion, by the way – well worth a read), summarised in the figure below:
The visible (to our remotely sensing eyes) blue, green, red bands are 2, 3 and 4 respectively and vegetation mapping is made a lot more useful with band 5, near infrared. Note band 8, panchromatic and, more specifically, note the resolution. Band 8 is at 15 m while most of the rest are at 30 m. In general, panchromatic sensors have higher resolutions than the multispectral sensors. Here’s another way to look at those bands, in terms of wavelengths:
Perhaps I’ll break down the different sensors in another post but for now we’re focused on bands 2, 3, 4, 5 and 8. We can take advantage of the higher resolution of band 8 to increase the resolution of the ones we’re most interested in. Here’s where pansharpening comes in and there are a range of algorithms that do the heavy lifting.
Let’s take a closer look at the Mt Grand satellite image we looked at previously – here I’ll zoom in on a particular area with a bit of detail and show both the RGB image on the left (2 m) and the panchromatic (black and white) image (0.5 m) on the right.
Hopefully you can clearly see the resolution differences. In ArcGIS there are several pansharpening methods to choose from: Brovey, ESRI, Gram-Schmidt, IHS and Simple Mean. I won’t go into the gory details of how they work but feel free to check the link above if you really can’t get to sleep without knowing.
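To give a feel for what’s happening under the hood, here’s a minimal sketch of one of the simpler methods, the Brovey transform, using numpy. The idea: scale each multispectral band by the ratio of the pan band to a weighted average of the multispectral bands. Note this assumes the multispectral bands have already been resampled to the pan band’s grid (the function name and the equal default weights are my own choices for illustration – production implementations like ArcGIS’s use sensor-specific weights):

```python
import numpy as np

def brovey_pansharpen(ms, pan, weights=(1/3, 1/3, 1/3)):
    """Brovey transform pansharpening (sketch).

    ms  : float array, shape (bands, H, W), multispectral bands
          already resampled to the panchromatic grid.
    pan : float array, shape (H, W), panchromatic band.
    """
    ms = ms.astype(np.float64)
    pan = pan.astype(np.float64)
    # Weighted "intensity" image built from the multispectral bands.
    intensity = np.tensordot(np.asarray(weights), ms, axes=1)
    # Scale every band by pan/intensity; epsilon avoids divide-by-zero.
    ratio = pan / (intensity + 1e-12)
    return ms * ratio  # (H, W) ratio broadcasts over (bands, H, W)
```

Because each band is multiplied by the same ratio, Brovey injects the pan band’s spatial detail while preserving the relative proportions between bands – which is also why it can distort absolute colours.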
For the Mt Grand image, I tried all of the above methods and the IHS method seemed to work the best (i.e. looked the most realistic, colour-wise). For the above area, the output looks like this:
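For the curious, the core of the IHS approach can also be sketched in a few lines. Here’s the “fast IHS” variant: treat the mean of the bands as the intensity, then add the difference between the pan band and that intensity to every band (again assuming the multispectral bands are already resampled to the pan grid – the function name is my own, and ArcGIS’s implementation differs in detail):

```python
import numpy as np

def fast_ihs_pansharpen(ms, pan):
    """Fast additive IHS pansharpening (sketch).

    ms  : float array, shape (bands, H, W), resampled to the pan grid.
    pan : float array, shape (H, W), panchromatic band.
    """
    ms = ms.astype(np.float64)
    pan = pan.astype(np.float64)
    intensity = ms.mean(axis=0)          # intensity component
    return ms + (pan - intensity)        # substitute pan for intensity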
The result is a 0.5 m resolution multispectral image (even though it looks a little “ghosty”), ready for further analysis.
With remote sensing, resolution is paramount, so what I’ve been able to do here is fuse a high resolution panchromatic image with a lower resolution multispectral image to get the best of both worlds. All well and good, but what next? The value of multispectral data goes beyond just giving us a pretty picture. In later posts we’ll look at how we can build on this image to do some vegetation and land cover mapping. These will include image classification and vegetation indices, so stay tuned – it should really “enhance” your mapping abilities.