Wavelength(s)
Remotely sensed images are incredibly useful sources of GIS data. In this post we begin looking at using satellite images to create useful GIS data.
Without giving too much away about myself, one of my top five absolute favourite Van Morrison songs is Wavelength, from the 1978 (egads!) album of the same name. It might well be number one but please don’t force me to make such an important decision. I will always remember seeing him play this song on Saturday Night Live as a young lad and being forever changed after that (in a good way, I think).
(I can’t find that video anymore but here’s one from Belfast ’79)
This isn’t a post about Van Morrison, but it is one about wavelengths and how important they can be to GIS. I’m talking here about the use of remotely sensed images (from planes, satellites and UAVs) as a source of data, much of which depends on the wavelengths of light (well, more precisely, electromagnetic energy) captured in such images. Before we go too much further, we need to delve into the electromagnetic spectrum:
Does this bring back memories of that high school physics class? The electromagnetic energy we’re probably most familiar with is the visible light our sophisticated remote sensors (eyes) are tuned to. But that’s just a small portion of the spectrum, which covers several orders of magnitude of wavelengths. We are literally bathed in energy, much of it from the sun but plenty of it human-made. At one end of the spectrum are gamma rays – high frequency, short wavelength energy that is deadly; exposure to this stuff can damage your DNA and lead very quickly to death. Most everyone knows the benefits of the lower frequency, longer wavelength X-rays, and of course there’s the visible light we rely on. When you feel the warmth of the sun on your arm, you’re experiencing invisible infrared energy. Lower in frequency still, the even longer wavelength micro- and radio waves are also well known. In covering the spectrum we move from wavelengths of nanometres (10^-9 metres – a thousand-millionth of a metre!) through to metres and kilometres. Practically speaking, GIS makes the most use of the narrow band between roughly 400 nanometres and 1,000 micrometres – from blue through to thermal infrared (sensible heat). How do we work with these wavelengths? Mostly through aerial photos and satellite images.
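To put some concrete numbers on that practical band, here’s a minimal Python sketch of approximate wavelength ranges, loosely based on the Landsat TM band definitions – the exact cut-offs vary from sensor to sensor, so treat these as illustrative assumptions rather than gospel:

```python
# Approximate wavelength ranges (in nanometres) for the portion of the
# spectrum most used in GIS imagery. Ranges loosely follow the Landsat TM
# band definitions; actual cut-offs differ between sensors.
BANDS_NM = {
    "blue":             (450, 520),
    "green":            (520, 600),
    "red":              (630, 690),
    "near_infrared":    (760, 900),
    "thermal_infrared": (10_400, 12_500),  # ~10.4 to 12.5 micrometres
}

def wavelength_to_band(nm):
    """Return the named band containing a wavelength (nm), or None."""
    for name, (lo, hi) in BANDS_NM.items():
        if lo <= nm <= hi:
            return name
    return None

print(wavelength_to_band(480))     # falls in the blue range
print(wavelength_to_band(11_000))  # falls in the thermal infrared range
```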
For example, here’s an aerial photo of Quail Island in Lyttelton Harbour:
This image was taken by a slightly high-end (well, it was at the time) digital camera from an airplane at low altitude and has a resolution (pixel size) of about 0.72 m on the ground. It’s a raster image as you can see if we zoom in far enough to see the individual pixels:
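As a quick aside, pixel size and pixel count together tell you how much ground an image covers. A tiny sketch – the 0.72 m figure is from the photo above, but the pixel dimensions here are made up purely for illustration:

```python
# Back-of-the-envelope ground footprint of a raster: pixel count times
# pixel size. The 0.72 m resolution is the value quoted for the Quail
# Island photo; the 4000 x 3000 pixel dimensions are an assumption for
# illustration, not the real image size.
resolution_m = 0.72
cols, rows = 4000, 3000

width_m = cols * resolution_m    # east-west extent on the ground
height_m = rows * resolution_m   # north-south extent on the ground

print(f"{width_m:.0f} m x {height_m:.0f} m")  # 2880 m x 2160 m
```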
You’ll notice that the borders of the image are not straight – that’s because some of the distortions in the image have been corrected – but more on that another time. There’s a lot that can be gleaned from this image – we could use it as a base map to create digital layers of the features that are visible. If we took several of these images over time we could track how the land cover is changing. Looking at this image in a bit more detail, we can note that it is in JPEG format (i.e. the image name ends in .jpg). This is a very standard format for images that I suspect most of us are familiar with. If we look at the image in ArcCatalog, we can further note that it is actually made up of three layers, or as we’ll come to refer to them more correctly, bands:
JPGs work off a colour model known as RGB, for Red, Green and Blue. A digital camera has a separate sensor for each of these bands of energy and captures the varying intensities of each colour in three separate images. When added to ArcMap, the software knows how to render these three bands as natural colour. (If you have a look at the image yourself, preview each band and see how the intensities change between the bands.)
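In other words, a natural colour render is just the three band rasters stacked into one array with a colour axis. A minimal NumPy sketch using a toy 2×2 image with made-up intensity values:

```python
import numpy as np

# A toy 2x2 "image": three separate single-band rasters, one per colour,
# with pixel intensities 0-255, as a camera might record them. The values
# are invented for illustration.
red   = np.array([[255,   0], [128,  64]], dtype=np.uint8)
green = np.array([[  0, 255], [128,  64]], dtype=np.uint8)
blue  = np.array([[  0,   0], [128, 255]], dtype=np.uint8)

# Rendering as natural colour amounts to stacking the three bands into
# one array with a trailing colour axis: shape (rows, cols, 3).
rgb = np.dstack([red, green, blue])

print(rgb.shape)  # (2, 2, 3)
print(rgb[0, 0])  # [255 0 0] -> a pure red pixel
```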
Let’s step things up a notch and look at a satellite image we recently acquired of Mt Grand Station near Lake Hawea:
This is from the GeoEye-1 satellite with a 2.4 m resolution (Cost: ~$700 + GST). Lots of things to talk about here but let’s start by looking at this image in ArcCatalog:
Unlike the Quail Island image, it’s in TIFF format and has four bands: red, green, blue and an additional band with near infrared data (not all TIFFs have four bands). Since it’s got data beyond just red, green and blue, an image like this is called “multispectral”. And this extra band opens up a whole new realm of useful data that we can glean from this image. Plants are particularly active in the red and infrared bands – we can use this extra information in some image analysis to convert the pixels over to land cover. In this way, the image becomes more than just a good basemap – it’s a platform from which we can extract a variety of useful data, all relying on the wavelengths of energy captured. And there are more bands to be had. Below is a preview of an image captured by one of the Landsat satellites – this one packs a whopping six bands:
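To give a taste of why that red/near-infrared behaviour matters, here’s a small NumPy sketch of one common calculation, the Normalised Difference Vegetation Index (NDVI). The reflectance values below are made up for illustration – we’ll cover vegetation indices properly in a later post:

```python
import numpy as np

# Toy red and near-infrared bands (reflectance values, 0-1). Healthy
# vegetation reflects strongly in NIR and absorbs red light, so the
# contrast between the two bands separates plants from bare ground.
# All values here are invented for illustration.
red = np.array([[0.10, 0.40],
                [0.05, 0.30]])
nir = np.array([[0.60, 0.45],
                [0.70, 0.35]])

# Normalised Difference Vegetation Index: (NIR - red) / (NIR + red).
# Values near +1 suggest dense vegetation; near 0, bare soil or rock.
ndvi = (nir - red) / (nir + red)

print(ndvi.round(2))
```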
With many more bands still – typically dozens or even hundreds of narrow, contiguous ones – we would refer to an image as “hyperspectral”. The additional Landsat bands also take us further into the thermal infrared range of the spectrum – that much more grist for the mill in image analysis.
Before we go whole-hog into that topic, there’s a key bit of processing that we need to cover. While 2.4 m is pretty decent resolution, we have the ability to transform this image into a 0.5 m resolution image. And why wouldn’t we if we could?!? It’s certainly a truism with imagery that the finer the resolution the better, so long as you’ve got enough storage space and processing power. This post should set us up nicely for several follow-ons covering pan-sharpening, image analysis and vegetation indices.
I’m sure you can’t wait… So tee up a bit of Van Morrison (or whatever) on your turntable (or the device of your choice) until we can get to that.