GIS Blog

The grass is always greener, isn’t it?

This post looks at the NDVI, a vegetation index that can be derived from multispectral images and gives us useful information about the state of plant health for an area.

We’ve been talking about satellite images recently and in this post we’ll start going over some of the beneficial information we can get from those images beyond just a pretty picture.  In particular, with the right bands, we can do some simple calculations that tell us useful things about vegetation health and land cover.  Many of you will probably be familiar with the NDVI, or the normalised difference vegetation index.  We’ll first cover what it tells us and then cover how we derive it.

To get our heads around this, we’ve got to go back to the very basics of remote sensing, starting with the sun.  From moment to moment, the sun bathes us in radiation across an immensely wide range of wavelengths and frequencies.  We’ve seen it before, but here’s the electromagnetic spectrum diagram again:

As remote sensors, our eyes only work with the small range of visible light, but other types of sensors give us access to a wider range of wavelengths (energy).  The sun emits radiation across the whole range of frequencies and wavelengths shown above, and it is primarily this energy that remote sensing works off of – particularly how different materials reflect different amounts of energy.  (It’s worth a quick note to say that there are two kinds of signals we work with in remote sensing: passive (reflected energy) and active (a signal is sent out and we work with its reflection, e.g. LiDAR, radar, sonar).  In this post we’re working with passive reflected energy.)

Different materials reflect different amounts of energy.  In the figure below, the responses of bare soil, vegetation and water are shown across a range of wavelengths:

We might refer to these lines as spectral signatures and they allow us to differentiate between the materials.  For instance, the range of wavelengths that clear water reflects is much smaller than that for vegetation or soil.  In other words, at longer wavelengths, water would show very low values of reflectance.  Bare soil and vegetation reflect across a wider range of wavelengths but at differing amounts.  So if we had some imagery for wavelengths between, say, 2.2 and 2.6 micrometers, we should be able to differentiate one from the other.  That’s the basis for how we use a lot of remotely sensed imagery.

Let’s now look at plants specifically.  Most plants appear green to us because they reflect a lot of green light (as well as longer wavelengths, as we see above) that our eyes detect.  It turns out they also reflect a lot of infrared energy.  The energy they absorb is used for photosynthesis.  When a sensor on a satellite (or a UAV, for that matter) captures an image, it stores what it sees as intensities.  For instance, have a look at the Landsat 7 satellite image below covering much of the area around Christchurch.  This is a six-band image, but for now we’ll just focus on the blue, green, red and infrared bands.  Here’s the RGB image:

Below I’ll show all four bands for an area zoomed in to the blue box above – they are greyscale images, so a pixel’s level of brightness shows how much energy is reflected in that band: total absorption appears black while total reflection is white.

Looking at the intensities, there are subtle differences between the first three, but perhaps the greatest difference is with the infrared band.  Lyttelton Harbour and the estuary are black, showing that water absorbs infrared energy.  Areas of vegetation in the RGB image appear brighter in the infrared as well and there appears to be more red reflected than blue or green in general.  Industrial areas of Christchurch (roofs, mainly) reflect a lot in blue, green and red (i.e. they’re brighter) but absorb much of the infrared.

As a next step we can do a simple raster calculation that allows us to better distinguish vegetation from other materials.  We’ll use a standard vegetation index called the NDVI.

NDVI stands for normalised difference vegetation index.  We derive it by subtracting the value of the red band from the near infrared band (NIR) and dividing that by the two bands added together (that’s the normalising bit), pixel by pixel, i.e.
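In other words, NDVI = (NIR − Red) / (NIR + Red).  Here’s a minimal sketch of that pixel-by-pixel calculation, assuming the two bands are already loaded as NumPy arrays of reflectance values (the numbers here are made up for illustration):

```python
import numpy as np

# Illustrative reflectances for a tiny 2 x 2 image (not real data)
nir = np.array([[0.50, 0.40],
                [0.05, 0.30]])  # near-infrared band
red = np.array([[0.08, 0.10],
                [0.04, 0.25]])  # red band

# NDVI = (NIR - Red) / (NIR + Red), guarding against division by zero
denom = nir + red
ndvi = np.where(denom == 0, 0.0, (nir - red) / denom)

print(ndvi)  # every value falls between -1 and 1
```

In practice you’d run this over the full band rasters rather than a toy array, but the arithmetic is exactly the same.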

As a result, we get pixel values that range between -1 and 1.  Going further, we can roughly classify the type of surface material in a pixel based on this value.  Negative values can usually be classified as snow, cloud or water, while values close to zero are usually rock or bare soil.  Shrubs and low vegetation often have values between 0.2 and 0.3, while larger values (0.6 – 0.8) may be temperate or tropical forests.  The closer the value is to one, the denser or healthier that vegetation is.  If we run an NDVI analysis on our Christchurch image, here’s what we get:
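Those rough classes can be wrapped up in a small helper – the thresholds come from the ranges above, but the exact boundaries between classes are my own interpolation:

```python
def classify_ndvi(value):
    """Roughly label a pixel's surface from its NDVI value (in [-1, 1])."""
    if value < 0:
        return "snow, cloud or water"
    elif value < 0.2:
        return "rock or bare soil"
    elif value < 0.4:
        return "shrubs / low vegetation"
    else:
        return "denser vegetation (e.g. forest)"

print(classify_ndvi(-0.3))  # → snow, cloud or water
print(classify_ndvi(0.7))   # → denser vegetation (e.g. forest)
```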

The legend is quite rough – there’s no set symbology for this index and I’ve taken some liberties in the interpretation.  The nice thing about this result is that it allows us to map the state of health of the vegetation while distinguishing other land covers.  Which also brings us to a key point about not just the NDVI, but remote sensing in general: any imagery we work with is a snapshot in time – the results above capture the state of the vegetation at that moment only.  As anyone remotely familiar with the growth stages of plants will tell you, things change.  Thus, any NDVI result will very much depend on the point in the growth stage at which the imagery was captured.  In the image below, we can see that the amounts of reflected energy can change depending on where we are in that process:

At the height of this plant’s growth it’s reflecting half of the incoming infrared energy and absorbing most of the red.  Later, the infrared percentage drops off and the amount of red reflected increases – with noticeable changes in the NDVI value.  Of course, the changes could also be due to differences in plant health from drought or disease, so we’d need a bit more context to determine what’s driving the difference.  While this is a bit of a blessing and a curse, it does allow us to monitor changes in plant health over time.
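To put rough numbers on that: at peak growth about half the incoming NIR is reflected, and I’ll assume just 5% of the red (that 5% is my own illustrative figure, as are the late-season values).  A quick calculation shows how much the index shifts:

```python
def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red) for scalar reflectances."""
    return (nir - red) / (nir + red)

peak = ndvi(0.50, 0.05)  # peak growth: ~50% NIR reflected, ~5% red
late = ndvi(0.30, 0.15)  # later season: less NIR, more red (illustrative)
print(round(peak, 2), round(late, 2))  # → 0.82 0.33
```

So even with the plant still present, the index can drop markedly over a season.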

So the NDVI is a powerful index for mapping vegetation health and is widely used.  There are two primary ways we can derive this using ArcGIS:

  • A raster calculation, or
  • The NDVI button on the Image Analysis window

The first is quite straightforward as long as you know which band is which:

(Note – the individual bands of an image can be added to a map by double-clicking on the image name in the Add Data window – you’ll then be able to see the individual bands and add the ones you need.  To use the tool above I added the individual bands to a map and relabeled them with a more sensible name.)
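For reference, the expression I use in the Raster Calculator follows the formula directly – wrapping each half in Float() avoids integer division when the bands are stored as integers (the layer names below are placeholders for whatever you relabelled your bands as):

```
Float("NIR" - "Red") / Float("NIR" + "Red")
```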

The second method is probably not as widely known.  When working with imagery, the Image Analysis window provides some useful shortcuts for a lot of common workflows.  In ArcMap it can be added by going to Windows > Image Analysis – this adds a new window to work with which can be docked at the side to keep handy:

Images on your map are available at the top of the window: etm42rex_multi.img is the image I want to run the NDVI on.  When I click on it, the NDVI button in the Processing section becomes active.

Before my next step, I should open up the Options menu and check a few settings:

Here’s where knowing which band is which is important – for this image, I’ve set them to the correct bands as shown above and also ticked “Scientific Output” – this means the result will show the actual index values.  Clicking on the NDVI button is how I got the image shown earlier in the post.  I then changed the symbology to reflect the range of values and label them:

Well, this post went on a lot longer than I had anticipated at the start, but hopefully the ground we’ve covered has been enlightening, if not irradiating.  The NDVI is not the only index we can use – there are scads of others, but it is one of the more commonly used ones.  If there’s time (and interest), we could certainly cover them in a separate post.  In a later post I’ll cover image classification and how we can use that to map different land covers (amongst other things).

C

• July 25, 2019

