This post covers some of the outputs from LU’s new Unmanned Aerial Vehicle, or drone.

The university recently acquired a new unmanned aerial vehicle (UAV), or as they’re more commonly known, a drone.  It’s an eight-rotor Altus Delta with three sensors: a high-spec Sony camera, a Sequoia multispectral sensor, and a WIRS thermal camera.

That’s the thermal camera hanging off the bottom in this picture.  (We haven’t named this bird yet; any suggestions?)  With these sensors we can capture a wide variety of data, so in this post I’ll give you an overview of what we can expect to get out of the system.  Over the past few weeks, Don Royds, Majeed Safa and I have been learning how to use the system by flying around the Bert Sutcliffe Oval:

With the Sony camera we can capture video and stills, like this one:

Nice, but from a GIS perspective, the oblique image above isn’t much use on its own.  The stills become far more valuable when the camera is pointing straight down, because from them we can derive some very useful data.  What we basically do is program a flight plan for the UAV to cover an area in a systematic way, capturing images as it goes.  Here’s a picture showing the flight path in yellow, with the places where photos were taken as blue dots.

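If you’re curious what a flight plan like that looks like under the hood, here’s a minimal sketch of generating a “lawnmower” grid of waypoints over a rectangular block.  It’s illustrative only – the spacing values and coordinates are made up, and the real planning is done in the UAV’s own flight-planning software.

```python
# Minimal sketch: generate a "lawnmower" grid of waypoints over a rectangular
# survey area in local metres. The spacing values are hypothetical, chosen so
# adjacent flight lines give the image overlap the mosaicking software needs.

def lawnmower_waypoints(width_m, height_m, line_spacing_m, photo_spacing_m):
    """Return (x, y) waypoints covering a width x height block, flying
    back and forth along lines spaced line_spacing_m apart."""
    waypoints = []
    x = 0.0
    heading_up = True
    while x <= width_m:
        ys = [i * photo_spacing_m for i in range(int(height_m // photo_spacing_m) + 1)]
        if not heading_up:
            ys = ys[::-1]          # reverse direction on alternate lines
        waypoints.extend((x, y) for y in ys)
        x += line_spacing_m
        heading_up = not heading_up
    return waypoints

# e.g. a 120 m x 150 m block with 20 m between lines and a photo every 15 m
plan = lawnmower_waypoints(120, 150, 20, 15)
print(len(plan), "waypoints, first few:", plan[:3])
```
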
After the flight, we post-process these images to get an orthomosaic, which stitches all the separate images (40 in this case) together into one, with the added benefit of correcting distortions due to changes in elevation.  Here’s a look at all the images in Windows Explorer:

An issue we often run into with aerial photos (and satellite images to a lesser extent) is distortion caused by differences in ground height.  Consider two equally sized objects, one at the top of a hill and the other at the bottom of a valley.  Because the one on the hill is closer to the camera, it will appear larger in the photo.  There’s not much relief in the area we were flying, but in areas with lots of elevation change this distortion can be quite significant.  If elevation data are available, the image can be “corrected” to take this into account – the result is an ortho-corrected image.  When multiple corrected images are stitched together, we have an orthomosaic, which can then be treated as if it were a map.

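To put some (made-up) numbers on that hill-versus-valley example: the scale of a vertical photo at any point is roughly the focal length divided by the camera’s height above the ground at that point, so higher ground gets imaged at a larger scale.  A quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope illustration of the scale distortion described above,
# using made-up numbers. A vertical photo's scale is roughly f / (H - h):
# focal length over the camera's height above the ground at that point,
# so higher ground (smaller H - h) appears at a larger scale.

f = 0.016          # focal length in metres (hypothetical)
H = 80.0           # flying height above the datum, metres
h_valley = 5.0     # ground elevation in the valley, metres (hypothetical)
h_hill = 35.0      # ground elevation on the hilltop, metres (hypothetical)

scale_valley = f / (H - h_valley)
scale_hill = f / (H - h_hill)

print("valley scale 1:%.0f" % (1 / scale_valley))
print("hill scale   1:%.0f" % (1 / scale_hill))
print("the hilltop object appears %.2fx larger" % (scale_hill / scale_valley))
```
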
As an example, below is an image of the orthomosaic that came out of our recent flights (from an average height of about 80 m):

You can just make us out at the top of the image on the roadway onto the pitch.  Better still, here’s a 3D version that you can rotate and zoom in and out of – nice (you may need to open this in Chrome – it doesn’t work for me in Firefox).

This image alone is pretty useful.  It gives us a high-resolution image of an area of our choice.  But that’s not all we can get out of this.  We also get the information needed to correct the images: a DEM and a DSM.  DEMs you’re probably familiar with – raster grids of elevation.  DEMs are often described as “bare earth” rasters because they show only the ground surface, not the trees, buildings, or other things sitting on it (you might also see them referred to as DTMs – Digital Terrain Models).  Here’s a peek at the DTM in ArcMap – note the elevations in the legend – they range from 2.80087 m to 6.17135 m:

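For anyone who’d rather poke at these rasters outside ArcMap, here’s one way to check those numbers, assuming the DTM has been exported as a GeoTIFF (the filename below is made up):

```python
# Quick check of the DTM's pixel size, coordinate system and elevation range,
# assuming it was exported as a GeoTIFF. The filename is made up; rasterio is
# one of several libraries that can read it.
import rasterio

with rasterio.open("oval_dtm.tif") as src:
    dtm = src.read(1, masked=True)             # first band, nodata masked out
    print("pixel size:", src.res)              # e.g. (0.07, 0.07) metres
    print("coordinate system:", src.crs)
    print("min elevation: %.5f m" % dtm.min())
    print("max elevation: %.5f m" % dtm.max())
```
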
A quick note about those elevations – the data were captured in WGS84, so latitude and longitude.  The elevations are heights above the WGS84 ellipsoid rather than the ones we might be more familiar with from topo maps.  Projecting the data to NZTM only changes the horizontal coordinates; to get the more familiar heights we would also need to convert the ellipsoidal heights to a local vertical datum.  We can also derive a DSM (Digital Surface Model), which gives us the elevations of the buildings, trees and other features sitting on the ground.  Here’s a view of the DSM shown as a hillshade:

In this one we can see the cricket pavilion and a few other taller features with some good definition.   If you’re sitting down, I’ll tell you the resolution (pixel size) of these rasters: roughly 0.07 m!!  That’s stunning!  I fell out of my chair (well, off my Swiss ball) when I saw that.  While that sort of resolution is amazing, as a next step we will need to get a handle on just how accurate these values are.

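For a rough sense of where a pixel size like that comes from: the ground sampling distance (GSD) of a vertical photo is flying height times the sensor’s pixel pitch divided by the focal length.  The camera numbers below are hypothetical stand-ins, not the actual specs of our Sony:

```python
# Rough sense-check of ground sampling distance (GSD):
#   GSD = flying height x sensor pixel pitch / focal length
# The camera values below are hypothetical, not the actual Sony specs.

flying_height_m = 80.0        # average flight height from the post
pixel_pitch_m = 5.0e-6        # hypothetical sensor pixel size (5 microns)
focal_length_m = 0.020        # hypothetical 20 mm lens

gsd = flying_height_m * pixel_pitch_m / focal_length_m
print("approximate GSD: %.3f m per pixel" % gsd)   # ~0.02 m with these values

# The exported DEM/DSM rasters are often resampled coarser than the raw image
# GSD, which is consistent with the ~0.07 m pixel size mentioned above.
```
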
When you think about it, it’s pretty amazing that we’ve derived elevations from just these images.  How does that work?  We have the process of structure from motion to thank for this.  This photogrammetric technique takes advantage of multiple overlapping images to derive elevations.  We have a built-in structure-from-motion capability in our heads: our stereo vision lets us discern three-dimensional structure from two overlapping views, and the post-processing does a similar thing.  Because the UAV knows where it is (via GPS), the reconstructed geometry can be tied to real-world coordinates, giving it true scale and proportions.

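The very first step in that process is finding the same physical points in overlapping photos (tie points).  Here’s a minimal sketch of that matching step using OpenCV’s ORB features – the filenames are made up, and the real post-processing software goes much further (camera pose estimation, bundle adjustment and dense reconstruction):

```python
# The first step in structure from motion: find the same physical points in
# two overlapping photos. This sketch uses OpenCV's ORB features and a
# brute-force matcher; filenames are made up for illustration.
import cv2

img1 = cv2.imread("photo_012.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo_013.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print("tie points found between the two photos:", len(matches))
# Each matched pair is a tie point; seen from two known camera positions,
# its 3D location (and hence an elevation) can be triangulated.
```
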
Here’s a web map that brings these layers all together.

But wait, that’s not all – in addition to a DTM, DSM and orthomosaic, we also get a LAS point cloud, similar to what we might get out of LiDAR data.  Here’s an image of the point cloud as a LAS dataset in ArcScene (north is to the top of the data):

In this one you can see the line of trees and make out the shape of the pavilion.  The LAS dataset has well over seven million points in it (!).

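If you want to poke at the point cloud outside ArcScene, here’s a quick sketch using the laspy library, assuming the cloud has been exported as a .las file (the filename is made up):

```python
# Quick look at the point cloud outside ArcScene, assuming a .las export.
# The filename is made up; laspy reads the header and point records.
import laspy

las = laspy.read("oval_point_cloud.las")
print("number of points:", len(las.points))
print("elevation range: %.2f m to %.2f m" % (las.z.min(), las.z.max()))
```
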
We haven’t had much of a chance to work with the multispectral or thermal data yet, but once we have, I’ll post an update.  While we’re still learning how best to use the UAV, it’s already clear that we can gather some useful data for a range of interests at the university.  There’s still a ways to go before we’ve perfected our flying and our processing, but we’re off to a good start.

In the meantime, drone on…

C