This post follows up on Image(ry) is Everything and looks at a few apparent anomalies in the image.

In a previous post we looked at a high-resolution image of post-earthquake Christchurch from 2015-16. We briefly return to that same image in this post to look at a few oddities. In particular, let’s look at part of the image below showing the tank farm at Lyttelton – notice anything odd?

tankfarm

It may not be too apparent at this scale so let’s zoom in a bit:

tankfarmzoom

Apart from the amazing resolution of this image, it does remind me a bit of Inception – the tanks at the top are leaning in a different direction from the ones across the road! Yes, Lyttelton does have a bit of a reputation as a hard-drinking town, so is this the result of some drunken engineering at work? Not at all – it’s actually quite common in remotely sensed images and large-extent aerial photos.

Here’s another example from the satellite image for Wellington on Google Maps:

wellington

Now I could make jokes about the Beehive leaning to the right and Vic leaning to the left (and I often do) but let’s not get sidetracked. What we’re dealing with here is an issue of perspective – the image above is composed of at least two separate images, taken from different perspectives, that have been stitched together. Not only that, but going by the way the shadows fall they were taken at different times (though probably on the same day). Google gets its imagery from a variety of sources, including both satellite imagery and aerial photography. It’s not always recent, but it does add up to a unique global coverage.

Returning to the Christchurch image: at its full extent it covers an impressive 604 square kilometres, including the city and some outlying areas on the peninsula.

imageextent

As we saw above, there are some unique perspectives contained within this image. So what’s the story on all this? Part of what we’re seeing is timing – when the image(s) were captured – but it’s also about how they were captured. Let’s look at timing first. My assumption going into this was that an airplane flew over the city collecting multiple, sequential images with a high-resolution digital camera, which were later stitched together, so I first wanted to get a sense of whether the images were all collected at the same time. (If you’d like to play along at home you can add this image from J:\Data\Christchurch\Imagery\ChristchurchImage2015 or just view it on the LINZ Data Service.)

On the image below, I first drew the blue line along the shadow of the tank at lower left, orienting it carefully to the tank’s shadow line. I then copied and pasted that line (to preserve the angle) and sat it alongside the shadow line of the tank at upper right – to my untrained eye the shadow lines look to be pretty well parallel, so I’m assuming that the images were taken very close in time (or just happened to be taken on different days with the sun in roughly the same place – unlikely). (The green lines are the tile footprint boundaries as part of the mosaic dataset.)

tanklines
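If you’d rather not trust the eyeball method, the same check can be done numerically. Here’s a minimal sketch (my addition, not part of the original workflow) that computes the bearing of each digitised shadow line from its endpoint coordinates – the coordinates below are made up purely for illustration:

```python
import math

def bearing_deg(x1, y1, x2, y2):
    """Bearing of a line segment in degrees, clockwise from grid north."""
    return math.degrees(math.atan2(x2 - x1, y2 - y1)) % 360

# Hypothetical NZTM easting/northing pairs digitised along the two tank
# shadows -- the real values would be read off the map document.
shadow_a = bearing_deg(1575820, 5178240, 1575805, 5178228)
shadow_b = bearing_deg(1575990, 5178410, 1575975, 5178398)

# If the two shadows were cast at (nearly) the same time, the bearings should
# agree to within a degree or so of digitising error. (Bearings near the
# 0/360 wrap would need a little extra care, but that's beside the point here.)
print(round(shadow_a, 1), round(shadow_b, 1), round(abs(shadow_a - shadow_b), 1))
```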

If I wanted to get really technical I could measure the shadow lengths to see if I could determine the time difference between the images, though I would need to know the height of the objects casting the shadows.
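For what it’s worth, that calculation is just trigonometry – the tank height and shadow length below are invented purely for illustration:

```python
import math

# Sun elevation from shadow geometry: tan(elevation) = object height / shadow length.
# Both figures are made up -- the tank height would have to come from engineering
# drawings or a lidar DEM, and the shadow length would be measured off the image.
tank_height_m = 14.0
shadow_length_m = 9.5

elevation_deg = math.degrees(math.atan(tank_height_m / shadow_length_m))
print(f"Sun elevation roughly {elevation_deg:.1f} degrees")

# Repeat for a feature in the neighbouring image; a noticeably different
# elevation would point to a different capture time.
```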

Oh, and look, we had the HMNZS Canterbury in port that day:

canterbury

She looks to be listing a tiny bit…  With a bit of digging we could probably narrow down the day this image was captured, at least for this portion of the image. If I look over by the Naval Point Club, I think I’m pretty safe in saying it’s a Saturday, given all the dinghies and trailers parked up:

navalpoint

In fact, the Canterbury was in port between 15 and 22 February 2016, so Saturday, 20 February 2016 is a pretty likely day of capture for this image. So if these are two separate images stitched together (or rather, mosaicked together), where is the dividing line? After some rather obsessive-compulsive looking, here’s where I’ve assumed the boundary could be for this part of the image:

npboundary

Given the curve of the line at left, I assume a human made a decision about where that seam is rather than just an algorithm stitching images together at boundaries. And putting the boundary along a nondescript feature like a road makes it harder to detect, so that made sense to me. Over at Cashin Quay it’s even more apparent:

containerboundary

The blue lines separate areas where features are leaning in one of two main directions. To be honest, I found myself getting a bit nauseous looking at these changes in perspective. Here’s a place where I’ve found a definite border between images (I added the yellow highlighter line to make the seam easier to see):

boundary

So these image boundaries are cropping up all over and we’ve only looked at one small portion of the total image.

All of these images were captured around the same time, but that’s not always the case for the rest of the image. Here’s a particularly telling example – this is east of West Melton, between SH73 and the Old West Coast Road. Note the shadows – those at lower left are at a right angle to those at centre right!

shadows

So not only were parts of this image taken from different perspectives, they were also taken at very different times of the day.
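To put some rough numbers on that, here’s a small sketch using the open-source pysolar package (my addition, nothing to do with the original capture) to print the solar azimuth through a February day at roughly West Melton’s location – a shadow swing of around 90 degrees implies captures several hours apart:

```python
from datetime import datetime, timedelta, timezone
from pysolar.solar import get_azimuth

# Approximate location for West Melton, Canterbury.
lat, lon = -43.48, 172.35

# New Zealand Daylight Time (UTC+13) applies in February.
nzdt = timezone(timedelta(hours=13))

# Print the solar azimuth (degrees) through the day; the date is the
# assumed capture date from earlier in the post.
for hour in range(8, 19, 2):
    when = datetime(2016, 2, 20, hour, 0, tzinfo=nzdt)
    print(f"{hour:02d}:00  azimuth {get_azimuth(lat, lon, when):6.1f}")
```

With those azimuths in hand you can get a feel for how far apart in time the two flight passes must have been.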

Back in the LINZ Data Service, I did manage to find an additional layer with more information on the tiles that make up this image (you can find a copy of this on the J: drive in the same folder as the image). The layer shows the extent of each tile and the table holds some useful attributes, such as the day each tile was captured (some areas were reflown on three different days), the type of camera (VisionMap A3) and the image accuracy, amongst other things. The metadata on the LINZ site further tells us that “Aerial photography over the Christchurch City CBD was captured on 17 Nov 2015, and the surrounding parts of Christchurch City and Banks Peninsula were captured on 22 January, 10 & 20 February 2016”, so we’re getting closer to understanding the timing of all this. Here’s an image that breaks the tiles down by date of capture (shown in the Table of Contents at left):

tiletiming
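If you wanted to summarise that tile layer yourself rather than just symbolising it by date, something like the sketch below would do it – note that the file name and the capture-date field name are guesses on my part, so check the actual attribute table on the LINZ layer first:

```python
import geopandas as gpd

# Hypothetical file and field names -- the real tile index comes from the
# LINZ Data Service and its date attribute may be called something else.
tiles = gpd.read_file("chch_2015-16_tile_index.shp")

# Count tiles per capture date and total area flown on each day (km^2).
# The area calculation assumes a projected CRS such as NZTM.
tiles["area_km2"] = tiles.geometry.area / 1e6
summary = tiles.groupby("flown_date").agg(tile_count=("area_km2", "size"),
                                          area_km2=("area_km2", "sum"))
print(summary)
```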

Nice to have some confirmation that I got the date right for the Lyttelton images above. But timing doesn’t explain my leaning tanks. The answer to that gets down to the particular camera system used. Unlike most digital cameras used for aerial photos, this one is a fully automated capture and processing system that works more like the kinds of sensors you would find in satellites. The camera sweeps from side to side, capturing multiple, overlapping images as the plane flies:

video courtesy of Rafael Vision Map via YouTube

So features on the ground are captured multiple times, in different images, from different angles, at about the same time. These are then processed together to come up with a final image that stitches them all into what appears to be a seamless whole. There must be a best-fit algorithm that places image patches together; in the process, we end up with features that sit close together on the ground but appear to be oriented in different directions. Below is an extended video (2:44) that shows this process and the final result (and, no, I’m not getting any commission on this), again from Rafael Vision Map (other interesting videos here):

If we return to the West Melton shadows above, we could only get such big differences in shadow angles if the airplane returned to that same area at different times on the same day.

So in the end, with this system it is a computer making decisions about how the image is mosaicked together rather than a human painstakingly deciding what goes where.
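The VisionMap processing chain itself is proprietary, but to get a flavour of what automated mosaicking involves, here’s a bare-bones sketch using the open-source rasterio library to merge two overlapping ortho tiles. The file names are made up, and rasterio’s default “first dataset wins” rule is far cruder than a production seamline algorithm:

```python
import rasterio
from rasterio.merge import merge

# Two hypothetical overlapping ortho tiles; real tiles would come from the flight.
sources = [rasterio.open(p) for p in ("tile_east.tif", "tile_west.tif")]

# merge() resolves the overlap for us -- by default the first dataset wins
# wherever the tiles overlap.
mosaic, transform = merge(sources)

# Write the merged raster out using the first tile's profile as a template.
profile = sources[0].profile
profile.update(height=mosaic.shape[1], width=mosaic.shape[2], transform=transform)

with rasterio.open("mosaic.tif", "w", **profile) as dst:
    dst.write(mosaic)

for src in sources:
    src.close()
```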

So what’s the GIS upshot of all this? Imagery like this is extremely valuable. Think of how often you might turn on the satellite image in Google Maps so you can see what’s at a given location. These sorts of data are crucial for regional, district and city councils (though they can quickly go out of date). Civil Defence, disaster response, the military – these data provide important information for a wide variety of users. For most of these forms of visual interpretation, we can live with leaning buildings and odd shadows.

But above and beyond just showing us what’s there, remotely sensed data can be used for analysis, and that’s where we need to be a bit more careful. The metadata for this image says that it has been “orthorectified”, meaning that some of the errors inherent in the image have been removed. This mainly has to do with things appearing larger or smaller than they actually are because they are closer to or further away from the sensor or camera. But orthorectifying won’t straighten the buildings up. Without orthorectification, aerial photos and satellite images cannot be treated as flat, two-dimensional maps – we shouldn’t make area calculations off them. After orthorectification they are more reliable for things like building footprints and other features at ground level. But as we’ve seen above with the leaning buildings, I wouldn’t trust this image for the correct location of rooftops. As an example, check out the image below:

chcfootprints

This is an aerial photo from not long after the February earthquake – building footprints from the Christchurch City Council are shown in blue. If you look closely, say at the Heritage Hotel, you can see that the footprint matches pretty well (at least insofar as the image is in the right place) but the roof doesn’t. Here’s a closer look:

heritage
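The leaning-rooftop effect can even be given a rough number. Standard photogrammetry texts express the relief displacement of a vertical object as d = r·h/H, where r is the distance from the photo’s nadir point, h the object height and H the flying height. The figures below are purely illustrative:

```python
# Relief displacement for a vertical object in a (near-)vertical aerial photo:
#   d = r * h / H
# where r = distance of the object from the nadir point,
#       h = height of the object above the ground,
#       H = flying height above the ground.
# All numbers below are illustrative only.

flying_height_m = 3000.0     # H
building_height_m = 60.0     # h, roughly a 15-20 storey building
radial_distance_m = 800.0    # r, ground distance from the nadir point

displacement_m = radial_distance_m * building_height_m / flying_height_m
print(f"Rooftop displaced roughly {displacement_m:.0f} m away from the nadir point")
```

With these made-up numbers the rooftop shifts by around 16 metres – easily enough to push a roof well outside its own footprint at this image’s resolution.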

Of course another aspect of this is placing the images in their correct geographic context – a process called georeferencing, which we’ll cover another time.

So we’ve had a bit of an overview of some aerial imagery oddities, plus gained a bit of insight into remote sensing – there’s lots more we could talk about in this respect, so stay tuned.

C

Special thanks to Chris Worts at AAM NZ, the people who created the Christchurch image, for input on this post.