I debated whether to post this model.
I'm not really happy with the results, but I've learned quite a bit along the way and figured it might help somebody in the future. This post is pretty long, but there are a lot of photogrammetry details included for anybody who cares.
Here is LINK to a photogrammetry model of the TBM Avenger in San Diego, which sits at about 250 feet deep. It is based on video footage that Ben Lair shot. Here are some screenshots of the model:
The model has kind of an oil painting look to it in my opinion. I’m not crazy about that but it does have some interesting appeal. We’ll get to why I think it turned out like that in the “long version” below.
Model from photos
When I dived the TBM Avenger a few weeks ago, I was NOT taking pictures to make a photogrammetry model. My goal was to take stills and document the wreck (well, and survive a 250′ dive without getting the bends). 🙂
Therefore, I didn't really have enough pictures to make a good model. I did manage a model of the front section of the wreck on the port side that turned out pretty well:
I especially like the detail of the back of the cockpit. Note that I had to use a LOT of markers in the “Pro” version of the model in order to get my photos to line up correctly. I’ve included them in one of the screenshots so you can get an idea.
Model from video
The previous week, Ben Lair had made it to the wreck and Ben usually takes video footage and not stills. The Metashape tool has a cool function that will take a video input and then output stills that can be used to make a model. I faced quite a few challenges along the way and thought it would be good to document those so maybe others can learn.
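Metashape's built-in video import handles this frame extraction for you, but the same idea can be sketched outside the tool. Here is a minimal Python sketch that builds an ffmpeg command to dump frames as JPEG stills; the file names, frame rate, and quality setting are illustrative assumptions, not what Ben or I actually used:

```python
# Sketch: build an ffmpeg command that extracts video frames as JPEG
# stills, similar in spirit to Metashape's video import. The paths,
# frame rate, and quality below are made up for illustration.

def ffmpeg_frame_cmd(video_path, out_dir, fps=2, quality=2):
    """Return an ffmpeg argument list that samples `fps` frames per second.

    quality: JPEG quality for -qscale:v (2 = high, 31 = low).
    """
    return [
        "ffmpeg",
        "-i", video_path,             # input video
        "-vf", f"fps={fps}",          # sample N frames per second
        "-qscale:v", str(quality),    # JPEG output quality
        f"{out_dir}/frame_%05d.jpg",  # numbered output stills
    ]

if __name__ == "__main__":
    cmd = ffmpeg_frame_cmd("dive.mp4", "stills", fps=2)
    print(" ".join(cmd))
    # To actually run it: subprocess.run(cmd, check=True)
```

Pulling only a couple of frames per second also keeps the photo count manageable, which (as you'll see below) turned out to matter.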
Challenge 1: Camera lens correction
One of the features of Metashape is that it reads the EXIF data from pictures and uses it to calibrate the camera model. This is especially important with fisheye lenses, given the distortion they introduce. As it happens, Ben shoots a GH5 camera with an 8mm fisheye lens (16mm full-frame equivalent). The Metashape tool doesn't "transfer" EXIF data from a movie to the stills it creates. This makes some sense, since the focal length, ISO, etc. can all change during a movie. However, if you know they aren't going to change, it really would help to transfer it (Agisoft is considering the feature request I filed).
In the meantime, I needed a way to quickly and easily write the correct EXIF data to the hundreds of still photos created. Given my Unix background, I found a fantastic command line utility based on Perl called “exiftool.” Here is what I used to add the correct EXIF data:
exiftool -overwrite_original -Make=Panasonic -Model=DC-GH5S -FocalLength="8.0 mm" -FocalLengthIn35mmFormat="16 mm" -ISO=2000 -ExposureTime="1/125" -FNumber=3.5 <files>
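If you have a whole directory of stills, the exiftool call above can be scripted. Here's a small Python sketch that builds the same command for every JPEG in a folder; the tag values mirror the command above, exiftool is assumed to be on your PATH, and the folder name is made up:

```python
# Sketch: apply the same EXIF tags to every JPEG in a folder via exiftool.
# Tag values mirror the exiftool command from the post; the "stills"
# folder name is illustrative.
from pathlib import Path

EXIF_ARGS = [
    "-overwrite_original",
    "-Make=Panasonic",
    "-Model=DC-GH5S",
    "-FocalLength=8.0 mm",
    "-FocalLengthIn35mmFormat=16 mm",
    "-ISO=2000",
    "-ExposureTime=1/125",
    "-FNumber=3.5",
]

def exiftool_cmd(folder):
    """Return the exiftool argument list for all .jpg files in `folder`."""
    files = sorted(str(p) for p in Path(folder).glob("*.jpg"))
    return ["exiftool", *EXIF_ARGS, *files]

if __name__ == "__main__":
    print(exiftool_cmd("stills"))
    # To run for real: subprocess.run(exiftool_cmd("stills"), check=True)
```

Passing the arguments as a list (rather than one shell string) also sidesteps the quoting headaches around values like "8.0 mm".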
Challenge 2: Color correction and video light / backscatter
Ben was not shooting specifically for a photogrammetry model, and the video he produced turned out fantastic. However, I had to work on color correction and on the fact that his video lights (specifically the one on the right side) weren't pointed at an ideal angle and produced a lot of backscatter.
Here is an example still capture from the video before and after the changes in Lightroom:
The reason I had to do that (or so I thought at the time, and still think) is that on the original models I got this horrible "halo" around large parts of the wreck:
I also wanted a more realistic color and ended up with a pretty good model of at least the port section of the front of the plane:
Note: During this time, I also spent many, many tedious hours applying “masks” to literally every photo in order to reduce backscatter and “noise” from stuff in the ocean and around the wreck. The masks helped, but didn’t “fix” the problem. It is a useful tool in many situations but I needed another solution.
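For the curious: a mask here is just a black-and-white image the same size as the photo, where white marks pixels to keep and black marks pixels to ignore. A crude starting point could in principle be roughed out programmatically rather than by hand. This is a hedged, numpy-only sketch (not what I actually did) that thresholds dark open-water pixels into a binary mask; the threshold value is a guess and real photos would need hand-tuning:

```python
# Sketch: generate a rough binary mask (255 = keep, 0 = ignore) by
# thresholding out dark open-water pixels. Illustrative only; the
# threshold is a guess, and real wreck photos need hand-tuned masks.
import numpy as np

def rough_mask(gray, threshold=40):
    """gray: 2-D uint8 array (grayscale photo). Returns a 0/255 uint8 mask."""
    return np.where(gray > threshold, 255, 0).astype(np.uint8)

if __name__ == "__main__":
    # Fake 4x4 "photo": dark water on the left, brighter wreck on the right.
    frame = np.array([[10, 10, 200, 220],
                      [12,  8, 190, 210],
                      [ 9, 11, 180, 205],
                      [10, 13, 195, 215]], dtype=np.uint8)
    print(rough_mask(frame))
```

Even an automated first pass like this would still need manual cleanup, which is part of why I eventually looked for another solution.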
Challenge 3: Alignment (but not really aligned)
By "finessing" a few things and by carefully (and slowly) building up the model, I could get all the photos to align. HOWEVER, just because the software reports the photos as aligned doesn't mean they are correctly aligned.
In this case, when I didn't use markers and just finessed it, the model resulted in the port side of the wreck being "skewed" relative to the starboard side. (I had started the model on the starboard side and followed "backward" in time through the video frames, through the broken tail section, which lacks a lot of features for good alignment. Small errors get compounded.) You can see the alignment problem in this model.
Note how the port side of the cockpit is “bigger” and skewed forward of the starboard side. I will stress that the photos are all technically “aligned” in terms of the software.
To “fix” this problem, I gave up on trying to get “sparse” photos to align and resorted to using manual markers on each of the two “chunks” in the model and then aligning on those. This worked out well. You can see the markers (yellow pin with blue flag) in the screenshot below.
Challenge 4: Coloring & the "oil paint" effect
After all the above, I "gave up" on trying to fix the problem through masks, color correction, etc., and just built a model with the two different sections of the wreck aligned through the use of markers.
I built two versions of the model.
The first was with all the photos. I thought that maybe the "oil painting" effect was due to the overlap of photos that might not be perfectly focused or aligned, and the blending process that happens. Here is a screen capture of that model:
For the second version, I used the Tools -> Reduce Overlap function of Metashape to reduce the number of photos — hoping that the reduced overlap would result in a “sharper” texture model. Here is the result of the same general view:
You can see that, in general, it is sharper, but nowhere near as sharp as the model built from the still photos I took, or even the limited-view model of the port side based on Ben's video.
I still need to spend more time with the software to figure this out. But, it won’t be with this model. 🙂
After countless hours in Metashape working on the TBM Avenger model, I've learned a lot about the software and its different features. Some of the key lessons are as follows:
- Sometimes fewer photos are better – Since the program "blends" different photos of the same area, if they aren't all exactly focused, it could result (I think) in the "oil painting" effect I saw.
- Markers between chunks should be on "flat" surfaces that don't allow for different interpretations. For example, a marker placed on the antenna on top of the cockpit doesn't give a clear, unambiguous position, so it could not be used effectively to align chunks.
- Still photos are better than video screen captures – I believe part of my problem was that screen grabs aren't always in focus, whereas hand-picked still photos are.