Photogrammetry Technical Details & Equipment

Background

I started hearing about photogrammetry for documenting and visualizing wrecks about 2-3 years ago and had a passing interest. Since then, I’ve spent more time working on my photography skills (and equipment). I had watched a couple of Wetpixel videos on photogrammetry recently and decided that, with Covid, I should learn more about it.

I’ve documented a bit of my “journey” on this website.

I’ve had quite a few people ask me what equipment and software I use, so I figured it would be worthwhile to create a short post with some details.

Disclosure: I don’t sell any equipment or work for any of these companies and profit in no way if you end up buying any of it.

Technical Details

Camera & Housing setup

  • Camera: Sony a6400
  • Lens: Sony 10-18mm F/4 (15mm-27mm full frame equivalent)
  • Housing: Nauticam
  • Dome: Nauticam 7″ Acrylic Dome #36129 (Note: the glass dome is “better” but only rated to 60m)
  • Lights: 2 x BigBlue 15,000 lumens
  • Pictures are usually shot at 10mm, 1/60th, F/8, ISO Auto (but limited to 1250)

Note: I prefer a rectilinear lens for photogrammetry models (and shooting wrecks in general). Fisheye lenses are great for field of view, but they definitely distort more, and I’m shooting for accuracy and not “looks” when I shoot to build a model. I really like the results I get from the 10-18mm lens.

What I’m trying to do when I shoot for a model is also VERY different from when I shoot stills. I’m not trying to compose a picture with backgrounds, rule-of-thirds, etc. I am trying to identify features that the software can successfully use to align photos. I don’t care as much about backscatter since I know the software will filter it out.

I also have a Sony a7R IV but don’t have a housing for it (yet). For photogrammetry, I think a full-frame camera like the a7R IV would be complete overkill. The processing requirements are already pretty extensive with the resolution of the pictures from the a6400. I don’t think you would gain much from shooting with a 61 MP camera!

Sony a6400 with a 10-18mm lens in a Nauticam Housing with a 7″ acrylic dome and 2 x BigBlue 15k lights

Computer & Software

  • iMac Pro (2017)
    • 3.2 GHz 8-Core Intel Xeon W
    • 32 GB DDR4 memory
    • Radeon Pro Vega 56 8 GB graphics card
  • macOS Catalina
  • Agisoft Metashape
    • Standard edition – I bought this software
    • Professional edition – They have been kind enough to provide me a time-limited Pro license so I can test out and use markers, which are very helpful in some situations

The software definitely hammers both the CPU and the GPU, so the more processor cores and the faster the GPU you can get, the better off you are going to be. There are also a few cloud hosting companies that will lease you compute time on a machine with Metashape installed.

Example Processing Steps & Times – UB88 Conning Tower

Processing times can be extensive. My iMac Pro is what most people would consider a fast computer for personal use, but it can still take a long time to align the photos, process the dense cloud, build the depth maps, etc.

I’ve detailed below the different steps required to make a 3D model with textures for the UB88 Conning Tower.

Step 1 : Photo alignment

The first step is to align the pictures, which produces a sparse cloud.

I usually do that somewhat “manually” by building a base model and then adding pictures in chunks, re-aligning after each chunk is added. This model has 428 aligned pictures. That is definitely overkill and could probably be pruned to 250 or fewer (e.g., using the Tools -> Reduce Overlap function in Metashape).

Building it from scratch with no photos aligned, at Medium quality, it took about 3 minutes to detect the candidate alignment points for the photos (I set a limit of 40k key points per photo) and about 38 minutes to match the points (I set a tie point limit of 10k) and complete the alignment; this runs mostly on the GPU. The software then estimates camera locations, which is largely CPU-bound and took about 4 minutes. The sparse cloud after alignment is below:

Sparse Cloud after alignment
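
If you have the Pro edition (the Standard edition doesn’t include Python scripting), this alignment step can also be driven from Metashape’s Python API. Below is a minimal sketch using the settings above; it assumes 1.6-era argument names, and the project path and photo list are placeholders:

```python
import Metashape

doc = Metashape.Document()
doc.save("ub88_conning_tower.psx")  # placeholder project path
chunk = doc.addChunk()
chunk.addPhotos(["photos/IMG_0001.JPG", "photos/IMG_0002.JPG"])  # ...428 photos in practice

# Medium accuracy (downscale=2), 40k key points and 10k tie points per photo,
# matching the limits described above. Detection/matching runs mostly on the GPU.
chunk.matchPhotos(downscale=2,
                  generic_preselection=True,
                  keypoint_limit=40000,
                  tiepoint_limit=10000)

# Estimating camera positions (producing the sparse cloud) is largely CPU-bound.
chunk.alignCameras()
doc.save()
```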

Step 2 : Dense cloud creation

The next step is to build the dense cloud. I built a Medium quality cloud with Mild depth filtering (mostly the default settings). This is very processor intensive. The first process, building the depth maps, runs largely on the GPU and took about 1 hour and 8 minutes. The second process, building the dense cloud itself, is largely CPU-bound and took about 30 minutes.
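
Continuing the same hypothetical script, the dense cloud stage maps onto two calls (argument names again assume a 1.6-era release):

```python
# Depth maps are GPU-heavy; downscale=4 corresponds to Medium quality, and
# MildFiltering matches the depth filtering setting described above.
chunk.buildDepthMaps(downscale=4, filter_mode=Metashape.MildFiltering)

# Building the dense cloud from the depth maps is largely CPU-bound.
# (Note: this was renamed buildPointCloud() in Metashape 2.x.)
chunk.buildDenseCloud()
doc.save()
```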

Step 3 : Build Mesh

I used a Medium face count to build the mesh for the 3D model, and it took about 12 minutes.

3D Model (Manually “cleaned up” and No Texture)
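
In script form, the meshing step is a single call; Metashape.MediumFaceCount mirrors the Medium face count setting used here (still the same hypothetical 1.6-era sketch):

```python
# Build the mesh from the dense cloud with a Medium target face count.
chunk.buildModel(source_data=Metashape.DenseCloudData,
                 surface_type=Metashape.Arbitrary,
                 face_count=Metashape.MediumFaceCount)
doc.save()
```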

Step 4 : Build the Texture

This is where you add the color from the photos as an “overlay” on top of the 3D model. Using the default settings, the texturing took about 10 minutes.

Final textured model
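
And the texturing step, sketched with roughly default settings: UV mapping first, then baking the photo colors into the texture atlas:

```python
# Generate UV coordinates, then blend the aligned photos into a texture.
chunk.buildUV(mapping_mode=Metashape.GenericMapping)
chunk.buildTexture(blending_mode=Metashape.MosaicBlending, texture_size=8192)
doc.save()
```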

Summary

Overall, the model had 428 photos and took about 2 hours and 45 minutes of compute time to build.

One nice feature of the Pro version of Metashape is that it lets you “batch” process tasks. So, I can kick off a dense cloud, followed by a mesh, followed by a texture, have that all happen while I sleep, and wake up to a finished model.
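
A scripted version of that overnight batch might look roughly like the following, chaining the calls from the sketches above and saving after each stage so a crash doesn’t cost the whole night (the project path is a placeholder for an already-aligned project):

```python
import Metashape

doc = Metashape.Document()
doc.open("ub88_conning_tower.psx")  # placeholder: an already-aligned project
chunk = doc.chunk

chunk.buildDepthMaps(downscale=4, filter_mode=Metashape.MildFiltering)
chunk.buildDenseCloud()
doc.save()  # checkpoint after the longest stage

chunk.buildModel(source_data=Metashape.DenseCloudData,
                 face_count=Metashape.MediumFaceCount)
doc.save()

chunk.buildUV(mapping_mode=Metashape.GenericMapping)
chunk.buildTexture(blending_mode=Metashape.MosaicBlending)
doc.save()
```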

5 thoughts on “Photogrammetry Technical Details & Equipment”

  1. Thanks for sharing, Brett. I’m sure Jeffrey would enjoy seeing what you’re doing too.

    I spoke to a guy who does some photogrammetry professionally. He’s also a diver and does some U/W work. I asked about a GoPro and he said they suck for photogrammetry. I have a GoPro 8 and I’m impressed with its quality, especially compared to the earlier 3 & 4 I used to have. The 8 has a linear mode that eliminates most of the lens distortion (though it’s not as wide), and 4K provides fairly high resolution. I’ve heard others say the GoPro does OK, and I don’t see why a video shot at 4K/15fps wouldn’t work well. The advantage is that you would get many more frames and wouldn’t have to fiddle with the camera; just hold it and shoot the video. What is your opinion? Would the additional images make the rendering take too long? Is the resolution not good enough?

    BTW, my GF took the Sundiver to Catalina today. She said it was rough, cold and rainy. They came back early. That’s the main reason I’m not going out this weekend.

    Steve

    1. Jeffrey and I have been in touch (thanks for the intro)! He is thinking about revising his paper to the USN on the UB88 and including my new model.

      I’ve only tried to make a model from video once (see my post on the San Diego TBM model). That was from video shot on a Panasonic GH5, which is known for good video quality. I had mixed results.

      The Metashape software can even take video input and create the stills. You are correct that they do line up nicely and the alignment task is easier.

      The downside is that you potentially have too many pictures, which leads to longer processing times (as you mention). I also had some weird effects where the sharpness of the final texture on the model wasn’t as good. I’m not sure if that was due to the large overlap of photos and the blending that happens when the texture is created, OR because the photos aren’t in perfect focus when you “randomly” create stills from video, OR both.
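
      If you wanted to control the still extraction yourself rather than letting Metashape sample the video, a small OpenCV script could pull one frame per second. This is just a sketch; the function name and file names are hypothetical:

      ```python
      import os
      import cv2  # pip install opencv-python

      def extract_frames(video_path, out_dir, every_sec=1.0):
          """Save one still every `every_sec` seconds from a video clip."""
          os.makedirs(out_dir, exist_ok=True)
          cap = cv2.VideoCapture(video_path)
          fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS is unreadable
          step = max(1, round(fps * every_sec))
          index = saved = 0
          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              if index % step == 0:
                  cv2.imwrite(os.path.join(out_dir, f"frame_{saved:05d}.jpg"), frame)
                  saved += 1
              index += 1
          cap.release()
          return saved

      # extract_frames("gopro_clip.mp4", "stills")  # placeholder file names
      ```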

      One thought I had was to put the GoPro into interval shooting mode, have it shoot one picture per second, and then slowly go around the target subject. I think that could actually work, especially with a GoPro 8 or newer where the resolution is pretty good.

      It would certainly be worthwhile to give it a try on a small site. If you end up diving a small site (or a small section of a larger site) and want to try it, just dump all the photos to online storage (Dropbox, etc.), let me know where they are, and I can try to build something.

    2. PS – I dove the UB-88 on Thursday and the conditions deep were horrible.

      The surface layer was actually not too bad, but once I got to 130 feet or so and headed down, the line almost disappeared from view. It was really silty and sandy, almost like somebody had shaken up a snow globe and the particles were just suspended. 🙂

      I’ve never seen it that bad at that depth. The vis was literally 5 feet at the bottom. I put a strobe on the down line and my buddy ran a line and we couldn’t see the strobe from 10 feet away. After searching for 5-7 minutes we just called it. Ugh.

    1. Yeah, it isn’t exactly an easy process and all the challenges of photography underwater compound it (especially on deeper dives).

      The camera / lens / housing / dome combo weighs about 8 lbs, and each of the video lights with the arms and clamps weighs about 2.7 lbs, so the whole rig weighs about 13.5 lbs.

      However, underwater it is a LOT less, thanks to the buoyancy of the displaced water, the large air pocket trapped in the dome, and the floats I have on the arms. I’m guessing it is about 1-2 lbs negative in saltwater.
