I started hearing about photogrammetry for documenting and visualizing wrecks about 2-3 years ago and had a passing interest. Since then, I’ve spent more time working on my photography skills (and equipment). I recently watched a couple of Wetpixel videos on photogrammetry and decided that, with Covid, I should learn more about it.
I’ve documented a bit of my “journey” on this website.
I’ve had quite a few people ask me what equipment I use, what software I use, etc., and figured it would be worthwhile to create a short post with some details.
Disclosure: I don’t sell any equipment or work for any of these companies and profit in no way if you end up buying any of it.
Camera & Housing Setup
- Camera: Sony a6400
- Lens: Sony 10-18mm F/4 (15-27mm full-frame equivalent)
- Housing: Nauticam
- Dome: Nauticam 7″ Acrylic Dome #36129 (Note: the glass dome is “better” but only rated to 60m)
- Lights: 2 x BigBlue 15,000 lumens
- Pictures are usually shot at 10mm, 1/60th, F/8, ISO Auto (but limited to 1250)
Note: I prefer a rectilinear lens for photogrammetry models (and shooting wrecks in general). Fisheye lenses are great for field-of-view but they definitely distort more and I’m shooting for accuracy and not “looks” when I shoot to build a model. I really like the results I get from the 10-18mm lens.
What I’m trying to do when I shoot for a model is also VERY different than when I take pictures for stills. I’m not trying to compose a picture with backgrounds, rule-of-thirds, etc. I am trying to identify features that can be successfully used to align photos. I don’t care as much about backscatter since I know the software will filter it out.
I also have a Sony a7R IV camera but don’t have a housing for it (yet). For photogrammetry, I think a full-frame camera like the a7R IV would be complete overkill. The processing requirements are already pretty extensive with the resolution of the pictures from the a6400. I don’t think you would gain much from shooting with a 61MP camera!
Computer & Software
- iMac Pro (2017)
- 3.2 GHz 8-Core Intel Xeon W
- 32 GB DDR4 memory
- Radeon Pro Vega 56 8 GB graphics card
- MacOS Catalina
- Agisoft Metashape
- Standard edition – I bought this software
- Professional edition – They have been kind enough to provide me a time-limited Pro license so I can test out and use markers, which are very helpful in some situations
The software definitely hammers both the CPU and GPU so I would say the more processor cores and the faster the GPU you can buy, the better off you are going to be. There are also a few cloud hosting companies where you can lease compute time on a machine with Metashape.
Example Processing Steps & Times – UB88 Conning Tower
Processing times can be extensive. My iMac Pro is what most people would consider a fast computer for personal use, but it can still take a long time to align the photos, build the dense cloud, generate the depth maps, etc.
I’ve detailed below the different steps required to make a 3D model with textures for the UB88 Conning Tower.
Step 1 : Photo alignment
The first step is to align the pictures, which produces a sparse point cloud.
I usually do that somewhat “manually” by building a base model, then adding pictures in chunks and re-aligning after each chunk. This model has 428 pictures aligned. That is definitely overkill and could probably be pruned to 250 or fewer (even using the Tools -> Reduce Overlap function in Metashape).
Building it from scratch with no photos aligned, on Medium quality, it took about 3 minutes to detect the candidate alignment points in the photos (I set a limit of 40k key points per photo) and about 38 minutes to match the points (I set a tie point limit of 10k) and complete the alignment. This is mostly done on the GPU. The software then estimates camera locations, which is done largely on the CPU and took about 4 minutes. The sparse cloud after alignment is below:
Step 2 : Dense cloud creation
The next step is to build the dense cloud. I built a Medium quality cloud with Mild depth filtering (default settings for the most part). This is very processor-intensive. The first process is to build the depth maps, which is done largely on the GPU and took about 1 hour and 8 minutes. The next process is to build the dense cloud itself; that is largely CPU-bound and took about 30 minutes.
Step 3 : Build Mesh
I used a Medium face count to build the 3D mesh, which took about 12 minutes.
Step 4 : Build the Texture
This is where you add the color from the photos as an “overlay” on top of the 3D model. Using the default settings, the texturing took about 10 minutes.
Overall, the model had 428 photos and took about 2 hours and 45 minutes of compute time to build.
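If you want to sanity-check that total, it is just the sum of the per-step times reported above:

```python
# Per-step compute times reported above, in minutes
times = {
    "detect points": 3,
    "match + align": 38,
    "estimate cameras": 4,
    "depth maps": 68,       # 1 hour 8 minutes
    "dense cloud": 30,
    "mesh": 12,
    "texture": 10,
}
total = sum(times.values())
print(f"{total} min = {total // 60} h {total % 60} min")  # 165 min = 2 h 45 min
```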
One nice feature of the Pro version of Metashape is that it allows you to “batch” process tasks. So, I can kick off a dense cloud, followed by a mesh, followed by a texture, and have that all happen while I sleep, then wake up to a finished model.
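The Pro edition also ships with a Python API, so the same pipeline can be queued as a headless script. The sketch below is an illustration rather than my actual workflow: the file paths are hypothetical, the method names follow the 1.6-era API (the dense cloud calls were renamed in later versions), and the downscale values I use for “Medium” are my reading of the docs, so check them against the API reference for your version.

```python
# Sketch of the four processing steps as a Metashape Pro batch script.
# Run headless with something like: metashape -r build_model.py
# NOTE: paths and downscale mappings below are assumptions, not verified settings.
import Metashape

doc = Metashape.Document()
doc.save("ub88_conning_tower.psx")          # hypothetical project file
chunk = doc.addChunk()
chunk.addPhotos(["photos/IMG_0001.JPG"])    # hypothetical photo list

# Step 1: detect and match points, then align (Medium accuracy ~ downscale=2),
# with the 40k key point / 10k tie point limits mentioned above
chunk.matchPhotos(downscale=2, keypoint_limit=40000, tiepoint_limit=10000)
chunk.alignCameras()

# Step 2: depth maps and dense cloud (Medium quality ~ downscale=4, Mild filtering)
chunk.buildDepthMaps(downscale=4, filter_mode=Metashape.MildFiltering)
chunk.buildDenseCloud()

# Step 3: mesh at a Medium face count
chunk.buildModel(surface_type=Metashape.Arbitrary,
                 face_count=Metashape.MediumFaceCount)

# Step 4: UVs and texture with default-ish settings
chunk.buildUV(mapping_mode=Metashape.GenericMapping)
chunk.buildTexture(blending_mode=Metashape.MosaicBlending, texture_size=8192)

doc.save()
```

Queued this way, every step runs unattended, which is effectively what the Batch Process dialog in the GUI does.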