Patrick Morrison, 17 August 2025
BelowJS can use 3D models from any photogrammetry package. The industry standard is Agisoft Metashape, which produces excellent models even in challenging underwater conditions. This guide describes our approach to preparing a model in Metashape; refer to the manual and other tutorials for an introduction to the software.
Image capture for photogrammetry relies on systematic collection and appropriate overlap. Typically this means swimming over a wreck site in a 'lawnmower' pattern, capturing an image every second or so and aiming for 70-80% overlap. For best camera calibration and to avoid gaps, this pattern should be done in two passes: first along the length of the wreck, then perpendicularly across it. The camera should mostly point downward. Overhangs are the lowest priority: tilt the camera when needed, but avoid capturing too much water column. It takes practice.
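As a rough guide to where "every second or so" comes from, the shot interval for a given forward overlap can be estimated from altitude, field of view, and swim speed. This is a simplified sketch assuming a flat seabed and a camera pointed straight down; the altitude, FOV, and speed values are illustrative, not measured figures.

```python
import math

def capture_interval(altitude_m, fov_deg, swim_speed_ms, overlap=0.8):
    """Seconds between shots for a given forward overlap.

    Assumes a flat seabed and a downward-pointing camera; the
    footprint along the swim direction comes from the along-track FOV.
    """
    footprint = 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)
    return (1 - overlap) * footprint / swim_speed_ms

# e.g. 2 m above the site, ~60 degree along-track FOV, slow swim at 0.3 m/s:
# roughly a photo every 1.5 seconds for 80% overlap
interval = capture_interval(2.0, 60.0, 0.3)
```

Halving your swim speed or altitude halves the required interval, which is why timelapse modes at one frame per second work so well in practice.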
Any camera can be used. GoPros on timelapse are unreasonably effective. DSLRs, film cameras, and 360 cameras all work, with their own benefits and calibration challenges. The more detailed, consistent, and sharp the images, the better the results.
Load all your images into Metashape, then work down the Workflow menu as described below.
We use Medium or High settings depending on the processing power available. With a modern laptop and a few thousand images, High settings should only take a few hours. If you want to check results more quickly, Low quality settings are useful.
Generic preselection is an excellent way to speed up alignment on high-quality datasets. On more difficult datasets, it may cause alignment to fail. Try with it enabled first, and turn it off if you are troubleshooting alignment issues.
Tip: underwater datasets sometimes benefit from lower accuracy settings, because they smooth out noise in the images. If you are having trouble aligning, try reducing the accuracy setting.
Workflow → Align Photos
Once this is done you will see a sparse point cloud, with the site starting to appear.
The most time-consuming step is building the model. The quality chosen here directly affects your final output, at the cost of processing time and storage space. We will reduce the polygon count later, but a high-quality mesh decimates to a better result than one built at low quality.
Workflow → Build Model
Use Depth maps as the source data.
The Meta Quest 3 headset has been tested with models of 1 to 1.2 million polygons; beyond that, it can struggle. Use Tools → Model → Decimate Model to reduce the polygon count. For lower-tier devices or slower internet connections, 500,000 polygons may be a better target.
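To see why the polygon budget matters for download size and headset memory, the geometry can be estimated with back-of-envelope arithmetic. This is a rough sketch under assumed layouts (float32 position + normal + UV per vertex, 32-bit indices, roughly one vertex per two triangles as in typical closed meshes); real glTF exports and Draco-style compression will differ.

```python
def mesh_bytes(triangles, bytes_per_vertex=32, index_bytes=4):
    """Rough uncompressed memory for a triangle mesh.

    Assumes ~1 vertex per 2 triangles (typical for closed meshes)
    and 32 bytes per vertex: float32 position, normal, and UV.
    """
    vertices = triangles // 2
    return vertices * bytes_per_vertex + triangles * 3 * index_bytes

# 1.2 million triangles: about 33.6 MB of raw geometry before textures
mb_full = mesh_bytes(1_200_000) / 1e6

# decimating to 500,000 triangles cuts that to about 14 MB
mb_small = mesh_bytes(500_000) / 1e6
```

The estimate scales linearly with triangle count, which is why halving the polygon budget roughly halves both the download and the geometry memory on the headset.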
Once the model is at the target polygon count, you can build textures. A single texture map can be easier to work with, but four 4k textures are the most detailed option we have tested that remains usable on standard VR headsets. Consider a single 4k texture for lighter applications.
Workflow → Build Texture
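The gap between one and four 4k pages is easier to judge as memory. A minimal sketch of uncompressed RGBA texture sizes follows; it deliberately ignores mipmaps and the GPU texture compression (for example KTX2/Basis) that viewers typically apply, so these are upper bounds, not BelowJS figures.

```python
def texture_vram_bytes(pages, size_px, bytes_per_px=4):
    """Uncompressed RGBA memory for square texture pages (no mipmaps)."""
    return pages * size_px * size_px * bytes_per_px

# four 4k pages: 256 MiB uncompressed; a single 4k page: 64 MiB
four_4k = texture_vram_bytes(4, 4096) / 2**20
one_4k = texture_vram_bytes(1, 4096) / 2**20
```

The fourfold difference is why a single 4k texture is worth considering for lighter applications, while four pages buy noticeably more surface detail on hardware that can afford them.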
Before exporting as a .glb file, orient and scale the model.