Patrick Morrison, 17 August 2025
Learn how to optimise photogrammetry models for smooth WebXR performance on headsets and mobile devices. This guide covers mesh compression, texture optimisation, and practical file size targets for web deployment.
Note: We use the belowjs-optimiser utility for this workflow. Example:
npx belowjs-optimiser pack model.glb. By default this simplifies to ~1,200,000 polygons
(our practical upper target for smooth Quest 3 VR), applies 20-bit Draco mesh compression, converts
textures to KTX2, and resizes textures to a max of 4096x4096.
BelowJS uses .glb files to store 3D models. This is the binary (single-file) version of glTF, an open standard for sharing 3D data often thought of as the "JPEG of 3D." All photogrammetry packages we use can export .glb files, so we have an opinionated pipeline for preparing them for the web.
Model exports are rarely well-optimised by default. In WebXR they benefit substantially from mesh and texture compression. This 1) reduces file size (faster loads) and 2) greatly reduces GPU memory (VRAM), which is crucial for running large, detailed models on a headset or phone.
Below is how we approach optimisation, plus a script to do it for you.
Assuming you have a scaled and levelled model, it is simple to export from Agisoft Metashape. It will be similar in other packages.
In Metashape, File → Export → Export model… Binary glTF (*.glb)
This .glb is a single file that contains the mesh and the textures - an alternative to formats like .obj, .fbx and .stl. For Quest 3 VR, we usually target ~1,000,000 polygons and treat ~1,200,000 as the upper smooth-performance limit. With 4 × 4K textures this often stays under ~50 MB after optimisation, which is acceptable for most modern internet connections. In some cases it is worth considering a much lower target - say under 10 MB.
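As a back-of-envelope check on whether a file size target is acceptable, you can estimate download time from connection speed. This sketch assumes a typical 25 Mbps connection; the figure is illustrative, not from the original workflow:

```python
def download_seconds(size_mb, mbps):
    """Approximate download time for a model of size_mb megabytes
    over an mbps-megabit-per-second connection."""
    return size_mb * 8 / mbps

# A ~50 MB optimised model on an assumed 25 Mbps connection:
print(round(download_seconds(50, 25)))  # 16 seconds
# A ~10 MB model on the same connection:
print(round(download_seconds(10, 25), 1))  # 3.2 seconds
```

This is why a sub-10 MB target is worth considering for audiences on slower mobile connections.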
Draco is a de-facto standard for glTF geometry compression. It's well supported in Three.js, and can significantly reduce mesh sizes for faster downloads, with only a small decode cost on model load. It does not directly improve runtime framerate or GPU load; the benefit is primarily smaller files. For technical details on using Draco with Three.js, see the DRACOLoader documentation.
This can be done in one line with gltf-transform:
npx gltf-transform draco input.glb output.glb
For many models the default settings are ideal. However, Metashape models with multiple textures can end up with distracting seams caused by imperfect compression. This is because the underlying representation of a model with 4 texture tiles is 4 separate meshes, and slight mismatches along their shared edges become very obvious in VR.
The solution is to increase the quality of quantization, from a default of 14-bit to 20-bit. We have had no visible seams on models compressed at this setting. With this method, highly detailed ~1,000,000 polygon models are usually smooth on Meta Quest 3, while ~1,200,000 is typically near the upper limit.
# tmp-ktx.glb is the output of the KTX2 texture step below
npx gltf-transform draco tmp-ktx.glb output.glb \
--method sequential \
--encode-speed 0 --decode-speed 0 \
--quantize-position 20 \
--quantize-normal 20 \
--quantize-color 20 \
--quantize-texcoord 20 \
--quantize-generic 20
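To see why the jump from 14-bit to 20-bit quantization eliminates visible seams, it helps to look at the position precision each setting gives. A rough sketch, assuming the quantization grid spans the model's full extent (the model size is an illustrative assumption, not from the original text):

```python
def quantization_step(extent_m, bits):
    """Smallest representable position increment when an axis of
    extent_m metres is quantized to the given bit depth."""
    return extent_m / (2 ** bits)

# For a hypothetical 10 m model:
print(quantization_step(10, 14) * 1000)  # ~0.61 mm per step at the 14-bit default
print(quantization_step(10, 20) * 1000)  # ~0.0095 mm per step at 20 bits
```

At 14 bits, adjacent tiles can snap vertices to grid points over half a millimetre apart, which is visible up close in VR; at 20 bits the grid is 64× finer and mismatches fall below what the headset can resolve.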
The biggest gains in VRAM (GPU memory) efficiency come from texture compression. We use the GPU-native KTX2 ETC1S (see Don McCurdy's web texture formats guide for more information):
npx gltf-transform etc1s input.glb tmp-ktx.glb --quality 64
For a model with four 4K textures and ~1,000,000 polygons, this workflow typically reduces size from ~65 MB on disk and ~1.5 GB VRAM to ~45 MB on disk and ~390 MB VRAM—about a 30% cut on disk and 75% less VRAM. Without this, the headset often crashes on load.
| Model | Disk before | Disk after | VRAM before | VRAM after |
|---|---|---|---|---|
| Junee | 71.2 MB | 44.4 MB | 1.5 GB | 392.4 MB |
| Clipper | 64.2 MB | 48.5 MB | 1.5 GB | 390.1 MB |
| Sesa | 61.3 MB | 43.9 MB | 1.5 GB | 388.0 MB |
For smooth VR on Meta Quest 3, treat ~1.2 million polygons as an upper limit and prefer ~1,000,000 when possible. Models above ~1.2 million generally struggle on standalone headsets. For AR/mobile experiences, a target around 500,000 polygons is usually more reliable. This is a common issue when migrating high-detail models from Sketchfab.
Note: If your dataset is so large that even aggressive polygon reduction doesn't work (10+ GB exports, massive survey areas), consider 3D Tiles streaming instead of a single model. Tilesets handle scale through level-of-detail streaming rather than reducing everything to one file.
Polygon reduction is the first step in optimisation - do it before mesh and texture compression so you don't have to reprocess compressed models. It's better done directly in Metashape (or your photogrammetry package) before texturing, so the texture layout is optimised for that exact geometry.
We use a standard gltf-transform pipeline to reduce polygon counts while maintaining visual quality.
The --error 0.005 parameter allows --lock-border true to work without
creating seams.
npx gltf-transform dedup model.glb tmp-dedup.glb
npx gltf-transform weld tmp-dedup.glb tmp-weld.glb
npx gltf-transform join tmp-weld.glb tmp-join.glb
npx gltf-transform simplify tmp-join.glb tmp-simplified.glb \
--ratio 0.125 \
--error 0.005 \
--lock-border true
The --ratio parameter depends on your starting polygon count. Here are some typical values:
| Starting Polygons | Recommended Ratio | Resulting Polygons |
|---|---|---|
| 8 million | 0.125 | ~1 million |
| 5 million | 0.20 | ~1 million |
| 3 million | 0.33 | ~1 million |
| 2 million | 0.50 | ~1 million |
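The table values follow directly from dividing the target count by the starting count. A small helper makes this explicit for starting counts not listed (the function name is illustrative):

```python
def simplify_ratio(start_polys, target_polys=1_000_000):
    """--ratio value for gltf-transform simplify that lands
    near target_polys from a mesh of start_polys."""
    return target_polys / start_polys

print(round(simplify_ratio(8_000_000), 3))  # 0.125
print(round(simplify_ratio(3_000_000), 2))  # 0.33
```

Note that simplification is approximate, so the resulting count will only be near the target; check the output with the info command before compressing.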
After reducing polygons with this pipeline, proceed with the mesh and texture compression steps above.
We provide the belowjs-optimiser utility to run this whole pipeline in one step.
Install globally:
npm install -g belowjs-optimiser
Or run ad-hoc with npx:
npx belowjs-optimiser pack model.glb
The optimiser requires ktx on your PATH (KTX-Software):
# macOS
brew install ktx-software
# Windows / Linux
# https://github.com/KhronosGroup/KTX-Software/releases
Optimise a model:
belowjs-optimiser pack model.glb
This applies 20-bit Draco mesh compression, KTX2 texture compression, texture resizing (max 4096x4096),
and automatic simplification above the polygon target, then writes out
<input>-belowjs.glb.
Optimise many files:
belowjs-optimiser pack models/*.glb
Recommended polygon targets:
- VR (Quest 3): 1,000,000 preferred, 1,200,000 upper limit
- AR/mobile: 500,000 polygons

Set a custom polygon target when needed:
belowjs-optimiser pack model.glb --polygon 800000
Skip simplification:
belowjs-optimiser pack model.glb --no-simplify
Custom output suffix:
belowjs-optimiser pack model.glb --suffix "_ar"
Apply uniform scene scaling:
belowjs-optimiser pack model.glb --scale 0.01
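The calibration arithmetic in the scaling notes can be written as a one-line helper (a sketch; the function name is illustrative):

```python
def scene_scale(expected_m, measured_m):
    """--scale factor that corrects a model whose reference object
    measures wrong in the viewer."""
    return expected_m / measured_m

# A 1 m scale stick that measures 0.85 m in the viewer:
print(round(scene_scale(1.00, 0.85), 2))  # 1.18
```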
Scaling behavior:
- No scaling is applied by default (equivalent to --scale 1).
- Use --scale <factor> to apply uniform scene scaling.
- To calibrate, compute scale factor = expected / measured. For example, if a 1 m scale stick measures 0.85 m in the viewer, 1.00 / 0.85 = 1.18, so use --scale 1.18.

For texture editing - normally light colour correction - we also have unpack:
belowjs-optimiser unpack model.glb
This will create a folder of the form model_edit/. You can then edit these textures, and pack
that folder.
belowjs-optimiser pack model_edit/
It will also detect normal maps and link them, as long as they are named in the form
*normal1.*, *normal2.*, etc. If you originally had .png textures but replaced
them with JPEGs, it will detect that and adjust the glTF references before packing.
Inspect model stats:
belowjs-optimiser info model.glb
below-optimiser is still supported as a compatibility alias.
Download belowjs-optimiser at https://github.com/patrick-morrison/belowjs-optimiser
It has been tested on macOS, and Windows Subsystem for Linux (WSL).
Once your model is optimised, you can load it into BelowJS and experience it in VR with a compatible headset. You might also want to create flythrough animations for presentations and social media.