Photogrammetry Experiments

Since I started fooling around with Agisoft’s Photoscan a few months back and began posting experiments on Instagram, people have been asking me about my workflow.

I’ve been hesitant to post about it, let alone write a tutorial, for a simple reason: I didn’t have a workflow.

I’ve experimented with scanning outside, at home, with and without a turntable, decimating or retopo-ing models, and with various ways to extract maps.

Lately, though, thanks to some expert tips by photogrammetry guru Tristan Bethe at Humanalloy, I’ve begun to settle on a workflow that may be worth writing about.

First, this is an approach that works for small, free-standing objects to be used as 3D assets in scenes–fruit and vegetables, bread, small sculptures, wooden bowls, etc. Scanning large objects, trees, ground surfaces, facades or objects without volume, such as leaves, will require a different workflow and, in some instances, much more complex equipment. The same goes if you want to use photogrammetry to make versatile tileable textures (something I’ve also been experimenting with and may write about later).

Second, this is a down-to-earth, shoestring approach that will yield reasonably accurate 3D models but not multi-million-poly monsters accurate to the millimeter. It doesn’t involve anything fancy, such as cross-polarized lighting to extract reflectance maps, which is needed for very high-end results.

In fact, the point of this approach is to generate retopoed models that will not overburden a large scene but which, thanks to extracted displacement and normal maps, will still render convincingly in reasonably tight close-ups.

Let’s start with one of my latest models: A handsome rock picked up in a Berlin park last weekend.


For something like this, I will use a DSLR (a full-frame Sony A7) with a sharp 50mm prime lens, which is literally the only pricey piece of equipment in my setup–well, apart from the software, that is, but if you’re reading this, I assume you’re equipped on that front.

You’ll also need a tripod, a turntable (in my case, a €20 rotating cheese board), a light tent (in my case an €18.99 foldable Ikea box), a cheap ring flash (I got this one for €49.99) and/or a set of four photography lamps (at €29.99 a pair).

One difficulty in scanning small objects is getting rid of shadows. In my approach, which consists of scanning the same object twice in two different positions in order to capture all sides, top-down shadows may make Agisoft think it is dealing with two different objects. Nor do you want any shadows in the textures you will extract from the model.

Getting rid of shadows is tough. I initially tried to do it by positioning two photography lamps on each side of the object and two next to the camera pointing at the object. It works reasonably well. But over time, I’ve found a ring flash to work better. Since the light comes straight from the lens, all shadows are hidden from the camera, resulting in a satisfyingly flat look. The setup is also a lot faster, making for shorter scanning sessions. And while the photography lights generate tons of highlights on reflective objects, often making cross-polarization necessary, the ring flash generates just one faint highlight facing the camera. In many cases, the Fresnel effect means that this zero-degree-angle reflection is very weak, making it easy to remove in post. Using a turntable instead of rotating the camera around the object also helps even out the lighting.
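The claim that the head-on ring-flash highlight is weak follows from Fresnel reflectance being at its minimum at a zero-degree angle. Here’s a minimal sketch using Schlick’s approximation, assuming a generic dielectric with an index of refraction around 1.5 (my assumption, not a measured value):

```python
# Schlick's approximation of Fresnel reflectance for a dielectric surface.
# n=1.5 is an assumed, typical index of refraction, not measured from any object.

def schlick(cos_theta, n=1.5):
    """Fraction of light reflected at a given viewing angle.
    cos_theta = 1.0 means looking straight at the surface (ring-flash case)."""
    r0 = ((n - 1) / (n + 1)) ** 2      # reflectance at zero degrees
    return r0 + (1 - r0) * (1 - cos_theta) ** 5

print(round(schlick(1.0), 3))   # head-on: about 0.04 (only ~4% reflected)
print(round(schlick(0.17), 3))  # near-grazing angle: roughly ten times stronger
```

That ~4% head-on reflection is why the single highlight the ring flash produces is faint enough to paint out in post.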

The flipside of a cheap ring flash is its low strength. But that doesn’t matter since using a tripod means you can set the exposure to be as long as you want. In general, you’ll want to use the smallest aperture your lens allows (I always aim for f/22) in order to expand the DOF and maximize the sharpness of the image. This may mean an exposure of a few seconds per image (shorter if your studio has decent ambient light on top of the flash). I normally place my model and turntable in a light tent in order to even out the lighting even more and obtain as neutral a background as possible, but it’s not strictly necessary. If you have a long exposure, use a remote to trigger the camera and minimize camera shake.
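To see why stopping down to f/22 pushes you into multi-second exposures, note that the light reaching the sensor falls with the square of the f-number, so shutter time scales up by the same factor. A quick sketch, using a hypothetical metered baseline of 1/4 s at f/8 (my example numbers, not from an actual session):

```python
# Shutter time needed when stopping down, all else (ISO, light) being equal.
# The f/8 at 1/4 s baseline is a hypothetical example, not a measured exposure.

def exposure_time(base_time_s, base_fnum, target_fnum):
    """Scale shutter time by the square of the f-number ratio
    (each full stop roughly doubles the required time)."""
    return base_time_s * (target_fnum / base_fnum) ** 2

t = exposure_time(0.25, 8, 22)
print(round(t, 2))  # ~1.89 s, i.e. "a few seconds per image"
```

With the camera locked on a tripod and a static subject, those long exposures cost nothing but patience.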

Once you’ve photographed your model from all angles, turning the table by roughly 10-degree increments, turn the object on its head and shoot a second batch of images. The resulting two batches of images should cover your object from all possible angles–left, right, top and bottom. Here, DSC01582.jpg marks the start of the second batch.
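For the record, 10-degree increments work out to 36 frames per rotation, so 72 photos across the two batches. A trivial sketch of the angle schedule:

```python
# Shot count implied by the turntable workflow above:
# ~10-degree steps over a full 360-degree rotation, two batches (upright + flipped).

def shots_per_batch(step_deg=10):
    return 360 // step_deg

angles = [i * 10 for i in range(shots_per_batch())]  # 0, 10, ..., 350
total = 2 * shots_per_batch()
print(shots_per_batch(), total)  # 36 per batch, 72 overall
```

Finer steps (say, 5 degrees) give Photoscan more overlap to work with at the cost of longer sessions.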


In Photoscan, we start by scanning the first batch of images with the model in an upright position (I’m not going to go into how to use Photoscan as there are many great tutorials online about it). The turntable is meant to fool Photoscan into thinking that you shot the object from many different angles even though your camera is actually static and the model is rotating. Sometimes, however, if your background is not neutral enough, Photoscan will pick up details in it and assume, rightly but annoyingly, that the camera was static, resulting in an unspeakable mess of a sparse cloud. To prevent this, you may want to draw a very simple mask around your object on each photo by using the rectangle masking tool just to get rid of these background details. No need to mask the turntable because it rotates with the object (in fact, I’ve found that on tough models, the wood texture on my cheese board gives Photoscan additional reference points).


This first scan is just used to generate a perfect mask for the object, so I’ll use low dense-cloud and mesh settings to deliver a rough, low-poly model quickly.

Before generating the mesh, make sure to clean the dense cloud of any trace of the turntable, as shown below:



Once this is done, import masks for each photo by using the “from model” option, then export these masks to your photo folder.



From here, discard the project and repeat the procedure for the second batch of images with the object on its head.

You should now have a perfect mask for each image in both batches. Time to bring all these images into a fresh Photoscan project.

After importing the previously rendered masks for all images–this time using the “from file” option–rotate the photos in the second batch 180 degrees (this is not always needed, but I’ve found it made it more likely that Photoscan would recognize both batches as being from the same object).


Alternatively, and this is a slightly faster workflow, you can also render masks only for the first batch of photos and then use only a quickie hand-drawn rectangle mask on the upside-down photos. This will leave parts of the turntable visible on the second batch of photos but that’s fine as long as it is not visible in the first batch. If the turntable is visible in both batches, however, Photoscan will go crazy.

With some luck (and making sure you click the “constrain by mask” option when aligning), Photoscan will now align and scan the photos and generate one solid model combining top and bottom sides of the object. (My camera “rings” intersect below because I didn’t exactly put the rock on its head in the second batch, but more on its side–something to avoid if possible in order to maximize coverage).


If you used the method where you only render masks from the first batch, make sure to delete any bits of turntable left in the dense cloud before meshing it.


For a reasonably good model, I use a “high” setting for the dense cloud and the highest number of polys available for the resulting mesh. For the rock, it works out at about half-a-million polys.


We’re now half-way through, with a nice, high-poly, triangulated asset.


The next and final step would normally be to calculate a diffuse texture. The problem is that Photoscan will assign the model pretty nasty automated UVs before doing this, making it impossible to edit the map in Photoshop.

The solution–kindly provided by Tristan Bethe–is to import the mesh into ZBrush for editing before re-importing it into Photoscan for texturing.


ZBrush is perfect for this because not only does it allow you to clean up your scan and remove any annoying artifacts, but it also does a great job automatically retopo-ing the mesh and unwrapping sensible UVs. It almost feels as though ZBrush was designed to edit 3D scans.

First, import the mesh from Photoscan into ZBrush, then duplicate it into a second sub-tool and hide the first version.


I use ZRemesher with a setting of 5 to generate a nice, mid-poly version of the tool with an all-quad topology.


This is the model I will use in my future scenes.


Then, cut a few polygroups, which will define the object’s UV islands (two will be fine for our rock, but you may need more if the object has a more complex shape).


Using the UV Master plugin, you can now unwrap your model automatically. You now have both a clean topology and easily editable UVs.


For the next step, unhide the high-poly sub-tool and subdivide your low-poly version once (Ctrl+D). By pressing the ProjectAll button in the SubTool palette, the low-poly model will conform to the high-poly one.


Repeat these subdiv and project steps until your retopoed mesh has roughly the same number of polys and the same amount of detail as the original mesh.
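Since each subdivision of a quad mesh roughly quadruples the face count, it only takes a few rounds to catch up with the scan. A small sketch, using assumed counts (an 8k-quad ZRemesher output chasing the ~500k-poly scan mentioned earlier):

```python
# How many subdivide + ProjectAll rounds until the retopoed mesh matches the
# source density. Each subdivision multiplies the quad count by 4.
# The 8,000-quad starting count is an illustrative assumption.

def subdiv_levels(retopo_quads, target_polys):
    levels = 0
    while retopo_quads * 4 ** levels < target_polys:
        levels += 1
    return levels

print(subdiv_levels(8_000, 500_000))  # 3 rounds: 8k -> 32k -> 128k -> 512k
```

In practice you stop when the projected mesh visually holds all the detail of the scan, not at an exact poly count.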


To be on the safe side, this is when I normally export both a fully subdivided version of the retopoed mesh and the low-poly version with zero subdiv levels. With your retopoed mesh dialed back to the lowest subdiv level (the low-poly version), you can bake 4K displacement and normal maps straight in ZBrush; they will contain all the details of the high-poly mesh.




Back in your Photoscan project, import the new mesh (whether the low-poly or high-poly version doesn’t matter since they have the same UVs).


You can now calculate the diffuse texture, making sure to use the “keep uv” option to prevent Photoscan from re-unwrapping the model.




This is basically it. You can now export the diffuse and edit it, alongside the normal and displacement maps, in Photoshop (make sure to flip the ZBrush-generated maps vertically as they come out upside down by default) and paint a reflectance or glossiness map, using a combination of the displacement and diffuse maps.
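Incidentally, the vertical flip needed on ZBrush-baked maps is nothing more than reversing the order of the pixel rows. A toy illustration with a 2×3 grid of values standing in for an image (in practice you’d just do this in Photoshop or with any image library):

```python
# "Flip vertically" = reverse the row order of the pixel grid.
# Toy 2x3 "image" as nested lists, purely illustrative.

def flip_vertical(pixels):
    return pixels[::-1]

img = [[1, 2, 3],
       [4, 5, 6]]
print(flip_vertical(img))  # [[4, 5, 6], [1, 2, 3]]
```

The flip happens because ZBrush’s UV vertical axis runs the opposite way to most other packages.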


All that’s left to do is to import the low-poly mesh into Max, rescale it as needed, and build a shader using your maps.


Depending on your needs, the renderer you use, or the complexity of your scene, you can either use the normal map to add high-res details to your low-poly mesh, use the height map for displacement or bump, or just work with the high-poly mesh if you need super-detailed close-ups. In all cases, you will have a nice all-quad, animation-friendly topology and clean, easy-to-edit UVs.


This rock is slightly translucent, so I’ll use an SSS material. The render below is with normal map only. No displacement.


Et voilà. With a bit of practice and some trial-and-error, this should take no more than an hour or two per model on average.

I hope you found this useful. As always, let me know below if you have any questions and I’ll try not to let too many years pass before responding.