CG artists in advertising, VFX, the game industry, and visualization often seem to pursue contradictory goals. While some use computers to conjure up the impossible, others go to great lengths to replicate the world. Yet they do have something in common: More often than not they strive to make their art, however fantastical, look real, or at least plausible.
As a goal, achieving photorealism informs every step of the creative process, from collecting references to modeling and lighting a scene and its props. Getting the photographic look right, however, is a narrower pursuit, one concerned chiefly with the final image. That is the part I’d like to focus on in this post, which was inspired by the frequent questions I get about the final look of my images.
Note that this is not just a post-production tutorial. While everything does converge on post-production, there are many upstream steps that are equally vital in ensuring that your renders reach post-production in the best possible shape.
Photorealism is an ever-receding ideal. I don’t pretend to have achieved it. But it definitely is my aspiration, and I thought I’d share a few tips–some practical, others slightly more conceptual–on how I go about it.
Look up to photographers
Many of the tips below revolve around the notion of using your digital tools as a camera simulator. Many of these tools, especially renderers and post-processing software, use photographic language, with such concepts as exposure, depth of field, field of view… Their strength is that, unlike cameras, they are not constrained by the laws of physics. Paradoxically, a big part of achieving the photographic look lies in re-imposing those constraints on yourself and renouncing some of the freedom our tools afford us, which can be a tough discipline.
Achieving photorealism is not just about accepting (some of) the constraints of photography, but also about using photography as your benchmark. Concretely, this means that photography, and in particular the type of photography you are trying to replicate, should be your reference, as opposed to nature or the work of other artists. Photography itself is not nature: it is nature seen through a lens and captured on film or a sensor, with all the distortions this implies. Mimicking these distortions is among the surest ways to make your stuff “look” real.
So throughout the process, it is good to keep photographic references to hand and to check them regularly to make sure you’re on the right track. Even when modeling and shading assets, how these are built and look in real life often matters less than how they appear on camera.
Make your world real
Photorealism lies somewhere between fraud and prestidigitation. It is about persuading viewers that they are looking at something other than what they are really looking at. If your goal is to create the illusion of an image of the world as seen through a camera, you will help yourself by making your digital world as similar as possible to the real one, starting with the scale.
We derive a lot of subconscious information about a scene when looking at a photograph, including a pretty accurate sense of scale. Ambient haze, shadow tapering and depth of field are among the many cues we pick up that tell us whether we are looking at a toy model or at a cityscape. Tilt-shift timelapse photography of real-life street scenes, for instance, works by subverting these cues to make us think we are looking at a miniature.
If you model to real-world scale, things like light falloff or depth of field are far more likely to resemble those in a photograph of the equivalent scene, without you having to fake anything. For example, the haze and the deep depth of field in this wide-angle shot convey the monumental scale of the space:
While this is not a shading tutorial, it’s still worth saying a few things about materials, as bad ones can kill the photorealism in your scene. Getting the perfect shader takes a long time and books could be written about it (Grant Warwick’s V-Ray course is a great place to start, and you can check my material creation tips here & here). But there are a few simple things you can do to prevent your materials from damaging your render, and they all have to do with textures and colors.
Essentially, you should be careful not to set any color values in the diffuse slot too close to the maximum RGB value of 255, whether you’re working with flat colors or bitmap textures. This applies to the individual red, green and blue channels. Excessive RGB values will tax the renderer, extending render times, and give your image an unrealistic, over-saturated look. Likewise, you should never use pure white or pure black in the diffuse, as these never occur in real life: pure black would absorb all light, and pure white would bounce all light rays back. As a guide, I use values around 180, 180, 180 for white in all my scenes and never go beyond 200. If you work with textures in the diffuse, color-pick them to make sure they don’t have excessive RGB values. I also often slightly desaturate the textures I make from photos in Photoshop before using them.

You may also want to tweak your textures to make sure they all “sit” well together. This is particularly true of saturation. For example, although the trees in the scene below use different leaf textures, some warmth was added to make the whole foliage “sit well” together.
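If you work with lots of downloaded textures, that color-picking step is easy to automate. Here is a minimal sketch (my own illustration, assuming Pillow and NumPy are installed; the file path is hypothetical) that flags diffuse maps whose channels creep too close to 255:

```python
import numpy as np
from PIL import Image

ALBEDO_CAP = 200  # per-channel ceiling; ~180 is already a convincing "white"

def check_diffuse_texture(path, cap=ALBEDO_CAP):
    """Report how much of a texture exceeds a per-channel albedo cap."""
    img = np.asarray(Image.open(path).convert("RGB"))
    hot = (img > cap).any(axis=-1)  # pixels with any channel above the cap
    share = hot.mean() * 100
    print(f"{path}: {share:.1f}% of pixels exceed {cap} in at least one channel")
    return share

# Hypothetical path -- point this at your own diffuse map.
check_diffuse_texture("textures/plaster_diffuse.jpg")
```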
Make your light real
The realism commandment goes for other values too, including lights. The sky or the sun will have vastly more power than the brightest lightbulb. Bear this in mind when setting up dome lights and fill lights such as softboxes. If you set realistic intensities for your sky and electric lights when lighting an interior, for instance–where interior and exterior lighting work in tandem–you are far more likely to end up with a render that looks real than if you eyeball these values.
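To get a sense of just how wide that gap is, here are some ballpark real-world illuminance figures (rough physical orders of magnitude, not settings from any particular renderer):

```python
# Rough real-world illuminance values, in lux (order-of-magnitude figures).
ILLUMINANCE = {
    "direct sunlight":        100_000,
    "overcast daylight":        1_000,
    "bright office interior":     500,
    "living room lamp":           100,
}

sun = ILLUMINANCE["direct sunlight"]
for source, lux in ILLUMINANCE.items():
    print(f"{source:24s}{lux:>8,} lux  (sun is ~{sun / lux:,.0f}x brighter)")
```

If your sun is only ten times brighter than your table lamps, no amount of tweaking downstream will make the balance feel right.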
I often hear viz artists say they have a hard time dealing with burnouts (a dramatic overexposure of parts of the scene) when using real-world sun or sky intensities for interior renders. The truth is, photographers have to deal with the very same issues in high-contrast situations. Our advantage is that we can render high-dynamic-range images with enough color depth to deal with burnouts in post and recover information in the highlights. Even so, any exterior visible through the windows of an interior render simply has to be overexposed. Perfectly uniform lighting may be an artistic goal, but it will not feel photorealistic (though many photographers will deal with this by merging different exposures). The shot below shows a realistic balance of interior and exterior lighting, with the exterior overexposed and featuring a colder light (two softboxes illuminate the interior to help balance the lighting, as is clearly visible on the beams):
Realistic light intensity matters for other reasons too. In non-realistic set-ups (for instance, overly bright lights paired with a very dark camera lens), materials will stop behaving accurately or predictably. For example, this can give an illusion of higher reflectivity on some materials, which then has to be compensated for. If you rely on asset libraries, real-world light values are the only way to make sure that shaders render the way they’re meant to. All the models in my archive were built and shaded to work in realistic situations.
Make your camera real
This part is easy. If you use V-Ray or any other renderer that offers a physical camera, use it, and make sure you work with realistic aperture, shutter speed and ISO settings for your scene.
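Under the hood, those three settings collapse into a single exposure value, which makes different setups easy to compare. Here is a minimal sketch of the standard EV100 formula (generic photography math, nothing V-Ray-specific):

```python
import math

def ev100(f_number, shutter_s, iso):
    """Exposure value at ISO 100: EV100 = log2(N^2 / t) - log2(S / 100)."""
    return math.log2(f_number**2 / shutter_s) - math.log2(iso / 100)

# The classic "sunny 16" daylight exposure...
print(ev100(16, 1/125, 100))    # ~15.0
# ...and an equivalent exposure with a wider aperture and a faster shutter.
print(ev100(5.6, 1/1000, 100))  # ~14.9
```

Two setups with the same EV produce the same overall image brightness, so you can trade aperture against shutter speed to control depth of field or motion blur without re-tuning your lights.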
Physical cameras can also be a great help when setting up lights, if you’re not sure what the right power should be for a blue sunny sky or an overcast winter one. If you have a (real) camera, go outside and take a photo in fully automatic mode. Then give your physical camera the exact same settings your real camera used for the photo (this info should be embedded in the photo’s EXIF data). From there, tweak the intensity of your dome or skylight until it resembles the sky in the photo. That’s it. Now it’ll be much easier to extrapolate the power of other lights from that of the sky.
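Reading those settings off the photo is easy to script, too. A minimal sketch using Pillow (reasonably recent versions expose the Exif sub-IFD as below; the file name is hypothetical):

```python
from PIL import Image, ExifTags

def camera_settings(path):
    """Extract aperture, shutter speed and ISO from a photo's EXIF data."""
    exif = Image.open(path).getexif()
    sub = exif.get_ifd(ExifTags.IFD.Exif)  # exposure data lives in the Exif sub-IFD
    named = {ExifTags.TAGS.get(tag, tag): value for tag, value in sub.items()}
    return {
        "f-number": named.get("FNumber"),
        "shutter (s)": named.get("ExposureTime"),
        "ISO": named.get("ISOSpeedRatings"),
    }

print(camera_settings("sky_reference.jpg"))  # hypothetical reference photo
```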
The right choice of camera can also influence the mood of your image. You can mimic different cameras and formats by playing with such settings as the film plate size (turning your camera into a medium- or large-format one, which will show much shallower depth of field), distortion or bokeh shape. In the shot below, the anamorphic bokeh and shallow depth of field lend a cinematic feel to the image.
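Why a bigger plate gives you shallower depth of field: to keep the same framing you need a longer focal length, and depth of field shrinks with roughly the square of focal length. A rough sketch using the standard thin-lens approximation (my own illustration, with typical circle-of-confusion values):

```python
def total_dof_m(focal_mm, f_number, focus_m, coc_mm):
    """Approximate total depth of field for focus distances well short of
    the hyperfocal distance: DOF ~ 2 * N * c * d^2 / f^2."""
    f = focal_mm / 1000.0
    c = coc_mm / 1000.0
    return 2 * f_number * c * focus_m**2 / f**2

# Same framing at f/4, subject at 3 m: full frame (50mm, CoC ~0.030mm)
# versus a plate twice the size (100mm for the same view, CoC ~0.060mm).
print(f"{total_dof_m(50, 4, 3, 0.030):.2f} m")   # ~0.86 m
print(f"{total_dof_m(100, 4, 3, 0.060):.2f} m")  # ~0.43 m -- half the DOF
```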
Shoot like a photog
It’s generally good advice not to model parts of your scene that won’t be seen in the final renders. If you’re working on tight deadlines, knowing what not to do can be a life-saver (and it will boost your margins).
Still, I quite like to model my scenes as completely as possible–within reason–to give myself the freedom to roam the space later looking for good angles and views, as a photographer would (V-Ray RT is a godsend here). If you need to be economical about what to model, make sure to block out your scene accurately before picking your camera angle. You can then add fine details where you need them.
Freedom is also the reason why I like to work with geometry environments when doing interior work (as opposed to using backplates of trees and buildings outside the windows). These ensure that the perspective of the environment stays in line with the perspective of the interior wherever you decide to shoot from. Ironically, out-of-sync perspective is not just a tell-tale sign of a poor render; it also occurs frequently in bad studio photography. Nothing shouts “fake” quite as loudly.
Finally, when exploring your space in search of the perfect shot, make sure your composition is photographic.
Showing off your modeling skills is a major source of compositional mistakes. Getting too close to your models to showcase their intricate details, or using implausibly wide angles to capture your entire, laboriously built scene, will rarely result in successful shots. It helps to think strategically about what you’re trying to focus the viewer’s attention on. Except in close-up shots, details enrich a photograph but are rarely the focal point of an image.
Like a photographer, you’re free to use additional light sources (strobes, softboxes, etc.) to help light your scene or add highlights where you want them. And do experiment with perspective, switching between one- and two-point perspectives, for instance, before deciding which approach serves your subject better (see how this series of photos only uses a one-point perspective).
The elements that make the following image work include the strong horizontals and verticals, the rule-of-thirds composition, the wall on the left that frames the image, and the accurate DOF:
By contrast, the two-point perspective and general lack of clear focus make the image below less successful:
This is not the place for a composition primer, but again, don’t forget to look to photographers for guidance (you’ll find tips about composing, lighting and editing architecture and interior shots here and here, and ArchDaily is a great source of inspiration when it comes to composition).
Render settings & image format
The render settings I use are the result of trial and error and of simple, hard-to-kill habits. They work for me and may work for you, but other settings might be perfectly fine, or better. They also depend on the type of image you’re going for, the nature of your scene and the resolution of your render.
My startup scene uses a linear workflow (LWF), which I’ve found to be the only way to obtain naturalistic light falloff, especially for interiors. A linear workflow will ensure that exterior light fills the furthest corners of your interiors without you having to crank up the lights beyond reason, producing burnouts, fireflies and other rendering artefacts in the process.
My color mapping type is always set to Linear Multiply with a Gamma value of 2.2 and the sRGB button ticked in the VFB (i.e. I do not burn in the gamma correction).
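For reference, the view transform that sRGB button applies is (to a close approximation) the standard piecewise sRGB curve. A minimal sketch:

```python
import numpy as np

def linear_to_srgb(x):
    """Standard sRGB encoding: linear light in [0, 1] -> display values."""
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.0031308,
                    12.92 * x,
                    1.055 * np.power(x, 1 / 2.4) - 0.055)

# 18% mid-grey in linear light lands near the familiar mid-tone once encoded.
print(linear_to_srgb(np.array([0.18])))  # ~[0.46]
```

This is exactly why a linear render looks crushed and dark until the curve is applied for display.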
Many visualization artists like to use the Catmull-Rom AA filter, which has a pronounced sharpening effect on renders. Yet even photos from cameras equipped with good optics and large sensors are rarely that sharp when seen at 100%. Most of them display slightly blurred edges, owing either to the lens or to the filters digital cameras use to avoid moiré on small regular patterns. Using a blurring filter, like Area, will replicate this effect. I normally use a value of 2 pixels for 4K renders, which should be cranked up as the resolution increases (see the sketch below). This will go a long way towards softening the rough CG edges of a render and making sure everything “blends” well.
Here is a crop of a 10 megapixel photo at 100% that was taken with a good 50mm prime lens. Notice the slight softness of the contours:
And here is a crop of a 4K render using a 2px Area filter. Not exactly blurry, but the lack of edge enhancement binds the whole image nicely together. If you’re going to sharpen, which I sometimes do, it’s better done in post.
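Since that 2px value is anchored to a 4K frame, scaling it with resolution is trivial. A sketch that assumes linear scaling (a reasonable starting point, not a V-Ray default):

```python
def area_filter_px(render_width_px, base_width_px=3840, base_size_px=2.0):
    """Scale the Area filter size linearly with render width,
    anchored at 2 px for a 4K frame."""
    return base_size_px * render_width_px / base_width_px

for width in (1920, 3840, 7680):
    print(f"{width}px wide -> {area_filter_px(width):.1f}px filter")
```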
Finally, try not to clamp your renders, and deactivate sub-pixel mapping (if this doesn’t increase your render time too much, which it often does). This will ensure that your final image, saved as a 32-bit EXR, contains the biggest possible dynamic range. In essence, it will give you an HDR image to play with, which you can think of as a photograph containing many different exposures.
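A quick way to confirm that a render really is unclamped is to load the EXR and look at its maximum value (a sketch assuming an EXR-capable imageio backend such as OpenEXR or FreeImage; the file name is hypothetical):

```python
import imageio.v3 as iio  # requires an EXR-capable plugin (OpenEXR/FreeImage)

img = iio.imread("render.exr")         # hypothetical 32-bit linear render
print("dtype:", img.dtype)             # float32 for a true HDR file
print("max value:", float(img.max()))  # well above 1.0 if highlights are unclamped
```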
By the way, using the Reinhard color mapping type may give you a more pleasing image in the frame buffer by restoring invisible details in the highlights, but it will kill the dynamic range of your image, severely restricting your freedom in post-production. All these highlight details may look lost in a Linear Multiply image, but they’re not: they remain in the high-dynamic-range file, and you can recover them in post.
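The simplest form of the Reinhard operator shows why: it squeezes every value into [0, 1), irreversibly flattening the highlights (a generic sketch of the global operator; V-Ray’s Burn value blends between this behavior and linear):

```python
import numpy as np

def reinhard(x):
    """Global Reinhard tone mapping: x / (1 + x) maps [0, inf) into [0, 1)."""
    return x / (1.0 + x)

highlights = np.array([0.5, 1.0, 4.0, 16.0])
print(reinhard(highlights))  # [0.33 0.5 0.8 0.94]
# A highlight 16x brighter than white ends up barely brighter than one at 4x:
# once baked in, that separation is gone for good.
```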
Which brings me to…

Post-production
There’s a great deal of mythology around post-production. It is in no way the secret to photorealism, nor does it have to be very complex. Because I do a lot in 3D (as opposed to compositing and painting in 2D), more often than not my post-production ends up being relatively simple. The sad truth is, good and subtle post-production can push a good render into the realm of photorealism, but it will never rescue a bad render. If you don’t like your raw render, chances are you will not like your post-produced image either. So make sure you’re happy with the raw file before proceeding.
I’ve long used a multitude of tools to post-process my images. Lately, though, I’ve been turning more and more to Random Control’s ArionFX for Photoshop. The first reason is that it is the only PS plugin I’ve found that provides a full range of tools for working with 32-bit images (PS has very little native support for HDR images, and while Magic Bullet PhotoLooks does the same on paper, it behaves strangely with images bigger than 2K).
The second reason I like ArionFX is that it is made with CG artists in mind (RC makes the Arion unbiased renderer, whereas PhotoLooks is more of an artistic tool for photo processing). ArionFX is also designed around photographic concepts, with such controls as exposure and white balance, and features a long list of response curves that can make your image look like it was shot on film, for a more analogue look.
ArionFX is also a great demonstration of why you should work with unclamped 32-bit images. For one, you can use the dynamic range in your image to create realistic glow and glare effects that can be as subtle or as over-the-top as you want, while respecting the luminance information in the image. This means these glows and glares will appear exactly where they would if your render were a photograph. Rather than faking it, you can let ArionFX decide which parts of your image are bright enough to “deserve” halos and flares. And it will do so using information that is invisible to you when looking at your image on an LDR monitor.
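The underlying idea is simple, even if ArionFX’s actual implementation surely isn’t. A naive sketch of HDR bloom (my own illustration, assuming a float32 linear image and SciPy): threshold the image at display white, blur what survives, and add it back:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simple_glow(hdr, threshold=1.0, sigma=15, strength=0.3):
    """Naive HDR bloom: only pixels brighter than `threshold` (i.e. brighter
    than an LDR display can show) contribute to the halo."""
    bright = np.maximum(hdr - threshold, 0.0)        # unclamped highlights only
    halo = gaussian_filter(bright, sigma=(sigma, sigma, 0))
    return hdr + strength * halo

# `hdr` would be a float32 (height, width, 3) array from an unclamped EXR.
```

On a clamped 8-bit image this falls apart: a white wall and the sun both sit at 1.0, so they glow equally, which is exactly the fakery you’re trying to avoid.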
This render of the Bauhaus in Dessau shows realistic glare around the interior lights, derived automatically from the dynamic range of the image (ArionFX wasn’t used here):
That said, you really want to be subtle here. Big flares and glows are an artistic choice, but they can also be the hallmark of poor optics. And photorealism doesn’t mean making your image look like it was taken with a toy camera. Again, it is worth keeping an eye on reference photographs to guide you here.
Another benefit of using 32-bit images in combination with LWF is that they’ll give you a lot of leeway to tweak the exposure of your image in post without much loss of detail. If, like me, you tend to render your images relatively dark in order to shorten render times, you can boost the final image a few stops without giving up much quality (ArionFX also has a built-in Reinhard color mapper that will bring those lost highlight details back into the processed image).
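In a linear 32-bit image, a stop is simply a factor of two, so the boost is a single multiplication (a sketch assuming a float image in linear space):

```python
def push_exposure(linear_img, stops):
    """Brighten (or darken) a linear HDR image by whole or fractional stops."""
    return linear_img * (2.0 ** stops)

# e.g. rescue a render deliberately left one and a half stops dark:
# brightened = push_exposure(img, 1.5)
```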
Chromatic aberration is a concept that arouses passion. Also known as “shifting” or “fringing”, it is an artefact that appears as colorful outlines–purple, red, cyan or yellow–along high-contrast edges of an image. Though purple fringing can be due to the limitations of digital sensors, CA is more often due to the fact that lenses do not focus all colors of the spectrum on the same point of the sensor (or film), resulting in the separation and shifting of light of different wavelengths. This type of dispersion is more visible in low-quality optics and more pronounced towards the edges of an image, especially with wider-angle lenses.
Again, while some projects may warrant simulating poor optics, this is not generally desirable. On the other hand, even the best lenses will display a degree of CA in very high-contrast areas at certain apertures, which is often only visible when viewing photos at 100%. A tiny bit of CA can go a long way towards making a render look like a photo. My guide here, though, is that it should be barely visible to the naked eye, and only at 100%. Ideally, viewers should be able to perceive it without quite being able to put their finger on it.
Below is an example of CA on a 10 megapixel photo taken with a high-quality prime 50mm lens (cropped):
And here is a render showing a simulation of the same effect. It is only really visible at 100% (right-hand side):
PhotoLooks, ArionFX, PTLens and Lightroom all produce good, subtle CA (though all of these except ArionFX are actually designed to eliminate CA in photos), but it’s hard to give indicative values, as the result will vary with a render’s resolution and contrast. This is, unfortunately, a part of post-production that may require extensive trial and error.
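If you’d rather roll your own, the basic recipe for lateral CA is to rescale the red and blue channels by a tiny amount about the image centre, so the fringing grows towards the edges, as it does with a real lens. A minimal sketch with NumPy and SciPy (a generic approach, not what any of the tools above does internally):

```python
import numpy as np
from scipy.ndimage import affine_transform

def scale_about_center(channel, s):
    """Rescale a 2D channel by factor `s` about the image centre."""
    c = (np.array(channel.shape) - 1) / 2.0
    return affine_transform(channel, np.diag([1 / s, 1 / s]),
                            offset=c * (1 - 1 / s), order=1, mode="nearest")

def lateral_ca(img, amount=0.0008):
    """Fake lateral CA: magnify red, shrink blue, leave green untouched.
    `amount` is the relative scale difference -- keep it tiny for realism."""
    out = img.astype(np.float32).copy()
    out[..., 0] = scale_about_center(out[..., 0], 1 + amount)  # red spreads out
    out[..., 2] = scale_about_center(out[..., 2], 1 - amount)  # blue pulls in
    return out
```

At `amount=0.0008`, the outermost pixels of a 4K frame shift by only a pixel or two, which is about the level of subtlety you should be aiming for.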