The photographic look

CG artists in advertising, VFX, the game industry, and visualization often seem to pursue contradictory goals. While some use computers to conjure up the impossible, others go to great lengths to replicate the world. Yet they do have something in common: More often than not they strive to make their art, however fantastical, look real, or at least plausible.

As a goal, achieving photorealism informs every step of the creative process, from collecting references to modeling and lighting a scene and its props. Getting the photographic look right, however, is a subset of this that is more concerned with the final look. This is the part I’d like to focus on in this post, which was inspired by the frequent questions I get about the final look of my images.

Note that this is not just a post-production tutorial. While everything does converge onto post-production, there are many upstream steps that are equally vital in ensuring that your renders reach post-production in the best possible shape.

Photorealism is an ever-receding ideal. I don’t pretend to have achieved it. But it definitely is my aspiration, and I thought I’d share a few tips–some practical, others slightly more conceptual–on how I go about it.

 

Look up to photographers

Many of the tips below revolve around the notion of using your digital tools as a camera simulator. Many of these tools, especially renderers and post-processing software, use photographic language, with such concepts as exposure, depth of field, field of view… Their strength is that, unlike cameras, they are not constrained by the laws of physics. As a result, a big part of achieving the photographic look lies in embracing these constraints and renouncing some of the freedom our tools afford us, which can be a tough discipline.

Achieving photorealism is not just about accepting (some of) the constraints of photography, but also about using photography as your benchmark. Concretely, this means photography, and in particular the type of photography you are trying to replicate, should be your reference, as opposed to nature or the work of other artists. Photography itself is not nature; it is nature seen through a lens and captured on film or a sensor, with all the distortions this implies. Mimicking these distortions is among the surest ways to make your stuff “look” real.

So throughout the process, it is good to keep photographic references to hand and to check them regularly to make sure you’re on the right track. Even when modeling and shading assets, how these are built and look in real life often matters less than how they appear on camera.

 

Make your world real

Photorealism lies somewhere between fraud and prestidigitation. It is about persuading viewers that they are looking at something other than what they are actually looking at. If your goal is to create the illusion of an image of the world as seen through a camera, you will help yourself by making your digital world as similar as possible to the real one, starting with the scale.

We derive a lot of subconscious information about a scene when looking at a photograph, including a pretty accurate sense of scale. Ambient haze, shadow tapering and depth of field are among the many cues we pick up that tell us whether we are looking at a toy model or at a cityscape. Tilt-shift timelapse photography of real-life street scenes, for instance, works by subverting these cues to make us think we are looking at a miniature.

If you model to real-world scale, things like light falloff or depth of field are far more likely to resemble those in a photograph of an equivalent scene, without you having to fake it. For example, the haze and the deep depth of field in this wide-angle shot convey the monumental scale of the space:

BNF

While this is not a shading tutorial, it’s still worth saying a few things about materials, as bad ones can kill the photorealism in your scene. Getting the perfect shader takes a long time and books could be written about it (Grant Warwick’s V-Ray course is a great place to start, and you can check my material creation tips here & here). But there are a few simple things you can do to prevent your materials from damaging your render, and they all have to do with textures and colors.

Essentially, you should be careful not to set any color values in the diffuse slot too close to the maximum RGB value of 255, whether you’re working with flat colors or bitmap textures. This applies to the individual red, green and blue channels. Excessive RGB values will tax the renderer, extending render times, and give your image an unrealistic, over-saturated look. Likewise, you should never use pure white or pure black in the diffuse, as these never occur in real life: pure black will absorb all light and pure white will bounce all light rays back. As a guide, I use values around 180, 180, 180 for white in all my scenes and never go beyond 200.

If you work with textures in the diffuse, color-pick them to make sure they don’t have excessive RGB values. I also often slightly desaturate the textures I make from photos in Photoshop before using them. You may also want to tweak your textures to make sure they all “sit” well together. This is particularly true of saturation. For example, although the trees in the scene below use different leaf textures, some warmth was added to make the whole foliage “sit” well together.

Kongresshalle Berlin
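As an illustration, here is a minimal Python sketch of that diffuse sanity check, assuming Pillow and NumPy are available (the file names are hypothetical):

```python
# A rough sketch of the diffuse-texture sanity check described above.
# Assumes Pillow and NumPy; "wood_diffuse.jpg" is a hypothetical file name.
import numpy as np
from PIL import Image

MAX_DIFFUSE = 200  # conservative ceiling for any R, G or B channel

img = np.asarray(Image.open("wood_diffuse.jpg").convert("RGB"), dtype=np.float32)

# Report how much of the texture exceeds the ceiling before touching it.
hot = (img > MAX_DIFFUSE).any(axis=-1)
print(f"{100.0 * hot.mean():.1f}% of pixels exceed {MAX_DIFFUSE}")

# Rescale uniformly so the brightest channel lands at the ceiling; this
# darkens the whole map slightly but preserves texture detail, unlike a
# hard clip would.
peak = img.max()
if peak > MAX_DIFFUSE:
    img *= MAX_DIFFUSE / peak

Image.fromarray(img.astype(np.uint8)).save("wood_diffuse_safe.jpg")
```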

 

Make your light real

The realism commandment goes for other values, including lights. A skylight or a sun will have vastly more power than the brightest lightbulb. Bear this in mind when setting up dome lights and fill lights such as soft boxes. If you set a realistic intensity for your sky and electric lights when lighting an interior for instance–where interior and exterior lighting work in tandem–you are far more likely to end up with a render that looks real than if you eyeball these values.

I often hear viz artists say they have a hard time dealing with burnouts (a dramatic overexposure of some parts of the scene) when using real-world sun or sky intensities for interior renders. The truth is, photographers have to deal with the very same issues in high-contrast situations. Our advantage is that we can render high-dynamic-range images with enough color depth to let us deal with burnouts in post and recover information in highlights. Even so, any exterior visible through windows in an interior render just has to be overexposed. Perfectly uniform lighting may be an artistic goal, but it will not feel photorealistic (though many photographers will deal with this by merging different exposures). The shot below shows a realistic balance of interior and exterior lighting, with the exterior overexposed and featuring a colder light (two softboxes illuminate the interior to help balance the lighting, as clearly visible on the beams):

Norsouth

Realistic light intensity matters for other reasons. In non-realistic set-ups (for instance, overly bright lights paired with a very dark camera lens), materials will stop behaving accurately or predictably. For example, this will give an illusion of higher reflectivity on some materials, which then has to be compensated for. If you rely on asset libraries, using real-world lights is the only way to make sure that shaders will render the way they’re meant to. All the models in my archive were built and shaded so as to work in realistic situations.

 

Make your camera real

This part is easy. If you use V-Ray or any other renderer that offers a physical camera, use it, and make sure you work with realistic aperture, shutter speed and ISO settings for your scene.

Physical cameras can also be a great help in setting up lights if you’re not sure what the right power should be for a blue sunny sky or an overcast winter one. If you have a (real) camera, go outside and take a photo in fully automatic mode. Then give your physical camera the exact same settings your real camera used for the photo (this info should be embedded in the photo’s EXIF data). From there, tweak the intensity of your dome or skylight until it resembles the sky in the photo. That’s it. Now it’ll be much easier to extrapolate the power of other lights from that of the sky.
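If you want to script that last step, a small Pillow sketch along these lines will pull the relevant settings out of the photo (the file name is hypothetical, and depending on your Pillow version the tags may instead need to be read via getexif() and its sub-IFDs):

```python
# Read exposure settings from a reference photo's EXIF data (Pillow).
# "sky_reference.jpg" is a hypothetical file name.
from PIL import Image
from PIL.ExifTags import TAGS

raw = Image.open("sky_reference.jpg")._getexif() or {}
exif = {TAGS.get(tag_id, tag_id): value for tag_id, value in raw.items()}

print("Shutter speed:", exif.get("ExposureTime"))    # e.g. 1/250 s
print("Aperture:", exif.get("FNumber"))              # e.g. 8.0
print("ISO:", exif.get("ISOSpeedRatings"))           # e.g. 100
```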

The right choice of camera can influence the mood of your image. You can mimic different cameras and formats by playing with such settings as the film plate size (turning your camera into a medium- or large-format one, which will show much shallower depth of field), distortion or bokeh shape. In the shot below, the anamorphic bokeh and shallow depth of field lend a cinematic feel to the image.

Crystals2

 

Shoot like a photog

It’s generally considered bad practice to model parts of your scene that won’t be seen in the final renders. If you’re working on tight deadlines, knowing what not to do can be a life-saver (and it will boost your margins).

Still, I quite like to model my scenes as completely as possible–within reason–to give myself the freedom to roam the space later looking for good angles and views, as a photographer would (V-Ray RT is a godsend here). If you still need to be economical about what to model, make sure to block your scene accurately before picking your camera angle. You can then add fine details where you need them.

Freedom is also the reason why I like to work with geometry environments when doing interior work (as opposed to using backplates of trees and buildings outside the windows). These will ensure that the perspective of the environment is in line with the perspective of the interior wherever you decide to shoot from. Ironically, out-of-sync perspective is not just a tell-tale sign of a poor render, it also occurs frequently in bad studio photography. Nothing shouts “fake” quite as loudly.

Finally, when exploring your space in search of the perfect shot, make sure your composition is photographic.

Showing off your modeling skills is a major source of compositional mistakes. Getting too close to your models to showcase their intricate details, or using implausibly wide angles to capture your entire, laboriously built scene, will rarely result in successful shots. It helps to think strategically about what you’re trying to focus the viewer’s attention on. Except in close-up shots, details enrich a photograph but are rarely the focal point of an image.

Like a photographer, you’re free to use additional light sources (strobes, softboxes, etc.) to help light your scene or add highlights where you want them. And do experiment with perspective, switching between one- and two-point perspectives, for instance, before deciding which approach serves your subject better (see how this series of photos only uses a one-point perspective).

The elements that make the following image work include the strong horizontals and verticals, the rule-of-thirds composition, the wall on the left that frames the image, and the accurate DOF:

Bauhaus Archiv

By contrast, the two-point perspective and general lack of clear focus make the image below less successful:

Bauhaus Archiv

This is not the place for a composition primer, but again, don’t forget to look to photographers for guidance (you’ll find tips about composing, lighting and editing architecture and interior shots here and here, and Archdaily is a great source of inspiration when it comes to composition).

 

Render settings & image format

The render settings I use are the result of trial and error and of simple, hard-to-kill habits. They work for me and may work for you, but other settings might be just as good or better. They also depend on the type of image you’re going for, the nature of your scene and the resolution of your render.

My startup setup uses LWF (linear workflow), which I’ve found to be the only way to obtain naturalistic light falloff, especially for interiors. A linear workflow will ensure that exterior light fills the furthest corners of your interiors without you having to crank up the lights beyond reason, producing burnouts, fireflies and other rendering artefacts in the process.

My color mapping type is always set to Linear Multiply with a Gamma value of 2.2 and the sRGB button ticked in the VFB (i.e. I do not burn in the gamma correction).
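For reference, the sRGB preview the VFB applies on top of the linear data is (approximately) the standard sRGB transfer function, which looks like this in Python:

```python
# The display transform the VFB's sRGB button approximates:
# linear radiometric values -> sRGB-encoded values for the monitor.
import numpy as np

def linear_to_srgb(x: np.ndarray) -> np.ndarray:
    x = np.clip(x, 0.0, 1.0)  # display preview only; keep the EXR unclamped
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1.0 / 2.4) - 0.055)
```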

Many visualization artists like to use the Catmull-Rom AA filter, which has a pronounced sharpening effect on renders. Yet even photos from cameras equipped with good optics and large sensors are rarely that sharp when seen at 100%. Most of them display slightly blurred edges, owing either to the lens or to the filters digital cameras use to avoid moiré on small regular patterns. Using a blurring filter, like Area, will replicate this effect. I normally use a value of 2 pixels for 4K renders, which should be cranked up as the resolution increases. This will go a long way towards softening the rough CG edges of a render and making sure everything “blends” well.
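If your renderer offers no soft AA filter, a sub-pixel Gaussian blur in post gives a rough approximation of the effect (a Pillow sketch; the radius is a starting point, not gospel, and note that a render-time filter operates on the samples themselves rather than on finished pixels):

```python
# Rough post approximation of a soft AA filter: a sub-pixel Gaussian blur.
# "render_4k.png" is a hypothetical file name.
from PIL import Image, ImageFilter

render = Image.open("render_4k.png")
softened = render.filter(ImageFilter.GaussianBlur(radius=0.7))
softened.save("render_4k_soft.png")
```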

Here is a crop of a 10 megapixel photo at 100% that was taken with a good 50mm prime lens. Notice the slight softness of the contours:

SharpnessExample1

And here is a crop of a 4K render using a 2px Area filter: not exactly blurry, but the lack of edge enhancement binds the whole nicely together. If you’re going to sharpen, which I sometimes do, it’s better to do it in post.

SharpnessExample2

Finally, try not to clamp your renders, and de-activate sub-pixel mapping (if this doesn’t increase your render time too much, which it often does). This will ensure that your final image, saved as a 32-bit EXR, contains the widest possible dynamic range. In essence, it will give you an HDR image to play with, which you can think of as a photograph containing many different exposures.
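As a sketch of what that unclamped EXR buys you in practice, here is an exposure push in Python (imageio with an EXR-capable backend; file names hypothetical):

```python
# Push a linear, unclamped 32-bit EXR up by two stops in post.
# Requires an EXR-capable imageio backend (e.g. the FreeImage plugin).
import imageio.v3 as iio
import numpy as np

img = iio.imread("render.exr").astype(np.float32)  # linear, unclamped data
stops = 2.0
img *= 2.0 ** stops  # exposure works in powers of two, as on a camera
iio.imwrite("render_plus2stops.exr", img)
```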

By the way, using the Reinhard color mapping type may give you a more pleasing image in the frame buffer by restoring otherwise invisible details in highlights, but it will kill the dynamic range of your image, severely restricting your freedom in post-production. All these highlight details may look lost when viewing a Linear Multiply image, but they’re not. They remain in the high-dynamic-range image and you can recover them in post.
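For the curious, the classic global Reinhard curve compresses highlights roughly like this (V-Ray’s version adds a burn parameter, but the principle is the same), which is why burning it into the saved file discards range you can never get back:

```python
# The classic global Reinhard tone-mapping curve: values far above 1.0
# are squeezed asymptotically toward 1.0. Baked into the saved file,
# that compression is irreversible; applied in post, it is just a preview.
import numpy as np

def reinhard(linear: np.ndarray) -> np.ndarray:
    return linear / (1.0 + linear)
```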

Which brings me to…

 

Post-production

There’s a great deal of myth around post-production. It is in no way the secret to photorealism, and neither does it have to be very complex. Because I do a lot in 3D (as opposed to compositing and painting in 2D), more often than not my post-production ends up being relatively simple. The sad truth is, good and subtle post-production can push a good render into the realm of photorealism, but it will never rescue a bad render. If you don’t like your raw render, chances are you will not like your post-produced image either. So make sure you’re happy with the raw file before proceeding.

I’ve long used a multitude of tools to post-process my images. Lately, though, I’ve been turning more and more to Random Control’s ArionFX for Photoshop. The first reason is that it is the only PS plugin I’ve found that provides a full range of tools to work with 32-bit images (PS has very little native support for HDR images, and while Magic Bullet PhotoLooks does the same on paper, it behaves strangely when working with images that are bigger than 2K).

The second reason I like ArionFX is that it is made with CG artists in mind (RC makes the Arion unbiased renderer, whereas PhotoLooks is more of an artistic tool for photo processing). ArionFX is also designed around photographic concepts, with such controls as exposure and white balance, and features a long list of response curves that can make your image look like it was shot on film, for a more analogue look.

ArionFX is also a great demonstration of why you should work with unclamped 32-bit images. For one, you can use the dynamic range in your image to create realistic glow and glare effects that can be as subtle or as over-the-top as you want, while respecting the luminance information in the image. This means these glows and glares will appear exactly where they would if your render were a photograph. Rather than fake it, you can let ArionFX decide which parts of your image are bright enough to “deserve” generating halos and flares. And it will do so using information that is invisible to you when looking at your image on an LDR monitor.
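A minimal sketch of the idea (not ArionFX’s actual algorithm, and assuming NumPy and SciPy): isolate everything above a luminance threshold, blur it, and add it back, so that only genuinely bright pixels–including those far above what your monitor can show–generate a halo:

```python
# Minimal HDR bloom sketch (not ArionFX's actual algorithm). Only pixels
# brighter than the threshold, i.e. data invisible on an LDR display,
# contribute to the halo.
import numpy as np
from scipy.ndimage import gaussian_filter

def bloom(hdr: np.ndarray, threshold: float = 1.0,
          radius: float = 12.0, strength: float = 0.3) -> np.ndarray:
    bright = np.maximum(hdr - threshold, 0.0)          # unclamped overshoot only
    halo = gaussian_filter(bright, sigma=(radius, radius, 0))  # blur spatially
    return hdr + strength * halo
```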

This render of the Bauhaus in Dessau shows realistic glare around the interior lights that was automatically derived from the dynamic range of the image (ArionFX wasn’t used here):

Bauhaus Dessau

That said, you really want to be subtle here. Big flares and glows are an artistic choice, but they can also be the hallmark of poor optics. And photorealism doesn’t mean making your image look like it was taken with a toy camera. Again, it is worth keeping an eye on reference photographs to guide you here.

Another benefit of using 32-bit images in combination with LWF is that they’ll give you a lot of leeway to tweak the exposure of your image in post without much loss of detail. If, like me, you tend to render your images relatively dark in order to shorten render times, you can boost the final image a few stops without giving up much quality (ArionFX also has a built-in Reinhard color mapper that will bring those lost highlight details back in the processed image).

Chromatic aberration is a concept that arouses passion. Also known as “shifting” or “fringing”, this is an artefact that appears as colorful outlines–purple, red, cyan or yellow–along high-contrast areas of an image. Though purple fringing can be due to the limitations of digital sensors, CA is more often due to the fact that lenses do not always focus all colors of the spectrum on the same point of the sensor (or film), resulting in the separation and shifting of light of different wavelengths. This type of dispersion is more visible in low-quality optics and more pronounced towards the edges of an image, especially when using wider-angle lenses.

Again, while some projects may warrant simulating poor optics, this is not generally desirable. On the other hand, even the best lenses will display a degree of CA in very high-contrast areas at certain apertures, which is often only visible when viewing photos at 100%. A tiny bit of CA can go a long way towards making a render look like a photo. My guide here, though, is that it should be barely visible to the naked eye, and only at 100%. Ideally, viewers should be able to perceive it without quite being able to put their finger on it.

Below is an example of CA in a 10 megapixel photo taken with a high-quality 50mm prime lens (cropped):

CAexample1

And here is a render showing a simulation of the same effect. It is only really visible at 100% (right-hand side):

CAexample

PhotoLooks, ArionFX, PTLens and Lightroom all produce good, subtle CA (though all of these except ArionFX are actually designed to eliminate CA in photos), but it’s hard to give good indicative values, as the result will vary depending on a render’s resolution and contrast values. This is unfortunately a part of post-production that may require extensive trial and error.
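If you’d rather script it yourself, one common way to fake lateral CA (a sketch, not what any of these plugins actually does) is to scale the red and blue channels a hair differently around the image centre, so the fringes grow toward the edges:

```python
# Simulate lateral chromatic aberration: scale the red channel up and the
# blue channel down by a tiny factor around the image centre. Expect a
# pixel-wide dark border on the blue channel; crop the result in practice.
from PIL import Image

def add_ca(img: Image.Image, shift: float = 1.0015) -> Image.Image:
    r, g, b = img.convert("RGB").split()
    w, h = img.size

    def rescale(channel: Image.Image, factor: float) -> Image.Image:
        new = channel.resize((round(w * factor), round(h * factor)),
                             Image.LANCZOS)
        left = (new.width - w) // 2   # negative offsets pad with black
        top = (new.height - h) // 2
        return new.crop((left, top, left + w, top + h))

    return Image.merge("RGB", (rescale(r, shift), g, rescale(b, 1 / shift)))
```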

    • Max
    • January 7, 2015

    Great post, I always tend to work in 32bit but wasn’t aware of ArionFX so thanks for that. I sometimes see artists tweaking the intensity of the sun in their scene. I didn’t realise you could do that in real life!

    I just have a quick question – you mentioned that you render relatively dark to improve render times but I don’t understand how. If a scene is darker wouldn’t there be more noise for the DMC sampler to clean up and therefore longer render times?

  1. Thank you very much, Bertrand, for this great article.
    I agree with you on the point about “photography, materials and linear workflow”, and that we should always refer to photography guides when it comes to CGI.

    I had the same obstacle when I started in the visualization field. I was always wondering how to make my work more realistic, and my mistake was that I was learning from 3D rather than from photographers, who spend years developing workflows and solutions to achieve realism and sculpt objects with light!

    I remember watching a video about architecture photography in which the instructor took several 16-bit images to combine them with masks in Photoshop. It was a kitchen scene and he didn’t like the reflections on the glass, so he used the same frame and the same position but, in this case, with a reflector to block the light coming from the windows. He did the same thing for over-bright areas, and at that moment it clicked for me (ha! they have the same challenges!).

    Very inspiring work you have, Bertrand.
    Again, thank you.
    Best,
    Ismail

    • elgrillo
    • January 7, 2015

    Great article! Thanks for all this useful information!

    • toon
    • January 7, 2015

    Hi Bertrand, very interesting post, thank you.

    Could you please clarify what you mean by avoiding excessive RGB values? In terms of RGB values, how would you represent, numerically, a pure red material, for example? Where would your cut-off be on the red channel to avoid the over-saturation you mention in your post?

    When dealing with textures, again, how are you making sure they fall within a realistic range? Are you making sure no white part of the texture goes over 180-200, for example? Pulling saturated colours back to below what value? And making sure the blacks stay above 5-10? Etc.

    Sorry for all the questions. It’s just that when you get down to the nitty-gritty of your post (which is great), this aspect needs more clarity.

  2. As usual, top shelf work! Thanks for sharing your workflow.

    • LC3D
    • January 7, 2015

    Brilliant post, thanks Bertrand! Great advice here…

    • LC3D
    • January 7, 2015

    Just quickly, regarding the point on using 180, 180, 180 for your white diffuse value, do you adjust this in post to be more “white”, as 180, 180, 180 is quite ‘grey’…?

  3. Thanks! Priceless information!

  4. Thanks for the great post, this is a great primer for photorealistic rendering. I also appreciate that you’ve taken the time to elaborate on your whole production pipeline, not just one or two parts.

    In response to Max: the rendering in V-Ray has been optimized for speed so that dark areas are not sampled as accurately as bright areas. The reason for this is that (presumably) in most cases, the noise in dark areas is less noticeable than in bright, well lit areas. Therefore you should be able to get away with more noise in those dark areas while enjoying optimized render speeds as a result. I believe the technique is called importance sampling, in case you want to read more about it online.

    This does raise a question for Bertrand: wouldn’t lifting the overall brightness by a few stops in post make the noise more noticeable in the dark areas, in case you’ve rendered the image a bit too dark to begin with? Or are you pointing to a sweet spot which occurs somewhere between rendering with less brightness and with optimized speed, while not producing too much noise for purposes of brightening the image in post? In other words, just how dark are the renderings without post? 😉 It’s an interesting concept so I’d be curious to know more about it.

  5. Thanks a lot Bertrand.. very useful advices

  6. Hey master, can you share with us the lighting rig of “Scandinavian Style Interior”? I guess it’s an HDRI dome light with a Peter Guthrie HDRI? How many subdivs in the dome light, and what about the window glass? Do you exclude it from the dome light? It would be great if you answered that 🙂

    • jackieteh
    • January 13, 2015

    Hi sir,

    (If you rely on asset libraries, using real-world lights is the only way to make sure that shaders will render the way they’re meant to. All the models in my archive were built and shaded so as to work in realistic situations.)

    What does “real-world lights” mean here? How do you set them up? Can it be done in 16-bit rendering?

    Regards,
    Jackie

    • chrisc0le
    • January 14, 2015

    You’ve got me hooked on using linear multiply now. 😉

    I must admit the range of tones is far wider than with Reinhard, but when using an HDRI, the amount of power needed to light an interior (without lots of large windows) blows out the direct sunlight.

    You state that there is enough ‘color depth to let us deal with burnouts in post and recover information in highlights’, and I understand that using an exposure layer with a mask in Photoshop can recover those areas.

    Is this the method you choose, or do you use the Reinhard adjustment in Arion and lose some of the tonal range? I also read about you using the curve adjustment to ‘clip’ the burnt-out areas in another post, but I’m a little unsure how you’re doing this. Is it simply lowering the point on the right until the burnt areas come back below 1.00?

    I think I need reassurance that what I’m getting is correct.

    Thanks, Chris.

    • nildoe
    • January 17, 2015

    Hi Bertrand,

    You say it’s always best to work with real-life units, which of course makes sense… What system units do YOU normally use for modelling things like chairs or people? And do you use those same units to model glasses of water, for example?

    • carlocki
    • January 19, 2015

    This is where art meets technology.

  7. Excellent post, thank you so much Bertrand.
    @jackieteh: I think what Bertrand means is that you must set up your lights using “lumen” units rather than the “Default (image)” one. Usually interior lighting sources range approximately between 1000 and 3000 lm.

  8. Hi, Bertrand.

    Wonderful article (again… =)

    It’s so nice to read something rather different, something that stays away from all those catchy “10 tips to make your renders look like a photo” lists (tweak this, tweak that, use this value, do this… do whatever, as all we need is your time on this website to make the ads pay out…).

    I know some were a bit “upset” that you didn’t reveal something “extraordinary” (like a secret “render nice image” button) in AD4 a few years ago. However, I found the first part the most interesting: how you approach the process is much more important than any settings or “cool (always working) tips” (which it seems too many “visualisers” are raised on…).

    Thanks for taking the time to share all those thoughts! And please post some more articles like this in the future (if you find the time to write them).

    Cheers,
    Glimps

    • carlocki
    • March 12, 2015

    Bertrand, in which format do you export from V-Ray?

  9. I use .exr

  10. Hello Bertrand,

    Thanks again for such great posts. I wanted to ask what your approach is to simulating real photographic lighting tools such as strobes or softboxes. Have you written anything about it, or do you have a link you would recommend reading?
    Thank you so much again. I enjoy the knowledge that you share, but even more the way that you write it. It really makes for smooth reading.

    Greetings
    Paola

    • marcozzz
    • April 2, 2015

    Hello Bertrand, I’m a Blender user with V-Ray. There is something unclear to me: if you work with sub-pixel mapping deactivated and unclamped render output, how can you fix the jagged borders, only with the blur effect? My problem is especially with high-contrast zones and highlights.
    Ciao.

  11. Yes, that’s what I do. Also, it isn’t such a problem if you render at high resolutions.

    • marcozzz
    • April 2, 2015

    Brute force + high res: for around 2500x1400 I think you’d need the computing power of CINECA, so I will settle for IM.
    Thanks for the reply.
    P.S. A little flashback to Blender from you would be very interesting.

  12. Actually, it’s not that bad. With the right settings, V-Ray is very fast in most scenarios.

    • SeBass
    • May 21, 2015

    Hi Bertrand.
    Just a quick question: what option do you choose under color mapping “mode” – Color mapping and gamma, None, or Color mapping only?
