What is PBR really?

When I was trying to get a good comprehensive understanding of Physically Based Rendering, I often found articles about microfacet theory and energy conservation. However, as I kept digging, this turned out to be only a part of the whole story. I’m writing this post as an attempt to give a more overarching description of real-time PBR in a single article, without diving too deep into specific topics, for those who would like a better understanding of this often-mentioned concept in modern real-time rendering.

tl;dr

In essence, PBR covers everything from photons being emitted from a light source, zipping through the medium while scattering off objects, and finally arriving at the lens system of the observer and hitting the light sensor. PBR tries to simulate the tracing of this entire photon path using physical models. The term “Physically Based” is used instead of “Physically Correct” because the models are approximate due to performance and memory constraints. If performance weren’t a concern, we would just trace a massive number of photons from light sources and be done with it, but unfortunately we are still very far from this being a real-time option, so we have to resort to various approximations instead. In short, this is what PBR is about, but if you are interested in reading more, let’s start by uncovering some more details about the term.

Splitting PBR

PBR is an umbrella term for a range of topics. To scratch the surface a bit, here are some of the larger PBR sub-topics:

  • Physically Based Lighting, to approximate how light ends up on the surface element seen by the observer. All light sources should be simulated as area lights, as there are no punctual light sources in the real world. Shadows should likewise be simulated as matching area shadows with a proper contact-hardening effect. Global illumination should be energy conserving, influenced by all light sources, and feature multiple bounces and secondary shadows. All image-based lighting (e.g. environment mapping and screen-space reflections) should be consistent with each other and with other analytical light sources.
  • Physically Based Shading, the most discussed topic, which defines how light scatters off surface elements. The microfacet BRDF model is the de facto standard shading model used nowadays to model the specular reflectance of opaque surfaces. For the microfacet model you can choose from different options for the normal distribution function (D), geometry term (G) and Fresnel term (F) to approximate light scattering in the real world; the GGX distribution paired with Schlick’s Fresnel approximation is currently widely adopted (see the sketch after this list). While there are more advanced models for diffuse reflectance (e.g. Oren-Nayar, which features the retro-reflection of rough surfaces), the simple constant Lambertian reflectance model is still the most widely used. While these models target completely opaque surfaces, different physical models are used for sub-surface scattering, multi-layer surfaces, semi-transparent surfaces, etc.
  • Physically Based Camera Model, to model how the light interacts with the observer’s optics, be it an eye or a camera: how the lens configuration, shutter speed and shape, and light sensor properties of the observer influence post-effects such as lens flares, exposure, bloom, vignetting, motion blur and depth of field.
  • Physically Based Atmospheric Model, to simulate how light interacts with the atmosphere. Eric Bruneton’s model is often cited as a realistic atmospheric model suitable for real-time applications; it accounts for multiple Rayleigh and Mie scattering of sunlight. Clouds and fog should also be based on physical models, influencing light transmittance from all sources, with shadowing and multiple-scattering effects for crepuscular rays and light shafts.
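
To make the shading bullet above concrete, here is a minimal sketch of the microfacet specular BRDF it describes, using the GGX normal distribution, a height-correlated Smith visibility term and Schlick’s Fresnel approximation, paired with a Lambertian diffuse term. The code is illustrative C++ rather than shader code from any particular engine, and the vector math is reduced to precomputed cosines for brevity.

    #include <algorithm>
    #include <cmath>

    const float PI = 3.14159265f;

    // GGX / Trowbridge-Reitz normal distribution function D.
    float D_GGX(float NdotH, float roughness) {
        float a  = roughness * roughness;
        float a2 = a * a;
        float d  = NdotH * NdotH * (a2 - 1.0f) + 1.0f;
        return a2 / (PI * d * d);
    }

    // Height-correlated Smith visibility term: the geometry term G
    // folded together with the 4*NdotL*NdotV microfacet denominator.
    float V_SmithGGX(float NdotV, float NdotL, float roughness) {
        float a  = roughness * roughness;
        float a2 = a * a;
        float gv = NdotL * std::sqrt(NdotV * NdotV * (1.0f - a2) + a2);
        float gl = NdotV * std::sqrt(NdotL * NdotL * (1.0f - a2) + a2);
        return 0.5f / (gv + gl);
    }

    // Schlick's approximation of the Fresnel term F.
    float F_Schlick(float f0, float VdotH) {
        return f0 + (1.0f - f0) * std::pow(1.0f - VdotH, 5.0f);
    }

    // One light sample's contribution for a single color channel:
    // (Lambertian diffuse + microfacet specular) * NdotL.
    float BRDF(float NdotL, float NdotV, float NdotH, float VdotH,
               float albedo, float f0, float roughness) {
        float diffuse  = albedo / PI;
        float specular = D_GGX(NdotH, roughness)
                       * V_SmithGGX(NdotV, NdotL, roughness)
                       * F_Schlick(f0, VdotH);
        return (diffuse + specular) * std::max(NdotL, 0.0f);
    }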

As shown above, the goal in PBR is to have a physically meaningful model for each and every component of the rendering algorithm, without introducing ad hoc fudge factors. This is a quite obvious and natural expectation, but in the past real-time graphics technology development has taken a different approach, one that’s deep-rooted in the mindset of many real-time graphics engineers.

A bit of history

Historically, improvements in real-time 3D rendering have been made based on observations. In the early days of GPUs, not to mention the time before them, there wasn’t much graphics processing power available, and rendering features were added incrementally based on empirical evidence and research done in off-line rendering. As graphics programmers visually analyzed real-world images, the missing features were incorporated into rendering algorithms based on what made the biggest visual step towards realism with the least amount of required processing power.

At first the Lambertian diffuse model was introduced, due to the lack of any dynamic lighting on surfaces and because it was quite cheap to calculate, first at the triangle/vertex level and later at the pixel level. As more processing power became available, Phong and Blinn-Phong specular models were added to more realistically model materials with different specular reflectances. Shadow mapping and many other rendering features such as environment mapping, fog, bloom, ambient occlusion, screen-space reflections, global illumination, etc. followed, as their absence from rendered images was noticed in comparisons against real-world images and R&D work made them feasible to calculate in real time.

It’s important to realize that during the past couple of decades real-time graphics technology development has been driven by perception. The approach has been to analyze the rendered image, see what’s missing, and add the feature within the given performance budget.

Why change it if it’s not broken?

This was the question I was asking myself, and I know many people who don’t simply take PBR at face value ask it too. We have had great success with the current approach to graphics R&D, so why would we change it?

While the empirical development route of graphics technology has taken us far, real-time rendering has reached the point where it is difficult to tell what exactly the correct visual result is. Yes, we have specular reflections of lights on surfaces, but do these reflections have the right intensity, shape and falloff to match the real world? Does the fog in the level influence light transmittance realistically, attenuating light before it reaches surfaces and scattering it from air particles before it reaches the observer? We can see global illumination producing the expected light-bleeding effects from surrounding geometry, but how well does this match real-world references in isolation from all the other lighting effects? In the past we have implemented many of the components required for realistic real-time rendering, but the accuracy of their implementation is in question.

It’s usually quite easy for anyone to tell a real-time rendered image apart from a real-world image, but it’s often difficult, if not impossible, even for a seasoned technical artist or 3D programmer to pinpoint why. These subtle errors in rendering algorithms keep us from believing that a rendered image is real, since our brains have developed to subconsciously notice all these subtleties in the real world. If we want to increase the level of realism in real-time rendering, we can’t simply trust our perception to detect imperfections in rendering algorithms anymore, but have to rely on the physical theory of light instead. PBR can thus be seen as a paradigm shift from old-school observation-based graphics technology development to development based on the physics of light.

How about artistic control?

Artists have been able to counter rendering discrepancies to a degree while authoring assets, but this has been a quite laborious effort based on subjective visual analysis of the rendered image. For example, if artists are given separate control over the diffuse and specular intensities of a light, you’ll get as many intensity settings as there are artists, since they set the values based on their own personal perception of reality and artistic taste. What further complicates the situation is that games are about interactivity, with free cameras, and are becoming increasingly dynamic, with changing environmental conditions. Non-PBR settings which give a realistic result in specific lighting conditions may break in others. Of course your game may not feature changing lighting conditions, but the same assets are used in different conditions within the game, and these conditions also likely change throughout the development process.

Ideally, the goal with PBR is to give artists control only over properties that could be controlled in the real physical world, and do the rest by simulating the physics of light in the rendering engine. Artists can, for example, define the material of a wall, but they can’t define how light behaves when it interacts with the wall beyond the boundaries set by the material’s physically based BSDF model. Artists can also define the size, color and intensity of a light, but they can’t define separate colors for diffuse and specular illumination, since there is no such thing in the real world, and so forth. This means that artists actually have less control over assets than before and have to shift their way of thinking towards how light behaves in the real world.
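
As a hypothetical illustration of this shift in parameterization, compare a legacy light description against a physically based one; all names below are made up for the example. The physical light exposes only real-world quantities, and distance falloff is derived from the inverse square law rather than authored:

    struct LegacyLight {
        float diffuseColor[3];   // artist-picked diffuse tint
        float specularColor[3];  // separately picked specular tint
        float falloffExponent;   // ad hoc distance falloff
    };

    struct PhysicalLight {
        float color[3];          // single chromaticity for the light
        float luminousPower;     // intensity in lumens
        float radius;            // physical size of the area light
        // No separate diffuse/specular colors and no falloff control:
        // attenuation follows from the inverse square law.
    };

    // Inverse square attenuation derived from the light's physical
    // intensity instead of an authored falloff curve (isotropic
    // point-light approximation).
    float attenuation(float luminousPower, float distance) {
        const float PI = 3.14159265f;
        float intensity = luminousPower / (4.0f * PI);  // candela
        return intensity / (distance * distance);
    }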

While the PBR engine ensures proper processing of the data, artists must ensure that the data is correctly authored to give rendered images a realistic look. For example, it’s important that albedo textures use physically plausible color ranges that represent the intended materials, and that properly calibrated reference values are used for overall consistency. We had some issues where artists used albedo values that were too dark for metals (even for anodized metals), while natural metals have quite bright albedo values (above ~0.5 for all RGB channels). Old-school artists tend to judge the desired albedo color from the rendered result, but this can be quite misleading due to improper lighting or camera settings, or because of the artist’s personal perception of what reality should look like.
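
A simple guard against such authoring errors is an automated validation pass over material data. The sketch below is hypothetical and the thresholds are illustrative (the ~0.5 metal bound from above, and ~0.02 as a rough floor for the darkest non-metals such as charcoal); calibrated reference charts should define the real limits:

    #include <cstdio>

    struct MaterialSample {
        float albedo[3];  // linear-space base color
        bool  isMetal;
    };

    bool validateAlbedo(const MaterialSample& m) {
        for (int c = 0; c < 3; ++c) {
            // Natural metals reflect brightly; very dark channels
            // usually indicate an authoring error.
            if (m.isMetal && m.albedo[c] < 0.5f) {
                std::printf("warning: metal albedo channel %d = %.2f "
                            "is implausibly dark\n", c, m.albedo[c]);
                return false;
            }
            // Even the darkest non-metals stay above roughly 0.02.
            if (!m.isMetal && m.albedo[c] < 0.02f) {
                std::printf("warning: albedo channel %d = %.2f is "
                            "darker than charcoal\n", c, m.albedo[c]);
                return false;
            }
        }
        return true;
    }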

Reality check

However, in practice the ideal world of PBR is difficult to achieve, since we need to rely on approximations of the physics of light due to performance and memory constraints. For example, if your GI solution isn’t accurate enough, you may have to give artists some artificial control to counter the deficiencies. Maybe the GI works only for sunlight and not for local lights, so artists need some manual control over the light bouncing of local lights in order to increase realism, e.g. by introducing a constant ambient lighting term. Maybe the shadows are not physically based on area lighting, so you need to expose artificial control over the shadow penumbra size. These realities of rendering engines will introduce some artificial controls exposed to artists despite the striving for PBR.
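
One way to keep such compromises honest is to isolate and label them explicitly in code. The sketch below, with hypothetical names, shows the constant ambient bounce term mentioned above documented as a deliberate non-PBR stopgap that can be removed once the GI solution improves:

    struct LocalLight {
        float color[3];
        // NON-PBR: manual stand-in for missing local-light GI.
        // Remove once the GI solution covers local lights.
        float ambientBounce;
    };

    void accumulateAmbient(const LocalLight& light, float out[3]) {
        for (int c = 0; c < 3; ++c) {
            // Constant term added regardless of surrounding geometry:
            // a conscious stopgap, not physics.
            out[c] += light.color[c] * light.ambientBounce;
        }
    }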

When taking the PBR route, it’s important to have the mindset of making everything physically based, and to make conscious decisions when introducing hacks contrary to the PBR approach. With the PBR goal in mind, the first line of defense shouldn’t be the introduction of ad hoc parameters as requested by artists to achieve a certain effect, but an analysis of what physical feature the PBR model is lacking such that artists can’t get the desired result. This analysis may reveal an issue with artist-created assets, or perhaps something missing or wrong in the physical model that requires fixing. Such analysis is an important step prior to making changes to the existing PBR model, to keep pushing the rendering engine in the right, physically based direction. Migration to PBR thus requires collaboration between engineering and art, and can’t be done properly without conscious effort from both disciplines.

The payoff

The result of developing a PBR engine that’s fed with properly authored PBR data is a consistent, realistic look across different areas of your game, and reusable assets that behave as expected in different lighting conditions. The production implications are less time spent on creating assets, thanks to reduced yet intuitive controls with clear guidelines, and better reusability of those assets. Production is also more tolerant to changes, since a change of environmental conditions doesn’t require an extensive amount of parameter tweaking and re-authoring of assets to reach an acceptable level of quality.

Where from here?

While many real-time rendering engines are promoted as PBR engines, these engines employ PBR to widely varying extents. Some may implement only some level of PBS (e.g. implement GGX) and call it a day, while others try to model the entire path of photons from light sources to camera sensor with physical models. Due to real-time constraints, however, all PBR engines make various compromises, and there are quite a lot of interesting challenges ahead in pushing the quality of real-time PBR further.

If you are interested in more details about PBR, I recommend checking out the Moving Frostbite to PBR SIGGRAPH 2014 slides by Charles de Rousiers and Sébastien Lagarde, as the slides and the associated course notes cover many important PBR components.
