GDC Retrospective and Additional Thoughts on Real-Time Raytracing

This post is part of the series “Finding Next-Gen“.

Just got back from GDC. Had a great time showcasing the hard work we’ve been up to at SEED. In case you missed it, we did two presentations on real-time raytracing:

DirectX Raytracing Announcement (Microsoft) and Shiny Pixels and Beyond: Real-Time Raytracing at SEED (NVIDIA)

In case you were at GDC and saw the presentation, you can skip directly here.

During the first session, Matt Sandy from Microsoft announced DirectX Raytracing (DXR). He went into great detail about the API changes and showed how DirectX 12 has evolved to support raytracing. We then followed with our own presentations, where we showcased Project PICA PICA, a real-time raytracing experiment featuring a mini-game for self-learning AI agents in a procedurally-assembled world. The team worked super hard on this demo, and the results really show it! 🙂

PICA PICA is powered by DXR.

DirectX Raytracing?

The addition of raytracing to DirectX 12 is exposed via simple concepts: acceleration structures (bottom & top), new shader types (ray-generation, closest-hit, any-hit, and miss), new HLSL types and intrinsics, commandlist-level DispatchRays(…) and a raytracing pipeline state. You can read more about it here.

Taken from our presentation, here’s a brief overview of how this works in PICA PICA:

Using bottom/top acceleration structures and shader table (from GDC slides)
Ray Generation Shadow – HLSL Pseudo Code – Does Not Compile (from GDC slides)
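Since the slide above is an image, here is a rough sketch in the same spirit: a minimal shadow ray generation and miss shader pair. All resource names (g_Scene, g_Depth, g_ShadowMask, the frame constants) are hypothetical placeholders and the unprojection is only approximate; this is meant to show the shape of the DXR HLSL additions, not our actual shader.

```hlsl
// Minimal sketch of a DXR hard-shadow pass (hypothetical resource names).
RaytracingAccelerationStructure g_Scene      : register(t0);
Texture2D<float>                g_Depth      : register(t1);
RWTexture2D<float>              g_ShadowMask : register(u0);

cbuffer FrameConstants : register(b0)
{
    float4x4 g_InvViewProj;  // to reconstruct world position from depth
    float3   g_SunDirection; // direction towards the light
};

struct ShadowPayload
{
    float visibility; // 1 = lit, 0 = shadowed
};

float3 ReconstructWorldPosition(uint2 pixel, float depth)
{
    // Unproject the pixel back to world space (conventions approximate).
    float2 uv  = (pixel + 0.5) / float2(DispatchRaysDimensions().xy);
    float4 ndc = float4(float2(uv.x, 1.0 - uv.y) * 2.0 - 1.0, depth, 1.0);
    float4 world = mul(g_InvViewProj, ndc);
    return world.xyz / world.w;
}

[shader("raygeneration")]
void ShadowRayGen()
{
    uint2 pixel = DispatchRaysIndex().xy;

    RayDesc ray;
    ray.Origin    = ReconstructWorldPosition(pixel, g_Depth[pixel]);
    ray.Direction = g_SunDirection;
    ray.TMin      = 0.01;    // small offset to avoid self-intersection
    ray.TMax      = 10000.0;

    // Assume shadowed; the miss shader flips it back to visible.
    ShadowPayload payload = { 0.0 };

    TraceRay(g_Scene,
             RAY_FLAG_ACCEPT_FIRST_HIT_AND_END_SEARCH |
             RAY_FLAG_SKIP_CLOSEST_HIT_SHADER,
             0xFF, 0, 1, 0, ray, payload);

    g_ShadowMask[pixel] = payload.visibility;
}

[shader("miss")]
void ShadowMiss(inout ShadowPayload payload)
{
    payload.visibility = 1.0; // nothing was hit: the light is visible
}
```

The two ray flags express that only binary visibility matters here: accept the first hit found and skip the closest-hit shader entirely, since we never need to shade the occluder.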

While you don’t necessarily need DXR to do real-time raytracing on current GPUs (see Sebastian Aaltonen’s Claybook rendering presentation), it’s a flexible new tool in the toolbox. As the code above shows, you benefit from the fact that it’s unified with the rest of DirectX 12. DXR relies on well-known HLSL functionality and types, allowing you to share code between rasterization, compute and raytracing. More than just classic raytracing, DXR also allows you to solve more sparse and incoherent problems that you can’t easily solve with rasterization and compute. It’s also a centralized implementation for hardware vendors to optimize, and it now becomes a common language for every developer who wants to do raytracing in DirectX 12. It’s not perfect, but it’s a good start and it works well.

Presentation Retrospective

During the presentation we talked about our hybrid rendering pipeline where rasterization, compute and raytracing work together:

PICA PICA’s Hybrid Rendering Pipeline (from GDC slides)

Our hybrid approach allows us to develop and apply several interesting techniques and algorithms that rely on rasterization, compute or raytracing, while balancing quality and performance. This shows the flexibility of the API, where one is free to choose a specific pipeline to solve a specific problem. Again, since raytracing is just another tool in the toolbox, it can be used where it makes sense and doesn’t prevent you from using the other available pipelines.

First we talked about how we raytrace reflections from the G-Buffer at half resolution, reconstruct them at full resolution, and handle varying levels of roughness. We also presented our multi-layer material system, shared between rasterization, compute and raytracing.

Raytraced Reflections (left) and Multi-Layer Materials (right) (from GDC slides)
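As a rough illustration of the half-resolution idea (not our actual implementation), a reflection ray generation shader could read the full-resolution G-Buffer, trace one ray per half-resolution pixel, and leave reconstruction to a later pass. Every resource name, the payload and the unprojection below are hypothetical, and the hit shaders that shade the hit point are omitted.

```hlsl
// Sketch: one reflection ray per half-resolution pixel, fed by the G-Buffer.
RaytracingAccelerationStructure g_Scene           : register(t0);
Texture2D<float>                g_Depth           : register(t1);
Texture2D<float4>               g_NormalRoughness : register(t2);
RWTexture2D<float4>             g_HalfResRefl     : register(u0);

cbuffer ViewConstants : register(b0)
{
    float4x4 g_InvViewProj;
    float3   g_CameraPosition;
};

struct ReflectionPayload
{
    float3 radiance;      // filled by a closest-hit shader (elided)
    float  hitDistance;
};

[shader("raygeneration")]
void ReflectionRayGen()
{
    uint2 halfResPixel = DispatchRaysIndex().xy;
    uint2 fullResPixel = halfResPixel * 2;   // sample the full-res G-Buffer

    float4 nr        = g_NormalRoughness[fullResPixel];
    float3 normal    = normalize(nr.xyz * 2.0 - 1.0);
    float  roughness = nr.w; // would drive lobe sampling and reconstruction

    // Reconstruct world position from depth (conventions approximate).
    float2 uv  = (fullResPixel + 0.5) / float2(DispatchRaysDimensions().xy * 2);
    float4 ndc = float4(float2(uv.x, 1.0 - uv.y) * 2.0 - 1.0, g_Depth[fullResPixel], 1.0);
    float4 wp  = mul(g_InvViewProj, ndc);
    float3 worldPos = wp.xyz / wp.w;
    float3 viewDir  = normalize(worldPos - g_CameraPosition);

    // Mirror direction shown for brevity; for rough surfaces one would
    // importance-sample the GGX lobe so the blur matches the BRDF.
    float3 reflDir = reflect(viewDir, normal);

    RayDesc ray = { worldPos + normal * 0.01, 0.0, reflDir, 1000.0 };
    ReflectionPayload payload = (ReflectionPayload)0;
    TraceRay(g_Scene, RAY_FLAG_NONE, 0xFF, 0, 1, 0, ray, payload);

    // Results stay at half resolution; a separate spatial/temporal pass
    // reconstructs the full-resolution reflection buffer.
    g_HalfResRefl[halfResPixel] = float4(payload.radiance, payload.hitDistance);
}
```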

We then followed by describing a novel texture-space approach for order-independent transparency, translucency and subsurface scattering:

Glass and Translucency (from GDC slides)

We then presented a sparse surfel-based approach where we use raytracing to pathtrace irradiance from surfels spawned from the camera.

Surfel-based Global Illumination (from GDC slides)
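As a toy sketch of the idea (again, not our actual implementation): each frame, every surfel traces a cosine-distributed ray and folds the returned radiance into a running estimate. The surfel layout, payload, random/sampling helpers and the hit shading are all hypothetical or elided.

```hlsl
// Toy sketch: progressive irradiance accumulation at sparse surfels,
// launched as a 1D DispatchRays with one thread per surfel.
struct Surfel
{
    float3 position;
    float3 normal;
    float3 irradiance;    // running estimate (normalization constants omitted)
    uint   sampleCount;
};

RaytracingAccelerationStructure g_Scene   : register(t0);
RWStructuredBuffer<Surfel>      g_Surfels : register(u0);

cbuffer GIConstants : register(b0)
{
    uint g_FrameIndex;
};

struct GIPayload
{
    float3 radiance;  // filled by a closest-hit shader that shades the hit (elided)
};

// Hypothetical helpers: a cheap hash and a cosine-weighted direction.
float2 Random2D(uint index, uint frame)
{
    return frac(sin(float2(index, frame) * 12.9898) * 43758.5453);
}

float3 CosineSampleHemisphere(float3 n, float2 u)
{
    float  a = 6.2831853 * u.x, r = sqrt(u.y);
    float3 t = normalize(abs(n.y) < 0.99 ? cross(n, float3(0, 1, 0)) : float3(1, 0, 0));
    float3 b = cross(n, t);
    return normalize(t * (r * cos(a)) + b * (r * sin(a)) + n * sqrt(1.0 - u.y));
}

[shader("raygeneration")]
void SurfelGIRayGen()
{
    uint surfelIndex = DispatchRaysIndex().x;
    Surfel s = g_Surfels[surfelIndex];

    float3 dir = CosineSampleHemisphere(s.normal, Random2D(surfelIndex, g_FrameIndex));

    RayDesc ray = { s.position + s.normal * 0.01, 0.0, dir, 1000.0 };
    GIPayload payload = (GIPayload)0;
    TraceRay(g_Scene, RAY_FLAG_NONE, 0xFF, 0, 1, 0, ray, payload);

    // Fold the new sample into the running estimate (incremental mean,
    // clamped so the surfel stays responsive to lighting changes).
    s.sampleCount = min(s.sampleCount + 1, 256);
    s.irradiance  = lerp(s.irradiance, payload.radiance, 1.0 / s.sampleCount);
    g_Surfels[surfelIndex] = s;
}
```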

We also covered ambient occlusion (AO), and how raytraced AO compares to screen-space AO.
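For reference, a one-sample-per-pixel ray-traced AO pass can be as small as the sketch below: short cosine-distributed rays, with misses counting as unoccluded. Resource names are hypothetical, the helpers are the same hypothetical ones as in the sketches above, and the noisy result would still need spatial/temporal filtering.

```hlsl
// Sketch: 1-sample-per-pixel ray-traced AO. Reuses the hypothetical helpers
// (ReconstructWorldPosition, CosineSampleHemisphere, Random2D) from above.
RaytracingAccelerationStructure g_Scene  : register(t0);
Texture2D<float>                g_Depth  : register(t1);
Texture2D<float4>               g_Normal : register(t2);
RWTexture2D<float>              g_AO     : register(u0);

cbuffer AOConstants : register(b0)
{
    float g_AORadius;   // short rays: only local occlusion is gathered
    uint  g_FrameIndex;
};

struct AOPayload { float occlusion; };

[shader("raygeneration")]
void AORayGen()
{
    uint2  pixel  = DispatchRaysIndex().xy;
    float3 normal = normalize(g_Normal[pixel].xyz * 2.0 - 1.0);
    float3 origin = ReconstructWorldPosition(pixel, g_Depth[pixel]) + normal * 0.01;
    float3 dir    = CosineSampleHemisphere(normal, Random2D(pixel.x + pixel.y * 4096, g_FrameIndex));

    // Unlike screen-space AO, the ray sees actual scene geometry, including
    // occluders that are off-screen or hidden behind the first depth layer.
    RayDesc ray = { origin, 0.0, dir, g_AORadius };

    AOPayload payload = { 1.0 }; // assume occluded; the miss shader clears it
    TraceRay(g_Scene,
             RAY_FLAG_ACCEPT_FIRST_HIT_AND_END_SEARCH | RAY_FLAG_SKIP_CLOSEST_HIT_SHADER,
             0xFF, 0, 1, 0, ray, payload);

    g_AO[pixel] = 1.0 - payload.occlusion;
}

[shader("miss")]
void AOMiss(inout AOPayload payload)
{
    payload.occlusion = 0.0;
}
```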


Inspired by Schied/NVIDIA’s Spatiotemporal Variance-Guided Filtering (SVGF), we also presented a super-optimized denoising filter specialized for soft shadows with varying penumbras.

Raytraced Soft Shadows and Denoising (from GDC slides)

Finally we talked about how we handle multiple GPUs (mGPU) and split the frame, relying on the first GPU to act as an arbiter that dispatches work to secondary GPUs in parallel fork-join style.

mGPU in PICA PICA (from GDC slides)

All in all, it was a lot of content for the time slot we had. In case you want more info, check out the presentation:

You can also download the slides (PowerPoint and PDF), or watch the recording of the presentation here (starts around 21:30).

Here are a few additional links that talk about DirectX Raytracing and Project PICA PICA:

Additional Thoughts

As mentioned at GDC, we’ve had the chance to be involved early with DXR, to experiment and provide feedback as the API evolved. Super glad to have been part of this initiative. We still have a lot to explore, and the future is exciting! Some additional thoughts:

Noise vs Ghosting vs Performance

DXR opens the door to an entirely new class of techniques that have never been achieved in games. With real-time raytracing, it feels like the upcoming years will be about managing complex tradeoffs: noise vs ghosting vs performance. While you can add more samples to reduce noise (and improve convergence) during stochastic sampling, it decreases performance. Alternatively, you can reuse samples from previous frames (via temporal filtering), but that can add ghosting. Achieving the right balance here will be important. As DXR gets adopted in games, this topic will generate a lot of good presentations at conferences.
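As a tiny sketch of that tradeoff, the classic exponential moving average below trades noise for ghosting through a single blend factor; reprojection of the history buffer and its validation (where most of the real work goes) are elided, and all names are hypothetical.

```hlsl
// Sketch: exponential moving average between the noisy current frame and the
// reprojected history. A larger g_HistoryWeight means less noise but more ghosting.
Texture2D<float4>   g_Current : register(t0);
Texture2D<float4>   g_History : register(t1); // assumed already reprojected
RWTexture2D<float4> g_Output  : register(u0);

cbuffer TemporalConstants : register(b0)
{
    float g_HistoryWeight;   // e.g. somewhere around 0.9 to 0.97
};

[numthreads(8, 8, 1)]
void TemporalAccumulate(uint3 id : SV_DispatchThreadID)
{
    float4 current = g_Current[id.xy];
    float4 history = g_History[id.xy];

    // In a real implementation the history would be validated/clamped
    // (e.g. neighborhood clamping) to limit ghosting on disocclusions.
    g_Output[id.xy] = lerp(current, history, g_HistoryWeight);
}
```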

Comparing Against Ground Truth

We also mentioned that we built our own pathtracer inside our framework. This pathtracer acts as a reference implementation, which we can toggle at any point when working on a feature for our hybrid renderer. This allows us to rapidly compare results and see how a feature looks against ground truth. Since a lot of code is shared between the reference and the various hybrid techniques, no significant additional maintenance is required. At the end of the day, having a reference implementation will help you make the best decisions when balancing quality and performance for your (hybrid) techniques.

If raytracing is new to you and building a reference ray/pathtracer is of interest, many books and online resources are available. Peter Shirley’s Ray Tracing in One Weekend is quite popular. You should check it out! 🙂

Specialized Denoising and Reconstruction

Also mentioned during the presentation, we built a denoising filter specialized for soft penumbra shadows. While one can use general denoising algorithms like SVGF on the whole image, building a denoising filter around a specific term will undeniably achieve greater quality and performance, since you can really customize the filter around the constraints of that term. In the near future, one can expect that significant time and energy will be spent on specialized denoisers and on custom reconstruction of stochastically sampled terms.
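As a toy example of such specialization (not our actual filter), denoising a single scalar shadow term keeps the filter small: one channel, depth- and normal-aware weights, and a kernel radius that could be driven by an estimate of the penumbra size. Everything below is a simplified, hypothetical sketch; temporal accumulation, variable radii and border handling are omitted.

```hlsl
// Toy sketch of a depth/normal-aware blur over a single scalar shadow term.
Texture2D<float>    g_NoisyShadow : register(t0);
Texture2D<float>    g_Depth       : register(t1);
Texture2D<float4>   g_Normal      : register(t2);
RWTexture2D<float>  g_Denoised    : register(u0);

[numthreads(8, 8, 1)]
void DenoiseShadow(uint3 id : SV_DispatchThreadID)
{
    float  centerDepth  = g_Depth[id.xy];
    float3 centerNormal = normalize(g_Normal[id.xy].xyz * 2.0 - 1.0);

    float sum = 0.0;
    float weightSum = 0.0;

    // Small fixed kernel for clarity; in practice the radius would vary
    // with the estimated penumbra size.
    for (int y = -2; y <= 2; ++y)
    for (int x = -2; x <= 2; ++x)
    {
        int2   p = int2(id.xy) + int2(x, y);
        float  d = g_Depth[p];
        float3 n = normalize(g_Normal[p].xyz * 2.0 - 1.0);

        // Reject samples across depth discontinuities or strong normal changes.
        float wDepth  = exp(-abs(d - centerDepth) * 100.0);
        float wNormal = pow(saturate(dot(n, centerNormal)), 8.0);
        float w       = wDepth * wNormal;

        sum       += g_NoisyShadow[p] * w;
        weightSum += w;
    }

    g_Denoised[id.xy] = sum / max(weightSum, 1e-4);
}
```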

DXR Interop

As mentioned earlier, we share a lot of code between raytracing, rasterization and compute. If one wants to bake lightmaps inside their engine (see Sébastien Hillaire‘s talk on Real-Time Raytracing For Interactive Global Illumination Workflows in Frostbite), DXR is very appealing because you can evaluate your actual HLSL material shaders. There is no need for (limited) parameter conversion, which is often necessary when using an external lightmap baking tool.

This is awesome!

Wrapping-up

Even though the API is there and available to everyone, this is just the beginning. It’s an important tool going forward that will enable new techniques in games, and could end up pushing the industry to new heights. I’m looking forward to the new techniques that evolve from everyone having access to DXR, and to seeing what kinds of rendering problems get solved. I also find it quite appealing that the research community can now tackle problems closer to the realm of real-time raytracing, implementing their solutions with a raytracing API that everyone can use.

Because it’s unified, it should also be easy for you to pick up the API, experiment, and integrate it in your own engine. Again, one doesn’t need this API to do real-time raytracing, but it provides a really nice package and a common language that all DirectX 12 developers can talk around. It’s also a clear point for hardware makers to focus their optimization efforts on. Compute hasn’t really changed in a while, so hopefully these improvements will drive improvements in compute and in the other pipelines as well. That being said, the API is obviously not perfect, and is still at the proposal stage. Microsoft is open to additional feedback and discussion. Try it out and send your feedback!

Can’t wait to see what you will do with DXR! 🙂

SIGGRAPH 2017 – Past, Present and Future Challenges of Global Illumination in Games

This post is part of the series “Finding Next-Gen“.

Just got back from Los Angeles, where I presented in the Open Problems in Real-Time Rendering Course at this year’s SIGGRAPH:

Global illumination (GI) has been an ongoing quest in games. The perpetual tug-of-war between visual quality and performance often forces developers to take the latest and greatest from academia and tailor it to push the boundaries of what has been realized in a game product. Many elements need to align for success, including image quality, performance, scalability, interactivity, ease of use, as well as game-specific and production challenges.

First we will paint a picture of the current state of global illumination in games, addressing how the state of the union compares to the latest and greatest research. We will then explore various GI challenges that game teams face from the art, engineering, pipelines and production perspective. The games industry lacks an ideal solution, so the goal here is to raise awareness by being transparent about the real problems in the field. Finally, we will talk about the future. This will be a call to arms, with the objective of uniting game developers and researchers on the same quest to evolve global illumination in games from being mostly static, or sometimes perceptually real-time, to fully real-time.

You can also download my slides with notes here.

Super grateful to have been part of this initiative. Lots of great content was presented. Thanks to everyone who came to the course!

Channeling Your Inner Light

An attempt at more blogging, but this happened in the meantime, which is why you might find some of the tweets below to be from a few months ago. 😉

A topic of discussion that comes up every now and then between programmers, technical artists and lighting artists is the concept of light masking, or Lighting Channels, and whether this concept is still valid. I’ve had this discussion many times before with developers out there (and somehow I’m sure you have too). Among artists and programmers alike, opinions diverge. To get a fresh sample on the matter, I decided to ask the Twitter-verse:

Light Channels – Yay or Nay (Twitter Poll)


Yup, a division! Before we go over the discussion and the many answers people provided, which I will mix/interleave throughout this post to give perspective, let’s first cover some ground and make sure we all talk about the same thing.

Light(ing) Channels?

In layman’s terms, Lighting Channels (LC) is the functionality of masking lights on a per-object basis, or on a subset of objects that meet the masking criteria. Environment-only, character-only, and cinematic-only lights are a few examples that come to mind.

Lighting Channels in UDK – Point light affecting Dynamic objects tagged as Cinematic 1 [1]

This inclusion/exclusion concept allows lighting artists to have more fine-grained, manual control over light interactions. The image above shows a light affecting dynamic objects such as characters, but not the static environment. This case is especially common for cut-scenes, where lighting artists can clearly identify the key, fill and rim lights for each character, for each shot, to ensure that the art-directed lighting is manicured and behaves as expected.


3-point light setup example from a TV show

Lighting channels are not limited to characters and cut-scenes. Other classic examples come to mind, such as additional lights to manually enhance/fix up global illumination (i.e., faking/adding custom diffuse inter-reflection, or “bounce”), or additional lights for animated/hero objects.
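To make the concept concrete, here is a minimal sketch of what the inclusion/exclusion test can look like in a forward lighting loop; the bitmask layout, structure fields and the simplified point-light shading are purely hypothetical.

```hlsl
// Sketch: per-object light channel masking in a forward lighting loop.
// Channel bits (hypothetical): 0 = environment, 1 = characters, 2 = cinematic.
struct Light
{
    float3 position;
    float3 color;
    uint   channelMask;   // which channels this light affects
};

StructuredBuffer<Light> g_Lights : register(t0);

cbuffer ObjectConstants : register(b0)
{
    uint g_ObjectChannelMask; // which channels this object belongs to
    uint g_LightCount;
};

float3 ShadeObject(float3 worldPos, float3 normal, float3 albedo)
{
    float3 result = 0.0;
    for (uint i = 0; i < g_LightCount; ++i)
    {
        Light light = g_Lights[i];

        // The light only contributes if it shares a channel with the object.
        if ((light.channelMask & g_ObjectChannelMask) == 0)
            continue;

        float3 toLight = light.position - worldPos;
        float  att     = 1.0 / max(dot(toLight, toLight), 1e-4);
        result += albedo * light.color * saturate(dot(normal, normalize(toLight))) * att;
    }
    return result;
}
```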

Light Channels vs Light Merging

A bit of a side topic, though often intertwined with light channels, is light merging. In the context of forward lighting as it was done back then, prior to tiled or clustered approaches [2] [3], iterating over all potentially affecting lights on a per-object basis would greatly affect performance, especially on older hardware. To mitigate this, dynamically lit objects were often lit by a subset of the lights present in the scene: a select number of closest lights, or dynamically merged/coalesced lights [1] based on brightness/luminous flux or distance, or even spherical harmonics [4] [5] used to merge and extract the most relevant n point lights (often 3). Lights could also be merged by taking their affecting channels into account.

What’s the connection with light channels, you say? Beyond lights being merged based on their channels, it turns out that while merging lights can provide a “good approximation” of light interactions with dynamic objects in certain scenarios, without having to compute all interactions, you still end up with a discrepancy between lights that were manually placed by artists and lights that were merged for dynamic objects. To compensate, lighting artists often requested additional “forced/non-merged” lights. Unfortunately this frequently led to too many of these lights, and we were back at square one with regards to the performance benefits of lighting with merged lights. Basically, a performance-affecting visual workaround to fix a performance workaround. It’s getting complicated…

A Workaround For Something Broken?

At this point you probably feel like something’s odd, not working, or simply that light channels are a hack for something broken in the way we light scenes. And you’d be right to think so. To put things in perspective: though we programmers work really hard to constantly improve their representation and behavior, real-time lights in video games don’t generally behave the way they’re intuitively expected to. Of the many discrepancies, the following stand out:

  1. Shadows are commonly missing from many lights
    1. Only a few select (key) lights get shadows, not all of them.
    2. Shadowless lights shine through walls, and can hit unintended targets.
  2. Lights don’t (all) trigger inter-reflections / indirect illumination
    1. The lack of proper GI on all dynamic lights means only direct illumination.
    2. If your engine has real-time GI, it is most likely limited to a few lights.

The lack of good GI / inter-reflection / indirect illumination makes artists want to manually add fixup secondary/fill lights to artificially simulate such effects, for both environments and characters, sometimes handled separately and sometimes simultaneously. In practice this can work, but it can easily become a mess of lights unless you are very strict and handle these in separate layers. And even if you are organized, since most of these lights will not cast shadows, things often end up getting hit by artificial lights that shouldn’t affect them.


One common example is the Fridge Mouth Effect: the glowing of characters’ mouths from fill lights that are intended to light characters’ faces, or to enhance the lighting of the environment, but aren’t shadowed and end up lighting the inside of characters’ mouths. The same artifact also shows up on ears rendered with translucency and no self-shadowing.

Fridge Mouth Effect – Non-shadow casting fill light coming from the right, during a randomly positioned cutscene in Fallout 4

Artists then want to isolate where these fixups happen. They want to work around the shadowing and GI limitations by controlling where the light ends up. This is where light channels come in, but also other exotic modifications to physically-plausible light attenuation, such as custom falloff curves. The latter is up for another discussion. 😉

Missing Shadows Feels Like a Big Deal. Is this it?

If we had shadows on everything, it feels like most of these issues would be non-issues. At the end of the day, it’s also about fighting priorities: total artist control vs practical AAA production realities:


Using flags to enforce rules that compensate for the lack of proper light/shadow behavior would make some sense if it weren’t for the fact that 1) it doesn’t work well with deferred, 2) it creates lighting discrepancies, 3) it significantly increases scene management complexity for both art and code, and 4) it breaks global illumination. In this day and age, with the sheer number of available dynamic lights, heavy usage of real-time and static shadow caching atlases, and more game teams working on solving real-time GI properly, it feels like asking for light channels is a matter of convenience, an approach that used to work on previous-generation titles and a consequence of getting used to the previous era’s lighting systems.


But I Really Want/Need To Make This Work With Deferred…

Simple G-Buffer, with Lighting Channel (LC)

In the case of deferred, dedicating a full channel to storing a bitmask is probably not what you want to do. If you really want to make this work, you can instead store a subset of essential lighting channels in a few bits “borrowed” from other channels. Nonetheless, you will have to figure out how this interacts with your forward path, your particles, your more complex multi-layer environments/scenes, and whether it’s worth the hassle.
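As a small sketch of that “borrowed bits” idea (with an entirely hypothetical layout): pack a 2-bit channel mask into the spare bits of an existing 8-bit G-Buffer channel during the geometry pass, then unpack and test it against the light’s mask in the deferred lighting pass.

```hlsl
// Sketch: pack a 2-bit lighting channel mask into spare bits of an 8-bit
// G-Buffer channel, and test it in the deferred lighting pass.
// The layout (6 bits of data + 2 bits of mask) is hypothetical.

// G-Buffer write (geometry pass): quantize the value to 6 bits and store the
// channel mask in the top 2 bits.
float PackChannelBits(float value6bit, uint channelMask2bit)
{
    uint packed = (uint(saturate(value6bit) * 63.0) & 0x3F) | ((channelMask2bit & 0x3) << 6);
    return packed / 255.0;
}

// Deferred lighting pass: recover the 2-bit mask from the packed value.
uint UnpackChannelBits(float packed)
{
    return (uint(packed * 255.0 + 0.5) >> 6) & 0x3;
}

// A light contributes only if it shares a channel with the shaded pixel.
bool LightAffectsPixel(uint lightChannelMask, float packedGBufferValue)
{
    return (lightChannelMask & UnpackChannelBits(packedGBufferValue)) != 0;
}
```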

So, What’s The Conclusion?

Taking a step back and looking at how we use our tools, figuring out what works and what doesn’t with technology, and rethinking our workflows is part of being game developers, and applies to all professions in the field. Never being satisfied and always looking to improve how we do things is necessary to our success, individually but also as an industry. Such an industry-wide challenge happened not too long ago with the first generation of games that showcased PBR. I recall discussions with friends who pioneered this at various studios; it wasn’t easy to get everyone on board, until the industry saw the true value in the long-term investment and embraced the change. Now, it’s hard to go back. 😉

In the case of lighting approaches and tools, while we still have some major challenges to tackle with shadows and global illumination, maybe it’s also time to take a leap of faith and think about the long-term value of moving away from some old concepts such as light channels. That being said, I invite you to have this discussion with the various rendering programmers, lighting artists and technical artists at your studio: get perspective on your game’s needs, figure out what needs to happen to get everyone on board with a solution that works for everybody, keep the conversations going, and blog about what worked for your project.

It’s not necessarily about the conclusion, but rather about the discussion. Looking forward to hearing about it, and how your investment in shadows and unified lighting & GI solutions has paid off in the long run. 😉


Addendum – Another Perspective From The Movie Industry


Addendum – New York Times

I was asked by the New York Times if I could do a shorter version of this article, for a tech column:

In case you haven’t seen the original article. A few might find this amusing 😉

Thanks

Thanks to everyone who provided feedback by responding to the Twitter poll (Bart Wronski, Steve Anichini, Sébastien Lagarde, Paul Greveson, Don Williamson, Stephen Hill, and Jordan Walker), and especially Jon Greenberg and Nicolas Lopez for the additional feedback and conversations. Was nice to have both artists and programmers express their views. Let’s keep the conversations going, super important for our industry!

References

[1] Unreal Developer Kit (UDK), “Light Environments”, Online.

[2] Harada, Takahiro. “Forward+: Bringing Deferred Lighting To The Next Level”, Eurographics 2012. Online.

[3] Olsson, Ola. “Clustered Deferred and Forward Shading”, HPG 2012. Online.

[4] Greenberg, Jon. “Hitting 60Hz in Unreal Engine”, GDC 2009. Online.

[5] Greenberg, Jon. “Dynamic Lighting in Mortal Kombat vs DC Universe”, 2012. Online.