GDC Retrospective and Additional Thoughts on Real-Time Raytracing

This post is part of the series “Finding Next-Gen”.

Just got back from GDC. Had a great time showcasing the hard work we’ve been up to at SEED. In case you missed it, we did two presentations on real-time raytracing:

DirectX Raytracing Announcement (Microsoft) and Shiny Pixels and Beyond: Real-Time Raytracing at SEED (NVIDIA)

In case you were at GDC and saw the presentation, you can skip directly to the Additional Thoughts section here.

During the first session, Matt Sandy from Microsoft announced DirectX Raytracing (DXR). He went into great detail about the API changes, and showed how DirectX 12 has evolved to support raytracing. We then followed with our own presentations, where we showcased Project PICA PICA, a real-time raytracing experiment featuring a mini-game for self-learning AI agents in a procedurally-assembled world. The team worked super hard on this demo, and the results really show it! 🙂

PICA PICA is powered by DXR.

DirectX Raytracing?

The addition of raytracing to DirectX 12 is exposed via a few simple concepts: acceleration structures (bottom and top level), new shader types (ray generation, closest hit, any hit, and miss), new HLSL types and intrinsics, a command-list-level DispatchRays(…), and a raytracing pipeline state. You can read more about it here.

Taken from our presentation, here’s a brief overview of how this works in PICA PICA:

Using bottom/top acceleration structures and a shader table (from GDC slides)
Shadow Ray Generation – HLSL Pseudocode – Does Not Compile (from GDC slides)
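Since the pseudocode on that slide intentionally doesn’t compile, here’s a minimal shadow ray generation sketch that does. It’s my own illustration rather than PICA PICA’s actual code: the resource bindings, payload layout and constants are all hypothetical, but the shader types and intrinsics (TraceRay, DispatchRaysIndex, RayDesc) are standard DXR HLSL:

```hlsl
// Hypothetical resource bindings -- illustration only, not PICA PICA's code.
RaytracingAccelerationStructure g_sceneTLAS       : register(t0);
Texture2D<float4>               g_gbufferWorldPos : register(t1);
RWTexture2D<float>              g_shadowMask      : register(u0);

cbuffer Lighting : register(b0)
{
    float3 g_lightDirection; // normalized, pointing from surface towards the light
};

struct ShadowPayload
{
    float visibility; // 0 = occluded, 1 = lit
};

[shader("raygeneration")]
void ShadowRayGen()
{
    uint2 pixel = DispatchRaysIndex().xy;
    float3 worldPos = g_gbufferWorldPos[pixel].xyz;

    RayDesc ray;
    ray.Origin    = worldPos;
    ray.Direction = g_lightDirection;
    ray.TMin      = 0.01;     // small offset to avoid self-intersection
    ray.TMax      = 10000.0;

    // Assume occluded; only the miss shader can prove otherwise, so we can
    // skip closest-hit shading entirely and stop at the first hit found.
    ShadowPayload payload = { 0.0 };
    TraceRay(g_sceneTLAS,
             RAY_FLAG_ACCEPT_FIRST_HIT_AND_END_SEARCH | RAY_FLAG_SKIP_CLOSEST_HIT_SHADER,
             0xFF,        // instance inclusion mask
             0, 1, 0,     // hit group offset/stride, miss shader index
             ray, payload);

    g_shadowMask[pixel] = payload.visibility;
}

[shader("miss")]
void ShadowMiss(inout ShadowPayload payload)
{
    payload.visibility = 1.0; // nothing was hit: the light is visible
}
```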

While you don’t necessarily need DXR to do real-time raytracing on current GPUs (see Sebastian Aaltonen’s Claybook rendering presentation), it’s a flexible new tool in the toolbox. As the code above shows, it benefits from being unified with the rest of DirectX 12: DXR relies on well-known HLSL functionality and types, allowing you to share code between rasterization, compute and raytracing. Beyond classic raytracing, DXR also lets you tackle sparse and incoherent problems that you can’t easily solve with rasterization or compute. It’s also a centralized implementation for hardware vendors to optimize, and it becomes a common language for every developer who wants to do raytracing in DirectX 12. It’s not perfect, but it’s a good start and it works well.
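As a tiny, hedged illustration of that sharing (all names and values here are made up), the same HLSL helper can be compiled into both a pixel shader and a DXR closest-hit shader without modification:

```hlsl
// A shared helper -- in practice this would live in a common .hlsli file
// included by both rasterization and raytracing shaders.
float3 EvaluateDiffuse(float3 albedo, float3 N, float3 L)
{
    return albedo * saturate(dot(N, L)) / 3.14159265;
}

// Used from rasterization...
float4 MainPS(float3 normal : NORMAL, float3 albedo : COLOR0) : SV_Target
{
    const float3 L = normalize(float3(0.5, 1.0, 0.25)); // hypothetical light
    return float4(EvaluateDiffuse(albedo, normalize(normal), L), 1.0);
}

// ...and from raytracing, with exactly the same code and types.
struct HitPayload { float3 color; };

[shader("closesthit")]
void MainCHS(inout HitPayload payload, BuiltInTriangleIntersectionAttributes attr)
{
    const float3 L = normalize(float3(0.5, 1.0, 0.25));
    const float3 albedo = float3(0.8, 0.8, 0.8); // fetched from material data in practice
    const float3 normal = float3(0.0, 1.0, 0.0); // interpolated from geometry in practice
    payload.color = EvaluateDiffuse(albedo, normal, L);
}
```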

Presentation Retrospective

During the presentation we talked about our hybrid rendering pipeline where rasterization, compute and raytracing work together:

PICA PICA’s Hybrid Rendering Pipeline (from GDC slides)

Our hybrid approach allows us to develop and apply several interesting techniques and algorithms that rely on rasterization, compute or raytracing, while balancing quality and performance. This shows the flexibility of the API, where one is free to choose a specific pipeline to solve a specific problem. Again, since raytracing is just another tool in the toolbox, it can be used where it makes sense, and it doesn’t prevent you from using the other available pipelines.

First, we talked about how we raytrace reflections from the G-buffer at half resolution and reconstruct them at full resolution, and how this allows us to handle varying levels of roughness. We also presented our multi-layer material system, shared between rasterization, compute and raytracing.

Raytraced Reflections (left) and Multi-Layer Materials (right) (from GDC slides)
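For a flavor of how the raytracing half of this can look, here’s a hedged sketch of a half-resolution reflection ray generation shader driven by GGX importance sampling. Everything here (resource names, the integer hash, one sample per pixel, no reconstruction pass) is a simplified assumption of mine, not our production shader:

```hlsl
RaytracingAccelerationStructure g_sceneTLAS        : register(t0);
Texture2D<float4>   g_normalRoughness    : register(t1); // xyz = normal, w = roughness
Texture2D<float4>   g_worldPosition      : register(t2);
RWTexture2D<float4> g_halfResReflections : register(u0);

cbuffer View : register(b0) { float3 g_cameraPos; };

struct ReflectionPayload { float3 color; };

// Tiny integer hash for illustration; use a proper low-discrepancy sequence in practice.
float2 Hash2(uint2 p)
{
    uint h = p.x * 1664525u + p.y * 1013904223u;
    h = (h ^ (h >> 16)) * 0x45d9f3bu;
    return float2(h & 0xFFFF, h >> 16) / 65535.0;
}

float3x3 MakeTangentBasis(float3 N)
{
    float3 up = abs(N.z) < 0.999 ? float3(0, 0, 1) : float3(1, 0, 0);
    float3 T  = normalize(cross(up, N));
    return float3x3(T, cross(N, T), N);
}

[shader("raygeneration")]
void ReflectionRayGen()
{
    uint2 halfResPixel = DispatchRaysIndex().xy;
    uint2 fullResPixel = halfResPixel * 2; // one ray covers a 2x2 quad

    float4 nr = g_normalRoughness[fullResPixel];
    float3 N  = normalize(nr.xyz);
    float  roughness = nr.w;
    float3 P  = g_worldPosition[fullResPixel].xyz;
    float3 V  = normalize(g_cameraPos - P);

    // Importance-sample a GGX half-vector around N, then reflect the view ray.
    // Rougher surfaces produce wider lobes, hence blurrier reflections.
    float2 u = Hash2(halfResPixel);
    float a = roughness * roughness;
    float phi = 2.0 * 3.14159265 * u.x;
    float cosTheta = sqrt((1.0 - u.y) / (1.0 + (a * a - 1.0) * u.y));
    float sinTheta = sqrt(saturate(1.0 - cosTheta * cosTheta));
    float3 localH = float3(sinTheta * cos(phi), sinTheta * sin(phi), cosTheta);
    float3 H = mul(localH, MakeTangentBasis(N));
    float3 R = reflect(-V, H);

    RayDesc ray = { P, 0.01, R, 10000.0 }; // Origin, TMin, Direction, TMax

    ReflectionPayload payload = { float3(0, 0, 0) };
    TraceRay(g_sceneTLAS, RAY_FLAG_NONE, 0xFF, 0, 1, 0, ray, payload);

    g_halfResReflections[halfResPixel] = float4(payload.color, 1.0);
}
```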

Next, we described a novel texture-space approach to order-independent transparency, translucency and subsurface scattering:

Glass and Translucency (from GDC slides)

We then presented a sparse surfel-based approach to global illumination, where we use raytracing to pathtrace irradiance from surfels spawned from the camera’s point of view.

Surfel-based Global Illumination (from GDC slides)

We also covered ambient occlusion (AO), and how raytraced AO compares to screen-space AO.

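To show how little code raytraced AO needs with DXR, here’s a hedged sketch, with hypothetical bindings and parameters (and the same Hash2/MakeTangentBasis helpers as the reflection sketch above): a few short, cosine-distributed rays per pixel, averaged into a visibility term. Unlike screen-space AO, off-screen and back-facing occluders are handled naturally:

```hlsl
// Hash2 and MakeTangentBasis as in the reflection sketch above.
RaytracingAccelerationStructure g_sceneTLAS : register(t0);
Texture2D<float4>  g_worldNormal   : register(t1);
Texture2D<float4>  g_worldPosition : register(t2);
RWTexture2D<float> g_aoOutput      : register(u0);

struct AOPayload { float visibility; };

[shader("raygeneration")]
void AORayGen()
{
    uint2 pixel = DispatchRaysIndex().xy;
    float3 N = normalize(g_worldNormal[pixel].xyz);
    float3 P = g_worldPosition[pixel].xyz;

    const uint  kNumRays  = 4;
    const float kAORadius = 1.0; // short rays: only nearby geometry occludes

    float visibility = 0.0;
    for (uint i = 0; i < kNumRays; ++i)
    {
        // Cosine-weighted direction about the normal.
        float2 u = Hash2(pixel * kNumRays + i);
        float r = sqrt(u.x);
        float phi = 2.0 * 3.14159265 * u.y;
        float3 local = float3(r * cos(phi), r * sin(phi), sqrt(1.0 - u.x));
        float3 dir = mul(local, MakeTangentBasis(N));

        RayDesc ray = { P, 0.01, dir, kAORadius };
        AOPayload payload = { 0.0 }; // occluded unless the miss shader runs
        TraceRay(g_sceneTLAS,
                 RAY_FLAG_ACCEPT_FIRST_HIT_AND_END_SEARCH | RAY_FLAG_SKIP_CLOSEST_HIT_SHADER,
                 0xFF, 0, 1, 0, ray, payload);
        visibility += payload.visibility;
    }

    g_aoOutput[pixel] = visibility / kNumRays; // 1 = fully open, 0 = fully occluded
}

[shader("miss")]
void AOMiss(inout AOPayload payload)
{
    payload.visibility = 1.0;
}
```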

Inspired by Schied et al.’s (NVIDIA) Spatiotemporal Variance-Guided Filtering (SVGF), we also presented a super-optimized denoising filter specialized for soft shadows with varying penumbra.

Raytraced Soft Shadows with Varying Penumbra (from GDC slides)
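The filter itself is covered in the slides; as a hedged illustration of the family of filters it builds on, here’s a minimal depth- and normal-guided cross-bilateral blur over a noisy shadow mask. The actual PICA PICA filter is considerably more specialized (and faster), so treat this only as the guiding idea:

```hlsl
Texture2D<float>   g_noisyShadow    : register(t0);
Texture2D<float>   g_linearDepth    : register(t1);
Texture2D<float4>  g_worldNormal    : register(t2);
RWTexture2D<float> g_filteredShadow : register(u0);

[numthreads(8, 8, 1)]
void DenoiseShadowCS(uint3 id : SV_DispatchThreadID)
{
    uint width, height;
    g_noisyShadow.GetDimensions(width, height);

    int2   center = int2(id.xy);
    float  zC = g_linearDepth[center];
    float3 nC = normalize(g_worldNormal[center].xyz);

    float sum = 0.0;
    float weightSum = 0.0;
    const int kRadius = 3; // a real filter would scale this with penumbra width

    for (int dy = -kRadius; dy <= kRadius; ++dy)
    {
        for (int dx = -kRadius; dx <= kRadius; ++dx)
        {
            int2 tap = clamp(center + int2(dx, dy), int2(0, 0), int2(width - 1, height - 1));
            float  z = g_linearDepth[tap];
            float3 n = normalize(g_worldNormal[tap].xyz);

            // Down-weight taps across depth and normal discontinuities so the
            // blur doesn't bleed shadow across geometric edges.
            float wDepth  = exp(-abs(z - zC) / (0.01 * zC + 1e-4));
            float wNormal = pow(saturate(dot(n, nC)), 32.0);
            float w = wDepth * wNormal;

            sum       += g_noisyShadow[tap] * w;
            weightSum += w;
        }
    }

    g_filteredShadow[center] = sum / max(weightSum, 1e-4);
}
```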

Finally, we talked about how we handle multiple GPUs (mGPU) by splitting the frame, relying on the first GPU to act as an arbiter that dispatches work to the secondary GPUs in parallel, fork-join style.

mGPU in PICA PICA (from GDC slides)

All in all, it was a lot of content for the time slot we had. In case you want more info, check out the presentation:

You can also download the slides (PowerPoint and PDF), or watch a recording of the presentation here (starts around 21:30).

Here are a few additional links that talk about DirectX Raytracing and Project PICA PICA:

Additional Thoughts

As mentioned at GDC, we’ve had the chance to be involved early with DXR, to experiment and provide feedback as the API evolved. I’m super glad to have been part of this initiative. We still have a lot to explore, and the future is exciting! Some additional thoughts:

Noise vs Ghosting vs Performance

DXR opens the door to an entirely new class of techniques that have never before been achieved in games. With real-time raytracing, it feels like the upcoming years will be about managing complex tradeoffs: noise vs ghosting vs performance. You can add more samples to reduce noise (and improve convergence) during stochastic sampling, but it decreases performance. Alternatively, you can reuse samples from previous frames via temporal filtering, but that can add ghosting. Achieving the right balance here will be important, and as DXR gets adopted in games, this topic should generate a lot of good presentations at conferences.
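To make the tradeoff concrete, here’s a minimal temporal accumulation sketch (my own illustration, not from the presentation). The history blend factor is exactly the knob being discussed: more history means less noise but more potential ghosting, and the neighborhood clamp is one common way to rein the ghosting in, at the cost of some extra flicker:

```hlsl
Texture2D<float4>   g_current : register(t0); // this frame's noisy signal
Texture2D<float4>   g_history : register(t1); // previous result, already reprojected
RWTexture2D<float4> g_output  : register(u0);

[numthreads(8, 8, 1)]
void TemporalAccumulateCS(uint3 id : SV_DispatchThreadID)
{
    int2 p = int2(id.xy);
    float3 current = g_current[p].rgb;

    // Bound the history by the current 3x3 neighborhood: any history value
    // far outside what the scene currently produces is likely a ghost.
    float3 nMin = current;
    float3 nMax = current;
    for (int dy = -1; dy <= 1; ++dy)
    {
        for (int dx = -1; dx <= 1; ++dx)
        {
            float3 c = g_current[p + int2(dx, dy)].rgb; // border clamp omitted for brevity
            nMin = min(nMin, c);
            nMax = max(nMax, c);
        }
    }
    float3 history = clamp(g_history[p].rgb, nMin, nMax);

    // 0.9 history / 0.1 current: strong noise reduction, slower response.
    const float kHistoryWeight = 0.9;
    g_output[p] = float4(lerp(current, history, kHistoryWeight), 1.0);
}
```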

Comparing Against Ground Truth

We also mentioned that we built our own pathtracer inside our framework. It acts as a reference implementation that we can toggle at any point when working on a feature for our hybrid renderer. This allows us to rapidly compare results and see how a feature looks against ground truth. Since a lot of code is shared between the reference and the various hybrid techniques, no significant additional maintenance is required. At the end of the day, having a reference implementation will help you make the best decisions when balancing quality and performance for your (hybrid) techniques.

If raytracing is new to you and building a reference ray/pathtracer is of interest, many books and online resources are available. Peter Shirley’s Ray Tracing in One Weekend is quite popular. You should check it out! 🙂

Specialized Denoising and Reconstruction

As also mentioned during the presentation, we built a denoising filter specialized for soft penumbra shadows. While one can use a general denoising algorithm like SVGF on the whole image, building a denoising filter around a specific term can achieve greater quality and performance, since you can customize the filter around the constraints of that term. In the near future, one can expect significant time and energy to be spent on specialized denoisers and on custom reconstruction of stochastically sampled terms.

DXR Interop

As mentioned earlier, we share a lot of code between raytracing, rasterization and compute. If you want to bake lightmaps inside your engine (see Sébastien Hillaire‘s talk on Real-Time Raytracing For Interactive Global Illumination Workflows in Frostbite), DXR is very appealing because you can evaluate your actual HLSL material shaders. There’s no need for the (limited) parameter conversion that is often necessary when using an external lightmap baking tool.

This is awesome!

Wrapping-up

Even though the API is there and available to everyone, this is just the beginning. It’s an important tool going forward that will enable new techniques in games, and could end up pushing the industry to new heights. I’m looking forward to the techniques that evolve from everyone having access to DXR, and to seeing what kinds of rendering problems get solved. It’s also quite appealing for the research community: researchers can now tackle problems closer to the realm of real-time raytracing, and implement their solutions using a raytracing API that everyone can use.

Because it’s unified, it should also be easy for you to pick up the API, experiment, and integrate it into your own engine. Again, you don’t need this API to do real-time raytracing, but it provides a really nice package and a common language that all DirectX 12 developers can talk around. It also gives hardware makers a clear target for optimization. Compute hasn’t really changed in a while either, so hopefully this momentum will drive improvements in compute and the other pipelines as well. That being said, the API is obviously not perfect, and it’s still at the proposal stage. Microsoft is open to additional feedback and discussion. Try it out and send your feedback!

Can’t wait to see what you will do with DXR! 🙂

Deformable Snow and DirectX 11 in Batman: Arkham Origins

It’s been a while, but I finally found some time for a quick post rounding up the presentations I gave this year at the Game Developers Conference (GDC) and NVIDIA’s GPU Technology Conference (GTC). These presentations showcase and explain some of the features developed for Batman: Arkham Origins.

Continue reading “Deformable Snow and DirectX 11 in Batman: Arkham Origins”

Approximating Translucency Revisited – With “Simplified” Spherical Gaussian Exponentiation

Lately, someone at work brought up the approximation of translucency that Marc Bouchard and I developed back at EA [1], which ended up in DICE’s Frostbite engine [2] (aka the Battlefield 3 engine). Wanting to know more, we started browsing the slides one by one and revisiting the technique. Looking at the HLSL, an optimization came to mind, which I’ll discuss in this post. In case you missed the technique, here are a few cool screenshots made by Marc, as well as tips & tricks on implementing the technique and generating the inverted ambient-occlusion/thickness map. See the references for additional links.

Continue reading “Approximating Translucency Revisited – With “Simplified” Spherical Gaussian Exponentiation”

Approximating Translucency – Part II (addendum to GDC 2011 talk / GPU Pro 2 article)

Thanks to everyone who attended my GDC talk! I was quite happy to see all those faces I hadn’t seen in a while, as well as to meet those I’d only had contact with via Twitter, IM or e-mail.

For those who contacted me post-GDC, it seems the content I submitted for GPU Pro 2 didn’t make it into the final samples archive. I must’ve submitted too late, or it didn’t make it to the editor. Either way, the code in the paper is the most up-to-date, so you should definitely check it out (and/or simply buy the book)!

Roger Cordes sent the following questions. I want to share the answers, since they cover most of the questions people had after the talk:

Continue reading “Approximating Translucency – Part II (addendum to GDC 2011 talk / GPU Pro 2 article)”

GDC 2011 – Approximating Translucency for a Fast, Cheap and Convincing Subsurface Scattering Look

As presented at GDC 2011, here’s my (and the legendary Marc Bouchard’s) talk on our real-time approximation of translucency, featured in the Frostbite 2 engine (used for DICE’s Battlefield 3). These are the slides that we presented, along with audio. Enjoy! 🙂

 

 

Marc and I would like to thank the following people for their time, reviews and constant support:

To those we managed to meet: we had such a good time with all of you at GDC. Always happy to interact with passionate game developers – this is what makes our industry so great! We hope to see you again soon! 🙂

GDC 2011 Talks You Should Attend

As seen in the previous post, I’ll be presenting at GDC 2011. We also have several AMAZING speakers from EA (Electronic Arts) whose talks you should attend:

SPU-based Deferred Shading in BATTLEFIELD 3 for Playstation 3

[Speaker]

Christina Coffin (DICE), @ChristinaCoffin

[Description]

This session presents a detailed, programmer-oriented overview of our SPU-based shading system implemented in DICE’s Frostbite 2 engine, and how it enables more visually rich environments in BATTLEFIELD 3 and better performance over traditional GPU-only renderers. We explain in detail how our SPU tile-based deferred shading system is implemented, and how it supports rich material variety, high-dynamic-range lighting, and large amounts of light sources of different types through an extensive set of culling, occlusion and optimization techniques.

[Takeaway]

Attendees will learn how SPU-based shading allows a rich variety of materials and more complex lighting, and enables offloading of traditional GPU work onto SPUs. Optimization techniques used to minimize SPU processing time for various scenarios will also be taught. Attendees will understand how to technically design, balance and analyze the performance of a game environment that uses an SPU-based shading system. Attendees will learn key points of creating and optimizing code and data processing for high-throughput shading on SPUs.

[Intended Audience]

This session is intended for advanced programmers with an understanding of current forward and deferred rendering techniques, as well as console development experience. Knowledge of lower-level programming with vector intrinsics, assembly language, and structure-of-arrays versus array-of-structures data processing is recommended.

[Links]

http://schedule.gdconf.com/session/12273

Lighting You Up in BATTLEFIELD 3

[Speaker]

Kenny Magnusson (DICE)

[Description]

This session presents a detailed overview of the new lighting system implemented in DICE’s Frostbite 2 engine and how it enables us to stretch the boundaries of lighting in BATTLEFIELD 3, with its highly dynamic, varied and destructible environments. BATTLEFIELD 3 goes beyond the lighting limitations found in our previous BATTLEFIELD games, avoiding costly and static prebaked lighting without compromising quality. We discuss the technical implementation of the art direction in BATTLEFIELD 3, the workflows we created for it, as well as how all the individual lighting components fit together: deferred rendering, HDR, dynamic radiosity and particle lighting.

[Takeaway]

Attendees will learn the workflow we use to light our worlds, as well as memory and performance considerations to hit our performance budgets from a technical art perspective. Attendees will also get a thorough insight into an exciting new approach to lighting both open landscapes and indoor environments with dynamic radiosity in a fully destructible world.

[Intended Audience]

Attendees should understand the fundamentals of lighting systems used in contemporary game development as well as basic principles of rendering technology. Primarily directed at technical artist and rendering programmers, the presentation is accessible enough that anyone attending will gain an insight into the world of lighting.

[Links]

http://schedule.gdconf.com/session/12139

Advanced Visual Effects with DirectX 11

[Speakers]

Johan Andersson (DICE, @repi), Evan Hart (NVIDIA), Richard Huddy (AMD), Nicolas Thibieroz (AMD), Cem Cebenoyan (NVIDIA), Jon Story (AMD), John McDonald (NVIDIA), Jon Jansen (NVIDIA), Holger Grün (AMD), Takahiro Harada (Havok) and Nathan Hoobler (NVIDIA)

[Description]

Brought to you with the collaboration of the industry’s leading hardware and software vendors, this day-long tutorial provides an in-depth look at the Direct3D technologies in DirectX 11 and how they can be applied to cutting-edge PC game graphics for GPUs and APUs. This year we focus exclusively on DirectX 11, examining a variety of special effects that illustrate its use in real game content. This will include detailed presentations from AMD’s and NVIDIA’s demo and developer support teams, as well as some of the top game developers who ship real games into the marketplace. In addition to illustrating the details of rendering advanced real-time visual effects, this tutorial will cover a series of vendor-neutral optimizations that developers need to keep in mind when designing their engines and shaders.

[Takeaway]

Attendees will gain greater insights into advanced utilization of the Direct3D 11 graphics API as used in popular shipping titles.

[Intended Audience]

The intended audience for this session is a graphics programmer who is planning or actively developing a Direct3D 11 application.

[Link]

http://schedule.gdconf.com/session/12078

Culling the Battlefield: Data Oriented Design in Practice

[Speaker]

Daniel Collin (DICE), @daniel_collin

[Description]

This talk will highlight the evolution of the object culling system used in the Frostbite engine over the years, and why we decided to rewrite a system for BATTLEFIELD 3 that had worked well for 4 shipping titles. The new culling system is developed using a data-oriented design that favors simple data layouts, which enables very efficient computation using pipelined vector instructions. Concrete examples of how code is developed with this approach, and the implications and benefits compared to traditional tree-based systems, will be given.

[Takeaway]

Attendees will learn how to apply data-oriented design in practice to write simple but high-throughput code that works well on all platforms. This is especially important for the current consoles.

[Intended Audience]

Intended for programmers on all levels but some background on vector math and basic threading would be beneficial.

[Link]

http://schedule.gdconf.com/session/12251

Four Guns West

[Speakers]

Ben Minto (DICE), Chuck Russom (Chuck Russom FX), Jeffrey Wesevich (38 Studios), Chris Sweetman (Splash Damage Ltd.), and Charles Maynes (Freelance)

[Description]

This session aims to give an insight into the shadowy world of audio in AAA FPS titles, featuring the sound designers behind MEDAL OF HONOR, BRINK, BLACK, HBO’s THE PACIFIC, and CALL OF DUTY. The face-off is split into bite-size chunks concentrating on key areas that are required to design the weapon audio for a AAA shooter. Areas of focus will include insight into Weapons Field Recording headed up by Charles Maynes, Sound Design with Chuck Russom, Creating Believable Worlds and Mixing Practices with Ben Minto, and Real vs Hyper-Real with Chris Sweetman. The panel will also discuss the emotional power of weapon sound design in video games & film.

[Takeaway]

New attendees will get tips and tactics on approaching audio in an FPS, which they can then apply to their own productions. It will empower producers and game designers to consider audio early in a title’s development, which will increase the player’s experience and enjoyment tenfold.

[Intended Audience]

The target audience will be sound designers, producers, game designers and creatives from all aspects of video games wanting insight into the tricks behind great-sounding AAA titles. The session will be structured to allow for all levels of knowledge in the specific fields.

[Link]

http://schedule.gdconf.com/session/12109

GDC 2011 – Approximating Translucency for a Fast, Cheap and Convincing Subsurface Scattering Look

This year, I’ll be presenting at GDC (Game Developers Conference), alongside other great speakers from EA (especially DICE).

The talk is about a very cheap and fast approximation of translucency that allows developers to add convincing subsurface scattering to their scenes with minimal impact on performance. The technique works well in a wide variety of scenes, with anything from minimal to massive numbers of lights. Here’s a quick summary of my talk, which you can also find on the GDC website.

[Title]

Approximating Translucency for a Fast, Cheap, and Convincing Subsurface Scattering Look

[Description]

In real-time computer graphics, the interaction of light and matter is often reduced to local reflection described by Bidirectional Reflectance Distribution Functions (BRDFs). While this mathematical model is valid for describing the surface reflectance of opaque objects, many objects in nature are partly translucent: light travels within the surface. To simulate the translucent properties of objects in real-time, such as subsurface scattering (in human skin and other surfaces), developers rely on complex and expensive techniques. Instead, this talk presents a fast and scalable approximation of translucency for a convincing subsurface scattering look, which can be implemented on current and next-generation video gaming systems.

[Takeaway]

Developers attending this session will be able to improve their game’s visuals by adding real-time translucency to their scenes with minimal impact on the run-time, as demonstrated using EA DICE’s Frostbite engine. Moreover, this effect, once limited to offline rendering, will undeniably help developers in creating a more complete and immersive gaming experience.

[Intended Audience]

Reaching stakeholders from several disciplines of video game development, this talk is intended for all individuals who share common goals in real-time graphics and who strive to improve the visual quality of tomorrow’s games: rendering programmers, technical artists, art directors and technical art directors.

Visit this website for more info on other great talks to be presented.

See you at GDC!