Raytracing — The Future of Computer Graphics, Simulation, and Synthetic Data

We’re excited. You’re excited. Raytracing is finally here. Since NVIDIA announced real-time raytracing hardware in 2018, raytracing has been one of the most-hyped technologies in 3D graphics. But for all its allure, what is it? Let’s dive in.

Raytracing is a technique for producing photorealistic lighting effects in 3D graphics. Inspired by how light behaves in nature, the principle of raytracing is that by simulating the paths of light rays as they bounce off objects in a 3D scene, the resulting image will be realistically lit.
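
To make the idea concrete, here is a minimal, illustrative sketch (plain Python, no graphics library) of the core operation a raytracer repeats millions of times: cast a ray from the camera, test it against a piece of geometry (a single sphere in this toy example), and shade the hit point based on the light direction. The scene, numbers, and function names are invented for illustration; a real renderer traces a ray through every pixel and bounces secondary rays for reflections, shadows, and global illumination.

```python
import math

def ray_sphere_intersect(origin, direction, center, radius):
    """Return the distance along the ray to the nearest hit, or None on a miss."""
    # Vector from the ray origin to the sphere center
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    discriminant = b * b - 4.0 * c  # assumes direction is normalized
    if discriminant < 0:
        return None
    t = (-b - math.sqrt(discriminant)) / 2.0
    return t if t > 0 else None

# Cast one ray from the camera toward a sphere and shade it with a single light.
camera = (0.0, 0.0, 0.0)
ray_dir = (0.0, 0.0, 1.0)               # normalized view direction
sphere_center, sphere_radius = (0.0, 0.0, 5.0), 1.0
light_dir = (0.577, 0.577, -0.577)      # normalized direction toward the light

t = ray_sphere_intersect(camera, ray_dir, sphere_center, sphere_radius)
if t is not None:
    hit = tuple(camera[i] + t * ray_dir[i] for i in range(3))
    normal = tuple((hit[i] - sphere_center[i]) / sphere_radius for i in range(3))
    # Lambertian shading: brightness depends on the angle between surface and light
    brightness = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    print(f"hit at {hit}, brightness {brightness:.2f}")
else:
    print("ray missed the sphere")
```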

History of raytracing

While the idea behind raytracing is often traced back to Albrecht Dürer’s 16th-century perspective drawing experiments, one of the first fully ray-traced feature films was Pixar’s 2013 movie Monsters University. Since then, raytracing has made its way further into film, gaming, and now synthetic data.

Today, NVIDIA stands as the thought leader, having introduced dedicated raytracing hardware back in 2018 with its NVIDIA Turing GPU. Named after Alan Turing, this graphics processing unit (GPU) microarchitecture supports Microsoft’s DirectX Raytracing, NVIDIA’s own OptiX ray tracing API, and Vulkan as the software interfaces for raytracing.

Raytracing’s impact on 3D gaming

The advent of raytracing promises to bring major advancements to computer graphics, and thus 3D gaming. As of September 2020, a short but growing list of games supports NVIDIA raytracing. These include:

  • Battlefield V
  • Call of Duty: Modern Warfare
  • Fortnite
  • Minecraft
  • Call of Duty: Black Ops Cold War

While in the past raytracing was only used by movie studios that could afford to wait hours to render a photorealistic 3D scene, advancements in GPU technology by NVIDIA, AMD, and other chipmakers have enabled AAA video game developers to incorporate raytracing into real-time gameplay.

With recent GPUs such as the NVIDIA GeForce RTX Super Series, raytracing calculations happen in milliseconds rather than hours, opening up a wide world of photorealistic possibilities for AAA-quality video games.

Who are the leaders in raytracing?

When we consider which company provides the most thought leadership, hardware and software support, and overall hype around raytracing, NVIDIA immediately comes to mind. The company has made tremendous strides in reducing power requirements and optimizing its GPUs for real-time raytracing.
 
NVIDIA also publishes a great deal of research on computer graphics and has made a point of emphasizing the potential for raytracing across multiple domains. That said, NVIDIA may be out in front, but it is not alone; the other players in raytracing deserve consideration.
 
When it comes to GPUs, Advanced Micro Devices (AMD) is a formidable player. AMD’s RX 6000 GPUs are expected to feature raytracing technology, which would position the company to compete with NVIDIA.
 
There have also been rumors that, while AMD trails NVIDIA in gaming performance and power efficiency, AMD’s switch to TSMC’s 7nm manufacturing process has helped close the gap. It remains to be seen whether AMD can catch up with NVIDIA, but the GPU competition is fierce.
 
Another industry giant that has so far missed out on the GPU hype is Intel. However, that may change with Intel’s forthcoming Xe GPUs, planned for a 2021 release. The Xe GPU is rumored to include raytracing, suggesting that Intel believes it has a performance-competitive GPU in development. As Intel’s first dedicated GPU in over 20 years, it will be interesting to see whether the chip giant can compete with NVIDIA and AMD.
 

How AI is helping to power raytracing

Raytracing comes at a time when AI techniques are being used to enhance all kinds of processes, so it should be no surprise that NVIDIA is already using AI to further enhance its raytracing technology. Specifically, NVIDIA uses a deep-learning model to boost rendering quality and performance on its GPUs. The company calls this technology DLSS (Deep Learning Super Sampling).
 
Deep Learning Super Sampling works by taking in three inputs: a high-resolution frame, a low-resolution frame, and the low-resolution frame’s motion vectors. The underlying AI model, a convolutional autoencoder, combines these inputs to figure out how to generate a high-resolution version of the current frame. The result is a game that looks as though it were rendered at a higher resolution, without paying the full rendering cost.
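
NVIDIA has not published the exact DLSS network architecture, so the following is only a rough, hypothetical sketch of the general idea, written in PyTorch: a small convolutional autoencoder that fuses a low-resolution frame, its motion vectors, and the previous high-resolution frame into a new frame at twice the resolution. The layer sizes, channel counts, and resolutions are invented for illustration and bear no relation to the real DLSS model.

```python
import torch
import torch.nn as nn

class SuperSamplingAutoencoder(nn.Module):
    """Toy convolutional autoencoder in the spirit of DLSS-style super sampling.

    Inputs (shapes and meanings are assumptions for illustration only):
      low_res       - current frame rendered at low resolution, RGB
      motion        - 2-channel motion vectors for the low-res frame
      prev_high_res - previous high-resolution output frame, RGB
    Output: the current frame at 2x the input resolution.
    """

    def __init__(self):
        super().__init__()
        # Encoder compresses the concatenated inputs into a feature map
        self.encoder = nn.Sequential(
            nn.Conv2d(3 + 2 + 3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder upsamples past the input size to 2x resolution
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, low_res, motion, prev_high_res):
        # Downsample the previous high-res frame so all inputs share one grid
        prev_small = nn.functional.interpolate(prev_high_res, size=low_res.shape[-2:])
        x = torch.cat([low_res, motion, prev_small], dim=1)
        return self.decoder(self.encoder(x))

# Smoke test with random tensors standing in for real frames
model = SuperSamplingAutoencoder()
low_res = torch.rand(1, 3, 270, 480)        # e.g. a 480x270 render
motion = torch.rand(1, 2, 270, 480)
prev_high_res = torch.rand(1, 3, 540, 960)  # previous 960x540 output
print(model(low_res, motion, prev_high_res).shape)  # -> torch.Size([1, 3, 540, 960])
```

In NVIDIA’s published descriptions, the network is trained against extremely high-resolution reference renders, which is where the “learning” in Deep Learning Super Sampling comes from.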
 
NVIDIA will not be the only company to improve its hardware with deep-learning software. While AMD’s Virtual Super Resolution anti-aliasing is not known to use deep-learning techniques, it would not be surprising to see AMD introduce a rendering software upgrade similar to NVIDIA’s DLSS.
 

How raytracing will help power AI

We are unabashedly huge fans of using photorealistic simulations to train AI. And one of the biggest drivers of photorealism in computer graphics is raytracing. Improvements in raytracing mean more photorealistic simulations. And that’s why we are excited about raytracing.
 
Photorealistic simulation is already used to train autonomous vehicles, and it is increasingly being used in other domains as well. At Simerse, we use photorealistic computer graphics to generate synthetic data and then feed that synthetic data into machine learning models for computer vision.
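
As a sketch of what that pipeline looks like at its very simplest, the hypothetical PyTorch snippet below runs one training step of a tiny classifier on stand-in tensors that play the role of rendered frames and their labels. The model, shapes, and names are illustrative only, not our actual production pipeline; the point it captures is that the labels come directly from the renderer, with no manual annotation.

```python
import torch
import torch.nn as nn

# Stand-ins for a batch of rendered frames and the labels the renderer exports
# for free (here a single class id per image; real pipelines often export
# boxes, masks, and depth as well). All shapes and names are illustrative.
rendered_frames = torch.rand(8, 3, 64, 64)   # 8 synthetic RGB images
labels = torch.randint(0, 3, (8,))           # ground truth from the renderer

# A deliberately tiny classifier standing in for a real computer vision model
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 3),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step: because the data is synthetic, the labels are exact and
# cost nothing to produce beyond the render itself.
optimizer.zero_grad()
loss = loss_fn(model(rendered_frames), labels)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```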
 
For a synthetic data and simulation company like ours, improvements in raytracing mean that our synthetic training sets become more representative of the real world, which is good news.
 

The applications of computer vision are increasing

As more and more businesses and consumers interact with computer vision, the applications of the technology will grow. Technologists across academia and industry are already dreaming up the next big uses for computer vision. Object detection, feature recognition, and other AI-enabled techniques will become more prevalent in nearly every aspect of modern life.
 
The prevalence of computer vision will only continue to increase: that’s why people need to understand not just what is possible today but what may be possible five or ten years down the line. AI-powered object detection will become second nature, enabling a wide range of new technologies such as autonomous cars, automated delivery, process improvements, and much more.
 

Computer graphics will continue to improve

Twenty years ago, video games were jagged and low-resolution. Today, AAA video games can run in 4K resolution and are immensely realistic. And the pace of change is not expected to slow down.
 
New technologies, like raytracing, are being rolled out every day with the promise of making computer graphics more realistic. Improvements in computer graphics will impact gaming, animated film, and now synthetic data and AI.
 
Not only will rendering technology continue to improve, but the quality of 3D models will also increase. For example, texture level of detail (LOD) will improve as real-world textures are captured and digitized with increasingly sophisticated technology. 3D modeling software such as Blender or Maya is getting more powerful with each update, leveraging advancements in computer hardware to enable smoother and more realistic materials. Technological improvements are driving advancements in both rendering and tooling, ultimately resulting in more realistic computer graphics.
 

Raytracing will lead the future

Real-time raytracing at 60 frames per second was unthinkable ten years ago. Now, raytracing technology is on the verge of going mainstream in gaming. The logical next step is for improved raytracing to spread further into the film industry and the synthetic data market, opening the door to the next wave of technologies.

We expect that NVIDIA will continue to drive raytracing adoption for now, but AMD and Intel are notable competitors who may catch up. All told, raytracing is driving some really intriguing advancements in computer graphics, AI, entertainment, and gaming. We look forward to seeing what the future holds, and to playing a role ourselves.