Dr. Adam Marrs successfully defends his dissertation

Yesterday, with many friends and family in attendance, the new Dr. Adam Marrs successfully defended his dissertation. His committee included his co-advisors, professors Benjamin Watson and Chris Healey, as well as professors Turner Whitted and Rob St. Amant, and NVIDIA VP of Graphics Research Dr. David Luebke. Dr. Marrs will join NVIDIA in RTP after graduation. Congratulations, Adam!

Real-Time GPU Accelerated Multi-View Point-Based Rendering
Adam Marrs

Doctoral dissertation
NC State University Computer Science

Abstract
Research in the field of computer graphics has focused on producing realistic images by accurately simulating surface materials and the behavior of light. Since achieving photorealism requires significant computational power, visual realism and interactivity are typically adversarial goals. Dedicated graphics co-processors (GPUs) are now synonymous with innovation in real-time rendering and have fueled further advances in the simulation of light within real-time constraints. Important rendering effects that accurately model light transport often require evaluating costly multi-dimensional integrals. These integrals are approximated by dense spatial sampling, typically implemented on GPUs as multiple rasterizations of a scene from differing viewpoints. Producing multiple renders of complex geometry reveals a critical limitation in the design of the graphics processor: the throughput optimizations that make GPUs capable of processing millions of polygons in only milliseconds also prevent them from leveraging data coherence when synthesizing multiple views. Unlike their parallel processing of vertices and post-rasterization fragments, existing GPU architectures must render views serially and thus parallelize view rendering poorly. The full potential of GPU-accelerated rendering algorithms is not realized by the existing single-view design.
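To make the cost structure concrete, the following is a minimal CPU-side sketch, in C++, of the conventional multi-view approach the abstract describes: one complete rasterization of the scene per light sample, produced one view after another. All type and function names here are purely illustrative assumptions, not code from the dissertation.

```cpp
#include <cstdio>
#include <vector>

// Illustrative stand-ins for scene geometry and a per-view depth buffer.
struct Vec3 { float x, y, z; };
struct Triangle { Vec3 v0, v1, v2; };
struct DepthMap { std::vector<float> depth; };

// One full rasterization pass over the entire scene for a single viewpoint.
DepthMap rasterizeSceneFrom(const Vec3& viewpoint,
                            const std::vector<Triangle>& scene) {
    DepthMap map;
    // ... vertex transform, rasterization, and depth test for every triangle ...
    (void)viewpoint;
    (void)scene;
    return map;
}

int main() {
    std::vector<Triangle> scene(1'000'000);  // complex geometry
    std::vector<Vec3> lightSamples(256);     // sample points on an area light

    // Conventional multi-view rendering: one complete rasterization of the
    // whole scene per light sample. Cost grows as views x triangles, and the
    // views are produced one after another -- the serial bottleneck noted in
    // the abstract.
    std::vector<DepthMap> shadowMaps;
    for (const Vec3& s : lightSamples)
        shadowMaps.push_back(rasterizeSceneFrom(s, scene));

    std::printf("rendered %zu views serially\n", shadowMaps.size());
}
```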

In this dissertation, we introduce an algorithmic solution to this problem that improves the efficiency of sample generation, increases the number of available samples, and enhances the performance-to-quality relationship of real-time multi-view effects. Unlike traditional polygonal rasterization, our novel multi-view rendering design achieves parallel execution in all stages of the rendering process. We accomplish this by: (1) transforming the multi-view rendering primitive from polygons to points dynamically at run time, (2) performing geometric sampling tailored to multiple views, and (3) reorganizing the structure of computation to parallelize view rendering. We demonstrate the effectiveness of our approach by implementing and evaluating novel multi-view soft-shadowing algorithms based on our design. These new algorithms tackle a complex visual effect that cannot be accurately produced in real time using existing methods. We also introduce View Independent Rasterization (VIR): a fast and flexible method to transform complex polygonal meshes into point representations suitable for rendering many views from arbitrary viewpoints. VIR is an important tool for achieving multi-view point-based rendering, as well as a useful general approach to real-time, view-agnostic polygonal sampling. Although we focus on algorithmic solutions to the classic rendering problem of soft shadows, we also provide suggestions for evolving future GPU architectures to better accelerate point-based rendering, multi-view rendering, and complex visual effects that are still out of reach.
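For intuition only, here is a companion sketch, again in C++ with hypothetical names, of the general idea the abstract outlines: sample the polygons into points once, independent of any view, and then treat every point-view pair as independent work that can be laid out in a single parallelizable pass. It is not the dissertation's actual VIR or soft-shadowing algorithm, only a rough illustration of the contrast with the serial per-view loop above.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Illustrative types only; not code from the dissertation.
struct Vec3 { float x, y, z; };
struct Triangle { Vec3 v0, v1, v2; };
struct Point { Vec3 position; };
struct DepthMap { std::vector<float> depth; };

// View-agnostic sampling: turn each triangle into points once, at a density
// chosen to cover every view. This loosely stands in for the role View
// Independent Rasterization plays; the real method runs on the GPU and is
// considerably more involved.
std::vector<Point> samplePoints(const std::vector<Triangle>& scene) {
    std::vector<Point> points;
    for (const Triangle& t : scene) {
        // ... emit points distributed across the triangle's surface ...
        points.push_back({t.v0});
    }
    return points;
}

int main() {
    std::vector<Triangle> scene(1'000'000);
    std::vector<Vec3> viewpoints(256);            // e.g. area-light samples
    std::vector<DepthMap> views(viewpoints.size());

    // Step 1: sample the geometry into points once, independent of any view.
    const std::vector<Point> points = samplePoints(scene);

    // Step 2: every (point, view) pair is independent, so the work can be
    // organized as one large parallel pass (conceptually, one GPU thread per
    // pair) instead of 256 serial rasterizations of the full mesh.
    for (std::size_t v = 0; v < views.size(); ++v) {   // parallel in spirit
        for (const Point& p : points) {
            // ... project p into view v and update that view's depth map ...
            (void)p;
        }
    }
    std::printf("splatted %zu points into %zu views\n",
                points.size(), views.size());
}
```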
