Prof Watson's presentation on making machine learning fun and fast

IBM RTP recently invited Prof Watson to give a presentation on his collaborative research with them.



Making ML Training Fun and Fast
February 27, 2018 @ 2:00 pm - 3:00 pm

Prof Watson's collaboration uses interfaces based on manual classification — qualitative coding — to make training of automated machine learning classifiers more engaging and productive. Ultimately, the collaboration will use large touch displays to approximate qualitative coding's data displays.

Holle Christensen presents her work on interaction for ML training

Student Holle Christensen presented her short paper Building bridges: a case study in structuring human-ML training interactions at the AAAI Symposium on the Design of the User Experience for Artificial Intelligence (the UX of AI) in Palo Alto, CA.


Building bridges: a case study in structuring human-ML training interactions.
Johanne Christensen, Benjamin Watson, Andy Rindos and Sharon Joines.

Symposium on the Design of the User Experience for Artificial Intelligence (the UX of AI), AAAI Spring Symposium Series (Palo Alto, March).

With the increasing ubiquity of artificial intelligence and machine learning applications, systems are emerging that require non-ML experts to interact with machine learning at the training step, not just with the final system. These users may not have the skills, time, or inclination to familiarize themselves with the way machine learning works, so training systems must be developed that communicate the necessary information and facilitate effortless collaboration with the user. We consider how to utilize techniques from qualitative coding, a human-centered approach to manual classification, to build a better user experience for ML training.


Holle Christensen publishes on immersive UIs for machine learning, based on qualitative coding

Student Holle Christensen's short paper Structuring human-ML interaction with an immersive interface based on qualitative coding was accepted as a poster at the Workshop on Immersive Analytics at the IEEE Visualization conference in Phoenix, AZ!


Structuring human-ML interaction with an immersive interface based on qualitative coding
Johanne Christensen and Benjamin Watson

Workshop on Immersive Analytics: Exploring Future Interaction and Visualization Technologies for Data Analytics, IEEE Visualization conference (Phoenix, October).

With ever-increasing bodies of data, much of it unlabeled and from complex, dynamic and weakly structured domains, machine learning (ML) is more necessary than ever. Yet even domain experts have difficulty understanding most ML algorithms, and so cannot easily retrain them as new data arrives. This limits ML's use in many fields that sorely need it, such as law, where users must have confidence in ML results. Interactive machine learning techniques have been proposed to take advantage of humanity's ability to categorize in these complex domains, but little attention has been paid to building interfaces for non-ML experts to provide input, and in particular to creating a user experience that engenders trust. Qualitative coding — the decades-old practice of manual classification — provides a proven methodology that can be adapted to structure interaction between domain experts and ML algorithms. Qualitative coders often use physical props such as notecards to help sort through and understand datasets. Here we explore how an immersive system can be built to leverage QC's intuitive techniques and grow a trusting partnership between human and ML classifiers.

Dr. Adam Marrs successfully defends his dissertation

Yesterday, with many friends and family in attendance, the new Dr. Adam Marrs successfully defended his dissertation. His committee included professors and co-advisors Benjamin Watson and Chris Healey, as well as professors Turner Whitted and Rob St. Amant, and NVIDIA VP of Graphics Research Dr. David Luebke. Dr. Marrs will be joining NVIDIA in RTP after his graduation. Congratulations Adam!

Real-Time GPU Accelerated Multi-View Point-Based Rendering
Adam Marrs

Doctoral dissertation
NC State Univ Computer Science

Research in the field of computer graphics has focused on producing realistic images by accurately simulating surface materials and the behavior of light. Since achieving photorealism requires significant computational power, visual realism and interactivity are typically adversarial goals. Dedicated graphics co-processors (GPUs) are now synonymous with innovation in real-time rendering and have fueled further advances in the simulation of light within real-time constraints. Important rendering effects that accurately model light transport often require evaluating costly multi-dimensional integrals. Approximating these integrals is achieved by dense spatial sampling, and is typically implemented with GPUs as multiple rasterizations of a scene from differing viewpoints. Producing multiple renders of complex geometry reveals a critical limitation in the design of the graphics processor: the throughput optimizations that make GPUs capable of processing millions of polygons in only milliseconds also prevent them from leveraging data coherence when synthesizing multiple views. Unlike its parallel processing of vertices and post-rasterization fragments, existing GPU architectures must render views serially and thus parallelize view rendering poorly. The full potential of GPU accelerated rendering algorithms is not realized by the existing single view design.

In this dissertation, we introduce an algorithmic solution to this problem that improves the efficiency of sample generation, increases the number of available samples, and enhances the performance-to-quality relationship of real-time multi-view effects. Unlike traditional polygonal rasterization, our novel multi-view rendering design achieves parallel execution in all stages of the rendering process. We accomplish this by: (1) transforming the multi-view rendering primitive from polygons to points dynamically at run-time, (2) performing geometric sampling tailored to multiple views, and (3) reorganizing the structure of computation to parallelize view rendering. We demonstrate the effectiveness of our approach by implementing and evaluating novel multi-view soft shadowing algorithms based on our design. These new algorithms tackle a complex visual effect that is not possible to accurately produce in real-time using existing methods. We also introduce View Independent Rasterization (VIR): a fast and flexible method to transform complex polygonal meshes into point representations suitable for rendering many views from arbitrary viewpoints. VIR is an important tool to achieve multi-view point-based rendering, as well as a useful general approach to real-time view agnostic polygonal sampling. Although we focus on algorithmic solutions to the classic rendering problem of soft shadows, we also provide suggestions to evolve future GPU architectures to better accelerate point-based rendering, multi-view rendering, and complex visual effects that are still out of reach.


Holle Christensen publishes on experience analytics

Student Holle Christensen's short paper Experience analytics: developing a scalable, implicit and rich measure of user experience was accepted at the Triangulation in UX Studies: Learning from Experience Workshop at the ACM Conference on Designing Interactive Systems (DIS) in Edinburgh, Scotland!


Experience analytics: developing a scalable, implicit and rich measure of user experience
Johanne Christensen and Benjamin Watson

Triangulation in UX Studies: Learning from Experience Workshop
ACM Conference on Designing Interactive Systems (DIS) 2017, June 10.

New measures of user experience must be defined that can combine the scalability and unobtrusiveness of activity traces with the richness of more traditional measures. Machine learning can be used to predict established UX measures from such activity traces. We advocate research into the type of activity traces needed as input for such measures, the machine learning technology needed, and the user experience components and measures to be predicted.
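The idea of predicting an established UX measure from activity traces can be sketched in a few lines. This is an illustration only, not the authors' system: the features, the synthetic "ground truth" relationship, and the use of plain least squares are all assumptions made for the example.

```python
# Illustrative sketch (not the paper's method): predict a questionnaire-style
# UX score from simple activity-trace features, using ordinary least squares
# as a minimal stand-in for a machine learning model.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-session activity-trace features:
# [clicks per minute, mean dwell time (s), backtrack rate]
X = rng.uniform([5, 1, 0.0], [60, 30, 0.5], size=(200, 3))

# Synthetic "ground truth" UX score for the demo: longer dwell helps,
# backtracking hurts (an assumed relationship, for illustration only).
y = 70 + 0.8 * X[:, 1] - 40 * X[:, 2] + rng.normal(0, 2, size=200)

# Fit a linear predictor from traces to the UX measure.
A = np.column_stack([X, np.ones(len(X))])        # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_ux(features):
    """Predict the UX score for one session's activity-trace features."""
    return float(np.append(features, 1.0) @ coef)
```

In a real system the linear model would be replaced by whatever learner best fits the trace data, and the targets would come from measures collected in traditional UX studies.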

Adam Marrs publishes on multiview rendering

Student Adam Marrs presented his paper Real-Time View Independent Rasterization for Multi-View Rendering at Eurographics 2017 in Lyon, France! 


Real-Time View Independent Rasterization for Multi-View Rendering
Adam Marrs, Benjamin Watson, and Christopher G. Healey

Proc. Eurographics 2017 short papers, April 25, Lyon, France.

Existing graphics hardware parallelizes view generation poorly, placing many multi-view effects – such as soft shadows, defocus blur, and reflections – out of reach for real-time applications. We present emerging solutions that address this problem using a high density point set tailored per frame to the current multi-view configuration, coupled with relatively simple reconstruction kernels. Points are a more flexible rendering primitive, which we leverage to render many high resolution views in parallel. Preliminary results show our approach accelerates point generation and the rendering of multi-view soft shadows up to 9x.
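The core idea of points as a view-agnostic primitive — one point set reused across many views — can be illustrated with a batched projection. This sketch is not the paper's View Independent Rasterization; the view matrices and point data are made up for the example, which only shows how one point cloud can be transformed into all views in a single batched operation.

```python
# Illustrative sketch (not the paper's algorithm): project one point cloud
# into several views at once with a batched matrix multiply — the basic idea
# behind rendering many views in parallel from a shared point set.
import numpy as np

def project_points(points, view_proj):
    """points: (N, 3); view_proj: (V, 4, 4), one matrix per view.
    Returns (V, N, 2) screen-space coordinates for every view at once."""
    n = points.shape[0]
    homo = np.hstack([points, np.ones((n, 1))])        # (N, 4) homogeneous
    clip = np.einsum('vij,nj->vni', view_proj, homo)   # (V, N, 4) all views
    return clip[..., :2] / clip[..., 3:4]              # perspective divide

# Two toy views: an identity "camera" and one offset along x.
views = np.stack([np.eye(4), np.eye(4)])
views[1, 0, 3] = 1.0   # second view shifts points by +1 in x

pts = np.array([[0.0, 0.0, 1.0], [0.5, -0.5, 2.0]])
ndc = project_points(pts, views)   # shape (2 views, 2 points, 2 coords)
```

On a GPU the same batching lets every (view, point) pair be processed independently, which is what makes points parallelize across views where serial per-view rasterization of polygons does not.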

Prof Watson's recent presentation on location experience at NC State's Geospatial Analytics Center

NC State's Geospatial Analytics Center recently invited Prof Watson to give a talk. It went well: his hosts said the audience interacted with him more than with any previous speaker!


Location Experience: Where We’ve Been, Are, and May Be
September 1, 2016 @ 3:30 pm - 4:30 pm

Finding our way has always been necessary, and we have always tried to make it easier. Yet today, wayfinding is changing so rapidly that it makes our heads spin. What have we lost? What might we gain? I will use a review of wayfinding past, present and future to raise such questions, arguing that the enjoyment we experience along the way is now just as important as the efficiency with which we find the way's end.