Christiaan Gribble
PMTS Silicon Design Engineer

Advanced Micro Devices, Inc.
www.amd.com

 



 

2006



Interactive Methods for Effective Particle Visualization

Christiaan P. Gribble

PhD Dissertation, School of Computing, University of Utah, December 2006

Particle-based simulation methods are used to model complex phenomena in many computational science and engineering applications.  Effective visualization of the resulting state communicates subtle changes in the three-dimensional structure, spatial organization, and qualitative trends within the data as a simulation evolves, and enables easier navigation and exploration of the data through interactivity.  As particle-based simulations continue to grow in size and complexity, effective visualization becomes increasingly problematic.  The sheer size of these datasets makes interactive visualization a difficult task, while the intricacies of complex data are difficult to convey sensibly.

This dissertation combines and extends knowledge in computer graphics, scientific visualization, and visual perception to enhance the ability to efficiently and effectively visualize particle-based simulation data.  We first introduce two interactive visualization algorithms that render large, time-varying particle datasets at highly interactive rates on current and upcoming desktop computing platforms.  Then, to motivate the use of effects from global illumination in particle visualization, we describe a psychophysical user study that examines the impact of two advanced shading models on the ability to detect subtle differences between particle configurations.  Finally, we introduce two algorithms that make the use of effects from global illumination practical for an interactive particle visualization process.

The results of this research demonstrate the feasibility of rendering large, time-varying particle datasets at highly interactive rates on desktop computer systems.  The interactive visualization algorithms improve performance and make interactive systems more accessible.  This dissertation also demonstrates both the importance and feasibility of using advanced shading models in an interactive particle visualization process.  The user study shows that effects from global illumination can be perceptually beneficial, while the practical global illumination algorithms demonstrate that advanced shading models can be used in an interactive setting.  The results show that the proposed algorithms enhance an investigator's ability to perform data analysis and feature detection tasks while maintaining the ability to interrogate large, time-varying datasets at interactive rates.
     

A Coherent Grid Traversal Approach to Visualizing Particle-Based Simulation Data

Christiaan P. Gribble, Thiago Ize, Andrew Kensler, Ingo Wald, and Steven G. Parker

Poster, IEEE Symposium on Interactive Ray Tracing, September 2006

We describe an efficient algorithm for visualizing particle-based simulation data using fast packet-based ray tracing and multi-level grids. In particular, we introduce optimizations that exploit the properties of these datasets to tailor the coherent grid traversal algorithm for particle visualization, achieving both improved performance and reduced storage requirements.
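As a rough single-ray illustration of the underlying idea (with assumed names and data layout; the poster's algorithm uses multi-level grids and traverses them with coherent ray packets), the C++ sketch below builds a uniform grid over particle spheres of a common radius and steps a ray through it with a 3D-DDA, testing only the spheres stored in the cells the ray visits:

// Illustrative single-ray sketch: a uniform grid over particle spheres,
// traversed with a 3D-DDA.  Not the published packet-based algorithm.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static int   clampi(int v, int lo, int hi) { return std::max(lo, std::min(v, hi)); }

struct Grid {
    Vec3 lo;                              // grid origin (min corner)
    int n = 0;                            // n x n x n cubic cells
    float cell = 0;                       // cell width
    std::vector<std::vector<int>> cells;  // particle indices per cell
};

static int cellIndex(const Grid& g, int x, int y, int z) { return (z * g.n + y) * g.n + x; }

// Insert each particle into every cell its bounding box overlaps (a single
// shared radius r is assumed, which is common for these datasets).
Grid buildGrid(const std::vector<Vec3>& c, float r, int n) {
    Grid g;
    g.n = n;
    g.lo = {c[0].x - r, c[0].y - r, c[0].z - r};
    Vec3 hi = {c[0].x + r, c[0].y + r, c[0].z + r};
    for (Vec3 p : c) {
        g.lo = {std::min(g.lo.x, p.x - r), std::min(g.lo.y, p.y - r), std::min(g.lo.z, p.z - r)};
        hi = {std::max(hi.x, p.x + r), std::max(hi.y, p.y + r), std::max(hi.z, p.z + r)};
    }
    g.cell = std::max({hi.x - g.lo.x, hi.y - g.lo.y, hi.z - g.lo.z}) / n;
    g.cells.assign(n * n * n, {});
    for (int i = 0; i < (int)c.size(); ++i) {
        int x0 = clampi(int((c[i].x - r - g.lo.x) / g.cell), 0, n - 1);
        int x1 = clampi(int((c[i].x + r - g.lo.x) / g.cell), 0, n - 1);
        int y0 = clampi(int((c[i].y - r - g.lo.y) / g.cell), 0, n - 1);
        int y1 = clampi(int((c[i].y + r - g.lo.y) / g.cell), 0, n - 1);
        int z0 = clampi(int((c[i].z - r - g.lo.z) / g.cell), 0, n - 1);
        int z1 = clampi(int((c[i].z + r - g.lo.z) / g.cell), 0, n - 1);
        for (int z = z0; z <= z1; ++z)
            for (int y = y0; y <= y1; ++y)
                for (int x = x0; x <= x1; ++x)
                    g.cells[cellIndex(g, x, y, z)].push_back(i);
    }
    return g;
}

// Ray-sphere test: nearest positive hit distance below tMax, if any.
// Assumes the ray direction is unit length.
bool hitSphere(Vec3 o, Vec3 d, Vec3 c, float r, float tMax, float& tHit) {
    Vec3 oc = sub(o, c);
    float b = dot(oc, d), disc = b * b - (dot(oc, oc) - r * r);
    if (disc < 0.f) return false;
    float t = -b - std::sqrt(disc);
    if (t < 1e-4f || t > tMax) return false;
    tHit = t;
    return true;
}

// Step the ray cell by cell; only spheres in the current cell are tested,
// and a hit counts only if it lies before the cell's exit point.
int traverse(const Grid& g, const std::vector<Vec3>& c, float r, Vec3 o, Vec3 d) {
    // For brevity this sketch assumes the ray origin lies inside the grid.
    int ix = clampi(int((o.x - g.lo.x) / g.cell), 0, g.n - 1), sx = d.x >= 0 ? 1 : -1;
    int iy = clampi(int((o.y - g.lo.y) / g.cell), 0, g.n - 1), sy = d.y >= 0 ? 1 : -1;
    int iz = clampi(int((o.z - g.lo.z) / g.cell), 0, g.n - 1), sz = d.z >= 0 ? 1 : -1;
    float tx = d.x != 0 ? (g.lo.x + (ix + (sx > 0)) * g.cell - o.x) / d.x : INFINITY;
    float ty = d.y != 0 ? (g.lo.y + (iy + (sy > 0)) * g.cell - o.y) / d.y : INFINITY;
    float tz = d.z != 0 ? (g.lo.z + (iz + (sz > 0)) * g.cell - o.z) / d.z : INFINITY;
    float dx = d.x != 0 ? g.cell / std::fabs(d.x) : INFINITY;
    float dy = d.y != 0 ? g.cell / std::fabs(d.y) : INFINITY;
    float dz = d.z != 0 ? g.cell / std::fabs(d.z) : INFINITY;
    while (true) {
        int best = -1;
        float bestT = std::min({tx, ty, tz});   // distance to the cell exit
        for (int i : g.cells[cellIndex(g, ix, iy, iz)]) {
            float t;
            if (hitSphere(o, d, c[i], r, bestT, t)) { best = i; bestT = t; }
        }
        if (best >= 0) return best;             // nearest hit within this cell
        if (tx <= ty && tx <= tz) { ix += sx; tx += dx; if (ix < 0 || ix >= g.n) return -1; }
        else if (ty <= tz)        { iy += sy; ty += dy; if (iy < 0 || iy >= g.n) return -1; }
        else                      { iz += sz; tz += dz; if (iz < 0 || iz >= g.n) return -1; }
    }
}

int main() {
    std::vector<Vec3> centers = {{3.f, 3.f, 3.f}, {7.f, 3.f, 3.f}};
    Grid g = buildGrid(centers, 0.5f, 8);
    // Shoot a ray along +x from just inside the grid; it should hit particle 0.
    int hit = traverse(g, centers, 0.5f, {2.55f, 3.4f, 3.f}, {1.f, 0.f, 0.f});
    std::printf("hit particle %d\n", hit);
}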





Visualizing Particle-Based Simulation Datasets on the Desktop

Christiaan P. Gribble, Abraham J. Stephens, James E. Guilkey, and Steven G. Parker

British HCI 2006 Workshop on Combining Visualization and Interaction to Facilitate Scientific Exploration and Discovery, September 2006

We present an approach to rendering large, time-varying particle-based simulation datasets using programmable graphics hardware on desktop computer systems.  Particle methods are used to model a wide range of complex phenomena, and effective visualization of the resulting data requires communicating subtle changes in the three-dimensional structure, spatial organization, and qualitative trends within a simulation as it evolves, as well as allowing easier navigation and exploration of the data through interactivity.  We highlight the critical components of our approach, and introduce an extension to the coherent hierarchical culling algorithm that often improves temporal coherence and leads to better average performance for time-varying datasets.  Our approach performs competitively with current particle visualization systems based on interactive ray tracing that require tightly coupled supercomputers.  Moreover, our system runs on hardware that is a fraction of the cost of these systems, making particle visualization and data exploration more accessible.  We thus advance the current state-of-the-art by bringing visualization of particle-based simulation datasets to the desktop.
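The extension itself is specific to the paper, but the flavor of coherent hierarchical culling is easy to sketch: treat last frame's visibility as a hint, draw previously visible nodes immediately while their occlusion queries are in flight, and query previously hidden nodes before descending into them.  The C++ outline below is a simplified, hypothetical rendition; issueOcclusionQuery, queryVisible, and drawGeometry stand in for the GPU occlusion-query and rendering calls a real system would use:

// Illustrative sketch of occlusion-culling traversal with temporal coherence.
#include <cstdio>
#include <queue>
#include <vector>

struct Node {
    std::vector<Node*> children;   // empty for leaf nodes holding particles
    bool visibleLastFrame = false; // temporal-coherence hint reused this frame
    int pendingQuery = -1;         // id of an occlusion query issued this frame
};

// --- hypothetical renderer hooks (stubs so the sketch compiles) ------------
int  issueOcclusionQuery(Node&)    { static int id = 0; return id++; }
bool queryVisible(int /*queryId*/) { return true; }   // pretend the result is back
void drawGeometry(Node&)           { std::puts("draw node"); }
// ----------------------------------------------------------------------------

void cullAndDraw(Node& root) {
    std::queue<Node*> front;           // rough front-to-back traversal queue
    front.push(&root);
    std::vector<Node*> verify;         // drawn on the strength of last frame
    std::vector<Node*> pending;        // previously hidden, awaiting a query
    while (!front.empty()) {
        Node* n = front.front(); front.pop();
        if (n->visibleLastFrame) {
            // Assume still visible: draw now, verify with a query in parallel.
            drawGeometry(*n);
            n->pendingQuery = issueOcclusionQuery(*n);
            verify.push_back(n);
            for (Node* c : n->children) front.push(c);
        } else {
            // Previously hidden: query first, descend only if it passes.
            n->pendingQuery = issueOcclusionQuery(*n);
            pending.push_back(n);
        }
    }
    // Collect query results (a real system overlaps this with rendering).
    for (Node* n : verify) n->visibleLastFrame = queryVisible(n->pendingQuery);
    for (Node* n : pending) {
        n->visibleLastFrame = queryVisible(n->pendingQuery);
        if (n->visibleLastFrame) {
            drawGeometry(*n);
            for (Node* c : n->children) cullAndDraw(*c);  // re-examine subtree
        }
    }
}

int main() {
    Node leaf, root;
    root.children.push_back(&leaf);
    root.visibleLastFrame = true;      // e.g., seeded from the previous frame
    cullAndDraw(root);
}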






Enhancing Interactive Particle Visualization with Advanced Shading Models

Christiaan P. Gribble and Steven G. Parker

ACM SIGGRAPH Third Symposium on Applied Perception in Graphics and Visualization, July 2006

Particle-based simulation methods are used to model a wide range of complex phenomena and to solve time-dependent problems of various scales.  Effective visualization of the resulting state should communicate subtle changes in the three-dimensional structure, spatial organization, and qualitative trends within a simulation as it evolves.  We take steps toward understanding and using advanced shading models in the context of interactive particle visualization.  Specifically, the impact of ambient occlusion and physically based diffuse interreflection is investigated using a formal user study.  We find that these shading models provide additional visual cues that enable viewers to better understand subtle features within particle datasets.  We also describe a visualization process that enables interactive navigation and exploration of large particle datasets, rendered with illumination effects from advanced shading models.  Informal feedback from application scientists indicates that the results of this process enhance the data analysis tasks necessary for understanding complex particle datasets.
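For reference, ambient occlusion for a particle dataset can be estimated in its simplest form by casting random hemisphere rays from a surface point and counting how many escape the surrounding particles.  The C++ sketch below shows only that basic estimator; the sampling scheme and names are illustrative assumptions, not the renderer used in the study:

// Illustrative Monte Carlo ambient-occlusion estimate for a point on a particle.
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b)    { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Any hit along the ray blocks ambient light in this simplified model.
bool occluded(Vec3 o, Vec3 d, const std::vector<Vec3>& centers, float r) {
    for (Vec3 c : centers) {
        Vec3 oc = sub(o, c);
        float b = dot(oc, d), disc = b * b - (dot(oc, oc) - r * r);
        if (disc > 0.f && -b - std::sqrt(disc) > 1e-4f) return true;
    }
    return false;
}

// Fraction of hemisphere directions around normal n that reach the "sky".
float ambientOcclusion(Vec3 p, Vec3 n, const std::vector<Vec3>& centers,
                       float r, int samples) {
    std::mt19937 rng(42);
    std::uniform_real_distribution<float> u(-1.f, 1.f);
    int unblocked = 0;
    for (int s = 0; s < samples; ++s) {
        // Rejection-sample a direction in the unit sphere, then flip it into
        // the hemisphere around n (simple but adequate for a sketch).
        Vec3 d;
        float len2;
        do { d = {u(rng), u(rng), u(rng)}; len2 = dot(d, d); } while (len2 > 1.f || len2 < 1e-6f);
        d = scale(d, 1.f / std::sqrt(len2));
        if (dot(d, n) < 0.f) d = scale(d, -1.f);
        if (!occluded(p, d, centers, r)) ++unblocked;
    }
    return float(unblocked) / float(samples);   // 1 = fully open, 0 = buried
}

int main() {
    std::vector<Vec3> centers = {{0, 0, 0}, {0, 2.2f, 0}};  // two nearby particles
    Vec3 p = {0, 1.f, 0}, n = {0, 1.f, 0};                  // top of the first one
    std::printf("AO = %.2f\n", ambientOcclusion(p, n, centers, 1.f, 256));
}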





A Case Study:  Visualizing Material Point Method Data

James Bigler, James Guilkey, Christiaan Gribble, Charles Hansen, and Steven Parker

Eurographics/IEEE-VGTC Symposium on Visualization, May 2006

The Material Point Method is used for complex simulation of solid materials represented using many individual particles.  Visualizing such data using existing polygonal or volumetric methods does not accurately encapsulate both the particle and macroscopic properties of the data.  In this case study, we present various methods used to visualize the particle data as spheres, and we explain and evaluate two methods of augmenting the visualization:  silhouette edges and advanced illumination such as ambient occlusion.  We also present informal feedback received from the application scientists who use these methods in their workflow.
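Silhouette edges for sphere glyphs are commonly extracted in image space by looking for depth discontinuities; the short C++ sketch below marks a pixel as an edge when its depth jumps sharply relative to a neighbor.  This is a generic formulation under an assumed depth-buffer layout, not necessarily the exact operator used in the case study:

// Illustrative image-space silhouette detection from a depth buffer.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

std::vector<unsigned char> silhouetteMask(const std::vector<float>& depth,
                                          int w, int h, float threshold) {
    std::vector<unsigned char> edge(w * h, 0);
    for (int y = 1; y < h - 1; ++y) {
        for (int x = 1; x < w - 1; ++x) {
            float d = depth[y * w + x];
            // Compare against the 4-neighborhood; a large jump means the
            // neighboring pixel belongs to a different particle or background.
            float dl = depth[y * w + x - 1], dr = depth[y * w + x + 1];
            float du = depth[(y - 1) * w + x], dd = depth[(y + 1) * w + x];
            float jump = std::max({std::fabs(d - dl), std::fabs(d - dr),
                                   std::fabs(d - du), std::fabs(d - dd)});
            edge[y * w + x] = jump > threshold ? 255 : 0;
        }
    }
    return edge;   // composite this mask over the shaded image as dark outlines
}

int main() {
    // Tiny 4x4 depth buffer: a near object (depth 1) in front of background (10).
    std::vector<float> depth = {10, 10, 10, 10,
                                10,  1,  1, 10,
                                10,  1,  1, 10,
                                10, 10, 10, 10};
    auto edge = silhouetteMask(depth, 4, 4, 2.0f);
    for (int y = 0; y < 4; ++y, std::puts(""))
        for (int x = 0; x < 4; ++x) std::printf("%c", edge[y * 4 + x] ? '#' : '.');
}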




   
Toward Validation of Advanced Visualization Techniques

Christiaan P. Gribble and Steven G. Parker

Poster, Advanced Simulation and Computing Program Principal Investigator's Meeting, February 2006

Effective visualization of large particle datasets requires communicating subtle changes in three-dimensional structure as a simulation evolves, as well as allowing easier navigation and exploration of the data through interactivity.  Typical interactive visualization systems employ only local shading models when rendering particle datasets.  However, using an experimental study, we demonstrate that effects from diffuse interreflection aid viewers' comprehension of three-dimensional structure and spatial relationships within these datasets.  Using a simple matching task, we show that diffuse interreflection, or more accurate approximations to it, provide additional cues that enable viewers to better understand the details of particle geometry.  We also describe a particle visualization process that takes a first step toward making these effects practical for interactive use.  We show that this process enables interactive visualization of large particle datasets with effects from diffuse interreflection.

2005

 
  

An Experimental Design for Determining the Effects of Illumination Models in Particle Visualization

Christiaan P. Gribble and Steven G. Parker

Poster, ACM SIGGRAPH Second Symposium on Applied Perception in Graphics and Visualization, August 2005

Effective visualization of large particle datasets requires communication of the spatial characteristics of the particles as a simulation progresses.  Typical visualization systems employ local illumination models when rendering such data.  These models rely solely on local information and may not capture other illumination effects that can aid attempts to comprehend the spatial characteristics of complex particle datasets.

We submit that advanced illumination models, for example, ambient occlusion or global illumination, help viewers better understand complex spatial relationships within these datasets.  We propose an experimental design targeting an increased understanding of the perceptual cues provided by advanced illumination models in the context of particle visualization.



    
Memory Sharing for Interactive Ray Tracing on Clusters

David E. DeMarle, Christiaan P. Gribble, Solomon Boulos, and Steven G. Parker

Parallel Computing, February 2005

We present recent results in the application of distributed shared memory to image parallel ray tracing on clusters.  Image parallel rendering is traditionally limited to scenes that are small enough to be replicated in the memory of each node, because any processor may require access to any piece of the scene.  We solve this problem by making all of a cluster's memory available through software distributed shared memory layers.  With gigabit Ethernet connections, this mechanism is sufficiently fast for interactive rendering of multi-gigabyte datasets.  Object- and page-based distributed shared memories are compared, and optimizations for efficient memory use are discussed.
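In heavily reduced form, the read path such a page-based layer gives the renderer looks like the C++ sketch below: translate a global scene address to a page, serve it from the locally resident set when possible, and otherwise fetch it from its owner.  fetchPageFromOwner is a hypothetical stand-in for the real network transfer, and the eviction and ownership policies are omitted:

// Reduced sketch of a page-based software DSM read path for scene data.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <unordered_map>
#include <vector>

constexpr std::size_t kPageSize = 4096;        // bytes per shared page

struct PageCache {
    int rank, nodes;                                                // this node / cluster size
    std::unordered_map<std::uint64_t, std::vector<char>> resident;  // local pages

    // Hypothetical remote fetch: a real layer sends a request to the owning
    // node and blocks (or prefetches) until the page arrives.
    std::vector<char> fetchPageFromOwner(std::uint64_t page) {
        std::printf("node %d: fetching page %llu from node %llu\n",
                    rank, (unsigned long long)page,
                    (unsigned long long)(page % nodes));
        return std::vector<char>(kPageSize, 0);                     // placeholder contents
    }

    // Read `size` bytes of shared scene memory starting at global offset `addr`.
    void read(std::uint64_t addr, void* dst, std::size_t size) {
        char* out = static_cast<char*>(dst);
        while (size > 0) {
            std::uint64_t page = addr / kPageSize;
            std::size_t   off  = addr % kPageSize;
            std::size_t   take = std::min(size, kPageSize - off);
            auto it = resident.find(page);
            if (it == resident.end())                  // miss: pull it over the wire
                it = resident.emplace(page, fetchPageFromOwner(page)).first;
            std::memcpy(out, it->second.data() + off, take);
            addr += take; out += take; size -= take;
        }
    }
};

int main() {
    PageCache dsm{0, 8, {}};
    float vertex[3];                                   // e.g., one scene vertex
    dsm.read(3 * kPageSize + 100, vertex, sizeof vertex);  // first access: miss
    dsm.read(3 * kPageSize + 112, vertex, sizeof vertex);  // same page: cache hit
}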





Practical Global Illumination for Interactive Particle Visualization

Christiaan Gribble, James Bigler, Steven Parker, and Charles Hansen

Poster, School of Computing Research Day, April 2005

Particle methods are commonly used to simulate complex phenomena in many scientific domains.  Thousands or even millions of particles are required to model a system accurately, resulting in very large, very complex datasets.  Effective visualization of this data requires communicating subtle changes in three-dimensional structure as the simulation evolves, as well as allowing easier navigation and exploration of the data through interactivity.  We submit that advanced illumination models, such as ambient occlusion and global illumination, can help viewers understand complex spatial relationships and three-dimensional structure.  The visualization method we describe overcomes the computational cost of these illumination models by moving the illumination calculation out of the interactive rendering pipeline and into a preprocessing step.  In our method, the illumination across each particle is sampled and compressed into a manageable set of textures.  These textures are then reconstructed and mapped to the particles during interactive rendering.
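A heavily simplified version of that preprocessing idea is sketched below in C++: sample the illumination over a small latitude/longitude grid of surface normals for each particle, store the samples as a tiny luminance texture, and replace the illumination with a texture lookup at render time.  The parameterization and the computeIllumination stub are assumptions for illustration, not the compression scheme actually used:

// Reduced sketch: bake per-particle illumination into small textures, then
// look the value up by surface normal during interactive rendering.
#include <cmath>
#include <cstdio>
#include <vector>

constexpr int kRes = 8;                         // 8x8 luminance texels per particle
constexpr float kPi = 3.14159265f;

struct Vec3 { float x, y, z; };

// Hypothetical offline shading: in the real pipeline this is where the costly
// global-illumination sampling happens once, before interaction starts.
float computeIllumination(int /*particle*/, Vec3 n) {
    return 0.5f + 0.5f * n.z;                   // placeholder "sky from +z" term
}

// Map a unit normal to a texel in a latitude/longitude parameterization.
int texelIndex(Vec3 n) {
    float theta = std::acos(std::fmax(-1.f, std::fmin(1.f, n.z)));   // [0, pi]
    float phi   = std::atan2(n.y, n.x) + kPi;                        // [0, 2*pi]
    int row = std::min(kRes - 1, int(theta / kPi * kRes));
    int col = std::min(kRes - 1, int(phi / (2.f * kPi) * kRes));
    return row * kRes + col;
}

// Preprocess: one small luminance texture per particle.
std::vector<std::vector<float>> bake(int numParticles) {
    std::vector<std::vector<float>> textures(numParticles,
                                             std::vector<float>(kRes * kRes));
    for (int p = 0; p < numParticles; ++p)
        for (int row = 0; row < kRes; ++row)
            for (int col = 0; col < kRes; ++col) {
                float theta = (row + 0.5f) / kRes * kPi;
                float phi   = (col + 0.5f) / kRes * 2.f * kPi - kPi;
                Vec3 n = {std::sin(theta) * std::cos(phi),
                          std::sin(theta) * std::sin(phi), std::cos(theta)};
                textures[p][row * kRes + col] = computeIllumination(p, n);
            }
    return textures;
}

int main() {
    auto textures = bake(2);
    Vec3 n = {0.f, 0.f, 1.f};                   // a normal pointing toward +z
    // Interactive rendering: a cheap texture lookup replaces the illumination.
    std::printf("particle 0, lookup = %.2f\n", textures[0][texelIndex(n)]);
}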

2004

 

    
Memory-Savvy Distributed Interactive Ray Tracing

David E. DeMarle, Christiaan P. Gribble, and Steven G. Parker

Eurographics Symposium on Parallel Graphics and Visualization, June 2004

Interactive ray tracing in a cluster environment requires paying close attention to the constraints of a loosely coupled distributed system.  To render large scenes interactively, memory limits and network latency must be addressed efficiently.  In this paper, we improve previous systems by moving to a page-based distributed shared memory layer, resulting in faster and easier access to a shared memory space. The technique is designed to take advantage of the large virtual memory space provided by 64-bit machines.  We also examine task reuse through decentralized load balancing and primitive reorganization to complement the shared memory system.  These techniques improve memory coherence and are valuable when physical memory is limited.
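In outline, the load-balancing side works as in the C++ sketch below: each node first re-renders the image tiles it owned last frame, so the scene data those tiles touched is likely still cached locally, and it pulls work from a neighbor only when its own queue runs dry.  renderTile and stealFromNeighbor are hypothetical stand-ins for the renderer and the cluster messaging layer:

// Outline of tile reuse with decentralized work stealing.
#include <cstdio>
#include <deque>
#include <vector>

struct Tile { int x, y; };

void renderTile(int rank, Tile t) {
    std::printf("node %d renders tile (%d,%d)\n", rank, t.x, t.y);
}

// Hypothetical: ask another node for work; empty result means everyone is idle.
std::deque<Tile> stealFromNeighbor(int /*rank*/) { return {}; }

void renderFrame(int rank, std::deque<Tile>& myTiles) {
    // 1. Task reuse: start with the tiles this node rendered last frame, so the
    //    scene data they reference is likely still resident in the local cache.
    std::deque<Tile> done;
    while (true) {
        if (myTiles.empty()) {
            // 2. Decentralized balancing: pull extra tiles from a neighbor
            //    instead of waiting on a central master.
            myTiles = stealFromNeighbor(rank);
            if (myTiles.empty()) break;          // frame finished everywhere
        }
        Tile t = myTiles.front(); myTiles.pop_front();
        renderTile(rank, t);
        done.push_back(t);
    }
    myTiles = done;                              // remember the assignment for reuse
}

int main() {
    std::deque<Tile> tiles = {{0, 0}, {1, 0}, {0, 1}};   // last frame's tiles
    renderFrame(0, tiles);
    renderFrame(0, tiles);                       // the next frame reuses the same tiles
}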





A Preliminary Evaluation of the Silicon Graphics Onyx4 UltimateVision Visualization System for Large-Scale Parallel Volume Rendering

Christiaan Gribble, Steven Parker, and Charles Hansen

Technical report, School of Computing, University of Utah, UUSOC-04-003, January 2004

Many recent approaches to interactive volume rendering have focused on leveraging the power of commodity graphics hardware.  Though currently limited to relatively small datasets, these approaches have been overwhelmingly successful.  Now, as the size of volumetric datasets continues to grow at a rapid pace, the need for scalable systems that can interactively visualize large-scale datasets has emerged.  Attempting to address this need, SGI, Inc. has introduced the Silicon Graphics Onyx4 UltimateVision family of visualization systems.  We present the results of our preliminary investigation into the utility of an 8-pipe Onyx4 system for large-scale parallel volume rendering.  The attainable frame rates suffer from the system's slow framebuffer readback, and we have found that an 8-node cluster of commodity computers outperforms the Onyx4.

2003

 

    
Distributed Interactive Ray Tracing for Large Volume Visualization

David DeMarle, Steven Parker, Mark Hartner, Christiaan Gribble, and Charles Hansen

IEEE Symposium on Parallel Visualization and Graphics, October 2003

We have constructed a distributed parallel ray tracing system that interactively produces isosurface renderings from large data sets on a cluster of commodity PCs.  The program was derived from the SCI Institute's interactive ray tracer (*-Ray), which utilizes small to large shared memory platforms, such as the SGI Origin series, to interact with very large-scale data sets.  Making this approach work efficiently on a cluster requires attention to numerous system-level issues, especially when rendering data sets larger than the address space of each cluster node.  The rendering engine is an image parallel ray tracer with a supervisor/workers organization.  Each node in the cluster runs a multi-threaded application.  A minimal abstraction layer on top of TCP links the nodes, and enables asynchronous message handling.  For large volumes, render threads obtain data bricks on demand from an object-based software distributed shared memory.  Caching improves performance by reducing the amount of data transfers for a reasonable working set size.  For large data sets, the cluster-based interactive ray tracer performs comparably with an SGI Origin system.  We examine the parameter space of the renderer and provide experimental results for interactive rendering of large (7.5 GB) data sets.





So Much Data, So Little Time...

Charles Hansen, Steven Parker, and Christiaan Gribble

Parallel Computing:  Software Technology, Algorithms, Architectures and Applications, September 2003

Massively parallel computers have been around for the past decade.  With the advent of such powerful resources, scientific computation rapidly expanded the size of computational domains.  With the increased amount of data, visualization software strove to keep pace by implementing parallel visualization tools and parallel rendering that leverage these computational resources.

Tightly coupled ccNUMA parallel processors with attached graphics adapters have shifted visualization research toward leveraging the more unified memory architecture.  Our research at the Scientific Computing and Imaging (SCI) Institute at the University of Utah has focused on innovative, scalable techniques for large-scale 3D visualization.  Real-time ray tracing for isosurfacing has proven to be the most interactive method for large-scale scientific data.  We have also investigated cluster-based volume rendering leveraging multiple nodes of commodity components.





Cluster-Based Interactive Volume Rendering with Simian

Christiaan Gribble, Xavier Cavin, Mark Hartner, and Charles Hansen

Technical report, School of Computing, University of Utah, UUSOC-03-017, September 2003

Commodity-based computer clusters offer a cost-effective alternative to traditional large-scale, tightly coupled computers as a means to provide high-performance computational and visualization services.  The Center for the Simulation of Accidental Fires and Explosions (C-SAFE) at the University of Utah employs such a cluster, and we have begun to experiment with cluster-based visualization services.  In particular, we seek to develop an interactive volume rendering tool for navigating and visualizing large-scale scientific datasets.  Using Simian, an OpenGL volume renderer, we examine two approaches to cluster-based interactive volume rendering:  (1) a "cluster-aware" version of the application that makes explicit use of remote nodes through a message-passing interface, and (2) the unmodified application running atop the Chromium clustered rendering framework.  This paper provides a detailed comparison of the two approaches by carefully considering the key issues that arise when parallelizing Simian.  These issues include the richness of user interaction; the distribution of volumetric datasets and proxy geometry; and the degree of interactivity provided by the image rendering and compositing schemes.  The results of each approach when visualizing two large-scale C-SAFE datasets are given, and we discuss the relative advantages and disadvantages that were considered when developing our cluster-based interactive volume rendering application.




A Survey of the Itanium Architecture from a Programmer's Perspective

Christiaan Paul Gribble and Steven Parker

Technical report, Scientific Computing and Imaging Institute, University of Utah, UUSCI-2003-003, August 2003

The Itanium family of processors represents Intel's foray into the world of Explicitly Parallel Instruction Computing and 64-bit system design.  This survey contains an introduction to the Itanium architecture and instruction set, as well as to some of the available implementations.  We have attempted to distill the relevant information from the thousands of pages of Itanium documentation and reference materials cited at the end of this work by taking a programmer's perspective.

This survey largely follows the structure, form, and content of an excellent book by James Evans and Gregory Trimper, entitled Itanium Architecture for Programmers.  We have, of course, taken the liberty to rearrange the topics, omit the less important details, and expand the most relevant discussions with appropriate information from other sources; in other words, we do more than simply summarize the book.  Nevertheless, we gratefully acknowledge the significant impact that their work has had on this survey.

We cover the following topics in varying levels of detail:
  • the important characteristics of the Itanium architecture,
  • programming with the Itanium instruction set,
  • program performance factors and optimization techniques, and
  • several implementations of the Itanium architecture.

It is not our intention to provide exhaustive discussions of the Itanium architecture, its instruction set, or any of the available implementations.  We have made an effort to include those topics and details that we found most useful during our initial experimentation with the Itanium architecture.  Likewise, where useful or important details have been omitted intentionally, due either to space and formatting constraints or to the intended scope of this work, we have made an effort to cite specific sections and pages within the reference materials that will enhance the included discussion.

Our hope is that this survey will serve as a practical introduction to creating new applications for the Itanium architecture.

2002

 

    
A Visualization Subsystem for the PSC TCS

Christiaan Gribble, James Vasak, and Joel Welling

IEEE Workshop on Commodity-Based Visualization Clusters, October 2002

This brief communication describes our continued efforts to realize a visualization subsystem for the Terascale Computing System.  In particular, we outline our long-term project goals, describe our recent modifications to the system's rendering and communication software, report the initial timing and scaling results obtained with a small test system, and raise important issues that will be the focus of future research.




Parallel Rendering using the Elan Interconnection Network

Christiaan Paul Gribble

Project Report, Information Networking Institute, Carnegie Mellon University, May 2002

The Terascale Computing System (TCS) is a new high-performance machine that was recently installed by researchers at the Pittsburgh Supercomputing Center (PSC).  The TCS was constructed using commodity hardware components that communicate over a high-speed Quadrics Elan interconnection network.  The PSC is also building a visualization subsystem for the TCS using a cluster of high-end workstations equipped with nVidia-based graphics and Quadrics Elan interconnection hardware.  Unfortunately, Quadrics does not provide a high-level programming interface to access the low-level capabilities of the Elan hardware.  In addition, neither WireGL nor Chromium, the software packages being used for graphics rendering, supports the Elan interconnection network.  We describe the operation of a high-level Elan communications interface, called libtcomm, as well as the modified WireGL and Chromium network layers that provide support for the Elan interconnect.  We also consider the initial performance implications of using libtcomm with WireGL.

2001

 

    
Parallel Rendering for the Terascale Computing System

Christiaan Paul Gribble and James Stanley Vasak

Masters Thesis, Information Networking Institute, Carnegie Mellon University, December 2001

The Pittsburgh Supercomputing Center (PSC) has recently installed its newest high-performance machine, the Terascale Computing System (TCS).  Utilizing a cluster-based architecture, the TCS was constructed using commodity components that communicate over a high-speed Quadrics network.  The PSC will construct a visualization subsystem for the TCS using commodity graphics hardware rather than rely on a highly specialized graphics system.  Although scientific visualization is an active field of research, software that meets the specific needs of the TCS architecture is not available.  This thesis builds upon the large body of scientific visualization and computer graphics work to make the first step toward a parallel rendering system that meets the needs of the TCS and its users.  We discuss the modifications necessary to use an existing visualization package, called WireGL, with the TCS.  In particular, we added support for saving image output to files, off-screen rendering, and the Quadrics interconnect network.  Furthermore, we implemented a sort-last rendering algorithm to improve the system's performance.  We also present an evaluation of our initial implementation and describe areas of work to extend and enhance this parallel rendering system.
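For opaque geometry, sort-last rendering reduces to a per-pixel depth comparison when the partial images are merged, as in the minimal C++ sketch below (the WireGL and Quadrics plumbing is omitted, and the buffer layout is an assumption):

// Minimal sort-last composite for opaque geometry: keep, per pixel, the color
// from whichever node produced the nearest depth.
#include <cstdio>
#include <vector>

struct Image {
    int w, h;
    std::vector<float> depth;            // one depth per pixel (larger = farther)
    std::vector<unsigned char> color;    // RGB, 3 bytes per pixel
};

// Fold `partial` into `accum`; the two come from different render nodes.
void compositeDepth(Image& accum, const Image& partial) {
    for (int i = 0; i < accum.w * accum.h; ++i) {
        if (partial.depth[i] < accum.depth[i]) {         // partial pixel is nearer
            accum.depth[i] = partial.depth[i];
            for (int c = 0; c < 3; ++c)
                accum.color[3 * i + c] = partial.color[3 * i + c];
        }
    }
}

int main() {
    // Two 1x2 partial frames: node A covers pixel 0, node B covers pixel 1.
    Image a{1, 2, {1.0f, 9.9f}, {255, 0, 0,   0, 0, 0}};
    Image b{1, 2, {9.9f, 2.0f}, {0, 0, 0,     0, 255, 0}};
    compositeDepth(a, b);                 // in a cluster this runs as a reduction
    std::printf("pixel0 R=%d  pixel1 G=%d\n", a.color[0], a.color[4]);
}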

 


 

Copyright © 2000-2023 Christiaan Gribble
All rights reserved