Program

Thursday, Aug 17

12:30 – 13:00 Registration
13:00 – 13:15 Opening
13:15 – 14:00 Keynote I / II
14:00 – 15:10 Swedish Research Overview Session I
15:10 – 15:30 Coffee Break
15:30 – 16:15 Keynote III
16:15 – 17:25 Swedish Research Overview Session II
17:30 – 18:00 Mingle, Visualization Center C
18:00 – 19:00 Dome Show
19:00 – 20:30 Reception, Visualization Center C

Friday, Aug 18

9:00 – 10:00 Paper Session I
10:00 – 10:20 Coffee Break
10:20 – 11:00 Paper Session II
11:00 – 11:45 Industrial Keynote
11:45 – 12:00 Closing Remarks

Keynote I

Task-Based Parallelization for Visualization Algorithms
Christoph Garth, University of Kaiserslautern, Germany

Keynote II

Visual Integration of Molecular and Cell Biology
Ivan Viola, Vienna University of Technology, Austria

Keynote III

The role of visualization in the world of AI
Claes Lundström, CMIV, Linköping University

Abstract

Artificial intelligence (AI), in particular deep learning, is considered to have the potential to revolutionize many domains. Even though some inflated expectations will prove unrealistic, there are many examples that clearly show what a groundbreaking impact AI will have. But does visualization have a role to play in a world dominated by automated analytics? This talk will cover a few aspects of this issue in the context of applications from medical imaging diagnostics.

Industrial Keynote II

Games are Defining the Future
Samuel Ranta Eskola, Microsoft

Abstract

In just a couple of decades, the games industry has moved from a Jolt-cola-drinking basement culture to pushing technological invention all around the world. There are many examples of technologies and ideas that have been pushed forward within the games industry.

One example is Simplygon, which was spawned as a technology for the games industry; in 2017, the team joined Microsoft to take part in the development of 3D for everyone. We’ll also look at technologies like the GPU, which was pushed forward by games and is now used in cancer treatment; how VR spawned in many shapes and forms in games and is now driving car sales; and how the Kinect, developed by game developers, found many use cases outside of games and later morphed into the HoloLens.

We’ll use our spyglass to consider how games will affect our future as well.

Paper Session I

Each paper slot will have 15 minutes of presentation and 5 minutes for questions from the audience.

The papers can be accessed here.

  • High-Quality Real-Time Depth-Image-Based-Rendering
    Jens Ogniewski

    Abstract

    With depth sensors becoming more and more common, and applications with varying viewpoints (such as virtual reality) becoming more and more popular, there is a growing demand for real-time depth-image-based rendering algorithms that reach a high quality.

    Starting from a quality-wise top-performing depth-image-based renderer, we develop a real-time version. While reaching a similarly high quality, the new OpenGL-based renderer decreases runtime by at least two orders of magnitude. This was made possible by discovering similarities between forward-based and mesh-based rendering, which enabled us to remove the common parallelization bottleneck of competing memory access, and was facilitated by the implementation of accurate yet fast algorithms for the different parts of the rendering pipeline.

    We evaluated the proposed renderer using a publicly available dataset with ground-truth depth and camera data, which contains rapid camera movements and rotations as well as complex scenes, and is therefore challenging to project accurately. (An illustrative sketch of forward warping follows this session's paper list.)

  • Treating Presence as a Noun — Insights Obtained from Comparing a VE and a 360° Video
    Martina Tarnawski

    Abstract

    With 360° videos becoming more commercially available, more research is needed to evaluate how they are perceived by users. In this study we compare a low-budget computer-generated virtual environment to a low-budget 360° video viewed in VR mode. The Igroup Presence Questionnaire (IPQ), discomfort scores and semi-structured interviews were used to investigate differences and similarities between the two environments. The most fruitful results were obtained from the interviews. The interviews highlight problematic aspects of presence, such as the difficulty of separating reality, real and realistic, which led to a reconsideration of presence as a concept. The conclusions are that VR research would benefit from treating presence as a noun, the feeling of “being there”, instead of as a unitary concept. We also argue that presence should not by default be considered a goal of a VR experience or VR research.

  • From Visualization Research to Public Presentation – Design and Realization of a Scientific Exhibition
    Michael Krone, Karsten Schatz, Nora Hieronymus, Christoph Müller, Michael Becher, Tina Barthelmes, April Cooper, Steffen Currle, Patrick Gralka, Marcel Hlawatsch, Lisa Pietrzyk, Tobias Rau, Guido Reina, Rene Trefft and Thomas Ertl

    Abstract

    In this paper, we present the design considerations of a scientific exhibition we recently realized. The exhibition presented the work of two large research projects related to computer simulations, which include scientific visualization as an essential part of the involved research. Consequently, visualization was also of central importance for our exhibition: it was not only used to illustrate the complex simulation data and convey information about the results from the application domains, but we also wanted to teach visitors about visualization itself. Therefore, explaining the purpose and the challenges of visualization research was a significant part of the exhibition. We describe how we developed an engaging experience of a highly theoretical topic using the same visualization tools we developed for the application scientists, and how we integrated the venue into our design. Finally, we discuss our insights from the project as well as visitor feedback.
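
As a concrete companion to the first paper above (Ogniewski), here is a minimal CPU sketch of forward depth-image-based rendering: each source pixel is unprojected with its depth value and reprojected into the target camera, with a z-buffer resolving the competing writes that make naive parallelization hard. The pinhole model and all names are illustrative assumptions, not the paper's actual implementation.

    import numpy as np

    def forward_warp(color, depth, K_src, K_dst, R, t):
        # Warp a color+depth image into a new view (pinhole model).
        # color: (H, W, 3), depth: (H, W) metric depth, K_*: 3x3
        # intrinsics, R/t: rotation and translation from source to
        # target camera. Illustrative CPU code; a real-time renderer
        # would do this on the GPU, e.g. as a mesh over depth pixels.
        H, W = depth.shape
        u, v = np.meshgrid(np.arange(W), np.arange(H))
        pix = np.stack([u, v, np.ones_like(u)], -1).reshape(-1, 3).T
        # Unproject every source pixel to a 3D point, then transform
        # into the target camera and project back to the image plane.
        pts = (np.linalg.inv(K_src) @ pix) * depth.reshape(-1)
        pts = R @ pts + t.reshape(3, 1)
        proj = K_dst @ pts
        z = proj[2]
        front = z > 1e-9
        x = np.full(z.shape, -1, dtype=int)
        y = np.full(z.shape, -1, dtype=int)
        x[front] = np.round(proj[0, front] / z[front]).astype(int)
        y[front] = np.round(proj[1, front] / z[front]).astype(int)
        valid = front & (x >= 0) & (x < W) & (y >= 0) & (y < H)
        # Z-buffered scatter: where target pixels collide, the
        # nearest surface wins (the "competing memory access" that a
        # parallel implementation has to handle).
        out = np.zeros_like(color)
        zbuf = np.full((H, W), np.inf)
        src = color.reshape(-1, 3)
        for i in np.flatnonzero(valid):
            if z[i] < zbuf[y[i], x[i]]:
                zbuf[y[i], x[i]] = z[i]
                out[y[i], x[i]] = src[i]
        return out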

Paper Session II

Each paper slot will have 15 minutes of presentation and 5 minutes for questions from the audience.

The papers can be accessed here.

  • Evaluating the Influence of Stereoscopy on Cluster Perception in Scatterplots
    Christian van Onzenoodt, Julian Kreiser, Dominique Heer and Timo Ropinski

    Abstract

    Unlike 2D scatterplots, which only visualize 2D data, 3D scatterplots have the advantage of showing an additional dimension of data. However, cluster analysis can be difficult for the viewer since it is challenging to perceive depth in 3D scatterplots. In addition, 3D scatterplots suffer from overdraw and require more time for perception than their 2D equivalents. As an approach to this issue, stereoscopic rendering of three-dimensional point-based scatterplots is evaluated through a user study. In detail, participants’ ability to make precise judgements about the positions of clusters was explored. 2D scatterplots were compared to non-stereoscopic 3D and stereoscopic 3D scatterplots. The results showed that perceptual performance decreased for 3D scatterplots in general, as opposed to 2D scatterplots. A tendency towards improved perception showed when comparing stereoscopic 3D scatterplots to non-stereoscopic ones.

  • Concepts of Hybrid Data Rendering
    Torsten Gustafsson, Wito Engelke, Rickard Englund and Ingrid Hotz

    Abstract

    We present a concept for hybrid data rendering based on an A-buffer approach. With this, our system is capable of rendering multiple data sets of varying types in one scene and with correct transparency. In scientific visualization, there is often a need to combine data sets from multiple sources to gain a more complete understanding of the data itself. The problem is that the underlying rendering technique depends on the data set's type, and combining these different techniques in one scene is not a straightforward task. We solve this problem by using an A-buffer based approach to gather color and transparency information from the different sources, combine them, and generate the final output image. (A minimal sketch of this compositing idea follows directly below.)
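
To make the A-buffer concept from the abstract above concrete: each renderer, whatever data type it handles, appends (depth, color, opacity) fragments to a per-pixel list, and a final resolve pass sorts every list by depth and blends front to back. Names and structure below are assumptions for illustration; the actual system works on the GPU.

    import numpy as np

    H, W = 2, 2
    # One fragment list per pixel; each renderer (mesh, volume, ...)
    # appends (depth, rgb, alpha) tuples in whatever order it likes.
    abuffer = [[[] for _ in range(W)] for _ in range(H)]

    def emit_fragment(x, y, depth, rgb, alpha):
        abuffer[y][x].append((depth, np.asarray(rgb, float), alpha))

    def resolve(background=(0.0, 0.0, 0.0)):
        # Sort each pixel's fragments by depth, blend front to back.
        image = np.zeros((H, W, 3))
        for y in range(H):
            for x in range(W):
                color = np.zeros(3)
                transmittance = 1.0  # light still passing through
                for d, rgb, a in sorted(abuffer[y][x], key=lambda f: f[0]):
                    color += transmittance * a * rgb
                    transmittance *= 1.0 - a
                image[y, x] = color + transmittance * np.asarray(background)
        return image

    # Two semi-transparent fragments from two different "renderers":
    emit_fragment(0, 0, depth=1.0, rgb=(1, 0, 0), alpha=0.5)  # near, red
    emit_fragment(0, 0, depth=2.0, rgb=(0, 0, 1), alpha=0.5)  # far, blue
    print(resolve()[0, 0])  # -> [0.5  0.   0.25]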

Swedish Research Overview Session I

Each paper slot will have 15 minutes of presentation and 2-3 minutes for questions from the audience.

  • MVN-Reduce: Dimensionality Reduction for the Visual Analysis of Multivariate Networks
    R. M. Martins, J. F. Kruiger, R. Minghim, A. C. Telea, and A. Kerren
    EuroVis 2017 (Short Paper). pdf

    Abstract

    The analysis of Multivariate Networks (MVNs) can be approached from two different perspectives: a multidimensional one, consisting of the nodes and their multiple attributes, or a relational one, consisting of the network’s topology of edges. In order to be comprehensive, a visual representation of an MVN must be able to accommodate both. In this paper, we propose a novel approach for the visualization of MVNs that works by combining these two perspectives into a single unified model, which is used as input to a dimensionality reduction method. The resulting 2D embedding takes into consideration both attribute- and edge-based similarities, with a user-controlled trade-off. We demonstrate our approach by exploring two real-world data sets: a co-authorship network and an open-source software development project. The results show that our method is able to bring forward features of MVNs that could not easily be perceived by investigating the individual perspectives alone. (A small sketch of this combined-similarity idea follows after this session's list.)

  • Towards Perceptual Optimization of the Visual Design of Scatterplots
    L. Micallef, G. Palmas, A. Oulasvirta, and T. Weinkauf
    IEEE Transactions on Visualization and Computer Graphics (Proc. IEEE PacificVis) 23(6), June 2017. pdf

    Abstract

    Designing a good scatterplot can be difficult for non-experts in visualization, because they need to decide on many parameters, such as marker size and opacity, aspect ratio, color, and rendering order. This paper contributes to research exploring the use of perceptual models and quality metrics to set such parameters automatically for enhanced visual quality of a scatterplot. A key consideration in this paper is the construction of a cost function to capture several relevant aspects of the human visual system, examining a scatterplot design for some data analysis task. We show how the cost function can be used in an optimizer to search for the optimal visual design for a user’s dataset and task objectives (e.g., “reliable linear correlation estimation is more important than class separation”). The approach is extensible to different analysis tasks. To test its performance in a realistic setting, we pre-calibrated it for correlation estimation, class separation, and outlier detection. The optimizer was able to produce designs that achieved a level of speed and success comparable to that of human-designed presets (e.g., in R or MATLAB). Case studies demonstrate that the approach can adapt a design to the data, to reveal patterns without user intervention.

  • Transfer Function Design Toolbox for Full-Color Volume Datasets
    M. Falk, I. Hotz, P. Ljung, D. Treanor, A. Ynnerman, C. Lundström
    IEEE Pacific Visualization Symposium (PacificVis 2017), 2017. pdf

    Abstract

    In this paper, we tackle the challenge of effective Transfer Function (TF) design for Direct Volume Rendering (DVR) of full-color datasets. We propose a novel TF design toolbox based on color similarity, which is used to adjust opacity as well as to replace colors. We show that both CIE L*u*v* chromaticity and the chroma component of YCbCr are equally suited as underlying color space for the TF widgets. In order to maximize the area utilized in the TF editor, we renormalize the color space based on the histogram of the dataset. Thereby, colors representing a higher share of the dataset are depicted more prominently, thus providing a higher sensitivity for fine-tuning TF widgets. The applicability of our TF design toolbox is demonstrated by volume ray casting of challenging full-color volume data, including the visible male cryosection dataset and examples from 3D histology.

  • Global Feature Tracking and Similarity Estimation in Time-Dependent Scalar Fields
    H. Saikia, T. Weinkauf
    Computer Graphics Forum (Proc. EuroVis) 36(3), June 2017. pdf

    Abstract

    We present an algorithm for tracking regions in time-dependent scalar fields that uses global knowledge from all time steps for determining the tracks. The regions are defined using merge trees, thereby representing a hierarchical segmentation of the data in each time step.

    The similarity of regions in two consecutive time steps is measured using their volumetric overlap and a histogram difference. The main ingredient of our method is a directed acyclic graph that records all relevant similarity information as follows: the regions of all time steps are the nodes of the graph, the edges represent possible short feature tracks between consecutive time steps, and the edge weights are given by the similarity of the connected regions. We compute a feature track as the global solution of a shortest path problem in the graph. We use these results to steer what is, to the best of our knowledge, the first algorithm for spatio-temporal feature similarity estimation. Our algorithm works for 2D and 3D time-dependent scalar fields. We compare our results to previous work, showcase its robustness to noise, and exemplify its utility using several real-world data sets. (A toy sketch of shortest-path tracking follows directly below this list.)
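
The tracking graph from the abstract directly above can be illustrated with a toy sketch: nodes are (time step, region) pairs, edge weights are dissimilarities between regions of consecutive time steps, and a feature track is a shortest path through the resulting directed acyclic graph. The graph below is made-up data, and plain Dijkstra stands in for whichever shortest-path solver the authors use.

    import heapq

    # Toy tracking graph: nodes are (time_step, region_id); weights
    # are dissimilarities, e.g. built from volume overlap and
    # histogram difference as in the paper (values here are made up).
    edges = {
        (0, "A"): {(1, "A"): 0.1, (1, "B"): 0.7},
        (1, "A"): {(2, "A"): 0.2, (2, "B"): 0.9},
        (1, "B"): {(2, "B"): 0.3},
        (2, "A"): {}, (2, "B"): {},
    }

    def best_track(source, targets):
        # A globally optimal feature track is a shortest path in the
        # DAG from a region to any region in the final time step.
        dist, prev = {source: 0.0}, {}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue  # stale queue entry
            for v, w in edges[u].items():
                if d + w < dist.get(v, float("inf")):
                    dist[v], prev[v] = d + w, u
                    heapq.heappush(heap, (d + w, v))
        end = min((t for t in targets if t in dist), key=dist.get)
        track = [end]
        while track[-1] in prev:
            track.append(prev[track[-1]])
        return track[::-1], dist[end]

    print(best_track((0, "A"), [(2, "A"), (2, "B")]))
    # -> ([(0, 'A'), (1, 'A'), (2, 'A')], 0.3)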
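
Looking back at the MVN-Reduce entry at the top of this list, the unified model can be pictured as a weighted mix of an attribute-space distance matrix and a graph-distance matrix, handed to a dimensionality reduction method. The sketch below fills in assumed details (Euclidean attribute distances, hop-count graph distances on a connected graph, classical MDS); the paper's actual formulation may differ.

    import numpy as np

    def mvn_reduce(attrs, adjacency, trade_off=0.5):
        # attrs: (n, k) node attributes; adjacency: (n, n) 0/1 matrix
        # of an undirected, connected graph; trade_off in [0, 1]
        # mixes attribute-based (0) and edge-based (1) similarity.
        n = len(attrs)
        # Attribute-space distances (Euclidean).
        d_attr = np.linalg.norm(attrs[:, None] - attrs[None, :], axis=-1)
        # Graph distances: shortest hop counts via Floyd-Warshall.
        d_graph = np.where(adjacency > 0, 1.0, np.inf)
        np.fill_diagonal(d_graph, 0.0)
        for k in range(n):
            d_graph = np.minimum(d_graph, d_graph[:, [k]] + d_graph[[k], :])
        # Normalize both matrices, then mix with the user's trade-off.
        d = ((1 - trade_off) * d_attr / d_attr.max()
             + trade_off * d_graph / d_graph.max())
        # Classical MDS: double-center the squared distances and keep
        # the top-2 eigenpairs as the 2D embedding.
        J = np.eye(n) - np.ones((n, n)) / n
        B = -0.5 * J @ (d ** 2) @ J
        vals, vecs = np.linalg.eigh(B)
        top = np.argsort(vals)[-2:]
        return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

    # Example: 4 nodes on a path graph with 2D attributes.
    A = np.array([[0, 1, 0, 0], [1, 0, 1, 0],
                  [0, 1, 0, 1], [0, 0, 1, 0]], float)
    X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
    print(mvn_reduce(X, A, trade_off=0.3).shape)  # (4, 2)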

Swedish Research Overview Session II

Each paper slot will have 15 minutes of presentation and 2-3 minutes for questions from the audience.

  • SAH guided spatial split partitioning for fast BVH construction
    P. Ganestam and M. Doggett
    Computer Graphics Forum (Proceedings of Eurographics), Volume 35, No. 2, 2016. pdf

    Abstract

    We present a new SAH guided approach to subdividing triangles as the scene is coarsely partitioned into smaller sets of spatially coherent triangles. Our triangle split approach is integrated into the partitioning stage of a fast BVH construction algorithm, but may also be used as a stand-alone pre-split pass. Our algorithm significantly reduces the number of split triangles compared to previous methods, while at the same time improving ray tracing performance compared to competing fast BVH construction techniques. We compare performance using Intel’s Embree ray tracer and show that BVH construction with our splitting algorithm is always faster than Embree’s pre-split construction algorithm. We also show that our algorithm builds trees of significantly improved quality that deliver higher ray tracing performance. Our algorithm is implemented in Embree’s open source ray tracing framework, and the source code will be released late 2015.

  • Correlated Photon Mapping for Interactive Global Illumination of Time-Varying Volumetric Data
    D. Jönsson, A. Ynnerman
    IEEE Transactions on Visualization and Computer Graphics (TVCG), Volume 23, No. 1, 2017. pdf

    Abstract

    We present a method for interactive global illumination of both static and time-varying volumetric data, based on reducing the overhead associated with re-computation of photon maps. Our method identifies photon traces that are invariant to changes of visual parameters such as the transfer function (TF), or to data changes between time-steps in a 4D volume. This lets us operate on only the variant subset of the entire photon distribution. The amount of computation required in the two stages of the photon mapping process, namely tracing and gathering, can thus be reduced to the subset that is affected by a data or visual parameter change. We rely on two different types of information from the original data to identify the regions that have changed: a low-resolution uniform grid containing the minimum and maximum data values of the original data is derived for each time step, and, for two consecutive time-steps, a low-resolution grid containing the difference between the overlapping data is used. We show that this compact metadata can be combined with the transfer function to identify the regions that have changed. Each photon traverses the low-resolution grid to determine whether it can be directly transferred to the next photon distribution state or needs to be recomputed. An efficient representation of the photon distribution is presented, leading to an order-of-magnitude performance improvement of the raycasting step. The utility of the method is demonstrated in several examples that show visual fidelity as well as performance. The examples show that visual quality can be retained when the fraction of retraced photons is as low as 40%-50%. (A minimal sketch of the TF change test follows at the end of this list.)

  • On nonlocal image completion using an ensemble of dictionaries
    E. Miandji, J. Unger
    IEEE International Conference on Image Processing, 2016. pdf

    Abstract

    In this paper we consider the problem of nonlocal image completion from random measurements and using an ensemble of dictionaries. Utilizing recent advances in the field of compressed sensing, we derive conditions under which one can uniquely recover an incomplete image with overwhelming probability. The theoretical results are complemented by numerical simulations using various ensembles of analytical and training-based dictionaries.

  • A high dynamic range video codec optimized by large-scale testing
    G. Eilertsen, R.K. Mantiuk, J. Unger
    IEEE International Conference on Image Processing, 2016. pdf

    Abstract

    While a number of existing high-bit-depth video compression methods can potentially encode high dynamic range (HDR) video, few of them provide this capability. In this paper, we investigate techniques for adapting HDR video for this purpose. In a large-scale test on 33 HDR video sequences, we compare two video codecs, four luminance encoding techniques (transfer functions) and three color encoding methods, measuring quality in terms of two objective metrics, PU-MSSIM and HDR-VDP-2. From the results, we design an open-source HDR video encoder optimized for the best compression performance given the techniques examined.
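
Finally, the change-detection idea in the correlated photon mapping entry above (Jönsson and Ynnerman) can be sketched as follows: each photon keeps the minimum and maximum data values of the coarse grid cells it traversed, and after a transfer function edit only photons whose value range overlaps a changed part of the TF are retraced. The RGBA lookup-table representation and all names below are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def tf_changed_in_range(tf_old, tf_new, vmin, vmax):
        # Has the transfer function changed for any data value in
        # [vmin, vmax]? TFs are (n_bins, 4) RGBA tables over [0, 1].
        n = len(tf_old)
        lo = int(np.clip(np.floor(vmin * (n - 1)), 0, n - 1))
        hi = int(np.clip(np.ceil(vmax * (n - 1)), 0, n - 1))
        return not np.allclose(tf_old[lo:hi + 1], tf_new[lo:hi + 1])

    def photons_to_retrace(photons, tf_old, tf_new):
        # Each photon stores the min/max data values of the coarse
        # grid cells it passed through; only photons overlapping a
        # changed part of the TF are retraced, the rest are reused.
        return [p for p in photons
                if tf_changed_in_range(tf_old, tf_new, p["vmin"], p["vmax"])]

    tf_old = np.zeros((256, 4))
    tf_new = tf_old.copy()
    tf_new[200:, 3] = 0.8  # user raised the opacity of high values
    photons = [{"id": 0, "vmin": 0.1, "vmax": 0.3},   # untouched
               {"id": 1, "vmin": 0.6, "vmax": 0.9}]   # overlaps change
    print([p["id"] for p in photons_to_retrace(photons, tf_old, tf_new)])
    # -> [1]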