Haptic Feedback from Volumetric Data
Volume visualization is rapidly becoming an indispensable tool in the analysis of the vast amount of information contained in volumetric data. The development of efficient tools that support the analysis and filtering of this data continues to pose research challenges. Evaluations of simple, well defined tasks have shown that haptics has the potential to significantly increase both the speed and the accuracy of human-computer interaction. Our sense of touch and kinaesthetics is also capable of providing large amounts of information about the location, structure, stiffness and other material properties of objects, information that can be hard to represent visually.
This project aims to develop and implement methods and algorithms for effective haptic feedback from volumetric data, primarily in scientific and medical visualization. Preliminary results include the introduction of passive, constraint-based haptic feedback, and the development of a more general, versatile and intuitive abstraction layer towards volume haptics using haptic primitives.
Haptic interaction, or haptic force feedback, is the technology that enables physical touch in computer environments. The word haptics comes from the Greek word αφή, which can be translated as touch, the sense, or απτός, which means tangible. Haptic feedback is a growing market, not only in the gaming industry but also in medical research. Lately, medical training equipment with haptic feedback has begun to appear in hospitals, although still at the trial stage. Early tests have shown not only that physicians can improve their performance in virtual environments, but also that this can, to some extent, translate into improved real-world performance.
Recent research has also shown that the extra information bandwidth provided by haptics can be useful in examination and exploration environments. Providing an analyst with a multi-modal environment can significantly increase the speed and accuracy of exploration, whether the analyst is a geologist examining earth data or a physician or radiologist examining a scan of a patient.
Most haptic interaction until now has concentrated on surface models, which is consequently where most force-feedback algorithms are found. Examining real data in scientific visualization requires direct haptic rendering from volumetric data, and such methods are still rare. Present algorithms for volume haptics provide only simple haptic models. New schemes and techniques are required to make the interaction between user and data clear, intuitive and fast while conveying as much information as possible.
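A classic example of such a simple haptic model is to render a force proportional to the negative gradient of the sampled scalar field, so that the device pushes the probe away from denser regions. The sketch below illustrates that idea in plain Python with NumPy; the synthetic volume, the sampling step and the stiffness constant are illustrative assumptions, not values from this project.

```python
import numpy as np

def trilinear_sample(vol, p):
    """Sample a scalar volume at a continuous position p (voxel coordinates)."""
    i0 = np.clip(np.floor(p).astype(int), 0, np.array(vol.shape) - 2)
    f = p - i0
    val = 0.0
    for d in np.ndindex(2, 2, 2):
        # weight of each of the 8 surrounding voxels
        w = np.prod(np.where(np.array(d) == 1, f, 1.0 - f))
        val += w * vol[tuple(i0 + np.array(d))]
    return val

def gradient(vol, p, h=0.5):
    """Central-difference estimate of the field gradient at p."""
    g = np.zeros(3)
    for a in range(3):
        e = np.zeros(3)
        e[a] = h
        g[a] = (trilinear_sample(vol, p + e) - trilinear_sample(vol, p - e)) / (2.0 * h)
    return g

def simple_force(vol, p, k=1.0):
    """Gradient-based force: push the probe toward lower density."""
    return -k * gradient(vol, p)
```

Because the force depends only on the local sample, this kind of model gives a viscous, field-like sensation rather than a distinct surface, which is one motivation for the constraint-based methods developed in this project.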
The preliminary progress in this project involves proxy-based surface simulation from volumetric density data, such as computed tomography (CT) data. More recently, a more general proxy-based scheme has been developed that allows for specialized haptic feedback from higher order data, such as vector or tensor data, as well as from scalar data. This approach also allows haptic feedback from multi-field data, such as co-registered scalar and vector data.
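The core idea behind proxy-based rendering can be sketched in a few lines: a virtual proxy point follows the haptic probe freely in empty space but is held back at a surface, and the feedback force is a spring between proxy and probe. The minimal Python sketch below is a simplified illustration under assumed parameters (the `inside` predicate, the bisection step and the stiffness value are hypothetical), not the project's actual algorithm, which also handles sliding along the surface and higher order data.

```python
import numpy as np

STIFFNESS = 300.0  # spring constant coupling proxy and probe (illustrative)

def update_proxy(proxy, probe, inside):
    """One step of a minimal proxy update.

    inside(p) -> True when p has penetrated the simulated surface.
    The proxy is assumed to start on the outside of the surface.
    """
    if not inside(probe):
        return probe.copy()  # free motion: the proxy simply follows the probe
    # Constrained motion: move the proxy toward the probe, but stop it at
    # the surface by bisecting the proxy->probe segment for the crossing.
    lo, hi = 0.0, 1.0
    for _ in range(32):
        mid = 0.5 * (lo + hi)
        if inside(proxy + mid * (probe - proxy)):
            hi = mid
        else:
            lo = mid
    return proxy + lo * (probe - proxy)

def feedback_force(proxy, probe):
    """Spring force pulling the probe back toward the constrained proxy."""
    return STIFFNESS * (proxy - probe)
```

When the probe penetrates the surface the proxy stays just outside it, so the rendered force grows with penetration depth, giving the distinct, stable surface sensation that purely gradient-based models lack.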
The efforts in related projects, in which these methods are implemented and applied, have resulted in a toolkit for multi-modal exploration of volumetric data. See related projects for more information about the toolkit.
In our research on volume haptics we use a Reachin Display. The display is equipped with a stereoscopic CRT monitor, which is reflected in a mirror for co-location of haptic and visual feedback. The haptic device in the display is a Desktop PHANToM from SensAble, and a 3D mouse from 3Dconnexion is provided for rotating models in the virtual environment. The stereoscopic display uses an active stereo approach with CrystalEyes glasses, which receive stereo synchronization signals from an IR emitter connected to the computer.
The programming of the Reachin Display is usually done using the Reachin API, formerly known as the Magma API. We are, however, currently moving our development to the H3D API. The H3D API is cross-platform and available under the GNU GPL for both commercial and non-commercial development. It provides a structure similar to the Reachin API, supporting both high and low level programming through scene-graph modelling, Python scripting and a C++ interface, but with X3D as its base instead of the now obsolete VRML format.