In Sweden, Sjostrom (1997) and his colleagues have created a painting application in which the PHANToM can be used by the visually impaired; line thickness varies with the force the user applies to the fingertip thimble, and colors are discriminated by their tactual profiles. Marcy, Temkin, Gorman, and Krummel (1998) have developed the Tactile Max, a PHANToM plug-in for 3D Studio Max. Dynasculpt, a prototype from Interval Research Corporation (Snibbe, Anderson, and Verplank, 1998), permits sculpting in three dimensions by attaching a virtual mass to the PHANToM position and constructing a ribbon along the mass's path through 3D space. Gutierrez, Barbero, Aizpitarte, Carrillo, and Eguidazu (1998) have integrated the PHANToM into DATum, a geometric modeller.
Objects can be touched, moved, or grasped (with two PHANToMs), and the assembly/disassembly of mechanical objects can be simulated. Haptics has also been incorporated into scientific visualization. Durbeck, Macias, Weinstein, Johnson, and Hollerbach (1998) have interfaced SCIRun, a computational steering system, to the PHANToM; both the haptic and graphic displays are directed by the movement of the PHANToM stylus through haptically rendered data volumes. Similar systems have been developed for geoscientific applications (e.g., the Haptic Workbench; Veldkamp, Turner, Gunn, and Stevenson, 1998). Green and Salisbury (1998) have produced a convincing soil simulation in which they varied parameters such as soil properties, plow blade geometry, and angle of attack.
At Interactive Simulations, a San Diego-based company, researchers have succeeded in adding a haptic feedback component to Sculpt, a program for analyzing chemical and biological molecular structures, which will permit analysis of molecular conformational flexibility and interactive docking.
Acquisition of 3D models
Several commercial 3D digitizing cameras are available for applications like the museum, such as the ColorScan and Virtuoso shape cameras. The latter uses six digital cameras: five black-and-white cameras that capture shape information and one color camera that acquires texture information, which is layered onto the triangle mesh. Our digitization process begins with models acquired from photographs, using a semiautomatic system developed at IMSC for inferring complex 3D shapes from photographs (Chen, 1998, 1999). Images are used as the rendering primitives, beginning with six input images of our teapots at 60-degree separations; multiple input pictures are allowed, taken from nearby viewpoints with different positions, orientations, and camera focal lengths. Other comparable approaches to digitizing museum objects (e.g., Synthonics) use an older shape-from-stereo technology that requires the cameras to be recalibrated whenever the focal length or relative position of the two cameras is changed.
The direct output of the IMSC program is volumetric but is converted to a surface representation for graphic rendering. The reconstructed surfaces are quite large, on the order of 40 MB, so they are decimated with a modified version of a program for surface simplification using quadric error metrics written by Garland and Heckbert (1997).
Figure 3. Teapot digitization: one of six input views; an image of the reconstructed point set; an image of the omnidirectional solid model (reconstructed surface).
Pai and Reissell (1997) report on a wavelet-based technique for multiresolution modeling of 2D shapes. The models rely on a robust edge detector to detect boundary curves in the image.
These curves are then rendered as solid objects using a haptic interface. The system also incorporates a fast contact detection algorithm based on collision trees, and the paper discusses a state machine that serves as a simple model for contact transitions and hence force computation. Volumetric data is used extensively in medical imaging and scientific visualization. Currently the GHOST SDK, the development toolkit for the PHANToM, construes the haptic environment as scenes composed of geometric primitives. Huang, Qu, and Kaufman of SUNY Stony Brook have developed a new interface, based on volumetric objects, that supports volume rendering with haptic interaction.
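As a rough illustration of haptic rendering driven directly by volume data, the sketch below derives a resisting force magnitude from trilinearly interpolated voxel density. The function names, data layout, and stiffness constant are assumptions for illustration, not the actual API of the Stony Brook interface.

```python
# Hypothetical sketch: haptic stiffness derived solely from voxel density,
# in the spirit of volume-based haptic interaction. The nested-list volume
# layout and the stiffness_scale constant are illustrative assumptions.

def trilinear_sample(volume, x, y, z):
    """Sample a 3-D density grid (nested lists, volume[x][y][z]) at a
    fractional position using trilinear interpolation."""
    x0, y0, z0 = int(x), int(y), int(z)
    fx, fy, fz = x - x0, y - y0, z - z0
    acc = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                # Weight each corner by its distance to the sample point.
                w = ((fx if dx else 1 - fx) *
                     (fy if dy else 1 - fy) *
                     (fz if dz else 1 - fz))
                acc += w * volume[x0 + dx][y0 + dy][z0 + dz]
    return acc

def resisting_force(volume, pos, stiffness_scale=1.0):
    """Map the local voxel density at the probe position to a scalar
    resisting force magnitude (denser voxels feel stiffer)."""
    density = trilinear_sample(volume, *pos)
    return stiffness_scale * density
```

A real implementation would run such sampling inside a high-rate servo loop and also derive friction and texture terms from the same density data.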
The APSIL library (Huang, Qu, and Kaufman, 1998) is an extension of GHOST. To date the Stony Brook group has developed successful demonstrations of volume rendering with haptic interaction from CT data of a lobster, a human brain, and a human head, simulating stiffness, friction, and texture solely from the voxel densities of the volume. The new interface may facilitate working directly with the volumetric representations of the teapots obtained through the view synthesis methods. The surface texture of an object can be displacement mapped with thousands of tiny polygons (Srinivasan and Basdogan, 1997), although the computational demand is such that force discontinuities can occur; more commonly, a texture field is constructed from 2D image data. For example, Ikei, Wakamatsu, and Fukuda (1997) created textures from images converted to greyscale and then enhanced to heighten brightness and contrast, such that the level and distribution of intensity correspond to the heights of texture protrusions and retractions.
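The image-to-texture conversion Ikei and colleagues describe can be sketched roughly as follows; the averaging greyscale conversion, the linear contrast stretch, and the 0-to-1 height range are simplifying assumptions rather than their published procedure.

```python
# Illustrative sketch of an image-to-texture pipeline: greyscale conversion,
# contrast enhancement, and intensity treated as texture height (or pin
# vibration amplitude). Names and the 0..1 range are assumptions.

def to_greyscale(rgb_pixels):
    """Average each (R, G, B) triple into a single luminance value."""
    return [(r + g + b) / 3.0 for (r, g, b) in rgb_pixels]

def stretch_contrast(values):
    """Linearly rescale so the darkest pixel maps to 0 and brightest to 1,
    heightening contrast across the image."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # flat image: no texture relief
    return [(v - lo) / (hi - lo) for v in values]

def height_field(rgb_pixels):
    """Intensity level becomes protrusion height / pin amplitude."""
    return stretch_contrast(to_greyscale(rgb_pixels))
```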
They then employed an array of vibrating pins to communicate tactile sensations to the user's fingertip, with the amplitude of each pin's vibration driven by the intensity level of the underlying portion of the image. Surface texture may also be rendered haptically through techniques such as force perturbation, in which the direction and magnitude of the force vector are altered using the local gradient of the texture field to simulate effects such as coarseness (Srinivasan and Basdogan, 1997). Synthetic textures such as wood, sandpaper, cobblestone, rubber, and plastic may also be created using mathematical functions for the height field (Anderson, 1996; Basdogan, Ho, and Srinivasan, 1997). The ENCHANTER environment (Jansson, Faenger, Konig, and Billberger, 1998) has a texture mapper that can render sinusoidal, triangular, and rectangular textures, as well as textures provided by other programs, for any haptic object provided by the GHOST SDK.
Issues in haptic rendering
Researchers working with force feedback devices for object sensing have been concerned with issues of presence, or the fidelity (realism) of the haptic experience.
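The force-perturbation idea described above, applied to a synthetic sinusoidal height field, might look like this minimal sketch: the nominal wall normal is tilted by the local texture gradient so a flat surface feels bumpy. The amplitude and frequency values are illustrative, not taken from any of the cited systems.

```python
import math

# Minimal sketch of force perturbation over a sinusoidal height field.
# The (2-D) wall normal is tilted by the local gradient of the texture,
# simulating bumps on an otherwise flat haptic surface.

def texture_height(x, amplitude=0.5, frequency=2.0):
    """Sinusoidal height field h(x) = A * sin(2*pi*f*x)."""
    return amplitude * math.sin(2 * math.pi * frequency * x)

def texture_gradient(x, amplitude=0.5, frequency=2.0):
    """Analytic derivative dh/dx of the height field."""
    return amplitude * 2 * math.pi * frequency * math.cos(2 * math.pi * frequency * x)

def perturbed_normal(x):
    """Tilt the nominal wall normal (0, 1) by the local gradient,
    then renormalize; the force vector follows this direction."""
    g = texture_gradient(x)
    nx, ny = -g, 1.0
    norm = math.hypot(nx, ny)
    return (nx / norm, ny / norm)
```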
For instance, Brown and Colgate (1994), in their physics-based approach to haptic display, address the issue of stability guarantees in virtual environments. In particular they note the threat to presence created when the virtual environment becomes computationally unstable, as for example when a normally passive tool, such as a chisel, begins to move independently of the control of the user who is wielding it. Similarly, a virtual wall must unilaterally constrain the user's forward movement. Brown and Colgate develop a model for improving the passivity of the haptic display through inherent physical damping, and the impedance of virtual walls through increased sampling (update) rates. The many potential applications in industry, the military, and entertainment for force feedback in multi-user environments, where two or more users orient to and manipulate objects in a shared environment, have led to work such as that of Buttolo and his colleagues (Buttolo, Hewitt, Oboe, & Hannaford, 1997; Buttolo, Oboe, Hannaford, & McNally, 1996), who note that the addition of force feedback to multi-user environments demands low latency and high collision detection sampling rates.
LANs, because of their low communication delay, may be conducive to applications in which users can touch each other, but for wide area networks, or any environment where these demands cannot be met, Buttolo et al. propose their one-user-at-a-time architecture. Mark and his colleagues (Mark, Randolph, Finch, van Verth, and Taylor, 1996) have proposed a number of solutions to recurring problems in haptics, such as improving the update rate for forces communicated back to the user. They propose an intermediate representation of force through a plane-and-probe method: a local planar approximation to the surface is computed where the probe, or haptic tool, penetrates it; the force server updates the force against this plane at approximately 1 kHz, while the application recomputes the position of the plane and updates it at approximately 20 Hz. Mark et al. also propose solutions to add surface texture and friction to what would otherwise be the slick surface produced under their model, using a parameterized snag distribution on the object surface. They also present a method for specifying torques as well as forces, and a recovery-time algorithm for preventing force discontinuity artifacts, such as occur when the haptic probe's sideways movement is too fast relative to the computation of the new intermediate representation.
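A minimal sketch of such a plane-and-probe intermediate representation: the slow application loop supplies a local plane approximating the surface, and the fast servo loop computes a spring force from the probe's penetration of that plane. The function names and the stiffness constant are assumptions for illustration, not Mark et al.'s actual implementation.

```python
# Sketch of a plane-and-probe intermediate representation for force display.
# The application updates (plane_point, plane_normal) at a low rate; a servo
# loop calls servo_force at a high rate. Stiffness value is illustrative.

def servo_force(plane_point, plane_normal, probe, stiffness=1.0):
    """Spring force pushing the probe back out along the plane normal.

    plane_point / probe: 3-D positions; plane_normal: outward unit normal.
    Returns the zero vector when the probe is on the free side of the plane.
    """
    # Signed distance of the probe from the plane along the outward normal.
    s = sum((q - p) * n for q, p, n in zip(probe, plane_point, plane_normal))
    depth = max(0.0, -s)  # positive only when the probe has penetrated
    return tuple(stiffness * depth * nc for nc in plane_normal)
```

In a full system this function would run in the ~1 kHz force loop against whatever plane the application most recently supplied, which is what keeps perceived contact stiff despite the application's much lower update rate.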
Mark et al. have developed Armlib, a device-independent library of routines for haptic interfaces that supports multi-user and multi-hand applications. Armlib works with a number of different haptic display devices, including the PHANToM.
Psychophysical studies: perceptions of shape and texture in multimodal virtual environments
The behavior of the human haptic system has been the subject of far more systematic study than has touching with robotic masters. Texture, apprehended by most subjects through a lateral, side-to-side exploratory hand movement, is only one of several haptically important dimensions of object recognition, along with hardness, shape, and thermal conductivity (Klatzky, Lederman, & Reed, 1987).
Most researchers report that subjects can discriminate textures, and to a lesser extent shapes, using the haptic sense alone. For example, Ballesteros, Manga, and Reales (1997) reported a moderate level of accuracy for single-finger haptic detection of raised-line shapes, with asymmetric shapes being more readily discriminated. Hatwell (1995) found that recall of texture information coded haptically was successful when memorization was intentional but not when it was incidental, indicating that haptic information processing may be effortful for subjects. Hughes and Jansson (1994) lament the inadequacy of embossed maps and other devices intended to communicate information to the visually handicapped through the sense of touch, a puzzling state of affairs inasmuch as texture perception by active touch (purposeful motion of the skin relative to the surface of some distal object) appears to be comparatively accurate, and even more accurate than vision in apprehending certain properties, such as smoothness (Hughes & Jansson, 1994). In their critical review of the literature on active-passive equivalence, the authors note that active and passive touch (as when a texture is presented to the surface of the stationary fingers; see Hollins et al., 1993, below) have repeatedly been demonstrated by Lederman and her colleagues (Lederman, 1985; Lederman, Thorne, & Jones, 1986; Loomis & Lederman, 1986) to be functionally equivalent with respect to texture perception: touch modality does not account for a significant proportion of the variation in judgments of basic dimensions such as roughness. This equivalence holds even though the two types of touch may lead to different sorts of attributions (about the textured object and about the cutaneous sensing surface, respectively), and even though motor information should clearly be useful in assessing the size and distribution of surface protrusions and retractions.
Active and passive touch are more likely to be equivalent in certain types of perceptual tasks; active touch should be less relevant to judgments of hardness, for example, than it is to assessments of springiness.
Medicine.