[Figures: Action-At-A-Distance | Worlds In Miniature | Look At Menu | Environment Control | Object Control | Tear-off Object Palette | Two-handed Interaction | Interactive Numbers]
Home Page | Research | Publications | Walkthrough | UNC CS
CHIMP uses spotlight selection (selection of objects which fall within a cone) instead of laser beam selection (selection of objects whose bounding box is intersected by a beam), because spotlight selection facilitates selection of small and distant objects that can be hard to hit with a laser beam.
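The cone test behind spotlight selection can be sketched as follows (a minimal illustration; the function name and the 5-degree half-angle are assumptions, not CHIMP's actual parameters):

```python
import math

def in_spotlight(hand_pos, hand_dir, obj_pos, half_angle_deg=5.0):
    """Return True if obj_pos falls inside the selection cone.

    hand_pos, hand_dir, obj_pos are 3-tuples; hand_dir need not be
    normalized. half_angle_deg is the cone's half angle (a
    hypothetical default).
    """
    to_obj = [o - h for o, h in zip(obj_pos, hand_pos)]
    dist = math.sqrt(sum(c * c for c in to_obj))
    norm = math.sqrt(sum(c * c for c in hand_dir))
    if dist == 0 or norm == 0:
        return False
    # Cosine of the angle between the cone axis and the object direction.
    cos_angle = sum(a * b for a, b in zip(to_obj, hand_dir)) / (dist * norm)
    return cos_angle >= math.cos(math.radians(half_angle_deg))
```

Because the test is purely angular, a small object ten meters away is as easy to capture as a large one at arm's length, which is the property that motivates the spotlight over a thin beam.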
CHIMP provides two basic forms of at-a-distance interaction for selected objects: hand-centered and object-centered. In both forms of interaction, changes in the position and orientation of the user's hand are mapped onto the position and orientation of the selected objects. The only difference between the two is the center of rotation used in applying changes in orientation to the object. In hand-centered manipulation, objects move about the center of the user's hand, like objects at the end of a long stick. When the lever arm between the hand and the object is large, hand-centered manipulation allows large scale object motions without the user having to navigate through the environment (to reach the object or its target destination). Hand-centered interaction, however, is particularly sensitive to noise in the tracking data and to instability in the user's hand position. In object-centered manipulation, objects rotate and move relative to their own center. Object-centered interaction enables one to make a localized change while standing back to get a global view.
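The difference between the two modes reduces to the choice of pivot used when applying the hand's change in orientation. A minimal sketch (the matrix formulation and names are assumptions, not CHIMP's implementation):

```python
def rotate_about_pivot(point, rot, pivot):
    """Apply a 3x3 rotation matrix rot to point about pivot."""
    rel = [point[i] - pivot[i] for i in range(3)]
    rotated = [sum(rot[i][j] * rel[j] for j in range(3)) for i in range(3)]
    return tuple(rotated[i] + pivot[i] for i in range(3))

def apply_hand_rotation(obj_pos, hand_rot_delta, hand_pos, mode):
    """Move obj_pos by the change in hand orientation.

    mode "hand" pivots about the user's hand (long lever arm, large
    motions); mode "object" pivots about the object's own center
    (localized change).
    """
    pivot = hand_pos if mode == "hand" else obj_pos
    return rotate_about_pivot(obj_pos, hand_rot_delta, pivot)
```

With a hand-centered pivot, a small angular error in the tracked hand orientation is multiplied by the lever arm into a large positional error at the object, which is why that mode is noted above as sensitive to tracker noise.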
The advantage of using a WIM system is that it allows one to perform large scale manipulations more easily (such as moving a chair across a room without having to actually move through the environment to reach it). In addition it helps provide global context while immersed in one's current location. Researchers at the University of Virginia are also exploring the use of WIMs for navigation in the virtual world (see [Pausch, et al. 1995] ). Though highly effective for the gross manipulation of objects, the WIM interface does not solve the precise manipulation problem (and in fact aggravates it, since one is restricted to large scale motions of objects due to the small scale of the WIM). The controlled manipulation of objects in a WIM (and in virtual environments in general) depends on the implementation of additional constraints such as real-time collision detection (see [Gottschalk, et al. 1996] ) or innovative techniques such as Bukowski's object associations (see [Bukowski and Sequin 1995] ).
There are several potential enhancements to CHIMP's current implementation of the WIM metaphor:
Since the amount of information displayed in a WIM can quickly become overwhelming, it is important to be able to display an arbitrary subset of the environment. If the WIM can only display the entire environment (as is currently the case in the CHIMP system), it is difficult to distinguish, select, and interact with objects in the WIM. It would also be useful if these arbitrary subsets could be displayed at different scales, since we often observe users alternating between global and local views when working in interactive design systems. This would enable users to interact with multiple views of the same space, each at a different location and scale. Finally, if multiple WIMs can be displayed simultaneously, one can also move objects from WIM to WIM, allowing large-scale interaction with better local resolution and control. A user, for example, could move a chair between two WIMs, each showing a different room of a house in detail, instead of having to move the chair across a single WIM displaying the entire house at a lower resolution.
Given these capabilities, the WIM technique becomes in effect a three-dimensional windowing system, with each WIM presenting a different view of the virtual space.
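Underlying any such multi-WIM scheme is a mapping between each miniature's coordinates and world coordinates. A sketch of that mapping, assuming a uniform scale per WIM (names and parameters are illustrative, not from CHIMP):

```python
def wim_to_world(p_wim, wim_origin, wim_scale, region_origin):
    """Map a point in a miniature back into world space.

    A WIM shows a region of the world, anchored at region_origin,
    shrunk by wim_scale (e.g. 0.01) and placed at wim_origin:
        world = region_origin + (p_wim - wim_origin) / wim_scale
    """
    return tuple(region_origin[i] + (p_wim[i] - wim_origin[i]) / wim_scale
                 for i in range(3))
```

The division by `wim_scale` also shows why precise manipulation is aggravated in a WIM: a one-centimeter hand motion in a 1:100 miniature becomes a one-meter motion of the world-space object.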
In the University of Virginia system, the WIM is attached to a tracked clipboard held in the user's left hand. By moving the clipboard the user changes the position and orientation of the WIM, allowing him to view it from different directions. The user also has the option of detaching the WIM from the clipboard, leaving it floating in space. This helps minimize user fatigue (from holding the clipboard up in the air) and allows the user to pass his head through the WIM (by leaning into the miniature floating in space), which would not be possible if it were still attached to the clipboard.
In the CHIMP system the WIM metaphor has been combined with orbital mode viewing (described above) to provide an orbital WIM. Normally the WIM is left floating in space. In this state the user is free to interact with the WIM, move his head inside the WIM for a close up view, or to stand back to get a global view. If the user wishes to view the WIM from another direction he grabs the WIM (by pointing at one of its three orthogonal grid planes and pulling on the trigger) which starts orbital-mode viewing. He is then free to view the WIM from any direction. By moving his hand in and out from his body the user changes the WIM's orbital distance.
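Orbital viewing amounts to placing the viewpoint on a sphere around the miniature, with hand motion toward or away from the body changing the sphere's radius. A sketch (the spherical-coordinate formulation is an assumption):

```python
import math

def orbital_camera(center, azimuth, elevation, distance):
    """Place the viewpoint on a sphere of the given radius around the
    WIM's center; azimuth/elevation are in radians, and the user's
    hand distance from his body would drive `distance`."""
    x = center[0] + distance * math.cos(elevation) * math.sin(azimuth)
    y = center[1] + distance * math.sin(elevation)
    z = center[2] + distance * math.cos(elevation) * math.cos(azimuth)
    return (x, y, z)
```

Because the camera always orbits the fixed, world-aligned miniature (rather than the miniature being reoriented in the hand), the WIM's orientation never diverges from the world's, which is the correspondence property discussed below.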
One advantage of an orbital WIM is that it maintains the correspondence between WIM and world orientation. This helps to avoid the additional cognitive step of registering the orientation of a hand-held miniature with the immersive world. Further research is required, however, to determine which form of WIM viewing is more effective (a hand-held WIM or an orbital WIM).
If the user presses and holds the look-at menu button (on his input device) when a menu is active, the corresponding pop-up menu becomes visible, centered above the hot spot. The active item in the menu depends on the current gaze direction of the user, approximated in the absence of eye tracking by the current view direction of the user's head. To select a new item the user moves his head to look at the desired item. When the user releases the button on the input device, the currently selected item is activated. If the user is looking beyond the extents of the pop-up when he releases the button, no menu item is selected. Menus can be used to select modes and/or tools, and callbacks can be associated with each menu item.
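The gaze-based item test can be sketched as a ray/plane intersection followed by a point-in-rectangle check (a simplified model with the menu on a plane of constant z; names and layout are assumptions):

```python
def pick_menu_item(head_pos, gaze_dir, plane_z, items):
    """Intersect the gaze ray with the menu plane at z = plane_z and
    return the label of the item whose rectangle contains the hit
    point, or None if the gaze falls beyond the pop-up's extents.

    items: list of (label, (xmin, ymin, xmax, ymax)) in plane coords.
    """
    dz = gaze_dir[2]
    if abs(dz) < 1e-9:
        return None  # gaze parallel to the menu plane
    t = (plane_z - head_pos[2]) / dz
    if t <= 0:
        return None  # menu is behind the user
    x = head_pos[0] + t * gaze_dir[0]
    y = head_pos[1] + t * gaze_dir[1]
    for label, (x0, y0, x1, y1) in items:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return label
    return None
```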
If, instead of holding down the pop-up menu button, the user quickly presses and releases it, he can select the default menu item without having to wait for the associated pop-up menu to become visible. This allows the user to quickly select a default item without having to look at a menu and cognitively process the displayed information (similar to the style of interaction found in the innovative Marking Menus techniques; see [Kurtenbach and Buxton 1993] ). The default item is the one in the pop-up that is centered over the hot spot. If desired, the default action can be a null action, forcing the user to wait for the menu to become visible before making a valid selection. Since menus can be attached to objects in the virtual world, menus are associated with the objects they control rather than placed at some arbitrary position in screen space. This helps reduce the cognitive distance between menu and controlled object. In addition, placing menus in the environment takes advantage of the ability to distribute information throughout the virtual space rather than concentrating it in a limited fixed space (such as a menu bar).
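The press-versus-hold distinction can be modeled as a small state machine keyed on how long the button is held (a sketch; the 0.3-second threshold and class name are hypothetical, not values from CHIMP):

```python
class LookAtMenuButton:
    """Quick press-and-release selects the default item immediately;
    press-and-hold pops up the menu for gaze-based selection."""

    HOLD_THRESHOLD = 0.3  # seconds; a hypothetical value

    def __init__(self, default_item):
        self.default_item = default_item
        self.pressed_at = None
        self.menu_visible = False

    def press(self, now):
        self.pressed_at = now
        self.menu_visible = False

    def update(self, now):
        # Called every frame while the button is down.
        if (self.pressed_at is not None
                and now - self.pressed_at >= self.HOLD_THRESHOLD):
            self.menu_visible = True  # show pop-up; gaze now picks items

    def release(self, now, gazed_item=None):
        held = now - self.pressed_at
        self.pressed_at = None
        if held < self.HOLD_THRESHOLD and not self.menu_visible:
            return self.default_item  # quick click: default, no menu shown
        self.menu_visible = False
        return gazed_item  # may be None if gaze was off the menu
```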
Another advantage of the look-at menu scheme is that it makes use of an additional input channel (i.e., head orientation) and does not tie up the user's hands for menu interaction. This hands-free operation makes it possible to interact with a menu without interrupting the current operation. The user can change between hand-centered and object-centered manipulation, for example, without releasing the currently held object. On the other hand, the distribution of menus throughout the environment can make it potentially more difficult to find a desired menu. In addition, since the menus are not visible until activated, the user cannot know what a menu contains until he activates it.
The environment control panel (figure 6) provides access to functions such as:
The tear-off object palette (figure 8) contains miniature representations of all the objects currently loaded into the environment. A user can add a new object to the environment by grabbing one of the miniature copies on the object palette and tearing off a new copy (by moving his hand away from the palette). The new copy can then be placed anywhere in the environment. The object palette can contain both primitive objects (such as cones, cylinders and spheres) and more complex library objects (such as tables, pianos and chairs).
In the CHIMP system, multiple control panels can be attached to the user's left hand at one time. To help avoid visual clutter only one panel is visible at a time, with the visible panel depending upon the orientation of the user's left hand. As the user rotates the sensor about its main axis, different panels pop into view. For example, the tear-off object palette is attached to the back of the environment control panel. If, while looking at the environment control panel, the user flips the panel over (by rotating the hand-held sensor 180 degrees), the control panel will disappear and the object palette will pop into view. The advantage of this scheme is that the user can quickly access multiple panels without visual clutter becoming unmanageable due to multiple panels floating in space. The disadvantage is a lack of affordances: a user may not know that another panel is on the back of the current one unless he turns his hand around (albeit a very easy operation).
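Selecting the visible panel from the hand's roll angle amounts to dividing the roll axis into equal sectors, one per panel (a sketch; the function name and even spacing are assumptions):

```python
def visible_panel(roll_deg, panels):
    """Pick which hand-attached panel to show from the hand's roll.

    panels: names spaced evenly around the sensor's main axis, e.g.
    ["environment", "palette"] for a front/back pair 180 degrees apart.
    """
    n = len(panels)
    sector = 360.0 / n
    # Offset by half a sector so each panel is centered on its angle.
    idx = int(((roll_deg % 360.0) + sector / 2) // sector) % n
    return panels[idx]
```

With two panels this flips the display at roll angles of 90 and 270 degrees, matching the flip-the-panel-over behavior described above.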
The mode bar always lies along the vector between the user's two hands. As a result sub-mode selection is independent of the hand's current position and orientation and depends solely upon the user's current hand separation. For example, for flying mode selection, if the user's hands are closer together than 0.25 meters he is in a pointing mode of flying. If they are between 0.25 meters and 0.5 meters, he is in an orbital mode of flying. If his hands are more than 0.5 meters apart, he exits two-handed interaction and resumes the default one-handed AAAD interaction.
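Using the thresholds given above, sub-mode selection is a simple classification of the current hand separation (the function name is an assumption; the 0.25 m and 0.5 m thresholds are from the text):

```python
def flying_submode(hand_separation_m):
    """Map hand separation (meters) to the flying sub-mode."""
    if hand_separation_m < 0.25:
        return "pointing"
    elif hand_separation_m <= 0.5:
        return "orbital"
    else:
        return "exit"  # resume default one-handed AAAD interaction
```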
The user specifies the resulting transformation using his two hands as follows:
The user first selects the axis of interaction based upon the orientation of his right hand. The user can choose between having the axis of interaction snapped to the principal axis closest to the current hand orientation or having the axis of interaction precisely aligned with the current hand orientation. Once the user has selected an axis of interaction, he initiates the constrained manipulation by pulling and holding the trigger button with his right hand. The magnitude of the transformation depends upon the relative motion of his two hands. For translation the user specifies the motion of the selected objects by moving his right hand along the selected axis, like sliding a bead along a wire. For rotation the user twists his right hand about its axis, like turning a screwdriver. For scale the user determines the size of the objects by the separation of his two hands. If he increases his hand separation the objects scale up; if he decreases it, they scale down.
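The bead-on-a-wire and hand-separation mappings can be sketched as follows (names are assumptions; the projection and ratio formulations are standard constructions, not necessarily CHIMP's exact code):

```python
import math

def axis_translation(axis, hand_delta):
    """Constrained translation: keep only the component of the right
    hand's motion that lies along the chosen axis, like sliding a
    bead along a wire."""
    norm = math.sqrt(sum(a * a for a in axis))
    unit = [a / norm for a in axis]
    amount = sum(u * d for u, d in zip(unit, hand_delta))
    return tuple(amount * u for u in unit)

def constrained_scale(initial_sep, current_sep):
    """Constrained scale: the scale factor is the ratio of the current
    hand separation to the separation when the trigger was pulled."""
    return current_sep / initial_sep
```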
To provide a large dynamic range of possible translations CHIMP multiplies the resulting translation of the selected objects by a scaling factor that depends on the distance of the user's hand from his body. To perform fine-grain motions he holds his hand close to his body. For large-scale object translations he extends his arm fully. Though complicated to describe, the two-handed interaction style is easy to learn and easy to use. Users have found it easier to interact with CHIMP's various modes of constrained manipulation using the two-handed interaction techniques than they did with the previous one-handed style of interaction.
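The distance-dependent gain can be sketched as a function that grows with arm extension (a sketch only; the text does not specify CHIMP's actual gain curve, and the reach and maximum-gain values below are hypothetical):

```python
def translation_gain(hand_to_body_m, max_reach_m=0.7, max_gain=30.0):
    """Scale object translation by arm extension.

    Hand close to the body -> gain near 1 for fine-grain motion;
    arm fully extended -> max_gain for large-scale motion.
    max_reach_m and max_gain are assumed values.
    """
    frac = min(max(hand_to_body_m / max_reach_m, 0.0), 1.0)
    return 1.0 + frac * (max_gain - 1.0)
```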
Working with interactive numbers is similar to using one of the old mechanical calculating devices (such as the Arithma Addiator) in which numbers are input by moving physical sliders up and down using a pointed stylus. In place of the pointed stylus is a laser beam emanating from the user's hand. The user can point the beam at any digit in a number and press and hold the trigger button. This invokes a pop-up list of the digits 0-9 (see figure). By moving his hand up or down the user can then slide the desired new digit into place. Alternately, by moving his hand left or right he can copy the currently selected digit left or right. If the user selects an empty space to the left of the number's current most-significant digit, he can change that space to any digit (1-9) and zeros will fill in all the intermediate spaces. This allows him to quickly enter large values. Alternately, if he copies that space to the right he clears out any digits he encounters.
The user can also grab the decimal point and slide it left or right (increasing or decreasing the number of digits to the right of the decimal point) and he can change the sign of the current value at any time.
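The core digit-editing operation, including the zero-fill behavior for digits entered above the current most-significant place, can be sketched for non-negative integers as (the function name and place convention are assumptions):

```python
def set_digit(value, place, digit):
    """Set one decimal digit of a non-negative integer value.

    place 0 is the ones digit, place 1 the tens digit, and so on.
    Setting a digit above the current most-significant place
    implicitly zero-fills the intermediate places, matching the
    behavior described above.
    """
    old = (value // 10 ** place) % 10
    return value + (digit - old) * 10 ** place
```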
The advantage of interactive numbers is that they double up input and output in the same region of the control panel, instead of requiring a numerical display and a separate input widget (such as a slider or a virtual keypad). Moreover, in theory, interactive numbers are unbounded: one can enter a number of any magnitude (though in practice they are limited by the number of digits that can be displayed). Furthermore, the technique doesn't require the user to learn an unfamiliar input device such as a chord keyboard, or to struggle with a virtual keypad that suffers from the lack of haptic feedback (touch typing without the touch). Interactive numbers are, however, susceptible to noise in the tracking data and require that user hand motion be scaled or filtered to smooth out the interaction.
Techniques such as interactive numbers may be unnecessary given a means for reliable voice input. However, there may be cases in which it is not possible or desirable to use voice input (due to environmental conditions, because users may be unable to talk all day long without irritating their throats, or simply because users prefer not to). In such cases interactive numbers or some similar technique may be required.