Toolglass and Magic Lenses: The See-Through Interface
Eric A. Bier, Maureen C. Stone, Ken Pier, William Buxton(1), Tony D. DeRose(2)
Xerox PARC, 3333 Coyote Hill Road, Palo Alto, CA 94304
(1)University of Toronto, (2)University of Washington
Abstract
Toolglass(TM) widgets are new user interface tools that can appear, as though on a transparent sheet of glass, between an application and a traditional cursor. They can be positioned with one hand while the other positions the cursor. The widgets provide a rich and concise vocabulary for operating on application objects. These widgets may incorporate visual filters, called Magic Lens(TM) filters, that modify the presentation of application objects to reveal hidden information, to enhance data of interest, or to suppress distracting information. Together, these tools form a see-through interface that offers many advantages over traditional controls. They provide a new style of interaction that better exploits the user's everyday skills. They can reduce steps, cursor motion, and errors. Many widgets can be provided in a user interface, by designers and by users, without requiring dedicated screen space. In addition, lenses provide rich context-dependent feedback and the ability to view details and context simultaneously. Our widgets and lenses can be combined to form operation and viewing macros, and can be used over multiple applications.CR Categories and Subject Descriptors: I.3.6 [Computer Graphics]: Methodology and Techniques-interaction techniques; H.5.2 [Information Interfaces and Presentation]: User Interfaces-interaction styles; I.3.3 [Computer Graphics]: Picture/Image Generation-viewing algorithms; I.3.4 [Computer Graphics]: Graphics Utilities-graphics editors
Key Words: multi-hand, button, lens, viewing filter, control panel, menu, transparent, macro
Two hands can be used to operate the see-through interface. The user can position the sheet with the non-dominant hand, using a device such as a trackball or touchpad, at the same time as the dominant hand positions a cursor (e.g., with a mouse or stylus). Thus, the user can line up a widget, a cursor, and an application object in a single two-handed gesture.
A set of simple widgets called click-through buttons is shown in figure 1. These buttons can be used to change the color of objects below them. The user coarsely positions the widget over the target area and indicates precisely which object to color by clicking through the button with the cursor over that object, as shown in figure 1(b). The buttons in figure 1(c) change the outline colors of objects. In addition, these buttons include a filter that shows only outlines, suppressing filled areas. This filter both reminds the user that these buttons do not affect filled areas and allows the user to change the color of outlines that were obscured.
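The click-through behavior can be summarized as two hit tests, one against the button and one against the scene. The following Python sketch is purely illustrative; the names and the scene representation are our assumptions, not the paper's Cedar implementation.

```python
# Illustrative sketch of click-through dispatch. A button is a dict with a
# screen-space bounding box and a (property, value) command; the scene is a
# list of objects in bottom-to-top stacking order.

def inside(bounds, x, y):
    """Test whether point (x, y) lies in the rectangle (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = bounds
    return x0 <= x <= x1 and y0 <= y <= y1

def click_through(button, scene, x, y):
    """Apply the button's command to the topmost scene object under (x, y)."""
    if not inside(button["bounds"], x, y):
        return None                        # click missed the widget entirely
    for obj in reversed(scene):            # hit-test topmost object first
        if inside(obj["bounds"], x, y):
            prop, value = button["command"]
            obj[prop] = value              # the click passes through to the object
            return obj
    return None
```

Note that the widget never grabs the click for itself: it annotates the event with its command and lets the object underneath receive it.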
Figure 1. Click-through buttons. (a) Six wedge objects. (b) Clicking through a green fill-color button. (c) Clicking through a cyan outline-color button.
Many widgets can be placed on a single sheet, as shown in figure 2. The user can switch from one command or viewing mode to another simply by repositioning the sheet.
Figure 2. A sheet of widgets. Clockwise from upper left: color palette, shape palette, clipboard, grid, delete button, and buttons that navigate to additional widgets.
Widgets and lenses can be composed by overlapping them, allowing a large number of specialized tools to be created from a small basic set. Figure 3 shows an outline color palette over a magnifying lens, which makes it easy to point to individual edges.
Figure 3. An outline color palette over a magnifying lens.
The see-through interface has been implemented in the Multi-Device Multi-User Multi-Editor (MMM) framework [5] in the Cedar programming language and environment [24], running on the SunOS UNIX(TM)-compatible operating system on Sun Microsystems SPARCstations and other computers. The Gargoyle graphics editor [20], as integrated into MMM, serves as a complex application on which to test our interface. We use a standard mouse for the dominant hand and a MicroSpeed FastTRAP(TM) trackball for the non-dominant hand. The trackball includes three buttons and a thumbwheel, which can be used to supply additional parameters to the interface.
The remainder of this paper is organized as follows. The next section describes related work. Section 3 describes some examples of the tools we have developed. Section 4 discusses general techniques for using the see-through interface. Section 5 discusses some advantages of this approach. Section 6 describes our implementation. Sections 7 and 8 present our conclusions and plans for future work.
Except for figures 12 and 16, all of the figures in this paper reflect current capabilities of our software.
Other work characterizes the situations under which people successfully perform two-handed tasks. Guiard presents evidence that people are well-adapted to tasks where the non-dominant hand coarsely positions a context and the dominant hand performs detailed work in that context [4]. Similarly, Kabbash presents evidence that a user's non-dominant hand performs as well as or better than the dominant hand on coarse positioning tasks [13].
Our system takes full advantage of a user's two-handed skills; the non-dominant hand sets up a context by coarsely positioning the sheet, and the dominant hand acts in that context, pointing precisely at objects through the sheet.
Several existing systems provide menus that can be positioned in the same work area as application objects. For example, MacDraw "tear-off menus" allow a pull-down menu to be positioned in the work area and repositioned by clicking and dragging its header [17]. Unfortunately, moving these menus takes the cursor hand away from its task, and they must be moved whenever the user needs to see or manipulate objects under them.
Toolglass sheets can be positioned relative to application objects and moved without tying up the cursor.
While these systems allow the user to continue to see the underlying application while a menu is in place, they don't allow the user to interact with the application through the menu and they don't use filters to modify the view of the application, as does our interface.
The concept of using a filter to change the way information is visualized in a complex system has been introduced before [25,10,14]. Recent image processing systems support composition of overlapping filters [23]. However, none of these systems combines the filtered views with the metaphor of a movable viewing lens.
Other systems provide special-purpose lenses that provide more detailed views of state in complex diagrams. For example, a fisheye lens can enhance the presentation of complicated graphs [21]. The bifocal display [22] provides similar functionality for viewing a large space of documents. The MasPar Profiler [3] uses a tool based on the magnifying lens metaphor to generate more detail (including numerical data) from a graphical display of a program.
Magic Lens filters combine viewing filters with interaction and composition in a much broader way than do previous systems. They are useful both as a component of the see-through interface and as a general-purpose visualization paradigm, in which the lenses become an integral part of the model being viewed.
Figure 4. Shape palette. (a) Choosing a shape. (b) Placing the shape.
Figure 5 shows a design for a property palette for setting the face of text in a document. Each face (regular, bold, etc.) has an active region on the right side of the tool. Selecting the text displayed in this region changes its face.
Figure 5. Font face palette. The word "directly" is being selected and changed to bold face.
Figure 6 shows a symmetry clipboard that picks up the shape that the user clicks on (figure 6(a)) and produces all of the rotations of that shape by multiples of 90 degrees (figure 6(b)). Moving the clipboard and clicking on it again, the user drops a translated copy of the resulting symmetrical shape (figure 6(c)). Clicking the small square in the upper left corner of the widget clears the widget so that new shapes can be clipped.
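The rotations the symmetry clipboard produces are straightforward to state: each 90-degree turn about the clipboard center maps (x, y) to (cx - (y - cy), cy + (x - cx)). The sketch below is a minimal Python illustration of that geometry, not the widget's actual implementation.

```python
def rotate_90(points, cx, cy):
    """Rotate a polygon 90 degrees counterclockwise about (cx, cy)."""
    return [(cx - (y - cy), cy + (x - cx)) for x, y in points]

def symmetry_copies(points, cx=0.0, cy=0.0):
    """Return the four rotations of a shape by multiples of 90 degrees,
    as the symmetry clipboard displays them (figure 6(b))."""
    copies = [list(points)]
    for _ in range(3):
        copies.append(rotate_90(copies[-1], cx, cy))
    return copies
```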
Figure 6. Symmetry clipboard. (a) Picking up an object. (b) Rotated copies appear. (c) The copies are moved and pasted.
Figure 7 shows an example of a type of clipboard that we call a rubbing. It picks up the fill color of an object when the user clicks on that object through the widget (figure 7(a)). The widget also picks up the shape of the object as a reminder of where the color came from (figure 7(b)). Many fill-color rubbings can be placed on a single sheet, allowing the user to store several colors and remember where they came from. The stored color is applied to new shapes when the user clicks on the applicator nib of the rubbing (figure 7(c)).
Figure 7. Fill-color rubbings. (a) Lifting a color. (b) Moving the clipboard. (c) Applying the color.
Besides implementing graphical cut and paste, clipboards provide a general mechanism for building customized libraries of shapes and properties.
Figure 8. An achromatic lens over a drop shadow lens over a knotwork. (Knotwork by Andrew Glassner)
Previewing lenses can be parameterized. For example, the drop shadow lens has parameters to control the color and displacement of the shadow. These parameters can be included as graphical controls on the sheet near the lens, attached to input devices such as the thumbwheel, or set using other widgets.
Figure 9. Vertex selection widget. (a) Shapes. (b) The widget is placed. (c) A selected vertex.
Figure 10. The local scaling lens. (Tiling by Doug Wyatt)
Figure 10 shows a lens that shrinks each object around its own centroid. This lens makes it easy to select an edge that is coincident with one or more other edges.
Figure 11. Three grid tools.
Figure 12. Gaussian curvature pseudo-color lens with overlaid tool to read the numeric value of the curvature. (Original images courtesy of Steve Mann)
By clicking a button on the trackball, the user can disconnect the trackball from the sheet and enable its use for scrolling and zooming a selected application area. If a sheet is over this application, the user can now move an application object to a widget instead of moving a widget to an object. This is a convenient way to use the see-through interface on illustrations that are too large to fit on the screen.
In most applications, a control panel competes for screen space with the work area of the application. Toolglass sheets exist on a layer above the work area. With proper management of the sheets, they can provide an unlimited space for tools. The widgets in use can take up the entire work area. Then, they can be scrolled entirely off the screen to provide an unobstructed view of the application or space for a different set of widgets.
The see-through user interface can be used on tiny displays, such as notebook computers or personal digital assistants, that have little screen real estate for fixed-position control panels. It can also be used on wall-sized displays, where a fixed control panel might be physically out of reach from some screen positions. These tools can move with the user to stay close at hand.
A user interface layer over the desktop provides a natural place to locate application-independent tools, such as a clipboard that can copy material from one window to another.
These widgets can combine multiple task steps into a single step. For example, the vertex selection widget of figure 9 allows the user to turn on a viewing mode (wire-frame), turn on a command mode (selection), and point to an object in a single two-handed gesture.
Most user interfaces have temporal modes that can cause the same action to have different effects at different times. With our interface, modes are defined spatially by placing a widget and the cursor over the object to be operated on. Thus, the user can easily see what the current mode is (e.g., by the label on the widget) and how to get out of it (e.g., move the cursor out of the widget). In addition, each widget can provide customized feedback for its operation. For example, a widget that edits text in an illustration can include a lens that filters out all the objects except text. When several widgets are visible at once, the feedback in each one serves a dual role. It helps the user make proper use of the widget and it helps the user choose the correct widget.
The visual nature of the see-through interface also allows users to construct personalized collections of widgets as described above.
MMM takes events from multiple input devices, such as the mouse and trackball, keeps track of which device produced which event, and places all events on a single queue. It dequeues each event in order and determines to which application that event should be delivered. MMM applications are arranged in a hierarchy that indicates how they are nested on the screen. Each event is passed to the root application, which may pass the event on to one of its child applications, which may in turn pass the event on down the tree. Mouse events are generally delivered to the most deeply nested application whose screen region contains the mouse coordinates. However, when the user is dragging or rubberbanding an object in a particular application, all mouse coordinates go to that application until the dragging or rubberbanding is completed. Keyboard events go to the currently selected application.
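MMM's default routing rule for mouse events, recursive descent to the most deeply nested application containing the cursor, can be sketched in a few lines. This is a simplified Python illustration under our own naming assumptions; the real system is written in Cedar and also handles dragging grabs and keyboard focus, which are omitted here.

```python
class App:
    """A node in the MMM application hierarchy, with a screen region
    and nested child applications."""
    def __init__(self, name, bounds, children=()):
        self.name = name
        self.bounds = bounds              # (x0, y0, x1, y1) screen rectangle
        self.children = list(children)

    def contains(self, x, y):
        x0, y0, x1, y1 = self.bounds
        return x0 <= x <= x1 and y0 <= y <= y1

def deliver(app, event):
    """Route a mouse event to the most deeply nested application whose
    screen region contains the event coordinates."""
    x, y = event["pos"]
    for child in app.children:
        if child.contains(x, y):
            return deliver(child, event)  # pass the event down the tree
    return app                            # no child claims it; handle here
```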
To support Toolglass sheets, MMM's rules for handling trackball input were modified. When a sheet is movable, trackball and thumbwheel events go to the top-level application, which interprets them as commands to move or resize the sheet, respectively. When the sheet is not movable, the trackball and thumbwheel events are delivered to the selected application, which interprets them as commands to scroll or zoom that application.
Figure 13. A simple hierarchy of applications
Ordinarily, MMM input events move strictly from the root application towards the leaf applications. However, to support the see-through interface, input events must be passed back up this tree. For example, figure 13(b) shows an application hierarchy. The left-to-right order at the lower level of this tree indicates the top-to-bottom order of applications on the screen. Input events are first delivered to the Toolglass sheet to determine if the user is interacting with a widget or lens. If so, the event is modified by the sheet. In any case, the event is returned to the root application, which either accepts the event itself or passes it on to the child applications that appear farther to the right in the tree.
The data structure that represents an MMM event is modified in three ways to support Toolglass sheets. First, an event is annotated with a representation of the parts of the application tree it has already visited. In figure 13, this prevents the root application from delivering the event to the sheet more than once. Second, an event is tagged with a command string to be interpreted when it reaches its final application. For example, a color palette click-through button annotates each mouse-click event with the command name "FillColor" followed by a color. Finally, if the widget contains a lens, the mouse coordinates of an event may be modified so the event will be correctly directed to the object that appears under the cursor through that lens.
Figure 14. Composing color-changing widgets.
Widgets can be composed by overlapping them. When a stack of overlapped widgets receives input (e.g., a mouse click), the input event is passed top-to-bottom through the widgets. Each widget in turn modifies the command string that has been assembled so far. For example, a widget might concatenate an additional command onto the current command string. In figure 14, a widget that changes fill colors (figure 14(a)) is composed with a widget that changes line colors (figure 14(b)) to form a widget that changes both fill and line colors (figure 14(c)). If the line color widget is on top, then the command string would be "LineColor blue" after passing through this widget, and "LineColor blue; FillColor cyan" after both widgets.
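The command-string assembly described above amounts to a fold over the widget stack. The following Python fragment is a minimal sketch under assumed names; the paper's widgets carry richer behavior than a single string.

```python
def compose_command(widgets, event):
    """Pass an input event top-to-bottom through a stack of overlapped
    widgets; each widget appends its command to the event's command string."""
    commands = []
    for widget in widgets:                # widgets listed top-to-bottom
        commands.append(widget["command"])
    event["command"] = "; ".join(commands)
    return event

# The figure 14 example: a line-color widget on top of a fill-color widget.
line_color = {"command": "LineColor blue"}
fill_color = {"command": "FillColor cyan"}
```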
In addition, to improve performance, MMM applications compute the rectangular bounding box of the regions that have recently changed, and propagate this box to the root application, which determines which screen pixels will need to be updated. Generally, this bounding box is passed up the tree, transformed along the way by the coordinate transformation between each application and the next one up the tree. However, lenses can modify the set of pixels that an operation affects. A magnifying lens, for example, generally increases the number of pixels affected. As a result, the bounding box must be passed to all lenses that affect it to determine the final bounding box.
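A damage rectangle passing through a magnifying lens grows by the lens's scale factor about its center. The sketch below illustrates this one transformation in Python; the actual bounding-box propagation in MMM also composes coordinate transforms between applications, which we omit.

```python
def magnifier(m, cx, cy):
    """Return a function mapping a damage rectangle through a lens that
    magnifies by factor m about (cx, cy) (illustrative, axis-aligned)."""
    def apply(box):
        x0, y0, x1, y1 = box
        scale = lambda v, c: c + m * (v - c)   # scale a coordinate about c
        return (scale(x0, cx), scale(y0, cy), scale(x1, cx), scale(y1, cy))
    return apply

def grow_through_lenses(box, lenses):
    """Pass a changed-region bounding box through every lens that affects it
    to obtain the final set of screen pixels needing update."""
    for lens in lenses:
        box = lens(box)
    return box
```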
When several lenses are composed, the effect is as though the model were passed sequentially through the stack of lenses from bottom to top, with each lens operating on the model in turn. In addition, when one lens has other lenses below it, it may modify how the boundaries of these other lenses are mapped onto the screen within its own boundary. The input region of a group of lenses taken as a whole can be computed by applying the inverses of the viewing filters to the lens boundaries themselves.
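Treating each lens as a model-to-model filter, composition is simply sequential application from bottom to top. The Python sketch below uses two toy filters of our own invention (an outline-only filter and an achromatic filter, echoing figures 1(c) and 8) to illustrate the ordering.

```python
def view_through(model, lenses):
    """Apply a stack of lenses to a model, bottom lens first, so the effect
    is as though the model passed sequentially through the stack."""
    for lens in lenses:                   # lenses listed bottom-to-top
        model = lens(model)
    return model

# Two illustrative filters over a toy scene representation.
drop_fills = lambda scene: [o for o in scene if o["type"] != "fill"]
achromatic = lambda scene: [dict(o, color="gray") for o in scene]
```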
Our lenses depend on the implementation of Toolglass sheets to manage the size, shape and motion of their viewing regions. This section describes two strategies we have tried for implementing viewing filters: a procedural method that we call recursive ambush, and a declarative method that we call model-in model-out. We also describe a third method that promises to be convenient when applicable, called reparameterize-and-clip. Finally, we discuss issues that arise in the presence of multiple model types.
When recursive ambush lenses are composed, the implementation that a lens intercepts may not be the original graphics language primitive but another lens's version of it, which performs yet another modification; composition is therefore recursive.
Recursive ambush lenses appear to have important advantages. Because they work at the graphics language level, they work across many applications. Because they work procedurally, they need not allocate storage. However, the other methods can also work at the graphics language level. In addition, recursive ambush lenses have three major disadvantages. First, making a new lens usually requires modifying many graphics language primitives. Second, debugging several composed lenses is difficult because the effects of several cooperating interpreters are hard to understand. Finally, performance deteriorates rapidly as lenses are composed because the result of each lens is computed many times; the number of computations doubles with the addition of each lens that overlaps all of the others.
Although MIMO lenses must allocate storage, this investment pays off in several ways. First, during the rendering of a single image, each lens computes its output models only once, and then saves them for use by any lenses that are over it. In addition, if the computed model is based on the entire original model, then redrawing the picture after a lens moves is just a matter of changing clipping regions; no new model filtering is needed. In this case, each lens maintains a table of the models it has produced. The table is indexed by the models it has received as input and when they were last modified. The action of such a lens often consists of a single table lookup.
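The table described above is a memoization keyed on the input model and its last-modified time. This Python sketch shows the caching idea in isolation; the names and the key structure are our assumptions.

```python
class MimoLens:
    """Sketch of a model-in model-out lens that caches its output model per
    (input-model id, modification time), so that redrawing after the lens
    moves needs only a clipping change, not a new filtering pass."""
    def __init__(self, filter_fn):
        self.filter_fn = filter_fn   # the lens's model-to-model filter
        self.cache = {}
        self.filter_calls = 0        # counts actual filtering work

    def output(self, model_id, mtime, model):
        key = (model_id, mtime)
        if key not in self.cache:    # only filter on a cache miss
            self.filter_calls += 1
            self.cache[key] = self.filter_fn(model)
        return self.cache[key]       # otherwise a single table lookup
```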
MIMO lenses have many other advantages. Given routines to copy and visit parts of the model, the incremental effort to write a MIMO lens is small. Many of our lenses for graphical editor data structures were written in under 20 minutes and consist of under 20 lines of code. Debugging composed lenses is easy because the intermediate steps can easily be viewed. Finally, MIMO lenses can perform a large class of filtering functions because they can access the input model in any order. In particular, they can compute their output using graphical search and replace [16], as shown in figure 15 where each line segment is replaced by multiple line segments to create a "snowflake" pattern.
Figure 15. The snowflake lens. (a) Two triangles. (b) Snowflake lens over part of the scene.
An important variation of MIMO is to allow the output model to differ in type from the input model. For example, a lens might take a graphics language as input and produce pixels as output. In this case, the lens walks the original model, rather than copying it, and allocates data structures of the new model type.
Several reparameterize-and-clip lenses can be composed if the parameter changes made by these lenses are compatible. In the region of overlap, the renderer re-renders the original model after each of the overlapping lenses has made its changes to the renderer parameters. The flow of control and performance of a stack of these lenses is like that of MIMO lenses; a new output is computed for each input region received from lenses underneath. These lenses differ from MIMO in that each output is computed from the original model, and each output is always a rendering.
Supporting multiple model types requires type conversion and type tolerance. When a lens that expects one type of model as input is moved over a model of a different type, the system may automatically convert the model to be of the type required; this is type conversion. For example, all of our applications produce Interpress graphics language calls as part of drawing themselves on the screen. When a lens that takes Interpress as input is positioned over one of these applications, that application converts its model to Interpress on demand for that lens.
Figure 16. A bridge made of shaded, 3D blocks showing a 3D wireframe lens and a 2D magnifier.
Alternatively, when presented with a model it does not understand, a lens can simply pass that model through unchanged; this is type tolerance. For example, a lens that operates only on a graphics editor's data structures will only modify the image in the part of that lens's boundary that overlaps the graphics editor; other regions are unchanged.
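Type tolerance reduces to a guard before filtering: a lens applies its filter only to models it understands and passes everything else through unchanged. A minimal Python sketch, with an illustrative `accepts` predicate of our own:

```python
def apply_lens(lens, model):
    """Type tolerance: if the lens does not understand the model's type,
    pass the model through unchanged rather than failing."""
    if lens["accepts"](model):
        return lens["filter"](model)
    return model                         # unrecognized model: no-op

# A lens that only understands graphics-editor models (illustrative).
graphics_lens = {
    "accepts": lambda m: m.get("kind") == "graphics",
    "filter":  lambda m: dict(m, filtered=True),
}
```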
Figure 17. The Magic Lenses logo.
The see-through interface provides a new paradigm to support open software architecture. Because Toolglass sheets can be moved from one application to another, rather than being tied to a single application window, they provide an interface to the common functionality of several applications and may encourage more applications to provide common functionality. Similarly, Magic Lens filters that take standard graphics languages as input work over many applications.
In addition to their role in user interfaces, Magic Lens filters provide a new medium for computer graphics artists and a new tool for scientific visualization. When integrated into drawing tools, these filters will enable a new set of effects and will speed the production of traditional effects. Figure 17 shows a magnifying lens and a wireframe lens used to produce our Magic Lenses logo.
Integrated into scientific visualization tools, these filters can enhance understanding by providing filtered views of local regions of the data while leaving the rest of the view unchanged to provide context, as was shown in the visualization example in figure 12.
We hope the see-through interface will prove to be valuable in a wide variety of applications. While the examples in this paper stress applications in graphical editing, these tools can potentially be used in any screen-based application, including spreadsheets, text editors, multi-media editors, paint programs, solid modelers, circuit editors, scientific visualizers, or meeting support tools. Consider that most applications have some hidden state, such as the equations in a spreadsheet, the grouping of objects in a graphical editor, or the position of water pipes in an architectural model. A collection of widgets and lenses can be provided to view and edit this hidden state in a way that takes up no permanent screen space and requires no memorization of commands.
We believe that the see-through interface will increase productivity by reducing task steps and learning time, providing good graphical feedback, and allowing users to construct their own control panels and spatial modes.
We are building two Toolglass widget toolkits. The first is a traditional toolkit in which widgets are created through object-oriented programming. The second toolkit is based on our EmbeddedButtons project [6]; here, users draw new widgets and collections of widgets using a graphical editor and then apply behavior to these graphical forms, where the behavior is expressed in a user customization language.
We are designing new algorithms to increase the speed of these tools. It is clear that Magic Lens filters and, to a lesser extent, Toolglass widgets provide a new way to consume the graphics power of modern computers.
Finally, we are working to better understand how to model and implement general composition of widgets and lenses, especially those that work with multiple model and applications types.
Trademarks and Patents: Toolglass, Magic Lens and Interpress are trademarks of Xerox Corporation. Postscript is a trademark of Adobe Systems, Inc. UNIX is a trademark of AT&T. FastTRAP is a trademark of MicroSpeed Inc. Patents related to the concepts discussed in this paper have been applied for by Xerox Corporation.
2. Bartlett, Joel F. Transparent Controls for Interactive Graphics. WRL Technical Note TN-30, Digital Equipment Corp., Palo Alto, CA. July 1992.
3. Beck, Kent, Becher, Jon, and Zaide, Liu. Integrating Profiling into Debugging. Proceedings of the 1991 International Conference on Parallel Processing, Vol. II, Software, August 1991, pp. II-284-II-285.
4. Guiard, Yves. Asymmetric Division of Labor in Human Skilled Bimanual Action: The Kinematic Chain as a Model. The Journal of Motor Behavior, 19, 4, (1987), pp. 486-517.
5. Bier, Eric A. and Freeman, Steve. MMM: A User Interface Architecture for Shared Editors on a Single Screen. Proceedings of the ACM SIGGRAPH Symposium on User Interface Software and Technology (Hilton Head, SC, November 11-13), ACM, New York, (1991), pp. 79-86.
6. Bier, Eric A., EmbeddedButtons: Supporting Buttons in Documents. ACM Transactions on Information Systems, 10, 4, (1992), pp. 381-407.
7. Buxton, William and Myers, Brad A.. A Study in Two-Handed Input. Proceedings of CHI '86 (Boston, MA, April 13-17), ACM, New York, (1986), pp. 321-326.
8. Buxton, William. There's More to Interaction Than Meets the Eye: Some Issues in Manual Input. Readings in Human-Computer Interaction: A Multidisciplinary Approach. (Ronald M. Baecker, William A.S. Buxton, editors). Morgan Kaufmann Publishers, Inc., San Mateo, CA. 1987.
9. Dill, John. An Application of Color Graphics to the Display of Surface Curvature. Proceedings of SIGGRAPH '81 (Dallas, Texas, August 3-7). Computer Graphics, 15, 3, (1981), pp. 153-161.
10. Goldberg, Adele and Robson, Dave, A Metaphor for User Interface Design, Proceedings of the University of Hawaii Twelfth Annual Symposium on System Sciences, Honolulu, January 4-6, (1979), pp.148-157.
11. Goodman, Danny. The Complete HyperCard Handbook. Bantam Books, 1987.
12. Harrington, Steven J. and Buckley, Robert R.. Interpress, The Source Book. Simon & Schuster, Inc. New York, NY. 1988.
13. Kabbash, Paul, MacKenzie, I. Scott, and Buxton, William. Human Performance Using Computer Input Devices in the Preferred and Non-preferred Hands. Proceedings of InterCHI '93, (Amsterdam, April 24-29), pp. 474-481.
14. Krasner, Glenn and Hope, Stephen, A Cookbook for Using the Model-View-Controller User Interface Paradigm in Smalltalk-80, Journal of Object-Oriented Programming, 1, 3, (1988), pp. 26-49.
15. Krueger, Myron W., Gionfriddo, Thomas, and Hinrichsen, Katrin. VIDEOPLACE - An Artificial Reality. Proceedings of CHI '85 (San Francisco, April 14-18). ACM, New York, (1985), pp. 35-40.
16. Kurlander, David and Bier, Eric A.. Graphical Search and Replace. Proceedings of SIGGRAPH '88 (Atlanta, Georgia, August 1-5) Computer Graphics, 22, 4, (1988), pp. 113-120.
17. MacDraw Manual. Apple Computer Inc. Cupertino, CA 95014, 1984.
18. Newman, William. Markup User's Manual. Alto User's Handbook, Xerox PARC technical report, (1979), pp. 85-96.
19. Perlin, Ken and Fox, David. Pad: An Alternative Approach to the Computer Interface. this proceedings.
20. Pier, Ken, Bier, Eric, and Stone, Maureen. An Introduction to Gargoyle: An Interactive Illustration Tool. Proceedings of the Intl. Conf. on Electronic Publishing, Document Manipulation and Typography (Nice, France, April). Cambridge Univ. Press, (1988), pp. 223-238.
21. Sarkar, Manojit and Brown, Marc H.. Graphical Fisheye Views of Graphs. Proceedings of CHI '92, (Monterey, CA, May 3-5, 1992) ACM, New York, (1992), pp. 83-91.
22. Spence, Robert and Apperley, Mark. Data Base Navigation: An Office Environment of the Professional. Behaviour and Information Technology, 1, 1, (1982), pp. 43-54.
23. ImageVision, Silicon Graphics Inc., Mountain View, CA.
24. Swinehart, Daniel C., Zellweger, Polle T., Beach, Richard J., Hagmann, Robert B.. A Structural View of the Cedar Programming Environment. ACM Transactions on Programming Languages and Systems, 8, 4, (1986), pp. 419-490.
25. Weyer, Stephen A. and Borning, Alan H., A Prototype Electronic Encyclopedia, ACM Transactions on Office Systems, 3, 1, (1985), pp. 63-88.