Buxton, W. (1990). Smoke and Mirrors. Byte Magazine,
Smoke and Mirrors
There is little question that computers are more accessible
today than they have ever been before. Introduced by the Xerox Star and
popularized by machines like the Macintosh, the graphical user interface
(GUI) has had a huge impact on the usability, usage, and usefulness of computers.
But now, nine years after the Star's introduction, I feel
locked in a time warp. This sensation is reinforced every time I read about
yet another GUI. Each one triggers a familiar flash of déjà vu.
Don't get me wrong. I'm not complaining that the PC and
Unix worlds are finally becoming fit for human consumption. I have the
highest respect for the teams that invented the GUI, but I just can't accept
that there are no more significant breakthroughs to come.
In an industry as new as ours, it's too early to rest
on our collective laurels. We can do far better than the "we can do GUIs,
too" attitude that is all too common today. We can explore and champion
some of the emerging alternatives to the GUI - alternatives as creative
and important in today's environment as the Xerox Star was in 1981.
In the Looking Glass
Rather than use a crystal ball to look into the future
evolution of user-interface development, I prefer to employ a little smoke
and three mirrors. Why mirrors? Because they are reflective.
Using the first mirror, you can ask, "How well does the system reflect
the human motor/sensory system?" Does it acknowledge, for example, that
most people have eyes, ears, feet, and two hands?
Using the second, you can ask, "How well does the design
reflect the human cognitive or problem-solving mechanisms?" For example,
does the system reflect how people think and make decisions?
Finally, the third mirror can test how well the technology
reflects the sociopolitical structure of day-to-day life and work. For
example, how does the technology reflect or support group activity, or affect
the power structure of an organization?
Together, these three mirrors emphasize how user-interface
design goes well beyond questions of how to best design menus, or whether
to use a joystick or a mouse. To be truly effective, a design must provide
a reasonably undistorted reflection from all three mirrors. Very few systems
in use today stand up to this test.
Discussions of emerging or future systems tend to include
the conflict between technology-driven and user-driven design. Too often, change
has been technology-driven, resulting in a tail-wagging-the-dog situation,
which creates more problems than it solves. The loser in this conflict
is usually the user.
Despite the pitfalls, however, technology is an important
element, not as a force to drive future development, but because of the
opportunities that it affords. Knowing the technology can help you create
a better match between what can be done and what needs to be done. However,
you need to approach the problem from both ends simultaneously.
A nonintrusive eye tracker. A video camera mounted
under the display tracks the position of the eye's pupil and translates
the data into screen coordinates. Thus, the eye can "point."
(Photo courtesy of L.C. Technologies, Fairfax, VA)
Look and Feel
The concept of "look and feel" has had a lot of attention
recently. It encompasses those aspects of the user interface reflected
in the first mirror - the motor/sensory system. Today's user interfaces
have far more look than feel, and the use of sound is so impoverished that
it does not even rate a mention.
Even the concept of "look" is impoverished. It is unidirectional
and doesn't take into account the capability of the eyes to indicate direction
(or to be used as an input device, as the photo illustrates). In short,
the balance is out of all proportion with what people are capable of.
Technology may be able to render wonderful ray-traced
images, but without mortgaging my house, I can't purchase a system that
lets me draw a line whose thickness varies continuously with pressure (something
I can do with a 15-cent pencil). One of the first priorities of the next
generation of user interfaces, therefore, is to correct the imbalance that
the first mirror reflects.
Multimedia is another topic that inevitably arises when
discussing emerging technologies. The discussion usually includes two principal
components: (a) Multimedia is the future! and (b) What is multimedia?
The resulting debate is generally more than a little confused.
Much of the excitement about multimedia is well founded.
However, by definition, multimedia focuses on the medium or the technology
rather than on the application or the user. Therein lies a primary source
of confusion. If you take a user-centered approach, you quickly see that
it's not the medium per se that is important. Rather, it is the human sensory
modalities and the channels of communication that multimedia uses that
make it different. Therefore, the following terms might be more appropriate:
multisensory: using multiple sensory modalities;
multichannel: using multiple channels, of the same or different modalities;
multitasking: recognizing that people can perform more than one task at
a time (as driving a car demonstrates).
Seen in this light, the real value of multimedia is the
role that it can play in smoothing out the distortions seen in the first
mirror. From this perspective, you can reverse the question from "Why do
I need two-handed input or audio?" to "Since I have two hands and two ears,
why doesn't this system permit me to use them to full advantage?"
The SonicFinder and Beyond
One of the most interesting pieces of software that is
circulating in the research underground is something called the SonicFinder.
It was developed at Apple Computer's Human Interface Group by Bill Gaver.
The SonicFinder is a prototype of the Macintosh Finder based on the novel
proposition that most people can hear. This may seem fairly obvious, until
you look at the sonic vocabulary most computer systems use.
The SonicFinder uses sound in a way that reflects how
it is used in the everyday world. You can "tap" on objects to determine
their type (e.g., application, disk, and file folder) and their size (small
objects have high-pitched sounds; large objects are low-pitched). When
you drag an object, you hear a scraping sound. When a dragged object collides
with a container (e.g., a file folder, disk, or the Trashcan), you hear
a distinct sound.
All this may seem to suffer from terminal cuteness, but
how many times have you missed the Trashcan when deleting a file, or unintentionally
dropped a file into a file folder when dragging it from one window to another?
Frequently, if you're like me. Yet these are precisely the kinds of errors
that disappear when you add sound.
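The machinery behind such auditory icons can be quite modest. The sketch
below is purely illustrative - it is not Gaver's implementation, and the
type-to-pitch table is invented - but it shows the flavor of the mapping:
object type selects a base pitch, and size lowers it logarithmically, so
small objects sound high and large objects sound low.

    import math

    # Illustrative auditory-icon mapping (not Gaver's actual code):
    # each object type gets a base pitch; larger objects sound lower.
    TYPE_BASE_HZ = {"application": 880.0, "folder": 660.0, "disk": 440.0}

    def tap_frequency(obj_type, size_bytes, min_hz=110.0):
        """Pitch (Hz) of the 'tap' sound for a desktop object."""
        base = TYPE_BASE_HZ.get(obj_type, 523.0)
        # Drop one octave for every factor-of-100 increase in size.
        octaves_down = math.log10(max(size_bytes, 1)) / 2.0
        return max(base / (2.0 ** octaves_down), min_hz)

    for size in (1000, 100000, 10000000):
        print(size, "bytes ->", round(tap_frequency("folder", size), 1), "Hz")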
Machines that exploit sound are finally becoming more
common. It started with the Commodore Amiga, which comes with rich audio
and text-to-speech capabilities. Now, audio is becoming an important ingredient
in other platforms (e.g., the NeXT machine). In fact, it is the major interface
in some systems.
The challenge is in learning how to use audio effectively,
not just for music or to provide an acoustic lollipop, but as a means of
providing a sonic landscape that helps you to navigate through complex systems.
A One-Handed Waterloo
Just as most people can hear, most can also manipulate
items with two hands. Every day, you turn pages with one hand while you
write with the other. You steer your car with one hand while changing gears
with the other. You hold a ruler or drafting machine with one hand and
use a pencil in the other. All these tasks require everyday motor skills
that computer systems largely ignore.
It seems to me that the Macintosh was designed for Napoleon:
Unless you are typing, you can work all day with one hand tucked into your
jacket. This is great if you are one-handed, but a waste if you're not.
The image of the user reflected in the technology is lopsided.
"Hands-on" computing is largely a myth. It would be better
called "hand-on" or even "finger-on." To accurately reflect human potential,
a system should let you scroll through a document by manipulating a trackball
with one hand and using the other to point with a mouse. You should be
able to scale an object using a potentiometer in one hand, while dragging
it into position with the other. Or, in a program like MacDraw, you should
be able to move the drawing page under the window using a trackball in
one hand and keeping the "pen" in the other.
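A sketch suggests how little is needed once the system can tell its input
devices apart. Everything here is hypothetical - the device names and event
format are invented - but the principle stands: route each device's events
to its own task instead of funneling both hands through a single pointer.

    from dataclasses import dataclass

    @dataclass
    class InputEvent:
        device: str  # hypothetical tag, e.g., "trackball" or "mouse"
        dx: int
        dy: int

    class TwoHandedCanvas:
        def __init__(self):
            self.scroll_y = 0      # non-dominant hand: scrolling
            self.cursor = (0, 0)   # dominant hand: pointing

        def handle(self, ev):
            # Route by device: the trackball scrolls the document
            # while the mouse moves the cursor, simultaneously.
            if ev.device == "trackball":
                self.scroll_y += ev.dy
            elif ev.device == "mouse":
                self.cursor = (self.cursor[0] + ev.dx,
                               self.cursor[1] + ev.dy)

    canvas = TwoHandedCanvas()
    canvas.handle(InputEvent("trackball", 0, 12))
    canvas.handle(InputEvent("mouse", 5, -3))
    print(canvas.scroll_y, canvas.cursor)  # 12 (5, -3)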
High-end interactive computer-graphics systems have used
this type of interaction for years, but it has not yet penetrated the mainstream
microcomputer market. This is about to change.
The Bus Stops Here
Many of the problems of having a variety of inputs are
logistical: How do you connect this device to that machine? The Apple Desktop
Bus (ADB) is a good attempt to address this class of problem. It provides
an electrical, mechanical, and logical standard for connecting input devices
to a computer. Thus, it becomes easy to mix, match, and change devices.
But perhaps the most important (albeit hidden) capability
of the ADB is its ability to sense and distinguish among different simultaneously
connected input devices. At the recent SIGCHI conference, Dan Venolia and
Michael Chen of Apple's Human Interface Group demonstrated this capability
using a mouse and a trackball together. The result was a prototype utility
on the Mac that supported many two-handed transactions. This is a clear
case of technology that supports human needs and suggests better things to come.
Handling the Pressure
Just using two hands is not enough, however. Another ability
that people have that current technologies don't reflect is the hand's ability
to control and sense pressure. One place where this has been recognized
and used is in electronic musical keyboards. Each key has what is known as
"aftertouch" - the ability to sense how hard the key is being pressed.
Hopefully, aftertouch will soon be standard on mouse buttons, providing
real control over line thickness, scrolling speed, and the speed of fast-forward
and rewind on videos and CD-ROMs. A few manufacturers, such as Wacom and Numonics,
already make pressure-sensitive styli for digitizing tablets.
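Driving line thickness from such a stylus is a few lines of arithmetic.
The sketch below assumes a tablet that reports pressure normalized to the
range 0.0-1.0; the width range and response curve are invented for illustration.

    def stroke_widths(pressures, min_w=0.5, max_w=8.0, gamma=1.5):
        """Map normalized pressure samples (0.0-1.0) to pen widths.

        A gamma above 1 keeps light touches thin, so thick strokes
        take deliberate pressure - like bearing down on a pencil.
        """
        return [min_w + (max_w - min_w) * (p ** gamma) for p in pressures]

    # One stroke, sampled as the pen presses down and lifts off.
    print(stroke_widths([0.1, 0.4, 0.9, 1.0, 0.3]))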
But no matter how well the look, feel, and sound of a
user interface are developed, it still may not fit how you think or how you
work, and it will therefore fail. Understanding these elements brings the
second mirror into focus.
Would-be sages and futurists will tell us that we are in
the middle of an information revolution - a revolution whose impact is
matched only by the one that followed the invention of the printing press
or the industrial revolution. Unfortunately, this is false.
By definition, information is that which informs and can
serve as the basis for informed decision-making. Rather than an information
revolution, the current situation is more of a data explosion. The combined
advances in contemporary telecommunications and computational technologies
have helped to spawn an era where true information is more and more difficult
to find, and almost impossible to find in a timely manner.
Information technologies that deserve the name are less
computational engines than technologies that filter and refine data into
a form where it informs. Just as you want systems to reflect how you hear,
see, and touch (the first mirror), you want them to accurately reflect
and support how you think, learn, solve problems, and make decisions (the second mirror).
The spreadsheet is one of the greatest successes in the
microcomputer world because it fits the way that people think about certain
problems. Rather than generate masses of new numbers, it helps you refine
data into information by enabling you to explore and understand new relationships.
A similar notion is behind one of the emerging "hot" topics of computer
science: scientific visualization. Its objective isn't to make pretty pictures
(although many are) but to render complex data in a visual form that enables
you to better understand the underlying phenomena.
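The essence of the spreadsheet's fit can be captured in a few lines. In the
toy model below (the cell names and formulas are invented), a cell holds
either a datum or a relationship, and changing a datum immediately re-derives
everything built on it - which is exactly what makes "what if" exploration cheap.

    def value(cells, name):
        """Evaluate a cell, following formula cells recursively."""
        v = cells[name]
        return v(cells) if callable(v) else v

    cells = {
        "units_sold": 120,
        "unit_price": 4.50,
        # Formula cells hold relationships, not numbers.
        "revenue": lambda c: value(c, "units_sold") * value(c, "unit_price"),
        "margin":  lambda c: value(c, "revenue") * 0.35,
    }

    print(value(cells, "margin"))   # 189.0
    cells["unit_price"] = 5.00      # explore a "what if"
    print(value(cells, "margin"))   # 210.0 - derived values follow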
Thus far, scientific visualization has been primarily a
means of presentation. Data is rendered and displayed, but the degree of
interaction is minimal (largely due to the computational overhead of the
rendering process). However, as machines become more powerful, such rendering
techniques will be married to state-of-the-art input technologies,
thereby creating rich interactive systems for exploring information space.
Alone in the Corner
Back in grade school, when I misbehaved, I was taken out
of the group and forced to sit alone, usually facing the wall or a corner.
Now that I've grown up and have a computer, where do I find myself? Out
of the group, sitting alone, usually facing the wall or a corner. The reasons
are different, but the punishment is the same.
The designs of the technologies used in today's workplace
have largely ignored the social dynamics (the third mirror) of how people
work. You face walls because the backs of the machines are so ugly and
full of cables that you want to hide them. You are anchored to your designated
position by the umbilical cord connecting your computer to the wall socket.
You sit alone because virtually all microcomputer systems assume that you
interact with computers one on one, face to face.
Instruments of Change
Technologies have had a major impact on how you work,
with whom you work, and who has what power. That isn't likely to change.
What can change, however, is who or what is in the driver's seat.
In the past, work has been automated and technologies
introduced based on what was possible. If a new technology became available,
it was put in the workplace and the organization had to adjust accordingly.
Since routine tasks were the easiest to program, they were the first to
have technological support.
Of all the user-related changes emerging today, perhaps
the most significant is the change from this approach. We are beginning
to realize that rather than the technology dictating the organizational
structure, the organization should dictate the technology. The key to improved
productivity isn't the technology - it's the people and how they work.
I can't overemphasize the importance of this change. No
matter how perfectly your icons and menus are designed, or how well a system
supports you in performing your job, if you are doing the wrong job, the
system is a failure.
For example, putting computers into patrol cars is intended
to help police perform their job. But if the technology causes the police
to devote more time to relatively minor offenses (e.g., unpaid traffic
fines) instead of to major crimes, the system may be a failure. The courts
are clogged with minor offenses, and little has been done to help investigate
the major ones.
A New Breed
The past 10 years have seen the development of a new profession:
applied psychology. Traditionally, psychology has been a discipline that
analyzed and tried to understand and explain human behavior. Now, largely
due to problems encountered in human-computer interactions, a new branch
of psychology is attempting to apply this understanding in the context
of a design art. The shift is from the descriptive to the prescriptive.
Today, a similar phenomenon exists in the discipline of
socio-anthropology. If you want the society and social structures of work
(and play) to drive technology, the obvious place to look for expertise
is in disciplines like sociology and anthropology. Like psychology, these
are traditionally analytical, not design, disciplines. However, change
is coming, and a new discipline is being born: applied socio-anthropology.
Hence, a new breed of anthropologists, such as Lucy Suchman
and Gitte Jordan (who last studied birthing rites in Central America),
are stalking the halls of the Xerox Palo Alto Research Center. They are
studying the structure of organizations and work, with the intent of laying
the foundation for a design art that takes into account the larger social
context. Like psychology, socio-anthropology is becoming a prescriptive
as well as analytical science.
Perhaps these social concerns are most visible in the
rapidly emerging areas of computer-supported cooperative work and groupware.
This is a prime example of the outside-in squeeze. On one side, theory
is growing out of the applied social sciences; on the other, important
enabling technologies - such as LANs, new display technologies, and video
- are maturing.
Architectures like Xerox's prototype System 33 will enable
you to create, save, index, annotate, retrieve, and share documents independently
of how they were created or stored. Human concerns, such as retinal consistency
(i.e., documents' tendency to remain visually consistent) and the reality
of different platforms, will drive the design.
Telecommunications, video, and computer LANs are converging,
resulting in new forms of collaboration, such as the Cruiser system developed
at Bell Communications Research by Robert Root and Bob Kraut, and Xerox's
Mediaspace. By integrating a range of technologies, both systems permit
a degree of telepresence and remote collaboration previously impossible.
Slowly but surely, the emerging technologies are going
to let you come out of the corner to take a full and active role in the
group. As all three mirrors start to work together, they will let you do
what people do best - namely, be human.
Bringing Blue Sky Down to Earth
The danger in writing about technology and the future
is that you quickly fall into the credibility gap. I have used some isolated
examples to support my case for the inadequacy of the GUI to meet the needs
of today and tomorrow. But are they only isolated examples, or is there
some evidence of a new trend?
Evidence for a new approach to user interface design can
be found in machines such as the GridPad, Scenario's DynaWriter, Toshiba's
PenPC, Sony's Palmtop, and Go Corp.'s new laptop. All these machines have
portability and on-line character recognition in common. These shared features
lead the way to more than just a change of interaction style. By being
portable, the machines are freed of the anchor of their power cord. The
technology can go with the worker, rather than the worker going to the
technology. This is an important change.
Similarly, compared to the GUI, the stylus-driven interface
better matches the style of work and skills that people have built up over
a lifetime of work and education. While the systems' recognition skills
are still fairly primitive, this style of interface leads toward a way
of capturing all kinds of spatial and temporal information, such as the
types of figures and annotations found on blackboards and notepads.
Several different techniques have been used for symbol
recognition, including template matching, feature recognition, and neural
networks. An early but elegant feature-recognition technique, called trainable
character recognition, was developed by K. S. Ledeen in 1967. It is described
in detail in Newman and Sproull's classic Principles of Interactive Computer
Graphics (McGraw-Hill, 1973 and 1979).
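Template matching, the simplest of the three, can be sketched in a few lines:
rasterize the drawn symbol onto a small binary grid and pick the stored
template that differs in the fewest cells. The 3-by-3 grids below are toy
examples, not Ledeen's feature-based technique.

    # Each template is three rows of a 3x3 binary grid.
    TEMPLATES = {
        "I": (0b010, 0b010, 0b010),
        "L": (0b100, 0b100, 0b111),
    }

    def recognize(symbol_rows):
        """Return the template name with the fewest differing cells."""
        def distance(a, b):
            # Hamming distance: count cells where the grids disagree.
            return sum(bin(ra ^ rb).count("1") for ra, rb in zip(a, b))
        return min(TEMPLATES, key=lambda n: distance(TEMPLATES[n], symbol_rows))

    # A slightly noisy "I" (one stray cell) still matches "I".
    print(recognize((0b010, 0b010, 0b011)))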
Being mobile still may mean working alone. But the wireless
network communications of the Agilis System point toward a time when mobile
workstations will be able to communicate with each other, and with larger
systems such as servers.
Perhaps nowhere do these concepts come together better
than in the new portable from the Active Book Company in Cambridge, England.
This package has true workstation power (5 million instructions per second
average, 10 MIPS peak) in a portable package powered by the Acorn RISC
Machine's RISC processor. In addition to having a stylus-driven interface
with character recognition, it includes a touch surface that you can use
to "thumb through" the documents you are reading or editing.
The true power and insight of Active Book's machine come,
however, from other emerging technologies, especially the new Digital European
Cordless Telecommunications standard. In mid-1991, there will be a new
pan-European cellular phone network, known as D1, that will have a digital
channel with built-in error correction. Portable workstations like Active
Books will be able to network from anywhere in Europe, even when in motion,
thus greatly increasing the range and scope of both telecommunications
and information technologies.
People should and must be at the center of all these new
technologies. As these technologies evolve, the concerns become more complex
and demand ever greater attention. But I would argue that there are grounds
for optimism. As technologies evolve, so do the methods and theories of
design and analysis. New capabilities are emerging, and if you and I so
choose, we can reap their full potential by designing in human terms.