RIT Scholar Works
Theses: Thesis/Dissertation Collections
1990

CRT-based dialogs: Theory and design
Jonathan Levine
School of Computer Science and Technology
Computer Based Dialogs: Theory and Design
by
Jonathan Levine
A thesis submitted to
The Faculty of the School of Computer Science and Technology
in partial fulfillment of the requirements for the degree of
Master of Science in Computer Science.
Approved by:
Professor John A. Biles
Eugene S. Evanitsky
Professor Peter G. Anderson
Abstract

CRT (cathode ray tube) based, direct selection dialogs for computing machines and systems were apparently a cure for issues like ease of learning and ease of use. But unforeseen, and probably unforeseeable, problems arose as increasingly sophisticated systems and dialogs were developed. This paper describes some of the emerging problems in CRT-based dialog design, develops theories about why they occur, and discusses potential solutions for them as a basis for future research. This investigation also provides a survey of the research into what makes programming languages easy to learn and use.
Table of Contents

1. Introduction 1-1
2. Background 2-1
3. Dialog Theory: Purpose and Nature, Consistency and Motivation 3-1
4. Dialog Theory: Application and Effectiveness 4-1
5. Dialog Theory: Machine Intelligence and Future Dialogs 5-1
6. Dialog Theory: Design Methodology 6-1

Appendices
A. Programming Language Issues A-1
B. Bibliography B-1

CHAPTER 1
INTRODUCTION
Operational Definitions: User Interface and Dialog

A user interface is a medium that converts data into a useful form and displays it. Figure 2 shows a specific instance of a user interface, a machine-to-human display. The user interface provides a window into what a machine does, but not necessarily how it does it. Although it's a display, it may actually hide the underlying complexity of the machine -- or even the machine itself -- from the user.

Figure 2: A Display User Interface (Machine -> Data -> Information -> Person)

For instance, it wouldn't be useful to show someone the gears of a mechanical clock and expect them to easily tell time. Clocks have associated with them a user interface -- the face and hands -- that interprets the gear motion in a useful way and conveys that information visually, and sometimes audibly, to people. When only a user interface exists, the human-machine interaction is non-interactive.
A dialog is a medium for interactive communication, comprising more than one user interface (Figure 3; see Hartson and Hix [p. 8] for a different operational definition).

Figure 3: A Conversation (Dialog) (Machine <-> Data <-> Information <-> Person, in both directions)

When a dialog exists between a person and a machine, the interaction takes the form of a conversation. For example, when typing into an open computer file, the computer contains a lot of data about the file. One user interface provides feedback about part of the data: that the file is open, where in the file the next typed character will go, a display of some number of lines above and below the current location, the name of the file, etcetera. The user provides data to change the text by adding, deleting, copying, or moving characters, closing the file, renaming the file, and so on. This data, given to the machine via the other user interface in the form of commands, is interpreted and sent to the machine as useful information about what to do. The two user interfaces in this example comprise a dialog. Depending on the computer, the dialog can take many forms. But the meaning of the data is essentially the same.
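The two-interface structure of the editing example can be sketched in code. This is a minimal illustration, not any particular system's design; the class and method names are mine. One method plays the machine-to-person interface (feedback about the file's data), and the other plays the person-to-machine interface (commands interpreted into useful information for the machine).

```python
# A minimal sketch of the chapter's editing example: a dialog is two
# user interfaces over the same machine data. One interface renders
# machine data as feedback for the person; the other interprets the
# person's commands as state changes for the machine. All names here
# are illustrative, not taken from any real system.

class TextFileDialog:
    def __init__(self, name, text=""):
        self.name = name            # machine data about the file
        self.text = text
        self.cursor = len(text)
        self.is_open = True

    # User interface 1: machine -> person (feedback display).
    def feedback(self):
        return (f"file={self.name} open={self.is_open} "
                f"cursor={self.cursor} length={len(self.text)}")

    # User interface 2: person -> machine (commands become information).
    def command(self, verb, arg=None):
        if verb == "type":          # insert text at the cursor
            self.text = self.text[:self.cursor] + arg + self.text[self.cursor:]
            self.cursor += len(arg)
        elif verb == "delete":      # delete one character before the cursor
            if self.cursor > 0:
                self.text = self.text[:self.cursor - 1] + self.text[self.cursor:]
                self.cursor -= 1
        elif verb == "rename":
            self.name = arg
        elif verb == "close":
            self.is_open = False
        return self.feedback()      # every command yields fresh feedback


dialog = TextFileDialog("notes.txt")
dialog.command("type", "hello")
dialog.command("delete")
print(dialog.feedback())
```

However the commands are spelled -- keystrokes, menu picks, icon touches -- the two interfaces together make the interaction a conversation rather than a one-way display.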
Operational Definition: Human-to-Human Conversation

A human-to-human conversation is an asynchronous 6-tuple machine including two concurrent hardware platforms [for a similar argument, see Runciman and Hammond]. The 6-tuple is:

Conversation = (transmitter/receiver, transmitter/receiver, voice, sight, preconceived notions, common generator)

Transmitter/receivers: (the two people having the conversation)
Voice: (hearing, speaking, auditory cues)
Hearing: (speech recognition, auditory cues)
Speaking: (verbal output: words and sounds)
Auditory cues: (verbal inflection (tone), sound recognition)
Sight: (gestures, reading, visual cues)
Gestures: (facial expressions, hand and arm movement, body language)
Reading: (written language)
Visual cues: (color, attire, etc.)
Preconceived notions: (partial knowledge, guesses and implications, context perception, prejudices)
Partial knowledge: (part of what will be said is known and part unknown [Campbell, p. 68-69]; the unknown part is what is learned from the conversation; the known part is what is transferred from previous learning and previous knowledge)
Guesses: (about the other person based on previous knowledge, about the context based on previous knowledge)
Context perception: (interpretations and assumptions about the meaning (produced by the generator) of the voice and sight values produced by the generator, assumptions about the context of the conversation)
Common generator: (a common grammar which generates the structure of the spoken words; also generates meaning for the components of sight and voice)
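The 6-tuple above transcribes almost directly into a record type. This sketch is mine, not the author's; the field names follow the text, and the nested groupings (voice, sight, and so on) are flattened into plain tuples for brevity.

```python
# The conversation 6-tuple transcribed nearly verbatim into code.
# Field names follow the text; values are illustrative placeholders.
from collections import namedtuple

Conversation = namedtuple(
    "Conversation",
    ["transmitter_receiver_a", "transmitter_receiver_b",
     "voice", "sight", "preconceived_notions", "common_generator"],
)

human_conversation = Conversation(
    transmitter_receiver_a="person 1",
    transmitter_receiver_b="person 2",
    voice=("hearing", "speaking", "auditory cues"),
    sight=("gestures", "reading", "visual cues"),
    preconceived_notions=("partial knowledge", "guesses",
                          "context perception", "prejudices"),
    common_generator="shared grammar for words, sight, and voice",
)

# A human-to-machine conversation, as the text argues, leaves out a
# large part of the tuple: no shared generator, and (in the CRT-era
# systems discussed here) no voice channel at all.
machine_conversation = human_conversation._replace(
    transmitter_receiver_b="machine",
    voice=(),
    preconceived_notions=("partial knowledge",),
    common_generator=None,
)

missing = [field for field in Conversation._fields
           if not getattr(machine_conversation, field)]
print("components lost:", missing)
```

Comparing the two instances makes the text's point concrete: the machine variant simply has empty slots where human conversation carries data.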
Human-to-machine conversations leave out a large part of this 6-tuple. As a result they leave out a large portion of the data so important to effective communication, which suggests an obvious improvement we can make. But human conversation is full of ambiguity, misunderstanding, and intangibles, and adding components doesn't always help.
Operational Definition: Direct Selection Dialog

A direct selection dialog is a cathode-ray-tube (CRT) based dialog that allows machine or system users to control the machine and to get information about the machine's status. The dialog contains a set of "electronic analogs" or "metaphors," pictures of familiar items (see Figure 5, for instance) representing machine features, and "menus," containing lists of other machine functions. All the objects (called icons) and menu items can be selected by touching the desired object (either with a finger or another pointing device, often a mouse). Selection either invokes and reveals the represented function, or makes the icon available to have some operation (e.g.: move; delete) performed on it. Many of the objects, which are tools the operator uses to do his job, are on the screen most of the time. The kind of direct selection dialogs we will examine are asynchronous in that the user has available a variety of tasks, any of which can be chosen by him at any time. "At almost any point in the work on one task, the end user can switch to another task and, later, back to the first .... Asynchronous . . . dialogue is sometimes also called event-based dialog because end-user actions that initiate dialogue sequences (e.g. clicking the mouse button on an icon) are viewed as input events. The system provides responses to each event" [Hartson and Hix, p. 11].

Icons are often coded by color or shade to provide different meaning about their state. An icon that's lit on the screen, when all the other icons are not lit, often means that icon has been selected (touched) by the operator. This lit / not-lit code is very common. It is the shape of the icon, the outline of the icon (for example, gray and black outlines have different meanings in some systems), what the icon looks like when it is opened, and so on, that describe the object it represents. The graphical code is, in many dialogs, a source of many interesting observations, for it is rich with meaning, and we will look at the meanings of a couple of graphical codes in detail.
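The event-based behavior Hartson and Hix describe can be sketched as a dispatch loop. This is a toy model under my own assumptions, not the dialog of any real system: each user action arrives as a named input event, the system responds to each event independently, and nothing forces the user to finish one task before starting another.

```python
# A sketch of "event-based dialog": each user action (touching an icon,
# opening it, closing a window) is an input event, and the system
# responds to each event on its own, so the user can switch tasks at
# any time. Event names and icon names are illustrative only.

class DirectSelectionDialog:
    def __init__(self):
        self.selected = None        # the one "lit" icon, if any
        self.open_windows = []      # icons opened into windows
        self.log = []               # every event the system responded to

    def handle(self, event, target):
        if event == "select":       # touch an icon: it lights, others dim
            self.selected = target
        elif event == "open":       # invoke the represented function
            if target not in self.open_windows:
                self.open_windows.append(target)
        elif event == "close":
            if target in self.open_windows:
                self.open_windows.remove(target)
        self.log.append((event, target))


dialog = DirectSelectionDialog()
# The user works asynchronously: starts one task, switches to another,
# then comes back to the first.
for event, target in [("select", "document"), ("open", "document"),
                      ("select", "printer"), ("open", "printer"),
                      ("select", "document")]:
    dialog.handle(event, target)

print(dialog.selected, dialog.open_windows)
```

Note that the loop imposes no ordering beyond the user's own: the "document" task is resumed after the "printer" task without either being closed, which is exactly the asynchrony the quotation describes.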
Note on a Pragmatic Approach to Dialogs

"The question of whether computers can think is just like the question of whether submarines can swim."
- (attributed to) Edsger Dijkstra

"The effort of using machines to mimic the human mind has always struck me as rather silly. I would rather use them to mimic something better."
- Dijkstra, 1989, p. 1401

Can a computer imitate human brain function? Are computers intelligent? Will they ever be intelligent? Researchers intensely argue the following sides of these questions:
Computer/brain identity: At the deepest level, brains and computers are the same. If we can determine the brain's makeup, we can build a thinking machine [e.g. Hofstadter, p. 559]. In terms of human-computer interaction, this is the simpler case: Certain components comprise human-to-human conversation. As we include more of these components in human-to-machine dialogs, the closer those dialogs come to human conversation.

Experiential mind: The human mind and function have their basis in biology, and in cultural, historical, and individual experience [Lakoff]. So machines can never model the human brain. For example, no computer could really comprehend a story about a person laughing unless it could laugh. From a human-computer interaction perspective, this is the harder case: Since machines will never be able to model human-human dialog, we need to look in new directions for yet-unknown things to facilitate human-computer interaction.

But, regardless of which is correct, usefulness always determines the direction computer research takes. Money is spent there because money is made there. And it's not clear that it will be useful to have machines modeling the human brain. For instance, is it useful to have a machine that does math at human speed? Possibly not. But Hofstadter [p. 677-678] implies that a machine with human brain capability might do math like a human, that is, slowly. So I will use both models pragmatically to investigate what's useful without worrying whether computers can "swim." Can computers really think? The answer doesn't matter for our purposes.
The Purpose and Method of Investigation

Lucy Suchman calls it "the problem of human-machine communication." It's a commonplace today (sentiments examined in Figures 1 and 4) that smart machines are not that smart. They don't do what we tell them. They do the opposite of what we tell them. They break at just the wrong time because we pressed the wrong key. We can't figure out how to work them. We can't figure out what they're trying to tell us. And so on. Researchers work hard to find the causes of these kinds of problems and eliminate them, and we've made some progress.

Figure 4: Just another day
But, the latest advances have produced a crop of previously unknown, and probably unforeseeable, questions and problems for dialog designers. These include questions about the relationships between dialogs and:

- the nature of work for both individuals and society, including the hows and whys of:
  - what people think about their work,
  - how people see themselves and their roles,
  - what people think about the machines they use at work,
  - how people learn to do their jobs,
  - how people move about in their workspace,
  - what motivates people to do certain tasks and not others,
  - how using machines can be made enjoyable,
  - what makes some people more effective than others at their jobs;
- the definition and meaning of consistency in dialog-behavior design and dialog-graphic design;
- how people come to discover meaning: what makes dialogs (or particular elements of dialogs) meaningful to users, and exactly what roles things like context and the interactions between contexts play in providing meaning to users of dialogs;
- what roles artificial intelligence plays in making machines easier to learn and use; and,
- even such esoteric subjects as whether emotions are computable functions.

I'll examine some of these in detail -- and some in much less detail -- so that future dialog design efforts that face these questions will at least have a better idea of what to look for than we did.
I've tried to suggest possible answers to the questions, or to describe ways in which problems that don't have clear solutions might be managed to the designer's advantage. That this hasn't always been possible is due, in turn, to a lack of currently available technology or to insufficient cleverness on my part.

The overarching theme of this investigation is that, in spite of individual differences in machine operators, and in spite of the different environments they work in, there are psychological similarities, common threads of understanding, and dialogs can promote ease of learning and ease of use to increase productivity in the face of increasing machine complexity. This theme -- that is, how dialogs can help make machines more useful -- is the first thrust of this investigation.
As the second main theme, I'll describe techniques for designing these more useful dialogs.

First, I'll discuss, by example, some of the problems existing in today's dialogs and present assumptions about why these problems exist and why they remain hard problems. The issues I'll address are:

- consistency within and between dialogs;
- the meaning of the metaphors and graphic codes in direct selection dialogs;
- where a user's motivation to learn to use machines comes from; and
- the question of dialog openness and extensibility.

The assumptions are that:

- dialogs and the graphical codes that comprise them are often inherently inconsistent;
- inconsistency is manageable and sometimes very useful; it is sometimes appropriate to design inconsistencies into a dialog;
- context defines or helps define a graphical code's meaning (by graphical code I mean the various looks and states icons and other dialog elements can take on);
- each graphical code should be minimized (contain the smallest possible number of elements) if it is to be useful in more than one context;
- the number of graphical codes should be minimized; as often as possible, various graphical codes should be combined into a single code, whose various elements take their meaning from context;
- in good dialogs, users enforce a consistent meaning onto the dialogs that's more general and useful than the true meaning of the graphics, which may be inconsistent; all users attempt to produce consistent meanings, even where there are none; and, since users find different ways of using systems than designers can envision, dialog designs should enable users to discover meaning on their own;
- all people find certain things easier than other things; discovering what those things are and incorporating them elegantly into a design is a main goal of the dialog designer;
- a good dialog should be invisible (un-apparent) to the user; it should fit seamlessly into his workflow; like a pencil, its use should be so natural that people are virtually unaware of using it;
- machine intelligence is a major aid to making machines easy to learn and use;
- systems should be designed for users rather than having dialogs designed for systems; and, knowledge of the user's task and human psychology should be programmed into the functional code of the machine, not only into the dialog; and,
- the entire machine or system, not just the dialog, must help enable ease-of-learning and ease-of-use.

I'll also discuss methods of incorporating these assumptions into future machines. Finally I'll survey the research into what makes programming languages easy to learn and use.
Beyond Direct Selection Dialogs

Direct selection dialogs were a robust and powerful discovery. But no one has taken the next step, and the great complexity and power of new machines has made them, again, difficult to learn and use. This paper will produce increasingly general and comprehensive definitions of both "user interface" and "dialog." "User interface" will be redefined as a window not only into machine function but into human function as well, that is, a useful display of how people think and behave. Dialog's definition will incorporate this broader definition of user interface and so will itself become broader, to include the threads common to all dialog users: processes to generate meaning and motivation, and answers to questions about how to use the darn machine.

In a study of dialogs, one must remember that much of this is soft rather than hard science, in the same sense that psychology is soft. Music evokes emotion, but how is not clear. In any case composers are still able to produce delightful tunes and people are still motivated to spend lots of money to listen. Similarly, though none of my assumptions can be proven mathematically, they are useful guides through the art of dialog design. I will not prove the hypotheses I present, though I will provide evidence for them as I generate them. My purpose, instead, is to point the way for future research.

CHAPTER 2
BACKGROUND
The First Direct Selection Dialogs

The first breakthrough to modern, direct selection dialogs was the Alto, developed at the Xerox Palo Alto Research Center (PARC) in 1972-73 as a personal computer workstation [Wadlow]. The Alto's design was "biased toward interaction with the user and away from significant numerical processing" [Thacker et al., p. 549-550]. Each Alto had a bitmap display, a pointing device called a mouse (the now-familiar mouse was invented at Stanford Research Institute [English et al.]), and could run up to 16 tasks concurrently. Another important element relevant to work methods was its connection to a 3 megabit per second Ethernet enabling remote filing, printing, and electronic mail. Available for the Alto were word processors, graphics, and other programs that took a quantum leap beyond previous programs of these types. They were menu driven and the direct ancestors of the iconic direct selection dialogs. The Alto was not intended for sale, but in 1978 Xerox gave 50 to Stanford, Carnegie-Mellon, and MIT. In light of subsequent events, this may have been the most expensive give-away in history.
In 1981, Xerox announced the first product based on the Alto, the 8010 Star Information System [Smith et al.]. Star took the concepts developed and refined in years of use with the Alto and went a step further. For ease of learning, Star provided a desktop "metaphor" on its screen, the graphical equivalent of a real desktop. On the "desktop" were other graphical metaphors, pictures of objects ("icons") that one would normally find on a desk or in an office: documents, printers, file cabinets, file folders. Each icon retained its normal, well-known meaning: to the operator it was used for the exact same purpose and in the exact same way as a real folder. To the user, the icon actually was a folder. For ease of use, Star provided a simplified command set that was never hidden from the operator. Icons opened, with a single command, into "windows," a region of the screen containing the contents of the icon. Up to six windows -- more on later versions -- could be opened on the desktop at a time. With a single command, text, graphics, and icons could be transferred between windows.

In spite of its revolutionary nature, and though it's still being produced under the name Viewpoint, Star was a marketing failure for Xerox due to mismanagement and high price. Somewhere in its development cycle, Xerox showed the Alto to a young entrepreneur named Steve Jobs. As the story goes, Jobs left the presentation with a lightbulb on in his head, took the idea back to his company, Apple Computer, and proceeded with his engineers, some raided from Xerox, to develop the Lisa, the linear ancestor of the Macintosh computer.
"
'Office
equipment analystshave
started referring to PARC-stylesystems as
'Lisa-like,' not
'Star-like,' '
noted a reporter. 'Apple's
nextcomputer,
Macintosh,
scheduled to ripen into a commercialproduct
by
theend ofthisyear,couldfurther
identify
ApplewithPARC's ideas. The engineering manager
for
Macintosh camefrom
PARCwherehis last
big
project was a personalcomputer'"
[Smith
andAlexander,
p. 241 - 242].Macintosh popularized the
direct
selection,iconic,
mousedrivendialog
[Hartson andHix,
p. 12]. With alow
price, strong marketing, and lots ofthird-party
applications,the Mac eventuallytookoffand influenced a generation of softwaredevelopersand
Star still providesthe model
for
alldirect
selection CRTbased dialogs.
Today,
10years afterStarwas
developed,
the entire computerindustry
is attempting to movetowards consistent, Star-like environments
for
their machines and networks. Nosignificant
breakthroughs have
occurred indialog
design
since Star.But, theorizing
about
dialogs
anddialog
design
has,
ontheotherhand,
exploded,and theliteratureabout
dialog
design
anddesign methodology
is massive.Early
Dialog
Theory
"There
are alsonospecial entitiescalled 'files'in thesystem
-- Alan
Kay, 1969,
p.127-Early
theoretical -- somesay
visionary,but
that's yet tobe
demonstrated--accounts of what computer
based dialogs
might look like in thefuture
appearedwith Alan Kay's Dynabookconcept
[Kay
etal.,1976]
and Nicholas Negroponte's TheArchitectureMachine
[Negroponte].
Negroponte's The Architecture Machine was, in Brand's words
[p. 148],
the"founding
image" thatkicked
off the endeavor thatbecame
the MIT Media Labwhere,
today,
onecan see somanysolutions and potential solutionstouserinterfaceand
dialog
design
issues. Negropontesawthe machine and the user as partners, nota master and slave, together
achieving
what neither could achieve alone.Negropontewrote:
"Imagine
a machine that canfollow
yourdesign
methodologyand atthe same time
discern
and assimilate your conversationalidiosyncrasies. Thissame machine, afterobserving your
behavior,
could
build
a predictive model of your conversationalperformance .... The
dialog
wouldbe
so personal that younot understand yours. In
fact,
neither machine wouldbe
able to talkdirectly
to theother.Thedialog
wouldbe
so intimate--even exclusive -- that
only mutual persuasion and compromise would
bring
aboutideas,
ideas
unrealizableby
eitherconversantalone"
[p.
12-13]."To accomplish all
this,
Negroponte posited ahigh degree
of intelligence in themachine, a
high degree
ofintimacy
between
machine and user, and a richdialog
between
them"[Brand,
p. 149].In 1968
[Rose,
p.60],
AlanKay,
building
onhis
own doctoral dissertation[Kay,
1969],
and Papert's[1970,
1972]
and Feurzeig's[1971]
importantwork with childrenand the Logo computer
language
[Kay,
1976,
p.8; Rose,
p.61],
conceived of theDynamic Notebook
("Dynabook"),
a notebook sized computer that everyone fromyoung childrento professional
businessmen
and engineerswould carrywith them. It wouldbe
simple enoughfor
anyone to use, powerful enough for the mostsophisticated applications, and apparently embody a style similar to the one
Negroponte envisioned: an intelligent and useful helper/ partner
[Kay,
1976;
Rose,
1987]. His optimistic prediction
[Kay,
1977]
that thistechnology
wouldbe
availableand useful inthe 1980's
has
not proventrue.Kay
stillholds
ontohis
influential vision[Kay,
1984;
Rose, 1987]
but
recentlyhe has
more cautiously suggested that we stillneed
leaps
inboth
artificial intelligence anddialog
technology
to achieve aDynabooksituation
[Fisher,
1988].A numberof researchers were influenced
by
Metaphors We LiveBy,
Lakoff andJohnson's
[1980]
thorough examination of the pervasiveness, usefulness, andbased
dialogs.
Lakoff and Johnson's work gave an important theoreticalunderpinning
toworkthathad already begun.
Dialog Design Principles and Methodology

". . . The principles must yield sufficiently precise answers that they can actually be of use: Statements that proclaim 'Consider the user' are valid but worthless."
- Donald A. Norman, 1983, p. 1

In 1983, Halasz and Moran put something of a damper on the theory, if not the implementation, of the usefulness of graphical metaphors in dialogs by publishing a number of examples of cases where metaphors hindered rather than helped users. Their solution was conceptual models -- displaying to the user the internal workings of the system rather than hiding them behind a metaphor. But that required a new method of developing products and a special simplicity and coherence in system design. No one has actually taken that step, but the theoretical foundation exists.

In 1983, three PARC researchers, Card, Moran (the same Moran), and Newell, attempted to take the rapidly developing knowledge in cognitive psychology and create an "applied information-processing psychology" [p. 3]. Their goal was to help move what was known about human cognition systematically into useful applications in human-computer interaction. They attempted to define how products should be developed for the user within the technological constraints. In their conception, ". . . the primary professionals -- the computer system designers -- [should] be the main agents of applied psychology. Much as a civil engineer learns to apply for himself the relevant physics of bridges, the system designer should become the possessor of the relevant applied psychology. Then and only then will it become possible for him to trade human behavioral considerations against the many other technical design considerations of system configuration and implementations. For this to be possible, it is necessary that a psychology of interface design be cast in terms homogeneous with those commonly used in other parts of computer science and that it be packaged in handbooks that make its application easy" [p. 12-13].

To help with the definition, they set up experiments to explain why what worked worked, and why what didn't work didn't. They show, for example, that the mouse is the best pointing device for text editing, and why that's true. It was a significant work that helped focus research in human-computer interaction. Unfortunately, as I've mentioned, no one, except for the Star project designers, has actually designed products by developing the entire system from the user's point of view and, perhaps because of this, dialog designs have not significantly changed in many years. Simon [1987] discussed one view of why this happens in his blistering criticism of the human factors profession, "Will Egg Sucking Ever Become a Science?"
Simon believes that human factors researchers employ "a hodgepodge of quick fixes that evolved over the years into a paradigm that is taught and employed as sacrosanct, when in fact it is woefully inadequate and frequently incompetent when used to obtain the information needed to understand, design, and control complex, real-world man-machine systems. [As a result, engineers,] who used to seek out human factors data, have learned to ignore it. They have learned that our quantitative answers are shams of quasi-precision and require so much qualification that in many cases the engineer's judgment would probably be equally as good without it."

Parenthetically, the article's name came from the late Admiral Hyman Rickover's response to a proposal for a major human factors program in the research and development of navy ships. Said the Admiral, "[the proposal] is about as useful as teaching your grandmother to suck an egg."
Dialog Design Criteria

In the early 1980's, the Air Force, Army, Navy, and NASA began work on a Handbook of Perception and Human Performance. It was an analysis of all the literature on user interface design up to that point. The result was a 3000 page document, divided into 65 subareas and designed as a usable presentation of behavior data to design engineers. The work was summarized in Shaw and McCauley [1984], which presents a good discussion of the major guidelines for (and demonstrated the vastness of the published data on) dialog design. One of the main results was: "'Know thy user' must go beyond mere identification and stereotyping of the user population, especially when these data are obtained through indirect means or logical conjecture. Without direct contact between designers and the user groups, underestimation of the diversity and capabilities of the user groups can occur. Interaction between designers and the users can provide invaluable insights into the differences between designers of systems and users of systems" [p. 51].
In virtually every book I've read on design criteria, there is explicit criticism of the methodology computer programmers use to design dialogs for their programs, and the user models on which their dialogs are based. Heckel's [1984] easy-to-read book looks for design lessons in what people enjoy, even love, in a variety of other media including film, literature, animation, magic, even the culinary arts. He says: "The general reader need not be afraid that this book is too technical. Indeed, such a reader has a distinct advantage over the computer experts in that he has less to unlearn" [p. xii].

Another example among the vast literature on dialog design criteria, and perhaps more typical of the bulk of the work in this area, is Lin Brown's [1988] straightforward series of recommendations. They include specific examples of what to do and not do in such design areas as display formats, nomenclature, color usage, graphics, information systems, status and error messages, system response time, data entry, input and output devices, etcetera. The book contains good recommendations. But, as Card, Moran, and Newell warn, and as we will see, "there is more to human-computer interaction than can be caught with checklists" [p. 8].
I'll also mention here two prominent researchers (though there are many others) who've each published a large body of work: Donald Norman [e.g.: 1983, 1988] and Ben Shneiderman [e.g.: 1987, 1988]. Both these men have been influential and active in developing, testing, and encouraging what have become standard dialog design principles.
Recent Dialogs

There have been a number of attempts at providing programmers with graphical programming languages. One of these, the Alternate Reality Kit (ARK), was developed at PARC and "provides users with the ability to change the laws of physics" -- gravity, the speed of light, electrical charge in objects, etcetera -- "and move objects around the planet" [Smith, 1988, p. 6]. Randall Smith, ARK's creator, has found this useful for doing simulations and for researching how accurate people's perceptions of reality actually are. One interesting element of ARK is that it provides an example of the potentially exclusive nature of ease-of-learning on the one hand, and increased usefulness on the other. I will have an opportunity to go into this question later.
Another graphical programming language is Trillium [Henderson, 1986]. Trillium, written in Interlisp-D (Xerox's version of LISP) and developed in the early 1980's, is an environment in which designers with no programming background can design and test dialogs. It is a dialog for developing dialogs. It was influential, and the principles developed by the Trillium designers and users have been incorporated into a number of environments for producing dialogs, including, perhaps, Apple's HyperCard software and the dialog development language on the NeXT machine.

There are problems with graphical programming. For example, in a videotaped presentation of ARK, Smith points out a large number of graphical objects with complex connections that together look like a morass but that could be written in only a couple of lines of code. And, on the other hand, he showed large amounts of complex code that would be very difficult to develop in graphical form. In both cases, standard programming is considerably simpler. These problems inhibit the acceptance of graphical programming languages, and language designers have not yet solved them. In Trillium, it turned out that for dialogs of any significant complexity, users still had to learn to write LISP code. In this case, Apple's HyperCard software has solved some of this problem, making dialogs easier to produce. But, even in HyperCard, complex interactions and behaviors must be programmed by writing scripts.
In the mid 1980's Henderson and Card [1986] and others [Henderson and Card, p. 213-217] found that users of word processors expend significant effort (they "suffer" [p. 217]) continually closing or moving one document window to get information in another, due to small screen size. On the other hand, people doing the same task with pencil and paper tend to spread their documents out on large tables so none are hidden. Using data gleaned from studies of the analogous "thrashing" phenomenon in virtual memory systems, they devised a solution called "Rooms." Rooms is simply a multiple desktop environment where each room is a new desktop. The user can create and set up each room separately for each task. He might create a separate "mail room," for instance, where he only does tasks relating to electronic mail. Each room is significantly less crowded than a single desktop. That means there is more screen space to display the data required to do each task since documents not relevant to that task are in a different room. In a videotaped presentation of Rooms, Henderson claimed that, after using Rooms, going back to a single desktop apparently feels as constraining as returning from a desktop dialog to a line editor.

Future of Dialog Design

For a look at the future of dialogs see Brand's book, Inventing the Future at MIT. In it he discusses (with some awe) such obscure and remarkable topics as "sincere eye contact with a computer" [p. 145]. Artificial intelligence will be increasingly important to dialog design: the machines will have to pick up some of the slack as the feature sets explode. And there are many exciting ideas floating around. But, in a way, the yet-unrealized works of the early visionaries, Kay, Negroponte, and a few others, still provide the best picture of how people will, in the future, interact with computers and computer-based systems.

CHAPTER 3
DIALOG THEORY: PURPOSE AND NATURE, CONSISTENCY AND MOTIVATION

The Purpose of the User Interface
As a teacher of BASIC, I always taught the Print statement first, not because it's easy but because unless one can see a result, the most sophisticated calculations are useless (and, in this sense, every programmer is a user interface designer: Print (along with Input) statements define the dialog for a BASIC program). At the same time, Print has a natural meaning, and so is an intuitive, simplifying user interface to an underlying, hidden, and far more complex series of commands, the language of the hardware: the machine code. With this example we can refine our understanding of the purpose of the user interface and the methods user interfaces use to achieve their purpose. User interfaces:
- hide underlying complexity, perhaps even entire machines, from the users to decrease learning time and promote ease of use; and
- make the useful and intended results of computations obvious, to decrease learning time and promote ease of use.
These boil down to increased productivity for the user.
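The observation that Print and Input alone define a BASIC program's dialog can be sketched in a modern language. This is a hypothetical Python translation, not code from the thesis: the prompt and the printed result are the program's entire dialog, while the computation stays hidden.

```python
# The I/O statements ARE the dialog: everything between them is
# hidden machinery, just like Print/Input in a BASIC program.
def temperature_dialog(read=input, write=print):
    raw = read("Temperature in Celsius? ")  # the Input half of the dialog
    fahrenheit = float(raw) * 9 / 5 + 32    # hidden computation
    write(f"{raw} C is {fahrenheit} F")     # the Print half of the dialog

if __name__ == "__main__":
    temperature_dialog()
```

Injecting `read` and `write` just makes the sketch testable; the point is that changing only those two calls changes the whole dialog without touching the underlying computation.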
For example, machines would be difficult (high-level programming language), impossible (clocks), or even dangerous (light switch) to use without a user interface. In each case, productivity decreases without a user interface. (Of course, it's not humanistic to consider accidental death a loss of productivity. But that's exactly one way of looking at it.)
The Purpose of Dialogs
Dialogs retain the general user interface goals: ease of learning, ease of use, and increased productivity (speed of accurate use) for the user. However, a dialog is more helpful than the user interface alone in achieving these goals because, as a special case with more defining elements, it is more specific in all aspects. And one of the primary human factors principles in designing for usability is to always be specific instead of general [Heckel, p. 74; Smith et al., p. 248].

Examples: User Interface and Dialog Interaction

The
home telephone displays a simple user interface, an address specifier (the "dial") and a talk/listen device (usually a handset), that hides vast complexity. The network software, the hardware, the path the voice takes, and the math behind it all are invisible to the user. The dialog consists of dialing and listening for tones, ringing, busy, or disconnected-destination messages, and speaking with the operator. Without its address-specifier user interface, phones would not be a product. It's the "dial"og that makes it convenient. Notice that over the last four decades the user interface hasn't changed (very much) while the dialogs have become increasingly sophisticated1 and, not coincidentally, complicated.
Programming languages are user interfaces for machine code. Given that there is a minimal set of required elements in programming logic (if-then-else, loop, sequence), it becomes a dialog problem to represent these structures in simpler ways for easier interactions during design, coding, debugging, and testing. Assembly code, BASIC, FORTRAN, Pascal, C++, and graphical environments are all dialog-based attempts to enable programmers to more simply use the same old logic.

1. For example, one might need to enter 15 or 20 digits to make certain connections, like overseas calls.
Previously, machine code made programming an obscure art available to a few experts, and even they could not write very complex or large programs without being overwhelmed by complexity. Advances in programming languages made programming a useful activity, and money was spent on these efforts for exactly this reason. Today, the user interface is a CRT or flat panel display. The dialog is the interaction between the user and machine on the user interface as code is written.

The standard analog clock is a user interface to time-of-day. Although it's easy to
use, it's not easy to learn, for it is surprisingly complex. For example, the "2" on the clock face is sometimes a two and sometimes a ten, depending on which hand points to it. Sometimes it's even a fourteen, if one is in the right country. This flies in the face of all modern assumptions about user interface design: it is inconsistent and potentially confusing. But we spent years as children learning how to use clocks, and we "error handle" the confusing parts unconsciously. So, the designers get away with it.
Note that the time-telling mechanism is pure user interface without dialog, for there is no interaction. The setting mechanism, on the other hand, requires a dialog (for more on the meaning of clocks see Smith [1987]).

Now,
I also want to extend our definition of user interface beyond machine function and use it in the sense of a window or display of any function. Natural language is a user interface to thought. Each language represents concepts perhaps unknown in cultures using another language. For example, Inuit has, as I recall, 15 or more words for snow. English speakers don't usually distinguish between these different forms of snow except maybe between wet and dry snow, and it's likely that most of us don't even know about the other distinctions.
We can take this further and say that mathematics notation is a user interface, for it too is a language and a convenient window to thoughts of a particular kind.2 Yet another conceptual user interface is art, a user interface to human emotions. I don't want to be accused of taking all this too far, but I want to use this thinking below and later when I examine Walt Disney as dialog designer.
History
Speculatively: the first dialogs were conceptual. Languages were windows into thoughts. Another early conceptual user interface was mathematics, beginning with counting3 and progressing toward formalism. A third was art, a user interface window, and perhaps a dialog, to the emotions. As language, math, and art seem to be among the defining elements of the human being, we could call these the "natural" user interfaces, parts of the "natural" mind.
Direct selection dialogs are the oldest kind of machine dialogs, and were much more direct than they are today. In early industrial revolution machines, water wheels for instance, users dealt with obvious relationships between the controls and the machine: stop pushing and the machine stops; or, pull the lever, watch the gears disengage, hear the slowing machinery. More recently these relationships were increasingly hidden from the worker. Gears disappeared into "black boxes" or became too complex to understand by looking at them. The machines seemed more mysterious, and understanding how they worked became more difficult. Early programmable machines, moving dolls, the Jacquard loom, and various arithmetic engines with their complex mechanical arrangements of gears and rods, are good examples.

2. Perhaps the existence of language is the user interface and the use of the language in conversation is dialog. This sort of broad definition suits language, which itself is so powerful, clear, and also ambiguous.
Even with the development of electronic computers, direct selection remained dominant for a while. Machine language limited programming to an elite group who directly manipulated the hardware. But it was mind-breaking work to code more than small functions. What's more, for the layman, the level of abstraction from the functioning hardware was even greater than merely hiding gears in a box. Now there were no gears, no obvious physical representation of the work-in-progress to conceptualize. The answers to some of these problems were the user-interfaces-to-the-hardware: assembly code, operating systems, and compiled languages were all attempts to control the hardware through simplifying, interactive filters, that is, dialogs.

Interestingly,
though the designers of the early languages were interested in creating something that was easy to use, the users of the languages, the programmers, were not. The simpler programming languages enabled a torrent of software that was a boon to industry and replaced billions of secretaries, but was hard to learn and use. So, while the new programs helped to make money, they also wasted money. Difficulty-of-use was not recognized as a problem for quite a while.
In 1978, Grace Murray Hopper put it this way:

"There was also the fact that there were beginning to be people who were unwilling to learn octal code and manipulate bits. They wanted an easier way of getting answers out of the computer. So the primary purposes were not to develop a programming language, and we didn't give a hoot about commas and colons. We were after getting correct programs written faster, and getting answers for people faster. I am sorry that to some extent I feel the programming language community has somewhat lost track of those two purposes. We were trying to solve problems and get answers. And I think we should go back somewhat to that stage. We were also endeavoring to provide a means that people could use, not that programmers or programming language designers could use. But rather, plain, ordinary people, who had problems they wanted to solve. Some were engineers; some were business people. And we tried to meet the needs of the various communities. I think we somewhat lost track of that too."

Maybe we can attribute this to the fact that these programs made more money than they wasted. Anyway, programmers heard complaints. Eventually they began to listen.

In the early
1970's, Xerox researchers devised the direct manipulation dialog for the Alto computer [Thacker et al; Wadlow], consisting of menu selections on a CRT and a pointing device, the mouse. In the late 1970's, graphical representations (called icons) of the objects the user was manipulating replaced words as the main access element of the dialog display in a product called Star [Smith et al]. Icons are graphical metaphors for familiar objects (Figure 5).
They're used to map properties of familiar objects (e.g. folders inside other folders) into analogous properties of the less familiar objects the computer user deals with (e.g. tree-structure filing), and thereby to hide unfamiliar or complex elements of the less familiar objects. With Star, direct selection returned as the main method of manipulating complex machines.

[Figure 5: icons, such as Printer and Document, as graphical metaphors for familiar objects.]

The Nature and Function of Machine-Based Dialogs
"There's no sense in
being precise when you don't even know what you're talking about."
- (attributed to) John von Neumann

We can see this trend as a synthesis of the "natural" (conceptual) user interfaces and the older direct selection machines of the early industrial revolution. Instead of visual and auditory cues directly to hardware, there are layers of abstraction from the hardware. But instead of confusing, the abstractions are pressed into service as helpers. They are given physical forms that represent in "natural" terms the work that the user is attempting to carry out, as if they were levers directly controlling hardware on an ancient machine. In this sense the abstractions incorporate, rather than hide, the natural human thinking (that is, the things everyone already knows) about the task. Like the BASIC Print statement, the iconic abstractions produce a simpler interactive task (a dialog) due to their natural meaning, and each is a simplifying user interface to the hardware.
We're now prepared to produce a more complete definition of "dialog": when a machine contains a user interface to the "natural" mind (the things everyone already knows) and a user interface to the system (hardware and software), and when the two interact to achieve the same goals as a single user interface, a dialog exists.
A human-to-machine dialog is a manifestation of user requirements, an attempt to model human psychology and behavior in a particular user population, and to link that model to the hardware's function to produce a (one hopes) simple interaction. Psychology has advanced a lot, but most of it is still mysterious, so I'll heed von Neumann's warning as I proceed.
Macro 5090 Dialog Coding
Figure 6 pictures the Xerox 5090 duplicator dialog. The user touches icons on the CRT to program the machine. File cabinets contain file folders. Each file folder represents a job type and contains scorecards. The scorecard contains the machine features available in its file folder. By selecting a scorecard icon, a work area, containing the programming options for that feature, appears to the right. Each scorecard icon displays its current programming so that the user can always see the programming for the entire job. So, scorecards are menus of machine features, and a feedback mechanism describing what the user will get when he presses Start.

In "Current Program, Standard" the user can program a standard copying job with features like Reduce-Enlarge, Sides Imaged, Copy Quality, and so on. In the scorecard, the user selects a feature he wants to program and then selects the desired settings in the work area. He repeats those steps for each desired feature. Actually, every machine feature is programmed in this way, so this isn't too interesting. More interesting is what happens to the icons when the user selects.
is what happens to the icons when the user selects.- 4J
Q
O <f>
rsi
1 c
o til o <
o
X
B
o
CJ < 70
<D
c -O ^
0
i
<2a
(0*~ O
w o W
00
IN ,_
OO r- ^
s
**MO
-z
-ac
-a 3
^
kirn
\Sl
^7
0
^7
> <
"I
iD
rvi
Micro 5090 Dialog Coding
Each icon's meaning is defined by 5 dimensions of coding on the 2-dimensional CRT. The icon's:
- look (the graphic form used to represent a machine or dialog feature),
- location on the CRT in the macro metaphor (file folder) structure,
- shadow state (visible or invisible),
- color (full color or ghosted), and
- an implicit historical dimension (how it achieved its current state).

The look of an icon
depends either on its relationship to the filing metaphor (a file folder tab, for instance) or on the machine feature it represents. We also defined an icon's meaning in terms of its location. As we'll see, an icon in a scorecard has a completely different meaning from the same icon in a work area. The location also helps define the icon's context. For instance, only file folder tabs can appear in the area on top of the file folder. When an icon has a shadow, it's available for selection. If an icon has no shadow, selecting it has no effect.4
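The five coding dimensions just described can be modeled as a small record. This is my own illustrative sketch; the field and value names are not identifiers from the 5090 software.

```python
from dataclasses import dataclass

@dataclass
class IconState:
    look: str         # graphic form, e.g. a file-folder tab or a feature symbol
    location: str     # place in the macro metaphor, e.g. "scorecard", "work_area"
    shadowed: bool    # shadow visible -> the icon is available for selection
    full_color: bool  # full color vs. ghosted (outlined)
    history: str      # how it reached its state: the implicit fifth dimension

    def selectable(self) -> bool:
        # Selecting an icon with no shadow has no effect.
        return self.shadowed

reduce_enlarge = IconState("feature", "scorecard", True, False, "machine default")
print(reduce_enlarge.selectable())  # -> True
```

Keeping the historical dimension as an explicit field reflects the text's point that how an icon reached a state is itself part of its meaning.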
An icon can also be full color or ghosted (outlined). On the surface, one would say the full color code means that the icon (or the feature the icon represents) is "on" and the ghosted code means the icon is "off." But the reality is much less straightforward.
Inconsistency 1: Internal Variation in Meaning
The icon states along the color dimension (full color/ghost) have different meanings in different contexts. A full color
- file cabinet means: "You are in this pathway."
- file folder tab means: "You've programmed the machine for this [the name on the tab] particular class of input and/or output paper."
- scorecard tab means: "You've selected a set of features of this [the name on the tab] category which you can now inspect or program."
- scorecard icon means: "This machine feature is on. You'll get the effect this feature provides if you press Start."
- work area icon means: "You've programmed this value for the currently selected feature."
There are other interesting variations. For example, a ghosted scorecard icon means the feature is off. But often, in order to turn that feature off, a work area icon (usually called "Off") must be full color ("on").

Inconsistency 2: Internal Variation in Effects

Deeper and more
interesting variations in meaning occur when we look at the fifth dimension, how an icon gets to a state, instead of just the meaning of a state. For example, different icons reach different states by the same user action. Selecting a ghosted scorecard icon will not make it full color. That's because the user has not programmed the feature by that action, and it will not be applied to the job if he presses Start. Touch a ghosted work area icon, on the other hand, and it will go full color; with that work area selection, the related, ghosted scorecard icon goes full color.
How an icon achieved its current state can affect its meaning enough to alter user behavior. Now, the result or effect of a user action may be an end in itself (e.g. pressing Start, or programming Reduce-Enlarge in a program saved for future use (called Program Ahead)) or a means to some other end (e.g. programming Reduce-Enlarge for the Current Program). The latter case incorporates an abstraction, a separate step in achieving the desired result, so its effect on the result isn't necessarily completely clear.
Also, that particular medial action may affect, or be affected by (perhaps in unpredictable ways), other medial steps on the way to the desired result.5 On the other hand, the former achieves a direct and (usually) complete result.
complete result.The distinction between these is important for each imposes different meanings
on the same
dialog
coding of the results of the identical action. This changes themeaning of the feature selection and how it relates to the application of that
feature,
contributing to a major barrier for novices,thefeatureapplication problem.Thefeature application problem can becharacterized
by
thefollowing
example:It's one
thing
for novice programmers to correctly state what a discrete structure like aloop
or a conditional is. It is anotherthing
entirely to apply the appropriate structures to algorithms and in the correct order so that the program will solve the problem. Experienced programmers can apply these structures efficiently becausethey
have a largebody
ofknowledge,
have
itefficiently organized, and can use the knowledge to generalize about solutions and constraints. Students who did poorly in my
5. Take, for instance, the medial step of programming Tabs as the copy stock in Current Program. The 5090 then automatically programs a half-inch image Shift without user intervention, so that printing can be done on the tab portion of the Tab stock. Users have told me that they sometimes program Shift by programming Tab stock instead of programming the Shift feature, even if they're not using Tab stock in that job. They do it because in some instances it's apparently
introduction to programming courses could explain what a loop or a conditional was. But they had trouble applying those structures to create correct programs. See Appendix A for further discussion of the feature application problem.

But It's OK: No One Notices, and Graphical Elegance

This is all a
bit disconcerting to the designer because, after agonizing over the coding inconsistencies, it turns out that no one notices them. Like the analog clock designers, we got away with it. And there's a big advantage: "Graphical elegance is often found in simplicity of design and complexity of data" [Tufte, p. 177]. And we did exactly that: by overloading the meaning of some of the coding dimensions without perceived inconsistency, we minimized the required number of graphical codes.6 This led to a simple design with a complex message.
We've all been exposed to designs we perceive as inconsistent. That means that the kind of economy of code we achieved on the 5090 does not happen automatically. The important point is: there exists a "line of inconsistent meaning." Below the line, inconsistent meanings are not perceived, and economy of coding and its corresponding elegance is available to the designer. Above the line, the inconsistencies will confuse and inhibit operator performance.
6. We perceive this kind of economy as elegant because of the surprising psychological connections it makes and because it saves on "psychical expenditure." For a remarkable discussion on pleasure due to psychical economy, see Freud, 1960, especially p. 41-45 and Chapter IV. Also, see the
The Location of the Line of Inconsistent Meaning
The 5090 user has to remember only two codes on the color dimension. The meanings in the different contexts are analogous: yes I'm here, no I'm not here; yes I'm on, no I'm not on; yes I'm going to use this input, no I'm not going to use this input, and so on. Now, what we've identified is a more general, more abstract meta-meaning of the coding which is easy to remember and, in some natural way, maps to each context: full color = yes. Ghosted = no.
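The collapse from five context-specific meanings to one binary meta-meaning can be made concrete. The per-context meanings are paraphrased from the list above; the table structure and names are my own sketch.

```python
# Five contexts, each with its own full-color meaning (paraphrased
# from the scorecard discussion above), all abstracted by one rule.
FULL_COLOR_MEANING = {
    "file cabinet":    "You are in this pathway.",
    "file folder tab": "You've programmed the machine for this paper class.",
    "scorecard tab":   "You've selected this category of features.",
    "scorecard icon":  "This machine feature is on.",
    "work area icon":  "You've programmed this value.",
}

def meta_meaning(full_color: bool) -> str:
    # The only rule the user must remember: full color = yes, ghosted = no.
    return "yes" if full_color else "no"

# Memory load: 2 meta-meanings instead of 2 meanings per context (10 total).
print(len(FULL_COLOR_MEANING) * 2, "->", len({meta_meaning(True), meta_meaning(False)}))
# -> 10 -> 2
```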
One abstracts meaning from the code and context. Maybe users abstract the meta-meaning from the code and context, completely bypassing the true meaning. This meta-meaning is completely valid, and though it doesn't tell the whole story, it reduces the user's memory load, because now there are only two meanings to remember instead of two for each context. This makes the code more useful, because reduced memory load translates into improved performance [see Miller].
Miller].The designer'sjob
is,
as Heckel points out, to provide the specific meaning, butnote that the user provides the meta-meaning. In some way the designer has
enabled this abstraction to take place. What is the enabler? When the users are
given a consistent code like full color,
they
look fora single meaning to cut across allcontexts. In this case the single meta-meaning
"yes"
was perhaps so natural that
novice usersprovided itunconsciously.
What if the color dimension had three or more code elements for each context rather than two? The user would need to remember a lot more meanings for each context, and performance would drop [Broadbent]. Also, it would be harder and less natural to abstract a single meta-meaning. So, we can say that the "line of inconsistent meaning" is drawn where a minimized (binary) code can represent a general and consistent meta-meaning used in pl