
My Letter To Clifford Nass

November 10, 1999

Hello Professor Nass. My name is David Weekly. I am the student who approached you after class just now to ask you what would happen if I disagreed with you. I am a junior in Computer Science here.

I realize you have hardly had time yet to raise objectionable points, but what I am disagreeing with is the tack that I see you (potentially) taking in the class. I think it is a bad idea, for technical and sociological reasons, to try to make computers interface with people in a human-like way. That’s not to say that I don’t think computer interfaces should ultimately be intuitive; quite the contrary. But I would argue that intuition is cultivated by environment and habit. The mouse, for instance, was an entirely unintuitive device; people stepped on it, raised it into the air, and (in one of the Star Trek movies, if you remember) spoke into it before it was clear what it did. And yet the vast majority of Americans today can use a mouse with alacrity, almost as an extension of themselves. The point here is that the mouse had little direct analogy to anything commonly human and was quite unintuitive from the start… but the mouse was designed in such a way that people could adapt to it quickly.

So my argument is that we need to make interfaces that one can adapt to rapidly but that are not necessarily intuitive or human-like from the start. Having made this point, I will go further and say that I do not believe making computers human-like is wise. As Luddite as it may sound coming from a CS major, replacing humans with automatons in human-human interaction scenarios (e.g., a restaurant, ticketing, phone operators) will rarely, IMHO, make the world a better place. It will be a more efficient place, but one replacement leads to another, and as sure as day we will make this world a miserable and lonely location if we only think about efficiency. Please note that I’m not talking about factory work here, or non-human-human scenarios.

If we interface with the computer as we do with reality, our perception of reality is altered. The consequences of this must be examined before we rashly rush in with the latest AI, 3D, and multiplayer technology to produce immediately compelling and intuitive interfaces. If we can talk to a computer like we talk to our friends, then we may start talking to our friends like we talk to our computers. We become frustrated with the limits of reality and stop being able to truly appreciate it. (How many times have you been annoyed at not being able to ‘grep’ a book in your hands?)

I am aware that many of these statements are broad and overarching, perhaps generalizing a little too much and overlooking important exceptions. At the same time, I believe there is a fundamental truth to them that needs to be considered. I see the computers of the future optimally having interfaces that we today might find complex and alien, but which adapt well to how humans act and think and also have an awareness of the skill, preferences, and emotions of the user. I do not see ‘robotic pals’ or even ‘virtual pals’ (s/pals/agents/ or whatever your preferred word is) as a desirable future. As cute as the Office Assistants are, I despise them for trying to be lifelike. Consistency should be king in interfaces, and having a help system that is inconsistent with the rest of the operating system, just to provide a feature that the vast majority of users dislike, seems to have been a poor decision on the part of Microsoft.

And what’s with having that little moving pen at the bottom of a Word document while you type? Were they too dumb to realize how distracting that is, while providing no useful functionality at ALL?

Well, that’s my $0.02. Maybe I’m just another naive student, but I feel my ideas deserve at the least a solid rebuttal before I’ll back down on them.

Yours,

David E. Weekly
