
Rethinking Computer Interfaces

January 31, 2001

The first dot-com boom has come and gone and we now find ourselves

in the gritty interim period, where only innovation, cunning, and

realized profits will save us and bring us back to prosperity. Gone,

thankfully, are the lucrative days when anyone with a webpage could

call themselves an “ebusiness” and achieve an obscene level of

rapport with investors. Many new business models were tried; a handful succeeded, but most failed.

What I find particularly sad about the situation is that there have been only a handful of attempts to really think outside the box in terms of the computing experience and the user interface.

Some of this is understandable, since so much of what was new was being thought up on the web, and it was natural to think of the user experience in “HTML mode.” Because HTML was designed only for document formatting (and not for program-like user experiences), the set of operations we can perform using web applications has actually decreased. There’s only so much you can do in the way of novel

interfaces if they have to be in HTML and rendered in a web browser.

Some creative folks have figured out how to give us a bit more power

in these experiences, such as having popup calendars to fill in a

date, but even such basic interface interactions as “drag and drop” or uploading a whole folder at a time are impossible with HTML. HTML-centrism has really narrowed the possibilities for user interfaces; the push to shift all local applications to web applications has reduced the potential richness of user experiences.

To its credit, Macromedia has done a lot to attempt to fight this

battle and enable genuinely new and interactive experiences on the web.

However, such experiences have been primarily focused on entertainment

and artistic applications. There aren’t very many practical

Shockwave or Flash applications out there. Part of this has to do

with the fact that although both enable rich interactive experiences,

neither is designed as a full API for programming fully functional

applications. They make it easy to display data, but not to obtain it

or to interact with resources present on the local computer.

Thankfully, there is at least one trend that may bring hope for new user

experiences: peer-to-peer (P2P) computing. With most P2P computing

models, a program has to sit on the desktop that gives you access

to information through a network of peers, as opposed to a web server.

Since each P2P network uses a different client access program (and

indeed some of these networks have multiple clients!), there is a rich array of opportunities to create novel end-user experiences.

We can only hope that some companies will take advantage of this

opportunity to experiment with making novel interfaces.

Now what is it that I mean by “novel interfaces?” I don’t mean novel in the sense of entertaining, but rather novel in the sense of an interface that is unique, innovative, and takes advantage of the client platform.

Most user interface designs are static. They are taken from the

standpoint of the text publishing world – to have an interface that is nicely

laid out is a high compliment. This aesthetic, as mentioned above,

works well for web design, which is largely a collection of pages with a

certain static visual arrangement. (This site is no exception.) It is

what I will call a “paper aesthetic.”

But paper is not by any means a natural aesthetic. Whether you’re a

creationist or a scientist, humans weren’t built to stare at flat

sheets of paper (or, indeed, flat monitors) for days and days on end.

It just isn’t how we work. We were built to focus in on things that

need our attention, keeping aware of our peripheral surroundings for

changes, quietly noting subtle changes and becoming alerted

to objects moving quickly and visibly.

This concept of motion bringing attention has been used primarily in one place on the web: banner ads. Since in most cases they are the only thing moving on the page, our attention often drifts to them. Taking a look at news.com’s new user interface as applied to

particular pages, we see that they have restructured their site such that the only motion on the page is dead center and occupies a good percentage of screen real estate. It sits in the middle of the text, forcing your eyes to come to it as the text wraps around it. You can’t help but look at it: it’s their new series of Flash ads. Naturally, these ads pay much better since the user is forced to view them.

But what if we took some of these base concepts, these notions of

what attracts our attention, what distracts us, and what informs us,

and created a new style of user interface more closely adapted to an

ideal aesthetic – an interface that makes more sense for humans?

What would such an interface look like?

First off, in the real world, when we are working on something, we

bring it up close to us: it occupies the majority of our field of

vision. We get up and close to the screw needing tightening; we

push up close to the blueprint to consider each line. The watchmaker

does not engage his task from arm’s length. Consequently, a

good user interface will cause the primary activity to occupy the

majority of a user’s field of view and will clear away irrelevant

data to let us focus on the task at hand. While some applications

support a “full screen” mode that allows for focus on only the

task and nothing else (IE, MS Word, Excel, Adobe Acrobat, etc.),

the user experience should allow for every application to be a

central focus. Apple made a good stab at this by allowing a user

to hide all interface elements not pertaining to the currently running application, including the Finder. Most Windows users I know (including myself) have a very crowded desktop – while in one sense our own fault, it ends up leaving a permanent and distracting layer of cruft on top of which we must perform all our work. In addition, most menubars

and icon trays are always visible and can’t be set to auto-hide.

This makes it considerably more difficult for a user to just focus

on what they’ve got to do. So the user must be able to concentrate

on a single task without being disturbed.

But what does it mean to be disturbed? One example of bad behavior is Eudora’s default action when fetching

mail – if you’ve set it to go grab your email every five minutes

and put it in the background, when you do have mail, it whips you

out of whatever you’re in the middle of, takes you to Eudora, and

proudly displays a dialog box proclaiming that you have new mail.

This is obviously non-ideal behavior, but it’s also not a very

dramatic example simply because so many applications exhibit similar

characteristics. The classic thing that frustrates me is that if I’m

in the middle of wading through programs on the Start bar and some

program calls itself to attention, the computer drops the submenu

navigation (i.e., the Start bar menu goes away). The computer’s

designers are clearly making the statement “that which my program

has to say is more important than your navigation or input.” In

this modality, the computer can be considered a Ouija-based

entertainment center. You merely consent to the computer’s

understanding of how you should spend your time – it’s not designed

to take your interaction seriously.

A computer’s crisp imagery, jerking between static states, is poorly suited to human interaction – we’re used to a world of “soft changes” and

things that move slowly, with a more or less smooth derivative,

instead of dialog boxes that jump out of nowhere and exclaim sounds

at us. We should come up with better interaction mechanisms for

keeping a user up to date with the status of a system without

disturbing them from a task. That is to say, an interface should

be designed such that with a cursory look, the status of the

system can be determined and changes actively reported, but not in such a way as to distract. One way to do this would be through

color fades. For example, you could imagine a mail interface where

mail was sorted into various inboxes. Instead of displaying a count of unread messages next to each inbox, you could color each inbox according to the importance of its unread or unreplied messages, allowing for smooth color fades when new messages came in. It would be subtle enough not to

distract you from a task, but by glancing over at the colors, you

could tell what you probably should look at next.
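To make this concrete, here is a minimal sketch in TypeScript of how such a fade might work. The `Mailbox` shape, the urgency scale, and the particular color ramp are all assumptions of mine for illustration, not anything from a real mail client:

```typescript
// Hypothetical mailbox state: an urgency score and the color
// currently being shown for it.
interface Mailbox {
  name: string;
  unreadUrgency: number; // 0 = nothing pending, 1 = urgent unread mail
  displayed: { r: number; g: number; b: number };
}

// Map urgency onto a color ramp: calm gray when idle,
// shading toward red as unread mail grows more important.
function targetColor(urgency: number): { r: number; g: number; b: number } {
  const t = Math.min(Math.max(urgency, 0), 1);
  return {
    r: 200 + 55 * t,  // 200 -> 255
    g: 200 * (1 - t), // 200 -> 0
    b: 200 * (1 - t),
  };
}

// Called once per animation frame: move a fraction of the remaining
// distance toward the target color, so even a sudden change in urgency
// produces a gradual, peripheral-vision-friendly fade.
function stepFade(box: Mailbox, rate = 0.05): void {
  const goal = targetColor(box.unreadUrgency);
  box.displayed.r += (goal.r - box.displayed.r) * rate;
  box.displayed.g += (goal.g - box.displayed.g) * rate;
  box.displayed.b += (goal.b - box.displayed.b) * rate;
}
```

Because each frame covers only a fraction of the remaining distance, the color eases in smoothly rather than snapping – exactly the kind of “soft change” a glance can pick up without a dialog box demanding attention.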

A database-backed file and email system is also called for. Instead

of putting a file into a specified category (i.e., folder), files

simply exist and can be filed under multiple categories (a job lead

from your friend Kevin about a Linux company could simultaneously

be filed under “job leads”, “Kevin”, and “Linux” without having three

copies). Data could also be retrieved via a variety of mechanisms

(looking for how recently it was composed, key words in its title,

its size, its type, etc.). As an added bonus, if implemented as a

“journalling” filesystem and coupled with a bit of data synchronization

software, you could guarantee against losing documents and even

automatically have an infinite-level “undo” mechanism built in at the

filesystem layer, allowing you to version any files on your hard drive.

Modern hard drives are sufficiently large (80GB hard drives now cost $300) that there is no longer any compelling reason why textual, pictorial, and

even sound data should ever be deleted. Old copies should be automatically

kept around, but quietly so (e.g., they wouldn’t turn up in a general search or

clutter your view of which files exist, but you could still bring them

up explicitly).
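As a sketch of what this might look like, here is a toy tag-indexed, versioned document store in TypeScript. All of the names and shapes here are hypothetical; a real implementation would live at the filesystem layer rather than in application code:

```typescript
// One saved revision of a document.
interface Version {
  savedAt: Date;
  content: string;
}

// A document exists once but can carry any number of category tags.
interface Doc {
  id: number;
  title: string;
  tags: Set<string>;   // "job leads", "Kevin", and "Linux" all at once
  versions: Version[]; // append-only history: old copies are kept, quietly
}

class DocStore {
  private docs = new Map<number, Doc>();
  private nextId = 1;

  create(title: string, content: string, tags: string[]): Doc {
    const doc: Doc = {
      id: this.nextId++,
      title,
      tags: new Set(tags),
      versions: [{ savedAt: new Date(), content }],
    };
    this.docs.set(doc.id, doc);
    return doc;
  }

  // Saving never overwrites: it appends a new version, which is what
  // gives you an infinite-level "undo" at the storage layer.
  save(id: number, content: string): void {
    this.docs.get(id)?.versions.push({ savedAt: new Date(), content });
  }

  // Retrieval by category rather than by location in a folder tree.
  findByTag(tag: string): Doc[] {
    return [...this.docs.values()].filter((d) => d.tags.has(tag));
  }

  // Old versions stay out of normal views but remain explicitly reachable.
  history(id: number): Version[] {
    return this.docs.get(id)?.versions ?? [];
  }
}
```

The job lead from the example above would then be a single record – `store.create("Linux job lead", messageText, ["job leads", "Kevin", "Linux"])` – visible under all three categories without any copies being made.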

The skills of television producers in drawing people’s attention to

information, actors, and scenes are particularly ingenious; we programmers should learn from them and exploit the tricks they

have discovered. In the same way that many computer interfaces accidentally draw attention to the wrong things in the wrong ways (as mentioned above), we should

consider how we could purposefully bring the user’s attention to

certain things to guide and assist their focus. Apple had this in

mind when designing how alerts work in OS X – they pull, transparently,

out of the top of a window. The motion lets the user know to look

there without surprising them, and the transparent but attached nature

of the alert lets the user know that the alert is tied to that particular

application.

The main reason why these interface elements have not previously been

incorporated into common programs is inertia: with the current set of APIs, it’s much easier to piece together a program with the usual menus, square windows, and a generally consistent feel relative to other applications (this is a good thing) than something really

groundbreaking. (Sonique, with its next-gen UI, has taken years to cobble together.)

Second, it’s only fairly recently that one could assume that an end user’s computer would be fast enough to handle smooth transforms, 3D effects, and fades in a complex environment. But with even the cheapest of computers shipping with high-speed 3D accelerators these days, that barrier has fallen. Still, we haven’t really taken the time to question the basic parameters by which we decided upon the current mode of interfacing with a computer – a design that originated in the late 70’s and early 80’s at Xerox PARC, and that in turn drew on work done at the Stanford Research Institute in the 1960s! Clearly (hopefully?), there is a larger set of possibilities for human-computer interaction enabled by faster and more advanced technologies. Now we need only harness them and conceive of the next wave of interfaces, which will in turn be swept away in another 10-20 years when hardware permits another rethinking of interaction.

