User interface cognitive loading

Hi,

This is another "thought experiment" type activity (you, of course, are free to implement any of these ideas in real hardware and software to test your opinions -- but, I suspect you will find it easier to "do the math" in your head, instead).

I've been researching alternative user interface (UI) technologies and approaches. A key issue which doesn't seem to have been addressed is modeling how much information a "typical user" (let's leave that undefined for the moment -- its definition significantly affects the conclusions, IMO) can manage *without* the assistance of the UI.

E.g., in a windowed desktop, it's not uncommon to have a dozen "windows" open concurrently. And, frequently, the user can actively be dividing his/her time between two or three applications/tasks "concurrently" (by this, I mean, two or more distinct applications which the user is *treating* as one "activity" -- despite their disparate requirements/goals).

But, the mere presence of these "other" windows (applications) on the screen acts as a memory enhancer. I.e., the user can forget about them while engaged in his/her "foreground" activity (even if that activity requires the coordination of activities between several "applications") because he/she *knows* "where" they are being remembered (on his behalf).

For example, if your "windows" session crashes, most folks have a hard time recalling which applications (windows) were open at the time of the crash. They can remember the (one) activity that they were engaged in AT THE TIME but probably can't recall the other things they were doing *alongside* this primary activity.

Similarly, when I am using one of my handhelds (i.e., the entire screen is occupied by *an* application), it is hard to *guess* what application lies immediately "behind" that screen if the current application has engaged my attention more than superficially. I rely on mechanisms that "remind" me of that "pending" application (activity/task) after I have completed work on the current "task".

However, the current task may have been a "minor distraction". E.g., noticing that the date is set incorrectly and having to switch to the "set date" application while engaged in the *original* application. I contend that those "distractions", if not trivial to manage (cognitively), can seriously corrupt your interaction with such "limited context" UI's (i.e., cases where you can't be easily reminded of all the "other things" you were engaged with at the time you were "distracted").

I recall chuckling at the concept of putting a "depth" on the concept of "short term memory" (IIRC, Winston claimed something like 5 - 7 items :> ). But, over the years, that model seems to just keep getting more appropriate each time I revisit it! (though the 5 and 7 seem to shrink with age).

So, the question I pose is: given that we increasingly use multipurpose devices in our lives and that one wants to *deliberately* reduce the complexity of the UI's on those devices (either because we don't want to overload the user -- imagine having a windowed interface on your microwave oven -- or because we simply can't *afford* a rich interface -- perhaps owing to space/cost constraints), what sorts of reasonable criteria would govern how an interface can successfully manage this information while taking into account the users' limitations?

As an example, imagine "doing something" that is not "display oriented" (as it is far too easy to think of a windowed UI when visualizing a displayed interface) and consider how you manage your "task queue" in real time. E.g., getting distracted cooking dinner and forgetting to take out the trash.

[sorry, I was looking for an intentionally "different" set of tasks to avoid suggesting any particular type of "device" and/or "device interface"]

Then, think of how age, gender, infirmity, etc. impact those techniques.

From there, map them onto a UI technology that seems most appropriate for the conclusions you've reached (?).

(Boy, I'd be a ball-buster of a Professor! But, I *do* come up with some clever designs by thinking of these issues :> )

Reply to
D Yuniskis

If (and it's a big if) I understand where your interest lies, it is less in 'information overload' (I think the military has done a huge amount of research in this area for fighter pilots/'battlefield' conditions) and more in 'detection' of such overload/fatigue. If so, I expect a system to monitor 'key strokes' (mouse moves w'ever - user input) and their frequency/uniqueness rates. Possibly some type of eye tracking could be helpful?
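
A minimal sketch of that kind of monitor, in C, might track inter-keystroke intervals in a sliding window and flag when input becomes both slow and erratic. The window size and thresholds below are made up for illustration, not drawn from any study.

/* Toy sketch of an input-rate monitor: keep recent inter-keystroke
 * intervals in a sliding window and flag a possible overload/fatigue
 * condition when typing becomes slow *and* erratic.  Window size and
 * thresholds are purely illustrative. */
#include <stdio.h>

#define WINDOW 16                 /* number of recent intervals kept */

static double intervals[WINDOW];  /* inter-keystroke gaps, seconds   */
static int    count = 0;          /* samples seen so far             */

/* record one inter-keystroke interval */
static void log_interval(double dt)
{
    intervals[count % WINDOW] = dt;
    count++;
}

/* crude "is the user flagging?" test: mean gap and jitter over window */
static int possible_fatigue(void)
{
    int n = count < WINDOW ? count : WINDOW;
    double mean = 0.0, var = 0.0;
    int i;

    if (n < 4)
        return 0;                 /* not enough data to judge */

    for (i = 0; i < n; i++)
        mean += intervals[i];
    mean /= n;

    for (i = 0; i < n; i++) {
        double d = intervals[i] - mean;
        var += d * d;
    }
    var /= n;

    /* illustrative thresholds only: slow *and* erratic input */
    return mean > 1.0 && var > 0.5;
}

int main(void)
{
    /* simulated gaps: steady typing, then long, uneven pauses */
    double sample[] = { 0.2, 0.3, 0.25, 0.2, 1.8, 2.5, 0.4, 3.0, 2.2, 1.9 };
    int i;

    for (i = 0; i < (int)(sizeof sample / sizeof sample[0]); i++) {
        log_interval(sample[i]);
        if (possible_fatigue())
            printf("sample %d: possible overload/fatigue\n", i);
    }
    return 0;
}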

I dunno

Carl

Reply to
1 Lucky Texan

Hi Carl,

1 Lucky Texan wrote:
[snip]

Yes. Though think of it as *prediction* instead of detection. I.e., what to *avoid* in designing a UI so that the user *won't* be overloaded/fatigued/etc.

An example that came up in another conversation (off list):

You're writing a letter to someone. At some point in the composition, you notice that the date that has been filled in (automatically) is incorrect.

You could wait until you are done writing the letter to correct this. But, then you have to *hope* you REMEMBER to do it! :>

Or, you can do it now while it is still fresh in your mind. And return to the rest of your letter thereafter.

In, for example, Windows, you could Start | Settings | Control Panel | Date/Time and make the changes there. Then, close that dialog, close Control Panel and finally return to your text editing. Or, you could double click on the time display in the system tray and directly access the Date/Time Properties panel.

The former requires a greater degree of focus for the user. There is more navigation involved. As such, it is a greater distraction and, thus, more likely to cause the user to lose his train of thought -- which translates to an inefficiency of the interface.

The latter requires less involvement of the user (assuming knowledge of this "shortcut" is intuitive enough) and is therefore less of a distraction.

Of course, this (Windows) example is flawed in that the user can still *see* what he was doing prior to invoking the "set date" command. Chances are, he can even *read* what he has written *while* simultaneously setting the date.

Contrast this with limited context interfaces in which the "previous activity" is completely obscured by the newer activity (e.g., a handheld device, aural interface, etc.).

So, my question tries to identify / qualify those types of issues that make UI's inefficient in these reduced context deployments.

Hmmm... that may have a corollary. I.e., if you assume keystrokes (mouse clicks, etc.) represent some basic measure of work or cognition, then the fewer of these, the less taxing the "distraction".
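
As a back-of-the-envelope illustration, the two "set date" paths from the Windows example could be compared simply by counting discrete user actions. The per-action costs in this sketch are placeholder values loosely in the spirit of keystroke-level models, not measurements.

/* Crude comparison of the two "set the date" paths, counting discrete
 * user actions as a proxy for distraction.  Per-action costs are
 * placeholders, not measured values. */
#include <stdio.h>

enum action { POINT, CLICK, DOUBLE_CLICK, KEY };

/* assumed cost in seconds per action -- illustrative only */
static const double cost[] = {
    [POINT]        = 1.1,
    [CLICK]        = 0.2,
    [DOUBLE_CLICK] = 0.4,
    [KEY]          = 0.3,
};

static double path_cost(const enum action *a, int n)
{
    double total = 0.0;
    int i;
    for (i = 0; i < n; i++)
        total += cost[a[i]];
    return total;
}

int main(void)
{
    /* Start | Settings | Control Panel | Date/Time, then close up */
    enum action menu_path[] = {
        POINT, CLICK,          /* Start               */
        POINT, CLICK,          /* Settings            */
        POINT, CLICK,          /* Control Panel       */
        POINT, DOUBLE_CLICK,   /* Date/Time icon      */
        POINT, CLICK,          /* fix date, OK        */
        POINT, CLICK,          /* close Control Panel */
    };

    /* double-click the clock in the system tray */
    enum action tray_path[] = {
        POINT, DOUBLE_CLICK,   /* tray clock          */
        POINT, CLICK,          /* fix date, OK        */
    };

    printf("menu path: %d actions, ~%.1f s\n",
           (int)(sizeof menu_path / sizeof menu_path[0]),
           path_cost(menu_path, (int)(sizeof menu_path / sizeof menu_path[0])));
    printf("tray path: %d actions, ~%.1f s\n",
           (int)(sizeof tray_path / sizeof tray_path[0]),
           path_cost(tray_path, (int)(sizeof tray_path / sizeof tray_path[0])));
    return 0;
}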

Reply to
D Yuniskis

Even reading rates could predict the onset of overload. Again, the Air Force has bumped into this issue. There is likely an entire branch of psychology dealing with these issues.

As for the mechanics in a system, some could perhaps be implemented with present or near-term technology. Certainly the military could justify eye-tracking, brainwave monitoring or other indicators. But reading rates, mouse click rates, typing speed, etc., might be doable now. I can also envision some add-on widgets that might allow for, say, a double right-click to create a 'finger string'. As in tying a string around your finger. A type of bookmark that would recall the precise conditions of the system (time, date, screen display, url, etc.) when the user detected something troubling. May not be as precise as 'the infilled date was wrong', but it may be enough of a clue that, when the user reviews the recalled screen later, it triggers a memory like "hmmm, what was here.... OH YEAH!, that date is wrong!".
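
A rough sketch of that 'finger string' idea: on some trigger (a double right-click, say), snapshot enough context that the user can reconstruct what caught his eye later. The structure fields, sizes and trigger here are all hypothetical.

/* 'Finger string' bookmark sketch: snapshot a timestamp, the foreground
 * application and a context string when the user ties a string, then
 * replay the list later.  Fields and sizes are hypothetical. */
#include <stdio.h>
#include <time.h>

#define MAX_STRINGS 8

struct finger_string {
    time_t when;              /* wall-clock time of the bookmark    */
    char   app[32];           /* foreground application at the time */
    char   context[128];      /* screen title, URL, document, etc.  */
};

static struct finger_string strings[MAX_STRINGS];
static int n_strings = 0;

static void tie_string(const char *app, const char *context)
{
    if (n_strings >= MAX_STRINGS)
        return;               /* a real system would evict the oldest */
    strings[n_strings].when = time(NULL);
    snprintf(strings[n_strings].app, sizeof strings[n_strings].app,
             "%s", app);
    snprintf(strings[n_strings].context, sizeof strings[n_strings].context,
             "%s", context);
    n_strings++;
}

static void review_strings(void)
{
    int i;
    for (i = 0; i < n_strings; i++)
        printf("[%ld] %s: %s\n", (long)strings[i].when,
               strings[i].app, strings[i].context);
}

int main(void)
{
    /* user notices something off while writing a letter */
    tie_string("editor", "letter draft -- header area looked wrong?");
    /* ... later, at a convenient break ... */
    review_strings();
    return 0;
}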

fun stuff to think about.

Carl

Reply to
1 Lucky Texan

Hi Carl,

1 Lucky Texan wrote:
[snip]

Yes, but keep in mind this is c.a.e and most of the "devices" we deal with aren't typical desktop applications. I.e., the user rarely has to "read much". Rather, he spends time looking for a "display" (item) and adjusting a "control" to effect some change.

Actually, this is worth pursuing. Though not just when "detected something troubling" but, also, to serve as a "remember what I was doing *now*".

I suspect a lot can be done with creating unique "screens" in visual interfaces -- so the user recognizes what is happening *on* that screen simply by its overall appearance (layout, etc.). Though this requires a conscious effort throughout the entire system design to ensure this uniqueness is preserved. I suspect, too often, we strive for similarity in "screens" instead of deliberate dis-similarity.

*Taxing* stuff to think about! :> So much easier to just look at a bunch of interfaces and say what's *wrong* with them! Yet, to do so in a way that allows "what's right" to be extracted is challenging.
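
On the "unique screens" point, one could even imagine a design-time check that rejects two screens whose descriptors are too alike. The descriptor fields and the notion of "too alike" below are invented purely for illustration.

/* Design-time uniqueness check: give every screen a small descriptor
 * and refuse to accept two screens that would look the same to the
 * user.  Fields and the "too alike" rule are invented for this sketch. */
#include <stdio.h>
#include <string.h>

struct screen {
    const char *name;     /* internal identifier             */
    const char *banner;   /* text shown across the top       */
    int         accent;   /* color index for the border      */
    int         layout;   /* which of a few layout templates */
};

static const struct screen screens[] = {
    { "main",     "RUN",      1, 0 },
    { "set_date", "SET DATE", 3, 1 },
    { "alarms",   "ALARMS",   2, 2 },
    { "config",   "SET DATE", 3, 1 },   /* oops: clone of set_date */
};

/* two screens are "too alike" if banner, accent and layout all match */
static int too_alike(const struct screen *a, const struct screen *b)
{
    return strcmp(a->banner, b->banner) == 0 &&
           a->accent == b->accent &&
           a->layout == b->layout;
}

int main(void)
{
    int n = (int)(sizeof screens / sizeof screens[0]);
    int i, j, ok = 1;

    for (i = 0; i < n; i++)
        for (j = i + 1; j < n; j++)
            if (too_alike(&screens[i], &screens[j])) {
                printf("screens '%s' and '%s' look the same\n",
                       screens[i].name, screens[j].name);
                ok = 0;
            }

    return ok ? 0 : 1;
}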
Reply to
D Yuniskis


One other quick thought: in some 'dedicated' systems, it can be very important to make any deviation from the operator's 'expectation' GREATLY noticeable. I've seen some poor early software in semi-automated test stations, where some small line of text changes from 'pass' to 'fail'. That's all. Well, the expectation could be something like 97% good boards. So, as an operator, can you be relied on to notice that text change when you have just tested 100-200 boards before a bad one comes along? I told the programmer I wanted the screen to change color, the font size to increase and, if available, a beeper to sound! That is somewhat the opposite of information overload, perhaps we'd call it 'tedium' w'ever. But, as you say, these things are important. Things like presets, 'check-off' lists, and systems that do not 'assume' an operator is paying attention and require 'distinct' inputs to keep them aware - I guess that all falls near this issue, huh?
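
A minimal sketch of that escalation idea, where a routine result stays quiet and a rare failure shouts on every channel available. The output actions below are stand-ins for whatever a real station offers (color change, big font, beeper).

/* Make the rare, unexpected outcome hard to miss: a quiet one-line
 * "pass" versus a loud, multi-channel "FAIL".  The escalation actions
 * here are stand-ins for a real station's options. */
#include <stdio.h>

static void report_result(int board_no, int passed)
{
    if (passed) {
        /* routine outcome: keep it quiet */
        printf("board %d: pass\n", board_no);
        return;
    }

    /* rare outcome: escalate on every channel available */
    printf("\a");                                   /* beeper, if any */
    printf("********************************\n");
    printf("***  BOARD %4d:  F A I L    ***\n", board_no);
    printf("********************************\n");
    /* a real station might also invert screen colors, enlarge the
     * font, or halt and require a distinct acknowledgement here */
}

int main(void)
{
    int i;
    for (i = 1; i <= 5; i++)
        report_result(i, i != 4);   /* simulate one failure in five */
    return 0;
}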

Reply to
1 Lucky Texan

Hi Carl,

1 Lucky Texan wrote:
[snip]

Yes.

With non-graphic interfaces, just changing the formatting of the text (even if this is a side-effect of the *amount* of text being displayed) can be enough of a visual cue. E.g.:

"OK"

vs.

"There is something that has gone unexpectedly wrong with whatever you happened to be doing right now. So, you might want to think twice before you buy any potato chips at the market this weekend"

Yes. When responsible for debugging a large piece of ATE for a client, I got bored with the tedium of the many hours the tester would spend testing the UUT (the ATE device was actually the UUT in this case :> -- tested by yet another bit of kit). So, I would hack the test script (a proprietary format that was easy to decode) to skip some of the longer tests (that I knew already passed).

I was careful to patch everything back before final sell-off.

Almost.

When the memory test came along (keep in mind, the device being tested is an ATE device -- 600 pin tester -- so the "pattern memory" was *big*), instead of the familiar:

Test 1070 - Pattern Memory

prompt, the screen showed:

Go for coffee

This was different enough to be noticeable -- even in the tedium of those hours of noninteractive tests. Customer was not pleased. Boss was not pleased. *I* was not pleased (as I now had to sit through the entire sell-off procedure a second time with a "virgin" test disk)

I think -- to minimize the "distraction" -- you want to make these distractions *really* "no-brainers". The kinds of things that can be done in your sleep. I.e., the opposite of deliberately making them require lots of your attention (to "get them right" as in your example).

Note how many interactive desktop applications/web sites deliberately change their interfaces to force you to read what they are saying. Sort of like stores rearranging their product offerings to force you to "go looking" for what you want (instead of mindlessly -- inattentively -- proceeding directly *to* the items you seek).

Reply to
D Yuniskis

Interesting response, it seems we've had some similar experiences - though I'm from the tech end of things. I once had to do extensive environmental chamber testing and, after a few iterations, the program finally got to a point where there was only one section (a watchdog timer test IIRC) that required operator observation, then followed by some more time, until some cabling needed to be switched over to a different unit. I bought a $7 electronic kitchen timer to clip to my shirt so I could be alerted to when I needed to be back at the chamber for either observation or cable-change. It made me more productive than just sitting there for 7-8 minutes twiddling my thumbs. I suppose nowadays, something could be done with BlueTooth/Zigbee w'ever to 'call' someone to attention. ATE stuff can be odd. Like confirming the correct COLOR LED was soldered in the right location, or the audio circuitry is functioning correctly, etc.

I have also used a barcode system to 'marry' serial numbers together in inventory as a system (like daughter boards to a MB, or simm mem to a CPU board) in which the software was a little cumbersome, requiring putting down the unit or the scan gun to make a simple keyboard entry (spacebar or enter). I REALLY wished at that time I'd had a footswitch hooked thru a wedge or something to make that linefeed. I suppose in today's systems USB might be a good way to implement that. (Software that was inconsistent 'block to block' about data entry is another pet peeve. Why is THIS screen 'anykey', the last screen was 'spacebar', the next one needs 'enter' - GRRRRR! lol!)

Reply to
1 Lucky Texan

It was George Miller who discussed the seven or so items that can be held in immediate memory, in his famous paper "The Magical Number Seven, Plus or Minus Two."

Leon

Reply to
Leon

Hi Carl,

1 Lucky Texan wrote:

This device (plus *its* tester) didn't need anyone to babysit it. But, it was a "one-of-a-kind" system (we later built a "spare") so you tended not to take much for granted. Plus, the amount of *power* available within the racks made it dangerous to leave unattended (DC power distribution was via 1" dia exposed copper bars -- "remove all jewelry, belts, eyeglasses, etc. while servicing")

In general, you wanted to be with the device because any faults that turned up could usually be fixed quickly and the test restarted. OTOH, if you wandered away and came back an hour later, the UUT plus tester could have been sitting there "idle" for the past 59 minutes...

Ha!

Our device wasn't being "built" so much as being "debugged". So, you didn't *expect* it to pass all of the tests. But, you didn't know *when* it would uncover a problem that needed to be diagnosed/repaired.

Keyboard testers. :>

This is just the same ol', same ol' issue... folks writing software that they never *use* (and, often, don't fully understand).

E.g., the subject of my post: designing the *entire* user interface while keeping in mind how "distractions" will affect the user's efficiency and proficiency with the device.

It's hard to get *everything* right when dealing with a user (esp. as users have different tastes/preferences). *But*, failing to even *consider* the device from the user's perspective is just irresponsible (sinful? negligent?)

Reply to
D Yuniskis
