Hi,
This is another "thought experiment" type activity (you, of course, are free to implement any of these ideas in real hardware and software to test your opinions -- but, I suspect you will find it easier to "do the math" in your head, instead).
I've been researching alternative user interface (UI) technologies and approaches. A key issue that doesn't seem to have been addressed is modeling how much information a "typical user" (let's leave that undefined for the moment -- its definition significantly affects the conclusions, IMO) can manage *without* the assistance of the UI.
E.g., in a windowed desktop, it's not uncommon to have a dozen "windows" open concurrently. And, frequently, the user can actively be dividing his/her time between two or three applications/tasks "concurrently" (by this I mean two or more distinct applications that the user is *treating* as one "activity", despite their disparate requirements/goals).
But, the mere presence of these "other" windows (applications) on the screen acts as a memory enhancer. I.e., the user can forget about them while engaged in his/her "foreground" activity (even if that activity requires the coordination of activities between several "applications") because he/she
*knows* "where" they are being remembered (on his behalf).For example, if your "windows" session crashes, most folks have a hard time recalling which applications (windows) were open at the time of the crash. They can remember the (one) activity that they were engaged in AT THE TIME but probably can't recall the other things they were doing *alongside* this primary activity.
Similarly, when I am using one of my handhelds (i.e., the entire screen is occupied by *an* application), it is hard to *guess* what application lies immediately "behind" that screen if the current application has engaged my attention more than superficially. I rely on mechanisms that "remind" me of that "pending" application (activity/task) after I have completed work on the current "task".
However, the current task may have been a "minor distraction". E.g., noticing that the date is set incorrectly and having to switch to the "set date" application while engaged in the *original* application. I contend that those "distractions", if not trivial to manage (cognitively), can seriously corrupt your interaction with such "limited context" UIs (i.e., cases where you can't easily be reminded of all the "other things" you were engaged with at the time you were "distracted").
I recall chuckling at the idea of putting a "depth" on short-term memory (IIRC, Winston claimed something like 5 - 7 items :> ). But, over the years, that model seems to keep getting more appropriate each time I revisit it! (though the 5 and the 7 seem to shrink with age).
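(To make that notion concrete, here is a minimal sketch, in Python, that treats the user's "pending task" memory as a bounded stack. The capacity of 5, the task names, and the drop-the-oldest policy are all assumptions chosen purely for illustration -- not claims about how memory actually works. The point is just that anything pushed off the end is "forgotten" unless the UI keeps it visible on the user's behalf.)

from collections import deque

# Hypothetical model of a user's short-term "task memory": a stack that
# can only hold a handful of pending items.  CAPACITY and the task names
# are illustrative assumptions, not measured values.
CAPACITY = 5

class TaskMemory:
    def __init__(self, capacity=CAPACITY):
        # deque(maxlen=...) silently discards the *oldest* entry when a
        # new one is appended past capacity -- i.e., it gets "forgotten".
        self.pending = deque(maxlen=capacity)

    def interrupt(self, deferred_task):
        # A distraction arrives; the task being set aside becomes "pending".
        self.pending.append(deferred_task)

    def resume(self):
        # The distraction is over; recall the most recently deferred task.
        return self.pending.pop() if self.pending else None

# Seven interruptions in a row: only the last five survive.  A windowed
# desktop keeps *all* of them visible (external memory); a full-screen
# handheld shows none of them, so the first two are simply lost.
memory = TaskMemory()
for task in ["write report", "reply to mail", "set the date", "cook dinner",
             "take out trash", "answer phone", "sign for package"]:
    memory.interrupt(task)

print(list(memory.pending))
# ['set the date', 'cook dinner', 'take out trash', 'answer phone', 'sign for package']

(The interesting design question, then, is which of the "forgotten" items the interface should take responsibility for remembering on the user's behalf, and how it reminds him/her of them once the foreground activity completes.)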
So, the question I pose is: given that we increasingly use multipurpose devices in our lives, and that we want to *deliberately* reduce the complexity of the UIs on those devices (either because we don't want to overload the user -- imagine having a windowed interface on your microwave oven -- or because we simply can't *afford* a rich interface, perhaps owing to space/cost constraints), what sorts of reasonable criteria should govern how an interface manages this information while taking the user's limitations into account?

As an example, imagine "doing something" that is not "display oriented" (it is far too easy to think of a windowed UI when visualizing a displayed interface) and consider how you manage your "task queue" in real time. E.g., getting distracted while cooking dinner and forgetting to take out the trash
[sorry, I was looking for an intentionally "different" set of tasks to avoid suggesting any particular type of "device" and/or "device interface"]

Then, think of how age, gender, infirmity, etc. impact those techniques.
From there, map them onto a UI technology that seems most appropriate for the conclusions you've reached (?).
(Boy, I'd be a ball-buster of a Professor! But, I *do* come up with some clever designs by thinking of these issues :> )