Pointing devices et ilk

Hi,

I've been systematically exploring (i.e., using in real life, not just an academic exercise) different UI technologies and devices. Pointing devices, motion controllers, 3D "mice", etc. Most recently a digitizing pen (very disappointing).

I sort of expected not to find a "clear winner" (regardless of cost). But, have been surprised at just how *bad* the options actually are!

["winner" not in the sense of "for use on my workstation" but, rather, winner in terms of effectiveness in an unspecified UI for an embedded device -- i.e., build your device's interface with this in mind]

Each device/technology has some "best" means of application (e.g., the pen was great at "drawing and erasing") and usually a whole host of other "applications" where it was poor or even abysmal (e.g., the pen sucks for textual entry and anything other than "left click" usage).

So, choice of UI technology should be tightly integrated with the design of the UI. (and/or support multiple interface modalities)

E.g., touchpads work for "buttons", not "data entry". Keyboards work for data entry, not freehand drawing. etc.

Any *specific* advice or pointers to research to help with this sort of assessment? Or, personal opinions of actual devices that excel in particular applications. E.g., I would regard the pen *highly* in those drawing applications -- even precise, CAD-ish drawing! (though I'd also put a regular digitizing tablet in that category as well)

--don

Reply to
Don Y

A few years ago we were developing a USB pointing device reference design and had quite a collection of mice and trackballs around. One of them was a trackball for an arcade machine. Someone in my office connected the hand-wired prototype board for this project to it and added some push-button switches for mouse buttons.

He used this for several years. It was rather nice to use because the ball was a pool ball: heavy, with lots of inertia, good for small movements, and it had a very nice feel to it.

w..

Reply to
Walter Banks

Understood. The trackball in (e.g.,) Atari's _Football_ had an excellent feel (heft) to it. But, it was considerably larger... more like a Bocce ball (~4" dia).

I had used a "standard" bowling ball to make a "foot mouse" many years ago for similar reasons -- really fine position control (such that you could trust your *foot* to do it -- leaving your hands available for other uses).

But, it sucked when you had to make *big* motions -- you couldn't afford to get the ball spinning fast because you would lose control over it in short order! (besides, it was difficult to spin a ball with one foot with any speed!)

My solution (at the time -- remember, everything had to be external to the "PC" to avoid dealing with PC device drivers) was a digital "transmission", of sorts: adjusting the dividers on the shaft-encoder outputs to get more or less "reduction".

This was insanely ineffective! You had to consciously *think* about the magnitude of your moves in order to select the right "gear" beforehand. :<
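For illustration, the "transmission" amounted to something like this (the divider values and names here are invented, not what the actual hardware used):

    /* Hypothetical "digital transmission" for a shaft-encoder pointing
     * device: raw quadrature counts are divided by a selectable factor
     * before being reported as cursor motion.  Divider values are
     * illustrative only -- the point is that the "gear" must be chosen
     * ahead of the move. */
    #include <stdio.h>

    static const int gear_div[] = { 1, 4, 16 };    /* fine .. coarse */

    struct axis {
        int gear;       /* index into gear_div[]       */
        int residue;    /* raw counts not yet reported */
    };

    /* Fold new raw encoder counts in; return cursor counts to emit. */
    static int transmission(struct axis *a, int raw_counts)
    {
        a->residue += raw_counts;
        int out = a->residue / gear_div[a->gear];
        a->residue -= out * gear_div[a->gear];     /* keep the remainder */
        return out;
    }

    int main(void)
    {
        struct axis x = { .gear = 1, .residue = 0 };   /* middle "gear" */
        printf("%d\n", transmission(&x, 10));   /* 10/4 -> 2, residue 2 */
        printf("%d\n", transmission(&x, 10));   /* 12/4 -> 3, residue 0 */
        return 0;
    }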

It just seems that the nature of "pointing" is too intimately tied to WHAT you are trying to point *at*! A "button" (icon)? The position between two typed characters? A spot on a print page? A location in a 3D model? etc.

And, it seems interface designers *tend* to want to pick *one* input device and hope that it can do it all...

E.g., the digitizer pen ON SCREEN implicitly requires a 1:1 scale. The "cursor" wants to track the tip of the pen/stylus. OTOH, a tablet digitizer breaks that connection -- the pen can behave as a mouse, of sorts, expressing RELATIVE motion (at any arbitrary scale).
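The difference is easy to see in code. A rough sketch (the coordinate ranges and scale factor are arbitrary assumptions, not any particular device's numbers):

    /* Two ways to interpret the same digitizer sample.  Ranges and
     * scale are invented; real devices report whatever their HID
     * descriptor was built with. */
    #include <stdio.h>

    #define TABLET_MAX  10000   /* assumed tablet coordinate range */
    #define SCREEN_MAX   1080   /* assumed screen height, pixels   */

    /* Absolute (on-screen pen): cursor is pinned 1:1 to the tip. */
    static int absolute_map(int tablet_y)
    {
        return tablet_y * SCREEN_MAX / TABLET_MAX;
    }

    /* Relative (tablet as "mouse"): only the delta matters, and it can
     * be scaled arbitrarily. */
    static int relative_map(int tablet_y, int *prev_y, double scale)
    {
        int delta = tablet_y - *prev_y;
        *prev_y = tablet_y;
        return (int)(delta * scale);
    }

    int main(void)
    {
        int prev = 5000;
        printf("absolute: y=%d\n", absolute_map(5200));               /* 561 */
        printf("relative: dy=%d\n", relative_map(5200, &prev, 0.25)); /* 50  */
        return 0;
    }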

What did you end up choosing for *your* "reference design"? Or, was it predefined (by a client)?

Reply to
Don Y

Humans have widely varying degrees of adaptability. Interfacing with one method...any method...is likely to be more easily accepted than multiple interfaces.

I started with a mechanical mouse. Worked great until I ran out of desk space. I tried a trackball. Got reasonably skilled, but I had multiple computers and one trackball. Switching back and forth confused my brain. I switched to a keyboard with integrated touch pad to the right of the qwerty section. I had much difficulty adapting to it. But I had little trouble with a separate touch pad directly beneath the qwerty section so I could get both hands on it for drag/drop/point/click actions.

Freehand drawing is still an issue. With a program that constrains motion, like draw straight line/box/circle/horizontal/vertical, it works fine. Actions like drawing a line are easier because you can see the line as you move the endpoint. For actions that expect you to determine the proper point BEFORE you see the result, it's less intuitive.

I got a gyration mouse that you just wave around like a game controller. Great for pointing at the button that changes the TV channel, but not much good for drawing a schematic. But people seem to get along just fine with them playing tennis or bowling on the game box. Accelerometers measure dynamics. Great for apps that rely on dynamics.

I had no trouble adapting to drawing on the touch-screen of a PDA or smart phone. But when I tried a separate Wacom tablet, things went to hell. The disconnection between the hand and eye made it unusable. I'm sure I could train myself to use it...people do it all the time. The less you use it, the more likely you'll get confused by a different interface. For occasional use, I can draw better with my touch pad mouse than the Wacom.

While you should certainly consider the optimal interface for your task, it's also wise to consider the history/training/usage patterns/adaptability of your user. If their first try confuses them, they may not buy. If they don't buy, it don't matter how good your interface is.

If you need an example, look at the linux operating system on the desktop. It's arguably the best desktop OS on the planet, but if people can't get past their initial experience, they won't adopt it. And it doesn't take much to put them off.

Reply to
mike

Of course! Along with limitations/inabilities, etc.

Yet most "systems" stick to just one!

I actually found the *one-handed* trackball was a liability when I had to fine-position and then *click*. Activating the muscles to do the "click" would inevitably cause the ball to roll a bit.

The same sort of problem plagued the pen interface I've been using. Pressing the "right mouse" button on the pen's barrel causes the tip of the pen to move (you have to *squeeze* the pen, sort of, to actuate the button). The alternative is to hold the pen in place and wait for a visual indication to tell you that it is now in "right click mode" (so, lifting the pen causes a right click instead of a left click). But, if the pen tip moves, the item that was under the pen tip ends up being "dragged" (not what you intended).

I also found it tended to leave me with carpal tunnel-like pain (though that may have been a consequence of the particular trackball, in that larger ones tend to force your wrist "out of line").

[If you want to really confuse your brain, try typing on two keyboards sharing a single display! Never know which keyboard is "connected" to the "image" you are currently viewing -- until you've typed a few characters and don't see them appear! Then wonder what you just *did* in the display that you AREN'T seeing!]

Because you had to move your hands off the keyboard to "find" the touchpad?

Even when that requires some other action (like holding SHIFT depressed)? This was the advantage I had with the "foot mouse" described elsewhere.

[I am now preparing to try the opposite approach: using hands for positioning and *feet* for the "clicks" (two pedals)]

Meaning using the touchpad in "absolute" mode?

My gyromice could be operated as optical mice *or* "free space" (hand waving) mice. The latter only seemed to make sense when used with a projector (think: presentations)

I thought the gyromouse actually had a gyro inside? (in fact, I think I gutted an old one just to check).

So, 1:1 where the device supplies the "ink"?

I'm not sure I understand why? Were your eyes on the *tablet* or the screen while drawing? E.g., I love a tablet for CAD work. It just feels better than a mouse, etc.

Agreed. And, you also have to consider the environment in which the interface is (will be) used. "While driving" is different from seated at a desktop, etc. Consider how much of a cognitive load it presents to the user when the user may be engaged in other, "more important" activities.

[This is my pet peeve with modern cell phones/tablets -- you need to commit both eyes and at least one hand and a good bit of your attention in order to do anything with them!]

E.g., I strongly dislike the IBM "nib" as it always seems like a chore to use -- right in the middle of the keyboard, no less. Similarly (same sort of technology), the SpaceBall (motion controller) also feels like a "forced" interface. It has to offer lots of value to overcome its usability issues.

Yup. When charged with buying a microwave oven (decades ago) for M-in-Law, wife suggested the "big knob" interface. It seemed klunky to me (and, something that would BREAK from use!). But, it was the perfect interface for my MinL. Very easy for her to relate to -- even though it was a nonlinear control (i.e., easy to pick between lots of small time values but harder to pick *between* larger ones -- that turkey is going to defrost for 15 minutes... not 15:07!)

Reply to
Don Y

The choice is getting wider too. There are now Temporal Interfaces (Brain Wave Readers) and Eye Motion Detectors to contend with too. What the user wishes to select to use will depend on the sort of task that they need to accomplish and the constraints on the movements they will be able to make. So, yes, UI design should always be an important part of the task analysis you do in developing your applications.

--
******************************************************************** 
Paul E. Bennett IEng MIET..... 
Reply to
Paul E Bennett

. . .

The reference design was a generic USB mouse controller. The code dealt with the quadrature output from most mice / pointing devices of the time.
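For reference, the core of handling such a quadrature stream is the usual textbook lookup -- a generic sketch, not the reference design's actual code:

    /* Generic 2-bit quadrature decoder: previous state (A,B) and new
     * state index a lookup of -1, 0, or +1 counts. */
    #include <stdio.h>

    static int quad_step(int prev, int curr)
    {
        /* [prev][curr], states encoded as (A<<1)|B */
        static const int table[4][4] = {
            {  0, +1, -1,  0 },
            { -1,  0,  0, +1 },
            { +1,  0,  0, -1 },
            {  0, -1, +1,  0 },
        };
        return table[prev & 3][curr & 3];
    }

    int main(void)
    {
        /* Gray-code sequence 00 -> 01 -> 11 -> 10 gives three +1 steps */
        int seq[] = { 0, 1, 3, 2 };
        int pos = 0;
        for (int i = 1; i < 4; i++)
            pos += quad_step(seq[i - 1], seq[i]);
        printf("position = %d\n", pos);   /* 3 */
        return 0;
    }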

w..

Reply to
Walter Banks

Exactly. But, there doesn't seem to be any "underlying science" that you can consult to sort this out ahead of time. I.e., it's as if you have to see/experience an *actual* interface and evaluate its particular capabilities/limitations *before* you can decide if it is appropriate to your "task"/application.

THE FLIP SIDE OF THIS is that an interface, once chosen, LIMITS the future applications that you can develop/deploy on that platform!

For example, I designed a "gesture" interface initially inspired by the likes of "Graffiti" -- a crude character recognizer. I extended this concept to incorporate "glyphs" (for want of a better term that would be consistent with the Graffiti notion) that weren't real "characters" -- yet were easy to "issue" (i.e., "draw") and resolve/recognize.

Reconsidering this, instead, as a gestural interface (and not a "character recognizer") was a quantum shift in how I looked at the functionality the interface provided.

I.e., it is hard to differentiate an 'O' from a '0' -- even with the benefit of context! An 'I' from a '1' from an 'l', etc. So, if the goal is to recognize *characters*, the problem is much more complex and *robust* solutions are impractical (or expensive).

E.g., the character recognizer on this pen digitizer interface probably gets one in ten characters wrong -- apparently totally ignoring context! (why would I ever write "w0u1d"? -- note there are *two* digits in that "misrecognition"!) And, that's with a "real PC" to implement the recognition algorithm! Imagine how it would fare when strapped for resources!

If, instead, I treat this as a gesture recognizer and define gestures to be 2D directed, connected "paths", then I am freed of the constraint of having to come up with ways to "issue" each "glyph" in the user's potential "alphabet/symbol set".

Instead, I can pick "gestures" that are more "orthogonal" and, as a result, achieve higher recognition rates with lower resource utilization. E.g., "square", "triangle", "star", "circle", "horiz line", "vert line", "slash", "backslash", 'L', "mirrored L", "flipped L", "flipped mirrored L", "squiggle", "hourglass", etc.

Then, the problem becomes one of the mnemonic *binding* of gestures to "actions" -- while recognizing the gestures is trivialized.
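To give a feel for how small that recognizer can be: a toy sketch that quantizes each stroke segment to one of eight directions, collapses repeats, and matches against a template table (the templates and bindings below are invented for illustration, not my actual gesture set):

    /* Toy gesture recognizer: quantize each stroke segment to one of
     * eight compass directions, collapse repeats, and compare against a
     * small template table.  Templates/bindings are illustrative only. */
    #include <stdio.h>
    #include <string.h>
    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    /* 'a'..'h', counterclockwise starting at +x ("right");
     * assumes y increases upward (math convention). */
    static char dir8(double dx, double dy)
    {
        int oct = (int)floor((atan2(dy, dx) + M_PI / 8) / (M_PI / 4));
        return (char)('a' + ((oct + 8) % 8));
    }

    /* Reduce a point list to a collapsed direction chain. */
    static void chain(const double x[], const double y[], int n, char *out)
    {
        int len = 0;
        for (int i = 1; i < n; i++) {
            char d = dir8(x[i] - x[i - 1], y[i] - y[i - 1]);
            if (len == 0 || out[len - 1] != d)
                out[len++] = d;
        }
        out[len] = '\0';
    }

    int main(void)
    {
        /* Hypothetical gesture -> action bindings */
        static const struct { const char *chain, *action; } tmpl[] = {
            { "ga", "MUTE" },    /* down then right = "L"        */
            { "a",  "NEXT" },    /* right           = horiz line */
            { "g",  "STOP" },    /* down            = vert line  */
        };

        double x[] = { 0, 0, 0, 1, 2 };   /* an "L": down, then right */
        double y[] = { 2, 1, 0, 0, 0 };
        char c[32];
        chain(x, y, 5, c);

        for (unsigned i = 0; i < sizeof tmpl / sizeof tmpl[0]; i++)
            if (strcmp(c, tmpl[i].chain) == 0)
                printf("gesture \"%s\" -> %s\n", c, tmpl[i].action);
        return 0;
    }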

But, as a result, I lose the ability to "write prose" using that interface! So, it's lousy for a wordprocessing application. Yet maps well onto an "automation controller"! WITH THE STIPULATION THAT THE APPLICATION NEVER EVOLVES OUTSIDE OF THIS DOMAIN!

How would you "do CAD" on your smartphone/tablet/tablet PC? :< E.g., just sending mail/USENET has proven tedious with the "pen" interface! :< (gee, what could be more natural for WRITING than a *pen*???! :< )

Reply to
Don Y

Ah, OK. So, you weren't researching UI's, per se, but, rather, implementing a *specific* one (mouse).

Reply to
Don Y

There are some attempts going on. This book by one of the Safety Critical Systems Club members (UK) has some of the research into the topic for High Integrity Systems, where the user interface can be a key factor in the safety of the system.

"Human Factors in Safety Critical Systems" by Felix Redmill and Jane Rajan, ISBN 0-7506-2715-8.

It is probably still early days for such research though and new devices will be cropping up all the time.

--
******************************************************************** 
Paul E. Bennett IEng MIET..... 
Reply to
Paul E Bennett

Safety gets lots of attention because there are costs BORNE BY THE MANUFACTURER associated with every aspect of a product's operation and deployment.

But, other applications have similar costs -- BORNE BY THE USERS! Few vendors seem to consider these as "valid".

E.g., I absolutely *despise* the "spin wheel" on old iPods. Granted it has some "cool" appeal (yawn in 2013) and is more robust than a mechanical equivalent. But, clearly not well thought out -- why not a raised ring (or two concentric rings, effectively making a CHANNEL for your fingertip to traverse) to guide your finger WITHOUT REQUIRING THE USE OF YOUR EYES *and* A SECOND HAND TO HOLD THE BODY OF THE UNIT!

(I defined many of my *touchpad* "gestures" to exploit the "edge" of the touchpad's recessed surface. E.g., you can draw a "square" just by hugging the outer edge of the recessed rectangular frame; a vertical line by traveling down one edge; a horizontal by traveling across the *adjacent* edge; etc.)

Thanks, I'll have SWMBO add it to her next Amazon order.

I suspect there will never be a real "science" in the sense of codified rules/theorems, etc. Rather, a collection of *existing* interfaces accompanied by tabulations of relative strengths and weaknesses. (this is the approach I have taken -- empirically). So, maybe it helps folks *anticipate* what might be "wrong" (or *right*!) with a proposed new or existing interface. But, I suspect there will still be lots of experimentation involved -- often at the hands of the end user *after* release!

:<

Reply to
Don Y

FWIW, Kensington has made such a thing for many years.

Reply to
Robert Wessel

In the non-safety world I am sure you are right. As to your tabulations of scoring for each of the interface methods, are you able to incorporate that into a document that can be shared? It might be quite a useful work to start from for those whose research may be able to add to that knowledge.

For the moment I am looking at potential sensor arrays for autonomous robotics. A whole different kettle of fish.

--
******************************************************************** 
Paul E. Bennett IEng MIET..... 
Reply to
Paul E Bennett

I'm at a loss as to how something could even be "scored" (borrowing your term from below). Esp in any sense that would be applicable "across technologies".

E.g. you can score the performance of a particular technology in terms of "intended command" vs. "perceived command" (so, for a pen-based character recognizer, that would be the number of good/bad recognitions). But, how do you rate the *severity* of a "bad"? Or, *weight* the probability of various potential "bads"?

E.g., misinterpreting STOP as ACCELERATE is far worse than STOP as SLOWDOWN!
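The closest I can imagine to a "score" is a cost-weighted confusion matrix *within* a single technology -- every number below is invented, just to show the bookkeeping:

    /* Hypothetical scoring of one input technology: P(perceived |
     * intended) weighted by a severity cost for each confusion.
     * All figures are invented to illustrate the arithmetic. */
    #include <stdio.h>

    enum cmd { STOP, SLOWDOWN, ACCELERATE, NCMD };
    static const char *name[NCMD] = { "STOP", "SLOWDOWN", "ACCELERATE" };

    /* P(perceived j | intended i); each row sums to 1.0 (made up) */
    static const double p[NCMD][NCMD] = {
        { 0.90, 0.08, 0.02 },
        { 0.05, 0.90, 0.05 },
        { 0.02, 0.08, 0.90 },
    };

    /* Severity of acting on j when i was intended (0 = harmless) */
    static const double cost[NCMD][NCMD] = {
        { 0.0,  1.0, 10.0 },   /* STOP read as ACCELERATE is the worst */
        { 0.5,  0.0,  2.0 },
        { 1.0,  0.5,  0.0 },
    };

    int main(void)
    {
        double total = 0.0;
        for (int i = 0; i < NCMD; i++) {
            double r = 0.0;
            for (int j = 0; j < NCMD; j++)
                r += p[i][j] * cost[i][j];
            printf("expected cost when %s intended: %.3f\n", name[i], r);
            total += r;
        }
        /* assumes every command is equally likely to be intended */
        printf("overall: %.3f\n", total / NCMD);
        return 0;
    }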

And, how do you compare "pen recognition" to "mouse clicks"? Or, eye/head tracking? Apples and Bazookas!

I can try to clean it up (so it's not quite as "personal"/embarrassing). But, as I said above, it's mainly a collection of "observations" from playing with various input devices (for several weeks at a time). Many of the criticisms may not even be pertinent to the "technology" involved.

E.g., "poor battery life" (well, is that something inherent in the technology? Or, just a crappy implementation?)

Other comments are speculations on my part. E.g. *I* have trouble using a trackball (thumb driven) to position *and* click -- without the ball shifting position *as* I click. Is this a physical problem of mine? Yet, I suspect folks with PT or ET would have this problem most definitely!

An otherwise healthy, vision compromised individual can probably have (learn to have) little problem using a tactile braille display. OTOH, a diabetic blind would (eventually) find it useless.

A 3D auralizer works adequately for folks with normal hearing but for folks with mid/high freq loss, the effect falls off rapidly.

How the hell do you put these on a scale? *ANY* scale??

Tactile, positional, vision, etc.??

Reply to
Don Y

I think this is a perfect example of UI design failure.

The use case of many people is either they're heating a shop-bought product, in which case it tells you 'microwave on full power for 4 minutes'. Or they're just heating up a bit of food, where you just have to pick a time and see how it works out. Maybe longer time and power control for defrost. That's all the features 90% of users need.

But microwaves seem to have dozens of settings for cooking chicken, vegetables, soup, whatever. You have to punch in the cooking time to the nearest second (but there's not much difference between defrosting that turkey for 15 or 16 minutes). And it's frequently difficult to work out how to start the thing.

The worst example is a microwave that wouldn't do anything until you set the clock. Bit tedious when the microwave shares a socket with other appliances and frequently gets unplugged. And the clock has no bearing on its cooking facilities for 99% of users (in theory you can set it to turn on 5 minutes before you walk through the door, but nobody does).

So 'big knob' seems to be the perfect UI. It's just a pity it's only fitted to cheap microwaves.

'Couldn't program a video recorder' used to be a way of saying you weren't technically minded in the 1980s - I think it says more about the terrible UIs that these things had.

Theo

Reply to
Theo Markettos

It would be fine if and only if I can use the same control to set 15 minutes (+-5%) and 15 seconds (+-5%). Easy with /simple/ push button controls, effectively impossible with a rotary knob.

Of course there are too many microwaves which are "so simplified" that you can't figure out how to specify mm:ss time and x% power. Nor what any of the icons mean either.

Reply to
Tom Gardner

Exactly. Or, failing to consider the *types* of users you are likely to encounter.

E.g., for me, it's much easier to press "1" and know I will get 1 minute. (I could similarly press TIMER 1 0 0 START but why press 5 keys when one will suffice?) FOR ME, trying to position a *dial* at "1:00" would be tedious -- more effort than it was worth (esp if it didn't have detents at convenient settings!)

Yeah, and what's "full power"? Size of oven seems to correlate with power capability. But, the *food* going in it is the same, regardless!

Actually, we have learned to use ours with more flexibility -- but, still far less than it is capable:

- heat up COLD coffee or tea? BEVERAGE

- warm up coffee or tea, or "half a cup"? +30SECONDS

- defrost a burger? DEFROST 2; rotate; DEFROST 2; rotate; DEFROST 1

- warm up some marinara sauce? REHEAT 5 2 ("5" being the mnemonic for "S"auce chosen by the oven manufacturer)

Other than that, just push '1' or '2' (1:00 or 2:00, respectively) and try again after inspecting the meal.

Too many buttons that don't serve any REAL purpose.

OTOH, they did get some things right: CLEAR tosses the entire entry (instead of deleting the last keystroke -- d'uh).

And, some things colossally WRONG: can't start the EGG TIMER (i.e., maggy stays off!) with the door open (though you can open it after it has started -- I think).

And, can't use the oven while timer is in use. Gee, they can't keep track of TWO -- make that THREE as the RTC is also running (but not displayed!) -- different down counters simultaneously? As a result, if you're baking a potato (in conventional oven) and have set the timer, you can't use the microwave during that time!

Instead, you are forced to wait for some "convenient" time remaining to be displayed (something easy to remember AND perform "clock math" upon), STOP the timer, throw your item in the microwave, QUICKLY (because you don't want to have to account for the time you may have spent doing these things!) specify the cook time, START, wait, remove cooked item and RESET the timer to the time at which you interrupted it LESS the cook time you specified.

Sheesh! Buy a $10 standalone timer, instead (i.e., remove that feature from the oven!)

And small differences in time aren't reflected in the power delivered to the "load" (meal)! I.e., the maggy is typically gated on and off at some duty cycle for the period you have specified. But, we're not talking about some fraction of a second on and off IN EACH SECOND. Rather, many seconds ON followed by many OFF! So, adding three seconds to cook time could just mean that OFF time is 3 seconds longer -- which it WOULD HAVE BEEN had you simply removed the item from the oven 3 seconds earlier!
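To put rough numbers on that (the gating period and duty cycle here are assumptions, not measurements of any particular oven):

    /* Hedged sketch: magnetron ON time vs. cook time for a
     * duty-cycle-gated oven.  Period and duty figures are assumed. */
    #include <stdio.h>

    #define PERIOD_S 30   /* assumed gating period, seconds              */
    #define ON_S     15   /* assumed ON portion at a "50% power" setting */

    /* Seconds of actual magnetron ON time within a given cook time,
     * assuming each period starts with its ON portion. */
    static int on_time(int cook_s)
    {
        int full = (cook_s / PERIOD_S) * ON_S;   /* whole periods  */
        int rem  = cook_s % PERIOD_S;            /* partial period */
        return full + (rem < ON_S ? rem : ON_S);
    }

    int main(void)
    {
        /* 50 s vs. 53 s: the extra 3 s may add no ON time at all. */
        printf("50 s cook -> %d s of magnetron ON time\n", on_time(50)); /* 30 */
        printf("53 s cook -> %d s of magnetron ON time\n", on_time(53)); /* 30 */
        return 0;
    }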

Keypads also let you make *big* mistakes! SWMBO had set the microwave for 15:00 on high -- instead of 0:15. Luckily, some daemon running in my subconscious noticed, "gee, the microwave sound has been persistent for quite some time now! What the hell is she cooking??"

The *downside* is that it will invariably break more readily than a membrane keypad. And, is harder to keep clean (getting "under" the knob)

Apparently, a large number of current device returns are related to "too difficult to use" (or, too impatient to LEARN how to use).

IMO, designers should bear the cost of the complexity required to MAKE THEIR DEVICES SIMPLE! If that makes your job harder... Become a plumber, instead.

E.g., my automation system seeks to make the UI as lean as possible FROM THE USER'S PERSPECTIVE. It does so by making the design far more complex.

E.g., instead of RADIO 89.3MHz or even RADIO WXYZ, you say RADIO.

*Which* station is tuned depends on *who* is making the request (SWMBO likes the local Jazz station, I prefer R&R). Or, time of day (SWMBO listens to "news" in the mornings, Jazz thereafter). Or, location (SWMBO listens to classical while in her studio). Or, day of week (SWMBO listens to certain comedic shows broadcast at certain times on certain days). [Me? Easy to please. Same ol', same ol' -- just the VOLUME level changes! :> ]
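Mechanically, it boils down to a most-specific-rule-first lookup over the current context -- the names, stations and rules below are invented to show the idea, not the actual system:

    /* Hypothetical context-to-station rule table, most specific rule
     * first.  Everything here is illustrative. */
    #include <stdio.h>
    #include <string.h>

    struct context {
        const char *user, *room;
        int hour, weekday;    /* weekday etc. could refine the match */
    };

    struct rule {
        const char *user, *room;   /* NULL = "don't care" */
        int hour_lo, hour_hi;      /* -1   = "don't care" */
        const char *station;
    };

    static const struct rule rules[] = {
        { "swmbo", "studio", -1, -1, "classical" },
        { "swmbo", NULL,      6, 10, "news"      },
        { "swmbo", NULL,     -1, -1, "jazz"      },
        { "don",   NULL,     -1, -1, "rock"      },
        { NULL,    NULL,     -1, -1, "jazz"      },   /* default */
    };

    static const char *pick_station(const struct context *c)
    {
        for (unsigned i = 0; i < sizeof rules / sizeof rules[0]; i++) {
            const struct rule *r = &rules[i];
            if (r->user && strcmp(r->user, c->user) != 0) continue;
            if (r->room && strcmp(r->room, c->room) != 0) continue;
            if (r->hour_lo >= 0 && (c->hour < r->hour_lo ||
                                    c->hour >= r->hour_hi)) continue;
            return r->station;
        }
        return "off";
    }

    int main(void)
    {
        struct context c = { "swmbo", "kitchen", 8, 2 };
        printf("RADIO -> %s\n", pick_station(&c));   /* "news" */
        return 0;
    }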

It would be A LOT easier to push all this choice back into the user's lap. And, easy to JUSTIFY doing so: "What happens if she wants to listen to SOMETHING ELSE?" (well, then she will have to TELL you that! But, she shouldn't have to say: "Turn on the radio. Tune it to xxx. Set the volume level to yyy and route the audio into the studio -- until dinner time")

Reply to
Don Y

The ones that I have seen (knob) do this by having nonlinear ranges. I.e., you can effectively specify a fine portion of the first minute; a coarser portion of the second minute, etc. But, when you get up to ~15 minutes, you are effectively stuck with 15 vs 14 or 16. (there are just so many degrees in the dial's range of motion! :>)

But, this means users have to be able to think in nonlinear terms! I.e., you don't turn the dial twice as far to get twice the time!
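E.g., a square-law knob (the law and the 35-minute full scale are assumptions pulled out of the air) gives you under 3 seconds per degree of rotation near 1:00 but roughly 10 seconds per degree near 15:00:

    /* Hypothetical nonlinear knob law: position 0..1 maps to cook time
     * with fine resolution at the low end, coarse at the high end. */
    #include <stdio.h>

    #define MAX_MIN 35.0    /* assumed full-scale cook time, minutes */

    static double knob_to_minutes(double pos)   /* pos in 0.0 .. 1.0 */
    {
        return MAX_MIN * pos * pos;             /* simple square law */
    }

    int main(void)
    {
        /* one degree of a 270-degree knob, near the bottom vs. the top */
        double deg = 1.0 / 270.0;
        printf("near 1 min:  +%.1f s per degree\n",
               60.0 * (knob_to_minutes(0.170 + deg) - knob_to_minutes(0.170)));
        printf("near 15 min: +%.1f s per degree\n",
               60.0 * (knob_to_minutes(0.655 + deg) - knob_to_minutes(0.655)));
        return 0;
    }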

Ours has text labels. Though the display is cryptic. And, many concepts are fuzzy. Is BEVERAGE 2 supposed to mean *2* cups of coffee? Does REHEAT 5 2 (instead of REHEAT 5 or REHEAT 5 1) mean "twice as much sauce?"

I think the *best* interface would be a hole that you could poke your finger through and *feel* the temperature of the product! WITHOUT risking an RF burn!! :>

[I once grumbled to MD re: annual prostate exam: "There has GOT to be a better way!" (of course, some of that was the result of the physical discomfort; and some because of the psychological! :> ) He *immediately* replied: "This is inexpensive, rarely breaks, minimal risk to patient *and* is incredibly sensitive!" (imagine designing a device that can "feel" as easily as a finger tip) It was easy to agree with him. But didn't make me any happier to have his finger up my ass!! :< ]

Reply to
Don Y

There are some (very few) kitchen appliances that use linguistic variables rather than crisp engineering notation for control. Anyone who has used fuzzy logic knows just how relevant linguistic variables are to the problem at hand.

Most stove tops use linguistic variables; most ovens use crisp variables. A stove top (I just checked ours) has simmer, high, medium high, medium, medium low, low, and off on a continuous single knob.

Very old instrumentation used linguistic variables before there were standard calibrations for crisp scales.

In the kitchen, convenience products have detailed instructions that are very often in linguistic-variable terms. For example, the following directions from a Knorr soup packet:

Directions:

  1. Empty contents into saucepan; add 4½ cups (1 L) cold water.
  2. Bring to a boil, stirring constantly.
  3. Reduce heat; partially cover and simmer for 15 minutes, stirring occasionally.

  4 to 6 servings, 4½ cups (1 L)

Lots of UI could be implemented using linguistic variables.
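A minimal sketch of that idea: the UI exposes linguistic labels while the control loop runs on crisp setpoints underneath (the labels and temperatures below are invented for illustration):

    /* Hypothetical linguistic-label front end over crisp setpoints. */
    #include <stdio.h>
    #include <string.h>

    struct setting { const char *label; double setpoint_c; };

    static const struct setting burner[] = {
        { "off",     0.0 },
        { "simmer", 85.0 },
        { "low",   120.0 },
        { "medium",175.0 },
        { "high",  230.0 },
    };

    static double lookup(const char *label)
    {
        for (unsigned i = 0; i < sizeof burner / sizeof burner[0]; i++)
            if (strcmp(burner[i].label, label) == 0)
                return burner[i].setpoint_c;
        return -1.0;    /* unknown label */
    }

    int main(void)
    {
        printf("simmer -> %.0f C\n", lookup("simmer"));
        return 0;
    }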

w..

Reply to
Walter Banks

Depends. As "early" as 50 years ago, we had a stovetop burner with a control calibrated in *degrees* (temp sensor would cling to bottom of pot/pan to close the loop)

Good point. But, I think it depends on the nature of the task at hand and how well suited it is to "subjective" interpretation. E.g., "loud" and "soft" mean different things to different users (and in different situations!). To someone with an artistic bent, "green" means something more precise than it does to the average joe (who considers lots of different colors to be "green"). But, if there is a fixed set of choices, "green" can be agreed upon (even if begrudgingly) by all.

Heating up a pot of soup is largely subjective -- the user decides how hot is hot enough/too hot.

OTOH, if you've ever tried to convey a non-trivial Rx to another person, people have different ideas for what "pinch", "dash", etc. mean (they are actually calibrated amounts, NOT fuzzy concepts).

E.g., I have a pineapple cheesecake Rx that "everyone" loves! But, it's very hard to make right -- simply because it's hard to describe *how* fast you have to stir the pineapple (while reducing it), how hot the pan should be, how long it should remain on the heat, what temperature, etc. So, folks who want the Rx need to watch me *make* one -- 5 hours -- before they can understand my written description of the process. Linguistic variables are too imprecise; and crisp variables don't really exist (do you measure stirring speed in "RPM"? How do you quantify the motion of the stirring utensil *in* the pan? etc.)

Reply to
Don Y
