I have a quick-and-dirty little project that I probably should have begged off -- but, it so closely resembles work that I'm doing on another project that I opted to take it on. /Carte Blanche/ is an excellent motivator! :>
As with other projects in the recent past, it's a distributed control system. Unlike other projects, (almost) all of the "application" software resides in a central "controller": a COTS SFF PC running a baremetal MTOS with no physical I/O's beyond disk (for persistent store) and NIC (for RPC's).
The "remote" nodes are relatively simple (dirt cheap) "motes" that do little more than hardware and software "signal conditioning" for the physical I/O's that they happen to have available, locally. This saves *thousands* of dollars in cabling costs -- wire and labor (which often has to be done by a union electrician, "inspected", etc.).
A (portion of a) local namespace might appear to be:
/devices
    /audio
        /microphone
        /speaker
    /display
        /screen
        /backlight
        /touchpanel
    /switches
        /1
        /2
        /3
    /lamps
        /1
        /2
        /3
    ...
All of these are exported to the central controller FROM EVERY NODE. I.e., as if each was a "network share", "NFS export", etc. (in some cases, some or all may be exported to other nodes as well! E.g., node1 might act as the user interface for a "headless" node5)
The central controller builds namespaces from this composite set of exports. E.g., one such might be:
/devices
    /audio
        /node1
            /microphone
            /speaker
        /node2
            /microphone
            /speaker
        ...
    /display
        /node1
            /screen
            /backlight
        /node2
            /screen
            /backlight
        ...
while another, equally valid and *concurrently* active might be:
/devices
    /node1
        /audio
            /microphone
            /speaker
        /display
            /screen
            /backlight
    /node2
        /audio
            /microphone
            /speaker
        /display
            /screen
            /backlight
    ...
Still one more might be:
/devices
    /audio
        /talk           ("microphone" from node 1)
        /listen         ("speaker" from node 5)
    /display
        /screen         (from node 3)
        /backlight      (from node 3)
        /touchpanel     (from node 3)
    /switches
        /left-limit     ("3" from node 8)
        /home           ("3" from node 2)
        /right-limit    ("1" from node 7)
    /lamps
        /red            ("1" from node 5)
        /green          ("2" from node 4)
        /blue           ("1" from node 3)
    ...
while another task has a different SET of the SAME NAMES -- but bound to different "node instances". I.e., the same identical piece of code running in that "another task's" namespace would result in "identical" activities -- but taking place on an entirely different set of node I/O's using these different namespace bindings.
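To make the idea concrete, here's a minimal Python sketch of per-task namespace construction -- NOT my actual implementation, and all of the names ("Namespace", "bind", "resolve") are invented for illustration. The point is that two concurrently active namespaces can give the same physical resource different names, and a task whose namespace lacks a binding simply cannot reach the resource at all:

```python
class Namespace:
    """A task-private mapping of names onto (node, local path) resources."""
    def __init__(self):
        self._bindings = {}   # task-visible name -> (node, local path)

    def bind(self, name, node, local_path):
        """Make a (possibly remote) resource visible under 'name' HERE."""
        self._bindings[name] = (node, local_path)

    def resolve(self, name):
        """A name resolves only if it was bound; all else is invisible."""
        if name not in self._bindings:
            raise KeyError(f"{name}: not bound in this namespace")
        return self._bindings[name]

# Two concurrently active namespaces over the SAME physical resource:
by_device = Namespace()
by_device.bind("/devices/audio/node1/microphone",
               "node1", "/devices/audio/microphone")

by_node = Namespace()
by_node.bind("/devices/node1/audio/microphone",
             "node1", "/devices/audio/microphone")

# A third task with neither binding can't touch the microphone at all --
# resolve() fails before any I/O could even be attempted.
```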
[This is really slicker than snot! Tasks need not be concerned with where their I/O's are located nor worry about interfering with the activities of other tasks: if a resource isn't bound in *your* namespace, there is NOTHING you can do to interfere with it nor any concern that *it* might interfere with *your* activities! The protection domains are absolute -- much like operating in a chroot(8) jail!]

With this, I can build little "virtual machines" (a poor choice of term, as it has already been appropriated for a DIFFERENT use) out of components from anywhere in The System. E.g., I could bind "/switches/3" on node *1* to "/button"; and "/lamps/1" on node *5* to "/bulb" and then use
    if (/button == ON) /bulb = ON
to implement a "remote" light switch. No hassle of sending messages across the network to specific IP addresses, implementing an IDL, marshalling RPC arguments, etc.
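Here's a hypothetical sketch of that "remote light switch" in Python. The device classes and the dict standing in for the namespace are invented for illustration; the point is that the task's logic references only its task-local names -- where "/button" and "/bulb" physically live is fixed by the bindings, not by the code:

```python
ON, OFF = "ON", "OFF"

class DigitalIn:
    """Stand-in for a conditioned digital input on some node."""
    def __init__(self, state=OFF):
        self.state = state
    def read(self):
        return self.state

class DigitalOut:
    """Stand-in for a conditioned digital output on some node."""
    def __init__(self):
        self.state = OFF
    def write(self, v):
        self.state = v

# The controller binds node1's "/switches/3" and node5's "/lamps/1"
# into this task's namespace under task-local names:
ns = {
    "/button": DigitalIn(state=ON),   # stands in for node1:/switches/3
    "/bulb":   DigitalOut(),          # stands in for node5:/lamps/1
}

# The task's entire logic -- no addresses, no IDL, no marshalling:
if ns["/button"].read() == ON:
    ns["/bulb"].write(ON)
```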
At the same time, I can have another task that works in a richer namespace -- e.g., the third one, above (introduced as "while another") -- that can do:
    for (N in 1..number_of_nodes)
        for (lamp in 1..3)
            /devices/nodeN/lamps/lamp = OFF
to implement a "master reset" (after ensuring that no other task tries to turn any of these ON after having done so)
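A runnable sketch of that "master reset" loop, again with invented scaffolding (a flat dict standing in for the composite namespace). The loop walks every node's lamps by name alone, never by address:

```python
OFF = "OFF"
number_of_nodes = 3   # assumed small value, for illustration

# A flat name -> state store standing in for the composite namespace,
# with every lamp initially ON:
ns = {f"/devices/node{n}/lamps/{lamp}": "ON"
      for n in range(1, number_of_nodes + 1)
      for lamp in range(1, 4)}

# The "master reset" -- the same two nested loops as the pseudocode:
for n in range(1, number_of_nodes + 1):
    for lamp in range(1, 4):
        ns[f"/devices/node{n}/lamps/{lamp}"] = OFF
```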
[This is important as it shows how the multiple displays, etc. can be handled from a central controller without ever burdening any individual task with knowledge of more than a *single* display, etc.]

The takeaways/executive summary:
- resources appear in namespaces
- anything that doesn't appear in a namespace can't be referenced (i.e., *accessed*!)
- any new INDEPENDENT namespace can be constructed by binding names from an existing namespace to NEW names in the new namespace
- a resource may appear in multiple namespaces concurrently and with different names
- resources can exist on different nodes without that being reflected in the construction of ANY of the namespaces in which they are enumerated
- having a name (handle) for a resource allows it to be accessed without regard for physical location (!)
There's a small amount of "work" involved in implementing this to create the "device interfaces" (i.e., the software that makes a particular digital output look like "/lamp/1" and control its state via "ON" vs. "OFF" -- or "DIM", "FLASH", etc.). Much of the application effort involves deciding how to split the namespace into task-specific namespaces (i.e., limiting what any task can do to ONLY the things that it SHOULD be able to manipulate and query; a task that expects to send and receive characters via a serial port doesn't necessarily need to be able to alter the baud rate or other characteristics of the PHYSICAL interface!).
For example, if a task shouldn't be able to access a resource, then its namespace shouldn't include any references BOUND to that resource! If a task shouldn't be able to perform a particular operation on a particular resource (e.g., never turn it "OFF"), then an agency/proxy can be created to process requests from the task and dispatch APPROVED requests to the actual resource; the "bare" resource never appears in the constrained task's namespace -- the proxy appears, instead!
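A minimal sketch of that agency/proxy idea, with invented names. The constrained task's namespace holds only the proxy, never the bare lamp, so a disallowed operation (here, "OFF") is vetoed before it can ever reach the real resource:

```python
class Lamp:
    """The bare resource -- never visible to the constrained task."""
    def __init__(self):
        self.state = "OFF"
    def set(self, v):
        self.state = v

class NoOffProxy:
    """Forwards approved requests; rejects any attempt to turn OFF."""
    def __init__(self, real):
        self._real = real
    def set(self, v):
        if v == "OFF":
            raise PermissionError("this task may not turn the lamp OFF")
        self._real.set(v)   # approved -> dispatch to the actual resource

real_lamp = Lamp()
task_ns = {"/lamps/red": NoOffProxy(real_lamp)}   # only the proxy is bound

task_ns["/lamps/red"].set("ON")      # approved, forwarded to real_lamp
# task_ns["/lamps/red"].set("OFF")   # would raise PermissionError
```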
The biggest problem lies in the use of a central controller in the implementation. While it makes the nominal application much easier to implement (more resources, richer development environment, etc.), it complicates the sharing of physical resources on the individual nodes!
I.e., each *node* can access its own *local* "namespace" (and, if I so choose, even portions of namespaces on other nodes -- in much the same way that the central controller does!). This allows some failsafes to be locally implemented (e.g., "turn off ALL lamps if they've been left on for more than 12 hours" or "if unable to contact node X for more than 23 minutes, flash /lamp/2 at a rate of 1 Hz")
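The two failsafes mentioned could be sketched as a periodic tick run on the node, against its LOCAL namespace only. The 12-hour and 23-minute thresholds come from the text; everything else (function names, the timekeeping scaffolding) is invented for illustration:

```python
LAMP_TIMEOUT  = 12 * 3600   # seconds a lamp may stay ON (from the text)
COMMS_TIMEOUT = 23 * 60     # seconds of silence from node X (from the text)

def failsafe_tick(now, lamps_on_since, last_heard_from_x, flash):
    """Run periodically on the node; needs no help from the controller."""
    # "turn off ALL lamps if they've been left on for more than 12 hours"
    for lamp, since in list(lamps_on_since.items()):
        if now - since > LAMP_TIMEOUT:
            lamps_on_since.pop(lamp)          # turn it off, locally
    # "if unable to contact node X for more than 23 minutes, flash
    #  /lamp/2 at a rate of 1 Hz"
    if now - last_heard_from_x > COMMS_TIMEOUT:
        flash("/lamp/2", rate_hz=1)

# e.g., at t=50000s: a lamp lit at t=0 (>12h ago) gets shut off, and
# 30 minutes of silence from node X triggers the 1 Hz flash:
flashed = []
lamps = {"/lamp/1": 0}
failsafe_tick(50000, lamps,
              last_heard_from_x=50000 - 1800,
              flash=lambda name, rate_hz: flashed.append((name, rate_hz)))
```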
THE PROBLEM
The real issue lies with the user interface devices -- display (screen, touchpanel), audio (in/out), etc. (others not mentioned in my above descriptions). They are shared resources that may *not* want to be "uniquely held" by a particular agency/task.

E.g., while the central controller (aka "main application") typically "holds" all of the user interfaces (as *it* is interacting with the various users), an asynchronous "event"/exception may be raised in a local node that needs to be conveyed to the "application" (usually, with some urgency). In a uniprocessor/tightly coupled SMP implementation, the communication is local and reliable. The application layer eventually recognizes the event and decides how the information should be conveyed to the user and/or how the event should be handled.
The application is AWARE of the event! (i.e., if it needs to do something beyond the notification, it *can* do those things)
In the distributed case, that may not be true (e.g., the "event" may be "Communication Failure"). So, it may not be possible for the local node to post the event to the central controller and the central controller to turn around and "announce" the event to the user *at* the local node (because the controller holds the display resource for that node).
The local node may, thus, need to borrow/override some portion of a user interface to convey that information to (and elicit an acknowledgement from) the user WITHOUT *depending* on the central controller to perform that interaction.
The local node has no knowledge of the semantic content of *its* display -- the controller has been painting it based on the needs of the application as interpreted and implemented by the controller. So, the local node has no idea as to which portions of the display might be precious AT THE CURRENT MOMENT. Yet, it has to use *some* portion of the display to communicate this to the user!
[Note "display" can be a visual or aural indication; same issues apply]

I liken this to early Windows (printer) drivers that would throw up a modal dialog to complain about the printer being out of paper/toner. The "driver" unilaterally decided to grab a portion of the display device WITHOUT REGARD for what the user was doing at the time. (i.e., there's no way the driver could KNOW what the user was doing or the importance of individual pieces of display real estate!) And, it would do so in the most "noticeable" place on the display surface -- right where the user was typically looking at the time!
Then, to add insult to injury, it took the focus away from what the user was doing to DEMAND acknowledgement from the user -- as if a paper shortage was the most important issue the user was facing at that time. (from the driver's perspective, it probably *was*!)
[Note that this has been "fixed" to present those notifications elsewhere -- in a specific location of the display (Tray) and only AFTER the user expresses an interest in knowing the details of the "alert"! Fine -- if you've got the display real-estate to spare!]

This also presents opportunities for undesired behaviors to creep into the UX -- the user may be in the process of "doing something" based on the screen's contents an OhNoSecond prior to the dialog appearing (e.g., pressing a key, clicking on something, etc.) and can't stop himself quickly enough to keep that action from being processed by the interrupting, focus-stealing dialog. He may never even *see* the dialog if his eyes are elsewhere (e.g., engaged in some activity scripted from written notes placed on the worksurface).
There are two aspects of this that are consequential to the distributed nature of the implementation:
- redirecting the interpretation of the user's actions to the proper "context" (local focus vs. normal, remote focus)
- informing the "original" overlaid application that the focus has now shifted (i.e., if the application was expecting an acknowledgement in a particular time period, that may not be forthcoming because the request/indication for that action is not APPARENT to the user) -- there may not be an operable communication path between the two parties!
As it's a control system, neither the local nor remote "activities" can really "pause". If a motor is in motion, it must stop before it causes damage. If a user must interact with a mechanism, that must be done before the mechanism advances to another stage in its operation. Etc. Just because the UI is "interrupted" doesn't mean the process will be!
In the past, for "richer" implementations, I've reserved a portion of the user interface for "express messages" -- to ensure that they can always be seen WITHOUT COMPETING with the normal display content provided by the application. In my HA project, I use a spatialized "display" to present specific indicators/annunciators in different parts of the "space" around the user's head. So, a "chime off to the left" can indicate one thing while a "buzz" off to the right can indicate something else. I rely on the character and location of the sound to reinforce the user's "remembrance" of the event despite his being actively engaged ("focused") on some other activity.
Here, I don't have the physical resources to implement such a static "reservation".
One possible approach is to just overlay and wait for some sort of acknowledgement. The argument being that if the user isn't WATCHING the display (to acknowledge this asynchronous notification), then he's not MISSING anything in the overlaid application, either! And, if he's dilly-dallying in addressing that event, then he "deserves" the consequences of the ignored/overlaid message!
Another approach is to alter the display in some unique way (invert it, flash it, etc.) to draw attention to the fact that a notification is pending. Then, await the user's explicit acknowledgement to present (and eventually dismiss) that. I.e., RESERVE the "blink attribute" for this indication.
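That second approach could be sketched as a tiny state machine (all names hypothetical): the overlay never paints until the user explicitly asks, so the application's screen content stays intact -- only the RESERVED blink attribute signals that something is pending:

```python
IDLE, ANNOUNCING, SHOWING = "idle", "announcing", "showing"

class Annunciator:
    """Reserve the 'blink attribute' to announce a pending notification."""
    def __init__(self):
        self.state = IDLE
        self.pending = None
        self.display_blinking = False

    def raise_event(self, msg):
        """Async event from the local node: announce, don't overlay."""
        self.pending = msg
        self.display_blinking = True     # the reserved indication, only
        self.state = ANNOUNCING

    def user_acknowledge(self):
        """User's explicit acknowledgement drives presentation/dismissal."""
        if self.state == ANNOUNCING:
            self.display_blinking = False
            self.state = SHOWING         # NOW overlay the message
        elif self.state == SHOWING:
            self.pending = None          # dismiss
            self.state = IDLE

a = Annunciator()
a.raise_event("comm failure: lost contact with controller")
# screen content untouched; blink attribute alone marks the pending alert
a.user_acknowledge()   # user opts in -> message presented
a.user_acknowledge()   # second acknowledgement dismisses it
```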
Still another is to use an alternate user interface channel to convey this alert (e.g., something on the audio to indicate the presence of pending video; something in the video to indicate the presence of pending audio!)
An even more Draconian approach might be to add a "special indicator" ("check engine light", buzzer, etc.) that serves this purpose (but, that adds to recurring cost).
[Note that I've not mentioned these sorts of interactions/notifications BETWEEN nodes. E.g., when nodeX is acting as the UI for nodeY!]

Preferences? Anything I've not considered?