PDA as "X terminal"

Actually, there /are/ VNC server setups which work in this "insane" way. I use VNC quite a bit on Windows machines, since it is the only decent way to remotely access them (with Linux I can do most things with ssh, which is vastly more efficient).

The VNC server gets information from Windows when parts of the screen are re-drawn. It doesn't cover everything - accelerated stuff, 3D parts, DirectX, video, etc., are always going to be a mess when you are using remote displays. Windows also doesn't seem to be able to see changes to command prompt boxes (don't ask me why!), so these are polled. But the VNC server gets information about rectangular areas that may need to be resent. It then compares the new bitmap with its copy of the frame buffer to see if it has really changed, and to determine how to send it (raw data, lossless compressed data, lossy compressed, etc).

It can also go a step further. With some versions of VNC (I use tightvnc) you can install an extra screen driver (the "mirage" driver) that spies on the Windows GDI calls. With this in place, VNC can monitor the actual GDI calls ("draw a line here", etc.) and pass these over the link to the client. It can always fall back on a straight pixel copy if necessary, but use of "mirage" can greatly reduce the load on the VNC server, as well as the bandwidth.

I have no idea if there is a similar system for Linux VNC servers - as I say, I have had little need of VNC on Linux.

Reply to
David Brown


All these internal tricks are done not to save time, but because access to the Windows framebuffer is not an option for the applications. I remember the tightvnc people at some point posted about why it would not be easy at all to support Vista, IIRC (the reason was no access to display memory).

This is part of the reason why I said DPS was particularly vnc friendly.

Telling the client to draw primitives on the screen is not part of the RFB protocol, so this must be some other flavour of vnc, if it really works this way. I don't know a tightvnc which does that (but I have not recently checked the news there).

BTW tightvnc is a bit sluggish to me (I use only vnc clients on windows), realVNC works better for me.

Dimiter

------------------------------------------------------ Dimiter Popoff Transgalactic Instruments

formatting link

------------------------------------------------------

formatting link

Reply to
Didi

You have to look at what you end up doing "in" the display. I.e., the nature of changes that you make between "updates". (You also have to decide if *you* can control the update() or if you are at the mercy of an asynchronous update process, etc.)

You don't have to look at individual pixels. For example, in many text-based UIs, I present a layered window interface. I.e., a hierarchy of menus implemented as pop-ups layered atop each other.

So, I *know* that the changes to any particular screen image will be roughly rectangular in nature. (there may be some coincidental "non-changes" within this region but trying to optimize those is an insignificant saving)

As such, I just track begin/end changes for a rectangular region:

Point begin;
Point end;

if (row < begin.row)
    begin.row = row;
else if (row > end.row)
    end.row = row;

if (column < begin.column)
    begin.column = column;
else if (column > end.column)
    end.column = column;

Now, when update() is called, I just repaint the RECTANGULAR region between "begin" and "end".

If you are updating the display after *a* new window is created (or an old window destroyed), then your changes will be confined to a rectangular region somewhere (i.e., where the window is/was). So, you don't have to look at the individual positions *within* that region -- just update it in its entirety.

If your application is aware of the underlying display mechanisms, it can exploit that knowledge to give improved performance at little cost. For example, in my example, the application deliberately builds one window at a time before update(). Otherwise, if the application updated *two* windows in very different portions of the screen (consider a small window in the upper left corner and another small window in the lower right corner), then the simplified begin/end tracking would cause overly large portions of the display to be updated (in the 2 small window case, the entire screen would be redrawn even though just two small "corners" have changed -- and nothing *else*!).

E.g., I often have a clock on-screen tucked into a corner. When I display a new time, the begin/end markers reflect the start and end of that "clock" region. I *then* update the display *before* I draw any new windows -- because the windows will typically be "far away" from that "clock display" and I don't want to have to update all the unchanged portions of the display in that larger region.

In other cases, I have a fancier UI package that draws exploding/collapsing windows, etc. Using the begin/end simplification results in the entire contents of the exploding window being redrawn each time it is (rapidly!) resized. This really looks bad because the time required to repaint the window in its most condensed representation is considerably less than in its fully expanded representation.

When using this UI, I track more details so that I can update smaller portions of the physical display at each stage (e.g., just the borders of the expanding window as they "move" outward).

You (and your customers) have to be the judge of that.

Yes. This is analogous to the begin/end tracking I mention above. The cost of the test is greatly reduced based on the *assumption* that there will be enough changes within the begin/end region that it isn't worth trying to further optimize *within* that region.

No, you don't look at individual pixels. You look at "drawing objects" (lines, circles, regions, etc.) and track *their* "extents". Then, find the smallest enclosing object (depends on what types of objects you can transfer in your protocol -- my curses only deals with one and two dimensional regions) and track *that* as the description of what must be "update()-ed".

Reply to
D Yuniskis

In a multitask, multi-window OS there is no "you"; different tasks can do different things in various windows. Of course DPS tasks can control window updates (draw for a while and signal the change afterwards, with a timeout).

But how do you know there is no other change done by another application? Or do all applications draw into the framebuffer _and_ forward their doings to the VNC server? That would at least double the application overhead, so at the end of the day you will be less efficient.

Oh no. The window can be only partially visible, parts of it covered by other windows' edges etc., so you don't know that.

Been there considered all that :-).

So what happens if the application draws a line in one window (it clips it to its limits and draws it into its offscreen buffer) and the line is only half visible because of a covering window. You clip the line to all possible windows and do that with every line? Give me a break :-).

Let's put it in numbers: an 800x600, 16 bpp buffer is roughly 1 megabyte. On the system I run DPS on currently, DDRAM is about 1 Gbyte/second (32-bit, 266 MHz data rate). Somewhat less in reality, but close enough for our purposes. Comparing 1M frames 16 times per second makes 32 megabytes transferred, or 3% of the memory bandwidth; I'll put that against any multitask multiwindow OS running a VNC server -- just point me to one to compare to. Assuming your forwarding method doubles the application graphics overhead, graphics load would only have to reach 1.6% before my constant overhead is the cheaper approach. IOW, having a _constant_, reasonably low overhead for change detection can only be beaten on very simple systems (no multi-window etc.).

Finally, I was wrong on my 10%. In fact not me, my tt command (which I wrote :-) was wrong. It lists real % overhead only when there is some significant system load, otherwise it includes idling, task switch etc. I now tried it again - ran the "hog" task, which just does "bra *" - and the results can be seen at

formatting link

Clearly my tt hack does not work all that well; percentages add up only to 99 in this case (I have seen 100 and I have seen 98 :-) ). But for the sake of my own usage, to estimate stuff etc., it is OK. The tt task itself takes up that much time because it scrolls the entire window up (graphically; in fact DPS does it for it when it sees the LF at the bottom line).

Dimiter


Reply to
Didi

From what you've said, I would still be inclined to go the VNC/RDP route, although maybe that's just because I don't know enough about the specifics.

The application server is probably running off mains, while the PDA is running off batteries. If the PDA is mostly idle, to me that's "conserving battery life" rather than "wasting processing capacity".

Also, server-side processing power is a commodity; requiring a given level of PDA-side processing power may reduce the choices and/or increase the price. And graphics processing power is a fairly cheap commodity nowadays; I'd guess that an 8400GS with 256MiB of RAM could do the rendering for 100 clients @800x600 both faster and more cheaply than upping the PDA spec to a version with accelerated graphics.

Reply to
Nobody

That's called an "iPod touch".

--
Grant Edwards               grant.b.edwards        Yow! BARRY ... That was
                                  at               the most HEART-WARMING
Reply to
Grant Edwards

Just a data point, but a web browser like NetSurf has a framebuffer mode that's pretty much just the page (I think you can switch off the navigation bar). For example:

formatting link

NetSurf has no JavaScript, which makes it really fast, and it's designed for slow hardware (while it'll run on a 30MHz ARM6, its original target was a 200MHz StrongARM). But it depends what you want -- if you need JS then other browsers might support something similar.

Theo

Reply to
Theo Markettos

This is essentially how Linux VNC works... the VNC server is a modified version of the X window system (i.e., the thing that applications talk to -- similar to XFree86 or x.org), but the X11 code is modified so that it uses VNC as a display interface rather than a driver for some graphics card. By modifying the code it has full access to all graphics events.

Theo

Reply to
Theo Markettos

OK:

I've thought of using iPod touches *if* I could permanently disable (i.e., destroy) the portion of the device that makes it usable as a media player -- irrecoverably.

The goal is to make this very noticeably part of "my" device and not usable in its original function. This is to discourage them from "growing legs" ("Cool! I'll just slip this in my pocket as I depart tonight and load my songs on it..."). People would find "high replacement costs" a definite downside (especially as they aren't inexpensive!)

Imagine if, for example, you went to a museum and the "self guided tour" was built on devices like these as a platform. You'd be asked to put up a $200 deposit to ensure their return at the end of the tour -- else too many "patrons" would *deliberately* take them home.

Reply to
D Yuniskis

Can I configure it and the PDA so that it *locks* into this mode? I.e., nothing short of reflashing the device (or something with a similarly high bar) can turn the browser *off* (and, presumably, the browser *never* crashes).

JavaScript would only be necessary if I couldn't have some other processes running alongside the browser (this would be a kludge). I am more than happy just pushing "drawing primitives" at the handheld and waiting for input events.

I.e., just like I could do with an X server running on that device. I want the handheld to behave AS IF it had previously been part of this product and someone *cut* it out and reconnected it to the device with a long, invisible cord. (i.e., no, it doesn't have an address book, or a media player, or "Solitaire", or... it's *just* the display off of Product X)

Thanks, I'll chase it down...

Reply to
D Yuniskis

A *quick* (I'm otherwise preoccupied :< ) look at the specs shows *more* than I need :> A notable omission is any means of expansion (microSD slot is effectively useless for "bolt on" hardware -- CF would have been ideal; SD a distant runner up).

I couldn't find anything suggestive of pricing (other than some other refurbished models) but I can chase that down a bit later...

Exactly. If *I* want to push audio through the interface, that's *my* prerogative -- not the *user's* ("I think I'll just switch on the phone and call home...")

Ideally, I'd prefer a BSD platform (decades of experience there). But, I can adapt, if need be.

Thanks!

Reply to
D Yuniskis

OK, that's a bit more difficult. I guess I'd look for a PDA that's well supported by Linux/OPIE/Qtopia. The problem with that is by the time they're "well supported" they're usually no longer in production.

formatting link

There are always some interesting OEM devices, but usually you've got to buy in volume:

formatting link

--
Grant Edwards               grant.b.edwards        Yow! ... I want a COLOR
                                  at               T.V. and a VIBRATING BED!!!
Reply to
Grant Edwards

I'm not worried about the production issue. Right now, I am researching the issues that come with making an interface portable (for certain classes of applications and users).

E.g., you don't think about someone walking off with a 20" touch monitor -- it doesn't happen very often. :> Ownership (usership?) of such a device is different than highly portable devices which can be more readily interchanged.

But, there are myriad other issues that are consequential to the portable/smaller implementation. Graphics become more significant (reading small text requires a disproportionately greater amount of the user's attention which can conflict with other activities -- the "texting while driving" syndrome). On a 20" interface, a user can *casually* read legends as they are physically large. Scale that same interface down and the legends become illegible -- or, occupy a larger portion of the available display.

In addition to visual consequences, smaller means more precision required in the user's "digit-al" interaction with the touch panel. Controls can't be scaled down unless you want to force the user to use a stylus (which then increases the cognitive load on the user). Smaller controls make it harder for people with motion disorders (e.g., ET) to interact with the device.

Weight also becomes an issue. A heavy device becomes burdensome to carry all day. A lighter device often implies reduced battery capacity, features, etc. There are motion disorder consequences as well (a device with a certain amount of "heft" dampens some of the effects of ET).

Small devices are more readily used outdoors. So, utility in sunlight (does the screen get washed out? do you have to keep the backlight set high to compensate?) has to be evaluated.

Portable devices make location-aware computing more of a challenge. A large device is typically sited in a fixed location. That location can be known to the application, and its behavior remains static wrt that parameter. OTOH, if a device is *mobile*, the application needs to change its behavior *dynamically* (instead of "at boot").

A portable device *as* a credential raises security issues; someone *can* walk off with it (much more easily than with a larger device acting as a similar credential). How you address these possible vulnerabilities has to be thought through, etc.

[deep breath]

What I want to do, now, is come up with something that I can deploy and, from which, gather usage metrics to better understand these issues, their consequences and other things that come up. With "typical" users (not users who are overly friendly with the device/system).

As a first step to that, I wanted to port existing applications to something easily (sacrificing performance, etc.) just to get a *personal* feel for the issues that are likely to arise. I want "real users" to see a beta version of a device, not an alpha. I don't want them familiar at all with the alpha version as that would influence their acceptance/willingness to use the beta version (and, The Market's ultimate attitude towards the *production* version)

"Make TWO to throw away..."

formatting link

Yes (see above). That's the easy part! Then, you *know* what you want and just have to find someone who can hit your price/feature point. *Getting* to that point is the hard part! :>

Reply to
D Yuniskis

Who wouldn't?

OpenBSD runs on Palm these days, and for an interface one might look at Evas from the Enlightenment project (now in beta)... It supports the XScale PXA2x0 ARM-based Palms, including the TX and the Tungsten... you could do worse than having what you asked for 8-).

Cheers, Rob Sciuk

Reply to
Spam

It's certainly true that VNC is more efficient with an OS (or graphics layer) designed to work with it. vncserver on Linux is more efficient than on Windows, because it works directly with the X server rather than having to use backdoor hacks as is needed on Windows. And I'm sure it is even more efficient with DPS.

It's always possible that I've misunderstood something here. I certainly find that using the mirage driver reduces the load on the server, and I /believe/ that it reduces the bandwidth. But I haven't done any benchmarking or serious comparative testing, and I could certainly be wrong as to /why/ it works faster.

I've found that the different flavours of vnc have leapfrogged each other with regard to performance and features. The great thing about them being open source is that you have multiple implementations that add features and improvements that they think are important for their users. And if one of the other implementations has something they want, they can merge that in with their own code while giving out the code for their own additions.

Reply to
David Brown


What I found particularly great with VNC is its wide popularity. It opened the door for me to put the netMCA series where I can deliver DPS-based products accessible over the net from "any" PC. Over the (many...) years I have put a major effort into staying PC/Windows independent, so having a way to use their display, keyboard and mouse to access DPS is really a great asset to me. Having 100 Mbps widely available has also been important, of course.

Dimiter


Reply to
Didi

There appear to be *lots* of folks in that "wouldn't" camp! :-/

I've been looking at a bunch of devices that I've rounded up recently as well as over-the-years. I ruled out most of the Palms (that I have) as too light/flimsy. Some are too gimmicky (e.g., the "sliding motion" -- what else could you call it? -- of the T3). I'm also not fond of the reserved area on the Palm screens.

Personally, the HP 3900s seem the best candidate so far. They feel "substantial" (plus the expansion sleeves are a win). But, I think they would only "fit" use by a "man" (using that as a stereotypical term) -- too meaty for a smaller-framed man or "woman" to have to lug around all day long.

If I had to make a decision *today*, I'd opt for the HP hx4700 based on size/weight/features/etc. It seems more manageable in terms of size/weight. The CF slot is built-in instead of being provided by the sleeve. Battery seems ample (I would have to see how it fares after long term use and repeated partial charges, etc.). It's a 600MHz PXA270 so has more than enough balls to "draw displays". (Accessing the stylus is a bit of a pain -- but, that will be removed so it's not germane to my needs.)

I'll have a look at some of the tablet products and see if any will give me the expansion capability in the right size range.

If OBSD is supporting the PXA's, I suspect NBSD does, as well? I'll poke around both sites and see what turns up. Thanks!

Reply to
D Yuniskis

Sorry, I meant "you" as in "The Application" (regardless of how many threads) vs. an "asynchronous" process that grabs the contents of the "virtual frame buffer" and sends it out over the wire.

In other words, if your application can scribble on the frame buffer and *then* inform/invoke something that moves that out to the physical display (via VNC or whatever), then you can arrange for all of the parts of the virtual display to be updated *before* anything (RFB) tries to pass them along to the outside world.

If, for example, you *know* you are going to construct an empty window and *then* paint some "controls"/contents into it, it would be much more efficient for you to have finished filling in the window *before* updating the physical display.

Let me explain how (my) curses implementation works and you can see the parallel to a pixel-based frame buffer.

The "virtual screen" is a two dimensional array of "cells". Each cell is a (character, attributes) tuple -- attributes being things like color, bold, dim, blink, underline, invisible, etc. Forget how this is represented in memory. Just pretend it's an array of "characters".

ANYONE (task/thread) who wants to send something to the display uses the curses API to do so. I.e., no one writes directly into that array. So, there are calls to let you write *a* character (with attributes) at a particular place (row, column), to erase a portion of the screen, etc.

We're dealing with text, typically. So, you often pass strings onto the screen. I.e., an array of characters placed at a particular position "on the screen".

[note that the window system resides ON TOP of curses. So, your task typically talks to the windowing API which, in turn, talks to the curses API, etc. At least, that is how it is *logically* structured -- the implementation blurs these layers to increase efficiency]

Anytime curses -- acting at the request of some task -- writes into the virtual display, the manner of its actions is known (by the developer). For example, the hook that lets you write a string into the display writes from left to right (d'uh!). As such, it knows that the leftmost part of the display that will be altered is that of the *first* character that will be written. The SECOND character will never be to the left of the first one (this is obvious :>). Nor will any of the subsequent characters.

So, the curses routine can look at the "begin" column (as in "beginning of changed area") variable and compare it to the column number in which the first character will be written. If the column being updated is to the left of the current "begin" value, the begin value can be updated to reflect *this* column as the leftmost that has been altered -- all characters which follow it (in this "write string" function invocation instance) will be to the right of this point -- so, begin need never be examined again (in this function instance).

Likewise, the *last* character position written in this string is the only one that must be examined and compared to the "end" variable as it will be the RIGHTMOST change made to the display in this function call.

Note that the first character written might have been to the RIGHT of the "end". Or, the last character might have been to the LEFT of the "begin". Regardless, the two tests I described will accurately cover all possibilities. ALL OF THE CHARACTERS BETWEEN THE FIRST AND LAST ARE NOT CONCERNED WITH begin AND end! (i.e., there is no added cost for tracking them).

[I am just describing one version of the "change" algorithm; and, approaching it incrementally. Not to insult your intelligence but, rather, to develop the argument, logically and for the benefit of others reading over your shoulder]

Character displays (TTYs) are line-oriented. There aren't usually primitives (ANSI 3.64) that let you deal with "regions". You can position the "display cursor" and then overwrite/insert/etc. WITHIN A LINE, typically.

As such, the changes on line 1 aren't related to the changes on line 2 (or 6) -- in the TTY's mind. Of course, if you are drawing (text) windows on the screen, then the contents of lines 1 and 2 may have a very definite relationship to each other (e.g., if you are drawing a box around a region of text then the position of the box's "side" coincides in lines 1 *and* 2...)!

So, you can track a begin/end for each row of the virtual display. When you eventually want to update the PHYSICAL display to coincide with the virtual display, you can look at the begin/end values for each row, in turn, and effectively transmit the "set cursor position to begin" command sequence (specific to the particular type of TTY) followed by the characters from virtual display columns "begin" through "end".

[in reality, you look at the cost of this operation vs. other alternatives. E.g., if begin is '2' and end is '79', it may be more efficient to send the entire line than to incur the cost of positioning the cursor, first]

Instead of tracking begin/end for each row in the display, if you *know* you have something like a windowing API sitting on top of this, you can opt, instead, to track a single begin/end for the entire display -- where begin and end are (row,column) tuples. This exploits *your* knowledge of the fact that you will typically be invoking the curses API with calls like:

write_string(row,   column, blah);
write_string(row+1, column, foo);
write_string(row+2, column, baz);

as the window you are writing into is located at (row,column).

Then, your update() routine just does:

for (row = begin.row; row <= end.row; row++) { /* transmit columns begin.column..end.column of this row */ }

> If you are updating the display after *a* new window is created (or old window destroyed)

Correct. But, at some level in your window system, there is code that knows which window pixels are "exposed". I.e., which portions of the virtual framebuffer will get scribbled on. Assuming you don't have something like the SHAPE extension, everything boils down to a bunch of rectangles (a window overlapping another window is, worst case, five rectangles)

No. Something eventually parses the window hierarchy (either while the line is being drawn or when you map the windows onto the display) and decides which portion of which window actually gets drawn on *this* particular pixel in the frame buffer. When that pixel is written into the framebuffer, you know that "pixel at (row,col) has been changed; does this affect begin/end?"

You are free to move the windows around and then "redraw" them. Some other window's contents may likely end up being drawn on *that* pixel. Or not. The begin/end information doesn't care what window it is coming from. All that is important is that a particular frame buffer location has been *changed*. I.e., your "change detector" LATER would have found this pixel. I'm just giving it advance warning of where it is -- and, more importantly, where it *isn't* (i.e., "don't check anything outside of this rectangular region because I haven't made any changes there!")

But your "constant" is constant even if nothing has changed in the display! All the begin/end (and similar) tracking does is give you advance notice of where you are *likely* to find changes (and where you WON'T find ANY!). Furthermore, they give information about the nature of those changes.

E.g., you can compare two frame buffers and get lots of detail regarding individual pixels that have changed (for example, writing an 'F' over an 'E' in a line of text *within* a window, etc.). But, that can be too much detail. You have to then decide "gee, the cost of treating that single pixel plus the pixel two dots to the left as individual changes exceeds the cost of treating this 8x8 region as a single 'change'". This is because you are looking at dots -- the contextual information is no longer present (is this dot part of a window border that was drawn? am I likely to find other dots nearby that have also changed? or, is this just one little dot in a sea of constancy?)

That was my point about where your RFB code lies in the "layering" of things. As you blur those layers, you end up (potentially) gaining efficiency because you can propagate more information between those layers.

Only you can tell what your code does. I'm suggesting you look at your user interface and see if all of your changes are, in fact, confined to specific (even if they vary over time) regions that *some part* of your code is aware of -- but that your "change routine" is having to *detect*, each time you invoke it. Then, consider parameterizing your "change" routine so that it only looks at regions that you consider *likely* to have seen changes -- and, consequentially, *ignoring* regions that you KNOW you haven't changed!
Reply to
D Yuniskis

I can't understand the enthusiasm for Linux given the BSD platforms ...

I don't disagree with your assessment. I have in front of me an HP iPAQ 2410 and a Palm TX (both PXA270s), as well as an old Pilot 5000 (no wireless). Here's a link to the Handhelds.org site, which has an assessment (likely out of date) of the state of Linux on various models.

formatting link

The Zaurus is supported pretty well by OpenBSD, and even does a nightly build -- no, really: OpenBSD doesn't like cross-compilation for platform support, so they have a Zaurus in the lab which builds the world natively.

There is a very talented young individual, Marek Vasut(?sp), who worked on his thesis getting Linux onto Palm and putting OpenBSD there, but he has had a bit of a falling out with Theo, the details of which I am unsure of. As for NetBSD support ... start with the Zaurus and work from there, I guess ...

I have been hopeful of making older PDA's useful for hacking and such, particularly given the 802.11 support, but in spite of getting Linux and other operating systems running on them, their viability is limited. I've had some degree of success with a Nintendo DS and the Homebrew movement for raw (on the hardware) hacking, also with 802.11 support.

At this point, I'm probably going to deal my old PDAs off, and move to an Android tablet ... possibly an Archos, or a Witstec A81E ... but who knows what will be announced next week. I can wait a week or two ...

Cheers, Rob.

Reply to
Spam

The FreeBSD kernel is a thing of sheer beauty. Linux is definitely a stone soup, by comparison. Whole pieces of code copied, pasted, and patched in places. But the herd goes where it goes.

Jon

Reply to
Jon Kirwan
