Lightweight Browser

Wow, we've come a long way from the Andalusian Video Snail (which, unfortunately, I can no longer find).

I think I still have a copy of the file full of VT-100 escape sequences which draws a Christmas tree with twinkling lights and a train running around it.

--
/~\  Charlie Gibbs                  |  Microsoft is a dictatorship. 
\ /        |  Apple is a cult. 
 X   I'm really at ac.dekanfrus     |  Linux is anarchy. 
/ \  if you read it the right way.  |  Pick your poison.
Reply to
Charlie Gibbs

And you've got it wrong, as a few others have already tried to point out.

I will try to make the case clearer.

A better example than Wikipedia would be a newspaper website in the current era. First, it is entirely possible to have multiple articles (or their abstracts) written by different content experts on a single web page. Further, as you note, the content expert's contribution is probably stored somewhere off in a database. It may be displayed on many pages, in any number of formats, possibly syndicated across different newspaper websites, and the content expert need have no specific knowledge of this. Indeed, each of those pages displaying the expert's contribution might be unique because of the non-expert- or different-expert-content on the page. It makes no sense to say that a content expert has authored any of these pages.

What you seem to be trying to describe with "injection" (which I agree with a previous poster is a misuse of the term) are simply dynamic web pages. Page schemas are assembled first on the server, possibly from many components fetched from different sources, according to a prescription created by what one might consider the page author. (The author need not be a single individual, and there may well be no corresponding html file anywhere.) The layout may be simple, say header/content/footer; the richness of the page then arises from the content of the components, which may come from different parts of an organization or from different organizations (including, e.g., ad vendors). The page author probably knows very little about the components, including those contributed by content experts, except to allocate each its own bit of real estate on the page.
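As a crude sketch of that prescription idea (hypothetical code of my own, not anything from a real site - the file names stand in for components that could just as well be database queries or calls to another organization's service):

/* Assemble a page from components according to a layout prescription.
 * The "page author" contributes only the prescription; each component's
 * author knows nothing about the page it lands on. */
#include <stdio.h>

static void emit_component(const char *path)
{
    FILE *fp = fopen(path, "r");
    int c;

    if (fp == NULL)
        return;                    /* a real server would emit a fallback */
    while ((c = fgetc(fp)) != EOF)
        putchar(c);
    fclose(fp);
}

int main(void)
{
    /* The prescription: layout only, no knowledge of component content. */
    const char *layout[] = { "header.html", "article.html",
                             "ads.html", "footer.html" };
    int i;

    printf("Content-Type: text/html\r\n\r\n");   /* CGI-style header */
    for (i = 0; i < 4; i++)
        emit_component(layout[i]);
    return 0;
}

Swap any component for a different one per user and you get a page that may never be served the same way twice.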

The schema constructed by the server is fetched by the user's browser, which continues the page assembly process by fetching further content, again possibly from widely varying sources (which certainly may include servers other than the one that served the page schema). The result is a webpage, seen by the user, which again may be unique to that user on that browser at that particular moment.

To a good approximation, I believe a user navigating the web today will encounter 100% dynamic pages. Dynamic pages are not the exception, they are the rule.

The "standard Wikipedia header" as a model for web pages is largely incorrect and certainly misleading. The headers and footers (and other components) may be highly customized for the particular user viewing them (and there's a whole industry focussed entirely on doing just that). I can assure you that retail websites put a great deal more time/effort/money into this customization than they do into fronting a title for a product, its image, it's SKU, and an "Add to Cart" button in the center of a page, necessary though that is. The Wikipedia request for donations is a simple example of such customization but it misrepresents what, in general, is going on with pages and their construction: there may be much more complexity to the construction of headers and footers than in what the user perceives as the content of a page. And of course none of this happens unless the page author provides for it.

Untrue for the past 20 years or more. A page can download arbitrary Javascript which can then alter the page via the DOM to do just about anything one could imagine, including change HTML tags or their contents on the fly. (You used to be able to find some examples of crude web page animations using this method around the Internet - I haven't actually seen these myself for quite a while).

Another issue. Apache does serve files, e.g., images, but most of the non-image content a user sees will come from one of many application servers (probably on the other side of an inner-tier firewall from Apache). Applications on these servers receive requests forwarded from Apache and return responses to those requests for Apache to return to the user's browser. Apache serves in a central role but it is largely a pass-through for much of the non-image web content.
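To illustrate the pass-through role, a minimal mod_proxy fragment (purely illustrative; the host name is made up) would have Apache keep serving images itself while forwarding everything else to an inner-tier application server and relaying the responses back:

# Exclusions must come before the general forwarding rule.
ProxyPass        /images/ !
ProxyPass        /        http://appserver.internal:8080/
ProxyPassReverse /        http://appserver.internal:8080/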

Design, construction and operation of a modern website are very different from dropping some static html and image files into Apache's Documents directory and then typing 'apachectl'. Lots of people have been working on websites for 30 years, after all.

Tom

Reply to
Tom Blenko

Unfortunately none of the suggestions cut the mustard :( Although most of them are indeed lightweight and pretty quick to respond, very few websites render anywhere near correctly in them. However, some sort-of good news is that on the Pi4 with a Devuan install, Vivaldi is quite quick and seems to be less of a hog than either its parent or Firefox.

--
W J G
Reply to
Folderol

Sadly as expected. I'll postpone my next review for a year or so then.

--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:\>WIN                                     | A better way to focus the sun 
The computer obeys and wins.                |    licences available see 
You lose and Bill collects.                 |    http://www.sohara.org/
Reply to
Ahem A Rivet's Shot

Indeed, and these days many apps are themselves webservers. The app's interface happens to be HTML sent over a TCP socket, and central to the app is the database that the content is based on (think about Gmail or Google Docs as an example here).
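To make that concrete, here is a toy sketch of my own (purely illustrative - port, markup and all) of an app that is its own webserver: a loop answering every TCP connection on localhost with HTML generated on the spot. A real app would consult its database before answering, and would speak rather more careful HTTP.

/* Minimal "app as webserver": POSIX sockets, no Apache anywhere. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(8080);

    if (srv < 0 || bind(srv, (struct sockaddr *)&addr, sizeof addr) < 0
                || listen(srv, 8) < 0) {
        perror("setup");
        return 1;
    }

    for (;;) {
        char body[128], resp[512];
        int n, conn = accept(srv, NULL, NULL);

        if (conn < 0)
            continue;
        /* In a real app, this is where the database gets consulted. */
        n = snprintf(body, sizeof body,
                     "<html><body><h1>Hello from the app</h1></body></html>\n");
        snprintf(resp, sizeof resp,
                 "HTTP/1.0 200 OK\r\nContent-Type: text/html\r\n"
                 "Content-Length: %d\r\n\r\n%s", n, body);
        write(conn, resp, strlen(resp));
        close(conn);
    }
}

Point a browser at http://127.0.0.1:8080/ and the "app" answers directly.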

There can be one or many instances of the app, depending on its complexity, and the database can be internal or separate. Traffic to this farm of apps is directed by frontend server(s), which take care of things like load balancing, failover and TLS.
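That frontend piece can be surprisingly small. A minimal, purely illustrative HAProxy configuration (addresses and certificate path made up) doing TLS termination, load balancing and failover in one place:

frontend www
    bind *:443 ssl crt /etc/haproxy/site.pem
    default_backend apps

backend apps
    balance roundrobin
    server app1 10.0.0.11:8000 check
    server app2 10.0.0.12:8000 check

The 'check' keyword enables health checks, so traffic fails over automatically when an instance dies.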

Static files like PDFs, images and video are only URL entries in the database - the URLs are embedded in the output HTML, and the browser fetches them from a separate server, perhaps a CDN like Akamai, which will redirect the request to a server close to the user.

At no point is a traditional webserver like Apache involved at all here - the apps are their own thing written in Python or Node or Java or whatever, the frontend server is a proxy like HAProxy, and the CDN is a blackbox service.

It's entirely different from a 1990s Apache install, and that's why all this discussion about webhosts adding advert headers to content doesn't make sense. The content lives in the DB - unless you're writing on a platform like a blogging service (where the service turns your plain text into pages, including ads), if you write the app you get to decide what HTML gets generated and which Javascript callouts you include. That JS can utterly change what the user sees, which is why you should be careful what you include.

Theo

Reply to
Theo

"Deloptes" wrote

| According to my understanding there is no "injection"; "loading" would
| be more appropriate, as the operators of a site do allow advertisements
| and similar to be loaded from third parties on the web site. You
| usually consent to this when visiting the site.

That's putting a good face on it. I never consent to visit Doubleclick and ask for an ad. Most people have no idea that's going on. They think they're visiting a webpage at acme.com. They have no idea that acme.com is feeding them to the wolves, and are in no position to consent. Most people don't even know what browser they're using.

Yes, injection is a misleading term. On the other hand, it's also misleading to say a website owner allows ads to be loaded from 3rd-party websites. They don't "allow" it! They write it into their pages, in order to make money, hijacking your browser to go visit various ad and spyware companies without your permission. And they do their best to hide that ugly truth from visitors. By what right do they do such a thing? They have no right to trick you into visiting other websites. The simple fact is that they can get away with it, so they do. Why can they get away with it? Because visitors don't understand that it's happening and the concepts to explain it are difficult. For the average person it's all a black box.

| You also seem not to accept the fact that there are dynamic web pages
| generated by webserver extensions such as Apache's PHP extension.
| Please note that whatever methodology is used, the visitor of a web
| site receives HTML sent to the browser. The dynamic part is only the
| generation of this HTML, as opposed to the static content you pretend
| is the only "true" HTML.

That's a bit confusing. The page you get is "static" insofar as it's a file. It may or may not be assembled when you ask for it. It may or may not be customized to suit your browser, screen, etc.

Traditionally, dynamic refers to client-side script, as in DHTML or dynamic HTML. Everything that moves or changes, aside from a few CSS elements, is script, which makes the page DHTML.

So if I give you a page customized with PHP, which has no script, that's a static page. If I give you a file off my server full of javascript, that's a dynamic page. Your description is, at best, the webmaster's point of view.

Reply to
Mayayana

Hmm, well you never said that you wanted websites to render correctly, and that's a very significant requirement. You're limited to browsers based on Chromium or Firefox, because they're the only ones that website developers bother to test with, and their browser engines are both bloated and slow by my standards.

Some of us don't mind, or even prefer, the broken (or perhaps better termed "unintended") rendering of websites in lightweight browsers. No ads, no pop-ups or hovering bars, no custom fonts, no content hidden for the sake of styling. It's only when needing to interact with some Javascript-based system that the need for a mainstream browser unfortunately arises.

--
__          __ 
#_ < |\| |< _#
Reply to
Computer Nerd Kev

Those two statements don't follow. "Render correctly" means show what the valid parts of HTML and CSS state. It does not mean show what the author intended to be shown.

With non-valid code any browser is free to render whatever it likes and still be correct. If the requirement is "render identically to what huge bloated guessing engines extract from nonsense", then yes, those very engines are the only ones to do it.

--




/ \  Mail | -- No unannounced, large, binary attachments, please! --
Reply to
Axel Berger

There are no bug free HTML and CSS renderers. Web designers design to the bugs of the mainstream browsers.

--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:\>WIN                                     | A better way to focus the sun 
The computer obeys and wins.                |    licences available see 
You lose and Bill collects.                 |    http://www.sohara.org/
Reply to
Ahem A Rivet's Shot

Another that might fit the bill is Vivaldi - there is now a Debian ARM download for it. Not previously mentioned because it's not really lightweight, but now the OP has upped the requirement from 'lightweight' to 'must render graphical websites correctly', it would seem to be in scope. It's an Opera replacement - from the same people.

I looked at it some time back - seemed to be fairly quick but I couldn't get on with the options it offered for scaling on-screen text, but ymmv.

--
Martin    | martin at 
Gregorie  | gregorie dot org
Reply to
Martin Gregorie


"Ahem A Rivet's Shot" wrote

| > Those two statements don't follow. "Render correctly" means show what
| > the valid parts of HTML and CSS state. It does not mean show what the
| > author intended to be shown.
|
| There are no bug free HTML and CSS renderers. Web designers design
| to the bugs of the mainstream browsers.

Yes. There have been lots of little things in the past, such as one browser rendering a 1px border on elements that another browser doesn't. Some of those things are not so clearcut. It's a system that was designed for flexibility, displaying on a variety of devices. It wouldn't be realistic to imagine there's one valid rendering in all cases. You just have to code it and see how it looks. I'm amazed browsers can do it at all. It's become incredibly complex.

Though in the past I've found that anything not IE will render complex layout dependably. I can design for IE quirks mode and all other browsers, allowing dependable rendering in pretty much all cases using only two versions of my pages. But those pages are actually hybrids, also. For example, some versions of IE can't render a DIV as a block element. So to work in both IE and Firefox I've had to do things like nested DIVs and TABLEs. The purists will say that's wrong. But the purists don't have pages that need to render correctly. :)

Reply to
Mayayana

Oh but they do, the only thing is their definition of correctly is "according to the standard" while the marketing director's definition of correctly is "identical to the wireframes we signed off on six months ago". Never mind that those wireframes were concocted in a mixture of word processors and drawing packages and are nigh on impossible to achieve.

There's a reason I avoid front-end work :)

--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:\>WIN                                     | A better way to focus the sun 
The computer obeys and wins.                |    licences available see 
You lose and Bill collects.                 |    http://www.sohara.org/
Reply to
Ahem A Rivet's Shot

Based on Chromium, according to the Wikipedia page. But the OP is apparently satisfied with it.

They also now say that they're using a Pi4, so the true need for efficiency probably isn't as great as it would be if they were using a less powerful Pi model.

--
__          __ 
#_ < |\| |< _#
Reply to
Computer Nerd Kev

formatting link

snails.vt

These days you probably want "slowcat" to view them, same site:

formatting link

./slowcat -d 80000 snails.vt

worked okay for me, though not nearly as well as I think it would look on a real slow terminal.

xmas.vt at the grox site.

More "videos":

formatting link

/* cc signature.c -lm&&./a.out>a.out.out&&slowcat -d 40000 a.out.out */ main(){int i,j,k,l;for(i=-12;i8;l>>=4)p(97-(l>>22)+(l&15));for(j=-12;jk;)p((l=k*k+++i*i+j*j)

Reply to
Eli the Bearded

Indeed, but the Vivaldi people have quietly picked out most of the crap.

Also runs 'reasonably' on a Pi3. I don't have any older ones.

--
W J G
Reply to
Folderol

Not exactly lightweight, but I was using PaleMoon, and that project seems to have folded. I'm mostly using Brave now. It's fast and seems fairly bug-free, though it has annoyances too, e.g. it runs GoogleEarth but not GoogleEarthPro, because the so-called MIME (application vs extension) table is inaccessible and they don't have a bug tracker, only an all-purpose bulletin board.

--
Martin    | martin at 
Gregorie  | gregorie dot org
Reply to
Martin Gregorie

Many thanks. I've filed this one.

--
/~\  Charlie Gibbs                  |  Microsoft is a dictatorship. 
\ /        |  Apple is a cult. 
 X   I'm really at ac.dekanfrus     |  Linux is anarchy. 
/ \  if you read it the right way.  |  Pick your poison.
Reply to
Charlie Gibbs

I basically completely rewrote slowcat.c today:

formatting link

(Source also at end of post, for future proofing.)

It makes snails.vt display well now. Suggested usage:

slowcat -b 9600 snails.vt

Where -b means "emulate baud rate" instead of the old "specify a delay". That's just a usability fix. The real fix is disabling the stdio buffering, so the delays are more accurate. I also fixed it so it now processes STDIN or multiple files.
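To sanity-check the -b arithmetic: the code derives the delay as 8000000/baud microseconds per byte, so -b 9600 gives about 833 useconds per byte, or roughly 1200 bytes per second - eight times the 150 bytes per second that the usage text quotes for 1200 baud, as expected.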

Elijah

------

/*
 * Inspired by, but a nearly complete rewrite of:
 *    slowcat.c - slow down the display of a file
 *    copyright (c) 2001,2002,2007 dave w capella  All Rights Reserved
 * found here:
 *    formatting link
 * in August 2020.
 *
 * Original license (unchanged):
 *    distributed under the terms of the GNU Public license
 *    There is NO WARRANTY, and NO SUPPORT WHATSOEVER.
 *
 * Original build / install unchanged:
 *    building: make slowcat && mv slowcat $HOME/bin
 *    (assuming that you have a personal bin directory)
 *
 * Usage vastly changed. "slowcat -h" will show usage now.
 */

/* The #include targets were eaten by HTML mangling in the archived post;
 * these four headers are what the code below actually needs. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <getopt.h>

int debug = 0;
char version[] = "2020 Aug 20: first rewrite by Eli the Bearded";

void show_version(char *name)
{
    printf("%s version %s\n", name, version);
}

void usage(FILE *where, char *name)
{
    fprintf(where, "usage: %s\n", name);
    fprintf(where, "\t slowcat -b BAUDRATE [ filename ... ]\n");
    fprintf(where, "\t slowcat -d USECONDS [ filename ... ]\n");

    fprintf(where, "\nAdditional options:\n");
    fprintf(where, "\t -v    be more verbose\n");
    fprintf(where, "\t -V    show version\n");
    fprintf(where, "\t -h    show this help\n");

    fprintf(where, "\nRuns something like basic cat(1), but slows the\n");
    fprintf(where, "output down to emulate a BAUDRATE connection (of\n");
    fprintf(where, "75 to 128000) or with an explicit delay between\n");
    fprintf(where, "bytes output of 1 to 20000 useconds. Recall that\n");
    fprintf(where, "baud is a rate of bits per second, so 1200 baud\n");
    fprintf(where, "is one bit every 833.3 useconds, one byte every\n");
    fprintf(where, "6666.4 useconds, and 150 bytes per second.\n");
}

int main(int argc, char **argv)
{
    int c, option;
    useconds_t usecs = 100;
    long baud;
    FILE *fp;
    char *fnam;

    while ( (option = getopt(argc, argv, "b:d:vVh")) != -1 ) {
        switch (option) {
            case 'v': debug = 1;
                      break;

            case 'b': baud = strtoul(optarg, NULL, 10);
                      /* 75 is the lowest generally accepted baud
                       * rate, but even the Bell 101 modem did 110.
                       * At the other end, 128k baud is ISDN.
                       * (The comparisons here were eaten by HTML
                       * mangling; a clamp to the documented range
                       * is the evident intent.)
                       */
                      if (baud < 75L)     { baud = 75; }
                      if (baud > 128000L) { baud = 128000; }
                      usecs = 8000000 / baud;
                      break;

            case 'd': usecs = strtoul(optarg, NULL, 10);
                      /* Same HTML mangling here; out-of-range delays
                       * fall back to the 100 usecond default. */
                      if (usecs < 1 || usecs > 20000) { usecs = 100; }
                      break;

            case '?': fprintf(stderr, "Unrecognized option: %c\n", optopt);
                      usage(stderr, argv[0]);
                      exit(1);

            case 'V': show_version(argv[0]);
                      exit(0);
                      break;

            case 'h': usage(stdout, argv[0]);
                      exit(0);
                      break;
        } /* switch */
    } /* while */

    /* Unbuffered stdout, so each byte goes out as it is written. */
    setbuf(stdout, NULL);

    if ( optind != argc ) {
        fnam = argv[optind++];
        fp = fopen(fnam, "r");
        if (fp == NULL) {
            fprintf(stderr, "Filename %s errored out\n", fnam);
            exit(2);
        }
    } else {
        fp = stdin;
    }

    do {
        setbuf(fp, NULL);

        while ( (c = fgetc(fp)) != EOF ) {
            putchar(c);
            usleep(usecs);
        }

        fclose(fp);
        fnam = argv[optind++];

        /* The archived post is truncated at this point; the rest of
         * the loop is a minimal reconstruction of the evident intent:
         * open each remaining file argument in turn. */
        if ( optind <= argc && fnam != NULL ) {
            fp = fopen(fnam, "r");
            if (fp == NULL) {
                fprintf(stderr, "Filename %s errored out\n", fnam);
                exit(2);
            }
        }
    } while ( optind <= argc );

    exit(0);
}

Reply to
Eli the Bearded

Great work. For a user who's trying to be difficult there might be an unexpected unsigned-to-signed conversion: an argument >LONG_MAX will trigger the
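A minimal demonstration of the hazard (my own sketch, standard C only):

/* strtoul() returns unsigned long; storing the result in a signed long
 * can flip the sign for huge inputs. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Out-of-range input makes strtoul() return ULONG_MAX (and set
     * errno to ERANGE), which converts to -1 in a signed long on
     * typical two's-complement systems. */
    long baud = strtoul("99999999999999999999", NULL, 10);

    printf("baud = %ld\n", baud);   /* prints -1 on such systems */
    return 0;
}

Checking errno, or comparing against LONG_MAX before assigning, would catch it.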

Reply to
A. Dumas
