Larkin, Power BASIC cannot be THAT good:

Whoosh, I heard that blow by me. Never mind, I can use search tools.

Reply to
JosephKK

LISP is derived from the lambda calculus and was at the core of most of the early symbolic algebra packages like REDUCE. In LISP everything, code and data alike, is built from linked lists - and it can be compiled to fast native code. It is exceptionally good for certain kinds of symbolic programming.
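In C terms, the basic building block is roughly this (just a sketch - a real Lisp tags each cell so it can hold any type, not merely ints):

#include <stdio.h>
#include <stdlib.h>

/* A cons cell: the head value (car) plus a pointer to the rest (cdr). */
struct cons {
    int car;
    struct cons *cdr;
};

static struct cons *cons(int car, struct cons *cdr)
{
    struct cons *c = malloc(sizeof *c);
    c->car = car;
    c->cdr = cdr;
    return c;
}

int main(void)
{
    /* The list (1 2 3), built as (cons 1 (cons 2 (cons 3 nil))). */
    struct cons *list = cons(1, cons(2, cons(3, NULL)));
    for (struct cons *p = list; p != NULL; p = p->cdr)
        printf("%d ", p->car);
    putchar('\n');
    return 0;
}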

Wikipedia has a short piece on the mathematics which looks superficially OK:

formatting link

Unlambda is about as minimalist and cryptic as a Turing complete language can get:

formatting link

Regards, Martin Brown

Reply to
Martin Brown

A typical spurious Larkin just-so story. Arguing from ignorance.

Branch prediction and speculative execution on these CPUs are so good that they mispredict only on the final test before loop exit.

I did. Theory and practice agree just fine here. The differences between CPUs, though, even within the Core 2 family, are startling, even on the relatively small sample I have immediately to hand.

There is absolutely no measurable difference between a count-up and a count-down assembler loop on a Core 2 Quad, Core 2 Duo or P4. There is enough slack on the faster CPUs to hide three immediate-operand instructions without altering the loop timing at all (just detectable on a 3 GHz P4).
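In C terms the two loop shapes under test look roughly like this (a sketch, not the actual benchmark code):

#include <stddef.h>

/* Count-up sum: the shape a modern compiler emits, indexing the
   array forwards (mov eax,[edi+4*ecx] style). */
long sum_up(const int *a, size_t n)
{
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Count-down sum: decrement-and-branch, the shape the LOOP
   instruction encourages. The branch predictor nails both loops
   except for the single mispredict at exit, so any timing
   difference comes from the memory system, not the direction. */
long sum_down(const int *a, size_t n)
{
    long s = 0;
    while (n--)
        s += a[n];
    return s;
}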

Fastest on the Core 2 Q6600 was the modern array-style indexing (mov eax, [edi+4*ecx] etc.) at 2.159 +/- 0.008 s. This is typical of modern compiler output for x86.

Runner-up was the cache-aware algorithm at 2.190 +/- 0.007 s.
The obvious simple pointer-based loop gave 2.233 +/- 0.007 s.
Cutting out word-aligned 16-bit reads gave 2.215 +/- 0.008 s.
Loop unrolling slowed it down to 2.250 s.
SIMD was slowest of all at 2.288 s (very disappointing).

The big surprise was the portable Mobile Core 2 Duo T5200 @ 1.6 GHz:
SIMD was fastest at 2.356 +/- 0.020 s.
Array-style indexing was next at 2.420 +/- 0.010 s.
Pointer loop using addition: 2.450 +/- 0.012 s.
Pointer loop using LOOP: 3.225 +/- 0.026 s !!!!

*Big* surprise: using the LOOP instruction on this older Core 2 CPU slowed things down by a massive 0.8 s. I repeated the test several times, as I didn't believe it at first, but it is a solid result.

On older and weaker CPUs the SIMD and cache aware code did better.

You have no idea what you are talking about. It is some coincidental change in the generated code (possibly use of the LOOP instruction to count down).

If you knew how to examine the generated code, you would have done so by now. But instead you keep harping on about the magical properties of PowerBasic.

I grant you its code generator must be fairly good - but the timing pattern on the latest CPUs is extremely flat. In other words, any halfway-reasonable loop construct executes much faster than the memory subsystem can supply the data to work on. The evidence suggests that it is the write-back to main memory that is causing all the problems.

The generated code is all that matters; going up or down memory makes no difference at all. There is more than enough slack in the loop timing, which is utterly dominated by memory access delays.

Regards, Martin Brown

Reply to
Martin Brown

Not utterly. Unless you use some sort of cache prodding to keep the memory bus 100% busy, memory transfers will stutter a bit as the add loop works on blocks of cache, and loop speed will affect overall timing somewhat. It would be fun to scope the DRAM chip selects to see the exact pattern, but I have other priorities.

I'm running a 1.87 GHz dual-core Xeon (an HP ProLiant "server" with 2 GB of RAM), XP Pro, PowerBasic v4. I get 0.225 s average summing time counting up, 0.207 s counting down.

"A typical spurious Larkin just so story. Arguing from ignorance."

Ignorance? I ran two versions of a program and measured the execution times. Does that violate the protocols of computer science?

John

Reply to
John Larkin

Can't check it, gasping for air. Oh well, I kinda knew that already. Lambda calculus is on my extensive "to do" list.

Reply to
JosephKK

Thank you for taking the time to do the tests that prove what has already been discussed.

Reply to
JosephKK

??? There is int argc, char *argv[], which is considerably more standardisation than the Windows XP or Vista command line has.
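For example, a minimal C program just echoes the pre-tokenized arguments the runtime hands it:

#include <stdio.h>

/* argc is the argument count; argv[0] is the program name and
   argv[1]..argv[argc-1] are the arguments, already split for you. */
int main(int argc, char *argv[])
{
    for (int i = 0; i < argc; i++)
        printf("argv[%d] = \"%s\"\n", i, argv[i]);
    return 0;
}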

It seems like an interesting concept.

Reply to
Jasen Betts

And that's not counting the GUI, or the non win/dos versions.

Reply to
Jasen Betts

In MS-DOS, it's even easier: you get one string up to 127 characters long, and it's up to you to play with it. That usually means tokenization first (which is just what the C runtime does when it cooks up your args).
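A rough sketch of that first pass in C (quoting ignored for brevity; real CRT startup code handles quotes as well):

#include <ctype.h>

/* Split a raw command tail into tokens in place: each run of
   whitespace is overwritten with NULs and each token start is
   recorded in argv. Returns the token count (at most max). */
static int tokenize(char *tail, char *argv[], int max)
{
    int argc = 0;
    while (*tail && argc < max) {
        while (isspace((unsigned char)*tail))
            *tail++ = '\0';          /* terminate the previous token */
        if (*tail)
            argv[argc++] = tail;     /* a new token starts here */
        while (*tail && !isspace((unsigned char)*tail))
            tail++;                  /* skip to the end of the token */
    }
    return argc;
}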

How it's programmed doesn't matter. He's probably referring to switches and order (switches before arguments?).

Tim

--
Deep Friar: a very philosophical monk.
Website: http://webpages.charter.net/dawill/tmoranwms
Reply to
Tim Williams

This can be a nuisance, as the program's documentation tends to document its behaviour in terms of distinct arguments, without documenting how the string is parsed into arguments. This can be an issue if one of the arguments is an arbitrary string which may contain spaces, quotes etc. Nowadays, you can usually rely upon it using the MSVCRT parser, but you occasionally run into exceptions.

It also doesn't help that _spawnvp() et al concatenate their arguments without quoting, so the program doesn't necessarily get the exact same list of arguments which the caller provided.
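A sketch of the failure mode ("child.exe" here is a stand-in for any program that prints its argv):

#include <process.h>
#include <stdio.h>

int main(void)
{
    /* The second argument contains a space. _spawnvp() joins the
       list with spaces but adds no quotes, so the child sees "two"
       and "words" as two separate arguments, not one. */
    const char *args[] = { "child.exe", "two words", NULL };
    if (_spawnvp(_P_WAIT, "child.exe", args) == -1)
        perror("_spawnvp");
    return 0;
}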

Reply to
Nobody

In those cases, either the string would have to go at the end (how else can you tell it's an arbitrary string?), or it would have to be encapsulated somehow (such as quotes).

Last time I wrote a command-line parser, I happened to be writing in assembly. There, it grabbed a byte and checked whether it was a token, a delimiter or a switch (i.e., "/"). If it was a switch, it scanned for the WORD "?/" etc. (little-endian byte order makes the two bytes of "/?" load as "?/" - odd at a glance, and no use for switches longer than two characters, but interesting in its own way). After each switch was found, a flag was set indicating it had been seen and should not be checked for again. In that particular program, switches could go anywhere in the command line, before or after the argument. The first token found that wasn't a switch was copied as the argument (a path, in this case); any subsequent arguments were ignored.
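The same trick, sketched in C (this assumes a little-endian machine, as x86 is):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* On little-endian x86 the two bytes of "/?" load as one 16-bit
   word with the bytes "swapped", so the constant to compare
   against is ('?' << 8) | '/'. */
static int is_help_switch(const char *p)
{
    uint16_t w;
    memcpy(&w, p, sizeof w);    /* read two bytes as one word */
    return w == (uint16_t)(('?' << 8) | '/');
}

int main(void)
{
    printf("%d\n", is_help_switch("/?"));   /* prints 1 */
    return 0;
}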

Meanwhile, in QBasic, I've done plenty of command lines, but since COMMAND$ is so tempting, and because QB sadly doesn't provide any tokenizing functions, I usually just go lame and do something like OPEN COMMAND$ FOR INPUT AS #1, no switches at all. OTOH, in C, you get all parameters tokenized already, so it's quite easy to look at them in order. That's kind of nice. (Hmm, if QB had had the foresight, it could have been COMMAND$(n) instead!)

Tim

--
Deep Friar: a very philosophical monk.
Website: http://webpages.charter.net/dawill/tmoranwms
Reply to
Tim Williams

PowerBasic (the console compiler version) has a couple of nice built-in PARSE commands.

John

Reply to
John Larkin

Sheesh. It's a phone call for you. The '70s want their interpreted-language technology back.

Reply to
AZ Nomad

I recommend that you take that up with Kemeny and Kurtz, who defined and were the main developers of the old Dartmouth BASIC. True BASIC was their implementation for the PC. The history may interest you, or not.


Reply to
JosephKK

Hah. Interpreters rule. Unbeaten for debugging, especially for the range of constructs you can reconstruct on the fly.

Actually, is PowerBasic interpreted, or is it only compiled? FreeBASIC is compile-only.

Tim

--
Deep Friar: a very philosophical monk.
Website: http://webpages.charter.net/dawill/tmoranwms
Reply to
Tim Williams

PB is an excellent compiler. It's better than a lot of C compilers for runtime speed, and debugging is at least as good as in any interpreted Basic I know of.

John

Reply to
John Larkin

Yes, but there's been some progress made in the last forty years. BASIC is a piece of shit and utterly unusable unless stacked to the gills with proprietary extensions. Use any of these proprietary extensions, and the code is no longer portable to any other compiler.

I use Python when I want to use an interpreted language.

Reply to
AZ Nomad

And what's wrong with "proprietary extensions" if they get work done?

"Use any of these proprietary extensions, and the code is no longer portable to any other compiler."

And why would it need to be?

PowerBasic lets non-programmers get engineering calculations done quickly and easily. What's wrong with that?

John

Reply to
John Larkin

Okay language, lots of features, large user base. Its main drawback is that it's slower than a heavily sedated snail. Proprietary extensions aren't really an issue when there's only ever likely to be one implementation.

Other than having a larger user base, it doesn't really seem to have any advantages over Lisp.

For quick computation and data-processing tasks, I'd pick Haskell, but that's only an option if you can operate outside of the imperative paradigm. That's likely to be an issue for EEs, as embedded programming tends to be heavily state-oriented.

Reply to
Nobody

That's fine if you never reuse code. Just wonderful if you write everything from scratch and never use third-party code libraries.

I've got better things to do than write a doubly linked list package for the hundredth time. I'd rather have a language where I can use libraries to do the work and use code examples from others.

Reply to
AZ Nomad
