Larkin, Power BASIC cannot be THAT good:


Not only did I not write Excel, I never use it. Spreadsheets are idiotic toys.

How about this:

formatting link

It does all the internals - lookup, DDS, interpolation, modulations, summing, user-programmable microengines - at 128 MHz on 8 channels, the equivalent of roughly 40G saturating math operations per second, on a cheap Spartan FPGA. Well, we did epoxy a heatsink on top. Of course, engineers can parallelize and pipeline, but programmers can't.

Why are you confusing the simple terms "linear search" and "sort"? How very strange.

And what difference would it make, anyhow, whether an algorithm scales or not if it doesn't need to scale?

John

Reply to
John Larkin

But is the absolute difference in run-time between the algorithms more than the difference in programming time?

If you only run a program once, 30 seconds coding plus 1 minute run-time is an improvement over 5 minutes coding plus 1 second run-time.
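That trade-off is just arithmetic. As a minimal sketch (using the numbers from the example above; the function name is invented for illustration):

```python
# Break-even point between a quick-and-dirty program and an optimized one:
# the extra coding time is paid once, the run-time saving is earned per run.

def breakeven_runs(extra_coding_s, slow_run_s, fast_run_s):
    """Number of runs after which the optimized version pays off."""
    saving_per_run = slow_run_s - fast_run_s
    return extra_coding_s / saving_per_run

# 30 s coding + 60 s/run  versus  5 min (300 s) coding + 1 s/run:
runs = breakeven_runs(extra_coding_s=300 - 30, slow_run_s=60, fast_run_s=1)
print(runs)  # ~4.6 -- below five runs, the quick hack wins
```

Below the break-even count the quick version is the rational choice; above it, the optimized one.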

Even if a program gets used regularly, computers are cheaper than people. For bespoke software, buying a faster computer is often cheaper than paying more to make the software go faster.

Reply to
Nobody

It makes PIC assembler look intuitive. PostScript was designed to minimise the burden on the interpreter, not the programmer.

Reply to
Nobody

Hence the use of floating-point math for everything?

Reply to
Spehro Pefhany

I wasn't suggesting using Haskell for embedded programming, but for "support" tasks, e.g. crunching data.

You can use it for imperative programming. There are some cases where it would be better than an imperative language, as you get to define the evaluation semantics.

Reply to
Nobody

Yeah, but even 16kB of flash ROM and an equal amount of RAM can get quite a lot done. It seems to me that it's the highly unequal ratio we now see between flash ROM and RAM that makes many high-level languages difficult to implement.

Well, if you're doing something that really needs speed or tight timing, I agree, assembly is the way to go (at least if you don't have any programmable logic around to make things "really hard" :-) ) -- and usually it's not that hard to mix and match assembly with an HLL such as C or BASIC.

At times it just seems like we've regressed a bit in terms of how much we can accomplish on a given piece of hardware, even with a HLL... take a look at some of the old "handheld computers" from the 1980s, e.g.,

formatting link
(32kB RAM, runs interpreted C or assembly) or even
formatting link
(16kB RAM, runs BASIC). Or even the "early advanced" HP calculators, like the HP 28s --

128kB ROM, 32kB RAM -- incredibly powerful for its day.

---Joel

Reply to
Joel Koltner

Depends whether the programmer and the user are the same person or not (from the user's POV). Depends whether there is one user or many. Depends whether the users are customers or not.

In the latter case, for example, you have to count the wasted time, the increased computing resources, or whatever, for all your customers. Compare that with your increased production cost, plus some profit, shared among all your customers, and you get something more meaningful -- and probably not the same conclusions.

M$ is pretty good at wasting the time and resources of their customers. The point is that their sloppiness (their customers' time) costs them almost nothing. Except some bad reputation in the long term...

--
Thanks,
Fred.
Reply to
Fred Bartoli

FP math is pretty much essential for (real) graphics.

Integer-based graphics APIs have (or had) a place in low-resolution video displays where you can see individual pixels, and where the graphics are dominated by unscaled bitmaps and (mostly orthogonal) straight lines.

Integer coordinates don't make sense if you're operating at 300+ DPI, in physical dimensions (mm or points rather than pixels), with scaled images, thick (>1 pixel) lines, Bezier curves, arbitrary transformations, etc.
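To make that concrete, here is a minimal sketch (the function and constant names are invented) of mapping device-independent millimetre coordinates through a scale-and-rotate transform onto a 300 DPI raster -- the results are fractional almost everywhere, so an integer coordinate API would throw away sub-pixel accuracy:

```python
# Sketch: device-independent coordinates (mm) mapped to a 300 DPI device.
# Rounding to integer pixels should only happen at rasterization time.

import math

DPI = 300
MM_PER_INCH = 25.4

def mm_to_device(x_mm, y_mm, scale=1.0, angle_deg=0.0):
    """Apply a scale + rotate transform, then convert mm to device pixels."""
    a = math.radians(angle_deg)
    xr = scale * (x_mm * math.cos(a) - y_mm * math.sin(a))
    yr = scale * (x_mm * math.sin(a) + y_mm * math.cos(a))
    return (xr * DPI / MM_PER_INCH, yr * DPI / MM_PER_INCH)

# A point 10 mm across and 10 mm up, scaled 1.5x and rotated 30 degrees:
print(mm_to_device(10.0, 10.0, scale=1.5, angle_deg=30.0))
# -> fractional pixel coordinates, nowhere near integers
```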

Reply to
Nobody

You aren't going to be able to run Python in that ;)

No, just the numbers really. High-level languages are designed for desktop and server systems. As RAM has increased from a few megabytes through hundreds of megabytes into gigabytes, software has followed suit. No-one is going to worry if the interpreter uses a megabyte before it even gets to printing "hello, world".

But by today's standards, BASIC and C are both low-level languages. BASIC has int, float, string, and arrays thereof, C has int, float, struct and array (strings are just arrays of 8-bit ints).

Modern HLLs have lists, tuples, dictionaries, objects (OOP), functions as data values, closures, continuations, iterators, arbitrary-precision arithmetic.
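Most of that list fits in a few lines of a modern HLL (Python here, purely as an illustration; the names are made up):

```python
# Lists, tuples, dictionaries, first-class functions, closures,
# iterators/generators, and arbitrary-precision arithmetic.

parts = {"R1": ("resistor", 4700), "C3": ("capacitor", 100)}   # dict of tuples

def make_counter(start=0):          # closure: 'n' outlives the call
    n = start
    def next_id():
        nonlocal n
        n += 1
        return n
    return next_id

new_id = make_counter()             # a function as a data value
ids = [new_id() for _ in parts]     # list built from an iterator
print(ids)                          # [1, 2]

def powers_of_two():                # generator: a lazy, unbounded iterator
    x = 1
    while True:
        yield x
        x *= 2

gen = powers_of_two()
big = [next(gen) for _ in range(200)][-1]
print(big.bit_length())             # 200 -- arbitrary-precision integers
```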

To put it in perspective: the megabyte of RAM that a modern HLL might use probably costs less than 16K did in the 1980s. If (say) $20-worth of RAM was a reasonable amount to use back then, why isn't it reasonable now?

Reply to
Nobody

It was a little OTT in the 1980s. Postscript printers had more powerful CPUs than many of the computers of the day. Not what I'd call minimizing the burden. They scaled it back with PDF (and with the limitations for Type 1 fonts).

Reply to
Spehro Pefhany

Yeah, but if I rip out all the introspection and class-based stuff, then I can.

I suppose at that point I've severely crippled the language, though. :-)

But years ago "hello, world" *didn't* take up a megabyte, even in C (and C's at a disadvantage in that most people would use printf -- which is large -- and that would tend to pull in all of stdio and stdlib, which are large... languages that at least had basic I/O built-in such as Pascal would do better, I'd think).

Here's a quick list, stolen from

formatting link
Ada - 19K
Asm - 432B
C - 8.9K
C++ - 9.5K
C# - 11K
Common Lisp - 25MB
D - 230K
F# - 11K
Fortran - 9.3K
Haskell - 358K
Java - 12K
Modula-3 - 11K
Oberon-2 - 13K
Objective-C - 8.9K
OCaml - 121K
Pascal - 107K
Scheme - 15K
Standard ML - 166K

C is doing quite well there -- so is Objective-C, for that matter.

---Joel

Reply to
Joel Koltner

Power Basic console compiler: 6.5K EXE file, no RTS. That's including a SLEEP statement so you can see it.

John

Reply to
John Larkin

That's just wonderful if one-line programs are all you can manage.

I'd be far more interested in the bytes/line for a typical 10,000 line application.

Reply to
AZ Nomad


How about for Microsoft SQL Server?

I'd looove to migrate my SQL Server code in VBA to C. Any ideas?

Michael

Reply to
mrdarrett

This may be of interest.

formatting link

Free online version of Michael Abrash's optimization strategies.

Michael

Reply to
mrdarrett


My heads-up display simulator only took 464 lines. I have no idea how much it might have taken in C or whatever.

We recently rewrote our old DOS material control system in PBCC. Here's one screen...

ftp://jjlarkin.lmi.net/MAX.jpg

It's a single program, 17901 lines (including whitespace and comments), compiles to a 416 kbyte EXE file. Compile takes 1.7 seconds.

Runtime is a lot bigger, since we read the entire parts database (about 5000 parts now) into a memory array at startup, and build/sort a few other arrays.

This uses brute-force linear string compares for user 'search' commands, as opposed to some database thing. It's done before you can let go of the key.
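The brute-force approach is easy to sketch. A minimal illustration (in Python rather than PowerBasic, with invented part records -- not John's actual code), scanning an in-memory list of 5000 parts:

```python
# Linear substring search over an in-memory parts list. With a few
# thousand records this is effectively instantaneous on any modern
# machine -- no index or database engine needed.

parts = [f"RES-{i:04d} resistor {i} ohm 1%" for i in range(5000)]

def search(term):
    """Brute-force linear scan: return every record containing the term."""
    term = term.upper()
    return [p for p in parts if term in p.upper()]

hits = search("res-0042")
print(len(hits))  # 1
```

At this scale the O(n) scan is done in well under a millisecond, which is why "whether an algorithm scales" is moot when it doesn't need to scale.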

PowerBasic does have the BLOAT command...

formatting link

John

Reply to
John Larkin

I was referring specifically to the interpreter, rather than the graphics library. The graphics library was certainly heavy-duty, but that was inevitable for generating production-quality graphics.

The performance of the LaserWriter relative to the early Macs doesn't seem so extreme when you consider that an office might have had a dozen Macs and one LaserWriter.

Also, pushing the work onto the printer allows the same PostScript data to be used for both the office printer and the Linotronic used for final production. That won't work with pre-rendered bitmaps (and generating a full-page, 2450 DPI bitmap on an early Mac was out of the question).

These are interpreter limitations, designed to make the code more like "data" and less like "code". The graphics library is essentially unchanged (e.g. PDF doesn't support alpha-blended bitmaps, even though it's straightforward to implement on a monitor).

Reply to
Nobody

I believe your typical IT guy would disallow such software on the grounds that it can't readily grow to 5,000,000 parts, so clearly you aren't "planning for the future." :-)

Reply to
Joel Koltner

Then again, John might not plan on having any IT types around. :)

--
You can't have a sense of humor, if you have no sense!
Reply to
Michael A. Terrell

In most high-level languages, basic things like efficient sorting, hash tables and searching are already available in a library somewhere. RTFM and use them rather than reinvent the wheel badly.
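In concrete terms (Python as the example HLL; the data here is invented): a tuned sort and a hash table are each a one-liner from the standard library.

```python
# Library sort and hash table instead of hand-rolled versions.
# sorted() is an adaptive mergesort (Timsort); dict is a tuned hash table.

parts = [("C3", 12), ("R1", 3), ("Q7", 45)]

by_qty = sorted(parts, key=lambda p: p[1])   # sort by quantity
print(by_qty)        # [('R1', 3), ('C3', 12), ('Q7', 45)]

stock = dict(parts)                          # hash table: O(1) average lookup
print(stock["Q7"])   # 45
```

Both are maintained, tested implementations that a from-scratch version is unlikely to beat.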

Electronics engineers are more used to this approach than software engineers. They do not automatically try to roll their own electrolytic capacitors or make transistors from scratch every time.

For a use-once, throw-away program it may be OK. But these quick hacks have a nasty tendency to end up being used again and again. By the fifth use, the second method wins out handsomely.

That depends on how often the program is used. Certain commercial programs that are widely used are a lot slower than they should be due to stupidity in the design producing an inefficient bloatware solution. It is always a risk in the time to market versus performance tradeoff.

On time, on budget, on spec - pick any two for hardware. You are lucky to get one of these on target with the average software house.

Regards, Martin Brown

Reply to
Martin Brown
