high accuracy math library

Yeah, but... division is done by repeated multiplication, and if it takes five stages for 64-bit it only takes six stages for 128-bit; each pair of multiplies doubles the precision of the result. So the division penalty is proportionally smaller for extensions to high precision.
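A minimal sketch of the idea in C, assuming the usual Newton-Raphson reciprocal iteration (double precision standing in for the hardware datapath):

#include <stdio.h>

int main(void)
{
    /* Newton-Raphson reciprocal: x' = x * (2 - d*x).  Two multiplies
       per stage, and the number of correct bits doubles each stage --
       which is why a 128-bit result needs only one stage more than a
       64-bit one. */
    double d = 3.0;
    double x = 0.3; /* rough initial guess for 1/3, e.g. from a table */
    for (int i = 1; i <= 5; i++) {
        x = x * (2.0 - d * x);
        printf("stage %d: x = %.17g (error %.3g)\n", i, x, x - 1.0 / 3.0);
    }
    return 0;
}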

The big speedup for high-precision multiplication comes from FFT techniques. Has anyone built those into hardware for long-word computing?

Reply to
whit3rd

FFT multiplication algorithms are inefficient for less than about 10,000 digits - that's a little big for a hardware solution!

Of course, it is possible to have instructions and hardware blocks that accelerate parts of the process. Many DSP processors have special instructions to improve the speed of FFTs (though that is generally for filtering rather than multiplication).
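For the curious, the principle fits in a page of C - base-10 digits, a double-precision complex FFT, and no error analysis, so purely a toy illustration rather than how a real bignum library does it:

#include <complex.h>
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Recursive radix-2 Cooley-Tukey FFT; n must be a power of two. */
static void fft(double complex *a, int n, int invert)
{
    if (n == 1)
        return;
    double complex even[n / 2], odd[n / 2];
    for (int i = 0; i < n / 2; i++) {
        even[i] = a[2 * i];
        odd[i] = a[2 * i + 1];
    }
    fft(even, n / 2, invert);
    fft(odd, n / 2, invert);
    double ang = (invert ? -2.0 : 2.0) * M_PI / n;
    for (int i = 0; i < n / 2; i++) {
        double complex w = cexp(I * ang * i);
        a[i] = even[i] + w * odd[i];
        a[i + n / 2] = even[i] - w * odd[i];
    }
}

int main(void)
{
    /* 6789 * 2345, digits stored least-significant first. */
    int x[4] = { 9, 8, 7, 6 }, y[4] = { 5, 4, 3, 2 };
    enum { N = 16 }; /* power of two >= number of product digits */
    double complex fa[N] = { 0 }, fb[N] = { 0 };
    for (int i = 0; i < 4; i++) {
        fa[i] = x[i];
        fb[i] = y[i];
    }
    fft(fa, N, 0);
    fft(fb, N, 0);
    for (int i = 0; i < N; i++)
        fa[i] *= fb[i]; /* convolution becomes a pointwise product */
    fft(fa, N, 1);
    long long digit[N], carry = 0;
    for (int i = 0; i < N; i++) {
        long long v = llround(creal(fa[i]) / N) + carry;
        digit[i] = v % 10;
        carry = v / 10;
    }
    int top = N;
    while (top > 1 && digit[top - 1] == 0)
        top--;
    printf("6789 * 2345 = ");
    for (int i = top - 1; i >= 0; i--)
        printf("%lld", digit[i]);
    printf("\n"); /* prints 15920205 */
    return 0;
}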

Reply to
David Brown

I don't have any problem with Basic. I sometimes use the VBA dialect inside Excel to automate jobs I can't otherwise be bothered doing.

Ripping content authored by someone else, as output by MS Office's "save for web", into a compact shape fit to go on a web page, for instance!

There is something wrong with how they are doing it, then! One-dimensional arrays should give roughly equivalent performance in any compiled language. C can do 1-D array indexing the same way as Basic.

C becomes messy when there are 2-D or higher-dimensional arrays, since that is implemented as a pointer to a pointer, which does not cache well. If they were using that array construct they would be at a disadvantage, but it should not be by a factor of 4. Nothing like.

GCC is a lukewarm optimiser too - its main claim to fame is being free. They should evaluate the latest MSC 2022 compiler if speed is important (free to download, but you have to license it for business use).

Almost all serious large-scale computing in C flattens the huge arrays to one dimension, using clumsy macros to do the indexing (or uses another language like Fortran, which supports N-dimensional arrays well).
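For readers who haven't seen the idiom, it looks something like this (NX, NY and IDX are illustrative names, not from any particular codebase):

#include <stdlib.h>

#define NX 1000
#define NY 1000
#define IDX(i, j) ((size_t)(i) * NY + (j))

int main(void)
{
    double *grid = malloc((size_t)NX * NY * sizeof *grid);
    if (!grid)
        return 1;
    for (int i = 0; i < NX; i++)
        for (int j = 0; j < NY; j++)
            grid[IDX(i, j)] = 0.0; /* one contiguous, cache-friendly block */
    free(grid);
    return 0;
}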

Whatever turns you on...

Reply to
Martin Brown

On a sunny day (Mon, 17 Jan 2022 13:26:09 +0000) it happened Martin Brown <'''newspam'''@nonad.co.uk> wrote in <ss3qph$pao$ snipped-for-privacy@gioia.aioe.org>:

The big plus of GCC is the portability: for example, all that x86 stuff I wrote compiles and runs on ARM, a RISC machine (Raspberry Pi 4), and sometimes even better. Only problem is some libraries broken by 'maintainers'; I had to rewrite libforms. As to multidimensional arrays... I use structures and linked lists, yes, all pointers, but as fast as it goes. libc is nice, and leaves little to be wished for.

What macros?

Python is something that I hope will just disappear; it seems to have been written by people who did not want to understand C and the hardware. And Cplusplus is a crime against humanity.

I designed a vector processing card for the IBM PC back in the eighties. That was fast. These days people use GPUs for bitcoin mining... I do not think it is energy efficient... so no bitcoins here, but there, computing power versus power consumption is what counts.

I looked up FFT multiplication after reading about it here... never used that. Makes sense somehow.

Reply to
Jan Panteltje

Back in the very long ago, when I was a grad student, I used to really like HP's Rocky Mountain Basic. I had a 9816 with some huge 20 MB hard drive and a bunch of hardware I/O to run my laser microscope.

I managed to cadge an Infotek math coprocessor board plus their RMB-compatible compiler, which really helped the speed.

The great thing about RMB was that it made instrument control a breeze--it was written by the outfit that made the instruments, which helps. ;)

Haven't used BASIC since about 1987, even though I have a few instruments that run RMB and can be used as controllers for other boxes (notably the HP 35665A).

Cheers

Phil Hobbs

Reply to
Phil Hobbs

Indeed.

When someone uses pointers for C arrays, it's an indication that they are being smart-arse rather than smart - they are trying to micro-optimise the source code instead of writing it clearly and letting the compiler do the job. Without other information or seeing the code, it is of course impossible to tell - but there is certainly no simple explanation why the same array code could not be written in the same way in C and get at least as good performance.

No, two dimensional arrays in C are /not/ pointer-to-pointer accesses.

If you have:

int xs[10][100];

Then accessing "xs[a][b]" is just like accessing element "100 * a + b" of a one-dimensional array. There are no extra levels of indirection, and xs is /not/ an array of pointers.

There is no disadvantage. It would be absurd if C were designed in such a way that access to two-dimensional arrays had a pointless layer of extra pointers to slow it down and waste memory. (Poor quality compilers might generate poor quality object code from multi-dimensional array access, but that's a matter of the compiler optimisation. It used to make sense to use manual pointer arithmetic instead of array expressions, in the old days when compilers were weak.)
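A trivial test program (not from the post) makes the layout visible:

#include <stdio.h>

int xs[10][100];

int main(void)
{
    int a = 3, b = 42;
    int *flat = &xs[0][0]; /* the whole array is one contiguous block */
    /* Same element, addressed two ways: xs[a][b] lives at flat
       offset 100*a + b.  No hidden pointer table anywhere. */
    printf("same address: %d\n", &xs[a][b] == &flat[100 * a + b]);
    return 0;
}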

gcc is an excellent optimiser. MSVC is not bad either (for C++ - it's a shitty C compiler), and the same goes for clang and Intel icc. Each will do better on some examples, worse on others. And each requires extra effort, careful flag selection, and compiler-specific features if you want to squeeze the last few drops of performance out of the binaries. (This can make a big difference if you can vectorise the code with SIMD instructions.)
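For example, a kernel shaped like this is what auto-vectorisers like (illustrative function; the flags in the comment are standard GCC options):

/* With GCC, try: gcc -O3 -march=native -fopt-info-vec file.c
   to see whether the loop gets turned into SIMD code.  The
   'restrict' qualifiers rule out aliasing, which is often what
   unlocks vectorisation. */
void saxpy(float *restrict y, const float *restrict x, float a, int n)
{
    for (int i = 0; i < n; i++)
        y[i] += a * x[i];
}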

Nonsense. That was the case 20 years ago, but not now.

(They might do horrible things to their code to make them work well with SIMD, as automatic vectorisation is still more of an art than a science.)

The cool stuff in PB is in libraries, not native to the language - but they might be libraries that are always included by the tool, so that you don't need any kind of "import" statement.

Certainly it is insanity to use C when you want networking, emails, and the like. If you are doing something big enough that you want the speed and efficiency of C, use C++, Go, Rust, D, or anything else that will do a better job of letting you write such code easily and safely. If not, use Python as it makes such code vastly simpler and faster to write, and has more libraries for that kind of thing than any other language.

Basic was probably a solid choice for such things a couple of decades ago. And of course if you are used to it, and it is still good enough, then that's fine - good enough is good enough.

Reply to
David Brown

On a sunny day (Mon, 17 Jan 2022 18:07:14 +0100) it happened David Brown snipped-for-privacy@hesbynett.no wrote in <ss47o2$ohn$ snipped-for-privacy@dont-email.me:

That is not correct; read the libc info documentation, it should be on your Linux system, else it is here:

formatting link
I use networking in C all the time - wrote several servers, an irc client, an email client, this newsreader, so much, no problem. If it [networking] is TOO difficult for you, then call 'netcat' from C (or from your script, or from anything else). In Linux, that is. But really, writing a server in C is only a few lines.
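For what it's worth, a minimal TCP echo server with BSD sockets really is short (a sketch with essentially no error handling; the port number is arbitrary):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(7777); /* arbitrary example port */
    if (bind(s, (struct sockaddr *)&addr, sizeof addr) < 0)
        return 1;
    listen(s, 5);
    for (;;) {
        int c = accept(s, NULL, NULL);
        char buf[512];
        ssize_t n;
        while ((n = read(c, buf, sizeof buf)) > 0)
            write(c, buf, n); /* echo the bytes straight back */
        close(c);
    }
}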

C++ is a crime against humanity.

> Go, Rust, D, or anything else that will do

The Sinclair ZX80 / ZX81 BASIC was very, very good. Asm is even more fun.

Reply to
Jan Panteltje

If you don't like the poison gas, lay off the pickled herring. ;)

Cheers

Phil Hobbs

Reply to
Phil Hobbs

PowerBasic is wonderful. A serious compiler with a great UI and all sorts of intrinsic goodies. It also allows inline asm, with variable names common to Basic and asm. That is handy now and then.

Reply to
jlarkin

Yes.

If possible. Sometimes I use a coarse-fine two-step algorithm.

Nor have I, although for such things as time, integers are preferred because the precision does not vary with magnitude.

Depending on the hardware, a pair of 64-bit integers can be scaled such that one holds the integer part and the other the fractional part, with the radix point sitting between the two. Depending on the hardware and compiler, this may be faster than floating point.
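A minimal sketch of that layout in C (the type and names here are illustrative, not a real library):

#include <stdint.h>
#include <stdio.h>

/* A 64.64 fixed-point number: 'hi' holds the integer part, 'lo' the
   fraction in units of 2^-64. */
typedef struct {
    int64_t hi;
    uint64_t lo;
} fix64_64;

/* Addition with carry propagation from fraction into integer part. */
static fix64_64 fix_add(fix64_64 a, fix64_64 b)
{
    fix64_64 r;
    r.lo = a.lo + b.lo;
    r.hi = a.hi + b.hi + (r.lo < a.lo); /* unsigned wraparound = carry out */
    return r;
}

int main(void)
{
    fix64_64 half = { 0, UINT64_C(1) << 63 }; /* 0.5 */
    fix64_64 one = fix_add(half, half);       /* 0.5 + 0.5 = 1.0 */
    printf("hi = %lld, lo = %llu\n",
           (long long)one.hi, (unsigned long long)one.lo);
    return 0;
}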

It's usually possible to reformulate to multiply by the reciprocal.
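For instance, a divide hoisted out of a loop (trivial illustration; the reciprocal multiply can differ from true division in the last bit):

/* One divide up front, then only multiplies inside the loop. */
void scale_all(double *v, int n, double divisor)
{
    double recip = 1.0 / divisor;
    for (int i = 0; i < n; i++)
        v[i] *= recip;
}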

Yes, that example comes up in practice a good bit.

Later in this thread you did mention Padé approximations, which turned out to be good enough for simulating planetary systems with chaos.

In ray-tracing problems, the big hitters are sine and cosine, and some tangent. I have not needed to determine if the current approximations are good enough, but I'm suspicious, given that a slight displacement of the intersection point on a curved optical surface will deviate the ray ever so slightly, most likely changing the ray segment length ever so slightly, ...

I'm pretty sure that it was an actual application. Probably been replaced by now.

Yes, that would be the classic test, needed only once in a while.

Joe Gwinn

Reply to
Joe Gwinn

Right. Of course manufacturing errors in a real lens will limit the accuracy of a given ray trace faster than the numerical limitations.

This is a slower version of the classical pool table problem. Go ahead and rack up a nice set of perfectly elastic balls on a lossless table with perfectly elastic cushions, in a vacuum, at zero kelvin. (*) Then pick up your perfectly elastic cue, put some very high-friction chalk on it, and break.

Your eye-hand coordination is perfect, of course, but nevertheless the cue ball's position and momentum are slightly uncertain due to the Heisenberg inequality, $\delta P \,\delta S \ge \hbar/2$. This is a very small number, so your break appears perfect. Two balls go straight into pockets, and the rest keep rattling round the table.

Any uncertainty in the momentum of a given ball causes an aiming uncertainty that builds up linearly with distance. That makes the point of collision with the next ball slightly uncertain, causing an angular error, which builds up linearly with distance till the next collision.... The result is an exponential error amplifier.

In the absence of loss, the Heisenberg uncertainty of the motion of the cue ball gets multiplied exponentially with time, until after 30 seconds or so it becomes larger than the ball's diameter--in other words, past that point it's impossible even in principle to predict from the initial conditions which balls will hit each other.
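To put rough numbers on that exponential amplifier (illustrative figures, not from the original post): a flight of length $\ell$ turns an angular error $\delta\theta$ into a position error $\ell\,\delta\theta$ at the next impact, and the curved collision geometry converts that back into an angular error of roughly $(\ell/R)\,\delta\theta$ for balls of radius $R$. After $n$ collisions,

$$\delta\theta_n \approx \delta\theta_0 \left(\frac{\ell}{R}\right)^{n},$$

so with $\ell/R \sim 10$ each collision costs about one decimal digit of predictability, and a Heisenberg-scale starting uncertainty of order $10^{-30}$ rad reaches order unity after only a few tens of collisions.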

(At that point you start simulating the pool table as though it were a globular star cluster instead of a planetary system.) ;)

Cheers

Phil Hobbs

(*) Yes, like the spherical cow emitting milk isotropically at a constant rate...

Reply to
Phil Hobbs

On a sunny day (Mon, 17 Jan 2022 11:58:00 -0800) it happened snipped-for-privacy@highlandsniptechnology.com wrote in snipped-for-privacy@4ax.com:

I once wrote this:

formatting link

It is sort of a way to allow inline asm in MCS BASIC, for the 8052 micro I think it was... also from the eighties.

formatting link

Reply to
Jan Panteltje

On a sunny day (Mon, 17 Jan 2022 13:43:31 -0500) it happened Phil Hobbs snipped-for-privacy@electrooptical.net wrote in snipped-for-privacy@electrooptical.net:

Good protective clothing is essential,

formatting link
Guys were here yesterday replacing an asbestos roof. Same for Cpushplush: dispose of it with care. I like herring, the smell does not bother me; when I was very young my father could not get me past a herring stand (many in Amsterdam) without me having one.

These days those may be contaminated with anything from plutonium to plastics though.

Reply to
Jan Panteltje

Yes, although I'd venture that the LIGO conspiracy will have rather better optics than is common.

Hmm. Slower?

I'm not sure that it would be a good idea to witness billiard balls traveling at the speed of light colliding with anything solid. A few light years standoff might be safe.

Well, 50 light years would be safer, as we'd all be dead before the radiation pulse arrived at Earth.

Yes, that is the bounding case for sure. My hand isn't quite that steady, probably for lack of sufficient practice.

Yes.

Well, I've known babies like that.

Joe Gwinn

Reply to
Joe Gwinn

Funnily enough the same is true of astrodynamics.

It is a bad idea to use cosine directly since:

cos(x) = 1 - 2*sin^2(x/2).

Quite often physics problems have terms in "cos(x)-1" or nearly so, and the half-angle form avoids the catastrophic cancellation that the direct subtraction suffers for small x.

The terms in "sin(x)-x" are much more problematic, and for |x| < 0.25 or so pretty much have to be computed by a Padé approximation (or a much older method of summing a fairly well convergent polynomial series).
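A quick illustration of the cos(x)-1 point in C (the printed magnitudes are what IEEE doubles actually give):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = 1e-8;
    /* Naive form: cos(x) rounds to exactly 1.0, so the subtraction
       cancels catastrophically and returns 0. */
    double naive = cos(x) - 1.0;
    /* Half-angle form cos(x) - 1 = -2*sin^2(x/2): no cancellation,
       full relative accuracy. */
    double s = sin(x / 2.0);
    double good = -2.0 * s * s;
    printf("naive: %.17g\n", naive); /* 0 */
    printf("good:  %.17g\n", good);  /* about -5e-17 */
    return 0;
}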

One of the other tricks I have been working on is moving everything into expressions in tan(x/2), which eliminates some independent ripple errors on the lsb of the sin and cos expansions. Only really worthwhile on platforms that don't compute sincos as a matched pair.

I expect the codebase survives somewhere in an archive. Big scientific kit often gets revamped at some point. The sheer mechanical ingenuity of the mirror supports in that thing is amazing at holding it still. Remarkable the number of black hole mergers that it and the other gravitational wave detectors have seen.

It is what got me into using the GCC compiler for certain tests. Normally I stick with the MS VC/C++ environment, but its lack of 80-bit real support is annoying. I might get around to writing a C++ stub to implement the most useful parts one day. But for now GCC and Salford's Fortran will do everything I need for extended precision.
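The difference is easy to see (a trivial check: GCC on x86 reports 64 mantissa bits for long double, while MSVC reports 53 because it maps long double to plain double):

#include <float.h>
#include <stdio.h>

int main(void)
{
    printf("long double mantissa bits: %d\n", LDBL_MANT_DIG);
    long double x = 1.0L / 3.0L;
    printf("1/3 = %.19Lg\n", x); /* a 64-bit mantissa is worth ~19 digits */
    return 0;
}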

Reply to
Martin Brown
