scientists as superstars

Fencepost errors are also relatively easy to test for. Having a good method of generating test vectors is important.

For things like CUDA or OpenCL you're smack up against the von Neumann bottleneck. I expect the video card makers to embrace interfaces faster than PCIe relatively soon, like M.2 NVMe.

I may get back to it, but for one thing I work on ( VST plugin convolution ), the GPP approach wins for now.

Other than that, it depends on what you mean by "massively parallel." With bog standard open/read/write/close Linux driver ioctl() and event driven things like select()/poll()/epoll() it gets quite a bit easier.
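
A minimal sketch of the event-driven flavor, assuming a character device at /dev/mydev (name made up) whose driver wakes readers when data arrives:

    // epoll loop over a device fd; /dev/mydev is a placeholder name
    #include <sys/epoll.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int fd = open("/dev/mydev", O_RDONLY | O_NONBLOCK);
        if (fd < 0) { perror("open"); return 1; }

        int ep = epoll_create1(0);
        struct epoll_event ev = {};
        ev.events  = EPOLLIN;
        ev.data.fd = fd;
        epoll_ctl(ep, EPOLL_CTL_ADD, fd, &ev);

        char buf[4096];
        for (;;) {
            struct epoll_event out;
            int n = epoll_wait(ep, &out, 1, -1);   // block until the driver says "ready"
            if (n <= 0) continue;
            ssize_t got = read(out.data.fd, buf, sizeof buf);
            if (got > 0) { /* hand the data to a worker thread, etc. */ }
        }
    }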

If that's not good enough, shared memory is a possibility. There are other paradigms.

When I think of libraries, I think of what's available for Fortran.

Writing libraries otherwise isn't that good of a business model. Open source libraries are only as good as the people who steer them.

The "boost" library quite should be a wonderful thing; it is , sometimes but more often it's just a whacking great overhead.

But it's not like there are healthy markets around for "bolt makers" in software. And there's got to be a limit to how good the tools actually are. Turns out I can start using CLANG at home; I'll see how impressive it is.

The problem in software is pretty simple: half the practitioners have been doing it less than five years. Throw in the distractions of developing for the Web and it's even worse.

--
Les Cargill
Reply to
Les Cargill

In Linux, realtime threads are in "the realtime context". It's a bit of a cadge. I've never really seen a good explanation of what that means.

Sorry; never used BSD.

Realtime threads are simply in a different group of priorities. You can install kernel loadable modules ( aka device drivers ) to provide a timebase that will make them eligible. SFAIK, you can't guarantee them to run. You may be able to get close if you remove unnecessary services.

I don't think this does what you want.

"Cons: Tasks that runs in the real-time context does not have access to all of the resources (drivers, services, etc.) of the Linux system."

formatting link

I haven't set any of this up in the past. Again - if we had an FPGA, there was a device driver for it and the device driver kept enough FIFO to prevent misses.
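
For reference, moving a thread into that other priority group is roughly this ( a sketch; it needs root, CAP_SYS_NICE, or an rtprio rlimit, and it still doesn't guarantee the thread runs ):

    // Put the calling thread into the SCHED_FIFO ("realtime") priority group.
    #include <pthread.h>
    #include <sched.h>
    #include <cstdio>

    static void make_realtime(int prio) {      // prio in 1..99 for SCHED_FIFO
        struct sched_param sp = {};
        sp.sched_priority = prio;
        int rc = pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);
        if (rc != 0)
            std::fprintf(stderr, "pthread_setschedparam failed: %d\n", rc);
    }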

--
Les Cargill
Reply to
Les Cargill

In Linux if one thread is real time, all the threads in the process have to be as well. Any compute-bound thread in a realtime process will bring the UI to its knees.

I'd be perfectly happy with being able to _reduce_ thread priority in a user process, but noooooo. They all have to have the same priority, despite what the pthreads docs say. So in Linux there is no way to express the idea that some threads in a process are more important than others. That destroys the otherwise-excellent scaling of my simulation code.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC / Hobbs ElectroOptics 
Optics, Electro-optics, Photonics, Analog Electronics 
Briarcliff Manor NY 10510 

http://electrooptical.net 
http://hobbs-eo.com
Reply to
Phil Hobbs

That last bit makes me wonder. Priority settings have to conform to "policies" but there have to be more options than "the same".

This link sends me to the "sched(7)" man page.

formatting link

Might have to use sudo to start the thing, which is bad form in some domains these days.

There's a lot of info on the web now that seems to indicate you can probably do what you need done.
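
One Linux-specific ( non-POSIX ) trick some of those pages describe is that the nice value is per-thread on Linux, so setpriority() on a thread's TID de-nices only that thread. A sketch, with no promise it behaves the same on every kernel or that it solves your scaling problem:

    // Lower the nice priority of just the calling thread (Linux-specific).
    #include <sys/resource.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <cstdio>

    static void denice_this_thread(int nice_val) {   // e.g. nice_val = 10
        pid_t tid = (pid_t) syscall(SYS_gettid);
        if (setpriority(PRIO_PROCESS, tid, nice_val) != 0)
            perror("setpriority");
    }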

FWIW:

formatting link
(also known as pthreads)

--
Les Cargill
Reply to
Les Cargill

On a sunny day (Sun, 2 Aug 2020 16:26:26 -0400) it happened Phil Hobbs wrote in :

Assuming RT means 'real time': no. First, Unix / Linux (whatever version) is not a real-time system. It is a multi-tasker, and so sooner or later it will have to do other things than run your code. With a kernel module you can, to some extent, service interrupts and keep some data in memory; it will then be read sooner or later by the user program.

For threads in a program, anything time-critical is out. Many things will work though, as for example the I2C protocol does not care so much about timing; I talk to SPI and I2C chips all the time from threads.
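
For example, something like this ( bus number, chip address, and register are made up ) is perfectly happy in an ordinary thread:

    // Read one register from an I2C chip via the i2c-dev interface.
    #include <linux/i2c-dev.h>
    #include <sys/ioctl.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int fd = open("/dev/i2c-1", O_RDWR);              // bus number made up
        if (fd < 0) { perror("open"); return 1; }
        if (ioctl(fd, I2C_SLAVE, 0x48) < 0) { perror("I2C_SLAVE"); return 1; }

        unsigned char reg = 0x00;                         // register, device-specific
        unsigned char val = 0;
        write(fd, &reg, 1);                               // set register pointer
        read(fd, &val, 1);                                // timing here is not critical
        std::printf("reg 0x%02x = 0x%02x\n", reg, val);
        close(fd);
        return 0;
    }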

The way I do 'real time' with Linux is add a PIC to do the real time stuff, or add logic and a hardware FIFO, FPGA if needed.

All depends on your definition of 'real time' and requirements. Here, real-time DVB-S encoding from a Raspberry Pi uses two 4k x 9 FIFOs to handle the task-switch interrupt.

formatting link

Reply to
Jan Panteltje

It can be, for the reasons you note below: definition of terms.

There are many telecoms Unix/Linux and Java programs that are realtime.

Obviously real time != fast, but that's boringly obvious.

In the telecoms industry "real time" often means time guarantees are statistical, e.g. connect a call with a mean time less than 0.5s.

Personally, as a customer and engineer, I would prefer the 95th percentile rather than the mean, since it is a better indication of the performance limit, but as a vendor the mean is more convenient.
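
A toy illustration ( numbers invented ): nine calls connect in 0.2 s and one takes 3 s. The mean still meets a 0.5 s spec; the 95th percentile shows the tail.

    // Mean vs 95th percentile for ten invented call-setup times.
    #include <algorithm>
    #include <numeric>
    #include <vector>
    #include <cstddef>
    #include <cmath>
    #include <cstdio>

    int main() {
        std::vector<double> t = {0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 3.0};
        double mean = std::accumulate(t.begin(), t.end(), 0.0) / t.size();
        std::sort(t.begin(), t.end());
        std::size_t rank = (std::size_t) std::ceil(0.95 * t.size());  // nearest-rank percentile
        std::printf("mean %.2f s, p95 %.2f s\n", mean, t[rank - 1]);  // 0.48 vs 3.00
        return 0;
    }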

Anybody trying to use Linux as a fast hard realtime system is going to have to use a specialised kernel. Even then they can be screwed by caches and interrupts.

Reply to
Tom Gardner

Perhaps I was unclear. I'm talking about thread classes, one of which is officially called "real time", i.e. high priority, not about the design of real time systems.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC / Hobbs ElectroOptics 
Optics, Electro-optics, Photonics, Analog Electronics 
Briarcliff Manor NY 10510 

http://electrooptical.net 
http://hobbs-eo.com
Reply to
Phil Hobbs

Oh, I know what the docs say. What they don't tell you is that (a) None of that scheduler stuff applies to user processes, just realtime ones; (b) You can't mix real time and user threads in the same process; and (c) You can't adjust the relative priority of threads in a user process, no way, no how. You can turn the niceness of the whole process up (or down, if you're running as root), but you can't do it thread-by-thread.
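
If anyone wants to check (c) for themselves, here is a two-minute demo ( compile with -pthread; the EINVAL result is what I see on Linux ):

    // A SCHED_OTHER thread only admits static priority 0, so any attempt to
    // raise or lower one thread's priority is rejected.
    #include <pthread.h>
    #include <sched.h>
    #include <cstdio>

    int main() {
        int policy;
        struct sched_param sp;
        pthread_getschedparam(pthread_self(), &policy, &sp);
        std::printf("policy %d (SCHED_OTHER=%d), priority %d\n",
                    policy, SCHED_OTHER, sp.sched_priority);

        int rc = pthread_setschedprio(pthread_self(), 5);   // try to bump this thread
        std::printf("pthread_setschedprio(5) -> %d (EINVAL here)\n", rc);
        return 0;
    }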

That means that when I need a communications thread to preempt other threads unconditionally in a compute-bound process such as my simulator, I have no way to express that in Linux. If I put a compute-bound thread in the realtime class, it brings the UI to its knees and the box eventually crashes.

For my purposes, I'd be perfectly happy if I could _reduce_ the priority of the compute threads and leave the comms threads' priority alone, but nooooooo.(*)

This limitation destroys the scaling of my simulator on Linux--it's about a 30% performance hit on many-host clusters. Works fine in Windows and OS/2, but the Linux Kernel Gods don't permit it, hence my question about BSD.

The only way to do something like that in Linux appears to be to put all the comms threads in a separate process, which involves all sorts of shared memory and synchronization hackery too hideous to contemplate.

Cheers

Phil Hobbs

(*) When I talk about this, some fanboi always accuses me of trying to hog the machine by jacking up the priority of my process, so let's be clear about it.

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC / Hobbs ElectroOptics 
Optics, Electro-optics, Photonics, Analog Electronics 
Briarcliff Manor NY 10510 

http://electrooptical.net 
http://hobbs-eo.com
Reply to
Phil Hobbs

There's an interesting 2016 paper about serious performance bugs in the Linux scheduler here:

formatting link

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC / Hobbs ElectroOptics 
Optics, Electro-optics, Photonics, Analog Electronics 
Briarcliff Manor NY 10510 

http://electrooptical.net 
http://hobbs-eo.com
Reply to
Phil Hobbs

Apologies for inspiring you to repeat yourself.

Have you ruled out (nonblocking) sockets yet? They're quite performant[1]. This would give you a mechanism to differentiate priority. You can butch up an approximation of flow control, and it should solve any synchronization problems - at least you won't need semaphores.

[1] but perhaps not performant enough...

There is MSG_ZEROCOPY.
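
In outline ( Linux >= 4.14, TCP; a sketch only, and only worth it for large sends ):

    // Opt in to zero-copy once, then flag each send; the kernel pins the pages
    // and reports completion later on the socket error queue (MSG_ERRQUEUE).
    #include <sys/socket.h>
    #include <cstdio>

    static void send_zerocopy(int sock, const void *buf, size_t len) {
        int one = 1;
        setsockopt(sock, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof one);
        if (send(sock, buf, len, MSG_ZEROCOPY) < 0)
            perror("send");
        // The buffer must stay untouched until the completion notification
        // arrives via recvmsg(sock, ..., MSG_ERRQUEUE).
    }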

There is always significant confusion about priority. Pushing it as a make/break thing in a design is considered bad form :) But sometimes...

--
Les Cargill
Reply to
Les Cargill

Did you look at the paper I posted upthread? Its title is "The Linux Scheduler: A Decade of Wasted Cores." (2016) That's about it.

I'm using nonblocking sockets already. On Windows and OS/2, even on multicore machines, the realtime threads get run very soon after becoming runnable. They don't have that much to do, so they don't bog down the UI or the other realtime services.

The design breaks the computational universe down into shoeboxes full of sugar cubes. Each shoebox is a Chunk object, and communicates via six Surface objects, each with its own high priority thread. Surface has two subclasses, LocalSurface and NetSurface, which communicate via local copying and sockets respectively, depending on whether the adjacent Chunk is running on the same machine or not.

The Chunk data arrays are further broken down into Legs (like a journey, not a millipede). A Leg is a 1-D row of adjacent cells that all have the same updating equations and coefficients,(*) plus const references to the four nearest neighbours and the functions that do the updating. Generating the Legs is done once at the beginning of the run, and the inner loop is a single while() that iterates over the list of Legs, once on each half timestep (E -> H then H -> E).

This is a nice clean design that vectorizes pretty well even in C++ and runs dramatically faster than the usual approach, which is to use a triple loop wrapped around a switch statement that selects the updating equation and coefficients for each cell on each half-step.
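
In skeleton form ( heavily simplified, and the member names here are illustrative only; the real classes carry much more than this ), a Leg is something like:

    // Illustrative sketch of the Leg idea, not the actual code.
    #include <vector>
    #include <cstddef>

    struct Coeffs { double ca, cb; };            // one set of update coefficients

    struct Leg {
        double*       cells;                     // 1-D run of adjacent cells
        const double* nbr[4];                    // four nearest-neighbour rows
        std::size_t   n;                         // number of cells in the run
        Coeffs        c;                         // same coefficients for the whole run
        void        (*update)(Leg&);             // E->H or H->E updater for this run
    };

    // The inner loop: a straight pass over the Legs, no per-cell switch.
    inline void half_step(std::vector<Leg>& legs) {
        for (Leg& leg : legs)
            leg.update(leg);
    }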

It runs fine on Linux as well, except that there's this unfortunate tendency for the Surface threads to sit around sucking their thumbs when they should be running, sometimes for seconds at a time. That really hurts on highly-multicore boxes, where you want to run lots of small Chunks to get the shortest run times.

It's an optimizing simulator, and it may need 100 or more complete runs for the optimizer to converge, especially if you're using several optimization parameters and have no idea what the optimal configuration looks like. That can easily run to 10k-100k time steps altogether, especially with metals, which require very fine grid meshes. (FDTD's run time goes as N**4 because the time step has to be less than n/c times the diagonal of the cells.)

Cheers

Phil Hobbs

(*) Dielectrics are pretty simple, but metals are another thing altogether, especially at high frequency where the "normal metal" approximation is approximately worthless. The free-electron metals (copper, silver, and gold) exhibit large negative dielectric constants through large ranges of the IR, which makes the usual FDTD propagator unstable.

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC / Hobbs ElectroOptics 
Optics, Electro-optics, Photonics, Analog Electronics 
Briarcliff Manor NY 10510 

http://electrooptical.net 
http://hobbs-eo.com
Reply to
Phil Hobbs

It has been tried and it all ended in tears. Viper was supposed to be a correct-by-design CPU, but it all ended in recrimination and litigation.

Humans make mistakes and the least bad solution is to design tools that can find the most commonly made mistakes as rapidly as possible. Various dataflow methods can catch a whole host of classic bugs before the code is even run but industry seems reluctant to invest so we have the status quo. C isn't a great language for proof of correctness but the languages that tried to force good programmer behaviour have never made any serious penetration into the commercial market. I know this to my cost as I have in the past been involved with compilers.

The "ship it and be damned" software development culture persists, and it existed long before there were online updates over the internet.

--
Regards, 
Martin Brown
Reply to
Martin Brown

It is worse than that. According to some that have sat on the standards committees (e.g. WG14), and been at the sharp end of user-compiler-machine "debates", even the language designers and implementers have disagreements and misunderstandings about what standards and implementations do and say.

If they have those problems, automated systems and mere users stand zero chance.

Well, Java has been successful at preventing many stupid mistakes seen with C/C++. That allows idiot programmers to make new and more subtle mistakes :(

Rust and Go are showing significant promise in the marketplace, and will remove many traditional "infelicities" in multicore and distributed applications.

Ada/SPARK is the best commercial example of a language for high reliability applications. That it is a niche market is an illustration of how difficult the problems are.

Yup. But now there are more under-educated programmers and PHBs around :(

Reply to
Tom Gardner

On a sunny day (Tue, 11 Aug 2020 10:02:32 +0100) it happened Martin Brown wrote in :

I think it is not that hard to write code that simply works and does what it needs to do. The problem I see is that many people who write code do not seem to understand that there are 3 requirements:

0) you need to understand the hardware your code runs on.
1) you need to know how to code and the various coding systems used.
2) you need to know 100% about what you are coding for.

What I see in the world of bloat we live in is:

0) no clue
1) 1 week tinkering with C++ or snake languages.
2) Huh? that is easy ..

And then blame everything on the languages and compilers if it goes wrong.

And then there are hackers, and NO system is 100% secure.

Some open source code I wrote and published has been running for 20 years without problems. I know it can be hacked...

We will see ever more bloat as cluelessness is built upon cluelessness; the problem here is that industry / capitalism likes that. Sell more bloat, sell more hardware, obsolete things ever faster, keep spitting out new standards ever faster.

Reply to
Jan Panteltje

Although I tend to agree with you I think a part of the problem is that the people who are any good at it discover pretty early on that for typical university scale projects they can hack it out from the solid in the last week before the assignment is due to be handed in.

This method does not scale well to large scale software projects.

Although I have an interest in computer architecture, I would say that today 0) is almost completely irrelevant to most programming problems (unless it is on a massively parallel or Harvard architecture CPU).

Teaching of algorithms and complexity is where things have gone awry. Programmers should not be reinventing the square (or, if you are very lucky, hexagonal) wheel every time; they should know about round wheels and where to find them. Knuth was on the right path but events overtook him.

Compilers have improved a long way since the early days but they could do a lot more to prevent compile time detectable errors being allowed through into production code. Such tools are only present in the high end compilers rather than the ones that students use at university.

Again you can automate some of the most likely hacker tests and see if you can break things that way. They are not called script kiddies for nothing. Regression testing is powerful for preventing bugs from reappearing in a large codebase.

I once ported a big mainframe package onto a Z80 for a bet. It needed an ice pack for my head and a lot of overlays. It was code that we had ported to everything from a Cray X-MP downwards. We always learned something new from every port. The Cyber-76 was entertaining because our unstated assumption of IBM FORTRAN 32-bit and 64-bit reals was violated.

The Z80 implementation of Fortran was rather less forgiving than the mainframes (being a strict interpretation of the Fortran IV standard).

I do think that software has become ever more over-complicated in an attempt to make it more user friendly. OTOH we now have almost fully working voice communication with the likes of Alexa, aka she who must not be named (or she lights up and tries to interpret your commands). (And there are no teleprinter noises off, a la the original Star Trek.)

--
Regards, 
Martin Brown
Reply to
Martin Brown

No language will ever force good programmer behavior. No software can ever prove that other software is correct, or even point at most of the bugs.

Proper hardware protections can absolutely firewall a heap of bad code. In fact, make it un-runnable.

If a piece of code violates the rules, it should be killed and never allowed to run again. Software vendors would notice that pretty quick.

Online code updates should of course be disallowed by default. It's an invitation to ship crap code now and assume it will be fixed some day. And that the users will find the bugs and the black-hats will find the vulnerabilities.

Why is there no legal liability for bad code?

--

John Larkin         Highland Technology, Inc 

Science teaches us to doubt. 

  Claude Bernard
Reply to
jlarkin

That's impossible. Not even Intel understands Intel processors, and they keep a lot secret too.

There are not enough people who can do that.

Generally impossible too.

--

John Larkin         Highland Technology, Inc 

Science teaches us to doubt. 

  Claude Bernard
Reply to
jlarkin

Benjamin, I've got one word for you: EULA!

Many states have passed laws making the opening of shrink wrap the same as signing a contract, and clicking a button on a web page the same as agreeing to a contract you have never read.

I've seen web sites that have broken links for the "terms and conditions". The law should be written so that this makes them subject to charges of fraud.

Similar things are done at face to face contract signings. I had power of attorney for a friend once who was out of the country and selling a house. They handed me the contract to sign which I read, then turned it over to find the back had the proverbial small print but also dark gray on light gray background!!! I cried foul, but it didn't go far. The lady offered to read it to me.

WTF is wrong with people? Why would they want to pull crap like this?

--

  Rick C. 

  --+ Get 1,000 miles of free Supercharging 
  --+ Tesla referral code - https://ts.la/richard11209
Reply to
Ricketty C

This is why all the really smart people are in software.

--

  Rick C. 

  -+- Get 1,000 miles of free Supercharging 
  -+- Tesla referral code - https://ts.la/richard11209
Reply to
Ricketty C

It's not impossible, but it may limit your choice of hardware.

Then we'd better make the coding systems more transparent, and work out how to train more people to a higher level.

You need to get a lot closer to 100% than the people who want to get the job done usually seem to imagine. Waving your hands in the air and declaring it impossible isn't a constructive approach.

--
Bill Sloman, Sydney
Reply to
Bill Sloman
