scientists as superstars

cc and ld are part of the build process, and the build process is meaningfully parallel. (Actually, modern ld /is/ meaningfully parallel, especially for link-time optimisation.) How "meaningfully parallel" a task is often depends on how you choose to view it.

Yes. So do most (all?) parallel programming languages. When Go runs tasks in parallel, it runs them on different OS threads - a language and its run-time libraries don't get to run on more than one core without OS support.

Languages and their run-times or VM's can have a kind of cooperative multitasking within one thread, where there are separate logical executions but only one is running at a time. These can be useful, of course, and are often supported (with names like coroutines, async, generators, greenlets, fibres - details and names vary). But they are not "meaningful parallelism" in that they don't do more work in the same time, they simply give you other choices of how to structure your code.

The details vary, but all parallelism in languages is done by using OS processes or threads, with some kind of inter-process or inter-thread communication (queues, pipes, locks, shared memory, etc.).

Reply to
David Brown

We still have "an unknown error occurred". No programming language can prevent laziness... However...

In the noughties, I introduced our team (about 40 programmers, mostly using C++) to an error-management data type that made it impossible to write a function returning an error condition without the required ceremony: defining the error condition in a specialised error-definition language, including the name, context variables, message template (text with slots for variable expansion), and language context (for translators), with encouragement to construct the message in three parts: problem/reason/solution. Only after this had been done would the build-time code generators allow the error condition to be signalled - with appropriate values for any variables.

The cost was low, but the pay-off was immense; we could automatically determine which messages had been translated into which languages and give translators all they needed for other languages. We could signal an error on a server, but display the message in a different client language context. We could print a PDF error message manual that contained additional context and help. Etc, etc... it was a powerful system...

The need for good error reporting is a problem in *all* software and for *all* users, yet there is almost no support for it in existing languages or frameworks.

Poor error reporting is responsible for more than 50% of user frustration with information technology.

Clifford Heath.

Reply to
Clifford Heath

formatting link

is an example of that approach. It doesn't seem to be ruling the world at the moment.

formatting link

is a bit older.

The Viper provably correct computer is more recent - 1987.

formatting link

It doesn't seem to have got anywhere either. I heard a bit about it before we left Cambridge (UK) in 1993.

--
Bill Sloman, Sydney
Reply to
Bill Sloman

Viper was interesting - a processor with a formal mathematical proof of correctness. RSRE absolutely did not want people to think it might be used in missile fire control systems, oh no, never.

IIRC they flogged it to the Australians, then the Australians noted there was a missing step between the top level spec and the implementation. They sued and won.

There are three problems with any component that is mathematically proven:
- most of the system isn't mathematically proven
- is the initial spec "correct"?
- it is too difficult to do in practice

I don't remember NewSpeak, the associated programming language, ever becoming practical.

Reply to
Tom Gardner

On a sunny day (Thu, 23 Jul 2020 21:40:02 +0200) it happened Jeroen Belleman wrote in :

Yes, which solution and which programming languages are suitable depends on the application hardware. For example, a firing solution for a micro-sized drone will have to have the maths written for a very simple embedded system, maybe even in asm. The same firing solution for, say, a jet can be done in whatever high-level language makes you drool. I like Phil Hobbs' link to the story about that programmer and his use of the drum revolution time.

formatting link
For better pictures:
formatting link

And apart from the number of bugs in the higher-level version, the failure rate also goes up with the number of components and the chip size in a system, especially in a radiation environment. So from a robustness POV my choice would be the simple embedded version; it's not as easy to hack as most (Windows??) PCs either, and uses less power, so it's greener.

Reply to
Jan Panteltje

- jerks will still hack ugly programs.

No language will fix the mess we have. Serious hardware protection will.

Reply to
John Larkin

No, it cannot, for deep theoretical and deep practical reasons.

A trivial example: there's no way that hardware protection can protect against some idiot using addition where subtraction is required.

That's not a theoretical example. A few years ago my energy supplier set my monthly payments too low. When it noticed that I was getting further behind with my payments, it responded by /reducing/ my monthly payments. Rinse and repeat another two times!

As for "better" languages, they help by reducing the opportunities for making boring old preventable mistakes.

Reply to
Tom Gardner

Sure, the program will report his bank balance wrong. Or abend. But it needn't crash the system, or inject viruses, or ransomware everything.

It should be flat impossible for any application program to compromise the OS, or any other unrelated application. Intel and Microsoft are just criminally stupid. I don't understand why they are not liable for damages.

We are in the dark ages of computing. Like steam engines blowing up and poaching everybody nearby.

Reply to
John Larkin
[ about computing bugs ]

Like what? Error-correcting memory? Redundant CPUs and voting? Nonrewritable firmware?

There aren't any hardware solutions to (for instance) facial-recognition that unlocks a phone on seeing a face of a child who resembles his parent.

Reply to
whit3rd

Both of which are ill-defined concepts.

These are just different ways of having the wrong result.

That's an ideal. Sadly, we don't live in an ideal world, and the people who promise us that we could, if only we did things their way, aren't to be trusted.

Many of them know that they are lying, and ones who sincerely believe their own claims are even more dangerous.

John Larkin lives in his own personal dark age. He doesn't know much and comforts himself with the delusion that everybody else is equally miserably ignorant. His grandiose self-image prevents him from noticing that this isn't always true.

--
Bill Sloman, Sydney
Reply to
Bill Sloman

Agreed.

But there are no silver bullets that can "fix the mess we have". "The mess" is too many significantly different messes, including philosophy and human frailties.

[1] e.g. what do you mean by "correct"
Reply to
Tom Gardner

Yup, and once we get into classifications, there are infinite examples.

How would you classify a table that one or more people are sitting on? Or (as in my dining room) a chair with a potted plant on it?

Or, since dogs are four legged mammals, a dog that has had a leg amputated?

And then there's the whole emerging topic of machine learning "adversarial attacks".

formatting link

Reply to
Tom Gardner

Check out Qubes OS, which is what I run daily. It addresses most of the problems you note by encouraging you to run browsers in disposable VMs and otherwise containing the pwnage.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
Reply to
Phil Hobbs

That sort of thing is just another layer of kluge. It's sad that it's necessary.

--
John Larkin         Highland Technology, Inc 

Science teaches us to doubt. 
Reply to
jlarkin

I did.

It doesn't like Nvidia graphics cards, and that's all my new machine has :(

Reply to
Tom Gardner

That's a bit much. For one thing, there's no reason for anyone to ever release any software that exhibits any of the top CVE pathologies, regardless of tooling or methodology.

We didn't - those with whom I worked. We designed against it, coded against it and tested against it. I have no evidence of *one* actual defect in anything I released from about 1989 onward.

I'm excluding "yeah, you did this, and I meant that". Those are perfectly understandable requirements mistakes.

What you see is the population of developers doubling every five years.

Well, I hate to be that guy, but it probably takes more than five years to reach the journeyman phase. If I count college, it was about that for me.

So what we do is move the goalposts and redefine "work" to mean "knitting together frameworks into deployments".

Specifically:

They had a lot of co-conspirators, then. Perhaps you don't recall, but the sheer change in cost created massive opportunity once PCs got big enough to do real work. And nobody paid anyone for rigor in the work.

The value created was substantial.

Electricity-as-power has a much higher historical body count.

--
Les Cargill
Reply to
Les Cargill

I've replaced thousands of failed TTL ICs over the decades.

Reply to
Michael Terrell

I mostly run it on $150 eBay Thinkpad T430s (and up) and Supermicro AMD tower boxes. I wouldn't be without it at this point.

Cheers

Phil Hobbs (posting from a $150 eBay T430s)

--
Dr Philip C D Hobbs 
Principal Consultant 
Reply to
Phil Hobbs

It is ironic that a lot of the potentially avoidable human errors are fencepost errors. Binary fencepost errors are about the most severe, since you end up with the opposite of what you intended.

The average practitioner today really struggles on massively parallel hardware. If you have ever done any serious programming on such kit, you will have quickly realised that the process which keeps all the other processes busy doing useful things is by far the most important.

There is still scope for some improvement, but most of the ways it might happen have singularly failed to deliver. There are plenty of very high quality code libraries in existence already, but people still roll their own :( - partly down to an unwillingness of businesses to pay for licensed working code.

The big snag is that way too many programmers do the coding equivalent, in mechanical engineering terms, of manually cutting their own bolts with non-standard pitch and diameter - and sometimes they make very predictable mistakes too. The latest compilers and tools are better at spotting human errors using dataflow analysis, but they are far from perfect.

--
Regards, 
Martin Brown
Reply to
Martin Brown

I wrote a clusterized optimizing EM simulator that I still use--I have a simulation gig just starting up now, in fact. I learned a lot of ugly things about the Linux thread scheduler in the process, such as that the pthreads documents are full of lies about scheduling and that you can't have a real-time thread in a user mode program and vice versa. This is an entirely arbitrary thing--there's no such restriction in Windows or OS/2. Dunno about BSD--I should try that out.

Does anybody here know if you can mix RT and user threads in a single process in BSD?

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
Reply to
Phil Hobbs
