the secret sauce

Intel tried to make a dedicated mobile processor architecture for smartphones and the like circa 2010; it was Atom-derived, code-named "Moorestown", and predictably didn't live up to the hype.

Reply to
bitrex

It's been done:

Unfortunately, at the time (a decade ago) the performance-per-watt advantages of mesh-architecture/tiled CPUs didn't seem to justify, from a cost perspective, the extra workload of writing software for them in anything but niche applications.

Reply to
bitrex

Or rather I should say "it's been done" ages ago in technology-time, when the process size didn't allow for anything but a mesh architecture when putting hundreds of general-purpose CPU cores on a die. I don't think the current many-core CPUs that have 32, 64, etc. cores on a die use a tile architecture; they're more like traditional multicore processors where each core has a dedicated onboard cache and its own lines to the shared cache and main memory bus. The feature size has shrunk enough that they can do that.

Reply to
bitrex

It has old roots, yes, but the formal proof (including down to particular silicon) was only completed a couple of years ago.

Rather than trying to prove the GCC compiler correct, they prove that the generated code matches the C code. That needs some annotations in the C that get passed through to the assembly, but it allows proving the C, and the machine proves that the assembly matches the C. There is a separate proof that the assembly on that particular silicon meets the high-level guarantees of the secure operating system.
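
To give a flavour of what "annotations in the C" can look like, here is a toy sketch in the style of Frama-C/ACSL contracts - just an illustration of the general idea, not the actual seL4/Isabelle toolchain or its annotation format:

/*@ requires n > 0;
    requires \valid_read(buf + (0 .. n-1));
    assigns \nothing;
    ensures 0 <= \result < n;
*/
int index_of_max(const int *buf, int n)
{
    int best = 0;
    /*@ loop invariant 1 <= i <= n;
        loop invariant 0 <= best < i;
        loop assigns i, best;
    */
    for (int i = 1; i < n; i++) {
        if (buf[i] > buf[best])
            best = i;
    }
    return best;
}

The contract travels with the code, and a checker either discharges the proof obligations or points at the one that fails.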

It's all very impressive stuff.

As I did, at the time ;). I used to pull fan-fold listings out of the bin to follow Robert Elz' work building the BSD tty driver. He would work all night (because no students to interrupt him) while watching the cricket on the TV from England.

Yes, Tanenbaum claims (with some merit) to be the father of Linux. Linus certainly would not have got started without it. I love his work.

Clifford Heath

Reply to
Clifford Heath

On 12.01.21 at 04:59, Clifford Heath wrote:

Oh, I think that Tanenbaum saw that completely differently. I remember he said Linus could never ever pass an examination with him. It was in the context that the Linux kernel was monolithic and AT was on the microkernel boat.

BTW, his lecture was about the Free University Compiler Kit. Those who got the pun were just a few, even in the large audimax. I must have been bad even then. I wrote a version of the underlying Experimental Machine (in principle a p-code thing) for the Z8000. It was faster than the 68K version written by my friends. Yeah!

We had a group project in VLSI design and I managed to steer it to a 16-bit p-code machine similar to AT's EM, with some simplifications, on HP's contemporary dynamic NMOS process (with that pre-charge etc.). Unluckily, on the multi-project wafer there was a metal line across our chip that nobody spotted. :-(

Gerhard

Reply to
Gerhard Hoffmann

While *you* might welcome and benefit from such a language, I think the industry requires more than just a language to address the coming generation of coding challenges.

Developers -- even those accustomed to writing multithreaded code -- are just not comfortable with TRUE multiprocessing (e.g., loosely coupled, distributed systems). They still seem to treat function/procedure invocations as "collections of statements" instead of considering that they may not behave (temporally) as other "local" statements.

How many apps "hang" while the code accesses the resolver... and the resolver takes an unexpectedly long time to respond? Did the developer fail to anticipate this condition? Or, did he assume it was an infrequent enough occurrence that he could ignore it?
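
A trivial POSIX C sketch of that failure mode (the host name and port are just placeholders for illustration):

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

/* Looks like any other "local" statement, but it goes out on the
   network: getaddrinfo() has no timeout parameter, so if the resolver
   is slow or unreachable, the calling thread just sits here. */
static int resolve(const char *host)
{
    struct addrinfo hints, *res = NULL;
    memset(&hints, 0, sizeof hints);
    hints.ai_family   = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;

    int err = getaddrinfo(host, "80", &hints, &res);  /* may block for a long time */
    if (err != 0) {
        fprintf(stderr, "resolve failed: %s\n", gai_strerror(err));
        return -1;
    }
    freeaddrinfo(res);
    return 0;
}

int main(void)
{
    /* Everything after this call - GUI updates, other requests, the lot -
       waits until the resolver answers or the library finally gives up. */
    return resolve("example.com") == 0 ? 0 : 1;
}

The usual mitigations - doing the lookup on a worker thread, or otherwise bounding the wait - only get written if the developer first recognises that the call is remote at all.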

Reply to
Don Y

Microkernels can be made very secure in theory. But they are more complex to write and work with - that means more mistakes. And it means fewer people work with them, so fewer features, so fewer users.

There is no such thing as a "secure" programming language, or a "reliable" programming language - any more than there is such a thing as an "insecure" language.

Programming languages are not the problem. It is the programmers, the development managers, the programs written, and the development processes that are the problem.

There is no difficulty in writing safe, secure and reliable code in C. There are plenty of tools available to help. It is not the C language that gives you buffer overruns - any more than it is PHP that gives you SQL injection attacks. It is simply a failure to write good quality code - no more, no less.
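
For example (a made-up snippet, not from any real project) - the overrun is a coding failure, and avoiding it costs exactly one extra argument:

#include <stdio.h>

static void greet_unsafe(const char *name)
{
    char buf[16];
    sprintf(buf, "Hello, %s!", name);              /* overruns buf for long names */
    puts(buf);
}

static void greet_safe(const char *name)
{
    char buf[16];
    snprintf(buf, sizeof buf, "Hello, %s!", name); /* truncates instead of overrunning */
    puts(buf);
}

int main(void)
{
    greet_unsafe("Al");                             /* happens to fit, so it "works" */
    greet_safe("Bartholomew J. Featherstonehaugh"); /* safely truncated */
    return 0;
}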

Different languages can make it easier to write safer code, or more efficient code, or code with different balances for different purposes. It would be far wiser to write a program that does a lot of text handling in Python rather than C, because it would involve much simpler, shorter and clearer code - thus less risk of errors. It would be far wiser to write an OS driver in C than Python, because it would be far more efficient and avoid the risk of problems due to non-deterministic behaviour.

That would seem a natural choice.

Reply to
David Brown

So do I. That's why I usually program in C or C++. But I choose good tools - and I know how to use them. And I use good programming habits, good coding structures, and good testing practices. And if C or C++ are not the right languages for the task, I use a more appropriate language (or pass the task on to someone who knows the other language).

It is /people/ that cause security problems and other bugs in software, not the languages.

Sometimes the cause is programmers who can't or won't do a good enough job. More often, in professional contexts, it is higher up in the system - managers who don't understand the issues, who push their coders to get something working as soon as possible, who don't appreciate the need for training, standards, testing, etc.

Writing good, solid, secure and reliable code takes time, expert developers, and lots of resources. Writing code that mostly works during your testing is very much cheaper and faster.

That applies to /all/ programming languages.

(There is a possible exception with more "academic" languages like Haskell, in that poor or mediocre programmers are never going to learn them, and poor and mediocre managers are never going to let their developers use them - so if someone is programming in Haskell, they are probably a good programmer.)

Reply to
David Brown

So do I, but in a commercial consulting market I do have to dig people out of the holes that they have got themselves into, more often than not with C. It is by far the most common commercially used language now.

When I could get it, I much preferred Modula-2 work under DOS, OS/2 or Windows. Some of those compilers and tools were way ahead of their time. But the vast majority of stuff has been C/C++ on Doze.

The problem is that a better mousetrap doesn't always sell. Ask Betamax. Businesses choose buggy code that is quick to market over accurate specifications. The ship-it-and-be-damned brigade - we can always issue a patch later.

--
Regards, 
Martin Brown
Reply to
Martin Brown

Agreed.

Not /the/ problem, but /a/ major problem.

The evidence is against you there, for the reasons you note in your previous paragraph, plus the evolution over time of C.

A major problem is that some languages are used for historical reasons, for applications which would be better served by more modern languages.

But you know that since you gave examples :)

Reply to
Tom Gardner

Agreed.

I tried to port a Modula-2 program once, and ran into too many problems with non-standard libraries and the like. No, I didn't (need to) try too hard :)

Reply to
Tom Gardner

I don't disagree, but you are an exception rather than the rule.

Shame, but you aren't scalable :)

Reply to
Tom Gardner

No, not at all.

If you took away C, the same incompetent (or under-trained, over-worked) programmers would do the same poor job with whatever they had left. Using Ada, Modula-2 or Rust does not make you a better programmer or less likely to make mistakes. It might change details of the kind of mistake you get, however.

There is a general rule that the number of errors per line of code is roughly the same, regardless of programming language. You can reduce the risk of error for a particular task by using a language that lets you handle the task in fewer lines - so it is important to pick the right language for the task. That language might well be C.

Programmers that are expected to generate a certain number of lines of code per day will, of course, generate the same number of bugs regardless of the language.

It's the development methodologies that make the difference, not the language. No language is going to protect you from "the code ships on Friday, no matter what", or "it worked on my machine", or "Compiler warnings? It's only a warning - if it were important, it would be an error," or misinterpreting specifications, or "We need a new encryption algorithm that runs efficiently. This programmer writes fast code - he can design it," or even just "I know C. That makes me qualified to be a software designer".

The difficulty is not because of the language - the difficulty is inherent in the task of writing safe, secure and reliable code.

Now, you /can/ argue that C makes it easier to write /bad/ code than some other languages. That's why anyone interested in writing decent software uses a subset of the language - just as anyone interested in writing decent software in Ada, Modula-2, or anything else uses a subset. Sometimes these subsets have a name - SPARK for Ada, MISRA C as an example for C. Other times they are just "use the company coding standard and choice of compiler warnings".

Bare C lets you write things like this:

if (safe) doThisRiskyThing(); doThatRiskyThing();

The C subset used by "real programmers" does not. Such errors are /easily/ avoided by:

  1. Training the programmer to think about what they are doing.
  2. Using static checking systems (compiler warnings, linters, etc.).
  3. Using a coding standard that forbids anything close to such code formats.
  4. Using code reviews.
  5. Testing the damn thing.

Yes, there have been cases of such code released in the wild. That requires a failure of all of these 5 points above. And if your development methodologies don't enforce all 5 points (any one of which would have caught that error), it is your development system that is broken. A change of language will not help.
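
As an illustration of points 2 and 3 in practice - the risky-thing names are the placeholders from the example above, stubbed out here, and this assumes a reasonably recent GCC:

#include <stdbool.h>
#include <stdio.h>

static void doThisRiskyThing(void) { puts("this"); }
static void doThatRiskyThing(void) { puts("that"); }

int main(void)
{
    bool safe = false;

    if (safe)
        doThisRiskyThing();
        doThatRiskyThing();    /* runs regardless of 'safe' - the bug;
                                  "gcc -Wall" flags it with -Wmisleading-indentation */

    if (safe) {                /* the coding-standard version: braces, always */
        doThisRiskyThing();
        doThatRiskyThing();
    }

    return 0;
}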

I have written my fair share of bugs over the years, in my many programs (mostly C in embedded systems, also C++, and lots of assembly in the old days, with Python, Pascal, and a few other languages on PCs). But I can honestly say that I have never had that particular one in my C coding. I can count the number of buffer overruns I have had on one hand, all due to typos, and none of them left my office. I had one project where I had a dynamic memory leak - it used a library with poor documentation about how it handled pointer ownership. I haven't missed a "break" in a switch statement. The solid majority of "typical C errors" just don't make it past the first three steps above.

For the kinds of mistakes I make in my C programming, I can make them just as easily with Ada or Pascal or Rust.

The problem with C lies not in the C language - it lies in people being taught how to churn out C code (or C# code, or Java, or Python), rather than being taught how to program.

I certainly think that most programs that are written in C, would be better written in something else. That doesn't mean I think for an instant that the people who are currently writing buggy code in C could write bug-free code if they changed languages.

Reply to
David Brown
[snip]

That's not why Betamax failed. It failed because Sony offered business terms and conditions far more restrictive (and expensive) than VHS, and VHS was good enough. In other words, Sony badly overplayed their hand.

Joe Gwinn

Reply to
Joe Gwinn

No, I mean that Linus bootstrapped from Minix, not that the architectures are comparable.

CH

Reply to
Clifford Heath

Sure thing. This is no different than hearing some aluminum alloy is "as strong" as (low grade) steel. Intel is still king in servers where speed and reliability matter. I'm not seeing any ARM desktops either. Intel still dominates the markets they always have. Oh, they have some of the most widely accepted network chips too, and have for decades. How did ARM mess up so badly in networking chipsets?

Reply to
Cydrome Leader

Intel dominate servers because they dominate servers. It is a conservative market - it takes time to change, it takes time to prove that you have a good long-term reliability, and it takes time to build up a network of suppliers, support, etc.

Intel no longer dominate in speed. For pretty much any workload (desktop, workstation or server), AMD chips either beat Intel chips, or completely crush Intel chips - all for a lower price. And for many server workloads (especially where you have strong parallelisation, need general computing power, and are not dominated by memory bandwidth), these new generations of ARM chips even beat the AMD ones.

(In speed for the power, or speed for the dollar, Intel have been well below the competition for many years.)

Those are the bare technical performance facts. But the choice of processor does not depend on these facts alone, and that is why Intel is still the main supplier in the server world.

If your server runs Windows or x86-specific programs, you need an x86 processor - you can't use ARM (or Power, or anything else). But for many server tasks, you run Linux and your code is mainly in Java, PHP, Ruby, or various other interpreted languages. Or you have access to the code and can compile for whatever target you want. For such tasks, x86 compatibility is irrelevant and ARM (or Power, or others) will do fine.

On workstations and desktops, x86 compatibility is essential for the majority of serious users - Windows dominates which forces x86 compatibility. Even on Linux, most programs that are not part of distributions will be x86 only. (Having said that, Chromebooks running on ARM are increasingly common.)

No one who knows what they are doing, and who has an otherwise free choice, would buy an Intel processor for a desktop or workstation these days - you'd buy AMD, and get a faster device for the money.

As for networking chipsets, ARM doesn't make networking chipsets. Intel does, as a kind of side business (it has many devices other than just processors). Intel makes pretty good network chips, but so do many other companies.

Reply to
David Brown
