2066-pin processors

Can we quote you on that? :-)

(I made another post correcting Clifford and explaining what I meant.)

Reply to
David Brown

It is entirely possible to write bug-free software, but it gets disproportionately harder as the software's task gets more complex. Writing bug-free software for a toaster is possible. Writing bug-free software for a mobile phone is so impractical as to be effectively impossible.

We can counter this effect, to some extent, by making clear separations of parts with controlled and limited interfaces between them (the Unix philosophy of software development, rather than the Windows philosophy). That reduces the problem, but does not eliminate it.
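
In C terms that can be as simple as an opaque handle behind a small header - a minimal sketch with made-up names, just to show the shape of a controlled, limited interface:

    /* sensor.h - the only part of this module the rest of the program sees */
    typedef struct sensor sensor_t;        /* opaque type; internals live in sensor.c */

    sensor_t *sensor_open(const char *port);
    int       sensor_read(sensor_t *s, double *value);
    void      sensor_close(sensor_t *s);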

And software is never alone. /Your/ code may be bug-free, but are you confident that the OS is bug-free? That third-party code in the system is bug-free? That the compiler and its libraries are bug-free? What about the cpu you are running on? (Spectre shows how a system can be insecure even when the software is secure.) You may have written your own bug-free networking stack, but do you know how your Ethernet MAC reacts to badly formed packets?

I am /not/ saying "all software has bugs" - certainly not as an excuse for writing bad software. I am warning against complacency and the idea that /your/ software people, unlike all others, write perfect code that is bug-free and absolutely secure.

Reply to
David Brown

These qualifiers have particular purposes, and you certainly do want to use them appropriately. That does not mean you want to have to control which parts of memory get which bits of data or code. C is not assembly - you describe the program in a high-level language, and the compiler generates as efficient code as it can that satisfies the source code.

When I write "const int abc = 123;", I am /not/ saying "I want you to put number 123 in ROM, with a label abc." I am saying "abc is an int which will have the number 123 - I will not be changing that number". It's up to the compiler to put it in flash, put it in code as a "load immediate", hold it in a register, use it for compile-time calculations, etc.

There are some toolchains for some embedded processors that have extra keywords like "flash" or "xdata" for putting variables and constants in specific places. This is because the processors in question are brain-dead and painful (for C, at least), and the toolchains are not smart enough to cope, so they need this manual help.
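
Roughly what that looks like - a hedged sketch modeled on 8051 toolchains such as Keil C51 (SDCC spells the keywords __xdata and __code), so the exact spelling varies:

    unsigned char xdata rx_buffer[64];                              /* force into external RAM  */
    unsigned char code  crc_table[4] = { 0x00, 0x07, 0x0E, 0x09 }; /* force into code/flash    */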

And of course there are occasions when you want a particular effect that cannot be expressed properly in C, and so you have extensions and manual control. But that is rare.

I have used a compiler that treated "const" as "put this in flash", on a processor with different instructions for accessing flash data and ram data. It was a serious PITA that hindered normal programming.

Reply to
David Brown

The methods were used before return stacks were common, and probably before the term "self-modifying code" was coined. But clearly it /is/ self-modifying code.

Indeed. And that is often painfully slow.

Yes, self-modifying code is popular for "stealth" code - malware, software protection schemes, etc.

Reply to
David Brown

Even there, most people's office environments are relatively safe from hackers or other nasty misuse of passwords. The kinds of people who would use each other's passwords in an office would usually do so whether the passwords were on post-its or remembered and shared verbally.

Reply to
David Brown

Sure. My career has always been about generating ideas. And this is a "design" group. Dare to be crazy.

If we don't know what "secure" means, we're unlikely to create it.

--

John Larkin   Highland Technology, Inc   trk 

jlarkin att highlandtechnology dott com 
http://www.highlandtechnology.com
Reply to
John Larkin

"Fast" is unlimited hence arbitrary.

Efficiency is bounded between 0 and 1, so "efficient" means close to 1.

"Secure" means that it is very difficult for an unwelcome party to steal data or otherwise compromise our intended system. The limit is absolutely secure, zero vulnerabilities, impossible to compromise.

But we should be entirely secure, with zero successful worms/viruses/trojans attacking our computers. Zero security updates. We could start by believing that is possible.

Or design it to be secure from the start.

AV programs demonstrate that that wasn't, and isn't, being done.

--

John Larkin   Highland Technology, Inc   trk 

jlarkin att highlandtechnology dott com 
http://www.highlandtechnology.com
Reply to
John Larkin

... and never underestimate the difficulty of having a bug-free /specification/ for what the system /ought/ to do - preferably one that covers both the presence and the absence of faulty internal and external components.

Reply to
Tom Gardner

Lately our embedded products have an internal NORM/LOCK slide switch. Some of our customers demand that. In the LOCK position, it is impossible to write to the nv memory (usually serial flash) that stores the code, the cal table, and current settings.

Customers can remove the cover, set the LOCK state, replace the cover, and add lots of stickers over the screws.

We are not aware of any of our products ever being hacked.

--

John Larkin   Highland Technology, Inc   trk 

jlarkin att highlandtechnology dott com 
http://www.highlandtechnology.com
Reply to
John Larkin

There's a computer type for that goal: Harvard architecture. Harvard architecture splits the computer memory between instructions and data. No data ever becomes/creates an instruction without external intervention.

And, for the purpose of write/compile/run/debug cycles, that's a very annoying environment indeed. Von Neumann architecture is the rule nowadays. Despite the irritations that come from malware, it's much LESS irritating than Harvard architecture would be.

Even firmware (the most Harvard-like of contemporary software) is rewritable nowadays, and for good reason. We've recently found out about the internal reprogrammable microcode in Intel CPUs; scary stuff, that, but powerful and perhaps inevitable.

Reply to
whit3rd

Don't forget the "Management Engine", a small processor embedded in Intel processors, which runs Minix.

Yes, that too has security bugs.

formatting link

Reply to
Tom Gardner

Memory management hardware does the same thing: it separates memory into enforced code and data spaces. That would help, except that people seldom use it. The C language generally compiles into mixed-up messes.
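
For what it's worth, here is a minimal POSIX-style sketch (Linux assumed) of software actually asking the MMU for that separation - a page that starts out writable but not executable, and has to be explicitly re-marked before anything in it could ever run as code:

    #define _DEFAULT_SOURCE
    #include <sys/mman.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        size_t len = 4096;

        /* Readable and writable, but *not* executable. */
        unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        memset(p, 0x90, len);   /* fill with data; it cannot be executed from here */

        /* Turning this data into runnable code requires an explicit,
           auditable change of the page permissions.                   */
        if (mprotect(p, len, PROT_READ | PROT_EXEC) != 0)
            perror("mprotect");

        munmap(p, len);
        return 0;
    }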

--

John Larkin         Highland Technology, Inc 

lunatic fringe electronics
Reply to
John Larkin

Why is that 'enforced', if one can compose, compile, link, and run a program? Memory management is a SOFTWARE option, not hardware enforced. You can steer right past the limit. Malware can, too.

Reply to
whit3rd

Your products lead a very sheltered, small existence. Let us know how you feel about security once you've rushed more than a hundred thousand WiFi-controlled LVDT testers to market.

Tim

--
Seven Transistor Labs, LLC 
Electrical Engineering Consultation and Design 
Website: https://www.seventransistorlabs.com/
Reply to
Tim Williams

Yup. I've standardized on 2011ish vintage AMD Magny-Cours servers, which don't have that crap, but do have hardware virtualization support (equivalent to Intel VT-x and VT-d), running Qubes OS.

Not perfect, but Highly Recommended.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC / Hobbs ElectroOptics 
Optics, Electro-optics, Photonics, Analog Electronics 
Briarcliff Manor NY 10510 

http://electrooptical.net 
http://hobbs-eo.com
Reply to
Phil Hobbs

JL would probably have bought a small Caribbean island by then. ;)

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC / Hobbs ElectroOptics 
Optics, Electro-optics, Photonics, Analog Electronics 
Briarcliff Manor NY 10510 

http://electrooptical.net 
http://hobbs-eo.com
Reply to
Phil Hobbs

This would be more impressive if John Larkin had ever acknowledged that he'd posted a wrong or silly idea. Egomaniacs are remarkably prone to forget or ignore data that makes them look less impressive than their desired self-image.

--
Bill Sloman, Sydney
Reply to
bill.sloman

The other methods of exiting some loop way down the hierarchy tend to be much worse. The cleanest is probably to set a flag that is tested at each level, e.g. while(i++ < count && bail == 0){}, but that's almost certainly worse.
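
Something like this, say - a hypothetical search over a 2-D table, just to show the shape of it (the names are invented):

    #include <stdbool.h>

    /* Returns true and sets *row/*col if 'wanted' is found.  The 'bail' flag is
       tested in every loop condition, so the inner loop can stop the outer one. */
    static bool find_value(const int t[10][10], int wanted, int *row, int *col)
    {
        bool bail = false;

        for (int i = 0; i < 10 && !bail; i++) {
            for (int j = 0; j < 10 && !bail; j++) {
                if (t[i][j] == wanted) {
                    *row = i;
                    *col = j;
                    bail = true;
                }
            }
        }
        return bail;
    }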

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC / Hobbs ElectroOptics 
Optics, Electro-optics, Photonics, Analog Electronics 
Briarcliff Manor NY 10510 

http://electrooptical.net 
http://hobbs-eo.com
Reply to
Phil Hobbs

Holiday resort, or tax haven? Selling 100,000 LVDT testers does sound unlikely.

I have run across linear variable differential transformers from time to time in my career, but they aren't mass market devices, and something that tested or simulated them would sell in even smaller volumes.

I think the last LVDT I actually worked with was in the Metals Research GaAs single crystal pulling machine

formatting link

It ran under over-pressure to slow down evaporation to tolerable levels, so it wasn't an application that used up a lot of LVDTs.

--
Bill Sloman, Sydney
Reply to
bill.sloman

It is usually unlimited, which means it is relative - your system is faster than something else, rather than simply "fast".

Yes. But again the levels picked are arbitrary - 80% might be considered "efficient" until a new version comes along with 90% efficiency, and the old one is now "inefficient". The term is relative, not absolute.

Terms like "fast", "efficient", "secure" are always relative. Sometimes there is an obvious reference point, perhaps some sort of average of alternative systems. Mostly it is a marketing term, comparing one supplier's products to their chosen alternatives from rivals - and thus as helpful as claiming "best in it's class".

Yes - either in terms of unauthorized acquisition of data (not "stealing" data, unless you subscribe to Hollywood's misuse of such terms), hindering the system from doing its intended job, or otherwise causing trouble.

If by "limit" you mean something you aim towards but can't get past, then I agree. It is not something achievable, except in specific ways for specific cases. (A system with no remote connections is secure against remote attacks, for example.)

It is /not/ possible. But it might be possible to get the risks down to such a low level that they are not something people need to worry about. There is a critical difference between those two statements.

I fully agree that it is possible to get a significant reduction in the risks. I fully agree that this requires a change in attitudes - both amongst suppliers, and users.

Of course you include security from the start of the design. (And I agree that this is something companies often get wrong - you certainly can't make something secure by trying to tag on "security" as a patch late in the development process.)

And you should certainly design it to be secure from any classes or types of attacks that are practical to eliminate - and do so from the start.

But you can't design something to be /perfectly/ secure, from the start or at any other time.

I agree.

Don't misunderstand me here - I too think security /could/ be much better, and /should/ be much better, in most systems and software. But I want to aim for getting as good as practically and economically feasible without disrupting usability too much. I don't want to aim for a mythical impossibility.

Reply to
David Brown
