64 bit OS

Hate is too strong a word but, yes, unless you have access to source code all the way down to the machine itself you can never be sure of your ground in safety critical applications.

For example, I was contracted to work on the development of a CANBus steering mechanism which used one of the object repositories. When I wished to check the efficacy of my proposed changes by drilling down to everything that would be affected by them, I was refused access to the source of the already implemented classes.

Reply to
Gareth's Downstairs Computer

You should check out the SYMBOL machine designed and built in the 1970s (at the University of Iowa, IIRC).

--
-michael - NadaNet 3.1 and AppleCrate II:  http://michaeljmahon.com
Reply to
Michael J. Mahon

Makes sense: code should be built to the spec of the underlying libraries and not to the implementation. If behaviour outside the spec matters, it needs to be brought into the spec, and preferably tested for.
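As a minimal C sketch of that point (the record type and data here are invented for illustration): ISO C does not promise that qsort() is stable, so code that relies on equal keys keeping their input order is coding to one libc's implementation; making the tie-break explicit in the comparator keeps you inside the spec.

/* A sketch of "build to the spec, not the implementation": ISO C does not
 * promise that qsort() is stable, so code relying on equal keys keeping
 * their input order works only by accident of one libc's implementation.
 * Making the tie-break explicit keeps the behaviour inside the spec. */
#include <stdio.h>
#include <stdlib.h>

struct rec { int group; int seq; };     /* invented record type */

static int by_group_then_seq(const void *a, const void *b)
{
    const struct rec *ra = a, *rb = b;
    if (ra->group != rb->group)
        return (ra->group > rb->group) - (ra->group < rb->group);
    return (ra->seq > rb->seq) - (ra->seq < rb->seq);  /* explicit tie-break */
}

int main(void)
{
    struct rec r[] = { {2, 0}, {1, 1}, {2, 2}, {1, 3} };
    size_t n = sizeof r / sizeof r[0];

    qsort(r, n, sizeof r[0], by_group_then_seq);   /* ordering guaranteed */
    for (size_t i = 0; i < n; i++)
        printf("group=%d seq=%d\n", r[i].group, r[i].seq);
    return 0;
}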

--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:\>WIN                                     | A better way to focus the sun 
The computer obeys and wins.                |    licences available see 
You lose and Bill collects.                 |    http://www.sohara.org/
Reply to
Ahem A Rivet's Shot

It is lack of knowledge of the implementation that hides bugs.

Reply to
Gareth's Downstairs Computer

No, it's lack of testing.

There will always be a limit to how far you can drill down through the software stack, firmware, microcode, and hardware implementation, and an even bigger limit on how much of that you can really understand, both by itself and as a complete system. The only way to know whether the entire system behaves the way you want is to test it thoroughly.

There are completely open source projects that have been around for 20+ years, had thousands of eyes on them, and were thought to be of a pretty good quality by those who understood the implementation inside out. Then new fuzzing test tools came along, throwing all sorts of unexpected inputs at the code, and a huge number of bugs were exposed.
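A toy C sketch of the idea behind fuzzing; parse_field() is a hypothetical stand-in for whatever code is under test, and real fuzzers such as AFL or libFuzzer are far more sophisticated about choosing inputs:

/* A toy fuzzing loop: hammer a function with random inputs and check its
 * invariants.  parse_field() is a hypothetical stand-in for the code under
 * test; real fuzzers (AFL, libFuzzer, ...) are far smarter about inputs. */
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Hypothetical function under test: copy input up to the first ';'. */
static size_t parse_field(const char *in, size_t inlen, char *out, size_t outlen)
{
    size_t n = 0;
    while (n < inlen && in[n] != ';' && n + 1 < outlen) {
        out[n] = in[n];
        n++;
    }
    out[n] = '\0';
    return n;
}

int main(void)
{
    char in[64], out[32];
    srand((unsigned)time(NULL));

    for (long iter = 0; iter < 1000000; iter++) {
        size_t inlen = (size_t)(rand() % (int)sizeof in);
        for (size_t i = 0; i < inlen; i++)
            in[i] = (char)(rand() % 256);          /* arbitrary bytes */

        size_t n = parse_field(in, inlen, out, sizeof out);

        /* Invariants: result fits its buffer and is NUL-terminated. */
        assert(n < sizeof out);
        assert(out[n] == '\0');
    }
    puts("no invariant violations found - which proves nothing!");
    return 0;
}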

---druck

Reply to
druck

One way to create subtle and nasty bugs is to write code that depends on the implementation rather than the spec.

Give this man a cigar.

Yep - and when you do, the tests will tell you exactly what behaviour you can count on, while the spec should tell you exactly what behaviour you should be able to count on. In an ideal world these two are the same ... never seen it yet, but I've seen it come close.
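A classic C illustration of the difference, purely as a sketch: on many systems a fresh process happens to get zero-filled pages back from malloc(), so code that skips initialisation can pass every test you run, yet the standard leaves malloc()'ed memory indeterminate; only calloc() (or an explicit initialisation) gives behaviour you can count on.

/* Sketch of implementation-dependence: on many systems a fresh process gets
 * zero-filled pages back from malloc(), so skipping initialisation can pass
 * every test, yet the C standard leaves malloc()'ed memory indeterminate. */
#include <stdio.h>
#include <stdlib.h>

#define N 16

int main(void)
{
    int *risky = malloc(N * sizeof *risky);   /* contents indeterminate */
    int *safe  = calloc(N, sizeof *safe);     /* zero-fill guaranteed by spec */

    if (!risky || !safe) {
        free(risky);
        free(safe);
        return 1;
    }

    /* Reading risky[i] before initialising it would itself be undefined
     * behaviour - exactly the sort of bug that "works" until the
     * implementation underneath changes. */
    for (int i = 0; i < N; i++)
        printf("%d\n", safe[i]);              /* guaranteed: all zeros */

    free(risky);
    free(safe);
    return 0;
}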

--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:\>WIN                                     | A better way to focus the sun 
The computer obeys and wins.                |    licences available see 
You lose and Bill collects.                 |    http://www.sohara.org/
Reply to
Ahem A Rivet's Shot

It happens anyway, even when programmers don't look at the implementation.

--
https://www.greenend.org.uk/rjk/
Reply to
Richard Kettlewell

All code depends upon the implementation.

For example, string handling in C is not part of the language spec.

Reply to
Gareth's Downstairs Computer

... which is where I came in, being denied access to implemented classes.

Reply to
Gareth's Downstairs Computer

Wrong! For example, some years ago I was involved in creating a toolkit for people building products in the company (online scientific journals). There came a time when I found an efficiency problem in the lowest level of the toolkit (by then about four years old, in production use, with a lot of higher-level toolkit and product sitting on top of that bottom layer). It was subtle and fiddly - fixing it properly required rewriting the whole of the lowest level.

In many places that would be deemed too risky - but we had good unit tests and a strong philosophy of documenting interfaces and working to interface documentation. So when I rewrote the entire bottom layer, keeping just the API, and got it to pass the (untouched) unit tests, we released it (staged of course - integration tests, acceptance tests ...) - *nothing* went wrong anywhere.

It is part of the library spec - there's more than one spec. Every layer you depend on should have a spec; if it doesn't, don't depend on it.

--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:\>WIN                                     | A better way to focus the sun 
The computer obeys and wins.                |    licences available see 
You lose and Bill collects.                 |    http://www.sohara.org/
Reply to
Ahem A Rivet's Shot

However, string handling in C is, and always has been, part of the standard library. There is a specification (manpage) for every string handling function and there is (or should be) a set of tests that confirm that the code meets the specification. Hence you should write code that uses conformant calls to these functions.

The same applies for every function in the Standard Library just as it does for 'bigger' languages than C and their support libraries. If you write a conformant call that doesn't work as advertised then you've found a bug and should report it.
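For instance, here is a conformant use of one standard function, written to what the spec actually promises: snprintf() is specified to NUL-terminate (for a non-zero buffer size) and to return the length it would have written, so truncation and errors are both detectable exactly as the manpage describes. The buffer size and strings below are arbitrary.

/* Conformant use of one standard function: snprintf() is specified to
 * NUL-terminate (for a non-zero buffer size) and to return the length it
 * would have written, so truncation and errors are detectable exactly as
 * the manpage describes.  Buffer size and strings are arbitrary. */
#include <stdio.h>

int main(void)
{
    char buf[16];
    const char *user = "a-rather-long-user-name";

    int needed = snprintf(buf, sizeof buf, "hello %s", user);
    if (needed < 0) {
        fprintf(stderr, "encoding error\n");
        return 1;
    }
    if ((size_t)needed >= sizeof buf)
        fprintf(stderr, "warning: output truncated (needed %d bytes)\n",
                needed + 1);

    printf("buf = \"%s\"\n", buf);
    return 0;
}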

--
Martin    | martin at 
Gregorie  | gregorie dot org
Reply to
Martin Gregorie

On a sunny day (Sun, 29 Apr 2018 15:46:34 +0000 (UTC)) it happened Martin Gregorie wrote in :

Indeed, /var/www/pub/libc.info.txt (in other directories and forms on other systems) always was, and is, my C programming reference. Without it you are stuck when programming on Linux. It describes every function in the library and often has some examples of how to use them. It is a MUST read, a must-have at hand. It is only 69804 lines in my editor. :-)

I usually use the editor search feature (I use 'joe' as editor) to find info on a function.

Yes, 'man function_name' also works, but libc.info is THE thing to have (and read, of course).

Reply to
Jan Panteltje

Agreed, though my go-to source for any set of related functions that I'm not familiar with is

"UNIX Systems Programming for SVR4" by David A Curry (pub O'Reilly)

It's still very relevant despite its age (my copy is dated July 1996) because the standard library has scarcely changed since then: so far I haven't found anything in it that didn't work exactly as described, whether I was using a UNIX (NCR and DEC) or Linux.

It's a better source than manpages if you're starting to use an unfamiliar set of library functions because it does a good job of explaining how they work together and interact - something that's often missing from manpages - and has useful example code as well as explanations in a similar style to the K&R book.

--
Martin    | martin at 
Gregorie  | gregorie dot org
Reply to
Martin Gregorie

On a sunny day (Sun, 29 Apr 2018 16:42:48 +0000 (UTC)) it happened Martin Gregorie wrote in :

libc.info has many examples; the big advantage of having it on the system is that you can use the editor's search function. It does not take up as much space as a book either.

The other thing one may need is all the RFC files that are around, describing the various standards - for example RFC 977 for Usenet, which I used to write this newsreader. Google is a good help too.

Reply to
Jan Panteltje

I've grabbed it from the GNU site. It looks useful, so thanks for the pointer, but a quick look at using poll() for asynchronous I/O shows that it isn't a replacement for "UNIX Systems Programming": on this topic, at least, the book is much clearer. A closer look shows me that there's quite a lot of overlap at the detail level, but the book offers a better way into using some of the more complex sets of related functions.
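For anyone meeting poll() cold, a minimal sketch of the call as its manpage describes it, waiting up to five seconds for data on stdin (the timeout and buffer size are arbitrary choices for the example):

/* Minimal poll() sketch per the poll(2) manpage: wait up to five seconds
 * for data on stdin, then report what happened.  Timeout and buffer size
 * are arbitrary choices for the example. */
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    struct pollfd pfd = { .fd = STDIN_FILENO, .events = POLLIN };

    int ready = poll(&pfd, 1, 5000);          /* timeout in milliseconds */
    if (ready < 0) {
        perror("poll");
        return 1;
    }
    if (ready == 0) {
        puts("timed out, nothing to read");
    } else if (pfd.revents & POLLIN) {
        char buf[256];
        ssize_t n = read(pfd.fd, buf, sizeof buf);
        printf("read %zd bytes\n", n);
    }
    return 0;
}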

--
Martin    | martin at 
Gregorie  | gregorie dot org
Reply to
Martin Gregorie

As Dijkstra said, "Testing can prove the presence of bugs, but never their absence."

As an example, consider testing a 32 x 32 bit multiply. The hardware will be long dead (along with the tester!) before the testing is complete.
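The back-of-the-envelope arithmetic, as a trivial C program; the 10^9 tests-per-second figure is an assumption, purely to show the scale:

/* Back-of-the-envelope cost of exhaustively testing a 32 x 32 bit
 * multiplier: 2^64 input pairs at an assumed 10^9 tests per second. */
#include <stdio.h>

int main(void)
{
    double cases      = 18446744073709551616.0;   /* 2^64 input pairs */
    double per_second = 1e9;                      /* assumed test rate */
    double seconds    = cases / per_second;
    double years      = seconds / (365.25 * 24 * 3600);

    printf("%.3g seconds, roughly %.0f years on a single tester\n",
           seconds, years);
    return 0;
}

At that assumed rate a single tester needs getting on for 600 years, before even asking how each result would be checked.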

Testing has the same limits as programming, and is subject to similar bugs, both logical and clerical. Its advantage is that it is a kind of orthogonal "implementation" of the spec, and so less likely to contain similar bugs. And about that spec...

Good rule: "Don't get cocky!"

--
-michael - NadaNet 3.1 and AppleCrate II:  http://michaeljmahon.com
Reply to
Michael J. Mahon

On a sunny day (Sun, 29 Apr 2018 17:47:44 +0000 (UTC)) it happened Martin Gregorie wrote in :

OK, I cannot be the judge of that; I do not have that book. I am very happy I found it way back then (1998 or so), and for sure without it I could not have written all the code I did. I do remember learning all the networking stuff from it. Polling, yes, and select() :-)

Reply to
Jan Panteltje

... and should be ready to write a work-around in case the supplier of the library tells you to get stuffed.

--
/~\  cgibbs@kltpzyxm.invalid (Charlie Gibbs) 
\ /  I'm really at ac.dekanfrus if you read it the right way. 
 X   Top-posted messages will probably be ignored.  See RFC1855. 
/ \  Fight low-contrast text in web pages!  http://contrastrebellion.com
Reply to
Charlie Gibbs

2^64 possible inputs. Google did 2^63 iterations of the SHA-1 compression function recently. Your multiplier is testable (in that sense) if you're prepared to throw some money at it.

A 64x64 multiplier would be another matter...

--
https://www.greenend.org.uk/rjk/
Reply to
Richard Kettlewell

Two points:

- the test harness and list of tests *must* be written without reference to the code being tested and, ideally, *should* not be written by the author of the code to be tested, though in many cases the latter is not going to happen. Writing tests from the spec is vital, especially as doing it can smoke out errors and omissions in the spec.

- the test harness and tests should be written to report only deviations from a set of expected results and *should* have the same lifetime as the code they test - IOW each time the code is amended it should be regression tested by rerunning the tests against it and fixing any unexpected deviations from expected results, followed by adding any new tests that the changes require and/or modifying the expected results.

All this is easier to do than it might appear:

- it's quite easy to write a test harness that plays a set of scripted tests through a custom module that interfaces the harness to the code being tested. The custom interface modules are little more than cut'n'paste exercises once you've written the first one.

- since the tests are scripted it's very easy to add more tests for edge cases and to test normal operation with inductive methods.

- generating and modifying expected results is easy too, if the test harness lists test scripts as they are executed, the scripts contain comments about expected results, and the output contains actual results immediately after the scripted actions that cause them to be output. Under these conditions the expected results are merely the captured output from a clean run of a test script.

- determining whether a test was successful or not can be done by using 'diff' to compare this run's output with the expected results: if any differences are found, the test failed.

I've been using this test method for the contents of both C and Java code libraries for quite a long time now: it works well for both languages.
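A stripped-down C sketch of that diff-against-expected-output approach; run_scripted_test() and the file names are purely illustrative stand-ins for the custom interface module and scripts described above:

/* Stripped-down sketch of a golden-file harness: run one scripted test,
 * capture its output, and let diff(1) decide whether it matches the
 * expected results captured from an earlier clean run.
 * run_scripted_test() and the file names are purely illustrative. */
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical hook: plays one test script through the code under test,
 * echoing each scripted action followed by its actual result. */
static void run_scripted_test(const char *script, FILE *out)
{
    fprintf(out, "# script: %s\n", script);
    fprintf(out, "add 2 3\n");
    fprintf(out, "result: 5\n");              /* stand-in for real output */
}

int main(void)
{
    const char *script   = "tests/add.script";
    const char *actual   = "tests/add.actual";
    const char *expected = "tests/add.expected";
    char cmd[512];

    FILE *out = fopen(actual, "w");
    if (!out) {
        perror(actual);
        return 2;
    }
    run_scripted_test(script, out);
    fclose(out);

    /* The test passes only if the output is identical to the golden file. */
    snprintf(cmd, sizeof cmd, "diff -u %s %s", expected, actual);
    if (system(cmd) == 0) {
        puts("PASS");
        return 0;
    }
    puts("FAIL: output differs from expected results");
    return 1;
}

The expected-results file is just the captured output of an earlier clean run, which is what makes generating and updating expected results cheap.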

--
Martin    | martin at 
Gregorie  | gregorie dot org
Reply to
Martin Gregorie
