Dimension of a matrix

Ah ha! I always had my suspicions about you. Those videos you published are just a con to make us /think/ you are human!

Reply to
David Brown

Excellent.

How do you suggest we clone your special powers a few million times? :)

Who was it who said something to the effect that if the program doesn't have to give correct results, it can be made arbitrarily fast, arbitrarily small, and written arbitrarily soon? ISTR Wirth or Dijkstra.

Reply to
Tom Gardner

It helps if you're writing embedded code, which tends not to deal with much variable-sized data or complicated control flow. Otherwise, how can you really tell? Inspection stops helping much once the program is large enough. Do you do any fuzz testing, etc.?

Reply to
Paul Rubin

I think the key point is program design, rather than testing. Testing can help show that you /have/ bugs; it doesn't show that you /don't/ have bugs.

I don't have many bugs involving overrunning arrays either (despite being a mere human). I just make sure I know the size of my arrays when I use them.

Reply to
David Brown

That's what everyone says, yet those bugs keep turning up ;-).

Reply to
Paul Rubin

Try telling that to the XP/TDD brigade who have been taught by external consultants - "but there's a green light after testing" :(

Mentioning the old aphorism that "you can't test quality into a product" usually meets blank incomprehension, and only rarely a glimmer of enlightenment.

Reply to
Tom Gardner

This _is_ an embedded group.

When I'm writing desktop code I'm more prone to use 'at', or otherwise actively test for overrunning boundaries.

--

Tim Wescott 
Wescott Design Services 
http://www.wescottdesign.com 

I'm looking for work -- see my website!
Reply to
Tim Wescott

But design is about constraints, and testing can show that constraints are met. That has a serious Pareto (90/10) effect on development outcomes: the last 10 percent takes the other 90 percent of the time allotted.

Me too.

Apparently, that is too hard and we'll all suffer deadly incursions.

--
Les Cargill
Reply to
Les Cargill

That's not how this works. Tests that catch bugs early save money.

The other problem is that people take as a corollary "so don't bother testing."

Zero defects is unattainable. 10^-x defects isn't.

--
Les Cargill
Reply to
Les Cargill

There are some kinds of defects that can be avoided by good design. Some kinds can be spotted by static error checking at compile time. And some kinds can be best spotted using tests.

Array overruns are best avoided by good programming practice - you know exactly what size your array is when you use it. You don't make assumptions - make sure you /know/. And they are usually very hard to find by tests - the effect of an array overrun is typically "something odd is happening" because you have written to a different memory object unexpectedly.

You cannot sensibly use run-time checks of array sizes, except occasionally during development. You can only check the array bounds on access if you know the bounds - and if you know the bounds at the point of access, the code making the access already knows them and should never attempt an out-of-range access in the first place. At best, you might spot typographical errors - using "sizeOfInBuffer" when you meant "sizeOfOutBuffer".
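
Something along these lines, as a rough C sketch - the function and buffer names are purely illustrative, and the assert() is exactly the kind of development-only check meant here (it vanishes when NDEBUG is defined for the release build):

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define IN_BUFFER_SIZE  64u
#define OUT_BUFFER_SIZE 32u

static uint8_t inBuffer[IN_BUFFER_SIZE];
static uint8_t outBuffer[OUT_BUFFER_SIZE];

/* Copy 'count' bytes into 'dest', which holds 'destSize' bytes.  The
 * assert only exists in development builds, so it documents the
 * constraint rather than enforcing it in production. */
static void copyTo(uint8_t *dest, size_t destSize,
                   const uint8_t *src, size_t count)
{
    assert(count <= destSize);
    for (size_t i = 0; i < count; i++) {
        dest[i] = src[i];
    }
}

int main(void)
{
    /* Using IN_BUFFER_SIZE where OUT_BUFFER_SIZE was meant - the
     * "sizeOfInBuffer vs. sizeOfOutBuffer" typo class - is about all
     * such a check can catch; the assert fires in a development build. */
    copyTo(outBuffer, OUT_BUFFER_SIZE, inBuffer, IN_BUFFER_SIZE);
    return 0;
}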

Reply to
David Brown

The one guy I know in the "TDD brigade" (James Grenning) would tell you that if someone came away with that conclusion then either they weren't listening or the consultant wasn't teaching the whole TDD mantra.

You can't test quality into a product overall, because the number of tests you need to run becomes functionally infinite.

But if your individual functional units are simple enough, you can cover all the bases in your unit testing. It's still a matter of external analysis to make sure that (a) the specifications for the units guarantee that the whole will work when they're roped together, and (b) the unit tests are sufficient.

I do know this about TDD: on those parts of the code where I can and do implement it, I get things running sooner and with less fuss than on those parts where I start by saying "oh, I don't need to do this" or "oh, this will be too hard to do 'cuz it's embedded".

--
Tim Wescott 
Control systems, embedded software and circuit design 
I'm looking for work!  See my website if you're interested 
http://www.wescottdesign.com
Reply to
Tim Wescott

If you have to do those checks manually, then some developers might not include them or may remove them when building the production release.

Leaving those checks (either manual or compiler-generated) in the production build may stop your code from turning into a CVE if an attacker can influence your program logic, via some external mechanism, in a way you never thought to test for during development.

IOW, I do believe that run-time checks have a vital place, even in production use, especially in today's heavily interconnected world.
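
As a sketch of the idea (RUNTIME_CHECK and fatal_error are invented names, not anything standard), a check like this stays in the release build, unlike assert():

#include <stddef.h>
#include <stdint.h>

/* Hypothetical fault handler - on a real target this might log, reset,
 * or drop the offending connection rather than carry on with bad state. */
static void fatal_error(const char *what)
{
    (void)what;
    for (;;) { /* halt deliberately */ }
}

/* Unlike assert(), this is NOT compiled out when NDEBUG is defined. */
#define RUNTIME_CHECK(cond, what) \
    do { if (!(cond)) { fatal_error(what); } } while (0)

#define RX_BUFFER_SIZE 128u
static uint8_t rxBuffer[RX_BUFFER_SIZE];

/* 'len' comes from outside (a packet header, say), so it is validated
 * even in the production build. */
void store_packet(const uint8_t *payload, size_t len)
{
    RUNTIME_CHECK(len <= RX_BUFFER_SIZE, "rx packet too long");
    for (size_t i = 0; i < len; i++) {
        rxBuffer[i] = payload[i];
    }
}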

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP 
Microsoft: Bringing you 1980s technology to a 21st century world
Reply to
Simon Clubley

And that is really all this is.

Static checking won't find much amongst people who have good self-checking habits. I've seen two cases where ancient code bases were cleaned up using static checkers, and it was mostly noise.

Where static analysis shines is "before every checkin." But it all adds up.

Now how hard was that?

You effectively cannot reliably test for them. They must be prevented by other practices.

Yep.

So I often start with "all buffers are the same size", and then reduce them as needed, using a typedef enum to ... enumerate all the possible sizes.

In that case, you change one at a time, following it through its entire lifetime. If you can't follow it through its entire lifetime, then you have a much bigger problem.

And to be totally honest, I'll at times "write an MMU" - have an array of struct of the addresses and lengths of all the buffers, then add extent checks that can be turned off.
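
Roughly like this, as a sketch (the names and sizes are illustrative, and the pointer comparison is looser than strict ISO C allows, which rarely matters on a single-address-space embedded target):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* The enumerated buffer sizes mentioned above (values picked at random). */
typedef enum { BUF_SMALL = 32, BUF_MEDIUM = 128, BUF_LARGE = 512 } BufSize;

/* The "MMU": a table of every buffer's base address and length. */
typedef struct {
    uint8_t *base;
    size_t   length;
} BufferExtent;

static uint8_t rxBuf[BUF_MEDIUM];
static uint8_t txBuf[BUF_SMALL];

static const BufferExtent extents[] = {
    { rxBuf, sizeof rxBuf },
    { txBuf, sizeof txBuf },
};

/* Build with -DEXTENT_CHECKS=0 to turn the checks off once trusted. */
#ifndef EXTENT_CHECKS
#define EXTENT_CHECKS 1
#endif

/* Returns true if [ptr, ptr + len) lies wholly inside one known buffer. */
bool extent_ok(const uint8_t *ptr, size_t len)
{
#if EXTENT_CHECKS
    for (size_t i = 0; i < sizeof extents / sizeof extents[0]; i++) {
        const BufferExtent *e = &extents[i];
        if (ptr >= e->base && (size_t)(ptr - e->base) + len <= e->length) {
            return true;
        }
    }
    return false;
#else
    (void)ptr; (void)len;
    return true;
#endif
}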

Most people who have buffer overrun problems have opaque void*-style buffers being flung around. Tsk.

--
Les Cargill
Reply to
Les Cargill

That's another annoyance of mine. :-)

If I had my way, integers would be unsigned by default in all of today's programming languages and you would have to ask for a signed integer if you wanted one. As it is, all of my variables are declared as unsigned, regardless of language (assuming the language supports it), unless I need a signed one.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP 
Microsoft: Bringing you 1980s technology to a 21st century world
Reply to
Simon Clubley

The rules may be opaque - one toolchain had been rebuilt with the default char signedness set to unsigned char. Maybe it's just me, but I have to check. It doesn't take too long.

I have seen so many toolchains, some of which have bizarre extensions...

That unfortunately does not mean they do not exist.

--
Les Cargill
Reply to
Les Cargill

What happens if you subtract 3 from 2? Are these unsigneds supposed to represent natural numbers, so you throw an exception if you get a negative result? Or are you saying all languages should use two's-complement wraparound arithmetic, including the ones that have exact integers now?

Reply to
Paul Rubin

Some toolchains have plain char as signed, others as unsigned. The target ABI might specify it, or it might not. On the same platform, different compiler versions might swap the default. Compiler flags can change the default.

But it should never matter - if your code is affected by the signedness of plain "char", your code is /bad/. If you need a signed char, use "signed char" (or, more usually, int8_t). If you need an unsigned char, use "unsigned char" (or uint8_t).

For most of my targets and tools, I don't even know whether a plain char is signed or unsigned - and that is the way it should be.
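
For illustration only (the names are invented), this is the kind of code that never needs to care what the default is:

#include <stdint.h>
#include <string.h>

/* Text stays as plain char - its signedness never matters for text. */
static const char greeting[] = "hello";

/* Raw octets and small signed numbers get an explicit type, so the code
 * means the same thing whatever the toolchain's default char signedness. */
static uint8_t rxByte;        /* always unsigned */
static int8_t  temperatureC;  /* always signed   */

void store_rx(uint8_t b)
{
    rxByte = b;
}

size_t greeting_length(void)
{
    return strlen(greeting);
}

int temperature_is_negative(void)
{
    /* With plain char this test could silently change meaning between
     * toolchains; with int8_t it cannot. */
    return temperatureC < 0;
}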

This is not a matter of "opaque rules", "strange bugs", "extensions", or "compilers that don't follow the standards". The C standards are perfectly clear on this matter, and the rules are simple. Some people misunderstand the rules, and think that plain char is always signed - but it is /your/ job, as a professional programmer, to learn the rules correctly.

Extensions are fine. If you want to use a toolchain's extensions, you read the docs for the toolchain and you use them.

But you are suggesting that some compilers - a relatively modern version of MS VC, no less - arbitrarily break a simple and fundamental part of standard C that has worked the same way since the first K&R book, and where breaking it gives no conceivable advantage. I simply do not believe this happened - I think your test code has a bug, or you misinterpreted the results, or misremembered the problem.

There are compilers with bugs - that is certainly true. There are a few compilers sold commercially that have serious bugs that you have a realistic chance of hitting.

But you are suggesting a design decision here to work actively contrary to the C standards. That is much rarer - not impossible, but much rarer. I can think of four cases where compiler manufacturers have made such a decision, each time because they thought the change gave significant benefits to developers and made their work easier. In every case they were wrong, because breaking the standards makes it harder to write correct code. But the manufacturers had good reasoning for their decisions.

Making sizeof an array parameter act differently from the standards would be a stupid idea. People rarely use array parameters (rather than pointers) anyway, and even more rarely apply "sizeof" to them. It would break the standards for no reason, and greatly complicate the use of array parameters. It would not happen except perhaps on a "toy" compiler made by complete amateurs (and while such tools do exist, this is /not/ going to be the first symptom you find).
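
For reference, the standard behaviour being described: an array parameter is adjusted to a pointer, so sizeof on it gives the pointer size, while on the array object itself it gives the real size:

#include <stdio.h>

/* Despite the [16], 'buf' is adjusted to 'char *' by the standard,
 * so sizeof buf here is the size of a pointer, not 16. */
static void f(char buf[16])
{
    printf("inside f: sizeof buf = %zu\n", sizeof buf);   /* 4 or 8 */
}

int main(void)
{
    char buf[16];
    printf("in main : sizeof buf = %zu\n", sizeof buf);   /* 16 */
    f(buf);
    return 0;
}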

Reply to
David Brown

Presumably he uses signed integers in situations where it makes sense to subtract 3 from 2, and in other situations he simply does not subtract 3 from 2. Making your "noOfOranges" variable signed does not let you subtract 3 oranges from 2 oranges.
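
In C, at least, the concrete answer to the "3 from 2" question is well-defined wraparound rather than an exception - a tiny illustration, following the orange example:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t oranges = 2u;

    oranges = oranges - 3u;   /* unsigned arithmetic wraps modulo 2^32 */

    /* Prints 4294967295 - no trap, no exception, just wraparound. */
    printf("%lu\n", (unsigned long)oranges);
    return 0;
}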

Reply to
David Brown

I'd be concerned about code reuse. *I* will know about the array bounds and hopefully document them well and remember them when I pick up the code in six months. I'm not so sure that the next guy will do the right thing, particularly after I've left a project...

--
Best wishes, 
--Phil 
pomartel At Comcast(ignore_this) dot net
Reply to
Phil Martel

If you think your variable might go negative, you use a signed integer; otherwise you use an unsigned integer.

Exactly. You pick the data type which most closely models the problem you are trying to solve. Picking a signed integer when you know your data is always going to be positive is not a good match if unsigned integers are available in your chosen language.

In my code (both work and hobbyist) I use unsigned values vastly more frequently than signed values. Another reason is that signed integers have caused security issues in the past.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP 
Microsoft: Bringing you 1980s technology to a 21st century world
Reply to
Simon Clubley
