Do not use arrays. Use vectors.
VLV
andrew queisser wrote:
I also prefer 2). It's obvious from looking at that one line that the entire contents of "arr" is being overwritten. Using the approach in 1), you've got to search around for both the definition of SIZE_ARR and the declaration of arr[] before you know what glancing at the single line of code in 2) has told you.
-- Grant Edwards grante Yow! .. I want FORTY-TWO at TRYNEL FLOATATION SYSTEMS
Your versions are equivalent only when the components of the array are of unity size (sizeof(arr[0]) == 1).
You need to create a macro for the number of elements of the array if the base type is changed to something with a size different from 1.
Pure byte operations (like memcpy()) need the sizeof operator, of course.
Why does 1) fail if the size (SIZE_ARR) is changed? I've not seen such a thing, yet.
-- Tauno Voipio tauno voipio (at) iki fi
Both ways are wrong. This is how it should be done:
template <class T>
class ARRAY {
    T *t;
    u32 n_elements;
public:
    ARRAY(u32);
    ~ARRAY();
    BOOL Put(T &, u32);
    BOOL Get(T &, u32);
};
And no more problems.
Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
Number 2) has the merit of being self-adjusting when the array size changes, which is nice, but you will be defining the size of the array somewhere anyway, likely with a macro, so you will actually end up using both methods.
My personal preference is for 1) because my array size often relates to some physical quantity - e.g. RX_FIFO_SIZE or NUMBER_OF_MUX_INPUTS and I believe using the macro gives a clearer idea of what the code is actually doing at a system level. I define all the macros in a header of the same name as the code file so anyone wondering what the numbers actually are doesn't have far to look.
When dealing with fixed size arrays I see two options for sizing arrays and referring to their size later on. Given a definition of an array like this:
#define SIZE_ARR 100
char arr[SIZE_ARR];
I could:
1) use constants (fails when someone changes the size of arr):
memcpy(arr, src, SIZE_ARR);
2) use sizeof (fails when someone changes arr from an array to a pointer):
memcpy(arr, src, sizeof arr);
Right now I prefer 2) for embedded code since we have very few mallocs in our code.
Opinions?
Thanks, Andrew
Ahh, yes, of course. Shows you for how long I've preferred version 2.
So 1) should be rewritten as:
memcpy(arr, src, SIZE_ARR * sizeof arr[0]);
Andrew
[snip]
What I meant is it fails if someone changes the size of the array, not the macro, as in
char arr[SOME_NEW_SIZE];
and all pieces of code that rely on arr being sized by SIZE_ARR go out of date.
Andrew
A problem with 2) is that if you later modify the code and pass the array to a function where you process it, there will be problems.
Original code:

#define SIZE_ARR 100

void fun(char* src)
{
    char arr[SIZE_ARR];
    memcpy(arr, src, sizeof arr);
    . . .
}
Revision 1.1 of the code:

#define SIZE_ARR 100

void fun(char* src)
{
    char arr[SIZE_ARR];
    fun2(arr, src);
    . . .
}

void fun2(char* arr, char* src)
{
    memcpy(arr, src, sizeof arr);  /* now copies only sizeof(char*) bytes */
}
Well, my problem with EITHER method is the assumptions it makes, the main one being that arr and src are of the same type and do not have size mismatches. If src is SMALLER than arr, you are copying garbage into the end of arr, which may well get processed!
The assumption that both arrays are same types (or can be copied and processed accordingly) is another issue, more important than whether you use macros or sizeof methods.
Too often I have seen code bork due to using the wrong size either way and screwing up (puts hand up as to having done it a couple of times himself). I have seen some abominations that extract bytes from short arrays of data prior to byte swapping or processing, as a way of avoiding types.
The number of times I have seen mistakes of using the size of a char/long array to copy to the opposite type, with resulting processing errors, is too high. Especially when copying extra garbage data that may be zero (or null data to the processing) until an event changes something else. Usually this is one of those 'freak' events that only users can perform to outwit any testing or specification, which then highlights a code issue.
These are more important issues for readability, maintenance and just plain old review of the code.
-- Paul Carpenter | paul@pcserviceselectronics.co.uk PC Services
Not everyone has access to C++ and STL. Thanks for playing.
pete
-- pete@fenelon.com "there's no room for enigmas in built-up areas"
Show me a good PIC16 or 8051 C++ compiler with STL? Thanks. You can go home now.
pete
-- pete@fenelon.com "there's no room for enigmas in built-up areas"
On Tue, 06 Mar 2007 18:18:58 GMT, Vladimir Vassilevsky wrote in comp.arch.embedded:
-- Jack Klein Home: http://JK-Technology.Com
As for the C++ compiler, check
Always welcome. You can go to hell now.
VLV
You are such a stupid bore. Use vectors, I said.
typedef struct {
    void *ptr;
    u8 type_info;
    u32 n_elements;
} VECTOR;
Is that better?
VLV
Jack Klein wrote: > On Tue, 06 Mar 2007 18:18:58 GMT, Vladimir Vassilevsky
And you appear to be an arrogant untrainable top-posting boor. Go ask your parents to instruct you in manners and civility.
--
If you are going to memcpy things all over the place and hope to be resistant to random code changes in random places you are pretty much screwed either way.
If you are going to have multiple dependencies on SIZE_ARR I would suggest gathering them all together so it is obvious there are multiple dependencies and one change may require others. For example define a macro for a specific memcpy of this array. It is also easier to add some (perhaps debug flag dependant) assertions to such macros to try to protect against misuse.
8052 old, yes. Outdated? The X52 core is one of the most popular 8-bitters. That makes two known C++ 8052 compilers. The word is they leave very little room for your code. When you need a tiny, low-power CPU that costs under a buck and needs to turn on a light after you hold the button for 2 seconds, you think about a PIC. And you're glad you can program it in C rather than PIC ASM.
I see where you're coming from and it is a fair point. There is risk in having macros in the same namespace but it also forces you to use more unique names for them which IMHO improves the readability of the code. A good compiler will notice if you're redefining a macro so the risk is in using the wrong one in the code - and is alleviated by having more unique names.
I think a lot of it boils down to the fact I find it easier to have the header open in one window for reference to such things as macros and types (although I usually keep types in a dedicated header) and the source in another to save scrolling to the top every time I want to expand a macro. However, I would agree that your method is probably better.
And you cannot open the file in two windows viewing it at different locations in the file because....
Robert