Octets with non-8 bit bytes...

Actually, I was in error, according to this link:

formatting link

From this it's clear that the usage predates the US silver dollar.

--
Bill
Posted with XanaNews Version 1.16.3.1
Reply to
William Meyer

I think it was a Spanish "dollar", though. Pieces of eight.

--
Al Balmer
Balmer Consulting
Reply to
Alan Balmer
[...]

Neither. It's 19 bits. By definition of "byte" in the C standard, one 'byte' is whatever the size of type char is. And, perverse as it may be, CHAR_BIT==19 is allowed. Even if the native addressing unit of the processor is, say, 11 bits ;-). Now, don't get me wrong, no compiler writer in a remotely sane state of mind would actually do that, but it's their customers and their own mental health that dictates that, not the definition of C.

Read the fine print on pointer arithmetic in the C standard with such an implementation in mind, and many of the seemingly crazy clauses and restrictions will suddenly begin to make sense...

I guess this would be 32 bits since 24 bits would not be

You're not getting my point. Which is that the C standard only demands that a byte is directly addressable, but not that everything directly addressable by the hardware must be a byte (by C's definition of the term) of its own. Otherwise, on 8051s a byte would have to be 1 bit wide, because they can address single bits.
--
Hans-Bernhard Broeker (broeker@physik.rwth-aachen.de)
Even if all the snow were burnt, ashes would remain.
Reply to
Hans-Bernhard Broeker

Not really. Such a system, if someone were actually enough of a B.C.W.f.H. (a Bastard Compiler Writer from Hell) to create one, would simply discard 13 bits of each 32-bit word. For the sheer fun of it, they might opt to use bits 0..4, 8..12, 16..20 and 24..27. And all sizeof()s of C standard types other than the 'char' group might be different prime numbers, each with its own selection of bits used.

Relax. The chances you'll actually ever encounter such a beast are, luckily for all of us, quite negligible ;-)

--
Hans-Bernhard Broeker (broeker@physik.rwth-aachen.de)
Even if all the snow were burnt, ashes would remain.
Reply to
Hans-Bernhard Broeker

Does it matter if they are made of wood?

12 bits?

What is it called on an 18-bit machine?

20 bits?

21 bits?

24 bits?

Best Wishes

Reply to
Jeff Fox

I can't imagine Octet meaning anything but 8 bits. However, over time, Byte has been used in many ways. There were a lot of 7 bit bytes at one time - a chunk big enough to hold an ASCII character in machines not measured in powers of 2. Word is almost certainly not the right term here. That usually implies the natural length of data elements on the machine - 16 bit words on 16 bit machines, 36 bit words on 36 bit machines, etc.

Gee ain't terminology a pain :-)

Regards, Steve

Reply to
Steve Underwood

In fact CHAR_BIT==21 would even make a lot of sense.

The Unicode Scalar Value (USV) range is 0x0..0x10FFFF, thus fitting nicely into 21 bits.

Paul

Reply to
Paul Keinanen

[ Stuff snipped]

One portable way I have seen to force C to use a specific size is the following:

typedef struct {unsigned data:8;} BYTE;

The only problem is that if you define a variable as

BYTE foo;

you have to use it with following syntax

foo.data=20;

At least the compiler will have to do the masking etc. if necessary. Most reasonable compilers should generate the same code for a normal unsigned char variable and this type if it happens to be the same width.

Regards Anton Erasmus

Reply to
Anton Erasmus

You're right, I'm not getting your point. If a byte must be directly addressable and be able to hold a character, then it would have to be either 24 or 32 bits on this hardware since 19 bits is not directly addressable.

--
Rick "rickman" Collins

rick.collins@XYarius.com
Reply to
rickman

14 bits: fortbit

31 bits: month

52 bits: deck

76 bits: trombone

80 bits: PhileasFogg

144 bits: dergrossbit

365 bits: year

366 bits: leapyear
--
Rick "rickman" Collins

rick.collins@XYarius.com
Reply to
rickman

Yes, you're right.

--
Bill
Posted with XanaNews Version 1.16.3.1
Reply to
William Meyer

Getting data to have the same memory layout in different compilers and on different architectures is a totally different ballgame. The method I proposed only handles the size issue. To handle the actual memory layout one would have to use #pragmas or some other non-portable directives. I do agree that this is inherently non-portable.

Regards Anton Erasmus

Reply to
Anton Erasmus

Hi Alex,

here's how I would handle this issue ...

Alex Sanks wrote:

Being aware that complete portability across very different platforms will not be achievable, I'd start with declaring the different environments with #define PLATFORM_1 etc.

Because data types might differ (as you indicate) wherever they're received/transmitted/transformed, use 'typedef' or similar methods to abstract from machine/compiler-specific types wherever possible. Your u8, u32 types are a good start.

Data doesn't move by itself. Instead, you have to move data. How might differ depending on the peripheral (USB controller, ...) and/or on the machine/platform/compiler. Thus, again you might use #define/#ifdef to handle data movements. This means: install generic handler routines to move data, which do all the machine/platform/compiler/periphery-dependent stuff, and which provide a generic interface.
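One way such a per-platform handler might look (the macro names, `move_data`, and `fifo_write` are illustrative assumptions, not from the post):

```c
#include <stddef.h>
#include <string.h>

/* Select exactly one platform at build time, e.g. -DPLATFORM_1. */
#define PLATFORM_1

/* Generic interface: the rest of the program only ever calls this. */
static void move_data(void *dst, const void *src, size_t n)
{
#if defined(PLATFORM_1)
    /* Plain memory-to-memory copy on this platform. */
    memcpy(dst, src, n);
#elif defined(PLATFORM_2)
    /* A different platform might poke a USB controller FIFO here
       (fifo_write is a hypothetical platform routine). */
    fifo_write(dst, src, n);
#else
#error "no platform selected"
#endif
}
```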

Then you can keep the rest of the software (almost ?) portable.

If you want some fairly good examples of how to do such things, you might want to read the source code files of the Linux OS. Especially of interest for your work might be the book "Linux Device Drivers" by Alessandro Rubini, which gives a lot of good hints.

I guess you mean Performance = 'fast execution'. So, you might want to do things at compile time wherever possible. One example of doing a thing at compile time instead of at runtime is this:

    u16 x;
    u8 y;
    x = 1234;
    y = x & 0xff;   /* this is the runtime approach */

    u8 y;
    union {
        u16 x16;
        struct { u8 x8_1; u8 x8_2; } b;   /* a struct is needed here: two bare
                                             u8 members of a union would overlay
                                             the same byte */
    } ux;
    ux.x16 = 1234;
    y = ux.b.x8_1;   /* this is the compile time approach; x8_1 is the
                        low byte on little-endian targets */

Though this approach is highly machine/compiler dependent, it may save you a lot of time, thus giving you a performance gain.

Probably, you'll have to implement such a thing individually for every new environment. Which means, that the scope of such critical things should be as small as possible.

I count only 30 ...?!

Approach depends on where you need your performance and where the conversions are cheapest. One might be to define a 16-bit type which holds two octets:

    typedef union {
        u16 the16wide;
        struct {             /* anonymous struct, so the two 8-bit fields
                                don't overlay each other */
            u16 the8wideH:8;
            u16 the8wideL:8;
        };
    } my16;
    .... my32;

Based on this you could define your structure with

    typedef struct {
        my32 Signature;
        my16 Tag01;
        my16 Tag23;
        ...
    } Cbw;

For convenience, you might define:

    #define Signature0 Signature.the8wide0
    ...
    #define Tag0 Tag01.the8wideH
    #define Tag1 Tag01.the8wideL

Then you can access the data as usual with Cbw.Tag1 or with Cbw.Signature or however you want.

Certainly, that's ugly. Since I did some device drivers which had to be "a little bit" portable, I'd say that you'll not be able to avoid such ugly things. With a little effort, you can write things so that they are readable, with no chance of misunderstanding what is meant. That would be a great thing, and certainly highly appreciated by the people who have to modify your code later.

I'd try to hide all the ugly things in extra files and keep them as small as possible. Try to keep the rest neat, instead.

You're free to like it or dislike it. That's life that you don't always get what you like...

If so, why not just define a typedef u16 BYTE; define the structure as you wrote, and ignore the unused bytes.

However, to avoid having some other function work with corrupted data, or corrupt it, I'd write access functions which are the only code allowed to work on the structure.

Hide the data structure in a .c-file together with the manipulator functions, and export in the .h-file not the structure, but the manipulator functions together with an abstract, generic container type. This could look like:

---------- file.h ------------

    typedef unsigned short u16;
    typedef unsigned long u32;

    typedef struct {
        u32 signature;
        u16 tag;
        ....
    } genericCBW;

    int read_data(genericCBW * returnsDataHere);
    int write_data(genericCBW * needsDataHere);

---------------------------

--------- file.c ------------

    #define PLATFORM

All the ugly stuff you can't avoid: all the conversions necessary, all the real read/write accesses, and the definitions for all the real data storage elements (your real Cbw structures).

    int read_data(genericCBW * returnsDataHere)
    {
        returnsDataHere->signature = functionwhichknowshowtogetSignature();
        ...
    }

    int write_data(genericCBW * needsDataHere)
    {
        functionwhichknowshowtowriteSignature(needsDataHere->signature);
    }

---------------------------

Bernhard

Reply to
Bernhard Holzmayer

Umm... yes it is. The "gotcha" is simply that any larger datatype needs to be at least 38 bits long (if such types exist - they might not).

--
	Sander

+++ Out of cheese error +++
Reply to
Sander Vesik
